Our database solution is very JSON heavy, and as such, our SQL queries are all JSON based (for the most part). This includes extensive use of JSON_ARRAYAGG().
The problem I'm encountering is using a returned array of IDs in a WHERE ... IN clause, which simply doesn't work. From what I can tell it's a simple formatting issue: MySQL wants a ()-style list, while a JSON array uses [].
For example:
SELECT COUNT(si.ID) AS item_count, JSON_ARRAYAGG(si.ID) AS item_array
FROM sourcing_item si;
Returns:
7, [1,2,3,4,5,6,7]
What I need to do is write a complex nested query that allows for selecting record IDs that are IN the JSON_ARRAYAGG result. Like:
SELECT si.item_name
FROM sourcing_item si
WHERE si.ID IN item_array
Of course the above doesn't work because MySQL doesn't recognize [] vs. ().
Is there a viable workaround for this issue? I'm surprised they haven't updated MySQL to allow the WHERE IN clause to work with a JSON array...
The MEMBER OF operator (available since MySQL 8.0.17) does this.
SELECT si.item_name
FROM sourcing_item si
WHERE si.ID MEMBER OF (item_array)
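For example, a complete nested query following the pattern from the question could look like this (a sketch only; it pulls the aggregated array from a derived table named agg, since a SELECT alias can't be referenced directly in the same query's WHERE clause):
SELECT si.item_name
FROM sourcing_item si
JOIN (
    SELECT JSON_ARRAYAGG(ID) AS item_array
    FROM sourcing_item
) agg ON si.ID MEMBER OF (agg.item_array);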
Using a SQLAlchemy class, I'm trying to generate a query that resembles
SELECT
DISTINCT(non_unique_key)
FROM
`tablename`,
UNNEST(tasks_dns) AS dns
WHERE
create_date_utc = TIMESTAMP("2020-12-31T23:59:59")
AND dns LIKE "%whatever%"
Since this is an implicit join using unnest(), I don't have a clue how to construct my statement.
Using a combination of .label() and moving the unnest() call around, I've managed to move the unnest clause to either the SELECT or WHERE clauses, but not in the FROM.
For example,
session.query(Table.non_unique_key).filter(func.unnest(Table.dns) != '').filter(Table.create_date == "2021-04-22")
leaves me with
SELECT `tablename`.`non_unique_key` AS `tablename_non_unique_key`
FROM `tablename`
WHERE unnest(`tablename`.`tasks_dns`) IS NOT NULL AND `tablename`.`create_date_utc` = %(create_date_utc_1)s
So far, using join() has just caused exceptions about not having a column to join on. I understand what that means, but I'm not sure how to get around it, since an unnest is basically an expansion of a nested data type that doesn't have a column to join on... which is probably where my ignorance of how to properly use SQLAlchemy's join() method comes in.
Is this just a SQLAlchemy / BigQuery dialect issue at this point? Or am I just a dunce? I know the dialect library is still in its infancy, but even with Postgres I would have thought this would be a fairly common query pattern.
After some additional digging, I've figured it out:
Model().query().select_from(func.unnest(Model.col1).alias("whatever")).filter()....
I've created a MySQL sproc which returns 3 separate result sets. Downstream, I'm using the npm mysql package to execute the sproc and get a result structured as JSON containing the 3 result sets. I need the ability to filter the returned JSON result sets based on some type of indicator in each result set. For example, if I wanted to get the result set from the JSON response which deals specifically with Suppliers, I could use some type of JS filter similar to this:
var supplierResultSet = mySqlJsonResults.filter(x => x.ResultType === 'SupplierResults');
I think SQL Server provides the ability to include a hard-coded column value in a SQL result set like this:
select
'SupplierResults',
*
from
supplier
However, this approach appears to be invalid in MySQL, because MySQL Workbench tells me the sproc syntax is invalid and won't let me save the changes. Do you know if something like what I'm trying to achieve is possible in MySQL? If not, can you recommend alternative approaches that would help me achieve my ultimate goal of including some type of fixed indicator in each result set, to give downstream code a handle for filtering the JSON response?
If I followed you correctly, you just need to prefix * with the table name or alias:
select 'SupplierResults' hardcoded, s.* from supplier s
As far as I know, this follows the SQL standard: select * is valid only when no other expression is added to the select clause. SQL Server is lax about this, but most other databases follow the standard.
It is also a good idea to assign a name to the column that contains the hardcoded value (I named it hardcoded in the above query).
In MySQL you can simply put the * first:
SELECT *, 'SupplierResults'
FROM supplier
Demo on dbfiddle
To be more specific, in your case, in your query you would need to do this
select
'SupplierResults',
supplier.* -- <-- this
from
supplier
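Putting that together, the whole sproc could follow a pattern like this (a sketch only; the buyer and orders tables are hypothetical placeholders for your other two result sets):
DELIMITER $$
CREATE PROCEDURE get_report_data()
BEGIN
    -- each result set carries its own fixed indicator column
    SELECT 'SupplierResults' AS ResultType, s.* FROM supplier s;
    SELECT 'BuyerResults'    AS ResultType, b.* FROM buyer b;    -- hypothetical table
    SELECT 'OrderResults'    AS ResultType, o.* FROM orders o;   -- hypothetical table
END $$
DELIMITER ;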
Try this
create table a (f1 int);
insert into a values (1);
select 'xxx', f1, a.* from a;
Basically, if there are other fields in the select list, prefix * with the table name or alias.
I've tried to dig through the documentation and can't seem to find anything for what I'm looking for. Are there equivalent functions for the various JSON/B operators (->, ->>, #>, ?, etc) in PostgreSQL?
Edit: To clarify, I would like to know if it is possible to have the following grouped queries return the same result:
SELECT '{"foo": "bar"}'::json->>'foo'; -- 'bar'
SELECT json_get_value('{"foo": "bar"}'::json, 'foo'); -- 'bar'
SELECT '{"foo": "bar"}'::jsonb ? 'foo'; -- t
SELECT jsonb_key_exists('{"foo": "bar"}'::jsonb, 'foo'); -- t
You can use the system catalogs to discover the function equivalent to each operator.
select * from pg_catalog.pg_operator where oprname ='?';
This shows that the function is named "jsonb_exists". Some operators are overloaded and will give more than one function, you have to look at the argument types to distinguish them.
Every operator has a function 'behind' it. That function may or may not be documented in its own right.
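For example, something along these lines (my own sketch against the catalog) lists the function behind each JSON/JSONB operator together with its argument types:
SELECT oprname,
       oprcode::regproc  AS function_name,
       oprleft::regtype  AS left_type,
       oprright::regtype AS right_type
FROM pg_catalog.pg_operator
WHERE oprname IN ('->', '->>', '#>', '#>>', '?')
  AND oprleft IN ('json'::regtype, 'jsonb'::regtype);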
AFAIK there are two functions which are equivalents to #> and #>> operators.
These are:
#> json_extract_path
#>> json_extract_path_text
->> json_extract_path_text -- credits: @a_horse_with_no_name
Discover more in the docs
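For example, the following two queries should return the same value (1):
SELECT '{"a": {"b": 1}}'::json #> '{a,b}';                   -- 1
SELECT json_extract_path('{"a": {"b": 1}}'::json, 'a', 'b'); -- 1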
Other than that, you could expand the JSON into a set of rows and take the values you need with a regular SQL query, using json_each or json_each_text.
The same goes for checking whether a key exists in a JSON value: use json_object_keys and query the row set it produces.
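A rough illustration of that row-set approach, reusing the values from the question:
SELECT value
FROM json_each_text('{"foo": "bar"}'::json)
WHERE key = 'foo';   -- 'bar'

SELECT EXISTS (
    SELECT 1
    FROM json_object_keys('{"foo": "bar"}'::json) AS k
    WHERE k = 'foo'
);                   -- t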
If you need to wrap things up in a different language or an ORM, then what you could do is move the data retrieval logic to a PL/pgSQL function, execute it, and obtain the prepared data from it.
Obviously, you could also build your own functions that implement the behaviour of the aforementioned operators, but the question is: is it really worth it? It will definitely be slower.
I have a table that contains a GEOMETRY data type. SQL Server 2008 ships with a built in function to convert these GEOMETRY data types to GML - GEOMETRY.AsGml(). I believe this function is nothing more than a custom user defined function.
This function works exactly as expected, until I try to use it in a view that is joined to other tables/views. In that case, I get an error message along the lines "Remote function reference 'dbo.PROPERTY.SHAPE.AsGml' is not allowed, and the column name 'dbo' could not be found or is ambiguous."
What I have been doing is creating an initial view that contains all of the joins needed to get the desired fields, leaving the GEOMETRY field in its native format. Then, in a secondary view, I will perform the GML conversion.
The layering of these views has obvious performance implications, and I am wondering why I can't just do the AsGml() in the views with joins?
Using an inline Select statement solved this for me.
This didn't work:
SELECT dbo.H1N1_2009.COUNTY, dbo.states.STATE_NAME, dbo.states.geom.AsGml()
AS GML
FROM dbo.H1N1_2009 INNER JOIN dbo.states ON dbo.H1N1_2009.ID = dbo.states.ID
This works:
SELECT dbo.H1N1_2009.COUNTY, states_1.STATE_NAME,
(SELECT geom.AsGml() AS Expr1 FROM dbo.states WHERE (ID = dbo.H1N1_2009.ID)) AS GML
FROM dbo.H1N1_2009 INNER JOIN dbo.states AS states_1 ON dbo.H1N1_2009.ID = states_1.ID
Hope this helps someone else.
The following doesn't work, but something like this is what I'm looking for.
select *
from Products
where Description like (@SearchedDescription + %)
SSRS uses the @ prefix in front of a parameter to simulate an 'in', and I'm not finding a way to match a string up against a list of strings.
There are a few options on how to use a LIKE operator with a parameter.
OPTION 1
If you add the % to the parameter value, then you can customize how the LIKE filter will be processed. For instance, your query could be:
SELECT name
FROM master.dbo.sysobjects
WHERE name LIKE @ReportParameter1
For the data set to use the LIKE statement properly, you could use a parameter value like sysa%. When I tested a sample report in SSRS 2008 using this code, it returned the following four tables:
sysallocunits
sysaudacts
sysasymkeys
sysaltfiles
OPTION 2
Another way to do this that doesn't require the user to add any '%' symbol is to generate a variable that holds the code and execute the variable.
DECLARE @DynamicSQL NVARCHAR(MAX)
SET @DynamicSQL =
'SELECT name, id, xtype
FROM dbo.sysobjects
WHERE name LIKE ''' + @ReportParameter1 + '%''
'
EXEC (@DynamicSQL)
This will give you finer control over how the LIKE statement is used. If you don't want users to inject any additional operators, you can always add code to strip out non-alphanumeric characters before merging it into the final query.
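If you do go the dynamic SQL route, a somewhat safer variant (my own sketch, not part of the original option) is to keep the parameter parameterized through sp_executesql and only append the wildcard to the value:
DECLARE @Pattern NVARCHAR(128) = @ReportParameter1 + N'%';  -- @ReportParameter1 is supplied by SSRS
DECLARE @DynamicSQL NVARCHAR(MAX) =
    N'SELECT name, id, xtype
      FROM dbo.sysobjects
      WHERE name LIKE @Pattern';
EXEC sp_executesql @DynamicSQL, N'@Pattern NVARCHAR(128)', @Pattern = @Pattern;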
OPTION 3
You can create a stored procedure that controls this functionality. I generally prefer to use stored procedures as data sources for SSRS and never allow dynamically generated SQL, but that's just a preference of mine. This helps with discoverability when performing dependency analysis checks and also allows you to ensure optimal query performance.
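A minimal sketch of that approach (the procedure name is made up for illustration; the table and parameter names follow the question):
CREATE PROCEDURE dbo.SearchProducts
    @SearchedDescription NVARCHAR(200)
AS
BEGIN
    SET NOCOUNT ON;
    -- the wildcard is appended inside the procedure, so the report just passes the raw search term
    SELECT *
    FROM Products
    WHERE Description LIKE @SearchedDescription + '%';
END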
OPTION 4
Create a .NET code assembly that helps dynamically generate the SQL code. I think this is overkill and a poor choice at best, but it could conceivably work.
Have you tried to do:
select * from Products where Description like (@SearchedDescription + '%')
(Putting single quotes around the % sign?)
Dano, which version of SSRS are you using? If it's RS2000, the multi-parameter list is
not officially supported, but there is a workaround....
Put it like this:
select *
from tsStudent
where studentName like @SName + '%'
I know this is super old, but this came up in my search to solve the same problem, and I wound up using a solution not described here. I'm adding a new potential solution to help whomever else might follow.
As written, this solution only works in SQL Server 2016 and later, but can be adapted for older versions by writing a custom string_split UDF, and by using a subquery instead of a CTE.
First, map your @SearchedDescription into your Dataset as a single string using JOIN:
=JOIN(@SearchedDescription, ",")
Then use STRING_SPLIT to map your "A,B,C,D" kind of string into a tabular structure.
;with
SearchTerms as (
select distinct
Value
from
string_split(@SearchedDescription, ',')
)
select distinct
*
from
Products
inner join SearchTerms on
Products.Description like SearchTerms.Value + '%'
If someone adds the same search term multiple times, this would duplicate rows in the result set. Similarly, a single product could match multiple search terms. I've added distinct to both the SearchTerms CTE and the main query to try to suppress this inappropriate row duplication.
If your query is more complex (including results from other joins) then this could become an increasingly big problem. Just be aware of it, it's the main drawback of this method.
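One way to sidestep that duplication entirely (my own sketch, not part of the original answer) is to test the terms with EXISTS instead of joining:
;with
SearchTerms as (
select distinct Value
from string_split(@SearchedDescription, ',')
)
select *
from Products
where exists (
    select 1
    from SearchTerms
    where Products.Description like SearchTerms.Value + '%'
);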