I have a table with the following configuration:
Table: jsontesttable, Column: ArrayCol of type MEDIUMTEXT, sample value: [123,45,67,85,78]
I use the query below to select the list of IDs, excluding some of the selected IDs:
/* query to select the list of IDs from the column and remove unwanted IDs from the selection */
SELECT hm.Id FROM jsontesttable tbl,
JSON_TABLE(ArrayCol, '$[*]' columns (Id int path '$')) AS hm
WHERE hm.Id NOT IN (67,85)
I used the query below to get the JSON array back, but at present it is treated as a string:
/*Convert the IDs back to a JSON int array like [123,45,78] */
SELECT JSON_ARRAY(GROUP_CONCAT(hm.Id SEPARATOR ',')) AS IDs FROM jsontesttable tbl,
JSON_TABLE(ArrayCol, '$[*]' columns (Id int path '$')) AS hm
WHERE hm.Id NOT IN (67,85)
But it is generating rows like this; the quotes at the beginning and end are not wanted:
["123,45,78,88,9,3,53,6"]
["83,97"]
/* Update the ArrayCol column to the new array without the removed values!
   Perform this update only if ArrayCol contains one of these IDs, in this case 67 & 85.
*/
Can I write a direct UPDATE query to modify the column, removing the IDs that need to be removed, like:
[123,45,67,85,78] => [123,45,78] // 2 IDs removed
[67,222,14] => [222,14] // 1 ID removed
[83,85,97] => [83,97] // 1 ID removed
[21,12,17,19] => [21,12,17,19] // no change
Use JSON_ARRAYAGG instead of GROUP_CONCAT; it builds a real JSON array rather than a string. Number the rows so each source array can be re-aggregated separately:
WITH
cte AS ( SELECT ArrayCol, ROW_NUMBER() OVER () rn
FROM jsontesttable )
SELECT rn, JSON_ARRAYAGG(hm.Id) AS IDs
FROM cte
CROSS JOIN JSON_TABLE(ArrayCol, '$[*]' columns (Id int path '$')) AS hm
WHERE hm.Id NOT IN (67,85)
GROUP BY rn
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=ce067ef12d9ed0d0ef1866e0a32d2a40
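As for the direct UPDATE: the same JSON_TABLE + JSON_ARRAYAGG rewrite can be used as a correlated subquery in SET. This is only a sketch of the idea, assuming MySQL 8.0.14 or later (earlier versions do not allow JSON_TABLE to reference columns of the outer query), and note that JSON_ARRAYAGG does not guarantee element order:
UPDATE jsontesttable
SET ArrayCol = COALESCE(
        (SELECT JSON_ARRAYAGG(hm.Id)
         FROM JSON_TABLE(ArrayCol, '$[*]' columns (Id int path '$')) AS hm
         WHERE hm.Id NOT IN (67, 85)),
        '[]') /* keep an empty array if every ID was removed */
WHERE JSON_CONTAINS(ArrayCol, '67') OR JSON_CONTAINS(ArrayCol, '85');
The WHERE clause restricts the update to rows whose array actually contains 67 or 85, matching the requirement above.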
Related
I have a table in MySQL with data in a column in the following format:
[{"type_id":1,"price":50},{"type_id":3,"price":60}]
I need to find the price of an item based on its ID; for example, the price of the item with type_id = 3.
I have tried:
select JSON_EXTRACT(JSONColumn, '$[*].price') as prices,
       JSON_EXTRACT(JSONColumn, '$[*].type_id') as type_ids
from Items
where ItemId = 123
  and json_contains(JSONColumn, '{"type_id" : 3}')
This is not working. Can someone show the correct way of querying this JSON data?
SELECT test.id, jsontable.price
FROM test
CROSS JOIN JSON_TABLE (test.val,
'$[*]' COLUMNS (type_id INT PATH '$.type_id',
price INT PATH '$.price')) jsontable
WHERE jsontable.type_id = 3;
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=baa0a24a4bbf10ba30202c7156720018
Here is the Microsoft document on how to optimize a JSON column without using OPENJSON, using only JSON_VALUE: https://learn.microsoft.com/en-us/sql/relational-databases/json/index-json-data?view=sql-server-ver15
My issue is that I have a JSON column containing an array, and I am trying to grab every key called Test_ID from each element of the array to compare with a joined statement's testId. While it works, it is relatively slow: about 9 seconds for 400 rows. I am trying to speed this up substantially, and it seems the only way to do so is through the indexing mentioned in that article, but I cannot figure out how to do it for an array.
My JSON is similar to this: '{"Property":{"Label":"0"},"Tests":[{"Test_ID":"GUID_HERE","Type":{"Label":" "},"Name":{"Label":" "},"Value":null},{"Test_ID":"GUID_HERE","Type":{"Label":" "},"Name":{"Label":" "},"Value":" "}]}'
Here is my scrubbed query:
SELECT DISTINCT wtRow.W_ID,
'Proc' ProcHeaderName, p.ProcNumber ProcValue,
'Class' ClassHeaderName, p.Class ClassValue
INTO #Procs
FROM [proc] p
LEFT JOIN (SELECT wt.W_ID, wt.TestId
from TestValue wt where wt.IsDeleted = 0) as wtRow on wtRow.W_ID in (SELECT ID FROM #tmp)
LEFT JOIN TableNameHere c on c.IsDeleted = 0 and c.col_ID in (SELECT col_ID FROM #tmp)
WHERE p.IsDeleted = 0 and [dbo].[GetTestIdJson](c.Json, wtRow.TestId) = wtRow.TestId
AND p.ProcNumber + ',' + p.RNumber = JSON_VALUE(c.Json,'$.Property.Label') + ',' + JSON_VALUE(c.Json,'$.Property.Label')
GROUP BY wtRow.W_ID, p.ProcNumber, p.Class
You can use an indexed view built over a numbers table; here is the fiddle script:
create table dbo.a (id int primary key, json nvarchar(max));
insert into dbo.a values(1, '{"Property":{"Label":"0"},"Tests":[{"Test_ID":"GUID_HERE1","Type":{"Label":" "},"Name":{"Label":" "},"Value":null},{"Test_ID":"GUID_HERE2","Type":{"Label":" "},"Name":{"Label":" "},"Value":" "}]}');
insert into dbo.a values(2, '{"Property":{"Label":"0"},"Tests":[{"Test_ID":"GUID_HERE21","Type":{"Label":" "},"Name":{"Label":" "},"Value":null},{"Test_ID":"GUID_HERE22","Type":{"Label":" "},"Name":{"Label":" "},"Value":" "}]}');
GO
--numbers table
create table dbo.n(n int primary key);
insert into dbo.n values(0),(1),(2),(3),(4),(5),(6),(7),(8),(9),(10); --assume max 11 elements in Tests[]
GO
--the view shreds each Tests[n] array element into its own row, extracting its Test_ID
create view dbo.v
with schemabinding
as
select a.id, n.n, json_value(a.json, concat('$.Tests[', n.n,'].Test_ID')) as Test_Id
from dbo.a as a
cross join dbo.n as n
where json_value(a.json, concat('$.Tests[', n.n,'].Test_ID')) is not null;
GO
create unique clustered index vuidx on dbo.v(id,n); --materializes the view
create index idTestId on dbo.v(Test_Id); --enables seeks on Test_Id
GO
select * from dbo.v
GO
set statistics xml on;
select *
from dbo.v with(noexpand) --noexpand makes the engine use the materialized view
where Test_Id = 'GUID_HERE2';
GO
drop view if exists dbo.v;
GO
drop table if exists dbo.n;
GO
drop table if exists dbo.a;
I am facing a challenge while filtering records in a SQL Server 2017 table that has a VARCHAR column containing JSON values:
Sample table rows with JSON column values:
Row # 1. {"Department":["QA"]}
Row # 2. {"Department":["DEV","QA"]}
Row # 3. {"Group":["Group 2","Group 12"],"Cluster":[Cluster 11"],"Vertical":
["XYZ"],"Department":["QAT"]}
Row # 4. {"Group":["Group 20"],"Cluster":[Cluster 11"],"Vertical":["XYZ"],"Department":["QAT"]}
Now I need to filter records from this table based on an input parameter which can be in the following format:
Sample JSON input parameter to query:
1. `'{"Department":["QA"]}'` -> This should return Row # 1 as well as Row # 2.
2. `'{"Group":["Group 2"]}'` -> This should return only Row # 3.
So the search should work like this: if the column value contains any available JSON tag with any matching value, return those matching records.
Note - This is exactly like the PostgreSQL jsonb containment operator shown below:
PostgreSQL filter clause:
TableName.JSONColumnName #> '{"Department":["QA"]}'::jsonb
Researching on the internet, I found the OPENJSON capability available in SQL Server, which works as below.
OPENJSON sample example:
SELECT *
FROM tbl_Name UA
CROSS APPLY OPENJSON(UA.JSONColumnTags)
WITH ([Department] NVARCHAR(500) '$.Department',
      [Market] NVARCHAR(300) '$.Market',
      [Group] NVARCHAR(300) '$.Group') AS OT
WHERE OT.Department in ('X','Y','Z')
  and OT.Market in ('A','B','C')
But the problem with this approach is that if a new tag (like 'Area') needs to be supported in the future, it will also have to be added to every stored procedure where this logic is implemented.
Is there any existing SQL Server 2017 capability I am missing or any dynamic way to implement the same?
The only thing I could think of as an option when using OPENJSON would be to break your search string down into its key/value pairs, break the table storing the JSON you want to search down into its key/value pairs, and join the two.
There would be limitations to be aware of:
This solution would not work with nested arrays in your JSON.
The search would be OR, not AND. Meaning if I passed in multiple "Department" values to search for, like '{"Department":["QA", "DEV"]}', it would return the rows with either of the values, not only those that contain both (a workaround is sketched after the example results below).
Here's a working example:
DECLARE @TestData TABLE
(
[TestData] NVARCHAR(MAX)
);
--Load Test Data
INSERT INTO @TestData (
[TestData]
)
VALUES ( '{"Department":["QA"]}' )
, ( '{"Department":["DEV","QA"]}' )
, ( '{"Group":["Group 2","Group 12"],"Cluster":["Cluster 11"],"Vertical": ["XYZ"],"Department":["QAT"]}' )
, ( '{"Group":["Group 20"],"Cluster":["Cluster 11"],"Vertical":["XYZ"],"Department":["QAT"]}' );
--Here is the value we are searching for
DECLARE @SearchJsonInput NVARCHAR(MAX) = '{"Department":["QA"]}';
DECLARE @SearchJson TABLE
(
[Key] NVARCHAR(MAX)
, [Value] NVARCHAR(MAX)
);
--Load the search value into a table variable as its key\value pairs.
INSERT INTO @SearchJson (
[Key]
, [Value]
)
SELECT [a].[Key]
, [b].[Value]
FROM OPENJSON(@SearchJsonInput) [a]
CROSS APPLY OPENJSON([a].[Value]) [b];
--Break down TestData into its key\value pair and then join back to the search table.
SELECT [TestData].[TestData]
FROM (
SELECT [a].[TestData]
, [b].[Key]
, [c].[Value]
FROM @TestData [a]
CROSS APPLY OPENJSON([a].[TestData]) [b]
CROSS APPLY OPENJSON([b].[Value]) [c]
) AS [TestData]
INNER JOIN @SearchJson [srch]
ON [srch].[Key] COLLATE DATABASE_DEFAULT = [TestData].[Key]
AND [srch].[Value] = [TestData].[Value];
Which gives you the following results:
TestData
-----------------------------
{"Department":["QA"]}
{"Department":["DEV","QA"]}
In my Flat File Source, I want to transfer all this data to an OLE DB destination.
But I want to divide the data into different tables.
Example:
Table one starts at the first %F and ends before the next %F in col[0].
And table two starts at the second %F with a different header, because it has different fields than the first table.
Is this possible in SSIS?
It looks like data for two tables is provided in a single flat file. From the image, it looks like the two tables also have different structures. I think it is difficult to load the file in one step.
Maybe these steps will help you.
Step 1. Load all the data into a table (let it be a table named [Table]), including the column headers. The data may look like this (just a pattern, as an example).
In this table, make sure you add an increment column.
Step 2. A query like the one below will help you identify the row at which the 2nd table starts.
Select Top 1 Column0 From [Table] Where Column1 = '%F' Order By Column0 Desc
In your SSIS package, add a variable to store the above result.
Step 3. Add a DFT (data flow task) with [Table] as the source. After the source, add a conditional split:
if Column0 < the variable value, send the row to [Table1];
else to [Table2].
There may still be some more modifications needed.
Added as per comment:
If you have more than one table:
step 1. Load all the data into one table.
step 2. Add an additional column ([columnX] in the image). Its value should be set in such a way that you can identify the table from it.
step 3. Use a conditional split itself; using columnX, map each row to its corresponding table.
As per request, added edit:
Use logic like the following. Run the script in SSMS and see the result.
Declare @table table (id int identity(1,1), Col1 varchar(5), ColX int)
Insert into @table (Col1) Values
('%F'),('%R'),('%R'),('%R'),('%R'),('%R'),('%R'),
('%F'),('%R'),('%R'),('%R'),('%R'),('%R'),('%R'),
('%F'),('%R'),('%R'),('%R'),('%R')
--before the update, ColX is NULL for every row
Select *
from @table A
--tag each row with the number of the %F block it belongs to
Update Y
Set ColX = Z.X
From @table Y Join(
Select A.id FromId, B.id ToId, A.X From
(
Select id, ROW_NUMBER() Over (Order By id) X From (
Select id from @table Where Col1 = '%F'
Union
Select max(id) + 1 id From @table ) Lu ) A, --max(id)+1 closes the final range so the last row is tagged too
(
Select id, ROW_NUMBER() Over (Order By id) X From (
Select id from @table Where Col1 = '%F'
Union
Select max(id) + 1 id From @table ) Lu ) B
Where A.X = B.X - 1 ) Z On Y.id >= Z.FromId and Y.id < Z.ToId
--after the update, ColX identifies each row's table
Select *
from @table A
Please take a look at the following table:
I am building a search engine which returns card_id values based on a search of category_id and value_id values.
To better explain the search mechanism, imagine that we are trying to find a car (card_id) by supplying information about which part (value_id) the car should have in every category (category_id).
For example, we may want to find a car (card_id) where the category "Fuel Type" (category_id) has the value "Diesel" (value_id), and the category "Gearbox" (category_id) has the value "Manual" (value_id).
My problem is that my knowledge is not sufficient to build a query which returns card_ids matching more than one pair of category_id and value_id.
For example, if I want to search a car with diesel engine, I could build a query like this:
SELECT card_id FROM cars WHERE category_id=1 AND value_id=2
where category_id = 1 is a category "Fuel Type" and value_id = 2 is "Diesel".
My question is, how can I build a query, which will look for more category-value pairs? For example, I want to look for diesel cars with manual gearbox.
Any help will be much appreciated. Thank you in advance.
You can do this using aggregation and a having clause:
SELECT card_id
FROM cars
GROUP BY card_id
HAVING SUM(category_id = 1 AND value_id = 2) > 0 AND
SUM(category_id = 3 and value_id = 43) > 0;
Each condition in the having clause counts the number of rows that match a given condition. You can add as many conditions as you like. The first, for instance, says that there is at least one row where the category is 1 and the value is 2.
SQL Fiddle
Another approach is to create a user defined function that takes a table of attribute/value pairs and returns a table of matching cars. This has the advantage of allowing an arbitrary number of attribute/value pairs without resorting to dynamic SQL.
--Declare a "sample" table for proof of concept, replace this with your real data table
DECLARE @T TABLE(PID int, Attr Int, Val int)
--Populate the data table
INSERT INTO @T (PID, Attr, Val) VALUES (1,1,1),(1,3,5),(1,7,9),(2,1,2),(2,3,5),(2,7,9),(3,1,1),(3,3,5),(3,7,9)
--Declare this as a User Defined Table Type, the function would take this as an input
DECLARE @C TABLE(Attr Int, Val int)
--This would be populated by the code that calls the function
INSERT INTO @C (Attr, Val) VALUES (1,1),(7,9)
--The function (or stored procedure) body begins here
--Get a list of IDs for which there is not a requested attribute that doesn't have a matching value for that ID
SELECT DISTINCT PID
FROM @T as T
WHERE NOT EXISTS (SELECT C.Attr FROM @C as C
                  WHERE NOT EXISTS (SELECT * FROM @T as I
                                    WHERE I.Attr = C.Attr and I.Val = C.Val and I.PID = T.PID ))
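To package this as the actual user-defined function described above, you would create a user-defined table type for the pairs and an inline table-valued function over the real table. A sketch against the cars(card_id, category_id, value_id) table from the question; the type and function names here are made up:
--table type for the category/value pairs (illustrative name)
CREATE TYPE dbo.CategoryValueList AS TABLE (category_id int, value_id int);
GO
--inline TVF returning every card_id that matches all supplied pairs
CREATE FUNCTION dbo.MatchingCards (@pairs dbo.CategoryValueList READONLY)
RETURNS TABLE
AS RETURN
    SELECT DISTINCT T.card_id
    FROM cars AS T
    WHERE NOT EXISTS (SELECT 1 FROM @pairs AS C
                      WHERE NOT EXISTS (SELECT 1 FROM cars AS I
                                        WHERE I.category_id = C.category_id
                                          AND I.value_id = C.value_id
                                          AND I.card_id = T.card_id));
GO
--usage: diesel (1,2) and manual gearbox (3,43), as in the earlier examples
DECLARE @search dbo.CategoryValueList;
INSERT INTO @search (category_id, value_id) VALUES (1, 2), (3, 43);
SELECT card_id FROM dbo.MatchingCards(@search);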