The table I am using has a column containing an array of JSON objects, and I need to fetch the data of a single element, say the 0th one:
create table scientist (id integer, firstname varchar(1000), lastname varchar(100));
insert into scientist (id, firstname, lastname) values (1,'[
{
"ch":"1",
"id":"12",
"area":"0",
"level":"Superficial",
"width":"",
"length":"",
"othern":"5",
"percent":"100",
"location":" 2nd finger base"
},
{
"ch":"1",
"id":"13",
"area":"0",
"level":"Skin",
"width":"",
"length":"",
"othern":"1",
"percent":"100",
"location":" Abdomen "
}
]', 'einstein');
select json_array_elements_text(firstname::json) from scientist
This returns 2 rows of data. How can I get only a specific row, say the object where "level":"Superficial" (i.e. the 0th element)?
Just use a WHERE clause to get only the row that you want.
You will however need to change your query to a) use json_array_elements instead of json_array_elements_text and b) use a lateral subquery instead of calling the function in the SELECT clause.
SELECT value
FROM scientist, json_array_elements(firstname::json)
WHERE value ->> 'level' = 'Superficial'
(online demo)
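Outside the database, the same filter is easy to sanity-check in plain Python over the parsed array (a sketch; the array literal below is abbreviated from the sample data above):

```python
import json

# The JSON array stored in the "firstname" column (abbreviated from the example)
firstname = '''[
  {"id": "12", "level": "Superficial", "othern": "5"},
  {"id": "13", "level": "Skin", "othern": "1"}
]'''

elements = json.loads(firstname)

# Equivalent of: WHERE value ->> 'level' = 'Superficial'
superficial = [e for e in elements if e.get("level") == "Superficial"]

# Equivalent of taking the 0th element directly (firstname::json -> 0)
first = elements[0]
```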
I have this table named Employee that I want to pivot into a new table.
The pivot needs to be dynamic, since the number of attributes in the JSON data can change.
How can I do this dynamically in SQL Server 2017?
You can use OPENJSON with a table definition for this.
You only need REPLACE to turn the stored value into valid JSON first:
SELECT
e.id,
e.name,
j.score,
j.point
FROM Employee AS e
CROSS APPLY OPENJSON(REPLACE(REPLACE(e.info, 'point', '"point"'), 'score', '"score"'))
WITH (score int, point int) AS j;
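The effect of the nested REPLACE calls can be sanity-checked outside SQL Server. A minimal Python sketch, assuming the stored e.info values look like {score:..., point:...} with unquoted key names (that format is an assumption based on the REPLACE calls above):

```python
import json

# Hypothetical stored value with unquoted keys -- not valid JSON as-is
info = '{score:10, point:5}'

# Mirror of the T-SQL: REPLACE the bare key names with quoted ones
fixed = info.replace('point', '"point"').replace('score', '"score"')
parsed = json.loads(fixed)  # now parses cleanly
```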
I want to lower-case the values of specific keys:
Table:
logs
id bigint , jsondata text
[
{
"loginfo": "somelog1",
"id": "App1",
"identifier":"IDENTIF12"
},
{
"loginfo": "somelog2",
"id": "APP2",
"identifier":"IDENTIF1"
}
]
I need to lower-case only id and identifier.
I need to achieve something like the below:
UPDATE ... SET json_agg(elems.id) = lowered_val ...
SELECT
id,
lower(json_agg(elems.id)) as lowered_val
FROM logs,
json_array_elements(jsondata::json) as elems
GROUP BY id;
demo:db<>fiddle
This is not that simple. You need to expand and extract the complete JSON object and have to do this manually:
SELECT
id,
json_agg(new_object) -- 5
FROM (
SELECT
id,
json_object_agg( -- 4
attr.key,
CASE -- 3
WHEN attr.key IN ('id', 'identifier') THEN LOWER(attr.value)
ELSE attr.value
END
) as new_object
FROM mytable,
json_array_elements(jsondata::json) WITH ORDINALITY as elems(value, index), -- 1
json_each_text(elems.value) as attr -- 2
GROUP BY id, elems.index -- 4
) s
GROUP BY id
Extract the arrays. WITH ORDINALITY adds an index to the array elements, so the original arrays can be regrouped afterwards.
Expand each array element into one record per attribute. This yields the two columns key and value.
If the key is one of the keys to be modified, lower-case the related value; leave all others unchanged.
Rebuild the JSON objects.
Reaggregate them into a new JSON array.
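For comparison, the same rule (lower-case the values of id and identifier, leave everything else) is straightforward outside the database; a Python sketch using the sample array from the question:

```python
import json

jsondata = '''[
  {"loginfo": "somelog1", "id": "App1", "identifier": "IDENTIF12"},
  {"loginfo": "somelog2", "id": "APP2", "identifier": "IDENTIF1"}
]'''

TO_LOWER = {"id", "identifier"}

# Rebuild every object, lower-casing only the targeted keys' values
new_array = [
    {k: v.lower() if k in TO_LOWER else v for k, v in obj.items()}
    for obj in json.loads(jsondata)
]
```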
I created a new table by joining 2 other tables with a specific WHERE clause, which resulted in only 169 records being added to the new table.
What I now need to do is the following
Clone all those records to the current table (DUPLICATE)
Add a String to current record name
Example
Original Record Name: Record 1
Cloned Record Name: Record 1 - COPY
The records all have unique IDs. When cloning, the IDs should be incremented but must not collide with the IDs of the other records.
NOTE: THE RECORDS IN THIS NEW TABLE ARE NOT SEQUENTIAL. EXAMPLE BELOW
ID: bf378ee4-2430-264a-e7ec-546e68b12301
ID: bf378ee4-2430-264a-e7ec-546e68b12302
ID: bf378ee4-2430-264a-e7ec-546e68b12303
THIS IS NOT THE CASE
Copy data to and from the same table and change the value of copied data in one column to a specified value
USED TO CREATE NEW TABLE
create table sgr_New.tim_time_temp_merged_and_purged as
SELECT * from sgr_New.tim_time as T
join sgr_New.tim_time_cstm as C
on T.id = C.id_c
where
(C.billable_time_c > "0")and
(C.unbillable_time_c > "0")and
(C.unbillable_reason_c not like "No_Unbillable_Time");
Found this online (Is this relevant)
Apologies for the long query; I had to specify all the fields, as there is one specific field I am not supposed to insert.
insert into sgr_new.tim_time_temp_merged_and_purged (c1, c2, ...)
select
unpaid_billable_revenue_c,
unbilled_hours_c,
unbillable_time_c,
unbillable_reason_c,
training_time_c,
touched_c,
total_time_c,
time_netsuite_id_c,
time_entered_c,
tag_c,
requirements_time_c,
reporting_time_c,
related_account_c,
project_rate_c,
project_or_case_name_c,
project_number_c,
project_mgmt_time_c,
platform_build_time_c,
overrun_c,
num_minutes_c,
num_hours_c,
internal_notes_c,
free_hours_c,
expense_calculation_c,
expense_amount_c,
duration_c,
dev_time_c,
date_performed_c,
data_load_time_c,
currency_id,
counter_field_c,
configuration_time_c,
category_c,
case_rate_c,
case_number_c,
billing_rate_override_c,
billing_rate_c,
billing_notes_c,
billed_time_c,
billable_time_wf_copy_c,
billable_time_override_c,
billable_time_c,
billable_override_c,
billable_hours_kpi_c,
billable_c,
billable_amount_c,
base_rate,
amount_c,
account_short_name_c
from sgr_new.tim_time_cstm
where id = 1;
I expect to see
338 records in my table, ALL WITH UNIQUE IDs, no duplicates
169 of these have "Copy" added to the record name
There is a mysql function UUID() (in MSSQL: LOWER(NEWID()) ) that you could call without parameters to generate the IDs in the correct format.
For the name just use CONCAT(), but I see you already found it yourself :)
INSERT into mytable
(
id,
name,
field1,
field2,
...
)
SELECT UUID(), CONCAT(name, ' - COPY'), field1, field2, ... FROM mytable;
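The cloning logic itself (fresh random ID, suffix on the name) can be sketched outside SQL; a minimal Python illustration with a hypothetical in-memory stand-in for the table:

```python
import uuid

# Hypothetical stand-in for the table: (id, name, some other field)
rows = [
    ("bf378ee4-2430-264a-e7ec-546e68b12301", "Record 1", 42),
    ("bf378ee4-2430-264a-e7ec-546e68b12302", "Record 2", 7),
]

# Clone every row: a fresh UUID for the id, " - COPY" appended to the name,
# all other fields copied unchanged
clones = [(str(uuid.uuid4()), name + " - COPY", rest) for (_id, name, rest) in rows]
all_rows = rows + clones
```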
I am facing a challenge while filtering records in a SQL Server 2017 table which has a VARCHAR column having JSON type values:
Sample table rows with JSON column values:
Row # 1. {"Department":["QA"]}
Row # 2. {"Department":["DEV","QA"]}
Row # 3. {"Group":["Group 2","Group 12"],"Cluster":["Cluster 11"],"Vertical":["XYZ"],"Department":["QAT"]}
Row # 4. {"Group":["Group 20"],"Cluster":["Cluster 11"],"Vertical":["XYZ"],"Department":["QAT"]}
Now I need to filter records from this table based on an input parameter which can be in the following format:
Sample JSON input parameter to query:
1. `'{"Department":["QA"]}'` -> This should return Row # 1 as well as Row # 2.
2. `'{"Group":["Group 2"]}'` -> This should return only Row # 3.
So the search should work like this: if the column value contains "any available JSON tag with any matching value", return those matching records.
Note - this is exactly the jsonb containment test in PostgreSQL, as shown below:
PostgreSQL filter clause:
TableName.JSONColumnName @> '{"Department":["QA"]}'::jsonb
Researching online, I found the OPENJSON capability available in SQL Server, which works as below.
OPENJSON sample example:
SELECT * FROM
tbl_Name UA
CROSS APPLY OPENJSON(UA.JSONColumnTags)
WITH ([Department] NVARCHAR(500) '$.Department', [Market] NVARCHAR(300) '$.Market', [Group] NVARCHAR(300) '$.Group'
) AS OT
WHERE
OT.Department in ('X','Y','Z')
and OT.Market in ('A','B','C')
But the problem with this approach is that if in future there is a need to support any new tag in JSON (like 'Area'), that will also need to be added to every stored procedure where this logic is implemented.
Is there any existing SQL Server 2017 capability I am missing or any dynamic way to implement the same?
The only option I could think of using OPENJSON would be to break down your search string into its key/value pairs, break down the table storing the JSON you want to search into its key/value pairs, and join the two.
There would be limitations to be aware of:
This solution would not work with nested arrays in your json
The search would be OR, not AND. Meaning if I passed in multiple "Department" values to search for, like '{"Department":["QA", "DEV"]}', it would return rows with either of the values, not only those that contained both.
Here's a working example:
DECLARE @TestData TABLE
(
[TestData] NVARCHAR(MAX)
);
--Load Test Data
INSERT INTO @TestData (
[TestData]
)
VALUES ( '{"Department":["QA"]}' )
, ( '{"Department":["DEV","QA"]}' )
, ( '{"Group":["Group 2","Group 12"],"Cluster":["Cluster 11"],"Vertical": ["XYZ"],"Department":["QAT"]}' )
, ( '{"Group":["Group 20"],"Cluster":["Cluster 11"],"Vertical":["XYZ"],"Department":["QAT"]}' );
--Here is the value we are searching for
DECLARE @SearchValue NVARCHAR(MAX) = '{"Department":["QA"]}';
DECLARE @SearchJson TABLE
(
[Key] NVARCHAR(MAX)
, [Value] NVARCHAR(MAX)
);
--Load the search value into a table variable as its key\value pairs.
INSERT INTO @SearchJson (
[Key]
, [Value]
)
SELECT [a].[Key]
, [b].[Value]
FROM OPENJSON(@SearchValue) [a]
CROSS APPLY OPENJSON([a].[Value]) [b];
--Break down TestData into its key\value pair and then join back to the search table.
SELECT [TestData].[TestData]
FROM (
SELECT [a].[TestData]
, [b].[Key]
, [c].[Value]
FROM @TestData [a]
CROSS APPLY OPENJSON([a].[TestData]) [b]
CROSS APPLY OPENJSON([b].[Value]) [c]
) AS [TestData]
INNER JOIN @SearchJson [srch]
ON [srch].[Key] COLLATE DATABASE_DEFAULT = [TestData].[Key]
AND [srch].[Value] = [TestData].[Value];
Which gives you the following results:
TestData
-----------------------------
{"Department":["QA"]}
{"Department":["DEV","QA"]}
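The key/value breakdown and join can be mirrored in plain Python to see why the match is OR rather than AND (a sketch using the same four sample documents):

```python
import json

test_data = [
    '{"Department":["QA"]}',
    '{"Department":["DEV","QA"]}',
    '{"Group":["Group 2","Group 12"],"Cluster":["Cluster 11"],"Vertical":["XYZ"],"Department":["QAT"]}',
    '{"Group":["Group 20"],"Cluster":["Cluster 11"],"Vertical":["XYZ"],"Department":["QAT"]}',
]

def flatten(doc):
    # Break a one-level JSON document into its (key, value) pairs
    return {(k, v) for k, vals in json.loads(doc).items() for v in vals}

search = flatten('{"Department":["QA"]}')

# OR semantics, as in the answer: keep documents sharing ANY pair with the search
matches = [doc for doc in test_data if flatten(doc) & search]
```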
Please take a look at the following table:
I am building a search engine which returns card_id values, based on search of category_id and value_id values.
To better explain the search mechanism, imagine that we are trying to find a car (card_id) by specifying which part (value_id) the car should have in each category (category_id).
In example, we may want to find a car (card_id), where category "Fuel Type" (category_id) has a value "Diesel" (value_id), and category "Gearbox" (category_id) has a value "Manual" (value_id).
My problem is that my knowledge is not sufficient to build a query which returns card_ids matching more than one pair of category_id and value_id.
For example, if I want to search a car with diesel engine, I could build a query like this:
SELECT card_id FROM cars WHERE category_id=1 AND value_id=2
where category_id = 1 is a category "Fuel Type" and value_id = 2 is "Diesel".
My question is, how can I build a query, which will look for more category-value pairs? For example, I want to look for diesel cars with manual gearbox.
Any help will be much appreciated. Thank you in advance.
You can do this using aggregation and a having clause:
SELECT card_id
FROM cars
GROUP BY card_id
HAVING SUM(category_id = 1 AND value_id = 2) > 0 AND
SUM(category_id = 3 and value_id = 43) > 0;
Each condition in the having clause counts the number of rows that match a given condition. You can add as many conditions as you like. The first, for instance, says that there is at least one row where the category is 1 and the value is 2.
SQL Fiddle
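The HAVING trick is effectively "division by counting": for each candidate group, count the rows matching each wanted pair and require every count to be positive. A Python sketch of the same logic, with hypothetical sample rows:

```python
from collections import defaultdict

# (card_id, category_id, value_id) rows -- hypothetical sample data
cars = [
    (1, 1, 2), (1, 3, 43),   # card 1: Diesel, manual gearbox
    (2, 1, 2), (2, 3, 44),   # card 2: Diesel, automatic
    (3, 1, 5), (3, 3, 43),   # card 3: petrol, manual
]

wanted = {(1, 2), (3, 43)}  # the (category_id, value_id) pairs to match

# GROUP BY card_id: collect each card's pairs
groups = defaultdict(set)
for card_id, cat, val in cars:
    groups[card_id].add((cat, val))

# Equivalent of one SUM(...) > 0 condition per pair: every wanted pair must appear
result = [card for card, pairs in groups.items() if wanted <= pairs]
```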
Another approach is to create a user defined function that takes a table of attribute/value pairs and returns a table of matching cars. This has the advantage of allowing an arbitrary number of attribute/value pairs without resorting to dynamic SQL.
--Declare a "sample" table for proof of concept, replace this with your real data table
DECLARE @T TABLE(PID int, Attr Int, Val int)
--Populate the data table
INSERT INTO @T(PID, Attr, Val) VALUES (1,1,1),(1,3,5),(1,7,9),(2,1,2),(2,3,5),(2,7,9),(3,1,1),(3,3,5),(3,7,9)
--Declare this as a User Defined Table Type; the function would take this as an input
DECLARE @C TABLE(Attr Int, Val int)
--This would be populated by the code that calls the function
INSERT INTO @C (Attr, Val) VALUES (1,1),(7,9)
--The function (or stored procedure) body begins here
--Get the IDs for which there is no requested attribute that lacks a matching value for that ID
SELECT DISTINCT PID
FROM @T as T
WHERE NOT EXISTS (SELECT C.Attr FROM @C as C
                  WHERE NOT EXISTS (SELECT * FROM @T as I
                                    WHERE I.Attr = C.Attr and I.Val = C.Val and I.PID = T.PID))
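The double NOT EXISTS reads as "keep the IDs for which no requested pair is missing". The same double negation expressed as a Python sketch over the sample data above:

```python
# Same data as the proof of concept: (PID, Attr, Val) rows
t = [(1,1,1),(1,3,5),(1,7,9),(2,1,2),(2,3,5),(2,7,9),(3,1,1),(3,3,5),(3,7,9)]
c = [(1,1),(7,9)]  # requested attribute/value pairs

pids = {pid for pid, _, _ in t}

# A PID qualifies when no requested pair is absent from its rows
# (the double NOT EXISTS of the query above)
matching = sorted(
    pid for pid in pids
    if not any((pid, attr, val) not in t for attr, val in c)
)
```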