Remove params from JSON in SQL Server

There is a JSON column in SQL Server tables with data like:
["1","2","3","4"]
and I want to delete "3" or ("2","4") (for example) from it.
Can I do it with Json_Modify or anything else?

JSON_MODIFY works by path, so it needs a key to target. If you don't have any key and just have a simple list like that, you can rebuild the array instead, filtering out the values you want to drop (STRING_AGG requires SQL Server 2017+):
DECLARE @JsonList NVARCHAR(1000) = N'["1","2","3","4"]';
DECLARE @NewList NVARCHAR(1000);

SET @NewList =
(
    SELECT CONCAT('[', STRING_AGG(CONCAT('"', oj.Value, '"'), ','), ']')
    FROM OPENJSON(@JsonList) AS oj
    WHERE oj.Value NOT IN ('2', '4')
);

PRINT @NewList  -- ["1","4"] becomes ["1","3"] after removing "2" and "4"
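If the array lives in a table column, the same rebuild can be applied in place with an UPDATE. A minimal sketch, assuming a hypothetical table dbo.MyTable with a JSON column JsonList (names are illustrative, not from the question):

```sql
-- Hypothetical table/column names; the subquery mirrors the rebuild above.
-- If every element is filtered out, STRING_AGG returns NULL and CONCAT
-- collapses it to an empty string, so the result is '[]', not NULL.
UPDATE t
SET JsonList =
(
    SELECT CONCAT('[', STRING_AGG(CONCAT('"', oj.Value, '"'), ','), ']')
    FROM OPENJSON(t.JsonList) AS oj
    WHERE oj.Value NOT IN ('2', '4')
)
FROM dbo.MyTable AS t;
```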

Merging JSON objects in SQL Server

We have a base table with a custom-values column.
Table structure:
User_Dimension:
Userid, username, addresss, Custom_value
Userid is the primary key, and the customer can map the fields present in the file using our UI.
If any of the columns present in the files don't fit a column in our base tables, we create a custom column and store it as JSON:
Userid,username,addresss,Custom_value
234,AK4140,BANGLORE,{"Pin":"522413","State":"Maharastra"}
The data is stored as shown above in a staging table.
Note: the User_Dimension table can receive data from multiple files, so the custom values differ per file; that information is stored in a metadata table.
We are using SCD Type 1 for dimension tables.
The problem is merging the JSON column.
Consider this scenario:
User_Dimension
Userid,username,addresss,Service_Type,User_Type,Custom_value
234,ak4140,banglore,null,null,{"Pin":"522413","State":"Maharastra"}
The above entry was loaded into User_Dimension from File1.
Now I need to push the value below into my table from File2:
Userid,username,addresss,Service_Type,User_Type,Custom_value
234,NULL,NULL,Customer,DVV,{"Birthdate":"19-09-1995","State":"Karnataka"}
I am merging both values based on the Userid.
The problem is the Custom_value column. From the above entries I need to update this column as shown here:
Userid,username,addresss,Service_Type,User_Type,Custom_value
234,ak4140,banglore,Customer,DVV,{"Pin":"522413","State":"Karnataka","Birthdate":"19-09-1995"}
I wrote a function which can perhaps be used to merge two JSON documents:
CREATE OR ALTER FUNCTION dbo.FN_JSON_MERGE(@pJson1 NVARCHAR(MAX), @pJson2 NVARCHAR(MAX))
RETURNS NVARCHAR(MAX)
AS
BEGIN
    -- Collect keys/values from the first JSON
    DECLARE @t TABLE ([key] NVARCHAR(MAX) COLLATE DATABASE_DEFAULT, [value] NVARCHAR(MAX) COLLATE DATABASE_DEFAULT, row_id INT IDENTITY);

    INSERT INTO @t ([key], [value])
    SELECT [key], [value]
    FROM OPENJSON(@pJson1);

    -- Overwrite with matching values from @pJson2
    UPDATE t
    SET [value] = oj.[value]
    FROM @t t
    INNER JOIN OPENJSON(@pJson2) oj
        ON oj.[key] COLLATE DATABASE_DEFAULT = t.[key] COLLATE DATABASE_DEFAULT;

    -- Add keys that exist only in @pJson2
    INSERT INTO @t ([key], [value])
    SELECT [key], [value]
    FROM OPENJSON(@pJson2) o
    WHERE NOT EXISTS (
        SELECT 1
        FROM @t t2
        WHERE t2.[key] COLLATE DATABASE_DEFAULT = o.[key] COLLATE DATABASE_DEFAULT
    );

    -- Finally, generate the new JSON
    SET @pJson2 = '';
    SELECT @pJson2 = @pJson2 + ',' + '"' + [key] + '": "' + [value] + '"'
    FROM @t
    ORDER BY row_id;

    RETURN '{' + STUFF(@pJson2, 1, 1, '') + '}';
END
Test code:
select dbo.FN_JSON_MERGE('{"Pin":"522413","State":"Maharastra"}'
, '{"Birthdate":"19-09-1995","State":"Karnataka"}')
-- returns {"Pin": "522413","State": "Karnataka","Birthdate": "19-09-1995"}
But there are a lot of caveats. It might not handle very long strings, strings containing quotes, or other unusual JSON.
It also doesn't always keep the same attribute order as the original JSON.
It's likely to be quite slow.
Finally, it doesn't handle merging data from three or more files in one call; right now, the second argument's values always overwrite the first's.
But maybe it can be of some use. You can always create a procedure which does this for better performance.
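To connect the function back to the SCD Type 1 load, it could be applied while merging a staging row into the dimension. A sketch using the table and column names from the question, with an assumed staging table name Staging_User (not stated in the question):

```sql
-- Sketch: replace User_Dimension.Custom_value with the merged JSON,
-- letting the staging file's values win on key collisions
-- (FN_JSON_MERGE overwrites from its second argument).
UPDATE d
SET Custom_value = dbo.FN_JSON_MERGE(d.Custom_value, s.Custom_value)
FROM User_Dimension AS d
INNER JOIN Staging_User AS s   -- hypothetical staging table name
    ON s.Userid = d.Userid
WHERE s.Custom_value IS NOT NULL;
```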

Sql Server: Select String array of JSON

Given the following test data:
declare @mg nvarchar(max);
set @mg = '{"fiskepind":["ko","hest","gris"]}';
select @mg, JSON_VALUE(@mg,'$.fiskepind')
How do I get back a column containing:
ko,hest,gris
The example returns NULL, and I don't want to use an [index] path to get only one element back.
Starting from SQL Server 2017, a possible solution is a combination of OPENJSON() and STRING_AGG().
SELECT STRING_AGG([value], ',') WITHIN GROUP (ORDER BY CONVERT(int, [key])) AS Result
FROM OPENJSON(@mg, '$.fiskepind')
Note, that JSON_VALUE() returns a scalar value, so the NULL value is the expected result when you try to extract a JSON array ('$.fiskepind') from the input JSON text.
If you just want a combined list, you can use OPENJSON to get a table and then use FOR XML PATH or STRING_AGG to combine it into a single string.
declare @mg nvarchar(max);
set @mg = '{"fiskepind":["ko","hest","gris"]}';

select @mg, JSON_VALUE(@mg, '$.fiskepind')
    , STUFF((
        SELECT ',' + [value]
        FROM OPENJSON(@mg, '$.fiskepind')
        FOR XML PATH('')
      ), 1, 1, '') as combined_list
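As an aside, if what's wanted is the array itself as JSON text rather than a comma-separated list, JSON_QUERY can return the fragment directly; it is the counterpart of JSON_VALUE for objects and arrays:

```sql
DECLARE @mg nvarchar(max) = N'{"fiskepind":["ko","hest","gris"]}';

-- JSON_QUERY extracts objects/arrays, where JSON_VALUE only extracts scalars,
-- which is why JSON_VALUE returned NULL for '$.fiskepind' above.
SELECT JSON_QUERY(@mg, '$.fiskepind') AS json_array;  -- ["ko","hest","gris"]
```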

MSSQL select JSON file with multirows and insert into table

I read the docs on handling a JSON file here. So far I am able to read the file and get a result:
Query: SELECT * FROM OPENROWSET (BULK 'c:\ne.db', SINGLE_CLOB) as import
Result: {"res":{"number":"123", "info":"c-PM6900"},"_id":"aHMIeu6ZwB9lIBZk"} {"res":{"number":"456", "info":"a-PMs900"},"_id":"aHaIeu6ZwB9sIBZ1"}....
If I query this, I only get the first row, with res nested:
Declare @JSON varchar(max)
SELECT @JSON = BulkColumn
FROM OPENROWSET (BULK 'C:\ne.db', SINGLE_CLOB) import
SELECT *
FROM OPENJSON (@JSON)
What I want to achieve is to read every entry of the JSON file and insert "res" from the JSON query into a row of a database table with columns "number", "info", "id". If anyone could help me finish this, I would appreciate it.
The JSON file contains about 400000 lines and comes from a NodeJS script which uses nedb.
Here is the example file: LINK
The JSON in the file is not valid JSON: it contains multiple root elements, one per row. Strangely, OPENJSON() reads only the first element of such input without raising an error.
But you may try to transform the input into a valid JSON array (turning {...} {...} into [{...},{...}]) and parse that array with OPENJSON() and an explicit schema. If the input file has one JSON object per row, you need to know the newline separator (it's usually CHAR(10)):
DECLARE @json nvarchar(MAX)

-- Read the file's content
-- SELECT @json = BulkColumn
-- FROM OPENROWSET (BULK 'C:\ne.db', SINGLE_CLOB) AS [Insert]

-- Only for test
SELECT @json =
   N'{"res":{"number":"123", "info":"c-PM6900"},"_id":"aHMIeu6ZwB9lIBZk"}' +
   CHAR(10) +
   N'{"res":{"number":"456", "info":"a-PMs900"},"_id":"aHaIeu6ZwB9sIBZ1"}'

SELECT [number], [info], [_id]
FROM OPENJSON(CONCAT('[', REPLACE(@json, CONCAT('}', CHAR(10), '{'), '},{'), ']')) WITH (
   [number] varchar(3) '$.res.number',
   [info] varchar(10) '$.res.info',
   _id varchar(50) '$._id'
)
Result:
number  info      _id
123     c-PM6900  aHMIeu6ZwB9lIBZk
456     a-PMs900  aHaIeu6ZwB9sIBZ1
You need to use a couple of calls to OPENJSON to achieve this, each with a WITH clause:
DECLARE @JSON nvarchar(MAX) = N'{"res":{"number":"123", "info":"c-PM6900"},"_id":"aHMIeu6ZwB9lIBZk"} {"res":{"number":"456", "info":"a-PMs900"},"_id":"aHaIeu6ZwB9sIBZ1"}'
SELECT J._id,
       r.number,
       r.info
FROM OPENJSON(@JSON)
     WITH (_id varchar(30),
           res nvarchar(MAX) AS JSON) J
CROSS APPLY OPENJSON(J.res)
     WITH (number int,
           info varchar(10)) r;
Because the OP appears to think I am telling them to change their DECLARE and assignment statement, to confirm, this is how you get the value into @JSON, from the OP's own question:
DECLARE @JSON varchar(max);
SELECT @JSON = BulkColumn
FROM OPENROWSET (BULK 'C:\ne.db', SINGLE_CLOB);
Final edit: it also appears that the OP's JSON is malformed, as I would expect a comma, or something, before the second res definition. Guessing we need to split it into rows as well, which means some string splitting:
SELECT J._id,
       r.number,
       r.info
FROM STRING_SPLIT(REPLACE(@JSON, N'}} {"res"', N'}}|{"res"'), '|') SS -- assumes a pipe (|) won't appear in the data
CROSS APPLY OPENJSON(SS.[value])
     WITH (_id varchar(30),
           res nvarchar(MAX) AS JSON) J
CROSS APPLY OPENJSON(J.res)
     WITH (number int,
           info varchar(10)) r;
db<>fiddle
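Since the question asks to land the parsed rows in a database table, the final SELECT can simply feed an INSERT. A sketch assuming a hypothetical target table dbo.Results (name and column types are illustrative; adjust to your data):

```sql
-- Hypothetical target table matching the question's "number","info","id" columns.
CREATE TABLE dbo.Results (number int, info varchar(10), id varchar(30));

-- Same two-level OPENJSON parse as above, feeding an INSERT ... SELECT.
INSERT INTO dbo.Results (number, info, id)
SELECT r.number, r.info, J._id
FROM OPENJSON(@JSON)
     WITH (_id varchar(30),
           res nvarchar(MAX) AS JSON) J
CROSS APPLY OPENJSON(J.res)
     WITH (number int,
           info varchar(10)) r;
```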

Update/Delete JSON array value in SQL Server

I have a JSON array in my table. I can create it, append to it, or set it to NULL inside my stored procedure, but I don't see any way to pop a value from the array. Apparently JSON_MODIFY may have a solution, since you can update a key as well as a single value, but how can I use it to modify my array?
-- My array
DECLARE @json NVARCHAR(MAX) = '{"array":[123,456]}'
Desired result after the update:
'{"array":[123]}'
Please note that the array contains int values, which are my sub-department ids. All values are (supposed to be) unique.
You could use:
DECLARE @json NVARCHAR(MAX) = '{"array":[123,456]}';

WITH cte AS (
    SELECT *, MAX([key]) OVER() AS m_key
    FROM OPENJSON(@json, '$.array') s
)
SELECT JSON_QUERY('[' + IIF(MAX(m_key) = 0, '', STRING_AGG([value], ',')
       WITHIN GROUP (ORDER BY [key])) + ']', '$') AS array
FROM cte
WHERE [key] != m_key OR m_key = 0
FOR JSON AUTO, WITHOUT_ARRAY_WRAPPER;
Output:
{"array":[123]}
DBFiddle Demo SQL Server 2017
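For completeness, the append side the question mentions does work natively with JSON_MODIFY via the 'append' path modifier; it is only removal that requires rebuilding the array:

```sql
DECLARE @json NVARCHAR(MAX) = '{"array":[123]}';

-- 'append' adds an element to the end of the array in place.
SET @json = JSON_MODIFY(@json, 'append $.array', 456);

SELECT @json;  -- {"array":[123,456]}
```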
As I was in a hurry, I solved my problem the following way, but I would really recommend not using it. Please see the answer above by @lad2025.
DECLARE @json VARCHAR(MAX) =
(
    SELECT jsonDept
    FROM tblEmployee
    WHERE tblEmployeeID = @empid
);

DECLARE @newjson VARCHAR(MAX) =
(
    SELECT LEFT(subdept, LEN(subdept) - 1)
    FROM (
        SELECT DISTINCT [value] + ', '
        FROM OPENJSON(@json, '$.array')
        WHERE [value] <> @subdeptid
        FOR XML PATH('')
    ) t (subdept)
);

UPDATE tblEmployee
SET jsonDept = '{"array":[' + @newjson + ']}'
WHERE tblEmployeeID = @empid;

SQL Server JSON with a single array

I am storing ids in a comma-separated string, e.g.
1,2,3,4
How can I store this as JSON in the column, and be able to insert or delete any particular value?
Part of the following answer comes from here, so all credits go there: https://stackoverflow.com/a/37844117/2695832
Here's a solution that enables you to store your string values in a JSON array in a table column. However, the should be able to insert delete any particular value part of your question is not totally clear to me.
DECLARE @source VARCHAR(20);
SET @source = '1,2,3,4';

DECLARE @values TABLE
(
    [Id] VARCHAR(20)
);

INSERT INTO @values ([Id])
SELECT [value]
FROM STRING_SPLIT(@source, ',')
WHERE RTRIM([value]) <> '';

INSERT INTO @values ([Id]) VALUES ('5');
DELETE FROM @values WHERE [Id] = 2;

SELECT JSON_QUERY('[' + STUFF((SELECT ',' + '"' + [Id] + '"'
       FROM @values FOR XML PATH('')), 1, 1, '') + ']') ids
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
This produces the following JSON object:
{"ids":["1","3","4","5"]}
The code might need some tweaking to completely match your needs since you're probably not using a table variable and also maybe want to use a numeric data type for your id values.
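Once the list is stored as JSON in a real table column, deleting a particular value can follow the same rebuild pattern as the earlier answers. A sketch assuming SQL Server 2017+ and a hypothetical table dbo.Items with an IdsJson column holding {"ids":[...]} (names are illustrative):

```sql
-- Remove id "3" from {"ids":["1","3","4","5"]} stored in dbo.Items.IdsJson.
-- JSON_QUERY around the rebuilt string keeps JSON_MODIFY from escaping it
-- as a plain string value.
UPDATE i
SET IdsJson = JSON_MODIFY(i.IdsJson, '$.ids',
    JSON_QUERY((
        SELECT CONCAT('[', STRING_AGG(CONCAT('"', oj.[value], '"'), ','), ']')
        FROM OPENJSON(i.IdsJson, '$.ids') AS oj
        WHERE oj.[value] <> '3'
    )))
FROM dbo.Items AS i;
```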