Let's say there is a table A that has a column Information, where data is stored in JSON format. The JSON string stored there may have the properties Comment and Timestamp, or the properties comment and timestamp. Like this:
[{"Timestamp":"2018-04-11 18:14:59.9708","Comment":"first comment"}]
[{"timestamp":"2017-04-11 18:14:59.9708","comment":"second comment"}]
[{"Timestamp":"2019-04-11 18:14:59.9708","Comment":"third comment"}, {"timestamp":"2017-04-11 18:14:59.9708","comment":"last comment"}]
The script below parses the JSON string only for the capitalized properties, and fails for JSON strings with the lowercase ones.
SELECT jsonInfo.*
FROM OPENJSON(@Information, N'$')
WITH (
    Comment nvarchar(max) N'$.Comment',
    TimeStamp datetime '$.Timestamp'
) AS jsonInfo;
Is there any syntax that returns both the Comment and comment properties, ignoring case?
As explained in the documentation, with an explicit schema (the WITH clause), OPENJSON() matches keys in the input JSON expression against the column names in the WITH clause, and the match is case sensitive. But, as a possible workaround, you may try to use OPENJSON() with the default schema and conditional aggregation (grouping on the array index returned by the first OPENJSON() call, so that each object's key/value pairs collapse back into a single row):
Statement:
DECLARE @information nvarchar(max) = N'[
{"Timestamp":"2019-04-11 18:14:59.9708","Comment":"third comment"},
{"timestamp":"2017-04-11 18:14:59.9708","comment":"last comment"}
]'
SELECT
MAX(CASE WHEN LOWER(j2.[key]) = N'timestamp' THEN j2.[value] END) AS [TimeStamp],
MAX(CASE WHEN LOWER(j2.[key]) = N'comment' THEN j2.[value] END) AS [Comment]
FROM OPENJSON(@information, '$') j1
CROSS APPLY OPENJSON(j1.[value]) j2
GROUP BY j1.[key]
Result:
TimeStamp                 Comment
------------------------  -------------
2019-04-11 18:14:59.9708  third comment
2017-04-11 18:14:59.9708  last comment
I know it's too late to give an answer, but just for the community: the easiest way to work around this is by applying the LOWER or UPPER function to the JSON string. Something like this:
SET @Information = LOWER(@Information)
SELECT jsonInfo.*
FROM OPENJSON(@Information, N'$')
WITH(
Comment NVARCHAR(MAX) N'$.comment',
TimeStamp DATETIME '$.timestamp'
) AS jsonInfo;
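One caveat worth noting (my addition, not part of the original answer): LOWER() is applied to the whole string, so the stored values are lowercased along with the keys. If the values must keep their original casing, a sketch like the following, assuming SQL Server 2017+ for STRING_AGG(), rebuilds each array element with lowercased keys while leaving the values untouched:

-- Sketch: lowercase only the keys by rebuilding each array element.
-- Note: handles string and numeric values; JSON nulls would need extra care.
DECLARE @Information nvarchar(max) = N'[{"Timestamp":"2018-04-11 18:14:59.9708","Comment":"First Comment"}]';

SELECT @Information = N'[' + STRING_AGG(obj.rebuilt, N',') + N']'
FROM OPENJSON(@Information) arr
CROSS APPLY (
    SELECT N'{' + STRING_AGG(
               N'"' + LOWER(kv.[key]) + N'":'
               + CASE WHEN kv.[type] = 1  -- type 1 = string value
                      THEN N'"' + STRING_ESCAPE(kv.[value], 'json') + N'"'
                      ELSE kv.[value] END,
               N',') + N'}'
    FROM OPENJSON(arr.[value]) kv
) obj(rebuilt);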
When I use a variable (or a field from a table) in a CASE statement with FOR JSON PATH, the JSON produced is not well formed.
Ex:
declare @MyValue nvarchar(50)
set @MyValue = '1'
select CASE WHEN @MyValue = '1' THEN (select 'ROLE_CLIENT_READONLY' as id FOR JSON PATH) end as [Role] FOR JSON PATH
Returns:
[{"Role":"[{\"id\":\"ROLE_CLIENT_READONLY\"}]"}]
But if I use this instead, it works:
select CASE WHEN '1'='1' THEN (select 'ROLE_CLIENT_READONLY' as id FOR JSON PATH) end as [Role] FOR JSON PATH
Returns:
[{"Role":[{"id":"ROLE_CLIENT_READONLY"}]}]
Any idea on the reason for this behavior?
How can I fix this in the first scenario?
Not sure why it treats one differently than the other. I certainly would not expect a difference in the JSON between using a variable in the query and using a string literal.
Interestingly
CASE WHEN CAST(N'1' AS NVARCHAR(MAX)) = CAST(N'1' AS NVARCHAR(MAX))
also produces the problematic JSON, however
CASE WHEN CAST(N'1' AS NVARCHAR(50)) = CAST(N'1' AS NVARCHAR(50))
does not.
This seems to work as a workaround for using the variable in the query:
WITH ids AS (
SELECT CASE WHEN @MyValue = '1' THEN 'ROLE_CLIENT_READONLY' END id
)
SELECT (SELECT id FROM ids FOR JSON PATH) AS [Role] FOR JSON PATH;
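Another workaround worth mentioning (my addition, not part of the original answer): wrapping the inner query in JSON_QUERY() tells the outer FOR JSON to treat the result as already-valid JSON rather than as a string to be escaped:

select CASE WHEN @MyValue = '1'
            THEN JSON_QUERY((select 'ROLE_CLIENT_READONLY' as id FOR JSON PATH))
       end as [Role]
FOR JSON PATH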
I'm using DataTables, DataTables Editor, JavaScript, and MSSQL 2016.
I'd like to parse this string in SQL Server:
{
"action":"edit",
"data": {
"2019-08-03":{
"Description":"sdfsafasdfasdf",
"FirstFrozenStep":"333"
}
}
}
I don't know how to access the key "2019-08-03". This represents the primary key, or the DT_RowId in DataTables Editor. It's dynamic... It could change.
Historically, I have just manipulated the data in JavaScript into a FLAT object, which is WAY easier to parse in SQL Server:
{
"action":"edit",
"DT_RowId":"2019-08-03",
"Description":"sdfsafasdfasdf",
"FirstFrozenStep":"333"
}
HOWEVER, I would like to know how to use json_query, json_value, and openjson() to drill down to the "dynamic" key mentioned above, and then access its values.
Here are all my FAILED attempts:
declare
@jsonRequest nvarchar(max) = '{"action":"edit","data":{"2019-08-03":{"Description":"sdfsafasdfasdf","FirstFrozenStep":"333"}}}'
,@json2 nvarchar(max) = '{"2019-08-03":{"Description":"sdfsafasdfasdf","FirstFrozenStep":"333"}}'
,@jsonEASY nvarchar(max) = '{"action":"edit","DT_RowId":"2019-08-03","Description":"sdfsafasdfasdf","FirstFrozenStep":"333"}'
select
json_value(@jsonRequest, '$.action') as [action]
--,json_value(@jsonRequest, '$.data.[0]') as [action]
--,json_query(@jsonRequest, '$.data[0]')
--,json_query(@jsonRequest, '$.data.[0]')
--,json_query(@jsonRequest, '$.data[0].Description')
--,json_query(@jsonRequest, '$.data.Description')
--,json_query(@jsonRequest, '$.data.[0].Description')
select
[Key]
,Value
,Type
--,json_query(value, '$')
from
openjson(@jsonRequest)
SELECT x.[Key], x.[Value]
FROM OPENJSON(@jsonRequest, '$') AS x;
select
x.[Key]
,x.[Value]
--,json_query(x.value, '$')
--,(select * from openjson(x.value))
FROM OPENJSON(@jsonRequest, '$') AS x;
SELECT x.[Key], x.[Value]
FROM OPENJSON(@json2, '$') AS x;
select
json_value(@jsonEASY, '$.action') as [action]
,json_value(@jsonEASY, '$.DT_RowId') as [DT_RowId]
,json_value(@jsonEASY, '$.Description') as [Description]
The most explicit and type-safe approach might be this:
First, I define your JSON with two dynamic keys:
DECLARE @json NVARCHAR(MAX)=
N'{
"action":"edit",
"data": {
"2019-08-03":{
"Description":"sdfsafasdfasdf",
"FirstFrozenStep":"333"
},
"2019-08-04":{
"Description":"blah4",
"FirstFrozenStep":"444"
}
}
}';
--the query
SELECT A.[action]
,B.[key]
,C.*
FROM OPENJSON(@json) WITH([action] NVARCHAR(100)
,[data] NVARCHAR(MAX) AS JSON) A
OUTER APPLY OPENJSON(A.[data]) B
OUTER APPLY OPENJSON(B.[value]) WITH([Description] NVARCHAR(100)
,FirstFrozenStep INT) C;
The result:
action  key         Description     FirstFrozenStep
edit    2019-08-03  sdfsafasdfasdf  333
edit    2019-08-04  blah4           444
The idea in short:
The first OPENJSON() returns the two first-level keys under the alias A. The data element is returned AS JSON, which allows us to process it further.
The second OPENJSON() gets A.[data] as its input and is needed to get hold of the key, which is your date.
The third OPENJSON() then gets B.[value] as its input.
The WITH clause lets us read the inner elements implicitly pivoted and typed.
In general: in generic data containers it is not a good idea to use descriptive parts as keys. This is possible and might look clever, but it would be much better to place your date as content within a date key.
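For illustration (my own sketch of that suggestion), the payload would then carry the date as a value rather than as a key:

{
    "action": "edit",
    "data": [
        {
            "DT_RowId": "2019-08-03",
            "Description": "sdfsafasdfasdf",
            "FirstFrozenStep": "333"
        }
    ]
}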
You can use OUTER APPLY to get to the next level in JSON:
SELECT L1.[key], L2.[key], L2.[value]
FROM openjson(@json,'$.data') AS L1
OUTER APPLY openjson(L1.[value]) AS L2
It will return:
key         key              value
2019-08-03  Description      sdfsafasdfasdf
2019-08-03  FirstFrozenStep  333
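If the dynamic key is already known at runtime, a variable path is a possible alternative (a sketch assuming SQL Server 2017+, which accepts a variable as the path argument; a key containing dashes must be double-quoted inside the path):

DECLARE @key  nvarchar(100) = N'2019-08-03';
DECLARE @path nvarchar(200) = N'$.data."' + @key + N'".Description';

SELECT JSON_VALUE(@json, @path) AS [Description];  -- sdfsafasdfasdf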
Using SQL Server, I want to take column data and copy it into a json object column
I am using SQL Server to query a column containing JSON data. What I want to do is copy the data in the ename column into the FieldValue column in the code below. If I could do it using SQL, that would be great.
SELECT
a.id, a.ssn, a.ename, p.CaptionName, p.FieldName, p.FieldType, p.FieldValue
FROM
tablename as a
CROSS APPLY
OPENJSON (details)
WITH (CaptionName NVARCHAR(100),
FieldName NVARCHAR(100),
FieldType NVARCHAR(15),
FieldValue NVARCHAR(50)) AS P
WHERE
p.captionname = 'txtEname'
AND a.ssn = '000-00-0000'
My JSON string in the details column:
[{"CaptionName":"txtEname","FieldName":null,"FieldType":null,"FieldValue":""}]
I'm really not that good with SQL, which is what I want to use. After copying the data to the JSON object, I will remove the ename column.
UPDATE 2019-07-11
Here's an amended solution that works for scenarios where there are multiple values in the JSON: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=1fde45dfb604b2d5540c56f6c17a822d
update a
set details = JSON_MODIFY(details, '$[' + x.[key] + '].FieldValue', ename)
from dbo.tblUissAssignments a
CROSS APPLY OPENJSON (details, '$') x
CROSS APPLY OPENJSON (x.Value)
WITH (CaptionName NVARCHAR(100),
FieldName NVARCHAR(100),
FieldType NVARCHAR(15),
FieldValue NVARCHAR(50)) AS P
WHERE a.ssn = '000-00-0000'
and p.CaptionName = 'txtEname'
This is similar to my original answer (see below). However:
We now have two CROSS APPLY statements. The first is used to split the JSON array into its elements, so we get a key (the index) and a value (the JSON object as a string), as documented here: https://learn.microsoft.com/en-us/sql/t-sql/functions/openjson-transact-sql?view=sql-server-2017#path
The second does what your original CROSS APPLY did, only acting on the single array element.
We use the [key] returned by the first CROSS APPLY to target the item in the array that we wish to update in our JSON_MODIFY statement.
NB: If it's possible for your JSON array to contain multiple objects that need updating, the best solution I can think of is to put the above statement into a loop, since one UPDATE will only update one index in a given JSON document. Here's an example: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=120d2ac7dd3a024e5e503a5f64b0089e
declare @doWhileTrueFlag bit = 1
while (@doWhileTrueFlag = 1)
begin
update a
set details = JSON_MODIFY(details, '$[' + x.[key] + '].FieldValue', ename)
from dbo.tblUissAssignments a
CROSS APPLY OPENJSON (details, '$') x
CROSS APPLY OPENJSON (x.Value)
WITH (CaptionName NVARCHAR(100),
FieldName NVARCHAR(100),
FieldType NVARCHAR(15),
FieldValue NVARCHAR(50)) AS P
WHERE a.ssn = '000-00-0000'
and p.CaptionName = 'txtEname'
and p.FieldValue != ename --if it's already got the correct value, don't update it again
set @doWhileTrueFlag = case when @@ROWCOUNT > 0 then 1 else 0 end
end
ORIGINAL ANSWER
Try this: https://dbfiddle.uk/?rdbms=sqlserver_2017&fiddle=b7b4d075cac6cd46239561ddb992ac90
update a
set details = JSON_MODIFY(details, '$[0].FieldValue', ename)
from dbo.tblUissAssignments a
cross apply
OPENJSON (details)
WITH (CaptionName NVARCHAR(100),
FieldName NVARCHAR(100),
FieldType NVARCHAR(15),
FieldValue NVARCHAR(50)) AS P
where a.ssn = '000-00-0000'
and p.captionname = 'txtEname'
More info on the JSON_MODIFY method here: https://learn.microsoft.com/en-us/sql/t-sql/functions/json-modify-transact-sql?view=sql-server-2017
The subtle bit is that you're updating a JSON array containing a JSON object, not a single object. For that you have to include the index on the root element. See this post for some useful info on JSONPath if you're unfamiliar with it: https://support.smartbear.com/alertsite/docs/monitors/api/endpoint/jsonpath.html
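A minimal illustration of that difference (my own example, not from the original answer): the same JSON_MODIFY call needs the [0] index when the root element is an array:

DECLARE @arr nvarchar(max) = N'[{"FieldValue":""}]';
DECLARE @obj nvarchar(max) = N'{"FieldValue":""}';

SELECT JSON_MODIFY(@arr, '$[0].FieldValue', 'x') AS modified_array,   -- [{"FieldValue":"x"}]
       JSON_MODIFY(@obj, '$.FieldValue', 'x')    AS modified_object;  -- {"FieldValue":"x"}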
Regarding scenarios where there are multiple items in the array, ideally we'd use a filter expression, such as this:
update a
set details = JSON_MODIFY(details, '$[?(@.CaptionName == ''txtEname'')].FieldValue', ename)
from dbo.tblUissAssignments a
where a.ssn = '000-00-0000'
Sadly MS SQL doesn't yet support these (see this excellent post: https://modern-sql.com/blog/2017-06/whats-new-in-sql-2016)
As such, I think we need to apply a nasty hack. Two such approaches spring to mind:
Implement a loop to iterate through all matches
Convert from JSON to some other type, then convert back to JSON afterwards
I'll think on these / whether there's something cleaner, since neither sits comfortably at present...
If I understand your question correctly, one possible approach (if you use SQL Server 2017+) is to use OPENJSON() and string manipulation with STRING_AGG():
Table:
CREATE TABLE #Data (
id int,
ssn varchar(12),
ename varchar(40),
details nvarchar(max)
)
INSERT INTO #Data
(id, ssn, ename, details)
VALUES
(1, '000-00-0000', 'stackoverflow1', N'[{"CaptionName":"txtEname","FieldName":null,"FieldType":null,"FieldValue":""}, {"CaptionName":"txtEname","FieldName":null,"FieldType":null,"FieldValue":""}]'),
(2, '000-00-0000', 'stackoverflow2', N'[{"CaptionName":"txtEname","FieldName":null,"FieldType":null,"FieldValue":""}, {"CaptionName":"txtEname","FieldName":null,"FieldType":null,"FieldValue":""}]')
Statement:
SELECT
d.id, d.ssn, d.ename,
CONCAT(N'[', STRING_AGG(JSON_MODIFY(j.[value], '$.FieldValue', ename), ','), N']') AS details
FROM #Data d
CROSS APPLY OPENJSON (d.details) j
WHERE JSON_VALUE(j.[value], '$.CaptionName') = N'txtEname' AND (d.ssn = '000-00-0000')
GROUP BY d.id, d.ssn, d.ename
Output:
id  ssn          ename           details
1   000-00-0000  stackoverflow1  [{"CaptionName":"txtEname","FieldName":null,"FieldType":null,"FieldValue":"stackoverflow1"},{"CaptionName":"txtEname","FieldName":null,"FieldType":null,"FieldValue":"stackoverflow1"}]
2   000-00-0000  stackoverflow2  [{"CaptionName":"txtEname","FieldName":null,"FieldType":null,"FieldValue":"stackoverflow2"},{"CaptionName":"txtEname","FieldName":null,"FieldType":null,"FieldValue":"stackoverflow2"}]
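Note that the statement above only selects the rebuilt JSON. To persist it (my hedged addition, reusing the same temp table), the aggregate can be fed back through an UPDATE:

;WITH rebuilt AS (
    SELECT d.id,
           CONCAT(N'[', STRING_AGG(JSON_MODIFY(j.[value], '$.FieldValue', d.ename), ','), N']') AS details
    FROM #Data d
    CROSS APPLY OPENJSON (d.details) j
    WHERE JSON_VALUE(j.[value], '$.CaptionName') = N'txtEname' AND d.ssn = '000-00-0000'
    GROUP BY d.id, d.ssn, d.ename
)
UPDATE d
SET d.details = r.details
FROM #Data d
INNER JOIN rebuilt r ON r.id = d.id;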
For SQL Server 2016 you may use FOR XML PATH for string aggregation:
SELECT
d.id, d.ssn, d.ename,
CONCAT(N'[', STUFF(s.details, 1, 1, N''), N']') AS details
FROM #Data d
CROSS APPLY (
SELECT CONCAT(N',', JSON_MODIFY(j.[value], '$.FieldValue', ename))
FROM #Data
CROSS APPLY OPENJSON (details) j
WHERE
(JSON_VALUE(j.[value], '$.CaptionName') = N'txtEname') AND
(ssn = '000-00-0000') AND
(id = d.id) AND (d.ssn = ssn) AND (d.ename = ename)
FOR XML PATH('')
) s(details)
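One caveat with the FOR XML PATH('') trick (my note, not part of the original answer): characters such as & or < inside the JSON values would be entity-encoded into &amp; or &lt;. Appending , TYPE and reading the result back with .value() avoids that; the CROSS APPLY block would become:

CROSS APPLY (
    SELECT (
        SELECT CONCAT(N',', JSON_MODIFY(j.[value], '$.FieldValue', ename))
        FROM #Data
        CROSS APPLY OPENJSON (details) j
        WHERE
            (JSON_VALUE(j.[value], '$.CaptionName') = N'txtEname') AND
            (ssn = '000-00-0000') AND
            (id = d.id) AND (d.ssn = ssn) AND (d.ename = ename)
        FOR XML PATH(''), TYPE
    ).value('.', 'nvarchar(max)')
) s(details)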
I have data like this:
I want the query result to look like this:
Here is my code
SELECT
PML_CODE
,PML_NAME_ENG
,(
SELECT
PML_ID
,PML_NO
,PML_CODE
,PML_NAME_ENG
,PML_FORMULA
FROM DSP.PARAMET_LIST AS A WITH(NOLOCK)
WHERE A.PML_ID = B.PML_ID
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
) AS BR_OBJECT
FROM DSP.PARAMET_LIST AS B WITH(NOLOCK)
My code works for what I want, but is there a better, faster way to write this query?
Next time, please do not post pictures. Rather, try to create some DDL, fill it with sample data, and state your own attempts and the expected output. This makes it easier for us to understand and answer your question.
You can try it like this:
DECLARE @tbl TABLE(PML_ID BIGINT, PML_NO INT, PML_CODE VARCHAR(10), PML_NAME_ENG VARCHAR(10), PML_FORMULA VARCHAR(10));
INSERT INTO @tbl VALUES
(2017102600050,1,'KHR','Riel','01')
,(2017102600051,2,'USD','Dollar','02')
,(2017102600052,3,'THB','Bath','05')
SELECT
PML_CODE
,PML_NAME_ENG
,BR_OBJECT
FROM @tbl
CROSS APPLY(
SELECT
(
SELECT
PML_ID
,PML_NO
,PML_CODE
,PML_NAME_ENG
,PML_FORMULA
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
)) AS A(BR_OBJECT);
The big difference from your own approach is that I use a CROSS APPLY over the columns we already have instead of calling a correlated sub-query.
You can just concatenate the values. Be sure to cast the integers and to handle the NULL values. For example, if there is a NULL value for a column, there can be two cases: ignore the property, or add the property with a null value, right?
For SQL Server 2016 SP1 and later you can use FOR JSON. Basically, you should end up with something like this:
DECLARE @DataSource TABLE
(
[PML_ID] VARCHAR(64)
,[PML_NO] INT
,[PML_CODE] VARCHAR(3)
,[PML_NAME_ENG] NVARCHAR(32)
,[PML_FORMULA] VARCHAR(2)
);
INSERT INTO @DataSource ([PML_ID], [PML_NO], [PML_CODE], [PML_NAME_ENG], [PML_FORMULA])
VALUES ('201710260000000050', 1, 'KHR', 'Riel', '01')
,('201710260000000051', 2, 'USD', 'Dollar', '02')
,('201710260000000052', 3, 'THB', 'Bath', '05');
SELECT [PML_CODE]
,[PML_NAME_ENG]
,'{"PML_ID":'+ [PML_ID] +',"PML_NO":'+ CAST([PML_NO] AS VARCHAR(12)) +',"PML_CODE":'+ [PML_CODE] +',"PML_NAME_ENG":'+ [PML_NAME_ENG] +',"PML_FORMULA":'+ [PML_FORMULA] +'}' AS [BR_OBJECT]
FROM #DataSource;
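One thing to watch with hand-built JSON like this (my note): a value containing quotes or backslashes would produce invalid JSON. On SQL Server 2016 and later, STRING_ESCAPE() can guard against that, e.g.:

SELECT '{"PML_NAME_ENG":"' + STRING_ESCAPE([PML_NAME_ENG], 'json') + '"}'
FROM @DataSource;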
-- SQL Server 2016 SP1 and later
SELECT DS1.[PML_CODE]
,DS1.[PML_NAME_ENG]
,DS.[BR_OBJECT]
FROM @DataSource DS1
CROSS APPLY
(
SELECT *
FROM @DataSource DS2
WHERE DS1.[PML_CODE] = DS2.[PML_CODE]
AND DS1.[PML_NAME_ENG] = DS2.[PML_NAME_ENG]
FOR JSON AUTO
) DS ([BR_OBJECT]);
I am using a Greenplum 5.* database, which is based on Postgres 8.4.
I am using the row_to_json and array_to_json functions to create JSON output, but this ends up producing keys with null values in the JSON. Recent Postgres versions have the json_strip_nulls function to remove keys with null values.
I need to import the generated JSON files into MongoDB, but mongoimport also doesn't have an option to ignore null keys in the JSON.
One way I tried is to create the JSON file with nulls and then use sed to remove the null fields from the JSON file.
sed -i 's/\(\(,*\)"[a-z_]*[0-9]*":null\(,*\)\)*/\3/g' output.json
But I am looking for a way to do it in the database itself, as that will be faster. Any suggestions on how to reproduce the json_strip_nulls behavior in Greenplum without affecting query performance?
I've had the same issue in GP 5.17 on pg8.3 and have had success with this regex to remove the null-valued key pairs. I use this in the initial insert to a json column, but you could adapt it as needed:
select
    col5,
    col6,
    regexp_replace(regexp_replace(
        (SELECT row_to_json(j) FROM
            (SELECT col1, col2, col3, col4) AS j
        )::text,
        '(?!{|,)("[^"]+":null[,]*)', '', 'g'), '(,})$', '}')::json
    AS nvp_json
from foo
Working from the inside out: the result of the row_to_json constructor is first cast to text, then the inner regexp_replace removes any "name":null values, the outer regexp_replace trims any hanging commas from the end, and finally the whole thing is cast back to json.
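To see the two regexp_replace calls in isolation (a hedged test snippet reusing the same expressions):

SELECT regexp_replace(regexp_replace(
           '{"col1":1,"col2":null,"col3":"x","col4":null}',
           '(?!{|,)("[^"]+":null[,]*)', '', 'g'),
       '(,})$', '}');
-- expected: {"col1":1,"col3":"x"}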
I solved this problem using a PL/Python function. This generic function can be used to remove null- and empty-valued keys from any JSON.
CREATE OR REPLACE FUNCTION json_strip_null(json_with_nulls json)
RETURNS text
AS $$
import json

def clean_empty(d):
    if not isinstance(d, (dict, list)):
        return d
    if isinstance(d, list):
        return [v for v in (clean_empty(v) for v in d) if v not in (None, '')]
    return {k: v for k, v in ((k, clean_empty(v)) for k, v in d.items()) if v not in (None, '')}

json_to_dict = json.loads(json_with_nulls)
json_without_nulls = clean_empty(json_to_dict)
return json.dumps(json_without_nulls, separators=(',', ':'))
$$ LANGUAGE plpythonu;
This function can be used as:
SELECT json_strip_null(row_to_json(t))
FROM table t;
You can use COALESCE to replace the nulls with an empty string or another value.
https://www.postgresql.org/docs/8.3/functions-conditional.html
The COALESCE function returns the first of its arguments that is not null. Null is returned only if all arguments are null. It is often used to substitute a default value for null values when data is retrieved for display, for example:
SELECT COALESCE(description, short_description, '(none)') ...
This returns description if it is not null, otherwise short_description if it is not null, otherwise (none).
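Applied to the row_to_json scenario, that might look like this (a sketch; the column names are placeholders):

SELECT row_to_json(j)
FROM (
    SELECT col1,
           COALESCE(col2, '') AS col2  -- NULL becomes an empty string instead
    FROM foo
) AS j;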