I'm using PostgreSQL 9.4.5. I'd like to update a jsonb column.
My table is structured this way:
CREATE TABLE my_table (
gid serial PRIMARY KEY,
"data" jsonb
);
JSON strings are like this:
{"files": [], "ident": {"id": 1, "country": null, "type ": "20"}}
The following SQL doesn't do the job (syntax error - SQL state = 42601):
UPDATE my_table SET "data" -> 'ident' -> 'country' = 'Belgium';
Is there a way to achieve that?
OK, there are two functions:
create or replace function set_jsonb_value(p_j jsonb, p_key text, p_value jsonb) returns jsonb as $$
select jsonb_object_agg(t.key, t.value) from (
  select
    key,
    case
      when jsonb_typeof(value) = 'object' then set_jsonb_value(value, p_key, p_value)
      when key = p_key then p_value
      else value
    end as value
  from jsonb_each(p_j)) as t;
$$ language sql immutable;
The first one just changes the value of an existing key regardless of the key path:
postgres=# select set_jsonb_value(
'{"files": [], "country": null, "ident": {"id": 1, "country": null, "type ": "20"}}',
'country',
'"foo"');
set_jsonb_value
--------------------------------------------------------------------------------------
{"files": [], "ident": {"id": 1, "type ": "20", "country": "foo"}, "country": "foo"}
(1 row)
create or replace function set_jsonb_value(p_j jsonb, p_path text[], p_value jsonb) returns jsonb as $$
select jsonb_object_agg(t.key, t.value) from (
  select
    key,
    case
      when jsonb_typeof(value) = 'object' then set_jsonb_value(value, p_path[2:1000], p_value)
      when key = p_path[1] then p_value
      else value
    end as value
  from jsonb_each(p_j)
  union all
  select
    p_path[1],
    case
      when array_length(p_path, 1) = 1 then p_value
      else set_jsonb_value('{}', p_path[2:1000], p_value)
    end
  where not p_j ? p_path[1]) as t;
$$ language sql immutable;
The second one changes the value of the existing key using the specified path, or creates it if the path does not exist:
postgres=# select set_jsonb_value(
'{"files": [], "country": null, "ident": {"id": 1, "type ": "20"}}',
'{ident,country}'::text[],
'"foo"');
set_jsonb_value
-------------------------------------------------------------------------------------
{"files": [], "ident": {"id": 1, "type ": "20", "country": "foo"}, "country": null}
(1 row)
postgres=# select set_jsonb_value(
'{"files": [], "country": null, "ident": {"id": 1, "type ": "20"}}',
'{ident,foo,bar,country}'::text[],
'"foo"');
set_jsonb_value
-------------------------------------------------------------------------------------------------------
{"files": [], "ident": {"id": 1, "foo": {"bar": {"country": "foo"}}, "type ": "20"}, "country": null}
(1 row)
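To apply the second function to the original question, the UPDATE would look roughly like this (just a sketch, reusing the my_table / "data" names from the question; the WHERE clause is only an example):

UPDATE my_table
SET "data" = set_jsonb_value("data", '{ident,country}'::text[], '"Belgium"'::jsonb)
WHERE gid = 1;

Note that the new value has to be passed as jsonb, hence the double-quoted '"Belgium"'.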
Hope it helps someone who uses PostgreSQL < 9.5.
Disclaimer: Tested on PostgreSQL 9.5
In PG 9.4 you are out of luck with "easy" solutions like jsonb_set() (introduced in 9.5). Your only option is to unpack the JSON object, make the change and re-build the object. That sounds cumbersome, and it is: JSON is horrible to manipulate, no matter how advanced or elaborate the built-in functions are.
CREATE TYPE data_ident AS (id integer, country text, "type" integer);
UPDATE my_table
SET "data" = json_build_object('files', "data"->'files', 'ident', ident.j)::jsonb
FROM (
SELECT gid, json_build_object('id', j.id, 'country', 'Belgium', 'type', j."type") AS j
FROM my_table
JOIN LATERAL jsonb_populate_record(null::data_ident, "data"->'ident') j ON true) ident
WHERE my_table.gid = ident.gid;
In the SELECT clause "data"->'ident' is unpacked into a record (for which you need to CREATE TYPE a structure). Then it is built right back into a JSON object with the new country name. In the UPDATE that "ident" object is re-joined with the "files" object and the whole thing cast to a jsonb.
A pure thing of beauty -- just so long as speed is not your thing...
My previous solution relied on 9.5 functionality.
I would recommend instead either going with abelisto's solutions below or using pl/perlu, plpythonu, or plv8js to write json mutators in a language that has better support for them.
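For readers who are already on PostgreSQL 9.5 or later, the built-in jsonb_set() covers the original question directly; a minimal sketch using the table from the question:

UPDATE my_table
SET "data" = jsonb_set("data", '{ident,country}', '"Belgium"'::jsonb, true)
WHERE gid = 1;

The last argument (create_missing = true) makes jsonb_set() add the country key if it is not already there.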
I have a table which has ID and JSON columns. ID is an auto-incrementing column. Here is my sample data.
Row 1
1 | {
"HeaderInfo":
{
"Name": "ABC",
"Period": "2010",
"Code": "123"
},
"HData":
[
{ "ID1": "1", "Value": "$1.00", "Code": "A", "Desc": "asdf" },
{ "ID1": "2", "Value": "$1.00", "Code": "B", "Desc": "pqr" },
{ "ID1": "3", "Value": "$1.00", "Code": "C", "Desc": "xyz" }
]
}
Row 2
2 | {
"HeaderInfo":
{
"Name": "ABC",
"Period": "2010",
"Code": "123"
},
"HData":
[
{ "ID1": "76", "Value": "$1.00", "Code": "X", "Desc": "asdf" },
{ "ID1": "25", "Value": "$1.00", "Code": "Y", "Desc": "pqr" },
{ "ID1": "52", "Value": "$1.00", "Code": "Z", "Desc": "lmno" },
{ "ID1": "52", "Value": "$1.00", "Code": "B", "Desc": "xyz" }
]
}
and it keeps going. The items inside the HData section are unbounded; there can be any number of them.
On this JSON I need to update Value to "$2.00" where "Code" is "B". I should be able to do this in two scenarios. My parameter inputs are @id=2, @code="B", @value="$2.00". @id will sometimes be null. So,
If @id is null then the update statement should go through all records and update Value="$2.00" for every item inside the HData section which has Code="B".
If @id = 2 then the update statement should update only the row whose Id is 2, for the items which have Code="B".
Appreciate your help in advance.
Thanks
See DB Fiddle for an example.
declare @id bigint = 2
      , @code nvarchar(8) = 'B'
      , @value nvarchar(8) = '$2.00'

update a
set json = JSON_MODIFY(json, '$.HData[' + HData.[key] + '].Value', @value)
from so75416277 a
CROSS APPLY OPENJSON (json, '$.HData') HData
CROSS APPLY OPENJSON (HData.Value, '$')
WITH (
    ID1 bigint
    , Value nvarchar(8)
    , Code nvarchar(8)
    , [Desc] nvarchar(8)
) as HDataItem
WHERE id = @id
AND HDataItem.Code = @code
The update/set statement says we want to replace the value of the json column with a newly generated value; it functions exactly the same as it would in any other context, e.g. update a set json = 'something' from so75416277 a where a.column = 'some condition'.
The JSON_MODIFY does the manipulation of our json.
The first input is the original json field's value
The second is the path to the value to be updated.
The third is the new value
'$.HData[' + HData.[key] + '].Value' says we go from our JSON's root ($), find the HData field, filter the array of values for the one we're after (i.e. key here is the array item's index), then use the Value field of this item.
key is a special term; where we don't have a WITH block accompanying our OPENJSON statement we get back 3 items: key, value and type; key being the identifier, value being the content, and type saying what sort of content that is.
CROSS APPLY allows us to perform logic on a value from a single DB row to return potentially multiple rows; e.g. like a join but against its own contents.
OPENJSON (json, '$.HData') HData says to extract the HData field from our json column, and return this with the table alias HData; as we've not included a WITH, this HData alias has 3 fields: key, value, and type, as mentioned above (this is the same key we used in our JSON_MODIFY).
The next OPENJSON works on HData.Value; i.e. the contents of the array item under HData. Here we take the object from this array (i.e. that's the root from the current context; hence $), and use WITH to parse it into a specific structure; i.e. ID1, Value, Code, and Desc (brackets around Desc as it's a keyword). We give this the alias HDataItem.
Finally we filter for the bit of the data we're interested in; i.e. on id to get the row we want to update, then on HDataItem.Code so we only update those array items with code 'B'.
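To make the key / value / type behaviour concrete, here is a small standalone query (a sketch, assuming the so75416277 table and json column used above):

-- each HData array item comes back as one row; [key] is the array index as text
SELECT HData.[key], HData.[value], HData.[type]
FROM so75416277 a
CROSS APPLY OPENJSON(a.json, '$.HData') AS HData
WHERE a.id = 2;

That [key] value (0, 1, 2, ...) is exactly what gets concatenated into the '$.HData[...].Value' path for JSON_MODIFY.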
Try the below SP.
CREATE PROC usp_update_75416277
(
    @id Int = null,
    @code Varchar(15),
    @value Varchar(15)
)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @SQLStr Varchar(MAX)=''

    ;WITH CTE
    AS
    (   SELECT ROW_NUMBER() OVER (PARTITION BY YourTable.Json ORDER BY (SELECT NULL)) RowNo, *
        FROM YourTable
        CROSS APPLY OPENJSON(YourTable.Json,'$.HData')
        WITH (
            ID1 Int '$.ID1',
            Value Varchar(20) '$.Value',
            Code Varchar(20) '$.Code',
            [Desc] Varchar(20) '$.Desc'
        ) HData
        WHERE (@id IS NULL OR ID = @id)
    )
    SELECT @SQLStr=@SQLStr+' UPDATE YourTable
    SET [JSON]=JSON_MODIFY(YourTable.Json,
    ''$.HData['+CONVERT(VARCHAR(15),RowNo-1)+'].Value'',
    '''+CONVERT(VARCHAR(MAX),@value)+''') '+
    'WHERE ID ='+CONVERT(Varchar(15),CTE.ID) +' '
    FROM CTE
    WHERE Code=@code
    AND (@id IS NULL OR ID =@id)

    EXEC( @SQLStr)
END
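Assuming the procedure is created against your real table (it uses YourTable above), it can then be called with the parameters described in the question, for example:

-- update only the row whose ID is 2
EXEC usp_update_75416277 @id = 2, @code = 'B', @value = '$2.00';

-- update every row that has an HData item with Code = 'B'
EXEC usp_update_75416277 @id = NULL, @code = 'B', @value = '$2.00';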
I have a JSONB array below
[
{
"name": "test",
"age": "21",
"phone": "6589",
"town": "54"
},
{
"name": "test12",
"age": "67",
"phone": "6546",
"town": "54"
},
{
"name": "test123",
"age": "21",
"phone": "6589",
"town": "54"
},
{
"name": "test125",
"age": "67",
"phone": "6546",
"town": "54"
}
]
Now I want to delete the objects whose name is test or test125. How do I delete single or multiple values in a JSONB array?
An update statement including a subquery, which eliminates the unwanted elements with the NOT IN operator and aggregates the rest by using the jsonb_agg() function, would do the job:
Choose this:
1. UPDATE tab
SET jsdata = t.js_new
FROM
(
SELECT jsonb_agg( (jsdata ->> ( idx-1 )::int)::jsonb ) AS js_new
FROM tab
CROSS JOIN jsonb_array_elements(jsdata)
WITH ORDINALITY arr(j,idx)
WHERE j->>'name' NOT IN ('test','test125')
) t
or this one :
2. WITH t AS (
SELECT jsonb_agg( (jsdata ->> ( idx-1 )::int)::jsonb ) AS js_new
FROM tab
CROSS JOIN jsonb_array_elements(jsdata)
WITH ORDINALITY arr(j,idx)
WHERE j->>'name' NOT IN ('test','test125')
)
UPDATE tab
SET jsdata = js_new
FROM t
Demo
If you have Postgres 12, you can use the jsonb_path_query_array function to filter the jsonb. Here is a sample for your question:
with t (j) as ( values ('[
{"name":"test","age":"21","phone":"6589","town":"54"},
{"name":"test12","age":"67","phone":"6546","town":"54"},
{"name":"test123","age":"21","phone":"6589","town":"54"},
{"name":"test125","age":"67","phone":"6546","town":"54"}
]'::jsonb) )
select jsonb_path_query_array(j,
'$[*] ? (#.name != "test" && #.name != "test125")')
from t;
more info on https://www.postgresql.org/docs/12/functions-json.html
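Applied as an UPDATE (a sketch, assuming the tab table and jsdata column used in the other answer):

UPDATE tab
SET jsdata = jsonb_path_query_array(
    jsdata,
    '$[*] ? (@.name != "test" && @.name != "test125")');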
I would create a function that does that:
create function remove_array_elements(p_data jsonb, p_key text, p_value text[])
returns jsonb
as
$$
select jsonb_agg(e order by idx)
from jsonb_array_elements(p_data) with ordinality as t(e,idx)
where t.e ->> p_key <> ALL (p_value) ;
$$
language sql
immutable;
Then you can use it like this:
update the_table
set the_column = remove_array_elements(the_column, 'name', array['test', 'test125'])
where id = ...;
Online example
We are exploring the JSON feature in SQL Server, and for one of the scenarios we want to come up with SQL which can return JSON like the below:
[
{
"field": {
"uuid": "uuid-field-1"
},
"value": {
"uuid": "uuid-value" //value is an object
}
},
{
"field": {
"uuid": "uuid-field-2"
},
"value": "1". //value is simple integer
}
... more rows
]
The value field can be a simple integer/string or a nested object.
We are able to come up with a table which looks like below:
field.uuid   | value.uuid | value |
-------------|------------|-------|
uuid-field-1 | value-uuid | null  |
uuid-field-2 | null       | 1     |
... more rows
But as soon as we apply FOR JSON PATH, it fails with:
Property 'value' cannot be generated in JSON output due to a conflict with another column name or alias. Use different names and aliases for each column in SELECT list.
Is it possible to somehow generate this? The value will either be in value.uuid or in value, never both.
Note: We are open to the possibility of converting each row to an individual JSON object and adding all of them to an array.
select
json_query((select v.[field.uuid] as 'uuid' for json path, without_array_wrapper)) as 'field',
value as 'value',
json_query((select v.[value.uuid] as 'uuid' where v.[value.uuid] is not null for json path, without_array_wrapper)) as 'value'
from
(
values
('uuid-field-1', 'value-uuid1', null),
('uuid-field-2', null, 2),
('uuid-field-3', 'value-uuid3', null),
('uuid-field-4', null, 4)
) as v([field.uuid], [value.uuid], value)
for json auto;--, without_array_wrapper;
The reason for this error is that (as is mentioned in the documentation) ... FOR JSON PATH clause uses the column alias or column name to determine the key name in the JSON output. If an alias contains dots, the PATH option creates nested objects. In your case value.uuid and value both generate a key with name value.
I can suggest an approach (probably not the best one), which uses JSON_MODIFY() to generate the expected JSON from an empty JSON array:
Table:
CREATE TABLE Data (
[field.uuid] varchar(100),
[value.uuid] varchar(100),
[value] int
)
INSERT INTO Data
([field.uuid], [value.uuid], [value])
VALUES
('uuid-field-1', 'value-uuid', NULL),
('uuid-field-2', NULL, 1),
('uuid-field-3', NULL, 3),
('uuid-field-4', NULL, 4)
Statement:
DECLARE @json nvarchar(max) = N'[]'

SELECT @json = JSON_MODIFY(
   @json,
   'append $',
   JSON_QUERY(
      CASE
         WHEN [value.uuid] IS NOT NULL THEN (SELECT d.[field.uuid], [value.uuid] FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
         WHEN [value] IS NOT NULL THEN (SELECT d.[field.uuid], [value] FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
      END
   )
)
FROM Data d

SELECT @json
Result:
[
{
"field":{
"uuid":"uuid-field-1"
},
"value":{
"uuid":"value-uuid"
}
},
{
"field":{
"uuid":"uuid-field-2"
},
"value":1
},
{
"field":{
"uuid":"uuid-field-3"
},
"value":3
},
{
"field":{
"uuid":"uuid-field-4"
},
"value":4
}
]
I have a PostgreSQL table with unique key/value pairs, which were originally in a JSON format, but have been normalized and melted:
key | value
-----------------------------
name | Bob
address.city | Vancouver
address.country | Canada
I need to turn this into a hierarchical JSON:
{
"name": "Bob",
"address": {
"city": "Vancouver",
"country": "Canada"
}
}
Is there a way to do this easily within SQL?
jsonb_set() almost does everything for you, but unfortunately it can only create missing leaves (i.e. missing last keys on a path), not whole missing branches. To overcome this, here is a modified version of it, which can set values on any missing level:
create function jsonb_set_rec(jsonb, jsonb, text[])
returns jsonb
language sql
as $$
select case
when array_length($3, 1) > 1 and ($1 #> $3[:array_upper($3, 1) - 1]) is null
then jsonb_set_rec($1, jsonb_build_object($3[array_upper($3, 1)], $2), $3[:array_upper($3, 1) - 1])
else jsonb_set($1, $3, $2, true)
end
$$;
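A quick sanity check of the function on its own (the argument order is target, value, path, matching the calls below):

select jsonb_set_rec('{"name": "Bob"}', '"Vancouver"', '{address,city}');
-- returns {"name": "Bob", "address": {"city": "Vancouver"}}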
Now you only need to apply this function one-by-one to your rows, starting with an empty json object: {}. You can do this either with a recursive CTE:
with recursive props as (
(select distinct on (grp)
pk, grp, jsonb_set_rec('{}', to_jsonb(value), string_to_array(key, '.')) json_object
from eav_tbl
order by grp, pk)
union all
(select distinct on (grp)
eav_tbl.pk, grp, jsonb_set_rec(json_object, to_jsonb(value), string_to_array(key, '.'))
from props
join eav_tbl using (grp)
where eav_tbl.pk > props.pk
order by grp, eav_tbl.pk)
)
select distinct on (grp)
grp, json_object
from props
order by grp, pk desc;
Or, with a custom aggregate defined as:
create aggregate jsonb_set_agg(jsonb, text[]) (
sfunc = jsonb_set_rec,
stype = jsonb,
initcond = '{}'
);
your query becomes as simple as:
select grp, jsonb_set_agg(to_jsonb(value), string_to_array(key, '.'))
from eav_tbl
group by grp;
https://rextester.com/TULNU73750
There are no ready-to-use tools for this. The following function generates a hierarchical json object based on a path:
create or replace function jsonb_build_object_from_path(path text, value text)
returns jsonb language plpgsql as $$
declare
obj jsonb;
keys text[] := string_to_array(path, '.');
level int := cardinality(keys);
begin
obj := jsonb_build_object(keys[level], value);
while level > 1 loop
level := level - 1;
obj := jsonb_build_object(keys[level], obj);
end loop;
return obj;
end $$;
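For example, a single dotted path produces one nested object (a quick check of the function by itself):

select jsonb_build_object_from_path('address.city', 'Vancouver');
-- returns {"address": {"city": "Vancouver"}}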
You also need the aggregate function jsonb_merge_agg(jsonb) described in this answer. The query:
with my_table (path, value) as (
values
('name', 'Bob'),
('address.city', 'Vancouver'),
('address.country', 'Canada'),
('first.second.third', 'value')
)
select jsonb_merge_agg(jsonb_build_object_from_path(path, value))
from my_table;
gives this object:
{
"name": "Bob",
"first":
{
"second":
{
"third": "value"
}
},
"address":
{
"city": "Vancouver",
"country": "Canada"
}
}
The function does not recognize JSON arrays.
I can't really think of something simpler, although I think there should be an easier way.
I assume there is some additional column that can be used to bring the keys that belong to one "person" together; I used p_id for that in my example.
select p_id,
jsonb_object_agg(k, case level when 1 then v -> k else v end)
from (
select p_id,
elements[1] k,
jsonb_object_agg(case cardinality(elements) when 1 then ky else elements[2] end, value) v,
max(cardinality(elements)) as level
from (
select p_id,
"key" as ky,
string_to_array("key", '.') as elements, value
from kv
) t1
group by p_id, k
) t2
group by p_id;
The innermost query just converts the dot notation to an array for easier access later.
The next level then builds JSON objects depending on the "key". For the "single level" keys, it just uses key/value, for the others it uses the second element + the value and then aggregates those that belong together.
The second query level returns the following:
p_id | k       | v                                           | level
-----+---------+---------------------------------------------+------
   1 | address | {"city": "Vancouver", "country": "Canada"}  | 2
   1 | name    | {"name": "Bob"}                             | 1
   2 | address | {"city": "Munich", "country": "Germany"}    | 2
   2 | name    | {"name": "John"}                            | 1
The aggregation done in the second step leaves one level too many for the "single element" keys, and that's what we need level for.
If that distinction wasn't made, the final aggregation would return {"name": {"name": "Bob"}, "address": {"city": "Vancouver", "country": "Canada"}} instead of the wanted: {"name": "Bob", "address": {"city": "Vancouver", "country": "Canada"}}.
The expression case level when 1 then v -> k else v end essentially turns {"name": "Bob"} back to "Bob".
So, with the following sample data:
create table kv (p_id integer, "key" text, value text);
insert into kv
values
(1, 'name','Bob'),
(1, 'address.city','Vancouver'),
(1, 'address.country','Canada'),
(2, 'name','John'),
(2, 'address.city','Munich'),
(2, 'address.country','Germany');
the query returns:
p_id | jsonb_object_agg
-----+-----------------------------------------------------------------------
1 | {"name": "Bob", "address": {"city": "Vancouver", "country": "Canada"}}
2 | {"name": "John", "address": {"city": "Munich", "country": "Germany"}}
Online example: https://rextester.com/SJOTCD7977
create table kv (key text, value text);
insert into kv
values
('name','Bob'),
('address.city','Vancouver'),
('address.country','Canada'),
('name','John'),
('address.city','Munich'),
('address.country','Germany');
create view v_kv as select row_number() over() as nRec, key, value from kv;
create view v_datos as
select k1.nrec, k1.value as name, k2.value as address_city, k3.value as address_country
from v_kv k1 inner join v_kv k2 on (k1.nrec + 1 = k2.nrec)
inner join v_kv k3 on ((k1.nrec + 2= k3.nrec) and (k2.nrec + 1 = k3.nrec))
where mod(k1.nrec, 3) = 1;
select json_agg(json_build_object('name',name, 'address', json_build_object('city',address_city, 'country', address_country)))
from v_datos;
I'm working on a Web project where the client application communicates with the DB via JSONs.
The initial implementation took place with SQL Server 2012 (NO JSON support and hence we implemented a Stored Function that handled the parsing) and now we are moving to 2016 (YES JSON support).
So far, we are reducing processing time by a significant factor (in some cases, over 200 times faster!).
There are some interactions that contain arrays that need to be converted into tables. To achieve that, the OPENJSON function does ALMOST what we need.
In some of these (array-based) cases, records within the arrays have one or more fields that are also OBJECTS (in this particular case, also arrays), for instance:
[{
"Formal_Round_Method": "Floor",
"Public_Round_Method": "Closest",
"Formal_Precision": "3",
"Public_Precision": "3",
"Formal_Significant_Digits": "3",
"Public_Significant_Digits": "3",
"General_Comment": [{
"Timestamp": "2018-07-16 09:19",
"From": "1",
"Type": "Routine_Report",
"Body": "[To + Media + What]: Comment 1",
"$$hashKey": "object:1848"
}, {
"Timestamp": "2018-07-16 09:19",
"From": "1",
"Type": "User_Comment",
"Body": "[]: Comment 2",
"$$hashKey": "object:1857"
}, {
"Timestamp": "2018-07-16 09:19",
"From": "1",
"Type": "Routine_Report",
"Body": "[To + Media + What]: Comment 3",
"$$hashKey": "object:1862"
}]
}, {
"Formal_Round_Method": "Floor",
"Public_Round_Method": "Closest",
"Formal_Precision": "3",
"Public_Precision": "3",
"Formal_Significant_Digits": "3",
"Public_Significant_Digits": "3",
"General_Comment": []
}]
Here, General_Comment is also an array.
When running the command:
SELECT *
FROM OPENJSON(@_l_Table_Data)
WITH ( Formal_Round_Method NVARCHAR(16) '$.Formal_Round_Method' ,
Public_Round_Method NVARCHAR(16) '$.Public_Round_Method' ,
Formal_Precision INT '$.Formal_Precision' ,
Public_Precision INT '$.Public_Precision' ,
Formal_Significant_Digits INT '$.Formal_Significant_Digits' ,
Public_Significant_Digits INT '$.Public_Significant_Digits' ,
General_Comment NVARCHAR(4000) '$.General_Comment'
) ;
[@_l_Table_Data is a variable holding the JSON string]
we are getting the column General_Comment = NULL even though there is data in there (at least in the first element of the array).
I guess that I should be using a different syntax for those columns that may contain OBJECTS and not SIMPLE VALUES, but I have no idea what that syntax should be.
I found a Microsoft page that actually solves the problem.
Here is how the query should look:
SELECT *
FROM OPENJSON(@_l_Table_Data)
WITH ( Formal_Round_Method NVARCHAR(16) '$.Formal_Round_Method' ,
Public_Round_Method NVARCHAR(16) '$.Public_Round_Method' ,
Formal_Precision INT '$.Formal_Precision' ,
Public_Precision INT '$.Public_Precision' ,
Formal_Significant_Digits INT '$.Formal_Significant_Digits' ,
Public_Significant_Digits INT '$.Public_Significant_Digits' ,
General_Comment NVARCHAR(MAX) '$.General_Comment' AS JSON
) ;
So, you need to add AS JSON at the end of the column definition and (God knows why) the type MUST be NVARCHAR(MAX).
Very simple indeed!!!
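Building on that, if the comments themselves also need to become rows, the AS JSON column can be fed into a second OPENJSON. This is only a sketch of the idea, not part of the original answer:

SELECT t.Formal_Round_Method, c.[Timestamp], c.[Type], c.Body
FROM OPENJSON(@_l_Table_Data)
     WITH ( Formal_Round_Method NVARCHAR(16) '$.Formal_Round_Method' ,
            General_Comment NVARCHAR(MAX) '$.General_Comment' AS JSON
          ) t
CROSS APPLY OPENJSON(t.General_Comment)
     WITH ( [Timestamp] NVARCHAR(32) '$.Timestamp' ,
            [Type] NVARCHAR(32) '$.Type' ,
            Body NVARCHAR(4000) '$.Body'
          ) c;

Use OUTER APPLY instead of CROSS APPLY if you want to keep rows whose General_Comment array is empty.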
Create Function ParseJson:
Create or Alter FUNCTION [dbo].[ParseJson] (@JSON NVARCHAR(MAX))
RETURNS @Unwrapped TABLE
(
[id] INT IDENTITY, --just used to get a unique reference to each json item
[level] INT, --the hierarchy level
[key] NVARCHAR(100), --the key or name of the item
[Value] NVARCHAR(MAX),--the value, if it is a null, boolean, int, numeric or string
type INT, --0 to 5, the JSON type: null, string, number, boolean, array or object
SQLDatatype sysname, --whatever the datatype can be parsed to
parent INT, --the ID of the parent
[path] NVARCHAR(4000) --the path as used by OpenJSON
)
AS begin
INSERT INTO @Unwrapped ([level], [key], Value, type, SQLDatatype, parent,
[path])
VALUES
(0, --the level
NULL, --the key,
@json, --the value,
CASE WHEN Left(ltrim(@json),1)='[' THEN 4 ELSE 5 END, --the type
'json', --SQLDataType,
0 , --no parent
'$' --base path
);
DECLARE @ii INT = 0,--the level
@Rowcount INT = -1; --the number of rows from the previous iteration
WHILE @Rowcount <> 0 --while we are still finding levels
BEGIN
INSERT INTO @Unwrapped ([level], [key], Value, type, SQLDatatype, parent,
[path])
SELECT [level] + 1 AS [level], new.[Key] AS [key],
new.[Value] AS [value], new.[Type] AS [type],
-- SQL Prompt formatting off
/* In order to determine the datatype of a JSON value, the best approach is to determine
the datatype that can be parsed. In JSON, an array of objects can contain attributes that aren't
consistent either in their name or value. */
CASE
WHEN new.Type = 0 THEN 'bit null'
WHEN new.[type] IN (1,2) then COALESCE(
CASE WHEN TRY_CONVERT(INT,new.[value]) IS NOT NULL THEN 'int' END,
CASE WHEN TRY_CONVERT(NUMERIC(14,4),new.[value]) IS NOT NULL THEN 'numeric' END,
CASE WHEN TRY_CONVERT(FLOAT,new.[value]) IS NOT NULL THEN 'float' END,
CASE WHEN TRY_CONVERT(MONEY,new.[value]) IS NOT NULL THEN 'money' END,
CASE WHEN TRY_CONVERT(DateTime,new.[value],126) IS NOT NULL THEN 'Datetime2' END,
CASE WHEN TRY_CONVERT(Datetime,new.[value],127) IS NOT NULL THEN 'Datetime2' END,
'nvarchar')
WHEN new.Type = 3 THEN 'bit'
WHEN new.Type = 5 THEN 'object' ELSE 'array' END AS SQLDatatype,
old.[id],
old.[path] + CASE WHEN old.type = 5 THEN '.' + new.[Key]
ELSE '[' + new.[Key] COLLATE DATABASE_DEFAULT + ']' END AS path
-- SQL Prompt formatting on
FROM #Unwrapped old
CROSS APPLY OpenJson(old.[Value]) new
WHERE old.[level] = @ii AND old.type IN (4, 5);
SELECT @Rowcount = @@ROWCOUNT;
SELECT @ii = @ii + 1;
END;
return
END
For Usage:
select * from dbo.ParseJson(@jsonString)
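For instance, against a small literal (the @jsonString variable name is just for illustration):

DECLARE @jsonString NVARCHAR(MAX) =
N'{"ident": {"id": 1, "country": "Belgium", "type": "20"}}';

SELECT [id], [level], [key], [Value], SQLDatatype, parent, [path]
FROM dbo.ParseJson(@jsonString);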