How to update a nested array in JSON in mssql - json

I am using MSSQL and one column contains JSON data. I want to update the part of that JSON which is an array, by passing the id.
{
"customerName":"mohan",
"custId":"e35273d0-c002-11e9-8188-a1525f580dfd",
"feeds":[
{
"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e",
"feedName":"ccsdcdscsdc",
"format":"Excel",
"sources":[
{
"sourceId":69042417,
"name":"TV 2 Livsstil"
},
{
"sourceId":69042419,
"name":"Turk Max"
}
]
},
{
"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e",
"feedName":"dfgdfgdfgdfgsdfg",
"format":"XmlTV",
"sources":[
{
"sourceId":69042417,
"name":"TV 2 Livsstil"
},
{
"sourceId":69042419,
"name":"Turk Max"
}
]
}
]
}
Suppose I pass a customerId and a feedId; it should replace the whole matching feed with the feed I have passed.
I tried the query below, but it did not help.
UPDATE
ExtractsConfiguration.dbo.Customers
SET
configJSON = JSON_MODIFY(configJSON,'$.feeds[]',{"feedName":"ccsdcdscsdc"})
WHERE
CustomerId = '9ee07040-c001-11e9-b29a-55eb3439cd7c'
AND json_query(configJSON,'$.feeds[].feedId'='57f221d0-c310-11e9-8af7-cf1cf42fc72e');

This, @mohan, is a tricky one, and I took it on as a challenge to myself. There is a way to update a nested JSON object's value like you're asking; however, it's not as straightforward as it seems.
Because you're working within an array, you need the array's index in order to update a nested value. In your case you don't know the index within the array; however, you do have a key value you can reference, in this case your feedName.
In order to update your value, you first need to "unpack" your JSON so that you can filter for a specific feedName, "ccsdcdscsdc" in your example.
Here is an example that you can run in SSMS that will get you moving in the right direction.
The first thing I created was a @Customers TABLE variable to mimic the data structure you showed in your example, and I inserted your sample data.
DECLARE @Customers TABLE ( CustomerId VARCHAR(50), configJSON VARCHAR(MAX) );
INSERT INTO @Customers ( CustomerID, configJSON ) VALUES ( '9ee07040-c001-11e9-b29a-55eb3439cd7c', '{"customerName":"mohan","custId":"e35273d0-c002-11e9-8188-a1525f580dfd","feeds":[{"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]},{"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e","feedName":"dfgdfgdfgdfgsdfg","format":"XmlTV","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]}]}' );
Running a SELECT against @Customers returns the following:
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| CustomerId | configJSON |
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 9ee07040-c001-11e9-b29a-55eb3439cd7c | {"customerName":"mohan","custId":"e35273d0-c002-11e9-8188-a1525f580dfd","feeds":[{"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]},{"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e","feedName":"dfgdfgdfgdfgsdfg","format":"XmlTV","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]}]} |
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Next, I matched your rules for the update: Update a nested JSON value that is restricted to a specific CustomerId (9ee07040-c001-11e9-b29a-55eb3439cd7c) and a feedName (ccsdcdscsdc).
Like I mentioned, we need to "unpack" the JSON first because we don't know the specific key (index) value that should be updated. The easiest way to accomplish both tasks (unpack/update) is to use a Common Table Expression (CTE).
So, here's how I did that:
;WITH Config_CTE AS (
SELECT * FROM @Customers AS Customer
CROSS APPLY OPENJSON( configJSON, '$.feeds' ) AS Config
WHERE
Customer.CustomerId = '9ee07040-c001-11e9-b29a-55eb3439cd7c'
AND JSON_VALUE( Config.value, '$.feedName' ) = 'ccsdcdscsdc'
)
UPDATE Config_CTE
SET configJSON = JSON_MODIFY( configJSON, '$.feeds[' + Config_CTE.[key] + '].format', 'MS Excel' );
The CTE allows us to "unpack" (I made this word up as it seemed fitting) the JSON contained in configJSON, which then allows us to apply a filter against the feedName.
AND JSON_VALUE( Config.value, '$.feedName' ) = 'ccsdcdscsdc'
You'll also note that we included the CustomerId rule:
Customer.CustomerId = '9ee07040-c001-11e9-b29a-55eb3439cd7c'
Both the CustomerId and feedName could easily be SQL variables.
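For instance, a minimal parameterised variant of the same statement could look like this (the variable names are my own):
DECLARE @CustomerId VARCHAR(50)  = '9ee07040-c001-11e9-b29a-55eb3439cd7c';
DECLARE @FeedName   VARCHAR(100) = 'ccsdcdscsdc';

;WITH Config_CTE AS (
    SELECT * FROM @Customers AS Customer
    CROSS APPLY OPENJSON( configJSON, '$.feeds' ) AS Config
    WHERE
        Customer.CustomerId = @CustomerId
        AND JSON_VALUE( Config.value, '$.feedName' ) = @FeedName
)
UPDATE Config_CTE
SET configJSON = JSON_MODIFY( configJSON, '$.feeds[' + Config_CTE.[key] + '].format', 'MS Excel' );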
So, what did this do? If we were to look at the Config_CTE result set ( by changing the UPDATE... to SELECT * FROM Config_CTE ) we would see:
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+
| CustomerId | configJSON | key | value | type |
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+
| 9ee07040-c001-11e9-b29a-55eb3439cd7c | {"customerName":"mohan","custId":"e35273d0-c002-11e9-8188-a1525f580dfd","feeds":[{"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]},{"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e","feedName":"dfgdfgdfgdfgsdfg","format":"XmlTV","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]}]} | 0 | {"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]} | 5 |
+--------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------+
There is a bunch of information here, but what we really care about is the "key" column as this contains the feed index ( in this case 0 ) that we want to update.
With that, I was able to complete the request and UPDATE format from "Excel" to "MS Excel" for the "feed" with the feedName of "ccsdcdscsdc".
This guy ( note the use of Config_CTE.[key] ):
UPDATE Config_CTE
SET configJSON = JSON_MODIFY( configJSON, '$.feeds[' + Config_CTE.[key] + '].format', 'MS Excel' );
Did it work? Let's look at the updated table's data.
+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| CustomerId | configJSON |
+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 9ee07040-c001-11e9-b29a-55eb3439cd7c | {"customerName":"mohan","custId":"e35273d0-c002-11e9-8188-a1525f580dfd","feeds":[{"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"MS Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]},{"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e","feedName":"dfgdfgdfgdfgsdfg","format":"XmlTV","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]}]} |
+--------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Here's the updated JSON "beautified" (pretty sure I didn't make up that one).
{
"customerName": "mohan",
"custId": "e35273d0-c002-11e9-8188-a1525f580dfd",
"feeds": [{
"feedId": "57f221d0-c310-11e9-8af7-cf1cf42fc72e",
"feedName": "ccsdcdscsdc",
"format": "MS Excel",
"sources": [{
"sourceId": 69042417,
"name": "TV 2 Livsstil"
}, {
"sourceId": 69042419,
"name": "Turk Max"
}]
}, {
"feedId": "59bbd360-c312-11e9-8af7-cf1cf42fc72e",
"feedName": "dfgdfgdfgdfgsdfg",
"format": "XmlTV",
"sources": [{
"sourceId": 69042417,
"name": "TV 2 Livsstil"
}, {
"sourceId": 69042419,
"name": "Turk Max"
}]
}]
}
Well, there you have it: format for feedName "ccsdcdscsdc" has been updated from "Excel" to "MS Excel". It was not clear to me exactly what you were trying to update, so I used format for my testing/example.
I hope this gets you moving in the right direction with your task. Happy coding!
Here's the complete example that can be run in SSMS:
-- CREATE A CUSTOMERS TABLE TO MIMIC SCHEMA --
DECLARE @Customers TABLE ( CustomerId VARCHAR(50), configJSON VARCHAR(MAX) );
INSERT INTO @Customers ( CustomerID, configJSON ) VALUES ( '9ee07040-c001-11e9-b29a-55eb3439cd7c', '{"customerName":"mohan","custId":"e35273d0-c002-11e9-8188-a1525f580dfd","feeds":[{"feedId":"57f221d0-c310-11e9-8af7-cf1cf42fc72e","feedName":"ccsdcdscsdc","format":"Excel","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]},{"feedId":"59bbd360-c312-11e9-8af7-cf1cf42fc72e","feedName":"dfgdfgdfgdfgsdfg","format":"XmlTV","sources":[{"sourceId":69042417,"name":"TV 2 Livsstil"},{"sourceId":69042419,"name":"Turk Max"}]}]}' );
-- SHOW CURRENT DATA --
SELECT * FROM @Customers;
-- UPDATE "format" FROM "Excel" to "MS Excel" FOR feedName: ccsdcdscsdc --
WITH Config_CTE AS (
SELECT * FROM @Customers AS Customer
CROSS APPLY OPENJSON( configJSON, '$.feeds' ) AS Config
WHERE
Customer.CustomerId = '9ee07040-c001-11e9-b29a-55eb3439cd7c'
AND JSON_VALUE( Config.value, '$.feedName' ) = 'ccsdcdscsdc'
)
UPDATE Config_CTE
SET configJSON = JSON_MODIFY( configJSON, '$.feeds[' + Config_CTE.[key] + '].format', 'MS Excel' );
-- SHOW UPDATED DATA --
SELECT * FROM @Customers;
EDIT:
I wanted to update the feed with the given feedId with the whole new feed
To replace one "feed" with an entirely new feed, you may do the following:
-- REPLACE AN ENTIRE JSON ARRAY OBJECT --
DECLARE @MyNewJson NVARCHAR(MAX) = '{"feedId": "this_is_an_entirely_new_node","feedName": "ccsdcdscsdc","format": "NewFormat","sources": [{"sourceId": 1,"name": "New Source 1"},{"sourceId": 2,"name": "New Source 2"}]}';
WITH Config_CTE AS (
SELECT * FROM @Customers AS Customer
CROSS APPLY OPENJSON( configJSON, '$.feeds' ) AS Config
WHERE
Customer.CustomerId = '9ee07040-c001-11e9-b29a-55eb3439cd7c'
AND JSON_VALUE( Config.value, '$.feedName' ) = 'ccsdcdscsdc'
)
UPDATE Config_CTE
SET configJSON = JSON_MODIFY( configJSON, '$.feeds[' + Config_CTE.[key] + ']', JSON_QUERY( @MyNewJson ) );
After running this, the feeds now appear as:
{
"customerName": "mohan",
"custId": "e35273d0-c002-11e9-8188-a1525f580dfd",
"feeds": [
{
"feedId": "this_is_an_entirely_new_node",
"feedName": "ccsdcdscsdc",
"format": "NewFormat",
"sources": [
{
"sourceId": 1,
"name": "New Source 1"
},
{
"sourceId": 2,
"name": "New Source 2"
}
]
},
{
"feedId": "59bbd360-c312-11e9-8af7-cf1cf42fc72e",
"feedName": "dfgdfgdfgdfgsdfg",
"format": "XmlTV",
"sources": [
{
"sourceId": 69042417,
"name": "TV 2 Livsstil"
},
{
"sourceId": 69042419,
"name": "Turk Max"
}
]
}
]
}
Note the use of JSON_QUERY( @MyNewJson ) in the UPDATE. This is important.
From Microsoft's Docs:
JSON_QUERY without its optional second parameter returns only the
first argument as a result. Since JSON_QUERY always returns valid
JSON, FOR JSON knows that this result does not have to be escaped.
If you were to pass @MyNewJson without the JSON_QUERY, your new JSON would be escaped ( e.g., "customerName" becomes \"customerName\" ) as if it were being stored as plain text. JSON_QUERY returns unescaped, valid JSON, which is what you need in your case.
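A tiny self-contained illustration of the difference, using a throwaway @j variable of my own:
DECLARE @j NVARCHAR(MAX) = N'{"a":1}';
SELECT
    JSON_MODIFY(N'{"x":null}', '$.x', @j)             AS EscapedAsText,   -- {"x":"{\"a\":1}"}
    JSON_MODIFY(N'{"x":null}', '$.x', JSON_QUERY(@j)) AS InsertedAsJson;  -- {"x":{"a":1}}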
Also note that the only change I made to replace the entire feed vs. a single item value was switching
'$.feeds[' + Config_CTE.[key] + '].format'
to
'$.feeds[' + Config_CTE.[key] + ']'.

Related

Conditionally update JSON column

I have a table which has ID and JSON columns. ID is an auto-incrementing column. Here is my sample data.
Row 1
1 | {
"HeaderInfo":
{
"Name": "ABC",
"Period": "2010",
"Code": "123"
},
"HData":
[
{ "ID1": "1", "Value": "$1.00", "Code": "A", "Desc": "asdf" },
{ "ID1": "2", "Value": "$1.00", "Code": "B", "Desc": "pqr" },
{ "ID1": "3", "Value": "$1.00", "Code": "C", "Desc": "xyz" }
]
}
Row 2
2 | {
"HeaderInfo":
{
"Name": "ABC",
"Period": "2010",
"Code": "123"
},
"HData":
[
{ "ID1": "76", "Value": "$1.00", "Code": "X", "Desc": "asdf" },
{ "ID1": "25", "Value": "$1.00", "Code": "Y", "Desc": "pqr" },
{ "ID1": "52", "Value": "$1.00", "Code": "Z", "Desc": "lmno" },
{ "ID1": "52", "Value": "$1.00", "Code": "B", "Desc": "xyz" }
]
}
and it keeps going. The number of items inside the HData section is unbounded; it can be any number of items.
On this JSON I need to update the Value to "$2.00" where "Code" is "B". I should be able to do this in 2 scenarios. My parameter inputs are @id=2, @code="B", @value="$2.00". @id will sometimes be null. So:
If @id is null, then the update statement should go through all records and update Value="$2.00" for all items inside the HData section which have Code="B".
If @id = 2, then the update statement should update only the row whose Id is 2, for the items which have Code="B".
Appreciate your help in advance.
Thanks
See DB Fiddle for an example.
declare @id bigint = 2
, @code nvarchar(8) = 'B'
, @value nvarchar(8) = '$2.00'
update a
set json = JSON_MODIFY(json, '$.HData[' + HData.[key] + '].Value', @value)
from so75416277 a
CROSS APPLY OPENJSON (json, '$.HData') HData
CROSS APPLY OPENJSON (HData.Value, '$')
WITH (
ID1 bigint
, Value nvarchar(8)
, Code nvarchar(8)
, [Desc] nvarchar(8)
) as HDataItem
WHERE id = @id
AND HDataItem.Code = @code
The update / set statement says we want to replace the value of json with a newly generated value; it functions exactly the same as it would in any other context, e.g. update a set json = 'something' from so75416277 a where a.column = 'some condition'.
The JSON_MODIFY does the manipulation of our json.
The first input is the original json field's value
The second is the path to the value to be updated.
The third is the new value
'$.HData[' + HData.[key] + '].Value' says we go from our JSON's root ($), find the HData field, filter the array of values for the one we're after (i.e. key here is the array item's index), then use the Value field of this item.
key is a special term; where we don't have a WITH block accompanying our OPENJSON statement we get back 3 items: key, value and type; key being the identifier, value being the content, and type saying what sort of content that is.
CROSS APPLY allows us to perform logic on a value from a single DB row to return potentially multiple rows; e.g. like a join but against its own contents.
OPENJSON (json, '$.HData') HData says to extract the HData field from our json column, and return this with the table alias HData; as we've not included a WITH, this HData alias has 3 columns: key, value, and type, as mentioned above (this is the same key we used in our JSON_MODIFY).
The next OPENJSON works on HData.Value; i.e. the contents of the array item under HData. Here we take the object from this array (i.e. that's the root from the current context; hence $), and use WITH to parse it into a specific structure; i.e. ID1, Value, Code, and Desc (brackets around Desc as it's a keyword). We give this the alias HDataItem.
Finally we filter for the bit of the data we're interested in; i.e. on id to get the row we want to update, then on HDataItem.Code so we only update those array items with code 'B'.
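If you also need scenario 1 from the question, where @id may be null and every row should be considered, one possible variation (my own sketch, not part of the linked DB Fiddle) makes the id filter optional and reads Code straight from the unpacked array item with JSON_VALUE:
update a
set json = JSON_MODIFY(json, '$.HData[' + HData.[key] + '].Value', @value)
from so75416277 a
CROSS APPLY OPENJSON (json, '$.HData') HData
WHERE (@id IS NULL OR a.id = @id)                 -- when @id is null, consider every row
AND JSON_VALUE(HData.value, '$.Code') = @code     -- filter the array items by their Code
As with the original statement, if several items in the same row match the code, only one of them is modified per execution.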
Try the below SP.
CREATE PROC usp_update_75416277
(
@id Int = null,
@code Varchar(15),
@value Varchar(15)
)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @SQLStr Varchar(MAX)=''
;WITH CTE
AS
( SELECT ROW_NUMBER()OVER(PARTITION BY YourTable.Json ORDER BY (SELECT NULL))RowNo,*
FROM YourTable
CROSS APPLY OPENJSON(YourTable.Json,'$.HData')
WITH (
ID1 Int '$.ID1',
Value Varchar(20) '$.Value',
Code Varchar(20) '$.Code',
[Desc] Varchar(20) '$.Desc'
) HData
WHERE (@id IS NULL OR ID =@id)
)
SELECT @SQLStr=@SQLStr+' UPDATE YourTable
SET [JSON]=JSON_MODIFY(YourTable.Json,
''$.HData['+CONVERT(VARCHAR(15),RowNo-1)+'].Value'',
'''+CONVERT(VARCHAR(MAX),@value)+''') '+
'WHERE ID ='+CONVERT(Varchar(15),CTE.ID) +' '
FROM CTE
WHERE Code=@code
AND (@id IS NULL OR ID =@id)
EXEC( @SQLStr)
END
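A call with the parameter values from the question would then look like this (assuming the procedure was created against your real table rather than the YourTable placeholder):
EXEC usp_update_75416277 @id = 2, @code = 'B', @value = '$2.00';
-- or, to touch every row containing Code = 'B':
EXEC usp_update_75416277 @id = NULL, @code = 'B', @value = '$2.00';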

SQL Server For JSON Path dynamic column name

We are exploring the JSON feature in SQL Server, and for one of the scenarios we want to come up with a query which can return JSON like the below:
[
{
"field": {
"uuid": "uuid-field-1"
},
"value": {
"uuid": "uuid-value" //value is an object
}
},
{
"field": {
"uuid": "uuid-field-2"
},
"value": "1". //value is simple integer
}
... more rows
]
The value field can be a simple integer/string or a nested object.
We are able to come up with a table which looks like below:
field.uuid   | value.uuid | value |
-------------|------------|-------|
uuid-field-1 | value-uuid | null  |
uuid-field-2 | null       | 1     |
... more rows
But as soon as we apply FOR JSON PATH, it fails saying
Property 'value' cannot be generated in JSON output due to a conflict with another column name or alias. Use different names and aliases for each column in SELECT list.
Is it possible to somehow generate this? The value will be either in value.uuid or in value, not both.
Note: We are open to the possibility of converting each row to an individual JSON object and adding all of them into an array.
select
json_query((select v.[field.uuid] as 'uuid' for json path, without_array_wrapper)) as 'field',
value as 'value',
json_query((select v.[value.uuid] as 'uuid' where v.[value.uuid] is not null for json path, without_array_wrapper)) as 'value'
from
(
values
('uuid-field-1', 'value-uuid1', null),
('uuid-field-2', null, 2),
('uuid-field-3', 'value-uuid3', null),
('uuid-field-4', null, 4)
) as v([field.uuid], [value.uuid], value)
for json auto;--, without_array_wrapper;
The reason for this error is that (as is mentioned in the documentation) ... FOR JSON PATH clause uses the column alias or column name to determine the key name in the JSON output. If an alias contains dots, the PATH option creates nested objects. In your case value.uuid and value both generate a key with name value.
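A minimal repro of the collision, using throwaway literals of my own:
-- Both aliases collapse to a key named "value", so FOR JSON PATH raises the conflict error quoted above
SELECT 1 AS [value], 2 AS [value.uuid]
FOR JSON PATH;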
I can suggest an approach (probably not the best one), which uses JSON_MODIFY() to generate the expected JSON from an empty JSON array:
Table:
CREATE TABLE Data (
[field.uuid] varchar(100),
[value.uuid] varchar(100),
[value] int
)
INSERT INTO Data
([field.uuid], [value.uuid], [value])
VALUES
('uuid-field-1', 'value-uuid', NULL),
('uuid-field-2', NULL, 1),
('uuid-field-3', NULL, 3),
('uuid-field-4', NULL, 4)
Statement:
DECLARE @json nvarchar(max) = N'[]'
SELECT @json = JSON_MODIFY(
@json,
'append $',
JSON_QUERY(
CASE
WHEN [value.uuid] IS NOT NULL THEN (SELECT d.[field.uuid], [value.uuid] FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
WHEN [value] IS NOT NULL THEN (SELECT d.[field.uuid], [value] FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
END
)
)
FROM Data d
SELECT @json
Result:
[
{
"field":{
"uuid":"uuid-field-1"
},
"value":{
"uuid":"value-uuid"
}
},
{
"field":{
"uuid":"uuid-field-2"
},
"value":1
},
{
"field":{
"uuid":"uuid-field-3"
},
"value":3
},
{
"field":{
"uuid":"uuid-field-4"
},
"value":4
}
]

Convert flattened key/value table into hierarchical JSON in PostgreSQL

I have a PostgreSQL table with unique key/value pairs, which were originally in a JSON format, but have been normalized and melted:
key             | value
----------------|----------
name            | Bob
address.city    | Vancouver
address.country | Canada
I need to turn this into a hierarchical JSON:
{
"name": "Bob",
"address": {
"city": "Vancouver",
"country": "Canada"
}
}
Is there a way to do this easily within SQL?
jsonb_set() almost does everything for you, but unfortunately it can only create missing leaves (i.e. missing last keys on a path), not whole missing branches. To overcome this, here is a modified version of it which can set values on any missing levels:
create function jsonb_set_rec(jsonb, jsonb, text[])
returns jsonb
language sql
as $$
select case
when array_length($3, 1) > 1 and ($1 #> $3[:array_upper($3, 1) - 1]) is null
then jsonb_set_rec($1, jsonb_build_object($3[array_upper($3, 1)], $2), $3[:array_upper($3, 1) - 1])
else jsonb_set($1, $3, $2, true)
end
$$;
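A quick sanity check of the function on a path whose branch does not exist yet (my own example):
select jsonb_set_rec('{}'::jsonb, to_jsonb('Vancouver'::text), array['address','city']);
-- {"address": {"city": "Vancouver"}}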
Now you only need to apply this function to your rows one by one, starting with an empty JSON object: {}. You can do this with either a recursive CTE:
with recursive props as (
(select distinct on (grp)
pk, grp, jsonb_set_rec('{}', to_jsonb(value), string_to_array(key, '.')) json_object
from eav_tbl
order by grp, pk)
union all
(select distinct on (grp)
eav_tbl.pk, grp, jsonb_set_rec(json_object, to_jsonb(value), string_to_array(key, '.'))
from props
join eav_tbl using (grp)
where eav_tbl.pk > props.pk
order by grp, eav_tbl.pk)
)
select distinct on (grp)
grp, json_object
from props
order by grp, pk desc;
Or, with a custom aggregate defined as:
create aggregate jsonb_set_agg(jsonb, text[]) (
sfunc = jsonb_set_rec,
stype = jsonb,
initcond = '{}'
);
your query could become as simple as:
select grp, jsonb_set_agg(to_jsonb(value), string_to_array(key, '.'))
from eav_tbl
group by grp;
https://rextester.com/TULNU73750
There are no ready-to-use tools for this. The following function generates a hierarchical JSON object based on a path:
create or replace function jsonb_build_object_from_path(path text, value text)
returns jsonb language plpgsql as $$
declare
obj jsonb;
keys text[] := string_to_array(path, '.');
level int := cardinality(keys);
begin
obj := jsonb_build_object(keys[level], value);
while level > 1 loop
level := level - 1;
obj := jsonb_build_object(keys[level], obj);
end loop;
return obj;
end $$;
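For example, a quick check of my own:
select jsonb_build_object_from_path('address.city', 'Vancouver');
-- {"address": {"city": "Vancouver"}}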
You also need the aggregate function jsonb_merge_agg(jsonb) described in this answer. The query:
with my_table (path, value) as (
values
('name', 'Bob'),
('address.city', 'Vancouver'),
('address.country', 'Canada'),
('first.second.third', 'value')
)
select jsonb_merge_agg(jsonb_build_object_from_path(path, value))
from my_table;
gives this object:
{
"name": "Bob",
"first":
{
"second":
{
"third": "value"
}
},
"address":
{
"city": "Vancouver",
"country": "Canada"
}
}
The function does not recognize JSON arrays.
I can't really think of something simpler, although I think there should be an easier way.
I assume there is some additional column that can be used to bring the keys that belong to one "person" together, I used p_id for that in my example.
select p_id,
jsonb_object_agg(k, case level when 1 then v -> k else v end)
from (
select p_id,
elements[1] k,
jsonb_object_agg(case cardinality(elements) when 1 then ky else elements[2] end, value) v,
max(cardinality(elements)) as level
from (
select p_id,
"key" as ky,
string_to_array("key", '.') as elements, value
from kv
) t1
group by p_id, k
) t2
group by p_id;
The innermost query just converts the dot notation to an array for easier access later.
The next level then builds JSON objects depending on the "key". For the "single level" keys, it just uses key/value, for the others it uses the second element + the value and then aggregates those that belong together.
The second query level returns the following:
p_id | k | v | level
-----+---------+--------------------------------------------+------
1 | address | {"city": "Vancouver", "country": "Canada"} | 2
1 | name | {"name": "Bob"} | 1
2 | address | {"city": "Munich", "country": "Germany"} | 2
2 | name | {"name": "John"} | 1
The aggregation done in the second step leaves one level too much for the "single element" keys, and that's what we need level for.
If that distinction wasn't made, the final aggregation would return {"name": {"name": "Bob"}, "address": {"city": "Vancouver", "country": "Canada"}} instead of the wanted: {"name": "Bob", "address": {"city": "Vancouver", "country": "Canada"}}.
The expression case level when 1 then v -> k else v end essentially turns {"name": "Bob"} back to "Bob".
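A one-line way to see that last step in isolation (my own snippet):
select '{"name": "Bob"}'::jsonb -> 'name';  -- returns the jsonb string "Bob"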
So, with the following sample data:
create table kv (p_id integer, "key" text, value text);
insert into kv
values
(1, 'name','Bob'),
(1, 'address.city','Vancouver'),
(1, 'address.country','Canada'),
(2, 'name','John'),
(2, 'address.city','Munich'),
(2, 'address.country','Germany');
the query returns:
p_id | jsonb_object_agg
-----+-----------------------------------------------------------------------
1 | {"name": "Bob", "address": {"city": "Vancouver", "country": "Canada"}}
2 | {"name": "John", "address": {"city": "Munich", "country": "Germany"}}
Online example: https://rextester.com/SJOTCD7977
create table kv (key text, value text);
insert into kv
values
('name','Bob'),
('address.city','Vancouver'),
('address.country','Canada'),
('name','John'),
('address.city','Munich'),
('address.country','Germany');
create view v_kv as select row_number() over() as nRec, key, value from kv;
create view v_datos as
select k1.nrec, k1.value as name, k2.value as address_city, k3.value as address_country
from v_kv k1 inner join v_kv k2 on (k1.nrec + 1 = k2.nrec)
inner join v_kv k3 on ((k1.nrec + 2= k3.nrec) and (k2.nrec + 1 = k3.nrec))
where mod(k1.nrec, 3) = 1;
select json_agg(json_build_object('name',name, 'address', json_build_object('city',address_city, 'country', address_country)))
from v_datos;

How to parse JSON into relational format in SQL Server 2016?

I have some JSON stored in a SQL Server 2016 table as below (partial):
{
"AFP": [
{
"AGREEMENTID": "29040400001330",
"LoanAccounts": {
"Product": "OD003",
"BUCKET": 0,
"ZONE": "MUMBAI ZONE",
"Region": "MUMBAI METRO-CENTRAL REGION",
"STATE": "GOA",
"Year": 2017,
"Month": 10,
"Day": 13
},
"FeedbackInfo": {
"FeedbackDate": "2017-10-13T12:07:44.2317198",
"DispositionDate": "2017-10-13T12:07:44.2317198",
"DispositionCode": "PR"
},
"PaymentInfo": {
"ReceiptNo": "2000000170",
"ReceiptDate": "2017-10-13T12:07:42.1218299",
"PaymentMode": "Cheque",
"Amount": 200,
"PaymentStatus": "CollectionBatchCreated"
}
}
]
}
The table schema is as below:
create table tblHistoricalDataDemo(
AGREEMENTID nvarchar(40)
,Year_Json nvarchar(4000)
)
I would like to fetch the records from the JSON into a relational format such as
AgreementID Product Bucket .... PaymentStatus
I tried the below, but I am doing something wrong and am not able to get the result:
SELECT AGREEMENTID,
JSON_VALUE(Year_Json, '$.LoanAccounts') AS records
FROM tblHistoricalDataDemo
Use the OPENJSON built-in table-valued function:
SELECT *
FROM tblHistoricalDataDemo
CROSS APPLY
OPENJSON(Year_Json, '$.AFP') WITH
(
-- You don't have to specify the json path
-- if the column name is the same as the json name
AGREEMENTID bigint
)
As afp
CROSS APPLY
OPENJSON(Year_Json, '$.AFP') WITH
(
Product varchar(10) '$.LoanAccounts.Product',
bucket int '$.LoanAccounts.BUCKET'
)
As LoanAccounts
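A variant of the same idea (my own sketch, with the column list extended to cover the columns asked for in the question) keeps everything in a single OPENJSON call:
SELECT t.AGREEMENTID, afp.*
FROM tblHistoricalDataDemo t
CROSS APPLY OPENJSON(t.Year_Json, '$.AFP') WITH
(
    Product       varchar(10) '$.LoanAccounts.Product',
    Bucket        int         '$.LoanAccounts.BUCKET',
    PaymentStatus varchar(50) '$.PaymentInfo.PaymentStatus'
) AS afp;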
In case the array in JSON has a fixed number of elements, use
$.P1[x]
If AFP has only 1 element,
SELECT t.AGREEMENTID,
JSON_Value(Year_Json, '$.AFP[0].LoanAccounts.Product') Product,
JSON_Value(Year_Json, '$.AFP[0].LoanAccounts.BUCKET') Bucket,
JSON_Value(Year_Json, '$.AFP[0].PaymentInfo.PaymentStatus') PaymentStatus
FROM tblHistoricalDataDemo t
Run it in SQLFiddle, thx Jacob H.

converting xml to json using postgresql

I am working on converting XML to a JSON string using PostgreSQL.
We have attribute-centric XML and would like to know how to convert it to JSON.
Example XML:
<ROOT><INPUT id="1" name="xyz"/></ROOT>
We need to get the JSON as follows:
{ "ROOT": { "INPUT": { "id": "1", "name": "xyz" }}}
We got the above JSON format from an online tool.
Any help or guidance will be appreciated.
Regards
Abdul Azeem
Basically, the breakdown of this problem is the following:
extract values from given XML using xpath() function
generate and combine JSON entries using json_object_agg() function
The only tricky thing here is to combine the key/value pairs from the xpath()/json_object_agg() results, which are { "id": "1" } and { "name" : "xyz" }, into a single object.
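To see step 1 on its own first, here is a quick snippet of my own:
-- xpath() returns an array of xml values; take the first one and cast it to text
select (xpath('//INPUT/@id', '<ROOT><INPUT id="1" name="xyz"/></ROOT>'::xml))[1]::text AS id_value;
-- returns '1'
The full query below then combines both steps: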
WITH test_xml(data) AS ( VALUES
('<ROOT><INPUT id="1" name="xyz"/></ROOT>'::XML)
), attribute_id(value) AS (
-- get '1' value from id
SELECT (xpath('//INPUT/@id',test_xml.data))[1] AS value
FROM test_xml
), attribute_name(value) AS (
-- get 'xyz' value from name
SELECT (xpath('//INPUT/@name',test_xml.data))[1] AS value
FROM test_xml
), json_1 AS (
-- generate JSON 1 {"id": "1"}
SELECT json_object_agg('id',attribute_id.value) AS payload
FROM attribute_id
), json_2 AS (
-- generate JSON 2 {"name" : "xyz"}
SELECT json_object_agg('name',attribute_name.value) AS payload FROM attribute_name
), merged AS (
-- Generate INPUT result - Step 1 - combine JSON 1 and 2 as single key,value source
SELECT key,value
FROM json_1,json_each(json_1.payload)
UNION ALL
SELECT key,value
FROM json_2,json_each(json_2.payload)
), input_json_value AS (
-- Generate INPUT result - Step 2 - use json_object_agg to create JSON { "id" : "1", "name" : "xyz" }
SELECT json_object_agg(merged.key,merged.value) AS data
FROM merged
), input_json AS (
-- Generate INPUT JSON as expected { "INPUT" : { "id" : "1", "name" : "xyz" } }
SELECT json_object_agg('INPUT',input_json_value.data) AS data
FROM input_json_value
)
-- Generate final result
SELECT json_object_agg('ROOT',input_json.data)
FROM input_json;
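Run against the sample XML, this should produce the JSON requested at the top of the question (modulo whitespace):
{ "ROOT" : { "INPUT" : { "id" : "1", "name" : "xyz" } } }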