I'm trying to query two values (DISCOUNT_TOTAL and ITEM_TOTAL) from a JSON object in a PostgreSQL database. Take the following query as reference:
SELECT
mt.customer_order,
totals -> 0 -> 'amount' -> 'centAmount' DISCOUNT_TOTAL,
totals -> 1 -> 'amount' -> 'centAmount' ITEM_TOTAL
FROM
my_table mt,
to_jsonb(mt.my_json -> 'data' -> 'order' -> 'totals') totals
WHERE
mt.customer_order in ('1000001', '1000002')
The query works just fine. The big problem is that, for some reason out of my control, the DISCOUNT_TOTAL and ITEM_TOTAL entries sometimes change their positions in the JSON array from one customer_order to another:
So I cannot target totals -> 0 -> 'amount' -> 'centAmount' and assume that it contains the value for type: DISCOUNT_TOTAL (same for type: ITEM_TOTAL). Is there any workaround to get the correct centAmount for each type?
Use a path query instead of hardcoding the array positions:
with sample (jdata) as (
values (
'{
"data": {
"order": {
"email": "something",
"totals": [
{
"type": "ITEM_TOTAL",
"amount": {
"centAmount": 14990
}
},
{
"type": "DISCOUNT_TOTAL",
"amount": {
"centAmount": 6660
}
}
]
}
}
}'::jsonb)
)
select jsonb_path_query_first(
jdata,
'$.data.order.totals[*] ? (@.type == "DISCOUNT_TOTAL").amount.centAmount'
) as discount_total,
jsonb_path_query_first(
jdata,
'$.data.order.totals[*] ? (@.type == "ITEM_TOTAL").amount.centAmount'
) as item_total
from sample;
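Applied to the table from the question, the same approach would look roughly like this (a sketch assuming the my_table, my_json and customer_order columns from your query; the cast is only needed if my_json is json rather than jsonb):

SELECT
    mt.customer_order,
    jsonb_path_query_first(
        mt.my_json::jsonb,  -- cast only needed if the column is json
        '$.data.order.totals[*] ? (@.type == "DISCOUNT_TOTAL").amount.centAmount'
    ) AS discount_total,
    jsonb_path_query_first(
        mt.my_json::jsonb,
        '$.data.order.totals[*] ? (@.type == "ITEM_TOTAL").amount.centAmount'
    ) AS item_total
FROM my_table mt
WHERE mt.customer_order IN ('1000001', '1000002');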
EDIT: In case your PostgreSQL version does not support JSON path queries (jsonb_path_query_first requires PostgreSQL 12 or later), you can do it by expanding the array into rows and then pivoting with CASE and SUM:
with sample (order_id, jdata) as (
values ( 1,
'{
"data": {
"order": {
"email": "something",
"totals": [
{
"type": "ITEM_TOTAL",
"amount": {
"centAmount": 14990
}
},
{
"type": "DISCOUNT_TOTAL",
"amount": {
"centAmount": 6660
}
}
]
}
}
}'::jsonb)
)
select order_id,
sum(
case
when el->>'type' = 'DISCOUNT_TOTAL' then (el->'amount'->>'centAmount')::int
else 0
end
) as discount_total,
sum(
case
when el->>'type' = 'ITEM_TOTAL' then (el->'amount'->>'centAmount')::int
else 0
end
) as item_total
from sample
cross join lateral jsonb_array_elements(jdata->'data'->'order'->'totals') as a(el)
group by order_id;
I have a table (log_table) that contains a nested-array JSON field (activities). Using this activities field, I want to normalize my rows.
log_table:
- id:long
- activities:json
- date:timestamp
example activities field:
[
{
"actionType":"NOTIFICATION",
"items":null
},
{
"actionType":"MUTATION",
"items":[
{
"id":387015007,
"name":"epic",
"value":{
"currency":"USD",
"amount":1.76
}
},
{
"id":386521039,
"name":"test",
"value":{
"currency":"USD",
"amount":1.76
}
}
]
}
]
As a query, I've tried:
select
*
from
log_table l,
json_array_elements(l.activities) elems,
json_array_elements(elems->'items') obj;
With this query, I got the error below:
ERROR: cannot call json_array_elements on a scalar
Do you have any suggestions?
The missing items should be represented as [null], not null. You can use a CASE expression to correct this on the fly, e.g.:
select elems->>'actionType' as action_type, obj
from log_table l
cross join jsonb_array_elements(l.activities::jsonb) elems
cross join jsonb_array_elements(
    case elems->'items' when 'null' then '[null]' else elems->'items' end
) obj;
action_type | obj
--------------+---------------------------------------------------------------------------------
NOTIFICATION | null
MUTATION | {"id": 387015007, "name": "epic", "value": {"amount": 1.76, "currency": "USD"}}
MUTATION | {"id": 386521039, "name": "test", "value": {"amount": 1.76, "currency": "USD"}}
(3 rows)
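The same normalization can be written without CASE by combining nullif and coalesce (a minimal sketch under the same assumptions; it also covers rows where the items key is missing entirely):

select elems->>'actionType' as action_type, obj
from log_table l
cross join jsonb_array_elements(l.activities::jsonb) elems
cross join jsonb_array_elements(
    coalesce(nullif(elems->'items', 'null'::jsonb), '[null]'::jsonb)
) obj;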
I am trying to update several rows in SQL with JSON.
I'd like to match a primary key on a table row to an index nested in an array of JS objects.
Sample data:
let json = [{
"header": object_data,
"items": [{
"id": {
"i": 0,
"name": "item_id"
},
"meta": {
"data": object_data,
"text": "some_text"
}
}, {
"id": {
"i": 4,
"name": "item_id4"
},
"meta": {
"data": object_data,
"text": "some_text"
}
}, {
"id": {
"i": 17,
"name": "item_id17"
},
"meta": {
"data": object_data,
"text": "some_text"
}}]
}]
Sample table:
i | json | item_id
---+---------------------------+---------
0 | entire_object_at_index_0 | item_id
4 | entire_object_at_index_4 | item_id4
17 | entire_object_at_index_17 | item_id17
entire_object_at_index means appending the item data to the header to create a new object for each row, e.g.:
"header": "some_data",
"items": [{
"id": {
"i": 0,
"name": "item_id1"
},
"meta": {
"data": "some_data",
"text": "some_text"
}
}]
SQL:
update someTable set
json = json_value(@jsons, '$'), -- not sure how to join on index here
item_id = json_value(@jsons, '$.items[?].id.name') -- not sure how to select by index here
where [i] = json_query(@jsons, '$.items.id.i')
The requirement to repeat the other properties complicates this a bit, because we need to build a new object explicitly. Even so it's not too hard:
update someTable
set
[json] = (
select (
select
"header" = json_query(#json, '$.header'),
"items" = json_query(N'[' + items.item + N']')
for json path, without_array_wrapper
)
),
item_id = items.item_id
from openjson(@json, '$.items') with (
item nvarchar(max) '$' as json,
item_id varchar(50) '$.id.name',
i int '$.id.i'
) items
join someTable on [someTable].i = items.i
Here I'm assuming @json has already been unwrapped from its array, as your query seems to assume. If it's not, substitute $[0] for $ in the outer query.
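For reference, that still-wrapped variant would look roughly like this (a sketch; only the two JSON paths change from the statement above):

update someTable
set
    [json] = (
        select (
            select
                "header" = json_query(@json, '$[0].header'),
                "items" = json_query(N'[' + items.item + N']')
            for json path, without_array_wrapper
        )
    ),
    item_id = items.item_id
from openjson(@json, '$[0].items') with (
    item nvarchar(max) '$' as json,
    item_id varchar(50) '$.id.name',
    i int '$.id.i'
) items
join someTable on [someTable].i = items.i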
Update:
This is an attempt to improve my answer (I missed the header part of the JSON content in the original answer). Of course, @JeroenMostert's answer is an excellent solution, so this is just another possible approach. Note that if the header part of the JSON content is a scalar value, you should use JSON_VALUE() instead.
Table and JSON:
-- Table
CREATE TABLE #Data (
i int,
[json] nvarchar(max),
item_id nvarchar(100)
)
INSERT INTO #Data
(i, [json], [item_id])
VALUES
(0 , N'entire_object_at_index_0', N'item_id'),
(4 , N'entire_object_at_index_4', N'item_id4'),
(17, N'entire_object_at_index_17', N'item_id17')
-- JSON
DECLARE @json nvarchar(max) = N'[{
"header": {"key": "some_data"},
"items": [{
"id": {
"i": 0,
"name": "item_id"
},
"meta": {
"data": "some_data",
"text": "some_text"
}
}, {
"id": {
"i": 4,
"name": "item_id4"
},
"meta": {
"data": "some_data",
"text": "some_text"
}
}, {
"id": {
"i": 17,
"name": "item_id17"
},
"meta": {
"data": "some_data",
"text": "some_text"
}}]
}]'
Statement:
UPDATE #Data
SET #Data.Json = j.Json
FROM #Data
CROSS APPLY (
SELECT
JSON_QUERY(@json, '$[0].header') AS header,
JSON_QUERY(j.[value], '$') AS items
FROM OPENJSON(@json, '$[0].items') j
WHERE JSON_VALUE(j.[value], '$.id.i') = #Data.[i]
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
) j ([Json])
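To sanity-check the update, you can simply read the temp table back afterwards:

SELECT [i], [item_id], [json]
FROM #Data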
Original answer:
One possible approach is to use OPENJSON with an appropriate join:
Table and JSON:
-- Table
CREATE TABLE #Data (
i int,
[json] nvarchar(max),
item_id nvarchar(100)
)
INSERT INTO #Data
(i, [json], [item_id])
VALUES
(0 , N'entire_object_at_index_0', N'item_id'),
(4 , N'entire_object_at_index_4', N'item_id4'),
(17, N'entire_object_at_index_17', N'item_id17')
-- JSON
DECLARE @json nvarchar(max) = N'[{
"header": "some_data",
"items": [{
"id": {
"i": 0,
"name": "item_id"
},
"meta": {
"data": "some_data",
"text": "some_text"
}
}, {
"id": {
"i": 4,
"name": "item_id4"
},
"meta": {
"data": "some_data",
"text": "some_text"
}
}, {
"id": {
"i": 17,
"name": "item_id17"
},
"meta": {
"data": "some_data",
"text": "some_text"
}}]
}]'
Statement:
UPDATE #Data
SET [json] = j.[value]
FROM #Data
LEFT JOIN (
SELECT
[value],
JSON_VALUE([value], '$.id.i') AS [i]
FROM OPENJSON(@json, '$[0].items')
) j ON (#Data.[i] = j.[i])
I have a JSON structure where there are Sections, consisting of multiple Renders, which consist of multiple Fields.
How do I do a single OPENJSON call on the lowest level (Fields) to get all the information from there?
Here is an example JSON:
Declare @layout NVARCHAR(MAX) = N'
{
"Sections": [
{
"SectionName":"Section1",
"SectionOrder":1,
"Renders":[
{
"RenderName":"Render1",
"RenderOrder":1,
"Fields":[
{
"FieldName":"Field1",
"FieldData":"Data1"
},
{
"FieldName":"Field2",
"FieldData":"Data2"
}
]
},
{
"RenderName":"Render2",
"RenderOrder":2,
"Fields":[
{
"FieldName":"Field1",
"FieldData":"Data1"
},
{
"FieldName":"Field2",
"FieldData":"Data2"
}
]
}
]
},
{
"SectionName":"Section2",
"SectionOrder":2,
"Renders":[
{
"RenderName":"Render1",
"RenderOrder":1,
"Fields":[
{
"FieldName":"Field1",
"FieldData":"Data1"
}
]
},
{
"RenderName":"Render2",
"RenderOrder":2,
"Fields":[
{
"FieldName":"Field1",
"FieldData":"Data1"
},
{
"FieldName":"Field2",
"FieldData":"Data2"
}
]
}
]
}
]
}
'
Here is some example code with a nested OPENJSON call. It works, but it is very complex and can't be generated dynamically. How do I make it a one-level call?
SELECT SectionName, SectionOrder, RenderName, RenderOrder, FieldName, FieldData FROM (
SELECT SectionName, SectionOrder, RenderName, RenderOrder, Fields FROM (
select SectionName, SectionOrder, Renders
from OPENJSON(@layout,'$.Sections')
WITH (
SectionName nvarchar(MAX) '$.SectionName',
SectionOrder nvarchar(MAX) '$.SectionOrder',
Renders nvarchar(MAX) '$.Renders' as JSON
)
) as Sections
CROSS APPLY OPENJSON(Renders,'$')
WITH (
RenderName nvarchar(MAX) '$.RenderName',
RenderOrder nvarchar(MAX) '$.RenderOrder',
Fields nvarchar(MAX) '$.Fields' as JSON
)
) as Renders
CROSS APPLY OPENJSON(Fields,'$')
WITH (
FieldName nvarchar(MAX) '$.FieldName',
FieldData nvarchar(MAX) '$.FieldData'
)
This is what I would like to achieve:
select FieldName, FieldData
from OPENJSON(@layout,'$.Sections.Renders.Fields')
WITH (
FieldName nvarchar(MAX) '$.Sections.Renders.Fields.FieldName',
FieldData nvarchar(MAX) '$.Sections.Renders.Fields.FieldData'
)
While you can't get away with using only a single OPENJSON, you can simplify your query a bit to make it easier to create dynamically by removing the nested subqueries:
SELECT SectionName, SectionOrder, RenderName, RenderOrder, FieldName, FieldData
FROM OPENJSON(@layout, '$.Sections')
WITH (
SectionName NVARCHAR(MAX) '$.SectionName',
SectionOrder NVARCHAR(MAX) '$.SectionOrder',
Renders NVARCHAR(MAX) '$.Renders' AS JSON
)
CROSS APPLY OPENJSON(Renders,'$')
WITH (
RenderName NVARCHAR(MAX) '$.RenderName',
RenderOrder NVARCHAR(MAX) '$.RenderOrder',
Fields NVARCHAR(MAX) '$.Fields' AS JSON
)
CROSS APPLY OPENJSON(Fields,'$')
WITH (
FieldName NVARCHAR(MAX) '$.FieldName',
FieldData NVARCHAR(MAX) '$.FieldData'
)
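For the sample @layout above, the flattened query returns one row per field (derived by hand from the JSON):

SectionName | SectionOrder | RenderName | RenderOrder | FieldName | FieldData
------------+--------------+------------+-------------+-----------+----------
Section1    | 1            | Render1    | 1           | Field1    | Data1
Section1    | 1            | Render1    | 1           | Field2    | Data2
Section1    | 1            | Render2    | 2           | Field1    | Data1
Section1    | 1            | Render2    | 2           | Field2    | Data2
Section2    | 2            | Render1    | 1           | Field1    | Data1
Section2    | 2            | Render2    | 2           | Field1    | Data1
Section2    | 2            | Render2    | 2           | Field2    | Data2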
EDIT:
If you have a primitive array, you can access the data using the value property after you expose the nested array as a JSON field. Using the JSON from a follow-up comment, you can do this to get the values from a primitive array:
DECLARE @layout NVARCHAR(MAX) = N'{ "id":123, "locales":["en", "no", "se"] }'
SELECT
a.id
, [Locale] = b.value
FROM OPENJSON(@layout, '$')
WITH (
id INT '$.id',
locales NVARCHAR(MAX) '$.locales' AS JSON
) a
CROSS APPLY OPENJSON(a.locales,'$') b
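For that sample document, the query returns:

id  | Locale
----+--------
123 | en
123 | no
123 | se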
This can be done by CROSS APPLYing the JSON child node to the parent node and using the JSON_VALUE() function, as shown below:
DECLARE @json NVARCHAR(1000)
SELECT @json =
N'{
"OrderHeader": [
{
"OrderID": 100,
"CustomerID": 2000,
"OrderDetail": [
{
"ProductID": 2000,
"UnitPrice": 350
},
{
"ProductID": 3000,
"UnitPrice": 450
},
{
"ProductID": 4000,
"UnitPrice": 550
}
]
}
]
}'
SELECT
JSON_Value (c.value, '$.OrderID') as OrderID,
JSON_Value (c.value, '$.CustomerID') as CustomerID,
JSON_Value (p.value, '$.ProductID') as ProductID,
JSON_Value (p.value, '$.UnitPrice') as UnitPrice
FROM OPENJSON (@json, '$.OrderHeader') as c
CROSS APPLY OPENJSON (c.value, '$.OrderDetail') as p
Result
-------
OrderID CustomerID ProductID UnitPrice
100 2000 2000 350
100 2000 3000 450
100 2000 4000 550
select
json_query(questionsList.value, '$.answers'),
json_value(answersList2.value, '$.text[0].text') as text,
json_value(answersList2.value, '$.value') as code
from RiskAnalysisConfig c
outer apply OpenJson(c.Configuration, '$.questions') as questionsList
outer apply OpenJson(questionsList.value, '$.answers') as answersList2
where questionsList.value like '%"CATEGORIES"%'
Sample: each question contains a nested list of answers.
questionsList is the level-1 list
answersList2 is the level-2 list
Json sample
{
"code": "Code x",
"questions": [
{},
{},
{
"code": "CATEGORIES",
"text": [
{
"text": "How old years are you?",
"available": true
}
],
"answers": [
{
"text": [
{
"text": "more than 18",
"available": true
}
],
"text": [
{
"text": "less than 18",
"available": true
}
]
}
]
}
]
}
Partial result
| text | code |
| --- | --- |
| How old years are you? | more than 18 |
| How old years are you? | less than 18 |
I have the following JSON inserted into a table called MstJson; the column that contains the JSON code is Jsondata.
JSON Code :
[
{
"id":100,
"type":"donut",
"name":"Cake",
"ppu":0.55,
"batters":{
"batter":[
{
"id":"1001",
"type":"Regular"
},
{
"id":"1002",
"type":"Chocolate"
},
{
"id":"1003",
"type":"Blueberry"
},
{
"id":"1004",
"type":"Havmor",
"BusinessName":"HussainM"
},
"id",
"type"
]
},
"topping":[
{
"id":"5001",
"type":"None"
},
{
"id":"5002",
"type":"Glazed"
},
{
"id":"5005",
"type":"Sugar"
},
{
"id":"5007",
"type":"Powdered Sugar"
},
{
"id":"5006",
"type":"Chocolate with Sprinkles"
},
{
"id":"5003",
"type":"Chocolate"
},
{
"id":"5004",
"type":"Maple"
}
]
},
{
"id":"0002",
"type":"donut",
"name":"Raised",
"ppu":0.55,
"batters":{
"batter":[
{
"id":"1001",
"type":"Regular"
}
]
},
"topping":[
{
"id":"5001",
"type":"None"
},
{
"id":"5002",
"type":"Glazed"
},
{
"id":"5005",
"type":"Sugar"
},
{
"id":"5003",
"type":"Chocolate"
},
{
"id":"5004",
"type":"Maple"
}
]
},
{
"id":"0003",
"type":"donut",
"name":"Old Fashioned",
"ppu":0.55,
"batters":{
"batter":[
{
"id":"1001",
"type":"Regular"
},
{
"id":"1002",
"type":"Chocolate"
}
]
},
"topping":[
{
"id":"5001",
"type":"None"
},
{
"id":"5002",
"type":"Glazed"
},
{
"id":"5003",
"type":"Chocolate"
},
{
"id":"5004",
"type":"Maple"
}
]
}
]
SQL code for OPENJSON using CROSS APPLY (nested array):
SELECT
d.ID
,a.ID
,a.Type
,a.Name
,a.PPU
,c.Batterid
,c.Battertype
FROM MstJson d
CROSS APPLY
OPENJSON(Jsondata)
WITH
(
ID NVARCHAR(MAX) '$.id'
,Type NVARCHAR(MAX) '$.type'
,Name NVARCHAR(MAX) '$.name'
,PPU DECIMAL(18, 2) '$.ppu'
,Batters NVARCHAR(MAX) '$.batters' AS JSON
) AS a
CROSS APPLY
OPENJSON(Batters, '$')
WITH
(
Batter NVARCHAR(MAX) '$.batter' AS JSON
) AS b
CROSS APPLY
OPENJSON(Batter, '$')
WITH
(
Batterid INT '$.id'
,Battertype NVARCHAR(MAX) '$.type'
) AS c
WHERE d.ID = 12; -- the JSON code above is stored at ID 12 of table MstJson
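The topping array in the same documents can be expanded the same way; here is a sketch under the same assumptions (table MstJson with the JSON stored in Jsondata at ID 12):

SELECT
d.ID
,a.ID
,a.Name
,t.Toppingid
,t.Toppingtype
FROM MstJson d
CROSS APPLY
OPENJSON(Jsondata)
WITH
(
ID NVARCHAR(MAX) '$.id'
,Name NVARCHAR(MAX) '$.name'
,Topping NVARCHAR(MAX) '$.topping' AS JSON
) AS a
CROSS APPLY
OPENJSON(Topping, '$')
WITH
(
Toppingid INT '$.id'
,Toppingtype NVARCHAR(MAX) '$.type'
) AS t
WHERE d.ID = 12;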
Here is the relevant JSON from a sample document in my giata_properties bucket:
{
"propertyCodes": {
"provider": [
{
"code": [
{
"value": [
{
"value": "304387"
}
]
}
],
"providerCode": "hotelbeds",
"providerType": "gds"
},
{
"code": [
{
"value": [
{
"name": "Country Code",
"value": "EG"
},
{
"name": "City Code",
"value": "HRG"
},
{
"name": "Hotel Code",
"value": "91U"
}
]
}
],
"providerCode": "gta",
"providerType": "gds"
}
]
},
"name": "Arabia Azur Resort"
}
I want a query (and an index) to retrieve a document based on propertyCodes.provider.code.value.value and propertyCodes.provider.providerCode. I've managed to do each separately but I'm not sure how to merge both of them in a single query.
SELECT meta().id FROM giata_properties AS gp USE INDEX(`#primary`) WHERE ANY v WITHIN gp.propertyCodes.provider[*].code SATISFIES v.`value` = '150613' END;
SELECT meta().id FROM giata_properties AS gp USE INDEX(`#primary`) WHERE ANY v within gp.propertyCodes.provider[*].providerCode SATISFIES v = 'hotelbeds' END;
So, for example, I want to fetch the document that has a propertyCodes.provider.code.value.value of 304387 and whose provider is also hotelbeds, because a code value can be duplicated across documents, but the combination of code and providerCode is unique.
Here are the query and the indexes.
The query.
SELECT META().id
FROM giata_properties AS gp
WHERE ANY p IN propertyCodes.provider SATISFIES ( ANY v WITHIN p.code SATISFIES v.`value` = '304387' END ) AND p.providerCode = 'hotelbeds' END;
The indexes.
CREATE INDEX idx_value ON giata_properties
( DISTINCT ARRAY ( DISTINCT ARRAY v.`value` FOR v WITHIN p.code END ) FOR p IN propertyCodes.provider END );
CREATE INDEX idx_providerCode ON giata_properties
( DISTINCT ARRAY p.providerCode FOR p IN propertyCodes.provider END );
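To check that the indexes are actually picked up, you can run the query under EXPLAIN (a quick verification step):

EXPLAIN SELECT META().id
FROM giata_properties AS gp
WHERE ANY p IN propertyCodes.provider SATISFIES ( ANY v WITHIN p.code SATISFIES v.`value` = '304387' END ) AND p.providerCode = 'hotelbeds' END;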