Aggregate JSON in PostgreSQL

I have a json column with entries that look like this:
{
  "pages": "64",
  "stats": {
    "1": { "200": "55", "400": "4" },
    "2": { "200": "1" },
    "3": { "200": "1", "404": "13" }
  }
}
The 'stats' are collections (of various sizes) mapping HTTP status codes to counts.
I would like to aggregate the stats into two calculated columns - one for the total number of 200 responses and the other for the total number of responses (including 200s).

You can use two lateral joins to unnest the inner objects, then do conditional aggregation:
select
    sum(z.cnt::int) no_responses,
    sum(z.cnt::int) filter(where z.code::int = 200) no_200_responses
from mytable t
cross join lateral jsonb_each(t.data -> 'stats') as x(kx, obj)
cross join lateral jsonb_each_text(x.obj) as z(code, cnt)
Demo on DB Fiddle:
no_responses | no_200_responses
-----------: | ---------------:
          74 |               57
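
For reference, here is a minimal setup that reproduces the demo output above. The table and column names (mytable, data) are assumptions taken from the query, and the column is declared jsonb since jsonb_each is used; a plain json column would need json_each/json_each_text or a cast:

-- hypothetical table matching the query above
create table mytable (data jsonb);

insert into mytable (data) values ('{
  "pages": "64",
  "stats": {
    "1": { "200": "55", "400": "4" },
    "2": { "200": "1" },
    "3": { "200": "1", "404": "13" }
  }
}');

Running the aggregation query against this row yields 74 total responses and 57 with status 200, matching the fiddle.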

Related

T-SQL query that takes json columnar data and displays it in row format

In an Azure SQL database, I have a column that stores JSON data.
SELECT a.Response
FROM [dbo].[Api] a
The result of this query is JSON in columnar format:
{
  "PRODUCT": {
    "0": "a1",
    "1": "a2",
    "2": "a3",
    "3": "a4"
  },
  "STOCK": {
    "0": 3.0,
    "1": 3.0,
    "2": 0.5,
    "3": 6.0
  },
  "SALES": {
    "0": 2487.0,
    "1": 1841.0,
    "2": 391.0,
    "3": 2732.0
  }
}
What I'd like to do is write a query that displays the above data in row format:
PRODUCT | STOCK | SALES
--------+-------+-------
a1      | 3.0   | 2487.0
a2      | 3.0   | 1841.0
a3      | 0.5   | 391.0
a4      | 6.0   | 2732.0
Assuming that the key is what relates the values, and that every key appears in all three JSON objects, one method is to use a few calls to OPENJSON and define the relationship in the WHERE clause:
SELECT P.[value] AS Product,
       St.[value] AS Stock,
       Sa.[value] AS Sales
FROM [dbo].[Api] a
     CROSS APPLY OPENJSON (a.Response, '$.PRODUCT') P
     CROSS APPLY OPENJSON (a.Response, '$.STOCK') St
     CROSS APPLY OPENJSON (a.Response, '$.SALES') Sa
WHERE P.[key] = St.[key]
  AND St.[key] = Sa.[key];
db<>fiddle
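
For a local reproduction of the fiddle, a minimal table can be sketched as follows; the real [dbo].[Api] schema is not shown in the question, so the nvarchar(max) column type is an assumption:

-- hypothetical table holding the JSON shown in the question
CREATE TABLE dbo.Api (Response nvarchar(max));

INSERT INTO dbo.Api (Response)
VALUES (N'{"PRODUCT":{"0":"a1","1":"a2","2":"a3","3":"a4"},
           "STOCK":{"0":3.0,"1":3.0,"2":0.5,"3":6.0},
           "SALES":{"0":2487.0,"1":1841.0,"2":391.0,"3":2732.0}}');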

How to deal with non-existent values using JSON_EXTRACT?

I have a list of objects. Each object contains several properties. Now I want to write a SELECT statement that gives me a list of the values of a single property. The simplified list looks like this:
[
  [
    {
      "day": "2021-10-01",
      "entries": [
        {
          "name": "Start of competition",
          "startTimeDelta": "08:30:00"
        }
      ]
    },
    {
      "day": "2021-10-02",
      "entries": [
        {
          "name": "Start of competition",
          "startTimeDelta": "03:30:00"
        }
      ]
    },
    {
      "day": "2021-10-03",
      "entries": [
        {
          "name": "Start of competition"
        }
      ]
    }
  ]
]
The working SELECT is currently:
SELECT
JSON_EXTRACT(column, '$.days[*].entries[0].startTimeDelta') AS list
FROM table
The returned result is
[
  "08:30:00",
  "03:30:00"
]
But what I want to get (and would have expected) is
[
  "08:30:00",
  "03:30:00",
  null
]
What can I do or how can I change the SELECT statement so that I also get NULL values in the list?
With MySQL 8 you can use JSON_TABLE, which returns NULL when the path does not exist:
SELECT startTimeDelta
FROM test
CROSS JOIN JSON_TABLE(
    val,
    '$[*][*].entries[*]' COLUMNS (startTimeDelta TIME PATH '$.startTimeDelta')
) jsontable
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=491f0f978d200a8a8522e3200509460e
Do you also have a working idea for MySQL < 8? – Lars
What is the max number of objects in the array at the 2nd level? – Akina
Well, it's usually fewer than 10 – Lars
SELECT JSON_EXTRACT(val, CONCAT('$[0][', num, '].entries[0].startTimeDelta')) startTimeDelta
FROM test
-- up to 4 - increase if needed
CROSS JOIN (SELECT 0 num UNION SELECT 1 UNION SELECT 2 UNION SELECT 3) nums
WHERE JSON_EXTRACT(val, CONCAT('$[0][', num, '].entries[0]')) IS NOT NULL;
https://www.db-fiddle.com/f/xnCCSTGQXevcpfPH1GAbUo/0
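
Since the comments note there are usually fewer than 10 objects, the same idea can be stretched by adding more UNION branches to the numbers list; a sketch covering indexes 0 through 9:

SELECT JSON_EXTRACT(val, CONCAT('$[0][', num, '].entries[0].startTimeDelta')) startTimeDelta
FROM test
-- numbers 0..9; extend further if the array can be longer
CROSS JOIN (SELECT 0 num UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4
            UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9) nums
WHERE JSON_EXTRACT(val, CONCAT('$[0][', num, '].entries[0]')) IS NOT NULL;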

SQL Query JSON Array Items by Value

I have searched and can't seem to find an example of exactly what I am trying to do.
I have JSON similar to the following in multiple rows of my database:
{
  "date": "0001-01-01T00:00:00",
  "details": {
    "detail": [
      {
        "item": "11",
        "value": "xt"
      },
      {
        "item": "12",
        "value": "xy"
      },
      {
        "item": "13",
        "value": "xz"
      },
      {
        "item": "14",
        "value": "zz"
      }
    ]
  }
}
I want to write SQL that does this:
select ID
jsonColumn.value where item=11 as X
jsonColumn.value where item=12 as Y
from tbl
So I have results like this
----------------------
|ID |X |Y |
----------------------
|1 |xt |xy |
----------------------
I have tried using JSON_VALUE, but I seem to need to reference the array item by its index, like this:
'$.details.detail[3].value'
which doesn't really work for what I need.
I have also tried this:
SELECT id, x.item, x.[value]
FROM tbl F
     CROSS APPLY (SELECT d.item, d.[value]
                  FROM OPENJSON(F.Json, '$.details.detail') j
                       CROSS APPLY OPENJSON(j.[value])
                                   WITH (item NVARCHAR(25) '$.item',
                                         [value] NVARCHAR(max) '$.value') d) AS x
WHERE F.ID = 55
Which I can use to print out all the items and values but then I'd have to query each separately again.
Is there a way of combining the two into one big query that won't be completely inefficient?
It seems what you want is a pivot. I personally prefer conditional aggregation over the far more restrictive PIVOT operator. The JSON you supplied was invalid, so I took some liberties correcting it in my sandbox environment:
SELECT --ID,
       MAX(CASE d.item WHEN 11 THEN d.[value] END) AS X,
       MAX(CASE d.item WHEN 12 THEN d.[value] END) AS Y
FROM (VALUES(@JSON)) V(J) --Your table
     CROSS APPLY OPENJSON(V.J, '$.details')
                 WITH (detail nvarchar(MAX) AS JSON) OJ
     CROSS APPLY OPENJSON(OJ.detail)
                 WITH (item int,
                       [value] nvarchar(2)) d;
If you are using this against a table, and not limiting the data to a single row, you'll need to also add a GROUP BY clause on the relevant columns (ID?).
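Against a real table, that might look like the sketch below; the table and column names (tbl, ID, Json) are assumptions taken from the question's own attempt, and the single OPENJSON call uses the '$.details.detail' path from that attempt rather than the two-step form above:

SELECT F.ID,
       MAX(CASE d.item WHEN 11 THEN d.[value] END) AS X,
       MAX(CASE d.item WHEN 12 THEN d.[value] END) AS Y
FROM tbl F
     CROSS APPLY OPENJSON(F.Json, '$.details.detail')
                 WITH (item int,
                       [value] nvarchar(25)) d
GROUP BY F.ID;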

Expand PostgreSQL Nested Array JSON Field

I have a table (log_table), and in this table there is a nested-array JSON field (activities). Using this activities field, I want to normalize my rows.
log_table:
- id:long
- activities:json
- date:timestamp
example activities field:
[
  {
    "actionType": "NOTIFICATION",
    "items": null
  },
  {
    "actionType": "MUTATION",
    "items": [
      {
        "id": 387015007,
        "name": "epic",
        "value": {
          "currency": "USD",
          "amount": 1.76
        }
      },
      {
        "id": 386521039,
        "name": "test",
        "value": {
          "currency": "USD",
          "amount": 1.76
        }
      }
    ]
  }
]
As a query, I've tried:
select *
from log_table l,
     json_array_elements(l.activities) elems,
     json_array_elements(elems->'items') obj;
With this query, I got an error like the one below:
ERROR: cannot call json_array_elements on a scalar
Do you have any suggestions?
json_array_elements() fails here because items is a JSON null (a scalar) in the NOTIFICATION entry. The lack of items would be better represented as [null] rather than null; you can use a CASE expression to patch this on the fly, e.g.:
select elems->>'actionType' as action_type, obj
from log_table l
cross join jsonb_array_elements(l.activities::jsonb) elems
cross join jsonb_array_elements(case elems->'items' when 'null' then '[null]' else elems->'items' end) obj
action_type | obj
--------------+---------------------------------------------------------------------------------
NOTIFICATION | null
MUTATION | {"id": 387015007, "name": "epic", "value": {"amount": 1.76, "currency": "USD"}}
MUTATION | {"id": 386521039, "name": "test", "value": {"amount": 1.76, "currency": "USD"}}
(3 rows)
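
For completeness, a minimal reproduction of the table, based on the schema sketch in the question (the json column type and column names come from there; the sample row is the activities value shown above):

-- hypothetical reproduction of log_table
create table log_table (
    id         bigint,
    activities json,
    date       timestamp
);

insert into log_table (id, activities, date) values (
    1,
    '[{"actionType": "NOTIFICATION", "items": null},
      {"actionType": "MUTATION",
       "items": [{"id": 387015007, "name": "epic",
                  "value": {"currency": "USD", "amount": 1.76}},
                 {"id": 386521039, "name": "test",
                  "value": {"currency": "USD", "amount": 1.76}}]}]',
    now()
);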

Count occurrences along with the result using DISTINCT ON in PostgreSQL

I have data like this:
[
  {"name": "pratha", "email": "p#g.com", "sub": { "id": 1 } },
  {"name": "john", "email": "x#x.com", "sub": { "id": 5 } },
  {"name": "pratha", "email": "c#d.com", "sub": { "id": 2 } }
]
This is my query to get unique and latest emails:
SELECT DISTINCT ON (jae.e->>'name')
       jae.e->>'name' as name,
       jae.e->>'email' as email
FROM survey_results sr
CROSS JOIN LATERAL jsonb_array_elements(sr.data_field) jae (e)
ORDER BY jae.e->>'name', jae.e->'sub'->>'id' desc
The problem is that when I add count(*) to the SELECT, all counts are equal.
I want to get a unique result with DISTINCT ON and also count each name's occurrences. So in this case, pratha should be 2 and john should be 1,
together with their data (not just the counts).
How can I achieve this with PostgreSQL?
See here: https://dbfiddle.uk/?rdbms=postgres_11&fiddle=f5c640958c3e4d594287632d0f4a835f
Do you need this? The window count is computed per name before DISTINCT ON discards the duplicate rows, so each remaining row keeps its group's total:
SELECT DISTINCT ON (jj->>'name')
       jj->>'name',
       jj->>'email',
       count(*) over (partition by jj->>'name')
FROM survey_results
JOIN LATERAL jsonb_array_elements(data_field) j(jj) ON true
ORDER BY jj->>'name', jj->'sub'->>'id' DESC
https://dbfiddle.uk/?rdbms=postgres_11&fiddle=5f07b7bcb0001ebe32aa2f1338d9d0f0
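
To try it locally, a minimal setup consistent with the fiddle could look like this (a single row holding the whole array; the jsonb type of data_field is an assumption):

-- hypothetical reproduction of survey_results
create table survey_results (data_field jsonb);

insert into survey_results (data_field) values (
    '[{"name": "pratha", "email": "p#g.com", "sub": {"id": 1}},
      {"name": "john",   "email": "x#x.com", "sub": {"id": 5}},
      {"name": "pratha", "email": "c#d.com", "sub": {"id": 2}}]'
);

With this data the query returns john with a count of 1 and pratha (latest email c#d.com, highest sub id) with a count of 2.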