The official documentation is unclear to me without examples.
I have JSON like this structure; the real one is a bit bigger, but it has the same shape, with more objects inside the array:
WITH
responses AS (
SELECT *
FROM (VALUES ((1,
'[
{
"id": "13",
"alias": "r1",
"title": "quest",
"answer": "5",
"question": "qq",
"answer_id": 10048
},
{
"id": "24",
"alias": "q6",
"title": "quest",
"answer": "yes",
"question": "quest",
"answer_id": 10094
}
]' :: JSON),
(2,
'[
{
"id": "13",
"alias": "r1",
"title": "quest",
"answer": "-1",
"question": "qq",
"answer_id": 10048
},
{
"id": "24",
"alias": "q6",
"title": "quest",
"answer": "no",
"question": "quest",
"answer_id": 10094
}
]' :: JSON))
) TEST(id,val)
)
SELECT * from responses
I can transform it to a flat structure:
id | alias | answer
-------------------
1  | r1    | 5
1  | q6    | yes
2  | r1    | -1
2  | q6    | no
But I need to get the result in this form:
id | r1 | q6
------------
1  | 5  | yes
2  | -1 | no
How can I get a result like the last one?
There are some issues with your WITH query.
First, there is no need to do "select * from (values (..))" – VALUES is itself a query which you can use without SELECT:
test=# values (1, 'zz'), (2, 'bbb');
column1 | column2
---------+---------
1 | zz
2 | bbb
(2 rows)
Next, there is an issue with parentheses. Compare the previous query with this one:
test=# values ((1, 'zz'), (2, 'bbb'));
column1 | column2
---------+---------
(1,zz) | (2,bbb)
(1 row)
If you place additional parentheses like this, you get just one row of two columns, and the values in those columns are of an "anonymous" record type (that's Postgres type magic, very powerful, but it's a different story and not needed here).
So let's fix the first part of your CTE:
with responses(id, val) AS (
values
(1, '
[
{
"id": "13",
"alias": "r1",
"title": "quest",
"answer": "5",
"question": "qq",
"answer_id": 10048
},
{
"id": "24",
"alias": "q6",
"title": "quest",
"answer": "yes",
"question": "quest",
"answer_id": 10094
}
]'::json
), (2, '
[
{
"id": "13",
"alias": "r1",
"title": "quest",
"answer": "-1",
"question": "qq",
"answer_id": 10048
},
{
"id": "24",
"alias": "q6",
"title": "quest",
"answer": "no",
"question": "quest",
"answer_id": 10094
}
]'::json
)
)
select * from responses;
Now we can use the json_array_elements(..) function to extract JSON elements from the JSON arrays:
with responses(id, val) AS (
values
...
)
select id, json_array_elements(val)
from responses;
Let's use it to construct the "second stage" of our CTE:
...
), extra(id, elem) as (
select id, json_array_elements(val)
from responses
)
...
And finally, we'll get the needed result like this:
...
select
id,
elem->>'id' as json_id,
elem->>'alias' as alias,
elem->>'question' as question,
elem->>'answer' as answer
from extra;
Entire query:
with responses(id, val) AS (
values
(1, '
[
{
"id": "13",
"alias": "r1",
"title": "quest",
"answer": "5",
"question": "qq",
"answer_id": 10048
},
{
"id": "24",
"alias": "q6",
"title": "quest",
"answer": "yes",
"question": "quest",
"answer_id": 10094
}
]'::json
), (2, '
[
{
"id": "13",
"alias": "r1",
"title": "quest",
"answer": "-1",
"question": "qq",
"answer_id": 10048
},
{
"id": "24",
"alias": "q6",
"title": "quest",
"answer": "no",
"question": "quest",
"answer_id": 10094
}
]'::json
)
), extra(id, elem) as (
select id, json_array_elements(val)
from responses
)
select
id as row_id,
elem->>'id' as json_id,
elem->>'alias' as alias,
elem->>'question' as question,
elem->>'answer' as answer
from extra;
Result:
row_id | id | alias | question | answer
--------+----+-------+----------+--------
1 | 13 | r1 | qq | 5
1 | 24 | q6 | quest | yes
2 | 13 | r1 | qq | -1
2 | 24 | q6 | quest | no
(4 rows)
This is a bit different from what you wanted. What you wanted cannot be achieved with pure SQL in the general case, since you want dynamic column names in the output: those "r1" and "q6" must be extracted from the JSON data dynamically. It is possible with plpgsql, though, or with the tablefunc extension's crosstab() to create a pivoted table; let me know if you need it.
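That said, if the set of aliases is fixed and known up front, a pure-SQL pivot via conditional aggregation does work. Here is a sketch with the sample JSON abbreviated to the keys the pivot needs (the FILTER clause requires PostgreSQL 9.4+):

```sql
-- Sketch: pivot with conditional aggregation when aliases are known in advance.
with responses(id, val) as (
  values
    (1, '[{"alias": "r1", "answer": "5"},  {"alias": "q6", "answer": "yes"}]'::json),
    (2, '[{"alias": "r1", "answer": "-1"}, {"alias": "q6", "answer": "no"}]'::json)
), extra(id, elem) as (
  select id, json_array_elements(val)
  from responses
)
select
  id,
  max(elem->>'answer') filter (where elem->>'alias' = 'r1') as r1,
  max(elem->>'answer') filter (where elem->>'alias' = 'q6') as q6
from extra
group by id
order by id;
--  id | r1 | q6
-- ----+----+-----
--   1 | 5  | yes
--   2 | -1 | no
```

Each max(..) filter(..) picks out the single answer for one alias per row id, so the aggregation collapses the two JSON elements per id into one pivoted row.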
Is it possible to get array/object number values?
I have a table called tableA:
create table "tableA" (
"_id" serial,
"userId" integer,
"dependentData" jsonb);
INSERT INTO "tableA"
("_id", "userId", "dependentData")
VALUES('55555', '1191', '[{"_id": 133, "type": "radio", "title": "questionTest7", "question": "questionTest7", "response": {"value": ["option_11"]}, "dependentQuestionResponse": [{"_id": 278, "type": "text", "title": "questionTest8", "question": "questionTest8", "response": {"value": ["street no 140"]}, "dependentQuestionResponse": []}]}, {"_id": 154, "type": "dropdown", "title": "questionTest8", "question": "questionTest8", "response": {"value": ["option_14"]}, "dependentQuestionResponse": []}]');
The array numbers are to be fetched. The required output is below:
_id   | userId | array/object
------+--------+--------------
55555 | 1191   | [0,0,1]
You can try something like this:
select id, user_id,
(
with
base as (
select o1, oo1.a oo1 from (
select jsonb_array_elements(t.a) o1
from (select depend_data as a) t
) o
left join lateral (select a from jsonb_array_elements(o.o1-> 'dependentQuestionResponse') a) oo1 on true
)
select json_agg(nn) from (
select dense_rank() over(order by b.o1) - 1 nn, b.o1 from base b
union all
select dense_rank() over(order by b.o1) - 1 nn, b.oo1 from base b where oo1 is not null
order by nn
) tz
) as array_object
from
(select 55555 as id,
1191 as user_id,
'[{"_id": 133, "type": "radio", "title": "questionTest7", "question": "questionTest7", "response": {"value": ["option_11"]}, "dependentQuestionResponse": [
{"_id": 278, "type": "text", "title": "questionTest8", "question": "questionTest8", "response": {"value": ["street no 140"]},"dependentQuestionResponse": []}]},
{"_id": 154, "type": "dropdown", "title": "questionTest8", "question": "questionTest8", "response": {"value": ["option_14"]}, "dependentQuestionResponse": []}]'::jsonb as depend_data) t
Given the data model below:
{
"events": [
{
"customerId": "a",
"type": "credit" ,
"value": 10
},
{
"customerId": "a",
"type": "credit" ,
"value": 10
},
{
"customerId": "b",
"type": "credit" ,
"value": 5
},
{
"customerId": "b",
"type": "credit" ,
"value": 5
}
]
}
How can I query the sum of credits by customerId? I.e.:
[
  {
    "customerId": "a",
    "total": 20
  },
  {
    "customerId": "b",
    "total": 10
  }
]
Use a subquery expression for per-document aggregation:
SELECT d.*,
(SELECT e.customerId, SUM(e.`value`) AS total
FROM d.events AS e
WHERE ......
GROUP BY e.customerId) AS events
FROM default AS d
WHERE ...........;
For the whole query:
SELECT e.customerId, SUM(e.`value`) AS total
FROM default AS d
UNNEST d.events AS e
WHERE ......
GROUP BY e.customerId;
I have JSON stored in a SQL Server database table in the below format. I have been able to fudge a way to get the values I need but feel like there must be a better way to do it using T-SQL. The JSON is output from a report in the below format where the column names in "columns" correspond to the "rows"-"data" array values.
So column "Fiscal Month" corresponds to data value "11", "Fiscal Year" to "2019", etc.
{
"report": "Property ETL",
"id": 2648,
"columns": [
{
"name": "Fiscal Month",
"dataType": "int"
},
{
"name": "Fiscal Year",
"dataType": "int"
},
{
"name": "Portfolio",
"dataType": "varchar(50)"
},
{
"name": "Rent",
"dataType": "int"
}
],
"rows": [
{
"rowName": "1",
"type": "Detail",
"data": [
11,
2019,
"West Group",
10
]
},
{
"rowName": "2",
"type": "Detail",
"data": [
11,
2019,
"East Group",
10
]
},
{
"rowName": "3",
"type": "Detail",
"data": [
11,
2019,
"East Group",
10
]
},
{
"rowName": "Totals: ",
"type": "Total",
"data": [
null,
null,
null,
30
]
}
]
}
In order to get at the data in the 'data' array, I currently have a two-step process in T-SQL: I create a temp table and insert the row key/values from '$.rows' into it. Then I can select the individual columns for each row:
CREATE TABLE #TempData
(
Id INT,
JsonData VARCHAR(MAX)
)
DECLARE @json VARCHAR(MAX);
DECLARE @LineageKey INT;
SET @json = (SELECT JsonString FROM Stage.Report);
SET @LineageKey = (SELECT LineageKey FROM Stage.Report);
INSERT INTO #TempData(Id, JsonData)
(SELECT [key], value FROM OPENJSON(@json, '$.rows'))
MERGE [dbo].[DestinationTable] TARGET
USING
(
SELECT
JSON_VALUE(JsonData, '$.data[0]') AS FiscalMonth,
JSON_VALUE(JsonData, '$.data[1]') AS FiscalYear,
JSON_VALUE(JsonData, '$.data[2]') AS Portfolio,
JSON_VALUE(JsonData, '$.data[3]') AS Rent
FROM #TempData
WHERE JSON_VALUE(JsonData, '$.data[0]') is not null
) AS SOURCE
...
etc., etc.
This works, but I want to know if there is a way to directly select the data values without the intermediate step of putting them into the temp table. The documentation and examples I've read all seem to require that the data have a name associated with it in order to access it. When I try to access the data directly by index position, I just get NULL.
I hope I understand your question correctly. If you know the column names, you need one OPENJSON() call with an explicit schema, but if you want to read the JSON structure from $.columns, you need a dynamic statement.
JSON:
DECLARE @json nvarchar(max) = N'{
"report": "Property ETL",
"id": 2648,
"columns": [
{
"name": "Fiscal Month",
"dataType": "int"
},
{
"name": "Fiscal Year",
"dataType": "int"
},
{
"name": "Portfolio",
"dataType": "varchar(50)"
},
{
"name": "Rent",
"dataType": "int"
}
],
"rows": [
{
"rowName": "1",
"type": "Detail",
"data": [
11,
2019,
"West Group",
10
]
},
{
"rowName": "2",
"type": "Detail",
"data": [
11,
2019,
"East Group",
10
]
},
{
"rowName": "3",
"type": "Detail",
"data": [
11,
2019,
"East Group",
10
]
},
{
"rowName": "Totals: ",
"type": "Total",
"data": [
null,
null,
null,
30
]
}
]
}'
Statement for fixed structure:
SELECT *
FROM OPENJSON(@json, '$.rows') WITH (
[Fiscal Month] int '$.data[0]',
[Fiscal Year] int '$.data[1]',
[Portfolio] varchar(50) '$.data[2]',
[Rent] int '$.data[3]'
)
Dynamic statement:
DECLARE @stm nvarchar(max) = N''
SELECT @stm = CONCAT(
   @stm,
   N',',
   QUOTENAME(j2.name),
   N' ',
   j2.dataType,
   N' ''$.data[',
   j1.[key],
   N']'''
)
FROM OPENJSON(@json, '$.columns') j1
CROSS APPLY OPENJSON(j1.value) WITH (
   name varchar(50) '$.name',
   dataType varchar(50) '$.dataType'
) j2
SELECT @stm = CONCAT(
   N'SELECT * FROM OPENJSON(@json, ''$.rows'') WITH (',
   STUFF(@stm, 1, 1, N''),
   N')'
)
PRINT @stm
EXEC sp_executesql @stm, N'@json nvarchar(max)', @json
Result:
Fiscal Month  Fiscal Year  Portfolio   Rent
---------------------------------------------
11            2019         West Group  10
11            2019         East Group  10
11            2019         East Group  10
NULL          NULL         NULL        30
Yes, it is possible without a temporary table:
DECLARE @json NVARCHAR(MAX) =
N'
{
"report": "Property ETL",
"id": 2648,
"columns": [
{
"name": "Fiscal Month",
"dataType": "int"
},
{
"name": "Fiscal Year",
"dataType": "int"
},
{
"name": "Portfolio",
"dataType": "varchar(50)"
},
{
"name": "Rent",
"dataType": "int"
}
],
"rows": [
{
"rowName": "1",
"type": "Detail",
"data": [
11,
2019,
"West Group",
10
]
},
{
"rowName": "2",
"type": "Detail",
"data": [
11,
2019,
"East Group",
10
]
},
{
"rowName": "3",
"type": "Detail",
"data": [
11,
2019,
"East Group",
10
]
},
{
"rowName": "Totals: ",
"type": "Total",
"data": [
null,
null,
null,
30
]
}
]
}';
And the query:
SELECT s.value,
rowName = JSON_VALUE(s.value, '$.rowName'),
[type] = JSON_VALUE(s.value, '$.type'),
s2.[key],
s2.value
FROM OPENJSON(JSON_QUERY(@json, '$.rows')) s
CROSS APPLY OPENJSON(JSON_QUERY(s.value, '$.data')) s2;
db<>fiddle demo
Or as a single row per detail:
SELECT s.value,
rowName = JSON_VALUE(s.value, '$.rowName'),
[type] = JSON_VALUE(s.value, '$.type'),
JSON_VALUE(s.value, '$.data[0]') AS FiscalMonth,
JSON_VALUE(s.value, '$.data[1]') AS FiscalYear,
JSON_VALUE(s.value, '$.data[2]') AS Portfolio,
JSON_VALUE(s.value, '$.data[3]') AS Rent
FROM OPENJSON(JSON_QUERY(@json, '$.rows')) s;
db<>fiddle demo 2
My table contains strings in JSON format. I need to get the sum and average for each key.
+----+------------------------------------------------------------------------------------+------------+
| id | json_data | subject_id |
+----+------------------------------------------------------------------------------------+------------+
| 1 | {"id": "a", "value": "30"}, {"id": "b", "value": "20"}, {"id": "c", "value": "30"} | 1 |
+----+------------------------------------------------------------------------------------+------------+
| 2 | {"id": "a", "value": "40"}, {"id": "b", "value": "50"}, {"id": "c", "value": "60"} | 1 |
+----+------------------------------------------------------------------------------------+------------+
| 3 | {"id": "a", "value": "20"} | 1 |
+----+------------------------------------------------------------------------------------+------------+
The expected result is:
{"id": "a", "sum": 90, "avg": 30},
{"id": "b", "sum": 70, "avg": 35},
{"id": "c", "sum": 120, "avg": 40}
I've tried
SELECT (
JSON_OBJECT('id', id, 'sum', sum_data, 'avg', avg_data)
) FROM (
SELECT
JSON_EXTRACT(json_data, "$.id") as id,
SUM(JSON_EXTRACT(json_data, "$.sum_data")) as sum_data,
AVG(JSON_EXTRACT(json_data, "$.avg_data")) as avg_data
FROM Details
GROUP BY JSON_EXTRACT(json_data, "$.id")
) as t
But no luck. How can I sort this out?
The input JSON needs to be corrected first (each row must be a valid JSON array):
create table json_sum (id int primary key auto_increment, json_data json);
insert into json_sum values (0,'[{"id": "a", "value": "30"}, {"id": "b", "value": "20"}, {"id": "c", "value": "30"}]');
insert into json_sum values (0,'[{"id": "a", "value": "40"}, {"id": "b", "value": "50"}, {"id": "c", "value": "60"}]');
insert into json_sum values (0,'[{"id": "a", "value": "20"}]');
select
json_object("id", jt.id, "sum", sum(jt.value), "avg", avg(jt.value))
from json_sum, json_table(json_data, "$[*]" columns (
row_id for ordinality,
id varchar(10) path "$.id",
value varchar(10) path "$.value")
) as jt
group by jt.id
Output:
json_object("id", jt.id, "sum", sum(jt.value), "avg", avg(jt.value))
{"id": "a", "avg": 30.0, "sum": 90.0}
{"id": "b", "avg": 35.0, "sum": 70.0}
{"id": "c", "avg": 45.0, "sum": 90.0}
Everyone, I'm facing an issue converting data into a JSON object. There is a table called milestone with the following data:
id | name  | parentId
---+-------+---------
a  | test1 | A
b  | test2 | B
c  | test3 | C
I want to convert each row of the result into a JSON value in Postgres:
[{"id": "a", "name": "test1", "parentId": "A"}]
[{"id": "b", "name": "test2", "parentId": "B"}]
[{"id": "c", "name": "test3", "parentId": "C"}]
If anyone knows how to handle this, please let me know. Thanks all.
You can get each row of the table as simple json object with to_jsonb():
select to_jsonb(m)
from milestone m
to_jsonb
-----------------------------------------------
{"id": "a", "name": "test1", "parentid": "A"}
{"id": "b", "name": "test2", "parentid": "B"}
{"id": "c", "name": "test3", "parentid": "C"}
(3 rows)
If you want to get a single element array for each row, use jsonb_build_array():
select jsonb_build_array(to_jsonb(m))
from milestone m
jsonb_build_array
-------------------------------------------------
[{"id": "a", "name": "test1", "parentid": "A"}]
[{"id": "b", "name": "test2", "parentid": "B"}]
[{"id": "c", "name": "test3", "parentid": "C"}]
(3 rows)
You can also get all rows as a json array with jsonb_agg():
select jsonb_agg(to_jsonb(m))
from milestone m
jsonb_agg
-----------------------------------------------------------------------------------------------------------------------------------------------
[{"id": "a", "name": "test1", "parentid": "A"}, {"id": "b", "name": "test2", "parentid": "B"}, {"id": "c", "name": "test3", "parentid": "C"}]
(1 row)
Read about JSON Functions and Operators in the documentation.
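Note that the keys come out as parentid in lower case: to_jsonb() uses the actual column names, and Postgres folds unquoted identifiers to lower case. If you need the camel-case parentId key from your example, one option is to build the object explicitly; a sketch, assuming the column was created unquoted (so its real name is parentid):

```sql
-- Sketch: spell out the keys to keep camel case in the JSON output
select jsonb_build_array(
         jsonb_build_object('id', m.id, 'name', m.name, 'parentId', m.parentid)
       )
from milestone m;
```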
You can use ROW_TO_JSON
From the documentation:
Returns the row as a JSON object. Line feeds will be added between
level-1 elements if pretty_bool is true.
For the query:
select
row_to_json(tbl)
from
(select * from tbl) as tbl;
You can check it here in the DEMO.
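As a small illustration of the pretty_bool argument mentioned in the quoted documentation, here is a sketch with an inline subquery standing in for a real table:

```sql
-- With the second argument set to true, row_to_json() inserts
-- line feeds between the level-1 fields of the output
select row_to_json(t, true)
from (select 'a' as id, 'test1' as name) as t;
```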