Currently I have this piece of code:
DECLARE @json NVARCHAR(MAX)
SET @json =
N'[
{
"objOrg": {
"EmpIds": [
{
"Id": 101
},
{
"Id": 102
},
{
"Id": 103
}
]
}
}
]'
How can I return the Id values from EmpIds pivoted, such as:
| Id1 | Id2 | Id3 |
|-----|-----|-----|
| 101 | 102 | 103 |
OPENJSON without a schema will return the array index in the [key] column. Pass each element's value to OPENJSON again to parse out the Id, then pivot the final result using PIVOT or conditional aggregation (MAX(CASE ... END)):
DECLARE @json nvarchar(max) =
N'[
{
"objOrg": {
"EmpIds": [
{
"Id": 101
},
{
"Id": 102
},
{
"Id": 103
}
]
}
}
]';
SELECT MAX(CASE WHEN arr.[key] = 0 THEN ID END) AS Id1,
MAX(CASE WHEN arr.[key] = 1 THEN ID END) AS Id2,
MAX(CASE WHEN arr.[key] = 2 THEN ID END) AS Id3
FROM OPENJSON(@json, '$[0].objOrg.EmpIds') arr
CROSS APPLY OPENJSON (arr.value)
WITH (
Id int
) AS j;
-- alternatively
SELECT p.*
FROM (
SELECT arr.[key] + 1 AS [key], j.Id
FROM OPENJSON(@json, '$[0].objOrg.EmpIds') arr
CROSS APPLY OPENJSON (arr.value)
WITH (
Id int
) AS j
) j
PIVOT (
MAX(j.Id) FOR j.[key] IN
([1], [2], [3])
) p;
db<>fiddle
You can use OPENJSON() along with the ROW_NUMBER() window function, and then build the pivot columns dynamically, such as
DECLARE
@json AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX);
SET @json =
N'[
{
"objOrg": {
"EmpIds": [
{
"Id": 101
},
{
"Id": 102
},
{
"Id": 103
}
]
}
}
]';
SELECT j.*, ROW_NUMBER() OVER (ORDER BY j.Id) AS rn
INTO t_json
FROM OPENJSON(@json)
WITH (
JS NVARCHAR(MAX) '$.objOrg.EmpIds' AS JSON
) AS j0
CROSS APPLY OPENJSON (j0.JS)
WITH (
Id INT '$.Id'
) AS j;
SET @query = CONCAT('SELECT',
STUFF(
(SELECT CONCAT(', MAX(CASE WHEN rn=' , CAST(rn AS VARCHAR) , ' THEN Id END) AS Id', CAST(rn AS VARCHAR))
FROM t_json
ORDER BY rn
FOR XML PATH(''), type).value('.', 'NVARCHAR(MAX)'),
1,1,''
),' FROM t_json');
EXECUTE(@query)
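For the three sample Ids above, the dynamic statement that gets built and executed is effectively:
SELECT MAX(CASE WHEN rn=1 THEN Id END) AS Id1,
MAX(CASE WHEN rn=2 THEN Id END) AS Id2,
MAX(CASE WHEN rn=3 THEN Id END) AS Id3
FROM t_json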
Demo
So, I'm trying to get data from MSSQL to update some fields in an HTML form, which includes a checkbox and a set of options for a select input.
I thought I was being smart by writing my query as shown below: it gets both fields at once instead of two independent queries. It's okay, but I have a lot of repeated items.
Is there a way to flatten this out?
// how do I flatten this
{
"Calculated": [
{
"Calculated": false
}
],
"Schedule": [
{
"Schedule": "THX-1138"
},
{
"Schedule": "LUH-3417"
},
{
"Schedule": "SEN-5241"
}
]
}
// into something more like this?
{
"Calculated": false,
"Schedule": [
"THX-1138",
"LUH-3417",
"SEN-5241"
]
}
Here is the query:
declare
@EffectDate smalldatetime = '07-01-2012'
,@Grade varchar(3) = '001'
,@Schedule varchar(9) = 'THX-1138'
,@Step smallint = '15'
,@jsonResponse nvarchar(max)
;
select @jsonResponse = (
select
[Calculated] =
(
select
b.Calculated
from
tblScalesHourly a
inner join
tblSchedules b
on a.EffectDate = b.EffectDate
and a.Schedule = b.Schedule
where
a.EffectDate = @EffectDate
and a.Schedule = @Schedule
and a.Grade = @Grade
and a.Step = @Step
for json path
)
,[Schedule] =
(
select
Schedule
from
tblSchedules
where
EffectDate = @EffectDate
and Calculated = 0
order by
Schedule asc
for json path
)
for json path, without_array_wrapper
)
It's probably a late answer, but I'm able to reproduce this issue with the following test data:
declare @jsonResponse nvarchar(max)
select @jsonResponse = (
select
[Calculated] =
(
select CONVERT(bit, 0) AS Calculated
for json path
)
,
[Schedule] =
(
select Schedule
from (values ('THX-1138'), ('LUH-3417'), ('SEN-5241')) tblSchedules (Schedule)
order by Schedule asc
for json path
)
for json path, without_array_wrapper
)
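For reference, this reproduction assigns @jsonResponse the nested shape from the question (with the Schedule values sorted ascending):
{"Calculated":[{"Calculated":false}],"Schedule":[{"Schedule":"LUH-3417"},{"Schedule":"SEN-5241"},{"Schedule":"THX-1138"}]}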
You can get the expected results with the following approach. Note that you can't generate a JSON array of scalar values using FOR JSON, so you need to use string aggregation (FOR XML PATH('') for SQL Server 2016, or STRING_AGG() for SQL Server 2017+; a STRING_AGG() variant is sketched after the output below):
select @jsonResponse = (
select
[Calculated] = (
select CONVERT(bit, 0) AS Calculated
)
,
[Schedule] = JSON_QUERY(CONCAT(
'["',
STUFF(
(
select CONCAT('","', Schedule)
from (values ('THX-1138'), ('LUH-3417'), ('SEN-5241')) tblSchedules (Schedule)
order by Schedule asc
for xml path('')
), 1, 3, ''
),
'"]'
))
for json path, without_array_wrapper
)
Output:
{"Calculated":false,"Schedule":["LUH-3417","SEN-5241","THX-1138"]}
And finally, using the statement from the question (not tested):
declare
@EffectDate smalldatetime = '07-01-2012'
,@Grade varchar(3) = '001'
,@Schedule varchar(9) = 'THX-1138'
,@Step smallint = '15'
,@jsonResponse nvarchar(max)
;
select @jsonResponse = (
select
[Calculated] = (
select
b.Calculated
from
tblScalesHourly a
inner join
tblSchedules b
on a.EffectDate = b.EffectDate
and a.Schedule = b.Schedule
where
a.EffectDate = @EffectDate
and a.Schedule = @Schedule
and a.Grade = @Grade
and a.Step = @Step
),
[Schedule] = JSON_QUERY(CONCAT(
'["',
STUFF(
(
select CONCAT('","', Schedule)
from
tblSchedules
where
EffectDate = @EffectDate
and Calculated = 0
for xml path('')
), 1, 3, ''
),
'"]'
))
for json path, without_array_wrapper
)
I've got this table structure
| User | Type | Data |
|------|---------|------|
| 1 | "T1" | "A" |
| 1 | "T1" | "B" |
| 1 | "T2" | "C" |
| 2 | "T1" | "D" |
I want to get a hierarchical JSON string returned from my query
{
"1": {
"T1": [
"A",
"B"
],
"T2": [
"C"
]
},
"2": {
"T1": [
"D"
]
}
}
So: one entry for each User, with a sub-entry for each Type, and then a sub-entry for each Data.
All I'm finding is FOR JSON PATH, ROOT('x'), or AUTO, but nothing that would make this hierarchical. Is this even possible out of the box? I couldn't find anything, so I've experimented with (recursive) CTEs but didn't get very far. I'd much appreciate it if someone could point me in the right direction.
I'm not sure that you can create JSON with variable key names using FOR JSON AUTO or FOR JSON PATH. I suggest the following solutions:
using FOR XML PATH to generate JSON with string manipulations
using STRING_AGG() to generate JSON with string manipulations for SQL Server 2017+
using STRING_AGG() and JSON_MODIFY() for SQL Server 2017+
Table:
CREATE TABLE #InputData (
[User] int,
[Type] varchar(2),
[Data] varchar(1)
)
INSERT INTO #InputData
([User], [Type], [Data])
VALUES
(1, 'T1', 'A'),
(1, 'T1', 'B'),
(1, 'T2', 'C'),
(2, 'T1', 'D')
Statement using FOR XML PATH:
;WITH SecondLevelCTE AS (
SELECT
d.[User],
d.[Type],
Json1 = CONCAT(
'[',
STUFF(
(
SELECT CONCAT(',"', [Data], '"')
FROM #InputData
WHERE [User] = d.[User] AND [Type] = d.[Type]
FOR XML PATH('')
), 1, 1, ''),
']')
FROM #InputData d
GROUP BY d.[User], d.[Type]
), FirstLevelCTE AS (
SELECT
d.[User],
Json2 = CONCAT(
'{',
STUFF(
(
SELECT CONCAT(',"', [Type], '":', [Json1])
FROM SecondLevelCTE
WHERE [User] = d.[User]
FOR XML PATH('')
), 1, 1, ''),
'}'
)
FROM SecondLevelCTE d
GROUP BY d.[User]
)
SELECT CONCAT(
'{',
STUFF(
(
SELECT CONCAT(',"', [User], '":', Json2)
FROM FirstLevelCTE
FOR XML PATH('')
), 1, 1, '') ,
'}'
)
Statement using STRING_AGG():
;WITH SecondLevelCTE AS (
SELECT
d.[User],
d.[Type],
Json1 = (
SELECT CONCAT('["', STRING_AGG([Data], '","'), '"]')
FROM #InputData
WHERE [User] = d.[User] AND [Type] = d.[Type]
)
FROM #InputData d
GROUP BY d.[User], d.[Type]
), FirstLevelCTE AS (
SELECT
d.[User],
Json2 = (
SELECT STRING_AGG(CONCAT('"', [Type], '":', [Json1]), ',')
FROM SecondLevelCTE
WHERE [User] = d.[User]
)
FROM SecondLevelCTE d
GROUP BY d.[User]
)
SELECT CONCAT('{', STRING_AGG(CONCAT('"', [User], '":{', Json2, '}'), ','), '}')
FROM FirstLevelCTE
Statement using STRING_AGG() and JSON_MODIFY():
DECLARE @json nvarchar(max) = N'{}'
SELECT
@json = JSON_MODIFY(
CASE
WHEN JSON_QUERY(@json, CONCAT('$."', [User] , '"')) IS NULL THEN JSON_MODIFY(@json, CONCAT('$."', [User] , '"'), JSON_QUERY('{}'))
ELSE @json
END,
CONCAT('$."', [User] , '".', [Type]),
JSON_QUERY(Json)
)
FROM (
SELECT
d.[User],
d.[Type],
Json = (
SELECT CONCAT('["', STRING_AGG([Data], '","'), '"]')
FROM #InputData
WHERE [User] = d.[User] AND [Type] = d.[Type]
)
FROM #InputData d
GROUP BY d.[User], d.[Type]
) t
Output:
{"1":{"T1":["A","B"],"T2":["C"]},"2":{"T1":["D"]}}
This isn't exactly what you want (I'm not great with FOR JSON), but it does get you close to the shape you need until something better comes along...
(https://jsonformatter.org/json-parser/974b6b)
use tempdb
GO
drop table if exists users
create table users (
[user] integer
, [type] char(2)
, [data] char(1)
)
insert into users
values (1, 'T1', 'A')
, (1, 'T1', 'B')
, (1, 'T2', 'C')
, (2, 'T1', 'D')
select DISTINCT ONE.[user], two.[type], three.[data]
from users AS ONE
inner join users two
on one.[user] = two.[user]
inner join users three
on one.[user] = three.[user]
and two.[type] = three.[type]
for JSON AUTO
I have a N1QL query:
SELECT p.`ID`, p.`Name` FROM `Preferences` p WHERE `type` = "myType"
The result is a list of objects: [{"ID": "123", "Name": "John"}, ...]
I want to get a result JSON such as:
{
"count": 5,
"result": [{"ID": "123", "Name": "John"}, ...]
}
How could I do this using N1QL?
SELECT
COUNT(t.ID) AS count,
ARRAY_AGG(t) AS results
FROM
(
SELECT
p.`ID`, p.`Name`
FROM
`Preferences` p
WHERE `type` = "myType"
) AS t
Trying to figure out how to make my Couchbase query return an object like so:
{
items: [],
totalItemsCount: T<number>,
}
My select is formatted like so:
SELECT a.*, ( SELECT COUNT(*) FROM table b WHERE b.environment = "test" AND b.DocType = "GM360.User") as Count
FROM table a WHERE a.environment = "test"
AND a.DocType = "Moderator.User"
limit 5 offset (5 * (1 -1) )
And the result looks like:
[
{ Accounts: [], UserId: 1, Count: 199 },
{ Accounts: [], UserId: 2, Count: 199 }
]
The following query gives the result object you are expecting. If it does not, please explain the problem more clearly.
SELECT (SELECT RAW a
FROM table AS a
WHERE a.environment = "test" AND a.DocType = "Moderator.User") AS items,
(SELECT RAW COUNT(1)
FROM table b
WHERE b.environment = "test" AND b.DocType = "GM360.User")[0] AS totalItemsCount;
OR
SELECT SUM(CASE WHEN a.DocType = "GM360.User" THEN 1 ELSE 0 END) AS totalItemsCount,
ARRAY_AGG(CASE WHEN a.DocType = "Moderator.User" THEN a ELSE MISSING END) AS items
FROM table AS a
WHERE a.environment = "test" AND a.DocType IN ["Moderator.User", "GM360.User"];
I have 3 tables with the following schema.
a.c1 int (pk),
a.c2 varchar(50),
a.c3 varchar(50),
b.c1 int(pk),
b.c2 int(fk) -> a.c1,
b.c3 varchar(50),
b.c4 varchar(50),
c.c1 int(pk),
c.c2 int(fk) -> b.c1,
c.c3 int(fk) -> a.c1,
c.c4 varchar(50),
c.c5 varchar(50)
I'm expecting the result to be
{
"json_doc": {
"a.c1": "val",
"a.c2": "val",
"a.c3": "val",
"b": [{
"b_c1_value": {
"b.c1": "val",
"b.c2": "val",
"b.c3": "val",
"b.c4": "val",
"c": [{
"c_c1_value": {
"c.c1": "val",
"c.c2": "val",
"c.c3": "val",
"c.c4": "val",
"c.c5": "val"
}
}]
}
}]
}
}
Can someone please help me with the right SQL? I'm very, very new to Postgres.
I have gotten this far:
select row_to_json(t)
from (
select
*,
(
select array_to_json(array_agg(row_to_json(d)))
from (
select
*,
(
select array_to_json(array_agg(row_to_json(dc)))
from (
select *
from c
where c.c2 = b.c1
) dc
) as c
from b
where c2 = a.c1
) d
) as b
from a
WHERE a.deployid = 19
) t;
I need the key names for the arrays to be populated, and I'm stuck with this. Any help is deeply appreciated!
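Not a complete answer, but a minimal, untested sketch of one possible direction, assuming PostgreSQL 9.4+ and json_build_object(): each b row (and each c row) is wrapped in an object keyed by its primary key value, which is what the b_c1_value / c_c1_value placeholders appear to stand for. Column names and the deployid filter are taken from the schema and query above.
-- sketch only: wrap each nested row in an object keyed by its primary key value
select row_to_json(t) as json_doc
from (
select
a.*,
(
select json_agg(
json_build_object(
b.c1::text, -- the "b_c1_value" key
json_build_object(
'c1', b.c1, 'c2', b.c2, 'c3', b.c3, 'c4', b.c4,
'c', (
select json_agg(json_build_object(c.c1::text, row_to_json(c))) -- the "c_c1_value" key
from c
where c.c2 = b.c1
)
)
)
)
from b
where b.c2 = a.c1
) as b
from a
where a.deployid = 19
) t;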