I want to convert data to JSON in separate batches. Each batch should contain no more than two users, and null values should be omitted.
create table #test(userid int, status_a int, status_b int)
insert into #test values (135, 11, 23),
(197, 14, null),
(254, null, 21),
(261, 13, 25),
(391, null, 17)
The result should be:
[
{
"TrackingData":[
{
"userid":135,
"status_a":11,
"status_b":23
},
{
"userid":197,
"status_a":14
}
]
},
{
"TrackingData":[
{
"userid":254,
"status_b":21
},
{
"userid":261,
"status_a":13,
"status_b":25
}
]
},
{
"TrackingData":[
{
"userid":391,
"status_b":17
}
]
}
]
I tried this, but I don't know how to divide the result into batches:
SELECT *
FROM #test
FOR JSON PATH
Looks like you are ordering by userid and want the first two rows in one TrackingData object, the next pair in the next, and so on.
This returns your desired results:
DECLARE @BatchSize INT = 2;
WITH T AS
(
SELECT *,
RN = ROW_NUMBER() OVER (ORDER BY userid) - 1,
Json = (SELECT t.* FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
FROM #test t
)
SELECT TrackingData = JSON_QUERY('[' +
STRING_AGG(Json, ',') WITHIN GROUP (ORDER BY RN) +
']')
FROM T
GROUP BY RN/@BatchSize
ORDER BY RN/@BatchSize
FOR JSON PATH
It generates a zero-based sequential row numbering and uses integer division on that to divide the rows into @BatchSize-sized groups.
userid | status_a | status_b | RN | RN/@BatchSize | Json
-------+----------+----------+----+---------------+-------------------------------------------
135    | 11       | 23       | 0  | 0             | {"userid":135,"status_a":11,"status_b":23}
197    | 14       | NULL     | 1  | 0             | {"userid":197,"status_a":14}
254    | NULL     | 21       | 2  | 1             | {"userid":254,"status_b":21}
261    | 13       | 25       | 3  | 1             | {"userid":261,"status_a":13,"status_b":25}
391    | NULL     | 17       | 4  | 2             | {"userid":391,"status_b":17}
The construction of the sub-array is done with string aggregation and wrapped in JSON_QUERY so that it is treated as JSON and not escaped by the final FOR JSON PATH.
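To see why the JSON_QUERY wrapper matters, here is a minimal sketch (plain SELECTs, nothing specific to the tables above): FOR JSON PATH escapes an ordinary string, while JSON_QUERY marks it as already-valid JSON.

-- The first SELECT escapes the string; the second embeds it as JSON.
SELECT x = '[{"a":1}]' FOR JSON PATH;
-- [{"x":"[{\"a\":1}]"}]
SELECT x = JSON_QUERY('[{"a":1}]') FOR JSON PATH;
-- [{"x":[{"a":1}]}]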
As you can see below, I have a Name column. I want to split it by / and return the values as an array.
MyTable

Id | Name
---+-------------------
1  | John/Warner/Jacob
2  | Kol
If I write a query as
Select Id, Name from MyTable
it will return
{
"id": 1,
"name": "John/Warner/Jacob",
},
{
"id": 2,
"name": "Kol",
},
Which query should I write to get the result below?
{
"id": 1,
"name": ["John", "Warner", "Jacob"],
},
{
"id": 2,
"name": ["Kol"] ,
},
I don't think you can return an array in the query itself, but you could do this:
SELECT id,
SUBSTRING_INDEX(name, '/', 1)
AS name_part_1,
SUBSTRING_INDEX(name, '/', -1)
AS name_part_2
FROM tableName;
The only way to build it as an array would be to process the result accordingly in whatever language you are using.
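That said, if you are on MySQL 5.7+ (which has a native JSON type), one possible shortcut, assuming the names never contain double quotes, is to rewrite the delimiter and cast the result:

-- Turn 'John/Warner/Jacob' into ["John","Warner","Jacob"] by replacing
-- each '/' with '","' and wrapping in ["..."]; breaks if a name contains '"'.
SELECT id,
       CAST(CONCAT('["', REPLACE(name, '/', '","'), '"]') AS JSON) AS name
FROM MyTable;
-- 1 | ["John", "Warner", "Jacob"]
-- 2 | ["Kol"]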
You can define a function split, based on the fact that substring_index(substring_index(name, '/', x), '/', -1) returns the x-th part of a name when separated by '/'.
CREATE FUNCTION `test`.`SPLIT`(s varchar(200), c char, i integer) RETURNS varchar(200) CHARSET utf8mb4
DETERMINISTIC
BEGIN
DECLARE retval varchar(200);
WITH RECURSIVE split as (
select 1 as x, substring_index(substring_index(s,c,1),c,-1) as y, s
union all
select x+1, substring_index(substring_index(s,c,x+1),c,-1), s from split where x <= (LENGTH(s) - LENGTH(REPLACE(s,c,'')))
)
SELECT y INTO retval FROM split WHERE x = i;
return retval;
END
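A quick standalone check of the function (MySQL 8+ is assumed, since the body uses a recursive CTE):

SELECT test.SPLIT('John/Warner/Jacob', '/', 2);  -- returns 'Warner'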
and then do:
with mytable as (
select 1 as Id, 'John/Warner/Jacob' as Name
union all
select 2, 'Kol')
select
id, split(Name,'/',x) as name
from mytable
cross join (select 1 as x union all select 2 union all select 3) x
order by id, name;
output:
Id | name
---+--------
1  | Jacob
1  | John
1  | Warner
2  | [NULL]
2  | [NULL]
2  | Kol
It is, of course, possible to refine this, and leave out the NULL values ...
I will not convert this output to JSON for you ...
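A sketch of that refinement, reusing the same sample rows (split is the function defined above; the function returns NULL when x exceeds the number of parts):

with mytable as (
select 1 as Id, 'John/Warner/Jacob' as Name
union all
select 2, 'Kol')
select id, split(Name, '/', x) as name
from mytable
cross join (select 1 as x union all select 2 union all select 3) x
where split(Name, '/', x) is not null  -- drop the [NULL] rows from above
order by id, name;

And if you did want the array after all, MySQL 8's JSON_ARRAYAGG over these rows would produce it.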
Gleaning several articles online, including this one with a CTE, and this one WITHOUT a CTE, I have been successful in getting the data I need, including a count of the results. However, I need this count to be in a specific place in the JSON object... Basically, I know how to get a rowset into a specific JSON structure with FOR JSON PATH, ROOT ('data'), etc.
However, I do not know how to get the "recordsFiltered" into the root of my JSON output. This count is derived using COUNT(*) OVER () AS recordsFiltered.
Basically, I need my structure to look like this (see below)... How do I get "recordsFiltered" into the root $. of the JSON result without it repeating a billion times under the "data":[] section?
The best idea I can come up with is to create a temporary table, and then use that to structure the JSON. But, I want to do it the fancy SQL way, if one exists, using SELECT statements or CTEs where applicable.
{
"draw": 1,
"recordsTotal": 57,
"recordsFiltered": 57, // <<<--- need records filtered HERE
"data": [
{
"DT_RowId": "row_3",
"recordsFiltered": "69,420", // <<<---- NOT HERE!!!
"first_name": "Angelica",
"last_name": "Ramos",
"position": "System Architect",
"office": "London",
"start_date": "9th Oct 09",
"salary": "$2,875"
},
...
]
}
Here is the example SQL code:
SELECT
COUNT(*) OVER () AS recordsFiltered,
id,
a,
b
FROM t1
WHERE
(@Search IS NULL OR
id LIKE '%'+@Search+'%' OR
a LIKE '%'+@Search+'%' OR
b LIKE '%'+@Search+'%')
ORDER BY
CASE
WHEN @SortDir = 'ASC' THEN
CASE @SortCol
WHEN 0 THEN id
WHEN 1 THEN a
WHEN 2 THEN b
END
END ASC,
CASE
WHEN @SortDir = 'DESC' THEN
CASE @SortCol
WHEN 0 THEN id
WHEN 1 THEN a
WHEN 2 THEN b
END
END DESC
OFFSET @DisplayStart ROWS
FETCH NEXT @DisplayLength ROWS ONLY
for json path, root ('data')
Looks like you need to generate your table results, then use two (or more?) sub-queries
Here's a simplified example:
declare @tbl table (ID int identity, Col1 varchar(50), Col2 int)
insert into @tbl (Col1, Col2) values ('A',1),('B',2),('C',3)
select
(select count(1) from @tbl) as 'total',
(select * from @tbl for json path) as 'data'
for json path
produces:
[
{
"total": 3,
"data": [
{
"ID": 1,
"Col1": "A",
"Col2": 1
},
{
"ID": 2,
"Col1": "B",
"Col2": 2
},
{
"ID": 3,
"Col1": "C",
"Col2": 3
}
]
}
]
Without knowing the rest of your code/schema, here's my guess at your needed query:
select
*
into
#MyTable
from
t1
WHERE
(@Search IS NULL OR
id LIKE '%'+@Search+'%' OR
a LIKE '%'+@Search+'%' OR
b LIKE '%'+@Search+'%')
select
(select count(*) from #MyTable) as recordsFiltered,
(
select
id,
a,
b
from
#MyTable
ORDER BY
CASE
WHEN @SortDir = 'ASC' THEN
CASE @SortCol
WHEN 0 THEN id
WHEN 1 THEN a
WHEN 2 THEN b
END
END ASC,
CASE
WHEN @SortDir = 'DESC' THEN
CASE @SortCol
WHEN 0 THEN id
WHEN 1 THEN a
WHEN 2 THEN b
END
END DESC
OFFSET @DisplayStart ROWS
FETCH NEXT @DisplayLength ROWS ONLY
for json path
) as [data]
for json path
Using a CTE:
with cte as (
select
*
from
t1
WHERE
(@Search IS NULL OR
id LIKE '%'+@Search+'%' OR
a LIKE '%'+@Search+'%' OR
b LIKE '%'+@Search+'%')
)
select
(select count(*) from cte) as recordsFiltered,
(
select
id,
a,
b
from
cte
ORDER BY
CASE
WHEN @SortDir = 'ASC' THEN
CASE @SortCol
WHEN 0 THEN id
WHEN 1 THEN a
WHEN 2 THEN b
END
END ASC,
CASE
WHEN @SortDir = 'DESC' THEN
CASE @SortCol
WHEN 0 THEN id
WHEN 1 THEN a
WHEN 2 THEN b
END
END DESC
OFFSET @DisplayStart ROWS
FETCH NEXT @DisplayLength ROWS ONLY
for json path
) as [data]
for json path
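One more caveat, based on my reading of the desired output above (a bare object, not an array): the outermost for json path wraps the result in [ ... ], so you may want without_array_wrapper on it. A minimal sketch reusing the earlier table-variable pattern:

declare @tbl table (ID int identity, Col1 varchar(50))
insert into @tbl (Col1) values ('A'),('B')
select
(select count(*) from @tbl) as recordsFiltered,
(select * from @tbl for json path) as [data]
for json path, without_array_wrapper
-- {"recordsFiltered":2,"data":[{"ID":1,"Col1":"A"},{"ID":2,"Col1":"B"}]}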
I have a PostgreSQL table with unique key/value pairs, which were originally in a JSON format, but have been normalized and melted:
key             | value
----------------+----------
name            | Bob
address.city    | Vancouver
address.country | Canada
I need to turn this into a hierarchical JSON:
{
"name": "Bob",
"address": {
"city": "Vancouver",
"country": "Canada"
}
}
Is there a way to do this easily within SQL?
jsonb_set() almost does everything for you, but unfortunately it can only create missing leaves (i.e. a missing last key on a path), not whole missing branches. To overcome this, here is a modified version which can set values at any missing level:
create function jsonb_set_rec(jsonb, jsonb, text[])
returns jsonb
language sql
as $$
select case
when array_length($3, 1) > 1 and ($1 #> $3[:array_upper($3, 1) - 1]) is null
then jsonb_set_rec($1, jsonb_build_object($3[array_upper($3, 1)], $2), $3[:array_upper($3, 1) - 1])
else jsonb_set($1, $3, $2, true)
end
$$;
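A quick sanity check of the helper on a branch that does not exist yet:

select jsonb_set_rec('{}', to_jsonb('Vancouver'::text), '{address,city}');
-- {"address": {"city": "Vancouver"}}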
Now you only need to apply this function to your rows one by one, starting with an empty JSON object: {}. You can do this either with a recursive CTE:
with recursive props as (
(select distinct on (grp)
pk, grp, jsonb_set_rec('{}', to_jsonb(value), string_to_array(key, '.')) json_object
from eav_tbl
order by grp, pk)
union all
(select distinct on (grp)
eav_tbl.pk, grp, jsonb_set_rec(json_object, to_jsonb(value), string_to_array(key, '.'))
from props
join eav_tbl using (grp)
where eav_tbl.pk > props.pk
order by grp, eav_tbl.pk)
)
select distinct on (grp)
grp, json_object
from props
order by grp, pk desc;
Or, with a custom aggregate defined as:
create aggregate jsonb_set_agg(jsonb, text[]) (
sfunc = jsonb_set_rec,
stype = jsonb,
initcond = '{}'
);
your query could become as simple as:
select grp, jsonb_set_agg(to_jsonb(value), string_to_array(key, '.'))
from eav_tbl
group by grp;
https://rextester.com/TULNU73750
There are no ready-to-use tools for this. The function below generates a hierarchical JSON object based on a path:
create or replace function jsonb_build_object_from_path(path text, value text)
returns jsonb language plpgsql as $$
declare
obj jsonb;
keys text[] := string_to_array(path, '.');
level int := cardinality(keys);
begin
obj := jsonb_build_object(keys[level], value);
while level > 1 loop
level := level - 1;
obj := jsonb_build_object(keys[level], obj);
end loop;
return obj;
end $$;
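A quick check of the function on its own:

select jsonb_build_object_from_path('address.city', 'Vancouver');
-- {"address": {"city": "Vancouver"}}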
You also need the aggregate function jsonb_merge_agg(jsonb) described in this answer. The query:
with my_table (path, value) as (
values
('name', 'Bob'),
('address.city', 'Vancouver'),
('address.country', 'Canada'),
('first.second.third', 'value')
)
select jsonb_merge_agg(jsonb_build_object_from_path(path, value))
from my_table;
gives this object:
{
"name": "Bob",
"first":
{
"second":
{
"third": "value"
}
},
"address":
{
"city": "Vancouver",
"country": "Canada"
}
}
The function does not recognize JSON arrays.
I can't really think of anything simpler, although I think there should be an easier way.
I assume there is some additional column that can be used to group the keys that belong to one "person" together; I used p_id for that in my example.
select p_id,
jsonb_object_agg(k, case level when 1 then v -> k else v end)
from (
select p_id,
elements[1] k,
jsonb_object_agg(case cardinality(elements) when 1 then ky else elements[2] end, value) v,
max(cardinality(elements)) as level
from (
select p_id,
"key" as ky,
string_to_array("key", '.') as elements, value
from kv
) t1
group by p_id, k
) t2
group by p_id;
The innermost query just converts the dot notation to an array for easier access later.
The next level then builds JSON objects depending on the "key". For the "single level" keys, it just uses key/value; for the others it uses the second element plus the value, and then aggregates those that belong together.
The second query level returns the following:
p_id | k | v | level
-----+---------+--------------------------------------------+------
1 | address | {"city": "Vancouver", "country": "Canada"} | 2
1 | name | {"name": "Bob"} | 1
2 | address | {"city": "Munich", "country": "Germany"} | 2
2 | name | {"name": "John"} | 1
The aggregation done in the second step leaves one level too many for the "single element" keys, and that is what level is needed for.
If that distinction wasn't made, the final aggregation would return {"name": {"name": "Bob"}, "address": {"city": "Vancouver", "country": "Canada"}} instead of the wanted: {"name": "Bob", "address": {"city": "Vancouver", "country": "Canada"}}.
The expression case level when 1 then v -> k else v end essentially turns {"name": "Bob"} back to "Bob".
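A tiny illustration of that unwrapping:

select '{"name": "Bob"}'::jsonb -> 'name';  -- "Bob"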
So, with the following sample data:
create table kv (p_id integer, "key" text, value text);
insert into kv
values
(1, 'name','Bob'),
(1, 'address.city','Vancouver'),
(1, 'address.country','Canada'),
(2, 'name','John'),
(2, 'address.city','Munich'),
(2, 'address.country','Germany');
the query returns:
p_id | jsonb_object_agg
-----+-----------------------------------------------------------------------
1 | {"name": "Bob", "address": {"city": "Vancouver", "country": "Canada"}}
2 | {"name": "John", "address": {"city": "Munich", "country": "Germany"}}
Online example: https://rextester.com/SJOTCD7977
create table kv (key text, value text);
insert into kv
values
('name','Bob'),
('address.city','Vancouver'),
('address.country','Canada'),
('name','John'),
('address.city','Munich'),
('address.country','Germany');
create view v_kv as select row_number() over() as nRec, key, value from kv;
create view v_datos as
select k1.nrec, k1.value as name, k2.value as address_city, k3.value as address_country
from v_kv k1 inner join v_kv k2 on (k1.nrec + 1 = k2.nrec)
inner join v_kv k3 on ((k1.nrec + 2 = k3.nrec) and (k2.nrec + 1 = k3.nrec))
where mod(k1.nrec, 3) = 1;
select json_agg(json_build_object('name',name, 'address', json_build_object('city',address_city, 'country', address_country)))
from v_datos;
I'm storing a Java class A as A_DOC in a CLOB column in my database.
The structure of A is like:
{
  "id": 123,
  "var1": "abc",
  "subvalues": [
    { "id": 1, "value": "a" },
    { "id": 1, "value": "b" },
    ...
  ]
}
I know I can do things like
select json_query(a.A_DOC, '$.subvalues.value') from table_name a;
and so on, but I'm looking for a way to count the number of elements in the subvalues array through a SQL query. Is this possible?
The size() function exists in Oracle 18 only:
SELECT json_query('[19, 15, [16,2,3]]','$[*].size()' WITH ARRAY WRAPPER) FROM dual;
SELECT json_value('[19, 15, [16,2,3]]','$.size()') FROM dual;
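Applied to the table from the question (keeping its table_name / A_DOC names; still Oracle 18+):

SELECT json_value(a.A_DOC, '$.subvalues.size()') AS subvalue_count
FROM table_name a;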
You can use JSON_TABLE:
SELECT
id, var1, count(sub_id) subvalues
FROM
JSON_TABLE (
to_clob('{ "id": 123, "var1": "abc", "subvalues": [{ "id": 1, "value": "a" }, { "id": 2, "value": "b" } ]}'),
'$'
COLUMNS (
id NUMBER PATH '$.id',
var1 VARCHAR2(100) PATH '$.var1',
NESTED PATH '$.subvalues[*]'
COLUMNS (
sub_id NUMBER PATH '$.id'
)
)
)
GROUP BY id, var1
Using postgresql 9.3 (and the new json awesomeness), if I have a simple table named 'races' with a two-column definition such as:
race_id integer,
race_data json
And the json payload for each race is something like
{ "race-time": some-date,
"runners": [ { "name": "fred","age": 30, "position": 1 },
{ "name": "john","age": 29, "position": 3 },
{ "name": "sam","age": 31, "position": 2 } ],
"prize-money": 200 }
How can I query the table for:
1) Races where sam has come 1st
2) Races where sam has come 1st and john has come 2nd
3) Where the number of runners with age greater than 30 is > 5 and prize-money > 5000
My experimentation (particularly in querying the nested array payload) has so far led me to further normalize the data, i.e. to create a table called runners just to make such queries possible. Ideally I'd like to use this new-fangled json query awesomeness, but I can't seem to make heads or tails of it with respect to the 3 simple queries.
You can unwind the json into one record per runner and then write your queries as you want (see the json functions):
with cte as (
select
race_id,
json_array_elements(r.race_data->'runners') as d,
(r.race_data->>'prize-money')::int as price_money
from races as r
), cte2 as (
select
race_id, price_money,
max(case when (d->>'position')::int = 1 then d->>'name' end) as name1,
max(case when (d->>'position')::int = 2 then d->>'name' end) as name2,
max(case when (d->>'position')::int = 3 then d->>'name' end) as name3
from cte
group by race_id, price_money
)
select *
from cte2
where name1 = 'sam' and name2 = 'john'
sql fiddle demo
It's a bit complicated because of your JSON structure. I think that if you changed your structure a bit, your queries could be much simpler:
{
"race-time": some-date,
"runners":
{
"1": {"name": "fred","age": 30},
"2": {"name": "sam","age": 31},
"3": {"name": "john","age": 29}
},
"prize-money": 200
}
you can use the ->> and -> operators or the json_extract_path_text function to get the data you need and then use it in the where clause:
select *
from races as r
where
r.race_data->'runners'->'1'->>'name' = 'sam';
select *
from races as r
where
json_extract_path_text(r.race_data, 'runners','1','name') = 'sam' and
json_extract_path_text(r.race_data, 'runners','2','name') = 'john';
select *
from races as r
where
(r.race_data->>'prize-money')::int > 100 and
(
select count(*)
from json_each(r.race_data->'runners')
where (value->>'age')::int >= 30
) >= 2
sql fiddle demo
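As an aside that goes beyond the 9.3 constraint in the question: from PostgreSQL 9.4 onward, with jsonb, query 1 collapses to a containment test:

-- jsonb containment (PostgreSQL 9.4+): matches races whose runners array
-- contains an object with "name": "sam" and "position": 1.
select *
from races as r
where r.race_data::jsonb @> '{"runners": [{"name": "sam", "position": 1}]}';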