Unable to get all the values from JSON_ARRAY_ELEMENTS() - json

Table with sample data:
create table tbl_jsdata
(
id int,
p_id int,
field_name text,
field_value text
);
insert into tbl_jsdata values
(1,101,'Name','Sam'),
(2,101,'City','Dubai'),
(3,101,'Pin','1235'),
(4,101,'Country','UAE'),
(5,102,'Name','Sam'),
(6,102,'City','Dubai'),
(7,102,'Name','Sam Jack'),
(8,102,'Name','Test'),
(9,102,'Name',null);
json_agg query:
drop table if exists tempJSData;
select p_id,
json_build_array(json_object_agg(field_name, field_value)) into tempJSData
from tbl_jsdata
group by p_id;
Actual result:
select p_id, (json_array_elements(json_build_array) ->> 'Name')::text as Namess
from tempJSData;
p_id Namess
---------------------------------
101 Sam
102
Expected Result:
p_id Namess
---------------------------------
101 Sam
102 Sam
102 Sam Jack
102 Test
102

I think it's because you're not creating an array of Name.
If you check your query
select p_id,
json_build_array(json_object_agg(field_name, field_value))
from tbl_jsdata
group by p_id;
The result is
p_id | json_build_array
------+---------------------------------------------------------------------------------------------
101 | [{ "Name" : "Sam", "City" : "Dubai", "Pin" : "1235", "Country" : "UAE" }]
102 | [{ "Name" : "Sam", "City" : "Dubai", "Name" : "Sam Jack", "Name" : "Test", "Name" : null }]
(2 rows)
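To see why only one value comes back per row, note how PostgreSQL treats duplicate keys; a quick check (a minimal sketch, runnable in any psql session):
select '{"Name":"a","Name":"b"}'::json ->> 'Name';  -- b: for duplicate keys, the last value is the operative one
select '{"Name":"a","Name":"b"}'::jsonb;            -- {"Name": "b"}: jsonb keeps only the last duplicate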
With multiple entries of the Name field in one object, json_array_elements(json_build_array) ->> 'Name' fetches only one of them (the last one, which is why p_id 102 comes back empty: its last Name is null). I suggest creating an array based on p_id and field_name first:
with array_built as (
select p_id,field_name,
array_agg(field_value) field_value
from tbl_jsdata
group by p_id, field_name
)
select p_id,
jsonb_object_agg(field_name, field_value)
from array_built
group by p_id
;
The result could still be refined, since it creates an array even when there is only one value:
p_id | jsonb_object_agg
------+---------------------------------------------------------------------------
101 | {"Pin": ["1235"], "City": ["Dubai"], "Name": ["Sam"], "Country": ["UAE"]}
102 | {"City": ["Dubai"], "Name": ["Sam", "Sam Jack", "Test", null]}
(2 rows)
But now you can parse it correctly. The whole query is:
with array_built as (
select p_id,field_name,
array_agg(field_value) field_value
from tbl_jsdata
group by p_id, field_name
), agg as (
select p_id,
jsonb_object_agg(field_name, field_value) json_doc
from array_built
group by p_id
)
select p_id, jsonb_array_elements(json_doc->'Name') from agg;
With the expected result as
p_id | jsonb_array_elements
------+----------------------
101 | "Sam"
102 | "Sam"
102 | "Sam Jack"
102 | "Test"
102 | null
(5 rows)

You can use json_each_text to expand the object into key/value pairs and keep only the key you want via the WHERE clause:
SELECT p_id,j.value
FROM tempJSData, json_each_text(json_build_array->0) j
WHERE j.key = 'Name';
p_id | value
------+----------
101 | Sam
102 | Sam
102 | Sam Jack
102 | Test
102 |
(5 rows)
Note: this query assumes the format of your JSON is final. If not, consider creating an array of names instead of an array of objects that each contain a name: {"name": ["foo", "bar"]} instead of [{"name": "foo"}, {"name": "bar"}]. The answer from Ftisiot makes a pretty good point.
Demo: db<>fiddle

Your JSON aggregation is essentially invalid, as you are creating a JSON value where the same key appears more than once. If you had used the recommended jsonb data type, the duplicate keys would have been collapsed (only the last value for each key is kept).
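A quick illustration (a sketch; without an ORDER BY the aggregation order, and hence which duplicate wins, is not guaranteed):
select p_id, jsonb_object_agg(field_name, field_value)
from tbl_jsdata
where p_id = 102
group by p_id;
-- likely: 102 | {"City": "Dubai", "Name": null}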
I think this aggregation makes more sense:
create table tempjsdata
as
select p_id,
jsonb_agg(jsonb_build_object(field_name, field_value)) as names
from tbl_jsdata
group by p_id
The above generates the following result:
p_id | names
-----+---------------------------------------------------------------------------------------------
101 | [{"Name": "Sam"}, {"City": "Dubai"}, {"Pin": "1235"}, {"Country": "UAE"}]
102 | [{"Name": "Sam"}, {"City": "Dubai"}, {"Name": "Sam Jack"}, {"Name": "Test"}, {"Name": null}]
Then you can use:
select p_id,
x.*
from tempjsdata t
cross join lateral (
select x.item ->> 'Name' as name
from jsonb_array_elements(t.names) as x(item)
where x.item ? 'Name'
) x
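Given the aggregated names arrays shown above, this should return one row per Name entry, including the null one:
p_id | name
------+----------
 101 | Sam
 102 | Sam
 102 | Sam Jack
 102 | Test
 102 |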
Online example

Related

How could I map a list including fields that are not found in a table?

This is my first time using SQL in practice, and I've run into the following situation in my .NET Core project:
I have a table now like this:
name:string | age:int | ticketType:enum
------------|---------|-----------------
Jack        | 20      | 0
Anna        | 16      | 1
Tom         | 30      | 2
And I have a list of names = ["Jack", "George", "William"]
What I need is a table that contains both the persons found in the table, with their actual values, and those not found, with default values, like:
name:string | age:string | ticket:string
------------|------------|---------------
Jack        | 20 years   | adult
George      | Not found  | Not found
William     | Not found  | Not found
How could I do this with SQL?
Thanks in advance.
You can use a left join, but you need a list of the values you care about:
select name, t.age, t.tickettype
from (select 'Jack' as name union all
select 'George' union all
select 'William'
) n left join
t
using (name);
Note that this represents the "not found" value using NULL. This is the typical method in SQL. If you want a string, I would suggest that you do that in your application layer, because int and enum cannot represent an arbitrary string value.
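As a side note, on MySQL 8.0.19 or later the name list can also be written with a VALUES row constructor instead of the UNION ALL chain (a sketch, assuming the same table t as above):
select name, t.age, t.tickettype
from (values row('Jack'), row('George'), row('William')) n(name) left join
     t
     using (name);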
Here is a version you can adapt to your needs:
CREATE TABLE users
(`nameg` varchar(4), `age` int, `ticketType` int)
;
INSERT INTO users
(`nameg`, `age`, `ticketType`)
VALUES
('Jack', 20, 0),
('Anna', 16, 1),
('Tom', 30, 2)
;
SELECT t1.name,
       IFNULL(CONCAT(`age`, ' years'), 'Not Found') AS age,
       CASE `ticketType`
            WHEN 0 THEN 'adult'
            WHEN 1 THEN 'teenager'
            WHEN 2 THEN 'elderly'
            ELSE 'NOT Found'
       END AS "type"
FROM (SELECT "Jack" AS name UNION SELECT "George" UNION SELECT "William") t1
LEFT JOIN users u ON u.nameg = t1.name
name | age | type
:------ | :-------- | :--------
Jack | 20 years | adult
George | Not Found | NOT Found
William | Not Found | NOT Found
db<>fiddle here

Unpack GROUP BY query for results that lack the key?

I'm trying to run a GROUP BY query to select all the items in a table grouping them by a collection id.
| id | collection | date |
|----|------------|------|
| 1  | x          | ...  |
| 2  | x          | ...  |
| 3  | y          | ...  |
| 4  |            | ...  |
| 5  |            | ...  |
I'd like to obtain a list like this:
[
  {
    collection: x,
    items: [1, 2]
  },
  {
    collection: y,
    items: [3]
  },
  {
    collection: null,
    items: [4]
  },
  {
    collection: null,
    items: [5]
  }
]
My query right now is the following, but I need a way to unpack the items that lack a collection id so that each of them ends up in its own group. How can I do that?
SELECT id, collection FROM items ORDER BY date DESC GROUP BY collection
I'm using MySQL but any SQL syntax would still be helpful.
Here I have shared two queries: one with a conditional GROUP BY clause and the other using UNION ALL. I would prefer the first one.
CREATE TABLE items( id int, collection varchar(10));
insert into items values( 1 , 'x');
insert into items values( 2 , 'x');
insert into items values( 3 , 'y');
insert into items (id)values( 4 );
insert into items (id)values( 5 );
Query#1 (conditional group by clause)
SELECT collection,group_concat(id) id FROM items
GROUP BY collection,
case when collection is not null then collection else id end
Output:
collection | id
-----------|-----
null       | 4
null       | 5
x          | 1,2
y          | 3
Query#2 (using union all)
SELECT collection,group_concat(id) id FROM items where collection is not null
GROUP BY collection
union all
select collection,id from items where collection is null
Output:
collection | id
-----------|-----
x          | 1,2
y          | 3
null       | 4
null       | 5
db<>fiddle here
Sorted by collection and id:
Query#1
SELECT collection,group_concat(id) id FROM items
GROUP BY collection,
case when collection is not null then collection else id end
order by collection,id
Output:
collection | id
-----------|-----
null       | 4
null       | 5
x          | 1,2
y          | 3
Query#2
select collection,id from
(
SELECT collection,group_concat(id) id FROM items where collection is not null
GROUP BY collection
union all
select collection,id from items where collection is null
)t order by collection,id
Output:
collection | id
-----------|-----
null       | 4
null       | 5
x          | 1,2
y          | 3
db<>fiddle here
SELECT
    CASE WHEN collection IS NULL THEN id ELSE collection END AS collection,
    GROUP_CONCAT(id) AS id
FROM items
GROUP BY 1
I see, you have an ORDER BY date?
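Since the desired output is JSON anyway, on MySQL 5.7.22+ the same conditional grouping could feed the JSON functions directly (a sketch, not taken from the answers above):
SELECT JSON_OBJECT('collection', collection,
                   'items', JSON_ARRAYAGG(id)) AS grp
FROM items
GROUP BY collection,
         CASE WHEN collection IS NOT NULL THEN collection ELSE id END;
-- one JSON object per group, e.g. {"collection": "x", "items": [1, 2]}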

Merge multiple MySQL tables which are related and return JSON

I've been trying to figure this out for hours, but with no luck.
This works, but it's not exactly what I want.
How it's now:
{"text":"My first report","comment":"Great Report","display_name":"Xavier"},
{"text":"My First report","comment":"Do you call this a report?","display_name":"Logan"}
How I would like it to be:
{"text":"My first report","comments":[{comment: "Great Report","display_name":"Xavier"}, {comment: "Do you call this a report?","display_name":"Logan"}],
Current Setup
Report
ID | User_ID | TEXT
---|---------|-----------------
15 | 3       | My first report
Users
ID | DISPLAY_NAME
---|--------------
1  | Xavier
2  | Logan
3  | Cyclops
Report_Comments
ID | User_ID | Report_ID | TEXT (comment)
---|---------|-----------|----------------
3  | 1       | 15        | Great Report
3  | 2       | 15        | Bad Report
My code:
SELECT r.text,
Group_concat(r.user_id) AS User_ID,
Group_concat(u.display_name) AS User_Name,
r.id,
Group_concat(c.text) AS comment
FROM report r
LEFT JOIN users u
ON u.id = r.user_id
LEFT JOIN report_comments c
ON c.report_id = r.id
WHERE r.user_id = :userId
GROUP BY r.id,
r.text
Using JSON_ARRAYAGG and JSON_OBJECT, we can achieve this in a single query.
Recreating your situation:
CREATE TABLE reports (id INT, user_id INT, title VARCHAR(60), PRIMARY KEY (id));
CREATE TABLE users (id INT, name VARCHAR(30), PRIMARY KEY (id));
CREATE TABLE comments (id INT, user_id INT, report_id INT, comment VARCHAR(100), PRIMARY KEY (id));
INSERT INTO users VALUES (1, 'Xavier'), (2, 'Logan'), (3, 'Cyclops');
INSERT INTO reports VALUES (10, 1, 'My First Report');
INSERT INTO comments VALUES (100, 1, 10, 'bad report'), (200, 1, 10, 'good report');
We can now run a SELECT that joins reports, comments, and users, grouping by the report id. The JSON_OBJECT function builds the JSON document, and JSON_ARRAYAGG aggregates the per-comment objects into 'comments'.
SELECT JSON_OBJECT('text', r.title, 'comments',
JSON_ARRAYAGG(JSON_OBJECT('comment', c.comment, 'name', u.name))) as report_comments
FROM reports AS r JOIN comments c on r.id = c.report_id
JOIN users u on c.user_id = u.id
GROUP BY r.id;
Result:
+-------------------------------------------------------------------------------------------------------------------------------------+
| report_comments |
+-------------------------------------------------------------------------------------------------------------------------------------+
| {"text": "My First Report", "comments": [{"name": "Xavier", "comment": "bad report"}, {"name": "Logan", "comment": "good report"}]} |
+-------------------------------------------------------------------------------------------------------------------------------------+
Result using JSON_PRETTY, executing the query with \G instead of ;:
*************************** 1. row ***************************
report_comments: {
"text": "My First Report",
"comments": [
{
"name": "Xavier",
"comment": "bad report"
},
{
"name": "Logan",
"comment": "good report"
}
]
}
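One caveat: the inner JOINs drop reports that have no comments yet. If those should still appear, as the asker's original LEFT JOIN suggests, one possible sketch (an assumption, not part of the answer above) keeps them with an empty comments array:
SELECT JSON_OBJECT('text', r.title, 'comments',
       IF(COUNT(c.id) = 0,
          JSON_ARRAY(),  -- no comments: emit [] instead of [{"comment": null, "name": null}]
          JSON_ARRAYAGG(JSON_OBJECT('comment', c.comment, 'name', u.name)))) AS report_comments
FROM reports r
LEFT JOIN comments c ON c.report_id = r.id
LEFT JOIN users u ON u.id = c.user_id
GROUP BY r.id;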

PostgreSQL recursive rows to JSONB map

This question is best explained with an example. So, if you have 2 tables, category and event, in PostgreSQL as follows:
create table category (
id integer primary key,
type varchar(255),
label varchar (255),
parent_id integer
);
insert into category (id, type, label, parent_id)
values (1, 'organisation', 'Google', null),
(2, 'product', 'Gmail', 1),
(3, 'organisation', 'Apple', null),
(4, 'product', 'iPhone', 3),
(5, 'product', 'Mac', 3);
create table event (
id integer primary key,
name varchar (255),
category_id integer
);
insert into event (id, name, category_id)
values (1, 'add', 4),
(2, 'delete', 5),
(3, 'update', 2);
As you can see, the category table is quite dynamic and a hierarchy of categories can be defined.
What I'm trying to achieve is selecting entries of the event table, joining them with the categories, and flattening the result to a JSON structure. I can illustrate using the following query:
select e.*,
jsonb_build_object(
c1.type, c1.label,
c2.type, c2.label
) as categories
from event e
left join category c2 on c2.id = e.category_id
left join category c1 on c1.id = c2.parent_id
This will return:
+----+--------+-------------+------------------------------------------------+
| id | name | category_id | categories |
+----+--------+-------------+------------------------------------------------+
| 1 | add | 4 | {"organisation": "Apple", "product": "iPhone"} |
| 2 | delete | 5 | {"organisation": "Apple", "product": "Mac"} |
| 3 | update | 2 | {"organisation": "Google", "product": "Gmail"} |
+----+--------+-------------+------------------------------------------------+
However, this approach only works when event.category_id references a child category with precisely one parent (2 levels). What I'm really looking for is to generate categories regardless of whether the category (a) has no parent at all (a 1-level category) OR (b) has more than one ancestor (e.g. 3 levels). For example, if I add the following rows to the event and category tables:
insert into category (id, type, label, parent_id)
values (6, 'module', 'Mobile', 5), /* has 2 parents */
(7, 'organisation', 'AirBNB', null); /* has no parents */
insert into event (id, name, category_id)
values (4, 'event1', 6),
(5, 'event2', 7);
... and run the query from above, it will return:
ERROR: argument 1: key must not be null
SQL state:
My gut feeling is a recursive CTE could solve this.
Update 1
create or replace function category_array(category_parent_id int) returns setof jsonb as $$
select case
when count(x) > 0 then
jsonb_agg(f.x) || jsonb_build_object (
c.type, c.label
)
else jsonb_build_object (
c.type, c.label
)
end as category_pair
from category c
left join category_array (c.parent_id) as f(x) on true
where c.id = category_parent_id
group by c.id, c.type, c.label;
$$ language sql;
... and call using this SQL ...
select *,
category_array(category_id)
from event;
... will return the following ...
+----+--------+-------------+--------------------------------------------------------------------------+
| id | name | category_id | categories |
+----+--------+-------------+--------------------------------------------------------------------------+
| 1 | add | 4 | [{"organisation": "Apple"}, {"product": "iPhone"}] |
| 2 | delete | 5 | [{"organisation": "Apple"}, {"product": "Mac"}] |
| 3 | update | 2 | [{"organisation": "Google"}, {"product": "Gmail"}] |
| 4 | event1 | 6 | [[{"organisation": "Apple"}, {"product": "Mac"}], {"module": "Mobile"}] |
| 5 | event2 | 7 | {"organisation": "AirBNB"} |
+----+--------+-------------+--------------------------------------------------------------------------+
Pretty close but not quite there just yet!
Use the concatenation operator || to build cumulative jsonb objects:
with recursive events as (
select
e.id, e.name, e.category_id as parent_id,
jsonb_build_object(c.type, c.label) as categories
from event e
left join category c on c.id = e.category_id
union all
select
e.id, e.name, c.parent_id,
categories || jsonb_build_object(c.type, c.label)
from events e
join category c on c.id = e.parent_id
)
select id, name, categories
from events
where parent_id is null
order by id;
Note that the query is not protected against circular dependencies, so you need to be sure that every path in the table ends in null.
Test the query on DbFiddle.
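If cycles are possible, one common safeguard (a sketch, not tested against the fiddle above) is to carry the visited category ids along and stop as soon as one repeats:
with recursive events as (
  select
    e.id, e.name, e.category_id as parent_id,
    jsonb_build_object(c.type, c.label) as categories,
    array[e.category_id] as seen                -- category ids visited so far
  from event e
  left join category c on c.id = e.category_id
  union all
  select
    e.id, e.name, c.parent_id,
    e.categories || jsonb_build_object(c.type, c.label),
    e.seen || c.id
  from events e
  join category c on c.id = e.parent_id
  where c.id <> all(e.seen)                     -- break out of circular chains
)
select id, name, categories
from events
where parent_id is null
order by id;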
Alternative solution:
create or replace function get_categories(int)
returns jsonb language sql as $$
select case
when parent_id is null then
jsonb_build_object (type, label)
else
jsonb_build_object (type, label) || get_categories(parent_id)
end as categories
from category
where id = $1
$$;
select id, name, get_categories(category_id)
from event
order by id;
DbFiddle.

Using a list of ids from a column to create a list of names to save to another

I have a table that has a column that contains a list of ids:
1234,2345,3456
Each of those ids is the primary key to one of the rows in this same table.
Each row has a name field. How can I change the id list to a name list?
ID, NAME, IDLIST
----------------------
1234, name1, (null)
2345, name2, 1234,2345
3456, name3, 1234,3456
Using the above idList for each row would create a new list:
nameList for 2345 = name1,name2
nameList for 3456 = name1,name3
The particular dialect of SQL doesn't matter, I'm just trying to find if this can be done in a single Update.
Try this solution (MySQL; for Oracle, change GROUP_CONCAT to WMSYS.WM_CONCAT):
Preparation:
create table tmp_table(
id int,
name varchar(5),
idslist varchar(20)
);
insert into tmp_table(id, name, idslist) values(
1234,
'name1',
null
);
insert into tmp_table(id, name, idslist) values(
2345,
'name2',
'1234,2345'
);
insert into tmp_table(id, name, idslist) values(
3456,
'name3',
'1234,3456'
);
Query:
select
t1.id, t1.name, t1.idslist, GROUP_CONCAT(distinct t2.name)
from
tmp_table t1
LEFT JOIN tmp_table t2 ON t1.idslist like CONCAT('%', t2.id, '%')
GROUP BY t1.id, t1.name, t1.idslist
;
Result:
1234 | name1 | (null)    | (null)
2345 | name2 | 1234,2345 | name1,name2
3456 | name3 | 1234,3456 | name3,name1
It works, but may cause trouble if you have ids like, say, 12 and 1234, since LIKE '%12%' matches both.
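In MySQL, FIND_IN_SET avoids those false matches, since it compares whole comma-separated items (a sketch of the same query; it assumes the list has no spaces around the commas):
select
t1.id, t1.name, t1.idslist, GROUP_CONCAT(distinct t2.name)
from
tmp_table t1
LEFT JOIN tmp_table t2 ON FIND_IN_SET(t2.id, t1.idslist) > 0
GROUP BY t1.id, t1.name, t1.idslist
;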
I'd advise you to use a many-to-many relation table instead. If you need ordering, you can add a position field; it would look something like this:
+----------+-----------+----------+
| child_id | parent_id | position |
+----------+-----------+----------+
| 2345     | 1234      | 1        |
| 2345     | 2345      | 2        |
+----------+-----------+----------+
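With that structure the name list can be rebuilt in order with a plain join (a sketch; it assumes the relation table is named relation and reuses tmp_table for the names):
select r.child_id as id,
       GROUP_CONCAT(t.name order by r.position) as namelist
from relation r
join tmp_table t on t.id = r.parent_id
group by r.child_id;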