Select - Oracle JSON Object - Join - json

I have a requirement to select column values in Oracle in a JSON structure. Let me explain the requirement in detail.
We have a table called "dept" that has the following rows.
There is another table called "emp" that has the following rows.
The output we need is as follows:
{"Data": [{
"dept": "Sports",
"City": "LA",
"employees": {
"profile":[
{"name": "Ben John", "salary": "15000"},
{"name": "Carlos Moya", "salary": "19000"}]
}},
{"dept": "Sales",
"City": "Miami",
"employees": {
"profile":[
{"name": "George Taylor", "salary": "9000"},
{"name": "Emma Thompson", "salary": "8500"}]
}}
]
}
The SQL that I issued is as follows:
select json_object('dept' value b.deptname,
                   'city' value b.deptcity,
                   'employees' value json_object('employee name' value a.empname,
                                                 'employee salary' value a.salary)
                   format json) as JSONRETURN
from emp a, dept b
where a.deptno = b.deptno
However, the result looks like the following and not what we expected.
Please note the parent data is repeated. What is the mistake I am making?
Thanks for the help
Bala

You can do something like this. Note the multiple (nested) calls to json_object and json_arrayagg. Tested in Oracle 12.2; other versions may have other tools that can make the job easier.
select json_object(
         'Data' value
           json_arrayagg(
             json_object(
               'dept' value deptname,
               'City' value deptcity,
               'employees' value
                 json_object(
                   'profile' value
                     json_arrayagg(
                       json_object(
                         'name'   value empname,
                         'salary' value salary
                       ) order by empid   -- or as needed
                     )
                 )
             ) order by deptno            -- or as needed
           )
       ) as jsonreturn
from dept join emp using (deptno)
group by deptno, deptname, deptcity
;
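For reference, here is a minimal sketch of dept and emp tables that reproduce the expected output with the query above. The column names are taken from the two queries; the deptno and empid values are invented, and the row data is inferred from the expected JSON:
create table dept (deptno number primary key, deptname varchar2(30), deptcity varchar2(30));
create table emp  (empid  number primary key, empname  varchar2(30), salary number,
                   deptno number references dept);
-- sample data inferred from the expected output; the key values are made up
insert into dept values (10, 'Sports', 'LA');
insert into dept values (20, 'Sales',  'Miami');
insert into emp  values (1, 'Ben John',      15000, 10);
insert into emp  values (2, 'Carlos Moya',   19000, 10);
insert into emp  values (3, 'George Taylor',  9000, 20);
insert into emp  values (4, 'Emma Thompson',  8500, 20);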

Related

How to extract an entire JSON element from Oracle 19c CLOB

I have a table with JSON data stored in a CLOB. We get this data from an external source, and recently they changed some formatting, which causes issues with our post-processing.
The data contains an object of user roles and, when correctly formatted, arrays of locations associated with an individual role.
The problem arises when one user has a role (Test Role 1) with a Location of just curly braces {} and another user has the same role (Test Role 1) with a location holding valid data. When we query the second user's roles, we get null location data for that user.
In the sample data there is employeeID 1 (Whitbuckle, Dalongrirlum) who has roles Test Role 1 and Test Role 2, each with a Location of {}, and employeeID 2 (Longblade, Skolout) with a role of Test Role 1 with valid locations. The other two users have either an empty EntitlementJSON attribute or Test Role 3 with valid location data.
When we query the data, the employeeID 2 record has null locations even if we explicitly select only their employeeID.
Requested Solution:
I am writing a validation procedure to make sure that rows with the bad formatting get identified. To do this, I would like to select into a variable the contents of the EntitlementJSON attribute for a single user. I would then check for the existence of "location":{}. If it exists, this is a bad record. For example, what I would like to see for employeeID 1 is:
"Test Role 1": {
"dodaac": {},
"fundCode": {},
"glRepair": {},
"location": {},
"cognos": {},
"jv": {}
},
"Test Role 2": {
"dodaac": {},
"fundCode": {},
"glRepair": {},
"location": {},
"cognos": {},
"jv": {}
}
There is an example at this db<>fiddle
Code samples
CREATE TABLE TEST_JSON
( PROCESS_ID NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
JSON_DATA CLOB CONSTRAINT check_json CHECK (JSON_DATA IS JSON)
)
LOB (JSON_DATA) STORE AS SECUREFILE (
ENABLE STORAGE IN ROW
CHUNK 8192
RETENTION
NOCACHE
LOGGING);
-- TABLE ALTERS
ALTER TABLE TEST_JSON
ADD CONSTRAINT TEST_JSON_PK
PRIMARY KEY ( PROCESS_ID ) USING INDEX
ENABLE;
set serveroutput on
declare
c clob;
BEGIN
c:= to_clob('[
{
"displayName": "Whitbuckle, Dalongrirlum",
"employeeID": "1",
"EntitlementJSON": {
"Test Role 1": {
"dodaac": {},
"fundCode": {},
"glRepair": {},
"location": {},
"cognos": {},
"jv": {}
},
"Test Role 2": {
"dodaac": {},
"fundCode": {},
"glRepair": {},
"location": {},
"cognos": {},
"jv": {}
}
},
"manager": "Urgaehilde Rubyforged",
"company": "Bloodguard Industrie"
},
{
"displayName": "Koboldbelly, Sitgrolin",
"employeeID": "4",
"EntitlementJSON": {},
"manager": "Kogrubera Orcborn",
"company": "Bloodguard Industrie"
},
{
"displayName": "Longblade, Skolout",
"employeeID": "2",
"EntitlementJSON": {
"Test Role 1": {
"location": [
"Rockwall Villa - RV",
"Thunderbluff - TB"
]
}
},
"manager": "Therrilyn Mithrilpike",
"company": "Bloodguard Industrie"
},
{
"displayName": "Warmcoat, Alfomdum",
"employeeID": "3",
"EntitlementJSON": {
"Test Role 3": {
"location": [
"ALL"
]
}
},
"manager": "Therrilyn Mithrilpike",
"company": "Bloodguard Industrie"
}
]');
INSERT INTO TEST_JSON (JSON_DATA)
VALUES (c);
commit;
END;
/
Here is the query we run:
select process_id,
display_name,
employeeID,
manager,
listagg(TR1) within group (order by process_id, display_name, employeeID, manager) Role_TR1,
listagg(TR2) within group (order by process_id, display_name, employeeID, manager) Role_TR2,
listagg(TR3) within group (order by process_id, display_name, employeeID, manager) Role_TR3,
listagg(TR4) within group (order by process_id, display_name, employeeID, manager) Role_TR4
from (select j.process_id,
jt.display_Name,
jt.employeeID,
jt.manager,
TR1,
TR2,
TR3,
TR4
from test_json j
cross apply JSON_TABLE(j.JSON_DATA, '$[*]'
COLUMNS (display_Name VARCHAR2(200 CHAR) PATH '$.displayName',
employeeID VARCHAR2(20 CHAR) PATH '$.employeeID',
manager VARCHAR2(200 CHAR) PATH '$.manager',
nested path '$.EntitlementJSON."Test Role 1"' columns
(TR1 VARCHAR2(4000 CHAR) FORMAT JSON WITH WRAPPER PATH '$.location[*]'),
nested path '$.EntitlementJSON."Test Role 2"' columns
(TR2 VARCHAR2(4000 CHAR) FORMAT JSON WITH WRAPPER PATH '$.location[*]'),
nested path '$.EntitlementJSON."Test Role 3"' columns
(TR3 VARCHAR2(4000 CHAR) FORMAT JSON WITH WRAPPER PATH '$.location[*]'),
nested path '$.EntitlementJSON."Test Role 4"' columns
(TR4 VARCHAR2(4000 CHAR) FORMAT JSON WITH WRAPPER PATH '$.location[*]')
)) jt
where process_id = 1)
--and jt.employeeID = '2')
group by process_id, employeeID, display_name, manager;
Even when we un-comment the "and jt.employeeID = '2'" line, we still get null locations for employeeID 2.
You don't need to aggregate or use NESTED PATH:
SELECT process_id,
display_name,
employeeID,
manager,
tr1,
tr2,
tr3,
tr4
from test_json j
CROSS APPLY JSON_TABLE(
j.JSON_DATA, '$[*]'
COLUMNS (
display_Name VARCHAR2(200 CHAR) PATH '$.displayName',
employeeID VARCHAR2(20 CHAR) PATH '$.employeeID',
manager VARCHAR2(200 CHAR) PATH '$.manager',
tr1 JSON PATH '$.EntitlementJSON."Test Role 1".location',
tr2 JSON PATH '$.EntitlementJSON."Test Role 2".location',
tr3 JSON PATH '$.EntitlementJSON."Test Role 3".location',
tr4 JSON PATH '$.EntitlementJSON."Test Role 4".location'
)
) e
WHERE j.process_id = 1
AND e.employeeID = '2';
Which, for the sample data, outputs:
PROCESS_ID | DISPLAY_NAME       | EMPLOYEEID | MANAGER               | TR1                                         | TR2  | TR3  | TR4
1          | Longblade, Skolout | 2          | Therrilyn Mithrilpike | ["Rockwall Villa - RV","Thunderbluff - TB"] | null | null | null
fiddle
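For the validation step described in the question, here is a hedged PL/SQL sketch. It reuses the test_json table above; the regular expression that tolerates optional whitespace around "location": {} is an assumption about how the JSON text is stored, so adjust it if your data is formatted differently:
set serveroutput on
declare
  v_entitlements clob;
begin
  -- pull the EntitlementJSON fragment for a single user (employeeID 1 here)
  select jt.entitlements
    into v_entitlements
    from test_json j,
         json_table(j.json_data, '$[*]'
           columns (
             employeeid   varchar2(20)     path '$.employeeID',
             entitlements clob format json path '$.EntitlementJSON'
           )) jt
   where j.process_id = 1
     and jt.employeeid = '1';

  -- flag the record if any role carries an empty location object
  if regexp_like(v_entitlements, '"location"\s*:\s*\{\s*\}') then
    dbms_output.put_line('employeeID 1: bad "location": {} formatting found');
  else
    dbms_output.put_line('employeeID 1: locations look OK');
  end if;
end;
/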

Update JSON data type column in MySql table

I have started using MySQL 8 and am trying to update a JSON data type column in a MySQL table.
My table t1 looks as below:
# id group names
1100000 group1 [{"name": "name1", "type": "user"}, {"name": "name2", "type": "user"}, {"name": "techDept", "type": "dept"}]
I want to add user3 to group1 and have written the query below:
update t1 set names = JSON_SET(names, "$.name", JSON_ARRAY('user3')) where group = 'group1';
However, the above query is not working.
I suppose you want the result to be:
[{"name": "name1", "type": "user"}, {"name": "name2", "type": "user"}, {"name": "techDept", "type": "dept"}, {"name": "user3", "type": "user"}]
This should work:
UPDATE t1 SET names = JSON_ARRAY_APPEND(names, '$', JSON_OBJECT('name', 'user3', 'type', 'user'))
WHERE `group` = 'group1';
But it's not clear why you are using JSON at all. The normal way to store this data would be to create a second table for group members:
CREATE TABLE group_members (
member_id INT PRIMARY KEY,
`group` VARCHAR(10) NOT NULL,
member_type ENUM('user','dept') NOT NULL DEFAULT 'user',
name VARCHAR(10) NOT NULL
);
Then store one member per row.
Adding a new member would look like:
INSERT INTO group_members
SET `group` = 'group1', name = 'user3';
So much simpler than using JSON!
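If you do keep the JSON column, MySQL 8 can also read the members back out as rows with JSON_TABLE; here is a small sketch against the t1 table from the question:
-- expand the names JSON array into one row per member (MySQL 8+)
SELECT t1.`group`, m.name, m.type
FROM t1,
     JSON_TABLE(
       t1.names, '$[*]'
       COLUMNS (
         name VARCHAR(50) PATH '$.name',
         type VARCHAR(10) PATH '$.type'
       )
     ) AS m
WHERE t1.`group` = 'group1';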

How to delete multiple values in a PostgreSQL JSONB array of objects

I have the JSONB array below:
[
{
"name": "test",
"age": "21",
"phone": "6589",
"town": "54"
},
{
"name": "test12",
"age": "67",
"phone": "6546",
"town": "54"
},
{
"name": "test123",
"age": "21",
"phone": "6589",
"town": "54"
},
{
"name": "test125",
"age": "67",
"phone": "6546",
"town": "54"
}
]
Now I want to delete an object if its name is test or test125. How can I delete multiple or single values in a JSONB array?
An UPDATE statement with a subquery, which eliminates the unwanted elements using the NOT IN operator and aggregates the rest with the jsonb_agg() function, will do the job.
Choose this:
UPDATE tab
SET jsdata = t.js_new
FROM
(
SELECT jsonb_agg( (jsdata ->> ( idx-1 )::int)::jsonb ) AS js_new
FROM tab
CROSS JOIN jsonb_array_elements(jsdata)
WITH ORDINALITY arr(j,idx)
WHERE j->>'name' NOT IN ('test','test125')
) t
or this one:
WITH t AS (
SELECT jsonb_agg( (jsdata ->> ( idx-1 )::int)::jsonb ) AS js_new
FROM tab
CROSS JOIN jsonb_array_elements(jsdata)
WITH ORDINALITY arr(j,idx)
WHERE j->>'name' NOT IN ('test','test125')
)
UPDATE tab
SET jsdata = js_new
FROM t
Demo
If you have Postgres 12, you can use the jsonb_path_query_array function to filter the jsonb. Here is a sample for your question:
with t (j) as ( values ('[
{"name":"test","age":"21","phone":"6589","town":"54"},
{"name":"test12","age":"67","phone":"6546","town":"54"},
{"name":"test123","age":"21","phone":"6589","town":"54"},
{"name":"test125","age":"67","phone":"6546","town":"54"}
]'::jsonb) )
select jsonb_path_query_array(j,
'$[*] ? (@.name != "test" && @.name != "test125")')
from t;
More info: https://www.postgresql.org/docs/12/functions-json.html
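If the goal is to persist the change rather than just select it, the same path filter works in an UPDATE; here is a sketch assuming the tab/jsdata names used in the first answer:
-- Postgres 12+: overwrite the column with the filtered array
UPDATE tab
SET jsdata = jsonb_path_query_array(
      jsdata,
      '$[*] ? (@.name != "test" && @.name != "test125")'
    );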
I would create a function that does that:
create function remove_array_elements(p_data jsonb, p_key text, p_value text[])
returns jsonb
as
$$
select jsonb_agg(e order by idx)
from jsonb_array_elements(p_data) with ordinality as t(e,idx)
where t.e ->> p_key <> ALL (p_value) ;
$$
language sql
immutable;
Then you can use it like this:
update the_table
set the_column = remove_array_elements(the_column, 'name', array['test', 'test125'])
where id = ...;
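As a quick standalone check of the function with literal data, only the test12 and test123 objects should survive:
select remove_array_elements(
         '[{"name":"test"},{"name":"test12"},{"name":"test123"},{"name":"test125"}]'::jsonb,
         'name',
         array['test', 'test125']
       );
-- expected: [{"name": "test12"}, {"name": "test123"}]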
Online example

Oracle JSON_TABLE to PostgreSQL - how to search from the second hierarchical key in a JSON column

I'm trying to migrate Oracle 12c queries to Postgres 11.5.
Here is the json:
{
"cost": [{
"spent": [{
"ID": "HR",
"spentamount": {
"amount": 2000.0,
"country": "US"
}
}]
}],
"time": [{
"spent": [{
"ID": "HR",
"spentamount": {
"amount": 308.91,
"country": "US"
}
}]
}]
}
Here is the query that has to be migrated to Postgres 11.5:
select js.*
from P_P_J r,
json_table(r.P_D_J, '$.*[*]'
COLUMNS(NESTED PATH '$.spent[*]'
COLUMNS(
ID VARCHAR2(100 CHAR) PATH '$.ID',
amount NUMBER(10,4) PATH '$.spentamount.amount',
country VARCHAR2(100 CHAR) PATH '$.spentamount.country'))
) js
The result:
ID, amount, country
HR, 2000.0, US
HR, 308.91, US
I have two questions here:
What does $.*[*] mean?
How can we migrate this query to Postgres so that it directly looks at 'spent' instead of navigating 'cost'->'spent' or 'time'->'spent'?
There is no direct replacement for json_table in Postgres. You will have to combine several calls to explode the JSON structure.
You didn't show us your expected output, but as far as I can tell, the following should do the same:
select e.item ->> 'ID' as id,
(e.item #>> '{spentamount, amount}')::numeric as amount,
e.item #>> '{spentamount, country}' as country
from p_p_j r
cross join jsonb_each(r.p_d_j) as a(key, val)
cross join lateral (
select *
from jsonb_array_elements(a.val)
where jsonb_typeof(a.val) = 'array'
) as s(element)
cross join jsonb_array_elements(s.element -> 'spent') as e(item)
;
The JSON path expression '$.*[*]' means: iterate over all top-level keys, then iterate over all array elements found in there; the nested path '$.spent[*]' then again iterates over all array elements in there. These steps are reflected in the three JSON function calls needed to get there.
With Postgres 12, this would be a bit easier, as it can be done with a single call to jsonb_path_query(), which accesses the elements using a very similar JSON path expression:
select e.item ->> 'ID' as id,
(e.item #>> '{spentamount, amount}')::numeric as amount,
e.item #>> '{spentamount, country}' as country
from p_p_j r
cross join jsonb_path_query(r.p_d_j, '$.*[*].spent[*]') as e(item)
;
Online example
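For completeness, here is a minimal setup sketch that makes both Postgres queries above runnable; the table and column names p_p_j/p_d_j come from the Oracle query, and storing the document as jsonb is an assumption:
-- assumed table: one jsonb column holding the document from the question
create table p_p_j (p_d_j jsonb);

insert into p_p_j (p_d_j) values ('{
  "cost": [{"spent": [{"ID": "HR", "spentamount": {"amount": 2000.0, "country": "US"}}]}],
  "time": [{"spent": [{"ID": "HR", "spentamount": {"amount": 308.91, "country": "US"}}]}]
}');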

Manipulating a JSON column and splitting the values

I have the following text in one of my Postgres tables as a TEXT datatype:
[
{"type": "text", "values": ["General"], "valueType": "string", "fieldType": "text", "value": ["General"], "customFieldId": "ee", "name": "customer_group"},
{"type": "text", "values": ["Vienna"], "valueType": "string", "fieldType": "text", "value": ["Vienna"], "customFieldId": "eU", "name": "customer_city"},
{"type": "text", "values": ["Mario"], "valueType": "string", "fieldType": "text", "value": ["Mario"], "customFieldId": "eZ", "name": "first_name"},
{"type": "text", "values": ["2019-06-30"], "valueType": "date", "fieldType": "text", "value": ["2019-06-30"], "customFieldId": "ea", "name": "created_at_date"}
]
I need to split the values of this TEXT field into columns and rows. For that, I have converted the TEXT column to JSON as below:
SELECT CAST( "customFieldValues" as JSON) "customFieldValues" FROM fr.contacts
But when I tried to manipulate this JSON value, I got NULL as the result.
WITH CTE AS(SELECT CAST( "customFieldValues" as JSON) "customFieldValues" FROM fr.contacts
)
SELECT
"customFieldValues" ->>'customer_city' as dd
FROM CTE
Does anyone have any suggestions on this? How can I get the column names and their values as rows? I want to create a TABLE based on this data.
Any suggestions would be of great help.
Below is the expected result:
customer_group | customer_city | first_name | created_at_date
General        | Vienna        | Mario      | 2019-06-30
Disclaimer: It is still not clear:
Why is there both a values element and a value element? What is the difference?
Why are these elements arrays?
Step-by-step demo: db<>fiddle
SELECT
MAX(value) FILTER (WHERE column_name = 'customer_group') AS customer_group,
MAX(value) FILTER (WHERE column_name = 'customer_city') AS customer_city,
MAX(value) FILTER (WHERE column_name = 'first_name') AS first_name,
MAX(value) FILTER (WHERE column_name = 'created_at_date') AS created_at_date
FROM (
SELECT
elems ->> 'name' AS column_name,
elems -> 'value' ->> 0 AS value,
data
FROM
mytable,
json_array_elements(data::json) elems
) s
GROUP BY data
Cast text to json with ::json
Expand the JSON array: One row for each element with json_array_elements()
Getting the value: -> 'value' gets the array, ->> 0 gets the text representation of the first array element (the only one here)
Getting the column: ->> 'name' gets the text representation of the column name
Classical pivot algorithm (turning rows to columns) with the FILTER clause.
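Since the question mentions wanting to create a table from this data, the pivot query can be wrapped in CREATE TABLE ... AS; here is a sketch reusing the mytable/data names from the answer above (contacts_pivoted is a made-up name):
-- materialize the pivoted result into a new table
CREATE TABLE contacts_pivoted AS
SELECT
    MAX(value) FILTER (WHERE column_name = 'customer_group')  AS customer_group,
    MAX(value) FILTER (WHERE column_name = 'customer_city')   AS customer_city,
    MAX(value) FILTER (WHERE column_name = 'first_name')      AS first_name,
    MAX(value) FILTER (WHERE column_name = 'created_at_date') AS created_at_date
FROM (
    SELECT
        elems ->> 'name'       AS column_name,
        elems -> 'value' ->> 0 AS value,
        data
    FROM mytable,
         json_array_elements(data::json) elems
) s
GROUP BY data;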