I can create JSON objects using jsonb_build_object the way I want them. E.g.
SELECT jsonb_build_object('id', id) FROM (SELECT generate_series(1,3) id) objects;
results in
jsonb_build_object
------------------
{"id": 1}
{"id": 2}
{"id": 3}
But when I want to add them to an array, they are wrapped in an additional object, using the column name as key:
SELECT jsonb_build_object(
'foo', 'bar',
'collection', jsonb_agg(collection)
)
FROM (
SELECT jsonb_build_object('id', id)
FROM (
SELECT generate_series(1,3) id
) objects
) collection;
results in
{"foo": "bar", "collection": [{"jsonb_build_object": {"id": 1}}, {"jsonb_build_object": {"id": 2}}, {"jsonb_build_object": {"id": 3}}]}
How can I get
{"foo": "bar", "collection": [{"id": 1}, {"id": 2}, {"id": 3}]}
instead?
Use jsonb_agg(collection.jsonb_build_object). You can use aliases too, but the point is that collection refers to the entire row, which has a single column named (by default) jsonb_build_object, and that column holds the JSON you want to aggregate.
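For example, applying that to the original query, only the aggregate's argument changes:
SELECT jsonb_build_object(
  'foo', 'bar',
  'collection', jsonb_agg(collection.jsonb_build_object)
)
FROM (
  SELECT jsonb_build_object('id', id)
  FROM (
    SELECT generate_series(1,3) id
  ) objects
) collection;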
With some simplification and aliases, your query can be:
SELECT jsonb_build_object(
'foo', 'bar',
'collection', jsonb_agg(js)
)
FROM generate_series(1,3) id
CROSS JOIN LATERAL jsonb_build_object('id', id) js;
Notes:
LATERAL is implicit here; I wrote it out only for clarity
aliasing like this in the FROM clause creates both a table alias and a column alias with the same name, so it is equivalent to jsonb_build_object('id', id) AS js(js), as spelled out below
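Spelled out with explicit table and column aliases, the equivalent form would be:
SELECT jsonb_build_object(
  'foo', 'bar',
  'collection', jsonb_agg(js.js)
)
FROM generate_series(1,3) AS id
CROSS JOIN LATERAL jsonb_build_object('id', id) AS js(js);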
Related
I have a requirement to select column values in Oracle in a JSON structure. Let me explain the requirement in detail
We have a table called "dept" that has the following rows
There is another table called "emp" that has the following rows
The output we need is as follows
{"Data": [{
"dept": "Sports",
"City": "LA",
"employees": {
"profile":[
{"name": "Ben John", "salary": "15000"},
{"name": "Carlos Moya", "salary": "19000"}]
}},
{"dept": "Sales",
"City": "Miami",
"employees": {
"profile":[
{"name": "George Taylor", "salary": "9000"},
{"name": "Emma Thompson", "salary": "8500"}]
}}
]
}
The SQL that I issued is as follows
select json_object('dept' value b.deptname,
'city' value b.deptcity,
'employees' value json_object('employee name' value a.empname,
'employee salary' value a.salary)
format json) as JSONRETURN
from emp a, dept b where
a.deptno=b.deptno
However, the result looks like the following and not what we expected.
Please note the parent data is repeated. What is the mistake I am making?
Thanks for the help
Bala
You can do something like this. Note the multiple (nested) calls to json_object and json_arrayagg. Tested in Oracle 12.2; other versions may have other tools that can make the job easier.
select json_object(
'Data' value
json_arrayagg(
json_object (
'dept' value deptname,
'City' value deptcity,
'employees' value
json_object(
'profile' value
json_arrayagg(
json_object(
'name' value empname,
'salary' value salary
) order by empid -- or as needed
)
)
) order by deptno -- or as needed
)
) as jsonreturn
from dept join emp using (deptno)
group by deptno, deptname, deptcity
;
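For reference, a minimal test setup consistent with the sample output above (table and column types are assumed, since the DDL wasn't posted):
-- assumed definitions; adjust types to match your actual schema
create table dept (deptno number primary key, deptname varchar2(100), deptcity varchar2(100));
create table emp  (empid number primary key, empname varchar2(100), salary number, deptno number references dept);

insert into dept values (1, 'Sports', 'LA');
insert into dept values (2, 'Sales', 'Miami');
insert into emp values (1, 'Ben John', 15000, 1);
insert into emp values (2, 'Carlos Moya', 19000, 1);
insert into emp values (3, 'George Taylor', 9000, 2);
insert into emp values (4, 'Emma Thompson', 8500, 2);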
There are many examples of JSON parsing in Postgres which pull data from a table. I have a raw JSON string handy and would like to practice using the JSON functions and operators. Is it possible to do this without using tables? Or ... what is the most straightforward way to declare it as a variable? Something like...
# Declare
foojson = "{'a':'foo', 'b':'bar'}"
# Use
jsonb_array_elements(foojson) -> 'a'
Basically I'd like the last line to print to console or be wrappable in a SELECT statement so I can rapidly "play" with some of these operators.
You can pass it directly to the function
select '{"a": "foo", "b": "bar"}'::jsonb ->> 'a';
select *
from jsonb_each('{"a": "foo", "b": "bar"}');
select *
from jsonb_array_elements('[{"a": "foo"}, {"b": "bar"}]');
Or if you want to pretend it's part of a table:
with data (json_value) as (
values
('{"a": "foo", "b": "bar"}'::jsonb),
('{"foo": 42, "x": 100}')
)
select e.*
from data d
cross join jsonb_each(d.json_value) as e;
with data (json_value) as (
values
('{"a": 1, "b": "x"}'::jsonb),
('{"a": 42, "b": "y"}')
)
select d.json_value ->> 'a',
d.json_value ->> 'b'
from data d;
The following Snowflake query returns the JSON structure, but the output is sorted by the keys. How can I keep the original key order instead? Is there any parameter that needs to be set?
select
object_construct
(
'entity', 'XYZ',
'allowed', 'Yes',
'currency', 'USD',
'statement_month','July, 2020'
)
Output: --it sorts by the keys
{
"allowed": "Yes",
"currency": "USD",
"entity": "XYZ",
"statement_month": "July, 2020"
}
Expected Output: --same order as specified
{
"entity": "XYZ",
"allowed": "Yes",
"currency": "USD",
"statement_month": "July, 2020"
}
JSON is an unordered collection of name/value pairs, so key order cannot be guaranteed in JSON.
The constructed object does not necessarily preserve the original order of the key-value pairs.
You can do it like below:
SELECT mytable:entity::string as entity,
mytable:allowed::string as allowed,
mytable:currency::string as currency,
mytable:statement_month::string as statement_month
from
(select
object_construct
(
'entity', 'XYZ',
'allowed', 'Yes',
'currency', 'USD',
'statement_month','July, 2020'
) mytable);
Unfortunately, no
Usage notes:
https://docs.snowflake.com/en/sql-reference/functions/object_construct.html#usage-notes
The constructed object does not necessarily preserve the original order of the key-value pairs.
The same applies to PARSE_JSON; see its usage notes:
https://docs.snowflake.com/en/sql-reference/functions/parse_json.html#usage-notes
The order of the key-value pairs in the string produced by TO_JSON is not predictable.
The order was found to be maintained when using object_construct(*):
WITH base AS (
SELECT 'XYZ' "entity", 'Yes' "allowed", 'USD' "currency", 'July, 2020' "statement_month")
SELECT object_construct(*) FROM base;
I'm trying to migrate Oracle 12c queries to Postgres 11.5.
Here is the json:
{
"cost": [{
"spent": [{
"ID": "HR",
"spentamount": {
"amount": 2000.0,
"country": "US"
}
}]
}],
"time": [{
"spent": [{
"ID": "HR",
"spentamount": {
"amount": 308.91,
"country": "US"
}
}]
}]
}
Here is the query that has to be migrated to Postgres 11.5:
select js.*
from P_P_J r,
json_table(r.P_D_J, '$.*[*]'
COLUMNS(NESTED PATH '$.spent[*]'
COLUMNS(
ID VARCHAR2(100 CHAR) PATH '$.ID',
amount NUMBER(10,4) PATH '$.spentamount.amount',
country VARCHAR2(100 CHAR) PATH '$.spentamount.country'))
) js
The result:
ID, amount, country
HR, 2000.0, US
HR, 308.91, US
I have two questions here:
What does $.*[*] mean?
How can we migrate this query in Postgres so that it directly looks at 'spent' instead of navigating 'cost'->'spent' or 'time'->'spent'?
There is no direct replacement for json_table in Postgres. You will have to combine several calls to explode the JSON structure.
Based on the result you showed, as far as I can tell, the following should do the same:
select e.item ->> 'ID' as id,
(e.item #>> '{spentamount, amount}')::numeric as amount,
e.item #>> '{spentamount, country}' as country
from p_p_j r
cross join jsonb_each(r.p_d_j) as a(key, val)
cross join lateral (
select *
from jsonb_array_elements(a.val)
where jsonb_typeof(a.val) = 'array'
) as s(element)
cross join jsonb_array_elements(s.element -> 'spent') as e(item)
;
The JSON path expression '$.*[*]' means: iterate over all top-level keys, then iterate over all array elements found there. The nested path '$.spent[*]' then iterates over all elements of each spent array. These steps are reflected in the three JSON function calls needed to get there.
With Postgres 12 this would be a bit easier, as it can be done with a single call to jsonb_path_query(), which accesses the elements using a very similar JSON path expression:
select e.item ->> 'ID' as id,
(e.item #>> '{spentamount, amount}')::numeric as amount,
e.item #>> '{spentamount, country}' as country
from p_p_j r
cross join jsonb_path_query(r.p_d_j, '$.*[*].spent[*]') as e(item)
;
Online example
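To experiment with the path expression without the table, the JSON document from the question can also be passed in as a literal (a quick sketch for Postgres 12+):
select jsonb_path_query(
  '{"cost": [{"spent": [{"ID": "HR", "spentamount": {"amount": 2000.0, "country": "US"}}]}],
    "time": [{"spent": [{"ID": "HR", "spentamount": {"amount": 308.91, "country": "US"}}]}]}'::jsonb,
  '$.*[*].spent[*]'
);
-- returns the two 'spent' entries as separate rows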
I have two Postgres SQL queries returning JSON arrays:
q1:
[
{"id": 1, "a": "text1a", "b": "text1b"},
{"id": 2, "a": "text2a", "b": "text2b"},
{"id": 2, "a": "text3a", "b": "text3b"},
...
]
q2:
[
{"id": 1, "percent": 12.50},
{"id": 2, "percent": 75.00},
{"id": 3, "percent": 12.50}
...
]
I want the result to be a union of both arrays' elements, merged by id:
[
{"id": 1, "a": "text1a", "b": "text1b", "percent": 12.50},
{"id": 2, "a": "text2a", "b": "text2b", "percent": 75.00},
{"id": 3, "a": "text3a", "b": "text3b", "percent": 12.50},
...
]
How can this be done with SQL in Postgres 9.4?
This assumes data type jsonb and that you want to merge records of the two JSON arrays that share the same 'id' value.
Postgres 9.5
makes it simpler with the new concatenation operator || for jsonb values:
SELECT json_agg(elem1 || elem2) AS result
FROM (
SELECT elem1->>'id' AS id, elem1
FROM (
SELECT '[
{"id":1, "percent":12.50},
{"id":2, "percent":75.00},
{"id":3, "percent":12.50}
]'::jsonb AS js
) t, jsonb_array_elements(t.js) elem1
) t1
FULL JOIN (
SELECT elem2->>'id' AS id, elem2
FROM (
SELECT '[
{"id": 1, "a": "text1a", "b": "text1b", "percent":12.50},
{"id": 2, "a": "text2a", "b": "text2b", "percent":75.00},
{"id": 3, "a": "text3a", "b": "text3b", "percent":12.50}]'::jsonb AS js
) t, jsonb_array_elements(t.js) elem2
) t2 USING (id);
The FULL [OUTER] JOIN makes sure you don't lose records without a match in the other array.
The type jsonb has the convenient property of keeping only the latest value for each key in a record. Hence, the duplicate 'id' key in the result is merged automatically.
The Postgres 9.5 manual also advises:
Note: The || operator concatenates the elements at the top level of
each of its operands. It does not operate recursively. For example, if
both operands are objects with a common key field name, the value of
the field in the result will just be the value from the right hand operand.
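A minimal illustration of that right-operand precedence:
SELECT '{"a": 1, "b": 2}'::jsonb || '{"b": 3, "c": 4}'::jsonb;
-- returns {"a": 1, "b": 3, "c": 4}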
Postgres 9.4
is a bit less convenient. My idea would be to extract the array elements, then extract all key/value pairs, UNION both results, aggregate them into a single new jsonb value per id, and finally aggregate those into a single array.
SELECT json_agg(j) -- ::jsonb
FROM (
SELECT json_object_agg(key, value)::jsonb AS j
FROM (
SELECT elem->>'id' AS id, x.*
FROM (
SELECT '[
{"id":1, "percent":12.50},
{"id":2, "percent":75.00},
{"id":3, "percent":12.50}]'::jsonb AS js
) t, jsonb_array_elements(t.js) elem, jsonb_each(elem) x
UNION ALL -- or UNION, see below
SELECT elem->>'id' AS id, x.*
FROM (
SELECT '[
{"id": 1, "a": "text1a", "b": "text1b", "percent":12.50},
{"id": 2, "a": "text2a", "b": "text2b", "percent":75.00},
{"id": 3, "a": "text3a", "b": "text3b", "percent":12.50}]'::jsonb AS js
) t, jsonb_array_elements(t.js) elem, jsonb_each(elem) x
) t
GROUP BY id
) t;
The cast to jsonb removes duplicate keys. Alternatively you could use UNION to fold duplicates (for instance if you want json as result). Test which is faster for your case.
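The duplicate-key folding can be verified in isolation, e.g.:
SELECT '{"percent": 12.50, "percent": 75.00}'::jsonb;
-- returns {"percent": 75.00}; jsonb keeps only the last value for each key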
Related:
How to turn json array into postgres array?
Merging Concatenating JSON(B) columns in query
For any single jsonb element, this use of the concatenation operator || works well for me, together with jsonb_strip_nulls and another trick to cast the result back to plain jsonb (not an array).
select jsonb_array_elements(jsonb_strip_nulls(jsonb_agg(
'{
"a" : "unchanged value",
"b" : "old value",
"d" : "delete me"
}'::jsonb
|| -- The concat operator works as merge on jsonb, the right operand takes precedence
-- NOTE: it only works one JSON level deep
'{
"b" : "NEW value",
"c" : "NEW field",
"d" : null
}'::jsonb
)));
This gives the result
{"a": "unchanged value", "b": "NEW value", "c": "NEW field"}
which is properly typed jsonb