I have a JSON array column in a table, e.g. ["A1", "A2", "B1"]. I want to reference that array in a WHERE ... IN clause, but I could not evaluate the JSON array into ... WHERE tbl2.refID IN ("A1", "A2", "B1").
SET @ref = replace(replace('["A1", "A2", "B1"]', '[', ''), ']', ''); SELECT @ref;
returns "A1", "A2", "B1" as I want, but it does not work in ... WHERE tbl2.refID IN (@ref).
So how can I evaluate the array to be used as WHERE ... IN values?
Table 1
id | array of ids       | other cols
1  | ["A1", "A2", "B1"] | ...

Table 2
id | refID | col 3
1  | A1    | [ ]
2  | A2    | [ ]
Using the elements of table1.col2, I want to select and group col3 from table2. I wish I could illustrate it better!
I have tried to evaluate the array column passed to WHERE IN (), but it does not return any value; the evaluation is broken somehow:
WHERE tbl2.refID IN (replace(replace('["A1", "A2", "B1"]', '[', ''), ']', '')); -- not evaluating
IN (@ref) compares tbl2.refID against one single string, not a list of values, so it never matches. Instead, you can test whether the value exists in the JSON array with JSON_CONTAINS.
Example
Beware, JSON_CONTAINS needs valid JSON for both parameters, so JSON_CONTAINS('["A1"]', 'A1') would be invalid, as A1 is not a valid JSON string representation.
For the WHERE clause, you can simply do
WHERE JSON_CONTAINS('["A1", "A2", "B1"]', JSON_QUOTE(tbl2.refID))
JSON_QUOTE adds quotes around the string so it can be tested against your array.
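Putting it together for the question's goal ("select and group col3 from table2"), a minimal sketch; the column name ids is an assumption, since the question only labels it "array of ids":

-- Sketch: join table2 to table1 via membership in the JSON array.
-- t1.ids is an assumed column name for the "array of ids" column.
SELECT t2.col3, COUNT(*) AS cnt
FROM table1 t1
JOIN table2 t2
  ON JSON_CONTAINS(t1.ids, JSON_QUOTE(t2.refID))
WHERE t1.id = 1
GROUP BY t2.col3;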
I've been attempting to use MySQL 8's JSON_TABLE to extract the root keys and then their nested values. The problem is the root keys are dynamic and the nested key/value pairs might not exist.
JSON:
{
"Foo": {
"A": 3
},
"Bar": {
"A": 1,
"B": 368
},
"Biz": {
"C": 2,
"D": 10
}
}
In this JSON the root keys "Foo", "Bar", and "Biz" are dynamic, and for each of their objects I want to extract the "A" key's value, which may or may not exist. For example, for the above JSON the query would return this result set:
json_key | a_value
Foo      | 3
Bar      | 1
Biz      | null
I've been trying something along these lines, but no luck (it just returns one row of nulls):
select * from json_table('{"Foo": {"A": 3}, "Bar": {"A": 1, "B": 368}, "Biz": {"C": 2}}',
'$' COLUMNS(
json_key varchar(255) path '$.*',
sub_value integer path '$.*.A'
)
) as i;
In the worst case I can try to restructure the JSON but it's already in the database so I'm hoping to leverage MySQL's JSON capability. Any ideas?
SELECT json_key,
       JSON_EXTRACT(@json_value, CONCAT('$.', json_key, '.A')) a_value
FROM JSON_TABLE(JSON_KEYS(@json_value),
     '$[*]' COLUMNS (json_key VARCHAR(255) PATH '$')) keystable
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=dd9c01e77d57206d587dd2d17340bc02
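For a self-contained run, the only setup the answer needs is the user variable, set to the question's JSON document:

-- Setup assumed by the query above (same document as in the question):
SET @json_value = '{"Foo": {"A": 3}, "Bar": {"A": 1, "B": 368}, "Biz": {"C": 2, "D": 10}}';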
I am struggling to conditionally extract values from a nested JSON string in a MySQL table.
{"users": [{"userId": "10000001", "userToken": "11000000000001", "userTokenValidity": 1}, {"userId": "10000002", "userToken": "12000000000001", "userTokenValidity": 1}, {"userId": "10000003", "userToken": "13000000000001", "userTokenValidity": 0}]}
I want to select a userToken but only if the userTokenValidity is 1. So in this example only "11000000000001" and "12000000000001" should get selected.
The query below extracts the whole array; how should I filter the result?
SELECT t.my_column->>"$.users" FROM my_table t;
SELECT CAST(value AS CHAR) output
FROM test
CROSS JOIN JSON_TABLE(test.data, '$.users[*]' COLUMNS (value JSON PATH '$')) jsontable
WHERE value->>'$.userTokenValidity' = 1
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=4876ec22a9df4f6d2e75a476a02a2615
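If you want just the token strings rather than the whole user objects, the same JSON_TABLE approach can pull both fields out as typed columns (the column names token and validity below are my own):

-- Sketch: extract userToken and userTokenValidity as typed columns,
-- then filter on validity. Table/column names follow the answer above.
SELECT jt.token
FROM test
CROSS JOIN JSON_TABLE(
    test.data, '$.users[*]'
    COLUMNS (
        token    VARCHAR(32) PATH '$.userToken',
        validity INT         PATH '$.userTokenValidity'
    )
) jt
WHERE jt.validity = 1;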
I am using JSON_MODIFY to build complex JSON. Moving from MySQL, I am struggling with the JSON functions provided by SQL Server. The issue I'm having is that SQL Server seems to construct all JSON objects in an array. There is the WITHOUT_ARRAY_WRAPPER option, which seems like it should do what I want; however, there are two undesirable consequences:
1. It only returns one result, depending on how it is used
2. The result is a single string with escape characters
I have constructed a simple query which illustrates my needs and the issue.
QUERY 1
SELECT JSON_MODIFY(
JSON_QUERY('{"definitions": {"id": "INT", "name": "VARCHAR(23)"}}'),
'append $.data',
(
SELECT * FROM (
SELECT 1 AS id, '123abc' AS "name" UNION
SELECT 2 AS id, '234bcd' AS "name"
) AS "data"
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
)
) AS "data";
OUTPUT 1
{
"definitions":{
"id":"INT",
"name":"VARCHAR(23)"
},
"data":[
"{\"id\":1,\"name\":\"123abc\"},{\"id\":2,\"name\":\"234bcd\"}"
]
}
QUERY 2
SELECT JSON_MODIFY(
JSON_QUERY('{"definitions": {"id": "INT", "name": "VARCHAR(23)"}}'),
'append $.data',
(
SELECT * FROM (
SELECT 1 AS id, '123abc' AS "name" UNION
SELECT 2 AS id, '234bcd' AS "name"
) AS "data"
FOR JSON PATH
)
) AS "data";
OUTPUT 2
{
"definitions":{
"id":"INT",
"name":"VARCHAR(23)"
},
"data":[
[
{"id":1, "name":"123abc"},
{"id":2, "name":"234bcd"}
]
]
}
QUERY 1
The data object is an array (which is expected), but the problem is what is in the array: a single string with escape characters.
QUERY 2
The data object is an array, which contains an array. In order to access the actual array of data, I would use something like for each obj in data[0].... The problem this poses is that, for anyone consuming the JSON object, I would have to tell them:
"In this particular object the data element is an array of
arrays--You'll want to use the first and only the first
element to access the actual array of data."
I've naively tried many different combinations of JSON_MODIFY, JSON_QUERY, and CONCAT to no avail. How can I properly use JSON_MODIFY to get the following output, without the double array in data?
{
"definitions":{
"id":"INT",
"name":"VARCHAR(23)"
},
"data":[
{"id":1, "name":"123abc"},
{"id":2, "name":"234bcd"}
]
}
You are over-thinking this by trying to JSON_MODIFY an existing object.
Construct the definitions and data properties that you need, inside a subquery if necessary.
Then use FOR JSON a second time to get the outer object.
SELECT
definitions = JSON_QUERY('{"id": "INT", "name": "VARCHAR(23)"}'),
data =
(
SELECT id, name
FROM (VALUES
(1, '123abc'),
(2, '234bcd')
) v(id, name)
FOR JSON PATH
)
FOR JSON PATH;
SQL Fiddle
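One nuance worth noting (my own addition, not part of the original answer): the outer FOR JSON PATH wraps the single result row in [...] as well. If a bare object like the one in the question is required, adding WITHOUT_ARRAY_WRAPPER at this outermost level should be safe, since the string-escaping issue only arises when a WITHOUT_ARRAY_WRAPPER result is nested inside another JSON expression:

-- Same query as above, unwrapping only the outermost level:
SELECT
    definitions = JSON_QUERY('{"id": "INT", "name": "VARCHAR(23)"}'),
    data =
    (
        SELECT id, name
        FROM (VALUES
            (1, '123abc'),
            (2, '234bcd')
        ) v(id, name)
        FOR JSON PATH
    )
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;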
By trial and error, I found the solution:
1. Removed the append keyword from the path parameter in the JSON_MODIFY statement
2. Removed the WITHOUT_ARRAY_WRAPPER parameter from the FOR JSON statement
This makes sense in hindsight: 'append $.data' pushes the entire FOR JSON array in as a single element of $.data, which is what produced the array-of-arrays, whereas a plain '$.data' path sets the array itself. Now the results are as expected, and I don't need to tell any consumers to "just use data[0]".
The Query
SELECT JSON_MODIFY(
JSON_QUERY('{"definitions": {"id": "INT", "name": "VARCHAR(23)"}}'),
'$.data',
(
SELECT * FROM (
SELECT 1 AS id, '123abc' AS "name" UNION
SELECT 2 AS id, '234bcd' AS "name"
) AS "data"
FOR JSON PATH
)
) AS "data";
Produces the following output
{
"definitions":{
"id":"INT",
"name":"VARCHAR(23)"
},
"data":[
{"id":1, "name":"123abc"},
{"id":2, "name":"234bcd"}
]
}
I'm trying to count the number of nested JSON array elements, grouped by parent index, using a MySQL 8 JSON-typed field. My JSON string looks like:
{
"a": [
{
"b": [
1,
2,
3
]
},
{
"b": [
1
]
}
]
}
I'm trying to get the count of elements under "b" key for each "a" element. I need an output similar to:
{0: 3, 1: 1}
Meaning that a[0] has 3 elements under "b", while a[1] has 1.
This query counts the total number of "b" elements across all "a"s (it yields 4):
select JSON_LENGTH(json->>'$.a[*].b[*]') from myTable
Is it possible to somehow group it by a's index?
Thank you!
One option is JSON_TABLE and JSON_OBJECTAGG:
SELECT
JSON_OBJECTAGG(
`rowid` - 1,
JSON_LENGTH(`count`)
)
FROM JSON_TABLE(
'{"a":[{"b":[1,2,3]},{"b":[1]}]}',
'$.a[*]'
COLUMNS(
`rowid` FOR ORDINALITY,
`count` JSON PATH '$.b'
)
) `der`;
See db-fiddle.
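Applied to the table column instead of a literal (a sketch: myTable and the json column come from the question; the id grouping column is my assumption):

-- Sketch: one JSON_OBJECTAGG result per row of myTable.
SELECT m.id,
       JSON_OBJECTAGG(jt.rowid - 1, JSON_LENGTH(jt.b_arr)) AS counts
FROM myTable m
CROSS JOIN JSON_TABLE(
    m.`json`, '$.a[*]'
    COLUMNS (
        rowid FOR ORDINALITY,
        b_arr JSON PATH '$.b'
    )
) jt
GROUP BY m.id;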
Using the || operator yields the following result:
select '{"a":{"b":2}}'::jsonb || '{"a":{"c":3}}'::jsonb ;
?column?
-----------------
{"a": {"c": 3}}
(1 row)
I would like to be able to achieve the following result (?? is just a placeholder for the operator):
select '{"a":{"b":2}}'::jsonb ?? '{"a":{"c":3}}'::jsonb ;
?column?
-----------------
{"a": {"b": 2, "c": 3}}
(1 row)
So, you can see the top-level a key has its child values "merged" such that the result contains both b and c.
How do you "deep" merge two JSONB values in Postgres?
Is this possible, and if so, how?
A more complex test case:
select '{"a":{"b":{"c":3},"z":true}}'::jsonb ?? '{"a":{"b":{"d":4},"z":false}}'::jsonb ;
?column?
-----------------
{"a": {"b": {"c": 3, "d": 4}, "z": false}}
(1 row)
Another test case where a primitive "merges over" an object:
select '{"a":{"b":{"c":3},"z":true}}'::jsonb ?? '{"a":{"b":false,"z":false}}'::jsonb ;
?column?
-----------------
{"a": {"b": false, "z": false}}
(1 row)
You should merge unnested elements using jsonb_each() for both values. Doing this in a non-trivial query can be unwieldy, so I would prefer a custom function like this one:
create or replace function jsonb_my_merge(a jsonb, b jsonb)
returns jsonb language sql as $$
select
jsonb_object_agg(
coalesce(ka, kb),
case
when va isnull then vb
when vb isnull then va
else va || vb
end
)
from jsonb_each(a) e1(ka, va)
full join jsonb_each(b) e2(kb, vb) on ka = kb
$$;
Use:
select jsonb_my_merge(
'{"a":{"b":2}, "d": {"e": 10}, "x": 1}'::jsonb,
'{"a":{"c":3}, "d": {"f": 11}, "y": 2}'::jsonb
)
jsonb_my_merge
------------------------------------------------------------------
{"a": {"b": 2, "c": 3}, "d": {"e": 10, "f": 11}, "x": 1, "y": 2}
(1 row)
You can slightly modify the function using recursion to get a solution working on any level of nesting:
create or replace function jsonb_recursive_merge(a jsonb, b jsonb)
returns jsonb language sql as $$
select
jsonb_object_agg(
coalesce(ka, kb),
case
when va isnull then vb
when vb isnull then va
when jsonb_typeof(va) <> 'object' then va || vb
else jsonb_recursive_merge(va, vb)
end
)
from jsonb_each(a) e1(ka, va)
full join jsonb_each(b) e2(kb, vb) on ka = kb
$$;
Examples:
select jsonb_recursive_merge(
'{"a":{"b":{"c":3},"x":5}}'::jsonb,
'{"a":{"b":{"d":4},"y":6}}'::jsonb);
jsonb_recursive_merge
------------------------------------------------
{"a": {"b": {"c": 3, "d": 4}, "x": 5, "y": 6}}
(1 row)
select jsonb_recursive_merge(
'{"a":{"b":{"c":{"d":{"e":1}}}}}'::jsonb,
'{"a":{"b":{"c":{"d":{"f":2}}}}}'::jsonb)
jsonb_recursive_merge
----------------------------------------------
{"a": {"b": {"c": {"d": {"e": 1, "f": 2}}}}}
(1 row)
Finally, the variant of the function with changes proposed by OP (see comments below):
create or replace function jsonb_recursive_merge(a jsonb, b jsonb)
returns jsonb language sql as $$
select
jsonb_object_agg(
coalesce(ka, kb),
case
when va isnull then vb
when vb isnull then va
when jsonb_typeof(va) <> 'object' or jsonb_typeof(vb) <> 'object' then vb
else jsonb_recursive_merge(va, vb) end
)
from jsonb_each(a) e1(ka, va)
full join jsonb_each(b) e2(kb, vb) on ka = kb
$$;
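As a quick sanity check (my own run, not part of the original answer), the variant reproduces the question's third test case, where a primitive "merges over" an object:

select jsonb_recursive_merge(
    '{"a":{"b":{"c":3},"z":true}}'::jsonb,
    '{"a":{"b":false,"z":false}}'::jsonb);
-- expected per the variant's rules: {"a": {"b": false, "z": false}}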
This kind of "deep merge" can be interpreted quite differently, depending on your use case. For completeness, my intuition usually dictates the following rules:
object + object: Every property from each object that is not present in the other object survives (JSON's null value counts as present if it's explicitly mentioned). When a property is in both objects, the merge continues recursively with the same rules (this point is usually agreed upon).
array + array: The result is the concatenation of the two arrays.
array + primitive/object: The result is the first array, with the second JSON value appended to it.
any other case: The result is the second JSON value (so, e.g., primitives or incompatible types override each other).
create or replace function jsonb_merge_deep(jsonb, jsonb)
returns jsonb
language sql
immutable
as $func$
select case jsonb_typeof($1)
when 'object' then case jsonb_typeof($2)
when 'object' then (
select jsonb_object_agg(k, case
when e2.v is null then e1.v
when e1.v is null then e2.v
else jsonb_merge_deep(e1.v, e2.v)
end)
from jsonb_each($1) e1(k, v)
full join jsonb_each($2) e2(k, v) using (k)
)
else $2
end
when 'array' then $1 || $2
else $2
end
$func$;
This function's added bonus is that it can be called with literally any type of JSON values: it always produces a result and never complains about JSON value types.
http://rextester.com/FAC95623
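For instance (my own illustration, not from the original answer), the array rule above means arrays concatenate, duplicates included:

select jsonb_merge_deep('{"a": [1, 2]}'::jsonb, '{"a": [2, 3]}'::jsonb);
-- {"a": [1, 2, 2, 3]}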
Since PostgreSQL 9.5 you can use the jsonb_set function:
'{a,c}' is the target path; if it is not there, it will be created.
'{"a":{"c":3}}'::jsonb #> '{a,c}' gets the value of c.
new_value is added if create_missing is true (the default is true).
Here is the documentation: jsonb functions
select jsonb_set('{"a":{"b":2}}', '{a,c}', '{"a":{"c":3}}'::jsonb #> '{a,c}')
Result: {"a":{"c":3,"b":2}}
Merging more attributes at once:
with jsonb_paths(main_part,missing_part) as (
values ('{"a":{"b":2}}','{"a":{"c":3,"d":4}}')
)
select jsonb_object_agg(t.k,t.v||t2.v)
from jsonb_paths,
jsonb_each(main_part::jsonb) t(k,v),
jsonb_each(missing_part::jsonb) t2(k,v);
Result: {"a":{"c":3,"b":2,"d":4}}
As @lightSouls says, since PostgreSQL 9.5 you can use the jsonb_set() function... but you must learn how to use it!
jsonb_set can merge or destroy...
Supposing j := '{"a":{"x":1},"b":2}'::jsonb:
jsonb_set(j, '{a,y}', '1'::jsonb); will merge the object {"y":1} into the object {"x":1}. Result: {"a": {"x": 1, "y": 1}, "b": 2}
jsonb_set(j, '{a}', '{"x":1}'::jsonb); will destroy: it replaces the whole old object with the new one. Result: {"a": {"x": 1}, "b": 2}
Combining (merging :-D) the answers from @klin and @pozs and the comment from @Arman Khubezhov, while also actually merging arrays instead of concatenating them (concatenation otherwise resulted in duplicates), I came up with the following function:
create or replace function jsonb_merge_deep(jsonb, jsonb)
returns jsonb
language sql
immutable
as $func$
select case jsonb_typeof($1)
when 'object' then
case jsonb_typeof($2)
when 'object' then (
select jsonb_object_agg(k,
case
when e2.v is null then e1.v
when e1.v is null then e2.v
else jsonb_merge_deep(e1.v, e2.v)
end
)
from jsonb_each($1) e1(k, v)
full join jsonb_each($2) e2(k, v) using (k)
)
else COALESCE($2, $1)
end
when 'array' then
(
SELECT jsonb_agg(items.val)
FROM (
SELECT jsonb_array_elements($1) AS val
UNION
SELECT jsonb_array_elements($2) AS val
) AS items
)
else $2
end
$func$;
Based on the comment from @Arman Khubezhov, I enhanced the case where either $1 or $2 is null with:
else COALESCE($2, $1)
and added a real merge (no duplicates) of the two arrays' values with:
when 'array' then
(
SELECT jsonb_agg(items.val)
FROM (
SELECT jsonb_array_elements($1) AS val
UNION
SELECT jsonb_array_elements($2) AS val
) AS items
)
I'd be glad if someone can come up with enhanced code for this one, or point me to an existing PostgreSQL function I am not aware of.
Pros: no data loss when combining 2 JSONB values or when updating a JSONB field in an UPDATE query like:
UPDATE my_table
SET my_jsonb_field = jsonb_merge_deep(my_jsonb_field, '{ "a": { "aa" : { "aaa" : [6, 4, 7] } } }'::jsonb)
Cons: removing a key/value or array value requires a dedicated query.
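For that dedicated removal query, the built-in #- operator (which deletes the field or array element at a given path) is one option; the path below is just an illustration against the UPDATE above:

-- Sketch: remove the nested "aaa" key set by the earlier UPDATE.
UPDATE my_table
SET my_jsonb_field = my_jsonb_field #- '{a,aa,aaa}';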