I'm using JSON_EXTRACT with MySQL, running this command:
SET @j = '{"id" : "1"}';
SELECT JSON_EXTRACT(@j, '$.id')
the result is
"1"
but when I use
SET @j = '[{"id" : "1"}, {"id" : "2"}]';
SELECT JSON_EXTRACT(@j, '$.id')
the result is
NULL
I expected this result:
"1"
"2"
Any suggestion? I want the list of IDs from the JSON.
Try this:
SET @j = '[{"id" : "1"}, {"id" : "2"}]';
SELECT JSON_EXTRACT(@j, '$[*].id')
the result is:
["1", "2"]
I have a table that currently looks like this:
id | tags
1  | {"key1" : "val1", "key2" : "val2" }
I want it to look like this:
id | tags
1  | {"key1" : ["val1"], "key2" : ["val2"] }
I'm not sure how to write a PostgreSQL query that will wrap each value of the JSON object in an array.
-- simple way
select jsonb_object_agg(t.a, jsonb_build_array(t.b))
from jsonb_each_text('{"key1" : "val1", "key2" : "val2" }'::jsonb) as t(a, b)
-- for multiple json rows
with t1(id, jsondata) as
(
select 1, '{"key1" : "val1", "key2" : "val2" }'::jsonb
union all
select 2, '{"key1" : "val3", "key2" : "val4" }'::jsonb
)
select t1.id, jsonb_object_agg(t2.a, jsonb_build_array(t2.b)) from t1
cross join jsonb_each_text(t1.jsondata) t2(a, b)
group by t1.id
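To rewrite your table in place, the same aggregate can drive an UPDATE; here is a sketch assuming the table is named mytable with columns id and tags (your real table name isn't shown in the question):
-- update each row's tags to the re-aggregated object with array-wrapped values
update mytable m
set tags = sub.new_tags
from (
  select id, jsonb_object_agg(t.a, jsonb_build_array(t.b)) as new_tags
  from mytable
  cross join jsonb_each_text(tags) as t(a, b)
  group by id
) sub
where m.id = sub.id;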
I have the following values inside a cell of a JSON column in MySQL:
{
"produttori": [
"8",
"9"
],
"articoli_alternativi": [
"3",
"9"
],
"articoli_accessori": [
"5",
"6",
"7",
"8"
],
"tecnologie": [],
"fornitori": [
"9",
"8"
],
"classificazioni": [
"3",
"4"
]
}
I would like to make a query that extracts data based on the existence of a value in the array at the fornitori key.
For now I've tried this:
query = 'SELECT nome, formulati_commerciali FROM articolo WHERE JSON_CONTAINS(JSON_EXTRACT(dati, "$.fornitori"), "' + \
    value + '", "$")'
which prints:
SELECT name, data FROM articolo WHERE JSON_CONTAINS(JSON_EXTRACT(data, "$.fornitori"), "8", "$")
Basically the condition is that the value ("8") must be inside the fornitori list; otherwise the row is skipped.
Unfortunately, the query did not produce any results.
I would like to know how to formulate such a query in MySQL. I will need queries like this often!
Thanks in advance!
This should do it:
SELECT name, data
FROM articolo
WHERE JSON_CONTAINS(data, '"8"', '$.fornitori')
The double quotes around 8 are important so that the candidate matches the JSON string "8" stored in the array; note that the SQL string literals themselves use single quotes.
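A quick way to see the effect of the quoting is to run JSON_CONTAINS against a literal copy of the array (values taken from the sample document in the question):
SELECT JSON_CONTAINS('["9", "8"]', '"8"');  -- 1: matches the JSON string "8"
SELECT JSON_CONTAINS('["9", "8"]', '8');    -- 0: the number 8 is not in the array of strings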
You can use
SELECT data
FROM
(
SELECT @i := @i + 1 AS rn,
       JSON_UNQUOTE(JSON_EXTRACT(data, CONCAT('$.fornitori[', @i-1, ']'))) AS elm,
       data
FROM information_schema.tables
CROSS JOIN articolo
CROSS JOIN (SELECT @i := 0) r
) q
WHERE elm = 8
in order to search for the specific value within a specific array ("fornitori").
Demo
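If your server happens to be MySQL 8.0.17 or later (an assumption; the question does not state a version), the MEMBER OF operator is a shorter alternative, using the column names from the question's Python code:
-- '8' is quoted as a string because the stored array elements are JSON strings
SELECT nome, formulati_commerciali
FROM articolo
WHERE '8' MEMBER OF(dati->'$.fornitori');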
I have the JSON block modeled below. I want to selectively delete individual blocks from my_items based on the id, which is AAA and BBB in my sample. I.e. if I tried to delete the AAA block under my_items, I would want to delete just the {"id" : "AAA"} object, but if I wanted to delete the BBB block, it would delete the larger {"name" : "TestRZ", "id" : "BBB", "description" : ""} block.
I know I can use #- to remove whole blocks: SELECT '{sample_json}'::jsonb #- '{my_items}' would purge the whole my_items block. But I don't know how to use this to conditionally delete children under a parent block of JSON. I have also used code similar to this example to append data inside a nested structure, by reading in the node of the nested structure, concatenating new data to it, and rewriting it: UPDATE data SET value = jsonb_set(value, '{my_items}', value->'items' || (:'json_to_adds'), true) where id='testnofeed'.
But I don't know how to apply either of these methods to: 1) delete data in a nested structure using #-, or 2) do the same using jsonb_set. Does anyone have any guidance for how to do this using either of these (or another method)?
{
"urlName" : "testurl",
"countryside" : "",
"description" : "",
"my_items" : [
{
"id" : "AAA"
},
{
"name" : "TestRZ",
"id" : "BBB",
"description" : ""
}
],
"name" : "TheName"
}
Data is stored in the value jsonb column. When I update, I will be able to pass in a unique kind so that it only updates this JSON in one row in the DB.
-- Table Definition
CREATE TABLE "public"."data" (
"id" varchar(100) NOT NULL,
"kind" varchar(100) NOT NULL,
"revision" int4 NOT NULL,
"value" jsonb
);
This works in PostgreSQL 12 and later with jsonpath support. If you do not have jsonpath, then please leave a comment.
with data as (
select '{
"urlName" : "testurl",
"countryside" : "",
"description" : "",
"my_items" : [
{
"id" : "AAA"
},
{
"name" : "TestRZ",
"id" : "BBB",
"description" : ""
}
],
"name" : "TheName"
}'::jsonb as stuff
)
select jsonb_set(stuff, '{my_items}',
jsonb_path_query_array(stuff->'my_items', '$ ? (@."id" <> "AAA")'))
from data;
jsonb_set
---------------------------------------------------------------------------------------------------------------------------------------------------
{"name": "TheName", "urlName": "testurl", "my_items": [{"id": "BBB", "name": "TestRZ", "description": ""}], "countryside": "", "description": ""}
(1 row)
To update the table directly, the statement would be:
update data
set value = jsonb_set(value, '{my_items}',
jsonb_path_query_array(value->'my_items',
'$ ? (@."id" <> "AAA")'));
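If the id to remove comes from your application, it can also be passed as a jsonpath variable instead of being spliced into the path string; a sketch (the $id variable name is just illustrative):
update data
set value = jsonb_set(value, '{my_items}',
    jsonb_path_query_array(value->'my_items',
                           '$ ? (@."id" <> $id)',
                           jsonb_build_object('id', 'AAA')));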
This works for versions before PostgreSQL 12:
with data as (
select 1 as id, '{
"urlName" : "testurl",
"countryside" : "",
"description" : "",
"my_items" : [
{
"id" : "AAA"
},
{
"name" : "TestRZ",
"id" : "BBB",
"description" : ""
}
],
"name" : "TheName"
}'::jsonb as stuff
), expand as (
select d.id, d.stuff, e.item, e.rn
from data d
cross join lateral jsonb_array_elements(stuff->'my_items') with ordinality as e(item, rn)
)
select id, jsonb_set(stuff, '{my_items}', jsonb_agg(item order by rn)) as new_stuff
from expand
where item->>'id' != 'AAA'
group by id, stuff;
id | new_stuff
----+---------------------------------------------------------------------------------------------------------------------------------------------------
1 | {"name": "TheName", "urlName": "testurl", "my_items": [{"id": "BBB", "name": "TestRZ", "description": ""}], "countryside": "", "description": ""}
(1 row)
The direct update for this is a little more involved:
with expand as (
select d.id, d.value, e.item, e.rn
from data d
cross join lateral jsonb_array_elements(value->'my_items')
with ordinality as e(item, rn)
), agg as (
select id, jsonb_set(value, '{my_items}', jsonb_agg(item order by rn)) as new_value
from expand
where item->>'id' != 'AAA'
group by id, value
)
update data
set value = agg.new_value
from agg
where agg.id = data.id;
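The question also asks about #-. That operator needs a concrete array index, so the index of the matching element has to be looked up first; a sketch, assuming at most one matching element per row:
update data d
set value = d.value #- array['my_items',
    (select (e.rn - 1)::text   -- jsonb array positions are 0-based
     from jsonb_array_elements(d.value->'my_items') with ordinality as e(item, rn)
     where e.item->>'id' = 'AAA'
     limit 1)]
where d.value @> '{"my_items": [{"id": "AAA"}]}';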
I created a table that has one JSON column, and the inserted data has the structure below:
{
"options" : {
"info" : [
{"data" : "data1", "verified" : 0},
{"data" : "data2", "verified" : 1},
... and more
],
"otherkeys" : "some data..."
}
}
I want to run a query to get the data of the "info" entries where verified = 1.
This is for MySQL 5.7 Community running on Windows 10.
select id, (meta->"$.options.info[*].data") AS `data`
from tbl
WHERE meta->"$.options.info[*].verified" = 1
I expect the output "data2", but the actual output is nothing.
The query below worked perfectly:
select id, (meta->"$.options.info[*].data") AS `data`
from tbl
WHERE meta->"$.options.info[1].verified" = 1
but I need to search all items in the array, not only index 1.
How can I fix it?
(Sorry for my bad English.)
Try:
SELECT `id`, (`meta` -> '$.options.info[*].data') `data`
FROM `tbl`
WHERE JSON_CONTAINS(`meta` -> '$.options.info[*].verified', '1');
See dbfiddle.
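For context, the reason the original WHERE clause matched nothing is that a [*] path extracts an array of all matched values, so the comparison runs against that whole array rather than against each element. For the sample document:
SELECT JSON_EXTRACT('{"options": {"info": [{"data": "data1", "verified": 0},
                                           {"data": "data2", "verified": 1}]}}',
                    '$.options.info[*].verified');
-- returns [0, 1], which is never equal to 1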
I have a table which stores JSON values. These JSONs have null attributes, like below:
{
"name" : "AAAA",
"department" : "BBBB",
"countryCode" : null,
"languageCode" : null,
"region" : "AP"
}
I would like to write a query so that all the null attributes are removed from the output. For example, for the above-mentioned JSON, the resultant output JSON should look like this:
{
"name" : "AAAA",
"department" : "BBBB",
"region" : "AP"
}
I would like to have a generic query which I can apply to any JSON to get rid of null attributes in MySQL (v5.7).
In case you don't know all the keys in advance (note that this needs MySQL 8.0+ for the CTE and JSON_TABLE):
WITH j AS (SELECT CAST('{"a": 1, "b": "null", "c": null}' AS JSON) o)
SELECT j.o, (SELECT JSON_OBJECTAGG(k, JSON_EXTRACT(j.o, CONCAT('$."', jt.k, '"')))
FROM JSON_TABLE(JSON_KEYS(o), '$[*]' COLUMNS (k VARCHAR(200) PATH '$')) jt
WHERE JSON_EXTRACT(j.o, CONCAT('$."', jt.k, '"')) != CAST('null' AS JSON)) removed
FROM j;
Outputs:
o                                | removed
{"a": 1, "b": "null", "c": null} | {"a": 1, "b": "null"}
And this will keep keys whose string value is "null", which is different from JSON null.
The following query will work for removing a single key value pair, where the value is NULL:
SELECT JSON_REMOVE(col, '$.countryCode')
FROM yourTable
WHERE CAST(col->"$.countryCode" AS CHAR(50)) = 'null';
But I don't see a clean way of doing multiple removals in a single update. We could try to chain the updates together, but that would be ugly and unreadable.
Also, to check for your JSON null, I had to cast the value to text first.
Demo
Here is how you can remove null keys using the JSON_REMOVE function. $.dummy is used as a no-op path when the condition is false.
select json_remove(abc,
case when json_unquote(abc->'$.name') = 'null' then '$.name' else '$.dummy' end,
case when json_unquote(abc->'$.department') = 'null' then '$.department' else '$.dummy' end,
case when json_unquote(abc->'$.countryCode') = 'null' then '$.countryCode' else '$.dummy' end,
case when json_unquote(abc->'$.languageCode') = 'null' then '$.languageCode' else '$.dummy' end,
case when json_unquote(abc->'$.region') = 'null' then '$.region' else '$.dummy' end)
from (
select cast('{
"name" : "AAAA",
"department" : "BBBB",
"countryCode" : null,
"languageCode" : null,
"region" : "AP"
}' as json) as abc ) a
Output:
{"name": "AAAA", "region": "AP", "department": "BBBB"}