Query a JSON column with an array of objects in MySQL - mysql

I have a JSON column with the following array:
[
    {
        "id": "24276e4b-de81-4c2c-84e7-eed9c3582a31",
        "key": "id",
        "type": "input"
    },
    {
        "id": "e0ca5aa1-359f-4460-80ad-70445be49644",
        "key": "name",
        "type": "textarea"
    }
]
I tried the following query to get the row that has the id 24276e4b-de81-4c2c-84e7-eed9c3582a31 in the document column, but it returns no results:
select * from jobs WHERE document->'$[*].id' = "24276e4b-de81-4c2c-84e7-eed9c3582a31"
Does anyone know the right way to write this query?

I use MySQL 5.7, so JSON_CONTAINS can easily be used like this:
SELECT JSON_CONTAINS(
    '[{"id": "24av", "name": "she"}, {"id": "e0c2", "name": "another_she"}]',
    JSON_OBJECT('id', "e0c2")
);
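Applied to the table from the question (a sketch, assuming the document column holds the array shown above), the same idea becomes:
SELECT *
FROM jobs
WHERE JSON_CONTAINS(document, JSON_OBJECT('id', '24276e4b-de81-4c2c-84e7-eed9c3582a31'));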

Try like this:
SELECT * FROM jobs WHERE document->'$[*].id' = json_array("24276e4b-de81-4c2c-84e7-eed9c3582a31");
It works for me, but I think the way below is better:
SELECT * FROM jobs WHERE json_contains(document->'$[*].id', json_array("24276e4b-de81-4c2c-84e7-eed9c3582a31"));
Actually it's easy; just remember that the return value is a JSON type, not a string or anything else.
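A quick way to see why the plain string comparison in the question fails (a sketch against the question's table and data):
SELECT document->'$[*].id' FROM jobs;
-- returns a JSON array of every id, e.g. ["24276e4b-de81-4c2c-84e7-eed9c3582a31", "e0ca5aa1-359f-4460-80ad-70445be49644"]
-- comparing that JSON value to a single plain string never matches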

Maybe this? @Barmar
SELECT * FROM jobs WHERE JSON_SEARCH(document, "one", "24276e4b-de81-4c2c-84e7-eed9c3582a31", NULL, '$[*].id') IS NOT NULL;

When you use document->'$[*].id' it returns a JSON array of all the id values. This won't be equal to a single ID string, unless there's only one object in the document column.
You need to use JSON_SEARCH() to search for a matching element within the JSON value.
SELECT *
FROM jobs
WHERE JSON_SEARCH(document, "one", "24276e4b-de81-4c2c-84e7-eed9c3582a31", NULL, '$[*].id') IS NOT NULL;
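For reference, JSON_SEARCH() returns the path of the first match as a JSON string, or NULL when nothing matches, which is why the IS NOT NULL check turns it into a usable row filter (a sketch with the question's data):
SELECT JSON_SEARCH(document, 'one', '24276e4b-de81-4c2c-84e7-eed9c3582a31', NULL, '$[*].id') FROM jobs;
-- "$[0].id" for the matching row, NULL for the others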

Related

Is there a function in MySQL like JSON_ARRAY_APPEND but for multiple values to be appended?

I want to append multiple values with JSON_ARRAY_APPEND.
For example in the following query:
SET @data = '{
    "Person": {
        "Name": "Homer",
        "Hobbies": ["Eating", "Sleeping"]
    }
}';
SELECT JSON_ARRAY_APPEND(@data, '$.Person.Hobbies', "Base Jumping") AS 'Result';
Our result would be:
{"Person": {"Name": "Homer", "Hobbies": ["Eating", "Sleeping", "Base Jumping"]}}
I'd like to be able to add multiple hobbies in one line rather than a dozen, using something like
SET @data = '{
    "Person": {
        "Name": "Homer",
        "Hobbies": ["Eating", "Sleeping"]
    }
}';
SELECT JSON_ARRAY_APPEND(@data, '$.Person.Hobbies', '"Base Jumping","Skiing"') AS 'Result';
Which results in
{"Person": {"Name": "Homer", "Hobbies": ["Eating", "Sleeping", "\"Base Jumping\",\"Skiing\""]}}
That's almost what I want but has extra characters that aren't wanted. Is there a better way to go about this?
JSON_ARRAY_APPEND() allows you to specify multiple path and value arguments. You can repeat the same path, and it will append to the result of the preceding append.
SELECT JSON_ARRAY_APPEND(@data,
    '$.Person.Hobbies', "Base Jumping",
    '$.Person.Hobbies', "Skiing") AS Result;
This is mentioned in the documentation:
The path-value pairs are evaluated left to right. The document produced by evaluating one pair becomes the new value against which the next pair is evaluated.
You can also use JSON_MERGE_PRESERVE() to concatenate arrays; since it takes JSON documents rather than paths, combine it with JSON_SET() to write the merged array back to the path:
SELECT JSON_SET(@data, '$.Person.Hobbies',
    JSON_MERGE_PRESERVE(JSON_EXTRACT(@data, '$.Person.Hobbies'), '["Base Jumping", "Skiing"]')) AS Result;
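For reference, with the @data document from the question, both calls should produce the same result:
{"Person": {"Name": "Homer", "Hobbies": ["Eating", "Sleeping", "Base Jumping", "Skiing"]}}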

Postgres jsonb conditional replace of specific property in array of objects

Imagine I have a column data in a Postgres table with the following sample data:
[
    {
        "type": "a",
        "name": "Joe"
    },
    {
        "type": "b",
        "name": "John"
    }
]
I want to perform an update on this table to update the type properties for each object in the json array, converting them from the current text to a corresponding number.
text "a" becomes 1
text "b" becomes 2
and so forth
I got as far as this:
update "table"
set "data" = jsonb_set("data", '{0,type}','1')
I understand this will update whichever object is at position 0 in the array to have value 1 in the type property, which is of course not what I want.
The replace needs to be conditional, if there is an a, it should become a 1, if there is a b, it should become a 2, etc..
Is there any way to accomplish what I'm looking for?
You can use the JSONB_SET() function nested inside JSONB_AGG() within an UPDATE statement, after producing consecutive integers with the WITH ORDINALITY keywords following the JSONB_ARRAY_ELEMENTS() function, such as:
UPDATE tab
SET data = (
    SELECT JSONB_AGG(JSONB_SET(j, '{type}', ('"'||idx||'"')::JSONB))
    FROM JSONB_ARRAY_ELEMENTS(data)
         WITH ORDINALITY arr(j, idx)
)
Demo
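Note that the query above numbers the elements by their array position. If the new value must instead be derived from the existing type text ('a' becomes 1, 'b' becomes 2), a CASE expression can be used inside the same pattern; a sketch, assuming only 'a' and 'b' occur (anything else is left unchanged):
UPDATE tab
SET data = (
    SELECT JSONB_AGG(JSONB_SET(j, '{type}',
               CASE j->>'type'
                   WHEN 'a' THEN '1'::JSONB
                   WHEN 'b' THEN '2'::JSONB
                   ELSE j->'type'
               END))
    FROM JSONB_ARRAY_ELEMENTS(data) arr(j)
)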

How can Postgres extract parts of json, including arrays, into another JSON field?

I'm trying to convince PostgreSQL 13 to pull out parts of a JSON field into another field, including a subset of properties within an array based on a discriminator (type) property. For example, given a data field containing:
{
    "id": 1,
    "type": "a",
    "items": [
        { "size": "small", "color": "green" },
        { "size": "large", "color": "white" }
    ]
}
I'm trying to generate new_data like this:
{
    "items": [
        { "size": "small" },
        { "size": "large" }
    ]
}
items can contain any number of entries. I've tried variations of SQL something like:
UPDATE my_table
SET new_data = (
    CASE data->>'type'
        WHEN 'a' THEN
            json_build_object(
                'items', json_agg(json_array_elements(data->'items') - 'color')
            )
        ELSE
            null
    END
);
but I can't seem to get it working. In this case, I get:
ERROR: set-returning functions are not allowed in UPDATE
LINE 6: 'items', json_agg(json_array_elements(data->'items')...
I can get a set of items using json_array_elements(data->'items') and thought I could roll this up into a JSON array using json_agg and remove unwanted keys using the - operator. But now I'm not sure if what I'm trying to do is possible. I'm guessing it's a case of PEBCAK. I've got about a dozen different types each with slightly different rules for how new_data should look, which is why I'm trying to fit the value for new_data into a type-based CASE statement.
Any tips, hints, or suggestions would be greatly appreciated.
One way is to handle the set json_array_elements() returns in a subquery.
UPDATE my_table
SET new_data = CASE
    WHEN data->>'type' = 'a' THEN
        (SELECT json_build_object('items',
                                  json_agg(jae.item::jsonb - 'color'))
         FROM json_array_elements(data->'items') jae(item))
END;
db<>fiddle
Also note that - isn't defined for json, only for jsonb. So unless your columns are actually jsonb you need a cast. And you don't need an explicit ... ELSE NULL ... in a CASE expression; NULL is already the default value when no other value is specified in an ELSE branch.
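Since the question mentions about a dozen types, each with slightly different rules, further WHEN branches can be added to the same CASE in this pattern. A sketch; the rule for type 'b' here is made up purely for illustration:
UPDATE my_table
SET new_data = CASE
    WHEN data->>'type' = 'a' THEN
        (SELECT json_build_object('items', json_agg(jae.item::jsonb - 'color'))
         FROM json_array_elements(data->'items') jae(item))
    WHEN data->>'type' = 'b' THEN
        (SELECT json_build_object('items', json_agg(jae.item::jsonb - 'size'))
         FROM json_array_elements(data->'items') jae(item))
END;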

T-SQL - search in filtered JSON array

SQL Server 2017.
The table OrderData has a column DataProperties where JSON is stored. An example of the JSON stored there:
{
    "Input": {
        "OrderId": "abc",
        "Data": [
            {
                "Key": "Files",
                "Value": [
                    "test.txt",
                    "whatever.jpg"
                ]
            },
            {
                "Key": "Other",
                "Value": [
                    "a"
                ]
            }
        ]
    }
}
So it's an object with an Input object, which has a Data array of key/value pairs: objects with a Key string and a Value array of strings.
And my problem: I need to query for rows based on the values under the Files key in the example JSON, with a simple LIKE that matches %text%.
This query works:
SELECT TOP 10 *
FROM OrderData CROSS APPLY OPENJSON(DataProperties,'$.Input.Data') dat
WHERE JSON_VALUE(dat.value, '$.Key') = 'Files' and dat.[key] = 0
AND JSON_QUERY(dat.value, '$.Value') LIKE '%2%'
Problem is that this query is very slow, unsurprisingly.
How to make it faster?
I cannot create a computed column with JSON_VALUE, because I need to filter inside an array.
I cannot create a computed column with JSON_QUERY on "$.Input.Data" or "$.Input.Data[0].Value" - because I need the specific array item in this array with Key == "Files".
I've searched, but it seems that you cannot create a computed column that also filters data, as with this attempt:
ALTER TABLE OrderData
ADD aaaTest AS (select JSON_QUERY(dat.value, '$.Value')
    FROM OPENJSON(DataProperties, '$.Input.Data') dat
    WHERE JSON_VALUE(dat.value, '$.Key') = 'Files' and dat.[key] = 0);
Error: Subqueries are not allowed in this context. Only scalar expressions are allowed.
What are my options?
Add Files column with an index and use INSERT/UPDATE triggers that populate this column on inserts/updates?
Create a view that "computes" this column? Can't add index, will still be slow
So far only option 1. has some merit, but I don't like triggers and maybe there's another option?
You might try something along these lines:
Attention: I've added a 2 to make it test2.txt to satisfy your filter, and I renamed both keys to the plural "Values":
DECLARE @mockupTable TABLE(ID INT IDENTITY, DataProperties NVARCHAR(MAX));
INSERT INTO @mockupTable VALUES
(N'{
    "Input": {
        "OrderId": "abc",
        "Data": [
            {
                "Key": "Files",
                "Values": [
                    "test2.txt",
                    "whatever.jpg"
                ]
            },
            {
                "Key": "Other",
                "Values": [
                    "a"
                ]
            }
        ]
    }
}');
The query
SELECT TOP 10 *
FROM @mockupTable t
CROSS APPLY OPENJSON(t.DataProperties, '$.Input.Data')
            WITH([Key] NVARCHAR(100)
                ,[Values] NVARCHAR(MAX) AS JSON) dat
WHERE dat.[Key] = 'Files'
  AND dat.[Values] LIKE '%2%';
The main difference is the WITH clause, which is used to return the properties inside an object in a typed way and side by side (similar to a naked OPENJSON with a PIVOT for all columns - but much better). This avoids expensive JSON methods in your WHERE...
Hint: Since we return the Values column with NVARCHAR(MAX) AS JSON, we can continue with the nested array and might proceed with something like this:
SELECT TOP 10 *
FROM @mockupTable t
CROSS APPLY OPENJSON(t.DataProperties, '$.Input.Data')
            WITH([Key] NVARCHAR(100)
                ,[Values] NVARCHAR(MAX) AS JSON) dat
WHERE dat.[Key] = 'Files'
  --we read the array again with OPENJSON:
  AND 'test2.txt' IN (SELECT [Value] FROM OPENJSON(dat.[Values]));
You might use one more CROSS APPLY to add the array's values and filter them in the WHERE directly.
SELECT TOP 10 *
FROM @mockupTable t
CROSS APPLY OPENJSON(t.DataProperties, '$.Input.Data')
            WITH([Key] NVARCHAR(100)
                ,[Values] NVARCHAR(MAX) AS JSON) dat
CROSS APPLY OPENJSON(dat.[Values]) vals
WHERE dat.[Key] = 'Files'
  AND vals.[Value] = 'test2.txt'
Just check it out...
This is an old question, but I would like to revisit it. There isn't any mention of how the source table is actually constructed in terms of indexing. If the original author is still around, can you confirm/deny what indexing strategy you used? For performant JSON document queries, I've found that a table using the COLUMNSTORE indexing strategy yields very performant JSON queries even with large amounts of data.
https://learn.microsoft.com/en-us/sql/relational-databases/json/store-json-documents-in-sql-tables?view=sql-server-ver15 has an example of different indexing techniques. For my personal solution I've been using COLUMNSTORE, albeit on a limited NVARCHAR document size. It's fast enough for any purpose I have, even with millions of rows of decently sized JSON documents.
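A minimal sketch of that setup (the table layout mirrors the question, but the capped column size and the index name are assumptions, not the original poster's schema):
CREATE TABLE OrderDataCs (
    ID INT NOT NULL,
    DataProperties NVARCHAR(4000) NOT NULL
);
CREATE CLUSTERED COLUMNSTORE INDEX cci_OrderDataCs ON OrderDataCs;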

Update nested tag in JSON field in PostgreSQL

I have the following JSON field:
{
    "Id": "64848e27-c25d-4f15-99db-b476d868b575",
    "Associations_": [
        "RatingBlockPinDatum"
    ],
    "RatingScenarioId": "00572f95-9b81-4f7e-a359-3df06b093d4d",
    "RatingBlockPinDatum": [
        {
            "Name": "mappedmean",
            "PinId": "I.Assessment",
            "Value": "24.388",
            "BlockId": "Score"
        },
        {
            "Name": "realmean",
            "PinId": "I.Assessment",
            "Value": "44.502",
            "BlockId": "Score"
        }
    ]
}
I want to update the Value from 24.388 to a new value in the nested array "RatingBlockPinDatum" where Name = "mappedmean".
Any help would be appreciated. I have already tried this but couldn't adapt it to work properly:
Update nested key with postgres json field in Rails
You could first get one result per element in the RatingBlockPinDatum JSON array (using jsonb_array_length and generate_series) and then filter that result for where the Name key has the value "mappedmean". Then you have the records that need updating. The update itself can be done with jsonb_set:
with cte as (
    select id,
           generate_series(0, jsonb_array_length(info->'RatingBlockPinDatum') - 1) i
    from mytable
)
update mytable
set info = jsonb_set(mytable.info,
                     array['RatingBlockPinDatum', cte.i::varchar, 'Value'],
                     '"99.999"'::jsonb)
from cte
where mytable.info->'RatingBlockPinDatum'->cte.i->>'Name' = 'mappedmean'
  and cte.id = mytable.id;
Replace "99.999" with whatever value you want to store in that Value property.
See it run on rextester.com
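For comparison, the same update can also be written without generate_series, using the jsonb_array_elements()/jsonb_agg() pattern shown in the earlier Postgres answer above. A sketch, keeping this answer's mytable/info names and the placeholder value "99.999":
update mytable
set info = jsonb_set(
        info,
        '{RatingBlockPinDatum}',
        (select jsonb_agg(
                    case when elem->>'Name' = 'mappedmean'
                         then jsonb_set(elem, '{Value}', '"99.999"'::jsonb)
                         else elem
                    end)
         from jsonb_array_elements(info->'RatingBlockPinDatum') arr(elem)))
where info->'RatingBlockPinDatum' @> '[{"Name": "mappedmean"}]';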