I am using MySQL 5.7 and have a table with the following structure:
id (BIGINT)
item_name (VARCHAR)
attributes (JSON array)
Example data set:
1,"PRODUCT",'[ { "1": [2,4,1]},{ "2": [5,4,6]},{ "3": [5,3,2]}]'
I want to fetch rows based on the attributes field where the key is 1 (the search needs to be done on keys only, not on values).
Tried queries:
SELECT JSON_CONTAINS('{"attributes":[ { "1": [2,4,1]},{ "2": [5,4,6]},{ "3": [5,3,2]}]}',"1") Result;
This always returns 0 as the result.
SELECT JSON_SEARCH('{"attributes":[ { "1": [2,4,1]},{ "2": [5,4,6]},{ "3": [5,3,2]}]}', 'one', "1") Result;
This always returns NULL.
Any help will be appreciated
SELECT JSON_CONTAINS(
         JSON_EXTRACT(
           '{"attributes":[ { "1": [2,4,1]},{ "2": [5,4,6]},{ "3": [5,3,2]}]}',
           '$.attributes[*].*'),
         '1')
PS. To understand the query, execute the SELECT JSON_EXTRACT(..) part on its own.
PPS. Pay attention: the value to be found must be JSON, or a value that is implicitly convertible to JSON (i.e. a string, not a number).
https://dbfiddle.uk/?rdbms=mysql_5.7&fiddle=edc90bff8eb2528708581576805ea98d
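To see what the inner extraction produces (as the PS suggests), it can be run on its own; by my reading of the wildcard path it should return the inner arrays gathered into a single array, which the outer JSON_CONTAINS then searches:
SELECT JSON_EXTRACT(
         '{"attributes":[ { "1": [2,4,1]},{ "2": [5,4,6]},{ "3": [5,3,2]}]}',
         '$.attributes[*].*') AS extracted;
-- expected (roughly): [[2, 4, 1], [5, 4, 6], [5, 3, 2]]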
SELECT JSON_SEARCH('{"attributes":[ { "1": [2,4,1]},{ "2": [5,4,6]},{ "3": [5,3,2]}]}', 'one', "1")
This always returns NULL.
Read the function description carefully: JSON_SEARCH searches for string values only (the value to be searched for is specifically named search_str), whereas the value you are looking for is numeric in the JSON document.
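Since the requirement is to match on keys only, a key-presence check may be the more direct route. This is just a sketch, assuming the rows live in a table named items with the JSON stored in an attributes column, and that wildcard paths work as I expect with JSON_CONTAINS_PATH:
-- returns rows whose attributes array contains at least one object with the key "1"
SELECT id, item_name
FROM items
WHERE JSON_CONTAINS_PATH(attributes, 'one', '$[*]."1"');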
I have the documents below. How can I write a query on a nested JSON field?
Query: count value greater than 3. Expected output: Doc-1 and Doc-3.
Doc-1
"1": {
"count":4,
"name": "pen"
}
Doc-2
"2": {
"count":1,
"name": "eraser"
}
Doc-3
"3": {
"count":43,
"name": "book"
}
Convert the dynamic object into an ARRAY (using OBJECT_VALUES(), OBJECT_NAMES(), or OBJECT_PAIRS()) and use the ANY clause:
https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/objectfun.html
SELECT b.*
FROM mybucket AS b
WHERE ANY v IN OBJECT_VALUES(b) SATISFIES v.`count` > 3 END;
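If the matching key (the document number) is also needed in the predicate, OBJECT_PAIRS exposes each entry as an object with name and val fields (per the Couchbase docs linked above); a rough sketch along the same lines:
-- p.name holds the key ("1", "2", "3") and p.val the nested object
SELECT b.*
FROM mybucket AS b
WHERE ANY p IN OBJECT_PAIRS(b) SATISFIES p.val.`count` > 3 END;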
I have a Snowflake table with one variant column (raw).
Every row in this table is complex (both dictionaries and arrays) and nested (multiple hierarchies).
What I want to do is to be able to update a specific item in some array.
It will be easier to understand it using an example so consider this as a row in the table:
{
"id": "1234"
"x_id": [
{
"y_id": "790437306684007491",
"y_state": "some_state"
}
],
"comments": {
"1": [
{
"comment_id": "bb288743-3b73-4423-b76b-f26b8c37f7d4",
"comment_timestamp": "2021-02-10 14:53:25.667564",
"comment_text": "Hey"
},
{
"comment_id": "7378f332-93c4-4522-9f73-3b6a8a9425ce",
"comment_text": "You",
"comment_timestamp": "2021-02-10 14:54:21.337046"
}
],
"2": [
{
"comment_id": "9dd0cbb0-df80-4b0f-b399-9ee153161462",
"comment_text": "Hello",
"comment_timestamp": "2021-02-09 09:26:17.987386"
},
{
"comment_id": "1a3bf1e8-82b5-4a9c-a959-a1da806ce7e3",
"comment_text": "World",
"comment_timestamp": "2021-02-09 09:28:32.144175"
}
]
}
}
And what I want is to update the comment text of a specific comment.
I know that I can update the whole JSON programmatically and write the whole object back using PARSE_JSON, but this approach isn't sufficient, because concurrent updates to other comments would override each other and be lost.
So first, I've tried the naive approach (which I knew wouldn't work but I had to try):
update table1
set raw['comments']['1'][0]["comment_text"] = 'please work'
And not surprisingly I'm getting the following error:
SQL compilation error: syntax error line 2 at position 7 unexpected '['.
Next, I tried OBJECT_INSERT, which should provide a way to update an object, but it fails because of the nested key ('1'):
UPDATE table1
SET raw = OBJECT_INSERT(raw:comments:1, "comment_test", 'please work')
with the error
SQL compilation error: syntax error line 1 at position 99 unexpected '1'.
(I've also tried several permutations of this approach with raw:comments:"1" or raw:comments:1[0] or raw['comments']['1'] and some others)
I also tried to refactor the object so that instead of keeping the comments as a dictionary, the comments are flattened into an array, something like:
{
"id": "1234"
"x_id": [
{
"y_id": "790437306684007491",
"y_state": "some_state"
}
],
"comments": [
{
"comment_id": "bb288743-3b73-4423-b76b-f26b8c37f7d4",
"comment_timestamp": "2021-02-10 14:53:25.667564",
"comment_text": "Hey"
"comment_key": "1"
},
{
"comment_id": "7378f332-93c4-4522-9f73-3b6a8a9425ce",
"comment_text": "You",
"comment_timestamp": "2021-02-10 14:54:21.337046"
"comment_key": "1"
}
{
"comment_id": "9dd0cbb0-df80-4b0f-b399-9ee153161462",
"comment_text": "Hello",
"comment_timestamp": "2021-02-09 09:26:17.987386",
"comment_key": "2"
},
{
"comment_id": "1a3bf1e8-82b5-4a9c-a959-a1da806ce7e3",
"comment_text": "World",
"comment_timestamp": "2021-02-09 09:28:32.144175",
"comment_key": "2"
}
]
}
But this doesn't get me any closer to a solution. I've looked for some ARRAY_REPLACE function that replaces an item in an array, but it doesn't look like such a function exists (in the list of semi-structured functions).
I've also considered using JavaScript UDFs, but I didn't find any examples of UDFs that actually update a row (from what I saw, they're all used to read data, not to update it).
Is there any way to achieve what I want?
Thanks a lot!
You can update complex JSON structures using JavaScript UDFs. Here's a sample. Note that both of your JSON samples have errors. I used the second one and fixed the missing commas.
-- Create a temp table with a single variant column. By convention, I use "v" as the name of the
-- column in a single-column table. You can change it to "raw" in your code.
create or replace temp table foo(v variant);
-- Create a UDF that updates the exact key you want to update.
-- Unfortunately, JavaScript treats the object path as a constant so you can't make this
-- a string that you pass in dynamically. There are ways around this possibly, but
-- library restrictions would require a raw JavaScript parser function. Just update the
-- path you need in the UDF.
create or replace function update_json("v" variant, "newValue" string)
returns variant
language javascript
as
$$
v.comments[0].comment_text = newValue;
return v;
$$;
-- Insert the corrected JSON into the variant field
insert into foo select parse_json('{
"id": "1234",
"x_id": [{
"y_id": "790437306684007491",
"y_state": "some_state"
}],
"comments": [{
"comment_id": "bb288743-3b73-4423-b76b-f26b8c37f7d4",
"comment_timestamp": "2021-02-10 14:53:25.667564",
"comment_text": "Hey",
"comment_key": "1"
},
{
"comment_id": "7378f332-93c4-4522-9f73-3b6a8a9425ce",
"comment_text": "You",
"comment_timestamp": "2021-02-10 14:54:21.337046",
"comment_key": "1"
},
{
"comment_id": "9dd0cbb0-df80-4b0f-b399-9ee153161462",
"comment_text": "Hello",
"comment_timestamp": "2021-02-09 09:26:17.987386",
"comment_key": "2"
},
{
"comment_id": "1a3bf1e8-82b5-4a9c-a959-a1da806ce7e3",
"comment_text": "World",
"comment_timestamp": "2021-02-09 09:28:32.144175",
"comment_key": "2"
}
]
}');
-- Show how the change works without updating the row
select update_json(v, 'please work') from foo;
-- Now update the row using the output. Note that this is updating the
-- whole variant field, not a portion of it.
update foo set v = update_json(v, 'please work');
-- Show the updated key
select v:comments[0].comment_text::string from foo;
Finally, if the property you need to modify requires looking through the keys first, you can do that with a loop in JavaScript. For example, if it's not the 1st comment you need but the one with a particular UUID or comment_text, you can loop through to find it and update it in the same iteration of the loop, as sketched below.
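As a rough illustration of that loop (the function name and the lookup by comment_id are my own assumptions, not part of the original UDF above):
create or replace function update_comment_by_id("v" variant, "commentId" string, "newValue" string)
returns variant
language javascript
as
$$
// Walk the comments array and update the first entry whose comment_id matches.
for (var i = 0; i < v.comments.length; i++) {
    if (v.comments[i].comment_id === commentId) {
        v.comments[i].comment_text = newValue;
        break;
    }
}
return v;
$$;
-- Usage, analogous to the update above:
-- update foo set v = update_comment_by_id(v, 'bb288743-3b73-4423-b76b-f26b8c37f7d4', 'please work');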
Thanks, it works!
I've kinda managed to get it to work using built-in functions -
Assuming we know the position of the comment (in this example, position=3):
UPDATE table1 SET
  raw = object_construct(
    'id', raw:id,
    'x_id', raw:x_id,
    'comments', array_cat(
      array_append(
        array_slice(raw:comments, 0, 2),
        parse_json('{"id": "3", "comment_text": "please work"}')
      ),
      array_slice(raw:comments, 3, array_size(raw:comments))
    )
  )
WHERE raw['id'] = 'some_id'
But I'm still weighing which approach will do the job better.
Anyway, thanks, helped a lot.
I have a json column in a postgres table.
The column contains the following json data:
{
"data": {
"id": "1234",
"sites": [
{
"site": {
"code": "1",
"display": "Site1"
}
},
{
"site": {
"code": "2",
"display": "Site2"
},
"externalSite": true
},
{
"site": {
"code": "3",
"display": "Site3"
}
}
]
}
}
I need to create an update query that adds another attribute ('newAttribute' in the sample below) to all array items that have '"externalSite": true', so, after running the update query the second array element will be:
{
"site": {
"code": "2",
"display": "Site2"
},
"externalSite": true,
"newAttribute": true
}
The following query returns the array elements that need to be updated:
select * from myTable, jsonb_array_elements(data -> 'sites') sites
where sites ->'externalSite' = 'true'
What is the syntax of the update query?
Thanks
Kobi
Assuming your table is called test and your column is called data, you can update it like so:
UPDATE test SET data =
(select jsonb_set(data::jsonb, '{"data","sites"}', sites)
FROM test
CROSS JOIN LATERAL (
SELECT jsonb_agg(CASE WHEN site ? 'externalSite' THEN site || '{"newAttribute": true}'::jsonb
ELSE site
END) AS sites
FROM jsonb_array_elements( (data#>'{"data","sites"}')::jsonb ) as ja(site)
) as sub
);
Note that I cast the data to jsonb, as there are more functions and operators available for manipulating jsonb than plain json.
You can run the SELECT statement alone to see what it is doing, but the basic idea is to re-create the sites object by expanding it with jsonb_array_elements and adding the newAttribute attribute if externalSite exists.
This array is then aggregated with jsonb_agg and, finally, in the outer select, the sites object is replaced entirely with this newly computed version.
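For reference, here is a per-element view of that inner logic (my own rearrangement of the answer's query, same test table and data column as above); each row shows one sites element after the conditional merge:
SELECT CASE WHEN site ? 'externalSite'
            THEN site || '{"newAttribute": true}'::jsonb
            ELSE site
       END AS updated_site
FROM test
CROSS JOIN LATERAL jsonb_array_elements((data#>'{"data","sites"}')::jsonb) AS ja(site);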
I have a json column named Data in my user table in the database.
Example of content:
[
{
"id": 10,
"key": "mail",
"type": "male",
},
{
"id": 5,
"key": "name",
"type": "female",
},
{
"id": 8,
"key": "mail",
"type": "female",
}
]
Let's assume that many rows in the table may have the same content, so the item should be removed from all of those rows too. What I want to do is remove an item by key and value. The best I have come up with is the following query; for example, I want to remove the item where id equals 10:
UPDATE
user
SET
`Data` =
JSON_REMOVE(`Data`,JSON_SEARCH(`Data`,'all',10,NULL,'$[*].id'),10)
But this query removes all the content of the column.
If anyone could help, it would be much appreciated.
By the way, I went down this path because I can't seem to find a way to do it using the query builder in Laravel, so it will be a raw query.
Thank you Guys
After a lot of manual reading and experimenting I found the answer; I'm posting it here to help anyone else in need.
UPDATE
  user
SET
  `Data` = JSON_REMOVE(
    `Data`,
    REPLACE(
      REPLACE(
        JSON_SEARCH(`Data`, 'all', '10', NULL, '$**.id'),
        '.id',
        ''
      ),
      '"',
      ''
    )
  )
==> Some explanation, since I revised the query and the content itself many times:
I noticed that JSON_SEARCH only works on string values; if the value is an int it will not find it, so I cast the id values to strings. JSON_SEARCH then returns something like "$[matched index].id", but since I need the path of the whole item I have to remove ".id", which is what the inner REPLACE is for. The outer REPLACE removes the quotes from the result, because it comes back as, for example, "$[0]" while JSON_REMOVE expects $[0]. After that the item itself is removed and the data is updated.
I hope the Laravel team can support these operations in the future; I searched for long hours but unfortunately found little help, but we can get through it with a raw statement.
==> BE AWARE THAT IF THE ITEM YOU SEARCH FOR DOESN'T EXIST IN THE JSON CONTENT, THE WHOLE JSON CONTENT WILL BE SET TO NULL.
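For what it's worth, JSON_UNQUOTE is the usual way to strip the quotes that JSON_SEARCH adds, and a WHERE guard avoids the NULL problem described above. A sketch under the same assumptions (ids stored as JSON strings):
UPDATE user
SET `Data` = JSON_REMOVE(
    `Data`,
    -- 'one' returns a single path, e.g. "$[0].id"; JSON_UNQUOTE strips the quotes
    -- and REPLACE drops the trailing .id, leaving $[0] for JSON_REMOVE
    REPLACE(JSON_UNQUOTE(JSON_SEARCH(`Data`, 'one', '10', NULL, '$[*].id')), '.id', '')
)
-- skip rows with no match so their Data isn't set to NULL
WHERE JSON_SEARCH(`Data`, 'one', '10', NULL, '$[*].id') IS NOT NULL;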
This is the Laravel way:
$jsonString = '[{
"id": 10,
"key": "mail",
"type": "male"
},
{
"id": 5,
"key": "name",
"type": "female"
},
{
"id": 8,
"key": "mail",
"type": "female"
}
]';
// decode json string to array
$data = json_decode($jsonString);
// remove the item whose id equals 10
$data = array_filter($data, function ($item) {
return $item->id != 10;
});
// write the filtered array back to the user's Data column
// ($userId is assumed to be the id of the row being updated;
// array_values() reindexes after array_filter() so json_encode() produces a JSON array)
DB::table('user')
    ->where('id', $userId)
    ->update(['Data' => json_encode(array_values($data))]);
I have a N1QL query like:
SELECT RAW ARRAY_AGG(list.id)
FROM default list
WHERE list.type="list"
AND "*" IN list.supported
for the following object(s):
{
"type": "list",
"id": "*",
"name": "Everything",
"listCount": 2,
"supported": [
"*",
"test"
]
}
The problem now is that I always get a nested (double) array as the result:
[
[
"*",
"test"
]
]
How can I prevent the double array in the result, or better: how do I use this result in the following subselect (which always just returns an empty array)?
SELECT *
FROM default server
WHERE server.type="server" AND ANY listId IN supportedLists SATISFIES
listId in
(SELECT RAW ARRAY_AGG(list.id)
FROM default list
WHERE list.type="list"
AND "*" IN list.supported)
END;
Where server is:
{
"type": "server",
"id": "AAABBBCCC",
"supportedLists": [
"0",
"1"
]
}
The select
SELECT *
FROM default server
WHERE server.type="server" AND ANY listId IN supportedLists SATISFIES
listId in
["test", "other features"]
END;
Works fine... So my problem is definitely the subselect.
What I want to achieve is a list of servers supporting a list (field supportedLists), for example for all lists with the feature "test" (field supported).
For anyone having the same issue: the subselect should be wrapped in ARRAY_FLATTEN((SUBQUERY), 1).
See https://forums.couchbase.com/t/in-operator-is-not-working-for-subquery/11041/2