I have a table with a JSON column that looks like this:
+----+------------+
| id | myfield |
+----+------------+
| 1 | ["1", "2"] |
| 2 | ["3", "2"] |
| 3 | ["2", "4"] |
+----+------------+
How can I merge all the values from myfield into one array?
I need a result that looks like this: ["1", "2", "3", "2", "2", "4"], or even better with duplicates removed.
I tried using this query:
SELECT JSON_ARRAYAGG(myfield) FROM json_test
but as a result I'm getting:
[["1", "2"], ["3", "2"], ["2", "4"]]
I assume I need a query in combination with the function JSON_MERGE.
Here's a solution, but it requires MySQL 8.0 for the JSON_TABLE() function:
SELECT GROUP_CONCAT(j.myvalue) AS flattened_values
FROM mytable, JSON_TABLE(
mytable.myfield, '$[*]' COLUMNS(myvalue INT PATH '$')
) AS j;
Output:
+------------------+
| flattened_values |
+------------------+
| 1,2,3,2,2,4 |
+------------------+
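If you also want duplicates removed, and a real JSON array instead of a comma-separated string, a sketch along these lines should work (still MySQL 8.0+, same table and column names as above):
SELECT JSON_ARRAYAGG(t.myvalue) AS merged_values
FROM (
  SELECT DISTINCT j.myvalue
  FROM mytable, JSON_TABLE(
    mytable.myfield, '$[*]' COLUMNS(myvalue INT PATH '$')
  ) AS j
) AS t;
-- e.g. [1, 2, 3, 4]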
I would actually recommend avoiding storing JSON arrays. Instead, store multi-valued data in a normalized manner, in a second table. Then you could just use GROUP_CONCAT() on the joined table.
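For illustration, a hypothetical normalized layout (the table and column names below are made up) could look like this:
CREATE TABLE json_test_values (
  json_test_id INT NOT NULL,
  myvalue      VARCHAR(10) NOT NULL,
  FOREIGN KEY (json_test_id) REFERENCES json_test(id)
);

SELECT GROUP_CONCAT(DISTINCT v.myvalue ORDER BY v.myvalue) AS flattened_values
FROM json_test AS t
JOIN json_test_values AS v ON v.json_test_id = t.id;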
I have yet to hear of a use of JSON in MySQL that isn't better accomplished with database normalization.
There is a map nested in a large json payload like
{
"map": {
"key1": "value1",
"key2": "value2",
"key3": "value3"
},
// more stuff
}
I would like to generate a table like this:
+------+--------+
| Key  | Value  |
+------+--------+
| key1 | value1 |
| key2 | value2 |
| key3 | value3 |
+------+--------+
The only thing I can think of is writing a stored function that loops over JSON_KEYS() to convert all key/value pairs into
[{"key":"key1", "value":"value1"}, {"key":"key2", "value":"value2"}, ...]
which makes the task trivial with JSON_TABLE.
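For reference, the JSON_TABLE step over that reshaped array might look roughly like this (the column names are just for illustration):
SELECT jt.`key`, jt.`value`
FROM JSON_TABLE(
  '[{"key": "key1", "value": "value1"}, {"key": "key2", "value": "value2"}]',
  '$[*]' COLUMNS (
    `key`   VARCHAR(10) PATH '$.key',
    `value` VARCHAR(10) PATH '$.value'
  )
) AS jt;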
Is there a faster and more elegant way?
Here's a solution:
SELECT j.`key`,
       JSON_UNQUOTE(JSON_EXTRACT(m.data, CONCAT('$.map.', j.`key`))) AS value
FROM mytable AS m
CROSS JOIN JSON_TABLE(
  JSON_KEYS(m.data, '$.map'), '$[*]' COLUMNS (`key` VARCHAR(10) PATH '$')
) AS j;
Output with your sample data:
+------+--------+
| key | value |
+------+--------+
| key1 | value1 |
| key2 | value2 |
| key3 | value3 |
+------+--------+
If that query seems inelegant or hard to maintain, you're probably right. You shouldn't store data in JSON if you want simple or elegant queries.
You're doing well; even enterprise projects do it this way.
This is a sample table 'test' with a JSON column 'arr' containing an array of JSON objects:
+----+----------------------------------------------------------+
| id | arr |
+----+----------------------------------------------------------+
| 1 | [{"name": "aman"}, {"name": "jay"}] |
| 2 | [{"name": "yash"}, {"name": "aman"}, {"name": "jay"}] |
+----+----------------------------------------------------------+
I want to use JSON_CONTAINS to know if a value exists in a specific key of an object in the array.
Here's my query:
SELECT JSON_CONTAINS(arr, '"jay"', '$[*].name') from test WHERE id=1;
I get the following error:
ERROR 3149 (42000): In this situation, path expressions may not contain the * and ** tokens or an array range.
I know that I can try using JSON_EXTRACT() for this, but what am I doing wrong here?
Is there any way to use JSON_CONTAINS with an array of JSON objects in MySQL?
Yes, it is possible using the following syntax:
SELECT JSON_CONTAINS(arr, '{"name": "jay"}') from test WHERE id=1;
db<>fiddle demo
Example:
+-----+--------------------------------------------------------+---+
| id | arr | r |
+-----+--------------------------------------------------------+---+
| 1 | [{"name": "aman"}, {"name": "jay"}] | 1 |
| 2 | [{"name": "yash"}, {"name": "aman"}, {"name": "jay"}] | 1 |
| 3 | [{"name": "yash"}, {"name": "aman"}] | 0 |
+-----+--------------------------------------------------------+---+
Alternatively, you can use JSON_SEARCH:
SELECT JSON_SEARCH(arr, 'one', 'jay', NULL, '$[*].name') IS NOT NULL
FROM test
WHERE id=1;
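As a usage note, the same expression can also be used to filter rows, e.g.:
SELECT id
FROM test
WHERE JSON_SEARCH(arr, 'one', 'jay', NULL, '$[*].name') IS NOT NULL;
-- returns ids 1 and 2 for the sample data above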
I have a Redshift table in which one of the columns is a JSON array. I would like to append some data into that array. Eg:
id | col1 | col2
1 | A | {"key": []}
2 | B | {"key": []}
3 | B | {"key": ['A']}
4 | B | {"key": ['A', 'B']}
I would like to create a statement like UPDATE <table> SET col2 = <something> where col1 = 'B' so that I get:
id | col1 | col2
1 | A | {"key": []}
2 | B | {"key": ['C']}
3 | B | {"key": ['A', 'C']}
4 | B | {"key": ['A', 'B', 'C']}
You'd have to write your own User Defined Function (UDF), passing in the current value of the column and the element you would like to add, then passing back the result.
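A rough sketch of such a UDF as a Redshift Python UDF (the function name is hypothetical, and it assumes col2 holds valid JSON, i.e. double-quoted strings rather than the single quotes shown in the sample above):
CREATE OR REPLACE FUNCTION f_append_to_key(col2 VARCHAR(MAX), new_elem VARCHAR)
RETURNS VARCHAR(MAX)
IMMUTABLE
AS $$
    import json
    obj = json.loads(col2)       # parse the current JSON value
    obj['key'].append(new_elem)  # append the new element to the "key" array
    return json.dumps(obj)       # serialize back to a JSON string
$$ LANGUAGE plpythonu;
-- then, roughly:
-- UPDATE <table> SET col2 = f_append_to_key(col2, 'C') WHERE col1 = 'B';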
However, you really should avoid JSON columns in Amazon Redshift if at all possible. They cannot take advantage of the features that make Redshift great (columnar storage, SORTKEY, etc.). Plus, you'll run into problems like this that are outside the normal realm of a relational database.
I have a table that holds JSON sort numbers that I need to use to sort the ids by a defined JSON sort order, so I have a table like this:
+----+------------+-----------------+
| id | channel | sort |
+----+------------+-----------------+
| 1 | US_CH 1 | ["1", "2", "4"] |
| 4 | US_CH 4 | ["1", "2", "4"] |
| 2 | US_CH 2 | ["1", "2", "4"] |
+----+------------+-----------------+
And I would like to get this:
+----+------------+-----------------+
| id | channel | sort |
+----+------------+-----------------+
| 1 | US_CH 1 | ["1", "2", "4"] |
| 2 | US_CH 2 | ["1", "2", "4"] |
| 4 | US_CH 4 | ["1", "2", "4"] |
+----+------------+-----------------+
So the point is to sort the rows by id according to the order of the values in the JSON sort array. I know the sort values are not really a JSON structure, but I need to use these numbers because I'm working on a channel editor (enigma2 STB) that updates and adds channels across 5000 records; storing the order this way keeps the data small and makes inserting and updating faster.
I tried using JSON_SEARCH to extract a single value, but I need all the values so that I can use something like ORDER BY JSON_EXTRACT(sort, '$[extract numbers]').
Try this:
ORDER BY LOCATE(CONCAT('"', id, '"'), sort)
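Put into a full query (assuming the table is called channels):
SELECT id, channel, sort
FROM channels
ORDER BY LOCATE(CONCAT('"', id, '"'), sort);
-- LOCATE() returns the position of the quoted id inside the sort string,
-- so rows come back in the order they appear in the JSON array.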
I've got several Postgres 9.4 tables that contain data like this:
| id | data |
|----|-------------------------------------------|
| 1 | {"user": "joe", "updated-time": 123} |
| 2 | {"message": "hi", "updated-time": 321} |
I need to transform the JSON column into something like this
| id | data |
|----|--------------------------------------------------------------|
| 1 | {"user": "joe", "updated-time": {123, "unit":"millis"}} |
| 2 | {"message": "hi", "updated-time": {321, "unit":"millis"}} |
Ideally it would be easy to apply the transformation to multiple tables. Tables that contain the JSON key data->'updated-time' should be updated, and ones that do not should be skipped. Thanks!
You can use the || operator to merge two jsonb objects together.
select '{"foo":"bar"}'::jsonb || '{"baz":"bar"}'::jsonb;
= {"baz": "bar", "foo": "bar"}