Here is a sample table 'test' with a JSON column 'arr' containing an array of JSON objects:
+----+-------------------------------------------------------+
| id | arr                                                   |
+----+-------------------------------------------------------+
|  1 | [{"name": "aman"}, {"name": "jay"}]                   |
|  2 | [{"name": "yash"}, {"name": "aman"}, {"name": "jay"}] |
+----+-------------------------------------------------------+
I want to use JSON_CONTAINS to know if a value exists in a specific key of an object in the array.
Here's my query:
SELECT JSON_CONTAINS(arr, '"jay"', '$[*].name') from test WHERE id=1;
I get the following error:
ERROR 3149 (42000): In this situation, path expressions may not contain the * and ** tokens or an array range.
I know that I can try using JSON_EXTRACT() for this, but what am I doing wrong here?
Is there any way to use JSON_CONTAINS with an array of JSON objects in MySQL?
Yes, it is possible using the following syntax:
SELECT JSON_CONTAINS(arr, '{"name": "jay"}') from test WHERE id=1;
Example:
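For instance, running the same check across every row (the r column is just an assumed alias for the JSON_CONTAINS result):
SELECT id, arr, JSON_CONTAINS(arr, '{"name": "jay"}') AS r FROM test;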
+----+-------------------------------------------------------+---+
| id | arr                                                   | r |
+----+-------------------------------------------------------+---+
|  1 | [{"name": "aman"}, {"name": "jay"}]                   | 1 |
|  2 | [{"name": "yash"}, {"name": "aman"}, {"name": "jay"}] | 1 |
|  3 | [{"name": "yash"}, {"name": "aman"}]                  | 0 |
+----+-------------------------------------------------------+---+
Alternatively, you can use JSON_SEARCH:
SELECT JSON_SEARCH(arr, 'one', 'jay', NULL, '$[*].name') IS NOT NULL
FROM test
WHERE id=1;
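For example, to flag every row at once (has_jay is just an assumed alias):
SELECT id, JSON_SEARCH(arr, 'one', 'jay', NULL, '$[*].name') IS NOT NULL AS has_jay
FROM test;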
In Azure Data Factory, I have a Delimited Text source data which contains 2 JSON String columns.
I need to merge these 2 columns into a new JSON string column.
I tried Copy data, but it supports only 1-to-1 mapping. Then I looked at the Derived Column transformation in a Data Flow, but it only has a concat function to join two strings.
For example, my source data looks like:
| a | b |
|--------------|--------------|
| [{"foo": 0}] | [{"bar": 1}] |
| [{"baz": 2}] | |
And I want to merge columns a and b into c:
| c |
|--------------------------|
| [{"foo": 0}, {"bar": 1}] |
| [{"baz": 2}] |
Is there any way to do this in Data Factory?
I have a JSON data feed coming into SQL Server 2016. One of the attributes I must parse contains a JSON array. Unfortunately, instead of implementing a key/value design, the source system sends each member of the array with a different attribute name. The attribute names are not known in advance, and are subject to change/volatility.
declare @json nvarchar(max) =
'{
"objects": [
{"foo":"fooValue"},
{"bar":"barValue"},
{"baz":"bazValue"}
]
}';
select * from openjson(json_query(@json, 'strict $.objects'));
As you can see:
element 0 has a "foo" attribute
element 1 has a "bar" attribute
element 2 has a "baz" attribute:
+-----+--------------------+------+
| key | value | type |
+-----+--------------------+------+
| 0 | {"foo":"fooValue"} | 5 |
| 1 | {"bar":"barValue"} | 5 |
| 2 | {"baz":"bazValue"} | 5 |
+-----+--------------------+------+
Ideally, I would like to parse and project the data like so:
+-----+---------------+----------------+------+
| key | attributeName | attributeValue | type |
+-----+---------------+----------------+------+
| 0 | foo | fooValue | 5 |
| 1 | bar | barValue | 5 |
| 2 | baz | bazValue | 5 |
+-----+---------------+----------------+------+
Reminder: The attribute names are not known in advance, and are subject to change/volatility.
select o.[key], v.*  -- v.[key] as attributeName, v.[value] as attributeValue
from openjson(json_query(@json, 'strict $.objects')) as o
cross apply openjson(o.[value]) as v;
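Spelling out the commented column list gives the projection from your ideal output. Note that the inner OPENJSON reports the type of each attribute's value, so these string values come back as type 1 rather than 5:
select o.[key], v.[key] as attributeName, v.[value] as attributeValue, v.[type]
from openjson(json_query(@json, 'strict $.objects')) as o
cross apply openjson(o.[value]) as v;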
I have a Redshift table in which one of the columns is a JSON array. I would like to append some data into that array. Eg:
id | col1 | col2
1 | A | {"key": []}
2 | B | {"key": []}
3 | B | {"key": ['A']}
4 | B | {"key": ['A', 'B']}
I would like to create a statement like UPDATE <table> SET col2 = <something> where col1 = 'B' so that I get:
id | col1 | col2
1 | A | {"key": []}
2 | B | {"key": ['C']}
3 | B | {"key": ['A', 'C']}
4 | B | {"key": ['A', 'B', 'C']}
You'd have to write your own User Defined Function (UDF), passing in the current value of the column and the element you would like to add, then passing back the result.
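A minimal sketch of such a UDF, assuming col2 always holds valid JSON of the form {"key": [...]} (the function name, argument names, and the <table> placeholder are illustrative only):
CREATE OR REPLACE FUNCTION f_json_array_append(doc varchar(max), elem varchar(max))
  RETURNS varchar(max)
IMMUTABLE
AS $$
    # Parse the document, append the new element to the "key" array, re-serialize
    import json
    parsed = json.loads(doc)
    parsed['key'].append(elem)
    return json.dumps(parsed)
$$ LANGUAGE plpythonu;

UPDATE <table> SET col2 = f_json_array_append(col2, 'C') WHERE col1 = 'B';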
However, you really should avoid JSON columns in Amazon Redshift if at all possible. They cannot take advantage of the features that make Redshift great (columnar storage, SORTKEY, etc.). Plus, you'll run into problems like this one that are outside the normal realm of a relational database.
I have a MySQL table authors with columns id, name and published_books. Here, published_books is a JSON column. Sample data:
id | name | published_books
-----------------------------------------------------------------------
1 | Tina | {
| | "17e9bf8f": {
| | "name": "Book 1",
| | "tags": [
| | "self Help",
| | "Social"
| | ],
| | "language": "English",
| | "release_date": "2017-05-01"
| | },
| | "8e8b2470": {
| | "name": "Book 2",
| | "tags": [
| | "Inspirational"
| | ],
| | "language": "English",
| | "release_date": "2017-05-01"
| | }
| | }
-----------------------------------------------------------------------
2 | John | {
| | "8e8b2470": {
| | "name": "Book 4",
| | "tags": [
| | "Social"
| | ],
| | "language": "Tamil",
| | "release_date": "2017-05-01"
| | }
| | }
-----------------------------------------------------------------------
3 | Keith | {
| | "17e9bf8f": {
| | "name": "Book 5",
| | "tags": [
| | "Comedy"
| | ],
| | "language": "French",
| | "release_date": "2017-05-01"
| | },
| | "8e8b2470": {
| | "name": "Book 6",
| | "tags": [
| | "Social",
| | "Life"
| | ],
| | "language": "English",
| | "release_date": "2017-05-01"
| | }
| | }
-----------------------------------------------------------------------
As you can see, the published_books column holds nested JSON data (one level deep). The JSON uses dynamically generated UUIDs as keys, and the values are the book details as JSON objects.
I want to search for books matching certain conditions and extract only those books' JSON data to return as the result.
The query that I've written:
select JSON_EXTRACT(published_books, '$.*') from authors
where JSON_CONTAINS(published_books->'$.*.language', '"English"')
and JSON_CONTAINS(published_books->'$.*.tags', '["Social"]');
This query performs the search but returns the entire published_books JSON. I want just the matching books' JSON.
The expected result:
result
--------
"17e9bf8f": {
"name": "Book 1",
"tags": [
"self Help",
"Social"
],
"language": "English",
"release_date": "2017-05-01"
}
-----------
"8e8b2470": {
"name": "Book 6",
"tags": [
"Social",
"Life"
],
"language": "English",
"release_date": "2017-05-01"
}
There is no JSON function yet that filters elements of a document or array with "WHERE"-like logic.
But this is a task that some people using JSON data may want to do, so the solution MySQL has provided is the JSON_TABLE() function, which transforms a JSON document into a format as if you had stored your data in a normal table. Then you can apply a standard SQL WHERE clause to the fields it returns.
You can't use this function in MySQL 5.7, but if you upgrade to MySQL 8.0 you can do this.
select authors.id, authors.name, books.* from authors,
json_table(published_books, '$.*'
columns(
bookid for ordinality,
name text path '$.name',
tags json path '$.tags',
language text path '$.language',
release_date date path '$.release_date')
) as books
where books.language = 'English'
and json_search(tags, 'one', 'Social') is not null;
+----+-------+--------+--------+-------------------------+----------+--------------+
| id | name | bookid | name | tags | language | release_date |
+----+-------+--------+--------+-------------------------+----------+--------------+
| 1 | Tina | 1 | Book 1 | ["self Help", "Social"] | English | 2017-05-01 |
| 3 | Keith | 2 | Book 6 | ["Social", "Life"] | English | 2017-05-01 |
+----+-------+--------+--------+-------------------------+----------+--------------+
Note that nested JSON arrays are still difficult to work with, even with JSON_TABLE(). In this example, I exposed the tags as a JSON array, and then used JSON_SEARCH() to find the tag you wanted.
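If you want each matching book back as a single JSON value, closer to your expected result, you could rebuild it from the JSON_TABLE() columns with JSON_OBJECT(). A sketch (note that the original UUID key is not recoverable this way, because JSON_TABLE() does not expose member names):
select authors.id, authors.name,
       json_object('name', books.name, 'tags', books.tags,
                   'language', books.language,
                   'release_date', books.release_date) as book
from authors,
json_table(published_books, '$.*'
    columns(
        name text path '$.name',
        tags json path '$.tags',
        language text path '$.language',
        release_date date path '$.release_date')
) as books
where books.language = 'English'
and json_search(books.tags, 'one', 'Social') is not null;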
I agree with Rick James: you might as well store the data in normalized tables and columns. You think that using JSON will save you some work, but it won't. It might be more convenient to store the data as a single JSON document instead of multiple rows across several tables, but you just have to unravel the JSON again before you can query it the way you want.
Furthermore, if you store data in JSON, you will have to solve this sort of JSON_TABLE() expression every time you want to query the data. That's going to make a lot more work for you on an ongoing basis than if you had stored the data normally.
Frankly, I have yet to see a question on Stack Overflow about using JSON with MySQL that wouldn't lead to the conclusion that storing data in relational tables is a better idea than using JSON, if the structure of the data doesn't need to vary.
You are approaching the task backwards.
Do the extraction as you insert the data. Insert into a small number of tables (Authors, Books, Tags, and maybe a couple more) and build relations between them. No JSON is needed in this database.
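A rough sketch of such a schema (all table and column names here are only suggestions):
CREATE TABLE authors (
    id    INT PRIMARY KEY,
    name  VARCHAR(100)
);

CREATE TABLE books (
    id            INT AUTO_INCREMENT PRIMARY KEY,
    author_id     INT NOT NULL,
    book_key      CHAR(8),        -- the original UUID-like key, e.g. '17e9bf8f'
    name          VARCHAR(255),
    language      VARCHAR(50),
    release_date  DATE,
    FOREIGN KEY (author_id) REFERENCES authors(id)
);

CREATE TABLE book_tags (
    book_id  INT NOT NULL,
    tag      VARCHAR(50) NOT NULL,
    PRIMARY KEY (book_id, tag),
    FOREIGN KEY (book_id) REFERENCES books(id)
);

The original search then becomes a plain join:
SELECT b.*
FROM books b
JOIN book_tags bt ON bt.book_id = b.id
WHERE b.language = 'English'
  AND bt.tag = 'Social';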
The result is an easy-to-query and fast database. However, it requires learning about RDBMS and SQL.
JSON is useful when the data is a collection of arbitrary, loosely structured stuff. Your JSON is very regular, so the data fits very nicely into an RDBMS. In that case, JSON is merely a standard way to serialize the data, but it should not be used for querying.