I have a MySQL table authors with the columns id, name and published_books, where published_books is a JSON column. Here is some sample data:
id | name | published_books
-----------------------------------------------------------------------
1 | Tina | {
| | "17e9bf8f": {
| | "name": "Book 1",
| | "tags": [
| | "self Help",
| | "Social"
| | ],
| | "language": "English",
| | "release_date": "2017-05-01"
| | },
| | "8e8b2470": {
| | "name": "Book 2",
| | "tags": [
| | "Inspirational"
| | ],
| | "language": "English",
| | "release_date": "2017-05-01"
| | }
| | }
-----------------------------------------------------------------------
2 | John | {
| | "8e8b2470": {
| | "name": "Book 4",
| | "tags": [
| | "Social"
| | ],
| | "language": "Tamil",
| | "release_date": "2017-05-01"
| | }
| | }
-----------------------------------------------------------------------
3 | Keith | {
| | "17e9bf8f": {
| | "name": "Book 5",
| | "tags": [
| | "Comedy"
| | ],
| | "language": "French",
| | "release_date": "2017-05-01"
| | },
| | "8e8b2470": {
| | "name": "Book 6",
| | "tags": [
| | "Social",
| | "Life"
| | ],
| | "language": "English",
| | "release_date": "2017-05-01"
| | }
| | }
-----------------------------------------------------------------------
As you can see, the published_books column holds nested JSON data (one level deep). The JSON uses dynamic UUIDs as keys, and each value is the book's details as a JSON object.
I want to search for books matching certain conditions and return only those books' JSON data as the result.
The query I've written:
select JSON_EXTRACT(published_books, '$.*') from authors
where JSON_CONTAINS(published_books->'$.*.language', '"English"')
and JSON_CONTAINS(published_books->'$.*.tags', '["Social"]');
This query performs the search but returns the entire published_books JSON. I want only the matching books' JSON.
The expected result:
result
--------
"17e9bf8f": {
"name": "Book 1",
"tags": [
"self Help",
"Social"
],
"language": "English",
"release_date": "2017-05-01"
}
-----------
"8e8b2470": {
"name": "Book 6",
"tags": [
"Social",
"Life"
],
"language": "English",
"release_date": "2017-05-01"
}
There is no JSON function yet that filters elements of a document or array with "WHERE"-like logic.
But this is a task that some people using JSON data may want to do, so the solution MySQL has provided is to use the JSON_TABLE() function to transform the JSON document into a format as if you had stored your data in a normal table. Then you can apply a standard SQL WHERE clause to the fields it returns.
You can't use this function in MySQL 5.7, but if you upgrade to MySQL 8.0 you can do this.
select authors.id, authors.name, books.* from authors,
  json_table(published_books, '$.*'
    columns(
      bookid for ordinality,
      name text path '$.name',
      tags json path '$.tags',
      language text path '$.language',
      release_date date path '$.release_date')
  ) as books
where books.language = 'English'
  and json_search(tags, 'one', 'Social') is not null;
+----+-------+--------+--------+-------------------------+----------+--------------+
| id | name | bookid | name | tags | language | release_date |
+----+-------+--------+--------+-------------------------+----------+--------------+
| 1 | Tina | 1 | Book 1 | ["self Help", "Social"] | English | 2017-05-01 |
| 3 | Keith | 2 | Book 6 | ["Social", "Life"] | English | 2017-05-01 |
+----+-------+--------+--------+-------------------------+----------+--------------+
Note that nested JSON arrays are still difficult to work with, even with JSON_TABLE(). In this example, I exposed the tags as a JSON array, and then used JSON_SEARCH() to find the tag you wanted.
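If you specifically want the matching book objects back as JSON, keyed the way they are stored (as in the expected result above), one possible workaround in MySQL 8.0 is to explode the top-level keys with JSON_KEYS() and JSON_TABLE(), then pull each book out with JSON_EXTRACT(). This is only a sketch; the aliases and the varchar(36) key size are illustrative:
-- one row per (author, book key), with the whole book object as JSON
select a.id, a.name, k.book_key,
       json_extract(a.published_books, concat('$."', k.book_key, '"')) as book
from authors a,
     json_table(json_keys(a.published_books), '$[*]'
       columns (book_key varchar(36) path '$')
     ) as k
where json_unquote(json_extract(a.published_books,
        concat('$."', k.book_key, '".language'))) = 'English'
  and json_search(json_extract(a.published_books,
        concat('$."', k.book_key, '".tags')), 'one', 'Social') is not null;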
I agree with Rick James: you might as well store the data in normalized tables and columns. You think that using JSON will save you some work, but it won't. It might be more convenient to store the data as a single JSON document instead of multiple rows across several tables, but you just have to unravel the JSON again before you can query it the way you want.
Furthermore, if you store data in JSON, you will have to write this sort of JSON_TABLE() expression every time you want to query the data. That's going to make a lot more work for you on an ongoing basis than if you had stored the data normally.
Frankly, I have yet to see a question on Stack Overflow about using JSON with MySQL that wouldn't lead to the conclusion that storing data in relational tables is a better idea than using JSON, if the structure of the data doesn't need to vary.
You are approaching the task backwards.
Do the extraction as you insert the data. Insert into a small number of tables (Authors, Books, Tags, and maybe a couple more) and build relations between them. No JSON is needed in this database.
The result is an easy-to-query and fast database. However, it requires learning about RDBMS and SQL.
JSON is useful when the data is a collection of random stuff. Your JSON is very regular, hence the data fits very nicely into RDBMS technology. In that case, JSON is merely a standard way to serialize the data. But it should not be used for querying.
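For illustration only, a normalized layout along the lines suggested above might look like this (all table names and column sizes are made up):
create table authors (
  id   int primary key,
  name varchar(100) not null
);
create table books (
  id           char(8) primary key,      -- the former JSON key, e.g. '17e9bf8f'
  author_id    int not null,
  name         varchar(200) not null,
  language     varchar(50),
  release_date date,
  foreign key (author_id) references authors(id)
);
create table tags (
  id   int auto_increment primary key,
  name varchar(50) not null unique
);
create table book_tags (
  book_id char(8) not null,
  tag_id  int not null,
  primary key (book_id, tag_id),
  foreign key (book_id) references books(id),
  foreign key (tag_id) references tags(id)
);
With that in place, the original search becomes a plain join:
select b.*
from books b
join book_tags bt on bt.book_id = b.id
join tags t on t.id = bt.tag_id
where b.language = 'English'
  and t.name = 'Social';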
Related
I've been trying to set up a MySQL to Elasticsearch data pipeline for real-time data replication.
The MySQL database has around 10 different tables that are highly normalized. In Elasticsearch, however, I need all of the data from these tables in a single index, which would look like the output of a big compound JOIN query. I've tried a lot to figure this out, please help 🙂
(Changing the DB schema isn't feasible, as there are a lot of other dependent services.)
For example:
Input from MySQL:
Table: main_profile
+--------+------+
| name | city |
+--------+------+
| Edward | 1 |
| Jake | 9 |
+--------+------+
Table: city_master
+---------+----------+
| city_id | name |
+---------+----------+
| 1 | New York |
| 9 | Tampa |
+---------+----------+
Document stored in Elasticsearch:
{
"0": {
"name": "Edward",
"city": "New York"
},
"1": {
"name": "Jake",
"city": "Tampa"
}
}
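For reference, each entry in that document corresponds to what a join over the two sample tables would return, roughly like this (a sketch):
select p.name, c.name as city
from main_profile p
join city_master c on c.city_id = p.city;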
You can use Kafka Streams to aggregate two different topics and build a unified message. Please check this example for a Debezium source: https://github.com/debezium/debezium-examples/tree/master/kstreams
The target is MongoDB in the example but the principle is the same.
I've been experimenting with PostgreSQL's Ltree extension. I'd like to use it to store hierarchical to-do list data (i.e. lists with sub-lists). It works well, but after spending a fair bit of time, I still can't find a nice way to retrieve the information from the database in hierarchical JSON format. Here is an example of what I'd like to achieve:
 id | content               | position | parent_id | parent_path
----+-----------------------+----------+-----------+-------------
  1 | Fix lecture notes.    |        1 |           | root
  2 | Sort out red folder.  |        1 |         1 | root.1
  3 | Order files.          |        1 |         2 | root.1.2
  4 | Label topics.         |        2 |         2 | root.1.2
  5 | Sort out blue folder. |        2 |         1 | root.1
  6 | Look for jobs.        |        2 |           | root
From this, I want to get to the JSON output below:
[
  {
    "id": 1,
    "content": "Fix lecture notes.",
    "position": 1,
    "parent_id": null,
    "parent_path": "root",
    "children": [
      {
        "id": 2,
        "content": "Sort out red folder.",
        "position": 1,
        "parent_id": 1,
        "parent_path": "root.1",
        "children": [
          {
            "id": 3,
            "content": "Order files.",
            "position": 1,
            "parent_id": 2,
            "parent_path": "root.1.2",
            "children": []
          },
          {
            "id": 4,
            "content": "Label topics.",
            "position": 2,
            "parent_id": 2,
            "parent_path": "root.1.2",
            "children": []
          }
        ]
      },
      {
        "id": 5,
        "content": "Sort out blue folder.",
        "position": 2,
        "parent_id": 1,
        "parent_path": "root.1",
        "children": []
      }
    ]
  },
  {
    "id": 6,
    "content": "Look for jobs.",
    "position": 2,
    "parent_id": null,
    "parent_path": "root",
    "children": []
  }
]
Is there any neat way this can be done, maybe with Python on the server side? Looking for ideas, really!
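One possible direction, sketched below under the assumptions that the table is called todo_items and that you are on PostgreSQL 9.5 or later (for jsonb_build_object and jsonb_agg), is a small recursive function that builds each node's children array; the same grouping-by-parent logic could just as easily be done in Python on the server side:
-- sketch only: todo_items and build_children are made-up names
create or replace function build_children(p_parent integer) returns jsonb
language plpgsql stable as $$
begin
  return coalesce(
    (select jsonb_agg(
              jsonb_build_object(
                'id',          t.id,
                'content',     t.content,
                'position',    t.position,
                'parent_id',   t.parent_id,
                'parent_path', t.parent_path::text,
                'children',    build_children(t.id))   -- recurse into sub-lists
              order by t.position)
       from todo_items t
      where t.parent_id is not distinct from p_parent),
    '[]'::jsonb);
end;
$$;
-- the whole hierarchy, starting from the root items (parent_id is null)
select jsonb_pretty(build_children(null));
This runs one query per node, which is fine for to-do-list-sized data; for large trees you would want to fetch all rows once and assemble the tree in application code instead.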
I want to move a raw file containing JSON records into MySQL.
I want to map each key in the raw file to a column and store the corresponding value as that column's data, but I do not know how to do it.
Is there a way to do this without inserting the values directly by hand?
I'm trying to insert a raw file into MySQL.
Help!
Example raw files
file1
[{"app":"unknown", "uid": "1000", "says": "hello"}, {"app":"hi", "uid": "1020", "says": "good"}]
file2
[{"app":"wowo", "uid": "20", "says": "asdf"}, {"app":"no", "uid": "1030", "says": "goso"}]
Wanted MySQL result:
+-----------------+------+-------------+
| app | uid | says |
+-----------------+------+-------------+
| unknown | 1000 | hello |
| hi | 1020 | good |
| wowo | 20 | asdf |
| no | 1030 | goso |
+-----------------+------+-------------+
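If you are on MySQL 8.0, one way to do this is to hand each file's JSON array to JSON_TABLE() and INSERT ... SELECT from it. This is a sketch; the app_events table name and column sizes are made up:
-- illustrative target table
create table app_events (
  app  varchar(50),
  uid  int,
  says varchar(255)
);
-- file1's contents passed in as a JSON string; the quoted numbers are coerced to int
insert into app_events (app, uid, says)
select jt.app, jt.uid, jt.says
from json_table(
  '[{"app":"unknown", "uid": "1000", "says": "hello"}, {"app":"hi", "uid": "1020", "says": "good"}]',
  '$[*]'
  columns (
    app  varchar(50)  path '$.app',
    uid  int          path '$.uid',
    says varchar(255) path '$.says'
  )) as jt;
You still need something (a small client script, or LOAD_FILE() if your server permits it) to get each file's text into the statement, but the key-to-column mapping is handled by the COLUMNS clause.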
I've got several Postgres 9.4 tables that contain data like this:
| id | data |
|----|-------------------------------------------|
| 1 | {"user": "joe", "updated-time": 123} |
| 2 | {"message": "hi", "updated-time": 321} |
I need to transform the JSON column into something like this
| id | data |
|----|--------------------------------------------------------------|
| 1 | {"user": "joe", "updated-time": {123, "unit":"millis"}} |
| 2 | {"message": "hi", "updated-time": {321, "unit":"millis"}} |
Ideally it would be easy to apply the transformation to multiple tables. Tables that contain the JSON key data->'updated-time' should be updated, and ones that do not should be skipped. Thanks!
You can use the || operator to merge two jsonb objects together.
select '{"foo":"bar"}'::jsonb || '{"baz":"bar"}'::jsonb;
= {"baz": "bar", "foo": "bar"}
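Applied to the question, that could look something like the sketch below. Note that 'value' is a made-up key (the target shape above doesn't name one), my_table is a placeholder, and jsonb_build_object together with || requires PostgreSQL 9.5+:
-- wrap the existing number and tag it with a unit, only where the key exists
update my_table
   set data = data || jsonb_build_object(
                'updated-time',
                jsonb_build_object('value', data->'updated-time',
                                   'unit', 'millis'))
 where data ? 'updated-time';
Run the same statement against each table that has the data column; rows without the 'updated-time' key are left untouched by the WHERE clause.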
I'm trying to get a handle on how Kettle 4.4 handles data transformations by porting something I currently do via Python to a Kettle job.
I have a relational database with four tables that I need to import into my data pipeline. Here's a simplified version of the model...
Widgets
+-----------+-------------+----------------+
| WIDGET_ID | Name | Notes |
+-----------+-------------+----------------+
| 1 | Gizmo | Red paint job |
| 2 | Large Gizmo | Blue paint job |
+-----------+-------------+----------------+
Customers
+-----------+------------+----------------------------------+
| WIDGET_ID | Name | Mailing_Address |
+-----------+------------+----------------------------------+
| 1 | Acme, Inc. | 123 Fake Street, Springfield, IL |
| 2 | Fake Corp. | 555 Main Street, Small Town, IN |
| 2 | Acme, Inc. | 123 Fake Street, Springfield, IL |
+-----------+------------+----------------------------------+
Inventory
+-----------+--------+------------+
| WIDGET_ID | Amount | Date |
+-----------+--------+------------+
| 2 | 11000 | 2012-01-15 |
| 1 | 13000 | 2012-02-05 |
| 1 | 900 | 2013-01-01 |
+-----------+--------+------------+
I'd like to be able to take the above and produce JSON output like this:
{
  "id": 1,
  "Name": "Gizmo",
  "Notes": "Red Paint Job",
  "Customers": [
    {
      "Name": "Acme, Inc.",
      "Address": "123 Fake Street..."
    }
  ],
  "Inventory": [
    {
      "Amount": 13000,
      "Date": "2012-02-05"
    },
    {
      "Amount": 900,
      "Date": "2013-01-01"
    }
  ]
}
My attempts to use Kettle's joins, JS transforms and JSON output have not been very successful, and I find the documentation to be quite lacking. Can anyone help me out, or point me in the right direction?
Thanks!
You can use 3 (well, 6 in total) Kettle steps for this transformation:
1) Add 3 Table Input steps, one for each table.
2) Next, add a Multiway Merge Join step, route the 3 Table Input step hops into it,
choose widget_id as the key field, and choose the inner join type.
3) Add 1 JSON Output step connected to the output of the Multiway Merge Join step.
To produce the final JSON format you have to use JSONPath notation:
http://goessner.net/articles/JsonPath/
Hope it helps.
(If you are new to Kettle, I recommend going through the samples folder included in Kettle Spoon.)
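For reference, the joined row set feeding the JSON Output step is roughly what this query would return against the sample tables (a sketch):
select w.widget_id, w.name, w.notes,
       c.name as customer_name, c.mailing_address,
       i.amount, i.date
from widgets w
join customers c on c.widget_id = w.widget_id
join inventory i on i.widget_id = w.widget_id;
Note that the inner join repeats the widget data for every customer/inventory combination, which is why the JSON formatting step still has to fold those rows back into the nested Customers and Inventory arrays.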