Select decoded JSON data from joined MySQL tables - mysql

Could you please tell me how to write a SELECT against a MySQL database when I have two tables containing JSON data? One of them has the following structure:
Table Trees
(id, name, value) - three columns
which includes following data
1, trees, [{"name":"Oaktree","value":1,"target":null},{"name":"Appletree","value":2,"target":null},{"name":"Plumtree","value":3,"target":null}]
2, length, [{"name":"10m","value":1,"target":null},{"name":"15m","value":2,"target":null},{"name":"20m","value":3,"target":null}]
3, age, [{"name":"5y","value":1,"target":null},{"name":"10y","value":2,"target":null},{"name":"20y","value":3,"target":null}]
The second table has the following structure:
Table SelectedTrees
(properties) - only one column
which includes the following data
[{"id":"1","value":["1","3"]},{"id":"2","value":["1", "2", "3"]},{"id":"3","value":["2"]}]
This represents the data selected from the Trees table: the id in the properties column of SelectedTrees corresponds to the id column of the Trees table. I would like to select the real (JSON-decoded) values from the database, like this:
Trees = Oaktree, Plumtree
Length = 10m, 15m, 20m
Age = 10y
How could I make this?
Thanks in advance.
Jan

In a nutshell, this is not possible. Relational databases are built for quickly comparing constant values that they can index. To MySQL, JSON is just a string, and any kind of partial string matching triggers a so-called table scan, which becomes painfully slow once you have serious amounts of data.
You COULD get it to work like this:
SELECT * FROM Trees
JOIN SelectedTrees
ON properties LIKE CONCAT('%"id":"', Trees.id, '"%')
This is however just a hack that you should never want to use in any production system, and I advise against using it even in a test system. Instead, refactor your database so there is never any JSON in there that you need to match on in your queries. It's fine to store secondary data as JSON; just make sure the IDs and names are extracted before insertion and stored in separate columns, so the DB engine can do its relational magic.
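To illustrate what that refactoring could look like, here is a minimal sketch of a normalized schema for the option lists and the selections (table and column names are my own assumptions, not part of the original design):

-- Option list, one row per option instead of a JSON array
CREATE TABLE TreeOptions (
  tree_id INT NOT NULL,              -- references Trees.id (1 = trees, 2 = length, 3 = age)
  option_value INT NOT NULL,         -- the "value" field from the JSON
  option_name VARCHAR(50) NOT NULL,  -- the "name" field from the JSON
  PRIMARY KEY (tree_id, option_value)
);

-- Selections, one row per selected option instead of a JSON array
CREATE TABLE SelectedTreeOptions (
  tree_id INT NOT NULL,
  option_value INT NOT NULL,
  PRIMARY KEY (tree_id, option_value)
);

-- Now the desired result is a plain join
SELECT t.name, o.option_name
FROM Trees t
JOIN SelectedTreeOptions s ON s.tree_id = t.id
JOIN TreeOptions o ON o.tree_id = s.tree_id AND o.option_value = s.option_value
ORDER BY t.id;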

Related

Why are arrays not saveable in sql

I know that SQL can't save arrays (correct me if I'm wrong).
Why?
I know this is a stupid question, but arrays are only structured data. Why can't SQL save that?
Can I rewrite my MySQL database or download an add-on for SQL so I can save arrays?
Thanks in advance
Relational database management systems (RDBMS), such as MySQL, SQL Server, Oracle and PostgreSQL usually store data in tables. This is a very good way to store related data.
Let's say there are three entities: customers, orders, and products, and the orders contain multiple products. Hence four tables:
customers(customer_no, name)
products(product_no, name, price)
orders(order_no, customer_no, date)
order_details(order_no, product_no, amount)
We would provide indexes (i.e. search trees) to easily find the orders of a customer or the products in an order. Now let's say we want to know how many orders have been made for product 123:
select count(distinct order_no)
from order_details
where product_no = 123;
The DBMS will quickly find the order_details records for the product, because looking up an index is like searching by last name in a telephone book (binary search). After that it's mere counting. So only a few records get read and the whole query is really fast.
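For completeness, here is a minimal sketch of indexes that would support those lookups (assuming the table definitions above; in practice the primary keys may already cover some of this):

-- find all orders of a customer quickly
CREATE INDEX idx_orders_customer ON orders (customer_no);

-- find all order details for a product quickly (used by the count above)
CREATE INDEX idx_order_details_product ON order_details (product_no);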
Now the same with arrays. Something like:
products(product_no, name, price)
customers
(
customer_no,
name,
array of orders
(
order_no,
date,
array of products
(
product_no,
amount
)
)
)
Well, the order details are now hidden inside an order element which is itself inside a customer object. To get the number of orders for product 123, the only approach seems to be to read all customer records, loop through all orders and see whether they contain the product. This can take awfully long. Moreover, without foreign key constraints for the relations between the entities, the arrays may contain product numbers that don't even exist.
Well, there may be ways to kind of index array data and there may be ways to guarantee data consistency for them, but the relational approach with tables has proven to solve these things extremely well. So we would avoid arrays and rather build our relations with tables instead. This is what a relational database is made for.
(Having said this, arrays may come in handy every now and then, e.g. in a recursive query where you want to remember which records have already been visited, but these occasions are rare.)
To answer my own question, I first want to thank everyone for the comments.
THANK YOU!
Back to the question: ordinary SQL can't save arrays, and by design doesn't want to, because of normalization issues.
You can store array-like data in another way:
A SQL table is like an array. Link a new table to act as the array. Create the table manually or, if the array could change, with code. There is no real need for arrays in SQL.
If you have to, or want to, you can use NoSQL or PostgreSQL (which supports array types), or save the data as JSON or XML (e.g. in Oracle).
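As a small illustration of "link a new table as the array" (a sketch only; all table and column names below are made up):

-- Instead of: customer { ..., favourite_colors: ['red', 'blue'] }
CREATE TABLE customers (
  customer_no INT PRIMARY KEY,
  name VARCHAR(100) NOT NULL
);

-- The "array" becomes a child table, one row per element
CREATE TABLE customer_favourite_colors (
  customer_no INT NOT NULL,
  color VARCHAR(30) NOT NULL,
  PRIMARY KEY (customer_no, color),
  FOREIGN KEY (customer_no) REFERENCES customers (customer_no)
);

-- Read the "array" back
SELECT color FROM customer_favourite_colors WHERE customer_no = 1;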

MySql 5.7 json_extract by key

I have a table and it looks like below:
Table data
id params
1 {"company1X":{"price":"1124.55"},"company2X":{"price":"1,124.55"},"company3X":{"price":""},"company4X":{"price":""},"company5X":{"price":"1528.0"}}
I don't know the name of "company" to use in my request.
How can I fetch my data ordered by price?
Thanks!
P.S. I have tried select json_extract(params, '$[*].price') from data, but it doesn't work (it returns NULL).
$[*] gets all elements of a JSON array, not an object. This is an object, so you get NULL.
$.* will get you all elements in a JSON object, so $.*.price gets you a JSON array of all prices.
mysql> select json_extract(params, '$.*.price') from foo;
+-------------------------------------------+
| json_extract(params, '$.*.price')         |
+-------------------------------------------+
| ["1124.55", "1,124.55", "", "", "1528.0"] |
+-------------------------------------------+
Now there's a problem. As far as SQL is concerned, this is a single row. It can't be sorted with a normal ORDER BY, which works on rows.
MySQL has no function for sorting JSON... so you're stuck. You can return the JSON array and let whatever is receiving the data do the sorting. You might be able to write a stored procedure to sort the array... but that's a lot of work to support a bad table design. Instead, change the table.
The real problem is this is a bad use of a JSON column.
JSON columns defeat most of the point of a SQL database (less so in PostgreSQL, which has much better JSON support). SQL databases work with rows and columns, but JSON shoves what would be multiple rows and columns into a single cell.
For this reason, JSON columns should be used sparingly; typically when you're not sure what sort of data you'll need to store. Important information like "price" that is going to be searched and sorted should be stored in normal columns.
You'll want to change your table to be a normal SQL table with columns for the company name and price. Then you can use normal SQL features like order by and performance will benefit from indexing. There isn't enough information in your question to suggest what that table might look like.
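Since the question doesn't say what a row represents, the following is only a guess at a possible shape (all names are assumptions):

CREATE TABLE company_prices (
  id INT NOT NULL,               -- the original data.id
  company VARCHAR(50) NOT NULL,  -- e.g. 'company1X'
  price DECIMAL(10,2) NULL,      -- NULL instead of an empty string
  PRIMARY KEY (id, company)
);

-- Ordering and indexing now work the normal way
SELECT company, price
FROM company_prices
WHERE id = 1
ORDER BY price;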

Merging nested tables in linq to sql

I have a LINQ query that gets data from an OData Reporting service.
So far so good, but when I return my data like this:
select new { TimesheetActual, TimesheetLine, Timesheet, TimesheetProject, TimesheetTask, subTv, TimesheetResource, subRes, pLeft }
It returns as a collection of nested collections.
For my service I need one big table with every column from every record.
I know this is possible by explicitly naming every column in the select statement like this:
select new { TimesheetActual.Column1, TimesheetActual.Column2, .., TimesheetLine.Column1,.., TimesheetProject.Column1,..}
But due to the massive number of columns, I'm a little reluctant to do it this way.
So my question: is there any way to either merge the collections, or some other way to get the same result, without having to specify 100+ columns?

Should i use relations or split the result

I'm creating a database that should contain coordinates, textsize, etc.
My first table looks like this
id, template_id, data_1, data_2, data_3, data_4, data_5, data_6, data_7, data_8
Every data_x field should have one of the following formats:
svg string;textsize
x;y;textheight;textwidth
x;y;imageheight;imagewidth
In the future more formats could be added
My question is, should I use those formats (and split them using e.g. PHP), or should I create a table for each format with relationships? What is the fastest/best practice?
I hope I explained myself well enough.
First, you should not be storing these in separate columns. You should have another table with one row for each combination of table1 id and data item. It would have at least two columns:
Table1Id
DataColumn
It might also have an auto-incremented id, a sequential number to enumerate the items, and so on.
As for your question on how to store the data, that depends on how you are going to access it. If the database is going to be blind to the contents, then you can store them all in a single field. If you need to access them, then you might have to go to the next level and break things out into separate data tables, one for each type of value, that the above data column would refer to.
In any case, the more important change at this point is to put the "array" of data values into separate rows of another table.
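A minimal sketch of that extra table, assuming your first table is called table1 and has an id primary key (all names here are assumptions):

CREATE TABLE table1_data (
  id INT AUTO_INCREMENT PRIMARY KEY,
  table1_id INT NOT NULL,       -- references table1.id
  position TINYINT NOT NULL,    -- which of the former data_1 .. data_8 slots this was
  data VARCHAR(255) NOT NULL,   -- the raw "svg string;textsize" / "x;y;..." value
  FOREIGN KEY (table1_id) REFERENCES table1 (id)
);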

Joining a table stored within a column of the results

I want to try and keep this as one query and not use PHP, but it's proving to be tough.
I have a table called applications, that stores all the applications and some basic information about them.
Then, I have a table with all the types of applications in it, and that table contains a reference to another table which stores more specific data about the specific type of application in question.
select applications.id as appid, applications.category, type.title as type, type.id as tid, type.valuefld, type.tablename
from applications
left join type on applications.typeid=type.id
left join department on type.deptid=department.id
where not isnull(work_cat)
and work_cat != ''
and applications.deleted=0
and datei between '10-04-14' and '11-04-14'
order by type, work_cat
Now, in the old version, there is another query on every single result. Over hundreds of results... that sucks.
This is the query I'd like to integrate so I can get all the data in one result row. (Old is ASP, I'm re-writing it in PHP)
query = "select sum("&adors.fields("valuefld")&") as cost, description from "&adors.fields("tablename")&" where appid = '"&adors.fields("tablename")&"'"
Prepared statements, I'm aware, are the best solution, but for now they are not an option.
You can't do this with a plain SQL query - you need a defined set of tables that your query is based on. The fact that your current implementation queries whatever table is named by tablename in the first result set means that, to get this all in one query, you will have to restructure your data. You have to know which tables you're querying rather than determining them dynamically.
If the reason for these different tables is the different information stored in each requiring different record (column) structures, you might want to look into Key/Value pair storage in a large table. Once you combine the dynamically named ones into a single location you can integrate your two queries together.
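As a rough sketch of that key/value approach (all names below are assumptions, since the dynamically named tables aren't described in the question):

-- One table holds the type-specific values for every application,
-- instead of one table per application type
CREATE TABLE application_values (
  appid INT NOT NULL,                -- references applications.id
  field_name VARCHAR(50) NOT NULL,   -- e.g. 'cost', 'description'
  field_value VARCHAR(255) NULL,
  PRIMARY KEY (appid, field_name)
);

-- The per-row follow-up query then folds into the main query, e.g.:
SELECT a.id AS appid,
       SUM(CASE WHEN v.field_name = 'cost' THEN v.field_value + 0 END) AS cost
FROM applications a
LEFT JOIN application_values v ON v.appid = a.id
GROUP BY a.id;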