Change JSON field in MySQL to include data from another table

I have these two tables:
CREATE TABLE `config_support_departments` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`title` varchar(200) NOT NULL DEFAULT '',
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE `support_tickets_filters` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`filter_departments` json DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
config_support_departments:
+----+-------------------------+
| id | title |
+----+-------------------------+
| 1 | Projects / File Support |
| 2 | Sales Support |
| 3 | IT Support |
+----+-------------------------+
support_tickets_filters:
+----+--------------------+
| id | filter_departments |
+----+--------------------+
| 1 | ["2", "3"] |
+----+--------------------+
What I need is, when querying the support_tickets_filters table, to also include the title from the config_support_departments table. The result should look something like this:
+----+-----------------------------------------------+
| id | filter_departments |
+----+-----------------------------------------------+
| 1 | {"2":"Sales Support","3":"IT Support"} |
+----+-----------------------------------------------+

You can use the JSON_OBJECTAGG() function along with JSON_TABLE() if your DB version is 8.0+:
SELECT s.id, JSON_OBJECTAGG(c.id,c.title) AS filter_departments
FROM `support_tickets_filters` AS s
JOIN JSON_TABLE(
s.`filter_departments`,
'$[*]' COLUMNS (id INT PATH '$')
) j
JOIN `config_support_departments` AS c
ON j.id = c.id
GROUP BY s.id
For DB version 5.7, you can use one of the DB metadata tables, such as information_schema.tables, to generate index values 0, 1, ... up to the length of the filter_departments array and extract the related element for each index iteratively:
SELECT s.id, JSON_OBJECTAGG(c.id,c.title) AS filter_departments
FROM
(
SELECT @i := @i + 1 AS n, s.id,
JSON_UNQUOTE(JSON_EXTRACT(`filter_departments`,CONCAT('$[',@i,']'))) AS elm
FROM `support_tickets_filters` AS s
JOIN (SELECT @i := -1) AS iter
LEFT JOIN information_schema.tables AS t
ON @i < JSON_LENGTH(`filter_departments`) - 1 ) AS s
JOIN `config_support_departments` AS c
ON s.elm = c.id
GROUP BY s.id

Related

Slow MySQL query when using ORDER BY id

I have a very slow query where the first part is created by a gem (https://github.com/CanCanCommunity/cancancan creates the SELECT and the inner query) and where I add an ORDER BY and LIMIT for cursor-based pagination.
SELECT `spree_products`.*
FROM `spree_products`
WHERE `spree_products`.`id` IN
(SELECT `spree_products`.`id`
FROM `spree_products`
LEFT OUTER JOIN `spree_vendors` ON `spree_vendors`.`id` = `spree_products`.`vendor_id`
WHERE `spree_vendors`.`active` = TRUE)
ORDER BY `spree_products`.`id` ASC
LIMIT 50;
=> 50 rows in set (1 min 3.48 sec)
These are the tables:
CREATE TABLE `spree_products` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`available_on` datetime DEFAULT NULL,
`permalink` varchar(255) DEFAULT NULL,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
`count_on_hand` int(11) DEFAULT NULL,
`vendor_id` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `index_spree_products_on_vendor_id` (`vendor_id`)
) ENGINE=InnoDB AUTO_INCREMENT=37209248 DEFAULT CHARSET=utf8mb4
CREATE TABLE `spree_vendors` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) DEFAULT NULL,
`active` tinyint(1) DEFAULT '0',
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4413 DEFAULT CHARSET=utf8mb4
(I removed unnecessary fields to keep it tidy)
The EXPLAIN on the query above returns this:
+----+-------------+----------------+------------+--------+-------------------------------------------+-----------------------------------+---------+--------------------------------+------+----------+----------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+----------------+------------+--------+-------------------------------------------+-----------------------------------+---------+--------------------------------+------+----------+----------------------------------------------+
| 1 | SIMPLE | spree_vendors | NULL | ALL | PRIMARY | NULL | NULL | NULL | 3465 | 10.00 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | spree_products | NULL | ref | PRIMARY,index_spree_products_on_vendor_id | index_spree_products_on_vendor_id | 5 | _hubert_test.spree_vendors.id | 8613 | 100.00 | Using index |
| 1 | SIMPLE | spree_products | NULL | eq_ref | PRIMARY | PRIMARY | 4 | _hubert_test.spree_products.id | 1 | 100.00 | NULL |
+----+-------------+----------------+------------+--------+-------------------------------------------+-----------------------------------+---------+--------------------------------+------+----------+----------------------------------------------+
When I remove the ORDER BY the query is fast:
SELECT `spree_products`.*
FROM `spree_products`
WHERE `spree_products`.`id` IN
(SELECT `spree_products`.`id`
FROM `spree_products`
LEFT OUTER JOIN `spree_vendors` ON `spree_vendors`.`id` = `spree_products`.`vendor_id`
WHERE `spree_vendors`.`active` = TRUE)
LIMIT 50;
=> 50 rows in set (0.00 sec)
When I keep the ORDER BY part from the outer query but remove the WHERE part from the subquery, the query is also fast:
SELECT `spree_products`.*
FROM `spree_products`
WHERE `spree_products`.`id` IN
(SELECT `spree_products`.`id`
FROM `spree_products`
LEFT OUTER JOIN `spree_vendors` ON `spree_vendors`.`id` = `spree_products`.`vendor_id`)
ORDER BY `spree_products`.`id` ASC
LIMIT 50;
I tried adding a composite index on spree_vendors (id, active), but that didn't help.
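Such a composite index would look roughly like this (the index name is just illustrative):
ALTER TABLE spree_vendors ADD INDEX index_spree_vendors_on_id_and_active (id, active);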
Any idea how to optimise this query?
UPDATE 1:
A JOIN variant of this is also slow. The DISTINCT is added by the gem to prevent duplicate records in case you don't select all columns:
SELECT DISTINCT `spree_products`.*
FROM `spree_products`
LEFT OUTER JOIN `spree_vendors` ON `spree_vendors`.`id` = `spree_products`.`vendor_id`
WHERE `spree_vendors`.`active` = TRUE
ORDER BY `spree_products`.`id` ASC
LIMIT 50;
=> 50 rows in set (1 min 43.13 sec)
Without the DISTINCT the query is fast.
UPDATE 2
It was pointed out that using a LEFT OUTER JOIN inside the subquery returns the whole table. But with an INNER JOIN it is still slow:
SELECT `spree_products`.*
FROM `spree_products`
WHERE `spree_products`.`id` IN
(SELECT `spree_products`.`id`
FROM `spree_products`
INNER JOIN `spree_vendors` ON `spree_vendors`.`id` = `spree_products`.`vendor_id`
WHERE `spree_vendors`.`active` = TRUE)
ORDER BY `spree_products`.`id` ASC
LIMIT 50;
=> 50 rows in set (1 min 3.98 sec)
Given that id must be PRIMARY, your query must be functionally identical to this:
SELECT [DISTINCT] p.*
FROM spree_products p
JOIN spree_vendors v
ON v.id = p.vendor_id
WHERE v.active = 1
ORDER BY p.id ASC
LIMIT 50;
This would benefit from an index on p.vendor_id, and perhaps v.active.
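A rough sketch of that suggestion (spree_products.vendor_id is already covered by index_spree_products_on_vendor_id in the schema above, so only the vendors side would be new; the index name is illustrative):
ALTER TABLE spree_vendors ADD INDEX index_spree_vendors_on_active (active);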

Group by Query - Select row with maximum date

Hi, I have this scores table, and in my report on the front end I have to display the keyword, url, and score for the latest scan.
CREATE TABLE `scores` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`keyword` varchar(200) DEFAULT NULL,
`url` varchar(200) DEFAULT NULL,
`score` int(11) DEFAULT NULL,
`check_date` date DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=latin1;
Here is my example data:
| id | keyword | url | score | check_date |
|----|----------|----------------------|-------|------------|
| 1 | facebook | https://facebook.com | 10 | 2020-10-21 |
| 2 | facebook | https://facebook.com | 30 | 2020-10-25 |
| 3 | fb | https://facebook.com | 55 | 2020-10-23 |
| 4 | fb | https://facebook.com | 20 | 2020-10-24 |
My Query
SELECT s1.*
FROM scores s1
JOIN scores s2
ON s1.id = s2.id
WHERE s1.check_date = s2.check_date
GROUP BY keyword,url
It returns the correct check_date for a specific keyword/url, but the score does not correspond to that date. Please help.
Do not use aggregation for this. A simple method is a correlated subquery:
select s.*
from scores s
where s.check_date = (select max(s2.check_date)
from scores s2
where s2.keyword = s.keyword and s2.url = s.url
);
If you are intent on using an explicit join you can use a left join, look for a larger date, and return the rows that have no larger date:
select s.*
from scores s left join
scores slater
on slater.keyword = s.keyword and
slater.url = s.url and
slater.check_date > s.check_date
where slater.check_date is null;

how can i select row's index from multi select

How can I derive the query?
Create Table Query:
CREATE TABLE `account_list` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`nick` char(12) NOT NULL DEFAULT '',
`sponsor` char(12) DEFAULT '',
PRIMARY KEY (`id`),
UNIQUE KEY `nick` (`nick`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
Example table data:
ID NICK SPONSOR
1 A NULL
2 B C
3 C NULL
4 D C
SELECT a.id, a.nick,b.sponsor
FROM account_list a (select b.sponsor from account_list b)
Then, I want something like this:
SELECT id, nick, sponsor FROM `account_list` ....
ID NICK SPONSOR
1 A
2 B 3
3 C
4 D 3
Look at the sponsor column: C is printed as 3. How can I write a select that looks like this?
Your query was almost right (I think); the subquery WHERE clause should reference nick:
drop table if exists t;
create table t
(ID int, NICK varchar(1), SPONSOR varchar(1));
insert into t values
(1 , 'A' , NULL),
(2 , 'B' , 'C'),
(3 , 'C' , NULL),
(4 , 'D' , 'C');
SELECT id, nick,sponsor,
(select b.id
from t as b
WHERE a.sponsor= b.nick and a.sponsor is not null
)
FROM t as a
;
+------+------+---------+--------------------------------------------------------------------------------------------------+
| id   | nick | sponsor | (select b.id from t as b WHERE a.sponsor= b.nick and a.sponsor is not null)                     |
+------+------+---------+--------------------------------------------------------------------------------------------------+
| 1 | A | NULL | NULL |
| 2 | B | C | 3 |
| 3 | C | NULL | NULL |
| 4 | D | C | 3 |
+------+------+---------+--------------------------------------------------------------------------------------------------+
4 rows in set (0.00 sec)
You could try a left join of account_list to itself:
SELECT a.id, a.nick, b.id sponsor
from account_list a
left join account_list b ON b.nick = a.sponsor

How to rewrite a NOT IN subquery as join

Let's assume that the following tables in MySQL describe documents contained in folders.
mysql> select * from folder;
+----+----------------+
| ID | PATH |
+----+----------------+
| 1 | matches/1 |
| 2 | matches/2 |
| 3 | shared/3 |
| 4 | no/match/4 |
| 5 | unreferenced/5 |
+----+----------------+
mysql> select * from DOC;
+----+------+------------+
| ID | F_ID | DATE |
+----+------+------------+
| 1 | 1 | 2000-01-01 |
| 2 | 2 | 2000-01-02 |
| 3 | 2 | 2000-01-03 |
| 4 | 3 | 2000-01-04 |
| 5 | 3 | 2000-01-05 |
| 6 | 3 | 2000-01-06 |
| 7 | 4 | 2000-01-07 |
| 8 | 4 | 2000-01-08 |
| 9 | 4 | 2000-01-09 |
| 10 | 4 | 2000-01-10 |
+----+------+------------+
The columns ID are the primary keys and the column F_ID of table DOC is a not-null foreign key that references the primary key of table FOLDER. By using the 'DATE' of documents in the where clause, I would like to find which folders contain only the selected documents. For documents earlier than 2000-01-05, this could be written as:
SELECT DISTINCT d1.F_ID
FROM DOC d1
WHERE d1.DATE < '2000-01-05'
AND d1.F_ID NOT IN (
SELECT d2.F_ID
FROM DOC d2 WHERE NOT (d2.DATE < '2000-01-05')
);
and it correctly returns '1' and '2'. By reading
http://dev.mysql.com/doc/refman/5.5/en/rewriting-subqueries.html
the performance for big tables could be improved if the subquery is replaced with a join. I already found questions related to NOT IN and JOINs, but not exactly what I was looking for. So, any ideas how this could be written with joins?
The general answer is:
select t.*
from t
where t.id not in (select id from s)
Can be rewritten as:
select t.*
from t left outer join
(select distinct id from s) s
on t.id = s.id
where s.id is null
I think you can apply this to your situation.
select distinct d1.F_ID
from DOC d1
left outer join (
select F_ID
from DOC
where date >= '2000-01-05'
) d2 on d1.F_ID = d2.F_ID
where d1.date < '2000-01-05'
and d2.F_ID is null
If I understand your question correctly, that you want to find the F_IDs of folders which only contain documents from before '2000-01-05', then simply:
SELECT F_ID
FROM DOC
GROUP BY F_ID
HAVING MAX(DATE) < '2000-01-05'
Sample Table and Insert Statements
CREATE TABLE `tleft` (
`id` int(2) NOT NULL,
`name` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
CREATE TABLE `tright` (
`id` int(2) NOT NULL,
`t_left_id` int(2) DEFAULT NULL,
`description` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
INSERT INTO `tleft` (`id`, `name`)
VALUES
(1, 'henry'),
(2, 'steve'),
(3, 'jeff'),
(4, 'richards'),
(5, 'elon');
INSERT INTO `tright` (`id`, `t_left_id`, `description`)
VALUES
(1, 1, 'sample'),
(2, 2, 'sample');
Left Join : SELECT l.id,l.name FROM tleft l LEFT JOIN tright r ON l.id = r.t_left_id ;
Returns Id : 1, 2, 3, 4, 5
Right Join : SELECT l.id,l.name FROM tleft l RIGHT JOIN tright r ON l.id = r.t_left_id ;
Returns Id : 1,2
Subquery Not in tright : select id from tleft where id not in ( select t_left_id from tright);
Returns Id : 3,4,5
Equivalent Join For above subquery :
SELECT l.id,l.name FROM tleft l LEFT JOIN tright r ON l.id = r.t_left_id WHERE r.t_left_id IS NULL;
The AND clause is applied during the JOIN, while the WHERE clause is applied after the JOIN.
Example : SELECT l.id,l.name FROM tleft l LEFT JOIN tright r ON l.id = r.t_left_id AND r.description ='hello' WHERE r.t_left_id IS NULL ;
Hope this helps

MySQL Query Optimization with MAX()

I have 3 tables with the following schema:
CREATE TABLE `devices` (
`device_id` int(11) NOT NULL auto_increment,
`name` varchar(20) default NULL,
`appliance_id` int(11) default '0',
`sensor_type` int(11) default '0',
`display_name` VARCHAR(100),
PRIMARY KEY USING BTREE (`device_id`)
)
CREATE TABLE `channels` (
`channel_id` int(11) NOT NULL AUTO_INCREMENT,
`device_id` int(11) NOT NULL,
`channel` varchar(10) NOT NULL,
PRIMARY KEY (`channel_id`),
KEY `device_id_idx` (`device_id`)
)
CREATE TABLE `historical_data` (
`date_time` datetime NOT NULL,
`channel_id` int(11) NOT NULL,
`data` float DEFAULT NULL,
`unit` varchar(10) DEFAULT NULL,
KEY `devices_datetime_idx` (`date_time`) USING BTREE,
KEY `channel_id_idx` (`channel_id`)
)
The setup is that a device can have one or more channels and each channel has many (historical) data points.
I use the following query to get the last historical data for one device and all its related channels:
SELECT c.channel_id, c.channel, max(h.date_time), h.data
FROM devices d
INNER JOIN channels c ON c.device_id = d.device_id
INNER JOIN historical_data h ON h.channel_id = c.channel_id
WHERE d.name = 'livingroom' AND d.appliance_id = '0'
AND d.sensor_type = 1 AND ( c.channel = 'ch1')
GROUP BY c.channel
ORDER BY h.date_time, channel
The query plan looks as follows:
+----+-------------+-------+--------+-----------------------+----------------+---------+---------------------------+--------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+-----------------------+----------------+---------+---------------------------+--------+-------------+
| 1 | SIMPLE | c | ALL | PRIMARY,device_id_idx | NULL | NULL | NULL | 34 | Using where |
| 1 | SIMPLE | d | eq_ref | PRIMARY | PRIMARY | 4 | c.device_id | 1 | Using where |
| 1 | SIMPLE | h | ref | channel_id_idx | channel_id_idx | 4 | c.channel_id | 322019 | |
+----+-------------+-------+--------+-----------------------+----------------+---------+---------------------------+--------+-------------+
3 rows in set (0.00 sec)
The above query is currently taking approximately 15 secs and I wanted to know if there are any tips or way to improve the query?
Edit:
Example data from historical_data
+---------------------+------------+------+------+
| date_time | channel_id | data | unit |
+---------------------+------------+------+------+
| 2011-11-20 21:30:57 | 34 | 23.5 | C |
| 2011-11-20 21:30:57 | 9 | 68 | W |
| 2011-11-20 21:30:54 | 34 | 23.5 | C |
| 2011-11-20 21:30:54 | 5 | 316 | W |
| 2011-11-20 21:30:53 | 34 | 23.5 | C |
| 2011-11-20 21:30:53 | 2 | 34 | W |
| 2011-11-20 21:30:51 | 34 | 23.4 | C |
| 2011-11-20 21:30:51 | 9 | 68 | W |
| 2011-11-20 21:30:49 | 34 | 23.4 | C |
| 2011-11-20 21:30:49 | 4 | 193 | W |
+---------------------+------------+------+------+
10 rows in set (0.00 sec)
Edit 2:
Multiple channel SELECT example:
SELECT c.channel_id, c.channel, max(h.date_time), h.data
FROM devices d
INNER JOIN channels c ON c.device_id = d.device_id
INNER JOIN historical_data h ON h.channel_id = c.channel_id
WHERE d.name = 'livingroom' AND d.appliance_id = '0'
AND d.sensor_type = 1 AND ( c.channel = 'ch1' OR c.channel = 'ch2' OR c.channel = 'ch2')
GROUP BY c.channel
ORDER BY h.date_time, channel
I've used OR in the c.channel WHERE clause because it was easier to generate programmatically, but it can be changed to use IN if necessary.
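For reference, the IN form of that predicate would look like this (the duplicated 'ch2' from the OR version can simply be dropped; the rest of the query stays the same):
WHERE d.name = 'livingroom' AND d.appliance_id = '0'
AND d.sensor_type = 1 AND c.channel IN ('ch1', 'ch2')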
Edit 3:
Example result of what I'm trying to achieve:
+-----------+------------+---------+---------------------+-------+
| device_id | channel_id | channel | max(h.date_time) | data |
+-----------+------------+---------+---------------------+-------+
| 28 | 9 | ch1 | 2011-11-21 20:39:36 | 0 |
| 28 | 35 | ch2 | 2011-11-21 20:30:55 | 32767 |
+-----------+------------+---------+---------------------+-------+
I have added the device_id to the example, but my select only needs to return channel_id, channel, the last date_time (i.e. the max) and the data. The result should be the last record from the historical_data table for each channel of one device.
It seems that dropping and re-creating the index on date_time sped up my original SQL to around 2 secs.
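For reference, that drop-and-recreate would look roughly like this, reusing the index definition from the schema above:
ALTER TABLE `historical_data` DROP INDEX `devices_datetime_idx`;
ALTER TABLE `historical_data` ADD INDEX `devices_datetime_idx` (`date_time`) USING BTREE;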
I haven't been able to test this, so I'd like to ask you to run it and let us know what happens: whether it gives you the desired result and whether it runs faster than your current query:
CREATE DEFINER=`root`@`localhost` PROCEDURE `GetLatestHistoricalData_EXAMPLE`
(
IN param_device_name VARCHAR(20)
, IN param_appliance_id INT
, IN param_sensor_type INT
, IN param_channel VARCHAR(10)
)
BEGIN
SELECT
h.date_time, h.data
FROM
historical_data h
INNER JOIN
(
SELECT c.channel_id
FROM devices d
INNER JOIN channels c ON c.device_id = d.device_id
WHERE
d.name = param_device_name
AND d.appliance_id = param_appliance_id
AND d.sensor_type = param_sensor_type
AND c.channel = param_channel
)
c ON h.channel_id = c.channel_id
ORDER BY h.date_time DESC
LIMIT 1;
END
Then to run a test:
CALL GetLatestHistoricalData_EXAMPLE ('livingroom', 0, 1, 'ch1');
I tried working it into a stored procedure so that even if you get the desired results using this for one device, you can try it with another device and see the results... Thanks!
[edit]: In response to Danny's comment, here's an updated test version:
CREATE DEFINER=`root`@`localhost` PROCEDURE `GetLatestHistoricalData_EXAMPLE_3Channel`
(
IN param_device_name VARCHAR(20)
, IN param_appliance_id INT
, IN param_sensor_type INT
, IN param_channel_1 VARCHAR(10)
, IN param_channel_2 VARCHAR(10)
, IN param_channel_3 VARCHAR(10)
)
BEGIN
SELECT
h.date_time, h.data
FROM
historical_data h
INNER JOIN
(
SELECT c.channel_id
FROM devices d
INNER JOIN channels c ON c.device_id = d.device_id
WHERE
d.name = param_device_name
AND d.appliance_id = param_appliance_id
AND d.sensor_type = param_sensor_type
AND c.channel IN (param_channel_1
,param_channel_2
,param_channel_3
)
) c ON h.channel_id = c.channel_id
ORDER BY h.date_time DESC
LIMIT 1;
END
Then to run a test:
CALL GetLatestHistoricalData_EXAMPLE_3Channel ('livingroom', 0, 1, 'ch1', 'ch2' , 'ch3');
Again, this is just for testing, so you'll be able to see if it meets your needs..
I would first add an index on the devices table ( appliance_id, sensor_type, name ) to match your query. I don't know how many entries are in this table, but if it is large, with many elements per device, get right to it.
Second, on your channels table, index on ( device_id, channel )
Third, on your history data, index on ( channel_id, date_time )
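A sketch of those three index additions (index names are illustrative, adjust to your own conventions):
ALTER TABLE devices ADD INDEX devices_appliance_sensor_name_idx (appliance_id, sensor_type, name);
ALTER TABLE channels ADD INDEX channels_device_channel_idx (device_id, channel);
ALTER TABLE historical_data ADD INDEX historical_channel_datetime_idx (channel_id, date_time);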
then,
SELECT STRAIGHT_JOIN
PreQuery.MostRecent,
PreQuery.Channel_ID,
PreQuery.Channel,
H2.Data,
H2.Unit
from
( select
c.channel_id,
c.channel,
max( h.date_time ) as MostRecent
from
devices d
join channels c
on d.device_id = c.device_id
and c.channel in ( 'ch1', 'ch2', 'ch3' )
join historical_data h
on c.channel_id = h.channel_id
where
d.appliance_id = 0
and d.sensor_type = 1
and d.name = 'livingroom'
group by
c.channel_id ) PreQuery
JOIN Historical_Data H2
on PreQuery.Channel_ID = H2.Channel_ID
AND PreQuery.MostRecent = H2.Date_Time
order by
PreQuery.MostRecent,
PreQuery.Channel