How to make my SELECT query faster in MySQL?

I've got a table in mysql:
CREATE TABLE `pdd_data` (
`pdd_id` INT(16) NOT NULL primary key auto_increment,
`vin` varchar(32) NOT NULL,
`time` TIMESTAMP NOT NULL,
`cmd` varchar(16) NOT NULL,
`data` varchar(128) NOT NULL
)ENGINE=InnoDB DEFAULT CHARSET=utf8;
I inserted 1,000,000 records into pdd_data, and I will frequently run queries like the following:
select * from pdd_data where cmd = 4599;
select * from pdd_data where vin = 400;
select * from pdd_data where vin = 400 and cmd = 4599;
Currently, the query time is about 1.20-1.90 seconds. Could anyone give me some suggestions on how to make these queries faster?
P.S. I also created a table with an index:
CREATE TABLE `pdd_data1` (
`pdd_id` INT(16) NOT NULL primary key auto_increment,
`vin` varchar(32) NOT NULL,
`time` TIMESTAMP NOT NULL,
`cmd` varchar(16) NOT NULL,
`data` varchar(128) NOT NULL,
index idx_vin_cmd (vin(32), cmd(16))
)ENGINE=InnoDB DEFAULT CHARSET=utf8;
But there was no improvement in the SELECT query time.

My suggestion is to not use SELECT *. Instead of SELECT *, list the columns explicitly: SELECT pdd_id, vin, time, cmd, data. This will reduce your execution time.
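For example, the third query above with an explicit column list might look like this sketch (the literals are quoted here as an extra assumption beyond the answer itself, since vin and cmd are VARCHAR columns):
SELECT pdd_id, vin, time, cmd, data
FROM pdd_data
WHERE vin = '400' AND cmd = '4599';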

Related

MySQL RDS performance of aggregation functions

We have a query that performs some aggregation on one column.
The filtering of the data seems to be pretty fast, but the aggregation seems to take too much time.
This query returns ~1.5 million rows. It runs for 0.6 seconds (if we want to return the data to the client, it takes ~2 minutes; we tested this with the pymysql Python library, using an unbuffered cursor so we can distinguish between query run time and fetch time):
SELECT *
FROM t_data t1
WHERE (t1.to_date = '2019-03-20')
AND (t1.period = 30)
AND (label IN ('aa','bb') )
AND ( id IN (
SELECT id
FROM t_location_data
WHERE (to_date = '2019-03-20') AND (period = 30)
AND ( country = 'Narniya' ) ) )
But if we run this query:
SELECT MAX(val) val_max,
AVG(val) val_avg,
MIN(val) val_min
FROM t_data t1
WHERE (t1.to_date = '2019-03-20')
AND (t1.period = 30)
AND (label IN ('aa','bb') )
AND ( id IN (
SELECT id
FROM t_location_data
WHERE (to_date = '2019-03-20') AND (period = 30)
AND ( country = 'Narniya' ) ) )
We see that the query takes 40 seconds to run, and the time to fetch the results in this case is, of course, less than a second.
Any help with this terrible performance of the aggregation functions over RDS Aurora? Why does calculating MAX, MIN, and AVG over 1.5 million rows take so long? (By comparison, the same calculation in Python on those numbers takes less than 1 second.)
NOTE: We added a random number to each SELECT to make sure we do not get cached values.
We use Aurora RDS:
1 instance of db.r5.large (2 vCPU + 16 GB RAM)
MySQL Engine version: 5.6.10a
Create table:
Create Table: CREATE TABLE `t_data` (
`id` varchar(256) DEFAULT NULL,
`val2` int(11) DEFAULT NULL,
`val3` int(11) DEFAULT NULL,
`val` int(11) DEFAULT NULL,
`val4` int(11) DEFAULT NULL,
`tags` varchar(256) DEFAULT NULL,
`val7` int(11) DEFAULT NULL,
`label` varchar(32) DEFAULT NULL,
`val5` varchar(64) DEFAULT NULL,
`val6` int(11) DEFAULT NULL,
`period` int(11) DEFAULT NULL,
`to_date` varchar(64) DEFAULT NULL,
`data_line_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`data_line_id`),
UNIQUE KEY `id_data` (`to_date`,`period`,`id`),
KEY `index1` (`to_date`,`period`,`id`),
KEY `index3` (`to_date`,`period`,`label`)
) ENGINE=InnoDB AUTO_INCREMENT=218620560 DEFAULT CHARSET=latin1
Create Table: CREATE TABLE `t_location_data` (
`id` varchar(256) DEFAULT NULL,
`country` varchar(256) DEFAULT NULL,
`state` varchar(256) DEFAULT NULL,
`city` varchar(256) DEFAULT NULL,
`latitude` float DEFAULT NULL,
`longitude` float DEFAULT NULL,
`val8` int(11) DEFAULT NULL,
`val9` tinyint(1) DEFAULT NULL,
`period` int(11) DEFAULT NULL,
`to_date` varchar(64) DEFAULT NULL,
`location_line_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`location_line_id`),
UNIQUE KEY `id_location_data` (`to_date`,`period`,`id`,`latitude`,`longitude`),
KEY `index1` (`to_date`,`period`,`id`,`country`),
KEY `index2` (`country`,`state`,`city`),
KEY `index3` (`to_date`,`period`,`country`,`state`)
) ENGINE=InnoDB AUTO_INCREMENT=315944737 DEFAULT CHARSET=latin1
Parameters:
##innodb_buffer_pool_size/1024/1024/1024: 7.7900
##innodb_buffer_pool_instances: 8
UPDATE:
Adding the val index (as suggested by @rick-james) did improve the query dramatically (it took ~2 seconds), but only if I delete the AND ( id IN (SELECT id FROM t_location_data ... condition. If I leave it in, the query runs for about 25 seconds: better than before, but still not good.
Indexes needed:
t_data: INDEX(period, to_date, label, val)
t_data: INDEX(period, label, to_date, val)
t_location_data: INDEX(period, country, to_date, id)
Also, change from the slow IN ( SELECT ... ) to a JOIN:
FROM t_data AS d
JOIN t_location_data AS ld USING(id)
WHERE ...
Better yet, since the tables are 1:1 (is that correct?), combine the tables so as to eliminate the JOIN. If id is not the PRIMARY KEY in each table, you really need to provide SHOW CREATE TABLE and should change the name(s).
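For example, the aggregate query from the question rewritten as the suggested JOIN might look like the sketch below. It assumes the indexes above exist and that the two tables really are 1:1 on id, as the answer asks; if an id can have several location rows, the JOIN would duplicate t_data rows and skew the aggregates.
SELECT MAX(d.val) AS val_max,
AVG(d.val) AS val_avg,
MIN(d.val) AS val_min
FROM t_data AS d
JOIN t_location_data AS ld USING(id)
-- the WHERE columns mirror the suggested indexes
WHERE d.to_date = '2019-03-20'
AND d.period = 30
AND d.label IN ('aa','bb')
AND ld.to_date = '2019-03-20'
AND ld.period = 30
AND ld.country = 'Narniya';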

Slow search query with a one to many join

My problem is a slow search query with a one-to-many relationship between the tables. My tables look like this.
Table Assignment
CREATE TABLE `Assignment` (
`Id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`ProjectId` int(10) unsigned NOT NULL,
`AssignmentTypeId` smallint(5) unsigned NOT NULL,
`AssignmentNumber` varchar(30) NOT NULL,
`AssignmentNumberExternal` varchar(50) DEFAULT NULL,
`DateStart` datetime DEFAULT NULL,
`DateEnd` datetime DEFAULT NULL,
`DateDeadline` datetime DEFAULT NULL,
`DateCreated` datetime DEFAULT NULL,
`Deleted` datetime DEFAULT NULL,
`Lat` double DEFAULT NULL,
`Lon` double DEFAULT NULL,
PRIMARY KEY (`Id`),
KEY `idx_assignment_assignment_type_id` (`AssignmentTypeId`),
KEY `idx_assignment_assignment_number` (`AssignmentNumber`),
KEY `idx_assignment_assignment_number_external`
(`AssignmentNumberExternal`)
) ENGINE=InnoDB AUTO_INCREMENT=5280 DEFAULT CHARSET=utf8;
Table ExtraFields
CREATE TABLE `ExtraFields` (
`assignment_id` int(10) unsigned NOT NULL,
`name` varchar(30) NOT NULL,
`value` text,
PRIMARY KEY (`assignment_id`,`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
My search query
SELECT
`Assignment`.`Id`, COL_5_72, COL_5_73, COL_5_74, COL_5_75, COL_5_76,
COL_5_77 FROM (
SELECT
`Assignment`.`Id`,
`Assignment`.`AssignmentNumber` AS COL_5_72,
`Assignment`.`AssignmentNumberExternal` AS COL_5_73 ,
`AssignmentType`.`Name` AS COL_5_74,
`Assignment`.`DateStart` AS COL_5_75,
`Assignment`.`DateEnd` AS COL_5_76,
`Assignment`.`DateDeadline` AS COL_5_77,
CASE WHEN `ExtraField`.`Name` = "WorkDistrict" THEN
`ExtraField`.`Value` END AS COL_5_78
FROM `Assignment`
LEFT JOIN `ExtraFields` as `ExtraField` on
`ExtraField`.`assignment_id` = `Assignment`.`Id`
WHERE `Assignment`.`Deleted` IS NULL -- Assignment should not be removed.
AND (1=1) -- Add assignment filters.
) AS q1
GROUP BY `Assignment`.`Id`
HAVING 1 = 1
AND COL_5_78 LIKE '%Amsterdam East%'
ORDER BY COL_5_72 ASC, COL_5_73 ASC;
When the table is only around 3500 records my query takes a couple of seconds to execute and return the results.
What is a better way to search in the related data? Should I just add a JSON field to the Assignment table and use the MySQL 5.7 JSON query features? Or did I make a mistake in designing my database?
You are selecting from a subquery, which forces MySQL to create an unindexed temporary table for each execution. Remove the subquery (you really don't need it here) and it will be much faster.
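A sketch of the query without the derived table, keeping the column aliases from the question (the AssignmentType join is left out because it is not shown in the question's table definitions, and MAX() is used so the extra-field value survives the GROUP BY):
SELECT
`Assignment`.`Id`,
`Assignment`.`AssignmentNumber` AS COL_5_72,
`Assignment`.`AssignmentNumberExternal` AS COL_5_73,
`Assignment`.`DateStart` AS COL_5_75,
`Assignment`.`DateEnd` AS COL_5_76,
`Assignment`.`DateDeadline` AS COL_5_77,
MAX(CASE WHEN `ExtraField`.`name` = 'WorkDistrict'
THEN `ExtraField`.`value` END) AS COL_5_78
FROM `Assignment`
LEFT JOIN `ExtraFields` AS `ExtraField`
ON `ExtraField`.`assignment_id` = `Assignment`.`Id`
WHERE `Assignment`.`Deleted` IS NULL
GROUP BY `Assignment`.`Id`
HAVING COL_5_78 LIKE '%Amsterdam East%'
ORDER BY COL_5_72, COL_5_73;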

MySQL optimize count query

I've got a question about MySQL performance.
These are my tables:
(about 140,000 records)
CREATE TABLE IF NOT EXISTS `article` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`label` varchar(256) COLLATE utf8_unicode_ci NOT NULL,
`title` varchar(256) COLLATE utf8_unicode_ci NOT NULL,
`intro` text COLLATE utf8_unicode_ci NOT NULL,
`content` text COLLATE utf8_unicode_ci NOT NULL,
`date` int(11) NOT NULL,
`active` int(1) NOT NULL,
`language_id` int(11) NOT NULL,
`category_id` int(11) NOT NULL,
`indexed` int(1) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=132911 ;
(about 400,000 records)
CREATE TABLE IF NOT EXISTS `article_category` (
`article_id` int(11) NOT NULL,
`category_id` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
RUNNING THIS COUNT QUERY:
SELECT SQL_NO_CACHE COUNT(id) as total
FROM (`article`)
LEFT JOIN `article_category` ON `article_category`.`article_id` = `article`.`id`
WHERE `article`.`language_id` = 1
AND `article_category`.`category_id` = '<catid>'
This query takes a lot of resources, so I am wondering how to optimize it.
After executing, the result is cached, so after the first run I am fine.
RUNNING THE EXPLAIN FUNCTION:
AFTER CREATING AN INDEX:
ALTER TABLE `article_category` ADD INDEX ( `article_id` , `category_id` ) ;
After adding indexes and changing LEFT JOIN to JOIN, the query runs a lot faster!
Thanks for the fast replies :)
QUERY I USE NOW (I removed the language_id because it was not that necessary):
SELECT COUNT(id) as total
FROM (`article`)
JOIN `article_category` ON `article_category`.`article_id` = `article`.`id`
AND `article_category`.`category_id` = '<catid>'
I've read something about forcing an index, but I think that's not necessary anymore because the tables are already indexed, right?
Thanks a lot!
Martijn
You haven't created the necessary indexes on the tables:
Table article_category - Create a compound index on (article_id, category_id)
Table article - Create a compound index on (id, language_id)
If this doesn't help, post the EXPLAIN output.
The columns used in a JOIN condition should have an index, so you need to index article_id.
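The two suggestions as statements (the index names are illustrative; the first is essentially the index the question already added):
ALTER TABLE `article_category`
ADD INDEX `idx_article_category` (`article_id`, `category_id`);
ALTER TABLE `article`
ADD INDEX `idx_article_id_language` (`id`, `language_id`);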

Avoid UNION for two almost identical tables in MySQL

I'm not very good at MySQL, and I'm writing a query to count messages sent by a user, based on their type and the is_auto field.
Messages can be of type "small text message" or "newsletter". I created two entities with a few fields that differ between them. The important one is messages_count, which is absent from the newsletter table and is used in the query:
CREATE TABLE IF NOT EXISTS `small_text_message` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`messages_count` int(11) NOT NULL,
`username` varchar(255) NOT NULL,
`method` varchar(255) NOT NULL,
`content` longtext,
`sent_at` datetime DEFAULT NULL,
`status` varchar(255) NOT NULL,
`recipients_count` int(11) NOT NULL,
`customers_count` int(11) NOT NULL,
`sheduled_at` datetime DEFAULT NULL,
`sheduled_for` datetime DEFAULT NULL,
`is_auto` tinyint(1) NOT NULL,
`user_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
And:
CREATE TABLE `newsletter` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`subject` varchar(78) DEFAULT NULL,
`content` longtext,
`sent_at` datetime DEFAULT NULL,
`status` varchar(255) NOT NULL,
`recipients_count` int(11) NOT NULL,
`customers_count` int(11) NOT NULL,
`sheduled_at` datetime DEFAULT NULL,
`sheduled_for` datetime DEFAULT NULL,
`is_auto` tinyint(1) NOT NULL,
`user_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
I ended up with a UNION query. Can this query be shortened or optimized, given that the only difference is messages_count, which should always be 1 for newsletter?
SELECT
CONCAT('sms_', IF(is_auto = 0, 'user' , 'auto')) AS subtype,
SUM(messages_count * (customers_count + recipients_count)) AS count
FROM small_text_message WHERE status <> 'pending' AND user_id = 1
GROUP BY is_auto
UNION
SELECT
CONCAT('newsletter_', IF(is_auto = 0, 'user' , 'auto')) AS subtype,
SUM(customers_count + recipients_count) AS count
FROM newsletter WHERE status <> 'pending' AND user_id = 1
GROUP BY is_auto
I don't see any easy way to avoid a UNION (or UNION ALL) operation that returns the specified result set.
I would recommend you use a UNION ALL operator in place of the UNION operator. Then the execution plan will not include the step that eliminates duplicate rows. (You already have GROUP BY operations on each query, and there is no way that those two queries can produce an identical row.)
Otherwise, your query looks fine just as it is written.
(It's always a good thing to consider the question: might there be a better way? To get the result set you are asking for, from the schema you have, your query looks about as good as it's going to get.)
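Concretely, applying that suggestion only changes the set operator; the rest of the query from the question stays the same:
SELECT
CONCAT('sms_', IF(is_auto = 0, 'user', 'auto')) AS subtype,
SUM(messages_count * (customers_count + recipients_count)) AS count
FROM small_text_message
WHERE status <> 'pending' AND user_id = 1
GROUP BY is_auto
UNION ALL
SELECT
CONCAT('newsletter_', IF(is_auto = 0, 'user', 'auto')) AS subtype,
SUM(customers_count + recipients_count) AS count
FROM newsletter
WHERE status <> 'pending' AND user_id = 1
GROUP BY is_auto;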
If you are looking for more general DB advice, I recommend restructuring the tables to factor the common elements into one table, perhaps called outbound_communication or something, with all of your common fields, and then having "sub tables" for the specific types to hold the fields that are unique to each type. It does mean a simple JOIN is necessary to select all of the fields you want, but then again, it's normalized and actually makes situations like this one easier (one table holds all of the entities of interest). Additionally, you have the option of writing that JOIN just once as a "view", and then your existing code would not even need to change to see the two tables as if they never changed.
CREATE TABLE IF NOT EXISTS `outbound_communication` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`content` longtext,
`sent_at` datetime DEFAULT NULL,
`status` varchar(255) NOT NULL,
`recipients_count` int(11) NOT NULL,
`customers_count` int(11) NOT NULL,
`sheduled_at` datetime DEFAULT NULL,
`sheduled_for` datetime DEFAULT NULL,
`is_auto` tinyint(1) NOT NULL,
`user_id` int(11) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
CREATE TABLE `small_text_message` (
`outbound_communication_id` int(11) NOT NULL,
`messages_count` int(11) NOT NULL,
`username` varchar(255) NOT NULL,
`method` varchar(255) NOT NULL,
PRIMARY KEY (`outbound_communication_id`),
FOREIGN KEY (outbound_communication_id)
REFERENCES outbound_communication(id)
) ENGINE=InnoDB;
CREATE TABLE `newsletter` (
`outbound_communication_id` int(11) NOT NULL,
`subject` varchar(78) DEFAULT NULL,
PRIMARY KEY (`outbound_communication_id`),
FOREIGN KEY (outbound_communication_id)
REFERENCES outbound_communication(id)
) ENGINE=InnoDB;
Then selecting a text message looks like this:
SELECT *
FROM outbound_communication AS parent
JOIN small_text_message
ON parent.id = small_text_message.outbound_communication_id
WHERE parent.id = 1234;
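And the "write that JOIN just once as a view" idea from above could look something like this (the view name is illustrative):
CREATE VIEW v_small_text_message AS
SELECT p.*,
s.messages_count, s.username, s.method
FROM outbound_communication AS p
JOIN small_text_message AS s
ON p.id = s.outbound_communication_id;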
The nature of the query is inherently the union of the data from the small text message and the newsletter tables, so the UNION query is the only realistic formulation. There's no join of relevance between the two tables, for example.
So, I think you're very much on the right lines with your query.
Why are you worried about a UNION?

MySQL join does not use an index for the BETWEEN operator

So basically I have three tables:
CREATE TABLE `cdIPAddressToLocation` (
`IPADDR_FROM` int(10) unsigned NOT NULL COMMENT 'Low end of the IP Address block',
`IPADDR_TO` int(10) unsigned NOT NULL COMMENT 'High end of the IP Address block',
`IPLOCID` int(10) unsigned NOT NULL COMMENT 'The Location ID for the IP Address range',
PRIMARY KEY (`IPADDR_TO`),
KEY `Index_2` USING BTREE (`IPLOCID`),
KEY `Index_3` USING BTREE (`IPADDR_FROM`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
CREATE TABLE `cdIPLocation` (
`IPLOCID` int(10) unsigned NOT NULL default '0',
`Country` varchar(4) default NULL,
`Region` int(10) unsigned default NULL,
`City` varchar(90) default NULL,
`PostalCode` varchar(10) default NULL,
`Latitude` float NOT NULL,
`Longitude` float NOT NULL,
`MetroCode` varchar(4) default NULL,
`AreaCode` varchar(4) default NULL,
`State` varchar(45) default NULL,
`Continent` varchar(10) default NULL,
PRIMARY KEY (`IPLOCID`)
) ENGINE=MyISAM AUTO_INCREMENT=218611 DEFAULT CHARSET=latin1;
and
CREATE TABLE `data` (
`IP` varchar(50),
`SCORE` int
);
My task is to join these three tables and find the location data for a given IP address.
My query is as follows:
select
t.ip,
l.Country,
l.State,
l.City,
l.PostalCode,
l.Latitude,
l.Longitude,
t.score
from
(select
ip, inet_aton(ip) ipv, score
from
data
order by score desc
limit 5) t
join
cdIPAddressToLocation a ON t.ipv between a.IPADDR_FROM and a.IPADDR_TO
join
cdIPLocation l ON l.IPLOCID = a.IPLOCID
While this query works, it's very, very slow; it took about 100 seconds to return the result on my dev box.
I'm using MySQL 5.1; cdIPAddressToLocation has 5.9 million rows and cdIPLocation has about 0.3 million rows.
When I check the execution plan, I find it's not using any index on the table 'cdIPAddressToLocation', so for each row in the 'data' table it does a full table scan against 'cdIPAddressToLocation'.
This is very weird to me: since there are already two indexes on 'cdIPAddressToLocation', on the columns 'IPADDR_FROM' and 'IPADDR_TO', the execution plan should be able to exploit them, so why didn't it use them?
Or was there something wrong with my query?
Please help, thanks a lot.
Have you tried using a composite index on the columns cdIPAddressToLocation.IPADDR_FROM and cdIPAddressToLocation.IPADDR_TO?
Multiple-Column Indexes
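That suggestion as a statement (the index name is illustrative, and building it on a 5.9-million-row MyISAM table will take some time):
ALTER TABLE cdIPAddressToLocation
ADD INDEX idx_ipaddr_from_to (IPADDR_FROM, IPADDR_TO);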