UPDATE by PRIMARY KEY query is too slow on a big table - MySQL

I have a MyISAM table (on MariaDB) with 7 million rows in it:
CREATE TABLE `mytable` (
`id` bigint(100) unsigned NOT NULL AUTO_INCREMENT,
`x` int(5) unsigned NOT NULL DEFAULT '0',
`y` int(5) unsigned NOT NULL DEFAULT '0',
`value` int(5) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=10152508 DEFAULT CHARSET=utf8 PAGE_CHECKSUM=1
When I run
SELECT * FROM mytable WHERE id = 167880;
it takes around 0.272 sec
When I run
UPDATE mytable SET value = 1 WHERE id = 167880;
it takes anywhere from 0.200 to 2.5 sec
I was thinking it's because my table has a lot of rows, but still, it shouldn't take that much time to update a row by its primary key.
Since I did some research before posting, here are the checks I've already done:
No duplicate indexes
No indexes other than the primary key id
No triggers
Tried switching to the InnoDB engine; it was worse (around 6 sec for an update)
Tried switching to the Aria engine; it was even worse
Already ran OPTIMIZE TABLE
Config is the default config of the latest version of MariaDB (fresh install)
Made all these checks while the DB was not in use by anything else, so no heavy reads during the tests

I think the problem is the data type you are using for the id column.
Using INT rather than BIGINT can make a significant reduction in disk space.
Read this article:
http://ronaldbradford.com/blog/bigint-v-int-is-there-a-big-deal-2008-07-18/
Hope it helps.
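Something like this, for example (a minimal sketch; it assumes id never needs to exceed the ~4.29-billion INT UNSIGNED limit, and the ALTER rebuilds the whole table, so run it in a maintenance window):
-- Shrink the 8-byte BIGINT key to a 4-byte INT UNSIGNED:
ALTER TABLE mytable MODIFY `id` INT UNSIGNED NOT NULL AUTO_INCREMENT;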

Related

MySQL partitioning effect on DDL and DML

I am using MySQL 5.6 with ~150 million records in a Transaction table (InnoDB). As its size increases, this table is becoming unmanageable (adding a column or index) and slow even with the required indexing. After searching the internet, I found it is an appropriate time to partition the table. I am confident that partitioning will serve the following purposes for me:
Improve DML statements' response time (using partition pruning)
Improve archival process
But I am not sure whether (and how) it will improve DDL performance for this table. More specifically, the performance of the following DDL statements:
ALTER TABLE ADD/DROP COLUMN
ALTER TABLE ADD/DROP INDEX
I went through the MySQL documentation and the internet but was unable to find an answer. Can anyone help me with this or point me to relevant documentation?
My table structure is as follows:
CREATE TABLE `TRANSACTION` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`parent_id` int(11) DEFAULT NULL,
`parent_uuid` char(36) DEFAULT NULL,
`order_number` varchar(64) DEFAULT NULL,
`order_id` int(11) DEFAULT NULL,
`order_uuid` char(36) DEFAULT NULL,
`order_type` char(1) DEFAULT NULL,
`business_id` int(11) DEFAULT NULL,
`store_id` int(11) DEFAULT NULL,
`store_device_id` int(11) DEFAULT NULL,
`source` char(1) DEFAULT NULL COMMENT 'instore, online, order_ahead, etc',
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
`flags` int(11) DEFAULT NULL,
`customer_lang` char(2) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `parent_id` (`parent_id`),
KEY `business_id` (`business_id`,`store_id`,`store_device_id`),
KEY `parent_uuid` (`parent_uuid`),
KEY `order_uuid` (`order_uuid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
And I am partitioning using the following statement:
ALTER TABLE TRANSACTION PARTITION BY RANGE (id)
(PARTITION p0 VALUES LESS THAN (5000000) ENGINE = InnoDB,
PARTITION p1 VALUES LESS THAN (10000000) ENGINE = InnoDB,
PARTITION p2 VALUES LESS THAN MAXVALUE ENGINE = InnoDB)
Thanks!
Partitioning is not a performance panacea. Even the items you mentioned will not speed up; they may even slow down.
Instead, I will critique the table to look for ways to speed up some things.
UUIDs are terrible for performance once the index on them becomes too big to be cached; this is because of their randomness. Possible solutions: compact them into BINARY(16); shrink the table in other ways; avoid UUIDs.
Why have both parent_id and parent_uuid??
Shrink the 4-byte INTs to smaller datatypes where practical (see the sketch after these points).
Usually CHAR should be CHARACTER SET ascii (1 byte/character), not utf8mb4 (up to 4 bytes/character).
Caution: 150M is getting within sight of the 2-billion limit of INT SIGNED. Consider the 4-billion limit of INT UNSIGNED. (Each is 4 bytes.)
Do you ever use created_at or updated_at?
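Taken together, a minimal sketch of those shrinks (illustrative only; verify the value ranges in your data first, and note that each ALTER rebuilds this large table):
-- Compact the UUID: the data must be converted, not just retyped.
-- `parent_uuid_bin` is a name I made up for the new column:
ALTER TABLE TRANSACTION ADD COLUMN `parent_uuid_bin` BINARY(16) DEFAULT NULL;
UPDATE TRANSACTION SET parent_uuid_bin = UNHEX(REPLACE(parent_uuid, '-', ''));
-- ...then drop `parent_uuid`, re-create its index on the BINARY column,
-- and shrink the obvious candidates:
ALTER TABLE TRANSACTION
MODIFY `source` CHAR(1) CHARACTER SET ascii DEFAULT NULL,
MODIFY `order_type` CHAR(1) CHARACTER SET ascii DEFAULT NULL,
MODIFY `customer_lang` CHAR(2) CHARACTER SET ascii DEFAULT NULL,
MODIFY `store_device_id` SMALLINT UNSIGNED DEFAULT NULL; -- only if device ids fit in 16 bits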
MySQL 8.0.13 has a very fast ADD COLUMN and DROP COLUMN (for limited situations).
5.7.?? has a less-invasive ADD INDEX than previous versions, but I am not sure it applies to partitioned tables.
5.7.4: Online DDL support reduces table rebuild time and permits concurrent DML, which helps reduce user application downtime. For additional information, see Overview of Online DDL.
More importantly, let's see the main queries that are "too slow". There may be composite indexes and/or reformulations of the queries that will speed them up.
There is even a slim chance that partitioning will help but not on the PRIMARY KEY.
I think there are only 4 use cases where partitioning helps performance.

Why does MySQL query return zero rows, but after defrag it works?

I have an InnoDB MySQL table, and this query returns zero rows:
SELECT id, display FROM ra_table WHERE parent_id=7266 AND display=1;
However, there are actually 17 rows that should match:
SELECT id, display FROM ra_table WHERE parent_id=7266;
ID display
------------------
1748 1
5645 1
...
There is an index on display (int 1), and ID is the primary key. The table also has several other fields which I'm not pulling in this query.
After noticing this query wasn't working, I defragmented the table, and then the first query started working correctly, but only for a time. It seems that after a few days the query stops working again and I have to defragment to fix it.
My question is, why does the fragmented table break this query?
Additional info: MySQL 5.6.27 on Amazon RDS.
CREATE TABLE `ra_table` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`parent_id` int(6) NOT NULL,
`display` int(1) NOT NULL,
PRIMARY KEY (`id`),
KEY `parent_id` (`parent_id`),
KEY `display` (`display`)
) ENGINE=InnoDB AUTO_INCREMENT=13302 DEFAULT CHARSET=latin1
ROW_FORMAT=DYNAMIC
There may be a bug in the version you are running.
Meanwhile, change
INDEX(parent_id),
INDEX(display)
to
INDEX(parent_id, display)
By combining them, the query will run faster (and hopefully correctly). An index on just a flag (display) is likely never to be used.
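As a single statement (the combined index name is mine):
ALTER TABLE ra_table
DROP INDEX parent_id,
DROP INDEX display,
ADD INDEX parent_id_display (parent_id, display);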

Very slow query on mysql table with 35 million rows

I am trying to figure out why a query is so slow on my MySQL database. I've read various content about MySQL performance and various SO questions, but this remains a riddle to me.
I am using MySQL 5.6.23-log - MySQL Community Server (GPL)
I have a table with roughly 35 million rows.
This table is being inserted into about 5 times per second.
The table structure is shown in the SHOW CREATE TABLE output in the edit below.
I have indexes on all the columns except for answer_text.
The query I'm running is:
SELECT answer_id, COUNT(1)
FROM answers_onsite a
WHERE a.screen_id=384
AND a.timestamp BETWEEN 1462670000000 AND 1463374800000
GROUP BY a.answer_id
this query takes roughly 20-30 seconds before it returns a result set.
Any insights?
EDIT
As asked, my SHOW CREATE TABLE output:
CREATE TABLE `answers_onsite` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`device_id` bigint(20) unsigned NOT NULL,
`survey_id` bigint(20) unsigned NOT NULL,
`answer_set_group` varchar(255) NOT NULL,
`timestamp` bigint(20) unsigned NOT NULL,
`screen_id` bigint(20) unsigned NOT NULL,
`answer_id` bigint(20) unsigned NOT NULL DEFAULT '0',
`answer_text` text,
PRIMARY KEY (`id`),
KEY `device_id` (`device_id`),
KEY `survey_id` (`survey_id`),
KEY `answer_set_group` (`answer_set_group`),
KEY `timestamp` (`timestamp`),
KEY `screen_id` (`screen_id`),
KEY `answer_id` (`answer_id`)
) ENGINE=InnoDB AUTO_INCREMENT=35716605 DEFAULT CHARSET=utf8
ALTER TABLE answers_onsite ADD KEY complex_index (screen_id, `timestamp`, answer_id);
You can use MySQL partitioning like this:
ALTER TABLE answers_onsite DROP PRIMARY KEY;
ALTER TABLE answers_onsite ADD PRIMARY KEY (id, `timestamp`) PARTITION BY HASH(id) PARTITIONS 500;
Running the above may take a while depending on the size of your table.
Look at your WHERE clause:
WHERE a.screen_id=384
AND a.timestamp BETWEEN 1462670000000 AND 1463374800000
GROUP BY a.answer_id
I would create a composite index (screen_id, answer_id, timestamp) and run some tests.
You could also try (screen_id, timestamp, answer_id) to see if it performs better.
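For testing, both candidates can be created side by side and compared with EXPLAIN (the index names are mine); drop whichever one the optimizer ignores:
ALTER TABLE answers_onsite
ADD KEY idx_screen_answer_ts (screen_id, answer_id, `timestamp`),
ADD KEY idx_screen_ts_answer (screen_id, `timestamp`, answer_id);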
The BETWEEN clause is known to be slow, though, as is any range query. So is COUNT over millions of rows. I would count once a day and save the results to a 'Stats' table, which you can query whenever you need... obviously only if you do not need live data.
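A minimal sketch of that pre-aggregation idea, assuming the timestamp values are in milliseconds (the table and column names here are mine):
-- Daily rollup table; refresh it once a day from answers_onsite:
CREATE TABLE answer_stats (
screen_id BIGINT UNSIGNED NOT NULL,
answer_id BIGINT UNSIGNED NOT NULL,
`day` DATE NOT NULL,
cnt INT UNSIGNED NOT NULL,
PRIMARY KEY (screen_id, `day`, answer_id)
) ENGINE=InnoDB;

-- Aggregate one day's window (the boundary values are placeholders):
INSERT INTO answer_stats (screen_id, answer_id, `day`, cnt)
SELECT screen_id, answer_id, DATE(FROM_UNIXTIME(`timestamp` / 1000)), COUNT(*)
FROM answers_onsite
WHERE `timestamp` >= 1462665600000 AND `timestamp` < 1462752000000
GROUP BY screen_id, answer_id, DATE(FROM_UNIXTIME(`timestamp` / 1000));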

MySQL performance - large database

I've read heaps of posts here on Stack Overflow, blog posts, tutorials and more, but I still fail to resolve a rather nasty performance issue with my MySQL DB. Keep in mind that I'm a novice when it comes to large MySQL databases.
I have a table with approx. 11,000,000 rows (which will increase to, say, 20,000,000 or more). Here's the layout:
CREATE TABLE `myTable` (
`intcol1` int(11) DEFAULT NULL,
`charcol1` char(25) DEFAULT NULL,
`intcol2` int(11) DEFAULT NULL,
`charcol2` char(50) DEFAULT NULL,
`charcol3` char(50) DEFAULT NULL,
`charcol4` char(50) DEFAULT NULL,
`intcol3` int(11) DEFAULT NULL,
`charcol5` char(50) DEFAULT NULL,
`intcol4` int(20) DEFAULT NULL,
`intcol5` int(20) DEFAULT NULL,
`intcol6` int(20) DEFAULT NULL,
`intcol7` int(11) DEFAULT NULL,
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`),
FULLTEXT KEY `idx` (`charcol2`,`charcol3`)
) ENGINE=MyISAM AUTO_INCREMENT=11665231 DEFAULT CHARSET=latin1;
A SELECT statement like
SELECT * FROM myTable WHERE charcol2='bogus' AND charcol3='bogus2';
takes 25 seconds or so to execute. That's too slow, and it will only get slower as the table grows.
The table will not have any inserts or updates at all (so to speak) and will be used primarily for searches on the char columns.
I've tried to make indexing work (playing around with FULLTEXT, as you can see), but it seems that I'm missing something. Any takes on how to speed up the performance?
Please note: I'm currently running MySQL on my MacBook Air (1.7 GHz i5, 4GB RAM). If this is the only answer to my performance issues, I'll move the database to something appropriate ;-)
EDIT: EXPLAIN output
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE myTable ALL NULL NULL NULL NULL 11596725 Using where
You don't need a FULLTEXT index for such queries, where the equality operator is used. Just create an index on the char fields used in the WHERE condition, and remove the FULLTEXT index:
ALTER TABLE myTable DROP INDEX idx;
ALTER TABLE myTable ADD INDEX charcol_idx (charcol2, charcol3);

Slow INSERT .. ON DUPLICATE KEY UPDATE query with InnoDB

Basically I am monitoring the slowest queries on a website. It turns out they are something like:
INSERT INTO beststat (bestid,period,rawView) VALUES ( 'idX' , 2012 , 1 )
ON DUPLICATE KEY UPDATE rawView = rawView+1
Basically it's a logging table. If the row is already there, it updates rawView with +1.
beststat is InnoDB, so I have row-level locking, and considering I do a lot of insert-updates it should be faster than MyISAM.
Anyway, that query shouldn't take so long; maybe there is something else wrong. What could it be?
Of course I have a UNIQUE index on (bestid, period).
Additional Info
This table (beststat) currently has ~1 million records and its size is 68MB. I have 4GB RAM and innodb_buffer_pool_size = 104,857,600. MySQL: 5.1.49-3.
CREATE TABLE `beststat` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`bestid` int(11) unsigned NOT NULL,
`period` mediumint(8) unsigned NOT NULL,
`view` mediumint(8) unsigned NOT NULL DEFAULT '0',
`rawView` mediumint(8) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
UNIQUE KEY `bestid` (`bestid`,`period`)
) ENGINE=InnoDB AUTO_INCREMENT=2020577 DEFAULT CHARSET=utf8
Note: to speed things up a little, I could do something like:
UPDATE beststat SET rawView = rawView + 1 WHERE bestid = 'idX' AND period = 2012;
-- in application code: if the UPDATE affected 0 rows (mysql_affected_rows() == 0), run:
INSERT INTO beststat (bestid, period, rawView) VALUES ('idX', 2012, 1);
So most of the time I would run only the first UPDATE query. But I would like to understand why the original, more concise query is slow.
I found this interesting article... still reading
When dealing with a big number of rows, I suggest using LOAD DATA INFILE to make loading the data faster.
To further improve query time, you can consider using a MEMORY table as well.
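A minimal sketch of the LOAD DATA INFILE idea (the file path and format here are hypothetical):
-- Bulk-load pre-collected rows instead of issuing many single-row INSERTs:
LOAD DATA INFILE '/tmp/beststat.csv'
INTO TABLE beststat
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(bestid, period, rawView);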