MySQL INSERT too slow, with high I/O/CPU usage at times

The table has about one hundred million rows; sometimes the I/O bps is about 150 and IOPS about 4k.
os version: CentOS Linux 7
MySQL version: docker mysql:5.6
server_id=3310
skip-host-cache
skip-name-resolve
max_allowed_packet=20G
innodb_log_file_size=1G
init-connect='SET NAMES utf8mb4'
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=5120M
expire-logs-days=7
log_bin=webser
binlog_format=ROW
back_log=1024
slow_query_log
slow_query_log_file=slow-log
tmpdir=/var/log/mysql
sync_binlog=1000
The CREATE TABLE statement:
CREATE TABLE `device_record` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`os` tinyint(9) DEFAULT NULL,
`uid` int(11) DEFAULT '0',
`idfa` varchar(50) DEFAULT NULL,
`adv` varchar(8) DEFAULT NULL,
`oaid` varchar(100) DEFAULT NULL,
`appId` tinyint(4) DEFAULT NULL,
`agent` varchar(100) DEFAULT NULL,
`channel` varchar(20) DEFAULT NULL,
`callback` varchar(1500) DEFAULT NULL,
`activeAt` datetime DEFAULT NULL,
`chargeId` int(11) DEFAULT '0',
`createAt` datetime DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `idfa_record_index_oaid` (`oaid`),
UNIQUE KEY `index_record_index_agent` (`agent`) USING BTREE,
UNIQUE KEY `idfa_record_index_idfa_appId` (`idfa`) USING BTREE,
KEY `index_record_index_uid` (`uid`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1160240883 DEFAULT CHARSET=utf8mb4
The INSERT statement:
@Insert(
"insert into idfa_record (os,idfa,oaid,appId,agent,channel,callback,adv,createAt) "
+ "values(#{os},#{idfa},#{oaid},#{appId},#{agent},#{channel},#{callback},#{adv},now()) on duplicate key "
+ "update createAt=if(uid<=0,now(),createAt),activeAt=if(uid<=0 and channel != #{channel},null,activeAt),channel=if(uid<=0,#{channel},channel),"
+ "adv=if(uid<=0,#{adv},adv),callback=if(uid<=0,#{callback},callback),appId=if(uid<=0,#{appId},appId)")
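With the #{...} placeholders bound, the statement MySQL executes looks roughly like this (the literal values are made up for illustration):
INSERT INTO idfa_record (os, idfa, oaid, appId, agent, channel, callback, adv, createAt)
VALUES (1, 'ABCD-1234', NULL, 2, 'agent-x', 'ch7', 'https://example.com/cb', 'adv1', NOW())
ON DUPLICATE KEY UPDATE
  createAt = IF(uid <= 0, NOW(), createAt),
  activeAt = IF(uid <= 0 AND channel != 'ch7', NULL, activeAt),
  channel  = IF(uid <= 0, 'ch7', channel),
  adv      = IF(uid <= 0, 'adv1', adv),
  callback = IF(uid <= 0, 'https://example.com/cb', callback),
  appId    = IF(uid <= 0, 2, appId);
Each such statement has to probe the primary key and all three unique secondary indexes for duplicates, which is likely a large share of the random I/O.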

100M rows, but the auto_increment is already at 1160M? This is quite possible, but...
Most importantly, the table is more than halfway to overflowing INT SIGNED.
Are you doing inserts that "burn" ids? (See the sketch after these questions.)
Does the existence of 4 Unique keys cause many rows to be skipped?
This seems excessive: max_allowed_packet=20G. (The server caps that setting at 1GB anyway.)
How much RAM is available?
Does swapping occur?
How many rows are inserted per second? What is "bps"? (I am pondering why there are 4K writes. I would expect about 2 IOPS per unique key per INSERT, but that does not add up to 4K unless you have about 500 inserts/sec.)
Are the Inserts coming from different clients? (This feeds into "burning" ids, sluggishness, etc.)
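To illustrate the "burning" question above: with InnoDB's default auto-increment locking, an IODKU that ends up updating an existing row still typically consumes an auto-increment value. A minimal sketch on a toy table (not the original schema):
CREATE TABLE t (
  id INT NOT NULL AUTO_INCREMENT,
  k VARCHAR(10) NOT NULL,
  n INT NOT NULL DEFAULT 0,
  PRIMARY KEY (id),
  UNIQUE KEY (k)
) ENGINE=InnoDB;
INSERT INTO t (k) VALUES ('a') ON DUPLICATE KEY UPDATE n = n + 1;  -- inserts; uses id 1
INSERT INTO t (k) VALUES ('a') ON DUPLICATE KEY UPDATE n = n + 1;  -- updates; id 2 is burned
INSERT INTO t (k) VALUES ('b') ON DUPLICATE KEY UPDATE n = n + 1;  -- inserts with id 3, not 2
At hundreds of IODKUs per second, mostly hitting duplicates, this alone would push AUTO_INCREMENT far ahead of the row count.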

Related

MySQL: SELECT on a big table takes a lot of time. Solutions?

My app gets stuck for hours on simple queries like:
SELECT COUNT(*) FROM `item`
Context :
This table is around 200GB+ with 50M+ rows.
We have an RDS instance on AWS with 2 CPUs and 16GiB RAM (db.r6g.large).
This is the table structure SQL dump:
/*
Target Server Type : MySQL
Target Server Version : 80023
File Encoding : 65001
*/
SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;
DROP TABLE IF EXISTS `item`;
CREATE TABLE `item` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`status` tinyint DEFAULT '1',
`source_id` int unsigned DEFAULT NULL,
`type` varchar(50) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`url` varchar(2048) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`title` varchar(500) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`sku` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`price` decimal(20,4) DEFAULT NULL,
`price_bc` decimal(20,4) DEFAULT NULL,
`price_original` decimal(20,4) DEFAULT NULL,
`currency` varchar(10) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`description` text CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci,
`image` varchar(1024) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`time_start` datetime DEFAULT NULL,
`time_end` datetime DEFAULT NULL,
`block_update` tinyint(1) DEFAULT '0',
`status_api` tinyint(1) DEFAULT '1',
`data` json DEFAULT NULL,
`created_at` int unsigned DEFAULT NULL,
`updated_at` int unsigned DEFAULT NULL,
`retailer_id` int DEFAULT NULL,
`hash` char(32) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
`count_by_hash` int DEFAULT '1',
`item_last_update` int DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `sku_retailer_idx` (`sku`,`retailer_id`),
KEY `updated_at_idx` (`updated_at`),
KEY `time_end_idx` (`time_end`),
KEY `retailer_id_idx` (`retailer_id`),
KEY `hash_idx` (`hash`),
KEY `source_id_hash_idx` (`source_id`,`hash`) USING BTREE,
KEY `count_by_hash_idx` (`count_by_hash`) USING BTREE,
KEY `created_at_idx` (`created_at`) USING BTREE,
KEY `title_idx` (`title`),
KEY `currency_idx` (`currency`),
KEY `price_idx` (`price`),
KEY `retailer_id_title_idx` (`retailer_id`,`title`) USING BTREE,
KEY `source_id_idx` (`source_id`) USING BTREE,
KEY `source_id_count_by_hash_idx` (`source_id`,`count_by_hash`) USING BTREE,
KEY `status_idx` (`status`) USING BTREE,
CONSTRAINT `fk-source_id` FOREIGN KEY (`source_id`) REFERENCES `source` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1858202585 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
SET FOREIGN_KEY_CHECKS = 1;
Could partitioning the table help with a simple query like this?
Do I need to increase the RAM of the RDS instance? If yes, what configuration do I need?
Is NoSQL better suited to this kind of structure?
Do you have any advice/solutions/fixes so the app can run those queries? (We would like to keep all the data and not erase it, if possible.)
"SELECT COUNT(*) FROM item" needs to scan an index. The smallest index is about 200MB, so that seems like it should not take "minutes".
There are probably multiple queries that do full table scans. Such will bump out all the cached data from the ~11GB of cache (the buffer_pool) and do that about 20 times. That's a lot of I/O and a lot of elapsed time. Meanwhile, most other queries will run slowly because their cached data is being bumped out.
The resolution:
Locate these naughty queries. RDS probably gives you access to the "slowlog" (see the sketch after this list).
Grab the slowlog and run pt-query-digest or mysqldumpslow -s t to find the "worst" queries.
Then we can discuss them.
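For a self-managed server the slow log can be switched on from SQL; on RDS these settings normally live in the parameter group. A minimal sketch (the threshold is an assumption, tune to taste):
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;   -- seconds; log anything slower than this
SET GLOBAL log_queries_not_using_indexes = 'ON';   -- optional; can be noisy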
There are some redundant indexes; removing them won't solve the problem. A rule: If you have INDEX(a), INDEX(a,b), you don't need the former.
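Applying that rule to the DDL above, at least two indexes look redundant: source_id_idx is covered by source_id_hash_idx, and retailer_id_idx by retailer_id_title_idx. A sketch of the cleanup (verify against your query patterns first):
ALTER TABLE item
  DROP INDEX source_id_idx,
  DROP INDEX retailer_id_idx;
The FOREIGN KEY on source_id remains satisfied because source_id_hash_idx still has source_id as its leftmost column.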
If hash is some kind of scrambled value, it is likely that a single-row lookup (or update) will require a disk hit (and bump something else out of the cache).
decimal(20,4) takes 10 bytes and allows values up to 9,999,999,999,999,999.9999; that seems excessive. (Shrinking it won't save much space; something to keep in mind for the future.)
I see that AUTO_INCREMENT has reached 1.8 billion. If there are only 50M rows, does the processing do a lot of DELETEs? Or maybe REPLACE? IODKU (INSERT ... ON DUPLICATE KEY UPDATE) is better than REPLACE.
Thanks for all the advice here, but the problem was that we were using the MySQL JSON type for a very heavy column. Removing this column, or even changing it to VARCHAR, made the COUNT(id) around 1000x faster (adding WHERE id > 1 also helped).
Note: it was impossible to just delete the column as it was; we had to change it to VARCHAR first.
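A sketch of that two-step change (the VARCHAR length is an assumption; rows with longer JSON values would need truncating, or a TEXT type, first):
ALTER TABLE item MODIFY `data` VARCHAR(2048) NULL;   -- JSON -> VARCHAR first
ALTER TABLE item DROP COLUMN `data`;                 -- then the drop goes through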

Got error 66 "Object is remote" from storage engine InnoDB

I am running ALTER TABLE article_attachment CHANGE content content LONGBLOB NULL
on this table:
CREATE TABLE `article_attachment` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`article_id` bigint(20) NOT NULL,
`filename` varchar(250) DEFAULT NULL,
`content_size` varchar(30) DEFAULT NULL,
`content_type` text,
`content_id` varchar(250) DEFAULT NULL,
`content_alternative` varchar(50) DEFAULT NULL,
`content` longblob NOT NULL,
`create_time` datetime NOT NULL,
`create_by` int(11) NOT NULL,
`change_time` datetime NOT NULL,
`change_by` int(11) NOT NULL,
`disposition` varchar(15) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `article_attachment_article_id` (`article_id`),
KEY `FK_article_attachment_create_by_id` (`create_by`),
KEY `FK_article_attachment_change_by_id` (`change_by`)
) ENGINE=InnoDB AUTO_INCREMENT=34672 DEFAULT CHARSET=utf8
And I get the error:
Got error 66 "Object is remote" from storage engine InnoDB
Google returns almost nothing regarding the error.
I did increase max_allowed_packet to 999999488 but that did not help.
Update
I tried to change another column in the same table, and there it tells me: The size of BLOB/TEXT data inserted in one transaction is greater than 10% of redo log size. Increase the redo log size using innodb_log_file_size.
Maybe this is related...
OK, I followed https://dba.stackexchange.com/a/1265/42097 and increased innodb_log_file_size in the [mysqld] section of my.cnf.
MySQL's ANALYSE of the table told me that article_attachment.content has a max_length of 22942326, so I set innodb_log_file_size to 300000000.
Now the ALTER TABLE worked.
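For reference, the change amounts to one line in the [mysqld] section of my.cnf. On MySQL 5.6.8+ the server resizes the redo logs itself at the next restart; on older versions you must shut down cleanly and delete the old ib_logfile* files first:
[mysqld]
innodb_log_file_size = 300M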

Slow Updates for Single Records by Primary Key

I am using MySQL 5.5.
I have an InnoDB table definition as follows:
CREATE TABLE `table1` (
`col1` int(11) NOT NULL AUTO_INCREMENT,
`col2` int(11) DEFAULT NULL,
`col3` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`col4` int(11) DEFAULT NULL,
`col5` datetime DEFAULT NULL,
`col6` tinyint(1) NOT NULL DEFAULT '0',
`col7` datetime NOT NULL,
`col8` datetime NOT NULL,
`col9` int(11) DEFAULT NULL,
`col10` tinyint(1) NOT NULL DEFAULT '0',
`col11` tinyint(1) DEFAULT '0',
PRIMARY KEY (`col1`),
UNIQUE KEY `index_table1_on_ci_ai_tn_sti` (`col2`,`col4`,`col3`,`col9`),
KEY `index_shipments_on_applicant_id` (`col4`),
KEY `index_shipments_on_shipment_type_id` (`col9`),
KEY `index_shipments_on_created_at` (`col7`),
KEY `idx_tracking_number` (`col3`)
) ENGINE=InnoDB AUTO_INCREMENT=7634960 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
The issue is UPDATES. There are about 2M rows in this table.
A typical UPDATE query would be:
UPDATE table1 SET col6 = 1 WHERE col1 = 7634912;
We have about 5-10k QPS on this production server. These queries are often in the "Updating" state when looked at through the process list. The InnoDB lock information shows many "rec but not gap" record locks on index_table1_on_ci_ai_tn_sti; no transaction is waiting for a lock.
My feeling is that the unique index is causing the lag, but I'm not sure why. This is the only table we have that is defined this way, with a unique index.
I don't think the UNIQUE key has any impact (in this case).
Are you really setting a DATETIME to "1"? (Please check for other typos -- they could make a big difference.)
Are you trying to do 10K UPDATEs per second?
Is innodb_buffer_pool_size bigger than the table, but no bigger than 70% of available RAM?
What is the value of innodb_flush_log_at_trx_commit? 1 is default and secure, but slower than 2.
Can you put a bunch of updates into a single transaction? That would cut down the transaction overhead.
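A sketch of such batching, reusing the table and column names from the question (the id values are made up):
START TRANSACTION;
UPDATE table1 SET col6 = 1 WHERE col1 = 7634912;
UPDATE table1 SET col6 = 1 WHERE col1 = 7634913;
UPDATE table1 SET col6 = 1 WHERE col1 = 7634914;
COMMIT;
-- or, when the new value is identical for all rows:
UPDATE table1 SET col6 = 1 WHERE col1 IN (7634912, 7634913, 7634914);
With innodb_flush_log_at_trx_commit=1, every autocommitted UPDATE pays for its own log flush; grouping N updates into one transaction pays that cost once.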

MySql performance suggestions

I am not a MySQL expert and am stuck on a problem. I have a table which currently holds 16GB of data, and it will grow further. The structure of the table is given below:
CREATE TABLE `t_xyz_tracking` (
`id` BIGINT(20) NOT NULL AUTO_INCREMENT,
`word` VARCHAR(200) NOT NULL,
`xyzId` BIGINT(100) NOT NULL,
`xyzText` VARCHAR(800) NULL DEFAULT NULL,
`language` VARCHAR(2000) NULL DEFAULT NULL,
`links` VARCHAR(2000) NULL DEFAULT NULL,
`xyzType` VARCHAR(20) NULL DEFAULT NULL,
`source` VARCHAR(1500) NULL DEFAULT NULL,
`sourceStripped` TEXT NULL,
`isTruncated` VARCHAR(40) NULL DEFAULT NULL,
`inReplyToStatusId` BIGINT(30) NULL DEFAULT NULL,
`inReplyToUserId` INT(11) NULL DEFAULT NULL,
`rtUsrProfilePicUrl` TEXT NULL,
`isFavorited` VARCHAR(40) NULL DEFAULT NULL,
`inReplyToScreenName` VARCHAR(40) NULL DEFAULT NULL,
`latitude` BIGINT(100) NOT NULL,
`longitude` BIGINT(100) NOT NULL,
`rexyzStatus` VARCHAR(40) NULL DEFAULT NULL,
`statusInReplyToStatusId` BIGINT(100) NOT NULL,
`statusInReplyToUserId` BIGINT(100) NOT NULL,
`statusFavorited` VARCHAR(40) NULL DEFAULT NULL,
`statusInReplyToScreenName` TEXT NULL,
`screenName` TEXT NULL,
`profilePicUrl` TEXT NULL,
`xyzId` BIGINT(100) NOT NULL,
`name` TEXT NULL,
`location` VARCHAR(200) NULL DEFAULT NULL,
`bio` TEXT NULL,
`url` TEXT NULL COLLATE 'latin1_swedish_ci',
`utcOffset` INT(11) NULL DEFAULT NULL,
`timeZone` VARCHAR(100) NULL DEFAULT NULL,
`frenCnt` BIGINT(20) NULL DEFAULT '0',
`createdAt` DATETIME NULL DEFAULT NULL,
`createdOnGMT` VARCHAR(40) NULL DEFAULT NULL,
`createdOnServerTime` DATETIME NULL DEFAULT NULL,
`follCnt` BIGINT(20) NULL DEFAULT '0',
`favCnt` BIGINT(20) NULL DEFAULT '0',
`totStatusCnt` BIGINT(20) NULL DEFAULT NULL,
`usrCrtDate` VARCHAR(200) NULL DEFAULT NULL,
`humanSentiment` VARCHAR(30) NULL DEFAULT NULL,
`replied` BIT(1) NULL DEFAULT NULL,
`replyMsg` TEXT NULL,
`classified` INT(32) NULL DEFAULT NULL,
`createdOnGMTDate` DATETIME NULL DEFAULT NULL,
PRIMARY KEY (`id`),
INDEX `id` (`id`, `word`),
INDEX `word_index` (`word`) USING BTREE,
INDEX `classified_index` (`classified`) USING BTREE,
INDEX `createdOnGMT_index` (`createdOnGMT`) USING BTREE,
INDEX `location_index` (`location`) USING BTREE,
INDEX `word_createdOnGMT` (`word`, `createdOnGMT`),
INDEX `timeZone` (`timeZone`) USING BTREE,
INDEX `language` (`language`(255)) USING BTREE,
INDEX `source` (`source`(255)) USING BTREE,
INDEX `xyzId` (`xyzId`) USING BTREE,
INDEX `getunclassified_index` (`classified`, `xyzType`) USING BTREE,
INDEX `createdOnGMTDate_index` (`createdOnGMTDate`, `word`) USING BTREE,
INDEX `links` (`links`(255)) USING BTREE,
INDEX `xyzType_classified` (`classified`, `xyzType`) USING BTREE,
INDEX `word_createdOnGMTDate` (`word`, `createdOnGMTDate`) USING BTREE
)COLLATE='utf8_general_ci'
ENGINE=InnoDB
ROW_FORMAT=DEFAULT
AUTO_INCREMENT=17540328
The queries on this table are running slow now, and I expect them to slow down further. My server configuration is given below:
Intel Xeon E5220 @ 2.27GHz (2 processors)
12GB Ram
Windows 2008 Server R2
my.ini file details are given below:
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
max_connections=300
query_cache_size=0
table_cache=256
tmp_table_size=205M
thread_cache_size=8
myisam_max_sort_file_size=3G
myisam_sort_buffer_size=410M
key_buffer_size=354M
read_buffer_size=64K
read_rnd_buffer_size=256K
sort_buffer_size = 64M
join_buffer_size = 64M
thread_cache_size = 8
thread_concurrency = 8
query_cache_size = 128M
innodb_additional_mem_pool_size=15M
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size=30M
innodb_buffer_pool_size=6G
innodb_log_file_size=343M
innodb_thread_concurrency=44
max_allowed_packet = 16M
slow_query_log
long_query_time = 6
What can be done to improve performance?
Would converting to a MyISAM table help? I have InnoDB since this table has frequent writes and even more frequent reads.
I have noticed high disk I/O, at times as high as 20-40MB/sec.
Thanks,
Rohit
One suggestion is to run
SELECT * FROM t_xyz_tracking PROCEDURE ANALYSE()
PROCEDURE ANALYSE will tell you, based on the data in the table, the suggested types for the columns in the table. This should help increase your efficiency.
All the NULLable columns could potentially be moved to a separate table. Check what percentage of values in each of these columns is NULL; if it's relatively high, move the column out.
Next you might want to think which columns are accessed very often, and which ones are accessed relatively rarely. Rarely used columns can be moved to a separate table as well.
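A sketch of both checks, using column names from the DDL above (which columns to move is an assumption; measure first):
-- fraction of NULLs per candidate column (AVG over a boolean yields the ratio)
SELECT COUNT(*) AS total_rows,
       AVG(bio IS NULL) AS bio_null_ratio,
       AVG(url IS NULL) AS url_null_ratio,
       AVG(replyMsg IS NULL) AS replyMsg_null_ratio
FROM t_xyz_tracking;
-- a hypothetical side table for the rarely used columns, joined back on id
CREATE TABLE t_xyz_tracking_extra (
  id BIGINT(20) NOT NULL,
  bio TEXT NULL,
  url TEXT NULL,
  replyMsg TEXT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;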
When your MySQL server is too slow, a good idea is to activate the "slow query log" and then study the queries showing up in it.
This has helped me a lot to avoid some possibly catastrophic failures due to amateurishly written queries.
Just off the top of my head, it looks like you're using the TEXT type where you shouldn't. TEXT is a CLOB (think BLOB, but for characters only). If you have a URL, VARCHAR(255) might work better. For a name, isn't 50 characters enough?
The queries that are running slow, are they utilizing the indexes?
Could your "isXXX" fields be changed to BOOLEAN (or tinyint(1))?
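A sketch of one such conversion, assuming the flag column holds strings like 'true'/'false' (verify the actual values first):
UPDATE t_xyz_tracking SET isFavorited = IF(isFavorited = 'true', '1', '0')
WHERE isFavorited IS NOT NULL;
ALTER TABLE t_xyz_tracking MODIFY `isFavorited` TINYINT(1) NULL DEFAULT NULL;
A TINYINT(1) flag takes one byte per row instead of a variable-length string.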

Tables with 30M entries are slow. Optimize MySQL or switch to MongoDB?

I have a simple MySQL DB running on one server with two tables: products and reviews. The products table has about 10 million entries and the reviews table about 30 million.
The whole DB is about 30GB. I feel like it's getting slow, and I'm wondering what I should do about it. I created indexes, but they didn't help. For example, the products table has a category field, and when I do a simple SELECT * FROM products WHERE category=2, it is just slow.
Will switching to MongoDB help me in this situation, or can I solve this just by optimizing MySQL somehow? If the latter, should I do sharding, or are the tables not that big and it's possible to optimize another way?
Tables and my.cnf
CREATE TABLE IF NOT EXISTS `products` (
`id` int(11) NOT NULL auto_increment,
`product_title` varchar(1000) NOT NULL,
`product_id` varchar(100) NOT NULL,
`title` varchar(1000) NOT NULL,
`image` varchar(1000) NOT NULL,
`url` varchar(1000) NOT NULL,
`price` varchar(100) NOT NULL,
`reviews` int(11) NOT NULL,
`stars` float NOT NULL,
`BrowseNodeID` int(11) NOT NULL,
`status` varchar(100) NOT NULL,
`started_at` int(15) NOT NULL,
PRIMARY KEY (`id`),
KEY `id_index` (`BrowseNodeID`),
KEY `status_index` (`status`),
KEY `started_index` (`started_at`),
KEY `id_ind` (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=13743335 ;
CREATE TABLE IF NOT EXISTS `reviews` (
`id` int(11) NOT NULL auto_increment,
`product_id` varchar(100) NOT NULL,
`product_title` varchar(1000) NOT NULL,
`review_title` varchar(1000) NOT NULL,
`content` varchar(5000) NOT NULL,
`author` varchar(255) NOT NULL,
`author_profile` varchar(1000) NOT NULL,
`stars` float NOT NULL,
`owner` varchar(100) NOT NULL,
PRIMARY KEY (`id`),
KEY `product_id` (`product_id`),
KEY `id_index` (`product_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=48129737 ;
Some info from my.cnf
set-variable = query_cache_size=1512M
set-variable = thread_cache_size=8
thread_concurrency = 8
skip-innodb
low-priority-updates
delay-key-write=ALL
key_buffer_size = 100M
max_allowed_packet = 1M
#table_open_cache = 4048
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 50M
set-variable = table_cache=256
set-variable = query_cache_limit=1024M
set-variable = query_cache_size=1024M
Based on your my.cnf, it looks as though your key_buffer_size is way too small, so you're going to disk for every read. Ideally, that value should be set larger than the total size of your MyISAM indexes.
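One way to check: total the MyISAM index size from information_schema (available since MySQL 5.0) and compare it with key_buffer_size, which is only 100M in the posted my.cnf:
SELECT ROUND(SUM(index_length) / 1024 / 1024) AS myisam_index_mb
FROM information_schema.TABLES
WHERE engine = 'MyISAM';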
Before you go changing DB technologies, you may also want to consider changing your table type to InnoDB. Your my.cnf has it disabled, right now. I've gotten pretty stellar performance out of a 300M row table with smart indexes and enough memory. InnoDB will also give you some leeway with longer running reads, as they won't lock your entire table.