Handling rapid microtransactions in MySQL - mysql

I'm running into an issue where InnoDB locks rows while doing updates. When I run SHOW FULL PROCESSLIST, I end up with tons of waiting connections that look like this:
UPDATE accounts SET account_balance = account_balance + 0.000004514 WHERE account_id = 1234 LIMIT 1
The row gets locked and the server ends up with a traffic jam of waiting UPDATE connections.
How do I scale my MySQL application so that it can handle rapid micro-transactions?
EDIT: Table structure:
CREATE TABLE `accounts` (
`account_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`account_balance` float NOT NULL DEFAULT '0',
`account_created` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`account_updated` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00' ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`account_id`)
) ENGINE=InnoDB AUTO_INCREMENT=2756 DEFAULT CHARSET=utf8mb4;
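One common way to relieve this kind of single-row hot spot (a sketch only, not part of the original question; the ledger table and its name are made up) is to stop updating the balance row on every transaction and instead append each delta to an insert-only ledger, folding the deltas into the balance periodically:

```sql
-- Hypothetical ledger table: appends never contend on one hot `accounts` row.
CREATE TABLE `account_ledger` (
  `id` bigint unsigned NOT NULL AUTO_INCREMENT,
  `account_id` int unsigned NOT NULL,
  `delta` decimal(20,9) NOT NULL,
  PRIMARY KEY (`id`),
  KEY (`account_id`)
) ENGINE=InnoDB;

-- Hot path: append-only, so no UPDATE traffic jam on `accounts`.
INSERT INTO `account_ledger` (`account_id`, `delta`) VALUES (1234, 0.000004514);

-- Periodic job: fold all pending deltas into the balances in one pass.
UPDATE `accounts` a
JOIN (SELECT `account_id`, SUM(`delta`) AS d
      FROM `account_ledger`
      GROUP BY `account_id`) l ON a.`account_id` = l.`account_id`
SET a.`account_balance` = a.`account_balance` + l.d;
DELETE FROM `account_ledger`;  -- simplistic; a real job would mark folded rows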

Related

Deadlocks when executing massive simultaneous inserts in MySQL 5.0

I am using MySQL 5.0 and I ran into deadlock problems when there were lots of inserts from different sessions at the same time (we estimated a maximum of about 900 INSERT statements executed in one second).
Here is the error I got:
1213, Deadlock found when trying to get lock; try restarting transaction
Here is one of my failed INSERT statements:
INSERT `cj_202203qmoh_prize_log` (`user_id`, `lottery_id`, `create_ip`, `flags`, `created_at`, `create_mac`) VALUES ('388', '58', '???.???.???.???', '0', '2022-04-01 20:00:33', '444937f4bc5d5aa8f4af3d96d31dbf61');
My table definition:
CREATE TABLE `cj_202203qmoh_prize_log` (
`id` int(10) unsigned NOT NULL auto_increment,
`user_id` int(10) unsigned NOT NULL,
`lottery_id` int(10) unsigned default NULL,
`code` int(11) default NULL,
`flags` int(10) unsigned default '0',
`create_ip` varchar(64) NOT NULL,
`create_mac` varchar(255) character set ascii NOT NULL,
`created_at` timestamp NOT NULL default '0000-00-00 00:00:00',
`updated_at` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
PRIMARY KEY USING BTREE (`id`),
KEY `user_id` USING BTREE (`user_id`,`created_at`),
KEY `user_id_2` USING BTREE (`user_id`,`lottery_id`),
KEY `create_ip` USING BTREE (`create_ip`),
KEY `create_mac` USING BTREE (`create_mac`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPACT;
I didn't use transactions at all in my application. Apart from the INSERT statement, the only other SQL requiring an X lock that could execute at the same time (in fact, simultaneous execution is extremely improbable) is:
UPDATE `cj_202203qmoh_prize_log` SET `code` = ? WHERE `id` = ?;
There are several SELECT statements using the 'user_id' or 'user_id_2' index that may execute simultaneously, but they don't take an S lock.
Also, a given user_id can only be inserted from a single session.
According to my company's policy, I have no privileges to run SHOW ENGINE INNODB STATUS, so I'm afraid I cannot provide further information.
After I set the transaction isolation level to READ COMMITTED, executed the statement inside a transaction, and dropped both the create_ip and create_mac indexes, the problem has not happened again. But I still can't figure out what caused the deadlock.
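For reference, the mitigation described in the question can be sketched as follows (a sketch only; the redacted IP value is kept exactly as in the post):

```sql
-- READ COMMITTED largely disables InnoDB gap locking, which is a frequent
-- deadlock ingredient under concurrent inserts.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- One-time schema change: drop the two secondary indexes removed in the fix.
ALTER TABLE `cj_202203qmoh_prize_log`
  DROP INDEX `create_ip`,
  DROP INDEX `create_mac`;

-- Run the insert inside an explicit transaction.
START TRANSACTION;
INSERT INTO `cj_202203qmoh_prize_log`
  (`user_id`, `lottery_id`, `create_ip`, `flags`, `created_at`, `create_mac`)
VALUES
  ('388', '58', '???.???.???.???', '0', '2022-04-01 20:00:33',
   '444937f4bc5d5aa8f4af3d96d31dbf61');
COMMIT;
```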

MySQL - slow single UPDATE query in a large table

I have three almost identical queries executed one after another in my app. Two of the queries execute fairly fast, but one is much slower. Here are the queries with their timings:
update `supremeshop_product_attributes` set `id_step` = 899 where `id_step` = 1 and `id_product` = 32641
540ms
update `supremeshop_product_attributes` set `id_step` = 1 where `id_step` = 0 and `id_product` = 32641
1.71s
update `supremeshop_product_attributes` set `id_step` = 0 where `id_step` = 899 and `id_product` = 32641
9.75ms
Create table query
CREATE TABLE `supremeshop_product_attributes` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`id_product` int(11) NOT NULL,
`id_attribute` int(11) NOT NULL,
`id_attribute_group` int(11) NOT NULL,
`id_step` int(11) NOT NULL,
`id_alias` int(9) NOT NULL,
`price_retail` int(11) NOT NULL DEFAULT '0',
`behaviour` int(1) NOT NULL DEFAULT '0',
`active` int(1) NOT NULL DEFAULT '1',
`created_at` timestamp NULL DEFAULT NULL,
`updated_at` timestamp NULL DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `supremeshop_product_attributes_id_product_index` (`id_product`),
KEY `supremeshop_product_attributes_id_attribute_index` (`id_attribute`),
KEY `supremeshop_product_attributes_id_attribute_group_index` (`id_attribute_group`),
KEY `supremeshop_product_attributes_id_step_index` (`id_step`)
) ENGINE=InnoDB AUTO_INCREMENT=3012991 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Facts:
the table has 1,500,000 rows
each query updates around 650 rows
each searched/updated column has an index (id (primary), id_product, id_step, id_alias)
only the second query takes much longer to execute (every time, even when executed as a single query rather than one of three in sequence)
each variable used in the queries is an integer
if I execute the queries directly in phpMyAdmin -> SQL, I get the same execution times as in my Laravel app, so the query itself is slow
The question is: why? And is there a good explanation/fix for this problem?
Many thanks for any help!
Mark.
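Since all three WHERE clauses filter on id_product together with id_step, but the table carries only single-column indexes, a composite index is a common mitigation (a sketch, not part of the original post; the index name is made up):

```sql
-- Hypothetical composite index covering both WHERE columns, letting the server
-- locate the ~650 matching rows directly instead of scanning one single-column
-- index and filtering on the other column row by row.
ALTER TABLE `supremeshop_product_attributes`
  ADD INDEX `idx_product_step` (`id_product`, `id_step`);
```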

Why does insert error 1062 occur, although it should not be?

Server version: 10.3.22-MariaDB-0+deb10u1 - Debian 10
Table
CREATE TABLE `background` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL DEFAULT '',
`data` text CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL DEFAULT '[]',
`priority` tinyint(4) NOT NULL DEFAULT 0,
`time` int(11) NOT NULL DEFAULT 0,
`attempt` int(11) NOT NULL DEFAULT 0,
`status` tinyint(4) NOT NULL DEFAULT 0,
`add` int(11) NOT NULL DEFAULT 0,
`update` int(11) NOT NULL DEFAULT 0,
PRIMARY KEY (`id`),
KEY `status` (`status`)
) ENGINE=InnoDB AUTO_INCREMENT=2002280000000127419 DEFAULT CHARSET=utf8mb4
PARTITION BY RANGE (`id`)
(PARTITION `d200226` VALUES LESS THAN (2002270000000000000) ENGINE = InnoDB,
PARTITION `d200227` VALUES LESS THAN (2002280000000000000) ENGINE = InnoDB,
PARTITION `d200228` VALUES LESS THAN (2002290000000000000) ENGINE = InnoDB)
Periodically, error 1062 occurs during insertion:
INSERT INTO `background` (`name`, `data`, `priority`, `time`, `status`, `add`) VALUES ('move', '{\"id\":2002280000000000448,\"frame\":18}', 1, 1582840572, 0, 1582840572)
I looked around on the Internet; the advice was to change innodb_autoinc_lock_mode from 1 to 2, but this did not help, and the errors still occur.
Question: what to do?
Thanks for answers.
MySQL error 1062 is "Duplicate entry for key".
The only unique constraint in your table is PRIMARY KEY (`id`).
I see two possible causes in the environment shown:
1) A manual update of the id value (or an insert of a new record with an explicit id). When the auto-increment generator later produces a value that is already present in the table, the error occurs.
2) Some trigger fires during the insert, inserting into or updating another table, and the inserted/updated values there violate a unique constraint.
To determine which cause applies in your particular case, you need to show the full text of the error message.
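To probe the first cause, one simple check (a sketch, not part of the original answer) is to compare the table's auto-increment counter with the largest id actually stored:

```sql
-- If MAX(id) has reached or passed the AUTO_INCREMENT counter, ids were
-- probably written manually at some point and the generator will collide.
SELECT `AUTO_INCREMENT`
  FROM `information_schema`.`TABLES`
 WHERE `TABLE_SCHEMA` = DATABASE()
   AND `TABLE_NAME` = 'background';

SELECT MAX(`id`) FROM `background`;
```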

How to find the reason for the difference in the execution time of a query against different databases?

I have two databases with identical schemas. One database is from production; the other is a test database. I'm running a query against a single table in each. On the production table the query takes around 4.3 seconds, while on the test database it takes about 130 ms. However, the production table has fewer than 50,000 records, while I've seeded the test table with more than 100,000. I've compared the two tables, and both have the same indexes. It seems to me that the problem is in the data. While seeding I tried to generate data as random as possible to simulate production conditions, but I still couldn't reproduce the slow query.
I looked at the EXPLAIN results for the two queries. They have significant differences in the last two columns.
Production:
+-------+-------------------------+
| rows  | Extra                   |
+-------+-------------------------+
| 24459 | Using where             |
|    46 | Using where; Not exists |
+-------+-------------------------+
Test:
+------+------------------------------------+
| rows | Extra                              |
+------+------------------------------------+
| 3158 | Using index condition; Using where |
|   20 | Using where; Not exists            |
+------+------------------------------------+
The create statement for the table on production is:
CREATE TABLE `usage_logs` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`operation` varchar(30) COLLATE utf8_unicode_ci NOT NULL,
`check_time` datetime NOT NULL,
`check_in_log_id` int(11) DEFAULT NULL,
`daily_usage_id` int(11) DEFAULT NULL,
`duration_units` decimal(11,2) DEFAULT NULL,
`is_deleted` tinyint(1) NOT NULL DEFAULT '0',
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`facility_id` int(11) NOT NULL,
`notes` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`mac_address` varchar(20) COLLATE utf8_unicode_ci NOT NULL DEFAULT '00:00:00:00:00:00',
`login` varchar(40) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `index_usage_logs_on_user_id` (`user_id`),
KEY `index_usage_logs_on_check_in_log_id` (`check_in_log_id`),
KEY `index_usage_logs_on_facility_id` (`facility_id`),
KEY `index_usage_logs_on_check_time` (`check_time`),
KEY `index_usage_logs_on_mac_address` (`mac_address`),
KEY `index_usage_logs_on_operation` (`operation`)
) ENGINE=InnoDB AUTO_INCREMENT=145147 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
while the same in the test database is:
CREATE TABLE `usage_logs` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL,
`operation` varchar(30) COLLATE utf8_unicode_ci NOT NULL,
`check_time` datetime NOT NULL,
`check_in_log_id` int(11) DEFAULT NULL,
`daily_usage_id` int(11) DEFAULT NULL,
`duration_units` decimal(11,2) DEFAULT NULL,
`is_deleted` tinyint(1) NOT NULL DEFAULT '0',
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`facility_id` int(11) NOT NULL,
`notes` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
`mac_address` varchar(20) COLLATE utf8_unicode_ci NOT NULL DEFAULT '00:00:00:00:00:00',
`login` varchar(40) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `index_usage_logs_on_check_in_log_id` (`check_in_log_id`),
KEY `index_usage_logs_on_check_time` (`check_time`),
KEY `index_usage_logs_on_facility_id` (`facility_id`),
KEY `index_usage_logs_on_mac_address` (`mac_address`),
KEY `index_usage_logs_on_operation` (`operation`),
KEY `index_usage_logs_on_user_id` (`user_id`)
) ENGINE=InnoDB AUTO_INCREMENT=104001 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
The full query is:
SELECT `usage_logs`.*
FROM `usage_logs`
LEFT OUTER JOIN usage_logs AS usage_logs_latest ON usage_logs.facility_id = usage_logs_latest.facility_id
AND usage_logs.user_id = usage_logs_latest.user_id
AND usage_logs.mac_address = usage_logs_latest.mac_address
AND usage_logs.check_time < usage_logs_latest.check_time
WHERE `usage_logs`.`facility_id` = 5
AND `usage_logs`.`operation` = 'checkIn'
AND (usage_logs.check_time >= '2018-06-08 00:00:00')
AND (usage_logs.check_time <= '2018-06-08 11:23:05')
AND (usage_logs_latest.id IS NULL)
I execute the query on the same machine against the two different databases, so I don't think other processes are interfering with the results.
What does this result mean and what further steps can I take in order to find out the reason for the big difference in the execution time?
What MySQL version(s) are you using?
There are many factors that lead to the Optimizer's decisions about:
which table to start with (we can't see whether they differ);
which index(es) to use (we can't see);
etc.
Some of the factors:
the distribution of the index values at the moment,
the MySQL version,
the phase of the moon.
These can also lead to different numbers (estimates) in the EXPLAIN, which may lead to different query plans.
Other activity on the server can also compete for CPU/IO/etc. In particular, caching of the data can easily make a 10x difference. Did you run each query twice? Is the Query cache turned off? Is innodb_buffer_pool_size the same? Is the RAM size the same?
I see Using index condition and no "composite" indexes. Performance can often be improved by providing a suitable composite index.
I gotta see the query!
Seeding
Random, or not-so-random, rows can influence the Optimizer's choice of which index (etc) to use. This may have led to picking a better way to run the query on 'test'.
We need to see EXPLAIN SELECT ... to discuss this angle further.
Composite indexes
These are likely to help on both servers:
INDEX(facility_id, operation,  -- either order
      check_time)              -- last
INDEX(facility_id, user_id, mac_address, check_time,  -- any order
      id)                      -- last
Here is a quick improvement: instead of finding all the later rows but never using their contents, use a 'semi-join' that merely asks whether any such row exists:
SELECT `usage_logs`.*
FROM `usage_logs`
WHERE `usage_logs`.`facility_id` = 5
AND `usage_logs`.`operation` = 'checkIn'
AND (usage_logs.check_time >= '2018-06-08 00:00:00')
AND (usage_logs.check_time <= '2018-06-08 11:23:05')
AND NOT EXISTS ( SELECT 1 FROM usage_logs AS latest
WHERE usage_logs.facility_id = latest.facility_id
AND usage_logs.user_id = latest.user_id
AND usage_logs.mac_address = latest.mac_address
AND usage_logs.check_time < latest.check_time )
(The same indexes will be fine.)
The query seems to be getting "all but the latest"; is that what you wanted?

MySQL 5.5 server: nondeterministic results from a SELECT

I have a MyISAM table with 18 million records in it.
CREATE TABLE `AnyTable` (
`OrderID` int(11) NOT NULL DEFAULT '0',
`GroupID` bigint(20) unsigned NOT NULL DEFAULT '0',
`TextID` bigint(20) unsigned NOT NULL DEFAULT '0',
`Type` tinyint(3) unsigned NOT NULL DEFAULT '0',
`EngineID` int(11) NOT NULL DEFAULT '0',
`UpdateTime` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`data` longtext NOT NULL,
PRIMARY KEY (`OrderID`,`GroupID`,`Type`),
KEY `EngineID-GroupID` (`EngineID`,`GroupID`),
KEY `TextID-ContextType` (`TextID`,`ContextType`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
And I have trouble with nondeterministic SELECT results:
for i in {1..100}; do date; mysql yabsdb -A -e "select EngineID, (GroupID|0), OrderID, Type, TextID, Data from AnyTable" | wc -l; sleep 5m; done
11553545
10795243
9855558
....
...while the real row count in the table is over 18 million.
Indeed, the count of records in AnyTable grows constantly and monotonically, and it is greater than 18 million.
No errors occurred on either the server or the client.
Why are the results of my SELECT so unpredictable?
At first I suspected a problem in libmysqlclient, but insert into TmpTable select * from AnyTable is unpredictable too, so it seems the problem is inside the server.
Environment:
Ubuntu Precise, MySQL 5.5
lots of insert ... on duplicate key update requests against this table.
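One narrowing step (a sketch, not part of the original post) is to log a server-side count in the same loop. If COUNT(*) stays stable near 18 million while the streamed row count fluctuates, rows are being lost during the scan or transfer rather than miscounted by the storage engine:

```sql
-- MyISAM stores an exact row count in the table header, so this is O(1) and
-- gives a reliable baseline to compare against the streamed result size.
SELECT COUNT(*) FROM AnyTable;
```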