Does MOD(primarykey) on an InnoDB partitioned table cause fragmentation - mysql

I'm exporting a largeish table (1.5 billion rows) between servers. This is the table format.
CREATE TABLE IF NOT EXISTS `partitionedtable` (
`domainid` int(10) unsigned NOT NULL,
`instanceid` int(10) unsigned NOT NULL,
`urlid` int(10) unsigned NOT NULL,
`adjrankid` smallint(5) unsigned NOT NULL,
PRIMARY KEY (`domainid`,`instanceid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (MOD(domainid,8192))
(PARTITION p0 VALUES LESS THAN (1) ENGINE = InnoDB,
PARTITION p1 VALUES LESS THAN (2) ENGINE = InnoDB,
PARTITION p2 VALUES LESS THAN (3) ENGINE = InnoDB
...
PARTITION p8191 VALUES LESS THAN (8192) ENGINE = InnoDB) */
The data was exported to the new server in PK order and resulted in 8192 text files... which equated to around 200K records per file.
I'm simply iterating from 0 to 8191 importing the files into the new table.
LOAD DATA INFILE '/home/backup/rc/$i.tsv' INTO TABLE partitionedtable PARTITION (p$i);
I'd expect each of these to take only about a second to import; instead, each takes around 6 seconds.
The spec of the server can be seen here.
http://www.ovh.co.uk/dedicated_servers/sp_32g.xml
There isn't much else going on on the server that'd bottleneck the process.
Could it be that partitioning by MOD() causes fragmentation? I was under the impression that there wouldn't be any fragmentation, since each partition is treated as a separate table and data is inserted in PK order within each partition.
Added (probably useful): these settings were applied at the start of the batch.
SET autocommit=0;
SET foreign_key_checks=0;
SET sql_log_bin=0;
SET unique_checks=0;
A COMMIT is applied after every file.
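For reference, a minimal sketch of one iteration of the import loop as a single SQL session (path and partition index taken from the loop above, with the quoting fixed):
SET autocommit=0;
SET foreign_key_checks=0;
SET sql_log_bin=0;
SET unique_checks=0;
-- one iteration of the loop, here i = 0
LOAD DATA INFILE '/home/backup/rc/0.tsv'
INTO TABLE partitionedtable PARTITION (p0);
COMMIT;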
The thread seems to spend the majority of its time in a System lock state, during LOAD DATA INFILE.

When I set up the server I mistakenly thought the open files limit was higher; in reality it was sitting at 1024.
I've upped it to 16000 and rebooted the server, and it's now running slightly quicker, at around 3 seconds per file (I was assuming the file opening/closing was causing the System lock status).
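For anyone checking the same thing, the effective limit is visible from inside MySQL:
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
-- table_open_cache competes for the same file descriptors
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';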
I also purged the bin logs.
Still seems a bit slow though.

Related

How to improve performance of Bulk Inserts in MySQL

Environment: Windows 10
Version: MySQL 5.7
RAM: 32 GB
IDE: Toad for MySQL
I have sufficient hardware, but the issue is the performance of INSERT into a simple table that has no relationships. I do need an index on the table.
Table structure:
CREATE TABLE `2017` (
`MOB_NO` bigint(20) DEFAULT NULL,
`CAF_SLNO` varchar(50) DEFAULT NULL,
`CNAME` varchar(58) DEFAULT NULL,
`ACT_DATE` varchar(200) DEFAULT NULL,
KEY `2017_index` (`MOB_NO`,`ACT_DATE`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
I am using the statements below to insert the records into the table. Without the index it took around 30 minutes, whereas with the index it has taken 22 hours and is still going.
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
LOAD DATA LOCAL INFILE 'D:/base/test/2017/2017.txt'
INTO TABLE `2017` COLUMNS TERMINATED BY '|';
commit;
I have seen suggestions to change the cnf file, but could not find one on my machine.
By adding the following lines to my.ini, I was able to improve it:
innodb_autoinc_lock_mode =2
sync_binlog=1
bulk_insert_buffer_size=512M
key_buffer_size=512M
read_buffer_size = 50M
I also set innodb_flush_log_at_trx_commit=2; I have seen another link claiming this can increase speed by up to 160x.
Resulting performance: from more than 24 hours down to about 2 hours.
If you begin with an empty table, create it without any indexes. Then, after fully populating the table, add the index; this is reported to be faster than inserting with the index already in place (a sketch follows the links below).
See:
MySQL optimizing INSERT speed being slowed down because of indices
Is it better to create an index before filling a table with data, or after the data is in place?
Possibly helpful: Create an index on a huge MySQL production table without table locking
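A minimal sketch of that approach using the `2017` table from this question (same columns and delimiter as the original LOAD DATA statement):
CREATE TABLE `2017` (
`MOB_NO` bigint(20) DEFAULT NULL,
`CAF_SLNO` varchar(50) DEFAULT NULL,
`CNAME` varchar(58) DEFAULT NULL,
`ACT_DATE` varchar(200) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
LOAD DATA LOCAL INFILE 'D:/base/test/2017/2017.txt'
INTO TABLE `2017` COLUMNS TERMINATED BY '|';
-- build the index once, after the bulk load
ALTER TABLE `2017` ADD KEY `2017_index` (`MOB_NO`,`ACT_DATE`);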

mysql repartitioned table much larger

I have a very large table on a mysql 5.6.10 instance (roughly 480 million rows).
The storage engine is InnoDB. (Table and DB Default).
The table was partitioned by hash of merchantId (bigint: a kind of client identifier), which helped with queries related to a single merchant. Due to significant performance degradation when queries spanned multiple merchants, I decided to repartition the table by RANGE on ACTION_DATE (the DATE that an activity occurred). Thinking I was being clever, I also decided to add a few (5) new fields for future use (unused_varchar1 varchar(200), etc.): since the table is so large, adding new fields essentially requires a rebuild anyway, so why not...
I created the new table structure as _new and dumped the existing table to a secondary server using mysqldump. I then used an awk script to finesse the name and a few other details to fit the new table (changing tableName to tableName_new), and started the load.
The existing table was approximately 430 GB. The text file similarly was about 403 GB. I was surprised, therefore, that the new table ended up taking about 840 GB!! (based on the Linux file size of the .ibd files)
So, I have 2 basic questions, which really amount to why and what now...
I imagine that the new table is larger because the dump file was in the order of the previous partition (merchantId) while the load was inserting into the new partitioning (Activity date) creating a semi-random insertion order. The randomness led mysql to leave plenty of space (roughly 50%) in the pages for future insertions. (I'm a little fuzzy on the terminology here, having spent much more time in my career with Sql Server DBs than MySql Dbs...) I'm not able to find any internal statistics in mysql for space free per page. The INFORMATION_SCHEMA.TABLES DATA_FREE stat is an unconvincing 68MB.
If it helps these are the relevant stats from I_S.TABLES:
TABLE_TYPE: BASE TABLE
Engine: InnoDB
VERSION: 10
ROW_FORMAT: Compact
TABLE_ROWS: 488,094,271
AVG_ROW_LENGTH: 1,564
DATA_LENGTH: 763,509,358,592 (711 GB)
INDEX_LENGTH: 100,065,574,912 (93.19 GB)
DATA_FREE: 68,157,440 (0.06 GB)
I realize that that doesn't add up to 840 GB, but as I said, that was the size of the .ibd files which seems to be slightly different than the I_S.TABLES stats. Either way, it is significantly more than the text dump file.
I digress...
My question is whether my theory is right: does the repartitioning explain the roughly doubled size, or is there another explanation? I think the extra columns (2 BIGINT, 2 VARCHAR(200), 1 DATE) are not the culprit since they are all NULL. My napkin calculation was that the additional columns would add < 9 GB. Likewise, one additional index on UID should be a relatively small addition.
The follow up question is what can I do now if I want to try to compact the table. (Server now only has about 385 GB free...)
If I repeated the procedure, dump to file, reload, this time in the current partition order, would I end up with a table more like the size of my original table ~430 GB?
Following are relevant parts of DDL.
OLD TABLE:
CREATE TABLE table_name (
`AUTO_SEQ` bigint(20) NOT NULL,
`MERCHANT_ID` bigint(20) NOT NULL,
`AFFILIATE_ID` bigint(20) DEFAULT NULL,
`PROGRAM_ID` bigint(20) NOT NULL,
`ACTION_DATE` date DEFAULT NULL,
`UID` varchar(128) DEFAULT NULL,
... additional columns ...
PRIMARY KEY (`AUTO_SEQ`,`MERCHANT_ID`,`PROGRAM_ID`),
KEY `oc_rpt_mpad_idx` (`MERCHANT_ID`,`PROGRAM_ID`,`ACTION_DATE`,`AFFILIATE_ID`),
KEY `oc_rpt_mapd` (`MERCHANT_ID`,`ACTION_DATE`),
KEY `oc_rpt_apda_idx` (`AFFILIATE_ID`,`PROGRAM_ID`,`ACTION_DATE`,`MERCHANT_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY HASH (merchant_id)
PARTITIONS 16 */
NEW TABLE:
CREATE TABLE `tableName_new` (
`AUTO_SEQ` bigint(20) NOT NULL,
`MERCHANT_ID` bigint(20) NOT NULL,
`AFFILIATE_ID` bigint(20) DEFAULT NULL,
`PROGRAM_ID` bigint(20) NOT NULL,
`ACTION_DATE` date NOT NULL DEFAULT '0000-00-00',
`UID` varchar(128) DEFAULT NULL,
... additional columns...
# NEW COLUMNS (ALL NULL)
`UNUSED_BIGINT1` bigint(20) DEFAULT NULL,
`UNUSED_BIGINT2` bigint(20) DEFAULT NULL,
`UNUSED_VARCHAR1` varchar(200) DEFAULT NULL,
`UNUSED_VARCHAR2` varchar(200) DEFAULT NULL,
`UNUSED_DATE1` date DEFAULT NULL,
PRIMARY KEY (`AUTO_SEQ`,`ACTION_DATE`),
KEY `oc_rpt_mpad_idx` (`MERCHANT_ID`,`PROGRAM_ID`,`ACTION_DATE`,`AFFILIATE_ID`),
KEY `oc_rpt_mapd` (`ACTION_DATE`),
KEY `oc_rpt_apda_idx` (`AFFILIATE_ID`,`PROGRAM_ID`,`ACTION_DATE`,`MERCHANT_ID`),
KEY `oc_uid` (`UID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50500 PARTITION BY RANGE COLUMNS(ACTION_DATE)
(PARTITION p01 VALUES LESS THAN ('2012-01-01') ENGINE = InnoDB,
PARTITION p02 VALUES LESS THAN ('2012-04-01') ENGINE = InnoDB,
PARTITION p03 VALUES LESS THAN ('2012-07-01') ENGINE = InnoDB,
PARTITION p04 VALUES LESS THAN ('2012-10-01') ENGINE = InnoDB,
PARTITION p05 VALUES LESS THAN ('2013-01-01') ENGINE = InnoDB,
PARTITION p06 VALUES LESS THAN ('2013-04-01') ENGINE = InnoDB,
PARTITION p07 VALUES LESS THAN ('2013-07-01') ENGINE = InnoDB,
PARTITION p08 VALUES LESS THAN ('2013-10-01') ENGINE = InnoDB,
PARTITION p09 VALUES LESS THAN ('2014-01-01') ENGINE = InnoDB,
PARTITION p10 VALUES LESS THAN ('2014-04-01') ENGINE = InnoDB,
PARTITION p11 VALUES LESS THAN ('2014-07-01') ENGINE = InnoDB,
PARTITION p12 VALUES LESS THAN ('2014-10-01') ENGINE = InnoDB,
PARTITION p13 VALUES LESS THAN ('2015-01-01') ENGINE = InnoDB,
PARTITION p14 VALUES LESS THAN ('2015-04-01') ENGINE = InnoDB,
PARTITION p15 VALUES LESS THAN ('2015-07-01') ENGINE = InnoDB,
PARTITION p16 VALUES LESS THAN ('2015-10-01') ENGINE = InnoDB,
PARTITION p17 VALUES LESS THAN ('2016-01-01') ENGINE = InnoDB,
PARTITION p18 VALUES LESS THAN ('2016-04-01') ENGINE = InnoDB,
PARTITION p19 VALUES LESS THAN ('2016-07-01') ENGINE = InnoDB,
PARTITION p20 VALUES LESS THAN ('2016-10-01') ENGINE = InnoDB,
PARTITION p21 VALUES LESS THAN ('2017-01-01') ENGINE = InnoDB,
PARTITION p22 VALUES LESS THAN ('2017-04-01') ENGINE = InnoDB,
PARTITION p23 VALUES LESS THAN ('2017-07-01') ENGINE = InnoDB,
PARTITION p24 VALUES LESS THAN ('2017-10-01') ENGINE = InnoDB,
PARTITION p25 VALUES LESS THAN ('2018-01-01') ENGINE = InnoDB,
PARTITION p26 VALUES LESS THAN ('2018-04-01') ENGINE = InnoDB,
PARTITION p27 VALUES LESS THAN ('2018-07-01') ENGINE = InnoDB,
PARTITION p28 VALUES LESS THAN ('2018-10-01') ENGINE = InnoDB,
PARTITION p29 VALUES LESS THAN ('2019-01-01') ENGINE = InnoDB,
PARTITION p30 VALUES LESS THAN (MAXVALUE) ENGINE = InnoDB) */
adding new fields essentially requires a rebuild anyway, so why not
I predict you will regret it.
The existing table was approximately 430 GB.
According to the size of the .ibd files? Or SHOW TABLE STATUS? Or the dump size, which would be bogus (see below)?
it is significantly more than the text dump file
The lengths in TABLE STATUS include several flavors of overhead (BTree, free space, extra extents, etc), plus the indexes (which are not in the dump file).
Also, think about a BIGINT that contains 1234. The .ibd will use 8 bytes plus some overhead; the dump will have 5 bytes ('1234' plus a comma). That leads to my next point...
Are there really more than 4 billion merchants? merchant_id is BIGINT (8 bytes); INT UNSIGNED is only 4 bytes and allows 0..4 billion.
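If not, a sketch of the shrink (note that MODIFY forces a full rebuild of an already huge table, so weigh the cost):
ALTER TABLE tableName_new MODIFY MERCHANT_ID int(10) unsigned NOT NULL;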
What's in uid? If it is some sort of UUID, it seems awfully long.
Do you happen to have the "stats from I_S.TABLES" from the old table?
So far, I have not addressed "whether the repartioning explains the roughly doubled size".
extra columns (2 Bigint, 2 Varchar(200), 1 Date)
That's about 29 bytes per row (15GB of Data_length), perhaps less since they are NULL.
You seem to be using the default ROW_FORMAT. I suspect this did not change in the conversion.
It is usually unwise to start an index with the "partition key" (merchant_id or action_date). This is because you are already "pruning" on that key; you are better off starting the index with something else. (Caveat: There are exceptions.)
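As a hypothetical illustration only (the replacement index name and column choice are invented): in the new table, oc_rpt_mapd indexes ACTION_DATE alone, which is the partition key, so pruning already does most of that work; an index leading with a column pruning cannot use may serve queries better:
ALTER TABLE tableName_new
DROP KEY oc_rpt_mapd, -- consists only of the partition key
ADD KEY oc_rpt_merch (MERCHANT_ID); -- hypothetical: lead with a column pruning can't use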
Check the CHARACTER SET and datatype of the "additional columns". If something changed, that could be significant.
would I end up with a table more like the size of my original table ~430 GB?
Alas, until we figure out why it grew, I can't answer that question.
I'm more interested in whether random insertion vs. the partition (ACTION_DATE) would lead to wasted space / half empty pages.
I recommend you try the following experiment. Do not use OPTIMIZE PARTITION; see http://bugs.mysql.com/bug.php?id=42822. Instead, do this to defragment one partition (such as p02):
ALTER TABLE table_name REBUILD PARTITION p02;
You could do this SELECT before and after in order to see the change(s) to the PARTITIONs:
SELECT *
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = 'dbname' -- change as needed
AND TABLE_NAME = 'table_name' -- change as needed
ORDER BY PARTITION_ORDINAL_POSITION,
SUBPARTITION_ORDINAL_POSITION;
It's a generic query to get the table-status-like info for the partitions of one table.
If the REBUILD cuts the partition by about 50%, then we have the answer.
Generally, randomly inserting into a BTree should leave you with about 69% (not 50%) of the "full" size. Hence, I'm not 'expecting' this to be the solution/answer.

MySQL LOAD DATA INFILE Taking 13 Hours

Is there anything I can change in the my.ini file to speed up "LOAD DATA INFILE"?
I have two MySQL 5.5 instances each of which has one identical table structured as follows:
CREATE TABLE `log_access` (
`_id` bigint(20) NOT NULL AUTO_INCREMENT,
`timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`type_id` int(11) NOT NULL,
`building_id` int(11) NOT NULL,
`card_id` varchar(15) NOT NULL,
`user_key` varchar(35) DEFAULT NULL,
`user_name` varchar(25) DEFAULT NULL,
`user_validation` varchar(10) DEFAULT NULL,
PRIMARY KEY (`_id`),
KEY `log_access__user_key_timestamp` (`user_key`,`timestamp`),
KEY `log_access__timestamp` (`timestamp`)
) ENGINE=MyISAM
On a daily basis I need to move the previous day's data from instance A to instance B, which consists of roughly 25 million records. At the moment I am doing the following:
1. On instance A, generate an OUTFILE with "WHERE timestamp BETWEEN '2014-09-23 00:00:00' AND '2014-09-23 23:59:59'". This usually takes less than 2 minutes.
2. On instance B, execute "LOAD DATA INFILE". This is the problem area, as it takes about 13 hours.
3. On instance A, delete the records from the previous day. This will probably be another problem area.
4. On instance B, run stats.
5. On instance B, truncate the table.
I have also considered partitioning the tables and just exchanging the partitions. EXCHANGE PARTITION is supported as of 5.6 and I am willing to upgrade MySQL; however, all the documentation discusses exchanging between tables, and I haven't been able to confirm that I could do that between DB instances.
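For reference, the 5.6 syntax swaps one partition with a non-partitioned table of identical structure (table and partition names here are hypothetical):
ALTER TABLE log_access_partitioned
EXCHANGE PARTITION p20140923
WITH TABLE log_access_staging;
As far as I know, both tables must live on the same server, so moving data between instances would still need a copy or dump step.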
I have also considered replication between the instances, but as I have not tinkered with replication in the past and this is a time-sensitive assignment, I am somewhat reluctant to tread into new waters.
Any words of wisdom much appreciated.
CREATE the table without the PRIMARY KEY and the _id column, and add these after LOAD DATA INFILE is complete. MySQL checks PRIMARY KEY integrity with each INSERT, so I think you can gain a lot of performance here. With MariaDB you can disable keys, but I think this won't work with some storage engines.
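On stock MySQL with MyISAM (the engine in this question), a related trick exists for the non-unique secondary indexes (a sketch; it does not touch the PRIMARY KEY or unique keys):
ALTER TABLE log_access DISABLE KEYS;
-- ... run LOAD DATA INFILE here ...
ALTER TABLE log_access ENABLE KEYS;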
Not-very-nice-alternative:
I found it very easy to move a MyISAM database by just copying/moving the files on disk. If you cut/paste the files and run REPAIR TABLE on your target machine, you can do this without restarting the server. Just make sure you copy all 3 files (.frm, .myd, .myi).
LOAD DATA INFILE in perfect PK-order, INTO a table that only has the PK-definition, so no secondary indexes yet. After import, add all secondary indexes at once, with 'ALTER TABLE mytable ALGORITHM=INPLACE, LOCK=NONE, ADD KEY ...'.
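A sketch of that sequence for the log_access table from the question, assuming it is converted to InnoDB (ALGORITHM=INPLACE, LOCK=NONE applies to InnoDB, not MyISAM) and using a hypothetical dump path:
CREATE TABLE `log_access` (
`_id` bigint(20) NOT NULL AUTO_INCREMENT,
`timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`type_id` int(11) NOT NULL,
`building_id` int(11) NOT NULL,
`card_id` varchar(15) NOT NULL,
`user_key` varchar(35) DEFAULT NULL,
`user_name` varchar(25) DEFAULT NULL,
`user_validation` varchar(10) DEFAULT NULL,
PRIMARY KEY (`_id`)
) ENGINE=InnoDB;
LOAD DATA INFILE '/tmp/2014-09-23.tsv' INTO TABLE log_access; -- hypothetical path
ALTER TABLE log_access ALGORITHM=INPLACE, LOCK=NONE,
ADD KEY log_access__user_key_timestamp (user_key, `timestamp`),
ADD KEY log_access__timestamp (`timestamp`);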
Consider adding back the secondary indexes on each involved box separately, so not via replication (sql_log_bin=0), to prevent replication lag.
Consider using a partitioned table, as then you can run a 'LOAD DATA INFILE' per partition, in parallel. (applies to RANGE and HASH partitioning, as the separate tsv-files (one or more per partition) are easy to prepare for those)
MariaDB doesn't have the variant 'INTO mytable PARTITION (p000)' yet.
You can load into a separate table first, and then exchange partitions, but MariaDB also doesn't have 'WITHOUT VALIDATION' yet.

Why TokuDB and InnoDB insert is so slow compared to MyISAM

I have prepared the following SQL statements to compare the performance behavior of MyISAM, InnoDB, and TokuDB (the INSERT is executed 100,000 times):
MyISAM:
CREATE TABLE `testtable_myisam` (`id` bigint(20) NOT NULL AUTO_INCREMENT, `value1` INT DEFAULT NULL, `value2` INT DEFAULT NULL, PRIMARY KEY (`id`), KEY `index1` (`value1`)) ENGINE=MyISAM DEFAULT CHARSET=utf8;
INSERT INTO `testtable_myisam` (`value1`, `value2`) VALUES (FLOOR(RAND() * 1000), FLOOR(RAND() * 1000));
InnoDB:
CREATE TABLE `testtable_innodb` (`id` bigint(20) NOT NULL AUTO_INCREMENT, `value1` INT DEFAULT NULL, `value2` INT DEFAULT NULL, PRIMARY KEY (`id`), KEY `index1` (`value1`)) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `testtable_innodb` (`value1`, `value2`) VALUES (FLOOR(RAND() * 1000), FLOOR(RAND() * 1000));
TokuDB:
CREATE TABLE `testtable_tokudb` (`id` bigint(20) NOT NULL AUTO_INCREMENT, `value1` INT DEFAULT NULL, `value2` INT DEFAULT NULL, PRIMARY KEY (`id`), KEY `index1` (`value1`)) ENGINE=TokuDB DEFAULT CHARSET=utf8;
INSERT INTO `testtable_tokudb` (`value1`, `value2`) VALUES (FLOOR(RAND() * 1000), FLOOR(RAND() * 1000));
At the beginning, the INSERT performance of InnoDB is almost 50 times slower than MyISAM, and TokuDB is 40 times slower than MyISAM.
Then I figured out the setting "innodb-flush-log-at-trx-commit=2" for InnoDB, which makes its INSERT behavior similar to MyISAM.
The question is, what should I do for TokuDB? I bet the poor INSERT performance of TokuDB is also caused by some improper setting, but I cannot figure out the reason.
--------- UPDATE ---------
Thanks to tmcallaghan's comments, I have changed my setting to "tokudb_commit_sync=OFF", and now the insert rate of TokuDB on a small dataset looks reasonable (I will run it on a large dataset once I figure out the following problem):
However, the SELECT performance of TokuDB is still weird compared to MyISAM and InnoDB with the following SQL (wherein the ? is replaced by a different INT by my simulator):
SELECT id, value1, value2 FROM testtable_myisam WHERE value1=?;
SELECT id, value1, value2 FROM testtable_innodb WHERE value1=?;
SELECT id, value1, value2 FROM testtable_tokudb WHERE value1=?;
On a dataset of a million rows, each batch of 10k SELECT statements costs 10 and 15 seconds with MyISAM and InnoDB respectively, but TokuDB requires about 40 seconds.
Did I miss some other settings?
Thanks in advance!
This doesn't sound like a very interesting test (100,000 rows is not a lot, and your insertions are not concurrent), but here is the setting you are looking for.
Issuing "set tokudb_commit_sync=0;" will turn off fsync() on commit operations. Note that there are no durability guarantees in this mode.
As I mentioned before, TokuDB's strength is indexing data that is significantly larger than RAM, and this test's data is not.
The reason transactional engines are slower is that they force the hard disk to confirm that it wrote the data. For an HDD to write data, it has to position the head over the magnetic platter and stream the data out. Each transaction means the disk positions the head, writes the data, and tells the OS that it's definitely there.
The reason transactional engines do that is so they can conform to the D part of ACID. They ensure you that data you wanted to be written down, is, in fact, written down permanently. MyISAM doesn't do that.
Thus, the speed of insert is proportional to the number of Input Output Operations per Second (IOPS) of the hard disk.
That also means that if you wrap several queries in one transaction, you can exploit the write-speed bandwidth of the drive.
It also implies that drives with high IOPS sustain many more transactions per second (SSDs, for example, have 40+ thousand IOPS while mechanical ones range at about 250-300, but don't take my word for exact numbers).
Long story short, if you want really fast inserts using transactional engines - wrap multiple queries in a single transaction.
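A minimal sketch of that advice against the testtable_innodb table from the question; one COMMIT (one forced flush) covers the whole batch instead of one per row:
START TRANSACTION;
INSERT INTO testtable_innodb (value1, value2) VALUES (FLOOR(RAND() * 1000), FLOOR(RAND() * 1000));
INSERT INTO testtable_innodb (value1, value2) VALUES (FLOOR(RAND() * 1000), FLOOR(RAND() * 1000));
-- ... repeat for the rest of the batch ...
COMMIT;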
All the "optimizations" you do are slightly violating the D part of ACID, because the engines will try to exploit various fast memories lying around that can be used as buffers. That means, if something goes wrong, such as you lose power - kiss your data goodbye.
Also, the tests conducted by you are actually bad because they're on small scale. Both InnoDB and especially TokuDB are designed to contain hundreds of gigabytes of data and to offer linear performance.
I have updated my.cnf as shown below, and now the overall performance looks better.
For 10k SELECTs, MyISAM takes 4 seconds, InnoDB takes 5 seconds, and TokuDB takes 8 seconds. So I can conclude that under the configuration below, TokuDB behaves similarly to (if not necessarily better than) MyISAM and InnoDB.
Indeed, I am curious why there are tons of performance comparisons between InnoDB and TokuDB, but not between MyISAM and TokuDB.
tokudb_commit_sync=0
max_allowed_packet = 1M
table_open_cache = 128
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
thread_concurrency = 8
innodb_flush_log_at_trx_commit=2
innodb_buffer_pool_size = 2G
innodb_additional_mem_pool_size = 20M
innodb_log_buffer_size = 8M
innodb_lock_wait_timeout = 50

Slow MySQL InnoDB Inserts and Updates

I am using Magento and having a lot of slowness on the site. There is very, very light load on the server; I have verified that CPU, disk I/O, and memory usage are light, less than 30% of available at all times. APC caching is enabled. I am using New Relic to monitor the server and the issue is very clearly inserts/updates.
I have isolated the slowness to INSERT and UPDATE statements. SELECTs are fast. Very simple inserts/updates into tables take 2-3 seconds whether run from my application or the mysql command line.
Example:
UPDATE `index_process` SET `status` = 'working', `started_at` = '2012-02-10 19:08:31' WHERE (process_id='8');
This table has 9 rows, a primary key, and 1 index on it.
The slowness occurs with all inserts/updates. I have run mysqltuner and everything looks good. I also changed innodb_flush_log_at_trx_commit to 2.
The activity on this server is very light- it's a dv box with 1 GB RAM. I have magento installs that run 100x better with 5x the load on a similar setup.
I started logging all queries over 2 seconds and it seems to be all inserts and full text searches.
Anyone have suggestions?
Here is table structure:
CREATE TABLE IF NOT EXISTS `index_process` (
`process_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`indexer_code` varchar(32) NOT NULL,
`status` enum('pending','working','require_reindex') NOT NULL DEFAULT 'pending',
`started_at` datetime DEFAULT NULL,
`ended_at` datetime DEFAULT NULL,
`mode` enum('real_time','manual') NOT NULL DEFAULT 'real_time',
PRIMARY KEY (`process_id`),
UNIQUE KEY `IDX_CODE` (`indexer_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=10 ;
First: in (process_id='8'), '8' is a char/varchar literal while process_id is an int, so MySQL must convert the value first.
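For illustration, passing the value as a number avoids the conversion:
UPDATE `index_process` SET `status` = 'working', `started_at` = '2012-02-10 19:08:31' WHERE process_id = 8;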
On my system, updates to users.last_active_time were taking a long time (greater than one second).
The reason was that I had a few long-running queries that joined against the users table. This blocked the table for reads: a deadlock caused by SELECT.
I rewrote the query from a JOIN to sub-queries and the problem was gone.
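A hypothetical illustration of that kind of rewrite (the orders table and its columns are invented; only users.last_active_time comes from the answer above):
-- before: a long-running query joins through the hot users table
SELECT u.id, u.last_active_time, o.total
FROM users u
JOIN orders o ON o.user_id = u.id;
-- after: pull the users columns via sub-queries so the long scan runs against orders only
SELECT o.user_id,
(SELECT u.last_active_time FROM users u WHERE u.id = o.user_id) AS last_active_time,
o.total
FROM orders o;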