I'm wondering about the exact cause that makes INSERT queries on MySQL/InnoDB take at least 40 ms on a machine with a fairly strong CPU. The "equivalent" query runs in under 10 ms on the same MyISAM table (the tables have no foreign keys). Timings are from the MySQL console.
This is an "as simple as possible" DB structure for reproduction.
CREATE TABLE `test_table_innodb` (
`id` int NOT NULL AUTO_INCREMENT,
`int_column` int NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `test_table_myisam` (
`id` int NOT NULL AUTO_INCREMENT,
`int_column` int NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
I'm running the same query from the MySQL console (which auto-commits transactions in the case of InnoDB). No other queries are executed on the machine at the time, and the results are:
mysql> insert into test_table_myisam (int_column) values (5);
Query OK, 1 row affected (0.00 sec)
mysql> insert into test_table_innodb (int_column) values (5);
Query OK, 1 row affected (0.06 sec)
Is the transaction overhead what makes the query run that much longer against the InnoDB table? Or is it something else?
There are three aspects to be considered with each auto-committed INSERT:
ASPECT #1. Overhead
InnoDB supports MVCC and transaction isolation as an ACID-compliant storage engine. To accommodate this, a copy of a row, as it was before the change, is written into the Undo Tablespace section of the System Tablespace file ibdata1. What gets written for an INSERT? A copy of a blank row, so that a rollback simply removes the attempted INSERT. When an INSERT is committed, the blank copy in the Undo Tablespace is expunged.
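The rollback path can be seen with an explicit transaction against the test table above (a sketch; with autocommit on, each bare INSERT commits, and purges its undo entry, immediately):

```sql
START TRANSACTION;
INSERT INTO test_table_innodb (int_column) VALUES (5);
-- the undo record written for this INSERT lets InnoDB discard the attempt:
ROLLBACK;
```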
ASPECT #2. Clustered Index
Every InnoDB table has a clustered index. If a table has no PRIMARY KEY (and no unique NOT NULL index), InnoDB generates an internal one called GEN_CLUST_INDEX. Since your table has a PRIMARY KEY of id, that key itself serves as the clustered index, and every inserted row must be placed into it according to its unique id value.
ASPECT #3. Configuration
Believe it or not, there are times when MySQL 4.1 out-of-the-box is faster than MySQL 5.5. Sounds shocking, doesn't it? Percona actually benchmarked several versions of MySQL and found this to be the case.
I wrote about this on DBA StackExchange before:
Why mysql 5.5 slower than 5.1 (linux,using mysqlslap) (Nov 24, 2011)
Query runs a long time in some newer MySQL versions (Oct 05, 2011)
Multi cores and MySQL Performance (Sep 20, 2011)
How do I properly perform a MySQL bake-off? (Jun 19, 2011)
The CPU is not the factor here. The factor is the disk.
In InnoDB, each committed command must be written to the log, so if the log lives on the same disk, or the disk is fragmented or slow, you will see a big difference.
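Two practical consequences follow. You can check how aggressively InnoDB flushes the redo log at commit, and you can amortize the flush cost by batching several inserts into one transaction (a sketch against the test table above; relaxing innodb_flush_log_at_trx_commit trades durability for speed):

```sql
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- one log flush for many rows instead of one flush per row
START TRANSACTION;
INSERT INTO test_table_innodb (int_column) VALUES (1);
INSERT INTO test_table_innodb (int_column) VALUES (2);
INSERT INTO test_table_innodb (int_column) VALUES (3);
COMMIT;
```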
Related
I have a table with 40 million rows in an Aurora MySQL RDS database (v5.6.10), and I wish to modify an enum column to add more values. The table is frequently updated. Has anyone tried altering such tables before? If so, can you please elaborate on the experience?
Table Structure:
CREATE TABLE `tee_sizes` (
id bigint auto_increment,
customer_id bigint,
`tee-size` enum('small', 'large', 'x-large'),
created_at timestamp NOT NULL default CURRENT_TIMESTAMP(),
updated_at timestamp NOT NULL default CURRENT_TIMESTAMP() ON UPDATE CURRENT_TIMESTAMP(),
PRIMARY KEY(id)
) ENGINE=InnoDB AUTO_INCREMENT=36910751 DEFAULT CHARSET=utf8;
I wish to add 'xx-large' to the column tee-size.
Will there be a downtime while doing this?
MySQL 5.6 should allow InnoDB online DDL without any downtime on that table, and concurrent queries should still work on the table while altering.
ALTER TABLE tee_sizes MODIFY COLUMN `tee-size` enum('small', 'large', 'x-large', 'xx-large'),
ALGORITHM=INPLACE, LOCK=NONE;
ALGORITHM=INPLACE, LOCK=NONE forces MySQL to execute at the requested level of concurrency, i.e. without downtime.
If MySQL refuses to execute the statement, the requested level of concurrency was not available, meaning ALGORITHM=INPLACE, LOCK=NONE needs to be relaxed.
Edited because of a comment:

"Wait... so does this force any locks? 'ALGORITHM=INPLACE, LOCK=NONE would force MySQL in executing (if allowed) without downtime; if your MySQL does not execute it, it means it can't be done using ALGORITHM=INPLACE, LOCK=NONE.' This statement is confusing."

No, it does not lock. Copy/paste from the manual:
You can control aspects of a DDL operation using the ALGORITHM and
LOCK clauses of the ALTER TABLE statement. These clauses are placed at
the end of the statement, separated from the table and column
specifications by commas. ... To avoid accidentally making the table
unavailable for reads, writes, or both, specify a clause on the ALTER
TABLE statement such as LOCK=NONE (permit reads and writes) or
LOCK=SHARED (permit reads). The operation halts immediately if the
requested level of concurrency is not available.
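If LOCK=NONE is refused, the quoted paragraph itself suggests the fallback: retry the same ALTER with LOCK=SHARED, which blocks writes but still permits reads (a sketch, using the 'xx-large' value from the question):

```sql
ALTER TABLE tee_sizes
MODIFY COLUMN `tee-size` enum('small', 'large', 'x-large', 'xx-large'),
ALGORITHM=INPLACE, LOCK=SHARED;  -- reads permitted, writes blocked
```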
Environment: Windows 10
MySQL version: 5.7
RAM: 32 GB
IDE: Toad for MySQL
I have sufficient hardware, but the issue is the performance of INSERT into a simple table that has no relationships. I do need an index on the table.
table structure
CREATE TABLE `2017` (
`MOB_NO` bigint(20) DEFAULT NULL,
`CAF_SLNO` varchar(50) DEFAULT NULL,
`CNAME` varchar(58) DEFAULT NULL,
`ACT_DATE` varchar(200) DEFAULT NULL,
KEY `2017_index` (`MOB_NO`,`ACT_DATE`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
I am using the statements below to insert the records into the table. Without the index it took around 30 minutes, whereas with the index it has taken 22 hours and is still going.
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
LOAD DATA LOCAL INFILE 'D:/base/test/2017/2017.txt'
INTO TABLE `2017` COLUMNS TERMINATED BY '|';
commit;
I have seen suggestions to change the cnf file, but could not find one on my machine (on Windows the file is my.ini).
By adding the following lines to my.ini I was able to achieve it:
innodb_autoinc_lock_mode=2
sync_binlog=1
bulk_insert_buffer_size=512M
key_buffer_size=512M
read_buffer_size=50M
I also set innodb_flush_log_at_trx_commit=2; I have seen another link claim that this alone can increase speed by up to 160x.
Resulting performance: from more than 24 hours down to 2 hours.
If you begin with an empty table, create it without any indexes. Then, after fully populating the table, add the indexes; this is reported to be faster than inserting with the indexes already in place.
See:
MySQL optimizing INSERT speed being slowed down because of indices
Is it better to create an index before filling a table with data, or after the data is in place?
Possibly helpful: Create an index on a huge MySQL production table without table locking
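Applied to the table from the question, the load-then-index approach would look something like this (a sketch; the speedup is reported, not guaranteed):

```sql
-- create the table without the secondary index
CREATE TABLE `2017` (
  `MOB_NO` bigint(20) DEFAULT NULL,
  `CAF_SLNO` varchar(50) DEFAULT NULL,
  `CNAME` varchar(58) DEFAULT NULL,
  `ACT_DATE` varchar(200) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

-- bulk load first...
LOAD DATA LOCAL INFILE 'D:/base/test/2017/2017.txt'
INTO TABLE `2017` COLUMNS TERMINATED BY '|';

-- ...then build the index once, over the full table
ALTER TABLE `2017` ADD KEY `2017_index` (`MOB_NO`, `ACT_DATE`);
```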
I'm exporting a largeish table (1.5 billion rows) between servers. This is the table format.
CREATE TABLE IF NOT EXISTS `partitionedtable` (
`domainid` int(10) unsigned NOT NULL,
`instanceid` int(10) unsigned NOT NULL,
`urlid` int(10) unsigned NOT NULL,
`adjrankid` smallint(5) unsigned NOT NULL,
PRIMARY KEY (`domainid`,`instanceid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (MOD(domainid,8192))
(PARTITION p0 VALUES LESS THAN (1) ENGINE = InnoDB,
PARTITION p1 VALUES LESS THAN (2) ENGINE = InnoDB,
PARTITION p2 VALUES LESS THAN (3) ENGINE = InnoDB
...
PARTITION p8191 VALUES LESS THAN (8192) ENGINE = InnoDB) */;
The data was exported to the new server in PK order and resulted in 8192 text files... which equated to around 200K records per file.
I'm simply iterating from 0 to 8191 importing the files into the new table.
LOAD DATA INFILE '/home/backup/rc/$i.tsv' INTO TABLE partitionedtable PARTITION (p$i);
I'm thinking that each of these should only take a second to import, however they take around 6 seconds.
The spec of the server can be seen here.
http://www.ovh.co.uk/dedicated_servers/sp_32g.xml
There isn't much else going on in the server that'd bottleneck the process.
Could it be that partitioning by MOD() causes fragmentation? I was under the impression that there wouldn't be any, since each partition is effectively a separate table and the data is inserted in PK order.
Added - probably useful... these settings were applied at the start of the batch.
SET autocommit=0;
SET foreign_key_checks=0;
SET sql_log_bin=0;
SET unique_checks=0;
A COMMIT is applied after every file.
The thread seems to spend the majority of its time in a System lock state, during LOAD DATA INFILE.
When I set up the server I mistakenly thought the open-files limit was higher; in reality it was sitting at 1024.
I've upped it to 16000 and rebooted the server, and it's running slightly quicker, at around 3 seconds per file (I was assuming the file opening/closing was causing the System lock status).
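For reference, the effective limit and the server's actual file usage can be checked from SQL (a quick sketch):

```sql
SHOW VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL STATUS LIKE 'Open_files';    -- files currently open
SHOW GLOBAL STATUS LIKE 'Opened_files';  -- cumulative opens since startup
```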
I also purged the bin logs.
Still seems a bit slow though.
A simple mysql update query is very slow sometimes. Here is the query:
update produse
set vizite = '135'
where id = '71238'
My simplified table structure is:
CREATE TABLE IF NOT EXISTS `produse`
(
`id` int(9) NOT NULL auto_increment,
`nume` varchar(255) NOT NULL,
`vizite` int(9) NOT NULL default '1',
PRIMARY KEY (`id`),
KEY `vizite` (`vizite`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=945179;
I use MySQL 5.0.77 and the table is MyISAM.
The table is about 752.6 MB and currently has 642,442 rows.
The database runs on a dedicated VPS with 3 GB of RAM and 4 processors of 2 GHz each. There are no more than 6-7 queries of that type per second when we have high traffic, but the query is slow not only then.
First, try rebuilding your indexes; it might be that the query is not using them (you can check with an EXPLAIN statement on an equivalent SELECT, since EXPLAIN for UPDATE statements only arrived in MySQL 5.6).
Another possibility is that you have many selects on that table or long running selects, which causes long locks. You can try using replication and have your select queries executed on slave database, only, and updates on master, only. That way, you will avoid table locks caused by updates while you are doing selects and vice versa.
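A sketch of both checks against the table above (OPTIMIZE TABLE rebuilds a MyISAM table and its indexes, but locks the table while it runs):

```sql
-- verify the PRIMARY KEY is used for the lookup
EXPLAIN SELECT * FROM produse WHERE id = '71238';

-- rebuild the table and its indexes (MyISAM)
OPTIMIZE TABLE produse;
```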
I'm running MySql 5.0.22 and have a really unwieldy table containing approximately 5 million rows.
Some, but not all rows are referenced by a foreign key to another table.
All attempts to cull the unreferenced rows have failed so far, resulting in lock-timeouts every time.
Copying the rows I want to an alternate table also failed with lock-timeout.
Suspiciously, even a statement that should finish instantaneously like the one below will also fail with "lock timeout":
DELETE FROM mytable WHERE uid_pk = 1 LIMIT 1;
...it's at this point that I've run out of ideas.
Edit: For what it's worth, I've been working through this on my dev system, so only I am actually using the database at the moment; there shouldn't be any locking going on outside of the SQL I'm running.
Any MySql gurus out there have suggestions on how to tame this rogue table?
Edit #2: As requested, the table structure:
CREATE TABLE `tunknowncustomer` (
`UID_PK` int(11) NOT NULL auto_increment,
`UNKNOWNCUSTOMERGUID` varchar(36) NOT NULL,
`CREATIONDATE` datetime NOT NULL,
`EMAIL` varchar(100) default NULL,
`CUSTOMERUID` int(11) default NULL,
PRIMARY KEY (`UID_PK`),
KEY `IUNKNOWCUST_CUID` (`CUSTOMERUID`),
KEY `IUNKNOWCUST_UCGUID` (`UNKNOWNCUSTOMERGUID`),
CONSTRAINT `tunknowncustomer_ibfk_1` FOREIGN KEY (`CUSTOMERUID`) REFERENCES `tcustomer` (`UID_PK`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8$$
Note, attempting to drop the FK also times out.
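Before retrying, it may help to see what is actually holding the locks; both commands exist in MySQL 5.0 (a sketch):

```sql
SHOW PROCESSLIST;              -- look for long-running or sleeping sessions
SHOW ENGINE INNODB STATUS\G    -- the TRANSACTIONS section shows lock waits
```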
I had the same problem with an InnoDB table; OPTIMIZE TABLE corrected it.
Ok, I finally found an approach that worked to trim the unwanted rows from my large InnoDB table! Here's how I did it:
Stopped using MySQL Workbench (they have a hard-coded execution timeout of 30 seconds)
Opened a command prompt
Renamed the "full" table using ALTER TABLE
Created an empty table using the original table name and structure
Rebooted MySQL
Turned OFF 'autocommit' with SET AUTOCOMMIT = 0
Deleted a limited number of rows at a time, ramping up my limit after each success
Did a COMMIT; in between delete statements since turning off autocommit really left me inside of one large transaction
The whole effort looked somewhat like this:
ALTER TABLE `ep411`.`tunknowncustomer` RENAME TO `ep411`.`tunknowncustomer2`;
...strangely enough, renaming the table was the only ALTER TABLE command that would finish right away.
delimiter $$
CREATE TABLE `tunknowncustomer` (
...
) ENGINE=InnoDB DEFAULT CHARSET=utf8$$
...then a reboot just in case my previous failed attempts could block any new work done...
SET AUTOCOMMIT = 0;
delete from tunknowncustomer2 where customeruid is null limit 1000;
delete from tunknowncustomer2 where customeruid is null limit 100000;
commit;
delete from tunknowncustomer2 where customeruid is null limit 1000000;
delete from tunknowncustomer2 where customeruid is null limit 1000000;
commit;
...Once I got into deleting 100k at a time InnoDB's execution time dropped with each successful command. I assume InnoDB starts doing read-aheads on large scans. Doing commits would reset the read-ahead data, so I spaced out the COMMITs to every 2 million rows until the job was done.
I wrapped-up the task by copying the remaining rows into my "empty" clone table, then dropping the old (renamed) table.
Not a graceful solution, and it doesn't address any reasons why deleting even a single row from a large table should fail, but at least I got the result I was looking for!