Why does INSERT IGNORE increment the auto_increment primary key? - mysql

I wrote a java program that accesses a MySQL innodb database.
Whenever an INSERT IGNORE statement encounters a duplicate entry the Auto Increment primary key is incremented.
Is this behaviour expected? I think it shouldn't happen with IGNORE. That means IGNORE actually incurs extra overhead for writing the new primary key value.
The table is the following:
CREATE TABLE `tablename` (
`id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
`rowname` varchar(50) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `rowname` (`rowname`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Thank you!

This has been the default behaviour since MySQL 5.1.22.
You can set the configuration variable innodb_autoinc_lock_mode to 0 (a.k.a. “traditional” lock mode) if you'd like to avoid gaps in your auto-increment columns. It may incur a performance penalty, though, as this mode has the effect of holding a table lock until the INSERT completes.
From the docs on InnoDB AUTO_INCREMENT Lock Modes:
innodb_autoinc_lock_mode = 0 (“traditional” lock mode)
The traditional lock mode provides the same behavior that existed
before the innodb_autoinc_lock_mode configuration parameter was
introduced in MySQL 5.1. The traditional lock mode option is provided
for backward compatibility, performance testing, and working around
issues with “mixed-mode inserts”, due to possible differences in
semantics.
In this lock mode, all “INSERT-like” statements obtain a special
table-level AUTO-INC lock for inserts into tables with AUTO_INCREMENT
columns. This lock is normally held to the end of the statement (not
to the end of the transaction) to ensure that auto-increment values
are assigned in a predictable and repeatable order for a given
sequence of INSERT statements, and to ensure that auto-increment
values assigned by any given statement are consecutive.
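Note that innodb_autoinc_lock_mode is not a dynamic variable, so it has to be set at server startup; a minimal sketch of the change, assuming you manage the server's my.cnf yourself:
[mysqld]
# Revert to the pre-5.1 AUTO-INC table lock; trades insert concurrency for gap-free, consecutive values
innodb_autoinc_lock_mode = 0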

I believe this is a configurable setting in InnoDB. See: AUTO_INCREMENT Handling in InnoDB
You'd want to go with
innodb_autoinc_lock_mode = 0

INSERT INTO `tablename` (id, rowname) SELECT '1', 'abc' FROM dual WHERE NOT EXISTS(SELECT NULL FROM `tablename` WHERE `rowname`='abc');
or, shorter (because the id column is auto-incremented):
INSERT INTO `tablename` (rowname) SELECT 'abc' FROM dual WHERE NOT EXISTS(SELECT NULL FROM `tablename` WHERE `rowname`='abc');
The solution may look cumbersome, but it works as the author needs.

I think this behaviour is reasonable. The auto-increment should not be relied upon to give sequences that don't have gaps.
For example, a rolled back transaction still consumes IDs:
INSERT INTO t (normalcol, uniquecol) VALUES
('hello','uni1'),
('hello','uni2'),
('hello','uni1');
This obviously generates a unique key violation and inserts no rows into the database (assuming a transactional engine here). However, it may consume up to 3 auto-inc values without inserting anything.
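To see the effect, a small follow-up against the same hypothetical table t (a sketch; the exact gap depends on the lock mode and on how far the failed statement got):
INSERT INTO t (normalcol, uniquecol) VALUES ('hello', 'uni3');
SELECT LAST_INSERT_ID();  -- assuming t started empty, commonly 3 or 4 rather than 1, showing the gap left by the failed statement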

Not sure if it's expected, though I would recommend switching to:
INSERT ... ON DUPLICATE KEY UPDATE
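Applied to the table from the question, a sketch of the syntax might look like this (note that, with the default lock mode, this too can burn an auto-increment value when the duplicate path is taken):
INSERT INTO `tablename` (rowname) VALUES ('abc')
ON DUPLICATE KEY UPDATE rowname = rowname;  -- no-op assignment; the duplicate no longer raises an error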

Related

MySQL InnoDB Gap Lock on Update with where clause by PK?

I'm getting locks in update operations that don't seem to be related to each other.
This is the DB Context:
MySQL 5.7
InnoDB engine
Read Committed Isolation Level
Optimistic Locking concurrency control in the application
The table structure is something like this:
CREATE TABLE `external_user` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`user_id` bigint(20) NOT NULL,
`status` varchar(30) NOT NULL,
PRIMARY KEY (`id`),
KEY `idx_user_status` (`status`),
KEY `idx_user_id` (`user_id`) USING BTREE
);
The structure's been simplified. The real one has more attributes and some FKs to other tables.
The process is something like this:
Process 1
BEGIN;
update external_user
set user_id=33333
where (id in (400000, 400002, 400028............., 420000))
and user_id = 22222;
This is a long-running query that modifies around 20k rows. Using BETWEEN is not an option because the ids we update are not consecutive.
At the same time a second process starts.
Process 2
BEGIN;
update external_user
set status='disabled', user_id = 44444
where id = 10000;
It turns out that this second update is waiting for the first one to complete. So there's a lock held in the first query.
I've been reading a lot about locking in MySQL, but I couldn't find anything about updates whose WHERE clause filters on the PK with the IN operator plus another filter on an attribute with a non-unique index (an attribute that is also being changed in the SET clause).
Is the first query obtaining a gap lock because of the non-unique index filter? Is it possible? Even though the PK is provided as a filter?
Note: I don't have access to the engine in order to obtain more detailed information.
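Not an answer, but if temporary access to the server can be arranged, a sketch of how the blocking lock could be inspected on 5.7 (these INFORMATION_SCHEMA views exist in 5.7; 8.0 moves this to performance_schema.data_locks):
-- Run while Process 2 is blocked: it shows which transaction holds the lock being waited for, and on which index the lock sits.
SELECT w.requesting_trx_id, w.blocking_trx_id, l.lock_table, l.lock_index, l.lock_mode, l.lock_data
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_LOCKS l ON l.lock_id = w.requested_lock_id;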

custom AUTO INCREMENT value not working

I have the following sql code to create a table
CREATE TABLE db.object (
`objid` bigint(20) NOT NULL AUTO_INCREMENT,
`object_type` varchar(32) NOT NULL,
PRIMARY KEY (`objid`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
However, the values in objid are coming out as 1, 2, 3... (the INSERT statement is not supplying the ids).
Shouldn't AUTO_INCREMENT=2 make objid start from 2 instead of 1?
With InnoDB tables, the AUTO_INCREMENT value will be reset to the maximum value (plus 1) when the table is opened. The auto increment value exists only in memory, it is not persisted on disk.
A table open happens, for example, when the MySQL instance is shut down and then restarted and a reference is made to the table.
A table can also be closed at other times. For example, when table_open_cache is exceeded (that is, when a large number of other tables is opened), MySQL will close some of the open tables to make room in the cache for newly opened ones.
I believe this behavior is documented somewhere in the MySQL Reference Manual.
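For what it's worth, a quick way to peek at the counter InnoDB is currently using is the information_schema view (a sketch; the value is just a snapshot of the in-memory counter and can move at any time):
SELECT AUTO_INCREMENT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'db' AND TABLE_NAME = 'object';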
I used your SQL, created the object table, and entered two values for object_type; objid started at 2. Can't see anything wrong here...
It might. There are enough exceptions and gotchas with auto-inc on InnoDB tables that it bears urging a full review of the documentation.
That said, there is one scenario I can think of where MySQL ignores the initializer value. I'll quote the documentation:
InnoDB uses the in-memory auto-increment counter as long as the server runs. When the server is stopped and restarted, InnoDB reinitializes the counter for each table for the first INSERT to the table, as described [here]:
InnoDB executes the equivalent of the following statement on the first insert into a table containing an AUTO_INCREMENT column after a restart:
SELECT MAX(ai_col) FROM table_name FOR UPDATE;
A server restart also cancels the effect of the AUTO_INCREMENT = N table option in CREATE TABLE and ALTER TABLE statements, which you can use with InnoDB tables to set the initial counter value or alter the current counter value.
So if you create that table, then do a server restart (for example, as part of a deployment process), you'll get a nice value of 1 for the initial row. If you want to countermand this, you need to create the table, then insert a dummy row with the auto-inc value you want, then restart, then delete the dummy row.
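A sketch of that workaround, spelled out with the table from the question (the dummy row sits one below the desired starting value, since InnoDB re-seeds from MAX(objid) + 1, and the seed only survives restarts while a row at or above it exists):
-- Seed the counter with data instead of the table option:
INSERT INTO db.object (objid, object_type) VALUES (1, 'seed-row');
-- Real inserts now get objid 2, 3, 4, ... even after a restart.
-- Only remove the seed once real rows exist above it, or the next restart re-seeds below 2:
DELETE FROM db.object WHERE objid = 1 AND object_type = 'seed-row';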

mariadb alter table lock strategy

I am using MariaDB 10.1.9. Short version: what I really want to know for certain is whether I can modify an indexed auto_increment field on an InnoDB table from int to bigint without locking the table.
Long version: is there a detailed explanation of which ALTER TABLE operations require which lock level? The documentation just says "Depending on the required operation and the used storage engine, different lock strategies can be used for ALTER TABLE." It doesn't link to any details, and the individual operations on the ALTER TABLE page do not specify their required lock level.
From experimentation, I know ADD COLUMN does not require a lock. MODIFY COLUMN allows reads, but can it be manually set to allow writes? The MariaDB documentation says you can set the lock level, but if you don't set it restrictive enough, it will give an error - but it doesn't say what that error is. The current table column definition looks like
`Id` int(10) NOT NULL AUTO_INCREMENT
KEY `Id` (`Id`)
When I try
ALTER TABLE MyTable MODIFY MyField bigint AUTO_INCREMENT LOCK=NONE;
I just get a generic SQL syntax error. Even if I specify DEFAULT, I get an error, so I'm not sure how to use LOCK; I would expect a proper error to tell me when I have chosen an improper lock level.
The syntax...
alter_specification [, alter_specification] ...
... requires a comma
ALTER TABLE MyTable
MODIFY COLUMN MyField BIGINT AUTO_INCREMENT, -- comma here
LOCK=NONE;
I'm guessing the error was not all that "generic" -- it should have said something about the right syntax to use near 'LOCK..., which is your hint: not that the quoted term is the beginning of the error, but rather that the parser/lexer expected something other than the quoted value at that position (because it was looking for the comma).
If the column you are altering is the primary key, a lock seems inevitable -- because the entire table would need rebuilding, including all the indexes, since the primary key "rides free" in all indexes; it is what's used after a non-covering index lookup to actually find the rows matched by the index.
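For what it's worth, a sketch of requesting the non-locking behaviour explicitly; if the operation needs a table copy (as an INT to BIGINT change normally does), the server refuses with an error stating why the in-place algorithm cannot be used, instead of silently taking a more restrictive lock:
ALTER TABLE MyTable
MODIFY COLUMN MyField BIGINT AUTO_INCREMENT,
ALGORITHM=INPLACE, LOCK=NONE;
-- expect this to be rejected for a type change; dropping the ALGORITHM clause lets it run as a copy with whatever lock it needs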

Emulate MyISAM's composite primary key with an autoincrement behavior in InnoDB

In MySQL, if you have a MyISAM table that looks something like:
CREATE TABLE `table1` (
`col1` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`col2` INT(10) UNSIGNED NOT NULL,
PRIMARY KEY (`col2`, `col1`)
)
COLLATE='utf8_general_ci'
ENGINE=MyISAM;
if you insert rows, the auto-increment sequence is kept separately for every distinct col2 value (each group gets its own numbering). If my explanation isn't clear enough, this answer should explain it better. InnoDB, however, doesn't follow this behavior. In fact, InnoDB won't even let you put col2 first in the primary key definition like this.
My question is, is it possible to model this behavior in InnoDB somehow without resorting to methods like MAX(id)+1 or the likes? The closest I could find is this, but it's for PostgreSQL.
It's a neat feature of MyISAM that I have used before, but you can't do it with InnoDB. InnoDB determines the highest number on startup, then keeps the number in RAM and increments it when needed.
Since InnoDB handles simultaneous inserts/updates, it has to reserve the number at the start of a transaction. On a transaction rollback, the number is still "used" but not saved. Your MAX(id) solution could get you into trouble because of this: a transaction starts and a number is reserved; you pull the highest "saved" number + 1 in a separate transaction, which is the same as the one reserved for the first transaction; the first transaction then finishes and its reserved number is saved, conflicting with yours.
MAX(id) returns the highest saved number, not the highest used number. You could have a MyISAM table whose sole purpose is to generate the numbers you want. It's the same number of queries as your MAX(id) solution; it's just that one is a SELECT and the other an INSERT.
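A sketch of that idea with hypothetical names (the real rows would live in the InnoDB table; whether LAST_INSERT_ID() reports the per-group value for this MyISAM trick is an assumption worth verifying - if it doesn't, read the value back with MAX(col1) for that col2 while holding a lock):
CREATE TABLE `table1_seq` (
`col2` INT(10) UNSIGNED NOT NULL,
`col1` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`col2`, `col1`)
) ENGINE=MyISAM;
-- Reserve the next col1 within the col2 = 42 group, then reuse the value when inserting into the InnoDB table:
INSERT INTO `table1_seq` (`col2`) VALUES (42);
SELECT LAST_INSERT_ID();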

A simple INSERT query on InnoDB taking too much

I have this simple query:
INSERT IGNORE INTO beststat (bestid,period,rawView) VALUES ( 4510724 , 201205 , 1 )
On the table:
CREATE TABLE `beststat` (
`bestid` int(11) unsigned NOT NULL,
`period` mediumint(8) unsigned NOT NULL,
`view` mediumint(8) unsigned NOT NULL DEFAULT '0',
`rawView` mediumint(8) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (`bestid`,`period`)
) ENGINE=InnoDB AUTO_INCREMENT=2020577 DEFAULT CHARSET=utf8
And it takes 1 sec to complete.
Side note: it doesn't always take 1 sec. Sometimes it completes in as little as 0.05 sec, but often it takes 1 sec.
This table (beststat) currently has ~500,000 records and its size is 40MB. I have 4GB RAM and innodb_buffer_pool_size = 104,857,600, with MySQL 5.1.49-3.
This is the only InnoDB table in my database (others are MyISAM)
ANALYZE TABLE beststat shows: OK
Maybe there is something wrong with InnoDB settings?
I ran some simulations about 3 years ago as part of some evaluation project for a customer. They had a requirement to be able to search a table where data is constantly being added, and they wanted to be up to date up to a minute.
InnoDB showed much better results in the beginning, but deteriorated quickly (well before 1 million records), until I removed all indexes (including the primary key). At that point InnoDB became superior to MyISAM for inserts/updates. (I had much worse hardware than you, running the tests only on my laptop.)
Conclusion: inserts will always suffer if you have indexes, especially unique ones.
I would suggest the following optimizations:
Remove all indexes from your beststat table and use it as a simple dump.
If you really need these unique indexes, consider some programmatic solution (such as remembering the max bestid at all times, insisting that each new record is above that number, and immediately increasing it). But do you really need so many unique fields? They all sound to me like plain indexes.
Have a background thread move new records from InnoDB to another table (which can be MyISAM) where they would be indexed.
Consider dropping indexes temporarily and re-indexing the table after the bulk update, possibly switching between two tables so that querying is never interrupted.
These are theoretical solutions, I admit, but they are the best I can offer given your question.
Oh, and if your table is planned to grow to many millions, consider a NoSQL solution.
So you have two unique indexes on the table. Your primary key is an autonumber. Since it is not really part of the data you add, it is what you would call an artificial (surrogate) primary key. You also have a unique index on bestid and period. If bestid and period are supposed to be unique, that would be a good candidate for the primary key.
InnoDB stores the table on disk as a tree (a clustered index) organized by the primary key, so in your case the tree is ordered by the autonumber key. When you create the second, unique index, it actually creates a second tree on disk containing the bestid and period values. That index does not contain the other columns of the table, only bestid, period, and your primary key value.
OK, so now you insert the data. The first thing MySQL does is ensure the unique index stays unique, so it reads the index to see whether you are trying to insert a duplicate value. This is where the slowdown comes into play: it first has to check uniqueness, then, if the row passes the test, write the data, and then also insert the bestid, period, and primary key values into the unique index. The total operation is therefore: one index read to check the value, one insert of the row into the table, and one insert of bestid and period into the index - three operations. If you removed the autonumber and used the unique index as the primary key, it would be: one read of the table to check the values and one insert into the table - two operations instead of three. So you do roughly 33% less work by removing the redundant autonumber.
I hope this is clear as I am typing from my Android and autocorrect keeps on changing innodb to inborn. Wish I was at a computer.