Reserving MySQL auto-incremented IDs? - mysql

We want to obtain an auto-increment ID from MySQL without actually storing it until the other non-MySQL-related processes have completed successfully, so that the entry is not stored if an exception or application crash happens. We need to use the ID as a key for the other processes. In essence we want to “reserve” the auto-increment value and insert the rows into MySQL as the last step. We don’t want to insert any row until we know the entire process has completed successfully.
Is it possible to do this sort of auto-increment reservation in mySQL?
Note: I know about SQL transactions. But our process contains non-SQL steps that need to happen outside of the DB. These processes may take a few minutes to several hours. But we don't want any other process using the same auto-increment ID. That is why we want to "reserve" an auto-increment ID without actually inserting any data into the DB.

The only way to generate an auto-increment value is to attempt the insert. But you can roll back that transaction, and still read the id generated. In MySQL 5.1 and later, the default behavior is that auto-increment values aren't "returned" to the stack when you roll back.
START TRANSACTION;
INSERT INTO mytable () VALUES ();
ROLLBACK;
SELECT LAST_INSERT_ID() INTO @my_ai_value;
Now you can be sure that no other transaction will try to use that value, so you can use it in your external processes, and then finally insert a value manually that uses that id value (when you insert a specific id value, MySQL does not generate a new value).
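For example, the final step could look like this (a sketch only; the payload column is hypothetical, use your table's real columns):
-- final step: insert the real row, supplying the reserved id explicitly
INSERT INTO mytable (id, payload) VALUES (@my_ai_value, 'result of the external work');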

Have you considered using MySQL transactions?
The essence of it: you start a transaction, and if all SQL statements are correct and can be completed, you commit the transaction. If not, you roll back as if nothing had happened.
More details can be read in this link:
http://dev.mysql.com/doc/refman/5.0/en/sql-syntax-transactions.html

You can use a temporary table along with a transaction.
Stage the data in the temporary table; if the process completes, move the data to the real table. The temporary table disappears on its own when the connection closes (a sketch follows after the link).
http://www.tutorialspoint.com/mysql/mysql-temporary-tables.htm
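A rough sketch of that idea (table and column names are just for illustration):
-- a throwaway staging table that exists only for this connection
CREATE TEMPORARY TABLE staging_orders (customer_id INT, total DECIMAL(10,2));
INSERT INTO staging_orders VALUES (42, 99.95);
-- ... the non-SQL work happens here; if it fails, simply never copy the staged rows ...
START TRANSACTION;
INSERT INTO orders (customer_id, total)
  SELECT customer_id, total FROM staging_orders;  -- move data to the real table
COMMIT;
DROP TEMPORARY TABLE staging_orders;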

Related

MySQL how to increment a (float) field that is not an AutoIncrement?

I just received access to a MySQL database where the ID is a float field (not auto-increment). This database was first used with a C# application that is no longer updated.
I have to make a web app, and I can't edit the type of the field in the database nor create a new one.
So, how can I make an INSERT query that will increment the ID and not create problems when multiple people are working at the same time?
I tried to get the last ID, increment it by one, then insert into the table, but that's not the best way if several users are creating a record at the same time.
Thank you
how can I make an INSERT query that will increment the ID and not create problems when multiple people are working at the same time?
You literally cannot make an INSERT query alone that will increment the ID and avoid race conditions. It has nothing to do with the data type of the column. The column could be INT and you would have the same race condition problem.
One solution is to use LOCK TABLES to block concurrent sessions from inserting rows. Then your session can read the current MAX() value in the table, increment it, INSERT a new row with the incremented value, and then UNLOCK TABLES as promptly as possible to allow the concurrent sessions to do their INSERTs.
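A minimal sketch of that approach (table and column names are assumptions):
LOCK TABLES mytable WRITE;              -- block concurrent writers while we pick the next id
SELECT COALESCE(MAX(id), 0) + 1 INTO @next_id FROM mytable;
INSERT INTO mytable (id, name) VALUES (@next_id, 'new row');
UNLOCK TABLES;                          -- release the lock as promptly as possible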
In fact, this is exactly how MySQL's AUTO_INCREMENT works. Each table stores its own most recent auto-increment value. When you insert to a table with an auto-increment, the table is locked briefly, just long enough for your session to read the table's auto-inc value, increment it, store it back into the table's metadata, and also store that value in your session's thread data. Then it unlocks the table's auto-inc lock. This all happens very quickly. Read https://dev.mysql.com/doc/refman/8.0/en/innodb-auto-increment-handling.html for more on this.
The difficult part is that you can't simulate this from SQL, because SQL naturally must obey transaction scope. The auto-inc mechanism built into InnoDB works outside of transaction scope, so concurrent sessions can read the latest incremented auto-inc value for the table even if the transaction that incremented it has not finished inserting that value and committing its transaction. This is good for allowing maximum concurrency, but you can't do that at the SQL level.
The closest you can do is the LOCK TABLES solution that I described, but this is rather clumsy because it ends up holding that lock a lot longer than the auto-inc lock typically lasts. This puts a limit on the throughput of concurrent inserts to your table. Is that too limiting for your workload? I can't say. Perhaps you have a modest rate of inserts to this table, and it won't be a problem.
Another solution is to use some other table that has an auto-increment or another type of unique id generator that is safe for concurrent sessions to share. But this would require all concurrent sessions to use the same mechanism as they INSERT rows.
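For example, a shared single-row generator table could look like this (a sketch; names are assumptions). LAST_INSERT_ID(expr) makes the read-back safe per connection:
CREATE TABLE id_sequence (next_id BIGINT NOT NULL);
INSERT INTO id_sequence VALUES (0);
-- each session reserves the next value atomically:
UPDATE id_sequence SET next_id = LAST_INSERT_ID(next_id + 1);
SELECT LAST_INSERT_ID();   -- this connection's reserved id; use it in the real INSERT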
A possible solution could be the following, but it is risky and requires thorough testing of ALL applications using the table/database!
The steps to follow (sketched in SQL after the list):
Rename the table (xxx_refactored or something).
Create a view with the original table name over the renamed table, and cast the ID column as FLOAT in the view, so the other application will still see the data as FLOAT.
Create a new column, or alter the existing one, and add AUTO_INCREMENT to it.
Eventually the legacy application will have to be updated to handle the column properly, so the view can be dropped.
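A rough sketch of those steps (all names are assumptions; CAST ... AS FLOAT needs a reasonably recent MySQL version):
RENAME TABLE mytable TO mytable_refactored;
-- the id column must be keyed for AUTO_INCREMENT to be allowed
ALTER TABLE mytable_refactored
  MODIFY id INT NOT NULL AUTO_INCREMENT;
-- the legacy application keeps using the old name and still sees a FLOAT id
CREATE VIEW mytable AS
  SELECT CAST(id AS FLOAT) AS id, name
  FROM mytable_refactored;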
The view will be updatable, so the legacy application will still be able to insert and update the table through the view.
This won't work if:
Data in the column is outside of the range of the chosen new datatype
The column is referenced by a foreign key constraint from any other table
Probably more :)
!!! TEST EVERYTHING BEFORE YOU DO IT IN PRODUCTION !!!
Probably a better option is to ask somebody to show you the code which maintains this field in the legacy application.

MySQL insert row ignoring current transaction

I have a MySQL table implementing a mail queue, and I also use it to send mails reporting unexpected errors in the system. Sometimes these unexpected errors occur inside a transaction, so when I roll back the transaction I also undo the row inserted into the mail queue table (the mail reporting the unexpected error).
My question is: how can I force a row to be inserted into a table in the middle of a transaction, ignoring a possible rollback? I mean, if the transaction finally rolls back, the insertion of the row for the email reporting the error details should not be rolled back with it.
This table can be read by multiple asynchronous processes that send the mails in the queue, so the rows have to be locked to ensure each email is sent only once; therefore it is not possible to use a MyISAM table and the table uses InnoDB.
Thanks in advance.
If your INSERT should survive a ROLLBACK of the transaction, it is safe to say that it is not part of the transaction. So what you should do is simply move it outside the transaction. There are several ways to achieve that:
While in the transaction, instead of running your INSERT, store the fields in session variables (these will survive a ROLLBACK); after the transaction, run the INSERT from the session variables (see the sketch below).
Rethink your schema - this reeks of some deeper-lying problem.
Open a second DB connection and run your INSERT on it; it will not be affected by the transaction on the first connection.
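A minimal sketch of the first option (the mail_queue table and its columns are assumptions):
START TRANSACTION;
-- ... normal work; an unexpected error is detected somewhere in here ...
SET @err_subject = 'Unexpected error', @err_body = 'details of what went wrong';
ROLLBACK;   -- user-defined variables are not transactional and survive the rollback
INSERT INTO mail_queue (subject, body) VALUES (@err_subject, @err_body);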
You could create a different connection to the database to insert the errors; it won't be in the same transaction context, so they will be inserted regardless.

Why does MySQL autoincrement increase on failed inserts?

A co-worker just made me aware of a very strange MySQL behavior.
Assuming you have a table with an auto_increment field and another field that is set to unique (e.g. a username field). When trying to insert a row with a username that's already in the table, the insert fails, as expected. Yet the auto_increment value is increased, as can be seen when you insert a valid new entry after several failed attempts.
For example, when our last entry looks like this...
ID: 10
Username: myname
...and we try five new entries with the same username value on our next insert we will have created a new row like so:
ID: 16
Username: mynewname
While this is not a big problem in itself, it seems like a very silly attack vector to kill a table by flooding it with failed insert requests, as the MySQL Reference Manual states:
"The behavior of the auto-increment mechanism is not defined if [...] the value becomes bigger than the maximum integer that can be stored in the specified integer type."
Is this expected behavior?
InnoDB is a transactional engine.
This means that in the following scenario:
Session A inserts record 1
Session B inserts record 2
Session A rolls back
there is either the possibility of a gap, or session B would have to block until session A committed or rolled back.
InnoDB designers (as most of the other transactional engine designers) chose to allow gaps.
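A minimal way to reproduce the gap described in the question (table and column names are just for illustration):
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, username VARCHAR(20) UNIQUE);
INSERT INTO t (username) VALUES ('myname');     -- gets id 1
INSERT INTO t (username) VALUES ('myname');     -- fails with a duplicate-key error, but the counter has advanced
INSERT INTO t (username) VALUES ('mynewname');  -- gets id 3, not 2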
From the documentation:
When accessing the auto-increment counter, InnoDB uses a special table-level AUTO-INC lock that it keeps to the end of the current SQL statement, not to the end of the transaction. The special lock release strategy was introduced to improve concurrency for inserts into a table containing an AUTO_INCREMENT column
…
InnoDB uses the in-memory auto-increment counter as long as the server runs. When the server is stopped and restarted, InnoDB reinitializes the counter for each table for the first INSERT to the table, as described earlier.
If you are afraid of the id column wrapping around, make it BIGINT (8 bytes long).
Without knowing the exact internals, I would say yes, the auto-increment SHOULD allow for skipped values due to failed inserts. Let's say you are doing a banking transaction, or another case where the entire transaction and multiple records go as an all-or-nothing unit. If you try your insert, get an ID, then stamp all subsequent details with that transaction ID and insert the detail records, you need to ensure your qualified uniqueness. If you have multiple people slamming the database, they too will need to ensure they get their own transaction IDs so as not to conflict with yours when their transactions get committed. If something fails on the first transaction, no harm done, and no dangling elements downstream.
Old post, but this may help people: you may have to set innodb_autoinc_lock_mode to 0 or 2.
System variables that take a numeric value can be specified as --var_name=value on the command line or as var_name=value in option files.
Command-Line parameter format:
--innodb-autoinc-lock-mode=0
OR
Open your my.cnf (or my.ini on Windows) and add the following line:
innodb_autoinc_lock_mode=0
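Note that innodb_autoinc_lock_mode can only be set at server startup; you can check the current value from SQL:
SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';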
I know that this is an old article, but since I also couldn't find the right answer, I actually found a way to do this. You have to wrap your query within an if statement. It's usually an INSERT query, or an INSERT ... ON DUPLICATE KEY UPDATE query, that messes up the organized auto-increment order, so for regular inserts use:
// check first whether the email address already exists ($pdo and the table/column names are illustrative)
$stmt = $pdo->prepare('SELECT 1 FROM users WHERE email = ?');
$stmt->execute([$email]);
if ($stmt->fetchColumn() === false) {
    // only run the INSERT when no matching row was found
    $pdo->prepare('INSERT INTO users (email) VALUES (?)')->execute([$email]);
}
Instead of INSERT ... ON DUPLICATE KEY UPDATE, use an UPDATE ... SET ... WHERE query (inside or outside an if statement, it doesn't matter), and a REPLACE INTO query also seems to work.

How can I undo a mysql statement that I just executed?

How can I undo the most recently executed mysql query?
If you define the table type as InnoDB, you can use transactions. You will need to set AUTOCOMMIT=0, and then you can issue COMMIT or ROLLBACK at the end of a query or session to apply or cancel a transaction.
ROLLBACK -- will undo the changes that you have made
You can only do so during a transaction.
BEGIN;
INSERT INTO xxx ...;
DELETE FROM ...;
Then you can either:
COMMIT; -- will confirm your changes
Or
ROLLBACK; -- will undo your previous changes
Basically: If you're doing a transaction just do a rollback. Otherwise, you can't "undo" a MySQL query.
For some instructions, like ALTER TABLE, this is not possible with MySQL, even with transactions (1 and 2).
You can stop a query that is still being processed, like this:
Find the id of the query process with => show processlist;
Then => kill id;
In case you need to undo more than just your last query (although your question actually only asks about that, I know), and a transaction therefore might not help you out, you need to implement a workaround:
Copy the original data before committing your query, and write it back on demand based on a unique id that must be the same in both tables: your rollback table (with the copies of the unchanged data) and your actual table (containing the data that should be "undone").
For databases with many tables, a single "rollback table" containing structured dumps/copies of the original data is easier to use than one per actual table. It would contain the name of the actual table, the unique id of the row, and, in a third field, the content in any desired format that represents the data structure and values clearly (e.g. XML). Based on the first two fields, this third one would be parsed and written back to the actual table. A fourth field with a timestamp would help with cleaning up this rollback table; a possible layout is sketched below.
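A possible layout for such a shared rollback table (a sketch only; names and types are assumptions):
CREATE TABLE rollback_log (
  table_name   VARCHAR(64) NOT NULL,                            -- which real table the copy belongs to
  row_id       BIGINT      NOT NULL,                            -- unique id of the original row
  row_snapshot TEXT        NOT NULL,                            -- e.g. an XML dump of the row before the change
  created_at   TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP,  -- helps with cleaning up old entries
  PRIMARY KEY (table_name, row_id, created_at)
);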
Since there is no real undo in SQL dialects apart from ROLLBACK inside a transaction (please correct me if I'm wrong - maybe there is one by now), this is the only way, I guess, and you have to write the code for it yourself.