Fetching last insert id shows wrong number - mysql

I have a table with three records. One field is an auto_increment column. The IDs are 1, 2, 3...
When I run query
SELECT LAST_INSERT_ID() FROM myTable
The result is 5, even though the last ID is 3. Why?
And which is better to use: LAST_INSERT_ID() or SELECT MAX(ID) FROM myTable?

The LAST_INSERT_ID() function returns the auto-incremented id value generated by the most recent INSERT operation, to any table, on your MySQL connection.
If you haven't just done an INSERT it returns an unpredictable value. If you do several INSERT queries in a row it returns the id of the most recent one. The ids of the previous ones are lost.
If you use it within a MySQL transaction, the row you just inserted won't be visible to another connection until you commit the transaction. So, it may seem like there's no row matching the returned LAST_INSERT_ID() value if you're stepping through code to debug it.
You don't have to use it within a transaction, because it is a connection-specific value. If you have two connections (two MySQL client programs) inserting stuff, they each have their own distinct value of LAST_INSERT_ID() for the INSERT operations they are doing.
Edit: If you are trying to create a parent-child relationship, for example names and email addresses, you might try this kind of sequence of MySQL statements:
INSERT INTO user (name) VALUES ('Josef');
SET @userId := LAST_INSERT_ID();
INSERT INTO email (user_id, email) VALUES (@userId, 'josef@example.com');
INSERT INTO email (user_id, email) VALUES (@userId, 'josef@josefsdomain.me');
This uses LAST_INSERT_ID() to get the auto-incremented ID from the user row after you insert it. It then makes a copy of that id in @userId and uses it twice, to insert two rows into the child table. By using more INSERT INTO email statements, you could insert an arbitrary number of child rows for a single parent row.
Pro tip: SELECT MAX(id) FROM table is a bad, bad way to figure out the ID of the most recently inserted row. It's vulnerable to race conditions. So it will work fine until you start scaling up your application, then it will start returning the wrong values at random. That will ruin your weekends.
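For illustration, a hedged two-connection sketch of that race on the question's table (the name column and the id values are made up; run the two sessions side by side):
-- session A
INSERT INTO myTable (name) VALUES ('alice');   -- suppose this generates id 4
-- session B inserts before session A reads anything back
INSERT INTO myTable (name) VALUES ('bob');     -- generates id 5
-- session A again
SELECT MAX(ID) FROM myTable;   -- returns 5, bob's row, not the one session A inserted
SELECT LAST_INSERT_ID();       -- still returns 4, because it is per-connection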

LAST_INSERT_ID() has no relation to a specific table. Within the same connection, all tables share the same value.
Below is a demo:
mysql> create table t1(c1 int primary key auto_increment);
Query OK, 0 rows affected (0.11 sec)
mysql> create table t2(c1 int primary key auto_increment);
Query OK, 0 rows affected (0.06 sec)
mysql> insert into t1 values(null);
Query OK, 1 row affected (0.01 sec)
mysql> insert into t2 values(4);
Query OK, 1 row affected (0.00 sec)
mysql> insert into t2 values(null);
Query OK, 1 row affected (0.02 sec)
mysql> select last_insert_id() from t1;
+------------------+
| last_insert_id() |
+------------------+
|                5 |
+------------------+
1 row in set (0.00 sec)

I don't think this function does what you think it does. It returns the last id inserted on the current connection.
If you compare that to SELECT MAX(ID), which selects the highest ID irrespective of connection, be careful not to get them mixed up or you will get unexpected results.
As for why it is showing 5, it's probably because that was the last id to be inserted on your connection. I believe this value remains even if the record is removed; perhaps someone could confirm this.
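A quick sketch you could run on a scratch table to check that (table name is made up):
CREATE TABLE t (id INT PRIMARY KEY AUTO_INCREMENT);
INSERT INTO t VALUES (NULL);    -- generates id 1
SELECT LAST_INSERT_ID();        -- 1
DELETE FROM t WHERE id = 1;     -- the row is gone
SELECT LAST_INSERT_ID();        -- still 1; DELETE does not reset it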

Table-level triggers are what can come to the rescue here, e.g. a BEFORE INSERT trigger.

Maybe you should restart the database connection and then reconnect again to get fresh data.

Related

on duplicate key update id=last_insert_id(id) - did the INSERT actually happen?

I need to INSERT a new row into my table (with a unique field 'A'); if it already exists (duplicate field, insert fails), I just want to return the ID of the existing row.
This code works well:
insert into TABLE set A=1 on duplicate key update id=last_insert_id(id)
But now I have another problem: how do I know whether the returned ID belongs to a new (inserted) row or to an old one?
Yes, I can do "SELECT id WHERE A=1" beforehand, but it would overcomplicate the program code, require two steps, and just look ugly. Besides, in the future I may want to remove some UNIQUE indexes, and then I'd have to rewrite the program to change all the WHERE checks. Maybe there is a better solution?
[solved, see my answer]
Found this solution. It works in the console, but doesn't work in my program (must be a bug in the client) - so it will probably work fine for everyone (except me, sigh).
Just check the 'affected rows count' - it will be 1 for the new record and 0 for the old one
mysql> INSERT INTO EMAIL SET addr="test" ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id);
Query OK, 1 row affected (0.01 sec)    <----- 1 = INSERTed
mysql> SELECT LAST_INSERT_ID();   -- returns 1
mysql> INSERT INTO EMAIL SET addr="test" ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id);
Query OK, 0 rows affected (0.00 sec)   <----- 0 = OLD
mysql> SELECT LAST_INSERT_ID();   -- returns 1
UPD: it didn't work for me because my program kept sending the CLIENT_FOUND_ROWS flag when connecting to MySQL. Removed it, and now everything is fine!

Can MySQL transaction rollback without ROLLBACK query?

I'm working on a financial system and I've got a problem with the MySQL transactions.
The system is a simple stock exchange, where users can buy and sell virtual shares. To keep integrity in buy and sell process I use transactions.
The problem is that in some cases (I don't know what it depends on) some transactions are rolled back (or not committed), but subsequent queries are processed.
The process is as follows:
1. User wants to buy shares for 1000 USD.
2. In the orderbook there are 4 offers for 250 USD each.
3. START TRANSACTION
4. For every offer:
5. The script does an UPDATE query (moving USD from one user to another, and shares the opposite way), then INSERTs entries into the history tables.
6. Users pay a fee (UPDATE balances).
7. Repeat 5 and 6 for the next offer.
8. COMMIT
Now the key part - in some cases the changes from point 5 are not saved, but those from point 6 are (I can see that the fee was paid, but there is no transaction in the history).
I'm not using ROLLBACK during these transactions, and the script is not breaking (because in that case the fee wouldn't be paid either).
Is there any possibility that a transaction rolls back without a ROLLBACK query? Or can MySQL COMMIT only the few newest queries instead of all of them?
Transactions or not, it's the responsibility of your client code to verify that all your INSERT or UPDATE queries complete successfully, and otherwise either issue an explicit ROLLBACK or close the connection without COMMIT to trigger an implicit ROLLBACK. If any of them fails but your code goes on, those queries will not take effect (because they failed) but the rest will.
Here's a simplified example:
mysql> create table test (
-> id int(10) unsigned not null,
-> primary key (id)
-> );
Query OK, 0 rows affected (0.02 sec)
mysql> insert into test(id) values (1);
Query OK, 1 row affected (0.00 sec)
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> insert into test(id) values (2);
Query OK, 1 row affected (0.00 sec)
mysql> insert into test(id) values (-3);
ERROR 1264 (22003): Out of range value for column 'id' at row 1
We should have rolled back and aborted here, but we didn't.
mysql> insert into test(id) values (4);
Query OK, 1 row affected (0.00 sec)
mysql> commit;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from test;
+----+
| id |
+----+
| 1 |
| 2 |
| 4 |
+----+
3 rows in set (0.00 sec)
4 rows expected, got 3.
Other than that, there are many circumstances where you can get an unwanted COMMIT, but an unwanted ROLLBACK is something I'm not sure can happen unless you terminate a session with pending changes.
It's been a long time since the question was asked, but the problem actually was that I didn't check for SQL errors on every query.
At some points, when I should have rolled back the transaction, I didn't do it.
If you are looking for the answer: check again that you test ALL the queries in your transaction for successful execution, and don't trust that the framework you are using does it for you automatically (just check it again).
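If the checks cannot all live in the client, one server-side option (a sketch, not what the question actually used; table and column names are made up, and RESIGNAL requires MySQL 5.5+) is a stored procedure with an exit handler that rolls back on any SQL error:
DELIMITER $$
CREATE PROCEDURE transfer_example(IN buyer INT, IN seller INT, IN amount DECIMAL(10,2))
BEGIN
  -- if any statement below fails, undo everything and re-raise the error
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    ROLLBACK;
    RESIGNAL;
  END;
  START TRANSACTION;
  UPDATE balances SET usd = usd - amount WHERE user_id = buyer;
  UPDATE balances SET usd = usd + amount WHERE user_id = seller;
  INSERT INTO history (buyer_id, seller_id, amount) VALUES (buyer, seller, amount);
  COMMIT;
END$$
DELIMITER ;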

Mysql IN stopped working on 5.1

I am trying to do something in Mysql Server 5.1 on Windows.
I am positive this type of query worked in an older version of Mysql as I supplied it to a client previously without a problem.
Basically, a field in one of my tables contains several ids, such as 1,2,3,4,5.
The field is of type varchar.
I am trying to see if a value exists in the field by using an IN statement, like below. But it returns nothing.
What am I doing wrong? Is there a better way? Thanks.
mysql> create database testing;
Query OK, 1 row affected (0.00 sec)
mysql> use testing;
Database changed
mysql> create table table1(field1 char(20));
Query OK, 0 rows affected (0.01 sec)
mysql> create table table2(field2 char(20));
Query OK, 0 rows affected (0.00 sec)
mysql> insert into table1 values('1');
Query OK, 1 row affected (0.00 sec)
mysql> insert into table2 values('1,2,3');
Query OK, 1 row affected (0.00 sec)
mysql> select * from table1 where field1 in (select field2 from table2);
Empty set (0.00 sec)
From your query select * from table1 where field1 in (select field2 from table2), here is what I imagine is happening:
Dissecting the sub-query select field2 from table2, you will have:
field2
'1,2,3'
Then the main query will be (substitution):
select * from table1 where field1 in ('1,2,3');
Obviously it will return no rows since the only value that table1.field1 has is '1'. And '1' <> '1,2,3'.
Well, I bet you are looking for this: FIND_IN_SET
Sample query:
SELECT FIND_IN_SET('1', '1,2,3'); will return 1.
insert into table2 values('1,2,3');
most likely needs to be
insert into table2 values('1');
insert into table2 values('2');
insert into table2 values('3');
Then, your sub select select field2 from table2 returns ('1', '2', '3'), and the IN operator can be used to check if the result of the corresponding field from the main select is contained in this set.
According to the comments, the same schema seems to have worked before. I am not aware that the IN operator can be used like in the question, and splitting of column values into a row set seems to be non-trivial.
Using the FIND_IN_SET() function as proposed by @KaeL, the following query should work:
SELECT b.*
FROM table2 a
INNER JOIN table1 b ON FIND_IN_SET(b.field1, a.field2) > 0;
See also
Single MySQL field with comma separated values
Sample query on SQLFiddle
In any case, you should consider normalizing your schema - using string lists as values of single fields can usually be much better handled by a relational database when the separate values are stored in separate rows in a separate table.
http://sqlfiddle.com/#!2/797cc/3 shows a possible solution.
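As a hedged sketch of what that normalization could look like for this example (table and column names are made up):
CREATE TABLE table2_items (
  parent_id INT NOT NULL,
  item      CHAR(20) NOT NULL,
  PRIMARY KEY (parent_id, item)
);
-- the single value '1,2,3' becomes three rows
INSERT INTO table2_items (parent_id, item) VALUES (1, '1'), (1, '2'), (1, '3');
-- and the original IN query now works as intended
SELECT * FROM table1 WHERE field1 IN (SELECT item FROM table2_items);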

mysql select for delete

Edit:
I found a solution here http://mysql.bigresource.com/Track/mysql-8TvKWIvE/
Assuming the SELECT takes a long time to execute, will this lock the table for a long time?
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT foo FROM bar WHERE wee = 'yahoo!';
DELETE FROM bar WHERE wee = 'yahoo!';
COMMIT;
I wish to use criteria to select rows in MySQL, return them to my app as a result set, and then delete those rows. How can this be done? I know I can do the following, but it's too inefficient:
select * from MyTable t where _criteria_
//get the resultset and then
delete from MyTable t where t.id in(...result...)
Do I need to use a transaction? Is there a single query solution?
I needed to SELECT some rows by some criteria, do something with the data, and then DELETE those same rows atomically, that is, without deleting any rows that meet the criteria but were inserted after the SELECT.
Contrary to other answers, REPEATABLE READ is not sufficient. Refer to Consistent Nonlocking Reads. In particular note this callout:
The snapshot of the database state applies to SELECT statements within a transaction, not necessarily to DML statements. If you insert or modify some rows and then commit that transaction, a DELETE or UPDATE statement issued from another concurrent REPEATABLE READ transaction could affect those just-committed rows, even though the session could not query them.
You can try it yourself:
First create a table:
CREATE TABLE x (i INT NOT NULL, PRIMARY KEY (i)) ENGINE = InnoDB;
Start a transaction and examine the table (this will be called session 1 now):
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT * FROM x;
Start another session (session 2) and insert a row. Note this session is in auto commit mode.
INSERT INTO x VALUES (1);
SELECT * FROM x;
You will see your newly inserted row. Then back in session 1 again:
SELECT * FROM x;
DELETE FROM x;
COMMIT;
In session 2:
SELECT * FROM x;
You'll see that even though you get nothing from the SELECT in session 1, you delete one row. In session 2 you will see the table is empty at the end. Note the following output from session 1 in particular:
mysql> SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
Query OK, 0 rows affected (0.00 sec)
mysql> START TRANSACTION;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT * FROM x;
Empty set (0.00 sec)
/* --- insert in session 2 happened here --- */
mysql> SELECT * FROM x;
Empty set (0.00 sec)
mysql> DELETE FROM x;
Query OK, 1 row affected (0.00 sec)
mysql> COMMIT;
Query OK, 0 rows affected (0.06 sec)
mysql> SELECT * FROM x;
Empty set (0.00 sec)
This testing was done with MySQL 5.5.12.
For a correct solution:
1. Use the SERIALIZABLE transaction isolation level. However, note that session 2 will block on the INSERT.
2. It seems that SELECT ... FOR UPDATE will also do the trick. I have not studied the manual 100% in depth to understand this, but it worked when I tried it. The advantage is you don't have to change the transaction isolation level. Again, session 2 will block on the INSERT.
3. Delete the rows individually after the SELECT (a sketch follows below). Basically you'd have to include a unique column (the primary key would be good) in the SELECT and then use DELETE FROM x WHERE i IN (...), or something similar, where IN contains a list of keys from the SELECT's result set. The advantage is you don't need to use a transaction at all and session 2 will not be blocked at any time. The disadvantage is that you have more data to send back and forth to the SQL server. Also, I don't know if deleting the rows individually is as efficient as using the same WHERE clause as the original SELECT, but if the original SELECT's WHERE clause was complicated or slow, the individual deletion may well be faster, so that could be another advantage.
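A rough sketch of that last option against the question's table (assuming bar has a primary key id; the key values are made up):
SELECT id FROM bar WHERE wee = 'yahoo!';   -- the application reads the matching keys
-- ... the application builds the key list from the rows it actually received ...
DELETE FROM bar WHERE id IN (1, 2, 3);     -- deletes only rows the application has already seen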
To editorialize, this is one of those things that is so dangerous that even though it's documented it could almost be considered a "bug." But hey, the MySQL designers didn't ask me (or anyone else, apparently).
Do I need to use a transaction? Is there a single query solution?
Yes, you need to use a transaction. You cannot delete and select rows in a single query (i.e., there is no way to "return" or "select" the rows you have deleted).
You don't necessarily need to do the REPEATABLE READ option - I believe you could also select the rows FOR UPDATE, although this is a higher level of locking. REPEATABLE READ does seem to be the lowest level of locking you could use to execute this transaction safely. It happens to be the default for InnoDB.
How much this affects your table depends on whether you have an index on the wee column or not. Without one, I believe MySQL would have to lock writes to the entire table.
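For reference, a sketch of that locking variant on the question's table (assuming InnoDB):
START TRANSACTION;
SELECT foo FROM bar WHERE wee = 'yahoo!' FOR UPDATE;   -- locks the matching rows (without an index on wee, effectively the whole table)
DELETE FROM bar WHERE wee = 'yahoo!';                  -- removes exactly the rows just read and locked
COMMIT;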
Further reading:
Wikipedia - Isolation (database systems)
http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html
http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html
Do a SELECT statement. While looping through it, build a list string of unique IDs. Then pass this list back to MySQL using IN.
You could select your rows into a temporary table, then delete using the same criteria as your select. Since SELECT ... FOR UPDATE also returns a result set, you could turn the SELECT FOR UPDATE into a statement that populates a temporary table (MySQL has no SELECT INTO a table, but INSERT INTO ... SELECT ... FOR UPDATE works). Then delete your selected rows, either using your original criteria, or by using the data in the temporary table as the criteria.
Something like this (a sketch; the column types for the temporary table are assumed):
START TRANSACTION;
CREATE TEMPORARY TABLE tmp_table (a INT, b INT);
INSERT INTO tmp_table SELECT a, b FROM table_a WHERE a = 1 FOR UPDATE;
DELETE FROM table_a
  USING table_a JOIN tmp_table ON (table_a.a = tmp_table.a AND table_a.b = tmp_table.b);
COMMIT;
COMMIT;
Now your records are gone from the original table, but you also have a copy in your temporary table, which you can keep, or delete.
There is no single query solution. Use
select * from MyTable t where _criteria_
//get the resultset and then
delete from MyTable where _criteria_
Execute the SELECT statement with the WHERE clause and then use the same WHERE clause in the DELETE statement as well. Assuming there were no interim changes to the data, the same rows should be deleted.
EDIT: Yes, you could set this up as a single transaction so there's no modification to the tables while you're doing this.

MySQL auto increment

I have a table with an auto-increment field, and I need to transfer the table to another table in another database. Will the value for row 1 be 1, for row 2 be 2, etc.?
Also, in case the database gets corrupted and I need to restore the data, will the auto-increment be affected in some way? Will the values change? (E.g. if the first row has id (auto-inc) = 1, name = john, country = UK ... will the id field remain 1?) I am asking because other tables refer to this value, and all data will get out of sync if this field changes.
It sounds like you are trying to separately insert data into two separate databases in the same order, and using the auto-increment field to link the two rows. It seems you are basically asking, is it OK to rely on the auto-increment being the same in both databases if the data is inserted in the same order.
If so, the answer is no - you cannot rely on this behaviour. It is legitimate for the auto-increment to skip a value, for example see here.
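A small sketch of one way such a gap appears (with InnoDB, a rolled-back INSERT still consumes a value; the table name is made up):
CREATE TABLE gaps (id INT PRIMARY KEY AUTO_INCREMENT) ENGINE=InnoDB;
START TRANSACTION;
INSERT INTO gaps VALUES (NULL);   -- reserves id 1
ROLLBACK;                         -- the row is gone, but the counter has already moved on
INSERT INTO gaps VALUES (NULL);   -- gets id 2
SELECT * FROM gaps;               -- only the row with id 2; id 1 was skipped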
But maybe you are asking, can an auto-increment value suddenly change to another value after it is written and committed? No - they will not change in the future (unless of course you change them explicitly).
Does that answer your question? If not, perhaps you can explain your question again.
Transferring the data wouldn't be a problem, if you completely specify the auto_increment values. MySQL allows you to insert anything you want into an auto_increment field, but only does the actual auto_increment if the value you're inserting is 0 or NULL. At least on my 5.0 copy of MySQL, it'll automatically adjust the auto_increment value to take into account what you've inserted:
mysql> create table test (x int auto_increment primary key);
Query OK, 0 rows affected (0.01 sec)
mysql> insert into test (x) values (10);
Query OK, 1 row affected (0.00 sec)
mysql> insert into test (x) values (null);
Query OK, 1 row affected (0.00 sec)
mysql> insert into test (x) values (0);
Query OK, 1 row affected (0.00 sec)
mysql> insert into test (x) values (5);
Query OK, 1 row affected (0.00 sec)
mysql> select * from test;
+----+
| x |
+----+
| 5 | <--inserted '5' (#4)
| 10 | <--inserted '10' (#1)
| 11 | <--inserted 'null' (#2)
| 12 | <--inserted '0' (#3)
+----+
4 rows in set (0.00 sec)
You can also adjust the table's next auto_increment value as follows:
mysql> alter table test auto_increment=500;
Query OK, 4 rows affected (0.04 sec)
Records: 4 Duplicates: 0 Warnings: 0
mysql> insert into test (x) values (null);
Query OK, 1 row affected (0.00 sec)
mysql> select last_insert_id();
+------------------+
| last_insert_id() |
+------------------+
|              500 |
+------------------+
1 row in set (0.01 sec)
INSERT INTO ... SELECT should keep the same ids in the target table (MySQL does not support SELECT ... INTO a table; see the link below):
http://dev.mysql.com/doc/refman/5.1/en/ansi-diff-select-into-table.html
Using a MySQL backup will do this. If you create your own INSERT statements, make sure that you include your id field, and that value will be inserted as-is (it's not like MSSQL where you have to set identity_insert). One thing to watch for: if you generate DDL, it sometimes generates it "incorrectly" for your identity column (i.e. it states that the starting point is at your last identity value, which you may not want).
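A hedged sketch of such a transfer preserving the ids (database, table, and column names are made up):
INSERT INTO target_db.users (id, name, country)
SELECT id, name, country FROM source_db.users;
-- optionally move the counter on the target above the copied ids
ALTER TABLE target_db.users AUTO_INCREMENT = 1000;   -- pick any value above MAX(id)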