In my MySQL table I've created an ID column which I'm hoping to auto-increment in order for it to be the primary key.
I've created my table:
CREATE TABLE `test` (
`id` INT( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`name` VARCHAR( 50 ) NOT NULL ,
`date_modified` DATETIME NOT NULL ,
UNIQUE (
`name`
)
) TYPE = INNODB;
Then I inserted my records:
INSERT INTO `test` ( `id` , `name` , `date_modified` )
VALUES (
NULL , 'TIM', '2011-11-16 12:36:30'
), (
NULL , 'FRED', '2011-11-16 12:36:30'
);
I'm expecting the IDs for the above to be 1 and 2 (respectively), and so far this is true.
However when I do something like this:
insert into test (name) values ('FRED')
on duplicate key update date_modified=now();
and then insert a new record, I expect its ID to be 3; however, it comes out as 4, skipping over 3.
Normally this wouldn't be an issue, but I'm working with millions of records that receive thousands of updates every day, and I don't want to have to worry about running out of IDs simply because a ton of numbers are being skipped.
Any clue as to why this is happening?
MySQL version: 5.1.44
Thank you
My guess is that the INSERT itself kicks off the code that generates the next ID number. When the duplicate key is detected, and ON DUPLICATE KEY UPDATE is executed, the ID number is abandoned. (No SQL dbms guarantees that automatic sequences will be without gaps, AFAIK.)
MySQL docs say
In general, you should try to avoid using an ON DUPLICATE KEY UPDATE
clause on tables with multiple unique indexes.
That page also says
If a table contains an AUTO_INCREMENT column and INSERT ... ON
DUPLICATE KEY UPDATE inserts or updates a row, the LAST_INSERT_ID()
function returns the AUTO_INCREMENT value.
which stops far short of describing the internal behavior I guessed at above.
Can't test here; will try later.
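If you want to check the guess yourself, something like the following (a sketch against the test table from the question; SHOW TABLE STATUS reports the counter) should show the counter advancing even though the row is only updated:
-- rerun the duplicate insert; the row is updated, no new id is handed out
INSERT INTO test (name, date_modified) VALUES ('FRED', NOW())
ON DUPLICATE KEY UPDATE date_modified = NOW();
-- the Auto_increment value here will still have advanced past the next expected id
SHOW TABLE STATUS LIKE 'test';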
Is it possible to change your key to BIGINT UNSIGNED? 18,446,744,073,709,551,615 is a lot of records, which would put off running out of IDs for a very long time.
Found this in the MySQL manual: http://dev.mysql.com/doc/refman/5.1/en/example-auto-increment.html
Use a large enough integer data type for the AUTO_INCREMENT column to hold the
maximum sequence value you will need. When the column reaches the upper limit of
the data type, the next attempt to generate a sequence number fails. For example,
if you use TINYINT, the maximum permissible sequence number is 127.
For TINYINT UNSIGNED, the maximum is 255.
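Applied to the test table from the first question, the suggested change could look something like this (a sketch; existing id values are preserved):
-- widen the auto-increment primary key so skipped numbers never threaten exhaustion
ALTER TABLE test
  MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;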
More reading here: http://dev.mysql.com/doc/refman/5.6/en/information-functions.html#function_last-insert-id. It could be inferred that the failed insert into a transactional table is rolled back, and as the manual says, "LAST_INSERT_ID() is not restored to that before the transaction".
As a possible solution, what about using a separate table to generate the IDs and then inserting them into your main table as the PK using LAST_INSERT_ID()?
From the manual:
Create a table to hold the sequence counter and initialize it:
mysql> CREATE TABLE sequence (id INT NOT NULL);
mysql> INSERT INTO sequence VALUES (0);
Use the table to generate sequence numbers like this:
mysql> UPDATE sequence SET id=LAST_INSERT_ID(id+1);
mysql> SELECT LAST_INSERT_ID();
The UPDATE statement increments the sequence counter and causes the next call to
LAST_INSERT_ID() to return the updated value. The SELECT statement retrieves that
value. The mysql_insert_id() C API function can also be used to get the value.
See Section 20.9.3.37, “mysql_insert_id()”.
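Tied back to the test table from the question, a minimal sketch of the idea (both statements must run in the same session, since LAST_INSERT_ID() is per-connection) might look like:
-- generate the next id and remember it in LAST_INSERT_ID()
UPDATE sequence SET id = LAST_INSERT_ID(id + 1);
-- use the generated value as the primary key of the main table
INSERT INTO test (id, name, date_modified)
VALUES (LAST_INSERT_ID(), 'TIM', NOW());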
It's actually a bug, as you can see here: http://bugs.mysql.com/bug.php?id=26316
But, apparently, they fixed it in 5.1.47, and it was classified as an InnoDB plugin problem.
A duplicate report of the same problem can be seen here: http://bugs.mysql.com/bug.php?id=53791, which references the first bug mentioned in this answer.
Related
I recently encountered an error in my application with concurrent transactions. Previously, auto-incrementing for a compound key was handled by the application itself in PHP. However, as I mentioned, the id got duplicated, and all sorts of issues arose which I painstakingly fixed by hand afterward.
Now I have read about related issues and found suggestions to use a trigger.
So I am planning on implementing a trigger somewhat like this.
DELIMITER $$
CREATE TRIGGER auto_increment_my_table
BEFORE INSERT ON my_table FOR EACH ROW
BEGIN
SET NEW.id = (SELECT MAX(id) + 1 FROM my_table WHERE type = NEW.type);
END $$
DELIMITER ;
But my doubt regarding concurrency still remains. What if this trigger is executed concurrently and both executions get the same MAX(id) when querying?
Is this the correct way to handle my issue or is there any better way?
An example of how to solve auto-incrementing within a compound index.
CREATE TABLE test ( id INT,
type VARCHAR(192),
value INT,
PRIMARY KEY (id, type) );
-- create additional service table which will help
CREATE TABLE test_sevice ( type VARCHAR(192),
id INT AUTO_INCREMENT,
PRIMARY KEY (type, id) ) ENGINE = MyISAM;
-- create trigger which will generate id value for new row
CREATE TRIGGER tr_bi_test_autoincrement
BEFORE INSERT
ON test
FOR EACH ROW
BEGIN
INSERT INTO test_sevice (type) VALUES (NEW.type);
SET NEW.id = LAST_INSERT_ID();
END
db<>fiddle here
creating a service table just to auto increment a value seems less than ideal for me. – Mohamed Mufeed
This table is extremely tiny; at any time you may delete all records except the one with the largest auto-incremented value in each group. – Akina
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=61f0dc36db25dd5f0cf4647d8970cdee
You may schedule removal of the excess rows (for example, daily) in a service event procedure.
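A sketch of the cleanup Akina describes, keeping only the row with the largest id per type in the service table (table and column names follow the answer above):
-- remove all but the highest-id row for each type; safe to run at any time
DELETE s FROM test_sevice AS s
JOIN (SELECT type, MAX(id) AS max_id
      FROM test_sevice
      GROUP BY type) AS m
  ON m.type = s.type AND s.id < m.max_id;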
I have managed to solve this issue.
The answer was somewhat in the direction of Akina's answer, but not exactly.
The way I solved it did indeed involve an additional table, but not in the way he suggested.
I created an additional table to store meta data about transactions.
E.g. I had a journals table like this:
CREATE TABLE `journals` (
`id` bigint NOT NULL AUTO_INCREMENT,
`type` smallint NOT NULL DEFAULT '0',
`trans_no` bigint NOT NULL DEFAULT '0',
PRIMARY KEY (`id`),
KEY `transaction` (`type`,`trans_no`)
)
So I created a meta_journals table like this
CREATE TABLE `meta_journals` (
`type` smallint NOT NULL,
`next_trans_no` bigint NOT NULL,
PRIMARY KEY (`type`)
)
and seeded it with all the different types of journals and the next sequence number.
And whenever I insert a new transaction into the journals, I make sure to increment the next_trans_no of the corresponding type in the meta_journals table. This increment is issued inside the same database transaction, i.e. between the same BEGIN and COMMIT.
This allowed me to rely on the exclusive lock the UPDATE statement acquires on the corresponding row of the meta_journals table. So when two inserts into the journals are issued concurrently, one has to wait until the lock acquired by the other transaction is released by its COMMIT.
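A minimal sketch of the pattern described above (the journal type 1, the session variable @next and the exact read-back of the counter are illustrative, not the exact code used):
START TRANSACTION;
-- the UPDATE takes an exclusive row lock on this type, serialising concurrent inserts
UPDATE meta_journals SET next_trans_no = next_trans_no + 1 WHERE type = 1;
-- read the freshly incremented counter inside the same transaction
SELECT next_trans_no INTO @next FROM meta_journals WHERE type = 1;
INSERT INTO journals (type, trans_no) VALUES (1, @next);
COMMIT;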
I have a table where measurements of a sensor are saved. A row contains the value of the measurement, the id (pk and auto increment) and a random number = num (about 10 digits long or even longer).
CREATE TABLE `table` (
`id` int(10) UNSIGNED NOT NULL,
`value` float NOT NULL,
`num` int(10) UNSIGNED NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
Now after some weeks/months the table could contain thousands and thousands of rows.
My Question:
My system requires that the random number is unique (two or more measurements/rows with the same random number are not acceptable).
Now I have done some research and there is the neat INSERT IGNORE statement.
But I'm not sure if it's smart to use it in my case: there might be very many rows after some time, and checking every row in the table to see whether its random number matches the one about to be inserted could be overkill and drastically impact performance.
Any thoughts?
Use the INSERT IGNORE command rather than the INSERT command. If a record doesn't duplicate an existing record, then MySQL inserts it as usual. If the record is a duplicate, then the IGNORE keyword tells MySQL to discard it silently without generating an error.
Also add a UNIQUE constraint on the random-number column. To improve performance when you retrieve data from the table, create an index for it.
https://www.w3schools.com/sql/sql_create_index.asp
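Put together for the table in the question, that could look roughly like this (a sketch; the index name uq_num and the sample values are illustrative, and id is assumed to be the AUTO_INCREMENT primary key as described):
-- enforce uniqueness of the random number; INSERT IGNORE then needs no manual check
ALTER TABLE `table` ADD UNIQUE KEY uq_num (num);
-- a duplicate num is silently skipped instead of raising an error
INSERT IGNORE INTO `table` (value, num) VALUES (23.5, 1234567890);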
What does AUTO_INCREMENT=535 actually mean or do? I have seen this used when creating tables as shown below, but never knew what it does or is used for.
Create Table:
CREATE TABLE `my_table` (
`entry_id` int(11) NOT NULL auto_increment,
`address` varchar(512) NOT NULL,
PRIMARY KEY(entry_id)
) ENGINE=InnoDB AUTO_INCREMENT=535 DEFAULT CHARSET=utf8
An auto-increment field allows the records in a table to be numbered automatically, usually serving as a unique key.
A table definition with AUTO_INCREMENT=535 means that the next auto-generated key will start from 535.
This usually happens when you take a backup of an existing database, but it can also be used in special cases to start the index at a higher value.
It tells the auto_increment counter where to start, for example if you want to reserve a range of IDs for some dedicated purpose.
The AUTO_INCREMENT attribute can be used to generate a unique identity for new rows.
You can use a pair of statements, DROP TABLE and CREATE TABLE, to reset the auto-increment column. Like the TRUNCATE TABLE statement, this removes all the data and resets the auto-increment counter.
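For example (a sketch using the my_table definition above):
-- removes all rows and resets the counter, so the next insert gets id 1
TRUNCATE TABLE my_table;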
If no value is specified for the AUTO_INCREMENT column, MySQL assigns sequence numbers automatically.
You can also explicitly assign 0 to the column to generate sequence numbers. If the column is declared NOT NULL, it is also possible to assign NULL to the column to generate sequence numbers.
You can retrieve the most recent AUTO_INCREMENT value with the LAST_INSERT_ID() function.
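A short sketch of those points against the my_table example above (the address values are illustrative):
-- 0 and NULL both trigger auto-generation (0 works unless the NO_AUTO_VALUE_ON_ZERO SQL mode is set)
INSERT INTO my_table (entry_id, address) VALUES (0, 'first street');
INSERT INTO my_table (entry_id, address) VALUES (NULL, 'second street');
-- returns the id generated by the last insert in this session
SELECT LAST_INSERT_ID();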
To start with an AUTO_INCREMENT value other than 1, you can set that value with CREATE TABLE or ALTER TABLE, like this:
mysql> ALTER TABLE tbl AUTO_INCREMENT = 100;
The AUTO_INCREMENT interval value is controlled by the MySQL server variable auto_increment_increment and applies globally. To change this to a number different from the default of 1, use the following command in MySQL:
mysql> SET @@auto_increment_increment = [interval number];
where [interval number] is the interval value you want to use. So, if we want to set the interval to be 5, we would issue the following command:
mysql> SET @@auto_increment_increment = 5;
Reference:
http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html
I have a table called url_info, and its structure is:
url_info:
url_id ( auto_increment, primary key )
url ( unique,varchar(500) )
When I insert into the table like this:
INSERT INTO url_info(url) VALUES('Tom');
INSERT INTO url_info(url) VALUES('Jerry');
The output is:
1 Tom
2 Jerry
When I insert like this
INSERT INTO url_info(url) VALUES('Tom');
INSERT INTO url_info(url) VALUES('Tom');
INSERT INTO url_info(url) VALUES('Jerry');
The output is
1 Tom
3 Jerry
The auto-increment id is incremented even when I try to insert a duplicate entry. I have also tried INSERT IGNORE.
How to prevent it from incrementing when I try to insert a duplicate entry?
It's probably worth creating a stored procedure to insert what you want into the table. In the stored procedure, first check which items are already in the table; if the row you're trying to insert already exists, the procedure should not even attempt the insert.
I.e. the procedure needs to contain something like this:
IF NOT EXISTS(SELECT url_id FROM url_info WHERE url = 'Tom' LIMIT 1) THEN
    INSERT INTO url_info(url) VALUES('Tom');
END IF;
So, in your stored procedure, it would look like this (assuming the arguments/variables have been declared)
IF NOT EXISTS(SELECT url_id FROM url_info WHERE url = @newUrl LIMIT 1) THEN
    INSERT INTO url_info(url) VALUES(@newUrl);
END IF;
This is expected behaviour in InnoDB. The reason is that they want to let go of the auto_increment lock as fast as possible to improve concurrency. Unfortunately this means they increment the AUTO_INCREMENT value before resolving any constraints, such as UNIQUE.
You can read more about the idea in the manual under AUTO_INCREMENT Handling in InnoDB, but unfortunately the manual is also buggy there and doesn't explain why your simple insert gives non-consecutive values.
If this is a real problem for you and you really need consecutive numbers, consider setting the innodb_autoinc_lock_mode option to 0 in your server, but this is not recommended as it will have severe effects on your database (you cannot do any inserts concurrently).
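Note that innodb_autoinc_lock_mode cannot be changed at runtime; it has to be set at server startup (e.g. in my.cnf). You can check the current value with:
-- 0 = traditional, 1 = consecutive (the default in 5.1), 2 = interleaved
SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';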
The auto_increment value is updated by the engine. This is done before checking whether a value is unique or not, and the operation cannot be rolled back to restore the former auto_increment value.
Hence, no, it will not resume from the last value you saw.
Losing some intermediate values of the auto_increment field is not really an issue.
The maximum value you can store in a SIGNED INT field is 2^31-1, which equals 2,147,483,647. Read aloud, that is more than 2 billion.
I don't think that is too small to suit your requirement.
CREATE TABLE `url_info` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`url` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=4 ;
When I execute:
INSERT INTO url_info(url) VALUES('Tom');
INSERT INTO url_info(url) VALUES('Tom');
INSERT INTO url_info(url) VALUES('Jerry');
I get:
Make sure your ID column is UNIQUE too.
As the manual says:
A UNIQUE index creates a constraint such that all values in the index
must be distinct. An error occurs if you try to add a new row with a
key value that matches an existing row. This constraint does not apply
to NULL values except for the BDB storage engine. For other engines, a
UNIQUE index permits multiple NULL values for columns that can contain
NULL. If you specify a prefix value for a column in a UNIQUE index,
the column values must be unique within the prefix.
How to have only 3 rows in the table and only update them?
I have the settings table and at first run there is nothing so I want to insert 3 records like so:
id | label  | Value | desc
---|--------|-------|-----
 1 | start  | 10    | 0
 2 | middle | 24    | 0
 3 | end    | 76    | 0
After this, from a PHP script, I need to update these settings with one query.
I have researched REPLACE INTO, but I end up with duplicate rows in the DB.
Here is my current query:
$query_insert=" REPLACE INTO setari (`eticheta`, `valoare`, `disabled`)
VALUES ('mentenanta', '".$mentenanta."', '0'),
('nr_incercari_login', '".$nr_incercari_login."', '0'),
('timp_restrictie_login', '".$timp_restrictie_login."', '0')
";
Any ideas?
Here is the CREATE TABLE statement, just so you can check whether I'm missing something.
CREATE TABLE `setari` (
`id` int(10) unsigned NOT NULL auto_increment,
`eticheta` varchar(200) NOT NULL,
`valoare` varchar(250) NOT NULL,
`disabled` tinyint(1) unsigned NOT NULL default '0',
`data` datetime default NULL,
`cod` varchar(50) default NULL,
PRIMARY KEY (`eticheta`,`id`,`valoare`),
UNIQUE KEY `id` (`eticheta`,`id`,`valoare`)
) ENGINE=MyISAM
As explained in the manual, you need to create a UNIQUE index on (label,value) or (label,value,desc) for REPLACE INTO to determine uniqueness.
What you want is the 'ON DUPLICATE KEY UPDATE' syntax. Read through it for the full details, but essentially you need a unique or primary key on one of your fields; then you write a normal insert query and append that clause (along with what you actually want to update) to the end. The engine then tries to insert the row, and when it hits a key that is already present, it knows to just update the fields you list with the new information.
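For the setari table that could look roughly like this (a sketch: it assumes eticheta alone is made UNIQUE so duplicates are detected per setting name, and the literal values stand in for the PHP variables from the question):
-- make the setting name the duplicate-detection key
ALTER TABLE setari ADD UNIQUE KEY uq_eticheta (eticheta);

INSERT INTO setari (eticheta, valoare, disabled)
VALUES ('mentenanta', '1', '0'),
       ('nr_incercari_login', '3', '0'),
       ('timp_restrictie_login', '30', '0')
ON DUPLICATE KEY UPDATE valoare = VALUES(valoare);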
I simply skip the headache and use a temporary table. Quick and clean.
SQL Server allows you to SELECT INTO a non-existent temp table, creating it for you. MySQL, however, requires you to create the temp table first and then insert into it.
1.
Create empty temp table.
CREATE TEMPORARY TABLE IF NOT EXISTS insertsetari
SELECT eticheta, valoare, disabled
FROM setari
WHERE 1=0
2.
Insert data into temp table.
INSERT INTO insertsetari
VALUES
('mentenanta', '".$mentenanta."', '0'),
('nr_incercari_login', '".$nr_incercari_login."', '0'),
('timp_restrictie_login', '".$timp_restrictie_login."', '0')
3.
Remove rows in temp table that are already found in target table.
DELETE a FROM insertsetari AS a INNER JOIN setari AS b
WHERE a.eticheta = b.eticheta
AND a.valoare = b.valoare
AND a.disabled = b.disabled
4.
Insert temp table residual rows into target table.
INSERT INTO setari (eticheta, valoare, disabled)
SELECT eticheta, valoare, disabled FROM insertsetari
5.
Cleanup temp table.
DROP TEMPORARY TABLE insertsetari
Comments:
You should avoid replacing when the new data and the old data are the same. Replacing should only be for situations where there is a high probability of detecting key values that are the same while the non-key values are different.
Placing data into a temp table allows the data to be massaged, transformed and modified easily before inserting into the target table.
Deleting rows from the temp table is faster.
If anything goes wrong, the temp table gives you an additional debugging stage to find out what went wrong.
You should consider doing it all in a single transaction.