Auto increment has been reset back to 1 automatically - MySQL

I've just run into an issue which I'm not able to solve.
I have a database table project_queues which is used as a queue, where I store some records. When the records are processed, they are deleted.
Deletion is invoked by the Rails construct record.destroy in a loop, which issues a DELETE FROM project_queues WHERE id = ... against the MySQL database.
But now I've noticed that in the table project_queues the AUTO_INCREMENT id (primary key) was reset back to 1. (This damaged my references in the audit table: the same record now points to multiple different project queues.)
show create table project_queues;
CREATE TABLE `project_queues` (
`id` int(11) NOT NULL AUTO_INCREMENT,
...
...
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1
I do not TRUNCATE project_queues or drop and re-create the table anywhere in the code.
Has anybody run into a similar issue? I can't find any anomalies in the logs either.
I'm using Rails 5.2.3, MariaDB 10.1.
The connection from the application to the database enforces these SQL modes:
NO_AUTO_VALUE_ON_ZERO
STRICT_ALL_TABLES
NO_AUTO_CREATE_USER
NO_ENGINE_SUBSTITUTION
NO_ZERO_DATE
NO_ZERO_IN_DATE
ERROR_FOR_DIVISION_BY_ZERO
But I don't think these have anything to do with auto-increment.

Auto_increment can be reset by updates. Since you're using the table as a queue I suppose you don't make updates, but it's worth asking.
Also, which storage engine does the table use: MyISAM, InnoDB, something else?

Well, OK, I found the problem (it's a known issue, reported back in 2013).
Here is how to reproduce the problem.
# Your MariaDB Server version: 10.1.29-MariaDB MariaDB Server
# Engine InnoDB
create database ai_test;
use ai_test;
CREATE TABLE IF NOT EXISTS ai_test(id INT AUTO_INCREMENT PRIMARY KEY,
a VARCHAR(50));
show table status like 'ai_test' \G
> Auto_increment: 1
INSERT INTO ai_test(a) VALUES ('first');
INSERT INTO ai_test(a) VALUES ('second');
INSERT INTO ai_test(a) VALUES ('third');
show table status like 'ai_test' \G
> Auto_increment: 4
MariaDB [ai_test]> Delete from ai_test where a = 'first';
MariaDB [ai_test]> Delete from ai_test where a = 'second';
MariaDB [ai_test]> Delete from ai_test where a = 'third';
show table status like 'ai_test' \G
> Auto_increment: 4
# Restart Server
sudo service rh-mariadb101-mariadb stop
sudo service rh-mariadb101-mariadb start
show table status like 'ai_test' \G
> Auto_increment: 1
The underlying cause: InnoDB kept the AUTO_INCREMENT counter only in memory and recalculated it on startup as MAX(id) + 1, so a table that happens to be empty at restart starts counting from 1 again. I'll try to find a workaround for this problem, but I think it can cause havoc in many use cases that reference these ids from audit or archive tables.
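One possible workaround (a sketch, not a tested fix; the audits table and project_queue_id column are hypothetical stand-ins for wherever the highest issued id is still recorded): after each restart, bump the counter back above that value.
-- Restore the counter from the audit trail after a restart
-- (audits / project_queue_id are hypothetical names):
SELECT COALESCE(MAX(project_queue_id), 0) + 1 INTO @next FROM audits;
SET @sql := CONCAT('ALTER TABLE project_queues AUTO_INCREMENT = ', @next);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;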
References:
https://bugs.mysql.com/bug.php?id=64287
https://dev.mysql.com/worklog/task/?id=6204
https://openquery.com.au/blog/implementing-sequences-using-a-stored-function-and-triggers

Solved
I've upgraded from 10.1.29-MariaDB to 10.2.8-MariaDB.
Versions >= 10.2.4 persist the AUTO_INCREMENT value across restarts, which solves the reset.
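To verify, re-run the repro above after the upgrade; the expected result is that the counter now survives the restart:
show table status like 'ai_test' \G
> Auto_increment: 4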

Related

How do I add more members to my ENUM-type column in MySQL for a table of more than 40 million rows?

I have a table of 40 million rows and I wish to modify an enum column of a table in an Aurora MySQL RDS database (v5.6.10) to add more values. This table is frequently updated. Has anyone tried altering such a table before? If so, can you please elaborate on the experience?
Table Structure:
CREATE TABLE `tee_sizes` (
id bigint auto_increment,
customer_id bigint,
`tee-size` enum('small', 'large', 'x-large'),
created_at timestamp NOT NULL default CURRENT_TIMESTAMP(),
updated_at timestamp NOT NULL default CURRENT_TIMESTAMP() ON UPDATE CURRENT_TIMESTAMP(),
PRIMARY KEY(id)
) ENGINE=InnoDB AUTO_INCREMENT=36910751 DEFAULT CHARSET=utf8;
I wish to add 'xx-large' to the column tee-size.
Will there be a downtime while doing this?
MySQL 5.6 should allow this as InnoDB online DDL without any downtime on that table, and concurrent queries should still work on the table while altering (note that for the in-place algorithm the new value must be appended to the end of the ENUM list, as done here):
ALTER TABLE tee_sizes MODIFY COLUMN `tee-size` enum('small', 'large', 'x-large', 'xx-large'),
ALGORITHM=INPLACE, LOCK=NONE;
ALGORITHM=INPLACE, LOCK=NONE forces MySQL to execute the ALTER at the requested level of concurrency, without downtime.
If your MySQL version cannot execute it that way, the statement fails immediately, meaning the requested level of concurrency is not available and ALGORITHM=INPLACE, LOCK=NONE needs to be relaxed.
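If the in-place request is refused, a fallback that still permits reads might look like this (a sketch; ALGORITHM=COPY rebuilds the whole table, so expect it to take a while on 40 million rows):
ALTER TABLE tee_sizes MODIFY COLUMN `tee-size` enum('small', 'large', 'x-large', 'xx-large'),
ALGORITHM=COPY, LOCK=SHARED;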
Edited because of a comment asking:
"Wait... So does this force any locks? 'ALGORITHM=INPLACE, LOCK=NONE would force MySQL into executing (if allowed) without downtime; if your MySQL does not execute it, it means it can't be done using ALGORITHM=INPLACE, LOCK=NONE.' This statement is confusing."
No, it does not lock. Copied from the manual:
You can control aspects of a DDL operation using the ALGORITHM and LOCK clauses of the ALTER TABLE statement. These clauses are placed at the end of the statement, separated from the table and column specifications by commas. ... To avoid accidentally making the table unavailable for reads, writes, or both, specify a clause on the ALTER TABLE statement such as LOCK=NONE (permit reads and writes) or LOCK=SHARED (permit reads). The operation halts immediately if the requested level of concurrency is not available.

How to improve performance of bulk inserts in MySQL

Environment: Windows 10
Version: MySQL 5.7
RAM: 32 GB
IDE: Toad for MySQL
I have sufficient hardware, but the issue is the performance of inserts into a simple table that has no relationships. I need to have an index on the table.
Table structure:
CREATE TABLE `2017` (
`MOB_NO` bigint(20) DEFAULT NULL,
`CAF_SLNO` varchar(50) DEFAULT NULL,
`CNAME` varchar(58) DEFAULT NULL,
`ACT_DATE` varchar(200) DEFAULT NULL,
KEY `2017_index` (`MOB_NO`,`ACT_DATE`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
I am using the following to insert the records into the table. Without the index it took around 30 minutes, whereas with the index it has taken 22 hours and is still running.
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
LOAD DATA LOCAL INFILE 'D:/base/test/2017/2017.txt'
INTO TABLE `2017` COLUMNS TERMINATED BY '|';
commit;
I have seen suggestions to change the cnf file, but could not find one on my machine.
By adding the following lines to my.ini I was able to achieve it:
innodb_autoinc_lock_mode=2
sync_binlog=1
bulk_insert_buffer_size=512M
key_buffer_size=512M
read_buffer_size=50M
I also set innodb_flush_log_at_trx_commit=2; I have seen another link claiming that this alone can increase speed by up to 160x.
Resulting performance: from more than 24 hours down to 2 hours.
If you begin with an empty table, create it without any indexes; then, after fully populating the table, add the index. This is reported to be faster than inserting with the index already in place (a sketch follows the links below).
See:
MySQL optimizing INSERT speed being slowed down because of indices
Is it better to create an index before filling a table with data, or after the data is in place?
Possibly helpful: Create an index on a huge MySQL production table without table locking
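Following that advice with the table and file from the question (a sketch; timings will vary):
ALTER TABLE `2017` DROP KEY `2017_index`;   -- or create the table without the key
LOAD DATA LOCAL INFILE 'D:/base/test/2017/2017.txt'
INTO TABLE `2017` COLUMNS TERMINATED BY '|';
ALTER TABLE `2017` ADD KEY `2017_index` (`MOB_NO`,`ACT_DATE`);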

MySQL Mixed replication failed to replicate create table with auto_increment in composite key

When we try to create a table under master-master MySQL MIXED replication, with a composite key containing an AUTO_INCREMENT column, the table is created on the master but the statement fails on the slave.
Here is the error we got on slave side:
Error 'Incorrect table definition; there can be only one auto column and it must be defined as a key' on query. Default database: 'total_chamilo'. Query: 'CREATE TABLE `c_attendance_result` (
c_id INT NOT NULL,
id int NOT NULL auto_increment,
user_id int NOT NULL,
attendance_id int NOT NULL,
score int NOT NULL DEFAULT 0,
PRIMARY KEY (c_id, id)
) DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci'
The tables are MyISAM.
The MySQL version is 5.5.40-0+wheezy1-log.
Surprisingly, we have tables matching the same schema working on the same servers, but created under another replication mode (STATEMENT) and/or on a previous MySQL version.
Does anyone know a way to fix this, if possible without changing the original query, since it is part of a large dump full of this kind of statement?
thanks,
A.
That looks very much like the slave is not properly configured and is trying to use InnoDB instead of MyISAM. An InnoDB table with an AUTO_INCREMENT column requires at least one key in which the auto-increment column is the only or leftmost column; see the MySQL 5.5 reference manual. In your case the auto-increment column is the second column of the primary key, which MyISAM allows (it gives per-c_id numbering) but InnoDB does not.
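A quick way to confirm this on the slave (a sketch; note the CREATE TABLE in the dump carries no explicit ENGINE clause, so the slave's default engine decides what gets created):
SHOW VARIABLES LIKE '%storage_engine%';
-- MyISAM accepts an AUTO_INCREMENT column as the second part of a composite key
-- (it numbers rows per c_id); InnoDB rejects exactly this definition:
CREATE TABLE t (
c_id INT NOT NULL,
id INT NOT NULL AUTO_INCREMENT,
PRIMARY KEY (c_id, id)
) ENGINE=MyISAM;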

Simple MySQL INSERT query on simple InnoDB table takes 40+ms

I'm wondering what exactly causes insert queries on MySQL/InnoDB to take at least 40 ms on a machine with a fairly strong CPU. An "equivalent" query runs in under 10 ms on the same MyISAM table (the tables have no foreign keys). The timings are from the MySQL console.
This is an "as simple as possible" database structure for reproduction:
CREATE TABLE `test_table_innodb` (
`id` int NOT NULL AUTO_INCREMENT,
`int_column` int NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `test_table_myisam` (
`id` int NOT NULL AUTO_INCREMENT,
`int_column` int NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
I'm running the same query from the mysql console (which auto-commits transactions in the case of InnoDB). No other queries are executed on the machine at the time, and the results are:
mysql> insert into test_table_myisam (int_column) values (5);
Query OK, 1 row affected (0.00 sec)
mysql> insert into test_table_innodb (int_column) values (5);
Query OK, 1 row affected (0.06 sec)
Is transaction overhead what makes the query run that much longer against the InnoDB table? Or is it something else?
There are three aspects to be considered with each auto-committed INSERT.
ASPECT #1. Overhead
InnoDB supports MVCC and transaction isolation as an ACID-compliant storage engine. To accommodate this, a copy of each row is written into the Undo Tablespace section of the System Tablespace file ibdata1 before changes are committed. What gets written for an INSERT? A copy of a blank row; that way, a rollback simply removes the attempted INSERT. When an INSERT is committed, the copy of the blank row in the Undo Tablespace is expunged.
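That undo copy is what makes the following possible (a quick sketch against the table above):
START TRANSACTION;
INSERT INTO test_table_innodb (int_column) VALUES (7);
ROLLBACK;
SELECT * FROM test_table_innodb WHERE int_column = 7; -- returns no rows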
ASPECT #2. Clustered Index
Every InnoDB table stores its rows in a clustered index. When a table has no PRIMARY KEY (or suitable UNIQUE index), InnoDB generates an internal one called gen_clust_index; since your table has a PRIMARY KEY on id, the clustered index is built on id instead, and every INSERT has to maintain it.
ASPECT #3. Configuration
Believe it or not, there are times when MySQL 4.1 out-of-the-box is faster than MySQL 5.5. Sounds shocking, doesn't it? Percona actually benchmarked several versions of MySQL and found this to be the case.
I have written about this on DBA StackExchange before:
Why mysql 5.5 slower than 5.1 (linux,using mysqlslap) (Nov 24, 2011)
Query runs a long time in some newer MySQL versions (Oct 05, 2011)
Multi cores and MySQL Performance (Sep 20, 2011)
How do I properly perform a MySQL bake-off? (Jun 19, 2011)
The CPU is not the factor here; the factor is the disk.
In InnoDB every commit must be written to the redo log, so if the log shares a disk with the data, or the disk is fragmented or slow, you will see a big difference.
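A quick way to test whether the per-commit log flush is the culprit (a diagnostic sketch; the value 2 trades some durability for speed, and setting it globally needs admin privileges):
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
insert into test_table_innodb (int_column) values (5); -- re-time this
Alternatively, batch several inserts into one transaction so only one flush happens at COMMIT:
START TRANSACTION;
insert into test_table_innodb (int_column) values (5);
insert into test_table_innodb (int_column) values (6);
COMMIT;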

Delete single row from large MySQL table results in "lock timeout"

I'm running MySQL 5.0.22 and have a really unwieldy table containing approximately 5 million rows.
Some, but not all rows are referenced by a foreign key to another table.
All attempts to cull the unreferenced rows have failed so far, resulting in lock-timeouts every time.
Copying the rows I want to an alternate table also failed with lock-timeout.
Suspiciously, even a statement that should finish instantaneously, like the one below, also fails with "lock timeout":
DELETE FROM mytable WHERE uid_pk = 1 LIMIT 1;
...it's at this point that I've run out of ideas.
Edit: For what it's worth, I've been working through this on my dev system, so only I am actually using the database at the moment; there shouldn't be any locking going on outside of the SQL I'm running.
Any MySQL gurus out there have suggestions on how to tame this rogue table?
Edit #2: As requested, the table structure:
CREATE TABLE `tunknowncustomer` (
`UID_PK` int(11) NOT NULL auto_increment,
`UNKNOWNCUSTOMERGUID` varchar(36) NOT NULL,
`CREATIONDATE` datetime NOT NULL,
`EMAIL` varchar(100) default NULL,
`CUSTOMERUID` int(11) default NULL,
PRIMARY KEY (`UID_PK`),
KEY `IUNKNOWCUST_CUID` (`CUSTOMERUID`),
KEY `IUNKNOWCUST_UCGUID` (`UNKNOWNCUSTOMERGUID`),
CONSTRAINT `tunknowncustomer_ibfk_1` FOREIGN KEY (`CUSTOMERUID`) REFERENCES `tcustomer` (`UID_PK`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8$$
Note, attempting to drop the FK also times out.
I had the same problem with an InnoDB table; OPTIMIZE TABLE corrected it.
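For reference, that is (a sketch; OPTIMIZE TABLE rebuilds an InnoDB table, so it needs time and free disk space proportional to the table, and it can hit the same lock timeouts if something else holds locks):
OPTIMIZE TABLE tunknowncustomer;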
Ok, I finally found an approach that worked to trim the unwanted rows from my large InnoDB table! Here's how I did it:
Stopped using MySQL Workbench (it has a hard-coded execution timeout of 30 seconds)
Opened a command prompt
Renamed the "full" table using ALTER TABLE
Created an empty table using the original table name and structure
Rebooted MySQL
Turned OFF 'autocommit' with SET AUTOCOMMIT = 0
Deleted a limited number of rows at a time, ramping up my limit after each success
Did a COMMIT in between delete statements, since turning off autocommit left me inside one large transaction
The whole effort looked somewhat like this:
ALTER TABLE `ep411`.`tunknowncustomer` RENAME TO `ep411`.`tunknowncustomer2`;
...strangely enough, renaming the table was the only ALTER TABLE command that would finish right away.
delimiter $$
CREATE TABLE `tunknowncustomer` (
...
) ENGINE=InnoDB DEFAULT CHARSET=utf8$$
...then a restart, just in case my previous failed attempts could block any new work...
SET AUTOCOMMIT = 0;
delete from tunknowncustomer2 where customeruid is null limit 1000;
delete from tunknowncustomer2 where customeruid is null limit 100000;
commit;
delete from tunknowncustomer2 where customeruid is null limit 1000000;
delete from tunknowncustomer2 where customeruid is null limit 1000000;
commit;
...Once I got to deleting 100k rows at a time, InnoDB's execution time dropped with each successful command. I assume InnoDB starts doing read-aheads on large scans, and doing a commit resets the read-ahead data, so I spaced out the COMMITs to every 2 million rows until the job was done.
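The same chunked-delete idea can be wrapped in a loop so it runs unattended (a sketch along the same lines, not the exact commands I ran; adjust the batch size to taste):
delimiter $$
CREATE PROCEDURE purge_unknown()
BEGIN
REPEAT
DELETE FROM tunknowncustomer2 WHERE customeruid IS NULL LIMIT 100000;
SET @n := ROW_COUNT();   -- rows deleted by the last statement
COMMIT;
UNTIL @n = 0 END REPEAT;
END$$
delimiter ;
CALL purge_unknown();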
I wrapped up the task by copying the remaining rows into my "empty" clone table, then dropping the old (renamed) table.
Not a graceful solution, and it doesn't address why deleting even a single row from a large table should fail, but at least I got the result I was looking for!