"Table already exists" when changing PK autoincrement in MySQL - mysql

I am quite new to MySQL and I have encountered a problem that I find quite puzzling. If I create a table with MySQL Workbench, when I set the PK I can choose whether it auto-increments or not, as one would expect. However, if I change my mind later on, once the table has been created, I can no longer alter the auto-increment flag: MySQL tells me that the "table already exists". That happens even if the table is empty.
The auto-generated SQL is as follows:
ALTER TABLE tablename
CHANGE COLUMN `ID` `ID` INT(11) NOT NULL AUTO_INCREMENT ;
and it fails with the error stated above. I have tried changing the algorithm and lock type, to no avail.
This does not happen in T-SQL or Oracle, for instance, so I fail to see a reason why it should fail in MySQL. Is there any way to fix this without having to drop and re-create the table?
Thanks.

From experience, all the GUIs get a bit confused when you start changing primary keys; the number of error messages I've seen from SQL Server...
You don't need to drop the whole table, but it might be easiest to drop and then re-create the offending column.
Also, check out the MySQL dev docs, but ALTER with CHANGE or MODIFY COLUMN are the two I'd go for. The column name appears twice because CHANGE COLUMN takes both the old and the new name even when you aren't renaming; MODIFY COLUMN avoids the repetition.

Ok, I discovered the culprit thanks to dbForge Studio. The same thing happens there, but this time the error is more explicit: I cannot change the auto-increment flag apparently because it is used as a foreign key on another table. I deleted the FK and then I was able to set the auto-increment.
Thank you all who helped me, I have learned some new things thanks to your comments.
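For anyone who runs into the same error, the sequence this resolution amounts to looks roughly like the following sketch (the child table, column, and constraint names are placeholders for illustration):
-- drop the foreign key on the referencing (child) table first
ALTER TABLE child_table DROP FOREIGN KEY fk_child_tablename;

-- the auto-increment change should now go through
ALTER TABLE tablename
  CHANGE COLUMN `ID` `ID` INT(11) NOT NULL AUTO_INCREMENT;

-- re-create the foreign key afterwards
ALTER TABLE child_table
  ADD CONSTRAINT fk_child_tablename
  FOREIGN KEY (tablename_id) REFERENCES tablename (`ID`);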

Related

Trying to convert MyISAM to InnoDB, result into error

I am trying to convert a very big table to InnoDB, but it throws an error.
Here is the screenshot of the issue which came after running for 5 minutes.
UPDATE: This is a live production table into which live data is still coming. It seems like the auto_increment column is causing the issue: by the time the InnoDB engine is applied, new records have arrived and the auto_increment has increased again.
Did you check the official MySQL documentation? There is one useful page that may help.
https://dev.mysql.com/doc/refman/8.0/en/converting-tables-to-innodb.html
Maybe just try a plain ALTER TABLE your_table ENGINE=InnoDB;
I mean, without specifying the auto increment at all.
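For reference, a sketch of that plain conversion; note that on a big, live table it still rebuilds the whole table, so expect it to take a while:
-- straight engine conversion, without touching AUTO_INCREMENT at all
ALTER TABLE your_table ENGINE=InnoDB;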

MySQL Create Table Statement Strange Errors

I am trying to run some basic CREATE TABLE statements for my Databases course project and am getting some strange errors.
When I create the table Manuf it runs fine, but when I try to create the next table, Order, using the same syntax, it does not work.
Also, when I try to create this table, Items, I get an errno: 150. I believe this has to do with my foreign key creation, but I am not exactly sure. Here is a screenshot of that.
I am fairly new to using MySQL so any advice would be greatly appreciated, thank you.
The error on the Order table is caused by ORDER being a reserved word. You can specify it as `Order` with the backticks, but it's better if you choose a different name altogether.
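For illustration, both of the following parse, but the second avoids the reserved word altogether (the column is made up):
-- works, but every later query has to keep quoting the name with backticks
CREATE TABLE `Order` (id INT PRIMARY KEY);

-- simpler: pick a name that is not reserved
CREATE TABLE Orders (id INT PRIMARY KEY);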
The error 150 is related to the foreign key. The keys must be absolutely identical - the exact same definition, or the FK will fail with error 150.
Also, there must be an available index with that key definition or one compatible (see Kai Baku's example in the comment on the MySQL manual page). The same fields indexed in a different order will fail.
To begin with, check how those keys are defined in the origin tables. For example:
test1 varchar(50) not null
test2 varchar(50)
will not be compatible. I think that even a different collation is enough to throw the FK off kilter (but this I haven't checked; the rest I'm sure of, from my own bitter experience).
UPDATE: I forgot to mention, if you use InnoDB tables and issue the SHOW ENGINE INNODB STATUS, the blurb that comes out will contain a much better explanation of why the FK failed, somewhere about one third from top.
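To make the error 150 requirements concrete, here is a minimal sketch with hypothetical column names; the point is that the referencing column matches the referenced one exactly and that the referenced column is indexed (here, as the primary key):
CREATE TABLE Manuf (
  name VARCHAR(50) NOT NULL,
  PRIMARY KEY (name)
) ENGINE=InnoDB;

-- the FK column matches the referenced column exactly: same type, same length,
-- same character set/collation (and, per the advice above, same nullability)
CREATE TABLE Items (
  item_id INT NOT NULL,
  manuf_name VARCHAR(50) NOT NULL,
  PRIMARY KEY (item_id),
  FOREIGN KEY (manuf_name) REFERENCES Manuf (name)
) ENGINE=InnoDB;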

MySQL: Duplicate entry for key (Formerly "What does 'idx' mean?")

Update: After a lot of painful research, I've discovered what the problem actually is and updated the title to make a little more sense. I'll put my answer below.
Unfortunately, I'm not able to copy the query that's giving me this problem because it belongs to my company, so I'll have to keep my question very specific.
I have an INSERT INTO ... SELECT query that's returning this error:
Duplicate entry <gobbledygook> for key 'idx_<tablename>'
The tablename at the end is the correct name, but it has this weird idx_ prefix before it that's not a part of any of the tables I'm currently working with. What is that idx? Does it have something to do with the information_schema?
Update: Apparently, I need to clarify something: There is no column with idx in the name.
Numerous web searches didn't reveal much when I was trying to solve this problem, but I did finally figure it out (and JohnH's answer helped me do this).
I finally discovered that "idx" is not something created by MySQL, but a name that someone else gave to the index. I have never come across a uniqueness constraint on an index that wasn't a key before, so I didn't know where that error came from.
This command showed all of the indices:
SHOW INDEX FROM <tablename>
And I was able to see that Non_unique was set to 0 for this key, meaning the index was unique.
To fix the problem, I was able to simply drop the index and recreate it, without adding a uniqueness constraint.
DROP INDEX idx_<tablename> ON <tablename>;
ALTER TABLE <tablename> ADD INDEX idx_<tablename> (<comma-separated columns>);
Whether or not removing the uniqueness constraint is a good idea remains to be seen, but it's also beyond the scope of this question.
"idx_" is a common prefix for index names.
You may have an index that does not allow duplicate values for the column values referenced by that index.
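For example, nothing stops an index that uses the common idx_ prefix from being unique (the names below are placeholders):
-- despite the generic-looking name, this index enforces uniqueness, so an
-- INSERT INTO ... SELECT producing duplicate (col_a, col_b) pairs fails on it
CREATE UNIQUE INDEX idx_mytable ON mytable (col_a, col_b);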
In my case the unique index had duplicate entries even though the column being indexed didn't. I can only think this was caused by a bug. The solution was to:
Stop the service that writes to the db
Drop the index
Recreate the index
(Do the operation that was previously failing)
Start the service
It's important, if you are dropping and recreating an index, that nothing gets an opportunity to insert a duplicate entry while you are doing this. This is why I stopped the service that writes to the db.
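If stopping the writer is not an option, one alternative (my own suggestion, not what was done above) is to do the drop and the recreate in a single ALTER TABLE, so there is no window in which the index is missing:
-- drop and recreate the (unique) index in one statement instead of two
ALTER TABLE <tablename>
  DROP INDEX idx_<tablename>,
  ADD UNIQUE INDEX idx_<tablename> (<comma-separated columns>);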

MySql 1050: MySQLSyntaxErrorException: Table 'my_db/#sql-ib520' already exists

I've tried to execute the following ALTER TABLE statement:
ALTER TABLE `my_table` ADD COLUMN `new_column` LONGTEXT NULL DEFAULT NULL AFTER `old_column`;
During the execution of the script I've got
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
It appears that this left the database in an inconsistent state, since no new field was added, and when I try to execute the script again, I'm getting this strange error.
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'my_db/#sql-ib520' already exists
I do not have a #sql-ib520 table in my database, so to my understanding it must be some temp table created by MySQL.
Has anyone encountered this error before, and how could I solve it?
Thanx
Edit
I've tried the script suggested by Alex, but it did not work:
drop table `#mysql50##sql-ib520`;
ERROR 1051 (42S02): Unknown table 'my_db.#mysql50##sql-ib520'
Update
I'm using Amazon RDS with MySQL 5.6.12
I'm using an AWS RDS instance as well, and did a ton of reading on this problem. While I didn't find a great solution, here's how I fixed it by only replacing one table instead of the entire database.
If you run this command:
SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES
you can see the full list of database tables, including the orphaned table, which isn't normally visible. The two problem tables for me were:
ID NAME
407 my_database/#sql-ib379
379 my_database/users
because I was attempting to ALTER my users table when the DB crashed. Now, as mentioned above, I couldn't run any further ALTER TABLE commands because MySQL was trying to create the same temporary table for every subsequent query. I tried everything to DROP the orphaned table, but with the 'my_database/' part in the name, it didn't seem possible.
I also didn't want to drop and recreate my entire database. Since the orphaned table's name references the internal ID of the users table (#sql-ib379), I figured I would just swap the users table out. Here's a little MySQL script that did the trick for me:
-- temporarily disable foreign key checks
SET foreign_key_checks = 0;
-- replace this line with query to create a structural copy of the users table
-- named users_copy, including foreign keys if you use them
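-- one option (my addition, not from the original post): CREATE TABLE ... LIKE
-- copies the column definitions and indexes but NOT the foreign keys, so those
-- would need to be re-added by hand
CREATE TABLE `users_copy` LIKE `users`;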
-- copy everything from original table into new table
INSERT INTO `users_copy` SELECT * FROM `users`;
Make sure everything looks ok, and then run:
-- rename the existing table
RENAME TABLE `users` TO `users_backup`;
-- in case the copy process took some time, and there were additional rows added
-- to the original table, grab them and put them into the copy table
INSERT INTO `users_copy` SELECT * FROM `users_backup` WHERE `users_backup`.id > (SELECT MAX(id) FROM `users_copy`);
-- finally, rename the copy table to the original table name
RENAME TABLE `users_copy` TO `users`;
-- re-enable foreign key checks
SET foreign_key_checks = 1;
If you are not using foreign keys, you should be good to go now. I would recommend keeping the backup table around for a bit just in case, but once you remove that backup table, it should remove the orphaned table as well. If you are using foreign keys however, it is very important that you update any references to the original table name (in this case, users)! Depending on how you have your foreign keys setup, other tables that were dependent on users will now reference users_backup, which could cause problems with lost data.
Hope this helps.
In the end, since I'm using an AWS RDS instance, the script recommended by Alex did not work for me.
The MySQL documentation also recommends this script; you can find more info here about orphaned intermediate tables.
For AWS RDS I've found only one forum post about this, with no solution provided by Amazon staff. You might want to follow that post in case a solution is eventually provided.
So, at the moment, my only solution was to dump the existing database and create a new one.

Resetting AUTO_INCREMENT on MyISAM without rebuilding the table

Please help I am in major trouble with our production database. I had accidentally inserted a key with a very large value into an autoincrement column, and now I can't seem to change this value without a huge rebuild time.
ALTER TABLE tracks_copy AUTO_INCREMENT = 661482981
Is super-slow.
How can I fix this in production? I can't get this to work either (has no effect):
myisamchk tracks.MYI --set-auto-increment=661482982
Any ideas?
Basically, no matter what I do I get an overflow:
SHOW CREATE TABLE tracks
CREATE TABLE tracks (
...
) ENGINE=MYISAM AUTO_INCREMENT=2147483648 DEFAULT CHARSET=latin1
After struggling with this for hours, I was finally able to resolve it. The auto_increment info for MyISAM is stored in TableName.MYI; see state->auto_increment in http://forge.mysql.com/wiki/MySQL_Internals_MyISAM. So fixing that file was the right way to go.
However, myisamchk definitely has an overflow bug somewhere in the update_auto_increment function or what it calls, so it does not work for large values -- or rather if the current value is already > 2^31, it will not update it (source file here -- http://www.google.com/codesearch/p?hl=en#kYwBl4fvuWY/pub/FreeBSD/distfiles/mysql-3.23.58.tar.gz%7C7yotzCtP7Ko/mysql-3.23.58/myisam/mi_check.c&q=mySQL%20%22AUTO_INCREMENT=%22%20lang:c)
After discovering this, I ended up just using "xxd" to dump the MYI file into a hex file, editing around byte 60, and replacing the auto_increment value manually in the hex file. "xxd -r" then restores the binary file from the hex file. To discover exactly what to edit, I just ran ALTER TABLE on much smaller tables and looked at the effects using diffs. No fun, but it worked in the end. There seems to be a checksum in the format, but it appears to be ignored.
Have you dropped the record with the very large key? I don't think you can change the auto_increment to a lower value if that record still exists.
From the docs on myisamchk:
Force AUTO_INCREMENT numbering for new records to start at the given value (or higher, if there are existing records with AUTO_INCREMENT values this large)
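Putting those two points together, the rough order of operations would be something like this sketch (the id column name and the cutoff value are assumptions on my part):
-- 1. delete the accidentally inserted row(s) with the huge key
DELETE FROM tracks_copy WHERE id >= 661482981;

-- 2. with no rows left at or above the target value, retry resetting the counter,
--    either with ALTER TABLE (slow here, as noted in the question) or with
--    myisamchk --set-auto-increment while the table is offline
ALTER TABLE tracks_copy AUTO_INCREMENT = 661482981;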