MySQL Auto_Increment trying to automatically insert prior used values

I use Delphi 10.2 with MySQL. I have a table that has about 50,000 records and an Auto_Increment primary key. It has suddenly, and on its own with no help from me, started trying to re-insert old key values. As a matter of fact, it started over with the value 1. I have no idea how to fix this and I hope you might be able to help.
Thanks,
Jim Sawyer

If the MySQL table is defined with an auto-increment primary key then you should never specify the key value. MySQL should not re-use old key values, but you may want to check whether there is any table corruption. You can also reset the table's auto-increment value using an ALTER TABLE command. (There's a tutorial on this here: https://www.mysqltutorial.org/mysql-reset-auto-increment)
You can use FireDAC monitoring to confirm whether or not you are sending the primary key to MySQL: set your connection to be monitored using the FireDAC component; they supply a monitoring tool that you can set up to see all of the SQL being transferred. Normally the FireDAC layer does an insert with no primary key and then uses LAST_INSERT_ID to update the TField with the actual value inserted.
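For illustration, the monitor trace for a correct insert should look roughly like this (a sketch only; the table and column names here are invented):
INSERT INTO mytable (name, amount) VALUES ('abc', 10); # no primary key column in the statement
SELECT LAST_INSERT_ID(); # FireDAC reads the generated key back into the TField
If the trace instead shows your primary key column in the INSERT column list, the client code is supplying it.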
If you are sending the wrong key then alter your logic so you don't send the primary key on an insert.

You can reset the auto-increment value to any value you want with the following command:
ALTER TABLE <table_name> AUTO_INCREMENT = <new value>;
So if the new value is 100, the next inserted record receives a value of 100.

Related

How to prevent mySQL autoincrement from resetting in Django

I am creating a Django app where the primary keys are AutoFields, i.e. I am not manually assigning any field as the primary key in my models.
I need to use MySQL.
I will need to export all the data to Excel, or perhaps to another Django app, from time to time. Therefore the primary keys must be unique, so that new records or records to be deleted can be identified in Excel/the other app.
However, I have read that the MySQL auto-increment counter resets to the max key when the database restarts. This will result in reassignment of keys if the latest records were deleted.
I need to avoid this. No key should be reassigned.
How can this be done?
MySQL 8.0 now keeps the last auto-increment value per table persistently, so it is remembered between restarts and the counter does not reset.
https://www.percona.com/blog/2018/10/08/persistence-of-autoinc-fixed-in-mysql-8-0/
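For illustration, here is the difference in behaviour (a sketch to run against a scratch schema):
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
INSERT INTO t (v) VALUES (1), (2), (3);
DELETE FROM t WHERE id = 3; # remove the newest row
# ... restart the server, then:
INSERT INTO t (v) VALUES (4);
# MySQL 5.7 and earlier: the new row reuses id 3 (the counter is re-derived from MAX(id) at startup)
# MySQL 8.0+: the new row gets id 4 (the counter is persisted across the restart)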

Getting id generated in a trigger for further requests

I have a table with two columns:
caseId, referring to a foreign table column
caseEventId, int, unique for a given caseId, which I want to auto-increment for the same caseId.
I know that the auto-increment option based on another column is not available in MySQL with InnoDB:
MySQL Auto Increment Based on Foreign Key
MySQL second auto increment field based on foreign key
So I generate caseEventId in a trigger. My table:
CREATE TABLE IF NOT EXISTS mydb.caseEvent (
  `caseId` CHAR(20) NOT NULL,
  `caseEventId` INT NOT NULL DEFAULT 0,
  PRIMARY KEY (`caseId`, `caseEventId`),
  # Foreign key definition, not important here.
) ENGINE = InnoDB;
And my trigger:
CREATE DEFINER=`root`@`%` TRIGGER `mydb`.`caseEvent_BEFORE_INSERT` BEFORE INSERT ON `caseEvent` FOR EACH ROW
BEGIN
  # Next caseEventId for this caseId: current MAX + 1, or 0 for the first event
  SELECT COALESCE((SELECT MAX(caseEventId) + 1 FROM caseEvent WHERE caseId = NEW.caseId), 0)
  INTO @newCaseEventId;
  SET NEW.`caseEventId` = @newCaseEventId;
END
With this, I get my caseEventId which auto-increments.
However I need to re-use this new caseEventId in further calls within my INSERT transaction, so I place this id into @newCaseEventId within the trigger, and use it in the following statements:
START TRANSACTION;
INSERT INTO mydb.caseEvent (caseId) VALUES ('fziNw6muQ20VGYwYPW1b');
SELECT @newCaseEventId;
# Do stuff based on @newCaseEventId
COMMIT;
This seems to work just fine but... what about concurrency, using connection pools etc...?
Is this @newCaseEventId variable going to be shared with all clients using the same connection? Can I run into problems when my client server launches two concurrent transactions? This is using mysql under Node.js.
Is this safe, or is there a safer way to go about this? Thanks.
Edit 2020/09/24
FYI I have dropped this approach altogether. I was trying to use the db in a way it isn't meant to be used.
Basically I have dropped caseEventId, and any index which is supposed to increment nicely based on a given column value.
I rely instead on properly written queries on the read side, when I retrieve data, to recreate my caseEventId field...
That is no problem; user-defined variables are per client. That means every user has their own user-defined variables.
User-defined variables are session specific. A user variable defined by one client cannot be seen or used by other clients. (Exception: A user with access to the Performance Schema user_variables_by_thread table can see all user variables for all sessions.) All variables for a given client session are automatically freed when that client exits.
See the manual.
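A small demonstration of that scoping (run each half from a different client connection):
# session A:
SET @newCaseEventId = 42;
SELECT @newCaseEventId; # returns 42
# session B, a separate connection:
SELECT @newCaseEventId; # returns NULL - the variable is invisible here
One caveat for pooled connections: the variable lives as long as the underlying physical connection, so if the pool hands that connection to a later request without resetting session state, a stale value can still be read there.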

Error with inserting into mysql database

I am using CFWheels (a ColdFusion ORM framework).
I recently moved some data from my previous host to a new one. Now I am trying to insert into a table, but am getting an error message: "Error Executing Database Query. Duplicate entry '13651' for key 'PRIMARY'"
I looked in the database and it appears a record with id 13651 already exists. So I think the problem is with MySQL generating the right auto-increment value.
It seems the Auto_Increment value is damaged or not set to the max value in that column. This can happen after a bulk insert.
The solution is to set the maximum PK value + 1 as the new AUTO_INCREMENT value. When you then insert records into this table, they will automatically pick up the correct next increment.
ALTER TABLE tablename AUTO_INCREMENT = value;
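Applied to the error above, the repair would look something like this (the table and column names are placeholders for your schema):
SELECT MAX(id) FROM mytable; # suppose this returns 13651, the duplicate from the error
ALTER TABLE mytable AUTO_INCREMENT = 13652; # max PK value + 1; the next insert gets 13652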
Is the rest of the data for that record, and the one you are trying to insert, the same? If so, you might just need to tell the ORM to replace that value.
If the primary key has the auto-increment attribute turned on, do not insert it manually. Remove the primary key part from your insert query (whatever the syntax according to the taste of your ORM framework).

Violation of primary key constraint. Cannot insert duplicate key in object, using ADO

We are working on a user application, using Access 2003 (VBA) as the software language and SQL Server 2005 as the database.
We are using the ADO method and we have encountered a problem.
When users create a new record in an ADO screen and try to save it after filling it in, they receive this error:
error -2147217873 violation of primary key constraint 'PK_...'. Cannot insert duplicate key in object 'Pk_...'
Any help will be appreciated
Thanks in advance
The problem occurs because you can't have two primary keys with the same value.
If you are using ints as the primary key, remember to put auto-increment on it.
If you are using a GUID as the primary key, you might have forgotten to set the GUID to something other than the default empty GUID, and are thereby trying to insert an empty GUID twice.
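For the int case, the safeguard is declared in the table definition, along these lines (a sketch; the table and column names are made up):
CREATE TABLE dbo.Orders (
    id INT IDENTITY(1,1) PRIMARY KEY, -- server assigns 1, 2, 3, ... on each insert
    customer NVARCHAR(50)
);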
Are you trying to insert a new record with the primary key field having a value that is already in the database? The primary key field must always contain unique values.
Check which column is your primary key. If you are trying to insert a value that already exists, you will get that error.
You should create your PK value either from your code or on the SQL side. On the SQL side, when creating your database, you have to indicate that the default value for the "myPrimaryKey" field is a newly generated uniqueidentifier (e.g. NEWID()), while from code, you could have something like
myRecordset.fields("myPrimaryKey") = stGuidGen()
(check here for the stGuidGen function)
There are some pros and cons to each method. With the SQL method, you do not have to care about generating PKs anymore. By doing it in code, you can store the newly generated value without having to requery the database, and reuse it immediately in your code, which can be very useful.
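The SQL-side variant could be declared like this (a sketch; the table and column names are placeholders):
CREATE TABLE dbo.MyTable (
    myPrimaryKey UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
    payload NVARCHAR(100)
);
-- Inserts that omit the key column get a fresh GUID from the server:
INSERT INTO dbo.MyTable (payload) VALUES (N'example row');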
Open the properties of your primary key column and set the Identity property to Yes.

Using bulk insert for 2000 rows of data

Would using bulk insert for 2000 rows of data make sense?
It might be 500-2K in reality.
BTW, do bulk inserts ignore constraints, or is that a setting?
(Using SQL Server 2008, .NET on the server side; data is coming in via a web service (WSE or WCF).)
Bulk insert would probably not make sense for 2000 rows. Maybe for 200,000 rows.
Ignoring constraints is default behaviour. (Also described here).
CHECK_CONSTRAINTS
Specifies that all constraints on the target table or view must be checked during the bulk-import operation. Without the CHECK_CONSTRAINTS option, any CHECK and FOREIGN KEY constraints are ignored, and after the operation, the constraint on the table is marked as not-trusted.
Note: UNIQUE, PRIMARY KEY, and NOT NULL constraints are always enforced.
The "KEEPIDENTITY" option of "BULK INSERT":
Specifies that identity value or
values in the imported data file are
to be used for the identity column. If
KEEPIDENTITY is not specified, the
identity values for this column are
verified but not imported and SQL
Server automatically assigns unique
values based on the seed and increment
values specified during table
creation.
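Put together, a BULK INSERT that enforces constraints and keeps the file's identity values might look like this (a sketch; the table name and file path are placeholders):
BULK INSERT dbo.MyTable
FROM 'C:\data\rows.csv'
WITH (
    FIELDTERMINATOR = ',',   -- column separator in the file
    ROWTERMINATOR = '\n',    -- row separator in the file
    CHECK_CONSTRAINTS,       -- enforce CHECK and FOREIGN KEY constraints during the load
    KEEPIDENTITY             -- keep identity values from the file instead of generating new ones
);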