Currently I have a composite primary key consisting of (user, id). My user is John Smith and there are, say, 30 rows that pertain to him, so id auto-increments each time a new entry is made.
However, if I wanted to add a new user, say Jill Smith, to the same table, is there a way I can start at (Jill Smith, 1) and have the id auto-increment without messing up the previous entries?
No. AUTO_INCREMENT in MySQL cannot keep multiple "states" to track multiple counters. To get the described behaviour, you need to implement your own application logic (without using the auto-increment feature) and calculate the number part of the key before inserting new rows.
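For example, a minimal sketch of that application-side logic (assuming a hypothetical entries table with columns user, id and note; run it inside a transaction and serialize concurrent writers, e.g. with a lock, so two sessions cannot compute the same id):
INSERT INTO entries (`user`, id, note)
SELECT 'Jill Smith', COALESCE(MAX(id), 0) + 1, 'first entry for Jill'
FROM entries
WHERE `user` = 'Jill Smith';
-- COALESCE handles the very first row for a user (MAX(id) is NULL then).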
UPDATE
The above is true in general in MySQL but how AUTO_INCREMENT works depends on the storage engine.
The documentation is quite specific on your particular scenario for MyISAM tables:
If the AUTO_INCREMENT column is part of multiple indexes, MySQL generates sequence values using the index that begins with the AUTO_INCREMENT column, if there is one. For example, if the animals table contained indexes PRIMARY KEY (grp, id) and INDEX (id), MySQL would ignore the PRIMARY KEY for generating sequence values. As a result, the table would contain a single sequence, not a sequence per grp value.
https://dev.mysql.com/doc/refman/5.6/en/example-auto-increment.html
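In other words, for a MyISAM table you do get one sequence per group as long as the AUTO_INCREMENT column is the second column of the multiple-column key and no other index starts with it. A sketch based on the example from that page:
CREATE TABLE animals (
    grp ENUM('fish','mammal','bird') NOT NULL,
    id MEDIUMINT NOT NULL AUTO_INCREMENT,
    name CHAR(30) NOT NULL,
    PRIMARY KEY (grp, id)
) ENGINE = MyISAM;

INSERT INTO animals (grp, name) VALUES
    ('mammal', 'dog'), ('mammal', 'cat'),
    ('bird', 'penguin'), ('fish', 'lax');
-- Yields (mammal,1), (mammal,2), (bird,1), (fish,1): one sequence per grp.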
Related
I have a table named listings with an AUTO_INCREMENT column named "ID". When some rows are deleted, the ID values end up with gaps in between, which I don't like and which doesn't look professional. Can you please guide me on how to reset the ID values of all rows on each INSERT or DELETE query execution?
If you really want to find the lowest unused key value, don't use AUTO_INCREMENT at all, and manage your keys manually. However, this is NOT a recommended practice.
As explained at Auto Increment after delete in MySQL:
Primary auto-increment keys in a database are used to uniquely identify a given row and shouldn't be given any business meaning. So leave the primary key as is and add another column called, for example, courseOrder. Then when you delete a record from the database you may want to send an additional UPDATE statement in order to decrement the courseOrder column of all rows that have courseOrder greater than the one you are currently deleting.
As a side note, you should never modify the value of a primary key in a relational database, because there could be other tables that reference it as a foreign key and modifying it might violate referential constraints.
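A hypothetical sketch of that courseOrder approach (the table name, id and position are made-up values):
-- Suppose the row being deleted sat at courseOrder = 3.
DELETE FROM courses WHERE id = 42;

-- Close the gap by shifting every later row down by one.
UPDATE courses
SET courseOrder = courseOrder - 1
WHERE courseOrder > 3;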
It is not recommended, but since you insisted: you can reset the AUTO_INCREMENT counter (note that this only affects the next generated value; it does not renumber existing rows) by using something like:
ALTER TABLE table_name AUTO_INCREMENT = 1;
Good day
I created a database at localhost for a website and put in some info, then deleted and re-entered info in the database. Now the 'id' primary key has reached values above 200. I want to re-arrange the primary key.
for example
id |name
1 |Samuel
2 |Smith
4 |Gorge
15 |Adam
19 |David
i want to have
id |name
1 |Samuel
2 |Smith
3 |Gorge
4 |Adam
5 |David
Is it possible to do this with some command?
You could drop the primary key column and re-create it. All the ids will then be reassigned, I assume in the order in which the rows were inserted.
ALTER TABLE your_table DROP COLUMN id;
Then, to re-create it:
ALTER TABLE `your_table_name` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
The purpose of a primary key is to uniquely identify each row, so rows in one table can be related to rows in another table. Remember, this is a relational database and part of the meaning of "relational" is that entities are related to each other.
In other words, you don't want to change the primary key of rows, because that will break links from other tables. MySQL does not guarantee that auto incremented values are inserted without holes. In fact, as you have discovered, deletions and re-inserts cause problems.
Your interpretation of the "primary key" as a sequential number with no gaps assigned to each row maintained by the database is simply not correct.
Even though you shouldn't want to do this, you can. I advise against it, but here is how:
set @rn := 0;
update t
set id = (@rn := @rn + 1)
order by id;
If you want to enforce this over time, you will need to learn about triggers.
Consider this scenario: Gorge sends some offensive emails, and people complain and his account (#4) is denylisted.
Then you reorder your primary key values, and Adam is now assigned id 4. Suddenly, he finds himself banned! And lots of people mistrust him without cause.
Primary keys are not required to be consecutive -- they're only required to be unique. It's normal for there to be gaps, if you sometimes ROLLBACK transactions, or DELETE rows.
Most likely the primary key is being auto generated from some sort of auto increment sequence. In that case you can take the following steps:
1) update all the primary keys to the next value of the sequence: this will collapse all of the values into a contiguous range. In your case those ids will be 20, 21, 22, 23, 24. Postgres example:
UPDATE my_table SET id = nextval('my_table_id_sequence')
2) reset the sequence to start at 1: In Postgres this would look like the following:
ALTER SEQUENCE my_table_id_sequence RESTART WITH 1
3) update the values to the next value of the sequence again: now we can move all the rows back "down" so that they start at 1, and in your case they will be 1, 2, 3, 4, 5. It is important to first consolidate all the values at the "top" of the sequence before resetting, because that way we guarantee that there won't be any primary key collisions at the "bottom":
UPDATE my_table SET id = nextval('my_table_id_sequence')
NOTE: this approach only works if there are no foreign keys referring to the primary key of the table. If there are foreign keys you can still take the same approach, but first do these 3 steps:
1) find all of the related tables/columns that are referencing this primary key column
2) create a function that will cascade updates to the pk out to all fks
3) create a trigger that will execute the above function whenever the pk is updated: at this point, when we update the primary key column, all of the related foreign keys will also be updated. Depending on the database, you might need to explicitly defer constraint validation, or do the whole thing in one transaction.
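A rough sketch of what steps 2) and 3) could look like in Postgres for a single referencing table (all the names here are assumptions, and the foreign key may need to be DEFERRABLE, or the whole thing done in one transaction, so the intermediate state passes validation):
-- Hypothetical schema: child_table.my_table_id references my_table.id.
CREATE OR REPLACE FUNCTION cascade_my_table_id() RETURNS trigger AS $$
BEGIN
    -- Repoint child rows from the old id to the new one.
    UPDATE child_table SET my_table_id = NEW.id WHERE my_table_id = OLD.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER cascade_my_table_id_trg
AFTER UPDATE OF id ON my_table
FOR EACH ROW
WHEN (OLD.id IS DISTINCT FROM NEW.id)
EXECUTE FUNCTION cascade_my_table_id();  -- EXECUTE PROCEDURE on Postgres < 11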
For an example of what the above might look like in Postgres you can take a look at my answer here How Do I Deep Copy a Set of Data, and Change FK References to Point to All the Copies?
I am in a situation where I have to store key -> value pairs in a table that records which users have voted for which products.
UserId ProductID
1 2345
1 1786
6 657
2 1254
1 2187
As you can see, UserId keeps repeating and so can ProductID. I wanted to know the best way to represent this data, and also whether a primary key is necessary here. I've searched a lot but am not able to find anything that exactly matches my problem. Any help would be appreciated. Thank you.
If you want to enforce that a given user can vote for a given product at most once, create a unique constraint over both columns:
ALTER TABLE mytable ADD UNIQUE INDEX (UserId, ProductID);
Although you can use these two columns together as the key, your app code is often simpler if you define a separate, typically auto-increment, key column; whether that is worthwhile depends on which language/library your app code uses.
If you have any tables that hold a foreign key reference to this table, and you intend to use referential integrity, those tables and the SQL used to define the relationship will also be simpler if you create a separate key column - otherwise you end up carting multiple columns around instead of just one.
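A possible layout under that approach (the table and index names are just placeholders):
CREATE TABLE product_votes (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT,
    UserId    INT NOT NULL,
    ProductID INT NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY uq_user_product (UserId, ProductID)  -- one vote per user/product pair
);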
I have a table in Access which I'd like to substitute with a query which gathers data from the table and other new tables. The table is used by many queries which look to a primary key (autonumber) in the table, so the new query must have a primary key which is a unique combination of the primary keys of the tables used by the query. What can I do?
--EDIT--
Solution found: since I want to "merge" tables with a query, and since the pk is an autonumber, I can define the new pk (of the query) by "expanding the numbering": I multiply both pkeys by 2 (because I have two tables) and add or subtract 1 for one of the two (or add 1 for the first table, 2 for the second, and so on).
For example:
PK1 = 1,2,3,4,5,6
PK2 = 1,3,4,5,8,9,10 (some records may have been deleted, so the number is skipped)
new PK = (2*PK1, (2*PK2 + 1)) = (2,4,6,8,10,12),(3,7,9,11,17,19,21)
As you can see, they will never overlap (no value derived from PK2 can equal one derived from PK1, because of the "+1"): one set is always even and the other always odd, so the two sets are disjoint.
Hope it may help somebody
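For example, a hypothetical union query along those lines (table and field names are made up):
SELECT 2 * T1.ID AS NewPK, T1.SomeField
FROM Table1 AS T1
UNION ALL
SELECT 2 * T2.ID + 1 AS NewPK, T2.SomeField
FROM Table2 AS T2;
Every NewPK coming from Table1 is even and every one coming from Table2 is odd, so the combined column stays unique.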
Use a composite key (a multiple-field primary key).
Hello, is it possible to reuse the deleted auto-incremented primary key in my database? For example:
I have
Name_ID
1
2
3
4
If I delete primary key 4 and then insert again, the primary key of the row I inserted should be 4.
So: Name_ID 1 2 3 4 5.
I deleted primary key 5 (leaving Name_ID 1 2 3 4).
When I add a row, the primary key should be 5 again, not 6. Thanks!
Auto generated fields always have gaps in these cases.
What if you have an audit or history table that stored the rows with ID = 4 and ID = 5, and then you delete them again? How do you differentiate the rows?
In your example you've only deleted the last row - what if you delete ID = 1? Then what?
That is, they are just internal numbers unique to that table (and any associated tables, like audit ones): no external meaning should be attached to them.
As with other comments and answers here, I would not recommend this, especially if the data in the auto-increment column is referenced externally, but you can set the next auto-increment number to a specific value via an ALTER TABLE query:
ALTER TABLE T_YourTable AUTO_INCREMENT=4
You could also drop the column and then re-add the column with the same attributes (this could be expensive if you have a lot of rows).
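If you want the counter to follow the current maximum automatically, one hedged sketch (reusing the T_YourTable / Name_ID names from the examples above) is to build the ALTER TABLE statement dynamically, since AUTO_INCREMENT cannot take a subquery directly:
SET @next_id := (SELECT COALESCE(MAX(Name_ID), 0) + 1 FROM T_YourTable);
SET @stmt := CONCAT('ALTER TABLE T_YourTable AUTO_INCREMENT = ', @next_id);
PREPARE reset_ai FROM @stmt;
EXECUTE reset_ai;
DEALLOCATE PREPARE reset_ai;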
Why?
It's only intended to be a unique identifier.
You'll also get gaps with database clusters and whenever you roll back an insert transaction that overlaps a committed transaction - not just when you delete data.
A mechanism to fill in the gaps would be complex, slow, and difficult to maintain - and it's not needed.