MySQL deadlock on insert and update - mysql

I am running into a deadlock during moderate-to-high load situations. Here are the details.
MySQL-5.5.21-55
Engine: InnoDB
Table: Order
# Field, Type, Null, Key, Default, Extra
id, bigint(20) unsigned, NO, PRI, , auto_increment
sno, varchar(32), NO, MUL, ,
misc1, int, NO, , 0,
Table: OrderItem
# Field, Type, Null, Key, Default, Extra
id, bigint(20) unsigned, NO, PRI, , auto_increment
order_id, bigint(20), YES, MUL, ,
f1, varchar(50), YES, , ,
f2, varchar(100), YES, , ,
misc2, int, NO, , 0,
Order.sno is unique
OrderItem.order_id is not defined as a foreign key, but is used as one by the application
Order has one-to-many relation with OrderItem
OrderItem.order_id + OrderItem.f1 + OrderItem.f2 is unique
Use case:
Whenever any record in Order or OrderItem needs to be updated, I want to invalidate the old records (or delete) and insert new ones.
It may so happen that earlier, one record in Order (e.g. order1) had 3 records in OrderItem (e.g. orderItem1, orderItem2, orderItem3). But now I want to have it as order1 -> orderItem1, orderItem4, orderItem5, or an entirely new set. This is the reason why I want to invalidate the old records altogether and insert new ones, as finding out what has changed in OrderItem is complicated.
Multiple threads will be doing this operation, but they will work on different record sets. I'm operating on 25 Orders at a time.
What I tried:
Insert into Order; on duplicate key update Order, delete all children from OrderItem, and insert the OrderItems.
Have another column called is_active in Order; mark all the records for the same sno as 0, insert the new records into Order, and insert new children into OrderItem.
Delete from Order for the given sno; delete from OrderItem for the same sno. Insert fresh rows into both tables.
All of the above approaches resulted in a deadlock at one time or another.
No other thread or process is working on these tables.
Observation:
I went through the following links:
InnoDB record level locks
InnoDB locks set
and found that updating/deleting multiple records causes MySQL to acquire next-key locks at the REPEATABLE READ isolation level (the default). I believe this is what causes the problem.
I'd appreciate it if you could provide some direction on solving this.

Since you haven't mentioned it, I'll give it a try: have you tried FOR UPDATE?
Set the connection's autocommit to false.
At the beginning of the program, use this to lock all relevant data:
SELECT o.*, oi.*
FROM `Order` o
INNER JOIN OrderItem oi ON (o.id = oi.order_id)
WHERE o.id = <order id to update>
FOR UPDATE;
Then you can do whatever you want to those entries.
Then commit.
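Put together, a minimal sketch of the whole transaction (using the delete-and-reinsert variant from the question; the backticks around `Order` are needed because ORDER is a reserved word):
SET autocommit = 0;
START TRANSACTION;
-- lock the parent row and all of its children up front
SELECT o.*, oi.*
FROM `Order` o
INNER JOIN OrderItem oi ON (o.id = oi.order_id)
WHERE o.id = <order id to update>
FOR UPDATE;
-- no other transaction can acquire locks on these rows now
DELETE FROM OrderItem WHERE order_id = <order id to update>;
INSERT INTO OrderItem (order_id, f1, f2, misc2)
VALUES (<order id to update>, 'f1-value', 'f2-value', 0);
COMMIT;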
EDIT: I think the root cause of the problem is multithreading (obviously), so I was wondering whether it would be better to remove that element from the equation.
Imagine a system where you have multiple receivers that accept requests. Each receiver would only do something like:
-- select to check if there is an existing record (no need to lock); if not, return fail as the response
SELECT o.*, oi.*
FROM `Order` o
INNER JOIN OrderItem oi ON (o.id = oi.order_id)
WHERE o.sno = <sno to update>;
insert into request_buffer (request_id, sno, new_order_item,create_date .......)
values
(1, abc , orderitem1......);
-- return success after inserting into the buffer
Then you have a separate single-threaded program that polls this table and handles those buffer entries.
That would separate the multithreading element from the DB-updating process. I am not sure about the volume of incoming requests, but I think if you handle a few more requests per cycle/query, the performance wouldn't be THAT significantly different.
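A sketch of the poller's cycle (the batch size and the processing step in between are assumptions):
-- grab the oldest pending requests
SELECT request_id, sno, new_order_item
FROM request_buffer
ORDER BY create_date
LIMIT 100;
-- ... apply each request to Order / OrderItem in its own transaction ...
-- then clear the processed entries
DELETE FROM request_buffer WHERE request_id IN (1, 2, 3);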

Finally (perhaps a workaround, in line with @JackyCheng's suggestion), I'm:
Inserting into the Order table (multiple records at a time); no update on duplicate key
Inserting into the OrderItem table (multiple records at a time)
Selecting all Order.ids that are to be updated
Marking the identified Order ids as inactive one by one (this is the key part, as it takes an index-record lock rather than a next-key lock)
Overall the transaction execution time remains below 30ms for 25 Orders.
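A sketch of that sequence (statement details are assumed from the description; it also assumes the schema lets old and new rows for the same sno coexist once is_active is in play):
START TRANSACTION;
-- 1. insert the new Orders; no ON DUPLICATE KEY UPDATE clause
INSERT INTO `Order` (sno, misc1) VALUES ('sno-1', 0), ('sno-2', 0);
-- 2. insert the new OrderItems (order_id wiring simplified here)
INSERT INTO OrderItem (order_id, f1, f2, misc2) VALUES (101, 'a', 'b', 0);
-- 3. select the superseded Order ids
SELECT id FROM `Order` WHERE sno IN ('sno-1', 'sno-2') AND is_active = 1 AND id NOT IN (101, 102);
-- 4. mark each id inactive one by one: a primary-key update takes an
--    index-record lock, not a next-key lock
UPDATE `Order` SET is_active = 0 WHERE id = 42;
COMMIT;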

Related

Is there any constraint in SQL such that when I add a value that is a foreign key in one table, it removes that value (a primary key) from another table?

I have two tables defined as:
dealership_inventory(vin, dealer_id, price, purchase_date)
where vin is the PK and dealer_id is the FK
transactions(transaction_id, dealer_id, customer_id, vin, purchase_date, price)
where transaction_id is PK and dealer_id,customer_id, and vin are FKs
Whenever I add a new transaction to the transactions table with an insert statement, I would like to remove that tuple with matching vin from the dealership_inventory table. Is this possible with some type of constraint?
You don't really need to remove the VIN from the inventory table. Instead, if you want to find out whether a vehicle is still available, just use a NOT EXISTS query, e.g.
SELECT di.*
FROM dealership_inventory di
WHERE NOT EXISTS (
SELECT 1
FROM transactions t
WHERE t.vin = di.vin
);
If, at some later point, the inventory table gets bogged down with items no longer available, you can run a batch job which moves these sold items to another table.
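A sketch of such a batch job (the sold_inventory archive table is hypothetical):
-- copy sold vehicles into the archive table
INSERT INTO sold_inventory
SELECT di.*
FROM dealership_inventory di
WHERE EXISTS (SELECT 1 FROM transactions t WHERE t.vin = di.vin);
-- then remove them from the live inventory
DELETE di
FROM dealership_inventory di
WHERE EXISTS (SELECT 1 FROM transactions t WHERE t.vin = di.vin);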

How to reduce the auto increment number in SQL database?

Currently the table structure is like this:
user_preference
---------------
id
user_id
pref_id
This table stores all the user options, and id is auto-inc.
the problems are:
1) Is it necessary to keep an ID for every table? It seems to be common practice to keep a system-generated id for every table.
2) Whenever the user updates their preferences, I clear all related records for them and insert the updated ones, so the auto-inc number will become very large later. How can I prevent that?
Thanks for helping.
You can periodically reset the auto increment counter back to 1, to ensure that the id does not become arbitrarily large (and sparse) over the course of frequent deletion of records.
In MySQL:
ALTER TABLE table_name AUTO_INCREMENT = 1
In SQL Server:
DBCC CHECKIDENT (table_name, RESEED, 0)
Each of these commands will reset the auto-increment counter to 1, or to the value closest to 1 if 1 is already in use by another record.
You do not need to have an AUTO_INCREMENT PRIMARY KEY for every table. Sometimes there is a 'natural' key that works quite well for the PK.
Do not manipulate AUTO_INCREMENT values. Do not depend on any property other than uniqueness.
Your user_preference table smells like a many-to-many mapping? If so, this is optimal:
CREATE TABLE user_preference (
user_id ...,
pref_id ...,
PRIMARY KEY(user_id, pref_id),
INDEX (pref_id, user_id)
) ENGINE=InnoDB;
For discussion of "why", see http://mysql.rjweb.org/doc.php/index_cookbook_mysql#many_to_many_mapping_table
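With the composite primary key, replacing a user's preferences never touches an auto-increment counter; a sketch (the user_id and pref_id values are made up):
DELETE FROM user_preference WHERE user_id = 123;
INSERT INTO user_preference (user_id, pref_id) VALUES (123, 1), (123, 7), (123, 9);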

How to optimise change history data for MySQL

The previous table this data was stored in approached 3-4 GB, but the data wasn't compressed before/after storage. I'm not a DBA, so I'm a little out of my depth on a good strategy.
The table is to log changes to a particular model in my application (user profiles), but with one tricky requirement: we should be able to fetch the state of a profile at any given date.
Data (single table):
id, username, email, first_name, last_name, website, avatar_url, address, city, zip, phone
The only two requirements:
be able to fetch a list of changes for a given model
be able to fetch state of model on a given date
Previously, all of the profile data was stored for a single change, even if only one column was changed. But getting a 'snapshot' for a particular date was easy enough.
My first couple of solutions in optimising the data structure:
(1) only store changed columns. This would drastically reduce the data stored, but would make it quite complicated to get a snapshot of the data. I'd have to merge all changes up to a given date (could be thousands), then apply them to a model. But that model couldn't be a fresh model (only changed data is stored). To do this, I'd first have to copy over all data from the current profiles table, then apply the changes to those base models to get a snapshot.
(2) store the whole of the data, but convert it to a compressed format like gzip or binary or whatnot. This would remove the ability to query the data other than to obtain changes. I couldn't, for example, fetch all changes where email = ''. I would essentially have a single column with converted data, storing the whole of the profile.
Then, I would want to use relevant MySQL table options, like ARCHIVE to further reduce space.
So my question is: are there any other options which you feel would be a better approach than 1/2 above, and if not, which of the two would be better?
First of all, I wouldn't worry at all about a 3GB table (unless it grew to this size in a very short period of time). MySQL can take it. Space shouldn't be a concern, keep in mind that a 500 GB hard disk costs about 4 man-hours (in my country).
That being said, in order to lower your storage requirements, create one table for each field of the table you want to monitor. Assuming a profile table like this:
CREATE TABLE profile (
profile_id INT PRIMARY KEY,
username VARCHAR(50),
email VARCHAR(50) -- and so on
);
... create two history tables:
CREATE TABLE profile_history_username (
profile_id INT NOT NULL,
username VARCHAR(50) NOT NULL, -- same type as profile.username
changedAt DATETIME NOT NULL,
PRIMARY KEY (profile_id, changedAt),
CONSTRAINT profile_id_username_fk
FOREIGN KEY profile_id_fkx (profile_id)
REFERENCES profile(profile_id)
);
CREATE TABLE profile_history_email (
profile_id INT NOT NULL,
email VARCHAR(50) NOT NULL, -- same type as profile.email
changedAt DATETIME NOT NULL,
PRIMARY KEY (profile_id, changedAt),
CONSTRAINT profile_id_fk
FOREIGN KEY profile_id_email_fkx (profile_id)
REFERENCES profile(profile_id)
);
Every time you change one or more fields in profile, log the change in each relevant history table:
START TRANSACTION;
-- lock all tables
SELECT @now := NOW()
FROM profile
JOIN profile_history_email USING (profile_id)
WHERE profile_id = [a profile_id]
FOR UPDATE;
-- update main table, log change
UPDATE profile SET email = [new email] WHERE profile_id = [a profile_id];
INSERT INTO profile_history_email VALUES ([a profile_id], [new email], @now);
COMMIT;
You may also want to set appropriate AFTER triggers on profile so as to populate the history tables automatically.
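For instance, a trigger for the email history might look like this (a sketch based on the schema above):
DELIMITER //
CREATE TRIGGER profile_email_audit
AFTER UPDATE ON profile
FOR EACH ROW
BEGIN
-- log only when the email actually changed (NULL-safe comparison)
IF NOT (NEW.email <=> OLD.email) THEN
INSERT INTO profile_history_email (profile_id, email, changedAt)
VALUES (NEW.profile_id, NEW.email, NOW());
END IF;
END//
DELIMITER ;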
Retrieving history information should be straightforward. In order to get the state of a profile at a given point in time, use this query:
SELECT
(
SELECT username FROM profile_history_username
WHERE profile_id = [a profile_id] AND changedAt = (
SELECT MAX(changedAt) FROM profile_history_username
WHERE profile_id = [a profile_id] AND changedAt <= [snapshot date]
)
) AS username,
(
SELECT email FROM profile_history_email
WHERE profile_id = [a profile_id] AND changedAt = (
SELECT MAX(changedAt) FROM profile_history_email
WHERE profile_id = [a profile_id] AND changedAt <= [snapshot date]
)
) AS email;
You can't compress the data without having to uncompress it in order to search it, which is going to severely damage performance. If the data really is changing that often (i.e. more than an average of 20 times per record), then it would be more efficient, for both storage and retrieval, to structure it as a series of changes:
Consider:
CREATE TABLE profile (
id INT NOT NULL AUTO_INCREMENT,
PRIMARY KEY (id)
);
CREATE TABLE profile_data (
profile_id INT NOT NULL,
attr ENUM('username', 'email', 'first_name'
, 'last_name', 'website', 'avatar_url'
, 'address', 'city', 'zip', 'phone') NOT NULL,
value VARCHAR(255),
starttime DATETIME DEFAULT CURRENT_TIMESTAMP,
endtime DATETIME,
PRIMARY KEY (profile_id, attr, starttime),
INDEX (profile_id),
FOREIGN KEY (profile_id) REFERENCES profile(id)
);
When you add a new value for an existing record, set the endtime on the record being superseded.
Then to get the value at a date $T:
SELECT p.id, attr, value
FROM profile p
INNER JOIN profile_data d
ON p.id=d.profile_id
WHERE $T>=starttime
AND $T<=IF(endtime IS NULL,$T, endtime);
Alternately just have a start time, and:
SELECT p.id, attr, value
FROM profile p
INNER JOIN profile_data d
ON p.id=d.profile_id
WHERE $T>=starttime
AND NOT EXISTS (SELECT 1
FROM profile_data d2
WHERE d2.profile_id=d.profile_id
AND d2.attr=d.attr
AND d2.starttime>d.starttime
AND d2.starttime>$T);
(which will be even faster with the MAX-CONCAT trick, sketched below).
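For reference, a sketch of that trick against the start-time-only variant: a DATETIME renders as 19 characters, so the value part of the group-wise maximum starts at position 20.
SELECT d.profile_id, d.attr,
SUBSTRING(MAX(CONCAT(d.starttime, d.value)), 20) AS value
FROM profile_data d
WHERE d.starttime <= '2013-07-01' -- the snapshot date $T
GROUP BY d.profile_id, d.attr;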
But if the data is not changing with that frequency then keep it in the current structure.
You need a slowly changing dimension:
I will do this only for e-mail and telephone so you get the idea (note that I use two keys: one that is unique in the table, and another that is unique to the user it concerns. That is, the table key identifies the record, and the user key identifies the user):
table_id, user_id, email, telephone, created_at, inactive_at, is_current
1, 1, mario@yahoo.it, 123456, 2012-01-02, 2013-04-01, no
2, 2, erik@telecom.de, 123457, 2012-01-03, 2013-02-28, no
3, 3, vanessa@o2.de, 1234568, 2012-01-03, null, yes
4, 2, erik@telecom.de, 123459, 2012-02-28, null, yes
5, 1, super.mario@yahoo.it, 654321, 2013-04-01, 2013-04-02, no
6, 1, super.mario@yahoo.it, 123456, 2013-04-02, null, yes
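Each change closes the current row and opens a new one; a sketch of the two statements (values made up):
UPDATE FooTable
SET inactive_at = CURDATE(), is_current = 'no'
WHERE user_id = 1 AND is_current = 'yes';
INSERT INTO FooTable (user_id, email, telephone, created_at, inactive_at, is_current)
VALUES (1, 'super.mario@yahoo.it', 123456, CURDATE(), NULL, 'yes');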
most recent state of the database
select * from FooTable where inactive_at is null
or
select * from FooTable where is_current = 'yes'
All changes to mario (mario is user_id 1)
select * from FooTable where user_id = 1;
All changes between 1 Jan 2013 and 1 May 2013
select * from FooTable where created_at between '2013-01-01' and '2013-05-01';
and you need to compare with the old versions (with the help of a stored procedure, Java, or PHP code... you choose)
select * from FooTable where inactive_at between '2013-01-01' and '2013-05-01';
If you want, you can do a fancy SQL statement:
select f1.table_id, f1.user_id,
case when f1.email = f2.email then 'NO_CHANGE' else concat(f1.email, ' -> ', f2.email) end,
case when f1.telephone = f2.telephone then 'NO_CHANGE' else concat(f1.telephone, ' -> ', f2.telephone) end
from FooTable f1 inner join FooTable f2
on (f1.user_id = f2.user_id)
where f2.created_at in
(select max(f3.created_at) from FooTable f3
where f3.user_id = f1.user_id and f3.created_at < f1.created_at)
and f1.created_at between '2013-01-01' and '2013-05-01';
As you can see, a juicy query, comparing each row with the user's previous row...
the state of the database on 2013-03-01
select * from FooTable where table_id in
(select max(table_id) from FooTable where inactive_at <= '2013-03-01' group by user_id
union
select table_id from FooTable where inactive_at is null group by user_id having count(table_id) = 1);
I think this is the easiest way to implement what you want... you could implement a multi-million-table relational model, but then it would be a pain in the arse to query it.
Your database is not big enough; I work every day with one even bigger. Now tell me: is the money you save on a new server worth the time you spend on a super-complex relational model?
BTW if the data changes too fast, this approach cannot be used...
BONUS: optimization:
create indexes on created_at, inactive_at, user_id, and the pair
perform partitioning (both horizontal and vertical)
If you put all occurring changes in different tables (one per module), then when you later require an instance on some date, you join them and pick rows by comparing dates. For example, if you want the instance at 1st of July, run a query with a condition where the change date is less than or equal to 1st of July, order it descending, and limit the count to 1; that way the joins produce exactly the instance as it was on 1st of July. In this manner you can even figure out the most frequently updated module.
Also, if you want to keep all the data flat, try range partitioning on the basis of month; that way MySQL will handle it pretty easily.
Note: by date I mean storing the unix timestamp of the date; it's pretty easy to compare.
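A sketch of that per-module lookup (the email_changes table and its columns are invented for illustration):
SELECT *
FROM email_changes
WHERE user_id = 1
AND changed_at <= UNIX_TIMESTAMP('2013-07-01')
ORDER BY changed_at DESC
LIMIT 1;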
I'll offer one more solution just for variety.
Schema
PROFILE
CREATE TABLE PROFILE (
id INT PRIMARY KEY,
username VARCHAR(50) NOT NULL UNIQUE
);
PROFILE_ATTRIBUTE
CREATE TABLE PROFILE_ATTRIBUTE (
id INT PRIMARY KEY,
profile_id INT NOT NULL,
attribute_name VARCHAR(50) NOT NULL,
attribute_value VARCHAR(255) NULL,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
replaced_at DATETIME NULL,
FOREIGN KEY (profile_id) REFERENCES PROFILE (id)
);
For all attributes you are tracking, simply add PROFILE_ATTRIBUTE records when they are updated, and mark the previous attribute record with the DATETIME it was replaced at.
Select Current Profile
SELECT *
FROM PROFILE p
LEFT JOIN PROFILE_ATTRIBUTE pa
ON p.id = pa.profile_id
WHERE p.username = 'username'
AND pa.replaced_at IS NULL
Select Profile At Date
SELECT *
FROM PROFILE p
LEFT JOIN PROFILE_ATTRIBUTE pa
ON p.id = pa.profile_id
WHERE p.username = 'username'
AND pa.created_at < '2013-07-01'
AND '2013-07-01' <= IFNULL(pa.replaced_at, NOW())
When Updating Attributes
Insert the new attribute
Update the previous attribute's replaced_at value
It would probably be important that the created_at for a new attribute match the replaced_at for the corresponding old attribute. This would be so that there is an unbroken timeline of attribute values for a given attribute name.
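A sketch of that pair of statements sharing one timestamp (ids and values assumed):
SET @now := NOW();
-- 1. insert the new attribute value
INSERT INTO PROFILE_ATTRIBUTE (id, profile_id, attribute_name, attribute_value, created_at)
VALUES (42, 1, 'email', 'new@example.com', @now);
-- 2. close out the previous value with the same timestamp
UPDATE PROFILE_ATTRIBUTE
SET replaced_at = @now
WHERE profile_id = 1 AND attribute_name = 'email'
AND replaced_at IS NULL AND id <> 42;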
Advantages
Simple two-table architecture (I personally don't like a table-per-field approach)
Can add additional attributes with no schema changes
Easily mapped into ORM systems, assuming an application lives on top of this database
Could easily see the history for a certain attribute_name over time.
Disadvantages
Integrity is not enforced. For example, the schema doesn't prevent multiple records with a NULL replaced_at for the same attribute_name... perhaps this could be enforced with a two-column UNIQUE constraint.
Let's say you add a new field in the future. Existing profiles would not select a value for the new field until they save a value to it. This is opposed to the value coming back as NULL if it were a column. This may or may not be an issue.
If you use this approach, be sure you have indexes on the created_at and replaced_at columns.
There may be other advantages or disadvantages. If commenters have input, I'll update this answer with more information.

How to have Unique IDs across two or more tables in MySQL?

I have a table called events where all new information goes. This table works as a reference for all queries for news feed(s) so event items are selected from there and information corresponding to that event is retrieved from the correct tables.
Now, here's my problem. I have E_IDs in the events table which correspond to the ID of an event in a different table, be it T_ID for tracks, S_ID for status, and so on... These IDs could be the same, so for the time being I just used a different auto_increment start value for each table, so status started at 500, tracks at 0, etc. Obviously, I don't want to do that, as I have no idea yet which table is going to have the most data in it. I would assume status would quickly exceed tracks.
The information is inserted into the event table with triggers. Here's an example of one;
BEGIN
INSERT INTO events (action, E_ID, ID)
VALUES ('has some news.', NEW.S_ID, NEW.ID);
END
That one's for the status table.
Is there an addition I can make to that trigger to ensure NEW.S_ID != any E_ID currently in events, and if it matches, to change the S_ID accordingly?
Alternatively, is there some kind of key I can use to reference events when auto-incrementing the S_ID, so that the S_ID is never incremented to a value already used as an E_ID?
Those are my thoughts. I think the latter solution would be better, but I doubt it is possible; or it is, but would require another reference table and be too complex.
It's really uncommon to require a unique id across tables, but here's a solution that will do it.
/* Create a single table to store unique IDs */
CREATE TABLE object_ids (
id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
object_type ENUM('event', ...) NOT NULL
) ENGINE=InnoDB;
/* Independent object tables do not auto-increment, and have a FK to the object_ids table */
CREATE TABLE events (
id INT UNSIGNED NOT NULL PRIMARY KEY,
...
CONSTRAINT FOREIGN KEY (id) REFERENCES object_ids (id)
) ENGINE=InnoDB;
/* When creating a new record, first insert your object type into the object_ids table */
INSERT INTO object_ids(object_type) VALUES ('event');
/* Then, get the auto-increment id. */
SET @id = LAST_INSERT_ID();
/* And finally, create your object record. */
INSERT INTO events (id, ...) VALUES (@id, ...);
Obviously, you would duplicate the structure of the events table for your other tables.
You could also just use a Universal Unique Identifier (UUID).
A UUID is designed as a number that is globally unique in space and time. Two calls to UUID() are expected to generate two different values, even if these calls are performed on two separate computers that are not connected to each other.
Please read more about it in the manual.
There's also a shorter version.
UUID_SHORT() should do the trick.
It will generate 64-bit unsigned integers for you.
According to the doc the generator logic is:
(server_id & 255) << 56
+ (server_startup_time_in_seconds << 24)
+ incremented_variable++;
The value of UUID_SHORT() is guaranteed to be unique if the following conditions hold:
The server_id value of the current server is between 0 and 255 and is unique among your set of master and slave servers
You do not set back the system time for your server host between mysqld restarts
You invoke UUID_SHORT() on average fewer than 16 million times per second between mysqld restarts
mysql> SELECT UUID_SHORT();
-> 92395783831158784
If you're curious what your server id is, you can use either of these:
SELECT @@server_id;
SHOW VARIABLES LIKE 'server_id';

How to fill in the "holes" in auto-increment fields?

I've read some posts about this but none cover this issue.
I guess it's not possible, but I'll ask anyway.
I have a table with more than 50,000 records. It's an old table where various insert/delete operations have taken place.
That said, there are various 'holes', some of about 300 records. E.g.: ..., 1340, 1341, 1660, 1661, 1662, ...
The question is: is there a simple/easy way to make new inserts fill these 'holes'?
I agree with @Aaron Digulla and @Shane N. The gaps are meaningless. If they DO mean something, that is a flawed database design. Period.
That being said, if you absolutely NEED to fill these holes, AND you are running at least MySQL 3.23, you can utilize a TEMPORARY TABLE to create a new set of IDs. The idea here being that you are going to select all of your current IDs, in order, into a temporary table as such:
CREATE TEMPORARY TABLE NewIDs
(
NewID INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
OldID INT UNSIGNED
);
INSERT INTO NewIDs (OldId)
SELECT
Id
FROM
OldTable
ORDER BY
Id ASC;
This will give you a table mapping your old Id to a brand new Id that is going to be sequential in nature, due to the AUTO INCREMENT property of the NewId column.
Once this is done, you need to update any other reference to the Id in "OldTable" and any foreign key it utilizes. To do this, you will probably need to DROP any foreign key constraints you have, update any reference in tables from the OldId to the NewId, and then re-institute your foreign key constraints.
However, I would argue that you should not do ANY of this, and just understand that your Id field exists for the sole purpose of referencing a record, and should NOT have any specific relevance.
UPDATE: Adding an example of updating the Ids
For example:
Let's say you have the following 2 table schemas:
CREATE TABLE Parent
(
ParentId INT UNSIGNED AUTO_INCREMENT,
Value INT UNSIGNED,
PRIMARY KEY (ParentId)
);
CREATE TABLE Child
(
ChildId INT UNSIGNED AUTO_INCREMENT,
ParentId INT UNSIGNED,
PRIMARY KEY (ChildId),
FOREIGN KEY (ParentId) REFERENCES Parent(ParentId)
);
Now, the gaps are appearing in your Parent table.
In order to update your values in Parent and Child, you first create a temporary table with the mappings:
CREATE TEMPORARY TABLE NewIDs
(
Id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
ParentID INT UNSIGNED
);
INSERT INTO NewIDs (ParentId)
SELECT
ParentId
FROM
Parent
ORDER BY
ParentId ASC;
Next, we need to tell MySQL to ignore the foreign key constraint so we can correctly UPDATE our values. We will use this syntax:
SET foreign_key_checks = 0;
This causes MySQL to ignore foreign key checks when updating the values, but it will still enforce that the correct value type is used (see the MySQL reference for details).
Next, we need to update our Parent and Child tables with the new values. We will use the following UPDATE statement for this:
UPDATE
Parent,
Child,
NewIds
SET
Parent.ParentId = NewIds.Id,
Child.ParentId = NewIds.Id
WHERE
Parent.ParentId = NewIds.ParentId AND
Child.ParentId = NewIds.ParentId
We now have updated all of our ParentId values correctly to the new, ordered Ids from our temporary table. Once this is complete, we can re-institute our foreign key checks to maintain referential integrity:
SET foreign_key_checks = 1;
Finally, we will drop our temporary table to clean up resources:
DROP TEMPORARY TABLE NewIds;
And that is that.
What is the reason you need this functionality? Your db should be fine with the gaps, and if you're approaching the max size of your key, just make it unsigned or change the field type.
You generally don't need to care about gaps. If you're getting to the end of the datatype for the ID it should be relatively easy to ALTER the table to upgrade to the next biggest int type.
If you absolutely must start filling gaps, here's a query to return the lowest available ID (hopefully not too slowly):
SELECT MIN(table0.id)+1 AS newid
FROM table AS table0
LEFT JOIN table AS table1 ON table1.id=table0.id+1
WHERE table1.id IS NULL
(remember to use a transaction and/or catch duplicate key inserts if you need concurrent inserts to work.)
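A sketch that collapses the find and the insert into one statement, so a concurrent writer can at worst hit a duplicate-key error rather than silently reuse the id (table and column names assumed):
INSERT INTO mytable (id, data)
SELECT MIN(t0.id) + 1, 'some value'
FROM mytable AS t0
LEFT JOIN mytable AS t1 ON t1.id = t0.id + 1
WHERE t1.id IS NULL;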
INSERT INTO prueba (id)
VALUES (
(SELECT IFNULL(MAX(id), 0) + 1 FROM prueba target)
);
The IFNULL handles the NULL that MAX() returns when the table has zero rows.
The target alias avoids MySQL's "You can't specify target table for update in FROM clause" error.
There is a simple way but it doesn't perform well: Just try to insert with an id and when that fails, try the next one.
Alternatively, select an ID and when you don't get a result, use it.
If you're looking for a way to tell the DB to automatically fill the gaps, then that's not possible. Moreover, it should never be necessary. If you feel you need it, then you're abusing an internal technical key for something beyond the single purpose it has: to allow you to join tables.
[EDIT] If this is not a primary key, then you can use this update statement:
update (
select *
from table
order by reg_id -- this makes sure that the order stays the same
)
set reg_id = x.nextval
where x is a new sequence which you must create (MySQL itself has no sequences, so this is pseudo-SQL; you would emulate the sequence, e.g. with a user variable). This will renumber all existing elements, preserving the order. It will fail if you have foreign key constraints, and it will corrupt your database if you reference these IDs anywhere without foreign key constraints.
Note that during the next insert, the database will create a huge gap unless you reset the identity column.
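In MySQL, that reset is a single statement (the counter snaps up to MAX(id) + 1 if the requested value is already in use):
ALTER TABLE table_name AUTO_INCREMENT = 1;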
As others have said, it doesn't matter, and if it does then something is wrong in your database design. But personally I just like them to be in order anyway!
Here is some SQL that will recreate your IDs in the same order, but without the gaps.
It is done first in a temp_id field (which you will need to create), so you can see that it is all good before overwriting your old IDs. Replace Tbl and id as appropriate.
SELECT @i := 0;
UPDATE Tbl
JOIN
(
SELECT id
FROM Tbl
ORDER BY id
) t2
ON Tbl.id = t2.id
SET temp_id = (@i := @i + 1);
You will now have a temp_id field with all of your shiny new IDs. You can make them live by simply:
UPDATE Tbl SET id = temp_id;
And then dropping your temp_id column.
I must admit I'm not quite sure why it works, since I would have expected the engine to complain about duplicate IDs, but it didn't when I ran it.
You might want to clean up gaps in a priority column.
The approach below generates an auto-incrementing value for the priority.
The extra left join on the same table makes sure rows are added in the same order as (in this case) the priority.
SET @a := 0;
REPLACE INTO footable
(id, priority)
(
SELECT tbl2.id, @a
FROM footable AS tbl
LEFT JOIN footable AS tbl2 ON tbl2.id = tbl.id
WHERE (SELECT @a := @a + 1)
ORDER BY tbl.priority
);