Query execution time increases after adding order by - mysql

I use the following query to fetch data from around 15 columns across four tables:
SELECT company_table.some_values,
log_table.somevalues,
employee_table.somevalues,
manager_table.somevalues
FROM company_table
JOIN log_table
ON company_table.id = log_table.id
AND company_table.department = log_table.dept
JOIN employee_table
ON employee_table.id = log_table.id
AND employee_table.year = log_table.joining
AND employee_table.department = log_table.dept
JOIN manager_table
ON manager_table.id = log_table.id
AND manager_table.year = log_table.joining
AND manager_table.department = log_table.dept
ORDER BY log_table.id DESC
The output is correct, but the query takes a long time to execute. If I remove the ORDER BY, execution time drops considerably. I also tried ordering ascending, and it is still slow.

I suspect your tables are not designed correctly.
I think you have repeating values in your log_table.id column: I suspect it doesn't really have a unique index or a proper primary key, since its id column doubles as the company id. This is what gives me that notion:
ON company_table.id = log_table.id
AND company_table.department = log_table.dept
If either or both of these columns make up your primary key (which I suspect they do not), they are not the correct choice.
So MySQL does a good job of retrieving rows in the order they are found, but extra work has to be done when ordering because you can potentially have colliding values, and on top of that you are joining on two columns.
If the above is true, you can try this. Back up before trying, or try it on a dev copy.
Try adding a new column for a primary key and ordering on that:
ALTER TABLE log_table DROP PRIMARY KEY;
ALTER TABLE log_table ADD pk_column INT AUTO_INCREMENT PRIMARY KEY;
then ORDER BY pk_column DESC.
If you have specified another column in log_table as your primary key, order by on that instead.
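Before restructuring anything, it may help to confirm where the time goes. As a rough diagnostic sketch (using a trimmed-down version of the query above, with the column names from the question), run EXPLAIN and look for "Using filesort" in the Extra column; if it appears, the sort is not being served by an index:
EXPLAIN
SELECT company_table.some_values,
log_table.somevalues
FROM company_table
JOIN log_table
ON company_table.id = log_table.id
AND company_table.department = log_table.dept
ORDER BY log_table.id DESC;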

Related

How to optimise change history data for MySQL

The previous table this data was stored in approached 3-4 GB, and the data wasn't compressed before or after storage. I'm not a DBA, so I'm a little out of my depth on a good strategy.
The table is to log changes to a particular model in my application (user profiles), but with one tricky requirement: we should be able to fetch the state of a profile at any given date.
Data (single table):
id, username, email, first_name, last_name, website, avatar_url, address, city, zip, phone
The only two requirements:
be able to fetch a list of changes for a given model
be able to fetch state of model on a given date
Previously, all of the profile data was stored for a single change, even if only one column was changed. But to get a 'snapshot' for a particular date was easy enough.
My first couple of solutions in optimising the data structure:
(1) only store changed columns. This would drastically reduce data stored, but would make it quite complicated to get a snapshot of data. I'd have to merge all changes up to a given date (could be thousands), then apply that to a model. But that model couldn't be a fresh model (only changed data is stored). To do this, I'd have to first copy over all data from current profiles table, then to get snapshot apply changes to those base models.
(2) store whole of data, but convert to a compressed format like gzip or binary or whatnot. This would remove ability to query the data other than to obtain changes. I couldn't, for example, fetch all changes where email = ''. I would essentially have a single column with converted data, storing the whole of the profile.
Then, I would want to use relevant MySQL table options, like the ARCHIVE storage engine, to further reduce space.
So my question is, are there any other options which you feel are a better approach than 1/2 above, and, if not, which would be better?
First of all, I wouldn't worry at all about a 3 GB table (unless it grew to this size in a very short period of time). MySQL can take it. Space shouldn't be a concern; keep in mind that a 500 GB hard disk costs about 4 man-hours of pay (in my country).
That being said, in order to lower your storage requirements, create one table for each field of the table you want to monitor. Assuming a profile table like this:
CREATE TABLE profile (
profile_id INT PRIMARY KEY,
username VARCHAR(50),
email VARCHAR(50) -- and so on
);
... create two history tables:
CREATE TABLE profile_history_username (
profile_id INT NOT NULL,
username VARCHAR(50) NOT NULL, -- same type as profile.username
changedAt DATETIME NOT NULL,
PRIMARY KEY (profile_id, changedAt),
CONSTRAINT profile_id_username_fk
FOREIGN KEY profile_id_fkx (profile_id)
REFERENCES profile(profile_id)
);
CREATE TABLE profile_history_email (
profile_id INT NOT NULL,
email VARCHAR(50) NOT NULL, -- same type as profile.email
changedAt DATETIME NOT NULL,
PRIMARY KEY (profile_id, changedAt),
CONSTRAINT profile_id_fk
FOREIGN KEY profile_id_email_fkx (profile_id)
REFERENCES profile(profile_id)
);
Every time you change one or more fields in profile, log the change in each relevant history table:
START TRANSACTION;
-- lock all tables
SELECT @now := NOW()
FROM profile
JOIN profile_history_email USING (profile_id)
WHERE profile_id = [a profile_id]
FOR UPDATE;
-- update main table, log change
UPDATE profile SET email = [new email] WHERE profile_id = [a profile_id];
INSERT INTO profile_history_email VALUES ([a profile_id], [new email], @now);
COMMIT;
You may also want to set appropriate AFTER triggers on profile so as to populate the history tables automatically.
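For example, a minimal sketch of such a trigger for the email field, assuming the profile and profile_history_email tables defined above (not part of the original answer, so adapt as needed):
DELIMITER //
CREATE TRIGGER profile_email_history_au
AFTER UPDATE ON profile
FOR EACH ROW
BEGIN
-- log only when the email actually changed (NULL-safe comparison)
IF NOT (NEW.email <=> OLD.email) THEN
INSERT INTO profile_history_email (profile_id, email, changedAt)
VALUES (NEW.profile_id, NEW.email, NOW());
END IF;
END//
DELIMITER ;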
Retrieving history information should be straightforward. In order to get the state of a profile at a given point in time, use this query:
SELECT
(
SELECT username FROM profile_history_username
WHERE profile_id = [a profile_id] AND changedAt = (
SELECT MAX(changedAt) FROM profile_history_username
WHERE profile_id = [a profile_id] AND changedAt <= [snapshot date]
)
) AS username,
(
SELECT email FROM profile_history_email
WHERE profile_id = [a profile_id] AND changedAt = (
SELECT MAX(changedAt) FROM profile_history_email
WHERE profile_id = [a profile_id] AND changedAt <= [snapshot date]
)
) AS email;
You can't compress the data without having to uncompress it in order to search it, which is going to severely damage performance. If the data really is changing that often (i.e. more than an average of 20 times per record), then it would be more efficient for storage and retrieval to structure it as a series of changes:
Consider:
CREATE TABLE profile (
id INT NOT NULL AUTO_INCREMENT,
PRIMARY KEY (id)
);
CREATE TABLE profile_data (
profile_id INT NOT NULL,
attr ENUM('username', 'email', 'first_name'
, 'last_name', 'website', 'avatar_url'
, 'address', 'city', 'zip', 'phone') NOT NULL,
value VARCHAR(255),
starttime DATETIME DEFAULT CURRENT_TIMESTAMP,
endtime DATETIME,
PRIMARY KEY (profile_id, attr, starttime),
INDEX(profile_id),
FOREIGN KEY (profile_id) REFERENCES profile(id)
);
When you add a new value for an existing record, set the endtime on the record it supersedes.
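A rough sketch of that step against the profile_data table above (the profile id and new value are made-up examples):
-- close the currently open row for this attribute
UPDATE profile_data
SET endtime = NOW()
WHERE profile_id = 123
AND attr = 'email'
AND endtime IS NULL;
-- then insert the new value as the open row
INSERT INTO profile_data (profile_id, attr, value, starttime)
VALUES (123, 'email', 'new@example.com', NOW());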
Then to get the value at a date $T:
SELECT p.id, attr, value
FROM profile p
INNER JOIN profile_data d
ON p.id=d.profile_id
WHERE $T>=starttime
AND $T<=IF(endtime IS NULL,$T, endtime);
Alternatively, just have a start time, and:
SELECT p.id, attr, value
FROM profile p
INNER JOIN profile_data d
ON p.id=d.profile_id
WHERE $T>=starttime
AND NOT EXISTS (SELECT 1
FROM profile_data d2
WHERE d2.profile_id=d.profile_id
AND d2.attr=d.attr
AND d2.starttime>d.starttime
AND d2.starttime>$T);
(which will be even faster with the MAX-CONCAT trick).
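For reference, a sketch of that trick against the same table (it assumes values never contain the '|' separator; the snapshot date is just an example):
SELECT profile_id, attr,
SUBSTRING_INDEX(MAX(CONCAT(starttime, '|', value)), '|', -1) AS value_at_t
FROM profile_data
WHERE starttime <= '2013-07-01'
GROUP BY profile_id, attr;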
But if the data is not changing with that frequency then keep it in the current structure.
You need a slowly changing dimension:
I will do this only for e-mail and telephone so you get the idea (pay attention to the fact that I use two keys: one that is unique within the table, and another that is unique to the user it concerns. That is, the table key identifies the record, and the user key identifies the user):
table_id, user_id, email, telephone, created_at, inactive_at, is_current
1, 1, mario@yahoo.it, 123456, 2012-01-02, 2013-04-01, no
2, 2, erik@telecom.de, 123457, 2012-01-03, 2013-02-28, no
3, 3, vanessa@o2.de, 1234568, 2012-01-03, null, yes
4, 2, erik@telecom.de, 123459, 2012-02-28, null, yes
5, 1, super.mario@yahoo.it, 654321, 2013-04-01, 2013-04-02, no
6, 1, super.mario@yahoo.it, 123456, 2013-04-02, null, yes
most recent state of the database
select * from FooTable where inactive_at is null
or
select * from FooTable where is_current = 'yes'
All changes to mario (mario is user_id 1)
select * from FooTable where user_id = 1;
All changes between 1 Jan 2013 and 1 May 2013
select * from FooTable where created_at between '2013-01-01' and '2013-05-01';
and if you need to compare with the old versions (with the help of a stored procedure, Java, or PHP code... you choose):
select * from FooTable where inactive_at between '2013-01-01' and '2013-05-01';
if you want you can do a fancy sql statement
select f1.table_id, f1.user_id,
case when f1.email = f2.email then 'NO_CHANGE' else concat(f1.email , ' -> ', f2.email) end,
case when f1.telephone = f2.telephone then 'NO_CHANGE' else concat(f1.telephone , ' -> ', f2.telephone) end
from FooTable f1 inner join FooTable f2
on(f1.user_id = f2.user_id)
where f2.created_at in
(select max(f3.created_at) from Footable f3 where f3.user_id = f1.user_id
and f3.created_at < f1.created_at and f1.user_id=f3.user_id)
and f1.created_at between '2013-01-01' and '2013-05-01' ;
As you can see, a juicy query to compare each user row with the previous user row...
the state of the database on 2013-03-01
select * from FooTable where table_id in
(select max(table_id) from FooTable where inactive_at <= '2013-03-01' group by user_id
union
select table_id from FooTable where inactive_at is null group by user_id having count(table_id) = 1 );
I think this is the easiest way to implement what you want... you could build a relational model with many more tables, but then it would be a pain in the arse to query it.
Your database is not big enough to worry about; I work every day with an even bigger one. Now tell me: is the money you save on a new server worth the time you spend on a super-complex relational model?
BTW if the data changes too fast, this approach cannot be used...
BONUS optimization:
create indexes on created_at, inactive_at, user_id, and on the pair (a sketch follows below)
apply partitioning (both horizontal and vertical)
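A sketch of those indexes (assuming "the pair" means (user_id, created_at); adjust to your real query patterns):
CREATE INDEX idx_foo_created_at ON FooTable (created_at);
CREATE INDEX idx_foo_inactive_at ON FooTable (inactive_at);
CREATE INDEX idx_foo_user_id ON FooTable (user_id);
CREATE INDEX idx_foo_user_created ON FooTable (user_id, created_at);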
If you put all occurring changes into different tables, then whenever you need an instance at some date you can join them and reconstruct it by comparing dates. For example, if you want the instance at 1 July, run a query with a condition where the date is less than or equal to 1 July, order it so the most recent change comes first, and limit the count to 1; that way the joins produce exactly the instance as it was on 1 July. In this manner you can even figure out the most frequently updated module.
Also, if you want to keep all the data flat, try range partitioning on the basis of month; that way MySQL will handle it pretty easily.
Note: by date I mean storing the unix timestamp of the date; it is much easier to compare.
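As a rough illustration only (table name, column names, and dates are assumptions, not from the question), monthly range partitioning over a unix-timestamp column could look like this:
CREATE TABLE profile_change (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
profile_id INT UNSIGNED NOT NULL,
changed_at INT UNSIGNED NOT NULL, -- unix timestamp, as suggested above
PRIMARY KEY (id, changed_at) -- the partitioning column must be part of every unique key
)
PARTITION BY RANGE (changed_at) (
PARTITION p2013_06 VALUES LESS THAN (UNIX_TIMESTAMP('2013-07-01')),
PARTITION p2013_07 VALUES LESS THAN (UNIX_TIMESTAMP('2013-08-01')),
PARTITION pmax VALUES LESS THAN MAXVALUE
);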
I'll offer one more solution just for variety.
Schema
PROFILE
id INT PRIMARY KEY,
username VARCHAR(50) NOT NULL UNIQUE
PROFILE_ATTRIBUTE
id INT PRIMARY KEY,
profile_id INT NOT NULL FOREIGN KEY REFERENCES PROFILE (id),
attribute_name VARCHAR(50) NOT NULL,
attribute_value VARCHAR(255) NULL,
created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
replaced_at DATETIME NULL
For all attributes you are tracking, simply add PROFILE_ATTRIBUTE records when they are updated, and mark the previous attribute record with the DATETIME it was replaced at.
Select Current Profile
SELECT *
FROM PROFILE p
LEFT JOIN PROFILE_ATTRIBUTE pa
ON p.id = pa.profile_id
WHERE p.username = 'username'
AND pa.replaced_at IS NULL
Select Profile At Date
SELECT *
FROM PROFILE p
LEFT JOIN PROFILE_ATTRIBUTE pa
ON p.id = pa.profile_id
WHERE p.username = 'username'
AND pa.created_at < '2013-07-01'
AND '2013-07-01' <= IFNULL(pa.replaced_at, NOW())
When Updating Attributes
Insert the new attribute
Update the previous attribute's replaced_at value
It would probably be important that the created_at for a new attribute match the replaced_at for the corresponding old attribute. This would be so that there is an unbroken timeline of attribute values for a given attribute name.
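A rough sketch of those two steps (the profile id and values are assumed examples; using a shared @now keeps created_at and replaced_at aligned, and id generation for the new row is omitted):
SET @now := NOW();
-- close the current attribute record
UPDATE PROFILE_ATTRIBUTE
SET replaced_at = @now
WHERE profile_id = 42
AND attribute_name = 'email'
AND replaced_at IS NULL;
-- insert the replacement with a matching created_at
INSERT INTO PROFILE_ATTRIBUTE (profile_id, attribute_name, attribute_value, created_at)
VALUES (42, 'email', 'new@example.com', @now);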
Advantages
Simple two-table architecture (I personally don't like a table-per-field approach)
Can add additional attributes with no schema changes
Easily mapped into ORM systems, assuming an application lives on top of this database
Could easily see the history for a certain attribute_name over time.
Disadvantages
Integrity is not enforced. For example, the schema doesn't prevent multiple records with a NULL replaced_at for the same attribute_name... perhaps this could be enforced with a two-column UNIQUE constraint
Let's say you add a new field in the future. Existing profiles would not select a value for the new field until they save a value to it. This is opposed to the value coming back as NULL if it were a column. This may or may not be an issue.
If you use this approach, be sure you have indexes on the created_at and replaced_at columns.
There may be other advantages or disadvantages. If commenters have input, I'll update this answer with more information.

How to return rows listed in descending order of COUNT(*)?

I have a table called foo with these fields:
- id
- type
- parentId
I want to select a list of parent IDs, in descending order of their COUNT(*), i.e. how many times they appear in the table. Something like this:
SELECT DISTINCT parentId FROM `foo`
ORDER BY (COUNT(parentId) DESC where parentId = parentId)
How can this be done in the most efficient way and putting the least load on the server?
There can be thousands to hundreds of thousands of records in the table, so manually going through each record is not acceptable.
Simply apply a GROUP BY clause; assuming you have an index, FOREIGN KEY, or PRIMARY KEY on parentId, the performance should be quite good. (parentId looks like it is likely a FOREIGN KEY, so be sure to define the constraint, which enforces indexing.)
SELECT `parentId`
FROM `foo`
GROUP BY `parentId`
ORDER BY COUNT(*) DESC
How can this be done in the most efficient way and putting the least load on the server?
The key phrase is the most efficient way.
Not a COUNT() for sure; the most efficient approach is to read a field in which you store the count result. You can maintain it with a trigger or update it after each insert (a sketch follows below).
Especially when
There can be thousands to hundreds of thousands of records in the table
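A hedged sketch of that counter approach (the summary table and trigger names are my own, not from the answer; deletes and updates of parentId would need matching triggers):
CREATE TABLE foo_parent_counts (
parentId INT NOT NULL PRIMARY KEY,
cnt INT NOT NULL DEFAULT 0
);
DELIMITER //
CREATE TRIGGER foo_count_ai AFTER INSERT ON foo
FOR EACH ROW
BEGIN
INSERT INTO foo_parent_counts (parentId, cnt)
VALUES (NEW.parentId, 1)
ON DUPLICATE KEY UPDATE cnt = cnt + 1;
END//
DELIMITER ;
-- reading the list is then a simple indexed scan of the summary table
SELECT parentId FROM foo_parent_counts ORDER BY cnt DESC;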

Realtime Performant Tag Search in MySQL or Redis

Problem Description:
A tag (tags) can be associated with arbitrary objects through a junction table (tagged_as). For a specific object type (specific_object), select the union or intersection of all of the objects associated with a series of tags, order the results by a numeric column on the object and limit the results for pagination purposes.
Contrived Schema:
CREATE TABLE tags (
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(45) NOT NULL,
PRIMARY KEY (id)
);
CREATE TABLE specific_object(
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(45) NOT NULL,
vote_sum INT NOT NULL DEFAULT 0,
PRIMARY KEY (id)
);
CREATE TABLE tagged_as(
id INT NOT NULL AUTO_INCREMENT,
tag_id INT NOT NULL,
content_type_id INT NOT NULL,
object_id INT NOT NULL,
PRIMARY KEY (id)
);
For the purposes of this example, I am omitting many other columns in the specific_object table.
Table Row Counts:
tags: 12,297
tagged_as: 46,642,064
specific_object: 2,444,944
Naive MySQL Solution:
SELECT
specific_object.*
FROM
specific_object
JOIN
tagged_as
ON
specific_object.id = tagged_as.object_id
AND
tagged_as.content_type_id = <SPECIFIC_OBJECT_CONTENT_TYPE_ID>
WHERE
tagged_as.tag_id = <TAG_ONE_ID>
AND
tagged_as.tag_id = <TAG_TWO_ID>
...
ORDER BY specific_object.vote_sum DESC
LIMIT 50
The problem with this solution is that MySQL cannot utilize an index to resolve the ORDER BY clause because the "key used to fetch the rows is not the same as the one used in the ORDER BY" (http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html). Execution time: 20+ seconds
Naive Redis Solution:
for each specific object: SET specific_object:<ID> <ID>
for each tagged as: SADD tag:<TAG ID> specific_object:<ID>
specific_object_ids = SUNION tag:<TAG_ONE_ID> tag:<TAG_TWO_ID> ...
specific_object_ids = SINTER tag:<TAG_ONE_ID> tag:<TAG_TWO_ID> ...
SELECT * FROM specific_object WHERE id IN (<specific_object_ids>) ORDER BY vote_sum DESC
The problem with this solution is that the ORDER BY still has to be done by MySQL. Also, a tag could potentially be associated with hundreds of thousands of specific objects, which is a lot of data to move around. Execution Time: 20+ seconds for larger tags
Possible Solutions I Haven't Tried Yet
Denormalize
Perhaps move the vote_sum column into the tagged_as table, removing the need for the join to do the ORDER BY. This might have the same issue as the naive solution.
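For the single-tag case, a sketch of what that denormalization might look like (column and index names are mine; keeping the duplicated vote_sum in sync is the extra cost):
ALTER TABLE tagged_as ADD COLUMN vote_sum INT NOT NULL DEFAULT 0;
ALTER TABLE tagged_as ADD INDEX idx_tag_type_vote (tag_id, content_type_id, vote_sum);
SELECT object_id
FROM tagged_as
WHERE tag_id = 123            -- example tag id
AND content_type_id = 1      -- example content type id
ORDER BY vote_sum DESC
LIMIT 50;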
Redis Sorted Sets
for each specific object: SET specific_object:<ID> <ID>
for each specific object: SET specific_object_weight:<ID> <VOTE_SUM>
for each tagged as: SADD tag:<TAG_ID> specific_object:<ID>
SINTERSTORE result:<timestamp> tag:<TAG_ONE_ID> tag:<TAG_TWO_ID> ...
SORT result:<timestamp> BY specific_object_weight_* LIMIT 0 50
specific_object_ids = SMEMBERS result:<timestamp>
DEL result:<timestamp>
SELECT * FROM specific_object WHERE id IN (<specific_object_ids>)
Move all of the sorting into Redis. This adds extra complexity because now you have to maintain the vote_sum values in Redis as well. Not sure if this would be fast enough.
Question:
Are either of the possible solutions viable? Are there other solutions or different technologies that would help? I am open to pretty significant changes to solve this problem.
When the problem has been the performance of a DESC sort, what I've done in the past is store the value of -1*vote_sum in a separate column and then ORDER BY that column ASC. I've been able to get MySQL to use an index to do the sort on that column.
You could either store a redundant column (both vote_sum and neg_vote_sum), or just store the negative value and multiply it by -1 when you need to return it as a positive value.
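A rough sketch of that idea (column and index names are mine, not from the question):
ALTER TABLE specific_object ADD COLUMN neg_vote_sum INT NOT NULL DEFAULT 0;
UPDATE specific_object SET neg_vote_sum = -1 * vote_sum;
CREATE INDEX idx_neg_vote_sum ON specific_object (neg_vote_sum);
SELECT id, name, -1 * neg_vote_sum AS vote_sum
FROM specific_object
ORDER BY neg_vote_sum ASC   -- ascending index scan instead of a DESC sort
LIMIT 50;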
But I'm suspicious that the source of your performance issue is the sort operation. How does the performance of the statement compare, as a test, when you do an ORDER BY vote_sum ASC ?

mysql left join, limit and sorting

I have a question. I need to make a left join between two tables and get only the first result (I mean the first record in table A that doesn't match anything in table B).
This is an example
create table a (
id int not null auto_increment primary key,
name varchar(50),
surname varchar(50),
prov char(2)
) engine = myisam;
insert into a (name,surname,prov)
values ('aaa','aaa','ss'),('bbb','bbb','ca'),('ccc','ccc','mi'),('ddd','ddd','mi'),('eee','eee','to'),
('fff','fff','mi'),('ggg','ggg','ss'),('hhh','hhh','mi'),('jjj','jjj','ss'),('kkk','kkk','to');
create table b (
id int not null auto_increment primary key,
id_name int
) engine = myisam;
insert into b (id_name) values (3),(4),(8),(5),(10),(1);
Query A:
select a.*
from a
left join b
on a.id = b.id_name
where b.id_name is null and a.prov = 'ss'
order by a.id
limit 1
Query B:
select a.*
from a
left join b
on a.id = b.id_name
where b.id_name is null and a.prov = 'ss'
limit 1
Both queries give me the right result, that is, the record with id = 7.
I want to know whether I can rely on query B even without specifying a sort on id, or whether it's just by chance that I get the right result.
I ask because on a large recordset (more than 10 million rows), the query without sorting gives me one record immediately, while with sorting it takes more than 20 seconds even though a.id is the primary key.
Thanks in advance.
You can't rely on query B. MySQL just returned whatever it found fastest to return.
Is there an index on table b, column id_name? If not, create it and tell us what you get (I mean how fast). It doesn't matter that you are looking for non-matching rows; the JOIN has to be performed before MySQL can test whether there is a match or not.
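A sketch of that index, plus one on the driving table that may also help query A (the second index is my assumption, using the column names from the example schema):
CREATE INDEX idx_b_id_name ON b (id_name);
CREATE INDEX idx_a_prov_id ON a (prov, id);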

How to fill in the "holes" in auto-increment fields?

I've read some posts about this but none cover this issue.
I guess it's not possible, but I'll ask anyway.
I have a table with more than 50,000 records. It's an old table on which various insert/delete operations have taken place.
As a result, there are various 'holes', some of about 300 records. E.g.: ..., 1340, 1341, 1660, 1661, 1662, ...
The question is. Is there a simple/easy way to make new inserts fill these 'holes'?
I agree with @Aaron Digulla and @Shane N. The gaps are meaningless. If they DO mean something, that is a flawed database design. Period.
That being said, if you absolutely NEED to fill these holes, AND you are running at least MySQL 3.23, you can utilize a TEMPORARY TABLE to create a new set of IDs. The idea is that you select all of your current IDs, in order, into a temporary table, like so:
CREATE TEMPORARY TABLE NewIDs
(
NewID INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
OldID INT UNSIGNED
);
INSERT INTO NewIDs (OldID)
SELECT
Id
FROM
OldTable
ORDER BY
Id ASC;
This will give you a table mapping your old Id to a brand new Id that is sequential in nature, due to the AUTO_INCREMENT property of the NewID column.
Once this is done, you need to update any other reference to the Id in "OldTable" and any foreign key it utilizes. To do this, you will probably need to DROP any foreign key constraints you have, update any reference in tables from the OldId to the NewId, and then re-institute your foreign key constraints.
However, I would argue that you should not do ANY of this, and just understand that your Id field exists for the sole purpose of referencing a record, and should NOT have any specific relevance.
UPDATE: Adding an example of updating the Ids
For example:
Let's say you have the following 2 table schemas:
CREATE TABLE Parent
(
ParentId INT UNSIGNED AUTO_INCREMENT,
Value INT UNSIGNED,
PRIMARY KEY (ParentId)
);
CREATE TABLE Child
(
ChildId INT UNSIGNED AUTO_INCREMENT,
ParentId INT UNSIGNED,
PRIMARY KEY(ChildId),
FOREIGN KEY(ParentId) REFERENCES Parent(ParentId)
);
Now, the gaps are appearing in your Parent table.
In order to update your values in Parent and Child, you first create a temporary table with the mappings:
CREATE TEMPORARY TABLE NewIDs
(
Id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
ParentID INT UNSIGNED
);
INSERT INTO NewIDs (ParentID)
SELECT
ParentId
FROM
Parent
ORDER BY
ParentId ASC;
Next, we need to tell MySQL to ignore the foreign key constraint so we can correctly UPDATE our values. We will use this syntax:
SET foreign_key_checks = 0;
This causes MySQL to ignore foreign key checks when updating the values, but it will still enforce that the correct value type is used (see the MySQL reference for details).
Next, we need to update our Parent and Child tables with the new values. We will use the following UPDATE statement for this:
UPDATE
Parent,
Child,
NewIDs
SET
Parent.ParentId = NewIDs.Id,
Child.ParentId = NewIDs.Id
WHERE
Parent.ParentId = NewIDs.ParentId AND
Child.ParentId = NewIDs.ParentId;
We now have updated all of our ParentId values correctly to the new, ordered Ids from our temporary table. Once this is complete, we can re-institute our foreign key checks to maintain referential integrity:
SET foreign_key_checks = 1;
Finally, we will drop our temporary table to clean up resources:
DROP TABLE NewIDs;
And that is that.
What is the reason you need this functionality? Your db should be fine with the gaps, and if you're approaching the max size of your key, just make it unsigned or change the field type.
You generally don't need to care about gaps. If you're getting to the end of the datatype for the ID it should be relatively easy to ALTER the table to upgrade to the next biggest int type.
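For example, a hedged sketch of widening the key instead of filling gaps (table and column names assumed):
ALTER TABLE mytable MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;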
If you absolutely must start filling gaps, here's a query to return the lowest available ID (hopefully not too slowly):
SELECT MIN(table0.id)+1 AS newid
FROM table AS table0
LEFT JOIN table AS table1 ON table1.id=table0.id+1
WHERE table1.id IS NULL
(remember to use a transaction and/or catch duplicate key inserts if you need concurrent inserts to work.)
INSERT INTO prueba(id)
VALUES (
(SELECT IFNULL( MAX( id ) , 0 )+1 FROM prueba target))
IFNULL handles the NULL you get when the table has zero rows.
The target alias is added to avoid the MySQL error about using the INSERT's target table in the FROM clause.
There is a simple way but it doesn't perform well: Just try to insert with an id and when that fails, try the next one.
Alternatively, select an ID and when you don't get a result, use it.
If you're looking for a way to tell the DB to automatically fill the gaps, then that's not possible. Moreover, it should never be necessary. If you feel you need it, then you're abusing an internal technical key for something other than the single purpose it has: to allow you to join tables.
[EDIT] If this is not a primary key, then you can use this update statement:
update (
select *
from table
order by reg_id -- this makes sure that the order stays the same
)
set reg_id = x.nextval
where x is a new sequence which you must create. This will renumber all existing elements preserving the order. This will fail if you have foreign key constraints. And it will corrupt your database if you reference these IDs anywhere without foreign key constraints.
Note that during the next insert, the database will create a huge gap unless you reset the identity column.
As others have said, it doesn't matter, and if it does then something is wrong in your database design. But personally I just like them to be in order anyway!
Here is some SQL that will recreate your IDs in the same order, but without the gaps.
It is done first in a temp_id field (which you will need to create), so you can see that it is all good before overwriting your old IDs. Replace Tbl and id as appropriate.
SELECT @i:=0;
UPDATE Tbl
JOIN
(
SELECT id
FROM Tbl
ORDER BY id
) t2
ON Tbl.id = t2.id
SET temp_id = @i:=@i+1;
You will now have a temp_id field with all of your shiny new IDs. You can make them live by simply:
UPDATE Tbl SET id = temp_id;
And then dropping your temp_id column.
I must admit I'm not quite sure why it works, since I would have expected the engine to complain about duplicate IDs, but it didn't when I ran it.
You might want to clean up gaps in a priority column.
The approach below gives an auto-incrementing value for the priority.
The extra left join on the same table makes sure it is added in the same order as (in this case) the existing priority.
SET @a:=0;
REPLACE INTO footable
(id,priority)
(
SELECT tbl2.id, @a
FROM footable as tbl
LEFT JOIN footable as tbl2 ON tbl2.id = tbl.id
WHERE (select @a:=@a+1)
ORDER BY tbl.priority
)