MySQL temporarily suppress unique index - mysql

I have a table with a unique index on two columns, id_parent and sort_order to be precise:
+----+-----------+------------+-------------+-------------+-------------+
| id | id_parent | sort_order | some_data   | other_data  | more_data   |
+----+-----------+------------+-------------+-------------+-------------+
|  1 |         1 |          1 | lorem ipsum | lorem ipsum | lorem ipsum |
|  2 |         1 |          2 | lorem ipsum | lorem ipsum | lorem ipsum |
|  3 |         1 |          3 | lorem ipsum | lorem ipsum | lorem ipsum |
+----+-----------+------------+-------------+-------------+-------------+
Now I want to update them, both their data and their sort_order, in one go. sort_order would change from 1 - 2 - 3 to, for example, 2 - 3 - 1.
But when I start running UPDATE statements, the unique index blocks me, just as expected, saying that I can't have two rows with id_parent = 1 and sort_order = 2.
Well, I could set it to 4 for now, update the other rows in the correct order, and then set this one.
But then I would have to run an extra statement, and most probably add additional logic to my scripting language to determine the correct order of updates.
I also use an ORM, and that makes it even more inconvenient.
My question now: is there some method to make MySQL temporarily ignore this index? Like starting a special transaction, in which the index would only be checked right before committing it?

As far as I know that isn't possible.
The closest thing I've seen is that you can disable non-unique keys on MyISAM tables, but not on InnoDB and not on unique keys.
However, to save you an update or two: there is no need to have exactly the numbers 1, 2 and 3. You could just as well have 4, 5 and 6, right? You only use the column in an ORDER BY and nothing else, so the exact numbers aren't important. It will even save you an update if you're clever. From your example:
update yourtable set sort_order = 4 where sort_order = 1 and id = 1 and id_parent = 1;
New sort order is 2, 3, 1. And in just one update.
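Reading the rows back then gives that order (a quick sketch, reusing the placeholder table name from above):
select id, sort_order
from yourtable
where id_parent = 1
order by sort_order;
-- returns id 2 (sort_order 2), id 3 (sort_order 3), id 1 (sort_order 4)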

‘But when I start running update statements…’ – I understand you tried updating the values using multiple UPDATE statements, like in a loop. Is that so? How about updating them in one go? Like this, for example:
UPDATE atable
SET sort_order = CASE sort_order WHEN 3 THEN 1 ELSE sort_order + 1 END
WHERE id_parent = 1
  AND sort_order BETWEEN 1 AND 3;
A single statement is atomic, so, by the time this update ends, the values of sort_order, although changed, remain unique.
I can't test this in MySQL, sorry, but it definitely works in SQL Server, and I believe the behaviour respects the standards.

MyISAM
For MyISAM tables, you can simply add this line at the start of your script:
SET UNIQUE_CHECKS=0;
It's common to use this in conjunction with:
SET FOREIGN_KEY_CHECKS=0;
The UNIQUE_CHECKS variable is mentioned in the docs here:
http://dev.mysql.com/doc/refman/5.0/en/converting-tables-to-innodb.html
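In practice you would wrap the bulk operation so the checks are switched back on afterwards; a minimal sketch:
SET UNIQUE_CHECKS=0;
SET FOREIGN_KEY_CHECKS=0;
-- ... run the updates here ...
SET FOREIGN_KEY_CHECKS=1;
SET UNIQUE_CHECKS=1;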
InnoDB
There seem to be reports that the above commands don't work with the InnoDB engine. If so, then you can try dropping the UNIQUE index temporarily and then adding it back.
See: How to remove unique key from mysql table
Feel free to edit this post and improve it with a code example if you have one.
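For instance, a rough sketch of the drop-and-recreate approach (the table and index names here are placeholders, not taken from the question):
ALTER TABLE yourtable DROP INDEX uq_parent_sort;
-- ... run the reordering updates here ...
ALTER TABLE yourtable ADD UNIQUE INDEX uq_parent_sort (id_parent, sort_order);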

Related

How do you batch SELECT statements when you can't rely on the IDs to be in literal order?

What I mean by literal order is that, although the IDs are auto-increment, through business logic it might end up that 8 comes after 4 when 5 should have been there. That is to say, if a row gets deleted, there is no re-indexing of the IDs.
This is how my rows look (table name is wp_posts):
+-----+-------------+----+--+--+--+
| ID  | post_author | .. |  |  |  |
+-----+-------------+----+--+--+--+
| 4   | ..          |    |  |  |  |
| 8   | ..          |    |  |  |  |
| 124 | ..          |    |  |  |  |
| 672 | ..          |    |  |  |  |
| 673 | ..          |    |  |  |  |
| 674 | ..          |    |  |  |  |
+-----+-------------+----+--+--+--+
ID is an int with the auto-increment attribute, but when a post is deleted, there is no re-assignment of IDs. The row simply gets deleted, and because the column is auto-increment you can still assume that, going down the table, the IDs that come after the one you're looking at are always bigger than the ones before it.
I'm querying for ID: SELECT ID FROM wp_posts to get a list of all the IDs I need. Now, it just so happens that I need to batch all of this, using AJAX requests because once I retrieve the IDs, I need to operate on them.
Thing is, I don't really understand how to pass my data back to AJAX. What LIMIT does is, if I provide 2 arguments, such as: SELECT ID FROM wp_posts LIMIT 1,3, it'll return back 4,8,124 because it looks at row number. But what do I do on the next call? Yes, the first call always starts with 1, but once I need to launch the second AJAX request to perform yet another SELECT, how do I know where I should start? In my case, I'd want to start again at 4, so, my second query would be SELECT ID FROM wp_posts LIMIT 4, 7 and so on.
Do I really need to send that counter (even if I can automate it, since, you see, it's an increment of 3) back?
Is there no way for SQL to handle this automatically?
You have many confusions in your question. Let me try to clear up some basic ones.
First, the auto-incremented key is the primary key for the table. You do not need to worry about gaps. In fact, the key should basically be meaningless. It fulfills the following:
It is guaranteed to be unique.
It is guaranteed to be in insertion order.
Gaps are allowed and of no concern. There is no re-indexing, and it would be a bad idea because:
Primary keys uniquely identify each row and this mapping should be consistent across time.
Primary keys are used in other tables to refer to values, so re-indexing would either invalidate those relationships or require massive changes to many tables.
Re-indexing presupposes that the value means something, when it doesn't.
Second, a query such as:
SELECT ID
FROM wp_posts
LIMIT 1, 3;
Can return any three rows. Why? Because you have not specified an ORDER BY, and SQL result sets without ORDER BY are unordered. There are no guarantees. So you should always be in the habit of using an ORDER BY.
Third, if you want to essentially "page" through results, then use the OFFSET feature in LIMIT (as you have above):
SELECT ID
FROM wp_posts
ORDER BY ID
LIMIT #offset, 3;
This will allow you to change the #offset placeholder (substituted by your application, since LIMIT only accepts literal numbers in plain SQL) and jump to whichever rows you want.
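For example, with the sample IDs above, successive pages would be fetched like this (a sketch, with the placeholder replaced by concrete offsets):
SELECT ID FROM wp_posts ORDER BY ID LIMIT 0, 3;  -- first page: 4, 8, 124
SELECT ID FROM wp_posts ORDER BY ID LIMIT 3, 3;  -- second page: 672, 673, 674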
First query:
SELECT ID FROM wp_posts ORDER BY ID LIMIT 3
This returns 4,8,124 as you said. In your client, save the largest ID value in a variable.
Subsequent queries:
SELECT ID FROM wp_posts WHERE ID > ? ORDER BY ID LIMIT 3
Send a parameter into this query using the greatest ID value from the previous result. It's still in a variable.
This also helps make the query faster, because it doesn't have to skip all those initial rows every time. Paging through a large dataset using LIMIT/OFFSET is pretty inefficient. SQL has to actually read all those rows even though it's not going to return them.
But if you use WHERE ID > ? then SQL can efficiently start the scan in the right place, on the first row that would be included in the result.
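With the sample data above, the second call would then be, concretely (a sketch):
SELECT ID FROM wp_posts WHERE ID > 124 ORDER BY ID LIMIT 3;   -- returns 672, 673, 674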
It seems you want to return the first three rows of your query ordered by the currently existing ID values (whatever they are after all the DML statements applied to the table wp_posts).
Then consider using an auxiliary iteration variable @i to provide an ordered set of integers starting from 1 and increasing as 2, 3, ... without any gaps:
select t.*
from
(
  select @i := @i + 1 as rownum, t1.*
  from wp_posts t1
  join (select @i := 0) t2
) t
order by rownum
limit 0, 3;

Run UPDATE query by a custom order

Is there a way to update all records in the table in a specific custom order? I specifically mean a situation where the actual order comes from the 'outside' (e.g. as a POST value).
For example, have a table
id | title | order_idx
----------------------
1 | lorem | 1
2 | ipsum | 2
3 | dolor | 3
I have a form that submits a hidden field, carrying ID values in this order: 2, 3, 1
I want to update the table so that order_idx gets an incrementing number for each successive row, going by the ID order supplied by the form field. So in this case, the end result should look like this:
id | title | order_idx
----------------------
1 | lorem | 3
2 | ipsum | 1
3 | dolor | 2
Can this be done in a single UPDATE query somehow, as opposed to running 3 queries (each with its own WHERE clause) in a PHP loop?
You can use conditional expressions in assignment statements, like so:
UPDATE t
SET order_idx = CASE id
    WHEN 2 THEN 1
    WHEN 3 THEN 2
    WHEN 1 THEN 3
    ELSE order_idx
END
WHERE ....
parameterized:
UPDATE t
SET order_idx = CASE id
    WHEN ? THEN 1
    WHEN ? THEN 2
    WHEN ? THEN 3
    ELSE order_idx
END
WHERE ....
In either case, the query will most likely need to be constructed dynamically to account for a varying number of items to order.
Since your comment indicates potentially hundreds...
MySQL has a limit to query length (reference).
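That limit is governed by the max_allowed_packet setting, which you can check with:
SHOW VARIABLES LIKE 'max_allowed_packet';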
For a large number, I would start recommending a different approach.
Step 1) CREATE TEMPORARY TABLE `newOrder` (id INT, new_order_idx INT);
Step 2) INSERT the ids and their new order into the temp table (see the sketch after this list).
Step 3) UPDATE t INNER JOIN newOrder AS n ON t.id = n.id SET t.order_idx = n.new_order_idx WHERE ...
Step 4) DROP TEMPORARY TABLE newOrder;
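For the sample data in the question, step 2 might look like this (a sketch; the values follow the submitted order 2, 3, 1):
INSERT INTO newOrder (id, new_order_idx) VALUES (2, 1), (3, 2), (1, 3);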
The process itself is no longer a single query; but the UPDATE is.
Note: If you have a unique key involving order_idx, I am not entirely sure either of these would work. On occasions when I have needed to maintain uniqueness, the usual solution is to shift the records to be adjusted to a completely different range in one step, and then to their new positions in a second one. (Something like UPDATE t SET order_idx = -1 * order_idx WHERE ... would work as a pre-Step 3 range shift in the second part of this answer.)
Use ON DUPLICATE KEY UPDATE like below:
INSERT INTO t (id, order_idx) VALUES (1,3),(2,1),(3,2)
ON DUPLICATE KEY UPDATE order_idx=VALUES(order_idx);
So you can set order_idx dynamically for every row:
INSERT INTO t (id, order_idx)
VALUES (1,order_list[0]),(2,order_list[1]),(3,order_list[2])
ON DUPLICATE KEY UPDATE order_idx=VALUES(order_idx);

Is it okay to have non sequential ids as primary keys for a table in your database?

I don't know enough about databases to find the right words to ask this question, so let me give an example to explain what I'm trying to do: Suppose I want the primary key for a table to be an ID I grab from an API, but the majority of those API requests result in 404 errors. As a result, my table would look like this:
I also don't know how to format a table-like structure on Stack Overflow, so this is going to be a rough visual:
API_ID_PK | name
------------------
1 | Billy
5 | Timmy
23 | Richard
54 | Jobert
104 | Broccoli
Is it okay for the IDs not to be sequential (i.e. not separated by exactly 1)? Or should I do this:
ID PK | API_ID | NAME
----------------------------------------
1 | 1 | Billy
2 | 5 | Timmy
3 | 23 | Richard
4 | 54 | Jobert
5 | 104 | Broccoli
Would the second table be more efficient for indexing reasons? Or is the first table perfectly fine? Thanks!
No, there won't be any effect on efficiency if you have non-consecutive IDs. In fact, MySQL (and other databases) allow you to set the auto_increment_increment variable to have the ID increment by more than 1. This is commonly used in multi-master setups.
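For illustration, a minimal sketch of that variable in action (session scope here; multi-master setups usually set it globally):
SET SESSION auto_increment_increment = 2;   -- new auto-increment IDs now step by 2
SHOW VARIABLES LIKE 'auto_increment_increment';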
It's fine to have non-sequential IDs. I regularly use GUIDs for IDs when dealing with enterprise software where multiple businesses could share the same object, and those are never sequential.
The one thing to watch out for is whether the numbers could end up the same. What determines the ID value you're storing?
If you have a clustered index (SQL Server) on an ID column and insert IDs with random values (like GUIDs), this can have a negative effect, because the physical order of the clustered index corresponds to the logical order. This can lead to a lot of index re-organisations. See: Improving performance of cluster index GUID primary key.
However, ordered but non-consecutive values (values not separated by 1) are not a problem for clustered indexes.
For non-clustered indexes the order doesn't matter. It is okay to insert random values for primary keys as long as they are unique.

MySQL and InnoDB - UPDATE with WHERE on non unique index - how are rows encountered?

What are the rules for which table rows are encountered when an UPDATE with a WHERE clause is performed on a non-unique indexed column?
I have a test table with col column as non-unique index:
id | col
---+----
 1 |  1
 2 |  2
 3 |  2
22 |  3
UPDATE tab SET col=1 WHERE col=1;
-- OR
UPDATE tab SET col=3 WHERE col=3;
-- OR
UPDATE tab SET col=2 WHERE col=2;
-- These updates encounter ONLY rows where col=1, col=3 or col=2.
Same table and same updates, but with one more record in the table where col=2:
id | col
---+----
 1 |  1
 2 |  2
 3 |  2
 4 |  2
22 |  3
UPDATE tab SET col=1 WHERE col=1;
-- OR
UPDATE tab SET col=3 WHERE col=3;
-- Both updates encounter ONLY rows where col=1 or col=3.
UPDATE tab SET col=2 WHERE col=2;
-- This update encounters ALL the rows in the table, even those where col is NOT 2.
-- WHY?
In short, every row encountered in the processing of an UPDATE is exclusively row-locked. This means that the locking impact of an UPDATE depends on how the query is processed to read the rows to be updated. If your UPDATE query uses no index, or a bad index, it may lock many or all rows. (Note that the order in which rows are locked also depends on the index used.) In your case, since your table is very small and you're materially changing the distribution of the rows in your index, it is choosing to use a full table scan for the query in question.
You can test the performance and behavior of most UPDATE queries by converting them to a SELECT and using EXPLAIN SELECT on them (in newer versions you can even EXPLAIN UPDATE).
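For instance, a rough sketch for the query above:
EXPLAIN SELECT * FROM tab WHERE col = 2;
-- If the "type" column shows ALL rather than ref/range, the statement performs a full table scan,
-- and the corresponding UPDATE will lock every row it reads.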
In short, though: you should have tables with a realistic distribution of data before testing performance or locking behavior, not a very small table with a few test rows.
There is a wonderful article out there; I believe it will answer your questions.
http://www.mysqlperformanceblog.com/2012/11/23/full-table-scan-vs-full-index-scan-performance/

MySQL Update Field with some prefix

I have a table with a number column whose values are prefixed with bok- and inv-:
id | number
1 | bok-1
2 | inv-3
3 | bok-2
4 | inv-2
5 | inv-10
6 | bok-3
How can the number field with the inv- prefix be sorted?
In this case, the result would be:
id | number
1 | bok-1
2 | inv-1
3 | bok-2
4 | inv-2
5 | inv-3
6 | bok-3
You could just use MySQL's SUBSTRING() function:
ORDER BY CAST(SUBSTRING(number, 5) AS SIGNED)
See it on sqlfiddle.
However, it would probably be better to store the prefix and integer parts in separate columns, if at all possible:
ALTER TABLE mytable
ADD COLUMN prefix ENUM('bok', 'inv'),
ADD COLUMN suffix INT;
UPDATE mytable SET
prefix = LEFT(number, 3),
suffix = SUBSTRING(number, 5);
ALTER TABLE mytable
DROP COLUMN number;
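After that restructuring, the numeric ordering from the first snippet becomes a plain (and indexable) sort; a sketch:
ALTER TABLE mytable ADD INDEX idx_suffix (suffix);  -- illustrative index name
SELECT * FROM mytable ORDER BY suffix;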
Basically, you should redesign your database structure. Unfortunately there is no other way to process this efficiently, since the database won't use an index across those dashes. Splitting the value into two fields is the most common practice; otherwise you will incur a table scan on every such ORDER BY clause.
Edit: In addition to the information from the discussion you had (https://chat.stackoverflow.com/rooms/13241/discussion-between-eggyal-and-gusdecool), it is clear that this is a wrong design and the operation you are asking for should not be executed at all.
It would be impossible both to realize it without creating a decent structure and to create a solution this way that would be legally OK.