Avoiding SQL deadlock in insert combined with select - mysql

I'm trying to insert pages into a table with a sort column that I auto-increment by 2000 in this fashion:
INSERT INTO pages (sort,img_url,thumb_url,name,img_height,plank_id)
SELECT IFNULL(max(sort),0)+2000,'/image/path.jpg','/image/path.jpg','name',1600,'3'
FROM pages WHERE plank_id = '3'
The trouble is that I trigger these inserts on the upload of images, so 5-10 of these queries run almost simultaneously. This triggers a deadlock on some files, for some reason.
Any idea what is going on?
Edit: I'm running MySQL 5.5.24 and InnoDB. The sort column has an index.

What I ended up doing myself was setting sort to 0 on insert, retrieving the id of the inserted row, and then setting sort to id*2000. But you can also try using transactions:
BEGIN;
INSERT INTO pages (sort,img_url,thumb_url,name,img_height,plank_id)
SELECT IFNULL(max(sort),0)+2000,'/image/path.jpg','/image/path.jpg','name',1600,'3'
FROM pages WHERE plank_id = '3';
COMMIT;
Note that not all MySQL client libraries support multi-queries, so you may have to execute the statements separately, but over a single connection.
Another approach is to lock the whole table while the INSERT executes, but this will make other queries queue up, since they have to wait until the insert finishes.
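For reference, a rough sketch of that table-locking variant, reusing the query from the question; note that the WRITE lock serializes all other access to pages while it is held:
LOCK TABLES pages WRITE;
INSERT INTO pages (sort, img_url, thumb_url, name, img_height, plank_id)
SELECT IFNULL(MAX(sort), 0) + 2000, '/image/path.jpg', '/image/path.jpg', 'name', 1600, '3'
FROM pages WHERE plank_id = '3';
UNLOCK TABLES;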

Related

MySQL PDO create and populate 1000 small tables in 3 seconds or less?

Is it possible? From a single process?
The DB is on a SATA disk.
I am using Ubuntu 14.04. All tables have 20-60 rows and 6 columns each.
I am using transactions.
The current sequence is:
Create table
Start transaction
Insert #1
Insert #2
...
Insert #n
Commit
Right now I am getting about 3-4 tables/second.
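For illustration, the sequence above as plain SQL; the table and column names here are made up:
CREATE TABLE t_0001 (c1 INT, c2 INT, c3 INT, c4 INT, c5 INT, c6 INT);
START TRANSACTION;
INSERT INTO t_0001 VALUES (1, 2, 3, 4, 5, 6);
INSERT INTO t_0001 VALUES (7, 8, 9, 10, 11, 12);
-- ... remaining rows ...
COMMIT;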
Conclusion: When I disabled logging, my performance became similar to phpMyAdmin's. So, as Rick James suggested, I guess there is no way to achieve further improvements without faster storage.
On a spinning drive, you can get about 100 operations per second. CREATE TABLE might be slower since it involves multiple file operations in the OS. So, I would expect 1000 CREATE TABLEs to take more than 10 seconds. That's on Ubuntu; longer on Windows.
It is usually poor schema design to make multiple tables that are identical; instead have a single table with an extra column to distinguish the subsets.
INSERTing 40K rows:
40K single-row INSERTs with autocommit=ON -- 400 seconds.
1000 multi-row INSERTs of 20-60 rows each, again COMMITted after each statement -- 10 seconds.
A single INSERT with 40K rows (if you don't blow out some other limitation) -- possibly less than 1 second.
Do not use multi-statement queries; it is a potential security problem. Anyway, it won't help much.
For CREATE TABLE you could perform a multi-statement query (PDO supports this), so a single query can create several tables. For the INSERTs, you could use a bulk insert: prepare a single SQL INSERT query with repeated value tuples and execute it as one query.
The bulk insert is based on:
INSERT INTO your_table (col1, col2, ...)
VALUES (val1_1, val1_2, ...),
       (val2_1, val2_2, ...),
       ...;
You can then build a PDO query based on this technique. Because execution happens as a single statement rather than one statement per row, you can insert thousands of values in one query and get the result in a few seconds.
As the MySQL manual puts it: use the multiple-row INSERT syntax to reduce communication overhead between the client and the server if you need to insert many rows. This tip is valid for inserts into any table, not just InnoDB tables.

MySQL insert table1 update table2, subquery or transaction?

I want to insert or update, then insert a new log into another table.
I'm running a nifty little query to pull information from a staging table into other tables, something like
INSERT INTO ...
SELECT ...
ON DUPLICATE KEY UPDATE ...
What I'd like to do without PHP, or triggers (the lead dev doesn't like 'em, and I'm not that familiar with them either), is insert a new record into a logging table. This is needed for reporting on what data was updated or inserted and in which table.
Any hints or examples?
Note: I was doing this with PHP just fine, although it was taking about 4 hours to process 50K rows. Using the Laravel PHP framework, looping over each entry in staging, updating 4 other tables with the data, and writing a log for each one amounted to 8 queries per row (this was using Laravel models, not raw SQL). I was able to optimise by pushing logs into an array and batch processing, but you can't beat a 15-second processing time in MySQL by bypassing all that throughput. Now I'm hooked on doing awesome things the SQL way.
If you need to execute more than one statement, I prefer to use a transaction rather than a trigger to guarantee atomicity (part of ACID). The code below is a sample MySQL transaction:
START TRANSACTION;
UPDATE ...
INSERT ...
DELETE ...
-- other statements
COMMIT;
Statements inside a transaction will be executed all or nothing.
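A minimal sketch of that idea applied to this question; the staging, target, and log table names and columns are hypothetical:
START TRANSACTION;
INSERT INTO target_table (id, col1)
  SELECT id, col1 FROM staging
  ON DUPLICATE KEY UPDATE col1 = VALUES(col1);
-- ROW_COUNT() reports the rows affected by the previous statement
INSERT INTO change_log (table_name, rows_affected, logged_at)
  VALUES ('target_table', ROW_COUNT(), NOW());
COMMIT;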
If you want to do two things (insert the base row and insert a log row), you'll need two statements. The second can (and should) be a trigger.
It would be better to use a trigger; triggers are often used for logging purposes.
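A minimal sketch of such a logging trigger, again with hypothetical table names:
CREATE TRIGGER log_after_insert
AFTER INSERT ON target_table
FOR EACH ROW
  INSERT INTO change_log (table_name, row_id, logged_at)
  VALUES ('target_table', NEW.id, NOW());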

Mysql InnoDB Insertion Speed Too Slow?

I have a InnoDb table in Mysql that needs to handle insertions very quickly (everything else can be as slow as it wants). The table has no relations or indexes an id which is auto incremented and a time stamp.
I ran a script from multiple clients to insert as many records as possible in the allotted time (loop insert) and calculated the number of insertions per second.
I am only getting on average 200 insertions per second and I need around 20000. The performance doesn't change with the number of clients running the script or the machine the script is running on.
Is there any way to speed up the performance of these insertions?
------ edit --------
Thanks for all your help. I couldn't group any of the insertions together because, when we launch, the insertions will be coming from multiple connections. I ended up switching the engine for that table to MyISAM, and the insertions per second immediately shot up to 40,000.
Which primary key did you use?
InnoDB uses a clustered index, so all data is stored in the same order as the primary key index.
If you don't use an auto-increment primary key, each insert causes large disk operations: it has to push other data aside to insert the new element.
For a longer reference, see http://dev.mysql.com/doc/refman/5.0/en/innodb-table-and-index.html
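For illustration, a minimal table definition that follows this advice (the names are made up; the questioner's table already sounds like this):
CREATE TABLE fast_inserts (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (id)  -- auto-increment PK: new rows append to the end of the clustered index
) ENGINE=InnoDB;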
First, execute the INSERTs in a TRANSACTION:
START TRANSACTION;
INSERT ...
COMMIT;
Second, batch up multiple rows into a single INSERT statement:
START TRANSACTION;
INSERT INTO table (only,include,fields,that,need,non_default,values) VALUES
(1,1,1,1,1,1,1),
(2,1,1,1,1,1,1),
(3,1,1,1,1,1,1),
...;
COMMIT;
Lastly, you might find LOAD DATA INFILE performs better than INSERT, if the input data is in the proper format.
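For what it's worth, a rough sketch of LOAD DATA INFILE; the file path, table, and columns are hypothetical, and the server's secure_file_priv setting may restrict where the file can live:
LOAD DATA INFILE '/tmp/rows.csv'
INTO TABLE your_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(col1, col2, col3);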
I suggest trying multiple inserts in one query, for example:
INSERT INTO table
(title, description)
VALUES
('test1', 'description1'),
('test2', 'description2'),
('test3', 'description3'),
('test4', 'description4')
Or try using stored procedures.
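If you go the stored-procedure route, a rough sketch might look like this (the table and procedure names are made up):
DELIMITER //
CREATE PROCEDURE bulk_insert_demo(IN n INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  START TRANSACTION;           -- one transaction for the whole batch
  WHILE i < n DO
    INSERT INTO your_table (title, description)
    VALUES (CONCAT('test', i), CONCAT('description', i));
    SET i = i + 1;
  END WHILE;
  COMMIT;
END //
DELIMITER ;
-- CALL bulk_insert_demo(10000);
Note that this still issues one INSERT per row; a multi-row VALUES list, as shown above, is usually faster.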

MySQL SQL_NO_CACHE not working

I have a table (InnoDB) that gets inserted into, updated, and read frequently (usually in bursts a few milliseconds apart). I noticed that sometimes the SELECT statement that follows an INSERT/UPDATE would get stale data. I assumed this was due to the cache, but putting SQL_NO_CACHE in front of it doesn't really do anything.
How do you make sure that the SELECT always wait until the previous INSERT/UPDATE finishes and not get the data from cache? Note that these statements are executed from separate requests (not within the same code execution).
Maybe I am misunderstanding what SQL_NO_CACHE actually does...
UPDATE:
@Uday, the INSERT, SELECT and UPDATE statements look like this:
INSERT myTable (id, startTime) VALUES(1234, 123456)
UPDATE myTable SET startTime = 123456 WHERE id = 1234
SELECT SQL_NO_CACHE * FROM myTable ORDER BY startTime
I tried using transactions with no luck.
More UPDATE:
I think this is actually a problem with INSERT, not UPDATE. The SELECT statement always tries to get the latest row sorted by time. But since INSERT does not do any table-level locking, it's possible that SELECT will get old data. Is there a way to force table-level locking when doing INSERT?
The query cache isn't the issue. Writes invalidate the cache.
MySQL gives priority to writes and with the default isolation level (REPEATABLE READ), your SELECT would have to wait for the UPDATE to finish.
INSERT can be treated differently if you have CONCURRENT INSERTS enabled for MyISAM; InnoDB also uses record locking, so it doesn't have to wait for inserts at the end of the table.
Could this be a race condition then? Are you sure your SELECT occurs after the UPDATE? Are you reading from a replicated server where perhaps the update hasn't propagated yet?
If the issue is with the concurrent INSERT, you'll want to disable CONCURRENT INSERT on MyISAM, or explicitly lock the table with LOCK TABLES during the INSERT. The solution is the same for InnoDB, explicitly lock the table on the INSERT with LOCK TABLES.
A) If you don't need caching at all (for any SELECT), disable the query cache completely.
B) If you want this for only one session, you can do it like "SET SESSION query_cache_type=0;", which will set it for that particular session.
Use SQL_NO_CACHE additionally in either case.

MySQL pause index rebuild on bulk INSERT without TRANSACTION

I have a lot of data to INSERT LOW_PRIORITY into a table. As the index is rebuilt every time a row is inserted, this takes a long time. I know I could use transactions, but this is a case where I don't want the whole set to fail if just one row fails.
Is there any way to get MySQL to stop rebuilding indices on a specific table until I tell it that it can resume?
Ideally, I would like to insert 1,000 rows or so, let the index do its thing, and then insert the next 1,000 rows.
I cannot use INSERT DELAYED as my table type is InnoDB. Otherwise, INSERT DELAYED would be perfect for me.
Not that it matters, but I am using PHP/PDO to access MySQL. Any advice you could give would be appreciated. Thanks!
ALTER TABLE tableName DISABLE KEYS;
-- perform inserts
ALTER TABLE tableName ENABLE KEYS;
This disables updating of all non-unique indexes. The disadvantage is that those indexes won't be used for select queries as well.
You can however use multi-row inserts (INSERT INTO table(...) VALUES (...),(...),(...)), which will also update indexes in batches.
AFAIK, for those that use InnoDB tables, if you don't want indexes to be rebuilt after each INSERT, you must use transactions.
For example, for inserting a batch of 1000 rows, use the following SQL:
SET autocommit=0;
-- insert the rows one after the other, or use multi-value inserts
COMMIT;
By disabling autocommit, a transaction will be started at the first INSERT. Then, the rows are inserted one after the other and at the end, the transaction is committed and the indexes are rebuilt.
If an error occurs during execution of one of the INSERTs, the transaction is not rolled back, but an error is reported to the client, which has the choice of rolling back or continuing. Therefore, if you don't want the entire batch to be rolled back when one INSERT fails, you can log the INSERTs that failed, continue inserting the remaining rows, and finally commit the transaction at the end.
However, take into account that wrapping the INSERTs in a transaction means you will not be able to see the inserted rows until the transaction is committed. It is possible to set the transaction isolation level for the SELECT to READ UNCOMMITTED, but when I tested it, the rows were not visible when the SELECT happened very close to the INSERT. See my post.