How to delete millions of records from a MySQL table

My MySQL table contains more than 20 million records. I want to delete them starting from the lowest index by running:
delete FROM mydb.dailyreportdetails where idDailyReportDetails>0 order by idDailyReportDetails asc limit 1000 ;
While running the above query I got the error mentioned below:
Operation failed: There was an error while applying the SQL script to the database.
ERROR 1205: 1205: Lock wait timeout exceeded; try restarting transaction
SQL Statement:
Is there any way to run the query in the background in MySQL, or any faster way to delete those records?

You could first find the id just above the batch you want to delete...
SELECT idDailyReportDetails
FROM mydb.dailyreportdetails
where idDailyReportDetails>0
order by idDailyReportDetails asc limit 1000,1 ;
Then run a straightforward delete, using the value returned by the select...
DELETE FROM mydb.dailyreportdetails
where idDailyReportDetails < ID;
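Those two steps can be repeated until the table is trimmed down to size. A rough sketch of one batch, assuming idDailyReportDetails is the primary key (the @cutoff user variable is only for illustration):
SET @cutoff = (SELECT idDailyReportDetails
               FROM mydb.dailyreportdetails
               WHERE idDailyReportDetails > 0
               ORDER BY idDailyReportDetails ASC
               LIMIT 1000, 1);
-- Deletes the first 1000 rows; if the SELECT found no row, @cutoff is NULL and nothing is deleted.
DELETE FROM mydb.dailyreportdetails
WHERE idDailyReportDetails < @cutoff;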

You can check the current InnoDB lock situation with:
SHOW ENGINE InnoDB STATUS;
The output will show the current status and any recent errors.
If you want to increase the lock wait timeout in your case, use:
set innodb_lock_wait_timeout=3000;
After that run your delete query.
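Note that a plain SET like the one above only changes the value for the current session; a sketch of both forms (the 3000 is just the value from this answer):
SET SESSION innodb_lock_wait_timeout = 3000;  -- current connection only
SET GLOBAL innodb_lock_wait_timeout = 3000;   -- new connections; requires the SUPER (or SYSTEM_VARIABLES_ADMIN) privilege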

The quickest way to delete rows from a table is to reference them by id. This is also preferable for the binary log. Hence the best approach is to use some programming language and, in a loop, fetch a new group of row ids and then delete those rows by listing them explicitly in a WHERE idDailyReportDetails IN (...) clause.
Range queries (WHERE id < something) can keep failing because of locks if the database is being used by other processes.
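The same id-based pattern can also be expressed in plain SQL with a throwaway table of ids; this is only a sketch of the approach described above (the batch_ids table is hypothetical, and its column type should match the real one):
CREATE TEMPORARY TABLE batch_ids (idDailyReportDetails INT PRIMARY KEY);
INSERT INTO batch_ids
SELECT idDailyReportDetails
FROM mydb.dailyreportdetails
WHERE idDailyReportDetails > 0
ORDER BY idDailyReportDetails ASC
LIMIT 1000;
-- Delete exactly the ids that were fetched, then repeat until the INSERT copies zero rows.
DELETE d FROM mydb.dailyreportdetails d
JOIN batch_ids b USING (idDailyReportDetails);
DROP TEMPORARY TABLE batch_ids;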

Related

MySQL subquery locking

If I have a query like:
UPDATE table_x SET a = 1 WHERE id = ? AND (
SELECT SUM(a) < 100 FROM table_x
)
And: hundreds of these queries could be issued at exactly the same time, and I need to be certain that a never sums to more than 100.
Do I need to lock the table, or will table_x be locked automatically since it's used in a subquery?
Assuming this is an InnoDB table, you will have row-level locking. So even if 100 of these are happening at a time, only ONE transaction will be able to acquire the lock on those rows and finish processing before the next transaction runs. There is no difference between how the update and the subquery are processed: to the InnoDB engine this is all ONE transaction, not two separate transactions.
If you want to see what is going on behind the scenes while your query is running, run SHOW ENGINE INNODB STATUS from the client.
Here is a great walkthrough on what all that output means.
If you want to read more about InnoDB and row-level locking, follow the link here.
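If you would rather make the locking explicit instead of relying on the single statement, one common pattern is an explicit transaction with a locking read. This is only a sketch under the question's assumptions (the id 59 and the @total variable are illustrative):
START TRANSACTION;
-- Locking read: the rows scanned by the SUM stay locked until COMMIT,
-- which serializes concurrent writers against this check.
SELECT SUM(a) INTO @total FROM table_x FOR UPDATE;
UPDATE table_x SET a = 1 WHERE id = 59 AND @total < 100;
COMMIT;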

Reading and updating 10 records (tuples) in a MySQL table at the same time (using a single query)

I am trying to read 10 records from a MySQL table and update an IsRead field to 1 to avoid reading duplicates. So when I read again, the next 10 records should be returned, not the records already marked by IsRead.
select * from tablename where IsRead=0 limit 10;
But my question is: how can I read and update the 10 records at the same time, using a single query?
EDIT
Previously I was reading and updating records one by one, but now I want to avoid the extra round trips (once for reading and once for updating). What would be a suitable way to read and update 10 records at once? Already-read records must not be returned again.
What you're looking for is not a single statement, but transactions.
Transactions are a way to make multiple statements ACID compliant. Read about it in the link provided. In short, it means "all or nothing".
Code wise it would simply look something like this:
START TRANSACTION;
select * from tablename where IsRead=0 ORDER BY created_or_whatever_column limit 10 for update;
update tablename set IsRead = 1 where IsRead = 0 ORDER BY created_or_whatever_column LIMIT 10;
COMMIT;
Notice that I added an ORDER BY clause. Using LIMIT without ORDER BY doesn't make sense: there is no inherent order to the rows in a table unless you specify one.
I also added FOR UPDATE to the SELECT statement, so the rows stay locked until the transaction ends (with COMMIT) and no other transaction can manipulate them in the meantime.
What you should also have a look at in this context are the isolation levels.
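For reference, the isolation level can be inspected and changed per session; a quick sketch (the variable is @@transaction_isolation in MySQL 8.0 and @@tx_isolation in older versions):
SELECT @@transaction_isolation;                          -- REPEATABLE-READ is the InnoDB default
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;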

MySQL: delete all data from a table which has millions of records with the same name

Hi, I want to delete all records from a table which has 10 million records, but it hangs and gives me the following error:
Lock wait timeout exceeded; try restarting transaction
I am using the following query:
delete from table where name = '' order by id limit 1000
in a for loop.
Please suggest how to optimize it.
You said you want to delete all records of a table which has 10 million records. Then why not use the TRUNCATE command instead, which has minimal/no logging overhead:
TRUNCATE TABLE tbl_name
You can also use a DELETE statement, but in your case the condition checking (where name = '' order by id limit 1000) is not necessary since you want to get rid of all rows, and DELETE has the overhead of writing to the transaction log, which matters at a volume of millions of records.
Per your comment, you have no option other than going with delete from table1 where name = 'naresh'. You can delete in chunks using the LIMIT clause, like delete from table1 where name = 'naresh' limit 1000. So if name='naresh' matches 25000 rows, only 1000 of them will be deleted per statement.
You can wrap the same in a loop as well, like below (not tested; DECLARE and WHILE only work inside a stored program such as a procedure, and minor tweaks might be required):
DELIMITER $$
CREATE PROCEDURE delete_in_chunks()   -- the procedure name here is only illustrative
BEGIN
  DECLARE v1 INT;
  SELECT COUNT(*) INTO v1 FROM table1 WHERE name = 'naresh';
  WHILE v1 > 0 DO
    DELETE FROM table1 WHERE name = 'naresh' LIMIT 1000;
    SET v1 = v1 - 1000;
  END WHILE;
END$$
DELIMITER ;
So in the above code, the loop will run 25 times, deleting 1000 rows each time (assuming name='naresh' matches 25K rows).
If you want to delete all records (i.e. empty the table), you can use
TRUNCATE TABLE `table_name_here`;
Maybe it will work for you (not tried with a big database).

Error 1206 whenever trying to delete records from a table

I have a table with more than 40 million records. I want to delete about 150,000 records with a SQL query:
DELETE
FROM t
WHERE date="2013-11-24"
but I get error 1206 (The total number of locks exceeds the lock table size).
I searched a lot and changed the buffer pool size:
innodb_buffer_pool_size=3GB
but it didn't work.
I also tried locking the tables, but that didn't work either:
Lock Tables t write;
DELETE
FROM t
WHERE date="2013-11-24";
unlock tables;
I know one solution is to split the deletion into batches, but I want that to be my last option.
I am using MySQL server; the server OS is CentOS and the server has 4GB of RAM.
I'd appreciate any help.
You can use LIMIT in your DELETE and try deleting data in batches of, say, 10,000 records at a time:
DELETE
FROM t
WHERE date="2013-11-24"
LIMIT 10000
You can also include an ORDER BY clause so that rows are deleted in the order specified by the clause:
DELETE
FROM t
WHERE date="2013-11-24"
ORDER BY primary_key_column
LIMIT 10000
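Each run only removes one batch, so the statement has to be repeated; one way to know when to stop is ROW_COUNT(), as in this sketch (the surrounding loop would live in a script or stored procedure):
DELETE FROM t WHERE date = "2013-11-24" ORDER BY primary_key_column LIMIT 10000;
SELECT ROW_COUNT();  -- number of rows just deleted; stop repeating once it returns 0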
There are a lot of quirky ways this error can occur. I will try to list one or two, and perhaps the analogy will hold true for someone reading this at some point.
On larger datasets, even after increasing innodb_buffer_pool_size, you can hit this error when an adequate index is not in place to isolate the rows in the WHERE clause, or in some cases with the primary index (see this and the comment from Roger Gammans):
From the MySQL 5.0 documentation for InnoDB:
If you have no indexes suitable for your statement and MySQL must scan
the entire table to process the statement, every row of the table
becomes locked, which in turn blocks all inserts by other users to the
table. It is important to create good indexes so that your queries do
not unnecessarily scan many rows.
A visual example of how this error can occur, and be difficult to solve, is this simple schema:
CREATE TABLE `students` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`thing` int(11) NOT NULL,
`campusId` int(11) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `ix_stu_cam` (`campusId`)
) ENGINE=InnoDB;
A table with 50 million rows. FKs are not shown; they are not the issue. The table was originally for showing query performance, which is also not important here. Yet, when initializing thing=id in blocks of 1M rows, I had to apply a LIMIT during the block update to prevent other problems, using:
update students
set thing=id
where thing!=id
order by id desc
limit 1000000 ; -- 1 Million
This was all fine until it got down to, say, 600000 rows left to update, as seen with:
select count(*) from students where thing!=id;
The reason I was running that count(*) was repeated occurrences of
Error 1206: The total number of locks exceeds the lock table size
I could keep lowering the LIMIT in the above update, but in the end I would be left with, say, 1200 mismatched rows in the count, and the problem just continued.
Why did it continue? Because the system filled the lock table as it scanned this large table. Sure, within the implicit transaction it might have changed those last 1200 rows to be equal, in my mind, but because the lock table filled up, in reality it would abort the transaction with nothing set. And the process would stalemate.
Illustration 2:
In this example, let's say I have 288 rows of the 50 million-row table above that could still be updated. Due to the end-game problem described, I would often run into the problem when running this query twice:
update students set thing=id where thing!=id order by id desc limit 200 ;
But I would not have a problem with these:
update students set thing=id where thing!=id order by id desc limit 200;
update students set thing=id where thing!=id order by id desc limit 88 ;
Solutions
There are many ways to solve this, including but not limited to:
A. Create another index on a column indicating the data has been updated, perhaps a boolean, and incorporate it into the WHERE clause. Yet on huge tables, creating somewhat temporary indexes may be out of the question.
B. Populate a 2nd table with the ids that still need to be cleaned, coupled with an UPDATE ... JOIN pattern (see the sketch after this list).
C. Dynamically change the LIMIT value so as not to overrun the lock table. The overrun can occur when there are simply no more rows to UPDATE or DELETE (your operation), the LIMIT has not been reached, and the lock table fills up in a fruitless scan for rows that simply don't exist (seen above in Illustration 2).
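A rough sketch of option B against the students table above (the pending_ids table and the chunk range are hypothetical):
CREATE TABLE pending_ids (id INT NOT NULL PRIMARY KEY);
INSERT INTO pending_ids
SELECT id FROM students WHERE thing != id;
-- The join hands the UPDATE an explicit list of primary keys, so each pass can be
-- limited to a key range instead of rescanning (and locking) the whole table.
UPDATE students s
JOIN pending_ids p ON p.id = s.id
SET s.thing = s.id
WHERE p.id BETWEEN 1 AND 1000000;   -- move the range forward on each pass
DROP TABLE pending_ids;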
The main point of this answer is to offer an understanding of why this is happening, so any reader can craft an end-game solution that fits their needs (versus, at times, fruitless changes to system variables, reboots, and prayers).
The simplest way is to create an index on the date column. I had 170 million rows and was deleting 6.5 million rows. I ran into the same problem and solved it by creating a non-clustered (secondary) index on the column used in the WHERE clause; then I executed the delete query and it worked.
Delete the index afterwards if you don't need it.
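A minimal sketch of that fix, using the table and column names from the question above (the index name is arbitrary):
CREATE INDEX idx_t_date ON t (`date`);
-- With the index, DELETE ... WHERE date = "2013-11-24" only touches and locks matching rows
-- instead of scanning the whole table.
DROP INDEX idx_t_date ON t;   -- optional clean-up once the delete is done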

MySQL deadlock on column update

I have run into this weird problem where a simple query fails due to a deadlock.
Here is the query:
UPDATE myprobelmatictable SET mycolumn = (mycolum-0) WHERE id = '59'
The weird issue is that this query fails only when my PHP server is located on a slower machine on a remote network.
Before I run this query, the following happens:
transaction starts
insert new row in table 5
select 1 row from myproblematictable
insert new row in table 6
update table 4
UPDATE myprobelmatictable SET mycolumn = (mycolum-0) WHERE id = '<id>'
update table 3
Commit Transaction
The strange thing is that the same query fails each time with the following error:
Error Number: 1213: Deadlock found when trying to get lock; try restarting transaction
The InnoDB status output does not seem to mention myproblematictable.
Any clues?
This can be the result of another query updating the tables in a different order. I would try to see whether there is a predetermined order in which the tables should be updated, and if so, rewrite the updates to follow that order.
If not, I would suggest trying to find the offending quer(ies) and seeing in what order they update the tables. What table engine are you using? Keep in mind that MyISAM locks the entire table.
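To confirm an ordering problem, the most recent deadlock participants can usually be seen in the InnoDB status output; the comments below only illustrate the pattern described above (table names are hypothetical):
SHOW ENGINE INNODB STATUS;   -- look for the "LATEST DETECTED DEADLOCK" section
-- Typical cause: session A updates table4 then table3 inside its transaction, while
-- session B updates table3 then table4; each waits on the other's row lock and
-- InnoDB rolls one of them back with error 1213.
-- Making every transaction touch the tables in the same order removes the cycle.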