Information schema process list and updating state - mysql

Well, here is my question. I messed up yesterday and ran a delete query on my database. The messing up is not the delete part, but the fact that I didn't realize it was 424 million records. I'm keeping track of the query with the information schema, but I would like to know what the "updating" state stands for. What is it doing right now, deleting or what?
Here is what I get:
"COMMAND": Query
"STATE": "updating"
"INFO": "delete from posicion where fechahora between '2012-12-12 00:00:00' and '2014-12-15 00:00:00'"

It looks to me like your delete query is still running.
You can double-check this by issuing this command from some other client besides the one from which you issued the delete query:
SHOW FULL PROCESSLIST
This will show your active processes and what they are doing. Your DELETE query may be among them.
If you
don't want to delete those rows,
are using InnoDB for your posicion table, and
have not yet finished running the DELETE query,
you can look at the processlist to get the id of your delete operation, then issue the command
KILL QUERY id
InnoDB should roll back the in-progress delete and leave your table as it was before you started the DELETE.
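If you want to keep watching the statement's progress from another session, here is a minimal sketch against information_schema (the PROCESSLIST table exists in MySQL 5.1+; the LIKE pattern is just an assumption matching the query above):
SELECT ID, COMMAND, STATE, TIME, INFO
FROM information_schema.PROCESSLIST
WHERE INFO LIKE 'delete from posicion%';
The TIME column tells you how many seconds the thread has been in its current state.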
Good luck!

Related

How to detect if query is running in the state “Sending data”

I read some explanations about the "sending data" status, but I still can't tell whether the query is running or not. They say "sending data" means the server is sending some data to the client, but I really don't know which data is being sent.
What does it mean when MySQL is in the state "Sending data"?
I ran a query using MySQL Workbench, and while the query was executing, Workbench timed out (after 10 min). Then I ran the "show processlist" command to see whether the query was still executing or not. It says my query's status is "sending data".
By the way, the logs table has 10 million records, so this query could take 10 hours to finish. I just want to know if my query is really still executing?
update logs join user
set logs.userid=user.userid
where logs.log_detail LIKE concat("%",user.userID,"%");
When it's in the process list, it is still running. Your query is just running very slowly, I assume, because you're doing a cross join (which means you connect every row of one table to every row of the other table, which can result in quite an enormous amount of data; therefore I further assume that your query does not do what you think it does) and no index can be used on the WHERE clause. You're probably doing a full table scan on a very large amount of data. You can verify this by running explain <your query>;.
To avoid the cross join, specify the join condition in an ON clause, like
update logs join user ON logs.userid = user.userid
set logs.whatever = user.whatever
where logs.log_detail LIKE concat("%",user.userID,"%");
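If you want to see the plan before letting such a query run for hours, here is a hedged sketch of that EXPLAIN check against the original query (on older MySQL versions EXPLAIN only accepts SELECT, so the UPDATE is rewritten as an equivalent SELECT):
EXPLAIN SELECT 1
FROM logs JOIN user
WHERE logs.log_detail LIKE CONCAT('%', user.userID, '%');
A type of ALL on both tables would confirm the full scans described above.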

MySQL update not updating all records

I have a database table that lists all orders. Each weekend a cron runs and it generates invoices for each customer. The code loops through each customer, gets their recent orders, creates a PDF and then updates the orders table to record the invoice ID against each of their orders.
The final update query is:
update bookings set invoiced='12345' where username='test-username' and invoiced='';
So, set invoiced to 12345 for all orders for test-username that haven't been previously invoiced.
I have come across a problem where orders are being added to the PDF but not updated to reflect the fact that they have been invoiced.
I have started running the update query manually and come across a strange scenario.
A customer may have 60 orders.
If I run the query once then 1 order is updated. I run it again and 1 order is updated, I repeat the process and each time only a small number of orders are updated - between 1 and 3. It doesn't update the 60 in one query as I would expect. I need to run the query repeatedly until it finally comes back with "0 rows affected" and then I can be sure that all rows have been updated.
I am not including a LIMIT XX in my query, so I see no reason why it can't update all orders at once. The query I run repeatedly is identical each time.
Does anybody have any wise suggestions?!
I'm guessing you're using InnoDB. You haven't disclosed the type of code you're running.
But I bet you're seeing an issue that relates to transactions. When a program works differently from an interactive session, it's often a transaction issue.
See here: http://dev.mysql.com/doc/refman/5.5/en/commit.html
Do things work better if you issue a COMMIT; command right after your UPDATE statement?
Note that your language binding may have its own preferred way of issuing the COMMIT; command.
Another way to handle this problem is to issue the SQL command
SET autocommit = 1
right after you establish your connection. This will make every SQL command that changes data do its COMMIT operation automatically.
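A minimal sketch of both options, reusing the bookings update from the question (whether you need the explicit COMMIT depends on your language binding's autocommit default):
-- option 1: every data-changing statement commits automatically
SET autocommit = 1;
-- option 2: autocommit off, so commit explicitly after the update
UPDATE bookings SET invoiced = '12345'
WHERE username = 'test-username' AND invoiced = '';
COMMIT;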

Approximately how long should it take to delete 10m records from a MySQL InnoDB table with 30m records?

I am deleting approximately 1/3 of the records in a table using the query:
DELETE FROM `abc` LIMIT 10680000;
The query appears in the processlist with the state "updating". There are 30m records in total. The table has 5 columns and two indexes, and when dumped to SQL the file is about 9GB.
This is the only database and table in MySQL.
This is running on a machine with 2GB of memory, a 3 GHz quad-core processor and a fast SAS disk. MySQL is not performing any reads or writes other than this DELETE operation. No other "heavy" processes are running on the machine.
This query has been running for more than 2 hours -- how long can I expect it to take?
Thanks for the help! I'm pretty new to MySQL, so any tidbits about what's happening "under the hood" while running this query are definitely appreciated.
Let me know if I can provide any other information that would be pertinent.
Update: I just ran a COUNT(*), and in 2 hours, it's only deleted 200k records. I think I'm going to take Joe Enos' advice and see how well inserting the data into a new table and dropping the previous table performs.
Update 2: Sorry, I actually misread the number. In 2 hours, it's not deleted anything. I'm confused. Any suggestions?
Update 3: I ended up using mysqldump with --where "true LIMIT 10680000,31622302" and then importing the data into a new table. I then deleted the old table and renamed the new one. This took just over half an hour.
Don't know if this would be any better, but it might be worth thinking about doing the following:
Create a new table and insert 2/3 of the original table into the new one.
Drop the original table.
Rename the new table to the original table's name.
This would prevent the log file from having all the deletes, but I don't know if inserting 20m records is faster than deleting 10m.
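Here is a hedged sketch of that copy-and-swap approach, reusing the abc table from the question (the id column and the cutoff are assumptions; use whatever identifies the 2/3 you want to keep):
CREATE TABLE abc_new LIKE abc;   -- same columns and indexes
INSERT INTO abc_new SELECT * FROM abc WHERE id > 10680000;
DROP TABLE abc;
RENAME TABLE abc_new TO abc;
Doing the INSERT before dropping anything also means you can sanity-check the row count in abc_new first.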
You should post the table definition.
Also, to know why it is taking so much time, try enabling the profiling mode on the delete request via:
SET profiling=1;
DELETE FROM abc LIMIT 10680000;
SET profiling=0;
SHOW PROFILES;
SHOW PROFILE ALL FOR QUERY X; (X is the ID of your query shown in SHOW PROFILES)
and post what it returns (but I think the query must finish before the profiling data is returned)
http://dev.mysql.com/doc/refman/5.0/en/show-profiles.html
Also, I think you'll get more responses on ServerFault ;)
When you run this query, the InnoDB log file for the database is used to record all the details of the rows that are deleted - and if this log file isn't large enough from the outset, it'll be auto-extended as and when necessary (if configured to do so). I'm not familiar with the specifics, but I expect this auto-extension is not blindingly fast. 2 hours does seem like a long time, but it doesn't surprise me if the log file is growing as the query runs.
Is the table from which the records are being deleted on the end of a foreign key (i.e. does another table reference it through a FK constraint)?
I hope your query has ended by now... :) but from what I've seen, LIMIT with large numbers (and I have never tried this kind of numbers) is very slow. I would try something based on the PK, like
DELETE FROM abc WHERE abc_pk < 10680000;
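If one statement is still too slow, a hedged variant of the same idea is to split the PK range into bounded batches so each transaction stays small (the boundaries below are assumptions):
DELETE FROM abc WHERE abc_pk < 2000000;
DELETE FROM abc WHERE abc_pk >= 2000000 AND abc_pk < 4000000;
-- ...and so on until the target range is gone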

Fixing "Lock wait timeout exceeded; try restarting transaction" for a "stuck" MySQL table?

From a script I sent a query like this thousands of times to my local database:
update some_table set some_column = some_value
I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table, and this was done thousands of times; the column was indexed, so the corresponding index was probably updated lots of times too.
I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table, because simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:
Lock wait timeout exceeded; try restarting transaction
It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
I had a similar problem and solved it by checking the threads that are running.
To see the running threads use the following command in mysql command line interface:
SHOW PROCESSLIST;
It can also be sent from phpMyAdmin if you don't have access to mysql command line interface.
This will display a list of threads with corresponding ids and execution time, so you can KILL the threads that are taking too much time to execute.
In phpMyAdmin you will have a button for stopping threads by using KILL, if you are using command line interface just use the KILL command followed by the thread id, like in the following example:
KILL 115;
This will terminate the connection for the corresponding thread.
You can check the currently running transactions with
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`
Your transaction should be one of the first, because it's the oldest in the list. Now just take the value from trx_mysql_thread_id and send it the KILL command:
KILL 1234;
If you're unsure which transaction is yours, repeat the first query very often and see which transactions persist.
Check InnoDB status for locks
SHOW ENGINE InnoDB STATUS;
Check MySQL open tables
SHOW OPEN TABLES WHERE In_use > 0;
Check pending InnoDB transactions
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`;
Check lock dependency - what blocks what
SELECT * FROM `information_schema`.`innodb_locks`;
After investigating the results above, you should be able to see what is locking what.
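On MySQL 5.5-5.7 you can also join those tables to see directly who blocks whom (a sketch; in MySQL 8.0 these information_schema tables were replaced by performance_schema equivalents):
SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;
The blocking_thread value is what you would pass to KILL.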
The root cause of the issue might be in your code too - please check the related functions, especially for annotations, if you use a JPA implementation like Hibernate.
For example, as described here, the misuse of the following annotation might cause locks in the database:
@Transactional(propagation = Propagation.REQUIRES_NEW)
This started happening to me when my database size grew and I was doing a lot of transactions on it.
The truth is there is probably some way to optimize either your queries or your DB, but try these two statements as a workaround fix.
Run this:
SET GLOBAL innodb_lock_wait_timeout = 5000;
And then this:
SET innodb_lock_wait_timeout = 5000;
When you establish a connection for a transaction, you acquire a lock before performing the transaction. If you are not able to acquire the lock, you retry for some time. If the lock still cannot be obtained, the "lock wait timeout exceeded" error is thrown. The reason you cannot acquire the lock is that you are not closing the connection, so when you try to get the lock a second time, your previous connection is still open and still holding it.
Solution: close the connection, or call setAutoCommit(true) (according to your design), to release the lock.
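At the SQL level, the same fix looks like this (a minimal sketch; which option applies depends on your driver's autocommit default):
SET autocommit = 1;  -- each statement commits, and releases its locks, on its own
-- or, if a transaction is already open, finish it explicitly:
COMMIT;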
Restarting MySQL works fine.
BUT beware that if such a query gets stuck, there is a problem somewhere:
in your query (misplaced char, cartesian product, ...)
very numerous records to edit
complex joins or tests (MD5, substrings, LIKE %...%, etc.)
data structure problem
foreign key model (chain/loop locking)
misindexed data
As @syedrakib said, it works, but this is not a long-term solution for production.
Beware: doing the restart can leave your data in an inconsistent state.
Also, you can check how MySQL handles your query with the EXPLAIN keyword and see if something is possible there to speed up the query (indexes, complex tests,...).
Go to the processes in MySQL (SHOW PROCESSLIST;).
There you can see which task is still working.
Kill that particular process, or wait until the process completes.
I ran into the same problem with an "update"-statement. My solution was simply to run through the operations available in phpMyAdmin for the table. I optimized, flushed and defragmented the table (not in that order). No need to drop the table and restore it from backup for me. :)
I had the same issue. I think it was a deadlock issue with SQL. You can just force close the SQL process from Task Manager. If that didn't fix it, just restart your computer. You don't need to drop the table and reload the data.
I had this problem when trying to delete a certain group of records (using MS Access 2007 with an ODBC connection to MySQL on a web server). Typically I would delete certain records from MySQL and then replace them with updated records (cascade-deleting several related records; this streamlines deleting all related records for a single record deletion).
I tried to run through the operations available in phpMyAdmin for the table (optimize, flush, etc.), but I was getting a "need permission to RELOAD" error when I tried to flush. Since my database is on a web server, I couldn't restart the database. Restoring from a backup was not an option.
I tried running the delete query for this group of records via the cPanel MySQL access on the web, and got the same error message.
My solution: I used Sun's (Oracle's) free MySQL Query Browser (which I had previously installed on my computer) and ran the delete query there. It worked right away; problem solved. I was then able to once again perform the function from the Access script over the ODBC connection to MySQL.
Issue in my case: Some updates were made to some rows within a transaction and before the transaction was committed, in another place, the same rows were being updated outside this transaction. Ensuring that all the updates to the rows are made within the same transaction resolved my issue.
Issue resolved in my case by changing DELETE to TRUNCATE.
Issue:
query = "delete from Survey1.sr_survey_generic_details"
mycursor.execute(query)
Fix:
query = "truncate table Survey1.sr_survey_generic_details"
mycursor.execute(query)
This happened to me when I was accessing the database from multiple platforms, for example from DBeaver and control panels. At some point DBeaver got stuck, and therefore the other panels couldn't process additional information. The solution is to reboot all access points to the database: close them all and restart.
Fixed it.
Make sure you don't have a mismatched data type in your insert query.
I had an issue where I was inserting "user browser agent data" into a VARCHAR(255) column and hitting this lock; however, when I changed it to TEXT(255), it was fixed.
So most likely it is a data type mismatch.
I solved the problem by dropping the table and restoring it from backup.

Deleting rows causes lock timeout

I keep getting these errors when trying to delete rows from a table. The special case here is that I may be running 5 processes at the same time.
The table itself is an InnoDB table with ~4.5 million rows. I do not have an index on the column used in my WHERE clause. Other indexes are working as they are supposed to.
It's being done within a transaction: first I delete the records, then I insert replacement records, and only if all records are inserted should the transaction be committed.
Error message:
Query error: Lock wait timeout exceeded; try restarting transaction while executing DELETE FROM tablename WHERE column=value
Would it help to create an index on the referenced column here? Should I explicitly lock the rows?
I have found some additional information in question #64653 but I don't think it covers my situation fully.
Is it certain that it is the DELETE statement that is causing the error, or could it be other statements in the query? The DELETE statement is the first one so it seems logical but I'm not sure.
An index would definitely help. If you are trying to replace deleted records, I would recommend you modify your query to use a single upsert instead of a DELETE followed by an INSERT, if possible:
INSERT INTO tableName SET
`column` = value,
column2 = 'value2'
ON DUPLICATE KEY UPDATE
column2 = 'value2';
(This requires a unique key on `column` so the duplicate can be detected.)
An index definitely helps. I once worked on a DB containing user data. There was sometimes a problem with the web front end and user deletion. During the day it worked fine (although it took quite long). But in the late afternoon it sometimes timed out, because the DB server was under more load due to end of day processing.
Whacked an index on the affected column and everything ran smoothly from there on.
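For completeness, a hedged sketch of adding such an index, using the placeholder names from the question's DELETE statement:
CREATE INDEX idx_column ON tablename (`column`);
With that in place, the DELETE can lock just the matching rows instead of scanning (and lock-checking) the whole table.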