I have a table with >19M rows that I want to create a subtable of (I'm breaking the table into several smaller tables). So I'm doing a CREATE TABLE new_table (SELECT ... FROM big_table). I run the query in MySQL Workbench.
The query takes a really long time to execute, so eventually I get a "Lost connection to MySQL server" message. However, after a few minutes the new table is there and it seems to contain all the data that was supposed to be copied over (I'm doing a GROUP BY, so I cannot just check that the number of rows is equal in both tables).
My question is: Am I guaranteed that the query is completed even though I lose connection to the database? Or could MySQL interrupt the query midway and still leave a table with incomplete data?
Am I guaranteed that the query is completed even though I lose connection to the database?
No. There are several reasons other than a connection timeout to get lost-connection errors. The server might crash because it ran out of disk space or hit a hardware fault. An administrator might have terminated your session.
"Guarantee" is a strong word in the world of database management, because other people's data is at stake. You should not assume that any query ran correctly to completion unless it ended gracefully.
If you're asking because an overnight query failed and you don't want to repeat it, you can inspect the table with checks like COUNT(*) to convince yourself it completed. But please don't rely on this kind of hackery in production with other people's data.
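For example, since every output row of a GROUP BY corresponds to one distinct grouping value, you can compare counts. This is only a sketch: `category` is a hypothetical name standing in for whatever column the original query grouped by.

```sql
-- Hypothetical sanity check: the row count of the new table should equal
-- the number of distinct grouping values in the source table.
SELECT COUNT(DISTINCT category) FROM big_table;  -- expected number of groups
SELECT COUNT(*) FROM new_table;                  -- actual rows copied
```

If the two numbers match, the CREATE TABLE ... SELECT most likely ran to completion, but as noted above, this is a plausibility check, not a guarantee.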
I have a table precommit_tags_change_lists with 10,000 records, and a SELECT query on it takes forever.
I tried to add an index as below, which also hangs...
ALTER TABLE `precommit_tags_change_lists` ADD INDEX `change_list_id` (`change_list_id`)
Following is the structure of the table. Any guidance on how to debug this and what could be causing the issue?
One observation: quite a few processes are stuck in the state "Waiting for table metadata lock" on the table precommit_tags_change_lists.
Because of the above, database connections keep failing intermittently with the error Can't connect to MySQL server on '10.xx.xxx.xxx' ((1040, u'Too many connections'))
A table of 10k records is not very large. An ALTER TABLE should complete in at most a couple of seconds. I think it's likely that your ALTER TABLE is waiting for a lock on the table. All those other SELECT queries are also waiting, because they're queued behind the ALTER TABLE.
An ALTER TABLE requires exclusive access to the table. No other query can be running while ALTER TABLE does its work (well, certain types of changes can be done "online" in MySQL 5.6 or later, but in general no). This exclusive access is implemented using the metadata lock. Many SELECT queries can share a metadata lock, but ALTER TABLE cannot share.
So I think your real problem is that you have some long-running query blocking the ALTER TABLE. You haven't shown this long-running query.
It's possible to make a long-running query even on a small table; it has to do with the logic of the query. Look in your processlist for a query that references precommit_tags_change_lists but is not waiting for the metadata lock. It will be in some other state (like "Sending data" or "Copying to tmp table") and will have been running for longer than any other query.
When you find that query, kill it. If it has been running for hours, it's not likely anyone is still waiting for its result. Once you kill that query, the logjam will be broken, and the ALTER TABLE and all the other queries will be able to complete.
This is my guess, based on experience. But I have to make some assumptions about your situation because you haven't provided all the relevant information.
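As a sketch of how to find that blocking query, you can filter the processlist (the INFORMATION_SCHEMA.PROCESSLIST table, available since MySQL 5.1) for statements that touch the table but are not themselves stuck on the metadata lock:

```sql
-- Queries referencing the table, longest-running first,
-- excluding the ones queued on the metadata lock.
SELECT id, time, state, info
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE info LIKE '%precommit_tags_change_lists%'
  AND state NOT LIKE '%metadata lock%'
ORDER BY time DESC;

-- Then terminate the offender using the id from the first column:
-- KILL <id>;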
This error can occur when the number of connections reaches the maximum limit defined in the configuration file. The variable holding this value is max_connections.
To check the current value of this variable, log in as the root user and run the following command:
SHOW GLOBAL VARIABLES LIKE 'max_connections';
Log in to MySQL as the root user and increase the max_connections variable to a higher value.
SET GLOBAL max_connections = 100;
In order to make the max_connections value persistent, modify it in the configuration file.
To persist the change permanently:
Stop the MySQL server:
service mysql stop
Edit the configuration file my.cnf.
vi /etc/my.cnf
Find the variable max_connections under the [mysqld] section.
[mysqld]
max_connections = 100
Set it to a higher value and save the file.
Start the server.
service mysql start
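After the restart you can verify that the new value took effect, and see how close the server currently is to the limit:

```sql
SHOW GLOBAL VARIABLES LIKE 'max_connections';   -- the configured limit
SHOW GLOBAL STATUS LIKE 'Threads_connected';    -- connections open right now
```

If Threads_connected keeps hovering near max_connections, raising the limit only buys time; the real fix is closing idle connections or finding what is holding them open.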
I have one table called History in which client has more than 10 million records in it.
Now I want to ALTER this table to add a new extra column, but it is taking too much time, and sometimes the server even crashes.
Is there any faster way by which I can ALTER large tables?
My query is :
ALTER TABLE History ADD COLUMN oldID BIGINT(20) UNSIGNED NULL,
ADD INDEX oldid16 (oldID);
I am using InnoDB.
In the MySQL Workbench SQL editor settings there are 3 self-explanatory options:
DBMS connection keep-alive interval (in seconds): 600
DBMS connection read time out (in seconds): 600
DBMS connection time out (in seconds): 60
If you want to achieve your task in MySQL Workbench, then you have to set these options to higher values.
The same principle should apply to all editors, including Liquibase.
Moreover, on the MySQL server itself you can achieve this by changing some server system variables:
connect_timeout (the most important for your task)
long_query_time
wait_timeout
interactive_timeout
net_read_timeout
net_write_timeout
and possibly others I can't recall.
Good luck!
EDIT 1:
There are also InnoDB-specific variables that are candidates for changing in order to accomplish your task, for example:
innodb_buffer_pool_size
All changes and combinations depend, of course, on your system resources/configurations/workflow.
EDIT 2:
PS: For such very big operations we used SQLyog, a very stable and powerful db editor. Most important: we never had crashes like with MySQL Workbench, and all db workflows/processes were smooth.
EDIT 3:
New suggestions:
Prepare the logging process before running the query again, in order to capture any error or success messages that arise.
Also, I saw the query. I would suggest you apply the oldid16 index separately, after adding the new column.
An important one, about the db tables: each of them should have a separate file allocated in the file system. See InnoDB File-Per-Table Tablespaces and, maybe, Overview of Partitioning in MySQL.
P.S.: Personally, I can't see any other way of running the ALTER query than the one presented in your original question: as a whole and at once, after, perhaps, separating out the index part.
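Splitting the original statement as suggested would look like this (same table, column, and index names as in the question):

```sql
-- Step 1: add the column on its own (on a huge InnoDB table this
-- is the expensive part, since it may rebuild the table).
ALTER TABLE History ADD COLUMN oldID BIGINT(20) UNSIGNED NULL;

-- Step 2: add the index separately, once the column exists.
ALTER TABLE History ADD INDEX oldid16 (oldID);
```

Running the two steps separately keeps each operation shorter, which reduces the window in which a timeout or crash forces you to start over from the beginning.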
I have a problem with a FEDERATED table in MySQL. I have one server (MySQL version 5.0.51a) that serves to store client data and actually nothing more. The logic database is stored on another server (version 5.1.56), which sometimes has to handle the data from the first server. So the second server has one FEDERATED table, which connects to the first server.
It has worked without any problems, but recently I've been getting strange errors with this solution. Some kinds of queries on the second server cannot be performed correctly.
For example, SELECT * FROM table doesn't work. It hangs for exactly 3 minutes and then gives:
Error Code: 1159 Got timeout reading communication packets
OK, I checked the table on the first server and it's fine. Then I tried some other queries on the FEDERATED table and they work...
For example, a query like SELECT * FROM table WHERE id=x returns a result. I thought it might be a problem with the size of the result, so I tried a query with a dummy WHERE clause like SELECT * FROM table WHERE id > 0, and it also works...
Finally I found a "solution", which helped for only two days: on the first server I made a copy of the table, and on the second server I declared a new FEDERATED table with a new connection string pointing to this copy. It worked, but after two days the same problem appeared with the new copied table.
I've already talked with both server providers; they see no problems, everything seems to work, and each says the other hosting provider is the cause of the problems.
I've checked all the variables in MySQL and there is no timeout parameter set to 3 minutes. So how can I deal with this kind of problem? It seems to be something automatic on the network or database side, but I don't know how to detect the reason for it.
Do you have any ideas?
You may try checking MTU size settings for network interfaces on both servers.
This warning is logged when idle threads are killed by wait_timeout.
Normally, the way to avoid threads being killed by wait_timeout is to call mysql_close() in scripts when the connection is no longer needed. Unfortunately, that doesn't work for queries made through federated tables, because the query and the connection are not on the same server.
For example, when a query on a federated table is executed on server A (pointing to data on server B), it creates a connection on server B. Then, when you run mysql_close() on server A, it obviously cannot close the connection that was created on server B.
Eventually the connection gets killed by MySQL after the number of seconds specified in wait_timeout has passed (the default is 8 hours). This generates the warning "Got timeout reading communication packets" in your MySQL error log.
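If these lingering federated connections are a problem, one option is to lower wait_timeout on the data server (server B in the example above) so idle threads are reaped sooner. The value below is only an illustration; the variable is measured in seconds:

```sql
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';  -- default is 28800 (8 hours)
SET GLOBAL wait_timeout = 600;              -- e.g. reap idle threads after 10 minutes
```

Note that SET GLOBAL only affects new connections and does not survive a server restart; to persist it, set wait_timeout in the [mysqld] section of the configuration file.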
From a script I sent a query like this thousands of times to my local database:
update some_table set some_column = some_value
I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table, and this was done thousands of times. The column was indexed, so the corresponding index was probably updated lots of times too.
I noticed something was wrong because it took too long, so I killed the script. I've even rebooted my computer since then, but something is stuck in the table: simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:
Lock wait timeout exceeded; try restarting transaction
It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
I had a similar problem and solved it by checking the threads that are running.
To see the running threads, use the following command in the MySQL command line interface:
SHOW PROCESSLIST;
It can also be run from phpMyAdmin if you don't have access to the MySQL command line interface.
This will display a list of threads with their corresponding ids and execution times, so you can KILL the threads that are taking too long to execute.
In phpMyAdmin you will have a button for stopping threads via KILL; if you are using the command line interface, just use the KILL command followed by the thread id, as in the following example:
KILL 115;
This will terminate the connection for the corresponding thread.
You can check the currently running transactions with
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`
Your transaction should be one of the first, because it's the oldest in the list. Now just take the value from trx_mysql_thread_id and send it the KILL command:
KILL 1234;
If you're unsure which transaction is yours, repeat the first query very often and see which transactions persist.
Check InnoDB status for locks
SHOW ENGINE InnoDB STATUS;
Check MySQL open tables
SHOW OPEN TABLES WHERE In_use > 0;
Check pending InnoDB transactions
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`;
Check lock dependency - what blocks what
SELECT * FROM `information_schema`.`innodb_locks`;
After investigating the results above, you should be able to see what is locking what.
The root cause of the issue might also be in your code: check the related functions, and especially their annotations, if you use JPA with something like Hibernate.
For example, as described here, misuse of the following annotation might cause locks in the database:
@Transactional(propagation = Propagation.REQUIRES_NEW)
This started happening to me when my database size grew and I was doing a lot of transactions on it.
The truth is there is probably some way to optimize either your queries or your DB, but try these two queries for a workaround:
Run this:
SET GLOBAL innodb_lock_wait_timeout = 5000;
And then this:
SET innodb_lock_wait_timeout = 5000;
When you establish a connection for a transaction, you acquire a lock before performing the transaction. If you are not able to acquire the lock, you retry for some time. If the lock still cannot be obtained, the lock-wait-timeout error is thrown. The reason you cannot acquire the lock is that you are not closing the connection: when you try to get the lock a second time, your previous connection is still open and holding it.
Solution: close the connection, or use setAutoCommit(true) (depending on your design) to release the lock.
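At the SQL level, releasing the lock means finishing the open transaction or re-enabling autocommit, which is what setAutoCommit(true) does under the hood:

```sql
-- Either finish the open transaction explicitly...
COMMIT;   -- or ROLLBACK;

-- ...or re-enable autocommit so each statement commits on its own
-- and no implicit transaction is left holding locks.
SET autocommit = 1;
```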
Restart MySQL, it works fine.
BUT beware: if such a query gets stuck, there is a problem somewhere:
in your query (misplaced char, cartesian product, ...)
very numerous records to edit
complex joins or tests (MD5, substrings, LIKE %...%, etc.)
data structure problem
foreign key model (chain/loop locking)
misindexed data
As @syedrakib said, it works, but this is no long-living solution for production.
Beware: doing the restart can leave your data in an inconsistent state.
Also, you can check how MySQL handles your query with the EXPLAIN keyword and see whether something can be done there to speed up the query (indexes, complex tests, ...).
Go to the process list in MySQL; there you can see which tasks are still running. Kill the particular process, or wait until it completes.
I ran into the same problem with an "update" statement. My solution was simply to run through the operations available in phpMyAdmin for the table: I optimized, flushed and defragmented the table (not in that order). No need to drop the table and restore it from backup for me. :)
I had the same issue. I think it was a deadlock issue with SQL. You can just force-close the SQL process from Task Manager. If that doesn't fix it, restart your computer. You don't need to drop the table and reload the data.
I had this problem when trying to delete a certain group of records (using MS Access 2007 with an ODBC connection to MySQL on a web server). Typically I would delete certain records from MySQL then replace with updated records (cascade delete several related records, this streamlines deleting all related records for a single record deletion).
I tried to run through the operations available in phpMyAdmin for the table (optimize, flush, etc.), but I was getting a "need permission to RELOAD" error when I tried to flush. Since my database is on a web server, I couldn't restart the database. Restoring from a backup was not an option.
I tried running the delete query for this group of records through cPanel's MySQL access on the web. Got the same error message.
My solution: I used Sun's (Oracle's) free MySQL Query Browser (which I had previously installed on my computer) and ran the delete query there. It worked right away; problem solved. I was then able to once again perform the function from the Access script over the ODBC connection to MySQL.
Issue in my case: some updates were made to some rows within a transaction, and before that transaction was committed, the same rows were being updated elsewhere, outside this transaction. Ensuring that all updates to the rows are made within the same transaction resolved my issue.
The issue was resolved in my case by changing DELETE to TRUNCATE.
Issue:
query:
delete from Survey1.sr_survey_generic_details
mycursor.execute(query)
Fix:
query:
truncate table Survey1.sr_survey_generic_details
mycursor.execute(query)
This happened to me when I was accessing the database from multiple platforms, for example from DBeaver and control panels. At some point DBeaver got stuck, and therefore the other panels couldn't process additional information. The solution is to reboot all access points to the database: close them all and restart.
Fixed it.
Make sure you don't have a mismatched data type in your insert query.
I had an issue where I was inserting "user browser agent data" into a VARCHAR(255) column and was hitting this lock; when I changed it to TEXT(255) it was fixed.
So most likely it is a mismatch of data types.
I solved the problem by dropping the table and restoring it from backup.