The MySQL server seems to constantly lock up and stop responding on certain types of queries, eventually (after a couple of minutes of not responding) giving up with the error "MySQL server has gone away", then hanging again on the next set of queries, again and again. The server is set up as a slave replicating from a master into dbA, mostly INSERT statements, around 5-10 rows per second. A PHP-based application running on the server reads the freshly replicated data every 5-10 seconds, processes it and stores the results (INSERT ... ON DUPLICATE KEY UPDATE) in a separate database dbB. All tables use the MyISAM engine. A web application displays the post-processed data to the user. In basic terms, the processing compresses time series data from per-second resolution into per-minute, per-hour and per-day resolutions.
When MySQL locks up, I run SHOW PROCESSLIST and see the following queries:
N | User | Time | Status | SQL query
1 | system user | XX | update | INSERT INTO `dbA`.`tableA` (...) VALUES (...)
2 | ???? | XX | Waiting for query cache lock | INSERT INTO `dbB`.`tableB` (...) VALUES (...) ON DUPLICATE KEY UPDATE ...
3 | ???? | XX | Writing to net | SELECT ... FROM `dbA`.`tableA` WHERE ... ORDER BY ...
The "Time" column will keep ticking away synchronously until some sort of query wait timeout has been reached and then we get error "MySQL server has gone away". In 5-10 seconds when it will be time to process new data again the same lock up will happen. Query #1 is the replication process. Query #2 is the updating of the post-processed data. Query #3 is streaming (unbuffered) the newly replicated data for processing. It is the Query #3 that eventually produces the error "MySQL server has gone away", presumably because it is the first one to timeout.
It looks like some sort of deadlock, but I cannot understand why. A simultaneous SELECT and INSERT in one database seems to deadlock against the query cache update triggered by INSERT ... ON DUPLICATE KEY UPDATE in a different database. If I turn off either replication or the query cache, the lock-up does not happen. Platform: Debian 7, MySQL 5.5.31, PHP 5.4.4 - all standard packages. It may be worth noting that almost the same application is currently working fine on Debian 6, MySQL 5.1.66, PHP 5.3.3, the only difference being that the post-processed data is stored using separate INSERT and UPDATE queries rather than INSERT ... ON DUPLICATE KEY UPDATE.
MySQL configuration (on both the Debian 6 and 7 machines):
key_buffer_size = 2G
max_allowed_packet = 16M
thread_cache_size = 64
max_connections = 200
query_cache_limit = 2M
query_cache_size = 1G
Any hints as to why this lock-up occurs will be much appreciated!
Try reducing the query cache size significantly; 1G is almost certainly too big.
Start with 16M or 32M and adjust query_cache_limit accordingly (256K?), then work your way up as long as read performance improves without hitting "Waiting for query cache lock" on writes.
"Be cautious about sizing the query cache excessively large, which increases the overhead required to maintain the cache, possibly beyond the benefit of enabling it. Sizes in tens of megabytes are usually beneficial. Sizes in the hundreds of megabytes might not be."
http://dev.mysql.com/doc/refman/5.6/en/query-cache.html
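As a rough starting point, a my.cnf sketch along those lines (the numbers are illustrative, not tuned for this workload):
# conservative query cache starting point
query_cache_type  = 1
query_cache_size  = 32M
query_cache_limit = 256K
After it has run for a while, SHOW GLOBAL STATUS LIKE 'Qcache%'; shows Qcache_hits and Qcache_lowmem_prunes, which together tell you whether the cache is actually helping and whether growing it any further is worth it.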
Related
I am forking multiple processes in PHP (via Supervisor). Each one creates a connection to the same MySQL DB and executes the same SELECT query in parallel (via Gearman). If I increase the number of processes (i.e. simultaneous connections) so that more of these identical queries run in parallel, the "Sending data" time shown in SHOW PROCESSLIST grows in every process. It's a simple SELECT with transaction isolation level READ UNCOMMITTED. Is this some MySQL config issue? Or does the SELECT query cause table locks? Or maybe the full scan does?
Server: Ubuntu 16.04.2 LTS. 1 CPU core. MySQL 5.7.17. innodb_buffer_pool_size 12 GB
The query uses 32 tables including self-joins (13 unique tables) and executes in 3 seconds on a single connection.
Gotta see the details. Sounds like missing or inadequate indexes.
Is this "Entity-Attribute-Value? If so, have you followed the tips here: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#speeding_up_wp_postmeta
InnoDB does not lock tables. But it could be doing table scans which would lock all rows. Again, sounds like bad indexes and/or query formulation.
Please provide SHOW CREATE TABLE for all 13 tables, plus SELECT and EXPLAIN SELECT ....
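For what it's worth, a full table scan is easy to spot with EXPLAIN; a minimal sketch using a hypothetical table and column (not from the question):
EXPLAIN SELECT * FROM orders WHERE customer_name = 'Alice';
-- "type: ALL" with a rows estimate near the table size means a full scan;
-- ALTER TABLE orders ADD INDEX idx_customer (customer_name);
-- typically changes it to "type: ref" and cuts the rows examined.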
If there is some kind of write going on in the background, that could impact the SELECT, even in READ UNCOMMITTED mode.
At least 16GB of RAM?
Forking how many processes? How many CPU cores do you have?
I'm developing a mobile application whose backend is developed in Java and database is MySQL.
We have some insert and update operations on database tables with a lot of rows (between 400,000 and 3,000,000). Each operation usually doesn't need to touch every row of the table, but several of them may be called simultaneously and together update around 20% of the rows.
Sometimes I get these errors:
Deadlock found when trying to get lock; try restarting transaction
and
Lock wait timeout exceeded; try restarting transaction
I have improved my queries, making them smaller and faster, but I still have a big problem when some operations cannot be performed at all.
My solutions until now have been:
Increase server performance (AWS Instance from m2.large to c3.2xlarge)
SET GLOBAL tx_isolation = 'READ-COMMITTED';
Avoid checking foreign keys: SET FOREIGN_KEY_CHECKS = 0; (I know this is not safe, but my priority is not to lock the database)
Set these values for the timeout variables (SHOW VARIABLES LIKE '%timeout%';):
connect_timeout: 10
delayed_insert_timeout: 300
innodb_lock_wait_timeout: 50
innodb_rollback_on_timeout: OFF
interactive_timeout: 28800
lock_wait_timeout: 31536000
net_read_timeout: 30
net_write_timeout: 60
slave_net_timeout: 3600
wait_timeout: 28800
But I'm not sure whether these changes have hurt performance.
Any ideas on how to reduce these errors?
Note: these other SO answers don't help me:
MySQL Lock wait timeout exceeded
MySQL: "lock wait timeout exceeded"
How can I change the default Mysql connection timeout when connecting through python?
Try to update fewer rows per transaction.
Instead of updating 20% of the rows in a single transaction, update 1% of the rows 20 times.
This will significantly improve performance and help you avoid the lock wait timeouts.
Note: an ORM is not a good fit for big updates; it is better to use plain JDBC for those and keep the ORM for retrieving, updating and deleting a few records at a time. An ORM speeds up the coding phase, not the execution time.
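A minimal sketch of that chunking pattern in plain SQL, assuming a hypothetical orders table with an indexed status column (adjust the WHERE clause and chunk size to your schema):
-- Each statement is its own short transaction with autocommit on,
-- so row locks are held only briefly. Repeat until 0 rows are affected.
UPDATE orders
SET    status = 'processed'
WHERE  status = 'pending'
LIMIT  1000;
In JDBC you would simply loop on executeUpdate() until it returns 0.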
As a comment more than an answer, if you are in the early stages of development, you may wish to consider whether or not you actually need this particular data in a relational database. There are much faster and larger alternatives for storing data from mobile apps depending upon the planned use of the data. [S3 for large files, stored-once, read often (and can be cached); NoSQL (Mongo etc) for unstructured large, write-once, read many, etc.]
On my server, inserting records into the MySQL DB is very slow. According to the server status, InnoDB writes per second are around 20.
I am not an expert - I just graduated from university and don't have much experience with this.
How could I improve the speed of InnoDB writes? Without upgrading the server hardware, is there any way to do it?
My server is not very good, so I installed Microsoft Windows Server 2003 R2. The hardware info is as follows:
CPU: Intel Xeon E5649 2.53GHZ
RAM: 2GB
Any comments are appreciated. Thank you.
Some hints:
Minimize the number of indexes - there will be less index maintenance. This is obviously a trade-off with SELECT performance.
Maximize the number of INSERTs per transaction - the "durability price" is paid less often (physical writing to disk can happen in the background while the rest of the transaction is still executing, if the transaction is long enough). One large transaction will usually be faster than many small transactions, but this is obviously contingent on the actual logic you are trying to implement (see the sketch after these hints).
Move the table to faster storage, such as an SSD. Reads can be cached, but a durable transaction must be physically written to disk, so caching alone is not enough.
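A minimal sketch of the batching idea, using a hypothetical sensor_log table (table and values are placeholders):
-- One explicit transaction around many inserts: the expensive log flush
-- to disk happens once at COMMIT instead of once per row.
START TRANSACTION;
INSERT INTO sensor_log (ts, reading) VALUES (NOW(), 1.0);
INSERT INTO sensor_log (ts, reading) VALUES (NOW(), 2.0);
INSERT INTO sensor_log (ts, reading) VALUES (NOW(), 3.0);
COMMIT;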
Also, it would be helpful if you could show us your exact database structure and the exact INSERT statement you are using.
If you are using the InnoDB engine on local disk, try benchmarking with innodb_flush_method = O_DSYNC. With O_DSYNC our bulk inserts (wrapped in a transaction) were noticeably faster.
Adjust the flush method
In some versions of GNU/Linux and Unix, flushing files to disk with the Unix fsync() call (which InnoDB uses by default) and similar methods is surprisingly slow. If database write performance is an issue, conduct benchmarks with the innodb_flush_method parameter set to O_DSYNC.
https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-diskio.html
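If you want to try it, a minimal my.cnf sketch; note that O_DSYNC applies to GNU/Linux and Unix, while the question's server runs Windows (where innodb_flush_method takes different values), so benchmark before keeping it:
[mysqld]
innodb_flush_method = O_DSYNC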
Modify your MySQL server config:
innodb_flush_log_at_trx_commit = 0
then restart the MySQL server. Be aware that with 0 the log is flushed to disk only about once per second, so up to a second of committed transactions can be lost in a crash; innodb_flush_log_at_trx_commit = 2 is a slightly safer middle ground.
Setting innodb_buffer_pool_size to 512M may also increase performance. Note that this variable only became dynamic in MySQL 5.7.5; on older servers SET GLOBAL innodb_buffer_pool_size will fail, so set it in my.cnf instead:
innodb_buffer_pool_size = 512M
and restart the server. With only 2GB of RAM, leave enough memory for the OS and everything else.
Recommendations could vary based on your implementation. Here are some notes copied directly from MySQL documentation:
Bulk Data Loading Tips
When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled, because that requires a log flush to disk for every insert. To disable autocommit during your import operation, surround it with SET autocommit and COMMIT statements.
Use the multiple-row INSERT syntax to reduce communication overhead between the client and the server if you need to insert many rows:
INSERT INTO yourtable VALUES (1,2), (5,5), ...;
If you are doing a huge batch insert, try to avoid the "select from last_insert_id" that follows each insert, as it seriously slows down the insertions (on the order of turning a 6 minute insert into a 13 hour one). If you need the number for another insertion (a subtable perhaps), assign your own numbers to the ids (this obviously only works if you are sure nobody else is doing inserts at the same time).
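To make the first tip concrete, a minimal sketch of the autocommit pattern (yourtable and the values are placeholders):
SET autocommit = 0;
INSERT INTO yourtable VALUES (1,2);
INSERT INTO yourtable VALUES (5,5);
-- ... more inserts ...
COMMIT;
SET autocommit = 1;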
As mentioned already, you can increase the size of the InnoDB buffer pool (the innodb_buffer_pool_size variable). This is generally a good idea because the default size is pretty small and most systems can afford to lend more memory to the pool. It will speed up most queries, especially SELECTs, as more pages stay cached in memory between queries. The insert buffer (change buffer) is also a section of the buffer pool; it caches changes to secondary index pages so inserts into indexed tables don't force a disk read for every index update, which helps insert-heavy workloads. Hope this helps :)
I'm trying to understand an issue I am having with a MySQL 5.5 server.
This server hosts a number of databases. Each day at a certain time a process runs a series of inserts into TWO tables within one of these databases. This process lasts from 5 to 15 minutes depending on the number of rows being inserted.
This process runs perfectly, but it has a very unexpected side effect: all other inserts and updates running on tables unrelated to the two being inserted into just sit and wait until the process has finished. Reads and writes outside of this database work just fine, and SELECT statements are fine too.
So how is it possible for a single table to block the rest of a database but not the entire server (due to loading)?
A bit of background:
The tables being inserted into are MyISAM with 10 - 20 million rows.
MySQL is Percona 5.5 serving one slave, both running on Debian.
No explicit locking is requested by the process inserting the records.
None of the INSERT statements select data from any other table. They are also INSERT IGNORE statements.
ADDITIONAL INFO:
While this is happening there are no LOCK TABLES entries in the PROCESSLIST, and the process inserting the records that causes this problem does NOT issue any table locks.
I've already investigated the usual causes of table locking and I think I've ruled them out. This behaviour is either something to do with how MySQL works, a quirk of having large database files, or possibly even something to do with the OS/file system.
After a few weeks of trying things I eventually found this: Yoshinori Matsunobu Blog - MyISAM and Disk IO Scheduler
Yoshinori demonstrates that increasing the disk I/O scheduler queue size (nr_requests on Linux) from the default 128 to 100000 dramatically improves MyISAM throughput on most schedulers.
After making this change to my system there were no longer any dramatic database hangs on MyISAM tables while the process was running. There was a slight slowdown, as is to be expected with that volume of data, but the system remained stable.
Anyone experiencing performance issues with MyISAM should read Yoshinori's blog entry and consider this fix.
I have a table with 2 million rows, but it will grow much bigger soon. Basically this table contains points of interest of an image with their respective descriptors. When I try to execute a query that selects points spatially near the query points, the total execution time is too long. More precisely, Duration / Fetch = 0.484 sec / 27.441 sec. And the query is quite simple, returning only ~17000 rows.
My query is:
SELECT fp.fingerprint_id, fp.coord_x, fp.coord_y, fp.angle,
fp.desc1, fp.desc2, fp.desc3, fp.desc4, fp.desc5, fp.desc6, fp.desc7, fp.desc8, fp.desc9, fp.desc10,
fp.desc11, fp.desc12, fp.desc13, fp.desc14, fp.desc15, fp.desc16, fp.desc17, fp.desc18, fp.desc19,
fp.desc20, fp.desc21, fp.desc22, fp.desc23, fp.desc24, fp.desc25, fp.desc26, fp.desc27, fp.desc28,
fp.desc29, fp.desc30, fp.desc31, fp.desc32
FROM fingerprint fp
WHERE
fp.is_strong_point = 1 AND
(coord_x BETWEEN 193-40 AND 193+40) AND (coord_y BETWEEN 49-15 AND 49+15 )
LIMIT 1,1000000;
This is what I've done so far.
I've tried to change key_buffer_size in my.ini, but didn't see much change.
In addition, I've tried adding indexes on coord_x and coord_y, but the query time became slower.
The table is partitioned by range on the coord_x field, which gave me better results.
How I can reduce the Fetch time? Is it possible to reduce it to milliseconds?
I faced a slow fetch issue too (MySQL, InnoDB).
Finally I found that innodb_buffer_pool_size was set to 8MB by default on my system, which is not enough to handle the query. After increasing it to 1GB, performance looks fine:
Duration / Fetch
353 row(s) returned 34.422 sec / 125.797 sec (8MB innodb buffer)
353 row(s) returned 0.500 sec / 1.297 sec (1GB innodb buffer)
UPDATE:
To change innodb_buffer_pool_size, add this to your my.cnf:
innodb_buffer_pool_size=1G
then restart MySQL for the change to take effect.
Reference: How to change value for innodb_buffer_pool_size in MySQL on Mac OS?
If I am right, the query itself is really fast; what is slow is fetching the data from your storage. It takes 27 seconds to load the ~17000 results.
It looks like you are using the wrong storage engine. Try switching the table from one engine to another.
For maximum speed you can use the MEMORY engine. The only drawback is that you would have to keep a copy of that table in another engine if you need to make dynamic changes to it, and after any change you would have to reload the differences or the entire table.
You would also need a script that runs when you restart your server, so that the memory table is loaded on startup of your MySQL server.
See here for the doc
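A minimal sketch of that approach, assuming the fingerprint table fits in RAM and has no BLOB/TEXT columns (which MEMORY does not support); fingerprint_mem and the index name are placeholders:
-- max_heap_table_size must be large enough to hold the copy.
CREATE TABLE fingerprint_mem ENGINE=MEMORY
  AS SELECT * FROM fingerprint;
-- BTREE index so the BETWEEN range conditions can use it
-- (MEMORY defaults to HASH indexes, which only help equality lookups).
ALTER TABLE fingerprint_mem
  ADD INDEX idx_coords USING BTREE (is_strong_point, coord_x, coord_y);
The copy has to be rebuilt after a restart or whenever the source table changes, as the answer notes.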
Increasing my buffer size made my query faster. But you may need to open the my.ini file with Notepad++, because it can show up as garbled hex data if you open it in plain Notepad.
I found a fix: just disable AVG or any other antivirus on your system and then restart your Workbench.
Make sure that the following line is not present in your Hibernate configuration (hibernate.cfg.xml or persistence.xml):
<property name="hbm2ddl.auto">create</property>
If it is there, remove it; hbm2ddl.auto set to create drops and recreates the schema on every startup.