My table has 3 million records and I am using AWS RDS. When creating a new index:
create index jobs_view_stats_created_at_job_id_index on jobs_view_stats (created_at, job_id)
This error is returned:
[70100][1317] Query execution was interrupted
max_allowed_packet: 1073741824
max_execution_time: 180000
Could this issue also be caused by low disk space? The free storage left on the instance is 595.87 MB.
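As a quick sanity check, the table's current footprint can be compared against that free storage, since the index build needs room for the new index pages plus temporary sort files (a sketch; only the table name is taken from the question):
-- Rough size check before the index build (sizes are estimates only)
SELECT table_name,
       ROUND(data_length  / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb
FROM information_schema.tables
WHERE table_name = 'jobs_view_stats';
The result gives a sense of whether 595.87 MB of headroom is tight for the operation.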
Related
I need to delete about 400 rows from a MySQL table with these columns:
id - INT(11)
training_id - INT(11)
estimator_blob - LONGBLOB
The size of estimator_blob is 32 MB.
It seems that a query like this:
DELETE FROM estimator WHERE training_id IN (1, 2, ..., 400);
locks the database. After executing this query, all new queries that insert results into this table run extremely slowly.
In the database status output (mysql> SHOW ENGINE INNODB STATUS;) I see this message:
Process ID=2600, Main thread ID=139974149678848, state: enforcing dict cache limit
What exactly does "enforcing dict cache limit" mean? Is it the reason for this "lock"?
Increasing the innodb_buffer_pool_size didn't help.
I'm using MySQL 5.7.19-0ubuntu0.16.04.1 on a machine with 70 GB of RAM.
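(As an aside, a common way to soften a DELETE like this is to remove the rows in small batches so that each transaction and its undo stay small. A sketch only, assuming the training_id values form a contiguous 1-400 range as in the question; otherwise keep the IN (...) list and split it up:)
-- Repeat until the statement reports 0 rows affected; the batch size of 50 is arbitrary.
DELETE FROM estimator
WHERE training_id BETWEEN 1 AND 400
LIMIT 50;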
"enforcing dict cache limit" is a background state of the InnoDB main thread. In this state, InnoDB checks the memory consumption of the table dictionary cache (i.e. table metadata) and cleans up the dictionary cache if required.
As per the guidance provided by MySQL developers for bug report 84424:
You have to do several things in order to alleviate the problem:
increase the additional memory pool
increase total number of file handles available to MySQL
increase number of file handles for InnoDB
improve performance of the I/O on your operating system
You probably need to restart your server to fully recover from the issue.
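For MySQL 5.7 those points map roughly to the following settings (a sketch; the values shown are assumptions to tune for your workload, not recommendations):
-- Inspect the current limits and how heavily the table/dictionary caches are used
SHOW GLOBAL VARIABLES LIKE 'table_definition_cache';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';
SHOW GLOBAL VARIABLES LIKE 'innodb_open_files';
SHOW GLOBAL STATUS LIKE 'Open%table%';
-- table_open_cache and table_definition_cache are dynamic (example values only)
SET GLOBAL table_open_cache = 4000;
SET GLOBAL table_definition_cache = 4000;
-- open_files_limit and innodb_open_files are not dynamic in 5.7:
-- raise them in my.cnf (and the OS ulimit) and restart the server.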
I am forking multiple processes in PHP (via Supervisor). Each one opens a connection to the same MySQL database and executes the same SELECT query in parallel (via Gearman). If I increase the number of processes (i.e. simultaneous connections), so that more copies of the same query run in parallel, the time each process spends in the "Sending data" state in SHOW PROCESSLIST increases. It's a simple SELECT with transaction isolation level READ UNCOMMITTED. Is this some MySQL configuration issue? Does the SELECT query cause table locks? Or maybe a full scan does?
Server: Ubuntu 16.04.2 LTS. 1 CPU core. MySQL 5.7.17. innodb_buffer_pool_size 12 GB
The query uses 32 tables, including self joins (13 unique tables), and executes in 3 seconds on a single connection.
Gotta see the details. Sounds like missing or inadequate indexes.
Is this "Entity-Attribute-Value? If so, have you followed the tips here: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#speeding_up_wp_postmeta
InnoDB does not lock tables. But it could be doing table scans which would lock all rows. Again, sounds like bad indexes and/or query formulation.
Please provide SHOW CREATE TABLE for all 13 tables, plus SELECT and EXPLAIN SELECT ....
If there is some kind of write going on in the background, that could impact the SELECT, even in READ UNCOMMITTED mode.
At least 16GB of RAM?
Forking how many processes? How many CPU cores do you have?
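To answer the lock/full-scan part concretely, the usual checks look something like this (a sketch; the table and join below are hypothetical stand-ins for the real 13 tables and the real query):
-- Hypothetical names; substitute the real tables and the real SELECT.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
EXPLAIN SELECT j1.id FROM jobs j1 JOIN jobs j2 ON j1.parent_id = j2.id;  -- rows with type=ALL are full scans
SHOW GLOBAL STATUS LIKE 'Select_scan';       -- joins that fully scanned the first table
SHOW GLOBAL STATUS LIKE 'Select_full_join';  -- joins performed without usable indexes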
I have a database hosted on Clever Cloud (https://www.clever-cloud.com/pricing - MySQL add-on size LM: 1 GB of memory & 2 vCPUs). I have a table with 188,000 rows, about 311 MB, using the InnoDB engine.
When I try to drop a column of my table (there is no index on this column), I get the following error in phpMyAdmin:
2006 - MySQL server has gone away
Log of MySQL at the time of the error : https://gist.github.com/urcadox/038c180cefdcba20e1052e7418a43324
I've read that the InnoDB engine uses memory to create a new table, copy the data without the dropped column, and swap the old and new tables to perform the drop operation.
Is there anything I can do to use less memory?
Is there any way to make InnoDB use disk instead of memory?
Thank you!
Why don't you try ALGORITHM=COPY in your ALTER TABLE statement? It is part of the ALTER TABLE syntax and forces the table to be copied rather than modified in place. Its memory usage is likely to be lower, but certain caveats apply:
Any ALTER TABLE operation run with the ALGORITHM=COPY clause prevents concurrent DML operations. Concurrent queries are still allowed. That is, a table-copying operation always includes at least the concurrency restrictions of LOCK=SHARED (allow queries but not DML). You can further restrict concurrency for such operations by specifying LOCK=EXCLUSIVE, which prevents DML and queries.
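A sketch of what that looks like (table and column names are placeholders):
-- ALGORITHM=COPY rebuilds the table row by row; LOCK=SHARED allows reads but blocks writes during the copy.
ALTER TABLE my_table
  DROP COLUMN unused_column,
  ALGORITHM=COPY,
  LOCK=SHARED;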
The MySQL server seems to constantly lock up and stop responding on certain types of queries, eventually (after a couple of minutes of not responding) giving up with the error "MySQL server has gone away", then it hangs again on the next set of queries, again and again.
The server is set up as a slave replicating from a master into dbA, mostly INSERT statements, around 5-10 rows per second. A PHP-based application running on the server reads the freshly replicated data every 5-10 seconds, processes it, and stores (INSERT ON DUPLICATE KEY UPDATE) the results in a separate database dbB. All tables use the MyISAM engine. A web application displays the post-processed data for the user. In basic terms, the processing compresses time-series data from per-second resolution into per-minute, per-hour, and per-day resolutions.
When MySQL locks up, I run the SHOW PROCESSLIST command and I see the following queries:
N User Time Status SQL query
1 system user XX update INSERT INTO `dbA`.`tableA` (...) VALUES (...)
2 ???? XX Waiting for query cache lock INSERT INTO `dbB`.`tableB` (...) VALUES (...) ON DUPLICATE KEY UPDATE ...
3 ???? XX Writing to net SELECT ... FROM `dbA`.`tableA` WHERE ... ORDER BY ...
The "Time" column will keep ticking away synchronously until some sort of query wait timeout has been reached and then we get error "MySQL server has gone away". In 5-10 seconds when it will be time to process new data again the same lock up will happen. Query #1 is the replication process. Query #2 is the updating of the post-processed data. Query #3 is streaming (unbuffered) the newly replicated data for processing. It is the Query #3 that eventually produces the error "MySQL server has gone away", presumably because it is the first one to timeout.
It looks like some sort of deadlock, but I cannot understand why. A simultaneous SELECT and INSERT in one database seems to deadlock with the query cache update done by INSERT ON DUPLICATE KEY UPDATE in a different database. If I turn off either replication or the query cache, the lock-up does not happen. Platform: Debian 7, MySQL 5.5.31, PHP 5.4.4 - all standard packages. It may be worth noting that almost the same application currently works fine on Debian 6, MySQL 5.1.66, PHP 5.3.3, the only difference being that the post-processed data is stored using separate INSERT and UPDATE queries rather than INSERT ON DUPLICATE KEY UPDATE.
MySQL configuration (on both the Debian 6 and 7 machines):
key_buffer_size = 2G
max_allowed_packet = 16M
thread_cache_size = 64
max_connections = 200
query_cache_limit = 2M
query_cache_size = 1G
Any hints as to why this lock-up occurs would be much appreciated!
Try to reduce the query cache size significantly. 1G is probably too big.
Start with 16M or 32M and adjust query_cache_limit accordingly (256K?), then work your way up as long as read performance keeps improving without writes hitting "Waiting for query cache lock".
"Be cautious about sizing the query cache excessively large, which increases the overhead required to maintain the cache, possibly beyond the benefit of enabling it. Sizes in tens of megabytes are usually beneficial. Sizes in the hundreds of megabytes might not be."
http://dev.mysql.com/doc/refman/5.6/en/query-cache.html
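Concretely, both settings can be changed at runtime, for example (the values are just the starting points suggested above; make them permanent in my.cnf once they prove out):
SET GLOBAL query_cache_size  = 32 * 1024 * 1024;  -- 32M instead of 1G
SET GLOBAL query_cache_limit = 256 * 1024;        -- 256K per cached result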
I am trying to optimize a query of the form SELECT SQL_NO_CACHE col FROM TABLE ... When I first connect to the database and execute the query, it takes about 9 seconds. When I execute the query a second time, it takes almost 0.1 seconds. I put SQL_NO_CACHE in the query to ensure that MySQL is not reading the result from the query cache. My question is: why does the first execution of the query, right after connecting to the database (mysql -uroot ...), take significantly longer than subsequent executions? What is the actual execution time of the query?
MySQL can take a while to warm up its internal caches. Remember, SQL_NO_CACHE means avoid the query cache only. The index cache is the most important from a performance perspective. If the index has not been read, there's a significant penalty the first time it's used.
If you're using InnoDB, which you should be, ensure that your buffer pool is sufficiently large. Most servers should allocate at least several GB of memory.
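A quick way to check whether the buffer pool is what you are warming up (a sketch; the 4G figure is only an example to adapt to your RAM and MySQL version):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';  -- many Innodb_buffer_pool_reads vs. read_requests = cold or undersized pool
-- Online resizing works from MySQL 5.7.5; on older versions set this in my.cnf and restart.
SET GLOBAL innodb_buffer_pool_size = 4 * 1024 * 1024 * 1024;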