I've recently noticed my MySQL server is creating a reasonably large number of disk tables [created temp disk tables: 67, created temp tables: 304].
I've been trying to identify what queries are creating these tables, but I've had no luck. I've enabled the slow query log for queries taking more than 1 second, but the queries showing up in there don't make sense. The only queries showing up regularly in the slow query log are updates to a single row on a user table, using the primary key as the where clause.
I've run 'explain' on all the queries that run regularly, and I'm coming up blank on the culprit.
The EXPLAIN report may say "Using filesort" but that's misleading. It doesn't mean it's writing to a file, it only means it's sorting without the benefit of an index.
The EXPLAIN report may say "Using temporary" but this doesn't mean it's using a temporary table on disk. It can make a small temp table in memory. The table must fit within the lesser of max_heap_table_size and tmp_table_size. If you increase tmp_table_size, you should also increase max_heap_table_size to match.
See also http://dev.mysql.com/doc/refman/5.1/en/internal-temporary-tables.html for more info on temporary table management.
But 4 gigs for that value is very high! Consider that this memory can potentially be used per connection. The default is 16 megs, so you've increased it by a factor of 256.
So we want to find which queries caused the temp disk tables.
If you run MySQL 5.1 or later, you can SET GLOBAL long_query_time=0 to make all queries output to the slow query log. Be sure to do this only temporarily and set it back to a nonzero value when you're done! :-)
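For example (remember that a changed GLOBAL value is only picked up by connections opened after the change):
-- temporarily log every statement
SET GLOBAL long_query_time = 0;
-- ...capture some traffic, then restore a sane value, e.g. one second
SET GLOBAL long_query_time = 1;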
If you run Percona Server, the slow query log is extended with additional information and configurability, including whether the query caused a temp table or a temp disk table. You can even filter the slow query log to include only queries that cause a temp table or temp disk table (see the Percona Server documentation).
You can also process Percona Server's slow-query log with mk-query-digest and filter for queries that cause a temp disk table.
mk-query-digest /path/to/slow.log --no-report --print \
  --filter '($event->{Disk_tmp_table}||"") eq "Yes"'
If you are using MySQL 5.6 or above, you can use the performance schema. Try something like:
SELECT * FROM performance_schema.events_statements_summary_by_digest WHERE SUM_CREATED_TMP_DISK_TABLES > 0\G
Queries with an ORDER BY often have to use a temporary table. If you run EXPLAIN on those queries, you might see in the Extra column:
Using temporary; Using filesort
So look for queries with an ORDER BY.
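A made-up example of the kind of plan to look for (table and column names are hypothetical; ordering by an aggregate is a classic way to get both notes):
EXPLAIN SELECT user_id, COUNT(*) AS cnt
FROM orders
GROUP BY user_id
ORDER BY cnt DESC\G
-- Extra: Using temporary; Using filesort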
I have a large MySQL database that receives large volumes of queries; each query takes around 5-10 seconds to perform.
Queries involve checking records, updating records and adding records.
I'm experiencing some significant bottlenecks in query execution, which I believe is due to incoming queries having to 'queue' whilst current queries are using records that these incoming queries need to access.
Is there a way, besides completely reformatting my database structure and SQL queries, to enable simultaneous use of database records by queries?
An INSERT, UPDATE, or DELETE operation locks the relevant tables (MyISAM) or rows (InnoDB) until the operation completes. Make sure queries of this type are committed quickly, and also check your transaction isolation level and which parts of your transactions take the relevant locks.
For MySQL internal locking see: https://dev.mysql.com/doc/refman/5.5/en/internal-locking.html
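To see what is blocked right now, a couple of standard commands help:
SHOW FULL PROCESSLIST;         -- look for queries in a "Locked" or lock-wait state
SHOW ENGINE INNODB STATUS\G    -- the TRANSACTIONS section shows InnoDB lock waits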
Also remember that MySQL has different storage engines with different features, e.g.:
The MyISAM storage engine supports concurrent inserts to reduce
contention between readers and writers for a given table: If a MyISAM
table has no holes in the data file (deleted rows in the middle), an
INSERT statement can be executed to add rows to the end of the table
at the same time that SELECT statements are reading rows from the
table.
https://dev.mysql.com/doc/refman/5.7/en/concurrent-inserts.html
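Whether MyISAM does this is controlled by the standard concurrent_insert server variable (AUTO is the default; the value 2, i.e. ALWAYS, permits concurrent inserts even when the table has holes):
SHOW VARIABLES LIKE 'concurrent_insert';
-- SET GLOBAL concurrent_insert = 2;   -- 2 means ALWAYS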
Finally, take a look at https://dev.mysql.com/doc/refman/5.7/en/optimization.html
We have about 60-70 databases on an RDS server, and a lot of them can be deleted.
I want to do a benchmark of size before and after, and they are all (to my knowledge) InnoDB tables.
So, I'm using the information_schema table per this link: https://www.percona.com/blog/2008/03/17/researching-your-mysql-table-sizes/
and this is great, except the first query listed (and I presume the others) just runs and runs and eventually finishes after EIGHT MINUTES.
I can run this query instantly:
SELECT COUNT(*) FROM information_schema.TABLES;
And get about 12,500 tables.
I also notice - ironically enough - that information_schema.TABLES has no indexes! My instinct is not to mess with that.
My best option at this point is to dump the TABLES table, and run the query on a copy that I actually index.
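Something like this is what I have in mind (table and index names here are made up, and the initial copy will of course still be as slow as reading information_schema is):
CREATE TABLE table_sizes ENGINE=InnoDB AS
  SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE AS TABLE_ENGINE,
         TABLE_ROWS, DATA_LENGTH, INDEX_LENGTH
  FROM information_schema.TABLES;

ALTER TABLE table_sizes ADD INDEX (TABLE_SCHEMA, TABLE_NAME);

-- total size per database, in MB
SELECT TABLE_SCHEMA,
       ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024, 1) AS total_mb
FROM table_sizes
GROUP BY TABLE_SCHEMA
ORDER BY total_mb DESC;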
My questions are:
1. How dynamic is the information_schema.TABLES table, and in fact that entire database?
2. Why is it running so slowly?
3. Would it be advisable to index some key fields to optimize the queries I want to do?
4. If I do an SQL dump, will I be getting current table size information?
Thanks, I hope this question is instructive.
information_schema is currently a thin layer on top of some older stuff. The older stuff needed to "open" each table to discover its size, etc. That involved reading at least the .frm. But it did not need to open in order to count the number of tables. Think of the difference between SHOW TABLES and SHOW TABLE STATUS.
table_open_cache and table_definition_cache probably did not have all the tables in them when you did the 8-minute query: the values for those VARIABLES may have been less than 12,500, implying that there would have been churn.
In the future (probably 5.8), all that info will probably be sitting in a single InnoDB table instead of splayed across the OS's file system. At that point, it will be quite fast. (Think of how fast a table scan of 12,500 rows can be done, especially if fully cached in RAM.)
Since the information_schema does not have "real" tables, there is no way to add INDEXes.
mysqldump does not provide the table size info. Even if it did, it would be no faster, since it would go through the same, old, mechanism.
60 is a questionably large number of databases; 12K is a large number of tables. Often this implies a schema design that chooses to create multiple tables instead of putting data into a single table?
Our server database is MySQL 5.1.
We have 754 tables in our db. We create a table for each project, hence the large number of tables.
For the past week I have noticed a very long delay in inserts and updates to any table. If I create a new table and insert into it, it takes one minute to insert around 300 records.
Whereas our test database on the same server has 597 tables, and the same insertion is very fast in the test db.
The default engine is MyISAM, but we have a few tables in InnoDB.
There were a few triggers running. After I deleted the triggers it became somewhat faster, but it is still not fast enough.
Use DESCRIBE (a synonym for EXPLAIN) to see your query execution plans.
See http://dev.mysql.com/doc/refman/5.1/en/explain.html for its usage.
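For example (table and column names here are made up):
EXPLAIN SELECT * FROM project_records WHERE project_id = 42\G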
As @swapnesh mentions, the DESCRIBE command is very useful for performance debugging.
You can also check your installation for issues using:
https://raw.github.com/rackerhacker/MySQLTuner-perl/master/mysqltuner.pl
You use it like this:
wget https://raw.github.com/rackerhacker/MySQLTuner-perl/master/mysqltuner.pl
chmod +x mysqltuner.pl
./mysqltuner.pl
Of course, here I am assuming that you run some kind of a Unix based system.
You can use OPTIMIZE TABLE. According to the manual, it does the following:
Reorganizes the physical storage of table data and associated index
data, to reduce storage space and improve I/O efficiency when
accessing the table. The exact changes made to each table depend on
the storage engine used by that table
The syntax is:
OPTIMIZE TABLE tablename
Inserts are typically faster when made in bulk rather than one by one. Try inserting 10, 30, or 100 records per statement.
If you use JDBC, you may be able to achieve the same effect with statement batching, without changing the SQL.
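For example, a single multi-row INSERT (hypothetical table and columns) replaces several round trips:
INSERT INTO project_records (project_id, name, created_at) VALUES
  (1, 'alpha', NOW()),
  (1, 'beta',  NOW()),
  (2, 'gamma', NOW());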
I have a MySQL table with over 30 million records that was originally stored as MyISAM. Here is a description of the table:
I would run the following query against this table which would generally take around 30 seconds to complete. I would change #eid each time to avoid database or disk caching.
select count(fact_data.id)
from fact_data
where fact_data.entity_id=#eid
and fact_data.metric_id=1
I then converted this table to InnoDB without making any other changes, and afterwards the same query returns in under a second every single time I run it. Even when I randomly set #eid to avoid caching, the query returns in under a second.
I've been researching the differences between the two storage engines to try to explain the dramatic improvement in performance, but I haven't been able to come up with anything. In fact, much of what I read indicates that MyISAM should be faster.
The queries I'm running are against a local database with no other processes hitting the database at the time of the tests.
That's a surprisingly large performance difference, but I can think of a few things that may be contributing.
MyISAM has historically been viewed as faster than InnoDB, but for recent versions of InnoDB, that is true for a much, much smaller set of use cases. MyISAM is typically faster for table scans of read-only tables. In most other use cases, I typically find InnoDB to be faster. Often many times faster. Table locks are a death knell for MyISAM in most of my usage of MySQL.
MyISAM caches indexes in its key buffer. Perhaps you have set the key buffer too small for it to effectively cache the index for your somewhat large table.
MyISAM depends on the OS to cache table data from the .MYD files in the OS disk cache. If the OS is running low on memory, it will start dumping its disk cache. That could force it to keep reading from disk.
InnoDB caches both indexes and data in its own memory buffer. You can tell the OS not to also use its disk cache if you set innodb_flush_method to O_DIRECT, though this isn't supported on OS X.
InnoDB usually buffers data and indexes in 16kb pages. Depending on how you are changing the value of #eid between queries, it may have already cached the data for one query due to the disk reads from a previous query.
Make sure you created the indexes identically. Use explain to check if MySQL is using the index. Since you included the output of describe instead of show create table or show indexes from, I can't tell if entity_id is part of a composite index. If it was not the first part of a composite index, it wouldn't be used.
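With the table from the question, those checks would look something like this (12345 just stands in for a real #eid):
SHOW CREATE TABLE fact_data\G
SHOW INDEXES FROM fact_data;

EXPLAIN SELECT COUNT(fact_data.id)
FROM fact_data
WHERE fact_data.entity_id = 12345
  AND fact_data.metric_id = 1\G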
If you are using a relatively modern version of MySQL, run the following command before running the query:
set profiling = 1;
That will turn on query profiling for your session. After running the query, run
show profiles;
That will show you the list of queries for which profiles are available. I think it keeps the last 20 by default. Assuming your query was the first one, run:
show profile for query 1;
You will then see the duration of each stage in running your query. This is extremely useful for determining what (e.g., table locks, sorting, creating temp tables, etc.) is causing a query to be slow.
My first suspicion would be that the original MyISAM table and/or indexes became fragmented over time resulting in the performance slowly degrading. The InnoDB table would not have the same problem since you created it with all the data already in it (so it would all be stored sequentially on disk).
You could test this theory by rebuilding the MyISAM table. The easiest way to do this would be to use a "null" ALTER TABLE statement:
ALTER TABLE mytable ENGINE = MyISAM;
Then check the performance to see if it is better.
Another possibility would be that the database itself is simply tuned for InnoDB performance rather than MyISAM. For example, InnoDB uses the innodb_buffer_pool_size parameter to decide how much memory to allocate for caching data and indexes, but MyISAM uses the key_buffer_size parameter. If your database has a large InnoDB buffer pool and a small key buffer, then InnoDB performance is going to be better than MyISAM performance, especially for large tables.
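You can compare the two caches on your server directly (these are the standard variable names):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'key_buffer_size';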
What are your index definitions? There are ways of creating indexes for MyISAM in which your indexed fields will not be used when you think they would be.
We have a series of tables that have grown organically to several million rows; in production, doing an insert or update can take up to two seconds. However, if I dump the table and recreate it from the dump, queries are lightning fast.
We have rebuilt one of the tables by creating a copy, rebuilding the indexes, and then doing a rename switch and copying over any new rows; this worked because that table is only ever appended to. Doing this made the inserts and updates lightning quick.
My questions:
Why do inserts get slow over time?
Why does recreating the table and doing an import fix this?
Is there any way that I can rebuild indexes without locking a table for updates?
It sounds like it's one of the following:
Index unbalancing over time
Disk fragmentation
Internal InnoDB datafile fragmentation
You could try ANALYZE TABLE foo, which doesn't take locks, just does a few index dives, and finishes in a few seconds.
If this doesn't fix it, you can use
mysql> SET PROFILING=1;
mysql> INSERT INTO foo ($testdata);
mysql> show profile for QUERY 1;
and you should see where most of the time is spent.
Apparently InnoDB performs better when inserts are done in primary key order; is that your case?
InnoDB performance is heavily dependent on RAM. If the indexes don't fit in RAM, performance can drop considerably and quickly. Rebuilding the whole table improves performance because the data and indexes are then laid out optimally.
If you are only ever inserting into the table, MyISAM is better suited for that. You won't have locking issues if only appending, since the record is added to the end of the file. MyISAM will also allow you to use MERGE tables, which are really nice for taking parts of the data offline or archiving without having to do exports and/or deletes.
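A minimal sketch of the MERGE idea (all table names and columns here are made up; the underlying MyISAM tables must have identical definitions):
CREATE TABLE log_2012 (id INT NOT NULL, msg VARCHAR(255), KEY (id)) ENGINE=MyISAM;
CREATE TABLE log_2013 LIKE log_2012;

CREATE TABLE log_all (id INT NOT NULL, msg VARCHAR(255), KEY (id))
  ENGINE=MERGE UNION=(log_2012, log_2013) INSERT_METHOD=LAST;

-- later, to take the older data offline without deleting anything:
-- ALTER TABLE log_all UNION=(log_2013);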
Inserting into or updating a table also requires its indexes to be maintained. If you are doing bulk inserts, try to do them in one transaction (as the dump and restore does). If the table is write-biased, I would think about dropping the indexes anyway, or letting a background job do the read-processing of the table (e.g. by copying it to an indexed one).
Track down the my.ini that is in use and increase key_buffer_size. I had a 1.5 GB table with a large key where the queries per second (all writes) were down to 17. I found it strange that the administration panel (while the table was locked for writing to speed up the process) showed 200 InnoDB reads per second against 24 writes per second.
It was forced to read the index off disk. I changed key_buffer_size from 8M to 128M, and performance jumped to 150 queries per second completed, with only 61 reads needed for 240 writes (after a restart).
Could it be due to fragmentation of XFS?
Copy/pasted from http://stevesubuntutweaks.blogspot.com/2010/07/should-you-use-xfs-file-system.html :
To check the fragmentation level of a drive, for example located at /dev/sda6:
sudo xfs_db -c frag -r /dev/sda6
The result will look something like so:
actual 51270, ideal 174, fragmentation factor 99.66%
That is an actual result I got from the first time I installed these utilities, previously having no knowledge of XFS maintenance. Pretty nasty. Basically, the 174 files on the partition were spread over 51270 separate pieces. To defragment, run the following command:
sudo xfs_fsr -v /dev/sda6
Let it run for a while; the -v option makes it show progress. After it finishes, try checking the fragmentation level again:
sudo xfs_db -c frag -r /dev/sda6
actual 176, ideal 174, fragmentation factor 1.14%
Much better!