I have updated several MySQL tables to InnoDB. After doing so, MySQL has become sluggish and the hard drive the database is on is constantly writing, even though my changes have completed. Periodically the CPU gets heavy use, 100% on two cores, but whatever is using them does not register that usage in System Monitor (Debian). Reading the database is possible, but slow. I have not tried writing, as the server is obviously busy doing something - but I do not know what.
Digging deeper, I have found that I have a very large ibdata1 file, almost 62 GB. I have some large tables in InnoDB, including ones of 16, 10, 9, 1.5, and 1.1 GB, and many smaller ones.
Does anyone have any idea what may be happening here? Or logs I can look at that might shed some light? I have restarted, but when MySQL comes back online the same thing happens, and it has been going on for over an hour. Also, would it be a good idea for me to change some InnoDB tables back to MyISAM? Of the large ones, none require InnoDB for transactions, but some of my smaller ones do (under 50 MB).
The two most important options to start with are innodb_buffer_pool_size and innodb_log_file_size. Set the former as big as your database, but leave at least 4-6 GB for the OS.
The optimal log file size depends on how write-heavy your workload is, but something around 256 MB works for most workloads.
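As a rough sketch of what that looks like (the 8 GB below is just a placeholder; size it to your data while leaving 4-6 GB for the OS):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';  -- current value, in bytes
SHOW VARIABLES LIKE 'innodb_log_file_size';
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;  -- dynamic on MySQL 5.7.5+; older versions need my.cnf and a restart
-- innodb_log_file_size is not dynamic: change it in my.cnf and restart mysqld.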
I have MySQL 5.7.24 running on a Windows VM. It has a few thousand databases (7000). I understand this is not the recommended setup for MySQL, but some business requirements have necessitated this multi-tenant DB structure and unfortunately I cannot change that.
The server works fine once it is running, but the startup time can get pretty long: almost 20-30 minutes after a clean shutdown of the MySQL service, and over an hour after a restart of the Windows VM.
Is there any way to reduce the startup time?
In my configuration, I observed that innodb_file_per_table = ON (the default for MySQL 5.7), so I think that at startup it is scanning every .ibd file.
Would changing innodb_file_per_table = OFF and then altering each table to get rid of the .ibd files be a viable option? One thing to note is that, in general, each database is pretty small; even with 7000 databases, the total size of the data is only about 60 GB. So to my understanding, innodb_file_per_table = ON is more beneficial when individual tables can get very large, which is not the case on my server.
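Concretely, what I have in mind is something like this (mydb.mytable is just a stand-in; I would loop over every table):

SET GLOBAL innodb_file_per_table = OFF;  -- applies to tables created or rebuilt from now on
ALTER TABLE mydb.mytable ENGINE=InnoDB;  -- rebuilds the table into the ibdata1 system tablespace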
Question: Is my logic reasonable, and could innodb_file_per_table be the reason for the slow startup? Or is there some other config variable I can change so that each .ibd file is not scanned before the server starts accepting connections?
Any help to guide me in the right direction would be much appreciated. Thanks in advance!
You should upgrade to MySQL 8.0.
I was working on a system with the same problem as yours. In our case, we had about 1500 schemas per MySQL instance, and a little over 100 tables per schema. So it was about 160,000+ tables per instance. It caused lots of problems trying to use innodb_file_per_table, because the mysqld process couldn't work with that many open file descriptors efficiently. The only way to make the system work was to abandon file-per-table, and move all the tables into the central tablespace.
But that causes a different problem. Tablespaces never shrink, they only grow. The only way to shrink a tablespace is to move the tables to another tablespace, and drop the big one.
One day one of the developers added some code that used a table like a log, inserting a vast number of rows very rapidly. I got him to stop logging that data, but by then it was too late. MySQL's central tablespace had expanded to 95% of the size of the database storage, leaving too little space for binlogs and other files. And I could never shrink it without incurring downtime for our business.
I asked him, "Why were you writing to that table so much? What are you doing with the data you're storing?" He shrugged and said casually, "I dunno, I thought the data might be interesting sometime, but I had no specific use for them." I felt like strangling him.
The point of this story is that one naïve developer can cause a lot of inconvenience if you disable innodb_file_per_table.
When MySQL 8.0 was being planned, the MySQL Product Manager solicited ideas for scalability criteria. I told him about the need to support instances with a lot of tables, like 160k or more. MySQL 8.0 included an all-new implementation of internal code for handling metadata about tables, and he asked the engineers to test the scalability with up to 1 million tables (with file-per-table enabled).
So the best solution to your problem is not to turn off innodb_file_per_table. That will just lead to another kind of crisis. The best solution is to upgrade to 8.0.
Re your comment:
As far as I know, InnoDB does not open tables at startup time. It opens tables when they are first queried.
Make sure you have table_open_cache and innodb_open_files tuned for your scale. Here is some reading:
https://dev.mysql.com/doc/refman/5.7/en/table-cache.html
https://www.percona.com/blog/2009/11/18/how-innodb_open_files-affects-performance/
https://www.percona.com/blog/2018/11/28/what-happens-if-you-set-innodb_open_files-higher-than-open_files_limit/
https://www.percona.com/blog/2017/10/01/one-million-tables-mysql-8-0/
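To put rough numbers on those two variables, a minimal sketch (20000 is a placeholder; scale it to your table count):

SHOW GLOBAL STATUS LIKE 'Opened_tables';  -- climbing steadily under load means the table cache is too small
SET GLOBAL table_open_cache = 20000;      -- dynamic; placeholder value
-- innodb_open_files is not dynamic in 5.7: set it in my.cnf (and raise open_files_limit to match), then restart.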
I hope you are using an SSD for storage, not a spinning disk. This makes a huge difference when doing a lot of small I/O operations. SSD storage devices have been a standard recommendation for database servers for about 10 years.
Also, this probably doesn't help you, but I gave up on using Windows around 2007, neither as a server nor as a desktop.
I have a MySQL server running on CentOS which houses a large (>12GB) DB. I have been advised to move to InnoDB for performance reasons as we are experiencing lockups where the application that relies on the DB becomes unresponsive when the server is busy.
I have been reading around and can see that the ALTER command that changes the table to InnoDB is likely to take a long time and hammer the server in the process. As far as I can see, the only change required is to use the following command:
ALTER TABLE t ENGINE=InnoDB;
I have run this on a test server and it seems to complete fine, taking about 26 minutes on the largest of the tables that need to be converted.
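In case it's useful, this is roughly how I'm listing the tables that still need converting (a sketch; adjust the schema filter to taste):

SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');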
Having never run this on a production system I am interested to know the following:
What changes are recommended to the MySQL config to take advantage of the additional performance of InnoDB tables? The server currently has 3 GB assigned to the InnoDB cache; I was thinking of increasing this to 15 GB once the additional RAM is installed.
Is there anything else I should do to the server with this change?
I would really recommend using either Percona MySQL or MariaDB. Both have tools that will help you get the most out of InnoDB, as well as some tools to help you diagnose and optimize your database further (for example, Percona's Online Schema Change tool could be used to alter your tables without downtime).
As far as InnoDB optimization goes, I think most would agree that innodb_buffer_pool_size is one of the most important parameters to tune (typically people set it to around 70-80% of total available memory, but that's not a magic number). It's not the only important config variable, though, and there's really no magic run_really_fast setting. You should also pay attention to innodb_buffer_pool_instances (there's a good discussion about this topic at https://dba.stackexchange.com/questions/194/how-do-you-tune-mysql-for-a-heavy-innodb-workload).
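As a quick sanity check of both settings (a sketch; the SET line assumes MySQL 5.7.5+, where the buffer pool can be resized online - on older versions, change it in my.cnf and restart):

SHOW VARIABLES WHERE Variable_name IN
    ('innodb_buffer_pool_size', 'innodb_buffer_pool_instances');
SET GLOBAL innodb_buffer_pool_size = 15 * 1024 * 1024 * 1024;  -- the 15 GB you mention, once the RAM is in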
Also, you should definitely check out the tips offered in the MySQL documentation itself (http://dev.mysql.com/doc/refman/5.6/en/optimizing-innodb.html). It's also a good idea to pay attention to your InnoDB buffer pool hit ratio (Rolando over at DBA Stack Exchange has a great answer on this topic, e.g., https://dba.stackexchange.com/questions/65341/innodb-buffer-pool-hit-rate) and to analyze your slow query logs carefully. Toward that latter end, I would definitely recommend taking a look at Percona again. Their slow query analyzer is top notch and can really give you a leg up when it comes to optimizing SQL performance.
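The hit ratio itself comes from two status counters (Innodb_buffer_pool_reads counts reads that had to go to disk; Innodb_buffer_pool_read_requests counts all logical reads):

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';
-- hit ratio = 1 - (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests)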
We have a Windows 2003 server with MySQL 5.1 installed. For a few days now, the mysqldump has been almost 1 hour slower than normal. The total size of the databases has been growing by approx. 1 MB per day - not much of a difference.
Our technical support guy can't find anything strange on our machine. He says it would be a good idea to defrag our server with MySQL on it. I had never thought about defragmentation, but I have Defraggler installed. After analyzing, it reports 80% fragmentation!
Could that be the issue / reason why the dumps are getting slower by almost 1 hour?
My first idea was that some Windows update might be the reason.
Please send me your thoughts.
And is it wise to do a defragmentation on a MySQL production server?
Defraggler says defragmentation could take more than 1 day?
Your database dump is probably disk-bound, meaning that it is limited by the speed of accessing data on the disk.
If the data on the disk is highly fragmented, that can significantly affect performance. Depending on the layout of the data on the disk, MySQL may be filling in lots of small gaps with new data, which is much slower to work with than large contiguous blocks of data.
Regular defragmentation is a good idea. However, since MySQL likely has a lock on the data files for your databases, it may be necessary to shut down MySQL to defragment its files.
I suggest at least running Defraggler without shutting down MySQL, to defragment as much as possible. I'm not familiar with Defraggler, but if it makes it possible, try to find out whether MySQL's data files are fragmented. It might have a table of fragmented files or a graphical view of the data on the disk. The MySQL data files will probably be among the largest files on the disk, which should make them easier to find. If you find that they are heavily fragmented, you may then want to find some time to shut down MySQL and defragment them.
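If it helps to identify which tables back the biggest files, a query along these lines will list them (a sketch; if your tables are MyISAM, the 5.1 default, each one maps to its own .MYD/.MYI files on disk):

SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.tables
ORDER BY (data_length + index_length) DESC
LIMIT 10;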
I have been looking at ways to keep a lid on the potential size of a MySQL DB in the webapp that I plan to take into beta soon. The DB has the potential to become very big. Having examined what I am doing, I found that using the file system to store certain data (mostly images) would have a huge impact on DB size - it would ensure that I would probably never have to deal with a DB that blows through the TB ceiling, though it might still get into the 100s of GBs. I ended up discovering TokuDB in my quest to answer the question: would it be worth compressing my VARCHAR data prior to putting it into the DB?
The benchmarks I have seen for TokuDB are very impressive. However, I have also run into a few comments that are less than reassuring:
It is slow on INSERT ... ON DUPLICATE KEY UPDATE. This has been countered by another writer who claims that a simple switch to REPLACE INTO fixes the issue (see the sketch after these points).
It is good enough for a backup DB but not for production purposes.
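For reference, the switch in question looks something like this (t, id, and val are placeholders; note that REPLACE is not a drop-in equivalent, since it deletes and re-inserts the row rather than updating it in place):

INSERT INTO t (id, val) VALUES (1, 'x')
    ON DUPLICATE KEY UPDATE val = VALUES(val);
REPLACE INTO t (id, val) VALUES (1, 'x');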
Those comments are more than a year old, so I was wondering: what is the current view? I would like to hear from anyone who has experience with using TokuDB. The tables on which I may use it have a high proportion of write accesses and typically have one largish VARCHAR column.
I'm running a classifieds website which has around 300K records. It was on the MyISAM engine, but at times the site would come under high load and it would crash the server. So my IT tech decided to switch to InnoDB, but since the site was switched to InnoDB it has been very slow and taking a lot of CPU, around 300%.
My IT tech said that because the changes were made recently, he believes it is rebuilding indexes and that's why the site is slow and taking a lot of CPU. I just want to know how true that is.
We can't tell without looking at the server, but it is possible. Another thing that could be causing this is not enough memory assigned to InnoDB. People often forget to tweak the default values in the my.cnf file, which results in poor performance and a lot of disk I/O.
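A quick way to check whether the server is still on stock settings (a sketch; in recent MySQL versions the default innodb_buffer_pool_size is only 128 MB, far too small for a busy site):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';           -- 134217728 bytes = the 128 MB default
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_wait_free';  -- consistently nonzero means the buffer pool is too small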