How to improve performance of MySQL dump restore

Many of us who work on home or pet projects and use databases to store structured data may encounter performance issues when dumping or restoring data. It can be annoying to sit and wait for yet another dump restore operation for dozens of minutes or even hours.
I have quite typical machine specs: a 4-core i5-7300, 8 GB RAM, a quite fast M.2 drive, and Windows 10 with MySQL 5.7.
The problem was that restoring a ~4.5 GB dump file took more than 4 hours. That was ridiculous, and I noticed that the mysqld process wasn't using even half of the system resources - CPU, memory, or disk I/O.
Generally speaking, this post is a summary of related issues, with credits to the many other posts I list below.

I performed a number of experiments with MySQL parameters to speed up dump restore operations:
+--------------------------------+---------+---------+--------------+----------------------+
| Parameter                      | Default | Changed | Restore time | Performance gain (%) |
+--------------------------------+---------+---------+--------------+----------------------+
| All defaults                   | -       | -       | 259 min      | -                    |
| innodb_buffer_pool_size        | 8M      | 4G      | 32 min       | +709%                |
| innodb_buffer_pool_size        | 4G      | 6G      | 32 min       | ~0%                  |
| innodb_log_file_size           | 48M     | 1G      | 11 min       | +190%                |
| innodb_log_file_size           | 1G      | 2G      | 10 min       | +10%                 |
| max_allowed_packet             | 4M      | 128M    | 10 min       | ~0%                  |
| innodb_flush_log_at_trx_commit | 1       | 0       | 9 min 25 sec | +5%                  |
| innodb_thread_concurrency      | 9       | 0       | 9 min 27 sec | ~0%                  |
| innodb_doublewrite             | ON      | OFF     | 8 min 5 sec  | +18%                 |
+--------------------------------+---------+---------+--------------+----------------------+
Summary (for best dump restore performance):
Set innodb_buffer_pool_size to half of RAM
Set innodb_log_file_size to 1G
Set innodb_flush_log_at_trx_commit to 0
Disabling innodb_doublewrite is recommended only for the fastest restore performance; it should stay enabled in production. I also found that changing another related parameter, innodb_flush_method, didn't change performance, but that may be an issue specific to the Windows platform.
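For reference, here is a minimal my.ini fragment with the summary settings above (a sketch: the 4G value assumes an 8 GB machine like mine, and the log file size change takes effect only after a restart):
[mysqld]
# half of RAM on this 8 GB machine
innodb_buffer_pool_size = 4G
innodb_log_file_size = 1G
innodb_flush_log_at_trx_commit = 0
# only for throwaway restore runs - keep the doublewrite buffer enabled in production
# innodb_doublewrite = 0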
If you have a complex structure with a lot of foreign keys, for example, you can also try the tricks from the "Bulk Data Loading for InnoDB Tables" manual page (link at the bottom); a sketch follows below.
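A minimal session-level sketch based on that manual page (dump.sql is a placeholder for your dump file; the checks must be re-enabled afterwards, since skipped constraints are not verified retroactively):
-- disable per-statement commits and constraint checks for the load
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;
SOURCE dump.sql; -- mysql client command; dump.sql is a placeholder
COMMIT;
-- re-enable the checks when the load is done
SET foreign_key_checks = 1;
SET unique_checks = 1;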
As you can see, I tried to increase CPU utilization by setting innodb_thread_concurrency to 0 (and also setting innodb_read_io_threads to its maximum of 64), but the results didn't change - it seems the mysqld process is already quite efficient in a multi-core environment.
Restoring only the data (without the table structure) also didn't affect performance.
I also changed a number of other parameters, but those above are the most relevant ones for dump restore operations so far.
It may seem obvious, but a novice might ask: where can I find and set these settings?
On Windows, the my.ini file is located at C:\ProgramData\MySQL\MySQL Server <version>\my.ini. You won't find some settings there (like innodb_doublewrite) - that's OK, just add them to the end of the file.
The best way to change settings is to use MySQL Workbench (Server > Options File > InnoDB).
Credit goes to the following posts (and many similar ones), which I found very useful:
https://www.percona.com/blog/2018/02/22/restore-mysql-logical-backup-maximum-speed/
https://www.percona.com/blog/2014/01/28/10-mysql-performance-tuning-settings-after-installation/
https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html
https://dba.stackexchange.com/questions/86636/when-is-it-safe-to-disable-innodb-doublewrite-buffering
https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html

Related

MySQL8 instance crashes on CloudSQL

I had a MySQL 5.7 instance on Google Cloud SQL.
Now I have created a MySQL 8 instance.
The configuration is pretty much the same (except that I use 2 CPUs instead of one, with 3.75 GB of RAM).
The default config for MySQL memory usage (innodb_buffer_pool_size and the like) seems to be the same.
I migrated about half of my applications to use this instance. What happens now is that the instance's memory usage goes above 3.XX GB and the service gets restarted.
This is super annoying, because my applications obviously crash during that time.
It seems like memory usage grows with every SELECT statement, and everything is cached.
Here are some of the config values:
+-------------------------+------------+
| Variable                | Value      |
+-------------------------+------------+
| key_buffer_size         | 8.00 MB    |
| innodb_buffer_pool_size | 1408.00 MB |
| innodb_log_buffer_size  | 16.00 MB   |
| sort_buffer_size        | 0.25 MB    |
| read_buffer_size        | 0.125 MB   |
| read_rnd_buffer_size    | 0.25 MB    |
| join_buffer_size        | 0.25 MB    |
| thread_stack            | 0.273 MB   |
| binlog_cache_size       | 0.031 MB   |
| tmp_table_size          | 16.00 MB   |
+-------------------------+------------+
This makes Cloud SQL pretty much unusable for me. I need MySQL 8 without it crashing several times a day.
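As a starting point for diagnosing this, MySQL 8 ships with the sys schema, and a query like the following (a sketch; it relies on performance_schema memory instrumentation being enabled) shows which allocations are actually holding the memory:
-- the view is ordered by currently allocated bytes, so this lists the top consumers
SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;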

MariaDB increases disk usage every day and reclaims the space when the MariaDB daemon stops

My MariaDB's disk usage increases every day until the disk fills up, even though the data is actually inserted on another disk.
The used disk space is reclaimed only when the MariaDB daemon is stopped (or restarted).
MariaDB works fine until the disk fills up.
MariaDB environment:
Version : 10.3.x
OS : Ubuntu 18.04 LTS
Related DB Configurations.
+----------------------------+-----------------------+
| Variable_name              | Value                 |
+----------------------------+-----------------------+
| basedir                    | /usr                  |
| innodb_temp_data_file_path | ibtmp1:12M:autoextend |
| datadir                    | /mysql_data           |
| innodb_tmpdir              |                       |
| max_tmp_tables             | 32                    |
| slave_load_tmpdir          | /mysql_data/mysql_tmp |
| tmp_disk_table_size        | 4294967295            |
| tmp_memory_table_size      | 33554432              |
| tmp_table_size             | 33554432              |
| tmpdir                     | /mysql_data/mysql_tmp |
+----------------------------+-----------------------+
'/mysql_data' is a physically separate disk from '/'.
I understand that the '/mysql_data' disk grows with the inserted data, but I don't understand why about 2 GB of usage is added to the '/' disk every day.
When I stop the MariaDB daemon, the space on '/' is fully reclaimed.
Every day I take a full backup of the database with mysqldump to the '/mysql_data' disk.
I have searched the MariaDB logs, but there are no errors or noticeable messages.
I think the MariaDB daemon is grabbing system space ('/'), but I don't know why.
I need your help. How can I solve this?
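Not a full answer, but the "space comes back when the daemon stops" symptom usually points to files that were deleted while mysqld still holds them open (temporary tables, for instance); their size counts against the filesystem until the handle closes. A sketch for checking this on Linux, assuming lsof is installed:
# list deleted files the mysqld process still holds open
lsof -p "$(pidof mysqld)" | grep -i deleted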

Very poor performance from MySQL InnoDB

Lately my MySQL 5.5.27 has been performing very poorly. I have changed just about everything in the config to see if it makes a difference, with no luck. Tables are constantly locking up, reaching 6-9 locks per table. My SELECT queries take forever, 300-1200 seconds.
I moved everything to Pastebin because it exceeded 30k chars:
http://pastebin.com/bP7jMd97
SYS ACTIVITIES
90% UPDATES AND INSERTS
10% SELECT
My slow query log is backed up. Below is my MySQL info. Please let me know if there is anything I should add that would help.
Server version 5.5.27-log
Protocol version 10
Connection XX.xx.xxx via TCP/IP
TCP port 3306
Uptime: 21 hours 39 min 40 sec
Uptime: 78246 Threads: 125 Questions: 6764445 Slow queries: 25 Opens: 1382 Flush tables: 2 Open tables: 22 Queries per second avg: 86.451
SHOW OPEN TABLES
+----------+-------+--------+-------------+
| Database | Table | In_use | Name_locked |
+----------+-------+--------+-------------+
| aridb    | ek    | 0      | 0           |
| aridb    | ey    | 0      | 0           |
| aridb    | ts    | 4      | 0           |
| aridb    | tts   | 6      | 0           |
| aridb    | tg    | 0      | 0           |
| aridb    | tgle  | 2      | 0           |
| aridb    | ts    | 5      | 0           |
| aridb    | tg2   | 1      | 0           |
| aridb    | bts   | 0      | 0           |
+----------+-------+--------+-------------+
I've hit a brick wall and need some guidance. Thanks!
From looking through your log, it would seem the problem (as I'm sure you're aware) is the huge number of locks, given the amount of data being updated/selected/inserted, possibly all at the same time.
It is really hard to give performance tips without first knowing a lot of information you don't provide, such as table sizes, schema, hardware, config, topology, etc. - SO probably isn't the best place for such a broad question anyway!
I'll keep my answer as generic as I can, but possible things to look at or try would be:
Run EXPLAIN on the SELECT queries and make sure they find data selectively and don't perform full table scans or read huge amounts of unneeded data (see the sketch after this list).
Leave the server to do its inserts and updates, but create a read replica for reporting; this way the reporting reads won't lock data.
If you're updating many rows at a time, consider updating with a LIMIT so that less data is locked at once.
If you are able to, delay the inserts to relieve pressure.
Look at a hardware fix, such as solid-state disks for I/O performance, and more memory so that more indexes/data can be held in memory or a larger buffer can be used.
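A minimal sketch for the EXPLAIN point (the orders table and customer_id column are hypothetical stand-ins; substitute one of your slow queries):
EXPLAIN SELECT id, status
FROM orders
WHERE customer_id = 42;
-- type = ALL in the output means a full table scan;
-- ref/range with a usable key is what you want to see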

The case of the mysterious MySQL caching across restarts

I found a very slow MySQL query in my web app. The weird thing is that the query is only slow the first time it's executed, despite the fact that the query cache is set to its default (query_cache_size = 0), like so:
mysql> show variables like 'query%';
+------------------------------+---------+
| Variable_name                | Value   |
+------------------------------+---------+
| query_alloc_block_size       | 8192    |
| query_cache_limit            | 1048576 |
| query_cache_min_res_unit     | 4096    |
| query_cache_size             | 0       |
| query_cache_type             | ON      |
| query_cache_wlock_invalidate | OFF     |
| query_prealloc_size          | 8192    |
+------------------------------+---------+
The even weirder thing is that this speedup persists even after the MySQL server has been stopped and restarted (I'm using OS X and perform this restart through the System Preferences pane). The only way I can re-create the poor performance of the initial query is by rebooting the system.
So my question is: how is this happening? Obviously some sort of caching is at work, but where? And how does it persist across database restarts? The query is issued through our web app via PHP/Apache, but there are no extra bells and whistles, and the curious caching also persists across Apache restarts.
Help?
A wild guess: the operating system, or the hard disk itself, is caching the data file.
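A quick way to test that hypothesis on OS X (a sketch: purge is the built-in macOS utility that flushes the filesystem cache, and sync flushes pending writes first):
# drop the OS disk cache; if the query is slow again afterwards,
# the OS page cache was the culprit
sync && sudo purge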

Fast MySQL bulk load when indexes don't fit into key_buffer

I have an issue here with how to configure MySQL (MyISAM) properly so that a bulk insert (LOAD DATA INFILE) is performed fast.
There is a 6 GB text file to be imported: 15 million rows, 16 columns (some INT, some VARCHAR(255), one VARCHAR(40), one CHAR(1), some DATETIME, one MEDIUMTEXT).
Relevant my.cnf settings:
key_buffer = 800M
max_allowed_packet = 160M
thread_cache_size = 80
myisam_sort_buffer_size = 400M
bulk_insert_buffer_size = 400M
delay_key_write = ON
delayed_insert_limit = 10000
There are three indexes: one primary (auto-increment INT), one unique INT, and one unique VARCHAR(40).
The problem is that after executing the LOAD DATA INFILE command, the first 3 GB of data import quickly (judging by the growing size of table.MYD: 5-8 MB/s), but upon crossing the 3020 MB mark the import speed decreases greatly, with table.MYD growing at only 0.5 MB/s. I've noticed that the import process slows down once Key_blocks_unused gets drained to zero. This is the output of mysql> show status like '%key%'; at the beginning of the import:
mysql> show status like '%key%';
+------------------------+---------+
| Variable_name          | Value   |
+------------------------+---------+
| Com_preload_keys       | 0       |
| Com_show_keys          | 0       |
| Handler_read_key       | 0       |
| Key_blocks_not_flushed | 57664   |
| Key_blocks_unused      | 669364  |
| Key_blocks_used        | 57672   |
| Key_read_requests      | 7865321 |
| Key_reads              | 57672   |
| Key_write_requests     | 2170158 |
| Key_writes             | 4       |
+------------------------+---------+
10 rows in set (0.00 sec)
And this is how it looks after the 3020 MB mark, i.e. when Key_blocks_unused gets down to zero, which is when the bulk insert process gets really slow:
mysql> show status like '%key%';
+------------------------+-----------+
| Variable_name          | Value     |
+------------------------+-----------+
| Com_preload_keys       | 0         |
| Com_show_keys          | 0         |
| Handler_read_key       | 0         |
| Key_blocks_not_flushed | 727031    |
| Key_blocks_unused      | 0         |
| Key_blocks_used        | 727036    |
| Key_read_requests      | 171275179 |
| Key_reads              | 1163091   |
| Key_write_requests     | 41181024  |
| Key_writes             | 436095    |
+------------------------+-----------+
10 rows in set (0.00 sec)
The problem is pretty clear, to my understanding: indexes are being stored in the cache, but once the cache fills up, the indexes get written to disk one by one, which is slow, and therefore the whole process slows down. If I disable the unique index on the VARCHAR(40) column so that all the indexes fit into Key_blocks_used (I guess this is the variable directly dependent on key_buffer, isn't it?), the whole bulk import succeeds.
So I'm curious: how do I make MySQL flush all the Key_blocks_used data to disk at once and free up Key_blocks_used? I understand that it might be doing some sorting on the fly, but still, I guess it should be able to do some cached RAM-disk synchronization in order to manage indexes even when they don't all fit into the memory cache. So my question is: "how do I configure MySQL so that bulk inserting avoids writing to disk on (almost) each index update, even when all the indexes don't fit into the cache?"
Last but not least, delay_key_write is set to 1 for the given table, though it didn't add any speed-up compared to when it was disabled.
Thanks for any thoughts, ideas, explanations and RTMs in advance ! (:
One more little question: how would I calculate how many VARCHAR(40) index entries fit into the cache before Key_blocks_unused gets to 0?
P.S. Disabling the indexes with myisamchk --keys-used=0 -rq /path/to/db/tbl_name and then re-enabling them with myisamchk -rq /path/to/db/tbl_name, as described in the MySQL docs, is a known solution, but it works only when bulk-inserting into an empty table. When there is already some data in the table, index uniqueness checking is necessary, so disabling the indexes is not a solution.
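A rough back-of-the-envelope sketch for that last calculation (1024 bytes is MySQL's default key_cache_block_size; the 6-byte row pointer and ignoring per-block overhead are assumptions):
cache blocks      = key_buffer / key_cache_block_size = 800M / 1K ≈ 819,200 blocks
observed capacity = Key_blocks_used + Key_blocks_unused = 57,672 + 669,364 = 727,036 blocks (~710 MB)
entries per block ≈ 1024 / (40-byte key + 6-byte pointer) ≈ 22
total entries     ≈ 727,036 * 22 ≈ 16 million VARCHAR(40) entries,
and that space is shared with the two INT indexes, which is roughly why 15 million rows exhaust the cache before the load finishes.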
When you import data with LOAD DATA INFILE, I think MySQL performs the inserts one by one, and with each insert it tries to update the index file (.MYI) as well, which can slow down your import, since it consumes both I/O and CPU resources for each individual insert.
What you could do is add four lines around the insert statements in your import file, disabling the keys on your table and re-enabling them at the end, and you should see the difference:
LOCK TABLES tableName WRITE;
ALTER TABLE tableName DISABLE KEYS;
-- your insert statements go here...
ALTER TABLE tableName ENABLE KEYS;
UNLOCK TABLES;
-- note: DISABLE KEYS affects only non-unique indexes
If you don't want to edit your data file, use mysqldump to get a proper dump file, and you shouldn't run into this slowness when importing the data:
##Dump the database
mysqldump databaseName > database.sql
##Import the database
mysql databaseName < database.sql
Hope this helps!
I am not sure the key_buffer you mention is the same as key_buffer_size.
I had faced a similar problem. My problem was resolved by bumping up the key_buffer_size value to something like 1 GB. Check my question here.
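For what it's worth, key_buffer_size is a dynamic variable, so a sketch like this raises it without a server restart (persist it in my.cnf afterwards so it survives restarts):
-- raise the MyISAM key cache to 1 GB for the current server run
SET GLOBAL key_buffer_size = 1073741824;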