After upgrading MySQL from version 5.7 to 8.0, I found that database performance dropped significantly.
Before the upgrade the CPU usage was stable at around 30%, but after the upgrade it became unstable, with frequent large spikes.
Recently I tested something interesting: I kept running the same query several times and found that it took longer and longer each run, as shown in the picture below.
I have read a lot of articles and Stack Overflow posts, but none of the solutions really helped.
I hope someone can share some ideas or experience on tuning MySQL 8.0 with me. I would really appreciate it.
Please let me know if you need any further info for the investigation.
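For reference, the way the duration can be measured (a rough sketch, assuming the performance_schema that is enabled by default in 8.0) is to run the query a few times and then check the statement digest summary:
-- average and worst latency per normalized statement, slowest first
-- (timer columns are in picoseconds, hence the division)
SELECT DIGEST_TEXT,
       COUNT_STAR            AS executions,
       AVG_TIMER_WAIT / 1e12 AS avg_seconds,
       MAX_TIMER_WAIT / 1e12 AS max_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY AVG_TIMER_WAIT DESC
LIMIT 10;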
Config my.ini:-
key_buffer_size = 2G
max_allowed_packet = 1M
;Added to reduce memory used (minimum is 400)
table_definition_cache = 600
sort_buffer_size = 4M
net_buffer_length = 8K
read_buffer_size = 2M
read_rnd_buffer_size = 2M
myisam_sort_buffer_size = 2G
;Path to mysql install directory
basedir="c:/wamp64/bin/mysql/mysql8.0.20"
log-error="c:/wamp64/logs/mysql.log"
;Verbosity: 1 = errors only, 2 = errors and warnings, 3 = errors, warnings, and notes
log_error_verbosity=2
;Path to data directory
datadir="c:/wamp64/bin/mysql/mysql8.0.20/data"
;slow_query_log = ON
;slow_query_log_file = "c:/wamp64/logs/slow_query.log"
;Path to the language
;See Documentation:
; http://dev.mysql.com/doc/refman/5.7/en/error-message-language.html
lc-messages-dir="c:/wamp64/bin/mysql/mysql8.0.20/share"
lc-messages=en_US
; The default storage engine that will be used when creating new tables
default-storage-engine=InnoDB
; New for MySQL 5.6: default_tmp_storage_engine, used if skip-innodb is enabled
; default_tmp_storage_engine=MYISAM
;To avoid warning messages
secure_file_priv="c:/wamp64/tmp"
skip-ssl
explicit_defaults_for_timestamp=true
; Set the SQL mode to strict
sql-mode=""
;sql-mode="STRICT_ALL_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_ZERO_DATE,NO_ZERO_IN_DATE,NO_AUTO_CREATE_USER"
;skip-networking
; Disable Federated by default
skip-federated
; Replication Master Server (default)
; binary logging is required for replication
;log-bin=mysql-bin
; binary logging format - mixed recommended
;binlog_format=mixed
; required unique id between 1 and 2^32 - 1
; defaults to 1 if master-host is not set
; but will not function as a master if omitted
server-id = 1
; Replication Slave (comment out master section to use this)
; New for MySQL 5.6 if no slave
skip-slave-start
; The InnoDB tablespace encryption feature relies on the keyring_file
; plugin for encryption key management, and the keyring_file plugin
; must be loaded prior to storage engine initialization to facilitate
; InnoDB recovery for encrypted tables. If you do not want to load the
; keyring_file plugin at server startup, specify an empty string.
early-plugin-load=""
;innodb_data_home_dir = C:/mysql/data/
innodb_data_file_path = ibdata1:12M:autoextend
;innodb_log_group_home_dir = C:/mysql/data/
;innodb_log_arch_dir = C:/mysql/data/
; You can set .._buffer_pool_size up to 50 - 80 %
; of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 4G
; Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 16M
innodb_log_buffer_size = 8M
innodb_thread_concurrency = 64
innodb_flush_log_at_trx_commit = 2
log_bin_trust_function_creators = 1
innodb_lock_wait_timeout = 120
innodb_flush_method=normal
innodb_use_native_aio = true
innodb_flush_neighbors = 2
innodb_autoinc_lock_mode = 1
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
; Remove the next comment character if you are not familiar with SQL
;safe-updates
[isamchk]
key_buffer_size = 20M
sort_buffer_size = 20M
read_buffer_size = 2M
write_buffer_size = 2M
[myisamchk]
key_buffer_size = 256M ;20M hys
sort_buffer_size = 20M
read_buffer_size = 2M
write_buffer_size = 2M
[mysqlhotcopy]
interactive-timeout
[mysqld]
port = 3306
skip-log-bin
default_authentication_plugin= mysql_native_password
max_connections = 400
max_connect_errors = 100000
innodb_read_io_threads = 32
innodb_write_io_threads = 8
innodb_thread_concurrency = 64
Hardware:-
RAM: 16 GB
CPU: 4 cores @ 3.0 GHz
SHOW GLOBAL STATUS:
https://pastebin.com/FVZrgnTw
SHOW ENGINE INNODB STATUS:
https://pastebin.com/Rewp84Gi
SHOW GLOBAL VARIABLES:
https://pastebin.com/3v6cM6KZ
Rate Per Second = RPS
Suggestions to consider for your my.ini [mysqld] section:
It is unusual to have more than one [mysqld] section in a my.ini configuration; the [mysqld] section you have near the end of your my.ini could be moved to just before [mysqldump] to avoid confusion.
innodb_lru_scan_depth=100 # from 1024, to conserve ~90% of the CPU cycles used for this function
key_buffer_size=16M # from 1G, to conserve RAM - you are not using MyISAM data tables
read_rnd_buffer_size=64K # from 2M, to reduce the handler_read_rnd_next rate of 1,872,921 RPS
innodb_io_capacity=900 # from 200, to use more of your rotating drive's IOPS capacity
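If you want to try these before editing my.ini and restarting, MySQL 8.0 can apply and persist them at runtime; a minimal sketch using the values suggested above (existing sessions keep their old read_rnd_buffer_size until they reconnect):
SET PERSIST innodb_lru_scan_depth = 100;
SET PERSIST key_buffer_size = 16777216;     -- 16M
SET PERSIST read_rnd_buffer_size = 65536;   -- 64K
SET PERSIST innodb_io_capacity = 900;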
With these changes you should see query completion times and CPU usage come down.
select_scan averages 41 RPS, which indicates queries running without usable indexes and is a source of delays.
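To see which statements are behind that select_scan counter, one option (a sketch, assuming the sys schema that ships with MySQL 8.0 is installed; adjust column names if your version differs) is:
-- statements that ran without using an index, most frequent offenders first
SELECT query, db, exec_count, no_index_used_count
FROM sys.statements_with_full_table_scans
ORDER BY no_index_used_count DESC
LIMIT 10;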
I have found the root cause and posted it at https://dba.stackexchange.com/questions/271785/query-performance-become-slower-after-upgrade-to-mysql-8-0-20 .
Thanks a lot for all the replies and suggestions. I appreciate it.
[Update: solved the problem at our site]
Actually, I have recently had a very similar (maybe the same?) issue.
We have
Windows Server 2016, 4 CPUs, 32 GB RAM
MySQL 8 Community Edition
Java / Apache Tomcat based application on top
For 2 weeks we experienced severe application problems, with mysqld process taking 100% CPU as soon as application interaction happens -- rendering the server completely unresponsive.
The last change to the setup before this degradation was updating MySQL from 8.0.18 to 8.0.20 due to security fixes.
Query monitoring shows many occurrences of the same (simple) query
SELECT COUNT(1) FROM xxxxx;
which take 5-10 seconds (although the table only has about 3 rows, so it should take more like 5 milliseconds!).
One hypothesis was this MySQL issue: https://bugs.mysql.com/bug.php?id=99593
However the recommended workaround did not help me.
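Before resorting to a downgrade, it can also be worth checking where the time actually goes. MySQL 8.0.18+ supports EXPLAIN ANALYZE, so a quick sanity check (xxxxx standing in for the real table name, as above) would be:
-- executes the statement and reports actual row counts and timings per plan step
EXPLAIN ANALYZE SELECT COUNT(1) FROM xxxxx;
-- for comparison, the estimated plan only:
EXPLAIN FORMAT=TREE SELECT COUNT(1) FROM xxxxx;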
Solution for us:
Apparently there was an additional bug in MySQL Community Edition, introduced in 8.0.19 or 8.0.20.
After downgrading MySQL to 8.0.18 everything worked fine again!
Additional note:
Downgrading is not supported by MySQL!
Actually, in order to provide a downgraded DB on the same machine, I did the following (a rough sketch of the commands is shown after this list):
did a backup of the application schema (with mysqldump command)
did a manual installation of MySQL 8.0.18 binaries (no installer)
created an additional MySQL instance (different data directory, different port)
imported the backup into the new instance (with mysql command)
created roles and permissions exactly like "before"
switched application config to new MySQL port
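A rough sketch of the dump/import commands involved (the schema name appdb and port 3307 are placeholders for our real values):
# dump the application schema from the 8.0.20 instance
mysqldump -u root -p --single-transaction --routines --triggers appdb > appdb.sql
# create the schema on the new 8.0.18 instance and import the dump
mysql -u root -p -h 127.0.0.1 -P 3307 -e "CREATE DATABASE appdb"
mysql -u root -p -h 127.0.0.1 -P 3307 appdb < appdb.sql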
DB server
16 cores
63 GB RAM
CentOS release 6.8
/etc/my.cnf
[mysqld]
pid_file=/var/lib/mysql/fatty01.pid
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
innodb_buffer_pool_size = 50G
innodb_log_file_size = 2G
innodb_flush_log_at_trx_commit = 0
sync_binlog = 0
innodb_flush_method = O_DIRECT
innodb_buffer_pool_instances = 16
innodb_thread_concurrency = 16
skip_name_resolve = 1
innodb_io_capacity = 4000
innodb_io_capacity_max = 6000
innodb_buffer_pool_dump_at_shutdown = 1
innodb_buffer_pool_load_at_startup = 1
query_cache_size = 0
query_cache_type = OFF
innodb_checksum_algorithm = crc32
table_open_cache_instances = 16
innodb_read_io_threads = 20
innodb_write_io_threads = 10
max_connections = 700
When we have peaks of 3000 concurrent clients, mysqld does not seem to pull all the resources it could from the machine.
I see the load average at 40, but CPU usage does not seem to go past 60%.
That is reflected in the front-end server.
My question is clear: how can I improve performance without compromising the server? Also, how can I decrease the MySQL waiting time seen on the front-end server, since this is clearly a problem with the configuration on the DB server side?
UPDATE: After some research the problem seems to be in the slow queries, so I guess this configuration is optimal for this hardware.
No, it is not likely to be a simple tuning change. As I said, my.cnf looks good -- based on limited information.
Based on the charts, something happened suddenly. Or a flurry of activity.
Turn on the slowlog, set long_query_time=1, wait until the problem happens again, then use pt-query-digest to tell you the naughty query.
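A minimal sketch of that workflow (paths are examples; pt-query-digest comes from the Percona Toolkit):
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;
SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow.log';
-- ...wait for the problem to recur, then on the shell:
-- pt-query-digest /var/lib/mysql/slow.log > digest.txt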
Your max_connections is only set to 700. How did you determine that you have 3000 concurrent clients? Site visits can be different from concurrent database connections. You might try increasing the connections available to your clients, as they may be experiencing slowdowns while waiting to connect.
Try checking SHOW PROCESSLIST; during peak usage to see how many connections your server is handling, and look for "Too many connections" errors in your MySQL error log.
If you do increase your max_connections limit, watch your CPU and RAM: MySQL will use more memory as more connections are made available to clients.
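A quick way to check how close you actually get to the limit (a sketch; these are standard status counters):
SHOW GLOBAL STATUS LIKE 'Threads_connected';                  -- connections right now
SHOW GLOBAL STATUS LIKE 'Max_used_connections';               -- high-water mark since startup
SHOW GLOBAL STATUS LIKE 'Connection_errors_max_connections';  -- connections refused at the limit
SHOW PROCESSLIST;                                             -- what each connection is doing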
I want to increase the performance of MySQL, so I have made configuration-level changes. I set innodb_flush_method = O_DIRECT, but the insert rate is not increasing much; normally the insertion rate is about 650 inserts/sec. How do I know whether O_DIRECT is working properly?
I am using Ubuntu 14.04.1 Server and MySQL 5.6. CPU, memory, and disk I/O rates are normal (I use RAID, 16 GB RAM, 8 CPU cores). I use WSO2 CEP for insertion; I have implemented that part and measured the rate using MySQL Workbench, but I couldn't get much more performance even though I increased the insertion rate through WSO2 CEP.
I have used the following my.cnf:
[mysqld]
innodb_buffer_pool_size = 9G
query_cache_size = 128M
innodb_log_file_size = 1768M
innodb_flush_log_at_trx_commit = 0
innodb_io_capacity = 1000
innodb_flush_method = O_DIRECT
max_heap_table_size = 536870912
innodb_lock_wait_timeout = 1
max_connections = 400
sort_buffer_size = 128M
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
skip-host-cache
skip-name-resolve
event_scheduler=on
In this case, if you are using event tables: older CEP/Siddhi versions do not perform batch insertions, and that could be the cause of the above. In the latest SNAPSHOT source of Siddhi we have fixed this, and you should see considerably better numbers in the next release.
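For reference, "batch insertion" here simply means grouping many events into one multi-row INSERT (or one JDBC batch) instead of one statement per event; purely as an illustration, with a made-up table:
-- one round trip and one commit for many rows
INSERT INTO sensor_events (sensor_id, reading, recorded_at) VALUES
  (1, 20.5, NOW()),
  (1, 20.7, NOW()),
  (2, 19.9, NOW());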
I have a Drupal application which has been running on a single MySQL database server for 12 months, and has been performing relatively well (apart from peak load events). We needed to be able to support much higher spikes than the current DB server allowed, and at 32GB there was not much gain to be had from simply vertically scaling the single DB server.
We decided to set up a new MariaDB Galera cluster with 2x 32GB instances. We matched the configuration as far as possible with the soon-to-be-obsolete DB server.
After migrating to the new database servers, we noticed that the CPU usage on those instances was constantly at 100%, and load was steadily increasing. Over the course of 1 hour, load average went from 0.1 to 150.
Initially we thought it might have something to do with the synchronisation between servers, but even with one server turned off and no sync occurring, it was still maxing out the CPU as long as the web application was making requests to it.
After a lot of experimentation I found that reducing a few of the configuration options had a profound effect on the CPU usage and load. After making the below changes, the load average has stabilised between 4 and 6 on both instances.
The questions
What are some possible reasons for such a dramatic difference in CPU usage between the old and new servers, despite essentially migrating the configuration from the old server?
Load is currently hovering between 4 and 6 (and this is a low-traffic period for our website). What should I be looking at to try to reduce this value, and to ensure that when the site gets hit with some real traffic it won't fall over?
Config changes
innodb_buffer_pool_instances
Original value: 500 (there are 498 tables total in all databases)
New value: 92
table_cache
Original value: 8
New value: 4
max_connections
Original value: 1000
New value: 400
Current configuration
Here is the full configuration file from one of the servers /etc/mysql/my.cnf
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=1
bind-address=0.0.0.0
max_connections = 400
wait_timeout = 600
key_buffer_size = 16M
max_allowed_packet = 16777216
max_heap_table_size = 512M
table_cache = 92
thread_stack = 196608
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1048576
query_cache_size = 128M
expire_logs_days = 10
general_log = 0
max_binlog_size = 10485760
server-id = 0
innodb_file_per_table
innodb_buffer_pool_size = 25G
innodb_buffer_pool_instances = 4
innodb_log_buffer_size = 8388608
innodb_additional_mem_pool_size = 8388608
innodb_thread_concurrency = 16
net_buffer_length = 16384
sort_buffer_size = 2097152
myisam_sort_buffer_size = 8388608
read_buffer_size = 131072
join_buffer_size = 131072
read_rnd_buffer_size = 262144
tmp_table_size = 512M
long_query_time = 1
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"
# Galera Cluster Configuration
wsrep_cluster_name="xxx"
wsrep_cluster_address="gcomm://xxx.xxx.xxx.107,xxx.xxx.xxx.108"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
#wsrep_sst_auth=user:pass
# Galera Node Configuration
wsrep_node_address="xxx.xxx.xxx.107"
wsrep_node_name="xxx01"
[mysqldump]
quick
quote-names
max_allowed_packet = 16777216
[isamchk]
key_buffer_size = 16777216
We ended up getting a Percona consultant to assist with this problem. The main issue they identified was that a large number of EXPLAIN queries were being executed. It turned out this was some debugging code that had been left enabled (devel.module query logging, for Drupal devs). Disabling it saw CPU usage fall off a cliff.
There were a number of additional fixes they recommended we implement (helper queries for the primary-key and MyISAM items are sketched after the list).
Add a third node to the cluster to act as an observer and maintain the integrity of the cluster.
Add primary keys to tables that do not have one.
Change MyISAM tables to InnoDB.
Change wsrep_sst_method from rsync to xtrabackup-v2.
Set innodb_log_file_size to 512M.
Set innodb_flush_log_at_trx_commit to 2 as the cluster maintains the integrity of the data.
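For the primary-key and MyISAM items, the affected tables can be located from information_schema; a sketch (the list of schemas to exclude may need adjusting):
-- tables without a primary key
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON c.table_schema = t.table_schema
 AND c.table_name = t.table_name
 AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.table_name IS NULL
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');
-- MyISAM tables to convert (then, for each one: ALTER TABLE db.tbl ENGINE=InnoDB;)
SELECT table_schema, table_name
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');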
I hope this information helps anyone who runs into similar issues.
innodb_buffer_pool_instances should not be a function of the number of tables. The manual advocates that each instance be no smaller than 1GB. So, I suggest that even 92 is much too high. But my.cnf says only innodb_buffer_pool_instances = 4??
table_cache = 92
Maybe your comments are messed up? 500 would be more reasonable for table_open_cache. (table_cache is the old name.)
This may be the problem:
query_cache_size = 128M
Whenever a write occurs, all entries in the QC for the table(s) involved are purged from the QC. Recommend no more than 50M. Or, better yet, turn the QC off completely.
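Turning the query cache off looks like this (a sketch; these are dynamic in MySQL 5.x/MariaDB when the cache was enabled at startup, and the my.cnf lines make it permanent):
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = OFF;
-- and in my.cnf under [mysqld]:
--   query_cache_type = 0
--   query_cache_size = 0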
You have the slowlog turned on. What does pt-query-digest say are the top couple of queries? (This may be your best way to get a handle on the problem.)
I have a dedicated server - an Intel Xeon L5320 with 8 GB of RAM and 2 x 500 GB 7200 RPM HDDs.
I need to optimize MySQL to cope with a large 5 GB MyISAM table plus around 25-30 smaller databases. Currently the configuration looks like this:
key_buffer = 3G
thread_cache_size = 16
table_cache = 8192
query_cache_size = 512M
As it is, the server really struggles and I get continuous /tmp disk-full warnings. Could you please help me out and suggest the best my.cnf configuration for my server, and/or any other settings changes that would improve performance?
Thanks in advance
I recommend you use mytop and mysqltuner to analyze how MySQL is using resources (RAM and CPU).
To enable slow query logging:
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 3
And check out this post about the ntpd service:
MySQL high CPU usage
Finally, here is the configuration I use on a dedicated server with a high rate of transactions:
max_allowed_packet=16M
key_buffer_size=8M
innodb_additional_mem_pool_size=10M
innodb_buffer_pool_size=512M
join_buffer_size=40M
table_open_cache=1024
query_cache_size=40M
table_definition_cache=256
innodb_additional_mem_pool_size=10M
key_buffer_size=16M
max_allowed_packet=32M
max_connections = 300
query_cache_limit = 10M
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 3
Greetings.
If /tmp is filling up, you are running some large, inefficient queries somewhere which are falling back to FILESORT. Well-written, efficient queries should typically not need this -- turn on slow query logging (if it isn't already) and check the log to see what needs optimizing.
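To confirm that on-disk temporary tables and filesorts are the culprit, compare these counters over time (a sketch; all are standard status variables):
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';  -- implicit temp tables that spilled to disk
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';       -- all implicit temp tables
SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';        -- sorts too large for sort_buffer_size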
I have what seems to be a slowing MySQL restore, and am looking for some tuning advice (I am a PostgreSQL and SQL Server guy).
The dev server has 48GB of RAM, 8 cores, running CentOS 6.2 64-bit and MySQL 5.1.61 (same as production MySQL), and 4 x 7200 RPM SAS drives in software-managed RAID-10 / XFS. The only MySQL client process is the restore. The dump was taken with a plain mysqldump of all databases on the production server.
I have applied some of the options from http://derwiki.tumblr.com/post/24490758395/loading-half-a-billion-rows-into-mysql, including setting FOREIGN_KEY_CHECKS and UNIQUE_CHECKS to zero. I have included my.cnf below.
Monitoring the restore with mytop and pv (pv backup.sql | mysql -u root -p), it appears that the INSERT INTO statements progressively get slower. The qps shown by mytop starts at 3 and drops to 0 by 60% of the way through the dump file. I am not sure how accurate mytop is in this case, as 3 inserts (with values) still seems slow. htop shows < 10% CPU utilization on the CPU used by MySQL, and less than 8GB of the 48GB of RAM is being utilized.
Different databases, but similar restore techniques, run about 5-10x faster on the same server using PostgreSQL.
Ideas?
[mysqld]
# my.cnf
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
slow-query-log
long_query_time = 60
log-slow-admin-statements
slow_query_log_file = /var/log/mysql_slow.log
innodb_buffer_pool_size = 2G
max_allowed_packet = 1G
key_buffer_size = 1G
concurrent_insert = 1
innodb_flush_log_at_trx_commit = 2
bulk_insert_buffer_size = 1G
innodb_flush_method = O_DIRECT
Sounds like your InnoDB indexes are slowing you down. If you can change the way you dump the database, you can remove all non-primary-key indexes, load the data, then re-add them. Better still, order the data to be loaded by the primary key - though this is probably too much to ask.
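A sketch of that approach for a single table (the table, index, and column names are made up):
-- before the load: drop secondary indexes, keep the primary key
ALTER TABLE orders DROP INDEX idx_customer_id;
-- ...restore the data for this table...
-- after the load: rebuild the index in one pass
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);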
Sounds like you are already aware of these tips: http://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html
The flush-to-disk operation (innodb_flush_log_at_trx_commit = 2) may be happening many times a second. Check that your innodb_log_file_size * innodb_log_files_in_group is sufficient to avoid writing to disk too often.
(I assumed from your settings that you are using InnoDB.)
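One rough way to check that (a sketch, using the common rule of thumb that the combined log files should hold on the order of an hour of redo writes):
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';  -- note the value
-- let the restore run for ~60 seconds, then repeat:
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- the difference is redo bytes per minute; that rate times 60 should fit
-- comfortably within innodb_log_file_size * innodb_log_files_in_group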