I had a MySQL 5.7 instance on Google Cloud SQL.
Now I have created a MySQL 8 instance.
The configuration is pretty much the same (except that I use 2 CPUs instead of one, with 3.75 GB of RAM).
The default configuration for MySQL memory usage (innodb_buffer_pool_size and so on) appears to be the same.
I migrated about half of my applications to this instance. What happens now is that the instance's memory usage climbs above 3.XX GB and the service gets restarted, which is super annoying because my applications obviously crash in the meantime.
It seems like memory usage grows with every SELECT statement, and everything is cached.
Here are some of the config values:
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| key_buffer_size         | 8.00 MB    |
| innodb_buffer_pool_size | 1408.00 MB |
| innodb_log_buffer_size  | 16.00 MB   |
| sort_buffer_size        | 0.25 MB    |
| read_buffer_size        | 0.125 MB   |
| read_rnd_buffer_size    | 0.25 MB    |
| join_buffer_size        | 0.25 MB    |
| thread_stack            | 0.273 MB   |
| binlog_cache_size       | 0.031 MB   |
| tmp_table_size          | 16.00 MB   |
+-------------------------+------------+
This makes Cloud SQL pretty much unusable for me. I need MySQL 8 without it crashing several times a day.
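For reference, adding the global buffers to the per-connection buffers multiplied by max_connections gives a rough theoretical ceiling for memory use. This is only a heuristic, not what mysqld actually allocates, but it shows how little headroom the configuration leaves:

-- Rough worst-case memory footprint (heuristic only):
SELECT ( @@key_buffer_size
       + @@innodb_buffer_pool_size
       + @@innodb_log_buffer_size
       + @@max_connections * ( @@sort_buffer_size
                             + @@read_buffer_size
                             + @@read_rnd_buffer_size
                             + @@join_buffer_size
                             + @@thread_stack
                             + @@binlog_cache_size ) )
       / 1024 / 1024 / 1024 AS theoretical_max_gb;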
Related
I recently migrated a database from a MySQL 5.6 physical server to a Percona 5.7 CentOS 7 VM.
In the legacy environment, loading a 27 GB CSV file into a single table took 2 hours to complete.
In the new environment, with heavily upgraded resources (RAM, CPUs, etc.), it runs for over 24 hours and never completes.
Details:
Server CPU for the mysqld process spikes to over 100% when the job starts and stays there until the process is killed in the database or on the command line.
This is a MyISAM table. (I do not want to hear about InnoDB. This engine is a customer requirement and there is no changing it.)
Within 10 seconds, the MYI file for the table grows to 451 MB and stops. 5 minutes later, it increases to 939 MB within 5-10 seconds and stops again. Up to an hour or two later, it increases again to 1.6 GB. 24 hours later, it may reach 6.2 GB, but it does not grow past that point.
Recall that during the 'quiet' times, CPU stays at 100+%. I/O is zero except during the few seconds it is writing to the MYI file. Server load is 1-2. Memory usage is 27% at most. The disk is SSD. The server has 96 GB of RAM.
The table is truncated before each script run, so bulk_insert_buffer_size is unused. Keys are automatically disabled because the table is empty. I have tried tweaking every buffer, and nothing changes the results in any way. I have changed the table to InnoDB, with no difference except that the files are a little bigger; the stopping points are the same and it does not finish.
I have looked at OS-level buffers and caching and have not found anything there either.
Ideas?
mysql> show global variables like '%buffer%';
+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| audit_log_buffer_size               | 1048576        |
| bulk_insert_buffer_size             | 67108864       |
| innodb_buffer_pool_chunk_size       | 134217728      |
| innodb_buffer_pool_dump_at_shutdown | ON             |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_dump_pct         | 25             |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_instances        | 8              |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | ON             |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 10737418240    |
| innodb_change_buffer_max_size       | 25             |
| innodb_change_buffering             | all            |
| innodb_log_buffer_size              | 134217728      |
| innodb_sort_buffer_size             | 1048576        |
| join_buffer_size                    | 8388608        |
| key_buffer_size                     | 26843545600    |
| myisam_sort_buffer_size             | 4294967296     |
| net_buffer_length                   | 16384          |
| preload_buffer_size                 | 32768          |
| read_buffer_size                    | 1048576        |
| read_rnd_buffer_size                | 10485760       |
| sort_buffer_size                    | 67108864       |
+-------------------------------------+----------------+
Server Cores: 8 CPUs with 8 cores each
set sql_log_bin = 0;
LOCK TABLES t_surescripts_full WRITE;
TRUNCATE TABLE t_surescripts_full;
LOAD DATA LOCAL INFILE '/data/0149561_SS_PRE/SS_PRE_20200108_v44.match_output.just_output_records' INTO TABLE t_surescripts_full CHARACTER SET 'latin1'
FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"' ESCAPED BY ''
LINES TERMINATED BY '\n';
UNLOCK TABLES;
The processlist is not really helpful, as the LOAD DATA INFILE is the only query, and its status is 'executing' even after 20 hours.
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 385429
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 10240
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 10240
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Consider these possibilities, please.
Setting sql_log_bin to OFF requires the user to have Super_priv set to Y.
SELECT host,user,super_priv FROM mysql.user WHERE user = 'somename';
may be used to confirm that super_priv is Y.
SET sql_log_bin = 0;
might be more successful as
SET SESSION sql_log_bin = OFF;
to avoid logging 27 GB of CSV data being loaded.
Please post the new time required, and see the email sent to you this morning for additional details.
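Put together, a minimal sketch of the load with session-level binary logging disabled might look like this (the LOCK TABLES from the original script is omitted for brevity; the account needs SUPER for the first statement):

-- Same load, but with binary logging off for this session only.
SET SESSION sql_log_bin = OFF;
TRUNCATE TABLE t_surescripts_full;
LOAD DATA LOCAL INFILE '/data/0149561_SS_PRE/SS_PRE_20200108_v44.match_output.just_output_records'
    INTO TABLE t_surescripts_full CHARACTER SET 'latin1'
    FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"' ESCAPED BY ''
    LINES TERMINATED BY '\n';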
Many of us who work on home or pet projects and use databases to store structured data may run into performance issues when dumping or restoring data. It can be annoying to sit and wait for yet another dump restore operation for dozens of minutes or even hours.
I have quite typical machine specs: a 4-core i5 7300, 8 GB RAM, a quite fast M.2 drive, and Windows 10 with MySQL 5.7.
The problem was that restoring a ~4.5 GB file took more than 4 hours. That was ridiculous, and I noticed that the mysqld process wasn't using even half of the system resources (CPU/memory/disk I/O).
Generally speaking, this post is a summary of related issues, with credits to the many other posts listed at the bottom.
I performed a number of experiments with MySQL parameters to improve dump restore operations:
+--------------------------------+---------+---------+-----------------------+----------------------+
| Parameter                      | Default | Changed | Performance (minutes) | Performance gain (%) |
+--------------------------------+---------+---------+-----------------------+----------------------+
| All default                    | -       | -       | 259 min               | -                    |
| innodb_buffer_pool_size        | 8M      | 4G      | 32 min                | +709%                |
| innodb_buffer_pool_size        | 4G      | 6G      | 32 min                | ~0%                  |
| innodb_log_file_size           | 48M     | 1G      | 11 min                | +190%                |
| innodb_log_file_size           | 1G      | 2G      | 10 min                | +10%                 |
| max_allowed_packet             | 4M      | 128M    | 10 min                | ~0%                  |
| innodb_flush_log_at_trx_commit | 1       | 0       | 9 min 25 sec          | +5%                  |
| innodb_thread_concurrency      | 9       | 0       | 9 min 27 sec          | ~0%                  |
| innodb_doublewrite             | -       | off     | 8 min 5 sec           | +18%                 |
+--------------------------------+---------+---------+-----------------------+----------------------+
Summary (for best dump restore performance):
Set innodb_buffer_pool_size to half of RAM
Set innodb_log_file_size to 1G
Set innodb_flush_log_at_trx_commit to 0
Disabling innodb_doublewrite is recommended only for the fastest restore; it should stay enabled in production. I also found that changing the related parameter innodb_flush_method didn't change performance, but that may be an issue of the Windows platform.
If you have a complex structure with a lot of foreign keys, for example, you can try the Bulk Data Loading for InnoDB Tables tricks (the link is listed at the bottom of this post); a sketch follows below.
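A minimal sketch of those tricks, assuming the dump is replayed from the mysql client (restore.sql is a hypothetical file name):

-- Defer uniqueness and FK validation while bulk loading,
-- then re-enable both (per the linked InnoDB bulk-loading page).
SET unique_checks = 0;
SET foreign_key_checks = 0;
SOURCE restore.sql;
SET foreign_key_checks = 1;
SET unique_checks = 1;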
As you can see, I tried to increase CPU utilization by setting innodb_thread_concurrency to 0 (and also setting innodb_read_io_threads to its maximum of 64), but the results didn't change; it seems the mysqld process is already quite efficient in a multi-core environment.
Restoring only the data (without the table structure) also didn't affect performance.
I also changed a number of other parameters, but those above are the most relevant ones for dump restore operations so far.
It may seem obvious, but a novice question might be: where can I find and set these settings?
On Windows, the my.ini file is located at ProgramData/MySQL/MySQL Server <version>/my.ini. You won't find some settings there (like innodb_doublewrite); that's OK, just add them to the end of the file.
The best way to change settings is to use MySQL Workbench (Server > Options File > InnoDB).
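For reference, a my.ini fragment with the summarized settings might look like this (a sketch based on the table above; adjust the buffer pool to about half of your RAM):

[mysqld]
innodb_buffer_pool_size = 4G        # roughly half of 8 GB RAM
innodb_log_file_size = 1G
innodb_flush_log_at_trx_commit = 0  # set back to 1 for production
innodb_doublewrite = 0              # restore-time only; re-enable for production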
Credit to the following posts (and a lot of similar ones), which I found very useful:
https://www.percona.com/blog/2018/02/22/restore-mysql-logical-backup-maximum-speed/
https://www.percona.com/blog/2014/01/28/10-mysql-performance-tuning-settings-after-installation/
https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html
https://dba.stackexchange.com/questions/86636/when-is-it-safe-to-disable-innodb-doublewrite-buffering
https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html
My mysqld process consumes 232% CPU and there are 14,000+ connections.
(I'm a little new to this but have been following Stack Overflow for assistance.)
top:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3112 mysql 20 0 7061444 1.397g 15848 S 232.6 8.9 1138:06 mysqld
System:
Ubuntu 18.04,
16GB RAM,
8 Core CPU,
120GB Disk
and MySQL version 5.7.25
mysql> show status like 'Conn%';
+-----------------------------------+-------+
| Variable_name                     | Value |
+-----------------------------------+-------+
| Connection_errors_accept          | 0     |
| Connection_errors_internal        | 0     |
| Connection_errors_max_connections | 0     |
| Connection_errors_peer_address    | 0     |
| Connection_errors_select          | 0     |
| Connection_errors_tcpwrap         | 0     |
| Connections                       | 14007 |
+-----------------------------------+-------+
7 rows in set (0.01 sec)
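(Side note: Connections is the cumulative number of connection attempts since the server started, not the number of concurrent sessions. The current and peak concurrency can be checked with:)

SHOW GLOBAL STATUS LIKE 'Threads_connected';    -- sessions open right now
SHOW GLOBAL STATUS LIKE 'Max_used_connections'; -- peak since startup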
And show variables like "%timeout%"
mysql> show variables like "%timeout%";
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| connect_timeout             | 10       |
| delayed_insert_timeout      | 300      |
| have_statement_timeout      | YES      |
| innodb_flush_log_at_timeout | 1        |
| innodb_lock_wait_timeout    | 50       |
| innodb_rollback_on_timeout  | OFF      |
| interactive_timeout         | 28800    |
| lock_wait_timeout           | 31536000 |
| net_read_timeout            | 30       |
| net_write_timeout           | 60       |
| rpl_stop_slave_timeout      | 31536000 |
| slave_net_timeout           | 60       |
| wait_timeout                | 28800    |
+-----------------------------+----------+
13 rows in set (0.01 sec)
And mysqld.cnf settings
[mysqld]
# Skip reverse DNS lookup of clients
skip-name-resolve
default-storage-engine=InnoDB
max_allowed_packet=500M
max_connections = 256
interactive_timeout=7200
wait_timeout=7200
innodb_file_per_table=1
innodb_buffer_pool_size = 8G
innodb_buffer_pool_instances = 4
innodb_log_file_size = 1G
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
innodb_open_files=5000
innodb_io_capacity=2000
innodb_io_capacity_max=4000
innodb_old_blocks_time=2000
open_files_limit=50000
query_cache_type = 1
query_cache_min_res_unit = 1M
query_cache_limit = 1M
query_cache_size = 50M
tmp_table_size= 256M
max_heap_table_size= 256M
#key_buffer_size = 128M
thread_stack = 128K
thread_cache_size = 32
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/mysql-slow.log
long_query_time = 1
Note: I corrected the mysqld.cnf values above to match the reports attached below.
Additional Info:
htop:- https://pastebin.com/43f4b3fK
top:- https://pastebin.com/rTh1XvUt
GLOBAL VARIABLES: https://pastebin.com/K2fgKwEv (Complete)
INNODB STATUS:- https://pastebin.com/nGrZjHAg
Mysqltuner:- https://pastebin.com/ZNYieJj8
[SHOW FULL PROCESSLIST], [ulimit -a], [iostat -xm], [lscpu] :- https://pastebin.com/mrnyQrXf
The server freezes when multiple DB transactions are carried out. Is there some kind of lock, or a configuration flaw?
(Background: this is a WordPress blog and nobody else is accessing it right now. I somehow imported 115K posts from an old blog but am stuck here with this CPU ghost.)
Rate Per Second = RPS. Suggestions to consider for your my.cnf [mysqld] section:
innodb_lru_scan_depth=100 # from 1024 to reduce 90% of cpu cycles used for function every SECOND
innodb_io_capacity=3500 # from 2000 to enable higher IOPS on your SSD devices
innodb_flushing_avg_loops=5 # from 30 to reduce innodb_buffer_pool_pages_dirty overhead - count was 3183 in SGStatus
read_buffer_size=256K # from 128K to reduce handler_read_next RPS of 277,134
read_rnd_buffer_size=192K # from 256K to reduce handler_read_rnd_next RPS of 778
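All five of these are dynamic variables in MySQL 5.7, so as a sketch they can be tried at runtime before being persisted to my.cnf (they revert on restart; the last two only affect sessions opened after the change):

SET GLOBAL innodb_lru_scan_depth = 100;
SET GLOBAL innodb_io_capacity = 3500;
SET GLOBAL innodb_flushing_avg_loops = 5;
SET GLOBAL read_buffer_size = 256 * 1024;
SET GLOBAL read_rnd_buffer_size = 192 * 1024;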
There are many more opportunities to improve performance through global variables.
Disclaimer: I am the author of the website mentioned in my Network profile, which includes contact information.
A likely cause of the high CPU and poor performance is the poor schema of wp_postmeta. I discuss the remedy here.
Meanwhile, "you can't tune your way out of a performance problem". I did glance at the settings -- all are "reasonable".
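For reference, a commonly suggested rework of the stock wp_postmeta indexes looks roughly like the sketch below. This shows the general technique, not necessarily the exact remedy in the linked discussion; verify the index names with SHOW CREATE TABLE wp_postmeta first.

-- Cluster rows by post so all meta for one post is read together.
ALTER TABLE wp_postmeta
    DROP PRIMARY KEY,
    DROP INDEX post_id,
    ADD PRIMARY KEY (post_id, meta_key(191), meta_id),
    ADD INDEX (meta_id);  -- keeps the AUTO_INCREMENT column indexed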
I have a table partitioned by month, and this table has 1.5M rows. Queries touch the correct partition and run really fast. I partitioned this table because people do batch inserts every day and the table keeps growing. Besides the batch inserts, we use an app for CRUD operations, and my problem is with inserting rows: each insert takes more than 5 seconds. I have looked at what's wrong in my processlist, and I can see all my inserts are waiting for something, which reminds me of when this table was on MyISAM (it is now on InnoDB). I'd like to know how I can increase the speed of insert operations on a partitioned table when they come from the web application (rather than from batch inserts).
The table is on MySQL 5.1.
Operating system: Debian.
InnoDB is set up with defaults (I have not made any changes to my.cnf), so any suggestion would be fine.
I have 4 GB of free memory according to the free -g command on Debian.
mysql> SHOW VARIABLES LIKE '%buffer%';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| bulk_insert_buffer_size | 8388608    |
| innodb_buffer_pool_size | 2097152000 |
| innodb_log_buffer_size  | 1048576    |
| join_buffer_size        | 131072     |
| key_buffer_size         | 16777216   |
| myisam_sort_buffer_size | 8388608    |
| net_buffer_length       | 16384      |
| preload_buffer_size     | 32768      |
| read_buffer_size        | 131072     |
| read_rnd_buffer_size    | 262144     |
| sort_buffer_size        | 2097144    |
| sql_buffer_result       | OFF        |
+-------------------------+------------+
12 rows in set
The last thing I did was increase innodb_buffer_pool_size to 2000 MB, and my server got slower.
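A few generic diagnostics that can show what the inserts are actually waiting on (all available on MySQL 5.1):

SHOW PROCESSLIST;                            -- state of each stalled INSERT
SHOW ENGINE INNODB STATUS\G                  -- lock waits and log flush activity
SHOW GLOBAL STATUS LIKE 'Innodb_row_lock%';  -- cumulative row-lock wait counters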
I am getting a "Too many connections" error from Magento.
I have increased max_connections to 1000, but I am still getting the error.
I contacted the hosting provider, and they asked me to use the command show processlist; and review my code.
When I ran the command, I only saw a few active connections (about 4 to 5). Therefore, I have no clue how to fix the problem.
I have increased max_connections to 1500, and I am now getting the error "Can't create a new thread (errno 11); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug".
Could anyone help me with this situation, please?
I am grateful for your help and time.
This is my my.cnf
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 1024
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 16
query_cache_type = 1
query_cache_size = 48M
log_slow_queries=/var/log/mysqld.slowquery.log
max_connections=1000
wait_timeout=120
tmp_table_size = 64M
max_heap_table_size = 64M
innodb_buffer_pool_size = 2048M
innodb_additional_mem_pool_size = 20M
open_files_limit=34484
#
And this is show processlist:
+-------+-----------+-----------+--------------------+----------------+-------+--------------------+------------------+
| Id    | User      | Host      | db                 | Command        | Time  | State              | Info             |
+-------+-----------+-----------+--------------------+----------------+-------+--------------------+------------------+
| 4729  | root      | localhost | abc_def            | Sleep          | 13093 |                    | NULL             |
| 16282 | eximstats | localhost | eximstats          | Sleep          | 84    |                    | NULL             |
| 16283 | DELAYED   | localhost | eximstats          | Delayed insert | 84    | Waiting for INSERT |                  |
| 16343 | root      | localhost | NULL               | Query          | 0     | NULL               | show processlist |
+-------+-----------+-----------+--------------------+----------------+-------+--------------------+------------------+
4 rows in set (0.00 sec)
You can increase max_connections, but only up to a point.
In essence, too many connections are being started, or they are not being closed.
So you can:
increase max_connections to allow more connections to be started
reduce wait_timeout so that connections which have gone away are freed up again
investigate where all these connection requests are coming from (it can often be bots, or many bots at once, some index update or other cron job, etc.); see the sketch below
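A minimal sketch of that investigation and the timeout change (assuming MySQL 5.1 or later, which has information_schema.processlist):

-- Which client or app holds the most connections right now?
SELECT user, SUBSTRING_INDEX(host, ':', 1) AS client, COUNT(*) AS conns
FROM information_schema.processlist
GROUP BY user, client
ORDER BY conns DESC;

-- Reap idle connections sooner (affects new sessions; persist in my.cnf).
SET GLOBAL wait_timeout = 120;
SET GLOBAL interactive_timeout = 120;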
Thanks, hope it helps.