MySQL Insert On Partitioned Table Takes Too Long With InnoDB Default Settings - mysql

I have a table partitioned by month with 1.5M rows. Queries prune to the correct partition and run really fast. I partitioned the table because people do batch inserts every day and the table keeps growing; besides those batch inserts, we also use an app for CRUD operations. My problem is with inserting rows: each insert takes more than 5 seconds. I have tracked what is going on in my processlist and I can see all my inserts waiting on something, which reminds me of when this table was on MyISAM (it is now on InnoDB). I'd like to know how I can speed up insert operations on a partitioned table when they come from the web application (i.e., not via batch insert).
My table is on **MySQL 5.1**.
Operating system: Debian.
InnoDB is set up with the defaults (I have not made any changes to my.cnf), so any suggestion would be fine.
I have 4 GB of free memory according to the free -g command on Debian.
mysql> SHOW VARIABLES LIKE '%buffer%';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| bulk_insert_buffer_size | 8388608    |
| innodb_buffer_pool_size | 2097152000 |
| innodb_log_buffer_size  | 1048576    |
| join_buffer_size        | 131072     |
| key_buffer_size         | 16777216   |
| myisam_sort_buffer_size | 8388608    |
| net_buffer_length       | 16384      |
| preload_buffer_size     | 32768      |
| read_buffer_size        | 131072     |
| read_rnd_buffer_size    | 262144     |
| sort_buffer_size        | 2097144    |
| sql_buffer_result       | OFF        |
+-------------------------+------------+
12 rows in set
The last thing I did was increase innodb_buffer_pool_size to 2000 MB, and my server got slower.
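Before changing more settings, it is worth confirming what the stalled inserts are actually waiting on. A minimal diagnostic sketch (all of these work on stock MySQL 5.1):
-- What state are the slow inserts sitting in (lock wait, flushing, etc.)?
SHOW FULL PROCESSLIST;
-- Summarizes lock waits, pending I/O, and log flush activity
SHOW ENGINE INNODB STATUS\G
-- The usual insert bottlenecks on default settings: a log flush on every
-- commit (innodb_flush_log_at_trx_commit = 1) and a 1 MB log buffer
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW VARIABLES LIKE 'innodb_log_buffer_size';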

Related

MySQL 8 instance crashes on CloudSQL

I had a MySQL 5.7 instance on Google CloudSQL.
Now I have created a MySQL 8 instance.
The configuration is pretty much the same (except I use 2 CPUs instead of one, and 3.75 GB of RAM).
The default config for MySQL memory usage (innodb_buffer_pool_size, etc.) seems to be the same.
I migrated about half of my applications to this instance. What happens now is that the instance's memory usage goes above 3.XX GB and the service gets restarted.
This is super annoying, because my applications obviously crash during that time.
It seems like memory usage grows with every SELECT statement, and everything is cached.
Here are some of the config values:
| key_buffer_size         | 8.00 MB    |
| innodb_buffer_pool_size | 1408.00 MB |
| innodb_log_buffer_size  | 16.00 MB   |
| sort_buffer_size        | 0.25 MB    |
| read_buffer_size        | 0.125 MB   |
| read_rnd_buffer_size    | 0.25 MB    |
| join_buffer_size        | 0.25 MB    |
| thread_stack            | 0.273 MB   |
| binlog_cache_size       | 0.031 MB   |
| tmp_table_size          | 16.00 MB   |
This makes CloudSQL pretty much unusable for me. I need MySQL 8 without it crashing several times a day.
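On MySQL 8, memory instrumentation is enabled by default, so the sys schema can show where the memory is actually going. A sketch to run while usage is climbing (the sys schema ships with the server):
-- Top allocations by instrumented event, largest first
SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;
-- Total memory the server believes it has allocated
SELECT * FROM sys.memory_global_total;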

Cannot complete LOAD DATA INFILE of large file in MySQL/Percona 5.7 VM

I recently migrated a database from a MySQL 5.6 physical server to a Percona 5.7 CentOS 7 VM.
In the legacy environment, loading a 27G CSV file into a single table took 2 hours to complete.
In the new environment, with heavily upgraded resources (RAM, CPUs, etc.), it will run for over 24 hours and never complete.
Details:
Server CPU for the mysqld process spikes to over 100% when the job starts and stays there until the process is killed in the DB or at the command line.
This is a MyISAM table. (I do not want to hear about InnoDB. This engine is a customer requirement and there is no changing it.)
Within 10 seconds, the MYI file for the table builds to 451MB and stops. Five minutes later, it increases to 939MB within 5-10 seconds and stops again. An hour or two later, it increases again, to 1.6G. After 24 hours it may reach 6.2G, but it does not increase past that point.
Recall that during the 'quiet' times, CPU is at 100+%. IO is zero except during the few seconds it is writing to the MYI file. Server load is 1-2. Memory usage is 27% at most. Disk is SSD. Server has 96G RAM.
The table is truncated before each script run, so bulk_insert_buffer_size is unused. Keys are automatically disabled because the table is empty. I have tried tweaking every buffer and nothing changes the results in any way. I have changed the table to InnoDB, with no difference except that the files are a little bigger; the stopping points are the same and it does not finish.
I have looked at OS level buffers and caching and have not found anything either.
Ideas?
mysql> show global variables like '%buffer%';
+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| audit_log_buffer_size               | 1048576        |
| bulk_insert_buffer_size             | 67108864       |
| innodb_buffer_pool_chunk_size       | 134217728      |
| innodb_buffer_pool_dump_at_shutdown | ON             |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_dump_pct         | 25             |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_instances        | 8              |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | ON             |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 10737418240    |
| innodb_change_buffer_max_size       | 25             |
| innodb_change_buffering             | all            |
| innodb_log_buffer_size              | 134217728      |
| innodb_sort_buffer_size             | 1048576        |
| join_buffer_size                    | 8388608        |
| key_buffer_size                     | 26843545600    |
| myisam_sort_buffer_size             | 4294967296     |
| net_buffer_length                   | 16384          |
| preload_buffer_size                 | 32768          |
| read_buffer_size                    | 1048576        |
| read_rnd_buffer_size                | 10485760       |
| sort_buffer_size                    | 67108864       |
+-------------------------------------+----------------+
Server Cores: 8 CPUs with 8 cores each
set sql_log_bin = 0;
LOCK TABLES t_surescripts_full WRITE;
TRUNCATE TABLE t_surescripts_full;
LOAD DATA LOCAL INFILE '/data/0149561_SS_PRE/SS_PRE_20200108_v44.match_output.just_output_records' INTO TABLE t_surescripts_full CHARACTER SET 'latin1'
FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"' ESCAPED BY ''
LINES TERMINATED BY '\n';
UNLOCK TABLES;
The processlist is not really helpful as the load data infile is the only query and its status is 'executing', even after 20 hours.
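For more detail than the bare processlist, the sys schema bundled with Percona Server 5.7 can expose latency and, where the stage is instrumented, progress for the running statement. A sketch (requires performance_schema to be enabled):
-- Per-connection view with statement latency and stage progress
SELECT conn_id, command, state, statement_latency, progress, current_statement
FROM sys.session
WHERE command <> 'Sleep';
-- Which execution stage the long-running thread is currently in
SELECT event_name, work_completed, work_estimated
FROM performance_schema.events_stages_current;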
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 385429
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 10240
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 10240
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Consider these possibilities, please.
Setting sql_log_bin OFF requires the user to have Super_priv set to 'Y'.
SELECT host, user, super_priv FROM mysql.user WHERE user = 'somename';
may be used to confirm that super_priv is 'Y'.
SET sql_log_bin = 0;
might be more successful as
SET SESSION sql_log_bin = OFF;
to avoid logging 27G of CSV data being loaded.
Please post new time required and see email sent to you this AM for additional details.
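Putting the suggestion together, the revised load script might look like the following sketch (table and file names are from the question; ALTER ... DISABLE KEYS affects only non-unique MyISAM indexes, and the question notes keys are already disabled on the empty table, so it is included only to make the intent explicit):
SET SESSION sql_log_bin = OFF;                -- skip binary logging of 27G of rows (needs SUPER)
TRUNCATE TABLE t_surescripts_full;
ALTER TABLE t_surescripts_full DISABLE KEYS;  -- defer non-unique index builds until after the load
LOAD DATA LOCAL INFILE '/data/0149561_SS_PRE/SS_PRE_20200108_v44.match_output.just_output_records'
INTO TABLE t_surescripts_full CHARACTER SET 'latin1'
FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"' ESCAPED BY ''
LINES TERMINATED BY '\n';
ALTER TABLE t_surescripts_full ENABLE KEYS;   -- rebuild the indexes in one sorted pass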

Mysqld consumes 232% CPU

My mysqld process consumes 232% CPU, and there are 14,000+ connections.
(I'm a little new to this, but I've been following Stack Overflow for assistance.)
top:
  PID USER   PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+ COMMAND
 3112 mysql  20   0 7061444 1.397g  15848 S 232.6  8.9  1138:06 mysqld
System:
Ubuntu 18.04,
16GB RAM,
8 Core CPU,
120GB Disk
and MySQL version 5.7.25
mysql> show status like 'Conn%';
+-----------------------------------+-------+
| Variable_name                     | Value |
+-----------------------------------+-------+
| Connection_errors_accept          | 0     |
| Connection_errors_internal        | 0     |
| Connection_errors_max_connections | 0     |
| Connection_errors_peer_address    | 0     |
| Connection_errors_select          | 0     |
| Connection_errors_tcpwrap         | 0     |
| Connections                       | 14007 |
+-----------------------------------+-------+
7 rows in set (0.01 sec)
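Note that the Connections counter is cumulative: it counts every connection attempt since the server started, not the connections currently open. The live numbers come from the thread counters:
-- Connections open right now (compare against max_connections = 256)
SHOW GLOBAL STATUS LIKE 'Threads_connected';
-- Connections actively executing a statement
SHOW GLOBAL STATUS LIKE 'Threads_running';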
And the timeout variables:
mysql> show variables like "%timeout%";
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| connect_timeout             | 10       |
| delayed_insert_timeout      | 300      |
| have_statement_timeout      | YES      |
| innodb_flush_log_at_timeout | 1        |
| innodb_lock_wait_timeout    | 50       |
| innodb_rollback_on_timeout  | OFF      |
| interactive_timeout         | 28800    |
| lock_wait_timeout           | 31536000 |
| net_read_timeout            | 30       |
| net_write_timeout           | 60       |
| rpl_stop_slave_timeout      | 31536000 |
| slave_net_timeout           | 60       |
| wait_timeout                | 28800    |
+-----------------------------+----------+
13 rows in set (0.01 sec)
And mysqld.cnf settings
[mysqld]
# Skip reverse DNS lookup of clients
skip-name-resolve
default-storage-engine=InnoDB
max_allowed_packet=500M
max_connections = 256
interactive_timeout=7200
wait_timeout=7200
innodb_file_per_table=1
innodb_buffer_pool_size = 8G
innodb_buffer_pool_instances = 4
innodb_log_file_size = 1G
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
innodb_open_files=5000
innodb_io_capacity=2000
innodb_io_capacity_max=4000
innodb_old_blocks_time=2000
open_files_limit=50000
query_cache_type = 1
query_cache_min_res_unit = 1M
query_cache_limit = 1M
query_cache_size = 50M
tmp_table_size= 256M
max_heap_table_size= 256M
#key_buffer_size = 128M
thread_stack = 128K
thread_cache_size = 32
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/mysql-slow.log
long_query_time = 1
Note: I corrected the mysqld.cnf values above to match the reports attached below.
Additional Info:
htop:- https://pastebin.com/43f4b3fK
top:- https://pastebin.com/rTh1XvUt
GLOBAL VARIABLES: https://pastebin.com/K2fgKwEv (Complete)
INNODB STATUS:- https://pastebin.com/nGrZjHAg
Mysqltuner:- https://pastebin.com/ZNYieJj8
[SHOW FULL PROCESSLIST], [ulimit -a], [iostat -xm], [lscpu] :- https://pastebin.com/mrnyQrXf
The server freezes when multiple DB transactions are being carried out. Is there some kind of lock, or a configuration flaw?
(Background: this is a WordPress blog and nobody else is accessing it right now. I imported about 115K posts from an old blog, but I am now stuck with this CPU ghost.)
Rate Per Second = RPS. Suggestions to consider for your my.cnf [mysqld] section:
innodb_lru_scan_depth=100 # from 1024, to reduce ~90% of the CPU cycles used for this function every second
innodb_io_capacity=3500 # from 2000, to enable higher IOPS on your SSD devices
innodb_flushing_avg_loops=5 # from 30, to reduce innodb_buffer_pool_pages_dirty overhead - the count was 3183 in SGStatus
read_buffer_size=256K # from 128K, to reduce the handler_read_next RPS of 277,134
read_rnd_buffer_size=192K # from 256K, to reduce the handler_read_rnd_next RPS of 778
There are many more opportunities to improve performance through Global Variables.
Disclaimer: I am the author of the website mentioned in my Network profile, which includes contact information.
A likely cause of high CPU and poor performance is the poor schema for wp_postmeta. I discuss the remedy here.
Meanwhile, "you can't tune your way out of a performance problem". I did glance at the settings -- all are "reasonable".
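For reference, the wp_postmeta remedy generally discussed is to replace the id-first key layout with a composite primary key, so that all meta rows for a post are stored adjacently. A sketch of that change (it assumes the stock WordPress index names; back up and verify against the linked article before altering a live schema):
ALTER TABLE wp_postmeta
  DROP PRIMARY KEY,                                  -- stock PK is (meta_id)
  DROP INDEX post_id,                                -- stock secondary index on post_id alone
  ADD PRIMARY KEY (post_id, meta_key(191), meta_id), -- cluster rows by post
  ADD INDEX (meta_id);                               -- keep the AUTO_INCREMENT column indexed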

Magento - Too many connections

I am getting a Too many connections error from Magento.
I have increased max_connections to 1000, but I am still getting the error.
I contacted my hosting provider, and they asked me to use the command show processlist; and review my code.
When I ran the command, I only saw a few active connections (about 4 to 5). Therefore, I have no clue how to fix the problem.
I have since increased max_connections to 1500, and now I am getting a "Can't create a new thread (errno 11); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug" error.
Could anyone help me with this situation, please?
I am grateful for your help and time.
This is my my.cnf
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 1024
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 16
query_cache_type = 1
query_cache_size = 48M
log_slow_queries=/var/log/mysqld.slowquery.log
max_connections=1000
wait_timeout=120
tmp_table_size = 64M
max_heap_table_size = 64M
innodb_buffer_pool_size = 2048M
innodb_additional_mem_pool_size = 20M
open_files_limit=34484
And this is show processlist:
+-------+-----------+-----------+--------------------+----------------+-------+--------------------+------------------+
| Id    | User      | Host      | db                 | Command        | Time  | State              | Info             |
+-------+-----------+-----------+--------------------+----------------+-------+--------------------+------------------+
| 4729  | root      | localhost | abc_def            | Sleep          | 13093 |                    | NULL             |
| 16282 | eximstats | localhost | eximstats          | Sleep          | 84    |                    | NULL             |
| 16283 | DELAYED   | localhost | eximstats          | Delayed insert | 84    | Waiting for INSERT |                  |
| 16343 | root      | localhost | NULL               | Query          | 0     | NULL               | show processlist |
+-------+-----------+-----------+--------------------+----------------+-------+--------------------+------------------+
4 rows in set (0.00 sec)
You can increase max connections, but only so much.
In essence, there are too many connections being started, or not closed.
So you can:
increase max_connections to allow more connections to be started
reduce wait_timeout so connections that have gone away are freed up again
investigate where all these connection requests are coming from - it can often be bots, or many bots at once, some index update or other cronjob, etc. (the query sketch below can help)
Thanks, hope it helps.
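As a starting point for that investigation, grouping the live processlist by user and client host shows who is holding the connections. A sketch (information_schema.PROCESSLIST exists from MySQL 5.1 on):
-- host includes the client port, so strip it before grouping
SELECT user,
       SUBSTRING_INDEX(host, ':', 1) AS client,
       COUNT(*) AS open_connections
FROM information_schema.PROCESSLIST
GROUP BY user, client
ORDER BY open_connections DESC;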

mysql setting variable innodb_flush_method to O_DSYNC or O_DIRECT

In my configuration, changing innodb_flush_method from O_DIRECT to O_DSYNC reduces iowait by about 75%, and the load along with it. Should I set any other variables besides innodb_flush_method to reduce iowait further?
My configuration file is:
[mysqld]
innodb_file_per_table=1
query_cache_size=128M
thread_cache_size=64
key_buffer_size=32M
max_allowed_packet=16M
table_cache=1024
table_definition_cache=8192
wait_timeout=20
max_user_connections=25
innodb_flush_method=O_DSYNC
open_files_limit=16384
myisam_sort_buffer_size=2M
collation_server=utf8_unicode_ci
character_set_server=utf8
tmp_table_size = 384M
max_heap_table_size = 384M
innodb_buffer_pool_size=64M
innodb_thread_concurrency=8
max_connections=125
I have a database with 100 InnoDB tables; 3 of them have about 25,000 records, and the others hold nothing significant. The average load at peak time is about 160 queries, the majority of them SELECTs.
innodb_buffer_pool_size
The major problem is that innodb_buffer_pool_size is too small. The recommendation is to set it to 50~75% of main memory.
innodb_buffer_pool_size=64M
I strongly recommend increasing its value.
Generally speaking, O_DIRECT is a little bit faster when the InnoDB buffer pool caches your data and indexes; with O_DIRECT disabled, the file system page cache does the caching instead. The MySQL manual says
(http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_flush_method):
Depending on hardware configuration, setting innodb_flush_method to O_DIRECT can have either a positive or negative effect on performance. Benchmark your particular configuration to decide which setting to use.
But in my experience, there was no significant difference between O_DIRECT and O_DSYNC. I tested on both SSD and HDD.
Anyway, you should increase innodb_buffer_pool_size.
Calculating innodb buffer pool hit ratio
mysql> SHOW GLOBAL STATUS LIKE '%innodb%';
+---------------------------------------+-------------+
| Variable_name                         | Value       |
+---------------------------------------+-------------+
.....
.....
| Innodb_buffer_pool_read_requests      | 11054273949 |
| Innodb_buffer_pool_reads              | 135237      |
| Innodb_buffer_pool_wait_free          | 0           |
....
innodb buffer pool hit ratio = ((Innodb_buffer_pool_read_requests) / (Innodb_buffer_pool_read_requests + Innodb_buffer_pool_reads)) * 100
E.g., with the numbers above:
hit ratio = (11054273949 / (11054273949 + 135237)) * 100 = 99.99%
I suspect this ratio is much lower in your case.
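The same ratio can be computed in a single query. A sketch, assuming a version where information_schema.GLOBAL_STATUS is available (5.1 through 5.6; newer releases moved these counters to performance_schema.global_status):
SELECT 100 * rq.v / (rq.v + rd.v) AS buffer_pool_hit_ratio_pct
FROM (SELECT VARIABLE_VALUE + 0 AS v FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests') AS rq,
     (SELECT VARIABLE_VALUE + 0 AS v FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads') AS rd;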
query_cache_size
"the majority is SELECT"
If most queries are SELECT and update query is rare, I think increasing query_cache_size is very helpful for you.
Could you post your query cache status as follows?
mysql> show global status like 'Qc%';
+-------------------------+------------+
| Variable_name           | Value      |
+-------------------------+------------+
| Qcache_free_blocks      | 13         |
| Qcache_free_memory      | 1073403104 |
| Qcache_hits             | 217949     |
| Qcache_inserts          | 337009     |
| Qcache_lowmem_prunes    | 0          |
| Qcache_not_cached       | 2122598    |
| Qcache_queries_in_cache | 68         |
| Qcache_total_blocks     | 167        |
+-------------------------+------------+
mysql> show global status like 'com_select%';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| Com_select    | 3292531 |
+---------------+---------+
1 row in set (0.00 sec)
Calculating query cache hit ratio
query cache hit ratio = ((Qcache_hits) / (Qcache_hits + Com_select)) * 100
First, figure out your query cache hit ratio.
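Analogously, as a single query (same information_schema.GLOBAL_STATUS assumption as above):
SELECT 100 * qh.v / (qh.v + cs.v) AS query_cache_hit_ratio_pct
FROM (SELECT VARIABLE_VALUE + 0 AS v FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'Qcache_hits') AS qh,
     (SELECT VARIABLE_VALUE + 0 AS v FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'Com_select') AS cs;
With the sample numbers above, that is (217949 / (217949 + 3292531)) * 100 ≈ 6.2%, i.e. the cache is hit rarely relative to the overall SELECT volume.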