I managed (with some help from here) to set up replication from a master server running MySQL 5.6 (CentOS 6) to a slave running MariaDB 10.1.22 (CentOS 7).
My issue now is this: I have another slave with the exact same MariaDB version and specs, but its replication lag is not catching up; instead it keeps increasing.
When started it was 48,000 seconds behind and quickly dropped to 46,000 after a few minutes. Since then it has been steadily increasing, and at the time of writing it is almost back to 48,000 seconds.
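(For reference, the lag in seconds is the Seconds_Behind_Master value from the usual status command on the slave:)
SHOW SLAVE STATUS\G
-- fields worth watching: Slave_IO_Running, Slave_SQL_Running, Seconds_Behind_Master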
SHOW FULL PROCESSLIST shows the SQL thread spending up to 8 seconds at a time in the state Update_rows_log_event::ha_update_row(-1), back to back, and none of my Google searching turns up what that means.
MariaDB [(none)]> show full processlist;
+----+-------------+------+------+---------+------+------------------------------------------+------+----------+
| Id | User        | Host | db   | Command | Time | State                                    | Info | Progress |
+----+-------------+------+------+---------+------+------------------------------------------+------+----------+
|  3 | system user |      | NULL | Connect | 3640 | Queueing master event to the relay log   | NULL |    0.000 |
|  2 | system user |      | NULL | Connect |    5 | Update_rows_log_event::ha_update_row(-1) | NULL |    0.000 |
+----+-------------+------+------+---------+------+------------------------------------------+------+----------+
I also caught a simple UPDATE table SET timestamp = NOW() WHERE static_ip = 'a-valid-ip' AND process_id = '13217' taking up to 6 seconds, even though the table has (static_ip, process_id) as its primary key and the same statement takes 0.078 seconds when executed directly.
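To double-check that the primary key is really being used on the slave, a check along these lines can be run there (table stands in for the real table name, as above):
EXPLAIN UPDATE `table`
   SET `timestamp` = NOW()
 WHERE static_ip = 'a-valid-ip' AND process_id = '13217';
-- and to confirm the composite primary key definition:
SHOW CREATE TABLE `table`\G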
Contents of /etc/my.cnf
[mysqld]
max_allowed_packet = 1G
max_connections = 600
thread_cache_size = 16
query_cache_size = 64M
tmp_table_size= 512M
max_heap_table_size= 512M
wait_timeout=60
#Innodb Settings
innodb_file_per_table=1
innodb_buffer_pool_size = 25G
innodb_log_file_size = 2048M
innodb_flush_log_at_trx_commit = 0
innodb_file_format = Barracuda
innodb_flush_neighbors = 0
#Log
log-error =/var/log/error.log
tmpdir = /dev/shm
#Replication SLAVE
server-id=6
slave-skip-errors=1062
my.cnf is the same as on the slave that is running OK, except for the server-id.
Any suggestions/help on what is happening?
Thank you.
With help from the folks at MariaDB it turned out that the ha_update_row state was not relevant; the reason for the slowness was a dual disk failure on the machine.
[root@ser3 ~]# dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k;
1024+0 records in
1024+0 records out
402653184 bytes (403 MB) copied, 43.1096 s, 9.3 MB/s
This is an SSD.
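9.3 MB/s sequential write is far below what any healthy SSD should deliver. If someone hits the same symptom, a SMART check along these lines can confirm a failing drive at the OS level (it assumes smartmontools is installed; /dev/sda is a placeholder for the actual device):
# overall health verdict
smartctl -H /dev/sda
# the attributes that most often reveal a dying drive
smartctl -A /dev/sda | grep -iE 'reallocated|pending|uncorrect'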
I had a MySQL 5.7 instance on Google Cloud SQL.
Now I have created a MySQL 8 instance.
The configuration is pretty much the same (except that I use 2 CPUs instead of one, with 3.75 GB of RAM).
The default configuration for MySQL memory usage (innodb_buffer_pool_size and so on) also seems to be the same.
I migrated about half of my applications to this instance. What happens now is that the instance's memory usage climbs above 3.XX GB and the service gets restarted,
which is super annoying because my applications obviously crash during that time.
It seems like memory usage grows with every SELECT statement and everything is cached.
Here are some of the config values:
| key_buffer_size | 8.00 MB |
| innodb_buffer_pool_size | 1408.00 MB |
| innodb_log_buffer_size | 16.00 MB |
| sort_buffer_size | 0.25 MB |
| read_buffer_size | 0.125 MB |
| read_rnd_buffer_size | 0.25 MB |
| join_buffer_size | 0.25 MB |
| thread_stack | 0.273 MB |
| binlog_cache_size | 0.031 MB |
| tmp_table_size | 16.00 MB |
This makes CloudSQL pretty much unusable to me. I need MySQL 8 without crashing several times a day.
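For reference, the rough worst-case memory formula for these settings is the global buffers plus max_connections times the per-connection buffers. It over-counts (not every connection allocates every buffer) and ignores Cloud SQL's own overhead, but it gives an upper bound to compare against the 3.75 GB instance size:
SELECT ( @@key_buffer_size
       + @@innodb_buffer_pool_size
       + @@innodb_log_buffer_size
       + @@max_connections * ( @@sort_buffer_size
                             + @@read_buffer_size
                             + @@read_rnd_buffer_size
                             + @@join_buffer_size
                             + @@binlog_cache_size
                             + @@thread_stack
                             + @@tmp_table_size ) )
       / 1024 / 1024 AS worst_case_mb;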
I recently migrated a database from a MySQL 5.6 physical server to a Percona 5.7 VM on CentOS 7.
In the legacy environment, the loading of a 27G CSV file into a single table took 2 hours to complete.
In the new environment, with heavily upgraded resources (RAM, CPUs, etc), it will run for over 24 hours and never complete.
Details:
CPU usage for the mysqld process spikes to over 100% when the job starts and stays there until the process is killed in the database or at the command line.
This is a MyISAM table. (I do not want to hear about InnoDB. This engine is a customer requirement and there is no changing it)
Within 10 seconds, the MYI file for the table will build to 451MB and stop. 5 minutes later, it increases to 939MB within 5-10 seconds and stops again. Up to an hour or two later, it will increase again to 1.6G. 24 hours later, it may reach 6.2G; but does not increase past that point.
Recall that during the 'quiet' times, CPU is at 100+%. IO is zero except during the few seconds it is writing to the MYI file. Server load is 1-2. Memory usage is 27% at most. Disk is SSD. Server has 96G RAM.
The table is truncated before each script run, so bulk_insert_buffer_size is unused. Keys are automatically disabled because the table is empty. I have tried tweaking every buffer and nothing changes the results in any way. I have changed the table to InnoDB, with no difference except that the files are a little bigger; the stopping points are the same and it does not finish.
I have looked at OS level buffers and caching and have not found anything either.
Ideas?
mysql> show global variables like '%buffer%';
+-------------------------------------+----------------+
| Variable_name | Value |
+-------------------------------------+----------------+
| audit_log_buffer_size | 1048576 |
| bulk_insert_buffer_size | 67108864 |
| innodb_buffer_pool_chunk_size | 134217728 |
| innodb_buffer_pool_dump_at_shutdown | ON |
| innodb_buffer_pool_dump_now | OFF |
| innodb_buffer_pool_dump_pct | 25 |
| innodb_buffer_pool_filename | ib_buffer_pool |
| innodb_buffer_pool_instances | 8 |
| innodb_buffer_pool_load_abort | OFF |
| innodb_buffer_pool_load_at_startup | ON |
| innodb_buffer_pool_load_now | OFF |
| innodb_buffer_pool_size | 10737418240 |
| innodb_change_buffer_max_size | 25 |
| innodb_change_buffering | all |
| innodb_log_buffer_size | 134217728 |
| innodb_sort_buffer_size | 1048576 |
| join_buffer_size | 8388608 |
| key_buffer_size | 26843545600 |
| myisam_sort_buffer_size | 4294967296 |
| net_buffer_length | 16384 |
| preload_buffer_size | 32768 |
| read_buffer_size | 1048576 |
| read_rnd_buffer_size | 10485760 |
| sort_buffer_size | 67108864 |
+-------------------------------------+----------------+
Server Cores: 8 CPUs with 8 cores each
set sql_log_bin = 0;
LOCK TABLES t_surescripts_full WRITE;
TRUNCATE TABLE t_surescripts_full;
LOAD DATA LOCAL INFILE '/data/0149561_SS_PRE/SS_PRE_20200108_v44.match_output.just_output_records' INTO TABLE t_surescripts_full CHARACTER SET 'latin1'
FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"' ESCAPED BY ''
LINES TERMINATED BY '\n';
UNLOCK TABLES;
The processlist is not really helpful, as the LOAD DATA INFILE is the only query and its state is 'executing', even after 20 hours.
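In case it helps narrow down where the time is going, a query like this against performance_schema shows what the loading thread is currently waiting on (a diagnostic sketch; it assumes performance_schema and the wait instruments are enabled):
SELECT t.processlist_id,
       t.processlist_state,
       w.event_name,
       w.timer_wait / 1e12 AS wait_seconds   -- TIMER_WAIT is in picoseconds
  FROM performance_schema.threads t
  LEFT JOIN performance_schema.events_waits_current w
         ON w.thread_id = t.thread_id
 WHERE t.processlist_command = 'Query';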
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 385429
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 10240
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 10240
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Consider these possibilities, please.
Setting sql_log_bin OFF requires the user to have Super_priv set to Y.
SELECT host,user,super_priv FROM mysql.user WHERE user = 'somename';
may be used to confirm the super_priv is Y
SET sql_log_bin = 0;
might be more successful with
SET SESSION sql_log_bin = OFF;
to avoid logging 27G of CSV data being loaded.
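To verify both points before starting the load, something like this can be run in the same session (standard variables, nothing Percona-specific):
SHOW VARIABLES LIKE 'log_bin';   -- is binary logging enabled at all?
SELECT @@SESSION.sql_log_bin;    -- 0 means this session will not write to the binlog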
Please post new time required and see email sent to you this AM for additional details.
Many of us who work on home or pet projects and use databases to store structured data run into performance issues when dumping or restoring data. It can be annoying to sit and wait for yet another dump restore operation for dozens of minutes, or even hours.
I have quite typical machine specs: a 4-core i5 7300, 8 GB of RAM, a reasonably fast M.2 drive, and Windows 10 with MySQL 5.7.
The problem was that restoring a ~4.5 GB dump file took more than 4 hours. That was ridiculous, and I noticed that the mysqld process wasn't using even half of the system resources (CPU/memory/disk I/O).
Generally speaking, this post is a summary of related issues, with credits to many other posts, which I list at the bottom.
I performed a number of experiments with MySQL parameters to speed up dump restore operations:
+--------------------------------+---------+---------+-----------------------+----------------------+
| Parameter                      | Default | Changed | Performance (minutes) | Performance gain (%) |
+--------------------------------+---------+---------+-----------------------+----------------------+
| All default                    | -       | -       | 259 min               | -                    |
| innodb_buffer_pool_size        | 8M      | 4G      | 32 min                | +709%                |
| innodb_buffer_pool_size        | 4G      | 6G      | 32 min                | ~0%                  |
| innodb_log_file_size           | 48M     | 1G      | 11 min                | +190%                |
| innodb_log_file_size           | 1G      | 2G      | 10 min                | +10%                 |
| max_allowed_packet             | 4M      | 128M    | 10 min                | ~0%                  |
| innodb_flush_log_at_trx_commit | 1       | 0       | 9 min 25 sec          | +5%                  |
| innodb_thread_concurrency      | 9       | 0       | 9 min 27 sec          | ~0%                  |
| innodb_doublewrite             | -       | off     | 8 min 5 sec           | +18%                 |
+--------------------------------+---------+---------+-----------------------+----------------------+
Summary (for best dump restore performance):
Set innodb_buffer_pool_size to half of RAM
Set innodb_log_file_size to 1G
Set innodb_flush_log_at_trx_commit to 0
Disabling innodb_doublewrite is recommended only for the fastest restore; it should stay enabled in production. I also found that changing the related parameter innodb_flush_method didn't change performance, but that may be specific to the Windows platform.
If you have a complex structure with a lot of foreign keys, for example, you can also try the Bulk Data Loading for InnoDB Tables tricks; the link is listed at the bottom of this post.
As you can see, I tried to increase CPU utilization by setting innodb_thread_concurrency to 0 (and also setting innodb_read_io_threads to the maximum of 64), but the results didn't change; the mysqld process already seems quite efficient in a multi-core environment.
Restoring only data (without table structure) also didn't affect performance
I also changed a number of other parameters, but those above are most relevant ones for dump restore operation so far.
It may seem obvious, but a novice question might be: where can I find and set these settings?
On Windows, the my.ini file is located at ProgramData/MySQL/MySQL Server <version>/my.ini. You won't find some of the settings there (innodb_doublewrite, for example); that's OK, just add them to the end of the file.
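For example, the restore-friendly values from the table above could look like this when appended under the [mysqld] section (a sketch; innodb_doublewrite can only be changed at startup, so a restart is required, and the buffer pool size should match your RAM):
# appended under the [mysqld] section of my.ini
innodb_buffer_pool_size = 4G
innodb_log_file_size = 1G
innodb_flush_log_at_trx_commit = 0
max_allowed_packet = 128M
# only while restoring; remove this line again for production use
innodb_doublewrite = 0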
The best way to change settings is to use MySQL Workbench (Server > Options file > InnoDB).
Credit to the following posts (and a lot of similar ones), which I found very useful:
https://www.percona.com/blog/2018/02/22/restore-mysql-logical-backup-maximum-speed/
https://www.percona.com/blog/2014/01/28/10-mysql-performance-tuning-settings-after-installation/
https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html
https://dba.stackexchange.com/questions/86636/when-is-it-safe-to-disable-innodb-doublewrite-buffering
https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html
My MariaDB instance increases its disk usage on '/' every day until the disk is full, even though data is actually being inserted on another disk.
The used disk space is only reclaimed when the MariaDB daemon is stopped (or restarted).
MariaDB works fine until the disk fills up.
Mariadb Env.
Version : 10.3.x
OS : Ubuntu 18.04 LTS
Related DB Configurations.
+----------------------------+-----------------------+
| Variable_name              | Value                 |
+----------------------------+-----------------------+
| basedir                    | /usr                  |
| innodb_temp_data_file_path | ibtmp1:12M:autoextend |
| datadir                    | /mysql_data           |
| innodb_tmpdir              |                       |
| max_tmp_tables             | 32                    |
| slave_load_tmpdir          | /mysql_data/mysql_tmp |
| tmp_disk_table_size        | 4294967295            |
| tmp_memory_table_size      | 33554432              |
| tmp_table_size             | 33554432              |
| tmpdir                     | /mysql_data/mysql_tmp |
+----------------------------+-----------------------+
'/mysql_data' is on a physically separate disk from '/'.
I understand that the '/mysql_data' disk grows because of the inserted data, but I don't understand why about 2 GB of usage is added to the '/' disk every day.
When I stop the MariaDB daemon, the space on '/' is clearly reclaimed.
Every day I take a full backup of the database with mysqldump to the '/mysql_data' disk.
I have searched the MariaDB logs, but there are no errors or other noticeable messages.
I think the MariaDB daemon is holding on to space on the system disk ('/'), but I don't know why.
I need your help. How can I solve this?
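One way to see what is actually holding the space on '/' is to list files that the daemon keeps open after they have been deleted (a diagnostic sketch, run as root; on some installs the server process is named mariadbd rather than mysqld):
# deleted-but-still-open files on the root filesystem
lsof +L1 /
# the same, restricted to the MariaDB server process
lsof -a -p "$(pidof mysqld)" +L1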
I am getting a Too many connections error from Magento.
I have increased max_connections to 1000 but I am still getting the error.
I contacted my hosting provider and they asked me to run show processlist; and review my code.
When I ran the command, I only saw a few active connections (about 4 to 5), so I have no clue how to fix the problem.
I have since increased max_connections to 1500 and now I am getting Can't create a new thread (errno 11); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug.
Could anyone help me with this situation, please?
I am grateful for your help and time.
This is my my.cnf
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 1024
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 16
query_cache_type = 1
query_cache_size = 48M
log_slow_queries=/var/log/mysqld.slowquery.log
max_connections=1000
wait_timeout=120
tmp_table_size = 64M
max_heap_table_size = 64M
innodb_buffer_pool_size = 2048M
innodb_additional_mem_pool_size = 20M
open_files_limit=34484
#
And this is show processlist:
+-------+-----------+-----------+--------------------+----------------+-------+--------------------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-------+-----------+-----------+--------------------+----------------+-------+--------------------+------------------+
| 4729 | root | localhost | abc_def| Sleep | 13093 | | NULL |
| 16282 | eximstats | localhost | eximstats | Sleep | 84 | | NULL |
| 16283 | DELAYED | localhost | eximstats | Delayed insert | 84 | Waiting for INSERT | |
| 16343 | root | localhost | NULL | Query | 0 | NULL | show processlist |
+-------+-----------+-----------+--------------------+----------------+-------+--------------------+------------------+
4 rows in set (0.00 sec)
You can increase max_connections, but only so far.
In essence, too many connections are being started, or they are not being closed.
So you can:
increase max_connections to allow more connections to be started;
reduce wait_timeout so that connections that have gone away are freed up again;
investigate where all these connection requests are coming from (often bots, or many bots at once, some index update or other cron job, etc.); see the sketch below for a quick way to check the counters.
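A quick way to check how close the server actually gets to the limit, and to tighten the idle timeout (a sketch; SET GLOBAL only affects new connections and does not survive a restart, so persist the values in my.cnf as well):
SHOW GLOBAL STATUS LIKE 'Max_used_connections';  -- high-water mark since the last restart
SHOW GLOBAL STATUS LIKE 'Threads_connected';     -- connections open right now
SHOW GLOBAL STATUS LIKE 'Aborted_connects';      -- failed connection attempts
SET GLOBAL wait_timeout = 60;                    -- drop idle connections after 60 seconds
SET GLOBAL interactive_timeout = 60;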
Thanks, hope it helps.