I have a Ruby program that uses a thread pool of 20 threads to download more than 1000 compressed MySQL dump files ("xxx.sql.gz"). After each file is downloaded, the script creates a database and calls
gzip -dc xxx.sql.gz | cat - commit.txt | mysql -D db_name
to restore the dump file.
commit.txt appends a final commit; statement to the SQL stream, since autocommit is turned off.
The problem is that sometimes all of the mysql processes restoring dumps hang at a random point.
I use a ramdisk, by the way.
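For concreteness, each worker presumably runs something like the following per file; db_name is a placeholder, and the CREATE DATABASE line is a reconstruction of the "creates a database" step described above, not the actual script:
# create the target schema, then stream the dump plus the final COMMIT into it
mysql -e "CREATE DATABASE IF NOT EXISTS db_name"
gzip -dc xxx.sql.gz | cat - commit.txt | mysql -D db_name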
my.cnf is:
[mysqld]
autocommit=0
innodb_buffer_pool_size=16G
max_allowed_packet=256M
innodb_file_per_table=1
innodb_flush_log_at_trx_commit = 2
innodb_log_file_size = 256M
innodb_flush_method = O_DIRECT
innodb_use_native_aio = 0
tmpdir=/mnt/ramdisk
datadir=/mnt/ramdisk/mysql
socket=/mnt/ramdisk/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
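When the hang happens, a useful first diagnostic (a sketch; the socket path is taken from the config above) is to ask the server what the stuck sessions are doing:
# list all sessions and their current state (look for long-running 'Waiting for ...' states)
mysql -S /mnt/ramdisk/mysql/mysql.sock -e 'SHOW FULL PROCESSLIST;'
# dump InnoDB's internal status, including lock waits and pending I/O
mysql -S /mnt/ramdisk/mysql/mysql.sock -e 'SHOW ENGINE INNODB STATUS\G'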
Hello dear StackOverflow community,
I have a large WordPress site that now crashes the database by creating dozens of "Creating sort index" tasks with this query:
SELECT t.*, tt.*
FROM wp_terms AS t
INNER JOIN wp_term_taxonomy AS tt ON t.term_id = tt.term_id
WHERE tt.taxonomy IN ('categories')
ORDER BY t.name ASC
Those run for more than 20 seconds each. Afterwards the same query sits in the "Sending data" state. The server uses an AMD EPYC CPU and shouldn't have any problems (even though the database is large), and it didn't until it suddenly got stuck.
Shouldn't it be caching this query anyway?
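One way to see why the server spends its time in "Creating sort index" is to look at the query plan (a sketch; the schema name wordpress is a placeholder):
mysql wordpress -e "EXPLAIN SELECT t.*, tt.* FROM wp_terms AS t INNER JOIN wp_term_taxonomy AS tt ON t.term_id = tt.term_id WHERE tt.taxonomy IN ('categories') ORDER BY t.name ASC\G"
# 'Using filesort' in the Extra column is what typically shows up as 'Creating sort index'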
The MariaDB my.cnf config looks like:
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#
# * Basic Settings
#
user = mysql
pid-file = /run/mysqld/mysqld.pid
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
lc-messages = en_US
skip-external-locking
# Broken reverse DNS slows down connections considerably and name resolve is
# safe to skip if there are no "host by domain name" access grants
#skip-name-resolve
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
#
# * Fine Tuning
#
#key_buffer_size = 128M
#max_allowed_packet = 1G
#thread_stack = 192K
#thread_cache_size = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
#myisam_recover_options = BACKUP
#max_connections = 100
#table_cache = 64
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# Recommend only changing this at runtime for short testing periods if needed!
#general_log_file = /var/log/mysql/mysql.log
#general_log = 1
# When running under systemd, error logging goes via stdout/stderr to journald
# and when running legacy init error logging goes to syslog due to
# /etc/mysql/conf.d/mariadb.conf.d/50-mysqld_safe.cnf
# Enable this if you want to have error logging into a separate file
#log_error = /var/log/mysql/error.log
# Enable the slow query log to see queries with especially long duration
#slow_query_log_file = /var/log/mysql/mariadb-slow.log
#long_query_time = 10
#log_slow_verbosity = query_plan,explain
#log-queries-not-using-indexes
#min_examined_row_limit = 1000
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 10
#max_binlog_size = 100M
# * Character sets
#
# MySQL/MariaDB default is Latin1, but in Debian we rather default to the full
# utf8 4-byte character set. See also client.cnf
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
# Most important is to give InnoDB 80 % of the system RAM for buffer use:
# https://mariadb.com/kb/en/innodb-system-variables/#innodb_buffer_pool_size
#innodb_buffer_pool_size = 8G
# this is only for embedded server
[embedded]
# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
# This group is only read by MariaDB-10.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.5]
I restarted MariaDB, but the advisor reports a huge rate of opening tables (same for files). The server has only been running for 15 minutes. Is this due to the short uptime, or do I have to change the settings? (Before the flood of "Creating sort index" tasks, the server was stable.)
Issue:
The rate of opening tables is high.
Recommendation:
Opening tables requires disk I/O which is costly. Increasing table_open_cache might avoid this.
Justification:
Opened table rate: 1.26 per second, this value should be less than 10 per hour
Used variable / formula:
Opened_tables / Uptime
Test:
value*60*60 > 10
It would be awesome if somebody had an idea what to do about this problem.
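For reference, the counters behind the advisor's formula can be read directly (a sketch using standard status variables):
# the advisor computes Opened_tables / Uptime; compare the result with the cache size
mysql -e "SHOW GLOBAL STATUS LIKE 'Opened_tables'; SHOW GLOBAL STATUS LIKE 'Uptime'; SHOW GLOBAL VARIABLES LIKE 'table_open_cache';"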
Last week, after some database tuning, I stopped MySQL and it failed to restart.
After a long period of troubleshooting I found that the ibdata1 file was not as big as it should have been: it had been deleted and recreated from scratch.
I retrieved the old 9.5GB file from backup, replaced it, and MySQL started again; happy days.
I've been having some more server trouble today, had a look in the MySQL folder, and the file has disappeared again.
I haven't stopped MySQL yet, so everything is still up and running; I will have to retrieve the file from backup and restart with my fingers crossed.
So my question is: why does it keep disappearing? My guess is that I've made an accidental change in the my.cnf file and then not restarted. Unfortunately I don't have a backup of that file, because I didn't know a change had been made.
(Untidy) My.cnf is as follows:
[mysqld]
local-infile=0
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
innodb_thread_concurrency= 4
innodb_buffer_pool_size = 2G
thread_concurrency = 3
thread_cache_size = 32
table_cache = 1024
query_cache_size = 64M
query_cache_limit = 2M
join_buffer_size = 8M
tmp_table_size = 256M
key_buffer = 32M
innodb_autoextend_increment=512
max_allowed_packet = 16M
max_heap_table_size = 256M
read_buffer_size = 2M
read_rnd_buffer_size = 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_repair_threads = 1
innodb_log_file_size = 100M
innodb_additional_mem_pool_size = 20M
innodb_flush_log_at_trx_commit=2
innodb_lock_wait_timeout=1800
innodb_log_buffer_size=500K
log-error=/var/log/mysqld.log
slow_query_log = /var/log/mysql-slow.log
long_query_time = 5
pid-file=/var/run/mysqld/mysqld.pid
sort_buffer_size = 2M
read_buffer_size = 2M
wait_timeout = 120
key_buffer = 384M
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 1M
max_connections=50
query_cache_type = 1
Any help greatly appreciated!
Thanks
Don't stop MySQL! mysqld keeps the deleted file open while it's running, so its contents are still on the file system.
MySQL never deletes ibdata1, so it must be some external command.
To recover the database, stop all writes to it and wait until the main thread is in the "waiting for server activity" or "sleeping" state:
mysql> pager grep Main
PAGER set to 'grep Main'
mysql> show engine innodb status\G
Main thread id 4994568192, state: sleeping
1 row in set (0.00 sec)
Take a dump of all databases (this step is not strictly necessary, but do it for extra safety):
mysqldump -A > mydb.sql
Find the deleted ibdata1 in the /proc filesystem and copy it back to the MySQL datadir:
# ls -la /proc/`pidof mysqld`/fd/ | grep -e ibdata
lrwx------ 1 root root 64 May 26 02:41 3 -> /var/lib/mysql/ibdata1 (deleted)
Note the 3: it's the file descriptor number of ibdata1.
Copy ibdata1 back:
# cp /proc/`pidof mysqld`/fd/3 /var/lib/mysql/ibdata1
Then restart MySQL
@Akuzminsky's answer is what you need to get your current ibdata1 back. And he is correct that MySQL never deletes ibdata1, regardless of your my.cnf configuration.
So something else is deleting the file. How can one find out? Try running the Linux Audit Daemon. You won't be able to find out what deleted the file last time (unless you were already running the Audit Daemon), but in case it happens again, you'll be ready.
See this StackExchange answer for details: https://askubuntu.com/questions/48844/how-to-find-the-pid-of-the-process-which-has-deleted-a-file
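A minimal sketch of such an audit rule, assuming the datadir from the question (the key name ibdata-delete is arbitrary):
# log every unlink()/unlinkat() on files under the MySQL datadir
auditctl -a always,exit -F arch=b64 -S unlink -S unlinkat -F dir=/var/lib/mysql -k ibdata-delete
# after the file disappears again, see which process deleted it
ausearch -k ibdata-delete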
I would like to log my slow queries to a table, but leave my general log(?) as it is. I assume the general log is my binary log? The MySQL docs are less than clear on this stuff. My MySQL server is set up as a replication master, and these are the relevant logging stanzas from my.cnf.
# BINARY LOGGING #
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 3
max_binlog_size = 1000M
sync_binlog = 1
# LOGGING #
log_error = /var/log/mysql/mysql-error.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
log_queries_not_using_indexes = 0
I'm afraid that if I add something like this:
log_output = TABLE
general-log
expire_logs_days = 1
it will affect my binlog, or start logging everything that's already written to my binlog to a table, which I don't want. I'm essentially just looking to have slow queries (a day or two's worth, maybe) written to a table rather than a file, without affecting any of my other current logging.
I'm using Server version: 5.5.22-0ubuntu1-log (Ubuntu)
Thanks.
The setting log_output doesn't affect the binary log. It affects the general query log and the slow query log.
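So a minimal sketch for what you describe, assuming MySQL 5.5 as in the question, is to route only the slow log to a table while the binary log and error log stay untouched:
mysql -u root -p -e "SET GLOBAL log_output = 'TABLE'; SET GLOBAL slow_query_log = 1;"
# slow queries now land in the mysql.slow_log table (general_log stays OFF):
mysql -u root -p -e "SELECT start_time, query_time, sql_text FROM mysql.slow_log ORDER BY start_time DESC LIMIT 10;"
Add log_output = TABLE and slow_query_log = 1 to my.cnf as well if you want the change to survive a restart.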
I'm doing some preparatory work for a large website migration.
The database is around 10GB in size and several tables contain more than 15 million records. Unfortunately, it only comes as one large mysqldump file in SQL format, due to client relations outside my remit, but you know how that goes. My goal is to minimize downtime and hence import the data as fast as possible.
I have attempted to use the standard MySQL CLI interface like so:
$ mysql -u username -p database_name < superhuge_sql_file
This is, however, super slow.
To try to speed things up, I've used awk to split the file into chunks (one per table, with its associated data) and built a little shell script to import the tables in parallel, like so:
#!/bin/bash
awk '/DROP TABLE/{f=0 ;n++; print >(file="out_" n); close("out_" n-1)} f{ print > file}; /DROP TABLE/{f=1}' superhuge.sql
for (( i = 1; i <= 95; i++ ))
do
mysql -u admin --password=thepassword database_name < /path/to/out_$i &
done
wait # wait for all background imports to finish
It's worth mentioning that this is a "use once and destroy" script (passwords in scripts etc...).
Now, this works, but it still takes over 3 hours to complete on a quad-core server that is doing nothing else at present. The tables do import in parallel, but not all of them at once, and getting MySQL server information through the CLI is very slow while the import runs. I'm not sure why, but trying to access tables using the same mysql user account hangs during the process. max_user_connections is unlimited.
I have set max_connections to 500 in my.cnf, but have otherwise not configured MySQL on this server.
I've had a good hunt around, but I was wondering if there are any MySQL config options that would help speed this process up, or any other methods I've missed that would be quicker.
If you can use GNU parallel, check this example from the wardbekker gist:
# Split MYSQL dump file
zcat dump.sql.gz | awk '/DROP TABLE IF EXISTS/{n++}{print >"out" n ".sql" }'
# Parallel import using GNU Parallel http://www.gnu.org/software/parallel/
ls -rS *.sql | parallel --joblog joblog.txt mysql -uXXX -pYYY db_name "<"
This splits the big file into separate SQL files and then uses parallel to import them concurrently.
So to run 10 threads in GNU parallel, you can run:
ls -rS data.*.sql | parallel -j10 --joblog joblog.txt mysql -uuser -ppass dbname "<"
On OS X, the split step can be:
gunzip -c wiebetaaltwat_stable.sql.gz | awk '/DROP TABLE IF EXISTS/{n++}{filename = "out" n ".sql"; print > filename}'
Source: wardbekker/gist:964146
Related: Import sql files using xargs at Unix.SE
Importing the dump file to the server
$ sudo apt-get install pigz pv
$ zcat /path/to/folder/<dbname>_`date +\%Y\%m\%d_\%H\%M`.sql.gz | pv | mysql --user=<yourdbuser> --password=<yourdbpassword> --database=<yournewdatabasename> --compress --reconnect --unbuffered --net_buffer_length=1048576 --max_allowed_packet=1073741824 --connect_timeout=36000 --line-numbers --wait --init-command="SET GLOBAL net_buffer_length=1048576;SET GLOBAL max_allowed_packet=1073741824;SET FOREIGN_KEY_CHECKS=0;SET UNIQUE_CHECKS = 0;SET AUTOCOMMIT = 1;FLUSH NO_WRITE_TO_BINLOG QUERY CACHE, STATUS, SLOW LOGS, GENERAL LOGS, ERROR LOGS, ENGINE LOGS, BINARY LOGS, LOGS;"
Optional: Command Arguments for connection
--host=127.0.0.1 / localhost / IP Address of the Import Server
--port=3306
The optional packages help import the SQL file faster:
pv shows a progress view while the dump streams in.
pigz/unpigz gzip/gunzip files in parallel, for faster (de)compression of the output.
Alternatively, there is a range of MySQL options for export, import, and configuration:
Export: https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html
Import: https://dev.mysql.com/doc/refman/5.7/en/mysql-command-options.html
Configuration: https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html
https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html
Here is a sample my.cnf I run on my quad-core SSD server; it usually imports a 100GB dump file in about 8 hours. You can tweak the settings to help your server write much faster.
Check each variable against the links above to match the values to your MySQL server.
# Edit values to as per your Server Processor and Memory requirements.
[mysqld]
# Defaults
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
log-error = /var/log/mysql/error.log
datadir = /var/lib/mysql
log_timestamps = SYSTEM
character_set_server = utf8mb4
collation_server = utf8mb4_general_ci
# InnoDB
innodb_buffer_pool_size = 48G
innodb_buffer_pool_instances = 48
innodb_log_file_size = 3G
innodb_log_files_in_group = 4
innodb_log_buffer_size = 256M
innodb_log_compressed_pages = OFF
innodb_large_prefix = ON
innodb_file_per_table = true
innodb_buffer_pool_load_at_startup = ON
innodb_buffer_pool_dump_at_shutdown = ON
innodb_autoinc_lock_mode = 2
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 360
innodb_flush_neighbors = 0
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2500
innodb_io_capacity_max = 5000
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_monitor_enable = all
performance_schema = ON
key_buffer_size = 32M
wait_timeout = 30
interactive_timeout = 3600
max_connections = 1000
table_open_cache = 5000
open_files_limit = 8000
tmp_table_size = 32M
max_heap_table_size = 64M
# Slow/Error
log_output = file
slow_query_log = ON
slow_query_log_file = /var/log/mysql/slow_query.log
long_query_time = 10
log_queries_not_using_indexes = ON
log_slow_rate_limit = 100
log_slow_rate_type = query
log_slow_verbosity = full
log_slow_admin_statements = ON
log_slow_slave_statements = ON
slow_query_log_always_write_time = 1
slow_query_log_use_global_control = all
# Query
join_buffer_size = 32M
sort_buffer_size = 16M
read_rnd_buffer_size = 8M
query_cache_limit = 8M
query_cache_size = 8M
query_cache_type = 1
# TCP
max_allowed_packet = 1G
Does the dump use multiple-row inserts? (Or maybe you can pre-process it so it does?) A single INSERT with many rows replays far faster than many single-row INSERTs.
This guy covers a lot of the basics, for example:
Disabling indexes, which makes the import many times faster.
Disable MySQL indexes before the import:
ALTER TABLE `table_name` DISABLE KEYS;
then after the import re-enable them:
ALTER TABLE `table_name` ENABLE KEYS;
When using the MyISAM table type, use MySQL's INSERT DELAYED command instead; it encourages MySQL to write the data to disk while the database is idle.
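A hypothetical example (my_table, its columns, and db_name are placeholders; note that DELAYED only applies to MyISAM-style engines and was deprecated in later MySQL versions):
# the server queues the row and returns immediately, writing when the table is idle
mysql -e "INSERT DELAYED INTO my_table (id, name) VALUES (1, 'a');" db_name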
For InnoDB tables, use these extra commands to avoid a great deal of disk access:
SET FOREIGN_KEY_CHECKS = 0;
SET UNIQUE_CHECKS = 0;
SET AUTOCOMMIT = 0;
and these at the end:
SET UNIQUE_CHECKS = 1;
SET FOREIGN_KEY_CHECKS = 1;
COMMIT;
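Putting those together, a sketch of wrapping a whole dump with the speed-up statements (dump.sql, user, and db_name are placeholders):
{ echo "SET FOREIGN_KEY_CHECKS = 0; SET UNIQUE_CHECKS = 0; SET AUTOCOMMIT = 0;"
  cat dump.sql
  echo "SET UNIQUE_CHECKS = 1; SET FOREIGN_KEY_CHECKS = 1; COMMIT;"
} | mysql -u user -p db_name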
So, I'd like to set the maximum log file size to 64M with innodb_log_file_size=64M. After doing so, MySQL starts OK, but nothing seems to work properly.
EDIT: and by "properly" I mean "not at all". Setting other InnoDB variables doesn't cause any problems.
How should I go about troubleshooting this one?
Make sure MySQL shuts down cleanly, and delete (or move elsewhere) all ib_logfile* files from MySQL data directory (/var/lib/mysql/ usually).
I've tested it and it worked for me. Here's the source of this hint.
InnoDB reports some errors in the Comment field of SHOW TABLE STATUS. You'll find other problems in the MySQL error log (hostname.err in the MySQL data directory).
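For example (a sketch; db_name and table_name are placeholders):
# check the Comment field InnoDB uses for per-table errors
mysql -e "SHOW TABLE STATUS FROM db_name LIKE 'table_name'\G" | grep -i comment
# and tail the error log (hostname.err in the data directory, as noted above)
tail -n 50 /var/lib/mysql/$(hostname).err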
I ran into this problem too, and as per @porneL's answer, here were my specific bash steps to correct this:
service mysql stop # Stop MySQL
rm /var/lib/mysql/ib_logfile0 # Delete log file 1
rm /var/lib/mysql/ib_logfile1 # Delete log file 2
vim my.cnf # Change innodb_log_file_size = 64M
service mysql start # Start MySQL
I found these specific steps on the MySQL forums.
Before changing innodb_log_file_size, you must flush all remaining transactional data out of the redo logs. Simply set innodb_fast_shutdown to 0 or 2.
innodb_fast_shutdown = 0 : InnoDB does a slow shutdown, a full purge and an insert buffer merge before shutting down
innodb_fast_shutdown = 2 : InnoDB flushes its logs and shuts down cold, as if MySQL had crashed; no committed transactions are lost, but the crash recovery operation makes the next startup take longer.
In light of this, here is how you handle it:
mysql -ANe"SET GLOBAL innodb_fast_shutdown = 2"
vi /etc/my.cnf # Change innodb_log_file_size = 64M
service mysql stop # Stop MySQL
rm /var/lib/mysql/ib_logfile0 # Delete log file 1
rm /var/lib/mysql/ib_logfile1 # Delete log file 2
service mysql start # Start MySQL