MySQL TokuDB engine using too much CPU

I have converted the tables of a database from InnoDB to TokuDB, and I noticed that with TokuDB, reads use far more CPU. Why is this?
To be more specific, the server with the TokuDB tables is a slave of an InnoDB server that is part of a PXC cluster. The slave runs regular Percona Server, not PXC, yet it seems to be using far too much CPU and I do not know why.
Below is my my.cnf config:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
thp-setting=never
socket = /var/run/mysqld/mysqld.sock
nice = 0
flush_caches
numa_interleave
core-file-size = unlimited
open_files_limit = 1024
[mysqld]
back_log = 65535
bind-address = 0.0.0.0
binlog_format = ROW
character_set_server = utf8
collation_server = utf8_general_ci
core_file
basedir = /usr
datadir = /var/lib/mysql
#default_storage_engine = InnoDB
enforce-gtid-consistency = 1
expand_fast_index_creation = 1
expire_logs_days = 7
gtid_mode = ON
innodb_autoinc_lock_mode = 2
innodb_buffer_pool_instances = 1
innodb_buffer_pool_populate = 1
innodb_buffer_pool_size = 512M
innodb_data_file_path = ibdata1:64M;ibdata2:64M:autoextend
innodb_file_format = Barracuda
innodb_file_per_table
innodb_force_recovery = 1
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_io_capacity = 1600
innodb_large_prefix
innodb_locks_unsafe_for_binlog = 1
innodb_log_file_size = 64M
innodb_print_all_deadlocks = 1
innodb_read_io_threads = 64
innodb_stats_on_metadata = FALSE
innodb_support_xa = FALSE
innodb_write_io_threads = 64
lc-messages-dir = /usr/share/mysql
log-bin = mysqld-bin
log-queries-not-using-indexes
log-slave-updates
long_query_time = 1
master_info_repository = TABLE
max_allowed_packet = 64M
max_connect_errors = 4294967295
max_connections = 2500
max_user_connections = 2550
min_examined_row_limit = 1000
open_files_limit = 1024
port = 3306
relay_log_info_repository = TABLE
relay-log-recovery = 1
skip-external-locking
skip-name-resolve
slave_parallel_workers = 8
slow_query_log = 1
slow_query_log_timestamp_always = 1
socket = /var/run/mysqld/mysqld.sock
table_open_cache = 4096
thread_cache = 1024
tmpdir = /srv/tmp
transaction_isolation = REPEATABLE-READ
updatable_views_with_limit = 0
user = mysql
wait_timeout = 60
server-id = 2
# TokuDB fine tuning
default_storage_engine = TokuDB
tokudb_analyze_time = 5
#tokudb_cache_size = 6G
tokudb_directio = 1
tokudb_commit_sync = 0
tokudb_fsync_log_period = 1000
tokudb_load_save_space = 1
tokudb_alter_print_error = 0
tokudb_block_size = 4MB
tokudb_bulk_fetch = 1
tokudb_disable_slow_alter = 1
tokudb_last_lock_timeout = empty
tokudb_row_format = tokudb_quicklz
#tokudb_data_dir = /var/lib/tokudb
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
#no-auto-rehash # faster start of mysql but no tab completion
[isamchk]
key_buffer = 16M
!includedir /etc/mysql/conf.d/
The following replication messages were reported by our monitoring system Xymon when tokudb_cache_size was initially set to 80% of total RAM.
2016-02-25 16:42:04 9604 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=db-kdb-slave-6-relay-bin' to avoid this problem.
2016-02-25 16:42:05 9604 [Warning] Recovery from master pos 552554502 and file mysqld-bin.001163. Previous relay log pos and relay log file had been set to 552554714, ./db-kdb-slave-6-relay-bin.002933 respectively.
2016-02-25 16:42:05 9604 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
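The first warning suggests its own fix: pin the relay log name so it no longer depends on the hostname. A minimal my.cnf sketch, using the name the server itself proposes (requires a restart):
[mysqld]
# name taken from the warning message above
relay-log = db-kdb-slave-6-relay-bin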
------More info about the Master server running InnoDB and part of PXC-----------
## Results from top
top - 10:05:12 up 14 days, 7:56, 2 users, load average: 2.16, 2.31, 2.39
Tasks: 413 total, 1 running, 412 sleeping, 0 stopped, 0 zombie
%Cpu(s): 8.9 us, 0.6 sy, 0.0 ni, 89.9 id, 0.3 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem: 65704012 total, 63553216 used, 2150796 free, 169832 buffers
KiB Swap: 975868 total, 809892 used, 165976 free. 16304268 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2485 mysql 20 0 60.146g 0.045t 2.612g S 314.9 73.3 27762:43 mysqld
## disk info
george@db-erp-3:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 8.0K 32G 1% /dev
tmpfs 6.3G 1.2M 6.3G 1% /run
/dev/sda2 274G 2.1G 258G 1% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 32G 0 32G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/nvme0n1p1 1.1T 542G 503G 52% /srv
na1:/vol/yphome 4.5T 3.7T 875G 82% /net/account
## Memory info
george@db-erp-3:~$ free -g
total used free shared buffers cached
Mem: 62 60 2 0 0 15
-/+ buffers/cache: 44 17
Swap: 0 0 0
george@db-erp-3:~$
## Database info
+--------------------+----------------------+
| Data Base Name | Data Base Size in MB |
+--------------------+----------------------+
| information_schema | 0.00976563 |
| dberp | 347143.32031250 |
| mysql | 2.11562061 |
| performance_schema | 0.00000000 |
+--------------------+----------------------+
4 rows in set (0.13 sec)
+--------------------+----------------------+------------------+
| Data Base Name | Data Base Size in MB | Free Space in MB |
+--------------------+----------------------+------------------+
| information_schema | 0.00976563 | 0.00000000 |
| dberp | 347143.32031250 | 6270.00000000 |
| mysql | 2.11562061 | 4.00199127 |
| performance_schema | 0.00000000 | 0.00000000 |
+--------------------+----------------------+------------------+
4 rows in set (0.03 sec)

Your CPU will be higher for reads because TokuDB data needs to be decompressed to be used. Also, if this slave is processing any activity from the master, then it's also doing compression for the insert/update/delete activity.
A couple of ideas:
1. Reduce the value of tokudb_block_size. While 4MB is great for compression, it means that your point queries have to decompress far more data than they need. Try using 256KB and see how CPU and performance change; see the sketch after this list. You might have to rebuild your slave to accomplish this easily (I'm now over a year away from working at TokuDB).
2. Look at your tokudb_cache_size. It defaults to 50% of RAM, but if nothing else is on this server you should raise it to somewhere between 75% and 80%. This will mean fewer reads and less decompression, since more data will be in your cache.
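A sketch of both changes (values illustrative; tokudb_block_size only affects tables created or rebuilt after it is set, and tokudb_cache_size only takes effect at startup):
[mysqld]
tokudb_block_size = 256K
tokudb_cache_size = 48G # roughly 75% of the ~62GB shown in free -g above
An existing table can be rebuilt to pick up the new block size, e.g.:
ALTER TABLE dberp.some_table ENGINE=TokuDB; -- table name illustrative; a full reload of the slave may be simpler, as noted above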

Related

MariaDB high disk IO and IO wait

We have two MariaDB servers, a master and a slave, one each in PR and DR. We are observing high disk I/O (above 90%) and iowait sometimes above 20; when the MariaDB backup starts at night, iowait goes above 30 and the system becomes unresponsive, causing a watchdog process timeout that restarts MariaDB. When running the dd command
sudo dd if=/dev/zero of=/data/test2.img bs=512 count=1000 oflag=dsync
it writes data at between 300 and 500 KB/s. We have checked the storage level as well, but found no issue there, as the same storage is in use for other databases (Oracle) with no such I/O problems. I need help identifying this issue.
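For comparison, the same dd without oflag=dsync shows the buffered write rate, and iostat can isolate which device saturates during the backup window. A sketch (device output depends on your system):
sudo dd if=/dev/zero of=/data/test2.img bs=512 count=1000
iostat -x 5 # watch %util and await per device while the backup runs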
The following is the MariaDB server.cnf:
[mysqld]
symbolic_links = 0
local_infile = 0
basedir = /usr
datadir = /data/mdb_data
pid_file = /var/lib/mysql/mysqld.pid
log_error = /var/log/mariadb/mysqld.log
bind-address = ::
port = 3306
userstat = 1
plugin-load-add = server_audit=server_audit.so
server_audit = FORCE_PLUS_PERMANENT
server_audit_logging = on
server_audit_events = CONNECT,QUERY_DCL,QUERY_DDL
server_audit_output_type = syslog
cracklib_password_check = off
log_bin_trust_function_creators = 1
lower_case_table_names = 1
character-set-server = utf8
init_connect = SET NAMES utf8
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 1
skip-name-resolve
innodb_autoinc_lock_mode = 2
max_connections = 3000
innodb_buffer_pool_size = 58G
innodb_buffer_pool_instances = 8
innodb_log_file_size = 1024m
key_buffer_size = 16M
log-error = /var/log/mariadb/mysqld.log
skip-external-locking
slow_query_log = on
long_query_time = 1
slow_query_log_file = /var/log/mariadb/slowSQLs.log
performance_schema = on
innodb_flush_method = O_DIRECT
sync_binlog = 1
max_allowed_packet = 512m
slave_max_allowed_packet = 200m
innodb_read_io_threads = 32
innodb_write_io_threads = 32
innodb_io_capacity = 600
slave_parallel_threads = 50
slave_parallel_max_queued = 2097152
slave_parallel_mode = optimistic
rpl_semi_sync_master_enabled = 0
rpl_semi_sync_slave_enabled = 1
rpl_semi_sync_master_wait_point = AFTER_SYNC
binlog_commit_wait_usec = 10000
binlog_commit_wait_count = 1
server_audit_excl_users = repl@b.c,maxscale
slave-skip-errors = 1062,1032
[mysqld_safe]
syslog
[mariadb]
server_id = 228575792
log_slave_updates = on
log-bin
log-basename = db2
report_host = db2
From top:
KiB Mem : 74046272 total, 32528492 free, 26723664 used, 14794120 buff/cache
KiB Swap: 33554428 total, 31482364 free, 2072064 used. 33507036 avail Mem

Optimizing MariaDB for WordPress

I have a server with 2 CPU cores and 1GB of RAM. The server runs only one WordPress site. My server stack is LEMP. I ran MySQLTuner two weeks after setting up the WordPress site.
Here are the results:
[!!] Maximum reached memory usage: 884.8M (89.15% of installed RAM)
[!!] Maximum possible memory usage: 1.4G (139.86% of installed RAM)
[!!] Overall possible memory usage with other process exceeded memory
[!!] Slow queries: 15% (629K/4M)
[OK] Highest usage of available connections: 9% (19/200)
[OK] Aborted connections: 0.75% (4103/548857)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
Here is my my.cnf configuration:
[mysql]
# CLIENT #
port = 3306
socket = /var/lib/mysql/mysql.sock
[mysqld]
# GENERAL #
user = mysql
default-storage-engine = InnoDB
socket = /var/lib/mysql/mysql.sock
pid-file = /var/lib/mysql/mysql.pid
# MyISAM #
key-buffer-size = 32M
myisam-recover = FORCE,BACKUP
# SAFETY #
max-allowed-packet = 16M
max-connect-errors = 1000000
# DATA STORAGE #
datadir = /var/lib/mysql/
# BINARY LOGGING #
log-bin = /var/lib/mysql/mysql-bin
expire-logs-days = 14
sync-binlog = 1
# CACHES AND LIMITS #
tmp-table-size = 32M
max-heap-table-size = 32M
query-cache-type = 0
query-cache-size = 0
max-connections = 200
thread-cache-size = 20
open-files-limit = 65535
table-definition-cache = 1024
table-open-cache = 2048
# INNODB #
innodb-flush-method = O_DIRECT
innodb-log-files-in-group = 2
innodb-log-file-size = 64M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table = 1
innodb-buffer-pool-size = 624M
# LOGGING #
log-error = /var/lib/mysql/mysql-error.log
log-queries-not-using-indexes = 1
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/mysql-slow.log
How can I optimize the configuration to fix these issues?
There is one terribly bad setting:
innodb-buffer-pool-size = 624M
in a tiny 1GB server that probably includes both WP and MySQL? Change that to 200M. And watch for swapping. If there is any swapping, lower it more. Swapping leads to a huge amount of I/O; it is better to shrink the settings instead. Here's a head start:
tmp-table-size = 32M -> 8M
max-heap-table-size = 32M -> 8M
query-cache-type = 0 -- good
query-cache-size = 0 -- good
max-connections = 200 -> 50
thread-cache-size = 20
open-files-limit = 65535
table-definition-cache = 1024 -> 200
table-open-cache = 2048 -> 300
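Regarding the advice above to watch for swapping, a quick check (a sketch): nonzero si/so columns in vmstat mean the box is swapping:
vmstat 5 # si/so columns show pages swapped in/out per second
free -m # compare swap used against total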
You have the slow log turned on? Let's see the worst query, as indicated by mysqldumpslow -s t or pt-query-digest.
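For example, to summarize the slow log by total time (the log path is taken from the config above):
mysqldumpslow -s t /var/lib/mysql/mysql-slow.log | head -20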
Here's another tip. This vital table currently has lousy indexes; these will help:
CREATE TABLE wp_postmeta (
post_id …,
meta_key …,
meta_value …,
PRIMARY KEY(post_id, meta_key),
INDEX(meta_key)
) ENGINE=InnoDB;
IS WORDPRESS LISTENING?
Here's why:
AUTO_INCREMENT was a waste
This is a much better PK
Use 191 if necessary (5.6.3 thru 5.7.6)
InnoDB for clustered PK
More details: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#speeding_up_wp_postmeta

Why does Master think it's a Slave on Reboot?

In a simple MySQL replication Master-Slave configuration I have a problem where Master tries to connect to itself as a slave on reboot.
So when I restart MySQL on Master, I see errors related to the same server trying to replicate to itself and I have to manually run mysql -e "STOP SLAVE;" every time I restart MySQL.
How can I disable slave on master for good?
Here's the relevant portion of my.cnf:
## Logging
binlog_format = mixed
log_bin = /var/log/mysql/mysql-bin.log
sync_binlog = 1
pid_file = /var/run/mysqld/mysqld.pid
log_error = /var/log/mysql/error.log
#general_log = 0
#general_log_file = /var/log/mysql/general.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 3
expire_logs_days = 14
sql_mode = STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
# sql_mode = ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
## Replication
server_id = 200
## Master Configuration
binlog-do-db = my_db_1
binlog-do-db = my_db_2
binlog-do-db = my_db_3
binlog-do-db = my_db_4
binlog-do-db = my_db_5
binlog-do-db = my_db_6
Also, when I run SELECT * FROM mysql.user; I don't see the repl user that's allegedly a "slave" on Master.
BUT, I do see that localhost has replication grants:
mysql> select Host, User, grant_priv, Repl_slave_priv, Repl_client_priv from mysql.user;
+-----------------+---------------+------------+-----------------+------------------+
| Host | User | grant_priv | Repl_slave_priv | Repl_client_priv |
+-----------------+---------------+------------+-----------------+------------------+
| localhost | root | Y | Y | Y |
| localhost | mysql.sys | N | N | N |
Here's an example of the errors I see on Reboot (before I run STOP SLAVE; on Master):
2016-09-01T15:22:23.845505Z 384 [Note] Access denied for user 'repl'@'192.168.100.200' (using password: YES)
2016-09-01T15:22:23.845761Z 1 [ERROR] Slave I/O for channel '': error connecting to master 'repl@192.168.100.200:3306' - retry-time: 30 retries: 8, Error_code: 1045
2016-09-01T15:22:50.191636Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 6843ms. The settings might not be optimal. (flushed=15210 and evicted=0, during the time.)
Apart from this, replication is running fine. Writes to Master show up flawlessly on the real, read-only, Slave.
Full my.cnf:
[mysql]
default_character_set = utf8
[mysqld]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
symbolic-links = 0
## Custom Configuration
skip_external_locking = 1
skip_name_resolve
open_files_limit = 20000
## Cache
thread_cache_size = 16
query_cache_type = 1
query_cache_size = 256M
query_cache_limit = 4M
## Per-thread Buffers
sort_buffer_size = 32M
read_buffer_size = 4M
read_rnd_buffer_size = 8M
join_buffer_size = 2M
## Temp Tables
tmp_table_size = 1024M
max_heap_table_size = 1024M
## Networking
back_log = 250
max_connections = 512
max_connect_errors = 100000
max_allowed_packet = 128M
interactive_timeout = 1800
wait_timeout = 1800
character_set_client_handshake = FALSE
character_set_server = utf8mb4
collation_server = utf8mb4_unicode_ci
### Storage Engines
default_storage_engine = InnoDB
innodb = FORCE
## MyISAM
key_buffer_size = 128M
myisam_sort_buffer_size = 16M
## InnoDB
innodb_buffer_pool_size = 46G
innodb_buffer_pool_instances = 64
innodb_log_files_in_group = 2
innodb_log_buffer_size = 32M
innodb_log_file_size = 64M
innodb_file_per_table = 1
innodb_thread_concurrency = 0
innodb_flush_log_at_trx_commit = 1
## Logging
binlog_format = mixed
log_bin = /var/log/mysql/mysql-bin.log
sync_binlog = 1
pid_file = /var/run/mysqld/mysqld.pid
log_error = /var/log/mysql/error.log
#general_log = 0
#general_log_file = /var/log/mysql/general.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 3
expire_logs_days = 14
sql_mode = STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
# sql_mode = ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
## Replication
# Master Server ID:
server_id = 200
# Slave Server ID:
# server_id = 300
## Master Configuration
# Comment out on Slave
binlog-do-db = db_1
binlog-do-db = db_2
binlog-do-db = db_3
binlog-do-db = db_4
binlog-do-db = db_5
binlog-do-db = db_6
## Slave Configuration
# Uncomment the following on Slave
# relay-log = /var/log/mysql/mysql-relay-bin.log
# binlog-do-db = db_1
# binlog-do-db = db_2
# binlog-do-db = db_3
# binlog-do-db = db_4
# binlog-do-db = db_5
# binlog-do-db = db_6
# log_slave_updates = 1
# read_only = 1
# slave_skip_errors = 1062
[mysqld_safe]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
symbolic-links = 0
pid_file = /var/run/mysqld/mysqld.pid
log_error = /var/log/mysql/error.log
Also:
mysql> SHOW GLOBAL VARIABLES LIKE '%master_info_repository%';
+------------------------+-------+
| Variable_name | Value |
+------------------------+-------+
| master_info_repository | FILE |
+------------------------+-------+
For managing this kind of setup I recommend using MHA manager. For this specific situation you may want to clean up the master_info_repository (by default the master.info file). You can also use --skip-slave-start on the master host to avoid this situation after a failover.
I think that you must have set the master information on the master server (maybe this was a slave at some point or refreshed from one). Run
SHOW SLAVE STATUS
on the master. If the entries are not all empty then this is the cause and on reboot (without skip-slave-start being set) MySQL will try to start the slave.
To fix this, on the master, stop the slave if you have not already and run
RESET SLAVE ALL
to clear the master settings, assuming you are using 5.5.16 or higher; otherwise leave off the ALL.
This can be confirmed with another SHOW SLAVE STATUS, which should show all the entries as empty.
Now when you reboot, the slave will not try to start.
If you prefer for some reason to leave the master settings in place, add skip-slave-start to your my.cnf under [mysqld] and the settings will then be ignored on start-up.
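Putting the steps together (MySQL 5.5.16 or higher; on older versions drop the ALL):
STOP SLAVE;
RESET SLAVE ALL;
SHOW SLAVE STATUS; -- should now return an empty result
Or, to keep the settings and only skip auto-start, in my.cnf:
[mysqld]
skip-slave-start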

MySQL memory usage keeps growing

I have a problem with my production MySQL server. Memory usage keeps growing and I don't know why. The trouble started when we changed servers.
My mysql version: 5.5.44-0+deb8u1-log - (Debian).
My my.cnf file:
[mysqld_safe]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
skip-external-locking
key_buffer_size = 5M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
tmp_table_size = 384M
max_heap_table_size = 384M
table_open_cache = 7000
open_files_limit = 14000
interactive_timeout=3600
wait_timeout=3600
myisam-recover_options = BACKUP
max_connections = 150
query_cache_limit = 8M
query_cache_size = 127M
slow_query_log_file = /var/log/mysql/mysql-slow.log
slow_query_log = 1
long_query_time = 2
expire_logs_days = 10
max_binlog_size = 100M
innodb_file_per_table
innodb_buffer_pool_instances = 9
innodb_buffer_pool_size = 10000M
innodb_log_file_size = 2000M
Is something wrong in my configuration?
EDIT
For production we have a dedicated server:
4 x Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
20 GB of RAM
40 GB on HDD; the database is now 15 GB
The database currently holds 650 tables, all on the InnoDB engine.
Screenshot of htop: [image: htop of the server's processes]
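A rough worst-case memory tally for this config (a sketch, using MySQL 5.5 defaults for the per-thread buffers that are not set here):
10000M innodb_buffer_pool_size
+ 127M query_cache_size + 5M key_buffer_size
+ 150 max_connections x ~2.7M per-thread buffers (sort 2M + read 128K + read_rnd 256K + join 128K + stack 192K) ≈ 400M
+ up to 384M per in-memory temp table (tmp_table_size / max_heap_table_size), and one query can create several
That is roughly 10.5 GB of global buffers before temp tables; a burst of large in-memory temp tables can push the process well beyond 20 GB of RAM, which would look like memory that keeps growing.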

(2006) MySQL server has gone away

I've read so many threads as well as the MySQL documentation about this issue and nothing suggested seems to work.
Here's my.cnf
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
default-storage-engine=INNODB
character-set-server=utf8
collation-server=utf8_bin
interactive_timeout = 2880000
wait_timeout = 2880000
net_write_timeout = 6000
net_read_timeout = 6000
delayed_insert_timeout = 6000
key_buffer = 256M
key-buffer-size = 32M
max_allowed_packet = 600M
thread_stack = 256K
thread_cache_size = 8
max-connections = 500
thread-cache-size = 50
open-files-limit = 65535
table-definition-cache = 4096
table-open-cache = 10240
query-cache-type = 0
query_cache_limit = 2M
query_cache_size = 32M
myisam-recover = BACKUP
innodb_buffer_pool_size = 384M
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 10M
innodb_log_buffer_size = 64M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 180
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
[mysqldump]
quick
quote-names
max_allowed_packet = 64M
[isamchk]
key_buffer = 32M
In addition I ran queries in the MySQL CLI to make sure my settings were sticking, and they appear to be:
mysql> select @@global.wait_timeout, @@session.wait_timeout;
+-----------------------+------------------------+
| @@global.wait_timeout | @@session.wait_timeout |
+-----------------------+------------------------+
| 2880000 | 2880000 |
+-----------------------+------------------------+
mysql> select @@global.max_allowed_packet, @@session.max_allowed_packet;
+-----------------------------+------------------------------+
| @@global.max_allowed_packet | @@session.max_allowed_packet |
+-----------------------------+------------------------------+
| 629145600 | 629145600 |
+-----------------------------+------------------------------+
Server environment: Ubuntu Server 14.04LTS
MySQL version: 5.6
This is a dedicated MySQL server, it has no other apps on it.
I am not running out of memory:
MemTotal: 32948824 kB
MemFree: 31494136 kB
Cached: 281624 kB
SwapCached: 0 kB
SwapTotal: 33550332 kB
SwapFree: 33550332 kB
I was finally able to fix this issue by bypassing the MySQL Workbench Migration Tool and using mysqldump to generate the database .sql file used to restore to the server. Here's what I did:
Deleted all databases from the MySQL server that was timing out
Exported all the data from our backup database via mysqldump
Restored that dump file to the new MySQL server (the one that was timing out)
Ran mysql_upgrade; all tables came back OK
Executed the query: everything now works
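In shell terms, the cycle looked roughly like this (a sketch; file names are illustrative):
mysqldump --all-databases --routines --events > backup.sql # on the backup host
mysql < backup.sql # restore on the server that was timing out
mysql_upgrade # then rerun the failing query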
Here's the strange thing: mysqlcheck and mysql_upgrade, when run on the database that was somehow timing out, were returning status OK and not finding any errors. I do not know why, but I see this is a pretty strange and annoying problem, considering that's exactly what mysqlcheck is for.
Anyway, if you're having this problem, try restoring the database from an older backup (if you have one) and see if that works.