ALTER TABLE in MySQL uses only one core

Is there a way to make an ALTER TABLE query use all CPU cores?
All other queries use 100% of the available cores; only ALTER TABLE uses a single core.
Here are some my.cnf settings:
join_buffer_size = 32M
read_buffer_size = 32M
read_rnd_buffer_size = 32M
tmp_table_size = 1G
max_heap_table_size = 1G
#net_buffer_length = 1M
sort_buffer_size = 32M
key_buffer_size = 32M
innodb_buffer_pool_size = 5G
innodb_thread_concurrency = 0
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
We're using MySQL Server 5.6.33 on Ubuntu Server 14.04.

No, you can't use more than one core for an ALTER TABLE, even in MySQL 8.0.
5.7+ has significant improvements to the time taken by ALTER operations that can be done online.
For background ALTER TABLE, the tools gh-ost and pt-online-schema-change are usable with 5.6.
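As a rough illustration of the pt-online-schema-change route (database, table, and column names here are invented; check the flags against your installed version of the tool):

```shell
# Dry run first: validates the plan without copying any rows.
pt-online-schema-change --alter "ADD COLUMN created_at DATETIME" \
  D=mydb,t=mytable --dry-run

# Then run it for real; the tool copies rows in chunks in the background
# while the original table stays available.
pt-online-schema-change --alter "ADD COLUMN created_at DATETIME" \
  D=mydb,t=mytable --execute
```

This doesn't make the ALTER use more cores, but it moves the work into the background so the table isn't blocked.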

Related

MySQL server cannot be started with specific configurations

I have MySQL configured on a separate DB server, and initially there was high CPU usage. I am pasting the /etc/mysql/my.cnf details here:
innodb_buffer_pool_size=4G
innodb_buffer_pool_size = 9G
innodb_buffer_pool_instances=8
There is no specific configuration for innodb_buffer_pool_chunk_size, so it defaults to 128M.
To resolve the high CPU usage issue, I updated the /etc/mysql/my.cnf file by adding/updating several configurations, and now the server no longer starts:
query_cache_limit = 16M
query_cache_type = ON
query_cache_size = 128M
innodb_buffer_pool_size = 9G
max_allowed_packet = 16M
thread_stack = 192K
innodb_log_file_size = 1768M
thread_cache_size = 8
max_heap_table_size = 536870912
innodb_buffer_pool_instances=8
innodb_io_capacity = 1000
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 0
myisam-recover = BACKUP
max_connections = 1000
The configurations were added by referring to the MySQL documentation, and they also satisfy the following equation:
innodb_buffer_pool_size = n * (innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances)
(MySQL documentation)
Any idea what I am doing wrong here?
How much RAM?
innodb_buffer_pool_size -- about 70% of RAM if you have more than 4GB of RAM
max_heap_table_size -- 1% of RAM
query_cache_size -- 0 or 50M
innodb_buffer_pool_instances -- 1 per GB of buffer_pool
max_connections -- 100 until you have a clear need for more
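One detail worth checking behind the equation in the question: MySQL rounds innodb_buffer_pool_size up to a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances. A small sketch of that rounding, using the chunk size and instance count from the question (the 9.5G case is an invented example):

```python
# MySQL rounds innodb_buffer_pool_size up to a multiple of
# innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances.
MiB = 1024 ** 2
GiB = 1024 ** 3

def effective_pool_size(requested, chunk=128 * MiB, instances=8):
    unit = chunk * instances             # 1 GiB with the question's settings
    return -(-requested // unit) * unit  # ceiling division, then scale back up

print(effective_pool_size(9 * GiB) // GiB)         # 9  (already a multiple)
print(effective_pool_size(int(9.5 * GiB)) // GiB)  # 10 (rounded up)
```

So 9G with these settings is already a clean multiple; the rounding itself would not stop the server from starting.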

MAMP Pro MySQL issue with changing database engine to InnoDB and migrating databases

I have MAMP Pro running on El Capitan. It has been fine up until now, but I've run into a problem. I have a mixture of databases, some using the MyISAM engine and others using InnoDB. I don't really know how that works; I guess if some are InnoDB, the engine is still MyISAM by default. The issue is with the databases I have for Atlassian's Confluence and JIRA. In Confluence, all is good, but it says:
You should increase innodb_log_file_size to 256M
I tried playing around with the my.cnf, but ran into issues. I restored things, and these are the relevant sections from the config.
[mysqld]
#port = 9999
socket = /Applications/MAMP/tmp/mysql/mysql.sock
key_buffer_size = 64M
max_allowed_packet = 512M
# table_cache only works for MySQL 5.5.x
#table_cache = 64
# If you are running MySQL 5.6.x, use table_open_cache.
#table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 32M
#Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /Applications/MAMP/db/mysql/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = /Applications/MAMP/db/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 128M
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 512M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50
When I uncommented the InnoDB section, the server crashed and the database got corrupted.
Just wondering how I can turn on InnoDB for MAMP, if that is recommended, and update my existing databases at the same time, both the MyISAM ones and the InnoDB ones.
While I'm at it, I might want to upgrade MAMP to the newer MySQL version, maybe later.
How much RAM do you have?
Keep max_allowed_packet under 2% of RAM.
Since you are using both MyISAM and InnoDB, set innodb_buffer_pool_size to about 1/3 of RAM; less if you have a tiny system.
Do not change innodb_log_file_size without further instructions. That is, don't set it in my.cnf if it is not already set.
MyISAM and InnoDB can coexist.
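For context on why uncommenting innodb_log_file_size crashed the server: on MySQL 5.5 and early 5.6, InnoDB refuses to start (or misbehaves) when the on-disk ib_logfile* files don't match the configured size. A sketch of the classic resize procedure (paths are MAMP-style guesses; back everything up first):

```shell
# 1. Cleanly shut down MySQL so the redo logs are fully checkpointed.
mysqladmin -u root -p shutdown

# 2. Move the old redo logs aside (don't delete them until the server
#    is confirmed healthy at the new size).
mv /Applications/MAMP/db/mysql/ib_logfile* /tmp/

# 3. Set the new innodb_log_file_size in my.cnf, then start MySQL;
#    it recreates the redo logs at the new size.
```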

Very slow MySQL response even with a high number of available connections

I'm running this server for data mining purposes. It runs several compute-intensive data mining applications in parallel, with simultaneous access to the MySQL server.
Here is the configuration.
Server config: 8 core Intel Xeon, 16gb RAM, 500 GB SAS drive
MySQL my.cnf
[client]
#password = [your_password]
port = 3306
socket = /var/lib/mysql/mysql.sock
[mysqld]
# generic configuration options
port = 3306
socket = /var/lib/mysql/mysql.sock
datadir = /database/mysql
log_bin = OFF
expire-logs-days = 3
pid-file = /database/mysql/localhost.localdomain.pid
back_log = 50
max_connections = 3000
max_connect_errors = 100
table_open_cache = 2048
max_allowed_packet = 16M
binlog_cache_size = 1M
max_heap_table_size = 64M
read_buffer_size = 128M
read_rnd_buffer_size = 32M
sort_buffer_size = 32M
join_buffer_size = 8M
thread_cache_size = 8
thread_concurrency = 4
query_cache_size = 64M
query_cache_limit = 2M
ft_min_word_len = 4
default-storage-engine = innodb
thread_stack = 192K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 64M
log-bin = mysql-bin
binlog_format = mixed
server-id = 1
key_buffer_size = 32M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size = 10G
myisam_repair_threads = 1
myisam_recover
innodb_additional_mem_pool_size = 32M
innodb_buffer_pool_size = 4G
innodb_data_file_path = ibdata1:10M:autoextend
#innodb_data_home_dir = <directory>
innodb_write_io_threads = 8
innodb_read_io_threads = 8
#innodb_force_recovery = 6
innodb_thread_concurrency = 0
innodb_flush_log_at_trx_commit= 2
#innodb_fast_shutdown
innodb_log_buffer_size = 8M
innodb_log_file_size = 1G
innodb_log_files_in_group = 3
#innodb_log_group_home_dir
innodb_max_dirty_pages_pct = 90
#innodb_flush_method = O_DSYNC
innodb_lock_wait_timeout = 120
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
auto-rehash
[myisamchk]
key_buffer_size = 512M
sort_buffer_size = 512M
read_buffer = 8M
write_buffer = 8M
[mysqlhotcopy]
interactive-timeout
[mysqld_safe]
open-files-limit = 8192
There are only 2 users who access this server, including me. During peak hours I get this:
mysql > show processlist
...
120 rows in set
This shows that around 120 connections are established to the MySQL server during peak computation hours. MySQL consumes around 9.5GB of memory and uses 98-99% CPU, which I can still live with. But during this time a front-end site built with PHP/JavaScript takes around 1-2 minutes to load, because MySQL responds very slowly during these hours; normally it takes somewhere around 890ms to 4 seconds.
I want to know how to further optimize the MySQL server configuration. As can be seen from the posted my.cnf, the buffer pool is currently at 4GB and the maximum number of connections is set at 3000. All the tables are InnoDB with proper indexes, but in my case transaction safety is not an issue; the main and only issue is performance. The data mining applications use the MySQL C API connector, and each has around 24 parallel threads running, which equals 24 simultaneous connections to MySQL.
How can I further optimize the MySQL server configuration so that I get a reasonable response time of around 10-15 seconds for front-end access? Please let me know if there is any way to optimize this further.
You really should dedicate another server just for data mining and set up replication between your MySQL servers. The data mining application should use transactions to group multiple small queries into blocks. This way your site will not wait for other queries to execute, and synchronization happens in the background without visible lag.
Another option is to cache as much as possible and hope that users will not request data that is not in the cache during the heavy hours.
But I would prefer to do both of these things, so you'll have a fully reliable service.
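The transaction-batching idea can be sketched like this; sqlite3 stands in for the MySQL C API connector purely to show the pattern (the table and data are invented):

```python
import sqlite3

# Wrap many small writes in one transaction so they commit as a block
# instead of paying the per-statement commit cost.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (run_id INTEGER, score REAL)")

rows = [(i, 0.25 * i) for i in range(1000)]
with conn:  # one transaction wraps the whole batch of small writes
    conn.executemany("INSERT INTO results VALUES (?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM results").fetchone()[0])  # 1000
```

With innodb_flush_log_at_trx_commit = 2 (as in this my.cnf) the per-commit cost is already reduced, but batching still cuts lock churn and round trips.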

MySQL running 5x slower after optimization?

I have a 2.0GHz Xeon server (12 cores) with 16GB memory, running Apache and MySQL for a website with around 50,000 records in InnoDB (Percona). My queries used to return in about 0.17 to 0.25 seconds; then I ran the Percona Tools MySQL optimizer, uploaded the new my.cnf file, and suddenly the same queries are taking 1.20 to 1.30 seconds, so about 5x longer.
What did I do wrong? Here are my old and new my.cnf files:
NEW:
[mysqld]
default_storage_engine = InnoDB
key_buffer_size = 32M
myisam_recover = FORCE,BACKUP
max_allowed_packet = 16M
max_connect_errors = 1000000
log_bin = /var/lib/mysql/mysql-bin
expire_logs_days = 14
sync_binlog = 1
tmp_table_size = 32M
max_heap_table_size = 32M
query_cache_type = 0
query_cache_size = 0
max_connections = 200
thread_cache_size = 50
open_files_limit = 65535
table_definition_cache = 1024
table_open_cache = 2048
innodb_flush_method = O_DIRECT
innodb_log_files_in_group = 2
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 1
innodb_file_per_table = 1
innodb_buffer_pool_size = 12G
log_error = /var/lib/mysql/mysql-error.log
log_queries_not_using_indexes = 1
slow_query_log = 1
slow_query_log_file = /var/lib/mysql/mysql-slow.log
OLD:
[mysqld]
innodb_buffer_pool_size = 12000M
innodb_log_file_size = 256M
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 16M
innodb_additional_mem_pool_size = 20M
innodb_thread_concurrency = 20
read_rnd_buffer_size=50M
query_cache_size=128M
query_cache_type=1
tmp_table_size=512M
wait_timeout=90
query_cache_limit=64M
key_buffer_size=128M
max_heap_table_size=512M
max_allowed_packet=32M
log_slow_queries
log-queries-not-using-indexes
long_query_time = 1
Are you swapping at all after running for a while?
You might try turning down your innodb_buffer_pool_size since you say the server is also running Apache. At the moment it looks like MySQL has the potential to use up all the server's memory for itself and leave nothing for the OS and Apache.
Try setting innodb_buffer_pool_size to 8G and then set innodb_log_file_size to 2G.
You can probably up your innodb_thread_concurrency as well, but since it isn't a dedicated MySQL server it may be fine at the default of 8. It depends on what CPU you have but the docs say:
The correct value for this variable is dependent on environment and
workload. You will need to try a range of different values to
determine what value works for your applications. A recommended value
is 2 times the number of CPUs plus the number of disks.
So play around with that and see what works best.
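Applying that rule of thumb to the machine in the question (12 cores; assuming the RAID array presents as a single disk, which is a guess):

```python
# Docs' rule of thumb: 2 * CPUs + number of disks.
cpus = 12   # the 12-core Xeon in the question
disks = 1   # assuming the array counts as one volume
suggested_concurrency = 2 * cpus + disks
print(suggested_concurrency)  # 25
```

That would suggest trying values around 25, though since Apache shares this box, a lower setting may well win.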
Also, is your database larger than the amount of RAM you have or could your entire DB fit in memory?
Just keep in mind that since you are running Apache on the same server, Apache is going to want to create a bunch of its own threads and consume as much memory as required for all the server processes and if you're running something like PHP that's going to take up memory as well.
You're going to have to find a good balance where both Apache and MySQL can both perform at maximum capacity on the same system but where neither one uses so much memory that the other has to swap.
Additional ways you can troubleshoot or profile performance would be to check your slow query log and run explains on the slow queries. In addition, you can install the Percona toolkit and run pt-query-digest to analyze your performance. Read the docs here.

InnoDB Optimization Tips Needed

I recently got a new dedicated MySQL machine. It's running fine, but sometimes it gets slowed down a lot by queries stuck in the "Copying to tmp table" state. It seems to happen randomly.
The machine has 12GB of DDR3 RAM and runs a RAID10 setup with 4x 15k RPM SAS drives.
This machine hosts 5 databases, each between 1 and 8GB in size. Reads/Writes: 66%/34%.
Below is my my.cnf file. If anyone has performance optimization tips, I would love to hear them.
[mysqld]
skip-name-resolve
datadir=/var/lib/mysql
#socket=/tmp/mysql.sock
log-error=/var/log/mysqld.log
old_passwords=0
max_connections = 1500
table_cache = 1024
max_allowed_packet = 16M
sort_buffer_size = 2M
thread_cache = 8
thread_concurrency = 32
query_cache_size = 0M
query_cache_type = 0
default-storage-engine = innodb
transaction_isolation = REPEATABLE-READ
tmp_table_size = 256M
long_query_time = 3
log_slow_queries = 1
innodb_additional_mem_pool_size=48M
innodb_flush_log_at_trx_commit=2
innodb_log_buffer_size=32M
innodb_buffer_pool_size=6G
innodb_autoinc_lock_mode=2
innodb_io_capacity=500
innodb_read_io_threads=16
innodb_write_io_threads=8
innodb_buffer_pool_size = 5000M
innodb_lock_wait_timeout = 300
innodb_max_dirty_pages_pct = 90
innodb_thread_concurrency =32
It seems I have found the solution myself: max_heap_table_size wasn't set, and this limited tmp_table_size. I have now set both values to 512M.
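For reference, the fix boils down to raising both caps together, since the effective in-memory temporary-table limit is the smaller of the two values:

```ini
# In-memory temp tables are capped by min(tmp_table_size, max_heap_table_size),
# so both must be raised for the change to take effect.
tmp_table_size      = 512M
max_heap_table_size = 512M
```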