MySQL my.cnf config setup for MyISAM

I currently have a Cloud based server with the following config.
CentOS 7 64-Bit
CPU:8 vCore
RAM:16 GB
MariaDB/MySQL 5.5.5
Unfortunately, I've inherited a MyISAM database and tables that I have no authority to convert to InnoDB, even though the application performs many writes from many connections. The data is WordPress posts, with the typical large text and photos.
I'm experimenting with my.cnf config changes and was wondering whether the config I've developed here makes use of the resources in the most efficient way. Is there anything glaring I could increase or decrease to squeeze out more performance?
key_buffer_size=4G
thread_cache_size = 128
bulk_insert_buffer_size=256M
join_buffer_size=64M
max_allowed_packet=128M
query_cache_limit=128M
read_buffer_size=16M
read_rnd_buffer_size=16M
sort_buffer_size=16M
table_cache=128
tmp_table_size=128M

This will depend entirely on the type of data you are storing, the structure and size of your tables and the type of usage your database has. Not to mention the amount of available RAM and the type of disks your server has.
The best recommendation, if you have shell access to the server (which I assume you must, otherwise you couldn't change my.cnf), is to download the mysqltuner script from major.io.
Run this script as a user with privileges to access your database, and preferably with root privileges on MySQL too (ideally run it under sudo or as root). It will analyse your database access since MySQL's last restart and then give you recommendations for changing the options in my.cnf.
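A typical way to fetch and run it looks like this (the GitHub URL is where the major.io script is maintained; the --user/--pass flags are the script's own options, and the credentials are placeholders):
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
perl mysqltuner.pl --user root --pass 'your_root_password'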
It isn't perfect, but it'll get you much further, and more quickly, than anyone on here trying to guess what values would be appropriate for your use case.
And, while not trying to pre-empt the results, I wouldn't be surprised if mysqltuner recommends that you drastically increase the size of your join buffer, table_cache and query_cache_limit.


my.cnf >> fully change it, having real production data on our DB

Good morning.
I have a MariaDB environment with a very basic my.cnf, the one that comes by default with the product installation:
We've been working with this configuration from the beginning, so we have a year's worth of real production data in our DB.
If I substitute this my.cnf with the one here:
https://gist.github.com/fevangelou/fb72f36bbe333e059b66
which is close to our instance specs (VM, quad-core 2.20 GHz, 8 GB RAM), adapting it accordingly, is there a risk of corrupting, breaking, losing, or dirtying the existing data in the DB in any sense? I mean block-size changes, buffer changes, etc. Is it dangerous?
In my case, we have this MariaDB on a Windows Server 2016 instance, so I would have to adapt this optimized config file. Then I should ask you too: do you have a basic/recommended optimized my.cnf file for Windows, like the one above?
Thanks and regards.

MySQL max_connections

I have a setup with Linux, Debian Jessie, and MySQL 5.7.13 installed.
I have set the following settings in my.cnf:
default_storage_engine = innodb
innodb_buffer_pool_size = 44G
When I start MySQL I manually set max_connections with SET GLOBAL max_connections = 1000;
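(Side note: SET GLOBAL does not survive a server restart; the persistent equivalent would be an entry in my.cnf like the one below.)
[mysqld]
max_connections = 1000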
Then I trigger my load test, which sends a lot of traffic, mostly consisting of slow/bad queries, to the DB server.
The result I expected was that I would reach close to 1000 connections, but somehow MySQL limits it to 462 connections, and I cannot find the setting responsible for this limit. We are not even close to maxing out the CPU or memory.
If you have any idea or could point me in a direction where you think the error might be it would be really helpful.
What load test did you use? Are you sure it can actually drive a thousand connections?
You may be maxing out your server resources in the disk I/O area, especially if you're talking about a lot of slow/bad queries. Did you check disk utilization on your server?
Even if your InnoDB buffer pool is large, the server still needs to read the data into the cache first, and if your entire DB is large that will not help you.
I can recommend performing the test one more time and tracking your disk performance during the load test using the iostat or iotop utilities.
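For example (iostat ships with the sysstat package; -d reports per-device throughput and -x adds extended stats such as utilization):
iostat -dx 1     # extended per-device I/O stats, refreshed every second
iotop -o         # show only the processes currently doing I/O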
Look here for more examples of server performance troubleshooting.
I found the issue: it was due to a limitation of the Apache server. There is a "hidden" setting inside /etc/apache2/mods-enabled/mpm_prefork.conf which overrides the setting inside /etc/apache2/apache2.conf.
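For anyone hitting the same wall, that file looks roughly like the sketch below; the numbers are illustrative. MaxRequestWorkers caps the simultaneous Apache workers, and hence the connections Apache can hold open to MySQL:
<IfModule mpm_prefork_module>
    StartServers            5
    MinSpareServers         5
    MaxSpareServers        10
    MaxRequestWorkers     450
    MaxConnectionsPerChild  0
</IfModule>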
Thank you!

Are docker-hosted databases somehow exempt from backup best practices?

As far as I was aware, for MS SQL, PostgreSQL, and even MySQL databases (so, I assumed, in general for RDBMS engines), you cannot simply back up the file system they are hosted on, but need to do an SQL-level backup to have any hope of internal consistency and therefore ability to actually restore.
But then answers like this and indeed the official docs referenced seem to suggest that one can just tar away on database data:
docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
These two ideas seem at odds with one another. Is there something special about how Docker works that makes it unnecessary to use SQL-level backups? If not, what am I missing in my understanding? (Why is something used as the official example when you can't use it to back up a production database? That can't be right...)
Under certain circumstances, it should be safe to use the image of a database on a disk:
The database server is not running.
All persistent data is on the disk system(s) being backed up (logs, tablespaces, temporary storage).
All components are restored together.
You are restoring the image to the same server on the same path.
The last condition is important, because some aspects of the database configuration may be stored in operating system files.
Whenever the server is running, you need to do the backup through the database itself: the server is responsible for the internal consistency of the data, and a disk image taken underneath it may not be complete or recoverable. If the server is not running, then the state of the database in persistent storage should be consistent.
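To illustrate the alternative when the server must stay running, a logical (SQL-level) backup can be taken through the container instead. A minimal sketch, assuming the container is named dbdata, runs MySQL, and has the root password in the MYSQL_ROOT_PASSWORD environment variable:
docker exec dbdata sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > backup.sql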

mysql 5.6 Linux vs windows performance

The command below takes 2-3 seconds on a Linux MySQL 5.6 server running PHP 5.4:
exec("mysql --host=$db_host --user=$db_user --password=$db_password $db_name < $sql_file");
On Windows, with a similar configuration, it takes 10-15 seconds. The Windows machine has a lot more RAM (16 GB) and a similar hard drive. I installed MySQL 5.6 and made no configuration changes. This is on Windows Server 2012.
What are configurations I can change to fix this?
The SQL file creates about 40 InnoDB tables with very minimal inserts.
EDIT: Here is the file I am running:
https://www.dropbox.com/s/uguzgbbnyghok0o/database_14.4.sql?dl=0
UPDATE: On Windows 8 and 7 it was 3 seconds, but on Windows Server 2012 it is 15+ seconds. I disabled System Center 2012 and that made no difference.
UPDATE 2:
I also tried killing almost every service except MySQL and IIS, and it still performed slowly. Is there something in Windows Server 2012 that causes this to be slow?
UPDATE 3:
I tried disabling write-cache buffer flushing and performance is now great.
I didn't have to do this on the other machines I tested with. Does this indicate a bottleneck with how the disk is set up?
https://social.technet.microsoft.com/Forums/windows/en-US/282ea0fc-fba7-4474-83d5-f9bbce0e52ea/major-disk-speed-improvement-disable-write-cache-buffer-flushing?forum=w7itproperf
That is why we call it the LAMP stack, and no doubt why MySQL is so much more popular on Linux than on Windows. But that has more to do with stability and safety; performance-wise the difference should be minimal. While a Microsoft professional could best tune Windows Server explicitly for MySQL by enabling and disabling services, we would rather see the configuration of your my.ini. So, the contributing factors to consider with MySQL on Windows:
The services and policies in Windows are sometimes a big impediment to performance because of all sorts of restrictions and protections.
We should also take into account the Apache (httpd.conf) and PHP (php.ini) configuration, as MySQL is so tightly coupled with them.
Antivirus: better to disable this when benchmarking performance.
You must consider these parameters in my.ini, as you have 40 InnoDB tables here:
innodb_buffer_pool_size, innodb_flush_log_at_trx_commit, query_cache_size, innodb_flush_method, innodb_log_file_size, innodb_file_per_table
For example: if the file size of ib_logfile0 is 524288000 bytes, then 524288000 / 1048576 = 500, hence innodb_log_file_size should be 500M.
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
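To make that concrete, here is a sketch of the relevant my.ini section; the values are illustrative, not recommendations, and must be adapted to your RAM and workload. (Note that O_DIRECT is a Unix value; on Windows the flush method is normally left at its default.)
[mysqld]
innodb_buffer_pool_size = 8G          # often ~50-70% of RAM on a dedicated DB box
innodb_log_file_size = 500M           # match the existing ib_logfile0 size, or remove the old log files first
innodb_flush_log_at_trx_commit = 2    # flush the log to the OS cache per commit instead of fsyncing
innodb_file_per_table = 1
query_cache_size = 64M                # illustrative; large values can hurt write-heavy loads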
https://dev.mysql.com/doc/refman/5.1/en/innodb-tuning.html
When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled, because autocommit requires a log flush to disk for every insert:
SET autocommit=0;
Most important is innodb_flush_log_at_trx_commit, as this case is about importing a database. Setting it to '2' from the default '1' can be a big performance booster, especially during data import, as the log buffer will only be flushed to the OS file cache on each transaction commit.
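For example, one way to apply this when running the import from within the mysql client (SOURCE executes the dump file; the path is a placeholder):
SET autocommit=0;
SOURCE /path/to/database_14.4.sql;
COMMIT;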
For reference :
https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html
https://dba.stackexchange.com/a/72766/60318
http://kvz.io/blog/2009/03/31/improve-mysql-insert-performance/
Lastly, based on this
mysql --host=$db_host --user=$db_user --password=$db_password $db_name < $sql_file
If the mysqldump (.sql) file does not reside on the same host where you are importing, performance will be slow. Consider copying the .sql file onto the server where you need to import the database, then try importing without the --host option.
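A sketch of that, assuming you have SSH access to the database host (user names and paths are placeholders):
# copy the dump onto the DB server, then import locally there
scp database_14.4.sql user@db_host:/tmp/
# on the DB server itself (no --host, so the client uses the local socket):
mysql --user=$db_user --password=$db_password $db_name < /tmp/database_14.4.sql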
Windows is slower at creating files, period. 40 InnoDB tables involve 40 or 80 file creations. Since they are small InnoDB tables, you may as well set innodb_file_per_table=OFF before doing the CREATEs, thereby needing only 40 file creations, as shown below.
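A minimal way to do that for this import (assuming the same dump file; OFF means new tables go into the shared system tablespace, so only the .frm file is created per table):
SET GLOBAL innodb_file_per_table = OFF;
SOURCE /path/to/database_14.4.sql;
SET GLOBAL innodb_file_per_table = ON;  -- restore the usual default afterwards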
Good practice in MySQL is to create tables once, and not be creating/dropping tables frequently. If your application is designed to do lots of CREATEs, we should focus on that. (Note that, even on Linux, table create time is non-trivial.)
If these are temporary tables... 5.7 will have significant changes that will improve the performance (on either OS) in this area. 5.7 is on the cusp of being GA.
(RAM size is irrelevant in this situation.)

How to configure Amazon EC2 t1.micro instance for Buddypress

We have installed WordPress on an EC2 t1.micro instance and installed BuddyPress on top of that. Everything works fine for a single user, but when two users access the site at the same time it goes down because of a RAM issue: httpd (Apache) takes the maximum memory. How do we overcome this? Is there some configuration we need to make in httpd.conf, or some network/traffic-blocking tool we need to install?
Micro instances are notoriously too small to handle WordPress and MySQL together. They're going to thrash (overuse the disk swap feature) or just run out of RAM and crash.
You are going to have to do a lot of tuning to get this right on a micro instance, and it is never going to be rock-stable. It's a pain in the neck. If your time is worth more than a dollar an hour compared to hosting fees, you should upgrade to an instance with more RAM, or sign up for one of the many US$6 per month shared hosting accounts available in the world.
Where to start tuning? Try setting a value in the Apache httpd.conf.
Set MaxRequestWorkers to a low number; you might try 4. When this number is low, you also won't have many simultaneous clients connecting from Apache/PHP to your MySQL server.
Requests from web-browser clients will be enqueued when all your workers are busy. That works correctly, but may make your web site seem slow to your users. See the backlog parameter in the Linux documentation for listen(2) for an explanation of that queuing.
That will save both on Apache RAM and MySQL resources.
http://httpd.apache.org/docs/current/mod/mpm_common.html#maxrequestworkers
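An illustrative prefork fragment along those lines (the numbers are starting points for a micro instance, not recommendations):
<IfModule mpm_prefork_module>
    StartServers            2
    MinSpareServers         2
    MaxSpareServers         4
    MaxRequestWorkers       4    # very low: at most 4 simultaneous Apache/PHP workers
    MaxConnectionsPerChild  100  # recycle children to limit memory creep
</IfModule>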
Then you should probably look at the my.cnf file for MySQL and see what you can play around with.
Edit: MySQL, Apache, and PHP are all drawing on the same pool of RAM -- 512 MB if I remember correctly. Reducing the number of Apache workers should help control RAM usage by Apache (and PHP, which is probably running in the Apache server's address space). Do that.
Then, go find the memory_limit setting in php.ini. It's set to 128M in many standard installations. Try reducing it to 64M or 40M; that will make each PHP instance use less RAM. But if your WordPress installation is complex (lots of plugins, a fancy theme), it may make some pages fail to load, and WordPress will announce the problem as memory running out. http://php.net/memory-limit
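In php.ini that is a one-line change (64M is the experiment suggested above; raise it again if WordPress starts reporting exhausted memory):
; php.ini
memory_limit = 64M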
Then, jump into MySQL's my.ini. The standard MySQL install comes with a file called my-small.ini, which contains configuration parameters for a small MySQL instance. Yours can be small: WordPress's tables contain hundreds or a few thousand rows, not hundreds of thousands. Save your old my.ini, copy the contents of my-small.ini into my.ini, and restart your MySQL server afterwards.
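A sketch of that swap, assuming the sample file shipped with your MySQL install (file names and paths vary by platform and distribution):
cp my.ini my.ini.bak        # save the old config first
cp my-small.ini my.ini      # use the small-instance sample
# then restart the MySQL server, e.g. on Linux:
sudo service mysql restart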
Those steps may help you squeak by in a micro instance. They may not. They are, I suppose, worth a try.