The command below takes 2-3 seconds on a Linux MySQL 5.6 server running PHP 5.4:
exec("mysql --host=$db_host --user=$db_user --password=$db_password $db_name < $sql_file");
On Windows with a similar configuration it takes 10-15 seconds. The Windows machine has a lot more RAM (16 GB) and a similar hard drive. I installed MySQL 5.6 and made no configuration changes. This is on Windows Server 2012.
What configuration changes can I make to fix this?
The SQL file creates about 40 InnoDB tables with very minimal inserts.
EDIT: Here is the file I am running:
https://www.dropbox.com/s/uguzgbbnyghok0o/database_14.4.sql?dl=0
UPDATE: On Windows 8 and 7 it was 3 seconds, but on Windows Server 2012 it is 15+ seconds. I disabled System Center 2012 and that made no difference.
UPDATE 2:
I also tried killing almost every service except MySQL and IIS, and it still performed slowly. Is there something in Windows Server 2012 that causes this to be slow?
UPDATE 3:
I tried disabling write-cache buffer flushing and performance is now great.
I didn't have to do this on the other machines I tested with. Does this indicate a bottleneck in how the disk is set up?
https://social.technet.microsoft.com/Forums/windows/en-US/282ea0fc-fba7-4474-83d5-f9bbce0e52ea/major-disk-speed-improvement-disable-write-cache-buffer-flushing?forum=w7itproperf
That is why we call it the LAMP stack, and no doubt why MySQL on Linux is so much more popular than MySQL on Windows. But that has more to do with stability and safety; performance-wise the difference should be minimal. A Microsoft professional could best tune Windows Server explicitly for MySQL by enabling and disabling services, but we would rather be interested in seeing the configuration of your my.ini. So what are the contributing factors for MySQL on Windows that we should consider?
The services and policies in Windows are sometimes a big impediment to performance because of all sorts of restrictions and protections.
We should also take into account the Apache (httpd.conf) and PHP (php.ini) configuration, as MySQL is so tightly coupled with them.
Antivirus: better to disable this when benchmarking performance.
You must consider these parameters in my.ini, since you have 40 InnoDB tables here:
innodb_buffer_pool_size, innodb_flush_log_at_trx_commit, query_cache_size, innodb_flush_method, innodb_log_file_size, innodb_file_per_table
For example: if the file size of ib_logfile0 is 524288000 bytes, then
524288000 / 1048576 = 500, hence innodb_log_file_size should be 500M.
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
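Before editing my.ini it can help to confirm what the server is currently running with. A quick sanity check from the mysql client (this only reads values, it changes nothing):
SHOW VARIABLES
WHERE Variable_name IN ('innodb_buffer_pool_size', 'innodb_log_file_size',
                        'innodb_flush_log_at_trx_commit', 'innodb_flush_method',
                        'innodb_file_per_table', 'query_cache_size');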
https://dev.mysql.com/doc/refman/5.1/en/innodb-tuning.html
When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled, because that requires a log flush to disk for every insert:
SET autocommit=0;
Most important is innodb_flush_log_at_trx_commit, as in this case it is about importing a database. Setting this to 2 from the default of 1 can be a big performance booster, especially during data import, as the log buffer will only be flushed to the OS file cache on every transaction commit instead of being synced to disk.
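As a rough sketch, here is how those session settings are typically wrapped around an import, following the bulk-loading page referenced below. The file path is just a placeholder, and the SET GLOBAL line needs the SUPER privilege and should be reverted to 1 once the import is done:
SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- revert to 1 afterwards for full durability
SET autocommit = 0;
SET unique_checks = 0;                          -- skip secondary unique-index checks during load
SET foreign_key_checks = 0;                     -- skip FK validation during load
SOURCE /path/to/database_14.4.sql;              -- placeholder path to the dump file
COMMIT;
SET foreign_key_checks = 1;
SET unique_checks = 1;
SET autocommit = 1;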
For reference :
https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html
https://dba.stackexchange.com/a/72766/60318
http://kvz.io/blog/2009/03/31/improve-mysql-insert-performance/
Lastly, based on this
mysql --host=$db_host --user=$db_user --password=$db_password $db_name < $sql_file
If the mysqldump (.sql) file is not residing on the same host where you are importing, performance will be slow. Consider copying the .sql file onto the very server where you need to import the database, then try importing without the --host option.
Windows is slower at creating files, period. 40 InnoDB tables involve 40 or 80 file creations. Since they are small InnoDB tables, you may as well set innodb_file_per_table=OFF before doing the CREATEs, thereby needing only 40 file creations.
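If you cannot edit my.ini before the import, a minimal sketch of doing the same thing at runtime (it needs the SUPER privilege and affects tables created by all sessions until you switch it back):
SET GLOBAL innodb_file_per_table = OFF;
-- run the CREATE TABLE script / import here
SET GLOBAL innodb_file_per_table = ON;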
Good practice in MySQL is to create tables once, and not be creating/dropping tables frequently. If your application is designed to do lots of CREATEs, we should focus on that. (Note that, even on Linux, table create time is non-trivial.)
If these are temporary tables... 5.7 will have significant changes that will improve the performance (on either OS) in this area. 5.7 is on the cusp of being GA.
(RAM size is irrelevant in this situation.)
Related
I have a MySQL database that is 4 TB in size, and when I dump it using mysqldump it takes around 2 days to dump that database in .sql format.
Can anyone help me speed up this process?
OS: Ubuntu 14
MySQL 5.6
A single database of size 4 TB
Hundreds of tables; average table size is around 100 to 200 GB
Please help if anyone has a solution to this.
I would:
stop the database,
copy the files to a new database location,
restart the database,
process the data from the new place (maybe on another machine).
If you are replicating, just stop replication, process, start replication.
These methods should improve speed because there are no concurrent processes accessing the database (and none of the locking logic).
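If you do have a replica, the stop/process/start route mentioned above amounts to roughly this, run on the replica (a sketch assuming classic MySQL 5.6 replication):
STOP SLAVE;     -- pause replication so the data stops changing on this replica
-- take the dump or copy the table files from this replica now
START SLAVE;    -- resume; the replica catches up from the binary logs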
On such large databases, I would try not to have to make dumps at all. Just use the MySQL table files if possible.
In any case, 2 days seems like a lot, even for an old machine. Check that you are not swapping, and check your MySQL configuration for possible problems. In general, try to get a better machine. Computers are cheaper than the time spent optimizing.
I have an issue that is driving me (and my customer) up the wall. They are running MySQL on Windows - I inherited this platform, I didn't design it and changing to MSSQL on Windows or migrating the MySQL instance to a *Nix VM isn't an option at this stage.
The server is a single Windows VM, reasonably specced (4 vCores, 16 GB RAM, etc.).
Initially they had a single disk for the OS, MySQL, and the MySQL backup location, and they were getting inconsistent backups, regularly failing with the error message:
mysqldump: Got errno 22 on write
Eventually we solved this by simply moving the backup destination to a second virtual disk (even though it is on the same underlying storage network, we believed the above error was being caused by the underlying OS).
And life was good....
For about 2-3 months
Now we have a different (but my gut is telling me related) issue:
The mysqldump process is taking increasingly long (over the last 4 days, the time taken for the dump has increased by about 30 minutes per backup).
The database itself is a bit large - 58 GB - however the delta in size is only about 100 MB per day (and unless I'm missing something, it shouldn't take 30 minutes extra to dump 100 MB of data).
Initially we thought this was the underlying storage network I/O. However, as part of the backup script, once the .sql file is created it gets zipped up (to about 8.5 GB), and this process is very consistent in the time taken, which leads me not to suspect the disk I/O (as my presumption is that the zip time would also increase if this were the case).
The script that I use to invoke the backup is this:
%mysqldumpexe% --user=%dbuser% --password=%dbpass% --single-transaction --databases %databases% --routines --force=true --max_allowed_packet=1G --triggers --log-error=%errorLogPath% > %backupfldr%FullBackup
The version of mysqldump is C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqldump.exe.
Now to compound matters, I'm primarily a Windows guy (so I have limited MySQL experience) and all the Linux guys at the office won't touch it because it's on Windows.
I suspect the cause of the increased time is that there is something funky happening with the row locks (possibly due to the application that uses the MySQL instance), but I'm not sure.
Now for my questions: Has anyone experienced anything similar with a gradual increase of time for a MySQL dump over time?
Is there a way on Windows to natively monitor the performance of MySQLdump to find where the bottlenecks/locks are?
Is there a better way of doing a regular MySQL backup on a Windows platform?
Should I just tell the customer it's unsupported and to migrate it to a supported Platform?
Should I simply utter a 4 letter word and go to the Pub and drown my sorrows?
Eventually I found the culprit was the underlying storage network.
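For anyone with the same question about monitoring the dump itself, a quick server-side check while the backup runs is the following (generic MySQL, not Windows-specific; the dump appears as one long-running connection):
SHOW FULL PROCESSLIST;        -- watch the State column of the mysqldump connection
SHOW ENGINE INNODB STATUS\G   -- pending I/O, row lock waits, history list length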
I currently have a cloud-based server with the following config:
CentOS 7 64-Bit
CPU:8 vCore
RAM:16 GB
MariaDB/MySQL 5.5.5
Unfortunately, I've inherited a MyISAM database and tables that I have no ability to convert to InnoDB, even though the application performs many writes from many connections. The data is WordPress posts, with the typical large text and photos.
I'm experimenting with my.cnf config changes and was wondering if the config I've developed here makes use of the resources in the most efficient way. Is there anything glaring I could increase/decrease to squeeze out more performance?
key_buffer_size=4G
thread_cache_size = 128
bulk_insert_buffer_size=256M
join_buffer_size=64M
max_allowed_packet=128M
query_cache_limit=128M
read_buffer_size=16M
read_rnd_buffer_size=16M
sort_buffer_size=16M
table_cache=128
tmp_table_size=128M
This will depend entirely on the type of data you are storing, the structure and size of your tables and the type of usage your database has. Not to mention the amount of available RAM and the type of disks your server has.
The best recommendation, if you have shell access to the server (which I assume you must, otherwise you couldn't change my.cnf), is to download the mysqltuner script from major.io.
Run this script as a user with privileges to access your database, and preferably with root privileges on MySQL too (the ideal is to run it under sudo or root). It will analyse your database access since MySQL's last restart and then give you recommendations for changing the options in my.cnf.
It isn't perfect, but it'll get you much further, and more quickly, than anyone on here trying guess what values would be appropriate for your use case.
And, while not trying to pre-empt the results, I wouldn't be surprised if mysqltuner recommends that you drastically increase the size of your join buffer, table_cache and query_cache_limit.
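If you want a rough manual preview of the kind of counters mysqltuner looks at, these status queries are harmless to run (assuming the server has been up long enough for the numbers to be meaningful):
SHOW GLOBAL STATUS LIKE 'Key_read%';          -- key buffer requests vs. disk reads (key_buffer_size)
SHOW GLOBAL STATUS LIKE 'Created_tmp%';       -- on-disk temp tables (tmp_table_size)
SHOW GLOBAL STATUS LIKE 'Qcache%';            -- query cache hits and low-memory prunes
SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';  -- pressure on sort_buffer_size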
I don't know what's going on. My server has been fine for probably a year. Now I'm having a severe problem with MariaDB/MySQL. The DB server keeps crashing. When it does and I bring it back online, I get errors: several tables are marked crashed and I have to repair them. Here are the server specs...
CloudLinux Server release 6.6 installed on CentOS 6.5 (x64)
WHM/Cpanel 11.50.1 Build 1 (Current)
MariaDB 10.0.21
RAM: 3,820MB (3750MB+ in use)
Swap: 1,023MB (1,023MB in use)
4 Cores (Low idle load)
Available Disk Space: 26GB
I suspect it has to do with memory; I get a memory alert in WHM.
Here's what I get when I try to visit a web site on my server that uses MySQL (As expected):
Warning: mysql_connect(): Connection refused in /home/mysite/public_html/index.php on line 19
Unable to connect to server.
Here's a link to the main error log of my database server (Too much to post here): http://wikisend.com/download/182056/proton.myserver.com.err.txt
This is what happens when I restart my database server from WHM: each time I restart the DB server, random tables are marked as crashed. Sometimes a lot of tables, sometimes just a few, and then I have to repair them.
Here is the contents of the /etc/my.cnf file:
root@proton [~]# cat /etc/my.cnf
[mysqld]
default-storage-engine=MyISAM
innodb_file_per_table=1
max_allowed_packet=268435456
open_files_limit=10000
innodb_buffer_pool_size=123731968
The only thing I've tried so far to fix this is changing one option in WHM.
I only have a handful of sites on the server. Any help is greatly appreciated.
Please provide the output of:
SHOW VARIABLES LIKE '%buffer%';
Do you have other products running in the same VM/server? How much of the 3750MB are they using? Consider increasing RAM as a quick fix. Otherwise, let's look for what is chewing up RAM.
You are probably not using any InnoDB tables? If not, then change this to 0:
innodb_buffer_pool_size=123731968
For MyISAM, the most important factor is key_buffer_size; it should be no more than about 500M for your case.
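key_buffer_size is a dynamic global variable, so as a sketch you could try the change at runtime (it needs the SUPER privilege) before persisting it under [mysqld] in /etc/my.cnf:
SET GLOBAL key_buffer_size = 500 * 1024 * 1024;   -- roughly 500M
SHOW GLOBAL STATUS LIKE 'Key_read%';              -- Key_reads should stay small relative to Key_read_requests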
What is WHM?
Abrupt stops of MySQL (for any reason) lead to the need to REPAIR MyISAM tables ("marked crashed"). (Consider moving to InnoDB to avoid this recurring nuisance.)
Recently we changed the app server of our Rails website from Mongrel to Passenger [with REE and Rails 2.3.8]. The production setup has 6 machines pointing to a single MySQL server and a memcached server. Before, each machine had 5 Mongrel instances. Now we have 45 Passenger instances, as the RAM in each machine is 16 GB with two 4-core CPUs. Once we deployed this Passenger setup in production, the website became very slow and all the requests started to queue up, and eventually we had to roll back.
Now we suspect that the cause is the increased load on the MySQL server. Before there were only 30 MySQL connections, and now we have 275 connections. The MySQL server has a similar setup to our website machines, but all the configs were left at their default limits. The buffer pool size is only 8 MB even though we have 16 GB of RAM, and the number of concurrent threads is 8.
Would this increase in simultaneous connections to MySQL have caused MySQL to respond more slowly than when we had only 30 connections? If so, how can we make MySQL perform better with 275 simultaneous connections in place?
Any advice greatly appreciated.
UPDATE:
More information on the MySQL server:
RAM: 16 GB; CPU: two processors, each with 4 cores
Tables are InnoDB, with only default InnoDB config values.
Thanks
An idle MySQL connection uses up a stack and a network buffer on the server. That is worth about 200 KB of memory and zero CPU.
In a database using InnoDB only, you should edit /etc/sysctl.conf to include vm.swappiness = 0 to delay swapping out processes as long as possible. You should then increase innodb_buffer_pool_size to about 80% of the system's memory, assuming a dedicated database server machine. Make sure the box does not swap, that is, VSIZE should not exceed system RAM.
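The buffer pool setting itself goes in my.cnf and needs a restart on these older versions, but a quick check after raising it might look like this (a sketch; the status counter just shows how much of the pool is actually in use):
SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS pool_mb;   -- confirm the running value
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';     -- free pages remaining in the pool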
innodb_thread_concurrency can be set to 0 (unlimited), or to 32-64 if you are a bit paranoid, assuming MySQL 5.5. The limit is lower in 5.1, and around 4-8 in MySQL 5.0. It is not recommended to use such outdated versions of MySQL on a machine with 8 or 16 cores; there are huge improvements with respect to concurrency in MySQL 5.5 with InnoDB 1.1.
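innodb_thread_concurrency is a dynamic global variable, so (with the SUPER privilege) it can be tried without a restart, for example:
SET GLOBAL innodb_thread_concurrency = 0;   -- 0 = unlimited; use 32-64 instead if you prefer a cap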
The variable thread_concurrency has no meaning on a current Linux. It is used to call pthread_setconcurrency() on Linux, which does nothing. It used to serve a function on older Solaris/SunOS.
Without further information, the cause of your performance problems cannot be determined with any certainty, but the above general advice may help. More general advice geared toward my (limited) experience with Ruby can be found at http://mysqldump.azundris.com/archives/72-Rubyisms.html - that article is the summary of a consulting job I once did for an early version of a very popular Facebook application.
UPDATE:
According to http://pastebin.com/pT3r6A9q , you are running 5.0.45-community-log, which is awfully old and does not perform well under concurrent load. Use a current 5.5 build, it should perform way better than what you have there.
Also, fix the innodb_buffer_pool_size. You are going nowhere with only 8M of pool here.
While you are at it, innodb_file_per_table should be ON.
Do not switch on innodb_flush_log_at_trx_commit = 2 without understanding what that means, but it may help you temporarily, depending on your persistence requirements. It is not a permanent solution to your problems in any way, though.
If you have any substantial kind of writes going on, you need to review the innodb_log_file_size and innodb_log_buffer_size as well.
If that installation is earning money, you dearly need professional help. I am no longer doing this as a profession, but I can recommend people. Contact me outside of Stack Overflow if you want.
UPDATE:
According to your processlist, you have a great many queries in the state 'Sending data'. MySQL is in this state when a query is being executed, that is, the main interior join loop/query execution loop is busy. SHOW ENGINE INNODB STATUS\G will show you something like
...
--------------
ROW OPERATIONS
--------------
3 queries inside InnoDB, 0 queries in queue
...
If that number is larger than say 4-8 (inside InnoDB), 5.0.x is going to have trouble. 5.5.x will perform a lot better here.
Regarding the my.cnf: See my previous comments on your InnoDB. See also my comments on thread_concurrency (without innodb_ prefix):
# On Linux, this does exactly nothing.
thread_concurrency = 8
You are missing any InnoDB configuration at all. Assuming that you ARE using InnoDB tables, you will not perform well, no matter what you do.
As far as I know, it's unlikely that merely maintaining/opening the connections would be the problem. Are you seeing this issue even when the site is idle?
I'd try http://www.quest.com/spotlight-on-mysql/ or similar to see if it's really your database that's the bottleneck here.
In the past, I've seen basic networking craziness lead to behaviour similar to what you describe - someone had set up the new machines with an incorrect subnet mask.
Have you looked at any of the machine statistics on the database server? Memory/CPU/disk IO stats? Is the database server struggling?