I have an issue that is driving me (and my customer) up the wall. They are running MySQL on Windows. I inherited this platform - I didn't design it - and changing to MSSQL on Windows or migrating the MySQL instance to a *nix VM isn't an option at this stage.
The server is a single Windows VM, reasonably specced (4 vCores, 16 GB RAM, etc.).
Initially they had a single disk for the OS, MySQL, and the MySQL backup location, and they were getting inconsistent backups, regularly failing with the error message:
mysqldump: Got errno 22 on write
Eventually we solved this by simply moving the backup destination to a second virtual disk (even though it is on the same underlying storage network, we believed that the above error was being caused by the underlying OS).
And life was good....
For about 2-3 months
Now we have a different (but my gut is telling me related) issue:
The mysqldump process is taking longer and longer (over the last 4 days, the time taken for the dump has increased by about 30 minutes per backup).
The database itself is fairly large - 58 GB - however the delta in size is only about 100 MB per day (and unless I'm missing something, it shouldn't take an extra 30 minutes to dump 100 MB of data).
Initially we thought this was the underlying storage network I/O; however, as part of the backup script, once the .SQL file is created it gets zipped up (to about 8.5 GB), and that step is very consistent in the time it takes, which leads me not to suspect disk I/O (my presumption being that the zip time would also increase if that were the case).
the script that I use to invoke the backup is this:
%mysqldumpexe% --user=%dbuser% --password=%dbpass% --single-transaction --databases %databases% --routines --force=true --max_allowed_packet=1G --triggers --log-error=%errorLogPath% > %backupfldr%FullBackup
The mysqldump in use is C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqldump.exe.
Now to compound matters, I'm primarily a Windows guy (so I have limited MySQL experience), and all the Linux guys at the office won't touch it because it's on Windows.
I suspect the cause of the increased time is something funky happening with row locks (possibly due to the application that uses the MySQL instance), but I'm not sure.
Now for my questions: Has anyone experienced anything similar, with a gradual increase in the time a MySQL dump takes?
Is there a way on Windows to natively monitor the performance of mysqldump to find where the bottlenecks/locks are? (The closest I've come is polling the server from a second session; see the sketch below.)
Is there a better way of doing a regular MySQL backup on a Windows platform?
Should I just tell the customer it's unsupported and to migrate it to a supported Platform?
Should I simply utter a 4 letter word and go to the Pub and drown my sorrows?
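For that monitoring question, this is the sort of polling I mean - a rough sketch from a second connection while the dump runs, using standard statements (nothing here is Windows-specific):
SHOW FULL PROCESSLIST;                          -- the dump shows up as a long-running SELECT /*!40001 SQL_NO_CACHE */ ...
SHOW ENGINE INNODB STATUS\G                     -- the TRANSACTIONS section shows any lock waits
SELECT * FROM information_schema.INNODB_TRX\G   -- per-transaction state, including trx_rows_locked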
Update: eventually found the culprit was the underlying storage network.
Related
I have a large Drupal 7 site with many concurrent logged-in users running on a server where disk space recently ran out (MySQL InnoDB behind Memcached on Ubuntu 16.04). How/why this happened is a discussion for another day.
I've cleared up the disk space issue, and the site seems to be running fine as far as general interaction indicates, but the Drupal log is full of errors like this:
Uncaught exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 3 Error writing file '/tmp/MYGWmIvU' (Errcode: 28 - No space left on device)' in /var/www/pixelscrapper.com/public_html/includes/database/database.inc:2229
My question now is: is the MySQL database running Drupal likely to be corrupted/flaky at this point? I.e., on a scale of 0-10, how vital is it that I restore the database to a point before the disk space ran out?
(Anything other than 0 means I will likely restore the database--but there are other things that went wrong as well here, which means that we would lose quite a few days of data if I need to restore, which is a huge drag. C'est la vie, etc.).
My assumption is that the data in MySQL may be more or less fine, but that I cannot rely on the integrity of the actual Drupal data (users, nodes, etc.) which are made up of collections of many database rows...
Out-of-space crashes can cause serious damage to databases. The mere fact that your database is able to start up and apparently run as usual is already a good indication that it was not totally messed up by the incident.
The next thing you can do is perform an in-depth scan using the mysqlcheck program:
The mysqlcheck client performs table maintenance: It checks, repairs, optimizes, or analyzes tables.
Check all tables in all databases:
$ mysqlcheck -A
Check and repair all tables in all databases:
$ mysqlcheck -A -r
NB: as explained in the documentation, you had better shut down your application and make a backup before you run this.
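A minimal sketch of that order of operations (the dump file name is just an example; add credentials as needed):
$ mysqldump --all-databases --single-transaction > pre_repair_backup.sql
$ mysqlcheck -A -r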
After upgrading MySQL from 5.5 to 5.7 a few months ago, I forgot to run mysql_upgrade.
Now I'm facing some problems: the mysql, sys, and performance_schema databases are missing and root privileges are broken. A lot of Access denied for user 'root'... messages pop up when I try to do anything related to MySQL user privileges.
This Stack answer should solve my problem, but I need to know it won't affect any of the schemas, data, etc.
My database is pretty big: it amounts to 10 GB and consists of about 50 tables, and I'm afraid something bad could happen. I know the standard answer will be to take a mysqldump first.
But the full backup will take a long time, maybe an hour, and the business won't accept that downtime.
So what is the risk of running mysql_upgrade without doing a mysqldump?
The risk of doing anything administrative to your database without backups is unacceptably high... not because of any limitations in MySQL per se, but because we're talking about something critical to your business. You should be backing it up no less often than the interval of data you are willing to lose.
If you are using InnoDB, then use the --single-transaction option of mysqldump and there should be no locking, because MVCC handles the consistency. If you are not using InnoDB, that is a problem in itself, but using --skip-lock-tables should minimize locking.
Note that it should be quite safe to kill a mysqldump in progress if you find it is causing issues -- find the thread-id of the dump using SHOW PROCESSLIST; and then KILL QUERY #; where # is the ID of the dump connection from the process list.
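A minimal sketch of that, run from a second client session (the Id value is hypothetical - use whatever the process list reports for the dump connection):
SHOW PROCESSLIST;   -- note the Id of the connection running the dump's SELECT
KILL QUERY 1234;    -- 1234 stands in for that Id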
The potential problem with the answer you cited is that 5.1 to 5.5 is a supported upgrade path, because those two versions are sequential. 5.5 to 5.7 isn't. You should have upgraded to 5.6 and then 5.7, running the appropriate versions of mysql_upgrade both before and after each step (appropriate meaning the version of the utility matching the version of the server running at the moment).
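A rough sketch of that sequence (installation and service-restart steps omitted; credentials are placeholders):
$ mysql_upgrade -u root -p     # while still on 5.5, with the 5.5 utility, to catch anything pending
# install and start 5.6, then run its mysql_upgrade:
$ mysql_upgrade -u root -p
# install and start 5.7, then run its mysql_upgrade:
$ mysql_upgrade -u root -p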
You may be in a more delicate situation than you imagine... or you may not.
Given similar circumstances, I would not want to do anything less than stop the server completely, clone it by copying all the files to a new machine, and test the remediation steps against the clone.
If this system is business-critical, it should have a live, running replica server, that could be promoted to master and permanently replace this machine in the event of a failure. In a circumstance like this one, you would apply your fixes to the replica and promote it.
The Access denied for user 'root'... errors may or may not be related to the schema incompatibilities.
I have a Mac Pro with an i7 processor, 16 GB RAM, and sufficient storage, running Windows 8.1 via Parallels on top of OS X Yosemite. I have 23 GB of MySQL data and I am wondering whether I can load that much data into MySQL on my PC. I started to import the data, but it stops after an hour, throwing this error:
Error 1114 (HY000) at line 223. The table X is full.
I googled the error and found the same error discussed on Stack Overflow (but not with this much data). I tried to resolve it using the given solutions but failed. MySQL imports about 3 GB of data and then throws the error.
Now, here are my 3 main questions.
Is my data bigger than a MySQL storage engine can handle on a PC?
If that's not the case and I am good to go with that much data, is there any configuration required to run a 23 GB database on my PC?
My final question is: how big is too big to run on one's own machine? Is it only a matter of being able to store the data locally, or does it require something else?
Of course MySQL on Windows can handle 23GB of data. That's not even close to its limit.
Keep in mind that a database takes lots of disk space for indexes and other things. 23GB of raw data probably will need 100GB of disk space to load, to index, and to get running. If you are loading it into an InnoDB table you will also need transaction rollback space for the load.
It seems likely that your Windows 8.1 virtual machine running on Parallels is running out of disk space. You can allocate more of your Mac's disk for use by Parallels. Read this: http://kb.parallels.com/en/113972
Your answers can be found in the MySQL reference:
The effective maximum table size for MySQL databases is usually determined by operating system constraints on file sizes, not by MySQL internal limits. The following table lists some examples of operating system file-size limits. This is only a rough guide and is not intended to be definitive. For the most up-to-date information, be sure to check the documentation specific to your operating system.
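If OS file-size limits are not the issue, a few server variables are worth a quick check when a "table is full" error appears; a sketch using standard statements (nothing here is specific to Parallels):
SHOW VARIABLES LIKE 'innodb_data_file_path';   -- a fixed size without :autoextend caps the shared tablespace
SHOW VARIABLES LIKE 'innodb_file_per_table';   -- when ON, each table gets its own .ibd file
SHOW VARIABLES LIKE 'tmpdir';                  -- temporary tables land here, and a small temp volume can fill up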
The command below takes 2-3 seconds on a Linux MySQL 5.6 server running PHP 5.4:
exec("mysql --host=$db_host --user=$db_user --password=$db_password $db_name < $sql_file");
On Windows with a similar configuration it takes 10-15 seconds. The Windows machine has a lot more RAM (16 GB) and a similar hard drive. I installed MySQL 5.6 and made no configuration changes. This is on Windows Server 2012.
What configuration can I change to fix this?
The SQL file creates about 40 InnoDB tables with very minimal inserts.
EDIT: Here is the file I am running:
https://www.dropbox.com/s/uguzgbbnyghok0o/database_14.4.sql?dl=0
UPDATE: On Windows 8 and 7 it was 3 seconds, but on Windows Server 2012 it is 15+ seconds. I disabled System Center 2012 and that made no difference.
UPDATE 2:
I also tried killing almost every service except MySQL and IIS, and it still performed slowly. Is there something in Windows Server 2012 that causes this to be slow?
Update 3
I tried disabling write-cache buffer flushing and performance is now great.
I didn't have to do this on the other machines I tested with. Does this indicate a bottleneck in how the disk is set up?
https://social.technet.microsoft.com/Forums/windows/en-US/282ea0fc-fba7-4474-83d5-f9bbce0e52ea/major-disk-speed-improvement-disable-write-cache-buffer-flushing?forum=w7itproperf
That is why we call it the LAMP stack, and no doubt why MySQL is so much more popular on Linux than on Windows. But that has more to do with stability and safety; performance-wise the difference should be minimal. A Microsoft professional could best tune Windows Server explicitly for MySQL by enabling and disabling services, but we would rather see the configuration in your my.ini. So what are the contributing factors for MySQL on Windows that we should consider?
The services and policies in Windows are sometimes a big impediment to performance because of all sorts of restrictions and protections.
We should also take into account the Apache (httpd.conf) and PHP (php.ini) configuration, as MySQL is so tightly coupled with them.
Antivirus: better to disable this when benchmarking performance.
You must consider these parameters in my.ini, since you have 40 InnoDB tables:
innodb_buffer_pool_size, innodb_flush_log_at_trx_commit, query_cache_size, innodb_flush_method, innodb_log_file_size, innodb_file_per_table
For example: if the file size of ib_logfile0 is 524288000 bytes, then 524288000 / 1048576 = 500, hence innodb_log_file_size should be 500M.
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
https://dev.mysql.com/doc/refman/5.1/en/innodb-tuning.html
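Putting those together, a rough my.ini sketch for an import-heavy box - the values are illustrative only and should be tuned to your RAM and workload (note that O_DIRECT is a Linux/Unix value; on Windows, innodb_flush_method takes different values, e.g. unbuffered):
[mysqld]
innodb_buffer_pool_size        = 8G        # commonly around 50-70% of RAM on a dedicated database server
innodb_log_file_size           = 500M      # matches the ib_logfile0 calculation above
innodb_flush_log_at_trx_commit = 2         # flush the log to the OS cache per commit instead of forcing it to disk
innodb_flush_method            = O_DIRECT  # Linux value; pick the Windows-specific equivalent on Windows
innodb_file_per_table          = 1
query_cache_size               = 0         # usually better left off for write-heavy workloads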
When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled, because that requires a log flush to disk for every insert:
SET autocommit=0;
Most important is innodb_flush_log_at_trx_commit, since in this case we are importing a database. Setting it to 2 from the default of 1 can be a big performance booster, especially during a data import, as the log buffer is only flushed to the OS file cache on each transaction commit rather than being forced to disk.
For reference:
https://dev.mysql.com/doc/refman/5.5/en/optimizing-innodb-bulk-data-loading.html
https://dba.stackexchange.com/a/72766/60318
http://kvz.io/blog/2009/03/31/improve-mysql-insert-performance/
Lastly, based on this
mysql --host=$db_host --user=$db_user --password=$db_password $db_name < $sql_file
If the mysqldump (.sql) file does not reside on the same host where you are importing it, performance will be slow. Consider copying the .sql file onto the server where you need to import the database, then try importing without the --host option.
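For example, a sketch of the local import, assuming the .sql file has already been copied onto the database server itself (the path is just an example):
mysql --user=$db_user --password=$db_password $db_name < /path/on/server/database_14.4.sql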
Windows is slower at creating files, period. Creating 40 InnoDB tables involves 40 or 80 file creations (an .frm file per table, plus an .ibd file each when innodb_file_per_table is ON). Since they are small InnoDB tables, you may as well set innodb_file_per_table=OFF before doing the CREATEs, thereby needing only 40 file creations (see the sketch below).
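A small sketch of that, assuming you have the SUPER privilege (innodb_file_per_table is a dynamic global variable, so no restart is needed):
SET GLOBAL innodb_file_per_table = OFF;
-- run the import now, e.g. from the shell: mysql ... $db_name < database_14.4.sql
SET GLOBAL innodb_file_per_table = ON;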
Good practice in MySQL is to create tables once, and not be creating/dropping tables frequently. If your application is designed to do lots of CREATEs, we should focus on that. (Note that, even on Linux, table create time is non-trivial.)
If these are temporary tables... 5.7 will have significant changes that will improve the performance (on either OS) in this area. 5.7 is on the cusp of being GA.
(RAM size is irrelevant in this situation.)
In my cron jobs I make a full mysqldump every night.
My database has 1.5 GB of data in total, across 20 tables.
Nearly every table has indexes.
I make the backup like this:
mysqldump --user=user --password=pass --default-character-set=utf8 --single-transaction database | gzip > "mybackupfile"
I have been doing this for 2 months, and the process took roughly 1.5 minutes during that time.
Last week my hosting company changed my server. Just after the server change, this process started to take 5 minutes. I told the hosting company and they increased my CPU from 4 GHz to 6 GHz, so the mysqldump process dropped to 3.5 minutes. Then they increased it to 12 GHz, but this didn't change the performance.
I checked my shared SSD disk performance with hdparm; it was 70 MB/sec. So I complained again and they changed my hard disk to another one. The disk read speed became 170 MB/sec, and the mysqldump process dropped to 3 minutes.
But the duration is still far from the previous value. What could be the cause of this performance degradation? How can I isolate the problem?
(The server is CentOS 6.4, 12 GHz CPU, 8 GB RAM.)
Edit: My hosting company changed the server again and I still have the same problem. The old server had a 3.5 minute backup time; the new server takes 5 minutes. The resulting file is 820 MB zipped, 2.9 GB unzipped.
I'm trying to find out what makes this dump slow.
Dump process started at 11:24:32 and stopped at 11:29:40. You can check it from screenshots' timestamps.
Screenshots: general overview, consumed memory, memory and CPU of gzip, memory and CPU of mysqldump, disk operations.
hdparm results:
/dev/sda2:
Timing cached reads: 3608 MB in 1.99 seconds = 1809.19 MB/sec
Timing buffered disk reads: 284 MB in 3.00 seconds = 94.53 MB/sec
/dev/sda2:
Timing cached reads: 2120 MB in 2.00 seconds = 1058.70 MB/sec
Timing buffered disk reads: 330 MB in 3.01 seconds = 109.53 MB/sec
The obvious thing I'd look at is whether the amount of data has increased in recent months. Most database-driven applications collect more data over time, so the database grows. If you still have copies of your nightly backups, I'd look at the file sizes to see if they have been increasing steadily.
Another possibility is that you have other processes doing read queries while you are backing up. Mysqldump by default creates a read lock on the database to ensure a consistent snapshot of data. But that doesn't block read queries. If there are still queries running, this could compete for CPU and disk resources, and naturally slow things down.
Or there could be other processes besides MySQL on the same server competing for resources.
And finally, as @andrew commented above, there could be other virtual machines on the same physical server, competing for the physical resources. This is beyond your control and not even something you can test for within the virtual machine. It's up to the hosting company to manage a balanced allocation of virtual machines.
The fact that the start of the problems coincides with your hosting company moving your server to another host makes a pretty good case that they moved you onto a busier host. Maybe they're trying to crowd more VM's onto a single host to save rackspace or something. This isn't something StackOverflow can answer for you -- you should talk to the hosting company.
The number or size of indexes doesn't matter during the backup, since mysqldump just does a SELECT * from each table, and that's a table-scan. No secondary indexes are used for those queries.
If you want a faster backup solution, here are a few solutions:
If all your tables are InnoDB, you can use the --single-transaction option, which uses transaction isolation instead of locking to ensure a consistent backup. Then the difference between 3 and 6 minutes isn't so important, because other clients can continue to read and write to the database. (P.S.: You should be using InnoDB anyway.)
mysqldump with the --tab option. This dumps data into tab-delimited files, one file per table. It's a little bit faster to dump, but much faster to restore (see the sketch after this list).
Mydumper, an open source alternative to mysqldump. This has the option to run in a multi-threaded fashion, backing up tables in parallel. See http://2bits.com/backup/fast-parallel-mysql-backups-and-imports-mydumper.html for a good intro.
Percona XtraBackup performs a physical backup, instead of a logical backup like mysqldump or mydumper. But it's often faster than doing a logical backup. It also avoids locking InnoDB tables, so you can run a backup while clients are reading and writing. Percona XtraBackup is free and open-source, and it works with plain MySQL Community Edition, as well as all variants like Percona Server, MariaDB, and even Drizzle. Percona XtraBackup is optimized for InnoDB, but it also works with MyISAM and any other storage engines (it has to do locking while backing up non-InnoDB tables though).
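As an illustration of the --tab option mentioned above, a minimal sketch (the output directory is just an example; the MySQL server itself writes the .txt files, so it needs write access there and secure_file_priv must permit it):
mysqldump --user=user --password=pass --single-transaction --tab=/var/dump/database database
# result: one tablename.sql file (schema) plus one tablename.txt file (tab-delimited data) per table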
My question is: Do you really need a dump or just a copy?
There is a great approach that gets away from mysqldump entirely: it uses a Linux LVM snapshot:
http://www.lenzg.net/mylvmbackup/
The idea is to hold the database for a millisecond, at which point LVM makes a hot copy (which takes another millisecond), and then the database can continue to write data. The LVM copy is now ready for whatever you want: copy the table files, or open it as a new MySQL instance for a dump (which then does not run against the production database!).
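A rough sketch of the sequence mylvmbackup automates (the volume group, logical volume, and mount point names are made up; the mysql client's \! escape runs a shell command while the connection - and therefore the global read lock - stays open; if your client doesn't accept \! in batch mode, run the lvcreate from a second shell while the locking session stays open):
mysql -u root -p <<'SQL'
FLUSH TABLES WITH READ LOCK;
\! lvcreate --snapshot --size 5G --name mysql_snap /dev/vg0/mysql_data
UNLOCK TABLES;
SQL
mount /dev/vg0/mysql_snap /mnt/mysql_snap
# copy the data directory from the snapshot, or start a second mysqld against it and dump from there
umount /mnt/mysql_snap && lvremove -f /dev/vg0/mysql_snap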
This needs some modifications to your system. Maybe those mylvmbackup scripts are not completely finished yet, but if you have time you can build on them and do your own work.
Btw: if you really go this way, I'm very interested in the scripts, as I also need a fast solution to clone a database from the production environment to a test system for developers. We decided to use this LVM snapshot feature but - as always - had no time to get started with it.