How to recover disk space used by MySQL databases?

I am using Ubuntu for PHP/MySQL development. Ubuntu is installed in a VM that unfortunately was assigned very little disk space (40 GB).
After the first startup the disk had more than 20 GB of space available.
Over the past week I have had to import and delete a db.sql.gz dump quite a few times. Now I only have 50 MB of disk space left. Even though I have deleted all the databases using phpMyAdmin, I suspect they still occupy disk space.
How can I recover this disk space?
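A quick way to confirm that MySQL's data files are still holding the space is to inspect the data directory directly. A minimal sketch, assuming the default Ubuntu paths and the default shared InnoDB tablespace:

# Total size of the MySQL data directory (default Ubuntu location)
sudo du -sh /var/lib/mysql
# The shared InnoDB tablespace grows on import but never shrinks,
# even after DROP DATABASE, so check its size
sudo ls -lh /var/lib/mysql/ibdata1

If ibdata1 is what ballooned, the usual remedy is to dump anything you still need, stop MySQL, remove the InnoDB data files, enable innodb_file_per_table, and re-import.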

Related

MySQL taking too much disk space on Windows

I would like to know how MySQL uses disk space. Consider the following scenario.
I tried to import a large mysqldump (1.6 GB) using the command mysql -u root dbname < mydump.sql. But the import failed with a "table xxx is full" error.
I found that this was due to no space left on the C: drive. Before importing the database I had 4.1 GB free on C:, but after running the command only 13 MB was left.
To free up some space on C:, I dropped the database I had tried to import (about half of the tables had already been imported). But dropping the database freed up only 2 GB.
I have a few questions here:
Before importing I had 4.1 GB. After dropping the database I have 2.1 GB. So what is occupying the remaining 2 GB on my disk?
Is there a way I can find and reclaim that space? (I tried clearing %temp%, the system cache, and ran FLUSH QUERY CACHE, but nothing worked.)
If the mysqldump file is 1.96 GB, I assumed the imported database would be roughly the same size. If not, how much disk space would the database actually occupy?
I managed to import the database after removing cache tables from it, but I would still like to know how to free up that disk space.
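What is likely happening (not stated in the thread, but consistent with the numbers) is that InnoDB stores all tables in a shared ibdata file by default, and that file grows as needed but never shrinks: DROP DATABASE frees space inside the file for reuse, not back to Windows. A hedged sketch of how to check, and of the usual per-table-file configuration (the table name below is a placeholder):

-- Is each table in its own .ibd file, or in the shared tablespace?
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- With innodb_file_per_table=ON (set in my.ini, then restart mysqld),
-- each new table gets its own file, which IS returned to the OS on drop.
-- An existing table can be rebuilt into its own file like this:
ALTER TABLE mydb.mytable ENGINE=InnoDB;

The shared ibdata file itself can only be shrunk by dumping everything, stopping the server, deleting the InnoDB data files, and re-importing.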

MySQLDump on Windows performance degrading over time

I have an issue that is driving me (and my customer) up the wall. They are running MySQL on Windows. I inherited this platform, I didn't design it, and changing to MSSQL on Windows or migrating the MySQL instance to a *nix VM isn't an option at this stage.
The server is a single Windows VM, reasonably specced (4 vCores, 16 GB RAM, etc.).
Initially they had a single disk for the OS, MySQL, and the MySQL backup location, and they were getting inconsistent backups, regularly failing with the error message:
mysqldump: Got errno 22 on write
Eventually we solved this by simply moving the backup destination to a second virtual disk (even though it is on the same underlying storage network, we believed the error above was being caused by the underlying OS).
And life was good....
For about 2-3 months
Now we have a different (but my gut is telling me related) issue:
The mysqldump process is taking longer and longer: over the last 4 days, the time taken for the dump has increased by about 30 minutes per backup.
The database itself is fairly large at 58 GB, but the delta is only about 100 MB per day (and unless I'm missing something, it shouldn't take an extra 30 minutes to dump 100 MB of data).
Initially we thought this was underlying storage network I/O. However, as part of the backup script, once the .SQL file is created it gets zipped up (to about 8.5 GB), and that step is very consistent in the time it takes, which leads me away from suspecting disk I/O (my presumption being that the zip time would also increase if I/O were the problem).
The script I use to invoke the backup is this:
%mysqldumpexe% --user=%dbuser% --password=%dbpass% --single-transaction --databases %databases% --routines --force=true --max_allowed_packet=1G --triggers --log-error=%errorLogPath% > %backupfldr%FullBackup
The mysqldump binary in use is C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqldump.exe (MySQL Server 5.7).
To compound matters, I'm primarily a Windows guy (so I have limited MySQL experience), and all the Linux guys at the office won't touch it because it's on Windows.
I suspect the cause of the increasing time is something funky happening with row locks (possibly due to the application that uses the MySQL instance), but I'm not sure.
Now for my questions: Has anyone experienced a similar gradual increase in MySQL dump time?
Is there a way on Windows to natively monitor the performance of mysqldump and find where the bottlenecks/locks are? (See the sketch after the update below.)
Is there a better way of doing a regular MySQL backup on a Windows platform?
Should I just tell the customer it's unsupported and to migrate it to a supported platform?
Should I simply utter a four-letter word, go to the pub, and drown my sorrows?
Update: the culprit eventually turned out to be the underlying storage network.
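For anyone chasing similar symptoms, lock contention can be checked from the SQL side, which works the same on Windows as anywhere else. A minimal sketch using standard MySQL 5.7 views (nothing specific to this setup):

-- While the dump is running, see what every connection is doing:
SHOW FULL PROCESSLIST;

-- Open transactions, how long they have been running, and their current query:
SELECT trx_id, trx_state, trx_started, trx_query
FROM information_schema.INNODB_TRX;

If the dump's connection sits in a "Waiting for table ..." state while application transactions pile up in INNODB_TRX, locking is a plausible culprit; if not, look below the database, as the storage network turned out to be here.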

HDD space in Ubuntu Apache server is running out

I've created an instance in Google Cloud with a 10 GB disk and 3.75 GB RAM and hosted a fairly transaction-heavy application's backend/API there. The OS is Ubuntu 14.04 LTS and I'm using the Apache web server with PHP and MySQL for the backend. The problem is that the disk has filled up very quickly.
Using Linux (Ubuntu) commands, I've found that my source code (/var/www/html) is about 200 MB and the MySQL data folder (/var/lib/mysql) is 3.7 GB (around 20,000,000 records in my project DB). I'm confused about what is occupying the rest of the disk (apart from OS files). As of today I only have 35 MB left. Once, for testing purposes, I copied the source code to another folder; even then I had the same problem. When I realized the disk was running out, I deleted that folder and freed around 200 MB, but about 10 minutes later that freed space was gone again!
I figured some log file (Apache error log, access log, MySQL error log, or CakePHP debug log) might be occupying that space, but I disabled and truncated those files long ago and have checked that they are not being recreated. So where is the space going?
I'm seriously worried about continuing this project on this instance. I thought about adding an additional disk to remedy the situation, but first I need to understand how my disk space is being consumed. Any help will be highly appreciated.
You can start by finding the largest files on your system.
From the / directory, run:
sudo find . -type f -size +5000k -exec ls -lh {} \;
Once you find the files you can start to troubleshoot.
If you get too many files, increase +5000k to target only the larger ones.
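A complementary check (my addition, not part of the original answer): summarize usage per directory, and look for deleted files that a process is still holding open, a classic cause of space that du cannot account for and that only comes back after a restart:

# Largest top-level directories, human-readable, same filesystem only
sudo du -xh --max-depth=1 / | sort -rh | head -20
# Files that are deleted but still held open by a running process
sudo lsof +L1

With a write-heavy MySQL workload, also check whether binary logging is enabled (SHOW VARIABLES LIKE 'log_bin';); binlogs can quietly consume gigabytes inside /var/lib/mysql.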

Mysqldump performance degradation

As part of my cron jobs I make a full mysqldump every night.
My database has a total of 1.5 GB of data in 20 tables.
Nearly every table has indexes.
I make the backup like this:
mysqldump --user=user --password=pass --default-character-set=utf8 database --single-transaction | gzip > "mybackupfile"
I have been doing this for two months, and the whole time the process took about 1.5 minutes.
Last week my hosting company moved me to a new server. Right after the change, the process started taking 5 minutes. I reported this, and they increased my CPU from 4 GHz to 6 GHz, which brought mysqldump down to 3.5 minutes. Then they increased it to 12 GHz, but that didn't change the performance.
I checked my shared SSD disk performance with hdparm: it was 70 MB/sec. So I complained again, and they moved me to another disk with a read speed of 170 MB/sec; the mysqldump process went down to 3 minutes.
But the duration is still far from the previous value. What could be the cause of this performance degradation? How can I isolate the problem?
(The server is CentOS 6.4, 12 GHz CPU, 8 GB RAM.)
Edit: My hosting company changed the server again and I still have the same problem. The old server now takes 3.5 minutes to back up, while the new server takes 5 minutes. The resulting file is 820 MB zipped, 2.9 GB unzipped.
I'm trying to find out what makes this dump slow.
The dump process started at 11:24:32 and finished at 11:29:40 (per the timestamps in the screenshots).
Screenshots (not reproduced here): general stats, consumed memory, memory and CPU of gzip, memory and CPU of mysqldump, disk operations.
hdparm results:
/dev/sda2:
Timing cached reads: 3608 MB in 1.99 seconds = 1809.19 MB/sec
Timing buffered disk reads: 284 MB in 3.00 seconds = 94.53 MB/sec
/dev/sda2:
Timing cached reads: 2120 MB in 2.00 seconds = 1058.70 MB/sec
Timing buffered disk reads: 330 MB in 3.01 seconds = 109.53 MB/sec
The obvious thing I'd look at is whether the amount of data has increased in recent months. Most database-driven applications collect more data over time, so the database grows. If you still have copies of your nightly backups, look at the file sizes to see if they have been increasing steadily.
Another possibility is that you have other processes running read queries while you are backing up. Mysqldump by default takes a read lock to ensure a consistent snapshot of the data, but that doesn't block read queries. If queries are still running, they compete for CPU and disk resources and naturally slow things down.
Or there could be other processes besides MySQL on the same server competing for resources.
And finally, as @andrew commented above, there could be other virtual machines on the same physical server competing for the physical resources. This is beyond your control and not even something you can test for from within the virtual machine. It's up to the hosting company to manage a balanced allocation of virtual machines.
The fact that the start of the problems coincides with your hosting company moving your server to another host makes a pretty good case that they moved you onto a busier host. Maybe they're trying to crowd more VMs onto a single host to save rack space or something. This isn't something Stack Overflow can answer for you; you should talk to the hosting company.
The number or size of indexes doesn't matter during the backup, since mysqldump just does a SELECT * from each table, which is a table scan. No secondary indexes are used for those queries.
If you want a faster backup, here are a few solutions:
If all your tables are InnoDB, you can use the --single-transaction option, which uses transaction isolation instead of locking to ensure a consistent backup. Then the difference between 3 and 6 minutes isn't so important, because other clients can continue to read and write to the database. (P.S.: You should be using InnoDB anyway.)
mysqldump with the --tab option. This dumps data into tab-delimited files, one file per table. It's a little faster to dump, but much faster to restore (see the sketch after this list).
mydumper, an open-source alternative to mysqldump. It has the option to run multi-threaded, backing up tables in parallel. See http://2bits.com/backup/fast-parallel-mysql-backups-and-imports-mydumper.html for a good intro.
Percona XtraBackup performs a physical backup instead of a logical backup like mysqldump or mydumper, and it's often faster. It also avoids locking InnoDB tables, so you can run a backup while clients are reading and writing. Percona XtraBackup is free and open source, and works with plain MySQL Community Edition as well as variants like Percona Server, MariaDB, and even Drizzle. It is optimized for InnoDB, but it also works with MyISAM and other storage engines (though it has to lock non-InnoDB tables while backing them up).
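As a concrete illustration of the --tab approach (a sketch only; the directory, database name, and credentials are placeholders, and the server's secure_file_priv setting must permit writing to that directory, which must also be writable by the mysqld process):

# Dump each table as a .sql schema file plus a tab-delimited .txt data file
mysqldump --user=user --password=pass --single-transaction --tab=/var/dumps/mydb mydb

# Restore: replay the .sql schema files first, then bulk-load the data files
mysqlimport --user=user --password=pass mydb /var/dumps/mydb/*.txt

The restore is faster because LOAD DATA (which mysqlimport wraps) skips the per-row SQL parsing that replaying INSERT statements requires.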
My question is: do you really need a dump, or just a copy?
There is a great way that avoids mysqldump entirely: Linux LVM snapshots:
http://www.lenzg.net/mylvmbackup/
The idea is to hold the database for a split second while LVM makes a hot copy-on-write snapshot (which also takes only a moment), and then the database can continue writing data. The LVM copy is then ready for whatever you want: copy the table files, or open it in a new mysql instance and dump from there (not from the production database!).
This needs some modifications to your system, and the mylvmbackup scripts may not be completely finished yet, but if you have the time you can build on them and do your own work.
By the way: if you really go this way, I'm very interested in the scripts, as I also need a fast solution to clone a database from a production environment to a test system for developers. We decided to use this LVM snapshot feature but, as always, had no time to start on it.
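A minimal sketch of the snapshot sequence (my addition; the volume and mount names are placeholders, and it assumes the MySQL data directory lives on an LVM logical volume):

# Run inside a single mysql session so the read lock is held while the
# snapshot is taken; 'system' runs a shell command from the mysql client
mysql <<'EOF'
FLUSH TABLES WITH READ LOCK;
system lvcreate --snapshot --size 5G --name mysql_snap /dev/vg0/mysql
UNLOCK TABLES;
EOF

# Mount the snapshot read-only and copy the table files at leisure;
# production writes resumed the moment UNLOCK TABLES ran
mount -o ro /dev/vg0/mysql_snap /mnt/mysql_snap

mylvmbackup automates essentially this sequence, including cleaning up the snapshot afterwards.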

About the autoextend functionality of InnoDB in MySQL

I have a server with limited storage:
/apps: available 60 GB; used 59 GB; free 1 GB
We have the MySQL data directory pointed to /apps:
datadir /apps/mysql/data/
/apps/mysql/data/ibdata: 50 GB
Yesterday we saw the server shut down with a "table full" error, which basically means the disk was full.
So we purged one month of old data from one of the tables and now have 10 GB of free space inside InnoDB, but still only 1 GB free on the file system.
Now if I insert again, will InnoDB try to autoextend the ibdata file again, or will it use the 9 GB of free space?
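A hedged note, not part of the original thread: InnoDB reuses free space inside the existing tablespace before extending the file, so new inserts should consume the space freed by the purge first; the ibdata file itself will never shrink, though. The free space InnoDB sees can be checked like this:

-- Free space in the tablespace, as reported per InnoDB table
SELECT table_name, ROUND(data_free / 1024 / 1024) AS free_mb
FROM information_schema.TABLES
WHERE engine = 'InnoDB';

For tables stored in the shared tablespace, data_free reports the free space of the shared tablespace itself.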