Taking a physical backup of a single database in MySQL

We need to take a backup of a large MySQL database. Dumping it with a script seems to take a lot of time, and any error in between requires the whole process to be restarted.
I was curious whether we can take a physical backup of a single database (taking a physical backup of the whole MySQL instance seems possible).
E.g. if there are database schemas like DB1, DB2, DB3, can we take a physical backup of only DB1?
Most of the tables are InnoDB.
Any help is appreciated.

Take a look at mysqldump. It lets you dump a database to an SQL file.
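For example, dumping and later restoring a single database looks like this (credentials and the database name are placeholders):

mysqldump -u root -p DB1 > DB1.sql
mysql -u root -p DB1 < DB1.sql

Note that this is a logical dump (SQL statements) rather than a physical copy of the data files, but it does let you work with one database at a time.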

Related

How to dump a large MySQL database faster?

I have a MySQL database which is 4 TB in size, and when I dump it using mysqldump it takes around 2 days to produce the .sql file.
Can anyone help to speed up this process?
OS: Ubuntu 14
MySQL 5.6
A single database of size 4 TB
Hundreds of tables; the average table size is around 100 to 200 GB
Please help if anyone has a solution to this.
I would:
stop the database,
copy the files to a new location,
restart the database,
process the data from the new place (maybe on another machine).
If you are replicating, just stop replication, process the data, then start replication again.
These methods should improve speed because there are no concurrent processes accessing the database (and none of the locking logic).
On such large databases, I would try to avoid making dumps at all. Just use the MySQL table files if possible.
In any case, 2 days seems like a lot, even for an old machine. Check that you are not swapping, and check your MySQL configuration for possible problems. In general, try to get a better machine. Computers are cheaper than the time spent optimizing.
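Roughly, the stop / copy / restart approach above would look like this (paths and service commands are assumptions; adjust for your distribution, and make sure MySQL has shut down cleanly before copying):

sudo service mysql stop
sudo rsync -a /var/lib/mysql/ /backup/mysql-copy/
sudo service mysql start

The copied data directory can then be attached to a second MySQL instance (possibly on another machine) and processed or dumped there without touching production.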

Large MySQL DB in a single file

I have a large MySQL InnoDB database (115 GB) running in single-file mode on a MySQL server.
I need to move this to file-per-table mode to allow me to optimize and reduce the overall DB size.
I'm looking at various options to do this, but my problem is that there is only a small window of downtime (roughly 5 hours).
1. Set up a clone of the server as a slave. Configure the slave with innodb_file_per_table, take a mysqldump from the main DB, load it into the slave and have it replicating.
I will then look to fail over to the slave.
2. The other option is the usual mysqldump, drop the DB, and then import.
My concern is the time needed to take the mysqldump and the reliability of a dump of such a large size. I also have BLOB data in the DB.
Can anyone offer advice on a good approach?
Thanks
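Edit: for option 1, I assume the clone only needs file-per-table enabled before the dump is loaded in; a minimal, untested my.cnf sketch:

[mysqld]
innodb_file_per_table = 1

As I understand it, existing tables only leave the shared ibdata file when they are rebuilt (e.g. ALTER TABLE ... ENGINE=InnoDB or a dump-and-reload), which is why loading the mysqldump into the freshly configured slave does the conversion.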

Data changes during a database backup

I have a question regarding MySQL DB backups:
1. At 8:00:00 AM, I back up the database using the mysqldump command. It sometimes takes 5 seconds to finish.
2. While the database backup is in progress (at 8:00:01 AM), someone makes some changes to the DB.
Will the backup contain the data changes from step 2?
I have googled but not found an explanation yet. Please help me!
Percona provides a free tool for this purpose called xtrabackup.
For InnoDB tables, MySQL uses log files to store DML changes, so that commands can be rolled back, among other things.
You can back up your database (in a non-locking way), and after you have created the backup you can apply the logs, so that you have a backup which reflects the database state at the moment the backup finished. I don't know the commands without looking them up right now, sorry; you'll have to have a look at the documentation.
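For reference, with recent XtraBackup versions the backup-then-apply-logs flow looks roughly like this (the target directory is only an example; older releases use the innobackupex wrapper instead):

xtrabackup --backup --target-dir=/data/backups/full
xtrabackup --prepare --target-dir=/data/backups/full

The --backup step copies the data files without locking InnoDB tables, and the --prepare step replays the redo log so the copy is consistent as of the point the copy finished.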
It depends on your mysqldump command and which table the dump is on at the time of the update:
If you used the --single-transaction option, then no, it will not contain the late change.
If you did not use that option, then:
if the table being updated has not been dumped yet, then yes, it will have the change;
if the table being updated has already been dumped, then no, it won't have the change.
Chances are, you don't want that change in your backup, because you would like your backup to be consistent to a point in time. Here is some more discussion about all those kinds of things:
How to obtain a correct dump using mysqldump and single-transaction when DDL is used at the same time?
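For illustration (the database name is a placeholder), a dump of InnoDB tables taken with

mysqldump --single-transaction mydb > mydb.sql

is consistent as of the moment the dump started, so a change made one second into the dump will not appear in it.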

Backing up a huge MySQL database frequently

I have a huge MySQL InnoDB database (about 15 GB and 100M rows) on a Debian server.
I have to back up my database every two hours to another server, but without affecting performance.
I looked at MySQL replication, but it is not what I am looking for, because I also want to protect against problems which the application could possibly cause.
What would be the best way of dealing with it?
Thank you very much!
I think you need incremental backups.
You can use Percona XtraBackup to make fast incremental backups. This works only if your database uses only InnoDB tables.
Refer to the documentation about how to create incremental backups:
http://www.percona.com/doc/percona-xtrabackup/howtos/recipes_ibkx_inc.html
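As a rough sketch (directory names are placeholders), a full base backup followed by an incremental one looks like:

xtrabackup --backup --target-dir=/data/backups/base
xtrabackup --backup --target-dir=/data/backups/inc1 --incremental-basedir=/data/backups/base

Each incremental run only copies pages changed since the backup named by --incremental-basedir, which is what keeps a two-hour schedule cheap; see the linked howto for the prepare and restore steps.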
Have you looked at writing a script that uses mysqldump to dump the contents of the DB, transfers it over to the backup server (piping it through SSH would work), and loads it via the command line?
There are options so that mysqldump won't lock the tables and so won't degrade performance too much.
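Something along these lines (the host and database name are placeholders, and credentials are assumed to come from option files on both machines):

mysqldump --single-transaction mydb | gzip | ssh backup-host "gunzip | mysql mydb"

--single-transaction keeps the dump consistent without locking InnoDB tables, and compressing before the SSH hop keeps the transfer small.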

Back up multiple databases [MySQL] at one time?

Hi, I have multiple databases that need to be backed up daily. Currently, I am using a cron job to run a batch file that backs them up. Here is my situation: I have about 10 databases to back up, 3 of which are growing pretty fast. Let me show you the current DB sizes:
DB1 = 35 MB
DB2 = 10 MB
DB3 = 9 MB
the rest: DBx = 5 MB
My batch file code is:
mysqldump -u root -pxxxx DB1 > d:/backup/DB1_datetime.sql
mysqldump -u root -pxxxx DB2 > d:/backup/DB2_datetime.sql
... and so on for the rest
I have run this for 2 days and it seems quite okay to me. But I wonder whether it will affect my website's performance when the batch file executes.
If this method is not good, how do you back up multiple databases while they are live and their size keeps increasing daily?
It depends on the table type. If the tables are InnoDB, then you should be using the --single-transaction flag so that the dumps are coherent. If your tables are MyISAM, you have a small issue: if you run the mysqldump as is, the dump will lock the tables (no writing) while the dump is being taken. This is obviously a huge bottleneck as the databases get larger. You can override this with the --lock-tables=false option, but then you cannot be guaranteed that the backups won't contain some inconsistent data.
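For example, assuming the tables are InnoDB, the batch file lines from the question would just gain the flag:

mysqldump --single-transaction -u root -pxxxx DB1 > d:/backup/DB1_datetime.sql
mysqldump --single-transaction -u root -pxxxx DB2 > d:/backup/DB2_datetime.sql

The dump then runs inside a single transaction, so writes can continue while the backup is taken.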
The ideal solution would be to have a backup replication slave, outside of your production environment, to take the dumps from.
If you're still looking for a way to do that, you'll probably be interested in: this.
It creates a separate dump file for each database, and saves you a lot of time once the initial configuration is done.
Hope it helps, regards.