MySQL backup using AutoMySQLBackup from slave to remote machine - mysql

I am trying to set up a daily backup for a MySQL database from a slave server or MySQL instance. My database is a mixture of InnoDB and MyISAM tables. I have installed AutoMySQLBackup on another machine, and I am trying to take a full backup and a daily incremental backup of the MySQL database from that machine with the help of AutoMySQLBackup.

MySQL full backup:
https://dev.mysql.com/doc/mysql-enterprise-backup/3.12/en/mysqlbackup.full.html
Options on Command Line or in Configuration File?
For clarity, the examples in this manual often show some of the
command-line options that are used with the mysqlbackup commands. For
convenience and consistency, you can include those options that remain
unchanged for most backup jobs into the [mysqlbackup] section of the
MySQL configuration file that you supply to mysqlbackup. mysqlbackup
also picks up the options from the [mysqld] section if they are
present there. Putting the options into a configuration file can
simplify backup administration for you: for example, putting port
information into a configuration file, you can avoid the need to edit
your backup scripts each time the database instance switches to a
different port. See Chapter 14, Configuration Files and Parameters for
details about the use of configuration files.
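For instance, a minimal sketch of such a configuration file (the file name, paths, port, and credentials below are placeholders, not values taken from this setup) could look like:
[mysqld]
datadir=/var/lib/mysql
port=3306
[mysqlbackup]
user=backup_admin
password=secret
port=3306
backup-dir=/backups/mysql/full
$ mysqlbackup --defaults-file=/etc/meb.cnf backup
With the connection and directory options in the file, the backup script only needs the defaults-file and the command name.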
Output in Single Directory or Timestamped Subdirectories?
For convenience, the --with-timestamp option creates uniquely named
subdirectories under the backup directory to hold the output from each
backup job. The timestamped subdirectories make it simpler to
establish retention periods, allowing easy removal and archiving of
backup data that has passed a certain age.
If you do use a single backup directory (that is, if you omit the
--with-timestamp option), either specify a new unique directory name for each backup job, or specify the --force option to overwrite
existing backup files.
For incremental backups that use the --incremental-base option to specify the directory containing the previous backup, you might prefer not to use the --with-timestamp option and instead generate a sequence of predictable directory names with your backup script.
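As a rough sketch of that approach (the directory layout and names here are made up for illustration), a backup script could rotate predictable directory names like this:
$ PREV=/incr-backup/incr-0041   # directory written by the previous run
$ NEXT=/incr-backup/incr-0042   # directory for this run
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
    --incremental-base=dir:$PREV \
    --incremental-backup-dir=$NEXT \
    backup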
Always Full Backup, or Full Backup plus Incremental Backups?
If your InnoDB data volume is small, or if your database is so busy
that a high percentage of data changes between backups, you might want
to run a full backup each time. However, you can usually save time and
storage space by running periodic full backups and then several
incremental backups in between them, as described in Section 4.3.2,
“Making a Differential or Incremental Backup”.
Use Compression or Not?
Creating a compressed backup can save you considerable storage space and reduce I/O usage significantly. With the LZ4 compression method (introduced in release 3.10), the overhead for compression is quite low. When backups move from the faster disk system holding the active database files to slower storage, compression often significantly lowers the overall backup time, and it can reduce restoration time as well. In general, we recommend LZ4 compression over no compression for most users, as LZ4-based backups often finish in a shorter time. However, test MySQL Enterprise Backup within your environment to determine the most efficient approach.
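As a hedged illustration (the paths are placeholders, and you should confirm that your mysqlbackup release supports the lz4 method), a compressed full backup could be taken like this:
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --with-timestamp \
    --backup-dir=/full-backup \
    --compress --compress-method=lz4 \
    backup
Note that, as the incremental-backup notes below point out, the --compress option cannot be combined with incremental backups.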
When an InnoDB table is dropped and you do a subsequent incremental backup, the apply-log step removes the corresponding .ibd file from the full backup directory. Since the backup program cannot have the same insight into the purpose of non-InnoDB files, when a non-InnoDB file is removed between the time of a full backup and a subsequent incremental backup, the apply-log step does not remove that file from the full backup directory. Thus, restoring a backup could result in a deleted file reappearing.
Creating Incremental Backups Using Only the Redo Log
The --incremental-with-redo-log-only option might offer some benefits over
the --incremental option for creating an incremental backup:
The changes to InnoDB tables are determined based on the contents of
the InnoDB redo log. Since the redo log files have a fixed size that
you know in advance, it can require less I/O to read the changes from
them than to scan the InnoDB tablespace files to locate the changed
pages, depending on the size of your database, amount of DML activity,
and size of the redo log files.
Since the redo log files act as a circular buffer, with records of
older changes being overwritten as new DML operations take place, you
must take new incremental backups on a predictable schedule dictated
by the size of the log files and the amount of redo data generated for
your workload. Otherwise, the redo log might not reach back far enough
to record all the changes since the previous incremental backup, in
which case mysqlbackup will quickly determine that it cannot proceed
and will return an error. Your backup script should be able to catch
that error and then perform an incremental backup with the
--incremental option instead.
For example:
To calculate the size of the redo log, issue the command SHOW
VARIABLES LIKE 'innodb_log_file%' and, based on the output, multiply
the innodb_log_file_size setting by the value of
innodb_log_files_in_group. To compute the redo log size at the
physical level, look into the datadir directory of the MySQL instance
and sum up the sizes of the files matching the pattern ib_logfile*.
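For example (assuming the default datadir /var/lib/mysql; the values are illustrative):
mysql> SHOW VARIABLES LIKE 'innodb_log_file%';
$ du -ch /var/lib/mysql/ib_logfile*
If, say, innodb_log_file_size is 512MB and innodb_log_files_in_group is 2, the total redo log capacity is 512MB x 2 = 1GB; du -c on the ib_logfile* files should report roughly the same total.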
The InnoDB LSN value corresponds to the number of bytes written to the
redo log. To check the LSN at some point in time, issue the command
SHOW ENGINE INNODB STATUS and look under the LOG heading. While
planning your backup strategy, record the LSN values periodically and
subtract the earlier value from the current one to calculate how much
redo data is generated each hour, day, and so on.
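For illustration, the LSN appears under the LOG heading of that output roughly like this (the number shown is made up):
mysql> SHOW ENGINE INNODB STATUS\G
...
LOG
---
Log sequence number 2654255716
...
Recording this value, for example, once a day and subtracting consecutive readings gives an estimate of the daily redo volume.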
Prior to MySQL 5.5, it was common practice to keep the redo logs
fairly small to avoid a long startup time when the MySQL server was
killed rather than shut down normally. With MySQL 5.5 and higher, the
performance of crash recovery is significantly improved, as described
in Optimizing InnoDB Configuration Variables, so that you can make
your redo log files bigger if that helps your backup strategy and your
database workload.
This type of incremental backup is not as forgiving of too-low --start-lsn values as the standard --incremental option. For example, you cannot make a full backup and then make a series of
--incremental-with-redo-log-only backups all using the same --start-lsn value. Make sure to specify the precise end LSN of the previous backup as the start LSN of the next incremental backup; do
not use arbitrary values.
Note To ensure the LSN values match up exactly between successive
incremental backups, it is recommended that you always use the
--incremental-base option when you use the --incremental-with-redo-log-only option.
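Putting those two options together, a redo-log-only incremental backup could be invoked roughly like this (the paths are placeholders, in the style of the examples later on this page):
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf \
    --incremental-with-redo-log-only \
    --incremental-base=dir:/incr-backup/wednesday \
    --incremental-backup-dir=/incr-backup/thursday \
    backup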
To judge whether this type of incremental backup is practical and
efficient for a particular MySQL instance:
Measure how fast the data changes within the InnoDB redo log files.
Check the LSN periodically to decide how much redo data accumulates
over the course of some number of hours or days.
Compare the rate of redo log accumulation with the size of the redo
log files. Use this ratio to see how often to take an incremental
backup, in order to avoid the likelihood of the backup failing because
the historical data are not available in the redo log. For example, if
you are producing 1GB of redo log data per day, and the combined size
of your redo log files is 7GB, you would schedule incremental backups
more frequently than once a week. You might perform incremental
backups every day or two, to avoid a potential issue when a sudden
flurry of updates produced more redo than usual.
Benchmark incremental backup times using both the --incremental and
--incremental-with-redo-log-only options, to confirm if the redo log backup technique performs faster and with less overhead than the
traditional incremental backup method. The result could depend on the
size of your data, the amount of DML activity, and the size of your
redo log files. Do your testing on a server with a realistic data
volume and a realistic workload. For example, if you have huge redo
log files, reading them in the course of an incremental backup could
take as long as reading the InnoDB data files using the traditional
incremental technique. Conversely, if your data volume is large,
reading all the data files to find the few changed pages could be less
efficient than processing the much smaller redo log files.
Other Considerations for Incremental Backups
The incremental backup feature is primarily intended for InnoDB
tables, or non-InnoDB tables that are read-only or rarely updated.
Incremental backups detect changes at the level of pages in the InnoDB
data files, as opposed to table rows; each page that has changed is
backed up. Thus, the space and time savings are not exactly
proportional to the percentage of changed InnoDB rows or columns.
For non-InnoDB files, the entire file is included in an incremental backup if that file has changed since the previous backup, which means the savings in backup resources are less significant than for InnoDB tables.
You cannot perform incremental backups with the --compress option.
When making an incremental backup that is based on a backup (full or
incremental) created using the --no-locking option, use the
--skip-binlog option to skip the backing up of the binary log, as binary log information will be unavailable to mysqlbackup in that
situation.
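A sketch of such a run (the directory names are assumed):
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
    --incremental-base=dir:/full-backup/no-locking-run \
    --skip-binlog \
    --incremental-backup-dir=/incr-backup/friday \
    backup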
Examples of Incremental Backups
This example uses mysqlbackup to make an incremental backup of a MySQL server, including all databases and tables. We show two alternatives, one using the --incremental-base option and the other using the --start-lsn option.
With the --incremental-base option, you do not have to keep track of LSN values between one backup and the next. Instead, you can just specify the directory of the previous backup (either full or incremental), and mysqlbackup figures out the starting point for this backup based on the metadata of the earlier one. Because you need a known set of directory names, you might want to use hardcoded names or generate a sequence of names in your own backup script, rather than using the --with-timestamp option.
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--incremental-base=dir:/incr-backup/wednesday \
--incremental-backup-dir=/incr-backup/thursday \
backup
...many lines of output...
mysqlbackup: Backup created in directory '/incr-backup/thursday'
mysqlbackup: start_lsn: 2654255717
mysqlbackup: incremental_base_lsn: 2666733462
mysqlbackup: end_lsn: 2666736714
101208 17:14:58 mysqlbackup: mysqlbackup completed OK!
Note that if your last backup was a single-file backup instead of a directory backup, you can still use --incremental-base: for directory_path in dir:directory_path, specify the location of the temporary directory you supplied with the --backup-dir option during the full backup.
As an alternative to specifying --incremental-base=dir:directory_path, you can tell mysqlbackup to query the end_lsn value of the last successful backup as recorded in the backup_history table on the server, using --incremental-base=history:last_backup (this requires that the last backup was made with mysqlbackup connected to the server).
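For example (the directory name is a placeholder, in the style of the other examples on this page):
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
    --incremental-base=history:last_backup \
    --incremental-backup-dir=/incr-backup/saturday \
    backup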
You can also use the --start-lsn option to specify where the incremental backup should start. You have to record the LSN of the previous backup reported by mysqlbackup at the end of the backup:
mysqlbackup: Was able to parse the log up to lsn 2654255716
The number is also recorded in the meta/backup_variables.txt file in the folder specified by --backup-dir during the backup. Supply that number to mysqlbackup using the --start-lsn option; the incremental backup then includes all changes that came after the specified LSN. Since the location of the previous backup is not significant in this case, you can use --with-timestamp to create named subdirectories automatically.
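For example, assuming the previous full backup used --backup-dir=/full-backup (the exact layout of backup_variables.txt can vary between releases, so treat this as a sketch):
$ grep end_lsn /full-backup/meta/backup_variables.txt
end_lsn=2654255716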
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--start-lsn=2654255716 \
--with-timestamp \
--incremental-backup-dir=/incr-backup \
backup
...many lines of output...
mysqlbackup: Backup created in directory '/incr-backup/2010-12-08_17-14-48'
mysqlbackup: start_lsn: 2654255717
mysqlbackup: incremental_base_lsn: 2666733462
mysqlbackup: end_lsn: 2666736714
101208 17:14:58 mysqlbackup: mysqlbackup completed OK!
To create an incremental backup image instead, use the following command, specifying with --incremental-backup-dir a temporary directory for storing the metadata for the backup and some temporary files:
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--start-lsn=2654255716 \
--with-timestamp \
--incremental-backup-dir=/incr-tmp \
--backup-image=/incr-backup/incremental_image.bi \
backup-to-image
In the following example though, because --backup-image does not provide a full path to the image file to be created, the incremental backup image is created under the folder specified by --incremental-backup-dir:
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--start-lsn=2654255716 \
--with-timestamp \
--incremental-backup-dir=/incr-images \
--backup-image=incremental_image1.bi \
backup-to-image
https://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/mysqlbackup.incremental.html

Related

Rsync MariaDB Data Folder Docker Safe?

I'm looking for some insight into whether or not rsyncing a copy of the data folder from MariaDB while it is running in Docker will provide a usable backup. I'm deploying several containers with mapped folders in a production environment using Docker.
I'm thinking of using rsnapshot for nightly backups, as it uses hard links incrementally and I can specify the number of weekly / daily / monthly copies to keep. For the code and actual files I suspect this will work wonderfully.
For MariaDB I could run mysqldump every night but this would essentially use a new copy of the database each time instead of an incremental one. If I could rsync the data folder and be 100% sure the backup would be fully intact it would be advantageous I presume. Is there any chance this backup method would fail if data was written during the rsync? Would all the files inside of MariaDB change with daily usage (it wouldn't be advantageous if so)?
This is probably a frequent question, but I can't find a really exact match right now.
The answer is NO — you can't use filesystem-level copy tools to back up a MySQL database unless the mysqld process is stopped. In a Docker environment, I would expect the container to stop if the mysqld process stops.
Even if there are no queries running, the InnoDB engine is probably doing writes in the background to flush pages from memory into the tablespace, clean up rolled-back transactions, or finish some deferred index merges.
If you try to use rsync or cp or any other filesystem-level tools to copy InnoDB files, you will only get corrupted files that can't be restored.
Some people use LVM snapshots to get an atomic snapshot of the whole filesystem as of a single instant, and this can be used to get quick backups.
Another useful tool is Percona XtraBackup, which copies the InnoDB tablespace files while it is also copying the InnoDB transaction log continually. Only with both of these in sync can the backup be restored. Read the documentation here: https://www.percona.com/doc/percona-xtrabackup/LATEST/index.html
At my current job, we use Percona XtraBackup to make nightly backups for thousands of MySQL instances. We run Percona Server (not MariaDB) in Docker pods, and Percona XtraBackup runs as another container in the pod. It works very well, and it's free, open-source software.
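For reference, a typical XtraBackup run (paths and credentials here are placeholders) consists of a backup step followed by a prepare step before the copy can be restored:
$ xtrabackup --backup --user=backup_user --password=secret --target-dir=/backups/base
$ xtrabackup --prepare --target-dir=/backups/base
Incremental backups are supported as well via the --incremental-basedir option; see the linked documentation for the full workflow.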

How to restore innodb tables without ibdata1 and *.ibd files

For some reason the files ibdata1, ib_logfile0, and ib_logfile1 were deleted from the hard drive, the MySQL server was restarted, and of course one of the databases stopped working with the error "db.table does not exist!". Is it possible to somehow restore these files if there are only *.frm and db.opt files?
Thanks.
Take a disk image as soon as possible, or mount the MySQL partition read-only (if it was not on /).
Then you need to compile the data recovery toolkit (install dependencies if "make" fails):
make
Then scan the disk image with page_parser. It will find InnoDB pages and sort them per page type, per index_id.
./page_parser -f /path/to/disk.img
Then you need the table structures (either take them from an old backup or restore them from the .frm files). You need the structure to compile constraints_parser, a tool that fetches records from InnoDB page(s).
./constraints_parser -5f pages-XXXXXX/FIL_PAGE_INDEX/0-x/
Save the output in a file. The tool also writes a LOAD DATA statement to stderr that you can use to upload the dump back into MySQL.
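A sketch of that step (the page directory and file names here are made up; check the tool's documentation for the exact options for your version):
$ ./constraints_parser -5f pages-1393158387/FIL_PAGE_INDEX/0-123/ \
    > table_name.dump 2> table_name_load.sql
table_name.dump then contains the recovered rows, and table_name_load.sql contains the generated LOAD DATA statement to import them back into the recreated table.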
Check the documentation on Percona's website for more details.
From my experience, 70-80% of the data should be recoverable after this kind of accident.

mysql best backup method? and how to dump backup to target directory

What is the best method to do a MySQL backup with compression? Also, how do you dump it to a specific directory such as C:\targetdir?
The mysqldump command outputs CREATE TABLE and INSERT statements that are sufficient to recreate your whole database. You can back up individual tables or databases with this command.
You can easily compress this. If you want it to be compressed as it goes, you will need some sort of streaming tool for the command line. On UNIX it would be mysqldump ... | gzip. On Windows, you will have to find a tool that works with pipes.
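As a concrete (hedged) illustration, with placeholder credentials and paths:
$ mysqldump -u root -p --all-databases | gzip > /backups/all-databases.sql.gz
C:\> mysqldump -u root -p --all-databases > C:\targetdir\all-databases.sql
On Windows you can compress the resulting file afterwards with any archiver if a piping tool is not available.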
This I think is what you are looking for. I will list other options just because.
FLUSH TABLES WITH READ LOCK will flush all data to the disk and lock them from changing which you can do while you are making a copy of the data folder.
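A rough sketch of that sequence (the copy command and paths are just examples):
mysql> FLUSH TABLES WITH READ LOCK;
$ cp -a /var/lib/mysql /backups/mysql-copy
mysql> UNLOCK TABLES;
Keep the mysql session that took the lock open for the duration of the copy; the lock is released when that session disconnects.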
Keep in mind, when doing restores, if you want to preserve the full capability of MySQL bin logs, you will not want to restore parts of a database by touching the files directly. Best option is to have an alternate data dir with restored files and dump from there, then feed to your production database using regular mysql connection channels. Any direct changes to the filesystem will not be recorded by binlogs.
If you restore the whole database using the files, you will be OK. Just not if you do it in pieces.
mysqldump does not have this problem.
Replication will allow you to back up to another instance of MySQL running on the same or different machine.
binlogs. Given a static copy of a database, you can use these to move it forward in time. binlogs are a log of all the commands that ever changed the data. If you have binlogs back to day one, then you may already have what you are looking for. You can run all the commands from the binlogs from day one to any date you wish and then you have a copy of the database from that date.
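For example, replaying binary logs on top of a restored backup is typically done with mysqlbinlog (file names and the date are placeholders):
$ mysqlbinlog binlog.000001 binlog.000002 | mysql -u root -p
$ mysqlbinlog --stop-datetime="2010-12-08 17:00:00" binlog.000003 | mysql -u root -p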
I recommend checking out Percona XtraBackup. It's a GPL licensed alternative to MySQL's paid Enterprise Backup tool and can create consistent non-blocking backups from databases even when they are written to. See this article for more information on why you'd want to use this over mysqldump.
You could use a script like AutoMySQLBackup, which automatically does a backup every day, keeping daily, weekly and monthly backups, keeping your backup directory pretty clean and uncluttered, while still providing you a long history of backups.
The backups are also compressed, naturally.

mysql binary log only log deletes

Is there any way in MySQL to only log deletes? I tried using the mysql binary log option but unfortunately, there are too many inserts to my database and the file swells instantly. If I let it grow for more than a day or so it will take up all the room on the server. I just want to log deletes for disaster recovery purposes. Any thoughts?
A log file contains all changes to a database. It can be used to roll a backup forward:
After a backup file has been restored, the events in the binary log that were recorded after the backup was made are re-executed. These events bring databases up to date from the point of the backup.
But in order to roll a database forward, the log has to contain all updates, not just deletes. So I don't think you can change it to just log deletes.
Is it an option to make full backups more often? After a full backup, you delete the old log files. That's a good way to keep the size of the binary log under control.
The MySQL binary logs take up disk space. To free up space, purge them from time to time. One way to do this is by deleting the binary logs that are no longer needed, such as when we make a full backup:
shell> mysqldump --single-transaction --flush-logs --master-data=2 \
--all-databases --delete-master-logs > backup_sunday_1_PM.sql
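You can also purge old binary logs manually from the server once a full backup is safely stored, for example (the date is only an illustration):
mysql> PURGE BINARY LOGS BEFORE '2010-12-01 00:00:00';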

Include ib_logfiles in backup?

The short question:
Is it safe to skip the ib_logfile* files while creating a custom-made backup of a MySQL installation with the InnoDB engine?
The custom-made backup just copies all (or a subset) of /var/lib/mysql plus /etc/my.cnf to a safe place.
As far as I know, the ib_logfile* files are recreated by MySQL on startup, and the files are really huge, so it may not be wise to store them in the backup. And yes, assume MySQL is (correctly) stopped before the backup.
It's not safe to back up a database in the manner you're describing.
The reason is that you cannot know if some data pages are still in MySQL Server's memory, pending an I/O flush to disk. So copying the files has a good chance of copying garbage, whether or not you include the ib_log files.
Don't do what you're doing. Use mysqlbackup or InnoDB's Hot Backup product.
edit:
Read Farhan Mashraqi's blog article about InnoDB Performance Optimization, in which he says you can delete the ib_log files if you have shut down MySQL Server.
InnoDB log resizing is not as simple as changing an option and restarting. You need to shut down the MySQL server and make sure it shuts down normally (check the error log for errors to confirm). Then move away the InnoDB log files ib_log*. Then start the MySQL server and check the error log files to see if it successfully created new log files.
So yes, you should be able to do a filesystem copy of the ibdata files only. You don't need to include the ib_log files, because they should be recreated as MySQL Server starts up. I reiterate, however, that I do not recommend backing up databases with filesystem copy.
Suppose you back up a database today in this manner, using MySQL 5.0. Someday later you need to restore the backup, but by that time you'll be using some future version like MySQL 6.5 for example. Do the ibdata files still use the same internal format in that future version? Will MySQL 6.5 read those files you backed up today? You can't be sure. The backup format is intended to be more portable.