Is there any way in MySQL to only log deletes? I tried using the mysql binary log option but unfortunately, there are too many inserts to my database and the file swells instantly. If I let it grow for more than a day or so it will take up all the room on the server. I just want to log deletes for disaster recovery purposes. Any thoughts?
The binary log contains all changes to a database. It can be used to roll a backup forward:
After a backup file has been restored, the events in the binary log that were recorded after the backup was made are re-executed. These events bring databases up to date from the point of the backup.
But in order to roll a database forward, the log has to contain all updates, not just deletes. So I don't think you can change it to just log deletes.
Is it an option to make full backups more often? After a full backup, you delete the old log files. That's a good way to keep the size of the binary log under control.
The MySQL binary logs take up disk space. To free up space, purge them from time to time. One way to do this is by deleting the binary logs that are no longer needed, such as when we make a full backup:
shell> mysqldump --single-transaction --flush-logs --master-data=2 \
--all-databases --delete-master-logs > backup_sunday_1_PM.sql
Related
I am trying to set up a daily backup for a MySQL database from a slave server or MySQL instance. My database is a mixture of InnoDB and MyISAM tables. I have installed AutoMySQLBackup on another machine, and I am trying to take a full backup and a daily incremental backup of the MySQL database from that machine with the help of AutoMySQLBackup.
MySQL full backup:
https://dev.mysql.com/doc/mysql-enterprise-backup/3.12/en/mysqlbackup.full.html
Options on Command Line or in Configuration File?
For clarity, the examples in this manual often show some of the
command-line options that are used with the mysqlbackup commands. For
convenience and consistency, you can include those options that remain
unchanged for most backup jobs into the [mysqlbackup] section of the
MySQL configuration file that you supply to mysqlbackup. mysqlbackup
also picks up the options from the [mysqld] section if they are
present there. Putting the options into a configuration file can
simplify backup administration for you: for example, putting port
information into a configuration file, you can avoid the need to edit
your backup scripts each time the database instance switches to a
different port. See Chapter 14, Configuration Files and Parameters for
details about the use of configuration files.
Output in Single Directory or Timestamped Subdirectories?
For convenience, the --with-timestamp option creates uniquely named
subdirectories under the backup directory to hold the output from each
backup job. The timestamped subdirectories make it simpler to
establish retention periods, allowing easy removal and archiving of
backup data that has passed a certain age.
If you do use a single backup directory (that is, if you omit the
--with-timestamp option), either specify a new unique directory name for each backup job, or specify the --force option to overwrite
existing backup files.
For incremental backups that use the --incremental-base option to
specify the directory containing the previous backup, you might prefer
not to use the --with-timestamp option and instead generate a sequence
of directory names with your backup script, in order to make the
directory names predictable.
Always Full Backup, or Full Backup plus Incremental Backups?
If your InnoDB data volume is small, or if your database is so busy
that a high percentage of data changes between backups, you might want
to run a full backup each time. However, you can usually save time and
storage space by running periodic full backups and then several
incremental backups in between them, as described in Section 4.3.2,
“Making a Differential or Incremental Backup”.
Use Compression or Not?
Creating a compressed backup can save you considerable storage space
and reduce I/O usage significantly. And with the LZ4 compression
method (introduced in release 3.10), the overhead for processing
compression is quite low. In cases where database backups are moving
from a faster disk system where the active database files sit to a
possibly slower storage, compression will often significantly lower
the overall backup time. It can result in reduced restoration time as
well. In general, we recommend LZ4 compression over no compression for
most users, as LZ4-based backups often finish in a shorter time
period. However, test out MySQL Enterprise Backup within your
environment to determine what is the most efficient approach.
The incremental backup feature is primarily intended for InnoDB tables, or non-InnoDB tables that are read-only or rarely updated. For non-InnoDB files, the entire file is included in an incremental backup if that file changed since the previous backup.
You cannot perform incremental backups with the --compress option.
Incremental backups detect changes at the level of pages in the InnoDB data files, as opposed to table rows; each page that has changed is backed up. Thus, the space and time savings are not exactly proportional to the percentage of changed InnoDB rows or columns.
When an InnoDB table is dropped and you do a subsequent incremental backup, the apply-log step removes the corresponding .ibd file from the full backup directory. Since the backup program cannot have the same insight into the purpose of non-InnoDB files, when a non-InnoDB file is removed between the time of a full backup and a subsequent incremental backup, the apply-log step does not remove that file from the full backup directory. Thus, restoring a backup could result in a deleted file reappearing.
Creating Incremental Backups Using Only the Redo Log
The --incremental-with-redo-log-only option might offer some benefits
over the --incremental option for creating an incremental backup:
The changes to InnoDB tables are determined based on the contents of
the InnoDB redo log. Since the redo log files have a fixed size that
you know in advance, it can require less I/O to read the changes from
them than to scan the InnoDB tablespace files to locate the changed
pages, depending on the size of your database, amount of DML activity,
and size of the redo log files.
Since the redo log files act as a circular buffer, with records of
older changes being overwritten as new DML operations take place, you
must take new incremental backups on a predictable schedule dictated
by the size of the log files and the amount of redo data generated for
your workload. Otherwise, the redo log might not reach back far enough
to record all the changes since the previous incremental backup, in
which case mysqlbackup will quickly determine that it cannot proceed
and will return an error. Your backup script should be able to catch
that error and then perform an incremental backup with the
--incremental option instead.
For example:
To calculate the size of the redo log, issue the command SHOW
VARIABLES LIKE 'innodb_log_file%' and, based on the output, multiply
the innodb_log_file_size setting by the value of
innodb_log_files_in_group. To compute the redo log size at the
physical level, look into the datadir directory of the MySQL instance
and sum up the sizes of the files matching the pattern ib_logfile*.
The InnoDB LSN value corresponds to the number of bytes written to the
redo log. To check the LSN at some point in time, issue the command
SHOW ENGINE INNODB STATUS and look under the LOG heading. While
planning your backup strategy, record the LSN values periodically and
subtract the earlier value from the current one to calculate how much
redo data is generated each hour, day, and so on.
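The capacity calculation above is straightforward to script. The sketch below uses hypothetical example values for innodb_log_file_size and innodb_log_files_in_group; on a real server you would read them from `SHOW VARIABLES LIKE 'innodb_log_file%'`, and the datadir path is just a typical default.

```shell
#!/bin/sh
# Redo log capacity = innodb_log_file_size * innodb_log_files_in_group.
# The values here are hypothetical; on a real server, read them with:
#   mysql -e "SHOW VARIABLES LIKE 'innodb_log_file%'"
innodb_log_file_size=$((512 * 1024 * 1024))  # 512MB per file (example value)
innodb_log_files_in_group=2                  # example value

capacity=$((innodb_log_file_size * innodb_log_files_in_group))
echo "redo log capacity: $capacity bytes"

# Alternatively, compute it at the physical level by summing the
# ib_logfile* sizes in the datadir (/var/lib/mysql is a typical default;
# substitute your instance's actual datadir).
datadir=${DATADIR:-/var/lib/mysql}
if ls "$datadir"/ib_logfile* >/dev/null 2>&1; then
    total=0
    for f in "$datadir"/ib_logfile*; do
        total=$((total + $(wc -c < "$f")))
    done
    echo "ib_logfile total: $total bytes"
fi
```

Comparing this capacity against the LSN deltas you record over time tells you how many days of changes the redo log can hold.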
Prior to MySQL 5.5, it was common practice to keep the redo logs
fairly small to avoid a long startup time when the MySQL server was
killed rather than shut down normally. With MySQL 5.5 and higher, the
performance of crash recovery is significantly improved, as described
in Optimizing InnoDB Configuration Variables, so that you can make
your redo log files bigger if that helps your backup strategy and your
database workload.
This type of incremental backup is not as forgiving of too-low
--start-lsn values as the standard --incremental option. For example,
you cannot make a full backup and then make a series of
--incremental-with-redo-log-only backups all using the same --start-lsn
value. Make sure to specify the precise end LSN of the previous backup
as the start LSN of the next incremental backup; do not use arbitrary
values.
Note To ensure the LSN values match up exactly between successive
incremental backups, it is recommended that you always use the
--incremental-base option when you use the --incremental-with-redo-log-only option.
To judge whether this type of incremental backup is practical and
efficient for a particular MySQL instance:
Measure how fast the data changes within the InnoDB redo log files.
Check the LSN periodically to decide how much redo data accumulates
over the course of some number of hours or days.
Compare the rate of redo log accumulation with the size of the redo
log files. Use this ratio to see how often to take an incremental
backup, in order to avoid the likelihood of the backup failing because
the historical data are not available in the redo log. For example, if
you are producing 1GB of redo log data per day, and the combined size
of your redo log files is 7GB, you would schedule incremental backups
more frequently than once a week. You might perform incremental
backups every day or two, to avoid a potential issue when a sudden
flurry of updates produced more redo than usual.
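The retention arithmetic in that example is simple enough to script. This sketch uses the illustrative 1GB/day and 7GB figures from the text; you would measure your own generation rate from periodic LSN deltas, and halving the maximum interval is just one way to leave headroom for bursts.

```shell
#!/bin/sh
# Worked example from the text: 1GB of redo generated per day against
# 7GB of total redo log capacity. Illustrative numbers only.
redo_per_day_gb=1
redo_capacity_gb=7

# Maximum days between incremental backups before the redo log wraps
# and older changes are overwritten.
max_interval_days=$((redo_capacity_gb / redo_per_day_gb))
echo "redo log covers at most $max_interval_days days of changes"

# Back up well inside that window to survive a sudden flurry of DML;
# half the maximum interval is a simple rule of thumb.
suggested_interval_days=$((max_interval_days / 2))
echo "suggested backup interval: every $suggested_interval_days days"
```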
Benchmark incremental backup times using both the --incremental and
--incremental-with-redo-log-only options, to confirm if the redo log backup technique performs faster and with less overhead than the
traditional incremental backup method. The result could depend on the
size of your data, the amount of DML activity, and the size of your
redo log files. Do your testing on a server with a realistic data
volume and a realistic workload. For example, if you have huge redo
log files, reading them in the course of an incremental backup could
take as long as reading the InnoDB data files using the traditional
incremental technique. Conversely, if your data volume is large,
reading all the data files to find the few changed pages could be less
efficient than processing the much smaller redo log files.
Other Considerations for Incremental Backups
The incremental backup feature is primarily intended for InnoDB
tables, or non-InnoDB tables that are read-only or rarely updated.
Incremental backups detect changes at the level of pages in the InnoDB
data files, as opposed to table rows; each page that has changed is
backed up. Thus, the space and time savings are not exactly
proportional to the percentage of changed InnoDB rows or columns.
For non-InnoDB files, the entire file is included in an incremental
backup if that file has changed since the previous backup, which means
the savings in backup resources are less significant than with InnoDB
tables.
You cannot perform incremental backups with the --compress option.
When making an incremental backup that is based on a backup (full or
incremental) created using the --no-locking option, use the
--skip-binlog option to skip the backing up of the binary log, as binary log information will be unavailable to mysqlbackup in that
situation.
Examples of Incremental Backups
This example uses mysqlbackup to make an incremental backup of a MySQL server, including all databases and tables. We show two alternatives, one using the --incremental-base option and the other using the --start-lsn option.
With the --incremental-base option, you do not have to keep track of LSN values between one backup and the next. Instead, you can just specify the directory of the previous backup (either full or incremental), and mysqlbackup figures out the starting point for this backup based on the metadata of the earlier one. Because you need a known set of directory names, you might want to use hardcoded names or generate a sequence of names in your own backup script, rather than using the --with-timestamp option.
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--incremental-base=dir:/incr-backup/wednesday \
--incremental-backup-dir=/incr-backup/thursday \
backup
...many lines of output...
mysqlbackup: Backup created in directory '/incr-backup/thursday'
mysqlbackup: start_lsn: 2654255717
mysqlbackup: incremental_base_lsn: 2666733462
mysqlbackup: end_lsn: 2666736714
101208 17:14:58 mysqlbackup: mysqlbackup completed OK!
Note that if your last backup was a single-file backup instead of a directory backup, you can still use --incremental-base: for dir:directory_path, specify the location of the temporary directory you supplied with the --backup-dir option during the full backup.
As an alternative to specifying --incremental-base=dir:directory_path, you can tell mysqlbackup to query the end_lsn value from the last successful backup as recorded in the backup_history table on the server, using --incremental-base=history:last_backup (this requires that the last backup was made with mysqlbackup connected to the server).
You can also use the --start-lsn option to specify where the incremental backup should start. You have to record the LSN of the previous backup reported by mysqlbackup at the end of the backup:
mysqlbackup: Was able to parse the log up to lsn 2654255716
The number is also recorded in the meta/backup_variables.txt file in the folder specified by --backup-dir during the backup. Supply that number to mysqlbackup using the --start-lsn option. The incremental backup then includes all changes that came after the specified LSN. Since the location of the previous backup is not significant in this case, you can use --with-timestamp to create named subdirectories automatically.
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--start-lsn=2654255716 \
--with-timestamp \
--incremental-backup-dir=/incr-backup \
backup
...many lines of output...
mysqlbackup: Backup created in directory '/incr-backup/2010-12-08_17-14-48'
mysqlbackup: start_lsn: 2654255717
mysqlbackup: incremental_base_lsn: 2666733462
mysqlbackup: end_lsn: 2666736714
101208 17:14:58 mysqlbackup: mysqlbackup completed OK!
To create an incremental backup image instead, use the following command, specifying with --incremental-backup-dir a temporary directory for storing the metadata for the backup and some temporary files:
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--start-lsn=2654255716 \
--with-timestamp \
--incremental-backup-dir=/incr-tmp \
--backup-image=/incr-backup/incremental_image.bi \
backup-to-image
In the following example, though, because --backup-image does not provide a full path to the image file to be created, the incremental backup image is created under the folder specified by --incremental-backup-dir:
$ mysqlbackup --defaults-file=/home/pekka/.my.cnf --incremental \
--start-lsn=2654255716 \
--with-timestamp \
--incremental-backup-dir=/incr-images \
--backup-image=incremental_image1.bi \
backup-to-image
https://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/mysqlbackup.incremental.html
Every night the MySQL database is saved by a cronjob which uses mysqldump.
During the day, while the CakePHP application is running, I would like to have a log file that could be used as a backup in case damage happens during the day.
For recovery, it would be necessary to first restore from the mysqldump that was created at night, and then replay the log file to recover the database changes from the current day.
Does there exist such a logfile possibility and where or how could I get it?
Or are there any other ways to get a proper backup?
It's built into MySQL; you wouldn't typically do this in your application.
The binlogs do what you say: they are binary files that contain every transaction that hits the database. So, if you had a nightly backup and the server crashed, you would recover the binlogs, export the transactions from the datetime of the last transaction in the restored database, and essentially re-run or re-play every transaction from that point.
And having used it more than 10 times before... it is awesome. But you have to turn it on in your my.cnf file (google that) and manage the binlog files, as they do get big on busy servers.
I have tried several commands (FLUSH LOGS, PURGE MASTER) but none deletes the log files (when previously activated) or the log tables (mysql/slow_log.CSV and mysql/general_log.CSV and their .frm and .CSM counterparts).
SHOW BINARY LOGS returns "You are not using binary logging".
Edit: I found this simple solution to clear the table logs (but not yet the file logs) using MySQL commands:
TRUNCATE mysql.general_log;
TRUNCATE mysql.slow_log;
FLUSH LOGS just closes and reopens log files. If the log files are large, it won't reduce them. If you're on Linux, you can use mv to rename log files while they're in use, and then after FLUSH LOGS, you know that MySQL is writing to a new, small file, and you can remove the old big files.
Binary logs are different. To eliminate old binlogs, use PURGE BINARY LOGS. Make sure your slaves (if any) aren't still using the binary logs. That is, run SHOW SLAVE STATUS to see what binlog file they're working on, and don't purge that file or later files.
Also keep in mind that binlogs are useful for point-in-time recovery in case you need to restore from backups and then reapply binlogs to bring the database up to date. If you need to use binlogs in this manner, don't purge the binlogs that have been written since your last backup.
If you are on Amazon RDS, executing this twice will do the trick:
PROMPT> CALL mysql.rds_rotate_slow_log;
PROMPT> CALL mysql.rds_rotate_general_log;
Source: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.MySQL.html
It seems binary logging is not enabled on your server, and I guess you want to delete the old log files that were created while binary logging was enabled. You can delete them manually using the 'rm' command if you want. If you want to enable binary logging, you can do so by updating the configuration file (but that needs a restart of the server if it is already running). You can refer to the links below.
http://dev.mysql.com/doc/refman/5.0/en/replication-options-binary-log.html#option_mysqld_log-bin
http://dev.mysql.com/doc/refman/5.0/en/replication-options-binary-log.html#sysvar_log_bin
What is the best method to do a MySQL backup with compression? Also, how do you dump that to a specific directory, such as C:\targetdir?
The mysqldump command will output CREATE TABLE and INSERT statements that are sufficient to recreate your whole database. You can back up individual tables or databases with this command.
You can easily compress this. If you want it to be compressed as it goes, you will need some sort of streaming tool for the command line. On UNIX it would be mysqldump ... | gzip. On Windows, you will have to find a tool that works with pipes.
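For the UNIX case, the streaming pipeline looks like the sketch below. To keep the example self-contained, printf stands in for mysqldump (which would need a live server), and the target directory is a placeholder; the real command is shown in the comment.

```shell
#!/bin/sh
# Compress the dump as it streams, instead of writing it uncompressed first.
# On a real server:  mysqldump -u user -p dbname | gzip > /backups/db.sql.gz
# Here printf stands in for mysqldump so the pipeline can run anywhere.
target_dir=${target_dir:-/tmp}      # placeholder; e.g. a path under C:\targetdir
out="$target_dir/db.sql.gz"

printf 'CREATE TABLE t (id INT);\nINSERT INTO t VALUES (1);\n' \
    | gzip > "$out"

# Verify the round trip: decompressing yields the original statements.
restored=$(gzip -dc "$out")
echo "$restored"
```

The same pattern works for restore in reverse: `gzip -dc db.sql.gz | mysql -u user -p dbname`.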
This I think is what you are looking for. I will list other options just because.
FLUSH TABLES WITH READ LOCK will flush all data to the disk and lock them from changing which you can do while you are making a copy of the data folder.
Keep in mind, when doing restores, if you want to preserve the full capability of MySQL bin logs, you will not want to restore parts of a database by touching the files directly. Best option is to have an alternate data dir with restored files and dump from there, then feed to your production database using regular mysql connection channels. Any direct changes to the filesystem will not be recorded by binlogs.
If you restore the whole database using files, you will be OK; just not if you do it in pieces.
mysqldump does not have this problem
Replication will allow you to back up to another instance of MySQL running on the same or different machine.
binlogs. Given a static copy of a database, you can use these to move it forward in time. binlogs are a log of all the commands that ever changed the data. If you have binlogs back to day one, then you may already have what you are looking for. You can run all the commands from the binlogs from day one to any date you wish and then you have a copy of the database from that date.
I recommend checking out Percona XtraBackup. It's a GPL licensed alternative to MySQL's paid Enterprise Backup tool and can create consistent non-blocking backups from databases even when they are written to. See this article for more information on why you'd want to use this over mysqldump.
You could use a script like AutoMySQLBackup, which automatically does a backup every day, keeping daily, weekly and monthly backups, keeping your backup directory pretty clean and uncluttered, while still providing you a long history of backups.
The backups are also compressed, naturally.
I am dealing with an incremental backup solution for a MySQL database on CentOS. I need to write a Perl script to take an incremental backup, and then I will run this script using crontab. I am a bit confused. There are solutions, but they are not really helping. I did lots of research: there are so many ways to take full and incremental backups of files, and I can easily understand them, but I need to take an incremental backup of a MySQL database and I do not know how to do it. Can anyone help me, either by advising a source or a piece of code?
The incremental backup method you've been looking at is documented by MySQL here:
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
What you are essentially going to want to do is set up your mysql instance to write any changes to your database to this binary log. What this means is any updates, deletes, inserts etc go in the binary log, but not select statements (which don't change the db, therefore don't go in the binary log).
Once you have your mysql instance running with binary logging turned on, you take a full backup and take note of the master position. Then later on, to take an incremental backup, you want to run mysqlbinlog from the master position and the output of that will be all the changes made to your database since you took the full backup. You'll want to take note of the master position again at this point, so you know the point that you want to take the next incremental backup from.
Clearly, if you then take multiple incremental backups over and over, you need to retain all those incremental backups. I'd recommend taking a full backup quite often.
Indeed, I'd recommend always doing a full backup, if you can. Taking incremental backups is just going to cause you pain, IMO, but if you need to do it, that's certainly one way to do it.
mysqldump is the ticket.
Example:
mysqldump -u [user_name] -p[password] --database [database_name] >/tmp/databasename.sql
-u = mysql database user name
-p = mysql database password
Note: there is no space after the -p option. And if you have to do this in perl, then you can use the system function to call it like so:
system("mysqldump -u [user_name] -p[password] --database [database_name] >/tmp/databasename.sql") == 0 or die "system call failed: $?";
Be aware though of the security risks involved in doing this. If someone happened to do a listing of the current processes running on a system as this was running, they'd be able to see the credentials that were being used for database access.