I use cbbackup to back up Couchbase. Then I perform some operations on the db, like adding and deleting documents. Afterwards, I want to discard all of those operations and recover the db to the state it was in at the time of the backup.
I ran the following test, and cbbackup and cbrestore did not help. How can I achieve my goal?
Backup db
$ rm -rf /tmp/cbbackup
$ /opt/couchbase/bin/cbbackup http://cb_ip:8091 /tmp/cbbackup -u 'xxx' -p '***' -v
Remember the item count of the social bucket.
Count: 33
Delete one document from the social bucket
Delete id: existing-item-110
Add two new documents to the social bucket
Ids: new-item-1, new-item-2
Remember the item count of the social bucket.
Count: 34
Restore social bucket
$ /opt/couchbase/bin/cbrestore /tmp/cbbackup http://cb_ip:8091 -u 'xxx' -p '***' -b mybucket -v
Verify whether the deleted document is back and whether the added documents are removed.
Result: Count: 34, no change
The deleted item is not back.
The added items are not deleted.
Conclusion: cbrestore can't recover the db to the backup point in time. Changes made after the backup are not removed.
I also used cbtransfer to restore the data. The result and conclusion are the same as with cbrestore.
$ /opt/couchbase/bin/cbtransfer /tmp/cbbackup http://cb_ip:8091 -u 'xxx' -p '***' -b mybucket -v
Before I directly answer your question, let me explain two important concepts about cbbackup and cbrestore.
These tools do not transfer raw data files during the backup and restore process. During backup, data is streamed out of the server and written to disk; during restore, data is put back into the database using set operations.
Couchbase can do conflict resolution during sets. This means that if you backed up a key, then updated it, and then run a restore with conflict resolution enabled, the set performed during the restore will be discarded, since it is not the latest update.
Below are two backup scenarios that are applicable to your use case.
First, let's take a look at the point-in-time restore scenario. To achieve this you should delete and recreate your bucket and then run cbrestore. The reason is that cbrestore is not aware of keys you added after the backup, so it cannot delete them.
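As a rough sketch of that sequence (this assumes couchbase-cli's bucket-delete/bucket-create commands; the bucket name, RAM size, and exact flags below are placeholders and may differ between Couchbase versions):
$ /opt/couchbase/bin/couchbase-cli bucket-delete -c cb_ip:8091 -u 'xxx' -p '***' --bucket=social
$ /opt/couchbase/bin/couchbase-cli bucket-create -c cb_ip:8091 -u 'xxx' -p '***' --bucket=social --bucket-type=couchbase --bucket-ramsize=512
$ /opt/couchbase/bin/cbrestore /tmp/cbbackup http://cb_ip:8091 -u 'xxx' -p '***' -b social -v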
In another scenario, say you just want to force-overwrite all of the data in your bucket with the data you backed up. In this case you want to disable conflict resolution, which you can do with the "-x conflict_resolve=0" flag. This works in a case where you backed up 1000 keys, then updated them, and then wanted to revert the updates made after the backup. (Note that the conflict_resolve flag was accidentally removed in Couchbase 4.0 and 4.1, but will be added back in 4.1.1 and 4.5.)
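For example, the restore command from your test would become something like the following; only the -x conflict_resolve=0 part is new, and the bucket name and credentials are the ones from your own example:
$ /opt/couchbase/bin/cbrestore /tmp/cbbackup http://cb_ip:8091 -u 'xxx' -p '***' -b mybucket -x conflict_resolve=0 -v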
On a final note, I would recommend against using cbtransfer, since it is not as well tested as cbbackup and cbrestore and is generally only used as a last resort.
I took a backup of my database from Engine Yard, which is downloaded as an sql.gz file. Since one of the tables in my database is very large, I want to skip it while restoring the backup on my local system.
I use the gunzip < file_name.sql.gz | mysql -u user_name -p password database_name command to restore the backup.
You may have identified a solution for this by now but I figured I'd add some insight. The restore operation of MySQL does not currently offer an easy way to exclude a single table from the restore operation, but there are a few options you could consider:
If your local MySQL server offers the 'Blackhole' engine you could use gawk to alter the ENGINE definition for that table when it is created. This would be something like:
gunzip < file_name.sql.gz | gawk -v RS='' '{print gensub(/(CREATE TABLE .[table_to_be_skipped].*) ENGINE=InnoDB/, "\\1 ENGINE=Blackhole", 1)}' | mysql -u user_name -p password database_name
This instructs the database to just pass through the row inserts against this table during the reload. Once the load completes you could modify this back to the InnoDB engine with alter table [table_to_be_skipped] engine=innodb;. The drawback to this would be that you are still downloading and parsing through a larger backup.
Another option would be to use the Awk method described here to create two backup files, one representing the tables and data prior to the excluded table, and another representing everything that follows it. The tricky part here is that if you add tables on either side of the excluded table you would have to update this script. This also has the consequence of having to download and parse a larger backup file but tends to be a bit easier to remember from a syntax perspective.
By far the best option for addressing this would be to simply do a manual backup on your source database that ignores this table. Use a replica if possible, and use the --single-transaction option to mysqldump if you are using all InnoDB tables or consistency of non-InnoDB tables is of minimal importance to your local environment. The following should do the trick:
mysqldump -u user_name -p --single-transaction --ignore-table=database_name.[table_to_be_skipped] database_name | gzip > file_name.sql.gz
This has the obvious benefit of not requiring any complex parsing or larger file downloads.
Not sure if this will help, but we have documentation available for dealing with database backups - also, you may want to talk to someone from support through a ticket or in #engineyard on IRC freenode.
$ user=jocular; cat ~/list|while read db; do echo rm -vi /var/lib/mysql/$user_$db; done
That is what I came up with, but my instructor gave me this feedback:
Removing MySQL databases in this manner can cause catastrophic problems for the MySQL server that can lead to loss of the MySQL server. Remove a database using InnoDB tables in this fashion and attempt to restore it from a backup to learn more.
What would be the safest command to remove the unmapped databases?
Your instructor is right, InnoDB makes it harder to manipulate tables and databases using shell tools. The reason is that InnoDB manages a "data dictionary" inside the ibdata1 file, which catalogs the databases and tables and which tablespace files they belong in. If you move or rename or delete files in the shell, InnoDB's data dictionary now references out-of-date information, and subsequently trying to use those table names or database names runs into conflicts.
Sort of like when you get a new phone, and you keep getting calls from friends of the former owner of that phone number.
If you use SQL or other MySQL commands to drop the database, InnoDB makes sure to update its data dictionary and keep it in sync with reality.
user=jocular; cat ~/list | while read db; do mysqladmin drop "${user}_${db}" ; done
mysqladmin prompts you before dropping a database, since there is no undoing that change. But you can also use the -f option to force dropping without prompting. Just be careful you don't drop the wrong database!
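For example, a non-interactive variant of the loop above (double-check the contents of ~/list before running this; credentials are omitted just as in the original command):
user=jocular; cat ~/list | while read db; do mysqladmin -f drop "${user}_${db}"; done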
Does anyone know why mysqldump would only perform a partial backup of a database when run with the following command:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" databaseSchema -u root --password=rootPassword > c:\backups\daily\mySchema.dump
Sometimes a full backup is performed; at other times the backup stops after including only a fraction of the database, and that fraction varies.
The database does have several thousand tables totalling about 11 GB. Most of these tables are quite small, with only about 1,500 records; many have only 150-200 records. The column counts of these tables can be in the hundreds, though, because of the frequency data stored.
But I am informed that the number of tables in a schema in MySQL is not an issue. There are also no performance issues during normal operation.
And the alternative of using a single table is not really viable because all of these tables have different column name signatures.
I should add that the database is in use during the backup.
Well, after running the backup with this command:
"C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump" mySchema -u root --password=xxxxxxx -v --debug-check --log-error=c:\backups\daily\mySchema_error.log > c:\backups\daily\mySchema.dump
I get this:
mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE '\_dm\_10730\_856956\_30072013\_1375194514706\_keyword\_frequencies'': Error on delete of 'C:\Windows\TEMP\#sql67c_10_8c5.MYI' (Errcode: 13) (6)
Which I think is a permissions problem.
I doubt any one table in my schema is in the 2GB range.
I am using MySQL Server 5.5 on a 64-bit Windows 7 server with 8 GB of memory.
Any ideas?
I am aware that increasing the number of files MySQL can open, via the open_files_limit parameter, may fix this.
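If that turns out to be the culprit, the limit can be raised in the server configuration and the service restarted; the value below is only an illustration, not a recommendation:
# in my.ini, under the [mysqld] section
open_files_limit = 8192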
Another possibility is interference from anti-virus products, as described here:
How To Fix Intermittent MySQL Errcode 13 Errors On Windows
There are a few possibilities for this issue that I have run into; here is my workup:
First: Enable error/debug logging and/or verbose output; otherwise we won't see any error that could be causing the issue:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check --log-error=c:\backup\mysqldump_error.log > c:\backup\visualRSS.dump
So long as debug is enabled in your distribution, you should now be able both to log errors to a file and to view output on the console. The issue is not always clear here, but it is a great first step.
Have you reviewed your error or general logs? They don't often contain useful information for this issue, but sometimes they do, and every little bit helps with tracking these problems down.
Also watch SHOW PROCESSLIST while you are running this. See if you see states like WAITING FOR ... LOCK / METADATA LOCK, which would indicate the operation is unable to acquire a lock because of another operation.
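For example, from a second connection while the dump is running (credentials here are placeholders), keep an eye on the State column of:
mysql -u root -p -e "SHOW FULL PROCESSLIST;"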
Depending on the info gathered above: assuming I found nothing and had to shoot blind, here is what I would do next, covering some common cases I have experienced:
Max packet size errors: If you receive an error regarding max_allowed_packet, you can add --max_allowed_packet=160M to your parameters to see if you can get it large enough:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check
--log-error=c:\backup\mysqldump_error.log --max_allowed_packet=160M > c:\backup\visualRSS.dump
Try to reduce run time/size using the --compact flag. mysqldump normally adds everything you need to create the schema and insert the data, along with other information; you can significantly reduce run time and file size by trimming the dump down to little more than the INSERTs, dropping the add-drop-table, comment, lock, and key-check statements around each insert. This can mitigate a lot of problems where it is appropriate, but you will then want a separate dump run with --no-data to export your schema, so you can recreate all the empty tables, etc.
Create the raw data dump, excluding add-drop-table, comment, lock, and key-check statements:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check
--log-error=c:\backup\mysqldump_error.log --compact > c:\backup\visualRSS.dump
Create a schema-only dump with no data:
"c:\path\to\mysqldump" -b yourdb -u root -pRootPasswd -v --debug-check
--log-error=c:\backup\mysqldump_error.log --nodata > c:\backup\visualRSS.dump
Locking issues: By default, mysqldump uses LOCK TABLES (unless you specify --single-transaction) to read a table while it is dumping, and it wants to acquire a read lock on the table; DDL operations and your global lock type may create this situation. Without seeing the hung query, you will typically see a small backup file size as you described, and usually the mysqldump operation will sit until you kill it or the server closes the idle connection. You can use the --single-transaction flag to set a REPEATABLE READ isolation level for the transaction and essentially take a snapshot of the table without blocking operations or being blocked, save for some older server versions that have issues with ALTER/TRUNCATE TABLE while in this mode.
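A sketch of the same dump with that flag added, using the same placeholder paths and credentials as the earlier examples:
"c:\path\to\mysqldump" -B yourdb -u root -pRootPasswd -v --debug-check --single-transaction --log-error=c:\backup\mysqldump_error.log > c:\backup\visualRSS.dump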
File-size issues: If I read incorrectly and this backup has NOT successfully run before, indicating a potential 2 GB file-size issue, you can try piping the mysqldump output straight into something like 7zip on the fly:
mysqldump | 7z.exe a -siname_in_outfile output_path_and_filename
If you continue to have issues, or there is an unavoidable problem prohibiting mysqldump from being used, consider Percona XtraBackup (my preference) or Oracle's MySQL Enterprise Backup. XtraBackup is open source, far more versatile than mysqldump, has a very reliable group of developers working on it, and has many great features that mysqldump does not have, like streaming/hot backups. Unfortunately, the Windows build is old, unless you can compile from source or run a local Linux VM to handle it for you.
Very important: I noticed that you are not backing up your information_schema database; it needs to be named explicitly if it is of significance to your backup scheme.
I am working on an incremental backup solution for a MySQL database on CentOS. I need to write a Perl script to take the incremental backup, and then I will run this script via crontab. I am a bit confused: there are solutions out there, but they are not really helping. I did a lot of research, and there are many ways to take full and incremental backups of files; I can easily understand those, but I need to take an incremental backup of a MySQL database and I do not know how to do it. Can anyone help me, either by pointing to a source or with a piece of code?
The incremental backup method you are looking for is documented by MySQL here:
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
What you essentially want to do is set up your MySQL instance to write any changes to your database to this binary log. This means any updates, deletes, inserts, etc. go into the binary log, but not SELECT statements (which don't change the db and therefore don't go into the binary log).
Once you have your MySQL instance running with binary logging turned on, take a full backup and note the master position. Later on, to take an incremental backup, run mysqlbinlog from that master position; its output will be all the changes made to your database since the full backup. Take note of the master position again at this point, so you know where to start the next incremental backup from.
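As a rough sketch of that workflow (the log file name, start position, and paths below are placeholders):
# 1. enable binary logging in my.cnf under [mysqld], then restart mysqld:
#      log-bin = /var/log/mysql/mysql-bin
# 2. take a full backup that rotates the binlog and records its coordinates in the dump header
mysqldump -u root -p --single-transaction --flush-logs --master-data=2 --all-databases > full_backup.sql
# 3. later, an incremental backup is just the binlog events written since that position
mysqlbinlog --start-position=107 /var/log/mysql/mysql-bin.000002 > incremental_backup.sql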
Clearly, if you then take multiple incremental backups over and over, you need to retain all those incremental backups. I'd recommend taking a full backup quite often.
Indeed, I'd recommend always doing a full backup, if you can. Taking incremental backups is just going to cause you pain, IMO, but if you need to do it, that's certainly one way to do it.
mysqldump is the ticket.
Example:
mysqldump -u [user_name] -p[password] --databases [database_name] > /tmp/databasename.sql
-u = mysql database user name
-p = mysql database password
Note: there is no space after the -p option. If you have to do this in Perl, you can use the system function to call it, like so:
system("mysqldump -u [user_name] -p[password] --database [database_name] >/tmp/databasename.sql") or die "system call failed: $?";
Be aware though of the security risks involved in doing this. If someone happened to do a listing of the current processes running on a system as this was running, they'd be able to see the credentials that were being used for database access.
The databases are prohibitively large (> 400 MB), so dump > SCP > source is proving to be hours and hours of work.
Is there an easier way? Can I connect to the DB directly and import from the new server?
You can simply copy the whole /data folder.
Have a look at High Performance MySQL - transferring large files
You can use SSH to pipe your data directly over the Internet. First set up SSH keys for password-less login. Next, try something like this:
$ mysqldump -u db_user -p some_database | gzip | ssh someuser@newserver 'gzip -d | mysql -u db_user --password=db_pass some_database'
Notes:
The basic idea is that you are just dumping standard output straight into a command on the other side, which SSH is perfect for.
If you don't need encryption then you can use netcat but it's probably not worth it
The SQL text data goes over the wire compressed!
Obviously, change db_user to your user and some_database to your database. someuser is the (Linux) system user, not the MySQL user.
You will also have to spell out --password on the remote side, because having mysql prompt you there would be a lot of headache.
You could set up MySQL slave replication and let MySQL copy the data, and then make the slave the new master.
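In outline, after loading on the new server a dump taken with --master-data from the old one, it would look something like this (the host, replication user, and log coordinates are placeholders):
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='oldhost', MASTER_USER='repl', MASTER_PASSWORD='repl_pass', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107; START SLAVE;"
Once the slave has caught up, stop writes on the old server and point the application at the new one.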
400 MB is really not a large database; transferring it to another machine will only take a few minutes over a 100 Mbit network. If you do not have a 100 Mbit network between your machines, you are in big trouble!
If they are running the exact same version of MySQL and have identical (or similar ENOUGH) my.cnf and you just want a copy of the entire data, it is safe to copy the server's entire data directory across (while both instances are stopped, obviously). You'll need to delete the data directory of the target machine first of course, but you probably don't care about that.
Backup/restore is usually slowed down by the restoration having to rebuild the table structure, rather than the file copy. By copying the data files directly, you avoid this (subject to the limitations stated above).
If you are migrating a server:
The dump files can be very large, so it is better to compress them before sending, or to use the -C flag of scp. Our methodology for transferring files is to create a full dump in which the incremental logs are flushed (use --master-data=2 --flush-logs, and please check that you don't mess up any slave hosts if you have them). Then we copy the dump and replay it. Afterwards we flush the logs again (mysqladmin flush-logs), take the recent incremental log (which shouldn't be very large) and replay only it. Keep doing this until the last incremental log is very small, so that you can stop the database on the original machine, copy the last incremental log and then replay it; it should take only a few minutes.
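A rough command sketch of that loop (host names, binlog names, and credentials below are placeholders):
# on the old host: full dump that flushes/rotates the binary log and records its coordinates
mysqldump -u xxx --password=yyy --all-databases --master-data=2 --flush-logs | gzip > full.sql.gz
scp -C full.sql.gz newhost:
# on the new host: load it
gunzip < full.sql.gz | mysql -u aaa --password=bbb
# repeat the following until the remaining increment is tiny:
mysqladmin -u xxx --password=yyy flush-logs              # old host: rotate the log again
scp -C /var/lib/mysql/mysql-bin.000002 newhost:          # ship the just-closed incremental log
mysqlbinlog mysql-bin.000002 | mysql -u aaa --password=bbb   # new host: replay it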
If you just want to copy data from one server to another:
mysqldump -C --host=oldhost --user=xxx --database=yyy -p | mysql -C --host=newhost --user=aaa -p
You will need to set the db users correctly and provide access to external hosts.
Try importing the dump on the new server using the mysql console, not auxiliary software.
I have no experience doing this with MySQL, but it seems to me the bottleneck is transferring the actual data?
400 MB isn't that much. But if dump -> SCP is slow, I don't think connecting to the db server from the remote box would be any faster.
I'd suggest dumping, compressing, then copying over the network, or burning to disk and manually transferring the data.
Compressing such a dump will most likely give you a good compression ratio, since there is probably a lot of repetitive data.
If you are only copying all the databases of the server, copy the entire /data directory.
If you are just copying one or more databases and adding them to an existing mysql server (a rough command sketch follows these steps):
create the empty database on the new server and set up the permissions for users, etc.
copy the folder for the database from /data/databasename to /data/databasename on the new server
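For instance (names, paths, and the grant are placeholders; and, as the earlier answer about ibdata1 notes, this per-folder copy is only safe for MyISAM tables and should be done with both servers stopped during the copy):
# on the new server, while it is still running: create the empty database and grant access
mysql -u root -p -e "CREATE DATABASE databasename;"
mysql -u root -p -e "GRANT ALL ON databasename.* TO 'appuser'@'localhost' IDENTIFIED BY 'app_pass';"
# stop MySQL on both machines, copy the database folder across, then restart MySQL
scp -r /data/databasename/ newserver:/data/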
I like to use BigDump: Staggered MySQL Dump Importer after exporting my database from the old server.
http://www.ozerov.de/bigdump/
One thing to note, though: if you don't set the export options (namely the maximum length of created queries) according to the load your new server can handle, it will just fail and you will have to try again with different parameters. Personally, I set mine to about 25,000, but that's just me. Test it out a bit and you'll get the hang of it.