I want to make an incremental backup of my database, and for that I am using MySQL Enterprise Backup. The problem is that when I run the command
mysqlbackup --user=root --password=password --backup-dir=/home/admin/Fullbackup backup-and-apply-log
everything is fine and the full backup is taken correctly. But when I run
mysqlbackup --user=root --password=password --incremental --incremental-base=/home/admin/Fullbackup --incremental-backup-dir=/home/joy/incremental_1 backup
it behaves the same way and makes a full backup. I have tried hard to figure out why this is happening, but I failed :(
Can anyone please help? Thanks in advance.
I have just figured out what my problem was: in my database, all the tables were MyISAM, not InnoDB. The mysqlbackup command only produces true incremental backups for InnoDB tables. If you take an incremental backup of a database containing MyISAM tables, the entire database (its files) is copied anyway.
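To check which storage engine the tables in a schema actually use, a query along these lines works (the schema name mydb is a placeholder):
# Lists each table and its storage engine; replace mydb with your schema name
mysql -NBe "SELECT table_name, engine FROM information_schema.tables WHERE table_schema = 'mydb'"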
Related
I have a VPS with cPanel installed. Today it suddenly stopped working, and all of the WordPress websites on the VPS show the error:
This webpage has a redirect loop
ERR_TOO_MANY_REDIRECTS
and automatically redirect to wp-admin/install.php.
I logged in to phpMyAdmin and saw that some tables show "in use" where their collation should be (normally "utf8_general_ci"). They can't be loaded because of the error:
#1286 - Unknown storage engine 'InnoDB'
So how do I fix this error?
Thanks for helping me!
I faced the same issue. It arises because InnoDB got corrupted, and all of the databases using InnoDB tables show up as "in use".
To fix this you need to follow the steps below.
To get a 100% clean tablespace you need to start MySQL with innodb_force_recovery=4, take a mysqldump, and restore it on a fresh instance of InnoDB (by fresh I mean you have to delete ibdata1 and all the database directories).
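For reference, the setting goes in the [mysqld] section of the MySQL configuration file (the /etc/my.cnf path below is an assumption; use your server's actual config file):
# /etc/my.cnf -- the path is an assumption, use your server's actual config file
[mysqld]
# Let InnoDB start despite the corrupted tablespace so a dump can be taken;
# remove this line again once the restore is finished.
innodb_force_recovery = 4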
UPDATE:
At this point MySQL is started with innodb_force_recovery=x (x != 0)
Take a dump of all databases:
mysqldump --skip-lock-tables -A > alldb.sql
Check where MySQL keeps its files (in my case it's /var/lib/mysql/):
mysql -NBe "SELECT @@datadir"
/var/lib/mysql/
Stop MySQL
mysqladmin shutdown
Move the old MySQL files to a safe place
mv /var/lib/mysql /var/lib/mysql.old
Create a new system database
mkdir /var/lib/mysql
mysql_install_db
Start MySQL
/etc/init.d/mysql start
Restore the dump
mysql < alldb.sql
The restore may take a long time if the database is big.
Another trick may work in that case: run ALTER TABLE ... ENGINE=InnoDB on each InnoDB table. It will rebuild all InnoDB indexes, and the errors will go away.
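If you have many tables, you can generate those ALTER statements from information_schema and run them in one go (a sketch; the schema name mydb is a placeholder):
# Generate an ALTER TABLE ... ENGINE=InnoDB statement for every InnoDB table in mydb
mysql -NBe "SELECT CONCAT('ALTER TABLE ', table_schema, '.', table_name, ' ENGINE=InnoDB;') FROM information_schema.tables WHERE table_schema = 'mydb' AND engine = 'InnoDB'" > rebuild_innodb.sql
# Run the generated statements
mysql < rebuild_innodb.sql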
+++++++++++++++++++++++++++++++++++++++++++++++++
Another solution to this is restoring the databases from backup.
For this, first you need to remove the ibdata1 file:
cd /var/lib/mysql
rm -f ibdata1
Then restore all the databases one by one using the command below:
mysql -u username -p databasename < backupfile.sql
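If you have one dump file per database, a small loop can restore them all (a sketch; the /backups directory and the <databasename>.sql naming are assumptions, and it expects credentials in ~/.my.cnf so it does not prompt for each file):
# Assumes one dump per database in /backups, named <databasename>.sql
for f in /backups/*.sql; do
    db="$(basename "$f" .sql)"
    mysql -e "CREATE DATABASE IF NOT EXISTS \`$db\`"
    mysql "$db" < "$f"
done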
++++++++++++++++++++++++++++++++++++++++++++++++++
I'm using the following command to create an incremental backup in MySQL
mysqldump -uusername -ppassword db_name --flush-logs > D:\dbname_incremental_backup.sql
However the sql file is as big as a complete backup, and obviously importing it takes a long time as well. Could anybody tell me how to create incremental backups and import just the new data from each incremental backup rather than the whole database again?
I have read all the related articles on dev.mysql.com but still cannot understand how to do it.
mysqldump only creates full backups. There's no built-in functionality for incremental backups.
For that sort of thing you probably want Percona XtraBackup, but that will only work with InnoDB tables. This is usually not an issue, since using MyISAM tables is considered extremely harmful.
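For reference, an incremental run with XtraBackup looks roughly like this (a sketch; the directory names are placeholders and the exact options can differ between XtraBackup versions):
# Full base backup (directory paths are placeholders)
xtrabackup --backup --target-dir=/backups/base
# Incremental backup containing only the pages changed since the base
xtrabackup --backup --target-dir=/backups/inc1 --incremental-basedir=/backups/base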
By default a mysqldump file will drop and re-create the tables, making an incremental import impossible. If you open up the resulting file, you will see something like:
DROP TABLE IF EXISTS `some_table_name`;
You can create a dump without the DROP/CREATE TABLE statements using the --no-create-info option. To make your dump friendly to incremental imports, you should also use --skip-extended-insert, which breaks inserts out into one INSERT statement per row. Combined with --force on the import, this means that inserts for rows that already exist will fail but the import will continue. You will end up seeing errors in the logs for rows that already exist, but new rows will be inserted as desired.
You should be able to export with the following command (I also recommend not typing the password in the command so that it won't appear in your history)
mysqldump -u username -p --no-create-info --skip-extended-insert db_name --flush-logs > D:\dbname_incremental_backup.sql
You can then import with the following command:
mysql -u username -p --force db_name < D:\dbname_incremental_backup.sql
Trying to find out how people do a full backup/restore procedure: the user-defined database schemas and data can easily be backed up via mysqldump, but what about the MySQL system tables and data? That is, if the server goes completely bananas, how can I rebuild the database, including all of the settings in MySQL? Is it just a matter of dumping/importing the information_schema and mysql databases and restoring my.cnf? (InnoDB or MyISAM, not ISAM.)
--
edit: Thanks!
You don't back up information_schema, but otherwise, yes, keep a copy of your my.cnf and a dump of the mysql db tables and log settings. To do this, run:
mysqldump -u$user -p$pass --all-databases > db_backup.sql
If you're going to restore to exactly the same version of MySQL, you could also back up by shutting down your server and doing a full copy of the contents of /var/lib/mysql (or wherever your data files are) along with your my.cnf file. Then just drop the copy back in place when you want to go live and turn on your server.
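A rough sketch of that cold-copy approach (assuming the data directory is /var/lib/mysql and the config is /etc/mysql/my.cnf; both paths are assumptions, check your system):
# Cold (offline) backup: stop the server, copy the files, start it again
/etc/init.d/mysql stop
tar czf /backups/mysql-cold-backup.tar.gz /var/lib/mysql /etc/mysql/my.cnf
/etc/init.d/mysql start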
I need to backup the whole of a MySQL database with the information about all users and their permissions and passwords.
I see the options on http://www.igvita.com/2007/10/10/hands-on-mysql-backup-migration/,
but what should be the options to backup all of the MySQL database with all users and passwords and permissions and all database data?
Just a full backup of MySQL so I can import later on another machine.
At its most basic, the mysqldump command you can use is:
mysqldump -u$user -p$pass -S $socket --all-databases > db_backup.sql
That will include the mysql database, which will have all the users/privs tables.
There are drawbacks to running this on a production system as it can cause locking. If your tables are small enough, it may not have a significant impact. You will want to test it first.
However, if you are running a pure InnoDB environment, you can use the --single-transaction flag, which will create the dump in a single transaction (get it), thus preventing locking on the database. Note, there are corner cases where the initial FLUSH TABLES command run by the dump can lock the tables. If that is the case, kill the dump and restart it.
I would also recommend that if you are using this for backup purposes, you use the --master-data flag as well to get the binary log coordinates from where the dump was taken. That way, if you need to restore, you can import the dump file and then use the mysqlbinlog command to replay the binary log files from the position where this dump was taken.
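Put together, a dump taken that way might look roughly like this (the file name is the same placeholder as above; --master-data=2 records the binary log coordinates as a comment in the dump file):
mysqldump -u$user -p$pass --single-transaction --master-data=2 --all-databases > db_backup.sql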
If you'd also like to transfer stored procedures and triggers, it may be worth using
mysqldump --all-databases --routines --triggers
If you have master/slave replication, you may want to dump its settings with
--dump-slave and/or --master-data
A one-liner suitable for daily backups of all your databases:
mysqldump -u root -pVeryStrongPassword --all-databases | gzip -9 > ./DBBackup.$(date +"%d.%m.%Y").sql.gz
If put in cron, it will create files in the format DBBackup.09.07.2022.sql.gz on a daily basis.
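A possible crontab entry for that (the 02:30 schedule and the /backups path are assumptions; note that % has to be escaped as \% inside a crontab):
# Run the backup every day at 02:30 (schedule and output path are examples)
30 2 * * * mysqldump -u root -pVeryStrongPassword --all-databases | gzip -9 > /backups/DBBackup.$(date +"\%d.\%m.\%Y").sql.gz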
How do I take a backup of a full database using MySQL?
You could use mysqldump. Have a look at the reference manual: 4.5.4. mysqldump — A Database Backup Program
[mysqldump] can be used to dump a database or a collection of databases for backup or transfer to another SQL server (not necessarily a MySQL server). The dump typically contains SQL statements to create the table, populate it, or both.
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
Here is an example using mysqldump; you can create a scheduled task or a cron job to automate the backup process:
mysqldump --opt --host=localhost --user=myUser --password=myPass --result-file=C:\Backups\myBackupFile.sql myDatabase
If you have access to phpMyadmin, try the export function as it's probably the easiest way.