I have a dump file about 2 GB in size, and restoring the whole database has been running for more than 2 days and is still going.
I am using this command to restore my database from the dump file:
mysql -uroot -ppassword new_db < old_db.sql
Is there any faster way to do the same, so I can wrap up the restoration task in one day or less?
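One common approach is to disable per-row integrity checks for the duration of the load. A minimal sketch, assuming the dump comes from a trusted source so skipping foreign-key and unique checks during the import is acceptable (new_db and old_db.sql are from the question):

# Sketch: prepend session settings, stream the dump, commit once at the end.
( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;" ;
  cat old_db.sql ;
  echo "COMMIT;" ) | mysql -uroot -ppassword new_db

The checks are session-scoped, so they return to their defaults once the import connection closes.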
I am running a web application within a shared hosting environment which uses a MySQL database that is about 3 GB in size.
For testing purposes I have set up a XAMPP environment on my local macOS machine. To copy the online DB to my local machine I used mysqldump on the server and then imported the dump file directly into mysql:
# Server
$ mysqldump -alv -h127.0.0.3 --default-character-set=utf8 -u dbUser -p'dbPass' --extended-insert dbName > dbDump.sql
# Local machine
$ mysql -h 127.0.0.3 -u dbUser -p'dbPass' dbName < dbDump.sql
The only optimization here is the use of --extended-insert. However, the import takes about 10 hours!
After some searching I found that disabling foreign key checks during the import should speed up the process. So I added the following line at the beginning of the dump file:
-- dbDump.sql
SET FOREIGN_KEY_CHECKS=0;
...
However, this did not make any significant difference: the import now took about 8 hours. Faster, but still pretty long.
Why does it take so much time to import the data? Is there a better/faster way to do this?
The server is not the fastest (shared hosting...), but it takes only about 2 minutes to export/dump the data. That exporting is faster than importing (no syntax checks, no parsing, just writing...) is not surprising, but 300 times faster (10 hours vs. 2 minutes)? This is a huge difference...
Isn't there any other solution that would be faster? Copying the binary DB files instead, for example? Anything would be better than using a text file as the transfer medium.
This is not just about transferring the data to another machine for testing purposes. I also create daily backups of the database. If it were ever necessary to restore the DB, it would be pretty bad if the site were down for 10 hours...
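Beyond the dump file itself, InnoDB's durability settings dominate bulk-load speed. A sketch, assuming you have privileges to set globals on the target server (true for a local XAMPP install) and can tolerate reduced crash safety for the duration of the import; the value 2 is illustrative, and 1 is the safe default:

# Relax log flushing for the import, then restore the safe default.
mysql -h 127.0.0.3 -u dbUser -p'dbPass' -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"
mysql -h 127.0.0.3 -u dbUser -p'dbPass' dbName < dbDump.sql
mysql -h 127.0.0.3 -u dbUser -p'dbPass' -e "SET GLOBAL innodb_flush_log_at_trx_commit = 1;"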
We have around 20 GB of backup data. We took the backup using the command below on a MySQL 5.5 server:
mysqldump -u username -p'password' database_name > Backupfile.sql
Now we are trying to restore the same data on a MySQL 5.7 server, immediately after installing MySQL 5.7.
It's taking a huge amount of time to restore the data.
We searched Google and found this:
https://serverfault.com/questions/146525/how-can-i-speed-up-a-mysql-restore-from-a-dump-file
But it's not helping.
Is there any other way to speed up this restoration process?
Thanks...
I want to back up a MySQL database every 10 minutes. How can I do it? I don't know how to use a procedure or function for it.
I have used
mysqldump -u root -p mydatabase > mydb_backup.sql
I also want to add the date and time at the end of the backup file name. I should keep only the latest 3 backups on the system and delete the older ones.
How about a backup every second? Well, actually, it is continual. It is called "Replication".
You build another MySQL server (machine) as the Slave.
Then copy the data to the Slave, and run CHANGE MASTER on the Slave to have it continually replicate from the Master (which is your current MySQL instance).
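A minimal sketch of what that looks like on the Slave; the host, credentials, and binary log coordinates below are placeholders (the real coordinates come from SHOW MASTER STATUS on the Master, or from the dump if it was taken with --master-data):

-- Run on the Slave after loading a copy of the Master's data:
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',   -- placeholder host
    MASTER_USER = 'repl',                 -- replication user created on the Master
    MASTER_PASSWORD = 'replPass',         -- placeholder password
    MASTER_LOG_FILE = 'mysql-bin.000001', -- placeholder binlog file
    MASTER_LOG_POS = 4;                   -- placeholder position
START SLAVE;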
AutoMySQLBackup has some great features:
it can back up a single database, multiple databases, or all the databases on the server;
each database is saved in a separate file that can be compressed (with gzip or bzip2);
it will rotate the backups so they don't fill up your hard drive (by default the daily backup keeps only the last 7 days; the weekly backup, if enabled, keeps one per week, etc.).
You can also find more info here: 10 Ways to Automatically & Manually Backup MySQL Database.
If you are working on Unix or Linux, you can use crontab for scheduling.
To add the time and date to the backup file name, you can use syntax similar to the following:
mysqldump -u root -p mydatabase > mydb_backup_`date +"%Y%m%d%H%M%S"`.sql
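Putting the pieces together for the original question (every 10 minutes, keep only the newest 3), a sketch of a crontab entry; the /backups path and the password are placeholders, and note that % is special in crontab and must be escaped as \%:

*/10 * * * * mysqldump -u root -p'secret' mydatabase > /backups/mydb_backup_`date +\%Y\%m\%d\%H\%M\%S`.sql && ls -1t /backups/mydb_backup_*.sql | tail -n +4 | xargs -r rm

The ls -1t lists the backups newest first, tail -n +4 selects everything past the third file, and xargs -r rm deletes those, leaving the latest 3 in place.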
I currently make a backup of my 2.5 GB (and growing) MySQL database every day. I have over 100 tables that are backed up.
I use this command:
mysqldump --user=user --password=pass --host=localhost db_name | gzip > backup.sql.gz
This works great, but when I need to quickly restore data to a single table, it's a horrible process: I have to download the backup, extract the gzip archive, and wait forever for the editor to load the SQL file so that I can remove the tables I don't need. When I need this done fast, I'm pulling my hair out.
Can anyone recommend a better way to store MySQL backups? Is there a command to split all the tables into their own sql files?
Appreciate your help!
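There is no single mysqldump flag for this, but a shell loop over SHOW TABLES gets the same result. A sketch, assuming shell access and reusing the credential style from the command above (db_name is a placeholder):

# Dump each table of db_name into its own gzipped file.
for t in $(mysql --user=user --password=pass --host=localhost -N -B -e 'SHOW TABLES' db_name); do
    mysqldump --user=user --password=pass --host=localhost db_name "$t" | gzip > "backup_${t}.sql.gz"
done

Restoring a single table is then just: gunzip < backup_mytable.sql.gz | mysql --user=user --password=pass db_name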
I need to back up the whole MySQL server, including the information about all users and their permissions and passwords.
I see the options at http://www.igvita.com/2007/10/10/hands-on-mysql-backup-migration/,
but which options should I use to back up everything: all users, passwords, permissions, and all database data?
Just a full backup of MySQL that I can import later on another machine.
At its most basic, the mysqldump command you can use is:
mysqldump -u$user -p$pass -S $socket --all-databases > db_backup.sql
That will include the mysql database, which will have all the users/privs tables.
There are drawbacks to running this on a production system as it can cause locking. If your tables are small enough, it may not have a significant impact. You will want to test it first.
However, if you are running a pure InnoDB environment, you can use the --single-transaction flag, which will create the dump in a single transaction (get it?), thus preventing locking on the database. Note that there are corner cases where the initial FLUSH TABLES command run by the dump can lock the tables. If that is the case, kill the dump and restart it.
I would also recommend that, if you are using this for backup purposes, you use the --master-data flag as well to get the binary log coordinates from where the dump was taken. That way, if you need to restore, you can import the dump file and then use the mysqlbinlog command to replay the binary log files from the position where the dump was taken.
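A sketch combining those flags; the binlog file name and position below are placeholders (with --master-data=2 the real coordinates are written into the dump as a commented-out CHANGE MASTER statement):

# InnoDB-only backup without locking, recording the binlog coordinates:
mysqldump -u$user -p$pass -S $socket --all-databases --single-transaction --master-data=2 > db_backup.sql
# Point-in-time roll-forward after restoring the dump:
mysqlbinlog --start-position=107 mysql-bin.000042 | mysql -u$user -p$pass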
If you'd also like to transfer stored procedures and triggers, it may be worth using:
mysqldump --all-databases --routines --triggers
If you have master/slave replication, you can also dump the replication settings with --dump-slave and/or --master-data.
A one-liner suitable for daily backups of all your databases:
mysqldump -u root -pVeryStrongPassword --all-databases | gzip -9 > ./DBBackup.$(date +"%d.%m.%Y").sql.gz
If put in cron, it will create files in the format DBBackup.09.07.2022.sql.gz on a daily basis.
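A sketch of the matching crontab entry, running daily at 03:00; /backups is a placeholder path, and remember that % is special in crontab and must be escaped as \%:

0 3 * * * mysqldump -u root -pVeryStrongPassword --all-databases | gzip -9 > /backups/DBBackup.$(date +\%d.\%m.\%Y).sql.gz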