Hi All,
I am trying to restore a nearly 8GB database onto a remote server using the mysql command at the command prompt. It has been 8 hours since I started the process, but it is still restoring. I used the command
> mysql -h hostname -u username -p dbname < location of the dump file
My questions are,
Does it normally take this many hours to restore a database of this size?
Is it possible to restore an 8GB database this way?
Am I doing it the correct way?
Is there a better way to restore the DB?
In my opinion @Ferri's answer is good; in cases like this the CLI is always the best option.
The only improvement I suggest is to use gzip to reduce the size of the dump file.
Dump the db like so:
mysqldump --host yourhost -u root --port 3306 -p yourdb | gzip -9 > yourdb.sql.gz
Restore the db like so:
gzip -cd yourdb.sql.gz | mysql -h yourhost -u root -p yourdb
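If the dump is ultimately headed for another machine anyway, you can also stream it over SSH in one step, so the uncompressed SQL never touches the disk. A rough sketch, assuming you have SSH access to the target host (host, user, and database names are placeholders; the password is inlined only to avoid a second interactive prompt inside the pipeline, and an option file such as ~/.my.cnf is the cleaner way to supply it):
mysqldump --host yourhost -u root --port 3306 -p yourdb | gzip -9 | ssh user@targethost 'gunzip | mysql -u root -pREMOTEPASS yourdb'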
Command
mysql -h IP -u Username -p schema < file
Example
mysql -h 192.168.10.122 -u root -p mydatabase < /tmp/20160628_test_minificated.sql
Does it take this many hours to restore this much data?
It depends on the size of the dump file and on the connection speed.
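If you want to see how far along a restore like this has progressed, one option (assuming the pv utility is installed on the client machine; it is not part of MySQL) is to feed the dump through pv, which prints a progress bar and throughput:
pv /path/to/dumpfile.sql | mysql -h hostname -u username -p dbname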
Is it possible to restore an 8GB database?
Yes, this way you can restore large databases.
Am I doing it the correct way?
For me this is the best way when you are working from a command line interface and the destination is also a command line interface.
Is there a better way to restore the DB?
Yes, you have multiple options, like phpMyAdmin, MySQL Workbench, HeidiSQL and many others, but each one has its own limitations.
Related
I may have gone completely braindead today, but I'm having trouble copying my DB down to my local MAMP server. I'm not too familiar with mysqldump, etc., but I want to know how to copy a database from a test server to my local MAMP server in the easiest way possible. I have very limited experience with server stuff, but a bit of experience with the command line.
Any straightforward help would be much appreciated. I look forward to smacking myself in the head when I realise what a dick I've been ;)
Dalogi
mysqldump's the best way:
on the test server: mysqldump -p name_of_db > dump.sql
on the MAMP server: mysql -p name_of_db < dump.sql
The dump file contains the full instructions, in SQL query format, to recreate the db, its table structure, and its data. The -p option makes both programs prompt for your password. If your MySQL username is different from your system's login account, then you'll need the -u option as well:
mysqldump -p -u yourDBusername name_of_db > dump.sql
mysql -p -u yourDBusername name_of_db < dump.sql
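One caveat: a dump made with plain mysqldump name_of_db contains no CREATE DATABASE statement, so the target database must already exist before you import. If it does not exist yet on the MAMP server, create it first, for example:
mysql -p -u yourDBusername -e "CREATE DATABASE name_of_db"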
mysqldump -h 'remotehost' -uremoteuser -premotepass db_name | mysql -ulocaluser -plocalpass db_name
Which scripts/solutions do you use for importing and exporting large MySQL databases?
phpMyAdmin gives an error on these operations if there is a large amount of data.
Sypex Dumper (http://sypex.net/en/) is better than phpMyAdmin at that.
If you have access to the command line in both locations, mysqldump is the tool to use.
For more verbose answers, you'll need to add much more information about your setup, e.g. whether you are on some sort of hosting package or a server of your own.
Using phpMyAdmin is pointless for large databases. So far I have worked with databases just over 1 GB in size, with over 12 million records. In my experience, the best way to export data is to use
mysqldump -h HOST -u USER -p database_name > export_file.sql
-h is optional in most cases. If you are on a remote server and the error "mysqldump: Got error: 1044: Access denied for user..." pops up, add --single-transaction:
mysqldump --single-transaction -h HOST -u USER -p database_name > export_file.sql
You can look up the reason here. To import the database you can use
mysql -h HOST -u USER -p database_name < export_file.sql
I have a 250MB backup of my MySQL database!
How can I restore it into a new database on another server?
Or just use phpMyAdmin for restore purposes.
You are providing no detail on what operating system you're on and what kind of backup you have, but the short answer is
mysql -u username -p -h hostname databasename < dumpfile.sql
where dumpfile.sql needs to be a file containing SQL statements, for example produced with mysqldump.
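For reference, such a dump file is typically produced with the matching mysqldump command (same placeholders as above):
mysqldump -u username -p -h hostname databasename > dumpfile.sql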
Using:
mysql -u USER -p -h HOST DATABASE < mysqldump.sql
I want to copy a mysql database from my local computer to a remote server.
I am trying to use the mysql dump command. All the examples on the internet suggest doing something like
The initial mysql> is just the prompt I get after logging in.
mysql> mysqldump -u user -p pass myDBName | NewDBName.out;
But when I do this I get "You have an error in your SQL syntax; check the manual that corresponds ... to use near 'mysqldump -u user -p pass myDBName | NewDBName.out'".
Since I have already logged in do I need to use -u and -p? Not doing so gives me the same error. Can you see what is wrong?
In addition to what Alexandre said, you probably don't want to pipe (|) output to NewDBName.out, but rather redirect it there (>).
So from the Windows/Unix command line:
mysqldump -u user -ppass myDBName > NewDBName.out
(Note that there is no space between -p and the password; alternatively, omit the password and you will be prompted for it.)
Note that if you have large binary fields (e.g. BLOBS) in some columns you may need to set an additional option (I think it was --hex-blob, but there might have been another option too). If that applies to you, add a comment and I'll research the setting.
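For illustration, the dump command with that option would look like the line below; treat it as a sketch, since --hex-blob is the option I had in mind but your data may need others as well:
mysqldump --hex-blob -u user -ppass myDBName > NewDBName.out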
mysqldump is not an SQL statement that you execute inside a mysql session but a distinct binary that should be started from your OS shell.
There are a few ways to use it. One of them is to pipe the output of mysqldump into another MySQL instance:
echo CREATE DATABASE remote_db | mysql -h remote_host -u remote_user -premote_password
mysqldump -h source_host -u root -ppassword source_db | mysql -h remote_host -u remote_user -premote_password -D remote_db
I have had to dump large sets of data recently. Here is what I found with a 200MB database with 10,000+ records in many of the tables. I used the Linux 'time' command to get actual timings.
12 minutes using:
mysqldump -u user -ppass myDBName > db-backups.sql
7 minutes to clone the database:
mysqldump -u user -ppass myDBName | mysql -u user -ppass cloneDBName
And in less than a second:
mysqlhotcopy -u user -p pass myDBName cloneDBName
The last one blew my mind, but you have to be logged in locally on the machine where the database server resides. Personally I think this is much faster than doing the dump remotely; then you can compress the .sql file and transfer it manually.
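As a rough sketch of that compress-and-transfer route (assuming SSH access to the other machine, with the same placeholder names as above):
gzip db-backups.sql
scp db-backups.sql.gz user@remotehost:/tmp/
ssh user@remotehost 'gunzip -c /tmp/db-backups.sql.gz | mysql -u user -ppass myDBName'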
I want to copy all the tables, fields, and data from my local MySQL server to my hosting site's MySQL. Is there a way to copy all the data? (It's only 26kb, very small)
In phpMyAdmin, just export a dump (using the Export tab) and re-import it on the other server using the SQL tab.
Make sure you compare the results, I have had phpMyAdmin screw up the import more than once.
If you have shell access to both servers, a combination of
mysqldump -u username -p databasename > dump.sql
and a
mysql -u username -p databasename < dump.sql
on the target server is a much faster and more reliable alternative in my experience.
Have a look at
Copying MySQL Databases to Another Machine
Copy MySQL database from one server to another remote server
Please follow the following steps:
Create the target database using MySQLAdmin or your preferred method. In this example, db2 is the target database, where the source database db1 will be copied.
Execute the following statement on a command line:
mysqldump -h [server] -u [user] -p[password] db1 | mysql -h [server] -u [user] -p[password] db2
Note: There is NO space between -p and [password]
I copied this from Copy/duplicate database without using mysqldump.
It works fine. Please ensure that you are not inside mysql while running this command.
If you have the same version of MySQL on both systems (or versions with a compatible data file format), you may just copy the data files directly. Usually the files are kept in /var/lib/mysql/ on Unix systems.
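A rough sketch of that file-level copy, assuming Unix systems, the default data directory, and rsync over SSH; the server must be stopped on both machines first so the files are consistent, and you need root on both ends:
sudo systemctl stop mysql    # or: sudo service mysql stop
sudo rsync -a /var/lib/mysql/ root@newserver:/var/lib/mysql/
ssh root@newserver 'chown -R mysql:mysql /var/lib/mysql && systemctl start mysql'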