Import/export large MySQL databases

Which scripts/solutions do you use for importing and exporting large MySQL databases?
phpMyAdmin gives an error for these operations when there is a large amount of data.

http://sypex.net/en/ is better than phpMyAdmin at this.

If you have access to the command line in both locations, use mysqldump.
For more verbose answers, you'll need to add much more information about your setup, e.g. whether you are on some sort of hosting package or a server of your own.
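A minimal round trip looks like this (user and database names are placeholders; adjust for your setup):
# on the source machine: dump the database to a file
mysqldump -u some_user -p some_database > dump.sql
# copy dump.sql to the target machine (e.g. with scp), create the
# target database if it does not exist yet, then load the dump
mysql -u some_user -p -e "CREATE DATABASE IF NOT EXISTS some_database;"
mysql -u some_user -p some_database < dump.sql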

Using phpMyAdmin is pointless for large databases. So far I have worked with databases just over 1 GB in size, with over 12 million records. In my experience, the best way to export data is to use
mysqldump -h HOST -u USER -p database_name > export_file.sql
-h is optional in most cases. If you are on a remote server and the error "mysqldump: Got error: 1044: Access denied for user..." pops up, add --single-transaction:
mysqldump --single-transaction -h HOST -u USER -p database_name > export_file.sql
The reason is that without --single-transaction, mysqldump tries to lock the tables, which requires a privilege your user may not have. To import the database you can use
mysql -h HOST -u USER -p database_name < export_file.sql

Related

Which username and password does mysqldump expect?

I'm trying to make a copy of my website's database so that I can download it and import it into WAMP for local testing.
Here is what I'm entering in Putty:
mysqldump -u my_database_username -p dataname_db.sql --single-transaction --quick --lock-tables=false > dataname_db_local-$(date +%F).sql && gzip dataname_db_local.sql
No matter what combo of user and pass I use, I get this error:
Got error: 1044: "Access denied for user to database" when selecting the database
It wants the MySQL user that has full privileges to that database, right? ie the same credentials as what I use to connect to the database in a new MySQLi() command in php, right?
I read that sometimes passwords with special characters aren't allowed, so I made a new user with full privileges for that database and a plain alphanumeric password, but it's still not accepted.
I then thought maybe it wants the same username and pass as what I use to connect to my server via Putty, but that didn't work. Neither did -u root with the server password.
Can someone please clarify exactly which username it wants?
Thank you
Yes, you are right: mysqldump expects the same username and password that you use to connect to the database in a new MySQLi() call in PHP.
Make sure your account has the LOCK TABLES privilege.
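You can check your grants and, if necessary, add that privilege as follows. This is a sketch using the names from the question; the 'localhost' host part is an assumption, and the GRANT must be run by a privileged account:
# show what the dump user is currently allowed to do
mysql -u root -p -e "SHOW GRANTS FOR 'my_database_username'@'localhost';"
# grant the missing privilege on the database being dumped
mysql -u root -p -e "GRANT LOCK TABLES ON dataname_db.* TO 'my_database_username'@'localhost';"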
If it still doesn't work, try passing the --single-transaction option to mysqldump:
mysqldump --single-transaction -u db_username -p DBNAME > backup.sql
Also notice that there is a syntax problem in your command: the database name must come after the options, at the end of the mysqldump statement:
mysqldump [options] db_name [tbl_name ...] > filename.sql
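Assuming dataname_db.sql in your command was meant to be the database name, the corrected command would look something like this; note that the database name now follows the options and that the file handed to gzip must match the dated file created by the redirection:
mysqldump --single-transaction --quick --lock-tables=false -u my_database_username -p dataname_db > dataname_db_local-$(date +%F).sql && gzip dataname_db_local-$(date +%F).sql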

Importing only diff of mysql schema from server A to B

I have a DEV MySQL server where I change the table design of several tables.
We also have a second DEV MySQL server where other team members change the design as well.
My question is: Is there a way to simply import the difference of the schema of each table from one server to another in order to keep them synced?
Data is not relevant, just the schema. Having only one DEV server is unfortunately not an option.
Thanks for any hints.
Alex
You can certainly use mysqldump to dump just the schemas and then import them into the other DB:
mysqldump -h localhost -u root -p --no-data dbname1>file1
mysqldump -h localhost -u root -p --no-data dbname2>file2
or use
schemasync [options]
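If you just want to inspect the differences between the two schema-only dumps by hand, a plain diff of the two files also works. Adding --skip-dump-date to the mysqldump calls above keeps the dump timestamp from showing up as a spurious difference:
diff file1 file2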

Restoring a large MySQL DB into a remote server

Hi All,
I am trying to restore a nearly 8 GB DB into a remote server using the mysql command at the command prompt. It has been 8 hours since I started the process, and it is still restoring the DB. I tried with the command
> mysql -h hostname -u username -p dbname < location of the dump file
My questions are,
Does it normally take this many hours to restore a DB of this size?
Is it possible to restore an 8 GB database?
Am I doing it the correct way?
Is there any other better way to restore the DB?
In my opinion @Ferri's answer is good; in cases like this the CLI is always the best option.
The only improvement I suggest is to use gzip to reduce the size of the dump file.
Dump the db like so:
mysqldump --host yourhost -u root --port 3306 -p yourdb | gzip -9 > yourdb.sql.gz
Restore the db like so:
gzip -cd yourdb.sql.gz | mysql -h yourhost -u root -p yourdb
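If the two machines can also reach each other over SSH, you can stream the compressed dump straight into the remote server without writing an intermediate file. This is a sketch with placeholder hosts and users; note that the remote mysql cannot prompt for a password here because its stdin carries the dump, so the password must come from the command line or from a ~/.my.cnf on the remote machine:
mysqldump --host yourhost -u root -p yourdb | gzip | ssh user@remotehost "gunzip | mysql -u root -pREMOTEPASS yourdb"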
Command
mysql -h IP -u Username -p schema < file
Example
mysql -h 192.168.10.122 -u root -p mydatabase < /tmp/20160628_test_minificated.sql
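For a restore that runs for hours it helps to see progress. If the pv utility is installed (a separate tool, not part of MySQL), it can feed the same dump to mysql while showing a progress bar and an ETA based on the file size:
pv /tmp/20160628_test_minificated.sql | mysql -h 192.168.10.122 -u root -p mydatabase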
Does it normally take this many hours to restore a DB of this size?
It depends on the size of the dump file and the connection speed.
Is it possible to restore an 8 GB database?
Yes, this way you can restore big databases.
Am I doing it the correct way?
For me this is the best way when you are working from a command line interface on both ends.
Is there any other better way to restore the DB?
Yes, you have multiple options, like phpMyAdmin, MySQL Workbench, HeidiSQL and many others, but each one has its own limitations.

Copying MySQL database from test server to local MAMP server

I may have gone completely braindead today, but I'm having trouble copying my DB down to my local MAMP server. I'm not too familiar with mysqldump, etc, but I want to know how to copy a database from a test server to my MAMP local server in the easiest way possible. I have very limited experience with server stuff, but have a bit of experience with command line.
Any straight-forward help would be much appreciated. I look forward to smacking myself in the head when I realise what a dick I've been ;)
Dalogi
mysqldump's the best way:
on the test server: mysqldump -p name_of_db > dump.sql
on the MAMP server: mysql -p name_of_db < dump.sql
The dump file contains the full instructions, in SQL format, to recreate the db, its table structure, and its data. The -p option forces both programs to prompt for your password. If your MySQL username is different from your system login account, then you'll need the -u option as well:
mysqldump -p -u yourDBusername name_of_db > dump.sql
mysql -p -u yourDBusername name_of_db < dump.sql
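One MAMP-specific detail: if the mysql client on your PATH is not MAMP's, you can invoke the client bundled with MAMP directly when importing. The path below is MAMP's default install location, so adjust it if yours differs:
/Applications/MAMP/Library/bin/mysql -u root -p name_of_db < dump.sql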
Alternatively, you can pipe straight from the test server into your local server in one step:
mysqldump -h 'remotehost' -uremoteuser -premotepass db_name | mysql -ulocaluser -plocalpass db_name

Copying a MySQL database from localhost to a remote server using mysqldump.exe

I want to copy a MySQL database from my local computer to a remote server.
I am trying to use the mysqldump command. All the examples on the internet suggest doing something like this (the initial mysql> below is just the prompt I get after logging in):
mysql> mysqldump -u user -p pass myDBName | NewDBName.out;
But when I do this I get You have an error in your SQL syntax; check the manual that corresponds ... to use near 'mysqldump -u user -p pass myDBName | NewDBName.out'
Since I have already logged in do I need to use -u and -p? Not doing so gives me the same error. Can you see what is wrong?
In addition to what Alexandre said, you probably don't want to pipe (|) output to NewDBName.out, but rather redirect it there (>).
So from the Windows/Unix command line:
mysqldump -u user -ppass myDBName > NewDBName.out
(Note that if you give the password on the command line there must be no space after -p; with a space, mysqldump treats pass as the database name. Alternatively, use -p alone and type the password at the prompt.)
Note that if you have large binary fields (e.g. BLOBS) in some columns you may need to set an additional option (I think it was --hex-blob, but there might have been another option too). If that applies to you, add a comment and I'll research the setting.
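For reference, that option is --hex-blob: it makes mysqldump write binary columns as hexadecimal literals (e.g. 0x0A1B) so the values survive the trip through shells and editors. Applied to the command above:
mysqldump --hex-blob -u user -ppass myDBName > NewDBName.out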
mysqldump is not an SQL statement that you execute inside a mysql session but a distinct binary that should be started from your OS shell.
There are a few ways to use this. One of them is to pipe the output of mysqldump to another MySQL instance:
echo CREATE DATABASE remote_db | mysql -h remote_host -u remote_user -premote_password
mysqldump -h source_host -u root -ppassword source_db | mysql -h remote_host -u remote_user -premote_password -D remote_db
I have had to dump large sets of data recently. Here is what I found with a 200 MB database containing 10,000+ records in many of the tables. I used the Linux 'time' command to measure actual time.
12 minutes using:
mysqldump -u user -ppass myDBName > db-backups.sql
7 minutes to clone the database:
mysqldump -u user -ppass myDBName | mysql -u user -ppass cloneDBName
And in less than a second:
mysqlhotcopy -u user -p pass myDBName cloneDBName
The last one blew my mind, but you have to be logged in locally on the machine where the database server resides (and mysqlhotcopy only works for MyISAM and ARCHIVE tables). Personally I think this is much faster than doing a dump remotely; then you can compress the .sql file and transfer it manually.