mysqldump hangs with routines/functions

I am trying to use a bat file to do a complete copy of a database to a different database. There are procedures and functions which need to be transferred over, which is my biggest problem.
When I use
mysqldump -u user -p password db1 -v -R | mysql -u user -p password db2
It will copy all of the tables no problem, but it hangs when it comes to the procedures and functions, with this output:
..
-- Sending SELECT query...
-- Retrieving rows...
I have quite a few functions and procedures that need to be copied over. Any help would be appreciated.

ROOT CAUSE
When mysqldump dumps stored procedures, it needs to lock mysql.proc.
By piping straight into mysql, you are loading the same table you are dumping from AND IT'S LOCKED !!!
SUGGESTION
Load the data first. Write the stored procedures to a text file. Then, load the code.
mysqldump -u user -ppassword db1 -v --skip-routines | mysql -u user -ppassword db2
mysqldump -u user -ppassword db1 -t -d --routines > stored_procs_from_db1.sql
mysql -u user -ppassword db2 < stored_procs_from_db1.sql
GIVE IT A TRY !!!

Related

ssh Mysql dump table parts (11GB DB to smaller pieces)

I'm facing the following:
We have a DB table of 11GB with over 257 million records and need a backup. Exporting via phpMyAdmin isn't possible (Chrome keeps crashing) and backing up over SSH with mysqldump tablename gives an insufficient disk space error (error 28).
Now I'd like to know if there is a way to export rows 0 till ~100.000.000 in one command, so we can make 3 parts (or smaller parts if required).
What I'm using:
mysqldump -p -u username database_name database_table > dbname.sql
[EDIT]
Found out how to get the rows with id <50.000.000 into SQL with the following:
mysqldump -p -u username db_name db_table --where='id<50000000'
But the big question remains: how to go further? Now I want to get all records between 50.000.000 and 100.000.000..
Does anybody know if it's possible, and what command I should use?
Problem solved:
Part 1 (<50.000.000):
mysqldump -p -u username db_name db_table --where='id<50000000' > part_1.sql
Part 2 (>=50.000.000 till <100.000.000):
mysqldump -p -u username db_name db_table --where='id>=50000000 && id<100000000' > part_2.sql
Part last (>250.000.000):
mysqldump -p -u username db_name db_table --where='id>250000000' > part_final.sql
And so on..
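If there are many parts, a small shell loop can generate them all. This is just a sketch, assuming a Unix shell, the same placeholder credentials as above, and a numeric auto-increment id; adjust STEP and the loop count to your id range:
# dump 50M-row id ranges into numbered part files
STEP=50000000
for i in 0 1 2 3 4; do
  mysqldump -p -u username db_name db_table \
    --where="id>=$((i*STEP)) && id<$(((i+1)*STEP))" > part_$((i+1)).sql
done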
mysqldump creates a text file that contains SQL statements. If you want to take a MySQL backup in parts, you will have to run mysqldump like this:
mysqldump --where "id%2=0" database_name table > table_even.sql
mysqldump --where "id%2=1" database_name table > table_odd.sql
OR
you can write a small program or script to achieve that.
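A minimal sketch of such a script, assuming a Unix shell and a numeric id column (N and the names are placeholders):
# split the table into N roughly equal parts by id modulo N
N=4
for i in $(seq 0 $((N-1))); do
  mysqldump --where "id%$N=$i" database_name table > table_part_$i.sql
done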
I found a nice solution for heavy transfers! This might also help you avoid transferring your database in parts (as in this example), since it does the job super fast:
Exporting a full database, or parts of it as mentioned, using mysqldump:
mysqldump -p -u username db_name db_table --where='id<50000000' > part_1.sql
To import to the new database - login via terminal to the new database:
mysql -h localhost -upotato -p123456
Enter the database:
USE databasename;
Use the source command:
source /path/to/file.sql;
This works X1000 faster than the standard:
mysql -h localhost_new -upotato -p1234567 table_name < /path/to/file.sql
since you are already inside the database.
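If you'd rather not open an interactive session, the same source trick can be run non-interactively; a sketch assuming the example credentials above:
mysql -h localhost -upotato -p123456 -e "source /path/to/file.sql" databasename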

creating mysqldump to backup database

I know how mysqldump works, but I don't know where to use it. If I execute the command after starting the mysql client, it gives an error.
I am using Ubuntu. So how can I use this utility?
Back up your database this way too:
mysqldump -u root -p DB_NAME > db_name_backup.sql
If you want to back up all databases, simply run this:
mysqldump -u root -p --all-databases > mysql_db_backup.sql
You will learn more about mysql and mysqldump here:
http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
MySQL Database Backup using mysqldump
shell> mysqldump --opt db_name > backup-file.sql
You can read the dump file back into the server like this:
shell> mysql db_name < backup-file.sql
Or like this:
shell> mysql -e "source /path-to-backup/backup-file.sql" db_name
mysqldump is also very useful for populating databases by copying data
from one MySQL server to another:
shell> mysqldump --opt db_name | mysql --host=remote_host -C db_name
It is possible to dump several databases with one command:
shell> mysqldump --databases db_name1 [db_name2 ...] > my_databases.sql
If you want to dump all databases, use the --all-databases option:
shell> mysqldump --all-databases > all_databases.sql
If tables are stored in the InnoDB storage engine, mysqldump provides a
way of making an online backup of these (see command below). This
backup just needs to acquire a global read lock on all tables (using
FLUSH TABLES WITH READ LOCK) at the beginning of the dump. As soon as
this lock has been acquired, the binary log coordinates are read and
the lock is released. So if and only if a long updating statement is
running when the FLUSH... is issued, the MySQL server may get stalled
until that long statement finishes, and then the dump becomes
lock-free. So if the MySQL server receives only short (in the sense of
"short execution time") updating statements, even if there are plenty
of them, the initial lock period should not be noticeable.
shell> mysqldump --all-databases --single-transaction > all_databases.sql
For point-in-time recovery (also known as “roll-forward”, when you need
to restore an old backup and replay the changes which happened since
that backup), it is often useful to rotate the binary log (see
Section 8.4, “The Binary Log”) or at least know the binary log
coordinates to which the dump corresponds:
shell> mysqldump --all-databases --master-data=2 > all_databases.sql
or
shell> mysqldump --all-databases --flush-logs --master-data=2 > all_databases.sql
The simultaneous use of --master-data and --single-transaction works as
of MySQL 4.1.8. It provides a convenient way to make an online backup
suitable for point-in-time recovery if tables are stored in the InnoDB
storage engine.
For more information on making backups, see Section 6.1, “Database
Backups”.
mysqldump -u MYSQL_USER -h MYSQL_SERVER -pMYSQL_PASS --all-databases > "dbs.sql"
You use it directly in the terminal, just like mysql itself, and pass the parameters directly to it.
mysqldump -u [user] -p[password] [database name] > dumpfilename.sql
Yes, you can.
See http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html for more information on the tool.
If it's an entire DB, then:
$ mysqldump -u [uname] -p[pass] db_name > db_backup.sql
If it's all DBs, then:
$ mysqldump -u [uname] -p[pass] --all-databases > all_db_backup.sql
If it's specific tables within a DB, then:
$ mysqldump -u [uname] -p[pass] db_name table1 table2 > table_backup.sql
You can even go as far as auto-compressing the output using gzip (if your DB is very big):
$ mysqldump -u [uname] -p[pass] db_name | gzip > db_backup.sql.gz
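To restore from the gzipped dump later, you can stream it straight back without unpacking it to disk first (assuming the same credentials):
$ gunzip < db_backup.sql.gz | mysql -u [uname] -p[pass] db_name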
If you want to do this remotely and you have the access to the server in question, then the following would work (presuming the MySQL server is on port 3306):
$ mysqldump -P 3306 -h [ip_address] -u [uname] -p[pass] db_name > db_backup.sql
To IMPORT:
Type the following command to import the sql data file:
$ mysql -u username -p -h localhost DATA-BASE-NAME < data.sql
In this example, import the 'data.sql' file into the 'blog' database using sat as the username:
$ mysql -u sat -p -h localhost blog < data.sql
If you have a dedicated database server, replace the localhost hostname with the actual server name or IP address as follows:
$ mysql -u username -p -h 202.54.1.10 databasename < data.sql
OR use hostname such as mysql.cyberciti.biz
$ mysql -u username -p -h mysql.cyberciti.biz database-name < data.sql
If you do not know the database name, or the database name is included in the sql dump, you can try something as follows:
$ mysql -u username -p -h 202.54.1.10 < data.sql
Refer: http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html

How to use MySQL dump from a remote machine

How can I back up a MySQL database which is running on a remote server? I need to store the backup file on the local PC.
Try it with mysqldump:
#mysqldump --host=the.remotedatabase.com -u yourusername -p yourdatabasename > /User/backups/adump.sql
Have you got access to SSH?
You can use this command in shell to backup an entire database:
mysqldump -u [username] -p[password] [databasename] > [filename.sql]
This is actually one command followed by the > operator, which says, "take the output of the previous command and store it in this file."
Note: The lack of a space between -p and the mysql password is not a typo. However, if you keep the -p flag but leave the actual password blank, you will be prompted for it. Sometimes this is recommended to keep passwords out of your bash history.
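Another way to keep the password off the command line entirely is a per-user option file; this is a sketch with placeholder credentials (mysqldump reads the [client] group from ~/.my.cnf):
# ~/.my.cnf (restrict it with chmod 600)
[client]
user=[username]
password=[password]
With that in place, mysqldump [databasename] > [filename.sql] needs no -u or -p flags at all.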
No one mentions anything about the --single-transaction option. People should use it by default for InnoDB tables to ensure data consistency. In this case:
mysqldump --single-transaction -h [remoteserver.com] -u [username] -p[password] [yourdatabase] > [dump_file.sql]
This makes sure the dump is run in a single transaction that's isolated from the others, preventing backup of a partial transaction.
For instance, consider you have a game server where people can purchase gears with their account credits. There are essentially 2 operations against the database:
Deduct the amount from their credits
Add the gear to their arsenal
Now if the dump happens in between these operations, restoring the backup would result in the user losing the purchased item, because the second operation isn't in the SQL dump file.
While it's just an option, there is basically no reason not to use it with mysqldump.
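Putting the pieces together, a consistent remote backup of an InnoDB database that also captures routines and triggers might look like this (a sketch using standard mysqldump flags and placeholder names):
mysqldump --single-transaction --routines --triggers -h [remoteserver.com] -u [username] -p [yourdatabase] > [dump_file.sql]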
This topic shows up on the first page of my Google results, so here's a little useful tip for newcomers.
You could also dump the sql and gzip it in one line:
mysqldump -u [username] -p[password] [database_name] | gzip > [filename.sql.gz]
mysqldump -h [domain name/ip] -u [username] -p[password] [databasename] > [filename.sql]
Tried all the combinations here, but this worked for me:
mysqldump -u root -p --default-character-set=utf8mb4 [DATABASE TO BE COPIED] > [NEW DATABASE NAME].sql
If you haven't installed a mysql client yet and are using a Docker container instead:
sudo docker exec MySQL_CONTAINER_NAME /usr/bin/mysqldump --host=192.168.1.1 -u username --password=password db_name > dump.sql
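To load a dump back into a container the same way, you can stream it through docker exec; a sketch assuming the same container name and credentials (-i keeps stdin open for the redirect):
sudo docker exec -i MySQL_CONTAINER_NAME /usr/bin/mysql -u username --password=password db_name < dump.sql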
You can directly pipe it to the remote server where you wish to copy your data to:
mysqldump -u your_db_user_name -p --set-gtid-purged=OFF --triggers --routines --events --compress --skip-lock-tables --verbose your_local_sql_db_name | mysql -u your_db_user_name -p -h your_remote_server_ip your_remote_server_db_name
You need to have created the db on your remote SQL server first.
Using the above command, I was able to copy from my local SQL server version 8.0.23 to my remote SQL server running 8.0.25.
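If the remote MySQL port isn't reachable directly, the same pipe can go through ssh instead; a sketch assuming shell access on the remote host (the remote password has to be inline or in an option file, since there is no terminal to prompt on):
mysqldump -u your_db_user_name -p --triggers --routines --events your_local_sql_db_name | ssh user@your_remote_server_ip "mysql -u your_db_user_name -pYOUR_REMOTE_PASSWORD your_remote_server_db_name"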
This is how you would restore a backup after you have successfully backed up your .sql file:
mysql -u [username] -p [databasename]
Then load your sql file with this command:
source MY-BACKED-UP-DATABASE-FILE.sql

MySQL Backup default command in MySQL Administrator

I am writing a batch file to backup MySQL database. When I typed mysqldump --help in the command-line, it shows a long list of options. I want to know what are the default parameters that MySQL Administrator uses to create a full backup.
I am using Windows XP.
mysqldump -u [username] -p[password] [databasename] > [backupfile.sql]
[username] - this is your database username
[password] - this is the password for your database
[databasename] - the name of your database
[backupfile.sql] - the file to which the backup should be written
Example:
mysqldump -u root -pasd dbname > dbbackup.sql
Note that the above solution would not back up stored procedures or triggers, if the underlying database has any. A better command would be:
mysqldump -uroot -ppassword --routines --triggers dbname > dbbackup.sql
If there are no stored procedures or triggers, the additional switches will do no harm.
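If the database also uses scheduled events, the --events switch dumps those as well; same caveat, it's harmless when there are none:
mysqldump -uroot -ppassword --routines --triggers --events dbname > dbbackup.sql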

Copying a mysql database from localhost to remote server using mysqldump.exe

I want to copy a mysql database from my local computer to a remote server.
I am trying to use the mysqldump command. All the examples on the internet suggest doing something like
The initial mysql> is just the prompt I get after logging in.
mysql> mysqldump -u user -p pass myDBName | NewDBName.out;
But when I do this I get You have an error in your SQL syntax; check the manual that corresponds ... to use near 'mysqldump -u user -p pass myDBName | NewDBName.out'
Since I have already logged in, do I need to use -u and -p? Not doing so gives me the same error. Can you see what is wrong?
In addition to what Alexandre said, you probably don't want to pipe (|) output to NewDBName.out, but rather redirect it there (>).
So from the Windows/Unix command line:
mysqldump -u user -ppass myDBName > NewDBName.out
Note that if you have large binary fields (e.g. BLOBS) in some columns you may need to set an additional option (I think it was --hex-blob, but there might have been another option too). If that applies to you, add a comment and I'll research the setting.
mysqldump is not an SQL statement that you execute inside a mysql session but a distinct binary that should be started from your OS shell.
There are a few ways to use this. One of them is to pipe the output of mysqldump to another MySQL instance:
echo CREATE DATABASE remote_db | mysql -h remote_host -u remote_user -premote_password
mysqldump -h source_host -u root -ppassword source_db | mysql -h remote_host -u remote_user -premote_password -D remote_db
I have had to dump large sets of data recently. Here is what I found on a 200MB database with 10,000+ records in many of the tables. I used the Linux 'time' command to get the actual times.
12 minutes using:
mysqldump -u user -ppass myDBName > db-backups.sql
7 minutes to clone the database:
mysqldump -u user -ppass myDBName | mysql -u user -ppass cloneDBName
And in less than a second:
mysqlhotcopy -u user -p pass myDBName cloneDBName
The last one blew my mind, but you have to be logged in locally on the machine where the database server resides. Personally I think this is much faster than doing the dump remotely; then you can compress the .sql file and transfer it manually.
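For that compress-and-transfer route, a sketch assuming ssh access between the two machines (host and paths are placeholders):
mysqldump -u user -ppass myDBName | gzip > myDBName.sql.gz
scp myDBName.sql.gz user@remote_host:/path/to/backups/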