I'm running this command mysqldump -hlocalhost -uroot -d db_name -e --skip-add-drop-table --quick --skip-lock-tables >> /tmp/db1.sql
But when I load this file into my local MySQL server, I don't get any data in my tables. I'm not sure which of these flags is causing that, because all they seem to do is skip dropping the tables beforehand, skip locking the tables, and retrieve rows from the table one row at a time. Thanks for the help!
The -d flag is actually an alias for --no-data; it does not specify the database.
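If you want the data included, just drop -d and leave the database name as the positional argument. A sketch of the corrected command, reusing the host, user, and database name from the question:
mysqldump -hlocalhost -uroot -e --skip-add-drop-table --quick --skip-lock-tables db_name >> /tmp/db1.sql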
Related
I'm running mysqldump against an older MySQL database. The mysqldump binary is part of a MariaDB distribution, if it matters.
When I run mysqldump locally, it's fine. When I run it on a remote system, I get no data dumped. If I run it with mysqldump -v the last line is
Skipping dump data for table 'table1', it has no fields
From some googling and this reddit thread, I determined that you need to set the default character set.
So the command that worked for me was:
mysqldump --default-character-set=latin1 --lock-tables=false --single-transaction=TRUE --host=$HOST --user=$USER --password=$PASSWORD $DB
I used both --lock-tables=false and --single-transaction because I have a mix of MyISAM and InnoDB tables.
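For completeness, a sketch of the same command with the output redirected to a file (dump.sql is just an example name; the variables are assumed to be set as in the command above):
mysqldump --default-character-set=latin1 --lock-tables=false --single-transaction=TRUE --host=$HOST --user=$USER --password=$PASSWORD $DB > dump.sql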
I need to migrate data from MySQL to ClickHouse and do some testing. The two databases cannot reach each other over the network, so I have to transfer the data via files. The first thing I thought of is using the mysqldump tool to export .sql files.
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 -uroot -proot database_name table_name > test.sql
Then I found that there are 120 million rows in the MySQL table, and the INSERT statements in the .sql file exported this way are very long. How can I avoid this, for example by exporting 1000 rows per INSERT statement?
In addition, the .sql file is too big. Can it be split into smaller files, and what needs to be done for that?
mysqldump has an option to turn on or off using multi-value inserts. You can do either of the following according to which you prefer:
Separate Insert statements per value:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --skip-extended-insert -uroot -proot database_name table_name > test.sql
Multi-value insert statements:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --extended-insert -uroot -proot database_name table_name > test.sql
So what you can do is dump the schema first with the following:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --no-data -uroot -proot database_name > dbschema.sql
Then dump the data as individual insert statements by themselves:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --skip-extended-insert --no-create-info -uroot -proot database_name table_name > test.sql
You can then split the INSERT file into as many pieces as you like. If you're on UNIX, use the split command, for example.
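A minimal sketch of that, assuming the data-only dump above is in test.sql (the chunk size and file prefix are arbitrary):
# one INSERT per line thanks to --skip-extended-insert, so a line-based split is safe
split -l 1000000 test.sql test_part_
# produces test_part_aa, test_part_ab, ... each still a valid sequence of INSERT statements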
And if you're worried about how long the import takes, you might also want to add the --disable-keys option to speed up inserts as well.
BUT my recommendation is not to worry about this so much. mysqldump won't produce statements that exceed what MySQL can import in a single statement, and multi-value inserts run faster than individual ones. As to file size, one nice thing about SQL is that it compresses beautifully: that multi-gigabyte SQL dump will turn into a nicely compact gzip, bzip2, or zip file.
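For example, a sketch of compressing the data dump on the fly (assuming gzip is available; names reused from the commands above):
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --extended-insert --no-create-info -uroot -proot database_name table_name | gzip > test.sql.gz
# inspect or restore later by streaming it back out
gunzip -c test.sql.gz | head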
EDIT: If you really want to adjust the number of values per INSERT in a multi-value insert dump, you can add the --max_allowed_packet option, e.g. --max_allowed_packet=24M. Packet size determines the size of a single data packet (e.g. an insert), so if you set it low enough it should reduce the number of values per insert. Still, I'd try it as-is before you start messing with that.
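If you do go down that road, a sketch (the values are illustrative; note that mysqldump also documents a --net-buffer-length option that caps how long each multi-row INSERT grows, which may be the more direct knob here):
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --extended-insert --no-create-info --max_allowed_packet=24M --net-buffer-length=16384 -uroot -proot database_name table_name > test.sql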
For the ClickHouse side, a dump in Native format can be loaded like this:
clickhouse-client --host="localhost" --port="9000" --max_threads="1" --query="INSERT INTO database_name.table_name FORMAT Native" < clickhouse_dump.sql
This is probably massively simple; however, I will be doing this for a live server and don't want to mess it up.
Can someone please let me know how I can do a mysqldump of all databases, procedures, triggers etc except the mysql and performance_schema databases?
Yes, you can dump several schemas at the same time:
mysqldump --user=[USER] --password=[PASS] --host=[HOST] --databases mydb1 mydb2 mydb3 [...] --routines > dumpfile.sql
OR
mysqldump --user=[USER] --password=[PASS] --host=[HOST] --all-databases --routines > dumpfile.sql
Concerning the last command: if you don't want to dump the performance_schema (EDIT: as mentioned by @Barranka, by default mysqldump won't dump it), mysql, phpMyAdmin schema, etc., you just need to ensure that [USER] can't access them.
As stated in the reference manual:
mysqldump does not dump the INFORMATION_SCHEMA or performance_schema database by default. To dump either of these, name it explicitly on the command line and also use the --skip-lock-tables option. You can also name them with the --databases option.
So that takes care of your concern about dumping those databases.
Now, to dump all databases, I think you should do something like this:
mysqldump -h Host -u User -pPassword -A -R > very_big_dump.sql
To test it without dumping all data, you can add the -d flag to dump only database, table (and routine) definitions with no data.
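For example, a schema-only trial run could look like this (a sketch reusing the placeholders above; the output file name is arbitrary):
mysqldump -h Host -u User -pPassword -A -R -d > schema_only_dump.sql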
As mentioned by Basile in his answer, the easiest way to omit dumping the mysql database is to invoke mysqldump with a user that does not have access to it. So the punch line is: use or create a user that has access only to the databases you mean to dump.
There's no option in mysqldump that you could use to filter the databases list, but you can run two commands:
# DATABASES=$(mysql -N -B -e "SHOW DATABASES" | grep -Ev '(mysql|performance_schema)')
# mysqldump -B $DATABASES
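The same idea as a single pipeline, with the grep patterns anchored so that databases merely containing those strings are not excluded (a sketch; information_schema and sys are my own additions and may not exist on older servers):
DATABASES=$(mysql -N -B -e "SHOW DATABASES" | grep -Ev '^(mysql|performance_schema|information_schema|sys)$')
mysqldump -B $DATABASES --routines > filtered_dump.sql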
I need to restore a dumped database, but without discarding existing rows in tables.
To dump I use:
mysqldump -u root --password --databases mydatabase > C:\mydatabase.sql
To restore, I do not use the mysql command, since that would discard all existing rows; instead, mysqlimport should do the trick, presumably. But how? Running:
mysqlimport -u root -p mydatabase c:\mydatabase.sql
says "table mydatabase.mydatabase does not exist". Why does it look for tables? How to restore dump with entire database without discarding existing rows in existing tables? I could dump single tables if mysqlimport wants it.
What to do?
If you are concerned with stomping over existing rows, you need to mysqldump it as follows:
MYSQLDUMP_OPTIONS="--no-create-info --skip-extended-insert"
mysqldump -uroot -ppassword ${MYSQLDUMP_OPTIONS} --databases mydatabase > C:\mydatabase.sql
This will do the following:
It will remove CREATE TABLE statements and use only INSERTs.
It will INSERT exactly one row at a time, which helps mitigate duplicate-key errors.
With the mysqldump performed in this manner, you can now import like this:
mysql -uroot -p --force -Dtargetdb < c:\mydatabase.sql
Give it a Try !!!
WARNING: Dumping with --skip-extended-insert will make the mysqldump file really big, but at least you can deal with each duplicate-key error row by row. It will also increase the time it takes to reload the dump.
I would edit the mydatabase.sql file in a text editor, dropping the lines that drop tables or delete rows, then import the file using the mysql command as normal.
mysql -u username -p databasename < mydatabase.sql
The mysqlimport command is designed for tab-delimited files created with the SQL statement SELECT ... INTO OUTFILE, rather than for direct database dumps.
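For illustration, a sketch of the pairing mysqlimport actually expects (the table and file names are made up; the data file name must match the table name, and SELECT ... INTO OUTFILE writes on the server host, subject to its secure_file_priv setting):
-- in a MySQL session on the source: export one table as a tab-delimited file
SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.txt';
# then, from the shell on the target: mysqlimport derives the table name from the file name
mysqlimport -u root -p mydatabase /tmp/mytable.txt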
This sounds like it is much more complicated than you are describing.
If you do a backup the way you describe, it has all the records in your database. Then you say that you do not want to delete existing rows from your database and load from the backup? Why? The reason why the backup file (the output from mysqldump) has the drop and create table commands is to ensure that you don't wind up with two copies of your data.
The right answer is to load the mysqldump output file using the mysql client. If you don't want to do that, you'll have to explain why to get a better answer.
I run mysqldump --all-databases nightly as a backup. But on importing this dump into a clean installation, I obviously run into a couple of issues.
I obviously can't (and don't want to) overwrite the new information_schema.
All my users and permissions settings are lost, unless I overwrite the mysql database.
What is standard practice in this situation? Parse information_schema out of the .sql file before importing? And do I overwrite the mysql database or not?
You will not have problems with the information schema; see the manual:
http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
mysqldump does not dump the INFORMATION_SCHEMA database. If you name that database explicitly on the command line, mysqldump silently ignores it.
For excluding databases, try this bash script:
for DB in $(echo "show databases" | mysql -u <username> -p'<password>' | grep -v Database | grep -v <some_db_to_exclude>)
do
mysqldump -u <username> -p'<password>' ${DB}
done
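A variant of the same loop that skips the usual system schemas and writes each database to its own file (a sketch; adjust the exclusion list and output file names to taste):
for DB in $(mysql -u <username> -p'<password>' -N -B -e "SHOW DATABASES" | grep -Ev '^(mysql|performance_schema|information_schema|sys)$')
do
    # one dump file per database, named after the database
    mysqldump -u <username> -p'<password>' "${DB}" > "${DB}.sql"
done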