How to export thousands of tables from a large MySQL database

I have a WordPress multisite MySQL database with tens of thousands of tables, and I need to export several thousand of them so that I can import them into a new database.
I know that I can use mysqldump like this: "mysqldump -u user -p database_name table_1 table_2 table_3 > filename.sql", but what's the best way to make this scale? If it helps, the tables are named as follows: "wp_blogid_tablename", where blogid is the ID of the blog (there are around 1000 blogs to export) and tablename is one of many different table names, for example:
wp_8_commentmeta
wp_8_comments
wp_8_links
wp_8_options
wp_8_postmeta
wp_8_posts
wp_8_referer_blacklist
wp_8_referer_visitLog
wp_8_signups
wp_8_term_relationships
wp_8_term_taxonomy
wp_8_termmeta
wp_8_terms

You can try this, though I have not tested it -
mysqldump -u user -p database_name table_blogid_* > wp_blogid.sql

The first approach might not work. Here is another solution for you -
mysqldump DBNAME $(mysql -D DBNAME -Bse "show tables like 'wp_8_%'") > wp_8.sql
Or you can try this: get the table names into a file first -
mysql -N information_schema -e "select table_name from tables where table_schema = 'databasename' and table_name like 'wp_8_%'" > wp_8_tables.txt
Now run mysqldump to export those tables -
mysqldump -u user -p database_name `cat wp_8_tables.txt` > wp_blogid.sql

My best solution so far was to create a shell script with an array of numbers (the blog IDs) that loops over them and runs the mydumper command to export the SQL tables into a single directory. The command looked like this:
mydumper --database="mydbname" --outputdir="/path/dir/" --regex="mydbname\.wp_${i}_.*"
(${i} is the blog ID from the array loop)
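For reference, here is a minimal sketch of that loop (untested; the IDs in BLOG_IDS and the paths are placeholders):
#!/bin/bash
# Export every table belonging to each blog ID with mydumper.
BLOG_IDS=(8 9 15 42)   # placeholder list of blog IDs to export
for i in "${BLOG_IDS[@]}"; do
  mydumper --database="mydbname" \
           --outputdir="/path/dir/" \
           --regex="mydbname\.wp_${i}_.*"
done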
Once completed I was able to load all of the sql files into the new database with the myloader command:
myloader --database="mynewdbname" --directory="/path/dir/"
The next challenge is to figure out how to DROP all of the exported tables from the original database...
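One possible approach for that (an untested sketch; the database name and pattern are placeholders) is to generate the DROP statements from information_schema, review them, and only then feed them back to mysql:
mysql -N -u user -p -e "SELECT CONCAT('DROP TABLE \`', table_name, '\`;') FROM information_schema.tables WHERE table_schema='mydbname' AND table_name REGEXP '^wp_8_'" > drop_wp_8.sql
# inspect drop_wp_8.sql first, then:
mysql -u user -p mydbname < drop_wp_8.sql
The REGEXP can be adjusted per blog ID, or the same array loop as above can be reused to cover all exported blogs.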

Related

Importing a MySQL Database on Localhost

So I wanted to format my system, and I had a lot of work on my localhost that involves databases. I followed the normal way of backing up a database by exporting it into an SQL file, but I think I made a mess by backing up everything into one SQL file (I mean the whole localhost was exported to just one SQL file).
The problem now is: when I try to import the backed-up file (localhost.sql), I get a "tables already exist" error for
information_schema
performance_schema
and every other table that comes with XAMPP, which has been preventing me from importing the database.
These tables are the phpMyAdmin tables that came with XAMPP. I have been trying to get past this for days.
My question now is: can I extract the different databases from this single combined SQL file?
To import a database, you can do either of the following:
mysql -u username -p database_name < /path/to/database.sql
From within mysql:
mysql> use database_name;
mysql> source database.sql;
The error is quite self-explanatory. The tables information_schema and performance_schema are already in the MySQL server instance that you are trying to import to.
Both of these databases are default in MySQL, so it is strange that you would be trying to import these into another MySQL installation. The basic syntax to create a .sql file to import from the command line is:
$ mysqldump -u [username] -p [database name] > sqlfile.sql
Or for multiple databases:
$ mysqldump --databases db1 db2 db3 > sqlfile.sql
Then to import them into another MySQL installation:
$ mysql -u [username] -p [database name] < sqlfile.sql
If the database already exists in MySQL then you need to do:
$ mysqlimport -u [username] -p [database name] sqlfile.sql
This seems to be the command you want to use, however I have never replaced the information_schema or performance_schema databases, so I'm unsure if this will cripple your MySQL installation or not.
So an example would be:
$ mysqldump -uDonglecow -p myDatabase > myDatabase.sql
$ mysql -uDonglecow -p myDatabase < myDatabase.sql
Remember not to provide a password on the command line, as this will be visible in plain text in the command history.
The point the previous responders seem to be missing is that the dump file localhost.sql when fed into mysql using
% mysql -u [username] -p [databasename] < localhost.sql
generates multiple databases so specifying a single databasename on the command line is illogical.
I had this problem and my solution was to not specify [databasename] on the command line and instead run:
% mysql -u [username] -p < localhost.sql
which works.
Actually, it didn't work right away in my case, because previous attempts had already created some structure inside MySQL, and those bits in localhost.sql make mysql complain because they already exist from the first time around, so they can't be created a second time.
The solution to THAT is to manually edit localhost.sql with modifications like
INSERT IGNORE in place of INSERT (so it doesn't re-insert the same rows, nor complain),
CREATE DATABASE IF NOT EXISTS in place of CREATE DATABASE,
CREATE TABLE IF NOT EXISTS in place of CREATE TABLE,
and deleting ALTER TABLE commands entirely if they generate errors, because by then they've already been executed (and perhaps INSERTs and CREATEs too, for the same reason). You can check the tables with DESCRIBE and SELECT statements to make sure that the ALTERations, etc. have taken hold, for confidence.
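If you would rather not hand-edit such a large file, a rough sed pass along those lines (an untested sketch; it assumes the statements start at the beginning of a line and don't already carry IF NOT EXISTS) could be:
sed -e 's/^INSERT INTO/INSERT IGNORE INTO/' \
    -e 's/^CREATE DATABASE /CREATE DATABASE IF NOT EXISTS /' \
    -e 's/^CREATE TABLE /CREATE TABLE IF NOT EXISTS /' \
    localhost.sql > localhost_patched.sql
Errant ALTER TABLE lines could be dropped in the same pass with an extra -e '/^ALTER TABLE/d' expression, but only after confirming they already took effect the first time.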
My own localhost.sql file was 300 MB, which my favorite editor, emacs, complained about, so I had to pull out pieces using
% head -n 20000 localhost.sql | tail -n 10000 > 2nd_10k_lines.sql
and go through it 10k lines at a time. It wasn't too hard, because Drupal was responsible for the vast majority of the junk in there, and I didn't want to keep any of that, so I could carve away enormous chunks easily.
unzip -p /pathoffile/database_file.zip | mysql -uusername -p databasename;
The best way to import a database on localhost takes five simple steps:
1. Zip the SQL file first to compress the database size.
2. Go to the terminal.
3. Create an empty database.
4. Run the unzip-and-import command: unzip -p /pathoffile/database_file.zip | mysql -uusername -p databasename;
5. Enter the password when prompted.

Skip tables in mysqldump based on a pattern

Is there a way to exclude certain tables (i.e. ones whose names start with 'test') from the mysqldump command? I know I can use:
mysqldump -u username -p database \
--ignore-table=database.table1 \
--ignore-table=database.table2 etc > database.sql
But the problem is, there are around 20 tables whose names start with 'test'. Is there any way to skip these tables without a long command like "--ignore-table=database.table1 --ignore-table=database.table2 --ignore-table=database.table3 ... --ignore-table=database.table20"?
And is there any way to dump only the schema, with no data?
Unfortunately mysqldump requires table names to be fully qualified so you can't specify a parameter as a regex pattern.
You could, however, use a script to generate your mysqldump by having it connect to the information_schema and list all the tables using something like:
SELECT TABLE_NAME, TABLE_SCHEMA
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA NOT IN ('INFORMATION_SCHEMA', 'mysql', 'PERFORMANCE_SCHEMA');
Then have the script generate --ignore-table parameters for all table names that match the regex ^test.
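A rough bash sketch of that idea (untested; the username, database name, and output file are placeholders, and you will be prompted for the password twice):
DB=database
IGNORES=$(mysql -N -u username -p -e "SELECT CONCAT('--ignore-table=', table_schema, '.', table_name) FROM information_schema.tables WHERE table_schema='$DB' AND table_name LIKE 'test%'")
mysqldump -u username -p "$DB" $IGNORES > database.sql
The unquoted $IGNORES expansion turns each generated line into a separate --ignore-table argument.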
To dump only the schema and no data you can use --no-data=true as a parameter.
If you want to dump everything for all of the non-test tables, but only the schema for another table, you would need two separate mysqldump commands: one that ignores all the test tables plus the schema-only table, and a second that dumps only the schema of that schema-only table, with the second command appending to the output file via the >> operator.
So your resulting script might generate something like:
mysqldump -u root -ptoor databaseName --ignore-table=databaseName.testTable1 --ignore-table=databaseName.testTable2 --ignore-table=databaseName.testTable3 --ignore-table=databaseName.schemaOnlyTable > mysqldump.sql
mysqldump -u root -ptoor databaseName schemaOnlyTable --no-data=true >> mysqldump.sql

ssh Mysql dump table parts (11GB DB to smaller pieces)

I'm facing the following:
We have an 11GB DB table with over 257 million records and need a backup. Exporting via phpMyAdmin isn't possible (Chrome keeps crashing), and backing up over SSH with mysqldump tablename gives an insufficient disk space error (error 28).
Now I'd like to know if there is a way to run mysqldump for a range of rows (say, row 0 till ~100.000.000) so we can split the export into 3 parts (or smaller parts if required).
What I'm using:
mysqldump -p -u username database_name database_table > dbname.sql
[EDIT]
Found out how to dump the rows with id < 50.000.000 with the following:
mysqldump -p -u username db_name db_table --where='id<50000000'
But the big question remains: how do I go further? Now I want to get all records between 50.000.000 and 100.000.000...
Anybody knows the answer if it's possible and what command I should use?
Problem solved:
Part 1 (<50.000.000):
mysqldump -p -u username db_name db_table --where='id<50000000' >part_1.sql
Part 2 (>=50.000.000 till <100.000.000):
mysqldump -p -u username db_name db_table --where='id>=50000000 && id<100000000' >part_2.sql
Part last (>250.000.000):
mysqldump -p -u username db_name db_table --where='id>250000000' >part_final.sql
And so on..
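To avoid typing every range by hand, a loop along these lines can generate the parts (an untested sketch; it assumes an auto-increment integer id column and the same placeholder credentials as above, and it will prompt for the password on each iteration unless credentials are stored in ~/.my.cnf):
CHUNK=50000000
MAX=$(mysql -N -u username -p db_name -e "SELECT MAX(id) FROM db_table")
START=0
PART=1
while [ "$START" -le "$MAX" ]; do
  mysqldump -p -u username db_name db_table \
    --where="id>=$START AND id<$((START + CHUNK))" > part_$PART.sql
  START=$((START + CHUNK))
  PART=$((PART + 1))
done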
mysqldump creates a text file that contains SQL statements. If you want to take a MySQL backup in parts, you will have to run mysqldump like this:
mysqldump --where "id%2=0" database_name table > table_even.sql
mysqldump --where "id%2=1" database_name table > table_odd.sql
OR
you will need to write a program or script to achieve that.
I found a nice solution for heavy transfers! It might also help you avoid transferring your database in parts (as in this example), since it is very fast:
Export the full database, or in parts as mentioned, using mysqldump:
mysqldump -p -u username db_name db_table --where='id<50000000' >part_1.sql
To import into the new database, log in to it via the terminal:
mysql -h localhost -upotato -p123456
Enter the database:
USE databasename;
Use the source command:
source /path/to/file.sql;
This works about 1000x faster than the standard:
mysql -h localhost_new -upotato -p1234567 table_name < /path/to/file.sql
since you are already inside the database session.

Select MySQL Tables with less than 100k Rows

I'd like to be able to make a backup of tables with less than 100K rows. What I'm trying to do is clone a development database to my local machine that has many log tables that I don't need the data for, and tables with "legitimate" content.
So I'm going to have one dump that just copies the structures of the tables, and another, that copies the relevant data from these tables with less than 100k rows.
If I have to use an intermediary language like Python or PHP, I'm fine with that.
Edit: So the question is, how do I create a MySQL dump of the data from tables with fewer than 100k rows?
Use something like this:
mysql databasename -u [root] -p[password] --disable-column-names -e 'select table_name from information_schema.tables where table_schema = database() and table_rows < 100000;' | xargs mysqldump [databasename] -u [root] -p[password] > [target_file]
p.s. all this will need to be in a single line
To dump only the schema
mysqldump --user=dbuser --password --no-data --tab=/tmp dbname
Or try to export the schema and data separately for each table with the command below:
mysqldump --user=dbuser --password --tab=/tmp dbname
Or
mysqldump --opt --where="1 limit 100000" database > fileName.sql
That would give you the first 100K rows from every table.
To ignore some tables:
mysqldump --opt --where="1 limit 100000" --ignore-table=database.table1 --ignore-table=database.table2 database > fileName.sql

Can I restore a single table from a full mysql mysqldump file?

I have a mysqldump backup of my mysql database consisting of all of our tables which is about 440 megs. I want to restore the contents of just one of the tables from the mysqldump. Is this possible? Theoretically, I could just cut out the section that rebuilds the table I want but I don't even know how to effectively edit a text document that size.
You can try to use sed in order to extract only the table you want.
Let's say the name of your table is mytable and the file mysql.dump contains your huge dump:
$ sed -n -e '/CREATE TABLE.*`mytable`/,/Table structure for table/p' mysql.dump > mytable.dump
This copies into the file mytable.dump everything located between CREATE TABLE `mytable` and the next "Table structure for table" comment (i.e. the start of the next table).
You can then adjust the file mytable.dump, which contains the structure of the table mytable and its data (a list of INSERT statements).
I used a modified version of uloBasEI's sed command. It includes the preceding DROP command, and reads until mysql is done dumping data to your table (UNLOCK). Worked for me (re)importing wp_users to a bunch of Wordpress sites.
sed -n -e '/DROP TABLE.*`mytable`/,/UNLOCK TABLES/p' mydump.sql > tabledump.sql
This can be done more easily. Here is how I did it:
Create a temporary database (e.g. restore):
mysqladmin -u root -p create restore
Restore the full dump in the temp database:
mysql -u root -p --one-database restore < fulldump.sql
Dump the table you want to recover:
mysqldump restore mytable > mytable.sql
Import the table in another database:
mysql -u root -p database < mytable.sql
A simple solution would be to create a separate dump of just the table you wish to restore. You can use the mysqldump command to do so with the following syntax:
mysqldump -u [user] -p[password] [database] [table] > [output_file_name].sql
Then import it as normal, and it will only import the dumped table.
One way or another, any process doing that will have to go through the entire text of the dump and parse it in some way. I'd just grep for
INSERT INTO `the_table_i_want`
and pipe the output into mysql. Take a look at the first table in the dump beforehand, to make sure you're getting the INSERTs the right way.
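For example, something along these lines (a sketch; the database, table, and dump file names are placeholders, and the target table must already exist):
grep '^INSERT INTO `the_table_i_want`' fulldump.sql | mysql -u username -p database_name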
Backup
$ mysqldump -A | gzip > mysqldump-A.gz
Restore single table
$ mysql -e "truncate TABLE_NAME" DB_NAME
$ zgrep ^"INSERT INTO \`TABLE_NAME" mysqldump-A.gz | mysql DB_NAME
You should try bryn's command, but with the ` delimiter; otherwise you will also extract tables whose names contain yours as a prefix or suffix. This is what I usually do:
sed -n -e '/DROP TABLE.*`mytable`/,/UNLOCK TABLES/p' dump.sql > mytable.sql
Also, for testing purposes, you may want to change the table name before importing:
sed -e 's/`mytable`/`mytable_restored`/g' mytable.sql > mytable_restored.sql
To import you can then use the mysql command:
mysql -u root -p'password' mydatabase < mytable_restored.sql
One possible way to deal with this is to restore to a temporary database, and dump just that table from the temporary database. Then use the new script.
sed -n -e '/-- Table structure for table `my_table_name`/,/UNLOCK TABLES/p' database_file.sql > table_file.sql
This is a better solution than some of the others above because not all SQL dumps contain a DROP TABLE statement. This one will work with all kinds of dumps.
This tool may be what you want: tbdba-restore-mysqldump.pl
https://github.com/orczhou/dba-tool/blob/master/tbdba-restore-mysqldump.pl
e.g. Restore a table from database dump file:
tbdba-restore-mysqldump.pl -t yourtable -s yourdb -f backup.sql
The table should be present with the same structure in both the dump and the database.
zgrep -a ^"INSERT INTO \`table_name" DbDump-backup.sql.tar.gz | mysql -u<user> -p<password> database_name
or
zgrep -a ^"INSERT INTO \`table_name" DbDump-backup.sql | mysql -u<user> -p<password> database_name
This may help too.
# mysqldump -u root -p database0 > /tmp/database0.sql
# mysql -u root -p -e 'create database database0_bkp'
# mysql -u root -p database0_bkp < /tmp/database0.sql
# mysql -u root -p database0 -e 'insert into database0.table_you_want select * from database0_bkp.table_you_want'
Most modern text editors should be able to handle a text file that size, if your system is up to it.
Anyway, I had to do this once in a hurry and didn't have time to find any tools. I set up a new MySQL instance, imported the whole backup, and then dumped out just the table I wanted.
Then I imported that table into the main database.
It was tedious but rather easy. Good luck.
You can use vi editor. Type:
vi -o mysql.dump mytable.dump
to open both whole dump mysql.dump and a new file mytable.dump.
Find the appropriate INSERT INTO line by pressing / and typing a search phrase, for example "insert into `mytable`", then copy that line using yy. Switch to the next file with Ctrl+W followed by the down arrow key, paste the copied line with p, save the new file by typing :wq, and quit the vi editor with :q.
Note that if the data was dumped using multiple INSERT statements, you can copy (yank) all of them at once using Nyy, where N is the number of lines to be copied.
I have done it with a file of 920 MB size.
I tried a few options, which were incredibly slow. This split a 360GB dump into its tables in a few minutes:
How do I split the output from mysqldump into smaller files?
The sed solutions mentioned earlier are nice but, as mentioned, not 100% safe.
You may have INSERT commands with data containing:
... CREATE TABLE...(whatever)...mytable...
or even the exact string "CREATE TABLE `mytable`;"
if you are storing SQL commands in your data, for instance!
(and if the table is huge you don't want to check that manually)
I would verify the exact syntax of the dump version used, and have a more restrictive pattern search:
Avoid ".*" and use "^" to ensure we start at the beginning of the line.
And I'd prefer to grab the initial 'DROP'
All in all, this works better for me:
sed -n -e '/^DROP TABLE IF EXISTS \`mytable\`;/,/^UNLOCK TABLES;/p' mysql.dump > mytable.dump
Get a decent text editor like Notepad++ or Vim (if you're already proficient with it). Search for the table name and you should be able to highlight just the CREATE, ALTER, and INSERT commands for that table. It may be easier to navigate with your keyboard rather than a mouse. And I would make sure you're on a machine with plenty of RAM so that it will not have a problem loading the entire file at once. Once you've highlighted and copied the rows you need, it would be a good idea to back up just the copied part into its own backup file and then import it into MySQL.
The chunks of SQL are blocked off with "Table structure for table my_table" and "Dumping data for table my_table."
You can use a Windows command line as follows to get the line numbers for the various sections. Adjust the searched string as needed.
find /n "for table `" sql.txt
The following will be returned:
---------- SQL.TXT
[4384]-- Table structure for table my_table
[4500]-- Dumping data for table my_table
[4514]-- Table structure for table some_other_table
... etc.
That gets you the line numbers you need... now, if I only knew how to use them... investigating.
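If you have sed available (e.g. through Git Bash or WSL on Windows), one way to use those line numbers (a sketch based on the example output above) is to print just that range into a new file:
sed -n '4384,4513p' sql.txt > my_table.sql
Here 4384 is the "Table structure" line for my_table and 4513 is the last line before the next table's structure comment at 4514.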
You can import a single table from the terminal command line as shown below. Here a single user table is imported into a specific database:
mysql -u root -p -D my_database_name < var/www/html/myproject/tbl_user.sql
I admire some of the ingenuity here, but there is literally no reason to use sed at all to address the OP's question.
The comment "use --one-database" is the correct answer, built into MySQL/MariaDB. No need for third-party hacks.
mysql -u root -p databasename --one-database < localhost.sql will just import the desired database.
I also found in some cases, when using this to import a series of databases, it would create the next database in the list for me (but not put anything in it). Not sure why it did that, but it made the restore easier.
With this command, enter the password interactively and it will import the requested database.