Select MySQL Tables with less than 100k Rows - mysql

I'd like to be able to make a backup of only the tables with fewer than 100K rows. What I'm trying to do is clone a development database to my local machine; the database has many log tables whose data I don't need, alongside tables with "legitimate" content.
So I'm going to have one dump that just copies the table structures, and another that copies the relevant data from the tables with fewer than 100K rows.
If I have to use an intermediary language like Python or PHP, I'm fine with that.
Edit: So the question is, how do I create a MySQL dump of the data from tables with fewer than 100K rows?

Use something like this:
mysql databasename -u [root] -p[password] --disable-column-names -e
'select table_name from information_schema.tables where table_schema = "databasename" and table_rows < 100000;'
| xargs mysqldump [databasename] -u [root] -p[password] > [target_file]
P.S. All of this needs to be on a single line.
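Spelled out as a small script this is easier to read. The following is just a sketch: it assumes the database is called devdb and that credentials are picked up from ~/.my.cnf, and keep in mind that information_schema.tables.table_rows is only an estimate for InnoDB, so tables near the 100K boundary may be misclassified.
#!/bin/sh
# Sketch: structure for every table, data only for the "small" ones.
DB=devdb   # assumed database name

# 1. Structure of every table (no rows)
mysqldump --no-data "$DB" > structure.sql

# 2. Names of tables with fewer than 100K (estimated) rows
TABLES=$(mysql -N -B -e "SELECT table_name FROM information_schema.tables WHERE table_schema = '$DB' AND table_rows < 100000;")

# 3. Structure plus data for just those tables
mysqldump "$DB" $TABLES > small_tables.sql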

To dump only the schema
mysqldump --user=dbuser --password --no-data --tab=/tmp dbname
Or export the schema and data separately for each table with the command below:
mysqldump --user=dbuser --password --tab=/tmp dbname
Or
mysqldump --opt --where="1 limit 100000" database > fileName.sql
That would give you at most the first 100K rows from every table.
To ignore some tables:
mysqldump --opt --where="1 limit 100000" --ignore-table=database.table1
--ignore-table=database.table2 database > fileName.sql
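If you want everything in a single file, the two dumps can simply be chained; a minimal sketch using the same placeholder names (database, with table1 and table2 standing in for the large tables you want structure-only):
mysqldump --no-data database > clone.sql
mysqldump --no-create-info --where="1 limit 100000" --ignore-table=database.table1 --ignore-table=database.table2 database >> clone.sql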

Related

How to use a file to migrate data from mysql to a clickhouse?

I need to migrate the data from MySQL to ClickHouse and do some testing. The two databases cannot reach each other over the network, so I have to transfer the data via files. The first thing that comes to mind is using the mysqldump tool to export .sql files.
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 -uroot -proot database_name table_name > test.sql
Then I found that there are 120 million rows in the MySQL table, so the INSERT statements in the .sql file exported this way are very long. How can I avoid this, for example by exporting 1000 rows per INSERT statement?
In addition, this .sql file is too big; can it be split into smaller files, and what would I need to do for that?
mysqldump has an option to turn on or off using multi-value inserts. You can do either of the following according to which you prefer:
Separate INSERT statements per row:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --skip-extended-insert -uroot -proot database_name table_name > test.sql
Multi-value insert statements:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --extended-insert -uroot -proot database_name table_name > test.sql
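If you want to sanity-check which style a given dump ended up with, counting the INSERT statements is a quick way to tell (roughly one per row with --skip-extended-insert versus a handful per table with --extended-insert); a small sketch assuming the dump file is test.sql:
grep -c 'INSERT INTO' test.sql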
So what you can do is dump the schema first with the following:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --no-data -uroot -proot database_name > dbschema.sql
Then dump the data as individual insert statements by themselves:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --skip-extended-insert --no-create-info -uroot -proot database_name table_name > test.sql
You can then split the INSERT file into as many pieces as you like. If you're on UNIX, use the split command, for example.
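As a rough sketch, assuming the dump is test.sql, was produced with --skip-extended-insert (one row per line), and you want chunks of about one million lines each:
split -l 1000000 test.sql test_part_
Each test_part_* file then contains complete INSERT statements, though you may still need to prepend the SET statements from the top of the original dump when importing a chunk on its own.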
And if you're worried about how long the import takes, you might also want to add the --disable-keys option to speed up inserts as well.
BUT my recommendation is not to worry about this so much. mysqldump should not exceed MySQL's ability to import in a single statement and it should run faster than individual inserts. As to file size, one nice thing about SQL is that it compresses beautifully. That multi-gigabyte SQL dump will turn into a nicely compact gzip or bzip or zip file.
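To see the compression point in practice, the dump can be piped straight through gzip; a sketch reusing the connection options from above:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --skip-extended-insert --no-create-info -uroot -proot database_name table_name | gzip > test.sql.gz
# decompress later when you need the plain .sql again
gunzip test.sql.gz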
EDIT: If you really want to adjust the number of values per insert in a multi-value insert dump, you can add the --max_allowed_packet option, e.g. --max_allowed_packet=24M. Packet size determines the size of a single data packet (e.g. an insert), so if you set it low enough it should reduce the number of values per insert. Still, I'd try it as-is before you start messing with that.
On the ClickHouse side, the data can then be loaded with clickhouse-client, for example:
clickhouse-client --host="localhost" --port="9000" --max_threads="1" --query="INSERT INTO database_name.table_name FORMAT Native" < clickhouse_dump.sql

How to export thousands of tables from large MySQL database

I have a MySQL Wordpress multisite database with tens of thousands of tables, and I need to export several thousand of them so that I can import them into a new database.
I know that I can use mysqldump like this: "mysqldump -u user -p database_name table_1 table_2 table_3 > filename.sql", but what's the best way to make this scale? If it helps, the tables are named as follows: "wp_blogid_tablename" where blogid is the ID of the blog (there are around 1000 blogs to export), and tablename is one of many different tables names, for example:
wp_8_commentmeta
wp_8_comments
wp_8_links
wp_8_options
wp_8_postmeta
wp_8_posts
wp_8_referer_blacklist
wp_8_referer_visitLog
wp_8_signups
wp_8_term_relationships
wp_8_term_taxonomy
wp_8_termmeta
wp_8_terms
You can try this, though it is untested -
mysqldump -u user -p database_name wp_blogid_* > wp_blogid.sql
The first approach might not work. Anyway, here is another solution for you -
mysqldump DBNAME $(mysql -D DBNAME -Bse "show tables like 'wp_8_%'") > wp_8.sql
Or you can try this: get the table names into a file first -
mysql -N information_schema -e "select table_name from tables where table_schema = 'databasename' and table_name like 'wp_8_%'" > wp_8_tables.txt
Now execute the mysqldump command to export the tables -
mysqldump -u user -p database_name `cat wp_8_tables.txt` > wp_blogid.sql
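To scale this beyond a single blog, the same two-step idea can be wrapped in a loop; a rough sketch, assuming the blog IDs you want are listed one per line in a hypothetical blog_ids.txt and that credentials are picked up from ~/.my.cnf:
while read id; do
  mysql -N information_schema -e "select table_name from tables where table_schema = 'database_name' and table_name like 'wp_${id}_%'" > "wp_${id}_tables.txt"
  mysqldump database_name `cat wp_${id}_tables.txt` > "wp_${id}.sql"
done < blog_ids.txt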
My best solution so far was to create a shell script with an array of numbers (the blog IDs) that loops over them and uses the mydumper command to export the SQL tables to a single directory. The command looked like this:
mydumper --database="mydbname" --outputdir="/path/dir/" --regex="mydbname\.wp_${i}_.*"
(the ${i} is the blog id from the array loop)
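A rough sketch of what that wrapper script can look like, assuming the blog IDs are hard-coded in a bash array (the IDs below are purely illustrative) and that mydumper reads its credentials from its defaults file:
#!/bin/bash
# Hypothetical list of blog IDs to export
ids=(8 12 27 103)
for i in "${ids[@]}"; do
  mydumper --database="mydbname" --outputdir="/path/dir/" --regex="mydbname\.wp_${i}_.*"
done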
Once completed I was able to load all of the sql files into the new database with the myloader command:
myloader --database="mynewdbname" --directory="/path/dir/"
Next challenge is to figure out how to DROP all of the exported tables from the original database...

Mysqldump, exclude data from tables by query

It's possible to ignore tables in mysqldump using:
mysqldump -u username -p database --ignore-table=database.table1 > database.sql
Is it possible to ignore certain records in tables while dumping?
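One option already shown earlier on this page is mysqldump's --where flag, which applies a row filter to every dumped table; a minimal sketch, assuming a created_at column you want to filter on (the column name is purely illustrative):
mysqldump -u username -p database --where="created_at >= '2020-01-01'" > database.sql
Note that the same WHERE clause is applied to every table, so it only helps when the filter is valid for all of them.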

how to take backup of two tables in the mysql database?

I am aware of the mysqldump utility, which takes a backup of an entire database. I need to back up two tables in a MySQL database: one table with all its entries and the second one without entries. I also need both tables in a single SQL file (i.e. mydb.sql).
Is that possible?
Mysqldump can also dump single tables, optionally with or without data:
mysqldump [options] db_name [tbl_name ...]
--no-data, -d: Do not write any table row information (that is, do not dump table contents).
So to dump table1 with all entries, and table2 without entries, you would invoke mysqldump twice like this:
mysqldump db_name table1 > table1.sql
mysqldump --no-data db_name table2 > table2.sql
UPDATE: To dump both tables into a single file, you can append the output of the second command to the first:
mysqldump db_name table1 > dump.sql
mysqldump --no-data db_name table2 >> dump.sql
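If you prefer a single command line, the two invocations can also be grouped in the shell so the redirection only appears once; a small sketch with the same placeholder names:
{ mysqldump db_name table1; mysqldump --no-data db_name table2; } > mydb.sql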

Mysql - How to clear all data from all table in single database

I have a database called Database1 and this DB has 40 tables. Now I want to delete all data from those 40 tables. I know that I can display all tables from the DB using:
SELECT table_name
FROM INFORMATION_SCHEMA.tables
WHERE table_schema = 'Database1';
So how can I delete all data from all tables in Database1 using a single query?
Note:
I should delete only the data, not the tables.
I am using MySQL Workbench 6.0.
You can try this:
mysqldump -d -uuser -ppass --add-drop-table yourdatabasename > yourdatabasename.sql
mysql -uuser -ppass yourdatabasename < yourdatabasename.sql
As Zafar correctly pointed out, if you also want to include stored procedures/functions you can add the -R option.
Or you can try something like:
mysql -Nse 'show tables' yourdatabasename | while read table; do mysql -e "truncate table $table" yourdatabasename; done
You can execute the below command on the server console:
mysql -uroot -p<pass> -Nse 'show tables' database1 | while read table; do mysql -uroot -p<pass> database1 -e "truncate table $table"; done
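If any of the tables are linked by foreign keys, TRUNCATE will fail on the referenced ones; a common workaround (a sketch, with the same placeholders as above) is to disable foreign key checks around each statement:
mysql -uroot -p<pass> -Nse 'show tables' database1 | while read table; do mysql -uroot -p<pass> database1 -e "SET FOREIGN_KEY_CHECKS=0; truncate table $table; SET FOREIGN_KEY_CHECKS=1"; done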
You can also do it with any GUI, such as SQLyog, using the steps below -
right click on database1 > choose more database options > truncate database
A third option is a structure-only backup and restore, as below -
mysqldump -R -d -uroot -proot123 database1 | mysql -uroot -proot123 database1
Note: Always use -R if you are using stored procedures/functions, otherwise you will lose them.
enjoy...
For Oracle:
If you want to delete all the records from all the tables without dropping the tables, you should have a look at this; it may help you:
https://dba.stackexchange.com/questions/74519/delete-all-the-data-from-all-tables
You can try any of the following:
Delete data from all tables in MYSQL
How to empty all rows from all tables in mysql (in sql)