Import a subset of a MySQL dump - mysql

I have a database dump, let's say db.sql, that I have to import into MySQL.
I do not want to import all the tables in the dump, only the ones whose names start with a certain range of letters (for example p-z).
I could somehow grep the text of the db.sql file, but I am wondering if someone has a better solution.
Thank you

Dump files are essentially plain-text files containing DDL/DML statements. Hence, the easiest approach is to read the dump file, select the relevant statements, write them to another file, and import that file into MySQL. So you already have the best solution, as far as I can tell.
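For example, a minimal sketch with awk, assuming the dump was produced by mysqldump and still contains its standard "-- Table structure for table `name`" section markers (YOURUSER/yourdb are placeholders; adjust the pattern if your dump looks different):
awk 'BEGIN { keep = 1 }                                    # keep the dump header (SET statements etc.)
     /^-- Table structure for table `/ { keep = ($0 ~ /table `[p-z]/) }
     keep' db.sql > db_p-z.sql
mysql -u YOURUSER -p yourdb < db_p-z.sql
Note that the dump footer (the SET lines that restore the original session settings) is only kept if the last table happens to match, so you may want to re-append it by hand.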

Related

Mysql import multiple CSV and SQL from a folder

First of all, my initial intention was simply to rename/copy a database. But the thirst for knowledge extends a little further.
I knew about mysqldump db | mysql db. I also knew about LOAD DATA LOCAL INFILE to import a single CSV.
MySQL has a nice way to dump a DB as CSV into a single folder:
mysqldump -T/some/folder db
Now that folder will contain both SQL and TXT files for each table (table1.sql, table1.txt, ...).
I'm choosing this method because I have a 4 GB database, and the traditional import is painfully slow. I have heard that a CSV import might give better performance.
Questions:
Is there any official way to do the reverse operation, i.e. read from a folder that contains both SQL and TXT files?
Does exporting like this and then importing ensure an exact copy of the original DB (indexes, primary keys, unique constraints, views, etc.)?
EDIT:
So I did some research.
It's not the best way, but the official documentation does describe the reverse operation: https://dev.mysql.com/doc/refman/5.7/en/reloading-delimited-text-dumps.html (O. Jones' answer)
If we loop over the folder and import the files one by one in alphabetical order, we eventually run into InnoDB foreign key constraint problems. Disabling key checks does not seem to solve the problem.
Quite a myth, but dumping with mysqldump --opt and then importing the SQL seems to be faster than importing CSV, thanks to its many optimizations! The command I used was mysqlimport and not LOAD DATA INFILE (I will try later to see if there is any difference).
This is a little late, but there's a small trick you can do with bash.
cd to the folder where the files are and:
for x in *.txt; do mysqlimport -u YOURUSER -pYOURPASS --local database_to_import_into "$x"; done
This will do the trick.
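If you also need the table definitions recreated first (the approach described on the reloading-delimited-text-dumps page linked in the question), a hedged variant of the same loop would load each table's .sql file before its .txt file. Credentials and the database name are placeholders, and foreign keys may still force a particular table order:
for t in *.sql; do
    mysql -u YOURUSER -pYOURPASS database_to_import_into < "$t"                          # recreate the table
    mysqlimport -u YOURUSER -pYOURPASS --local database_to_import_into "${t%.sql}.txt"   # then load its data
done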

Import Only Matched Data From CSV to MySQL

I need to import some data from a very large CSV file, which is about 1 GB.
Instead of importing everything, I want to import only the matching data; I think it will be easier and faster than importing all of it.
I need to search the "Post Code District" column of the CSV file, and if it contains LS1, LS2 or LS10, import the matching rows into a table in MySQL.
This is a misconception. You think that filtering a text file against a database table is going to be faster than just loading the entire file into the database.
I suppose there are extreme cases where this might be true. But, in general, the safest way to handle this type of situation is:
Import the file into a staging table.
Add indexes, as necessary to the staging table for performance.
Run a query to copy the data you want from the staging table.
I could phrase this a different way. In the time it would take you to figure out how to efficiently combine information from the file and a database table, you could probably go through the above process 10-50 times.
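As a rough sketch of those three steps (the staging/target table names, column names and the file path are assumptions, not taken from your data):
mysql --local-infile=1 -u YOURUSER -pYOURPASS yourdb <<'SQL'
-- step 1: import the file into a staging table matching the CSV layout
CREATE TABLE postcode_staging (
    post_code          VARCHAR(10),
    post_code_district VARCHAR(10),
    town               VARCHAR(100)
);
-- assumes a comma-separated file with a header row
LOAD DATA LOCAL INFILE '/path/to/postcodes.csv'
INTO TABLE postcode_staging
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES;

-- step 2: index the filter column so the copy is fast
CREATE INDEX idx_district ON postcode_staging (post_code_district);

-- step 3: copy only the rows you actually want
INSERT INTO your_target_table (post_code, post_code_district, town)
SELECT post_code, post_code_district, town
FROM postcode_staging
WHERE post_code_district IN ('LS1', 'LS2', 'LS10');
SQL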

How to import large mysql dumps into hadoop?

I need to import Wikipedia dumps (MySQL tables, unpacked files take about 50 GB) into Hadoop (HBase). Currently I first load the dump into MySQL and then transfer the data from MySQL to Hadoop. But loading the data into MySQL takes a huge amount of time - about 4-7 days. Is it possible to load a MySQL dump directly into Hadoop (by means of some dump file parser or something similar)?
As far as I remember, MySQL dumps are almost entirely a set of INSERT statements. You can parse them in your mapper and process them as is... If you have only a few tables, hard-coding the parsing in Java should be trivial.
Use Sqoop, a tool that imports MySQL data into HDFS with MapReduce jobs.
It is handy.
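For reference, a hedged sketch of a Sqoop invocation (host, credentials, table and paths are placeholders; note that Sqoop reads from a running MySQL server, so the data still has to be loaded into MySQL first):
sqoop import \
  --connect jdbc:mysql://dbhost/wikipedia \
  --username YOURUSER --password YOURPASS \
  --table page \
  --target-dir /user/hadoop/wikipedia/page \
  --num-mappers 4
# to land directly in HBase instead of HDFS files, Sqoop also offers
# --hbase-table, --column-family and --hbase-create-table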

Exported databases have different sizes

If I export a database with phpMyAdmin, its size is 18 MB.
If I export it from the terminal using this command, its size is only 11 MB:
/usr/bin/mysqldump --opt -u root -ppassword ${DB} | gzip > ${DB}.sql.gz
Could you explain why? Is it because of the --opt parameter?
How can I be sure the database has been successfully exported? Should I inspect it? Still, that is not a reliable evaluation. Thanks
With the details you've given, there are a number of possibilities as to why the sizes may differ. Assuming the output from phpMyAdmin is also gzipped (otherwise the obvious reason for the difference would be that one is compressed and the other isn't), the following could affect the size to some degree:
Different ordering of INSERT statements causing differences in the compressibility of the data
One using extended inserts, the other using only standard inserts (this seems most likely given the difference in sizes).
More comments added by the phpMyAdmin export tool
etc...
I'd suggest looking at the export to determine completeness (perhaps restore it to a test database and verify that the row counts on all tables are the same).
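To illustrate the extended-insert point: mysqldump lets you control it explicitly, so you could re-export both ways and compare the resulting sizes (same placeholders as the command in the question):
/usr/bin/mysqldump --extended-insert -u root -ppassword ${DB} | gzip > ${DB}-extended.sql.gz      # one multi-row INSERT per table (the default)
/usr/bin/mysqldump --skip-extended-insert -u root -ppassword ${DB} | gzip > ${DB}-single.sql.gz   # one INSERT per row, typically a noticeably larger file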
I don't have enough points to comment so I'm adding my comments in this answer...
If you look at the uncompressed contents of the export files from a phpMyAdmin export and a mysqldump, they will be quite different.
You could use diff to compare the two sql files:
diff file1.sql file2.sql
However, in my experience that will NOT be helpful in this case.
You can simply open the files in your favorite editor and compare them to see for yourself.
As mentioned by Iridium in the previous answer, the use of inserts can differ. I created two new empty databases and imported one of the two exports mentioned above into each (via phpMyAdmin) - one from phpMyAdmin and the other from mysqldump.
The import using the mysqldump export file recreated the database containing 151 tables with 1484 queries.
The import using the phpmyadmin export file recreated the database containing 151 tables with 329 queries.
Of course these numbers apply only to my example, but they seem to be in line with what Iridium was talking about earlier.
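If you want a quick way to see where the difference comes from without reading the whole files, counting statements is usually enough (the file names here are placeholders for your two uncompressed exports):
grep -c '^INSERT' phpmyadmin_export.sql mysqldump_export.sql        # INSERT statements per file
grep -c '^CREATE TABLE' phpmyadmin_export.sql mysqldump_export.sql  # table definitions per file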

What is the easiest way to import an excel sheet into mysql

I have an Excel sheet with some columns and I want to import those into a MySQL table. The problem is that there are more columns in the MySQL table than in the sheet (which is absolutely fine). What would be the easiest way of getting the data into the right fields?
My solution would be to export to CSV and load it into MySQL via PHP, but there has to be a simpler way.
The mysqlimport command-line tool has support for importing CSV files and, IIRC, supports mapping columns in the CSV to different columns in your table.
http://linux.die.net/man/1/mysqlimport
I realize that it's just a command-line wrapper around the LOAD DATA INFILE SQL statement, which could be used instead.
If you need to reorganize the data, you could just import the CSV flat into an equivalent table and, from there, do an INSERT ... SELECT.
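As a hedged sketch of that column mapping (the database, column names and file path are assumptions; mysqlimport derives the table name from the file name, here "customers"):
mysqlimport --local \
  --fields-terminated-by=',' --fields-optionally-enclosed-by='"' \
  --ignore-lines=1 \
  --columns='name,email,phone' \
  -u YOURUSER -pYOURPASS exceldb /path/to/customers.csv
# table columns not listed in --columns keep their default values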
You can use mysqlimport to import CSV data.
Although the mysqlimport solution is absolutely feasible, it can be cumbersome (e.g. NULL handling) if you have to import a lot of files or if you have to import them regularly. I sometimes use Toad® for MySQL, which is able to directly import XLS(X) files into MySQL. This is surely overkill if you only import some Excel data now and then - but it's an alternative, and it can be automated as Toad supports automation workflows.