MySQL: import multiple CSV and SQL files from a folder

First of all, my initial intention was simply to rename/copy a database. But the thirst for knowledge extends a little further.
I knew about mysqldump db | mysql db. I also knew about LOAD DATA LOCAL INFILE to import a single CSV.
MySQL has a nice way to dump a DB as CSV into a single folder:
mysqldump -T/some/folder db
Now that folder will contain an SQL file and a TXT file for each table (table1.sql, table1.txt, ...).
The reason I'm choosing this method is that I have a 4 GB database, and the traditional import is painfully slow. I've heard that CSV import might give better performance.
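For reference, a slightly more explicit sketch of that export (credentials and the folder are placeholders). Note that the .txt files are written by the server process itself, so the folder must be writable by mysqld and allowed by secure_file_priv:
# per-table export: .sql with the CREATE TABLE, .txt with the data
mysqldump -u root -p --tab=/some/folder \
  --fields-terminated-by=',' --fields-enclosed-by='"' db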
Questions:
Is there an official way to do the reverse operation, i.e. read from a folder that contains both SQL and TXT files?
Does exporting like this and then importing ensure an exact copy of the original DB (indexes, primary keys, unique constraints, views, etc.)?
EDIT:
So I did some research.
Not the best way, but the official docs do describe the reverse operation: https://dev.mysql.com/doc/refman/5.7/en/reloading-delimited-text-dumps.html (O. Jones' answer)
If we loop over the folder and import the files one by one in alphabetical order, we'll eventually run into InnoDB foreign key constraint problems. Disabling key checks doesn't seem to solve the problem.
Contrary to the myth, dumping with mysqldump --opt and then importing the SQL seems to be faster than the CSV route, thanks to its many optimizations! The command used for CSV was mysqlimport and not LOAD DATA INFILE (I will try later to see if there is any difference).

This is a little late, but there's a small trick you can do with bash.
cd to the folder where the files are and:
for x in *.txt; do mysqlimport -u YOURUSER -pYOURPASS --local database_to_import_into "$x"; done
This will do the trick.
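If the tables don't exist in the target database yet, the .sql files from the same dump folder can be replayed first. A rough sketch along the lines of the manual page linked in the question (user, password and database name are placeholders):
# 1) recreate each table from its .sql file
for sqlfile in *.sql; do
  mysql -u YOURUSER -pYOURPASS database_to_import_into < "$sqlfile"
done
# 2) load the data; mysqlimport derives the table name from the file name
for datafile in *.txt; do
  mysqlimport -u YOURUSER -pYOURPASS --local database_to_import_into "$datafile"
done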

Related

Dumping a large database backup or splitting large SQL files into smaller parts

I have a backup file from a big database. It's about 85 MB in gzip format and 1.5 GB in SQL format.
Now I want to import it into my local database, but neither phpMyAdmin nor Navicat for MySQL can do it. So I want an application to split it into smaller parts and import it part by part.
I tried Notepad++, glogg and TSE Pro to read and manually split it, but apart from TSE the others couldn't open it, and TSE hangs after selecting and cutting 10,000 lines of text.
I also tried GSplit to split it, but it seems GSplit uses its own format for the split parts, which isn't plain text.
Thanks for your help. Any other solution to restore my DB locally is welcome...
Thanks to @souvickcse, BigDump worked great.
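For reference, if the mysql command-line client is available, a plain terminal import avoids the upload and memory limits of GUI tools entirely. A sketch (file and database names are placeholders):
# restore straight from the gzipped dump
gunzip < backup.sql.gz | mysql -u root -p local_db
# or from the uncompressed 1.5 GB .sql file
mysql -u root -p local_db < backup.sql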

Import a subset of a MySQL dump

I have a database dump, let's say db.sql, that I have to import into MySQL.
I do not want to import all the tables in the dump, but only the ones whose names start with a certain range of letters (for example p-z).
I could somehow grep the text of the db.sql file, but I am wondering if someone has a better solution.
Thank you
Dump files are essentially plain-text files containing DDL/DML statements. Hence, the easiest approach is to read the dump file, select the relevant statements, write them to another file, and import that into MySQL. So you already have the best solution, as far as I can tell.
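Along those lines, a rough sketch that filters a standard mysqldump file by its "-- Table structure for table `...`" markers (assumes lowercase table names and the default mysqldump comments; views are not handled, and db.sql, subset.sql and the credentials are placeholders):
# keep the dump header, then switch each table block on/off by its first letter
awk 'BEGIN { keep = 1 }
     /^-- Table structure for table `/ {
       name = $0
       sub(/^-- Table structure for table `/, "", name)
       sub(/`.*/, "", name)
       first = substr(name, 1, 1)
       keep = (first >= "p" && first <= "z")
     }
     keep' db.sql > subset.sql
# import the filtered file
mysql -u USER -p target_db < subset.sql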

How to import large MySQL dumps into Hadoop?

I need to import Wikipedia dumps (MySQL tables; the unpacked files take about 50 GB) into Hadoop (HBase). Currently I first load the dump into MySQL and then transfer the data from MySQL to Hadoop. But loading the data into MySQL takes a huge amount of time - about 4-7 days. Is it possible to load a MySQL dump directly into Hadoop (by means of some dump file parser or something similar)?
As far as I remember, MySQL dumps are almost entirely a set of INSERT statements. You can parse them in your mapper and process them as is... If you have only a few tables, hard-coding the parsing in Java should be trivial.
Use Sqoop, a tool that imports MySQL data into HDFS with MapReduce jobs.
It is handy.
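A sketch of what such a Sqoop import could look like. Note that it still reads from MySQL over JDBC, so it does not skip the MySQL load; host, database, table and column names are placeholders, and Sqoop plus the MySQL JDBC driver are assumed to be installed:
sqoop import \
  --connect jdbc:mysql://dbhost/wikipedia \
  --username myuser -P \
  --table page \
  --hbase-table page \
  --column-family d \
  --hbase-row-key page_id \
  --hbase-create-table \
  --num-mappers 4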

Exported databases have different sizes

If I export a database with phpMyAdmin, its size is 18 MB.
If I export it from the terminal using this command, its size is only 11 MB:
/usr/bin/mysqldump --opt -u root -ppassword ${DB} | gzip > ${DB}.sql.gz
Could you explain why? Is it because of the --opt parameter?
How can I be sure the database has been successfully exported? Should I inspect it? Still, that is not a reliable check. Thanks.
With the details you've given, there are a number of possibilities as to why the sizes may differ. Assuming the output from phpMyAdmin is also gzipped (otherwise the obvious reason for the difference would be that one is compressed, the other isn't), the following could affect size to some degree:
Different ordering of INSERT statements causing differences in the compressibility of the data
One using extended inserts, the other using only standard inserts (this seems most likely given the difference in sizes; see the sketch after this list).
More comments added by the phpMyAdmin export tool
etc...
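To see the effect of the INSERT style yourself, you can compare two mysqldump outputs (a sketch; credentials and the database name are placeholders):
# default: multi-row (extended) INSERT statements, usually smaller and faster to reload
mysqldump -u root -p --extended-insert db | gzip > db_extended.sql.gz
# one INSERT per row, closer to what some GUI exporters produce, usually larger
mysqldump -u root -p --skip-extended-insert db | gzip > db_plain.sql.gz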
I'd suggest looking at the export to determine completeness (perhaps restore it to a test database and verify that the row counts on all tables are the same).
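For the row-count check, something along these lines could work once the dump has been restored into a test database (a sketch; database names are placeholders):
# approximate counts from the data dictionary (exact for MyISAM, estimated for InnoDB)
mysql -u root -p -e "
  SELECT table_schema, table_name, table_rows
  FROM information_schema.tables
  WHERE table_schema IN ('orig_db', 'test_db')
  ORDER BY table_name, table_schema;"
# for exact numbers, run SELECT COUNT(*) against each table instead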
I don't have enough points to comment so I'm adding my comments in this answer...
If you look at the uncompressed contents of the export files from a phpMyAdmin export and a mysqldump, they will be quite different.
You could use diff to compare the two sql files:
diff file1.sql file2.sql
However, in my experience that will NOT be helpful in this case.
You can simply open the files in your favorite editor and compare them to see for yourself.
As mentioned by Iridium in the previous answer, the INSERT style can be different. I created two new empty databases and imported one of the two exports mentioned above into each (via phpMyAdmin) - one from phpMyAdmin and the other via mysqldump.
The import using the mysqldump export file recreated the database containing 151 tables with 1484 queries.
The import using the phpmyadmin export file recreated the database containing 151 tables with 329 queries.
Of course these numbers apply only to my example, but it seems to be in line with what Iridium was talking about earlier.

How would I go about creating a new MySQL table with the results of "myisam_ftdump -c"?

I'm using myisam_ftdump -c to dump the occurrences of words in my fulltext column. What's the simplest way to insert that information into a new MySQL table?
Thanks for any help, it's appreciated.
Redirect the results to a file (with >) and use a LOAD DATA INFILE query to import the contents into your new table.
Note:
For security reasons, when reading text files located on the server, the files must either reside in the database directory or be readable by all. Also, to use LOAD DATA INFILE on server files, you must have the FILE privilege.
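A rough sketch of that approach (the table path, index number and the exact layout of the -c output are assumptions; check what your myisam_ftdump version prints and adjust the awk step accordingly):
# 1) dump per-word statistics for fulltext index 1 of my_table
myisam_ftdump -c /var/lib/mysql/mydb/my_table 1 > /tmp/ft_raw.txt
# 2) keep lines that start with an integer count and normalise to "count<TAB>word"
awk '$1 ~ /^[0-9]+$/ { print $1 "\t" $NF }' /tmp/ft_raw.txt > /tmp/ft_words.tsv
# 3) create the table and load it; LOAD DATA expects tab-separated fields by default,
#    and LOCAL sidesteps the server-side FILE privilege / directory restrictions quoted above
mysql -u root -p mydb -e "
  CREATE TABLE ft_word_counts (cnt INT, word VARCHAR(255));
  LOAD DATA LOCAL INFILE '/tmp/ft_words.tsv' INTO TABLE ft_word_counts;"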