How to import large MySQL dumps into Hadoop?

I need to import Wikipedia dumps (MySQL tables; the unpacked files take about 50 GB) into Hadoop (HBase). Right now I first load the dump into MySQL and then transfer the data from MySQL to Hadoop. But loading the data into MySQL takes a huge amount of time - about 4-7 days. Is it possible to load a MySQL dump directly into Hadoop (using some dump-file parser or something similar)?

As far as I remember, MySQL dumps are almost entirely a set of INSERT statements. You can parse them in your mapper and process them as is... If you only have a few tables, hard-coding the parsing in Java should be trivial.
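Here is a minimal sketch of that idea, assuming the unpacked .sql file has been copied to HDFS and that each INSERT statement arrives as one (possibly very long) input line; the class below is hypothetical and only meant to show the shape of the parsing:
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
// Hypothetical mapper that turns lines like
// "INSERT INTO `page` VALUES (1,'a',...),(2,'b',...);" into one record per row tuple.
public class DumpInsertMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String sql = line.toString().trim();
        if (!sql.startsWith("INSERT INTO")) {
            return; // skip DDL, comments and SET statements in the dump
        }
        int valuesStart = sql.indexOf("VALUES (");
        if (valuesStart < 0) {
            return;
        }
        // Drop "INSERT INTO `table` VALUES (" at the front and ");" at the end,
        // then split mysqldump's extended-insert syntax "(...),(...)" into rows.
        String tuples = sql.substring(valuesStart + "VALUES (".length(), sql.length() - 2);
        for (String row : tuples.split("\\),\\(")) {
            // Naive split: a real parser must honour quoting and escaping
            // inside string values before trusting these boundaries.
            context.write(new Text(row), NullWritable.get());
        }
    }
}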

Use Sqoop, a tool that imports MySQL data into HDFS with MapReduce jobs.
It is handy.
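For example (a sketch; the connection details, table name and paths are placeholders), a plain import into HDFS looks like:
sqoop import --connect jdbc:mysql://dbhost/wikipedia --username user -P --table page --target-dir /user/hadoop/wikipedia/page --num-mappers 4
and since you are targeting HBase, Sqoop can also write straight into an HBase table:
sqoop import --connect jdbc:mysql://dbhost/wikipedia --username user -P --table page --hbase-table page --column-family d --hbase-create-table
Note that Sqoop reads from a running MySQL server over JDBC, so it does not remove the need to load the dump into MySQL first.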

Related

MySQL: import multiple CSV and SQL files from a folder

First of all, my initial intention was simply to rename/copy a database. But the thirst for knowledge extends a little further.
I know about mysqldump db | mysql db. I also know about LOAD DATA LOCAL INFILE to import a single CSV.
MySQL has a nice way to dump a DB as CSV into a single folder:
mysqldump -T/some/folder db
Now that folder will contain both SQL and TXT files for each table (table1.sql, table1.txt, ...)
Why I'm choosing this method: I have a 4 GB database, and the traditional import is painfully slow. I heard that a CSV import might give better performance.
Questions :
Is there any official way to do the reverse operation, i.e. read back from a folder that contains both SQL and TXT files?
Does exporting like this and then importing ensure an exact copy of the original DB (indexes, primary keys, unique constraints, views, etc.)?
EDIT:
So I did some research.
Not the best way, but the official docs do describe the reverse operation: https://dev.mysql.com/doc/refman/5.7/en/reloading-delimited-text-dumps.html (O. Jones' answer); the two commands are sketched below.
If we loop over the folder and import the files one by one in alphabetical order, we eventually run into InnoDB foreign-key constraint problems. Disabling key checks does not seem to solve this.
Quite a myth: a dump made with mysqldump --opt actually imports faster than the CSV import, thanks to its many optimizations. The command used was mysqlimport and not LOAD DATA INFILE (I will try later to see if there is any difference).
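For reference, the reverse operation described on that manual page boils down to two commands per table (a sketch, assuming a folder produced with mysqldump -T as above): recreate the table from its .sql file, then bulk-load the matching .txt file with mysqlimport, which derives the table name from the file name:
mysql db < /some/folder/table1.sql
mysqlimport --local db /some/folder/table1.txt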
This is a little late, but there's a small trick you can do with bash.
cd to the folder where the files are and:
for x in *.txt; do mysqlimport -u YOURUSER -pYOURPASS --local database_to_import_into "$x"; done
This will do the trick.

Big data migration from Oracle to MySQL

I received over 100 GB of data with 67 million records from one of the retailers. My objective is to do some market-basket analysis and CLV. This data is a direct SQL dump from one of the tables, with 70 columns. I'm trying to find a way to extract information from this data, as managing it on a small laptop/desktop setup is becoming time-consuming. I considered the following options:
Parse the data and convert it to CSV format. The file size might come down to around 35-40 GB, as more than half of the information in each record is column names. However, I may still have to use a DB, as I can't use R or Excel with 66 million records.
Migrate the data to a MySQL DB. Unfortunately I don't have the schema for the table, so I'm trying to recreate it by looking at the data. I may also have to replace to_date() in the data dump with str_to_date() to match the MySQL format.
Is there any better way to handle this? All I need to do is extract the data from the SQL dump by running some queries. Hadoop etc. are options, but I don't have the infrastructure to set up a cluster. I'm considering MySQL as I have storage space and some memory to spare.
Suppose I go down the MySQL path, how would I import the data? I'm considering one of the following:
Use sed to replace to_date() with the appropriate str_to_date() inline. Note that I need to do this on a 100 GB file. Then import the data using the mysql CLI.
Write a Python/Perl script that reads the file, converts the data and writes to MySQL directly.
Which would be faster? Thank you for your help.
In my opinion writing a script will be faster, because you skip the sed pass.
I think you need to set up the server on a separate PC and run the script from your laptop.
Also, use tail to quickly grab a part from the bottom of this large file, so you can test your script on that part before you run it on the whole 100 GB file. For example:
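(A sketch; the file name and the exact Oracle date format mask in your dump are assumptions.)
# pull a slice off the end of the dump to test the conversion and the import on
tail -n 100000 dump.sql > sample.sql
# rename the function; note the format masks differ too, e.g. Oracle 'YYYY-MM-DD HH24:MI:SS' becomes MySQL '%Y-%m-%d %H:%i:%s'
sed -e "s/to_date(/str_to_date(/g" -e "s/'YYYY-MM-DD HH24:MI:SS'/'%Y-%m-%d %H:%i:%s'/g" sample.sql > sample_mysql.sql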
I decided to go with the MySQL path. I created the schema by looking at the data (I had to increase a few of the column sizes as there were unexpected variations in the data) and wrote a Python script using the MySQLdb module. The import completed in 4 hours 40 minutes on my 2011 MacBook Pro, with 8,154 failures out of 67 million records; those failures were mostly data issues. Both client and server were running on my MBP.
#kpopovbg, yes, writing the script was faster. Thank you.

phpMyAdmin import takes too long

I have a CSV data file: 5 columns, 3,321,986 rows, file size 199 MB.
I am trying to import it into a MySQL database (phpMyAdmin).
It's been 4 hours already and it's still importing.
Why does it take so long to import? Is this normal?
It's normal with phpMyAdmin.
phpMyAdmin has to translate the CSV into SQL and run the inserts through PHP functions. That is slow and memory-expensive, and it depends on the server.
Be careful: if max_execution_time in your PHP configuration is too short, the import may be interrupted.
The import speed also depends on the server itself and its connection speed.
You can try to chunk the import into several files, for example 10 files of about 19 MB each, and see whether there is a problem with your CSV format at all.
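A quick way to produce those chunks from the shell (a sketch; it assumes the file is called data.csv and has no header row that needs repeating in each piece):
split -l 350000 data.csv part_
This writes files part_aa, part_ab, ... of roughly 350,000 rows each, which you can then import one at a time.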

Is it possible to read MongoDB data, process it with Hadoop, and output it into an RDBMS (MySQL)?

Summary:
Is it possible to:
Import data into Hadoop with the «MongoDB Connector for Hadoop».
Process it with Hadoop MapReduce.
Export it with Sqoop in a single transaction.
I am building a web application with MongoDB. While MongoDB works well for most of the work, in some parts I need stronger transactional guarantees, for which I use a MySQL database.
My problem is that I want to read a big MongoDB collection for data analysis, but the size of the collection means that the analytic job would take too long to process. Unfortunately, MongoDB's built-in map-reduce framework would not work well for this job, so I would prefer to carry out the analysis with Apache Hadoop.
I understand that it is possible to read data from MongoDB into Hadoop by using the «MongoDB Connector for Hadoop», which reads data from MongoDB, processes it with MapReduce in Hadoop, and finally outputs the results back into a MongoDB database.
The problem is that I want the output of the MapReduce to go into a MySQL database, rather than MongoDB, because the results must be merged with other MySQL tables.
For this purpose I know that Sqoop can export the result of a Hadoop MapReduce job into MySQL.
Ultimately, I want to read MongoDB data, then process it with Hadoop, and finally output the result into a MySQL database.
Is this possible? Which tools are available to do this?
TL;DR: Set an output format that writes to an RDBMS in your Hadoop job:
job.setOutputFormatClass( DBOutputFormat.class );
Several things to note:
Exporting data from MongoDB to Hadoop using Sqoop is not possible. This is because Sqoop uses JDBC, which provides a call-level API for SQL-based databases, but MongoDB is not an SQL-based database. You can look at the «MongoDB Connector for Hadoop» to do this job. The connector is available on GitHub. (Edit: as you point out in your update.)
Sqoop exports are not made in a single transaction by default. Instead, according to the Sqoop docs:
Since Sqoop breaks down export process into multiple transactions, it is possible that a failed export job may result in partial data being committed to the database. This can further lead to subsequent jobs failing due to insert collisions in some cases, or lead to duplicated data in others. You can overcome this problem by specifying a staging table via the --staging-table option which acts as an auxiliary table that is used to stage exported data. The staged data is finally moved to the destination table in a single transaction.
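For example (a sketch; the connection string, table names and HDFS path are placeholders):
sqoop export --connect jdbc:mysql://dbhost/analytics --username user -P --table results --staging-table results_staging --clear-staging-table --export-dir /user/hadoop/job-output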
The «MongoDB Connector for Hadoop» does not seem to force the workflow you describe. According to the docs:
This connectivity takes the form of allowing both reading MongoDB data into Hadoop (for use in MapReduce jobs as well as other components of the Hadoop ecosystem), as well as writing the results of Hadoop jobs out to MongoDB.
Indeed, as far as I understand from the «MongoDB Connector for Hadoop» examples, it would be possible to specify an org.apache.hadoop.mapred.lib.db.DBOutputFormat in your Hadoop MapReduce job to write the output to a MySQL database. Following the example from the connector repository:
job.setMapperClass( TokenizerMapper.class );
job.setCombinerClass( IntSumReducer.class );
job.setReducerClass( IntSumReducer.class );
job.setOutputKeyClass( Text.class );
job.setOutputValueClass( IntWritable.class );
job.setInputFormatClass( MongoInputFormat.class );
/* Instead of:
* job.setOutputFormatClass( MongoOutputFormat.class );
* we use an OutputFormatClass that writes the job results
* to a MySQL database. Beware that the following OutputFormat
* will only write the *key* to the database, but the principle
* remains the same for all output formatters
*/
job.setOutputFormatClass( DBOutputFormat.class );
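To actually point DBOutputFormat at MySQL, the job also needs the JDBC connection and the destination table configured. A sketch using the org.apache.hadoop.mapreduce.lib.db classes (the driver class, URL, credentials, table and column names below are placeholders, and the job's output key class has to implement DBWritable):
// Register the JDBC driver and connection the output format should use.
DBConfiguration.configureDB( job.getConfiguration(),
    "com.mysql.jdbc.Driver",
    "jdbc:mysql://dbhost/analytics",
    "user", "password" );
// Declare the destination table and the columns the output keys map onto.
DBOutputFormat.setOutput( job, "word_counts", "word", "count" );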
I would recommend you take a look at Apache Pig (which runs on top of Hadoop's MapReduce). It can output to MySQL (no need to use Sqoop). I used it to do what you are describing. It is possible to do an "upsert" with Pig and MySQL: you can use Pig's STORE command with piggybank's DBStorage and MySQL's INSERT ... ON DUPLICATE KEY UPDATE (http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html).
Use the MongoDB Connector for Hadoop (mongo-hadoop) to read data from MongoDB and process it with Hadoop.
Link:
https://github.com/mongodb/mongo-hadoop/blob/master/hive/README.md
Using this connector you can use Pig and Hive to read data from MongoDB and process it with Hadoop.
Example of a MongoDB-backed Hive table:
CREATE EXTERNAL TABLE TestMongoHiveTable
(
id STRING,
Name STRING
)
STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
WITH SERDEPROPERTIES('mongo.columns.mapping'='{"id":"_id","Name":"Name"}')
LOCATION '/tmp/test/TestMongoHiveTable/'
TBLPROPERTIES('mongo.uri'='mongodb://{MONGO_DB_IP}/userDetails.json');
Once the data is in a Hive table, you can use Sqoop or Pig to export it to MySQL.
Here is the flow:
MongoDB -> process the data using the MongoDB Hadoop connector (Pig) -> store it in a Hive table/HDFS -> export the data to MySQL using Sqoop.

CSV to MySQL conversion and import

I'm working on a large project and haven't had to do what I need help with before. I have a CSV file containing a large amount of data, namely all of the cities, towns and suburbs in Australia. I need to convert the CSV file to SQL for MySQL and then import it into the database.
What would be the best way to achieve this?
Use LOAD DATA INFILE or the equivalent command-line tool mysqlimport.
These are easy to use for loading CSV data, and this method can run around 20x faster than inserting rows one at a time with SQL.
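For example, assuming a table named localities already exists with columns matching the CSV (the table name, column layout and file path here are placeholders), the load is a single statement:
LOAD DATA LOCAL INFILE '/path/to/au_localities.csv'
INTO TABLE localities
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
The IGNORE 1 LINES clause skips a header row; drop it if your file has none.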