How to export data from Amazon DynamoDB into MySQL Server

I have no experience with NoSQL databases such as Amazon DynamoDB.
I have some data stored in DynamoDB.
Is it possible to export data from DynamoDB to MySQL Server?
If so, how do I go about accomplishing that?
Thanks,

I would extract the data in CSV format. This "DynamoDBtoCSV" tool seems promising. Then you can import this CSV file into your MySQL database with LOAD DATA INFILE.
The drawback is that you 1. need to create the receiving structure first and 2. repeat the process for each table. But it shouldn't be too complicated to 1. generate a corresponding CREATE TABLE statement from the first line output by DynamoDBtoCSV, and 2. run the operation in a loop from a batch script.
Now I am asking myself if MySQL is your best call as a target database. MySQL is a relational database, while DynamoDB is NoSQL (with variable-length aggregates, non-scalar field values, and so on). Flattening this structure into a relational schema may not be such a good idea.
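If you do stay with MySQL, here is a rough Java/JDBC sketch of the import step, assuming DynamoDBtoCSV produced an items.csv file with a header line; the table name, columns and connection details are made up, and LOAD DATA LOCAL INFILE must be allowed on both the server (local_infile) and the JDBC URL:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CsvToMySql {
    public static void main(String[] args) throws Exception {
        // allowLoadLocalInfile=true lets Connector/J send the local file to the server
        String url = "jdbc:mysql://mysql-host/targetdb?allowLoadLocalInfile=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             Statement stmt = conn.createStatement()) {
            // Receiving structure; in practice generate the columns from the CSV header line.
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS items (id VARCHAR(64), payload TEXT)");
            // Bulk-load the CSV produced by DynamoDBtoCSV, skipping its header line.
            stmt.executeUpdate(
                "LOAD DATA LOCAL INFILE 'items.csv' INTO TABLE items " +
                "FIELDS TERMINATED BY ',' ENCLOSED BY '\"' " +
                "LINES TERMINATED BY '\\n' IGNORE 1 LINES");
        }
    }
}

Generating the CREATE TABLE columns from the header and looping over the tables is left out; the point is only the CREATE TABLE + LOAD DATA pair.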

Even though it is a pretty old question, I am still leaving this here for future researchers.
DynamoDB supports streams, which can be enabled on any table (from the Overview section of the table in the DynamoDB console). The stream can then be consumed by a Lambda function (look for the Triggers tab of the table) and written to any storage, including but not limited to MySQL.
Data flow:
DynamoDB update/insert > Stream > Lambda > MySQL.
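Here is a minimal sketch of the Lambda piece, assuming a Java function built with the aws-lambda-java-events library (the package of the stream AttributeValue type differs between library versions) and a made-up target table; it reads the new image from each stream record and upserts it into MySQL over JDBC:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.DynamodbEvent;
import com.amazonaws.services.lambda.runtime.events.models.dynamodb.AttributeValue;

public class StreamToMySql implements RequestHandler<DynamodbEvent, Void> {
    @Override
    public Void handleRequest(DynamodbEvent event, Context context) {
        // Hypothetical connection details; in practice read them from environment variables
        // and reuse the connection across invocations.
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://mysql-host/targetdb", "user", "pass");
             PreparedStatement upsert = conn.prepareStatement(
                 "INSERT INTO items (id, payload) VALUES (?, ?) " +
                 "ON DUPLICATE KEY UPDATE payload = VALUES(payload)")) {
            for (DynamodbEvent.DynamodbStreamRecord record : event.getRecords()) {
                Map<String, AttributeValue> image = record.getDynamodb().getNewImage();
                if (image == null) {
                    continue; // REMOVE events carry no new image; handle deletes separately if needed
                }
                upsert.setString(1, image.get("id").getS()); // assumes a string partition key named "id"
                upsert.setString(2, image.toString());       // naive payload mapping, just for the sketch
                upsert.executeUpdate();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
        return null;
    }
}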

Related

Move records from a table in Azure MySql Database to Azure PostgreSql Db

I am new to Azure. I have a requirement to transfer data from a table in a transactional MySQL DB, where it is stored as JSON blobs (I am not aware why they have used a NoSQL-style write format in a SQL DB), to a table in PostgreSQL, but in a flattened format. What is the best way to achieve this? This is not a one-time task; it needs to be done every time there is an ingestion into the transaction DB, and I then need to push those records into the PostgreSQL DB.
What you need is an ETL (Extract, Transform, Load) tool. The one available on Azure is Azure Data Factory, which has connectors for MySQL, PostgreSQL and many more. So basically you'll create a pipeline and use the Copy Data activity to extract the data from MySQL and insert it into PostgreSQL.
You can get more information in here:
https://learn.microsoft.com/en-us/azure/data-factory/connector-mysql
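The Copy Data activity moves rows as-is; flattening the JSON blobs is the Transform step (a mapping or data flow in Data Factory). Purely as an illustration of what that transform has to do, here is a minimal JDBC/Jackson sketch outside Data Factory; every host, table and column name below is hypothetical:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.sql.*;

public class FlattenAndCopy {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        try (Connection src = DriverManager.getConnection("jdbc:mysql://mysql-host/txdb", "user", "pass");
             Connection dst = DriverManager.getConnection("jdbc:postgresql://pg-host/analytics", "user", "pass");
             Statement read = src.createStatement();
             ResultSet rs = read.executeQuery("SELECT id, payload FROM transactions"); // JSON blob column
             PreparedStatement write = dst.prepareStatement(
                     "INSERT INTO transactions_flat (id, customer, amount) VALUES (?, ?, ?)")) {
            while (rs.next()) {
                JsonNode doc = mapper.readTree(rs.getString("payload")); // parse the JSON blob
                write.setLong(1, rs.getLong("id"));
                write.setString(2, doc.path("customer").asText());        // flatten selected fields
                write.setBigDecimal(3, doc.path("amount").decimalValue());
                write.addBatch();
            }
            write.executeBatch();
        }
    }
}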

Accessing data from one MySQL database to another in MySQL Workbench

I have two different databases. I have to access data from one database and insert it into another (with some data processing included; it is not only copying data). Also, the schema is really complex and each table has many rows, so copying the data into the schema of the second database is not an option. I have to do this using MySQL Workbench, so I have to do it with SQL queries. Is there a way to create a connection from one database to another and access its data?
While MySQL Workbench can be used to transfer data between servers (e.g. as part of a migration process), it is not useful when you have to process the data first. Instead you have two other options:
Use a dedicated tool you write yourself to do that (as eddwinpaz mentioned).
Use the capabilities of your server. That is, copy the data to the target server, into a temporary table (using dump and restore). Then use queries to modify the data as you need it. Finally copy it to the target table.

Exporting table data without the schema?

I've tried searching for this but so far I'm only finding results for "exporting the table schema without data," which is exactly the opposite of what I want to do. Is there a way to export data from a SQL table without having the script recreate the table?
Here's the problem I'm trying to solve if someone has a better solution: I have two databases, each on different servers; I'll call them the raw database and the analytics database. The raw database is the "real" database; it collects records sent to its server and stores them in a table using the transactional InnoDB engine. The analytics database is on an internal LAN, is meant to mirror the raw database, and will periodically be updated so that it matches the raw database. It's separated like this because we have a program that will do some analysis and processing of the data, and we don't want to do that on the live server.
Because the analytics database is just a copy, it doesn't need to be transactional, and I'd like it to use the MyISAM engine for its table because I've found it to be much faster to import data into and query against. The problem is that when I export the table from the live raw database, the table schema gets exported too and the table engine is set to InnoDB, so when I run the script to import the data into the analytics database, it drops the MyISAM table and recreates it as an InnoDB table. I'd like to automate this process of exporting/importing data, but this problem of the generated SQL script changing the table engine from MyISAM to InnoDB is stopping me, and I don't know how to get around it. The only way I know is to write a program that has direct access to the live raw database, run a query, and update the analytics database with the results, but I'm looking for alternatives to this.
Like this?
mysqldump --no-create-info ...
Use the --no-create-info option
mysqldump --no-create-info db [table]

Hadoop MongoDB connector: read data but output as MySQL data

Is it possible to read MongoDB data with the Hadoop connector but save the output as a MySQL table? I want to read some data from a MongoDB collection with Hadoop, process it with Hadoop, and output it NOT back into MongoDB but into MySQL.
I have used it to fetch data from MongoDB as input and store the result at a different MongoDB address. For that you need to specify something like:
MongoConfigUtil.setInputURI(discussConf,"mongodb://ipaddress1/Database.Collection");
MongoConfigUtil.setOutputURI(discussConf,"mongodb://ipaddress2/Database.Collection");
For MongoDB to MySQL, my suggestion is that you can write normal Java (JDBC) code to insert whatever data you need into MySQL. That code can live in the reduce or map function, as in the sketch below.
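As a rough illustration of that suggestion, here is a minimal reducer that opens a JDBC connection in setup() and inserts one row per key in reduce(); the connection string, table and column names are made up, and for real jobs DBOutputFormat (see the next question) is usually the cleaner option:

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MySqlInsertReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private Connection conn;
    private PreparedStatement insert;

    @Override
    protected void setup(Context context) throws IOException {
        try {
            // Hypothetical connection details; pass them through the job Configuration in practice.
            conn = DriverManager.getConnection("jdbc:mysql://mysql-host/analytics", "user", "pass");
            insert = conn.prepareStatement("INSERT INTO word_counts (word, total) VALUES (?, ?)");
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        try {
            insert.setString(1, key.toString());
            insert.setInt(2, sum);
            insert.executeUpdate();
        } catch (SQLException e) {
            throw new IOException(e);
        }
        context.write(key, new IntWritable(sum)); // also emit to the normal job output
    }

    @Override
    protected void cleanup(Context context) throws IOException {
        try {
            if (insert != null) insert.close();
            if (conn != null) conn.close();
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }
}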

Is it possible to read MongoDB data, process it with Hadoop, and output it into an RDBMS (MySQL)?

Summary:
Is it possible to:
Import data into Hadoop with the «MongoDB Connector for Hadoop».
Process it with Hadoop MapReduce.
Export it with Sqoop in a single transaction.
I am building a web application with MongoDB. While MongoDB works well for most of the work, in some parts I need stronger transactional guarantees, for which I use a MySQL database.
My problem is that I want to read a big MongoDB collection for data analysis, but the size of the collection means that the analytic job would take too long to process. Unfortunately, MongoDB's built-in map-reduce framework would not work well for this job, so I would prefer to carry out the analysis with Apache Hadoop.
I understand that it is possible to read data from MongoDB into Hadoop by using the «MongoDB Connector for Hadoop», which reads data from MongoDB, processes it with MapReduce in Hadoop, and finally outputs the results back into a MongoDB database.
The problem is that I want the output of the MapReduce to go into a MySQL database, rather than MongoDB, because the results must be merged with other MySQL tables.
For this purpose I know that Sqoop can export the result of a Hadoop MapReduce job into MySQL.
Ultimately, I want to read MongoDB data, then process it with Hadoop, and finally output the result into a MySQL database.
Is this possible? Which tools are available to do this?
TL;DR: Set an output format that writes to an RDBMS in your Hadoop job:
job.setOutputFormatClass( DBOutputFormat.class );
Several things to note:
Exporting data from MongoDB to Hadoop using Sqoop is not possible. This is because Sqoop uses JDBC, which provides a call-level API for SQL-based databases, but MongoDB is not an SQL-based database. You can look at the «MongoDB Connector for Hadoop» to do this job. The connector is available on GitHub. (Edit: as you point out in your update.)
Sqoop exports are not made in a single transaction by default. Instead, according to the Sqoop docs:
Since Sqoop breaks down export process into multiple transactions, it is possible that a failed export job may result in partial data being committed to the database. This can further lead to subsequent jobs failing due to insert collisions in some cases, or lead to duplicated data in others. You can overcome this problem by specifying a staging table via the --staging-table option which acts as an auxiliary table that is used to stage exported data. The staged data is finally moved to the destination table in a single transaction.
The «MongoDB Connector for Hadoop» does not seem to force the workflow you describe. According to the docs:
This connectivity takes the form of allowing both reading MongoDB data into Hadoop (for use in MapReduce jobs as well as other components of the Hadoop ecosystem), as well as writing the results of Hadoop jobs out to MongoDB.
Indeed, as far as I understand from the «MongoDB Connector for Hadoop» examples, it would be possible to specify an org.apache.hadoop.mapred.lib.db.DBOutputFormat in your Hadoop MapReduce job to write the output to a MySQL database. Following the example from the connector repository:
job.setMapperClass( TokenizerMapper.class );
job.setCombinerClass( IntSumReducer.class );
job.setReducerClass( IntSumReducer.class );
job.setOutputKeyClass( Text.class );
job.setOutputValueClass( IntWritable.class );
job.setInputFormatClass( MongoInputFormat.class );
/* Instead of:
* job.setOutputFormatClass( MongoOutputFormat.class );
* we use an OutputFormatClass that writes the job results
* to a MySQL database. Beware that the following OutputFormat
* will only write the *key* to the database, but the principle
* remains the same for all output formatters
*/
job.setOutputFormatClass( DBOutputFormat.class );
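To make that work you also have to tell the job which database and table to write to. Assuming the new-API org.apache.hadoop.mapreduce.lib.db classes and made-up connection details and table, the wiring looks roughly like this (the output key class must implement DBWritable so its write(PreparedStatement) method can fill the listed columns):

// JDBC connection details (hypothetical); the MySQL JDBC driver jar must be on the job classpath.
DBConfiguration.configureDB( job.getConfiguration(),
        "com.mysql.jdbc.Driver",
        "jdbc:mysql://mysql-host/analytics",
        "user", "pass" );
// Target table and its columns.
DBOutputFormat.setOutput( job, "word_counts", "word", "total" );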
I would recommend you take a look at Apache Pig (which runs on top of Hadoop's map-reduce). It will output to MySQL (no need to use Sqoop). I used it to do what you are describing. It is possible to do an "upsert" with Pig and MySQL. You can use Pig's STORE command with PiggyBank's DBStorage and MySQL's INSERT ... ON DUPLICATE KEY UPDATE (http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html).
Use the MongoDB Connector for Hadoop to read data from MongoDB and process it with Hadoop.
Link:
https://github.com/mongodb/mongo-hadoop/blob/master/hive/README.md
Using this connector you can use Pig and Hive to read data from MongoDB and process it with Hadoop.
Example of a MongoDB-backed Hive table:
CREATE EXTERNAL TABLE TestMongoHiveTable
(
id STRING,
Name STRING
)
STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
WITH SERDEPROPERTIES('mongo.columns.mapping'='{"id":"_id","Name":"Name"}')
LOCATION '/tmp/test/TestMongoHiveTable/'
TBLPROPERTIES('mongo.uri'='mongodb://{MONGO_DB_IP}/userDetails.json');
Once the data is exposed as a Hive table, you can use Sqoop or Pig to export it to MySQL.
Here is the flow:
MongoDB -> process the data using the MongoDB Hadoop connector (Pig) -> store it in a Hive table / HDFS -> export the data to MySQL using Sqoop.