How to put OpenStreetMap data into my MySQL database? - mysql

I downloaded a .osm file from OpenStreetMap (a map of our area). My question is: how can I import this data into my MySQL database?

When I implemented similar functionality, I used the Osmosis tool (http://wiki.openstreetmap.org/wiki/Osmosis) to convert the database to an XML file. Then I created my own tool to parse the file and insert the records into the database. The table structure was similar to the OSM primitives (http://wiki.openstreetmap.org/wiki/Elements).
As a result I ended up with a very large MySQL database, and I had to write complicated queries to retrieve the data.
My advice is NOT to use MySQL to store this data; it is a poor fit for this kind of data. PostgreSQL is a better choice, and you can use the Osmosis tool to generate the database quickly.
If you only need to retrieve map data, you can use the http://overpass-api.de/ service. It works perfectly.
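If you do go the parse-and-insert route described above, here is a minimal sketch of what such a tool could look like. The nodes table with id, lat and lon columns and the connection details are my own assumptions for the example; it only handles OSM node elements, and ways, relations and tags would need similar handling.
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class OsmNodeImporter {
    public static void main(String[] args) throws Exception {
        // Connection details and table layout are placeholders - adjust to your own setup.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/osm", "user", "password");
             PreparedStatement insert = conn.prepareStatement(
                 "INSERT INTO nodes (id, lat, lon) VALUES (?, ?, ?)")) {

            // Stream the .osm file instead of loading it all into memory;
            // large extracts are far too big for a DOM parser.
            XMLStreamReader xml = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream(args[0]));
            while (xml.hasNext()) {
                if (xml.next() == XMLStreamConstants.START_ELEMENT
                        && "node".equals(xml.getLocalName())) {
                    insert.setLong(1, Long.parseLong(xml.getAttributeValue(null, "id")));
                    insert.setDouble(2, Double.parseDouble(xml.getAttributeValue(null, "lat")));
                    insert.setDouble(3, Double.parseDouble(xml.getAttributeValue(null, "lon")));
                    insert.addBatch();
                }
            }
            xml.close();
            insert.executeBatch();
        }
    }
}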

You can use ogr2ogr from GDAL (http://www.gdal.org/ogr2ogr.html).
Here's an example:
ogr2ogr -f MySQL MySQL:osm_data,host=localhost,user=root,password=mypass -nln test -nlt MULTIPOLYGON -update -overwrite -lco engine=InnoDB -lco MYSQL_FID=ogr_fid -lco cp1252 path\to\file.shp -skipfailures

Related

Importing Geometry from MSSQL to MySQL (Linestring)

I've been given some data which I am trying to import into MySQL. The data was provided in a text file format, which is usually fine by me; I know MSSQL uses different data types, so a SQL dump was a non-starter...
For some reason MSSQL seems to store LINESTRINGs in reverse order, which seemed very odd to me. As a result, when I try to upload the file with Navicat, the import fails. Below is an example of the LINESTRING; as you can see, the longitude is first, then the latitude, which is what I believe to be the issue?
LINESTRING (-1.61674 54.9828,-1.61625 54.9828)
Does anybody know how I can get this data into my database?
I'm quite new to spatial/geometry extensions.
Thanks,
Paul
You must remember that columns holding spatial data have their own data type. What Navicat does is call "AsText()" (a "toString()" of sorts) to display the data, but underneath they are stored as BLOBs. The advantage is that both databases are based on the standard WKT format, so I recommend exporting the spatial columns from the source database as text, then in the destination database taking that text and passing it to "GeomFromText()" to convert it back into geometry. (Obviously you have to write a small script in some programming language for this; it cannot be done with Navicat alone.)
Further reading: WKT, MySQL spatial extensions, SQL Server spatial types.
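As a rough sketch of that scripted approach: the routes table, its geom geometry column and the connection details below are made up for the example. The example assumes MySQL 5.7+, where the conversion function is called ST_GeomFromText (older versions call it GeomFromText).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class WktImporter {
    public static void main(String[] args) throws Exception {
        // In a real script this would come from the exported text file,
        // one WKT string per line; hard-coded here to keep the sketch short.
        String wkt = "LINESTRING(-1.61674 54.9828,-1.61625 54.9828)";

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/geodata", "user", "password");
             PreparedStatement insert = conn.prepareStatement(
                 // Let MySQL convert the WKT text back into a geometry value.
                 "INSERT INTO routes (geom) VALUES (ST_GeomFromText(?))")) {
            insert.setString(1, wkt);
            insert.executeUpdate();
        }
    }
}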

NoSQL to MySQL database conversion

Is there any simple procedure for converting Google's NoSQL datastore to a local MySQL database?
I have downloaded all the data using the Remote API Bulk Loader.
How can I separate the desired entity from this bulk data?
Later I want to convert all entities to a MySQL database.
First, I downloaded all the data in CSV format using the following command:
appcfg.py download_data --config_file=bulkloader.yaml --filename=users.csv --kind=Permission --url=http://your_app_id.appspot.com/_ah/remote_api
Then I used phpMyAdmin to import it.
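If you would rather script that last step than use phpMyAdmin, here is a minimal JDBC sketch. The permissions table, its two columns and the assumption that they match the first two CSV fields are all placeholders you would adapt to your own export.
import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CsvToMysql {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/appdata", "user", "password");
             PreparedStatement insert = conn.prepareStatement(
                 "INSERT INTO permissions (name, role) VALUES (?, ?)");
             BufferedReader csv = new BufferedReader(new FileReader("users.csv"))) {
            String line = csv.readLine();            // skip the header row
            while ((line = csv.readLine()) != null) {
                // Naive split; use a real CSV parser if fields can contain commas.
                String[] fields = line.split(",");
                insert.setString(1, fields[0]);
                insert.setString(2, fields[1]);
                insert.addBatch();
            }
            insert.executeBatch();
        }
    }
}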

Hadoop MongoDB connector: read data but output as MySQL data

Is it possible to read MongoDB data with the Hadoop connector but save the output as a MySQL data table? I want to read some data from a MongoDB collection with Hadoop, process it with Hadoop, and output it NOT back into MongoDB but into MySQL.
I have used it to fetch data from MongoDB as input and store the result at a different MongoDB address. For that you need to specify something like:
MongoConfigUtil.setInputURI(discussConf,"mongodb://ipaddress1/Database.Collection");
MongoConfigUtil.setOutputURI(discussConf,"mongodb://ipaddress2/Database.Collection");
For MongoDB to MySQL, my suggestion is to write normal Java code to insert whatever data you need into MySQL; that code can go in the reduce or map function.
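A hedged sketch of that idea: a reducer that opens a plain JDBC connection in setup() and writes its aggregated counts itself. The table name, columns and connection details below are invented for the example.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MysqlWritingReducer
        extends Reducer<Text, IntWritable, NullWritable, NullWritable> {

    private Connection conn;
    private PreparedStatement insert;

    @Override
    protected void setup(Context context) throws java.io.IOException {
        try {
            // Open one connection per reducer task, not per key.
            conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/analytics", "user", "password");
            insert = conn.prepareStatement(
                "INSERT INTO results (word, total) VALUES (?, ?)");
        } catch (java.sql.SQLException e) {
            throw new java.io.IOException(e);
        }
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws java.io.IOException {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        try {
            // Write the aggregate straight to MySQL instead of emitting it.
            insert.setString(1, key.toString());
            insert.setInt(2, sum);
            insert.executeUpdate();
        } catch (java.sql.SQLException e) {
            throw new java.io.IOException(e);
        }
    }

    @Override
    protected void cleanup(Context context) throws java.io.IOException {
        try {
            insert.close();
            conn.close();
        } catch (java.sql.SQLException e) {
            throw new java.io.IOException(e);
        }
    }
}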

Is it possible to read MongoDB data, process it with Hadoop, and output it into an RDBMS (MySQL)?

Summary:
Is it possible to:
Import data into Hadoop with the «MongoDB Connector for Hadoop».
Process it with Hadoop MapReduce.
Export it with Sqoop in a single transaction.
I am building a web application with MongoDB. While MongoDB works well for most of the work, in some parts I need stronger transactional guarantees, for which I use a MySQL database.
My problem is that I want to read a big MongoDB collection for data analysis, but the size of the collection means that the analytic job would take too long to process. Unfortunately, MongoDB's built-in map-reduce framework would not work well for this job, so I would prefer to carry out the analysis with Apache Hadoop.
I understand that it is possible to read data from MongoDB into Hadoop by using the «MongoDB Connector for Hadoop», which reads data from MongoDB, processes it with MapReduce in Hadoop, and finally outputs the results back into a MongoDB database.
The problem is that I want the output of the MapReduce to go into a MySQL database, rather than MongoDB, because the results must be merged with other MySQL tables.
For this purpose I know that Sqoop can export result of a Hadoop MapReduce into MySQL.
Ultimately, I want to read MongoDB data, then process it with Hadoop, and finally output the result into a MySQL database.
Is this possible? Which tools are available to do this?
TL;DR: Set an output format that writes to an RDBMS in your Hadoop job:
job.setOutputFormatClass( DBOutputFormat.class );
Several things to note:
Exporting data from MongoDB to Hadoop using Sqoop is not possible. This is because Sqoop uses JDBC, which provides a call-level API for SQL-based databases, but MongoDB is not an SQL-based database. You can look at the «MongoDB Connector for Hadoop» to do this job. The connector is available on GitHub. (Edit: as you point out in your update.)
Sqoop exports are not made in a single transaction by default. Instead, according to the Sqoop docs:
Since Sqoop breaks down export process into multiple transactions, it is possible that a failed export job may result in partial data being committed to the database. This can further lead to subsequent jobs failing due to insert collisions in some cases, or lead to duplicated data in others. You can overcome this problem by specifying a staging table via the --staging-table option which acts as an auxiliary table that is used to stage exported data. The staged data is finally moved to the destination table in a single transaction.
The «MongoDB Connector for Hadoop» does not seem to force the workflow you describe. According to the docs:
This connectivity takes the form of allowing both reading MongoDB data into Hadoop (for use in MapReduce jobs as well as other components of the Hadoop ecosystem), as well as writing the results of Hadoop jobs out to MongoDB.
Indeed, as far as I understand from the «MongoDB Connector for Hadoop» examples, it would be possible to specify an org.apache.hadoop.mapreduce.lib.db.DBOutputFormat for your Hadoop MapReduce job to write the output to a MySQL database. Following the example from the connector repository:
job.setMapperClass( TokenizerMapper.class );
job.setCombinerClass( IntSumReducer.class );
job.setReducerClass( IntSumReducer.class );
job.setOutputKeyClass( Text.class );
job.setOutputValueClass( IntWritable.class );
job.setInputFormatClass( MongoInputFormat.class );
/* Instead of:
* job.setOutputFormatClass( MongoOutputFormat.class );
* we use an OutputFormatClass that writes the job results
* to a MySQL database. Beware that the following OutputFormat
* will only write the *key* to the database, but the principle
* remains the same for all output formatters
*/
job.setOutputFormatClass( DBOutputFormat.class );
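For the DBOutputFormat to know where to write, the job also needs the JDBC connection details and the target table. A sketch of that extra wiring (DBConfiguration and DBOutputFormat come from org.apache.hadoop.mapreduce.lib.db; the database URL, table and column names are invented for the example, and the job's output key class has to implement DBWritable):
/* Point the job at the MySQL database and the target table/columns. */
DBConfiguration.configureDB( job.getConfiguration(),
        "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost/analytics",
        "user", "password" );
DBOutputFormat.setOutput( job, "word_counts", "word", "total" );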
I would recommend you take a look at Apache Pig (which runs on top of Hadoop's MapReduce). It will output to MySQL (no need to use Sqoop). I used it to do what you are describing. It is possible to do an "upsert" with Pig and MySQL: you can use Pig's STORE command with PiggyBank's DBStorage and MySQL's INSERT ... ON DUPLICATE KEY UPDATE (http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html).
Use the Mongo-Hadoop connector to read data from MongoDB and process it using Hadoop.
Link:
https://github.com/mongodb/mongo-hadoop/blob/master/hive/README.md
Using this connector, you can use Pig and Hive to read data from MongoDB and process it with Hadoop.
Example of Mongo Hive table:
CREATE EXTERNAL TABLE TestMongoHiveTable
(
id STRING,
Name STRING
)
STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
WITH SERDEPROPERTIES('mongo.columns.mapping'='{"id":"_id","Name":"Name"}')
LOCATION '/tmp/test/TestMongoHiveTable/'
TBLPROPERTIES('mongo.uri'='mongodb://{MONGO_DB_IP}/userDetails.json');
Once the data is exposed as a Hive table, you can use Sqoop or Pig to export it to MySQL.
Here is the flow:
MongoDB -> process the data using the MongoDB Hadoop connector (Pig) -> store it in a Hive table/HDFS -> export the data to MySQL using Sqoop.

MySQL export to MongoDB

I am looking to export an existing MySQL database table to seed a MongoDB database.
I would have thought this was a well trodden path, but it appears not to be, as I am coming up blank with a simple MySQLDUMP -> MongoDB JSON converter.
It won't take much effort to code up such a conversion utility.
There is a method that doesn't require any software other than the MySQL and MongoDB utilities. The disadvantage is that you have to go table by table, but in your case you only need to migrate one table, so it won't be painful.
I followed this tutorial. Relevant parts are:
Get a CSV with your data. You can generate one with the following query in MySQL:
SELECT [fields] INTO outfile 'user.csv' FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' FROM [table]
Finally, import the file using mongoimport.
That's all
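If you prefer to code up the conversion utility directly rather than going through CSV, here is a minimal sketch using the MySQL JDBC driver and the MongoDB Java sync driver (MongoClients API). The connection strings, database, table and collection names are placeholders, and everything is copied as a string for simplicity.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class MysqlToMongo {
    public static void main(String[] args) throws Exception {
        try (Connection mysql = DriverManager.getConnection(
                 "jdbc:mysql://localhost/app", "user", "password");
             MongoClient mongo = MongoClients.create("mongodb://localhost");
             Statement stmt = mysql.createStatement();
             ResultSet rows = stmt.executeQuery("SELECT * FROM users")) {

            MongoCollection<Document> users =
                mongo.getDatabase("app").getCollection("users");
            ResultSetMetaData meta = rows.getMetaData();

            // Copy every row as one document, one field per column.
            while (rows.next()) {
                Document doc = new Document();
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    // Values are copied as strings; convert types as needed.
                    doc.append(meta.getColumnLabel(i), rows.getString(i));
                }
                users.insertOne(doc);
            }
        }
    }
}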
If you're using Ruby, you can also try: Mongify
It will read your MySQL database, build a translation file, and allow you to map the information.
It supports:
Updating internal IDs (to BSON ObjectID)
Updating referencing IDs
Type Casting values
Embedding Tables into other documents
Before filters (to change data manually)
and much much more...
Read more about it at: http://mongify.com/getting_started.html
MongoVue is a new project that includes a MySQL import feature. I have not used that feature myself.
If you are a Mac user you can use MongoHub, which has a built-in feature to import (and export) data from MySQL databases.
If you are using Java, you can try this:
http://code.google.com/p/sql-to-nosql-importer/
For a powerful conversion utility, check out Tungsten Replicator
I'm still looking into this one, called SQLToNoSQLImporter, which is written in Java.
I've put a little something up on GitHub - it's not even 80% there, but it's growing as I work on it, and it might be something others of you could help me out with!
https://github.com/jaredwa/mysqltomongo