When replicating data from MySQL to MongoDB, it seems that tables in MySQL map almost one-to-one to MongoDB collections. However, I want to take MongoDB's flexible schema into consideration and map a relational database to a MongoDB database with a different schema. How can I tell Tungsten Replicator to use a special applier to replicate the data this way?
For example: I have a MySQL database named A with 10 tables. Taking the MongoDB schema into account, I build a corresponding database in MongoDB with 6 collections that map the models from MySQL database A. Now I want to replicate the data from MySQL to MongoDB. How can I do that?
Related
I am new to Azure. I have a requirement to transfer data from a table in a transactional MySQL DB, where the data is stored as JSON blobs (I am not aware why they used NoSQL-style writes in a SQL DB), to a table in PostgreSQL, but in a flattened format. What is the best way to achieve this? This is not a one-time task; it needs to be done every time there is an ingestion into the transactional DB, and I need to push those records into the PostgreSQL DB.
What you need is an ETL (Extract - Transform - Load) tool. The one available on Azure is Azure Data Factory, which has connectors for MySQL, PostgreSQL, and many more. So basically you'll create a pipeline and use the Copy Data activity to extract the data from MySQL and then insert it into PostgreSQL.
You can get more information here:
https://learn.microsoft.com/en-us/azure/data-factory/connector-mysql
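Since the source rows hold the payload as JSON blobs, one option is to flatten them in the source query of the Copy Data activity, so that plain columns arrive in PostgreSQL. A minimal sketch, assuming a hypothetical source table transactions(id, payload, updated_at) and field names order_id, amount, and created_at inside the blob:

    -- Flatten the JSON blob into columns; use this as the source query
    -- of the copy activity so flat rows land in the PostgreSQL sink.
    SELECT
        id,
        JSON_UNQUOTE(JSON_EXTRACT(payload, '$.order_id'))   AS order_id,
        JSON_UNQUOTE(JSON_EXTRACT(payload, '$.amount'))     AS amount,
        JSON_UNQUOTE(JSON_EXTRACT(payload, '$.created_at')) AS created_at
    FROM transactions
    -- placeholder watermark filter so each pipeline run only picks up
    -- rows from the latest ingestion (the column name is an assumption)
    WHERE updated_at > '2024-01-01 00:00:00';

JSON_EXTRACT requires MySQL 5.7 or later; on older versions the flattening would have to happen in a transformation step instead of the source query.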
I want to query data from two different MySQL databases to a new MySQL database.
I have two databases with a lot of irrelevant data, and I want to create what can be seen as a data warehouse where only the relevant data from the two databases is present.
As of now, all data is written to the two old databases, but I would like scheduled updates so the new database stays up to date. There is a shared key between the two databases, so ideally I would like all the data to end up in one table, though this is not crucial.
I have done similar work with Logstash and ES, however I do not know how to do it when it comes to MySQL.
The best way to do that is to create an ETL process with Pentaho Data Integration or any other ETL tool, where your sources are the two different databases; in the transformation step you can apply any business logic, and then load the data into the new database.
If you build this ETL, you can schedule it to run once a day so that your database stays up to date.
If you want to do this without an ETL tool, then both databases must be on the same host. Then you can just prefix the table name with the database name in the query, like SELECT * FROM database.table_name.
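As a minimal sketch of the no-ETL approach, assuming both source databases (here called db_a and db_b, joined on customer_id) live on the same MySQL server as a new warehouse database dw, all names being hypothetical:

    -- Target table in the warehouse database, keyed on the shared key.
    CREATE TABLE IF NOT EXISTS dw.combined (
        customer_id INT PRIMARY KEY,
        order_total DECIMAL(10,2),
        segment     VARCHAR(50)
    );

    -- Nightly refresh via the MySQL event scheduler (needs
    -- event_scheduler = ON); a cron job running the same REPLACE
    -- statement would work just as well.
    CREATE EVENT IF NOT EXISTS dw_nightly_refresh
    ON SCHEDULE EVERY 1 DAY
    DO
        REPLACE INTO dw.combined
        SELECT a.customer_id, a.order_total, b.segment
        FROM db_a.orders    AS a
        JOIN db_b.customers AS b ON b.customer_id = a.customer_id;

REPLACE keeps the refresh idempotent: rows that already exist in dw.combined are simply overwritten on each run.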
I have a question about this.
I'm using Postgres to connect two different databases. I can already write to both: I can write to MySQL using mysql_fdw and I can write values to Cassandra using the Multicorn FDW, but now I want to write to both databases from Postgres.
When I run \du in Postgres I can see that the two extensions are loaded (mysql_fdw and multicorn for Cassandra), and when I run select * from information_schema._pg_foreign_servers; I get the names of both servers. I have already created the database and table in MySQL and the keyspace and column family in Cassandra.
My question now is: how can I write to both to replicate the data?
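A minimal sketch of how the two back ends can be written to from Postgres, assuming the foreign servers already exist (called mysql_srv and cassandra_srv here) and using made-up table and column names; the exact OPTIONS depend on the particular FDW, and writes through Multicorn only work if the underlying FDW implements inserts:

    -- Foreign table backed by MySQL via mysql_fdw
    CREATE FOREIGN TABLE mysql_events (id int, payload text)
        SERVER mysql_srv OPTIONS (dbname 'mydb', table_name 'events');

    -- Foreign table backed by Cassandra via Multicorn
    CREATE FOREIGN TABLE cassandra_events (id int, payload text)
        SERVER cassandra_srv OPTIONS (keyspace 'mykeyspace', columnfamily 'events');

    -- Fan a single write out to both back ends
    CREATE OR REPLACE FUNCTION write_both(p_id int, p_payload text)
    RETURNS void LANGUAGE sql AS $$
        INSERT INTO mysql_events     VALUES (p_id, p_payload);
        INSERT INTO cassandra_events VALUES (p_id, p_payload);
    $$;

    SELECT write_both(1, 'hello');

Keep in mind that atomicity guarantees across two different FDWs are limited, so you may want to handle partial failures (one remote write succeeding, the other failing) in your application.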
I have no experience dealing with nosql databases such as Amazon AWS DynamoDB.
I have some data stored in Amazon AWS DynamoDB.
Is it possible to export data from DynamoDB to MySQL Server ?
If so, how to go about accomplishing that ?
Thanks,
I would extract the data in CSV format. This "DynamoDBtoCSV" tool seems promising. Then you can import this CSV file into your MySQL database with LOAD DATA INFILE.
The drawback is that you 1. need to create the receiving structure first and 2. repeat the process for each table. But it shouldn't be too complicated to 1. generate a corresponding CREATE TABLE statement from the first line output by DynamoDBtoCSV, and 2. run the operation in a loop from a batch.
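As a rough sketch of the receiving side, with hypothetical column names taken from the CSV header of one exported table:

    -- Receiving table matching the CSV produced for one DynamoDB table.
    -- DynamoDB attributes arrive as text; cast to proper types later if needed.
    CREATE TABLE my_dynamo_table (
        id         VARCHAR(64) PRIMARY KEY,
        name       VARCHAR(255),
        created_at VARCHAR(32)
    );

    LOAD DATA INFILE '/tmp/my_dynamo_table.csv'
    INTO TABLE my_dynamo_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES;   -- skip the CSV header row

Depending on your server configuration you may need LOAD DATA LOCAL INFILE instead, or to place the file inside the directory allowed by secure_file_priv.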
Now I am asking myself if MySQL is your best call as a target database. MySQL is a relational database, while DynamoDB is NoSQL (with variable-length aggregates, non-scalar field values, and so on). Flattening this structure into a relational schema may not be such a good idea.
Even though this is a pretty old question, I'm still leaving this here for future researchers.
DynamoDB supports Streams, which can be enabled on any table (from the Overview section of the table in the console). The stream can then be consumed by a Lambda function (look for the Triggers tab of the table) and written to any storage, including but not limited to MySQL.
Data flow:
DynamoDB update/insert > Stream > Lambda > MySQL.
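On the MySQL end, the Lambda usually boils down to an idempotent upsert keyed on the DynamoDB primary key, so replayed stream records don't create duplicates. A sketch with hypothetical table and column names:

    -- Mirror table keyed on the DynamoDB partition key.
    CREATE TABLE dynamo_mirror (
        pk         VARCHAR(64) PRIMARY KEY,
        attributes JSON,        -- the raw item, stored as JSON
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
    );

    -- Statement the Lambda would execute for each INSERT/MODIFY stream record
    -- (placeholders bound from the stream record's NewImage).
    INSERT INTO dynamo_mirror (pk, attributes)
    VALUES (?, ?)
    ON DUPLICATE KEY UPDATE attributes = VALUES(attributes);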
We are handling a data aggregation project where several Microsoft SQL Server databases are combined into one MySQL database. All MSSQL databases have the same schema.
The requirements are:
each MSSQL database can be imported into MySQL independently
before importing each record into MySQL, we need to validate it against specific criteria via PHP
each imported MSSQL database can be rolled back, meaning that even after it has been imported into MySQL, all of its data can be removed from MySQL again
we still want to know which MSSQL database each record imported into MySQL came from
All import processing will be done with PHP.
We are having difficulty in many aspects and don't know the best approach to solve our problem.
Your help will be highly appreciated.
PS: each MSSQL database has around 60 tables, and each table can have a few hundred thousand rows.
Don't use PHP as a database administration utility. Any time you build a quick PHP script to transfer records directly from one database to another, you're going to cause yourself a world of hurt when that script becomes required for production operation.
You have a number of problems that you need solved:
You have multiple MSSQL databases with similar if not identical tables.
You have a single MySQL database that you want to merge the data into.
The imported data must be altered in a specific way before being merged.
You want to prevent all duplicate records in your import.
You want to know what database each record originally came from.
The solution?
Analyze the source MSSQL databases and create a merge strategy for them.
Create a database structure on the MySQL database that fits the merge strategy in #1, including all the new key constraints (like unique and foreign keys) required for the consolidation; a SQL sketch of such a structure follows the two options below.
At this point you have two options left:
Dump the data from each of the source databases to raw files using your RDBMS administration utility of choice. Alter that data to fit your merge strategy and constraints. Document this, and then merge all of the data into your new database structure.
Use a tool like opendbcopy to map columns from one database to another and run a mass import.
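As a sketch of such a consolidated structure, with illustrative table and column names: an extra source_database column records provenance, makes cross-source duplicates easy to constrain, and lets you roll back a whole source with a single DELETE.

    CREATE TABLE customers (
        source_database VARCHAR(64)  NOT NULL,  -- which MSSQL instance the row came from
        source_id       INT          NOT NULL,  -- the row's key in that instance
        name            VARCHAR(255),
        email           VARCHAR(255),
        PRIMARY KEY (source_database, source_id),
        UNIQUE KEY uq_email (email)             -- example cross-source duplicate guard
    );

    -- Roll back everything that was imported from one MSSQL database:
    DELETE FROM customers WHERE source_database = 'mssql_branch_03';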
Hope this helps.