I need to synchronize data between two databases. The primary database is a SQL Server database where all insert, update, and delete operations take place. The other is a MySQL database that reflects the state of the primary database at the time of synchronization.
Note that real-time synchronization is not important; synchronization will be done at arbitrary times, depending on operator and network availability.
My questions:
What are the possible ways to determine that the two databases are already in sync, so that synchronization is NOT required?
What are the possible ways to push data from the SQL Server to the MySQL server (there is no need to pull data from MySQL)?
Should I use custom scripting, or is there a tool that can take care of the process?
Try Pentaho Kettle, an industrial-strength ETL tool. Before finding Kettle, we wrote a custom Perl script to synchronize, which also works.
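If you do go the custom-script route, the two core pieces are a cheap "are we in sync?" check and an idempotent push. Below is a minimal Node.js sketch under stated assumptions: it uses the mssql and mysql2 packages, a hypothetical customers table with a SQL Server rowversion column (row_ver), and a one-row-per-table bookkeeping table last_sync on the MySQL side; all names and credentials are illustrative.

```js
// sync.js - one-way push from SQL Server to MySQL (illustrative sketch)
const mssql = require('mssql');
const mysql = require('mysql2/promise');

async function syncTable() {
  const src = await mssql.connect({
    server: 'sqlhost', database: 'primarydb', user: 'u', password: 'p',
    options: { trustServerCertificate: true } });
  const dst = await mysql.createConnection({
    host: 'mysqlhost', user: 'u', password: 'p', database: 'replica' });

  // Cheap change check: compare the highest rowversion seen so far.
  // Assumes the bookkeeping row for 'customers' already exists.
  const [[{ last_rv }]] = await dst.query(
    'SELECT last_rv FROM last_sync WHERE table_name = ?', ['customers']);
  const { recordset } = await src.request()
    .query('SELECT MAX(CAST(row_ver AS BIGINT)) AS max_rv FROM customers');
  if (String(recordset[0].max_rv) === String(last_rv)) return; // in sync

  // Push only rows changed since the last sync, idempotently (upsert).
  const changed = await src.request()
    .input('rv', mssql.BigInt, last_rv)
    .query('SELECT id, name, email FROM customers WHERE CAST(row_ver AS BIGINT) > @rv');
  for (const row of changed.recordset) {
    await dst.query(
      'INSERT INTO customers (id, name, email) VALUES (?, ?, ?) ' +
      'ON DUPLICATE KEY UPDATE name = VALUES(name), email = VALUES(email)',
      [row.id, row.name, row.email]);
  }
  await dst.query('UPDATE last_sync SET last_rv = ? WHERE table_name = ?',
    [recordset[0].max_rv, 'customers']);
}

syncTable().catch(console.error);
```

Note that this only propagates inserts and updates; deletes need tombstones or a periodic full compare, which is exactly the kind of bookkeeping a tool like Kettle handles for you.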
I have two databases. I want to export specific tables from the main database to another database programmatically. I am using Node.js.
If there are any packages or ideas for this, please share.
To make this work you have to establish multiple database connections within the Node project. These are the steps to follow (a minimal sketch is shown after the list):
1) Create a database connection to the main database.
2) Get the schema of the table you want to export (you can research how to get a MySQL table schema in Node.js).
3) Store the received schema in variables.
4) Close the previous database connection.
5) Create a new database connection to the database you want to export into.
6) Create the schema using the variable values.
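A minimal sketch of those steps, assuming the mysql2 package; the connection settings and the table name are placeholders:

```js
// export-table.js - copy one table's schema and rows to another database
const mysql = require('mysql2/promise');

async function exportTable(table) {
  // 1-3. Connect to the main database and read the table's schema and rows.
  const main = await mysql.createConnection({
    host: 'localhost', user: 'u', password: 'p', database: 'main_db' });
  const [[schemaRow]] = await main.query('SHOW CREATE TABLE ??', [table]);
  const createSql = schemaRow['Create Table'];
  const [rows] = await main.query('SELECT * FROM ??', [table]);
  // 4. Close the connection to the main database.
  await main.end();

  // 5-6. Connect to the target database, recreate the schema, then the data.
  const target = await mysql.createConnection({
    host: 'localhost', user: 'u', password: 'p', database: 'target_db' });
  await target.query(createSql);
  if (rows.length > 0) {
    await target.query('INSERT INTO ?? VALUES ?', [table, rows.map(Object.values)]);
  }
  await target.end();
}

exportTable('customers').catch(console.error);
```

For large tables you would stream or batch the SELECT/INSERT instead of holding all rows in memory, but the shape of the process is the same.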
I'm copying MySQL databases to Redshift with the help of an ETL tool called Matillion, and I'm using the same tool to query the database. Most of the queries I've written are basic SELECT queries with lots of joins, unions, and subqueries.
Since Redshift is specialized for analytical processing, I want to transform my basic queries into OLAP-style queries.
I'll be grateful if someone could point me in a direction to learn how to write queries in a more OLAP-oriented way.
Thanks!
To clarify, Redshift is not an OLAP database (like HANA or SSAS), so you can't query Redshift in an OLAP way.
However, Redshift does of course support the full range of analytic functions, which are very much OLAP-like: http://docs.aws.amazon.com/redshift/latest/dg/c_Window_functions.html
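As a concrete illustration: Redshift speaks the Postgres wire protocol, so from Node.js you can run such an analytic query with the pg package. This is only a sketch; the cluster endpoint, credentials, and the orders table are made up:

```js
// analytic-query.js - window-function query against Redshift via the pg package
const { Client } = require('pg');

async function runningTotals() {
  const client = new Client({
    host: 'mycluster.example.redshift.amazonaws.com', port: 5439,
    user: 'u', password: 'p', database: 'analytics' });
  await client.connect();
  // A running total per customer, computed with a window function instead of
  // a join-heavy self-aggregation. Redshift requires the explicit frame
  // clause when an aggregate window function has an ORDER BY.
  const { rows } = await client.query(`
    SELECT customer_id,
           order_date,
           SUM(amount) OVER (PARTITION BY customer_id
                             ORDER BY order_date
                             ROWS UNBOUNDED PRECEDING) AS running_total
    FROM orders`);
  console.log(rows);
  await client.end();
}

runningTotals().catch(console.error);
```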
Matillion supports that too, for example with the Window Calculation Component: https://redshiftsupport.matillion.com/customer/portal/articles/1991935-window-calculation-component
You can also search for a Rank Component on the Matillion ETL for Amazon Redshift support portal.
Matillion also has documentation and videos on its Data Quality Framework, which go through some of these.
What are good resources to look at for adding MySQL changes to our DevOps pipeline?
We are in the process of standing up a CI/CD pipeline where we automatically build, configure, and deploy software to servers.
We can currently deploy an application to a blank server, but we take a snapshot of a database to populate the data (essentially unpacking an existing database). We do not want to move data from environment to environment. We also do not want our database updates in all environments to be a manual process.
We would like some automated process to move database changes along with the code, and to keep the ability to deploy our application to a server and have the database populated with the data needed for a runnable application.
I can think of a few resources to help you understand how to make database changes in a deployment pipeline.
Enabling Continuous Delivery with Database Practices, from about the 25-minute mark
Continuous Deployment at Etsy, where slides 50+ give an example of making a change to a schema
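Beyond those talks, the common pattern is versioned SQL migrations tracked in a table and applied automatically on each deploy; tools like Flyway and Liquibase implement it off the shelf. A minimal sketch of the idea in Node.js, assuming a migrations/ directory of numbered .sql files and the mysql2 package (all names are placeholders):

```js
// migrate.js - apply any migrations/*.sql files not yet recorded as applied
const fs = require('fs');
const path = require('path');
const mysql = require('mysql2/promise');

async function migrate() {
  const db = await mysql.createConnection({
    host: 'localhost', user: 'u', password: 'p', database: 'app',
    multipleStatements: true });
  await db.query(`CREATE TABLE IF NOT EXISTS schema_migrations
                  (version VARCHAR(255) PRIMARY KEY)`);
  const [done] = await db.query('SELECT version FROM schema_migrations');
  const applied = new Set(done.map(r => r.version));

  // Apply pending migrations in filename order: 001_init.sql, 002_add_x.sql, ...
  const files = fs.readdirSync('migrations')
    .filter(f => f.endsWith('.sql')).sort();
  for (const file of files) {
    if (applied.has(file)) continue;
    const sql = fs.readFileSync(path.join('migrations', file), 'utf8');
    await db.query(sql);
    await db.query('INSERT INTO schema_migrations (version) VALUES (?)', [file]);
    console.log('applied', file);
  }
  await db.end();
}

migrate().catch(console.error);
```

Run as a pipeline step before the application deploy, this converges every environment on the same schema without manual steps, and the seed data a blank server needs can simply live in early migration files.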
I have a program with a priority queue (PQ) so huge that it does not fit in memory. It was decided to move some data to a MySQL database (DB) in the following way: new elements are put into the DB instead of the PQ, and when the PQ is emptied, it is refilled from the entries in the DB. But this approach turned out to spoil the priority ordering. Is there any solution that combines a PQ with a DB without corrupting the priority ordering?
For some reason I cannot get rid of the PQ and use only the DB.
Your question is rather vague on the functionality, but I think the idea is wrong.
Someone seems to have had the idea of using the database as secondary storage for an in-memory application. That doesn't really make much sense; normally, you would use a simple file for this. Although you can use a database for managing secondary/tertiary storage, a database does many other things, so it is like using a smartphone only as a clock.
If you are going to use a database, then store the entire structure in the database and develop an API for it that meets your needs (a minimal sketch follows the list below).
If you want help with how to structure the data, then write another question and include:
sample data
how the priority queue will be used
any ideas you have on the data structure
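To make the "store the entire structure in the database" suggestion concrete, here is a minimal sketch of a DB-backed priority queue in Node.js with mysql2. Because every element lives in one indexed table and pops are ordered by that index, the priority ordering cannot be corrupted; the table and column names are illustrative:

```js
// pq.js - priority queue backed entirely by a MySQL table
// Assumed table:
//   CREATE TABLE pq (id BIGINT AUTO_INCREMENT PRIMARY KEY,
//                    priority INT NOT NULL, payload TEXT,
//                    KEY idx_priority (priority));
const mysql = require('mysql2/promise');

async function push(db, priority, payload) {
  await db.query('INSERT INTO pq (priority, payload) VALUES (?, ?)',
    [priority, payload]);
}

async function pop(db) {
  await db.beginTransaction();
  try {
    // Lock the single highest-priority row so concurrent pops don't collide.
    const [rows] = await db.query(
      'SELECT id, payload FROM pq ORDER BY priority DESC, id LIMIT 1 FOR UPDATE');
    if (rows.length === 0) { await db.commit(); return null; }
    await db.query('DELETE FROM pq WHERE id = ?', [rows[0].id]);
    await db.commit();
    return rows[0].payload;
  } catch (err) {
    await db.rollback();
    throw err;
  }
}
```

If the in-memory PQ has to stay for speed, the usual hybrid keeps only the top elements in memory, routes each new element to memory or to the table by comparing priorities, and refills from the table with the same ORDER BY when the in-memory part drains.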
What is the best FREE solution for implementing an ETL project in MySQL?
I need to extract a large amount of data for analysis and put the results into other tables.
Regards,
Pedro
Pentaho Kettle (PDI) is open source and has a free community edition, which works quite well.
Talend also does an excellent job for ETL and ELT. You can take a look at this page on my website, http://www.hiregion.com/2010/01/data-loading-through-talend-etl-studio.html, and related articles. I have also loaded hundreds of thousands to millions of rows through MySQL bulk loading (the LOAD DATA INFILE syntax: dev.mysql.com/doc/refman/5.1/en/load-data.html) and then done some transformations in MySQL. You can do most transformations before the load (ETL) or after the load (ELT), or use a hybrid technique.
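For the ELT flavor specifically, here is a minimal sketch in Node.js with mysql2, assuming the CSV file is readable by the MySQL server itself (plain LOAD DATA INFILE, not LOCAL) and that the staging and result tables already exist; the paths and names are placeholders:

```js
// elt.js - bulk load a CSV into a staging table, then transform inside MySQL
const mysql = require('mysql2/promise');

async function load() {
  const db = await mysql.createConnection({
    host: 'localhost', user: 'u', password: 'p', database: 'warehouse' });

  // Extract + Load: server-side bulk load into a staging table.
  await db.query(`LOAD DATA INFILE '/var/lib/mysql-files/sales.csv'
                  INTO TABLE staging_sales
                  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
                  LINES TERMINATED BY '\\n'
                  IGNORE 1 LINES`);

  // Transform: rebuild the aggregate entirely inside MySQL.
  await db.query('TRUNCATE TABLE sales_by_region');
  await db.query(`INSERT INTO sales_by_region (region, total)
                  SELECT region, SUM(amount)
                  FROM staging_sales
                  GROUP BY region`);
  await db.end();
}

load().catch(console.error);
```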