I need to synchronize a table held in my local SQL Server 2008 database with one in my remote MySQL database (administered through phpMyAdmin). The sync can be one-way, as only the local database will have changes made to it. It doesn't need to be instant, but ideally it would run on a schedule (e.g. every 10 minutes).
I've read about possibly using a script for this, but I can't find anything that shows how to do it. What would be the best way to go about it?
Edit: The question here is along the same lines, but the answers don't make much sense to me.
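For reference, a minimal sketch of one way such a scheduled one-way sync can look: a small script that reads the table out of SQL Server and upserts it into MySQL, run every 10 minutes by Task Scheduler or cron. The table name, column names, connection strings, and the pyodbc / mysql-connector-python libraries are all assumptions here, not anything the original setup dictates.

    # Hypothetical one-way sync: read all rows from SQL Server, upsert them into MySQL.
    # Assumes a table "my_table" with primary key "id" and columns "name", "updated_at";
    # servers, credentials and the ODBC driver name are placeholders.
    import pyodbc                  # pip install pyodbc
    import mysql.connector         # pip install mysql-connector-python

    def sync_once():
        src = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=local_db;Trusted_Connection=yes;"
        )
        dst = mysql.connector.connect(
            host="remote-host", user="sync_user", password="secret", database="remote_db"
        )
        rows = src.cursor().execute("SELECT id, name, updated_at FROM my_table").fetchall()

        cur = dst.cursor()
        cur.executemany(
            "INSERT INTO my_table (id, name, updated_at) VALUES (%s, %s, %s) "
            "ON DUPLICATE KEY UPDATE name = VALUES(name), updated_at = VALUES(updated_at)",
            [tuple(r) for r in rows],
        )
        dst.commit()
        src.close()
        dst.close()

    if __name__ == "__main__":
        sync_once()   # schedule this script every 10 minutes (Task Scheduler / cron)

For a large table, pushing every row on each run gets expensive; restricting the SELECT to rows changed since the last run (e.g. by an updated_at column) is the usual refinement.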
Related
I want to fetch data from multiple MySQL databases which are on multiple servers.
I'm using phpMyAdmin (MySQL). All the databases will be MySQL databases (same vendor) on multiple servers. First I want to connect to those server databases, then fetch data from them, and then put the result in a central database.
For example: remote_db_1 on server 1, remote_db_2 on server 2, remote_db_3 on server 3, and I have a central database where I want to store the data that comes from the different databases.
Query: select count(user) from user where profile != 2; the same query will be run against all the databases.
central_db
school_distrct_info_table

    id   school_district_id   total_user
    1    2                    50
    2    55                   100
    3    100                  200
I've tried the FEDERATED engine but it doesn't fit our requirements. What can be done in this situation: any tool, alternative method, or anything else?
In the future the number of databases on different servers will increase; it might be 50, 100, or more, so exporting the tables from each source server and then loading them into the central database would be a hard task. So I'm also looking for some kind of ETL tool that can fetch data directly from multiple source databases and then send it to the destination database. In the central database table the structure, data types, and columns will all be different. Sometimes we might need to add an extra column to store some data. I know this can be achieved with an ETL tool; in the past I've used SSDT, which works with SQL Server, but here it is MySQL.
The easiest way to handle this problem is with federated servers. But, you say that won't work for you.
So, your next best way to handle the problem is to export the tables from the source servers and then load them into your central server. But that's much harder. This sort of operation is sometimes called extract / transform / load or ETL.
You'll write a program in the programming language of your choice (Python, PHP, Java, Perl, Node.js, ...) to connect to each database separately, query it, and then put the information into a central database.
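A minimal sketch of that loop, assuming Python with mysql-connector-python; the hostnames, credentials, and read-only account are placeholders, while the query and central table come from the question above (a unique key on school_district_id is assumed so reruns update rather than duplicate):

    # Hypothetical extract/load loop: run the same count on each remote MySQL server
    # and upsert the result into the central table.
    import mysql.connector   # pip install mysql-connector-python

    SOURCES = [
        {"host": "server1", "db": "remote_db_1", "school_district_id": 2},
        {"host": "server2", "db": "remote_db_2", "school_district_id": 55},
        {"host": "server3", "db": "remote_db_3", "school_district_id": 100},
    ]

    central = mysql.connector.connect(host="central-host", user="etl",
                                      password="secret", database="central_db")
    cur = central.cursor()

    for src in SOURCES:
        conn = mysql.connector.connect(host=src["host"], user="readonly",
                                       password="secret", database=src["db"])
        c = conn.cursor()
        c.execute("SELECT COUNT(user) FROM user WHERE profile != 2")   # extract
        (total_user,) = c.fetchone()
        conn.close()

        cur.execute(                                                   # load
            "INSERT INTO school_distrct_info_table (school_district_id, total_user) "
            "VALUES (%s, %s) ON DUPLICATE KEY UPDATE total_user = VALUES(total_user)",
            (src["school_district_id"], total_user),
        )

    central.commit()
    central.close()

Adding another source is then just one more entry in SOURCES (or a row in a small config table), which keeps this workable as the number of servers grows.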
Getting this working properly is, sad to say, incompatible with "really urgent". It's tricky to get working and to test.
May I suggest you write another question explaining why server federation won't meet your needs, and asking for help? Maybe somebody can help you configure it so it does. Then you'll have a chance to finish this project promptly.
Google says NO triggers, NO stored procedures, NO views. This means the only thing I can dump (or import) is just SHOW TABLES and SELECT * FROM XXX? (!!!)
Which means that, for a database with 10 tables and 100 triggers, stored procedures, and views, I have to recreate almost everything by hand? (either for import or for export)
(My boss thinks I am tricking him. He cannot understand how my previous employers did that replication to a bunch of computers with two clicks, while I personally need hours (or even days) to do this with an internet giant like Google.)
EDIT:
We have applications being created on local computers, where we use our local MySQL. These applications use MySQL DBs which consist of, say, n tables and 10*n triggers. For the moment we cannot even evaluate google-cloud-sql, since that means almost everything (except the n almost-empty tables) must be "uploaded" by hand. And we also cannot evaluate using a google-cloud-sql DB, since that means almost everything (except the n almost-empty tables) must be "downloaded" by hand.
Until now we have done these "up/down"-loads by taking a decent mysqldump from the local or the "cloud" MySQL.
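To illustrate what such a dump step can look like (only as a sketch; the host, credentials, database name, and output file are placeholders): mysqldump includes triggers by default, but stored routines and events need extra flags.

    # Hypothetical wrapper around the "decent mysqldump" mentioned above.
    import subprocess

    with open("my_app_db.sql", "w") as out:
        subprocess.run(
            [
                "mysqldump",
                "--host=localhost", "--user=app", "--password=secret",
                "--single-transaction",   # consistent snapshot for InnoDB tables
                "--routines",             # include stored procedures and functions
                "--triggers",             # included by default; listed for clarity
                "--events",               # include scheduled events
                "my_app_db",
            ],
            stdout=out,
            check=True,
        )

Whether the target will accept the routines and triggers in that dump is, of course, exactly the limitation the question is about.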
It's unclear what you are asking for. Do you want "replication" or "backups"? These are different concepts in MySQL.
If you want to replicate data to another MySQL instance, you can set up replication. This replication can be from a Cloud SQL instance, or to a Cloud SQL instance using the external master feature.
If you want to back up data to or from the server, check out these pages on importing data and exporting data.
As far as I understand, you want to Create Cloud SQL Replicas. There are a bunch of replica options in the documentation; use the one that fits you best.
However, if by "replica" you meant Cloning a Cloud SQL instance, you can follow the steps to clone your instance into a new and independent instance.
Some of these tutorials use the GCP Console, and the operations can be scheduled.
I am working on an application for which we need a disaster recovery plan. We currently use RDS to host the DB and have two-hourly backups running (we do not use Aurora but have plans to upgrade in the future).
If the database somehow got deleted, we want to make sure the backup we recover from is current, and therefore we need some way of telling that.
One way is to save a heartbeat in the DB at certain intervals; then I can check that against what is expected.
I was wondering if anyone has other ways of solving this issue?
Assuming this MySQL database is connected to your web application, you could have a server-side thread in your application which periodically does a heartbeat, writing a record to the database. You could create a special heartbeat table to store the heartbeats. Then you can easily examine any backup and know roughly the last time the database was "alive."
I am not an expert in AWS, and there may be another way of doing this which is easier than what I described above.
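A rough sketch of that heartbeat idea, assuming Python with mysql-connector-python; the table name, interval, and credentials are placeholders, not anything prescribed by RDS:

    # Hypothetical heartbeat writer: a background thread inserts a row every few
    # minutes, so any restored backup shows roughly when the database was last alive.
    import threading
    import time
    import mysql.connector   # pip install mysql-connector-python

    # Assumed table:
    # CREATE TABLE heartbeat (id INT AUTO_INCREMENT PRIMARY KEY,
    #                         beat_at DATETIME NOT NULL);

    def heartbeat_loop(interval_seconds=300):
        while True:
            conn = mysql.connector.connect(host="db-host", user="app",
                                           password="secret", database="app_db")
            cur = conn.cursor()
            cur.execute("INSERT INTO heartbeat (beat_at) VALUES (NOW())")
            conn.commit()
            conn.close()
            time.sleep(interval_seconds)

    # Start this when the web application boots; the daemon thread stops with the app.
    threading.Thread(target=heartbeat_loop, daemon=True).start()

Checking a restored backup is then just SELECT MAX(beat_at) FROM heartbeat, compared against the expected interval.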
So I've recently ported a bunch of very large databases from SQL Server to MySQL using the migration wizard. This was done manually and was very time-consuming. My next step is to automate this process in some way. As the SQL Server databases are constantly being updated, I need to track these changes in MySQL as well within a short amount of time. My problem is that porting all the data over through the migration wizard takes many days. If there is one small change, it would be better to simply track that change and apply it in MySQL rather than re-porting the entire database every week or so.
There are a few routes I thought of, but I'm not sure whether they will work or whether they're practical.
Convert the migrations into scripts and run these scripts every week. This would give me a route into automating the process. The problem, however, is that it would take several days to re-transfer all the data, so it's not very practical.
Somehow link MySQL to SQL Server and track changes live in MySQL. I don't know if this is possible, but it seems like it would be the most practical way to do this.
Any suggestions or help on solving this problem would be appreciated.
TL;DR: I need to track small changes in data from a SQL Server database to a MySQL database without having to re-transfer all the data via the migration wizard each time.
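One hedged sketch of the second route (not something the migration wizard provides out of the box): add a rowversion column on the SQL Server side and, on each scheduled run, copy only rows whose version is above the last high-water mark. The table and column names, connection strings, and state file below are all assumptions, and deletions are not handled.

    # Hypothetical incremental sync from SQL Server to MySQL using a rowversion column.
    # Assumes my_table has a primary key "id", a column "name", and a rowversion column "rv".
    import pyodbc                  # pip install pyodbc
    import mysql.connector         # pip install mysql-connector-python

    STATE_FILE = "last_rv.bin"     # remembers the high-water mark between runs

    def load_last_rv():
        try:
            data = open(STATE_FILE, "rb").read()
            return data if data else bytes(8)
        except FileNotFoundError:
            return bytes(8)        # eight zero bytes: start from the beginning

    def sync_changes():
        last_rv = load_last_rv()
        src = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                             "SERVER=localhost;DATABASE=source_db;Trusted_Connection=yes;")
        changed = src.cursor().execute(
            "SELECT id, name, rv FROM my_table WHERE rv > ? ORDER BY rv", last_rv
        ).fetchall()
        if not changed:
            return
        dst = mysql.connector.connect(host="mysql-host", user="sync",
                                      password="secret", database="target_db")
        cur = dst.cursor()
        cur.executemany(
            "INSERT INTO my_table (id, name) VALUES (%s, %s) "
            "ON DUPLICATE KEY UPDATE name = VALUES(name)",
            [(r.id, r.name) for r in changed],
        )
        dst.commit()
        dst.close()
        with open(STATE_FILE, "wb") as f:
            f.write(bytes(changed[-1].rv))   # remember where this run got to
        src.close()

    if __name__ == "__main__":
        sync_changes()   # run on whatever schedule fits (hourly, nightly, weekly)

Deletes would need either soft-delete flags or a separate tombstone table; that is the main thing this sketch glosses over.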
I want to completely copy a table from a remote SQL database to my local one. My question is: is there an easy way to achieve this? Since the table is quite big, I have to do the operation in several steps, selecting from one database and then inserting into the other. I can't use file operations, since this will be part of a web application and the remote database is some user's database.
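A minimal sketch of doing this in chunks from application code, assuming both databases are MySQL, the table has an integer primary key, and the names and credentials below are placeholders:

    # Hypothetical chunked table copy via keyset pagination, so the whole table never
    # has to be held in memory and no file export/import is needed.
    import mysql.connector   # pip install mysql-connector-python

    CHUNK = 5000

    remote = mysql.connector.connect(host="remote-host", user="remote_user",
                                     password="secret", database="remote_db")
    local = mysql.connector.connect(host="localhost", user="root",
                                    password="secret", database="local_db")
    rcur = remote.cursor()
    lcur = local.cursor()

    last_id = 0
    while True:
        rcur.execute(
            "SELECT id, col1, col2 FROM big_table WHERE id > %s ORDER BY id LIMIT %s",
            (last_id, CHUNK),
        )
        rows = rcur.fetchall()
        if not rows:
            break
        lcur.executemany(
            "INSERT INTO big_table (id, col1, col2) VALUES (%s, %s, %s)",
            rows,
        )
        local.commit()
        last_id = rows[-1][0]   # advance past the last key that was copied

    remote.close()
    local.close()

Keyset pagination (WHERE id > last_id) stays fast on a big table, unlike LIMIT/OFFSET, which rescans the skipped rows on every chunk.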