Sync data from local DB to central SQL Server - SSIS

I have a requirement to sync local DB data with a central SQL Server. The remote users (around 10 people) will each use a laptop that hosts the application and a local database. The internet connection is not available 24x7. While there is no connectivity, the laptop user should still be able to make changes in the local DB, and once the connection is restored the data should be synced with the central SQL Server automatically. The sync is usually just data updates. I have looked at the Sync Framework and merge replication. I can't use the Sync Framework as I am not a C# expert. For merge replication, I believe additional hardware is required, which is not possible. The solution should be easy to develop and maintain.
Are there any other options available? Is it possible to use SSIS in this scenario?

I would use Merge replication for this scenario. I'm unaware of any "additional hardware" requirements.
SSIS could do this job but it does not give you any help out-of-the-box - you would be reinventing the wheel for a very common and complex scenario.
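For orientation, the core of a merge publication is only a few replication stored procedures. A minimal T-SQL sketch, assuming a hypothetical CentralDb database with an Orders table, and a Distributor that is already configured:

    -- Run on the central publisher (all names are hypothetical).
    EXEC sp_replicationdboption
        @dbname  = N'CentralDb',
        @optname = N'merge publish',
        @value   = N'true';

    -- The retention window is what covers the "no 24x7 connectivity" case:
    -- a laptop may stay offline this many days before its subscription expires.
    EXEC sp_addmergepublication
        @publication = N'FieldSync',
        @retention   = 14;

    EXEC sp_addmergearticle
        @publication   = N'FieldSync',
        @article       = N'Orders',
        @source_object = N'Orders';

Subscribers that stay offline longer than the retention window are expired and must be reinitialized, so size it to the longest realistic outage.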

An idea...
The idea requires an intermediate database (an "exchange" database).
The exchange database holds tables with data for each direction of synchronization, and Change Tracking is enabled on both the exchange database and the central database.
On the local database side, this could mean rows with flags:
the row was created in the local DB
the row came from the exchange DB
the row requires resynchronization (when it is updated, etc.)
Synchronization between the local DB and the exchange DB:
When synchronizing, first send the data from the local DB (rows marked as created locally or requiring resynchronization), then download the data from the exchange DB (rows marked by Change Tracking as changed).
Synchronization between the exchange DB and the central DB is simple, since it is based on Change Tracking in the database engine.
More about Change Tracking here!
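To make the Change Tracking part concrete, a minimal T-SQL sketch with a hypothetical Orders table on the exchange database:

    -- Enable Change Tracking on the exchange database and on one table.
    ALTER DATABASE ExchangeDb
        SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);
    ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING;

    -- During a sync, download everything that changed since the version
    -- recorded at the end of the previous sync.
    DECLARE @last_sync_version BIGINT = 0;
    SELECT ct.OrderId, ct.SYS_CHANGE_OPERATION, o.*
    FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS ct
    LEFT JOIN dbo.Orders AS o ON o.OrderId = ct.OrderId;

    -- Store this value and pass it as @last_sync_version next time.
    SELECT CHANGE_TRACKING_CURRENT_VERSION();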

Related

What is an efficient way to maintain a local readonly copy of a live remote MySQL database?

I maintain a server that runs daily cron jobs to aggregate data sources and generate reports, accessible by a private Ruby on Rails application.
One of our data sources is a partial dump of one of our partner's databases. The partner runs an active application and the MySQL DB has hundreds of tables. They have given us read-only access to a relatively underpowered readonly slave of their application DB.
Because of latency issues and performance bottlenecks on their slave DB, we have been maintaining a limited local copy of their DB. We only need about 20 tables for our reports, so I only dump those tables. We also only need the data at a daily granularity, so realtime sync is not a requirement.
For a few months, I had implemented a nightly cron which streamed the dump of the necessary tables into a local production_tmp database. Then, when all tables were imported, I dropped production and renamed production_tmp to production. This was working until the DB grew to over 25GB, and we started running into disk space limitations.
For now, I have removed the redundancy step and am just streaming the dump straight into production on our local server. This feels a bit flimsy to me, and I would like to implement a safer approach. Also, the full dump/load currently takes our server over 2 hours, and I'd like to implement an approach that doesn't take as long. The database will only keep growing, so I'd like to implement something future-proof.
Any suggestions would be appreciated!
I take it you have never heard of, or considered MySQL Replication?
The idea is that you do your backup & restore once, and then configure the replica to "subscribe" to a continuous stream of changes as they are made on the primary MySQL instance. Any change applied to the primary is applied automatically to the replica within seconds. You don't have to do the backup & restore procedure again, unless the replica gets damaged.
It takes some care to set up and keep working, but it's a much more efficient method of keeping two instances in sync.
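For concreteness, a sketch of the replica side (MySQL 5.6-era syntax; host and credentials are hypothetical), run once after restoring the initial backup:

    -- Point the replica at the primary, starting from the binlog
    -- coordinates captured when the backup was taken.
    CHANGE MASTER TO
        MASTER_HOST     = 'primary.example.com',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = '...',
        MASTER_LOG_FILE = 'mysql-bin.000042',  -- from SHOW MASTER STATUS
        MASTER_LOG_POS  = 120;
    START SLAVE;

    -- Verify that both replication threads report Yes:
    SHOW SLAVE STATUS\G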
@SusannahPotts mentions hot backup and/or incremental backup. You can get both of these features for free, without paying for MySQL Enterprise, by using Percona XtraBackup.
You can also consider using MySQL Transportable Tablespaces.
You'll need filesystem access to run either Percona XtraBackup or MySQL Enterprise Backup. It's not possible to use these physical backup tools for Amazon RDS, for example.
One alternative is to create a replication slave in the same network as the live system, and run Percona XtraBackup on that slave, where you do have filesystem access.
Another option is to stream the binary logs to another host (see https://dev.mysql.com/doc/refman/5.6/en/mysqlbinlog-backup.html) and then transfer them periodically to your local instance and replay them.
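To illustrate the transportable-tablespaces option mentioned above (MySQL 5.6+, InnoDB; the table name is hypothetical):

    -- On the source: quiesce the table and write its .cfg metadata file.
    FLUSH TABLES report_data FOR EXPORT;
    -- ...copy report_data.ibd and report_data.cfg out of the datadir...
    UNLOCK TABLES;

    -- On the destination, where the same CREATE TABLE already exists:
    ALTER TABLE report_data DISCARD TABLESPACE;
    -- ...place the copied .ibd/.cfg files into the schema directory...
    ALTER TABLE report_data IMPORT TABLESPACE;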
Each of these solutions has pros and cons. It's hard to recommend which solution is best for you, because you aren't sharing full details about your requirements.
This was working until the DB grew to over 25GB, and we started running into disk space limitations.
Some questions here:
Why don't you just increase the available disk space for your database? 25 GB is next to nothing in terms of disk space.
Why don't you modify your script to work table by table: download table1, import it as table1_tmp, drop table1_prod, rename table1_tmp to table1_prod; rinse and repeat (see the sketch after this answer).
Other than that:
Why don't you ask your partner for a system with enough performance to run your reports on? I'm quite sure they would prefer that to having YOU download sensitive data to your "local site" every day.
Last thought (requires MySQL Enterprise Backup https://www.mysql.de/products/enterprise/backup.html):
Rather than dumping, downloading and importing 25 GB every day:
Create a full backup
Download and import
Use differential or incremental backups from now on.
The next day you download (and import) only the data delta: https://dev.mysql.com/doc/mysql-enterprise-backup/4.0/en/mysqlbackup.incremental.html
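As for the table-by-table swap suggested earlier, a sketch using MySQL's atomic RENAME TABLE (table names hypothetical):

    -- After importing the fresh dump into table1_tmp, swap it into place
    -- atomically; readers never see a half-loaded table.
    DROP TABLE IF EXISTS table1_old;
    RENAME TABLE table1_prod TO table1_old,
                 table1_tmp  TO table1_prod;
    DROP TABLE table1_old;  -- reclaim the space before moving on to table2

Swapping before dropping keeps a fallback copy until the new table is confirmed in place, and the per-table cycle caps the extra disk usage at one table's worth instead of a whole second database.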

MySQL replication or something similar

I have a question about data backup.
We are developing the backend for a mobile application.
We have a few EC2 servers: one for the api sub-domain and one for the admin sub-domain, plus one RDS MySQL server hosting 2 databases.
But I'm worried about one thing. RDS snapshots are fine for the database structure: if we have some errors in the application, or need to revert some changes in the structure,
I can just restore from yesterday's snapshot. But what about the content, which is being added every minute?
Can someone describe a mechanism or tools to prevent losing our data, replication or something like that?
I think I've found the answer: the binary log
https://dev.mysql.com/doc/refman/5.5/en/binary-log.html
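A few statements to confirm the binary log is actually available (on RDS, binary logging is tied to automated backups being enabled, as far as I know):

    SHOW VARIABLES LIKE 'log_bin';  -- should report ON
    SHOW MASTER STATUS;             -- current binlog file and position
    SHOW BINARY LOGS;               -- list of retained binlog files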

How to overcome Network Failure error in MySQL

I am developing a CI app for a client with MySQL as back end.
The client has 8 shops. For each shop, there is a local server, and additionally, there is one central server, which is placed at the Head Quarters (HQ).
The problem I am facing is:
At the time of a network failure at a shop, billing and the other
processes should keep working, without the central server. Once the network is
back, the data needs to sync with the HQ server.
To those voting to close as too broad: can you please say what details you need? I am not sure which part is unclear; please add that as a comment and I will provide it.
This is a common problem in shop environments. You should meet these requirements by keeping the basic data (e.g. items, promotions, parameters) in each store and setting up database synchronization between the local stores and the central DB.
If you have MySQL in each store and as the central DB, you can set up MySQL replication; otherwise take a look at SymmetricDS, which is, in short, the missing component that fits your scenario perfectly, since:
SymmetricDS is open source software for both file and database
synchronization with support for multi-master replication, filtered
synchronization, and transformation across the network in a
heterogeneous environment. It supports multiple subscribers with one
direction or bi-directional, asynchronous data replication. It uses
web and database technologies to replicate data as a scheduled or near
real-time operation. The software was designed to scale for a large
number of nodes, work across low-bandwidth connections, and withstand
periods of network outage. It works with most operating systems, file
systems, and databases, including Oracle, MySQL, MariaDB, PostgreSQL,
MS SQL Server (including Azure), IBM DB2, H2, HSQLDB, Derby, Firebird,
Interbase, Informix, Greenplum, SQLite (including Android), Sybase
ASE, and Sybase ASA (SQL Anywhere) databases.
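If every store and HQ do run MySQL, one way to set up the plain-replication route is multi-source replication (MySQL 5.7+), where the central server pulls from each shop over its own channel. A sketch with hypothetical hosts and credentials, assuming GTIDs are enabled:

    -- Run on the HQ server, once per shop.
    CHANGE MASTER TO
        MASTER_HOST          = 'shop1.example.internal',
        MASTER_USER          = 'repl',
        MASTER_PASSWORD      = '...',
        MASTER_AUTO_POSITION = 1   -- requires GTIDs on the shop servers
        FOR CHANNEL 'shop1';
    START SLAVE FOR CHANNEL 'shop1';

Note that this only covers the shop-to-HQ direction; anything that must flow back down to the shops is where a tool like SymmetricDS earns its keep.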

Is it possible to real-time synchronize 2 SQL Server databases

I have an application that runs on server A, and the database is on the same server.
There is a backup server B which I use in case server A is down.
The application will remain unchanged, but the data in the DB is changing constantly.
Is there a way to synchronize those 2 databases in real time automatically?
Currently I wait until all the users are gone so I can manually back up and restore on the backup server.
Edit: When I said real-time I didn't mean it literally; I can handle up to an hour of delay, but the faster the sync the better.
My databases are located on 2 servers on the same local network.
Both of them are SQL Server 2008; the main DB is on Windows Server 2008,
the backup on Windows Server 2003.
A web application (intranet) is using the DB.
I can use SQL Agent (if that can help).
I don't know what kind of details could be useful to solve this; kindly tell me what would help. Thanks.
Edit: I need to sync all the tables, and tables only.
The second database is writable, not read-only.
I think what you want is Peer-to-Peer Transactional Replication.
From the link:
Peer-to-peer replication provides a scale-out and high-availability
solution by maintaining copies of data across multiple server
instances, also referred to as nodes. Built on the foundation of
transactional replication, peer-to-peer replication propagates
transactionally consistent changes in near real-time. This enables
applications that require scale-out of read operations to distribute
the reads from clients across multiple nodes. Because data is
maintained across the nodes in near real-time, peer-to-peer
replication provides data redundancy, which increases the availability
of data.
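For orientation, a minimal sketch of the publication side (T-SQL, hypothetical names; assumes distribution is already configured and the database exists identically on both peers):

    -- Run on each peer in turn.
    USE MainDb;
    EXEC sp_replicationdboption
        @dbname  = N'MainDb',
        @optname = N'publish',
        @value   = N'true';

    EXEC sp_addpublication
        @publication                  = N'P2P_MainDb',
        @enabled_for_p2p              = N'true',
        @allow_initialize_from_backup = N'true',
        @status                       = N'active';

Given your stated tolerance of up to an hour of delay, plain transactional replication would also work, but peer-to-peer keeps server B writable, which you say you need.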

How to clone MySQL continuously / instantly on shared hosting

I have a MySQL install on a shared server and have access through phpMyAdmin. I want to make a continuous, real-time clone of that database to a cloud MySQL database (we have created an Nginx-ready MySQL server specially for this), then update the code to point to the new database...
I think you will have difficulty doing real-time replication of MySQL in a shared-server environment. Since you appear to be moving DB servers, I would be inclined to take a hot copy of your data and install that on the new DB server. At the same time as taking that copy, you should switch on query logging in your application.
Your switch-over would then consist of running the logged queries against the new database (faster than they were logged!) and finally, once all logged queries have been run, switching the configuration of the app so that the new DB is used.
Edit: the problem with a hot copy is that data is being written to the DB at the same time as it is being copied. That means that the 'last updated' time will be different for each table. On that basis, is it possible in your application to set up a 'last_updated' column for each row? If so, you will be able to tell for each table which logged queries still need to be applied.
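A sketch of that last_updated column in MySQL (table name hypothetical); the server then maintains it on every write:

    -- An automatically maintained change timestamp per row.
    ALTER TABLE orders
        ADD COLUMN last_updated TIMESTAMP NOT NULL
            DEFAULT CURRENT_TIMESTAMP
            ON UPDATE CURRENT_TIMESTAMP;

    -- After the hot copy, find rows the copy may have missed:
    SELECT * FROM orders WHERE last_updated >= '2012-06-01 00:00:00';

(Note that older MySQL versions allow only one such auto-updating TIMESTAMP column per table.)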
What you're looking for is replication. It has far too many options to cover here in a single post.
http://dev.mysql.com/doc/refman/5.5/en/replication.html
If you're going to do replication over the internet, you'll want to secure it. Your host might allow a virtual local area network, so this doesn't use up your bandwidth resources.
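If no VLAN is available and the replication stream must cross the internet, a sketch of forcing TLS on the replication account (host and password hypothetical):

    -- On the primary: a replication user that must connect over SSL.
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%'
        IDENTIFIED BY '...'
        REQUIRE SSL;

    -- On the replica: request an encrypted connection to the primary.
    CHANGE MASTER TO
        MASTER_HOST     = 'old-host.example.com',
        MASTER_USER     = 'repl',
        MASTER_PASSWORD = '...',
        MASTER_SSL      = 1;
    START SLAVE;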
A great set of tools from Percona that you should look at is Maatkit:
https://launchpad.net/percona-toolkit
Documentation and usage examples
http://www.maatkit.org/doc/
It's good for other tasks but it also allows you to replicate a live database quickly.
When you're working with live databases, make sure your backups are up to date.