Is it possible to synchronize 2 SQL Server databases in real time? - sql-server-2008

I have an application that runs on server A, and the database is on the same server.
There is a backup server B which I use in case server A is down.
The application will remain unchanged, but the data in the DB changes constantly.
Is there a way to synchronize those 2 databases in real time, automatically?
Currently I wait until all the users are gone so I can manually back up and restore on the backup server.
Edit: When I said real-time I didn't mean it literally; I can handle up to an hour of delay, but the faster the sync the better.
My databases are located on 2 servers on the same local network.
Both are SQL Server 2008; the main DB is on Windows Server 2008 and the backup is on Windows Server 2003.
A web application (intranet) uses the DB.
I can use SQL Server Agent (if that can help).
I don't know what kind of details would be useful to solve this; kindly tell me what would help. Thanks.
Edit: I need to sync all the tables, and tables only.
The second database is writable, not read-only.

I think what you want is Peer to Peer Transactional Replication.
From the link:
Peer-to-peer replication provides a scale-out and high-availability
solution by maintaining copies of data across multiple server
instances, also referred to as nodes. Built on the foundation of
transactional replication, peer-to-peer replication propagates
transactionally consistent changes in near real-time. This enables
applications that require scale-out of read operations to distribute
the reads from clients across multiple nodes. Because data is
maintained across the nodes in near real-time, peer-to-peer
replication provides data redundancy, which increases the availability
of data.
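To give a feel for what's involved, here is a rough T-SQL sketch of creating a publication suitable for peer-to-peer use on one node. The database, publication, and table names are placeholders, distribution must already be configured, and in practice you would likely use the Configure Peer-to-Peer Topology wizard instead; note also that peer-to-peer replication is an Enterprise Edition feature.

    -- Enable the database for transactional publishing (distributor already configured).
    EXEC sp_replicationdboption
         @dbname = N'AppDb', @optname = N'publish', @value = N'true';

    -- Create a publication flagged for peer-to-peer use.
    EXEC sp_addpublication
         @publication = N'P2P_AppDb',
         @enabled_for_p2p = N'true',
         @allow_initialize_from_backup = N'true',
         @sync_method = N'native',
         @repl_freq = N'continuous',
         @status = N'active';

    -- Add each table you want to keep in sync.
    EXEC sp_addarticle
         @publication = N'P2P_AppDb',
         @article = N'Orders',
         @source_object = N'Orders';

    -- Repeat the publication on server B, then add subscriptions in both directions, e.g.:
    EXEC sp_addsubscription
         @publication = N'P2P_AppDb',
         @subscriber = N'SERVER_B',
         @destination_db = N'AppDb',
         @subscription_type = N'push',
         @sync_type = N'replication support only';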

Related

MySQL Group Replication or a Single Server is enough?

I'm planning to create a system which tracks visitor clicks into the database. I'm expecting around 1M inserts/day into the database.
On the backend, I'll have an analytics system which will analyze all the data that's been collected over the days/weeks/months/years.
My question is: is it a practical approach to have 2 different MySQL servers + 1 web server? MySQL Server A would insert the clicks into its DB, and it would be connected to MySQL Server B by group replication, so whenever I create reports etc. on MySQL Server B, it doesn't load Server A heavily.
These 2 database servers would then be connected to the web server, which would handle all the click requests and also display the backend reports.
Is that a practical solution, or is it better to have one bigger server handle all the MySQL data? Or have multiple MySQL servers load balancing each other? Anything else, perhaps?
1M inserts/day is not a high load by modern standards. That's less than 12 per second on average.
On sufficiently powerful servers with fast storage and proper tuning of MySQL options, you can expect to support at least 100x that load with a single MySQL server.
A better reason to use multiple MySQL servers is redundancy. Inevitably, any MySQL server needs to be upgraded, or you might have hardware failures and need to replace a disk, or other components. To avoid downtime, you should have a standby database server, which stays in sync with the primary server, either using MySQL replication or by disk-level replication like DRBD.
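For the standby approach, here is a minimal sketch of classic MySQL binlog replication; host names, credentials, and binlog coordinates are placeholders, and both servers need a unique server_id with log_bin enabled on the primary.

    -- On the primary: create a user the replica will connect as.
    CREATE USER 'repl'@'%' IDENTIFIED BY 'change-me';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

    -- On the standby, after restoring a backup of the primary and noting the
    -- binlog file/position the backup corresponds to:
    CHANGE MASTER TO
        MASTER_HOST = 'primary.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'change-me',
        MASTER_LOG_FILE = 'binlog.000042',
        MASTER_LOG_POS = 4;
    START SLAVE;  -- MySQL 8.0.22+: CHANGE REPLICATION SOURCE TO ... / START REPLICA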

Sync data from local db to Central SQL Server

I have a requirement to sync local db data with a central SQL Server. The remote users (around 10 people) will be using laptops which host the application and a local db. The internet connection is not 24x7. During periods of no connectivity, the laptop user should be able to make changes in the local db, and once the connection is restored, the data should be synced with the central SQL Server automatically. The sync usually consists of just data updates. I have looked at the options of Sync Framework and merge replication. I can't use Sync Framework as I am not a C# expert. For merge replication, I believe additional hardware is required, which is not possible. The solution should be easy to develop and maintain.
Are there any other options available? Is it possible to use SSIS in this scenario?
I would use Merge replication for this scenario. I'm unaware of any "additional hardware" requirements.
SSIS could do this job but it does not give you any help out-of-the-box - you would be reinventing the wheel for a very common and complex scenario.
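For reference, a rough T-SQL sketch of what the merge publication side could look like on the central server; the names are placeholders and the distributor/snapshot agent setup is omitted.

    -- Enable the central database for merge publishing.
    EXEC sp_replicationdboption
         @dbname = N'CentralDb', @optname = N'merge publish', @value = N'true';

    -- Create the merge publication and add the tables to sync.
    EXEC sp_addmergepublication
         @publication = N'FieldSync',
         @description = N'Sync for laptop users';

    EXEC sp_addmergearticle
         @publication = N'FieldSync',
         @article = N'Orders',
         @source_object = N'Orders';

    -- Each laptop gets a subscription; its Merge Agent only needs to run
    -- while the laptop is online, which fits the intermittent connectivity.
    EXEC sp_addmergesubscription
         @publication = N'FieldSync',
         @subscriber = N'LAPTOP01',
         @subscriber_db = N'LocalDb',
         @subscription_type = N'pull';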
An idea...
The idea requires an intermediate database (an exchange database).
In the exchange database you have tables holding the data for each direction of synchronization, and you use change tracking on both the exchange db and the central db.
On the local database side this could mean rows with flags:
the row was created in the local db
the row came from the exchange db
the row requires resynchronization (when it is updated, etc.)
Synchronization between the local db and the exchange db:
When synchronizing, first send the data from the local db (marked as created locally or requiring resynchronization), then download the data from the exchange db (marked by change tracking as changed).
Synchronization between the exchange db and the central db is simple, based on the database engine's change tracking.
About Change Tracking here!
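To illustrate the change tracking part, a hedged T-SQL sketch; the database, table, and key names are made up.

    -- Enable change tracking on the exchange database and on each synced table.
    ALTER DATABASE ExchangeDb
        SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

    ALTER TABLE dbo.Orders
        ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

    -- Each sync run pulls only the rows changed since the last synced version.
    DECLARE @last_sync_version bigint = 0;  -- persisted per subscriber in practice

    SELECT ct.SYS_CHANGE_OPERATION, ct.OrderId, o.*
    FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS ct
    LEFT JOIN dbo.Orders AS o ON o.OrderId = ct.OrderId;

    SELECT CHANGE_TRACKING_CURRENT_VERSION();  -- store this as the new @last_sync_version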

What is an efficient way to maintain a local readonly copy of a live remote MySQL database?

I maintain a server that runs daily cron jobs to aggregate data sources and generate reports, accessible by a private Ruby on Rails application.
One of our data sources is a partial dump of one of our partner's databases. The partner runs an active application and the MySQL DB has hundreds of tables. They have given us read-only access to a relatively underpowered readonly slave of their application DB.
Because of latency issues and performance bottlenecking on their slave DB, we have been maintaining a limited local copy of their DB. We only need about 20 tables for our reports, so I only dump those tables. We also only need the data to a daily granularity, so realtime sync is not a requirement.
For a few months, I had implemented a nightly cron which streamed the dump of the necessary tables into a local production_tmp database. Then, when all tables were imported, I dropped production and renamed production_tmp to production. This was working until the DB grew to over 25GB, and we started running into disk space limitations.
For now, I have removed the redundancy step and am just streaming the dump straight into production on our local server. This feels a bit flimsy to me, and I would like to implement a safer approach. Also, currently doing the full dump/load takes our server over 2 hours, and I'd like to implement an approach that doesn't take as long. The database will only keep growing, so I'd like to implement something future proof.
Any suggestions would be appreciated!
I take it you have never heard of, or considered MySQL Replication?
The idea is that you do your backup & restore once, and then configure the replica to "subscribe" to a continuous stream of changes as they are made on the primary MySQL instance. Any change applied to the primary is applied automatically to the replica within seconds. You don't have to do the backup & restore procedure again, unless the replica gets damaged.
It takes some care to set up and keep working, but it's a much more efficient method of keeping two instances in sync.
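The "keep it working" part is mostly about watching the replication threads and the lag; a quick check on the replica, assuming the classic status output:

    -- On the replica (MySQL 8.0.22+ uses SHOW REPLICA STATUS with Replica_* fields):
    SHOW SLAVE STATUS\G
    -- Slave_IO_Running: Yes       -- the binlog stream from the primary is being received
    -- Slave_SQL_Running: Yes      -- received events are being applied
    -- Seconds_Behind_Master: 0    -- replication lag; should stay close to zero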
@SusannahPotts mentions hot backup and/or incremental backup. You can get both of these features for free, without paying for MySQL Enterprise, by using Percona XtraBackup.
You can also consider using MySQL Transportable Tablespaces.
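Transportable tablespaces let you copy individual InnoDB tables as files rather than re-importing a logical dump; a rough sketch, assuming MySQL 5.6+ with file-per-table tablespaces and an illustrative table name:

    -- On the destination: create an identical empty table, then detach its tablespace.
    USE production;
    CREATE TABLE clicks (/* same definition as on the source */ id BIGINT PRIMARY KEY);
    ALTER TABLE clicks DISCARD TABLESPACE;

    -- On the source: quiesce the table so its .ibd/.cfg files can be copied safely.
    FLUSH TABLES clicks FOR EXPORT;
    -- ... copy clicks.ibd and clicks.cfg out of the source datadir ...
    UNLOCK TABLES;

    -- Back on the destination: drop the copied files into its datadir and attach them.
    ALTER TABLE clicks IMPORT TABLESPACE;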
You'll need filesystem access to run either Percona XtraBackup or MySQL Enterprise Backup. It's not possible to use these physical backup tools for Amazon RDS, for example.
One alternative is to create a replication slave in the same network as the live system, and run Percona XtraBackup on that slave, where you do have filesystem access.
Another option is to stream the binary logs to another host (see https://dev.mysql.com/doc/refman/5.6/en/mysqlbinlog-backup.html) and then transfer them periodically to your local instance and replay them.
Each of these solutions has pros and cons. It's hard to recommend which solution is best for you, because you aren't sharing full details about your requirements.
This was working until the DB grew to over 25GB, and we started running into disk space limitations.
A few question marks here:
Why don't you just increase the available disk space for your database? 25 GB is nothing when it comes to disk space.
Why don't you modify your script to: download table1, import it into table1_tmp, drop table1_prod, rename table1_tmp to table1_prod; rinse and repeat per table (see the sketch below)?
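That per-table swap keeps the extra disk usage down to one table at a time; a sketch with illustrative table names (RENAME TABLE swaps both names in a single atomic step):

    CREATE TABLE table1_tmp LIKE table1;
    -- ... load the freshly dumped rows into table1_tmp ...
    RENAME TABLE table1 TO table1_old,
                 table1_tmp TO table1;
    DROP TABLE table1_old;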
Other than that:
Why don't you ask your partner for a system with enough performance to run your reports on? I'm quite sure they would prefer that to having you download sensitive data to your "local site" every day.
Last thought (requires MySQL Enterprise Backup https://www.mysql.de/products/enterprise/backup.html):
Rather than dumping, downloading and importing 25 GB every day:
Create a full backup
Download and import
Use differential or incremental backups from then on.
From the next day onward, you download (and import) only the data delta: https://dev.mysql.com/doc/mysql-enterprise-backup/4.0/en/mysqlbackup.incremental.html

How to overcome Network Failure error in MySQL

I am developing a CI app for a client with MySQL as the back end.
The client has 8 shops. Each shop has a local server, and additionally there is one central server at the headquarters (HQ).
The problem I am facing is this:
At the time of a network failure at a shop, the billing and other processes should keep working without the central server. Once the network is back, they need to sync with the HQ server.
To those voting to close as too broad: can you please say what details you need? I am not getting that part, that's why. Please add it as a comment and I will provide it.
This is a common problem in retail environments. You should cope with this requirement by keeping the basic data (e.g. items, promotions, parameters) in each individual store and setting up database synchronization between the local stores and the central db.
If you have MySQL in each store and as the central DB, you can set up MySQL replication; otherwise, take a look at SymmetricDS, which is in short the missing component that can perfectly fit your scenario, since:
SymmetricDS is open source software for both file and database
synchronization with support for multi-master replication, filtered
synchronization, and transformation across the network in a
heterogeneous environment. It supports multiple subscribers with one
direction or bi-directional, asynchronous data replication. It uses
web and database technologies to replicate data as a scheduled or near
real-time operation. The software was designed to scale for a large
number of nodes, work across low-bandwidth connections, and withstand
periods of network outage. It works with most operating systems, file
systems, and databases, including Oracle, MySQL, MariaDB, PostgreSQL,
MS SQL Server (including Azure), IBM DB2, H2, HSQLDB, Derby, Firebird,
Interbase, Informix, Greenplum, SQLite (including Android), Sybase
ASE, and Sybase ASA (SQL Anywhere) databases.

Configuring Web Apps for Distributed Database

I have read MongoDB's replication docs and MySQL's Cluster page, but I cannot figure out how to configure my web apps to connect to the database.
My apps will have connection information: database host, username, password, etc. However, even with a multi-server setup, do I need a big master with a fixed IP that distributes the load to the servers? If so, how can I prevent a single point of failure? Are there any common approaches to that problem?
Features such as MongoDB's replica sets are designed to enable automatic failover and recovery. These will help avoid single points of failure at the database level if properly configured. You don't need a separate "big master" to distribute the load; that is the gist of what replica sets provide. Your application connects using a database driver and generally does not need to be aware of the status of individual replicas. For critical writes in MongoDB you can request that the driver does a "safe" commit which requires data to be confirmed written to a minimum number of replicas.
To be comprehensively insulated from server failures, you still have to consider other factors such as physical failure of disks, machines, or networking equipment and provision with appropriate redundancy. For example, your replica sets should be distributed across more than one server or instance. If all of those instances are in the same physical colocation facility, your single point of failure could still be the hopefully unlikely (but possible) case where the colocation facility loses power or network.