Sync databases: mirroring / replication / log shipping - sql-server-2008

I need help with this scenario. We have one SQL Server, and I need to maintain copies of two of its databases at another, physically distant location.
Just an exact copy, not a failover setup: the same data, so that if something happens to one site the other is still available, but without automatic failover or anything like that.
I just want to sync the databases so that whatever happens on the primary is applied to the other copy. I am very confused about what I should use: replication, mirroring, or log shipping.
Can anyone advise?
Thanks for the help!

Replication does not maintain an identical copy of the database; it only replicates selected tables.
This leaves mirroring or log shipping:
Delay: mirroring keeps the replica closer to the current state of the principal (it continuously tries to stay up to date). Log shipping has a built-in operational delay determined by the log backup frequency, usually around 15-30 minutes or so (a sketch of one shipping cycle appears below).
Multiple copies: mirroring allows exactly one replica. Log shipping allows multiple copies.
Replica access: mirroring does not allow access to the replica, but you can create database snapshots on the mirror server and those snapshots can be accessed. Log shipping allows read-only access to the replica copy, but it will disconnect all users when applying the next log backup received (e.g. every 15-30 minutes).
Ease of setup: this point is subjective, but I'd say log shipping is easier to set up (easier to understand).
Ease of operation: same as above, subjective; I would again say log shipping, because it is easier to troubleshoot.
Security: log shipping requires file copy access, which requires a VPN or similar setup. Mirroring can work with certificate-based security and traverse domains without trust, so it does not require a VPN.
Of course, you still have to make your own decision, based on your criteria.
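To make the log shipping mechanics concrete, here is a minimal sketch of one shipping cycle done by hand; the built-in log shipping jobs automate exactly this. The database name, share path, and standby file below are placeholders, and the secondary is assumed to have been initialized from a full backup restored WITH NORECOVERY or WITH STANDBY:

    -- On the primary: back up the transaction log to a share the secondary can reach.
    BACKUP LOG SalesDB
    TO DISK = N'\\backupshare\logs\SalesDB_1200.trn';

    -- On the secondary: apply the log backup, leaving the database in STANDBY
    -- so it stays readable between restores (this disconnects any connected users).
    RESTORE LOG SalesDB
    FROM DISK = N'\\backupshare\logs\SalesDB_1200.trn'
    WITH STANDBY = N'D:\standby\SalesDB_undo.dat';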

Regarding the above question: try to keep your SQL Servers on the same port, have that port opened on the firewall, and try to ping each server from one domain to the other. This should help. Try it and revert if you are facing issues.

Related

MySQL replication between main server and on-location laptops

We are a company that offers time tracking at all sorts of sport events. To store these timing results, we use laptops with a MySQL server, containing the timing data.
In the current situation we get a local copy of the master (the main server, running behind our website) just before we drive to the event, and submit these changes back to the master server after the event.
In the near future we would like to implement live tracking on our website, and get user profile changes (users can change the time of their batch just before the batch starts) out to the on-location machines.
An event consists of multiple batches (start times). A user subscribes to a certain batch, but sometimes, when they are stuck in traffic for example, they would like to change their batch to a later one.
So we need two-way synchronisation, since data gets updated on both our main server and our on-location machines.
On most events we have internet access. If we don't, I'd like to have the synchronisation working as soon as the connection gets online again.
I already found out about MySQL master-master replication. This looks pretty good, however I'm not feeling 100% satisfied.
Are there any suggestions how to setup such an environment? All suggestions are very welcome!
Multi-Master replication is best in environments where the databases are always connected. This reduces the chance of conflicts. Multi-master replication does not have any automatic conflict resolution, which can lead to incorrect data if there is any latency between the two masters.
Multi-master replication is generally used to provide redundancy. If one master fails, all writes can failover to the other.
With multi-master replication, if you allow updates on both masters when there is latency between the servers (not connected or slow connection), you can have data conflicts, which can lead to incorrect or unexpected data.
Multi-master replication wasn't designed for offline distributed database synchronization, but it can be used in this situation if you have a strategy to avoid data conflicts.
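One common piece of such a strategy, if both masters must accept inserts, is to interleave the auto-increment key ranges so the two sides can never generate the same primary key. This is only a sketch of that one mitigation; it does nothing about logical conflicts such as the same row being updated on both sides:

    -- On master A (persist these in my.cnf as auto_increment_increment /
    -- auto_increment_offset so they survive a restart):
    SET GLOBAL auto_increment_increment = 2;  -- step by 2
    SET GLOBAL auto_increment_offset    = 1;  -- A generates 1, 3, 5, ...

    -- On master B:
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset    = 2;  -- B generates 2, 4, 6, ...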
To avoid conflicts entirely, allow updates to only one database at a time.
You could design the website to detect whether replication is active with the local database, and if it is, allow changes on the web site only, which then replicates to the local database.
If there is no Internet connection and replication, require users to update the data at the event, on the local database. Once you go back online or reestablish the connection, you can then replicate back to the website database.
Since data for different events won't conflict, the website can remain in operation for upcoming events while you restrict updates on the website for the ongoing event.
Regarding time tracking data, since that won't be updated on the website at all, you don't have to worry about conflicts. You can replicate that data to the website master any time you want.
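As a sketch of the detection idea above, the website could poll the replication threads on its side of the link and switch update modes accordingly; how you surface the result to the application is up to you:

    -- Run against the replica end of the link.
    SHOW SLAVE STATUS\G
    -- Replication is healthy when both of these are "Yes":
    --   Slave_IO_Running:  connected to the master and pulling binlog events
    --   Slave_SQL_Running: applying the relayed events
    -- Seconds_Behind_Master reports the current lag; the site could switch to
    -- "update at the event only" mode when either thread is down.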

MySQL database sync between two databases

We are running a Java PoS (Point of Sale) application at various shops, with a MySQL backend. I want to keep the databases in the shops synchronised with a database on a host server.
When some changes happen in a shop, they should get updated on the host server. How do I achieve this?
Replication is not very hard to set up.
Here are some good tutorials:
http://www.ghacks.net/2009/04/09/set-up-mysql-database-replication/
http://dev.mysql.com/doc/refman/5.5/en/replication-howto.html
http://www.lassosoft.com/Beginners-Guide-to-MySQL-Replication
Here are some simple rules you will have to keep in mind (there are more, of course, but this is the main concept):
Set up one server (the master) for writing data.
Set up one or more servers (the slaves) for reading data.
This way, you will avoid errors.
For example:
If your script inserts into the same tables on both the master and the slave, you will get duplicate primary key conflicts.
You can view the "slave" as a "backup" server which holds the same information as the master but cannot add data directly; it only follows the master server's instructions.
NOTE: Of course you can read from the master and you can write to the slave, but make sure you don't write to the same tables (master to slave and slave to master).
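For reference, the core of the setup boils down to a few statements. A sketch with placeholder host, account, and binlog coordinates, assuming each server already has a unique server-id and the master has log-bin enabled in my.cnf:

    -- On the master: create an account the slaves replicate through.
    CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
    SHOW MASTER STATUS;  -- note the File and Position values for the slave

    -- On each slave: point it at the master and start replicating.
    CHANGE MASTER TO
        MASTER_HOST = 'master.example.com',
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_LOG_FILE = 'mysql-bin.000001',   -- from SHOW MASTER STATUS
        MASTER_LOG_POS  = 107;                  -- from SHOW MASTER STATUS
    START SLAVE;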
I would recommend monitoring your servers to make sure everything is fine.
Let me know if you need additional help.
Three different approaches:
Classic client/server approach: don't put any database in the shops; simply have the applications access your server. Of course it's better if you set up a VPN, but simply wrapping the connection in SSL or ssh is reasonable. Pro: it's the way databases were originally intended to be used. Con: if you have high latency, complex operations could get slow; you might have to use stored procedures to reduce the number of round trips.
Replicated master/master: as Book Of Zeus suggested above. Cons: somewhat more complex to set up (especially if you have several shops); a break-in at any shop machine could potentially compromise the whole system. Pros: better responsiveness, as read operations are totally local and write operations are propagated asynchronously.
Offline operations + sync step: do all work locally and from time to time (might be once an hour, daily, weekly, whatever) write a summary with all new/modified records since the last sync operation and send it to the server. Pros: can work without network access, fast, easy to check (if the summary is readable). Cons: you don't have real-time information.
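For the third approach, the sync step can be as simple as an incremental query, assuming each replicated table carries a last-modified timestamp; the table and column names here are hypothetical:

    -- @last_sync is the high-water mark recorded after the previous successful sync.
    SET @last_sync = '2012-01-01 00:00:00';

    -- Collect everything new or changed in the shop since then and ship it
    -- to the host server (deletes need separate handling, e.g. a tombstone table).
    SELECT *
    FROM sales
    WHERE updated_at > @last_sync;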
SymmetricDS is the answer. It supports multiple subscribers with one-way or bi-directional asynchronous data replication. It uses web and database technologies to replicate tables between relational databases, in near real time if desired.
It has a comprehensive and robust Java API to suit your needs.
Have a look at the Schema and Data Comparison tools in dbForge Studio for MySQL. These tools will help you compare two databases, see the differences, generate a synchronization script, and synchronize them.

What would be my best MySQL Synchronization method?

We're moving a social media service to be on separate data centers as our other hosting provider's entire data center went down. Twice.
This means that both websites need to be synchronized in some sense -- I'm less worried about the code of the pages, that's easy enough to sync, but they need to have the same database data.
From my research on SO, it seems MySQL replication is a good option, but the MySQL manual, on scaling out, says it's best when there are far more reads than writes/updates:
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-scaleout.html
In our case, it's about equal. We're getting around 200-300 thousand requests a day right now, and we can grow rapidly. Every request is both a read and write request.
What would be the best method or tool to handle this?
Replication isn't instantaneous, and all writes have to be sent over the wire to the remote servers, so it takes bandwidth too. As long as this works for you and you understand the consequences, then don't worry about the read/write ratio.
However, are you sure that you need global replication? We handle millions of requests and have one location, with multiple web servers connected to two databases. One database is the live database, and the other is a replicated read only database.
We do have global failover locations, and some people connect to these on any given day, even if our main node is up, because they have Internet issues. The data just trickles in, though.
If the main node went down, everybody would be using the global failover locations, in order. So if our main node died, all customers would connect to Denver; if Denver went down, they'd all connect to Columbus.
Also, our main node is on two different Internet providers, so one ISP going down doesn't take us down.
Is the connection speed between the two data centers good enough? You could copy the files to a new server and move the database there, and then set up the old server so that it connects to the new server's MySQL database in the other DC. This will be slower, of course, but depending on the nature of your queries it can be acceptable. As soon as DNS (or whatever mechanism you use) moves over, you just power off the old server once there are no more requests for it.
To assess your options, you need to consider what your requirements are in a disaster recovery scenario (i.e. total loss of the system in one data-centre).
In particular for this scenario, how much data can you afford to lose (recovery point objective - RPO), and how quickly do you need to have the standby data-centre version of the site up and running (recovery time objective - RTO).
For example if your RPO is no transactions lost and recovery in 5 minutes, then the solution would be different than if you can afford to lose 5 mins of transactions and an hour to recover.
Another question I'd ask is if you're using SAN storage at all? This gives you options for replication at the storage level (SAN array to SAN array), rather than at the database level (e.g. MySQL replication).
Also to consider is the distance between the data-centres (e.g. timewise, can you afford to perform a synchronous write to both databases, or would an asynchronous replication approach be more appropriate?).

SQL Server 2008 - Best Backup solution

I'm setting up SQL Server 2008 on a production server; what is the best way to back up this data? Should I use replication and then back up that server? Should I just use a simple command-line script and export the data? Which replication method should I use?
The server is going to be pretty loaded, so I need an efficient method.
I have access to multiple computers that I can use.
A very simple yet good solution is to run a full backup using sqlcmd (formerly osql) locally, then copy the BAK file over the network to a NAS or other store. It's sub-optimal in terms of network/disk usage, but it's very safe, because every backup is independent, and since the process is very simple it is also very robust.
Moreover, this even works in Express editions.
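As a sketch of that approach (the database name, path, and sqlcmd invocation are placeholders; on Express you'd schedule this with Windows Task Scheduler, since there is no SQL Agent):

    -- Run via e.g.:  sqlcmd -S .\SQLEXPRESS -i backup.sql
    BACKUP DATABASE ProductionDB
    TO DISK = N'D:\Backups\ProductionDB_full.bak'
    WITH INIT,      -- overwrite the previous file in this simple scheme
         CHECKSUM;  -- verify page checksums while writing the backup
    -- Then copy the .bak over the network to the NAS with any file-copy tool.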
The "best" backup solutions depends upon your recovery criteria.
If you need immediate access to the data in the event of a failure, a three server database mirroring scenario (live, mirror and witness) would seem to fit - although your application may need to be adapted to make use of automatic failover. "Log shipping" may produce similar results (although without automatic failover, or need for a witness).
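For reference, the pairing step of mirroring is small once the endpoints exist and the mirror has been seeded from a full backup restored WITH NORECOVERY; the server names and port below are placeholders:

    -- On the mirror server first:
    ALTER DATABASE ProductionDB
    SET PARTNER = N'TCP://principal.example.com:5022';

    -- Then on the principal:
    ALTER DATABASE ProductionDB
    SET PARTNER = N'TCP://mirror.example.com:5022';

    -- Optionally, from the principal, add a witness to enable automatic failover:
    ALTER DATABASE ProductionDB
    SET WITNESS = N'TCP://witness.example.com:5022';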
If, however, there's some wiggle room in the recovery time, regular scheduled backups of the database (e.g., via SQL Agent) and its transaction logs will allow you to do point-in-time restores. The frequency of backups would be determined by database size, how frequently the data is updated, and how far you are willing to roll back the database in the event of complete failure (unless you can extract a transaction log backup out of a failed server, you can only recover to the latest backup).
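A minimal sketch of such a point-in-time restore, with placeholder names, paths, and STOPAT time:

    -- Restore the last full backup without recovering, so logs can be applied.
    RESTORE DATABASE ProductionDB
    FROM DISK = N'D:\Backups\ProductionDB_full.bak'
    WITH NORECOVERY;

    -- Roll forward through the log backups, stopping just before the failure.
    RESTORE LOG ProductionDB
    FROM DISK = N'D:\Backups\ProductionDB_log1.trn'
    WITH STOPAT = '2012-01-01T11:59:00',
         RECOVERY;  -- bring the database back online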
If you're looking to simply roll back to known-good states after, say, user error, you can make use of database snapshots as a lightweight "backup" scenario - but these are useless in the event of server failure. They're near-instantaneous to create and only take up room as the data changes, but they incur a slight performance overhead.
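A sketch of the snapshot scenario (the snapshot name, logical file name, and sparse-file path are placeholders; note that database snapshots are an Enterprise edition feature in SQL Server 2008):

    CREATE DATABASE ProductionDB_snap
    ON ( NAME = ProductionDB_Data,  -- logical file name of the source database
         FILENAME = N'D:\Snapshots\ProductionDB_snap.ss' )
    AS SNAPSHOT OF ProductionDB;

    -- To roll the source back to this known-good state after user error:
    RESTORE DATABASE ProductionDB
    FROM DATABASE_SNAPSHOT = 'ProductionDB_snap';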
Of course, these aren't the only backup solutions, nor are they mutually exclusive - just the ones that came to mind.

Full complete MySQL database replication? Ideas? What do people do?

Currently I have two Linux servers running MySQL, one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server) and another some couple of miles away on a 3 Mbit/s upload pipe (mirror).
I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them is that, under MySQL master/slave configurations, every now and then some statements get dropped (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens to a meaningful block of data once every month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate).
The other most important (and annoying) recurring issue is that when, for some reason, we do a major upload or update (or reboot) on one end and have to sever the link, LOAD DATA FROM MASTER doesn't work, and I have to manually dump on one end and upload on the other - quite a task nowadays, moving some 0.5 TB worth of data.
Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full database replication). What do people out there do? The way it's structured, we run an automatic failover where if one server is not up, then the main URL just resolves to the other server.
We at Percona offer free tools to detect discrepancies between master and slave, and to get them back in sync by re-applying minimal changes:
pt-table-checksum
pt-table-sync
GoldenGate is a very good solution, but probably as expensive as the MySQL replicator.
It basically tails the journal and applies changes based on what's committed. They support bi-directional replication (a hard task), and replication between heterogeneous systems.
Since they work by processing the journal file, they can do large-scale distributed replication without affecting performance on the source machine(s).
I have never seen dropped statements, but there is a bug where network problems could cause relay log corruption. Make sure you don't run MySQL without this fix.
Documented in the 5.0.56, 5.1.24, and 6.0.5 changelogs as follows:
Network timeouts between the master and the slave could result in corruption of the relay log.
http://bugs.mysql.com/bug.php?id=26489