I'm attempting to set up a hybrid cloud (private to AWS) HA SQL solution with SQL Server 2014 Standard Edition (not my first choice, but it was the requirement that was given to me).
I'm wondering whether it is possible and/or a best practice to log ship to a secondary mirrored set. In other words, I would configure two sets of mirrored databases and log ship between set A and set B. The configuration would be:
Server A <-Mirror-> Server B --Log Ship-> Server C <-Mirror-> Server D
Or, the other option is to log ship to a single instance and enable mirroring on failover:
Server A <-Mirror-> Server B --Log Ship-> Server C
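For reference, the mirroring half of either topology boils down to a pair of SET PARTNER statements. A minimal sketch, assuming a database named MyDb and mirroring endpoints already created on port 5022 (all names here are hypothetical):

```sql
-- On Server B (the mirror), after restoring the database WITH NORECOVERY:
ALTER DATABASE MyDb SET PARTNER = N'TCP://servera.example.com:5022';

-- On Server A (the principal):
ALTER DATABASE MyDb SET PARTNER = N'TCP://serverb.example.com:5022';
```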
P.S. I know there are other HA options with SQL Server 2014; however, I'm not prepared to pay Enterprise Edition prices. Before that, I'd sooner pay the development cost to move to MySQL (replication).
OK. After much research and trial and error, I've discovered the pattern that can be followed for a log-shipped mirror.
First, read this technical article from Microsoft: Database Mirroring and Log Shipping (SQL Server)
The basic steps are to:
Configure Mirroring on Server A and B
Configure backup log shipping on Server A
Manually fail over to Server B and configure backup log shipping
Fail back to Server A if desired
Configure log shipping restore jobs on Server C and Server D (this will keep them 'transactionally' in sync)
On a "failure event" (failover to Server C and Server D):
Manually restore the log shipping logs on Server C and Server D (or wait for the log shipping restore job to run), then disable the log shipping restore job
Bring Server C out of 'Restoring' mode: RESTORE DATABASE <db name> WITH RECOVERY
Configure Mirroring on C and D
Note: this was tested on SQL Server 2012
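A minimal T-SQL sketch of the failover steps on Server C, assuming a database named MyDb and a restore job named LSRestore_ServerA_MyDb (both names are hypothetical):

```sql
-- Disable the log shipping restore job so it stops applying log backups.
EXEC msdb.dbo.sp_update_job
    @job_name = N'LSRestore_ServerA_MyDb',
    @enabled  = 0;

-- Apply any remaining log backups manually if needed, then bring the
-- database online; WITH RECOVERY rolls back uncommitted transactions
-- and takes the database out of the restoring state.
RESTORE DATABASE MyDb WITH RECOVERY;
```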
My company uses a product that uses MySQL 5.5 for its backend database. The product automatically installs and configures MySQL during its installation process. The product can be configured to run in a hot-standby redundant configuration: the same installation process is performed on 2 separate servers, and then redundant mode is selected during the product's initial configuration. The product internally handles all the processes of duplicating the database data and keeping the 2 databases in sync; MySQL has no knowledge of the redundant setup. The MySQL installations on both servers are identical, same location and same structure.
The product does not have a very elegant/efficient way to sync a large database, say 300 GB in size with 3K tables, from the primary server to the backup server in cases where this is required, such as when creating a redundant system from a single/primary server config that has already been running for a while. My question is as follows.
Is there a safe/supported way to just manually copy the database files from the primary server to the backup server, considering that the MySQL installations on both servers are identical? BTW, this is on production Windows servers. I know I can do a full export of the database from the primary and then import it on the backup server, but this can take hours. I am hoping there is a faster, supported way to just copy the files from one server to the other, but in researching this I see conflicting information.
System Info
Windows
MySQL 5.5
Identical installation on both servers
"C:\ProgramData\MySQL\MySQL Server 5.5\data"
InnoDB
innodb_file_per_table = true
Thanks in advance for any advice.
I once tried to just copy the database folder that contains all the InnoDB table files, "C:\ProgramData\MySQL\MySQL Server 5.5\data\Mydbase", from one server to another, but MySQL would not start up and reported errors.
Yes: shut down the MySQL Server service on both computers. Then you can move the files in the datadir in any way you want. (Copying only the per-database folder, as you tried, fails because InnoDB also keeps its data dictionary in the shared ibdata1 file at the root of the datadir; copy the entire datadir.) But this incurs some downtime while you do the file transfer.
If you must have no downtime, it's also possible, but requires more steps.
What I do is use Percona XtraBackup to make a physical backup of the source instance, but this won't work as easily for you: XtraBackup doesn't run on Windows. Some people use tricks to run XtraBackup in a Docker container on Windows.
Then restore the XtraBackup to your new computer in the normal way, and configure it as a replica of the source instance. See https://docs.percona.com/percona-xtrabackup/8.0/howtos/setting_up_replication.html
By making the new instance a replica, you can let it get updated with the most recent changes that have occurred on the source instance while you were setting up the replica.
Then at some point you decide to switch to the new instance. Set the source instance to read-only mode, to prevent client applications from making any new changes, and let the replica catch up with the last few changes (this should only take a second if the replica was keeping up already). Now you can change your client applications to use the replica instead of the former source. Finally, un-configure replication on the new instance with RESET SLAVE, because the last thing you want is for any more changes to occur on the former source and replicate to the new instance.
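A minimal sketch of that cutover in MySQL 5.5 syntax (where each statement runs is noted in the comments):

```sql
-- On the old source: block new writes from ordinary clients.
SET GLOBAL read_only = 1;

-- On the replica: confirm it has applied everything.
-- Seconds_Behind_Master should reach 0.
SHOW SLAVE STATUS;

-- On the replica, once applications point at it: stop replicating
-- and discard the replication configuration.
STOP SLAVE;
RESET SLAVE;
```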
If you try this procedure, I suggest you test it on a test instance — NOT your production instance — until you are comfortable with the tools.
P.S.: In addition to XtraBackup not supporting Windows, I have no idea if the current version of XtraBackup works with MySQL 5.5. That version of MySQL was released in 2010 and reached its end of life in 2018, so I think you will need to research which version of XtraBackup can still read a MySQL 5.5 instance. You might have to use an old version of XtraBackup.
I have 3 MySQL servers that I need to back up daily. Each server uses just 1 database with multiple tables.
I've scripted a mysqldump job on each server, but this time I want each MySQL server backing up to a 4th server (the master server, which is in a remote location).
The master server will serve as a mirror for all 3 servers, so that we can view the data of the other servers even if one of them goes down, because the master server will be on a more reliable internet connection.
NOTES and LIMITATIONS:
1) Each server needs to "send" its backups to the master server, because the master server cannot open connections to the slave servers (port forwarding is not supported on the slaves).
2) I'd prefer that only the "changes" are backed up, to keep things light on the network (synchronization? incremental?).
3) All are running Windows 7 at the moment, because for now I'm using Navicat for MySQL's synchronization features. I would prefer a PHP-script-based solution so I can migrate things to *nix. I've read about replication and all that stuff, but I kinda wanted a ready solution, perhaps some software I could download or buy. I've no time to code my own sync/replication scripts/software; I just want to get over this remote sync hurdle and move on with the project.
regards to all
i've read about replication and all that stuff, but I kinda wanted a ready solution
But replication is a ready solution: just type a few commands and change a little configuration.
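A minimal sketch of those "few commands", with hypothetical host, account, and binary log coordinates (take the real coordinates from SHOW MASTER STATUS on the source, and make sure binary logging is enabled there):

```sql
-- On each source server: create an account the replica can use.
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';

-- On the replica: point it at the source and start replicating.
CHANGE MASTER TO
    MASTER_HOST     = 'source1.example.com',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;
```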
I have an application that runs on server A, and the database is on the same server.
There is a backup server B which I use in case server A is down.
The application will remain unchanged, but the data in the DB is changing constantly.
Is there a way to synchronize those 2 databases in real time, automatically?
Currently I wait till all the users are gone so I can manually back up and restore on the backup server.
Edit: when I said real-time I didn't mean it literally; I can handle up to one hour of delay, but the faster the sync the better.
My databases are located on 2 servers on the same local network.
Both are SQL Server 2008; the main DB is on Windows Server 2008
and the backup is on Windows Server 2003.
A web application (intranet) is using the DB.
I can use SQL Server Agent (if that can help).
I don't know what kind of details could be useful to solve this; kindly tell me what would help. Thanks.
Edit: I need to sync all the tables, and tables only.
The second database is writable, not read-only.
I think what you want is Peer-to-Peer Transactional Replication.
From the link:
Peer-to-peer replication provides a scale-out and high-availability solution by maintaining copies of data across multiple server instances, also referred to as nodes. Built on the foundation of transactional replication, peer-to-peer replication propagates transactionally consistent changes in near real-time. This enables applications that require scale-out of read operations to distribute the reads from clients across multiple nodes. Because data is maintained across the nodes in near real-time, peer-to-peer replication provides data redundancy, which increases the availability of data.
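Note that peer-to-peer replication requires Enterprise Edition. A minimal sketch of enabling it on one node, with hypothetical database and publication names (the topology wizard in Management Studio handles the rest of the node configuration):

```sql
USE master;
-- Mark the database as published for transactional replication.
EXEC sp_replicationdboption
    @dbname  = N'MyAppDb',
    @optname = N'publish',
    @value   = N'true';

USE MyAppDb;
-- Create the publication with the peer-to-peer option enabled.
EXEC sp_addpublication
    @publication                  = N'P2P_MyAppDb',
    @enabled_for_p2p              = N'true',
    @allow_initialize_from_backup = N'true',
    @status                       = N'active';
```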
I have been doing some research on best backup procedures for largish (27.678 GB) MySQL database tables.
Currently we are using a program called Rapidsync (an offsite backup tool), but it is slow, and it locks the tables it is currently backing up, causing downtime/slowness of SQL.
Our current server is running Windows Server 2008 R2, with SQL Server 2008 on the same box as well.
Hardware specs for the dedicated server are:
16 GB RAM
CPU: Intel Xeon E3-1230 V2 @ 3.30 GHz
1 TB hard drive
In terms of databases we have 58 in total, varying in size, some of which ideally need to be backed up weekly or even daily.
Through a program we use called Navicat, you can tunnel to a database using SSH and copy databases manually. Is this a reliable and feasible option if we were to install it on our local machine and copy them across? Or would it be more secure/efficient to use a SQL dump, maybe?
I hope I have given all of the necessary info but please do ask if you need to know more.
P.S. Only free options at this moment, as we are on a tight budget! :)
Thanks in advance
You mentioned SSH, so I suppose the backups can also be done on a server other than the database server itself. For Unix, you can use the great tool Percona XtraBackup, which is free and supports online (non-locking) backup of InnoDB database files as well as incremental backups. Maybe it is possible to compile it on Windows too (part of it is written in C, part in Perl).
So you can set up a weekly full backup and daily incremental backups. The tool keeps track of which pages of the InnoDB datafiles have been changed and copies only those.
I'm working on a project that has a MySQL transactional database backing a web application. The company uses SQL Server for back-office and reporting applications. What is the best way to update SQL Server with the data from MySQL? Right now, we are performing a dump of the MySQL data and doing a full restore. This may not be feasible much longer due to the increasing size of the database.
I would prefer a solution that copies only newly inserted and updated rows. I also need the SQL Server database to be static after the updates are applied; basically, it should change once a day. I can update SQL Server from a local copy of MySQL (i.e. not production). Is there a way to apply MySQL replication to a slave server at specified intervals? A perfect solution would be a once-daily update on MySQL that syncs the database as of a point in time.
Can you find a way to snapshot the MySQL DB and then do the copy? It would give you an instant copy of the database, frozen in time.
http://aspiringsysadmin.com/blog/2007/08/13/consistent-mysql-backups-using-zfs-snapshots/
The ZFS filesystem can do this, but you haven't mentioned your hardware/OS.
Also, perhaps you could restrict the data you are pulling: if your pull takes 45 minutes, only pull data that is older than 1 hour, so that time-sensitive rows still being written are excluded. Or, to make things a little safer, how about just pulling the previous day?
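For example, assuming a hypothetical orders table with an updated_at timestamp column, the pull could be bounded like this:

```sql
-- Pull only rows last modified more than an hour ago, so rows still
-- being written during the 45-minute pull window are excluded.
SELECT *
FROM orders
WHERE updated_at < NOW() - INTERVAL 1 HOUR;

-- Safer variant: pull only the previous day's rows.
SELECT *
FROM orders
WHERE updated_at >= CURDATE() - INTERVAL 1 DAY
  AND updated_at <  CURDATE();
```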
I believe SSIS 2008 has a new 'maintain table' component that does the common task of picking up updated/inserted records and, optionally, deletes.
Look into DTS, Microsoft's ETL tool. It's rather nice. Do the mapping, schedule it as a cron job, and Bob's your uncle.
Regardless of how you do the import to SQL Server from the MySQL clone, I don't think you need to worry about restricting MySQL replication to specific times.
MySQL replication only requires one thread in the master server and basically just transfers the binary log to the slave. If you can, put the master and slave MySQL servers on a private LAN segment so that replication traffic does not impact the web traffic.
If you have SQL Server Standard or higher, SQL Server will take care of all of your needs:
Use SSIS to grab the data.
Use SQL Server Agent to schedule your timed tasks.
BTW, I'm doing the exact same thing that you are doing. SQL Server is awesome: it was easy to set up (I'm a noob to SSIS) and it worked on the first shot.
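A minimal sketch of the Agent side, with a hypothetical job name and package path (the SSIS package itself is built separately and pointed at by the job step):

```sql
USE msdb;
-- Create the job, give it one step that runs the SSIS package,
-- schedule it daily, and attach it to the local server.
EXEC sp_add_job         @job_name = N'Nightly MySQL Import';
EXEC sp_add_jobstep     @job_name  = N'Nightly MySQL Import',
                        @step_name = N'Run SSIS package',
                        @subsystem = N'SSIS',
                        @command   = N'/FILE "C:\ETL\MySQLImport.dtsx"';
EXEC sp_add_jobschedule @job_name = N'Nightly MySQL Import',
                        @name     = N'Daily at 1 AM',
                        @freq_type         = 4,      -- daily
                        @freq_interval     = 1,      -- every day
                        @active_start_time = 010000; -- 01:00:00
EXEC sp_add_jobserver   @job_name = N'Nightly MySQL Import';
```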
It sounds like what you need to do is to set up a script to start and stop replication on a slave database. If you can do that via a script, then you can establish a workflow in SSIS such as follows:
1. Stop replication to the slave MySQL database.
2. Once replication has stopped, take a snapshot of the slave MySQL database.
3. Once the snapshot has been taken:
a. Start replication to the slave MySQL database.
b. Import the slave MySQL database replica into SQL Server.
NB: 3a and 3b can run in parallel.
I think your best bet in such a scenario would be to use SSIS to enable and disable MySQL replication to the slave, as well as to take the snapshot of the slave database; then you can drive the whole thing from the SQL Server Agent mechanism.
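The replication stop/start steps themselves are just two statements the package could issue against the slave (a sketch; the snapshot and import in between are whatever your tooling provides):

```sql
-- Step 1: stop applying replicated changes so the slave's data
-- is frozen for a consistent snapshot.
STOP SLAVE SQL_THREAD;

-- ... take the snapshot, then run the SQL Server import ...

-- Step 3a: resume applying replicated changes.
START SLAVE SQL_THREAD;
```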
Hope this helps