Looking for MySQL backup/sync suggestions (multiple servers syncing to 1) - mysql

I have 3 MySQL servers that I need to back up daily. Each server uses just one database with multiple tables.
I've scripted a mysqldump backup on each server, but this time I want each MySQL server backing up to a 4th server (the MASTER SERVER), which is at a remote location.
The master server will serve as a MIRROR for all 3 servers, so that we can view the data of the other servers even if one of them goes down, because the master server will be on a more reliable internet connection.
NOTES and LIMITATIONS:
1) EACH SERVER needs to "send" its backups to the MASTER SERVER, because the master server cannot make incoming connections to the slave servers (port forwarding is not supported on the slaves).
2) I'd prefer that only the "changes" are backed up, to keep things light on the network (synchronization? incremental?).
3) All servers are running Windows 7 at the moment, because for now I'm using Navicat for MySQL's synchronization features. I would prefer a PHP-script-based solution so I can migrate things to *nix. I've read about replication and all that stuff, but I kind of wanted a ready solution, perhaps software I could download or buy. I have no time to code my own sync/replication scripts; I just want to get over this remote-sync hurdle and move on with the project.
Regards to all

I've read about replication and all that stuff, but I kind of wanted a
ready solution
But replication is a ready solution: just type a few commands and change a little configuration.
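For what it's worth, here is a minimal sketch of what that looks like, assuming one replica instance on the central machine per source server (a MySQL 5.x instance can only follow a single master) and placeholder host names and credentials. One caveat for your setup: in MySQL replication it is the replica that opens the connection to its master, so with your connectivity limits each of the 3 servers would need to keep an outbound tunnel (e.g. SSH) open to the central machine for the replica to connect through.

    -- On each of the three source servers (binary logging must be enabled,
    -- e.g. log-bin in my.ini, and every server needs a unique server-id):
    CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';   -- placeholder credentials
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
    SHOW MASTER STATUS;   -- note File and Position for the step below

    -- On the central server, once per local replica instance:
    CHANGE MASTER TO
        MASTER_HOST = 'server1.example.com',          -- placeholder host / tunnel endpoint
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_LOG_FILE = 'mysql-bin.000001',         -- from SHOW MASTER STATUS
        MASTER_LOG_POS  = 107;
    START SLAVE;
    SHOW SLAVE STATUS\G

Once running, only the changes recorded in the binary log travel over the network, which also covers the "incremental" requirement.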

Related

Manually copy MySQL 5.5 database to a different computer

My company uses a product that uses MySQL 5.5 for its backend database. The product automatically installs and configures MySQL during its installation process. The product can be configured to run in a hot-standby redundant configuration: the same installation is performed on 2 separate servers, and redundant mode is selected during the product's initial configuration. The product internally handles all the work of duplicating the database data and keeping the 2 databases in sync; MySQL has no knowledge of the redundant setup. The MySQL installation on both servers is identical, same location and same structure.
The product does not have a very elegant/efficient way to sync a large database (say 300G in size with 3K tables) from the Primary server to the Backup server in cases where this is required, such as when creating a redundant system from a Single/Primary server config that has already been running for a while. My question is as follows.
Is there a safe/supported way to just manually copy the database files from the Primary server to the Backup server, considering that the MySQL installation on both servers is identical? BTW, this is on production Windows servers. I know I can do a full export of the database from the Primary and then import it on the Backup server, but this can take hours. I am hoping there is a faster, supported way to just copy the files from one server to the other, but in researching this I see conflicting info.
System Info
Windows
MySQL 5.5
Identical installation on both servers
"C:\ProgramData\MySQL\MySQL Server 5.5\data"
Innodb
File per table = true
Thanks in advance for any advice.
I once tried to just copy the database folder that contains all the InnoDB table files, "C:\ProgramData\MySQL\MySQL Server 5.5\data\Mydbase", from one server to another, but MySQL would not start up and reported errors.
Yes: shut down the MySQL Server service on both computers. Then you can move the files in the datadir in any way you want. But this incurs some downtime while you do the file transfer.
If you must have no downtime, it's also possible, but requires more steps.
What I do is use Percona XtraBackup to make a physical backup of the source instance, but this won't work as easily for you because XtraBackup doesn't run on Windows. Some people use tricks to run XtraBackup in a Docker container on Windows.
Then restore the XtraBackup to your new computer in the normal way, and configure it as a replica of the source instance. See https://docs.percona.com/percona-xtrabackup/8.0/howtos/setting_up_replication.html
By making the new instance a replica, you can let it get updated with the most recent changes that have occurred on the source instance while you were setting up the replica.
Then at some point you decide to switch to the new instance: set the source instance to read-only mode to prevent client applications from making any new changes, and let the replica catch up with the last few changes (this should only take a second if the replica was already keeping up). Now you can change your client applications to use the replica instead of the former source. Finally, un-configure replication on the new instance with RESET SLAVE, because the last thing you want is for any more changes to occur on the former source and replicate to the new instance.
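As a rough illustration of that cutover (statement names as in MySQL of that era; treat it as a sketch, not a script):

    -- On the old source instance: block new writes
    -- (note: accounts with the SUPER privilege can still write)
    SET GLOBAL read_only = ON;

    -- On the replica: wait until everything has been applied
    SHOW SLAVE STATUS\G   -- wait for Seconds_Behind_Master to reach 0

    -- After the applications have been pointed at the new instance:
    STOP SLAVE;
    RESET SLAVE;          -- forget the former source, as described above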
If you try this procedure, I suggest you test it on a test instance — NOT your production instance — until you are comfortable with the tools.
P.S.: Besides the Windows issue, I have no idea whether the current version of XtraBackup works with MySQL 5.5. That version was released in 2010 and reached its end of life in 2018, so you will need to research which version of XtraBackup can still read a MySQL 5.5 instance. You might have to use an old version of XtraBackup.

mysql/mariadb single database replication with read-write-split only for this single database

In my setup there are two Debian servers. The first one is the old production server and the second is the new one. On the first (old) one runs a MySQL v5.5 DB server and an old application that lacks support and cannot easily be ported to the new server. The new server runs MariaDB v10.1, and all the other applications were ported from the old server to this new one. These applications also have to work with the data of the application that cannot be ported.
The ported applications can only access local databases, so there is no easy way to point them at the old DB server.
My idea:
I want to replicate (master->slave) the one database used by the old, non-portable application from the MySQL v5.5 server to the MariaDB v10.1 server.
No problem so far.
But the applications on the new server not only read the data of the old application, they can also modify it. And they also have their own databases that exist only on the new server. As far as I know this is a problem and can break replication in some situations, if the applications write to the replicated database on the slave.
My next thought was to use an SQL dispatcher proxy, and I found some interesting ones (MariaDB MaxScale, HAProxy, ProxySQL), but as far as I understood they can split read and write operations, yet I couldn't find a way to route write operations for different databases to different servers.
Can anybody give me a hint to solve this problem?
Setting:
Server 1 - Mysql v5.5 - database_1
Server 2 - Mariadb v10.1 - database_1, database_2, database_3
An application on server 1 is writing and reading data from database_1 on server 1.
Other applications on server 2 are reading and writing data to database_1 on server 2.
So the data of database_1 has to be replicated from server 1 to server 2 and may be changed there.
A master-master replication instead of master-slave could work, but because of auto_increment fields that could break the replication, and because the data changed on server 2 does not have to exist on server 1, I don't think this is the way to go. (I'm aware that I could set the auto_increment interval to two to avoid this problem, but it's an already-running production system, so changes like this are not so easy.)
At the moment we're doing backups by hand and copying them over, but that's way too slow, and I'm sure there is a better way ;)
You can write to a replication slave (server 2) for databases like database_2 and database_3 that will never appear in the replication stream.
If you started updating database_1, you would probably end up in trouble.
You are replicating between two database servers that are more than a major version apart, so there is the possibility that a deprecated SQL statement gets replicated to a server that has removed it, and replication will stop. Keep an eye out for this in the weeks after deployment. binlog_format=ROW may mitigate some of the statements that could otherwise replicate incorrectly.
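A minimal sketch of that layout, with placeholder credentials; the filter that keeps only database_1 coming from server 1 is normally set in server 2's my.cnf (replicate-do-db=database_1), so database_2 and database_3 stay purely local:

    -- On server 1 (MySQL 5.5 master): log row images rather than statements;
    -- this only affects new sessions, so also put binlog_format=ROW in my.cnf
    SET GLOBAL binlog_format = 'ROW';

    -- On server 2 (MariaDB 10.1 replica), with replicate-do-db=database_1 configured:
    CHANGE MASTER TO
        MASTER_HOST = 'server1.example.com',   -- placeholder
        MASTER_USER = 'repl',
        MASTER_PASSWORD = 'secret',
        MASTER_LOG_FILE = 'mysql-bin.000001',
        MASTER_LOG_POS  = 4;
    START SLAVE;

The warning above still applies: nothing in this setup stops the applications on server 2 from writing into the replicated database_1, so that discipline has to come from the applications themselves.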

Is it possible to synchronize 2 SQL Server databases in real time

I have an application that runs on server A, and the database is on the same server.
There is a backup server B which I use in case server A is down.
The application will remain unchanged, but the data in the DB is changing constantly.
Is there a way to synchronize those 2 databases automatically, in real time?
Currently I wait till all the users are gone so I can manually back up and restore on the backup server.
Edit: when I said real-time I didn't mean it literally; I can handle up to one hour of delay, but the faster the sync, the better.
My databases are located on 2 servers on the same local network.
Both of them are SQL Server 2008; the main DB is on Windows Server 2008,
the backup is on Windows Server 2003.
A web application (intranet) is using the DB.
I can use SQL Server Agent (if that can help).
I don't know what kind of details could be useful to solve this; kindly tell me what would help. Thanks.
Edit: I need to sync all the tables, and tables only.
The second database is writable, not read-only.
I think what you want is Peer to Peer Transactional Replication.
From the link:
Peer-to-peer replication provides a scale-out and high-availability solution by maintaining copies of data across multiple server instances, also referred to as nodes. Built on the foundation of transactional replication, peer-to-peer replication propagates transactionally consistent changes in near real-time. This enables applications that require scale-out of read operations to distribute the reads from clients across multiple nodes. Because data is maintained across the nodes in near real-time, peer-to-peer replication provides data redundancy, which increases the availability of data.
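For reference, enabling a publication for peer-to-peer looks roughly like the following. This is only a sketch: the database name AppDB and publication name AppDB_P2P are made up, the distributor has to be configured first, and articles and subscriptions still have to be added on every node. Also keep in mind that, as far as I recall, peer-to-peer replication requires Enterprise Edition.

    -- Run at each node, in the publication database, after the distributor
    -- is set up (sp_adddistributor / sp_adddistributiondb):
    USE AppDB;
    EXEC sp_replicationdboption
         @dbname  = N'AppDB',
         @optname = N'publish',
         @value   = N'true';

    EXEC sp_addpublication
         @publication     = N'AppDB_P2P',
         @enabled_for_p2p = N'true',
         @allow_initialize_from_backup = N'true',
         @repl_freq       = N'continuous',
         @status          = N'active';

    -- Tables are then added with sp_addarticle, and the nodes are wired
    -- together with sp_addsubscription / sp_addpushsubscription_agent.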

4 master servers replicating to a single server + 4 instances

MySQL 5.x
I want to know how we can replicate 5 different master servers to a single server running 5 DB instances using MySQL replication. What are the advantages and disadvantages?
If you know how to
configure several MySQL servers on the same machine (http://dev.mysql.com/doc/refman/5.5/en/multiple-servers.html)
configure replication between a master and a slave (http://dev.mysql.com/doc/refman/5.5/en/replication-howto.html)
then you are good to go, I believe.
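A rough sketch of the slave side, assuming the five local instances listen on ports 3307-3311 (arbitrary numbers) with their own data directories, as described in the multiple-servers page linked above:

    -- Connect to each local instance in turn, e.g.:
    --   mysql --host=127.0.0.1 --port=3307
    -- Every instance (and every master) must have a distinct server_id:
    SHOW VARIABLES LIKE 'server_id';
    -- Point the instance at "its" master with the usual
    -- CHANGE MASTER TO ... statement, then start replication:
    START SLAVE;
    SHOW SLAVE STATUS\G   -- Slave_IO_Running / Slave_SQL_Running should both say Yes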
What is the purpose of this replication?
If for backup purposes only, then I do not see any disadvantage, especially if you run 5 separate instances of MySQL on the backup machine.
Just make sure your hardware is fast enough to deal with the combined activity of the 5 masters.
I don't see any other useful application for this setup.

Two MySQL servers using the same database

I have a MySQL database running on our server at this location.
However, the internet connection at this location is slow (especially when several users are connected remotely).
We also have a remote web server on a very fast internet connection.
Can I run another MySQL server on the remote server and still be able to run queries and updates on it?
I want to have two servers because:
- Users at this location can connect via LAN (fast)
- Users working remotely can connect to the synced remote server (fast)
Is this possible? From what I understand, replication does not work this way. What is replication used for, then? Backups?
Thanks for your help!
[Edit]
After doing some more reading, I am a little worried about setting up multi-master replication, because I had not considered multi-master when designing the database, and conflicts could be an issue.
The good news, though, is that the most time-consuming operations are queries, not updates.
Also, I found out that there is a driver that handles master-slave connections:
http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-replication-connection.html
That way writes will be sent to the master and reads can come from the faster connection.
Has anyone tried doing this before? My one concern is that if I write an update to the master and then run a query expecting to see that update on the slave, will it be there right away? Or will the slow connection make this solution just as slow as using the master for both reads and writes?
What you're asking about, I believe, is called Multi-Master Replication, in which both servers serve as replication masters to each other. Changes on either server are replicated to the other as soon as possible. MySQL can be configured to do this; however, I'm not sure how the difference in connection speed would affect your performance and data integrity.
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication-multi-master.html
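If you do go the multi-master route, the usual trick for the key conflicts you mention is to interleave the auto-increment ranges; a sketch for two nodes (these are standard server variables, the values are just the common two-node pattern):

    -- On server A (the LAN server):
    SET GLOBAL auto_increment_increment = 2;  -- step by 2
    SET GLOBAL auto_increment_offset    = 1;  -- hands out 1, 3, 5, ...

    -- On server B (the remote server):
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset    = 2;  -- hands out 2, 4, 6, ...

    -- Each server is then configured as a slave of the other with the
    -- usual CHANGE MASTER TO ... / START SLAVE statements.

This only addresses auto-increment collisions; conflicting updates to the same existing row are still possible with multi-master, which is the integrity concern mentioned above.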