We have a web application running on a local machine, and we would like to replicate its MySQL database to a remote web server. I have looked at the online tutorials for this, and I have one concern about the MASTER HOST ADDRESS. Since we usually shut down the laptops at the end of the day, the IP address will not be the same every day. Is there any other way or method by which replication can be achieved without specifying the local IP to the remote server?
Thank you!
MySQL replication is pretty robust and will correctly handle situations where the master server is offline for prolonged periods of time.
Your laptop could establish a VPN tunnel to the replica server or to some relay server. The private addresses used on both endpoints of the tunnel would be static, so this approach will work regardless of the type of internet connection in use - one day it could be at home, another day at the office, another day on a WiFi hotspot at a cafeteria.
WireGuard or OpenVPN would likely be the easiest to set up and troubleshoot; IPsec is another alternative.
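A minimal sketch of such a tunnel with WireGuard (the keys, tunnel addresses, and replica hostname are all placeholders):

    # laptop (the master), /etc/wireguard/wg0.conf
    [Interface]
    PrivateKey = <laptop-private-key>
    Address    = 10.8.0.2/24

    [Peer]
    PublicKey           = <replica-public-key>
    Endpoint            = replica.example.com:51820
    AllowedIPs          = 10.8.0.1/32
    PersistentKeepalive = 25   # keeps the NAT mapping open as the laptop moves around

    # replica server, /etc/wireguard/wg0.conf
    [Interface]
    PrivateKey = <replica-private-key>
    Address    = 10.8.0.1/24
    ListenPort = 51820

    [Peer]
    PublicKey  = <laptop-public-key>
    AllowedIPs = 10.8.0.2/32

The replica would then be pointed at the tunnel address, e.g. CHANGE MASTER TO MASTER_HOST='10.8.0.2', which stays the same no matter which network the laptop is on.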
Things worth keeping in mind (a config sketch follows this list):
Keep a long enough binlog retention period on the master side, so that past transactions have a chance to replicate to the slave; it's set by expire_logs_days.
Set an infinite master-retry-count on the slave so it keeps trying to reconnect.
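A sketch of those settings (the server IDs and retention period are illustrative):

    # master (the laptop), my.cnf
    [mysqld]
    server-id        = 1
    log-bin          = mysql-bin
    expire_logs_days = 14            # keep two weeks of binlogs for the slave to catch up on

    # slave (the replica server), my.cnf
    [mysqld]
    server-id          = 2
    master-retry-count = 4294967295  # effectively infinite reconnection attempts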
We are building a small advertising platform that will be used on several client sites. We want to set up multiple servers and load balancing (using Amazon Elastic Load Balancer) to help prevent downtime.
Our basic functions include rendering HTML for ads, recording click information (IP, user-agent, location, etc.), redirecting traffic with their click ID as a tracking variable (?click_id=XX), and basic data analysis for clients. It is very important that two different clicks don't end up with the same click ID.
I understand the basics of load balancing, but I'm not sure how to set up the MySQL server(s).
It seems there are a lot of options out there: master-master, master-slave, clusters, shards.
I'm trying to figure out what is best for us. The most important aspects we are looking for are:
Uptime - if one server goes down, its traffic is automatically picked up by another server.
Load sharing - keep CPU and RAM usage spread out.
From what I've read, it sounds like my best option might be a Master with 2 or more slaves. The Master would not be responsible for any HTTP traffic; that would go to the slaves only. The Master server would therefore only be responsible for database writes.
Would this slow down our click script? Since we have to insert first to get a click ID before redirecting, the Slave would have to contact the Master and then return with the data. Right now our click script is lightning fast and I'd like to keep it that way.
Also, what happens if the Master goes down? Would a slave be able to serve as the Master until the Master was back online?
If you use Amazon's managed database service, RDS, this will take a lot of the pain out of managing your database.
You can select the multi-AZ option on your master database instance to provide a redundant, synchronously replicated slave in another availability zone. In the event of a failure of the instance or the entire availability zone Amazon will automatically flip the A record pointing to your master instance to the slave in the backup AZ. This process, on vanilla MySQL or MariaDB, can take a couple of minutes during which time your application will be unable to write to the database.
You can also provision up to 5 read replicas for a MySQL or MariaDB instance that will replicate from the master asynchronously. You could then use an Elastic Load Balancer (or another TCP load balancer such as HAProxy, or MariaDB's MaxScale for a more MySQL-aware option) to distribute traffic across the read replicas. By default each of these read replicas will have a full copy of the master's data set, but if you wanted to you could attempt to manually shard the data across them. You'd then need some more complicated logic in your application or the load balancer to work out where to find the relevant shard of the data set, though.
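For illustration, a minimal HAProxy sketch for spreading reads across two replicas (the hostnames are placeholders, and the mysql-check user must exist on the replicas):

    listen mysql-read
        bind *:3306
        mode tcp
        balance roundrobin
        option mysql-check user haproxy_check
        server replica1 myapp-replica1.abc123.us-east-1.rds.amazonaws.com:3306 check
        server replica2 myapp-replica2.abc123.us-east-1.rds.amazonaws.com:3306 check

The application then reads from the HAProxy address and writes directly to the master's endpoint.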
You can choose to promote a read replica into a standalone master, which breaks replication from the old master and gives you an independent instance that can then be reconfigured into a setup like your previous one (or something different if you want, using just the data set you had at the point of promotion). It doesn't sound like something you need here, though.
Another option would be to use Amazon's own flavour of MySQL, Aurora, on RDS. Aurora is completely MySQL wire-compatible, so you can use whatever MySQL driver your application already uses to talk to it. Aurora allows up to 15 read replicas and completely transparent load balancing. You simply provide your application with the Aurora cluster endpoint, and any writes will happen on the master while any reads are balanced across however many read replicas you have in the cluster. In my limited testing, Aurora's failover between instances is pretty much instant too, which minimises downtime in the event of a failure.
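To make that concrete, an Aurora cluster exposes a writer endpoint plus a separate reader endpoint that spreads connections across the replicas; the names below are purely illustrative:

    # cluster (writer) endpoint - send writes here
    myapp.cluster-abc123xyz.us-east-1.rds.amazonaws.com:3306

    # reader endpoint - connections are balanced across the read replicas
    myapp.cluster-ro-abc123xyz.us-east-1.rds.amazonaws.com:3306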
AWS now allows you to replicate data from an RDS instance to an external MySQL database.
However, according to the docs:
Replication to an instance of MySQL running external to Amazon RDS is only supported during the time it takes to export a database from a MySQL DB instance. The replication should be terminated when the data has been exported and applications can start accessing the external instance.
Is there a reason for this? Can I choose to ignore this if I want the replication to be persistent and permanent? Or does AWS enforce this somehow? If so, are there any work-arounds?
It doesn't look like Amazon explicitly states why they don't support ongoing replication other than the statement you quoted. In my experience, if AWS doesn't explicitly document a reason for why they do something then you're not likely to find out unless they decide to document it at a later time.
My guess would be that it has to do with the dynamic nature of Amazon instances and how they operate within RDS. RDS instances can have their IP addresses change suddenly without warning. We've encountered that on more than one occasion with the RDS instances that we run. According to the RDS Best Practices guide:
If your client application is caching the DNS data of your DB instances, set a TTL of less than 30 seconds. Because the underlying IP address of a DB instance can change after a failover, caching the DNS data for an extended time can lead to connection failures if your application tries to connect to an IP address that no longer is in service.
Given that RDS instances can and do change their IP addresses from time to time, my guess is that they simply want to avoid having to support people who set up external replication only to have it suddenly break if/when an RDS instance gets assigned a new IP address. Unless you leave the replication user and any firewalls protecting your external MySQL server pretty wide open, replication could suddenly stop if the RDS master reboots for any reason (maintenance, hardware failure, etc.). And from a security point of view, opening up your replication user and firewall port like that is not a good idea.
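On the client side, the quoted DNS advice is easy to overlook. As one example, in a Java application the JVM-level DNS cache TTL can be lowered like this (a sketch; the 30-second value follows the guide's recommendation):

    public class DnsTtlConfig {
        public static void main(String[] args) {
            // Lower the JVM's DNS cache TTL (in seconds) so a failed-over RDS
            // endpoint resolves to its new IP address promptly.
            // Must run before the first name lookup is cached.
            java.security.Security.setProperty("networkaddress.cache.ttl", "30");
        }
    }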
I have 3 MySQL servers that I need to back up daily. Each server uses just one database with multiple tables.
I've scripted a mysqldump on each server, but this time I want each MySQL server backing up to a 4th server (MASTER SERVER), which is in a remote location.
The master server will serve as a MIRROR for all 3 servers, so that we can view the data of the other servers even if one of them goes down, because the master server will be on a more reliable internet connection.
NOTES and LIMITATIONS:
1) EACH SERVER needs to "send" its backups to the MASTER SERVER, because the master server cannot make "incoming" connections to the slave servers (port forwarding is not supported on the slaves).
2) Preferably only the "changes" are backed up, to make things lighter on the network (synchronization? incremental?).
3) All are running Windows 7 at the moment, because for now I'm using Navicat for MySQL's synchronization features. I would prefer a PHP-script-based solution so I can migrate things to *nix. I've read about replication and all that stuff, but I kinda wanted a ready solution - perhaps some software I could download or buy. I've no time to code my own sync/replication scripts; I just want to get over this remote-sync hurdle and move on with the project.
Regards to all
I've read about replication and all that stuff, but I kinda wanted a ready solution
But replication is a ready solution: you just type a few commands and change a little configuration.
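For reference, the "few commands" amount to roughly this (host, credentials, and log coordinates are placeholders; take the coordinates from SHOW MASTER STATUS on the master):

    # on the master, my.cnf - the "little configuration"
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    -- on the master: create a replication account
    CREATE USER 'repl'@'%' IDENTIFIED BY 'secret';
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

    -- on each slave: point it at the master and start replicating
    CHANGE MASTER TO
        MASTER_HOST='master.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='secret',
        MASTER_LOG_FILE='mysql-bin.000001',
        MASTER_LOG_POS=4;
    START SLAVE;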
We have a few servers deployed with different ISPs (Internet Service Providers).
There is real-time data that needs to be synchronized to these servers constantly, and I think MySQL Replication may be a good candidate for the job (we use MySQL on the servers).
I know replication works on an intranet, but I'm not sure whether it works across the more complicated network topology of the public internet and ISP subnets.
Some facts:
Needs to run as master-slaves: the master receives the data, and there are about ten slave DBs.
We don't care much about replication time lag; 5 minutes is fine.
There is not much data or many transactions to synchronize per hour.
We run a Java web application on each server.
It works fine. You usually want to either run it over a VPN or use SSL in the MySQL connection if it's going over the public internet.
If your write updates take more bandwidth than you have available, that will of course be a limitation, as the replication log contains essentially every byte used in INSERT, UPDATE, and REPLACE statements.
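For the SSL route, the slave can be told to encrypt its connection to the master; a sketch (host, credentials, and the CA path are placeholders):

    -- on the master: require SSL for the replication account
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' REQUIRE SSL;

    -- on each slave
    CHANGE MASTER TO
        MASTER_HOST='master.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='secret',
        MASTER_SSL=1,
        MASTER_SSL_CA='/etc/mysql/certs/ca.pem';
    START SLAVE;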
I have a MySQL database running on our server at this location.
However, the internet connection at this location is slow (especially when several users are connected remotely).
We also have a remote web server on a very fast internet connection.
Can I run another MySQL server on the remote server and still be able to run queries and updates on it?
I want to have two servers because
- Users at this location can connect via lan (fast)
- Users working remotely can connect to synced remote server (fast)
Is this possible? From what I understand replication does not work this way. What is replication used for then? Backups?
Thanks for your help!
[Edit]
After doing some more reading, I am a little worried about setting up multi-master replication, because I had not considered multi-master when designing the database, and conflicts could be an issue.
The good news, though, is that the most time-consuming operations are queries, not updates.
And, I found out that there is a driver that handles master-slave connections.
http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-replication-connection.html
That way writes will be sent to the master and reads can come from the faster connection.
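A sketch of how that driver gets used (the hostnames, credentials, and table are made up for illustration; see the linked Connector/J docs for details):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ReplicationExample {
        public static void main(String[] args) throws Exception {
            // First host is the master; any further hosts are slaves.
            String url = "jdbc:mysql:replication://master.lan:3306,slave.remote:3306/mydb";
            try (Connection conn = DriverManager.getConnection(url, "user", "pass")) {
                conn.setReadOnly(false); // statements are routed to the master
                try (Statement st = conn.createStatement()) {
                    st.executeUpdate("UPDATE example_table SET x = 1 WHERE id = 42");
                }

                conn.setReadOnly(true);  // statements are routed to a slave
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT x FROM example_table WHERE id = 42")) {
                    // Caveat: replication is asynchronous, so this read may not
                    // yet see the update above if the slave is lagging.
                }
            }
        }
    }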
Has anyone tried doing this before? My one concern is that if I update to the master, then run a query expecting to see the update on the slave, will it be there right away? Or will the slow connection make this solution just as slow as using the master for both read and write?
What you're asking about, I believe, is called Multi-Master Replication, in which both servers serve as replication masters for each other. Changes on either server are replicated to the other as soon as possible. MySQL can be configured to do this; however, I'm not sure how the difference in connection speed would affect your performance and data integrity.
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-replication-multi-master.html
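If you do go multi-master, the usual first step against the conflicts mentioned in your edit is to interleave the auto-increment ranges so the two servers never hand out the same key; a my.cnf sketch:

    # server A, my.cnf
    [mysqld]
    auto_increment_increment = 2
    auto_increment_offset    = 1   # generates 1, 3, 5, ...

    # server B, my.cnf
    [mysqld]
    auto_increment_increment = 2
    auto_increment_offset    = 2   # generates 2, 4, 6, ...

This only addresses key collisions; conflicting updates to the same row still need application-level care.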