I have never used a master/slave setup for my MySQL databases, so please forgive me if I make no sense here.
I am curious: let's say I want to have a master DB and three slave DBs. Would I need to write my database classes to connect to the master DB for adds/updates/deletes, or is this automated somehow?
Also, for my SELECT queries, would I need to write code that picks one of the slave servers at random?
What you want to use (and research) is MySQL Replication. It is handled completely independently of your code: you work with the database the same way whether there is 1 server or 100.
It sounds like you want to improve performance and balance load.
Yes, you need to send any destructive changes (INSERT/UPDATE/DELETE) to the master database; the slaves can only be used for reads. You also need to be careful not to write to the master and then immediately read from a slave, because the data may not have been replicated to the slave yet. Any read that must see a write you just made should still go to the master.
I wouldn't suggest just randomly selecting a slave. You could route reads by geographical region if the slaves are spread out, or, if you are running in a cluster, use a proxy to do the load balancing for you.
Here is some more info that may help:
http://agiletesting.blogspot.com/2009/04/mysql-load-balancing-and-read-write.html
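To make the write-to-master / read-from-slave split concrete, here is a minimal PHP/PDO sketch. The host names, credentials, and table are placeholders, and the "read your own write from the master" rule follows the replication-lag caveat above.

```php
<?php
// Minimal read/write split sketch (hypothetical hosts and credentials).
// Writes always go to the master; ordinary reads go to a slave.

$master = new PDO('mysql:host=master.db.example.com;dbname=app', 'app_user', 'secret');
$slave  = new PDO('mysql:host=slave1.db.example.com;dbname=app', 'app_user', 'secret');
$master->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$slave->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Destructive change: must hit the master.
$stmt = $master->prepare('INSERT INTO comments (user_id, body) VALUES (?, ?)');
$stmt->execute([42, 'Hello world']);
$newId = $master->lastInsertId();

// A read that must see the write we just made also goes to the master,
// because the slave may not have replicated the row yet.
$stmt = $master->prepare('SELECT * FROM comments WHERE id = ?');
$stmt->execute([$newId]);
$freshRow = $stmt->fetch(PDO::FETCH_ASSOC);

// An ordinary read where slight staleness is acceptable can use a slave.
$rows = $slave->query('SELECT id, body FROM comments ORDER BY id DESC LIMIT 10')
              ->fetchAll(PDO::FETCH_ASSOC);
```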
You should consider using mysqlnd_ms, PHP's replication and load-balancing plugin.
I think this is the better solution, especially for a production environment, since it is native to PHP, while MySQL Proxy is still an alpha release.
Useful links:
https://blog.engineyard.com/2014/easy-read-write-splitting-php-mysqlnd
http://pecl.php.net/package/mysqlnd_ms
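For reference, a rough sketch of how mysqlnd_ms is typically wired up. The section name "myapp", the hosts, and the file paths below are placeholders; check the PECL documentation for the exact options supported by your plugin version.

```php
<?php
// Sketch only: mysqlnd_ms routes statements based on a JSON config file
// referenced from php.ini, along the lines of:
//
//   ; php.ini
//   mysqlnd_ms.enable = 1
//   mysqlnd_ms.config_file = /etc/mysqlnd_ms.json
//
//   // /etc/mysqlnd_ms.json (hosts are placeholders)
//   {
//     "myapp": {
//       "master": { "master_0": { "host": "master.db.example.com" } },
//       "slave":  {
//         "slave_0": { "host": "slave1.db.example.com" },
//         "slave_1": { "host": "slave2.db.example.com" }
//       }
//     }
//   }
//
// In application code you connect using the config section name as the
// "host"; the plugin then sends writes to the master and SELECTs to slaves.
$mysqli = new mysqli('myapp', 'app_user', 'secret', 'app_db');

$mysqli->query("INSERT INTO comments (user_id, body) VALUES (42, 'hi')"); // routed to the master
$result = $mysqli->query('SELECT id, body FROM comments LIMIT 10');       // routed to a slave
```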
The master/slave setup should be handled automatically by the MySQL server, so you should not need any special code for this configuration.
I'm looking for the solution that best suits my requirements. I would like to use MySQL with many instances, so I need to be able to add as many master servers (each with its own slave servers) as might be needed in the future. There will also be sharding. Currently I've found that GCP doesn't allow you to add more than one master server to a running instance. If so, what can I do? I need to create three or more master servers and add slave servers to them. When a new row lands on one of the master servers, its three slaves should receive that row and stay synchronized, so that I can run a simple SELECT query on any of those slaves and get current data. I'm sorry for my English, I'm not a native speaker :)
What you are looking for is called a read replica. Using Cloud SQL for MySQL on Google Cloud will let you implement the setup you are describing and deploy multiple read replicas very quickly.
For the sharding part, you just need to deploy multiple masters, each with its own read replicas, and implement the code in your application logic to find the data in the right instance.
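A hedged sketch of the "find the data in the right instance" part: the shard map, modulo hashing, and host names below are illustrative assumptions, not something Cloud SQL gives you out of the box.

```php
<?php
// Hypothetical shard routing: pick the master (or one of its read replicas)
// that owns a given user. The shard map and hosts are made up for illustration.

$shards = [
    0 => ['master' => 'shard0-master.example.com', 'replicas' => ['shard0-r1.example.com']],
    1 => ['master' => 'shard1-master.example.com', 'replicas' => ['shard1-r1.example.com']],
    2 => ['master' => 'shard2-master.example.com', 'replicas' => ['shard2-r1.example.com']],
];

function shardFor(int $userId, array $shards): array
{
    // Simple modulo sharding; a real deployment might use a lookup table instead.
    return $shards[$userId % count($shards)];
}

function connectTo(string $host): PDO
{
    $pdo = new PDO("mysql:host=$host;dbname=app", 'app_user', 'secret');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    return $pdo;
}

$userId = 1234;
$shard  = shardFor($userId, $shards);

// Writes go to that shard's master.
$write = connectTo($shard['master']);
$write->prepare('UPDATE users SET last_seen = NOW() WHERE id = ?')->execute([$userId]);

// Reads can go to one of that shard's read replicas.
$read = connectTo($shard['replicas'][array_rand($shard['replicas'])]);
$stmt = $read->prepare('SELECT * FROM users WHERE id = ?');
$stmt->execute([$userId]);
$user = $stmt->fetch(PDO::FETCH_ASSOC);
```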
I am trying to build a website that uses a MySQL DB. What I am trying to do is make my database accessible to two servers, meaning that when server 1 is down, server 2 can still access the same database and the website keeps working normally. I've read about multi-master replication, but it does not seem to be what I need. Also, when using master/slave replication, what happens when the master server goes down? How can it be restored?
Thanks for your help.
I think the master/slave pattern is exactly what you're looking for. The master handles all the writes and the slaves handle all the reads. If you're cloud hosting with someone like Rackspace or AWS, they make it very easy to set up the data replication across each node. As for your last sub-question about what happens if the master goes down, I believe it is pretty straightforward to set up fallbacks for that too. There are likely several approaches, but at the most basic level I know you can set up multiple DB nodes (with a fallback algorithm) just like any other instance.
A final note: if it's your first time doing this, I highly recommend Rackspace, because their support is amazing and they make a huge effort when you start to explain all your options and help you pick the best strategy.
PS: rereading your question, it's a little unclear what you're trying to accomplish. You mention two servers accessing one DB, and you also talk about redundant setups for multiple DB instances. Those are really two separate issues. The former is trivially easy, because you can always point more than one server at a DB; as long as the credentials are right, it will work. The tricky part is keeping the data synced properly. If both are reading and writing the same tables, things are going to collide. That's where the master/slave pattern comes into play: all the writes go through the master, but anyone can read from any slave because the data gets replicated.
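To make the "fallback algorithm" idea concrete at the application level, here is a rough PHP sketch that simply tries a list of candidate hosts in order. The host names, timeout, and the assumption that the standby has already been promoted to accept writes are all placeholders; in practice, failover is usually handled by tooling (a proxy, orchestration scripts, or your cloud provider) rather than hand-rolled code.

```php
<?php
// Naive failover sketch: try each candidate host in order and use the
// first one that accepts a connection. Hosts and timeout are placeholders.

function connectWithFailover(array $hosts, string $db, string $user, string $pass): PDO
{
    $lastError = null;
    foreach ($hosts as $host) {
        try {
            return new PDO(
                "mysql:host=$host;dbname=$db",
                $user,
                $pass,
                [PDO::ATTR_TIMEOUT => 2, PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
            );
        } catch (PDOException $e) {
            $lastError = $e; // this node is down or unreachable, try the next one
        }
    }
    throw new RuntimeException('No database node reachable', 0, $lastError);
}

// Primary first, standby second. Note the standby must have been promoted to
// accept writes before this is safe for anything other than reads.
$db = connectWithFailover(
    ['db-primary.example.com', 'db-standby.example.com'],
    'app', 'app_user', 'secret'
);
```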
Two machines, each running MySQL, each synchronized to the other peer-to-peer. I do not want a master DB that gets replicated. Rather, I want two users to be able to work on the data offline (each running a MySQL server on his own machine) and then, when reconnected, synchronize with each other. Is there any way to do this with MySQL? Is there any other database I should be looking at that would accomplish this better than MySQL?
Two-way replication is provided by various database systems (e.g. SQL Server, Sybase, etc.), but there are always problems with such a setup.
For example, if the same row is updated at the same time on both databases, which update wins?
If your aim is to provide a highly available MySQL database, then there are better options than using replication. MySQL has a clustering solution (though I've not had much success with it), or you can use things like DRBD and Heartbeat to provide automatic failover with no loss of data.
If you mean synchronous writing back and forth, this would cause serious data consistency issues. I think you may be referring to MySQL replication, wherein a master server sends its updates to one or more slave database servers, which can be queried.
As for "Other Database Options" SQLServer supports a fairly advanced "replication" process for synchronizing the data between two or more db's. Looks like MySql has something like this as well though.
I've got two Rails applications. One is internal, and the second is an external client version.
In the client version I have a cut-down version of the database. So now I need to replicate my master MySQL DB, but not all of the data: only certain columns and certain tables.
How can I implement this?
If there is some Ruby tooling for this (a gem for working with replication in this way), that would be great.
Replication is typically something you do at the database layer; here is the documentation for MySQL replication:
http://dev.mysql.com/doc/refman/5.0/en/replication.html
That would typically replicate the entire database.
Another solution would be to have a job (perhaps written in Ruby) that runs a couple of times a day and copies the desired data.
Or perhaps you want to push data from the master to the slaves with as little delay as possible? Then you could add a hook on the save() method in ActiveRecord that pushes the changes to the slave DB.
I haven't looked into it, but perhaps this is something: http://www.rubyrep.org/
I am developing a social network. For DB load balancing, I want to use master-slave replication in MySQL.
Before I start working on replication, I want to know a few things:
1) How can we set up that replication?
2) What are the advantages and disadvantages of master-slave replication?
3) When we run SELECT queries, do we need to send the request to the master or to a slave manually, or does the master automatically forward it to a slave? I want to understand this.
4) Can we install the master and the slaves on one system (i.e. one machine)? Is that advisable?
Thanks in advance
For a MySQL master-slave replication setup, see:
http://ranjithonrails.wordpress.com/2012/07/21/mysql-master-slave-replication/
Firstly, read the MySQL Replication documentation. It's very useful and will answer a lot of questions you haven't even realized you will need to ask.
Handling replication in your application means you can distribute the SELECT statements: they don't need to be replicated and will return the same results no matter which server they hit. However, UPDATE, INSERT and DELETE statements must go to the master.
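As an illustration of handling that split in application code, here is a hedged sketch of a small wrapper that routes SELECT statements to a randomly chosen slave and everything else to the master. The class name, hosts, and credentials are made up for the example.

```php
<?php
// Sketch of statement-based routing: SELECTs go to a randomly chosen slave,
// everything else (INSERT/UPDATE/DELETE/DDL) goes to the master.
// Hosts and credentials are placeholders.

class RoutingDb
{
    private PDO $master;
    /** @var PDO[] */
    private array $slaves = [];

    public function __construct(string $masterHost, array $slaveHosts)
    {
        $this->master = $this->connect($masterHost);
        foreach ($slaveHosts as $host) {
            $this->slaves[] = $this->connect($host);
        }
    }

    public function run(string $sql, array $params = []): PDOStatement
    {
        $isRead = preg_match('/^\s*SELECT\b/i', $sql) === 1;
        $pdo = ($isRead && $this->slaves)
            ? $this->slaves[array_rand($this->slaves)]
            : $this->master;

        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt;
    }

    private function connect(string $host): PDO
    {
        return new PDO("mysql:host=$host;dbname=app", 'app_user', 'secret',
            [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
    }
}

$db = new RoutingDb('master.example.com', ['slave1.example.com', 'slave2.example.com']);
$db->run('INSERT INTO posts (title) VALUES (?)', ['Hello']);   // goes to the master
$posts = $db->run('SELECT * FROM posts LIMIT 10')->fetchAll(); // goes to a slave
```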
Remember that replication spreads the read load, but every server still carries the full write load. Depending on your read/write ratio, this might not be appropriate. (Check out LiveJournal's presentation about how they scaled; it's easy to find.)
Edit: Meant to reference LiveJournal, not Facebook. D'oh!