I am using the Octopus gem to handle database sharding in my application. I have a master and a slave: insert queries always hit the master and reads go to the slave.
But I am facing a weird issue: after inserting a record, when I try to fetch it right away, the record is not found. This is affecting my whole application.
I tried to work around this with the following code.
Model.using(:master).where(id: 250)
This forces the model to fetch the record from the master rather than from the slave. But if we have to add this everywhere in the application, there is no point in sharding at all.
Any solution for this?
Thanks in advance.
Welcome to the fun world of asynchronous replication.
Generally, when you write data to your master database, it is replicated asynchronously to the slaves, meaning it will arrive there at some later point in time. Unfortunately, you can't know when that will happen; the only thing typically guaranteed is the order of updates on the slave, not when they will be applied.
You'll usually try to keep the replication delay rather small, but you can't ignore it. When using asynchronous replication, you have to think critically about your data access strategies to avoid presenting unwanted stale data.
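With Octopus specifically, one such strategy is to pin only the write-plus-immediate-read path to the master instead of sprinkling using(:master) on every query. A minimal sketch, assuming :master matches the name in your Octopus config and Model stands in for your real model:

    # Only the code that must see its own write is pinned to the master;
    # everything else keeps reading from the slave as before.
    Octopus.using(:master) do
      record = Model.create!(name: "example")
      Model.find(record.id) # reads from the master, so the fresh row is visible
    end

Another common strategy is read-your-own-writes routing: after a user writes, send that user's reads to the master for a few seconds (longer than your typical replication lag) before falling back to the slave.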
Sharding and replication definitely don't come for free. Database systems try hard to provide strongly defined levels of atomicity via transactions, but due to the CAP theorem, things get more complicated (or sometimes impossible) once you introduce a distributed system.
There isn't a generally correct answer for this issue, as it isn't immediately clear which data can be stale and which can't. Think about your access patterns and choose the appropriate server for each one. Often, the simplest answer is to get rid of sharding and replication completely and simply use a bigger server.
Related
We are looking into options for our MySQL replication architecture. The relevant details of our current setup:
We manage several branches in different cities.
Every branch has the same database structure.
Every primary key in all tables is prefixed with a branch identifier.
We need a branch to keep working if it has a network outage, and it must sync with the main branch once the connection is restored.
Since there is no chance of a duplicate key in any table, I'm thinking of something like MySQL multi-master, or maybe Percona XtraDB Cluster or Tungsten, but I can't find documentation on what happens if a single node is isolated from the others, or what happens to the data it received once the connection is restored.
Is there any proven method that suits this kind of setup? I would appreciate any advice, thanks.
In the case of Tungsten, how it behaves depends on how you tell it to behave.
You seem to be describing a relaxed consistency model, but no off-the-shelf clustering solution is going to solve all your problems. Certainly, if you ensure that each record is only ever modified at its "home" database, you shouldn't run into many problems, but this model also requires you to replicate all the data to all of the locations. Bandwidth might not be an issue, and it does provide a good DR capability, but storage and scalability may become an issue.
If you do have a centralized, well-managed datacentre, then another approach would be to have each branch run off an asynchronous dual master (one located at the branch and one at the datacentre) and roll your own scripts for consolidating the dataset.
Multi-master replication would work, as long as you're not performing UPDATEs on the same data while the nodes are out of sync. In that case the changes are applied silently, so your data would become inconsistent.
If you're not doing that, I think it's the best solution, as MySQL takes care of the binary log positions and handles reconnections. Just be sure that the auto_increment_offset (and auto_increment_increment) settings are properly configured amongst the masters. Anyway, I've only tested this deployment with 2 master servers (7 years in production with few issues).
I have an app whose MySQL DB is a slave of another remote master DB, and I use memcache to cache some of the DB data.
My slave DB gets updated whenever there are updates on the master DB. So in my application I want to know when my local (slave) DB has been updated, so that I can invalidate the related cached data and display the fresh data received from the master.
Is there any way to run some program when the slave MySQL DB is updated? I would then filter the query and work out whether I need to clear the cache or not.
Thanks
First of all, you are looking for a solution similar to what Facebook did in their DB architecture (as I remember, they patched MySQL for this).
You can build your own solution based on one of these techniques:
Parse the replication log on the slave side and remove the cache entry when you see an update to the data in the log.
Load a UDF (user-defined function) for memcached and attach triggers to the tables of interest on the replica side inside MySQL (the trigger calls the UDF's delete function); see the sketch below.
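A minimal sketch of the second approach, assuming the memcached UDFs from the memcached_functions_mysql package (memc_servers_set, memc_delete) are installed, and assuming a hypothetical users table cached under keys of the form users:<id>. Note that with row-based replication, triggers do not fire on the replica, so this relies on statement-based replication:

    -- Register the memcached server for the UDFs (address is an assumption).
    SELECT memc_servers_set('127.0.0.1:11211');

    -- Invalidate the cached entry whenever the replicated row changes.
    DELIMITER //
    CREATE TRIGGER users_cache_invalidate
    AFTER UPDATE ON users
    FOR EACH ROW
    BEGIN
      SET @ignored := memc_delete(CONCAT('users:', NEW.id));
    END//
    DELIMITER ;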
Please note that this kind of configuration is complicated to support and maintain. If you can tolerate some stale data in the cache, maybe a small TTL will help you.
As Kirugan says, it's as simple as writing your own SQL parser, and ensuring that you also provide an indexed lookup keyed to the underlying data for anything you insert into the cache, then cross-reference the datasets for any DML you apply to the database. Of course, this will be a lot simpler if you create a simplified, abstract syntax to represent the DML, thereby losing the flexibility of SQL and, of course, having to re-implement any legacy code using your new syntax. Apart from fixing the existing code, it should only take a year or two to get this working right. Basing your syntax on MySQL's handler API rather than SQL will probably save a lot of pain later in the project.
Of course, if you need full cache consistency then you need to ensure that a logical transaction now spans all the relevant datacentres, which will have something of an adverse impact on your performance (certainly much slower than just referencing the master directly).
For a company like Facebook, with hundreds of thousands of servers and terabytes of data (and no requirement for cache consistency), such an approach to solving the problem leads to massive savings. If you only have 2 servers, a better solution would be to switch to multi-master replication, possibly add another database node, optimize the storage (e.g. switch to SSDs / add fast bcache), make sure you have session affinity from the application to the DBMS (but not sticky sessions), and spend some time tuning your DBMS, particularly its cache performance.
We have a separate RDS instance to handle session state tables; however, we found that the session DB load is very low. If we could convert the instance handling sessions into a read replica of the main DB, then we could use it for read-only tasks that are safe even with a large lag in the copy.
Has anyone done something like this on RDS (is it possible and safe)? Should I watch out for any serious side effects? Any links or help in understanding this better would be appreciated.
http://aws.amazon.com/rds/faqs/#95 attempts to answer the question, but I am looking for more insights.
Yes, it is possible. I am using it successfully on RDS, for a specific local-cache use case.
You need to set the read_only parameter on your replica to 0. I had to reboot my server in order for that parameter to take effect.
It's going to work nicely if you use different table names, as RDS doesn't allow you to set the replicate-ignore-table parameter.
Remember there mustn't be any data collisions between master and slave. If there is a statement that works fine on the MASTER but fails on the SLAVE, you've just broken your replication. That might happen, for example, when you create a table on the SLAVE first and then, some time later, add that same table on the MASTER: the CREATE statement will run cleanly on the MASTER but fail on the SLAVE, as the table already exists.
Summing up, you need to be really careful when allowing your application to write to the SLAVE. If you forget, or make a mistake, and start writing some of your other data to the read replica, you might end up losing data or hitting hard-to-debug issues.
There's not a lot to add -- the only normal scenario that really makes sense on a pure read replica is something like adding a few indexes if it's used primarily for reporting or something else read-intensive.
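To make that "safe" case concrete, a purely additive change such as a reporting-only index is the typical example (a sketch; the table and index names are made up, and even this is only safe as long as the master never replicates a conflicting DDL for that table):

    -- run on the read replica only (requires read_only to be off, as discussed above)
    CREATE INDEX idx_orders_created_at ON orders (created_at);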
If you're trying to pre-calculate a lot of data and otherwise modify what's on the read replica you need to be really careful you're not changing data -- if the read is no longer consistent then you're in trouble :)
If you're curious about what happens if you change data on the slave and the master tries to update it, you're already heading down the wrong path IMHO.
TL;DR Don't do it unless you really know what you're doing and you understand all the ramifications.
And bluntly, MySQL replication can be quirky in my experience, so even knowing what is supposed to happen, what actually happens when the master tries to write updated data to a slave row you've also updated... who knows.
I currently have a MySQL dual master replication (A<->B) set up and everything seems to be running swimmingly. I drew on the basic ideas from here and here.
Server A is my web server (a VPS). User interaction with the application leads to updates to several fields in table X (which are replicated to server B). Server B is the heavy-lifter, where all the big calculations are done. A cron job on server B regularly adds rows to table X (which are replicated to server A).
So server A can update (but never add) rows, and server B can add rows. Server B can also update fields in X, but only after the user no longer has the ability to update that row.
What kinds of potential disasters can I expect with this scenario if I go to production with it? Or does this seem OK? I'm asking mostly because I'm ignorant about whether any simultaneous operation on the table (from either the A copy or the B copy) can cause problems or if it's just operations on the same row that get hairy.
Dual master replication is messy if you attempt to write to the same database on both masters.
One of the biggest points of contention (and high blood pressure) is the use of autoincrement keys.
As long as you remember to set auto_increment_increment and auto_increment_offset, you can look up any data you want and retrieve auto-incremented ids.
You just have to remember this rule: If you read an id from serverX, you must lookup needed data from serverX using the same id.
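For reference, the usual two-master arrangement of those settings looks like this (a sketch; the values are illustrative):

    # my.cnf on serverA
    auto_increment_increment = 2
    auto_increment_offset    = 1

    # my.cnf on serverB
    auto_increment_increment = 2
    auto_increment_offset    = 2

With this, serverA hands out ids 1, 3, 5, ... and serverB hands out 2, 4, 6, ..., so the two masters never generate the same auto-increment value.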
Here is one saving grace for using dual master replication.
Suppose you have
two databases (db1 and db2)
two DB servers (serverA and serverB)
If you impose the following restrictions
all writes for db1 go to serverA
all writes for db2 go to serverB
then you are not required to set auto_increment_increment and auto_increment_offset.
I hope my answer clarifies the good, the bad, and the ugly of using dual master replication.
Here is a pictorial example of 4 masters using auto increment settings
Nice article from Percona on this subject
Master-master replication can be very tricky; are you sure this is the best solution for you? Usually it is used for load-balancing purposes (e.g. round-robin connections to your DB servers) and sometimes to avoid the effect of replication lag. A big known issue is the auto_increment problem, which is supposedly solved by using different offset and increment values.
I think you should change your configuration to a simple master-slave setup by making A the master and B the slave, unless I am mistaken about the requirements of your system.
I think you can depend on Percona XtraDB Cluster (Feature 2: Multi-Master replication) rather than regular MySQL replication.
They promise the following:
By Multi-Master I mean the ability to write to any node in your cluster and not worry that you will eventually end up in an out-of-sync situation, as regularly happens with regular MySQL replication if you imprudently write to the wrong server.
With Cluster you can write to any node, and the Cluster guarantees consistency of writes. That is, the write is either committed on all nodes or not committed at all.
There are two important consequences of the multi-master architecture.
First: we can have several appliers working in parallel. This gives us true parallel replication. A slave can have many parallel threads, and you can tune this with the variable wsrep_slave_threads.
Second: there might be a small period of time when the slave is out of sync with the master. This happens because the master may apply events faster than a slave, so if you read from the slave, you may read data that has not changed yet (you can see that in the diagram in the linked post). However, you can change this behavior with the variable wsrep_causal_reads=ON; in that case, the read on the slave will wait until the event is applied (which, however, will increase the response time of the read). This gap between slave and master is the reason why this replication is called "virtually synchronous replication", not true "synchronous replication".
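For reference, both knobs mentioned above are ordinary dynamic server variables on a Percona XtraDB Cluster / Galera node; the values below are only illustrative, and newer releases replace wsrep_causal_reads with wsrep_sync_wait:

    SET GLOBAL wsrep_slave_threads = 8;   -- number of parallel applier threads
    SET GLOBAL wsrep_causal_reads  = ON;  -- reads wait until this node has caught up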
The described behavior of COMMIT also has a second serious implication.
If you run write transactions on two different nodes, the cluster will use an optimistic locking model.
That means a transaction will not check for possible locking conflicts during individual queries, but rather at the COMMIT stage, and you may get an ERROR response on COMMIT. I am highlighting this, as it is one of the incompatibilities with regular InnoDB that you may experience. In InnoDB, DEADLOCK and LOCK TIMEOUT errors usually happen in response to a particular query, not on COMMIT. If you follow good practice you check the error code after the COMMIT query anyway, but I have seen many applications that do not do that.
So, if you plan to use the Multi-Master capabilities of XtraDB Cluster and run write transactions on several nodes, you may need to make sure you handle the response to the COMMIT query.
You can find it here along with a pictorial explanation.
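For completeness, a rough sketch of what handling the COMMIT response can look like from a Ruby/ActiveRecord application (the retry count, the backoff, and the Order model are made up for illustration; older Rails versions surface the conflict as ActiveRecord::StatementInvalid rather than ActiveRecord::Deadlocked):

    # Re-run the whole transaction when the cluster rejects it at COMMIT time.
    def with_commit_retry(attempts = 3)
      attempts.times do |i|
        begin
          return ActiveRecord::Base.transaction { yield }
        rescue ActiveRecord::Deadlocked
          # Galera reports certification conflicts as deadlocks, possibly at COMMIT.
          raise if i == attempts - 1
          sleep(0.05 * (i + 1)) # small backoff before retrying
        end
      end
    end

    # Usage: the entire transaction is retried, not just the failing statement.
    with_commit_retry do
      Order.create!(status: "new")
    end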
From my rather extensive experience on this topic I can say you will regret writing to more than one master someday. It may be soon, it may not be for a long time, but it will happen. You will have two servers that each have some correct data and some wrong data, and you will either pick one as the authoritative source and throw the other away (probably without really knowing what you're throwing away) or you'll reconcile the two. No matter how you design it, you cannot eliminate the possibility of this happening, so it's a mathematical certainty that it will happen someday.
Percona (my employer) has handled probably several hundred cases of recovery after doing what you're attempting. Some of them take hours, some take weeks, one I helped with took a few months -- and that's with excellent tools to help.
Use a different replication technology or find a different way to do what you want to do. MMM will not help -- it will bring catastrophe sooner. You cannot do this with standard MySQL replication, with or without external tools. You need a replacement replication technology such as Continuent Tungsten or Percona XtraDB Cluster.
It's often easier to just solve the real need in some other fashion and give up multi-master writes, if you want to use vanilla MySQL replication.
And thanks for sharing my master-master MySQL cluster article. As Rolando clarified, this configuration is not suitable for most production environments due to the limitations of auto-increment support.
The most adequate way to get a MySQL cluster is using NDB, which requires at least 4 servers (2 management and 2 data nodes).
I have written a detailed article to get this running on two servers only, which is very similar to my previous article but using NDB instead.
http://www.hbyconsultancy.com/blog/mysql-cluster-ndb-up-and-running-7-4-and-6-3-on-ubuntu-server-trusty-14-04.html
Note that I always recommend analysing your needs and finding the most adequate solution; don't just look at the available solutions and try to figure out whether they fit your needs or not.
-Hatem
I would highly recommend looking into a tool that will manage this for you. Multi-master replication can be very troublesome if things go wrong.
I would suggest something like Percona XtraDB Cluster. I've been following this project, and it looks very cool. I definitely think it will be a game changer in the MySQL world. It's still in beta though.
We're looking at potentially setting up replication for our primary MySQL database, and while setting up the replication itself seems pretty straightforward, the application implementation seems a bit murkier.
My first idea would be to set up a master-slave configuration with RW-splitting: all write queries (CREATE, INSERT, UPDATE) going to the master and all read queries (SELECT) going to the slave. Having read up on it, it seems there are essentially two options for how to implement this with our app:
Using an independent middleware layer for all MySQL connections, such as MySQL proxy or DBSlayer. However, the former is in Alpha and the latter has limited documentation.
Using a Ruby-based gem/plugin, such as Octopus to achieve RW-splitting in the framework.
If we wanted to go with a master-slave setup, what would you recommend moving forward?
The other thought I've had was to use a master-master configuration, but I am unsure about the implementation of such a setup.
Thoughts?
Generally, you should do your R/W splitting in the framework, because only the framework can understand the context. In PHP I do this by maintaining two connections, one for writes and one for reads, and deciding explicitly in the code which one I want. The reason is that it's not as simple as splitting by query type. For example, if you start a transaction on the write connection, you want all the reads inside it to go through that connection as well; otherwise they will be outside the transaction and will probably see old data or get hung up on locks.
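For the Ruby stack in the question, the same two-connection idea looks roughly like this (a sketch using the mysql2 gem; the host names, credentials, and class are placeholders):

    require "mysql2"

    # All writes (and any reads inside a transaction) use the master connection;
    # plain reads use the replica.
    class SplitDB
      def initialize
        @writer = Mysql2::Client.new(host: "master.db.internal",  username: "app", database: "app")
        @reader = Mysql2::Client.new(host: "replica.db.internal", username: "app", database: "app")
      end

      def read(sql)
        @reader.query(sql)
      end

      def write(sql)
        @writer.query(sql)
      end

      # Everything inside a transaction, including SELECTs, must go through the
      # writer so it sees the transaction's own changes and locks.
      def transaction
        @writer.query("BEGIN")
        result = yield @writer
        @writer.query("COMMIT")
        result
      rescue StandardError
        @writer.query("ROLLBACK")
        raise
      end
    end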
Unless your workload is really read-heavy, replication is not a scaling solution, as replication lag will cause you to get out-of-date results. Master-master is not that special: it's just two instances of master-slave, but you should not make the mistake of trying to write to both masters, as you're asking for split-brain nightmares.
The config I really like is using MMM with a master-master pair. It makes failover and redundancy really easy and transparent to applications, and it works beautifully.