I've built an MMORPG that uses a MySQL database to store player-related data when the user logs off.
We built in an auto-save timer so that the data of every logged-in user is saved to the database every 3 hours.
In doing so we noticed a fatal flaw...
Because all of our database transactions are sent to a single DB thread, that thread can become backlogged with requests, which produces a login/saving issue. When this happens, players are unable to log in, since the login process requires the DB thread to confirm their credentials. Similarly, all save requests are queued at the back of the DB thread's schedule. This produces a backlog of requests...
The only solution that I can think of for this is to introduce multiple threads and have 3-4 threads interacting with the database.
However, this opens up a new issue. Since DB requests are now distributed across multiple threads, one thread can receive a save request for a player while another DB thread receives a save request for the same player.
For example....
PlayerA Logs In to the game
3 hours pass and the auto-save happens; PlayerA's data will now be saved.
PlayerA kills a monster and gains experience.
PlayerA logs off, which adds a save request to a DB thread.
Now we have two different save requests queued for the database. If they are assigned to two different DB threads, the user's data could be saved in the wrong order... For example, the thread handling PlayerA's logout save might run first, and the auto-save for PlayerA might then run after it on a separate thread... This would cause loss of data (in this case, the experience).
How do other MMORPGs handle something like this?
You need a database connection pool, if you're not using one already, and make sure you're not locking more data than you need. If you are saving how much gold a player has, you don't need to lock the table holding the credentials.
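A minimal sketch of such a pool, assuming the connection_pool and mysql2 gems (host, credentials, and pool size are illustrative):

require "connection_pool"
require "mysql2"

# A fixed-size pool of MySQL connections shared by all worker threads.
DB_POOL = ConnectionPool.new(size: 4, timeout: 5) do
  Mysql2::Client.new(host: "localhost", username: "game", database: "mmo")
end

# Any thread checks a connection out only for the duration of the block.
DB_POOL.with do |conn|
  conn.query("SELECT 1")
end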
Keeping the order of events in a multi-threaded scenario is not a trivial problem. I suggest using a message queue, with a single producer and a single consumer per player. This link shows 2 strategies for keeping the order.
A queue is actually important for another reason: if a save request fails, it stays in the queue to be retried later. When dealing with players' money and items, you probably want this.
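A minimal sketch of that idea in Ruby, assuming one worker thread per online player (the class and method names are mine, not from the post):

# One FIFO queue and one consumer thread per player, so that player's
# saves are always applied in the order they were produced.
class PlayerSaveQueue
  def initialize(player_id)
    @player_id = player_id
    @queue = Queue.new                # thread-safe FIFO from the stdlib
    @worker = Thread.new { run }
  end

  # Producers (auto-save timer, logout handler) only ever push here.
  def enqueue_save(payload)
    @queue << payload
  end

  private

  def run
    loop do
      payload = @queue.pop            # blocks until a save request arrives
      begin
        persist(payload)
      rescue StandardError
        @queue << payload             # naive retry: requeue the failed save
        sleep 1
      end
    end
  end

  def persist(payload)
    # Write to MySQL here. With a single consumer per player, this
    # player's writes can never be reordered relative to each other.
  end
end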
Your auto-save is deterministic, meaning that you know exactly when the last one occurred and when the next one will occur. I would use that somehow, along with the previously suggested idea of adding a timestamp. Actually, it might be better to make each update represent only an increment/decrement together with a timestamp, and to calculate the experience on request (maybe caching it then).
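A hedged sketch of the increment idea, using the mysql2 gem; the table and column names are assumptions. Because independent deltas commute, two save requests arriving out of order can no longer overwrite each other's experience:

require "mysql2"

# Record a gain/loss as its own row instead of overwriting a total.
def record_xp_gain(mysql, player_id, delta)
  mysql.prepare("INSERT INTO xp_events (player_id, delta, happened_at) VALUES (?, ?, ?)")
       .execute(player_id, delta, Time.now)
end

# Compute the total on request (and cache the result if reads are hot).
def current_xp(mysql, player_id)
  mysql.prepare("SELECT COALESCE(SUM(delta), 0) AS xp FROM xp_events WHERE player_id = ?")
       .execute(player_id)
       .first["xp"]
end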
To avoid this problem in all cases you must not allow users to continue doing stuff before their last database transaction has been successfully committed. Of course that means that the DB has to be very fast -- if it can't keep the request queue below a couple of seconds worth of transactions at most, you simply have to make it faster. More RAM cache, SSDs, the usual MySQL optimization dance. Adding extra logic in the form of triggers etc. isn't going to help in the long run, especially because they can become really complicated in the case of inventories and the like.
If on average the system is fast enough but struggles during peaks, like when everybody logs in during lunch break, adding something like Redis as a fast cache might help. You'd load the data into Redis when a user logs on (or when they first need a certain piece of data), remove it when they log off or when it expires, and write changes back to the relational DB as fast as it can keep up.
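A minimal sketch of that flow, assuming the redis and mysql2 gems (key names and schema are illustrative):

require "redis"
require "mysql2"

# On login: pull the player's data out of MySQL into Redis.
def on_login(redis, mysql, player_id)
  row = mysql.prepare("SELECT data FROM players WHERE id = ?").execute(player_id).first
  redis.set("player:#{player_id}", row["data"], ex: 24 * 3600)  # TTL as a safety net
end

# On logoff: write the latest state back to MySQL and drop the cache entry.
def on_logoff(redis, mysql, player_id)
  data = redis.get("player:#{player_id}")
  mysql.prepare("UPDATE players SET data = ? WHERE id = ?").execute(data, player_id)
  redis.del("player:#{player_id}")
end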
Recently we added functionality to our RoR application which allows users to open a particular record in, say, their own individual tabs. After doing so, we started seeing frequent ActiveRecord::StaleObjectError exceptions. On investigating the issue I found that Rails is indeed trying to update the session store first whenever a resource is opened in a tab, and that is where the exception is raised.
We have lock_version in our Active Record session store, so Rails applies optimistic locking to it by default. Is there any way we could solve this issue without introducing much complexity, given that the application is already live on the client's machine, and without affecting any session data we've stored in our session store DB?
Any suggestions would be much appreciated. Thanks
It sounds like you're using optimistic locking on a db session record and updating the session record when you process an update to other records. Not sure what you'd need to update in the session, but if you're worried about possibly conflicting updates to the session object (and need the locking) then these errors might be desired.
If you don't, you can refresh the session object before saving it (or disable its optimistic locking) to avoid this error for these session updates.
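A minimal sketch of the refresh-before-save idea, assuming an Active Record-backed session record (the attempt cap is arbitrary):

attempts = 0
begin
  session_record.save!
rescue ActiveRecord::StaleObjectError
  session_record.reload                 # pick up the latest lock_version
  (attempts += 1) < 3 ? retry : raise   # avoid retrying forever
end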
You also might look into what about the session is being updated and whether it's strictly necessary. If you're updating something like "last_active_on" then you might be better off sending off a background job to do this and/or using the update_column method which bypasses the rather heavyweight activerecord save callback chain.
--- UPDATE ---
Pattern: Putting side-effects in background jobs
There are several common Rails patterns that start to break down as your app usage grows. One of the most common that I've run into is when a controller endpoint for a specific record also updates a common/shared record (for example, if creating a 'message' also updates the messages_count for a user using counter cache, or updates a last_active_at on a session). These patterns create bottlenecks in your application as multiple different types of requests across your application will compete for write locks on the same database rows unnecessarily.
These tend to creep into your app over time and become hard to refactor later. I'd recommend always handling side-effects of a request in an asynchronous job (using something like Sidekiq). Something like:
class Message < ActiveRecord::Base
  after_commit :enqueue_update_messages_count_job

  def enqueue_update_messages_count_job
    Jobs::UpdateUserMessageCountJob.enqueue(self.id)
  end
end
While this may seem like overkill at first, it creates an architecture that is significantly more scalable. If counting the messages becomes slow... that will make the job slower but won't impact the usability of the product. In addition, if certain activities create lots of objects with the same side-effects (let's say you have a "signup" controller that creates a bunch of objects for a user, all of which trigger an update of user.updated_at), it becomes easy to throw out duplicate jobs and avoid updating the same field 20 times.
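For reference, a hedged sketch of what the job side might look like with Sidekiq; the Jobs::UpdateUserMessageCountJob.enqueue API above is custom, so this Sidekiq version is an assumption:

class UpdateUserMessageCountJob
  include Sidekiq::Job   # Sidekiq::Worker on versions before 6.3

  def perform(message_id)
    user = Message.find(message_id).user
    # Recount and write the column directly; see the next pattern for why
    # update_column is used here instead of save.
    user.update_column(:messages_count, user.messages.count)
  end
end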
Pattern: Skipping the activerecord callback chain
Calling save on an ActiveRecord object runs validations and all the before and after callbacks. These can be slow and (at times) unnecessary. For example, updating a cached message_count value doesn't care whether the user's email address is valid (or about any other validations), and you may not care about other callbacks running either. The same goes for just touching a user's updated_at value to clear a cache. You can bypass the ActiveRecord callback chain by calling user.update_column(:message_count, ..), which writes that field directly to the database (note that update_attribute skips validations but still runs the callbacks). In theory this shouldn't be necessary in a well-designed application, but in practice some larger/legacy codebases make significant use of the ActiveRecord callback chain for business logic that you may not want to invoke.
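For instance (illustrative values):

user.update(message_count: 42)          # runs validations + the full callback chain
user.update_column(:message_count, 42)  # a single UPDATE, no validations or callbacks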
--- Update #2 ---
On Deadlocks
One reason to avoid updating (or generally locking) a common/shared object from a concurrent request is that it can introduce Deadlock errors.
Generally speaking, a "deadlock" in a database occurs when two processes each need a lock that the other one holds. Neither can continue, so one must error out instead. In practice, detecting this is hard, so some databases (like Postgres) simply throw a deadlock error after a thread has waited for an exclusive/write lock for a configured amount of time. While contention for locks is common (e.g., two updates that both touch a 'session' object), a true deadlock is rarer (thread A holds a lock on the session that thread B needs, while thread B holds a lock on a different object that thread A needs), so you may be able to partially address the problem by examining or extending your deadlock timeout. While this may reduce the errors, it doesn't fix the fact that threads may be waiting for up to the deadlock timeout. An alternative approach is to use a short deadlock timeout and rescue/retry a few times.
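A minimal sketch of that rescue/retry approach, assuming Rails 5.1+ where ActiveRecord::Deadlocked exists (the update and the attempt cap are illustrative):

attempts = 0
begin
  Session.transaction do
    session.update!(last_active_at: Time.current)
  end
rescue ActiveRecord::Deadlocked
  (attempts += 1) < 3 ? retry : raise
end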
Our mobile app tracks user events (events can have many types).
Each mobile client reports user events and can later retrieve them.
I thought of writing to both Redis and MySQL.
When a user requests data:
1. Look in Redis.
2. If it's not in Redis, look in MySQL.
3. Return the value.
4. Update Redis if the value wasn't there.
5. Set an expiry policy on each Redis key to avoid running out of memory.
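A minimal sketch of that read-through flow, assuming the redis and mysql2 gems (table and column names are illustrative):

require "redis"
require "mysql2"

def fetch_event(redis, mysql, key)
  value = redis.get(key)                     # 1. look in Redis first
  return value if value                      # 3. cache hit: return it

  stmt = mysql.prepare("SELECT payload FROM events WHERE event_key = ?")
  row  = stmt.execute(key).first             # 2. miss: fall back to MySQL
  return nil unless row

  redis.set(key, row["payload"], ex: 3600)   # 4 + 5. backfill with a TTL
  row["payload"]
end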
Problem:
1. Reads: if many users at once request information that isn't in Redis, MySQL is going to be overloaded with reads (latency).
2. Writes: I'm going to have lots of writes into MySQL, since every event is written to both datasources.
Facts:
1. Expecting 10M concurrent users, all reading and writing.
2. Need to serve each request with a maximum latency of one second.
3. Expecting a couple of thousand requests per second.
Any solutions for giving that kind of mechanism good QoS? And is this, in any way, a Lambda-architecture solution?
Thank you.
Sorry, but complex issues like this rarely have a ready answer here; there are too many unknowns, such as your budget and how much hardware you have. With 10 million clients using your service concurrently, your question is about hardware as much as software.
There is also no word about several important requirements:
What is more important - consistency vs availability?
What is the read/write ratio?
Read/write ratio
If you have 10,000,000 concurrent users, that is a problem in itself. But if the load is mostly reads, it's not as terrible as it may seem. In that case you should take care to have the right indexes in MySQL, and buy servers with lots of RAM, enough to keep at least the index data in memory. A single server can then handle 3,000-5,000 concurrent SELECT queries without any problem meeting the one-second latency requirement (one of our statistics projects sustained up to 7,000 SELECTs per second per server on 4-year-old ordinary hardware).
If the load is mostly writes, everything becomes more complicated, and consistency becomes the main question.
Consistency vs availability
If consistency is important, go shopping for new servers with SSD drives and modern CPUs, and don't forget to buy as much RAM as possible. Why? With a heavy write load, your SQL server has to update its indexes on every write, and you can't simply drop the indexes, because your reads would then blow the latency requirement. By consistency I mean: if you write something, the write completes within 1 second, and if you read that data right after the write, you get the newly written information within 1 second.
Your problem 1:
"Reads: if many users at once request information that isn't in Redis, MySQL is going to be overloaded with reads (latency)."
This is the well-known "cache miss" problem, and it has only a few solutions: horizontal scaling (buy more hardware) or precaching. Precaching in this case may be done in at least 3 ways:
1. Use a non-blocking read and wait up to one second for the data to be queried from the SQL server; if it doesn't arrive in time, return the data from Redis. Update Redis immediately or via a queue, as you prefer.
2. Use a blocking/non-blocking read and return the data from Redis as fast as possible, but for every read that misses, push a job onto a queue to refresh the cached data in Redis (you may also tell the app to re-query after some time).
3. Always read/write from Redis, but for every write request register a job in a queue to update the data in SQL.
Each of them is a compromise:
1. High availability, but consistency suffers; Redis is an LRU cache.
2. High availability, but consistency suffers; Redis is an LRU cache.
3. High availability and consistency, but requires a lot of RAM for Redis.
Your problem 2:
"Writes: I'm going to have lots of writes into MySQL, since every event is written to both datasources."
This is a field of compromise again. Handling lots of writes comes down to hardware: buy more of it, or use queues for pending writes. So it's availability vs consistency again.
Event tracking usually means you can return data close to real time, but not in real time. For example, you can allow a 1-10 second latency for updating the data on disk (MySQL) while keeping the 1-second latency for serving read/write requests.
So, it's a combination of techniques 1/2/3 (or others) for data processing:
1. Use LRU eviction in Redis rather than per-key expiry; lots of expiring keys is a problem in itself, so you can't rely on expiry to keep RAM under control.
2. Use a queue to warm up missing keys in Redis.
3. Use a queue to write data from the Redis server into the MySQL server.
4. Use additional requests from the client side to update data when a cache-miss situation occurs.
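A hedged sketch of points 2 and 3 combined: writes land in Redis first, and a queue (here simply a Redis list named "events:pending", an illustrative choice) feeds a worker that flushes to MySQL at its own pace:

require "redis"
require "mysql2"
require "json"

# App side: record the event in Redis and enqueue the write-behind job.
def record_event(redis, key, payload)
  redis.set(key, payload.to_json)            # reads are served from Redis
  redis.rpush("events:pending", { key: key, payload: payload }.to_json)
end

# Worker side: drain the queue into MySQL, decoupled from request latency.
def drain(redis, mysql)
  stmt = mysql.prepare("INSERT INTO events (event_key, payload) VALUES (?, ?)")
  while (raw = redis.lpop("events:pending"))
    job = JSON.parse(raw)
    stmt.execute(job["key"], job["payload"].to_json)
  end
end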
On the customizable front page of our web site, we offer users the option of displaying modules that show recently updated content, choosing from well over 100 modules.
All of the data is generated by MySQL queries, the results of which are cached via memcached. Our current system works like this: when a user loads a page containing modules, they are immediately served the data from cache, and the query is added to a queue to be updated by a separate Gearman process (so that the page load does not wait for the MySQL query). That query is then run once every 15 minutes to refresh the data in cache. The queue of queries itself is periodically purged so that we do not continually refresh data that has not been requested recently.
The problem is what to do when the cache is empty, for some reason. This doesn't happen often, but when it does, the user is currently shown an empty module, and the data is refreshed in the gearman process so that a bit later, when the same (or a different) user reloads the page, there is data to show.
Our traffic is such that, if we were to try to run the query live for the user when the cache is empty, we would have a serious problem with stampeding--we'd be running the same (possibly slow) query many times as many users loaded the page. Is there any way to solve the "blank module" problem without opening up the risk of stampeding?
This is an interesting implementation, though it varies a bit from the way memcached is most typically deployed in front of MySQL.
In most cases, queries are first evaluated against memcached to see if there is an available entry. If so, the entry is served from memcached and the database is never queried at all. If there is a cache miss, the query is made against the database, the results are added to memcached, and the information is returned to the caller. This is how you typically build up your cache for read queries.
In cases where data is being updated, the update would be made against the database, and then the appropriate data in memcached invalidated and/or updated. Similarly for inserts, you could either do nothing regarding the cache (and let the next read on that record populate the cache), or you could actively add the data related to the insert into the cache, depending on your application needs.
In this way you wouldn't need to take the extra step of calling the database to get authoritative data after getting initial data from memcached. The data in memcached would be a copy of the authoritative data which is just updated/invalidated upon updates/inserts.
Based on your comments, one thing you might want to try in order to prevent a flood of queries against your database on cache misses is to use a mutex of sorts. For example, when the first client hits memcached and gets a cache miss for a lookup, it could insert a temporary value into memcached indicating that the data is pending, then make the query against the database, and then update the memcached entry with the result.
On the client side, when you get a cache miss or a "pending" result, you could simply retry the cache after a certain period of time (which you may want to increase exponentially). Perhaps wait 1 second at first, then try again in 2 seconds if you still get a "pending" result, then retry in 4 seconds, and so on.
This may mean more requests against the memcached server, but it should resolve the problems at the database layer.
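A minimal sketch of the pending-marker mutex, assuming the dalli gem for memcached; run_slow_query stands in for the real MySQL call:

require "dalli"

def run_slow_query(key)
  "fresh data for #{key}"   # stand-in for the real (possibly slow) query
end

def fetch_module(cache, key)
  value = cache.get(key)
  return value unless value.nil? || value == "PENDING"

  # add is atomic: only the first client after a miss wins the right to query.
  if value.nil? && cache.add(key, "PENDING", 60)
    result = run_slow_query(key)
    cache.set(key, result)
    return result
  end

  :pending   # someone else is refreshing; the caller retries with backoff
end

cache = Dalli::Client.new("localhost:11211")
fetch_module(cache, "module:recent_posts")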
We have a service that sees several hundred simultaneous connections throughout the day, peaking at about 2,000, for about 3 million hits a day, and growing. With each request I need to log 4 or 5 pieces of data to MySQL. We originally used the logging that came with the app we were using; however, it was terribly inefficient, running our DB server at over 3x the average CPU load and eventually bringing the server to its knees.
At this point we are going to add our own logging to the application (PHP). The only option I have for logging the data is the MySQL DB, as this is the only common resource available to all of the HTTP servers. The traffic will be mostly writes; however, every day we generate reports based on the data, then crunch and archive the old data.
What recommendations can be made to ensure that I don't take down our services with logging data?
The solution we adopted for this problem was to create an archive table, then regularly (every 15 minutes, on an app server) crunch the data and move it into the tables used to generate reports. The archive table, of course, has no indices; the tables the reports are generated from have several.
Some stats on this approach:
Short Version: >360 times faster
Long Version:
The original code/model did direct inserts into the indexed table, and the average insert took 0.036 seconds; using the new code/model, inserts took less than 0.0001 seconds (I could not get an accurate fix on a single insert time, so I measured 100,000 inserts and averaged). The post-processing (crunch) took an average of 12 seconds for several tens of thousands of records. Overall we were greatly pleased with this approach, and so far it has worked incredibly well for us.
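A hedged sketch of the crunch step; the original was PHP, so this Ruby version with the mysql2 gem and made-up table names is only illustrative:

require "mysql2"

mysql = Mysql2::Client.new(host: "localhost", username: "app", database: "logs")

# Aggregate the index-free archive table into the indexed reporting table,
# then clear what was processed, all in one transaction. (A production
# version would bound the crunch by a max id so new rows aren't raced.)
mysql.query("START TRANSACTION")
mysql.query(<<~SQL)
  INSERT INTO report_hits (day, hits)
  SELECT DATE(logged_at), COUNT(*) FROM log_archive GROUP BY DATE(logged_at)
  ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits)
SQL
mysql.query("DELETE FROM log_archive")
mysql.query("COMMIT")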
Based on what you describe, I recommend you leverage the fact that you don't need to read this data immediately and pursue a "periodic bulk commit" route. That is, buffer the logging data in RAM on the app servers and do periodic bulk commits. If you have multiple application nodes, some sort of randomized approach helps even more (e.g., commit updated info every 5 +/- 2 minutes).
The main drawback with this approach is that if an app server fails, you lose the buffered data. However, that's only bad if (a) you absolutely need all of the data and (b) your app servers crash regularly. Small chance that both are true, but in the event they are, you can simply persist your buffer to local disk (temporarily) on an app server if that's really a concern.
The main idea is:
buffering the data
periodic bulk commits (leveraging some sort of randomization in a distributed system would help)
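A minimal sketch of such a buffer, assuming the mysql2 gem; the table, columns, and flush interval are illustrative:

require "mysql2"

class LogBuffer
  # ~5 minutes +/- 2, randomized per app node to spread the commits out.
  FLUSH_INTERVAL = 300 + rand(-120..120)

  def initialize(mysql)
    @mysql = mysql
    @rows = []
    @mutex = Mutex.new
    @last_flush = Time.now
  end

  def log(path, status, duration_ms)
    @mutex.synchronize do
      @rows << [path, status, duration_ms]
      flush if Time.now - @last_flush > FLUSH_INTERVAL
    end
  end

  private

  def flush
    return if @rows.empty?
    values = @rows.map do |path, status, ms|
      "('#{@mysql.escape(path)}', #{status.to_i}, #{ms.to_i})"
    end
    # One multi-row INSERT instead of thousands of tiny ones.
    @mysql.query("INSERT INTO request_log (path, status, duration_ms) VALUES #{values.join(',')}")
    @rows.clear
    @last_flush = Time.now
  end
end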
Another approach is to stop opening and closing connections, if possible (e.g., keep longer-lived connections open). While that's likely a good first step, it may require a fair amount of work on a part of the system that you may not have control over. But if you do, it's worth exploring.
We have a MySQL driven site that will occasionally get 100K users in the space of 48 hours, all logging into the site and making purchases.
We are attempting to simulate this kind of load using tools like Apache Bench and Siege.
While the key metric seems to be the number of concurrent users, and we have our report results, we still feel like we're in the dark.
What I want to ask is: What kinds of things should we be testing to anticipate this kind of traffic?
50 concurrent users 1,000 times? 500 concurrent users 10 times?
We're looking at DB errors, apache timeouts, and response times. What else should we be looking at?
This is a vague question and I know there is no "right" answer, we're just looking for some general thoughts on how to determine what our infrastructure can realistically handle.
Thanks in advance!
Simultaneous users is certainly one of the key factors, especially as it applies to DB connection pools and the like. But you will also want to verify that the page rate (pages/sec) of your tests is in the range you expect. If the think-time in your test cases is off by much, you can accidentally simulate a much higher (or lower) page rate than your real-world traffic. Think time is the amount of time a user spends between page requests: reading the page, filling out a form, and so on.
Depending on what other information you have on hand, this might help you calculate the number of simultaneous users to simulate:
Virtual User Calculators
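As a rough back-of-the-envelope version of what those calculators do, Little's Law relates the quantities; all the numbers below are assumptions, not figures from the question:

# virtual users = page rate * (response time + think time)
page_rate     = 50.0   # pages/sec expected at peak (assumed)
response_time = 2.0    # seconds per page (assumed)
think_time    = 30.0   # seconds a user reads/types between pages (assumed)

virtual_users = page_rate * (response_time + think_time)
puts virtual_users     # => 1600.0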
The complete page load time seen by the end-user is usually the most important metric to evaluate system performance. You'll also want to look for failure rates on all transactions. You should also be on the lookout for transactions that never complete. Some testing tools do not report these very well, allowing simulated users to hang indefinitely when the server doesn't respond...and not reporting this condition. Look for tools that report the number of users waiting on a given page or transaction and the average amount of time those users are waiting.
As for the server-side metrics to look for, what other technologies is your app built on? You'll want to look at different things for a .NET app vs. a PHP app.
Lastly, we have found it very valuable to look at how the system responds to increasing load, rather than looking at just a single level of load. This article goes into more detail.
Ideally you are going to want to model your usage on real user behavior, but creating simulated concurrent sessions for 100K users is usually not easily accomplished.
The best source would be to check your logs for the busiest hour and try to figure out a way to model that load level.
The database is usually a critical piece of infrastructure, so I would look at recording the number and length of lock waits as well as the number and duration of db statements.
Another key item to look at is disk queue lengths.
Mostly the process is to look for slow responses, either across the whole site or on specific pages, and then home in on the cause.
The biggest problem with load testing is that it is quite hard to test your network, and if you have (as most public sites do) limited bandwidth through your ISP, that may create a performance issue that is not reflected in the load tests.