Database same row concurrent read and update - mysql

I have an application with a row that EVERY user reads and updates constantly (almost every request), and this causes DB locks and data loss.
The problem is that it's critical information and everyone must have it synchronized and up to date.
I can tolerate a small delay in the update, but it must stay synchronized for all users.
I'm using Django and MySQL.
Note: to keep the application responsive I'm issuing the update in a separate thread, so even if the update fails, the request doesn't wait for the lock to be released before continuing.

Related

ActiveRecord::StaleObject error on opening each result on a new tab

Recently we added functionality to our RoR application that allows users to open a particular record in their own individual tabs. Since then, we've started seeing frequent ActiveRecord::StaleObjectError exceptions. On investigating the issue I found that Rails is indeed trying to update the session store first whenever a resource is opened in a tab, and that is where the exception is raised.
We have lock_version in our Active Record session store, so Rails applies optimistic locking to it by default. Is there any way we could solve this issue without introducing much complexity? The application is already live on the client's machine, and we can't afford to affect any of the sessions' data stored in our session store DB.
Any suggestions would be much appreciated. Thanks
It sounds like you're using optimistic locking on a db session record and updating the session record when you process an update to other records. Not sure what you'd need to update in the session, but if you're worried about possibly conflicting updates to the session object (and need the locking) then these errors might be desired.
If you don't, you can reload the session object before saving it (or disable its optimistic locking) to avoid this error for these session updates.
You might also look into what about the session is being updated and whether it's strictly necessary. If you're updating something like "last_active_on", you might be better off sending off a background job to do this and/or using the update_column method, which bypasses the rather heavyweight ActiveRecord save callback chain.
--- UPDATE ---
Pattern: Putting side-effects in background jobs
There are several common Rails patterns that start to break down as your app usage grows. One of the most common that I've run into is when a controller endpoint for a specific record also updates a common/shared record (for example, if creating a 'message' also updates the messages_count for a user using counter cache, or updates a last_active_at on a session). These patterns create bottlenecks in your application as multiple different types of requests across your application will compete for write locks on the same database rows unnecessarily.
These tend to creep into your app over time and become hard to refactor later. I'd recommend always handling side-effects of a request in an asynchronous job (using something like Sidekiq). Something like:
class Message < ActiveRecord::Base
  # Defer the shared-row update to a background job instead of updating it inline.
  after_commit :enqueue_update_messages_count_job

  def enqueue_update_messages_count_job
    Jobs::UpdateUserMessageCountJob.enqueue(self.id)
  end
end
While this may seem like overkill at first, it creates an architecture that is significantly more scalable. If counting the messages becomes slow... that will make the job slower, but it won't impact the usability of the product. In addition, if certain activities create lots of objects with the same side-effects (let's say you have a "signup" controller that creates a bunch of objects for a user which all trigger an update of user.updated_at), it becomes easy to throw out duplicate jobs and prevent updating the same field 20 times.
Pattern: Skipping the activerecord callback chain
Calling save on an ActiveRecord object runs validations and all the before and after callbacks. These can be slow and (at times) unnecessary. For example, updating a cached message_count value doesn't necessarily care about whether the user's email address is valid (or any other validations), and you may not care about other callbacks running. The same applies if you're just updating a user's updated_at value to clear a cache. You can bypass the ActiveRecord callback chain by calling user.update_column(:message_count, ..) to write that field directly to the database. In theory this shouldn't be necessary for a well-designed application, but in practice some larger/legacy codebases make significant use of the ActiveRecord callback chain to handle business logic that you may not want to invoke.
--- Update #2 ---
On Deadlocks
One reason to avoid updating (or generally locking) a common/shared object from a concurrent request is that it can introduce Deadlock errors.
Generally speaking, a "deadlock" in a database is when two processes each need a lock that the other one holds. Neither thread can continue, so one must error out instead. In practice, detecting this is expensive, so some databases (like Postgres) only check for a deadlock after a thread has been waiting on an exclusive/write lock for a configurable amount of time. While contention for locks is common (e.g. two updates that are both updating a 'session' object), a true deadlock is often rare (thread A holds a lock on the session that thread B needs, while thread B holds a lock on a different object that thread A needs), so you may be able to partially address the problem by looking at / extending your deadlock timeout. While this may reduce the errors, it doesn't fix the issue that threads may be waiting for up to the deadlock timeout. An alternative approach is to keep a short deadlock timeout and rescue/retry a few times.
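For example, the rescue/retry approach might look roughly like this (a Python/PyMySQL sketch, not anyone's production code; it assumes MySQL, whose deadlock error number is 1213, and a caller-supplied work callable):

import time
import pymysql

DEADLOCK_ERRNO = 1213  # MySQL's ER_LOCK_DEADLOCK

def run_with_deadlock_retry(conn, work, retries=3, backoff=0.1):
    # Run work(cursor) inside a transaction, retrying a few times if MySQL
    # reports a deadlock instead of bubbling the error up to the user.
    for attempt in range(retries):
        try:
            with conn.cursor() as cur:
                work(cur)
            conn.commit()
            return
        except pymysql.MySQLError as exc:
            conn.rollback()
            # PyMySQL puts the MySQL error number in args[0]
            if exc.args and exc.args[0] == DEADLOCK_ERRNO and attempt < retries - 1:
                time.sleep(backoff * (attempt + 1))  # brief pause before retrying
                continue
            raise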

MySQL paginated retrieval of the data avoiding race conditions

My service is clustered and I am running several instances of it.
I need to collect all entities in a paginated fashion and push them into the caching layer (Redis).
While one application server is doing this, the application running on server #2 may already be making changes.
Each of those paginated calls to the DB fetches 1000 items.
Now, since I want to prevent modifications while the retrieval is ongoing, how do I achieve that?
Can I use the SELECT FOR UPDATE mechanism even though I am not updating anything in this transaction, but only fetching the data in a paginated fashion?
If it were one app instance with multiple threads, you could use a critical section. But that doesn't work for a cluster of app instances.
I implemented this for a service a couple of months ago. The app is deployed in several instances. These instances don't communicate with each other, so they can't coordinate directly. But they all connect to the same MySQL database.
What I did was use the GET_LOCK() builtin function of MySQL.
When a routine wants exclusive access, it calls GET_LOCK('mylock', 0). This returns immediately, with a true value if it acquired the lock, or a false value if the lock was already held by some other client. That tells the client app whether it is the "winner" or not.
If a client is not the winner, then it calls GET_LOCK('mylock', -1) which means wait indefinitely. It does this because the winner is working on whatever it needs to do in the critical section.
When the winner finishes, it must call RELEASE_LOCK('mylock'). This unblocks the clients who were waiting. They now know that the work of the critical section is done, and they can feel free to read the contents of the cache or whatever else they need to do.
Also remember that the clients who were waiting on GET_LOCK('mylock', -1) need to call RELEASE_LOCK('mylock') immediately, because once they stop waiting they have actually acquired the lock themselves.
This design allows a single lock coordinator (MySQL) to be used by multiple clients. It implements pessimistic locking, without needing to rely on locking any table or set of rows.
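In Python with PyMySQL, that winner/loser flow might look roughly like this (a sketch only; the two callables stand in for whatever your critical section and cache read actually do):

import pymysql

def refresh_cache_once(conn, do_critical_section, read_results):
    # Exactly one app instance (the "winner") runs the critical section;
    # the others block until it is done, then just read the results.
    with conn.cursor() as cur:
        cur.execute("SELECT GET_LOCK('mylock', 0)")
        i_am_the_winner = cur.fetchone()[0] == 1
        if i_am_the_winner:
            try:
                do_critical_section()   # e.g. paginate through MySQL and push to Redis
            finally:
                cur.execute("SELECT RELEASE_LOCK('mylock')")
        else:
            # Wait indefinitely until the winner releases the lock...
            cur.execute("SELECT GET_LOCK('mylock', -1)")
            # ...and release immediately, because we now hold the lock ourselves.
            cur.execute("SELECT RELEASE_LOCK('mylock')")
            read_results()              # e.g. read the now-populated cache

Since the lock is tied to the MySQL connection, it is also released automatically if the winning instance crashes or drops its connection.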

Mysql debezium connector for rds in production caused deadlocks

We are creating a data pipeline from MySQL in RDS to Elasticsearch for building search indexes,
and for this we are using Debezium CDC with its MySQL source and Elasticsearch sink connector.
Now, as the MySQL is in RDS, we have to give the MySQL user LOCK TABLES permission for the two tables we want CDC on, as mentioned in the docs.
We also have various other MySQL users performing transactions which may involve either of the two tables.
As soon as we connected the MySQL connector to our production database, a lock was created and our whole system went down. After realising this we quickly stopped Kafka and removed the connector, but the locks were still increasing, and the situation was only resolved after we stopped all new queries by stopping our production code and manually killing the processes.
What could be the potential cause of this, and how can we prevent it?
I'm only guessing because I don't know your query traffic. I would assume the locks you saw increasing were the backlog of queries that had been waiting for the table locks to be released.
The following sequence is what I believe happened:
Debezium acquires table locks on your two tables.
The application is still working, and it is trying to execute queries that access those locked tables. The queries begin waiting for the lock to be released. They will wait for up to 1 year (this is the default lock_wait_timeout value).
As you spend some minutes trying to figure out why your site is not responding, a large number of blocked queries accumulates, potentially as many as max_connections. Once all the allowed connections are occupied by blocked queries, the application cannot connect to MySQL at all (a small diagnostic sketch follows this list).
Finally you stop the Debezium process that is trying to read its initial snapshot of data. It releases its table locks.
Immediately when the table locks are released, the waiting queries can proceed.
But many of them do need to acquire locks too, if they are INSERT/UPDATE/DELETE/REPLACE or if they are SELECT ... FOR UPDATE or other locking statements.
Since there are so many of these queries queued up, it's more likely that they request overlapping locks, which means they have to wait for each other to finish and release their locks.
Also, because there are hundreds of queries executing at the same time, they overtax system resources like CPU, causing high system load, and this slows them all down too. Queries take longer to complete, so if they block each other, they have to wait even longer.
Meanwhile the application is still trying to accept requests, and therefore keeps adding more queries to execute. These are also subject to the queueing and resource exhaustion.
Eventually you stop the application, which at least allows the queue of waiting queries to gradually drain. As the system load goes down, MySQL is able to process the queries more efficiently and finishes them all fairly quickly.
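For reference, something like the following (a Python/PyMySQL sketch, run from a separate monitoring connection if you can still get one) is how you could watch that pile-up; the exact thread state names vary by version, but sessions blocked by table locks typically report "Waiting for table metadata lock":

import pymysql
from collections import Counter

def snapshot_lock_pressure(conn):
    # Show the two settings involved and how many sessions are stuck, and in what state.
    with conn.cursor() as cur:
        cur.execute("SHOW VARIABLES WHERE Variable_name IN ('lock_wait_timeout', 'max_connections')")
        for name, value in cur.fetchall():
            print(f"{name} = {value}")
        cur.execute("SELECT state FROM information_schema.processlist")
        states = Counter(state or "(none)" for (state,) in cur.fetchall())
        for state, count in states.most_common():
            print(f"{count:5d}  {state}")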
The suggestion by the other answer to use a read replica for your Debezium snapshot is a good one. If your application can read from the master MySQL instance for a while, then no query will be blocked on the replica while Debezium has it locked. Eventually Debezium will finish reading all the data, and release the locks, and then go on to read only the binlog. Then the app can resume using the replica as a read instance.
If your binlog uses GTID, you should be able to make a CDC tool like Debezium read the snapshot from the replica, then when that's done, switch to the master to read the binlog. But if you don't use GTID, that's a little more tricky. The tool would have to know the binlog position on the master corresponding to the snapshot on the replica.
If the locking is a problem and you can afford to trade off some consistency to avoid it, then take a look at the snapshot.locking.mode config option.
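For example, registering the connector with that option through the Kafka Connect REST API might look like this (an incomplete Python sketch; the connector name, table list and Connect host are placeholders, and only the locking-related property matters here):

import requests

connector = {
    "name": "mysql-cdc",                                      # placeholder connector name
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "table.include.list": "mydb.table_a,mydb.table_b",    # the two CDC'd tables (placeholders)
        # Skip the global/table read locks during the initial snapshot.
        # Only safe if no schema changes happen while the snapshot runs.
        "snapshot.locking.mode": "none",
    },
}

# Register (or re-register) it through the Kafka Connect REST API.
resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()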
Use the replica to prevent the LOCK TABLES statement from being executed. Why does Debezium need to lock tables at all? All CDC tools fetch events from the binlogs.
The reason is that Debezium does not behave the way the documentation describes (as of version 1.5). Once acquisition of the global read lock (FTWRL) fails, it executes LOCK TABLES instead, and that lock is only released after the snapshot has been read. If you see in the log something like "Unable to refresh and obtain the global read lock, the table read lock will be used after reading the table name", congratulations, that is exactly what happened to you.

Best Practice for synchronized jobs in Application clusters

We have 3 REST applications within a cluster.
So each application server can receive requests from outside.
Now we have timed events which analyse the database, add/remove rows, send emails, etc.
The problem is that each application server starts these timed events, and it can happen that 2 application servers start the analysing job at the same time.
We have an SQL table in the back end.
Our idea was to lock a table within the SQL database when starting the job. If the table is already locked, we exit the job, because another application has just started to analyse.
What's a good practice for implementing some kind of semaphore?
Any ideas?
Don't use semaphores, you are overcomplicating things. Just use message queueing, where you queue your tasks and have them executed in order.
Make ONLY one separate node/process/child_process consume from the queue and get your tasks done.
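For example (a sketch; the Redis list is purely an illustrative choice of queue, any message broker works the same way):

import json
import redis

r = redis.Redis()  # assumption: a Redis list as the queue backend

def consume_forever(handle):
    # Run this loop in exactly ONE process, so queued jobs execute one at a time.
    while True:
        _queue, raw = r.brpop("jobs")   # blocks until a task is pushed
        handle(json.loads(raw))

# The REST application servers only enqueue, they never run the job themselves:
#   r.lpush("jobs", json.dumps({"type": "analyse-database"}))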
We (at a previous employer) used a database-based semaphore. Each of several servers (for redundancy and load sharing) had the same set of cron jobs. The first thing in each job was a custom library call that did the following:
Connect to the database and check for (or insert) "I'm working on X".
If the flag was already set, then the cron job silently exited.
When finished, the flag was cleared.
The table included a timestamp and a host name -- for debugging and recovering from cron jobs that fail to finish gracefully.
I forget how the "test and set" was done. Possibly an optimistic INSERT, then check for "duplicate key".
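That optimistic INSERT / duplicate-key approach might look something like this (a Python/PyMySQL sketch; the job_flags table and all names are illustrative, and 1062 is MySQL's duplicate-key error number):

import socket
import pymysql

DUPLICATE_KEY_ERRNO = 1062  # MySQL's ER_DUP_ENTRY

def try_claim_job(conn, job_name):
    # The optimistic INSERT is the test-and-set: the PRIMARY KEY on job_name
    # guarantees only one host can insert the "I'm working on X" row.
    try:
        with conn.cursor() as cur:
            # Assumed table: job_flags(job_name VARCHAR(64) PRIMARY KEY,
            #                          host VARCHAR(255), started_at DATETIME)
            cur.execute(
                "INSERT INTO job_flags (job_name, host, started_at) VALUES (%s, %s, NOW())",
                (job_name, socket.gethostname()),
            )
        conn.commit()
        return True
    except pymysql.MySQLError as exc:
        conn.rollback()
        if exc.args and exc.args[0] == DUPLICATE_KEY_ERRNO:
            return False     # someone else already claimed the job: exit silently
        raise

def release_job(conn, job_name):
    # Clear the flag when the cron job finishes.
    with conn.cursor() as cur:
        cur.execute("DELETE FROM job_flags WHERE job_name = %s", (job_name,))
    conn.commit()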

Implementing locking mysql

I want to use MySQL row-level locking. I can't lock the complete table. I want to avoid two processes processing two different messages for a server at the same time.
What I thought is that I could have a table called
server_lock, and if one process starts working on a server, it inserts a row into the table.
The problem with this approach is that if the application crashes, we need to remove the lock manually.
Is there a way I can do row-level locking where the lock gets released if the application crashes?
Edit
I am using C++ as the language.
My application is similar to a message queue, but the difference is that there are two queues, each populated by its own process. If two actions belong to the same object and both processes work on that object at the same time, it may result in wrong data. So I want a locking mechanism between these two queues so that the two processors don't modify the same object at the same time.
I can think of two ways:
Implement some error handler in your program where you remove the lock. Without knowing anything about your program it is hard to say how to do this, but most languages have some way to run cleanup code before exiting after a crash. This is dangerous, because a crash happens when something is not right; if you continue to do any work, it is possible that you corrupt the database or something like that.
Periodically update the lock. Add a thread to your program that periodically renews the lock, or renew it in some loop you are already running. Then, when a lock has not been updated in a while, you know that it belonged to a program that crashed.
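A sketch of that second approach (Python/PyMySQL rather than C++, to keep it short; the object_lock table, its columns and the 30-second staleness window are all assumptions):

import threading
import pymysql

STALE_AFTER_SECONDS = 30   # a lock not refreshed for this long is considered abandoned

def try_acquire(conn, owner):
    # Take the single lock row if it is free, or if its holder stopped heartbeating (crashed).
    with conn.cursor() as cur:
        # Assumed table: object_lock(id INT PRIMARY KEY, owner VARCHAR(255), heartbeat DATETIME),
        # seeded with one row (id = 1, owner = NULL).
        cur.execute(
            """UPDATE object_lock
                  SET owner = %s, heartbeat = NOW()
                WHERE id = 1
                  AND (owner IS NULL OR heartbeat < NOW() - INTERVAL %s SECOND)""",
            (owner, STALE_AFTER_SECONDS),
        )
        acquired = cur.rowcount == 1
    conn.commit()
    return acquired
    # To release deliberately: UPDATE object_lock SET owner = NULL WHERE id = 1 AND owner = %s

def keep_alive(conn, owner, stop_event: threading.Event):
    # Run in a background thread with its OWN connection (PyMySQL connections are not
    # thread-safe). If the process crashes, the heartbeat stops and the lock goes stale
    # on its own, so no manual cleanup is needed.
    while not stop_event.wait(STALE_AFTER_SECONDS / 3):
        with conn.cursor() as cur:
            cur.execute("UPDATE object_lock SET heartbeat = NOW() WHERE id = 1 AND owner = %s",
                        (owner,))
        conn.commit()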