I'm writing an API in Node.js with a MySQL db and am implementing a fairly standard pattern of:
If exists then update
else insert
This of course works fine until multiple simultaneous requests are made to the API, at which point the "if exists" check of request 2 can execute before the insert of request 1, leading to two records instead of one.
I know that one way of dealing with this is to ensure that the db has a constraint or key that prevents the duplicate record, but in this case the rules that determine whether we should insert or update are more complicated, so the check needs to be done in code.
This sounded like a good case for using a mutex/lock. I need this to be distributed as the api may have multiple instances running as part of a pool/farm.
I've come up with the following implementation:
try {
    await this.databaseConnection.knexRaw().raw(`SELECT GET_LOCK('lock1',10);`);
    await this.databaseConnection.knexRaw().transaction(async (trx) => {
        const existing = await this.findExisting(id);
        if (existing) {
            await this.update(myThing);
        } else {
            await this.insert(myThing);
        }
    });
} finally {
    await this.databaseConnection.knexRaw().raw(`SELECT RELEASE_LOCK('lock1');`);
}
This all seems to work fine and my tests now produce only a single insert, although it seems a bit brute force/manual. Being new to MySQL and Node (I come from a C# and SQL Server background): is this approach sane? Is there a better approach?
Is it sane? Subjective.
Is it technically safe? It could be -- GET_LOCK() is reliable -- but not as you have written it.
You are ignoring the return value of GET_LOCK(), which is 1 if you got the lock, 0 if the timeout expired and you didn't get the lock, and NULL in some failure cases.
As written, you'll wait 10 seconds and then do the work anyway, so, not safe.
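For illustration, this is roughly the check the lock acquisition needs before the protected work is allowed to run (a sketch in plain SQL; the lock name and 10-second timeout are taken from the question's code, and in the Node code you would read this value from the raw query result and bail out unless it is 1):

SELECT GET_LOCK('lock1', 10) AS got_lock;
-- got_lock = 1    -> lock acquired: proceed with the find / insert-or-update transaction
-- got_lock = 0    -> the 10-second timeout expired without acquiring the lock: abort or retry
-- got_lock = NULL -> an error occurred (e.g. thread killed, out of memory): abort
-- ... do the protected work only when got_lock = 1 ...
SELECT RELEASE_LOCK('lock1');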
This assumes you have only one MySQL master. It wouldn't work if you have multiple masters or Galera, since Galera doesn't replicate GET_LOCK() across all nodes. (A Galera cluster is a high availability MySQL/MariaDB/Percona cluster of writable masters that replicate synchronously and will survive the failure/isolation of up to (ceil(n/2) - 1) out of n total nodes).
It would be better to find and lock the relevant rows using SELECT ... FOR UPDATE, which locks the found rows (or, in some cases, the gap where they would be if they existed), blocking other transactions that attempt to acquire the same locks until you roll back or commit. But if that is not practical, using GET_LOCK() is valid, subject to the point made above about checking the return value.
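A minimal sketch of that row-locking approach, assuming a hypothetical things table keyed by id (the real insert-vs-update rules from the question would go where the comments are):

START TRANSACTION;
-- locks the matching row, or in some cases the gap where it would be if it existed
SELECT * FROM things WHERE id = 123 FOR UPDATE;
-- apply the application's rules here, then either:
--   UPDATE things SET ... WHERE id = 123;            -- row was found
--   INSERT INTO things (id, ...) VALUES (123, ...);  -- row was not found
COMMIT;  -- releases the locks; concurrent requests for the same id block until this point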
Related
I have a distributed serverless application, based on AWS Aurora Serverless MySQL 5.6 and multiple Lambda functions. Some of the Lambdas act as writing threads, others as reading threads. To keep to the most important details, let's suppose there is only one table with the following structure:
id: bigint primary key autoincrement
key1: varchar(700)
key2: bigint
content: blob
unique(key1, key2)
Writing threads perform INSERTs in the following manner: every writing thread generates one entry with key1+key2+content, where the key1+key2 pair is unique and id is generated automatically by autoincrement. Some writing threads can fail with a DUPLICATE KEY ERROR if key1+key2 repeats an existing value, but that does not matter and is okay.
There are also some reading threads, which poll the table and try to process newly inserted entries. The goal of a reading thread is to retrieve all new entries and process them in some way. The number of reading threads is uncontrolled; they do not communicate with each other and do not write anything to the table above, but they can write some state to a custom table.
At first it seems that polling is very simple: it's enough for the reading process to store the last id that has been processed and continue polling from it, e.g. SELECT * FROM table WHERE id > ${lastId}. The approach above works well under light load, but does not work under heavy load for an obvious reason: there are inserted entries that have not yet appeared in the database, because the cluster has not been synchronized at that point.
Let's see what happens from the cluster's point of view, even if it consists of only two servers A and B.
1) Server A accepts a write transaction with an entry insertion and acquires autoincrement number 100500
2) Server B accepts a write transaction with an entry insertion and acquires autoincrement number 100501
3) Server B commits its write transaction
4) Server B accepts a read transaction and returns entries with id > 100499, which is only entry 100501.
5) Server A commits its write transaction.
6) The reading thread receives only entry 100501 and moves the lastId cursor to 100501. Entry 100500 is lost to the current reading thread forever.
QUESTION: Is there a way to solve the problem above WITHOUT hard-locking tables across the whole cluster, in some lock-free way or something similar?
The issue here is that the local state in each Lambda (thread) does not reflect the global state of the table.
As a first attempt I would try to always consult the table for the latest ID before reading the entries up to that ID.
Have a look at the built-in function LAST_INSERT_ID() in MySQL.
The caveat:
[...] the most recently generated ID is maintained in the server on a per-connection basis
Your Lambda could be creating connections prior to the handler function/method, which would make them longer-lived (it's a known trick, but it's not bomb-proof here), but I think a new, simultaneously executing Lambda function would be given a new connection, in which case the above solution would fall apart.
Luckily, what you have to do then is to wrap all WRITES and all READS in transactions so that additional coordination takes place when reading and writing simultaneously to the same table.
In your quest you might come across transaction isolation levels; SERIALIZABLE would be the safest and least performant, but apparently AWS Aurora does not support it (I have not verified that statement).
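A rough sketch of that wrapping at the SQL level. The entries table name is an assumption (the question doesn't name the table), and the isolation-level statement is only illustrative, since as noted above Aurora's support for SERIALIZABLE should be verified first:

-- writer
START TRANSACTION;
INSERT INTO entries (key1, key2, content) VALUES (?, ?, ?);
COMMIT;

-- reader
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- applies to the next transaction; verify Aurora support
START TRANSACTION;
SELECT * FROM entries WHERE id > ? ORDER BY id;
COMMIT;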
HTH
As part of the persistence process in one of my models an md5 check_sum of the entire record is generated and stored with the record. The md5 check_sum contains a flattened representation of the entire record including all EAV attributes etc. This makes preventing absolute duplicates very easy and efficient.
I am not using a unique index on this check_sum for a specific reason: I want this all to be silent, i.e. if a user submits a duplicate then the app just silently ignores it and returns the already existing record. This ensures backwards compatibility with legacy apps and APIs.
I am using Laravel's Eloquent. So once a record has been created, and before committing, the application does the following:
$taxonRecords = TaxonRecord::where('check_sum', $taxonRecord->check_sum)->get();

if ($taxonRecords->count() > 0) {
    DB::rollBack();
    return $taxonRecords->first();
}
However, recently I encountered a 60,000-to-1 incident (odds based on record counts at that time). A single duplicate ended up in the database with the same check_sum. When I reviewed the logs I noticed that the creation time was identical down to the second. Further investigation of the Apache logs showed a valid POST, but the POST was duplicated. I presume the user's browser malfunctioned or something, but both POSTs arrived simultaneously, resulting in two simultaneous transactions.
My question is how can I ensure that a transaction and its contained SELECT for the previous check_sum is Atomic & Isolated. Based upon my reading the answer lies in https://dev.mysql.com/doc/refman/8.0/en/innodb-locking-reads.html and isolation levels.
If transaction A and transaction B arrive at the server at the same time, they should not run side by side; the second should wait for the first to complete.
You created a classic race condition. Both transactions are calculating the checksum while they're both in progress, not yet committed. Neither can read the other's data, since they're uncommitted. So they calculate that they're the only one with the same checksum, and they both go through and commit.
To solve this, you need to run such transactions serially, to be sure that there aren't other concurrent transactions submitting the same data.
You may have to use GET_LOCK() before starting your transaction to calculate the checksum, then RELEASE_LOCK() after you commit. That will make sure other concurrent requests wait for your data to be committed, so they will see it when they try to calculate their checksum.
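A sketch of that sequence at the MySQL level, keyed on the checksum itself so unrelated records don't serialize behind each other (the taxon_records table name, lock-name prefix and timeout are assumptions, not taken from the question):

SELECT GET_LOCK(CONCAT('taxon_checksum:', ?), 10);   -- pass the record's check_sum; proceed only if this returns 1
START TRANSACTION;
SELECT id FROM taxon_records WHERE check_sum = ?;    -- now guaranteed to see any row committed by a previous lock holder
-- if a row came back: ROLLBACK and return the existing record
-- otherwise: INSERT the new record
COMMIT;
SELECT RELEASE_LOCK(CONCAT('taxon_checksum:', ?));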
I have a general understanding question about how Slick/the database manage asynchronous operations. When I compose a query, or an action, say
(for {
    users <- UserDAO.findUsersAction(usersInput.map(_.email))
    addToInventoriesResult <- insertOrUpdate(inventoryInput, user)
    deleteInventoryToUsersResult <- inventoresToUsers.filter(_.inventoryUuid === inventoryInput.uuid).delete if addToInventoriesResult == 1
    addToInventoryToUsersResult <- inventoresToUsers ++= users.map(u => DBInventoryToUser(inventoryInput.uuid, u.uuid)) if addToInventoriesResult == 1
} yield (addToInventoriesResult)).transactionally
Is there a possibility that another user can, for example, remove the users just after the first action UserDAO.findUsersAction(usersInput.map(_.email)) is executed, but before the rest, such that the insert will fail (because of a foreign key error)? Or a scenario that can lead to a lost update, like: transaction A reads data, then transaction B updates this data, then transaction A does an update based on what it read; it will not see B's update and will overwrite it.
I think this probably depends on the database implementation or maybe JDBC, as this is sent to the database as a block of SQL, but maybe Slick plays a role in this. I'm using MySQL.
In case there are synchronisation issues here, what is the best way to solve them? I have read about approaches like a background queue that processes the operations sequentially (as semantic units), but wouldn't this partly remove the benefit of being able to access the database asynchronously and hurt performance?
First of all, if the underlying database driver is blocking (the case with JDBC-based drivers), then Slick cannot deliver async performance in the truly non-blocking sense of the word (i.e. a thread will be consumed and blocked for however long it takes for a given query to complete).
There's been talk of implementing non-blocking drivers for Oracle and SQL Server (under a paid Typesafe subscription) but that's not happening any time soon AFAICT. There are a couple of projects that do provide non-blocking drivers for Postgres and MySQL, but YMMV, still early days.
With that out of the way, when you call transactionally Slick takes the batch of queries to execute and wraps them in a try-catch block, with the underlying connection's autocommit flag set to false. Once the queries have executed successfully the transaction is committed by setting autocommit back to the default, true. In the event an Exception is thrown, the connection's rollback method is called. Just standard JDBC session boilerplate that Slick conveniently abstracts away.
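In other words, roughly this session-level sequence (a sketch of the plain JDBC/SQL steps being abstracted, not Slick's actual generated statements):

SET autocommit = 0;   -- connection.setAutoCommit(false)
-- ... the composed actions run here, in order ...
COMMIT;               -- issued on success
-- ROLLBACK;          -- issued instead if any action throws
SET autocommit = 1;   -- autocommit restored to the default afterwards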
As for your scenario of a user being deleted mid-transaction and handling that correctly, that's the job of the underlying database/driver.
I am using 2 separate processes via multiprocessing in my application. Both have access to a MySQL database via sqlalchemy core (not the ORM). One process reads data from various sources and writes them to the database. The other process just reads the data from the database.
I have a query which gets the latest record from a table and displays the id. However, it always displays the first id, which was created when I started the program, rather than the latest inserted id (new rows are created every few seconds).
If I use a separate MySQL tool and run the query manually I get correct results, but SQLAlchemy is always giving me stale results.
Since you can see the changes your writer process is making with another MySQL tool, that means your writer process is indeed committing the data (at least if you are using InnoDB).
InnoDB shows you the state of the database as of when you started your transaction. Whatever other tools you are using probably have an autocommit feature turned on where a new transaction is implicitly started following each query.
To see the changes in SQLAlchemy do as zzzeek suggests and change your monitoring/reader process to begin a new transaction.
One technique I've used to do this myself is to add autocommit=True to the execution_options of my queries, e.g.:
result = conn.execute( select( [table] ).where( table.c.id == 123 ).execution_options( autocommit=True ) )
Assuming you're using InnoDB, the data on your connection will appear "stale" for as long as you keep the current transaction running, or until you commit the other transaction. In order for one process to see the data from the other process, two things need to happen: 1. the transaction that created the new data needs to be committed, and 2. the current transaction, assuming it has already read some of that data, needs to be rolled back or committed and started again. See The InnoDB Transaction Model and Locking.
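A minimal illustration of that snapshot behavior with two MySQL sessions (the readings table and ids are made up):

-- session 1: the reader, default REPEATABLE READ isolation
START TRANSACTION;
SELECT MAX(id) FROM readings;   -- returns, say, 100

-- session 2: the writer
INSERT INTO readings (value) VALUES (42);
COMMIT;

-- session 1, still inside the same transaction
SELECT MAX(id) FROM readings;   -- still 100: the consistent snapshot dates from the first read
COMMIT;                         -- or ROLLBACK, ending the reader's transaction
SELECT MAX(id) FROM readings;   -- 101: a fresh transaction sees the committed insert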
We are developing an online schedule application. One schedule can be edited simultaneously by several users. There is one very important business constraint: there must be no more than three events in one day.
Technically speaking and simplifying, there is a table in the database with columns: | id | event | date |. The application runs, in a transaction, a "select ... count ... where ..." and, if the result is less than 3, inserts the new event.
Which approaches can be used to guarantee that two threads will not create four events in one day? This is a classical check-and-write problem, and we wonder how it can be solved at the database level.
Using transactions doesn't guarantee that in a second transaction another thread will not do the same: check that the number of events is less than 3 and make the insert. Locking the whole table is not acceptable because it will reduce response time, concurrency, etc.
Application is developed in Java using Spring, Hibernate, MySQL.
Thanks in advance for any pieces of advice.
To block the other process you should use a SELECT ... FOR UPDATE statement. Note that this only works with InnoDB.
Example:
// java logic
try {
    // mysql logic
    START TRANSACTION;
    SELECT * FROM table WHERE <some condition> FOR UPDATE;
    INSERT INTO table ...;
    COMMIT;
    // java logic
} catch (Exception e) {
    // mysql logic
    ROLLBACK;
}
See more info about row-level locking: http://dev.mysql.com/doc/refman/5.1/en/innodb-locking-reads.html
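Filling that in for the question's table (the columns come from the question; the events table name and the date literal are assumptions), the locking read could look something like this. When there are already rows for that day to lock, a concurrent transaction running the same locking read blocks until the first one commits or rolls back, so it will see the newly inserted event:

START TRANSACTION;
SELECT id FROM events WHERE date = '2015-07-01' FOR UPDATE;   -- locks the existing rows for that day
-- the application counts the rows returned here
INSERT INTO events (event, date) VALUES ('some event', '2015-07-01');   -- only if fewer than 3 came back
COMMIT;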
With your data model you could use a check constraint to count the number of rows. AFAIK MySQL doesn't natively support this type of constraint, but it looks like it's possible to emulate it with a trigger.
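A rough sketch of such a trigger, using the same assumed events table and MySQL's SIGNAL syntax (note that the count read inside the trigger is still subject to the same isolation caveats discussed above, so this is a guard rather than a complete fix):

DELIMITER //
CREATE TRIGGER limit_events_per_day
BEFORE INSERT ON events
FOR EACH ROW
BEGIN
    IF (SELECT COUNT(*) FROM events WHERE date = NEW.date) >= 3 THEN
        SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'No more than three events per day';
    END IF;
END//
DELIMITER ;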
Alternatively you could consider a different data model with a days table and an events table. You could use optimistic locking of days to ensure that a second transaction didn't have an out-of-date understanding of the data.
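A sketch of that variant at the SQL level (the days table, its columns, and the version value are all assumptions; this is roughly what Hibernate's @Version optimistic locking does under the hood):

-- days: | day_date | event_count | version |
START TRANSACTION;
SELECT event_count, version FROM days WHERE day_date = '2015-07-01';   -- suppose this returns (2, 7)
-- the application checks event_count < 3, then:
UPDATE days
SET event_count = event_count + 1, version = version + 1
WHERE day_date = '2015-07-01' AND version = 7;
-- 0 affected rows means another transaction updated the day first: reload and retry
INSERT INTO events (event, date) VALUES ('some event', '2015-07-01');
COMMIT;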
Since you are going through Spring, and since there is a concurrency issue, try synchronizing execution at the Java layer, rather than at the DB layer. We've had similar issues when trying to use a DB to maintain concurrency.
Perhaps you could make the execution block in Java synchronized so that it forces execution to block; inside the synchronized method, check that all of your business logic returns true. If true, continue with normal execution. If false, abort with an exception.