How are interdependent DB calls handled in transaction.atomic - MySQL

I have two DB calls inside transaction.atomic().
Sample code:
with transaction.atomic():
    result1, created = SomeModel.objects.get_or_create(**somedata)
    if not created:
        flag = SomeOtherModel.objects.filter(somemodel=result1).exists()
        if flag:
            result1.some_attr = value1 if flag else value2
            result1.save()
As far as I know, with transaction.atomic, when my Python code does not raise an exception, all the DB calls will be committed to the database. If any exception is raised inside the block, no database operation will be committed.
So how is this handled when the result of one DB call is used in the Python logic to make other DB operations?
I didn't find anything specific about this in the documentation; if there is a good source, please mention it.

Database transactions are a complex topic. I don't have the exact answer with linked documentation, but from experience I can say that you're fine to use values created or mutated within a transaction later in the same transaction. The simple explanation of a transaction is that it ensures a series of commands either succeeds or fails entirely, so your database isn't left in a partial, incomplete state. Within a transaction, your experience with the database, at least from an ORM perspective, remains the same as usual.
Here's a good StackOverflow post I found with some good conversations around it: Database transactions - How do they work?
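To make that concrete, here is a minimal sketch (the Order and OrderItem models and field values below are made up for illustration, not from the question): a row created or fetched earlier in the atomic block is visible to the queries that follow it on the same connection, and if anything in the block raises, all of the writes are rolled back together.
from django.db import transaction

try:
    with transaction.atomic():
        order, created = Order.objects.get_or_create(number="A-1")
        if not created:
            # This query sees the row fetched/created above, because it
            # runs on the same connection and inside the same transaction.
            has_items = OrderItem.objects.filter(order=order).exists()
            order.status = "in_progress" if has_items else "open"
            order.save()
        # If this (or anything above) raises, the get_or_create and the
        # save are rolled back together; nothing is committed.
        OrderItem.objects.create(order=order, sku="X1")
except Exception:
    # None of the block's database changes were committed.
    raise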

Related

Peewee row level blocking

I'm currently using Peewee as an ORM in my project. In my current situation, I have some processes, each having a database connection. All of these processes need to access a certain table simultaneously. I'm looking for some way to coordinate them without using a central controller. For this, when a row is read by one process, it must get locked so that no other process can read that row. Blocked processes should continue with other, non-blocked rows.
I searched around and found that MySQL already has an internal locking mechanism, described here, and apparently it must be used on indexed columns to behave as expected (from here). But I couldn't find anything related in the Peewee documentation. Is there any extension providing this feature? Or should I write raw SQL queries containing the FOR UPDATE clause?
Peewee supports using the FOR UPDATE clause, and I think this is probably what you want. It won't prevent other clients from reading, but it will prevent modifications for as long as the transaction holding the lock is open.
Ex:
with db.atomic():
    # Lock the note.
    note = Note.select().where(Note.id == 123).for_update().get()
    # As long as the lock is held, no other client can modify the note.
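For completeness, a rough sketch of how the lock behaves across two clients (the Note model, connection details, and the fact that the two functions run in separate processes are all assumptions here): while client A holds the lock inside its transaction, client B's write to the same row blocks until A commits or rolls back.
from peewee import MySQLDatabase, Model, AutoField, TextField

# Connection and model details are assumed for this sketch.
db = MySQLDatabase('mydb', user='app', password='secret')

class Note(Model):
    id = AutoField()
    content = TextField()

    class Meta:
        database = db

def client_a():
    # Runs in one process: takes and holds the row lock.
    with db.atomic():
        note = Note.select().where(Note.id == 123).for_update().get()
        note.content = 'updated by A'
        note.save()
    # The lock is released when the atomic block commits (or rolls back).

def client_b():
    # Runs in another process: this write waits until A's transaction
    # finishes, then proceeds.
    Note.update(content='updated by B').where(Note.id == 123).execute()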

Thread safety in Slick

I have a general understanding question about how Slick/the database manage asynchronous operations. When I compose a query, or an action, say
(for {
  users <- UserDAO.findUsersAction(usersInput.map(_.email))
  addToInventoriesResult <- insertOrUpdate(inventoryInput, user)
  deleteInventoryToUsersResult <- inventoresToUsers.filter(_.inventoryUuid === inventoryInput.uuid).delete if addToInventoriesResult == 1
  addToInventoryToUsersResult <- inventoresToUsers ++= users.map(u => DBInventoryToUser(inventoryInput.uuid, u.uuid)) if addToInventoriesResult == 1
} yield (addToInventoriesResult)).transactionally
Is there a possibility that another user can, for example, remove the users just after the first action UserDAO.findUsersAction(usersInput.map(_.email)) is executed, but before the rest, such that the insert will fail (because of a foreign key error)? Or a scenario that can lead to a lost update, like: transaction A reads data, then transaction B updates this data, then transaction A does an update based on what it read, so it will not see B's update and will overwrite it.
I think this probably depends on the database implementation or maybe JDBC, as this is sent to the database as a block of SQL, but maybe Slick plays a role in this. I'm using MySQL.
In case there are synchronisation issues here, what is the best way to solve them? I have read about approaches like a background queue that processes the operations sequentially (as semantic units), but wouldn't this partly remove the benefit of being able to access the database asynchronously, and thus hurt performance?
First of all, if the underlying database driver is blocking (the case with JDBC-based drivers) then Slick cannot deliver async performance in the truly non-blocking sense of the word (i.e. a thread will be consumed and blocked for however long it takes for a given query to complete).
There's been talk of implementing non-blocking drivers for Oracle and SQL Server (under a paid Typesafe subscription) but that's not happening any time soon AFAICT. There are a couple of projects that do provide non-blocking drivers for Postgres and MySQL, but YMMV, still early days.
With that out of the way, when you call transactionally Slick takes the batch of queries to execute and wraps them in a try-catch block with the underlying connection's autocommit flag set to false. Once the queries have executed successfully the transaction is committed by setting autocommit back to the default, true. In the event an Exception is thrown, the connection's rollback method is called. Just standard JDBC session boilerplate that Slick conveniently abstracts away.
As for your scenario of a user being deleted mid-transaction and handling that correctly, that's the job of the underlying database/driver.

Is it possible to exclude certain queries/procedures from transaction rollbacks in MySQL?

The Setup
While working on some rather complex procedures I've started logging debug information into a _debug table, via a stored logging procedure: P_Log('message'), which just calls a simple INSERT query into the _debug table.
The complex procedures contain transactions, which are rolled back if an error is encountered. The problem is that any debug information that was logged during the course of the transaction is also rolled back. This is of course a little counterproductive, since you want to be able to see the debug logs precisely when the procedure -does- fail.
The Question
Is there any way I can insert into _debug without having the inserts rolled back? The log is really only to be used in development, and I would only ever write to it, so I don't care if it would violate how transactions are intended to be used.
And just out of curiosity, how is this normally handled? It seems like being able to write arbitrary log information from inside transactions, to check the states of variables, etc., regardless of said transactions being rolled back, would be absolutely crucial for debugging errors. What's the best practice here?
Possible alternatives
Storing logs in variables and only writing them at the end of the procedure.
The problem with this is that I want to be able to insert an arbitrary number of debug entries. Creating a text variable and parsing it later would work, but seems very hacky.
Using some built-in log in MySQL.
I'd actually be fine with this, if it means I can write arbitrary text to it at will, but I haven't been able to find something like this so far.
The simplest way would be to change your logs table to MyISAM.
It does not support transactions and will completely ignore them. Also MyISAM is a bit faster when you only insert and select from it.
The only other solution that I know of is to create a separate connection for the logs.
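If the logging is done from application code rather than from inside the stored procedure, the separate-connection approach might look roughly like this (a sketch using mysql-connector-python; the accounts table, credentials, and connection details are assumptions, only the _debug table comes from the question):
import mysql.connector

# Connection that does the real work, inside a transaction.
conn = mysql.connector.connect(user="app", password="secret", database="mydb")

# A second connection used only for logging. With autocommit enabled,
# each log row is committed immediately, so it survives a rollback of
# the main transaction.
log_conn = mysql.connector.connect(user="app", password="secret", database="mydb")
log_conn.autocommit = True

def p_log(message):
    cur = log_conn.cursor()
    cur.execute("INSERT INTO _debug (message) VALUES (%s)", (message,))
    cur.close()

cur = conn.cursor()
try:
    conn.start_transaction()
    p_log("starting the risky update")
    cur.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
    p_log("update done, about to commit")
    conn.commit()
except Exception:
    conn.rollback()  # the UPDATE is undone; the _debug rows remain
    raise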

Managing full SQL transaction in MySQL

Referring to this, http://docs.oracle.com/cd/B19306_01/server.102/b14220/transact.htm
How are transactions managed in the MySQL database?
My concern is, I have an application that writes a few queries, similar to the ones in the linked document, to complete a full transaction. I want to ensure that it always writes a correct and complete transaction for the final save, and doesn't write an incomplete transaction to the database in case of power failure or other circumstances, so that the correctness of the transaction is guaranteed.
I just want to know how this can be implemented in a MySQL database.
First, make sure you are using a transactional storage engine, such as InnoDB.
Next, make sure you understand the statements that cause implicit commit.
Read documentation on transactions
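For illustration, here is roughly what an all-or-nothing write looks like from application code (a sketch using mysql-connector-python; the orders/order_items tables, which are assumed to be InnoDB with an auto-increment key on orders, and the connection details are made up):
import mysql.connector

conn = mysql.connector.connect(user="app", password="secret", database="mydb")
cur = conn.cursor()
try:
    conn.start_transaction()
    cur.execute("INSERT INTO orders (customer_id, total) VALUES (%s, %s)", (1, 99))
    cur.execute("INSERT INTO order_items (order_id, sku) VALUES (LAST_INSERT_ID(), %s)", ("ABC",))
    # Beware of implicit commits: a DDL statement here (CREATE TABLE,
    # ALTER TABLE, ...) would silently commit the two INSERTs above.
    conn.commit()    # both rows become visible together
except Exception:
    conn.rollback()  # neither row is written; no partial state survives
    raise
finally:
    cur.close()
    conn.close()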

MySQL table locking for a multi user JSP/Servlets site

Hi, I am developing a site with JSP/Servlets running on Tomcat for the front-end and a MySQL DB for the backend, which is accessed through JDBC.
Many users of the site can access and write to the database at the same time. My question is:
Do I need to explicitly take locks before each write/read access to the DB in my code?
Or does Tomcat handle this for me?
Also, do you have any suggestions on how best to implement this? I have written a significant amount of JDBC code already without taking the locks :/
I think you are thinking about transactions when you say "locks". At the lowest level, your database server already ensures that parallel reads and writes won't corrupt your tables.
But if you want to ensure consistency across tables, you need to employ transactions. Simply put, what transactions give you is an all-or-nothing guarantee. That is, if you want to insert an Order in one table and related OrderItems in another table, what you need is an assurance that if the insertion of the OrderItems fails (step 2), the changes made to the Order table (step 1) will also get rolled back. This way you'll never end up in a situation where a row in the Order table has no associated rows in the OrderItems table.
This, of course, is a very simplified representation of what a transaction is. You should read more about it if you are serious about database programming.
In Java, you usually do transactions with roughly the following steps:
1. Set autocommit to false on your JDBC connection
2. Do several inserts and/or updates using the same connection
3. Call conn.commit() when all the inserts/updates that go together are done
4. If there is a problem somewhere during step 2, call conn.rollback()