How do I revert / undo a `session.execute()` statement in SQLAlchemy

If I would like to undo a previous update statement executed with session.execute(my_update_sql) on a SQLAlchemy session, how do I go about it?
I am using SQLAlchemy with Zope in a Pyramid web application.

Provided that you use a "real" DBMS with transactions, you should be able to roll back the current transaction, just as you would with other operations involving the session:
session.execute('DELETE FROM users') # feel the thrill!
session.rollback()
A typical setup of SQLAlchemy with Pyramid involves ZopeTransactionExtension, which integrates session management with Pyramid's request-response cycle. In this case, to roll back the transaction you'll need to use ZTE's transaction manager:
import transaction
transaction.abort()  # the transaction package's equivalent of a rollback
Note that rolling back the transaction will undo ALL changes made in the transaction, not only your session.execute() statement. If you want to "undo" just one statement, you can try SQLAlchemy's nested transactions (savepoints), support for which depends on the DBMS you're using.
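A rough sketch of what that might look like, using a SAVEPOINT via begin_nested() (my_update_sql is the statement from your question; this assumes your DBMS and driver support savepoints):
savepoint = session.begin_nested()   # emits SAVEPOINT
try:
    session.execute(my_update_sql)   # the statement you might want to undo
    savepoint.commit()               # RELEASE SAVEPOINT - keep these changes
except Exception:
    savepoint.rollback()             # ROLLBACK TO SAVEPOINT - undo only this statement
    raise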

Related

How to stop SQLAlchemy session transaction in a query?

The SQLAlchemy docs say that when you use a session to query, a transaction is started. In fact, every operation is a transaction.
But this causes a problem for me: I'm using a MySQL middleware, mycat, for read/write splitting, and if you send a transaction, all queries are routed to the write server, even if it is a SELECT query.
I would like queries to run without starting a transaction, without falling back to raw SQL. So how do I stop the SQLAlchemy session from starting a transaction? Or should I switch to a better MySQL middleware?

Thread safety in Slick

I have a general understanding question about how Slick/the database manage asynchronous operations. When I compose a query, or an action, say
(for {
  users <- UserDAO.findUsersAction(usersInput.map(_.email))
  addToInventoriesResult <- insertOrUpdate(inventoryInput, user)
  deleteInventoryToUsersResult <- inventoresToUsers.filter(_.inventoryUuid === inventoryInput.uuid).delete if addToInventoriesResult == 1
  addToInventoryToUsersResult <- inventoresToUsers ++= users.map(u => DBInventoryToUser(inventoryInput.uuid, u.uuid)) if addToInventoriesResult == 1
} yield (addToInventoriesResult)).transactionally
Is there a possibility that another user can, for example, remove the users just after the first action UserDAO.findUsersAction(usersInput.map(_.email)) is executed, but before the rest, such that the insert will fail (because of a foreign key error)? Or a scenario that leads to a lost update, like: transaction A reads data, then transaction B updates this data, then transaction A does an update based on what it read, so it does not see B's update and overwrites it?
I think this probably depends on the database implementation or maybe JDBC, as this is sent to the database as a block of SQL, but maybe Slick plays a role in this. I'm using MySQL.
In case there are synchronisation issues here, what is the best way to solve them? I have read about approaches like a background queue that processes the operations sequentially (as semantic units), but wouldn't this partly remove the benefit of being able to access the database asynchronously and hurt performance?
First of all, if the underlying database driver is blocking (the case with JDBC-based drivers) then Slick cannot deliver async performance in the truly non-blocking sense of the word (i.e. a thread will be consumed and blocked for however long it takes for a given query to complete).
There's been talk of implementing non-blocking drivers for Oracle and SQL Server (under a paid Typesafe subscription) but that's not happening any time soon AFAICT. There are a couple of projects that do provide non-blocking drivers for Postgres and MySQL, but YMMV, still early days.
With that out of the way, when you call transactionally Slick takes the batch of queries to execute and wraps them in a try-catch block, with the underlying connection's autocommit flag set to false. Once the queries have executed successfully the transaction is committed by setting autocommit back to the default, true. In the event an Exception is thrown, the connection's rollback method is called. It's just standard JDBC session boilerplate that Slick conveniently abstracts away.
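Something like the following, sketched here with a Python DB-API connection rather than Scala/JDBC just to show the shape of the boilerplate (connection is assumed to be a MySQLdb-style connection, and run_queries is a hypothetical helper that executes the batch):
connection.autocommit(False)     # JDBC equivalent: setAutoCommit(false)
try:
    run_queries(connection)      # execute the batch of statements
    connection.commit()          # everything succeeded: commit
except Exception:
    connection.rollback()        # any failure: roll the whole batch back
    raise
finally:
    connection.autocommit(True)  # restore the default autocommit setting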
As for your scenario of a user being deleted mid-transaction and handling that correctly, that's the job of the underlying database/driver.

mysql transaction isolation levels with example

Can someone explain "TRANSACTION" and "transaction isolation levels" to me with a good example? I am very confused about using them within my application. I am doing many Insert/Update/Select operations within stored procedures, so please explain in this context (consider auto-commit too). I am using connection pooling on my application server as well.
Thanks.
These are different concepts that all play well together. Transactions are a very basic and important concept in databases, one that I use every day. You can read quite a bit about the most important properties of transactions, ACID, here: http://en.wikipedia.org/wiki/ACID
But I'll try to give you an overview with my own words:
Transactions can be seen as grouping together a set of commands. If you change/add/delete anything in the database within a transaction, then, depending on the isolation level, no one outside that transaction can see these changes. And if you roll back the transaction (for example if an error occurs), no changes are applied to the database at all. If, on the other hand, you decide to commit your transaction, everything that happened within the transaction takes effect at once. So as a good habit, grouping every logical action together in one transaction is a brilliant idea.
Auto-commit is the opposite: every update/insert/delete is implicitly/directly committed as its own transaction. So it can still be seen as a transaction, but you omit the explicit commit at the end of it.
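A classic (made-up) example is a money transfer: the two updates only make sense together, so you group them in one transaction instead of letting auto-commit apply each one on its own. A sketch with MySQLdb from Python (schema and credentials are placeholders):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="bank")
cur = conn.cursor()
try:
    # both statements succeed together or not at all
    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = %s", (1,))
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = %s", (2,))
    conn.commit()    # make both changes permanent at once
except Exception:
    conn.rollback()  # leave no half-finished transfer behind
    raise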
Connection pooling only works correctly if you make sure to use a single connection for the whole transaction. But you usually have to get a connection from the pool first anyway in order to execute your statements, so this is not an issue.
Prepared statements are a bit unconnected to transactions. You can of course use transactions together with prepared statements, but you have to keep in mind that nested transactions are not possible in MySQL.

MySQL transactions implicit commit

I'm starting out with MySQL transactions and I have a question:
In the documentation it says:
Beginning a transaction causes any pending transaction to be
committed. See Section 13.3.3, “Statements That Cause an Implicit
Commit”, for more information.
I have more or less 5 users on the same web application (it is a local application for testing) and all of them share the same MySQL user to interact with the database.
My question is: if I use transactions in the code and two of them start a transaction (because of inserting, updating or something), could the transactions interfere with each other?
I see that the list of statements that cause an implicit commit includes starting a transaction. Being a local application it's fast and hard to tell if something wrong is going on there; every query turns out as expected, but I still have the doubt.
The implicit commit occurs within a session.
So for instance you start a transaction, do some updates and then forget to close the transaction and start a new one. Then the first transaction will be implicitly committed.
However, other connections to the database will not be affected by that; they have their own transactions.
You say that 5 users use the same db user. That is okay. But in order to have them perform separate operations they should not use the same connection/session.
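A quick sketch of what the implicit commit looks like on a single connection (MySQLdb, with placeholder credentials and a made-up table):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="shop")
cur = conn.cursor()
cur.execute("START TRANSACTION")
cur.execute("UPDATE items SET qty = qty - 1 WHERE id = 1")
# We never issue COMMIT, but then begin another transaction:
cur.execute("START TRANSACTION")  # implicitly commits the pending UPDATE above
# A different connection (another user) has its own transaction and is unaffected.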
With MySQL, by default each connection has autocommit turned on. That is, each connection will commit each query immediately. For an InnoDB table each transaction is therefore atomic - it completes entirely and without interference.
For updates that require several operations you can use a transaction by using a START TRANSACTION query. Any outstanding transactions will be committed, but this won't be a problem because mostly they will have been committed anyway.
All the updates performed until a COMMIT query is received are guaranteed to be completed entirely and without interference or, in the case of a ROLLBACK, none are applied.
Other transactions from other connections see a consistent view of the database while this is going on.
These properties are ACID compliance (Atomicity, Consistency, Isolation, Durability). You should be fine with an InnoDB table.
Other table types may implement different levels of ACID compliance. If you have a need to use one you should check it carefully.
This is a much simplified view of transaction handling. There is more detail in the MySQL documentation, and you can read more about ACID compliance on Wikipedia.

Should I commit after a single select

I am working with MySQL 5.0 from python using the MySQLdb module.
Consider a simple function to load and return the contents of an entire database table:
def load_items(connection):
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM MyTable")
    return cursor.fetchall()
This query is intended to be a simple data load and not have any transactional behaviour beyond that single SELECT statement.
After this query is run, it may be some time before the same connection is used again to perform other tasks, though other connections can still be operating on the database in the mean time.
Should I be calling connection.commit() soon after the cursor.execute(...) call to ensure that the operation hasn't left an unfinished transaction on the connection?
There are two things you need to take into account:
the isolation level in effect
what kind of state you want to "see" in your transaction
The default isolation level in MySQL is REPEATABLE READ which means that if you run a SELECT twice inside a transaction you will see exactly the same data even if other transactions have committed changes.
Most of the time people expect to see committed changes when running the second select statement - which is the behaviour of the READ COMMITTED isolation level.
If you did not change the default level in MySQL and you do expect to see committed changes when you run a SELECT twice, then you can't do it in the "same" transaction: you need to commit after your first SELECT so that the next one starts a new transaction.
If, on the other hand, you actually want to see a consistent state of the data throughout your transaction, then obviously you should not commit.
then after several minutes, the first process carries out an operation which is transactional and attempts to commit. Would this commit fail?
That totally depends on your definition of "is transactional". Anything you do in a relational database "is transactional" (that's not entirely true for MySQL actually, but for the sake of argument you can assume this if you are only using InnoDB as your storage engine).
If that "first process" only selects data (i.e. a "read only transaction"), then of course the commit will work. If it tried to modify data that another transaction has already committed and you are running with REPEATABLE READ, you will probably get an error (after waiting until any locks have been released). I'm not 100% sure about MySQL's behaviour in that case.
You should really try this manually with two different sessions using your favorite SQL client to understand the behaviour. Do change your isolation level as well to see the effects of the different levels too.
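For example, a rough version of that experiment with two MySQLdb connections against the MyTable from the question (placeholder credentials; assumes an InnoDB table and the default REPEATABLE READ level):
import MySQLdb

conn_a = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="test")
conn_b = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="test")
cur_a, cur_b = conn_a.cursor(), conn_b.cursor()

cur_a.execute("SELECT COUNT(*) FROM MyTable")     # first read establishes A's snapshot
print(cur_a.fetchone())

cur_b.execute("INSERT INTO MyTable VALUES (42)")  # B inserts a row...
conn_b.commit()                                   # ...and commits it

cur_a.execute("SELECT COUNT(*) FROM MyTable")     # same count: A still sees its snapshot
print(cur_a.fetchone())

conn_a.commit()                                   # end A's transaction (rollback() works too)
cur_a.execute("SELECT COUNT(*) FROM MyTable")     # a new transaction: B's row is now visible
print(cur_a.fetchone())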