
Does db.session.commit() terminate a SQLAlchemy session?
I suspect the answer is no, and that I have to additionally call db.session.close(), but wanted to confirm.

No, it doesn't. It commits the current transaction. You can still issue additional queries after you commit; a new transaction will be started automatically when you do. The relevant documentation is the session basics chapter of the SQLAlchemy docs.
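A minimal sketch of that behavior (the User model and the Flask-SQLAlchemy-style db object are my own assumptions, not from the question):

# commit() ends the transaction, not the session; the session stays usable.
user = User(name="alice")
db.session.add(user)
db.session.commit()                           # transaction committed here

same_user = db.session.query(User).first()    # implicitly begins a new transaction
db.session.close()                            # only this releases the session's resources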

Related

SQLAlchemy Session.commit() hangs

I am using scoped_sessions from SQLAlchemy (against MySQL) and running the SQL commit inside tornado's threadpool. In my unit test, the first Session.commit() passes but the second Session.commit() hangs. I am closing the session properly after the first commit. I enabled SQLAlchemy logging and I can see that nothing is emitted after the INSERT INTO ... for the second commit.
Having the same issue (a hang in a later commit) brought me to this question earlier today. I managed to resolve it by making sure that only one session is in place.
In my case I'm running integration tests with a session per test, doing a rollback in tear-down as described in the docs. I've got all the DB stuff wrapped in a DatabaseService class which is responsible for the test setup (a derived TestDatabaseClass does the session setup in its constructor).
The problem was that I had initialized the DatabaseService instance (with the test setup) twice, and the hang occurred in the later-created nested session's commit (or in an explicit flush call). Making sure that only one DatabaseService exists, so that all DB queries go through one session, resolved the problem for me.
What does your session init look like? Are you not using autoflush? Try calling session.flush() after the commit.
The problem was that I had a pending transaction left open between the two tests. That was causing the next commit never to complete.
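For anyone hitting the same hang: a minimal sketch of the one-session, rollback-in-teardown pattern described above, following the recipe in the SQLAlchemy docs (the engine URL and class names are my own placeholders):

# Each test gets one session bound to an outer transaction that teardown
# rolls back, so nothing a test does can leak into the next test.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("mysql://user:pw@localhost/testdb")  # placeholder URL

class DatabaseTestCase:
    def setup_method(self, method):
        self.connection = engine.connect()
        self.trans = self.connection.begin()            # outer transaction
        self.session = sessionmaker(bind=self.connection)()

    def teardown_method(self, method):
        self.session.close()
        self.trans.rollback()                           # undo everything the test did
        self.connection.close()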

Can MySQL commit fail if individual queries works? [duplicate]

When working with database transactions, what are the possible conditions (if any) that would cause the final COMMIT statement in a transaction to fail, presuming that all statements within the transaction already executed without issue?
For example... let's say you have some two-phase or three-phase commit protocol where you do a bunch of statements, then wait for some master process to tell you when it is ok to finally commit the transaction:
-- <initial handshaking stuff>
START TRANSACTION;
-- <Execute a bunch of SQL statements>
-- <Inform master of readiness to commit>
-- <Time passes... background transactions happening while we wait>
-- <Receive approval to commit from master (finally!)>
COMMIT;
If your code gets to that final COMMIT statement and sends it to your DBMS, can you ever get an error (uniqueness issue, database full, etc) at that statement? What errors? Why? How do they appear? Does it vary depending on what DBMS you run?
COMMIT may fail. You might have had sufficient resources to log all the changes you wished to make, but lack the resources to actually implement the changes.
And that's not considering other reasons it might fail:
The change itself might not fit the constraints of the database.
Power loss stops things from completing.
The level of requested selection concurrency might disallow an update (cursors updating a modified table, for example).
The commit might time out or be on a connection which times out due to starvation issues.
The network connection between the client and the database may be lost.
And all the other "simple" reasons that don't come to mind right now.
It is possible for some database engines to defer UNIQUE index constraint checking until COMMIT. Obviously if the constraint does not hold true at the time of commit then it will fail.
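To illustrate (my assumption: PostgreSQL, which supports deferrable unique constraints, unlike MySQL/InnoDB, which checks them immediately), the violation only surfaces when COMMIT runs:

# Sketch with psycopg2: the duplicate row is accepted at INSERT time because
# the check is deferred, and the error is raised by commit() instead.
import psycopg2

conn = psycopg2.connect(dbname="mydb")                  # placeholder connection
cur = conn.cursor()
cur.execute("""CREATE TABLE t (
                   id integer,
                   CONSTRAINT t_id_unique UNIQUE (id)
                       DEFERRABLE INITIALLY DEFERRED)""")
cur.execute("INSERT INTO t VALUES (1)")
cur.execute("INSERT INTO t VALUES (1)")                 # no error yet
try:
    conn.commit()                                       # unique violation raised here
except psycopg2.Error:
    conn.rollback()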
Sure.
In a multi-user environment, the COMMIT may fail because of changes by other users (e.g. your COMMIT would violate a referential constraint when applied to the now current database...).
If you're using two-phase commit, then no. Everything that could go wrong is done in the prepare phase.
There could still be a network outage, power loss, cosmic rays, etc., during the commit, but even so, the transactions will have been written to permanent storage, and if a commit has been triggered, recovery processes should carry them through.
Hopefully.
Certainly, there could be a number of issues. The act of committing, in and of itself, must make some final, permanent entry to indicate that the transaction committed. If making that entry fails, then the transaction can't commit.
As Ignacio states, there can be deferred constraint checking (this could be any form of constraint, not just unique constraint, depending on the DBMS engine).
SQL Server Specific: flushing FILESTREAM data can be deferred until commit time. That could fail.
One very simple and often overlooked item: hardware failure. The commit can fail if the underlying server dies. This might be disk, CPU, memory, or even network related.
The transaction could fail if it never receives approval from the master (for any number of reasons).
No matter how wonderfully a system may be designed, there is going to be some possibility that a commit will get into a situation where it's impossible to know whether it succeeded or not. In some cases, it may not matter (e.g. if a hard drive holding the database turns into a pile of slag, it may be impossible to tell whether the commit succeeded before that occurred, but it wouldn't really matter); in other cases, however, this could be a problem. Especially with distributed database systems, if a connection failure occurs at just the right time during a commit, it will be impossible for both sides to be certain whether the other side is expecting a commit or a rollback.
With MySQL or MariaDB used with Galera clustering, COMMIT is when the other nodes in the cluster are checked. So yes, important errors can be discovered at COMMIT, and you must check for these errors.
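The practical upshot of all these answers is that client code should treat COMMIT itself as a statement that can fail. A minimal sketch (the table and MySQLdb driver are my own assumptions):

# commit() can raise even though every INSERT succeeded (lost connection,
# Galera certification failure, ...), so pair it with a rollback path.
import MySQLdb

conn = MySQLdb.connect(db="mydb", user="app")           # placeholder credentials
cur = conn.cursor()
try:
    cur.execute("INSERT INTO orders (item) VALUES (%s)", ("widget",))
    conn.commit()
except MySQLdb.Error:
    conn.rollback()                                     # outcome unknown; undo and rethrow
    raise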

How to avoid 'Transaction managed block ended with pending COMMIT/ROLLBACK' error across methods

I have a situation where I have applied the @transaction.commit_manually decorator to a method in which I am importing information passed back in an http request response. I need to control committing and rolling back depending on whether business validation rules pass or fail.
Now, when I have some sort of validation failure, I have a separate method in which I log an error to the database. This action should always commit immediately, while leaving the primary transaction in its current state. However, if I apply the @transaction.commit_on_success decorator to the error-capturing routine, my primary transaction commits automatically as well. If I don't apply the @transaction.commit_on_success decorator, then I receive the 'Transaction managed block ended with pending COMMIT/ROLLBACK' error as soon as a call is made to the error-capturing routine.
I am using MySQL version 5.1.49 with the InnoDB storage engine.
Is there a way to persist the open transaction in the calling routine while committing the transaction in the second routine?
Django's default transaction management doesn't support nested transactions. In general, transactions can't be nested: everything that's done in the midst of a transaction is either committed or rolled back. So when you commit the transaction, no matter where you commit it, the commit is atomic.
Looking around online, I found a snippet that might be a good starting point for you. It essentially overrides the commit_on_success decorator, adding a form of reference counting: it forgoes committing unless it's the outermost block.
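In the spirit of that snippet, here is a rough reconstruction (mine, not the snippet itself) of a reference-counted wrapper; it assumes a pre-1.6 Django where transaction.commit_on_success still exists:

# Only the outermost call gets Django's real commit_on_success; nested calls
# simply run inside the enclosing transaction instead of committing it.
import threading
from functools import wraps
from django.db import transaction

_depth = threading.local()

def nested_commit_on_success(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        level = getattr(_depth, "value", 0)
        _depth.value = level + 1
        try:
            if level == 0:
                return transaction.commit_on_success(func)(*args, **kwargs)
            return func(*args, **kwargs)
        finally:
            _depth.value = level
    return wrapper

For the error-logging case in the question, another common approach is to write the log row over a second database connection, since a commit on one connection cannot touch a transaction open on another.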

How to find out where a COMMIT might be happening?

I'm refactoring some code, converting a number of related updates into a single transaction.
This is using JDBC, MySQL, InnoDB.
I believe there is an unwanted COMMIT still happening somewhere in the (rather large and undocumented) library or application code.
What's the easiest way to find out where this is happening?
There must be some sort of implicit commit, because I'm not finding any COMMIT statements.
Check the page in the MySQL docs listing the statements that cause an implicit commit (mostly DDL, such as CREATE TABLE or ALTER TABLE).
Also, since you're using JDBC, make sure autocommit is false, as in
connection.setAutoCommit(false);
I am not an expert with MySQL, but there should be a way to log all executed statements to a file and/or the console; that will probably help. If you can debug through the code, set breakpoints right before the commits you know about, and then look at the logged statements. That way you'll probably see whether or not there is an unwanted commit.
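As a concrete way to do that logging (the question uses JDBC, but the same idea sketched here in Python with MySQLdb; the log path is a placeholder): MySQL's general query log records every statement the server receives, so implicit commits become visible.

# Enable the general query log (requires SUPER privilege), then re-run the
# suspect code path and grep the log for COMMIT and DDL statements.
import MySQLdb

conn = MySQLdb.connect(db="mydb", user="root")          # placeholder credentials
cur = conn.cursor()
cur.execute("SET GLOBAL general_log_file = '/tmp/mysql-general.log'")
cur.execute("SET GLOBAL general_log = 'ON'")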

Multiple processes accessing Django db backend; records not showing up until manually calling _commit

I have a Django project in which multiple processes are accessing the backend MySQL db. One process is creating records, while a second process is trying to read those records. I am having an issue where the second process can't actually find the records until I manually call connection._commit().
This question has been asked before:
caching issues in MySQL response with MySQLdb in Django
The OP stated that he solved the problem, but didn't quite explain how. Can anyone shed some light on this? I'd like to be able to access the records without manually calling _commit().
He said:
Django's autocommit isn't an actual autocommit in the db.
So, you have to ensure that autocommit is set at the db level. Otherwise, because of transaction isolation, processes will not see changes made by a different process (different connection), until a commit is done. AFAIK this is not especially a Django issue, other than the lack of clarity in the docs about Django autocommit != db autocommit.
Update: Paraphrasing slightly from the MySQL docs:
REPEATABLE READ is the default isolation level for InnoDB. For consistent reads, there is an important difference from the READ COMMITTED isolation level: all consistent reads within the same transaction read the snapshot established by the first read. (My emphasis.)
So, with REPEATABLE READ, subsequent reads within a transaction see the snapshot established by the first read. With READ COMMITTED, each read sets and reads its own fresh snapshot, so you see updates committed by other transactions in the meantime. So, in answer to your comment, your change to the transaction level is correct.
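If READ COMMITTED is what you want, one place to set it is Django's database settings (a sketch, assuming the MySQLdb backend, whose OPTIONS are passed through to the driver):

# settings.py: 'init_command' runs on every new connection, switching it away
# from InnoDB's default REPEATABLE READ.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mydb",
        "OPTIONS": {
            "init_command": "SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED",
        },
    },
}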
Are you running the processes as views? If so, they're probably committing when the request is finished processing, but it sounds like you're running these processes concurrently. If you run the processes outside of a view, they should commit on each save.