Should I commit after a single select - mysql

I am working with MySQL 5.0 from python using the MySQLdb module.
Consider a simple function to load and return the contents of an entire database table:
def load_items(connection):
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM MyTable")
    return cursor.fetchall()
This query is intended to be a simple data load and not have any transactional behaviour beyond that single SELECT statement.
After this query is run, it may be some time before the same connection is used again to perform other tasks, though other connections can still be operating on the database in the meantime.
Should I be calling connection.commit() soon after the cursor.execute(...) call to ensure that the operation hasn't left an unfinished transaction on the connection?

There are two things you need to take into account:
the isolation level in effect
what kind of state you want to "see" in your transaction
The default isolation level in MySQL is REPEATABLE READ which means that if you run a SELECT twice inside a transaction you will see exactly the same data even if other transactions have committed changes.
Most of the time people expect to see committed changes when running the second select statement - which is the behaviour of the READ COMMITTED isolation level.
If you did not change the default level in MySQL and you do expect to see committed changes when you run a SELECT twice, then you can't do it in the "same" transaction: you need to commit (or roll back) after your first SELECT statement so that the next one starts a new snapshot.
If you actually want to see a consistent state of the data throughout your transaction, then of course you should not commit.
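Applied to the function from the question, a minimal sketch (assuming the MySQLdb driver with autocommit left off, as in the question) is simply to end the read-only transaction as soon as the rows have been fetched:
import MySQLdb  # driver from the question

def load_items(connection):
    cursor = connection.cursor()
    cursor.execute("SELECT * FROM MyTable")
    rows = cursor.fetchall()
    cursor.close()
    # End the implicit read-only transaction so this connection does not keep
    # holding an old REPEATABLE READ snapshot while it sits idle;
    # rollback() would work just as well here, since nothing was changed.
    connection.commit()
    return rows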
As for the follow-up question, "then after several minutes, the first process carries out an operation which is transactional and attempts to commit. Would this commit fail?":
That totally depends on your definition of "is transactional". Anything you do in a relational database "is transactional" (That's not entirely true for MySQL actually, but for the sake of argumentation you can assume this if you are only using InnoDB as your storage engine).
If that "first process" only selects data (i.e. a "read only transaction"), then of course the commit will work. If it tried to modify data that another transaction has already committed and you are running with REPEATABLE READ you probably get an error (after waiting until any locks have been released). I'm not 100% about MySQL's behaviour in that case.
You should really try this manually with two different sessions using your favorite SQL client to understand the behaviour. Do change your isolation level as well to see the effects of the different levels too.
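If you would rather script that experiment than type it into two client windows, a rough sketch of it with MySQLdb could look like this (the connection details, table and column names are placeholders, not from the question):
import MySQLdb  # two independent connections act as two separate sessions

conn_a = MySQLdb.connect(host="localhost", user="test", passwd="test", db="testdb")
conn_b = MySQLdb.connect(host="localhost", user="test", passwd="test", db="testdb")

cur_a = conn_a.cursor()
cur_b = conn_b.cursor()

# Run the experiment with and without this line to compare the levels.
cur_a.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED")

cur_a.execute("SELECT COUNT(*) FROM MyTable")   # session A's transaction starts here
print(cur_a.fetchone())

cur_b.execute("INSERT INTO MyTable (name) VALUES ('new row')")  # 'name' is a made-up column
conn_b.commit()                                 # session B's change is now committed

cur_a.execute("SELECT COUNT(*) FROM MyTable")
print(cur_a.fetchone())  # READ COMMITTED: sees B's row; REPEATABLE READ: still the
conn_a.commit()          # old count until session A ends its transaction like this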

Related

Are changes made to DB only through transactions?

I am not able to get a clear complete understanding regarding the role of transactions in databases.
I know operations clubbed in a transactions will be executed together and then either committed or rolled back.
But then what about any other query that I write to the database without manually creating a transaction?
Is a transaction created internally for them?
Also what about select statements then? Are transactions created for them too?
I have been using databases and SQL for some time now, but alas I am not clear on these points.
Are changes to DBs happening only through transactions? Short answer is yes.
There is always a transaction involved:
It might be automatically started (before) and committed (after) every single DML statement you issue, if you're relying on the AUTOCOMMIT behaviour of your database session
Or you may explicitly start one with BEGIN, execute your statements and end it with COMMIT
I like to think of a transaction as a boundary that imposes clear semantics of ATOMICITY and ISOLATION on the statements that are contained within.
You describe atomicity (all-or-nothing behaviour), but that is not the only guarantee a transaction can give you: there's also isolation, which has to do with the reads you perform within a transaction (e.g. SELECTs).
In a concurrent application (many clients writing and reading to the same db/table at the same time), transaction ISOLATION is the property that defines "how much of the effects of other operations" can be observed in the current one. For example, assume you need to perform a transaction that involves doing the same SELECT multiple times: do you want this SELECT to return (possibly) different results each time (because some modification happened concurrently) or not?
For single statements:
A single DML statement (UPDATE, INSERT...) by itself effectively behaves as if it were in a transaction with that single statement, committed immediately after execution (either it works like this because you're in AUTOCOMMIT, or because you wrapped the single statement within BEGIN...COMMIT).
For a single SELECT it's the same. The transaction in this case (implicit or not) gives you the possibility of specifying different isolation levels. It might sound strange to consider transactions for SELECTs, but requiring particular isolation levels might mean that the db is acquiring some lock on the data under the hood: committing the transaction in that case would release such a lock.
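To make the two cases above concrete, here is a small sketch with MySQLdb (only an illustration; the connection details, table and column names are made up):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="test", passwd="test", db="testdb")
cur = conn.cursor()

# AUTOCOMMIT: the single statement is its own transaction and is
# committed as soon as it finishes.
conn.autocommit(True)
cur.execute("UPDATE items SET qty = qty - 1 WHERE id = 1")

# The equivalent single statement wrapped in an explicit transaction.
conn.autocommit(False)
cur.execute("START TRANSACTION")
cur.execute("UPDATE items SET qty = qty - 1 WHERE id = 1")
conn.commit()  # until this point the change is invisible to other sessions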
Since you tagged mysql, here you can read on transaction isolations supported by mysql:
https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html
A SQL transaction is any statement that contains Data Manipulation Language (DML). That is, any statement that changes values in a table, such as UPDATE, INSERT, MERGE, DELETE, etc.

What are the consequences of accessing the same database from different programs at the same time?

I have a mysql database that is accessed using JDBC. If I access the database from two different programs at the same time then what effect will be there on the database?
Please explain this for the cases when both programs are reading the database, when one is reading while the other is writing data, and when both are writing data.
I think that when both programs write data then that would definitely lead to loss of data. But what happens in the other scenarios?
MySQL works on an ACID basis: http://en.wikipedia.org/wiki/ACID
Which means, both clients will be reading the database as if they were the only clients.
For this to happen each client must start a transaction, which is a single logical unit of work. Within this transaction either all the operations done to the database must be committed or rolled back.
Different RDBMSs have different defaults for their transaction support. For MySQL, the isolation level is REPEATABLE READ, which means that SELECT statements within the same transaction are consistent with respect to each other.
How you can verify this:
Have program 1 start a transaction and go through every row, increasing a value, while the other program starts a transaction and goes through the database calculating the sum of the same value over all rows. When they are done, they close their transactions and print out the results. You will notice that both of them read the database as if they were isolated from each other.
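A condensed sketch of that experiment in Python with MySQLdb (the question uses JDBC, but the behaviour is the same; the connection details, table and column names are made up):
import MySQLdb

reader = MySQLdb.connect(host="localhost", user="test", passwd="test", db="testdb")
writer = MySQLdb.connect(host="localhost", user="test", passwd="test", db="testdb")

r = reader.cursor()
r.execute("SELECT SUM(value) FROM counters")        # reader's snapshot starts here
total_before = r.fetchone()[0]

w = writer.cursor()
w.execute("UPDATE counters SET value = value + 1")  # writer touches every row
writer.commit()

r.execute("SELECT SUM(value) FROM counters")
total_after = r.fetchone()[0]
print(total_before, total_after)  # equal under REPEATABLE READ: the reader is isolated
reader.commit()                   # a new transaction would now see the writer's changes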
There are whole books written about JDBC. Here are some links that can get you started:
JDBC Tutorial: http://docs.oracle.com/javase/tutorial/jdbc/
MySQL: http://dev.mysql.com/doc/refman/5.0/en/innodb-consistent-read.html
Fortunately MySQL, like PostgreSQL, MariaDB and other major databases, is designed to be used by many programs at once, each of which may hold many connections. The database will not break even if multiple programs try to update the same row at the same time. But ... coordinating that is the job of the client programs, via transactions.
Welcome to the world of ACID transactions! Within a transaction, the database guarantees that the program sees a consistent state of the data. There are no real problems with Atomicity, Consistency and Durability, but Isolation is a little more tedious. JDBC defines four levels of isolation, plus no transaction at all (the following is extracted from The Java Tutorials: Using Transactions):
The interface Connection includes five values that represent the transaction isolation levels you can use in JDBC:
Isolation Level              | Transactions  | Dirty Reads    | Non-Repeatable Reads | Phantom Reads
TRANSACTION_NONE             | Not supported | Not applicable | Not applicable       | Not applicable
TRANSACTION_READ_COMMITTED   | Supported     | Prevented      | Allowed              | Allowed
TRANSACTION_READ_UNCOMMITTED | Supported     | Allowed        | Allowed              | Allowed
TRANSACTION_REPEATABLE_READ  | Supported     | Prevented      | Prevented            | Allowed
TRANSACTION_SERIALIZABLE     | Supported     | Prevented      | Prevented            | Prevented
Accessing an updated value that has not been committed is considered a dirty read because it is possible for that value to be rolled back to its previous value.
A non-repeatable read occurs when transaction A retrieves a row, transaction B subsequently updates the row, and transaction A later retrieves the same row again. Transaction A retrieves the same row twice but sees different data.
A phantom read occurs when transaction A retrieves a set of rows satisfying a given condition, transaction B subsequently inserts or updates a row such that the row now meets the condition in transaction A, and transaction A later repeats the conditional retrieval. Transaction A now sees an additional row. This row is referred to as a phantom.

MySQL transactions implicit commit

I'm starting out with MySQL transactions and I have a doubt:
In the documentation it says:
Beginning a transaction causes any pending transaction to be
committed. See Section 13.3.3, “Statements That Cause an Implicit
Commit”, for more information.
I have more or less 5 users on the same web application ( It is a local application for testing ) and all of them share the same MySQL user to interact with the database.
My question is: If I use transactions in the code and two of them start a transaction ( because of inserting, updating or something ) Could it be that the transactions interfere with each other?
I see that the list of statements that cause an implicit commit includes starting a transaction. Since it is a local application it's fast, and it's hard to tell if something wrong is going on there; every query turns out as expected, but I still have the doubt.
The implicit commit occurs within a session.
So, for instance, if you start a transaction, do some updates, forget to close the transaction and then start a new one, the first transaction will be implicitly committed.
However, other connections to the database will not be affected by that; they have their own transactions.
You say that 5 users use the same db user. That is okay. But in order to have them perform separate operations they should not use the same connection/session.
With MySQL, by default each connection has autocommit turned on. That is, each connection will commit each query immediately. For an InnoDB table each such transaction is therefore atomic - it completes entirely and without interference.
For updates that require several operations you can use a transaction by using a START TRANSACTION query. Any outstanding transactions will be committed, but this won't be a problem because mostly they will have been committed anyway.
All the updates performed until a COMMIT query is received are guaranteed to be completed entirely and without interference or, in the case of a ROLLBACK, none are applied.
Other transactions from other connections see a consistent view of the database while this is going on.
This property is ACID compliance (Atomicity, Consistency, Isolation, Durability). You should be fine with an InnoDB table.
Other table types may implement different levels of ACID compliance. If you have a need to use one you should check it carefully.
This is a much simplified view of transaction handling. There is more detail on the MySQL web site here and you can read about ACID compliance here.
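For example, an update that needs several statements to stay consistent could be sketched like this with MySQLdb (the connection details, table and column names are made up):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="test", passwd="test", db="testdb")
cur = conn.cursor()
try:
    cur.execute("START TRANSACTION")
    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
    conn.commit()    # both updates become visible to other connections together
except MySQLdb.Error:
    conn.rollback()  # on any error, neither update is applied
    raise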

Getting stale results in multiprocessing environment

I am using 2 separate processes via multiprocessing in my application. Both have access to a MySQL database via sqlalchemy core (not the ORM). One process reads data from various sources and writes them to the database. The other process just reads the data from the database.
I have a query which gets the latest record from a table and displays the id. However it always displays the first id which was created when I started the program, rather than the latest inserted id (new rows are created every few seconds).
If I use a separate MySQL tool and run the query manually I get correct results, but SQLAlchemy is always giving me stale results.
Since you can see the changes your writer process is making with another MySQL tool that means your writer process is indeed committing the data (at least, if you are using InnoDB it does).
InnoDB shows you the state of the database as of when you started your transaction. Whatever other tools you are using probably have an autocommit feature turned on where a new transaction is implicitly started following each query.
To see the changes in SQLAlchemy do as zzzeek suggests and change your monitoring/reader process to begin a new transaction.
One technique I've used to do this myself is to add autocommit=True to the execution_options of my queries, e.g.:
result = conn.execute( select( [table] ).where( table.c.id == 123 ).execution_options( autocommit=True ) )
Assuming you're using InnoDB, the data on your connection will appear "stale" for as long as you keep the current transaction running, or until you commit the other transaction. In order for one process to see the data from the other process, two things need to happen: 1. the transaction that created the new data needs to be committed, and 2. the current transaction, assuming it has already read some of that data, needs to be rolled back or committed and started again. See The InnoDB Transaction Model and Locking.
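Putting that together for the reader process, a rough sketch with SQLAlchemy Core might bracket every poll in its own short transaction (the engine URL, table and column names below are placeholders, not taken from the question):
import time
from sqlalchemy import create_engine, text

engine = create_engine("mysql://user:password@localhost/mydb")  # placeholder URL
conn = engine.connect()

while True:
    trans = conn.begin()   # a fresh transaction means a fresh InnoDB snapshot
    row = conn.execute(
        text("SELECT id FROM records ORDER BY id DESC LIMIT 1")
    ).fetchone()
    trans.commit()         # end it, so the next poll can see newly committed rows
    print(row[0] if row else None)
    time.sleep(5)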

Restore Truncated Data In Mysql [duplicate]

I made a wrong update query in my table.
I forgot to make an id field in the WHERE clause.
So that updated all my rows.
How to recover that?
I didn't have a backup....
There are two lessons to be learned here:
Backup data
Perform UPDATE/DELETE statements within a transaction, so you can use ROLLBACK if things don't go as planned
Being aware of the transaction (autocommit, explicit and implicit) handling for your database can save you from having to restore data from a backup.
Transactions control data manipulation statement(s) to ensure they are atomic. Being "atomic" means the transaction either occurs, or it does not. The only way to signal the completion of the transaction to the database is by using either a COMMIT or ROLLBACK statement (per ANSI-92, which sadly did not include syntax for creating/beginning a transaction, so it is vendor specific). COMMIT applies the changes (if any) made within the transaction. ROLLBACK disregards whatever actions took place within the transaction - highly desirable when an UPDATE/DELETE statement does something unintended.
Typically individual DML (Insert, Update, Delete) statements are performed in an autocommit transaction - they are committed as soon as the statement successfully completes. Which means there's no opportunity to roll back the database to the state prior to the statement having been run in cases like yours. When something goes wrong, the only restoration option available is to reconstruct the data from a backup (providing one exists). In MySQL, autocommit is on by default for InnoDB - MyISAM doesn't support transactions. It can be disabled by using:
SET autocommit = 0
An explicit transaction is when statement(s) are wrapped within an explicitly defined transaction code block - for MySQL, that's START TRANSACTION. It also requires an explicitly issued COMMIT or ROLLBACK statement at the end of the transaction. Nested transactions are beyond the scope of this topic.
Implicit transactions are slightly different from explicit ones. Implicit transactions do not require explicitly defining a transaction. However, like explicit transactions they require a COMMIT or ROLLBACK statement to be supplied.
Conclusion
Explicit transactions are the ideal solution - they require a statement, COMMIT or ROLLBACK, to finalize the transaction, and what is happening is clearly stated for others to read should there be a need. Implicit transactions are OK if working with the database interactively, but COMMIT statements should only be issued once the results have been tested and thoroughly determined to be valid.
That means you should use:
SET autocommit = 0;
START TRANSACTION;
UPDATE ...;
...and only use COMMIT; when the results are correct.
That said, UPDATE and DELETE statements typically only return the number of rows affected, not specific details. Convert such statements into SELECT statements & review the results to ensure correctness prior to attempting the UPDATE/DELETE statement.
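A sketch of that workflow from Python (MySQLdb; the connection details, table, column and id are made up):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="test", passwd="test", db="testdb")
cur = conn.cursor()

# First preview exactly which rows would be touched.
cur.execute("SELECT id, status FROM orders WHERE id = %s", (42,))
print(cur.fetchall())

# Then run the UPDATE inside an explicit transaction and check the row count...
cur.execute("START TRANSACTION")
cur.execute("UPDATE orders SET status = 'shipped' WHERE id = %s", (42,))
print(cur.rowcount)

# ...and only commit once the numbers look right.
conn.commit()   # or conn.rollback() if they do not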
Addendum
DDL (Data Definition Language) statements are automatically committed - they do not require a COMMIT statement. IE: Table, index, stored procedure, database, and view creation or alteration statements.
Sorry man, but the chances of restoring an overwritten MySQL database are usually close to zero. Different from deleting a file, overwriting a record actually and physically overwrites the existing data in most cases.
To be prepared if anything comes up here, you should stop your MySQL server, and make a copy of the physical directory containing the database so nothing can get overwritten further: A simple copy+paste of the data folder to a different location should do.
But don't get your hopes up - I think there's nothing that can be done really.
You may want to set up a frequent database backup for the future. There are many solutions around; one of the simplest, most reliable and easiest to automate (using at or cron in Linux, or the task scheduler in Windows) is MySQL's own mysqldump.
Sorry to say that, but there is no way to restore the old field values without a backup.
Don't shoot the messenger...
Do you have binlogs enabled? You can recover by accessing the binlogs.