The SQLAlchemy docs say that when you use a session to query, the session starts a transaction. In effect, every operation runs inside a transaction.
This creates a problem for me: I'm using a MySQL middleware, mycat, for read/write splitting. If a query is sent inside a transaction, mycat routes it to the write (master) server, even when it is a plain SELECT.
I would like SELECT queries to run without a transaction, without dropping down to raw SQL. How can I stop the SQLAlchemy session from starting a transaction? Or is there a better MySQL middleware for this?
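One approach, as an untested sketch (assuming SQLAlchemy 1.4+; the mycat DSN below is a placeholder): derive an engine whose connections use driver-level autocommit, so SQLAlchemy emits no BEGIN and the proxy sees a bare SELECT it can route to a replica:

from sqlalchemy import create_engine, text

engine = create_engine("mysql+pymysql://user:pass@mycat-host:8066/db")  # placeholder DSN

# Derived engine whose connections run in driver-level autocommit:
# no implicit BEGIN, so each SELECT stands alone.
autocommit_engine = engine.execution_options(isolation_level="AUTOCOMMIT")

with autocommit_engine.connect() as conn:
    rows = conn.execute(text("SELECT * FROM t")).fetchall()

Queries that must stay transactional (writes, multi-statement units) can keep using the normal engine or session.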
Related
https://dev.mysql.com/doc/refman/5.6/en/innodb-autocommit-commit-rollback.html
In InnoDB, all user activity occurs inside a transaction. If autocommit mode is enabled, each SQL statement forms a single transaction on its own.
If each SQL statement is its own transaction, how does the MySQL proxy decide whether a SELECT should be routed to the master or to a replica?
I will quote some text from another question here:
The PreparedStatement is a slightly more powerful version of a Statement, and should always be at least as quick and easy to handle as a Statement.
A PreparedStatement may be parameterized.
Most relational databases handle a JDBC / SQL query in four steps:
1. Parse the incoming SQL query
2. Compile the SQL query
3. Plan/optimize the data acquisition path
4. Execute the optimized query / acquire and return data
A Statement will always proceed through all four steps above for each SQL query sent to the database. A PreparedStatement pre-executes steps (1)-(3) of the process above. Thus, when a PreparedStatement is created, some pre-optimization is performed immediately. The effect is to lessen the load on the database engine at execution time.
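The quoted text is about JDBC (Java), but the idea carries over to Python. A hedged sketch using mysql-connector-python's server-side prepared statements (an assumption; connection parameters are placeholders):

import mysql.connector  # assumption: mysql-connector-python driver

cnx = mysql.connector.connect(user="user", password="pass", database="db")
cur = cnx.cursor(prepared=True)  # statements on this cursor are server-prepared

# Steps (1)-(3) -- parse, compile, plan -- happen on the first execute;
# subsequent executes of the same statement only bind new parameter values.
cur.execute("SELECT * FROM accounts WHERE id = %s", (42,))
rows = cur.fetchall()
cur.execute("SELECT * FROM accounts WHERE id = %s", (43,))  # reuses the prepared plan
rows = cur.fetchall()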
Now here is my question:
If I use hundreds or thousands of PreparedStatements, will that cause performance problems in the database? (I don't mean that each one will run more slowly because there is more work to do every time.) Will all those statements be cached in the database, or will they be discarded as soon as they are executed?
Since there are no restrictions on using prepared statements, you should work with them carefully.
As you say you need hundreds of prepared statements, think twice: maybe you are using them wrong.
The pattern they are meant for is an application doing heavy inserts/updates/selects hundreds or thousands of times a second, where the statements differ only in their variables. In the real world that looks like: connect, create a session, send the statement once, then send bunches of variables to that statement (see the sketch below).
But if your plan is to create a prepared statement for each single operation, you are better off with plain queries.
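To illustrate the intended pattern, a hedged sketch (event_stream() and the events table are hypothetical): one statement prepared once, executed many times with only the bound variables changing:

import mysql.connector  # assumption: mysql-connector-python driver

cnx = mysql.connector.connect(user="user", password="pass", database="db")
cur = cnx.cursor(prepared=True)

insert = "INSERT INTO events (kind, payload) VALUES (%s, %s)"
# One statement, prepared once; only the variables change per execution.
for kind, payload in event_stream():  # hypothetical source of many rows
    cur.execute(insert, (kind, payload))
cnx.commit()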
On your questions:
Hundreds of prepared statements will not kill MySQL or cause performance degradation.
Prepared statements are stored in memory while the client session is up and running. As soon as you close the session, the prepared statements die.
To be sure you need them:
Your app should execute statements fast enough that you actually get the speed benefit of using them.
Your query should not have a variable number of arguments; otherwise you can kill your app by creating objects and storing them in memory on every statement.
I have a multithreaded application that periodically fetches the whole content of a MySQL table (with a SELECT * FROM query).
The application is written in Python, uses the threading module for multithreading, and uses mysql-python (MySQLdb) as the MySQL driver (using SQLAlchemy as a wrapper produces similar results).
I use the InnoDB engine for my MySQL database.
I wrote a simple test to check the performance of SELECT * queries in parallel and discovered that all of those queries are executed sequentially.
I explicitly set the isolation level to READ UNCOMMITTED, although it does not seem to help with performance.
The code snippet making the DB call is below:
import Queue  # Python 2 stdlib; get_nowait() raises Queue.Empty when drained

# @performance.profile()
def test_select_all_raw_sql(conn_pool, queue):
    '''
    conn_pool - connection pool to get a MySQL connection from
    queue - task queue
    '''
    query = '''SELECT * FROM table'''
    conn = conn_pool.connect()
    cursor = conn.cursor()
    try:
        cursor.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED")
        # execute until the queue is empty (Queue.Empty is thrown)
        while True:
            task_id = queue.get_nowait()  # 'id' would shadow the builtin
            cursor.execute(query)
            result = cursor.fetchall()
    except Queue.Empty:
        pass
    finally:
        cursor.execute("SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ")
        conn.close()
Am I right to expect these queries to be executed in parallel?
If so, how can I implement that in Python?
MySQL allows many connections, from a single user or from many users. Within any one connection, it uses at most one CPU core and runs one SQL statement at a time.
A "transaction" can be composed of multiple SQL statements, and the transaction is treated atomically. Consider the classic banking application:
BEGIN;
UPDATE ... -- decrement from one user's bank balance.
UPDATE ... -- increment another user's balance.
COMMIT;
Those statements are performed serially (in a single connection); either all of them succeed or all of them fail as a unit ("atomically").
If you need to do things in "parallel", have a client (or clients) that can run multiple threads (or processes) and have each one make its own connection to MySQL.
A minor exception: There are some extra threads 'under the covers' for doing background tasks such as read-ahead or delayed-write or flushing stuff. But this does not give the user a way to "do two things at once" in a single connection.
What I have said here applies to all versions of MySQL/MariaDB and all client packages accessing them.
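To make that concrete, a minimal untested sketch of the one-connection-per-thread advice, using the same MySQLdb driver as the question above (host and credentials are placeholders):

import threading

import MySQLdb  # assumption: mysql-python, as in the question

def worker():
    # Each thread opens its own connection; the driver releases the GIL
    # while waiting on the server, so the SELECTs can overlap.
    conn = MySQLdb.connect(host="localhost", user="user", passwd="pass", db="db")
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM table")
    cursor.fetchall()
    conn.close()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()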
I'm using the DBI package to send queries to a MySQL server. I'd like to ensure that these queries are sent as a single transaction in order to avoid table locks.
I use the dbSendQuery function to send queries:
df <- fetch(dbSendQuery(connection,
                        statement = "SELECT * FROM table"),
            n = -1)
The DBI package says little about handling transactions, but what it does have is listed under the functions dbCommit, dbRollback, and dbCallProc, under the header:
Note: The following methods deal with transactions and stored procedures.
in the vignette. None of them seem to relate to sending queries as a single transaction.
How can I make sure I'm sending these queries as a single transaction?
Warning: not tested.
You would need some help from MySQL. By default, MySQL runs with autocommit mode enabled; to disable it for a group of statements, you need to issue a START TRANSACTION statement. I suspect dbCommit and dbRollback simply execute COMMIT and ROLLBACK, respectively.
Details: http://dev.mysql.com/doc/refman/5.0/en/commit.html
So you would need to do something like
dbSendQuery(connection, "START TRANSACTION")
# add your dbSendQuery code here
dbCommit(connection)
Do I need to lock tables if I use PDO transactions?
If user A has 50 money and transfers 50 to user B, will a PDO transaction make sure both statements are executed, or neither (i.e., that the transfer is atomic)?
Also, say I have an if statement like:
if ($user['money'] > 500) {
    $dbc->beginTransaction();
    .........
    $dbc->commit();
}
How can I ensure that the value of the user's money doesn't change while the transaction is running (and if it has changed, the query shouldn't run)?
Thanks
Transaction processing is guaranteed by the SQL server. If beginTransaction() succeeds, commit() succeeds, and your SQL server and table support transactions, then you can be sure of it.
PDO is an abstraction layer, so it depends on your database. MySQL supports transactions, but only for InnoDB tables; with other storage engines you only get table locks (which is not the same thing). SQLite always supports transactions. Another database might not support them at all.
Transactions still require YOU (the developer) to create and verify the logic. The database doesn't know what's right (correct, not just) and what's wrong (incorrect); you do. So you have to write a script that calls BEGIN and COMMIT/ROLLBACK when appropriate.
Also worth noting: a transaction isn't automatically rolled back after a database error. (Maybe some databases or DBALs do that, but it's not standard and you shouldn't count on it.) This means YOU have to check the result/response of every query AND act appropriately (e.g., by calling ROLLBACK).
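A hedged Python sketch of that discipline using PyMySQL (PDO's beginTransaction/commit/rollBack map one-to-one to begin/commit/rollback here; table and credentials are placeholders). SELECT ... FOR UPDATE is one standard way to keep the balance from changing mid-transaction, which the question above asks about:

import pymysql  # assumption: stand-in for PDO in this sketch

conn = pymysql.connect(host="localhost", user="user", password="pass", db="bank")
try:
    conn.begin()  # like $dbc->beginTransaction()
    with conn.cursor() as cur:
        # FOR UPDATE holds a row lock until COMMIT/ROLLBACK (InnoDB only),
        # so the balance cannot change between the check and the transfer.
        cur.execute("SELECT money FROM users WHERE id = %s FOR UPDATE", (1,))
        (money,) = cur.fetchone()
        if money >= 50:
            cur.execute("UPDATE users SET money = money - 50 WHERE id = %s", (1,))
            cur.execute("UPDATE users SET money = money + 50 WHERE id = %s", (2,))
    conn.commit()  # like $dbc->commit()
except Exception:
    conn.rollback()  # the database will not roll back for you; YOU do it
    raise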