Performing a transaction across multiple statements in phpMyAdmin - mysql

I'm not sure whether this is an issue with phpMyAdmin or whether I'm not fully understanding how transactions work, but I want to be able to step through a series of queries within a transaction and either ROLLBACK or COMMIT based on the returned results. I'm using the InnoDB storage engine.
Here's a basic example:
START TRANSACTION;
UPDATE students
SET lastname = "jones"
WHERE studentid = 1;
SELECT * FROM students;
ROLLBACK;
Run as a single query, this works entirely fine, and if I'm happy with the results, I can re-run the entire query with COMMIT.
However, if all these queries can be run separately, why does phpMyAdmin lose the transaction?
For example, if I do this:
START TRANSACTION;
UPDATE students
SET lastname = "jones"
WHERE studentid = 1;
SELECT * FROM students;
Then this;
COMMIT;
SELECT * FROM students;
The update I made in the transaction is lost, and lastname retains its original value, as if the update never took place. I was under the impression that transactions can span multiple queries, and I've seen a couple of examples of this:
1: Entirely possible in Navicat, a different IDE
2: Also possible in PHP via MySQLi
Why then am I losing the transaction in phpMyAdmin, if transactions are able to span multiple individual queries?
Edit 1: After doing a bit of digging, it appears that there are two other ways a transaction can be implicitly ended in MySQL:
Disconnecting a client session will implicitly end the current transaction. Changes will be rolled back.
Killing a client session will implicitly end the current transaction. Changes will be rolled back.
Is it possible that phpMyAdmin is ending the client session after Go is hit and a query is submitted?
Edit 2:
Just to confirm that this is a phpMyAdmin-specific issue, I ran the same statements as multiple separate queries in MySQL Workbench, and it worked exactly as intended, retaining the transaction, so it appears to be a failure on phpMyAdmin's part.

Is it possible that phpMyAdmin is ending the client session after Go is hit and a query is submitted?
That is pretty much how PHP works. You send the request, it gets processed, and once done, everything (including MySQL connections) gets thrown away. With the next request, you start afresh.
There is a feature called persistent connections, but that does its cleanup as well. Otherwise the code would somehow have to hand the same user the same connection every time, which could prove very difficult given the way PHP works.
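For comparison, here is a minimal sketch (not from the original answer) of the same transaction held open across several statements in plain PHP with mysqli, i.e. within a single connection; the credentials are placeholders, and the students table is the one from the question:
<?php
// One mysqli connection kept open for the whole transaction.
// Host/credentials/database are placeholders; table and columns are from the question.
$db = new mysqli('localhost', 'user', 'password', 'school');
$db->begin_transaction();

$db->query('UPDATE students SET lastname = "jones" WHERE studentid = 1');

// Inspect the intermediate state inside the same connection/transaction.
$result = $db->query('SELECT lastname FROM students WHERE studentid = 1');
$row = $result->fetch_assoc();

if ($row['lastname'] === 'jones') {
    $db->commit();    // happy with the result: make it permanent
} else {
    $db->rollback();  // otherwise undo the update
}
$db->close();
Because the connection (and with it the transaction) only lives for this one script run, the decision to COMMIT or ROLLBACK has to happen before the script ends, which is exactly what phpMyAdmin cannot offer across two separate form submissions.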

Related

Unwanted delayed UPDATE through operating system scheduling?

We are running into a very strange problem with disjunct concurrent PHP processes accessing the same table (using table locks).
There is no replication involved; we're working on a monolith with the mysqli interface of PHP 5.6.40 (I know, upgrading is due, we're working on it).
Let's say the initial value of a field named "value" in xyz is 0.
PHP-Process 1: Modifies the table
LOCK TABLE xyz WRITE;
UPDATE xyz SET value = 1;
UNLOCK TABLE xyz;
PHP-Process 2: Depends on a value in that table (e.g. a check for access rights)
SELECT value from xyz;
Now, if we manage to make Process 2 halt and wait for the lock to be released, everything is fine on a local dev environment (XAMPP, MariaDB 10.1.x): it will get the value 1.
BUT, on our production server (Debian Linux, MySQL 5.6.x) there is a seemingly necessary wait period before the value materializes in query results.
An immediate SELECT statement delivers 0
sleep(1) then SELECT delivers 1
We always assumed that a) LOCK / UNLOCK will flush tables, or b) a manual FLUSH TABLES xyz WITH READ LOCK will also flush caches, force writing to disk and generally ensure that every following query from every other process will yield the expected result.
What we tried so far:
FLUSH TABLES as mentioned - no result
Explicitly acquire a LOCK before executing the SELECT statement - no result
Just waiting some time - this yielded the result we are looking for, but it is a dirty, unreliable solution.
What do you think? What might be the cause? I was thinking of: the query cache not updating in time, or the OS's paging not writing data back to disk in time / not invalidating the in-memory page of the table data.
Is there any way you know of to reliably ensure consistency of the data across consecutive queries?
The default transaction isolation level differs between MariaDB versions.
You have to set up the same mode if you expect the same result. It also seems odd to test it on different MySQL versions.
https://mariadb.com/kb/en/mariadb-transactions-and-isolation-levels-for-sql-server-users/
Your second process may start its transaction long before the first process's COMMIT is actually issued.
If you do not want to dig into transaction isolation, just try doing a ROLLBACK before the SELECT (but the correct solution is to determine exactly what isolation level your app requires).
ROLLBACK; -- may give an error, but that is okay.
SELECT value from xyz;
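In mysqli terms (the question mentions PHP 5.6 with mysqli), that suggestion amounts to something like the following sketch; the connection credentials are placeholders, and only the xyz table and value column are taken from the question:
<?php
// Placeholder connection; in the real app this would be Process 2's existing mysqli handle.
$db = new mysqli('localhost', 'user', 'password', 'mydb');

// Option 1: end any transaction/snapshot that may still be open before reading.
$db->query('ROLLBACK');   // may be a no-op or give an error outside a transaction; that is okay
$res = $db->query('SELECT value FROM xyz');

// Option 2: make this session's reads see freshly committed rows.
$db->query('SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED');
$res = $db->query('SELECT value FROM xyz');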

START TRANSACTION and COMMIT not working in separate queries in MySQL

When I send a multi-line, semicolon-separated query (i.e. 3 separate statements), it works fine: depending on whether I finish it with a COMMIT or ROLLBACK, it either inserts the values or rolls back. BUT when I enter them as three separate queries, one after another, it doesn't work. (I'm using phpMyAdmin.)
The latter ought to make more sense, as I think this is the whole point of transactions: sending queries within one session (transaction) and deciding only at the end whether we want to run them or discard the changes to the table.
START TRANSACTION;
INSERT INTO x VALUES ('y');
COMMIT;
phpMyAdmin doesn't work that way: it doesn't maintain the session between each submission of the form, so you won't get the functionality you're looking for this way.
In code, on the other hand, this will work as intended because you're opening up the connection once, running 3 individual queries, and then closing the connection.

Partially commit MySQL Transaction?

I want to know if there's a way to commit a transaction partially. I have a long-running transaction in C#, and when two users are running this transaction in parallel, the data is co-dependent and should be visible to both of them even while the transaction is in progress. For example, say I have a table with these 3 columns:
username | left_child | right_child
I am making a binary tree, and whenever a new user is added to the database they end up somewhere in the tree. But I am running all of the insertions and updates in one transaction, so if there's even one error the whole transaction can be rolled back and the structure of the tree is not disturbed. The problem arises when two users are using my web app at the same time.
Say that username 'jackie_' does not have any children at the moment. Two new users, 'king_' and 'robbo', arrive in parallel, and the transaction is running for both of them. Since the results of the transaction running for one user are not yet visible to the other user in the actual database, they both think that the left_child of 'jackie_' hasn't been set yet, and so they both update the left_child to their own username. Since the update was successful for both of them, they both commit their transactions. Now I have two new users, but only one of them has actually been entered into the tree, and the structure of the tree is completely disturbed.
So what I need is to be able to commit a transaction "partially", even while it is still running. If 'robbo' got to set the left_child of 'jackie_' first, that change should be applied to the database right away, so that when 'king_' tries to update the same row, he can't. But if, along the way, some other problem occurs for 'robbo', I still want to be able to roll back the whole transaction. Any other solutions which would be more practical are appreciated as well.
For all the queries that I am running, this is the way I am doing it in the transaction:
string insertTreesQuery = "INSERT INTO tree (username) VALUES('king_')";
MySqlCommand insertTreesQueryCmd = new MySqlCommand(insertTreesQuery, con);
insertTreesQueryCmd.Transaction = sqlTrans;
insertTreesQueryCmd.ExecuteNonQuery();
where sqlTrans is the transaction that I am using for all the MySqlCommand objects before executing them
What you are asking is not possible. You cannot "partially" commit a transaction. I'm assuming that your example is greatly simplified since you mention the transactions are very large. In that case, it would probably be best to split it up into smaller ones that can be committed independently, thus reducing the chance of there being a conflict.
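To make the "smaller, independently committed transactions" idea concrete, here is a hedged sketch in PHP/mysqli (the question uses C#/MySqlCommand, but the SQL is the same); the tree table and its columns come from the question, while the IS NULL guard, names and credentials are assumptions for illustration:
<?php
// Claim the left_child slot in its own short transaction, guarded so that only
// one concurrent writer can succeed. Credentials are placeholders.
$db = new mysqli('localhost', 'user', 'password', 'app');
$parent   = 'jackie_';
$newChild = 'robbo';

$db->begin_transaction();
// The "left_child IS NULL" condition is an assumption: it makes the claim atomic,
// so the second user's UPDATE matches zero rows instead of silently overwriting the first.
$stmt = $db->prepare('UPDATE tree SET left_child = ? WHERE username = ? AND left_child IS NULL');
$stmt->bind_param('ss', $newChild, $parent);
$stmt->execute();

if ($stmt->affected_rows === 1) {
    $db->commit();    // slot claimed; the rest of the work can run in later, separate transactions
} else {
    $db->rollback();  // someone else took the slot first; pick another position in the tree
}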

MySQL performing a "No impact" temporary INSERT with replication avoiding Locks

So, we are trying to run a report that goes to screen and will not change any stored data.
However, it is complex, so it needs to go through a couple of (TEMPORARY*) tables.
It pulls data from live tables, which are replicated.
The nasty bit comes when taking the "eligible" records from temp_PreCalc and populating them from the live data to create the next (TEMPORARY*) table output, resulting in effectively:
INSERT INTO temp_PostCalc (...)
SELECT ...
FROM temp_PreCalc
JOIN live_Tab1 ON ...
JOIN live_Tab2 ON ...
JOIN live_Tab3 ON ...
The report is not a "definitive" answer; the expectation is that it is merely a "snapshot" report and will be out of date as soon as it appears on screen.
There is no order or reproducibility issue.
So ideally, I would turn my TRANSACTION ISOLATION LEVEL down to READ COMMITTED...
However, I can't, because live_Tab1, 2 and 3 are replicated with binlog_format = STATEMENT...
The statement is lovely and quick - it takes hardly any time to run, so the resource load is now less than it used to be (the old version did separate selects and inserts), but it waits (as I understand it) because the SELECT has to wait for a repeatable/syncable lock on the live_Tabs so that any result can be replicated safely.
In fact it now takes more time because of that wait.
I'd like to SEE that performance benefit in response time!
Except the data is written to (TEMPORARY*) tables and then thrown away.
There are no live_ table destinations - only sources...
* These tables are actually not TEMPORARY TABLEs but dynamically created and thrown-away InnoDB tables, as the report calculation requires self-joins and deletes... but they are temporary.
I now seem to be going around in circles finding an answer.
I don't have the SUPER privilege and don't want it...
So I can't SET sql_log_bin = 0 for this connection/session (why is this a requirement?)
So...
If I have a scratch Database or table wildcard, which excludes all my temp_ "Temporary" tables from replication...
(I am awaiting for this change to go through at my host centre)
Will MySQL allow me to
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
INSERT INTO temp_PostCalc (...)
SELECT ...
FROM temp_PreCalc
JOIN live_Tab1 ON ...
JOIN live_Tab2 ON ...
JOIN live_Tab3 ON ...
;
Or will I still get my
"Cannot execute statement: impossible to write to binary log since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine limited to row-based logging..."
Even though it's not technically true?
I am expecting it to, as I presume that the replication will kick in simply because it sees the "INSERT" statement, and will do a simple check on any of the tables involved being replication eligible, even though none of the destinations are actually replication eligible....
or will it pleasantly surprise me?
I really can't face using an unpleasant solution like
SELECT TO OUTFILE
LOAD DATA INFILE
In fact I don't think I could even use that - how would I get unique filenames? How would I clean them up?
The reports are run on-demand directly by end users, and I only have MySQL interface access to the server.
or streaming it through the PHP client, just to separate the INSERT from the SELECT so that MySQL doesn't get upset about which tables are replication eligible...
So, it looks like the only way appears to be:
We create a second Schema "ScratchTemp"...
Set the dreaded replication --replicate-ignore-db=ScratchTemp
My "local" query code opens a new mysql connection, and performs a USE ScratchTemp;
Because I have selected the "ignored" database as the default, none of my queries will be replicated.
So I need to take huge care not to perform ANY real queries here.
Reference my scratch_ tables and actual data tables by prefixing them all in my queries with the schema-qualified name...
e.g.
INSERT INTO LiveSchema.temp_PostCalc (...) SELECT ... FROM LiveSchema.temp_PreCalc JOIN LiveSchema.live_Tab1 etc etc as above.
And then close this connection just as soon as I can, as it is frankly dangerous to have a non-replicated connection open....
Sigh...?
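For what it's worth, a minimal sketch of that workaround in PHP/mysqli (credentials are placeholders; the ScratchTemp schema and LiveSchema-qualified names are the ones from the plan above, and the column lists stay elided as in the question):
<?php
// Open a dedicated, short-lived connection whose default database is the
// replication-ignored scratch schema, so statement-based replication skips it.
$scratch = new mysqli('localhost', 'report_user', 'password');  // placeholder credentials
$scratch->select_db('ScratchTemp');

// Relax isolation for this session only, then run the schema-qualified copy.
$scratch->query('SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED');
$scratch->query(
    'INSERT INTO LiveSchema.temp_PostCalc (...)
     SELECT ...
     FROM LiveSchema.temp_PreCalc
     JOIN LiveSchema.live_Tab1 ON ...
     JOIN LiveSchema.live_Tab2 ON ...
     JOIN LiveSchema.live_Tab3 ON ...'
);

// Close immediately: nothing else should run on this non-replicated connection.
$scratch->close();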

Fixing "Lock wait timeout exceeded; try restarting transaction" for a 'stuck" Mysql table?

From a script I sent a query like this thousands of times to my local database:
update some_table set some_column = some_value
I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table; this was done thousands of times, and the column was indexed, so the corresponding index was probably updated lots of times too.
I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table, because simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:
Lock wait timeout exceeded; try restarting transaction
It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
I had a similar problem and solved it by checking the threads that are running.
To see the running threads, use the following command in the MySQL command-line interface:
SHOW PROCESSLIST;
It can also be sent from phpMyAdmin if you don't have access to mysql command line interface.
This will display a list of threads with corresponding ids and execution time, so you can KILL the threads that are taking too much time to execute.
In phpMyAdmin you will have a button for stopping threads by using KILL, if you are using command line interface just use the KILL command followed by the thread id, like in the following example:
KILL 115;
This will terminate the connection for the corresponding thread.
You can check the currently running transactions with
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`
Your transaction should be one of the first, because it's the oldest in the list. Now just take the value of trx_mysql_thread_id and pass it to the KILL command:
KILL 1234;
If you're unsure which transaction is yours, repeat the first query very often and see which transactions persist.
Check InnoDB status for locks
SHOW ENGINE InnoDB STATUS;
Check MySQL open tables
SHOW OPEN TABLES WHERE In_use > 0;
Check pending InnoDB transactions
SELECT * FROM `information_schema`.`innodb_trx` ORDER BY `trx_started`;
Check lock dependency - what blocks what
SELECT * FROM `information_schema`.`innodb_locks`;
After investigating the results above, you should be able to see what is locking what.
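To see concretely which transaction blocks which, the innodb_trx table above can be joined with information_schema.innodb_lock_waits (it exists in MySQL 5.x alongside the two tables above and needs the PROCESS privilege). A sketch in PHP/mysqli with placeholder credentials:
<?php
// Map each waiting transaction to the thread that blocks it.
$db = new mysqli('localhost', 'root', 'password');  // placeholder credentials
$waits = $db->query(
    'SELECT r.trx_mysql_thread_id AS waiting_thread,
            b.trx_mysql_thread_id AS blocking_thread
     FROM information_schema.innodb_lock_waits w
     JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
     JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id'
);
while ($row = $waits->fetch_assoc()) {
    // KILL the blocking thread id (as shown in the answers above) to release the waiters.
    printf("thread %s is blocked by thread %s\n", $row['waiting_thread'], $row['blocking_thread']);
}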
The root cause of the issue might be in your code too - please check the related functions, especially the annotations, if you use a JPA implementation like Hibernate.
For example, as described here, the misuse of the following annotation might cause locks in the database:
@Transactional(propagation = Propagation.REQUIRES_NEW)
This started happening to me when my database size grew and I was doing a lot of transactions on it.
The truth is there is probably some way to optimize either your queries or your DB, but try these two statements as a workaround.
Run this:
SET GLOBAL innodb_lock_wait_timeout = 5000;
And then this:
SET innodb_lock_wait_timeout = 5000;
When you establish a connection for a transaction, you acquire a lock before performing the transaction. If you are not able to acquire the lock, you keep retrying for some time. If the lock still cannot be obtained, the lock wait timeout exceeded error is thrown. The reason you cannot acquire the lock is that you are not closing the connection, so when you try to acquire the lock a second time, your previous connection is still open and still holding the lock.
Solution: close the connection or setAutoCommit(true) (according to your design) to release the lock.
Restart MySQL; it works fine.
BUT beware that if such a query gets stuck, there is a problem somewhere:
in your query (misplaced char, cartesian product, ...)
very numerous records to edit
complex joins or tests (MD5, substrings, LIKE %...%, etc.)
data structure problem
foreign key model (chain/loop locking)
misindexed data
As @syedrakib said, it works, but this is not a long-lived solution for production.
Beware: doing the restart can leave your data in an inconsistent state.
Also, you can check how MySQL handles your query with the EXPLAIN keyword and see if something is possible there to speed up the query (indexes, complex tests,...).
Go to the process list in MySQL.
There you can see whether a task is still running.
Kill the particular process or wait until it completes.
I ran into the same problem with an "update"-statement. My solution was simply to run through the operations available in phpMyAdmin for the table. I optimized, flushed and defragmented the table (not in that order). No need to drop the table and restore it from backup for me. :)
I had the same issue. I think it was a deadlock issue with SQL. You can just force-close the SQL process from Task Manager. If that doesn't fix it, just restart your computer. You don't need to drop the table and reload the data.
I had this problem when trying to delete a certain group of records (using MS Access 2007 with an ODBC connection to MySQL on a web server). Typically I would delete certain records from MySQL then replace with updated records (cascade delete several related records, this streamlines deleting all related records for a single record deletion).
I tried to run through the operations available in phpMyAdmin for the table (optimize, flush, etc.), but I was getting a "need permission to RELOAD" error when I tried to flush. Since my database is on a web server, I couldn't restart the database. Restoring from a backup was not an option.
I tried running delete query for this group of records on the cPanel mySQL access on the web. Got same error message.
My solution: I used Sun's (Oracle's) free MySQL Query Browser (which I had previously installed on my computer) and ran the delete query there. It worked right away; problem solved. I was then able to once again perform the function from the Access script over the ODBC Access-to-MySQL connection.
Issue in my case: Some updates were made to some rows within a transaction and before the transaction was committed, in another place, the same rows were being updated outside this transaction. Ensuring that all the updates to the rows are made within the same transaction resolved my issue.
Issue resolved in my case by changing DELETE to TRUNCATE.
Issue:
query = "delete from Survey1.sr_survey_generic_details"
mycursor.execute(query)
Fix:
query = "truncate table Survey1.sr_survey_generic_details"
mycursor.execute(query)
This happened to me when I was accessing the database from multiple platforms, for example from DBeaver and control panels. At some point DBeaver got stuck, and therefore the other panels couldn't process additional information. The solution is to restart all access points to the database: close them all and start again.
Fixed it.
Make sure you don't have a mismatched data type in the insert query.
I had an issue where I was trying to store "user browser agent data" in a VARCHAR(255) and kept hitting this lock; when I changed it to TEXT(255) it was fixed.
So most likely it is a data type mismatch.
I solved the problem by dropping the table and restoring it from backup.