How can I find out if there is a transaction open in MySQL? I need to start a new one if there is no transaction open, but I don't want to start a new one if there is one already running, because that would commit the running transaction.
UPDATE:
I need to query the database in one method of my application, but that query could be called as part of a bigger transaction, or on its own, in which case it should be a transaction of its own. Changing the application to track whether it has opened a transaction would be more difficult, as the transaction could be started from many pieces of code. Although that would be possible, I'm looking for a solution that would be faster to implement. A simple IF statement in SQL would be effortless.
Thank you
I am assuming you are doing this as a one-off and not trying to establish something that can be done programmatically. You can get the list of currently active processes using: SHOW PROCESSLIST
http://dev.mysql.com/doc/refman/5.1/en/show-processlist.html
If you want something programmatic, then I would suggest taking an explicit lock on a table at the beginning of your transaction and releasing it at the end.
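If you want a programmatic check from SQL itself, one option (assuming InnoDB tables and a server new enough to have the InnoDB information_schema tables, i.e. 5.1 with the InnoDB plugin or later) is to ask whether the current connection already has an InnoDB transaction registered. Note that a transaction only shows up there once it has actually touched InnoDB data, so a bare START TRANSACTION that has not run any statement yet may not be listed. A rough sketch:

-- Returns 1 if the current connection has an open InnoDB transaction, 0 otherwise
SELECT COUNT(*) AS in_transaction
FROM information_schema.INNODB_TRX
WHERE trx_mysql_thread_id = CONNECTION_ID();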
I have encountered in my project a need to implement transactions, something I have never done before. I have already checked that my autocommit is set to 1, and I'm not sure whether I need to touch it at all.
Right now I have a set of scripts that all include a function that connects to the database first. That is a perfect place to put mysqli_begin_transaction($link); and mysqli_autocommit($link, FALSE);, so I'd have transactions everywhere regardless of whether a specific script needs them or not, and autocommit would be turned off. The documentation on php.net is very poor there, but AFAIR I should do this. So my questions no. 1 and 2 would be: is it fine to start a transaction everywhere regardless of whether the script needs it or not? And should I disable autocommit like this as well?
Now let's say that I have a script like this (sorry for not providing actual code, but my question is about how transactions work, not about the code itself):
~insert and/or update things
~do something aka "line 2"
~insert and/or update things again
Seems like an example taken right from the book. I obviously want all the inserts and updates to happen, or none of them. Since I have already started a transaction, I assume that nothing will commit unless I call mysqli_commit($link);. But here we have a little problem: I do not include any 'footer' at the end of my scripts and adding one now seems like a nightmare, so I don't have any place to put the commit. So question no. 3 is: will my queries commit automatically after the script ends (or after I call exit; or die();) even if I set autocommit to false? Or do I need to call commit / leave autocommit on?
Now comes the case when something fails and I need to roll back. Same as above: do I need to call mysqli_rollback($link);, or is the mere fact that I did not call commit sufficient? I'm referring here to a situation where the script does not end normally, for example the server losing power while working on "line 2", or the script being stopped because it took too much time (set_time_limit stopped it).
This is a somewhat broad question, so I'll try to cover everything as well as I can.
First, you can ignore the mysqli API (the API-specific transaction functions are just wrappers) and go straight to the MySQL manual. The important thing here is that disabling autocommit and starting a transaction are the same thing. Also, a single query (including modifications by triggers) is always a transaction.
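For illustration, a minimal SQL sketch of that equivalence (the accounts table is made up):

-- With autocommit on (the default), every single statement commits on its own.
-- Both blocks below hold the UPDATE open until an explicit COMMIT.

SET autocommit = 0;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;                  -- nothing is permanent before this

START TRANSACTION;       -- suspends autocommit for the duration of this transaction
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;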
The answer to your questions 1 & 2 is "probably not". It very much depends on what your existing code assumes about the database connection and how your application is structured.
From what you mentioned in the question, the answer would be: it will be better if you only put transactions in the places that need them.
For question 3: it will not commit automatically. You can, however, make it do so by using register_shutdown_function, although I don't recommend doing that.
There are statements (implicit commits) which will commit the transaction automatically. These include all DDL statements (CREATE, ALTER, ...) and also TRUNCATE, LOCK TABLES and others. This basically means those statements can't be used in transactions.
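A small sketch of how an implicit commit can silently end a transaction (the table names are made up):

START TRANSACTION;
INSERT INTO orders (customer_id) VALUES (42);  -- not committed yet
TRUNCATE TABLE order_staging;                  -- implicit commit: the INSERT above is now permanent
ROLLBACK;                                      -- too late, there is nothing left to roll back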
MySQL rolls back transactions when the connection is terminated.
I would recommend adding transactions only to the code which needs them (to be safe, you can do this for all code which does more than one write query to the db).
The classic approach is:
START TRANSACTION
query
other things
another query
some other stuff
3-rd query
...
COMMIT
The main thing here is to make sure you only commit if no errors have occurred.
Leave the rollback to connection termination (or to register_shutdown_function if you are using persistent connections), because making sure each and every script has correctly working rollback logic is hard :)
This will make sure that nothing is committed if bad things happen (exceptions, fatal errors, time/mem limits, power outages, meteors...).
It is also possible to have transactions at a function/method level (nested and stack-like), but that's out of scope for this question.
I want to know if there's a way to commit a transaction partially. I have a long-running transaction in C#, and when two users are running this transaction in parallel, the data is co-dependent and should be visible to both of them even while the transaction is in progress. For example, say I have a table with these 3 columns:
username | left_child | right_child
I am making a binary tree, and whenever a new user is added to the database they end up somewhere in the tree. I run all of the insertions and updates in one transaction, so that if there is even one error the whole transaction can be rolled back and the structure of the tree is not disturbed. The problem arises when two users are using my web app at the same time.
Say that username 'jackie_' does not have any children at the minute. Two new users, 'king_' and 'robbo', enter in parallel, and the transaction is running for both of them. Since the results of the transaction running for one user are not yet visible to the other user in the actual database, they both think that the left_child of 'jackie_' hasn't been set yet, and so they both update the left_child to their own username. Since the update succeeded for both of them during their transactions, they both commit. Now I have two users, but only one of them is actually entered into the tree, and the structure of the tree is disturbed completely.
So what I need is to be able to commit one transaction "partially", even while it is still running. If 'robbo' got set as the left_child of 'jackie_' first, the transaction would apply that change to the database so that when 'king_' tries to update the same row, he can't. But if some other problem occurs for 'robbo' along the way, I still want to be able to roll back the whole transaction. Any other solutions that would be more practical are appreciated as well.
For all the queries that I am running, this is the way I am doing it in the transaction
// Build the INSERT and run it inside the already-open transaction
string insertTreesQuery = "INSERT INTO tree (username) VALUES('king_')";
MySqlCommand insertTreesQueryCmd = new MySqlCommand(insertTreesQuery, con);
insertTreesQueryCmd.Transaction = sqlTrans;  // enlist the command in the transaction
insertTreesQueryCmd.ExecuteNonQuery();
where sqlTrans is the transaction that I am using for all the MySqlCommand objects before executing them
What you are asking is not possible. You cannot "partially" commit a transaction. I'm assuming that your example is greatly simplified since you mention the transactions are very large. In that case, it would probably be best to split it up into smaller ones that can be committed independently, thus reducing the chance of there being a conflict.
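One practical alternative, if the goal is just to prevent the race on 'jackie_''s row, is to keep one ordinary transaction per new user but lock the parent row with SELECT ... FOR UPDATE. This is only a rough sketch built on the table described in the question (assuming it is InnoDB and left_child is nullable):

START TRANSACTION;

-- Lock jackie_'s row; a second session running the same SELECT ... FOR UPDATE
-- blocks here until this transaction commits or rolls back
SELECT left_child, right_child FROM tree WHERE username = 'jackie_' FOR UPDATE;

-- Claim the slot only if it is still free
UPDATE tree SET left_child = 'robbo'
WHERE username = 'jackie_' AND left_child IS NULL;

INSERT INTO tree (username) VALUES ('robbo');
COMMIT;

When the second session finally gets the lock, its UPDATE matches zero rows (the application can check ROW_COUNT() or the affected-rows count) and can pick another position in the tree instead.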
I have to update a row in a table (InnoDB) and then, right after, select the last record that I updated and make an insert. If the connection is too slow (for the update statement), can the select statement get the wrong row? Assume that I'm using two different queries.
Are you using SQL to run your script, or are you running it somewhere else (e.g. PHP, Python, C#)?
A script run from SQL should* always complete one statement before moving on to the next, but if you're unsure you could call something like the SLEEP function or a wait/delay function to pause before you run your second statement (a small sketch follows the note below).
*I say "should" because I've seen some extremely rare random cases, usually with longer-running queries, that don't. If your first job takes a long time to complete, it may be worth the effort to schedule it in Job Agent and then schedule the second job for later that day.
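In MySQL such a pause could be as simple as this sketch (two seconds chosen arbitrarily):

SELECT SLEEP(2);   -- pause for two seconds between the two statements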
MySQL does not keep records of row insertion order. Any algorithm that's based on "the last record that I updated" must implement its own means of gathering the required information. If it doesn't, it will get the wrong row sooner or later. (Network speed is probably not as relevant as concurrent access.)
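A minimal sketch of making "the last record I updated" explicit by addressing it through its key (the registry and audit tables and their columns are made up):

START TRANSACTION;
UPDATE registry SET status = 'done' WHERE id = 42;   -- identify the row by its primary key...
SELECT id, status FROM registry WHERE id = 42;       -- ...so the SELECT is guaranteed to hit the same row
INSERT INTO audit (registry_id, note) VALUES (42, 'copied after update');
COMMIT;

For freshly inserted rows, LAST_INSERT_ID() serves the same purpose: it is connection-local, so concurrent sessions do not interfere with each other's value.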
The Setup
While working on some rather complex procedures, I've started logging debug information into a _debug table via a stored logging procedure, P_Log('message'), which just runs a simple INSERT into the _debug table.
The complex procedures contain transactions, which are rolled back if an error is encountered. The problem is that any debug information logged during the course of the transaction is also rolled back. This is of course a little counterproductive, since you want to be able to see the debug logs precisely when the procedure does fail.
The Question
Is there any way I can insert into _debug without having the inserts rolled back? The log is really only to be used in development, and I would only ever write to it, so I don't care if it violates how transactions are intended to be used.
And just out of curiosity, how is this normally handled? It seems like being able to write arbitrary log information from inside transactions, to check the states of variables, etc., regardless of whether the transaction is rolled back, would be absolutely crucial for debugging errors. What's the best practice here?
Possible alternatives
Storing logs in variables and only writing them at the end of the procedure.
The problem with this is that I want to be able to insert an arbitrary number of debug entries. Creating a text variable and parsing it later would work, but seems very hacky.
Using some built-in log in MySQL.
I'd actually be fine with this, if it means I can write arbitrary text to it at will, but I haven't been able to find anything like this so far.
The simplest way would be to change your logs table to MyISAM.
It does not support transactions and will completely ignore them. MyISAM is also a bit faster when you only insert into and select from it.
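A sketch of that change, assuming the _debug table already exists:

-- MyISAM writes are not transactional, so rows inserted by P_Log survive a ROLLBACK
ALTER TABLE _debug ENGINE = MyISAM;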
The only other solution that I know of is to create a separate connection for the logs.
I have a situation where I need to lock 2 tables until some operations are done on those 2 tables individually. For that I have chosen transactions.
So, between "START TRANSACTION" and "COMMIT".. if anyone tries to insert into those tables, what happens to those queries? will they be maintained in the queue and will be executed after the transaction is completed?
Ideally, my requirement is that they should not get inserted until my transaction has been committed.
Could anyone please explain what happens in this scenario?
Thanks in advance!
SuryaPavan
When a transaction is initiated, it's isolated from the rest of the world. That's the I in ACID.
During the transaction, if anyone tries to insert anything, the insert will occur, but it won't break the transaction you are performing in any way. The same rule applies in the other direction.
If you have a requirement to literally lock the entire table for inserts until your transaction succeeds, that smells like bad design, and you should reconsider whether what you're doing is really optimal.
If they got inserted before your transaction was committed (that is, in the middle of your transaction), that would make transactions fairly useless.
I assume you're using MySQL (since your question is tagged with it)... why not open two command-line sessions to your database and try it to see what happens? That would take far less time than posting your question and waiting for an answer, and you'll likely learn more in the process.
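For reference, a sketch of that experiment with a made-up InnoDB table t, the default isolation level, and no conflicting unique keys:

-- Session 1
START TRANSACTION;
INSERT INTO t (val) VALUES ('from session 1');   -- invisible to other sessions until COMMIT

-- Session 2 (a separate connection)
INSERT INTO t (val) VALUES ('from session 2');   -- runs immediately; it is not queued
SELECT * FROM t;                                 -- does not show session 1's uncommitted row

-- Session 1
COMMIT;                                          -- session 1's row is now visible to everyone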