Can I manually lock a MySQL table? [duplicate]

This question already has answers here:
Lock mysql table with php
(4 answers)
Closed 4 years ago.
I have multiple independent processes, Scripts A and B, that each access the same table. I would like Script A to read a record from the table and then maybe modify that record (or maybe not).
The thing is that I need to keep Script B from accessing that particular record in the midst of that. Is there a manual lock, perhaps? Something that will keep Script B out for just those few milliseconds?
Thanks

You don't want to lock the entire table, right?
For an InnoDB table, a SELECT ... FOR UPDATE should do; it will lock the record (just add FOR UPDATE at the end of the query).
In your Script A:
Start a transaction.
Do a SELECT ... FOR UPDATE.
If you want to update the record, do so.
Commit the transaction when you finish.
In your Script B, do a SELECT ... FOR UPDATE as well; it will wait until Script A releases the lock.
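The steps above can be sketched in SQL (table and column names are placeholders, not from the question):

```sql
-- Script A (assumed table `records` with primary key `id`)
START TRANSACTION;
SELECT * FROM records WHERE id = 42 FOR UPDATE;  -- acquires an exclusive row lock
-- ... decide whether to modify the record ...
UPDATE records SET status = 'processed' WHERE id = 42;
COMMIT;  -- releases the lock

-- Script B issues the same SELECT ... FOR UPDATE; it blocks on the locked
-- row until Script A commits (or rolls back), then proceeds.
```

Note that a plain (non-locking) SELECT in Script B would still see the row; the lock only blocks writes and other locking reads.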

Related

Transaction + Select ... For Update... Skipping indexes [duplicate]

This question already has answers here:
When to fix auto-increment gaps in MYSQL
(2 answers)
MySQL AUTO_INCREMENT does not ROLLBACK
(11 answers)
Closed 1 year ago.
I noticed something funny with my table's index column after running an experiment to answer a different question.
In my experiment, I had two conditions: in one, autocommit=FALSE, and in the other, autocommit=TRUE. I had 2 sessions connected to the server and tried various combinations of session #1 starting transactions / selecting FOR UPDATE and session #2 attempting selects and inserts, etc., under those conditions. Important note: all transactions started by session #1 were rolled back, not committed.
Reviewing my results, I noticed that when the first session was in a transaction AND had selected FOR UPDATE, the insert made by session 2 (after the first session finished its transaction, of course) had its index advanced by 2. In all other inserts by session 2, the index only advanced by 1. That includes inserts forced to wait because session 1 had selected FOR UPDATE (this is possible when autocommit is off, btw).
Without losing another hour or two to testing, I was hoping someone could explain how the index advanced by 2. My best guess is this: I assume that index + 1 was reserved for session #1, which was rolled back instead of committed, and index + 2 was reserved for session #2 because its request came in while index + 1 was reserved. After session #1 rolled back, nothing was put at index + 1, and session #2 inserted at the index already reserved for it, index + 2.
Is that true? If it is, I am concerned that if many sessions request to update at the same time and then don’t commit, whole patches of the index may go unused… Can I prevent this? Can I remediate it if and when it happens?
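The hypothesis described above can be sketched as a two-session timeline (assuming a table `t` with an AUTO_INCREMENT primary key; the behavior matches MySQL's documented rule that auto-increment values are not reused after a rollback):

```sql
-- Session #1                          -- Session #2
SET autocommit = 0;
START TRANSACTION;
INSERT INTO t (val) VALUES ('a');      -- reserves id N+1
                                       INSERT INTO t (val) VALUES ('b');
                                       -- reserves id N+2 (may block first
                                       -- until session #1 ends)
ROLLBACK;                              -- id N+1 is discarded, not returned
                                       -- session #2's row keeps id N+2
```

Auto-increment values are handed out when the INSERT executes and are not given back on ROLLBACK, which is why gaps appear; such gaps are generally harmless.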
Thank you in advance.

MariaDB. Use Transaction Rollback without locking tables

On a website, when a user posts a comment I do several queries, Inserts and Updates. (On MariaDB 10.1.29)
I use START TRANSACTION so if any query fails at any given point I can easily do a rollback and delete all changes.
Now I noticed that this blocks other INSERTs into the same tables, and I'm not talking about while the query is running (that's obvious), but until the transaction is closed.
Then, DELETE is only blocked if the rows share a common index key (comments for the same page), but luckily UPDATE is not blocked.
Can I run a transaction that does not lock the table against new inserts (while the transaction is ongoing, not just during the actual query), or is there any other method that lets me conveniently "undo" any query done after some point?
PS:
I start the transaction with PHP's mysqli_begin_transaction() without any of the flags, and then mysqli_commit().
I don't think that a simple INSERT would block other inserts for longer than the insert time. AUTO_INC locks are not held for the full transaction time.
But if two transactions try to UPDATE the same row like in the following statement (two replies to the same comment)
UPDATE comment SET replies=replies+1 WHERE com_id = ?
the second one will have to wait until the first one is committed. You need that lock to keep the count (replies) consistent.
I think all you can do is keep the transaction time as short as possible. For example, you can prepare all statements before you start the transaction, but that is a matter of milliseconds. If you transfer files and that can take 40 seconds, then you shouldn't do it while the database transaction is open. Transfer the files before you start the transaction and save them with a name that indicates that the operation is not complete. You can also save them in a different folder, but on the same partition. Then, when you run the transaction, you just need to rename the files, which should not take much time. From time to time you can clean up and remove unrenamed files.
All write operations work in similar ways: they lock the rows that they touch (or might touch) from the time the statement is executed until the transaction is closed via either COMMIT or ROLLBACK. SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE take locks as well.
When a write operation occurs, deadlock checking is done.
In some situations, there is "gap" locking. Did com_id happen to be the last id in the table?
Did you leave out any SELECTs that needed FOR UPDATE?
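The short-transaction advice above can be sketched as follows (the UPDATE is taken from the question; the INSERT's table and column names are assumed):

```sql
-- Keep only fast statements inside the transaction
START TRANSACTION;
INSERT INTO comment (page_id, body) VALUES (?, ?);
UPDATE comment SET replies = replies + 1 WHERE com_id = ?;  -- exclusive row lock
COMMIT;  -- lock released here; a second reply to the same comment
         -- only waits for this short window
```

Slow work such as file transfers belongs before the transaction; inside it, a rename is all that remains.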

Multiple simultaneous selects from mysql database table [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I'm working on a database that has one table with 21 million records. Data is loaded once when the database is created and there are no more insert, update or delete operations. A web application accesses the database to make select statements.
It currently takes 25 seconds per request for the server to return a response. However, if multiple clients make simultaneous requests, the response time increases significantly. Is there a way of speeding this process up?
I'm using MyISAM instead of InnoDB with fixed max rows and have indexed based on the searched field.
If no data is being updated/inserted/deleted, then this might be a case where you want to tell the database not to lock the table while you are reading it.
For MySQL this seems to be something along the lines of:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ;
SELECT * FROM TABLE_NAME ;
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ ;
(ref: http://itecsoftware.com/with-nolock-table-hint-equivalent-for-mysql)
More reading in the docs, if it helps:
https://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html
The TSQL equivalent, which may help if you need to google further, is
SELECT * FROM TABLE WITH (nolock)
This may improve performance. As noted in other comments some good indexing may help, and maybe breaking the table out further (if possible) to spread things around so you aren't accessing all the data if you don't need it.
As a note: locking a table prevents other people from changing data while you are using it. Not locking a table that has a lot of inserts/deletes/updates may cause your selects to return multiple rows of the same data (as it gets moved around on the hard drive), rows with missing columns, and so forth.
Since you've got one table you are selecting against, your requests are all taking turns locking and unlocking it. If you aren't doing updates, inserts or deletes, then your data shouldn't change, so you should be OK to forgo the locks.

Undo more than one change in mysql [duplicate]

This question already has answers here:
Is there any way to rollback after commit in MySQL?
(3 answers)
Closed 7 years ago.
Can we undo more than one change in MySQL? I deleted some rows and did a SELECT * to see the table. I saw ROLLBACK, but I guess it only reverts the action of the last query. Can I undo deleting those rows?
If there is no way to undo more than one change, is there a way to view the last edited table and undo a change done before viewing it? Also, are changes before the last query committed (even when AUTOCOMMIT is 0)?
The solution for the issue is: please check that binary logging is enabled on your server. If binary logs are active, you can use mysqlbinlog.
After that, generate an SQL file with it:
mysqlbinlog binary_log_file > query_log.sql
Then find your missing rows. If binary logging is not enabled, you will have to keep backups of your DB from now on.
From the reference manual: http://dev.mysql.com/doc/refman/5.1/en/commit.html
By default, MySQL runs with autocommit mode enabled. This means that as soon as you execute a statement that updates (modifies) a table, MySQL stores the update on disk to make it permanent.
This means that after you have deleted your records (and committed explicitly or implicitly), you cannot roll them back.
ROLLBACK is a kind of undo for statements which change data in tables; however, in order to use it you have to either:
turn off autocommit and use COMMIT statements explicitly, or
make your changes inside transactions.
Note that there are statements which cause implicit commits: link
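The two options above can be sketched as follows (the table name `t` is a placeholder):

```sql
-- Option 1: disable autocommit for the session
SET autocommit = 0;
DELETE FROM t WHERE id < 100;
SELECT * FROM t;   -- inspect the effect of the delete
ROLLBACK;          -- undoes every change since the last commit

-- Option 2: an explicit transaction
START TRANSACTION;
DELETE FROM t WHERE id < 100;
ROLLBACK;          -- undoes all statements in the transaction
```

Either way, once COMMIT has run (explicitly or implicitly), the changes are permanent and ROLLBACK can no longer undo them.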

Alternative To READ UNCOMMITTED With FOR UPDATE

We have 2 scripts/mysql connections that are grabbing rows from a table. Once a script grabs some rows, the other script must not be able to access those rows.
What I've got so far, that seems to work is this:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
START TRANSACTION;
SELECT * FROM table WHERE result='new' FOR UPDATE;
-- loop over the rows, updating each:
UPDATE table SET result='old' WHERE id=...;
COMMIT;
From what I understand the same connection could read the dirty data, but the other connections shouldn't be able to since the rows are locked. Is this correct?
Also is there a better way of guaranteeing that each row can only be SELECT one time with both scripts running?
edit:
Oh... and the engine is Innodb
edit: Also I'd like to try to avoid deadlocks, unless they really have no effect, in which case I could just prepare for them and rerun the query.
SELECT ... FOR UPDATE sets an exclusive lock on the rows, and if that's not possible it waits for the lock to be released. The main aim of a SELECT ... FOR UPDATE statement is to prevent other transactions from locking or modifying the rows while you are manipulating them (plain, non-locking reads can still see them).
If I get your question right, by 'dirty data' you mean those locked rows?
I don't see why you call them 'dirty', because they are just locked; but indeed, inside the same transaction you can read the rows you've locked (obviously).
Regarding your second question:
Also is there a better way of guaranteeing that each row can only be SELECTed one time with both scripts running?
SELECT ... FOR UPDATE guarantees that at any given moment the locked rows can be read (with a locking read) only inside one transaction. I don't see a better way to do it, since this statement was specifically designed for that purpose.
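The behavior with the two scripts can be sketched as a timeline (the table name and queries are taken from the question; note that READ UNCOMMITTED is not required for the locking to work):

```sql
-- Script 1                              -- Script 2
START TRANSACTION;
SELECT * FROM table
  WHERE result='new' FOR UPDATE;         START TRANSACTION;
                                         SELECT * FROM table
                                           WHERE result='new' FOR UPDATE;
                                         -- blocks: rows are locked by script 1
UPDATE table SET result='old'
  WHERE id=...;
COMMIT;                                  -- unblocks; the locking read now sees
                                         -- the committed result='old' rows,
                                         -- so they no longer match the WHERE
```

Because the second locking read waits and then reads the latest committed row versions, each 'new' row is handed to exactly one script.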