How to roll back the effect of the last executed MySQL query

I just ran a command
update sometable set col = '1';
by mistake without specifying the where condition.
Is it possible to recover the previous version of the table?

Unless you started a transaction before running the query, and didn't already commit it, then no, you're out of luck, barring any backups of previous versions of the database you might have made yourself.
(If you don't use transactions when manually entering queries, you might want to in the future to prevent headaches like the one you probably have now. They're invaluable for mitigating the realized-5-seconds-later kind of mistake.)
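The habit being described looks something like this, a sketch assuming a transactional engine such as InnoDB (the `id = 5` predicate is a made-up example of the scoped change you intended):

```sql
START TRANSACTION;
UPDATE sometable SET col = '1' WHERE id = 5;  -- the intended, scoped change
SELECT * FROM sometable LIMIT 10;             -- eyeball the result before deciding
ROLLBACK;  -- or COMMIT; once it looks right
```

Nothing is permanent until the COMMIT, so a realized-5-seconds-later mistake costs a ROLLBACK instead of a restore.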

Consider enabling sql_safe_updates in the future if you are worried about doing this kind of thing again:
SET SESSION sql_safe_updates = 1;
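As a sketch of what this guards against: with safe updates on, MySQL rejects an UPDATE or DELETE whose WHERE clause does not use a key column (and that has no LIMIT), so the query from the question would fail instead of rewriting every row (error text is approximate and may vary by version):

```sql
SET SESSION sql_safe_updates = 1;

UPDATE sometable SET col = '1';
-- ERROR 1175 (HY000): You are using safe update mode and you tried to
-- update a table without a WHERE that uses a KEY column.
```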

No. MySQL does have transaction support for some storage engines (such as InnoDB), but because you're asking this question, I'll bet you're not using it.
Everybody does this once. It's when you do it twice you have to worry :)
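As a quick check, you can see which storage engine a table uses, since only transactional engines such as InnoDB can roll anything back (a sketch; `sometable` is the table from the question):

```sql
SHOW TABLE STATUS LIKE 'sometable';
-- The Engine column shows InnoDB (transactional) or MyISAM (no transactions).
```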

Related

How to undo (revert to the original data table) in MySQL?

I was trying to do an update on the MySQL server and accidentally forgot to add the additional WHERE clause that was supposed to limit the edit to one row.
I now have 3500+ rows edited due to my error.
I may have a backup, but I did a ton of work since the last backup and I just don't want to waste it all because of one bad query.
Please tell me there is something I can do to fix this.
If you committed your transaction, it's time to dust off that backup, sorry. But that's what backups are for. I did something like that once myself... once.
Just an idea - could you restore your backup to a NEW database and then do a cross database query to update that column based on the data it used to be?
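A hedged sketch of that idea, assuming the backup has been restored into a new schema called `restored_db` alongside the live one, and that the table has a primary key `id` (all names here are placeholders):

```sql
-- Pull the old column values back from the restored copy of the table
UPDATE live_db.sometable AS live
JOIN restored_db.sometable AS old ON old.id = live.id
SET live.col = old.col;
```

Rows inserted after the backup was taken won't be matched by the JOIN, so the work done since the backup keeps its current values.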
Nothing.
Despite this you can be glad that you've got that learning experience under your belt and be proud of how you'll now change your habits to greatly reduce the chance of it happening again. You'll be the 'master' now that can teach the young pups and quote from actual battle-tested experience.
The only thing you can do now is FIX YOUR BAD HABIT; it will help you in the future. I know it's an old question and the OP must have learned the lesson. I am just posting this for others, because I also learned the lesson today.
I was supposed to run a query that would update some fifty records, and then MySQL returned THIS: 48500 row(s) affected, which gave me a little heart attack due to one silly mistake in the WHERE condition :D
So here are the lessons:
Always check your query twice before running it, although sometimes that won't help, because you can still make that silly mistake.
Since step 1 is not always enough, it's better to take a backup of your DB before running any query that will affect the data.
If you are too lazy to create a backup (as I was when running that silly query) because the DB is large and it would take time, then at least use a TRANSACTION. I think this is a good, effortless way to beat disaster. From now on, this is what I do with every query that affects data:
I start the transaction, run the query, and then check whether the results are OK or not. If they are, I simply COMMIT the changes; otherwise, I simply ROLLBACK.
START TRANSACTION;
UPDATE myTable
SET name = 'Sam'
WHERE recordingTime BETWEEN '2018-04-01 00:00:59' AND '2019-04-12 23:59:59';
ROLLBACK;
-- COMMIT; -- I keep it commented so that I don't run it by mistake, and only uncomment it when I really want to COMMIT the changes


Which isolation level to use in a basic MySQL project?

Well, I got an assignment [a mini-project] in which one of the most important issues is database consistency.
The project is a web application, which allows multiple users to access and work with it. I can expect concurrent querying and updating requests into a small set of tables, some of them connected one to the other (using FOREIGN KEYS).
In order to keep the database as consistent as possible, we were advised to use isolation levels. After reading a bit (maybe not enough?) about them, I figured the most useful ones for me are READ COMMITTED and SERIALIZABLE.
I can divide the queries into three kinds:
Fetching query
Updating query
Combo
For the first kind, I need the data to be consistent, of course; I don't want to present dirty or uncommitted data. Therefore, I thought to use READ COMMITTED for these queries.
For the updating queries, I thought SERIALIZABLE would be the best option, but after reading a bit, I found myself lost.
In the combo, I'll probably have to read from the DB and decide whether I need/can update or not; these 2-3 calls will be under the same transaction.
I wanted to ask for advice on which isolation level to use for each of these query types. Should I even consider different isolation levels for each type, or just stick to one?
I'm using MySQL 5.1.53, along with MySQL JDBC 3.1.14 driver (Requirements... Didn't choose the JDBC version)
Your insights are much appreciated!
Edit:
I've decided I'll be using REPEATABLE READ, which seems to be the default level.
I'm not sure if it's the right way to go, but I guess REPEATABLE READ along with LOCK IN SHARE MODE and FOR UPDATE on the queries should work fine...
What do you guys think?
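A sketch of those two locking reads inside one transaction (the table and column names are made up for illustration):

```sql
START TRANSACTION;

-- Shared lock: other sessions can still read the row, but not modify it
SELECT stock FROM products WHERE id = 7 LOCK IN SHARE MODE;

-- Exclusive lock: reserves the row for this session's coming write
SELECT stock FROM products WHERE id = 7 FOR UPDATE;
UPDATE products SET stock = stock - 1 WHERE id = 7;

COMMIT;
```

(In MySQL 8.0 the `LOCK IN SHARE MODE` spelling was joined by `FOR SHARE`, but the old form matches the 5.1 server mentioned above.)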
I would suggest READ COMMITTED. It seems natural to be able to see other sessions' committed data as soon as they're committed.
It's unclear why MySQL has a default of REPEATABLE READ.
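Switching is a one-liner, either per session or for just the next transaction (a sketch of the standard MySQL syntax):

```sql
-- For the whole session:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Or for only the next transaction:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
-- ... queries ...
COMMIT;
```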
I think you worry too much about the isolation level.
If you have multiple tables to update you need to do:
START TRANSACTION;
UPDATE table1 ....;
UPDATE table2 ....;
UPDATE table3 ....;
COMMIT;
This is the important stuff, the isolation level is just gravy.
The default level of repeatable read will do just fine for you.
Note that select ... for update takes locks on the rows it reads; this can result in deadlocks, which may be worse than the problem you are trying to solve.
Only use it if you are deleting rows in your DB.
To be honest, I rarely see rows being deleted in a DB; if you are just doing updates, then just use normal selects.
Anyway see: http://dev.mysql.com/doc/refman/5.0/en/innodb-transaction-model.html

Locking in MySQL

I have recently started a fairly large web project which is going to use MySQL as a database. I am not completely familiar with MySQL, but I know enough to make simple queries and generally do all that I need to.
I was told that I needed to lock my tables before writing to them? Is this necessary every time? Surely MySQL would have some sort of built in feature to handle concurrent reading and writing of the database?
In short, when should I use locking, and how should I go about doing so?
Here is an excellent explanation of when and how to implement locking: http://www.brainbell.com/tutors/php/php_mysql/When_and_how_to_lock_tables.html
As per El yobo's suggestion:
If you are doing one-off SELECT queries, there is not going to be a problem.
From the article:
Locking is required only when developing scripts that first read a value from a database and later write that value to the database.
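The read-then-write pattern that quote describes can be sketched like this, using a transaction with a locking read so no other session changes the row in between (table and column names are placeholders):

```sql
START TRANSACTION;
SELECT hits FROM counters WHERE page = 'home' FOR UPDATE;  -- read, holding a row lock
UPDATE counters SET hits = hits + 1 WHERE page = 'home';   -- write based on what was read
COMMIT;
```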
In short, don't use MyISAM; use InnoDB instead. When you want to insert, update, or delete rows, do:
start transaction;
insert into users (username) values ('f00');
...
commit; -- or rollback
when you want to fetch rows just select them:
select user_id, username from users;
hope this helps :)

When will a SELECT statement without FOR UPDATE cause a lock?

I'm using MySQL.
I sometimes see a SELECT statement whose status is 'Locked' when running SHOW PROCESSLIST,
but after testing it locally, I can't reproduce the 'Locked' status.
It probably depends on what else is happening. I'm no MySQL expert, but in SQL Server various lock levels control when data can be read and written. For example, in production your SELECT statement might want to read a record that is being updated; it has to wait until the update is done. Vice versa: an update might have to wait for a read to finish.
Messing with default lock levels is dangerous. And since dev environments don't have nearly as much traffic, you probably don't see that kind of contention.
If you spot it again, see whether any update is being made against one of the tables your SELECT is referencing.
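Two standard statements help with that check (a sketch; the second is most useful for MyISAM, whose table-level locks commonly produce the 'Locked' status):

```sql
SHOW FULL PROCESSLIST;               -- look for a long-running UPDATE/ALTER on the same table
SHOW OPEN TABLES WHERE In_use > 0;   -- tables currently holding table-level locks
```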
I'm no expert in MySQL, but it sounds like another user is holding a lock against a table/field while you're trying to read it.
I'm no MySQL expert either, but locking behavior strongly depends on the isolation level / transaction isolation. I would suggest searching for those terms in the MySQL docs.