I was trying to run an UPDATE on the MySQL server and accidentally forgot the WHERE clause that was supposed to limit the edit to one row.
I now have 3500+ rows edited due to my error.
I may have a backup, but I did a ton of work since the last backup and I just don't want to waste it all because of one bad query.
Please tell me there is something I can do to fix this.
If you committed your transaction, it's time to dust off that backup, sorry. But that's what backups are for. I did something like that once myself... once.
Just an idea - could you restore your backup to a NEW database and then run a cross-database query to update that column based on the data as it used to be?
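As a sketch of that idea - assuming a hypothetical table customers with primary key id and a clobbered column status, with the live data in mydb and the backup restored into a separate schema mydb_restore (all of these names are placeholders):
-- Pull the old values back from the restored copy, row by row
UPDATE mydb.customers AS live
JOIN mydb_restore.customers AS old ON old.id = live.id
SET live.status = old.status;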
Nothing.
Despite this, you can be glad that you've got that learning experience under your belt, and be proud of how you'll now change your habits to greatly reduce the chance of it happening again. You'll be the 'master' who can teach the young pups and quote from actual battle-tested experience.
The only thing you can do now is FIX YOUR BAD HABIT; it will help you in the future. I know it's an old question and the OP must have learned the lesson. I am just posting this for others, because I also learned the lesson today.
I ran a query that was supposed to update some fifty records, and then MySQL returned THIS: 48500 row(s) affected. It gave me a little heart attack, all because of one silly mistake in the WHERE condition :D
So here are the lessons:
1. Always check your query twice before running it, though sometimes even that won't help - you can still make a silly mistake.
2. Since step 1 is not always enough, it's better to take a backup of your DB before running any query that will affect the data.
3. If you are too lazy to create a backup (as I was when running that silly query) because the DB is large and it would take time, then at least use a TRANSACTION. I think this is a good, effortless way to avert disaster. From now on, this is what I do with every query that affects data:
I start the transaction, run the query, and then check whether the results are OK. If they are, I simply COMMIT the changes; otherwise I simply ROLLBACK:
START TRANSACTION;
UPDATE myTable
SET name = 'Sam'
WHERE recordingTime BETWEEN '2018-04-01 00:00:59' AND '2019-04-12 23:59:59';
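-- inspect the affected rows here (e.g. with a SELECT) before deciding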
ROLLBACK;
-- COMMIT; -- I keep it commented out so that I don't run it by mistake, and only uncomment it when I really want to COMMIT the changes
In my project I've encountered the need to implement transactions - something I have never done before. I already checked that my autocommit is set to 1 - and I'm not sure whether I need to touch it at all.
Right now I have a set of scripts that all include a function that connects to the database first. That would be a perfect place to put mysqli_begin_transaction($link); and mysqli_autocommit($link, FALSE);, so I'd have transactions everywhere regardless of whether a specific script needs them or not, and autocommit turned off - the documentation on php.net is very poor there, but AFAIR I should do this. So my questions no. 1 & 2 would be: Is it fine to start a transaction everywhere regardless of whether the script needs it or not? And should I disable autocommit like this as well?
Now let's say that I have a script like this (sorry for not providing actual code, but my question is about how transactions work, not about the code itself):
~insert and/or update things
~do something aka "line 2"
~insert and/or update things again
Seems like an example taken right from the book. I obviously want all the inserts and updates to happen, or none of them. Since I already started a transaction, I assume that nothing will commit unless I call mysqli_commit($link);. But here we have a little problem: I do not include any 'footer' at the end of my scripts, and adding one seems like a nightmare now, so I don't have any place to put the commit. So question no. 3 is: Will my queries commit automatically after the script ends (or after I call exit; or die();), even if I set autocommit to false? Or do I need to call commit / leave autocommit on?
Now comes the case when something fails and I need to roll back. Same as above - do I need to call mysqli_rollback(mysqli $link);, or is the mere fact that I did not call commit sufficient? I'm referring here to a situation where the script does not end normally - say the server loses power while working on "line 2", or the script is stopped because it took too much time (set_time_limit stopped it).
This is a somewhat broad question, so I'll try to cover everything as well as I can.
At first you can ignore the mysqli API (the API-specific transaction functions are just wrappers) and go straight to the MySQL manual. The important thing here is that disabling autocommit and starting a transaction amount to the same thing. Also, a single query (including modifications made by triggers) is always a transaction.
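A minimal SQL sketch of that equivalence (the table t is hypothetical; nothing becomes permanent until the COMMIT):
SET autocommit = 0;               -- every following statement joins one open transaction
-- ...or, equivalently, for a single unit of work:
START TRANSACTION;                -- suspends autocommit until COMMIT or ROLLBACK
UPDATE t SET x = 1 WHERE id = 1;  -- not yet visible to other sessions
COMMIT;                           -- only now do the changes become permanent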
The answer to your questions 1 & 2 is "probably not". It very much depends on what your existing code assumes about the database connection, and on how your application is structured.
From what you mentioned in the question, the answer would be: it is better to only put transactions in the places that need them.
For question 3: it will not commit automatically. You can, however, make it do so by using register_shutdown_function, although I don't recommend doing that.
There are statements (implicit commits) which commit the transaction automatically. These include all DDL statements (CREATE, ALTER, ...) and also TRUNCATE, LOCK TABLES, and others. This basically means those statements can't be used inside transactions.
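For example, a minimal sketch of how an implicit commit can bite (both table names are hypothetical):
START TRANSACTION;
UPDATE accounts SET balance = 0 WHERE id = 1;
TRUNCATE TABLE audit_log;  -- implicit commit: the UPDATE above is now permanent
ROLLBACK;                  -- too late, there is nothing left to roll back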
MySQL rolls back transactions when the connection is terminated.
I would recommend adding transactions only to the code which needs them (to be safe, you can do this for all code which makes more than one write query to the DB).
The classic approach is:
START TRANSACTION
query
other things
another query
some other stuff
3-rd query
...
COMMIT
The main thing here is to make sure you only commit if no errors have occurred.
Leave the rollback to either connection termination (or register_shutdown_function if you are using persistent connections), because making sure that each and every script has correctly working rollback logic is hard :)
This will make sure that nothing is committed if bad things happen (exceptions, fatal errors, time/mem limits, power outages, meteors...).
It is also possible to have transactions at a function/method level (nested and stack-like), but that's out of scope for this question.
From time to time some indexes in our tables get broken: the DB starts consuming 100% CPU and after some time it gets completely stuck. Even simple queries won't finish, and restarts don't help.
What I've found is to either drop and recreate the indexes one by one (which might take a loooong time and a lot of investigation) or just call alter table mytable engine=innodb; on the suspicious table. This actually works quite well; it fixes everything and things get back to normal. But I have no idea what actually happens in the background and why it helps. Also - would it help to do this manually once a month? Is it a good idea to automate it? Is there some way to run a DB health check?
A guess...
You have an older version of MySQL/Percona, one that either does not have "persistent statistics" or does not have it enabled.
And you have a nasty query that sometimes leads the Optimizer to pick the wrong query plan.
The quick fix (which may or may not work) is to run ANALYZE TABLE on the table(s) involved in the slow query.
A better fix may be to upgrade the version.
Meanwhile, let's see the query, its EXPLAIN, and SHOW CREATE TABLE for each table involved. There may be a way to reformulate it to be less flaky.
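For reference, a minimal sketch of those diagnostic steps - mytable stands in for the suspicious table, and the SELECT is a placeholder for your actual slow query:
ANALYZE TABLE mytable;                              -- rebuild the optimizer's index statistics
SHOW CREATE TABLE mytable;                          -- confirm which indexes actually exist
EXPLAIN SELECT * FROM mytable WHERE some_col = 42;  -- check which index the optimizer now picks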
I just ran a command
update sometable set col = '1';
by mistake without specifying the where condition.
Is it possible to recover the previous version of the table?
Unless you...
Started a transaction before running the query, and...
Didn't already commit the transaction
...then no, you're out of luck, barring any backups of previous versions of the database you might have made yourself.
(If you don't use transactions when manually entering queries, you might want to in the future to prevent headaches like the one you probably have now. They're invaluable for mitigating the realized-5-seconds-later kind of mistake.)
Consider enabling sql_safe_updates in future if you are worried about doing this kind of thing again.
SET SESSION sql_safe_updates = 1;
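With safe updates on, the runaway statement from the question is rejected, while a keyed UPDATE still works (error text abridged, exact wording varies by version; id stands in for an indexed column):
UPDATE sometable SET col = '1';
-- ERROR 1175 (HY000): You are using safe update mode and you tried to
-- update a table without a WHERE that uses a KEY column
UPDATE sometable SET col = '1' WHERE id = 42;  -- WHERE on a key column: allowed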
No. MySQL does have transaction support for some table types, but because you're asking this question, I'll bet you're not using it.
Everybody does this once. It's when you do it twice you have to worry :)
I have a long-running process in MySQL. It has been running for a week. There is one other connection, to a replication master, but I have halted slave processing so there's effectively nothing else going on.
How can I tell if this process is still working? I knew it would take a long time which is why I put it on its own database instance, but this is longer than I anticipated. Obviously, if it is still doing work, I don't want to kill it. If it is zombied, then I don't know how to get the work done that it's supposed to be doing.
It's in the "Sending data" state. The table is an InnoDB one but without any FK references that are used by the query. The InnoDB status shows no errors or locks since the query started.
Any thoughts are appreciated.
Try "SHOW PROCESSLIST" to see what's active.
Of course if you kill it, it may then want to take just as much time rolling it back.
You need to kill it and come up with better indices.
I did a job for a guy. Had a table with about 35 million rows. His batch process, like yours, had been running a week, with no end in sight. I added some indexes, made some changes to the order and methods of his batch process, and got the whole thing down to about two and a half hours. On a slower machine.
Given what you've said, it's not stuck. However, there is absolutely no guarantee that it will actually finish in anything resembling a reasonable amount of time. Adding indices will almost certainly help, and depending on the type of query, refactoring it into a series of queries that use temp tables could give you a huge performance boost. I wouldn't suggest waiting around for it to maybe finish.
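A minimal sketch of that temp-table refactor, where every table and column name (big_table, id, status, processed) is a placeholder:
-- Materialize the expensive filter once instead of re-evaluating it per row
CREATE TEMPORARY TABLE tmp_targets AS
    SELECT id FROM big_table WHERE status = 'pending';
ALTER TABLE tmp_targets ADD INDEX idx_id (id);  -- index the temp table for the join
UPDATE big_table b
JOIN tmp_targets t ON t.id = b.id
SET b.processed = 1;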
For better performance on a database that size, you may want to look at a document-based database such as MongoDB. It will take more hard drive space to store the database, but depending on your current schema, you may get much better performance.