So just before the weekend I made a bit of a catastrophic error: I got distracted, forgot to finish the SQL statement in the code I was working on for my site, and saved it without any WHERE clause. As a result, every time a new order was created, every single order in the system had its payment option set to whatever the new order used.
This time I was lucky: I could salvage the situation with a fairly recent backup, and I saw the error immediately (though not before 180,000+ orders had their payment info changed), so I could manually deduce what the payment options should have been for the orders made after the backup had been created.
Unfortunately I don't have the luxury of a good testing environment, which I know is very bad.
Question: to prevent anything like this from happening again, is there any way to set up our SQL server so that an UPDATE statement with no WHERE clause at all is treated as WHERE 0 instead of WHERE 1?
You can set the session variable sql_safe_updates to ON with
SET sql_safe_updates=ON;
Read more about it in the manual:
For beginners, a useful startup option is --safe-updates (or --i-am-a-dummy, which has the same effect). Safe-updates mode is helpful for cases when you might have issued an UPDATE or DELETE statement but forgotten the WHERE clause indicating which rows to modify. Normally, such statements update or delete all rows in the table. With --safe-updates, you can modify rows only by specifying the key values that identify them, or a LIMIT clause, or both. This helps prevent accidents. Safe-updates mode also restricts SELECT statements that produce (or are estimated to produce) very large result sets.
... (much more info in the link provided)
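A quick illustration of how safe-updates mode behaves in a session (orders, payment_option and order_id are just example names; order_id is assumed to be a key column):
SET sql_safe_updates = ON;
-- with safe updates on, this statement is rejected because it has no
-- WHERE clause on a key column and no LIMIT
UPDATE orders SET payment_option = 'invoice';
-- this one is allowed: the key column identifies the rows to touch
UPDATE orders SET payment_option = 'invoice' WHERE order_id = 42;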
There are a few IDEs, such as DBeaver, that warn you when an UPDATE or DELETE statement has no WHERE clause. You can also use SET sql_safe_updates=ON;, but that is really only comfortable in a test environment; whether you can enable it in production is another question. The most robust approach in SQL is to take a backup of the affected rows in an automated way, for example using triggers, before updating or deleting.
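If you go the trigger route, a minimal sketch might look like this (orders, orders_history and the column names are hypothetical):
CREATE TABLE orders_history (
  order_id       INT NOT NULL,
  payment_option VARCHAR(32) NOT NULL,
  changed_at     DATETIME NOT NULL
);
-- keep a copy of the old value before any update overwrites it
CREATE TRIGGER orders_before_update
BEFORE UPDATE ON orders
FOR EACH ROW
  INSERT INTO orders_history (order_id, payment_option, changed_at)
  VALUES (OLD.id, OLD.payment_option, NOW());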
Is it possible to "see" deleted rows in the same transaction before commit?
I need to do this in an AFTER DELETE trigger, where I need to select the rows that were deleted by a cascade constraint.
Update:
It does not sound like it's possible, so I want to edit my question a bit: is it possible, or is there a fast way, to collect row IDs in a BEFORE DELETE trigger and "send" them to an AFTER DELETE trigger?
Check this Oracle documentation; if I understand you correctly, it covers what you need:
http://docs.oracle.com/cd/B19306_01/backup.102/b14192/flashptr002.htm
Not sure what exactly your problem is, but maybe this info will help you.
In general I wouldn't recommend granting delete permissions to anyone; an accidental delete can have disastrous consequences. Instead of executing the delete, mark the rows for deletion and just hide them on the front-end side. You can then eventually delete them manually if necessary.
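A minimal sketch of that soft-delete idea (items and deleted_at are example names):
ALTER TABLE items ADD COLUMN deleted_at DATETIME NULL;
-- "delete" by marking the row instead of removing it
UPDATE items SET deleted_at = NOW() WHERE id = 42;
-- the front end only shows rows that are not marked
SELECT * FROM items WHERE deleted_at IS NULL;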
If you want to check what you are deleting before you do it, you would have to prepare a script that, instead of executing the DELETE, prints the query first or shows the rows you are about to delete. Keep in mind that doing so might compromise your security. If you just want to see how much you are about to delete, you can first SELECT COUNT of the items you want to delete and print something like: "You are about to permanently delete X items. Is that OK?"
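For the confirmation step, something along these lines (again with example names):
-- count first, show the result to the user, and only then delete
SELECT COUNT(*) FROM items WHERE created_at < '2012-01-01';
-- "You are about to permanently delete X items. Is that OK?"
DELETE FROM items WHERE created_at < '2012-01-01';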
If you need it for testing, to show what was deleted and so on, use the MySQL server logs. They might be turned off by default, depending on your configuration (usually only the error log is enabled). You could then check the General Query Log, but it only records executed queries, so you will only see that someone executed DELETE FROM x WHERE y=z; you won't see the values that actually got deleted, only the statement itself. Also keep in mind that the general query log can grow really fast depending on your workload; on the other hand it gives you great insight into the last thing a particular user did before encountering an error.
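If you do want to try the general query log, it can be switched on at runtime in reasonably recent MySQL versions (the file path is just an example):
SET GLOBAL log_output = 'FILE';
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
-- reproduce the situation, then turn it off again so the log does not balloon
SET GLOBAL general_log = 'OFF';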
Does any of this help you? If you need more info on a particular topic, post a comment and I will edit accordingly.
I have two tables with a related key. I want to choose the best way to delete a row from tbl_one together with the rows in tbl_two that share the related key. I tried using a multi-table DELETE with a JOIN to do this correctly, but I also found another, very simple way using two DELETE statements. Could you tell me which is better?
First method:
DELETE tbl_one, tbl_two
FROM tbl_one
JOIN tbl_two ON tbl_one.id = tbl_two.tbl_one_id
WHERE tbl_one.id = 1;
Second method:
DELETE FROM tbl_one WHERE id = 1;
DELETE FROM tbl_two WHERE tbl_one_id = 1;
The main point of concern is that the operation should be done in isolation (either both deletes happen or neither does),
so you should put the operations inside a transaction block.
From my perspective the first query works better, simply because the server can reach the commit point with a single statement rather than parsing and executing two.
Alternatively, you can turn off the foreign_key_checks variable, run the query, and turn it back on afterwards.
NB: you can also make use of the cascading foreign key behavior MySQL provides.
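A sketch of the cascading foreign key variant (assuming InnoDB tables; the column definitions here are only illustrative):
CREATE TABLE tbl_two (
  id INT PRIMARY KEY,
  tbl_one_id INT NOT NULL,
  FOREIGN KEY (tbl_one_id) REFERENCES tbl_one (id) ON DELETE CASCADE
) ENGINE=InnoDB;
-- with the cascade in place, deleting the parent removes the child rows too
DELETE FROM tbl_one WHERE id = 1;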
It does not matter whether you use a single statement or multiple statements to alter database content, as long as you are using transactions. Without transactions, two issues might arise:
Another process accessing the data in between your statements sees a state of the database that is "unclean", because only part of the statements has been processed. This can always happen in a system where more than one client can use the database at the same time, for example in web applications and the like.
A subsequent query might fail, for whatever reason. In that case only part of your statements have been processed and the rest have not, which again leaves your database in an "undefined" state, and a persistent one this time. You would have to guard against this manually with error detection, and even then it might simply not be possible to fix the issue.
Relational database management systems offer transactions for this. Transactions allow you to "bundle" several statements into a single one from a logical point of view. You start a transaction, run your statements, then close the transaction. If something unexpected occurred, you can always "roll back" the transaction, and you get back a stable and clean database state just as it was before the transaction started.
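Applied to the two-statement variant above, a sketch might look like this (assuming the tables use a transactional engine such as InnoDB):
START TRANSACTION;
DELETE FROM tbl_two WHERE tbl_one_id = 1;  -- child rows first, in case a foreign key restricts the order
DELETE FROM tbl_one WHERE id = 1;
COMMIT;  -- or ROLLBACK; if anything failed in between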
Like my title describes: how can I implement something like a watchdog service in SQL Server 2008 with the following task: alerting or taking an action when too many inserts are committed on a table?
For instance: in a normal situation the error table gets 10 error messages in one second. If there are more than 100 error messages (100 inserts) in one second, then: ALERT!
Would appreciate it if you could help me.
P.S.: No, SQL jobs are not an option, because the watchdog should be live and woof on the fly :-)
Integration Services? Are there easier ways to implement such a service?
Kind regards,
Sani
I don't understand your problem exactly, so I'm not entirely sure whether my answer actually solves anything or just makes an underlying problem worse. Especially if you are facing performance or concurrency problems, this may not work.
If you can update the original table, just add a datetime2 field like
InsertDate datetime2 NOT NULL DEFAULT GETDATE()
Preferably, create an index on that column, and then, at whatever interval fits, poll the table to see how many rows have InsertDate > GETDATE() - X.
For this particular case, you might benefit from making the polling process read uncommitted (or use WITH NOLOCK), although one has to be careful when doing so.
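A hypothetical polling query along those lines (ErrorTable is an assumed table name; InsertDate is the column from above):
SELECT COUNT(*) AS RecentInserts
FROM ErrorTable WITH (NOLOCK)   -- optional, see the caveat above
WHERE InsertDate > DATEADD(SECOND, -1, GETDATE());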
If you can't modify the table itself and you can't or won't make another process or job monitor the relevant variables, I'd suggest the following:
Make a 'counter' table that just has one Datetime2 column.
On the original table, create an AFTER INSERT trigger (a rough sketch follows this list) that:
Deletes all rows where the datetime-field is older than X seconds.
Inserts one row with current time.
Counts to see if too many rows are now present in the counter-table.
Acts if necessary, e.g. by executing a procedure that will signal a sender, throw an exception, send mail, or whatever fits.
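A rough sketch of that counter-table trigger in SQL Server 2008 syntax (ErrorTable, the counter table and the alert procedure are all assumed names):
CREATE TABLE ErrorInsertCounter (InsertedAt datetime2 NOT NULL);
GO
CREATE TRIGGER trg_ErrorTable_RateWatch
ON ErrorTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- 1. drop counter rows older than the one-second window
    DELETE FROM ErrorInsertCounter
    WHERE InsertedAt < DATEADD(SECOND, -1, SYSDATETIME());
    -- 2. record one counter row per inserted error row
    INSERT INTO ErrorInsertCounter (InsertedAt)
    SELECT SYSDATETIME() FROM inserted;
    -- 3. count and act if the threshold is exceeded
    IF (SELECT COUNT(*) FROM ErrorInsertCounter) > 100
        EXEC dbo.RaiseErrorRateAlert;   -- hypothetical alerting procedure
END;
GO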
If you can modify the original table, add the datetime column to that table instead and make the trigger count all rows that aren't yet X seconds old, and act if necessary.
I would also look into getting another process (e.g. a SQL job, a homemade service or similar) to do all the housekeeping, i.e. deleting old rows, counting rows and acting on the result. Keeping this as the work of the trigger is not a good design and will probably cause problems in the long run.
If possible, you should consider having some other process doing the housekeeping.
Update: a better solution will probably be to make the trigger insert notifications (i.e. datetimes) into a queue; if you then have something listening on that queue, you can write logic to determine whether your threshold has been exceeded. However, that requires you to move some of your logic to another process, which I initially understood was not an option.
I am experiencing what appears to be the effects of a race condition in an application I am involved with. The situation is as follows: generally, a page responsible for some heavy application logic follows this format:
Select from test and determine if there are rows already matching a clause.
If a matching row already exists, we terminate here; otherwise we proceed with the application logic.
Insert into the test table with values that will match our initial select.
Normally this works fine and limits the action to a single execution. However, under high load, and when abusive users intentionally send many requests simultaneously, MySQL allows many instances of the application logic to run, bypassing the restriction from the select clause.
It seems to actually run something like:
select from test
select from test
select from test
(all of which pass the check)
insert into test
insert into test
insert into test
I believe this is done for efficiency reasons, but it has serious ramifications in the context of my application. I have attempted to use GET_LOCK() and RELEASE_LOCK(), but this does not appear to suffice under high load, as the race condition still appears to be present. Transactions are also not a possibility, as the application logic is very heavy and none of the tables involved are transaction-capable.
To anyone familiar with this behavior: is it possible to turn this type of handling off so that MySQL always processes queries in the order in which they are received? Is there another way to make such queries atomic? Any help with this matter would be appreciated; I can't find much documentation about this behavior.
The problem here is that you have, as you surmised, a race condition.
The SELECT and the INSERT need to be one atomic unit.
The way you do this is via transactions. You cannot safely make the SELECT, return to PHP, and assume the SELECT's results will reflect the database state when you make the INSERT.
If well-designed transactions (the correct solution) are, as you say, not possible - and I still strongly recommend them - you're going to have to make the final INSERT atomically check whether its assumptions are still true (such as via an INSERT ... SELECT ... WHERE NOT EXISTS, a stored procedure, or catching the INSERT's error in the application). If they aren't, it aborts back to your PHP code, which must start the logic over.
By the way, MySQL likely is executing requests in the order they were received. With multiple simultaneous connections it is entirely possible to receive SELECT A, SELECT B, INSERT A, INSERT B. Thus the only "solution" would be to allow only one connection at a time, and that would kill your scalability dead.
Personally, I would go about the check another way.
Attempt to insert the row. If it fails, then there was already a row there.
In this manner, you check for a duplicate and insert the new row in a single query, eliminating the possibility of races.
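A minimal sketch of that approach, assuming the test table has (or can be given) a unique key on the column the clause matches on (token and created_at are hypothetical column names):
-- the unique key makes the duplicate check atomic on the server side
ALTER TABLE test ADD UNIQUE KEY uniq_token (token);
-- the insert is ignored if a row with the same token already exists;
-- the application then checks the affected-row count (0 means "already there")
INSERT IGNORE INTO test (token, created_at) VALUES ('abc123', NOW());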
Is there a way that, if there's a change in records, the query that changed the data (UPDATE, DELETE, INSERT) can be added to a "history" table transparently?
For example, if MySQL detects a change in a record or set of records, is there a way for MySQL to add that query statement into a separate table so that we can track the changes? That would make a "rollback" possible, since every query (other than SELECT) would allow the database to be reconstructed from its first row. Right?
I use PHP to interact with MySQL.
You need to enable the MySQL binary log (binlog). This automatically logs all data-altering statements to a binary log which can be replayed as needed.
The alternative is to implement auditing through triggers.
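Enabling the binary log is done in the server configuration; a minimal my.cnf sketch might look like this (the path and retention period are just examples):
[mysqld]
# write data-altering statements to binary log files with this base name
log-bin          = /var/log/mysql/mysql-bin
# purge binlog files older than two weeks
expire_logs_days = 14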
Read about transaction logging in MySQL. This is built into MySQL.
MySQL has logging functionality that can be used to log all queries. I usually leave this turned off since these logs can grow very rapidly, but it is useful to turn on when debugging.
If you are looking to track changes to records so that you can "roll back" a sequence of queries if some error condition presents itself, then you may want to look into MySQL's native support of transactions.