Is it possible to "see" deleted rows in the same transaction before commit?
I need to do this in an AFTER DELETE trigger, where I need to select the rows that were deleted by a cascading foreign-key constraint.
Update
It does not sound like it's possible, so I want to edit my question a bit: is it possible, or is there a fast way, to collect row IDs in a BEFORE DELETE trigger and "send" them to an AFTER DELETE trigger?
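Something like the following is what I have in mind - a rough sketch, assuming MySQL; the table, trigger and helper-table names are made up (and I realize that, at least in MySQL, deletes done by an ON DELETE CASCADE constraint do not fire the child table's triggers, which is part of my problem):
CREATE TABLE deleted_ids (id INT NOT NULL);
DELIMITER $$
CREATE TRIGGER parent_before_delete
BEFORE DELETE ON parent
FOR EACH ROW
BEGIN
    -- remember the row that is about to disappear
    INSERT INTO deleted_ids (id) VALUES (OLD.id);
END$$
CREATE TRIGGER parent_after_delete
AFTER DELETE ON parent
FOR EACH ROW
BEGIN
    -- the collected ids are visible here and to the rest of the transaction
    -- ... use deleted_ids as needed, then clean it up ...
    DELETE FROM deleted_ids WHERE id = OLD.id;
END$$
DELIMITER ;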
If I understand you correctly, these Oracle docs cover what you need:
http://docs.oracle.com/cd/B19306_01/backup.102/b14192/flashptr002.htm
Not sure what exactly your problem is, but maybe this info will help you.
In general I wouldn't recommend granting delete permissions to anyone; an accidental delete can have disastrous consequences. Instead of executing the delete, mark the rows for deletion and just hide them on the front-end side. Then you can eventually delete them manually if necessary.
If you want to check what you are deleting before you do it, you would have to prepare a script that, instead of executing the DELETE, prints the query first or shows the rows you are about to delete. Keep in mind that doing that might compromise your security. If you just want to see how much you are about to delete, you can first SELECT COUNT the items you want to delete and print something like: "You are about to permanently delete x items. Is that OK?"
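A minimal sketch of both ideas, assuming MySQL and a made-up orders table (all the names here are placeholders):
-- soft delete: mark the row and filter the flag out on the front end
ALTER TABLE orders ADD COLUMN is_deleted TINYINT(1) NOT NULL DEFAULT 0;
UPDATE orders SET is_deleted = 1 WHERE order_id = 12345;
-- count first, ask for confirmation, then delete for real
SELECT COUNT(*) FROM orders WHERE customer_id = 42;
-- "You are about to permanently delete x items. Is that OK?"
DELETE FROM orders WHERE customer_id = 42;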
If you need it for testing, to show what was deleted, etc., use the MySQL server logs. They might be turned off, depending on your configuration (usually only the error log is enabled by default). You could then check the General Query Log, but it only records executed statements, so you will only see that someone executed DELETE FROM x WHERE y=z; you won't see the values that actually got deleted, only the query. Also keep in mind that the general query log can grow really fast depending on your workload; on the other hand, it gives you great insight into the last thing a particular user did before encountering an error, etc.
Does any of this help you? If you need more info on a particular topic, post a comment and I will edit accordingly.
So just before the weekend I made a bit of a catastrophic error: I got distracted, forgot to finish the SQL statement in the code I was working on for my site, and left out the WHERE clause completely before saving. The result was that each time a new order was created, every single order in the system had its payment option set to whatever the new order used.
This time I was lucky: I could salvage the situation with a fairly recent backup, I saw the error immediately (though not until 180,000+ orders had their payment info changed), and I could manually deduce what the payments should have been for the orders made after the backup was created.
Unfortunately I don't have the luxury of a good testing environment, which I know is very bad.
Question: to prevent anything like this from happening again, is there any way to set up our SQL server so that an UPDATE statement with no WHERE clause at all is treated as WHERE 0 instead of WHERE 1?
You can set the session variable sql_safe_updates to ON with
SET sql_safe_updates=ON;
Read more about it in the manual:
For beginners, a useful startup option is --safe-updates (or --i-am-a-dummy, which has the same effect). Safe-updates mode is helpful for cases when you might have issued an UPDATE or DELETE statement but forgotten the WHERE clause indicating which rows to modify. Normally, such statements update or delete all rows in the table. With --safe-updates, you can modify rows only by specifying the key values that identify them, or a LIMIT clause, or both. This helps prevent accidents. Safe-updates mode also restricts SELECT statements that produce (or are estimated to produce) very large result sets.
... (much more info in the link provided)
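To see how that behaves in practice, a small illustration (the table and column names below are made up, not from your schema):
SET SESSION sql_safe_updates = ON;
-- rejected: no key-based WHERE clause and no LIMIT, so the whole table would be touched
UPDATE orders SET payment_option = 3;
-- accepted: the key value identifies which rows to modify
UPDATE orders SET payment_option = 3 WHERE order_id = 12345;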
There are a few IDEs, like DBeaver, that warn you when there is no WHERE clause in your UPDATE/DELETE statements. Ideally you can also use SET sql_safe_updates=ON; (as in the answer above), but that is only comfortable in a test environment; I am not sure you can enable it in production. The most robust approach in SQL is to take a backup in an automated way, for example using triggers, before updating or deleting.
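A rough sketch of that trigger-based backup idea, assuming MySQL and a hypothetical orders table (the audit table and its columns are made up):
CREATE TABLE orders_audit (
    order_id INT NOT NULL,
    payment_option INT,
    changed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
DELIMITER $$
CREATE TRIGGER orders_before_update
BEFORE UPDATE ON orders
FOR EACH ROW
BEGIN
    -- keep the old values before they are overwritten
    INSERT INTO orders_audit (order_id, payment_option)
    VALUES (OLD.order_id, OLD.payment_option);
END$$
DELIMITER ;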
I have a table PRI in the database and I just realized that I entered wrong data in the first 100 rows, so I want to delete them. I don't have anything to ORDER the rows by, so how should I go about the deletion process?
If TOP is an actual keyword for you, you are on the wrong DBMS. Otherwise, you have to read up again on how to delete rows.
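For reference, MySQL does let you combine ORDER BY and LIMIT with DELETE, but only if some column defines what "first 100" means; without one, the rows removed are not well defined, which is the real problem here. A made-up example (the id column is hypothetical):
DELETE FROM PRI ORDER BY id ASC LIMIT 100;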
General tip:
If you mess up, use an external DB tool (SQLDeveloper, HeidiSQL, etc.) and connect to your database. Do your clean-up until you have a sane database state again.
Then continue coding. Not before. Never use code to undo your failures.
So one of the projects I'm working on requires us to take every query that is run on the server and automatically insert that query into a table inside the database. The reason for this is so that the DBA is able to view all prior SQL queries that have been run on the box. Unfortunately I don't have any leeway to do this differently, as the client is requiring this implementation.
Has anybody done this before or has any code that I could use that will automatically do this? Thanks.
Be careful! If you do an INSERT for every action taken, you will need to do an INSERT for that INSERT, at which point, you will ...
That is, the first logged query will hang the server and fill up the disk!
Instead of doing the task the way it is asked, turn on the "general log" and periodically scrape what is in it into another machine, which does not have this logging turned on.
Other arguments against the task as stated...
If a table has TRIGGERs, you will not be able to add another TRIGGER.
If "every query" really means "every", it is impossible (with a TRIGGER) since you can't write a SELECT or SHOW trigger.
"as the client is requiring this implementation". I would approach this unreasonable constraint by politely finding out what the real goal is. He has only described is an implementation.
If his goal is some kind of audit log, then my suggestion about the general log should suffice.
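To make the general-log suggestion concrete, here is a minimal MySQL sketch (how you scrape and archive the output is up to you):
-- write the general log to the mysql.general_log table instead of a file
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
-- later, from a scheduled job on another machine, pull what has accumulated
SELECT event_time, user_host, command_type, argument
FROM mysql.general_log
WHERE command_type = 'Query';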
Like my title describes: how can I implement something like a watchdog service in SQL Server 2008 that does the following: alert, or take some action, when too many inserts are committed on a given table.
For instance: the error table normally gets 10 error messages in one second. If there are more than 100 error messages (100 inserts) in one second, then: ALERT!
Would appreciate it if you could help me.
P.S.: No. SQL Jobs are not an option because the watchdog should be live and woof on the fly :-)
Integration Services? Are there easier ways to implement such a service?
Kind regards,
Sani
I don't understand your problem exactly, so I'm not entirely sure whether my answer actually solves anything or just makes an underlying problem worse. Especially if you are facing performance or concurrency problems, this may not work.
If you can update the original table, just add a datetime2 field like
InsertDate datetime2 NOT NULL DEFAULT GETDATE()
Preferably, put an index on that column, and then, at whatever interval fits, poll the table to see how many rows have an InsertDate newer than GETDATE() minus X.
For this particular case, you might benefit from making the polling process read uncommitted (or use WITH NOLOCK), although one has to be careful when doing so.
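A minimal version of that polling approach (the table name dbo.ErrorLog, the index name, and the one-second window are assumptions, not from the question):
-- index the timestamp column so the range scan stays cheap
CREATE INDEX IX_ErrorLog_InsertDate ON dbo.ErrorLog (InsertDate);
-- run this at whatever interval fits; NOLOCK accepts a dirty read for a rough count
SELECT COUNT(*) AS RecentInserts
FROM dbo.ErrorLog WITH (NOLOCK)
WHERE InsertDate > DATEADD(SECOND, -1, SYSDATETIME());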
If you can't modify the table itself, and you can't or won't have another process or job monitor the relevant variables, I'd suggest the following (a rough sketch follows the list):
Make a 'counter' table that just has one Datetime2 column.
On the original table, create an AFTER INSERT trigger that:
Deletes all rows where the datetime-field is older than X seconds.
Inserts one row with current time.
Counts to see if too many rows are now present in the counter-table.
Acts if necessary - ie. by executing a procedure that will signal sender/throw exception/send mail/whatever.
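A rough sketch of such a trigger (dbo.ErrorLog, the counter table, the 100-row threshold and the alert procedure are all placeholders):
CREATE TABLE dbo.InsertCounter (InsertedAt datetime2 NOT NULL);
GO
CREATE TRIGGER trg_ErrorLog_Watchdog
ON dbo.ErrorLog
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- 1) drop counter rows older than the window (here: 1 second)
    DELETE FROM dbo.InsertCounter
    WHERE InsertedAt < DATEADD(SECOND, -1, SYSDATETIME());
    -- 2) record the rows that were just inserted
    INSERT INTO dbo.InsertCounter (InsertedAt)
    SELECT SYSDATETIME() FROM inserted;
    -- 3) and 4) count, and act if the threshold is exceeded
    IF (SELECT COUNT(*) FROM dbo.InsertCounter) > 100
        EXEC dbo.usp_RaiseWatchdogAlert;  -- hypothetical alerting procedure
END;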
If you can modify the original table, add the datetime column to that table instead and make the trigger count all rows that aren't yet X seconds old, and act if necessary.
I would also look into getting another process (i.e. a SQL Agent job, a homemade service or similar) to do all the housekeeping, i.e. deleting old rows, counting rows and acting on it. Keeping this as the work of the trigger is not a good design and will probably cause problems in the long run.
If possible, you should consider having some other process doing the housekeeping.
Update: a better solution will probably be to have the trigger insert notifications (i.e. datetimes) into a queue; if you then have something listening on that queue, you can write logic to determine whether your threshold has been exceeded. However, that would require you to move some of your logic to another process, which I initially understood was not an option.
I'm using MySQL + PHP. I have some code that generates payments from an automatic-payment table based on when they are due (so you can plan future payments, etc.). The automatic script is run after activity on the site and sometimes gets run twice at the same time. To avoid this, we generate a UUID for a payment, so that there can only be one nth payment for a specific automatic payment. We also use transactions to encapsulate the whole payment-generation process.
As part of this, we need the whole transaction to fail if there is a duplicate UUID, but an actual database error will show an error to the user. Can I use INSERT IGNORE in the payment-insert SQL? Will the warning kill the transaction? If not, how can I kill the transaction when there is a duplicate UUID?
To clarify: if the INSERT fails, how can I get it to not throw a program-stopping error, but kill/rollback the transaction?
Mind you, the insert will fail on commit, not on initial execution in the php.
Thanks much!
Maybe there is another approach - making sure the same UUID isn't read twice, instead of relying on a query failure.
I am guessing you get the UUID from another table before you process it and insert.
You can use a transaction and then SELECT ... FOR UPDATE when you read the records. This way the records you read are locked. When you have all the data, update a status column to "processed" and COMMIT the transaction.
Finally, make sure the process doesn't read records with status "processed".
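A minimal sketch of that flow (the table and column names here are guesses, not your actual schema):
START TRANSACTION;
-- lock the due rows so a concurrent run blocks instead of reading them a second time
SELECT id, uuid
FROM automatic_payments
WHERE status = 'pending' AND due_date <= NOW()
FOR UPDATE;
-- ... application code generates the payment rows for the ids read above ...
UPDATE automatic_payments
SET status = 'processed'
WHERE status = 'pending' AND due_date <= NOW();
COMMIT;
A second run that starts afterwards only looks for status = 'pending', so it skips everything the first run already handled.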
Hope this helps...
Just stumbled upon this. It is an old question, but I will try to give an answer.
First: make sure only one instance of your payment job runs at a time. I guess that could save you a lot of trouble.
Second: if your statement fails in PHP, put it in a try-catch block. That way your MySQL query still fails, but the error will not be shown to the user and you can handle it.
try {
    // ... your MySQL code here ...
} catch (Exception $e) {
    // ... do whatever needs to be done in case of a problem, e.g. roll back the transaction ...
}
Keep in mind that there are a lot of possible sources for an exception. Never take it for granted that a failing insert is the reason.