How can I use a trigger to update a table with the last time another table was edited? I know triggers run "for each row", so if someone inserts more than one row it would pointlessly update or alter that timestamp table over and over again. Is there any way to avoid that repetition?
I'd like it to run once for a whole batch of inserts instead of once per row. If not, I suppose I can force that behaviour via a wrapper.
edit 1:
Well, to explain some more of the design then.
I'm going to have a table in another database that holds the last_updated data for things like chat or the players' "mailbox", and another one for development data such as the tables for quests, skills, items, etc. I want to know when a table was last updated so I can check that cheaply before scanning the table itself for new rows.
Basically this is what I'd like to do (or something similar). I'm also using PHP, so the approach is likely to be PHP-based, but the SQL should be fairly standard. This isn't full code, just something semi-runnable.
$result = mysql_query("SELECT last_modified FROM various_stats.table_last_updated
                       WHERE database_name = 'database_name' AND table_name = 'table_name'");
$last_modified = mysql_result($result, 0);

if ($last_modified > $last_checked_time) {
    // something changed since the last check, so go get the new rows
    $data_to_get_updated = mysql_query("SELECT something FROM various_<something>.table_name
                                        WHERE last_modified > '$last_checked_time'");
} else {
    // nothing new, do nothing
}
edit 2: I'm using InnoDB, so I can't rely on information_schema's UPDATE_TIME, since it never changes.
Will this help, if I'm on the right track that is:
SELECT UPDATE_TIME
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'dbname'
AND TABLE_NAME = 'tabname'
The above solution is for MyISAM. For InnoDB the usual approach is a scheduled script, set up as a cron job or a Windows scheduled task. If you don't have that kind of control over your web host, you could set up a small server at your office and run the cron from there. If you run it every, say, 20 seconds, you can simply record the current top auto-incremented ID and use it as a marker: if the current ID is higher than the last recorded ID, update your records to show the last-changed time as now.
As this is only one call to the server every XX seconds, it won't really hammer the server too much and should just run silently in the background.
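For illustration, a rough sketch of what that scheduled check could run; the game_db.chat_messages table and the last_seen_id column on the tracking table are assumptions for the example, not something from the question:
SELECT MAX(id) INTO @current_max FROM game_db.chat_messages;

UPDATE various_stats.table_last_updated
SET    last_modified = NOW(),
       last_seen_id  = @current_max
WHERE  database_name = 'game_db'
  AND  table_name    = 'chat_messages'
  AND  last_seen_id  < @current_max;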
If you do go down the scheduled-task route, it would be wise to add error capture to your script so you can be alerted by email if something stops working.
Related
I have several databases that are used by several applications (one of which is our own; the others we have no control over).
Our software has to know when each database was last changed. For reasons I won't get into, to keep this short, we decided to add a new table per database with a single field, last_changed_on, holding a GETDATE() value. That way our software can read when the database was last changed, compare it to the date it has stored in memory for that database, and act if the stored date is older.
After doing some research we decided that triggers were the way to go, but from what I could find online, triggers look at specific columns that you set for updates.
What I'd like to know is whether there is a way to automate the process, or just have a trigger that fires whenever anything happens, whether it's an insert, update, or delete.
So I am looking for something like this:
CREATE TRIGGER LastModifiedTrigger
ON [dbo].[anytable]
AFTER INSERT, UPDATE, DELETE
AS
INSERT INTO dbo.LastModifiedTable (last_modified_on) VALUES (CURRENT_TIMESTAMP)
I know the above example isn't a correct trigger; I'm rather new to them, so I was unsure how to word it.
It might be worth noting that my own software can generate and run the queries automatically for each table and each column, but I'd rather avoid that, as keeping track of all those triggers will be a pain in the long run.
I'd prefer to have as few triggers per database as possible, if only by not having to make a trigger for each individual column name.
Edit: To clarify, I am trying to avoid having to write an automated script that scans every table, and then every column of every table, to create a trigger to see if something changed there. My biggest issue at the moment is the trigger behaviour on updates, but I'm hoping to avoid having to specify tables for inserts and deletes as well.
Edit 2: To avoid future confusion, I'm looking for a solution to this problem for both SQL Server (MS SQL/T-SQL) and MySQL.
Edit 3: Turns out I read the documentation very wrongly and (at least on MySQL) the trigger fires for any updated column without having to name a specific one. Regardless, I'm still wondering if there is a way to have fewer triggers than one per table per event (i.e. one for any update, one for any insert, and one for any delete).
Edit 4: I forgot that overwriting a single field comes with performance issues; I've considered this and I'm now working with multiple rows. I've also handled creating the three triggers (insert, update, and delete) for each database through my software's code. I really wished this could have been avoided, but it cannot.
Solution
After a bunch more digging on the internet (where I kept finding the opposite of what I was looking for) and a bunch of trial and error, I found a solution.
First and foremost: having a trigger that isn't tied to a table (i.e. a trigger that fires for every table) is impossible. It cannot be done, which is too bad; it would have been nice to keep this out of the program code, but there's nothing I can do about it.
Second: the issue with updates not being column-specific was an error on my part; searching for triggers not tied to specific columns only gave me examples of triggers that are.
The following solution works for MySQL. I have yet to test it on SQL Server, but I expect it not to be too different.
DELIMITER $$
CREATE TRIGGER [tablename]_last_modified_insert
AFTER INSERT ON [db].[tablename]  -- created again as AFTER UPDATE and AFTER DELETE triggers
FOR EACH ROW
BEGIN
    INSERT INTO [db].last_modified (last_modified_on)
    VALUES (CURRENT_TIMESTAMP());
END$$
DELIMITER ;
As for creating these triggers dynamically, the following shows how I got it to work:
First query:
SHOW TABLES
I run the above query to get all the tables in the database, exclude the last_modified table I made myself, and loop through the rest, creating three triggers for each.
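If you'd prefer not to build the loop entirely in application code, the CREATE TRIGGER statements can also be generated by a query against the information schema. This is only a sketch under my own assumptions (the schema name 'db' is a placeholder, and it only covers the INSERT trigger; UPDATE and DELETE follow the same pattern):
SELECT CONCAT('CREATE TRIGGER ', TABLE_NAME, '_last_modified_insert ',
              'AFTER INSERT ON `db`.`', TABLE_NAME, '` FOR EACH ROW ',
              'INSERT INTO `db`.last_modified (last_modified_on) VALUES (CURRENT_TIMESTAMP());')
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'db'
  AND TABLE_TYPE = 'BASE TABLE'
  AND TABLE_NAME <> 'last_modified';
Each row of the result is one statement that still has to be executed separately (as far as I know, CREATE TRIGGER cannot be run as a prepared statement, so the application or a shell script runs them).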
A big thank you to Arvo and T2PS for their replies; their comments pointed me in the right direction for writing up the solution.
You're slightly off in the assumption that SQL Server triggers are per-column; the CREATE TRIGGER syntax binds the trigger to the named table for the specified operations. The trigger will be called with two logical tables in scope (inserted & deleted) that contain the rows modified by the operation that caused the trigger to fire; if you wanted to check for specific columns' values or changes, then the trigger logic would need to operate against those logical tables.
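For concreteness, here is a minimal statement-level version of the trigger sketched in the question, one per monitored table (the names are the question's own placeholders):
CREATE TRIGGER dbo.LastModifiedTrigger_anytable
ON dbo.anytable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.LastModifiedTable (last_modified_on)
    VALUES (CURRENT_TIMESTAMP);
END;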
If you take this approach, you will need to create a trigger for each table you wish to monitor in this fashion; we've had a similar need to track changes (at a more granular level), and we didn't find a "pseudotable" that corresponds to all tables in a schema/database. You should also be aware that locking semantics come into play, since triggers from multiple tables will all target the same row for an update as part of separate operations; depending on the concurrency model in effect, you could be looking at performance consequences if you expect multiple DML queries to operate concurrently against your database.
I would suggest checking Arvo's commented link above for suitability instead; querying system views is more likely to avoid the contention (and other performance-related) issues that come with using triggers in your scenario.
Perhaps you could use Audit for SQL Server:
CREATE SERVER AUDIT [ServerAuditName]
TO FILE
(
FILEPATH = N'C:\Program Files......'
)
ALTER SERVER AUDIT [ServerAuditName] WITH (STATE=ON)
GO
CREATE DATABASE AUDIT SPECIFICATION [mySpec]
FOR SERVER AUDIT [ServerAuditName]
ADD (INSERT, UPDATE, DELETE ON DATABASE::databasename BY [public])
WITH (STATE=ON)
GO
Then you can query for changes:
SELECT *
FROM sys.fn_get_audit_file ('C:\Program Files......',default,default);
GO
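If it helps, the audit output can then be narrowed to the DML against a particular table; a hedged example using the same (elided) file path, where 'databasename' and 'sometable' are placeholders:
SELECT event_time, server_principal_name, action_id, statement
FROM sys.fn_get_audit_file ('C:\Program Files......',default,default)
WHERE database_name = 'databasename'
  AND object_name = 'sometable';
GO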
Is there any way to detect when an ALTER TABLE statement is executed in MySQL? For example, if the following statement were executed on some_table, is there any way to detect that the column name changed from column_name_a to column_name_b and log it in another table in the DB?
ALTER TABLE `some_table`
CHANGE COLUMN `column_name_a` `column_name_b` VARCHAR(255) NULL DEFAULT NULL;
Thanks.
To my knowledge it is unfortunately not possible to put triggers on the INFORMATION_SCHEMA tables, since they are, strictly speaking, views, and triggers can't be made to work on views. If triggers were possible on the INFORMATION_SCHEMA, you could have a trigger on updates of the INFORMATION_SCHEMA.COLUMNS table to identify name changes.
However, what you can do is one of the following things:
option 1) Maintain a real table with all column names. Then create a function that checks for discrepancies between the INFORMATION_SCHEMA.COLUMNS table and your table. If there is one, you know a name has changed. You then copy the new name into your column-name table and do whatever else you wanted to do on a name change. A sketch of the comparison is below.
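A minimal sketch of that comparison, assuming your column-name table is called column_snapshot (an assumed helper table, not something MySQL provides):
SELECT c.TABLE_NAME, c.COLUMN_NAME
FROM information_schema.COLUMNS AS c
LEFT JOIN my_db.column_snapshot AS s
       ON s.table_name = c.TABLE_NAME
      AND s.column_name = c.COLUMN_NAME
WHERE c.TABLE_SCHEMA = 'my_db'
  AND s.column_name IS NULL;  -- columns that appeared (or were renamed) since the snapshot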
The function to check for discrepancies then has to be run periodically via the MySQL event scheduler to detect name changes as quickly as possible. Note that this is not a real-time solution: there will be a lag between the ALTER TABLE command and its detection. If that is unacceptable in your scenario, you need to go with
option 2) Do not call ALTER TABLE directly, but wrap it in a function. Within this function you can also call other functions to achieve what you need. It may be worthwhile to implement the needed steps in the higher-level programming language that drives your application. If that is not possible, you will be limited to what MySQL functions/procedures can offer. A possible wrapper is sketched below.
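A rough sketch of option 2 as a stored procedure (the procedure name and the log_column_rename table are my own illustrative names; prepared statements do work inside procedures, just not inside triggers):
DELIMITER $$
CREATE PROCEDURE rename_column(IN p_table VARCHAR(64),
                               IN p_old   VARCHAR(64),
                               IN p_new   VARCHAR(64),
                               IN p_def   VARCHAR(255))
BEGIN
    -- build and run the ALTER TABLE dynamically
    SET @ddl = CONCAT('ALTER TABLE `', p_table, '` CHANGE COLUMN `',
                      p_old, '` `', p_new, '` ', p_def);
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
    -- record the rename so nothing else has to detect it afterwards
    INSERT INTO log_column_rename (table_name, old_name, new_name, changed_at)
    VALUES (p_table, p_old, p_new, NOW());
END$$
DELIMITER ;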
Sorry to not have a simpler way of doing this for you.
I have recently installed a new computer with Percona Server 5.6 instead of MySQL 5.6, using InnoDB/XtraDB mostly, FWIW. The database I'm working on is merely a testing ground, but I have one issue: after I add a column to a table (or even remove one), I usually forget to INSERT into or otherwise change another table whose data keeps track of which column names are in which table. Each table has an ASCII name along with a number, and this number is the only difference between table names, for simplicity. So, is there a way to auto-update the "relation" table so that the column name and the table's number are added or changed, instead of using a cron job?
Now that I think, I could DROP that table and use information_schema instead ...
EDIT 0: Don't let the above realization stop you; it's just good to know whether this is possible before going a different way.
Yes, relying on INFORMATION_SCHEMA.COLUMNS is probably best.
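For example, everything a hand-maintained relation table would hold can be read straight from the information schema (the schema name is a placeholder):
SELECT TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'my_db'
ORDER BY TABLE_NAME, ORDINAL_POSITION;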
Unfortunately MySQL does not support DDL trigger events, which is what you are looking for.
Triggers let you perform many SQL and procedural operations before or after insertion, update, or deletion of rows in a specific table. However, to the best of my knowledge (and I would be stoked to be wrong) you can't set trigger events on DDL statements like ALTER TABLE and DROP TABLE...
Still, take the time to learn about triggers; they save a lot of time by eliminating the need for cron jobs and external updates for things like aggregate values.
https://dev.mysql.com/doc/refman/5.6/en/trigger-syntax.html
I have a table with 120 columns. I need to set up an audit trail that would log any column if it was changed. As it is now, I guess I have to set up a trigger with a condition something like this for every column:
IF(NEW.columnName != OLD.columnName)
THEN //log the old value
This would need to be done 120 times... While I would have accepted this approach 20 years ago, today I refuse to believe it's impossible to automate such a simple procedure and find the changed columns automatically.
This is what I discovered so far:
Neither NEW nor OLD is a table; they are a sort of language construct, therefore you can't do "SELECT NEW.*" or anything similar.
Dynamic SQL is not allowed in triggers (this could have solved the problem).
Procedures using dynamic SQL are not allowed in triggers either (seriously, Oracle, it looks like you worked really hard to disable this feature no matter what).
I was thinking of using BEFORE and AFTER triggers in conjunction with temporary tables and variables, which might have solved the problem, but yet again dynamic SQL would be required. I feel like I've hit a dead end.
Is there a solution to this at all?
A side question: would this be possible in PostgreSQL?
UPDATE: I found 2 potential solutions however neither of them look clear enough to me:
using EVENTS as a workaround, in conjunction with the dynamic-SQL workaround. I have to admit I don't quite get this; does it mean the EVENT fires every second no matter what?
This article says it is possible to use dynamic SQL inside a trigger as long as a temporary table is used with it. That is still dynamic SQL, so I don't quite understand.
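For what it's worth, the EVENTS workaround just means a scheduled event polls on a fixed interval rather than firing on the change itself; a minimal sketch, assuming a hypothetical audit_pending_changes() procedure and the event scheduler being enabled:
CREATE EVENT audit_poll
ON SCHEDULE EVERY 1 SECOND
DO
  CALL audit_pending_changes();  -- assumed procedure that does the actual comparison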
Interesting; I was facing the same problem a couple of years ago while implementing a dynamic trigger-based audit log. The solution I came up with was to simply generate the SQL trigger code, which can then be (automatically) applied to replace the old trigger definitions. If memory serves, I created a few SQL templates which were processed by a PHP script, which in turn output complete trigger definitions based on "SELECT COLUMN_NAME FROM information_schema.COLUMNS WHERE ...". Yes, the trigger code was huge, but it worked! Hope that helps a little =)
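To give an idea of the generated output, one slice of such a trigger might look like this for a single column (the audit_log table and its columns are made up for the example; the generator emits one IF block per column):
DELIMITER $$
CREATE TRIGGER some_table_audit_upd
AFTER UPDATE ON some_table
FOR EACH ROW
BEGIN
    -- one block like this per column, emitted from the template
    IF NOT (NEW.some_column <=> OLD.some_column) THEN
        INSERT INTO audit_log (table_name, column_name, old_value, new_value, changed_at)
        VALUES ('some_table', 'some_column', OLD.some_column, NEW.some_column, NOW());
    END IF;
END$$
DELIMITER ;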
I did this for one of my projects by creating a shadow table. If you are not dealing with millions of updates, this might work:
when the user logs in, SET @user_id = { logged in user id }
create a trigger on the table BEFORE UPDATE that copies the row about to be modified into a shadow table with the same structure (note that the shadow table cannot have a primary key or unique keys); see the sketch after this list
add additional columns to the shadow table (modified_by, modified_on)
create a small PHP script to show the diff between columns; this way you don't touch the existing PHP code base
if you are dealing with lots of updates and want to keep the shadow table small, a cron job can parse the shadow table, identify which column changed, and store only that info in another table
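A minimal sketch of the shadow-table trigger described above; my_table and its two columns are stand-ins, and the real trigger lists every column of the table:
DELIMITER $$
CREATE TRIGGER my_table_shadow_bu
BEFORE UPDATE ON my_table
FOR EACH ROW
BEGIN
    -- copy the row as it was, plus who changed it and when
    INSERT INTO my_table_shadow (id, name, modified_by, modified_on)
    VALUES (OLD.id, OLD.name, @user_id, NOW());
END$$
DELIMITER ;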
I have so many tables in my DB.
eg. user, organisation,etc.
**User**
userId,name,age,orgId,etc..
**SessionLog**
logId, userId, operations, reason
If an admin makes changes (inserting, updating, deleting), I will log in the SessionLog table WHAT he did.
So I plan to use a trigger. But the problem is that I also want to log the userId for WHO did it. With a trigger, WHAT is OK, but how can I log WHO?
1) Do I need to retrieve the logId and then update the row with WHO?
or
2) Just use a simple INSERT statement to log everything? Which way is better?
3) Is there any way to pass parameters to a trigger?
Thanks.
1. User
You can use CURRENT_USER to get this. http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_current-user
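For instance, a hedged sketch of an insert-logging trigger that records CURRENT_USER(); note it returns the MySQL account ('user@host'), so here it is folded into the operations text, and mapping it to an application userId is left to you:
DELIMITER $$
CREATE TRIGGER user_log_ai
AFTER INSERT ON `User`
FOR EACH ROW
BEGIN
    -- WHAT: the operation; WHO: the connected MySQL account
    INSERT INTO SessionLog (userId, operations, reason)
    VALUES (NEW.userId, CONCAT('INSERT by ', CURRENT_USER()), 'admin change');
END$$
DELIMITER ;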
2. Insert statement vs. Trigger
A trigger will abstract logging away from everybody else, and usually is the easiest solution that stays hidden.
INSERTS / UPDATES will not return until the trigger has completed.
Therefore, while triggers on tables with light activity are an OK idea, they become a real hindrance when dealing with tables that have a lot of activity.
Another option is to encapsulate this in the data access layer, but if you have even a single user that has direct access to the data (DBA included) then I do not recommend this approach.