My situation is this:
I have a table, call it x.
Every time a row is updated or deleted, a copy of the old row should be inserted into x_history.
Additionally, x_history will have its own auto-incrementing id column; call that histid.
It is very important for it to have its own id column, as this will give us the flexibility to build version-restore functionality.
I have 100+ tables to apply this to, so I'm looking for a generic trigger that can be used for any table to back up one row into a history table. Only the two table names should vary from trigger to trigger; specifying all column names is really not what I'm looking for.
I need to do this in MySQL but have added MSSQL too; I know both, so I can convert between one and the other easily enough.
Usually, triggers are not the optimal solution for such purposes.
If possible, you might want to consider changing your database design.
Normally, a better way to handle such things is to keep the whole history in the source table, and to have a status column that tells you, for each row, whether it's deleted, updated, or current.
I have little to no experience with MySQL, but I have been working with SQL Server for the past 7 or 8 years, so what I'm about to say is true for SQL Server but may be different for MySQL.
If you choose to go with the trigger approach, keep in mind that AFTER UPDATE triggers execute even if the update does not change the row data (e.g. UPDATE tableName SET col1 = 1 WHERE idCol = 4 will fire the update trigger even if col1 was already 1 before the update, so no data was changed).
For SQL Server, you might want to consider a common history table that has only 6 columns:
1. Identity column
2. Table name column
3. Row Id column (original id from the original table)
4. Row Status column (e.g updated, deleted)
5. Action date (the date the row was copied to the history table)
6. Row content column (this should be an XML data type; I'm not sure whether MySQL has an equivalent)
and then all you have to do is use "SELECT * FROM deleted/inserted FOR XML AUTO" to create the content for the sixth column.
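For example, a trigger built on that pattern might look like this (a sketch, not tested; I'm assuming the source table is x with a PK column named id, and the per-row FOR XML subquery serializes the old row):

CREATE TABLE RowHistory (
    HistId     INT IDENTITY(1,1) PRIMARY KEY,          -- 1. identity column
    TableName  SYSNAME     NOT NULL,                   -- 2. table name
    RowId      INT         NOT NULL,                   -- 3. original id
    RowStatus  VARCHAR(10) NOT NULL,                   -- 4. 'updated' or 'deleted'
    ActionDate DATETIME    NOT NULL DEFAULT GETDATE(), -- 5. copy date
    RowContent XML         NOT NULL                    -- 6. old row as XML
);
GO

CREATE TRIGGER trg_x_history ON x
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO RowHistory (TableName, RowId, RowStatus, RowContent)
    SELECT 'x',
           d.id,
           CASE WHEN EXISTS (SELECT * FROM inserted)
                THEN 'updated' ELSE 'deleted' END,
           (SELECT d.* FOR XML RAW, TYPE)  -- one XML document per old row
    FROM deleted AS d;
END
GO

Only the table name ('x') and the trigger name would change from table to table.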
I have a table A with attributes studentID (PK), name, address, and allotment_status (value can be zero or one), and a table B with roomid (PK), studentID (FK), and roomno. Now I want that whenever allotment_status is updated to one, a new row is inserted into table B, and whenever it is set to zero, the row in table B (if it exists) gets deleted.
One way is to create a trigger on tableA with an update/insert/delete event. This is a pure database solution; whether it's good or bad design depends on your business requirements, so weigh it thoroughly before making a decision. The other solution would be to code it in your PHP application layer, but I have less experience with that, so I would like to avoid code-level details.
DELIMITER //
CREATE TRIGGER on_tablea_updateb
AFTER UPDATE ON tableA   -- or BEFORE, or on INSERT/DELETE, as needed
FOR EACH ROW
BEGIN
  -- your business logic goes here: IF condition THEN ... END IF;
END//
DELIMITER ;
Is it the correct approach? Any suggestions to make it better?
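For reference, a filled-in version of that skeleton might look like this (a sketch; I'm assuming roomid is auto-increment and roomno is populated elsewhere, since the question doesn't say where room numbers come from):

DELIMITER //
CREATE TRIGGER on_tablea_updateb AFTER UPDATE ON tableA
FOR EACH ROW
BEGIN
  IF NEW.allotment_status = 1 AND OLD.allotment_status = 0 THEN
    INSERT INTO tableB (studentID) VALUES (NEW.studentID);
  ELSEIF NEW.allotment_status = 0 AND OLD.allotment_status = 1 THEN
    DELETE FROM tableB WHERE studentID = NEW.studentID;
  END IF;
END//
DELIMITER ;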
Below is a screenshot of the employee table and its shadow table, where the tl_name and dept fields may change; I'm currently using the shadow table to track all changes.
Records are inserted/updated in the Main table and copied to the shadow table with the help of a data macro.
All the records in the shadow table have to be approved/rejected by a superuser.
The Main table will hold the updated alignment, and the shadow table will hold the entire history of changes for any employee.
When a record is added/updated in the Main table via a userform, a copy of the record will be created in the shadow table, which has to be approved by an admin.
When a record is added/updated in the Main table via a userform, the is_active field will be set to false, and once it is approved by an admin it will be updated to true.
As I understand your requirements:
Changed/inserted data should be immediately visible to everyone, with a visible flag for unapproved data.
This is reasonable, if you work under the assumption that the majority of changes are correct and will be approved (hopefully true ;) ).
I think you are missing:
If a change is rejected, the data in Main table should be automatically reverted to the most recent approved state.
Otherwise the Main table stays in a (sort of) undefined state forever, with is_active = False and (apparently) wrong data.
This can be done with your audit table design. Find the latest approved entry for this emp PK, and use its data.
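For illustration, in MySQL-style SQL (the question is about Access, where the syntax differs; the shadow table and column names here are assumptions):

UPDATE Main AS m
JOIN (SELECT emp_id, tl_name, dept
        FROM shadow
       WHERE emp_id = 123 AND approved = 1   -- the employee whose change was rejected
       ORDER BY changed_at DESC
       LIMIT 1) AS last_ok ON last_ok.emp_id = m.emp_id
   SET m.tl_name   = last_ok.tl_name,
       m.dept      = last_ok.dept,
       m.is_active = TRUE;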
But if the number of columns that are audited may change in the future, you might consider an approach with two tables, as in this project: https://autoaudit.codeplex.com/documentation
AuditHeader Table
This table is inserted with one row every time one record is inserted, updated, or deleted in a table that has been set up to use the AutoAudit system.
AuditDetail Table
This table is related to AuditHeader and is inserted with one row for each column that is changed during an insert or update operation, and for each column during a delete operation.
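To illustrate the shape of that design (the column names below are my guesses, not the project's actual schema):

CREATE TABLE AuditHeader (
  AuditHeaderId INT AUTO_INCREMENT PRIMARY KEY,
  TableName     VARCHAR(128) NOT NULL,
  RowId         INT          NOT NULL,  -- PK value of the audited row
  Action        VARCHAR(10)  NOT NULL,  -- 'insert', 'update' or 'delete'
  ActionDate    DATETIME     NOT NULL
);

CREATE TABLE AuditDetail (
  AuditDetailId INT AUTO_INCREMENT PRIMARY KEY,
  AuditHeaderId INT NOT NULL,
  ColumnName    VARCHAR(128) NOT NULL,
  OldValue      TEXT NULL,
  NewValue      TEXT NULL,
  FOREIGN KEY (AuditHeaderId) REFERENCES AuditHeader (AuditHeaderId)
);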
If you save old + new values with every change, you can revert to the "old" state just from the current audit entry.
And a structural change of the Main table (or a decision that, say, users can edit emp_name too) doesn't require a structural change of the audit table, because every audited column in Main is mapped to a row in AuditDetail instead of a column.
Edit: Additional advantage:
In your sample data you have marked the changed values in red. Obviously, an Access table doesn't work like that, so if you want to keep this information ("which column(s) exactly were edited?"), you would need an additional column in the audit table.
This would be covered by AuditDetail, since it contains each change with old + new value.
One of our tables has been mangled
/* edit as per commented request:
While doing an update to a specific column, I accidentally neglected to specify which row I wished to change, and so set the offending value for every row in the table.
*/ end edit
but we have a very recent backup, though not so recent that other tables wouldn't lose data if we did a total database restore.
I'm wondering what the procedure is (assuming there is one) for copying the contents of a given table from one database to another.
The biggest problem is that I can't just drop the offending table and replace it, as it has rows that are indexed by id into other tables. This won't be a problem if we just take the values from the identical rows in the backup and bring them over (since the row ids wouldn't change).
It's unclear what exactly has gone wrong with your data, but I'm thinking maybe just a column or two got messed up. As you said, you just want to copy over the data from the old table, based on the id column.
Assuming you've imported the backup database as "olddb" and the current one is named "newdb":
UPDATE newdb.yourtable newtable, olddb.yourtable oldtable
SET newtable.somecolumn = oldtable.somecolumn
WHERE newtable.id = oldtable.id
Use mysqldump for that particular table, and then feed that into the other database.
You can edit the dump file prior to reading it into the target table.
See: https://dba.stackexchange.com/questions/9306/how-do-you-mysqldump-specific-tables
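Roughly (assuming the backup has been restored into a separate database called olddb; note that by default the dump includes DROP TABLE statements, so loading it replaces the whole table in newdb):

mysqldump -u root -p olddb yourtable > yourtable.sql
# optionally edit yourtable.sql here
mysql -u root -p newdb < yourtable.sql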
I have a table that is used to store the latest actions the user performed (like a Ctrl+Z for the program), but I want to limit this table to about 200 entries; after that, every new entry would delete the oldest one in the table.
Is there any option to make the table behave this way in SQL, or do I need to add some code to the program to do it?
I've seen this kind of idea before, but I've rarely seen a case where it was a good idea.
Your table would need these columns in addition to columns for the normal data.
A column of type integer to hold the row number.
A column of type timestamp (standard SQL timestamp) to hold the time of the last update.
The normal approach to limiting this table to 200 rows would be to add a check constraint to the column of row numbers, for example CHECK (row_num BETWEEN 1 AND 200). MySQL doesn't enforce check constraints (at least not before version 8.0.16), so instead you'll need to use a foreign key reference to a table of row numbers (1 to 200).
All insert statements will need to determine whether the table is full, examine the time of the last update, and either a) insert a new row with a new row number, or b) delete the oldest row or overwrite it.
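As a sketch, the insert path could look like this in MySQL (undo_log, row_numbers, and the column names are illustrative; row_num must be a unique key so the overwrite can use ON DUPLICATE KEY UPDATE):

-- find the slot to write: the first free row number, else the oldest row's number
SELECT COALESCE(
         (SELECT MIN(n) FROM row_numbers
           WHERE n NOT IN (SELECT row_num FROM undo_log)),
         (SELECT row_num FROM undo_log ORDER BY updated_at ASC LIMIT 1)
       )
  INTO @slot;

-- insert into that slot, overwriting the oldest entry when the table is full
INSERT INTO undo_log (row_num, action_data, updated_at)
VALUES (@slot, 'some action', NOW())
ON DUPLICATE KEY UPDATE
  action_data = VALUES(action_data),
  updated_at  = VALUES(updated_at);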
My advice? Renegotiate this requirement.
Assuming that "200" is not a hard limit, in other words if the number of entries occasionally went over that by a small amount it would be OK...
Don't do the pruning online; do it as an offline process, run as often as needed to keep the totals per user from getting "too high".
For example, one such solution would be to run the SQL that does the pruning every hour via crontab.
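A sketch of that pruning statement (assuming an auto-increment id and a per-user user_id column; the extra derived table works around MySQL's error 1093 about reusing the target table):

-- keep only the 200 newest entries for user 42; if the user has fewer
-- than 200 rows, the subquery yields NULL and nothing is deleted
DELETE FROM undo_log
 WHERE user_id = 42
   AND id < (SELECT keep_from
               FROM (SELECT id AS keep_from
                       FROM undo_log
                      WHERE user_id = 42
                      ORDER BY id DESC
                      LIMIT 1 OFFSET 199) AS t);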
Assuming I have the following table named "contacts":
id|name|age
1|John|5
2|Amy|2
3|Eric|6
Is there some easy way to check whether or not this table changes, much like how a SHA/MD5 hash works when getting the checksum for a file on your computer?
So for example, if a new row was added to this table, or if a value was changed within the table, the "hash" or some generated value shows that the table has changed.
If there is no direct mechanism, what is the best way to do this (it could be some arbitrary hash mechanism, as long as the method puts emphasis on performance and minimizing latency)? Could it be applied to multiple tables?
There is no direct mechanism to get that information through SQL.
You could consider adding an additional LastModified column to each row. To know the last time the table was modified, select the maximum value for that column.
You could achieve a similar outcome by using a trigger on the table for INSERT, UPDATE and DELETE, which updates a separate table with the last modified timestamp.
If you want to know whether something has changed, you need something to compare, for example a date. You can add a control table with two columns, the table name and a timestamp, and program a trigger for the events on the table you want to monitor, so that the trigger updates the timestamp in this control table.
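For example (a sketch in MySQL; the control table and trigger names are mine):

CREATE TABLE last_modified (
  table_name  VARCHAR(64) PRIMARY KEY,
  modified_at TIMESTAMP NOT NULL
);

DELIMITER //
CREATE TRIGGER contacts_touch AFTER INSERT ON contacts
FOR EACH ROW
BEGIN
  INSERT INTO last_modified (table_name, modified_at)
  VALUES ('contacts', NOW())
  ON DUPLICATE KEY UPDATE modified_at = NOW();
END//
DELIMITER ;
-- repeat for AFTER UPDATE and AFTER DELETE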
If the table isn't too big, you could take a copy of the entire table. When you want to check for changes, you can then query the old vs. new data.
DROP TABLE IF EXISTS backup_table_name;
CREATE TABLE backup_table_name LIKE table_name;
INSERT INTO backup_table_name SELECT * FROM table_name;
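When you want to check for changes later, something like this works with the contacts columns from the question (a sketch; note that NOT IN misbehaves if any compared column is NULL):

-- rows added or modified since the backup was taken
SELECT * FROM table_name
WHERE (id, name, age) NOT IN (SELECT id, name, age FROM backup_table_name);

-- rows deleted since the backup was taken
SELECT * FROM backup_table_name
WHERE id NOT IN (SELECT id FROM table_name);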