Adding JSON data in audit table - sqlalchemy

I am creating audit tables for my database using sqlalchemy-postgresql-audit, but it creates a separate audit table for every table. I want one common audit table for everything, which contains:
---------------------------------------------
| transaction (insert/update/delete) | data |
---------------------------------------------
I have edited the library source to create a common table for all tables via extend_existing=True, and now I want to store the data of the affected row as JSON in the data column.
How can I achieve that?

I found the answer to this myself.
When a trigger fires, PostgreSQL creates a RECORD for the OLD row (on UPDATE and DELETE) and for the NEW row (on UPDATE and INSERT), i.e. we can access values using
OLD.column_name or NEW.column_name
And to store the entire row as JSON in the data field,
to_json(NEW) or to_json(OLD)
will work.
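For example, a minimal trigger sketch in PL/pgSQL. The names audit_table and employee follow the layout above but are otherwise my assumptions; "transaction" is quoted because it is a reserved word:

CREATE OR REPLACE FUNCTION audit_row_to_json() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        -- Only the OLD record exists for deletions
        INSERT INTO audit_table ("transaction", data)
        VALUES (TG_OP, to_json(OLD));
        RETURN OLD;
    ELSE
        -- INSERT and UPDATE expose the NEW record
        INSERT INTO audit_table ("transaction", data)
        VALUES (TG_OP, to_json(NEW));
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER employee_audit
AFTER INSERT OR UPDATE OR DELETE ON employee
FOR EACH ROW EXECUTE FUNCTION audit_row_to_json();

Because to_json takes the whole record, the same function can be attached to every table that should feed the common audit table.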

Audit table design suggestions

Is this the correct approach? Any suggestions to make it better?
Below is a screenshot of the employee table and its shadow table, where the tl_name and dept fields may change; the shadow table is currently used to track all changes.
Records are inserted/updated in the Main table and copied to the shadow table with the help of a data macro.
All records in the shadow table have to be approved/rejected by a superuser.
The Main table holds the current alignment, and the shadow table holds the entire history of changes for any employee.
When a record is added/updated in the Main table via a userform, a copy of the record is created in the shadow table, which has to be approved by an admin.
When a record is added/updated in the Main table via a userform, its is_active field is set to false; once the record is approved by an admin, this is updated to true.
As I understand your requirements:
Changed/inserted data should be immediately visible to everyone, with a visible flag for unapproved data.
This is reasonable, if you work under the assumption that the majority of changes are correct and will be approved (hopefully true ;) ).
I think you are missing:
If a change is rejected, the data in the Main table should be automatically reverted to the most recent approved state.
Otherwise the Main table stays in a (sort of) undefined state forever, with is_active = False and (apparently) wrong data.
This can be done with your audit table design. Find the latest approved entry for this emp PK, and use its data.
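A sketch of that lookup in generic SQL (the table and column names beyond tl_name and dept are my assumptions, and Access would use SELECT TOP 1 instead of LIMIT):

SELECT tl_name, dept
FROM shadow_table
WHERE emp_id = 123              -- the employee whose change was rejected
  AND status = 'approved'       -- however approval is recorded
ORDER BY changed_at DESC        -- most recent approved state wins
LIMIT 1;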
But if the number of columns that are audited may change in the future, you might consider an approach with two tables, as in this project: https://autoaudit.codeplex.com/documentation
AuditHeader Table
This table is inserted with one row every time a record is inserted, updated or deleted in a table that has been set up to use the AutoAudit system.
AuditDetail Table
This table is related to AuditHeader and is inserted with one row for each column that is changed during an insert or update operation, and one row for each column during a delete operation.
If you save old + new values with every change, you can revert to the "old" state just from the current Audit entry.
And a structural change to the Main table (or a decision that e.g. users can edit emp_name too) doesn't require a structural change to the Audit tables, because every audited column in Main is mapped to a row in AuditDetail instead of a column.
Edit: Additional advantage:
In your sample data you have marked the changed values in red. Obviously an Access table doesn't work like that. If you want to keep this information ("which column(s) exactly were edited?"), you would need an additional column in the Audit table.
This would be covered by AuditDetail, since it contains each change with old + new value.
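To make the structure concrete, a rough sketch of the two tables in MySQL-flavoured SQL (AutoAudit itself targets SQL Server, and all names here are my assumptions, loosely following its documentation):

CREATE TABLE AuditHeader (
    audit_id    INT PRIMARY KEY AUTO_INCREMENT,
    table_name  VARCHAR(100) NOT NULL,  -- which table was touched
    row_pk      VARCHAR(100) NOT NULL,  -- PK of the audited row, as text
    operation   VARCHAR(10)  NOT NULL,  -- INSERT / UPDATE / DELETE
    changed_at  DATETIME     NOT NULL,
    changed_by  VARCHAR(100) NOT NULL
);

CREATE TABLE AuditDetail (
    detail_id   INT PRIMARY KEY AUTO_INCREMENT,
    audit_id    INT NOT NULL,           -- FK to AuditHeader
    column_name VARCHAR(100) NOT NULL,  -- one row per changed column
    old_value   TEXT NULL,
    new_value   TEXT NULL,
    FOREIGN KEY (audit_id) REFERENCES AuditHeader(audit_id)
);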

Find differences from data

I am parsing HTML code at different times. Now I want to find the differences, i.e. whether any value has been changed, added or deleted.
I want to save everything in a database.
The table looks like this:
id | column1 | column2 | column3
Now I want to update every row where the data has been changed, added or deleted.
What is the best way to compare the old values and the newly parsed values?
Should I create a hash, and if the hash is different, delete this entry and add a new one?
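If you go the hash route, a sketch of what it could look like in MySQL, assuming the fresh parse is loaded into a staging table parsed_data_new (a hypothetical name; note that CONCAT_WS skips NULL values, so wrap the columns in COALESCE if NULLs matter):

-- Hash each row so a change shows up as a single value difference
SELECT id, MD5(CONCAT_WS('|', column1, column2, column3)) AS row_hash
FROM parsed_data;

-- Rows whose content differs between stored and freshly parsed data
SELECT cur.id
FROM parsed_data cur
JOIN parsed_data_new fresh ON fresh.id = cur.id
WHERE MD5(CONCAT_WS('|', cur.column1, cur.column2, cur.column3))
   <> MD5(CONCAT_WS('|', fresh.column1, fresh.column2, fresh.column3));

With only three columns, though, comparing them directly with the NULL-safe <=> operator would work just as well without hashing.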

Restoring data in a specific table

One of our tables has been mangled
/*edit as per commented request
While doing an update to a specific column, I accidentally neglected to specify which row I wished to change, and so set the offending value for every row in the table.
*/end edit
but we have a very recent backup; however, it is not so recent that other tables won't lose data if we do a total database restore.
I'm wondering what the procedure is (assuming there is one) for copying the contents of a given table from one database to another.
The largest problem is that I can't just drop the offending table and replace it, as it has rows that are indexed by id into other tables. This won't be a problem if we just take the values from the identical rows in the backup and bring them over (since the row ids wouldn't change).
It's unclear what exactly has gone wrong with your data. But I'm thinking maybe just a column or two has got messed up. As you said, you just want to copy over the data from the old table, based on the id column.
Assuming you've imported the backup database as "olddb" and the current one is named "newdb":
UPDATE newdb.yourtable newtable, olddb.yourtable oldtable
SET newtable.somecolumn = oldtable.somecolumn
WHERE newtable.id = oldtable.id;
Use mysqldump for that particular table, and then feed that into the other database.
You can edit the dump file prior to reading it into the target table.
See: https://dba.stackexchange.com/questions/9306/how-do-you-mysqldump-specific-tables

MySQL trigger to create and delete tables, make inserts and delete rows

The Situation:
So, on my website, I want to give users the opportunity to save complex sets of data from their Excel tables. Since there will be a large number of users and limited resources, I don't want to store all the data in one single table.
My idea is to have tables that store descriptive information for each dataset a user wants to create. E.g.
Table: UserDatasets_sets
id, name, userid
Table: UserDatasets_columns
id, fk_UserDatasets_sets, name, type, length, etc...
And then, at some point, have a MySQL trigger that would
a) create a table with the name that is passed in the insert, with a certain prefix (e.g. 'UD_' + UserDatasets_sets.name), all the columns from the ..._columns table, and an extra column fk_Acl.
b) Because I also want to give my users the opportunity to set permissions on the individual entries of their tables, I would then like to create a trigger for this newly created table, so that whenever a row is inserted, a corresponding row is created in the ACL table and its id is set as the fk_Acl value in the user's table.
c) Last but not least, I would also like the same triggers in reverse, so that whenever a user deletes their dataset from the UserDatasets_sets table, the corresponding table gets deleted. Everything else connected to this action could be deleted by cascading, right?
My Question:
Is this even possible to do as a trigger? (The reason I would like to do this: I don't want to waste CPU and memory on running the more demanding PHP alternative.)
What would a query to store this trigger look like?
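For what it's worth: as far as I know, a trigger cannot do part a) at all, because CREATE TABLE causes an implicit commit and dynamic SQL (PREPARE/EXECUTE) is not allowed inside MySQL triggers. A stored procedure called from the application could, though. A hypothetical sketch (set_name must be validated before use, since it is concatenated straight into the DDL):

DELIMITER //
CREATE PROCEDURE create_user_dataset(IN set_name VARCHAR(64))
BEGIN
    -- Build the CREATE TABLE statement dynamically; the columns defined
    -- in UserDatasets_columns would be appended to @ddl the same way
    SET @ddl = CONCAT(
        'CREATE TABLE `UD_', set_name, '` (',
        ' id INT PRIMARY KEY AUTO_INCREMENT,',
        ' fk_Acl INT NOT NULL',
        ')'
    );
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;
END //
DELIMITER ;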

MySQL trigger to update a table based on two other tables

I've created a table named 'combined_data' using data from two tables, 'store_data' and 'hd_data'. The two tables share a common column, 'store_num', which I used to link the data when creating the new table. When a user submits information to 'store_data', I want fields from that submission, such as store_num, store_name, etc., to move into the 'combined_data' table, and I also want to pull information from 'hd_data' that pertains to the particular store_num entered, such as region, division, etc. I'm trying to come up with the structure to do this; I can fill in table names and column names just fine. Just curious if this is doable, or if another solution should be sought?
This is a common situation when saving data that needs to be split into two or more different repositories. I would create a stored procedure and wrap everything in a transaction, so that if anything fails at any point, it rolls back and your tables stay consistent.
However, yes, you can also do it with a trigger on insert into either store_data or hd_data, if you would like to keep it simple.
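If you go the trigger route, a rough sketch of what it could look like (I only know store_num, store_name, region and division from your description, so the rest of the column list is an assumption):

DELIMITER //
CREATE TRIGGER store_data_after_insert
AFTER INSERT ON store_data
FOR EACH ROW
BEGIN
    -- Copy the submitted fields and pull the matching hd_data row
    INSERT INTO combined_data (store_num, store_name, region, division)
    SELECT NEW.store_num, NEW.store_name, hd.region, hd.division
    FROM hd_data hd
    WHERE hd.store_num = NEW.store_num;
END //
DELIMITER ;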