I am trying to capture record history, wherein I pass the old and new data to a PHP function and it returns an array of the changed values.
So, is there a way to get the previously committed data from the DB using the same session, before committing the current transaction?
Plan A:
BEGIN;
SELECT ... FOR UPDATE;  -- read the last committed values and lock the row
...                     -- diff old vs. new in PHP, write the history entry, run the UPDATE
COMMIT;                 -- release the lock
Plan B:
See if you can use a TRIGGER, which has access to the old and new values as the pseudo tables OLD and NEW.
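For illustration, a minimal sketch of Plan B, with made-up items/items_history tables and columns; only an UPDATE trigger is shown:

DELIMITER //
CREATE TRIGGER items_history_trg
AFTER UPDATE ON items
FOR EACH ROW
BEGIN
  -- OLD and NEW expose the row as it was before and after the UPDATE;
  -- <=> is MySQL's NULL-safe equality, so changes to/from NULL are caught too
  IF NOT (OLD.name <=> NEW.name) OR NOT (OLD.price <=> NEW.price) THEN
    INSERT INTO items_history (item_id, old_name, new_name, old_price, new_price, changed_at)
    VALUES (OLD.id, OLD.name, NEW.name, OLD.price, NEW.price, NOW());
  END IF;
END//
DELIMITER ;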
(If you would like to discuss your goal more, we might be able to provide a more focused Answer.)
Related
I have a requirement where the data displayed in the UI needs to be the latest. A .bat script updates the database with a new set of data every 30 minutes. However, during this process the old data is deleted completely before the new set is inserted, so while it runs the UI cannot show any data, because deleting the old data is the first step. Is there any way to update the DB table with the new set of data while the UI still displays the old data, until the new data is inserted, so that the UI always has some data to display rather than showing no records?
Thanks,
Keerthi Kumar
You have no other processes that write to this table during your update (which I assume takes a while)?
Then simply use a transaction.
START TRANSACTION;
DELETE FROM your_table; /* Don't do a TRUNCATE TABLE; it would implicitly commit the transaction */
INSERT INTO your_table VALUES('whatever');
COMMIT;
or
START TRANSACTION;
UPDATE your_table SET column1 = 'new_value';
COMMIT;
Other threads will still read the old data until you COMMIT your transaction.
Please read more about transactions in the manual. Also note that this assumes you're using InnoDB tables.
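To make "old data stays visible" concrete, here is a sketch of two concurrent sessions, assuming the UI reads with autocommit:

-- Session A (the refresh job):
START TRANSACTION;
DELETE FROM your_table;

-- Session B (the UI), at this point:
SELECT * FROM your_table;   -- still returns the old rows

-- Session A:
INSERT INTO your_table VALUES('whatever');
COMMIT;

-- Session B, after the COMMIT:
SELECT * FROM your_table;   -- now returns only the new rows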
I am calling an API of my Node.js app to update a record in my MySQL database.
I defined an AFTER UPDATE trigger on it. The trigger calls a POST RESTful API using sys_exec to pass the updated record's ID to another API. The other API then fetches the record and, based on the updated values, inserts a new record into another table of the same database.
But what actually happens is: the second API first inserts the new record based on the old values of the record, and only afterwards does the updated value replace the old one.
As far as I know, an AFTER UPDATE trigger is guaranteed to start executing after the current record has been updated.
Any suggestion or help, please.
The AFTER UPDATE trigger runs after the record is updated, but before the transaction is committed.
Because the trigger calls another API, the second insert most likely runs in a different transaction. Unless you change the isolation level to READ UNCOMMITTED, the second transaction can only read the committed, and therefore unchanged, values of the record.
I would do the second insertion from the trigger, not from another API, because the trigger can obviously see the updated values. The second API can still take care of whatever else it is doing at the moment.
I would not recommend changing the isolation level to READ UNCOMMITTED - unless you really know what you are doing. It can have unintended side effects.
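A minimal sketch of that suggestion, with made-up table and column names:

DELIMITER //
CREATE TRIGGER orders_after_update
AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
  -- NEW already holds the updated values here; no API round trip,
  -- and the insert happens in the same transaction as the update
  INSERT INTO order_audit (order_id, new_status, created_at)
  VALUES (NEW.id, NEW.status, NOW());
END//
DELIMITER ;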
Hypothetically, I am going to develop a trigger that inserts a record into Table A when an insertion is made to that same Table A.
I want to know how the system handles that kind of loop: will it detect the condition, or keep looping until the system hangs, requiring a restart and possibly removal of the DB?
I'm trying to gather information on how almost every DBMS handles this issue.
I can only speak to Oracle, I know nothing of MySQL.
In Oracle, this situation is known as mutation. Oracle will not spiral into an endless loop. It will detect the condition, and raise an ORA-04091 error.
That is:
ORA-04091: table XXXX is mutating, trigger/function may not see it
The standard solution is to define a package with three functions and a package level array. The three functions are as follows:
initialize - this will only zero out the array.
save_row - this will save the id of the current row (UK or PK) into the array.
process_rows - this will go through the array, and actually do the trigger action for each row.
Now, define some trigger actions:
statement level BEFORE: call initialize
row level BEFORE or AFTER: call save_row
statement level AFTER: call process_rows
In this way, Oracle can avoid mutation, and your trigger will work.
More details and some sample code can be found here:
https://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551198119097816936
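A rough PL/SQL sketch of that pattern, using a made-up table t with primary key id; the real trigger action goes where the placeholder comment is:

CREATE OR REPLACE PACKAGE t_pkg AS
  TYPE id_list IS TABLE OF t.id%TYPE INDEX BY PLS_INTEGER;
  g_ids id_list;
  g_cnt PLS_INTEGER := 0;
  PROCEDURE initialize;
  PROCEDURE save_row(p_id IN t.id%TYPE);
  PROCEDURE process_rows;
END t_pkg;
/
CREATE OR REPLACE PACKAGE BODY t_pkg AS
  PROCEDURE initialize IS
  BEGIN
    g_cnt := 0;
    g_ids.DELETE;            -- zero out the array
  END;

  PROCEDURE save_row(p_id IN t.id%TYPE) IS
  BEGIN
    g_cnt := g_cnt + 1;
    g_ids(g_cnt) := p_id;    -- remember the affected row's key
  END;

  PROCEDURE process_rows IS
  BEGIN
    FOR i IN 1 .. g_cnt LOOP
      NULL;                  -- do the real trigger action for g_ids(i);
                             -- the table is no longer mutating here
    END LOOP;
  END;
END t_pkg;
/
CREATE OR REPLACE TRIGGER t_before_stmt
  BEFORE INSERT ON t
BEGIN
  t_pkg.initialize;
END;
/
CREATE OR REPLACE TRIGGER t_after_row
  AFTER INSERT ON t
  FOR EACH ROW
BEGIN
  t_pkg.save_row(:NEW.id);
END;
/
CREATE OR REPLACE TRIGGER t_after_stmt
  AFTER INSERT ON t
BEGIN
  t_pkg.process_rows;
END;
/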
You can only insert a record into the same table if you are using an INSTEAD OF trigger. In all other cases you can only modify the record being inserted.
I hope this answers your question.
You can create triggers in the MySQL DBMS.
Check the link below for AFTER INSERT trigger syntax:
http://www.techonthenet.com/oracle/triggers/after_insert.php
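For reference, the general shape of a MySQL AFTER INSERT trigger, with hypothetical names. Note that MySQL will not let a trigger modify the very table it is defined on (such a statement fails at runtime with an error), so the endless loop the question worries about cannot start:

DELIMITER //
CREATE TRIGGER table_a_after_insert
AFTER INSERT ON table_a
FOR EACH ROW
BEGIN
  -- inserting into table_a itself here would raise an error in MySQL
  INSERT INTO table_b (a_id) VALUES (NEW.id);
END//
DELIMITER ;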
I have many tables in my DB, e.g. user, organisation, etc.
**User**
userId, name, age, orgId, etc.
**SessionLog**
logId, userId, operations, reason
If an admin makes changes (inserting, updating, deleting), I will log every operation he made - the WHAT - in the SessionLog table.
So I plan to use a trigger. But the problem is that I also want to log the userId of WHO made the change. With a trigger, the WHAT is covered; but how can I log the WHO?
1) Do I need to retrieve the logId and then update the row with the WHO?
or
2) Just use a simple INSERT statement to log everything? Which way is better?
3) Is there any way to pass desired parameters to a trigger?
Thanks.
1. User
You can use CURRENT_USER to get this. http://dev.mysql.com/doc/refman/5.0/en/information-functions.html#function_current-user
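A minimal sketch, assuming each admin connects with their own MySQL account. Note that CURRENT_USER() returns the account string (e.g. 'admin@localhost'), not your application's numeric userId, so here it is logged as text:

CREATE TRIGGER user_after_update
AFTER UPDATE ON User
FOR EACH ROW
  INSERT INTO SessionLog (userId, operations, reason)
  VALUES (OLD.userId, CONCAT('UPDATE by ', CURRENT_USER()), 'audit');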
2. Insert statement vs. Trigger
A trigger will abstract logging away from everybody else, and usually is the easiest solution that stays hidden.
INSERTS / UPDATES will not return until the trigger has completed.
Therefore, while triggers on tables with light activity are an OK idea, they become a real hindrance when dealing with tables that have a lot of activity.
Another option is to encapsulate this in the data access layer, but if you have even a single user that has direct access to the data (DBA included) then I do not recommend this approach.
In Kettle, I use the following logic in a transformation, given some Strings X and Y as input:
[User Defined Java Expression] Generate ID
[Insert / Update] Update/Insert table set id = generatedId, name=X, company=Y where name = X; don't update the ID column
[Database Value Lookup]select id from table where name = X
The idea is to update existing entries in the table, or create new ones, and to get the ID of the relevant row in the next step (which may be an existing row or the newly generated one).
This works fine when executed on MySQL + MyISAM but fails on MySQL + InnoDB, with all other parameters identical. The last step fails when the row has just been inserted in the second step, but works for rows that already existed in the database. It seems as if the connection tries to execute the SELECT of the last step before the actual insert has happened.
All parameters are set to default in the MySQL settings (MySQL 5.1 and 5.5 show the same behavior).
So my questions are: What are the relevant parameters in Kettle and/or MySQL? How can I guarantee that this works as expected? I cannot switch back to MyISAM.
Just use the blocking step between the Insert/Update step and the next step. Then the step before the block will complete before the next step starts.
Well, after having evaluated different possibilities, three seem to be possible:
Write my own step which performs the select/insert in a transaction
Serialize the whole transformation in its properties (makes everything REALLY slow)
Use Codek's idea and use the blocking step
I went with the third option for now, as the others are not feasible at the moment.
Make sure the transaction generated by the Update/Insert step is committed and its locks are released before the SELECT takes place. It looks like there is a locking problem.