Forms in Oracle Apps - Oracle Forms

I am new to Oracle Apps and tried to implement a small piece of logic: transfer records from a staging table to a permanent table when the Transfer button is clicked on a form. After updating/inserting the data in the permanent table, the transfer flag in the staging table should be updated to 'Y'.
In the form I am using the staging table columns as a data block.
Sample code:
GO_BLOCK('stg_datablock');
FIRST_RECORD;
LOOP
  -- Insert/update the permanent table here.
  -- Then update the staging table's transfer_flag to 'Y' to indicate
  -- the record has been transferred to the permanent table.
  UPDATE staging_table
     SET transfer_flag = 'Y'
   WHERE col1 = :stg_datablock.col1
     AND col2 = :stg_datablock.col2;
  EXIT WHEN :SYSTEM.LAST_RECORD = 'TRUE';
  NEXT_RECORD;
END LOOP;
But when I ran this, it took a long time to execute. Could anyone please suggest the reason for the poor efficiency? Any suggestions will be greatly appreciated.

I have found the reason. Sorry for creating confusion; I forgot to mention the last command I have in the code, COMMIT_FORM, which is what took so long to process. I have replaced COMMIT_FORM with a standard COMMIT, and the execution now behaves as expected.
Thanks,
Likitha

Oracle does not recommend performing any DML statements within Forms; you should create a program unit that calls a package or procedure in your database and perform those operations there, as sketched below.
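For illustration, a minimal sketch of such a database procedure, using hypothetical table and column names based on the question:

CREATE OR REPLACE PROCEDURE transfer_staging_rows IS
BEGIN
  -- Move the not-yet-transferred staging rows to the permanent table
  -- in one set-based statement instead of row by row from the form.
  INSERT INTO perm_table (col1, col2)
  SELECT col1, col2
    FROM staging_table
   WHERE NVL(transfer_flag, 'N') <> 'Y';

  -- Mark those rows as transferred.
  UPDATE staging_table
     SET transfer_flag = 'Y'
   WHERE NVL(transfer_flag, 'N') <> 'Y';
END transfer_staging_rows;

The form's WHEN-BUTTON-PRESSED trigger would then simply call transfer_staging_rows and issue a standard COMMIT.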

Related

MySQL drop table/create table sequence gives strange error

This situation makes no sense.
I have the following sequence of SQL operations in my php code:
DROP TABLE IF EXISTS tablename;
CREATE TABLE tablename;
Of course the php code does not look like that, but those are the commands being executed.
Every once in a while on the CREATE statement, the system returns "table already exists".
I would not think this could happen, unless it is some kind of delay in the dropping. The table is Innodb and I read that there could be processes using the table. However, the tablename has embedded within it a session_id for the user, because this table is somewhat transient and is dedicated to the specific user only--no other user can be using the table, and not even any other script can be using it. It is a "user-specific, script-specific" table. However, it is possible that the user could execute this script, go away to a different script, then come back to this script.
The code described is in a routine that decides whether it can re-use the table or whether it has to be recreated. If it has to be recreated, then the two lines execute.
Any ideas what is causing this error condition?
EDIT:
The problem with "actual code" is that sometimes it just leads to more questions that diverge from the actual point. Nevertheless, here is a copy from the actual script:
$query1 = "DROP TABLE IF EXISTS {$_SESSION['tmpContact']}";
SQL($query1);
$memory_table = "CREATE TABLE {$_SESSION['tmpContact']}";
The SQL() function executes the command and has error handling.
Plan A: Check for errors after the DROP. There may be a clue there.
Plan B: CREATE TEMPORARY TABLE ... -- That will be local to the connection, so [presumably] you won't need the DROP.
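For example, a minimal sketch of Plan B (the table and column names are placeholders):

CREATE TEMPORARY TABLE tmp_contact (
  id int NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name varchar(100) NOT NULL
);
-- A TEMPORARY table is visible only to the connection that created it and
-- is dropped automatically when that connection closes, so two sessions
-- can use the same table name without colliding.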
$a = mysql_query("SHOW TABLES LIKE 'tablename'");
if (mysql_num_rows($a) > 0) { /* table exists */ } else { /* safe to create */ }
Try mixing the PHP with the SQL.

How does a trigger on a table work on an insert event?

Hypothetically, say I develop a trigger that inserts a record into Table A whenever an insertion is made to Table A.
I want to know how the system handles that kind of loophole, or whether it will loop until the system hangs, requiring a restart and possibly removal of the DB.
I'm trying to gather information on how almost every DBMS handles this issue.
I can only speak to Oracle, I know nothing of MySQL.
In Oracle, this situation is known as mutation. Oracle will not spiral into an endless loop. It will detect the condition, and raise an ORA-04091 error.
That is:
ORA-04091: table XXXX is mutating, trigger/function may not see it
The standard solution is to define a package with three procedures and a package-level array. The three procedures are as follows:
initialize - this just zeroes out the array.
save_row - this saves the id of the current row (UK or PK) into the array.
process_rows - this goes through the array and actually performs the trigger action for each row.
Now, define some trigger actions:
statement level BEFORE: call initialize
row level BEFORE or AFTER: call save_row
statement level AFTER: call process_rows
In this way, Oracle can avoid mutation, and your trigger will work.
More details and some sample code can be found here:
https://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551198119097816936
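For illustration only, a minimal sketch of that pattern; all object names here are hypothetical:

CREATE OR REPLACE PACKAGE my_table_trg_pkg IS
  TYPE id_tab IS TABLE OF my_table.id%TYPE INDEX BY PLS_INTEGER;
  g_ids id_tab;
  PROCEDURE initialize;
  PROCEDURE save_row(p_id IN my_table.id%TYPE);
  PROCEDURE process_rows;
END my_table_trg_pkg;
/

CREATE OR REPLACE PACKAGE BODY my_table_trg_pkg IS
  PROCEDURE initialize IS
  BEGIN
    g_ids.DELETE;                      -- zero out the array
  END;
  PROCEDURE save_row(p_id IN my_table.id%TYPE) IS
  BEGIN
    g_ids(g_ids.COUNT + 1) := p_id;    -- remember the affected row's key
  END;
  PROCEDURE process_rows IS
  BEGIN
    FOR i IN 1 .. g_ids.COUNT LOOP
      NULL;  -- do the real per-row work for g_ids(i) here;
             -- at statement level it is now safe to query my_table
    END LOOP;
  END;
END my_table_trg_pkg;
/

CREATE OR REPLACE TRIGGER my_table_bis BEFORE INSERT ON my_table
BEGIN
  my_table_trg_pkg.initialize;
END;
/

CREATE OR REPLACE TRIGGER my_table_air AFTER INSERT ON my_table FOR EACH ROW
BEGIN
  my_table_trg_pkg.save_row(:NEW.id);
END;
/

CREATE OR REPLACE TRIGGER my_table_ais AFTER INSERT ON my_table
BEGIN
  my_table_trg_pkg.process_rows;
END;
/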
You can only insert a record into the same table if you are using an INSTEAD OF trigger. In all other cases you can only modify the record being inserted.
I hope this answers your question.
You can create triggers in MySQL as well.
Check the link below for AFTER INSERT trigger syntax:
http://www.techonthenet.com/oracle/triggers/after_insert.php

How to get the value from step 1 to step 2 in a SQL job

I need to create a SQL Server job.
Step 1:
Insert a row into the TaskToProcess table and return ProcessID (PK and identity).
Step 2:
Retrieve the ProcessID generated in step 1, pass the value to an SSIS package, and execute the SSIS package.
Is this possible in a SQL Server job?
Please help me with this.
Thanks in advance.
There is no built-in method of passing variable values between job steps. However, there are a couple of workarounds.
One option would be to store the value in a table at the end of step 1 and query it back from the database in step 2.
It sounds like you are generating ProcessID by inserting into a table and returning the SCOPE_IDENTITY() of the inserted row. If job step 1 is the only process inserting into this table, you can retrieve the last inserted value in step 2 using the IDENT_CURRENT('<tablename>') function, as sketched below.
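A sketch of that approach; the TaskToProcess name comes from the question, everything else is hypothetical:

-- Step 1: create the row; the identity column generates ProcessID.
INSERT INTO TaskToProcess (CreatedAt)   -- CreatedAt is a placeholder column
VALUES (GETDATE());

-- Step 2 (a separate job step): read the last identity value generated
-- for the table. This is only safe if this job is the sole writer.
DECLARE @ProcessID int = CONVERT(int, IDENT_CURRENT('TaskToProcess'));
-- ...then pass @ProcessID to the SSIS package, e.g. on the dtexec command line.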
EDIT
If multiple processes could insert into your process control table, the best solution is probably to refactor steps 1 and 2 into a single step, possibly with a controlling SSIS master package (or other equivalent technology) which can pass the variables between steps.
Similar to Ed Harper's answer, but with some details found in the "Variables in Job Steps" MSDN forum thread:
For the job environment, some flavor of Process-Keyed Tables (using the job_id) or Global Temporary Tables seems most useful. Of course, I realize that you might not want to have something left 'globally' available. If necessary, you could also look into encrypting or obfuscating the value that you store. Be sure to delete the row once you have used it.
Process-Keyed Tables are described in the article "How to Share Data between Stored Procedures".
Another suggestion, from the "Send parameters to SQL server agent jobs/job steps" MSDN forum thread, is to create a table to hold the parameters, such as:
CREATE TABLE SQLAgentJobParms (
    job_id             uniqueidentifier,
    execution_instance int,
    parameter_name     nvarchar(100),
    parameter_value    nvarchar(100),
    used_datetime      datetime NULL
);
Your calling stored procedure would take the parameters passed to it and insert them into SQLAgentJobParms. After that, it could use EXEC sp_start_job. And, as already noted, the job steps would select from SQLAgentJobParms to get the necessary values.
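A sketch of how those pieces might fit together; the parameter name and values are hypothetical:

-- Caller: record the parameters, then start the job.
INSERT INTO SQLAgentJobParms (job_id, execution_instance, parameter_name, parameter_value)
VALUES (@job_id, 1, N'ProcessID', N'12345');
EXEC msdb.dbo.sp_start_job @job_id = @job_id;

-- Inside a job step: read the value back, then mark it as used.
-- (A job step can obtain its own job id via the SQL Agent JOBID token.)
SELECT parameter_value
  FROM SQLAgentJobParms
 WHERE job_id = @job_id AND parameter_name = N'ProcessID';

UPDATE SQLAgentJobParms
   SET used_datetime = GETDATE()
 WHERE job_id = @job_id AND parameter_name = N'ProcessID';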

Keeping the history of a table in Java [duplicate]

I need a sample program in Java for keeping the history of a table when a user inserts, updates, or deletes rows in that table. Can anybody help with this?
Thanks in advance.
If you are working with Hibernate you can use Envers to solve this problem.
You have two options for this:
Let the database handle this automatically using triggers. I don't know what database you're using, but all of them support triggers that you can use for this.
Write code in your program that does something similar when inserting, updating, and deleting a user.
Personally, I prefer the first option. It probably requires less maintenance: there may be multiple places where you update a user, and all those places would need code to update the history table. Besides, in the database you have more options for specifying required values and integrity constraints.
Well, we normally have our own history tables which (mostly) look like the original table. Since most of our tables already have the creation date, modification date and the respective users, all we need to do is copy the dataset from the live table to the history table with a creation date of now().
We're using Hibernate so this could be done in an interceptor, but there may be other options as well, e.g. some database trigger executing a script, etc.
How is this a Java question?
This should be moved to the Database section.
You need to create a history table, then create database triggers on the original table along the lines of "create or replace trigger ... before insert or update or delete on table for each row ....".
I think this can be achieved by creating a trigger in the database server.
You can create the trigger as follows:
Syntax:
CREATE TRIGGER trigger_name
{ BEFORE | AFTER } { INSERT | UPDATE | DELETE }
ON table_name
FOR EACH ROW
  triggered_statement
You'll have to create two triggers: one for before the operation is performed and another for after the operation is performed.
Otherwise it can be achieved through application code as well, but that would be a bit tedious for the code to handle in the case of batch processes.
You should try using triggers. You can have a separate table (an exact replica of the table whose history you need to maintain).
This table will then be updated by a trigger after every insert/update/delete on your main table.
Then you can write your Java code to read these changes from the history table.
I think you can use the redo log of your underlying database to keep track of the operations performed. Is there any particular reason to do this in the program?
You could try creating, say, a List of the objects from the table (assuming you have objects for the data), which will allow you to loop through the list and compare against the current data in the table. You will then be able to see if any changes occurred.
You could even create another list holding an object that contains an enumerator giving you the action (DELETE, UPDATE, CREATE) along with the new data.
Haven't done this before, just an idea.
Like @Ashish mentioned, triggers can be used to insert into a separate table; this is commonly referred to as an audit-trail or audit-log table.
The columns generally defined in such an audit-trail table are: action (insert/update/delete), table name (the table in which the row was inserted/deleted/updated), key (the primary key of that table, as needed), and timestamp (the time at which the action was done).
It is better to write the audit log after the entire transaction is through. If not, then when an exception is passed back to the code side, a separate call to update the audit tables will be needed. Hope this helps.
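As a MySQL-flavored sketch of such an audit-trail table and trigger; all names are hypothetical:

CREATE TABLE audit_trail (
    action     VARCHAR(10),    -- INSERT / UPDATE / DELETE
    table_name VARCHAR(64),    -- table the change happened in
    row_key    INT,            -- primary key of the affected row
    changed_at TIMESTAMP       -- when the change was made
);

CREATE TRIGGER orders_audit_upd AFTER UPDATE ON orders
FOR EACH ROW
  INSERT INTO audit_trail (action, table_name, row_key, changed_at)
  VALUES ('UPDATE', 'orders', OLD.id, NOW());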
If you are talking about DB tables, you may use either triggers in the DB or some extra code within your application, probably using aspects. If you are using JPA, you may use entity listeners, or add the extra logic in an aspect applied to your DAO objects, applying it to all DAOs which perform CRUD on entities that need to retain historical data. If your DAO object is a stateless bean, you may use an Interceptor to achieve that; otherwise use Java proxy functionality, cglib, or another library that provides aspect functionality. If you are using Spring instead of EJB, you may advise your DAOs within the application context config file.
I would not suggest triggers; I stored my audit data in a file rather than in the database. My suggestion is to create an AUDIT table and write Java code (with the help of servlets) that stores the data in a file, in the same DB, or in another DB.

MySQL Triggers - AFTER INSERT trigger + UDF sys_exec() issue

Problem: I've got a table which holds certain records. After the insert has been done, I want to call an external program (php script) via MySQL's sys_* UDFs.
Now, the issue - the trigger I have passes the ID of the record to the script.
When I try to pull the data out via the script, I get 0 rows.
During my own testing, I came to the conclusion that the trigger invokes the PHP script and passes the parameters BEFORE the actual insert has occurred, and thus I get no records for the given ID.
I've tested this on MySQL 5.0.75 and 5.1.41 (Ubuntu OS).
I can confirm that the parameters get passed to the script before the actual insert happens: when I added sleep(2); to my PHP script, I got the data correctly.
Without the sleep(); statement, I receive 0 records for the given ID.
My question is - how to fix this problem without having to hardcode some sort of delay within the php script?
I don't have the liberty of assuming that 2 seconds (or 10 seconds) will be a sufficient delay, so I want everything to flow "naturally": when one command finishes, the other gets executed.
I assumed that if the trigger is of type AFTER INSERT, everything within the body of the trigger will get executed after MySQL actually inserts the data.
Table layout:
CREATE TABLE test (
id int not null auto_increment PRIMARY KEY,
random_data varchar(255) not null
);
Trigger layout:
DELIMITER $$
CREATE TRIGGER `test_after_insert` AFTER INSERT ON `test`
FOR EACH ROW BEGIN
SET @exec_var = sys_exec(CONCAT('php /var/www/xyz/servers/dispatcher.php ', NEW.id));
END;
$$
DELIMITER ;
Disclaimer: I know the security issues of using the sys_exec function; my problem is that MySQL doesn't insert FIRST and THEN call the script with the necessary parameters.
If anyone can shed some light on how to fix this or has a different approach that doesn't involve SELECT INTO OUTFILE and using FAM - I'd be very grateful. Thanks in advance.
Even if you use an AFTER trigger, the row isn't committed yet. But sys_exec() doesn't return until the php script exits, so the AFTER trigger can't complete, therefore you can't commit the INSERT either.
This is by design. After all, you may do more operations within the same transaction, or you may roll back the transaction. That's the problem with invoking external processes from a trigger: external processes can't see data within the scope of the transaction in the database.
You shouldn't do this task with a trigger. At best, you should use the trigger to set a "flag" column, and then write an external process that looks for rows with the flag set and invokes the PHP script for them (see the sketch below). That way only rows that have been successfully inserted AND committed will be processed.
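A minimal sketch of that design, assuming a hypothetical "processed" flag on the test table from the question:

ALTER TABLE test ADD COLUMN processed tinyint NOT NULL DEFAULT 0;

-- External worker (run from cron or a daemon, not from a trigger):
-- pick up committed, unprocessed rows...
SELECT id, random_data FROM test WHERE processed = 0;
-- ...invoke dispatcher.php for each id, then mark the row done:
UPDATE test SET processed = 1 WHERE id = 42;  -- 42 = the id just handled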
If I understand correctly, you insert a row into your DB; that invokes a trigger which launches an external command written in PHP; and that command in turn queries the same DB, using the id of the inserted row?
I don't think this is a problem of "delay".
The real "problem" is that your initial insert and your external command connect to the same DB in two different sessions, probably in two different transactions (depending on your database engine and your transaction isolation level).
I assume that when the trigger is invoked, the inserted row is not yet committed to the DB, so the external command still sees the DB as it was before.
BTW, even if the above explanation is somewhat speculative, what seems more evident to me is that you should think about a different design rather than trying to make this work as it is.