How to get the value from step 1 to step 2 in a SQL job - sql-server-2008

I need to create a SQL job.
Step 1:
Insert a row into the TaskToProcess table and return ProcessID (the PK and identity column).
Step 2:
Retrieve the ProcessID generated in step 1, pass the value to an SSIS package, and execute the SSIS package.
Is this possible in a SQL Server job?
Please help me with this.
Thanks in advance.

There is no built-in method of passing variable values between job steps. However, there are a couple of workarounds.
One option is to store the value in a table at the end of step 1 and query it back from the database in step 2.
It sounds like you are generating ProcessID by inserting into a table and returning the SCOPE_IDENTITY() of the inserted row. If job step 1 is the only process inserting into this table, you can retrieve the last inserted value in job step 2 using the IDENT_CURRENT('<tablename>') function.
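For example, step 2 could recover the value with something like the following sketch (it assumes the TaskToProcess table from the question and that nothing else inserts into it; the package path and variable name are just placeholders):
-- Job step 2: pick up the last identity value generated for TaskToProcess.
-- Only safe if job step 1 is the sole writer to this table.
DECLARE @ProcessID int = CONVERT(int, IDENT_CURRENT('dbo.TaskToProcess'));

-- Hand the value to the SSIS package, e.g. when building the dtexec command line:
-- dtexec /F "MyPackage.dtsx" /SET \Package.Variables[User::ProcessID].Properties[Value];<ProcessID value>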
EDIT
If multiple processes could insert into your process control table, the best solution is probably to refactor steps 1 and 2 into a single step - possibly with a controlling SSIS master package (or other equivalent technology) which can pass the variables between steps.

Similar to Ed Harper's answer, but with some details found in the "Variables in Job Steps" MSDN forum thread:
For the job environment, some flavor of Process-Keyed Tables (using the job_id) or Global Temporary Tables seems most useful. Of course, I realize that you might not want to have something left 'globally' available. If necessary, you could also look into encrypting or obfuscating the value that you store. Be sure to delete the row once you have used it.
Process-Keyed Tables are described in the article "How to Share Data between Stored Procedures".
Another suggestion, from the "Send parameters to SQL Server agent jobs/job steps" MSDN forum thread, is to create a table to hold the parameters, such as:
CREATE TABLE SQLAgentJobParms
(
    job_id uniqueidentifier,
    execution_instance int,
    parameter_name nvarchar(100),
    parameter_value nvarchar(100),
    used_datetime datetime NULL
);
Your calling stored procedure would take the parameters passed to it
and insert them into SQLAgentJobParms. After that, it could use EXEC
sp_start_job. And, as already noted, the job steps would select from
SQLAgentJobParms to get the necessary values.
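A rough sketch of that flow, assuming a job named 'ProcessTask' and a 'ProcessID' parameter (both names are placeholders):
-- Caller: record the parameter for the job, then start the job
DECLARE @ProcessID int = 42;  -- e.g. the identity value produced earlier
DECLARE @job_id uniqueidentifier =
    (SELECT job_id FROM msdb.dbo.sysjobs WHERE name = N'ProcessTask');

INSERT INTO dbo.SQLAgentJobParms (job_id, execution_instance, parameter_name, parameter_value)
VALUES (@job_id, 1, N'ProcessID', CAST(@ProcessID AS nvarchar(100)));

EXEC msdb.dbo.sp_start_job @job_name = N'ProcessTask';
GO

-- Inside a job step: read the value back and mark the row as used
DECLARE @ProcessID int;

UPDATE p
SET    p.used_datetime = GETDATE(),
       @ProcessID = CAST(p.parameter_value AS int)
FROM   dbo.SQLAgentJobParms AS p
JOIN   msdb.dbo.sysjobs AS j ON j.job_id = p.job_id
WHERE  j.name = N'ProcessTask'
  AND  p.parameter_name = N'ProcessID'
  AND  p.used_datetime IS NULL;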

Related

Update a table (that has relationships) using another table in SSIS

I want to be able to update a specific column of a table using data from another table. Here's what the two tables look like, along with the DB type and the SSIS components used to read them (btw, both ID and Code are unique):
Table1(ID, Code, Description) [T-SQL DB accessed using an ADO.NET Source component]
Table2(..., Code, Description, ...) [MySQL DB accessed using an ODBC Source component]
I want to update Table1.Description using Table2.Description by matching the rows on Code first (Table1.Code is the same as Table2.Code).
What I tried:
A Merge Join transformation using the Code column, but I couldn't figure out how to write the result back, because Table1 has relationships so I can't simply drop the table and replace it with the new one.
A Lookup transformation, but since the two tables are not in the same type of database it didn't allow me to create the lookup table's connection manager (which in my case would be MySQL).
I'm still new to SSIS, but any ideas or help would be greatly appreciated.
My solution is based on @Akina's comments. Although a linked server would definitely have fit, my requirement is to make an SSIS package to take care of migrating some old data.
The first and last steps are Execute SQL Tasks, while Migrate ICDDx is the DFT that transfers the data to a staging table created by the first SQL task.
Here's the SQL that gets executed during Create Staging Table:
DROP TABLE IF EXISTS [tempdb].[##stagedICDDx];

CREATE TABLE ##stagedICDDx (
    ID INT NOT NULL,
    Code VARCHAR(15) NOT NULL,
    Description NVARCHAR(500) NOT NULL,
    ........
);
and here's the SQL command (based on @Akina's comment) for transferring from the staging table to the final one (inside Transfer Staged):
UPDATE [MyDB].[dbo].[ICDDx]
SET [ICDDx].[Description] = [##stagedICDDx].[Description]
FROM [dbo].[##stagedICDDx]
WHERE [ICDDx].[Code]=[##stagedICDDx].[Code]
GO
Here's the DFT used (both the T-SQL and MySQL sources return sorted output using ORDER BY Code, so I didn't have to insert Sort components before the Merge Join):
Note: you have to set up the connection manager to retain/reuse the same connection (the RetainSameConnection property) so that the global temporary table doesn't get dropped before we transfer data into it. If all goes well, then after the Transfer Staged SQL Task the connection is closed and the global temporary table is dropped.

How to convert mssql user-defined table type into mysql UDT

This is my MSSQL UDT:
create type ConditionUDT as Table
(
    Name varchar(150),
    PackageId int
);
This is my MSSQL stored procedure:
create Procedure [dbo].[Condition_insert]
    @terms_conditions ConditionUDT readonly
as
begin
    insert into dbo.condition (name, p_id)
    select [Name], [PackageId]
    from @terms_conditions;
end
There is a workaround if you have no other choice but to migrate from SQL Server to MySQL.
The closest predefined structure in MySQL that can hold many rows is an actual table, so you need one table per SQL Server table type. Use a dedicated schema or a naming convention so you know those tables are table-type emulations.
The idea is to fill in the data, use it inside the stored procedure, and then delete it. You do, however, need to guarantee that each caller reads only its own rows and that the rows are deleted once they have been consumed. So:
Each of those tables needs two extra columns, which I suggest you always put first: a key and the variable name. The key can be CHAR(36) filled with UUID() to get a unique identifier, or an INT filled with CONNECTION_ID(). A unique identifier is better, however, as it ensures that nobody will ever read information not intended for them. The variable name is simply the name of the SQL Server parameter, stored as a string. This way:
You know which table type you are emulating from the table name.
You know the identity of your process from the key.
You know the 'variable' from the variable name.
So, in your application code you:
Begin a transaction.
Insert the data into the proper (table-type-emulating) tables using a key and the variable name(s).
Pass the key and the variable name(s) to the stored procedure. You can use the same key for many table-type parameters within the same procedure call.
The stored procedure can now use that information as before, querying the proper emulated table with the key and variable name as filters instead of reading the table-type variable.
Delete the data you inserted.
Commit.
On error, roll back.
For simplicity, your procedure can read the data into a temp table first, so you do not need to change a line of the original SQL Server procedure body for this aspect.
The transaction in your application code ensures that your temporary variable data is either deleted or never committed, no matter what goes wrong.
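A minimal MySQL sketch of this emulation pattern, reusing the ConditionUDT / Condition_insert names from the question (the udt_ prefix, the extra parameter names, and the application-side snippet are just illustrative):
-- One real table emulates the SQL Server table type ConditionUDT
CREATE TABLE udt_ConditionUDT (
    param_key      CHAR(36)     NOT NULL,   -- filled with UUID() by the caller
    parameter_name VARCHAR(100) NOT NULL,   -- e.g. 'terms_conditions'
    Name           VARCHAR(150),
    PackageId      INT,
    KEY idx_param (param_key, parameter_name)
);

DELIMITER //
CREATE PROCEDURE Condition_insert(IN p_key CHAR(36), IN p_param VARCHAR(100))
BEGIN
    INSERT INTO `condition` (name, p_id)
    SELECT Name, PackageId
    FROM udt_ConditionUDT
    WHERE param_key = p_key AND parameter_name = p_param;
END //
DELIMITER ;

-- Application side, inside a transaction:
-- SET @k = UUID();
-- INSERT INTO udt_ConditionUDT VALUES (@k, 'terms_conditions', 'Some name', 42);
-- CALL Condition_insert(@k, 'terms_conditions');
-- DELETE FROM udt_ConditionUDT WHERE param_key = @k;
-- COMMIT;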
As Larnu thought might be the case, MySQL doesn't support user-defined types at all, let alone user-defined table types.
You will have to make them all separate scalar parameters.
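In that case the procedure ends up taking plain scalar parameters, roughly like this sketch (one row per call):
DELIMITER //
CREATE PROCEDURE Condition_insert(IN p_name VARCHAR(150), IN p_package_id INT)
BEGIN
    INSERT INTO `condition` (name, p_id)
    VALUES (p_name, p_package_id);
END //
DELIMITER ;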

How does a trigger on a table work on an insert event?

Hypothetically, I am going to develop a trigger that inserts a record into Table A when an insertion is made to that same Table A.
Therefore, I want to know how the system handles that kind of loophole, or whether it will keep looping until the system hangs, requiring a restart and possibly removal of the DB.
I'm trying to gather information on how almost every DBMS handles this issue or loophole.
I can only speak to Oracle, I know nothing of MySQL.
In Oracle, this situation is known as mutation. Oracle will not spiral into an endless loop. It will detect the condition, and raise an ORA-04091 error.
That is:
ORA-04091: table XXXX is mutating, trigger/function may not see it
The standard solution is to define a package with three procedures and a package-level array. The three procedures are as follows:
initialize - this will only zero out the array.
save_row - this will save the id of the current row (UK or PK) into the array.
process_rows - this will go through the array and actually do the trigger action for each row.
Now, define some trigger actions:
statement level BEFORE: call initialize
row level BEFORE or AFTER: call save_row
statement level AFTER: call process_rows
In this way, Oracle can avoid mutation, and your trigger will work.
More details and some sample code can be found here:
https://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551198119097816936
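A compressed sketch of that package pattern, using a made-up emp table with an empno key (the real per-row work goes inside process_rows):
CREATE OR REPLACE PACKAGE emp_trg_pkg AS
    TYPE t_ids IS TABLE OF emp.empno%TYPE INDEX BY PLS_INTEGER;
    g_ids t_ids;
    PROCEDURE initialize;
    PROCEDURE save_row(p_id IN emp.empno%TYPE);
    PROCEDURE process_rows;
END emp_trg_pkg;
/
CREATE OR REPLACE PACKAGE BODY emp_trg_pkg AS
    PROCEDURE initialize IS BEGIN g_ids.DELETE; END;
    PROCEDURE save_row(p_id IN emp.empno%TYPE) IS
    BEGIN g_ids(g_ids.COUNT + 1) := p_id; END;
    PROCEDURE process_rows IS
    BEGIN
        FOR i IN 1 .. g_ids.COUNT LOOP
            NULL; -- do the real trigger action for row g_ids(i); the statement is finished, so no mutation
        END LOOP;
        g_ids.DELETE;
    END;
END emp_trg_pkg;
/
-- statement-level BEFORE: clear the array
CREATE OR REPLACE TRIGGER emp_bis BEFORE INSERT ON emp
BEGIN emp_trg_pkg.initialize; END;
/
-- row-level AFTER: remember each affected key
CREATE OR REPLACE TRIGGER emp_air AFTER INSERT ON emp FOR EACH ROW
BEGIN emp_trg_pkg.save_row(:NEW.empno); END;
/
-- statement-level AFTER: process the saved rows
CREATE OR REPLACE TRIGGER emp_ais AFTER INSERT ON emp
BEGIN emp_trg_pkg.process_rows; END;
/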
You can only insert a record into the same table if you are using an INSTEAD OF trigger. In all other cases you can only modify the record being inserted.
I hope this answers your question.
You can create triggers in MySQL.
Check the link below for the after-insert trigger syntax:
http://www.techonthenet.com/oracle/triggers/after_insert.php
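For reference, a minimal MySQL AFTER INSERT trigger sketch (the orders / orders_audit tables are made up). Note that MySQL will not let a trigger modify the very table it fires on, so the insert-into-the-same-table scenario from the question fails with error 1442 rather than looping:
DELIMITER //
CREATE TRIGGER orders_after_insert
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
    -- Log the new row into a separate table;
    -- inserting back into `orders` here would raise error 1442.
    INSERT INTO orders_audit (order_id, created_at)
    VALUES (NEW.id, NOW());
END //
DELIMITER ;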

Log all events on all tables with MySQL

I need to log all events on all tables into a database_log table (id, user, timestamp, tablename, old_value, new_value).
I thought I could create the same trigger on all tables (~25) with a little PHP script that dynamically replaces the table name. But in that case I can't build the old and new values, because the tables don't all have the same columns, so I can't just concatenate all the fields to store them in old_value and new_value (even if I retrieve the columns from the schema, I can't use CONCAT() on them to select all the values and store the result in a variable).
For example, something like:
SELECT * INTO v_myvar FROM my_table WHERE id = OLD.id;
CALL addLog(v_myvar)
where addLog is a procedure that takes my old values and adds a line with the other information, would save my life.
So I'm looking for an elegant solution with one trigger and/or one procedure (per table), or a useful tool. Does anyone have a solution?
Thanks
SET GLOBAL general_log_file = '/var/log/mysql/mysql.log';
The general query log is a general record of what mysqld is doing. The server writes information to this log when clients connect or disconnect, and it logs each SQL statement received from clients.
See the MySQL documentation.
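Note that setting the file name alone does not start the logging; the general log also has to be switched on, roughly:
SET GLOBAL log_output = 'FILE';
SET GLOBAL general_log_file = '/var/log/mysql/mysql.log';
SET GLOBAL general_log = 'ON';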

Keeping the history of a table in Java [duplicate]

I need a sample program in Java for keeping the history of a table when a user inserts, updates, or deletes rows in that table. Can anybody help with this?
Thanks in advance.
If you are working with Hibernate you can use Envers to solve this problem.
You have two options for this:
Let the database handle this automatically using triggers. I don't know what database you're using but all of them support triggers that you can use for this.
Write code in your program that does something similar when inserting, updating and deleting a user.
Personally, I prefer the first option. It probably requires less maintenance. There may be multiple places where you update a user, all those places need the code to update the other table. Besides, in the database you have more options for specifying required values and integrity constraints.
Well, we normally have our own history tables which (mostly) look like the original table. Since most of our tables already have the creation date, modification date and the respective users, all we need to do is copy the dataset from the live table to the history table with a creation date of now().
We're using Hibernate so this could be done in an interceptor, but there may be other options as well, e.g. some database trigger executing a script, etc.
How is this a Java question?
This should be moved to the Database section.
You need to create a history table. Then create database triggers on the original table for "create or replace trigger before insert or update or delete on table for each row ...."
I think this can be achieved by creating a trigger in the SQL server.
You can create the trigger as follows:
Syntax:
CREATE TRIGGER trigger_name
{BEFORE | AFTER} {INSERT | UPDATE | DELETE}
ON table_name FOR EACH ROW
triggered_statement
You'll have to create two triggers, one for before the operation is performed and another for after the operation is performed.
Otherwise it can also be achieved through code, but that would be a bit tedious to handle in the case of batch processes.
You should try using triggers. You can have a separate table (an exact replica of the table whose history you need to maintain).
This table will then be updated by a trigger after every insert/update/delete on your main table.
Then you can write your Java code to get these changes from the second history table.
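For instance, a rough MySQL sketch of such a history table and triggers (the users / users_history tables and their columns are invented):
CREATE TABLE users_history (
    hist_id    INT AUTO_INCREMENT PRIMARY KEY,
    user_id    INT NOT NULL,
    name       VARCHAR(100),
    action     VARCHAR(10) NOT NULL,   -- 'INSERT', 'UPDATE' or 'DELETE'
    changed_at DATETIME NOT NULL
);

DELIMITER //
CREATE TRIGGER users_ai AFTER INSERT ON users FOR EACH ROW
BEGIN
    INSERT INTO users_history (user_id, name, action, changed_at)
    VALUES (NEW.id, NEW.name, 'INSERT', NOW());
END //
CREATE TRIGGER users_au AFTER UPDATE ON users FOR EACH ROW
BEGIN
    INSERT INTO users_history (user_id, name, action, changed_at)
    VALUES (NEW.id, NEW.name, 'UPDATE', NOW());
END //
CREATE TRIGGER users_ad AFTER DELETE ON users FOR EACH ROW
BEGIN
    INSERT INTO users_history (user_id, name, action, changed_at)
    VALUES (OLD.id, OLD.name, 'DELETE', NOW());
END //
DELIMITER ;
The Java side then only needs to query users_history.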
I think you can use the redo log of your underlying database to keep track of the operations performed. Is there any particular reason to do this in a program?
You could try creating, say, a List of the objects from the table (assuming you have objects for the data), which will allow you to loop through the list and compare it to the current data in the table. You will then be able to see if any changes occurred.
You can even create another list of objects containing an enumerator that gives you the action (DELETE, UPDATE, CREATE) along with the new data.
Haven't done this before, just an idea.
Like @Ashish mentioned, triggers can be used to insert into a separate table - this is commonly referred to as an audit-trail or audit log table.
Columns generally defined in such an audit-trail table are: 'Action' (insert, update, delete), tablename (the table into which the row was inserted/deleted/updated), key (the primary key of that table, when needed), and timestamp (the time at which this action was done).
It is better to write the audit log after the entire transaction is through. If not, in the case of an exception being passed back to the code side, a separate call to update the audit tables will be needed. Hope this helps.
If you are talking about DB tables, you may either use triggers in the DB or add some extra code within your application, probably using aspects. If you are using JPA, you may use entity listeners or add an aspect to your DAO objects, applying it to all DAOs that perform CRUD on entities that need to keep historical data. If your DAO is a stateless bean you may use an Interceptor to achieve that; otherwise use Java proxy functionality, cglib, or another library that provides aspect functionality. If you are using Spring instead of EJB, you may advise your DAOs in the application context config file.
I wouldn't suggest triggers. My suggestion is to create an "AUDIT" table and write Java code (with the help of servlets) to store the audit data in a file, in the same DB, or in another DB.