Can somebody please help me transfer around 15 tables from one database to another? At present I can do this one by one using a Data Flow task, but then I need to repeat the task 15 times, which is very time-consuming.
Why don't you just use a built-in task? Maybe Tasks -> Export Data is what you're looking for.
Otherwise you'll need to create a separate data flow for each table, or:
Create a variable of type Object.
Script Task: add all the table names to that Object variable (see the sketch after this list).
Iterate over the Object variable with a Foreach Loop container.
Inside the loop, create a source from a variable; in that variable, specify the table/connection dynamically based on the current loop value.
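A minimal sketch of that Script Task (the body of ScriptMain.Main), assuming an Object variable named User::TableList that is listed in the task's ReadWriteVariables, and placeholder table names:

    public void Main()
    {
        // Placeholder table names: replace these with the 15 tables you want to copy.
        var tableNames = new System.Collections.ArrayList
        {
            "dbo.Customers",
            "dbo.Orders",
            "dbo.Products"
        };

        // Hand the list back to the package; the Foreach Loop container can then
        // enumerate it with the "Foreach From Variable" enumerator.
        Dts.Variables["User::TableList"].Value = tableNames;
        Dts.TaskResult = (int)ScriptResults.Success;
    }

Map the enumerated value to a string variable in the Foreach Loop's Variable Mappings, and use that string variable in the source's table-name-from-variable access mode.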
You can use the Transfer SQL Server Objects Task: drag it from the SSIS toolbox, specify the source and destination servers and databases in its properties, set CopyAllObjects to False, and for ObjectsToCopy either set CopyAllTables to True, or leave it False and pick the tables you want to copy from the list.
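If you would rather drive the same copy from code instead of the task, SMO exposes roughly the same options through its Transfer class. A minimal sketch (server, database and table names are placeholders, the destination database is assumed to exist, and the SMO assemblies must be referenced):

    using Microsoft.SqlServer.Management.Smo;

    class CopyTables
    {
        static void Main()
        {
            // Source server/database: placeholders, replace with your own.
            var sourceServer = new Server("SourceServerName");
            Database sourceDb = sourceServer.Databases["SourceDb"];

            var transfer = new Transfer(sourceDb)
            {
                CopyAllObjects = false,           // same idea as CopyAllObjects = False on the task
                CopyAllTables = false,            // pick individual tables instead
                CopySchema = true,
                CopyData = true,
                DestinationServer = "DestServerName",
                DestinationDatabase = "DestDb",   // assumed to exist already
                DestinationLoginSecure = true     // Windows authentication on the destination
            };

            // Add only the tables you want, mirroring the task's ObjectsToCopy list.
            transfer.ObjectList.Add(sourceDb.Tables["Customers"]);
            transfer.ObjectList.Add(sourceDb.Tables["Orders"]);

            transfer.TransferData();              // copies schema and data to the destination
        }
    }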
I have a lot of databases (100+); each one has the same structure but a different connection.
I'm using Kettle to run a transformation against the different databases in order to build a data warehouse.
How can I automate running the same transformation with different connections?
I already tried this: Pass DB Connection parameters to a Kettle a.k.a PDI table Input step dynamically from Excel, but it only accepts one row from the CSV.
Should I create a loop, or will I need to write a script?
Any help would be appreciated.
(Sorry for my English.)
You can do it with a loop.
But do not fret: it's not hard to build in Pentaho.
First of all, you will use a JOB to create your loop:
START --> Transform_that_holds_parameters --> Transform_to_run_in_a_loop
As you can guess, the transformation that runs identically on each DB is the last one in this flow. We need to set two Advanced flags on that job entry:
Execute for every input row?
Copy previous result to parameters?
Then we need to build our Transform_that_holds_parameters with the following structure:
Some_sort_of_input --> copy_rows_to_result
Here you will have to grab all the connection parameters from somewhere, be it an Excel file or a table in another database. Once you grab this data, make sure you have one row for each database you want to run your transformation against.
Connect that to the 'Copy rows to result' step. This step sends the data back to our JOB, and if you remember, our next transformation is set to 'Execute for every input row' and 'Copy previous result to parameters'.
Make a note of the column names going into the last step of that transformation; you will need them in the next step.
Go back to our JOB, open the properties of Transform_to_run_in_a_loop, go to the Parameters tab, and fill in the 'Parameter' and 'Stream column name' columns with the columns we just copied to the result.
Inside that transformation, define parameters with exactly the same names and use them in your connection settings (for example, a parameter named DB_HOST referenced as ${DB_HOST} in the connection's Host Name field).
Done: the first transformation sets all the parameters and the second one runs once for each database configuration you have.
I have around 35 tables whose data needs to be migrated from SQL Server to MySQL. I am using SSIS for this project and have set up a control flow (using Load Multiple Tables) with a Script Task and a Foreach Loop Container that iterates through all the tables in my database. What I now need to do is convert the data type of some columns, in some of the tables, to Unicode String [DT_WSTR] before I load them into my destination tables. Is this something that can be done through SSIS? If so, any pointers or a set of instructions would be great.
Thanks,
Pratik Gandhi
Yes, this is a standard out-of-the-box task for SSIS.
Add a Data Flow Task.
Add a Data Conversion component to the task.
Add your source and destination components.
Map your columns, converting data types where required.
As always, MSDN provides further help.
I have some requirements as explained below:
I need an SSIS data flow task which will accept the following input parameters:
--SourceServer Connection String
--DestinationServer Connection String
--Source Table Name
--Destination Table Name
Then it should do the column mapping of the source and destination tables dynamically at run time.
The source and destination tables schema would be same always.
I want to call this package from C#.NET, passing the above parameters.
The main thing here is that I will have to pass different sets of source and destination tables.
Just answered this on a previous question. You can't loop through tables and dynamically map columns on your source and destination components. You would need one set of Source -> Destination per table.
If that's not feasible, you may want to look at the Transfer SQL Server Objects Task.
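That said, for the "call it from C#" part of the question: the package can be loaded and executed from C# with its variables set before execution. This does not get around the fixed column mapping problem above; it only shows how the four values could be passed in. A minimal sketch, assuming package variables named SrcConnStr, DestConnStr, SourceTable and DestTable and a placeholder package path:

    using Microsoft.SqlServer.Dts.Runtime;

    class RunPackage
    {
        static void Main()
        {
            var app = new Application();
            // Path is a placeholder; packages can also be loaded from SQL Server instead of a file.
            Package pkg = app.LoadPackage(@"C:\Packages\CopyTable.dtsx", null);

            pkg.Variables["SrcConnStr"].Value  = "Data Source=SrcServer;Initial Catalog=SrcDb;Integrated Security=SSPI;";
            pkg.Variables["DestConnStr"].Value = "Data Source=DestServer;Initial Catalog=DestDb;Integrated Security=SSPI;";
            pkg.Variables["SourceTable"].Value = "dbo.Customers";
            pkg.Variables["DestTable"].Value   = "dbo.Customers";

            DTSExecResult result = pkg.Execute();   // runs the package in-process
            System.Console.WriteLine(result);       // Success or Failure
        }
    }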
Create SSIS package parameters and pass their values from your web.config file.
First, deploy the package to SQL Server.
Create a SQL Server Agent job that executes the package.
Then create a stored procedure in SQL Server that starts that job using sp_start_job.
I think that solves your problem.
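If you go that route, the job can also be kicked off straight from the C# application with a plain SqlCommand. A minimal sketch, assuming a connection string named MsdbConn in web.config and an Agent job named LoadTables (both are placeholders):

    using System.Configuration;
    using System.Data;
    using System.Data.SqlClient;

    class StartJob
    {
        static void Main()
        {
            string connStr = ConfigurationManager.ConnectionStrings["MsdbConn"].ConnectionString;

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("msdb.dbo.sp_start_job", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@job_name", "LoadTables");

                conn.Open();
                cmd.ExecuteNonQuery();   // returns as soon as the job is queued, not when it finishes
            }
        }
    }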
I would like to make a package that copies data from a table only if the table is not empty. I know how to do a count and how to make a package for copying data, but the problem is that a source can't have any inputs, so I don't know how to do this. Any suggestions?
I don't understand your comment about dragging a "green line from a package to a source", but instead of trying to determine in advance whether the table is empty, just do your copy anyway and then see how many rows were copied:
Create a package variable for the row count.
Populate the variable using the Row Count transformation.
Use an expression in the precedence constraint to check the variable (for example, @[User::RowCount] > 0): if it's greater than zero, continue executing the rest of your package.
@Pondlife I don't think you can use a precedence constraint on the data flow task, can you?
I believe you can use it only on the control flow.
I would add an Execute SQL Task with the count, sending the result to a variable. From that task, I would drag the green arrow to the Data Flow task that makes the copy, and on that arrow I would add the expression to the precedence constraint.
As you have correctly noted, a data flow source does not accept input, so one cannot perform logic in the data flow to determine whether this task should run.
Cannot create connector.
The destination component does not have any available inputs for use in creating a path.
However, there's nothing stopping you from setting up this logic in your control flow. I would use a query that hits the DMVs for a fast rowcount on the destination system, filtered to only the tables I wished to replicate.
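A sketch of that kind of check, written here as C# (for example inside a Script Task) so it produces the list of empty tables; the connection string is a placeholder and the table filter is left for you to fill in:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    class EmptyTableCheck
    {
        // Returns schema-qualified names of tables that currently hold zero rows.
        static List<string> GetEmptyTables(string connStr)
        {
            const string sql = @"
                SELECT s.name + '.' + t.name
                FROM sys.tables AS t
                JOIN sys.schemas AS s ON s.schema_id = t.schema_id
                JOIN sys.dm_db_partition_stats AS ps
                     ON ps.object_id = t.object_id AND ps.index_id IN (0, 1)
                -- add a WHERE clause here to limit this to the tables you replicate
                GROUP BY s.name, t.name
                HAVING SUM(ps.row_count) = 0;";

            var empty = new List<string>();
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        empty.Add(reader.GetString(0));
                }
            }
            return empty;
        }
    }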
Armed with the list of empty tables, how I'd handle it would depend on the situation. For a small number of tables, I'd define N data flows, each with a do-nothing Script Task as its predecessor, and then use an expression on the table name to enable a path, much like I did on this question.
If there are many tables, I'd define a package per table and then invoke an Execute Package Task with the package name built dynamically based on the empty table name.
I need to query three different databases and dump them into CSV files. It's the same procedure for all three databases; the only difference is the database and the name of the CSV file. Can I do this without cutting and pasting? Is there a way to pass parameters to the data flow task?
Thanks!
Your flat file and database connection managers can base their connection strings on a package-scoped variable (for example, via an expression on the ConnectionString property).
Then use a Foreach Loop container to call your data flow task. Configure the loop container with a Foreach Item Enumerator and add the appropriate names to the collection.
santiiiii's explanation covers the use case of downloading the data in one package execution. If you need to get the data at different times, you can use a conditional statement in a variable that gives you different file names and database connections based on the value supplied for the variable. You can then set the value of the variable in the SQL Server Agent job on the Set Values tab. This gives you more flexibility, but santiiiii's solution is definitely best if you want to process all three files at the same time.