How to execute a MySQL delete query in Azure Data Factory?

I would like to delete MySQL database records using ADF.
I have created a pipeline in ADF that copies data from a MySQL database to a storage account using the Copy activity. Once that completes, I would like to delete the copied records from the MySQL database.
I am not able to find any activity that can delete records from a MySQL database.
The Script activity doesn't accept a MySQL linked service; only SQL databases are allowed.
Any suggestion on how to complete this would be appreciated.

You can use a Lookup activity, which supports both SQL and MySQL sources and can run a query, after the copy activity to delete the records once the copy is done.
After the copy activity, connect it to the Lookup activity and give your source dataset.
Select Query and give the truncate query to delete the records in the table:
truncate table [dbo].[output];
I added a SELECT to the script only to avoid the Lookup error: the Lookup activity fails if the query doesn't return any data. However, the table still gets truncated even if the Lookup reports an error.
If you want to remove the whole table instead, you can give a drop query:
drop table <tablename>;
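If you only want to remove the rows that were just copied rather than truncating everything, the Lookup query can follow the same pattern. This is a minimal sketch with placeholder table and column names, and it assumes your connector allows multiple statements in one script:
-- delete only the rows that were copied (copied_at is a placeholder column)
DELETE FROM output WHERE copied_at <= NOW();
-- return a dummy row so the Lookup activity doesn't fail on an empty result
SELECT 1 AS dummy;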
Screenshots: data copied to blob storage after the copy activity, and the table after the copy activity.
Here I did it using an Azure SQL database; you can do the same with Azure Database for MySQL, as the Lookup activity supports both.

You need to create a stored procedure in your database and add a Stored Procedure activity as the final step in your Azure Data Factory pipeline. If you'd like to truncate the whole table once the copy is finished, here's how you would create the stored procedure:
GO
CREATE PROCEDURE SP_Truncate
AS
BEGIN
    TRUNCATE TABLE mytable;
END
Once you've created this, add a Stored Procedure activity as the last step in your Azure Data Factory pipeline; it will delete the copied data. Read a bit more about this in the documentation. You can also add parameters to your stored procedure, which you can reference using a Lookup activity. Let me know if you need more help.
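For reference, since the source in this question is MySQL, a roughly equivalent procedure on the MySQL side would look like the sketch below (procedure and table names are placeholders; whether you can actually invoke it from ADF depends on which activities your MySQL linked service supports, as the question points out):
DELIMITER //
CREATE PROCEDURE sp_truncate_output()
BEGIN
    -- remove all rows from the copied table (placeholder table name)
    TRUNCATE TABLE output;
END //
DELIMITER ;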

Related

How do I update a table in SSIS?

I have a table in database 1 with columns x and y, and another table in database 2 with columns x and y. I want to update the y column in database 1 to the y value from database 2 wherever the x column in database 1 matches the x column in database 2.
This seems like an unbelievably trivial task, but I can't figure out how to do it in SSIS. I have an OLE DB Source and Destination in my data flow task and I have the 2 columns mapped, but it keeps trying to insert instead of update, and it fails because there are a bunch of other non-nullable columns in the destination that I don't have mapped.
The problem with using SSIS to do data transformation is that both the source and target data sets need to be pulled up into memory on the ETL server, the transformation needs to happen there, and then the results have to be written back down to the destination server.
It's network intensive. It's memory intensive. It's just less than ideal. That's also why you're having trouble figuring it out. On the server, it's just an UPDATE statement, but getting it up into SSIS requires many more steps than that, and absent third-party tools, there's no out-of-the-box method to do anything other than row-by-row updates.
In your situation, where your source data is comparatively lightweight, I would suggest that the most efficient approach would be to use SSIS to move the source data from the source server to the target server and drop it into a working/holding/intermediate table. SSIS is absolutely awesome at moving data from point A to point B. Then, after the Data Flow, use an Execute SQL task to either call an UPDATE stored procedure, or go ahead and write the UPDATE statement in the package.
Doing it that way off-loads the DML from the ETL server to the SQL Server, which is designed for exactly that kind of work. Sort of a "let everybody do what they're good at" approach, if you will.
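As a sketch of what that Execute SQL task could run, assuming the DB2 rows have been landed in a working table called dbo.staging on the target server (table names here are placeholders):
-- update y in the target table from the staged copy of the DB2 table
UPDATE t
SET    t.y = s.y
FROM   dbo.TargetTable AS t
JOIN   dbo.staging     AS s
  ON   s.x = t.x;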
OK, so rather than trying to map the data directly from DB2 into DB1, it's probably a better idea to stage the data from DB2 into DB1 and then update the table of interest in DB1.
The best way to do this is to create a new table in DB1 that stores all of the data from the table in DB2; let's call this table 'staging'. Use SSIS to do a flat insert from the table in DB2 to your new 'staging' table in DB1, and then create an UPDATE stored procedure in your DB1 database that updates existing entries in your endpoint table based on the entries now in 'staging'. You can trigger the stored procedure from SSIS to run after the staging table is populated. You can cut out the 'staging' table entirely if you have a synonym in DB1 that references the table in DB2.
SSIS is more about bulk movement of data than it is updating what already exists. Use stored procedures for anything in between.

MySQL Change Data Capture (CDC) - Azure services (Azure Data Factory)

I want to perform ETL operations on the tables of a MySQL database and store the data in an Azure data warehouse. I do not have an updated-date column to identify records modified over time. How do I know which records have been modified? Does MySQL support CDC?
Is it possible to read the MySQL binlogs (binary logs) using Azure services (Azure Data Factory)?
If you can put together a single-statement query that returns what you want, using whatever functions and joins are available to you, then you can put that into the sqlReaderQuery part of the ADF.
Otherwise you might be able to use a stored procedure activity (sorry, I'm not as familiar with MySQL as I am with ADF).
Do you have any column that is an increasing integer? If so, you can still use a Lookup activity + Copy activity + Stored Procedure activity to get an incremental load. More details here: https://learn.microsoft.com/en-us/azure/data-factory/tutorial-incremental-copy-powershell
ADF does not have built-in support for CDC yet. You can do it through a Custom activity in ADF with your own code.
In MySQL you have the option to add a timestamp column that is updated automatically whenever a row is updated. CDC is not available, but to see the difference you can compare MAX(updatedate) in MySQL against (>=) your own MAX(ETLDate) to pick up all the modified records.
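A sketch of that approach, with placeholder table and column names: MySQL can maintain such a column automatically, and the incremental query then selects only rows changed since the last load (how you pass in the last ETL date depends on your pipeline).
-- add an auto-updating timestamp column (placeholder names)
ALTER TABLE mytable
    ADD COLUMN updatedate TIMESTAMP NOT NULL
        DEFAULT CURRENT_TIMESTAMP
        ON UPDATE CURRENT_TIMESTAMP;

-- pull only rows modified since the last ETL run
SELECT *
FROM   mytable
WHERE  updatedate >= '2024-01-01 00:00:00';  -- replace with your stored MAX(ETLDate)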

Doing data transfer from Finacle to a SQL Server database through SSIS

I am transferring data from a Finacle database to my SQL Server database using SSIS.
I work in a bank and we use Finacle for data storage. Under my project I have to transfer only customer and account information, but the data is huge and it is taking too much time.
For example, I started my query to fetch data from Finacle on the 18th for 3 regions and it completed on the 19th. Then I ran the same query on the 19th for another 3 regions. This is how we are proceeding.
Last, we will run the query to upload all the new account and customer data since our first day.
My problem is that I do not want duplicate data. Since I am uploading data from the server again and again, it will lead to duplicates. Is there any way to add a check so that if the data is already in my destination table it is not inserted again, and it is inserted only if it is not there? Please help me with this.
Import your data to a staging table on SQL Server. Then run a stored procedure that copies only the data that doesn't exist in the destination table from the staging table to the destination table.
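A minimal sketch of such a procedure, assuming a staging table and a destination table keyed on CustomerId (all of these names are placeholders):
CREATE PROCEDURE dbo.usp_LoadCustomersFromStaging
AS
BEGIN
    -- copy only rows whose key is not already present in the destination
    INSERT INTO dbo.Customers (CustomerId, CustomerName)
    SELECT s.CustomerId, s.CustomerName
    FROM   dbo.Customers_Staging AS s
    WHERE  NOT EXISTS (SELECT 1
                       FROM   dbo.Customers AS c
                       WHERE  c.CustomerId = s.CustomerId);
END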
I needed this previously and did a little research. I happened to find this really simple workable solution here:
https://finaclestack.com/questions/how-to-prevent-data-duplication-when-running-an-import-from-tabley-to-tablex-in-finacle/
It says:
1. If tableY does not contain much data, then just SELECT * FROM tableY, fetch the insert statements for the result, and run them against tableX.
2. If tableY contains a lot of data, then select in batches using a WHERE clause, fetch the insert statements for each batch, and run them against tableX.

How to insert/update rows from MySQL to SQL Server by using SSIS

I'm looking for the best practice to insert or update rows from a MySQL connection to a SQL Server connection.
First of all, I added an ADO.NET data source to grab the MySQL content (a simple table Supplier with two fields, id and name). Then I added a Lookup transformation to split new rows from updated rows. It works well when I need to insert new rows. However, I would like to use an OLE DB Command to update existing rows, but it doesn't work due to an incompatibility between my connection manager and the component (ADO.NET vs OLE DB).
Any idea how to update the modified rows? Should I use a Cache component?
Thanks in advance!
Just get rid of the lookup and conditional split altogether.
Outside of your SSIS package, build a staging table that contains the fields you need for inserts/updates.
In your SSIS Package, create a control flow that does the following:
Execute SQL Task to truncate the staging table.
Data Flow task to load the MySQL data from the source system to the staging table. If you can do this based on a "changes-only" type process, such as using a timestamp that you check, it would be faster.
Execute SQL Task to perform an UPDATE statement on your target table using the staging table joined to the target table.
Execute SQL Task to perform an INSERT statement on your target table using a query based on the target table and your staging table (with a WHERE NOT EXISTS or some such on a key field).
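As a sketch of those last two Execute SQL statements, using the Supplier table from the question and a hypothetical Supplier_Staging table:
-- update existing suppliers from the staged MySQL data
UPDATE t
SET    t.name = s.name
FROM   dbo.Supplier         AS t
JOIN   dbo.Supplier_Staging AS s
  ON   s.id = t.id;

-- insert suppliers that don't exist in the target yet
INSERT INTO dbo.Supplier (id, name)
SELECT s.id, s.name
FROM   dbo.Supplier_Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Supplier AS t WHERE t.id = s.id);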
I would change the SQL connection to use OLE DB. As well as allowing the OLE DB Command to work, you may also find the OLE DB Destination is faster.

Automating tasks on more than one SQL Server 2008 database

We host multiple SQL Server 2008 databases provided by another group. Every so often, they provide a backup of a new version of one of the databases, and we run through a routine of deleting the old one, restoring the new one, and then going into the newly restored database and adding an existing SQL login as a user in that database and assigning it a standard role that exists in all of these databases.
The routine is the same, except that each database has a different name and different logical and OS names for its data and log files. My inclination was to set up an auxiliary database with a table defining the set of names associated with each database, and then create a stored procedure accepting the name of the database to be replaced and the name of the backup file as parameters. The SP would look up the associated logical and OS file names and then do the work.
This would require building the commands as strings and then exec'ing them, which is fine. However, the stored procedure, after restoring a database, would then have to USE it before it would be able to add the SQL login to the database as a user and assign it to the database role. A stored procedure can't do this.
What alternative is there for creating an automated procedure with the pieces filled in dynamically and that can operate cross-database like this?
I came up with my own solution.
Create a job to do the work, specifying that the job should be run out of the master database, and defining one Transact-SQL step for it that contains the code to be executed.
In a utility database created just for the purpose of hosting objects to be used by the job, create a table meant to contain at most one row, whose data will be the parameters for the job.
In that database, create a stored procedure that can be called with the parameters that should be stored for use by the job (including the name of the database to be replaced). The SP should validate the parameters, report any errors, and, if successful, write them to the parameter table and start the job using msdb..sp_start_job.
In the job, for any statement where the job needs to reference the database to be replaced, build the statement as a string and EXECUTE it.
For any statement that needs to be run in the database that's been re-created, doubly quote the statement so it can be passed as an argument to the instance of sp_executesql IN THAT DATABASE, and use EXECUTE to run the whole thing (a fuller sketch follows this list):
SET @statement = @dbName + '..sp_executesql ''[statement to execute in database @dbName]''';
EXEC (@statement);
Configure the job to write output to a log file.
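Expanding on that doubly-quoted pattern, here is a sketch of the user/role step; the database name, login, and role are placeholders, and in practice @dbName would come from the parameter table:
DECLARE @dbName sysname = N'RestoredDb';   -- placeholder; read from the parameter table in the real job
DECLARE @statement nvarchar(max);

-- quotes inside the inner statement are doubled once for the sp_executesql string
-- and doubled again because the whole call is itself built inside a string
SET @statement = @dbName
    + N'..sp_executesql N''CREATE USER [AppLogin] FOR LOGIN [AppLogin]; '
    + N'EXEC sp_addrolemember N''''StandardRole'''', N''''AppLogin'''';''';

EXEC (@statement);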