I am working on a Microsoft Integration Services (SSIS) project, and I have a flat source file (Product.txt) containing data that I save to a SQL Server database when I run the project.
The data is saved successfully, but when I change some values in my source Product.txt and re-run the project, the data in SQL Server is not updated.
Is there anything that must be done to enable the update? Thank you.
There are several things you can do, but you haven't provided enough info. I am guessing here: based on your mention of a changed file, I take it you mean an update.
That generally means your data flow should start with the source, then use a Lookup against your destination to see whether your "key" already exists. Configure the Lookup to redirect non-matching rows to the no-match output.
Send the no-match output to your inserts, and send the match output to an update SQL statement (for example, an OLE DB Command).
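For illustration, here is that same "lookup, then insert or update" logic as a minimal Python sketch outside of SSIS, using pyodbc. The table and column names (dbo.Product, ProductID, ProductName, Price), the connection string, and the pipe delimiter are assumptions, not details taken from your package.

# Minimal sketch of the lookup / insert-or-update pattern the SSIS components
# implement. All object names and the delimiter are placeholders.
import csv
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=localhost;Database=MyDb;Trusted_Connection=yes;"
)
cur = conn.cursor()

with open("Product.txt", newline="") as f:
    for row in csv.DictReader(f, delimiter="|"):
        # "Lookup": does this key already exist in the destination?
        cur.execute("SELECT 1 FROM dbo.Product WHERE ProductID = ?", row["ProductID"])
        if cur.fetchone():
            # Match output -> update the existing row
            cur.execute(
                "UPDATE dbo.Product SET ProductName = ?, Price = ? WHERE ProductID = ?",
                row["ProductName"], row["Price"], row["ProductID"],
            )
        else:
            # No-match output -> insert a new row
            cur.execute(
                "INSERT INTO dbo.Product (ProductID, ProductName, Price) VALUES (?, ?, ?)",
                row["ProductID"], row["ProductName"], row["Price"],
            )

conn.commit()
conn.close()

In the SSIS data flow, the Lookup component plays the role of the SELECT, the no-match output feeds your insert destination, and the match output feeds the update statement.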
I am working with a scenario where I have one Azure File Storage account containing different folders, each of which contains a *.csv file. I want to load each *.csv file into a different Azure SQL Database table dynamically by iterating over my RootFolder.
The problem I am facing is that my *.csv files contain more columns than my destination. When the copy activity is triggered, it fails with an error.
You could define your dataset schema as an expression (parameterize it), and then use a Get Metadata activity to get the schema of each file.
Using Microsoft Visual Studio Community 2015.
Goal of project
-create a "*\temp\email" directory
-start a program to extract the .xls attachments from all emails into the previously created folder
-use a For Each loop to cycle through each file in the folder, process it, and shift it to a SQL table.
The problem I am running into is caused either by a blank Excel document (which is occasionally sent from a remote location) or by some of the original .xls reports containing only 5 columns instead of the 6 that I have mapped now. Is there any way to separate the files that include the correct columns from those that do not match?
** As long as these two problems do not occur, I can run the SSIS package and everything runs without issue.
Control flow:
File System Task (creates directory) ---> Execute Process Task (xls extraction) --> ForEach Loop (Data Flow Task "email2Sql")
Data flow:
Excel Source (ExcelFilePath set by an expression to the @[User::FilePath] variable), DelayValidation == true
(Columns are initially set to F1-F6 and are mapped to, for example, a, b, c, d, e, f. The older files that get mixed in only include a, b, c, d, e.) This is where I want to be able to separate the .xls files.
Conditional Split transformation (column names are not in row 1; this helps remove "null" values)
OLE DB Destination (SQL table)
Sorry for the amount of reading, but for my first post I tried to include anything that I thought might be relevant.
There are some tools out there that would allow you to open the Excel doc and read it (see the sketch at the end of this answer). However, I think the simplest thing to do would be to use SSIS out of the box:
1 - Add a File System Task after the data flow that reads the file.
2 - Make the precedence constraint from the data flow to the File System Task "failure." This will cause it to fire only when the data flow task fails.
3 - Set the File System Task to move the "bad" file to another folder.
This will allow you to loop through all the files and move the failed ones. Ultimately, the package will end in failure. If you don't want that behavior you can change the ForceExecutionResult property to be success. However, it might be good to know that there were problems with some files so that they can be addressed.
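For reference, if you did want to go the route of opening and reading the Excel files before the data flow ever sees them, a small pre-check script could sort them into a "rejected" folder. This is only a sketch: it assumes Python with pandas (and the xlrd engine for the old .xls format) is available on the machine, and the folder paths are placeholders; the expected column count of 6 comes from your description.

# Sketch: pre-sort .xls files by column count before the SSIS ForEach loop runs.
# Assumes pandas + xlrd can open these .xls files; paths are placeholders.
import shutil
from pathlib import Path

import pandas as pd

SOURCE_DIR = Path(r"C:\temp\email")   # folder the attachments land in (assumed)
BAD_DIR = SOURCE_DIR / "rejected"     # where blank/mismatched files go (assumed)
EXPECTED_COLUMNS = 6

BAD_DIR.mkdir(parents=True, exist_ok=True)

for xls in SOURCE_DIR.glob("*.xls"):
    try:
        sheet = pd.read_excel(xls, header=None)   # column names are not in row 1
        ok = (not sheet.empty) and sheet.shape[1] == EXPECTED_COLUMNS
    except Exception:
        ok = False                                # unreadable or blank workbook
    if not ok:
        shutil.move(str(xls), str(BAD_DIR / xls.name))

Run from an Execute Process Task before the ForEach Loop, something like this would keep the blank and five-column files out of the loop entirely.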
I have 2 different sources to be loaded into a data warehouse, with a condition.
The 1st source is a flat file and the 2nd source is a SQL database. The condition is that the SSIS package should check a flag value in a table: if the value is 0, it should process the flat file, and if the value is 1, it should process the data from the SQL source. I know this can be done with built-in tasks, but it must be done in a Script Task by dynamically changing the connection type (data provider) based on the flag value. Anybody who is well versed in .NET, please help with this. Kindly let me know if something is not clear.
Thanks.
I have several Access files with data from a group of users that I'm importing into one master file. The tables in the user files are each configured with a Before Change data macro that adds a timestamp each time the user edits the data.
("Data macros" are similar to triggers in SQL Server. They are different from UI macros. For more info, see this page.)
I'd like to import these timestamps into the master file, but since the master file is a clone of the user files, it also contains the same set of data macros. Thus, when I import the data, the timestamps get changed to the time of the import, which is unhelpful.
The only way I can find to edit data macros is by opening each table in Design View and then using the Ribbon to change the settings. There must be an easier way.
I'm using VBA code to perform the merge, and I'm wondering if I can also use it to temporarily disable the data macro feature until the merge has been completed. If there is another way to turn the data macros off for all files/tables at once, even on the users' files/tables, I'd be open to that too.
Disable the code? No. Bypass the code? Yes.
Use a table/field as a flag. Set the status before importing. Check the status of this flag in your event code, and decide whether you want to skip the rest of the code, e.g.:
If [tblSkipFlag].[SkipFlag] = False Then
    {rest of data macros}
End If
Another answer here explains how you can use the (almost-)undocumented SaveAsText and LoadFromText methods with the acTableDataMacro argument to save and retrieve the Data Macros to a text file in XML format. If you were to save the Data Macro XML text for each table, replace ...
<DataMacro Event="BeforeChange"><Statements>
... with ...
<DataMacro Event="BeforeChange"><Statements><Action Name="StopMacro"/>
... and then write the updated macros back to the table, that would presumably have the effect of "short-circuiting" those macros.
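For example, a rough sketch of that save / edit / reload cycle, driven over COM from Python with pywin32 (the same calls work from VBA), might look like this. It assumes the Access object library exposes SaveAsText, LoadFromText, and the acTableDataMacro constant as described above; the database path, table names, temp-file location, and the text encoding of the exported XML are all assumptions.

# Rough sketch: export each table's data macros, prepend a StopMacro action to
# the BeforeChange event, and load the edited XML back. Paths and table names
# are placeholders; the exported file is assumed to be readable as UTF-8 text.
import win32com.client

DB_PATH = r"C:\merge\master.accdb"   # assumed path to the master file
TABLES = ["tblData"]                 # tables whose BeforeChange macros to bypass

access = win32com.client.gencache.EnsureDispatch("Access.Application")
ac = win32com.client.constants       # assumes acTableDataMacro is in the type library
access.OpenCurrentDatabase(DB_PATH)
try:
    for table in TABLES:
        xml_path = rf"C:\merge\{table}_datamacro.xml"
        access.SaveAsText(ac.acTableDataMacro, table, xml_path)
        with open(xml_path, encoding="utf-8") as f:
            xml = f.read()
        xml = xml.replace(
            '<DataMacro Event="BeforeChange"><Statements>',
            '<DataMacro Event="BeforeChange"><Statements><Action Name="StopMacro"/>',
        )
        with open(xml_path, "w", encoding="utf-8") as f:
            f.write(xml)
        access.LoadFromText(ac.acTableDataMacro, table, xml_path)
finally:
    access.CloseCurrentDatabase()
    access.Quit()

Running the reverse replacement after the merge would presumably restore the original BeforeChange behaviour.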
I have a problem that has been annoying me for quite some time now, and a few days ago I started googling for a solution, but I haven't really gotten anything to work. I've read a little about something called SSIS, but I'm not sure it does what I'm looking for, or whether there is something else I should research in order to accomplish my goal. This is my problem:
My accounting program produces and updates a .dbf file with information about all vouchers and places it in a folder on my local computer. Our MySQL database must continually be updated with this information. So this is what I do twice a day:
I open up the .dbf file in Excel
Save it as a .csv.
Close Excel
Open the file in Notepad++
Convert the encoding to UTF-8
Save
Log in to MySQL
Go to the right table
Upload the .csv
Replace the old data with the new
As this takes quite a bit of time, I feel that there must be a better way to do this. It would be great if I could have this scheduled to run automatically, or if there were some kind of SQL query that could do it, because then I could use PHP to make a page that I could open and have the query run when I press a button or something.
So my question is: what is the simplest way to continually get the info from the .dbf file into my MySQL server?
There is a way to do your job on a schedule with DBF Commander Pro's command-line interface. Use the following command in a *.BAT file:
dbfcommander.exe -edb <dbf_file_name> <server_table_name> <connection_string>
After that, create a schedule for this BAT file using Windows Task Scheduler.
The only remaining issue is that you need to clear the destination table in the MySQL database before the export process.
To try the export process in the app's GUI, click 'File -> Export to DBMS'. In the window that appears, click the Build button to build the connection string: select MS OLEDB Provider for MySQL Server, then choose your server from the list, provide the login and password, select a database, and click OK.
In the Export to DBMS window, select the destination table you want to import the source DBF file into, then click Export. The command line you need can be found at the bottom of the window.
More info on importing and exporting DBF to a database can be found here. Detailed command-line usage is here.
Since you mention doing this in PHP, what is stopping you from doing it there?
You could create one connection using the VFPOleDB provider to open the path location of the table, then open and read the table. Then have a SECOND connection to your MySQL database open and ready to push the data there.
Then, for each row read from the VFP OleDB connection result set, do whatever special cleansing you need to.
Then, query the MySQL connection to see whether it's an existing entry or not and whether an add or update is necessary, then send the data accordingly.
Continue for the rest of the records from the VFP result set.
No need to open in Excel, save to CSV format, load yet another tool, etc...
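If a scripting language other than PHP is acceptable, the same read-then-upsert flow looks roughly like this in Python, using dbfread for the .dbf side and PyMySQL for the database side. The file path, codepage, credentials, table name, and field names are all made up for the example.

# Sketch of the two-connection flow in a script instead of PHP: read the .dbf
# directly and insert-or-update rows in MySQL. Library choices (dbfread,
# PyMySQL), the path, codepage, credentials, and all names are assumptions.
import pymysql
from dbfread import DBF

conn = pymysql.connect(host="localhost", user="me", password="secret",
                       database="accounting", charset="utf8mb4")
cur = conn.cursor()

# dbfread decodes the records for you, so there is no Excel/CSV/UTF-8 detour.
for rec in DBF(r"C:\exports\vouchers.dbf", encoding="cp1252"):
    cur.execute(
        "INSERT INTO vouchers (voucher_no, voucher_date, amount) "
        "VALUES (%s, %s, %s) "
        "ON DUPLICATE KEY UPDATE voucher_date = VALUES(voucher_date), "
        "amount = VALUES(amount)",
        (rec["VOUCHERNO"], rec["VDATE"], rec["AMOUNT"]),
    )

conn.commit()
conn.close()

Scheduled with Windows Task Scheduler (or cron), something like this would replace the manual twice-a-day routine; the ON DUPLICATE KEY UPDATE clause assumes the destination table has a primary or unique key to match on.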