Force delete a file using SSIS on a network location

I am facing a problem deleting a file on a network location using SSIS. It is a zip file containing the monthly SQL database backup, so I need to delete last month's file before copying the current month's file.
There may be some app that is using this file, I am not sure, but I want to get rid of it so that I can copy the new file.
Thanks

Use a File System Task and you should be able to delete pretty much anything in any location, as long as you have the rights to do so.
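If you would rather do the delete from T-SQL (for example in the SQL Agent job that runs the backup copy), a rough sketch is below. This assumes xp_cmdshell is enabled and that the SQL Server service account has delete rights on the share; the UNC path is a placeholder for your actual backup location:

-- Sketch only: xp_cmdshell must be enabled, and the service account
-- needs delete rights on the share. /F forces deletion of read-only files.
EXEC master.sys.xp_cmdshell 'del /F /Q "\\BackupServer\Backups\LastMonthBackup.zip"';

Note that neither this nor the File System Task can remove a file another process still holds open; if some app has the zip locked, it has to release it first.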

Related

SSIS Loop container rename file name on a Monday

I am busy setting up an SSIS package. I use a foreach loop container to pick up a csv and place it in a staging SQL database; I then do my normalization with joins, push to production, archive my files again with a loop container and variables, and clean up my staging environment.
All of this works perfectly, except on a Monday (or after a day off). My files have the same name every day, and I add a time stamp.
The issue is that when getting the files after a day off I need to get 2 files or more. Windows adds an incrementing number in brackets, and although I can get such a file imported, I cannot get the rename loop to find it (note that my move-to-staging and rename are in different dtsx steps). As a result these files are not renamed and then not moved to archive.
I am guessing the issue is my filename variable, as it is static. I likely need a wildcard: FileImport.csv is my normal file name and my variable, but I sometimes have FileImport (1).csv, so I likely need something equivalent to FileImport*.csv.
(Screenshot: Source Connection settings)
Figured it out. First, fix the connection to point at the folder; then, instead of using a variable for the source connection, use the connection manager. Working as intended now.

CSV file not recognised

I am creating a CSV file on my system and MFT it to another. When they receive it, their job does not pick up the file. When they open the file in Excel, save it locally and reload the same file, the job picks up the records. I can't figure out what could be wrong with the file I create, or whether something is wrong with their job. Has anyone experienced something similar?
Appreciate any ideas.
Thanks
With so few details, no one other than you can find what's wrong.
My suggestion would be to get two files: one, the original file that the job does not want to deal with; two, the file that the job can consume (saved via Excel). Then open these two files in a text editor and try to find any differences; typical culprits are the line endings (LF vs CRLF), a byte-order mark, or the character encoding.

Copy/Move files in PDI / Spoon yields 'is not a file' error

I am trying to automate weekly generation of a database. As a first step in this process, I need to obtain a set of files from network location M:\. The process is as follows:
Delete any possibly remaining old source files from my local folder (REMOVE_OLD_FILES).
Obtain the names of the required files using regular expressions (GET_FILES).
Copy the files from the network location to my local folder for further processing (COPY/MOVE FILES).
Step 3 is where I run into trouble, I frequently receive the below error:
Error processing files. Exception : org.apache.commons.vfs.FileNotFoundException: Could not read from "file:///M:/FILESOURCE/FILENAME.zip" because it is a not a file.
However, when I manually locate the 'erroneous' file on the network location and try to open or copy it, there are no problems. If I then re-run the Spoon job, no errors occur for this file (although the next file might lead to an error).
So far, I have verified that steps 1 and 2 run correctly: more specifically, there are no errors in the file names returned from step 2.
Obviously, I would prefer not having to manually open all the files first to ensure that Spoon can correctly copy them. Does anyone have an idea what might be causing this behaviour?
For completeness, below are the parameters selected in the COPY/MOVE FILES step.
I was facing the same issue with different clients, and finally I tried a basic approach and it got resolved. It might help in your case as well, and other users can follow this rule too.
Just try this: create all the required folders with the Spoon job step "Create a folder", and deactivate/delete those hops from your job or transformation once the folders are created.
This happens because the user you are running as is not recognized as a Windows user when deleting the files. Once your folders are in place, you can remove the "Create a folder" steps from your job.
The path to the file is wrong. If you are running Spoon in a Windows environment, you should use the Windows format for file paths. Try changing from
"file:///M:/FILESOURCE/FILENAME.zip"
To
"M:\FILESOURCE\FILENAME.zip"
By the way, that will only work if M: is an actual drive on the machine. If you want to access a file on the network, you should use the network path to the shared folder, like this:
"\\MachineName\M$\FILESOURCE\FILENAME.zip"
or
"\\MachineName\FILESOURCE\FILENAME.zip"
If you try to access a file on a network-mounted drive, it won't work.

How to BULK INSERT files that change name every day?

I have a static folder that a daily txt file goes into. The file name is the date. If the file had the same name every day, everything would work. Is there a way I can have my script pull any txt file in the folder? (Note: the file comes in, gets processed, and then an automatic transfer moves it to a processed folder after it has been inserted, so there will only be one file at a time in the folder.) I hope that makes sense.
Here is the script for the bulk insert:
Bulk Insert Mydata.dbo.cust_adj
From 'C:\MyData\FlatFiles\UnprocessedAdjReport\importformat.txt'
With
(
    FieldTerminator = '|',
    Rowterminator = '\n'
)
Go
(I've got this saved as a stored procedure btw)
So "importformat" is just the name I used while setting up my scripts, going forward it will be in bb-yyyy-mmdd-hhmmnnnn.txt, as soon as the file is inserted, I move the file from the unprocessed folder to the processed folder. There will only be the one file each day.
If anyone has any advice or assistance with this, I would greatly appreciate it.
See these links; they may be what you are looking for:
http://www.kodyaz.com/articles/how-to-extract-filename-from-path-using-sql-functions.aspx
Or
http://sqljourney.wordpress.com/2010/06/08/get-list-of-files-from-a-windows-directory-to-sql-server/
Or maybe
http://www.codeproject.com/Articles/38850/An-Easy-Way-to-Get-a-File-Name-or-a-File-Extension
HTH
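Putting those links together: since there is only ever one file in the folder, you can list the folder from T-SQL, grab whatever .txt name is there, and build the BULK INSERT dynamically (BULK INSERT will not accept a variable for the file name directly). A rough sketch, assuming xp_cmdshell is enabled on your server:

DECLARE @fileName varchar(255), @sql nvarchar(max);

-- Capture the name of whatever .txt file is currently in the folder.
CREATE TABLE #dirOutput (fileName varchar(255));
INSERT INTO #dirOutput (fileName)
EXEC master.sys.xp_cmdshell 'dir /b "C:\MyData\FlatFiles\UnprocessedAdjReport\*.txt"';

SELECT TOP (1) @fileName = fileName
FROM #dirOutput
WHERE fileName LIKE '%.txt';   -- xp_cmdshell also returns NULL/noise rows

-- BULK INSERT needs a literal path, so build the statement dynamically.
SET @sql = N'Bulk Insert Mydata.dbo.cust_adj
From ''C:\MyData\FlatFiles\UnprocessedAdjReport\' + @fileName + N'''
With (FieldTerminator = ''|'', Rowterminator = ''\n'');';

EXEC sp_executesql @sql;
DROP TABLE #dirOutput;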

How can I add file locations to a database after they are uploaded using a Perl CGI script?

I have a CGI program I have written in Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db?
I would rather do that than change the script to actually upload the pics into the db; I have heard horror stories about storing binary files in databases.
Since I am new to all of this, I am at a loss. I have tried research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db.
I am using a Perl CGI script, a MySQL db, and a Linux server, and the files are being uploaded to the server. I am NOT looking to add the actual files to the db, just their location(s).
It sounds like you already have the part where you take the upload, read it in as a string, and toss it into MySQL, much like reading any file in as a string. However, since CGI hands you a filehandle rather than a filename, you are wondering where that file actually is.
If you're using CGI.pm, then upload(), uploadInfo(), the param for the upload field, and the private temp files CGI.pm creates will help you deal with the uploaded file source. Be aware that wherever the file is stashed after the remote client and the CGI are done is not permanent, and at a minimum is volatile, so move it somewhere yourself if you want to keep it.
You've got a bunch of already-uploaded files whose details need to be added to the db? It should be trivial to dash off a one-off script that loops through all the files and inserts the details into the DB. If they're all in one spot, a simple opendir()/readdir() loop would catch them all; otherwise you can build a list of file paths and loop over that.
If you're talking about recording new uploads on the server, then it would be something along these lines (a sketch of the database side follows the list):
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes well, commit the transaction
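The database side of steps 3 to 7 might look like the sketch below in MySQL; the uploads table and its columns are hypothetical stand-ins for whatever your schema actually uses:

-- Hypothetical uploads table; adjust names/columns to your schema.
START TRANSACTION;

INSERT INTO uploads (orig_name, size_bytes, mime_type)
VALUES ('photo.jpg', 48213, 'image/jpeg');

-- The new row's ID becomes the on-disk filename, e.g. /var/uploads/42.jpg
SELECT LAST_INSERT_ID();

COMMIT;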
Using the ID as the filename avoids filename collisions and new uploads overwriting previous ones. And if you store the uploads somewhere outside the site's webroot, the only access to the files will be via your scripts, giving you complete control over downloads.