I have created an SSIS package that includes a Script Task which appends each new attachment name and path to a variable, separated by "|", and runs inside a Foreach Loop so that the variable ends up holding all of the attachment names and paths. That variable is then passed to a Send Mail Task as the attachment list. The package runs fine when executed via a batch file and sends one email containing multiple files.
Now I want to schedule that batch file to run every hour. To do that, I need to add logic to the package so that it sends the mail only if 2 attachments are available; it should not send any email if zero or one attachment is present. That way I can remove the manual job execution. Can you please help? I am new to SSIS development. The Script Task code is below:
if (Dts.Variables["User::FileNamesList"].Value.ToString() == "")
{
    Dts.Variables["User::FileNamesList"].Value =
        Dts.Variables["User::InputPath"].Value.ToString() +
        Dts.Variables["User::Year"].Value.ToString() + "\\" +
        Dts.Variables["User::FileName"].Value.ToString();
}
else
{
    Dts.Variables["User::FileNamesList"].Value =
        Dts.Variables["User::FileNamesList"].Value.ToString() + "|" +
        Dts.Variables["User::InputPath"].Value.ToString() +
        Dts.Variables["User::Year"].Value.ToString() + "\\" +
        Dts.Variables["User::FileName"].Value.ToString();
}
Anything involving mail is much better handled in a Script Task.
Let me explain "better": you have a lot more control over To, Cc, Bcc, ReplyTo, etc., as well as being able to send an HTML-formatted body.
This should cover your entire process:
public void Main()
{
    // Requires: using System.IO; using System.Net; using System.Net.Mail;
    var fis = Directory.GetFiles(@"C:\folder"); // a 2nd param can be added for a search pattern (ex. "*.xls")

    // Check the number of files and only send when more than one is present
    if (fis.Length > 1)
    {
        SendEmail("Where's it going", "Here are some files", "What you want in the body", fis);

        // Archive the files that were just sent
        foreach (var f in fis)
            File.Move(f, @"archive folder path" + new FileInfo(f).Name); // placeholder path -- should end with "\"
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}
public static void SendEmail(string to, string subject, string body, string[] filePaths)
{
    string from = "SendingMail@blah.com";
    string pword = "YourPassword"; // credential for the sending account

    using (MailMessage mail = new MailMessage())
    {
        SmtpClient SmtpServer = new SmtpClient("YourSMTPClient");
        mail.From = new MailAddress(from);
        mail.To.Add(new MailAddress(to));
        mail.Subject = subject;
        mail.IsBodyHtml = true;
        mail.Body = body;

        // Attach every file in the list
        foreach (var f in filePaths)
        {
            mail.Attachments.Add(new Attachment(f));
        }

        SmtpServer.Port = 9999; // Your SMTP port
        SmtpServer.Credentials = new NetworkCredential(from, pword);
        SmtpServer.EnableSsl = true;
        SmtpServer.Send(mail);
    }
}
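If you go this route, the hard-coded folder and recipient can also come from package variables instead of being baked into the script. A minimal sketch, assuming variables named User::InputPath and User::MailTo exist (those names are illustrative, not from your package) and are listed in the Script Task's ReadOnlyVariables:
// Hypothetical variable names -- replace with the ones defined in your package
string folder = Dts.Variables["User::InputPath"].Value.ToString();
string mailTo = Dts.Variables["User::MailTo"].Value.ToString();

var fis = Directory.GetFiles(folder);
if (fis.Length > 1)
{
    SendEmail(mailTo, "Here are some files", "What you want in the body", fis);
}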
This was resolved the same day. I added a variable in my Script Task to count the number of files (it increases by 1 on each loop iteration), then put a simple condition on the arrow (precedence constraint) leading into the Send Mail Task to check whether that counter matches the expected value (I created another SSIS variable holding the maximum expected file count). The package now sends the mail only if 2 files are available in the folder. I also added a small task just to reset the variables; it has a second arrow from the Foreach Loop that runs when the condition is not met. The tool now sends a mail only for a year that has both files present and skips that loop iteration otherwise. I was able to schedule it as well.
(The original post included screenshots of: the expression on the arrow leading into the Send Mail Task, the expression on the arrow into the script task that resets the variables, and the counter variable.)
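For anyone doing the same thing, here is roughly what that looks like. A minimal sketch, assuming a counter variable named User::FileCounter and a threshold variable named User::MaxFileCount (both names are illustrative, not taken from the original package). Inside the Script Task, the counter is incremented once per loop iteration:
// Runs inside the Foreach Loop after the file name has been appended to the list.
// User::FileCounter is an Int32 package variable listed in ReadWriteVariables.
int counter = Convert.ToInt32(Dts.Variables["User::FileCounter"].Value);
Dts.Variables["User::FileCounter"].Value = counter + 1;
On the precedence constraint into the Send Mail Task, set the evaluation operation to "Expression" and use something like @[User::FileCounter] == @[User::MaxFileCount]; the second arrow, with the negated expression, goes to the task that resets the variables.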
I have a scenario where I need to check that multiple files with different file types exist in a directory after the execution of an SSIS package.
I have 2 files, one .txt and the other .csv, in a folder, and every day I need to check whether both files are present in the folder or not.
So I have created a package which has variables with dynamic expressions.
Folder Path : \\server1\testing\
Folder name : logisticsSummary
My files are
logisticsSummary.txt
logisticsSummary.csv
So I added a Script Task and wrote the script below.
string FileFolder = Dts.Variables["User::Fully_Qualified_Folder_Path"].Value.ToString();
foreach (string filename in FileFolder)
{
    string FileExtension = Path.GetExtension(filename);
    if (FileExtension != ".txt" && FileExtension != ".csv")
    {
        Dts.Variables["User::File_Exist"].Value = 0;
        Dts.TaskResult = (int)ScriptResults.Success;
    }
    else
    {
        Dts.Variables["User::File_Exist"].Value = 1;
        Dts.TaskResult = (int)ScriptResults.Success;
    }
}
When I execute the package, it throws this error:
Error: The binary code for the script is not found. Please open the script in the designer by clicking Edit Script button and make sure it builds successfully.
I have set the DelayValidation property to true, and I am not sure where my code went wrong. Is there any other approach to achieving this?
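The build failure is most likely the foreach itself: FileFolder is a single string (the folder path), and C# will not iterate a string as a sequence of strings, so the script never compiles and SSIS reports that the binary code is not found. A minimal sketch of one way the check could be written instead, assuming User::Fully_Qualified_Folder_Path resolves to the folder that actually contains the two files and that both variables are listed in the task's ReadOnlyVariables/ReadWriteVariables:
// Requires "using System.IO;" at the top of the script file
string fileFolder = Dts.Variables["User::Fully_Qualified_Folder_Path"].Value.ToString();

// The two files named in the question
bool txtExists = File.Exists(Path.Combine(fileFolder, "logisticsSummary.txt"));
bool csvExists = File.Exists(Path.Combine(fileFolder, "logisticsSummary.csv"));

// 1 only when both files are present, 0 otherwise
Dts.Variables["User::File_Exist"].Value = (txtExists && csvExists) ? 1 : 0;

Dts.TaskResult = (int)ScriptResults.Success;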
What I want to happen:
When a form pops up, the text box should automatically generate an integer based on the total number of JSON files located in my documentsDirectory, plus 1.
The text box value would be the file name of my JSON file, saved in documentsDirectory.
After clicking save, the form closes and the file is saved.
Repeat step 1.
You can count the number of JSON files using something like the function below that iterates through a directory and looks for files with extension "json".
private function countJsonFiles(dir:File):int
{
    var numJsonFiles:int = 0;
    //iterate through the directory and count matching entries
    for each (var file:File in dir.getDirectoryListing())
    {
        if (!file.isDirectory)
        {
            //the last condition assumes you only want to count files with extension "json"
            //I'm not sure if you want to count all the files or a different extension,
            //so you can modify that part as needed
            if (file.name != "." && file.name != ".." && file.extension == "json")
            {
                numJsonFiles++;
            }
        }
    }
    return numJsonFiles;
}
Then you can write to a new JSON file with its name concatenated with the number of JSON files so far plus 1 (note the parentheses, so the 1 is added numerically instead of being appended as text):
var file:File = File.documentsDirectory.resolvePath("jsonFile" + (countJsonFiles(File.documentsDirectory) + 1) + ".json");
var stream:FileStream = new FileStream();
stream.open(file, FileMode.WRITE);
stream.writeUTFBytes(text_box.text);
stream.close();
A more elegant solution to this would be to use an AIR SQLite database. You can find all the info you need here: http://help.adobe.com/en_US/as3/dev/WS5b3ccc516d4fbf351e63e3d118676a5497-7fb4.html
Basically you can create a simple table that has just one column "numJSONFiles" and increment this whenever a new file is created. That way, you always remember what the last index was that you used for a json file.
I don't recommend using EncryptedLocalStore here as it's not really sensitive information and doesn't quite fit the criteria found here: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/data/EncryptedLocalStore.html
Hope this helps!
Given one table in SQL Server which holds consolidated data from three source tables, including one column called OFFICE which differentiates the records from each other.
The three source tables hold data from three offices.
I want to create an Excel file dynamically which will have 3 sheets in one workbook based on the three different offices (e.g. office1, office2, office3), with each sheet containing the relevant data for its office.
Please recommend an approach using a dynamic Excel destination in SSIS, as I don't want to use an approach that creates a template file and then copies the template to the destination Excel file.
While this can be accomplished using a script task and C#, a far easier solution is demonstrated at
http://www.rafael-salas.com/2006/12/import-header-line-tables-_116683388696570741.html
and the follow-up
http://www.rafael-salas.com/2008/03/ssis-and-dynamic-excel-destinations_01.html#!
But to summarize the relevant details, you need to use an 'Execute SQL Task' to dynamically create the sheet at runtime prior to using it as a destination.
Create a new variable to hold the Sheet name and set this variable to the Office you are working with as you iterate through them.
Also, create a variable to hold the Create table statement that will create each sheet.
For example,
"CREATE TABLE " + @[User::SheetName] + "(HeaderID INTEGER, HeaderName NVARCHAR(50), LineID INTEGER, LineName NVARCHAR(50), LineDetails NVARCHAR(50))"
Then set the SQLSourceType property of the Execute SQL Task inside the For Each container to Variable and choose the variable you created to hold the CREATE statement.
In the Excel Destination component, change the data access mode to ‘Table Name or View Name Variable’ and choose the sheet name variable you created from the variable drop-down list.
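For example, if the Foreach loop sets a User::OfficeName variable on each iteration (that variable name is just illustrative), the SheetName variable could carry an expression such as "Office_" + @[User::OfficeName], so the CREATE TABLE statement above produces one sheet per office (Office_office1, Office_office2, Office_office3), each of which the Excel Destination then writes to.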
I have several SSIS packages that perform a similar function. A single Excel file consists of multiple worksheets with each worksheet populated by results from a separate SQL query. Here are the basic generic steps I applied. Before you begin, make certain you create a connection manager for both the database to be applied and the output Excel file.
1) Create a Script task in Control flow and populate it like the following. Here I am creating the Excel file along with the worksheets it will contain. (Worksheet names should never include spaces or special characters.) My code below is in C#.
using System;
using System.IO;
using System.Collections.Generic;
using System.Data;
using System.Text;
using Excel = Microsoft.Office.Interop.Excel;
using Microsoft.SqlServer.Dts.Runtime;

namespace ST_87e8d62a054b4e16b60297154afc19d8.csproj
{
    [System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        #region VSTA generated code
        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
        #endregion

        public void Main()
        {
            Excel.Application xlApp;
            Excel.Workbook xlWorkBook;
            Excel.Worksheet xlWorkSheet;
            object misValue = System.Reflection.Missing.Value;

            xlApp = new Excel.ApplicationClass();
            xlWorkBook = xlApp.Workbooks.Add(misValue);

            //Create first worksheet
            xlWorkSheet = (Excel.Worksheet)xlWorkBook.Worksheets.get_Item(1);
            xlWorkSheet.Name = "Names";

            //Define column headers for the "Names" worksheet
            xlWorkSheet.Cells[1, 1] = "First Name";
            xlWorkSheet.Cells[1, 2] = "Last Name";
            xlWorkSheet.Cells[1, 3] = "Title";

            //Create second worksheet
            xlWorkSheet = (Excel.Worksheet)xlWorkBook.Worksheets.get_Item(2);
            xlWorkSheet.Name = "Addresses";

            //Define column headers for the "Addresses" worksheet
            xlWorkSheet.Cells[1, 1] = "Street";
            xlWorkSheet.Cells[1, 2] = "City";
            xlWorkSheet.Cells[1, 3] = "State";
            xlWorkSheet.Cells[1, 4] = "Zip";
            xlWorkSheet.Cells[1, 5] = "Country";

            string Filename = "C:\\MyFile.xls";
            if (File.Exists(Filename))
            {
                File.Delete(Filename);
            }

            xlWorkBook.SaveAs(Filename, Excel.XlFileFormat.xlWorkbookNormal, misValue, misValue, misValue, misValue, Excel.XlSaveAsAccessMode.xlExclusive, misValue, misValue, misValue, misValue, misValue);
            xlWorkBook.Close(true, misValue, misValue);
            xlApp.Quit();

            releaseObject(xlWorkSheet);
            releaseObject(xlWorkBook);
            releaseObject(xlApp);

            Dts.TaskResult = (int)ScriptResults.Success;
        }

        //Helper referenced above: releases a COM object and swallows cleanup errors
        private void releaseObject(object obj)
        {
            try
            {
                System.Runtime.InteropServices.Marshal.ReleaseComObject(obj);
                obj = null;
            }
            catch
            {
                obj = null;
            }
            finally
            {
                GC.Collect();
            }
        }
    }
}
2) Create in your database two tables that will be populated temporarily. That is, one table will hold the results for the first worksheet and the second table will hold the results for the second worksheet. It is a good naming approach to preface the name of each table with "Working_" so that you know its purpose. I took the approach of using tables instead of views because I like to sort (ORDER BY) my results, which cannot be done with a view.
3) Add to the SSIS package two Execute SQL Tasks under Control Flow. The first task will run an INSERT SQL statement that will populate the first table you just created and the second task will run another INSERT SQL statement that will populate the second table just created.
4) Add to the SSIS package two Data Flow tasks under Control Flow. The first will be for populating the first worksheet and the second for populating the second worksheet.
5) Select the first Data Flow task and add to it, under Data Flow, an OLE DB Source where you will define the OLE DB connection manager (your database) and then the table or view. Select the first new table you created. Make certain all of the columns of interest are selected and that you can perform a preview.
6) Add a Data Conversion flow task and then an Excel Destination flow task.
7) Repeat steps 5 and 6 for the second worksheet and table.
8) Finally, under Control Flow, add an Execute SQL Task that will remove the contents of the two Working tables. You do not want the old contents to be included the next time the package is run.
Now, if you want to play around with the formatting of the Excel file after it is completed and impress your manager, you can also do that in code with a final Script Task (also using C#). The nice part about this approach is that you are not applying any special formatting functions in your SQL; Excel is doing all the work. You could actually include the formatting in Step 1, and as soon as you copy the data over in the following steps, it is automatically formatted. As with any report output, there is no point in making SQL perform formatting steps (adding extra work on the database server) when it is more efficient to let Excel or SSRS do what they do best.
public void Main()
{
    Excel.Application xlApp;
    Excel.Workbook xlWorkBook;
    Excel.Worksheet xlWorkSheet;
    object misValue = System.Reflection.Missing.Value;
    Excel.Range xlRange;

    xlApp = new Excel.ApplicationClass();
    string Filename = "C:\\MyFile.xls";
    xlWorkBook = xlApp.Workbooks.Open(Filename, 0, false, 5, "", "", true, Microsoft.Office.Interop.Excel.XlPlatform.xlWindows, "\t", false, false, 0, true, 1, 0);

    //Format cells in Names worksheet
    xlWorkSheet = (Excel.Worksheet)xlWorkBook.Worksheets.get_Item(1);

    //Set the header range in bold font
    xlRange = xlWorkSheet.get_Range("a1", "p1");
    xlRange.Font.Bold = true;
    xlRange.WrapText = true;

    //Freeze first row listing headers
    xlWorkSheet.Application.ActiveWindow.SplitRow = 1;
    xlWorkSheet.Application.ActiveWindow.FreezePanes = true;

    //Auto adjust the width of each column
    xlWorkSheet.Columns.AutoFit();

    //Unlock and highlight two ranges of cells
    xlRange = xlWorkSheet.get_Range("c1", "j6467");
    xlRange.Cells.Locked = false;
    xlRange.Interior.Color = 65535;

    xlRange = xlWorkSheet.get_Range("o1", "p6467");
    xlRange.Cells.Locked = false;
    xlRange.Interior.Color = 65535;

    //Set formatting of percent cells
    xlRange = xlWorkSheet.get_Range("d3", "d7");
    xlRange.NumberFormat = "###,###%";

    //Draw grid of thin lines around each cell in the table
    xlRange = xlWorkSheet.get_Range("c1", "c7");
    xlRange.BorderAround(Excel.XlLineStyle.xlContinuous, Excel.XlBorderWeight.xlThin, Excel.XlColorIndex.xlColorIndexAutomatic, 1);

    //Draw thick border around entire table
    xlRange = xlWorkSheet.get_Range("a1", "d7");
    xlRange.BorderAround(Excel.XlLineStyle.xlContinuous, Excel.XlBorderWeight.xlThick, Excel.XlColorIndex.xlColorIndexAutomatic, 1);

    //Right justify columns B and C
    xlRange = xlWorkSheet.get_Range("b3", "c7");
    xlRange.HorizontalAlignment = Excel.XlHAlign.xlHAlignRight;

    //Do not alert when saving changes to Excel file.
    xlWorkBook.Application.DisplayAlerts = false;

    //Save Excel file modifications
    xlWorkBook.Save();

    //Close workbook and application
    xlWorkBook.Close(true, misValue, misValue);
    xlApp.Quit();

    //Release from cache (same releaseObject helper as in the first script)
    releaseObject(xlWorkSheet);
    releaseObject(xlWorkBook);
    releaseObject(xlApp);

    Dts.TaskResult = (int)ScriptResults.Success;
}
And that's about it. Notice that, just for the purpose of this example, I'm hardcoding the filename, but in my actual code I apply a User variable which is populated by another SQL statement pulling the name from another database table. As a best practice, it's a good idea to keep your SSIS packages entirely table-driven. That way, any changes to names and locations are made in a database table, in a record specific to your SSIS package, avoiding any need to update your SSIS package and go through the dev-to-QA-to-production lifecycle again.
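As a rough sketch of that last point (the variable name below is made up, not the one from my package), the hard-coded path in the scripts above can be swapped for a read from a package variable that an earlier Execute SQL Task populates from your table:
// Hypothetical variable name -- add it to the Script Task's ReadOnlyVariables
string Filename = Dts.Variables["User::OutputFileName"].Value.ToString();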
Hope this helps and please let me know if you have any questions.
My Google Apps Script imports a CSV file [test.csv] into a new tab, manipulates some values, then saves the manipulated tab as a CSV file [test.csv]. When it saves the new version, it simply makes another copy [test(1).csv]. I want instead to overwrite the previous file (or delete the old one and then export/write the new version). Help, please?
I am using reference code from the Interacting With Docs List Tutorial
I know this is an old question, but much of the information in the accepted answer has been deprecated by Google since then. DocsList is gone, as are the clear() and append() methods on a file object.
I use the following function to create or overwrite a file:
// Creates or replaces an existing file
function updateFile(folder, filename, data) {
  try {
    // filename is unique, so we can get the first element of the iterator
    var file = folder.getFilesByName(filename).next();
    file.setContent(data);
  } catch (e) {
    folder.createFile(filename, data);
  }
}
For reference, here's some code for doing the same for a folder. It assumes we're operating in the parent folder of the current sheet and want a folder object for a new or existing folder there.
// Get a folder under the current folder. Create it if it does not exist.
function getOrCreateFolder(csvFolderName) {
  var thisFileId = SpreadsheetApp.getActive().getId();
  var thisFile = DriveApp.getFileById(thisFileId);
  var parentFolder = thisFile.getParents().next();
  try {
    // csvFolderName is unique, so we can get the first element of the iterator
    var folder = parentFolder.getFoldersByName(csvFolderName).next();
    // asking for the folder's name ensures we get an error if the
    // folder doesn't exist. I've found I don't necessarily catch
    // an error from getFoldersByName if csvFolderName doesn't exist.
    var fname = folder.getName();
  } catch (e) {
    var folder = parentFolder.createFolder(csvFolderName);
  }
  return folder;
}
You could do DocsList.find(fileName) which gives you a list of files that have that name. If file names are unique, then you can just do var file = DocsList.find(fileName)[0].
If you are a Google Apps user, you can use file.clear() to remove all the contents of the old file, and then file.append() to insert all of the new contents.
Otherwise, you will have to file.setTrashed(true) and then DocsList.createFile() to make the new one.