I have a loop that iterates through connections. Each iteration connects to a different source and then loads tables and such. The second step is where it connects and loads data into our DWH. Sometimes the connection is down for whatever reason and the package fails at this step. I need the package to keep going when this connection failure happens.
I have read many things about setting Propagate to false, and this still does not work. As you can see in my screenshot, I have Propagate set to false on the "Load ..." OnError event handler and on the Sequence Container's OnError event handler.
I have also tried setting the Sequence Container's MaximumErrorCount to 0 so that the section completes, and using the OnError of the "Load ..." task to set a flag in a variable so the package continues if the connection succeeds or stops there if it fails.
I have done this in the past by forcing the overall package result to success on completion, but that will not catch other errors that may occur in this loop and that I need to catch and fail the package on.
Any help here would be appreciated.
Doing more research on connections failing, I found a script by Jamie Thomson: Verify a connection before using it [SSIS].
I modified it a bit for my own usage:
I only used a single connection instead of looping through all of them.
I set the task result to always succeed.
Instead of FireError I did a FireWarning.
I created a variable (connFail) that is set to 0 or 1 depending on whether the connection failed.
I placed this Script Task before my table load to catch any failed connections before the load task was executed. These modifications allowed me to fire an e-mail alert if the connection failed (connFail = 1), or continue the package if the connection was successful (connFail = 0).
The full script I used is below:
bool failure = false;
bool fireAgain = true;

try
{
    // Try to acquire the connection; this throws if the source is unreachable.
    Dts.Connections["Connection"].AcquireConnection(null);
    Dts.Events.FireInformation(1, ""
        , String.Format("Connection acquired successfully on '{0}'", Dts.Connections["Connection"].Name)
        , "", 0, ref fireAgain);
}
catch (Exception e)
{
    // Fire a warning rather than an error so the package keeps running.
    Dts.Events.FireWarning(-1, ""
        , String.Format("Failed to acquire connection to '{0}'. Error Message='{1}'", Dts.Connections["Connection"].Name, e.Message)
        , "", 0);
    failure = true;
}

// The task itself always succeeds; connFail tells downstream tasks what happened.
Dts.TaskResult = (int)ScriptResults.Success;
Dts.Variables["connFail"].Value = failure ? 1 : 0;
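For reference, one way to wire up the branching described above (assuming connFail lives in the User namespace) is with expression-based precedence constraints coming out of this Script Task:

On the constraint leading to the e-mail alert task: @[User::connFail] == 1
On the constraint leading to the "Load ..." task: @[User::connFail] == 0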
I have a controller where I need to invalidate a token and create a new token.
The controller code snippet looks something like this:
function __regenerate_token($t)
{
    if ($token = Token::find()->where('token = :t', ['t' => $t])->one())
    {
        $token->expired = true;
        $token->save(); // ->save(false);
    }

    $newtoken = new Token();
    $newtoken->attributes = [
        'token'     => strtolower(trim(\com_create_guid(), '{}')),
        'expiry_at' => strtotime("+10 minutes"),
    ];
    $newtoken->save(false);

    return $newtoken;
}
Now what's happening is that update() returns true with no errors, and the same goes for the insert. Run individually, they work fine, but when I call them the way shown above, it fails silently.
The underlying table is InnoDB. I tried wrapping the update and insert inside a transaction, but the issue remains.
I found out the reason. I was calling __regenerate_token in a function after a DB transaction. However, under a certain condition the execution exited that function before committing or rolling back the changes, so __regenerate_token also became part of the open transaction and failed.
Pseudo code:
function debit_account
    begin transaction
    try
        do stuff
        if some_condition
            return false   # <-- this caused the issue
        endif
        commit
        return true
    catch
        rollback
        return false
end function

debit_account()
__regenerate_token()   # fails when some_condition is hit
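For completeness, here is a sketch of the fix in the same pseudo code: every path out of the function now closes the transaction before returning, so __regenerate_token is no longer swallowed by it.

function debit_account
    begin transaction
    try
        do stuff
        if some_condition
            rollback        # close the transaction before leaving early
            return false
        endif
        commit
        return true
    catch
        rollback
        return false
end function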
Hope this helps someone else who hits a problem like this!
The requirement is to execute an SSIS package when a file arrives in a folder; I do not want to start the package manually.
The file arrival timing is unknown, and files can arrive multiple times. Whenever a file arrives, it has to be loaded into a table. I think a solution like the File Watcher Task still expects the package to be started.
The way I have done this in the past is with an infinite loop package called from SQL Server Agent, for example:
This is my infinite loop package:
Set these variables:
IsFileExists - Boolean - 0
FolderLocation - String - C:\Where the file is to be put in\
For the For Loop container:
Set the IsFileExists variable as above.
Set up a C# Script Task with User::FolderLocation as a ReadOnlyVariable and add the following code:
public void Main()
{
    int fileCount = 0;
    string[] FilesToProcess;

    // Keep polling the folder until at least one readable .txt file is present.
    while (fileCount == 0)
    {
        try
        {
            System.Threading.Thread.Sleep(10000);
            FilesToProcess = System.IO.Directory.GetFiles(Dts.Variables["FolderLocation"].Value.ToString(), "*.txt");
            fileCount = FilesToProcess.Length;
            if (fileCount != 0)
            {
                // Check that each file can be opened, i.e. it is no longer being written to.
                for (int i = 0; i < fileCount; i++)
                {
                    try
                    {
                        System.IO.FileStream fs = new System.IO.FileStream(FilesToProcess[i], System.IO.FileMode.Open);
                        fs.Close();
                    }
                    catch (System.IO.IOException)
                    {
                        // The file is still locked; reset the counter so we keep waiting.
                        fileCount = 0;
                        continue;
                    }
                }
            }
        }
        catch (Exception)
        {
            throw;
        }
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}
What this will do is essentially keep an eye on the folder location for a .txt file. If the file is not there, it will sleep for 10 seconds (you can increase this if you want). If the file does exist, the script completes and the package will then execute the load package. However, the loop package will continue to run, so the next time a file is dropped in it will execute the load package again.
Make sure to run this infinite-loop package as a SQL Server Agent job so it will run all the time; we have a similar package running and it has never caused any problems.
Also, make sure your load package moves/archives the file away from the drop folder location, as sketched below.
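As a rough sketch of that archiving step (the "Archive" subfolder and the *.txt pattern are assumptions, not part of the original answer), a small piece of C# in the load package could move each processed file out of the drop folder:

// Hypothetical archiving step: move processed files into an "Archive" subfolder
// so the watcher package does not pick them up again.
string dropFolder = Dts.Variables["FolderLocation"].Value.ToString();
string archiveFolder = System.IO.Path.Combine(dropFolder, "Archive");
System.IO.Directory.CreateDirectory(archiveFolder);   // no-op if it already exists

foreach (string file in System.IO.Directory.GetFiles(dropFolder, "*.txt"))
{
    string target = System.IO.Path.Combine(archiveFolder, System.IO.Path.GetFileName(file));
    System.IO.File.Move(file, target);   // throws if a file with the same name was already archived
}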
As others have already suggested, a WMI task or an infinite loop are two options to achieve this, but IMO SSIS is resource intensive. If you let a package constantly run in the background, it could eat up a lot of memory and CPU and cause performance issues with other packages, depending on how many other packages you have running. So another option you may want to consider is to schedule an Agent job every 5 or 10 minutes and call your package in that job, configuring the package to continue only when a file is there and to quit otherwise (see the sketch below).
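A minimal sketch of that check, assuming a Script Task at the start of the package with FolderLocation as a read-only variable and a hypothetical Boolean FileFound variable as read-write:

public void Main()
{
    // Look for any .txt file in the configured drop folder.
    string folder = Dts.Variables["FolderLocation"].Value.ToString();
    string[] files = System.IO.Directory.GetFiles(folder, "*.txt");

    // Record whether anything arrived; an expression-based precedence constraint
    // on this variable can then decide whether the rest of the package runs.
    Dts.Variables["FileFound"].Value = files.Length > 0;

    Dts.TaskResult = (int)ScriptResults.Success;
}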
You can create a Windows service that uses WMI to detect file arrival and launch packages. Details on how to do this are here: http://msbimentalist.wordpress.com/2012/04/27/trigger-ssis-package-when-files-available-in-a-folder-part2/?relatedposts_exclude=330
What about the SSIS File Watcher Task?
I have a Flex app that connects to a JBoss/MS-SQL back-end. Some of our customers have a proxy server in front of their JBoss with a timeout of 90 seconds. In our application there are searches that can take up to 2-3 minutes for complex criteria. Since the proxy isn't smart enough to recognize AMF's keep-alive pings for what they are, it sends a 503 to the client, which in Flex land becomes a "Channel Call Failed" event. In searching SO and other places, this seems to be a common problem. We can't do anything about the proxy or lengthen the timeout; the application needs to handle it.
Of course the back-end continues to process and eventually ships the results to the client, but the user gets an ugly error message and assumes the app is broken.
The solution I have settled on is to consume the CCF error and have the client continue to wait. I have managed the first part, but I can't figure out how to keep the client's handlers active to receive the data (and/or consume another timeout if necessary).
Current error handler:
private function handleSearchError(event : FaultEvent) : void {
    if (event.fault.faultCode == "Channel.Call.Failed") {
        event.stopImmediatePropagation(); // doesn't seem to help
        return;
    }
    if (searchProgress != null) {
        PopUpManager.removePopUp(searchProgress);
        searchProgress = null;
    }
    // etc...
}
This is the setup:
<mx:Button id="btnSearch" label="
{resourceManager.getString('recon_perspective',
'ReconPerspective.ReconView.search')}" icon="{iconSearch}"
click="handleSearch()" includeIn="search, default"/>
And:
<mx:method name="search" result="event.token.resultHandler(event);"
fault="handleSearchError(event);"/>
Kicking off the call:
var token : AsyncToken = null;
token = sMSrv.search(searchType.toString(), getSearchMode(), criteria,
    smartMatchParent.isArchiveMode);
searchProgress = LoadProgress(PopUpManager.createPopUp(
    FlexGlobals.topLevelApplication as DisplayObject, LoadProgress, true));
searchProgress.title = resourceManager.getString('matching', 'smartmatch.loading.trans');
searchProgress.token = token;
searchProgress.showCancelButton = true;
PopUpManager.centerPopUp(searchProgress);
token.resultHandler = handleSearchResults;
token.cancelSearch = false;
So my question is how do I keep handleSearch and handleSearchError alive to consume the events from the server?
I verified that the data comes back from the server using WebDeveloper in the browser to watch the network traffic and if you cause the app to refresh that screen, the data gets displayed.
I'm very inexperienced, but would this help?
private function handleSearchError(event : FaultEvent) : void {
    if (event.fault.faultCode == "Channel.Call.Failed") {
        event.stopImmediatePropagation(); // doesn't seem to help
        if (event.isImmediatePropagationStopped(true)) {
            // After stopped do something here?
        }
        return;
    }
    if (searchProgress != null) {
        PopUpManager.removePopUp(searchProgress);
        searchProgress = null;
    }
    // etc...
}
When using the Background Transfer API, we must iterate through the current data transfers to start them again when the app restarts after a termination (e.g. a system shutdown). To get progress information and to be able to cancel the data transfers, they must be attached using AttachAsync.
My problem is that AttachAsync only returns when the data transfer is finished. That makes sense in some scenarios, but with multiple data transfers the next transfer in the list is not started until the currently attached one has finished. My solution to this problem was to handle the Task that AttachAsync().AsTask() returns in the classic way (not with await but with continuations):
IReadOnlyList<DownloadOperation> currentDownloads =
    await BackgroundDownloader.GetCurrentDownloadsAsync();

foreach (var downloadOperation in currentDownloads)
{
    Task task = downloadOperation.AttachAsync().AsTask();
    DownloadOperation operation = downloadOperation;

    task.ContinueWith(_ =>
    {
        // Handle success
        ...
    }, CancellationToken.None, TaskContinuationOptions.OnlyOnRanToCompletion,
    TaskScheduler.FromCurrentSynchronizationContext());

    task.ContinueWith(_ =>
    {
        // Handle cancellation
        ...
    }, CancellationToken.None, TaskContinuationOptions.OnlyOnCanceled,
    TaskScheduler.FromCurrentSynchronizationContext());

    task.ContinueWith(t =>
    {
        // Handle errors
        ...
    }, CancellationToken.None, TaskContinuationOptions.OnlyOnFaulted,
    TaskScheduler.FromCurrentSynchronizationContext());
}
It kind of works (in the actual code I add the downloads to a ListBox). The loop iterates through all downloads and executes StartAsync. But the downloads are not really started all at the same time: only one is running at a time, and only when it finishes does the next one continue.
Any solution for this problem?
The whole point of Task is to give you the option of parallel operations. If you await, you are telling the code to serialize the operations; if you don't await, you are telling the code to parallelize them.
What you can do is add each download task to a list without awaiting, telling the code to parallelize, and then wait for the tasks to finish one by one.
How about something like:
IReadOnlyList<DownloadOperation> currentDownloads =
    await BackgroundDownloader.GetCurrentDownloadsAsync();

if (currentDownloads.Count > 0)
{
    List<Task<DownloadOperation>> tasks = new List<Task<DownloadOperation>>();
    foreach (DownloadOperation downloadOperation in currentDownloads)
    {
        // Attach progress and completion handlers without waiting for completion
        tasks.Add(downloadOperation.AttachAsync().AsTask());
    }

    while (tasks.Count > 0)
    {
        // wait for ANY download task to finish
        Task<DownloadOperation> task = await Task.WhenAny<DownloadOperation>(tasks);
        tasks.Remove(task);

        // process the completed task...
        if (task.IsCanceled)
        {
            // handle cancel
        }
        else if (task.IsFaulted)
        {
            // handle exception
        }
        else if (task.IsCompleted)
        {
            DownloadOperation dl = task.Result;
            // handle completion (e.g. add to your listbox)
        }
        else
        {
            // should never get here....
        }
    }
}
I hope this is not too late, but I know exactly what you are talking about. I'm also trying to resume all downloads when the application starts.
After hours of trying, here's the solution that works.
The trick is to let the download operation resume first before attaching the progress handler.
downloadOperation.Resume();
// cts is a CancellationTokenSource that can be used to cancel the attach
await downloadOperation.AttachAsync().AsTask(cts.Token);
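Combining that with the non-blocking attach from the earlier answer, a rough sketch could look like this (cts and the handling inside the loop are placeholders):

IReadOnlyList<DownloadOperation> currentDownloads =
    await BackgroundDownloader.GetCurrentDownloadsAsync();
CancellationTokenSource cts = new CancellationTokenSource();
List<Task<DownloadOperation>> tasks = new List<Task<DownloadOperation>>();

foreach (DownloadOperation downloadOperation in currentDownloads)
{
    // Resume first, then attach without awaiting so all downloads run in parallel.
    downloadOperation.Resume();
    tasks.Add(downloadOperation.AttachAsync().AsTask(cts.Token));
}

// Process each download as it finishes, in completion order.
while (tasks.Count > 0)
{
    Task<DownloadOperation> finished = await Task.WhenAny(tasks);
    tasks.Remove(finished);
    // ... inspect finished.IsCanceled / IsFaulted / Result here ...
}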
I'm modifying the CommandText of a LINQ to SQL query to force it to use NOLOCK, like this...
if (db.Connection.State == System.Data.ConnectionState.Closed)
    db.Connection.Open();

var cmd = db.GetCommand(db.Customers.Where(p => p.ID == 1));
cmd.CommandText = cmd.CommandText.Replace("[Customers] AS [t0]", "[Customers] AS [t0] WITH (NOLOCK)");
var results = db.Translate(cmd.ExecuteReader());
It's an MVC application, so the datacontext is in the base controller, and may have been used before this code, and more importantly, after. Should I be closing the connection in this routine? Or not at all? Or only if I opened it here?
Update:
I'm now using a more general function (in the DataContext class) to modify the CommandText, closing the connection only if it was opened here. The Open call has also been moved down to just before the ExecuteReader. So far it has been working and has reduced the sporadic deadlock issues. The results do not have to be right up to the second.
public List<T> GetWithNolock<T>(IQueryable<T> query)
{
    // To skip NOLOCK, just...
    // return query.ToList();
    List<T> results = null;
    bool opened = false;
    try
    {
        if (Connection.State == System.Data.ConnectionState.Closed)
        {
            Connection.Open();
            opened = true;
        }
        using (var cmd = GetCommand(query))
        {
            // Inject WITH (NOLOCK) after every table reference in the generated SQL.
            cmd.CommandText = Regex.Replace(cmd.CommandText, @"((from|inner join) \[dbo.*as \[t\d+\])", "$1 with (nolock)", RegexOptions.IgnoreCase);
            results = Translate<T>(cmd.ExecuteReader()).ToList();
        }
    }
    finally
    {
        // Only close the connection if this method opened it.
        if (opened && Connection.State == System.Data.ConnectionState.Open)
        {
            Connection.Close();
        }
    }
    return results;
}
I have found in the past that using a Transaction in the recommended way causes the site to run out of connections overnight. As far as I know, that's a bug in linq-to-sql. There may be ways around it, but I'm trying to keep my main code straightforward. I now "just" have to do this...
var users = GetWithNolock<User>(
    Users.Where(u => /* my query */)
);
If you Open it, you should Close it. Other LinqToSql operations match this pattern.
In my code, I unconditionally open the connection and close the connection in a finally. If someone passes me an open connection, that's their fault and I happen to close it for them.
You could delay opening the connection until just before ExecuteReader.
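A minimal sketch of that variant, reusing the same hypothetical members (Connection, GetCommand, Translate) from the question's helper: open as late as possible and always close in a finally, assuming the caller leaves the connection closed.

public List<T> GetWithNolock<T>(IQueryable<T> query)
{
    using (var cmd = GetCommand(query))
    {
        // Same NOLOCK injection as above.
        cmd.CommandText = Regex.Replace(cmd.CommandText,
            @"((from|inner join) \[dbo.*as \[t\d+\])", "$1 with (nolock)",
            RegexOptions.IgnoreCase);
        try
        {
            // Open only just before the reader is executed.
            Connection.Open();
            return Translate<T>(cmd.ExecuteReader()).ToList();
        }
        finally
        {
            // This method opened the connection, so it always closes it.
            Connection.Close();
        }
    }
}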