We have an SSIS 2008 R2 process that includes a step in which a task launches an external application by executing a remote procedure call. The external application produces its output as a flat file, which SSIS is then supposed to pick up and process. When the external process finishes, the task that launched it completes successfully.
The executive summary of our problem is that we seem to always have to run the package twice before the step that processes the flat file succeeds. Does anyone have an idea why, and what we might try in order to resolve this?
Here are the gory details:
The SSIS package pauses while the external process is running, as it is supposed to do, and waits for the "all clear" from the external application before attempting to read the file that has been produced. (FWIW, the external application creates the file when it starts, then populates it over the course of its run.)
Our problem is that, both during development when the package is run from BIDS and during testing when it runs as a scheduled SQL Server job, the package would sometimes (but not always) fail and report that it was unable to open the text file. The failures are not consistent.
The file in question is written to a network share. We have verified that the share is accessible to the network account under which the job runs, as well as to the developers.
We have tried adding a script task that does the following (a rough sketch of this check appears after the list):
Verify that the file exists
If the file exists, try to open it as a stream for read/write access, in exclusive mode.
If the file is not available, wait for a specified time and try again. Keep trying until either the file is successfully opened (and then closed again) or the retry limit is reached.
Once the file has been successfully opened, close the stream and wait for a few more seconds in case there is a latency problem of some sort.
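For reference, here is a rough sketch of the kind of check described above, written as it might appear in an SSIS 2008 script task (VB.NET). This is not the poster's actual code; the path, retry limit, and delay values are placeholders chosen for illustration.

' Rough sketch of the availability check described above (not the poster's
' actual code). The UNC path, retry limit, and delays are placeholders.
Imports System.IO
Imports System.Threading

Public Class FlatFileAvailability

    ' Returns True once the file can be opened exclusively, False if the
    ' retry limit is exhausted without ever getting exclusive access.
    Public Shared Function WaitForExclusiveAccess(ByVal path As String, _
                                                  ByVal maxAttempts As Integer, _
                                                  ByVal retryDelayMs As Integer, _
                                                  ByVal settleDelayMs As Integer) As Boolean
        For attempt As Integer = 1 To maxAttempts
            If File.Exists(path) Then
                Try
                    ' Open read/write with no sharing: this fails while the
                    ' external application still has the file open.
                    Using fs As FileStream = File.Open(path, FileMode.Open, _
                                                       FileAccess.ReadWrite, FileShare.None)
                        ' Opened successfully; the Using block closes it again.
                    End Using
                    ' Extra pause in case of latency after the writer lets go.
                    Thread.Sleep(settleDelayMs)
                    Return True
                Catch ex As IOException
                    ' File exists but is still locked; fall through and retry.
                End Try
            End If
            Thread.Sleep(retryDelayMs)
        Next
        Return False
    End Function

End Class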
Although the script is designed to report a failure if it is never able to open the file, we have never seen that branch of the code actually execute (i.e. we are ALWAYS successful in an attempt to open the file).
We know that networks are busy places and that, microseconds after we close the file, something else could come along behind our backs and open it again, but there is absolutely no reason to expect this to be the case in our environment.
Finally, when the package is run from the SQL Server job on a schedule, it always fails. When we do nothing more than re-execute the job manually, it seems to "always" succeed. (It was not always so; before we increased the wait time that follows our successful attempt to open the file, even a manual re-run was not enough.)
The code that we use to test for whether the flat file could be opened came from a thread right here on StackOverflow. I'm happy to post it if anyone thinks our test might itself be contributing to the problem, but it's hard to understand how that could be since the package works sometimes.
Let's try the following...
Create a proxy account under SSIS Package Execution.
Then execute this step under that proxy (via the Run As drop-down when you edit the step).
Related
I have a simple package to produce a csv file from a query and update a 'lastrun' time on a table. This was written in 2014, running on a test server with 2014. The agent job that runs it simply executes it via an SSIS Package step. No other steps are involved.
However, I get the above error message in the agent log file. The job will successfully execute and produce a file, but ONLY after either restarting the agent service or changing the properties on the job (after refreshing the job list in SSMS). And because the job seemingly deletes itself during execution, there is no job history to view, and the schedule stops repeating.
I can't find anything like this on here, and wondered if anyone has ever seen this, or has any ideas?
Thanks.
Note (update): All other agent jobs run OK on the same server. The only difference with this one is that it's the only one that is calling an SSIS package.
It could be that when you restore the same database (a subscriber/distribution database) under another database name, the restore clears the job automatically.
I am trying to use Windows Task Scheduler to distribute Access reports to end users in my company.
All I am doing is triggering a macro that runs code from a module, which exports a report to a PDF, prints it out, then exits Access.
DoCmd.Quit acQuitSaveNone
That's what I've been using at the end of my VBA code to close Access after running a macro.
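For context, here is a rough VBA sketch of the kind of routine being described; the report name and output path below are placeholders, not the actual names in use.

' Rough sketch of the export / print / quit routine described above.
' "rptDaily" and the output path are placeholders, not the actual names in use.
Public Sub ExportPrintAndQuit()
    Const REPORT_NAME As String = "rptDaily"
    Const OUTPUT_PATH As String = "\\server\share\reports\Daily.pdf"   ' UNC path, not a mapped drive

    ' Export the report to PDF.
    DoCmd.OutputTo acOutputReport, REPORT_NAME, acFormatPDF, OUTPUT_PATH

    ' Open the report in normal view, which sends it to its printer.
    DoCmd.OpenReport REPORT_NAME, acViewNormal

    ' Let Access finish handing the job to the spooler before quitting.
    DoEvents

    ' Close Access without prompting to save anything.
    DoCmd.Quit acQuitSaveNone
End Sub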
When I run the macro manually it works fine, but when it runs as a scheduled task, it frequently gets held up and stuck.
I've checked event viewer to try and find any Microsoft Office alerts but do not see any.
It appears that Access is often unable to quit when run through Task Scheduler. Is there a VBS I could run, say, 5 minutes after the task to close down the .accdb file and MSACCESS.EXE, or is there something I can do to make these tasks actually work?
It seems that when it runs in the background through Task Scheduler, the code gets ahead of itself and tries to quit at the same instant that it is finishing the output to the printer, which makes Access seem "busy" and unable to actually close.
I have an entire domino chain of code that fires off after this, but it stops dead in its tracks when Access cannot finish closing.
Any suggestions?
Thanks,
Ian
Here is the gist of what I would do; I am only addressing the print-on-open and quit actions. You may need to tweak it to ensure that you have validation etc. in place.
First, make sure that your report has a default printer specified. (Report design-->Page Setup-->Page-->Use Specific Printer and then select your printer)
Create an AutoExec macro; it must be saved with the name AutoExec. A macro named AutoExec runs automatically whenever Access is opened. Note that once you have this macro, it will run every time you open Access; to prevent it from running, hold down the Shift key while opening Access, which bypasses AutoExec.
The first action in your AutoExec macro is OpenReport, with the View set to Print (the report is printed when it opens).
The next action in your AutoExec macro is QuitAccess, with Options set to Save All (a VBA equivalent of these two actions is sketched below).
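If you prefer code over macro actions, the same two steps can be wrapped in a VBA function and called from the AutoExec macro with a RunCode action; the report name here is a placeholder.

' VBA equivalent of the two macro actions above, callable from the AutoExec
' macro via a RunCode action. "rptDaily" is a placeholder report name.
Public Function AutoExecPrintAndQuit()
    DoCmd.OpenReport "rptDaily", acViewNormal   ' normal view sends the report to its printer
    DoCmd.Quit acQuitSaveAll                    ' quit Access, saving all objects
End Function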
When you run an application from the scheduler, it runs with different credentials. You want to make sure that you choose the appropriate options here.
Fixed it. Very glad as I've seen many with this issue and hope this gets to them.
If anyone has issues with Access macros and Task Scheduler and this doesn't work, I'll be happy to help as best I can; the frustration of automation that is meant to remove administrative tasks but doesn't work out is just terrible.
Since I needed Task Scheduler to wake the computer up, log me in, and open Access databases on network drives that reference SQL servers, there were a couple of things I had to make sure were set up.
First is trusted locations. Any network locations you are accessing should probably be in the trusted locations.
Secondly, the last command in my macro is DoCmd.RunCommand.Close (I believe; if I'm wrong I'll change it tomorrow when I'm back in the office).
Third, use the UNC path to the network folder and not a mapped drive, as drives may not be mapped when you're logged off. So instead of U:\file path... it would have to be \\computer1\filepath\ etc.
This kind of path should be used for all linked tables or databases on the network, especially if you have code referring to those files.
Fourth, I ended up doing it the less preferred way: a folder called "accessjobs" holding shortcuts to the Access macros that trigger the code, with Task Scheduler simply running that path and "Start in:" set to the folder the macro shortcut is in.
Fifth, I had to run the task with highest privileges and select "Run whether user is logged on or not".
A couple of these things may only work by coincidence, but I am not about to spend even more time on trial and error to see which settings are benign, as I spent far too long figuring this all out. But now it is solved and the sky is the limit!
Thanks for the help!
Ian
Hope it's the right place to ask this question - usually I use SO to ask about programming...
I'm doing a project that involves Crystal Reports Server. From code, I'm able to schedule reports successfully, but when I look at the BI launch pad I don't see the report in My Recently Run Documents (I do see failed reports in that list - ones that have wrong database credentials).
When I go to the Central Management Console, find my reports in their folders, and go to Properties > History, I see the report status as "Running" - and it has been like that for far longer than it should, for 2 different reports I have sent.
How can I diagnose what the problem is, and why it is stuck? There are no error messages anywhere about it.
How can I get a full history of all reports in the system (not just one single report at a time)? And how can I see currently running reports?
How can I stop a running report?
I really hope this is the right place for this kind of question... if not, I'd be very happy to get a referral.
Thanks
How can I get a full history of all reports in the system?
Open the CMC and then click on the Instance Manager. At the bottom of the page, you can filter on the object type and status. That way, you can get a full overview of all running reports on your platform.
How can I stop a running report?
If you select a running instance (either in a document's history page or in the Instance Manager), you'll notice that there is no stop button. Instead, you have to delete the running instance. It might not stop running immediately though (depending on what it's doing), but it will be removed immediately from the list of instances.
How can I diagnose what the problem is?
What I would recommend is to enable tracing on all related servers (i.e. your job server, processing server, etc.) and then retry scheduling the report. This should generate additional logging on the server, which you can use to diagnose the issue.
The trace files have the extension .glf (generic log file) and are located in the logging folder on your Crystal Server. Have a look at the command-line property of each of the servers for which you're enabling the tracing, you should find a log folder there somewhere.
Make sure to turn the tracing off again as soon as you're finished, as tracing will not only create extra strain on your servers (causing the system to slow down), but it will also result in very large log files.
Before starting with tracing, have a look at the existing log files to see if they don't already contain error messages that might help you diagnose the issue. Sort the log files by date and look at the most recent one for each of the servers involved. If there's nothing in there, start with tracing, but remove the existing .glf files first to minimise log contamination (some files will be locked; just ignore them).
I am trying to generate the database scripts (tables, triggers, views, procedures) in SQL Server 2008. All of a sudden the scripting wizard hangs at the final step: it says that scripting is completed, but the Close button never becomes enabled. If I stop it, some of the tables are missing from the output. Please advise.
Install an earlier SSMS version.
For me the bug appeared in 15.0.18358.0; after changing to 15.0.18338.0 the wizard started working.
If the wizard says "0 Remaining", this means it has determined all the objects that it needs to script, and is writing them out to your destination. If you are writing to a file, go to that file location in Windows Explorer, and keep refreshing the view. If the file keeps growing in size, this means everything is fine and the data is still being written. Be patient, and eventually the process will finish and the Close button will become enabled.
Morning
I've been reading "SQL Server 2008 Integration Services Problem - Design - Solution". It outlines a way of logging variable changes which I'm trying to replicate in SQL 2005.
Create variables, e.g. PackageId, RecordsAffected. Set RaiseChangedEvent to true.
Create a string variable, e.g. strVariableValue. Set RaiseChangedEvent to false.
On the package event handler OnVariableValueChanged, add a script task "SCR Convert value to string" (a sketch of the script body follows this list).
Add ReadOnlyVariables: System::VariableValue
Add ReadWriteVariables: User::strVariableValue
In the script, set a local variable to System::VariableValue.Value.ToString
Set the variable User::strVariableValue to the local variable
Add an "Execute SQL Task" component "SQL Log Variable Value Changed" calling a SP with no resultsets.
Set parameter mapping to User::PackageId, System::VariableName, User::strVariableValue
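A sketch of the script body (VB.NET, SSIS 2005 script task), assuming the variables are mapped exactly as listed above:

' Sketch of the "SCR Convert value to string" script body, assuming
' System::VariableValue is listed under ReadOnlyVariables and
' User::strVariableValue under ReadWriteVariables, as described above.
Public Sub Main()
    Dim localValue As String

    ' Convert whichever variable just changed to a string...
    localValue = Dts.Variables("System::VariableValue").Value.ToString()

    ' ...and store it where the Execute SQL task can pick it up for logging.
    Dts.Variables("User::strVariableValue").Value = localValue

    Dts.TaskResult = Dts.Results.Success
End Sub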
When this is run, I get a deadlock on User::PackageID
Error: 0xC001405B at SQL Log Variable Value Changed: A deadlock was detected while trying to lock variable "User::_PackageID" for read access. A lock could not be acquired after 16 attempts and timed out.
The script step succeeds but the Execute SQL task fails. I'm using Visual Studio 2005 Version 8.0.50727.42, Microsoft SQL Server Integration Services Designer Version 9.00.4035.00 and BIDSHelper Version 1.4.3.0.
Any ideas?
Eureka!
I had the same problem; it led me to a few dead-end posts, and then I discovered the root cause.
I had the framework working just fine and wanted to force some info to be logged.
So I changed the value of the framework variable "strVariableValue", and this caused the deadlock with the change-event task.
I fixed it by creating my own variable "strLogMe" and putting whatever I wanted to log in that instead.
Moral: don't touch the framework variables
Did you use the code sample from the book? All the files are available on the Wiley website for free. The code sample includes an SSIS package, SQL scripts, and VB code for the script. If this doesn't work for you, then let me know, since one of my team members found a way to log variable changes that is different from this methodology.
I was getting this error ("a deadlock was detected", etc.) suddenly, in a way that seemed to coincide with I.T. having applied a Microsoft Windows patch on the server. The packages used script tasks with read-only and/or read-write variables declared in the SSIS UI. It looked like an environmental issue (the packages had worked for months, then suddenly stopped working, even though I hadn't changed any code), and various older blog posts described companies whose SSIS packages broke after server patches; the advice there was to change the way you lock the variables: don't reference them in the UI, but lock them explicitly in code instead (that approach is sketched at the end of this answer). So I tried the same thing. It didn't fix it.
It turns out someone had removed the user under whose identity the packages run from the AD group; that membership was required because the package copies a file from a directory that requires read permission. These packages are typically called by a SQL agent job using a proxy identity. When the package was executed manually from SSMS, it worked, but when it was run by calling the SQL agent job, it failed.
The bottom line is that it was just coincidence that the packages started failing around the time of the Windows update. The other (main) point is: if your package is trying to access a file on the network, and the identity (or proxy identity) under which it runs does not have permissions on the source or target directory, your package can fail in this cryptic way - it looks like a variable deadlock issue, but it's actually a file-share permissions issue. I only wasted a day on this, but... maybe this will be useful to somebody in the future.
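For reference, the "lock them explicitly in code" workaround mentioned above looks roughly like this in a VB.NET script task. The variable name is a placeholder, and as noted, this did not help in this case because the real cause was permissions.

' Sketch of explicit variable locking via the VariableDispenser, instead of
' listing the variable under ReadOnlyVariables in the task UI.
' "User::SourcePath" is a placeholder variable name; assumes the script task
' template's usual Imports of Microsoft.SqlServer.Dts.Runtime.
Public Sub Main()
    Dim vars As Variables = Nothing
    Dim sourcePath As String

    Dts.VariableDispenser.LockOneForRead("User::SourcePath", vars)
    Try
        sourcePath = vars("User::SourcePath").Value.ToString()
        ' ... use sourcePath here ...
    Finally
        ' Always release the lock, even if reading the value throws.
        vars.Unlock()
    End Try

    Dts.TaskResult = Dts.Results.Success   ' SSIS 2005-style result; newer versions use ScriptResults.Success
End Sub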