SSIS package does nothing when invoked by agent

The SSIS package loops through input files. For each file, a flat-file parse adds records to a DB table, then the file is renamed/moved for archiving. After all files are processed, the package calls a sproc to delete all year-old records.
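For what it's worth, the cleanup sproc is nothing exotic; it boils down to roughly this (the procedure, table, and column names here are illustrative, not the real ones):

CREATE PROCEDURE dbo.PurgeOldRecords
AS
BEGIN
    -- remove everything loaded more than a year ago
    DELETE FROM dbo.ImportedRecords
    WHERE LoadDate < DATEADD(year, -1, GETDATE());
END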
The package runs fine from Visual Studio. Put it in the SSIS package store and run it from there: no problem.
Create a SQL Agent job to run the package. The job does something for about five minutes and announces it was successful, but there are no new records in the DB and no renaming of the input files.
The package uses a dedicated login for its SQL Server privileges. The job runs as HOSTNAME-SVC, which has read/write privileges on the input directory and the archive directory.

Have you set up logging for the package? You could add a script task to the For-Each Loop Container that runs a Dts.Events.FireInformation command during each loop. This could help you track the file name it finds, the number of loops it does, how long each loop takes, etc. You could also add a logging step at the end so that you know it is at least exiting the For-Each Loop container successfully.
If you find that the package is running successfully but not looping through any files at all, then you may want to test using a simpler package that reads one file only and loads it into a staging table. If that works, then go to the next step of looping over all the files in the directory and only importing the one file over and over again. If that works, then go to the next step of changing the file connection to match the file that it finds in the For-Each Loop Container file enumerator task.
If the package isn't looping over any files and you can't get it to see even the one file you tested loading from the job, then try creating a proxy account with your credentials and running the job as the proxy account. If that works, then you probably have a permissions issue with your service account.
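If it helps, the proxy setup is scriptable too; a rough sketch only, where the credential name, Windows account, and proxy name are placeholders you would replace with your own:

-- create a credential holding the Windows account you want the step to run as
USE master;
CREATE CREDENTIAL SsisTestCredential
    WITH IDENTITY = N'DOMAIN\YourWindowsAccount', SECRET = N'YourWindowsPassword';
GO

-- create the proxy and let it run SSIS job steps
USE msdb;
EXEC dbo.sp_add_proxy
    @proxy_name = N'SsisTestProxy',
    @credential_name = N'SsisTestCredential',
    @enabled = 1;
EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SsisTestProxy',
    @subsystem_name = N'SSIS';
-- (if the job owner is not a sysadmin, also grant that login use of the proxy with sp_grant_login_to_proxy)

Then pick the proxy in the "Run as" drop-down on the SSIS step of the job.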
If the package doesn't import anything even with the proxy account, then you may want to log into the server as the service account and try to run the SSIS package in BIDS. If that works, then you may want to deploy it to the server and run the package from the server (which will really use your machine to run it, but at least it uses the SSIS definition from the server). If this works, then try running the package from the agent.

I'm not sure I fully understand. The package has already been thoroughly tested under several Windows accounts, and it does find all the files and rename all the files.
Under the Agent, it does absolutely nothing visible, but takes five minutes to do it. NO permissions errors or any other errors. I didn't mention that an earlier attempt DID get permissions errors, because we had failed to give the service account access to the input and output directories.
I cannot log in as the service account to try that because I do not have a password for it. But sa is the job owner, so it should be able to switch to the service account, and the access errors we got ten days ago show that it can. The package itself has not changed in those ten days. We just deleted the job in order to do a complete "dress rehearsal" of the deployment procedure.
So what has changed, I presume, is some detail in the deployment procedure, which unfortunately was not in source control at the time it succeeded.

It seems to be something different about the permissions. We made the problem go away by allowing "everyone" to read the directory on the production server. For some unknown reason, we did not have to do that on the test server.
When the job tried to fetch the file list, instead of getting an error (which would be logged) it got an empty list. Why looping through an empty list took five minutes is still a mystery, as is the lack of permissions. But at least what happened has been identified.

I had a similar problem. I was able to figure out what was happening by setting the logging option of the SQL Server Agent job.
Edit the step in the job that runs the package, go to the Logging tab and pick "SSIS log provider for SQL Server"; for the configuration string I picked (using the drop-down) the OLE DB connection that was already in the package, which happens to connect to the SQL Server in question.
I was then able to view more details in the history of that job, and confirmed that it was not finding files. By changing permissions on the directory to match the SQL Server Agent account, the package finally executed properly.
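If you'd rather query the log than click through the job history, the SQL Server log provider writes its rows into the database that connection points at; assuming the default table name (dbo.sysssislog on 2008 and later, dbo.sysdtslog90 on 2005), something like this shows what the loop actually saw:

-- most recent package events first; adjust the table name if yours differs
SELECT TOP (200) starttime, event, source, message
FROM dbo.sysssislog
ORDER BY starttime DESC;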
Hope this helps.
You may want to turn logging off after you resolve your issue, depending on how often your package will run and how much information logging provides in your case.
Regards,
Bertin

Related

SQL Server Job Using WinZip Command Line

I have a SQL Server 2008 Job that backs up a database, then zips that backup and moves the zipped file. My job runs fine until it gets to the step that calls WinZip, which executes:
c:\program files (x86)\winzip v19.5\winzip32.exe
-m \\RemoteShare\RestrictedFolder\dbBack.zip
x:\SQLInstanceFolder\BackupFolders\dbBack.bak
The job neither completes nor fails; it just stops moving forward. It will generate the dbBack.bak file and create the dbBack.zip file in the remote location, but it won't proceed past there. It seems to be behaving like it is waiting on a pop-up confirmation, but I don't see one when I log in to the console or run the zip from the command line.
I've tried adding the -ybc flag to automatically confirm or skip any prompts, but it didn't seem to do anything; the process still didn't complete. I've even tried to redirect (>) the output of the process, but it won't even write my log file.
This is a secured system and infrastructure, but I'm fairly certain I'm not being blocked by permissions. The SQL Server service account that runs the job has access to the folders it needs, and it can run the winzip32.exe process. This process ran fine until we had to upgrade WinZip this past weekend (to 19.5), and that's when it stopped working properly. We aren't able to roll back to the previous version (10).
Does anyone have any idea on what could be stopping my process or how to make it proceed?
I think I discovered the problem. It turns out we are using the GUI version of WinZip and calling the executable from the command line. Even though we can't see the GUI, it's still there. So the prompt to confirm our compression is still there in the program's workflow; we just can't see it and thus can't confirm it. And the confirm flags don't work with the GUI version.
My workaround involved logging in to my SQL Server as our service account and running a WinZip operation. When it completed and gave me the Add Complete prompt, I checked "Do not display this dialog in the future" and clicked OK. This suppresses the prompt when the service account runs its job.
If someone changes the service account, we'll have to do this again, so our ultimate solution will be to install the WinZip Command Line Plugin. Hopefully, once that's done, we won't have to worry about this.
But it works now. :-)

SSIS - Recreating job caused it to lose access to configuration file

So, a little history on this issue. I had to deploy something into prod that had code from preprod, so I commented out the new line, but missed a character, which caused the job to fail that night. The next night I fixed the SQL in the SSIS job: same error. No matter how many times I deployed, same error.
So one of my coworkers decided to go into the Integration Services catalog, delete the old catalog, and redeploy everything.
The next night, I got a new error saying the environment variables are not set. So I configured the job, pointing it to what I assumed was the correct environment variable. Same error. Tonight I realized that the environment variable it was calling had been renumbered. So I renamed the job and recreated it manually, pointing the environment reference to the new environment variables.
Now, when my job tries to connect, it tells me that that username fails to log in.
I'm guessing that the issue is that when the config file was recreated, it was recreated minus the password. I'm trying to find out how to check and how to deploy my packages properly.
So, we finally managed to fix the issue.
It turns out that by dropping the project and redeploying, we lost the environment variables.
I had to configure the project to use the environment variable. I then had to go to each job step and, under Configuration, also tick the checkbox to use environment variables.
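For anyone who prefers scripting this instead of clicking through SSMS, the same wiring can be done against the SSISDB catalog. This is a rough sketch only; the folder, project, environment, and variable names are placeholders:

DECLARE @ref_id bigint;

-- create the environment and a sensitive variable to hold the password
EXEC SSISDB.catalog.create_environment
     @folder_name = N'MyFolder', @environment_name = N'Prod';
EXEC SSISDB.catalog.create_environment_variable
     @folder_name = N'MyFolder', @environment_name = N'Prod',
     @variable_name = N'ConnPassword', @data_type = N'String',
     @sensitive = 1, @value = N'the-real-password', @description = N'';

-- tie the project to the environment so the job step's environment checkbox has something to reference
EXEC SSISDB.catalog.create_environment_reference
     @folder_name = N'MyFolder', @project_name = N'MyProject',
     @environment_name = N'Prod', @reference_type = 'R',
     @reference_id = @ref_id OUTPUT;

You still have to map the project's parameter or connection property to that variable (Configure... in SSMS, or catalog.set_object_parameter_value with @value_type = 'R') and tick the environment checkbox on each job step, which is what finally fixed it here.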

SSIS package could not open global shared memory to communicate with performance DLL

I am working on a .dtsx file that reads from a database and outputs a flat file. While testing the package, using SQL's Execute Package Utility, I got this warning:
Warning: Could not open global shared memory to communicate with performance DLL;
data flow performance counters are not available.
To resolve, run this package as an administrator, or on the system's console.
In my research I got mixed messages as to how to deal with this. One person said it is an issue with data types between the source and the data conversion. Another said it is merely a warning that can be ignored as long as you don't need performance counters (which I don't believe I do). I also found that it is an issue on computers running XP with no SQL Server service packs, but I am on Windows 7.
Should I be concerned with this warning?
If you want to prevent this warning from occurring, you can add the user account used to execute the package (e.g. your account and/or the SQL Server Agent account) into the local group "Performance Monitor Users".
If this change is made for any services e.g. SQL Server Agent, the service will need to be restarted for the change to take effect.
My understanding is that it is UAC not allowing VS/BIDS access to the performance counters. For day-to-day package execution, you are fine. It does not impact the ability of SSIS to run, nor does it alter the outcome of data transformations.
The @Nathan fix didn't work for me.
What sorted it was running Visual Studio as administrator, even though my account is in the local Administrators group.
The local Administrators group allows you to function with administrator rights when separately requested, such as with "run as administrator". It does not mean that you run everything with administrator rights all the time.
In my example, I had an OLE DB Command object but I didn't have an object preceding it. It was doing a single insert statement, so I added a source with just "SELECT 1 AS NeededColumnInput" and then connected it to the OLE DB Command object. Then mine worked.

Inserting with MySQL while in a scheduled task?

I am running MySQL on Windows 7. I use a scheduled task to insert a record into a table after an action has occurred. However, when the scheduled task runs, nothing is inserted. I have redirected output from the "mysql" line into a log file, but the log is always empty. Running the batch file manually does cause the record to be inserted successfully. The scheduled task runs under the same user account and privileges as when I run it manually.
Has anyone seen this behavior before?
Never mind. Apparently despite being run as my account, "taskeng" doesn't know where "mysql" is. Writing the full path to the mysql executable solved it.

Magento Module install SQL not running

I have written a module that is refusing point blank to create the tables within my mysql4-install-1.0.0.php file... but only on the live server.
The funny thing is that on my local machine (which is a mirror of the live server, i.e. identical file structure etc.) the install runs correctly and the table is created.
So, based on the fact that the files are the same, can I assume that it is a server configuration and/or permissions problem? I have looked everywhere and I can find no problems in any of the log files (PHP, MySQL, Apache, Magento).
I can create tables ok in test scripts (using core_read/write).
Anyone see this before?
Thanks
EDIT: One main difference between the two environments is that on the live server MySQL is remote (not localhost); the dev server uses localhost. Could that cause issues?
Is the module which your install script is a part of installed on the live server? (XML file in app/etc/modules/, Module List Module for debugging.)
Is there already a record in the core_resource table for your module? If so, remove it so your script will re-run (a SQL sketch follows after these checks).
Is your file named correctly? The _modifyResourceDb method in app/code/core/Mage/Core/Model/Resource/Setup.php is where this file is included/run from. Read more here
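To check the core_resource point above, something like this will do (the module code is a placeholder; Magento records it under the setup resource name from your config.xml, typically yourmodule_setup):

-- has the installer already been recorded as run?
SELECT * FROM core_resource WHERE code = 'yourmodule_setup';

-- remove the row so the install script runs again on the next page load
DELETE FROM core_resource WHERE code = 'yourmodule_setup';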
Probably a permissions issue - a MySQL account used by public-facing code should have as few permissions as possible that still let it get the job done, which generally does NOT allow for creating/altering/dropping tables.
Take whatever username you're connecting to mysql with, and do:
SELECT User, Host
FROM mysql.user
WHERE User='your username here';
This will show you the user@host combos available for that particular username. Then you can get the actual permissions with
SHOW GRANTS FOR 'username'@'host';
Do this for the two accounts on the live and development servers, which will show you what permissions are missing from the live system.
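If that comparison shows the live account is missing DDL rights, you could grant them for the duration of the install (the database and account names below are placeholders):

-- allow the Magento connection to create/alter tables while the installer runs
GRANT CREATE, ALTER, DROP, INDEX ON magento_db.* TO 'magento_user'@'%';

and revoke them again once the installer has finished.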
In the Admin -> System -> Advanced section, is your module present and enabled?
Did you actually unpack your module to the right place, e.g. app/code/local/yourcompany/yourmodule?
Do you have app/etc/modules/yourmodule.xml - I believe that this could be the overlooked file giving rise to your problem.
The cache could be the culprit: if you manually deleted the core_resource row for your module in order to make the setup SQL run again, you also have to flush the cache.
A probable difference between the dev and production servers is the cache settings; that would explain why you only see this in production.
For me, the issue appeared because I was using Windows for development: the Linux system is case sensitive. In my config.xml the setup section was named in camelCase while the folder was named in all lowercase. Making them the same made the script run.