SSIS package not deleting AS400 records

I have an SSIS package which downloads data from AS400 to SQL Server.
This is working fine but:
The pre-final task (Execute SQL task) is to delete the downloaded records from AS400.
The query is simple:
DELETE FROM (TABLE_NAME)
I am pretty sure that this task is being hit because a Send Email task after this is working.
The issue occurs on one server only, and I am unable to figure out why; the entire setup is the same on all servers.

Try to track down the joblog of the job servicing the request. If there are errors being passed along by SSIS, you'll see them there.
One issue I've seen, especially with people who still call the system AS/400, is that the tables aren't journaled, which means that when SSIS attempts to delete the rows under commitment control, the delete fails with an error.
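If journaling the table isn't an option, DB2 for i also lets you run the statement outside commitment control. A minimal sketch, assuming a placeholder library/table name and that your OLE DB/ODBC provider passes the isolation clause through:

-- run the delete with no commit (NC), so no journal is required
DELETE FROM MYLIB.TABLE_NAME WITH NC

-- alternatively, journal the physical file once on the IBM i side (CL command, not SQL):
-- STRJRNPF FILE(MYLIB/TABLE_NAME) JRN(MYLIB/MYJRN)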

Related

Sudden increase in packages failures executed against Sharepoint/Office 365 from SSIS

Beginning yesterday afternoon (12/9, Central US time) we began seeing a marked increase in SSIS package execution failures. These packages have been in operation for several months and experienced no failures on 12/8. Initially I brushed it off as temporary, but now it seems as if "none" of them are working. Several of these packages run hourly, with the first failure around 10:30 on 12/9. Between 10:30 and 15:00 'most' succeeded, but after 15:00 on 12/9 most failed.
I'm testing with a relatively simple data flow package. I have two sources (SQL and SharePoint). I compare the two and then update the SharePoint list with any changes that have been made (the SQL query is the authoritative record). The source SharePoint list is the same list that is being updated. As a further test, I removed all steps except querying the SharePoint list and sorting it. The initial query still fails.
Errors are happening inconsistently within the data flow package. For example, since I've been testing this morning, I had one run (and only one) that made it through the package to the point where it should have tried to add, update, or delete list items. The table comparison resulted in updates to the SharePoint list, and the package failed when attempting to update the records. Most runs (and all recent attempts) fail when the data flow initially queries the SharePoint list. There are only two records on the SharePoint list and two records in the SQL table.
I'm connecting to SharePoint using MS Graph. Testing the connection (Connection Manager) within VS 2019 succeeds every time. I've verified that the secret I'm using is not expired; I created a new secret and am receiving the same error. 'Usually' previewing the SharePoint source succeeds, but not always, and even when it does, attempting to debug and run the package fails. I'm not seeing any alerts on Microsoft or Azure that would indicate the problem is on their side, though I feel like something must have changed there.
I have opened a support ticket with CozyRoc and they have directed me to open a ticket with Microsoft. Microsoft's support request workflow is directing me here.
In the production All Executions report, the error I'm getting back is:
"Data Flow Task:Error: Attempt to read message string for 0xc02090f5 failed with error 0xc02090f2. Make sure all message related files are registered."
Initial research pointed me toward a data typing issue, but I've not changed anything in our SharePoint, SSIS, or SQL environment that would have changed the data types.
This appears to be very repeatable so I can try providing more information if needed.
Looks like the answer was to wait. After about a week, the issue resolved as quickly as it appeared. I didn't find a way to report my issue directly to Microsoft without adding a support plan. I was in the process of finding alternative methods to address our needs when it resolved 'by itself'.

SSIS package wrote 0 rows

Yes, I read the other questions on the same topic, but they do not cover my issue.
We run two environments, DEV and PROD. The two were synced last week, meaning they ought to contain the same data, run the same SSIS packages, and pull from the same source data.
However, today we had a package on PROD go through its usual steps (3 tables being truncated, and then loaded from OLE DB source to OLE DB destination, one after the other). The package finished without throwing an error, and the first 2 tables contain data, whereas the last one does not.
On DEV, everything looks fine.
I went through the package history, and it actually shows it wrote 0 rows. Yesterday, however, it worked as intended.
When I manually ran the package, it wrote data. When I click "Preview", it displays data. When I manually run the source query, it consistently returns data, the same amount of rows, every time. The SSIS catalog has not been updated (no changes were deployed to PROD between yesterday and today).
The source query does not use table variables, but it does use CTEs. I have seen suggestions to add SET NOCOUNT ON, and I'm willing to accept this could be an explanation. However, those answers seem to indicate the package never writes any data, whereas this package has worked successfully before, and works successfully on DEV.
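For reference, that suggestion amounts to prefixing the source query roughly like this (the CTE and table names here are placeholders, not our actual query):

SET NOCOUNT ON;   -- suppress the "n rows affected" messages that can confuse the OLE DB source

WITH src AS
(
    SELECT OrderId, Amount
    FROM dbo.SourceOrders
)
SELECT OrderId, Amount
FROM src;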
Does anyone have an explanation I can give my customer for why one package suddenly chose not to write any data, and how can I ensure this won't happen again, to either this package or any of the other packages?
This can be tricky. Try the following:
1. Under Integration Services Catalogs -> SSISDB -> project -> (right-click) Reports -> Standard Reports -> All Executions, check whether at any point the ETL job lost contact with the warehouse.
2. If you have logging enabled, try to see at which task_name your package started returning 0 rows:
select
data_stats_id,
execution_id,
package_name,
task_name,
source_component_name,
destination_component_name,
rows_sent
from
ssisdb.catalog.execution_data_statistics
How are you handling transactions and checkpoints? This is important if you want to know the root cause of this issue. It may be that a loss of connectivity forced a rollback of any writes to the warehouse.
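If you want to check point 1 from T-SQL instead of the report, something along these lines should work against the catalog views (status 7 = succeeded, message_type 120 = error):

-- recent executions that did not succeed
select execution_id, package_name, start_time, end_time, status
from ssisdb.catalog.executions
where status <> 7
order by start_time desc;

-- error messages logged for those executions
select operation_id, message_time, message_source_name, message
from ssisdb.catalog.event_messages
where message_type = 120
order by message_time desc;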
As it turns out, the issue was caused by an oversight.
Because we run DEV and PROD on the same server (we know, and have recommended that the customer at the very least consider separate instances), we use variables to point at the proper environment (set in the environment variables).
The query feeding this particular package was updated, and apparently, rather than using the variable to switch databases, the database was hard-coded (likely as a result of testing, and then forgetting to switch back to the variable). The loads for DEV and PROD run at the same time, and we suspect that while PROD was ready, DEV was still processing the source tables, and thus 0 rows were returned.
We only found this out today because the load ran fine again right up until this morning. I was too late to catch it using Profiler, but because it was only this package, I checked and spotted the hard-coded reference to _DEV.
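To make the mistake concrete, the source query had picked up a hard-coded three-part name along these lines (names are made up), instead of leaving the database to the connection manager that the environment variable points at:

-- what we found in the package (hard-coded DEV database):
select OrderId, Amount
from StagingDB_DEV.dbo.SourceOrders;

-- what was intended: no database prefix, so the environment-driven
-- connection manager decides whether DEV or PROD is read
select OrderId, Amount
from dbo.SourceOrders;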
Thanks everyone for chiming in.

'[180] Job was deleted while it was executing: the outcome was (Unknown)'

I have a simple package to produce a csv file from a query and update a 'lastrun' time on a table. This was written in 2014, running on a test server with 2014. The agent job that runs it simply executes it via an SSIS Package step. No other steps are involved.
However, I get the above error message in the agent log file. The job will successfully execute and produce a file, but ONLY after either restarting the agent service or changing the properties on the job (after refreshing the job list in SSMS). And because it seemingly deletes itself during execution, there is no job history to view, and then the schedule will stop repeating.
I can't find anything like this on here, and wondered if anyone has ever seen this, or has any ideas?
Thanks.
Note (update): All other agent jobs run OK on the same server. The only difference with this one is that it's the only one calling an SSIS package.
It could be that when you restore the same database (a subscriber/distribution database) under another database name, the job gets cleared automatically.

Cannot deploy SSIS via Integration Services to SQL Server 2012: Current transaction cannot be committed. Error: 3930

I have an SSIS project that has been working for over a year, and in just the last couple of days I made a bunch of changes. When I try to deploy, it returns an error saying:
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
It doesn't tell me what or which package is causing this issue.
Is there any way to troubleshoot this?
I appreciate any recommendation!!
In my case, the reason for the error was a trigger on the SSISDB database for DDL_DATABASE_LEVEL_EVENTS, which tried to write info to another database.
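If you suspect the same cause, you can list the database-level DDL triggers on SSISDB and disable the offending one; roughly like this (the trigger name is illustrative):

-- list database-scoped triggers on SSISDB
select name, is_disabled
from SSISDB.sys.triggers
where parent_class_desc = 'DATABASE';

-- disable the one that fires on DDL_DATABASE_LEVEL_EVENTS (run in the SSISDB database)
use SSISDB;
DISABLE TRIGGER [tr_my_ddl_audit] ON DATABASE;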
I just solved it: the partition was running out of free space. I cleaned up the log, and it's back to normal!
I changed the name of my Solution and Project but wanted to deploy to the same location in the SSISDB Catalog. I was getting this specific error because of the name change (it was failing on deployment step when it called the internal.preparedeploy procedure). I ended up deleting the Project that existed on the Target Server and re-deploying the new Project to the same location. It deployed successfully.
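If you hit the same thing, the old project can be dropped from the catalog with catalog.delete_project before redeploying from SSDT or the deployment wizard (folder and project names below are placeholders):

EXEC SSISDB.catalog.delete_project
     @folder_name  = N'MyFolder',
     @project_name = N'OldProjectName';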

Deadlock on logging variable value changes using a SQL task

Morning
I've been reading "SQL Server 2008 Integration Services Problem - Design - Solution". It outlines a way of logging variable changes which I'm trying to replicate in SQL 2005.
Create variables e.g. PackageId, RecordsAffected. - Set Raise ChangeEvent to true.
Create a string variable, e.g. strVariableValue. - Set Raise ChangeEvent to false.
On the package event handler: OnVariableValueChanged add a script task "SCR Convert value to string".
Add ReadOnlyVariables: System::VariableValue
Add ReadWriteVariables: User::strVariableValue
In the script, set a local variable to System::VariableValue.value.tostring
Set the variable User::strVariableValue to the local variable
Add an "Execute SQL Task" component "SQL Log Variable Value Changed" calling a SP with no resultsets.
Set parameter mapping to User::PackageId, System::VariableName, User::strVariableValue
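The stored procedure behind that last step is trivial; it looks something like this (the procedure and table names here are just placeholders for the framework's own):

create procedure dbo.LogVariableValueChanged
    @PackageId     uniqueidentifier,
    @VariableName  nvarchar(255),
    @VariableValue nvarchar(4000)
as
begin
    set nocount on;  -- the Execute SQL Task expects no result set

    insert into dbo.VariableValueLog (PackageId, VariableName, VariableValue, LoggedDate)
    values (@PackageId, @VariableName, @VariableValue, getdate());
end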
When this is run, I get a deadlock on User::PackageID
Error: 0xC001405B at SQL Log Variable Value Changed: A deadlock was detected while trying to lock variable "User::_PackageID" for read access. A lock could not be acquired after 16 attempts and timed out.
The script step succeeds but the Execute SQL task fails. I'm using Visual Studio 2005 Version 8.0.50727.42, Microsoft SQL Server Integration Services Designer Version 9.00.4035.00 and BIDSHelper Version 1.4.3.0.
Any ideas?
Eureka!
I had the same problem; it led me to a few dead-end posts before I discovered the root cause.
I had the framework working just fine and wanted to force some info to be logged.
So I changed the value of the framework variable "strVariableValue" and this caused the deadlock with the change event task.
I fixed it by creating my own variable "strLogMe" and putting whatever I wanted to log in that.
Moral: don't touch the framework variables
Did you use the code sample from the book? All the files are available on the Wiley website for free. The code sample includes a SSIS package, sql scripts, and VB code for the script. If this doesn't work for you, then let me know since one of my team members found a way to log variable changes that was different from this methodology.
I was getting this error ("a deadlock was detected", etc.) suddenly, which seemed to coincide with I.T. having applied a Windows patch on the server. There were packages using script tasks, with read-only and/or read-write variables declared in the SSIS UI. Even though it looked like an environmental issue (the packages had worked for months, then suddenly stopped working, even though I hadn't changed any code), I remembered blog posts from years gone by about companies applying server patches and then having their SSIS packages break; the blogs seemed to say: change the way you're locking the variables, don't reference them in the UI, instead lock them explicitly in code. So I tried the same thing. It didn't fix it.
It turns out someone had removed the user under whose identity the packages run from the AD group; that membership was required because the package was trying to copy a file from a directory that required read permissions. These packages are typically called by a SQL Agent job using a proxy identity. When the package was executed manually from SSMS, it worked, but when it was run via the SQL Agent job, it failed.
The bottom line is, it was just coincidence that the packages started failing around the time of the Windows update. But the other (main) point is, if your package is trying to access a file on the network, and the identity (or proxy identity) under which that package runs does not have permissions to the source or target directory, then your package could fail and the problem could manifest itself in this cryptic way, where it looks like a variable deadlock issue, but it's actually a file share permissions issue. I only wasted a day on this, but... maybe this will be useful to somebody in the future.