Yes, I read the other questions on the same topic, but they do not cover my issue.
We run two environments: DEV and PROD. The two were synced last week, meaning they should contain the same data, run the same SSIS packages, and read from the same source data.
However, today a package on PROD went through its usual steps (3 tables truncated, then loaded from an OLE DB source to an OLE DB destination, one after the other). The package finished without throwing an error, and the first 2 tables contain data, whereas the last one does not.
On DEV, everything looks fine.
I went through the package history, and it shows the package wrote 0 rows today, whereas yesterday it worked as intended.
When I ran the package manually, it wrote data. When I click "Preview", it displays data. When I run the source query manually, it consistently returns data, the same number of rows every time. The SSIS catalog has not been updated (no changes were deployed to PROD between yesterday and today).
The source query does not use table variables, but it does use CTEs. I have seen suggestions to add SET NOCOUNT ON, and I am willing to accept that this could be an explanation. However, those answers seem to describe packages that never write any data, whereas this package has worked successfully before, and works successfully on DEV.
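For reference, the suggestion amounts to putting the option at the top of the source query; a minimal sketch, with hypothetical table and column names:

set nocount on;  -- suppresses the "N rows affected" messages that can confuse some OLE DB clients

with LatestOrders as (
    select CustomerId, max(OrderDate) as LastOrderDate
    from dbo.Orders
    group by CustomerId
)
select c.CustomerId, c.CustomerName, lo.LastOrderDate
from dbo.Customers as c
join LatestOrders as lo on lo.CustomerId = c.CustomerId;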
Does anyone have an explanation I can give my customer for why one package suddenly chose not to write any data, and for how I can ensure this won't happen again, to either this package or any of the other packages?
This can be tricky. Try the following:
1. Under Integration Services Catalogs -> SSISDB -> project -> (right-click) Reports -> Standard Reports -> All Executions, check whether the ETL job lost contact with the warehouse at any point. A query sketch for this is shown after this list.
2. If you have logging enabled, check at which task_name your package started returning 0 rows:
select
    data_stats_id,
    execution_id,
    package_name,
    task_name,
    source_component_name,
    destination_component_name,
    rows_sent  -- 0 here pinpoints the component that wrote nothing
from ssisdb.catalog.execution_data_statistics
order by data_stats_id desc;  -- newest rows first
3. How are you handling transactions and checkpoints? This is important if you want to find the root cause of this issue. It may be that a loss of connectivity forced a rollback of any writes in the warehouse.
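Here is the sketch mentioned in point 1: instead of the report UI, you can query the catalog directly to confirm whether today's execution really reported success (the package name below is a placeholder):

-- status codes: 1 = created, 2 = running, 3 = canceled, 4 = failed,
-- 5 = pending, 6 = ended unexpectedly, 7 = succeeded, 9 = completed
select
    execution_id,
    folder_name,
    project_name,
    package_name,
    status,
    start_time,
    end_time
from ssisdb.catalog.executions
where package_name = N'YourPackage.dtsx'  -- placeholder name
order by start_time desc;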
As it turns out, the issue was caused by an oversight.
Because we run DEV and PROD on the same server (we know; we have recommended that the customer at the very least consider using separate instances), we use variables (set in the environment variables) to point packages at the proper environment.
The query feeding this particular package was updated, and apparently, rather than using the variable to switch databases, the database name was hard-coded (likely as a result of testing, and then forgetting to switch back to the variable). The DEV and PROD loads run at the same time, and we suspect that while PROD was ready, DEV was still processing the source tables, and thus 0 rows were returned.
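To illustrate (object names here are hypothetical), the source query effectively contained something like this instead of taking the database name from the variable:

-- hypothetical illustration; real object names differ
-- intended: database name supplied via the SSIS variable / environment variable
-- actual:   the DEV database was hard-coded, left over from testing
select s.BusinessKey, s.Amount
from Warehouse_DEV.dbo.SourceTable as s;  -- the _DEV suffix should not have been there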
We only found this out today because the load ran fine again right up until this morning. I was too late to catch it using Profiler, but because it was only this package, I checked the query and spotted the hardcoded reference to _DEV.
Thanks everyone for chiming in.
We are using TFS 2017 Update 2 on premises for CI and CD. In my release definition I have multiple "agent phases". Is there any possibility to skip an entire "agent phase" based on some condition?
An agent phase is a way of defining a sequence of tasks that will run on one or more agents. At run time, one or more jobs are created to be run on agents that match the demands specified in the phase properties.
Unlike a build task, you cannot simply disable or skip a phase by right-clicking it and selecting "Disable selected task(s)". You need to configure the "Run this phase" property on an agent phase so that it runs only when specific conditions are met.
For "custom" you need to enter an expression that evaluates to true or false and controls when this phase should run. This is for the single agent phase. It's not able to skip the entire "agent phase" on some conditions.
No, that capability doesn't exist.
I have developed an SSIS package to run 3 Reporting Services reports that are data-driven subscriptions.
When I run the SSIS job, it executes all 3 reports at once. What I need is to run the reports sequentially, in other words, one by one. How can I do this?
This is expected behavior. When you trigger a data-driven subscription job, the SQL Server Agent starts the job, and starting it is all the triggering step does; it does not wait for the subscription to finish. The SSIS package then goes on to trigger the next data-driven subscription job and the next (assuming you have put the job triggering in sequence).
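You can see why the behavior is asynchronous if the triggering step looks something like the following sketch (subscription jobs are Agent jobs named after a GUID; the name below is a placeholder):

-- sp_start_job returns as soon as the job has *started*, not when it has
-- finished, so three of these in a row effectively run concurrently
exec msdb.dbo.sp_start_job
    @job_name = N'00000000-0000-0000-0000-000000000000';  -- placeholder job name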
Now, if you want to create a dependency in the way the jobs run, i.e. Job1 followed by Job2 followed by Job3, you need to write an additional piece of code yourself. The way to go about it would be to monitor the status of each subscription.
In the ReportServer database there is a table called dbo.Subscriptions containing a column 'LastStatus'. Currently I don't have any subscriptions in my local database, and I am not able to find any documentation for the table, but I am pretty sure this is either a boolean or a status flag such as 'Success' or 'Failure'. Upon triggering the first job, you would need to write .NET code to monitor this status at a polling interval. Once you get the desired outcome, move on to triggering the next job.
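As a sketch of the polling idea (assuming LastStatus holds a readable status string; check the actual values on your server):

-- poll the subscription's status after triggering its job;
-- the @SubscriptionID value is a placeholder, look it up once in dbo.Subscriptions
declare @SubscriptionID uniqueidentifier = '00000000-0000-0000-0000-000000000000';

select s.SubscriptionID,
       s.Description,
       s.LastStatus,   -- the status text to check before triggering the next job
       s.LastRunTime
from ReportServer.dbo.Subscriptions as s
where s.SubscriptionID = @SubscriptionID;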
Hope this is clear. I will edit this answer with a working example.
I have a simple package that produces a CSV file from a query and updates a 'lastrun' time on a table. It was written in 2014, running on a test server with 2014. The Agent job that runs it simply executes it via an SSIS Package step. No other steps are involved.
However, I get the above error message in the Agent log file. The job will execute successfully and produce a file, but ONLY after either restarting the Agent service or changing the properties on the job (after refreshing the job list in SSMS). And because the job seemingly deletes itself during execution, there is no job history to view, and the schedule then stops repeating.
I can't find anything like this on here, and wondered if anyone has ever seen this, or has any ideas?
Thanks.
Note (update): All other Agent jobs run OK on the same server. The only difference with this one is that it is the only one calling an SSIS package.
It could be that when you restore the same database (subscriber/distribution) as another database name, it clears the job automatically.
There is one job that checks sources out from SVN and builds them.
After the build, this job sends a notification about the build to the committers.
Then this job triggers a second job using the "Trigger parameterized build on other projects" plugin.
The second job does not check anything out from SVN; it just runs some tools using the classes compiled by the first job.
I need to send a notification to the committers if the second job fails.
Is it possible to pass the committers from the first job to the second?
I use the Blame Upstream Committers plugin. Currently it works properly only when the Parameterized Build plugin is invoked as a post-build action, not as a build step (due to a bug in the Parameterized Build plugin that is claimed to have been fixed, but not yet released).