I am trying to run a Windows Scheduler task using schtasks.exe from an SSIS job. The task will be called using the same ID that is used to execute the task. The task itself works; it is currently started manually after the SSIS job is done.
I've tried running a script on my own PC with a space in the name, and this worked. All other settings are the same. The name of this task is "DWH Upload DEV" and its location is set to "\".
I'm calling the script from an Execute Process Task block, using schtasks.exe as the executable (with the full file path of course) and /RUN /TN "DWH Upload DEV" as the arguments.
I have RequireFullFileName set to False, FailTaskIfReturnCodeIsNotSuccessValue set to True, and SuccessValue and TimeOut set to 0.
I am receiving the following error: [Execute Process Task] Error: In Executing "[location]\schtasks.exe" "/RUN /TN "DWH Upload DEV"" at "", The process exit code was "1" while the expected was "0".
The part that confuses me here is the at "" portion: the error seems to point at nothing.
Thanks in advance for any help!
EDIT:
I can't run the program that the scheduled task performs from SSIS itself, because the program needs to be run by a specific user. The use of Task Scheduler is a workaround for this.
As it turned out, the task DID trigger, but it didn't actually work properly (I later heard the task had stopped working about an hour before I started working with it). The task can now be run properly with the following settings:
Executable: C:\Windows\System32\schtasks.exe
Arguments: /RUN /TN "DWHUploadDEV" (the new version has no spaces in the name)
The rest of the settings are as before.
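For reference, the settings above amount to running the following from a command prompt; checking %ERRORLEVEL% afterwards is a rough way to confirm the 0 exit code the Execute Process Task expects:
C:\Windows\System32\schtasks.exe /RUN /TN "DWHUploadDEV"
echo %ERRORLEVEL%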
Related: the scenario is very similar to this one: https://github.com/DevExpress/testcafe/issues/6804
Scenario:
I'm running a set of tests using TestCafe, from a folder, with concurrency 4 (a rough sketch of the invocation follows the list):
UI clicks
UI navigation
File upload scenario
Assertions
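For context, the invocation is roughly the following; the browser alias and folder name here are assumptions rather than the exact command from the job:
npx testcafe chromium:headless tests/ -c 4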
The job runs in GitHub Actions, with self-hosted runners (a Docker container with no resource limits set) in AWS, using a properly sized m5.2xlarge instance.
All goes well and the tests pass, but the job randomly fails with an elusive error:
2022-06-11T08:10:53.9599698Z 30 passed (4m 34s)
2022-06-11T08:10:54.2336015Z Error:
2022-06-11T08:10:56.2617307Z ##[error]Process completed with exit code 1.
testcafe version 1.17.1
node version 14.17.18
I've tried changing/resizing the EC2 runner, but it didn't help. There are no errors on the infrastructure side, just the exit code 1 with no other details.
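One hedged way to narrow down where the 1 comes from is to capture TestCafe's exit code explicitly in the runner step, so nothing downstream can mask it; the browser alias and folder are the same assumptions as above:
set +e
npx testcafe chromium:headless tests/ -c 4
code=$?
echo "testcafe exit code: $code"
exit $code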
Is anyone else having this issue, or does anyone have advice on troubleshooting it?
A Jenkins pipeline is building Docker images; the OpenShift plugin(s) are used for this.
An example command:
openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
While this works smoothly most of the time, whenever this command fails due to some underlying platform issue, almost no information appears in the Jenkins build job console:
[Pipeline] }
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] ............................................................
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Uploading finished
[start-build:buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd] Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
[Pipeline] }
ERROR: Error running start-build on at least one item: [buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd];
{err=, verb=start-build, cmd=oc --server=https://api.scp-west-zone02-z01.net:6443 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --namespace=sb-1166-amld5-car-service-se --token=XXXXX start-build buildconfig/amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd --from-dir=./build/libs --wait --follow -o=name , out=Uploading directory "build/libs" as binary input for the build ...
............................................................
Uploading finished
Error from server (BadRequest): unable to wait for build amld5-car-reporting-spacetime-ubi-openshift-java-runtimejd-857 to run: timed out waiting for the condition
, status=1}
[Pipeline] // catchError
I need more verbosity and detailed error information. I checked the start-build command reference, and I thought --build-loglevel [0-5] might help here. When I used it, I got a warning that, since the source type in the BuildConfig is 'Binary', logging isn't supported (seriously???).
NOTE: the selector returned when -F/--follow is supplied to startBuild() will be inoperative for the various selector operations.
Consider removing those options from startBuild and using the logs() command to follow the build output.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying --build-loglevel with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] WARNING: Specifying environment variables with binary builds is not supported.
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] Uploading directory "build/libs" as binary input for the build ...
[start-build:buildconfig/casc-docs-spacetime-ubi-openshift-java-runtimeadoptopenjdk] ..
How do I get more logs and more detailed info while executing the start-build command?
I was facing the same problem; I just used something like:
def build = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}").startBuild("--from-dir=${artifactPath}", '--wait','--follow')
build.logs('-f')
So far it seems to work; I get the logs from my OpenShift build in my Jenkins pipeline. Next I'll try to get the logs only if the build does not Complete, to reduce the overall log volume (a rough sketch of that idea follows).
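A rough sketch of that "logs only on failure" idea, assuming the same OpenShift Client plugin DSL as above: --wait makes startBuild fail (throw) when the build doesn't run to completion, so the logs call can sit in a catch block. Using logs() on the build config selector to pull the latest build's output is an assumption here, not something from the original thread:
def bcSel = openshift.selector(BUILD_CONFIG_NAME, "${appBcName}")
try {
    // fails (throws) if the build does not run to completion
    bcSel.startBuild("--from-dir=${artifactPath}", '--wait')
} catch (err) {
    // only dump the build log into the Jenkins console when something went wrong
    bcSel.logs()
    throw err
}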
(for future searchers like me ^^)
We are using SSIS in VS2017 (15.8.4), running on SQL Server 2016. Our package runs to completion within VS without error, but when we use a SQL job to run the package, it errors out immediately when the job starts. I've even taken all the code out of the Script Task so that it only has:
public void Main()
{
    // TODO: Add your code here
    Dts.TaskResult = (int)ScriptResults.Failure;
}
It errors out the same way. If we remove the Script Task completely from the Control Flow and redeploy the package, the job runs to completion with no error. The error from the SQL job history is very generic and has no useful information. The Script Task does not give a compilation error when we save it.
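(For what it's worth, one rough way to see a more detailed error than the job history shows is to run the same package from a command prompt on the server with dtexec and verbose reporting; the package path below is just a placeholder.)
REM verbose reporting (/Rep V) plus console logging shows each task's errors as they happen
dtexec /F "C:\Deploy\MyPackage.dtsx" /Rep V /ConsoleLog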
(1) Does anyone know why it errors out?
(2) What would you suggest configuring to get a more informative error?
(3) Could it be due to missing DLLs or .NET Framework components?
I am running Sun Grid Engine, which I just finished installing. I submit a job using a shell wrapper script from the command line, like
./hello_world_qsub.sh
but when I run it I get the following error:
Unable to run job: b410 your job is not allowed to run in any queue . Your job 1("hello_world.sh") has been submitted.
Got the answer myself: in the User Configuration tab in qmon, add the name of your executing host and click Modify. That did the job.
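For anyone who prefers the command line over qmon, a rough sketch of the equivalent checks; the host name is a placeholder, the script name is taken from the error message, and whether your setup needs the host registered as a submit host, an execution host, or both is an assumption:
qconf -sel                 # list the configured execution hosts
qconf -ss                  # list the configured submit hosts
qconf -as node01           # register node01 as a submit host (adding an execution host is interactive, via qconf -ae)
qsub -w v hello_world.sh   # validate the job without running it: reports whether any queue could accept it
qstat -j 1                 # for a pending job, the "scheduling info" section explains why it cannot run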
I've got this question within an SSIS project. I had to divide the original project into different packages because it was too big and sometimes caused memory problems.
So, in order to link the different packages, I'm using the "Execute Package Task" to make one refer to the other.
If I execute the package directly from SSIS, it works perfectly; there's no problem.
But if I use a scheduler to set the execution time, I get this error message:
Error: 2015-09-22 14:54:37.98 Code: 0xC00220E6 Source: Execute Package Task Description: There is no project to reference. End Error
Error: 2015-09-22 14:54:37.99 Code: 0xC0024107 Source: Execute Package Task Description: There were errors during task validation. End Error
DTExec: The package execution returned DTSER_FAILURE (1).
I wonder what can be happening with the project and its execution.
Regards
I'm not sure what scheduler you are using, or how it's configured, but you can debug your way through this by simulating the scheduler with DTExec. It sounds like you've crossed your Setup and Execution Method (see below for those definitions).
Here's the summary.
Option 1: For the Setup, use Project References and for the Execution Method, use Project/Package.
Option 2: For the Setup, for each child package, use External References and for Execution Method, use File.
(It sounds like you're using a combination of Project References and File, which in turn throws the There is no project to reference error from the child packages.)
Option 1
Setup
Open your parent package in SSDT, and then double-click the Execute Package Task for the child package. The Reference Type should be set to Project Reference.
Execution Method
This setup means you need to execute the package via the Project/Package method in DTExec. So build your project; this generates an .ispac file. Executing via dtexec would look like:
dtexec /Proj Path\To\MyProject.ispac /Pack Path\To\The\ParentPackage.dtsx
Note: If you specify the .dtproj file instead of the .ispac file in the /Proj parameter, you will receive a File contains corrupted data error!
Option 2
Setup
Open your parent package in SSDT, and then double-click the Execute Package Task for the child package. Change it so that the child package is referenced externally.
This is done by
1. Changing the Reference Type to External Reference
2. Changing the Location to File system (SQL Server is another option)
3. Selecting <New connection...> to create a new file connection for the child package to run (or a SQL Server connection)
Execution Method
This setting means you can now use the File method, which is likely the way you're attempting to execute the package.
dtexec /f Path\To\My\ParentPackage.dtsx