Hudson: Copy artifact from master to slave fails

Is it possible to use the 'copy artifact' plugin to copy an artifact from a job that ran on master to a downstream job that runs on a slave node?
I'm getting an error on the slave that says:
hudson.util.IOException2: hudson.util.IOException2: Failed to extract /srv/hudson/jobs/myproject/builds/2011-04-29_10-28-54/archive/myartifact.foo
Obviously that path is not valid, as it points to the artifact folder on master.
Am I missing something or is this just not possible?

Yes, it is possible. You can use the Copy Artifact Plugin to copy any artifact to the slave.
For a first test, I recommend that you:
set up a job with just one 'Copy artifacts from another project' build step
set the 'Project name' to a job that has your artifact
set 'Which build' to 'Last successful build' (ensure there is one)
leave 'Artifacts to copy' and 'Target directory' empty to copy all artifacts into the slave's workspace directory

Related

Need to pass SSIS variable to the Execute Process task to make Dynamic command line argument for Winscp

I have a folder 'DATA' at an SFTP location from which I need to download a set of files to a common location and then copy the respective files to different folder locations.
The file names are:
Test1.csv
Test2.csv
Test3.csv
Test4.csv
Test5.csv
I want the files to first be downloaded to the location below:
G:\USER_DATA\USER_USER_SYNC\Download
These files relate to different schemas and have to be processed separately by different SSIS packages for further transformation and loading.
For some reasons we have to keep them at a common location first and then move or copy them afterwards.
Here are my command line arguments:
/log=G:\USER_DATA\USER_USER_SYNC\SFTP_LOG\user_sync_winscp.log /command "open sftp://username:password#stransfer.host.com/" -hostkey=""ssh-rsa 2048 9b:63:5e:c4:26:bb:35:0d:49:e6:74:5e:5a:48:c0:8a""" "get /DATA/Test1.csv G:\USER_DATA\USER_USER_SYNC\Download\" "exit"
Using the above, I am able to download a given file one at a time.
Since I need to have the files at a common folder location first, I am planning to add another Execute Process Task to copy the files.
/C copy /b G:\USER_DATA\USER_USER_SYNC\Download\Test1.csv G:\USER_DATA\USER_USER_SYNC\Testing1
/C copy /b G:\USER_DATA\USER_USER_SYNC\Download\Test1.csv G:\USER_DATA\USER_USER_SYNC\Testing2
and so on...
I am looking for a way to download all the available files to a common folder location and then move or copy them to different folder locations.
I have changed the design and followed a new approach. Thanks to Martin for fixing the SFTP-related issues and for the continuous support.
The new SSIS package has the tasks below:
Step 1. It looks for the latest updated files on the SFTP server and downloads the given files Test1.csv and Test2.csv to G:\USER_DATA\USER_USER_SYNC\Download\
Here are my command line arguments:
/log=G:\USER_DATA\USER_USER_SYNC\SFTP_LOG\user_sync_winscp.log /command "open sftp://bisftp:*UFVy2u6jnJ]#hU0Zer5AjvDU4#K3m#stransfer.host.com/ -hostkey=""ssh-rsa 2048 9b:63:5e:c4:26:bb:35:0d:49:e6:74:5e:5a:48:c0:8a""" "cd /DATA" "get -filemask=">=today" Test1.csv Test2.csv G:\USER_DATA\USER_USER_SYNC\Download\" "exit"
Step 2. My requirement was to further copy each file to a different folder location, so that the respective process can pick up the corresponding file and start transforming and loading it into SQL Server.
This step executes the Windows cmd process and copies Test1.csv to the new location
G:\USER_DATA\USER_USER_SYNC\Testing1
with command line arguments:
/C copy /b G:\USER_DATA\USER_USER_SYNC\Download\Test1.csv G:\USER_DATA\USER_USER_SYNC\Testing1
Likewise, I have another Execute Process Task to copy Test2.csv to the new location
G:\USER_DATA\USER_USER_SYNC\Testing2
with command line arguments:
/C copy /b G:\USER_DATA\USER_USER_SYNC\Download\Test2.csv G:\USER_DATA\USER_USER_SYNC\Testing2
The given solution is working fine; however, there are a couple of things which still need to be handled.
I am downloading only the latest files, using -filemask=">=today". Everything runs fine if the Execute Process Task is able to find the latest files on the SFTP server. If they are not there, the subsequent Execute Process Task fails with the error message below.
The process exit code was "1" while the expected was "0"
What I understand is that it is failing because it has nothing to copy or move.
Is there any way to capture the exit code returned from the first Execute Process Task and store it in a variable, so that we can use an expression to decide whether to start the next task or not?
Second, as you can see, I am using two Execute Process Tasks to copy files from one location to another. Can we combine these two commands into one Execute Process Task?
Any suggestion is most welcome; I also think this issue may need to be addressed as a separate question.
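In the meantime, one hedged way to handle both follow-ups, rather than a definitive fix: replace the two copy tasks with a single Execute Process Task that runs a small wrapper batch file (the name copy_user_sync.bat is made up here; the paths and file names are the ones from Step 2 above). Because the wrapper only copies files that actually arrived and always exits 0, a day with nothing new to download no longer fails the package:
@echo off
setlocal
set "DL=G:\USER_DATA\USER_USER_SYNC\Download"
rem Copy onward only the files that actually arrived, so a run with no new downloads is not treated as a failure.
if exist "%DL%\Test1.csv" copy /b "%DL%\Test1.csv" "G:\USER_DATA\USER_USER_SYNC\Testing1"
if exist "%DL%\Test2.csv" copy /b "%DL%\Test2.csv" "G:\USER_DATA\USER_USER_SYNC\Testing2"
rem Report success to the Execute Process Task regardless of how many files were copied.
exit /b 0
The 'if exist' guards do the branching that would otherwise require capturing an exit code into an SSIS variable and evaluating it in an expression.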

Copying files between servers SQL Job

I have a SQL job with which I would like to copy files from a directory out to a directory on another server. Here is the CMD that I am running:
xcopy \DB1\TEST*.* \DB2\foo /S /Y
And I get this error
Executed as user: DOMAIN\SA$. File not found - . 0 File(s) copied. Process Exit Code 0. The step succeeded.
I have also tried copying to the same server with the same results, as well as just trying to grab one file and copy it.
I have added the DOMAIN\SA account as an admin on the servers and am still getting this. Is this just a bug, or am I missing some permissions setting?
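One thing worth checking: as rendered here, the source and destination look like UNC paths that have lost their leading double backslashes. If xcopy really receives \DB1\TEST*.* it resolves it against the current drive and reports "File not found". A hedged sketch of the same copy with explicit UNC paths, quoting, and a return-code check (the server and share names are the ones from the question):
rem /S copies subdirectories, /Y suppresses overwrite prompts, /I treats the destination as a directory if it does not exist yet.
xcopy "\\DB1\TEST\*.*" "\\DB2\foo\" /S /Y /I
rem xcopy returns 0 when it copied files and 1 when it found nothing to copy.
echo xcopy exit code: %ERRORLEVEL%
Also note that DOMAIN\SA$ (with the trailing $) in the log is a machine account, which may not be the same principal as the DOMAIN\SA account that was granted admin rights; whatever account the step actually runs under needs share and NTFS permissions on both servers.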

Can Jenkins store artifacts outside the job directory?

I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job on the Jenkins interface? (ie rather than merely an add-in that copies the artifacts elsewhere)
Go to your Jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME}, and "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME} or something similar.
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and problems with the multi-branch project plugin. Check the status of those issues if you rely on these features.
There are many plugins to deploy artifacts anywhere you want or need (FTP, CIFS, Confluence, Artifactory, ...), especially the ArtifactsDeployer plugin, which lets you make a copy of the artifacts in the Jenkins home.
Thank you Sam for your post, which pointed me in the right direction to solve my problem.
I had been searching for a way to make a symlink to the job archive of a build for multibranch projects. Up to now, we used to manually search for the correct folder basename in the filesystem and add it to the Jenkinsfile.
Now, I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and use that in my script.
If security is a concern, I could implement that as a shared library additionally.
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace directory." This is the directory where the code is checked out and builds happen. Normally you should let Jenkins allocate and clean up workspace directories, but in several situations this is problematic, and in such case, this option lets you specify the workspace location manually.
This option is also available under advanced project properties of multi-configuration project builds.
A Groovy script under "Prepare an environment for the run" will always run on the master, and this Groovy script can create a symlink from the build's artifacts directory to where you really want archiving to go (archive_to below), which SHOULD include the job name and build number:
import java.nio.file.*
try {
    // createSymbolicLink throws an IOException on failure rather than returning false
    Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path),
                             Paths.get(archive_to.getCanonicalPath()))
} catch (IOException e) {
    throw new RuntimeException("Can't create symlink to archive dir", e)
}
Of course (sadly), when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns the symlink and the target (shame).
A workaround for that may be to point a symlink back from the new archive dir; then, when Jenkins purges its archive dir, the new symlink will dangle and a cron job can later delete the new job archive dir.
The Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project's workspace into the current one, so you can work from there.

On a Hudson master node, what are the .tmp files created in the workspace-files folder?

Question:
In the path HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/workspace-files, there are a series of .tmp files. What are these files, and what feature of Hudson do they support?
Background
Using Hudson version 1.341, we have a continuous build task that runs on a slave instance. After the build is otherwise complete, including archiving the artifacts, the task scanner, etc., the job appears to hang for a long period of time. While monitoring the master node, I noted that many .tmp files were being created and modified under builds/<timestamp>/workspace-files, and that some of them were very large. This appears to be causing the delay, as the job completed at the same time that the files in this path stopped changing.
Some key configuration points of the job:
It is tied to a specific slave node
It builds in a 'custom workspace'
It runs the Task Scanner plugin on a portion of the workspace to find "todo" items
It triggers a downstream job that builds in the same custom workspace on the same slave node
In this particular instance, the .tmp files were being created by the Task Scanner plugin. When tasks are found, the files in which they are found are copied back to the master node. This allows the master node to serve those files in the browser interface for Tasks.
Per this answer, it is likely that this same thing occurs with other plug-ins, too.
Plug-ins known to exhibit this behavior (feel free to add to this list)
Task Scanner
Warnings
FindBugs
There's an explanation on the Hudson users mailing list:
...it looks like the warnings plugin copies any files that have compiler warnings from the workspace (possibly on a slave) into a "workspace-files" directory within HUDSON_HOME/jobs/<jobname>/builds/<buildnumber>/
The files then, I surmise, get processed, resulting in a "compiler-warnings.xml" file within HUDSON_HOME/jobs/<jobname>/builds/<buildnumber>/
I am using the "warnings" plugin, and I suspect it's related to that.

SSIS package not running when called as step in SQL Job

I have a .dtsx file (an SSIS package) that downloads files from an FTP server and imports data. It runs fine whenever I run it manually. However, when I schedule calling the package as a step in a SQL server agent job, it fails. The step it fails at is the one where I call a .bat file. The error in the job history viewer says this:
Error: 2009-05-26 12:52:25.64
Code: 0xC0029151
Source: Execute batch file Execute Process Task
Description: In Executing "D:\xxx\import.bat" "" at "", The process exit code was "1" while the expected was "0".
End Error
DTExec: The package execution returned DTSER_FAILURE (1).
I think it's a permissions issue, but I'm not sure how to resolve this. The job owner is an admin user, so I've verified they have permissions to the directory where the .bat file is located. I've tried going into Services and changing the "Log On As" option for SQL Server Agent, and neither option works (Local System Account and This Account). Does anyone have ideas as to what other permissions need to be adjusted in order to get this to work?
I tried executing just the batch file as a SQL Job step, and it gave more specifics. It showed that it failed when I was trying to call an executable, which was in the same directory as my .bat file, but not in the windows/system32 directory, which is where it was executing from.
I moved the executable to the system32 directory, but then I had no clue where my files were being downloaded to. Then I found that there's a property for the Execute Process Task (the one that executes the .bat) called WorkingDirectory. I set this to be the directory where the bat is located, moved the executable back into the same one as the .bat file, and it's now working as expected.
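If setting WorkingDirectory is not an option for some reason, a common batch-file trick is to have the script switch to its own folder first, so relative paths resolve next to the .bat instead of in windows\system32. A small sketch (the executable and config file names below are placeholders, not the ones from the question):
@echo off
rem %~dp0 expands to the drive and folder of this .bat file, so the script behaves the same regardless of which working directory SQL Agent starts it from.
cd /d "%~dp0"
rem Placeholder for the real FTP executable and its arguments.
myftptool.exe import.cfg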
For me it was a permissions issue. Go to Environment --> Directories, then change Local directory to something the SQLAgentUser can access. I used C:\temp. Click the dropdown for Save, and choose "Set defaults".
Are you executing the SSIS job in the batch file, or is the batch file a step in the SSIS control flow?
I'm assuming the latter for this answer. What task are you using to execute the batch file (e.g. a simple execute program task or a script task)? Either way, it looks like your batch file is actually failing on some step, not the SSIS script. I'd check the permissions of whatever your batch file is trying to access.
In fact, it might be a better idea to rewrite the batch file as a script task in SSIS, because you'll get much better error reporting (it'll tell you which step in the script fails).
You could try executing the batch file using the runas command in a command window. If you try to execute it under the Local System or Network Service account, it should give you a better error. If it does error, you can check the error level by running "echo %ERRORLEVEL%".
If it wasn't the latter, and you're executing the SSIS package via a batch file, why?
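For reference, checking the error level as suggested above is just a matter of running the batch file in a command window and echoing the code it returned (D:\xxx\import.bat is the path from the error message):
rem Run the batch file, then print the exit code it returned.
D:\xxx\import.bat
echo %ERRORLEVEL%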
Are you possibly accessing a mapped drive in your .bat file? If so, you can't rely on the mapped drive from within the service, so you'd have to use a UNC path.
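A small illustration of that point (the drive letter, share, and file names below are hypothetical, not taken from the question):
rem A mapped drive letter like Z: usually does not exist in the SQL Agent service session:
copy /b Z:\incoming\data.csv D:\xxx\
rem Reference the share directly with a UNC path instead:
copy /b \\fileserver\incoming\data.csv D:\xxx\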
I had the same error and resolved it by logging on to the user account that runs the job, opening the Core FTP site in question there, testing the site access, and making the change there (in my case, I had to re-enter the new password); now it works.
So yes, it is an issue of file access, in this case access to the Core FTP site in question.