Hudson SVN Publisher plugin not working

I'm trying to use the SVN Publisher plugin to commit some artifacts of my build, but I'm getting a nonsensical error:
workspace: /Users/builder/hudson/workspace/myproject/
Attempting to import to SVN: https://mysvnrepo.com/svn/myproject/_SNAPSHOT_
SVN Publisher: target: /Users/builder/hudson/workspace/myproject/myproject/_build
SVN Publisher: Error: target Directory not accessable: /Users/builder/hudson/workspace/myproject/myproject/_build
This path is readable by the user that the Hudson slave is using.
In looking at the comments on the SVN Publisher page, it seems that some people have run across this problem while others have not.
My question is: for those of you that have gotten it to work, what did you do?

It seems that the plugin runs on the Hudson master even though the build itself runs on a slave. This appears to be a bug in the SVN Publisher plugin. :(
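A quick way to confirm where the plugin resolves the path, assuming shell access to both nodes (hostnames illustrative):
# the target exists on the slave but not on the master, which is
# consistent with the plugin running on the master
ssh builder@slave 'ls -ld /Users/builder/hudson/workspace/myproject/myproject/_build'
ssh builder@master 'ls -ld /Users/builder/hudson/workspace/myproject/myproject/_build'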

It looks like you should be able to use the "Copy files back to the job's workspace on the master node" option to get these files back to the master (this part works for me). The copy appears to happen after SVN Publisher runs, but that would be OK; it would simply mean SVN Publisher commits (or imports) the previous build. But alas, SVN Publisher doesn't seem to do anything except log a message.

Related

Google cloud - Stackdriver debug reports "File was not found in the executable" for GCE Jetty war

I've been trying to follow the Setting Up Stackdriver Debugger for Java applications on Google Compute Engine guide, but am running into issues with Stackdriver Debug.
I'm building my .war file from a separate build server, then deploying it to my GCE server. I added the agent to the start command via /etc/defaults, and my app appears in the https://console.cloud.google.com/debug control panel. The version I set in the run command matches the revision that shows up in the source-context(s).json files.
However when I click open the app, I see the message that
No source version information was provided by the deployed application
I connected the app's git repo as a mirrored cloud repository, and can browse the source files in the sidebar of the Stackdriver Debug page. But if I browse to a file and add a breakpoint, I get the error "File was not found in the executable."
I have run the gcloud preview app gen-repo-info-file command, which created two basic json files storing my git repo and revision. Is it supposed to do anything else?
I have tried running Jetty in both normal and extracted modes. If I have Jetty first extract the war file, I can see the source-context.json files in the WEB-INF/classes directory.
What am I missing?
https://github.com/GoogleCloudPlatform/cloud-debug-java#extra-classpath mentions that you can extend the agent path with your WEB-INF/classes directory:
-agentpath:/opt/cdbg/cdbg_java_agent.so=--cdbg_extra_class_path=/opt/tomcat/webapps/myapp/WEB-INF/classes
For multiple class paths:
-agentpath:/opt/cdbg/cdbg_java_agent.so=--cdbg_extra_class_path=/opt/tomcat/webapps/myapp/WEB-INF/classes:/another/path/with/classes
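If Jetty is launched via an /etc/default file (as in the question), the same flag can be appended to the app's JVM options. A minimal sketch, assuming your Jetty startup script honors JAVA_OPTIONS and that the paths are illustrative:
# /etc/default/jetty
JAVA_OPTIONS="$JAVA_OPTIONS -agentpath:/opt/cdbg/cdbg_java_agent.so=--cdbg_extra_class_path=/opt/jetty/webapps/myapp/WEB-INF/classes"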
There are a couple of things going on here.
First, it sounds like you are doing the correct thing with gen-repo-info-file. The debugger agent should pick up the json files from the WEB-INF/classes directory.
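For reference, a minimal sketch of regenerating the files so they get packaged into the war (the --output-directory flag and the paths are assumptions based on the preview-era gcloud; check gcloud preview app gen-repo-info-file --help):
# write the source context json next to the compiled classes before packaging
gcloud preview app gen-repo-info-file --output-directory src/main/webapp/WEB-INF/classes
# then rebuild the .war so the files ship inside WEB-INF/classes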
The debugger uses fuzzy matching to find source files, so as long as the name of the .java file matches a file in your executable, you should not get that error.
The most likely scenario given the information in your question is that you are attaching the debugger to a launcher process, rather than your actual application. Without further details, I can't absolutely confirm that, though.
If you send us more details at cdbg-feedback@google.com, we can look more closely at your case to see if we can understand exactly what's happening, and potentially improve our documentation, since it sounds like you followed the docs pretty closely.

Managing composer and deployment

So, I'm enjoying using composer, but I'm struggling to understand how others use it in relation to a deployment service. Currently I'm using deployhq, and yes, I can set it to deploy and run composer when there is an update to the repo, but this doesn't make sense to me now.
My main composer repo, containing just the composer.json file listing all of the packages I want to include in my build, only gets updated when I add a new package to the list.
When I update my theme, or custom extension (which is referenced in the json file), there is no "hook" to update my deployment service. So I have to log in to my server and manually run composer (which takes the site down until it's finished).
So how do others manage this? Should I only run composer locally and include the vendor folder in my repo?
Any answers would be greatly appreciated.
James
There will always be arguments as to the best way to do things like this; there are different answers and different options - the trick is to find the one that works best for you.
Firstly
I would first take a step back and look at how you are managing your composer.json
I would recommend that all of the packages in your composer.json be locked down to the exact version number of the item in Packagist. If you are using GitHub repos for any of the packages (or they are set to dev-master), then I would ensure that these packages are locked to a specific commit hash! It sounds like you are basically there with this, as you say nothing in the packages updates when you run it.
Why?
This is to ensure that when you run composer update on the server, these packages are taken from the cache if they exist, and that you don't accidentally deploy untested code if one of the modules happens to get updated between your testing and your deployment.
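For example, pinning from the command line (package names and the commit hash are hypothetical):
# pin a Packagist package to an exact version
composer require vendor/package:1.2.3
# pin a dev package to a specific commit hash
composer require "vendor/other-package:dev-master#8e45af9"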
Actual deployments
Possible Method 1
My opinion is slightly controversial: when it comes to Composer, for many of my projects that don't go through a CI system I will commit the entire vendor directory to version control. This is quite simply to ensure that I have a completely deployable branch at any stage; it also makes deployments incredibly quick and easy (git pull).
There will already be people saying that this is unnecessary, that locking down the version numbers is enough to handle any remote-system failures, that it clogs up the VCS tree, etc. I won't go into these now - there are arguments for and against, a lot of them opinion based - but since you mentioned it in your question, I thought I would let you know that it has served me well on a lot of projects in the past and it is a viable option.
Possible Method 2
By pointing your document root at a symlink, you can run the build in a new directory on the server and only switch the symlink over once you have confirmed the build completed.
This is the path of least resistance towards a safe deployment for a basic code set using composer update on the server. I actually use this method in conjunction with most of my deployments (including the ones above and below).
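A minimal sketch of such a deployment, assuming a /var/www/releases layout with a 'current' symlink as the document root:
#!/bin/sh
# build into a fresh release directory, then switch the symlink
RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
git clone -q /path/to/repo.git "$RELEASE"
(cd "$RELEASE" && composer install --no-dev --prefer-dist) || exit 1
# only repoint the document root once the build has completed
ln -sfn "$RELEASE" /var/www/current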
Possible Method 3
Composer can use "artifacts" rather than a remote server. This means you basically create a "repository folder" of zipped vendor packages - an alternative to adding the entire vendor folder to your VCS that also protects you against GitHub/Packagist outages, packages being removed, and various other potential issues. The files are retrieved from the artifacts folder and installed directly from the zip files rather than fetched from a server. This folder can also be stored remotely - think of it as a poor man's private Packagist (another option, btw).
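A sketch of the configuration, assuming the package zips live in ./artifacts:
# register a local artifact repository of zipped packages, then install
composer config repositories.local artifact ./artifacts
composer install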
IMO - The best method overall
Set up a CI system (like Jenkins), create some tests for your application and have them respond to push webhooks on your VCS so it builds each time something is pushed. In this build you will set up the system to:
run tests on your application (if they exist)
run composer update
generate an artifact of these files (if the above items succeed)
Jenkins can also do an actual deployment for you if you wish (and the build process doesn't fail); it can:
push the artifact to the server via SSH
deploy the artifact using a script
But if you already have a deployment system in place, having a tested artifact to be deployed will probably be one of its deployment scenarios.
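As a sketch, the build itself could be a plain shell step in Jenkins along these lines (tool names and paths are illustrative; BUILD_NUMBER is provided by Jenkins):
#!/bin/sh
set -e
# run tests on the application (if they exist)
vendor/bin/phpunit
# resolve dependencies against the pinned versions
composer update --prefer-dist --no-interaction
# generate an artifact of these files
tar czf "myapp-${BUILD_NUMBER}.tar.gz" --exclude='.git' .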
Hope this helps :)

Cannot clone Mercurial repository

I'm having difficulty cloning a repository in Mercurial.
The repository is stored at Kiln on demand, though I'm not sure that makes much difference.
I have a new install of Tortoise HG, which has of course installed the hg command line onto my machine.
When I attempt to clone the repository, I immediately receive the error:
abort: The system cannot find the path specified: 'F:\backups\_hgcookies'
Code: 255
I don't know where it's getting this path from - there is an 'F' drive on my machine that is completely empty aside from hidden system volume files.
The Kiln Tortoise install contains a couple of plugins bundled with it, including kilnauth, which I assume is using a cookie to store authentication information.
I've looked in the mercurial.ini file, however it contains no mention of this folder or hgcookies - that I can see.
I'm wondering if there's a permissions issue somewhere - I'm in the administrators group on the machine, but am on a company network with quite a bit of lockdown which has caused problems before.
I've not found any similar problems through googling, though it's been difficult to get relevant results with the words 'backup' and 'hgcookies' in my terms!
Any help, greatly appreciated.
Seems this was an issue with the KilnAuth extension. I'm not sure why it decided to store the cookies on the F: drive, but I manually created a 'backups' folder on that drive and that allowed it to store the cookie there with no problems.
I had some help from the FogCreek guys diagnosing this - I have to say I've never experienced such awesome customer service, really. Hats off to those guys!

Can Jenkins store artifacts outside the job directory?

I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job on the Jenkins interface? (ie rather than merely an add-in that copies the artifacts elsewhere)
Go to your Jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME}, and "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME} or something similar.
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and problems with the multi-branch project plugin. Check the status of those issues if you rely on these features.
As you can see here, there are many plugins to deploy artifacts anywhere you want/need - to FTP, CIFS, Confluence, Artifactory... - especially the ArtifactsDeployer, which will allow you to make a copy of the artifacts in the Jenkins home.
Thank you Sam for your post, which pointed me in the right direction to solve my problem.
I had been searching for a way to make a symlink to the job archive of a build for multibranch projects. Up to now, we used to manually search for the correct folder basename in the filesystem and add that one to the Jenkinsfile.
Now, I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and use that in my script.
If security is a concern, I could implement that as a shared library additionally.
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace directory." This is the directory where the code is checked out and builds happen. Normally you should let Jenkins allocate and clean up workspace directories, but in several situations this is problematic, and in such case, this option lets you specify the workspace location manually.
This option is also available under advanced project properties of multi-configuration project builds.
A Groovy script under "Prepare an environment for the run" will always run on the master, and this script can create a symlink from the build's artifacts directory to wherever you really want archiving to go (archive_to below), which SHOULD include the job name and build number:
import java.nio.file.Files
import java.nio.file.Paths

try {
    Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path),
                             Paths.get(archive_to.getCanonicalPath()))
} catch (IOException e) {
    throw new RuntimeException("Can't create symlink to archive dir", e)
}
Of course (sadly) when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns both the symlink and the target (shame).
A workaround for that may be to point a symlink back from the new archive dir; then, when Jenkins purges its archive dir, the new symlink will dangle and a cron job can later delete the new job archive dir.
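For example, the cleanup could be a nightly cron job that removes archive directories whose back-link has gone dangling (paths and link name illustrative):
#!/bin/sh
# -xtype l matches symlinks whose target no longer exists
find /mnt/jenkins-archive -maxdepth 2 -name jenkins-build -xtype l |
while read -r link; do
  rm -rf "$(dirname "$link")"
done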
The Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project's workspace into the current one, so you can work from there.

Jenkins/Hudson fails when trying to retrieve code from CVS

Trying to configure Jenkins CI. Currently just running it from the .war (the eventual intention is to run it as a service). Jenkins is aware of the CVS executable (i.e. it will read the version [Concurrent Versions System (CVSNT) 2.0.62.1817 (client/server)]).
The .cvspass is not specified, because they apparently do not play nice with CVSNT (which prefers to keep passwords in the registry). I've specified the password in the job config by using the :pserver:user:pass@server:/dir pattern for CVSROOT, which I found suggested in some places. Regardless of whether I run using that, or :pserver:user@server:/dir as the CVSROOT, I get the blinking red ball, with Jenkins stuck at a nearly full progress bar for two and a half minutes. It then fails. The console output yells with something like
FATAL: hudson.scm.ChangeLogSet.iterator()Ljava/util/Iterator;
java.lang.AbstractMethodError: hudson.scm.ChangeLogSet.iterator()Ljava/util/Iterator;
at hudson.model.AbstractBuild.getCulprits(AbstractBuild.java:282)
at hudson.model.AbstractBuild.getCulprits(AbstractBuild.java:279)
at hudson.model.AbstractBuild$AbstractRunner.post(AbstractBuild.java:596)
at hudson.model.Run.run(Run.java:1400)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:175)
Both CVSROOTs I'm using give no trouble with TortoiseCVS. I've found some mention of difficulty logging into CVS from Jenkins when it runs as a service, and related user/system issues, but considering I'm running it from the .war I don't think that's the issue.
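One way to rule out the CVS side entirely is to exercise the same CVSROOT from a plain shell (module name hypothetical):
cvs -d :pserver:user:pass@server:/dir login
cvs -d :pserver:user:pass@server:/dir checkout mymodule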
EDIT:
Interestingly, the console log recognizes when I use an invalid user or password:
cvs [checkout aborted]: authorization failed: server rejected access to /dir for user FOO
FATAL: CVS failed. exit code=1
Finished: FAILURE
which indicates that Hudson is talking to the CVS server and authenticating, but something else goes wrong.
/EDIT
Cheers
Answer to the question found, thanks to rpetti on #jenkins on freenode. Problem was I had switched between Hudson and Jenkins and there were some incompatible configuration files that were mucking things up. Deleting and recreating the home directory solved the problem.
CVSNT 2.0.62.1817 is very, very old and has several known security issues. Please upgrade to the latest 2.8.01.