Jenkins projects pointing to same Mercurial repo do not share source

I am using Jenkins for our build server. I have multiple projects using the same Mercurial (Hg) repository and want to avoid each project cloning its own local repo to build from (since the repo is rather large). This is supposed to be possible via Jenkins and the Mercurial plugin.
In my Mercurial plugin configuration I have checked both "Use Repository Caches" and "Use Repository Sharing". In each project, the same repository location (a network location specified via IP address) is listed.
However, each project still seems to want to create a clone of the repository. Any ideas?

In our setup (using Jenkins 1.506), I've defined a custom workspace under the Advanced Project Options for each of my builds, typically at [project]\repo, and then build from there into a \build\ folder.
If you point the custom workspace of each Jenkins project at the same shared workspace, using the same source for the repo, it will reuse what is already there.
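For illustration, the resulting shared layout might look like this (paths hypothetical):

    D:\jenkins\myproject\repo\     <- single Hg clone, reused by every job pointed at it
    D:\jenkins\myproject\build\    <- build output written next to it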
I've not tested this, but I would assume that under this setup, it is important to prevent concurrent builds from occurring in the same working directory. Bad things would follow.
As a followup question: What is your rationale for not wanting each build to have its own source code?

Can Sphinx source files be pushed to ReadTheDocs without a linked repository?

I'm moving the Mercurial repositories for all my open-source projects to OSDN (OSDN.net) from Bitbucket because Bitbucket will soon drop support for Mercurial. However, OSDN only supports SSH, not HTTPS, as a file exchange protocol, and ReadTheDocs does not support SSH URLs. The ReadTheDocs public API allows builds to be triggered, but does not support any way to provide the source files with the build trigger.
Or any documented way, at least. Does anybody know of a way to either push document source files to RTD with a build trigger, or connect an OSDN repository to RTD so that RTD can clone the source files itself?
Thanks.
OSDN does support both SSH and HTTP(S); for writing, the only option is SSH. However, ReadTheDocs only needs to read, and for that HTTPS is fine (and supported, although a bit hard to find).
On OSDN, toggle the "RO|r/w" control to see the other URL. It isn't really a button or a toggle, but it looks like one; the UX/UI design isn't great.
Copy that read-only value (again, ignore the UI feedback): you can copy the HTTPS URL and paste it into ReadTheDocs.
Note: so far I could not get webhooks/integration working, so you have to go to ReadTheDocs and trigger a rebuild after a push, or call the generic webhook with curl, e.g. from a Makefile; see https://docs.readthedocs.io/en/stable/webhooks.html#parameters
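A minimal sketch of such a trigger (the project slug, integration ID, and token are placeholders you would read off your project's integration page):

    curl -X POST -d "token=YOUR_TOKEN" -d "branches=default" https://readthedocs.org/api/v2/webhook/your-project/12345/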

Prevent access to some files in webserver

I have a CentOS server with code maintained in a Mercurial repo.
To allow a new person to commit code to Mercurial, I create a new user, add them to the webdev group, and they can then push/pull code with
hg pull ssh://name@server.com
However, there are some files (config files) that I would not like new users to have access to. Mercurial has been told not to track these files, so the only way to access them is to SSH into the system and look at them directly, which I don't want new users to be able to do.
In essence, I want my new developers to only pull/push files through hg, and to disallow SSH-ing directly into the system. What's the best way to do this? Can I provide hg access to a repo without providing SSH access to the files?
(or is my approach to the problem flawed?)
Thanks!
This can be done really easily by taking advantage of the command option available in .ssh/authorized_keys files. When you grant their key access in that file, you can prepend a command="..." option to their key, and that is then the only command they can run.
Mercurial ships with a handy script for doing exactly that. It has instructions inside:
https://www.mercurial-scm.org/repo/hg/file/tip/contrib/hg-ssh
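For example, an authorized_keys entry along these lines (repository path and key are placeholders) restricts the user to running hg-ssh against the listed repos only:

    command="hg-ssh /home/hg/repos/project1",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA...== dev@example.com

The extra no-* options also deny that key port forwarding and interactive terminals.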
In terms of an authorization layer (similar to Gitolite for Git), there is mercurial-server (not to be confused with Mercurial's light-weight built-in web server, hg serve).
mercurial-server gives your developers remote read/write access to centralized Mercurial repositories using SSH public key authentication; it provides convenient and fine-grained key management and access control.
See its repository here.
It is based on the same SSH forced-command mechanism as the script mentioned by Ry4an in his answer (+1 on his answer, because that script is already packaged with Mercurial).
For an illustration, see the mercurial-server source of refreshauth.py.

Can Jenkins store artifacts outside the job directory?

I currently have Jenkins set up with a number of jobs, but it's proving difficult to back up because the artifacts are stored within the job directory. I'd like to back up the job configurations and artifacts separately. I'm sure I remember reading somewhere that Jenkins now has an option to store them outside the job, but I can't find this.
Is there any configuration option that does this while still making the artifacts visible from within the job on the Jenkins interface (i.e. rather than merely an add-on that copies the artifacts elsewhere)?
Go to your Jenkins configuration page, e.g.
http://mybuildserver.acme.com/configure
At the top of the configuration page there is a "home directory" setting. Click the "advanced..." button below it.
Now set the "Workspace Root Directory" to e:\jenkins-workspaces\${ITEM_FULL_NAME}, and "Build Record Root Directory" to e:\jenkins-builds\${ITEM_FULL_NAME} or something similar.
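Illustratively, for a job named myapp (name assumed), those paths would then expand to something like:

    e:\jenkins-workspaces\myapp\           <- the job's workspace
    e:\jenkins-builds\myapp\42\archive\    <- record and archived artifacts of build #42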
Warning: I run Jenkins 2.7.2 and noticed that certain features don't work properly after configuring Jenkins like that. I saw problems with folders and with the multi-branch project plugin. Check the status of those issues if you rely on these features.
As you can see here, there are many plugins to deploy artifacts anywhere you want/need: FTP, CIFS, Confluence, Artifactory and so on, especially the ArtifactDeployer plugin, which lets you make a copy of the artifacts inside the Jenkins home.
Thank you Sam for your post, which pointed me in the right direction to solve my problem.
I had been searching for a way to create a symlink to the job archive of a build for multibranch projects. Until now, we used to manually search the filesystem for the correct folder basename and add it to the Jenkinsfile.
Now, I can simply use
jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
and refer to that in my script.
If security is a concern, I could implement that as a shared library additionally.
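A minimal scripted-pipeline sketch of that idea (note that accessing rawBuild is an assumption that requires script-security approval, or the shared library mentioned above):

    node {
        // Path of this build's artifact archive on the controller
        def jobOutputFolder = currentBuild.rawBuild.artifactsDir.path
        echo "Archiving happens under: ${jobOutputFolder}"
        // e.g. hand the path on to a deployment script:
        // sh "./publish.sh '${jobOutputFolder}'"
    }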
Try the Use Custom Workspace build option. From the Jenkins popup help:
For each job on Jenkins, Jenkins allocates a unique "workspace directory." This is the directory where the code is checked out and builds happen. Normally you should let Jenkins allocate and clean up workspace directories, but in several situations this is problematic, and in such case, this option lets you specify the workspace location manually.
This option is also available under advanced project properties of multi-configuration project builds.
A groovy script under "Prepare an environment for the run" will always run on the master, and this groovy script can create a symlink from the build's artifacts directory to wherever you really want archiving to go (archive_to below), which SHOULD include the job name and build number:
import java.nio.file.*

try {
    // point Jenkins' per-build artifacts dir at the real archive location
    Files.createSymbolicLink(Paths.get(currentBuild.artifactsDir.path),
                             Paths.get(archive_to.getCanonicalPath()))
} catch (IOException e) {
    // createSymbolicLink throws rather than returning false
    throw new RuntimeException("Can't create symlink to archive dir", e)
}
Of course (sadly), when old builds are purged by Jenkins, the old artifacts are left behind, because Jenkins will not follow a symlink when purging, even if Jenkins owns both the symlink and the target (a shame).
A workaround may be to point a symlink back from the new archive dir; then, when Jenkins purges its archive dir, the new symlink will dangle, and a cron job can later delete the new job archive dir.
The Copy Artifact Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin) adds a build step for retrieving files from another project's workspace into the current one, so you can work from there.
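In a pipeline, the plugin's equivalent step would look roughly like this (job name, filter and target directory are placeholders):

    copyArtifacts projectName: 'upstream-job', selector: lastSuccessful(), filter: '**/*.tar.gz', target: 'incoming'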

Enforcing hg settings on all users of a mercurial repository

Is there any way to centrally manage mercurial settings for all users of a repository? Are there additional [existing] tools, add-ons, extensions, etc for this?
My use case
We have a repository that includes a few Excel, Word, etc. files that constantly cause trouble with merging.
With [merge-patterns] entries à la **.doc = internal:fail I can specify the intended behaviour, but I have to set this up for each and every user.
I want this to propagate automatically to anyone who clones the repository.
Environment
We use Kiln 2.6 hosted on our own Windows Server and TortoiseHg 2.2 on our Windows clients.
As far as I know, this possibility doesn't exist in Mercurial, and I'm not aware of any extension which lets you clone the .hgrc along with the other files.
However, you can do some things to "ease" the process of setup for each user.
Provide a template hgrc in the repository
You can add a "template" .hgrc to the repository. When a user clones the repo, the only thing they have to do is move the template to the right place.
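For instance (file name hypothetical), commit something like this at the repository root and have every user copy it into their clone's .hg/hgrc:

    # hgrc.template -- copy to .hg/hgrc after cloning
    [merge-patterns]
    **.doc = internal:fail
    **.xls = internal:fail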
Change the system wide hgrc
If you have some kind of configuration management system for your clients, you can set the system-wide configuration file for each of your users. There are various ways of doing it. From the documentation:
(Windows) <install-dir>\Mercurial.ini or
(Windows) <install-dir>\hgrc.d\*.rc or
(Windows) HKEY_LOCAL_MACHINE\SOFTWARE\Mercurial
Per-installation/system configuration files, for the system on which Mercurial is running. Options in these files apply to all Mercurial commands executed by any user in any directory. Registry keys contain PATH-like strings, every part of which must reference a Mercurial.ini file or be a directory where *.rc files will be read. Mercurial checks each of these locations in the specified order until one or more configuration files are detected. If the pywin32 extensions are not installed, Mercurial will only look for site-wide configuration in C:\Mercurial\Mercurial.ini.
But obviously this depends on the way your clients are set up, so you will have to find the right solution yourself. For example, you can:
Set these files up when the computer is installed
Provide an executable that configures this, which every user must run
Configure your in-house configuration management system to set this up on the next computer start (a sketch follows this list)
Change the roaming user profile, if they have one.
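A minimal sketch of the configuration-management idea, assuming Windows clients with TortoiseHg in its default install location (all paths hypothetical):

    rem deploy-hg-config.bat -- run at computer start by the
    rem configuration management system; installs a site-wide rc
    rem fragment that every hg command on the client will read.
    copy /Y "\\fileserver\hg\merge.rc" "C:\Program Files\TortoiseHg\hgrc.d\merge.rc"

Here merge.rc would contain the [merge-patterns] section shown earlier.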
You can use the projrc extension to push a project configuration file to others. It requires that the clients enable the extension first and that they fully trust the server.

Jenkins build outside of workspace

I am new to Jenkins/Hudson and am trying to migrate a C make-based project from buildbot. For legacy reasons, the build system is hard-coded to build outside of the versioned source tree (git), one directory above, in a separate directory. E.g.:
workspace/
    .git
    foo
    bar
build/
    artifacts
Besides the fact that this creates a directory outside the workspace, Jenkins won't recognize items in that build/ directory above the workspace as artifacts to archive.
How can I make this kind of build system work with Hudson? Building in-source-tree is not a short-term option. The only option I found was "use custom workspace," but all this does is hard-code the workspace directory to some other directory.
To answer my own question: there is indeed an option in the Jenkins git plugin to check out into a local subdirectory instead of the root of the workspace. In the git plugin settings, click the Advanced button and fill in the field "Local subdirectory for repo (optional)".
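With a value of, say, repo (name assumed), the checkout and the build directory both end up inside the workspace, so build/ can be archived:

    workspace/
        repo/
            .git
            foo
            bar
        build/
            artifacts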
I can't find the option that djs mentioned, but you can specify a different work directory:
Configure job
Extended Project settings
Use custom work space
This can be set to anywhere you want, including the workspace of a different job.