How would one make a job in Jenkins that polls source control (i.e. Mercurial) as a trigger to execute the job, but without actually cloning/pulling the monitored repo?
If Jenkins already has a local clone and you just don't want to update it, you can run hg incoming, whose exit code lets you know whether there's new stuff. If you don't have a local clone, you'll need to run something like hgweb on the box that's serving the repo, then poll the raw version of the latest commit and watch for changes: http://hg.intevation.org/mercurial/crew/raw-rev/tip
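As a minimal sketch of the first approach (the clone location is a hypothetical path): hg incoming exits 0 when the remote has new changesets and 1 when there is nothing new, without modifying the clone.
import subprocess

# Poll for new upstream changesets without pulling them.
# 'hg incoming' exits 0 if there are incoming changesets, 1 if none.
result = subprocess.run(
    ["hg", "incoming", "--quiet"],
    cwd="/path/to/local/clone",  # hypothetical location of the local clone
    stdout=subprocess.DEVNULL,
)
if result.returncode == 0:
    print("New changesets upstream; trigger the job.")
else:
    print("Nothing new.")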
I have a small group of developers and we all develop on our own machines. When we have code that is ready for testing, we merge and push to a RhodeCode installation. The hgrc file for my central RhodeCode repo is set up like this:
[paths]
test_env = /www/mysite/test
prod_env = /www/mysite/prod
[hooks]
changegroup = hg push test_env
so when a person checks code into RhodeCode, the changes are automatically pushed to the test environment. (There's an hg update hook in the test repo's hgrc file, so the code updates there.) This is perfect.
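For reference, the hook in the test repo's hgrc presumably looks something like this (a guess at the setup described above):
[hooks]
changegroup = hg update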
BUT.. I would like our RhodeCode admins to be able to push to prod without needing shell access on the server. Is there a way to let someone run "hg push prod_env" from the RhodeCode interface? I figure that since RhodeCode has full control over hg it should be possible, but does this ability exist somewhere in RhodeCode? Would it be a huge task to add it?
If not, how would you go about allowing an authenticated user to push a repository to production without shell access? I have been googling, but I can't seem to find anything. I know I could write a PHP script with a passthru("hg push test_env"), but that seems like a permissions nightmare, as Apache runs as "nobody" and RhodeCode owns the repo.
Thoughts?
Obviously, you cannot push when there is nothing to push. But you can add or edit some file from the RhodeCode interface (which allows this) in prod_env. This should cause a local commit and a push without accessing a shell.
For those looking at this question, here's how I solved it:
Wrote a passworded page in PHP with a button that executes this code:
shell_exec('hg pull -R ../wp-content/themes/2014');
I then put hg update in the hgrc file for the prod website, and made the web user an authorized user of the repository.
It works pretty well. I have slight security concerns because of the resulting file ownership, but assuming the PHP follows proper practice, there aren't any problems.
This link, Is there a way to keep Hudson / Jenkins configuration files in source control?, shows how to save Hudson configuration changes to an SCM (i.e. a "backup with history").
My question is: can the Hudson configuration be pulled from an SCM. In other words, to change a job configuration, you add a changeset to the SCM repository first. Hudson, at the start of a build, pulls the configuration from the SCM and runs as usual.
Of course, it would also be ideal to make the entire job configuration screen read-only (or as minimal as possible).
Why would I want this?
I want the SCM to be where a configuration change begins. Why? So that the changesets in the SCM reflect when the configuration change was made in the flow of changesets for the project, i.e. it imposes a chronological ordering on the project changes.
I don't want to use the security feature (i.e. no need for a login, etc)
I searched and could only find plugins for backing up or saving the configuration, but none that "pulled" the .xml files.
Thanks,
John
I haven't tried it myself, but you might be able to do this with a custom build that does the following on a schedule:
Sync all of the job configuration files from your SCM into the Hudson jobs directory
Do an HTTP GET to [Your Hudson URL]/reload - this is the equivalent of clicking the "Reload Configuration from Disk" link on the "Manage Hudson" page.
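Put together, a rough sketch of such a scheduled step (the paths and URL here are assumptions; adjust for your installation):
import shutil
import urllib.request

SCM_CHECKOUT = "/var/scm/hudson-config/jobs"  # hypothetical working copy of the job configs
HUDSON_JOBS = "/var/lib/hudson/jobs"          # hypothetical Hudson jobs directory
HUDSON_URL = "http://localhost:8080"          # hypothetical Hudson URL

# Step 1: sync job configuration files from the SCM checkout into Hudson.
shutil.copytree(SCM_CHECKOUT, HUDSON_JOBS, dirs_exist_ok=True)

# Step 2: the equivalent of clicking "Reload Configuration from Disk".
urllib.request.urlopen(HUDSON_URL + "/reload")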
I don't think you could have each job update its own configuration from SCM every time it runs, because the configuration will have already been loaded by the time the job polls the SCM for changes.
I'd like for a build to be done (on the server) each time a push is made to our central Mercurial repository.
Builds are usually kicked off on our build server by running a Visual Build file either manually or via a scheduled task.
What are the ways of achieving this?
Simple, low impact solutions are preferred.
As Pablo suggested, you can do this with a hook, but you'll need an incoming hook on the server side. This hook runs "after a changeset has been pulled, pushed, or unbundled into the local repository" (hgrc manpage).
Edit the .hg/hgrc file of the repository located on the server and define your build hook as follows:
[hooks]
incoming = /path/to/executable-build-script
Of course, the build script called here just needs to be a trigger for whatever build process you actually use.
Note that an incoming hook runs for every single changeset in a push. If you don't want this, use a changegroup hook -- it runs only once for each push, no matter how many changesets it carries.
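As an illustration, Mercurial can also run in-process Python hooks; here is a minimal sketch of a changegroup variant (the script paths and hook name are assumptions):
# build_hook.py -- wired up in the server's .hg/hgrc as:
#   [hooks]
#   changegroup.build = python:/path/to/build_hook.py:trigger_build
import subprocess

def trigger_build(ui, repo, hooktype, node=None, **kwargs):
    # For a changegroup hook, 'node' is the first changeset added by the
    # push; the hook fires once per push, however many changesets arrive.
    ui.status(b"changegroup received; triggering build\n")
    # Fire and forget so the pushing client is not blocked
    # (hypothetical build script path).
    subprocess.Popen(["/path/to/executable-build-script"])
    return False  # a true return value would mark the hook as failed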
Another way, in addition to the hooks that Pablo mentions, is to set up a continuous integration server, like TeamCity. Then you could ask TeamCity to monitor your repository, pull new changesets and start the visual build script for you.
Disclaimer
These findings are for the TortoiseHg client and a Mercurial server behind Apache on Win32.
Try #1
The naive solution would be to make your push kick off the build.
In .hg\hgrc
[hooks]
incoming=.hg\build.py
In build.py
import os
os.system(r'\Progra~2\Micros~2.0\Common7\IDE\devenv /build release project.sln > logfile')
Problem
What you'll find is that, after a push, the TortoiseHg client won't return until your os.system call returns. This may or may not be acceptable. In my shop a build took about 20 minutes, and my boss deemed that unacceptable.
Try #2
My solution was for the hook to return immediately after creating a REQUESTBUILD file in the root directory.
In .hg\hgrc
[hooks]
incoming = .hg\write_buildrequest_file.bat
In .hg\write_buildrequest_file.bat
echo REQUESTBUILD > \REQUESTBUILD
Meanwhile, I had a python script running in an infinite loop, checking for the presence of REQUESTBUILD.
In .hg\monitor_buildrequest_file.py
import os
import subprocess
import time

while True:
    if os.path.exists(r"\REQUESTBUILD"):
        os.remove(r"\REQUESTBUILD")  # consume the build request
        os.chdir("/yourrepo/.hg")
        retcode = subprocess.call([r"\python27\python.exe", "build.py"])
    else:
        time.sleep(10)
build.py would generate an HTML file of results, which the submitter would have to pull via their web browser.
There are other issues (pushes arriving while a build is already running, saving historical results, building out of the working directory vs. copying elsewhere), but this is the general idea.
You need to handle repository events with hooks.
So, after a commit event, you need to run a script that will perform your build accordingly.
I would like our Hudson deploy job to output a changelog of all Subversion changes since last deploy. Any ideas to how that is done?
It turns out that Hudson automatically generates a list of changes since last build.
And since we set up deployment in Hudson, each deployment has a list of commits along with their changes. The Hudson email-ext plugin makes them available as $CHANGES, so that it for instance can send an email with $CHANGES in the email body.
Of course, there are limits to this approach: only changes since the last deploy are included. So if you manually stop a deployment, or the deployment build breaks, then the next changelog only contains changes since that attempt, not since the last successful deployment.
Also see a related discussion in the Stack Overflow question Sending Subversion Change Log Info Via Hudson.
Add a build step in your Hudson job that runs a shell script:
svn log -r HEAD:PREV > ./changelog
to generate a list of the changes since the last commit.
You can also give -r a date or date range, e.g. -r {2014-01-01}:HEAD.
Can you determine your last deploy? (I assume you may not deploy with every build.) Some more information about your process would help.
If you tag your deployments with a normalised pattern, it would be fairly simple to create a counter that would allow you to get the right SVN revision; see the sketch below.
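For instance, a rough sketch of that idea, assuming deployments are tagged deploy-001, deploy-002, and so on (zero-padded so the names sort correctly; the repository URL and tag scheme are assumptions):
import subprocess

REPO = "http://svn.example.com/repo"  # hypothetical repository URL

# Find the most recent deploy tag (zero-padded names sort lexicographically).
tags = subprocess.check_output(["svn", "ls", REPO + "/tags"], text=True).split()
last_deploy = sorted(t for t in tags if t.startswith("deploy-"))[-1].rstrip("/")

# The revision that created the tag marks the last deployment.
log_line = subprocess.check_output(
    ["svn", "log", "-q", "-l", "1", REPO + "/tags/" + last_deploy], text=True
).splitlines()[1]
rev = log_line.split()[0].lstrip("r")

# Everything committed to trunk since that revision.
print(subprocess.check_output(
    ["svn", "log", "-r", rev + ":HEAD", REPO + "/trunk"], text=True
))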
I'd like to set up a Mercurial repository in a ClearCase static view directory. My plan is to clone from that directory, do all my real work in a Mercurial repo, and then push my changes back to the shared Hg/ClearCase dir.
I'd like to hear general suggestions on how this might work best, but I foresee one specific problem: ClearCase locks files as read-only until they are checked out. The way I'd like it to work is to set up a Mercurial hook that checks the files out before the push completes and rolls back the push if the checkout fails.
Should I be looking at the pretxncommit hook? Or the pull hook? Also, I'm not quite clear on how to write the actual hooks. I know the ClearCase command, but I'm not sure how to construct the hook to pass in the filename for each file in the changeset.
Suggestions?
The question I answered two days ago, How to bridge git to ClearCase?, can give you an illustration of the process.
I like to keep the ClearCase checkout/checkin step separate from the DVCS work:
I unlock files as I need them within the DVCS repo (made directly within the snapshot view), and then update the snapshot view, which tells me the "hijacked" files (which I can then easily check out and check in through the cleartool update GUI).
But if you have cloned your DVCS repo somewhere else and push it back to a local repo which is not the ClearCase snapshot view, what you can do is simply copy the view.dat hidden file of your snapshot view back into the root directory of the DVCS repo.
That simple file is enough to turn the local repo back into a ClearCase snapshot view!
Then you make all the files read-only (except those modified after a certain date, i.e. the time when you started working), to avoid ClearCase considering all the files as hijacked.
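A small sketch of that read-only pass (the repo path and cutoff time are assumptions; it simply clears the write bits on files older than the cutoff):
import os
import stat
import time

ROOT = "/path/to/dvcs-repo"      # hypothetical root of the copied-back repo
CUTOFF = time.time() - 8 * 3600  # hypothetical cutoff: work started ~8 hours ago

for dirpath, dirnames, filenames in os.walk(ROOT):
    dirnames[:] = [d for d in dirnames if d != ".hg"]  # skip the DVCS metadata
    for name in filenames:
        path = os.path.join(dirpath, name)
        if os.path.getmtime(path) < CUTOFF:
            # Remove write permission so ClearCase does not treat the file as hijacked.
            mode = os.stat(path).st_mode
            os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))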
The rest is similar to the first approach: update, checkout/checkin.