I would like our Hudson deploy job to output a changelog of all Subversion changes since the last deploy. Any ideas on how to do that?
It turns out that Hudson automatically generates a list of changes since the last build.
And since we set up deployment in Hudson, each deployment has a list of commits along with their changes. The Hudson email-ext plugin makes them available as $CHANGES, so that it can, for instance, send an email with $CHANGES in the body.
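For instance, the body template in the job's email-ext settings could look something like this ($PROJECT_NAME and $BUILD_NUMBER are other standard email-ext tokens; the wording is just an example):

Deployed $PROJECT_NAME build #$BUILD_NUMBER.

Changes since the last deploy:
$CHANGES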
Of course, there are limits to this approach: the changelog only covers changes since the last build. So if you manually stop a deployment, or the deployment build breaks, the next changelog only contains changes since that attempt, not since the last successful deployment.
Also see a related discussion in the Stack Overflow question Sending Subversion Change Log Info Via Hudson.
Add a build step in your Hudson job that runs a shell script:
svn log -r HEAD:PREV > ./changelog
This generates a list of the changes since the last commit.
You can also run svn log -r with a date or date range.
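For example, to limit the log to a date range (Subversion accepts dates in curly braces; these dates are placeholders):

svn log -r {2009-09-01}:{2009-09-15} > ./changelog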
Can you determine your last deploy? (I assume you may not deploy with every build.) Some more information about your process would help.
If you tag your deployments with a normalised pattern, it would be fairly simple to create a counter that would let you find the right SVN revision.
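A rough sketch of that idea, assuming a hypothetical layout where each deployment is tagged as ^/tags/deploy-<counter> (the repository URL, tag pattern, and GNU sort -V are all assumptions):

#!/bin/sh
REPO=https://svn.example.com/repo  # hypothetical repository URL
# Find the most recent deploy tag by its counter.
LAST=$(svn ls "$REPO/tags" | grep '^deploy-' | sort -V | tail -n 1 | tr -d '/')
# Revision in which that tag was created, i.e. the last deployment.
REV=$(svn log -q --limit 1 "$REPO/tags/$LAST" | awk '/^r[0-9]/ {print substr($1, 2); exit}')
# Changelog of everything committed to trunk since then.
svn log -r "$REV:HEAD" "$REPO/trunk" > ./changelog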
Docker Hub's Automated Build documentation talks about setting up builds based on branches and tags, but does not mention Mercurial bookmarks or how they interact with the build configuration.
Can you configure automated builds based on Mercurial bookmarks, and have bookmark pushes trigger the builds?
Short answer: Configure the bookmark names as if they were tags, and it should work.
Longer answer: from some experimentation as of Sep. 2016, it seems that bookmark names actually match against both the tag and branch builds configured in Docker Hub. However, when pushing a bookmark update, only tag builds matching the name get triggered automatically; branch builds won't get triggered automatically, but can still be triggered manually.
How would one make a job in Jenkins that polls source control (i.e. Mercurial) so that changes trigger the job, but without actually cloning/pulling the monitored repo?
If it already has a local clone and you just don't want to update it, you can run hg incoming, whose exit code lets you know if there's new stuff. If you don't have a local clone, you'll need to run something like hgweb on the box that's serving the repo, and then poll the raw version of the latest commit and watch for changes: http://hg.intevation.org/mercurial/crew/raw-rev/tip
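A rough sketch of both variants (paths and URLs are placeholders); hg incoming exits with 0 when there are incoming changesets and 1 when there are none:

#!/bin/sh
# Variant 1: a local clone exists; check for new changesets without pulling.
cd /path/to/local-clone || exit 2
if hg incoming --quiet >/dev/null; then
    echo "new changesets available"  # trigger the Jenkins job here
fi

# Variant 2: no local clone; poll the raw tip changeset served by hgweb
# and compare its node ID against the one seen last time.
NEW=$(wget -q -O - http://hg.example.com/repo/raw-rev/tip | grep '^# Node ID')
OLD=$(cat /tmp/last-tip 2>/dev/null)
if [ "$NEW" != "$OLD" ]; then
    echo "$NEW" > /tmp/last-tip
    echo "tip changed"               # trigger the Jenkins job here
fi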
The question Is there a way to keep Hudson / Jenkins configuration files in source control? shows how to save Hudson configuration changes to an SCM (i.e. a "backup with history").
My question is: can the Hudson configuration be pulled from an SCM. In other words, to change a job configuration, you add a changeset to the SCM repository first. Hudson, at the start of a build, pulls the configuration from the SCM and runs as usual.
Of course, it would also be ideal to make the entire job configuration screen read-only (or as minimal as possible).
Why would I want this?
I want the SCM to be where a configuration change begins. Why? So the changesets in the SCM reflect when the configuration change was made in the flow of changesets for the project, i.e. it imposes a chronological ordering on the project changes.
I don't want to use the security feature (i.e. no need for a login, etc)
I searched and could only find plugins for backing up or saving the configuration, but none that "pulled" the .xml files.
Thanks,
John
I haven't tried it myself, but you might be able to do this with a custom build that does the following on a schedule:
Sync all of the job configuration files from your SCM into the Hudson jobs directory
Do an HTTP GET to [Your Hudson URL]/reload - this is the equivalent of clicking the "Reload Configuration from Disk" link on the "Manage Hudson" page.
I don't think you could have each job update its own configuration from SCM every time it runs, because the configuration will have already been loaded by the time the job polls the SCM for changes.
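An untested sketch of such a scheduled script, assuming the job configs live in Subversion and Hudson runs without security at a hypothetical URL:

#!/bin/sh
HUDSON_HOME=/var/hudson  # hypothetical Hudson home directory
# Step 1: overwrite the job configs with the versions from SCM.
svn export --force https://svn.example.com/hudson-config/jobs "$HUDSON_HOME/jobs"
# Step 2: the equivalent of "Reload Configuration from Disk".
wget -q -O /dev/null http://hudson.example.com/reload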
I'd like for a build to be done (on the server) each time a push is made to our central Mercurial repository.
Builds are usually kicked off on our build server by running a Visual Build file either manually or via a scheduled task.
What are the ways of achieving this?
Simple, low impact solutions are preferred.
As Pablo suggested, you can do this with a hook, but you'll need an incoming hook on the server side. This hook runs "after a changeset has been pulled, pushed, or unbundled into the local repository" (hgrc manpage).
Edit the .hg/hgrc file of the repository located on the server and define your build hook as follows:
[hooks]
incoming = /path/to/executable-build-script
Of course, the build script called here just needs to be a trigger for whatever build process you actually use.
Note that an incoming hook runs for every single changeset in a push. If you don't want this, use a changegroup hook -- it runs only once for each push, no matter how many changesets it carries.
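So, for one build per push, the hook section would instead look like this:

[hooks]
changegroup = /path/to/executable-build-script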
Another way, in addition to the hooks that Pablo mentions, is to set up a continuous integration server, like TeamCity. Then you could ask TeamCity to monitor your repository, pull new changesets and start the visual build script for you.
Disclaimer
These findings are for the TortoiseHg client and a Mercurial server behind Apache on Win32.
Try #1
The naive solution would be to make your push kick off the build.
In .hg\hgrc
[hooks]
incoming=.hg\build.py
In build.py
import os
os.system(r'\Progra~2\Micros~2.0\Common7\IDE\devenv /build release project.sln > logfile')
Problem
What you'll find is that, after a push, the TortoiseHg client won't return until your os.system call returns. This may or may not be acceptable. In my shop a build took about 20 minutes, and my boss deemed that unacceptable.
Try #2
My solution was for the hook to return immediately after creating a REQUESTBUILD file in the root directory.
In .hg\hgrc
[hooks]
incoming = .hg\write_buildrequest_file.bat
In .hg\write_buildrequest_file.bat
echo REQUESTBUILD > \REQUESTBUILD
Meanwhile, I had a python script running in an infinite loop, checking for the presence of REQUESTBUILD.
In .hg\monitor_buildrequest_file.py
import os
import subprocess
import time

while True:
    if os.path.exists(r"\REQUESTBUILD"):
        os.remove(r"\REQUESTBUILD")  # consume the build request
        os.chdir("/yourrepo/.hg")
        retcode = subprocess.call(r"\python27\python.exe build.py")
    else:
        time.sleep(10)
build.py would generate an HTML file of results, which the submitter would have to pull via their web browser.
There are other issues (pushes while a build is commencing, saving historical results, building out of the working directory vs copying elsewhere) but this is the general idea.
You need to handle repository events with hooks.
So, after a commit event, you need to run a script that performs your build.
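For example, in the repository's .hg/hgrc (the commit hook fires after a changeset is created in the local repository; for a central repository that receives pushes, use the incoming or changegroup hooks shown in the other answers):

[hooks]
commit = /path/to/your-build-script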
I created an hg repository containing my source tree. I want to keep the first version of some files, such as Makefile, in the repository, and have hg not see them as modified even though I modify them.
The original problem is that ./configure usually modifies the Makefile, but I don't want the generated build files committed to the repository. So I want to keep only the first version of configure and Makefile in the repository, so that everybody who clones my repository can run ./configure themselves without dirtying the repository.
I tried hg remove and hg forget, but those stop tracking the files and also delete them in the next revision of the repository.
.hgignore doesn't do the trick either, since it doesn't apply to files that are already tracked.
I could hg revert every time I run ./configure or make, but that's not an efficient way.
Are there any better ways?
It's usually good form not to track the configure script at all. There are some reasons for this:
It's huge. I've seen code bases where the configure script and helper macro libraries were more than ten times the size of the actual code being compiled.
When other developers make changes to configure.in(.ac), they are going to need to commit a new configure script. If three people do that, there's a good chance that Mercurial will require at least one of them to manually resolve a merge conflict in configure itself. Keep in mind, configure is machine generated, attempting to read it (much less resolve merge conflicts) may make your eyes bleed.
Generally, you'll offer a program in source form via two methods:
Download of a release archive (e.g. foo-1.2.3-rc2.zip); this can contain the configure script.
Downloading the repository directly using Mercurial. If they want to work with that, they'll need to have autoconf installed.
In the root of my repositories, I usually include a file called autogen.sh that runs all of the steps needed (aclocal, autoconf, ...) and also alerts the user if they need something installed, e.g. "Could not find tool aclocal, please install the autoconf package."
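A minimal autogen.sh along those lines might look like this (the tool list and the autoreconf invocation are just one common convention; adjust for your project):

#!/bin/sh
# Bail out early with a helpful message if a required tool is missing.
for tool in aclocal autoconf automake; do
    if ! command -v "$tool" >/dev/null 2>&1; then
        echo "Could not find tool $tool, please install the autoconf/automake packages." >&2
        exit 1
    fi
done
# Regenerate configure and friends from configure.ac / Makefile.am.
autoreconf --install --force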
It's really best to just go with the autogen.sh method. This means tracking only configure.in (or configure.ac) and the associated Makefiles (from Makefile.in). Let each builder configure their own tree, and provide a distclean target to remove all the files configure generates. Finally, provide a maintainer-clean target to remove anything that the configuration suite itself generated, e.g. configure.
That should help make nightly builds easy.
You could try and setup a pre-commit hook which would always restore the original Makefile content if found in the changeset.
The SO question illustrates reading the content of the changeset to be committed.
Make sure to use the pre-commit hook, and not precommit.
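An untested sketch of that idea (pre-commit is the real Mercurial command hook name; the script path and the revert strategy are assumptions):

In .hg/hgrc

[hooks]
pre-commit.keepmakefile = /path/to/keep-makefile.sh

In keep-makefile.sh

#!/bin/sh
# If configure has dirtied the tracked Makefile, put the committed
# version back before the commit is recorded.
if hg status -m Makefile | grep -q '^M'; then
    hg revert --no-backup Makefile
fi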