I have a Mercurial repository (on Bitbucket) with some code (Java) and I want to do CI builds on a cloud-based Jenkins server (at Jelastic, running on CentOS). My problem is that I haven't been able to do a proper installation of Mercurial on the Jenkins server.
The Jenkins build fails with the following message:
ERROR: Failed to clone https://bitbucket_jenkins_user:some_password@bitbucket.org/repo_owner/my_repository because hg could not be found; check that you've properly configured your Mercurial installation
Setup information
It's a private Mercurial repository, hosted at Bitbucket
In Bitbucket I have set up a Service to trigger the Jenkins build, after a Push has happened
I have defined a specific Bitbucket Jenkins user for my Mercurial repository; it has read-only rights and logs in using simple HTTPS authentication
Jenkins runs on a Tomcat 7, hosted in a Jelastic cloud environment, on CentOS 6
The Mercurial plugin was installed through the Jenkins interface, by Manage plugins
The build is configured as being triggered remotely (by the service defined in Bitbucket)
Build results are the same when started manually and when triggered from a push to the repository
When I first did this I was under the impression that installing the Mercurial plugin in Jenkins would be enough, and that it would also install the Mercurial binaries needed to connect to the repository and get the code. I have since realized that I was wrong, and that on the Manage Jenkins / Configure System page I need to specify my Mercurial installation.
Questions
Is it possible to set up a Mercurial installation without SSH access, i.e. without being able to run "yum install mercurial" on the server?
In the Jenkins interface, what can I specify when choosing the "Install Automatically" option?
When defining an installer, I have experimented with the "Extract zip/tar.gz" option, but what can I write as the "Download URL for binary archive"?
Jenkins also offers an installer option of "Run command". What kind of commands could that be, maybe a "yum install ..." or "rpm ..."?
Since my server is cloud-based, getting SSH access is a paid add-on which I would prefer to avoid. But if that is my only option I will of course do it, thereby getting access to run commands on the server. However, running "yum install mercurial" on CentOS seems to give only the 1.4 version of Mercurial. The current version as I write this is 2.6.3; would I need to download the sources and compile it myself, or is it possible to get that as a binary for CentOS somewhere?
The Mercurial Plugin page has a section on how to use the Auto Installation options to install Mercurial using ArchLinux packages.
"The plugin supports generic tool auto-installation methods for your Mercurial installation, though it does not publish a catalog of Mercurial versions. For users of Linux machines (with Python preinstalled), you can use ArchLinux packages. For example, in /configure under Mercurial installations, add a Mercurial installation with whatever Name you like, Executable = INSTALLATION/bin/hg, Install automatically, Run Command, Label = linux (if desired to limit this to slaves configured with the same label), Command = [ -d usr ] || wget -q -O - http://www.archlinux.org/packages/extra/i686/mercurial/download/ | xzcat | tar xvf - (or …/x86_64/… for 64-bit slaves), Tool Home = usr, and configure a job with this installation tied to a Linux slave."
see https://wiki.jenkins-ci.org/display/JENKINS/Mercurial+Plugin
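Spelled out for a 64-bit slave, the Run Command installer from that quote would be roughly the following (a sketch only; the download URL is the one given on the wiki and may no longer point at the current Mercurial version):

[ -d usr ] || wget -q -O - http://www.archlinux.org/packages/extra/x86_64/mercurial/download/ | xzcat | tar xvf -

Tool Home is then set to usr and the Executable to INSTALLATION/bin/hg, as described in the quoted text.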
Related
I am trying to add a Perforce repository to Bamboo, but have not been able to get it working. I am getting the following error while adding the repository:
I have downloaded the latest Bamboo 30-day trial and installed it on my local machine. Bamboo is installed correctly, as I am able to create an automated build using MSBuild with a BitBucket repository.
My bamboo instance is running on http://localhost:8085.
I have downloaded the latest P4D and P4V from https://www.perforce.com/downloads and installed them on the same machine. The Perforce server and client are correctly configured, as I am able to check in source code to the server using the client. It is running on port 1666. Both client and server are installed in C:\Program Files\Perforce.
I have added a Bamboo Server Capability for Perforce.
On providing the Perforce details while adding the repository, I get the following three alerts from the p4v client:
The user ID and password are correct, and so is the URL with the port. I am able to get data from the repository with these details.
p4v.exe is already in my PATH, and when I run p4v -help at the command prompt I get the following result. I don't see any option named 'info'.
p4v.exe is the GUI program for Perforce.
There is a command-line program for Perforce, called p4.exe.
When Bamboo says that it wants The location of the p4 client executable, you have to specify the path to p4.exe, not the path to p4v.exe.
When you downloaded and installed p4v.exe on your system, you should also have received a p4.exe (if not, download that program, too). Then specify the path to p4.exe to Bamboo.
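As a quick sanity check from a Windows command prompt (assuming the install path mentioned in the question), p4.exe, unlike p4v.exe, does understand an 'info' command:

"C:\Program Files\Perforce\p4.exe" info

If that prints the server address, user name and client details, that same path is the one to give Bamboo as the p4 executable.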
I have installed Git, Node.js and NPM on my machine and have successfully been able to run a progressive web app in Chrome through localhost. Now, what about when I want to run this web app on a public server? Will I have to install Git, Node.js and NPM on my web hosting account? Or are these components generally already installed on web hosting servers (for example on hosts that provide something like cPanel)?
By the way, would you be able to recommend any good FTP application for Mac that can upload zillions of files easily (not 1 by 1)?
You will need Node on the host server, but not NPM or Git (although NPM will be there by default when you install Node).
Typically what you want is to set up a "Continuous Integration / Continuous Delivery" platform. You can use free options like Travis or Jenkins, or you can just use a shell script and run it through AWS Lambda or something like that.
Very simplified version:
You push code to Git
CI/CD detects the code check-in (polls) and pulls latest from Git on an Agent
CI/CD runs a "build" on the Agent, which for Node is at least npm install but can include Grunt, Gulp, Webpack, or a host of other useful steps.
CI/CD publishes the result of the build to a target server.
Here you have five machines involved:
Git server
Your local dev box
CI/CD server
CI/CD agent
production server
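As a very rough sketch of what the build-and-publish steps above could look like as a shell script on the agent (the repository URL, server name, deploy user and paths below are purely illustrative):

#!/bin/sh
set -e
git clone https://example.com/your/repo.git app   # pull the latest code onto the agent
cd app
npm install                                       # the minimal Node "build"
npm run build                                     # Grunt/Gulp/Webpack etc., if the project defines a build script
rsync -az dist/ deploy@your-server:/var/www/app/  # publish the result to the target server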
Hope this helps you get started in the right direction.
I am using agent-side VCS checkout mode and have also defined teamcity.hg.agent.path = c:\program files\mercurial\hg.exe in the agent.properties file. Still, the VCS is not picking up this setting and is giving me the following error:
Test connection failed in Dev :: Stocks :: Build and Package. Cannot find mercurial executable at path 'hg'
I have also defined the path in the Windows environment variables. Mercurial is not installed on the server machine; I have read in the documentation that if you are using agent-side checkout then this is not required. Please advise on what I am missing here.
thanks
According to the documentation, if you're using server-side checkout, then you don't need to install Mercurial on your agent.
But when you set up version control as a VCS root, you do need the TeamCity server to be able to talk to the mercurial server so it can do things like detect changes. The JetBrains documentation specifically says that if you're using a Mercurial VCS root:
Mercurial should be installed on the server machine, and, if agent-side checkout is used, on the agents.
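As an aside, backslashes are escape characters in Java .properties files, so if the property from the question really is being ignored it may be worth writing the path with forward slashes (a sketch only, assuming the property name is right for your TeamCity version):

teamcity.hg.agent.path=C:/Program Files/Mercurial/hg.exe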
I have a Jenkins instance with the Mercurial plugin installed on a Windows 2008 R2 machine. I am trying to define a build job for a Maven project on my KilnHQ repository. The HG clone step fails to retrieve code due to authentication failure.
I've read the Kiln documentation and they only support HTTPS; there is no SSH support. However, the Jenkins hg plugin does not allow me to enter a username/password.
How can I successfully make my build work through Jenkins?
With (all) http(s):// URLs it's legal to put the username and password directly in the URL, and Mercurial supports that. If the Mercurial plugin for Jenkins doesn't break it, then you can probably do that:
https://user:pass@kilnhost.com/path/to/repo
If that doesn't work, you can probably put the authentication information in the [auth] section of the Mercurial.ini file in the Jenkins user's home directory: http://www.selenic.com/mercurial/hgrc.5.html#auth
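A minimal sketch of such an [auth] block (the host name and credentials are placeholders; the key names come from the hgrc documentation linked above):

[auth]
kiln.prefix = kilnhost.com
kiln.username = your_kiln_user
kiln.password = your_kiln_password
kiln.schemes = https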
I'm trying to set up a Debian server with Apache2 as a front end to Tomcat 7 running Jenkins - which is working - but I want Jenkins to monitor a private Mercurial repository hosted at Bitbucket, and I'd like to use SSH. (I've tried with plain HTTPS but still no joy; there is a question and answer here on SO that recommends using SSH.)
I've deployed Jenkins using the hot-deploy capability of Tomcat by dropping the Jenkins .war file into /webapps. I can generate a public/private key pair on the server and have added the public key to the Bitbucket account - but how do I make Jenkins use my private key to authenticate when accessing a private repository on Bitbucket?
I've seen a blog post that uses apt-get to install Hudson on Debian, which creates a Hudson user (so I could add a private key for that user), but installing via the Tomcat autodeploy doesn't add a Jenkins user. If there is no Unix user set up for Jenkins, can I configure one that Jenkins would use when authenticating?
If you can control the Mercurial command line from within Jenkins, you can pass the --ssh option to tell Mercurial what SSH command to invoke. Something like this would probably work:
hg --ssh '/usr/bin/ssh -i /path/to/private/key' ...
Alternatively, you can figure out what user Jenkins is running as (it has to be some user) and put the key file in that user's ~/.ssh, even if that's /root/.ssh/.
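If passing --ssh on every invocation is awkward, the same setting can go into the .hgrc of whatever user Jenkins runs as (a sketch; the key path is just an example):

[ui]
ssh = ssh -i /path/to/private/key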