How to clone a googlecode mercurial repository in jenkins - mercurial

I am having a problem triggering a repository clone of a googlecode project.
I keep receiving the following error:
Started by user anonymous
$ hg clone --rev default "https://username@demo.projectname.googlecode.com/hg/" "F:\Hudson\jobs\project Demostration project\workspace"
abort: demo.projectname.googlecode.com certificate error: certificate is for *.googlecode.com, googlecode.com, *.codespot.com, *.googlesource.com, googlesource.com (use --insecure to connect insecurely)
ERROR: Failed to clone.
--template {node}
Does anyone know how to tell Jenkins it is safe to use that certificate? In what textbox do you place the --insecure option?

That's a relatively new command line option (1.8.3 I think) to get around a relatively new practice of actually checking certificates (1.8.2 I think). It's likely not exposed in the Jenkins UI. Some things you could do to work around it:
put the server's cert's fingerprint in a whitelist in your (the Jenkins user's) hgrc (see the sketch after this list)
wrap Mercurial in a quick shell script that passes --insecure
clone from the non-https version of the google URL (I think they still allow that)
configure the CACerts for Mercurial either globally (/etc/mercurial/hgrc) or in the Jenkins user's ~/.hgrc
Any of those should work and most of them are explained here: https://www.mercurial-scm.org/wiki/CACertificates
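For the fingerprint-whitelist option (the first workaround above), the hgrc of the user Jenkins runs as could contain something like the following; the fingerprint value is a placeholder, so substitute the one the real host presents (for example via the openssl s_client command shown further down this page):
[hostfingerprints]
demo.projectname.googlecode.com = aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd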

Related

Hgweb and changegroup hook not working

I'm using hgweb to publish my local repositories.
/project_path/project_name/.hg/.hgrc has:
[hooks]
changegroup.bitbucket = hg push ssh://hg@bitbucket.org/user/repo
When I use hg serve, all changegroup hooks work fine, but when I use hgweb through nginx with FastCGI they don't work at all. I need this functionality to have some kind of backup.
It's most likely Trust.
Mercurial needs to trust an hgrc file before it will parse/run it. If your /project_path/project_name/.hg/.hgrc file is owned by you, then when you run hg serve with Mercurial running as you, it's parsed/used. However, nginx runs as its own user, probably nginx, which doesn't trust files owned by you, so when it invokes Mercurial those files are ignored (see Note).
That Mercurial trust link gives a better explanation and talks about how to say "nginx trusts X", but if it's a single-user system, or you want everyone to trust you, you can just throw a trust block in the system-global /etc/mercurial/hgrc file saying everyone trusts X (sketched after the note below).
Note: It doesn't actually just ignore those files; it puts a warning on STDERR, which in apache-land you'd find in your error.log, but in nginx-land no one ever seems to find those warnings, so I've no idea where nginx puts them.
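A minimal sketch of that system-global trust block, assuming the hgrc files in question are owned by a user named deploy (the user and group names are placeholders):
[trusted]
users = deploy
groups = deploy
With this in /etc/mercurial/hgrc, any Mercurial process on the machine, including the one nginx spawns, will parse hgrc files owned by that user.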
I assume you have some kind of authentication issue here. When running hg serve from the command line, your SSH credentials are provided by an ssh-agent running in the background.
However, this does not work when running hgweb as a service, because there is no ssh-agent running in the background. Even if you started an ssh-agent, there would be no way to enter the passphrase for your SSH key.
Bitbucket uses SSH keys to authenticate you, so you can't just add your password to the above hg push command.
One possible solution would be to not use Bitbucket as your backup, but a different Mercurial server that lets you provide a simple password on the command line.
I'm afraid I can't help you further with this.
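One workaround the answers above don't spell out (so treat it as an assumption) is to give the web server's user a passphrase-less deploy key and point the hook at it explicitly via hg's --ssh option; the key path here is a placeholder:
[hooks]
changegroup.bitbucket = hg push --ssh "ssh -i /etc/nginx/hg_deploy_key" ssh://hg@bitbucket.org/user/repo
Since the key has no passphrase, no agent or interactive prompt is needed, but anyone who can read the key file can push, so protect it accordingly.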

Authenticating across mercurial subrepositories

I've got a mercurial repository, which pulls in dependencies using the subrepository functionality (as defined in the .hgsub file), but I'm struggling to get this working in TeamCity.
I've enabled the mercurial_keyring extension in order to save credentials (so when TeamCity provides authentication details for the root repository, it remembers them for the subrepositories). I've added an [auth] section to mercurial.ini too:
[auth]
bitbucket.schemes = https
bitbucket.prefix = https://bitbucket.org/xyz
bitbucket.username = xyz
If I run hg clone from the command line, I get prompted for a password once, and all is good. But the initial checkout when run via TeamCity fails with
VCS root: mercurial: https://bitbucket.org/xyz/projectA {instance id=23, parent id=1}, due to error: 'cmd /c hg update -C -r 4a08f587bb1f' command failed.
stderr: abort: http authorization required
stdout: pulling subrepo src\Common.Library from https://bitbucket.org/xyz/common.library
What am I missing, or am I going about this in completely the wrong way? Many thanks!
It seems that passing in credentials directly from TeamCity doesn't work with mercurial_keyring, but if I specify both username and password in plaintext in the mercurial.ini file (making sure it's accessible under the account the TeamCity build agent is running under), then this works.
The mercurial.ini file can be placed at <mercurial install path>\mercurial.ini if it is not picked up from the user path.
Not ideal, but a solution... if anyone else finds a better one, please let me know.
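For reference, the plaintext variant described above would look roughly like this in mercurial.ini (the password value is obviously a placeholder and is stored unencrypted):
[auth]
bitbucket.schemes = https
bitbucket.prefix = https://bitbucket.org/xyz
bitbucket.username = xyz
bitbucket.password = your-password-here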
Maybe it got fixed in later versions of TeamCity, but the following works for me:
Configure the build agent service to run under a domain account with access to the HG repositories (both root and subrepos).
Enable mercurial_keyring on the build agent and add an [auth] section to the mercurial config.
Try to clone the repository manually and enter the password (see the sketch after this list). No need to wait until the whole repo is cloned; it can be terminated once the "requesting all changes" message is shown.
Have fun: now the service will use the keyring.
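A sketch of that seeding step on Windows, assuming the agent runs under DOMAIN\buildagent (name is a placeholder) and using the repository URL from the question:
runas /user:DOMAIN\buildagent cmd
hg clone https://bitbucket.org/xyz/projectA
Enter the password when prompted, then press Ctrl+C once "requesting all changes" appears; the keyring has already stored the credentials at that point.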
Probably the [auth] section shouldn't be added to the mercurial.ini for the TC agent at all. TeamCity passes --config auth... options to hg. I would also recommend not using mercurial_keyring but setting the username and password in the VCS root instead; this is both secure and shared between different TC agents.
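Illustratively, and only as an assumption about what such an invocation looks like (TeamCity builds it internally), passing auth settings on the command line has this shape:
hg --config auth.bb.prefix=https://bitbucket.org/xyz --config auth.bb.username=xyz --config auth.bb.password=secret pull
If TeamCity supplies these itself, a conflicting [auth] block in mercurial.ini can get in the way, which is why the answer suggests leaving it out.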
Not sure about Bitbucket, but in other cases use of the https scheme can require certificate configuration. This can be configured in mercurial.ini:
[web]
cacerts =
[hostfingerprints]
# hides mercurial warnings
domain-name = ab:cd:...:01
And one last part: depending on .hgsub, you might need to use the VCS checkout mode "Automatically on agent" in the TeamCity Version Control Settings.
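For context, the .hgsub in the question presumably maps the subrepo path to its remote URL, roughly like this (path and URL inferred from the error message above):
src/Common.Library = https://bitbucket.org/xyz/common.library
Each line maps a working-directory path to the repository it is pulled from, and every one of those URLs needs credentials available on the agent as well.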

How to checkout the source code from Mercurial

I need to download the source code from Mercurial.
$ hg clone xmppframework.googlecode.com/hg xmppframework
warning: xmppframework.googlecode.com certificate with fingerprint b1:af:83:76:f3:81:b0:57:70:d8:07:42:c8:c1:b3:67:38:c8:7a:bc not verified (check hostfingerprints or web.cacerts config setting)
requesting all changes
adding changesets
adding manifests
adding file changes
I tried it in the terminal using this link to download the source code, but the command failed.
Can anyone help me get rid of this?
Thanks to all,
Madan.
The problem is that you didn't add Google's fingerprint to your hgrc file.
There are two ways to solve the problem:
Use http instead of https; the disadvantage is that your traffic isn't encrypted anymore.
hg clone http://xmppframework.googlecode.com/hg/ xmppframework
Or add the fingerprint to your hgrc file:
Please note that Google Code sometimes changes the fingerprint. When the fingerprint below doesn't work, you can use this command (taken from this question) to detect the current one:
$ openssl s_client -connect xmppframework.googlecode.com:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
[hostfingerprints]
xmppframework.googlecode.com = b1:af:83:76:f3:81:b0:57:70:d8:07:42:c8:c1:b3:67:38:c8:7a:bc
Edited because original answer was ugly.
It's not an SVN link, it's a Mercurial link, so you can't use the svn tool. You need Mercurial for this.

Unbundle throws " abort: error: ftp error: no host given" when using a local network share with an UNC path

Before explaining my problem, let me describe the Mercurial setup.
We have the following repos:
RELEASE
DEVELOPMENT
BUGFIX
All the above repos run on a central server using IIS and hgwebdir.cgi.
Now coming to the problem,
I clone a local repo from DEVELOPMENT repo.
I make changes to the clone and commit (Not push).
I make a bundle from the clone (see the sketch below the question) and pass the bundle to QA, who has cloned the RELEASE repo.
Now I try to apply the bundle to the RELEASE repo clone using hg unbundle
I get an error, abort: error: ftp error: no host given
What am I doing wrong? Can you give a solution to the above problem, keeping a Windows setup in mind?
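As background, the bundle in step 3 is typically created against the repository it will later be applied to; a sketch, with the central-server URL as an assumption:
hg bundle changes.hg http://central-server/hgwebdir.cgi/RELEASE
This writes only the changesets missing from the given repository into changes.hg, which QA then applies with hg unbundle.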
It really sounds like you have a syntax error in your unbundle command. The normal usage is just:
hg unbundle c:\path\to\the.bundle
there's no ftp involved unless you're trying to use an ftp:// URL, which isn't supported. Is it possible you have a directory named ftp and the parser is mistaking it for a component in an ftp URL?
Also, most folks wouldn't use bundles in the scenario you're describing. They'd just do:
hg push URL-or-file-path-to-QA
and push directly to QA's own repo (not to RELEASE)
People generally use bundles only when a network connection isn't possible or practical.
I experienced the same problem; I don't think hg likes UNCs.
I mapped \\server\DevSourceCode\Mercurial to R: and it worked fine, see below:
R:\Repositories\myproj>hg unbundle \\server\DevSourceCode\Mercurial\ChangeBundles\myproj_changes.hg
abort: error: ftp error: no host given
R:\Repositories\myproj>hg unbundle R:\ChangeBundles\myproj_changes.hg
adding changesets
adding manifests
adding file changes
added 0 changesets with 0 changes to 139 files
(run 'hg update' to get a working copy)
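A sketch of the drive-mapping step itself, assuming the standard Windows net use command and the share name from the transcript above:
net use R: \\server\DevSourceCode\Mercurial
After that, the R:\... paths in the unbundle command resolve without going through the UNC parsing that triggers the bogus ftp error.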

Can not clone mercurial (hg) repository via http

I can't clone my repository via http:
abort: 'http://MYREPO' does not appear to be an hg repository!
First, I created a new repo with hg init MYREPO, followed by adding some files and committing.
The dir with my repo is password protected, but there is no sign of a problem because of it. I tried both methods of cloning:
(on my local machine)
hg clone http://MYREPO my_repo
and
hg clone http://user:password@MYREPO my_repo
Permissions of repo dir are: drwxrwxr-x
I can clone this very repository on the remote machine (the one the repo is on) without any problems.
What could be possibly wrong?
UPDATE:
Looks like you're getting confused between the repository and the hostname.
If running "hg serve", use "hg clone http://USER@HOST:8000", where HOST can be your machine's IP or the hostname (type "hostname" on linux or try "ping localhost"). You can change the default port from 8000 by passing --port #### to hg serve.
If you want to do it over ssh, use "hg clone ssh://USER@HOST//PATH/TO/YOUR/REPOSITORY". Suppose you made a repository in your home directory called MYREPO; then you would do this: "hg clone ssh://USER@HOST/~/MYREPO"
You can only clone your repo via http if something is serving that repo over http. Mercurial provides a built-in http server for you. Run "hg serve" while inside your repo, then attempt to clone it from another location (or another command shell). If you just want a local clone, you don't need to use http ("hg clone ").
Also, try "hg help clone" and "hg help serve" for details.
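A compact sketch of that hg serve round trip (host name and port are placeholders):
cd /path/to/MYREPO
hg serve --port 8000
# from another shell or machine:
hg clone http://HOST:8000/ my_repo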
Weirdly, cloning over ssh requires a non-intuitive extra forward slash.
this works for me on a host with ssh running on port 43211
hg clone ssh://example.com:43211//repos/myRepo ./myRepo
the double slash after the port number works, but a single slash there results in the ".hg not found" error
besszero is right, but why don't you clone using SSH if you are going to use a username and password anyway?
hg clone ssh://machine_ip//your/repo/location your_repo
It's also safer if you don't want to open another port for Mercurial's HTTP server and you don't need the hgweb features; the traffic is also encrypted. The only con is that you have to log in to check out, but HTTP doesn't work for pushing back changes, at least not in my experience.
Argh... One needs to be careful with the .htaccess configuration. In my case I needed to add 'hgwebdir.cgi' to the path to clone... Thanks for the answers though!
SSH seems logical, but somehow I couldn't use it with a user other than my local one:
hg clone ssh://MY_REMOTE_USER@MYREPO
remote: abort: There is no Mercurial repository here (.hg not found)!