Hglib: How to connect to a remote repo over ssh? - mercurial

Using "raw" mercurial API I can write just something like:
peer = hg.peer(ui.ui(), {}, 'ssh://hg@bitbucket.org/some/project')
After the connection is established, I can work with remote repo.
I'm failing to understand how I should work with a remote repo using hglib.
Naïve approach, i.e. using something just as simple as:
hglib.open("ssh://hg@bitbucket.org/some/project")
does not work, and the exception raised does not make anything clearer to me.
My question: With hglib, how can I open connection to a remote ssh-repo?

The hglib documentation does not say much about how to use it. It is best to already have your public key copied to the server and its RSA fingerprint in your ~/.ssh/known_hosts.
But you can clone a repo with:
hglib.clone(source="ssh://hg@bitbucket.org/some/project")
You can of course add a destination folder (e.g. dest="/path/to/blah").
If you already have an existing hg repo cloned, you can change some settings in your hgrc before trying hglib.open(), so hg uses the ssh URL, like so:
[ui]
username = some_user
[paths]
default = ssh://hg@bitbucket.org/some/project
EDIT
I think for hglib.open to work, you have to have a repo checked out. In my case I refer to the path where my repo is cloned. So:
hglib.open('/path/to/cloned/repo')
To do this over SSH you have to edit your /repopath/.hg/hgrc as mentioned above.
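For completeness, here is a minimal sketch of that workflow with hglib; the destination path and the SSH URL are placeholders, and it assumes the server already accepts your SSH key:

import hglib

# Clone over SSH first, then open the local clone and talk to the remote
# repo through it.
hglib.clone(source="ssh://hg@bitbucket.org/some/project",
            dest="/path/to/cloned/repo")

client = hglib.open("/path/to/cloned/repo")
client.pull()    # pulls from the 'default' path recorded by the clone
client.update()  # bring the working directory up to date
client.close()

In other words, hglib never opens the remote repo directly; it always goes through a local clone.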

Related

How to automatically keep remote mercurial repository at tip after pushes

I created a Mercurial repository on a file server's network share.
Is it possible to automatically get the remote repository updated to tip if somebody pushes their changes?
Some other people (pure users) may copy the repository's content rather than cloning it (their copy lacks .hg), and I want them to get the newest version.
Since it is a share on a simple NAS it would be good if the pushing client could invoke this update.
It seems that a hook on the changegroup event can solve this.
Add the following lines to the repository's configuration file (repo/.hg/hgrc)
[hooks]
changegroup = hg update
This solution was suggested on a slightly different question:
Cloning mercurial repo to the remote host
At least under Windows this seems to work only on local repositories. The reason is that hg tries to run cmd on the remote path, which fails because cmd does not support UNC paths as the current directory.
Explicitly adding the repository URL fixes this, but then it's not client-independent anymore.
[hooks]
changegroup = hg update -R %HG_URL%
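Another option is an in-process Python hook, which avoids spawning cmd for the update entirely. This is only a sketch with a hypothetical file name and function, referenced from the hgrc as changegroup = python:/repopath/myhooks.py:update_wd:

# myhooks.py -- hypothetical in-process variant of the same hook
from mercurial import commands

def update_wd(ui, repo, **kwargs):
    # Update the repository's working directory after a push.
    commands.update(ui, repo)
    return False  # a truthy return value would mark the hook as failed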
You could treat the server repository as your "local working directory" and then PULL from your own PC to that location. If you use hg pull --update then it will automatically update the working folder to the latest.
One way to do this is to log in to your NAS and run the hg command-line program there. Alternatively, you could mount the NAS folder on your local PC, chdir to its mapped local folder, and use your local hg client to do so.
This might seem like an odd thing to do but Mercurial doesn't care which is the "clone" and which is the "server", you can swap them interchangeably in your workflow.
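If the share is mounted locally, that server-side pull can also be scripted. A small sketch with hglib, where the mount point and the source URL are placeholders:

import hglib

# Open the repository through the mounted NAS share and pull from your
# own clone, updating its working directory in the same step.
server = hglib.open("/mnt/nas/project")
server.pull(source="ssh://user@your-pc/path/to/your/clone", update=True)
server.close()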

mercurial + bitbucket + windows 7, how to setup?

Thanks to the help of Stack Overflow I was able to set up an account and repository on Bitbucket and manually push my local repo to the cloud using a password.
I was unable to find a proper tutorial on how to set up SSH between Mercurial and Bitbucket on Windows 7, nor on how to automate the push command so I don't have to write the full path of each repository every time.
Can anyone help with those two issues?
to find a proper tutorial on how to setup SSH between mercurial and bitbucket
Keywords: plink, pageant
proper tutorial on how to automatize the push command to avoid writing the full path all the time of each of the repositories
"Full path" to local or remote repo?
In case of:
Local, using -R "path/to/local/repo": just cd to the repo before using hg.
Remote: add all needed repositories to the repository's hgrc (.hg\hgrc in the root of the repo dir) under [paths]:
[paths]
default = git+ssh://git@github.com/lazybadger/Fiver-l10n.git
sf = ssh://bigbadger@hg.code.sf.net/u/bigbadger/code
With these names I can pull/push from/to default or sf instead of full URLs: hg push sf; "default" as the target can be omitted entirely.
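The same [paths] aliases are picked up when driving Mercurial from hglib, so a script can push by alias name as well. A sketch assuming the repo path and the aliases from the example above:

import hglib

client = hglib.open("/path/to/local/repo")
client.push()      # pushes to the 'default' path
client.push("sf")  # pushes to the 'sf' alias from [paths]
client.close()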

Authenticating across mercurial subrepositories

I've got a mercurial repository, which pulls in dependencies using the subrepository functionality (as defined in the .hgsub file), but I'm struggling to get this working in TeamCity.
I've enabled the mercurial_keyring extension in order to save credentials (so when TeamCity provides authentication details for the root repository, it remembers them for the subrepositories). I've added an [auth] section to mercurial.ini too:
[auth]
bitbucket.schemes = https
bitbucket.prefix = https://bitbucket.org/xyz
bitbucket.username = xyz
If I run hg clone from the command line, I get prompted for a password once, and all is good. But the initial checkout when run via TeamCity fails with
VCS root: mercurial: https://bitbucket.org/xyz/projectA {instance id=23, parent id=1}, due to error: 'cmd /c hg update -C -r 4a08f587bb1f' command failed.
stderr: abort: http authorization required
stdout: pulling subrepo src\Common.Library from https://bitbucket.org/xyz/common.library
What am I missing, or am I going about this in completely the wrong way? Many thanks!
It seems that passing in credentials directly from TeamCity doesn't work with mercurial_keyring, but if I specify both username and password in plaintext in the mercurial.ini file (making sure it's accessible under the account the TeamCity build agent is running under), then this works.
The mercurial.ini file can be placed at <mercurial install path>\mercurial.ini if the one under the user path does not work.
Not ideal, but a solution... if anyone else finds a better one, please let me know.
Maybe it has been fixed in later versions of TeamCity, but the following works for me:
1. Configure the build agent service to run under a domain account with access to the HG repositories (both root and subrepos).
2. Enable mercurial_keyring on the build agent and add an [auth] section to the mercurial config.
3. Clone the repository manually once and enter the password. There is no need to wait until the whole repo is cloned -- it can be terminated once the "requesting all changes" message is shown.
4. Have fun -- the service will now use the keyring.
Probably the [auth] section shouldn't be added to mercurial.ini for the TC agent at all. TeamCity passes --config auth... options to hg. I would also recommend not using mercurial_keyring but setting the username and password in the VCS root - this is both secure and shared between different TC agents.
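For reference, the per-invocation --config overrides that TeamCity relies on can be reproduced from a script too. A rough sketch, with placeholder prefix, username and password:

import subprocess

# Hypothetical clone that supplies the [auth] settings on the command line
# instead of storing them in mercurial.ini.
subprocess.check_call([
    "hg", "clone",
    "--config", "auth.bb.prefix=https://bitbucket.org/xyz",
    "--config", "auth.bb.username=xyz",
    "--config", "auth.bb.password=secret",
    "https://bitbucket.org/xyz/projectA",
])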
Not sure about Bitbucket, but in other cases using the https scheme can require certificate configuration. This can be done in mercurial.ini:
[web]
cacerts =
[hostfingerprints]
# hides mercurial warnings
domain-name = ab:cd:...:01
And the last part: depending on .hgsub, it might be necessary to use the VCS checkout mode "Automatically on agent" in the TeamCity Version Control Settings.

Having ssh not ask for a password every time with Mercurial

I'm a Mercurial newbie and I just started to use it.
I work in a local repository, and when I commit changes I use hg <command> ssh://user@host/usr/www/site.com/project for pushing, pulling and seeing the incoming/outgoing changes.
But every time ssh asks me for the password. Is there a way to remember my ssh password for this purpose? Also, how can I avoid writing the full URL (ssh://user etc.) every time?
You have to set up ssh with public keys. There are many tutorials on the web, e.g. see Getting started with SSH.
Once you have the keys in place, you can use ssh-agent so that you only enter your local private-key passphrase once per session. There are also GUI tools that act as an ssh-agent (e.g. SSHKeychain on a Mac).
Or, if you have low security requirements, you can generate your key without a passphrase.
But please don't store cleartext passwords in config files.
There are two ways to avoid typing the URL on each command:
From hg help urls
These URLs can all be stored in your hgrc with path aliases under the [paths] section like so:
[paths]
alias1 = URL1
alias2 = URL2
...
The other possibility is using the default path:
default:
When you create a repository with hg clone, the clone command saves the location of the source repository as the new repository's 'default' path. This is then used when you omit the path from push- and pull-like commands (including incoming and outgoing).
That's what I often use: usually you get your working directory by cloning from somewhere, and from then on I just don't specify the URL and use the default.
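From a script, you can check which paths a repository already knows about; a small hglib sketch assuming a clone at a placeholder path:

import hglib

client = hglib.open("/path/to/cloned/repo")
print(client.paths())  # e.g. {'default': b'ssh://user@host/usr/www/site.com/project'}
client.outgoing()      # uses the 'default' path when no destination is given
client.close()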

create hgrc file to work on all paths on a machine, and for several repos

I want to create an hgrc file that sets the username and password for all paths on a machine, i.e. no matter which directory I am in, hg clone some_path will always work without prompting for a username and a password (this is for an auto-deploy script). Also, it should work for several repos, not just one.
I followed the instructions and created a file: /etc/mercurial/hgrc.d/deploy.rc
Its contents:
[auth]
default.prefix = http://myrepo
default.username = myuname
default.password = pwd
But when I do
hg clone some_path I get abort: error: Connection refused.
What am I doing wrong?
It should work. You can use hg showconfig to verify that it really is reading the config and that you don't just have a connection problem or something.
What version of hg are you using?
Also, it could be that your .hg/hgrc file is taking precedence over your global config.
Could you get the log of the server you are trying to connect to?
The attempt should be listed there if at least the server address is correct.
And perhaps try hg clone -v to get more output.
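As a sanity check for the auto-deploy script itself, it can help to tell an authentication failure apart from a refused connection. A minimal sketch with hglib, using a placeholder URL and destination:

import hglib
from hglib import error

# Clone the same way the deploy script would; CommandError carries hg's
# stderr, which distinguishes an auth prompt from a network problem.
try:
    hglib.clone(source="http://myrepo/some_path", dest="/tmp/deploy-checkout")
except error.CommandError as err:
    print("hg failed:", err.err)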