I've been using a shared host and Bitbucket + Hg for several years. At the end of this year I plan to migrate the site to a VPS and want to host a private Mercurial instance on the same VPS. Is it possible to migrate the repo and its entire history from Bitbucket to the new VPS Mercurial instance, or must I just do a big push from the default branch and lose all the old branches and whatnot?
I see a lot of posts about migrating INTO Bitbucket but none about migrating OUT!
The point is that a Mercurial clone normally contains the complete history of changes (there are exceptions, but they are rare).
That means you can push the whole repository, history and all, to any other repository.
That's it: create an empty repository anywhere you like and push to it.
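A minimal sketch of that migration, assuming SSH access to the VPS (the host name and paths below are only examples):
# on the VPS: create an empty repository to push into
ssh user@vps.example.com "hg init /srv/hg/mysite"
# on your workstation: make sure the local clone is up to date, then push everything
hg pull https://bitbucket.org/you/mysite
hg push ssh://user@vps.example.com//srv/hg/mysite
The push carries every changeset, branch, and tag that the local clone knows about.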
We have a SOLUTION folder (a Mercurial repository) in which we have a PROJECT folder that is also a Mercurial repository.
So there are two repositories: one in the root (solution) folder and one in a subfolder of the root (the project). Yes, it's strange, but that's how it is...
Everything worked, but one day someone somehow included the SOLUTION branch in the PROJECT repository, so all the history from the Solution branch now sits in parallel with the Project branch inside the PROJECT repository.
Now the PROJECT repository is a bit of a mess and needs to be cleaned up.
Locally it worked by running hg strip XXS (where XXS was the revision number of the very first node of the freshly added Solution branch in the Project repository).
But it seems there is no strip equivalent on the server?!
Every time we pull incoming changes into the Project repository, the "Solution" branch is re-imported...
Is there a way to manage it on the server side?
Of course the same solution also works on the server, but you need login access to the server itself to run the same local history operation there. With the default setup (a publishing server), a push will never remove changesets that are already present on the remote side: when you edit history in your local repository, only additions to the graph propagate, never deletions.
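For example, with shell access to the server, the cleanup could look roughly like this (the repository path is only an example, and the strip extension has to be enabled in the server's hgrc first, via "[extensions]" / "strip ="):
cd /srv/hg/PROJECT    # path of the Project repository on the server (example)
hg strip XXS          # XXS = first revision of the unwanted Solution branch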
If such changes to the remote server are expected to be pushed regularly, you might want to look into phases and how to set up a non-publishing server, i.e. a server with mutable history: Phases#Publishing_Repository.
Mind that such a workflow also means that every single person with push privileges has to work with the 'draft' phase instead of 'public' - at least for that project.
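For reference, making the served repository non-publishing boils down to one setting in that repository's .hg/hgrc on the server:
[phases]
publish = False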
Kill the server repo and start a fresh one, then from local:
hg push -r XXR
where XXR is the last revision you want to keep.
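Spelled out, assuming SSH access to the server (the paths are only examples), that is roughly:
# on the server: move the broken repository aside and create a fresh, empty one
mv /srv/hg/project /srv/hg/project.broken
hg init /srv/hg/project
# from the local clone: push only the history up to the last good revision
hg push -r XXR ssh://user@server//srv/hg/project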
I would like to know how the following is supposed to be done in Mercurial.
My repository is on our server, where mercurial-server is running. I develop on Windows, and because my project is supposed to be multiplatform, I've set up a Jenkins build server which builds on a Linux slave machine automatically after each push.
Now what I need:
Quite often the build fails on Linux, because I'm using "treat warnings as errors" and gcc gives different warnings than MSVC. Then I need to fix the build, and I don't want to clutter the repository with "fixing" commits (often several commits are needed to fix all the gcc warnings). A single commit in the repo for the Linux build fix would be fine with me.
I thought I could just clone the repository to a 'fixing repo' (like a branch using clone) and do as many fixing commits as needed, then merge all these commits into the main repo as a single fixing commit. But then I need to make my 'fixing repo' public (i.e. clone the fixing repo to the remote server) to allow Jenkins to build the sources. And how do I get rid of the 'fixing repo' when the fix is finished? It is probably not a good idea to allow users to delete remote repositories, is it?
Possibilities I see:
1) forget about branching using clone, do a named branch, fix, merge into the main branch and close the fixing branch (see the sketch after this list) - probably not a common approach in Mercurial to create a named branch for this
2) just create a new head (possibly marked with a bookmark), fix, merge into the main head (branch) and close the fixing head (can I even close a head?) - better, but I'd still end up with all the fixing commits in the main repository
3) using the cloned 'fixing repo', set up SSH access or even mercurial-server for Jenkins to my personal machine and let it clone from there, build, and when the build is fixed, merge all the commits from the local 'fixing repo' into the remote repository as a single commit
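A rough sketch of option 1, in case it helps (the branch name is made up):
hg branch fix-linux-build              # start the fixing branch
# ...commit and push as many gcc-warning fixes as needed; Jenkins builds each push...
hg commit --close-branch -m "linux build fixed"
hg update default
hg merge fix-linux-build
hg commit -m "merge linux build fixes"
The fixing commits still end up in the main repository's history, just isolated on their own closed branch.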
Is there a better solution? Or am I trying to do something unusual?
Thanks.
We use Mercurial and have set up a repo server to which all local/developer changes are pushed to/pulled from. I came back from vacation to find that one of our repos has been replaced. Apparently a developer was having issues pushing commits to it and figured the only solution was to blow away the repo on our server and push a new one from his machine under a new name. Not sure how, but in doing this we lost all the change history of the project before it was blown away.
I still have the full repo history on my local machine up to that point and would like to merge the new repo with the old repo and have the full change history retained. I'm hesitant to do a pull/update to my machine in case I lose the history.
I also want to update the name of the repo directory on the server, because some of our tools now have broken paths to the repo and I would prefer reverting to the original directory name instead of updating all our tools' references.
I think I can use hg rename to do what I want regarding the rename, but how do I merge the two repos into one?
I found a way to merge by following (somewhat) the instructions here: https://www.mercurial-scm.org/wiki/MergingUnrelatedRepositories
1) First I made a copy of my local repo in case things went sideways
2) I cloned the repo on the server under the name I wanted it to have, which matched my local repo
3) I did a pull from the server repo into my local one and, instead of doing an update, did a merge
4) I resolved the merge diffs and committed locally
5) I pushed the changes to the new server repo and confirmed the full change history was in place
6) I removed the old server repo
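In command form, the core of it looks something like this (the paths are only examples; -f is needed on the pull because the two repositories are unrelated):
cd /path/to/local-old-repo
hg pull -f ssh://user@server//srv/hg/new-repo
hg merge
hg commit -m "merge old history with the recreated repository"
hg push ssh://user@server//srv/hg/new-repo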
We have a dedicated issue tracking (Redmine) machine, which has a Mercurial repository (call it "Redmine repository"). Redmine is set up to use that repository, and as far as I understand, Redmine never makes any changes to that repository. All developers (eventually) push their changes to that repository.
We also have a dedicated production machine, which can execute the code, but is not used to make any changes to the code.
We have two choices:
Set up another Mercurial repository on the production machine (call it "production repository"). When a new production release is approved, pull the changes from the Redmine repository to the production repository, and then update the local working directory to the appropriate revision from the production repository.
Reuse the existing Redmine repository from the production machine, designating it as a local repository for the Mercurial installation there (the Redmine repository is on a shared drive that can easily be mounted on the production machine). Whenever a new production release is approved, update the local working directory to the appropriate revision straight from the Redmine repository.
With option #2, we get rid of an extra "pull" step (from Redmine repository to production repository), which slightly simplifies the process. But I'm not sure if it's ok that a single repository is used by two Mercurial installations as if it's local.
Any comments on this choice (or any other aspect of this setup) are appreciated!
It sounds like a bad idea. Mercurial does a really good job of keeping reads and writes to its repository atomic, but it has a harder time doing that when the repository is on a shared drive -- even if it's only one local repository using it -- because network shares (especially on Windows) don't always make things atomic that they say they do.
Ideally your repositories (both the working dir and the repository) are local when possible, and you use push/pull to get changesets to/from a network share. If that's not possible then having a single local application using the repo on the remote file system is the best idea.
If you positively want to try having two clones using the same underlying repository check out the ShareExtension, which ships with Mercurial but is for advanced users only.
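If you do go that route, the share extension is used roughly like this (paths are examples):
# enable the bundled extension, e.g. in ~/.hgrc
[extensions]
share =
# create a second working copy backed by the same repository store
hg share /path/to/existing-clone /path/to/second-working-copy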
Instead of trying to piggy-back, why not just put a hook like this in your redmine repository:
[hooks]
changegroup = hg push //production/clone
That will automatically push changesets that arrive in redmine to production.
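That hook goes in the Redmine repository's .hg/hgrc on the server; changegroup fires once per incoming push or pull, so every batch of new changesets gets forwarded. The //production/clone part needs to be whatever path or URL the production repository is actually reachable at from the Redmine machine.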
It's my first time using a DVCS and also as a lone developer, the first time that I've actually used branches, so maybe I'm missing something here.
I have a remote repository from which I pulled the files and started working. Changes were pushed to the remote repository and of course this simple scenario works fine.
Now that my web application has some stable features, I'd like to start deploying it and so I cloned the remote repository to a new branches/stable directory outside of my working directory for the default branch and used:
hg branch stable
to create a new named branch. I created a bunch of deployment scripts that are needed only by the stable branch and I committed them as needed. Again this worked fine.
Now when I went back to my initial working directory to work on some new features, I found out that Mercurial insists on only ONE head being in the remote repository. In other words, I'd have to merge the two branches (default and stable), adding in the unneeded deployment scripts to my default branch in order to push to the main repository. This could get worse, if I had to make a change to a file in my stable branch in order to deploy.
How do I keep my named branches separate in Mercurial? Do I have to create two separate remote repositories to do so? In which case the named branches lose their value. Am I missing something here?
Use hg push -f to force the creation of a new remote head.
The reason push won't do it by default is that it's trying to remind you to pull and merge in case you forgot. What you don't want to happen is:
You and I check out revision 100 of named branch "X".
You commit locally and push.
I commit locally and push.
Now branch X looks like this in the remote repo:
--(100)--(101)
      \
       \---------(102)
Which head should a new developer grab if they're checking out the branch? Who knows.
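As a side note, if the extra head is a brand-new named branch (as with the stable branch here), newer Mercurial releases also accept a more targeted flag, so forcing isn't strictly required:
hg push --new-branch     # allow pushing a new named branch without -f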
After re-reading the section on branchy development with named branches in the Mercurial book, I've concluded that, for me personally, the best practice is to have a separate shared repository for each branch. I was on the free account at bitbucket.org, so I was trying to force myself to use only one shared repository, which created the problem.
I've bitten the bullet and got myself a paid account so that I can keep a separate shared repository for my stable releases.
You wrote:
I found out that Mercurial insists on only ONE head being in the remote repository.
Why do you think this is the case?
From the help for hg push:
By default, push will refuse to run if it detects the result would
increase the number of remote heads. This generally indicates that
the client has forgotten to pull and merge before pushing.
If you know that you are intentionally creating a new head in the remote repository, and this is desirable, use the -f flag.
I've come from git expecting the same thing. Just pushing the top looks like it might be one approach.
hg push -r tip