We've been running out of a Mercurial repo from within a CC snapshot view successfully for some time now. We have the source repo on the view, and the team's base repo is a clone from that one. That keeps a layer of separation to make checkout-checkins in CC easier to manage.
Now, for reasons internal to where I work, we need to switch to a new view. How can we do this? There are other teams within the company checking in files directly to CC (hopefully we'll convince them away soon), so that should be a consideration.
How can I overlay our existing repo into a new view (and then I can rebase the team's base repo no problem)?
The problem is the delta that might exist between your current Mercurial repo and the new snapshot view (especially with a different config spec).
Since the OP mentions in the comments that the config spec of the new view won't change, here is a simpler method than the one below:
Load the new snapshot view content
remove all its files from the disk (not from ClearCase)
copy the .hg directory of the original Mercurial repo in the new (empty) view
update the working tree of said Mercurial repo (all the files are back in their original place, but detected by ClearCase as hijacked)
run cleartool update -overwrite in order to force ClearCase to replace those hijacked files with the versions from ClearCase (see the man page for cleartool update).
Mercurial would then detect any change between the files restored by ClearCase and the ones managed in the repo.
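A rough shell sketch of those steps (the paths are hypothetical, and you should leave ClearCase control files such as view.dat untouched):

cd /views/new_snapshot
cleartool update .                          # load the new snapshot view content
find . -type f ! -name view.dat -delete     # remove the loaded files from disk only, not from ClearCase
cp -r /repos/old/.hg .                      # bring over the existing Mercurial history
hg update --clean .                         # restore the working tree; ClearCase now sees the files as hijacked
cleartool update -overwrite .               # let ClearCase overwrite the hijacked files with its own versions
hg status                                   # Mercurial now shows the delta between the repo and the view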
(Original answer)
I would:
create a branch dedicated for that migration in Mercurial,
compare its content with the snapshot view (without putting yet any Mercurial repo in it)
import and resolve the differences from the ClearCase view to the Mercurial repo
and then, once the content is identical, clone the Mercurial repo directly within the snapshot view.
The rest would be about:
fetching that dedicated migration branch to the team's base repo
merging that branch to the main development branch within team's base repo
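In command form, that could look something like this (paths and branch name are hypothetical, assuming the migration happens in a mirror of the team's base repo):

cd /repos/mirror
hg branch cc-migration                          # dedicated migration branch
# ... copy in the content of the snapshot view, resolve the differences ...
hg addremove
hg commit -m "import ClearCase snapshot view content"

hg clone /repos/mirror /views/new_snapshot      # destination must be empty (or use the .hg copy trick above)

cd /repos/team_base
hg pull -r cc-migration /repos/mirror           # fetch the dedicated migration branch
hg update default
hg merge cc-migration                           # merge into the main development branch
hg commit -m "merge ClearCase migration branch"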
I have a central Mercurial repository server. I cloned repoA to my local system,
initiated a new repoB on the central server, cloned repoB locally, copied everything from repoA to repoB, committed, and pushed to repoB (the central server).
Now I have all the changeset history from repoA in this new repoB.
There was a need to do so because the code of two applications lived in the same repoA; to separate them I did the above experiment, and it is working.
My question is: are there any side effects of doing this, or is there a better (recommended) way to do it? Please suggest, thank you!
When you clone a repository to your local PC, the repository lives in a folder. That name of that folder is typically how people refer to the "name of the clone" or the "name of the repository".
Other than that, the folder name itself has very little significance and is not even properly part of the Mercurial repository.
It sounds like you did several other steps, but basically if you renamed repository A to B it won't make much difference (but see notes below).
You do not need to use hg clone to clone a repository. You can literally just copy the entire repository folder and the copy will work just fine. The one difference that I am aware of when you use clone vs. operating system file copy is that the clone will point back to the repo you cloned from (for use in push/pull operations). The copy would point back to the original source. (See notes below about some related effects).
One situation where you might cause some problems by renaming the repository folder is if you have cloned FROM it. Example: you have local repo A. You clone A to B. Now internal to the configuration data in B is a reference to the folder path including A. If you rename A to A1 then that path is obviously broken.
In such a situation you can easily edit the B/.hg/hgrc file and modify the line starting with default= to correct the path.
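The entry in question lives in the [paths] section and looks something like this (the path shown is just an example):

[paths]
default = /home/me/repos/A1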
Based on your question it sounded like you copied a bunch of stuff from one repo to another. Presumably this also included the .hg folder. Generally speaking I recommend avoiding the contents of that folder, and always approach it with caution.
Although technically some of it is human-readable it is simpler & safer to treat it as a black box, or you risk corrupting your repository. There are occasional exceptions (like hgrc) but they are few & far between.
Of course if you are just trying to learn how it works then by all means try things & see what happens! One of the great Mercurial features is the ability to copy a repo, mess around with it, and throw it away when done.
I have three different Linux-based working places, each with a different computer. I need to keep a repository synchronized so I can keep coding on the latest version each time I move from one workplace to another. You can always commit and push to, say, Bitbucket and then pull from another computer, but this is not the purpose of a commit.
Other similar posts did not help, like Synchronizing a collection of Mercurial repositories.
Any suggestion?
Your two primary options for exchanging temporary work between repositories are Mercurial Queues and the evolve extension.
Mercurial Queues are documented fairly extensively here. To use them for your purpose, you have to put the patches under version control (explained near the bottom of the chapter) and can then push them to/pull them from a shared patch repository. Note that the book is a few years old and Mercurial has added some convenience features in the meantime. These days you can do operations on the patch repository directly via the --mq option (e.g., hg init --mq, hg commit --mq, hg push --mq) and don't need a bash alias for convenience.
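A sketch of that flow (the server URL is hypothetical, and the second machine is assumed to start with a freshly initialised, still-empty patch repository):

hg init --mq                        # put the patch queue in .hg/patches under version control
hg qnew wip.patch                   # start a patch for the work in progress
# ... hack, then capture the current state in the patch ...
hg qrefresh
hg commit --mq -m "WIP snapshot"    # commit the patch to the patch repository
hg push --mq ssh://server/patches   # publish the patch repository

# at the other workplace (after its own hg init --mq):
hg pull --mq ssh://server/patches
hg update --mq
hg qpush                            # apply the transferred patch to the working copy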
Evolve is probably more intuitive; it provides a fairly straightforward approach to shared mutable history. You can commit changes in one repository, push the changes to a shared repository, pull from another and uncommit or alter them, then push them back.
In order to set this up, you need a shared repository somewhere that is declared as non-publishing. You do this by adding the following lines to its .hg/hgrc:
[phases]
publish = False
This prevents changesets exchanged through this repository from becoming public (at which point, they'd become immutable).
You will also need to install the extension first (unlike MQ, which is part of core Mercurial).
Note that Bitbucket currently does not support obsolescence markers, which are crucial for the functioning of changeset evolution, so you will need to host the shared repository in a different place. Evolve functions not by deleting outdated changesets, but by marking them as obsolete and hiding them (obsolescence markers also track how old and new changesets are related). Because Bitbucket does not support these markers, obsolete changesets will become visible again if pushed there. (Note that you can still use evolve locally or between evolve-aware repositories and use Bitbucket for public stuff.)
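A sketch of the round trip, assuming evolve is enabled on both machines and the shared repository (hypothetical URL) is non-publishing:

# workstation A: commit draft work and share it
hg commit -m "WIP: refactor parser"
hg push ssh://server/shared

# workstation B: pull the draft, rewrite it, push it back
hg pull ssh://server/shared
hg update tip
hg commit --amend -m "WIP: refactor parser, take 2"
hg push ssh://server/shared          # the obsolescence markers travel along

# back on workstation A
hg pull ssh://server/shared
hg evolve --all                      # rebase any local work onto the rewritten changeset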
Slightly different ways:
Handwork
MQ with MQCollab extension
Commits with "classic" exchange between repos using MuliRepo extension (just don't forget hg pull on every workplace before pull - and add all remote repos into [multirepo] section on each workplace)
Automated way
Create additional "central hub" and use AutoSync extension
I've read somewhere* a setup like this would be nice:
Two main branches, one for each server.
Pushing to master sends changes on to live;
Pushing to dev/stage (or whatever you call it) sends changes to staging;
Workflow:
Create branch from dev;
work locally until you're ready to test;
merge back to dev;
push to Hub, which sends changes to dev/staging server.
Once you're ready with those to go live:
merge from dev to master,
then push master to Hub, which sends those changes on to the live server.
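As I understand it, that flow in Git commands would be roughly the following (the remote name "hub" and the branch names are my assumption):

git checkout dev
git checkout -b feature/signup      # branch from dev
# ... work locally, commit as usual ...

git checkout dev
git merge feature/signup            # merge back to dev
git push hub dev                    # the hub deploys this to the staging server

git checkout master
git merge dev                       # ready to go live
git push hub master                 # the hub deploys this to the live server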
Two main branches, one for each server.
So I have one branch "production" on "webroot/myliveapp/"
and another branch "development" on "webroot/devapp/"
Where should the repository be ?
UPDATE:
I mean:
We will have, according to this flow:
Prime repo;
Bare repo hub;
Clones;
The development and production branches should belong to one repository, right ?
If this is correct, then where should we issue the FIRST git init command?
On our Prime repo ?
So we will have:
"webroot/myliveapp/" - production branch;
"webroot/devapp/" - development branch;
"webroot/.git" - Prime repository;
Does this make sense ?
Or should the Prime repository correspond to our production branch location ?
*Note: if you need context about the workflow I'm trying to implement, it is this one:
http://joemaller.com/990/a-web-focused-git-workflow/
Thanks for the update on your question, it is more clear now.
I believe the problem you're having is based on a misunderstanding of Git workflow; Git doesn't equate directories to branches, it equates a view of your filesystem to branches. This is powerful - but easy to shoot yourself in the foot. Let me explain.
Git acts more like a database-backed, differentially-versioned, history tracking filesystem in itself. It is "above" your filesystem, not "part of" it. It doesn't use your filesystem to represent branches, rather, when you check out a different branch, all the files in your filesystem will change to be the files in that branch. You are asking Git to make your filesystem represent the alternate reality of that branch.
If you are on branch master, and it has a file root/foo.txt committed, and you check out branch experiment, which does not have root/foo.txt committed, you will find that file gone when you look for it. It is a part of master, not experiment, and so it is not present in your filesystem. This is why Git is really picky about your current branch being committed before it lets you switch branches - if you have unstaged changes on your filesystem that Git doesn't know about, it refuses to blow them away by overwriting them with a different reality. You have to intervene to make things right first.
So, to answer the question, don't create subdirectories for "myliveapp" and "devapp" - create different branches. Just have your one codebase under "webroot". Then, hack away on, say, the "unstable" branch, committing your changes as usual. You can then switch all of the files in your repository to be at the version of your dev server's files by switching to the "devapp" branch, and you can similarly switch back to "unstable" at any time.
When you want to update a branch, e.g. doing an update of your dev server, you can merge "unstable" into "devapp". This will make all of the files of "devapp" look like those of "unstable", bringing it up to date.
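For example (branch names taken from the discussion above, the commit itself is just illustrative):

cd webroot
git checkout -b unstable            # hack away here
git commit -am "work in progress"

git checkout devapp                 # the same files now reflect the devapp branch
git merge unstable                  # bring devapp up to date with unstable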
One other thing to note: the difference between a prime repo, a bare repo, and clones is almost nil. There is virtually no difference in the software; rather, it's a human convention to say "Linus' kernel is the canonical Linux kernel". With that understanding:
A prime repo is just one repository that everyone agrees holds the "canonical" version of the software. That is, whenever a developer has made a change they want everyone to see, rather than saying, "Pull my version of devapp", they can say, "I've published my changes to our prime repo." It's simply an easy convention for people to rally around.
A clone is a copy of some other repo. I could clone the prime repo, make changes, and then you can clone my repo. If you make changes, you can push them either onto the prime repo or onto mine, as long as the merge is valid and you have permissions on the computer.
A bare repo simply has no "working copy" - there is no "webroot" directory on that computer. It's empty with only the .git directory - which is fine for servers where nobody needs to alter the files.
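For example (the server path is hypothetical):

# on the server: a bare repository, no working copy
git init --bare /srv/git/myapp.git

# on a developer machine
git clone ssh://server/srv/git/myapp.git webroot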
Finally, the .git dir doesn't hold the files of your repo, it holds the git configuration and database. It's your entire repository history in database form, which is used to populate the rest of the repo with a particular version of your software. That's why I made the comment: you can locally check out any version of any alternate reality of the repository, with no network communication, at any time - because it's all there in the .git dir. The only network communication necessary is for when you want to sync your local repository to some other repository, using push or pull.
I am writing an open source project. I did all the work on my machine and now I want to push the project to Codeplex.com, but I don't want to send all the old history.
Is it possible to push all files in just one revision to Codeplex and continue with my history locally? Something like a Push And Collapse?
No - a DVCS relies on the fact that you synchronise all history between members in the distribution set.
If you want to get rid of the history though, prior to pushing to Codeplex you can do the following:
Clone your local repository to a revision before the history you want "removed". We'll call the clone "Repository B".
Update repository A to the tip that you want to apply. Grab the changes, and copy the files to your cloned repository B. You can create a bundle or patch, but for brevity here I'm just going with the quick and dirty :)
On repository B, commit a single changeset with all those changes.
From now on, repository B is your master. Push this to Codeplex.
As you can see, you can't have historical changeset data on one clone that partakes in a synchronisation with another, but before you've pushed to Codeplex, you can mush all those changes into a single commit - so long as you're happy to lose the history locally too.
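A sketch of those steps (the revision number, the paths, and the Codeplex URL are placeholders):

hg clone -r <base-rev> repoA repoB              # B keeps only the history up to <base-rev>
hg -R repoA update tip                          # make A's working copy reflect the latest state
rsync -a --exclude=.hg repoA/ repoB/            # copy the files, not the history
cd repoB
hg addremove                                    # pick up added and removed files
hg commit -m "collapse recent history into one changeset"
hg push <codeplex-url>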
An alternative is to use Mercurial Queues to "fold" history, but it needs to be done before you push to Codeplex - check out this wiki page for more information.
Check out the CollapseExtension.
I'm a Subversion user, and I think I've got my head mostly around it all now. So of course now we're thinking of switching to Mercurial, and I need to start again.
In our single repository, we have the typical branches, tags, trunk layout. When I want to create a feature branch I:
Use the repo browser to copy trunk to branches/Features/[FeatureName].
Checkout a new working copy from branches/Features/[FeatureName].
Start working on it.
Occasionally commit, merge trunk in, resolve conflicts and commit.
When complete, one more merge of trunk, then "Reintegrate" the feature branch into trunk.
(Please note this process is simplified as it doesn't take into account release candidate branches etc).
So I have questions about how I'd fulfil the same requirements (i.e. feature branches rather than working on trunk) in Mercurial:
In Mercurial, is a branch still within the repository, or is it a whole new local repository?
If we each have a copy of the whole repository, does that mean we all have copies of each other's various feature branches (that's a lot of data transfer)?
I know Mercurial is a DVCS, but does that mean we push/pull changes from each other directly, rather than via a peer repository on a server?
I recommend reading this guide
http://stevelosh.com/blog/2009/08/a-guide-to-branching-in-mercurial//
In Mercurial, is a branch still within the repository, or is it a whole new local repository?
The equivalent of the subversion way of working would be a repository with multiple heads in mercurial. However, this is not the idiomatic way of doing things. Typically you will have only one head in a given repository, so separate repositories for each branch.
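For example (repository names and URL are hypothetical), a clone per feature instead of a branches/ directory:

hg clone https://server/project project-main
hg clone project-main project-feature-x     # one local clone per feature branch
# ... commit inside project-feature-x ...
cd project-main
hg pull ../project-feature-x                # bring the feature history back
hg merge                                    # merge the new head
hg commit -m "merge feature X"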
If we each have a copy of the whole repository, does that mean we all have copies of each other's various feature branches (that's a lot of data transfer)?
Yes, if you look at the history of the head of your local repository, then you'll be able to see all the feature branches that were merged in. But mercurial repositories are remarkably space efficient. For example, I have done a hg clone https://www.mercurial-scm.org/repo/hg to get the source for mercurial itself, and it is only 34.3 MB on an NTFS file system (compared to the source code download, which is 1.8 MB). Mercurial will also make use of hardlinks if your file system supports it, so there is little overhead if you clone a repository to another location on the same disk.
I know Mercurial is a DVCS, but does that mean we push/pull changes from each other directly, rather than via a peer repository on a server?
One way of working is indeed to have each developer expose a public repository in which he pushes his own changes. All other developers can then pull what they want.
However, typically you'll have one or more "blessed" repositories where all the changes are integrated. All developers then only need to pull from the blessed repository. Even if you didn't explicitly have such a blessed repository I imagine people would automatically organize themselves like that, e.g. by all pulling from a lead developer.
Steve Losh's article on branching in mercurial linked above is fantastic. I also got into some explaining of branching and how the DAG works in a presentation I gave a couple of months ago on mercurial that's out on slideshare. The pertinent slides start at slide #43.
I think that understanding that all commits to the same repository are stored in a DAG (Directed Acyclic Graph) with some simple rules really helps demystify what's going on.
a node with no child nodes is a "head"
the root node has no parents
regular nodes have a single parent
nodes that are the result of a merge have two parents
if a merge node's parents are from different branches, the child node's branch is inherited from the first parent
Named branches are really just metadata labels on commits, but they aren't any different from the anonymous branches that happen when you merge someone else's work into your repository, or when you go back to an earlier version and then make a commit there to create a new head (which you can later merge).
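For example (the revision number is hypothetical):

# a named branch: the branch name is recorded in each commit's metadata
hg branch feature-login
hg commit -m "start work on login"

# an anonymous branch: update to an older revision and commit there,
# which simply creates a second head
hg update -r 10
# ... edit files ...
hg commit -m "alternative approach"     # Mercurial warns: created new head
hg heads                                # shows both heads
hg merge                                # later, join them back together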