For example, there is an hg repo A - a project setup environment. It contains the following files:
//project A
.some_config_file
script_1
After project B was forked from A, some changes were made:
// project B
M .some_config
M script_1
In parallel, project A had some features improved or bugs fixed in script_1.
// project A
M script_1
When I try to pull the new features (hg pull -u 'repA') into B from A, it brings the old .some_config back into the repository and overwrites the current one.
And here are my questions:
How do I resolve these conflicts?
How do I pull changes partially from the fork parent?
And what is the best practice for working with a fork parent?
Pulling from the forked repo pollutes the local one.
You seem to be unfamiliar with the distinction between your 'working copy' and the repository as a tree of individual changesets.
The solution likely is: update your working copy to your fork B. Then merge the original project, fork A, into your currently checked-out version (fork B). Take care to accept only the changes you want during the merge, and discard any changes made to .some_config.
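A minimal sketch of that merge, assuming you work in a clone of fork B and the original repository is reachable under the path alias repA:
hg pull repA              # bring A's changesets in without touching the working copy
hg merge                  # merge the pulled head into your checked-out revision
hg revert .some_config    # discard the incoming changes to the config file
hg commit -m "Merge upstream A, keeping local .some_config"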
Besides that, it's often a bad idea to have config files in a repo. Only keep example config files there (and name them as such) and keep the actual config file outside, untracked.
Related
I have an active project with some sensitive files and directories. I want to hire an external contractor to do some simple UI work. However, I don't want the contractor to have access to some directories and files. My project is in mercurial on Bitbucket.
What is the best way to clean up the project and give him access to commit his changes? I thought about forking into a new repository, but I am worried about removing directories I don't want him to have access to.
How do I remove them so they don't appear in the original changesets? How do I merge his repo back without it removing those directories in my main repository? Is a fork the way to go?
Naturally a repository needs access to its whole history in order to self-check its integrity. I don't know of a way to selectively hide parts of the repository (there's the ACL extension, but it is for write access only).
In your case, I would:
1. create a new repository where all sensitive information has been stripped off (use the convert extension for that task),
2. then let the external guy work with that repository,
3. once his work is finished, pull his repository into a clone of the original one (using -f to force pulling of an unrelated repository),
4. rebase his first changeset and all its children onto a head of your original repository, and
5. finally, push the rebased head to the original repository.
For steps 3 to 5 you don't necessarily have to wait until the external developer is done. Rebasing intermediate states of his repository is also possible.
Yet, it's a theoretical idea... one has to see how it performs in practice.
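A rough command-line sketch of steps 3 to 5, assuming the contractor's repository sits at ../contractor-repo and the rebase extension is enabled (the path and the revision placeholders are hypothetical):
hg pull -f ../contractor-repo                          # -f forces pulling from an unrelated repository
hg rebase -s FIRST_CONTRACTOR_REV -d ORIGINAL_HEAD     # graft his changesets onto your head
hg push                                                # publish the rebased head to the original repository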
Alternative: in case you frequently have external contractors who shouldn't see some parts of your code, I would second @Anton's comment to set up multiple repositories with separate permissions.
There are multiple ways to do this:
Using sub-repositories
Using multiple repositories
???
Regardless, you need to restructure and split your existing repository, so this will create havoc if you have lots of people working on this project: they will all need to stop working, synchronize their work, destroy their local clones, and clone fresh copies after the restructuring.
One way using multiple repositories would be that you do the following:
Make 2 extra clones of the repository (keep one around for fallback if everything fails, you can always go back)
On the first clone, run the hg convert command to get rid of all the bits and pieces your contractor should not access (see the sketch after this list)
Then you fix that repository so that it works by itself. You might have to change code to provide hooks and events for anything not present but which you intend to inject into the project before you build
Then you need to run hg convert on the other clone to get rid of everything now present in the first.
Then you pull from the first (contractor) repository into the second (private) repository, merge, and do necessary fix-ups so that the code still works as intended
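As an illustration of the convert step, a rough sketch assuming the material to withhold lives under a directory named secret/ (all paths and file names here are hypothetical examples):
# contents of filemap-contractor.txt
exclude secret
exclude internal/keys.xml

# strip those paths while converting the first clone into the contractor repository
hg convert --filemap filemap-contractor.txt original-clone contractor-repo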
What you have now is two repositories:
Contractor-repository, with only the bits you want to expose
A private repository, that has pulled and merged from the contractor-repository, and contains all the other bits and pieces
From now on, whenever the contractor has pushed work to his repository, you need to pull from it into the private repository and then merge.
Your repositories would look like this:
Contractor: ---97---98---99---100---102---103---104
                                              M                 M
Private:    ---91---92---93---94---95---96---101---------------105---106---107
                                            /                 /
                                           /                 /
                      ---97---98---99---100---102---103---104
The two changesets with M above are merge-changesets that merge contractor-supplied code into your private repository.
Note that you, too, would have to commit code to the contractor repository to work on and fix bugs in the code there, but all the private bits you can keep private.
I currently use SVN for a number of things that aren't exactly code, for instance xml files, report templates, miscellaneous files, etc. I have several non-developers who are comfortable using TortoiseSVN for this. They typically work as follows:
Person A - does an SVN Update on the folder of interest to them. Or perhaps just on a single file.
Person A - edits whichever file(s) they're working on. Perhaps add or remove files.
Person B - someone else is probably working on different files at this point
Person A - does an SVN Commit to save their changes to the repository.
Very occasionally they'll hit conflicts where more than one person has edited a file. Almost always this is just because they forgot step #1. Because they're always working on separate files, there are (almost) never real conflicts. As long as they do step #1 first everything works fine.
I'd like to move to Mercurial; however, something holding me back is the prospect of having to 'merge' all the time, because Mercurial looks at the state of the entire repository, not just the files of interest at a particular time. E.g. the workflow would be like this:
Person A - does a pull and update on the repository. (let's assume there are no local changes so this is straightforward).
Person A - edits whichever file(s) they're working on. Perhaps add or remove files.
Person B - someone else edits, commits, and pushes a different file at this point
Person A - commits changes. Tries to push. Gets an error about multiple heads.
Person A - does a pull and update. update doesn't work: merge required.
Person A - does a merge. If using TortoiseHg it's a bit confusing working out what to click on to do the merge. I guess this is simpler on the command line, provided there are no complications.
Person A - commits the merge.
Person A - pushes the changes.
My resistance is that there are more steps, and the merge step is somewhat hard to get your head around if you're not a developer. Is there a way I can put these steps together to make the process nice and simple?
"Very occasionally they'll hit conflicts where more than one person has edited a file. Almost always this is just because they forgot step #1. Because they're always working on separate files, there are (almost) never real conflicts. As long as they do step #1 first everything works fine."
If this is the case why do you want to use a DVCS? Mercurial is great, but the benefits of a DVCS come from the ability to merge and fork and the ease of doing either, if your workflow requires neither why would you want to switch toolset?
Sounds like the rebase extension might work for you. The workflow becomes:
hg clone
make changes
hg commit
hg pull --rebase
hg push
The local revisions get "rebased" onto the latest tip on pull, which avoids the merge.
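The rebase extension ships with Mercurial but is off by default; a minimal sketch of enabling it, assuming a per-user configuration file:
# in ~/.hgrc (or Mercurial.ini on Windows)
[extensions]
rebase =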
One possible approach is to have a point person who does all the real work of merging. I'm not a big fan of letting everyone push to one shared repo, especially if they don't know what they are doing. An alternative approach is that A has local repo A, B has local repo B, and there is a repo S which combines A and B. Then, don't let A or B push to S. Instead, let an expert pull from A and B and do the merging in S. Then A and B never have to push to S. If they coordinate with the expert, then he/she will already have merged their changes into S by the time they pull updates from S, and so A and B will not have to merge either when pulling. This is actually the default mode in which a DVCS works, since by default all repositories are read-only except by their owner.
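A rough sketch of the expert's routine in a clone of S, assuming A's and B's repositories are reachable at the hypothetical paths ../repoA and ../repoB:
hg pull ../repoA                    # bring in A's changesets
hg merge                            # skip if the pull did not create a new head
hg commit -m "Merge work from A"
hg pull ../repoB                    # bring in B's changesets
hg merge
hg commit -m "Merge work from B"
hg push                             # publish the merged history back to S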
My project is made up of code in the following locations
C:\Dev\ProjectA
C:\Lib\LibraryB
C:\Lib\LibraryC
Presently each of these folders is a completely independent Mercurial repository. Project A changes all the time, Library B and Library C change rarely.
I currently tag each version of Project A as it is released and (when I remember) put a corresponding tag in the Library B and C repositories.
Can I improve upon this by using subrepositories? Would that require me to make Library B and C a subdirectory of Project A?
If Library B and C must be subdirectories of Project A what do I do if I want to start a Project D that uses Library B but isn't otherwise affiliated with Project A at all?
If Library B and C must be subdirectories of Project A what do I do if I want to start a Project D that uses Library B but isn't otherwise affiliated with Project A at all?
Any project can exist both independently and as subrepository of another project at the same time. I'll explain by suggesting a workflow.
First of all, each of your projects (A, B, C) should have a blessed repository that is published somewhere:
You could run hgwebdir on your own server, or make use of a Mercurial hosting service like Bitbucket or Kiln. This way developers have a central authoritative point to pull/push changes from, and you have something to make backups of.
Now you can make clones of these repositories to work on in two different ways:
directly clone your project. For example:
hg clone http://bitbucket.org/LachlanG/LibraryB C:\Lib\LibraryB
and/or create subrepository definitions by putting a .hgsub file in the root of ProjectA with the following content:
libraries/libraryB = http://bitbucket.org/LachlanG/LibraryB
libraries/libraryC = http://bitbucket.org/LachlanG/LibraryC
These subrepository definitions tell Mercurial that whenever Project A is cloned, it also has to put clones of Library B and Library C in the libraries folder.
If you are working in Project A and commit, then your changes in libraries/LibraryB and libraries/LibraryC will be committed as well. Mercurial will record which version of the libraries is being used by Project A in the .hgsubstate file. The result is that if you hg update to an old version of the project to see how things worked last week, you also get the corresponding version of your libraries. You don't even need to make tags :-)
When you hg push the Project A changes to the blessed repository, Mercurial will also make sure to push the subrepository changes first to their own origin. That way you never accidentally publish project changes which depend on unpublished library changes.
If you prefer to keep everything local, you can still use this workflow by using relative paths instead of URLs in the subrepository definitions.
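A minimal sketch of the initial setup inside Project A, using the Bitbucket URLs above (the libraries/ layout matches the .hgsub example; treat this as an outline, not exact commands):
cd C:\Dev\ProjectA
hg clone http://bitbucket.org/LachlanG/LibraryB libraries\libraryB
hg clone http://bitbucket.org/LachlanG/LibraryC libraries\libraryC
(create the .hgsub file in the project root with the two lines shown above)
hg add .hgsub
hg commit -m "Add LibraryB and LibraryC as subrepositories"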
You can indeed declare B and C subrepos of project A (they will appear as subdirectories, as described in Mercurial Subrepository).
That would improve your release mechanism as it would allow you to:
get all repos in one place (A and under)
reference an exact tag of B and C under A
tag each sub-repo first if it had any modifications
tag A with the information about the B and C tags in it (any clone of A will be able to get the exact tags of B and C used by A); a sketch of this tagging follows
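A rough sketch of that release tagging, assuming the libraries live under libraries/ as subrepos and the release is called v1.2 (a hypothetical version name):
hg -R libraries/libraryB tag v1.2     # tag the subrepos first
hg -R libraries/libraryC tag v1.2
hg commit -m "Release v1.2"           # the parent commit records the subrepo revisions in .hgsubstate
hg tag v1.2                           # then tag Project A itself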
You can also declare B as a subrepo of D, independently of A. What you do in A (regarding B) will have no consequences for B as used in D.
Can one hg repo live inside another hg repo on my local file system?
I am pulling down the bitbucket wiki for 'sandbox', and I want to know if this should be placed in repos/sandbox/wiki or repos/sandbox-wiki.
Is the former okay to do?
Edit: See Subrepository.
The short answer is yes, but I can't imagine why you would want to.
In your example, I think you should go with:
repos/sandbox-wiki
[edit] Additionally:
Yo Dowg, I herd you like repositories.
So we put a repo in your repo so you can version while you version
:-)
Yes and no. It depends on what you want to do. You can create a repo 'sandbox/wiki', but files in this inner repo won't be committed in the outer 'sandbox' repo (@Jason is right). If you don't want that, no problem.
Try explicitly adding files from the wiki repo in sandbox and you'll get the message below. If you just add the path of a directory containing an inner repo, the files will just be ignored.
From the sandbox root directory:
hg add wiki/myfile
abort: path 'wiki/myfile' is inside repo 'wiki'
Mercurial does not allow nested repositories, but there is at least one reason for them:
Imagine that you are working in a project: /MyProject. In this folder you put everything: code, documentation, tests, etc.
You want to back up your work because it is very important, so you create a repository for /MyProject. Then, over time, you use bundles to save the evolution of /MyProject and back them up on a USB flash drive so that you can recover everything just in case your hard drive breaks.
Remember that /MyProject contains everything. And among all those things, there are the main code and some auxiliary projects. You also want to track the progress of an auxiliary project that is in /MyProject/AuxiliaryProject, so you use Mercurial to track its evolution.
Also, you want to have a separate repository for the main code: /MyProject/Main
In this situation you want nested repositories: one big one for being able to back-up everything using bundles and child repositories for managing each subproject.
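For the bundle-based backup described above, a minimal sketch (the USB mount point is a hypothetical example):
cd /MyProject
hg bundle --all /media/usb/myproject-backup.hg     # write the complete history into one file
# later, to recover:
hg init /tmp/restored
cd /tmp/restored
hg unbundle /media/usb/myproject-backup.hg
hg update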
I think Mercurial should give the user several options when initializing a repository. For example:
- ignore nested repositories
- include nested repositories but ignore .hg folders (i.e. act as if there were no nested repositories, but do not ignore the information contained in the nested repositories)
- include nested repositories and also include .hg folders (makes sense for back-up purposes)
--------- Edit:
Subrepositories are a feature that is still a work in progress:
https://www.mercurial-scm.org/wiki/subrepos
Also, there is an extension named "forest" that might become obsolete in the future:
https://www.mercurial-scm.org/ForestExtension
You'd need to set up an .hgignore file in sandbox to exclude wiki, because Mercurial assumes that it is responsible for all descendants. This would probably generate more user confusion than it is worth.
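A minimal sketch of such an .hgignore entry in the sandbox repository, assuming the nested clone lives in wiki/:
syntax: glob
wiki/**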
Possible Duplicate:
Mercurial: copying ONE file and its history to another repository
I have several repositories in my local machine.
One is my main code, another is an assortment of useful code/tools.
These are two fundamentally different repos. It might make sense to set up a new repo and pull these two in as sub-repos, but I want to wait until the Mercurial devs mark sub-repos as non-experimental before I do that.
One of the useful code files has become so useful that I want to put it into my main code area... but I want to maintain its history. This will, of course, result in some variant of a fork, but that's acceptable. (The best case would be being able to push/pull it back and forth and keep updating its history.)
I'd just use the subrepo feature that came online in 1.3. It might change slightly, but you won't be left high and dry backwards compatibility wise.
If you can't bring yourself to do so, then what you need to do is:
use hg convert with a filemap that deletes all files except the one you want, converting the repo containing the useful file into a new repo containing only that file and all its history (see the sketch after these steps)
then hg pull from the new single-file-full-history repo into the target repo
hg merge in the target repo and you'll have that file with all its history
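A rough sketch of those steps, assuming the useful file is called useful.py (the file and repository names here are hypothetical):
# contents of filemap.txt: keep only the one file
include useful.py

hg convert --filemap filemap.txt tools-repo single-file-repo
cd main-repo
hg pull -f ../single-file-repo     # -f because the two repos are unrelated
hg merge
hg commit -m "Bring in useful.py with its history"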
The other option would be to hg export the entire tools repo, use grepdiff (part of difftools) to limit to only one file, and then import into the target repo, but that's crazy.
The short answer is you can't copy a file and its history simply, as stated in this thread:
https://www.mercurial-scm.org/pipermail/mercurial/2009-April/025105.html
I'm relatively new to DVCS and you really have to think of each repo as a self-contained package. It's not like svn or p4, where you can hang different projects off the root, configure it how you like, and do partial repo checkouts. (That said, I really like the flexibility of being able to clone repos quickly to try things out. And being able to commit on a local machine.)
I'm just looking at a similar problem. I've branched a repo to make changes and I only want one file out of one changeset. And it is nice to have the history.
You could look at:
hg cat
This would probably involve writing a script to transfer history, i.e. commit N changesets in the target repo with the hg cat results from the source. Wonder if there is an extension to do this?
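A rough sketch of such a script, assuming the file is called useful.py and the target repo sits at ../main-repo (all names here are hypothetical):
# run from the source repo; replays the file's contents oldest-first
for R in $(hg log --template '{rev}\n' useful.py | sort -n); do
    hg cat -r "$R" useful.py > ../main-repo/useful.py
    hg -R ../main-repo commit -A -m "import useful.py as of source rev $R"
done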
You could get the log of the file you want to copy and paste that into a commit comment. It's not in the metadata, but you do have a record and all the hashes etc.
Maybe hg export can also help you.