Git workflow for dev and production - where should I create the repos?

I've read somewhere* a setup like this would be nice:
Two main branches, one for each server.
Pushing to master sends changes on to live;
Pushing to dev/stage (or whatever you call it) sends changes to staging;
Workflow:
Create branch from dev;
work locally until you're ready to test;
merge back to dev;
push to Hub, which sends changes to dev/staging server.
Once you're ready for those changes to go live:
merge from dev to master,
then push master to Hub, which sends those changes on to the live server.
So, with two main branches, one for each server,
I would have one branch "production" on "webroot/myliveapp/"
and another branch "development" on "webroot/devapp/".
Where should the repository be?
UPDATE:
I mean:
We will have, according to this flow:
Prime repo;
Bare repo hub;
Clones;
The development and production branches should belong to one repository, right?
If this is correct, then where should we issue the first git init command?
On our Prime repo ?
So we will have:
"webroot/myliveapp/" - production branch;
"webroot/devapp/" - development branch;
"webroot/.git" - Prime repository;
Does this make sense ?
Or should the Prime repository correspond to our production branch location ?
*Note: if you need context on the workflow I'm trying to implement, it is this one:
http://joemaller.com/990/a-web-focused-git-workflow/

Thanks for the update on your question; it is clearer now.
I believe the problem you're having is based on a misunderstanding of the Git workflow: Git doesn't equate directories to branches, it equates a view of your filesystem to branches. This is powerful - but also easy to shoot yourself in the foot with. Let me explain.
Git acts more like a database-backed, differentially-versioned, history tracking filesystem in itself. It is "above" your filesystem, not "part of" it. It doesn't use your filesystem to represent branches, rather, when you check out a different branch, all the files in your filesystem will change to be the files in that branch. You are asking Git to make your filesystem represent the alternate reality of that branch.
If you are on branch master, and it has a file root/foo.txt committed, and you check out branch experiment, which does not have root/foo.txt committed, you will find that file gone when you look for it. It is a part of master, not experiment, and so it is not present in your filesystem. This is why Git is really picky about your current branch being committed before it lets you switch branches - if you have unstaged changes on your filesystem that Git doesn't know about, it refuses to blow them away by overwriting them with a different reality. You have to intervene to make things right first.
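For example, a hypothetical session (branch and file names are just the ones used above):

git checkout master
ls root/            # foo.txt is here
git checkout experiment
ls root/            # foo.txt is gone - it only exists in master's reality
git checkout master # and now it's back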
So, to answer the question, don't create subdirectories for "myliveapp" and "devapp" - create different branches. Just keep your one codebase under "webroot". Then, hack away on, say, the "unstable" branch, committing your changes as usual. You can then switch all of the files in your repository to the versions on your dev server by switching to the "devapp" branch, and you can similarly switch back to "unstable" at any time.
When you want to update a branch, e.g. doing an update of your dev server, you can merge "unstable" into "devapp". This will make all of the files of "devapp" look like those of "unstable", bringing it up to date.
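A minimal sketch of that flow (assuming a "devapp" branch already exists):

cd webroot
git checkout -b unstable     # create and switch to the hacking branch
# ...edit files, then:
git add .
git commit -m "some new feature"
git checkout devapp          # files now match the dev server's version
git merge unstable           # bring devapp up to date with unstable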
One other thing to note: the difference between a prime repo, a bare repo, and clones is almost nil. There is virtually no difference in the software; rather, it's a human convention to say "Linus' kernel is the canonical Linux kernel". With that understanding:
A prime repo is just one repository that everyone agrees holds the "canonical" version of the software. That is, whenever a developer has made a change they want everyone to see, rather than saying, "Pull my version of devapp", they can say, "I've published my changes to our prime repo." It's simply an easy convention for people to rally around.
A clone is a copy of some other repo. I could clone the prime repo, make changes, and then you can clone my repo. If you make changes, you can push them either onto the prime repo or onto mine, as long as the merge is valid and you have permissions on the computer.
A bare repo simply has no "working copy" - there is no "webroot" directory on that computer, only the repository database (the contents you'd normally find inside the .git directory) - which is fine for servers where nobody needs to edit the files.
Finally, the .git dir doesn't hold the files of your repo, it holds the git configuration and database. It's your entire repository history in database form, which is used to populate the rest of the repo with a particular version of your software. That's why I made the comment: you can locally check out any version of any alternate reality of the repository, with no network communication, at any time - because it's all there in the .git dir. The only network communication necessary is for when you want to sync your local repository to some other repository, using push or pull.
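To tie this back to the hub-based workflow you linked: a rough sketch (the paths and the hook contents here are assumptions, not a prescription) is a bare hub plus a working clone that the web server serves, with a hook on the hub doing the deployment:

# on the server: a bare hub, plus a working clone for the live site
git init --bare /srv/hub.git
git clone /srv/hub.git /srv/webroot

# /srv/hub.git/hooks/post-update (mark it executable):
#!/bin/sh
unset GIT_DIR    # hooks run with GIT_DIR set; clear it before touching another repo
cd /srv/webroot && git pull /srv/hub.git master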

Related

How to properly use the hg share extension?

Say I have cloned a repo to a directory called ~/trunk and I want to share a branch named my-new-branch to the directory ~/my-new-branch. How would I do that with the hg share extension?
This is what I've been doing:
cd ~
hg share trunk my-new-branch
But then when I cd into the new directory I have to hg up to the branch?
Confused.
IMO share is a very useful command which has some great advantages over clone in some cases. But I think it is unfortunately overlooked in many instances.
What share does is to re-use the 'store' of Mercurial version control information between more than one local repository. (It has nothing directly to do with branching.)
The 'store' is a bunch of files representing all the history Mercurial saves for you. You don't interact with it directly. It's a black box 99.99% of the time.
share differs from the more commonly-used clone command in that clone would copy the information store, taking longer to run and using potentially a lot more disk space.
The "side effect" of using share rather than clone is that you will instantly see all the same commits in every shared repository. It is as if push/pull were to happen automatically among all the shared repos. This would not be true with clone, you'd have to explicitly push/pull first. This is quite useful but something to be mindful of in your workflow because it may surprise you the first time you use it if you are only used to clone.
If you want to work in multiple branches (named or unnamed) of your project simultaneously,
either clone or share will work fine. Once you have created the second repository, yes, you need to update it to whatever changeset you want to begin working on.
Concrete example using share:
hg clone path\to\source\repo working1   # create local repo working1, cloned from somewhere
cd working1
hg up branchname1                       # set working1's files to branchname1
cd ..
hg share working1 working2              # working2 re-uses working1's store
cd working2
hg up branchname2                       # some other branch or point to start working from
As soon as you commit something in working1 that commit will be visible in the history of working2. But since they are not on the same branch this has no real immediate effect on working2.
working2 will retain path\to\source\repo as its default push/pull location, just like working1.
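You can see the mechanism directly: a shared repository records the store it borrows in a small pointer file (output is illustrative):

cat working2/.hg/sharedpath    # prints the path of working1's .hg - the shared store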
My own practice has been to create numerous locally shared repositories (quick, easy, saves space) and work in various branches. Often I'll even have a few of them on the same named branch but set to different points in history, for various reasons. I no longer find much need to actually clone locally (on the same PC).
A caveat -- I would avoid using share across a network connection - like to a repo on a mapped network drive. I think that could suffer some performance or even reliability issues. In fact, I wouldn't work off a network drive with a Mercurial repo (if avoidable) in any circumstance. Cloning locally would be safer.
Secondly -- I would read the docs; there are a few weird scenarios you might encounter, but based on my own experience they are unlikely.
Final note: although share is implemented as an "extension" to Mercurial, it has effectively been a part of it since forever. So there is nothing new or experimental about it; don't let the "extension" label put you off.

Can I work in the repository in a single user Mercurial workflow?

I use Mercurial in a single-user workflow to have the option to roll back changes if my coding or writing goes horribly wrong (I primarily use the Stata and R statistics packages and LaTeX). While working only locally, this has been easy since all I have is the main repo.
Recently I have started ssh-ing into a Linux server for more computational power. So far I have been manually copying files back and forth and using Mercurial only locally, but I would like to use Mercurial to take care of this and keep these two workflows synchronized. Also, I like the ability to code both locally (on my laptop or desktop) and on the server.
Do I need to work on a clone of the main repo on the server and keep the main repo untouched? Or can I work directly in the main repo when I am on the server? In this question @gizmo points to this workflow guide; the "single developer" discussion is helpful, but it's still not clear to me that I can work in the main repo while I'm on the server without causing some major problem that I don't yet understand.
Thanks!
Edit: I should add that I have worked through Joel Spolsky's HgInit.com tutorial and I'm comfortable pushing/pulling/cloning/etc over ssh, but I am still not sure if I can work in the main repo without causing heartache later. Or maybe this is more a philosophical question? Thanks!
Mercurial is a DVCS, which means that in each location you have both a local working copy and a local repository.
It also means you can freely exchange (pull|push) data between repos (as long as they provide remote-access methods).
If you are
comfortable pushing/pulling/cloning/etc over ssh
and you don't forget to run a pull|push cycle around your work at home (so you don't have to run hg serve on the home host and sync from the server side), you won't get any headaches at all, and you'll have a perfectly linear, aggregated history in each place. Even if you sometimes forget to sync, in the worst case you'll end up with two heads later, which are easy to merge (I don't know the formats of Stata and R data files, but LaTeX, as text, is mergeable).
There is no problem with working directly in the repository on your server. From Mercurial's point of view, the "main" repository is just another random repository — Mercurial doesn't consider it to be special.
You don't say this directly, but one thing that people ask is "What happens when I push to the server?" The answer is that hg push only sends data into the repository (the .hg/ folder). The working copy is not touched on the server when you push to it. Since you push new changesets to the server, you might need to run hg update the next time you work on the server. This is just like if you had run hg pull on the server — there you'll also merge or update afterwards.
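A minimal round trip, assuming the main repo lives at ~/project on the server (host and path are illustrative):

# first time only: put the repo on the server
hg clone . ssh://you@server/project
# thereafter, on your laptop, when you're done working locally:
hg push ssh://you@server/project
# on the server: the push only updated .hg/, so refresh the working copy
hg update
# ...work and commit on the server, then back on the laptop:
hg pull ssh://you@server/project
hg update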
I have this situation all the time: I create a repository at home and clone it to my computer at work. I change files in either location and push/pull between the two repositories. If I need to share my work with others, then I make a repository at Bitbucket and push the code there. That way Bitbucket serves as a nice canonical repository for the code and I typically change the default path to Bitbucket in the repositories at home and at work. So at home I would have:
[paths]
default = https://bitbucket.org/mg/<repo>/
work = ssh://mg@work/<repo>
so that I can do hg push to send things to Bitbucket and hg pull work to grab things directly from work (in case I forgot to push to Bitbucket before leaving).

How to setup a DVCS system that keeps some of the stuff we like about our centralized VCS?

Right now, we have a small team of developers using TFS for our version control. I'm evaluating the possibility of us moving to a DVCS, and am wondering if we'd need to give up some of the stuff we like about our current system if we moved to DVCS, or if we can find a way to support it.
Right now we have a Stable branch, and one branch for each developer (you can think of each dev's branch as a feature branch that is reused from feature to feature). The stuff we like is:
1) Each time a dev checks in changes to his branch, we do a build and test of everything on the servers, followed by automatic deployment of all projects to that developer's test environment.
2) Merging from any dev's branch to Stable is done by me, so that I can have 1 last check on what is happening to our stable branch before committing the changes.
3) If I want to help a dev with something, I can just grab latest from their branch and look at it on my machine.
I'm trying to understand how this could work with a DVCS (specifically we are testing with Mercurial).
I'm hoping to be able to pull off something like this:
1) We set up a central repository, and we create Main and Release branches in addition to 1 branch for each developer.
2) All devs clone the repository to their local machine.
3) All work is done in their personal branch against the local repository.
4) When they are done, they would pull from the central repository and perform a local forward-integration merge from Main into their branch, to integrate any changes that have happened on Main in the meantime.
5) They would then push their changes to the central repository.
6) Some CI service would pick up this change, causing a build/test/deploy-to-dev of all our projects in that branch.
7) If everything was ok the dev would shoot me an email saying their branch was ready for merging to Main.
8) I could then merge in their changes, either by somehow connecting directly to the remote repository, or by doing a pull->merge->push.
So to deal with our requests:
1) I'm assuming there are CI tools that can watch a branch in Mercurial and kick off a build/test/deploy process (like CC.Net).
2) I can still manage the final merge process from DevX to Main either by connecting to the remote repo, or by pulling, merging and pushing through my local repo.
3) I believe I could either pull changes directly from another dev's repo, or I could just pull from the central repo, and then update my working directory to work on their code.
So do I have this mostly right?
Yep, you have all that right. Regarding your final assumptions:
1) There's a Mercurial Source Control Block for CruiseControl.NET. Another CI server I've heard of in use with Mercurial is Jenkins.
2) Correct. For integration with Main, I would prefer pulling (from either) and merging on my own machine before pushing, rather than merging on the server.
3) Exactly so.
It sounds like your developers are fairly disciplined, but just in case you need better control over certain aspects of your operations:
You can use hooks to issue warnings when someone tries to merge their branch to Main. In-process hooks have to be written in Python, but they have access to the Mercurial API that way. You could also place hooks on the server that reject pushes containing a merge to Main not done by certain users.
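For instance, a server-side hook along these lines could enforce that policy (a sketch only - the script path and the policy test are assumptions):

# in the central repo's .hg/hgrc:
[hooks]
pretxnchangegroup.guard_main = /path/to/guard-main.sh

# guard-main.sh:
#!/bin/sh
# Reject pushes that add merge changesets on the Main branch.
# $HG_NODE is the first incoming changeset; a non-zero exit rolls back the push.
if [ -n "$(hg log -r "merge() and branch('Main') and $HG_NODE:tip" --template x)" ]; then
    echo "merges to Main must go through the integrator" >&2
    exit 1
fi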
One way some organizations control integration is a pull-only scenario. Only a few developers can push to the official repository and other developers send them pull requests. The Mercurial book's Chapter 6 covers this a bit, too.
A branch per developer is good. A branch per feature is also useful, allowing each developer to work on multiple things in parallel, then merging each to their branch when done. They just have to remember to close that feature branch before doing so, so the branch name doesn't keep popping up. This can be done with clones as well, but I find myself preferring named branches, since I have to keep work/backup/laptop development clones all synced so I can work on whatever, whenever. I still do expendable work as a clone first.
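Closing a finished feature branch before merging looks like this (branch names are illustrative):

hg update feature-x
hg commit --close-branch -m "finished feature-x"  # name stops showing in 'hg branches'
hg update dev-alice                               # back to the developer's own branch
hg merge feature-x
hg commit -m "merge feature-x"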

Moving from Subversion to Mercurial - how to adapt the workflow and staging/integration systems?

We got all psyched about moving from svn to hg, and while the development workflow is more or less fleshed out, there remains the most difficult part - the staging and integration system.
Hopefully this question goes a bit further than your common "how do I move from xxx to Mercurial". Please forgive the long and probably poorly written question :)
We are a web shop that does a lot of projects (mainly PHP and Zend), so we have one huge svn repo with 100+ folders, each representing a project with its own tags, branches, and trunk of course. On our integration and testing server (where QA and clients look at work results and test stuff) everything is pretty much automated - Apache is set to pick up new projects automatically, creating a vhost for each project/trunk; mysql migration scripts are right there in trunk too, and developers can apply them through a simple web interface. Long story short, our workflow is this now:
Checkout code, do work, commit
Run update on the server via the web interface (this basically does svn up on the server for a particular project and also runs the db-migration script if needed)
QA changes on the server
This approach is certainly suboptimal for large projects when we have 2+ developers working on the same code. Branching in svn was only causing more headaches - well, hence the move to Mercurial. And here is where the question lies: how does one organize an efficient staging/integration/testing server for this type of work (where you have many projects; say, a single developer could be working on 3 different projects in 1 day)?
We decided to have the 'default' branch track production, essentially, and to make all changes in individual branches. In that case, though, how can we automate staging updates for each branch? If earlier, for one project, we were almost always working on trunk, so we needed one DB, one vhost, etc., we are now potentially talking about N databases per project, N vhost configs, and so on. Then what about CI stuff (such as running phpDocumentor and/or unit tests)? Should it only be done on 'default'? On branches?
I wonder how other teams solve this issue, perhaps some best practices that we're not using or overlooking?
Additional notes:
Probably worth mentioning that we've picked Kiln as a repo hosting service(mostly since we're using FogBugz anyway)
This is by no means the complete answer you'll eventually pick, but here are some tools that will likely factor into it:
repositories without working directories -- if you clone -U or hg update null you get a repository with no working directory (only the .hg). They're better on the server because they take up less room and no one is tempted to edit there
changegroup hooks
For that last one: the changegroup hook runs whenever one or more changesets arrive via push or pull, and you can have it do some interesting things, such as:
push the changesets on to another repo depending on what has arrived
update the receiving repo's working directory
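The second item, in its simplest form, is a one-line hook in the receiving repo's .hg/hgrc (a sketch; adapt to taste):

[hooks]
# after every incoming changegroup, refresh the working directory to tip
changegroup = hg update --clean tip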
For example one could automate something like this using only the tools described above:
developer pushes five changesets to central-repo/project1/main
last changeset is on branch 'my-experiment', so csets are automatically re-pushed to the (optionally created) repo central-repo/project1/my-experiment
central-repo/project1/my-experiment automatically does hg update tip, which is certain to be on the my-experiment branch
central-repo/project1/my-experiment automatically runs tests in its working dir and if they pass does a 'make dist' that deploys, which might set up database and vhost too
The biggie - and chapter 10 in the Mercurial book covers this - is to not have the user waiting on that process. You want the user to push to a repo that contains possibly-okay code and have the automated processes do the CI and deploy work, which, if it passes, ends up in a likely-okay repo.
In the largest Mercurial setup I've worked in (20 or so developers), we got to the point where our CI system (Hudson) was periodically pulling from the maybe-ok repos for each project, then building and testing, handling each branch separately.
Bottom line: all the tools you need to set up whatever you'd like probably already exist, but gluing them together will be one-off sort of work.
What you need to remember is that DVCS (vs. CVCS) introduces another dimension to versioning:
You no longer have to rely only on branching (and getting a staging workspace from the right branch);
with a DVCS you also have the publication workflow (push/pull between repos).
Meaning your staging environment is now a repo (with the full history of the project), checked out at a certain branch:
Many developers can push many different branches to that staging repo: the reconciliation process can be done in isolation within that repo, in a "main" branch of your choice.
Or they can pull that staging branch into their repo and test things out before pushing back.
(Diagram from Joel Spolsky's Mercurial tutorial, HgInit.)
A developer doesn't necessarily have to commit for others to see: the publication process in a DVCS allows him/her to pull the staging branch first, reconcile any conflicts locally, and then push to the staging repo.

Mercurial local branching and pushing to shared repository

I created a branch on my local Mercurial repository. I want to push to the shared repository so my work can be backed up, but I don't want other project members to see the branch.
What's the standard operating procedure in this case?
I'd like to avoid having the repository get full of developer branches that I don't need to see.
It sounds like you're conflating two issues. If you need your branches backed up, simply clone your local repository someplace else that is actually backed up, but not visible to your coworkers (that is, not the main repository). You can always easily pull changesets from more than one repository in case that becomes necessary.
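A sketch, assuming you have ssh access to some machine that is backed up (host and paths are illustrative):

hg clone . ssh://you@backup-host/backups/myrepo   # one-time: create the backup copy
# ...later, whenever you want your work backed up:
hg push ssh://you@backup-host/backups/myrepo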