I'm working on a web application that's versioned in Mercurial and deployed to Amazon Web Services. We're in the process of planning our repository structure and I'd like to know how other people have handled this.
We'll need separate stable and dev repositories, for bug fixes and new features respectively. In Amazon-land, we have separate live, test and dev environments for running code, code about to go live, and things we're just trying out. The dev environment is likely to be built when we need it, and then shut down again, so its IPs are likely to change.
Ideally, we'd like to hg push from our local dev repos up the chain, all the way to live. However, for reasons of server security, and because the IPs of servers (especially in the transient dev environment) might change, we may find ourselves needing the servers to pull when they're created. We'll also have cases where autoscaling will spawn new servers and we need to get the most recent tested code from somewhere.
I'm interested to know how you've solved this/these problem(s) or if you have any suggestions for how we might go about it.
We assign an elastic IP address to one EC2 instance and make it our central repository. Developers push to and pull from this instance. All production/testing servers pull from this repository. This has worked pretty well over the past couple of years, with developers spread out across different time zones.
We also use ZoneEdit.com to handle the DNS to this IP address, which makes it convenient if we ever decide to use a different elastic IP address or move the repository off EC2 altogether.
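For illustration, the commands might look something like this (the host name and repository path are placeholders, not the real setup); developers and servers go through the DNS name, so the underlying elastic IP can change without touching every checkout:

hg clone ssh://hg@hg.example.com//srv/hg/webapp    # developers, once
hg pull -u                                         # servers, on boot or on demand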
We are looking for a way to use GitHub on an internal system that we are developing at work. We have developed it in PHP and MySQL, with a fair bit of jQuery/Ajax, on a Windows Server VM running IIS. Other staff can access the frontend over the network using the IP address.
There are currently three people working on it, and at the moment we edit the files directly on the VM because the code needs to talk to the database for us to check that our changes have worked. There is no option to install anything like WAMP on our individual machines, and there are the usual group policy restrictions, so the only access we have to a database is via the VM. We have been working with copies of files/folders and the database, but there is always the risk that merging these later would be a massive task.
I do use GitHub at home (mainly the desktop client, though I can just about get by with the command line as long as I have a list of the commands in front of me) to sync between my PC and laptop via GitHub.com, and I believe that the issues we get with several people needing to update the same file would be eliminated by using it here at work.
However, there are some queries we need to ensure we have straight in our heads before putting forward a request.
Is what we are asking for viable? Can several branches on the same server be worked on at the same time, or would this only work on an individual machine?
Given that our network is fairly restricted, is there any way that we can work on the files on our own machines and connect to the VM-hosted database? I believe that an IDE will allow us to run PHP files on a standard machine (although a request for Eclipse is now around 6 weeks old and there is still no confirmation that we will get it any time soon), but will this also allow us to connect to the database on the VM?
The stuff we do is not overly sensitive, but the company would certainly not want what we do out there in a public repository (and would also be unlikely to pay for a premium GitHub account), so we would need to branch/pull/merge directly from our machines to the VM.
Does anyone have any advice/suggestions/solutions for this? Although GitHub would be the preferred option as I already use it, we are open to any suggestion that will allow three people, on different machines, to work simultaneously on a central system while ensuring that we do not overwrite or affect each other's work.
Setting up a Git repo on Windows is not trivial and may require a fair bit of work. You could try SVN instead: it is fairly straightforward to install on Windows and has a gentler learning curve than Git. I am not saying SVN is better or worse than Git, just that it is well suited to your needs. We have a similar setup and use TortoiseSVN as the client, with Apache Subversion (https://subversion.apache.org/) as the server-side repository. SVN also has branches and the rest of the usual features.
If you would still prefer Git on Windows, check this out: https://www.linkedin.com/pulse/step-guide-setup-secure-git-remote-repository-windows-nivedan-bamal
1) It is possible to work on many branches and then merge them into a single branch. That is the preferred way of working in Git, and you can do the same in SVN.
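For example, the basic branch-and-merge cycle in Git looks something like this (the branch name is made up, and "origin" is assumed to be the shared repo on your VM):

git checkout -b feature/reports    # each person works on their own branch
# ...edit, commit...
git push origin feature/reports    # share the branch via the central repo
git checkout master                # when the feature is ready
git merge feature/reports          # fold it back into the main branch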
I set up Mercurial on my server, but I am unclear on how things should be arranged. I am looking for more examples of different setups, but perhaps I am using the wrong keywords. Right now it is only going to be a handful of developers, and I am unsure whether I should just make the repo the DocumentRoot. I really don't know what questions to ask since this is new to me, but I would appreciate it if anyone could provide some knowledge and guidance. Some questions I do have right now: how should I set up my servers and repositories? Should I set up a separate VirtualHost for a test clone before making it live? Anything would be helpful! Thanks in advance!
There's probably not a reason to do this. I would keep them separate but set up an automated process (either a custom script or continuous integration (CI)) to deploy from Mercurial to the site by running a single command. Optionally, you can make every commit trigger a deployment.
EDIT: With continuous integration, it is the CI server's responsibility to deploy. If you use SSH, the CI server would pull from hg, export, and then upload over SSH. That should address your issues. For a comparison of CI servers that support Mercurial, see this question.
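As a rough sketch (not a drop-in script; host names and paths below are placeholders), the CI job's deploy step could be something like:

hg pull -u                          # bring the CI checkout up to date
hg archive -t tgz site.tar.gz       # export the files without the .hg directory
scp site.tar.gz deploy@www.example.com:/tmp/
ssh deploy@www.example.com "tar xzf /tmp/site.tar.gz -C /var/www/site --strip-components=1"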
I don't have The answer to give you, since many variables and needs affect the workflow, but here are some links to get you started:
http://www.zdnetasia.com/a-development-workflow-for-mercurial-62204755.htm
https://www.mercurial-scm.org/wiki/Workflows
http://www.webdevelopment.nicholastuck.com/tools/one-project-one-repository-mercurial-used-right/
I also recommend reading this excellent Mercurial introduction: http://hginit.com/
You can also find various questions on SO about workflows with Mercurial; have a look at the sidebar on the right, for example.
When you have a more specific question, don't hesitate to ask again!
I would make your DocumentRoot directory a first-level subdirectory of your repository, and here are some reasons why:
If you're using something like Apache to manage your server, you could put other meta-information - like sites-available and sites-enabled configuration files - in a sibling directory, since they're not really a part of the website documents.
Similarly, you can keep a "docs" directory right next to the code.
If your repository root is your DocumentRoot, all other things being equal, you are also serving up your .hg directory, where your whole repository history is, and your .hgignore file, that kind of thing. You can fix this with a .htaccess file, of course, but it's simpler just to have the child folder.
Essentially, codebases tend not to be exactly one-to-one matches with deployed sites, so I tend to favor having the document root be a subdirectory.
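Concretely, the layout might look something like this (directory names are just examples), with the web server pointed at the subdirectory rather than the repository root:

myproject/           repository root (contains .hg and .hgignore)
    public/          Apache DocumentRoot points here
    docs/            documentation, kept alongside the code
    apache/          sites-available style configuration files

# and in the Apache vhost config:
DocumentRoot /var/www/myproject/public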
Deployment is a whole 'nother can of worms. It really depends on your needs as to what you do, but here's what I do:
I run a VirtualBox instance on my computer that looks as close as possible to my deployed server, at least as close as I can get the configuration files to be. I would argue that this approach is less error-prone than an additional VirtualHost entry. Depending on the project, I can get this down to being identical minus perhaps some DNS entries, so I can set everything up to point to either testing.myproject or production.myproject, and I always automate this (I use Chef, but that is overkill for a smaller project) so that it's testable code and not prone to finger-fumbling. There's nothing worse than running smoke tests that wipe your database - and having the config accidentally pointing at your prod db.

Running a virtual machine also lets you painlessly test upgrades to the environment or OS of your server, and you can nuke and restore to a snapshot if you want to go back to an earlier state of the machine's configuration.
If you really want to prevent SSH developer access to your prod machines - and IMO that's a bad idea, because if you have problems on your production server you've prevented your developers from diagnosing or fixing them - then I think your best bet is to use something like Hudson, which is a continuous integration framework. You only give SSH access to the Hudson user to run your deploy script, but anyone (with the right privileges set in Hudson) can run that job. In fact, this is handy in an environment where you have, e.g., some product management members you want to be able to update the production server without being able to log in. The "poor man's" version of this is using sudo to allow your devs to run a command as another user who does have SSH access - and only allowing them to run the publish script.
I would still recommend giving your devs access to your machine, though you don't have to hand over the keys to the kingdom. Just create a "developers" group, assign your devs to it, and give it enough permissions to play with the necessary directories of the server, and you should be good to go.
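A hedged sketch of both ideas (the group name, user name, paths and script name are invented for the example):

groupadd developers
usermod -a -G developers alice
chgrp -R developers /var/www/site
chmod -R g+rwX /var/www/site

# and, for the "poor man's" publish access, a sudoers entry (edit with visudo)
# letting group members run only the publish script as the deploy user:
%developers ALL=(deploy) NOPASSWD: /usr/local/bin/publish.sh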
First off, I've been staring at page after page of solutions but none of them seem to fit the situation I have.
I have web developers all around the country using Windows workstations with Eclipse. We decided a DVCS was best for us because the centralized system just isn't working (Serena: slow network connections mean check-ins take forever... people don't do it because it's not "streamlined", etc.).
We use Eclipse to edit and modify files on a development server in a different state. (Most DVCS scenarios assume you have a web server setup on your workstation or are doing binary executable development.)
What I'd like to try is to have a local repository for developer changes and "feature play" but automatically keep the development repository up to date. I thought of using Mercurial hooks to automatically pull/update/merge/push, but that requires the developer to commit every time they want to test a change (in order to fire the hook that uploads their files to the development server). It would be ideal to have this happen automatically on file save, because it's already an issue training people to use version control (mainly because it's painfully slow at the moment over the WAN and between virtual locations; getting the WAN upgraded is not an option).
My guess is that I'm going to have to set up Unison or something similar to keep each developer's repository synced to the development server as if it were a local copy, which would of course then sync with the other developers. I was trying to find out if anyone has a streamlined/simple solution for keeping all developers up to date while still letting them use version control at will (and easily).
This is what branches are for. Have a branch called "unstable" and set your development server up with a repository that auto-updates on commits/merges to only that branch (via hook). Individual developers are free to work on feature branches and commit locally. When something is ready to be shared, the developer merges their changes into the "unstable" branch and pushes that branch to the development server/repository.
I manage my deployments this way. My Web server virtual host web root points to a Mercurial repository/working copy. When something is ready to go out, I merge it into the "stable" branch and push "stable" to the server. The repository hook updates the files on the virtual host with the new changes.
[hooks]
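# on every push (changegroup), update the working copy to the "stable" branch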
changegroup = /usr/bin/hg update stable >&2
I've only been doing this for a month or so but it's been working like a charm.
Also, +1 on checking out Hudson/Jenkins. What you're looking for is called "continuous integration" (CI) and Hudson and Jenkins make that happen.
If you can't have a development server on all workstations, then maybe using Test Driven Development with some sort of Unit Test framework for your language would work - most of those don't require a full server implementation to be able to write and test your code. It would require a change in paradigm but you'd sure get better quality of code out of that.
We're very happy with Mercurial, but we had to change our habits... and it took some time.
Each dev now has a test platform locally. Commits are made on branches, and nothing is pushed before the tests have passed locally.
Then, Hudson is our friend. To integrate the team's work, each commit triggers a build that tests the integrity of the app. Red means roll back and return it to the dev; green is cool.
Devs are there to commit 'sane' code and integrate it, as a team, into the central repo. They must decide when it's time to push; no synchronized task can relieve them of that burden. When I see all the errors that can happen even when each push is a conscious human decision, I can't imagine what would happen if changes were propagated automatically on every file save.
Good experience with Mercurial...
You say that you want to have a local repository for developer changes, but automatically push any changes to the server. If you cannot have a local dev environment to test changes on, what is the point of having local development branches? If your testing must be done on the development server, there is no way I can think of to allow for "feature play" in local repositories while maintaining any kind of sanity on your development server.
Your best bet in this scenario would probably be branching on the development server and having the server checkout different branches to test different features (hg update -C feature-blah). The default state for the server's repository should be a checkout of the main "devel" branch (hg update -C devel), and when any features or bugfix branches are verified as working they are merged back into "devel" and the server's repository updated from that.
Edit to clarify: your developers would either checkout from "devel" or from a feature branch to their local machine. They would then make any edits and push it back up to the server, and then switch the server's active branch to the newly-updated code.
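To make that concrete, the round trip might look something like this (the repo path is made up; "feature-blah" is the example branch from above):

hg pull ssh://devserver//srv/hg/site          # developer: get the latest
hg update devel                               # start from the main dev branch
hg branch feature-blah
# ...edit, commit...
hg push --new-branch ssh://devserver//srv/hg/site

hg update -C feature-blah                     # on the server: test this feature
hg update -C devel                            # back to the default state afterwards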
Also, I am assuming from your other comments that there is only one dev server and it is only able to run one version of the code at a time. If this is not the case, my answer makes no sense at all.
How do I configure the system to have one master and multiple slaves when building normal C code with gmake? How can the slaves access the workspace from the master? I guess an NFS share is the way to go, but if that's not possible, are there any other options?
http://wiki.hudson-ci.org/display/HUDSON/Distributed+builds is there, but I cannot understand how workspace sharing is handled.
Rsync? From the master: SCM job -> done -> rsync to all slaves -> build job, and if it ran on a slave -> rsync the workspace back to the master?
Any proof of concept or real life solutions?
When Hudson runs a build on a slave node, it does a checkout from source control on that node. If you want to copy other files over from the master node, or copy other items back to the master node after a build, you can use the Copy to Slave plugin.
It's surely a late answer, but it may help others.
I'm currently using the "Copy Artifact plug-in" with great results.
http://wiki.hudson-ci.org/display/HUDSON/Copy+Artifact+Plugin
(https://stackoverflow.com/a/4135171/2040743)
Just one way of doing things, others exist.
Workspaces are actually not shared when builds are distributed to multiple machines; they exist as separate directories on each machine. To coordinate things, any item that needs to be distributed from one workspace to another is copied into a central repository via SCP.
This means that sometimes I have a task which needs to wait on the items landing in the central repository. To fix this, I have the task run a shell script which polls the repository via SCP for the presence of the needed items, and it errors out if the items aren't available after five minutes.
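The polling script is nothing fancy; a stripped-down sketch (host, directory and artifact names are placeholders) looks like:

#!/bin/sh
# wait up to 5 minutes for the artifact from this build to show up
HOST=repo.example.com
DIR=/srv/artifacts/$BUILD_NUMBER
FILE=myapp.tar.gz
for i in $(seq 1 30); do
    if ssh "$HOST" test -f "$DIR/$FILE"; then
        scp "$HOST:$DIR/$FILE" .
        exit 0
    fi
    sleep 10
done
echo "ERROR: $FILE not found after 5 minutes" >&2
exit 1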
The only downside to this is that you need to pass around a parameter (the build number) to keep the builds on the same page, preventing one build from picking up an artifact built by a previous run. That, and you have to set up a lot of SSH keys to avoid having to pass in a password when running the SSH scripts.
Like I said, not the ideal solution, but I find it is more stable than the ssh artifact grabbing code for my particular release of Hudson (and my set of SSH servers).
Another downside: the SSH servers on most Linux machines seem to really lack performance. A solution like mine tends to swamp your SSH server with a lot of connections coming in at about the same time. If you find the same happens to you, you can add timer delays (an easy, imperfect solution) or you can rebuild the SSH server with high-performance patches. One day I hope the high-performance patches make their way into the SSH server base code, provided they don't negatively impact SSH server security.
My team has a local development network which is not physically connected to any outside network. This is a contractual obligation and CANNOT be avoided. We also have to coordinate with a team which located halfway across the country and, as previously implied, has no direct network connectivity to us. Our only method of transferring data involves copying data to a USB disk and sending via email/ftp/etc.
NOTE: Let's not discuss the network connection issue or the obvious security flaw with the USB disk access. These issues are non-negotiable.
We're still convincing the external team to use Mercurial (they currently don't use ANY SCM). Assume for the rest of this question that they're using Mercurial; we're going to force their hand any day now.
We switched to Mercurial in hopes of being able to use its distributed nature to better sync with the external team. Internally, we're using Mercurial much like a centralized SCM: each developer clones from a master repo on our integration server, and changes are pushed/pulled through this central location.
Here lies the actual question content:
What is the best way to communicate changes to the remote team (assuming they're using a similar Mercurial setup to ours)? Should I have a local master repo (for local push/pull) and a local integration repo (for remote push/pull)? How can I best avoid the complicated merge issues that will arise? If we use Mercurial bundles to push changes, who will do the merges, and against which repository?
You can basically use it in exactly the same way as if you were online.
You just need to replicate the remote repo locally and unbundle every changeset they send you into it. You should never push your own changes directly into this local mirror (it should always reflect the state of the remote team).
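In command terms, the exchange could look roughly like this (the repository names and the revision are placeholders):

# on their side: bundle everything since the last exchange
hg -R project bundle --base LAST_SHARED_REV changes.hg

# on your side: apply it to the local mirror of their repo, then pull from the mirror
hg -R their-mirror unbundle changes.hg
hg -R integration pull their-mirror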
Afterwards you decide what you want: doing the merges on your side or on theirs, it doesn't really matter.