Where do you keep the configuration files for your stack?

For the websites I develop, our stack is made up of a number of different technologies, each with its own set of configuration files.
This is a Rails stack, so we're running things including:
Nginx w/ Passenger
Varnish
Redis
Memcached
MySQL
MongoDB
We're continually tweaking these configs to support our continually changing system, and if we were to 'lose' them (e.g. due to a server crash or otherwise) it would be a huge pain to rebuild them from memory.
Given that, version control would be extremely useful. I can quite easily add these files to a Git repo or similar and store them in the cloud somewhere, but what about application-specific configuration (for example, the URL rewrite config for a website on a shared server)? Should that live in the same repo as well?

Put website-specific stuff in the Git repo of that website, and system-wide stuff in a "systems" Git repo.
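A minimal sketch of that split, assuming a typical Linux layout (the paths, filenames, and remote URL are all illustrative):

    # Website-specific config: keep it in that site's own repo
    cd /var/www/mysite
    git add config/rewrites.conf
    git commit -m "Track site-specific rewrite rules"

    # System-wide config: a dedicated "systems" repo mirroring /etc paths
    mkdir -p ~/systems/etc/nginx ~/systems/etc/varnish
    cp /etc/nginx/nginx.conf ~/systems/etc/nginx/
    cp /etc/varnish/default.vcl ~/systems/etc/varnish/
    cd ~/systems
    git init && git add . && git commit -m "Initial import of system configs"
    git remote add origin git@example.com:ops/systems.git   # hypothetical remote
    git push -u origin master

For the /etc side, tools like etckeeper automate exactly this copy-and-commit step.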

If you are not currently using source control (of any kind) in your development environment, stop whatever you are doing and sort that out right now. It is the most important aspect of your setup.
At the very minimum you should keep EVERYTHING that is a text file and relates to your app (yes, all config files and URL rewrites) under source control.
Some suggest you can include binary files too, but at the very least all source code and all configuration should be in source control.
By the end of the day :)

Related

What techniques exist for ensuring production environment variables are persisted in some form within a project?

Apologies for title phrasing; I'm sure it could be clearer.
In the Twelve-Factor App methodology, we are encouraged to store web app configuration using environment variables. When using a managed platform such as Heroku, this configuration is safely persisted as a feature of the platform, automatically made available to each deployment, and readily inspectable by developers. This feature is assumed to be stable and, as far as I know, no separate copy of production config need be maintained elsewhere.
When using a simpler unmanaged deployment process, e.g. git push-ing non-containerised code to a VPS, environment variables can still be used (e.g. via a non-source-controlled .env file), but they are now effectively ephemeral: if the VPS is destroyed through some error or incident, the project can be redeployed elsewhere, but the configuration variables will need to be reconstructed from something.
My question is: in such a scenario, what is considered best practice for what that "something" should be? When joining a new project I can often cp .env.example .env to set up a typical local configuration. The values in the example file are usually safe to keep in source control. However, I don't know where (if anywhere) I should be saving production configuration so that I could configure a new production deployment of the kind described above. In the Heroku example, the configuration can always be inspected; but in the VPS example, if that running VPS is the only location where the complete production configuration exists, its unexpected disappearance presents a problem.
Obviously any credentials in the config could be regenerated, but that could quickly turn into a non-trivial exercise. I'm wondering how more experienced folks deal with this issue. Thanks!
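For reference, the local pattern described above looks like this (the variable names are illustrative); the open question is where the production equivalent of the filled-in .env should live:

    $ cat .env.example
    # placeholder values, safe to commit to source control
    DATABASE_URL=postgres://user:password@localhost:5432/app_dev
    SECRET_KEY_BASE=changeme
    $ cp .env.example .env   # then fill in real values; .env itself is gitignored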

Using versioning on a VM with several users

We are looking for a way to use GitHub on an internal system that we are developing at work. We have developed it in PHP and MySQL, with a fair bit of jQuery/Ajax, on a Windows Server VM running IIS. Other staff can access the frontend over the network using the IP address.
There are currently three people working on it, and at the moment we edit the files directly on the VM because the app needs to communicate with the database so we can check our changes have worked. There is no option to install anything like WAMP on our individual machines, and the usual group policy restrictions apply, so the only access we have to a database is via the VM. We have been working with copies of files/folders and the database, but there is always the risk that merging these later would be a massive task.
I do use GitHub at home (mainly the desktop app, though I can just about get by with the command line as long as I have a list of the commands in front of me) to sync between my PC and laptop via GitHub.com, and I believe the issues we get with several people needing to update the same file would be eradicated by using it here at work.
However, there are some queries we need to ensure we have straight in our heads before putting forward a request.
Is what we are asking for viable? Can several branches on the same server be worked on at the same time, or would this only work on individual machines?
Given that our network is fairly restricted, is there any way we can work on the files on our own machines while connecting to the VM-hosted database? I believe an IDE will allow us to run PHP files on a standard machine (although a request for Eclipse is now around six weeks old and there is still no confirmation that we will get it any time soon), but will it also allow us to connect to that database?
The stuff we do is not overly sensitive but the company would certainly not want what we do out there in a public repository (and also would not be likely to pay for a premium GitHub account) so we would need to branch/pull/merge directly from our machines to the VM.
Does anyone have any advice/suggestions/solutions? Although GitHub would be the preferred option as I already use it, we are open to any suggestion that will allow three people on different machines to work simultaneously on a central system while ensuring that we do not overwrite or affect each other's work.
Setting up a Git repo on Windows is not trivial and may require a fair bit of work. You could try SVN instead: it is fairly straightforward to install on Windows and has a gentler learning curve than Git. I am not saying SVN is better or worse than Git overall, just that it is better suited to your needs. We have a similar setup and we use TortoiseSVN as the client, with Subversion (https://subversion.apache.org/) as the server-side repository. SVN also has branches and the rest.
If you would still prefer Git on windows, check this out - https://www.linkedin.com/pulse/step-guide-setup-secure-git-remote-repository-windows-nivedan-bamal
1) It is possible to work on many branches and then merge them into a single branch; that's the preferred Git way of working. You can do the same in SVN.
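A minimal sketch of such a setup with Git, assuming the developers can reach the VM over a network share (the host address and paths are illustrative):

    # On the VM: create a bare repository to act as the shared central remote
    git init --bare C:/repos/intranet.git

    # On each developer's machine: clone over the network path
    git clone //192.168.1.50/repos/intranet.git
    cd intranet
    git checkout -b my-feature
    # ...edit, commit...
    git push origin my-feature

A separate working copy on the VM (updated with git pull after merges) would then be what IIS serves, so everyone still tests against the shared database.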

PHPStorm cache on downloaded files?

I've used PhpStorm before, and since I already had my own private license, I (along with some coworkers) have been asked to evaluate how effective it would be for my current company. I'm hitting a bit of a snag, though, that I really don't think should be a show-stopper.
The way my company sets up its development environments is a bit odd. We check everything into Subversion, into different directories than where the files will end up on the client's system, because we ship them as Debian packages. This makes working with the files directly from Subversion difficult, as PhpStorm has no idea where related files are located.
Because of this, the files on our development virtual machines are not directly under Subversion. Instead, we patch up our virtual machines by installing the updated packages when needed.
This makes life difficult for an IDE, which wants to keep a local copy of the files on your system. The best way I can figure out to handle this is to run a synchronize between the remote server and the local machine (matching by timestamp and size should be fine, and it completes in less than a minute). It would be fine to tell developers "after you patch, make sure you sync with PhpStorm".
However, the problem I'm having is that if I modify a file on the remote system and sync (and it says it downloaded), it takes several minutes after opening the file for the remote changes to show up in PhpStorm.
I have no idea why this would be, and it could potentially lead to really bad results if someone makes a few quick changes, saves, and overwrites the needed files.
I'm currently running PhpStorm on Ubuntu 14.04 64-bit.
Any help would be appreciated
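Not an answer to the caching delay itself, but the timestamp-and-size comparison described above can also be run outside the IDE with rsync (the host and paths are illustrative), which at least verifies whether the files on disk are actually current before blaming PhpStorm:

    # rsync's default quick-check compares file size and modification time,
    # the same strategy as the sync described above
    rsync -av devuser@dev-vm:/var/www/project/ ~/project/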

Should I make my repository my DocumentRoot for my website?

I set up Mercurial on my server, but I am unclear about how things should be arranged. I have been looking for examples of different setups, but perhaps I am using the wrong keywords. Right now there are only going to be a handful of developers, and I am unsure whether I should just make the repo the DocumentRoot. I don't really know what questions to ask since this is new to me, but I would appreciate it if anyone could provide some knowledge and guidance. Some questions I do have right now: how should I set up my servers and repositories? Should I set up a separate VirtualHost for a test clone before making it live? Anything would be helpful! Thanks in advance!
There's probably not a reason to do this. I would keep them separate but set up an automated process (either a custom script or continuous integration (CI)) to deploy from Mercurial to the site by running a single command. Optionally, you can make every commit trigger a deployment.
EDIT: With continuous integration, it is the CI server's responsibility to deploy. If you use SSH, the CI would pull from hg, export, then upload through SSH. That should address your issues. For a comparison of CI servers that support Mercurial, see this question.
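A minimal sketch of such a single-command deploy script, assuming SSH access from the CI server to the web host (the paths and hostname are illustrative):

    #!/bin/sh
    # Export a clean snapshot of the default branch (no .hg metadata)
    hg archive -r default /tmp/site-export
    # Copy it into the live DocumentRoot
    rsync -a --delete /tmp/site-export/ deploy@www.example.com:/var/www/site/
    rm -rf /tmp/site-export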
I don't have THE answer to give you, since many variables and needs affect the workflow, but here are some links to get you started:
http://www.zdnetasia.com/a-development-workflow-for-mercurial-62204755.htm
https://www.mercurial-scm.org/wiki/Workflows
http://www.webdevelopment.nicholastuck.com/tools/one-project-one-repository-mercurial-used-right/
I would also recommend reading this excellent Mercurial introduction: http://hginit.com/
You can also find various questions on SO about workflows with Mercurial; have a look at the sidebar to the right, for example.
When you have a more specific question, don't hesitate to ask again!
I would make your DocumentRoot directory a first-level subdirectory of your repository, and here are some reasons why:
If you're using something like Apache to manage your server, you could put other meta-information - like sites-available and sites-enabled configuration files - in a sibling directory, since they're not really a part of the website documents.
Similarly, you can keep a "docs" directory right next to the code.
If your repository root is your DocumentRoot, all other things being equal, you are also serving up your .hg directory, where your whole repository history is, and your .hgignore file, that kind of thing. You can fix this with a .htaccess file, of course, but it's simpler just to have the child folder.
Essentially, codebases tend not to be exactly one-to-one matches with deployed sites, so I tend to favor having the document root be a subdirectory.
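Concretely, the layout being suggested looks something like this (the directory names are illustrative):

    repo/
        public/            <- DocumentRoot points here
        docs/
        sites-available/   <- Apache config, versioned but never served
        .hgignore
        .hg/               <- repository metadata, outside the served tree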
Deployment is a whole 'nother can of worms. It really depends on your needs as to what you do, but here's what I do:
I run a VirtualBox instance on my computer that looks as close as possible to my deployed server, at least as close as I can get the configuration files to be. I would argue that this approach is less error-prone than an additional VirtualHost entry. Depending on the project, I can get it down to being identical minus perhaps some DNS entries, so I can set everything up to point to either testing.myproject or production.myproject, and this I always automate (I use chef, but that is overkill for a smaller project) so that it's testable code and not prone to finger-fumbling. There's nothing worse than running smoke tests that wipe your database - and having the config accidentally pointing at your prod db. Running a virtual machine also lets you painlessly test upgrades to the environment or OS of your server, and you can nuke and restore from a snapshot if you want to return to an earlier state of the machine's configuration.
If you really want to prevent SSH developer access to your prod machines - and IMO that's a bad idea, because if you have problems on your production server you've prevented your developers from diagnosing or fixing them - then I think your best bet is something like Hudson, a continuous integration server. You only give SSH access to the Hudson user to run your deploy script, but anyone (with the right privileges set in Hudson) can run that job. In fact, this is handy in an environment where you have, say, some product-management members who you want to be able to update the production server without being able to log in. The "poor man's" version of this is using sudo to allow your devs to run a command as another user who does have SSH access - and only allowing them to run the publish script, as sketched below.
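That sudo arrangement might look like this in /etc/sudoers (the group, user, and script names are illustrative):

    # Members of "developers" may run only the publish script,
    # as the "deploy" user that has SSH access to production
    %developers ALL=(deploy) NOPASSWD: /usr/local/bin/publish.sh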
I would still recommend giving your devs access to the machine, though you don't have to hand over the keys to the kingdom. Just create a "developers" group, assign your devs to it, and give it enough permissions to work with the necessary directories on the server, and you should be good to go.
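A minimal sketch of that group setup (the names and paths are illustrative):

    groupadd developers
    usermod -aG developers alice        # repeat per developer
    chgrp -R developers /var/www/site
    chmod -R g+rwX /var/www/site        # group rw on files, rwx on directories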

Where to place web server root?

I've just done an upgrade and am now thinking about the web-server directory structure on my local workstation for web development on Linux. I need to run multiple hosts and different projects. Where is it best to put all the server docroots? /var/www? /srv? /www? I plan to make it a separate partition - could that be good for backups? :) I'm looking forward to your thoughts on this.
For development, you could put the files anywhere - perhaps in your home directory (you can allow Apache to serve files from your home directory by setting UserDir enabled in the Apache configuration: see http://httpd.apache.org/docs/2.1/mod/mod_userdir.html).
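A minimal sketch of that mod_userdir configuration (the username is illustrative):

    # In the Apache config: serve ~alice/public_html as http://host/~alice/
    <IfModule mod_userdir.c>
        UserDir public_html
        UserDir disabled
        UserDir enabled alice
    </IfModule>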
For production, /srv/www is probably the best place for the files; this is (loosely) defined in the FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html#SRVDATAFORSERVICESPROVIDEDBYSYSTEM
Additionally, /srv/www is typically (certainly on Fedora, for example) one of the locations that is regarded by SELinux as web content, which allows Apache to read the files.
Under /srv is the proper place for service data files. Making it a separate volume is not necessary, provided the volume it's on is relatively safe from becoming full.
It is well explained in the Filesystem Hierarchy Standard. It should go in /srv.