Where to place web server root?

I've just done an upgrade and am now thinking through the web-server directory structure for a local web-development workstation on Linux. I need to run multiple hosts and different projects. Where is it best to put all the servers' docroots? /var/www? /srv? /www? I plan to make it a separate partition - could that be good for backups? :) I'm looking forward to your thoughts on this.

For development, you can put the files anywhere - perhaps in your home directory. You can allow Apache to serve files from your home directory by enabling UserDir in the Apache configuration; see http://httpd.apache.org/docs/2.1/mod/mod_userdir.html.
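A minimal sketch of what enabling that looks like, assuming Apache 2.4 access-control syntax (the module path varies by distribution):

    # Serve ~/public_html as http://host/~user/
    LoadModule userdir_module modules/mod_userdir.so

    <IfModule mod_userdir.c>
        UserDir public_html
        UserDir disabled root
    </IfModule>

    <Directory "/home/*/public_html">
        Require all granted
    </Directory>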
For production, /srv/www is probably the best place for the files; this is (loosely) defined in the FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html#SRVDATAFORSERVICESPROVIDEDBYSYSTEM
Additionally, /srv/www is typically (certainly on Fedora, for example) one of the locations that is regarded by SELinux as web content, which allows Apache to read the files.
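If SELinux does block Apache from reading files in a non-default location, the context can be set explicitly. A Fedora/RHEL-style example (the exact type may vary with your policy):

    # Label /srv/www as web content and apply the label recursively
    semanage fcontext -a -t httpd_sys_content_t "/srv/www(/.*)?"
    restorecon -Rv /srv/www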

Service data files belong under /srv. Making it a separate volume is not necessary, provided the volume it's on is relatively safe from becoming full.

It is well explained in the Filesystem Hierarchy Standard. It should go in /srv.

Related

Using berks for local development only?

I don't want to use Berks in production because I don't like the idea of nodes going out to the web to pull cookbooks (I only want them to pull cookbooks from the Chef server in the normal way). But I like using Berks for local development because it resolves the dependencies for Test Kitchen for me.
I was thinking about just adding the Berksfile and Berksfile.lock to .gitignore, but I figured I'd ask whether it's possible to accomplish this with Berks without removing it from production.
"nodes" will never go to the internet looking for cookbooks, they'll always be sourced from the chef server, so.... The question back is: how do you propose to deliver cookbooks to the chef server used to manage your production nodes?
What most people appear to do is commit the Berkshelf lock file and just run a "berks apply" against the target Chef server. That will most likely fit your needs.
Personally, I like better separation between development and my production/non-production systems. I create a release tarball containing all the cookbooks that I've tested in development, using the "vendor" command in Berkshelf, and store this artifact in a repository manager like Nexus. I suspect many would consider this overkill, but it enables an off-line (no internet connection required) and traceable delivery of my configuration.
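For reference, both approaches come down to a couple of Berkshelf commands. A rough sketch, with artifact names as placeholders:

    # Approach 1: resolve dependencies locally, push to the Chef server
    berks install
    berks upload

    # Approach 2: vendor everything into a self-contained release artifact
    berks vendor build/cookbooks
    tar czf cookbooks-release.tgz -C build cookbooks
    # then store the tarball in an artifact repository such as Nexus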

Can HTML5 be used as a front end for an FTP server?

Quick question today. I've done a little digging around on the net and I can't really find a definitive answer.
Basically, I run my own server on a redundant dual-core PC with 4 GB RAM and 2 TB of storage (server1).
On it, I would like to make an FTP partition, the reason being that I would very much like to be able to transfer files back and forth between work, uni and home as I please.
I also run a website from my server which lets me stream media from my hard drive to any laptop, tablet, desktop, iPhone, Android... you name it!
I would LIKE to be able to add a section on my website where I can log in and access my files through a sort of HTML5 front end.
I am aware of, and know how to create, a login backed by a database storing MD5 hashes, using cookies to stop unauthorised people accessing my FTP.
Any help or a shove in the right direction would be much appreciated! Thanks in advance :D
Yes, it's possible, but it won't be the HTML5 FTP server you mentioned.
You can achieve this by installing a web server such as Apache on your machine and then making directories public. Run Apache on some port and you will be able to access the directory; if your server is running on port 8080, the URL will look like domain.com:8080. You can style the directory listing with a simple script, and make it password-protected as well using .htaccess, as sketched below.
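A hedged sketch of that .htaccess approach (paths and the realm name are examples, and the directory must permit overrides via AllowOverride):

    # .htaccess in the shared directory
    Options +Indexes
    AuthType Basic
    AuthName "Private files"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user

Then create the password file once from a shell with: htpasswd -c /etc/apache2/.htpasswd yourname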
The other option is to use a PHP script. Many scripts are available, both commercial and open source. I recommend trying osFileManager; it has a lot of features:
Browse the directory structure
Create files
Upload files
Rename files
Move files
Delete files
Edit files
Change permissions
Change password
Create users
Here are its installation instructions: http://www.osfilemanager.com/osfilemanager-docs.html
Alternatively, a paid HTML5 & AJAX based script can be bought for $14 here:
http://codecanyon.net/item/file-manager-and-backup-system/5177206

Storing images for my website

I want to set up a separate Amazon EC2 instance to store all the images uploaded by users via my website. I want to be able to serve images from this dedicated server. I know how to set up DNS names pointing to this server, but I would like to know how to set up the directories. For example, if I refer to an image URL as http://images.mydomain.com/images/sample.jpg, then
images.mydomain.com is the server name and
images should be the folder name
Now the question is: should a web server be running on this machine to serve the images, or can I just make the images folder public so that it is visible to the entire world? How do I avoid directory listing?
Pointer to any documentation would be greatly appreciated.
It certainly is possible to set up a separate EC2 instance to serve your images. You may have good reasons to do that - for example, you may want to authorize only specific users or groups of users to access certain images, in a way that's closely controlled by program logic.
OTOH, if you're just looking to segment the access of image/media files away from the server that provides HTML/web content, you will get much better performance / scalability by moving those files to a service that is specifically tuned for storage and web access. Amazon's S3 (Simple Storage Service) is one relatively straightforward option. Amazon's CloudFront content distribution network (CDN) or a competing CDN would be an even higher performance option.
Using a CDN for file access does add the complexity of configuring the CDN, but if you're going to the trouble of segmenting media access from your primary web server, and if you're expecting any significant I/O load, I've found it to be a high-return-for-effort-expended approach.
I would definitely not implement this as you are planning. You should store all your images in an Amazon S3 bucket and serve them via Amazon's CloudFront CDN. Why go through the hassle of setting up and maintaining an EC2 instance to do what Amazon has already done? S3 provides infinite storage, manages permissions, metadata, etc. CloudFront provides fast access to your images, caching them at edge locations all around the world. Additionally, you can use Amazon Route 53 (or some other DNS service) to point various CNAMEs to your CloudFront distribution.
If you're interested in this approach I'd be happy to provide more info on how to set this up.
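As a starting point, here is a rough sketch of the S3 side using the AWS CLI (bucket and file names are examples):

    # Create a bucket and upload a publicly readable image
    aws s3 mb s3://images.mydomain.com
    aws s3 cp sample.jpg s3://images.mydomain.com/images/ --acl public-read

A CloudFront distribution can then be pointed at the bucket, with images.mydomain.com CNAMEd to the distribution.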
Yes, you will definitely need to run a web server on the machine; otherwise it will not be possible for clients to connect via HTTP on port 80 and view the images in a browser. This has nothing to do with whether directory listing is enabled. Once you have a web server running, you can disable directory listing in its configuration.
Install Apache on your server and run it (http://httpd.apache.org/docs/2.0/install.html). Then set up what's called a 'site' in its configuration, pointing to a local directory which becomes the base directory for your server; it could, for example, be /home/apache on a Unix system. There you create your images folder. If Apache is set up correctly you can then access your images via http://images.mydomain.com/images/sample.jpg.
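A minimal sketch of such a 'site' definition with directory listing disabled (paths are examples; "Require all granted" assumes Apache 2.4, older versions use Allow/Deny):

    <VirtualHost *:80>
        ServerName images.mydomain.com
        DocumentRoot /home/apache

        <Directory /home/apache>
            Options -Indexes
            Require all granted
        </Directory>
    </VirtualHost>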

Should I make my repository my DocumentRoot for my website?

I set up Mercurial on my server, but I am unclear how things should be arranged. I am looking for examples of different setups, but perhaps I am using the wrong keywords. Right now it is only going to be a handful of developers, and I am unsure if I should just make the repo the DocumentRoot. I really don't know what questions to ask since this is new to me, but I would appreciate it if anyone could provide some knowledge and guidance. Some questions that I do have right now are: how should I set up my servers and repositories? Should I set up a separate VirtualHost for a test clone before making it live? Anything would be helpful! Thanks in advance!
There's probably no reason to do this. I would keep them separate but set up an automated process (either a custom script or continuous integration (CI)) to deploy from Mercurial to the site by running a single command. Optionally, you can make every commit trigger a deployment.
EDIT: With continuous integration, the CI server is responsible for deploying. If you use SSH, the CI would pull from hg, export, then upload through SSH. That should address your issues. For a comparison of CI servers that support Mercurial, see this question.
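The deploy step itself can be a very small script. A rough sketch of what the CI job might run (host and paths are placeholders):

    #!/bin/sh
    # export a clean copy of the latest revision and sync it to the web host
    hg pull && hg update --clean default
    rm -rf /tmp/site-export
    hg archive /tmp/site-export
    rsync -az --delete /tmp/site-export/ deploy@www.example.com:/var/www/site/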
I don't have THE answer to give you, since many variables and needs affect the workflow, but here are some links to get you started:
http://www.zdnetasia.com/a-development-workflow-for-mercurial-62204755.htm
https://www.mercurial-scm.org/wiki/Workflows
http://www.webdevelopment.nicholastuck.com/tools/one-project-one-repository-mercurial-used-right/
I would also recommend reading this excellent Mercurial introduction: http://hginit.com/
You can also find various questions on SO about workflows with Mercurial; have a look at the sidebar on the right, for example.
When you have a more specific question, don't hesitate to ask again!
I would make your DocumentRoot directory a first-level subdirectory of your repository, and here are some reasons why:
If you're using something like Apache to manage your server, you could put other meta-information - like sites-available and sites-enabled configuration files - in a sibling directory, since they're not really a part of the website documents.
Similarly, you can keep a "docs" directory right next to the code.
If your repository root is your DocumentRoot, all other things being equal, you are also serving up your .hg directory, where your whole repository history is, and your .hgignore file, that kind of thing. You can fix this with a .htaccess file, of course, but it's simpler just to have the child folder.
Essentially, codebases tend not to be exactly one-to-one matches with deployed sites, so I tend to favor having the document root be a subdirectory.
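To make that concrete, a repository laid out this way might look like the following (names are purely illustrative):

    myproject/
        .hg/            <- repository history, never served
        .hgignore
        docs/           <- documentation, not part of the site
        config/         <- copies of sites-available / sites-enabled
        htdocs/         <- DocumentRoot points here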
Deployment is a whole 'nother can of worms. It really depends on your needs as to what you do, but here's what I do:
I run a VirtualBox instance on my computer that looks as close as possible to my deployed server, at least as close as I can get the configuration files to be. I would argue that this approach is less error-prone than an additional VirtualHost entry.
Depending on the project, I can get this down to being identical minus perhaps some DNS entries, so I can set everything up to point to either testing.myproject or production.myproject. This I always automate (I use Chef, but that is overkill for a smaller project) so that it's testable code and not prone to finger-fumbling. There's nothing worse than running smoke tests that wipe your database - and having the config accidentally pointing to your prod db.
Running a virtual machine also lets you painlessly test upgrades to the environment or OS of your server, and you can nuke and restore to a snapshot if you want to go back to an earlier state of the machine's configuration.
If you really want to prevent SSH developer access to your prod machines - and IMO that's a bad idea, because if you have problems on your production server, you've prevented your developers from diagnosing or fixing it - then I think your best bet is to use something like Hudson, which is a continuous integration framework. You only give SSH access to the Hudson user to run your deploy script, but anyone (with the right privileges set in Hudson) can run that job. In fact, this is handy in an environment where you have, e.g., some product-management members you want to be able to update the production server without being able to log in. The "poor man's" version of this is using sudo to allow your devs to run a command as another user who does have SSH access - and only allowing them to run the publish script.
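The sudo variant can be a single line in /etc/sudoers (group name, deploy user and script path are examples):

    # members of "developers" may run the publish script as the deploy user
    %developers ALL=(deploy) NOPASSWD: /usr/local/bin/publish-site.sh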
I would still recommend giving your devs access to your machine, though you don't have to hand over the keys to the kingdom. Just create a "developers" group, assign your devs to it, and give it enough permissions to play with the necessary directories of the server, and you should be good to go.
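Setting that up is only a few commands on most Linux systems (group, user and path are examples):

    groupadd developers
    usermod -a -G developers alice
    chgrp -R developers /var/www/site
    chmod -R g+rwX /var/www/site    # group read/write, execute on directories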

Where do you keep the configuration files for your stack?

For the website(s) I develop, we have a number of different technologies making up our stack, each with its own set of configurations.
This is a Rails stack, so we're running things including:
Nginx w/ Passenger
Varnish
Redis
Memcached
MySQL
MongoDB
We're continually tweaking our configs and changing them to support our continually changing system, and if we were to 'lose' the configurations (e.g. due to a server crash or otherwise) it would be a huge pain to rebuild them from memory.
Given that version control would be extremely useful, I can quite easily add these files to a Git repo or similar and store them in the cloud somewhere, but what about application-specific configuration (for example, URL rewrite config for a website on a shared server)? Should that be in the same repo as well?
Put website-specific stuff in the Git repo of that website, and system-wide stuff in a "systems" Git repo.
If you are not currently using Source Control (of any kind) in your development environment, stop whatever you are doing and sort that out right now. That is the most important aspect of your setup.
At a very minimum you should keep EVERYTHING that is a text file and relates to your app (yes, all config files and URL rewrites) in source control.
Others suggest you can include binary files as well, but at the very minimum all source code and all config should be in source control.
By the end of the day :)
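As a starting point, a hedged sketch of bootstrapping such a "systems" repo by hand (paths are examples; a tool like etckeeper automates this for /etc):

    mkdir -p ~/systems-config/nginx ~/systems-config/varnish
    cp /etc/nginx/nginx.conf ~/systems-config/nginx/
    cp /etc/varnish/default.vcl ~/systems-config/varnish/
    cd ~/systems-config
    git init
    git add -A && git commit -m "snapshot of current server configs"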