Best method of showing clients their website during development - language-agnostic

We are trying to streamline the process of showing clients their websites while they are in development, without needing to change absolute paths and so on.
We mostly develop locally and edit our hosts files to point the domain name at the development machine. When we are ready to show the client, we copy the files to www.client.com/dev, but I'm looking for a better method. Any suggestions that would make this process smoother and faster would be great.

If you always host the site in development on a separate domain rather than in a subdirectory, you will never have to change absolute paths. So instead of hosting it at www.client.com/dev, try dev.client.com. Another option would be client.yourcompany.com.
Also try to protect the site in development with HTTP basic authentication. This is easy to set up in most web servers without changing your web application. And if the content is even remotely sensitive, use HTTPS as well.
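For example, a minimal Apache sketch combining the separate dev domain with basic auth (hostnames, paths and the htpasswd file are illustrative assumptions, not anything from your setup):

<VirtualHost *:80>
    ServerName dev.client.com
    DocumentRoot /var/www/client
    <Directory /var/www/client>
        # ask for a username/password before serving anything on the dev site
        AuthType Basic
        AuthName "Site in development"
        AuthUserFile /etc/apache2/.htpasswd
        Require valid-user
    </Directory>
</VirtualHost>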
Alternatively, let them simply come over to your office and present it to them (or go to them and present it). The upside is that you have full control over what they will and won't see, and it never has to go online.

Well, we give each site in development a client.t.uw.ru address, which is universally visible.
When the site matures, it moves to www.client.com and is submitted to the search engines.
A wildcard (*) DNS entry on the t.uw.ru domain makes this easy.
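For instance, a one-line BIND zone-file sketch of such a wildcard (the IP address is a placeholder):

; any client name under t.uw.ru resolves to the development server
*.t.uw.ru.    IN  A    203.0.113.10

The web server then just needs a matching catch-all, e.g. ServerAlias *.t.uw.ru in an Apache virtual host.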

Related

Spring Application - Getting images on Amazon S3 to client

I am building a Spring web application hosted on Elastic Beanstalk. I use S3 to store user-uploaded images, which works great. What I don't understand is how fetching images from S3 to the client works. I have found three alternatives.
1. Get the image in a controller and send it to the client, like this:
S3Object object = amazonS3Client.getObject("bucketname", "path/to/image");
2. Open up all the images and reference them directly by URL in the client, something like this:
<img src="http://bucketname.s3.amazonaws.com/path/to/image.jpg">
3. Use signed download URLs that only work for a certain time, like this:
GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest("bucketname", "path/to/image");
URL url = amazonS3Client.generatePresignedUrl(request);
I'm not sure which approach to go for. Routing the images through the web server seems unnecessary, since it loads the server. Opening the URLs up to everyone might increase requests and costs, since anyone can use the images. And the third way is new to me; I haven't really seen anyone practising it, which makes me unsure whether it is really the way to go.
So, how is this usually done?
And how is this handled in the development environment versus the production environment? I guess that part doesn't change? Or is it common to use Spring profiles to change the location of static content while developing, and only use S3 for production?
If you're hosting JavaScript and CSS on S3, is it then most common to go for approach 2 and open them up for everyone?
For me it depends upon the requirements you have for access control for images uploaded by a user.
If the images are non-sensitive i.e. it wouldn't really matter if someone else got hold of another user's images, then I would go for approach 2.
If on the other hand it would be a disaster if someone managed to get hold of another user's images, then I would go for approach 3 (or some other form of expiring token access to the images).
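For reference, a minimal sketch of approach 3 with an explicit expiry, assuming the AWS SDK for Java v1 (bucket and key are placeholders; needs com.amazonaws.HttpMethod, java.util.Date and java.net.URL):

// generate a GET URL that stops working after 15 minutes
GeneratePresignedUrlRequest request =
        new GeneratePresignedUrlRequest("bucketname", "path/to/image.jpg")
                .withMethod(HttpMethod.GET)
                .withExpiration(new Date(System.currentTimeMillis() + 15 * 60 * 1000));
URL url = amazonS3Client.generatePresignedUrl(request);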
The last time I did this I went for approach 2, because the images were non-sensitive. To try to prevent people from discovering images, we did apply a hashing function to the name of each image, but again I wasn't massively concerned about this. In either case, a well-defined bucket structure that the application can easily work out when constructing the URL for an image is useful. So for you, perhaps consider something like:
s3:bucket_name/images/users/<hashed_and_salted_user_name>/<user_images>
As for your question regarding dev vs prod environments, matching the bucket path to the Spring profile is the approach we used. So for example:
s3:bucket_name/prod/images/users/user/foo.jpg
s3:bucket_name/dev/images/users/user/foo.jpg
As you can probably guess we had Spring profiles named "prod" and "dev". The code for building image URLs took into account the name of the current Spring profile when creating the URL. Gives a nice separation between environments.
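A rough sketch of the idea (the class and names here are hypothetical, not our actual code, and it assumes a single active profile):

@Component
public class ImageUrlBuilder {

    // falls back to "dev" when no profile is set
    @Value("${spring.profiles.active:dev}")
    private String profile;

    // builds e.g. https://bucket_name.s3.amazonaws.com/prod/images/users/<hash>/foo.jpg
    public String urlFor(String userHash, String fileName) {
        return "https://bucket_name.s3.amazonaws.com/" + profile
                + "/images/users/" + userHash + "/" + fileName;
    }
}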
In terms of CSS and JavaScript, I tend to host obfuscated/minified versions in the production S3 buckets and full versions in the dev/test buckets (mainly for performance rather than to hide the code). In addition, I'd use some sort of versioning/naming structure for how you host CSS/JavaScript in S3, so that you can determine which "version" of the resources your app is using. So for example:
s3:bucket_name/css/app-1.css
s3:bucket_name/css/app-2.css
The version of the CSS/JavaScript resources is updated each time you push a new version into production.
By going down this path you kinda treat S3 as the final resting place for a piece of JavaScript/CSS when it is ready to go out into the wide world of production. Once there, you know it will never change. If the CSS/JavaScript does change, the user has to fetch a new resource from S3, since the version will have been incremented. You can hook this into your build process so that your main app always references the latest version of the CSS/JavaScript. I found this has two useful functions:
Makes it very easy to determine which version of a resource your application is running with
Makes it very easy to cache resources (either with browser or something like CloudFront) as you know they will never change
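As an illustration, a hedged sketch of uploading a versioned asset with a far-future cache header, again assuming the AWS SDK for Java v1 (bucket, key and file name are placeholders):

// a versioned asset never changes, so it can be cached aggressively
ObjectMetadata meta = new ObjectMetadata();
meta.setCacheControl("public, max-age=31536000"); // one year
amazonS3Client.putObject(new PutObjectRequest(
        "bucket_name", "css/app-2.css", new FileInputStream("app-2.css"), meta));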
Hope that helps.

web page needed for bypassing proxy restricted sites

I am looking for ways to browse sites that are blocked by proxy filters at my location.
One solution I came up with was to build a page that takes a URL as input and displays that site in an iframe. That way I would have a window into a browser on a page that my proxy allows. I was going to host this on my personal web site and use it to access restricted content. This way I'd have access to blogs and forums where there is a wealth of information that is blocked by a backwards blanket restriction list.
How can I make a web page like this? Would it be simple HTML and JavaScript, or do I need .NET?
What you aim to do has to be done server-side. When you put a page in an iframe, your web browser loads it, and does so just as if you had gone directly to the URL.
There is no way around this via client-side code, such as JavaScript.
If you truly want to reinvent the wheel, pick a language and look into whatever functions it offers for downloading files. There's no need to do this, though, when there are plenty of web-based proxy services, such as http://www.hidemyass.com.
Even if you load it in an iframe, the request for the page in the iframe will still go through the proxy, so you will still be blocked.
You'd have to do something like open a socket to the site from your web host, download the content, and redisplay it. That's assuming your host isn't also blocked. Also, you'll lose the benefits of cookies and sessions this way (i.e. you won't be able to stay logged into things unless the session id is in the query string).
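A bare-bones sketch of that idea as a Java servlet, purely illustrative (the /proxy path and url parameter are made up, it ignores cookies, headers and error handling, and transferTo needs Java 9+):

@WebServlet("/proxy")
public class ProxyServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // the fetch happens from the web host's network, not through the client's proxy
        URL target = new URL(req.getParameter("url"));
        try (InputStream in = target.openStream()) {
            in.transferTo(resp.getOutputStream()); // relay the raw content back
        }
    }
}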
The fastest and simplest solution would be to create a free LogMeIn account at www.logmein.com, then set up your host computer at home, log in from work, and browse freely. I do this myself at work so no one can see my personal browsing history when I don't want them to. This of course only works if logmein.com is not itself blocked at your work. Good luck!
It depends upon the complexity of the filter. If you have your own website that you can reach through the proxy, or if your computer can run as a webserver, you could try going through a proxy script such as CGIProxy. There are online services that do this too. However, some proxy filters can detect these methods as well, and then you'd still be out of luck. No JavaScript or HTML tricks can overcome the proxy filter.

What are the downside of not having an index.html file to some directories

I'm curious what some of the effects and downsides are of not putting an index.html file in your directories (e.g. images). I know that when an index file is not present in a directory, the files inside that directory are no longer private and can be listed in the browser when someone points at it (e.g. yoursite.com/images/). Aside from that, what are some big effects to consider, and how do you properly secure those directories?
Thanks!
This depends on your web server, but there are two disadvantages to not having one.
If not secured, some web servers will show the directory contents if there is no default page.
If someone types your_site/directory/, and there is no default page, they will receive a 404 error.
With current web servers, there are many ways to get around not having a default page in each directory. You can set a custom 404 page that redirects to one that is available, and some servers can put a default page in automatically for you. As far as security goes, you can simply turn off directory browsing so that people cannot see the contents.
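In Apache, for example, both workarounds are one-liners (the error page path is a placeholder):

# stop the server from listing directory contents
Options -Indexes
# send requests for missing default pages somewhere useful
ErrorDocument 404 /index.html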
If you are configuring your own server, not having one becomes less of an issue, since you can control how the server responds to such situations. People normally just put one in as a fail-safe, to make sure that the two problems above are taken care of.

SSL Encryption and an external image server

I have an ASP.NET web site technology that I use for scores of clients. Each client gets their own web site (a copy of the core site that can then be customized). The web site includes a fair amount of content - articles on health and wellness - that comes from a central content server. I load the HTML for these articles by copying it from the content server and inserting the text into the page as it is produced.
Easy so far.
However, these articles contain image references that point back to the central server. The problem is that these sites are always accessed (every page) via an SSL link. When a page with an external image reference is loaded, the visitor receives a message that the page "contains both secure and insecure elements" (or something similar), because the images come from the unsecured server. There is really no way around this.
So, in your judgment, is it better to:
A) Just put a cert on the content server so I can get the images over SSL? Are there problems there due to the page content involving two certs? Any other thoughts?
B) Change the links to the article presentation page so they don't use SSL? They don't need SSL, but the left side of the page contains lots of links to pages that do need it - all of which are now relative links. Making them all absolute links is grody, because each client's site has its own URL, so every link would need to be generated in code (blech).
C) Something else that I haven't thought of? This is where I am hoping that someone with experience in the area will offer something brilliant!
NOTE: I know that I cannot get rid of the warning about insecure elements - it is there for a reason. I am just wondering if anyone else has experience in this area and has found a reasonable compromise or some new insight.
Not sure how feasible this is, but it may be possible to use a rewrite or proxy module to mirror the (img directory) structure on each clone to that of the central server. With such a rule in place you could use relative img URLs instead and internally rewrite all requests for these images over to the central server, silently.
e.g.:
https://cloneA/banner.jpg -> http://central/static/banner.jpg
https://cloneB/topic7/img/header.jpg -> http://central/static/topic7/header.jpg
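With Apache, for instance, a sketch of such a rule using mod_rewrite's proxy flag (hostnames and paths follow the example above; requires mod_proxy and would live in each clone's virtual host):

RewriteEngine On
# silently fetch image requests from the central server
RewriteRule ^/(.+\.jpg)$ http://central/static/$1 [P]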
I'd go with B.
Sadly, I think you'll find this is a fact of life with SSL. Even if you were to put a cert on the other server, the browser may still complain because the sites are different (I can't confirm nor deny that, though), and regardless, you don't want to waste your media server's time encrypting images.
I figured out a completely different way to import the images late last night after asking this question. In IIS, at least, you can set up "Virtual Directories" that can point essentially anywhere (I'm now evaluating whether to use a dedicated directory on each web server or a URL). If I use a dedicated directory on each server I will have three directories to keep up to date, but at least I won't have 70+.
Since each site will pull the images using resource locations found on the local site, I don't have to worry about changing the SSL status of any page.

Pros and Cons of a separate image server (e.g. images.mydomain.com)?

We have several images and PDF documents that are available via our website. These images and documents are stored in source control and are copied content on deployment. We are considering creating a separate image server to put our stock images and PDF docs on - thus significantly decreasing the bulk of our deployment package.
Does anyone have experience with this approach?
I am wondering about any "gotchas" - like XSS issues and/or browser issues with delivering content from an alternate sub-domain.
Pro:
Many browsers will only allocate two sockets for downloading assets from a single host. So if index.html is downloaded from www.domain.com and it references 6 image files, 3 JavaScript files, and 3 CSS files (all on www.domain.com), the browser will download them 2 at a time, with the others blocking until a socket is free.
If you pull the 6 image files off onto a separate host, say images.domain.com, you get an extra two sockets dedicated to download your images. This parallelizes the asset download process so, in theory, your page could render twice as fast.
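The split then just shows up in the markup, for example (hostnames are illustrative):

<link rel="stylesheet" href="http://www.domain.com/css/site.css">
<img src="http://images.domain.com/logo.jpg">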
Con:
If you're using SSL, you would need to either get an additional single-host SSL certificate for images.domain.com or a wildcard SSL certificate for *.domain.com (matches any subdomain). Failure to do so will generate a warning in the browser saying the page contains mixed secure and insecure content.
With a different domain, you will also avoid sending the cookie data with every request. This can improve performance.
Another thing not yet mentioned is that you can use different web servers to serve different sorts of content. For example, your static content could be served via lighttpd or nginx while still serving your dynamic content off Apache.
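For example, a minimal nginx sketch for the static host (domain and path are placeholders):

# images.domain.com serves files straight off disk, no application server involved
server {
    listen 80;
    server_name images.domain.com;
    root /var/www/static;
    expires 30d;  # static assets can carry long cache lifetimes
}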
Pros:
- load balancing
- isolating different functionality
Cons:
- more work (when you create a page on the main site, you also have to maintain the resources on the separate server)
Things like XSS are a problem of code not sanitizing input (or output, for that matter). The only issue that could arise is if you have sub-domain-specific cookies that are used for authentication... but that's really a trivial fix.
If you're serving the page over HTTPS and you serve an image from an HTTP domain, then browser security warnings will pop up when you use it.
So if you do HTTPS, you'll need to buy a certificate for your image domain as well if you don't want to annoy the hell out of your users :)
There are other ways around this, but it's not particularly in the scope of this answer - it was just a warning!