I run a blog where every external image posted by me or my users on a topic is imported directly to my web server (into the images folder).
This "images" folder is chmodded to 755; however, all the files in it (.jpg/.png/.jpeg/.gif) are automatically chmodded to 644.
In this case, if a user posts an "infected" image that contains a PHP script or other malicious code, is it blocked by the 644 permissions, or is there still a chance that the code gets executed when the attacker opens the URL mysite.com/images/infectedfile.png ?
Thanks for your help
Interesting question. There are two problems that I can see with a malicious file ending up in your images folder:
What does the malicious file do to your server?
What does the malicious file do to your users that download it?
I don't believe that your solution will be complete by any means. Consider what happens when a user visits a post with a malicious image on it. An unlucky user will be infected by the malicious code. A lucky user will have an anti-malware product that detects the threat and blocks the page or image from being loaded.
The lucky user is unlucky for you, as your reputation with that user is damaged and they might not come back. Worse still, if the user reports your blog (e.g. via Google Safe Browsing) as serving malicious files, you can find yourself on a block list. That will mean virtually zero traffic to your site.
I would strongly recommend a layered approach to solving this problem. I would look for a technology that allows you to scan files as they are uploaded, or at least confirm they are not known to be malicious. Secondly, I would look at a solution that scans the files you are serving from your web server.
Understandably this becomes harder to do with modern hosting, where you don't necessarily own the operating system of your web server and therefore can't deploy a traditional anti-malware product. There are some options for looking up files via an API as they pass into your environment; VirusTotal and Sophos Intelix are a couple of examples, but I am sure there are more.
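As a rough illustration of the lookup idea, here is a minimal Python sketch against the VirusTotal v3 file-hash endpoint (the VT_API_KEY variable and the verdict thresholding are my own assumptions; check the current API documentation before relying on it):

# Minimal sketch: check an uploaded file's SHA-256 against VirusTotal (v3 API).
# Assumes a VT_API_KEY environment variable; a 404 only means VirusTotal has
# never seen the hash, not that the file is clean.
import hashlib
import os
import requests

def virustotal_verdict(path: str) -> str:
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": os.environ["VT_API_KEY"]},
    )
    if resp.status_code == 404:
        return "unknown"  # not in the database; scan it some other way
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return "malicious" if stats.get("malicious", 0) > 0 else "no detections"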
You should configure your images folder in your web server software to not serve files unless they end in the exact file types you allow (e.g. .jpg, .png, .gif), and to never run the PHP interpreter on them. You should also ensure that the correct MIME type for those file types is always used: image/jpeg for .jpg, image/png for .png, image/gif for .gif.
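With Apache, for instance, a sketch along these lines covers that (it assumes mod_php; under php-fpm you would remove the handler proxy instead, and the path is a placeholder):

<Directory "/var/www/html/images">
    # Never run the PHP engine in this directory (mod_php only)
    php_admin_flag engine off
    # Deny everything by default...
    Require all denied
    # ...then allow only the image extensions you expect.
    # Apache's stock mime.types already maps these to the right image/* types.
    <FilesMatch "(?i)\.(jpe?g|png|gif)$">
        Require all granted
    </FilesMatch>
</Directory>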
If those two things are configured, even if someone uploads e.g. a .jpg file that actually contains something malicious, the browser will try to load it as a JPEG image and fail, because it isn't a valid one. Occasionally browsers have bugs where a specially crafted broken image file can cause the browser to do something it shouldn't.
The Mozilla Developer Network recommends sandboxing uploaded files to a different subdomain:
Sandbox uploaded files (store them on a different server and allow access to the file only through a different subdomain or even better through a fully different domain name).
I don't understand what additional security this would provide; my approach has been to upload files to the same domain as the web page with the <input> form control, restrict uploaded files to a particular directory, perform antivirus scans on them, and then allow access to them on the same domain they were uploaded to.
There are practical/performance reasons and security reasons.
From a practical/performance standpoint, unless you are on a budget, store your files on a system optimised for serving them. This can be any type of CDN if you are serving them once uploaded, or just isolated upload-only servers. You can do this yourself, or better, you can use something like AWS S3 and customise the permissions to your needs.
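For example, a minimal boto3 sketch that hands clients a short-lived URL to upload straight to S3, keeping the files off your web server entirely (bucket and key names are placeholders):

# Sketch: generate a short-lived presigned S3 upload URL (bucket/key are placeholders)
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-upload-bucket", "Key": "uploads/image123.png"},
    ExpiresIn=300,  # the URL stops working after 5 minutes
)
# Hand `url` to the client; it can PUT the file directly to S3.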
From a security point of view, it is incredibly hard to guarantee that an uploaded file can never be executed, especially if you are using a server-side scripting language. There are many methods, both in HTTP and in the most popular HTTP servers (nginx, Apache, ...), to harden things and make them secure, but there are so many things you have to take into account, and another bunch you would never even think about, that it is much safer to just keep your files somewhere else altogether, ideally somewhere with no scripting engine that could run code in them.
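To give a flavour of the hardening involved, here is a minimal Apache sketch for a host that serves nothing but uploads (requires mod_headers; the domain and path are placeholders, and TLS directives are omitted):

<VirtualHost *:443>
    ServerName usercontent.example.com
    DocumentRoot "/srv/uploads"
    # No script handlers at all on this host
    RemoveHandler .php .phtml
    # Don't let browsers second-guess the declared MIME type
    Header set X-Content-Type-Options "nosniff"
    # Even if an HTML file sneaks in, forbid it from running scripts
    Header set Content-Security-Policy "sandbox"
</VirtualHost>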
I assume that the different-subdomain-or-domain recommendation is about XSS, vulnerabilities exploiting bad CORS configurations, and prevention of phishing attempts (like someone successfully uploading content to your site that mimics your site but does something nasty, such as stealing user credentials or providing fake information; the content would still be served from your domain, and there wouldn't even be an HTTPS certificate warning).
Why can't a local page like
file:///C:/index.html
send a request for a resource like
file:///C:/data.json ?
This is prevented because it's a cross-origin request, but in what way is that cross-origin? I don't understand why this is a vulnerability / why it is prevented. It just seems like a massive pain when I want to whip up a quick utility for something in JavaScript/HTML and I can't run it without uploading it to a server somewhere, because of this seemingly arbitrary restriction.
HTML files are expected to be "safe". Tricking people into saving an HTML document to their hard drive and then opening it is not difficult ("Here, just open the HTML file attached to this email" would cause many email clients to automatically save it to a temp directory and open it in the default application).
If JavaScript in that file had permission to read any file on the disk, then users would be extremely vulnerable.
It's the same reason that software like Microsoft Word prompts before allowing macros to run.
It protects you from malicious HTML files reading from your hard drive.
On a real server, you are (hopefully) not serving arbitrary files, but on your local machine, you could very easily trick users into loading whatever you want.
Browsers are set up with security measures to make sure that ordinary users won't be at increased risk. Imagine that I'm a malicious website and I have you download something to your filesystem that looks, to you, like a regular website. Imagine that downloaded HTML can access other parts of your file system and then send that data to me through AJAX or perhaps another piece of executable code on the filesystem that came with this package. To a regular user this might look like a regular website that just "opened up a little weird but I still got it to work." If the browser prevents that, they're safer.
It's possible to turn these flags off (as in here: How to launch html using Chrome at "--allow-file-access-from-files" mode?), but that's more for knowledgeable users ("power users"), and probably comes with some kind of warning about how your browsing session isn't secure.
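For reference, the flag from that linked question looks like this (the binary name varies by platform, and Chrome has to be fully closed first or the flag is silently ignored):

chrome --allow-file-access-from-files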
For the kinds of scenarios you're talking about, you should be able to spin up a local HTTP server of some sort, perhaps using Python, Ruby, or Node.js (I imagine Node.js would be an attractive option for testing JavaScript-based apps).
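If you have Python installed, its standard library already ships a minimal static file server, so this is a one-liner run from the directory you want to serve:

python3 -m http.server 8000
# then browse to http://localhost:8000/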
I have this webpage for testing purposes, and I am using the file: protocol for src and href, like this:
file:///D:/Internet/TestPictures/air elemental , luftelementar.jpg
The idea behind this is to:
save bandwidth, because the file is already on my computer/mobile phone and does not need to be loaded later over the mobile network
save webspace (the files are locally on my PC)
make sure only I can see the files ?! (hopefully yes)
I have a few questions:
Is it risky to do it that way, given that an attacker would then know the folder structure on my PC?
Why doesn't the picture load, like it does locally on my PC and NAS?
Thank you
To answer your last question: for security reasons, any references to the local filesystem won't work when the page is delivered from another domain, at least not in modern browsers. If that weren't the case, a maliciously crafted website would have access to your local files.
As for the other concerns: since browsers already cache data, relying on a file being on your local system would only save the initial transfer, which is negligible. Having others know your filesystem structure isn't good per se, but it's not inherently risky unless the attacker already has access to your system. And all of this is pretty much moot, since the referenced file can't be accessed as soon as the page is delivered by anything other than the filesystem anyway.
I'm curious what the effects/downsides are of not putting an index.html file in your directories (e.g. images). I know that when an index file is not present in a directory, the files inside that directory are no longer private and will be visible in the browser when someone points at it (e.g. yoursite.com/images/). Aside from that, what are some big effects to consider, and how do I properly secure these directories?
thanks!
This depends on your web server, but there are two disadvantages to not having one.
If not secured, some web servers will show the directory contents if there is no default page.
If someone types your_site/directory/, and there is no default page, they will receive a 404 error.
With current web servers, there are many ways to get around not having a default page for each directory. You can set the custom 404 to redirect to a page that is available, and some servers can put a default page in automatically for you. As far as security goes, you can just turn off directory browsing so people can't see the contents.
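On Apache, for example, both of those are single directives (a sketch; the 404 page path is a placeholder):

# Turn off directory listings
Options -Indexes
# Show a friendly page instead of a bare 404
ErrorDocument 404 /404.html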
If you are configuring your own server, not having one becomes less of an issue since you can control how the server responds to such situations. People normally just put it in as a fail safe, to make sure that the above two problems are taken care of.
We have several images and PDF documents that are available via our website. These images and documents are stored in source control and are copied content on deployment. We are considering creating a separate image server to put our stock images and PDF docs on - thus significantly decreasing the bulk of our deployment package.
Does anyone have experience with this approach?
I am wondering about any "gotchas" - like XSS issues and/or browser issues delivering content from the alternate sub-domain?
Pro:
Many browsers will only allocate two sockets to downloading assets from a single host. So if index.html is downloaded from www.domain.com and it references 6 image files, 3 javascript files, and 3 CSS files (all on www.domain.com), the browser will download them 2 at a time, with the other blocking until a socket is free.
If you pull the 6 image files off onto a separate host, say images.domain.com, you get an extra two sockets dedicated to downloading your images. This parallelizes the asset download process so, in theory, your page could render twice as fast.
Con:
If you're using SSL, you would need to either get an additional single-host SSL certificate for images.domain.com or a wildcard SSL certificate for *.domain.com (matches any subdomain). Failure to do so will generate a warning in the browser saying the page contains mixed secure and insecure content.
With a different domain, you will also not send the cookie data with every request, which can improve performance.
Another thing not yet mentioned is that you can use different web servers to serve different sorts of content. For example, your static content could be served via lighttpd or nginx while you still serve your dynamic content from Apache.
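A sketch of that shape, with nginx serving the static files and proxying everything else to Apache (ports and paths are placeholders):

server {
    listen 80;
    server_name www.domain.com;
    # Static assets served straight from disk by nginx
    location /images/ {
        root /var/www/static;
    }
    # Everything dynamic is passed through to Apache on another port
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}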
Pros:
- load balancing
- isolating different functionality
Cons:
- more work (when you create a page on the main site, you have to maintain the resources on the separate server)
Things like XSS are a problem of code not sanitizing input (or output, for that matter). The only issue that could arise is if you have subdomain-specific cookies that are used for authentication, but that's really a trivial fix.
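That trivial fix is, roughly, scoping the authentication cookie to the application host so it never travels with image requests; a sketch of the response header (names and values are placeholders):

Set-Cookie: session=abc123; Domain=www.domain.com; Path=/; Secure; HttpOnly

A cookie scoped to www.domain.com is not sent to images.domain.com, so the image host never sees your users' sessions.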
If you're serving your page over HTTPS and you serve an image from an HTTP domain, you'll get browser mixed-content warnings popping up.
So if you do HTTPS, you'll need to buy HTTPS for your image domain as well if you don't want to annoy the hell out of your users :)
There are other ways around this, but it's not particularly in the scope of this answer - it was just a warning!