Using file:// on the internet

I have this webpage for testing purposes, and I am using the file: protocol for src and href attributes, like this:
file:///D:/Internet/TestPictures/air elemental , luftelementar.jpg
The idea behind this is to:

- save bandwidth, because the file is already on my computer/mobile phone and does not need to be loaded over the mobile network
- save webspace (the files are stored locally on my PC)
- ensure that only I can see the files (hopefully)
I have a few questions:
Is it risky to do it this way, because an attacker would learn the folder structure of my PC?
Why doesn't the picture load, the way it does locally on my PC and NAS?
Thank you

To answer your last question: For security reasons, any references to the local filesystem won't work when the page is delivered from another domain, at least not on modern browsers. If that weren't the case, then a maliciously crafted website would have access to your local files.
As for the other concerns: since browsers already cache data, relying on a file being on your local system would only save the initial transfer, which is negligible. Having others know your filesystem structure isn't good per se, but it isn't inherently risky unless the attacker already has access to your system. All of this is moot anyway, since you won't be able to access the referenced file as soon as the page is delivered by anything other than the filesystem.
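To illustrate the caching point: a server can mark images as cacheable so the browser downloads each one only once, which achieves the bandwidth saving without file:// links. A minimal sketch using Python's standard library (the port and the one-week lifetime are arbitrary choices, not anything from the question):

```python
# Static server that marks images as cacheable for one week, so a
# returning visitor re-downloads nothing. Sketch only; values are arbitrary.
import http.server

class CachingImageHandler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        # Only image responses get the long-lived cache policy.
        if self.path.lower().endswith((".jpg", ".jpeg", ".png", ".gif")):
            self.send_header("Cache-Control", "public, max-age=604800")
        super().end_headers()

if __name__ == "__main__":
    http.server.HTTPServer(("", 8000), CachingImageHandler).serve_forever()
```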

Related

Chmod a folder of imported images

I run a blog on which every external image posted by me or my users in a topic is imported directly to my web server (into the images folder).
This "images" folder is chmodded to 755; all the files in it (.jpg/.png/.jpeg/.gif) are automatically chmodded to 644.
In this case, if a user posts an "infected" image that contains a PHP script or other malicious code, is it blocked by the 644 permissions, or is there still a chance that the code gets executed when the attacker opens the URL mysite.com/images/infectedfile.png?
Thanks for your help
Interesting question. There are two problems that I can see with a malicious file ending up in your images folder:
What does the malicious file do to your server?
What does the malicious file do to your users that download it?
I don't believe that your solution will be complete by any means. Consider what happens when a user visits a post with a malicious image on it: an unlucky user could be infected by the malicious code, while a lucky user will have an anti-malware product that detects it and blocks the page or image from being loaded.
The lucky user is still unlucky for you, as your reputation with that user is damaged and they might not come back. Worse still, if the user reports your blog as serving malicious files, e.g. via Google Safe Browsing, you can find yourself on a block list, which will mean virtually zero traffic to your site.
I would strongly recommend a layered approach to solving this problem. First, look for a technology that allows you to scan files as they are uploaded, or at least confirm they are not already known to be malicious. Second, look at a solution that scans the files you are serving from your web server.
Understandably, this becomes harder with modern hosting, where you don't necessarily own the operating system of your web server and therefore can't deploy a traditional anti-malware product. There are some options for looking up files as they pass into your environment via an API; VirusTotal and Sophos Intelix are a couple of examples, but I am sure there are more.
You should configure your web server so that the images folder only serves files with the exact file types you allow (e.g. .jpg, .png, .gif), and never runs the PHP interpreter on them. You should also ensure that the correct MIME type for those file types is always used: image/jpeg for .jpg, image/png for .png, image/gif for .gif.
With those two things configured, even if someone uploads e.g. a .jpg file that actually contains something malicious, the browser will try to load it as a JPEG image and fail because it isn't valid. Occasionally browsers have bugs that mean a specially crafted image file can cause the browser to do something it shouldn't.
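As a hedged sketch of those two rules (an extension whitelist plus a forced MIME type), here is the idea expressed with Python's standard library; a real deployment would do this in the Apache/nginx configuration instead, and the port is arbitrary:

```python
# Static file server that only ever serves whitelisted image extensions,
# always with the matching MIME type. Because it is a plain static server,
# there is no PHP interpreter that could execute an uploaded script.
import os
import http.server

ALLOWED = {
    ".jpg": "image/jpeg",
    ".jpeg": "image/jpeg",
    ".png": "image/png",
    ".gif": "image/gif",
}

class ImageOnlyHandler(http.server.SimpleHTTPRequestHandler):
    def send_head(self):
        ext = os.path.splitext(self.path)[1].lower()
        if ext not in ALLOWED:
            # Anything else (e.g. infectedfile.php) is refused outright.
            self.send_error(404, "File type not served")
            return None
        return super().send_head()

    def guess_type(self, path):
        # Force the declared MIME type rather than trusting detection.
        return ALLOWED.get(os.path.splitext(path)[1].lower(),
                           "application/octet-stream")

if __name__ == "__main__":
    http.server.HTTPServer(("", 8000), ImageOnlyHandler).serve_forever()
```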

Why does MDN recommend sandboxing uploaded files to a different (sub)domain?

The Mozilla Developer Network recommends sandboxing uploaded files on a different subdomain:
Sandbox uploaded files (store them on a different server and allow access to the file only through a different subdomain or even better through a fully different domain name).
I don't understand what additional security this provides. My approach has been to upload files to the same domain as the web page with the <input> form control, restrict uploaded files to a particular directory, perform antivirus scans on them, and then allow access to them on the same domain they were uploaded to.
There are practical/performance reasons and security reasons.
From a practical/performance standpoint, unless you are on a budget, store your files on a system optimised for serving them. This can be any type of CDN if you are serving them once uploaded, or just isolated upload-only servers. You can do this yourself, or better, use something like AWS S3 and customise the permissions to your needs.
From a security point of view, it is incredibly hard to prevent an uploaded file from being executed, especially if you are using a server-side scripting language. There are many methods, both in HTTP and in the most popular HTTP servers (nginx, Apache, ...), to harden things and make them more secure, but there are so many things you have to take into account, and another bunch you would never even think about, that it is much safer to just keep your files somewhere else altogether, ideally somewhere with no scripting engine that could run code on them.
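As a hedged sketch of the S3 route (the bucket name, key, and file below are hypothetical): the bucket runs no scripting engine, can be served through a separate domain, and lets you pin the content type yourself rather than trusting the uploader.

```python
# Push an upload to S3 with an explicit content type and an attachment
# disposition, so the file lives off the web server and is offered as a
# download rather than rendered inside the site's origin.
import boto3

s3 = boto3.client("s3")
with open("upload.jpg", "rb") as f:          # hypothetical local file
    s3.put_object(
        Bucket="example-user-uploads",       # served via a separate domain
        Key="uploads/upload.jpg",
        Body=f,
        ContentType="image/jpeg",            # set server-side, not client-side
        ContentDisposition="attachment",
    )
```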
I assume the different-subdomain or different-domain recommendation is about XSS, vulnerabilities exploiting bad CORS configurations, and preventing phishing attempts (like someone successfully uploading content that mimics your site but does something nasty, such as stealing user credentials or presenting fake information, while still being served from your domain, without even a certificate warning from HTTPS).

Why are file:// paths always treated as cross-domain?

Why can't a local page like
file:///C:/index.html
Send a request for a resource
file:///C:/data.json
This is prevented because it's a cross-origin request, but in what way is that cross-origin? I don't understand why this is a vulnerability or why it's prevented. It just seems like a massive pain when I want to whip up a quick utility for something in JavaScript/HTML and I can't run it without uploading it to a server somewhere, because of this seemingly arbitrary restriction.
HTML files are expected to be "safe". Tricking people into saving an HTML document to their hard drive and then opening it is not difficult ("Here, just open the HTML file attached to this email" would cause many email clients to automatically save it to a temp directory and open it in the default application).
If JavaScript in that file had permission to read any file on the disk, then users would be extremely vulnerable.
It's the same reason that software like Microsoft Word prompts before allowing macros to run.
It protects you from malicious HTML files reading from your hard drive.
On a real server, you are (hopefully) not serving arbitrary files, but on your local machine, you could very easily trick users into loading whatever you want.
Browsers are set up with security measures to make sure that ordinary users won't be at increased risk. Imagine that I'm a malicious website and I have you download something to your filesystem that looks, to you, like a regular website. Imagine that downloaded HTML can access other parts of your file system and then send that data to me through AJAX or perhaps another piece of executable code on the filesystem that came with this package. To a regular user this might look like a regular website that just "opened up a little weird but I still got it to work." If the browser prevents that, they're safer.
It's possible to turn these flags off (as in here: How to launch html using Chrome at "--allow-file-access-from-files" mode?), but that's more for knowledgeable users ("power users"), and probably comes with some kind of warning about how your browsing session isn't secure.
For the kind of scenarios you're talking about, you should be able to spin up a local HTTP server of some sort - perhaps using Python, Ruby, or node.js (I imagine node.js would be an attractive option for testing JavaScript-based apps).
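A minimal sketch of that suggestion in Python (standard library only; the port is an arbitrary choice): run it from the folder containing your HTML files and the page is served from an http:// origin instead of file://, so the JSON request in the question is no longer cross-origin.

```python
# Serve the current directory over HTTP so local pages stop being file://
# origins. Open http://localhost:8000/index.html once it is running.
import http.server
import socketserver

PORT = 8000  # arbitrary free port

with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print(f"Serving at http://localhost:{PORT}/")
    httpd.serve_forever()
```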

Browsers Don't Display "Cruftless" links (no index.html for local web dev.)

I found an interesting article about "cruftless" links (removing the "index.html" from links), but when I do that, no browser shows the local pages.
http://www.nimblehost.com/blog/2012/11/why-cruftless-links-are-better/
This is understandable - it's a file: URL on a local machine - so what do people do to work on basic HTML sites offline? How do they preview them?
For example, no browser (understandably) will display this...
file:///JOBS/ABC/About/
... but this is fine...
file:///JOBS/ABC/About/index.html
?... so what do people do to get around this?
The meaning of file: URLs is, by definition, system-dependent. Normally browsers map them to files in the file system in a relatively straightforward manner.
Thus, a link with href value like file:///JOBS/ABC/About/ may or may not work, depending on system. It may fail, or it may open a generated document containing a directory (folder) listing, or it might do something else.
There is normally no need to get around this, and it is pointless to worry about SEO when dealing with local files.
This could, however, matter during site development, when you work with a site locally (and perhaps test and demonstrate it locally). Then you might wish to have, say, <a href="About/">About us</a> so that it works locally as well as on a server, resolving to About/index.html in both cases but without hard-wiring index.html into the HTML markup.
I'm afraid the answer is "you can't". But as a workaround, you can install and use a local HTTP server, with settings similar to those that you will have on the real server. This means a little extra work (mainly downloading, installing, and configuring software like XAMPP), but it also gives you other important benefits, like being able to test your pages locally with server-based features (to the extent that the real server is similar to the local one).
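A lighter-weight alternative to a full XAMPP install, as a sketch (assumes Python 3.7+; the site root path is hypothetical): Python's bundled handler already resolves a request for About/ to the index.html inside that directory, which is exactly the behaviour cruftless links rely on.

```python
# Preview server for a local site: directory requests such as /About/
# fall through to the index.html in that directory, as on a real server.
from functools import partial
import http.server

SITE_ROOT = "/JOBS/ABC"  # hypothetical; point this at your local site folder

handler = partial(http.server.SimpleHTTPRequestHandler, directory=SITE_ROOT)
http.server.HTTPServer(("localhost", 8000), handler).serve_forever()
# Then browse http://localhost:8000/About/ instead of file:///JOBS/ABC/About/
```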

How can I use a link on an HTML document to link to a shared file through a Novell network?

I am a student programmer working for an IT department at my college, and my most recent project is implementing browser compatibility for a web form my supervisor wrote using HTML and Javascript.
Everything runs fine in IE, Firefox, and Chrome, with the exception of one column. The idea is that the entries in this column are links to files on a shared drive accessible through the college's Novell network. For instance:
724
Anyone accessing this web form from a college computer should hypothetically be able to click the link and have this .doc pulled up from the shared drive. However, this is hit or miss with IE, and doesn't work at all with Firefox or Chrome.
In Firefox and Chrome, I get errors telling me that it is not allowable to load the local resource.
In IE, the link opens a new Explorer window and (sometimes) opens the corresponding Word document.
My prevailing theory is that the way it's currently implemented depends on one of IE's security holes. How could I go about doing this in a way that works without compromising security?
NOTE: I was told that uploading the files to the web server is not an option; they need to stay where they are.
If they have NetWare or OES Linux based servers sharing the files, ask whether NetStorage is available, or else NoRM (Novell Remote Manager).
Both can make files available over HTTP. NetStorage can provide it over WebDAV, and NoRM would give a link to download a file.
Try http://server:8008 for NoRM and http://server/oneNet/NetStorage as likely URLs.