Lighthouse report: insecure requests found - google-chrome

When I test my website (it is hosted externally, not on my computer) with the Lighthouse tool in Chrome, I get this report:
All sites should be protected with HTTPS, even ones that don't handle sensitive data. HTTPS prevents intruders from tampering with or passively listening in on the communications between your app and your users and is a prerequisite for HTTP/2 and many new web platform APIs. Learn more.
What I don't understand is why this appears: my website runs on HTTPS, yet the report says that my images and URLs do not use HTTPS.
Screenshot of this warning:
I have tested my website for HTTPS mistakes with https://www.whynopadlock.com/f73e9366-da69-4ebf-a73f-6ceff2161cd6
Screenshot of it:
As you can see, everything looks fine, but the Lighthouse tool gives me a similar result every time...
Can anyone please help me with this problem? Thanks!

The transfer protocol your site currently uses is HTTP/1.1,
which has been the de facto standard since 1997. In an effort to ensure secure, encrypted connections between browsers and websites, giants like Google and Let's Encrypt have been pushing sites to use HTTPS.
Google came up with a new networking protocol, SPDY, which is considered a precursor to HTTP/2, which rolled out in 2015. Google encourages websites to use HTTP/2 as the new de facto standard protocol; it brings multiplexing, header compression, binary transfer of data and much more.
The message that you have received should go away once you enable HTTP/2 on your server.
As you haven't mentioned your web server, here's how you enable it for Nginx and Apache.
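For example (the domain, certificate paths and module location below are placeholders, adjust them to your setup), on Nginx 1.9.5 or later you add http2 to the TLS listen directive, and on Apache 2.4.17 or later you load mod_http2 and advertise h2:
# Nginx (1.9.5+): inside the HTTPS server block, add "http2" to the listen directive
server {
    listen 443 ssl http2;
    server_name example.com;                               # placeholder domain
    ssl_certificate     /etc/ssl/certs/example.com.crt;    # adjust paths to your certificate
    ssl_certificate_key /etc/ssl/private/example.com.key;
    # ... rest of your existing configuration ...
}
# Apache (2.4.17+ with mod_http2): in httpd.conf or the relevant vhost
LoadModule http2_module modules/mod_http2.so
Protocols h2 http/1.1
Note that browsers only offer HTTP/2 over TLS, so these directives belong in the HTTPS virtual host.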

Related

Chrome and Safari not honoring HPKP

I added an HPKP header to my site, but it is not honored by Chrome or Safari. I tested it manually by setting up a proxy and by going to chrome://net-internals/#hsts and looking for my domain - which was not found. The HPKP header seems correct, and I also tested it using an HPKP toolset, so I know it is valid.
I am thinking I might be doing something weird with my flow. I have a web app, which is served over myapp.example.com. On login, the app redirects the user to authserver.example.com/begin to initiate an OpenID Connect Authorization Code flow. The HPKP header is returned only from authserver.example.com/begin, and I think this might be the issue. However, I have includeSubDomains in the HPKP header, so I think subdomain coverage is not the problem.
This is the HPKP header (line breaks added for readability):
public-key-pins:max-age=864000;includeSubDomains; \
pin-sha256="bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE="; \
pin-sha256="cJjqBxF88mhfexjIArmQxvZFqWQa45p40n05C6X/rNI="; \
report-uri="https://reporturl.example"
Thanks!
I added HPKP header to my site, but it is not honored by Chrome or Safari... I tested it manually by setting a proxy...
RFC 7469, Public Key Pinning Extension for HTTP, kind of sneaks that past you. The IETF published it with overrides, so an attacker can break a known-good pinset. It is mentioned once in the standard by the name "override", but the details are not provided. The IETF also failed to publish a discussion of it in a security considerations section.
More to the point, the proxy you set up engaged the override. It does not matter whether it's the wrong proxy, a proxy certificate installed by a mobile device OEM, or a proxy controlled by an attacker who tricked a user into installing it. The web security model and the standard allow it. They embrace interception and consider it a valid use case.
Something else they did was make reporting of the broken pinset a MUST NOT or SHOULD NOT, which means the user agent is complicit in the cover-up, too. That's not discussed in a security considerations section, either. They really don't want folks to know their supposedly secure connection is being intercepted.
Your best bet to avoid it is to move outside the web security model. Don't use browser-based apps when security is a concern. Use a hybrid app and perform the pinning yourself. Your hybrid app can host a WebView control or view but still get access to the channel to verify parameters. Also see OWASP's Certificate and Public Key Pinning.
Also see Comments on draft-ietf-websec-key-pinning on the IETF mailing list. One of the suggestions in the comments was to change the title to "Public Key Pinning Extension for HTTP with Overrides" to highlight the feature. Not surprisingly, that's not something they want. They are trying to do it surreptitiously, without user knowledge.
Here's the relevant text from RFC 7469:
2.7. Interactions with Preloaded Pin Lists
UAs MAY choose to implement additional sources of pinning
information, such as through built-in lists of pinning information.
Such UAs should allow users to override such additional sources,
including disabling them from consideration.
The effective policy for a Known Pinned Host that has both built-in
Pins and Pins from previously observed PKP header response fields is
implementation-defined.
Locally installed CAs (like those used by the proxies you say are running) override any HPKP checks.
This is necessary so as not to completely break the internet, given how prevalent they are: anti-virus software and proxies used in large corporations basically MITM HTTPS traffic through a locally issued certificate, as otherwise they could not read the traffic.
Some argue that locally installing a CA requires access to your machine, and at that point it's game over anyway, but to me this still massively reduces the protection of HPKP, and that, coupled with the high risks of using HPKP, means I am really not a fan of it.

best method of linking to outside of secured script (using SSL)

I have a shopping cart script on our site that is set up to be secure (HTTPS with an SSL certificate). I have links in the script leading to other parts of my site that are not secure (WordPress blog, etc.).
On the secure site, if I have links that are not secure (http), it triggers a message to the user in the browser, alerting them to unsecured links. If I make the outgoing links in the script relative, then when the user clicks one and leaves the script, it keeps them in secure mode (which we don't want for the other parts of our site).
Years ago, I remember having this issue. I think I got around it by using an HTTP redirect for every outgoing link on the secure site. Using an HTTP redirect, I would have https://www.example.com/outgoinglink1a redirect to http://www.example.com/outgoinglink1b. This way, I could put https://www.example.com/outgoinglink1a on the secure site, and when it was clicked, it would lead to http://www.example.com/outgoinglink1b
These days, how do I have links on the secure site that lead to other parts of the site that aren't secure, without triggering an SSL error message for the user while they are in the secure part of the site? Is using some type of 301 redirect in .htaccess better? Is there another preferred or easier method (than HTTP redirects) for accomplishing this?
Thank you for any guidance.
You can use HTTPS-to-HTTP redirects to the unsecured site to avoid browser warnings.
But for multiple reasons, safety being one of them, I would really advise against using both HTTP and HTTPS for the same domain, even if a lot of big sites still do it. You would either have to use different cookies for the secure and the normal site, or the one cookie you use for your shopping cart couldn't have the secure flag, in which case you really don't need HTTPS in my opinion. Also, you will never be able to implement HSTS.
You've already gone to the lengths of buying a certificate and setting up an HTTPS server, so why not secure the whole site?
Update to answer your question in the comment:
That is of course a deal-breaker if you rely on those and their hosts haven't implemented HTTPS yet (which they probably will sooner or later, or they are going to be out of business).
Depending on what they actually do, you may be able to proxy the requests to those scripts and serve them from your HTTPS-enabled server. But I would really consider this a last resort.
The slowdown is mostly just the handshake. If you enable session resumption, there shouldn't be enough overhead left to actually slow down your site. Make sure your TLS session cache is big enough and that the ticket lifetime is ample.
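As a rough illustration, on Nginx the session cache and ticket lifetime mentioned above map to directives like these (the sizes and lifetimes are just example values, tune them to your traffic):
ssl_session_cache   shared:SSL:10m;   # shared cache; ~10 MB holds tens of thousands of sessions
ssl_session_timeout 1h;               # how long cached sessions can be resumed
ssl_session_tickets on;               # stateless resumption via TLS session tickets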
Of course, your mileage may vary. So make sure you test your https site before going online.
I have heard such horror stories as well, but I think most of the time they are due to a faulty or at least sub-standard implementation. Make sure you redirect EVERY single HTTP request to HTTPS with a 301 status and you should be fine. For some months now, enabling HTTPS should actually help with your Google ranking.
To link to an external site (a different FQDN) you don't have to implement any trickery to avoid browser warnings - that's just linking to a different site and has nothing to do with mixed-content policies.
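On Apache, for instance, that blanket redirect can be a three-line .htaccess sketch (assuming mod_rewrite is available; other servers have equivalents):
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]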

Is loading scripts or other resources via HTTPS on an HTTP page problematic?

I'm aware of protocol-relative URLs, which are usually the right solution for serving scripts or other resources on pages that may be loaded using HTTP or HTTPS.
However, I have a script that I would like to always serve via HTTPS, even when the page it's being loaded onto is served via HTTP. Leaving the obvious potential security issues around mixing HTTP and HTTPS content aside (namely, that a MITM attack on some script served via HTTP could theoretically be used to inject exploit code used to read stuff from the script served via HTTPS), is this a bad idea for any other reason? For example, will this cause mixed content warnings in any old versions of IE?
Nope! At least, not on any browsers that remain in popular use.
Paul Irish (one of the developers of Google Chrome and a modestly notable programming blogger and open-source contributor) has this advice to give in a 2014 update to his 2010 blog post, The Protocol-relative URL (emphasis from the original):
Now that SSL is encouraged for everyone and doesn’t have performance concerns, this technique is now an anti-pattern. If the asset you need is available on SSL, then always use the https:// asset.
Allowing the snippet to request over HTTP opens the door for attacks like the recent Github Man-on-the-side attack. It’s always safe to request HTTPS assets even if your site is on HTTP, however the reverse is not true.
More guidance and details in Eric Mills’ guide to CDNs & HTTPS.
If Paul Irish says that requesting HTTPS assets on a HTTP page is fine, then that's good enough for me.

Make full site HTTPS / SSL? What performance / SEO issues & best practices still apply in 2012? [closed]

Note: There are existing questions that look like duplicates (linked below), but most of them are from a few years ago. I'd like to get a clear and definitive answer that proves things either way.
Is making an entire website run over HTTPS no longer an issue today from a best-practice and performance / SEO perspective?
UPDATE: I am looking for more information with sources, especially around the impact on SEO. Bounty added.
Context:
The conversation came up when we wanted to introduce some buttons that spawn lightboxes with forms in them that collect personal information (some of them even allow users to login). This is on pages that make up a big portion of the site. Since the forms would need to collect and submit information securely and the forms are not on pages of their own, the easiest way we could see to make this possible was to make the pages themselves be HTTPS.
What I would like is for an answer that covers issues with switching a long running popular site to HTTPS such as the ones listed below:
Would a handshake be negotiated on every request?
Will all assets need to be encrypted?
Would browsers not cache HTTPS content, including assets?
Is the fact that downstream transparent proxies don't cache HTTPS content, including assets (CSS, JS, etc.), still an issue?
Would all external assets (tracking pixels, videos, etc) need to have HTTPS version?
HTTPS and gzip might not be happy together?
Backlinks and organic links will always be HTTP so you will be 301'ing all the time, does this impact SEO / performance? Any other SEO impact of changing this sitewide?
There's a move with some of the big players to always run HTTPS, see Always on SSL, is this setting a precedent / best practice?
Duplicate / related questions:
Good practice or bad practice to force entire site to HTTPS?
Using SSL Across Entire Site
SSL on entire site or just part of it?
Not sure I can answer all points in one go with references, but here goes. Please edit as appropriate:
Would a handshake be negotiated on every request?
No, SSL connections are typically reused for a number of consecutive requests. The overhead once associated with SSL is mostly gone these days. Computers have also gotten a lot faster.
Will all assets need to be encrypted?
Yes, otherwise the browser will not consider the entire site secure.
Would browsers not cache HTTPS content, including assets?
I do not think so, caching should work just fine.
Is the fact that downstream transparent proxies don't cache HTTPS content, including assets (CSS, JS, etc.), still an issue?
For the proxy to cache SSL encrypted connections/assets, the proxy would need to decrypt the connection. That largely negates the advantage of SSL. So yes, proxies would not cache content.
It is possible for a proxy to be an SSL endpoint to both client and server, so it has separate SSL sessions with each and can see the plaintext being transmitted. One SSL connection would be between the proxy and the server, the proxy and the client would have a separate SSL connection signed with the certificate of the proxy. That requires that the client trusts the certificate of the proxy and that the proxy trusts the server certificate. This may be set up this way in corporate environments.
Would all external assets (tracking pixels, videos, etc) need to have HTTPS version?
Yes.
HTTPS and gzip might not be happy together?
Being on different levels of protocols, it should be fine. gzip is negotiated after the SSL layer is put over the TCP stream. For reasonably well behaved servers and clients there should be no problems.
Backlinks and organic links will always be HTTP so you will be 301'ing all the time, does this impact SEO?
Why will backlinks always be HTTP? That's not necessarily a given. How it impacts SEO very much depends on the SE in question. An intelligent SE can recognize that you're simply switching protocols and not punish you for it.
1- Would a handshake be negotiated on every request?
There are two issues here:
Most browsers don't need to re-establish a new connection between requests to the same site, even with plain HTTP. HTTP connections can be kept alive, so, no, you don't need to close the connection after each HTTP request/response: you can re-use a single connection for multiple requests.
You can also avoid performing multiple handshakes when parallel or subsequent SSL/TLS connections are required. There are multiple techniques explained in ImperialViolet - Overclocking SSL (definitely relevant for this question), written by Google engineers, in particular session resumption and False Start. As far as I know, most modern browsers support at least session resumption.
These techniques don't get rid of new handshakes completely, but they reduce their cost. Apart from session reuse, OCSP stapling (to check the certificate revocation status) and elliptic-curve cipher suites can be used to reduce the key-exchange overhead during the handshake when perfect forward secrecy is required. These techniques also depend on browser support.
There will still be an overhead, and if you need massive web-farms, this could still be a problem, but such a deployment is possible nowadays (and some large companies do it), whereas it would have been considered inconceivable a few years ago.
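To make that concrete, on Nginx the stapling and ECDHE parts of the above map to directives roughly like these (a sketch only; the resolver address and cipher list are illustrative, not a recommendation):
ssl_stapling        on;                 # staple the OCSP response into the handshake
ssl_stapling_verify on;
resolver            8.8.8.8;            # Nginx needs a resolver to fetch OCSP responses (example resolver)
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;   # ECDHE suites give forward secrecy with a cheaper key exchange than classic DHE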
2- Will all assets need to be encrypted?
Yes, as always. If you serve a page over HTTPS, all the resources it uses (iframe, scripts, stylesheets, images, any AJAX request) need to be using HTTPS. This is mainly because there is no way to show the user which part of the page can be trusted and which can't.
3- Would browsers not cache HTTPS content, including assets?
Yes, they will. You can either use Cache-Control: public explicitly when serving your assets, or assume that the browser will cache them anyway. (In fact, for sensitive resources you should prevent caching.)
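For example, with Nginx that could look roughly like this (the location patterns and lifetimes are assumptions, adjust them to your own assets):
location ~* \.(css|js|png|jpg|gif|woff)$ {
    add_header Cache-Control "public, max-age=31536000";   # long-lived static assets, cacheable even over HTTPS
}
location /account/ {
    add_header Cache-Control "no-store";                    # keep sensitive responses out of any cache
}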
4- Is the fact that downstream transparent proxies don't cache HTTPS content, including assets (CSS, JS, etc.), still an issue?
HTTP proxy servers merely relay the SSL/TLS connection without looking into it. However, some CDNs also provide HTTPS access (all the links on the Google Libraries API are available via https://), which, combined with in-browser caching, allows for better performance.
5- Would all external assets (tracking pixels, videos, etc) need to have HTTPS version?
Yes, this goes with point #3. The fact that YouTube supports HTTPS access helps.
6- HTTPS and gzip might not be happy together?
They're independent. HTTPS is HTTP over TLS; the gzip compression happens at the HTTP level. Note that you can compress the SSL/TLS connection directly, but this is rarely used: you might as well use gzip compression at the HTTP level if you need it (there's little point in compressing twice).
7- Backlinks and organic links will always be HTTP so you will be 301'ing all the time, does this impact SEO?
I'm not sure why these links should use http://. URL-shortening services are, generally speaking, a problem for SEO, if that's what you're referring to.
I think we'll see more and more usage of HTTP Strict Transport Security, so more https:// URLs by default.
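HSTS itself is just one response header; on Nginx, for instance, it can be added like this (start with a short max-age while testing, the year-long value here is only an example):
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";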

What are my offline and socket options for a modern web application?

So I have been thinking about building quite a complex application, and the idea of building an HTML5 version has become quite an attractive possibility. I have a few questions about it first, however.
My first concern is how reliable the offline application APIs are at the moment. I have been looking into this standard: http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html and it looks pretty easy to implement and use, but I am wondering how reliable it is in practice. And assuming you set up the manifest etc., is the web application just accessed (offline) by going to the same URL you originally downloaded the application from?
My other concern is the use of sockets. This offline application still needs to be able to communicate with local servers; I ideally wanted to avoid having to host a web server, but a socket connection would be plausible. How well do WebSockets currently work when the browser is offline? Is it possible to have a fully networked / interactive browser application running even without an active internet connection (after the app is first downloaded)?
Any insight would be great!
That's a lot of questions; you may want to consider breaking it up into more easily answerable portions more directly related to what, exactly, you're trying to achieve. In the meantime, I'll try to provide a short answer to each of your questions:
My first concern is how reliable the offline application APIs are at
the moment.
Fairly reliable; they have been implemented for a number of versions across most major web browsers (except IE).
is the web application just accessed (offline) by going to the same
url you originally downloaded the application from?
Yes. Once the offline app has been cached, the application is served from that cache. Apart from checking whether the manifest itself has changed, no network requests will be made unless you explicitly request URLs listed in the NETWORK or FALLBACK sections of the manifest, or URLs that aren't covered by the manifest at all.
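For reference, a minimal cache manifest showing those sections might look like this (file names are hypothetical; the manifest must be served with the text/cache-manifest MIME type and referenced from the html element's manifest attribute):
CACHE MANIFEST
# v1 2012-06-01 - change this comment to force clients to re-download the cache
CACHE:
index.html
app.js
style.css
NETWORK:
/api/
FALLBACK:
/online.txt /offline.txt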
This offline application still needs to be able to communicate with
local servers, I ideally wanted to avoid having to host a web-server,
a socket connection however would be plausible.
A WebSocket still requires a web server: the initial handshake for a WebSocket is over HTTP. A WebSocket is not the same thing as a TCP/IP socket.
How well do websockets currently work when the browser is offline?
They won't work at all; when you've set a browser to offline mode, it won't make any network requests at all. Note that a browser being set to offline is not the same thing as the "offline" in "offline API". The offline API is primarily concerned with whether or not the server hosting the application can be reached, not with whether the browser is currently connected to a network or whether that network is connected to the internet. If the server goes down, the app is just as "offline" as if the network cable on the user's computer got unplugged. Have a read through this blog post, in particular the comments. My usual approach to detecting offline status is to set up a pair of files in the FALLBACK section such that you get one when online and the other when offline - request that file with AJAX and see what you get.
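A rough sketch of that check, assuming the /online.txt and /offline.txt FALLBACK pair from the manifest example above, where /online.txt contains the word "online" and /offline.txt does not (serve /online.txt with caching disabled so the browser's ordinary HTTP cache doesn't mask the result):
function checkOnline(callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/online.txt', true);
    xhr.onload = function () {
        // when the server is unreachable, the app cache answers with /offline.txt instead
        callback(xhr.responseText.indexOf('online') === 0);
    };
    xhr.onerror = function () { callback(false); };  // request failed outright
    xhr.send();
}
checkOnline(function (online) {
    console.log(online ? 'server reachable' : 'running from the offline cache');
});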
Is it possible, to have a fully networked / interactive browser
application running even without an active internet connection?
Yes, but I don't think that means what you think it does. Separate instances of the app running in different browsers on different machines would not be able to communicate with each other without going via the web server. However, there's no requirement that the web server be "on the internet"; it will do just fine sitting on the local network.