Chrome 63 changing http to https

Does using Chrome v.63 force use of https?
I am running Apache 2.4.27 on a Windows 10 desktop as a sandbox where I can experiment and do some tutorials. I have a virtual host set up called www.tutorial.dev with an alias of tutorial.dev. In the Windows 10 hosts file I have pointed both www.tutorial.dev and tutorial.dev at localhost.
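For reference, the relevant hosts entries look like this (a sketch, assuming the default Windows location C:\Windows\System32\drivers\etc\hosts):

127.0.0.1    tutorial.dev
127.0.0.1    www.tutorial.dev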
As of yesterday the URL http://tutorial.dev/Bootstrap4FromScratch/ was working normally, in this case providing a directory listing as a jumping-off point into various examples and exercises. Today, when I type in the URL, Chrome changes it to https and I get a connection refused message.
I understand the connection refused message. There are no certificates setup.
The only change I can find is that Chrome changed from v.62.x to v.63.x. What in Chrome 63 could be forcing http to https?
I don't have this problem with MS Edge. I tested another similar configuration on a different machine that was in the process of downloading Chrome 63.x (it already had 62.x installed). It worked until the 63.x upgrade was complete, then the same problem occurred.
Additional information: If I use http://localhost to bring up the index.html or version.php in the htdocs directory the switch from http to https does not happen. The virtual host www.tutorial.dev resides in another directory outside of htdocs.
If this has been asked and answered please point me to the question/answer thread.
Thanks in advance,
Barry

Google owns the .dev TLD, and with Chrome 63 they are forcing HTTPS on all requests to anything under .dev.
I went through my local dev setup and replaced all references to .dev with .local, and it works fine now.
Your other option used to be Firefox for local development, but .dev now triggers HTTPS in Firefox as well (since before FF 61), although there is a workaround.
Edit (aside):
I have switched to using .localhost for dev, because browsers then allow navigator.geolocation.getCurrentPosition(), which is blocked if the site is not served over HTTPS.

The Google Chrome 63 update, released in December 2017, places .dev domains in the preloaded HSTS list with a rule enforcing HTTPS, with no workarounds.
{ "name": "dev", "include_subdomains": true, "mode": "force-https" }
The only way around it is to switch .dev for something else, like .localhost.
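In practice that just means renaming the host in both the hosts file and the Apache vhost; a minimal sketch (the document root is an assumption):

# hosts
127.0.0.1    tutorial.localhost
127.0.0.1    www.tutorial.localhost

# httpd-vhosts.conf
<VirtualHost *:80>
    ServerName www.tutorial.localhost
    ServerAlias tutorial.localhost
    DocumentRoot "C:/sites/tutorial"
</VirtualHost>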
The IETF reserves a few TLDs for testing and development (RFC 2606):
TLDs for Testing, & Documentation Examples
There is a need for top level domain (TLD) names that can be used
for creating names which, without fear of conflicts with current or
future actual TLD names in the global DNS, can be used for private
testing of existing DNS related code, examples in documentation, DNS
related experimentation, invalid DNS names, or other similar uses.
For example, without guidance, a site might set up some local
additional unused top level domains for testing of its local DNS code
and configuration. Later, these TLDs might come into actual use on
the global Internet. As a result, local attempts to reference the
real data in these zones could be thwarted by the local test
versions. Or test or example code might be written that accesses a
TLD that is in use with the thought that the test code would only be
run in a restricted testbed net or the example never actually run.
Later, the test code could escape from the testbed or the example be
actually coded and run on the Internet. Depending on the nature of
the test or example, it might be best for it to be referencing a TLD
permanently reserved for such purposes.
To safely satisfy these needs, four domain names are reserved as
listed and described below.
.test
.example
.invalid
.localhost
".test" is recommended for use in testing of current or new DNS
related code.
".example" is recommended for use in documentation or as examples.
".invalid" is intended for use in online construction of domain
names that are sure to be invalid and which it is obvious at a
glance are invalid.
The ".localhost" TLD has traditionally been statically defined in
host DNS implementations as having an A record pointing to the
loop back IP address and is reserved for such use. Any other use
would conflict with widely deployed code which assumes this use.
PS: .foo is also in the preloaded HSTS list
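If you want to check for yourself whether a name carries a preloaded rule, one option is to search Chromium's transport_security_state_static.json; a rough Python sketch (the gitiles URL and file location are assumptions that may go stale, and gitiles serves the file base64-encoded when ?format=TEXT is used):

import base64
import json
import urllib.request

URL = ("https://chromium.googlesource.com/chromium/src/+/main/"
       "net/http/transport_security_state_static.json?format=TEXT")

raw = base64.b64decode(urllib.request.urlopen(URL).read()).decode("utf-8")
# The file carries //-style comments, which json.loads rejects, so strip them.
cleaned = "\n".join(
    line for line in raw.splitlines() if not line.lstrip().startswith("//")
)
data = json.loads(cleaned)

for entry in data["entries"]:
    if entry["name"] in ("dev", "foo"):
        print(entry)  # e.g. a force-https rule with include_subdomains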

Thanks everyone for the advice. I ended up going with .tst for now. I have a feeling I'll be switching over (forced?) to .localhost at some point. But for now .tst is less typing.

Related

Intercepting proxy's certificates generated on-the-fly provoke browser errors

I've written an intercepting proxy in Python 3 which uses a man-in-the-middle "attack" technique to be able to inspect and modify pages coming through it on the fly. Part of the process of "installing" or setting up the proxy involves generating a "root" certificate to be installed in the browser. Every time a new domain is hit via HTTPS through the proxy, the proxy generates a new site certificate for it on the fly, signed by the root certificate, and uses that site certificate to communicate with the browser. (All generated certificates are cached to disk so they don't have to be regenerated. And, of course, the proxy forges its own HTTPS connection to the remote server; it also checks the validity of the server certificate, if you're curious.)
Well, it works great with the surf browser. (And, this might be relevant: as of a few versions back, at least, surf didn't check/enforce certificate validity. I can't attest to whether that's the case for more recent versions.) But Firefox gives a SEC_ERROR_REUSED_ISSUER_AND_SERIAL error on the second (and all later) HTTPS request(s) made through the proxy, and Chromium (I haven't tested with Chrome proper) gives NET::ERR_CERT_COMMON_NAME_INVALID on every HTTPS request. These obviously present a major problem when trying to browse through my intercepting proxy.
The SSL library I'm using is pyOpenSSL 0.14 if that makes any difference.
Regarding Firefox's SEC_ERROR_REUSED_ISSUER_AND_SERIAL error, I'm pretty sure I'm not reusing serial numbers. (If anybody wants to check my work, that would be pretty rad: cert.py - note the "crt.set_serial_number(getrandbits(20 * 8))" on line 168.) The root certificate issuer of course doesn't change, but that wouldn't be expected to change, right? I'm not sure what exactly is meant by "issuer" in the error message if not the root certificate issuer.
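For reference, the serial-number pattern referred to boils down to this (a condensed sketch, not the actual cert.py):

from random import getrandbits
from OpenSSL import crypto

crt = crypto.X509()
# 20 random bytes; X.509 serial numbers may be up to 20 octets long
crt.set_serial_number(getrandbits(20 * 8))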
Also, Firefox's "view certificate" dialog displays completely different serial numbers for different certificates generated by the proxy. (As an example, I've got one generated for www.google.com with a serial number of 00:BF:7D:34:35:15:83:3A:6E:9B:59:49:A8:CC:88:01:BA:BE:23:A7:AD and another generated for www.reddit.com with a serial number of 78:51:04:48:4B:BC:E3:96:47:AC:DA:D4:50:EF:2B:21:88:99:AC:8C .) So, I'm not really sure what Firefox is complaining about exactly.
My proxy reuses the private key (and thus public key/modulus) for all certificates it creates on the fly. I came to suspect this was what Firefox was balking about and tried changing the code to generate a new key pair for every certificate the proxy creates on the fly. That didn't solve the problem in Firefox. I still get the same error message. I have yet to test whether it solves the Chromium issue.
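For reference, generating a fresh key pair per certificate in pyOpenSSL looks roughly like this (a sketch; the key size is an assumption):

from OpenSSL import crypto

key = crypto.PKey()
key.generate_key(crypto.TYPE_RSA, 2048)  # new 2048-bit RSA key pair for this cert

crt = crypto.X509()
crt.set_pubkey(key)  # bind the fresh key to the certificate being built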
Regarding Chromium's NET::ERR_CERT_COMMON_NAME_INVALID error, the common name for a site certificate is just supposed to be the domain, right? I shouldn't be including a port number or anything, right? (Again, if anybody would like to check my work, see cert.py.) If it helps any, my intercepting proxy isn't using any wildcards in the certificate common names or anything. Every certificate generated is for one specific FQDN.
I'm quite certain making this work without making Firefox or Chrome (or Chromium or IE etc.) balk is possible. A company I used to work for purchased and set up a man-in-the-middling proxy through which all traffic from within the corporate network to the internet had to pass. The PC administrators at said company installed a self-signed certificate as a certificate authority in every browser on every company-owned computer used by the employees, and the result never produced any errors like the ones Firefox and Chromium have been giving me for the certificates my own intercepting proxy software produces. It's possible the PC administrators tweaked some about:config settings in Firefox to make this all work or something, but I kind of doubt it.
To be fair, the proxy used at this company was either network or transport layer, not application layer like mine. But I'd expect the same can be accomplished in an application-layer HTTP(S) proxy.
Edit: I've tried setting the subjectAltName as suggested by brain99. Following is the line I added in the location brain99 suggested:
r.add_extensions([crypto.X509Extension(b"subjectAltName", False, b"DNS:" + cn.encode("UTF-8"))])
I'm still getting SEC_ERROR_REUSED_ISSUER_AND_SERIAL from Firefox (on the second and subsequent HTTPS requests), and I'm getting ERR_SSL_SERVER_CERT_BAD_FORMAT from Chromium.
Here are a couple of certificates generated by the proxy:
google.com: https://pastebin.com/YNr4zfZu
stackoverflow.com: https://pastebin.com/veT8sXZ4
I noticed you only set the CN in your X509Req. Both Chrome and Firefox require the subjectAltName extension to be present; see for example this Chrome help page or this Mozilla wiki page discussing CA required or recommended practices. To quote from the Mozilla wiki:
Some CAs mistakenly believe that one primary DNS name should go into
the Subject Common Name and all the others into the SAN.
According to the CA/Browser Forum Baseline Requirements:
BR #9.2.1 (section 7.1.4.2.1 in BR version 1.3), Subject Alternative
Name Extension
Required/Optional: Required
Contents: This extension MUST contain at least one entry. Each entry MUST be either a dNSName containing the Fully-Qualified Domain Name or an iPAddress containing the IP address of a server.
You should be able to do this easily with pyOpenSSL:
if not os.path.exists(path):
    r = crypto.X509Req()
    r.get_subject().CN = cn
    # subjectAltName is required by modern browsers; pyOpenSSL expects bytes here
    r.add_extensions([crypto.X509Extension(b"subjectAltName", False, b"DNS:" + cn.encode("utf-8"))])
    r.set_pubkey(key)
    r.sign(key, "sha1")
If this does not solve the issue, or if it only partially solves it, please post one or two example certificates that exhibit the problem.
Aside from this, I also noticed you sign using SHA1. Note that certificates signed with SHA1 have been deprecated in several major browsers, so I would suggest switching to SHA-256.
r.sign(key, "sha256")
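Putting those pieces together, a minimal end-to-end sketch of issuing a site certificate with a SAN, a random serial, and SHA-256 (make_site_cert, ca_crt, and ca_key are placeholder names, not code from the proxy in question):

from random import getrandbits
from OpenSSL import crypto

def make_site_cert(cn, ca_crt, ca_key):
    key = crypto.PKey()
    key.generate_key(crypto.TYPE_RSA, 2048)       # fresh key pair per site

    crt = crypto.X509()
    crt.set_version(2)                            # X.509 v3, needed for extensions
    crt.set_serial_number(getrandbits(20 * 8))    # random 20-byte serial
    crt.get_subject().CN = cn
    crt.add_extensions([
        crypto.X509Extension(b"subjectAltName", False, b"DNS:" + cn.encode("utf-8")),
    ])
    crt.set_issuer(ca_crt.get_subject())
    crt.gmtime_adj_notBefore(0)
    crt.gmtime_adj_notAfter(365 * 24 * 60 * 60)   # valid for one year
    crt.set_pubkey(key)
    crt.sign(ca_key, "sha256")
    return crt, key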

TFS 2015 Code Viewer Not Working in Google Chrome

I found the following issue here on Stack Overflow, however I cannot comment on it as yet. I have a similar issue and wonder if there is anyone out there who has solved it.
https://stackoverflow.com/questions/40917501/tfs-2015-web-portal-code-viewer-not-working#
I am encountering something similar here: in-house TFS 2015, and I can't view code in the web portal using Google Chrome, however IE is fine. I am not using HTTPS, though, so may be experiencing something slightly different.
When I do try to view a file in Chrome, the window where the code listing should be is simply blank. I did note too that the button for creating a new build definition appears to be indicating a broken image link.
This has not always been an issue. Around 4 months ago I could get the code view fine in Chrome and, to my knowledge as I have no access to the servers, nothing has changed apart from Chrome updates.
I've tried reverting to previous versions of Chrome to no avail, though I wouldn't know which version I was on when this did work.
Interestingly, I have one or two .MD files around and these display perfectly well. They are simple text files. However, when they are saved with a .TXT extension (or anything else I've tried), they do not show. Curious.
Update
When a file has been selected, in this case a .SQL file, the area where the view should populate shows nothing at all.
As for the F12 developer tools, I do get five of these:
Failed to load resource: net::ERR_CONNECTION_REFUSED
plus the associated paths, of course. We use Webroot internally here, which has recently dropped in a Chrome extension; however, even when Webroot is disabled in its entirety (including removal of the extension) I get the same behaviour.
All other Chrome extensions have been removed too at varying times to try to give a clean browser.
I have no other pop up blockers, ad blockers, etc installed on the workstation.
Problem solved thanks to the F12 key suggestion.
After some grovelling I was granted domain admin privileges to have a dig around everything. It turns out that TFS was installed on ServerA with a URL on port 8080; this I knew from the original install, and it is obviously the path I follow to get to my TFS web interface. What had also been done subsequently, with no consultation of the Dev user group, was that a second TFS application tier had been installed on ServerB, and the port there was 8088.
I had not noticed the difference in path initially, assuming it was Chrome or workstation related. Anyway, I altered the port on ServerB to 8080 and everything jumped into life. I should not have made assumptions and should have paid more attention to the path in the error!
It seems the second application tier was set up on a non-production environment to allow senior Dev users access to the TFS Management Console rather than allowing them access to the original app tier which was on a production box. Our IT Operations just forgot to tell anyone.
Try updating your Chrome to the latest version (55.0.2883.87 m, 64-bit).
Also clear Chrome's cache. I have encountered similar issues; the solution was to clear the cache and connect to the web portal using another ID, then connect back using the original ID. I have no idea which one solved the problem, so you could try both.
This problem should be an isolated case, since TFS 2015 has been released for a long time.

What is the "resource://" URL scheme?

I recently encountered a web page containing the following line of markup:
<script src="resource://ember-inspector-at-emberjs-dot-com/ember-inspector/data/in-page-script.js"></script>
Note that the scheme in the URL is 'resource' and that the URL is not for something that can be reached over the Internet.
This is not a URL scheme that I have previously encountered. Despite some searching on the matter, I can't find any information regarding the use of this scheme.
What is the purpose of the 'resource' scheme? If I were a browser, what would I do with this?
The resource: URI scheme is exclusive to Firefox and was registered with Firefox v3.
It's used internally, related to chrome.manifest.
In Firefox, enter this in the address bar and navigate to it:
resource:///
You should find the directory structure of your local Firefox installation files.
Background
Mozilla has multiple URI schemes registered. These include resource: and chrome: (the latter being the more commonly familiar).
A Chrome directory is an important part of any Firefox installation. Inside the Chrome directory there are data files, documents, scripts, images, etc.. all of these files comprise the user interface elements and local user data.
But a chrome:// URI is actually just a special case of the lesser-known resource:// URI, which points to the top of the platform installation area. All paths in the chrome directory must begin with resource: or jar:
Info found in Rapid Application Development with Mozilla written by Nigel McFarlane
Specific use case: Ember.js
For the specific case you referred to, you can find more details here: https://github.com/emberjs/ember-inspector/issues/82
Issues
We allowed accessibility for resource:/// which pointed at the
installed on-disk resources that came with Firefox. I don't know if we
supported alternate resource aliases at the time, but I'm sure add-ons
weren't using them and that we didn't support resource aliasing in
chrome.manifest (which didn't exist).
When we introduced resource into chrome.manifest we should have added
the option contentaccessible=yes mechanism at the same time: let
add-ons opt-in to fingerprintability just as we do with chrome
content. Unfortunately anything we do may have compatibility problems:
searching addon source I find 810 chrome.manifest files that define
custom resource:// locations. One reason for so many is because it's
used by JetPack addons so I'm somewhat hopeful that most of those
don't need to reference these from content.
Quoted from Reference 2 below.
The only reason extensions would need to use resource: is to make things available to web content.
Quoted from Reference 2 below.
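For context, a custom resource:// location is declared with a resource line in an add-on's chrome.manifest; a sketch (the package name myaddon is made up, and the contentaccessible flag shown on the content line is the opt-in mechanism the quote refers to):

resource  myaddon  resources/
content   myaddon  content/  contentaccessible=yes

Files under resources/ then become reachable as resource://myaddon/filename from within the browser.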
Directly from Mozilla
I had a really hard time finding any mention of resource:// in any documentation by Mozilla, IANA, or W3C. This is the one and ONLY direct mention of the definition of resource: that I could find published directly from Mozilla. It was so obscure I took a screenshot :)
Further Reading:
Bugzilla report on resource:// security vulnerability
Another Bugzilla report (source of quote above)
IANA resource:// Resource Identifier provision
IANA Complete List of URI-Scheme Assignments

How would I go about creating a custom protocol that would always open an address with a specific browser?

Okay, so I'm a student programmer in my college's IT department, and I'm doing browser compatibility for a web form my boss wrote. I need the user to be able to open a local file from a shared drive with a single click.
The problem is that Firefox and Chrome don't allow that for security reasons. Thus, I'm trying to write a custom protocol of my own to open an address in Internet Explorer regardless of the browser being used.
Can anyone help me with this? I'd also be willing to try an alternative solution to the problem.
The below worked for me, is this what you mean?
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\foo]
@="URL:foo Protocol"
"URL Protocol"=""
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\foo\DefaultIcon]
@="C:\\Program Files (x86)\\Internet Explorer\\iexplore.exe"
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\foo\shell]
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\foo\shell\open]
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\foo\shell\open\command]
@="\"C:\\Program Files (x86)\\Internet Explorer\\iexplore.exe\" \"%1\""
Just to note, I'm running Win7Pro, so you may have to move around file path(s) to conform to your environment.
And if that doesn't work, create a proxy between the protocol and the browser: pass the argument(s) from foo:// to it, parse what's necessary, then hand it off to IE using start iexplore.exe "args".
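If you go that route, the helper can be tiny; a rough Python sketch of a handler you would register as the foo: command (the IE path and the URL rewriting are assumptions about your setup):

import subprocess
import sys
from urllib.parse import unquote

IE = r"C:\Program Files (x86)\Internet Explorer\iexplore.exe"

def main():
    raw = sys.argv[1]                         # e.g. foo://servername/foldername/file.rtf
    path = unquote(raw.split("://", 1)[1])    # strip the custom scheme
    subprocess.run([IE, "file://///" + path]) # hand the UNC-style URL to IE

if __name__ == "__main__":
    main()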
I'm unsure whether I understand your question; if it is "how do I open local files using Chrome/Firefox", this is your answer:
First, a disclaimer: I have never done this and cannot vouch for the accuracy of my response.
IE
Microsoft's security model is pretty lax here, so you can go right ahead and open these files.
FireFox
Some quick googling found that Firefox can do this after either editing prefs.js as outlined here or installing an add-on called LocalLink.
Chrome
Practically impossible due to its security model, until now: LocalLink has been ported to Chrome.

Workaround for href="file://///..." in Firefox

On an intranet site, let's say I want to link to a file on a share using UNC, at:
\\servername\foldername\filename.rtf
It seems the correct way to do this is with markup like this:
<a href="file://///servername/foldername/filename.rtf">filename.rtf</a>
That's five slashes - two for the protocol, one to indicate the root of the file system, then two more to indicate the start of the server name.
This works fine in IE7, but in Firefox 3.6 it will only work if the HTML comes from a local file. I can't get it to work when the file comes from a web server; the link is "dead" and clicking on it does nothing.
Is there a workaround for this in Firefox? Those two browsers should be all I need to worry about for now.
Since this is obviously a feature of Firefox, not a bug, can someone explain what the benefit is to preventing this type of link?
This question has been asked at least twice before, but I was unable to find those posts before posting my own (sorry):
Open a direct file on the hard drive from firefox (file:///)
Firefox Links to local or network pages do not work
Here is a summary of answers from all three posts:
Use WebDAV — this is the best solution for me, although much more involved than I had anticipated.
Use http:// instead of file:///// — this will serve up a copy of the document that the user cannot edit and save.
Edit user.js on the client as described here — this worked for me in Firefox 3.6.15, but without access to client machines, it's not a solution.
In Firefox, use about:config, change the security.fileuri.strict_origin_policy setting to false — this doesn't work for me in 3.6.15. Other users on [SO] have also reported that it doesn't work.
Use the locallinks Firefox extension — this sets the security.fileuri.strict_origin_policy preference for you, and appears to have no other effect.
Read the file server-side and send it as the response — this presents the same problem as simply configuring your web server to use http://.
Browsers like Firefox refuse to open the file:// link when the parent HTML page itself is served using a different protocol like http://.
Your best bet is to configure your webserver to provide the network mapped file as a web resource so that it can be accessed by http:// from the same server instead of by file://.
Since it's unclear which webserver you're using, I can't go in detail as to how to achieve this.
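If it happens to be Apache on Windows, for example, the idea is roughly this (the share name and mount point are assumptions):

Alias "/docs" "//servername/foldername"
<Directory "//servername/foldername">
    Require all granted
</Directory>

The files then become reachable at http://yourserver/docs/filename.rtf instead of via file://.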
In Firefox, to open file://///yourFileServer/docs/doc.txt, for example, you need to turn on some options in the Firefox configuration (user.js):
user_pref("capability.policy.policynames", "localfilelinks");
user_pref("capability.policy.localfilelinks.sites", "http://yourServer1.companyname.com http://yourServer2.companyname.com");
user_pref("capability.policy.localfilelinks.checkloaduri.enabled", "allAccess");
As it turns out, I was unaware that Firefox had this limitation/feature. I can sympathize with the feature, as it prevents a user from unwittingly accessing the local file system. Fortunately, there are useful alternatives that can provide a similar user experience while sticking to the HTTP protocol.
One alternative to accessing content via UNC paths is to publish your content using the WebDAV protocol. Some content managements systems, such as MS SharePoint, use WebDAV to provide access to documents and pages. As far as the end-user experience is concerned, it looks and feels just like accessing network files with a UNC path; however, all file interactions are performed over HTTP.
It might require a modest change in your file access philosophy, so I suggest you read about the WebDAV protocol, configuration, and permission management as it relates to your specific server technology.
Here are a few links that may be helpful if you are interested in learning more about configuring and using WebDAV on a few leading HTTP servers:
Apache Module mod_dav
IIS 7.0 WebDAV Extension
Configuring WebDAV Server in IIS 7, 6, 5
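To give a flavour of the Apache side, a minimal mod_dav sketch (the lock database path, mount point, and share are assumptions, and in practice you would add authentication):

DavLockDB "C:/apache/var/DavLock"
Alias "/webdav" "//servername/foldername"
<Location "/webdav">
    Dav On
    Require all granted
</Location>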
Add your own policy: open about:config in the location bar and add three new entries:
capability.policy.policynames MyPolicy
capability.policy.MyPolicy.sites http://localhost
capability.policy.MyPolicy.checkloaduri.enabled allAccess
Replace http://localhost with your website.
Works with Firefox 70.0.
I don't know if this will work, but give it a shot! Old article, but potentially still useful.
http://www.techlifeweb.com/firefox/2006/07/how-to-open-file-links-in-firefox-15.html