Google's Certificate Transparency project has been in place for some time, and Google Chrome and Mozilla Firefox have both claimed to have joined it. But how do I test whether a browser actually supports Certificate Transparency, and the three ways of delivering SCTs?
One of the easiest ways to test whether a browser is checking Certificate Transparency is to try a known bad site, such as https://invalid-expected-sct.badssl.com. Using this address, Chrome 69 will say the site is insecure, but Safari 12.0, which doesn't perform Certificate Transparency checks, will let it through.
Chrome's policy can be found at https://github.com/chromium/ct-policy/blob/master/ct_policy.md
Apple are in the process of enforcing Certificate Transparency; I believe the plan is to roll it out in iOS 12.1.1 and macOS 10.14.2. Their policy can be found at https://support.apple.com/en-us/HT205280
Firefox 63.0.1 doesn't seem to check Certificate Transparency either. Support is built into Firefox, but I believe it is currently not enforced until some other issues are resolved.
In terms of testing the three methods of delivery, there is a research project at https://www.ida.liu.se/~nikca89/papers/pam18.html with code available that pulls SCTs for a given list of domains, so you should be able to use that to check all three ways. To get it working, create a file top-1m.csv with entries for each domain on separate lines, each prefixed with an ignored numeric value, and execute the main function in FirstTestCase. Alternatively, you could look at the Conscrypt project, although that is more work.
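If you only want to check the first delivery method (SCTs embedded in the certificate itself) without the full research pipeline, a few lines of Python are enough. Below is a minimal sketch assuming the cryptography package is installed; SCTs delivered via the TLS extension or stapled OCSP responses are not visible through this API, so you would still need the project above (or Conscrypt) for those two.

import socket
import ssl

from cryptography import x509
from cryptography.hazmat.backends import default_backend

def embedded_scts(hostname, port=443):
    # Fetch the leaf certificate exactly as a browser would see it
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der, default_backend())
    try:
        # Embedded SCTs live in the signed_certificate_timestamp_list
        # X.509 extension (OID 1.3.6.1.4.1.11129.2.4.2)
        ext = cert.extensions.get_extension_for_class(
            x509.PrecertificateSignedCertificateTimestamps)
    except x509.ExtensionNotFound:
        return []
    return list(ext.value)

print(len(embedded_scts("www.google.com")), "embedded SCT(s)")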
We have a React front-end that is communicating with an ASP.NET Core API.
Sometimes we detect there is something wrong with the front-end (service workers, local cache, and that kind of thing), so we want to tell the client to clean it up.
I've implemented Clear-Site-Data (documented on MDN and in the W3C spec) as a response header: Clear-Site-Data: "cache", "cookies", "storage", "executionContexts"
When testing this out in Firefox, it works, and in the console I'm seeing:
Clear-Site-Data header found. Unknown value “"executionContexts"”. SignIn
Clear-Site-Data header forced the clean up of “cache” data. SignIn
Clear-Site-Data header forced the clean up of “cookies” data. SignIn
Clear-Site-Data header forced the clean up of “storage” data.
When doing the same in Chrome, it's not working, and I'm seeing the message
The request's credentials mode prohibits modifying cookies and other local data.
I'm trying to figure out what to do to fix it, but there are barely any references: just 7 search results, mostly from browser integration test logs.
All the documentation says that this should be implemented and working in Chrome... Any idea what the problem is?
Try manually reloading the page after the Clear-Site-Data response has been received (so that the local data / cache is cleared and the header no longer contains Clear-Site-Data).
Both Firefox & Chrome don't appear to support executionContexts, which tells the browser to reload the original response.
If the header contains executionContexts, the browser should ignore that value (as you see in the Firefox console). However, you can try the wildcard mapping (*), which will also add support for future properties:
Response.AppendHeader("Clear-Site-Data", "\"*\"");
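For comparison, here is a rough equivalent outside ASP.NET Core, written as a minimal Python WSGI app (the endpoint and body text are made up for illustration). Note that every directive, including the * wildcard, has to be sent as a quoted string:

import wsgiref.simple_server

def signout_app(environ, start_response):
    headers = [
        ("Content-Type", "text/plain"),
        # Each directive is a quoted string; "*" asks the browser to clear
        # every data type it supports, now and in the future. Browsers only
        # honor Clear-Site-Data on secure (HTTPS) origins.
        ("Clear-Site-Data", '"*"'),
    ]
    start_response("200 OK", headers)
    return [b"Signed out; site data should now be cleared."]

wsgiref.simple_server.make_server("", 8000, signout_app).serve_forever()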
Also, Google reuses parts of their Chrome source code in their open source project Chromium. If you take a look at the Chromium source code (https://doss-gitlab.eidos.ic.i.u-tokyo.ac.jp/sneeze/chromium/blob/9b22da4739ec7bf54fb8e730662e2ab7996532e0/content/browser/browsing_data/clear_site_data_handler.cc line 308), it implements the same exception: The request's credentials mode prohibits modifying cookies. A LOAD_DO_NOT_SAVE_COOKIES flag is somehow being sent. The console error may be caused by an additional response header, or there is a small chance it's a bug in Chrome.
Right now, my advice would be not to implement the Clear-Site-Data header.
Despite the spec being widely available for some years, vendor support is still hit-and-miss and is now actually going in reverse.
As per the W3C GitHub repository for this spec, there are a number of open issues regarding executionContexts. The wildcard ('*') mentioned by Greg in their answer is not supported by Chrome, and Mozilla are about to remove the cache value as well.
All this points to a flawed standard which is likely to be removed at some point in the future.
I've written an intercepting proxy in Python 3 which uses a man-in-the-middle "attack" technique to be able to inspect and modify pages coming through it on the fly. Part of the process of "installing" or setting up the proxy involves generating a "root" certificate which is to be installed in the browser. Every time a new domain is hit via HTTPS through the proxy, the proxy generates a new site certificate on the fly, signed by the root certificate, and uses that site certificate to communicate with the browser. (All generated certificates are cached to disk so the proxy doesn't have to re-generate certificates for domains it has already seen. And, of course, the proxy forges its own HTTPS connection to the remote server; it also checks the validity of the server certificate, if you're curious.)
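For illustration, the disk cache just described boils down to something like this sketch, where make_site_cert is a hypothetical stand-in for the generation code in cert.py:

import os
from OpenSSL import crypto

def cert_path_for(domain, cache_dir):
    # One PEM file per domain; generate on first sight, reuse afterwards
    path = os.path.join(cache_dir, domain + ".pem")
    if not os.path.exists(path):
        crt = make_site_cert(domain)  # hypothetical: returns an X509 signed by the root
        with open(path, "wb") as fh:
            fh.write(crypto.dump_certificate(crypto.FILETYPE_PEM, crt))
    return path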
Well, it works great with the surf browser. (And, this might be relevant: as of a few versions back, at least, surf didn't check/enforce certificate validity. I can't attest to whether that's the case for more recent versions.) But Firefox gives a SEC_ERROR_REUSED_ISSUER_AND_SERIAL error on the second (and all later) HTTPS request(s) made through the proxy, and Chromium (I haven't tested with Chrome proper) gives NET::ERR_CERT_COMMON_NAME_INVALID on every HTTPS request. These obviously present a major problem when trying to browse through my intercepting proxy.
The SSL library I'm using is pyOpenSSL 0.14 if that makes any difference.
Regarding Firefox's SEC_ERROR_REUSED_ISSUER_AND_SERIAL error, I'm pretty sure I'm not reusing serial numbers. (If anybody wants to check my work, that would be pretty rad: cert.py - note the "crt.set_serial_number(getrandbits(20 * 8))" on line 168.) The root certificate issuer of course doesn't change, but that wouldn't be expected to change, right? I'm not sure what exactly is meant by "issuer" in the error message if not the root certificate issuer.
Also, Firefox's "view certificate" dialog displays completely different serial numbers for different certificates generated by the proxy. (As an example, I've got one generated for www.google.com with a serial number of 00:BF:7D:34:35:15:83:3A:6E:9B:59:49:A8:CC:88:01:BA:BE:23:A7:AD and another generated for www.reddit.com with a serial number of 78:51:04:48:4B:BC:E3:96:47:AC:DA:D4:50:EF:2B:21:88:99:AC:8C .) So, I'm not really sure what Firefox is complaining about exactly.
My proxy reuses the private key (and thus public key/modulus) for all certificates it creates on the fly. I came to suspect this was what Firefox was balking about and tried changing the code to generate a new key pair for every certificate the proxy creates on the fly. That didn't solve the problem in Firefox. I still get the same error message. I have yet to test whether it solves the Chromium issue.
Regarding Chromium's NET::ERR_CERT_COMMON_NAME_INVALID error, the common name for a site certificate is just supposed to be the domain, right? I shouldn't be including a port number or anything, right? (Again, if anybody would like to check my work, see cert.py.) If it helps any, my intercepting proxy isn't using any wildcards in the certificate common names or anything. Every certificate generated is for one specific FQDN.
I'm quite certain making this work without making Firefox or Chrome (or Chromium or IE etc.) balk is possible. A company I used to work for purchased and set up a man-in-the-middling proxy through which all traffic from within the corporate network to the internet had to pass. The PC administrators at said company installed a self-signed certificate as a certificate authority in every browser on every company-owned computer used by the employees, and the result never produced any errors like the ones Firefox and Chromium have been giving me for the certificates my own intercepting proxy software produces. It's possible the PC administrators tweaked some about:config settings in Firefox to make this all work or something, but I kind of doubt it.
To be fair, the proxy used at this company was either network or transport layer, not application layer like mine. But I'd expect the same can be accomplished in an application-layer HTTP(s) proxy.
Edit: I've tried setting the subjectAltName as suggested by brain99. Following is the line I added in the location brain99 suggested:
r.add_extensions([crypto.X509Extension(b"subjectAltName", False, b"DNS:" + cn.encode("UTF-8"))])
I'm still getting SEC_ERROR_REUSED_ISSUER_AND_SERIAL from Firefox (on the second and subsequent HTTPS requests), and I'm getting ERR_SSL_SERVER_CERT_BAD_FORMAT from Chromium.
Here are a couple of certificates generated by the proxy:
google.com: https://pastebin.com/YNr4zfZu
stackoverflow.com: https://pastebin.com/veT8sXZ4
I noticed you only set the CN in your X509Req. Both Chrome and Firefox require the subjectAltName extension to be present; see for example this Chrome help page or this Mozilla wiki page discussing CA required or recommended practices. To quote from the Mozilla wiki:
Some CAs mistakenly believe that one primary DNS name should go into the Subject Common Name and all the others into the SAN.
According to the CA/Browser Forum Baseline Requirements:
BR #9.2.1 (section 7.1.4.2.1 in BR version 1.3), Subject Alternative Name Extension
Required/Optional: Required
Contents: This extension MUST contain at least one entry. Each entry MUST be either a dNSName containing the Fully-Qualified Domain Name or an iPAddress containing the IP address of a server.
You should be able to do this easily with pyOpenSSL:
if not os.path.exists(path):
    r = crypto.X509Req()
    r.get_subject().CN = cn
    r.add_extensions([crypto.X509Extension(b"subjectAltName", False, b"DNS:" + cn.encode("utf-8"))])
    r.set_pubkey(key)
    r.sign(key, "sha1")
If this does not solve the issue, or if it only partially solves it, please post one or two example certificates that exhibit the problem.
Aside from this, I also noticed you sign using SHA1. Note that certificates signed with SHA1 have been deprecated in several major browsers, so I would suggest switching to SHA-256.
r.sign(key, "sha256")
I vaguely remember that one could once place a link like the following example on a website, and a click on it (if the user had Skype installed) would fire up a Skype window and try to call/chat with the linked account:
<a href="skype:some_id?call">Call me on Skype</a>
There were even some browser addons for handling the "Skype protocol" correctly. Today, Skype recommends including a piece of code which pulls in a JavaScript file from Microsoft's CDN instead.
What's the current (2017) situation in terms of browser support across operating systems for Skype links like the one in my example, assuming that I do not want to pull stuff from Microsoft's CDN? Firefox 52 64-bit on openSUSE Linux with default settings and no addons asks whether it should open Skype, so there is still some support for this method around. Chrome 58 on openSUSE does more or less the same.
Given the fact that this still works at least partially, how would one link to a Microsoft Live ID, which contains a colon? The following does not work, unfortunately:
<a href="skype:live%3asome_id?call">Call me on Skype</a>
Both Firefox and Chrome fire up Skype, which then tries to call "live%3asome_id". Not escaping the colon does not seem to work either...
The gpsd program lets Linux users cleanly organize their GPS peripheral data and write it to a socket, like /var/run/gpsd.sock, such that a command line program like cgps or a graphical one like xgps can read the data.
There's a nice tutorial on the net for rigging a Raspberry Pi to use this data. This is all well and good, but how can I integrate this data into Firefox or Chromium as the geolocation API? Is there a specific build process I might need, for instance setting a ./configure flag or something? Is there a way to integrate this data into a prebuilt version of either browser?
Firefox on Linux supports gpsd - it was added in Firefox 4, removed in Firefox 23 and added back in Firefox 50.
However, it still needs to be enabled during build, with --enable-gpsd (which seems not to be the case yet in Ubuntu) and in the configuration, by following these steps:
Navigate to about:config
Create a new string preference, name geo.location.use_gpsd value true
Prior to Firefox 23, you had to:
Create a new string preference, name geo.gpsd.host.ipaddr value localhost
Create a new boolean value, name geo.gpsd.logging.enabled value true
Google Chrome had gpsd support added in November 2011 and removed in October 2013. It looks like hardware GPS support is not a priority. If this was handled in Chrome OS, it might be possible to use the same mechanism, but I don't see support there either.
Someone built an extension which attempts to provide support in recent versions, though it requires installing a script system-side.
Firefox on Linux used to support gpsd.
Navigate to about:config
Create a new string preference, name geo.gpsd.host.ipaddr value localhost
Create a new boolean value, name geo.gpsd.logging.enabled value true
However, it seems that the gpsd support has been removed
Chromium seems to have had gpsd support in the past, but I can't find anything about it now. It looks like hardware GPS support is not a priority. If this was handled in Chrome OS, it might be possible to use the same mechanism, but I don't see support there either.
In both cases, it should be possible to write an extension to fake the GPS coordinates, which could read from your real GPS.
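If you go the extension route, its native side could read fixes straight from gpsd's JSON protocol. A minimal sketch in Python, assuming gpsd is listening on its default TCP port 2947:

import json
import socket

def read_fix(host="127.0.0.1", port=2947):
    with socket.create_connection((host, port)) as sock:
        f = sock.makefile("rw")
        f.readline()  # gpsd greets with a VERSION object; skip it
        f.write('?WATCH={"enable":true,"json":true}\n')
        f.flush()
        for line in f:
            report = json.loads(line)
            # TPV ("time-position-velocity") reports carry the position
            if report.get("class") == "TPV" and "lat" in report:
                return report["lat"], report["lon"]

print(read_fix())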
Okay, so I'm a student programmer in my college's IT department, and I'm doing browser compatibility for a web form my boss wrote. I need the user to be able to open a local file from a shared drive with a single click.
The problem is that Firefox and Chrome don't allow that for security reasons. Thus, I'm trying to write a custom protocol of my own to open an address in Internet Explorer regardless of the browser being used.
Can anyone help me with this? I'd also be willing to try an alternative solution to the problem.
The below worked for me; is this what you mean?
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\foo]
#="URL: foo Protocol"
"URL Protocol"=""
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\foo\DefaultIcon]
#="C:\\Program Files (x86)\\Internet Explorer\\iexplore.exe"
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\foo\shell]
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\foo\shell\open]
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\foo\shell\open\command]
#="C:\\Program Files (x86)\\Internet Explorer\\iexplore.exe \"%1\""
Just to note, I'm running Win7Pro, so you may have to move around file path(s) to conform to your environment.
And if that doesn't work, create a go-between for the protocol and the browser: pass the argument(s) from foo:// to it, parse what's necessary, then hand it off to IE using start iexplore.exe "args".
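Such a go-between can be a small script registered as the protocol handler; Windows passes the full URL as the first argument. A rough Python sketch, where the foo scheme and the IE path are just examples to adjust for your environment:

import subprocess
import sys
from urllib.parse import unquote

# Windows invokes the registered handler with the full foo:// URL
url = sys.argv[1]
target = unquote(url.split(":", 1)[1].lstrip("/"))  # drop scheme and leading slashes
# parse/validate `target` as needed, then hand it off to IE
subprocess.run([r"C:\Program Files (x86)\Internet Explorer\iexplore.exe", target])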
I'm unsure whether I understand your question; if it is "how do I open local files using Chrome/Firefox", this is your answer:
First, a disclaimer: I have never done this and cannot vouch for the accuracy of my response.
IE
Microsoft's security model is pretty lax, so you can go right ahead and open these files.
Firefox
Some quick googling found that Firefox can do this after either editing prefs.js as outlined here or installing an addon called LocalLink
Chrome
Practically impossible due to its security model, until now: LocalLink has been ported to Chrome.