Chrome Invalid SSL Certificate Security Warning - google-chrome

I ran into an interesting problem today using Chrome and I'm hoping there is a better way to fix it than what I ended up doing.
The issue starts with an invalid SSL certificate on a site that I'm configuring. In Chrome it's possible to advance past this screen using a link which adds a security exception for the current domain so that you don't have to view this warning message again.
It's also possible to clear this warning by going to the site with the exception then clicking the Not secure text and choosing the Re-enable warnings option.
Now my problem: I have a couple of different redirects in place on the site that will redirect my .com and .bank domains to the primary .net domain. While developing I added security exceptions for all three of these domains. This becomes an issue when testing whether my SSL certificate is configured properly. I want to clear out Chrome's stored exception for the .com domain - but I cannot do so using the Re-enable warnings option because, as soon as I arrive at the page, Chrome sees that an exception is already stored and loads the page normally, which then gets redirected to the .net domain. Because of this there is no point at which I can actually clear out the bypassed security warning in Chrome...
The only way I've been able to find to clear out these exceptions is to use the Reset option in Chrome's settings, which is not something I want to do regularly. I'm wondering if there is a hidden settings page in Chrome that lists all of the bypassed security warnings so that I may clear them out individually.

To "Re-enable warnings" for all SSL warnings if you don't want to clear your history (or if you dont know all the exemptions you have in place), you can close Chrome and edit:
"C:\Users\USER\AppData\Local\Google\Chrome\User Data\Default\Preferences"
and set "ssl_cert_decisions": {} (an empty object).
Stored in the JSON-path:
profile > content_settings > exceptions > ssl_cert_decisions
Or you can change the decision_expiration_time of the specific exception to be equal to its last_modified time.
Example: "ssl_cert_decisions":{"https://expired.badssl.com:443,*":{"last_modified":"13235055329485008","setting":{"cert_exceptions_map":{"-201cgaDTf2DD6Cj0N6/tKvudkzDuRBA3GwKd8T9hE7mHhQ=":1},"decision_expiration_time":"13235055329485008","version":1}}}

You will also have to clear the browsing data for that site. The easiest way I found to do this is to press Ctrl+Shift+Del to bring up the Clear browsing data window, set the time range to 1 hour, choose Browsing history only, then click Clear data. Hope this is useful.

Related

How to remove all data chrome stores for a url?

TL;DR I'd like to make chrome's state as though it had never, ever visited a certain url before.
Longer version:
I'm working on an application and have a complicated problem regarding XSS vulnerabilities that could be caused by the browser 'remembering' something about a previous session, which could cause nonces not to match. The upshot is that I need to be absolutely sure that when I visit the app url, Chrome hasn't 'remembered' anything about it from any previous session(s).
Here's what I've tried:
Firstly, visiting: chrome://settings/cookies/detail?site=example.com and deleting all the cookies
Secondly, visiting: chrome://settings/clearBrowserData and deleting everything (unfortunately, this doesn't seem to be possible for one url at a time?)
I can prove that Chrome has not completely 'forgotten' the site. The proof is complicated, but basically: I place a different app (with a different favicon) at the url, visit the url, close that tab, then complete the steps above to clear browser data and cookies (at this point Chrome should have forgotten everything). Yet when I put a different app at the same url and visit it, Chrome uses the old app's favicon, which (I think) proves that it hasn't completely forgotten everything it knew about the url!
So, that's the long version. But, the TL;DR is to simply make it as though chrome had never visited a site (preferably without altering data stored for other sites, or doing anything extreme like completely uninstalling/reinstalling)
A third attempt
To empty the cache and hard reload, press cmd + opt + j to bring up the developer console, then right-click the refresh button and select 'Empty Cache and Hard Reload'. Yet the old favicon still remains, indicating that not all info from that site was removed.
After about 2 hours, I came up with the following techniques to try to remove the favicon, but even after all of the following steps, the favicon from the previous app still remains in Chrome's 'memory'!
Do the first two steps from the question:
visit: chrome://settings/cookies/detail?site=example.com and delete all the cookies (replace example.com with the url in question)
visit: chrome://settings/clearBrowserData and delete everything (it would be great to know how to do this for a single site)
Right click on the tiny icon to the immediate left of the url (it will be a lock if using https, or the letter 'i' if using http).
Go into each of the categories listed (e.g. 'Cookies', 'Site Settings', etc.) and delete them all
Note
I didn't find a solution for removing all data from Chrome; however, I found you can start a completely isolated Chrome session with these instructions

What is causing Chrome to show error "The request's credentials mode prohibits modifying cookies and other local data."?

We have a react front-end that is communicating with an ASP Core API.
Sometimes we detect there is something wrong with the front-end, including service workers, local cache, and that kind of stuff, so we want to tell the client to clean it up.
I've implemented the Clear-Site-Data (dev-moz) (w3c) as a response header, as "Clear-Site-Data": "cache", "cookies", "storage", "executionContexts"
When testing this out in Firefox, it works, and in the console I'm seeing:
Clear-Site-Data header found. Unknown value “"executionContexts"”. SignIn
Clear-Site-Data header forced the clean up of “cache” data. SignIn
Clear-Site-Data header forced the clean up of “cookies” data. SignIn
Clear-Site-Data header forced the clean up of “storage” data.
When doing the same in Chrome, it's not working, and I'm seeing the message
The request's credentials mode prohibits modifying cookies and other local data.
I'm trying to figure out what to do to fix it, but there are barely any references - just 7 results, mostly from browser integration test logs.
All the documentation says this should be implemented and working in Chrome... Any idea what the problem is?
Try manually reloading the page after the Clear-Site-Data has been received (so that the local data / cache is cleared and the header no longer contains Clear-Site-Data).
Neither Firefox nor Chrome appears to support executionContexts, which tells the browser to reload the original response.
If the header contains executionContexts, the browser should ignore it (as you see in the Firefox console). However, you can try the wildcard mapping (*). (This will also cover future properties.)
Response.AppendHeader("Clear-Site-Data", "\"*\"");
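For comparison, the equivalent from a PHP backend (illustration only; the question's API is ASP Core) shows the quoting the spec requires:
<?php
// Each Clear-Site-Data directive must itself be a quoted string.
header('Clear-Site-Data: "cache", "cookies", "storage"');
// or, to cover everything the browser supports now and in the future:
header('Clear-Site-Data: "*"');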
Also, Chrome shares its source code with Google's open source Chromium project. If you take a look at the Chromium source code (https://doss-gitlab.eidos.ic.i.u-tokyo.ac.jp/sneeze/chromium/blob/9b22da4739ec7bf54fb8e730662e2ab7996532e0/content/browser/browsing_data/clear_site_data_handler.cc line 308), it implements the same exception: The request's credentials mode prohibits modifying cookies. A LOAD_DO_NOT_SAVE_COOKIES flag is somehow being set on the request. The console error may be caused by an additional response header, or there is a small chance there's a bug in Chrome.
Right now, my advice would be do not implement the Clear-Site-Data header at this time.
Despite the spec being widely available for some years, vendor support is still hit-and-miss and is now actually going in reverse.
As per the w3c github for this, there are a number of issues regarding executionContexts. The wildcard ('*') mentioned by Greg in their answer is not supported by Chrome, and Mozilla are about to remove the cache value as well.
All this points to a flawed standard which is likely to be removed at some point in the future.

Chromium's XSS auditor refused to execute a script

This is a message from the Chrome Inspector:
The XSS Auditor refused to execute a script in
http://localhost/Disposable Working NOTAS.php
because its source code was found within the request.
The auditor was enabled as the server sent neither
an 'X-XSS-Protection' nor 'Content-Security-Policy' header.
... I have a couple dozen websites sitting on localhost on my notebook which I use for a big part of my workflow, and in the last couple of days, after a Chrome update changed something, pretty much none of the websites' textarea content is being saved to file anymore.
The code which was saving my edits is uniformly broken: I enter new text, click on save, and my browser, instead of executing the file-writing subroutines in the script for the webpage I am working in, simply opens a new blank page. If I then hit the back button, the textarea still shows the freshly added content, but no changes are appended to the file.
If you'd like to tell Chrome to disable its XSS protection, you can send an X-XSS-Protection header with a value of 0. Since you appear to be using PHP, you'd add this somewhere where it'll always be executed before any content has been output:
header("X-XSS-Protection: 0");
If you are getting blocked by the XSS Auditor, you should check whether your code has an XSS vulnerability before simply disabling it.
If you're getting blocked by the XSS Auditor, there's a decent chance you have an XSS vulnerability and just didn't realize it. If you simply disable the XSS Auditor, you will remain vulnerable: it's treating the symptoms rather than the underlying illness (the root cause).
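As a generic PHP illustration (not the asker's actual code), the usual first step is to escape anything user-supplied before echoing it back, rather than switching the auditor off:
<?php
// Escape user input before reflecting it into the page, so a crafted
// request cannot inject markup or script.
$comment = $_POST['comment'] ?? '';
echo '<textarea name="comment">'
   . htmlspecialchars($comment, ENT_QUOTES, 'UTF-8')
   . '</textarea>';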
I encountered exactly the same issue when I was studying XSS recently, and the line below shows a PHP way to bypass the Chrome XSS Auditor.
Just add: header("X-XSS-Protection: 0");

What to do with chrome sending extra requests?

Google chrome sends multiple requests to fetch a page, and that's -apparently- not a bug, but a feature. And we as developers just have to deal with it.
As far as I could dig out in five minutes, chrome does that just to make the surfing faster, so if one connection gets lost, the second will take over.
I guess if the website is well developed, then its functionality won't break because of this, since multiple requests are nothing new.
But I'm just not sure if I have accounted for all the situations this feature can produce.
Would there be any special situations? Any best practices to deal with them?
Update 1: Now I see why my bank's page throws an error when I open the page with chrome! It says: "Only one window of the browser should be open." That's their solution to security threats?!!
Your best bet is to follow standard web development best practices: don't change application state as a result of a GET call.
If you're worried, I recommend updating your data-layer unit tests to duplicate GET calls and ensure they return the same data.
(I'm not seeing this behaviour with Chrome 8.0.552.224, by the way - is this very new?)
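A minimal sketch of that rule, with made-up endpoint and helper names (this only illustrates the principle, not anyone's actual app):
<?php
// GET handlers only read; anything that changes state sits behind POST.
// A duplicated GET is then harmless: it just reads the same data twice.
function loadStatus(): string { return 'ok'; }            // hypothetical read
function recordPayment(array $data): void { /* DB write would go here */ }

if ($_SERVER['REQUEST_METHOD'] === 'GET') {
    echo json_encode(['status' => loadStatus()]);
} elseif ($_SERVER['REQUEST_METHOD'] === 'POST') {
    recordPayment($_POST);
    echo json_encode(['ok' => true]);
}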
I saw the behavior in question while writing a server application and found that the earlier answers are probably not accurate.
Chrome splits a page load into multiple HTTP requests to fetch resources in parallel. In this case, it is an image which it fetches with a separate HTTP GET.
I have attached a screenshot of a packet capture from Wireshark.
It is for a simple GET request to port 8080, for which the server returns a hello message.
Chrome sends the second GET request to obtain the favicon that you see at the top of every open tab. It is NOT a second GET to cater for a timeout or any such thing.
It should be considered another element that differs across browsers. However, issuing multiple HTTP requests in parallel is standard behaviour for browsers as of 2018.
Here are some references that I found later:
Chrome sends two requests SO
Chrome issue on google code
It can also be caused by link tags with empty href attributes, at least in Chromium (v41). For example, each of the following lines will generate an additional request for the page:
<link rel="shortcut icon" href="" />
<link rel="icon" type="image/x-icon" href="" />
<link rel="icon" type="image/png" href="" />
It seems that looking for empty attributes in the page, either href or src, is a good starting point.
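If you want to automate that check, a small PHP sketch using DOMDocument could flag them (illustration only; the file name is a placeholder, and grep or any DOM tool would work just as well):
<?php
// Scan an HTML file for elements whose href/src attribute is empty,
// since those can trigger an extra request for the page itself.
$doc = new DOMDocument();
@$doc->loadHTMLFile('index.html');   // @ silences warnings about sloppy markup

foreach ($doc->getElementsByTagName('*') as $el) {
    foreach (['href', 'src'] as $attr) {
        if ($el->hasAttribute($attr) && trim($el->getAttribute($attr)) === '') {
            echo "Empty {$attr} on <{$el->nodeName}>\n";
        }
    }
}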
This behavior can be caused by SRC='' or SRC='#' in an IMG or (as in my case) IFRAME tag. Replacing '#' with 'about:blank' fixed the problem.
Here http://forums.mozillazine.org/viewtopic.php?f=7&t=1816755 they say that SCRIPT tags can be the issue as well.
I observe this characteristic (bug/feature/whatever) when I am typing a URL and the autocomplete lands on a match while I am still typing.
Chrome takes that match and prefetches the page, I assume for the caching benefit you get when you then load the page yourself....
I have just implemented a single-use Guid token (asp.net/TSQL) which is generated when the first form in a series of two (plus a confirmation page) is rendered. The token is recorded as "pending" in the DB when it is generated. The Guid token accompanies posts as a hidden field, and is finally marked as closed when the user operation is completed (payment). This mechanism does work, and prevents any of the forms being resubmitted after the payment is made.
However, I see 2 or 3 (!?) additional tokens generated by additional requests arriving quickly one after the other. The first request is what ends up in front of the user (localhost - so i.e., me); where the generated content ends up for the other two requests I have no idea. I wondered initially why the Page_Load handlers were firing multiple times for one page impression, so I tried a flag in Http.Context.Current - but found to my dismay that the subsequent requests come in on the same URL but with no post data and empty Http.Context.Current arrays - i.e., completely (for practical purposes) separate http requests.
How to handle this? Some sort of token and logic to refuse subsequent page-body content requests while the first is still processing? I guess this could take place in a global context?
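For what it's worth, here is a rough PHP/SQL sketch of that single-use-token idea (the original is ASP.NET/TSQL; the form_tokens table, column names and connection details here are invented for the example). The important part is that the token is consumed atomically, so a duplicate or phantom request simply fails the check:
<?php
// Hypothetical single-use token guard. Invented schema:
//   CREATE TABLE form_tokens (token CHAR(32) PRIMARY KEY, status VARCHAR(10));
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// When the first form is rendered: issue a pending token (emit it as a hidden field).
function issueToken(PDO $pdo): string {
    $token = bin2hex(random_bytes(16));
    $pdo->prepare("INSERT INTO form_tokens (token, status) VALUES (?, 'pending')")
        ->execute([$token]);
    return $token;
}

// When the final post (payment) arrives: close the token exactly once.
function consumeToken(PDO $pdo, string $token): bool {
    $stmt = $pdo->prepare("UPDATE form_tokens SET status = 'closed'
                           WHERE token = ? AND status = 'pending'");
    $stmt->execute([$token]);
    return $stmt->rowCount() === 1;   // false => duplicate/empty repeat request, ignore it
}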
This only happens when I enable the "webug" extension (a FirePHP replacement for Chrome). If I disable the extension, the server only gets one request.
I just want to add an update on this one. I've encountered the same problem, but with a CSS style.
I've looked at all my src, href, and script tags and none of them had an empty string. The offending entry was this:
<div class="Picture" style="background-image: url('');"> </div>
Make sure you also check your styles for empty url() strings.
I was having this problem, but none of the solutions here were the issue. For me, it was caused by the APNG extension in Chrome (support for animated PNGs). Once I disabled that extension, I no longer saw double requests for images in the browser. I should note that regardless of whether the page was outputting a PNG image, disabling this extension fixed the issue (i.e., APNG seems to cause the issue for images regardless of image type, they don't have to be PNG).
I had many other extensions as well (such as "Web Developer" which many have suggested is the issue), and those were not the problem. Disabling them did not fix the issue. I'm also running in Developer Mode and that didn't make a difference for me at all.
In my case, it was Chrome (v65) making a second GET /favicon.ico, even though the response was text/plain and thus clearly contained no <link> referring to the icon. It stopped doing that after I replied with a 404.
Firefox (v59) was sending 2 requests for the favicon; again, it stopped after the 404.
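If you want to silence those probes at the server rather than adding a <link>, a small front-controller check could do it. Here is a PHP sketch (it assumes every request is routed through one script, which may not match your setup):
<?php
// Answer favicon probes with 404 so the browser stops re-requesting them.
if (($_SERVER['REQUEST_URI'] ?? '') === '/favicon.ico') {
    http_response_code(404);
    exit;
}
// ...normal request handling continues here...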
I was having the same bug, and as in a previous answer, the issue was that I had installed the Validator Chrome extension. Once I disabled the extension, everything worked normally.
In my case I had an endpoint serving JSON data on a different server, and the browser first makes an empty request (Request Method: OPTIONS) to check whether the endpoint accepts requests from my origin (the same-origin policy / CORS preflight). It is also good to know that this is an Angular 1 app.
In short, I was making requests from localhost to an online fake JSON API.
I had an empty TCP packet sent by Chrome to my simple server before the normal HTML GET query, and a /favicon request after it. The favicon wasn't a problem, but the empty TCP connection was, since my server was waiting either for data or for the connection to be closed. It had no data and wouldn't release the connection for 2 minutes, so a thread was hanging for 2 minutes.
Jrummell's link in a comment to the original post helped me. It says empty TCP packets can be caused by the "Predict network actions to improve page load performance" setting. I tried turning off the prediction settings one by one and it worked. In Chrome version 73.0.3683.86 (Official Build) (64-bit) this behavior was caused by the "Use a prediction service to load pages more quickly" setting being turned on.
So in Chrome ~73 you can try going to Settings -> Advanced -> Privacy and security -> Use a prediction service to load pages more quickly, and turn it OFF.
It could also be the situation where Chrome first sends a request with the OPTIONS method and only the second one is the real request with the GET method. Usually in code we deal only with GET (or POST/PUT/DELETE...) but not with OPTIONS. Check whether the first request uses the OPTIONS method.
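A minimal PHP sketch of answering that preflight before the real handler runs (the header values here are permissive examples, not a recommendation; tighten them for real use):
<?php
// Reply to the CORS preflight (OPTIONS) and stop; the real GET/POST arrives separately.
if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
    header('Access-Control-Allow-Origin: http://localhost');      // example origin
    header('Access-Control-Allow-Methods: GET, POST, OPTIONS');
    header('Access-Control-Allow-Headers: Content-Type');
    http_response_code(204);                                      // no body needed
    exit;
}
// ...handle GET/POST here...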

Page can not be displayed

I've got a client that sees the "Page can not be displayed" (nothing else) whenever they perform a certain action in their website. I don't get the error, ever. I've tried IE, FF, Chrome, and I do not see the error. The client sees the error on IE.
The error occurs when they press a form submit button that has only hidden fields.
I'm thinking this could be some kind of anti-malware / antivirus issue. Has anyone ever dealt with this issue?
In IE, go to the "Advanced" section of "Internet Options" and uncheck "Show friendly HTTP errors". This should give you the real error.
Is this an IE message? Ask them to switch off "short error messages" (or whatever they are called in the English version) somewhere deep in IE's options - this will make IE display the error message your server is sending instead of its own unhelpful message.
Also, I've heard that IE can be forced to show server-provided error messages if the error page is long/large enough, so you might want to pad your error pages so they exceed that size threshold. This information is old enough that it might only have affected older versions of IE - I usually get to the root of problems by eliminating the "short error messages".
Note: I'm running neither IE nor Windows, and can therefore only go from memory regarding the names of the config options in IE6...
Update: corrected the suggestion to provide longer error messages... Perhaps somebody with access to IE can confirm whether longer error pages still force IE to display the original error page instead of the user-friendly (sic) one.
It would be useful to figure out which error code is returned. Is it 404 - Not Found or 403 - Forbidden? There are a few more, but in any case it would help you figure out the cause of the problem.
If your client is running IE, ask him to disable friendly error messages in the advanced options.
Check their "hosts file". The location of this file is different for XP and vista
in XP I believe it's C:\windows\hosts or C:\windows\system32\hosts
Look for any suspicious domains.. Generally speaking, there should only be ~2 definitions (besides comments) in the files defining localhost and other local ip definitions. If there's anything else, make sure it's supposed to be there.
Otherwise, maybe the site's just having issues? Also, AFAIK, FF never displays "Page cannot be displayed", so are you sure this is the case in all browsers?
You can try using ieHTTPHeaders to see what is going on behind the scenes.
Do you have any events applied to your submit button? Are you doing a custom submit button that is a hyperlink with an href like "javascript:void(0)" and an event attached that submits the form?
Although this is a 2008 thread, I think someone may still be using Windows XP in VirtualBox in 2018, like me.
The issue I met in 2018 is:
1. Pings to 8.8.8.8 get correct responses.
2. HTTP sites work fine, but HTTPS does not.
3. I cannot connect to any site over HTTPS, so I cannot download Chrome or Firefox.
My solution was to enable TLS 1.0 for secure connections, and after that everything was fine.