Chrome: ERR_BLOCKED_BY_XSS_AUDITOR details

I'm getting this Chrome error when trying to POST and then GET a simple form.
The problem is that the Developer Console shows nothing about it, and I cannot find the source of the problem by myself.
Is there any option for looking at this in more detail?
I'd like to view the piece of code triggering the error so I can fix it...

The simple way to bypass this error during development is to send the X-XSS-Protection header to the browser.
Send the header before sending any data to the browser.
In PHP you can send this header to bypass the error:
header('X-XSS-Protection: 0');
In ASP.NET you can send the header like this:
HttpContext.Response.AddHeader("X-XSS-Protection", "0");
or
HttpContext.Current.Response.AddHeader("X-XSS-Protection", "0");
In Node.js:
res.writeHead(200, {'X-XSS-Protection': '0'});
// or with Express
res.set('X-XSS-Protection', '0');
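A minimal PHP sketch of the idea (the form markup and the save.php target are placeholders, not from the original answer):
<?php
// The header must go out before any byte of the body; once output has
// started, PHP's header() call fails with a "headers already sent" warning.
header('X-XSS-Protection: 0');
?>
<form method="post" action="save.php">
    <textarea name="content"></textarea>
    <button type="submit">Save</button>
</form>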

Chrome v58 might or might not fix your issue... It really depends on what you're actually POSTing. For example, if you're trying to POST some raw HTML/XML data within an input/select/textarea element, your request might still be blocked by the auditor.
In the past few days I hit this issue in two different scenarios: a WYSIWYG client-side editor and an interactive upload form featuring some kind of content preview. I managed to fix them both by base64-encoding the raw HTML before POSTing it, then decoding it on the receiving PHP page. This will most likely fix the issue and, most importantly, increase the developer's awareness of the data coming from POST requests, hopefully pushing them toward adopting effective data encoding/decoding strategies and strengthening their web application against XSS-type attacks.
To base64-encode your content on the client side you can either use the native btoa() function, which is supported by most browsers nowadays, or a third-party alternative such as a jQuery plugin (I ended up using this, which worked ok).
To base64-decode the POST data you can then use PHP's base64_decode(str) function, ASP.NET's Convert.FromBase64String(str) or anything else (depending on your server-side scenario).
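A minimal sketch of the receiving side in PHP, assuming the client ran btoa() on the HTML and POSTed it in a field named "content" (the field name is made up for this example; note also that btoa() only handles Latin-1, so UTF-8 content needs an extra encoding step):
<?php
// receive.php - decode the base64 payload the client produced with btoa().
// Strict mode makes base64_decode() return false on invalid input.
$raw = base64_decode($_POST['content'] ?? '', true);
if ($raw === false) {
    http_response_code(400);
    exit('Invalid base64 payload');
}
// $raw now holds the original HTML; sanitize it before storing or echoing it.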
For further info, check out this blog post that I wrote on the topic.

In this case, being a first-time contributor at the Creative forums (some kind of vBulletin construct) and reduced to posting a PM to the moderators before getting forum access, it was easy to recognize the nature of the issue from the more popular answers above.
The command was
http://forums.creative.com/private.php?do=insertpm&pmid=
And as described above the actual data was "raw HTML/XML data within an input/select/textarea element".
The general requirement for handling such a bug (or feature) at the user end is some kind of quick fix-it tweak or twiddle. This post discusses the options of clearing the cache, resetting Chrome settings, creating a new user, or retrying the operation with a new beta release.
It was also suggested that one launches a new instance with the following:
google-chrome-stable --disable-xss-auditor
The launch actually worked on this Windows 10 1703 machine with Chrome 61 after this modified version:
chrome --disable-xss-auditor
However, on logging back in to the site and attempting the post again, the same error was generated. Perhaps the syntax wants refining, or something else is awry.
It then seemed reasonable to launch Edge and repost from there, which turned out to be no problem at all.

This may help in some circumstances. Modify the Apache httpd.conf file (mod_headers must be enabled) and add:
Header set X-XSS-Protection "0"
The issue may also have been fixed in Chrome 58.0.3029.110 (64-bit).

I've noticed that if there is an apostrophe (') in the POSTed text, Chrome will block it.

When I updated the href from javascript:void(0) to # on the page making the POST request, it worked.
For example:
<a href="javascript:void(0)">login</a>
Change to:
<a href="#">login</a>

I solved the problem!
In my case, when I submit, I send HTML to the action, and the model had a property that accepted the HTML via the [AllowHtml] attribute.
The solution was to remove this [AllowHtml] attribute, and everything went OK!
Obviously I no longer send the HTML to the action, because in my case I do not need it.

It is a Chrome bug. The only remedy is to use Firefox until they fix this Chrome bug. The XSS auditor trashing a page that has worked fine for 20 years seems to be a symptom, not a cause.

Related

What is causing Chrome to show error "The request's credentials mode prohibits modifying cookies and other local data."?

We have a React front-end that is communicating with an ASP.NET Core API.
Sometimes we detect there is something wrong with the front-end (service workers, local cache, and that kind of stuff), so we want to tell the client to clean it up.
I've implemented Clear-Site-Data (dev-moz) (w3c) as a response header: Clear-Site-Data: "cache", "cookies", "storage", "executionContexts"
When testing this out in Firefox, it works, and in the console I'm seeing:
Clear-Site-Data header found. Unknown value “"executionContexts"”. SignIn
Clear-Site-Data header forced the clean up of “cache” data. SignIn
Clear-Site-Data header forced the clean up of “cookies” data. SignIn
Clear-Site-Data header forced the clean up of “storage” data.
When doing the same in Chrome, it's not working, and I'm seeing the message
The request's credentials mode prohibits modifying cookies and other local data.
I'm trying to figure out what to do to fix it, but there are barely any references: just 7 search results, mostly from browser integration test logs.
All the documentation says this should be implemented and working in Chrome... Any idea what the problem is?
Try manually reloading the page after the Clear-Site-Data header has been received (so that the local data/cache is cleared and the response no longer contains Clear-Site-Data).
Neither Firefox nor Chrome appears to support executionContexts, which tells the browser to reload the original response.
If the header contains executionContexts, the browser should ignore that value (as you see in the Firefox console). However, you can try the wildcard mapping (*), which will also cover future values:
Response.AppendHeader("Clear-Site-Data", "\"*\"");
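For reference, a PHP equivalent of the same quoted-wildcard header would be:
header('Clear-Site-Data: "*"');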
Also, Google reuses parts of the Chrome source code in their open-source project Chromium. If you take a look at the Chromium source code (https://doss-gitlab.eidos.ic.i.u-tokyo.ac.jp/sneeze/chromium/blob/9b22da4739ec7bf54fb8e730662e2ab7996532e0/content/browser/browsing_data/clear_site_data_handler.cc line 308), it implements the same exception: The request's credentials mode prohibits modifying cookies. A flag, LOAD_DO_NOT_SAVE_COOKIES, is somehow being set. The console error may be caused by an additional response header, or there is a small chance of a bug in Chrome.
Right now, my advice would be do not implement the Clear-Site-Data header at this time.
Despite the spec being widely available for some years, vendor support is still hit-and-miss and is now actually going in reverse.
As per the w3c github for this, there are a number of issues regarding executionContexts. The wildcard ('*') mentioned by Greg in their answer is not supported by Chrome, and Mozilla are about to remove the cache value as well.
All this points to a flawed standard which is likely to be removed at some point in the future.

Wordpress JSON API returns normal site page in html. How do I get it to give me JSON like it's supposed to

For example, entering http://mywordpresswebsite.example.com/?json=1 into the browser loads the main site html, the same as omitting the json querystring variable: http://mywordpresswebsite.example.com/
The JSON API is activated. I have tried reactivating and deactivating, checking .htaccess file settings, and deactivating all other plugins. None of those have made much difference so far.
TIA
I had the same problem with my localhost test page and was wondering why my route worked last week but was not accessible this week.
Short explanation
After some tests and a lot of frustration, I was able to use the REST API route again by following the WordPress documentation about routes vs. endpoints with "Pretty" permalinks and "Plain" (ugly) permalinks.
Longer explanation
I guess in my case it was caused by the reinstall of my MySQL database. By installing the new database, my previous setup was reset to the WordPress standard installation with permalinks set to "Plain", which is an "ugly" permalink. That's the reason why Mattygabe's answer worked for me after the reinstall of the database.
But with this solution I had a problem with my filter value, so I went with "pretty" permalinks instead and changed the permalink setting to "Month and name" (Settings => Permalinks). After this change, I could access my REST API via the desired route.
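For illustration, using the hostname from the question, the same core route is reachable both ways:
With "pretty" permalinks: http://mywordpresswebsite.example.com/wp-json/wp/v2/posts
With "plain" (ugly) permalinks: http://mywordpresswebsite.example.com/?rest_route=/wp/v2/posts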
There can also be some difficulties with REST APIs related to the following:
using "wp" within the REST route
if you work on plugins that are meant to be shared, keep in mind that some plugins may restrict REST access, e.g. iThemes Security
I'm likely doing it wrong, but when I form my requests for a Wordpress installation at http://www.example.com/ like this:
http://www.example.com/index.php?rest_route=/my/rest/route/here
I end up getting proper responses back.
I had a heck of a time figuring this out and ended up grokking a URL formatted like that in the HTML returned to me. I was expecting to make requests as http://www.example.com/wp_json/wp/v2/my/rest/route/here , but I only got HTML responses.
(FWIW, I am reposting this on all similar questions on the StackExchange network. Admins/mods - if this is against the rules or seen as rep spamming, feel free to take it down. Was hoping to help anyone else hitting the same issue I am, and to also learn what it is I've done wrong and why.)
OK, so the new endpoint for WordPress 4.7 is mywordpresswebsite.example.com/index.php/wp-json. It's part of WordPress core as of 4.7 and not a plugin anymore; there's nothing to be activated. Thank you, Mark Kaplun.
I also experienced this issue. I did install the WP API plugin and then realized I didn't need it so I deactivated it and deleted it. Afterwards I tried a GET request to https://example.com/wp-json/wp/v2/posts and received the HTML of my wordpress site.
To fix this I ended up deactivating all plugins and then I started receiving the JSON response from https://example.com/wp-json/wp/v2/posts so I stepped through each plugin reactivating and in the end all my plugins are active and the endpoint is responding with JSON.
I changed Permalinks (Settings => Permalinks).
I had an issue where an HTML page was returned instead of the JSON response on WordPress 5.3, and it was resolved when I changed the permalink setting from "Plain" to "Post name".

'Script error' fix does not work for Chrome

The fix for the cryptic JS Script error (resulting from errors in a cross-origin script) is well documented. I've implemented the solution and it now works for Firefox, but it does not for Chrome. Has anyone else encountered this problem or know what might be going wrong? I did look through this post, where they identify it as a bug that was fixed back in 2013, so I'm not sure what gives.
I ran into this issue, which might not be your issue, but in case someone else comes here with the same thing:
Adding the CORS headers to the HTTP responses for the linked script is not enough. I overlooked this, but adding crossorigin="anonymous" (or crossorigin="use-credentials", see here on MDN) to the <script> tag is also necessary.
It makes sense: in order to allow sharing of source-code info between two domains, one would want permissions to be given on both sides. The page requesting the external script and the source of the script both have to be okay with showing debug info.
It's already in "The Fix" link in the original post, but I lost some time to it, so I thought it was worth reiterating.
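For example (the CDN URL here is a placeholder), the tag ends up looking like this:
<script src="https://cdn.example.com/app.js" crossorigin="anonymous"></script>
and the server hosting app.js must also answer with a matching CORS header, e.g.:
Access-Control-Allow-Origin: *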
I ran into the same issue, and I don't know if this helps you (probably not since it's been over a year), but for any future readers: send your files over HTTPS, not just HTTP. It seems that Chrome is stricter than Firefox and won't allow this 'fix' to work over regular HTTP.
(And if you're using webpack, be sure to have devtool: "source-map" set, as opposed to eval-source-map.)

Chrome basic authentication custom message stopped working

I am using an nginx proxy to serve my web page. To log in, users need to provide their two-factor authentication code and their password. To let users know they need to enter their password plus the two-factor code, I send them the message "Login required, username, password+VIP token".
And this is what I get now:
This stopped working as of Chrome version 49. I am on 49.0.2623.110.
Any workaround to fix this? It works perfectly on Firefox.
This was indeed answered in Change Basic HTTP Authentication realm and login dialog message.
Short explanation: You were actually defining realms with auth_basic directives of Nginx on the server side. But "whether to prompt this message or not" is basically a design choice made by specific client programs. And Chrome just chose to hide it, for reasons you may find in the first link.
In fact, as of this writing (2022), Firefox seems to hide the message too.
Why I necromance this very old post: I was reading this documentation of Nginx. At the end of the article, there is a screenshot similar to the Firefox one in the OP. Unsurprisingly, my browser didn't behave like that even though I followed all the instructions therein. Then I started Googling, and this is the first hit relevant to my question. After I learned something about HTTP basic authentication, realms, etc., and finally came across the first link, I thought I should post something here.
Apparently, the Nginx documentation is using a kinda modern UI to host kinda outdated contents. Hope this answer helps anyone who is confused by that screenshot too ;)

What to do with chrome sending extra requests?

Google chrome sends multiple requests to fetch a page, and that's -apparently- not a bug, but a feature. And we as developers just have to deal with it.
As far as I could dig out in five minutes, chrome does that just to make the surfing faster, so if one connection gets lost, the second will take over.
I guess if the website is well developed, its functionality won't break because of this, since multiple requests are nothing new.
But I'm just not sure if I have accounted for all the situations this feature can produce.
Would there be any special situations? Any best practices to deal with them?
Update 1: Now I see why my bank's page throws an error when I open the page with chrome! It says: "Only one window of the browser should be open." That's their solution to security threats?!!
Your best bet is to follow standard web development best practices: don't change application state as a result of a GET call.
If you're worried, I recommend updating your data-layer unit tests to issue duplicated GET calls and ensure they return the same data.
(I'm not seeing this behaviour with Chrome 8.0.552.224, by the way. Is this very new?)
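A sketch of that duplicated-GET test in PHPUnit; fetchReport() is a hypothetical stand-in for whatever call your GET handler makes:
<?php
use PHPUnit\Framework\TestCase;

final class IdempotentGetTest extends TestCase
{
    public function testRepeatedGetReturnsSameData(): void
    {
        $first  = $this->fetchReport(42);
        $second = $this->fetchReport(42);   // simulate Chrome's duplicate request
        $this->assertSame($first, $second); // same data, no state change
    }

    private function fetchReport(int $id): array
    {
        // Hypothetical data-layer read; swap in your real GET code path.
        return ['id' => $id, 'total' => 100];
    }
}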
I saw the behavior in question while writing a server application and found that the earlier answers are probably not true.
Chrome splits a single page load into multiple HTTP requests to fetch resources in parallel. In this case, it is an image, which it fetches as a separate HTTP GET.
I attached a screenshot of a packet capture through Wireshark.
It is for a simple GET request to port 8080, for which the server returns a hello message.
Chrome sends the second GET request to obtain the favicon, which you see on top of every open tab. It is NOT a second GET to cater for a timeout or any such thing.
It should be considered another element that differs across browsers. However, issuing multiple HTTP requests in parallel has been standard behavior in browsers as of 2018.
Here is a reference question that I found later:
Chrome sends two requests SO
Chrome issue on google code
It can also be caused by link tags with empty href attributes, at least in Chromium (v41). For example, each of the following lines will generate an additional request to the page:
<link rel="shortcut icon" href="" />
<link rel="icon" type="image/x-icon" href="" />
<link rel="icon" type="image/png" href="" />
It seems that looking for empty attributes in the page, either href or src, is a good starting point.
This behavior can be caused by src='' or src='#' in an IMG or (as in my case) IFRAME tag. Replacing '#' with 'about:blank' fixed the problem.
Here (http://forums.mozillazine.org/viewtopic.php?f=7&t=1816755) they say that SCRIPT tags can be the issue as well.
My observation of this characteristic (bug/feature/whatever) occurs when I am typing a URL and the autocomplete lands on a match while I'm still typing.
Chrome takes that match and fetches the page, I assume for the caching benefits that would occur when you load the page yourself...
I have just implemented a single-use GUID token (asp.net/TSQL) which is generated when the first form in a series of two (plus a confirmation page) is generated. The token is recorded as "pending" in the DB when it is generated, accompanies posts as a hidden field, and is finally marked as closed when the user operation is completed (payment). This mechanism does work and prevents any of the forms being resubmitted after the payment is made.
However, I see 2 or 3 (!?) additional tokens generated by additional requests coming in quickly one after the other. The first request is what ends up in front of the user (localhost, so i.e., me); where the generated content ends up for the other two requests, I have no idea.
I wondered initially why Page_Load handlers were firing multiple times for one page impression, so I tried a flag in Http.Context.Current, but found to my dismay that the subsequent requests come in on the same URL but with no POST data and empty Http.Context.Current arrays, i.e., completely (for practical purposes) separate HTTP requests.
How to handle this? Some sort of token and logic to refuse subsequent page-body requests while the first is still processing? I guess this could take place in a global context?
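A sketch of that single-use token idea in PHP (the post describes an ASP.NET/T-SQL version; the table and column names here are invented):
<?php
// Issue a token when the first form is rendered and record it as pending.
function issue_token(PDO $db): string {
    $token = bin2hex(random_bytes(16)); // stand-in for a GUID
    $db->prepare("INSERT INTO form_tokens (token, status) VALUES (?, 'pending')")
       ->execute([$token]);
    return $token; // emit as a hidden field in the form
}

// Close the token atomically; a duplicate submission finds zero pending rows.
function consume_token(PDO $db, string $token): bool {
    $stmt = $db->prepare(
        "UPDATE form_tokens SET status = 'closed'
         WHERE token = ? AND status = 'pending'");
    $stmt->execute([$token]);
    return $stmt->rowCount() === 1;
}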
This only happens when I enable the "webug" extension (a FirePHP replacement for Chrome). If I disable the extension, the server only gets one request.
I just want to add an update on this one. I encountered the same problem, but with a CSS style.
I looked at all my src, href, and script tags, and none of them had an empty string. The offending entry was this:
<div class="Picture" style="background-image: url('');"> </div>
Make sure you also check your styles for an empty url() string.
I was having this problem, but none of the solutions here were the issue. For me, it was caused by the APNG extension in Chrome (support for animated PNGs). Once I disabled that extension, I no longer saw double requests for images in the browser. I should note that the page didn't even need to output a PNG image; disabling the extension fixed the issue for images regardless of image type.
I had many other extensions as well (such as "Web Developer" which many have suggested is the issue), and those were not the problem. Disabling them did not fix the issue. I'm also running in Developer Mode and that didn't make a difference for me at all.
In my case, it was Chrome (v65) making a second GET /favicon.ico, even though the response was text/plain and thus clearly had no <link> in it referring to an icon. It stopped doing that after I replied with a 404.
Firefox (v59) was sending two requests for the favicon; again, it stopped doing this after the 404.
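A minimal sketch of that 404 reply in a PHP front controller, assuming all requests are routed through one script:
<?php
// Answer favicon probes with 404 so the browser stops re-requesting them.
if ($_SERVER['REQUEST_URI'] === '/favicon.ico') {
    http_response_code(404);
    exit;
}
// ... normal request handling continues here ...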
I'm having the same bug, and like the previous answer, the issue was that I had installed the Validator Chrome extension.
Once I disabled the extension, it worked normally.
In my case, I was requesting (JSON) endpoint data from a different server, and the browser first makes an empty request (Request Method: OPTIONS) to check whether the endpoint accepts requests from my origin (the same-origin policy). It's also good to know this was an Angular 1 app.
In short, I was making requests from localhost to an online fake JSON data service.
I had an empty TCP packet sent by Chrome to my simple server before the normal HTML GET query, and /favicon after. The favicon wasn't a problem, but the empty TCP packet was, since my server was waiting either for data or for the connection to be finished. It had no data and wouldn't release the connection for 2 minutes, so the thread was hanging for 2 minutes.
Jrummell's link in a comment on the original post helped me. It says empty TCP packets can be caused by the "Predict network actions to improve page load performance" setting. I tried turning off the prediction settings one by one, and it worked. In Chrome version 73.0.3683.86 (Official Build) (64-bit), this behavior was caused by the Chrome setting "Use a prediction service to load pages more quickly" being turned on.
So in Chrome ~73 you can try going to Settings -> Advanced -> Privacy and security -> "Use a prediction service to load pages more quickly" and turn it OFF.
It could be a situation where Chrome first sends a request with the OPTIONS method, and only the second is the real request with the GET method. Usually in code we deal only with GET (or POST/PUT/DELETE...), but not with OPTIONS. Check whether the first request uses the OPTIONS method.
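A sketch of handling that preflight in PHP; the allowed origin is an assumption, so substitute your real front-end origin:
<?php
// Answer the CORS preflight separately so it never reaches the GET/POST logic.
if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
    header('Access-Control-Allow-Origin: http://localhost:3000');
    header('Access-Control-Allow-Methods: GET, POST, OPTIONS');
    header('Access-Control-Allow-Headers: Content-Type');
    http_response_code(204);
    exit;
}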