HTTP No Authorization field in the digest authentication requests - google-chrome

I have an HTTP server with digest authentication on my SoC. On an authentication attempt, the server correctly sends a response with a 401 code and a WWW-Authenticate header containing a nonce and the Digest scheme. However, on some hosts the browser does not include the Authorization field (with the nonce, etc.) in subsequent requests, which it is supposed to include.
Here is the Edge login attempt:
Response with WWW-Authenticate - https://i.imgur.com/tcw1XYL.png.
In the screenshot above, the server returns a correct WWW-Authenticate field.
Request without Authorization - https://i.imgur.com/4z61rU5.png.
I expect an Authorization field in the next request, but there is none!
The Chrome attempt is similar, except Chrome instantly shows the 401 page without a login prompt because there is no Authorization field in the header.
Chrome and Edge are both the latest 64-bit versions on Windows 10.
What possible issues could cause this behavior?

Apparently the problem was a multi-line WWW-Authenticate header. You can see the "\r\n" separators (0x0d 0x0a bytes) between the header field values in the screenshots.
Such multi-line folding was allowed in the original RFC 2616 and was later deprecated by the newer RFC 7230. See https://stackoverflow.com/a/31324422/8876135 for details and links.
After making the header field a single line, the problem was gone. I still have no idea why the exact same browsers had this issue with the header on some hosts but were completely fine on my work and home PCs.
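For illustration, here is a minimal sketch of the difference, assuming a hand-rolled response writer like the one on the device (the realm and nonce values are placeholders):
# Broken: the WWW-Authenticate value is folded across lines with CRLF
# plus leading whitespace (the obsolete line folding from RFC 2616,
# deprecated by RFC 7230). Some clients silently drop the header.
broken = (
    b"HTTP/1.1 401 Unauthorized\r\n"
    b"WWW-Authenticate: Digest realm=\"soc\",\r\n"
    b" qop=\"auth\",\r\n"
    b" nonce=\"dcd98b7102dd2f0e8b11d0f600bfb0c093\"\r\n"
    b"\r\n"
)

# Fixed: the entire header value on a single line.
fixed = (
    b"HTTP/1.1 401 Unauthorized\r\n"
    b"WWW-Authenticate: Digest realm=\"soc\", qop=\"auth\", "
    b"nonce=\"dcd98b7102dd2f0e8b11d0f600bfb0c093\"\r\n"
    b"\r\n"
)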

Related

CORB OPTIONS Requests Blocked in Chrome 73

It appears that in a recent Chrome release (or at least recently when making calls to my API --- I hadn't seen it until today), Google is throwing warnings about CORB requests being blocked.
Cross-Origin Read Blocking (CORB) blocked cross-origin response [domain] with MIME type text/plain. See https://www.chromestatus.com/feature/5629709824032768 for more details.
I have determined that the requests to my API are succeeding, and that it's the pre-flight OPTIONS request that is triggering the warning in the console.
The application calling the API is not explicitly making the OPTIONS request; rather, I have come to understand that the browser issues it automatically as a pre-flight when making a cross-origin request.
I can confirm that the response to the OPTIONS request does not have a MIME type defined. However, I am a little confused, as it is my understanding that an OPTIONS response is only headers and does not contain a body. I do not understand why such a response would require a MIME type to be defined.
Moreover, the console warning says the request was blocked, yet the various POST and GET requests are succeeding. So it looks as though the OPTIONS request isn't actually being blocked?
This is a three-part question:
Why does an OPTIONS response require a MIME type to be defined when there is no body?
What should the MIME type be for an OPTIONS response, if text/plain is not appropriate? Would I be correct in assuming application/json?
How do I configure my Apache2 server to include a MIME type in all responses to pre-flight OPTIONS requests?
I have gotten to the bottom of these CORB warnings.
The issue is related, in part, to my use of the X-Content-Type-Options: nosniff header. I set this header to stop the browser from sniffing the content type itself, thereby removing MIME-type trickery, namely with user-uploaded files, as an attack vector.
The other part of this is related to the content type being returned: application/json;charset=utf-8. Google's documentation notes:
A response served with a "X-Content-Type-Options: nosniff" response header and an incorrect "Content-Type" response header, may be blocked.
Based on this, I set out to double-check IANA's site on acceptable media types. To my surprise, I discovered that no charset parameter was ever actually defined in any RFC for the application/json type; the registration further notes:
No "charset" parameter is defined for this registration. Adding one really has no effect on compliant recipients.
Based on this, I removed the charset from Content-Type: application/json and can confirm the CORB warnings stopped in Chrome.
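For reference, a minimal sketch of the change on the server side (a hypothetical WSGI app here, rather than the asker's actual Apache setup):
from wsgiref.simple_server import make_server

def app(environ, start_response):
    body = b'{"ok": true}'
    start_response("200 OK", [
        # Before, this triggered CORB together with nosniff:
        #   ("Content-Type", "application/json;charset=utf-8"),
        # After (no charset parameter is defined for application/json):
        ("Content-Type", "application/json"),
        ("X-Content-Type-Options", "nosniff"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

make_server("", 8000, app).serve_forever()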
In conclusion, it would appear that as of a recent Chrome release, Google has opted to start treating the MIME type more strictly than it has in the past.
Lastly, as a side note, the reason all of our application requests still succeed is that Cross-Origin Read Blocking doesn't appear to actually be enforced in Chrome:
In most cases, the blocked response should not affect the web page's behavior and the CORB error message can be safely ignored.
I was having the same issue.
The problem in my case was that the API was answering the preflight with 200 OK, but the response was empty and did not set the Content-Length header.
So either changing the preflight response status to 204 No Content, or simply setting the Content-Length: 0 header, solved the issue. A sketch of both fixes follows.
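A minimal sketch of both fixes, assuming a WSGI handler (the origin and header lists are placeholders; the same idea applies to an Apache configuration):
from wsgiref.simple_server import make_server

def app(environ, start_response):
    cors = [
        ("Access-Control-Allow-Origin", "https://app.example.com"),
        ("Access-Control-Allow-Methods", "GET, POST, OPTIONS"),
        ("Access-Control-Allow-Headers", "Content-Type"),
    ]
    if environ["REQUEST_METHOD"] == "OPTIONS":
        # Fix 1: answer the preflight with 204 No Content.
        start_response("204 No Content", cors)
        # Fix 2 (alternative): keep 200 OK but declare the empty body:
        #   start_response("200 OK", cors + [("Content-Length", "0")])
        return [b""]
    body = b'{"ok": true}'
    start_response("200 OK", cors + [
        ("Content-Type", "application/json"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

make_server("", 8000, app).serve_forever()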

Chrome is not sending if-none-match

I'm making requests to my REST API. I have no problems with Firefox, but in Chrome I can't get caching to work: the server always returns 200 OK because no If-None-Match (or similar) header is sent with the request.
With Firefox I get 304 perfectly.
I think I'm missing something. I tried Cache-Control: max-age=10 to test, but nothing changed.
One reason Chrome may not send If-None-Match is when the response includes an "HTTP/1.0" instead of an "HTTP/1.1" status line. Some servers, such as Django's development server, send the older status line (probably because they do not support keep-alive), and when they do, ETags don't work in Chrome.
In the "Response Headers" section of Chrome's DevTools, click "view source" instead of the parsed version. The first line will probably read something like HTTP/1.1 200 OK; if it says HTTP/1.0 200 OK, Chrome seems to ignore any ETag header and won't use it on the next load of the resource.
There may be other reasons too (e.g. make sure your ETag header value is sent inside quotes: ETag: "abc123", not ETag: abc123), but in my case I eliminated all other variables and this was the one that mattered.
UPDATE: looking at your screenshots, it seems this is exactly the case (HTTP/1.0 server from Python) for you too!
Assuming you are using Django, put the following hack in your local settings file; otherwise you'll have to add an actual HTTP/1.1 proxy between you and the ./manage.py runserver daemon. This workaround monkey-patches the key WSGI class used internally by Django to make it send a more useful status line:
# HACK: without HTTP/1.1, Chrome ignores certain cache headers during development!
# see https://stackoverflow.com/a/28033770/179583 for a bit more discussion.
from wsgiref import simple_server
simple_server.ServerHandler.http_version = "1.1"
Also check that caching is not disabled in the browser, as is often done when developing a web site so you always see the latest content.
I had a similar problem in Chrome: I was using http://localhost:9000 for development, and no If-None-Match header was being sent.
By switching to http://127.0.0.1:9000, Chrome¹ automatically started sending the If-None-Match header in requests again.
Additionally, ensure DevTools > Network > Disable cache is unchecked.
¹ I can't find this documented anywhere; I'm assuming Chrome was responsible for this logic.
Chrome is not sending the appropriate headers (If-Modified-Since and If-None-Match) because the cache control is not set, forcing the default (which is what you're experiencing). Read more about the cache options here: https://developer.mozilla.org/en-US/docs/Web/API/Request/cache.
You can get the desired behaviour on the server by setting the Cache-Control: no-cache header, or on the browser/client through the Request.cache = 'no-cache' option.
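A minimal server-side sketch of that fix, assuming a plain WSGI app (the ETag value is a placeholder; note that no-cache means "revalidate every time", not "don't cache"):
from wsgiref.simple_server import make_server

def app(environ, start_response):
    etag = '"v1-abc123"'  # the quotes are part of a valid ETag value
    # Once the browser has the resource cached, it revalidates with
    # If-None-Match, which WSGI exposes as HTTP_IF_NONE_MATCH:
    if environ.get("HTTP_IF_NONE_MATCH") == etag:
        start_response("304 Not Modified", [("ETag", etag)])
        return [b""]
    body = b"hello"
    start_response("200 OK", [
        ("Cache-Control", "no-cache"),  # cache, but revalidate every time
        ("ETag", etag),
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

make_server("", 8000, app).serve_forever()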
Chrome was not sending the If-None-Match header for me either, and I didn't have any cache-control headers. I closed the browser, opened it again, and it started sending the If-None-Match header as expected. So restarting the browser is one more thing to check if you have this kind of problem.

CSRF using CORS

I'm studying HTML5's security problems. I saw all the presentations made by Shreeraj Shah. I tried to simulate a basic CSRF attack with my own servers, using the withCredentials flag set to true (so that cookies are replayed) and setting Content-Type to text/plain in the request (to bypass the preflight call).
When I tried to launch the attack, the browser told me that the XMLHttpRequest could not be completed because of the Access-Control-Allow-Origin header. So I put a * in the header of the victim's web page, and the browser then told me that I can't use the * character when I send a request with withCredentials set to true.
I tried the same thing with web apps hosted on the same domain, and everything worked (I suppose because the browser doesn't perform these checks when the request comes from the same domain).
So I'm asking: is this a new feature that modern browsers introduced recently to avoid this kind of problem?
Because in Shreeraj's videos, the request went across different domains and it worked...
Thank you all, and sorry for my English :-)
EDIT:
I think I found the reason why the CSRF attack doesn't work as in Shreeraj's presentations.
I read the earlier CORS document, published in 2010, and found that it made no recommendation about the credentials flag being set to true while Access-Control-Allow-Origin is set to *. But if we look at the last two publications about CORS (2012 and 2013), section 6.1 notes that we can't make a request with the credentials flag set to true if Access-Control-Allow-Origin is set to *.
Here are the links:
The previous one (2010): http://www.w3.org/TR/2010/WD-cors-20100727/
The last two (2012, 2013): http://www.w3.org/TR/2012/WD-cors-20120403/ --- http://www.w3.org/TR/cors/
Here is the section I'm talking about: http://www.w3.org/TR/cors/#supports-credentials
If we look at the earlier document, we cannot find this note, because it isn't there.
I think this is the reason why the simple CSRF attack Shreeraj Shah demonstrated in 2012 doesn't work today (in modern browsers that follow the W3C's recommendations, of course). Could that be it?
The request will still be made despite the browser error (if there's no pre-flight).
The Access-Control-Allow-Origin header simply allows access to the response from a different domain; it does not affect the actual HTTP request.
E.g. it would still be possible for evil.com to make a POST request to example.com/transferMoney using AJAX, even though example.com sets no CORS headers.
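One way to convince yourself of this is to log requests on the target server. A minimal sketch (hypothetical endpoint; run it locally and POST to it from a page on another origin): the browser console shows a CORS error, but the log still records the request.
import http.server

class Handler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # The POST arrives and is processed server-side even though the
        # attacking page never gets to read the response:
        print("received POST from", self.client_address, "-", body)
        self.send_response(200)
        self.end_headers()

http.server.HTTPServer(("", 8000), Handler).serve_forever()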

Facebook login issue with only Chrome while using DotNetOpenAuth 2.0 in mvc 3 application

In my MVC 3 application I have used DotNetOpenAuth for all providers, and everything works fine in all browsers except Chrome. Only occasionally, when I click the Facebook icon to log in, I get the error message below.
{
  "error": {
    "message": "Invalid redirect_uri: Given URL is not allowed by the Application configuration.",
    "type": "OAuthException",
    "code": 191
  }
}
I'm facing this issue on a few computers, not all of them. Please help me resolve it.
I doubt it's actually a browser issue. It's more likely a subtle difference in the URL to your web site between your different browser windows. Look for capitalization differences, HTTP vs. HTTPS, trailing slashes, etc. The URL used in your redirect_uri must be exactly as it appears in your app's Facebook registration page (within the boundaries set in the spec, which generally allows adding query string parameters, IIRC).
If your site can be visited via multiple URLs (HTTP vs. HTTPS, different host names, etc.), you must take care either to redirect the user to a normalized URL before beginning the OAuth flow, or to explicitly supply a normalized redirect_uri parameter value to DotNetOpenAuth so that the library doesn't pick up the request URL by default.
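To illustrate the "normalize first" idea, here is a sketch in Python for brevity; with DotNetOpenAuth you would pass the equivalent value as the explicit redirect_uri, and the scheme/host below are placeholders for whatever is registered with Facebook:
from urllib.parse import urlsplit, urlunsplit

def normalized_redirect_uri(request_url):
    parts = urlsplit(request_url)
    # Pin the scheme and host to exactly what the Facebook app
    # registration lists, keeping the incoming path and query:
    return urlunsplit(("https", "www.example.com", parts.path, parts.query, ""))

print(normalized_redirect_uri("http://EXAMPLE.com/oauth/callback?state=abc"))
# -> https://www.example.com/oauth/callback?state=abc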

how to get around "Content-encoding gzip deflate" header sent by Chrome?

We have a simple HTML login form on our embedded device's web server. The web server is custom coded because of severe memory limitations. Regardless of these limitations, we like Chrome and would like to support it.
All browsers POST an HTTP request to our login form containing the expected "username=myname&password=mypass" string, but not Chrome. Instead, we receive from Chrome a "Content-encoding gzip deflate" request. By "all browsers", I mean this was tested to work fine on Internet Explorer 9 beta, 8, 7, and 6; Firefox 4 beta, 3, and 2; Opera 10 and 9; Safari 5, 4, and 3; and SeaMonkey 2.
Referring to section "14.2 Accept-Charset" of the W3C's http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html, we tried sending back an HTTP 406 code to indicate that this server does not support that encoding, in the hope that Chrome would try again and post the expected strings the standard way. The 406 code returned by the web server is clearly displayed in Chrome's "Inspect Element" window, but Chrome seems to treat it as an error, and no further requests are sent to the web server: "Login failed." We also tried HTTP return codes 405 and 200, with the same result.
Is there a way to get around this behavior, either with client-side JavaScript that prevents Chrome from sending the "Content-encoding gzip deflate" request, or with a server-side response that nicely explains to Chrome that we don't do gzip and asks it to send the data the regular way?
We tried posting to the Google Chrome Troubleshooting forum with no response.
Any help would be greatly appreciated!
Best regards,
Bert
You're looking in the wrong section for the error code: section 14.11 of RFC 2616 specifies that you send a 415 (Unsupported Media Type) if you can't deal with the Content-Encoding.
It sounds like when Chrome does its first POST to a server, it defaults to using a gzip encoding. Pretty strange.
The easy way out is to just place your username/password as GET parameters; as long as you don't send gzip content encoding in the response, Chrome should start using non-gzipped POSTs from that point on. Hope that works?
I tested this out a bit with a simple Python script that printed to stdout. I thought I was getting the same problem, but then I realized I was just forgetting to flush stdout. It seems that Chrome always sends the request up to the end of the headers before sending the request content, so you have to use a second recv call to get the POST data. In contrast, the entire Firefox request arrives in a single recv call.
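A rough reconstruction of that kind of test, assuming a blocking socket on port 8080 (not production code: a real server must parse Content-Length and loop on recv until the body is complete):
import socket

srv = socket.socket()
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", 8080))
srv.listen(1)
conn, _ = srv.accept()

# With Chrome, the first recv typically returns only the request line
# and headers...
first = conn.recv(4096)
print(first.decode("latin-1"), flush=True)  # remember to flush stdout!

# ...and the POST body arrives in a second recv. Firefox often delivers
# the whole request in one call, which masks the difference.
if first.startswith(b"POST") and first.endswith(b"\r\n\r\n"):
    body = conn.recv(4096)
    print("POST data:", body, flush=True)

conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
conn.close()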