Web app 302 redirecting HTTP requests in Chrome 90 from embedded iframe

We recently started having issues with a web app used internally at our organization. Most users access the web app with Chrome, and the issue seems to coincide with the release of Chrome 90. The web app has been in place for a couple of years and worked with previous versions of Chrome without issue in this regard.
The web app uses an embedded iframe from a 3rd-party vendor. The vendor app does an HTTP GET to a URL within our web app to indicate success or failure; we then close the iframe and update our app accordingly. This worked fine until recently. Now the HTTP GET from the vendor iframe is being 302 redirected to our login page.
[Screenshot: example of the 302 redirect]
Prior to this, and when using MS Edge as the browser, the same HTTP GET gets a 200 response and our web app works as expected.
[Screenshot: example of the HTTP 200 response]
Since other browsers continue to work and there have been no significant changes to the web server, web app, or network access, we suspect something has changed in the latest version of Chrome, perhaps stricter security requirements. Why the 302 redirect? Does this have something to do with our SameSite cookie config? (Up to this point, we have done nothing specific with regard to SameSite.)

We found that with the latest updates to Chrome, we had to set the ASP.Net Session cookie headers to include "SameSite=None; Secure".
This article provided the answer: https://web.dev/samesite-cookie-recipes/
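For anyone hitting the same thing: the session cookie the server sends needs to end up looking roughly like the header below (the default ASP.NET session cookie name and a placeholder value are assumed), since Chrome now rejects SameSite=None cookies that are not also marked Secure:
Set-Cookie: ASP.NET_SessionId=abc123; path=/; HttpOnly; SameSite=None; Secure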

Related

Mixed-content warning from Chrome 87 when accessing HTTP image source from an HTTPS page

We have an in-house (.Net) application that runs on our corporate desktops. It runs a small web server listening for HTTP requests on a specific port on localhost. We have a separate HTTPS website that communicates with this application by setting the ImageUrl of a hidden image to a URL on that local server - this triggers an HTTP request to localhost, which the application picks up and acts on. For example, the site will set the URL of the image to:
http://127.0.0.1:5000/?command=dostuff
This was done to work around any kind of "mixed content" messages from the site, as images seemed to be exempt from mixed-content rules. A bit of a hack, but it worked well.
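In client-side terms the trick amounts to something like the sketch below (the element id is made up for illustration; on the real site the src is set via the ASP.Net ImageUrl property):
// Assigning src fires a GET to the local agent; the image response itself is ignored.
var beacon = document.getElementById('hiddenCommandImage'); // assumed element id
beacon.src = 'http://127.0.0.1:5000/?command=dostuff';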
I'd seen that Chrome was making moves towards completely blocking mixed content on pages, and sure enough Chrome 87 (currently on the beta channel) now shows these warnings in the Console:
Mixed Content: The page at 'https://oursite.company.com/' was loaded
over HTTPS, but requested an insecure element
'http://127.0.0.1:5000/?command=dostuff'. This request was
automatically upgraded to HTTPS, For more information see
https://blog.chromium.org/2019/10/no-more-mixed-messages-about-https.html
However, despite the warning saying the request is being automatically upgraded, it hasn't been - the application still gets a plain HTTP request and continues to work normally.
I can't find any clear guidance on whether this warning is a "soft fail", and whether future versions of Chrome will enforce the auto-upgrade to HTTPS (which would break things). We have plans to replace the application in the longer term, but I'd like to be ahead of anything that will suddenly stop the application from working before then.
Will using HTTP to localhost for images and other mixed content, as used in the scenario above, be an actual issue in the future?
This answer will focus on your main question: Will using HTTP to localhost for images and other mixed content, as used in the scenario above, be an actual issue in the future?
The answer is yes.
The blog post you linked to says:
Update (April 6, 2020): Mixed image autoupgrading was originally scheduled for Chrome 81, but will be delayed until at least Chrome 84. Check the Chrome Platform Status entry for the latest information about when mixed images will be autoupgraded and blocked if they fail to load over https://.
That status entry says:
In developer trial (Behind a flag) (tracking bug) in:
Chrome for desktop release 86
Chrome for Android release 86
Android WebView release 86
…
Last updated on 2020-11-03
So this feature has been delayed, but it is coming.
Going through your question and all the comments - and putting myself in your shoes - I would do the following:
Don't touch either the currently working .Net app/localhost server (HTTP) or the user-facing (HTTPS) front-end.
Write a simple/cheap cloud function (GCP Cloud Function or AWS Lambda) to completely abstract your .Net app away from the front-end. Your current HTTPS app would only call the cloud function (HTTPS to HTTPS - no more praying that Google will not shut down mixed traffic, which will happen eventually, although nobody knows when).
The cloud function would simply copy the image/data coming from the (insecure) .Net app to temporary cloud storage and then serve it straight back over HTTPS to your client side, as sketched below.
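A minimal sketch of what such a relay could look like as a Node.js GCP Cloud Function - the bucket name, routes, and file layout are assumptions made purely for illustration, not part of your current setup:
const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
const BUCKET_NAME = 'my-app-relay-bucket'; // assumed bucket name

// HTTP-triggered function: POST stores a payload, GET serves the latest one back over HTTPS.
exports.relay = async (req, res) => {
  if (req.method === 'POST') {
    // The desktop-side agent pushes the image/data here instead of being polled over http://localhost.
    const file = storage.bucket(BUCKET_NAME).file(`payloads/${Date.now()}.bin`);
    await file.save(req.rawBody, { metadata: { contentType: req.get('content-type') || 'application/octet-stream' } });
    res.status(201).json({ stored: file.name });
  } else {
    // The HTTPS front-end fetches the most recent payload from here (HTTPS to HTTPS).
    const [files] = await storage.bucket(BUCKET_NAME).getFiles({ prefix: 'payloads/' });
    if (files.length === 0) {
      res.status(404).send('nothing stored yet');
      return;
    }
    const latest = files[files.length - 1];
    const [contents] = await latest.download();
    res.set('Content-Type', latest.metadata.contentType || 'application/octet-stream');
    res.status(200).send(contents);
  }
};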

Chrome 80 handling HTTP 302 redirect and dropping query string

I am trying to troubleshoot an SPA that does authentication through Auth0 by first redirecting to their site to provide the credentials, then to a /callback path on their site with the token included in the URL query string, and finally to a /callback path on my app's static-file site. Just yesterday I noticed in the DevTools Network tab that Chrome gets the 302 with the location header set to my app's /callback path AND including the query string, but when Chrome performs the redirection it drops the query string. This behavior seems to have started recently; it was intermittent yesterday but is consistent today. Since the 302 redirect goes from auth0.com to my app's domain, is this some new cross-site security feature?
Here is the request trail:
POST to https://my-app.auth0.com/login/callback with the token, etc. in the body.
Response is HTTP 302 with a location header like: https://myapp.com/callback#access_token=...
GET to https://myapp.com/callback <-- no query string!
I'm using Chrome Version 80.0.3987.149 (Official Build) (64-bit) on Ubuntu Linux in incognito mode. Any help appreciated!
The #access_token=... part of the URL is not a query string but a URL fragment. The browser never sends the fragment to the server - which is why it doesn't appear on the GET request - but it preserves it across the redirect and applies it to the returned page, so you can read it from JavaScript on that page. This is the standard approach that Auth0 uses for single-page apps.
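A minimal sketch of reading the token on the /callback page, assuming the fragment looks like the one above (the Auth0 SPA libraries typically do this parsing for you):
// Runs on https://myapp.com/callback; the fragment never reaches the server.
var params = new URLSearchParams(window.location.hash.slice(1)); // drop the leading '#'
var accessToken = params.get('access_token');
console.log('access token from fragment:', accessToken);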

Preflight CORS request not working in Chrome 60

I am having a small issue whereby Chrome (Version 60.0.3112.113, Mac OS) is returning a failed status response from a CORS preflight OPTIONS request.
The endpoint it is querying is a Node.js server which previously did not respond correctly to the preflight request; I have since fixed this.
The preflight request works in all other browsers, and works in Chrome on all other computers. I have tested using Browserling, and everything works as expected.
As such, I am assuming (with 99% confidence) that this is some sort of caching issue with Chrome on my development computer. However, I have been unable to resolve it, and at this point I have tried clearing every cache option I can find in the various Chrome settings menus.
Can anyone share any insight?
I could not find a way of clearing whatever internal cache Chrome is using in this regard.
My resolution was simply to append a query string (based on the build time) to the request so that Chrome does not use this internal cache.
This is a good way of versioning resources (JS, CSS, API endpoints, etc.) anyway.
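A sketch of the idea, assuming a build-time constant is injected into the bundle (the endpoint, header, and BUILD_TIME value are placeholders):
// Changing the URL (?v=...) means any cached preflight result for the old URL is not reused.
var BUILD_TIME = '2017-09-01T12:00:00Z'; // injected at build time in the real setup
fetch('https://api.example.com/data?v=' + encodeURIComponent(BUILD_TIME), {
  // A non-safelisted header keeps the request non-simple, so the OPTIONS preflight still happens.
  headers: { 'X-Requested-With': 'XMLHttpRequest' }
})
  .then(function (res) { return res.json(); })
  .then(function (data) { console.log(data); });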

Since v38, Chrome extension cannot load from HTTP URLs anymore, workaround?

The users of our website run our Chrome plugin which, amongst other things, performs cross-origin requests via XMLHttpRequest as described on the Chrome extension development pages. This has been running just fine for a few years now. However, ever since our users upgraded to the latest version of Chrome (v38), these requests have failed. Our site runs on HTTPS and some of the URLs loaded via our content script are on HTTP. The message is:
[blocked] The page at 'https://www.ourpage.com/' was loaded over
HTTPS, but ran insecure content from 'http://www.externalpage.com':
this content should also be loaded over HTTPS.
The reported line where the error occurred is in the content script where I'm issuing the HTTP call:
xhr.send(null);
I have no control over the external page and I would rather not remove SSL from our own page. Question: Is this a bug or is there a workaround that I am not aware of?
(Note: The permissions in the manifest were always set to <all_urls> which had worked for a long time. Setting it to http://*/ and https://*/ did not help.)
If possible, use the HTTPS version of that external page.
If that is not possible, route the AJAX request through the background page (example), as in the sketch below.
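A hedged sketch of that pattern for a manifest v2 extension of that era - the message type and file names are assumptions for illustration:
// content-script.js - ask the background page to fetch the HTTP URL on our behalf.
chrome.runtime.sendMessage(
  { type: 'fetchExternal', url: 'http://www.externalpage.com/data' },
  function (response) {
    if (response && response.ok) {
      console.log('got', response.body);
    }
  }
);

// background.js - extension pages are not subject to the page's mixed-content blocking,
// and the <all_urls> host permission allows the plain-HTTP request here.
chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) {
  if (msg.type !== 'fetchExternal') return;
  var xhr = new XMLHttpRequest();
  xhr.open('GET', msg.url);
  xhr.onload = function () { sendResponse({ ok: true, body: xhr.responseText }); };
  xhr.onerror = function () { sendResponse({ ok: false }); };
  xhr.send(null);
  return true; // keep the message channel open for the async sendResponse
});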

HTML5 App Cache fails with Firefox 11 - works with Chromium

I have successfully tested HTML5 Application Cache under Chromium. For instance:
CACHE MANIFEST
http://localhost/pycoh-mnt/materialRequisition/create
The above URL renders an HTML5 file. When I protect it with cookie-based authentication, Firefox 11 fails; I get an error whose description I could not find, but I think it is due to an HTTP redirect response. If I make the URL public, it caches it correctly.
On the other hand, Chromium 18 handles the caching properly in both cases. I suspect Firefox is not sending the cookie information when it issues the caching request.
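For reference, a minimal sketch of the listeners one can hang off window.applicationCache to see which step of the manifest fetch fails (the event names come from the App Cache API of that era):
// 'error' fires, for example, when the manifest request fails or is answered with a redirect.
window.applicationCache.addEventListener('error', function (e) {
  console.log('AppCache error (manifest or entry fetch failed)', e);
});
window.applicationCache.addEventListener('cached', function () {
  console.log('All CACHE MANIFEST entries stored');
});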
Any idea? Thank you!
P.S. I forgot to say I'm running 64-bit apps.
Check whether third-party cookies are disabled in FF. There is currently a bug in FF that prevents cookies from being sent with the manifest request when third-party cookies are disabled:
http://bugzilla.mozilla.org/show_bug.cgi?id=722683