Checking out a website I built a while ago, I noticed a new bug:
A couple of my webfonts won't load. But only in Chrome. Other browsers manage to load the font just fine.
Looking at the Network panel in devtools gives me some interesting info:
In the status column of the network report, I see "(failed) net::ERR_FAILED".
But if I look at the HTTP conversation, I can see that the font is requested and the server responds with an HTTP status of 200.
Why is Chrome telling me that the font was both served successfully and failed with a network error?
We are seeing an issue with our pre-production and production homegrown web app: after 15-30 seconds, the session appears to time out when making the preflight API call in the Chrome browser. Edge v101.0.1210.53 appears to be working. We don't see any other errors in the Chrome inspect console.
We first saw this behavior on May 25, 2022, right around when Chrome 102 was pushed to browsers. If we use inspect and turn off the cache, we still see issues with loading the app.
Testing Chrome 101.0.4951.61 against the same web app on my virtual machine does not reproduce the issue. After the browser updated on the virtual machine, the error was reproduced.
Are there any new features in 102 we could turn off to see if a specific new security check might be at play?
This appears to be a bug in Chrome: https://bugs.chromium.org/p/chromium/issues/detail?id=1329248
The workaround is to disable this flag: chrome://flags/#private-network-access-send-preflights
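For background, that flag controls Chrome's new Private Network Access preflights: before a public page may call a private-network address, Chrome sends an OPTIONS request carrying an extra request header and expects a matching response header. A sketch of the exchange (host names illustrative):

```
OPTIONS /api HTTP/1.1
Host: intranet.example.com
Origin: https://app.example.com
Access-Control-Request-Method: GET
Access-Control-Request-Private-Network: true

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: GET
Access-Control-Allow-Private-Network: true
```

If the server can be updated to answer the preflight like this, the flag should not need to be disabled on every client.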
My copy of Google Chrome, on my laptop, fails to register every service worker. If I load any website that has offline functionality, the devtools console outputs:
Uncaught (in promise) TypeError: Failed to register a ServiceWorker for scope ('https://googlechrome.github.io/samples/service-worker/basic/') with script ('https://googlechrome.github.io/samples/service-worker/basic/service-worker.js'): An unknown error occurred when fetching the script.
Note that the above error is from https://googlechrome.github.io/samples/service-worker/basic/, a technology demonstration made by Google, specifically about service workers, specifically for Chrome, not something I created. One other interesting thing is that service workers fetched from localhost can be registered with no problem. This would suggest an SSL issue (I think), but then again the host of the above website is GitHub Pages (is there any way my browser could be failing to trust GitHub Pages? I haven't found any evidence to support that).
I get the same error if I directly type navigator.serviceWorker.register("https://googlechrome.github.io/samples/service-worker/basic/service-worker.js"); into the console on the same site.
It's not just a console message: if I load the website, then disconnect from the internet, then refresh the page, the browser reports "no internet" and does not load the website. The same thing happens if I use devtools>Application>Service Workers offline mode. This indicates that the service worker is failing to register. However, the Service Workers devtools tab does show service workers present – it just lists them as "redundant" instead of "active and running". Bizarrely, the "time received" is 1970-01-01.
I'm using Google Chrome Version 89.0.4389.114 (Official Build) (x86_64) on macOS Big Sur. On other browsers (Safari, Firefox, and any browser on any other computer/mobile device), this error does not occur. I have restarted Chrome multiple times since I first noticed the error, and updated it once since then. Neither fixed it.
I'm aware of a couple of similar SO questions (here and here, for example), but all of them have accepted answers about how the website creators can fix bugs in their site. This bug appears on websites I did not create, and seems specific to my browser (which I haven't tampered with in any way). The main reason this is driving me nuts is that I really, really like Chrome's devtools, and I would like to use them for a current PWA project.
If anyone familiar with Chrome's inner workings knows what could be causing this, or if anyone has solved this in the past, I will be forever in your debt.
We have an in-house (.Net) application that runs on our corporate desktops. It runs a small web server listening for HTTP requests on a specific port on localhost. We have a separate HTTPS website that communicates with this application by setting the ImageUrl of a hidden image to a URL on that local server - this triggers an HTTP request to localhost, which the application picks up and acts on. For example, the site will set the URL of the image to:
http://127.0.0.1:5000/?command=dostuff
This was to work around any kind of "mixed content" messages from the site, as images seemed to be exempt from mixed-content rules. A bit of a hack but it worked well.
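The trick above can be sketched in a few lines of client-side JavaScript (buildCommandUrl is a hypothetical helper; the original site sets the URL server-side via the ASP.NET ImageUrl property):

```javascript
// Build the command URL for the local app (port and parameter as in the post).
function buildCommandUrl(command) {
  const url = new URL("http://127.0.0.1:5000/");
  url.searchParams.set("command", command);
  return url.toString();
}

// In the browser, assigning the URL to a hidden image's src fires the request,
// sidestepping the usual mixed-content blocking for passive content:
//   document.getElementById("cmdImg").src = buildCommandUrl("dostuff");
```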
I'd seen that Chrome was making moves towards completely blocking mixed content on pages, and sure enough Chrome 87 (currently on the beta channel) now shows these warnings in the Console:
Mixed Content: The page at 'https://oursite.company.com/' was loaded
over HTTPS, but requested an insecure element
'http://127.0.0.1:5000/?command=dostuff'. This request was
automatically upgraded to HTTPS, For more information see
https://blog.chromium.org/2019/10/no-more-mixed-messages-about-https.html
However, despite the warning saying the request is being automatically upgraded, it hasn't been - the application still gets a plain HTTP request and continues to work normally.
I can't find any clear guidance on whether this warning is a "soft fail", and whether future versions of Chrome will enforce the auto-upgrade to HTTPS (which would break things). We have plans to replace the application in the longer term, but I'd like to be ahead of anything that will suddenly stop the application from working before then.
Will using HTTP to localhost for images and other mixed content, as used in the scenario above, be an actual issue in the future?
This answer will focus on your main question: Will using HTTP to localhost for images and other mixed content, as used in the scenario above, be an actual issue in the future?
The answer is yes.
The blog post you linked to says:
Update (April 6, 2020): Mixed image autoupgrading was originally scheduled for Chrome 81, but will be delayed until at least Chrome 84. Check the Chrome Platform Status entry for the latest information about when mixed images will be autoupgraded and blocked if they fail to load over https://.
That status entry says:
In developer trial (Behind a flag) (tracking bug) in:
Chrome for desktop release 86
Chrome for Android release 86
Android WebView release 86
…
Last updated on 2020-11-03
So this feature has been delayed, but it is coming.
Going through your question and all the comments - and putting myself in your shoes - I would do the following:
Leave both the currently working .Net app/localhost server (HTTP) and the user-facing (HTTPS) front-end untouched.
Write a simple/cheap cloud function (GCP Cloud Function or AWS Lambda) to completely abstract your .Net app away from the front-end. Your current HTTPS app would only call the cloud function (HTTPS to HTTPS - no more praying that Google will not shut down mixed traffic, which will happen eventually, although nobody knows when).
The cloud function would simply copy the image/data coming from the (insecure) .Net app to temporary cloud storage and then serve it straight away over HTTPS to your client side.
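A minimal sketch of such a proxy, assuming an AWS-Lambda-style handler and Node 18+ (the command whitelist and internal host name are illustrative, not from the original post, and this assumes the function can actually reach the .Net app over the network):

```javascript
// Commands the internal .Net app is known to understand (illustrative).
const ALLOWED_COMMANDS = new Set(["dostuff"]);

// Validate before proxying so the function cannot be used to probe the app.
function isAllowedCommand(command) {
  return ALLOWED_COMMANDS.has(command);
}

// HTTPS-facing handler: the browser talks HTTPS to this function, and only
// the function makes the plain-HTTP call to the internal app.
async function handler(event) {
  const command = (event.queryStringParameters || {}).command;
  if (!isAllowedCommand(command)) {
    return { statusCode: 400, body: "unknown command" };
  }
  const res = await fetch("http://internal-app.example:5000/?command=" + command);
  return { statusCode: res.status, body: await res.text() };
}
```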
I am building an interactive information kiosk for public use, making it a web app in Chrome. It needs to work offline. My solution is using the browser's application cache. Problem: Chrome loses all content in the application cache on a crash/power failure. Is this normal behaviour?! Is there a way to keep the content in the event of system failure?
Edit 2012-12-19:
The content of the appcache is not lost; it is still there. But Chrome loses the reference to the data. If I start Chrome with the "--restore-last-session" flag, it reconnects to the cached data. This is not a "pretty" solution though...
Clarification: The real problem is when Windows and Chrome start after a power failure and the network connection is absent. Chrome acts as if there is nothing in the appcache and you get the "No network connection" error. I can't see why it works this way.
I've been developing a web app that uses the offline cache, partly as a way to reduce the number of calls made to the server while in use.
I was hoping to have the login page load and cache all the resources, so that the pages behind the login would not have to.
What I'm noticing from the server logs is that although all the resources (images, stylesheets, javascript files) in the manifest are requested when the login page loads, after the user has logged in and been redirected to, say, /workspace/, Safari (both desktop and mobile) seems to request the stylesheets and javascript files listed in /workspace/ again, resulting in an HTTP 304 from the server.
While the load in serving a 304 is minimal, I'd like to know if there is a way to avoid those. I tested the same code in Chrome (dev channel), and Chrome only requests the cache manifest again after login, and that's it.
Would appreciate any thoughts! Thanks in advance!
I have noticed in my offline app that the host page (the one with the manifest attribute in it) must be in the manifest file as well (only on iPhone iOS since 4.3); this is to support startup in airplane/offline mode.
Perhaps this has something to do with your problem as well.
I had a problem with the offline mode in iOS 4.3 (read this for more insight into the 4.3 issue: http://www.theregister.co.uk/2011/03/15/apple_ios_throttles_web_apps_on_home_screen/); however, when I updated to 4.3.2 it worked again.
I have found an interesting situation with iOS 4.3.3. I have an HTML5 offline app that worked in iOS 4.2 on the iPad, but after I updated my iPad to iOS 4.3.1, it could no longer run in offline mode from the Home Screen. However, when I saw "user593037" say that it was working on iOS 4.3.2, I updated my iPad again, and today it's at iOS 4.3.3.
Initially my offline app still did not work offline. So I went back to the most basic offline web page, and when I used "cache.manifest" as the manifest file name, it worked. So it seems that on iOS 4.3.3 offline caching will only work if that is the file name used for the cache manifest. I even tried a file name of cache2.manifest and it failed to run offline.
And you can also run it full screen with the "apple-mobile-web-app-capable" set to "yes".
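For reference, a minimal setup along the lines being tested here (file names and contents illustrative). The manifest must be served with the text/cache-manifest MIME type and, per the finding above, named cache.manifest on iOS 4.3.3:

```
CACHE MANIFEST
# v1 2011-06-20
index.html
style.css
app.js
```

It is then referenced from the host page, which is itself listed in the manifest (as noted above for iOS):

```
<html manifest="cache.manifest">
```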