I've been working with AppCache for quite some time, but I recently took a couple of weeks to work on a different project. When I returned to my offline project, I started getting this error every time I tried to download the contents of my manifest:
Application Cache Error event: Manifest fetch failed (9)
This is followed by two addresses: the file and line number of the page that calls the manifest (to the right on the same line), and the relative URL of the manifest itself (on a second line). The download of the individual resources never begins.
Now, other folks work on this project, but I'm the only one who touches anything that so much as smells of offline. The issue doesn't appear to be related to any of my usual suspects, like a syntax error in the manifest (I tried clearing out all the files, just to be sure), the manifest being served incorrectly, or something wrong with one of the files being cached. I don't think it's a storage problem, as I have over 30 gigs of free space beyond the size of the files I'm caching. Furthermore, this worked two weeks ago, so I'm assuming there isn't something wrong with my setup. However, nobody seems to know what the hell this error is; nobody even seems to be getting it, and I can't find anything online that describes it. Hence, my question is:
What does "Manifest fetch failed (9)" mean?
My browser is Chrome on Windows 7, and is up to date.
GAH. Ok so I figured out the problem, or at least I figured out a solution. 9 might indicate a certificate error, which is what I was experiencing. Lovely, just... lovely.
(9) means that there is a security error. Cache manifests served over connections with invalid certificates would allow a man-in-the-middle attack, as explained in the Chromium issue that disallowed this. If you still want to use a cache manifest with an invalid certificate for testing purposes, you can pass --ignore-certificate-errors to Chrome on launch.
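A minimal sketch of launching Chrome that way from a script, for local testing only; the chrome.exe path and the URL below are assumptions for a default Windows install and should be adjusted for your machine:
# Minimal sketch: launch Chrome with --ignore-certificate-errors for local testing.
# CHROME and URL are assumptions; adjust them for your machine. Do not browse
# normally with this flag enabled, since it disables certificate validation.
import subprocess

CHROME = r"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"  # assumed install path
URL = "https://localhost/offline/index.html"  # assumed test URL

subprocess.Popen([CHROME, "--ignore-certificate-errors", URL])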
Since yesterday afternoon, I have not been able to use Folium to generate maps; even the most basic display call fails to complete. My network connection is fine, so it is likely a problem with loading the JavaScript dependencies.
I tried switching networks and even changing computers, but it didn't help.
Failed to load resource: net::ERR_CERT_DATE_INVALID
leaflet.awesome-markers.js:17 Uncaught ReferenceError: L is not defined
at leaflet.awesome-markers.js:17
at leaflet.awesome-markers.js:122
leaflet.css:1 Failed to load resource: net::ERR_CERT_DATE_INVALID
map.html:39 Uncaught ReferenceError: L is not defined
at map.html:39
# This is the code for the most basic function I've tried.
import folium

m = folium.Map(location=[29.488869, 106.571034],
               zoom_start=16,
               control_scale=True,
               width='50%')
m.save('map.html')
I expect this to generate a map page.
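For reference, the external resources the saved map.html pulls in can be listed with a short scan; a minimal sketch, assuming the map.html produced by the code above, showing which CDN-hosted files (leaflet.js, leaflet.css, ...) the browser is actually fetching:
# Minimal sketch: list the external URLs that folium wrote into map.html.
# These are the CDN resources whose certificates the browser is rejecting
# with ERR_CERT_DATE_INVALID.
import re

with open("map.html", encoding="utf-8") as f:
    html = f.read()

for url in sorted(set(re.findall(r'https://[^"\']+', html))):
    print(url)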
I don't think there is any problem in your Jupyter notebook / Python IDLE. Check your browser. If you've determined that the ERR_CERT_DATE_INVALID error is caused by an issue on your computer, try these steps to resolve it:
First things first: check the date and time set on your computer. If these are wrong, that probably explains how you got the ERR_CERT_DATE_INVALID error.
Sometimes fixing this error is as simple as shutting down your browser and restarting it. Other times a system reboot may work. However, there are a couple of instances where you'll need to do a little more work to set things straight.
Check your connection. If you're connected to public WiFi or some other public network, there's a chance that your browser is right and you don't actually have a secure connection. If that's the case, stop browsing and resume when you're on a more secure setup.
Scan your computer with trusted antivirus software; you may have malware of some sort that is causing the issue. Unfortunately we can't provide info on how to fix every last piece of malware, but if your antivirus can't, someone on the internet probably knows how.
Disable any third-party plugins you have running in Chrome. Sometimes these can cause unwanted problems.
Clear your browser cache in Chrome. Click the menu icon, open History and select "Clear browsing data."
Delete and then re-install Chrome. Sometimes this helps.
There are a few other crazy fixes, like bringing down your firewall or modifying network settings. However, I would not suggest these, since your PC then becomes susceptible to malware. If you want to confirm whether the certificate itself has actually expired (rather than your clock being wrong), a small check is sketched below.
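A minimal sketch of that check, assuming the failing resources come from cdn.jsdelivr.net (use whatever host the DevTools errors actually point at). It attempts a normal TLS handshake and prints either the certificate's expiry date or the verification error alongside your local clock, which helps separate a genuinely expired certificate from a wrong system time:
# Minimal diagnostic sketch: handshake with the CDN host that serves the
# Leaflet files and report either the certificate's expiry or the verification
# failure plus the local clock. HOST is an assumption; replace it with the host
# shown in the failing requests in DevTools.
import socket
import ssl
from datetime import datetime, timezone

HOST = "cdn.jsdelivr.net"  # assumed CDN host
PORT = 443

context = ssl.create_default_context()
try:
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            expires = datetime.fromtimestamp(
                ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
            )
            print("Certificate verified OK; it expires on", expires)
except ssl.SSLCertVerificationError as exc:
    print("Certificate failed verification:", exc.verify_message)
    print("Local UTC time is:", datetime.now(timezone.utc))
    print("If the local time is wrong, fix the system clock before anything else.")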
I have a site that is served both via HTTPS and HTTP.
The issue only happens when accessing it via HTTPS. The steps are:
Open Chrome and load this single site over HTTPS for the first time; Chrome stalls this initial connection for about 1 second.
However, any immediate subsequent refresh of the same page stalls for only zero to a couple of milliseconds, which can be ignored.
Put the page aside (don't interact with it for a short while, maybe a few minutes), then come back and refresh it; it repeats from step 1 (a stall of about 1 second, followed by almost zero stalling on any immediate refresh).
This only happens when accessing the site over HTTPS; there is no such issue over HTTP (always almost zero stalling).
The issue is only seen in Chrome, not in Safari or Firefox (there is almost no stalled time in those), all tested on macOS.
Would anyone help give me some ideas, please? Why does the first load introduce about 1 second of stalling, and how can I reduce that stalling time?
Sorry, this is really hard to explain; see the attached screenshot of the issue.
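One way to narrow this down from outside the browser is to time the TCP connect and TLS handshake separately; if both are fast, the stall is happening inside Chrome rather than in the server's TLS setup. A minimal sketch, assuming the site is reachable as example.com (replace with the real host):
# Minimal timing sketch (assumed host; replace with your own site): measure TCP
# connect and TLS handshake times separately. If both are fast here but Chrome
# still stalls ~1 second, the delay is inside the browser (for example in its
# certificate checks), not in the server's TLS setup.
import socket
import ssl
import time

HOST = "example.com"  # assumed host name; use your own site here
PORT = 443

context = ssl.create_default_context()
# For a self-signed test certificate you may need to relax verification:
# context.check_hostname = False
# context.verify_mode = ssl.CERT_NONE

t0 = time.perf_counter()
sock = socket.create_connection((HOST, PORT), timeout=10)
t1 = time.perf_counter()
tls = context.wrap_socket(sock, server_hostname=HOST)
t2 = time.perf_counter()
tls.close()

print(f"TCP connect:   {(t1 - t0) * 1000:.1f} ms")
print(f"TLS handshake: {(t2 - t1) * 1000:.1f} ms")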
I think I found the cause now. I was using a self-signed certificate for the HTTPS connection, and even though I added it to the browser's exception list to trust it, it looks like Chrome is stricter about this than the other browsers. After switching to a certificate signed by a trusted CA, the Chrome stalled time dropped to just a few milliseconds for all requests. I'm happy to close this question now.
Saw this error a few times today in Chrome's developer tools, and I'm trying to figure out what it means and what we can do to avoid it.
"Failed to load resource: net::ERR_CERT_DATABASE_CHANGED"
This was causing some image URLs to fail to load in our testing. FWIW, I just checked the cert for the site in question: it was issued over a year ago and is valid until the end of 2016, so it doesn't look like anything changed server-side.
A Google search turns up pretty much nothing for this error message, so I'm hoping Stack Overflow will have more answers.
So the best I've been able to discover is this: https://chromium.googlesource.com/chromium/src/net/+/master/spdy/spdy_session_pool.cc and a few related Chromium tickets about this code. It would appear that when the system cert database changes (in my case, potentially just a crappy Puppet policy double-checking that only trusted certs are in our store), Chrome reacts by closing down all existing connections and returning network errors to any outstanding requests.
I had the same problem.
You may have Kaspersky Antivirus installed.
The Kaspersky software manipulates the system keychain frequently (sometimes many times a second), which causes Chrome to flush its connections because the system trust store has changed.
You can try the following:
Remove Kaspersky, or
Disable Kaspersky's web traffic interception.
I also found a page with a description of the bug: https://bugs.chromium.org/p/chromium/issues/detail?id=925779#c29.
My browser extension is crashing occasionally. The problem is, I cannot find a good, comprehensive list of things that can cause an extension to crash, and thus am having a hard time creating a checklist of things to work with.
My assumption is that anything that causes a standard Chrome tab to crash would cause the extension to crash when run in the Background.html file.
Off the top of my head, I'm assuming the following could cause problems...
Infinite loops or other instances of a script becoming unresponsive
Uncaught exceptions (e.g., a JSON.parse with no try/catch)
Database storage errors
Excessive resource usage (??)
That's really all I can think of. I'm having a heck of a time trying to debug my extension and would really appreciate any help creating a checklist...
I'm coming back to this question about 3 months after asking it because a 2nd extension of mine was also crashing. In this case, though, the extension was far simpler -- only about 40 lines of code in the background.js script.
Two operations seemed to be possible culprits: writing to localStorage and using console.log.
I have previously observed that it is possible to crash a normal Chrome tab by using console.log repeatedly with large objects on a website if you leave the page open for an extended period. Because background.js is always open, it seems like a likely culprit here.
tl;dr
Don't use console.log in production. Ever.
I'm encountering an issue that Selenium IDE seems not to record a specific event on a real webserver.
However, if I save the page (including all resources) via Firefox entirely to disk, open the saved file in the browser and try to record the same thing, Selenium IDE works correctly and records the event as expected.
I'm not sure what is causing this behavior. Maybe some race conditions exist inside Selenium IDE (latencies from a real webserver are higher than on a local file URL), or maybe it has something to do with URLs, but these are only quick guesses.
Does anybody have some suggestions/best practices for how to track down this kind of Selenium IDE issue?
UPDATE:
I figured out my root issue, only through trial and error, but with success. I filed a bug with the Selenium project.
The reason it worked locally was a file-not-found error after the form submit, which did not happen on the server side. Strangely, that file-not-found error seems to have prevented the bug from occurring.
However, the main part of this question isn't really answered yet; next time I still won't know how to quickly track down such issues. So for now, I'll keep it open.
I have a similar issue. Selenium IDE does not record anything from the website "http://suppliers.inwk.com". You may not have credentials to log in, but if you can get the login page itself recorded in Selenium IDE, then I think we can get to the root cause, or at least get a clue.