My browser extension is crashing occasionally. The problem is, I cannot find a good, comprehensive list of things that can cause an extension to crash, and thus am having a hard time creating a checklist of things to work with.
My assumption is that anything that causes a standard Chrome tab to crash would cause the extension to crash when run in the Background.html file.
Off the top of my head, I'm assuming the following could cause problems...
Infinite loops or other instances of a script becoming unresponsive
Uncaught exceptions (e.g., a JSON.parse with no try/catch; see the sketch after this list)
Database storage errors
Excessive resource usage (??)
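For the uncaught-exception item, here is a minimal sketch of the kind of failure I mean (the helper name and message shape are made up for illustration):

// Hypothetical background-script helper; an uncaught SyntaxError from
// JSON.parse would otherwise escape the handler entirely.
function parseResponse(raw) {
  try {
    return JSON.parse(raw);
  } catch (e) {
    console.error('Bad JSON from server:', e.message);
    return null; // degrade gracefully instead of throwing
  }
}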
That's really all I can think of. I'm having a heck of a time trying to debug my extension and would really appreciate any help creating a checklist...
I'm coming back to this question about 3 months after asking it because a 2nd extension of mine was also crashing. In this case, though, the extension was far simpler -- only about 40 lines of code in the background.js script.
Two operations seemed to be possible culprits: writing to localStorage and using console.log.
I have previously observed that it is possible to crash a normal Chrome tab by calling console.log repeatedly with large objects if you leave the page open for an extended period. Because background.js is always open, that seems like a likely culprit here.
tl;dr
Don't use console.log in production. Ever.
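A minimal sketch of the mitigation I ended up with, assuming a DEBUG flag that is false in release builds (the flag and wrapper names are my own, not a Chrome API):

// Gate all logging behind a flag so production builds log nothing.
var DEBUG = false;

function log() {
  if (DEBUG) {
    console.log.apply(console, arguments);
  }
}

// localStorage writes can throw too (e.g. QuotaExceededError),
// so guard them rather than letting the exception go uncaught.
function safeSet(key, value) {
  try {
    localStorage.setItem(key, value);
    return true;
  } catch (e) {
    log('localStorage write failed:', e.name);
    return false;
  }
}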
Since yesterday afternoon I have not been able to generate maps with Folium; even the most basic display fails. My network connection is fine, so it is likely that something is wrong with how the JavaScript is loaded.
I tried switching networks and even changing computers, but neither helped.
Failed to load resource: net::ERR_CERT_DATE_INVALID
leaflet.awesome-markers.js:17 Uncaught ReferenceError: L is not defined
at leaflet.awesome-markers.js:17
at leaflet.awesome-markers.js:122
leaflet.css:1 Failed to load resource: net::ERR_CERT_DATE_INVALID
map.html:39 Uncaught ReferenceError: L is not defined
at map.html:39
# This is the code for the most basic function I've tried.
import folium

m = folium.Map(location=[29.488869, 106.571034],
               zoom_start=16,
               control_scale=True,
               width='50%')
m.save('map.html')
I expect this to generate a map page.
I don't think there is any problem in your Jupyter notebook/Python IDLE; check with your browser. The page Folium generates pulls Leaflet's js and css from a CDN, and your log shows those downloads failing with ERR_CERT_DATE_INVALID, which is why L (the Leaflet global) is never defined. If you've determined that the ERR_CERT_DATE_INVALID is caused by an issue on your computer, try these steps to resolve it:
First things first, check the date and time set on your computer; if these are wrong, that probably explains how you got the ERR_CERT_DATE_INVALID error.
Sometimes fixing this error is as simple as shutting down your browser and then restarting it. Other times a system reboot may work. However, there are a couple of instances where you'll need to do a little more work to set things straight.
Check your connection: if you're connected to public WiFi or some other public network, there's a chance that your browser is right and you don't actually have a secure connection. If that's the case, stop browsing and resume when you're on a more secure setup.
Scan your computer with trusted antivirus software; you may have malware of some sort that is causing the issue. Unfortunately we can't provide info on how to fix every last piece of malware, but if your antivirus can't, someone on the internet probably knows how.
Disable any third-party plugins you have running on Chrome. Sometimes these can cause unwanted problems.
Clear your browser cache on Chrome: click the menu icon, open History, and select "clear browsing data."
Delete and then re-install Chrome. Sometimes this helps.
There are a few other drastic fixes, like bringing down your firewall or modifying network settings, but I would not suggest those, since your PC would then become susceptible to viruses.
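Separately, if you want to confirm exactly which resources are failing to load in the generated map.html, here is a small debugging sketch (my own addition, using only standard DOM APIs) that you can paste in a script tag above the Leaflet includes:

// Resource load failures (script/link/img) don't bubble, but a
// capture-phase listener on window does see them. Debugging aid only.
window.addEventListener('error', function (event) {
  var el = event.target;
  if (el && (el.src || el.href)) {
    console.warn('Failed to load:', el.src || el.href);
  }
}, true);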
Is there any way to monitor the loading of the crossdomain.xml file?
I'd like to report the load times of this file, since it seems to be intermittently taking longer than expected. There doesn't seem to be an event from URLLoader, and Security.loadPolicyFile() doesn't allow any event listeners.
How can I get the load time for a crossdomain.xml file without requiring additional loads of the file?
When I ran into issues with crossdomain.xml, I read some basic stuff about what it was and how you can work with it. This is a "heavily simplified/not entirely true version of it" but it might serve as a guideline.
Loading of crossdomain.xml is owned by the browser/Flash-player initialization and tells the player whether any of your code should be allowed to execute. Hence you can't really measure it, since it's loaded before your code runs and in another environment.
There might be some way of finding it, and things may have changed since I last tried several years ago, but...
You can always measure it in another way: the last app I used to troubleshoot issues in this area was Charles Web Debugging Proxy.
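One long shot you could also try, if the SWF is embedded in an HTML page (my own suggestion, and plugin-initiated fetches may well not show up here), is the browser's Resource Timing API:

// Only useful if the crossdomain.xml fetch is exposed to Resource
// Timing; plugin-initiated loads often are not, so treat this as a
// long shot rather than a reliable measurement.
var entries = performance.getEntriesByType('resource').filter(function (e) {
  return e.name.indexOf('crossdomain.xml') !== -1;
});
entries.forEach(function (e) {
  console.log(e.name, 'took', e.duration.toFixed(1), 'ms');
});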
I've been working with Appcache for quite some time, but I recently took a couple of weeks to develop a different project. When I returned to my offline project, I started getting this error every time I try to download the contents of my manifest:
Application Cache Error event: Manifest fetch failed (9)
This is followed by two addresses: the file and line number of the page that references the manifest (on the right of the same line), and the relative URL of the manifest itself (on a second line). The download of the individual resources does not begin.
Now, other folks work on this project, but I'm the only guy who touches anything that as much as smells of offline. The issue doesn't appear to be related to any of my usual suspects, like a syntax error in the manifest (tried clearing all the files, just to be sure), the manifest being served incorrectly, or something wrong with one of the files being cached.

I don't think it's a memory problem, as I have over 30 gigs of space outside of the size of the files I'm caching. Furthermore, this worked 2 weeks ago, so I'm assuming that there isn't something wrong with my setup. However, nobody seems to know what the hell this error is; nobody even seems to be getting this error. I can't find anything online to describe what this issue is. Hence, my question is:
What does Manifest fetch failed (9) mean?
My browser is Chrome on Windows 7, and is up to date.
GAH. Ok so I figured out the problem, or at least I figured out a solution. 9 might indicate a certificate error, which is what I was experiencing. Lovely, just... lovely.
(9) means that there is a security error: cache manifests served with invalid certificates are rejected because they would allow a man-in-the-middle attack, as explained in the Chromium issue that disallowed this. If you still want to use a cache manifest with an invalid certificate for testing purposes, you can pass --ignore-certificate-errors to Chrome on launch.
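If you want to see exactly which stage the update dies at, here is a small sketch (just the standard AppCache lifecycle events, nothing Chrome-specific) you can drop into the page that references the manifest:

// Log every AppCache lifecycle event; the 'error' event is the one
// that accompanies "Manifest fetch failed".
['checking', 'downloading', 'progress', 'cached', 'noupdate',
 'updateready', 'obsolete', 'error'].forEach(function (name) {
  window.applicationCache.addEventListener(name, function (event) {
    console.log('appcache:', name, event);
  });
});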
I'm encountering an issue that Selenium IDE seems not to record a specific event on a real webserver.
However, if I save the page (including all resources) from Firefox entirely to disk, open the saved file in the browser, and try to record the same interaction, Selenium IDE works correctly and records the event as expected.
I'm not sure what is causing this behavior. Maybe some race condition exists inside Selenium IDE (latencies from a real webserver are higher than for a local file URL), or maybe it has something to do with the URLs, but these are only quick guesses.
Does anybody have suggestions or best practices for how to track down this kind of Selenium IDE issue?
UPDATE:
I figured out my root issue, only with trial and error, but with success. I filed a bug with the Selenium project.
The reason it worked locally was a file-not-found error after the form submit, which did not happen on the server side. Strangely, that file-not-found error prevented the bug from occurring.
However, the main part of this question isn't really answered yet; next time I still won't know how to quickly track down such issues. So for now, I'll keep it open.
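One approach worth trying to isolate timing problems like this (a sketch of my own; the URL and locators below are placeholders) is to replay the recorded steps in WebDriver with explicit waits and see whether generous timeouts change the behavior:

// Sketch using the selenium-webdriver npm package; URL and locators
// are placeholders. If the steps only pass with generous waits, the
// original failure was probably a timing/latency issue.
const { Builder, By, until } = require('selenium-webdriver');

(async function main() {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('http://example.com/form'); // placeholder URL
    const field = await driver.wait(
      until.elementLocated(By.id('some-field')), 10000); // placeholder id
    await field.sendKeys('test');
    await driver.findElement(By.css('button[type="submit"]')).click();
  } finally {
    await driver.quit();
  }
})();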
I have a similar issue. The Selenium IDE does not record anything from the website "http://suppliers.inwk.com". You may not have credentials to get login access, but if you can get the login page itself recorded in Selenium IDE, then I think we can come to the root cause, or at least get a clue.
This is an odd freeze. When I switch from source view to design view for an HTML or ASPX file, the client area freezes, but I can still click on other tabs and menus.
What am I missing here? Really don't feel like reinstalling VS2008.
I had the same problem, and found one resolution.
In VS 2008, in a page that used a master page, the IDE would freeze for 10-20 seconds, either periodically while working in source view or when switching to design view.
In my master template, I had references to the Google-hosted jQuery, jQuery UI, and one or two more off-site scripts. These were placed directly in my master page's head section.
I downloaded the js files and deleted the off-site references, and my IDE was smooth again in both design and source mode.
I also discovered I could put the scripts inside my ToolkitScriptManager (I'm using the AjaxControlToolkit) with Mode="Release" added, and keep the http://www.google.com references for the scripts. The IDE is still working fine for me.
This is often due to Design mode downloading external resources that are timing out. As @JonK mentioned, for him it was jQuery references. I have seen this when the ConnectionString was set to production databases that could not be accessed from my development machine; even though I wasn't debugging (running) the site, only editing code, it would still try to connect, and because it couldn't, it would stall waiting for the timeout.
VS2008 is mostly single-threaded for UI operations like this, so if it is downloading a slow or non-existent network path it hangs like this.
VS2008 can make all kinds of network requests, so these two examples may not solve it for you. The best way I have found to diagnose the problem is to use the Microsoft tool Process Monitor, filter by the process webdev.exe, and watch for I/O requests that are long-running and/or throwing errors. In my case, I could find the place that was having a problem because there would be a 20-second gap in the hundreds of I/O entries in Process Monitor. Then I just back-tracked from where that gap began and eventually found the request that was causing the problem.
This may not be possible for you, but if you can, an upgrade to VS2010 would help; it does a much better job of running work on multiple threads in more places, so you don't have to worry about this as much.
Have you tried restarting your computer and then reopening your project?