I just wanted to check and see if anyone knows what is happening here...
Sometimes these min.js files time out at 10 seconds and throw a 504. This has happened at work and at home, wired and wireless, and which library or libraries it happens to seems to be random.
I'm assuming these are loaded automatically when I include the viewer3D file?
<script src="https://developer.api.autodesk.com/modelderivative/v2/viewers/7.*/viewer3D.min.js"></script>
Sorry about the trouble. Apparently the https://developer.api.autodesk.com servers were getting some unusually heavy traffic, and our logs show a couple of long-latency requests during that window. The engineering team is currently investigating.
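In the meantime, one client-side mitigation is to inject the script dynamically and retry on failure. A minimal sketch, not from the original thread, using plain DOM APIs:

// Minimal sketch: load the viewer script dynamically and retry a few
// times if the CDN request fails (e.g., on an intermittent 504).
function loadScript(src, retries) {
  return new Promise((resolve, reject) => {
    const el = document.createElement('script');
    el.src = src;
    el.onload = () => resolve(el);
    el.onerror = () => {
      el.remove();
      if (retries > 0) {
        loadScript(src, retries - 1).then(resolve, reject);
      } else {
        reject(new Error('Failed to load ' + src));
      }
    };
    document.head.appendChild(el);
  });
}

loadScript('https://developer.api.autodesk.com/modelderivative/v2/viewers/7.*/viewer3D.min.js', 3)
  .then(() => { /* the viewer globals are now available */ });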
I have loaded a database with about 7.5M nodes and 33M+ relationships -- it's about 25 GB in total. So it's reasonably large, is my point. What I am finding since loading it is that my Neo4j browser client periodically just falls over, leaving nothing more than Chrome's irritating "Aw, Snap!" page behind. I have checked the logs and found nothing significant there. How can I begin to track down what is happening on these failed queries?
The issue might be with your Chrome environment. This Google support page on the "Aw Snap!" error may be of help.
Also, if your queries return a lot of data or take too long to respond, the browser might be running out of memory or exceeding some internal timeout. So make sure the queries you run in the browser are tailored accordingly, e.g., capped with a LIMIT clause.
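If you need the full result set rather than a browser-friendly sample, pulling it through a driver and paginating sidesteps the browser entirely. A minimal sketch using the official neo4j-driver package for Node.js; the connection details and the bare MATCH are placeholders:

const neo4j = require('neo4j-driver');

// Placeholder connection details; adjust for your deployment.
const driver = neo4j.driver('bolt://localhost:7687',
  neo4j.auth.basic('neo4j', 'password'));

async function fetchPage(skip, limit) {
  const session = driver.session();
  try {
    // SKIP/LIMIT keeps each round trip small enough for the client.
    const result = await session.run(
      'MATCH (n) RETURN n SKIP $skip LIMIT $limit',
      { skip: neo4j.int(skip), limit: neo4j.int(limit) });
    return result.records;
  } finally {
    await session.close();
  }
}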
Is there any way to monitor the loading of the crossdomain.xml file?
I'd like to report the load times of this file, since it seems to be intermittently taking longer than expected. There doesn't seem to be an event for it from URLLoader, and Security.loadPolicyFile() doesn't allow any event listeners.
How can I get the load time for a crossdomain.xml file without requiring additional loads of the file?
When I ran into issues with crossdomain.xml, I read some basics about what it is and how you can work with it. What follows is a heavily simplified (and not entirely accurate) version, but it might serve as a guideline.
Loading crossdomain.xml is owned by the browser/Flash initialization, and it tells the browser whether any of your code should be allowed to execute at all. Hence you can't really measure it: it's loaded before your code runs, and in another environment.
There might be some way of getting at it, and things may have changed since I last tried several years ago, but...
You can always measure it another way: the last app I used to troubleshoot issues in this area was the Charles Web Debugging Proxy.
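If an out-of-band sample is acceptable, another option is to time a separate fetch of the policy file from a monitoring page. A rough sketch, assuming the file lives on your own domain (a cross-origin read would be blocked), and noting this is a separate request rather than the one Flash itself makes:

// Rough sketch: take one timing sample of crossdomain.xml.
// cache: 'no-store' forces a real network round trip each time.
async function sampleCrossdomainLoad(url = '/crossdomain.xml') {
  const start = performance.now();
  const response = await fetch(url, { cache: 'no-store' });
  await response.text();
  return performance.now() - start; // elapsed milliseconds
}

sampleCrossdomainLoad().then((ms) => {
  // send `ms` to whatever reporting endpoint you use
});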
So, I load large amounts of data from services. I just updated our project to the latest Flash Builder 4.7 with the Flex 4.9 SDK and AIR 3.4, and implemented workers. They seem to work great for one-off tasks EXCEPT when it comes to loading data over the internet. I haven't found the magic limit yet, but it seems that if a worker makes an internet request (Loader, URLLoader, HTTPService... whatever) that receives a large reply, the worker just locks up and quits. The main thread can do the same load just fine in all cases (though the UI is unresponsive until the load is done). This is why I was so excited about workers: offloading the sometimes-large data loads to the background.
Has anyone else run into this? I saw comments on the Worker class docs online where a few other people have seen similar problems and suggest putting internet data loading back on the main thread. Which seems like, "what's the point of a worker, then?"
Can they only do local calculations? Math is cool... but HTTP GETs are not?
I tried giving the worker app privileges... no help there. Is there a magic worker.canloadlargefiles = true? (Rhetorical.)
Any direction or help here would be greatly appreciated.
Well, not a solution, but a workaround: instead of digging into the internals of the VM, apply the KISS (Keep It Simple, Stupid) principle and divide your data into manageable chunks. You don't need to split the files or data yourself; you can tell the server to do it via the request format, so it sends you manageable portions that won't time out the VM. Retrieve the chunks and join them on the client side to rebuild the big file (see the sketch below).
That's my two cents.
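The thread is ActionScript, but the chunking idea is language-neutral. Here is a rough sketch of it in JavaScript using HTTP Range requests; it assumes the server honors the Range header (a custom offset/limit request parameter works the same way):

// Rough sketch: pull a large resource in fixed-size chunks and
// reassemble it on the client, so no single request is huge.
async function fetchInChunks(url, chunkSize = 1024 * 1024) {
  const head = await fetch(url, { method: 'HEAD' });
  const total = Number(head.headers.get('Content-Length'));
  const parts = [];
  for (let start = 0; start < total; start += chunkSize) {
    const end = Math.min(start + chunkSize - 1, total - 1);
    const res = await fetch(url, { headers: { Range: 'bytes=' + start + '-' + end } });
    parts.push(await res.arrayBuffer());
  }
  return new Blob(parts); // the joined file
}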
I've been running into this problem with odd regularity: code working fine, then suddenly taking a couple of minutes to save or load. It's about 2,000 lines, so only on the larger side of average. Well, this problem found its way into my day again today, and I finally found the cause.
It turns out that all of my "slow code" had been copied and pasted, generally when I'd used Select All. I've been doing this as I deploy the apps to coworkers and the like, causing a fair amount of frustration.
I know this may sound obvious, but it seems the best practice for installing apps into other accounts is to share the script at the viewer level and then make a copy.
I've also gone back to other slow code, tracked down a good version, made a copy, and updated it bit by bit. Slow code is now fast.
My browser extension is crashing occasionally. The problem is, I cannot find a good, comprehensive list of things that can cause an extension to crash, and thus am having a hard time creating a checklist of things to work with.
My assumption is that anything that causes a standard Chrome tab to crash would cause the extension to crash when run in the Background.html file.
Off the top of my head, I'm assuming the following could cause problems...
Infinite loops or other instances of a script becoming unresponsive
Uncaught exceptions (e.g., a JSON.parse call with no try/catch; see the sketch after this list)
Database storage errors
Excessive resource usage (??)
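For the uncaught-exception item above, the fix is cheap. A minimal sketch (the 'settings' key is just an example):

// Minimal sketch: never let a bad payload throw in background.js;
// an uncaught exception there has no page reload to recover it.
function safeParse(json, fallback) {
  try {
    return JSON.parse(json);
  } catch (e) {
    return fallback; // swallow the parse error, keep the extension alive
  }
}

const settings = safeParse(localStorage.getItem('settings'), {});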
That's really all I can think of. I'm having a heck of a time trying to debug my extension and would really appreciate any help creating a checklist...
I'm coming back to this question about 3 months after asking it because a 2nd extension of mine was also crashing. In this case, though, the extension was far simpler -- only about 40 lines of code in the background.js script.
Two operations seemed to be possible culprits: writing to localStorage and using console.log.
I have previously observed that it is possible to crash a normal Chrome tab by calling console.log repeatedly with large objects if you leave the page open for an extended period. Because background.js is always open, that seems like a likely culprit here.
tl;dr
Don't use console.log in production. Ever.
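One cheap way to enforce that is to funnel all logging through a flag-gated helper so release builds never call console.log at all. A minimal sketch; the flag name is arbitrary:

// Minimal sketch: a single gated logger. With DEBUG false, nothing is
// ever passed to console.log, so the console can't pin large objects.
const DEBUG = false; // set to true only in development builds

function log(...args) {
  if (DEBUG) {
    console.log(...args);
  }
}

log('state snapshot:', { items: [] }); // safe no-op in production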