Load timeout for module: domReady!_unnormalized2 - google-chrome

How do you interpret this error?
Uncaught Error: Load timeout for modules:
domReady!_unnormalized2,domReady!_unnormalized3,domReady!
I'm using requirejs 2.1.2 and domReady 2.0.1.
It doesn't always happen, and apparently only in Chrome (it works fine in IE and Firefox).
I increased the default load timeout with:
require.config({ waitSeconds: 90 });
but it keeps failing.
Any ideas? I would appreciate any help.

There is a standard amount of time (waitSeconds) that RequireJS will wait for a given require() call to complete, to allow some time for the relevant files to download. When using domReady!, the require call is forced to wait until the DOM is ready, which can be longer than RequireJS is willing to wait, resulting in the error you mention.
Ideally the DOM would not take so long to be ready, as that is itself an issue for the user experience, but in the case that it does, I believe we have to avoid the domReady! plugin dependency; one option is to load domReady as a regular module and call it as a function, as sketched below.
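A minimal sketch of that workaround, assuming an otherwise standard RequireJS setup: load domReady as a regular module and call it as a function, so the require() call completes as soon as the file is downloaded and only the callback waits for the DOM.
// Instead of the plugin form require(['domReady!'], ...), which holds the
// require() call open until the DOM is ready and can trip waitSeconds:
require(['domReady'], function (domReady) {
    domReady(function () {
        // The DOM is ready here; initialize the parts of the app that touch it.
    });
});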

Related

Exception handling with Realbrowserlocusts

When using the realbrowserlocusts class, it appears that I'm limited in exception handling.
The only reference that partially works is: self.client.wait.until(EC.visibility_of_element_located ....
In a failed condition where the element is not found, the script simply starts over again. With the script I'm working with I need to maintain a solid session state; I need to throw an exception (report an error), log the user out and then let the script start over again. I've been testing out the behavior with the locust.py script that Nick B. created, trying several approaches to "try, except"; they work when running without realbrowserlocusts (Selenium only), but with it the execution just stops.
Any examples would be greatly appreciated.
In its current form I've been able to run 3x the browser-based load per agent/slave compared to our commercial tool. My goal is to replace it with a locust/selenium approach.
locust-plugins' WebdriverUser has somewhat better exception handling, I think. A failure to find an element will log a failed request, and if you use RescheduleTaskOnFail (as in the example) it will restart the task when that happens.
https://github.com/SvenskaSpel/locust-plugins/blob/master/examples/webdriver_ex.py
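For reference, here is a rough sketch of the try/except shape the question describes, using plain Selenium calls only; the locator, timeout and logout URL are placeholders, and self.client is assumed to be the WebDriver-like client that realbrowserlocusts exposes in your script.
# Sketch only: report the failure, log the user out, then re-raise so the
# error is recorded instead of the task silently starting over.
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def place_order(self):
    try:
        WebDriverWait(self.client, 10).until(
            EC.visibility_of_element_located((By.ID, "order-confirmation"))  # placeholder locator
        )
    except TimeoutException:
        self.client.get("https://example.com/logout")  # placeholder logout step
        raise  # surface the error so it is reported, not swallowed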

How to trace while developing a MediaWiki extension?

How do I add tracing code (for bug hunting) to my MediaWiki extension?
When I add echo "XXX"; or var_dump(...);, I don't see it in the output (even though the line where I put this tracing definitely runs: I checked by adding exit(0); in its place and watching the script exit as expected).
I assume you mean debug logging ("trace" is usually used for recording what method calls happen, as in XDebug function traces). The MediaWiki debugging help page has some information on it, although it's not in great shape. Basically you set $wgDebugLogGroups['mydebuglog'] to point to a logfile, and then use wfDebugLog( 'mydebuglog', 'XXX' ). (PSR-3-style structured logging is possible but requires some setting up.)
Usually var_dump works too, but a lot of things happen outside of requests that produce a web response (jobs, or heavy processing that's deferred until after the response has been sent).
If you did mean tracing, the profiling help page has some information.
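Concretely, the debug-logging setup described above looks roughly like this (the group name and log path are just placeholders):
// In LocalSettings.php: route the 'mydebuglog' group to a file the web server can write.
$wgDebugLogGroups['mydebuglog'] = '/var/log/mediawiki/mydebuglog.log';

// In the extension code, instead of echo/var_dump:
wfDebugLog( 'mydebuglog', 'XXX reached ' . __METHOD__ );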

My Node.js script is not exiting on its own after successful execution

I have written a script that updates my db table after reading data from db tables and Solr. I am using the async.waterfall module. The problem is that the script does not exit after all operations complete successfully. I have also used a db connection pool, and I suspect that may be keeping the script waiting indefinitely.
I want to put this script in crontab, and if it does not exit properly it will create a huge number of instances unnecessarily.
I just went through this issue.
The problem with just using process.exit() is that the program I am working on was creating handles, but never destroying them.
It was processing a directory and putting data into orientdb.
Some of the things I have come to learn are that database connections need to be closed before getting rid of the reference, and that process.exit() does not solve all cases.
When my project processed 2,000 files, it would get down to about 500 left and the extra handles would have filled up the available working memory, which meant it could not continue and therefore never reached the process.exit() at the end.
On the other hand, if you close the items that are requesting the app to stay open, you can solve the problem at its source.
The two "Undocumented Functions" that I was able to use, were
process._getActiveHandles();
process._getActiveRequests();
I am not sure what other functions will help with debugging these types of issues, but these ones were amazing.
They return an array, and you can determine a lot about what is going on in your process by using these methods.
You have to tell it when you're done, by calling
process.exit();
More specifically, you'll want to call this in the callback from async.waterfall() (the second argument to that function). At that point, all your asynchronous code has executed, and your script should be ready to exit.
EDIT: As pointed out by @Aaron below, this likely has to do with something like a database connection being active and not allowing the Node process to end.
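Putting those two points together, a minimal sketch (assuming the mysql module's connection pool; pool.end() releases the pool's connections so nothing keeps the event loop alive):
const async = require('async');
const mysql = require('mysql');

const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'mydb' });

async.waterfall([
    (next) => pool.query('SELECT 1', next),           // read the rows to process
    (rows, fields, next) => {
        // ... update the table, call Solr, etc., then signal completion ...
        next(null);
    },
], (err) => {
    if (err) console.error(err);
    // Closing the pool removes the open handles; node can then exit on its own,
    // or you can force the point with process.exit().
    pool.end(() => process.exit(err ? 1 : 0));
});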
You can use the node module why-is-node-running:
Run npm install -D why-is-node-running
Add import * as log from 'why-is-node-running'; in your code
When you expect your program to exit, add a log statement:
afterAll(async () => {
  await app.close();
  log();
});
This will print a list of open handles with a stacktrace to find out where they originated:
There are 5 handle(s) keeping the process running
# Timeout
/home/maf/dev/node_modules/why-is-node-running/example.js:6 - setInterval(function () {}, 1000)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
# TCPSERVERWRAP
/home/maf/dev/node_modules/why-is-node-running/example.js:7 - server.listen(0)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
We can let the execution finish by destroying the open connection:
connection.destroy();
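For example, with a single node-mysql connection (connection details are placeholders), destroying it once the work is done removes the handle that keeps the process alive:
const mysql = require('mysql');
const connection = mysql.createConnection({ host: 'localhost', user: 'app', database: 'mydb' });

connection.query('SELECT 1', (err, rows) => {
    // ... finish the remaining work ...
    connection.destroy();   // closes the socket immediately; the script can now exit
});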
If you use Visual Studio code, you can attach to an already running Node script directly from it.
First, run the Debug: Attach to Node Process command. VS Code will prompt you to choose which Node.js process to attach to.
Your terminal should display this message:
Debugger listening on ws://127.0.0.1:9229/<...>
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
Then, inside your debug console, you can use the code from The Lazy Coder’s answer:
process._getActiveHandles();
process._getActiveRequests();

How do BundleActivator, ManagedService, and my application interact on start/stop?

I had a non-OSGi application. To convert it to OSGi, I first bundled it up and gave it a simple BundleActivator. The activator's start() started up a thread of what used to be the main() of my app (and is now a Runnable), and remembered that thread. The activator's stop() interrupted that thread, and waited for it to end (via join()), then returned. This all seemed to be working fine.
As a next step in the OSGiification process, I am now trying to use OSGi configuration management instead of the Properties-based configuration that the application used to use. So I am adding in a ManagedService in addition to the Activator.
But it's no longer clear to me how I am supposed to start and stop my application; examples that I've seen are only serving to confuse me. Specifically, here:
http://felix.apache.org/site/apache-felix-config-admin.html
They no longer seem to do any real starting of the application in BundleActivator.start(). Instead, they just register a ManagedService to receive configuration. So I'm guessing maybe I start up the app's main thread when I receive configuration, in the ManagedService? They don't show it - the ManagedService's updated() just has vague comments saying to "apply configuration from config admin" when it is passed a non-null Dictionary.
So then I look here:
http://blog.osgi.org/2010/06/how-to-use-config-admin.html
In there, it seems like maybe they're doing what I guessed. They seem to have moved the actual app from BundleActivator to ManagedService, and are dealing with starting it when updated() receives non-null configuration, stopping it first if it's already started.
But now what about when the BundleActivator's stop() gets called?
Back on the first example page that I mentioned above, they unregister the ManagedService. On the second example page, they don't show what they do.
So I'm guessing maybe unregistering the ManagedService will cause null configuration to be sent to ManagedService.updated(), at which point I can interrupt the app thread, wait for it to end, and then return?
I suspect that I'm thoroughly incorrect, but I don't know what the "real" way to do this is. Thanks in advance for any help.
BundleActivator (BA) and ManagedService (MS) are callbacks to your bundle. BundleActivator covers the active state of your bundle: BA.start is called when your bundle is being started and BA.stop when it is being stopped. MS is called to provide your bundle with a configuration, if there is one, or to notify you that there is no configuration.
So in BA.start, you register your MS service and return. When MS is called (on some other thread), you will either receive your configuration or be told there is no configuration, and you can act accordingly (start the app, etc.).
Your MS can also be called at any time to advise you of the modification or deletion of your configuration, and you should act accordingly (i.e. adjust your app's behavior).
When you are called at BA.stop, you need to stop your app. You can unregister the MS or let the framework do it for you as part of normal bundle stop processing.
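A rough sketch of that shape in Java (the PID and the MyAppController wrapper around your application thread are made-up names):
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Constants;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.cm.ManagedService;

public class Activator implements BundleActivator {
    private ServiceRegistration<ManagedService> registration;
    private final MyAppController app = new MyAppController(); // hypothetical wrapper for the app thread

    @Override
    public void start(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<>();
        props.put(Constants.SERVICE_PID, "com.example.myapp"); // made-up PID
        registration = context.registerService(ManagedService.class, properties -> {
            if (properties == null) {
                app.stop();                   // no configuration (or it was deleted)
            } else {
                app.restartWith(properties);  // (re)start the app thread with the new config
            }
        }, props);
    }

    @Override
    public void stop(BundleContext context) {
        // The framework unregisters the ManagedService as part of stopping the
        // bundle; we still have to stop the application thread ourselves.
        app.stop();
    }
}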

HTML 5 Application Cache catch events in Chrome

I've created a website using the HTML5 offline Application Cache and it works well in most cases, but for some users it fails. In Chrome, when the application is being cached, the progress is displayed for each file, along with error messages if something goes wrong, like:
Application Cache Checking event
Application Cache Downloading event
...
Application Cache Progress event (7 of 521) http://localhost/HTML5App/js/main.js
...
Application Cache Error event: Failed to commit new cache to storage, would exceed quota.
I've added event listeners to window.applicationCache (error, noupdate, obsolete, etc.), but there is no information stored on the nature of the error.
Is there a way to access this information from the website using JavaScript? I would like to determine somehow which file caused the error, or what kind of error occurred.
I believe the spec doesn't require the exact cause of the failure to be included in the error event. Currently the console is your only friend.
To wit, your current error ("would exceed quota") is due to the fact that Chrome currently limits the storage to 5 MB. You can work around this by creating an app package that requests unlimitedStorage via the permission model. See http://code.google.com/chrome/apps/docs/developers_guide.html#live for more details.
If you want specific error messages in the "onerror" handler, raise a bug at http://crbug.com/new
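For completeness, this is roughly what wiring up those listeners looks like; as noted above, the error event itself carries no detail about which file failed or why, so the console output is still what you have to rely on:
var appCache = window.applicationCache;

appCache.addEventListener('progress', function (e) {
    // Progress events report counts (e.loaded / e.total) in browsers that expose them.
    console.log('AppCache progress', e);
}, false);

appCache.addEventListener('error', function (e) {
    // No file name or reason is attached to the event object itself.
    console.warn('AppCache error', e);
}, false);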