Jenkins throwing IndexOutOfBoundsException at end of build on Mac?

I am getting:
FATAL: Bounds exceeds available space : size=262144, offset=262145
java.lang.IndexOutOfBoundsException: Bounds exceeds available space : size=262144, offset=262145
at com.sun.jna.Memory.boundsCheck(Memory.java:168)
at com.sun.jna.Memory.getByte(Memory.java:394)
...
at the end of every build.

There's a bug in recent builds that causes this. The workaround is to add
-Dhudson.util.ProcessTreeKiller.disable=true
to Tomcat's start-up command. One way is to add it to JAVA_OPTS in <tomcat-home>/bin/startup.sh.
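For example (a minimal sketch; <tomcat-home>/bin/setenv.sh is the conventional place if your Tomcat picks it up, otherwise near the top of startup.sh):

# <tomcat-home>/bin/setenv.sh (sourced by catalina.sh when present)
# Append the workaround flag to whatever JAVA_OPTS is already set
export JAVA_OPTS="$JAVA_OPTS -Dhudson.util.ProcessTreeKiller.disable=true"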


Exception handling with Realbrowserlocusts

Using the realbrowserlocusts class, it appears that I'm limited in my exception handling.
The only reference that partially works is: self.client.wait.until(EC.visibility_of_element_located ....
When that condition fails because the element is not found, the script simply starts over again. The script I'm working with needs to maintain a solid session state: I need to throw an exception (report an error), log the user out, and then let the script start over again. I've been testing the behavior with the locust.py script that Nick B. created, trying several "try, except" approaches; they work when running without realbrowserlocusts (Selenium only), but with it the execution just stops.
Any examples would be greatly appreciated.
In its current form I've been able to run 3x the browser-based load per agent/slave compared to our commercial tool. My goal is to replace that tool with a locust/selenium approach.
locust-plugins' WebdriverUser has somewhat better exception handling, I think. A failure to find an element will log a failed request, and if you use RescheduleTaskOnFail (as in the example) it will restart the task when that happens.
https://github.com/SvenskaSpel/locust-plugins/blob/master/examples/webdriver_ex.py
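For the plain-Selenium part of the question, the general shape of the try/except is something like this (a sketch; the locator and logout URL are hypothetical placeholders):

from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def run_step(driver):
    try:
        WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.ID, "dashboard")))  # hypothetical locator
    except TimeoutException:
        # Report the error and tear the session down cleanly, then re-raise
        # so the surrounding runner restarts the task instead of hanging.
        driver.get("https://example.com/logout")  # hypothetical logout URL
        raise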

Bluemix deployment error using web gui: FAILED Invalid JSON response from server: json: cannot unmarshal number into Go value of type string

I'm running a Liberty profile application hosted on Bluemix. I'm using a JazzHub DevOps deployment pipeline with several stages, one for each target workspace (dev, QA, test, production).
When using the deployment pipeline's web-based GUI, I can successfully deploy my chosen build to all stages except the last one (production). When I attempt to deploy the final pipeline stage to production, it fails with the following cryptic error message:
FAILED Invalid JSON response from server: json: cannot unmarshal
number into Go value of type string
I have compared the final stage with the stages that are working and can find no difference other than the target itself. I have rebuilt the final stage from scratch a couple of times to see if that would resolve the issue, but I get the same error every time I try to deploy to the production target using the GUI.
If I use the command line tools (i.e., cf login, cf push), the deployment completes without an error, even using the exact same commands listed in the production stage's profile in the GUI.
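For reference, the CLI sequence is roughly this (user, org, and space names are placeholders; the API endpoint and manifest are the ones shown in the log below):

cf login -a https://api.ng.bluemix.net -u <user> -o <org> -s <production-space>
cf push -f manifest-prod.yml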
So while I can deploy to the target production workspace, I am left without the handy GUI indicator of which build is actually running in production at any given time, and I have to keep track of that information myself.
I've seen similar questions regarding container deployment issues but I'm using the built-in Bluemix Liberty runtime and have no access to adjust details of container deployment like Docker version, etc.
Does anybody have any clue what might be causing this error or how I can troubleshoot the issue further?
Thanks
@crjenkins89 The full log looks like this, even with CF_TRACE=true set:
Downloading artifacts...DOWNLOAD SUCCESSFUL
Target: https://api.ng.bluemix.net
adding: wlp/ (stored 0%)
adding: wlp/usr/ (stored 0%)
adding: wlp/usr/servers/ (stored 0%)
adding: wlp/usr/servers/defaultServer/ (stored 0%)
adding: wlp/usr/servers/defaultServer/server.xml (deflated 56%)
adding: wlp/usr/servers/defaultServer/jvm.options (deflated 20%)
adding: wlp/usr/servers/defaultServer/server-local.xml (deflated 53%)
adding: wlp/usr/servers/defaultServer/apps/ (stored 0%)
adding: wlp/usr/servers/defaultServer/apps/rccs.war (deflated 2%)
adding: wlp/usr/servers/defaultServer/apps/rca_help.war (deflated 5%)
Using manifest file manifest-prod.yml
FAILED
Invalid JSON response from server: json: cannot unmarshal number into Go value of type string
Finished: FAILED
Stage has no runtime information
This was apparently fixed when the cf CLI version used by the DevOps deployment pipeline was updated. All stages are working properly now.

Fiware CEP server stops responding

While developing in Fi-Cloud's CEP, I've been having an issue that happens repeatedly: as I try to develop a definition to perform a task, the CEP server and the Authoring Tool stop responding, although ssh is still responsive.
The issue happens as I develop. I'm using the Authoring Tool to alter the definition bit by bit, and then I re-upload it to the server through the Authoring Tool's export feature.
To restart the Proton with the new definition each time I alter it, I use Google's Postman with this single operation:
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
header: 'Content-Type': 'application/json'
body: {"action": "ChangeDefinitions", "definitions-url": "/ProtonOnWebServerAdmin/resources/definitions/Definition_Name"}
At the same time, I'm logged in with three ssh instances: one to monitor the files being created in /opt/tomcat10/sample/ and other things, and the other two to 'tail -f' the log files the definition writes to as events are processed: one log for events received and another for events detected by the EPAgent.
I'm iterating through these procedures over and over as I develop, and eventually the CEP server and the Authoring Tool stop responding.
By tailing Tomcat's log file (# tail -f /opt/tomcat10/logs/catalina.out) I can see that, under these circumstances, if I attempt a:
GET http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
I get no response back and tomcat logs the following response:
11452100 [http-bio-8080-exec-167] ERROR org.apache.wink.server.internal.RequestProcessor - An unhandled exception occurred which will be propagated to the container.
java.lang.OutOfMemoryError: PermGen space
Exception in thread "http-bio-8080-exec-167" java.lang.OutOfMemoryError: PermGen space
ssh is still responsive, and I can look at Tomcat's log this way.
To get past this and continue, I close the ssh connections and restart the CEP instance in Fi-Cloud.
Is the procedure I'm using to re-upload and re-run the definition inappropriate? Should I take a different approach to development?
When you update a definition that the CEP is already working with, and you want the CEP engine to work with the updated definition, you need to:
Export the definition using the authoring tool export (as you did)
Stop the engine, using a REST PUT (see the curl form after these steps)
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"stop"}
Start the engine, using a REST PUT
PUT http://{ip}:8080/ProtonOnWebServerAdmin/resources/instances/ProtonOnWebServer
{"action":"ChangeState","state":"start"}
You don't need to activate the "ChangeDefinitions" action, since it is the same definition name that the engine is already working with.
Activating the "ChangeDefinitions" action only influences the next run of the CEP and has no influence on the current run.
This answers your question about how you should update a CEP definition.
Hope it solves your issue.

My Node.js script is not exiting on its own after successful execution

I have written a script to update my db table after reading data from db tables and Solr. I am using the async.waterfall module. The problem is that the script does not exit after all operations complete successfully. I have also used a db connection pool, thinking that might be making the script wait infinitely.
I want to put this script in crontab, and if it doesn't exit properly it will create a whole lot of instances unnecessarily.
I just went through this issue.
The problem with just using process.exit() is that the program I am working on was creating handles, but never destroying them.
It was processing a directory and putting data into OrientDB.
Some of the things I have come to learn are that database connections need to be closed before getting rid of the reference, and that process.exit() does not solve all cases.
When my project processed 2,000 files, it would get down to about 500 left and the extra handles would have filled up the available working memory, which meant it could not continue and therefore never reached the process.exit() at the end.
On the other hand, if you close the items that are requesting the app to stay open, you can solve the problem at its source.
The two "Undocumented Functions" that I was able to use, were
process._getActiveHandles();
process._getActiveRequests();
I am not sure what other functions will help with debugging these types of issues, but these ones were amazing.
They return an array, and you can determine a lot about what is going on in your process by using these methods.
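For example (a minimal sketch; call it at the point where you expect the script to be idle):

// Dump whatever is still keeping the event loop alive
const handles = process._getActiveHandles();
const requests = process._getActiveRequests();
console.log('active handles:', handles.map((h) => h.constructor.name));
console.log('active requests:', requests.map((r) => r.constructor.name));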
You have to tell it when you're done, by calling
process.exit();
More specifically, you'll want to call this in the callback from async.waterfall() (the second argument to that function). At that point, all your asynchronous code has executed, and your script should be ready to exit.
EDIT: As pointed out by @Aaron below, this likely has to do with something like a database connection remaining active and not allowing the Node process to end.
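A minimal sketch of that shape (the two task functions and the pool are hypothetical placeholders; pool.end() follows the mysql-style pool API, and your actual cleanup may differ):

const async = require('async');

async.waterfall([
  (cb) => fetchRows(cb),               // hypothetical: read from db/Solr
  (rows, cb) => updateTable(rows, cb), // hypothetical: write results back
], (err) => {
  if (err) console.error(err);
  // All async work is finished; release the pool so nothing keeps the
  // event loop alive, then exit explicitly.
  pool.end(() => process.exit(err ? 1 : 0));
});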
You can use the node module why-is-node-running:
Run npm install -D why-is-node-running
Add import * as log from 'why-is-node-running'; in your code
When you expect your program to exit, add a log statement:
afterAll(async () => {
  await app.close();
  log();
})
This will print a list of open handles with a stacktrace to find out where they originated:
There are 5 handle(s) keeping the process running
# Timeout
/home/maf/dev/node_modules/why-is-node-running/example.js:6 - setInterval(function () {}, 1000)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
# TCPSERVERWRAP
/home/maf/dev/node_modules/why-is-node-running/example.js:7 - server.listen(0)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
You can also let the process exit by destroying the open database connection:
connection.destroy();
If you use Visual Studio Code, you can attach to an already running Node script directly from it.
First, run the Debug: Attach to Node Process command.
When you invoke the command, VS Code will prompt you for the Node.js process to attach to.
Your terminal should display this message:
Debugger listening on ws://127.0.0.1:9229/<...>
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
Then, inside your debug console, you can use the code from The Lazy Coder’s answer:
process._getActiveHandles();
process._getActiveRequests();

Strange exception on JBoss 5.1: javax.ejb.EJBTransactionRolledbackException

In one of our client deployments of JBoss 5.1, MySQL, and a JSF application, we get this one-line error:
16:46:08,970 ERROR [org.jboss.aspects.tx.TxPolicy] [] - [] (WorkManager(2)-17) javax.ejb.EJBTransactionRolledbackException
It happens every 15 seconds, even though we don't have any cron jobs on that time scale.
We have two EARs deployed on the server, and if I undeploy one of them the error stops, so it must be something related to the application. The strange thing is that we have not touched that EAR for a long time, and this error started to appear without a redeploy or any code change.
Any hints on how I can dig further to resolve this strange error?
Take a look at the following links:
https://community.jboss.org/message/830295
https://community.jboss.org/message/635405?_sscc=t
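Beyond those threads, one way to dig further (a sketch, assuming the default JBoss 5 logging setup in server/<profile>/conf/jboss-log4j.xml) is to raise the log level for the transaction aspects, so the cause of the rollback gets logged instead of the bare one-line error:

<!-- server/<profile>/conf/jboss-log4j.xml -->
<category name="org.jboss.aspects.tx">
  <priority value="TRACE"/>
</category>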