OpenGL VBO error causing system exit - exception

I'm using JOGL to load an OBJ model and display it in a GL canvas using a VBO. Everything works for the most part; however, there are some models where the vertices must be deformed. For example, I have an arrow object and must be able to deform the stem of the arrow to make the tail as long or short as needed while maintaining the geometry of the arrow head.
This works fine for one instance of the renderer, but when I try to add another one to the scene, the system exits on the glDrawElements call and outputs the error log below. Can anyone point me in the right direction? I'm at a complete loss.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0000000069e3e4c8, pid=6544, tid=2692
#
# JRE version: 6.0_21-b06
# Java VM: Java HotSpot(TM) 64-Bit Server VM (17.0-b16 mixed mode windows-amd64 )
# Problematic frame:
# C [nvoglnt.dll+0x93e4c8]
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
...
Stack: [0x0000000052640000,0x0000000052740000], sp=0x000000005273ecb0, free space=3fb0000000000000000k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [nvoglnt.dll+0x93e4c8]
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
J com.sun.opengl.impl.GLImpl.glDrawElements0(IIIJ)V
J com.sun.opengl.impl.GLImpl.glDrawElements(IIIJ)V
j com.sonogenics.model.AbstractModelHandler$Renderer.display(Ljavax/media/opengl/GL;)V+196
j com.sonogenics.model.AbstractModelHandler$Renderer.display(Ljavax/media/opengl/GL;Lcom/sonogenics/camera/SimpleProjection;FFFLcom/sonogenics/playout/Field;)V+436
...

Use gDEBugger to see which call causes the error and to check for invalid data in your GL calls.
It's quite awesome. :)

ACCESS_VIOLATION means you told GL to read memory that is outside the 'good' areas :)
Within glDrawElements there are a couple of reasons that could happen; you want to check where you set up the GL buffers as well as what you are passing into glDrawElements (a minimal setup is sketched below):
- One of your buffers was a bad address, causing it to read from who knows where
- One of your offsets or strides was too long, causing GL to go beyond the end of an allocation
- The number of verts you said were in the model was too large, causing it to go beyond the end of the allocation
- Your VBO allocation wasn't large enough for the stride * number of verts
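To make those constraints concrete, here is a minimal sketch of a buffer setup and draw call that keeps sizes, offsets, and counts consistent, written against the JOGL 1.x GL interface that appears in the crash log. Everything here (the method, the variable names, and the single packed float3 vertex layout) is an illustrative assumption, not the poster's code:

// Assumes: vertexBuffer is a FloatBuffer with vertexCount * 3 floats
// (x, y, z per vertex); indexBuffer is an IntBuffer with indexCount ints.
void uploadAndDraw(GL gl, java.nio.FloatBuffer vertexBuffer, int vertexCount,
                   java.nio.IntBuffer indexBuffer, int indexCount) {
    int[] ids = new int[2];
    gl.glGenBuffers(2, ids, 0);

    // Vertex VBO: the allocation must cover stride * vertexCount bytes
    // (here 3 floats = 12 bytes per vertex, tightly packed).
    gl.glBindBuffer(GL.GL_ARRAY_BUFFER, ids[0]);
    gl.glBufferData(GL.GL_ARRAY_BUFFER, vertexCount * 3 * 4, vertexBuffer, GL.GL_STATIC_DRAW);

    // Index buffer: every index uploaded here must be < vertexCount.
    gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, ids[1]);
    gl.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, indexCount * 4, indexBuffer, GL.GL_STATIC_DRAW);

    // The last arguments are byte offsets into the bound VBOs, and count
    // must not exceed the number of indices actually uploaded.
    gl.glEnableClientState(GL.GL_VERTEX_ARRAY);
    gl.glVertexPointer(3, GL.GL_FLOAT, 0, 0L);
    gl.glDrawElements(GL.GL_TRIANGLES, indexCount, GL.GL_UNSIGNED_INT, 0L);
}

If any one of those relationships breaks (for example, a deformed model changes size but the old indexCount or allocation size is still used for the second renderer instance), the driver reads past the allocation and can crash exactly as in the log above.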


How to use an external variable in linkage section in COBOL and pass values from it into a new module and write into my new output file

Could someone please tell me why a variable would be declared as "EXTERNAL" in a module, how to use it in other modules through the LINKAGE SECTION, and how to pass it into new fields so I can write it to a new output file.
EXTERNAL items are commonly found in WORKING-STORAGE. These are normally not passed from one program to another via CALL and LINKAGE but shared directly via the COBOL runtime.
Declaring an item as EXTERNAL behaves like "runtime named global storage": you assign a name and a length to a global piece of memory and can access it anywhere in the same run unit (no direct CALL needed), even in cases like the following:
MAIN
  -> CALL B
       B: somevar EXTERNAL
       -> MOVE 'TEST' TO somevar
  -> CANCEL B
  -> CALL C
       C: somevar EXTERNAL -> now contains 'TEST'
On an IBM Z mainframe running z/OS, the runtime for all High Level Languages (HLLs) is called Language Environment (LE). Decades ago, each HLL had its own runtime, and this caused some problems when they were mixed into the same run unit; starting in the early 1990s, IBM switched all HLLs to LE for their runtime.
LE has the concept of an enclave. Part of the text at that link says an enclave is the equivalent of a run unit in COBOL.
Your question is tagged CICS, and sometimes behavior is different when running in that environment. Quoting from that link...
Under CICS the execution of a CICS LINK command creates what Language Environment calls a Child Enclave. A new environment is initialized and the child enclave gets its runtime options. These runtime options are independent of those options that existed in the creating enclave.
[...]
Something similar happens when a CICS XCTL command is executed. In this case we do not get a child enclave, but the existing enclave is terminated and then reinitialized with the runtime options determined for the new program. The same performance considerations apply.
So, as @SimonSobich noted, if you use CALLs to invoke your subroutines when running in CICS, EXTERNAL data is global to the run unit. But if you use EXEC CICS XCTL to invoke your subroutines, you may see different behavior and have to design your application differently.

Get H2OFrame as object instead of getting reference to a location in H2O cluster

We created and trained a model using the H2O libraries, configured H2O in an OpenShift container, and deployed the trained model for real-time inference. It worked well when we had one container, but when we scaled up to handle the increase in transaction volume, we encountered an issue with the stateful nature of the H2OFrame. Please see my sample code:
Step-1: Converts the JSON dictionary into a pandas frame.
Step-2: Converts the pandas frame into an H2O frame.
Step-3: Runs the model with the H2O frame as input.
Here step-2 returns a handle to data stored in the container. "H2OFrame is similar to pandas’ DataFrame, or R’s data.frame. One of the critical distinction is that the data is generally not held in memory, instead it is located on a (possibly remote) H2O cluster, and thus H2OFrame represents a mere handle to that data." So step-3's request must go to the same container; if not, it cannot find the H2OFrame and throws an error.
# Step-1: convert the JSON dictionary to a data frame using pandas
ToBeScored = pd.DataFrame([jsonDictionary])
# Step-2: convert the pandas data frame to an H2O frame
ToBeScored_hex = h2o.H2OFrame(ToBeScored)
# Step-3: run the model
outPredections = rf_model.predict(ToBeScored_hex)
If the H2OFrame could be returned as an in-memory object in step-2, then the stateful nature could be avoided. Is there any way to do that?
Or can the H2O clustering be configured to store the H2OFrame in such a way that it is accessible from any OpenShift container in the cluster?
Useful links (H2O's predict function accepts data only in H2OFrame format):
Predict function - http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/model_categories.html#h2o.model.model_base.ModelBase.predict
H2O frame data type - http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/frame.html
Update on 6/19/2019, a follow-up question to @ErinLeDell's clarification:
We upgraded to H2O 3.24 and used a MOJO model. We removed step 2 and replaced step 3 with this function call:
import h2o as h
result = h.mojo_predict_csv(input_csv_path="PredictionDataRow.csv",
                            mojo_zip_path="rf_model.zip",
                            genmodel_jar_path="h2o-genmodel.jar",
                            java_options='-Xmx512m -XX:ReservedCodeCacheSize=256m',
                            verbose=True)
Internally it executed the command below, which initializes a new JVM and starts a local H2O server for every call. (The H2O local server is initialized in order to find the path to java.)
java = H2OLocalServer._find_java()  # find the java path, then build the command line below
C:\Program Files (x86)\Common Files\Oracle\Java\javapath\java.exe -Xmx512m -XX:ReservedCodeCacheSize=256m -cp h2o-genmodel.jar hex.genmodel.tools.PredictCsv --mojo C:\Users\admin\Documents\Code\python\rf_model.zip --input PredictionDataRow.csv --output C:\Users\admin\Documents\Code\python\prediction.csv --decimal
Question-1: Is there any way to use an existing JVM and not spawn a new one for every transaction?
Question-2: Is there a way to pass the java path to avoid the H2O local server initialization? Is H2OLocalServer required for anything other than finding the java path? If it cannot be avoided, is it possible to initialize the local server once and direct new requests to the existing H2O local server instead of starting a new one?
An alternative is to use an H2O MOJO model (instead of a binary model which needs to exist in H2O cluster memory to make predictions). MOJO models can sit on disk and do not require a running H2O cluster. Then you can skip Step 2 and use the h2o.mojo_predict_pandas() function in Step 3.
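As a sketch of another route (an assumption on my part, not part of the answer above): since the update already ships h2o-genmodel.jar, you can skip the PredictCsv command-line tool entirely and embed the genmodel library in your own long-running JVM, which also addresses Question-1. The class below is hypothetical and assumes a binomial classifier and made-up feature names:

import hex.genmodel.MojoModel;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.prediction.BinomialModelPrediction;

public class MojoScorer {
    private final EasyPredictModelWrapper model;

    public MojoScorer(String mojoPath) throws java.io.IOException {
        // Load the MOJO from disk once at startup; no H2O cluster needed.
        this.model = new EasyPredictModelWrapper(MojoModel.load(mojoPath));
    }

    public String score(RowData row) throws Exception {
        BinomialModelPrediction p = model.predictBinomial(row);
        return p.label;  // p.classProbabilities holds the per-class scores
    }

    public static void main(String[] args) throws Exception {
        MojoScorer scorer = new MojoScorer("rf_model.zip");
        RowData row = new RowData();
        row.put("feature1", "value1");  // hypothetical feature name and value
        System.out.println(scorer.score(row));
    }
}

Because the model object is reused for every request, there is no per-transaction JVM spawn and no H2OLocalServer involved, and each OpenShift container can score independently of the others.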

Can't write into iomem region in qemu using gdb

I'm trying to add a new device in QEMU.
In the respective CPU file, I used sysbus_mmio_map to set the base address:
sysbus_mmio_map(SYS_BUS_DEVICE(&s->brif), 0, BASE_ADDRESS);
In the newly created device file:
memory_region_init_io(&s->iomem, obj, &ops, s, "brif", SIZE);
sysbus_init_mmio(SYS_BUS_DEVICE(obj), &s->iomem);
The ops structure has the corresponding read and write handlers.
My read handler gets called when I read the I/O memory region using GDB, but my write handler does not get called when I write to the region using GDB.
What am I missing?
Update: the write handler does get called if I write to the I/O memory region from code running inside the guest; the problem occurs only when I write from GDB.
I believe it's just a bug. See this bug report (with a patch included).

Output JRuby heap space from application code

Is there a way of outputting the Java heap space that the JRuby application was started with (like -J-Xms2048m) from the application code?
The value passed in at the JVM command line is available in the Runtime JMX bean provided by the JDK, as @stephanlindauer pointed out.
Another way that's less sensitive to how the JVM was launched is to use the Memory bean, like so:
membean = java.lang.management.ManagementFactory.memory_mx_bean
heap = membean.heap_memory_usage
puts heap.max
On my system with no special JRuby flags, this outputs "466092032" (roughly reflecting our default max of 500MB), and when specifying a 2GB maximum heap (jruby -J-Xmx2g) it outputs "1908932608".
Turns out you can puts values that were passed in via environment variables like this:
puts java.lang.System.getenv()['JRUBY_OPTS']
or
puts ENV['JRUBY_OPTS']
or
puts java.lang.management.ManagementFactory.getRuntimeMXBean().getInputArguments().to_s
Remember to require "java" first.

(cocos2d-x 3.1 + VS2012) TextureCache::addImageAsync causes a crash occasionally

I load some textures asynchronously at the beginning of my game, about 40-50 of them:
vector<string> textureFileNames;
textureFileNames.push_back("textures/particle.png");
textureFileNames.push_back("textures/menu_title.png");
...
textureFileNames.push_back("textures/timer_bar.png");
for (auto fileName : textureFileNames)
{
    Director::getInstance()->getTextureCache()
        ->addImageAsync(fileName, CC_CALLBACK_1(LoadingLayer::textureLoadedCallback, this));
}
My textureLoadedCallback method does nothing funky; at this stage it simply increments a value and updates a progress timer. The callback is called from the main thread by cocos2d-x design, so I don't suspect any problems arise from there.
90% of the time this works fine. But sometimes it crashes in VS2012 midway through loading the textures:
Debug Assertion Failed!
Program: C:\Windows\system32\MSVCP110D.dll
File: C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\include\vector
Line: 1140
Expression: vector subscript out of range
Breaking at this point, I can see that it dies in the internals of vector, specifically the [] operator, and traces back through _Hash to the TextureCache::loadImage() method: auto it = _textures.find(asyncStruct->filename) on line 174 of CCTextureCache.cpp. _textures is defined as std::unordered_map<std::string, Texture2D*> _textures, a standard library unordered map. asyncStruct->filename resolves to the full path and filename of a texture to load.
Using the debugger, I can see that the filename is fine. I can see that _textures already contains the 19 textures before this one that it has processed just fine.
The fact that it seems to just be dying in the midst of the standard library doesn't strike me as right... but I'm unable to determine where CCTextureCache goes wrong. Only that it doesn't always fail, and that it's failing in an asynchronous thread. There's no concurrency bollocks going on with my code (as far as I know).
Is this a cocos2d-x bug, a VS2012 bug or a bug with my code I pasted above?
I think a potential cause could be that the for loop issues all 40-50 async image loads at once, which may or may not be supported by that method. Try issuing the next async load only after your texture callback is called; that way no two async loads are performed simultaneously.