I'm getting this exception randomly in my 3D game (using the libgdx nightly build from 2014-01-03). Sometimes it doesn't occur for days, sometimes five times in 10 minutes. It almost never happens on the emulator. I haven't found a reproducible scenario in weeks; it can happen even when I'm not touching the tablet at all. (I had a feeling that moving the camera or touching the screen makes it more likely, but I couldn't prove it.) It usually happens when a long worker thread (the computer 'thinking'; continuous rendering is turned off during this) finishes and some of the models are being repositioned to show the results.
It always happens on the same model instance, which consists of 100 nodes (each node is a textured box built from six rect() calls). Some nodes may be in a short animation. I'm not requesting an iterator anywhere in my own code; I walk through the nodes with a plain for() loop because their number is fixed.
Any suggestions on how to start investigating this are appreciated. The only clue I have is that when these boxes were 100 separate model instances, the exception occurred somewhat less frequently; today I merged them into one large model and have already hit the exception 10 times.
E/AndroidRuntime(30999): com.badlogic.gdx.utils.GdxRuntimeException: #iterator() cannot be used nested.
E/AndroidRuntime(30999): at com.badlogic.gdx.utils.Array$ArrayIterator.hasNext(Array.java:487)
E/AndroidRuntime(30999): at com.badlogic.gdx.graphics.g3d.ModelInstance.getRenderables(ModelInstance.java:356)
E/AndroidRuntime(30999): at com.badlogic.gdx.graphics.g3d.ModelInstance.getRenderables(ModelInstance.java:328)
E/AndroidRuntime(30999): at com.badlogic.gdx.graphics.g3d.ModelBatch.render(ModelBatch.java:281)
E/AndroidRuntime(30999): at com.badlogic.gdx.graphics.g3d.ModelBatch.render(ModelBatch.java:296)
It seems it was a threading problem. Most of my models are manipulated via AnimationController, but there was one place where a direct move was issued from another thread:
modelInstanceTiles.nodes.get(nodeIndex).translation.set(pos);
modelInstanceTiles.nodes.get(nodeIndex).rotation.set(rot);
modelInstanceTiles.calculateTransforms();
I changed this to a very fast animation and have had no exception for 3 days now.
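For reference, another way to fix this kind of race (just a sketch, not what I actually shipped) is to hand the node update over to the render thread with Gdx.app.postRunnable(), so the nodes are never modified while ModelBatch may be iterating them. The variables are the same ones as in the snippet above:

final int idx = nodeIndex;
final Vector3 newPos = new Vector3(pos);
final Quaternion newRot = new Quaternion(rot);
Gdx.app.postRunnable(new Runnable() {
    @Override
    public void run() {
        // Runs on the render thread before the next frame is drawn,
        // so it cannot overlap with ModelBatch iterating the same nodes.
        Node node = modelInstanceTiles.nodes.get(idx);
        node.translation.set(newPos);
        node.rotation.set(newRot);
        modelInstanceTiles.calculateTransforms();
    }
});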
I'm trying to validate that our company's code works when NServiceBus v4.3 uses the MaximumConcurrencyLevel value set up in the config.
The problem is that when I process 12k+ queued entries, I can't tell any difference in timings between the five different maximum concurrency levels I try. I set it to 1 and can process the queue in 8m; then I put it to 2 and get 9m, which seems interesting (I was expecting more, but it's still going in the right direction); but then I put 3, 4, 5 and the timings stay at around 8m. I was expecting much better throughput.
My question is: how can I verify that NServiceBus is actually using five threads to process entries on the queue?
PS: I've tried setting MaximumConcurrencyLevel="1" and MaximumMessageThroughputPerSecond, along with logging Thread.CurrentThread.ManagedThreadId, thinking/hoping I would ONLY see one ThreadID value, but I'm seeing quite a few different ones, which surprised me. My plan was to see one, then bump the maximum concurrency level to 5 and hopefully see five different values.
What am I missing? Thank you in advance.
There can be multiple reasons why you don't see faster processing times when increasing the concurrency setting, which is described on the official documentation page: http://docs.particular.net/nservicebus/operations/tuning
You mentioned you're using MaximumMessageThroughputPerSecond, which will negate any performance gains from parallel message processing if a low value has been configured. Try removing this setting if possible.
Maybe you're accessing a resource in your handlers which doesn't support, or isn't optimized for, parallel access.
NServiceBus internally schedules the processing logic on the thread pool. This means that even with a MaximumConcurrencyLevel of 1, you will most likely see a different thread processing each message, since there is no thread affinity. But the configuration values work as expected; if your queue contains 5 messages:
it will process these messages one by one if you configured MaximumConcurrencyLevel to 1
it will process all messages in parallel if you configured MaximumConcurrencyLevel to 5.
Depending on your handlers, it can of course happen that the first message has already been processed by the time the fifth message is read from the queue.
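This is not NServiceBus code, but a minimal Java sketch of the same scheduling idea (a concurrency limit enforced by a semaphore over a shared thread pool). It illustrates why logging thread IDs is misleading, and why counting concurrently active handlers is a better check:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ConcurrencyProbe {
    public static void main(String[] args) throws Exception {
        int maximumConcurrencyLevel = 1;              // try 1, then 5, and compare maxObserved
        Semaphore slots = new Semaphore(maximumConcurrencyLevel);
        ExecutorService pool = Executors.newCachedThreadPool();
        AtomicInteger active = new AtomicInteger();
        AtomicInteger maxObserved = new AtomicInteger();

        for (int i = 0; i < 100; i++) {               // 100 "messages"
            slots.acquire();                          // at most N handlers in flight
            pool.submit(() -> {
                try {
                    int now = active.incrementAndGet();
                    maxObserved.accumulateAndGet(now, Math::max);
                    // Even with a limit of 1, messages land on different pool threads:
                    // there is no thread affinity.
                    System.out.println("thread=" + Thread.currentThread().getId());
                    Thread.sleep(10);                 // simulated handler work
                } catch (InterruptedException ignored) {
                } finally {
                    active.decrementAndGet();
                    slots.release();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("max concurrent handlers observed: " + maxObserved.get());
    }
}

With the limit at 1 the printed thread IDs still vary, but maxObserved stays at 1; bump the limit to 5 and maxObserved should reach 5. Tracking concurrently running handlers like this (for example with a counter in your own handlers) is a more reliable way to verify the setting than looking at thread IDs.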
I'm trying to improve the FPS of my deferred renderer, and so I stumbled on chrome://tracing (here and here). I added console.time and console.timeEnd calls for every layer I render. You can see them here:
http://i.imgur.com/Rh5jfpN.jpg
My trace points are the topmost items starting with node/...; together they run for about 2 ms. Everything, including all the gl* calls, happens within these two milliseconds. My current frame rate is around 38 FPS, so one frame accounts for around 28 ms. Now another, zoomed-out picture with the GPU process:
http://i.imgur.com/sM4aAXB.jpg
You can still see the trace points there as tiny bars at the top. Strangely, Chrome renders two frames very quickly, then runs these mysterious DoSwapBuffers / Onscreen tasks that halt the rendering for about 25 ms, plus a MakeCurrent task that overlaps with the second of the two fast consecutive frames and takes 15 ms.
It seems to me that Chrome is deferring all the WebGL work until later, which makes any kind of profiling impossible. But that's just my guess. What should I make of this, and how can I profile my WebGL code?
As the GPU runs independently of the main JS thread, it is not synchronized with the JS activity as long as there is no direct feedback of the rendered data into the JS context. So issuing a WebGL command does not block JavaScript from going on, and the frame is only presented once the GPU has finished all operations issued by JS.
So it is impossible to see the actual computation time for any WebGL operation in this trace.
There is, however, a simple trick: synchronize the JS thread by pulling some rendering-dependent data back into the JS world. The easiest way to do this is to render some pixels of every intermediate texture to the 3D canvas, and draw this canvas into a 2D canvas with the 2D context's drawImage method (and, just to be sure, access that canvas with ctx.getImageData()). This method is quite slow and will add some time, but it also makes all WebGL draw operations visible to profiling / tracing.
Beware that this may also skew the results to some extent, as everything is forced to compute in order and the GPU driver cannot optimize independent draw operations to interleave with each other.
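Purely to illustrate the pattern, here is the same readback trick sketched in Java/libgdx terms rather than WebGL (in WebGL the equivalent would be gl.readPixels, or the drawImage/getImageData route described above); the class and method names are made up:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.utils.BufferUtils;
import com.badlogic.gdx.utils.TimeUtils;

import java.nio.ByteBuffer;

public class GpuTimingSketch {
    private final ByteBuffer onePixel = BufferUtils.newByteBuffer(4);

    /** Times a render pass by forcing the driver to finish all queued GL work. */
    public long timePassNanos(Runnable drawPass) {
        long start = TimeUtils.nanoTime();
        drawPass.run();                               // issue the draw calls for this pass
        // Reading even a single pixel back drains the pipeline, so the elapsed time
        // now includes the GPU work, at the cost of serializing CPU and GPU,
        // which skews the absolute numbers.
        Gdx.gl.glReadPixels(0, 0, 1, 1, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, onePixel);
        return TimeUtils.nanoTime() - start;
    }
}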
I am making a racing game in libGDX. My game's APK size is 9.92 MB and I am using four texture packs with a total size of 9.92 MB. My game runs fine on desktop, but it runs very slowly on an Android device. What is the reason behind this?
There are a few pitfalls that we tend to neglect while programming.
Desktop processors are far more powerful, so a game may run smoothly on desktop but slow down on a mobile device.
Here are some key points you should follow for smooth game flow:
No I/O operations in the render method.
Avoid creating objects in the render method.
Objects must be reused (for instance, if your game has 1000 platforms but only 3 can be displayed on the current screen, then instead of making 1000 objects make 5 or 6 and reuse them). You can use the Pool class provided by libGDX for object pooling; see the sketch after this list.
Try to load only those assets which are necessary for the current screen.
Check your logcat to see whether the garbage collector is being called. If so, try using the finalize method of the Object class to find out which objects are being collected as garbage, and improve on that.
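A minimal sketch of pooling with libGDX's Pool and Poolable (the Bullet class here is a made-up example, not something from the question):

import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.utils.Pool;
import com.badlogic.gdx.utils.Pool.Poolable;

public class Bullet implements Poolable {
    public final Vector2 position = new Vector2();
    public boolean alive;

    // The pool creates new instances only when it has none left to reuse.
    public static final Pool<Bullet> POOL = new Pool<Bullet>() {
        @Override
        protected Bullet newObject() {
            return new Bullet();
        }
    };

    /** Called by Pool.free() so the instance is clean for the next reuse. */
    @Override
    public void reset() {
        position.set(0, 0);
        alive = false;
    }
}

Usage: call Bullet.POOL.obtain() instead of new Bullet(), and Bullet.POOL.free(bullet) when it leaves the screen instead of letting the garbage collector reclaim it.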
Good luck.
I've got some additional tips for improving performance:
Try to minimize texture bindings (or bindings in general, for example when you're making a 3D game) in your render loop. Use texture atlases and, after binding a texture, use it for as many draws as possible before binding another one.
Don't display things that are not in the frustum/viewport. Calculate first whether the drawn object can even be seen by the active camera. If it can't be seen, just don't send it to your GPU when rendering!
Don't call spriteBatch.begin() and spriteBatch.end() too often in the render loop, because every time you begin/end a batch, it is flushed and its data is sent to the GPU.
Do NOT load assets while rendering, unless you're doing it in another thread.
The latest versions of libGDX also provide a GLProfiler with which you can measure how many draw calls, texture bindings, vertices, etc. you have per frame. I'd strongly recommend this, since there can always be overhead in memory or computation where you wouldn't expect it; see the sketch after this list.
Use libGDX's Poolable interface and the Pool class for pooling objects and minimizing object-creation time, since creating objects can cause tiny but noticeable stutters in your game's render loop.
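A sketch of hooking up GLProfiler; newer libGDX versions use the instance API shown here, while older ones expose the same counters as static members of GLProfiler:

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.profiling.GLProfiler;

public class ProfiledRenderer {
    private GLProfiler profiler;

    public void create() {
        profiler = new GLProfiler(Gdx.graphics);
        profiler.enable();                        // wraps the GL calls so they get counted
    }

    public void render() {
        // ... draw everything as usual ...

        Gdx.app.log("GLProfiler",
                "draw calls: " + profiler.getDrawCalls()
                + ", texture bindings: " + profiler.getTextureBindings()
                + ", shader switches: " + profiler.getShaderSwitches()
                + ", vertices: " + (int) profiler.getVertexCount().total);
        profiler.reset();                         // the counters are per frame, so reset every frame
    }
}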
By the way, without any additional information, no one's going to give you a good or precise answer. If you think it's not worth it to write enough text or information for your question, why should it be worth it to answer it?
To really understand why your game is running slowly, you need to profile your application.
There are free tools available for this.
On Desktop you can use VisualVM.
On Android you can use Android Monitor.
With profiling you will find exactly which methods are taking up the most time.
A likely cause of slowdowns is texture binding. Do you switch between different pages of packed textures often? Try to draw everything from one page before switching to another page.
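As a sketch of "draw everything from one page before switching", you can sort the sprites by their texture so that each atlas page is bound only once per frame (SortedDrawing is a made-up helper):

import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.utils.Array;

public class SortedDrawing {
    /** Drawing sprites grouped by texture keeps SpriteBatch from flushing on every page switch. */
    public void draw(SpriteBatch batch, Array<Sprite> sprites) {
        // Sort so that sprites sharing a texture (atlas page) are drawn back to back.
        sprites.sort((a, b) -> Integer.compare(
                System.identityHashCode(a.getTexture()),
                System.identityHashCode(b.getTexture())));
        batch.begin();
        for (Sprite s : sprites) {
            s.draw(batch);                 // SpriteBatch only flushes when the texture changes
        }
        batch.end();
    }
}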
The answer is likely a little more than just "computer fast; phone slow". Rather, it's important to note that your computer's Java VM is likely Oracle's very nicely optimized JVM, while your phone's Java VM is likely Dalvik, which, to say nothing else of its performance, does not have the same optimizations for object creation and management.
As others have said, libGDX provides a Pool class for just this reason. Take a look here: https://github.com/libgdx/libgdx/wiki/Memory-management
One very important thing in libGDX is to make sure that asset loading never happens in the render() method. Make sure you load assets at the right times and that they are not being loaded inside render().
Another very important thing is to do your calculations independently of rendering, in the sense that the next frame should not have to wait for calculations to finish.
These are the 2 major things I encountered when I was making the Snake game tutorial.
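To keep asset loading out of render() (the first point above), libGDX's AssetManager is the usual tool; here is a minimal sketch with made-up file names:

import com.badlogic.gdx.assets.AssetManager;
import com.badlogic.gdx.graphics.Texture;

public class LoadingExample {
    private final AssetManager assets = new AssetManager();
    private boolean loaded;

    public void show() {
        // Queue everything up front, outside of render().
        assets.load("car.png", Texture.class);
        assets.load("track.png", Texture.class);
    }

    public void render(float delta) {
        if (!loaded) {
            // update() does a small slice of the loading per call and returns true when done,
            // so render() never blocks on disk I/O.
            loaded = assets.update();
            // ... draw a loading indicator here ...
            return;
        }
        Texture car = assets.get("car.png", Texture.class);
        // ... normal drawing with the fully loaded assets ...
    }
}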
Thanks,
Abhijeet.
One thing I have found is that drawing is expensive. This means that if you are drawing off-screen items, a lot of resources are wasted. If you simply check whether items are on screen before drawing them, your performance surprisingly improves by a lot.
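A sketch of such an on-screen check using the camera frustum (CullingSketch is a made-up helper, and it assumes the batch has already been begun by the caller):

import com.badlogic.gdx.graphics.Camera;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class CullingSketch {
    /** Draws the sprite only if its bounding box intersects the camera frustum. */
    public void drawIfVisible(Camera camera, SpriteBatch batch, Sprite sprite) {
        boolean visible = camera.frustum.boundsInFrustum(
                sprite.getX() + sprite.getWidth() / 2f,   // center x
                sprite.getY() + sprite.getHeight() / 2f,  // center y
                0f,                                       // center z (2D, so zero)
                sprite.getWidth() / 2f,                   // half extents
                sprite.getHeight() / 2f,
                0f);
        if (visible) {
            sprite.draw(batch);
        }
    }
}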
Points to ponder (from personal experience)
DO NOT keep calling a function in the render method that updates something like the time or score on the HUD. Make these updates only when required, e.g. update the score only when it actually increases (see the sketch below).
Make calls conditional (update only on a specific condition, not all the time).
E.g. calling/updating in the render method at 60 FPS means you update the time 60 times a second when it only needs to be updated once per second.
These points will have a huge effect on performance (thumbs up).
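A minimal sketch of the "only update the HUD when the value changes" idea, using a scene2d Label (the Hud class and its fields are made up):

import com.badlogic.gdx.scenes.scene2d.ui.Label;

public class Hud {
    private final Label scoreLabel;
    private int lastShownScore = -1;

    public Hud(Label scoreLabel) {
        this.scoreLabel = scoreLabel;
    }

    /** Call this every frame; the label text is only rebuilt when the score actually changes. */
    public void update(int currentScore) {
        if (currentScore != lastShownScore) {
            scoreLabel.setText("Score: " + currentScore);  // allocates a String, so do it rarely
            lastShownScore = currentScore;
        }
    }
}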
You need to check the image sizes in your game. If your images are too large, reduce their size using the following link: http://tinypng.org/.
It will help you.
I'm trying to implement a separable convolution filter using CUDA as part of a bigger application I'm working on. My code has multiple CUDA kernels which are called one after the other (each performing one stage of the process). The problem is that I keep getting this weird error and I'm not sure what exactly it means or what is causing it. I also can't find anything about it on the Internet, except a couple of Stack Overflow questions related to OpenGL and CUDA interoperability (which I'm not doing, i.e. I'm not using OpenGL at all).
Can someone please explain to me why such an error may occur?
Thanks.
I'm making a game in AS3 that requires a huge number of bullets to be fired sequentially in an extremely short amount of time. For example, at a certain point, I need to fire one bullet every 1-5 milliseconds for about 1 second. The game runs smoothly at 60 FPS with around 800+ objects on screen, but the timers don't seem to be able to tick faster than my frame rate (around once every 16 milliseconds). I only have one enterFrame handler going, which everything else updates from.
Any tips?
The 16 milliseconds sounds about right... According to the docs, the timer has a resolution no smaller than 16.6 milliseconds:
delay:Number — The delay between timer events, in milliseconds. A delay lower than 20 milliseconds is not recommended. Timer frequency is limited to 60 frames per second, meaning a delay lower than 16.6 milliseconds causes runtime problems.
I would recommend that you create x objects (bullets) off-screen, at different offsets, on each tick to get the required number of objects within 1 second. This assumes that your game allows off-screen enemies to shoot.
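The question is AS3, but the pattern is language-neutral, so here is a Java sketch of spawning all the bullets "owed" for a frame in one go, each offset as if it had been fired earlier within that frame (BulletEmitter and its fields are made up):

import java.util.ArrayList;
import java.util.List;

public class BulletEmitter {
    private final double fireIntervalMs = 2.0;   // desired rate: one bullet every ~2 ms
    private double timeSinceLastShotMs = 0.0;

    /** Called once per frame (roughly every 16.6 ms); returns the head-start offsets of bullets to spawn. */
    public List<Double> update(double frameDeltaMs, double speedPerMs) {
        List<Double> headStarts = new ArrayList<>();
        timeSinceLastShotMs += frameDeltaMs;
        while (timeSinceLastShotMs >= fireIntervalMs) {
            timeSinceLastShotMs -= fireIntervalMs;
            // Offset the bullet along its path as if it had been fired earlier during the
            // frame, so roughly 8 bullets per frame still look evenly spaced.
            headStarts.add(timeSinceLastShotMs * speedPerMs);
        }
        return headStarts;
    }
}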
How can you possibly have 800+ objects on screen? Is each object a single pixel, or is the entire screen just filled? To be fair, I have a 1920x1080 screen in front of me, so each object could be 2 pixels wide and 2 pixels tall and 800 of them laid side by side still wouldn't quite fill the screen width (1600 pixels). I'm just curious why you would have such a scenario, as I've been toying with game development a bit.
As for the technical question: a Timer is not guaranteed to be triggered at the exact moment the delay expires (just some time after); it depends on how quickly the runtime gets around to processing the code for the timer tick. My guess is that having so many objects is exhausting the CPU (on *NIX systems use top in the console, or on Windows use Task Manager: is it maxing out a core?). That can confirm or deny it; alternatively, turn off the creation/updating of your objects and see whether the timer then ticks at the correct rate. If either is true, it suggests the CPU is maxed out.
Consider using Stage3D to offload the drawing to the GPU and free up the CPU to run your Timer. You may also want to consider a "game framework" like Flixel to help manage your resources, though I don't know whether it takes advantage of the GPU... I actually just Googled it and found an interesting post discussing this:
http://forums.flixel.org/index.php?topic=6101.0