As many of you most likely know, Flash CS4 integrates with the GPU. My question is: is there a way to make all of your rendering execute on the GPU, or can I not get that much access?
The reason I ask is that with regard to Flash 3D, nearly all existing engines are software renderers. However, I would like to build on top of one of these existing engines and convert it into as much of a hardware renderer as possible.
Thanks for your input
Regards
Mark
First off, it's not Flash CS4 that is hardware accelerated, it is Flash Player 10 that does it.
Apparently "The player offloads all raster content rendering (graphics effects, filters, 3D objects, video etc) to the video card". It does this automatically. I don't think you get much choice.
The new GPU-accelerated abilities of Flash Player 10 are not something that is accessible to you as a developer; it's simply accelerated blitting that's done "over your head".
The closest you can get to the hardware is Pixel Bender filters. They are basically Flash's equivalent of pixel shaders. However, due to (afaik) cross-platform consistency issues, these do not actually run on the GPU when run in the Flash player (they're available in other Adobe products, and some of those do run them on the GPU).
So, as far as real hardware acceleration goes the pickings are pretty slim.
If you need all the performance you can get, Alchemy can be something worth checking out. It's a project that allows cross-compiling C/C++ code to the AVM2 (the virtual machine that runs ActionScript 3), and it does some nifty tricks to achieve better performance (thanks to the non-dynamic nature of those languages).
Wait for Flash Player 11 to be released as a beta in the first half of next year. It should be awesome.
I've been developing a game using the HTML5 canvas for several months, and I've recently begun doing some of my development work on a MacBook. In spite of a smooth frame rate of ~60fps, after a few seconds the game pushes the MacBook GPU up past 80 degrees C. I've also noticed on my desktop machine (which has a Radeon 7870 video card) that after a while the GPU temperature rises and the fans kick in.
Considering it's a 2D game without any particularly fancy effects or too much going on, running at a reasonable resolution, this seems to indicate a major performance issue, as the GPU is apparently being taxed a great deal. I'm already implementing many of the performance optimisations I've seen recommended (rendering at integer coordinates, no shadows, offscreen prerendering). Profiling the game reveals that by far the most time is consumed by the drawImage calls, but I'd expect a frame rate drop and/or other indications of lagging performance if this were the cause of the heat issue; yet the framerate is beautiful on the MacBook, with no lag whatsoever.
To try and address this I've recently split the game into multiple layers and used pre-rendering to avoid unnecessary redrawing of layers (roughly as sketched below), but this has actually made the frame rate significantly worse, and has not solved the heat issue at all. At this point I'm wondering whether any other optimisations I make will have any effect (e.g. avoiding unnecessary fillStyle changes), or whether I'd be wasting my time.
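For reference, my layering/pre-rendering approach looks roughly like this (simplified; the two draw helpers are placeholders for my actual drawing code):

    // Placeholder drawing helpers standing in for the real game code
    function drawStaticBackground(c: CanvasRenderingContext2D): void { /* tiles, terrain... */ }
    function drawSprites(c: CanvasRenderingContext2D): void { /* moving objects only */ }

    // Static layer: drawn once into an offscreen canvas
    const bgLayer = document.createElement("canvas");
    bgLayer.width = 960;  // placeholder resolution
    bgLayer.height = 540;
    drawStaticBackground(bgLayer.getContext("2d")!);

    // Per frame: blit the cached layer, then draw only the dynamic objects
    function render(ctx: CanvasRenderingContext2D): void {
        ctx.drawImage(bgLayer, 0, 0); // one cheap copy instead of many primitive draws
        drawSprites(ctx);
        requestAnimationFrame(() => render(ctx));
    }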
I'd be grateful if anyone can provide advice or shed light on the source of this problem. A relatively basic 2D game should not cause this degree of GPU heat, and I ideally need it to be playable on laptops and lower-end devices, preferably without setting fire to them :)
Try a minimal project, like the project templates provided with most engines. Or just draw a sprite and nothing else. If this shows the same behavior, you can't do anything about it. It might also be an engine, driver or browser bug.
You have to consider that on the Windows desktop the GPU is typically idle and does minimal work to draw things. In a 2D game, however, all shader units etc. run at full speed to provide the best possible performance. Only the latest models and drivers (hey, try a driver update!) allow the GPU to throttle in games when it determines that the game doesn't require full performance to run at 60 fps. So even if it's a simple 2D game, the GPU might still fire up because it enters "reroute all power to the shaders" mode.
Also note that 2D games are all essentially 3D games, just with a fixed or orthographic projection. From the perspective of the GPU a 2D game is just another 3D game; it merely happens that the world is only translated along two axes at most.
I made an AIR application that uses the Logitech C920 webcam for image capturing. The camera can display and record 720p and 1080p perfectly when I'm using the Logitech software. But when I use 720p in my AIR app there is obvious lag. Still usable, but annoying. 1080p is unacceptable in terms of lag. All this is on an i3 laptop. On my i7 desktop there is much less lag and I can do 1080p, but it's still not nearly as good as when I'm using the Logitech software.
The other odd thing is that my older camera, the Logitech 9000, doesn't seem to work properly anymore in Flash or AIR. The lag times are several seconds.
My questions are: (1) Do the Logitech drivers use GPU acceleration to make the webcams work lightning fast even on a slow i3 computer, while Flash cannot?
(2) Why does the older camera give such crummy performance now, whether on the i3 or the i7? Did Flash change how it handles cameras or something?
(3) Will the Flash Player be updated to allow GPU acceleration for webcams?
The simple truth is that Flash's performance is not good enough for realtime video applications.
Maybe the lag would be better if you could attach your webcam to StageVideo (GPU decoding) instead of using the old Video class. Or you could simply lower the resolution...
If you want to develop realtime video applications you should take a look at Cinder or openFrameworks instead. Both use C++ and their performance is amazing. I have personally done projects involving 4K video on multiple monitors with Cinder.
Another option would be Max/MSP, which is much more powerful than Flash in terms of video performance, and you program visually using nodes and boxes.
I am making a racing game in libGDX. My game's APK size is 9.92 MB, and I am using four packed texture atlases totalling 9.92 MB. My game runs fine on desktop, but it runs very slowly on an Android device. What is the reason behind this?
There are a few pitfalls that we tend to neglect while programming.
Desktop processors are far more powerful, so the game may run smoothly on desktop but slowly on a mobile device.
Here are some key points you should follow for smooth gameplay:
No I/O operations in the render method.
Avoid creating objects in the render method.
Reuse objects (for instance, if your game has 1000 platforms but only 3 can be displayed on the current screen, then instead of making 1000 objects make 5 or 6 and reuse them). You can use the Pool class provided by libGDX for object pooling; see the sketch after this list.
Try to load only those assets that are necessary for the current screen.
Check your logcat to see whether the garbage collector is being called. If so, try using the finalize method to find out which objects are being collected as garbage, and improve on that.
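A minimal sketch of that pooling idea, assuming a hypothetical Platform class (Pool and Poolable are real libGDX classes):

    import com.badlogic.gdx.utils.Pool;
    import com.badlogic.gdx.utils.Pool.Poolable;

    public class Platforms {
        // Hypothetical pooled game object; the pool calls reset() when it is freed
        public static class Platform implements Poolable {
            public float x, y;
            @Override public void reset() { x = y = 0; }
        }

        // newObject() only runs when no freed instance is available for reuse
        private final Pool<Platform> pool = new Pool<Platform>() {
            @Override protected Platform newObject() { return new Platform(); }
        };

        // Instead of "new Platform()" in the render loop:
        public Platform spawn(float x, float y) {
            Platform p = pool.obtain(); // reuses a freed Platform if one exists
            p.x = x;
            p.y = y;
            return p;
        }

        // When a platform scrolls off screen:
        public void despawn(Platform p) {
            pool.free(p); // calls p.reset() and stores it for later reuse
        }
    }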
Good luck.
I've got some additional tips for improving performance:
Try to minimize texture bindings (or bindings in general when you're making a 3D game, for example) in your render loop. Use texture atlases, and once a texture is bound, draw with it as much as possible before binding another texture.
Don't draw things that are not in the frustum/viewport. Calculate first whether the object can even be seen by the active camera; if it can't, just don't send it to the GPU when rendering (see the sketch after this list).
Don't call spritebatch.begin() and spritebatch.end() too often in the render loop, because every begin/end pair flushes the batch and uploads its data to the GPU.
Do NOT load assets while rendering, unless you're doing it asynchronously in another thread.
The latest versions of libGDX also provide a GLProfiler with which you can measure how many draw calls, texture bindings, vertices, etc. you have per frame. I'd strongly recommend it, since there can always be memory or computational overhead where you would not expect it.
Use libGDX's Poolable interface and the Pool class for pooling objects and minimizing object-creation time, since creating objects can cause tiny but noticeable stutters in your render loop.
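A minimal sketch of the visibility check from the second point, assuming a plain list of Sprite objects (Camera.frustum and boundsInFrustum are real libGDX APIs):

    import com.badlogic.gdx.graphics.OrthographicCamera;
    import com.badlogic.gdx.graphics.g2d.Sprite;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;

    // Draw only the sprites whose bounds intersect the camera frustum
    void drawVisible(OrthographicCamera camera, SpriteBatch batch, Iterable<Sprite> sprites) {
        camera.update();
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        for (Sprite s : sprites) {
            // Test the sprite's bounding box (center + half extents) against the frustum
            if (camera.frustum.boundsInFrustum(
                    s.getX() + s.getWidth() / 2f, s.getY() + s.getHeight() / 2f, 0f,
                    s.getWidth() / 2f, s.getHeight() / 2f, 0f)) {
                s.draw(batch); // only visible sprites reach the GPU
            }
        }
        batch.end();
    }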
By the way, without any additional information, no one's going to give you a good or precise answer. If you think it's not worth it to write enough text or information for your question, why should it be worth it to answer it?
To really understand why your game is running slow you need to profile your application.
There are free tools available for this.
On Desktop you can use VisualVM.
On Android you can use Android Monitor.
With profiling you will find exactly which methods are taking up the most time.
A likely cause of slowdowns is texture binding. Do you switch between different pages of packed textures often? Try to draw everything from one page before switching to another page.
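One way to measure this is libGDX's GLProfiler (the instance-based API of recent versions; older versions used static methods). A minimal sketch:

    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.profiling.GLProfiler;

    // Create once, e.g. in create(); it counts the GL calls made through Gdx.gl
    GLProfiler profiler = new GLProfiler(Gdx.graphics);
    profiler.enable();

    // At the end of each render() frame:
    Gdx.app.log("perf", "draw calls: " + profiler.getDrawCalls()
            + ", texture bindings: " + profiler.getTextureBindings());
    profiler.reset(); // clear the counters for the next frame

If the texture-binding count is much higher than your number of atlas pages, you are ping-ponging between pages and should reorder your draws.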
The answer is likely a little more than just "computer fast; phone slow". Rather, it's important to note that your computer's Java VM is likely Oracle's very nicely optimized JVM, while your phone's Java VM is likely Dalvik, which, to say nothing else of its performance, does not have the same optimizations for object creation and management.
As others have said, libGDX provides a Pool class for just this reason. Take a look here: https://github.com/libgdx/libgdx/wiki/Memory-management
One very important thing in libGDX: make sure that loading assets never happens in the render() method. Load your assets at the right times, so they are not being pulled in during the render loop (see the sketch below).
Another very important thing: try to make your calculations independent of rendering, in the sense that your next frame should not be stuck waiting for calculations to happen!
These are the two major things I encountered when I was making the Snake game tutorial.
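A minimal sketch of the first point using libGDX's AssetManager (the file name is hypothetical):

    import com.badlogic.gdx.assets.AssetManager;
    import com.badlogic.gdx.graphics.Texture;

    public class Assets {
        public final AssetManager manager = new AssetManager();

        // Queue assets up front, e.g. from a loading screen, NOT from render()
        public void queue() {
            manager.load("car.png", Texture.class); // hypothetical file name
        }

        // Call each frame on the loading screen; returns true when loading is done
        public boolean finished() {
            return manager.update();
        }

        // After loading, this is just a cheap lookup, safe to call while rendering
        public Texture car() {
            return manager.get("car.png", Texture.class);
        }
    }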
Thanks,
Abhijeet.
One thing I have found is that drawing itself is laggy. This means that if you are drawing offscreen items, you are burning a lot of resources for nothing. If you just check whether items are onscreen before drawing them, your performance improves by a surprising amount.
Points to ponder (From personal experience)
DO NOT keep calling a function in the render method that updates something like the time or score on the HUD. Make these updates only when required, e.g. when the score increases, ONLY then update the score display, etc.
Make calls conditional (perform updates on a specific condition, not all the time).
E.g. calling/updating in the render method at 60 FPS means you update the time 60 times a second when it only needs to be updated once per second (see the sketch below).
These points will have a huge effect on performance (thumbs up).
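A minimal sketch of that idea, assuming a hypothetical HUD wrapper around a scene2d Label:

    import com.badlogic.gdx.scenes.scene2d.ui.Label;

    // Hypothetical HUD wrapper: the label text is rebuilt only when the value
    // changes, not on every one of the 60 frames per second
    public class ScoreHud {
        private final Label label;
        private int lastScore = -1;

        public ScoreHud(Label label) {
            this.label = label;
        }

        // Call from render(): the cheap int comparison runs every frame, but the
        // string concatenation and label update run only when the score changed
        public void update(int score) {
            if (score != lastScore) {
                lastScore = score;
                label.setText("Score: " + score);
            }
        }
    }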
Check the image sizes in your game. If your images are large, reduce their size using something like http://tinypng.org/.
That should help you.
I've created a Flash application (AS3) in CS6 (Mac) that performs as expected when published as a Flash Projector. But when I publish it as an AIR app (v3.4.0.2540) the app's performance is about 50% worse than the Projector. I set it to use GPU hardware acceleration, with render mode set to Auto. Are there other settings I should be using? The performance hits come at expected times (when using MOUSE_MOVE and ENTER_FRAME listeners), but it works fine in the standalone player.
Setting render mode to auto will cause AIR to fall back to the CPU (see "renderMode"). If you are relying on the GPU (you didn't state whether you are or not but it is implied) this would likely cause significant performance degradation.
You need to set render mode to direct or gpu to take advantage of the GPU in AIR. I'm not entirely certain what the difference is but I've always used direct when working with Starling.
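For reference, render mode lives in the AIR application descriptor (app.xml); the relevant fragment looks like this, with the rest of the descriptor omitted:

    <initialWindow>
        <!-- "auto", "cpu", "gpu" or "direct"; Stage3D/Starling needs "direct" -->
        <renderMode>direct</renderMode>
    </initialWindow>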
Another thing to consider with AIR: are you publishing a release build or a debug build? Debug builds perform considerably worse than release builds.
As we know, many HTML5 renderers use the GPU to draw canvas elements. I'm wondering about using this ability to trigger the GPU and use it for GPGPU. There are probably no native GPGPU functions in the canvas API or HTML5, but what about a hack to do that?
I was thinking about using something like a texture (a 2D or 3D array) holding the values to be processed, and then asking a canvas element to perform some operation on this matrix. The operation would have to be a function that I can somehow send to the canvas element. Then we would have browser-based GPGPU.
Is such a thing possible? What do you think? Do you have any other ideas of how to implement this?
There is the WebCL standard, which was created exactly to give JavaScript running in the browser access to the GPU's computational power (provided the client has any). However, the list of existing implementations is pretty short.
Successful attempts to harness GPU power for general-purpose calculations came long before (and led to) the emergence of CUDA, OpenCL and similar GPGPU frameworks. Here is what looks like a good tutorial, and I guess it is portable to WebGL (which has much broader support than WebCL). See #MikkoOhtamaa's answer for a good introductory article about WebGL itself.
You probably want to use WebGL shaders for your nefarious purposes.
http://www.html5rocks.com/en/tutorials/webgl/shaders/
Shaders provide limited opportunities for parallel computations.
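To make the idea from the question concrete, here is a minimal sketch of the texture-in, shader-as-function, read-pixels-out pattern in WebGL 1. It doubles 16 byte values on the GPU; everything here is illustrative and deliberately unoptimized (real workloads would normally use float textures via the OES_texture_float extension):

    const gl = document.createElement("canvas").getContext("webgl") as WebGLRenderingContext;

    const W = 4, H = 4; // a 4x4 "matrix": one value per pixel, in the red channel
    const input = new Uint8Array(W * H * 4);
    for (let i = 0; i < W * H; i++) input[i * 4] = i; // values 0..15

    function shader(type: number, src: string): WebGLShader {
        const s = gl.createShader(type)!;
        gl.shaderSource(s, src);
        gl.compileShader(s);
        return s;
    }

    // The fragment shader is the "function we send to the canvas": it doubles each value
    const prog = gl.createProgram()!;
    gl.attachShader(prog, shader(gl.VERTEX_SHADER,
        "attribute vec2 pos; varying vec2 uv;" +
        "void main() { uv = pos * 0.5 + 0.5; gl_Position = vec4(pos, 0.0, 1.0); }"));
    gl.attachShader(prog, shader(gl.FRAGMENT_SHADER,
        "precision mediump float; uniform sampler2D data; varying vec2 uv;" +
        "void main() { gl_FragColor = vec4(texture2D(data, uv).r * 2.0, 0.0, 0.0, 1.0); }"));
    gl.linkProgram(prog);
    gl.useProgram(prog);

    // Upload the input array as a texture
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, W, H, 0, gl.RGBA, gl.UNSIGNED_BYTE, input);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

    // Render into an offscreen texture instead of the screen
    const out = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, out);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, W, H, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.bindFramebuffer(gl.FRAMEBUFFER, gl.createFramebuffer());
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, out, 0);

    // A full-screen quad makes the fragment shader run once per output pixel
    gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([-1, -1, 1, -1, -1, 1, 1, 1]), gl.STATIC_DRAW);
    const pos = gl.getAttribLocation(prog, "pos");
    gl.enableVertexAttribArray(pos);
    gl.vertexAttribPointer(pos, 2, gl.FLOAT, false, 0, 0);

    gl.viewport(0, 0, W, H);
    gl.bindTexture(gl.TEXTURE_2D, tex); // sample the input on texture unit 0
    gl.uniform1i(gl.getUniformLocation(prog, "data"), 0);
    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);

    // Read the result back to the CPU: the red channels now hold the doubled values
    const result = new Uint8Array(W * H * 4);
    gl.readPixels(0, 0, W, H, gl.RGBA, gl.UNSIGNED_BYTE, result);
    console.log(result[0], result[4], result[8]); // 0, 2, 4

The catch, as noted above, is that readPixels round-trips are slow and fragment shaders can't share state between "cells", which is why this approach only suits embarrassingly parallel problems.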