I've been developing a game using HTML5 canvas for several months, and I've recently begun doing some of my development work on a MacBook. Despite a smooth frame rate of ~60fps, after a few seconds the game pushes the MacBook GPU up past 80 degrees C. I've also noticed on my desktop machine (which has a Radeon 7870 video card) that after a while the GPU temperature rises and the fans kick in.
Considering it's a 2D game without any particularly fancy effects or too much going on, running at a reasonable resolution, this seems to indicate a major performance issue, as the GPU is apparently being taxed a great deal. I'm already implementing many of the performance optimisations I've seen recommended (rendering at integer coordinates, no shadows, offscreen prerendering). Profiling the game reveals that by far the most time is consumed by the drawImage calls, but if that were the cause of the heat issue I'd expect a frame rate drop and/or other indications of lagging performance; yet the frame rate is beautiful on the MacBook, with no lag whatsoever.
To try to address this I've recently split the game into multiple layers and used pre-rendering to avoid unnecessary redrawing of layers, but this has actually made the frame rate significantly worse and has not solved the heat issue at all. At this point I'm wondering whether any other optimisations I make will have any effect (e.g. avoiding unnecessary fillStyle changes), or whether I will be wasting my time.
I'd be grateful if anyone can provide advice or shed light on the source of this problem. A relatively basic 2D game should not cause this degree of GPU heat, and I ideally need it to be playable on laptops and lower-end devices, preferably without setting fire to them :)
Try a minimal project, like the project templates provided with most engines. Or just draw a sprite and nothing else. If this shows the same behavior, you can't do anything about it. It might also be an engine, driver or browser bug.
You have to consider that on the Windows desktop the GPU typically sits idle and does minimal work to draw things. In a game, however, even a 2D one, all shader units etc. run at full speed to provide the best possible performance. Only the latest models and drivers (hey, try a driver update!) allow the GPU to throttle in games when it determines that the game doesn't require full performance to run at 60 fps. So even if it's a simple 2D game, the GPU might still fire up because it enters "reroute all power to the shaders" mode.
Also note that 2D games are essentially all 3D games, just with a fixed or orthographic projection. From the perspective of the GPU a 2D game is just another 3D game; it just so happens that the world is only translated along two axes at most.
I am making a VR animation using A-Frame (HTML). My animation has many 3D models. The problem is that when I run the animation I get a low frame rate (15-20 fps) and a high number of draw calls (230-240). Because of this, both the animation and the camera controls are lagging. So, how can I increase the fps and reduce the draw calls?
The number of draw calls sounds high, but not so high as to cause a frame rate drop as low as 15-20 FPS (though it depends a bit on the spec of the system you are running on).
As well as looking at ways to reduce draw calls, you might also want to reduce the complexity of the models you are using, or the resolution of the textures, and look into other possible causes of performance problems.
Some options:
reducing texture resolutions - just open in a picture editor like Paint or GIMP and reduce the resolution. Keep textures to power of two resolutions where possible, e.g. 512 x 512 or 1024 x 1024.
reducing model complexity. Look at decimation. This is best done outside the browser as a pre-processing step, with a 3D modelling tool such as Blender. It is also worth checking how many meshes are in each model, and whether those can be combined into a single mesh.
reducing draw calls. You need to either merge geometries or (if you are using the same model multiple times) use instancing. Some suggestions for instancing here: Is there an instancing component available for A-Frame to optimize my scene with many repeated objects?. Merging geometries will involve writing some JavaScript code yourself, but might be a better option if you don't have repeated geometries.
If you haven't already done this, also worth reviewing this list of performance tips here: https://aframe.io/docs/1.2.0/introduction/best-practices.html#performance
It could be something else on that list, e.g. raycasting, garbage collection issues etc. that's causing the problem.
Using the browser debugger to profile your code may give some further clues as to what's going on with performance.
I am making a racing game in libGDX. My game's APK size is 9.92 MB, and I am using four packed texture atlases with a total size of 9.92 MB. My game runs fine on desktop, but it runs very slowly on an Android device. What is the reason behind this?
There are a few pitfalls we tend to neglect while programming.
Desktop processors are far more powerful, so a game may run smoothly on desktop but slowly on a mobile device.
Here are some key points you should follow for smooth game performance:
No I/O operations in the render method.
Avoid creating objects in the render method.
Objects must be reused (for instance, if your game has 1000 platforms but the current screen can only display 3, then instead of making 1000 objects make 5 or 6 and reuse them). You can use the Pool class provided by libGDX for object pooling; see the sketch after this list.
Try to load only those assets that are necessary for the current screen.
Check your logcat to see whether the garbage collector is being called. If so, try using the finalize method of the Object class to find out which classes' objects are being collected as garbage, and improve on that.
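To make the pooling point concrete, here is a minimal sketch using libGDX's Pool and Pool.Poolable classes; the Platform type and its fields are hypothetical stand-ins for whatever object your game spawns repeatedly.

```java
import com.badlogic.gdx.utils.Pool;

public class PlatformManager {
    // Hypothetical pooled object; reset() is called when it returns to the pool.
    public static class Platform implements Pool.Poolable {
        float x, y;
        boolean alive;

        @Override
        public void reset() {
            x = y = 0;
            alive = false;
        }
    }

    // The pool only creates new Platforms when none are free to reuse.
    private final Pool<Platform> platformPool = new Pool<Platform>() {
        @Override
        protected Platform newObject() {
            return new Platform();
        }
    };

    public Platform spawn(float x, float y) {
        Platform p = platformPool.obtain(); // reuses a freed instance if one exists
        p.x = x;
        p.y = y;
        p.alive = true;
        return p;
    }

    public void despawn(Platform p) {
        platformPool.free(p); // calls reset() and keeps the instance for reuse
    }
}
```

Obtaining from the pool instead of calling new inside render() keeps the garbage collector quiet, which matters far more on Dalvik than on a desktop JVM.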
Good luck.
I've got some additional tips for improving performance:
Try to minimize texture bindings (or bindings in general, when you're making a 3D game for example) in your render loop. Use texture atlases and, after binding a texture, try to use it for as many draws as possible before binding another one.
Don't display things that are not in the frustum/viewport. Calculate first whether the drawn object can even be seen by the active camera; if it can't be seen, just don't send it to the GPU when rendering!
Don't use spritebatch.begin() or spritebatch.end() too often in the render loop, because every begin/end flushes the batch and uploads its data to the GPU.
Do NOT load assets while rendering, except when you're doing it in another thread.
The latest versions of libGDX also provide a GLProfiler with which you can measure how many draw calls, texture bindings, vertices, etc. you have per frame; see the sketch after this list. I'd strongly recommend this, since there can always be situations with memory or computational overhead you would not expect.
Use libGDX Poolable (interface) objects and Pool for pooling objects and minimizing the cost of object creation, since creating objects can cause tiny but noticeable stutters in your game's render loop.
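As a rough illustration of the GLProfiler tip, here is how it can be hooked up in a recent libGDX version; older releases exposed the profiler through static methods instead, so treat the exact calls as version-dependent.

```java
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.profiling.GLProfiler;

public class ProfiledGame extends ApplicationAdapter {
    private GLProfiler profiler;

    @Override
    public void create() {
        profiler = new GLProfiler(Gdx.graphics);
        profiler.enable(); // wraps GL calls so they can be counted
    }

    @Override
    public void render() {
        // ... draw the scene as usual ...

        // The counters are per frame, so log and reset them at the end of render().
        Gdx.app.log("perf", "draw calls: " + profiler.getDrawCalls()
                + ", texture bindings: " + profiler.getTextureBindings()
                + ", vertices: " + (int) profiler.getVertexCount().total);
        profiler.reset();
    }
}
```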
By the way, without any additional information, no one's going to give you a good or precise answer. If you think it's not worth it to write enough text or information for your question, why should it be worth it to answer it?
To really understand why your game is running slowly you need to profile your application.
There are free tools available for this.
On Desktop you can use VisualVM.
On Android you can use Android Monitor.
With profiling you will find exactly which methods are taking up the most time.
A likely cause of slowdowns is texture binding. Do you switch between different pages of packed textures often? Try to draw everything from one page before switching to another page.
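One way to act on that is to keep draws grouped by the texture page behind each region, so SpriteBatch only flushes when the page actually changes. Below is a hedged sketch; the GameSprite holder is hypothetical and only the grouping idea matters.

```java
import java.util.Comparator;

import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.utils.Array;

public class SortedRenderer {
    // Hypothetical sprite holder: a region from a packed atlas plus a position.
    public static class GameSprite {
        TextureRegion region;
        float x, y;
    }

    // Any stable key that groups regions by their backing texture will do;
    // identity hash codes are enough to group within a single run.
    private final Comparator<GameSprite> byPage = (a, b) ->
            Integer.compare(System.identityHashCode(a.region.getTexture()),
                            System.identityHashCode(b.region.getTexture()));

    public void draw(SpriteBatch batch, Array<GameSprite> sprites) {
        sprites.sort(byPage); // group by atlas page so the batch rarely rebinds
        batch.begin();
        for (GameSprite s : sprites) {
            batch.draw(s.region, s.x, s.y);
        }
        batch.end();
    }
}
```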
The answer is likely a little more than just "computer fast; phone slow". Rather, it's important to note that your computer's Java VM is likely Oracle's very nicely optimized JVM, while your phone's Java VM is likely Dalvik, which, to say nothing else of its performance, does not have the same optimizations for object creation and management.
As others have said, libGDX provides a Pool class for just this reason. Take a look here: https://github.com/libgdx/libgdx/wiki/Memory-management
One very important thing in libGDX is to make sure that loading assets never happens in the render() method. Make sure you load assets at the right times and that the loading does not end up inside render().
Another very important thing is to try to make your calculations independent of rendering, in the sense that your next frame should not have to wait for calculations to finish!
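As a sketch of the first point, libGDX's AssetManager lets you queue loading up front and spread the work over frames instead of touching the file system inside render(); the file names below are placeholders.

```java
import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.assets.AssetManager;
import com.badlogic.gdx.graphics.Texture;

public class LoadingExample extends ApplicationAdapter {
    private AssetManager assets;
    private boolean loaded;

    @Override
    public void create() {
        assets = new AssetManager();
        // Queue assets here instead of loading them mid-frame.
        assets.load("car.png", Texture.class);   // placeholder file name
        assets.load("track.png", Texture.class); // placeholder file name
    }

    @Override
    public void render() {
        if (!loaded) {
            // update() loads a little each frame; show a loading screen meanwhile.
            loaded = assets.update();
            return;
        }
        Texture car = assets.get("car.png", Texture.class);
        // ... normal drawing with already-loaded assets ...
    }
}
```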
These are the two major things I encountered when I was making the Snake game tutorial.
Thanks,
Abhijeet.
One thing I have found is that drawing is expensive. This means that if you are drawing offscreen items, you are burning a lot of resources for nothing. If you just check whether items are onscreen before drawing them, your performance improves by a surprising amount.
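A minimal sketch of that check, assuming libGDX sprites and an orthographic camera (and ignoring camera zoom for simplicity):

```java
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.math.Rectangle;

public class CullingExample {
    // Skip anything whose bounding box does not overlap the camera's view rectangle.
    public void drawVisible(SpriteBatch batch, OrthographicCamera camera, Iterable<Sprite> sprites) {
        Rectangle view = new Rectangle(
                camera.position.x - camera.viewportWidth / 2f,
                camera.position.y - camera.viewportHeight / 2f,
                camera.viewportWidth,
                camera.viewportHeight);

        for (Sprite s : sprites) {
            if (view.overlaps(s.getBoundingRectangle())) {
                s.draw(batch); // only onscreen sprites cost a draw call
            }
        }
    }
}
```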
Points to ponder (From personal experience)
DO NOT keep calling a function in the render method that updates something like the time or score on the HUD. Make these updates only when required, e.g. only update the score when it actually increases (see the sketch after these points).
Make calls conditional: update on a specific condition, not all the time.
E.g. calling/updating in the render method at 60 FPS means you update the time 60 times a second when it only needs to be updated once per second.
These points will have a huge effect on performance (thumbs up).
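A minimal sketch of that idea using a scene2d.ui Label (the label itself is assumed to be built elsewhere): the text, and the string allocation that goes with it, only change when the score does.

```java
import com.badlogic.gdx.scenes.scene2d.ui.Label;

public class ScoreHud {
    private final Label scoreLabel;
    private int lastScore = -1;

    public ScoreHud(Label scoreLabel) {
        this.scoreLabel = scoreLabel;
    }

    // Safe to call every frame; the label is only touched when the score changes.
    public void update(int score) {
        if (score != lastScore) {
            lastScore = score;
            scoreLabel.setText("Score: " + score);
        }
    }
}
```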
You should also check the size of the images in your game. If your images are large, reduce their size using a tool such as http://tinypng.org/.
It will help you.
I'm looking for an example of code that samples a signal from the microphone and plays it on the speakers. I need a solution with a reasonably constant delay on different platforms (PC, Android, iPhone). A delay of around 1-2 s is OK for me, and I don't mind if it varies each time the application starts.
I tried using the SampleDataEvent.SAMPLE_DATA event on the Sound and Microphone classes. One event would put data into a buffer and the other would read data out of it. But it seems impossible to maintain a constant delay: either the delay grows constantly, or it shrinks to the point where I have fewer than 2048 samples to put out and the Sound class stops generating SampleDataEvent.SAMPLE_DATA events.
I want to process each incoming frame, so using setLoopBack(true) is not an option.
P.S. This is more of a problem on Android devices than on PC, although when I start to resize the application window on PC, the delay starts to grow as well.
Please help.
Unfortunately, this won't be possible... at least not directly.
Some sound devices will use a different clock between recording and playback. This will be especially true for cell phones where what is running the microphone may very well be different hardware than the headphone audio output.
Basically, if you record at 44.1kHz and play back at 44.1kHz but those clocks are not in sync, you may actually be recording at 44.099kHz and playing back at 44.101kHz. Over time, this drift will mean that you won't have enough data in the buffer to send to the output.
Another complication (and more than likely your problem) is that your record and playback sample rates may be different. If you record from the microphone at 11kHz and play back at 48kHz, you will note that 11 doesn't divide evenly into 48. Software is often used to up-sample the recording. Sometimes this is done with a nice algorithm which is guaranteed to give you the necessary output. Other times, that 11kHz will get pushed to 44kHz and deemed "close enough".
In short, you cannot rely on recording and playback devices being in sync, and will need to synchronize yourself. There are many algorithms out there for handling this. The easiest method is to add a sample here and there that averages the sample before and after it. If you do this with just a few samples, it will be inaudible. Depending on the kind of drift problem you are having, this may be sufficient.
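For illustration only, here is a rough sketch of that "insert an averaged sample now and then" idea, written in Java rather than ActionScript; the interval is a hypothetical tuning value you would derive from the drift you actually measure.

```java
import java.util.Arrays;

public final class DriftCompensator {
    // Stretch a block of mono samples slightly by inserting an interpolated
    // sample after every `interval` input samples. This compensates for a
    // playback clock that runs slightly faster than the recording clock.
    public static float[] stretch(float[] input, int interval) {
        float[] output = new float[input.length + input.length / interval + 1];
        int out = 0;
        for (int in = 0; in < input.length; in++) {
            output[out++] = input[in];
            if ((in + 1) % interval == 0 && in + 1 < input.length) {
                // The inserted sample averages its two neighbours, which is
                // inaudible when it only happens occasionally.
                output[out++] = (input[in] + input[in + 1]) / 2f;
            }
        }
        return Arrays.copyOf(output, out); // trim to the samples actually written
    }
}
```

Shrinking the buffer (when the drift goes the other way) works the same way in reverse: drop an occasional sample and average its neighbours into the one you keep.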
I'm designing a game for the first time, and I wonder what game time is based on. Is it based on the clock, or does it rely on frames? (Note: I'm not sure whether 'game time' is the right term here; correct me if it isn't.)
To be more clear, imagine these scenarios:
Computer 1 is fast, up to 60fps
Computer 2 is slow, not more than 30fps
On both computers the same game is played, in which a character walks at the same speed.
If game time were based on frames, the character would move twice as fast on computer 1. On the other hand, if game time were based on actual time, computer 1 would show twice as many frames, but the character would move just as fast as on computer 2.
My question is, what is the best way to deal with game time and what are advantages and disadvantages?
In general, commercial games have two things running - a "simulation" loop and a "rendering" loop. These need to be decoupled as much as possible.
You want to fix your simulation time step to some value (corresponding to a simulation rate greater than or equal to your maximum frame rate). Complex physics doesn't like variable time steps. I'm surprised no one has mentioned this, but fixed time steps versus variable time steps are a big deal if you have any kind of interesting physics. Here's a good link:
http://gafferongames.com/game-physics/fix-your-timestep/
Then your rendering loop can run as fast as possible, and render the output of the current simulation step.
So, referring to your example:
You would run your simulation at 60fps, i.e. a 16.67 ms time step. Computer 1 would render at 60fps, meaning it would render every simulation frame. Computer 2 would render every second simulation frame. Thus the character would move the same distance in the same time, but not as smoothly.
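Here is a condensed sketch of that decoupled loop, in the spirit of the linked article; the names are illustrative, and details such as interpolating between simulation states or clamping huge frame times are left out.

```java
public class GameLoop {
    private static final double STEP = 1.0 / 60.0; // fixed simulation step (~16.67 ms)
    private double accumulator = 0;
    private long previous = System.nanoTime();

    // Called as often as the platform allows (the "rendering" loop).
    public void frame() {
        long now = System.nanoTime();
        accumulator += (now - previous) / 1_000_000_000.0;
        previous = now;

        // Run zero or more fixed-size simulation steps to catch up with real time.
        while (accumulator >= STEP) {
            updateSimulation(STEP);
            accumulator -= STEP;
        }

        // A slower machine simply renders fewer of the simulation frames,
        // not a slower-moving world.
        render();
    }

    private void updateSimulation(double dt) { /* physics, AI, movement */ }

    private void render() { /* draw the current simulation state */ }
}
```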
Really old games used a frame-count. It became fairly obvious quickly that this was a poor idea, since machines get newer, and thus the games run faster.
Thus, base it on the system clock. Generally this is done by measuring how long the last frame took and using that number to decide how much 'real time' to advance during this frame.
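A tiny sketch of that delta-time approach (the names are illustrative):

```java
public class Player {
    private static final float SPEED = 200f; // world units per second, not per frame
    private float x;
    private long previous = System.nanoTime();

    // Call once per frame: measure how long the last frame took and advance by that much real time.
    public void update() {
        long now = System.nanoTime();
        float dt = (now - previous) / 1_000_000_000f; // seconds since the last frame
        previous = now;

        // The same distance is covered per real second at 30 fps and at 60 fps;
        // only the smoothness differs.
        x += SPEED * dt;
    }
}
```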
It should rely on the system clock, not on the number of frames. You've made your own case for this.
The FPS is simply how many frames the computer can render per second.
The game time is YOUR game time. You define it. It is driven by what is often called the "game loop". Frame rendering is one part of the game loop. Also look up FSMs (finite state machines) as they relate to game programming.
I highly suggest you read a couple of books on game programming. The question you are asking is what those books explain in the first chapters.
For the users of both machines to have the same experience you'll want to use actual time; otherwise different users will have advantages or disadvantages depending on their hardware.
Games should all use the clock, not the frames, to provide the same gameplay whatever the platform. It is obvious when you look at MMO or online shooter games: no player should be faster than others.
It depends on what you're processing, what part of the game is in question.
For example, animations, physics and AI need to be framerate-independent to function properly. If you have an FPS-dependent animation or physics thread, then the physics system will slow down, or the character will move more slowly, on slower systems and will go incredibly fast on very fast systems. Not good.
For some other elements, like scripting and rendering, you obviously need it to be per-frame and so, framerate-dependent. You would want to process each script and render each object once per frame, regardless of the time difference between frames.
The game must rely on the system clock, since you don't want your game to be played through in no time on fast computers!
Games typically use the highest resolution timer available like QueryPerformanceCounter on Windows to time things. Old games used to use frames, but after you could literally run faster in Quake by changing your FPS, we learned not to do that anymore.
As many of you most likely know, Flash CS4 integrates with the GPU. My question to you is: is there a way to make all of your rendering execute on the GPU, or can I not get that much access?
The reason I ask is that, with regard to Flash 3D, nearly all existing engines are software renderers. However, I would like to work on top of one of these existing engines and convert it to be as much of a hardware renderer as possible.
Thanks for your input
Regards
Mark
First off, it's not Flash CS4 that is hardware accelerated, it is Flash Player 10 that does it.
Apparently "The player offloads all raster content rendering (graphics effects, filters, 3D objects, video etc) to the video card". It does this automatically. I don't think you get much choice.
The new GPU-accelerated abilities of Flash Player 10 are not something that is accessible to you as a developer; it's simply accelerated blitting that's done "over your head".
The closest you can get to the hardware is Pixel Bender filters. They are basically Flash's equivalent of pixel shaders. However, due to (afaik) cross-platform consistency issues, these do not actually run on the GPU when run in the Flash Player (they're available in other Adobe products, and some of those do run them on the GPU).
So, as far as real hardware acceleration goes the pickings are pretty slim.
If you need all the performance you can get, Alchemy may be worth checking out. It's a project that allows cross-compiling C/C++ code to the AVM2 (the virtual machine that runs ActionScript 3). It does some nifty tricks to allow for better performance (due to the non-dynamic nature of these languages).
Wait for Flash Player 11 to be released as a beta in the first half of next year. It should be awesome.