Using Motion class for Lumia 520 - windows-phone-8

I am trying to get the trajectory of the phone using sensor data. I tried logging accelerometer data but noticed that it includes gravitational acceleration as well. The developer website says that the Motion class can give cleaner data, since it combines data from several sensors. Is it possible to use the Motion class on the Lumia 520, even though it has no gyroscope?
Thanks in advance.

The Motion class cannot be used on the Lumia 520, not because it lacks a gyroscope, but because it lacks a magnetometer (i.e. a compass).
The Motion class requires accelerometer and compass data, and it additionally uses gyroscope data, if available, for better results.
Since there is no magnetometer in the Lumia 520, the Motion class won't work on it; Motion.IsSupported will be false.
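In code, the check might look like this (a minimal sketch against the Microsoft.Devices.Sensors API; the fallback branch is left as a placeholder):

```csharp
using Microsoft.Devices.Sensors;

// Minimal sketch: check support before constructing Motion.
// On the Lumia 520, Motion.IsSupported returns false.
private void StartMotionIfAvailable()
{
    if (Motion.IsSupported)
    {
        var motion = new Motion();
        motion.CurrentValueChanged += (s, e) =>
        {
            // DeviceAcceleration already has gravity factored out,
            // unlike raw Accelerometer readings.
            var accel = e.SensorReading.DeviceAcceleration;
        };
        motion.Start();
    }
    else
    {
        // Fall back to the raw Accelerometer and remove gravity yourself.
    }
}
```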

Related

Robust detection of texture capabilities in threejs

I have to load some big textures in a three.js application, making the best use of what the hardware can do.
I use:
var local_size = gl.getParameter(gl.MAX_TEXTURE_SIZE);
To determine the maximum texture size the device accepts.
On a PC, what this call returns matches reality: if it returns 8192, I can actually use textures of that size.
On mobile devices, though, the situation varies quite a lot. On a Nexus 5, for example, I get 4096, and a texture of that size does work.
On a Samsung Galaxy Tab 3, I also get 4096, but if I use that size I just get a black texture... The maximum size that actually works is 2048.
On a Nexus 4, same issue.
In summary: some browsers (at least Chrome on Android) simply report wrong values for what the WebGL implementation can actually handle.
In the Khronos WebGL conformance test suite I found some hints on how to test the actual capabilities:
https://www.khronos.org/registry/webgl/sdk/tests/conformance/limits/gl-max-texture-dimensions.html
However, that is a lot of code and might slow down my application's startup, so (at last) here's my question:
Any ideas on how to test the actual texture capabilities of a specific device? Is there perhaps an error condition I can check in three.js to tell whether texture creation went wrong?
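For reference, a condensed, untested sketch of what such a probe might look like (findUsableTextureSize and drawTexturedQuad are hypothetical names; it assumes a WebGL1 context and a helper that renders a full-viewport quad sampling the far corner of the currently bound texture):

```javascript
// Halve the candidate size down from the reported maximum until a sampled
// texel actually comes back non-black. A real probe could also check
// gl.getError() after each allocation.
function findUsableTextureSize(gl, drawTexturedQuad) {
  let size = gl.getParameter(gl.MAX_TEXTURE_SIZE);
  const red = new Uint8Array([255, 0, 0, 255]);
  const out = new Uint8Array(4);
  while (size >= 1) {
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // Allocate without uploading data, then paint one red texel in the far corner.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texSubImage2D(gl.TEXTURE_2D, 0, size - 1, size - 1, 1, 1,
                     gl.RGBA, gl.UNSIGNED_BYTE, red);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    drawTexturedQuad();                 // sample the far corner onto the screen
    gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, out);
    gl.deleteTexture(tex);
    if (out[0] === 255) return size;    // the texture actually sampled correctly
    size = Math.floor(size / 2);        // black result: halve and retry
  }
  return 0;
}
```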
Thank you for your help!

Canvas game heating up GPU but frame rate performance is smooth

I've been developing a game using HTML5 canvas for several months, and I've recently begun doing some of my development work on a MacBook. In spite of a smooth frame rate of ~60fps, after a few seconds the game pushes the MacBook's GPU past 80 degrees C. I've also noticed on my desktop machine (which has a Radeon 7870 video card) that after a while the GPU temperature rises and the fans kick in.
Considering it's a 2D game without any particularly fancy effects, running at a reasonable resolution, this seems to indicate a major performance issue: the GPU is being taxed a great deal. I'm already applying many of the recommended optimisations (rendering at integer coordinates, no shadows, offscreen prerendering). Profiling shows that by far the most time is spent in drawImage calls, but if that were the cause of the heat I'd expect a frame rate drop or other signs of lagging performance, and there are none: the frame rate on the MacBook is beautiful, with no lag whatsoever.
To address this I recently split the game into multiple layers and used pre-rendering to avoid unnecessary redrawing of layers, but that has actually made the frame rate significantly worse and has not helped the heat at all. At this point I'm wondering whether any further optimisations (e.g. avoiding unnecessary fillStyle changes) will have any effect, or whether I'd be wasting my time.
I'd be grateful if anyone can offer advice or shed light on the source of this problem. A relatively basic 2D game should not generate this much GPU heat, and I ideally need it to be playable on laptops and lower-end devices, preferably without setting fire to them :)
Try a minimal project, like the project templates that ship with most engines, or just draw a single sprite and nothing else. If that shows the same behaviour, there is nothing you can do about it in your own code; it might be an engine, driver or browser bug.
You also have to consider that on the Windows desktop the GPU typically idles and does minimal work to draw things. In a game, however, even a 2D one, all the shader units etc. run at full speed to provide the best possible performance. Only the latest models and drivers (hey, try a driver update!) allow the GPU to throttle in games when it determines that the game doesn't need full performance to run at 60 fps. So even for a simple 2D game, the GPU might still fire up because it enters "reroute all power to the shaders" mode.
Also note that 2D games are essentially all 3D games with a fixed or orthographic projection. From the perspective of the GPU a 2D game is just another 3D game; it just so happens that the world is only translated along two axes at most.
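A minimal repro along those lines might look like this (hypothetical sketch; the 2D context, image and requestAnimationFrame-style scheduler are injected, and a frame counter is exposed purely for instrumentation):

```javascript
// Minimal repro: clear the canvas and draw exactly one sprite per frame.
// If even this heats the GPU, the cause is outside your game code.
function startMinimalRepro(ctx, img, raf) {
  let frames = 0;
  function frame() {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    ctx.drawImage(img, 0, 0);
    frames += 1;
    raf(frame);          // schedule the next frame
  }
  raf(frame);
  return () => frames;   // frame counter, for sanity checks
}
```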

Why is my game running slow in libGDX?

I am making a racing game in libGDX. My game's APK size is 9.92 MB and I am using four texture packs with a total size of 9.92 MB. The game runs fine on desktop but very slowly on an Android device. What is the reason behind it?
There are a few pitfalls that are easy to neglect while programming.
Desktop processors are far more powerful, so a game may run smoothly on desktop yet slowly on a mobile device.
Here are some key notes you should follow for an optimal game flow:
No I/O operations in the render method.
Avoid creating objects in the render method.
Reuse objects (for instance, if your game has 1000 platforms but only 3 are ever visible on the current screen, create 5 or 6 instead of 1000 and reuse them). You can use the Pool class provided by libGDX for object pooling.
Load only those assets that are needed for the current screen.
Check your logcat to see whether the garbage collector is running. If so, try to find out which classes' objects are being collected as garbage (for instance via Object's finalize method) and improve on that.
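The reuse advice above can be sketched in plain Java (libGDX's com.badlogic.gdx.utils.Pool works essentially the same way; Bullet is a hypothetical game object standing in for platforms, enemies, etc.):

```java
import java.util.ArrayDeque;

// Plain-Java sketch of object pooling: recycle freed instances instead of
// allocating new ones every frame.
public class BulletPool {
    public static class Bullet {
        public float x, y;
        public boolean alive;

        void reset() {          // clear state so a recycled instance starts fresh
            x = y = 0;
            alive = false;
        }
    }

    private final ArrayDeque<Bullet> freeList = new ArrayDeque<>();

    public Bullet obtain() {
        // Reuse a previously freed instance when possible; allocate otherwise.
        return freeList.isEmpty() ? new Bullet() : freeList.pop();
    }

    public void free(Bullet bullet) {
        bullet.reset();
        freeList.push(bullet);  // cache it for the next obtain()
    }
}
```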
Good luck.
I've got some additional tips for improving performance:
Try to minimize texture bindings (or bindings in general, e.g. when you're making a 3D game) in your render loop. Use texture atlases, and once a texture is bound, use it as often as possible before binding another texture.
Don't draw things that are outside the frustum/viewport. First calculate whether the object can be seen by the active camera at all; if it can't, don't send it to the GPU for rendering!
Don't call spritebatch.begin() and spritebatch.end() too often in the render loop, because every begin/end pair flushes the batch and uploads its data to the GPU.
Do NOT load assets while rendering, unless you do it once in another thread.
The latest versions of libGDX also provide a GLProfiler with which you can measure how many draw calls, texture bindings, vertices, etc. you have per frame. I'd strongly recommend it, since memory or computational overhead can always appear where you wouldn't expect it.
Use libGDX's Poolable interface and the Pool class for pooling objects and minimizing object-creation time, since creating objects can cause tiny but noticeable stutters in your render loop.
By the way, without any additional information, no one is going to give you a good or precise answer. If you don't think it's worth writing enough text or information for your question, why should it be worth anyone's while to answer it?
To really understand why your game is running slow, you need to profile your application.
There are free tools available for this.
On desktop you can use VisualVM.
On Android you can use Android Monitor.
Profiling will show you exactly which methods are taking up the most time.
A likely cause of slowdowns is texture binding. Do you switch between different pages of packed textures often? Try to draw everything from one page before switching to another.
The answer is likely a little more than just "computer fast; phone slow". Rather, note that your computer's Java VM is probably Oracle's highly optimized JVM, while your phone's is probably Dalvik, which, to say nothing else of its performance, does not have the same optimizations for object creation and management.
As others have said, libGDX provides a Pool class for just this reason. Take a look here: https://github.com/libgdx/libgdx/wiki/Memory-management
One very important thing in libGDX is to make sure that no asset loading ends up in the render() method. Load your assets at the appropriate times and make sure none of that work happens inside render().
Another very important thing: try to make your game's calculations independent of rendering, in the sense that the next frame should not have to wait for calculations to finish!
These are the two major things I encountered when I was making the Snake game tutorial.
Thanks,
Abhijeet.
One thing I have found is that drawing is costly. If you draw offscreen items, you waste a lot of resources. Simply checking whether items are on screen before drawing them improves performance surprisingly much.
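That check can be as simple as a bounding-box overlap test (plain sketch with hypothetical names; in libGDX you could also use the camera frustum for the same purpose):

```java
// Skip drawing anything whose axis-aligned bounding box does not overlap
// the camera's viewport rectangle.
public class Culling {
    public static boolean isOnScreen(float x, float y, float width, float height,
                                     float camX, float camY,
                                     float viewWidth, float viewHeight) {
        return x + width > camX && x < camX + viewWidth
            && y + height > camY && y < camY + viewHeight;
    }
}
```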
Points to ponder (from personal experience):
DO NOT keep calling a function in the render method that updates something like the time or score on the HUD. Make these updates only when required, e.g. update the score only when it actually increases.
Make calls condition-specific (update on specific conditions, not all the time).
E.g. updating in the render method at 60 FPS means you update the time 60 times a second when it only needs updating once per second.
These points have a huge effect on performance (thumbs up).
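The once-per-second idea above might be sketched like this (HudClock is a hypothetical helper; the seconds counter stands in for the actual HUD update):

```java
// update() is called every frame with the frame's delta time, but the
// displayed value only changes when a whole second has elapsed.
public class HudClock {
    private float accumulator = 0f;
    private int seconds = 0;

    public void update(float delta) {
        accumulator += delta;
        while (accumulator >= 1f) {  // at most one HUD update per elapsed second
            accumulator -= 1f;
            seconds++;               // the actual HUD redraw would happen here
        }
    }

    public int getSeconds() { return seconds; }
}
```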
Check the sizes of the images in your game. If your images are large, reduce their size, for example using http://tinypng.org/.
It will help you.

Implementing shader animation for Windows Store DirectX App

I am trying to achieve shader animation in a Windows Store DirectX app. Specifically, I would like to achieve the same animation shown at the link below (implemented for DirectX 10):
http://www.rastertek.com/dx10tut33.html
I can more or less find my way around DirectX 11.1 (Windows Store app compatible shaders), but I cannot see how to pass a time parameter from the C++ program logic to the shader code, so that I can affect shader state and get different effects based on time.
Please share your thoughts if you have any.
To pass parameters to a shader you can use constant buffers (msdn). You create a constant buffer, fill it with your data (e.g. the current time), and set it for the desired shader stage with
ID3D11DeviceContext::GSSetConstantBuffers
ID3D11DeviceContext::PSSetConstantBuffers
or ID3D11DeviceContext::VSSetConstantBuffers.
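A rough sketch of what that looks like for a time value (hypothetical variable names, error handling omitted; `device`, `context` and `elapsedSeconds` are assumed to exist, and `timeBuffer` is assumed to be a `Microsoft::WRL::ComPtr<ID3D11Buffer>`):

```cpp
// Matching HLSL declaration (constant buffers are padded to 16-byte multiples):
//   cbuffer TimeBuffer : register(b0) { float time; float3 padding; };

struct TimeBufferData
{
    float time;
    float padding[3];   // pad the struct to 16 bytes as well
};

// Creation, done once:
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth      = sizeof(TimeBufferData);
desc.Usage          = D3D11_USAGE_DYNAMIC;            // CPU-writable each frame
desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
device->CreateBuffer(&desc, nullptr, timeBuffer.GetAddressOf());

// Per frame: write the elapsed time and bind the buffer to the pixel shader.
D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(timeBuffer.Get(), 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
static_cast<TimeBufferData*>(mapped.pData)->time = elapsedSeconds;
context->Unmap(timeBuffer.Get(), 0);
context->PSSetConstantBuffers(0, 1, timeBuffer.GetAddressOf());
```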

How much interaction can I get with the GPU with Flash CS4?

As many of you most likely know, Flash CS4 integrates with the GPU. My question to you is: is there a way to make all of your rendering execute on the GPU, or can I not get that much access?
The reason I ask is that, with regard to Flash 3D, nearly all existing engines are software renderers. However, I would like to build on top of one of these existing engines and convert it into as much of a hardware renderer as possible.
Thanks for your input
Regards
Mark
First off, it's not Flash CS4 that is hardware accelerated, it is Flash Player 10 that does it.
Apparently "The player offloads all raster content rendering (graphics effects, filters, 3D objects, video etc) to the video card". It does this automatically. I don't think you get much choice.
The new GPU-accelerated abilities of Flash Player 10 are not something accessible to you as a developer; it's simply accelerated blitting that's done "over your head".
The closest you can get to the hardware is Pixel Bender filters. They are basically Flash's equivalent of pixel shaders. However, due to (afaik) cross-platform consistency issues, these do not actually run on the GPU when run in the Flash Player (they're available in other Adobe products, and some of those do run them on the GPU).
So, as far as real hardware acceleration goes the pickings are pretty slim.
If you need all the performance you can get, Alchemy may be worth checking out. It's a project that allows cross-compiling C/C++ code to the AVM2 (the virtual machine that runs ActionScript 3), and it does some nifty tricks to achieve better performance (thanks to the non-dynamic nature of those languages).
Wait for Flash Player 11, due to be released as a beta in the first half of next year. It should be awesome.