How performant is collision detection in ActionScript 3?

I'm going to be using hitTestObject() for quite a lot of sprites each frame (e.g. 4 × 500). Most checks will be false; only a few, or none, will be true.
I thought I might check the distances sprite1.x - sprite2.x and sprite1.y - sprite2.y first, so that only nearby objects get checked for collision. Then I wondered: do the ActionScript routines already check distances first? Flash is optimized in many ways, so do I even need to bother optimizing collision detection myself?
// something like this?
public static function near(sprite1:Sprite, sprite2:Sprite):Boolean
{
    return (Math.abs(sprite1.x - sprite2.x) < 64) && (Math.abs(sprite1.y - sprite2.y) < 64);
}

if (near(sprite1, sprite2)) {
    if (sprite1.hitTestObject(sprite2)) {
        collide(sprite1, sprite2);
    }
}

No, hitTestObject() is not optimized this way. So if you can lower the number of hitTestObject() calls, do it.

If you need good performance with many objects, you should try shape collisions instead. If you don't want to code it yourself, take a look at Box2D or Nape; both engines have been heavily optimised for that kind of calculation.
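If you stay with hitTestObject(), even a simple broad phase cuts the number of pair tests drastically. Below is a minimal sketch of a uniform grid, assuming two groups of sprites; the bullets/enemies names, the 64 px cell size, and cellKey are made up for illustration, and collide() is the callback from the question's snippet. Only sprites that land in the same cell ever reach hitTestObject():

// Uniform-grid broad phase: bucket one group by cell, then test the other
// group only against the sprites sharing its cell.
var cellSize:int = 64;

function cellKey(x:Number, y:Number):String
{
    return int(x / cellSize) + "," + int(y / cellSize);
}

function checkCollisions(bullets:Vector.<Sprite>, enemies:Vector.<Sprite>):void
{
    var buckets:Object = {};
    for each (var e:Sprite in enemies)
    {
        var key:String = cellKey(e.x, e.y);
        if (buckets[key] == null) buckets[key] = new Vector.<Sprite>();
        buckets[key].push(e);
    }
    for each (var b:Sprite in bullets)
    {
        var candidates:Vector.<Sprite> = buckets[cellKey(b.x, b.y)];
        if (candidates == null) continue;
        for each (var c:Sprite in candidates)
        {
            if (b.hitTestObject(c)) collide(b, c);
        }
    }
    // A full version would also look at the 8 neighbouring cells so that
    // sprites sitting near a cell border are not missed.
}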

Related

How to speed up the world in LibGDX without altering delta time

This is not a duplicate of that question, because I need a way to speed up my world without changing deltaTime and having everything happen faster. Why can't I use or change deltaTime? I'm using velocity Verlet, a symplectic integrator, to simulate orbital mechanics, and deltaTime needs to be as small as possible for precision. Therefore I have it set to Gdx.graphics.getDeltaTime() * 0.001f. I am using LibGDX in Java and, if it is relevant, I am using the Game class to structure my screens.
What have I tried or thought of?
Using higher-order symplectic integrators would not require a smaller deltaTime, but they are difficult to implement and are my plan B if this turns out not to be possible.
The question you said is not a duplicate most likely contains your solution.
Your goal is a small time-step of Gdx.graphics.getDeltaTime() * 0.001f. At a frame rate of 60 fps that can be written as 1f / 60f * 0.001f, so your current time-step is about 0.000017. This is the value you want to use for Constants.TIME_STEP.
Then you just need to replace WorldManager.world.step with a call to your physics function, passing Constants.TIME_STEP as the delta time.
But due to your small time-step there will be a large number of calls to the physics function per frame, which means it will have to be fast, or you will have to find a way to relax the time-step requirement anyway.
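To make the idea concrete, the usual pattern from that linked question is a fixed-timestep accumulator. The sketch below is written in ActionScript 3 like the rest of this page rather than Java, and stepPhysics is a placeholder name for your velocity Verlet update; in LibGDX the same loop would sit inside render(float delta):

// Fixed-timestep accumulator: advance the simulation in constant slices of
// TIME_STEP, however long the rendered frame actually took.
private const TIME_STEP:Number = (1 / 60) * 0.001; // about 0.000017, as computed above
private var accumulator:Number = 0;

private function update(frameDelta:Number):void
{
    accumulator += frameDelta;
    while (accumulator >= TIME_STEP)
    {
        stepPhysics(TIME_STEP); // placeholder for the velocity Verlet step
        accumulator -= TIME_STEP;
    }
}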

How to slow down all the actions of a cocos2d-x game

I'm implementing a game in cocos2d-x.
I have now implemented a "Replay of My Game" feature (the game replays from the start).
But I want to replay the game at 1x, 2x, 3x, or 4x speed. When the speed is changed to 2x, all actions (move, rotate, etc.) should respect the new value.
How can I do that by changing the general speed of CCAction?
I want a general solution; I know the approach using variables or a scheduler, but I'm looking for something global.
You can use the following code to speed up or slow down all schedulers and actions (the time scale lives on the Scheduler):
float val = 2.0f; // to speed up
val = 0.5f;       // to slow down
Director::getInstance()->getScheduler()->setTimeScale(val);
The default time scale is 1.0.
Write a class like CCEaseIn yourself and override update(float time). In CCEaseIn, update() looks like this:
m_pInner->update(powf(time, m_fRate)); // this is what update() looks like in CCEaseIn
The code can be changed to something like:
m_pInner->update(func(time));
where func(float time) is the function that remaps the time: time / 2 means 0.5x and time * 2 means 2x. You can store some parameters to make the function more adaptable.

ActionScript 3.0: Making objects visible only in the scene

I have thousands of objects in my app. I want to make objects visible only when they are inside the visible scene area, and invisible when they are outside it. I wrote the code below, but it is laggy:
for (var i:int = 0; i < container.numChildren; i++) {
    var obj:MovieClip = container.getChildAt(i) as MovieClip;
    rectScene.x = -container.x + 25; // position things...
    rectScene.y = -container.y + 25;
    if (rectScene.intersects(new Rectangle(obj.x - 40, obj.y - 43, obj.width, obj.height))) {
        obj.visible = true;
    } else {
        obj.visible = false;
    }
}
Example Image : http://i.stack.imgur.com/GjUG8.png
Checking all of those thousands of objects every time I drag the scene is laggy. How can I solve this?
Thanks a lot !
I would create a Sprite per scene and add the objects that belong to it as children. The display list could look like this:
+root
  +-+scene1
    +obj1
    +obj2
    .
    .
    +objN
  +-+scene2
and so on. This way you just need to toggle the current scene's visibility. But as I can see, you are sorting the objects based on intersection with a »scene rect«, and that is a costly process with that many objects to check. If you cannot add a data structure that associates objects with scenes, then you can try to cache the result and run the search only when something changes…
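If you do go with one container Sprite per scene, the toggle itself is tiny. A minimal sketch, where sceneRoot and currentScene are placeholder names for the root container and the active scene from the tree above:

// Hide every scene container, then show only the one currently in view.
for (var i:int = 0; i < sceneRoot.numChildren; i++)
{
    sceneRoot.getChildAt(i).visible = false;
}
currentScene.visible = true;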
Last but not least, you could implement an improved search algorithm based on space partitioning, so that you decrease the number of intersection tests as much as possible. But that depends strongly on your application: if everything moves a lot and you need to update the partition tree very often, it might not be an improvement.
EDIT
One possible algorithm for an efficient lookup of the objects in an area could be orthogonal range searching. An explanation would go too far here and is not the subject of the question.
A book including a nice description is this and a result of a quick and dirty googling is here.
The algorithm is based on a binary tree that contains the coordinates of the objects. If the objects do not move, that is perfect for your application, because the tree only needs to be initialized once and subsequent range queries are quite fast. To be exact, their speed depends on the size of the queried area, so the improvement is bigger the smaller the queried area is compared to the overall world.
Good luck!
EDIT #2
I once had to handle a similar problem that was one-dimensional (x-axis only), but as far as I know it should not be too complicated to make it work in two dimensions (I got that from the book linked above). You can have a look at the question here.

How To Store a Dungeon Map

I'm using AS3, but general programming wisdom unspecific to AS3 is great too!
I am creating my first game, a top-down dungeon crawler with tile-based navigation, and I am deciding how to store my maps. I need to be able to access a specific tile at any point in time. My only thought so far is to use nested Vectors or Arrays with the first level being the row and the second being the column, something like this:
private var map:Array = new Array(Array(0,1,0,0,1,1,0), Array(0,1,0,1,0,1,0));
private var row2col3:uint = map[1][2];
/* map would display as such:
#|##||#
#|#|#|#
*/
Ultimately, the idea is to build a Map class that will be easily extensible and, again, allow free access to any specific tile. I am looking for help in determining an effective/efficient design architecture for that Map class.
Thanks!
As stated in the comments, I am uploading my source code from a 12-hour challenge project to create a tile-based level editor. The source code can be found at: GitHub BKYeates
This level editor assumes textures sized to a power of 2 and uses blitting for all texture drawing. It can read, write, and store partial tiles. There is also some functionality for erasing tiles and drawing collision boxes.
Now, in regards to how the storage should be set up, that is really up to you. If you are going to store lots of information, I recommend using Vectors. Vectors perform faster than most other container types except ByteArray (if used correctly). In my level editor I used a Vector with a particular setup.
The Vector I used is named _map and lives in a class called tilemodel, which is responsible for updating all the storage information when a change is made. The _map variable is set up like so:
_map = new Vector.<Vector.<Vector.<Object>>>();
This is a pretty heavily nested Vector and, in the end, it stores... can you believe it... an Object! Which admittedly throws away much of the performance gain you get from using Vector when you index the innermost elements.
But ignore that, because the indexing gain from this setup is the key part. The reason it is set up this way is that I can reference a layer, a row, and a column to grab a specific tile object. For example, say I want to access a tile on layer 2, in row 12, column 13:
var tileObject:Object = _map[2][12][13];
That works for pretty much any scenario in my tile-based game, and the speed is better than that of an Object or Dictionary when the structure is accessed many times (i.e. in a loop, which happens often).
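For illustration, here is one way such a nested structure might be initialized before it is indexed like above. This is only a sketch: numLayers, numRows, and numCols are hypothetical sizes, and the tile object's fields simply mirror the ones used in generateMap further down.

var numLayers:int = 3, numRows:int = 20, numCols:int = 30; // hypothetical dimensions
_map = new Vector.<Vector.<Vector.<Object>>>();
for (var m:int = 0; m < numLayers; m++)
{
    var rows:Vector.<Vector.<Object>> = new Vector.<Vector.<Object>>();
    for (var i:int = 0; i < numRows; i++)
    {
        // Fixed-length inner vector, pre-filled with null until a tile is placed.
        rows.push(new Vector.<Object>(numCols, true));
    }
    _map.push(rows);
}
// Later, a tile can be stored and fetched by layer/row/column:
_map[2][12][13] = { tile: 5, row: 12, column: 13, rotation: 0 };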
The level editor is designed to do all of its drawing with blitting and to leave storage to my management classes. The speed gain from doing this is very high, and as currently set up the tilemodel can store partial bitmaps, making it slightly more flexible than your standard rigid power-of-2 texture reader.
Feel free to look through the source code. But here is a summary of what some of the classes do:
tilecontroller - Issues state changes and updates to tilemanager and tilemodel.
tilemanager - Responsible for texture drawing and removal.
tilemodel - Stores and updates the current map on state changes.
r_loader - Loads all assets from assetList.txt (the image paths are set there).
hudcontroller - The last thing I was working on; it lets you draw collision boxes, which are stored in a separate file alongside the map.
g_global & g_keys - Global constants and static methods used ubiquitously.
LevelEditor - Main class, also designed as the "View" class (see the MVC pattern).
Also, as I've mentioned, it can read back all of the storage. I did not upload the class used for that to GitHub, but figured I would show the important method here:
// @param assets needs to be the list of loaded bitmap images
public function generateMap( assets:* ):void {
    var bmd:BitmapData = new BitmapData( g_global.stageWidth, g_global.stageHeight, true, 0 );
    _canvas = new Bitmap( bmd, "auto", true );
    _mapLayer.addChild( _canvas );
    _canvas.bitmapData.unlock();
    g_global.echo( "generating map" );
    var i:int, j:int, m:int;
    for ( m = 0; m < _tiles.length; m++ ) {
        for ( i = 0; i < _tiles[m].length; i++ ) {
            for ( j = 0; j < _tiles[m][i].length; j++ ) {
                // Wondering why I'm type casting in this evaluation? _tiles[i][j].tile == int( _tiles[i][j].tile )
                // The level editor stores tiles that are larger than the grid size at indices
                // containing values that are a percent of the tile size.
                var tile:Object = _tiles[m][i][j];
                if ( tile != null && int( tile.tile ) == tile.tile ) {
                    addTile( g_global.GRIDSIZE * tile.column, g_global.GRIDSIZE * tile.row,
                             { index:tile.tile, bitmap:assets[ tile.tile ] }, tile.rotation );
                }
            }
        }
    }
    _canvas.bitmapData.lock();
}
Anyway I hope this information finds you well. Good luck!
I asked a similar question a while back: https://gamedev.stackexchange.com/questions/60433/is-it-more-efficient-to-store-my-tile-grid-as-a-dictionary-or-an-array. I'm not sure that it would really matter whether it's an Array or a Vector (the differences in efficiency seem to differ between FP versions, etc.).
But, yeah, you probably want to use one or the other (not a Dictionary or anything), and you probably want to index it like [y * width + x], not [x][y]. Reasons: Efficiency and not having overly complicated data structures.
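For illustration, a minimal sketch of the [y * width + x] layout; the names mapWidth, getTile, and setTile are made up here, not taken from the question:

// Flat, fixed-length Vector holding mapWidth * mapHeight tiles, stored row by row.
var mapWidth:int = 7;
var mapHeight:int = 2;
var tiles:Vector.<uint> = new Vector.<uint>(mapWidth * mapHeight, true);

function getTile(x:int, y:int):uint
{
    return tiles[y * mapWidth + x];
}

function setTile(x:int, y:int, value:uint):void
{
    tiles[y * mapWidth + x] = value;
}

// e.g. the question's map[1][2] becomes:
var row2col3:uint = getTile(2, 1);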
Also, if you need to regularly access the Array or Vector from outside that class, just make the variable internal or public; making it private and wrapping it in accessor functions, while more prim-and-proper class design, would be overkill here.
One method I am using right now for my own project is storing my tiles in a black-and-white pixel bitmap (with a wrapper written around it). I'm not sure how efficient this is overall, as I've never benchmarked it and only wrote it quickly to create a map for testing, but I find it offers one advantage: I can draw my maps in an image editor and view them easily, while still allowing random pixel/tile access.
Looking at your sample code, I'm guessing you have only two types of tiles right now, so you could just use black and white pixels as well if you want to try it.
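If you want to try that, a minimal sketch of the lookup could look like this; mapBitmap is an assumed name for the loaded or embedded BitmapData, and the 1 = wall, 0 = floor convention is just an example:

var mapBitmap:BitmapData; // assumed to be an embedded or loaded black-and-white image

function tileAt(x:int, y:int):int
{
    // getPixel() returns the 24-bit RGB value (alpha ignored), so a white pixel reads back as 0xFFFFFF.
    return mapBitmap.getPixel(x, y) == 0xFFFFFF ? 1 : 0;
}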
I've done the 2D-array method as well (and still use it for other parts), which works fine too, but it can be harder to visualise at larger sizes. Looking forward to Bennett's answer.

How to avoid memory leaks in this case?

In order to prevent memory leaks in ActionScript 3.0, I use a member vector in classes that have to work with vectors, for example:
public class A
{
    private static var mHelperPointVector:Vector.<Point> = new Vector.<Point>();

    public static function GetFirstData():Vector.<Point>
    {
        mHelperPointVector.length = 0;
        ....
        return mHelperPointVector;
    }

    public static function GetSecondData():Vector.<Point>
    {
        mHelperPointVector.length = 0;
        ....
        return mHelperPointVector;
    }
}
and then I have consumers that use the GetFirstData and GetSecondData methods, storing references to the vectors returned by them, for example:
public function OnEnterFrame():void
{
    var vector:Vector.<Point> = A.GetSecondData();
    ....
}
This trick seems good, but sometimes I need to process the vector returned by GetSecondData() after some period of time, and by then the vector has been overwritten by another call to GetSecondData() or GetFirstData(). The solution would be to copy the vector into a new one, but in that case it is better to avoid the trick altogether. How do you deal with these problems? I have to work with a large number of vectors (each of length between 1 and 10).
The thing about garbage collection is that you want to avoid instantiating (and disposing of) objects as much as possible. It's hard to say what the best approach would be without seeing how and why you're using your Vector data, but at first glance I think that with your approach you'll constantly be losing data (you're pretty much creating the equivalent of weak references, since they can easily be overwritten), and changing the length of a Vector doesn't really avoid garbage collection (it may delay and reduce it, but you're still constantly throwing data away).
Frankly, I don't think you'd have memory leaks with Vectors of Points unless you're leaking references to the Vector left and right. In that case, it'd be better to fix those leftover references rather than come up with a scheme to reuse the same vectors (which can have many more adverse effects).
However, if you're really concerned about memory, your best solution, I think, is either creating all the vectors you need in advance (if it's a fixed number and you know their lengths ahead of time) or, better yet, using object pools. The latter is definitely the more robust solution, but it requires some setup on your end, both to create a Pool class and then when using it. In code, once implemented, it would be used like this:
// Need a vector with length of 9
var myVector:Vector.<Point> = VectorPool.get(9);
// Use the vector for stuff
...
// Vector not needed anymore, put it back in the pool
VectorPool.put(myVector);
myVector = null; // just so it's clear we can't use it anymore
VectorPool would control the list of Vectors you have, letting other parts of your code "borrow" vectors as needed (marking them as "used" inside the VectorPool) and give them back (marking them as unused again). Your code could also create vectors on the spot (inside get()) if no usable vector is available in the list of unused objects; this makes it more flexible (not always recommended, since you're still spending time on instantiation, but probably negligible in this case).
This is a very high-level explanation (you'd still have to write VectorPool), but object pools like this are widely considered the definitive way to avoid both re-instantiation and garbage collection of objects that are just going to be reused.
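For illustration only, here is a minimal sketch of what such a VectorPool might look like, matching the get()/put() usage above. The pooling-by-length strategy and every name besides get and put are my assumptions, not a definitive implementation:

import flash.geom.Point;
import flash.utils.Dictionary;

public class VectorPool
{
    // Unused vectors, grouped by length: length -> Array of Vector.<Point>
    private static var _free:Dictionary = new Dictionary();

    // Borrow a vector of the requested length, creating a new one if none is available.
    public static function get(length:uint):Vector.<Point>
    {
        var list:Array = _free[length];
        if (list != null && list.length > 0)
        {
            return list.pop();
        }
        return new Vector.<Point>(length, true);
    }

    // Return a vector to the pool so it can be reused later.
    public static function put(vector:Vector.<Point>):void
    {
        var list:Array = _free[vector.length];
        if (list == null)
        {
            list = [];
            _free[vector.length] = list;
        }
        list.push(vector);
    }
}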
For reference, here's what I used as a very generic Object Pool:
https://github.com/zeh/as3/blob/master/com/zehfernando/data/ObjectPool.as
Or a more specialized one, that I use in situations when I need a bunch of throwaway BitmapData instances of similar sizes:
https://github.com/zeh/as3/blob/master/com/zehfernando/data/BitmapDataPool.as
I believe the implementation of a VectorPool class along the lines of what you need would be similar to the links above.
As a side note, if performance is a concern, I'd suggest using vectors of fixed length too, e.g.
// Create a vector of 9 items, filled with `nulls`
var myPoints:Vector.<Point> = new Vector.<Point>(9, true);
This makes it faster since you won't have micro allocations over time. You have to set the items directly, instead of using push():
myPoints[0] = new Point(0, 0);
But that's actually an advantage anyway, since setting the vector items directly is faster than push().