Ball vs Ball collision Optimization - actionscript-3

I'm working on a project that has balls colliding. The way I detect collisions is simple: each object in the array is compared against every other object in the array, and a collision is registered when the distance between the two centre points is less than the sum of the two radii.
This works great, but once there are 100-plus objects on screen at the same time there is a lot of redundant work. Why should an object check the position of another object on the far side of the screen, where the chance of a collision is negligible?
I found a suggestion that an object should only check against other objects inside an area slightly larger than itself, and only run the full collision test when another object enters that area. But this just adds extra checks, because the object has to test every other object to see whether it is inside the area and then test whether it is colliding.
Is there a method to efficiently detect collision?
public function newHandler():void
{
    for ( var i:int = 0; i < _objectArrayLayer1.length; i++ )
    {
        mcBall1 = _objectArrayLayer1[i];
        for ( var j:int = i + 1; j < _objectArrayLayer1.length; j++ )
        {
            mcBall2 = _objectArrayLayer1[j];

            p1 = new Point(mcBall1.nX, mcBall1.nY);
            p2 = new Point(mcBall2.nX, mcBall2.nY);
            distance = Point.distance(p1, p2);

            radius1 = mcBall1._radius;
            radius2 = mcBall2._radius;

            // collision when the centre distance is no greater than the sum of the radii
            if (distance <= radius1 + radius2)
            {
                solveBalls( mcBall1, mcBall2 );
            }
        }
    }
}

Here are a few things you can do to speed up your loop - this is not a definitive list but it'll help gain a bit of extra performance.
You're creating a lot of temporary Point objects for your checks. Just create 2 Point objects at the beginning of the function and keep updating their x and y coordinates. This way you'll avoid the cost of constructing a lot of objects. Memory allocation is expensive and then the garbage collector will also have to deal with them.
You should cache mcBall1._radius in a local variable (like radius1) - it's faster than reading the property in every iteration of the nested loop.
Instead of comparing the distance, compare the square of the distance - this way you avoid calculating a square root. In this case you can cache the squared radius sum, (radius1 + radius2) * (radius1 + radius2).
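Putting these suggestions together, a minimal sketch of the loop might look like this (it reuses the nX / nY / _radius properties from the question and drops the temporary Point objects entirely; treat it as an illustration rather than a drop-in replacement):
for ( var i:int = 0; i < _objectArrayLayer1.length; i++ )
{
    var ball1:Object = _objectArrayLayer1[i];
    var r1:Number = ball1._radius;          // cached once per outer iteration

    for ( var j:int = i + 1; j < _objectArrayLayer1.length; j++ )
    {
        var ball2:Object = _objectArrayLayer1[j];

        var dx:Number = ball2.nX - ball1.nX;
        var dy:Number = ball2.nY - ball1.nY;
        var radiusSum:Number = r1 + ball2._radius;

        // compare squared distances - no Point allocations, no Math.sqrt()
        if ( dx * dx + dy * dy <= radiusSum * radiusSum )
        {
            solveBalls( ball1, ball2 );
        }
    }
}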
In addition to these, you can partition your world into sections and assign a partition index to each of them. Add a partition variable to each of the balls; as they move across the world, you update this variable. This is not an expensive operation, since it's just a simple check to see which partition rectangle the ball is in - depending on the speed of the balls, they can only move to so many other partitions from a given partition.
If you use a 5x5 partitioning, that gives you 25 partitions, and at any time you'll know how many balls are within a single partition. When it comes to collision detection, you can avoid checking any objects that are not in the current partition or in any of the surrounding partitions. This will give your collision detection code a noticeable boost.
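A minimal sketch of that partition bookkeeping, assuming a fixed play area (worldWidth and worldHeight are made-up names, and ball.partition relies on the balls being dynamic objects such as MovieClips):
var cols:int = 5;
var rows:int = 5;
var cellWidth:Number  = worldWidth  / cols;
var cellHeight:Number = worldHeight / rows;

// Call this whenever a ball has moved.
function updatePartition( ball:Object ):void
{
    var col:int = Math.max( 0, Math.min( cols - 1, int( ball.nX / cellWidth ) ) );
    var row:int = Math.max( 0, Math.min( rows - 1, int( ball.nY / cellHeight ) ) );
    ball.partition = row * cols + col;   // single index in the range 0..24
}
During collision detection you then only compare a ball against balls whose partition index is the same as, or adjacent to, its own.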
Look at this question for an example: Optimizing collision detection code in AS3
Edit: while I was typing, @Paddyd posted his answer about quad trees - what I describe above is really a simplified version of a quadtree, but with no nested partitions and a 5x5 grid instead of 2x2 subdivisions.

Read up on Quadtrees.
Here is an example of implementing one.
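As a rough, unoptimised sketch of what a point quadtree can look like (the names and structure here are mine, not taken from the linked example):
package
{
    import flash.geom.Rectangle;

    // Minimal quadtree sketch: stores items that expose x / y coordinates and
    // returns the candidates that lie inside a query rectangle.
    public class QuadTree
    {
        private static const CAPACITY:int = 8;    // max items per node before subdividing

        private var _bounds:Rectangle;
        private var _items:Array = [];
        private var _children:Vector.<QuadTree>;  // null until this node subdivides

        public function QuadTree( bounds:Rectangle )
        {
            _bounds = bounds;
        }

        public function insert( item:Object ):Boolean
        {
            if ( !_bounds.contains( item.x, item.y ) )
                return false;                      // not inside this node

            if ( _children == null && _items.length < CAPACITY )
            {
                _items.push( item );
                return true;
            }

            if ( _children == null )
                subdivide();

            for each ( var child:QuadTree in _children )
                if ( child.insert( item ) )
                    return true;

            return false;
        }

        // Collects every stored item whose position lies inside 'area'.
        public function query( area:Rectangle, result:Array ):Array
        {
            if ( !_bounds.intersects( area ) )
                return result;

            for each ( var item:Object in _items )
                if ( area.contains( item.x, item.y ) )
                    result.push( item );

            if ( _children != null )
                for each ( var child:QuadTree in _children )
                    child.query( area, result );

            return result;
        }

        private function subdivide():void
        {
            var hw:Number = _bounds.width / 2;
            var hh:Number = _bounds.height / 2;
            _children = new <QuadTree>[
                new QuadTree( new Rectangle( _bounds.x,      _bounds.y,      hw, hh ) ),
                new QuadTree( new Rectangle( _bounds.x + hw, _bounds.y,      hw, hh ) ),
                new QuadTree( new Rectangle( _bounds.x,      _bounds.y + hh, hw, hh ) ),
                new QuadTree( new Rectangle( _bounds.x + hw, _bounds.y + hh, hw, hh ) )
            ];
        }
    }
}
To find collision candidates for a ball you would query a rectangle a couple of radii wide around it, and rebuild (or update) the tree each frame if the balls move.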

Related

Action Script 3.0 Making objects visible only in scene

I have thousands of objects in my app. I want to make objects visible only when they are inside the visible scene, and invisible when they are outside it. I wrote some code, but it is laggy. Here is my code:
for (var i:int = 0; i < container.numChildren; i++) {
    var obj:MovieClip = container.getChildAt(i) as MovieClip;

    rectScene.x = -container.x + 25; // position things...
    rectScene.y = -container.y + 25;

    if (rectScene.intersects(new Rectangle(obj.x - 40, obj.y - 43, obj.width, obj.height))) {
        obj.visible = true;
    } else {
        obj.visible = false;
    }
}
Example Image : http://i.stack.imgur.com/GjUG8.png
Checking all of those thousands of objects every time I drag the scene is laggy. How can I solve this?
Thanks a lot !
I would create a Sprite per scene and add the objects that belong to it as children. The display list could then look like this:
+ root
  + scene1
    + obj1
    + obj2
    .
    .
    + objN
  + scene2
and so on. This way you just need to toggle the visibility of the current scene's Sprite (see the sketch below). But as I see it, you are sorting the objects based on intersection with a »scene rect«, and that is a costly process if you have that many objects to check. If you cannot add a data structure that associates objects with scenes, then you can try to cache the result and re-run the search only when something changes…
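A minimal sketch of that layout (sceneCount and obj.sceneIndex are made-up names, and the code assumes it runs in the document class):
import flash.display.Sprite;

// One Sprite per scene, built once.
var scenes:Vector.<Sprite> = new Vector.<Sprite>(sceneCount, true);
for (var s:int = 0; s < sceneCount; s++) {
    scenes[s] = new Sprite();
    addChild(scenes[s]);
}

// Each object is added once, to the Sprite of the scene it belongs to.
scenes[obj.sceneIndex].addChild(obj);

// Showing a scene is then a single toggle instead of thousands of intersection tests.
function showScene(current:int):void {
    for (var i:int = 0; i < scenes.length; i++) {
        scenes[i].visible = (i == current);
    }
}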
Last but not least, you could implement an improved search algorithm based on space partitioning, so that you decrease the number of intersection tests as much as possible. But that depends strongly on your application: if everything moves a lot and you need to update the partition tree very often, it might not be an improvement.
EDIT
One possible algorithm for an efficient lookup of the objects in an area could be orthogonal range searching. An explanation would go too far here and is not the subject of the question.
A book including a nice description is this and a result of a quick and dirty googling is here.
The algorithm is based on a binary tree which contains the coordinates of the objects. If the objects do not move, that is perfect for your application, because then the tree only needs to be initialised once and subsequent range queries are quite fast. To be exact, their speed depends on the size of the queried area: the improvement will be bigger the smaller the queried area is compared to the size of the overall world.
Good luck!
EDIT #2
I once had to handle a similar problem, though it was one-dimensional (x-axis only); as far as I know it should not be too complicated to make this work in two dimensions (that comes from the book linked above). You can have a look at the question here.
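To illustrate the one-dimensional variant, here is a minimal sketch (objects, left and right are placeholder names, objects being a Vector.<DisplayObject>; it assumes the objects don't move, so the sorted index is built once):
import flash.display.DisplayObject;

// Build once: a copy of the objects sorted by x.
var sorted:Vector.<DisplayObject> = objects.slice();
sorted.sort(function(a:DisplayObject, b:DisplayObject):int {
    return a.x < b.x ? -1 : (a.x > b.x ? 1 : 0);
});

// Binary search: first index whose x is >= value.
function lowerBound(items:Vector.<DisplayObject>, value:Number):int {
    var lo:int = 0;
    var hi:int = items.length;
    while (lo < hi) {
        var mid:int = (lo + hi) >> 1;
        if (items[mid].x < value) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}

// Per drag: only touch objects whose x falls inside [left, right].
var k:int = lowerBound(sorted, left);
while (k < sorted.length && sorted[k].x <= right) {
    // in two dimensions you would still check the y range here
    sorted[k].visible = true;
    k++;
}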

How To Store a Dungeon Map

I'm using AS3, but general programming wisdom unspecific to AS3 is great too!
I am creating my first game, a top-down dungeon crawler with tile-based navigation, and I am deciding how to store my maps. I need to be able to access a specific tile at any point in time. My only thought so far is to use nested Vectors or Arrays with the first level being the row and the second being the column, something like this:
private var map:Array = new Array(Array(0,1,0,0,1,1,0),Array(0,1,0,1,0,1,0));
private var row2col3:uint = map[1][2];
/*map would display as such:*/
#|##||#
#|#|#|#
Ultimately, the idea is to build a Map class that will be easily extensible and, again, allow free access to any specific tile. I am looking for help in determining an effective/efficient design architecture for that Map class.
Thanks!
As stated in the comments, I said I would upload and share my source code from a 12-hour challenge project to create a tile-based level editor. The source code can be found at: GitHub BKYeates
This level editor focuses on textures being a power of 2, and uses blitting for drawing on all the textures. It can read, write, and store partial tiles. There is also some functionality to erase and draw on collision boxes.
Now, in regards to how the storage should be set up, it is really up to you. If you are going to be storing lots of information I recommend using Vectors. Vectors perform faster than most other container types except for ByteArray (if used correctly). In my level editor I used a Vector with a particular setup.
The Vector I used is named _map and lives in a class called tilemodel. tilemodel is responsible for updating all the stored information when a change is made. The _map variable is set up like so:
_map = new Vector.<Vector.<Vector.<Object>>>();
This is a pretty heavily nested Vector and in the end it stores, can you believe it, an Object! Which admittedly eats into the performance gains you get from using Vector when you are indexing the innermost elements.
But ignore that because the indexing gain from this setup is really key. The reason it is setup this way is because I can reference a layer, a row, and a column to grab a specific tile object. For example, I have a tile on layer 2 in row 12 column 13 that I want to access:
var tileObject:Object = _map[2][12][13];
That works perfectly for pretty much any scenario I could use in my tile-based game, and the speed is comparatively better than that of an Object or Dictionary when it is being accessed many times (i.e. in a loop, which happens often).
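A minimal sketch of how such a nested structure could be initialised (the dimensions and the per-tile fields below are illustrative, not taken from the repository):
var layers:int = 3;
var rows:int = 20;
var cols:int = 30;

_map = new Vector.<Vector.<Vector.<Object>>>(layers, true);
for (var m:int = 0; m < layers; m++) {
    _map[m] = new Vector.<Vector.<Object>>(rows, true);
    for (var i:int = 0; i < rows; i++) {
        _map[m][i] = new Vector.<Object>(cols, true);   // null entries mean "no tile here"
    }
}

// e.g. placing a tile on layer 2, row 12, column 13:
_map[2][12][13] = { tile: 7, row: 12, column: 13, rotation: 0 };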
The level editor is designed to use blitting for everything and to leave storage to my management classes. The speed gain from doing this is very high, and the way it is currently set up the tilemodel can store partial bitmaps, making it slightly more flexible than the usual rigid power-of-2 texture reader.
Feel free to look through the source code. But here is a summary of what some of the classes do:
tilecontroller - Issues state changes and updates to tilemanager and tilemodel
tilemanager - Responsible for texture drawing and removal.
tilemodel - Stores and updates the current map on state changes.
r_loader - Loads all assets from assetList.txt (paths to the images are set there).
hudcontroller - The last thing I was working on; lets you draw collision boxes that are stored in a separate file alongside the map.
g_global & g_keys - Global constants and static methods used ubiquitously.
LevelEditor - Main class, also designed as the "View" class (see the MVC pattern).
Also, as I've mentioned, it can read back all of the storage. I did not upload the class used for that to GitHub, but figured I would show the important method here:
// @param assets needs to be the list of loaded bitmap images
public function generateMap( assets:* ):void
{
    var bmd:BitmapData = new BitmapData( g_global.stageWidth, g_global.stageHeight, true, 0 );
    _canvas = new Bitmap( bmd, "auto", true );
    _mapLayer.addChild( _canvas );

    // lock while drawing so the screen only updates once, then unlock when done
    _canvas.bitmapData.lock();
    g_global.echo( "generating map" );

    var i:int, j:int, m:int;
    for ( m = 0; m < _tiles.length; m++ ) {
        for ( i = 0; i < _tiles[m].length; i++ ) {
            for ( j = 0; j < _tiles[m][i].length; j++ ) {
                // wondering why I'm type casting in this evaluation? int( tile.tile ) == tile.tile
                // the level editor stores tiles that are larger than the grid size at indices
                // containing values that are a percent of the tile size
                var tile:Object = _tiles[m][i][j];
                if ( tile != null && int( tile.tile ) == tile.tile ) {
                    addTile( g_global.GRIDSIZE * tile.column, g_global.GRIDSIZE * tile.row,
                             { index:tile.tile, bitmap:assets[ tile.tile ] }, tile.rotation );
                }
            }
        }
    }

    _canvas.bitmapData.unlock();
}
Anyway I hope this information finds you well. Good luck!
I asked a similar question a while back: https://gamedev.stackexchange.com/questions/60433/is-it-more-efficient-to-store-my-tile-grid-as-a-dictionary-or-an-array. I'm not sure that it would really matter whether it's an Array or a Vector (the differences in efficiency seem to differ between FP versions, etc.).
But, yeah, you probably want to use one or the other (not a Dictionary or anything), and you probably want to index it like [y * width + x], not [x][y]. Reasons: Efficiency and not having overly complicated data structures.
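As a minimal sketch of that flat-indexing idea (the 7x2 dimensions just mirror the example map in the question):
var mapWidth:int = 7;
var mapHeight:int = 2;

// One flat, fixed-length vector instead of nested arrays.
var map:Vector.<uint> = new Vector.<uint>(mapWidth * mapHeight, true);

function getTile(x:int, y:int):uint {
    return map[y * mapWidth + x];
}

function setTile(x:int, y:int, value:uint):void {
    map[y * mapWidth + x] = value;
}

// row 2, column 3 from the question (zero-indexed: y = 1, x = 2)
var row2col3:uint = getTile(2, 1);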
Also if you need to be able to regularly access the Array or Vector outside of that class, just make the variable internal or public or whatever; making it private and wrapping over it with functions, while being more prim-and-proper class design, would still be overkill.
One method I am using right now for my own thing is that I'm storing my tiles in a black and white pixel bitmap (and wrote a wrapper around that). I'm not sure how efficient this is overall as I've never benchmarked it and just wrote it quickly to create a map for testing purposes, but I am finding that it does offer an advantage in that I can draw my maps in an image editor and view them easily while still allowing random pixel/tile access.
Looking at your sample code, I'm guessing you have only two types of tiles right now, so you could just use black and white pixels as well if you want to try it.
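For example, a minimal sketch of reading tiles back from such a bitmap (mapBitmap is a placeholder for the BitmapData you drew in the image editor, with black = wall and white = floor):
import flash.display.BitmapData;

// getPixel gives random access per tile, much like an array lookup.
function isWall(mapBitmap:BitmapData, x:int, y:int):Boolean {
    return mapBitmap.getPixel(x, y) == 0x000000;   // black pixel = wall
}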
I've done the 2d array method as well (using it still actually for other parts) which works fine too, but perhaps can be harder to visualise at larger sizes. Looking forward to Bennett's answer.

How to avoid memory leaks in this case?

In order to prevent memory leaks in ActionScript 3.0, I use a member vector in classes that have to work with vectors, for example:
public class A
{
    // shared helper vector, reused by both methods
    private static var mHelperPointVector:Vector.<Point> = new Vector.<Point>();

    public static function GetFirstData():Vector.<Point>
    {
        mHelperPointVector.length = 0;
        ....
        return mHelperPointVector;
    }

    public static function GetSecondData():Vector.<Point>
    {
        mHelperPointVector.length = 0;
        ....
        return mHelperPointVector;
    }
}
and then I have consumers that use the GetFirstData and GetSecondData methods, storing references to the vectors they return, for example:
public function OnEnterFrame():void
{
    var vector:Vector.<Point> = A.GetSecondData();
    ....
}
This trick seems good, but sometimes I need to process the vector returned by GetSecondData() after some period of time, and by then it has been overwritten by another call to GetSecondData() or GetFirstData()... The solution would be to copy the vector into a new vector, but then it is better to avoid this trick altogether. How do you deal with these problems? I have to work with a large number of vectors (each of length between 1 and 10).
The main thing with garbage collection is to avoid instantiating (and disposing of) objects as much as possible. It's hard to say what the best approach would be since I can't see how/why you're using your Vector data, but at first glance I think that with your approach you'll be constantly losing data (you're pretty much creating the equivalent of weak references, since they can easily be overwritten), and changing the length of a Vector doesn't really avoid garbage collection (it may delay and reduce it, but you're still constantly throwing data away).
I frankly don't think you'd have memory leaks with Point vectors unless you're leaking references to the Vector left and right. In that case, it'd be better to fix those leftover references rather than come up with a scheme to reuse the same vectors (which can have many more adverse effects).
However, if you're really concerned about memory, your best solution, I think, is either creating all vectors you need in advance (if it's a fixed number and you know their length ahead of time) or, better yet, using Object Pools. The latter would definitely be a more robust solution, but it requires some setup on your end, both by creating a Pool class and then when using it. To put it in code, once implemented, it would be used like this:
// Need a vector with length of 9
var myVector:Vector.<Point> = VectorPool.get(9);
// Use the vector for stuff
...
// Vector not needed anymore, put it back in the pool
VectorPool.put(myVector);
myVector = null; // just so it's clear we can't use it anymore
VectorPool would control the list of Vectors you have, letting other parts of your code "borrow" vectors as needed (in which case they would be marked as "used" inside the VectorPool) and give them back (marking them as unused again). Your code could also create vectors on the spot (inside get()), as needed, if no usable vectors are available within the list of unused objects; this would make it more flexible (not recommended in some cases since you're still spending time on instantiation, but probably negligible in this case).
This is a very macro explanation (you'd still have to write VectorPool), but object pools like that are believed to be the definitive solution to avoid re-instantiating as well as garbage collection of objects that are just going to be reused.
For reference, here's what I used as a very generic Object Pool:
https://github.com/zeh/as3/blob/master/com/zehfernando/data/ObjectPool.as
Or a more specialized one, that I use in situations when I need a bunch of throwaway BitmapData instances of similar sizes:
https://github.com/zeh/as3/blob/master/com/zehfernando/data/BitmapDataPool.as
I believe the implementation of a VectorPool class along the lines of what you need would be similar to the link above.
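As a rough sketch of what such a VectorPool could look like (simplified: it pools by requested length and doesn't guard against a vector being returned twice):
import flash.geom.Point;
import flash.utils.Dictionary;

// Minimal sketch of a pool of Vector.<Point> instances, keyed by vector length.
public class VectorPool
{
    private static var _free:Dictionary = new Dictionary();  // length -> Array of unused vectors

    public static function get(length:int):Vector.<Point>
    {
        var stack:Array = _free[length];
        if (stack != null && stack.length > 0)
            return stack.pop();   // note: may still hold Points from its previous use

        // nothing available for this length: create one on the spot
        return new Vector.<Point>(length);
    }

    public static function put(vector:Vector.<Point>):void
    {
        var stack:Array = _free[vector.length];
        if (stack == null)
            _free[vector.length] = stack = [];
        stack.push(vector);
    }
}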
As a side note, if performance is a concern, I'd suggest using vectors of fixed length too, e.g.
// Create a vector of 9 items, filled with `nulls`
var myPoints:Vector.<Point> = new Vector.<Point>(9, true);
This makes it faster since you won't have micro allocations over time. You have to set the items directly, instead of using push():
myPoints[0] = new Point(0, 0);
But that's actually an added advantage, since setting the vector items directly is faster than push().

Locating all elements between starting and ending points, given by value (not index)

The problem is as follows,
I would be given a set of x and y coordinates (a coordinate array of around 30 to 40 thousand points) describing a long rope. The rope is lying on the ground and can be in any shape.
Then I would be given a start point (essentially an x and y coordinate) and an end point.
What is an efficient way to determine which coordinates from the above-mentioned array lie between the start and end points?
Exhaustive searching, i.e. looping over all 40k points, is not an acceptable solution (as mentioned on the question paper).
A little margin for error is acceptable.
We need to find the start point in the array, then the end point. For each, we can think of the rope as describing a function of distance from that point, and we're looking for the lowest point on that distance graph. If one point is a long way away and another is pretty close, we can do some kind of interpolation guess of where to search next.
distance
| /---\
|-- \ /\ -
| -- ------- -- ------ ---------- -
| \ / \---/ \--/
+-----------------------X--------------------------- array index
In the representation above, we want to find "X"... we look at the distances at a few points, get an impression of the slope of the distance curve, possibly even the rate of change of that slope, to help guide our next bit of probing....
To refine the basic approach of doing binary- or interpolated- searches in areas where we know the distance values are low, we may be able to use the following:
- if we happen to be given the rope length and know the coordinate samples are equidistant along the rope, then we can calculate a maximum change in distance from our target point per sample.
- if we know the rope has a stiffness ensuring it can't loop in a trivially small diameter, then:
  - there's a known limit to how fast the slope of the curve can change
  - the distance curve converges to vertical on both sides of the 0 point
- you could potentially cross-reference/combine distance with, or use instead, the direction of each point from the target: only at the target would the direction instantly change ~180 degrees (how well the data points capture this still depends on the distance between adjacent samples and any stiffness of the rope).
Otherwise, there's always a risk that the target point may weirdly be enclosed by two very distant points, frustrating our whole searching algorithm (that must be what they mean about some margin for error - every now and then this search would have to revert to an O(N) brute-force search because the trend analysis fails).
For a one-time search, sometimes linear traversal is the simplest, fastest solution. Maybe that's the case for this problem.
Iterate through the ordered list of points until finding the start or end, and then collect points until hitting the other endpoint.
Now, if we expected to repeat the search, we could build an index to the points.
Edit: This presumes no additional constraints beyond those mentioned by @koool. Constraining the distance between the points would allow the hill-climbing approach described in @Tony's answer.
I don't think you can solve it accurately using anything other than exhaustive search. Consider, for example, a case where the rope is folded in half and the resulting double rope forms a spiral with the two ends at the centre.
However, if we assume that long portions of the rope lie in straight lines, then we can eliminate a lot of points based on a slope check:
if (abs(slope(x[i],   y[i],   x[i+1], y[i+1])
      - slope(x[i+1], y[i+1], x[i+2], y[i+2])) < tolerance)
    eliminate(x[i+1], y[i+1]);
This will reduce the search time significantly if large portions of the rope are in straight lines, but it will still be linear with respect to the number of remaining points.
So basically, you've got a sorted list of the points that comprise the entire rope and you're given two arbitrary points from within that list, and tasked with returning the sublist that exists between those two points.
I'm going to make the assumption that the start and end points that are provided are guaranteed to coincide exactly with points within the sorted list (otherwise it introduces a host of issues, particularly if the rope may be arbitrarily thin and passes by the start/end points multiple times).
That means all you're really looking for are the indices of the two provided coordinates. Or the index of one, and the answer to "is the second coordinate to the right or to the left?".
A simple O(n) solution to that would be:
For each index in array
    coord = array[index]
    if (coord == point1)
        startIndex = index
    if (coord == point2)
        endIndex = index

if (endIndex < startIndex)
    swap(startIndex, endIndex)

return array.sublist(startIndex, endIndex)
Or, if you wanted to optimize for repeated queries, I'd suggest a hashing-based approach where you map each coordinate to its index in the array. Something like:
// build the map (do this once, at init)
map = {}
For each index in array
    coord = array[index]
    map[coord] = index

// find a sublist (do this for each set of start/end points)
startIndex = map[point1]
endIndex = map[point2]
if (endIndex < startIndex)
    swap(startIndex, endIndex)
return array.sublist(startIndex, endIndex)
That's O(n) to build the map, but once it's built you can determine the sublist between any two points in O(1). Assuming an efficient hashmap, of course.
Note that if my assumption doesn't hold, then the same solutions are still usable, provided that as a first step you take the provided start and end points and locate the points in the array that best correspond to each one. As noted, unless you are given some constraints regarding the thickness of the rope then interpolating from an arbitrary coordinate to one that's actually part of the rope can only be guesswork at best.
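Since the rest of this page is ActionScript, here is roughly what the hashing approach might look like in AS3; rope is a placeholder Vector.<Point> holding the ordered coordinates, and keying on an "x,y" string assumes the query points match stored coordinates exactly:
import flash.geom.Point;
import flash.utils.Dictionary;

// Build once: map "x,y" -> index along the rope.
var indexByCoord:Dictionary = new Dictionary();
for (var i:int = 0; i < rope.length; i++) {
    indexByCoord[rope[i].x + "," + rope[i].y] = i;
}

// Per query: two O(1) lookups, then slice out the sub-section.
function between(start:Point, end:Point):Vector.<Point> {
    var a:int = indexByCoord[start.x + "," + start.y];
    var b:int = indexByCoord[end.x + "," + end.y];
    if (b < a) { var t:int = a; a = b; b = t; }
    return rope.slice(a, b + 1);   // slice's end index is exclusive, so +1 includes the endpoint
}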

Fastest method to find a complex type in a list (Vector, Array, Dictionary, whatever is faster)

Hail, Stack!
I need to know the best method to find an item inside a list (Vector, Array, Dictionary, whatever is faster) of complex type (extensions of Objects and Sprites).
I've used "Needle in Haystack" method, but it seems that it isn't fast enough.
E.g.
Suppose that I have a collection of Sprites (a pool, in fact).
Each sprite will be added to the stage and perform some action. After that, it will die.
I don't want to pay the cost to dispose it (garbage collect) and create another (new) one every time so I'll keep the dead sprites in a collection.
Sometimes I'll call a method that adds a sprite to the stage.
This sprite can be an old one, if one has already died, or a new one, if the pool doesn't have any free sprites.
One of the scenarios that pushed me to this question was a Particle System.
A "head" particle leaving a "trail" of particles every frame and exploding into a flashy multitude of particles... Every frame...
Sometimes this adds up to 50,000 PNGs with motion, rotation, alpha, event dispatching, scale, etc...
But, this is JUST ONE scenario...
At the moment I'm trying to use an object pool with a linked list...
...hoping that it will be faster than scanning a whole Array/Vector, or creating new instances every frame and letting them die via garbage collection.
Does someone know a better/faster way to do it?
I'm not sure what you are searching for or what you want to do with it.
If you are trying to determine whether a string is in a list, the fastest solution is the Dictionary. I've done a little benchmark.
import flash.display.Sprite;
import flash.utils.Dictionary;
import flash.utils.getTimer;

/**
 * Determine which is the fastest for finding a key: Array, Object, Vector or Dictionary.
 **/
public class FlashTest extends Sprite {

    public function FlashTest() {
        var array:Array = new Array();
        var object:Object = new Object();
        var dictionary:Dictionary = new Dictionary();
        var vector:Vector.<String> = new Vector.<String>();

        // fill each container with the same 10,000 string keys
        for (var i:int = 0; i < 10000; i++)
        {
            array.push(i.toString());
            object[i.toString()] = 1;
            vector.push(i.toString());
            dictionary[i.toString()] = 1;
        }

        var time:uint = getTimer();
        for (i = 0; i < 10000; i++)
        {
            array.indexOf(i.toString());
        }
        trace("array " + (getTimer() - time));      // 2855

        time = getTimer();
        for (i = 0; i < 10000; i++)
        {
            vector.indexOf(i.toString());
        }
        trace("vector " + (getTimer() - time));     // 3644

        time = getTimer();
        for (i = 0; i < 10000; i++)
        {
            object.hasOwnProperty(i.toString());
        }
        trace("object " + (getTimer() - time));     // 48

        time = getTimer();
        for (i = 0; i < 10000; i++)
        {
            dictionary.hasOwnProperty(i.toString());
        }
        trace("dictionary " + (getTimer() - time)); // 35
    }
}
Cheers!
Based on your revised question, I don't see this as a search optimization issue unless we're talking about thousands of objects. When one of your objects from the pool "dies", it could notify your code that is managing the objects (by event or callback) and you then null out its entry in your Dictionary of active items (or splice it from Array, or kill it with a flag, more on this below**), and push it onto another Array, which is a stack of objects waiting to be reused. When you need a new object, you first check if there is anything in your recycle stack, and if there is, you pop one of those and re-initialize it. If the stack is empty, you construct a new one.
**The best choice of which data structure you use for holding your active objects list (Array/Vector/Dictionary) will probably depend on the nature of your list--I'd be willing to bet that the data structure that is quickest to loop over (for example if you need to do it every frame to update them) is not necessarily the one that is cheapest to delete from. Which one you go with should be whichever works best for the number of objects you've got and the frequency with which they get removed, re-added or updated. Just do a little benchmarking, it's easy enough to test them all. If you have relatively few objects, and don't need to loop over them continuously, I'd put my money on the Dictionary for the active pool.
The key point to get from this answer is that you should not be looping through all the items each time you need to find a dead one to re-use, you should pull them out when they die and keep that as a separate pool.
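A minimal sketch of that recycle-stack idea (the names are illustrative, and it assumes the code lives in a display object container that can parent the sprites):
import flash.display.Sprite;

// Dead sprites wait here instead of being garbage collected.
var recycleStack:Vector.<Sprite> = new Vector.<Sprite>();

function spawnSprite():Sprite {
    // reuse a dead sprite if one is available, otherwise construct a new one
    var sprite:Sprite = recycleStack.length > 0 ? recycleStack.pop() : new Sprite();
    sprite.alpha = 1;        // (re)initialise whatever state the sprite needs
    addChild(sprite);
    return sprite;
}

// Called (by event or callback) when a sprite "dies".
function recycleSprite(sprite:Sprite):void {
    removeChild(sprite);
    recycleStack.push(sprite);   // ready for reuse, no allocation next time
}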
This is really going to depend on how you need to identify the object. Are you comparing some value of it or comparing references?
Assuming references, you should be using the built-in methods found on both Array & Vector. Linear searches (like looping over the entire list) grow slower as the number of items in the list increases. The built-in indexOf is still a linear search, but it runs natively, so it will generally beat an equivalent loop written in ActionScript. For example:
myList.indexOf(myObject);
The fastest thing you can do is direct access. If you use a Dictionary you can use the object itself as the key, or with a plain Object you can use a unique id; either way you get direct access. But this doesn't help if you need the object's position in the list. E.g.:
// when setting it (a Dictionary keeps the object itself as the key;
// a plain Object would convert it to a string)
var map:Dictionary = new Dictionary();
map[myObject] = myObject;
// or, keyed by a unique id (a plain Object works fine for this)
map[myObject.id] = myObject;

// then later
var myObject = map[THE KEY]
Do you have any sort of key that can be attributed to the objects? (Or you could possibly use the object itself as a key.)
I believe that a Dictionary lookup would probably be the fastest since it works similarly to Java's HashMap.
In that case you could have the object as the key and just do
myDictionary[complexObject] != null
(it will be null if there is no entry of that object)
Although if you could specify further what it is you need to look up, I might be able to offer further applications of this.
I'd say Vector, but there seem to be some mixed results.
Vector.<> vs array
http://impossiblearts.com/blog/2008/06/fp10-vector-vs-array/
If your sprites have unique IDs, and you can use a Dictionary, then what's the question? Dictionary is definitely faster.
The average search on a Vector or Array takes, at best, O(log(n)) time. Though you can only achieve that if your Vector is sorted, which means spending time elsewhere to ensure it stays sorted. If you don't sort it then you have to scan, which means O(n).
Dictionaries (variously known as hashtables, maps), when implemented right (and I can only assume that what Actionscript provides is implemented right) guarantee O(1) access time. You pay for that in more memory. The O(1) hides some constant times that are higher than those of Vector searches, so for small lists of items Vector would be more efficient. However, from the description of your problem it sounds like a no-brainer.