Most efficient way to track x and y values of multiple object instances on the stage? - actionscript-3

I have an arbitrary number of object instances on the stage. At any one given time the number of objects may be between 10 and 50. Each object instance can move, but the movement is gradual; the current coordinates are not predictable, and at any given moment I may need to retrieve the coordinates of a specific object instance.
Is there a common best-practice method to use in this case to track object instance coordinates? I can think of two approaches:
I write a function within the object class that, upon arbitrary event execution, is called on an object instance and returns that object instance's coordinates.
Within the object class I declare global static variables that represent x and y values and, upon arbitrary event execution, the variables are updated with the latest values for that object instance.
While I can get both methods to work, I do not know whether one or the other would be detrimental to program performance in the long run. I lean toward the global variables because I expect that reading and updating a variable is less resource-intensive than calling a function which subsequently updates and returns a variable. Maybe there is even a third option?
I understand that this is a somewhat subjective question. I am asking with respect to resource consumption so please answer in that respect.

I don't understand. The x and y properties are both stored on the object (if it's a DisplayObject) and are readable. Why do you need to store them in a global at all?
If you're not using DisplayObject as a base, then just create the properties yourself with appropriate getters.
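For example, a minimal sketch of such a class (the TrackedObject name and the backing fields are made up for illustration):
public class TrackedObject
{
    private var _x:Number = 0;
    private var _y:Number = 0;

    // Whatever moves the object updates _x/_y; readers only need the getters.
    public function get x():Number { return _x; }
    public function get y():Number { return _y; }
}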
If you want to get the coordinates of all your objects, add them to an array, let's say objectList.
Then just use a loop to check the values:
for each (var i:MovieClip in objectList)
{
    trace(i.x, i.y);
}
I think I'm misunderstanding the question, though.

Definitely 1.
For code readability, use a get property, i.e.
private var my_x:Number;
public function get x():Number { return my_x; }
The problem with 2 is that you may well also need to keep track of which object those coords are for - not to mention it is just messy... Globals can get unmanageable quickly, hence all the research into OOP and encapsulation, doing away with (mostly) the need for globals.
With only 50 or fewer objects, don't even consider performance issues...
And remember that old mantra - "Premature optimisation is the root of programming evil" ;-)

Related

Constructor/Function overload signature lookup time complexity?

I was reading up on the std::string class in C++ and noticed there are quite a few different constructors available giving us a wide set of initialization features. This got me wondering how a compiler picks which constructor to choose when given parameters, or in the case of overloads, how a compiler matches a function signature with a given set of parameters.
If we have the following functions declared in pseudo-code:
function f1(int numberHere) {
    //....do something
}

function f1(int numberHere, string stringHere) {
    //....do something
}
And I decide to call f1(4), there are obviously two options to choose from, but what if there are 10000 options/signatures? Would it take proportionally longer? If so, what takes longer? Does the compiler have some sneaky O(n) way to index overloads such that it can call the right one in O(1) time once the program is running or would it compile in O(1) no matter how many overloads exist but take longer to run the finished result because of on-the-fly signature matching?
Can this question even be answered effectively?
Thanks!
Matching function signatures is actually no different from any other search or lookup problem. There are three basic ways to do it, depending on the data structure you store the available function signatures in:
Use an unsorted list or array and get O(n) time complexity.
Use a sorted array or a tree-like structure and get O(log(n)). (You can sort by type of 1st argument, then 2nd and so on, assuming that each type has an integer id assigned to it.)
Use a hash map and get O(1). (A rough sketch of this approach follows.)
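As an illustration of the hash-map idea (in AS3 for consistency with the rest of this page; the signature-string encoding and the overload table are made-up illustrations, not how any real compiler stores overloads):
import flash.utils.Dictionary;

// Hypothetical overload table: the key is a string encoding of the argument types.
var overloads:Dictionary = new Dictionary();
overloads["int"] = function(numberHere:int):void { /* ...do something */ };
overloads["int,String"] = function(numberHere:int, stringHere:String):void { /* ...do something */ };

// Resolving f1(4): build the key from the argument types, then do one O(1) lookup.
var f:Function = overloads["int"];
f(4);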
But I doubt that time complexity has any practical relevance in this case. It describes the asymptotic behaviour of algorithms for large values of n. Even for n = 100, an unsorted array search might be faster than a hash map lookup because it has less overhead.
And from a usability point of view it is a very bad idea to design an API having functions with 10 or even 100 overloads.

AS3 repeat same code in a vector which has 2500 objects

This is my problem: I'm making a pathfinding program using the 'jump point search' algorithm, and I need to reset every node (object) in the vector - a 40 by 40 vector, so 2500 nodes. So I need to do the following:
/* some type of loop */
{
    node.is_been_on = false;
}
But my pathfinding may run 5 times every second with a few objects, so that's a lot of looping.
What is a CPU-friendly way to do this, or is there another solution that means I don't need to do it at all?
One of my friends says I should make a 40 by 40 Boolean array holding the is_been_on values, so I would refer to that and not the node. Would that be better?
Thanks for reading, and I hope you can help.
The simplest idea is to reset only the nodes that you've changed - store them in a separate array and iterate over only that. JPS should modify only a small part of the given nodes.
Your friend's idea is not better, since you will still iterate over all nodes and modify each value. The values on the node are also Boolean (or at least I hope so), so you gain nothing except having a second array (vector) of values.
Either way, I don't find it that bad to modify bool values, but if you really need to optimize (which I find great), go with "reset what's changed" - I can't imagine a better approach.
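A minimal sketch of the "reset only what changed" idea (the Node type and the touchedNodes bookkeeping vector are hypothetical names, assuming each node exposes an is_been_on flag):
// Every node the search marks gets recorded here.
var touchedNodes:Vector.<Node> = new Vector.<Node>();

function markNode(node:Node):void
{
    if (!node.is_been_on)
    {
        node.is_been_on = true;
        touchedNodes.push(node); // remember it so it can be reset later
    }
}

function resetTouchedNodes():void
{
    // Reset only the nodes the last search actually visited,
    // instead of looping over all 2500 every time.
    for each (var node:Node in touchedNodes)
    {
        node.is_been_on = false;
    }
    touchedNodes.length = 0;
}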
But why do you recalculate the path 5 times every second? You have a graph of size 40x40; with the help of A* or another algorithm, you will be able to find the correct path. Once you have calculated a path, you don't need to recalculate it again unless you have dynamic obstacles in the game.
If you don't know how to implement a pathfinding algorithm in an AS3 project, there are several ready-made solutions.

How to avoid memory leaks in this case?

In order to prevent memory leaks in ActionScript 3.0, I use a member vector in classes that have to work with vectors, for example:
public class A
{
    // Must be static, since the static methods below use it.
    private static var mHelperPointVector:Vector.<Point> = new Vector.<Point>();

    public static function GetFirstData():Vector.<Point>
    {
        mHelperPointVector.length = 0;
        ....
        return mHelperPointVector;
    }

    public static function GetSecondData():Vector.<Point>
    {
        mHelperPointVector.length = 0;
        ....
        return mHelperPointVector;
    }
}
and then I have consumers who use the GetFirstData and GetSecondData methods, storing references to the vectors returned by these methods, for example:
public function OnEnterFrame():void
{
    var vector:Vector.<Point> = A.GetSecondData();
    ....
}
This trick seems to be good, but sometimes I need to process the vector returned by GetSecondData() after some period of time, and in this case the vector has been overwritten by another call to GetSecondData() or GetFirstData()... The solution is to copy the vector to a new vector... but in that case it is better to avoid this trick altogether. How do you deal with these problems? I have to work with a big number of vectors (each of length between 1 and 10).
The thing about garbage collection is that you just want to avoid instantiating (and disposing of) objects as much as possible. It's hard to say what would be the best approach since I can't see how/why you're using your Vector data, but at first glance I think that with your approach you'll be constantly losing data (you're pretty much creating the equivalent of weak references, since they can easily be overwritten), and changing the length of a Vector doesn't really avoid garbage collection (it may delay and reduce it, but you're still constantly throwing data away).
I frankly don't think you'd have memory leaks with point Vectors unless you're leaking the reference to the Vector left and right. In which case, it'd be better to fix these leftover references, rather than simply coming up with a solution to reuse the same vectors (which can have many more adverse effects).
However, if you're really concerned about memory, your best solution, I think, is either creating all vectors you need in advance (if it's a fixed number and you know their length ahead of time) or, better yet, using Object Pools. The latter would definitely be a more robust solution, but it requires some setup on your end, both by creating a Pool class and then when using it. To put it in code, once implemented, it would be used like this:
// Need a vector with length of 9
var myVector:Vector.<Point> = VectorPool.get(9);
// Use the vector for stuff
...
// Vector not needed anymore, put it back in the pool
VectorPool.put(myVector);
myVector = null; // just so it's clear we can't use it anymore
VectorPool would control the list of Vectors you have, letting other parts of your code "borrow" vectors as needed (in which they would be marked as being "used" inside the VectorPool) and give them back (marking them back as unused). Your code could also create vectors on the spot (inside get()), as needed, if no usable vectors are available within the list of unused objects; this would make it more flexible (not recommended in some cases since you're still spending time with instantiation, but probably negligible in this case).
This is a very macro explanation (you'd still have to write VectorPool), but object pools like that are believed to be the definitive solution to avoid re-instantiating as well as garbage collection of objects that are just going to be reused.
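As a rough illustration, here is a minimal sketch of what such a VectorPool might look like (get() and put() come from the usage above; everything else, including the linear length-matching search, is an assumption):
import flash.geom.Point;

public class VectorPool
{
    // Vectors currently not in use, available for borrowing.
    private static var unused:Vector.<Vector.<Point>> = new Vector.<Vector.<Point>>();

    public static function get(length:int):Vector.<Point>
    {
        // Reuse a pooled vector of the requested length if one is available.
        for (var i:int = 0; i < unused.length; i++)
        {
            if (unused[i].length == length)
            {
                var v:Vector.<Point> = unused[i];
                unused.splice(i, 1); // now "used", so remove it from the unused list
                return v;
            }
        }
        // No match available: create one on the spot, as mentioned above.
        return new Vector.<Point>(length, true);
    }

    public static function put(vector:Vector.<Point>):void
    {
        unused.push(vector); // mark it as available for reuse again
    }
}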
For reference, here's what I used as a very generic Object Pool:
https://github.com/zeh/as3/blob/master/com/zehfernando/data/ObjectPool.as
Or a more specialized one, that I use in situations when I need a bunch of throwaway BitmapData instances of similar sizes:
https://github.com/zeh/as3/blob/master/com/zehfernando/data/BitmapDataPool.as
I believe the implementation of a VectorPool class in the molds of what you need would be similar to the link above.
As a side note, if performance is a concern, I'd suggest using vectors of fixed length too, e.g.
// Create a vector of 9 items, filled with `nulls`
var myPoints:Vector.<Point> = new Vector.<Point>(9, true);
This makes it faster since you won't have micro allocations over time. You have to set the items directly, instead of using push():
myPoints[0] = new Point(0, 0);
But that's actually a forced advantage since setting the vector items is faster than push().

Is there any advantage in using Vector.<Object> in place of a standard Array?

Because of the inability to create Vectors dynamically, I'm forced to create one with a very primitive type, i.e. Object:
var list:Vector.<Object> = new Vector.<Object>();
I'm assuming that Vector gains its power from being typed as closely as possible, rather than the above, but I may be wrong and there are in fact still gains when using the above in place of a normal Array or Object:
var list:Array = [];
var list:Object = {};
Does anyone have any insight on this?
You will not gain any benefits from Vector.<Object> compared to Array, or vice versa. Also, the underlying data structure will be the same even if you have a more tightly typed Vector such as Vector.<Foo>. The only optimization gains will be if you use value types. The reason for this is that ECMAScript will still be late-binding and all reference objects share the same referencing byte structure.
However, in ECMAScript 4 (of which ActionScript is an implementation) the Vector generic datatype adds bounds checking to element access (the non-Vector will simply grow the array), so the functionality varies slightly and consequently the number of CPU clock cycles will vary a little bit. This is negligible, however.
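A quick illustration of that bounds-checking difference (a minimal sketch; an out-of-range write grows an Array but throws on a Vector):
var arr:Array = [];
arr[5] = "grown"; // legal: the Array silently grows to length 6

var vec:Vector.<Object> = new Vector.<Object>();
vec[5] = "boom"; // throws RangeError: index 5 is past the Vector's length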
One advantage I've seen is that coding is a bit easier with Vectors, because FlashDevelop (and most coding tools for AS3) can do code hinting better. So I can type myVector. and see my methods and functions; Array won't let you do that without casting myArr[2] as myObject (though this kind of casting is rumoured to make it faster, not slower).
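For instance (myArr, myTypedVector and MyObject are hypothetical names):
var fromArray:MyObject = myArr[2] as MyObject; // cast needed; no code hints on myArr[2]
var fromVector:MyObject = myTypedVector[2];    // element type is known, so tools can hint members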
Array's sort functions are faster, however. But if it is speed you're after, you might be better served by linked lists (depending on the application).
I think using vectors is the proper way to be coding, but not necessarily better.
Excellent question - Vectors have tremendous value! Vector.<Object> vs Array is a bad example of the differences, though, and benchmarks may be similar. However, Vector.<int> vs Array is DEFINITELY better in both memory and processing. The speed improvement comes from Flash not needing to "box" and "unbox" the values (multiple mathematical operations are required for this). Also, Array cannot allocate memory as effectively as a typed Vector. Strictly typed collections are almost always better.
Benchmarks:
http://jacksondunstan.com/articles/636
http://www.mikechambers.com/blog/2008/09/24/actioscript-3-vector-array-performance-comparison/
Even .NET suffers from boxing collections (Array):
http://msdn.microsoft.com/en-us/library/ms173196.aspx
UPDATE: I've been corrected! Only primitive numeric types get a performance enhancement from Vectors. You won't see any improvement with Array vs Vector.<Object>.

What is (or should be) the cyclomatic complexity of a virtual function call?

Cyclomatic Complexity provides a rough metric for how hard a given function is to understand, or how much potential it has for containing bugs. In the implementations I've read about, usually all of the basic control flow constructs (if, case, while, for, etc.) increase the complexity of a function by 1. Given that cyclomatic complexity is intended to determine "the number of linearly independent paths through a program's source code", it appears to me that virtual function calls should increase the cyclomatic complexity of a function as well, because of the ambiguity of which implementation will be called at runtime (the call creates another branch in the path of execution).
However, penalizing the function the same amount that one would if it contained an equivalent switch statement (one point for every 'case' keyword, with one case keyword for every class in the hierarchy implementing the virtual function in question) feels overly harsh, because a virtual function call is generally regarded as much better programming practice.
What should the cost in cyclomatic complexity of a virtual function call be? I'm not sure if my reasoning is an argument against the utility of cyclomatic complexity as a metric or one against the use of virtual functions or something different.
Edit: After people's responses I realized that it shouldn't add to cyclomatic complexity, because we could consider the virtual function call equivalent to a call to a global function that contains the massive switch statement. Even though that function would get a bad score, it only exists once in the program, whereas replacing each virtual function call directly with a switch statement would incur the cost many times over.
Cyclomatic complexity usually does not apply across function call boundaries, but is an intra-function metric. Hence, virtual calls do not count any more than non-virtual, static function calls.
A virtual function call does not increase the cyclomatic complexity, because the ambiguity over which implementation will be called is resolved outside the function call. Once the object's value is set, there is no ambiguity: we know exactly which methods will be called.
BaseClass baseObj = null;

// this part has multiple paths & adds to CC
if (x == y)
    baseObj = new Derived1();
else
    baseObj = new Derived2();

// this part has one path and does not add to the CC
baseObj.virtualMethod1();
baseObj.virtualMethod2();
baseObj.virtualMethod3();
"virtual function calls should increase the cyclomatic complexity of a function as well, because of the ambiguity of which implementation will be called at runtime"
Ah, but it isn't ambiguous at runtime (unless you're doing metaprogramming / monkey patching); it's completely determined by the type/class of the receiver.
I'm not a big fan of cyclomatic complexity, but in this case you're calling a function. It will do approximately the same thing (unless the class hierarchy design is really screwed up), with some variations depending on what it's called on. The thing is, if you call any function, you can get varied behavior depending on the arguments you pass in, and this isn't counted in CC.
Therefore, I'd completely ignore that cost.