My code is as follows:
void Scene::copy(Scene const & source)
{
    maxnum = source.maxnum;
    imagelist = new Image*[maxnum];
    for (int i = 0; i < maxnum; i++)
    {
        if (source.imagelist[i] != NULL)
        {
            imagelist[i] = new Image;
            imagelist[i]->xcoord = source.imagelist[i]->xcoord;
            imagelist[i]->ycoord = source.imagelist[i]->ycoord;
            (*imagelist[i]) = (*source.imagelist[i]);
        }
        else
        {
            imagelist[i] = NULL;
        }
    }
}
A little background: the Scene class has a private int called maxnum and a dynamically allocated array of Image pointers, created upon construction. These pointers point to Images. The copy constructor attempts to make a deep copy of all of the Images in the array. Somehow I'm getting a segfault, but I don't see how I would be accessing the array out of bounds.
Anyone see something wrong?
I'm new to C++, so it's probably something obvious.
Thanks,
I would suggest that maxnum (and maybe imagelist) become private data members, with a const getMaxnum() accessor and a setMaxnum() mutator. But I doubt that is the cause of any segfault the way you described this.
I would try removing that const before your reference and implementing const public methods to extract the data instead. It probably compiles as-is since it is just a reference. Also, I would try switching to a pointer instead of passing by reference.
Alternatively, you can create a separate Scene class object and pass the Image data in as an array pointer. And I don't think you can declare Image *imagelist[value];.
void Scene::copy(Image *sourceimagelist, int sourcemaxnum) {
    maxnum = sourcemaxnum;
    imagelist = new Image[maxnum];
    //...
    imagelist[i].xcoord = sourceimagelist[i].xcoord;
    imagelist[i].ycoord = sourceimagelist[i].ycoord;
    //...
}
//...
Scene a, b;
//...
b.copy(a.imagelist, a.maxnum);
If the source Scene had maxnum set higher than the actual number of items in its imagelist, then the loop would run past the end of the source.imagelist array. Maybe maxnum is getting initialized to the value one while the array starts out empty (or maxnum might not be getting initialized at all), or maybe, if you have a Scene::remove_image() function, it removed an imagelist entry without decrementing maxnum. I'd suggest using a std::vector rather than a raw array. The vector keeps track of its own size, so your for loop would be:
for(int i=0; i<source.imagelist.size(); i++)
and it would only access as many items as the source vector held. Another possible explanation for the crash is that one of your pointers in source.imagelist belongs to an Image that was deleted, but the pointer was never set to NULL and is now a dangling pointer.
delete source.imagelist[4];
...
... // If source.imagelist[4] wasn't set to NULL or removed from the array,
... // then we'll have trouble later.
...
for (int i = 0; i < maxnum; i++)
{
    if (source.imagelist[i] != NULL) // This evaluates to true even when i == 4
    {
        // When i == 4, we're reading the xcoord member from an Image
        // object that no longer exists.
        imagelist[i]->xcoord = source.imagelist[i]->xcoord;
That last line will access memory that it shouldn't. Maybe the object still happens to exist in memory because it hasn't gotten overwritten yet, or maybe it has been overwritten and you'll retrieve an invalid xcoord value. If you're lucky, though, then your program will simply crash. If you're dealing directly with new and delete, make sure that you set a pointer to NULL after you delete it so that you don't have a dangling pointer. That doesn't prevent this problem if you're holding a copy of the pointer somewhere, though, in which case the second copy isn't going to get set to NULL when you delete-and-NULL the first copy. If you later try to access the second copy of the pointer, you'll have no way of knowing that it's no longer pointing to a valid object.
It's much safer to use a smart pointer class and let that deal with memory management for you. There's a smart pointer in the standard C++ library called std::auto_ptr, but it has strange semantics and can't be used in C++ containers, such as std::vector. If you have the Boost libraries installed, though, then I'd suggest replacing your raw pointers with a boost::shared_ptr.
Related
I have been working on creating an assets class that can generate dynamic TextureAtlas objects whenever I need them. The specific method is Assets.generateTextureAtlas() and I am trying to optimise it as much as possible, as I quite frequently need to regenerate texture atlases and was hoping to get a better time than my 53ms average.
53ms currently costs me about 3 frames, which adds up quickly as the number of items I need to pack into my texture atlas and the frequency at which I need to generate them grow. So an answer covering any pitfalls within my code would be great.
The entire class code is available here in a github gist.
The RectanglePacker class is simply used to pack rectangles as close together as possible (similar to Texture Packer) and can be found here.
For reference, here is the method:
public static function generateTextureAtlas(folder:String):void
{
    if (!_initialised) throw new Error("Assets class not initialised.");
    if (_renderTextureAtlases[folder] != null)
    {
        (_renderTextureAtlases[folder] as TextureAtlas).dispose();
    }
    var i:int;
    var image:Image = new Image(_blankTexture);
    var itemName:String;
    var itemNames:Vector.<String> = Assets.getNames(folder + "/");
    var itemsTexture:RenderTexture;
    var itemTexture:Texture;
    var itemTextures:Vector.<Texture> = Assets.getTextures(folder + "/");
    var noOfRectangles:int;
    var rect:Rectangle;
    var rectanglePacker:RectanglePacker = new RectanglePacker();
    var texture:Texture;
    noOfRectangles = itemTextures.length;
    if (noOfRectangles == 0)
    {
        return;
    }
    for (i = 0; i < noOfRectangles; i++)
    {
        rectanglePacker.insertRectangle(Math.round(itemTextures[i].width), Math.round(itemTextures[i].height), i);
    }
    rectanglePacker.packRectangles();
    if (rectanglePacker.rectangleCount != noOfRectangles)
    {
        throw new Error("Only " + rectanglePacker.rectangleCount + " out of " + noOfRectangles + " rectangles packed for folder: " + folder);
    }
    itemsTexture = new RenderTexture(rectanglePacker.width, rectanglePacker.height);
    itemsTexture.drawBundled(function():void
    {
        for (i = 0; i < noOfRectangles; i++)
        {
            itemTexture = itemTextures[rectanglePacker.getRectangleId(i)];
            rect = rectanglePacker.getRectangle(i, rect);
            image.texture = itemTexture;
            image.readjustSize();
            image.x = rect.x + itemTexture.frame.x;
            image.y = rect.y + itemTexture.frame.y;
            itemsTexture.draw(image);
        }
    });
    _renderTextureAtlases[folder] = new TextureAtlas(itemsTexture);
    for (i = 0; i < noOfRectangles; i++)
    {
        itemName = itemNames[rectanglePacker.getRectangleId(i)];
        itemTexture = itemTextures[rectanglePacker.getRectangleId(i)];
        rect = rectanglePacker.getRectangle(i);
        (_renderTextureAtlases[folder] as TextureAtlas).addRegion(itemName, rect, itemTexture.frame);
    }
}
Well, reading the whole project & finding everything that can be optimized would sure take time.
Start by removing the repeated calls to rectanglePacker.getRectangleId(i) inside the loops.
For example:
itemName = itemNames[rectanglePacker.getRectangleId(i)];
itemTexture = itemTextures[rectanglePacker.getRectangleId(i)];
rect = rectanglePacker.getRectangle(i);
could perhaps have been:
var id:int = rectanglePacker.getRectangleId(i);
itemName = itemNames[id];
itemTexture = itemTextures[id];
rect = rectanglePacker.getRectangle(i);
That is assuming getRectangleId does indeed just 'get an id' & not set anything.
I think the bigger issue at hand is this: why oh why do you HAVE to do this at run-time, in a situation where it can't take this long? This IS an expensive operation; no matter how much you optimize it, you will probably still end up at around 40ms or so in AS3.
This is why these kinds of operations should be done at compile time, or during loading screens or other transitions, when frame rate is not critical and you can afford it.
Alternatively, build another system in C++ or some other language that can actually handle the number-crunching and hands you the finished result.
Also, when it comes to checking performance: yes, the entire function takes 53ms, BUT where are those milliseconds spent? 53ms on its own says nothing; it is only the top-level measurement that pointed you at the culprit. You need to break it down into smaller chunks to gather reliable information about what ACTUALLY takes the time inside that function.
I mean, inside that function you have 3 for loops, several calls to other classes, casts, disposals, creations. It's not like you are doing one thing; that function probably amounts to ~500 lines of code and a bazillion CPU operations, and you have no idea where the time goes. I would guess that it is rectanglePacker.packRectangles() that takes 60% of that time, but without profiling neither you nor we know what to optimize; we simply don't have sufficient data.
If you HAVE to do this at run-time in AS3, I would recommend spreading the work out over several frames, distributing it evenly across 10 frames or so. You could also do it with the help of workers (a background thread). But most of all, this looks like a design issue, since it could probably be done at another time, and if not, then in another language better suited to this kind of operation.
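For what it's worth, here is a minimal sketch of the frame-spreading idea. It assumes a hypothetical queue of small work closures (e.g. one closure per rectangle insert, or per image drawn into the render texture), a plain ENTER_FRAME listener, access to a stage reference (e.g. from the document class), and an illustrative 5ms-per-frame budget; none of these names come from the original code:
import flash.events.Event;
import flash.utils.getTimer;

var workQueue:Vector.<Function> = new Vector.<Function>(); // hypothetical: filled with small jobs

function processQueue(e:Event):void
{
    var budgetMs:int = 5;      // illustrative per-frame time budget
    var start:int = getTimer();
    while (workQueue.length > 0 && getTimer() - start < budgetMs)
    {
        var job:Function = workQueue.shift();
        job();                 // e.g. insert one rectangle, or draw one image
    }
    if (workQueue.length == 0)
    {
        stage.removeEventListener(Event.ENTER_FRAME, processQueue);
    }
}

stage.addEventListener(Event.ENTER_FRAME, processQueue);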
The easiest way to profile this is to add a couple of timestamps similar to:
var timestamps:Array = [];
and then push getTimer() at different places in the code, and print the results out when the function is done.
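For example, a minimal sketch of that placed inside generateTextureAtlas() (the label strings are just illustrative):
import flash.utils.getTimer;

var timestamps:Array = [];
timestamps.push({label: "start", t: getTimer()});

rectanglePacker.packRectangles();
timestamps.push({label: "packed", t: getTimer()});

// ... the drawBundled block ...
timestamps.push({label: "drawn", t: getTimer()});

// ... the addRegion loop ...
timestamps.push({label: "regions", t: getTimer()});

for (var k:int = 1; k < timestamps.length; k++)
{
    trace(timestamps[k].label + ": " + (timestamps[k].t - timestamps[k - 1].t) + "ms");
}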
As others have said, it's unlikely that the reason for the bad performance is unoptimized AS code. Output from a profiler (Scout, for example) would be very helpful. However, if your purpose is just adding new textures, I can suggest a couple of optimizations:
Why would you need to re-generate the whole atlas every time (calling Assets.getTextures() and creating a new render texture)? Why not just add new items to the existing atlas? Creating a new RenderTexture (and thus a new texture in GPU memory) is a very costly operation, because it requires a sync between the CPU and the GPU. On the other hand, drawing into an existing RenderTexture is carried out entirely on the GPU, so it takes much less time.
If you place every item on a grid, you can avoid using the RectanglePacker altogether, as all of your rectangles can have the same dimensions, matching the dimensions of a grid cell.
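A small sketch of the grid idea, assuming cellWidth/cellHeight are the maximum item dimensions and atlasWidth is the width of the render texture (all of these names are illustrative, not from the original code):
var columns:int = int(atlasWidth / cellWidth); // how many cells fit per row
var col:int = i % columns;
var row:int = int(i / columns);

image.x = col * cellWidth;  // no RectanglePacker needed: the position is a
image.y = row * cellHeight; // pure function of the item index i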
Edit:
To clarify: some time ago I had a similar problem, where I had to add new items to an existing atlas on a regular basis, and the performance of this operation was quite acceptable (about 8ms on an iPad 3 using a 1024x1024 dynamic texture). But I used the same RenderTexture and the same Sprite object that contained my dynamic atlas items. When I need to add a new item, I just create a new Image with the desired texture (stand-alone or from another static atlas), place it inside the Sprite container, and then redraw this container to the RenderTexture. The same goes for deletion or modification of an item.
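A rough sketch of that incremental approach, assuming Starling's RenderTexture, Image and TextureAtlas APIs; the member names (_atlasTexture, _atlas) and the helper function are illustrative, not part of the original code:
// Created once and reused, e.g. a 1024x1024 dynamic texture:
// _atlasTexture:RenderTexture and _atlas:TextureAtlas = new TextureAtlas(_atlasTexture);

function addItemToAtlas(name:String, texture:Texture, region:Rectangle):void
{
    var image:Image = new Image(texture);
    image.x = region.x;
    image.y = region.y;

    // Drawing into the existing render texture happens on the GPU and
    // allocates no new texture memory, so it stays cheap.
    _atlasTexture.draw(image);

    _atlas.addRegion(name, region);
}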
I am implementing garbage collection within an AS3 app. In one part, several display objects are created within a loop like so:
for (var i:uint = 0; i <= this._exampleVector.length - 1; i++)
{
    this._customText = new CustomTextObject(this._exampleVector[i].playlistText, this._customTextWidth);
    this.addChild(this._customText);
    // etc etc
    this._customTextVector.push(this._customText); // used for ref in garbage collection
}
I then perform my garbage collection preparation by looping through the _customTextVector variable.
for (var i:uint = 0; i <= this._customTextVector.length - 1; i++)
{
    this.removeChild(this._customTextVector[i]);
    this._customTextVector[i].gcAllObjects();
    this._customTextVector[i] = null; // <-- the line in question
}
When I try to make the _customText objects referenced in _customTextVector null, this does not work: it only sets that slot inside the Vector to null, not the object itself. Any ideas on how to do this, or another method to prepare for garbage collection?
Thanks
Chris
Is it possible to do the following after looping through all the indices?
_customTextVector = null;
In order for the AS3 garbage collector to collect your objects, you need to remove all references to them (including event listeners). On the next GC pass, the objects' memory will be freed. There is no way to directly, instantly "null" an object the way you want.
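As a sketch, dropping all of those references for the vector in the question might look something like this (assuming gcAllObjects() already removes the object's own listeners and children; the member names are the asker's):
for (var i:uint = 0; i < this._customTextVector.length; i++)
{
    var item:CustomTextObject = this._customTextVector[i];
    this.removeChild(item);           // drop the display-list reference
    item.gcAllObjects();              // assumed to remove listeners/children internally
    this._customTextVector[i] = null;
}
this._customTextVector.length = 0;    // drop the vector's own slots
// this._customText = null;           // clear any remaining member reference too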
If you're having issues with memory, have a look at this post.
I'd like to be able to pass Vectors around as references. Now, if a method takes a Vector.<Object>, then passing a Vector.<TRecord>, where TRecord inherits directly from Object, does not work. Where a method takes just a plain Object, say vec: Object, passing the Vector is possible. Once inside this method, an explicit cast at some stage is required to access vec as a Vector again. Unfortunately, a cast seems to make a copy, which means wrapping one up in multiple Flex ListCollectionViews is useless; each ListCollectionView will be pointing at a different Vector.
Using Arrays with ArrayCollection presents no such problem, but I lose out on the type safety, neatness (code should be clean enough to eat off of) and performance advantages of Vector.
Is there a way to cast them or pass them as references in a generic manner without copies being made along the way?
Note in this example, IRecord is an interface with {r/w id: int & name: String} properties, but it could be a class, say TRecord { id: int; name: String} or any other usable type.
protected function check(srcVec: Object): void
{
    if (!srcVec) {
        trace("srcVec is null!");
        return;
    }
    // srcVec = (#b347e21)
    trace(srcVec.length); // 4, as expected
    var refVec: Vector.<Object> = Vector.<Object>(srcVec);
    // refVec = (#bc781f1)
    trace(refVec.length); // 4, ok, but refVec has a different address than srcVec
    refVec.pop();
    trace(refVec.length); // 3 ok
    trace(srcVec.length); // 4 - A copy was clearly created!!!
}
protected function test(): void
{
    var vt1: Vector.<IRecord> = new Vector.<IRecord>;  // (#b347e21) - original Vector address
    var vt2: Vector.<Object> = Vector.<Object>(vt1);   // (#bbb57c1) - wrong
    var vt3: Vector.<Object> = vt1 as Vector.<Object>; // (#null) - failure to cast
    var vt4: Object = vt1;                             // (#b347e21) - good
    for (var ix: int = 0; ix < 4; ix++)
        vt1.push(new TRecord);
    if (vt1) trace(vt1.length); // 4, as expected
    if (vt2) trace(vt2.length); // 0
    if (vt3) trace(vt3.length); // vt3 is null
    if (vt4) trace(vt4.length); // 4
    if (vt1) trace(Vector.<Object>(vt1).length); // 4, but this is the length of a fresh copy
    trace("calling check(vt1)");
    check(vt1);
}
This is not possible. Even though type T is a subtype of type U, a container of T is not a subtype of a container of U; parameterized containers are invariant. C# and Java did allow this for their built-in array types, and their designers wish they could go back and cut it out.
Consider: if this code were legal,
var vt1: Vector.<IRecord> = new Vector.<IRecord>;
var vt3: Vector.<Object> = vt1 as Vector.<Object>;
Now we have a Vector.<Object>. But wait: if we have a container of Objects, then surely we can stick an Object in it, right?
vt3.push(new Object());
But because it's actually an instance of Vector.<IRecord>, you can't do this, even though the contract of Vector.<Object> clearly says that you can insert an Object. That's why this behaviour is explicitly not allowed.
Edit: Of course, your framework may allow it to be treated as a non-mutable (read-only) reference to such a container, which is safe. But I have little experience with ActionScript and cannot verify that it actually does.
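If read access is all you need, one possible workaround is to skip the copying cast entirely and access the vector through an untyped reference (dynamic lookup). This is only a sketch, reusing the question's IRecord/TRecord names, and it gives up compile-time type checking inside the helper:
// Reads any Vector.<T> through an untyped reference; no copy is made.
function dump(srcVec:Object):void
{
    trace(srcVec.length);            // dynamic lookup on the original vector
    for each (var item:Object in srcVec)
    {
        trace(item);                 // elements come out typed as Object
    }
}

var vt1:Vector.<IRecord> = new Vector.<IRecord>();
vt1.push(new TRecord());
dump(vt1);                           // the same vector instance is seen inside dump()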
I have noticed a weird behavior of the variables in for loops. It's not really a problem, but it disturbs me a lot.
Actually I've created two loops this way:
for (var i:uint; i<19; i++) SomeFunction (i);
for (var i:uint; i<26; i++) SomeOtherFunction (i);
What I received was a compilation warning:
Warning: Duplicate variable definition.
This warning really surprised me. Nothing like that ever happened to me in other languages.
It seems that the i variable ends up in the enclosing scope and remains available outside the loop's block. I've also tried wrapping the loop in an extra pair of curly braces, but it didn't change anything.
Why does this happen? Is it normal? Is it possible to avoid it? For now I've just given the two variables different names, but that's not a real solution, I think. I'd really like to use an i-named variable in most of my for loops.
Yes, the loop counter variable lives in the scope of the loop's parent (the function), not inside the loop itself. This is intentional, for examples like this:
public function getPositionOfValue ( value:String ) : int
{
    for ( var i:int = 0; i < someArray.length; i++ )
    {
        if ( someArray[i] == value )
        {
            break;
        }
    }
    return i;
}
This allows you to access the value of i once the loop is over. There are lots of cases where this is very useful.
What you should do in cases where you have multiple loops inside the same scope is declare the i outside of the loops:
public function getPositionOfValue ( value:String ) : int
{
    var i:int;
    for ( i = 0; i < 15; i++ )
    {
        //do something
    }
    for ( i = 0; i < 29; i++ )
    {
        //do something else
    }
    return i;
}
Then you get rid of your warning. The other thing to consider is naming your loop counter variables something more descriptive.
Update: Two other things to consider:
1) You shouldn't use uint except for things like colors and places where Flex expects a uint; it is slower to use than int (source). Update: it looks like this may no longer be the case in newer versions of the Flash Player (source).
2) When you declare a loop counter variable inside a loop declaration, make sure you set it to the proper initial value, usually 0. You can get some hard-to-track-down bugs if you don't.
As mentioned here, AS3 has global and local (function) scope, and that's about it.
It does not do block-level scoping (nor for-level scoping). With hoisting, you can even write to variables before you declare them. That's the bit that would do my head in :-)
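A tiny illustration of that hoisting behaviour (the traced values in the comments are what declaration-hoisting produces):
function hoistDemo():void
{
    trace(n);         // NaN -- n already exists in function scope, but has no value yet
    var n:Number = 5; // the declaration is hoisted to the top; the assignment is not
    trace(n);         // 5
}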
Early versions of Visual C had this bug, leading to all sorts of wonderful funky macro workarounds, but this is not a bug in AS3; it's working as designed. You can either restrict your code to having the declaration in the first for only, or move the declaration outside all the for statements.
Either way, it's a matter of accepting that the language works one way, even though you may think that's a bad way :-)
Declare the variable i outside the loops to avoid this. As long as you reset it (i=0) you can still use it in all loops.
var i : uint;
for (i=0; i<19; i++) SomeFunction(i);
for (i=0; i<26; i++) SomeOtherFunction(i);
I am creating a bunch of objects in an array. I'm used to doing this iteratively, like
for (var i:int = 0; i < number; i++) {
    ball = new Ball();
    balls.push(ball);
}
and then later I can refer to balls[i]. Now, however, I'm creating the objects with mouse clicks, not with a for loop, so I don't have that [i]; my other code refers to just "ball", which means it affects only whichever one was created last. Is there any reasonable way to 'name' each object arbitrarily so I can later say "each of you go off and do your own thing, and ignore everyone else"?
There are a couple of solutions.
If you have a reference to the ball, you can use
myArray.indexOf(myBall)
Which will give you its position in the array, so you can talk to it.
you can also store them with a name value like this:
myArray["name" + uniqueIdentifier] = myBall;
That would replace your push statements.
Alternatively you could just loop over them all like this:
for (var i:int = 0; i < myArray.length; i++)
{
    ball = myArray[i];
    // perhaps you stored a value on ball to distinguish when it was created
    if (ball.wasCreatedByMouseClick)
    {
        // do something
    }
}
Hope that gives you some ideas.
I don't know exactly what your code is doing, but based on what you described I'd use a Dictionary to store your objects under a string key.
The Dictionary class behaves like an object on which you can set a dynamic property (the key) and its value (your object).
It's different from a plain Object because the keys can be anything (strings, objects, arrays, ... anything).
This is an example:
var d:Dictionary = new Dictionary(true);
d["myBall"] = new Ball();

function getBallByKey(key:String):Ball
{
    return d[key];
}
Depending on the scenario, it's a good idea to construct the Dictionary with weak references (the true argument above). This helps avoid memory leaks by actually letting an object "die" when no reference other than the dictionary itself is pointing at it.
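Note that the weak flag applies to the Dictionary's keys, so it matters most when the objects themselves are used as keys; a small illustrative sketch (the registry/ball names are just examples):
import flash.utils.Dictionary;

var registry:Dictionary = new Dictionary(true); // true = weak keys

var ball:Ball = new Ball();
registry[ball] = "myBall"; // the object itself is the key, the label is the value

delete registry[ball];     // explicit removal is also possible
ball = null;               // or: once no other strong references exist, the weak
                           // entry no longer keeps the Ball alive and it can be collected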