How can I optimise this method? - actionscript-3

I have been working on an assets class that can generate dynamic TextureAtlas objects whenever I need them. The specific method is Assets.generateTextureAtlas(), and I am trying to optimise it as much as possible, as I quite frequently need to regenerate texture atlases and was hoping to beat my current 53ms average.
53ms currently costs me about 3 frames, which adds up quickly with the number of items I need to pack into the atlas and how frequently I need to generate it. Pointing out any pitfalls in my code would be great.
The entire class code is available here in a github gist.
The RectanglePacker class is simply used to pack rectangles as close together as possible (similar to Texture Packer) and can be found here.
For reference, here is the method:
public static function generateTextureAtlas(folder:String):void
{
if (!_initialised) throw new Error("Assets class not initialised.");
if (_renderTextureAtlases[folder] != null)
{
(_renderTextureAtlases[folder] as TextureAtlas).dispose();
}
var i:int;
var image:Image = new Image(_blankTexture);
var itemName:String;
var itemNames:Vector.<String> = Assets.getNames(folder + "/");
var itemsTexture:RenderTexture;
var itemTexture:Texture;
var itemTextures:Vector.<Texture> = Assets.getTextures(folder + "/");
var noOfRectangles:int;
var rect:Rectangle;
var rectanglePacker:RectanglePacker = new RectanglePacker();
var texture:Texture;
noOfRectangles = itemTextures.length;
if (noOfRectangles == 0)
{
return;
}
for (i = 0; i < noOfRectangles; i++)
{
rectanglePacker.insertRectangle(Math.round(itemTextures[i].width), Math.round(itemTextures[i].height), i);
}
rectanglePacker.packRectangles();
if (rectanglePacker.rectangleCount != noOfRectangles)
{
throw new Error("Only " + rectanglePacker.rectangleCount + " out of " + noOfRectangles + " rectangles packed for folder: " + folder);
}
itemsTexture = new RenderTexture(rectanglePacker.width, rectanglePacker.height);
itemsTexture.drawBundled(function():void
{
for (i = 0; i < noOfRectangles; i++)
{
itemTexture = itemTextures[rectanglePacker.getRectangleId(i)];
rect = rectanglePacker.getRectangle(i, rect);
image.texture = itemTexture;
image.readjustSize();
image.x = rect.x + itemTexture.frame.x;
image.y = rect.y + itemTexture.frame.y;
itemsTexture.draw(image);
}
});
_renderTextureAtlases[folder] = new TextureAtlas(itemsTexture);
for (i = 0; i < noOfRectangles; i++)
{
itemName = itemNames[rectanglePacker.getRectangleId(i)];
itemTexture = itemTextures[rectanglePacker.getRectangleId(i)];
rect = rectanglePacker.getRectangle(i);
(_renderTextureAtlases[folder] as TextureAtlas).addRegion(itemName, rect, itemTexture.frame);
}
}

Well, reading the whole project and finding everything that can be optimised would take a while.
Start by removing the repeated calls to rectanglePacker.getRectangleId(i) inside loops.
For example:
itemName = itemNames[rectanglePacker.getRectangleId(i)];
itemTexture = itemTextures[rectanglePacker.getRectangleId(i)];
rect = rectanglePacker.getRectangle(i);
could perhaps have been:
var id:int = rectanglePacker.getRectangleId(i);
itemName = itemNames[id];
itemTexture = itemTextures[id];
rect = rectanglePacker.getRectangle(i);
assuming getRectangleId() really does just 'get an id' and doesn't set anything.

I think the bigger issue at hand is this: why do you HAVE to do this at run-time, in a situation where you can't afford the extra time? This IS an expensive operation; no matter how much you optimise it, you will probably still end up at around 40ms or so when done in AS3.
This is why these kinds of operations should be done at compile time, or during loading screens or other transitions where frame rate is not critical and you can afford the cost.
Alternatively, build the atlas in another system written in C++ or some other language that can actually handle the number-crunching, and hand back the finished result.
Also, when it comes to checking performance: yes, the entire function takes 53ms, BUT where are those milliseconds actually spent? 53ms on its own says nothing; it is only the top-level number that told you where the culprit lives. You need to break the function down into smaller chunks to gather reliable information about what ACTUALLY takes time inside it.
I mean, inside that function you have three for loops, several calls to other classes, casts, disposals and object creations. It's not like you are doing one thing; that function probably expands to ~500 lines of code and a huge number of CPU operations, and you have no idea where the time is spent. I would guess that rectanglePacker.packRectangles() takes around 60% of it, but without profiling neither you nor we know what to optimise; we simply don't have sufficient data.
If you HAVE to do this at run-time in AS3, I would recommend spreading the work out over several frames, distributing it evenly across 10 frames or so. You could also do it with the help of another thread and workers. But most of all this looks like a design error, since it could probably be done at another time; and if not, then in another language which is better at this kind of operation.
The easiest way to profile this is to add a couple of timestamps, something like:
var timestamps:Array = [];
and then push getTimer() at different places in the code, and print the results when the function is done.
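For example, a minimal sketch of that idea, assuming getTimer() is imported from flash.utils; the stage labels are just illustrative:

var timestamps:Array = [];
timestamps.push({label: "start", time: getTimer()});

rectanglePacker.packRectangles();
timestamps.push({label: "packRectangles", time: getTimer()});

// ... the drawBundled block, the addRegion loop, etc., each followed by another push ...
timestamps.push({label: "done", time: getTimer()});

// Print how long each stage took relative to the previous timestamp.
for (var t:int = 1; t < timestamps.length; t++)
{
    trace(timestamps[t].label + " took " + (timestamps[t].time - timestamps[t - 1].time) + "ms");
}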

As others said, it's unlikely that non-optimised AS code is the reason for the bad performance. Output from a profiler (Scout, for example) would be very helpful. However, if your purpose is just adding new textures, I can suggest several optimisations:
Why would you need to regenerate the whole atlas every time (calling Assets.getTextures() and creating a new render texture)? Why don't you just add new items to the existing atlas? Creating a new RenderTexture (and thus a new texture in GPU memory) is a very costly operation, because it requires a sync between CPU and GPU. On the other hand, drawing into a RenderTexture is carried out entirely on the GPU, so it takes much less time.
If you place every item on a grid, you can avoid using RectanglePacker altogether, since all of your rectangles can share the same dimensions, matching the grid cell size.
Edit:
To clarify: some time ago I had a similar problem, where I had to add new items to an existing atlas on a regular basis, and the performance of this operation was quite acceptable (about 8ms on an iPad 3 using a 1024x1024 dynamic texture). But I used the same RenderTexture and the same Sprite object that contained my dynamic atlas items. When I need to add a new item, I just create a new Image with the desired texture (stand-alone or from another static atlas), place it inside the Sprite container, and then redraw this container into the RenderTexture. Deletion and modification of an item work similarly.
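A rough sketch of that approach, assuming the same Starling RenderTexture/TextureAtlas/Image classes used in the question; the member and method names here are made up for illustration, and the 1024x1024 size is arbitrary:

// Created once and kept alive for the whole session.
private static var _atlasTexture:RenderTexture = new RenderTexture(1024, 1024);
private static var _atlasContainer:Sprite = new Sprite();   // starling.display.Sprite
private static var _dynamicAtlas:TextureAtlas = new TextureAtlas(_atlasTexture);

public static function addItem(name:String, itemTexture:Texture, region:Rectangle):void
{
    // Place the new item inside the reused container at its packed position.
    var image:Image = new Image(itemTexture);
    image.x = region.x;
    image.y = region.y;
    _atlasContainer.addChild(image);

    // Redraw the container into the existing render texture;
    // no new GPU texture is allocated, so there is no CPU/GPU sync cost.
    _atlasTexture.draw(_atlasContainer);
    _dynamicAtlas.addRegion(name, region, itemTexture.frame);
}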

Related

Forcing update rendering of MX ProgressBar?

Would anyone know a method (or trick) to force a rendering update of an MX ProgressBar in manual mode when using setProgress?
I have a situation where a block of code containing a couple of for loops takes a while to complete. It would be tedious to unwrap this code to generate events, etc.
Update
Let me expand on this with a bit of pseudo code. I want to update the progress bar during operations on the contents of an array. The for loop blocks, so the screen isn't updating. I've tried validateNow() but that had no effect.
Is there some non-convoluted way I can either unwrap the for loop or use AS3's event model to update a progress bar? (I'm more accustomed to multi-threaded environments where this sort of task is trivial.)
private function doSomeWork():void {
progressBar.visible = true;
for(var n:int = 0; n < myArray.length; n++){
progressBar.setProgress(n, myArray.length);
progressBar.label = "Hello World " + n;
progressBar.validateNow(); // this has no apparent effect
var ba:ByteArray = someDummyFunction(myArray[n]);
someOtherFunction(ba);
}
progressBar.visible = false;
}
In Flex, the screen never updates while ActionScript code is running. It basically works like this:
Execute all runnable Actionscript code.
Update the screen.
Repeat continuously.
To learn more details, google for [flex elastic racetrack]. But the above is the nut of what you need to understand.
If you don't want a long-running piece of code to freeze the screen, you'll have to break it up into chunks and execute them across multiple frames, perhaps within an ENTER_FRAME event handler.
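A hedged sketch of that approach, reusing the names from the pseudo code above; it assumes flash.events.Event is imported, and the chunk size of 10 items per frame is arbitrary:

private var workIndex:int = 0;

private function startWork():void {
    workIndex = 0;
    progressBar.visible = true;
    addEventListener(Event.ENTER_FRAME, doChunk);
}

private function doChunk(event:Event):void {
    // Process a small slice per frame so the screen gets a chance to update in between.
    var end:int = workIndex + 10;
    if (end > myArray.length) end = myArray.length;
    for (; workIndex < end; workIndex++) {
        var ba:ByteArray = someDummyFunction(myArray[workIndex]);
        someOtherFunction(ba);
    }
    progressBar.setProgress(workIndex, myArray.length);
    progressBar.label = "Hello World " + workIndex;
    if (workIndex >= myArray.length) {
        removeEventListener(Event.ENTER_FRAME, doChunk);
        progressBar.visible = false;
    }
}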
I am not sure what exactly the problem is. I tried the following code and it works without any need for validateNow.
protected function button2_clickHandler(event:MouseEvent):void
{
for(var n:int = 0; n < 100; n = n+20){
progressBar.setProgress(n, 100);
progressBar.label = "Hello World " + n;
// progressBar.validateNow();
}
}
<mx:VBox width="100%" height="100%">
<mx:ProgressBar id="progressBar"/>
<mx:Button label="Update Progress" click="button2_clickHandler(event)"/>
</mx:VBox>

Faster way to tell if a sprite is near another sprite?

When one of my sprites is being dragged (moved around), I'm cycling through other sprites on the canvas, checking whether they are in range, and if they are, I set a background glow on them. Here is how I'm doing it now:
//Sprite is made somewhere else
public var circle:Sprite;
//Array of 25 sprites
public var sprites:Array;
public function init():void {
circle.addEventListener(MouseEvent.MOUSE_DOWN, startDrag);
}
private function startDrag(event:MouseEvent):void {
stage.addEventListener(MouseEvent.MOUSE_MOVE, glowNearbySprites);
stage.addEventListener(MouseEvent.MOUSE_UP, stopDrag);
circle.startDrag();
}
private function stopDrag(event:MouseEvent):void {
stage.removeEventListener(MouseEvent.MOUSE_MOVE, glowNearbySprites);
stage.removeEventListener(MouseEvent.MOUSE_UP, stopDrag);
circle.stopDrag();
}
private function glowNearbySprites(event:MouseEvent):void {
for (var i = 0; i < sprites.length; i++) {
var tSprite = sprites.getItemAt(i) as Sprite;
if (Math.abs(tSprite.x - circle.x) < 30 &&
Math.abs(tSprite.y - circle.y) < 30) {
tSprite.filters = [new GlowFilter(0xFFFFFF)];
}
else {
tSprite.filters = null;
}
}
}
Basically I'm cycling through each sprite every time a MOUSE_MOVE event is triggered. This works fine, but the lag when dragging the sprite around is pretty noticeable. Is there a more efficient way to do this, with less or no lag?
Well, depending on the number of sprites you have, it may be trivial. However, if you're dealing with over 1k sprites, use a data structure to reduce the number of checks. Look at this QuadTree demo.
Basically you have to create indexes for all the sprites so that you're not checking against ALL of them. Since your threshold is 30, whenever a sprite moves you could place it into a row/column bucket at int(x / 30), int(y / 30). Then you only need to check the sprites that exist in the 9 cells around the mouse position's row/column.
While this might seem more cumbersome, it is actually far more efficient once you have more items: the number of checks stays roughly constant as you add sprites. With this method I would guess you could run 10k sprites without any hiccup.
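A rough sketch of that bucketing idea; the 30px cell size matches the question's threshold, and the grid object plus helper names are made up for illustration:

private const CELL:int = 30;
private var grid:Object = {};   // "col,row" -> Array of sprites currently in that cell

private function indexSprite(s:Sprite):void {
    // Called whenever a sprite is added or has finished moving.
    var key:String = int(s.x / CELL) + "," + int(s.y / CELL);
    if (grid[key] == null) grid[key] = [];
    grid[key].push(s);
}

private function spritesNear(x:Number, y:Number):Array {
    // Only the 9 cells around the point need to be inspected,
    // regardless of how many sprites exist in total.
    var result:Array = [];
    var col:int = int(x / CELL);
    var row:int = int(y / CELL);
    for (var c:int = col - 1; c <= col + 1; c++) {
        for (var r:int = row - 1; r <= row + 1; r++) {
            var bucket:Array = grid[c + "," + r];
            if (bucket != null) result = result.concat(bucket);
        }
    }
    return result;
}

A sprite that moves would need to be removed from its old bucket and re-indexed, which stays cheap because each bucket only holds a handful of sprites.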
Other performance optimizations would be:
use a vector/array of sprites rather than getChildAt
preincrement i (++i)
store a single static GlowFilter instance, so there is only one filter array rather than a separate filter created for every sprite.
GlowFilter is pretty CPU intensive. It might make sense to draw all the sprites together in one pass and then apply the GlowFilter once to the result (this of course depends on how you have things set up; it might even be more cumbersome to blit your own bitmap).
Make your variable declaration var tSprite:Sprite = .... If you're not hard-typing it, the filters property has to be looked up by string instead of via the much faster getlex opcode.
I'd incorporate all the improvements that The_asMan suggested. Additionally, this line:
tSprite.filters = [new GlowFilter(0xFFFFFF)];
is probably really bad, since you're creating the same GlowFilter over and over again, and creating new objects is always expensive (and you're doing this in a for loop every time MOUSE_MOVE fires!). Instead, create it once when you create this class and assign it to a variable:
var whiteGlow:GlowFilter = new GlowFilter(0xFFFFFF);
...
tSprite.filters = [whiteGlow];
If you're still having performance issues after this, consider only checking half (or even less) of the objects on each call to glowNearbySprites (set some kind of flag that tells it where to continue on the next call: the first half of the array or the second). You probably won't notice any difference visually, and you should be able to almost double performance.
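A small sketch of that alternating check; the checkFirstHalf flag is made up for illustration, and whiteGlow is the cached filter array from the suggestion above (GlowFilter and MouseEvent imports assumed as in the question):

private var checkFirstHalf:Boolean = true;
private var whiteGlow:Array = [new GlowFilter(0xFFFFFF)];

private function glowNearbySprites(event:MouseEvent):void {
    var half:int = sprites.length >> 1;
    var start:int = checkFirstHalf ? 0 : half;
    var end:int = checkFirstHalf ? half : sprites.length;
    for (var i:int = start; i < end; i++) {
        var tSprite:Sprite = sprites[i] as Sprite;
        if (Math.abs(tSprite.x - circle.x) < 30 && Math.abs(tSprite.y - circle.y) < 30) {
            tSprite.filters = whiteGlow;
        } else {
            tSprite.filters = null;
        }
    }
    checkFirstHalf = !checkFirstHalf;   // next MOUSE_MOVE checks the other half
}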
Attempting to compile the suggestions by others into a solution based on your original code: so far I've created the GlowFilter only once and reused it, changed the loop to a for each instead of the index-based loop, and switched to the ENTER_FRAME event instead of MOUSE_MOVE. The only suggestion I've left out so far is using a Vector; my knowledge there is pretty much nil, so I'm not going to attempt it until I do some self-education.
Another edit:
I've now changed the declaration of sprites to a Vector (no code here for how it's populated, but the article below says you can basically treat it like an Array, since it implements all the same methods, with a couple of caveats; notably, you cannot have empty slots in a Vector, so if that is a possibility you have to declare it with a fixed size). Because a Vector knows the type of its elements, it can compute the exact position of any element in constant time (sizeOfObject * index + baseOffset = offset of item). The exact performance implications aren't entirely clear, but it should always be at least as fast as an Array, if not faster.
http://www.mikechambers.com/blog/2008/08/19/using-vectors-in-actionscript-3-and-flash-player-10/
//Vector of 25 sprites
public var sprites:Vector.<Sprite>;
private var theGlowFilterArray:Array;
public function init():void
{
theGlowFilterArray = [new GlowFilter(0xFFFFFF)];
circle.addEventListener(MouseEvent.MOUSE_DOWN, startDrag);
}
private function startDrag(event:MouseEvent):void
{
stage.addEventListener(MouseEvent.MOUSE_UP, stopDrag);
addEventListener(Event.ENTER_FRAME, glowNearbySprites);
circle.startDrag();
}
private function stopDrag(event:MouseEvent):void
{
stage.removeEventListener(MouseEvent.MOUSE_UP, stopDrag);
removeEventListener(Event.ENTER_FRAME, glowNearbySprites);
circle.stopDrag();
}
private function glowNearbySprites(event:Event):void
{
var circleX:Number = circle.x;
var circleY:Number = circle.y;
for each(var tSprite:Sprite in sprites) {
if (Math.abs(tSprite.x - circleX) < 30 && Math.abs(tSprite.y - circleY) < 30)
tSprite.filters = theGlowFilterArray;
else
tSprite.filters = null;
}
}
Your problem is that you are doing work that is at least linear, O(n), on every mouse move event, which is terribly inefficient.
One simple heuristic to reduce how often you run the full calculation is to save the distance to the closest sprite and only recalculate the potential overlaps once the mouse has moved that far. That check can be done in constant time, O(1).
Notice that this only works when one sprite moves at a time.
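A minimal sketch of that heuristic; nearestDistance() is a hypothetical helper that applies the glows and returns the distance from circle to its closest sprite, and the 30px value is the question's threshold:

private var lastX:Number = 0;
private var lastY:Number = 0;
private var safeDistance:Number = 0;   // how far the circle can move before a re-check is needed

private function onCircleMoved(event:MouseEvent):void {
    var dx:Number = circle.x - lastX;
    var dy:Number = circle.y - lastY;
    // O(1): until we have moved far enough to possibly reach the nearest sprite, skip the full scan.
    if (dx * dx + dy * dy < safeDistance * safeDistance) return;

    lastX = circle.x;
    lastY = circle.y;
    // Full O(n) pass: update glows and remember how much slack we have before the next pass.
    safeDistance = nearestDistance() - 30;
    if (safeDistance < 0) safeDistance = 0;
}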

building small GUI engine: visible vs. addChild/removeChild

Currently, I'm experimenting with a very simple GUI drawing ... "engine" (I guess you could call it that). The gist of it:
there is a FrontController that gets hit by user requests; each request has a uid
each uid (read "page") has a declaration of the components ("modules") that are present on it
components are Sprite subclasses and, in essence, are unique
Naturally, I need a way of hiding/showing these sprites. Currently, I do it pretty much the way Flex does by default: "if we are in a place where the component is visible, create it, cache it and reuse it every time it becomes visible again".
The question is: which would be the more appropriate and efficient way of hiding and showing - addChild/removeChild or toggling visible?
The way I see it:
visible is quick and dirty (on first tests)
visible does not create a chain of bubbling events like Event.ADDED or Event.REMOVED
invisible components don't get mouse events
So removeChild would be something I'd call only when I'm sure the component will no longer be needed on the screen (or the cache is too big, for instance).
What do stackoverflow'ers / AS3-crazed people think?
Update:
Here's a good read (forgot about google).
I will be sticking with visible; it seems to suit my task better. The manual "Optimizing Performance for the Flash Platform" by Adobe (p. 69) gave me even more confidence.
Here's a code snippet I put up to test things, for those who are interested:
package
{
import flash.display.Sprite;
import flash.display.Stage;
import flash.events.Event;
import flash.events.KeyboardEvent;
import flash.ui.Keyboard;
import flash.utils.getTimer;
/**
* Simple benchmark to test alternatives for hiding and showing
* DisplayObject.
*
* Use:
* <code>
* new DisplayBM(stage);
* </code>
*
* Hit:
* - "1" to addChild (note that hitting it 2 times is expensive; i think
* this is because the player has to check whether or not the comp is
* used elsewhere)
* - "q" to removeChild (2 times in a row will throw an exception)
* - "2" to set visible to true
* - "w" to set visible to false
*
* #author Vasi Grigorash
*/
public class DisplayBM{
public function DisplayBM(stage:Stage){
super();
var insts:uint = 5000;
var v:Vector.<Sprite> = new Vector.<Sprite>(insts);
var i:Number = v.length, s:Sprite
while (i--){
s = new Sprite;
s.graphics.beginFill(Math.random() * 0xFFFFFF);
s.graphics.drawRect(
Math.random() * stage.stageWidth,
Math.random() * stage.stageHeight,
10,
10
);
s.graphics.endFill();
v[i] = s;
}
var store:Object = {};
store[Event.ADDED] = null;
store[Event.REMOVED] = null;
var count:Function = function(e:Event):void{
store[e.type]++;
}
var keydown:Function = function (e:KeyboardEvent):void{
var key:String
//clear event counts from last run
for (key in store){
store[key] = 0;
}
stage.addEventListener(Event.ADDED, count);
stage.addEventListener(Event.REMOVED, count);
var s0:uint = getTimer(), op:String;
var i:Number = v.length;
if (e.keyCode === Keyboard.NUMBER_1){
op = 'addChild';
while (i--){
stage.addChild(v[i]);
}
}
if (e.keyCode === Keyboard.Q){
op = 'removeChild';
while (i--){
stage.removeChild(v[i]);
}
}
if (e.keyCode === Keyboard.NUMBER_2){
op = 'visible';
while (i--){
v[i].visible = true;
}
}
if (e.keyCode === Keyboard.W){
op = 'invisible';
while (i--){
v[i].visible = false;
}
}
if (op){
//format events
var events:Array = [];
for (key in store){
events.push(key + ' : ' + store[key])
}
trace(op + ' took ' + (getTimer() - s0) + ' ' + events.join(','));
}
stage.removeEventListener(Event.ADDED, count);
stage.removeEventListener(Event.REMOVED, count);
}
//autodispatch
stage.addEventListener(KeyboardEvent.KEY_DOWN, keydown);
}
}
}
Visible makes more sense to me (since removing a child implies finality) and is what I tend to use in my own projects when showing/hiding.
I'd also assume that addChild is slightly less performant, but I haven't done any tests.
EDIT: I just came across this Adobe article http://help.adobe.com/en_US/as3/mobile/WS5d37564e2b3bb78e5247b9e212ea639b4d7-8000.html which points out that when using GPU rendering mode, just setting visible = false can have a performance impact, since there is a cost for drawing overlapping objects even though they are not visible. Instead, removing the child entirely is advised:
Avoid overdrawing whenever possible. Overdrawing is layering multiple
graphical elements so that they obscure each other. Using the software
renderer, each pixel is drawn only once. Therefore, for software
rendering, the application incurs no performance penalty regardless
how many graphical elements are covering each other at that pixel
location. By contrast, the hardware renderer draws each pixel for each
element whether other elements obscure that region or not. If two
rectangles overlap each other, the hardware renderer draws the
overlapped region twice while the software renderer draws the region
only once.
Therefore, on the desktop, which use the software renderer, you
typically do not notice a performance impact of overdraw. However,
many overlapping shapes can adversely affect performance on devices
using GPU rendering. A best practice is to remove objects from the
display list rather than hiding them.
removeChild is better for reducing instances and events and for freeing up memory in your Flash movie. You may find over time that the sprites affect each other, through how they are drawn or through their listeners. Also, garbage collection generally comes into play when this method is used, which can ultimately interfere with your application.
With visible the sprite is still in memory; it just isn't currently drawn. You could also save the sprite's state, remove it, and then reload it when needed, which would be an ideal overall solution.
Using arrays to store the data is another possibility as well; it depends on how your application is implemented, which is hard to say since we don't know it.
Adding a child is, I would say, less strain, since it is a single item being added, versus multiple hidden ones. Also, the hidden children's properties are kept in memory along with their listeners.
Here's some hard data on the subject by Moock:
http://www.developria.com/2008/11/visible-false-versus-removechi.html
Scenario           Children on the Single-Frame Display List   .visible   .alpha   Elapsed Time (ms)
No Children        0                                            --         --       4
Non-visible        1000                                         false      1        4
Zero Alpha         1000                                         true       0        85
Fully Visible      1000                                         true       1        1498
90% Transparent    1000                                         true       .1       1997

Is there a way to get the actual bounding box of a glyph in ActionScript?

I'm learning ActionScript/Flash. I love to play with text, and have done a lot of that kind of thing with the superb Java2D API.
One of the things I like to know is "where, exactly, are you drawing that glyph?" The TextField class provides the methods getBounds and getCharBoundaries, but these methods return rectangles that extend far beyond the actual bounds of the whole text object or the individual character, respectively.
var b:Sprite = new Sprite();
b.graphics.lineStyle(1,0xFF0000);
var r:Rectangle = text.getCharBoundaries(4);
r.offset(text.x, text.y);
b.graphics.drawRect(r.x,r.y,r.width,r.height);
addChild(b);
b = new Sprite();
b.graphics.lineStyle(1,0x00FF00);
r = text.getBounds(this);
b.graphics.drawRect(r.x,r.y,r.width,r.height);
addChild(b);
Is there any way to get more precise information about the actual visual bounds of text glyphs in ActionScript?
Richard is on the right track, but BitmapData.getColorBounds() is much faster and more accurate. I've used it a couple of times, and optimised for your specific needs it's not as slow as one might think.
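A hedged sketch of that getColorBounds() route, assuming black glyphs drawn onto an opaque white BitmapData as in Richard's answer further down (it uses the same tf TextField):

var bmp:BitmapData = new BitmapData(tf.width, tf.height, false, 0xFFFFFF);
bmp.draw(tf);

// findColor = false returns the bounds of every pixel that is NOT pure white,
// i.e. the ink of the whole text. For a single glyph, draw only that character
// (or copy out the getCharBoundaries() region first) before calling this.
var inkBounds:Rectangle = bmp.getColorBounds(0xFFFFFF, 0xFFFFFF, false);
bmp.dispose();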
Cory's suggestion of using flash.text.engine is probably the "correct" way to go, but I warn you that flash.text.engine is VERY (very!) hard to use compared to TextField.
Not reasonably possible in Flash 9 -- Richard's answer is a clever work-around, though probably completely unsuitable for production code (as he mentions) :)
If you have access to Flash 10, check out the new text engine classes, particularly TextLine.
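If Flash 10 is an option, here is a small sketch of the text engine route; the font name and sizes are arbitrary, and as far as I recall the returned boxes are still typographic boxes rather than exact ink bounds:

import flash.geom.Rectangle;
import flash.text.engine.ElementFormat;
import flash.text.engine.FontDescription;
import flash.text.engine.TextBlock;
import flash.text.engine.TextElement;
import flash.text.engine.TextLine;

var format:ElementFormat = new ElementFormat(new FontDescription("Arial"), 48);
var block:TextBlock = new TextBlock(new TextElement("Hello glyphs", format));
var line:TextLine = block.createTextLine(null, 400);
line.x = 20;
line.y = 60;
addChild(line);

// Bounds of the fifth atom (normally the fifth glyph), relative to the line.
var atomRect:Rectangle = line.getAtomBounds(4);
trace(atomRect);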
I'm afraid all the methods available on TextField are supposed to do what you have already found them to do. Unless performance is key in your application (i.e. unless you intend to do this very often), one option would be to draw the text field into a BitmapData and find the topmost, leftmost, etc. coloured pixels within the bounding box retrieved by getCharBoundaries().
var i : int;
var rect : Rectangle;
var top_left : Point;
var btm_right : Point;
var bmp : BitmapData = new BitmapData(tf.width, tf.height, false, 0xffffff);
bmp.draw(tf);
rect = tf.getCharBoundaries(4);
top_left = new Point(Infinity, Infinity);
btm_right = new Point(-Infinity, -Infinity);
for (i=rect.x; i<rect.right; i++) {
var j : int;
for (j=rect.y; j<rect.bottom; j++) {
var px : uint = bmp.getPixel(i, j);
// Check if pixel is black, i.e. belongs to glyph, and if so, whether it
// extends the previous bounds
if (px == 0) {
top_left.x = Math.min(top_left.x, i);
top_left.y = Math.min(top_left.y, j);
btm_right.x = Math.max(btm_right.x, i);
btm_right.y = Math.max(btm_right.y, j);
}
}
}
var actualRect : Rectangle = new Rectangle(top_left.x, top_left.y);
actualRect.width = btm_right.x - top_left.x;
actualRect.height = btm_right.y - top_left.y;
This code loops through all the pixels that getCharBoundaries() deemed part of the glyph rectangle. If a pixel is not black, it is discarded. If it is black, the code checks whether it extends further up, down, left or right than any pixel previously seen in the loop.
Obviously, this is not optimal code, with nested loops and unnecessary point objects. Hopefully though, the code is readable enough, and you are able to make out the parts that can most easily be optimized.
You might also want to introduce some threshold value instead of ignoring any pixel that is not pitch black.

Segfault Copy Constructor

My code is as follows:
void Scene::copy(Scene const & source)
{
maxnum=source.maxnum;
imagelist = new Image*[maxnum];
for(int i=0; i<maxnum; i++)
{
if(source.imagelist[i] != NULL)
{
imagelist[i] = new Image;
imagelist[i]->xcoord = source.imagelist[i]->xcoord;
imagelist[i]->ycoord = source.imagelist[i]->ycoord;
(*imagelist[i])=(*source.imagelist[i]);
}
else
{
imagelist[i] = NULL;
}
}
}
A little background: the Scene class has a private int called maxnum and a dynamically allocated array of Image pointers that is set up on construction. These pointers point to Images. The copy constructor attempts to make a deep copy of all of the images in the array. Somehow I'm getting a segfault, but I don't see how I would be accessing the array out of bounds.
Anyone see something wrong?
I'm new to C++, so it's probably something obvious.
Thanks,
I would suggest making maxnum (and maybe imagelist) private data members and implementing const getMaxnum() and setMaxnum() methods. But I doubt that is the cause of any segfault the way you described this.
I would try removing that const before your reference and implementing const public methods to extract the data. It probably compiles since it is just a reference. Also, I would try switching to a pointer instead of pass-by-reference.
Alternatively, you can create a separate Scene class object and pass the Image data as an array pointer. And I don't think you can declare Image *imagelist[value];.
void Scene::copy(Image *sourceimagelist, int sourcemaxnum) {
maxnum=sourcemaxnum;
imagelist=new Image[maxnum];
//...
imagelist[i].xcoord = sourceimagelist[i].xcoord;
imagelist[i].ycoord = sourceimagelist[i].ycoord;
//...
}
//...
Scene a,b;
//...
b.Copy(a.imagelist,a.maxnum);
If the source Scene had maxnum set higher than the actual number of items in its imagelist, then the loop would run past the end of the source.imagelist array. Maybe maxnum is getting initialized to one while the array starts out empty (or maxnum might not be getting initialized at all), or maybe, if you have a Scene::remove_image() function, it removed an imagelist entry without decrementing maxnum. I'd suggest using an std::vector rather than a raw array. The vector keeps track of its own size, so your for loop would be:
for(int i=0; i<source.imagelist.size(); i++)
and it would only access as many items as the source vector holds. Another possible explanation for the crash is that one of your pointers in source.imagelist belongs to an Image that was deleted, but the pointer was never set to NULL and is now a dangling pointer.
delete source.imagelist[4];
...
... // If source.imagelist[4] wasn't set to NULL or removed from the array,
... // then we'll have trouble later.
...
for(int i=0; i<maxnum; i++)
{
if (source.imagelist[i] != NULL) // This evaluates to true even when i == 4
{
// When i == 4, we're reading the xcoord member from an Image
// object that no longer exists.
imagelist[i]->xcoord = source.imagelist[i]->xcoord;
That last line will access memory that it shouldn't. Maybe the object still happens to exist in memory because it hasn't gotten overwritten yet, or maybe it has been overwritten and you'll retrieve an invalid xcoord value. If you're lucky, though, then your program will simply crash. If you're dealing directly with new and delete, make sure that you set a pointer to NULL after you delete it so that you don't have a dangling pointer. That doesn't prevent this problem if you're holding a copy of the pointer somewhere, though, in which case the second copy isn't going to get set to NULL when you delete-and-NULL the first copy. If you later try to access the second copy of the pointer, you'll have no way of knowing that it's no longer pointing to a valid object.
It's much safer to use a smart pointer class and let that deal with memory management for you. There's a smart pointer in the standard C++ library called std::auto_ptr, but it has strange semantics and can't be used in C++ containers, such as std::vector. If you have the Boost libraries installed, though, then I'd suggest replacing your raw pointers with a boost::shared_ptr.