I have developed a touch screen standalone application. The interactive is currently running 10 to 13 hours per day, and as the user interacts with it, memory usage keeps increasing. The interactive has five screens; while travelling through each screen I remove the MovieClips, assets, and listeners, and I set objects to null. Yet the memory level keeps increasing.
I have also used the third-party tool "gskinner" to tackle this problem. It improves the result, though some memory leakage remains.
Please help me, thanks in advance.
Your best results will come from writing the code in a way that elements are properly garbage collected on removal. That means removing all the objects, listeners, and nested MovieClips/Sprites that are no longer used.
When I'm trying to get this stuff done quickly, I've been using casalib's CasaMovieClip and CasaSprite instead of regular MovieClips and Sprites. The reason is that they have the destroy() functions as well as some other functions that help you garbage collect easily.
But the best advice I can give is to read up on garbage collection. Grant Skinner's blog is a great place to start.
Also, check for setTimeout() and dictionaries, as these can cause leaks as well if not used properly.
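To illustrate the kind of cleanup involved, here is a minimal sketch of a destroy() pattern. The class and member names are hypothetical, and this is not casalib's actual implementation:

package {
    import flash.display.Sprite;
    import flash.events.MouseEvent;
    import flash.utils.setTimeout;
    import flash.utils.clearTimeout;

    public class InteractiveScreen extends Sprite {
        private var _timeoutId:uint;

        public function InteractiveScreen() {
            addEventListener(MouseEvent.CLICK, onClick);
            _timeoutId = setTimeout(onTimeout, 5000);
        }

        // Call this before discarding the screen, then null your reference to it.
        public function destroy():void {
            removeEventListener(MouseEvent.CLICK, onClick);
            clearTimeout(_timeoutId);   // a pending timeout pins this object in memory
            while (numChildren > 0) {
                removeChildAt(0);       // detach nested display objects
            }
        }

        private function onClick(e:MouseEvent):void { /* ... */ }
        private function onTimeout():void { /* ... */ }
    }
}

As for dictionaries, constructing them with weak keys (new Dictionary(true)) keeps them from pinning their keys in memory.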
So, in a nutshell, my question is pretty much in the title. I have my first big project, and now, after 11k lines of code in one big file, I am running into memory issues. When I try out my flash, it slowly slows down until I can't do anything, even though I always clear containers when I can and remove listeners when they're not needed.
Procedural, for those who don't know, means I have everything in one big main function and, within that, hundreds of other functions.
OOP logic feels like too big a thing to try to understand at this point; procedural has so far been much more natural to me...
Any tips on how to prevent memory from running out?
You don't really need any object oriented programming to break it down.
Just apply a bit of logic to where you can group and separate things. Chances are you also have a lot of repetitive lines of code.
So first off, start grouping lines: put them inside different functions and call those from main, as in the sketch below.
After you bring it all down to chunks, you can start thinking of grouping the functions into classes as well. But at the very least, the first step should have reduced your problem.
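A minimal sketch of that first step, with hypothetical names:

function main():void {
    setupBoard();
    setupUI();
    startGameLoop();
}

// Each function owns one concern that used to be inline in main.
function setupBoard():void { /* ...board creation lines... */ }
function setupUI():void { /* ...UI wiring lines... */ }
function startGameLoop():void { /* ...loop setup lines... */ }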
Your problem is hard to solve without using object-oriented programming. Btw, the C style of coding is usually called "imperative programming"... just so you know.
The bad news is, 11k lines of code in one file means all the code is inside one translation unit, so everything you coded is always in memory.
If you break it up into classes, then individual class instances (objects) will be created and destroyed on a requirement basis, thereby taking up as much memory as needed (growing and shrinking, not static).
Finally, using AS3 as if it were C will hurt you in many other ways long-term. So please learn OOP and break up your monolithic code into objects.
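A tiny illustration of that lifecycle, with a hypothetical GameBoard class:

var board:GameBoard = new GameBoard(14, 12);  // GameBoard is a placeholder name
board.populate();
// ... play ...
board.dispose();   // the instance removes its own listeners and children
board = null;      // now eligible for garbage collection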
Well, to call this out as the bad practice it is: you may not be giving the VM (virtual machine) a chance to breathe. I mean, if your procedures are busy all the time in a loop-like construct, polling state, then with great probability the VM never finds an opportunity to garbage collect. Which is a disaster.
So do not poll for events if that is what you are doing. Try getting rid of the grand procedure (the seer of all :) ) and move to an architecture in which the central monitor procedure is invoked by event handlers only when needed, as sketched below.
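For example, a hypothetical polling loop versus its event-driven replacement (stateChanged, handleChange, and model are placeholders):

import flash.events.Event;

// Polling: this runs 30-60 times a second whether anything happened or not,
// so the VM stays busy and GC rarely finds an idle moment.
addEventListener(Event.ENTER_FRAME, pollState);
function pollState(e:Event):void {
    if (stateChanged()) handleChange();
}

// Event-driven: the handler runs only when the model actually changes.
model.addEventListener(Event.CHANGE, onModelChange);
function onModelChange(e:Event):void {
    handleChange();
}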
Then, when things settle, sell your product and get rich. Learn OOP a.s.a.p. :)
I started running into serious issues with the garbage collector partially picking this up, but still leaving most of the object up and running in the background without a reference:
m_win = new MyWindow(arr);
m_win.open();
m_win.addEventListener(Event.CLOSE, onClose);
.
.
.
private function onClose(pEvent:Event):void
{
    m_win.removeEventListener(Event.CLOSE, onClose);
    m_win.close();
    m_win = null;

    // RAM usage for m_win is only reduced about 20-40%. I can see the garbage
    // collector has run as a result of that reduction, but the other 60-80% are
    // a serious problem.
}
That's the only event listener I have added to m_win, and that's the only reference my code has to m_win. MyWindow is basically a standalone AIR project (although it's nested in a different class than its own Main class to adapt it to this scenario; otherwise it's the same). MyWindow has NetConnections, NetStreams, and such that were kept alive after the garbage collector ran.
So one of the first things I tried was to go in and disconnect its NetConnections and NetStreams. But that didn't work. Then I came across these blogs:
http://tomgabob.blogspot.com/2009/11/as3-memory-management.html
I found the link for that in another blog, from another guy who had trouble with the same thing. Supposedly, if AS3's garbage collector finds an "island" that has reached a critical mass (in my experience, maybe 30 MB?), it just refuses to do anything with it. So I went with this guy's recommendation and at least attempted to null out every reference, remove every event listener, call removeAllElements() and/or removeChildren() where necessary, and call "destructor" functions manually all throughout MyWindow, the sole exception being the Main class that isn't really used for m_win.
It didn't work, unless I messed up. But even if I left a couple of stones unturned, it should have still broken up the island more than enough for it to work. I've been researching other causes of and workarounds for this problem and have tried other things (such as manually telling the garbage collector to run), but nothing's cleaning the mess up properly. The only thing that has worked has been to disconnect the NetConnection/NetStream stuff on the way out, but a good 25-30 MB remains uncollected.
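For reference, the kind of "destructor" I mean looks roughly like this (simplified, with placeholder member names):

// Hypothetical teardown on the window class; nested objects expose a
// similar dispose() that gets called in turn to break up the "island".
public function dispose():void {
    if (m_stream != null) {
        m_stream.close();       // stop the NetStream
        m_stream = null;
    }
    if (m_connection != null) {
        m_connection.close();   // disconnect the NetConnection
        m_connection = null;
    }
    removeChildren();           // detach remaining display objects
    if (m_childPanel != null) {
        m_childPanel.dispose(); // cascade into nested objects
        m_childPanel = null;
    }
}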
How can this be fixed? Thanks!
EDIT: Given Adobe's statements at http://www.adobe.com/devnet/actionscript/learning/as3-fundamentals/garbage-collection.html:
"GCRoots are never garbage collected."
and:
"The MMgc is considered a conservative collector for mark/sweep. The MMgc can't
tell if certain values in memory are object pointers (memory addresses) or a
numeric value. In order to prevent accidentally collecting objects the values
may point to, the MMgc assumes that every value could be a pointer. Therefore,
some objects that are not really being pointed to will never be collected and
will be considered a memory leak. Although you want to minimize memory leaks to
optimize performance, the occasional leaks that result from conservative GC tend
to be random, not to grow over time, and have much less of an impact on an
application's performance than leaks caused by developers."
I can see sort of a link between that and the "big island" theory at this point - assuming the theory is correct, which I'm not certain of. Adobe is pretty much admitting, in at least the second statement, that there are at least benign issues with orphans getting skipped over. They act like it's no big deal, but to just leave it like that and pretend it's nothing is probably mostly just typical Adobe sloppiness. What if one of those rare missed orphans that they mention has a large, fully-active object hierarchy within it? If you can't prevent the memory leak there completely, I could definitely see how going through and nulling out references all throughout that hierarchy before you lose your reference would generally do a lot to minimize the amount of memory leaked as a result.
My thinking is that the garbage collector would generally still be able to get rid of most of everything within that island, just not the island itself. Also what some of these people who were supposedly able to really use the "big island" theory were seeing was probably some of the "less benign" manifestations of what Adobe was admitting to. This remains a hypothesis though.
EDIT: After I checked the destructors again and straightened out a couple of issues, I did see a significant improvement. The reason I'm leaving this question open is that not only is there a chance I'm still missing something else I could be doing to release memory, but the main explanation I've used so far isn't official or proven.
Try calling the following to force garbage collection:

try {
    new LocalConnection().connect('foo');
    new LocalConnection().connect('foo');
} catch (e:*) {
    // Connecting twice with the same name throws; as a side effect, the GC
    // performs a full mark/sweep on the second call.
}

This is undocumented, but it works. Credit goes to Grant Skinner, who explains more on his blog.
I am a student of Computer Science and have learned many of the basic concepts of what is going on "under the hood" while a computer program is running. But recently I realized that I do not understand how software events work efficiently.
In hardware, this is easy: instead of the processor "busy waiting" to see if something happened, the component sends an interrupt request.
But how does this work in, for example, a mouse-over event? My guess is as follows: if the mouse sends a signal ("moved"), the operating system calculates its new position p, then checks what program is being drawn on the screen, tells that program position p, then the program itself checks what object is at p, checks if any event handlers are associated with said object and finally fires them.
That sounds terribly inefficient to me, since a tiny mouse movement equates to a lot of CPU context switches (which I learned are relatively expensive). And then there are dozens of background applications that may want to do stuff of their own as well.
Where is my intuition failing me? I realize that even "slow" 500MHz processors do 500 million operations per second, but still it seems too much work for such a simple event.
Thanks in advance!
Think of events like network packets, since they're usually handled by similar mechanisms. Now think, your mouse sends a couple of hundred packets a second, maximum, and they're around 6 bytes each. That's nothing compared to the bandwidth capabilities of modern machines.
In fact, you could make a responsive GUI where every mouse motion literally sent a network packet (86 bytes including headers) on hardware built around 20 years ago: X11, the fundamental GUI mechanism for Linux and most other Unixes, can do exactly that, and frequently was used that way in the late 80s and early 90s. When I first used a GUI, that's what it was, and while it wasn't great by current standards, given that it was running on 20 MHz machines, it really was usable.
My understanding is as follows:
Every application/window has an event loop, which is fed by OS interrupts.
The mouse move will arrive there.
To my knowledge, every window has its own queue/process (in Windows, since 3.1).
Every window has controls.
The window bubbles these events down to its controls.
Each control determines whether the event is for it.
So it's not necessary to "compute" which item is drawn under the mouse cursor: the window, and then the control, determine whether the event is for them.
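By way of analogy, the same dispatch pattern shows up in AS3's display list (a hypothetical snippet; the runtime does the hit-testing for you):

import flash.display.Sprite;
import flash.events.MouseEvent;

var window:Sprite = new Sprite();   // stands in for the "window"
var button:Sprite = new Sprite();   // a control inside it
button.graphics.beginFill(0x3366CC);
button.graphics.drawRect(0, 0, 100, 30);
button.graphics.endFill();
window.addChild(button);
addChild(window);

// One listener on the container: the runtime determines the actual
// control under the cursor and reports it as event.target.
window.addEventListener(MouseEvent.MOUSE_OVER, onOver);
function onOver(e:MouseEvent):void {
    if (e.target == button) {
        // the event was "for" the button; handle it here
    }
}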
By what criteria do you determine that it's too much? It's as much work as it needs to be. Mouse events happen in the millisecond range. The work required to get it to the handler code is probably measured in microseconds. It's just not an issue.
You're pretty much right, though mouse events occur at a fixed rate (e.g. a USB mouse on Linux delivers events 125 times a second by default, which really is not a lot), and the OS or application might further merge mouse events that are close in time or position before sending them off to be handled.
I am building a Flash AS3 application that allows users to modify images (drag and drop, select, scale, alter saturation, etc.) and then submit and save them to a server.
The user will then have the ability to log in and access these saved images via a separate admin tool in a thumbnail gallery. They can either delete an image or click a thumbnail to view it at original size.
I am architecting and building the front end only and will have design ready assets supplied.
Since I have been burned working to a fixed quote before, I would appreciate ANY feedback or advice on quoting this project!
Thanks in advance!
I've done a lot of estimating, and I've found that the only way I can get a reliable estimate is to break down all of the tasks and subtasks to as granular a level as I can, estimate all of those elements, and then add it up. This usually takes me several passes, and a couple of times waking up in the middle of the night.
It's time-intensive, but works out really well in at least three ways.
Obviously, the first way is that you end up with a pretty reliable estimate.
You also think of all kinds of things that you wouldn't have thought of if you hadn't sat down and written everything out (which is a big part of why estimates turn out to be wrong in the first place). You also give yourself the chance to really think through your overall approach, and you end up making better decisions on things like which framework to use.
Writing everything out to the detail level helps a lot in sequencing the work you're doing with the work of other teammates. Makes it easy to see that at a given point you'll be roadblocked if you don't have an API from the server team, etc. Also helps you realize how you will potentially roadblock your teammates, and gives you the ability to deal with that.
Hope that's helpful. Making myself work hard at the estimation end of a project has really helped me be successful in the actual development aspect.
I'm building a game such as Same Game. When I have to create a new level, I just run an algorithm to fill the board with N colors. This algorithm fills the board at random, but obviously not all levels generated this way have a solution.
I have to write a function to resolve this problem, so the game can be played by a perfect player forever.
I have a maximum of 6 colors and a minimum of 2, and the board has a reasonable size (14x12), but that can be modified.
The language is irrelevant.
EDIT: I don't need to solve the puzzle, I need to create levels that have at least one solution.
I've just checked out about five different versions of the game on Ubuntu and I've found an answer you can pillage from!
Simon Tatham's Portable Puzzle Collection
I play about five of his games incessantly but preferred Same GNOME. I just loaded up his Same Game and it has the option to ensure solubility when creating custom games. Even has a customisable scoring system. It's all awfully advanced.
An exe and source code is available from the above link.
And the license is MIT (meaning you can use it freely in commercial games - but please donate something to him if you can afford it)
One method, which, I'll add, is rarely the most efficient, is to build the level in reverse.
It's fairly simple to do in this case though. You just start with nothing and add clickable groups with some randomness... I say some randomness, as you may need to add extra blocks to make sure all columns are filled.
But thinking about it, even then there's a possibility two clickable groups you add will touch each other and cause an unforeseen collapse, resulting in an unfinishable game. So this method wouldn't guarantee a solvable game.
You could have a look at the source for an open source version like Same GNOME and see how they do it (if they do it at all!)
create a "solved" board, and then change it using N valid but random backwards moves. After adding each backward move, you could run the moves forward (on a temp board) to verify a solvable puzzle.
If you can't run a verification algorithm, because of time constraints, perhaps what you need to work with is a library of puzzles. You can have a background thread generating new random puzzles all the time, and running a verification algorithm on them to check if they are valid. When a valid puzzle is found, it is added to your library of puzzles (assuming the same puzzle doesn't already exist).
Then your game just loads randomly from the library. This allows you to ensure you always have valid puzzles, but still allows you to randomly generate them and verify them without slowing down the puzzle-loading.
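A minimal sketch of that verification idea in AS3 (all names are hypothetical; a board is an Array of columns, each an Array of color ints with index 0 at the bottom). Note the one-sided guarantee: if any random playout clears the board, the board is provably solvable, but a failed playout proves nothing, so boards that survive every attempt are simply discarded:

// Build a cols x rows board of random colors in 1..numColors.
function randomBoard(cols:int, rows:int, numColors:int):Array {
    var board:Array = [];
    for (var c:int = 0; c < cols; c++) {
        var col:Array = [];
        for (var r:int = 0; r < rows; r++)
            col.push(1 + int(Math.random() * numColors));
        board.push(col);
    }
    return board;
}

function cloneBoard(board:Array):Array {
    var copy:Array = [];
    for each (var col:Array in board) copy.push(col.concat());
    return copy;
}

// Flood-fill the same-colored group containing (c, r); returns [c, r] pairs.
function groupAt(board:Array, c:int, r:int, seen:Object):Array {
    var color:int = board[c][r];
    var stack:Array = [[c, r]];
    var cells:Array = [];
    while (stack.length > 0) {
        var cell:Array = stack.pop();
        var x:int = cell[0], y:int = cell[1];
        if (x < 0 || x >= board.length || y < 0 || y >= board[x].length) continue;
        var key:String = x + "," + y;
        if (seen[key] || board[x][y] != color) continue;
        seen[key] = true;
        cells.push(cell);
        stack.push([x - 1, y], [x + 1, y], [x, y - 1], [x, y + 1]);
    }
    return cells;
}

// All removable groups (size >= 2) currently on the board.
function allGroups(board:Array):Array {
    var seen:Object = {};
    var found:Array = [];
    for (var c:int = 0; c < board.length; c++)
        for (var r:int = 0; r < board[c].length; r++)
            if (!seen[c + "," + r]) {
                var g:Array = groupAt(board, c, r, seen);
                if (g.length >= 2) found.push(g);
            }
    return found;
}

// Remove a group; gravity is automatic because each column is an Array,
// and emptied columns are spliced out.
function removeGroup(board:Array, cells:Array):void {
    cells.sort(function(a:Array, b:Array):Number {
        return (a[0] - b[0]) || (b[1] - a[1]); // column asc, row desc
    });
    for each (var cell:Array in cells)
        board[cell[0]].splice(cell[1], 1);
    for (var c:int = board.length - 1; c >= 0; c--)
        if (board[c].length == 0) board.splice(c, 1);
}

// One random playout; returns true only if the board is fully cleared.
function randomPlayout(board:Array):Boolean {
    var b:Array = cloneBoard(board);
    while (true) {
        var groups:Array = allGroups(b);
        if (groups.length == 0) return b.length == 0;
        removeGroup(b, groups[int(Math.random() * groups.length)]);
    }
    return false; // unreachable
}

// Keep generating boards until one is proven solvable by some playout.
function generateSolvableBoard(cols:int, rows:int, numColors:int,
                               playoutsPerBoard:int = 50):Array {
    while (true) {
        var board:Array = randomBoard(cols, rows, numColors);
        for (var i:int = 0; i < playoutsPerBoard; i++)
            if (randomPlayout(board)) return board; // proven solvable
        // not proven solvable: discard it and try a fresh board
    }
    return null; // unreachable
}

Random playouts are cheap, so running dozens per candidate board keeps generation fast, and the one-sided guarantee keeps the library free of unsolvable levels.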
I think the best way is to generate the level semi-randomly: add one or more blocks at a time to the same column, so you get some connecting blocks. Then write a simple solving algorithm that just plays the board until there are no more possible moves. Then try to complete the remaining part by pushing some extra blocks in from the top so that there is always something left to vanish, and continue until the board is finished.
You store the pieces you added in a second matrix.
After that, you just have to add the second matrix to the first from the top. If the board is not full, you simply complete it with starting blocks (connecting blocks).