I am working on reducing the memory usage of my code, and I learned that removing a component also removes the children inside it. If that happens, memory usage should decrease, but instead it is increasing.
I have a TitleWindow which contains HBoxes, and those HBoxes have Canvases as children, which contain Images. Now if I use removeChild(titlewindow):
Do all the HBoxes, Canvases, and Images inside it get removed or not?
If they do get removed, is the memory usage reduced or not? How can I do that in Flex?
Yeah, everything pretty much gets removed with it, as long as you then set the value of titleWindow to null and don't ever re-add those children. As for whether this clears out any memory or not, it basically will under two conditions:
The garbage collector runs afterwards. This can be expensive, and thus Adobe's designed it to not necessarily just keep happening over and over again at regular intervals. Instead it tends to happen when Flash Player or AIR is running out of memory in its current heap, at which point the garbage collector will check first to see if it can free up enough space within the current heap before anything more is grabbed from the operating system.
You don't have any live references to these children anywhere else. By "live", I mean reachable: if the only places where you still hold references to them are themselves unreachable from the rest of your program, this condition is still met.
There is at least one exception to this rule: the garbage collector designates certain objects in your program as GCRoots. A GCRoot is never garbage-collected, period. So if you orphan a GCRoot (make it so that neither it nor any of its descendants have any references anywhere outside of themselves), the garbage collector basically just doesn't care. The GCRoot will be left in there, and any references it holds to other objects are thus considered live and active.
Additionally, there are certain occasions when the garbage collector simply cannot tell whether something in memory is a reference or not, so it assumes that it is and may fail to delete something. Usually this is not a problem, but if your program is big enough and is not doing a lot of object pooling, I can tell you from experience that reacting specifically to this can on rare occasions be a necessity.
Try setting titlewindow to null after removing it:
removeChild(titlewindow);
titlewindow = null;
The garbage collector will remove all your boxes from memory if there are no more references to them from your main code. It should be okay to skip explicitly removing the children, as long as the only references to them come from the parent, i.e. titlewindow and its children form an isolated group of objects. But make sure you also remove any event listeners that anything might have registered, using removeEventListener().
Also, there is no guarantee when the garbage collector actually runs, so if it looks like your memory is increasing, it might just mean the GC hasn't had a chance to clear up the memory yet. Here's an SO question on how to force GC to run. (when debugging, System.gc() usually works for me).
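For example, the teardown might look like this (a sketch; onTitleWindowClose is a hypothetical handler name, and System.gc() only works reliably in the debug player and AIR):
import flash.events.Event;
import flash.system.System;

// Remove any listeners first so nothing holds a reference to the window.
titlewindow.removeEventListener(Event.CLOSE, onTitleWindowClose);
removeChild(titlewindow);
titlewindow = null;

// Debug only: request an immediate collection so the profiler
// reflects the change sooner.
System.gc();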
I'm learning the copy-on-write technique. I understand that parent and child processes share the same address space, and that when the parent or child wants to modify a page, that page is copied to the process's private memory and then modified.
So my question is: assume the child process modifies a page, then completes and terminates. What happens to the modified data? Is it still there and visible to the parent process and other child processes?
In short: if the child process modifies a page, what happens next for the parent and other child processes with respect to that modified page/data?
I have read the COW concepts and understand the basic principles, but I'm not sure how deeply I understand them.
In short - the parent does not have access to the child process's data. Neither do any other siblings. The moment the child process terminates, all its modifications are lost.
Remember, COW is just an optimization. From the processes' point of view, they don't even realize it is copy-on-write. From their perspective, each process has its own copy of the memory space.
Long answer, what happens behind the scenes:
*Note: I am simplifying some corner cases; not everything is 100% accurate, but this is the idea.
Each process has its own page table, which maps process virtual addresses to physical pages.
At some point, the parent process calls fork. At this step, a child process is created and its VMA descriptors are duplicated (there are certain rules on how that is done, with intermediate chains etc.; I won't deep-dive into this). What is important is that at this stage, the child's and parent's virtual addresses point to the same physical addresses.
Next, all pages are marked read-only.
At this point, if either the child or the parent tries to write to a certain page, it will cause a page fault. They can read freely, however.
Now assume the child writes to a page. This causes a page fault, which is caught by the kernel. The kernel therefore knows this is a COW page, so what it does is create a separate copy of the physical page for the child.
So at this point, the child and parent have the same virtual address pointing to two different physical addresses.
That should answer your question. The parent cannot access another process's physical pages. The virtual address is the same, but that does not matter. When the child dies, its pages are recycled, and all its changes are lost.
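You can see this from user space with a minimal C sketch (the variable name is illustrative): the child's write triggers the COW fault and lands in the child's private copy of the page, so the parent never sees it.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int value = 42;  /* lives in a page shared COW-style after fork */

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(1);
    }

    if (pid == 0) {
        /* Child: this write faults, the kernel copies the page, and the
           modification lands in the child's private copy. */
        value = 100;
        printf("child:  value = %d\n", value);  /* prints 100 */
        _exit(0);
    }

    waitpid(pid, NULL, 0);
    /* Parent: still sees its own, unmodified copy of the page. */
    printf("parent: value = %d\n", value);      /* prints 42 */
    return 0;
}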
This question is a little "meta" for SO, but there doesn't seem to be a better place to ask it...
According to Google, realtime collaborative objects are never deleted from the model. So it makes sense to pool objects where possible, rather than not-really-deleting them and subsequently creating new ones, thus preventing an unnecessary increase in file size and overhead.
And here's the problem: in an "undo" scenario, this would mean pulling a deleted object out of the trash pool. But "undo" only applies to operations by the local user, and I can't see how the realtime engine could cope if that "deleted" object had already been claimed by a different user.
My question is, am I missing something or wrong-thinking, and/or is there an alternative to a per-user pool?
(It also occurs to me that as a feature, the API could handle pooling deleted objects, automatically minimizing file-bloat.)
I think you have to be very careful about reusing objects in the way you describe. It's really hard to get right. Are you actually running into size issues? In general, as long as you don't constantly create and throw out objects, it shouldn't be a big deal.
You can delete the contents of the collab object when it's not being used, to free up space. That should generally be enough.
(Note: yes, the API could theoretically handle this object cleanup automatically. It turns out to be a really tricky problem to get right, due to features like undo. It might show up as a future feature if it becomes a real issue for people.)
Adding to Cheryl's answer, the one thing that I see as particularly challenging (actually, impossible) is the pulling-an-object-from-the-pool stuff:
Let's say you have a pool of objects, which (currently) contains a single object O1.
When a client needs a new object, it will first check the pool. If the pool is not empty, it will pull an object from there (the O1 object) and use it, right?
Now, consider the scenario where two clients (a.k.a. editors/collaborators) need a new object at the same time. Each of these clients will run the logic described in the previous paragraph. That is: both clients will check whether the pool is empty, and both clients will pull O1 off of the pool.
So, the losing client will "think" for some time that it succeeded. It will grab an object from the pool and do some things with it. Later on, it will receive an event (E) telling it that the object was actually pulled by another client. At this point the "losing" client will need to create another object and re-apply whatever changes it made to the first object to this second one.
Given that you do not know if/when the (E) event is going to fire, this actually means that every client needs to be prepared to replace every collaborative object it uses with a new one. This seems quite difficult. Making it more difficult is the fact that you cannot make model changes from event handlers (as this will mess up the redo/undo stack). So the actual reaction to the (E) event needs to be carried out outside of the (E) event handler. Thus, in the time between receiving the (E) event and fixing the model, your UI layer will not be able to use the model.
I am suffering from a memory leak within my WP8 app.
I have investigated and narrowed the problem down though I would like some guidance on where to go next.
I do have a hunch that the Pivot and LongListSelector are the cause, described below.
Problem Description:
App browses across numerous pages (typically a few dozen).
Memory use is increasing by around 2-4MB per page and not being fully released.
After around 25 pages memory use has increased from 30MB to 90MB.
Existing Code:
I am already performing some cleanup on each page before navigating to the next.
I am also already calling NavigationService.RemoveBackEntry() to keep the Back Stack small.
Investigation:
I have used the Memory Profiler which reveals about 15MB of the growth is my own data that isn't being cleaned up.
The remaining 40-50MB is not included in the Heap Summary (i.e. the total only comes to about 15MB out of the observed 90MB).
I will work on the 15MB, but since it doesn't appear to be the most significant factor, I would like some guidance on sensible next steps.
Possible Cause:
I have another set of pages (again typically a few dozen pages when browsed) composed of very similar content.
These pages however do NOT show the same problem. Memory use remains low throughout.
Both sets of pages use similar cleanup code when navigating.
One key difference is that the affected pages use a Pivot control, each with 3 or 4 PivotItems, a couple of which contain LongListSelectors.
The LongListSelectors are data-bound to generic Lists generated at run time. No images, only text. Not especially long lists, around 20 items in each.
I have come across a couple of posts vaguely suggesting that this combination of controls is susceptible to memory leaks.
I commented out the code that populates these controls, and sure enough, memory use now peaks at around 50-60MB.
It may be even lower if I remove the controls completely (I haven't tested that yet).
So, these controls are not the whole story, but clearly are a large part of the problem.
Question:
Is there a known issue with these controls (LongListSelector, Pivot)?
Should there be some code to clean up these controls? I have tried setting the LongListSelector ItemsSource to an empty list, but this had no effect on the memory growth.
Is there any way to workaround it? (obviously changing the type of controls used is one option).
Thanks for reading.
After some investigation, I have found what looks like a problem with the LongListSelector recovering memory when it is cleaned up.
I have posted more details and a workaround here:
http://cbailiss.wordpress.com/2014/01/24/windows-phone-8-longlistselector-memory-leak/
I had the same problem. All I did was change the source List<>s to ObservableCollection<>s; then, when the SelectedItem changes, I clear the source collections that are not visible at the moment.
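Roughly like this (a sketch, not my exact code; MainPivot, _sources and PopulateSource are illustrative names):
// In the page code-behind; _sources[i] is the ObservableCollection
// bound to the LongListSelector in pivot item i.
private void MainPivot_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    for (int i = 0; i < _sources.Count; i++)
    {
        if (i == MainPivot.SelectedIndex)
            PopulateSource(i);    // hypothetical: refill the now-visible list
        else
            _sources[i].Clear();  // release the items the user cannot see
    }
}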
I started running into serious issues with the garbage collector partially picking this up, but still leaving most of the object up and running in the background without a reference:
m_win = new MyWindow(arr);
m_win.open();
m_win.addEventListener(Event.CLOSE, onClose);
.
.
.
private function onClose(pEvent:Event):void
{
    m_win.removeEventListener(Event.CLOSE, onClose);
    m_win.close();
    m_win = null;
    // RAM usage for m_win is only reduced about 20-40%. I can see the
    // garbage collector has run as a result of that reduction, but the
    // other 60-80% are a serious problem.
}
That's the only event listener I have added to m_win, and that's the only reference my code has to m_win. MyWindow is basically a standalone AIR project (although it's nested in a different class than its own Main class to adapt it to this scenario; otherwise it's the same). MyWindow has NetConnections, NetStreams, and such that remained live even after the garbage collector had run.
So one of the first things I tried was to go in and disconnect its NetConnections and NetStreams. But that didn't work. Then I came across these blogs:
http://tomgabob.blogspot.com/2009/11/as3-memory-management.html
I found the link for that in another blog from another guy who had trouble with the same thing. Supposedly, if AS3's garbage collector finds an "island" that has reached a critical mass (in my experience, maybe 30 MB?), it just refuses to do anything with it. So I went with this guy's recommendation and at least attempted to null out every reference, remove every event listener, call removeAllElements() and/or removeChildren() where necessary, and call "destructor" functions manually all throughout MyWindow, the sole exception being the Main class that isn't really used for m_win.
It didn't work, unless I messed up. But even if I left a couple of stones unturned, it should still have broken up the island more than enough for it to work. I've been researching other causes and workarounds for this problem and have tried other things (such as manually telling the garbage collector to run), but nothing is cleaning the mess up properly. The only thing that has worked has been disconnecting the NetConnection/NetStream stuff on the way out, but a good 25-30 MB remains uncollected.
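For reference, the "destructor" pattern I applied throughout MyWindow looks roughly like this (a sketch; m_stream, m_connection and m_data are illustrative names, and each class also removes whatever listeners it added):
public function dispose():void
{
    // Shut down network objects explicitly; closing a window does not
    // disconnect them, and a live stream keeps its owner reachable.
    if (m_stream)
    {
        m_stream.close();
        m_stream = null;
    }
    if (m_connection)
    {
        m_connection.close();
        m_connection = null;
    }

    // Detach display children and null remaining references so the
    // "island" breaks into smaller, collectable pieces.
    removeChildren();
    m_data = null;
}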
How can this be fixed? Thanks!
EDIT: Given Adobe's statements at http://www.adobe.com/devnet/actionscript/learning/as3-fundamentals/garbage-collection.html:
"GCRoots are never garbage collected."
and:
"The MMgc is considered a conservative collector for mark/sweep. The MMgc can't
tell if certain values in memory are object pointers (memory addresses) or a
numeric value. In order to prevent accidentally collecting objects the values
may point to, the MMgc assumes that every value could be a pointer. Therefore,
some objects that are not really being pointed to will never be collected and
will be considered a memory leak. Although you want to minimize memory leaks to
optimize performance, the occasional leaks that result from conservative GC tend
to be random, not to grow over time, and have much less of an impact on an
application's performance than leaks caused by developers."
I can see sort of a link between that and the "big island" theory at this point - assuming the theory is correct, which I'm not sure it is. Adobe is pretty much admitting, at least in the second statement, that there are at least benign issues with orphans getting skipped over. They act like it's no big deal, but to just leave it like that and pretend it's nothing is probably mostly just typical Adobe sloppiness. What if one of those rare missed orphans they mention has a large, fully-active object hierarchy within it? If you can't prevent the memory leak there completely, I can definitely see how going through and doing things like nulling out references throughout that hierarchy before you lose your reference would do a lot to minimize the amount of memory leaked as a result.
My thinking is that the garbage collector would generally still be able to get rid of most of everything within that island, just not the island itself. Also what some of these people who were supposedly able to really use the "big island" theory were seeing was probably some of the "less benign" manifestations of what Adobe was admitting to. This remains a hypothesis though.
EDIT: After I checked the destructors again and straightened out a couple of issues, I did see a significant improvement. The reason I'm leaving this question up is that not only is there a chance I'm still missing something else I could be doing to release memory, but the main explanation I've used so far isn't official or proven.
Try calling the following to force garbage collection:
try {
    new LocalConnection().connect('foo');
    new LocalConnection().connect('foo');
} catch (e:*) {
    // Connecting twice with the same name always throws, and as a side
    // effect the GC performs a full mark/sweep on the second call.
}
This is undocumented, but works. Credits to Grant Skinner, who explains more on his blog.
I have developed a stand-alone touch screen application. The interactive currently runs 10-13 hours per day. As the user interacts with it, the memory usage keeps increasing. The interactive has five screens; when travelling between screens I remove the MovieClips, assets and listeners, and I set objects to null. Yet the memory level keeps increasing.
I have also used the third-party "gskinner" tool to tackle this problem. It improves the result, but some memory leakage is still there.
Please help me, thanks in advance.
Your best results will come from writing the code in a way that elements are properly garbage collected on removal. That means removing all the objects, listeners and MovieClips/Sprites within that are no longer used.
When I'm trying to get this stuff done quickly, I've been using casalib's CasaMovieClip and CasaSprite instead of regular MovieClips and Sprites, because they have destroy() functions as well as some other helpers that make garbage collection easier.
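For example (a sketch based on casalib's documented destroy() pattern; the package path is per the casalib docs):
import org.casalib.display.CasaMovieClip;

var clip:CasaMovieClip = new CasaMovieClip();
addChild(clip);
// ... use the clip as a normal MovieClip ...

// When tearing down the screen:
removeChild(clip);
clip.destroy(); // casalib cleans up listeners/children so the clip can be collected
clip = null;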
But the best advice I can give is to read up on garbage collection. Grant Skinner's blog is a great place to start.
Also, check for setTimeout() and dictionaries, as these can cause leaks as well if not used properly.
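Two small examples of what to look for (names are illustrative):
import flash.utils.Dictionary;
import flash.utils.setTimeout;
import flash.utils.clearTimeout;

// A weak-keyed Dictionary (pass true) doesn't keep its keys alive:
var lookup:Dictionary = new Dictionary(true);

// A pending timeout keeps its callback, and everything the callback
// references, reachable; clear it before discarding a screen:
var timeoutId:uint = setTimeout(onScreenTimeout, 5000);
clearTimeout(timeoutId);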