I am suffering from a memory leak within my WP8 app.
I have investigated and narrowed the problem down, though I would like some guidance on where to go next.
I do have a hunch that the Pivot and LongListSelector are the cause, described below.
Problem Description:
App browses across numerous pages (typically a few dozen).
Memory use is increasing by around 2-4MB per page and not being fully released.
After around 25 pages memory use has increased from 30MB to 90MB.
Existing Code:
I am already performing some cleanup on each page before navigating to the next.
I am also already calling NavigationService.RemoveBackEntry() to keep the Back Stack small.
Investigation:
I have used the Memory Profiler which reveals about 15MB of the growth is my own data that isn't being cleaned up.
The remaining 40-50MB is not included in the Heap Summary (i.e. the total only comes to about 15MB out of the observed 90MB).
I will work on the 15MB, but since it doesn't appear to be the most significant factor, I would like some guidance on sensible next steps.
Possible Cause:
I have another set of pages (again typically a few dozen pages when browsed) composed of very similar content.
These pages however do NOT show the same problem. Memory use remains low throughout.
Both sets of pages use similar cleanup code when navigating.
One key difference is that the affected pages use a Pivot control, each with 3 or 4 Pivot Items, a couple of which contain LongListSelectors.
The LongListSelectors are data-bound to generic Lists generated at run time. No images, only text. Not especially long lists, 20 or so items in each.
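For illustration, the code that populates each list is roughly of this shape (the names below are placeholders, not the actual code):

    // Rough illustration only - placeholder names, not the real page code.
    // Each LongListSelector inside a PivotItem is bound to a small list of
    // strings that is rebuilt at run time (no images, around 20 items).
    var items = new List<string>();
    for (int i = 0; i < 20; i++)
    {
        items.Add("Item " + i);
    }
    FirstPivotList.ItemsSource = items;   // FirstPivotList: a LongListSelector defined in XAML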
I have come across a couple of posts vaguely suggesting that this combination of controls is susceptible to memory leaks.
I commented out the code that populates these controls, and sure enough, memory use now peaks at around 50-60MB.
It may be even lower if I remove the controls completely (I haven't tested that yet).
So, these controls are not the whole story, but clearly are a large part of the problem.
Question:
Is there a known issue with these controls (LongListSelector, Pivot)?
Should there be some code used to clean up these controls? I have tried setting the lists' ItemsSource to an empty list, but this had no effect on the memory growth.
Is there any way to work around it? (Obviously changing the type of controls used is one option.)
Thanks for reading.
After some investigation, I have found what looks like a problem with the LongListSelector failing to release memory when it is cleaned up.
I have posted more details and a workaround here:
http://cbailiss.wordpress.com/2014/01/24/windows-phone-8-longlistselector-memory-leak/
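The general idea behind this kind of workaround is to detach the LongListSelectors from their data when navigating away from the page; a rough sketch (placeholder names, and not necessarily the exact approach described in the post):

    // Sketch only - the idea is to detach each LongListSelector from its
    // data when leaving the page so the bound items can be collected.
    protected override void OnNavigatedFrom(NavigationEventArgs e)
    {
        base.OnNavigatedFrom(e);

        FirstPivotList.ItemsSource = null;    // FirstPivotList, SecondPivotList:
        SecondPivotList.ItemsSource = null;   // LongListSelectors inside the Pivot
    }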
I had the same problem. All I did was change the source List<>s to ObservableCollection<>s, and then, when the SelectedItem changes, I clear whichever source collection is not visible at the moment.
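A minimal sketch of that approach, assuming each LongListSelector is bound to its own ObservableCollection (all names here are hypothetical):

    // Sketch of the approach above - hypothetical names throughout.
    // Each LongListSelector's ItemsSource is set once to one of these
    // collections; on pivot selection change, the collection behind the
    // pivot item that is no longer visible is emptied.
    private readonly ObservableCollection<string> firstItems = new ObservableCollection<string>();
    private readonly ObservableCollection<string> secondItems = new ObservableCollection<string>();

    private void MainPivot_SelectionChanged(object sender, SelectionChangedEventArgs e)
    {
        if (MainPivot.SelectedIndex == 0)
        {
            secondItems.Clear();   // release the items the user can no longer see
        }
        else
        {
            firstItems.Clear();
            // ...and refill the visible collection as needed when the user returns
        }
    }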
I am about to implement an HTML-based log file viewer. The update volume varies from 1-10 updates per second.
The server is WebSocket-based and will be developed by me as well - I have built a Fleck-based prototype and that side looks fine.
Is there any other smart HTML field besides a plain text field which I could use for updating?
Would you recommend collecting updates and working with a fixed update interval?
I guess it would be more efficient to apply the update interval on the server then, right? I am new to JavaScript and HTML5, so please do not be too harsh if these questions are trivial.
I am about to build a similar application and I therefore played around a little bit, comparing the performance of 1.) attaching DOM elements for every log row, 2.) attaching a table row for every log row, and 3.) using a textarea tag:
http://jsfiddle.net/PBzg5/18/
While removing all rows from the viewer is fastest with the textarea it takes longest to fill it. Also, there seems to be no faster method than manual string concatenation for textarea. Attaching elements to DOM (i.e. one text element and one <br> element per log row) is definitely fastest, with the table-based version being close behind. Also, using DOM elements will allow you to do more advanced things like coloring individual words than when using textareas. However, I haven't tested the performance influence of this yet.
When you implement your viewer, be sure to keep in mind that browsers will actually break down pretty fast when you try to display an unlimited number of rows. Therefore, just keep a certain number of the newest rows in a buffer (as terminals usually do) and only display those.
As many of you might know, yesterday the new version of MonoTouch was released and it includes a very useful and much-needed memory profiler. I'm using it to fine-tune my app. What I am trying to do now is make sure that the reference count is not increasing constantly on any of my objects.
So my question to any monotouch/cocoa gurus is this: Let's say I have a child UIViewController that I regularly present through my main view controller. If the reference count for the child view controller is constantly 1 even after I repeat the process of presenting it and hiding it a few times, does this mean I am out of the woods?
In other words, is this the only thing I should take care of in order to allow MonoTouch/iOS to do proper garbage collection and not hog the device's memory? I am asking because the TOTAL MEMORY as reported in the profiler increases with each presentation, even though the reference count of the child view controller does not increase.
The child view controller uses a lot of UIImages, loaded with UIImage.FromBundle.
Thanks in advance
The problem is likely UIImage.FromBundle. This method will cache the image for the lifetime of the application (which seems to match your description of how memory just increases).
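If you don't need that caching, one option is to load the images with UIImage.FromFile instead (which does not cache) and dispose of them explicitly when the child controller goes away. A rough sketch inside the child UIViewController (field and control names are hypothetical):

    // Sketch only - hypothetical names. FromFile maps to imageWithContentsOfFile:
    // (not cached), whereas FromBundle maps to imageNamed: (cached for the app's lifetime).
    UIImage photo;

    public override void ViewDidLoad ()
    {
        base.ViewDidLoad ();
        photo = UIImage.FromFile ("Images/photo1.png");   // example path
        photoView.Image = photo;                          // photoView: a UIImageView created elsewhere
    }

    public override void ViewDidDisappear (bool animated)
    {
        base.ViewDidDisappear (animated);
        photoView.Image = null;
        if (photo != null) {
            photo.Dispose ();   // release the native image without waiting for the GC
            photo = null;
        }
    }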
I have developed a stand-alone touch screen application. The interactive currently runs 10-13 hours per day. As the user interacts with it, memory usage keeps increasing. The interactive has five screens; while travelling through each screen I remove the MovieClips, assets, and listeners, and I set objects to null. Yet the memory level keeps increasing.
I have also used a third-party tool from gskinner to tackle this problem. It improves the result, though some memory leakage remains.
Please help me, thanks in advance.
Your best results will come from writing the code in a way that elements are properly garbage collected on removal. That means removing all the objects, listeners and MovieClips/Sprites within that are no longer used.
When I'm trying to get this stuff done quickly, I've been using casalib's CasaMovieClip and CasaSprite instead of regular MovieClips and Sprites. The reason is that they have the destroy() functions as well as some other functions that help you garbage collect easily.
But the best advice I can give is to read up on garbage collection. Grant Skinner's blog is a great place to start.
Also, check for setTimeout() and dictionaries, as these can cause leaks as well if not used properly.
I have read some web development materials, and every time someone asks how to organize a website's JS, CSS, HTML, and PHP files, people suggest a single JS file for the whole website. The argument is speed.
I clearly understand that the fewer requests there are, the faster the page responds. But I have never understood the single-JS argument. Suppose you have 10 webpages and each webpage needs a JS function to manipulate the DOM objects on it. If you put all 10 functions in a single JS file and execute that file on every webpage, 9 out of 10 functions are doing useless work. There is CPU time wasted searching for non-existent DOM objects.
I know that CPU time on an individual client machine is trivial compared to bandwidth on a single server machine. I am not saying that you should have many JS files on a single webpage. But I don't see anything going wrong if every webpage refers to 1 to 3 JS files and those JS files are cached on the client machine. There are many good ways to do caching; for example, you can use an expiry date or include a version number in your JS file name. Compared to cramming the functionality for all the needs of many webpages into one big JS file, I far prefer splitting the JS code into smaller files.
Any criticism/agreement on my argument? Am I wrong? Thank you for your suggestion.
A function does 0 work unless called. So 9 uncalled functions are 0 work, just a little extra space.
A client only has to make 1 request to download 1 big JS file, which is then cached for every other page load. That is less work than making a small request on every single page.
I'll give you the answer I always give: it depends.
Combining everything into one file has many great benefits, including:
less network traffic - you might be retrieving one file, but you're sending/receiving multiple packets and each transaction has a series of SYN, SYN-ACK, and ACK messages sent across TCP. A large majority of the transfer time is establishing the session and there is a lot of overhead in the packet headers.
one location/manageability - although you may only have a few files, it's easy for functions (and class objects) to grow between versions. When you take the multiple-file approach, sometimes functions from one file call functions/objects from another file (e.g. ajax in one file, then arithmetic functions in another - your arithmetic functions might grow to need to call the ajax and have a certain variable type returned). What ends up happening is that your set of files needs to be seen as one version, rather than each file being its own version. Things get hairy down the road if you don't have good management in place, and it's easy to fall out of line with JavaScript files, which are always changing. Having one file makes it easy to manage the version between each of your pages across your (1 to many) websites.
Other topics to consider:
dormant code - you might think that the uncalled functions are potentially reducing performance by taking up space in memory, and you'd be right; however, this impact is so minuscule that it doesn't matter. Functions are indexed in memory, and while the index table may grow, it's trivial when dealing with small projects, especially given the hardware today.
memory leaks - this is probably the largest reason why you wouldn't want to combine all the code, however this is such a small issue given the amount of memory in systems today and the better garbage collection browsers have. Also, this is something that you, as a programmer, have the ability to control. Quality code leads to less problems like this.
Why does it depend?
While it's easy to say throw all your code into one file, that would be wrong. It depends on how large your code is, how many functions, who maintains it, etc. Surely you wouldn't pack your locally written functions into the JQuery package and you may have different programmers that maintain different blocks of code - it depends on your setup.
It also depends on size. Some programmers embed encoded images as ASCII in their files to reduce the number of files sent. This can bloat files. Surely you don't want to package everything into one 50MB file, especially if there are core functions that are needed for the page to load.
So to bring my response to a close, we'd need more information about your setup, because it depends. Surely 3 files is acceptable regardless of size, combining where you see fit. It probably wouldn't really hurt network traffic, but 50 files is unreasonable. I use the hand rule (no more than 5), but surely you'll see a benefit combining those five 1KB files into one 5KB file.
Two reasons that I can think of:
Less network latency. Each .js requires another request/response to the server it's downloaded from.
More bytes on the wire and more memory. If it's a single file you can strip out unnecessary characters and minify the whole thing.
The Javascript should be designed so that the extra functions don't execute at all unless they're needed.
For example, you can define a set of functions in your script but only call them in (very short) inline <script> blocks in the pages themselves.
My line of thought is that you have fewer requests. When you make a request in the header of the page, it stalls the output of the rest of the page; the user agent cannot render the rest of the page until the JavaScript files have been obtained. Also, JavaScript files download synchronously - they queue up instead of being pulled all at once (at least that is the theory).
Today my co-worker noticed that adding a decimal place to a progress indicator leads to the impression that the program is running faster than without it (i.e. instead of 1, 2, 3... it shows 1, 1.2, 1.4, 1.6, ...). I checked it and I was surprised that I got the same impression even though I knew it was faked.
That makes me wonder: What other things are there to create the impression of a fast application?
Of course the best way is to actually make the application faster, but from an algorithmic point of view there's often not much you can do. Additionally, I think making a user less frustrated is a good thing, even though it is more or less a psychological trick.
This effect can be very dramatic: doing relatively large amounts of work to give users a correct and frequently updating view of progress can of course slow down the actual running time of the application (screen updates, calculations needed for the progress display, etc.) while still giving the user the feeling it takes less time.
Some of the things you could do in GUIs:
make sure your application remains responsive (resizing the form remains possible, and perhaps give a cancel button for the operation) while background processing is occurring - see the sketch after this list
be very consistent in showing status messages/hourglass cursors throughout the application
if you have something updating during an operation, make sure it updates often (like the almost ridiculous showing of filenames and registry keys during an install), or make sure there's an option to make it do this for users that like this behavior
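For the first point, a minimal sketch of keeping the UI responsive with a cancel button, using async/await, IProgress, and a cancellation token (WinForms assumed; the control names below are hypothetical and the methods live inside the Form class):

    // Sketch only - startButton, cancelButton, progressBar and statusLabel
    // are hypothetical controls on the form.
    CancellationTokenSource cts;

    async void startButton_Click (object sender, EventArgs e)
    {
        cts = new CancellationTokenSource ();
        cancelButton.Enabled = true;

        var progress = new Progress<int> (percent =>
        {
            // Runs on the UI thread: the form stays responsive and updates often.
            progressBar.Value = percent;
            statusLabel.Text = "Processing... " + percent + "%";
        });

        try
        {
            await Task.Run (() => DoHeavyWork (progress, cts.Token));
            statusLabel.Text = "Done";
        }
        catch (OperationCanceledException)
        {
            statusLabel.Text = "Cancelled";
        }
        finally
        {
            cancelButton.Enabled = false;
        }
    }

    void cancelButton_Click (object sender, EventArgs e)
    {
        if (cts != null) cts.Cancel ();
    }

    void DoHeavyWork (IProgress<int> progress, CancellationToken token)
    {
        for (int i = 0; i <= 100; i++)
        {
            token.ThrowIfCancellationRequested ();
            Thread.Sleep (50);        // stand-in for the real work
            progress.Report (i);
        }
    }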
Present some intermediate, interesting results first. "We've found 2,359 zetuyls matching your request, we're just calculating their future value".
I've seen transport reservation systems do that sort of thing quite nicely.
Showing details (such as the names of files being copied in an installation process) can often make things seem like they're going faster because there's constant, noticeable activity (as opposed to a slowly-creeping progress bar).
If your algorithm is such that it generates a list of results, and you have some way of displaying results as they're generated (as opposed to all at once at the end), do so - the sooner the user has something else to look at besides a spinner, the better.
Allow the user to do something else while your application is processing data or waiting for a result. Within the application, you could let the user refine a search query or collect information to prepare the next steps. Or just present some other "work" that needs doing, or some hints, documentation, statistics, entertainment...
Use one of those animated progress bars which look like they are doing something even when they aren't progressing. Also, as peSHIr said - print each filename that you copy and update it really fast - you could even fake it by cycling through a large string array N times a second.
I've read somewhere that if the process seems to be speeding up, it feels faster than when it progresses at a steady pace. I can't find the reference right now, but it should be simple to implement (see the sketch below the links).
(10 minutes later...)
A further look down Google lane unearthed the following references:
http://www.azarask.in/blog/post/hacking-memory/
http://blogs.msdn.com/time/
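One simple way to get the speeding-up effect is to map the real progress onto a displayed value through an ease-in curve, so the bar moves slowly at first and accelerates towards the end; a minimal sketch (the particular curve here is an arbitrary choice):

    // Sketch: map real progress (0.0-1.0) to a displayed value that starts
    // slower and accelerates. Squaring is just one possible curve.
    static double DisplayedProgress (double realProgress)
    {
        return realProgress * realProgress;   // e.g. 0.5 real -> 0.25 displayed, 1.0 -> 1.0
    }

    // Usage: progressBar.Value = (int)(DisplayedProgress (realFraction) * 100);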
Here is an article about "Expressing time in your UI" and user perception of time. I do not know if it is exactly what you expect as an answer, but it is definitely worth the read.
Add a thread sleep at critical points. With each passing version, reduce the delay.