I've made a program in which I have one page with a button that navigates to another page, and that other page contains a lot of visual effects. It's possible to navigate back and forth between the two pages. However, after navigating back to the first page (the one only containing a button), the memory increases by 20 MB. I've tried a lot of things and concluded that my problem lies in the retained visuals.
As seen in the pictures, the only place where there is a significant change in memory is in the retained allocations, and when further inspecting these it is seen that there is no destroy time on them. I'm using the MVVM structure and have tried commenting out all the viewmodels and bindings, both with no luck.
Is there a way to find out which elements are hanging around, and what can be done to remove them once navigating back to the first page?
Related
I'm writing a graphic console that highlights different entries and stores things when you input them (in AS3) but I've found that once there are thousands of entries, the program starts lagging and scrolling is slow. If I want scrolling to be animated with acceleration it gets even slower.
How do I move the giant block of objects that are my stored entries up and down?
Do I have to progressively load messages around where the user is looking? How does the scrollbar handle this, then?
You should create a custom container instead of a TextField; that would make it easier to build accelerated scrolling too.
Each log entry would be an extended DisplayObject that holds anything you want, much like inflating layouts in Android.
The most important part should be reducing memory usage:
you could store only the plain text of the log entries in something like a global array, and when the scroll position gets close enough, generate these layouts and add them to the container to show them, and vice versa, removing entries that are far behind.
However, this process still uses a lot of memory at runtime.
So, following the concept of Android's DiskLruCache, it is possible to store the part of the invisible data that is too far from the scroll position on disk instead of in memory, using SharedObjects.
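The question is about AS3, but the windowed idea is language-agnostic, so here is a minimal sketch of it in TypeScript against the DOM, purely to show the shape of the approach. All the names (`ROW_HEIGHT`, `renderWindow`, the element ids) are my own assumptions; the AS3 version would add and remove DisplayObjects on the container instead of DOM nodes.

```typescript
// Minimal windowed-rendering sketch: only entries near the scroll
// position get real display objects; everything else lives as plain
// text in `entries`. All names and ids here are assumptions.
const ROW_HEIGHT = 20;        // fixed row height keeps the math trivial
const BUFFER_ROWS = 10;       // extra rows rendered above/below the viewport

const entries: string[] = [];                            // plain-text log entries
const viewport = document.getElementById("console")!;    // the scrollable element
const content = document.getElementById("content")!;     // tall inner element
content.style.position = "relative";

function renderWindow(): void {
  // Reserve the full height so the scrollbar reflects every entry.
  content.style.height = `${entries.length * ROW_HEIGHT}px`;

  const first = Math.max(0, Math.floor(viewport.scrollTop / ROW_HEIGHT) - BUFFER_ROWS);
  const last = Math.min(
    entries.length,
    Math.ceil((viewport.scrollTop + viewport.clientHeight) / ROW_HEIGHT) + BUFFER_ROWS,
  );

  // Rebuild just the visible slice; off-screen rows never exist.
  content.replaceChildren();
  for (let i = first; i < last; i++) {
    const row = document.createElement("div");
    row.style.position = "absolute";
    row.style.top = `${i * ROW_HEIGHT}px`;
    row.textContent = entries[i];
    content.appendChild(row);
  }
}

viewport.addEventListener("scroll", renderWindow);
```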
How do I move the giant block of objects that are my stored entries up and down?
You don't. As you have noticed, when the number of DisplayObjects on the DisplayList greatly increases, the memory overhead increases, and the housekeeping details of managing those DisplayObjects eventually cause performance to suffer. You don't mention any details of how you are implementing what you have so far, so my comments will be general.
The way this is handled by various platform list components in Flex, iOS and, I assume, Flash, is to only display the minimum number of objects needed; as the user scrolls, objects are shuffled in and out of the render list. A further optimization is to use a "pool" of "template" objects which are reused so you don't pay an initialization-time penalty. This technique is commonly called list virtualization, and the object-reuse part is known as object pooling.
And as for how it works – you can DIY it, figuring out, as the user scrolls, which objects are moving off-screen and can be recycled, which are about to move on-screen, etc. Of course, this all assumes that you have your objects stored in a data structure like an Array, ArrayList or ArrayCollection. As an alternative to coding all this from scratch, you might see if the DataGrid or List components will meet your needs – they manage all of this for you.
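To make the recycling concrete, here is a minimal pool sketch in TypeScript (DOM rows stand in for DisplayObjects; the class and method names are my own). The same shape works in AS3 with a container and DisplayObject subclasses:

```typescript
// A minimal object-pool sketch (all names are my own). Rows scrolling
// off-screen are released back to the pool and reused for rows coming
// on-screen, so you pay the construction cost only once per pooled row.
class RowPool {
  private free: HTMLDivElement[] = [];

  acquire(): HTMLDivElement {
    // Reuse a parked row if one exists, otherwise pay the creation cost.
    const row = this.free.pop() ?? document.createElement("div");
    row.hidden = false;
    return row;
  }

  release(row: HTMLDivElement): void {
    row.hidden = true;       // park it; keep it out of layout
    row.textContent = "";    // clear state so nothing stale leaks through
    this.free.push(row);
  }
}

// Usage: as the user scrolls, recycle instead of create/destroy.
const pool = new RowPool();
const row = pool.acquire();
row.textContent = "entry #42";
// ...later, when it scrolls out of view:
pool.release(row);
```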
Flash Tutorial: The DataGrid Component (YouTube video)
Customize the List component
Lots of other examples and resources out there.
(again, I work in Flex, where the DataGrid and other list-based components can be customized extensively using "skins" and custom item renderers for visual style – not sure if it is the same in Flash)
I have an application that displays a page with 5,000-10,000 rows in a table element and has a drop-down menu that switches to other views with a similar structure. Currently I do an async request when switching between views (I also clear/remove the current view from the document) and load the appropriate list each time; however, I was thinking of
1) loading all views in the background before they are requested for viewing so they load instantly on click.
and
2) Just hiding a particular set of rows rather than removing it so if the client navigates back it will be instant as well.
This would mean that potentially tens of thousands of HTML elements would be loaded into the current document; is this a problem? What if it were more than tens of thousands?
Loading 10,000+ HTML elements onto your page is not a very smart idea. If the user's computer is of normal to fast speed, the user may experience slight lag, and if the user's computer is slow, the page may even hang or crash the browser (depending on the amount of RAM on the computer).
A technique you could explore is to lazy-load the HTML elements: when the user scrolls down to a particular portion of the page, the elements are loaded via AJAX (also known as "infinite scrolling").
This means that the browser's rendering engine does not have to render so many elements in one go, which saves memory and increases speed.
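Here is a minimal sketch of the idea in TypeScript, assuming a paged endpoint such as `/rows?page=N` (the endpoint, the row shape, and the `sentinel` element are all my own placeholders). It uses IntersectionObserver; in older browsers you would hang the same logic off the scroll event:

```typescript
// Minimal infinite-scroll sketch: rows are appended only when the
// sentinel element below the table scrolls into view.
const table = document.querySelector("tbody")!;
const sentinel = document.getElementById("sentinel")!; // empty div after the table
let page = 0;

async function loadNextPage(): Promise<void> {
  const res = await fetch(`/rows?page=${page++}`);     // assumed endpoint
  const rows: string[][] = await res.json();           // assumed payload shape
  for (const cells of rows) {
    const tr = document.createElement("tr");
    for (const text of cells) {
      const td = document.createElement("td");
      td.textContent = text;
      tr.appendChild(td);
    }
    table.appendChild(tr);
  }
}

// Fire whenever the sentinel becomes visible.
new IntersectionObserver((hits) => {
  if (hits.some((h) => h.isIntersecting)) void loadNextPage();
}).observe(sentinel);
```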
It depends on the HTML elements, but I would say as a rule of thumb, don't load upfront.
When I say it depends on the elements, I mean take a look at facebook for example. They load maybe 10 or 20 items into the feed, and then add more as you scroll, because each item is so rich in content (photos, videos, etc).
However, on the flip side of that, think about how much info is in each row: if it's, say, less than 500 bytes, 500 × 10,000 = 5 MB, which isn't an awful load for a web request, and if you can cache intelligently, maybe it will be a lot less than that.
The bottom line is, don't assume that the number of HTML elements matters so much; think about the amount of data it amounts to. My advice: cross that bridge when you get there. Monitor your request times; if they are too long, come up with an AJAX page loader, which shouldn't be too hard.
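For instance, here is a minimal sketch of such a loader in TypeScript. It caches each view's markup after the first request so switching back is instant, without keeping thousands of hidden DOM nodes alive (the `/views/` URL and all names are my own assumptions):

```typescript
// Cached view loader sketch: views are fetched once and kept as
// strings, so only the active view ever exists as live DOM nodes.
const viewCache = new Map<string, string>();

async function showView(name: string, target: HTMLElement): Promise<void> {
  let html = viewCache.get(name);
  if (html === undefined) {
    const res = await fetch(`/views/${encodeURIComponent(name)}`); // assumed URL
    html = await res.text();
    viewCache.set(name, html);   // cache the markup, not the DOM nodes
  }
  target.innerHTML = html;       // swap the single live view in place
}
```

Caching the markup rather than hiding rows is a middle ground between the two options in the question: revisits are instant, but the document never holds more than one view's elements.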
The size of your HTML document would be considerable...
I once had a page with 5000~10000 items (table rows) and the browser (IE) took far too long to download, parse and render it.
The best solution seems to me to set up a web service with a lazy-loading system.
So IMHO, yes, the number of elements in an HTML document should be monitored.
Yes, of course the number of elements in an HTML document can be monitored! Use Firebug in Firefox!
I'm developing a little desktop app using Adobe AIR and its HTML API.
The app has two windows, one displaying a slideshow of images present in a folder on the local machine and the other letting you browse those images (one big image and prev/next buttons).
At first, for a quick test, I just loaded all the images from the folder into the DOM of each window, and it works just fine until I reach too many images (150+), as they are high-resolution JPEGs from a DSLR. Obviously each image takes a lot of memory and will probably kill the app by leaking. So I started by optimising the browsing window: instead of loading them all, I use just a single img tag and replace its .src value with JavaScript. But this technique is just delaying the issue, because as I carry on browsing through all the images, the memory usage keeps growing and growing. Replacing the src of the image does not release the memory used by the previous image. Same thing if I try to delete the image from the DOM and recreate it.
An idea I have, but don't like too much, is to display the image inside a frame that loads another HTML file, passing it the image src as a parameter, and then reload the whole frame each time, which can hopefully reset the memory usage. I haven't tried it yet.
Does anyone have an idea of how to handle this?
Adobe AIR Tuner is a nice tool for optimizing your Adobe AIR application.
I'm not familiar with your project or how it is being implemented, but Adobe AIR has several accessible methods to free memory, which will allow you to correctly remove or dispose of your objects. Those cleanups can be found here.
One thing that some people do when creating media players, especially ones with large media, is the following. For example:
Let's say your media player contains six pages of content, totaling 1 GB of data. That is a very, very large memory allocation for your project. So rather than load the entire 1 GB at once, only the first and second pages load.
The other four pages remain 'uncalled', not dynamically loading. When the user switches to page two, the content of page three begins preloading. When the user switches to page three, page four starts to load, but the arrays and objects created for page one are also disposed of. This way it doesn't weigh down the application.
Obviously this way is tedious, as you're controlling all aspects of the loading. It also poses issues if your user starts moving too quickly through the pages.
So another possible solution would be to create thumbnails, so the size is drastically smaller, and then load the full-size images as stand-alone streams that can be disposed of without any issues once they leave that area. That way the gallery stands alone from them.
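Since your app uses the HTML API, here is a minimal sliding-window sketch in TypeScript of that stand-alone-stream idea (the names, the `WINDOW` size, and the `stage` element are my own assumptions). Whether the runtime actually reclaims the bitmaps once the references are dropped is exactly what you are fighting, so treat this as the structure rather than a guaranteed fix:

```typescript
// Sliding-window loader sketch: only the current image and its
// neighbours keep a live element; everything else is dropped so the
// engine can reclaim the decoded bitmaps.
const paths: string[] = [];    // file paths of all images in the folder
const WINDOW = 1;              // neighbours kept decoded on each side
const cache = new Map<number, HTMLImageElement>();

function show(index: number, stage: HTMLElement): void {
  // Drop everything outside the window around the current image.
  for (const [i, img] of cache) {
    if (Math.abs(i - index) > WINDOW) {
      img.src = "";            // release the bitmap reference
      cache.delete(i);
    }
  }
  // Load the current image and preload its neighbours.
  for (let i = index - WINDOW; i <= index + WINDOW; i++) {
    if (i >= 0 && i < paths.length && !cache.has(i)) {
      const img = new Image();
      img.src = paths[i];
      cache.set(i, img);
    }
  }
  stage.replaceChildren(cache.get(index)!);
}
```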
If you provide some code or some additional details, I can possibly assist you beyond these general interface and memory suggestions.
The question may be a bit theoretical (did I mean rhetorical?) or fuzzy, but here it is anyway.
I'm planning to build a web page where content will be added through Ajax requests and displayed using tabs and panes. The initial view has just one tab and shows a list of links. Clicking on a link opens a new tab/pane. The user can navigate through the tabs and close them, just like in a web browser.
I could build a UI able to display an unlimited number of tabs, but that implies adding DOM content to as many -- and possibly a lot of -- panes. More likely, I'll set a limit to how many tabs can be open simultaneously. But what should that limit be? What rule would you follow? What's your experience in how much content a DOM document can hold without impact on performance?
There is no way to answer your question. I, for example, have Chrome and Firefox running side by side right now. All browsers together need 9.85GB of shared memory (so some of that RAM is shared between all browsers; I have only 8GB of RAM and about 2GB is actually in use).
There is no JavaScript API to tell when memory gets tight; the only indication is that the machine is starting to swap or that the browser is crashing.
So you have the following solutions:
Limit the user to something that is too small. Good for you, bad for power users like me.
Set the limit too big. Bad for you, since users will start complaining how your stupid site makes their browser crash but good for me.
Instead of setting a limit, clear the panes which are not visible. Save the state of the pane somewhere (server, local storage) and restore it when the user opens it again. Quite some effort, but it would solve your problem; a sketch follows below.
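A minimal sketch of that save/restore approach in TypeScript, using localStorage (the key scheme and the `PaneState` shape are my own assumptions; mind the roughly 5 MB localStorage quota if panes hold a lot of markup):

```typescript
// Suspend hidden panes to localStorage and rebuild them on demand,
// so only the visible tab keeps live DOM content.
interface PaneState {
  title: string;
  html: string;        // or a compact data model instead of raw markup
  scrollTop: number;
}

function suspendPane(id: string, pane: HTMLElement): void {
  const state: PaneState = {
    title: pane.dataset.title ?? "",
    html: pane.innerHTML,
    scrollTop: pane.scrollTop,
  };
  localStorage.setItem(`pane:${id}`, JSON.stringify(state));
  pane.replaceChildren();      // free the DOM for the hidden tab
}

function resumePane(id: string, pane: HTMLElement): void {
  const raw = localStorage.getItem(`pane:${id}`);
  if (raw === null) return;    // nothing saved; leave the pane empty
  const state: PaneState = JSON.parse(raw);
  pane.innerHTML = state.html;
  pane.scrollTop = state.scrollTop;
}
```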
I'm working with a project where we are running a projector from/on a CD (and this can't be changed) to run an e-learning program of sorts. Everything is included on the CD and it doesn't need anything else to run, i.e. all the images, XML and whatnot are on the CD.
The problem is that on some computers, presumably less powerful ones, the entire process clogs up, and instead of showing the images, just white blank areas appear. The images used have been downgraded to a lower resolution, from 1333x1000 to 800x600, and this has solved the problem from what we've found so far.
My question is, does anyone know of any other way to solve this without cutting down the image quality?
As it is right now, all the pictures (about 180, I think) are loaded right when the process starts. One idea I have is to load the pictures gradually as needed, 2-3 in advance backward and forward, or maybe even an entire chapter (7 chapters with images are used), to always ensure smooth tweens without having to wait for loading.
But as I've read that there seem to be some memory issues when using a Flash projector on a CD/DVD, I'd also like to get more details on what the actual problem is and, if possible, find more solutions to it.
I found some links that were supposed to point to Adobe's own views on the problem, but these links were obsolete (links found here: http://www.flashjester.com/?section=faq&cPath=14_23#394).
Any ideas, help, links, tutorials and whatnot are welcome.
Yeah, you need to load your assets on demand and be careful about references, so unused things really get garbage collected when they aren't needed anymore.
If you are really running from a projector, then load times aren't totally a concern. Assets from the local filesystem are always available the next frame; they aren't streamed from the disk like they are from the network. However, big files, or ones with lots of exports, may end up causing a longer frame time or a noticeable delay.
You also need to profile things to see if your changes are actually doing anything. Poke through the flash.system.System class to see how you can get info, or take a look at SWFProfiler.
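For the on-demand loading itself, here is a chapter-window sketch of the idea from the question, written in TypeScript for brevity since the pattern is language-agnostic; in AS3 the loads would go through Loader, and dropping every reference is what lets the garbage collector reclaim the bitmaps. All names here are my own:

```typescript
// Keep the current chapter and its neighbours loaded; drop references
// to everything else so memory can be reclaimed.
type Chapter = { paths: string[]; images: HTMLImageElement[] | null };

function ensureChapters(chapters: Chapter[], current: number): void {
  chapters.forEach((ch, i) => {
    if (Math.abs(i - current) <= 1) {
      // Current chapter and its neighbours: make sure they're loading.
      ch.images ??= ch.paths.map((p) => {
        const img = new Image();
        img.src = p;
        return img;
      });
    } else if (ch.images !== null) {
      // Far-away chapters: drop every reference so the GC can collect.
      ch.images = null;
    }
  });
}
```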