ActionScript memory management? - actionscript-3

I saw System.gc() somewhere on the internet today and I wanted to know if it is or isn't recommended to use in a Flash CS5 project and why.

In every garbage-collected system I know of, the garbage collection machinery was designed to run in the background as an abstraction the programmer should theoretically pay no attention to. There are some special situations where forcing a collection is useful, but these usually involve interrupts (real machine interrupts, not actionscript events), testing/debugging scenarios, or some tricky latency management necessities. Odds are you will never need to call System.gc() and you can safely ignore it.

System.gc() is only available in the debugger version of Flash Player and in some AIR applications. Calling it on a normal website, under a normal Flash Player, has no effect whatsoever; it fails silently.
System.gc() is designed only for testing purposes.

System.gc() is only for testing purposes. It can be handy to monitor your application's memory usage and call System.gc() to highlight possible memory leaks.
Tip: As far as I remember, you have to call System.gc() twice to force it to collect immediately.
The documentation states that this method only works in a debug player.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/system/System.html#gc()
So, to summarise, if you're testing memory, it's quite handy, otherwise don't use it.
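A minimal sketch of that testing workflow, assuming you're running the debug player (System.totalMemory and System.gc() are the real flash.system.System members; the surrounding trace calls are just illustrative):

```actionscript
import flash.system.System;

// Only meaningful in the debugger player; in the release player
// System.gc() is a silent no-op.
trace("before GC:", System.totalMemory, "bytes");

// Calling twice follows the tip above: mark and sweep reportedly run
// on successive invocations, so two calls force a full collection.
System.gc();
System.gc();

trace("after GC:", System.totalMemory, "bytes");
```

If totalMemory keeps climbing across repeated forced collections, something is still holding references to your objects.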

Related

Can a CUDA program finish without cudaStreamDestroy()?

In our large code base, I can find multiple cudaStreamCreate() calls. However, I could not find cudaStreamDestroy() anywhere. Is it important to destroy streams after the program is complete, or does one not need to worry about this? What is good programming practice in this regard?
Is it important to destroy streams after the program is complete, or does one not need to worry about this?
The runtime API will clean up all resources allocated (streams, memory, events, etc) by the context owned by the process during normal process termination. It isn't necessary to explicitly destroy streams in most situations.
While talonmies' answer is correct, it is still often important to destroy your streams and other entities created in CUDA:
If you're writing a library, you may finish your work well before the application exits (although in that case you might be working in a separate CUDA context, and maybe you'll simply destroy the whole context).
Your stream-creating code might be called many times.
Also, if you don't synchronize your streams after completing all work on them, you might miss some errors (and the results of your last bits of work); and if you do have a final synchronization, that is often an opportunity to also destroy the stream.
Finally, if you use C++-flavored wrappers, like mine, then streams get destroyed when you leave the scope in which they were created, and you don't have to worry about it (but you pay the overhead of the stream-destruction API calls). A sketch of that idea is below.
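A minimal C++ sketch of that last point, under the assumption that a stream should be synchronized (to surface late errors) and then destroyed when it goes out of scope; the ScopedStream class name is invented, not part of the CUDA API:

```cpp
#include <cuda_runtime.h>

// Invented RAII wrapper: creates a stream on construction, synchronizes
// and destroys it on scope exit, so cudaStreamDestroy() is never forgotten.
class ScopedStream {
public:
    ScopedStream() { cudaStreamCreate(&stream_); }
    ~ScopedStream() {
        cudaStreamSynchronize(stream_); // surface any late async errors
        cudaStreamDestroy(stream_);     // release the stream's resources
    }
    ScopedStream(const ScopedStream&) = delete;
    ScopedStream& operator=(const ScopedStream&) = delete;
    operator cudaStream_t() const { return stream_; }
private:
    cudaStream_t stream_;
};

int main() {
    {
        ScopedStream stream;
        // ... enqueue kernels / async copies on `stream` here ...
    } // stream synchronized and destroyed here, even on early return
    return 0;
}
```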

What may cause heavy memory leak in short time

When our Flash game is in scene A, memory is stable at about 800 MB (it loads almost all of the role animations and role skill animations). But when we toggle to scene B, memory keeps increasing, reaching 1400 MB within one minute. I have watched the browser and made sure it doesn't load any resources while the memory is increasing. And when I repeat it, memory increases to 2000 MB, the browser freezes, and the page crashes.
So what may cause such a heavy memory leak in so short a time? I haven't met such a problem before; any help will be appreciated.
The question does not give enough concrete information about what you're doing, and thus it's hard to say precisely what you're doing wrong.
But there are ways to deal with these situations:
Install Adobe Scout (http://gaming.adobe.com/technologies/scout/). This is a really good profiling tool to help you see what's going on in your app.
Enable telemetry data in your app. There are settings for that in both Flash Professional and Flash Builder. If you don't know how to enable it, please search the web since it's very well explained.
Run your app and look at Scout's panels to see what is happening, and how much memory you're allocating at what time.
Other than that, there are hundreds of possible reasons why memory leaks. Just look at your code to understand when you call what, and use profiling tools to know where to look.
If using Flash Builder, you can run the profiler to try to track down memory leaks and watch how many instances are being created. There are other profiling tools out there if you are using another type of IDE.
If using Flash Professional, you can check out this link: Profiling tools in Flash Builder to improve the performance of Flash Professional projects.
After some days' work, we finally found the problem.
Before I asked the question, I had tried Scout and the profiler, but without success (because the problem did not occur then). I guessed that only a BitmapData draw() or copyPixels() call, running in an infinite loop or in an enterFrame event handler, could cause such a quick and large memory leak.
Then, by luck, we found out how to reproduce the problem; that really made it much easier to solve.
So here is the procedure we used to solve the problem once we could reproduce it:
Run the game in the profiler and take a memory snapshot.
Reproduce the problem; after the memory has increased a lot, take another memory snapshot.
Find the loitering objects between the two memory snapshots.
In the end, the problem was a function that was called on every frame while a certain skill appeared, and in that function a BitmapData was used to draw the role animation. A reconstruction of the pattern is sketched below.
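A hypothetical reconstruction of that pattern (all names here are invented): a fresh BitmapData is drawn on every frame and never disposed, so memory balloons; the fix reuses one buffer instead.

```actionscript
import flash.display.BitmapData;
import flash.events.Event;

// Leaky version: allocates a new BitmapData each frame and never calls
// dispose(), so every frame's pixel buffer stays resident until a crash.
function onSkillFrame(e:Event):void {
    var frame:BitmapData = new BitmapData(roleClip.width, roleClip.height, true, 0);
    frame.draw(roleClip);           // roleClip / skillBitmap are invented names
    skillBitmap.bitmapData = frame;
}

// Fixed version: one reusable buffer, cleared instead of reallocated,
// and disposed (not shown) when the skill animation ends.
var buffer:BitmapData;

function onSkillFrameFixed(e:Event):void {
    if (!buffer)
        buffer = new BitmapData(roleClip.width, roleClip.height, true, 0);
    buffer.fillRect(buffer.rect, 0x00000000); // clear instead of reallocating
    buffer.draw(roleClip);
    skillBitmap.bitmapData = buffer;
}
```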

Way to Log Function Calls in SWF at Runtime?

Is there any way (tool, library, etc.) to log all the ActionScript function calls at runtime in a SWF created from Flash Professional? I often inherit projects and want to more easily analyze and understand their operation. Profiling would be nice too.
Logging all calls will probably not help much, because if there are many "small" items involved (i.e. cells in a list, nodes in a tree, particles in a particle engine, enemies in a game, etc. etc.) the log files will clutter up with repetition and soon grow to a size where the sheer amount of information will make learning about the functionality a slow, tedious and painful task.
It is more useful to use a profiler to manage dependencies, memory etc., and use a debugger to step through the code, and/or set breakpoints at interesting points and navigate deeper into the architecture from there.
FDT has a great profiler and debugger. And as a free tool, Monster Debugger is quite good.
You can try Show method entry and Show method exit in SWFWire Debugger. It also offers some profiling. You can also track object creation and destruction, and memory usage.
Disclaimer: I wrote this app

VGA Video using an ARM7

I need to put out a VGA signal from an AT91SAM7SE512. How can I do this without using an extra controller? I saw some approaches on the web, but the solution needs to be able to modify specific pixels.
You could probably use something similar to the old tricks for generating NTSC signals with PWM, but it will probably look horrible. A better bet is to get some form of video controller, even a cheap low-resolution one.
You could also try some form of FPGA-to-VGA solution, like this.
Unless your ARM7 has some kind of controller capable of reading memory and outputting a video signal without CPU intervention, i.e. some kind of framebuffer, I don't think you can do that with an ARM7. Well, you probably can, but not within a general-purpose OS like Linux.
What you can do is turn your ARM7 into a dedicated VGA controller that spends its time launching DMA transfers from SDRAM to an external bus. This will, in my opinion, not leave a lot of resources to do anything else; a conceptual sketch follows.
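A conceptual sketch of that dedicated-controller idea; every function below (wait_for_hsync, dma_start_line, wait_for_vsync) is an invented placeholder, not an AT91 library call, since the point is only the structure of the scan-out loop:

```c
#include <stdint.h>

#define VISIBLE_LINES 480
#define LINE_PIXELS   640

/* Framebuffer in SDRAM; a DMA channel streams each line out to the
 * external bus feeding a resistor-ladder DAC. */
extern volatile uint8_t framebuffer[VISIBLE_LINES][LINE_PIXELS];

/* Invented placeholders for the timer/DMA plumbing. */
void wait_for_hsync(void);
void wait_for_vsync(void);
void dma_start_line(volatile const uint8_t *line, uint32_t count);

void vga_scanout_loop(void)
{
    for (;;) {
        for (int line = 0; line < VISIBLE_LINES; ++line) {
            wait_for_hsync();                           /* timer-generated sync */
            dma_start_line(framebuffer[line], LINE_PIXELS);
        }
        wait_for_vsync(); /* CPU has almost no time left for anything else */
    }
}
```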
Your ARM chip has an ADC. It doesn't have a DAC, though. VGA is a multiple-channel analog output, so you need some kind of DAC, and in turn an external component. Another problem you might encounter is the need for proper drivers (the electronic kind, not software). A VGA cable can be quite long, which means you have large capacitances to overcome, plus it may act as an antenna.

How much interaction can I get with the GPU in Flash CS4?

As many of you most likely know, Flash CS4 integrates with the GPU. My question to you is: is there a way to make all of your rendering execute on the GPU, or can I not get that much access?
The reason I ask is that, with regard to Flash 3D, nearly all existing engines are software renderers. However, I would like to work on top of one of these existing engines and convert it to be as much of a hardware renderer as possible.
Thanks for your input
Regards
Mark
First off, it's not Flash CS4 that is hardware accelerated, it is Flash Player 10 that does it.
Apparently "The player offloads all raster content rendering (graphics effects, filters, 3D objects, video etc) to the video card". It does this automatically. I don't think you get much choice.
The new GPU-accelerated abilities of Flash Player 10 are not something that is accessible to you as a developer; it's simply accelerated blitting that's done "over your head".
The closest you can get to the hardware is Pixel Bender filters. They are basically Flash's equivalent of pixel shaders. However, due to (AFAIK) cross-platform consistency issues, these do not actually run on the GPU when run in the Flash Player (they're available in other Adobe products, and some of those do run them on the GPU). A usage sketch follows.
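A short sketch of how a compiled Pixel Bender kernel (.pbj) is typically applied in AS3; "invert.pbj" and the InvertKernel class name are invented, and the [Embed] tag assumes you compile with the Flex SDK:

```actionscript
import flash.display.DisplayObject;
import flash.display.Shader;
import flash.filters.ShaderFilter;
import flash.utils.ByteArray;

// "invert.pbj" is a hypothetical kernel compiled with the Pixel Bender
// Toolkit; [Embed] requires the Flex SDK compiler.
[Embed(source="invert.pbj", mimeType="application/octet-stream")]
var InvertKernel:Class;

function applyKernel(target:DisplayObject):void {
    var shader:Shader = new Shader(new InvertKernel() as ByteArray);
    target.filters = [new ShaderFilter(shader)];
}
```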
So, as far as real hardware acceleration goes, the pickings are pretty slim.
If you need all the performance you can get, Alchemy may be worth checking out. This is a project that allows cross-compiling C/C++ code to the AVM2 (the virtual machine that runs ActionScript 3). It does some nifty tricks to allow for better performance (due to the non-dynamic nature of those languages).
Wait for Flash Player 11 to be released as a beta in the first half of next year. It should be awesome.