I'm building a remote presentation tool in AS3. In a nutshell, one user (the presenter) has access to a "table of contents" HTML page with links for each slide in the presentation, and an arbitrary number of viewers can watch the presentation on another page, which in turn is in the form of a SWF that polls the server every second to ensure that it's on the right slide. Whenever the admin clicks a slide link in the TOC, the database gets updated, and on its next request the presentation swf compares the label of the slide it's currently displaying to the response it got from the server. If the response differs from the current label, the swf scrubs through the timeline until it finds the right frame label; otherwise, it does nothing and waits for the next poll result (a second later).
Each slide consists of a movieclip with its own nested timeline that loops as long as the slide is displayed. There's no actionscript controlling any of the nested movieclips, nor is there any actionscript on the main timeline except the stop();s on each keyframe (each of which is a slide in the presentation).
Everything is built and working perfectly. The only thing that's troubling is that if the presentation swf is open for long enough (say, 20 minutes), the polling starts to have a noticeable effect on the framerate of the movieclips animating on any given slide. That is, every second, there's a noticeable drop in the framerate of the animations that lasts about three-tenths of a second, which is quite noticeable (and hence is a deal-breaker for the whole presentation suite!).
I know that AS3 has issues with memory management, and I've tried to be diligent in my re-use of objects and event listeners. The code itself is dead simple; there's a Timer instance that fires every second, which triggers a new URLRequest to be loaded by a URLLoader. The URLLoader is reused from call to call, while the URLRequest is not (it needs to be initialized with a new cache-killing value each time, retrieved from a call to new Date().time). The only objects instantiated in the entire class are the Timer, the URLLoader, the various URLRequests (which should be garbage-collected), and the only event listeners are on the Timer (added once), the URLLoader (added once), and on the routines that scrub backwards and forwards in the timeline to find the right slide (and they're removed once the correct slide is found).
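For reference, a minimal sketch of the polling code described above (the URL and handler names are placeholders, not my actual code):

```actionscript
import flash.events.Event;
import flash.events.TimerEvent;
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.utils.Timer;

var poller:Timer = new Timer(1000);      // fires once per second
var loader:URLLoader = new URLLoader();  // reused from call to call

poller.addEventListener(TimerEvent.TIMER, onPoll);
loader.addEventListener(Event.COMPLETE, onPollComplete);
poller.start();

function onPoll(e:TimerEvent):void {
    // a fresh URLRequest each time, with a cache-killing query value
    loader.load(new URLRequest("currentSlide.php?nocache=" + new Date().time));
}

function onPollComplete(e:Event):void {
    var label:String = String(loader.data);
    if (label != currentFrameLabel) {
        // scrub through the timeline until the matching frame label is found
    }
}
```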
I've been using mr doob's stats package to monitor memory usage, which definitely grows over time, so there's gotta be a leak somewhere (it grows from ~30 MB initially to > 200 MB after some scrubbing and about 25 minutes of uptime).
Does anyone have any ideas as to what might be causing the performance problems?
UPDATE: I'm not entirely sure the performance troubles are tied directly to memory; I ran an instance of the presentation swf for about 15 minutes and although memory usage only climbed to around 70 MB (and stayed there), a noticeable hiccup started appearing at one-second intervals, coinciding with the polling calls (tracked via Firebug's Net panel). What else might cause stuttering movieclips?
I know this is coming a bit late, but I have been using Flash Builder's profiler frequently, and one thing I found is that the TimerEvent generated by the Timer class:
1. uses up quite a bit of memory individually, and
2. seems to not get released properly during garbage collection (even if you stopped the timer and removed all references to it).
A new event is generated for each Timer tick. I use setInterval instead, even though a few AS3 evangelists seem to recommend against that. I don't know why. setInterval still generates timer events, but they appear to be garbage-collected properly over time.
So one strategy may be that:
1. you replace the Timer with a call to setInterval() ... which is arguably more robust code anyway, and
2. (CAUTION) you force garbage collection on each slide scrub (but not on each poll). See this question for more details on the pros and cons.
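A hedged sketch of both (poll() is a stand-in for your existing URLLoader call; note that System.gc() only works in the debug player and AIR, so the sketch uses the well-known LocalConnection hack for the web player):

```actionscript
import flash.net.LocalConnection;
import flash.utils.clearInterval;
import flash.utils.setInterval;

// 1. setInterval in place of the Timer
var pollId:uint = setInterval(poll, 1000);
function poll():void {
    // issue the URLLoader request here, as before
}
// clearInterval(pollId); // when the presentation shuts down

// 2. Force a GC sweep on each slide scrub
function forceGC():void {
    try {
        new LocalConnection().connect("forceGC");
        new LocalConnection().connect("forceGC"); // second connect throws
    } catch (e:Error) {
        // the error is expected; the interrupted connects trigger the sweep
    }
}
```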
The second suggestion is only a stop-gap measure. I really encourage you to use the profiling tools to find the leak. Flash Builder Pro has a 60-day trial that might help.
Finally, when moving to a completely new slide SWF (not a new timeline position in the current slide), how are you making sure that the previous slide SWF got unloaded properly? Or am I misunderstanding your setup and there is only one actual slide SWF?
Just two things that come to mind:
Depending on the version of the Flash Player and the CPU usage, garbage collection sometimes does not start before 250 MB (or even more) of memory is consumed.
MovieClips, Sprites, Loaders, and whatever else has an event listener attached will not be killed by the garbage collection.
So I believe your problem is that either the slides or the loader are not cleaned up correctly after you use them, so they are kept in memory.
A good point to start reading: http://www.gskinner.com/blog/archives/2006/06/as3_resource_ma.html
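As a hedged illustration of the cleanup that article describes (clip, loader, and the handlers here are placeholders for whatever your code uses):

```actionscript
import flash.display.Loader;
import flash.display.MovieClip;
import flash.events.Event;

var clip:MovieClip = new MovieClip();   // placeholder for a slide clip
var loader:Loader = new Loader();       // placeholder for your loader
function onFrame(e:Event):void {}
function onLoaded(e:Event):void {}

// Weakly-referenced listener: addEventListener(type, listener,
// useCapture, priority, useWeakReference)
clip.addEventListener(Event.ENTER_FRAME, onFrame, false, 0, true);

// Explicit teardown before dropping a Loader reference
loader.contentLoaderInfo.removeEventListener(Event.COMPLETE, onLoaded);
try { loader.close(); } catch (e:Error) {} // close() throws if nothing is loading
loader.unloadAndStop(); // FP10+; also stops timelines/sounds in loaded content
loader = null;
```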
I'm a new member of this site. I am making a game where I need to use a lot of event listeners.
Problem:
There'll be around 300 event listeners in my game; I'm worried about whether this will affect it.
Yes, it will affect your game, primarily because you will need to keep track of all 300 so that they won't create memory leaks in the form of defunct (released) objects left in memory because they have a listener attached to, say, the stage. The secondary aspect is performance: each listener taking action means several function calls under the hood, so it's better to organize those listeners somehow. It's fine to have a button listen on itself for, say, MouseEvent.CLICK, and to have 300 buttons like that, because each time you click, only a few (ideally one) listeners react. It's not as fine to have 300 listeners for Event.ENTER_FRAME, because every frame all of them get invoked; it's better to have one listener instead, with every subsystem or object called from that listener (see the sketch below). This approach also replaces the overhead of the Flash event subsystem with direct calls, and lessens your hassle with unattached listeners.
There may be more performance aspects regarding listeners, especially since the Flash engine developers started placing security checks into the engine, slowing event processing by a significant margin; these are, however, obscure, and the only thing known about them is "use fewer listeners". You will still have to rely on the Flash event cycle at least at the top level, even if you devise an event processing system of your own or use one made by someone else, but the main point stands: the fewer, the better. If you can reduce the number of listeners, please do so.
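A minimal sketch of the "one listener, many subsystems" idea (the registered callbacks are made up):

```actionscript
import flash.events.Event;

// One ENTER_FRAME listener on the stage, fanning out to subsystems
var updatables:Vector.<Function> = new Vector.<Function>();
// each subsystem registers its update callback once, e.g.:
// updatables.push(player.update, enemies.update, particles.update);

stage.addEventListener(Event.ENTER_FRAME, onTick);

function onTick(e:Event):void {
    for (var i:int = 0; i < updatables.length; i++) {
        updatables[i](); // direct call instead of a per-object dispatch
    }
}
```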
Well, you are very vague about the kind of event listeners you use. If they are ENTER_FRAME listeners, it could be an issue; try not to use ENTER_FRAME on individual objects and use one on the stage instead, especially if you have 300 of them.
I'm sure only a subset would be ENTER_FRAME listeners and most will be mouse events. And I don't think most of them would be on active MovieClips.
So only a subset would be active at a time, so there's mostly nothing to be worried about as long as there isn't any unwanted behaviour. I feel you should be good to go, but do manage all of your ENTER_FRAME listeners.
I'm writing an application which pulls up to several dozen images from a server using Loader objects. It works fine in all browsers except Firefox, where I'm finding that, with over 6 or so connections, some simply never load - and I cease to get progress events (and can detect no errors/error events).
I extended the Loader class so that it will kill and reopen the transfer if it takes longer than ten seconds, but this temporary hack has created a new problem, in that when there are quite a few connections open, many of them will load 90-odd percent of the image, get killed for exceeding the time limit, open again, load 90-odd percent etc...until the traffic is low enough for it to actually complete. This means I'm getting transfers of many times the amount of data that is actually being requested!
It doesn't happen in any other browser (I was anticipating IE errors, so for Firefox to be the anomaly was unexpected!), I can write a class to manage Loaders, but wondered if anyone else had seen this problem?
Thanks in advance for any help!
Maybe try to limit the number of concurrent connections.
Instead of loading all assets at once (in which case the Flash Player or the browser manages the connections), try to build a queue.
Building a simple queue is fairly easy: just create an array of URLs and shift or pop a value every time the loader has finished loading the previous asset.
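A minimal sketch of such a queue, reusing a single Loader (the URLs are placeholders):

```actionscript
import flash.display.Loader;
import flash.events.Event;
import flash.net.URLRequest;

var urls:Array = ["img1.jpg", "img2.jpg", "img3.jpg"];
var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onLoaded);
loadNext();

function loadNext():void {
    if (urls.length > 0) {
        loader.load(new URLRequest(urls.shift())); // one connection at a time
    }
}

function onLoaded(e:Event):void {
    addChild(loader.content); // reparent the loaded image, then continue
    loadNext();
}
```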
You might also use an existing loader manager like LoaderMax or BulkLoader; they let you create a queue and limit the number of connections, and they are fairly robust. LoaderMax is my favourite.
I'm new to Actionscript. There's probably a better way to do this, and if there is, I'm all ears.
What I'm trying to do is have a background layer run for, say 150 seconds in a loop. Then have another layer (we'll call it Layer 1) with a single object on it loop for 50 seconds. Is there a way to have Layer 1 loop 3 times inside of that 150 seconds that the background layer is looping?
Here's the reason I want Layer 1 to be shorter:
When a certain combination is entered (for example, A1), an item will pop out of and in front of the object on Layer 1.
I haven't written any code for it yet, but my hopeful plan is to have the background layer run continuously, then have different scene sections on Layer 1 for each of the items coming out of the object on Layer 1. That way, when A1 is entered, Layer 1 can gotoAndPlay(51) without messing up the background layer.
If it helps you understand it at all, it's a vending machine project. My group's vending machine is the TARDIS. The TARDIS is flying through space while you're entering what you want out of the vending machine and stuff is popping out of it.
If I understand correctly, the background is a MovieClip that loops within its own timeline. When Flash plays through a timeline, the timing is dependent on the performance of the computer and how complex the animation is. You can add an audio track set to 'streaming' to lock the timing down, which will then drop frames if the CPU is overloaded. I have used a silent sound set to loop infinitely and play mode 'streaming' to do this when there is no audio to be used.
Instead of using timeline animations I would recommend using TweenMax http://www.greensock.com/tweenmax/ as it allows tween chaining, that is, creating a chain of sequential and parallel tweens. When you use a tween, you define the timing in seconds and can use values like 1.25 seconds; it will be accurate to the timing you define. You can also run methods on complete, use easing, and all sorts of goodies. If you get comfortable using this you will be able to create much more complex interactions in your Flash projects, and also be able to change animations and timing much more easily than by messing with the timeline.
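As a rough sketch of what your setup could look like with TimelineMax (tardis_mc and stars_mc are placeholder clips; the timings echo the numbers in your question):

```actionscript
import com.greensock.TimelineMax;
import com.greensock.TweenMax;
import com.greensock.easing.Quad;

// Layer 1: a 50-second sequence played 3 times (repeat: 2) = 150 s total
var layer1:TimelineMax = new TimelineMax({repeat: 2});
layer1.append(TweenMax.to(tardis_mc, 25, {y: 100, ease: Quad.easeInOut}));
layer1.append(TweenMax.to(tardis_mc, 25, {y: 0, ease: Quad.easeInOut}));

// Background: one 150-second pass, looping forever
var background:TimelineMax = new TimelineMax({repeat: -1});
background.append(TweenMax.to(stars_mc, 150, {x: -2000}));
```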
In fact, when hiring Flash developers we always screen candidates by asking whether they prefer to do animations on the timeline or programmatically. Although Flash is on its way out, it's still good to learn, as the ideas will apply to JavaScript and whatever new technology comes about.
I'm writing an app that requires an audio stream to be recording while a backing track is played. I have this working, but there is an inconsistent gap in between playback and record starting.
I don't know if I can do anything to make the sync perfect every time, so I've been trying to track what time each stream starts so I can calculate the delay and trim it server-side. This also has proved to be a challenge as no events seem to be sent when a connection starts (as far as I know). I've tried using various properties like the streams' buffer sizes, etc.
I'm thinking now that as my recorded audio is only mono, I may be able to put some kind of 'control signal' on the second stereo track which I could use to determine exactly when a sound starts recording (or stick the whole backing track in that channel so I can sync them that way). This leaves me with the new problem of properly injecting this sound into the NetStream.
If anyone has any idea whether or not any of these ideas will work, how to execute them, or some alternatives, that would be extremely helpful! I've been working on this issue for a while.
The only thing that comes to mind is to try to use metadata; Flash media streams support metadata and the onMetaData callback. I assume you're using Flash Media Server for the incoming audio and for recording the outgoing audio. If you use the send method while you're streaming the audio back to the server, you can put the listening audio track's playhead timestamp in it, so when you get the two streams back to the server you can mux them together properly. You could also try encoding the audio that is streamed to the client with metadata and use onMetaData to sync them up. I'm not sure how to do this, but a second approach is to combine the two streams as the audio goes back, so that you don't need to mux them later, or to attach it to a blank video stream with two audio tracks...
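A hedged sketch of the send() idea (publishStream, playbackStream, and the handler name are assumptions about your setup; FMS records send() data messages inline with the stream):

```actionscript
import flash.utils.setInterval;

// assumes existing: var publishStream:NetStream;  (recording the mic)
//                   var playbackStream:NetStream; (playing the backing track)

// Once a second, stamp the outgoing recording with the backing
// track's current playhead position
setInterval(function():void {
    publishStream.send("onSyncMark", {playhead: playbackStream.time});
}, 1000);
```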
If you're going to inject something into the NetStream... something as complex as sound... I guess it would be better to go with a raw Socket instead. You'll be reading bytes directly. There may be compression on the NetStream, so the data sent is not raw sound data; you would need a class for decompressing the codec. When you finally get the raw sound data, mix the input in using Socket.readUnsignedByte() or Socket.readFloat(), and write the modified data back using Socket.writeByte() or Socket.writeFloat().
This is the alternative for injecting the backing track into the audio.
For syncing, it is actually quite simple. Even though the data might not be sent instantly, one thing still stays the same: time. So, when the user's audio is finished, just mix it onto the backing track without any further adjustment; the timing should stay the same.
IF the user has a slow internet DOWNLOAD, so that his backing track has unwanted breaks, check in the SWF whether enough data is buffered to add the next sound buffer (usually 4096 bytes, if I remember correctly). If yes, continue streaming the user's audio.
If not, do NOT stream, and start again as soon as the data catches back up.
In my experience NetStream is one of the most inaccurate and dirty features of Flash (NetStream:play2 ?!!), which btw is quite ironic seeing how Flash's primary use is probably video playback.
Trying to sync it with anything else in a reliable way is very hard... events and statuses are not very straightforward, and there are multiple issues that can spoil your syncing.
Luckily however, netStream.time will tell you quite accurately the current playhead position, so you can eventually use that to determine starting time, delays, dropped frames, etc... Notice that determining the actual starting time is a bit tricky though. When you start loading a netStream, the time value is zero, but when it shows the first frame and is waiting for the buffer to fill (not playing yet) the time value is something like 0.027 (depends on the video), so you need to very carefully monitor this value to accurately determine events.
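A hedged sketch of that monitoring (ns is your NetStream; the 0.02 epsilon reflects the ~0.027 observation above and may need tuning per video):

```actionscript
import flash.events.Event;
import flash.utils.getTimer;

// assumes existing: var ns:NetStream;

var startedAt:int = -1;
addEventListener(Event.ENTER_FRAME, watchStream);

function watchStream(e:Event):void {
    // ns.time sits at zero until playback really begins
    if (ns.time > 0.02) {
        startedAt = getTimer(); // wall-clock ms when playback actually started
        removeEventListener(Event.ENTER_FRAME, watchStream);
    }
}
```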
An alternative to using NetStream is embedding the video in a SWF file, which should make synchronization much easier (especially if you use frequent keyframes when encoding). But you will lose quality/filesize ratio (if I remember correctly, you can only use FLV, not H.264).
no events seem to be sent when a connection starts
Sure there are: NetStatusEvent.NET_STATUS fires for a multitude of reasons on NetConnections and NetStreams; you just have to add a listener and process the contents of the event's info object.
Check the AS3 reference docs here; you're looking for the NetStatusEvent's info property (e.info.code).
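For example (nc and ns being your NetConnection and NetStream; the codes shown are standard ones):

```actionscript
import flash.events.NetStatusEvent;

// assumes existing: var nc:NetConnection; var ns:NetStream;

nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus); // NetConnection
ns.addEventListener(NetStatusEvent.NET_STATUS, onStatus); // NetStream

function onStatus(e:NetStatusEvent):void {
    switch (e.info.code) {
        case "NetConnection.Connect.Success":
        case "NetStream.Play.Start":
        case "NetStream.Publish.Start":
            trace("started: " + e.info.code);
            break;
        case "NetStream.Buffer.Full":
            trace("playback actually under way");
            break;
    }
}
```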
I am a student of Computer Science and have learned many of the basic concepts of what is going on "under the hood" while a computer program is running. But recently I realized that I do not understand how software events work efficiently.
In hardware, this is easy: instead of the processor "busy waiting" to see if something happened, the component sends an interrupt request.
But how does this work in, for example, a mouse-over event? My guess is as follows: if the mouse sends a signal ("moved"), the operating system calculates its new position p, then checks what program is being drawn on the screen, tells that program position p, then the program itself checks what object is at p, checks if any event handlers are associated with said object and finally fires them.
That sounds terribly inefficient to me, since a tiny mouse movement equates to a lot of CPU context switches (which I learned are relatively expensive). And then there are dozens of background applications that may want to do stuff of their own as well.
Where is my intuition failing me? I realize that even "slow" 500MHz processors do 500 million operations per second, but still it seems too much work for such a simple event.
Thanks in advance!
Think of events like network packets, since they're usually handled by similar mechanisms. Now think, your mouse sends a couple of hundred packets a second, maximum, and they're around 6 bytes each. That's nothing compared to the bandwidth capabilities of modern machines.
In fact, you could make a responsive GUI where every mouse motion literally sent a network packet (86 bytes including headers) on hardware built around 20 years ago: X11, the fundamental GUI mechanism for Linux and most other Unixes, can do exactly that, and frequently was used that way in the late 80s and early 90s. When I first used a GUI, that's what it was, and while it wasn't great by current standards, given that it was running on 20 MHz machines, it really was usable.
My understanding is as follows:

- Every application/window has an event loop, which is filled by the OS from interrupts.
- The mouse move will come in there.
- To my knowledge, every window has its own queue/process (in Windows since 3.1).
- Every window has controls.
- The window bubbles these events down to its controls.
- Each control determines whether the event is for it.

So it's not necessary to "compute" which item is drawn under the mouse cursor; the window, and then the control, determine whether the event is for them.
By what criteria do you determine that it's too much? It's as much work as it needs to be. Mouse events happen in the millisecond range. The work required to get it to the handler code is probably measured in microseconds. It's just not an issue.
You're pretty much right, though mouse events occur at a fixed rate (e.g. a USB mouse on Linux gives you events 125 times a second by default, which really is not a lot), and the OS or application might further merge mouse events that are close in time or position before sending them off to be handled.