Listener firing order messes up testing - Swing

I have encountered the following issue. I have an interactive Swing application. It basically creates a bunch of graphical objects on a canvas. You pick the kind you want to create from a palette (oval, circle, etc.) and click on the canvas. Everything works as expected. Now I want to record a test using the Abbot/Costello testing framework. It is pretty simple: fire up the Costello app, create a new script, and start recording events. Let's say I want to record this sequence: click on a palette and drop a graphic on the canvas. It is natural to expect that the testing app would record the click before the canvas component had a chance to process it and add a new graphic. For various reasons I need to capture the app's state before any changes are made to it, not after. It turns out that my app gets first crack at the click event, resulting in the creation of a new graphic, and only after that does my testing app receive the event for recording. At this point it is too late for me: the state has been irretrievably changed, and I am basically recording a future state rather than the state that preceded it.
I understand that this is a result of listeners firing in a different order. I also understand that Swing makes no guarantees as to the order in which listeners fire. Have I reached the limits of what is possible, or is there a solution?
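One avenue worth experimenting with (a sketch, not a guaranteed fix): `Toolkit.addAWTEventListener` registers a global listener that the toolkit notifies while an event is being dispatched, which in practice happens before the target component's own mouse listeners run. Whether this fires early enough for your recorder depends on how Abbot hooks into the event queue, so treat this purely as something to try; the `RecorderHook` class name is illustrative.

```java
import java.awt.AWTEvent;
import java.awt.Toolkit;
import java.awt.event.MouseEvent;

public class RecorderHook {
    public static void install() {
        // Global listener: the toolkit notifies these during event dispatch,
        // typically before the target component's own listeners run.
        Toolkit.getDefaultToolkit().addAWTEventListener(event -> {
            if (event.getID() == MouseEvent.MOUSE_PRESSED) {
                MouseEvent me = (MouseEvent) event;
                // Capture the pre-click application state here,
                // before the canvas has had a chance to create the new graphic.
                System.out.println("About to click at " + me.getPoint()
                        + " on " + me.getComponent());
            }
        }, AWTEvent.MOUSE_EVENT_MASK);
    }
}
```

If this captures state early enough in your case, the recorder could snapshot the app inside the hook and let the normal component listeners proceed afterwards.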

Related

Worried about a lot of event listeners in AS3

I'm a new member of this site. I am making a game where I need to use a lot of event listeners.
Problem:
There'll be around 300 event listeners in my game, and I'm worried about whether this will affect my game.
Yes, it will affect your game, primarily because you will need to keep track of all 300 so that they won't create memory leaks in the form of defunct (released) objects left in memory because they still have a listener attached to, say, the stage. The secondary aspect is performance: each listener that takes action involves several function calls under the hood, so it's better to organize those listeners somehow. It's fine to have a button listen on itself for, say, MouseEvent.CLICK, and to have 300 buttons like that, because each time you click only a few (ideally one) listeners react. It's not as fine to have 300 listeners listening for Event.ENTER_FRAME, because every frame all of them would be invoked; it's better to have one listener instead, with every subsystem or object then called from that listener. This approach also reduces the overhead of the Flash event subsystem by replacing dispatched events with direct calls, and lessens your hassle with unremoved listeners.
There may be more performance aspects regarding listeners, especially since the Flash engine developers started placing security checks into the engine, slowing event processing by a significant margin. These are, however, obscure, and the only thing known about them is "use fewer listeners". You will still have to rely on the Flash event cycle at least at the top level, even if you devise an event processing system of your own or use one made by someone else, but the main point stands: "the fewer, the better". If you can reduce the number of listeners, please do so.
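The "one central frame listener fanning out to subsystems" pattern described above is language-neutral. Here is a minimal sketch of it in Java terms (the `FrameDispatcher` and `Updatable` names are illustrative, not part of any Flash API): a single listener attached to the stage or timer calls `onFrame()`, which fans out to every registered subsystem.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// One central "frame" listener that fans out to registered subsystems,
// instead of hundreds of per-object ENTER_FRAME listeners.
public class FrameDispatcher {
    public interface Updatable { void update(); }

    // Copy-on-write so subsystems can unregister themselves mid-frame safely.
    private final List<Updatable> subsystems = new CopyOnWriteArrayList<>();

    public void register(Updatable u)   { subsystems.add(u); }
    public void unregister(Updatable u) { subsystems.remove(u); }

    // Called once per frame by the single listener attached to the stage/timer.
    public void onFrame() {
        for (Updatable u : subsystems) u.update();
    }
}
```

Unregistering a subsystem here is one list removal, which also makes the "dangling listener" leak described above much harder to cause.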
Well, you are very vague about the kind of event listeners you use. If they are ENTER_FRAME listeners, it could be an issue, especially with 300 of them; try not to use ENTER_FRAME handlers on individual objects, and use one on the stage instead.
I'm sure only a subset would be ENTER_FRAME listeners, and most will be mouse events. And I don't think most of them would be on active MovieClips.
Then only a subset would be active at a time, so there's mostly nothing to be worried about as long as there isn't any unwanted behaviour. I feel you should be good to go. But do manage all of your ENTER_FRAME listeners.

Event-Driven Programming - How does an event know when to occur?

In the past few weeks I've been really into what happens "behind the scenes" in software, and there is something that really interests me - how does an event in event-driven programming know when to occur?
Let me explain: let's say we have a button in a GUI. How does the button know when it was pressed? Is there a loop that runs constantly and, once it detects the button press, activates the event, or is there a more efficient method?
In the case of a button, and how it knows it was clicked, I can only speak from experience with Windows programming, though I'm pretty sure it can be extrapolated to cover other types of operating systems and windowing systems.
Windows, the operating system, will keep tabs on your input devices, such as your mouse. When it detects, or probably more appropriately, is told that you clicked one of the mouse buttons, it records a lot of information and then goes searching for what to do with that information.
I am guessing here, but it probably gets told through an interrupt, something that pings the CPU and tells it something special just happened.
The information the operating system records, such as which mouse, which button, and where the mouse pointer was at the time, is used to determine what happens next.
Specifically, Windows tries to find out which program, window, and component in that window should be told about the mouse click. When it has found out where the mouse click went, it puts a message into the message queue of the thread that owns that window.
A message queue is like a loop that runs constantly, but it will stop whenever nothing is happening, i.e. when no messages are put into its queue. So the message that was created because you clicked your mouse is put into the message queue of that thread, and the message loop picks up that message and processes it.
A message queue loop looks somewhat like this:
Message msg;
while (GetNextMessage(out msg))
{
    ProcessMessage(msg);
}
Processing it here means that the thread figures out which internal component of the window the message should go to, and then calls a method on that component, giving it the message.
So basically your mouse click ends up being a normal method call on the button object.
That's all there is to it.
In .NET the method in question is named WndProc.
Now, why do we have event-driven programming? What's the alternative?
Well, one thing you could do is create a new button class every time you need a new button in a window, embedding the code that should run when the button is clicked inside that class.
This would get tiresome really quickly; you would be doing the same thing over and over again, every time you needed a new button.
In truth, the only thing that differs is what happens when you click the button.
So instead of creating a new button every time you need a new one in your program, let's make one button class that can do everything.
Except, how can that one class do everything? It can't, which is why, when the button is clicked, it needs some way of informing the owning program / window that it was clicked, so that whatever is specific to this button can be done.
And that's why events were created. You can create a generic button type that signals to the outside world (outside of the button type) that specific things, "events", happen, without caring one bit about what actually happens in response.
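A minimal sketch of that idea in Java (the `Button` and `ClickListener` names are illustrative, not any particular GUI toolkit's API): one generic button class, with the per-button behavior supplied from outside as listeners.

```java
import java.util.ArrayList;
import java.util.List;

// A single generic Button class: the click behavior is supplied from the
// outside as listeners, so no subclass per button is needed.
public class Button {
    public interface ClickListener { void onClick(Button source); }

    private final List<ClickListener> listeners = new ArrayList<>();
    private final String label;

    public Button(String label) { this.label = label; }
    public String getLabel()    { return label; }

    public void addClickListener(ClickListener l) { listeners.add(l); }

    // Called by the windowing layer when a click lands on this button;
    // the button just forwards the "event" to whoever registered interest.
    public void click() {
        for (ClickListener l : listeners) l.onClick(this);
    }
}
```

Usage is then just `okButton.addClickListener(b -> saveDocument());` - the button type itself never changes, only the registered behavior does.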
Basically, the way it works is that when you click the mouse button, it generates a hardware interrupt that halts the currently executing thread and causes the OS to run a specific piece of code for handling that type of interrupt. In Windows this causes a message to be generated, which is added to the event queue of a running application. At this point the thread that was interrupted to handle the hardware interrupt is resumed (or possibly some other thread).
A GUI application essentially loops constantly, checking for messages in this queue; when it gets one, it processes it by, for example, checking the x and y position of the mouse click to see whether it falls within the bounds of a button, and if it does, calling some user-specified code (the click event handler for that button). All of this is usually abstracted away to the point where you just supply the code that should be called when the button is clicked.
There are two different models that may present themselves here:
1/ The library takes control of the "event loop" - intercepting user events, OS events, etc. - and relies on a (user-defined) callback mechanism to deal with specific events (keyboard, mouse, etc.).
2/ The user must manage the event loop: check for events, dispatch based on the event, and (typically) return to event loop.
It would be helpful if you could provide more specifics.
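Model 2 above - the application owning the loop - can be sketched in a few lines of Java (the `EventLoop`, `Type`, and `log` names are illustrative): block for the next event, dispatch on its type, return to waiting.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Model 2 sketch: the application owns the loop, pulling events off a queue
// and dispatching based on the event type. A QUIT event ends the loop.
public class EventLoop {
    public enum Type { KEY, MOUSE, QUIT }

    public static final class Event {
        final Type type; final String payload;
        public Event(Type type, String payload) { this.type = type; this.payload = payload; }
    }

    private final BlockingQueue<Event> queue = new LinkedBlockingQueue<>();
    public final List<String> log = new ArrayList<>(); // records dispatches, for illustration

    public void post(Event e) { queue.add(e); }

    public void run() {
        while (true) {
            Event e;
            try { e = queue.take(); }            // blocks until an event arrives
            catch (InterruptedException ie) { Thread.currentThread().interrupt(); return; }
            switch (e.type) {                    // dispatch based on the event
                case KEY:   log.add("key:" + e.payload); break;
                case MOUSE: log.add("mouse:" + e.payload); break;
                case QUIT:  return;              // leave the loop
            }
        }
    }
}
```

In model 1, a library would own `run()` and you would only supply the per-type handlers; the structure underneath is the same.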

Is There A Way To Independently Loop Layers in Flash with Actionscript?

I'm new to Actionscript. There's probably a better way to do this, and if there is, I'm all ears.
What I'm trying to do is have a background layer run for, say 150 seconds in a loop. Then have another layer (we'll call it Layer 1) with a single object on it loop for 50 seconds. Is there a way to have Layer 1 loop 3 times inside of that 150 seconds that the background layer is looping?
Here's the reason I want Layer 1 to be shorter:
When a certain combination is entered (for example, A1), an item will pop out of and in front of the object on Layer 1.
I haven't written any code for it yet, but my hopeful plan is to have the background layer run continuously, then have different scene sections on Layer 1 for each of the items coming out of the object on Layer 1. That way, when A1 is entered, Layer 1 can gotoAndPlay(51) without messing up the background layer.
If it helps you understand it at all, it's a vending machine project. My group's vending machine is the TARDIS. The TARDIS is flying through space while you're entering what you want out of the vending machine and stuff is popping out of it.
If I understand correctly, the background is a MovieClip that loops within its own timeline. When Flash plays through a timeline, the timing is dependent on the performance of the computer and how complex the animation is. You can add an audio track set to 'streaming' to lock the timing down, which will then drop frames if the CPU is overloaded. I have used a silent sound set to loop infinitely and play mode 'streaming' to do this when there is no audio to be used.
Instead of using timeline animations, I would recommend using TweenMax http://www.greensock.com/tweenmax/ as it allows tween chaining, that is, creating a chain of sequential and parallel tweens. When you use a tween you define the timing in seconds and can use values like 1.25 seconds; it will be accurate to the timing you define. You can also run methods on complete, use easing, and all sorts of goodies. If you get comfortable using this, you will be able to create much more complex interactions in your Flash projects and also be able to change animations and timing much more easily than by messing with the timeline.
In fact, when hiring Flash developers we always screen candidates by asking whether they prefer to do animations on the timeline or programmatically. Although Flash is on its way out, it is still good to learn, as the ideas will apply to JavaScript and whatever new technology comes about.

Clojure agents: rate limiting?

Okay, so I have this small procedural SVG editor in Clojure.
It has a code pane where the user creates code that generates a SVG document, and a preview pane. The preview pane is updated whenever the code changes.
Right now, on a text change event, the code gets recompiled on the UI thread (Ewwwww!) and the preview pane is updated. The compilation step should instead happen asynchronously, and agents seem like a good answer to that problem: ask an agent to recompile the code on an update, and pass the result to the image pane.
I have not yet used agents, and I do not know whether they work with an implicit queue, but I suspect so. In my case, I have zero interest in computing "intermediate" steps (think fast keystrokes: if a keystroke happens before a recompilation has started, I simply want to discard that recompilation) - i.e. I want a send to overwrite any pending agent computation.
How do I make that happen? Any hints? Or even a code sample? Is my rambling even making sense?
Thanks!
You describe a problem that has more to do with execution flow control than with shared state management. Hence, you might want to leave STM aside for a moment and look into futures: they're still executed in a thread pool, like agents, but unlike agents they can be stopped by calling future-cancel, and their status inspected with future-cancelled?.
There are no strong guarantees that the thread the future is executing on can actually be stopped. Still, your code will be able to try to cancel the future and move on to schedule the next recompilation.
Agents do indeed work on a queue: each function gets the state of the agent and produces the next state of the agent. Agents track an identity over time. This sounds like a little more than you need; atoms are a slightly better fit for your task and are used in a very similar manner.
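Whichever Clojure construct you pick, the underlying "latest wins" scheduling idea is the same and is language-neutral. Here is a sketch of it in Java terms (`Recompiler`, `compile`, and the generation-counter scheme are illustrative, not an existing API): every edit bumps a generation counter, and a queued compile task bails out if a newer request has arrived by the time it runs, so intermediate keystrokes are discarded.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// "Latest wins" scheduling: each edit bumps a generation counter; a queued
// compile task runs only if it is still the newest request when it starts.
public class Recompiler {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final AtomicLong generation = new AtomicLong();

    // Called on every edit, off the UI thread's critical path.
    public void sourceChanged(String source, Consumer<String> onDone) {
        long myGen = generation.incrementAndGet();
        worker.submit(() -> {
            if (myGen != generation.get()) return;  // superseded: skip stale work
            String compiled = compile(source);      // stand-in for real compilation
            if (myGen == generation.get()) onDone.accept(compiled);
        });
    }

    private String compile(String source) { return "compiled:" + source; }

    // Drain any remaining work and stop the worker thread.
    public void shutdown() {
        worker.shutdown();
        try { worker.awaitTermination(5, TimeUnit.SECONDS); }
        catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
    }
}
```

In Clojure the same shape falls out of an atom holding the current generation plus a future per request that checks the atom before and after compiling.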

Actionscript 3: Memory Leak in Server Polling Presentation App

I'm building a remote presentation tool in AS3. In a nutshell, one user (the presenter) has access to a "table of contents" HTML page with links for each slide in the presentation, and an arbitrary number of viewers can watch the presentation on another page, which in turn is in the form of a SWF that polls the server every second to ensure that it's on the right slide. Whenever the admin clicks a slide link in the TOC, the database gets updated, and on its next request the presentation swf compares the label of the slide it's currently displaying to the response it got from the server. If the response differs from the current label, the swf scrubs through the timeline until it finds the right frame label; otherwise, it does nothing and waits for the next poll result (a second later).
Each slide consists of a movieclip with its own nested timeline that loops as long as the slide is displayed. There's no actionscript controlling any of the nested movieclips, nor is there any actionscript on the main timeline except the stop();s on each keyframe (each of which is a slide in the presentation).
Everything is built and working perfectly. The only thing that's troubling is that if the presentation swf is open for long enough (say, 20 minutes), the polling starts to have a noticeable effect on the framerate of the movieclips animating on any given slide. That is, every second, there's a noticeable drop in the framerate of the animations that lasts about three-tenths of a second, which is quite noticeable (and hence is a deal-breaker for the whole presentation suite!).
I know that AS3 has issues with memory management, and I've tried to be diligent in my re-use of objects and event listeners. The code itself is dead simple; there's a Timer instance that fires every second, which triggers a new URLRequest to be loaded by a URLLoader. The URLLoader is reused from call to call, while the URLRequest is not (it needs to be initialized with a new cache-killing value each time, retrieved from a call to new Date().time). The only objects instantiated in the entire class are the Timer, the URLLoader, the various URLRequests (which should be garbage-collected), and the only event listeners are on the Timer (added once), the URLLoader (added once), and on the routines that scrub backwards and forwards in the timeline to find the right slide (and they're removed once the correct slide is found).
I've been using mr doob's stats package to monitor memory usage, which definitely grows over time, so there's gotta be a leak somewhere (it grows from ~30 MB initially to > 200 MB after some scrubbing and about 25 minutes of uptime).
Does anyone have any ideas as to what might be causing the performance problems?
UPDATE: I'm not entirely sure the performance troubles are tied directly to memory; I ran an instance of the presentation swf for about 15 minutes and although memory usage only climbed to around 70 MB (and stayed there), a noticeable hiccup started appearing at one-second intervals, coinciding with the polling calls (tracked via Firebug's Net panel). What else might cause stuttering movieclips?
I know this is coming a bit late, but I have been using Flash Builder's profiler frequently, and one thing I found is that the TimerEvent generated by the Timer class:
1/ uses up quite a bit of memory individually, and
2/ seems not to get released properly during garbage collection (even if you stopped the timer and removed all references to it).
A new event is generated for each Timer tick. I use setInterval instead, even though a few AS3 evangelists seem to recommend against it; I don't know why. setInterval still generates timer events, but they appear to be garbage-collected properly over time.
So one strategy may be that:
1/ you replace the Timer with a call to setInterval(), which is arguably more robust code anyway, and
2/ (CAUTION) you force garbage collection on each slide scrub (but not on each poll). See this question for more details on the pros and cons.
The second suggestion is only a stop-gap measure. I really encourage you to use the profiling tools to find the leak. Flash Builder Pro has a 60-day trial that might help.
Finally, when moving to a completely new slide SWF (not a new timeline position in the current slide), how are you making sure that the previous slide SWF got unloaded properly? Or am I misunderstanding your setup and there is only one actual slide SWF?
Just two things that came to mind:
Depending on the version of the Flash Player and the CPU usage, garbage collection sometimes does not start before 250 MB (or even more) of memory is consumed.
MovieClips, Sprites, Loaders, and anything else that has an event listener attached will not be reclaimed by the garbage collector.
So I believe your problem is that either the slides or the loader are not cleaned up correctly after you use them, so they are kept in memory.
A good point to start reading: http://www.gskinner.com/blog/archives/2006/06/as3_resource_ma.html