Event-Driven Programming - How does an event know when to occur?

In the past few weeks I've been really into what happens "behind the scenes" in software, and there is something that really interests me - how does an event in Event-Driven Programming know when to occur?
Let me explain: let's say we have a button in a GUI. How does the button know when it was pressed? Is there a loop that runs constantly and activates the event once it detects the button press, or is there a more efficient method?

In the case of a button, and how it knows it was clicked, I can only speak from experience with Windows programming, though I'm pretty sure the idea extrapolates to other operating systems and windowing systems.
Windows, the operating system, will keep tabs on your input devices, such as your mouse. When it detects, or probably more appropriately, is told, that you clicked one of the mouse buttons, it records a lot of information and then goes searching for what to do with that information.
I am guessing here, but it probably gets told through an interrupt, something that pings the CPU and tells it something special just happened.
The information the operating system records, such as which mouse, which button, and where the mouse pointer was at the time, is then used to determine what happens next.
Specifically, Windows tries to find out which program, window, and component in that window should be told about the mouse click. When it has found out where the mouse click went, it puts a message into the message queue of the thread that owns that window.
Each such thread runs a message loop: conceptually a loop that runs constantly, but which blocks whenever nothing is happening, i.e. when no messages are in its queue. So the message that was created because you clicked your mouse is put into the message queue of that thread, and the message loop picks it up and processes it.
A message loop looks somewhat like this:
Message msg;
while (GetNextMessage(out msg))   // blocks until a message arrives
{
    ProcessMessage(msg);          // dispatch to the component the message targets
}
(In real Win32 code the loop is built from GetMessage, TranslateMessage and DispatchMessage, but the shape is the same.)
Processing it here means that the thread figures out which internal component of the window the message should go to, and then calls a method on that component, giving it the message.
So basically your mouse click ends up being a normal method call on the button object.
That's all there is to it.
In .NET the method in question is named WndProc.
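For example, here is a minimal sketch, assuming WinForms (the LoggingButton class is made up for illustration): every control receives its messages through WndProc, and overriding it lets you watch the raw mouse-down message arrive before the control turns it into a Click event.

using System;
using System.Windows.Forms;

class LoggingButton : Button
{
    const int WM_LBUTTONDOWN = 0x0201;  // Win32 message id for a left-button press

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_LBUTTONDOWN)
            Console.WriteLine("Raw mouse-down message received");

        base.WndProc(ref m);  // normal processing, which eventually raises Click
    }
}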
Now, why do we have event-driven programming? What's the alternative?
Well, one thing you could do is create a new button class every time you need a new button in a window, embedding the code that should run when the button is clicked inside that class.
This would get tiresome really quickly; you would be doing the same thing over and over again, every time you need a new button.
In truth, the only thing that differs is what happens when you click the button.
So instead of creating a new button class every time you need a button in your program, let's make one button class that can do everything.
Except, how can that one class do everything? It can't, which is why, when the button is clicked, it needs some way of informing the owning program / window that it was clicked, so that whatever is specific to this button can be done.
And that's why events were created. You can create a generic button type that signals to the outside world (outside of the button type) that specific things, "events", happen, without caring one bit about what actually happens in response.
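Here is a minimal sketch of that idea in C# (the Button, Clicked and SimulateClick names are illustrative, not a real framework API): one generic button class that knows nothing about what a click means and simply raises an event.

using System;

class Button
{
    public event EventHandler Clicked;

    // In a real GUI this would be called from the message loop when a
    // click message is dispatched to this button.
    public void SimulateClick() => Clicked?.Invoke(this, EventArgs.Empty);
}

class Program
{
    static void Main()
    {
        var saveButton = new Button();
        var cancelButton = new Button();

        // Only the handlers differ; the Button class itself is reused as-is.
        saveButton.Clicked += (s, e) => Console.WriteLine("Saving...");
        cancelButton.Clicked += (s, e) => Console.WriteLine("Cancelled.");

        saveButton.SimulateClick();
        cancelButton.SimulateClick();
    }
}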

Basically, the way it works is that when you click the mouse button, it generates a hardware interrupt that halts the currently executing thread and causes the OS to run a specific piece of code for handling that type of interrupt. In Windows this causes a message to be generated, which is added to the event queue of a running application. At that point the thread that was interrupted to handle the hardware interrupt is resumed (or possibly some other thread).

A GUI application is essentially looping constantly, checking for messages in this queue. When it gets one, it processes it by doing something like checking the x and y position of the mouse click to see whether it falls within the bounds of a button; if it does, the application calls some user-specified code (the click event handler for that button). All of this is usually abstracted away to the point where you just supply the code that should be called when the button is clicked.
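A toy sketch of that dispatch step (every name here is made up, not a real OS API): one side enqueues mouse-click messages, and the application's loop blocks on the queue, hit-tests each click against a button's bounds, and calls the user-specified handler.

using System;
using System.Collections.Concurrent;

record Click(int X, int Y);

class Button
{
    public int X, Y, Width, Height;
    public Action OnClick = () => { };  // the user-specified click handler

    public bool Contains(Click c) =>
        c.X >= X && c.X < X + Width && c.Y >= Y && c.Y < Y + Height;
}

class Program
{
    static void Main()
    {
        var queue = new BlockingCollection<Click>();
        var button = new Button
        {
            X = 10, Y = 10, Width = 100, Height = 30,
            OnClick = () => Console.WriteLine("Button clicked!")
        };

        queue.Add(new Click(50, 20));  // stand-in for the OS posting a click message
        queue.CompleteAdding();        // so this demo loop terminates

        // The message loop: blocks while the queue is empty, wakes per message.
        foreach (var click in queue.GetConsumingEnumerable())
            if (button.Contains(click))
                button.OnClick();
    }
}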

There are two different models that may present themselves here:
1/ The library takes control of the "event loop" - intercepting user events, OS events, etc. - and relies on a (user-defined) callback mechanism to deal with specific events (keyboard, mouse, etc.).
2/ The user must manage the event loop themselves: check for events, dispatch based on the event, and (typically) return to the event loop, as in the sketch below.
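A rough sketch of model 2, the user-managed loop, in C# (all types here are hypothetical; model 1 would instead look like registering a callback and handing control to the library's own loop):

using System;
using System.Collections.Generic;

enum EventType { KeyDown, MouseDown, Quit }
record InputEvent(EventType Type);

class Program
{
    // Stand-in for a library/OS call that blocks until an event is available.
    static readonly Queue<InputEvent> pending = new(new[]
    {
        new InputEvent(EventType.KeyDown),
        new InputEvent(EventType.Quit),
    });
    static InputEvent WaitForNextEvent() => pending.Dequeue();

    static void Main()
    {
        for (var running = true; running; )
        {
            var e = WaitForNextEvent();
            switch (e.Type)
            {
                case EventType.KeyDown:   Console.WriteLine("key down");   break;
                case EventType.MouseDown: Console.WriteLine("mouse down"); break;
                case EventType.Quit:      running = false;                 break;
            }
        }
    }
}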
It would be helpful if you could provide more specifics.

Related

Worried about a lot of event listeners in AS3

I'm a new member of this site. I am making a game where I need to use a lot of event listeners.
Problem:
There'll be around 300 event listeners in my game, and I'm worried about whether this will affect it.
Yes, it will affect your game, primarily because you will need to keep control of all 300 so that they won't create memory leaks in the form of defunct (released) objects left in memory because they still have a listener attached to, say, the stage.

The secondary aspect is performance: each listener taking action involves several function calls behind the curtains, so it's better to organize those listeners somehow. It's fine to have a button listen on itself for, say, MouseEvent.CLICK and to have 300 buttons like that, because each time you click only a few (ideally one) listeners would react. It's not as fine to have 300 listeners listen for Event.ENTER_FRAME, because each frame all of them would be invoked; it's better to have one listener instead, with every subsystem or every object then getting called from that listener. This approach also replaces overhead in the Flash event subsystem with direct calls, and lessens your hassle with listeners left attached by mistake.
There may be more performance aspects regarding listeners, especially since Flash engine developers started placing security checks into the engine, slowing event processing by a significant margin; these are, however, obscure, and the only thing known about them is "use fewer listeners". You will still have to rely on the Flash event cycle at least at the top level, even if you devise an event processing system of your own or use one made by someone else, but the main point stands: "the fewer, the better". If you can lessen the number of listeners, please do so.
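The "one listener fans out" pattern this answer recommends, sketched in C# since the idea is language-independent (in AS3 the single listener would be attached to the stage for Event.ENTER_FRAME; all names here are made up):

using System;
using System.Collections.Generic;

class FrameDispatcher
{
    readonly List<Action> subscribers = new();

    public void Register(Action update) => subscribers.Add(update);

    // The single per-frame listener: plain method calls from here on,
    // instead of hundreds of separate ENTER_FRAME listeners.
    public void OnEnterFrame()
    {
        foreach (var update in subscribers)
            update();
    }
}

class Program
{
    static void Main()
    {
        var dispatcher = new FrameDispatcher();
        dispatcher.Register(() => Console.WriteLine("physics tick"));
        dispatcher.Register(() => Console.WriteLine("animation tick"));
        dispatcher.OnEnterFrame();  // one event, fanned out by direct calls
    }
}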
Well, you are very vague about the kind of event listeners you use. If they are ENTER_FRAME listeners it could be an issue; try not to use ENTER_FRAME on individual objects, and use it on the stage instead, especially if you are using 300 of them.
I'm sure only a subset would be ENTER_FRAME listeners, and most will be mouse events. And I don't think most of them would be on active MovieClips.
So only a subset would be active at a time, and there's mostly nothing to be worried about as long as there isn't any unwanted behaviour. I feel you should be good to go. But do manage all of your ENTER_FRAME listeners.

listeners firing order messes up testing

I have encountered the following issue. I have an interactive Swing application. It basically creates a bunch of graphical objects on a canvas. You pick the kind you want to create from a palette (oval, circle, etc.) and click on the canvas. Everything works as expected.

Now I want to record a test using the Abbot/Costello testing framework. It is pretty simple: fire up the Costello app, create a new script and start recording events. Let's say I want to record this sequence: click on a palette and drop a graphic on the canvas. It is natural to expect that the testing app would record the click before the canvas component had a chance to process it and add a new graphic. For various reasons I need to capture the app's state before any changes are made to it, not after. It turns out that my app gets first crack at the click event, resulting in the creation of a new graphic, and only after that does my testing app receive the event for recording. At that point it is too late for me: the state has been irretrievably changed, and I am basically recording a future state and not the state that preceded it.
I understand that this is a result of listeners firing in a different order. I also understand that Swing makes no guarantees as to the order in which listeners fire. Have I reached the limits of what is possible, or is there a solution?

Clojure agents: rate limiting?

Okay, so I have this small procedural SVG editor in Clojure.
It has a code pane where the user writes code that generates an SVG document, and a preview pane. The preview pane is updated whenever the code changes.
Right now, on a text change event, the code gets recompiled on the UI thread (Ewwwww!) and the preview pane updated. The compilation step should instead happen asynchronously, and agents seem a good answer to that problem: ask an agent to recompile the code on an update, and pass the result to the image pane.
I have not yet used agents, and I do not know whether they work with an implicit queue, but I suspect so. In my case, I have zero interest in computing "intermediate" steps (think about fast keystrokes: if a keystroke happens before a recompilation has started, I simply want to discard that recompilation) - i.e. I want a send to overwrite any pending agent computation.
How do I make that happen? Any hints? Or even a code sample? Is my rambling even making sense?
Thanks!
You describe a problem that has more to do with execution flow control than with shared state management. Hence, you might want to leave STM aside for a moment and look into futures: they're still executed in a thread pool, like agents, but unlike agents they can be stopped by calling future-cancel, and you can inspect their status with future-cancelled?.
There are no strong guarantees that the thread the future is executing on can be effectively stopped. Still, your code will be able to try to cancel the future and move on to schedule the next recompilation.
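A minimal sketch of that approach, assuming text-change events arrive on a single UI thread (so a plain read-then-write on the atom is safe); recompile and update-preview are placeholder names for your editor's own functions:

;; Keep at most one compilation in flight, cancelling the previous one
;; whenever a new keystroke arrives.
(def pending (atom nil))

(defn schedule-recompile [code]
  ;; Discard any compilation still in flight; future-cancel makes a
  ;; best-effort attempt to interrupt it.
  (when-let [f @pending]
    (when-not (future-done? f)
      (future-cancel f)))
  (reset! pending
          (future
            (update-preview (recompile code)))))  ; placeholder fns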
Agents do indeed work on a queue: each function gets the state of the agent and produces the next state of the agent. Agents track an identity over time. This sounds like a little more than you need; atoms are a slightly better fit for your task, and they are used in a very similar manner.

Is there ever a time when an exception can occur due to a user-invoked action and does not require letting the user know?

In implementing exception handling, it seems to always follow the same pattern: any code which is invokable by the user (i.e. behind a button) needs try/catch/finally, and then has to propagate the exception (throw) and show a message box to the user.
Is there ever any time when an exception can occur due to an action invoked by the user, but does not require letting the user know?
Thanks
Sure. One common example: a window is opened which is supposed to monitor the progress of some long-running task (whose execution is independent of the window), and the window is subsequently closed and disposed. Just as the window is being disposed, the thread whose progress is being monitored attempts to use BeginInvoke to update its progress indicator. The BeginInvoke will end up throwing an InvalidOperationException as a direct consequence of the user having decided to close the window at the precise moment he did, but there's no need to bother the user about it. Simply swallow the exception and move on.
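A sketch of that scenario in C# (the ProgressReporter class and the progress text are illustrative, not from the original question): a worker thread reports progress to a form that the user may have already closed.

using System;
using System.Windows.Forms;

class ProgressReporter
{
    readonly Form progressForm;

    public ProgressReporter(Form form) => progressForm = form;

    // Called from the long-running worker thread.
    public void ReportProgress(int percent)
    {
        try
        {
            progressForm.BeginInvoke(new Action(() =>
                progressForm.Text = $"Progress: {percent}%"));
        }
        catch (InvalidOperationException)
        {
            // The user closed the window just as we tried to update it.
            // There is nothing useful to tell them, so swallow it and move on.
        }
    }
}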

How do software events work internally?

I am a student of Computer Science and have learned many of the basic concepts of what is going on "under the hood" while a computer program is running. But recently I realized that I do not understand how software events work efficiently.
In hardware, this is easy: instead of the processor "busy waiting" to see if something happened, the component sends an interrupt request.
But how does this work in, for example, a mouse-over event? My guess is as follows: if the mouse sends a signal ("moved"), the operating system calculates its new position p, then checks what program is being drawn on the screen, tells that program position p, then the program itself checks what object is at p, checks if any event handlers are associated with said object and finally fires them.
That sounds terribly inefficient to me, since a tiny mouse movement equates to a lot of CPU context switches (which I learned are relatively expensive). And then there are dozens of background applications that may want to do stuff of their own as well.
Where is my intuition failing me? I realize that even "slow" 500MHz processors do 500 million operations per second, but still it seems too much work for such a simple event.
Thanks in advance!
Think of events like network packets, since they're usually handled by similar mechanisms. Now think, your mouse sends a couple of hundred packets a second, maximum, and they're around 6 bytes each. That's nothing compared to the bandwidth capabilities of modern machines.
In fact, you could make a responsive GUI where every mouse motion literally sent a network packet (86 bytes including headers) on hardware built around 20 years ago: X11, the fundamental GUI mechanism for Linux and most other Unixes, can do exactly that, and frequently was used that way in the late 80s and early 90s. When I first used a GUI, that's what it was, and while it wasn't great by current standards, given that it was running on 20 MHz machines, it really was usable.
My understanding is as follows:
Every application/window has an event loop, which is filled by OS interrupts.
The mouse move will come in there.
As far as I know, every window has a separate queue/process (in Windows since 3.1).
Every window has controls.
The window will bubble these events down to its controls.
Each control will determine whether the event is for it.
So it's not necessary to "compute" which item is drawn under the mouse cursor.
The window, and then the control, will determine if the event is for them.
By what criteria do you determine that it's too much? It's as much work as it needs to be. Mouse events happen in the millisecond range. The work required to get it to the handler code is probably measured in microseconds. It's just not an issue.
You're pretty much right - though mouse events occur at a fixed rate (e.g. a USB mouse on Linux gives you events 125 times a second by default, which really is not a lot), and the OS or application might further merge mouse events that are close in time or position before sending them off to be handled.
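A toy sketch of that coalescing idea in C# (all names hypothetical): if several mouse-move events are queued up, only the newest position matters, so stale moves are skipped before handling.

using System;
using System.Collections.Generic;

record MouseMove(int X, int Y);

class Program
{
    static void Main()
    {
        // Three moves arrived faster than the app could handle them.
        var queue = new Queue<MouseMove>(new[]
        {
            new MouseMove(10, 10),
            new MouseMove(11, 10),
            new MouseMove(12, 11),
        });

        while (queue.Count > 0)
        {
            var move = queue.Dequeue();
            if (queue.Count > 0) continue;  // a newer move is queued; skip this one
            Console.WriteLine($"Handling mouse move at ({move.X}, {move.Y})");
        }
    }
}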