Here are a few questions regarding event handling:
Question 1
In principle, may event handler methods (UI or not) execute for a relatively long time?
Question 2
If event handling may take a long time in a given system, then this handling should probably be performed asynchronously, in order to avoid blocking the software. In that case, should the class publishing the event call all the registered handlers asynchronously? Or is it better for this class to avoid any such assumptions, and to have each handler that takes a long time to execute perform its heavy work asynchronously and return immediately without blocking?
Question 3
When an event handler method is called asynchronously via BeginInvoke by the class publishing the event, is it mandatory to call the corresponding EndInvoke, and even to take into account the possibility of an exception? Or is it better for the class raising the event to ignore both?
Answer to question 1
See jgauffin.
Basically, event handlers must complete in a relatively short time so as not to block the execution of the rest of the registered handlers, or the handling of subsequent events. Needless to say, this demand becomes crucial when dealing with events whose handling keeps the UI unresponsive!
Answer to question 2
Assuming that a class publishing an event isn't familiar with all possible event handlers (consider especially an event published by some class exported from a DLL), the class would have to call all the registered handlers asynchronously. But this has its costs (thread switching, caching, concurrency, synchronization, etc.; see, for example, Brannon, Manoj Sharma, or even this wiki), which shouldn't be paid unless they must be. Therefore the best alternative is letting each registered handler, which of course knows how time-consuming its own work is, decide whether to execute that work synchronously or asynchronously. And here's a nice idea: an event may use its event-arguments structure to publish the time allocated to each handler (basically the overall permitted event-handling time divided by the number of currently registered listeners), letting each handler use this information to decide whether to execute its own work synchronously or asynchronously.
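To make the recommended alternative concrete, here is a minimal C# sketch (all names are hypothetical): the publisher raises the event synchronously, and a subscriber that knows its own work is expensive offloads it to the thread pool and returns immediately.

    using System;
    using System.Threading;

    public class Publisher
    {
        public event EventHandler SomethingHappened;

        public void RaiseSomethingHappened()
        {
            // Raise synchronously; each handler decides for itself
            // whether its own work must run asynchronously.
            EventHandler handler = SomethingHappened;
            if (handler != null)
                handler(this, EventArgs.Empty);
        }
    }

    public class SlowSubscriber
    {
        public void OnSomethingHappened(object sender, EventArgs e)
        {
            // This handler knows its work is time-consuming, so it queues
            // the work on the thread pool and returns to the publisher at once.
            ThreadPool.QueueUserWorkItem(delegate { DoExpensiveWork(); });
        }

        private void DoExpensiveWork()
        {
            Thread.Sleep(5000); // stands in for the real, lengthy work
        }
    }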
Answer to question 3
See MSDN (note the "Important" remark), Hans Passant (a prominent world-class .NET expert), Bruno Brant, and eventually STW, which also supplies demo code; all of them seem to be strongly in favor of calling EndInvoke and catching possible exceptions. (Indeed, some programmers tend to avoid using exceptions, but exceptions are inherent to C++, Java, and .NET methods, and even more so to Python functions. If big-data software like Hadoop and the applications above it uses exceptions intensively, then anyone may.)
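For illustration, here is a hedged C# sketch (hypothetical names) of the pattern those references argue for: each single-cast delegate is invoked with BeginInvoke, and the completion callback calls the matching EndInvoke and observes any exception.

    using System;
    using System.Diagnostics;

    public class AsyncPublisher
    {
        public event EventHandler SomethingHappened;

        public void RaiseAsync()
        {
            EventHandler handler = SomethingHappened;
            if (handler == null)
                return;

            // BeginInvoke must be called on each single-cast delegate,
            // not on the multicast delegate as a whole.
            foreach (EventHandler single in handler.GetInvocationList())
            {
                single.BeginInvoke(this, EventArgs.Empty, ar =>
                {
                    EventHandler invoked = (EventHandler)ar.AsyncState;
                    try
                    {
                        // EndInvoke releases the resources held by the
                        // asynchronous call and rethrows any exception
                        // the handler threw.
                        invoked.EndInvoke(ar);
                    }
                    catch (Exception ex)
                    {
                        Trace.TraceError("Event handler failed: {0}", ex);
                    }
                }, single);
            }
        }
    }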
Postscript
Eventually, after realizing that no one was going to reply to my question, I turned to Microsoft's Social MSDN, where I immediately received an answer from Magnus (also a prominent world-class .NET expert) that basically agreed with the above arguments; see our correspondence here.
Related
I'm using a dynamic language (Clojure) to create CUDA contexts interactively during development, using JCuda. Often I will call an initializer that includes the call to jcuda.driver.JCudaDriver/cuInit. Is it safe to call cuInit multiple times? In addition, is there something like a destroy method for cuInit? I ask since it's possible for the error code CUDA_ERROR_DEINITIALIZED to be returned.
To answer the question: yes, it is probably safe to call cuInit multiple times. I haven't noticed any side effects from doing so.
Note, however, that cuInit only triggers one-time initialisation processes inside the API. It doesn't do anything with devices or contexts, and it definitely can't return CUDA_ERROR_DEINITIALIZED. Doing the steps you would do after calling cuInit in an application (i.e. creating a context) would have real implications: a new context is created each time you call it, and resource exhaustion will occur if contexts are not actively destroyed. There is no equivalent deinitialisation call for the API. I guess the intention is that once initialised, the API is expected to stay in that state until the application terminates.
My professor gave my class an assignment today based on object-oriented programming in Pygame. Basically he has said that the game we are to create must be devoid of a main game loop. While I believe it is possible to do this (and this question states that it is possible), I don't believe it is required for adherence to the object-oriented paradigm.
In a diagram that the professor gave, he showed the game initializing and as the objects were instantiated the control flow of the program would be distributed among the objects.
Basically I believe it would be possible to implement a game this way, but it would not be ideal, nor is it required for object-oriented adherence. Any thoughts?
EDIT: We are creating an asteroids clone, which I believe further complicates things due to the fact that it is a real time action game.
Turn-based games or anything event-driven would be the route to go. In other words, consider desktop GUI apps: they'll just tick over (wait) until an event is fired. The same could be done for a simple game. Take Checkers, for example. Looping each game cycle would be overkill; 90% of the time the game will be static. Using some form of events (the observer design pattern would be nice here) would provide a much better solution. You're using Pygame, so there may be support for this built in, though due to my limited use I cannot comment fully. Either way, the general principles are the same.
All in all, it's a pretty rubbish assignment if you ask me. If it's meant to teach you event-driven programming, a simple GUI application would be better. Even the simplest of games uses a basic game loop, which can adhere to OO principles.
Hmm. In the general case, I think this idea is probably hokum. SDL (upon which Pygame is implemented) provides information to the program via an event queue, and consuming that queue requires some sort of loop that repeatedly checks for events, processes them, and waits until the next event arrives.
There are some particular exceptions to this, though. You can poll the mouse and keyboard for their state without accessing the event queue. The problem is that the polling still requires something like a loop, so that it happens over and over again until the game exits.
You could use pygame.time to wait on a timer instead of waiting on the event queue, and then pass control to the game objects, which poll the mouse and keyboard as above; but you are still 'looping', just bound by a timer instead of the event queue.
Instead of focusing on eliminating the main loop, how about thinking instead about using it in an object-oriented way?
For instance, you could require a 'root' object, which actually has its own event loop, but instead of performing any action based on the incoming events, it calls a handler on several child objects. For instance, when the root object receives a pygame.event.MOUSEBUTTONDOWN event, it could search through its children for a 'rect' attribute and determine whether the event.pos attribute is inside that rect. If it is, it can call a hypothetical onClick method on that child object.
I think this might qualify as event-driven programming, which can still be object-oriented. You see this in Flash a lot.
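The dispatch idea itself is language-neutral; here is a minimal sketch of it in C# (hypothetical names, with the actual event source abstracted away): a root object forwards clicks to whichever child's rectangle contains the point.

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    public class ClickableChild
    {
        public Rectangle Rect;
        public void OnClick() { Console.WriteLine("clicked"); }
    }

    public class Root
    {
        private readonly List<ClickableChild> children = new List<ClickableChild>();

        public void Add(ClickableChild child) { children.Add(child); }

        // Called from the one real loop that drains the event queue.
        public void HandleMouseDown(Point pos)
        {
            foreach (ClickableChild child in children)
                if (child.Rect.Contains(pos))
                    child.OnClick();
        }
    }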
There's a difference between a main loop and a main class. You can still have a game class initialize all of your objects and then rely on inputs to move the game onward.
It's kinda hard to say exactly without knowing the exact parameters of your assignment; the devil is in the details.
You might look at how Python utilizes signals. A decent example I found is at: http://docs.python.org/library/signal.html
I have been reading over design-by-contract posts and examples, and there is something that I cannot seem to wrap my head around. In all of the examples I have seen, DbC is used on a trivial class testing its own state in the post-conditions (e.g. lots of Bank Accounts).
It seems to me that most of the time, when you call a method of a class, it does much of its work by delegating calls to its external dependencies. I understand how to check for this in a unit test for specific scenarios, using dependency inversion and mock objects that focus on the external behavior of the method, but how does this work with DbC and post-conditions?
My second question deals with understanding complex post-conditions. It seems to me that to write out a post-condition for many functions, you basically have to re-write the body of the function for your post-condition to know what the new state is going to be. What is the point of that?
I really do like the notion of DbC and I think it has great promise, particularly if I can figure out how to reproduce some failure state once I find a violated contract. Over the past couple of hours I have been reading some neat stuff with regard to automatic test generation in Eiffel. I am currently trying to improve my processes in C++ development, but I am open to learning something new if I can figure out how not to lose all of the ground I have made in my current projects. Thanks.
but how does this work with DbC and post-conditions?
Every function is basically one of these:
A sequence of statements
A conditional statement
A loop
The idea is that you should check any postconditions about the results of the function that go beyond the union of the postconditions of all the functions called.
that you basically have to re-write the body of the function for your post-condition to know what the new state is going to be
Think about it the other way round: what made you write the function in the first place? What were you pursuing? Can that be expressed in a postcondition that is simpler than the function body itself? A postcondition will typically use queries (what in C++ are const functions), while the body usually combines commands and queries (methods that modify the object and methods that only get information from it).
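A small sketch of that asymmetry in C# (hypothetical code, using Debug.Assert for run-time checking): the body is a loop, while the postcondition is two simple queries about the result.

    using System;
    using System.Diagnostics;

    public static class Numbers
    {
        public static int Max(int[] xs)
        {
            if (xs == null || xs.Length == 0)
                throw new ArgumentException("xs must be non-empty");

            int max = xs[0];
            for (int i = 1; i < xs.Length; i++)
                if (xs[i] > max)
                    max = xs[i];

            // Postcondition: stated with queries, simpler than the body.
            Debug.Assert(Array.TrueForAll(xs, x => x <= max), "result bounds every element");
            Debug.Assert(Array.IndexOf(xs, max) >= 0, "result occurs in the array");
            return max;
        }
    }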
In some cases, yes, you will find that postconditions really add little value. In those cases, writing a bunch of tests will typically be enough.
See also:
Bertrand Meyer, Contract Driven Development
Related questions 1, 2
Delegation at the contract level
most of the time when you call a method of a class, it does much more work delegating method calls to its external dependencies
As for this first question: the implementation of a function/method may call many other functions/methods, but if the designer of the code had a clear mind, this does not imply that the specification of the caller is the concatenation of the specifications of the callees. For a method that calls many others, the size of the specification can remain contained if the method accomplishes a precise and well-defined task. Which it should, if the whole system was well designed.
You are clearly asking your question from the point of view of run-time assertion checking. In this context, the above would perhaps be expressed as: "you don't need to re-check in the post-condition of the caller that all the callees have respected their respective contracts. These checks will already be made on each call. In the post-condition of the caller, check only the functionally visible result of the caller."
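A hedged C# sketch of that run-time view (hypothetical code): the caller asserts only its own functionally visible promise and does not re-verify the callee's contract.

    using System;
    using System.Diagnostics;

    public static class Stats
    {
        public static int Median(int[] xs)
        {
            if (xs == null || xs.Length == 0)
                throw new ArgumentException("xs must be non-empty");

            int[] copy = (int[])xs.Clone();
            Array.Sort(copy); // the callee's contract is checked on its side

            int result = copy[copy.Length / 2];

            // The caller's own postcondition: the median is one of the inputs.
            Debug.Assert(Array.IndexOf(xs, result) >= 0);
            return result;
        }
    }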
Understanding complex post-conditions
You may find this "ACSL by example" document interesting (although probably different from what you're used to). It contains many examples of formal contracts for C functions. The language of the contracts is intended for static verification instead of run-time checking, with all the advantages and the drawbacks that it entails. They are a little more sophisticated than the "Bank Accounts" that you mention — these functions implement real algorithms, although simple ones. The document keeps the contracts short and readable by introducing well-thought-out auxiliary predicates (which would be called queries in Eiffel, as Daniel points out in his answer).
I don't use this pattern; maybe there are places where it would have been appropriate and I used something else instead. Have you used it in your daily coding? Feel free to give samples, in your language of choice, along with your explanation.
Callbacks aren't really a "pattern" - more like a building block. A number of the gang of four design patterns use virtual methods in a callback-like way. Justin Niessner has already mentioned Observer.
Callbacks are much older than OOP (and probably older than 3GLs and even assembler). Another old idea is the parameter block, the C interpretation being a struct full of related members to be passed to a function so that the function doesn't need a huge parameter list.
OOP classes build upon the parameter block (and add a philosophy to it). The class instance itself is a parameter block passed by reference to its methods. The virtual table is a dispatch-handling parameter block. Every virtual method has a callback pointer in the dispatch-handling parameter block. A pure virtual method reserves space for the callback pointer in the parameter block, and promises to provide the actual pointer later.
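To make the analogy concrete, here is a small C# sketch (hypothetical names) of a parameter block that carries a callback slot, the manual ancestor of a virtual method:

    using System;

    public delegate void Notify(string message);

    // The parameter block: related members bundled together, one of
    // which is a callback pointer the callee will invoke.
    public struct JobRequest
    {
        public string Name;
        public int Priority;
        public Notify OnDone;
    }

    public static class JobRunner
    {
        public static void Run(JobRequest request)
        {
            // ... perform the work described by the block ...
            if (request.OnDone != null)
                request.OnDone("Finished " + request.Name);
        }
    }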
Since the class is the building block for object oriented design patterns, and parameter blocks and callbacks are the building blocks of classes - well, you could claim that all OOP design patterns are built from these ideas.
I'd like to be able to say "parameter blocks and callbacks, plus style rules guiding their use, inspired object orientation" but as appealing as it sounds, I don't know whether it's true.
I use callbacks pretty much every day in the following scenarios:
Events: When the user clicks their mouse on a control, presses a key or otherwise interacts with the UI in a way I need to handle, I subscribe to the delegate that the control publishes for the event. I can then handle it by updating the UI, cancelling the event in certain circumstances or otherwise taking some special action.
Multithreaded Programming: When programming a GUI, it's important to keep the UI responsive and to indicate the progress of a long-running background task to the user. To do this, I kick off the task in a separate thread and then publish delegates (events in the .NET world) that give my UI the opportunity to notify the user about the progress being made.
Lambda functions: In .NET, lambda functions are a form of delegate, one that lets me interact with another piece of code's operation at a later point in time. LINQ is a great example of this: I can create a small matching function and then supply it to a LINQ query. Later, when I execute my query against a collection, the matching function is called to determine whether each element is a match for the query. This means I don't have to build or worry about the query mechanism; I just have to tell it where to go to find out whether a comparison is a match. (A small sketch follows this list.)
These examples just scratch the surface, I'm sure. But they are useful examples of how I use callbacks every day.
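As a concrete instance of the lambda scenario, a minimal sketch (hypothetical values):

    using System;
    using System.Linq;

    class LinqCallbackDemo
    {
        static void Main()
        {
            int[] numbers = { 1, 4, 7, 10, 13 };

            // The lambda is the callback: Where stores it and calls it back
            // for each element when the query is finally enumerated.
            Func<int, bool> isMatch = n => n % 2 == 0;

            foreach (int n in numbers.Where(isMatch))
                Console.WriteLine(n); // prints 4 and 10
        }
    }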
The .NET platform uses callbacks heavily to implement the Observer pattern.
They also get used for handling asynchronous processes.
Objective-C and the Cocoa framework make a lot of use of it. An example would be NSURLConnection, which will inform an object given to it (called its delegate) when something happens on the connection:
NSURLConnection *foo = [[NSURLConnection alloc] initWithRequest:request delegate:self];
Note the passing of delegate there. The request proceeds in the background, and the instance will then send messages to the delegate (in this case, self), like:
connectionDidFinishLoading:
connection:didFailWithError:
You get the idea. I believe this is called delegation, a close relative of the observer pattern. It's all tied into Cocoa's event loop (as far as I know; I'm still learning) and is cheap 'n' easy asynchronous programming. A lot of frameworks in a variety of languages follow this approach.
.NET has delegates as well, which are similar. Think events.
I use it a great deal in javascript to let me know when an asynchronous call has finished, so the result can be processed.
But in JavaScript, and now in C# 3, I pass functions in as parameters, so that the processing can go on without explicitly setting up a delegate to be called.
This one has been puzzling me for some time now.
Let's imagine a class that represents a resource; in order to be able to use this resource, one needs to first call the 'Open' method on it, or an InvalidOperationException will be thrown.
Should my code also check whether someone tries to open an already open resource, or close an already closed one?
Should code prevent a logically invalid invocation even when no harm would be done?
I think that programming this way would help produce better code on the calling side, but I feel that I might be taking on too much responsibility and hurting reusability.
What do you guys think?
Edit:
I don't think this could be called defensive programming, because it won't let a possibly bad use slip by either: another InvalidOperationException will be thrown.
This is called defensive programming. That's a good programming practice because you ensure that your application doesn't crash on misbehaviour.
Requiring that some method be called before another method is not good programming practice. It adds a lot of complexity, which is better handled by the class itself.
This is called sequential coupling. The Wikipedia article says that whether it is bad practice depends on the context, but the code shouldn't crash when handled improperly. Sometimes it's necessary to throw an exception to make things clear.
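One common way to let the class absorb the sequencing, sketched in C# (hypothetical names): each operation ensures the required state internally instead of forcing callers to remember the order.

    public class LazyResource
    {
        private bool isOpen;

        public string ReadAll()
        {
            EnsureOpen(); // callers need not remember to call Open first
            return "...";
        }

        private void EnsureOpen()
        {
            if (!isOpen)
            {
                // ... acquire the underlying resource here ...
                isOpen = true;
            }
        }
    }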
This really depends on what the class actually does. In some cases failing silently is a good idea (e.g., you want your DVD player to continue working, not show an error message, if it tries to open the DVD tray that is already open), and in other cases you want as much information as possible (e.g., if an airplane tries to close a door that is reportedly already closed, then something is wrong and the pilot should be alerted).
In most cases throwing an error when a logically invalid action is performed is useful for developers, so implementing those exceptions depends on who will use the code. If it is used internally for one application, then it's not vital. But if it is used by many different projects or developers, then I would look into it.
If your example is really the case, then the Open functionality should probably be invoked by the class's constructor.
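For the original example, here is a hedged C# sketch of one reasonable policy: Open is strict, since a double open usually signals a logic error, while Close is idempotent, since the desired state is already achieved.

    using System;

    public sealed class Resource : IDisposable
    {
        private bool isOpen;

        public void Open()
        {
            if (isOpen)
                throw new InvalidOperationException("Resource is already open.");
            // ... acquire the underlying handle here ...
            isOpen = true;
        }

        public void Close()
        {
            if (!isOpen)
                return; // already in the requested state: succeed silently
            // ... release the underlying handle here ...
            isOpen = false;
        }

        public void Dispose()
        {
            Close();
        }
    }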
If you consider the C++ iostream library (which is very widely used and considered quite a good example), you can call any operation on a stream class whether it is open or not. The called function will simply return a failure indicator of some sort if the operation could not be performed. The functions must, of course, test the stream state in order to do this.
What you must not do is allow your programs to silently accept any old input as parameters. For example, this would be a broken implementation of strlen():
int strlen( const char * s )
{
    if ( s == 0 )
    {
        return 0; // bad: silently accepts a null pointer
    }
    else
    {
        int len = 0;
        while ( s[len] != '\0' )
            ++len;
        return len;
    }
}
as it fields bad inputs without causing a fuss; it should instead throw an exception or use an assert(), depending on your exact development philosophy.
There's no substitute for taste, talent and experience in figuring out exactly how many safety checks should be in your code for best cost/benefit ratio for your organization.
A good-quality API is expected to be fool-proof and to guide the user with the proper amount of warnings.
Sometimes, safety precautions may impair performance. Performance is one of the most counter-intuitive things in programming. Optimize with care, only when performance really matters.
If this is part of a public SDK that you're releasing to the wild, then the exposed API calls should have strong validation. It will help your 'users' (who are developers) and ensure you aren't stuck supporting usage you never intended to support.
Otherwise, I would not add such checks. I think they make the code harder to read, and these checks are rarely tested. In the past I would add a lot of code like this to make sure my code doesn't do the wrong thing. Now I write unit tests to verify my code does the right thing. The difference? I think tests are more maintainable, more readable, and they don't clutter your production code.
In the case of opening a file that is already open, it depends on knowing the effect of the request: will it reset the current read location, for example?
In the case of closing a file that is already closed, think of it as a request for the file to be put into a known state. The code doesn't have to do anything, but the desired state is achieved, so the code can return a success condition. This is not true if there is some sort of file buffering that needs to be taken care of, or perhaps an interlinked resource to coordinate, like a modem/serial port or a printer/spooler.
Step back and think of the problem in terms of the desired outcome including any side-effects.
We once put a 'logout' link on an app menu that was displayed regardless of your login status. Why? Because it took only a simple (and very short) method to handle returning you to the login screen, and it saved a large number of checks for tracking the login status just so the 'logout' menu item would be displayed only when you were logged in.
Logically invalid invocations should always be reported to the user in debug mode.
When compiled in release mode, your code should not throw any unneeded exceptions or do anything else that could endanger the whole application.
Personally, I prefer having some kind of logfile, and logging such logically invalid invocations surely does no harm (at least when performance is not important).
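A small C# sketch of that split (hypothetical names): Debug.Assert reports loudly in debug builds only, while Trace leaves a log entry in release builds as well.

    using System.Diagnostics;

    public class Door
    {
        private bool isOpen;

        public void Open()
        {
            if (isOpen)
            {
                // Fires only in DEBUG builds.
                Debug.Assert(false, "Open called on an already-open door");
                // Still leaves a trail via trace listeners in release builds.
                Trace.TraceWarning("Open called on an already-open door");
                return;
            }
            isOpen = true;
        }
    }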