What are some good use cases for the CALLBACK pattern/idiom? - language-agnostic

I don't use this pattern, maybe there are some places where it would have been appropriate and I used something else. Have you used it in your daily coding? Feel free to give samples, in your language of choice, along with your explanation.

Callbacks aren't really a "pattern" - more like a building block. A number of the Gang of Four design patterns use virtual methods in a callback-like way. Justin Niessner has already mentioned Observer.
Callbacks are much older than OOP (and probably older than 3GLs and even assembler). Another old idea is the parameter block - the C interpretation being a struct full of related members to be passed to a function so that function doesn't need a huge parameter list.
OOP classes build upon the parameter block (and add a philosophy to it). The class instance itself is a parameter block passed by reference to its methods. The virtual table is a dispatch-handling parameter block. Every virtual method has a callback pointer in the dispatch-handling parameter block. A pure virtual method reserves space for the callback pointer in the parameter block, and promises to provide the actual pointer later.
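To make that concrete, here is a tiny C# sketch (names invented for illustration) of a hand-rolled "parameter block with a callback slot", which is essentially what a vtable entry is:
using System;

struct ShapeOps
{
    public double Width, Height;        // the plain parameter block
    public Func<ShapeOps, double> Area; // a callback slot, like a vtable entry
}

class Demo
{
    static void Main()
    {
        var rect = new ShapeOps
        {
            Width = 3,
            Height = 4,
            Area = s => s.Width * s.Height // supply the promised callback
        };
        // Dispatch through the block, as a virtual call dispatches
        // through the vtable.
        Console.WriteLine(rect.Area(rect)); // 12
    }
}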
Since the class is the building block for object oriented design patterns, and parameter blocks and callbacks are the building blocks of classes - well, you could claim that all OOP design patterns are built from these ideas.
I'd like to be able to say "parameter blocks and callbacks, plus style rules guiding their use, inspired object orientation" but as appealing as it sounds, I don't know whether it's true.

I use callbacks pretty much every day in the following scenarios:
Events: When the user clicks their mouse on a control, presses a key or otherwise interacts with the UI in a way I need to handle, I subscribe to the delegate that the control publishes for the event. I can then handle it by updating the UI, cancelling the event in certain circumstances or otherwise taking some special action.
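For instance, a minimal WinForms sketch (the form and dialog text are invented for illustration) of subscribing to an event and cancelling it in certain circumstances:
using System;
using System.Windows.Forms;

class Demo
{
    [STAThread]
    static void Main()
    {
        var form = new Form { Text = "Callback demo" };

        // Subscribe a callback (an event handler) to the control's event.
        form.FormClosing += (sender, e) =>
        {
            // Cancel the event under certain conditions, as described above.
            var answer = MessageBox.Show("Really quit?", "Confirm",
                                         MessageBoxButtons.YesNo);
            if (answer == DialogResult.No)
                e.Cancel = true;
        };

        Application.Run(form);
    }
}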
Multithreaded Programming: When programming a GUI, it's important to keep the UI responsive and to indicate the progress of a long-running background task to the user. To do this, I kick off the task in a separate thread and then publish delegates (events in the .NET world) that give my UI the opportunity to notify the user about the progress being made.
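A sketch of that shape (the class and event names are my own, and a real GUI handler would still need to marshal back to the UI thread, e.g. via Control.Invoke):
using System;
using System.Threading;

class LongRunningJob
{
    // The UI subscribes to this delegate to receive progress callbacks.
    public event Action<int> ProgressChanged;

    public void Start()
    {
        var worker = new Thread(() =>
        {
            for (int percent = 0; percent <= 100; percent += 10)
            {
                Thread.Sleep(100);                // stand-in for real work
                ProgressChanged?.Invoke(percent); // call back to subscribers
            }
        });
        worker.IsBackground = true;
        worker.Start();
    }
}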
Lambda functions: In .NET, lambda functions are a form of delegate, one that lets another piece of code call back into mine at a later point in time. LINQ is a great example of this. I can create a small matching function and then supply it to a LINQ query. Later, when I execute my query against a collection, the matching function is called for each element to determine whether it matches the query. This means I don't have to build or worry about the query mechanism itself; I just tell it how to decide whether something is a match.
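For example (a small, self-contained sketch):
using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        var numbers = new[] { 3, 8, 12, 5, 20 };

        // The lambda is the callback: Where invokes it for each element
        // when the query executes, to decide whether the element matches.
        Func<int, bool> isMatch = n => n > 10;

        foreach (var n in numbers.Where(isMatch))
            Console.WriteLine(n); // prints 12, then 20
    }
}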
These examples just scratch the surface, I'm sure. But they are useful examples of how I use callbacks every day.

The .NET platform uses callbacks heavily to implement the Observer pattern.
They also get used for handling asynchronous processes.
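For example, the classic .NET asynchronous pattern takes an AsyncCallback that the runtime invokes when the operation completes (the file name here is illustrative):
using System;
using System.IO;

class Demo
{
    static void Main()
    {
        var stream = new FileStream("data.bin", FileMode.Open, FileAccess.Read,
                                    FileShare.Read, 4096, useAsync: true);
        var buffer = new byte[4096];

        // The runtime calls this delegate back when the read completes.
        stream.BeginRead(buffer, 0, buffer.Length, ar =>
        {
            int bytesRead = stream.EndRead(ar);
            Console.WriteLine("Read {0} bytes", bytesRead);
            stream.Dispose();
        }, null);

        Console.ReadLine(); // keep the process alive for the demo
    }
}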

Objective-C and the Cocoa framework make a lot of use of it. An example would be NSURLConnection, which will inform an object given to it (called its delegate) when something happens on the connection:
NSURLConnection *foo = [[NSURLConnection alloc] initWithRequest:request delegate:self];
Note the delegate being passed there. The request proceeds in the background, and the instance will then send messages to the delegate (in this case, self), like:
connectionDidFinishLoading:
connection:didFailWithError:
You get the idea. I believe this is called the "observer pattern". It's all tied in to Cocoa's event loop (as far as I know, I'm still learning) and is cheap 'n easy asynchronous programming. A lot of frameworks in a variety of languages follow this approach.
.NET has delegates as well, which are similar. Think events.

I use it a great deal in JavaScript to let me know when an asynchronous call has finished, so the result can be processed.
But in JavaScript, and now in C# 3, I pass functions in as parameters, so that the processing can go on without explicitly setting up a delegate type to be called.
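A C# sketch of that style (the helper method and URL are invented for illustration):
using System;
using System.Net;
using System.Threading;

class Demo
{
    // The processing step is passed in as a function parameter, so the
    // caller decides what happens when the asynchronous call finishes.
    static void FetchAsync(string url, Action<string> onDone)
    {
        var client = new WebClient();
        client.DownloadStringCompleted += (s, e) => onDone(e.Result);
        client.DownloadStringAsync(new Uri(url));
    }

    static void Main()
    {
        FetchAsync("https://example.com", body =>
            Console.WriteLine("Got {0} characters", body.Length));
        Thread.Sleep(5000); // crude wait so the demo doesn't exit early
    }
}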

Related

Replicate http.HandleFunc()'s "coding style" to create our own methods/functions

There is an answered question that will help you understand exactly what I want to say:
How does the function passed to http.HandleFunc get access to http.ResponseWriter and http.Request?
There are many built-in Go functions where the function parameters get assigned this way. I want to use that coding style in my daily coding life.
I want to write a similar function/method which will get its parameter values from somewhere, just like http.HandleFunc's w and r.
func (s SchoolStruct) GetSchoolDetails(name string){
// here the parameter "name" should get assigned exactly like http.HandleFunc()'s "w" and "r".
}
What http does is register a callback and use it when the time comes. You don't have to pass the arguments it takes, as the server's implementation supplies these arguments with the correct state. If you want to copy this approach, first you have to ask:
Is there some kind of generic abstraction that computes these parameters? Is the function I write just reacting to something? Does this function have any side effects? Does it return a value back to the system?
This approach is very good when you are modifying an existing system, extending its behavior with independent units; so to speak, integrating into a robust API.
You may be correct that this is a style of doing things, but you cannot use this style for everything. It's just too specific, and good at a certain group of tasks.
As @mkopriva pointed out, declaring rules and requirements that your logic should satisfy is the known way to execute this style in Go. You have to realize that your logic, encapsulated behind a function pointer or interface, has to be passed to and controlled by some other code that you call indirectly.
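To make the registration idea concrete in a language-agnostic way, here is a small C# sketch (every name in it is hypothetical) of a dispatcher that, like http.HandleFunc, accepts callbacks up front and later invokes them with arguments it computes itself:
using System;
using System.Collections.Generic;

class SchoolDirectory
{
    // Registered callbacks, keyed by name (like http's handler registry).
    readonly Dictionary<string, Action<string>> handlers =
        new Dictionary<string, Action<string>>();

    // The analogue of http.HandleFunc: register now, get called later.
    public void HandleSchool(string key, Action<string> handler)
    {
        handlers[key] = handler;
    }

    // The "server" side: it decides when to call back, and it supplies
    // the argument (like http supplying w and r) with the correct state.
    public void Dispatch(string key)
    {
        Action<string> handler;
        if (handlers.TryGetValue(key, out handler))
            handler(LookUpName(key));
    }

    static string LookUpName(string key)
    {
        return "School-" + key; // stand-in for real state computation
    }
}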
I cannot possibly imagine going to such lengths when all components of the system are under your control and the system has only one piece of logic to run.

Game programming without a main loop

My professor gave my class an assignment today based on object-oriented programming in Pygame. Basically, he has said that the game we are to create will be devoid of a main game loop. While I believe it is possible to do this (and this question has stated that it is possible), I don't believe that this is required for adherence to the object-oriented paradigm.
In a diagram that the professor gave, he showed the game initializing and as the objects were instantiated the control flow of the program would be distributed among the objects.
Basically I believe it would be possible to implement a game this way, but it would not be an ideal way nor is it required for Object Oriented adherence. Any thoughts?
EDIT: We are creating an asteroids clone, which I believe further complicates things due to the fact that it is a real time action game.
Turn-based games or anything event-driven would be the route to go. In other words, look at desktop GUI apps: they'll just tick over (wait) until an event is fired. The same could be done for a simple game. Take Checkers, for example. Looping every game cycle would be overkill; 90% of the time the game will be static. Using some form of events (the observer design pattern would be nice here) would provide a much better solution. You're using Pygame, so there may be support for this built in, though due to my limited use of it I cannot comment fully. Either way, the general principles are the same.
All in all it's a pretty rubbish assignment if you ask me. If it's meant to teach you event-driven programming, a simple GUI application would be better. Even the simplest of games use a basic game loop, which can still adhere to OO principles.
Hmm. In the general case, I think this idea is probably hokum. SDL (upon which Pygame is implemented) provides information to the program via an event queue, and consuming that queue requires some sort of loop that repeatedly checks for events, processes them, and waits until the next event arrives.
There are some particular exceptions to this, though. You can poll the mouse and keyboard for their state without accessing the event queue. The problem with that is it still requires something like a loop, so that it happens over and over again until the game exits.
You could use pygame.time to wait on a timer instead of waiting on the event queue, and then pass control to the game objects, which poll the mouse and keyboard as above; but you are still "looping", just bound by a timer instead of the event queue.
Instead of focusing on eliminating the main loop, think instead about how to use it in an object-oriented way.
For instance, you could require a "root" object, which actually has its own event loop, but instead of performing any action itself based on the incoming events, it calls a handler on several child objects. For instance, when the root object receives a pygame.MOUSEBUTTONDOWN event, it could search through its children for a "rect" attribute and determine if the event.pos attribute is inside that rect. If it is, it can call a hypothetical onClick method on that child object.
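Since the dispatch pattern itself is language-agnostic, here is the same idea sketched in C# (all types and names hypothetical):
using System.Collections.Generic;
using System.Drawing;

interface IClickable
{
    Rectangle Rect { get; }
    void OnClick(Point pos);
}

class Root
{
    readonly List<IClickable> children = new List<IClickable>();

    public void Add(IClickable child) { children.Add(child); }

    // Called from the root's own event loop for each mouse-down event;
    // control is handed off to whichever child contains the click.
    public void DispatchClick(Point pos)
    {
        foreach (var child in children)
            if (child.Rect.Contains(pos))
                child.OnClick(pos); // callback into the child object
    }
}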
I think this might qualify as event-driven programming, which can still be object-oriented. You see this in Flash a lot.
There's a difference between a main loop and a main class. You can still have a game class initialize all of your objects, and then rely on inputs to move the game onward.
Kinda hard to say exactly without knowing the exact parameters of your assignment; the devil is in the details.
You might look at how python utilizes signals. A decent example I found is at: http://docs.python.org/library/signal.html

Strategy for handling parameter validation in class library

I've got a rather big class library that contains a lot of code.
I am looking at how to optimize the performance of some of the code, and for some rather simple utility methods I've found that the parameter validation occupies a rather large portion of the runtime for some core methods.
Let me give a typical example:
A.MethodA1 runs a loop, iterating over a collection, calling B.MethodB1 for each element
B.MethodB1 processes the element and returns the result; it's a rather basic calculation, but since it is used in many places, it has been put into its own method instead of being copied and pasted where needed
A.MethodA1 calls C.MethodC1 with the results of B.MethodB1, and puts the result into a list that is returned at the end of the loop
In the case I've found now, B.MethodB1 does rudimentary parameter validation. Since the method calls other internal methods, I'd like to avoid getting NullReferenceExceptions several layers deep in the code and rather fail early; hence B.MethodB1 validates the parameters, checking for null and doing some basic range checks on another parameter.
However, in this particular call scenario, it is impossible (due to other program logic) for these parameters to ever have the wrong values. If they did then, from the program's standpoint, B.MethodB1 would never be called at all for those values; A.MethodA1 would fail before the call to B.MethodB1.
So I was considering removing the parameter validation in B.MethodB1, since it occupies roughly 65% of the method runtime (and this is part of some heavily used code.)
However, B.MethodB1 is a public method, and can thus be called from the program, in which case I want the parameter validation.
So how would you solve this dilemma?
Keep the parameter validation, and take the performance hit
Remove the parameter validation, and have potentially fail-late problems in the method
Split the method into two, one internal that doesn't have parameter validation, called by the "safe" path, and one public that has the parameter validation + a call to the internal version.
The latter would give me the benefit of having no parameter validation on the internal path, while still exposing a public entry point that does validate its parameters, but for some reason it doesn't sit right with me.
Opinions?
I would go with option 3. I tend to use assertions for private and internal methods and do all the validation in public methods.
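A minimal sketch of option 3 (the method bodies are placeholders):
using System;
using System.Diagnostics;

public static class B
{
    // Public entry point: full validation for arbitrary external callers.
    public static int MethodB1(int[] values, int index)
    {
        if (values == null)
            throw new ArgumentNullException("values");
        if (index < 0 || index >= values.Length)
            throw new ArgumentOutOfRangeException("index");
        return MethodB1Unchecked(values, index);
    }

    // Internal fast path for trusted callers such as A.MethodA1.
    // Debug.Assert compiles away in release builds, so it costs nothing.
    internal static int MethodB1Unchecked(int[] values, int index)
    {
        Debug.Assert(values != null && index >= 0 && index < values.Length);
        return values[index] * 2; // stand-in for the real calculation
    }
}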
By the way, is the performance hit really that big?
That's an interesting question.
Hmmm, makes me think... "code contracts". It would seem like it might be technically possible to have certain code contracts proven statically (at compile time) to be fulfilled. If that were the case, and you had such a compile-time validation option, you could state these contracts without ever having to check the conditions at runtime.
It would require that the client code itself be validated against the code contracts.
And, of course, it would inevitably be highly dependent on the type of conditions you'd want to write, and it would probably only be feasible to prove these contracts up to a certain point (how far up the possible call graph would you go?). Beyond that point the validator might have to beg off and insist that you place a runtime check (or maybe a validation-warning suppression?).
All just idle speculation. Does make me wonder a bit more about C# 4.0 code contracts. I wonder if these have support for static analysis. Have you checked them out? I've been meaning to, but learning F# is having to take priority at the moment!
Update:
Having read up a little on it, it appears that C# 4.0 does indeed have a "static checker" as well as a binary rewriter (which takes care of altering the output binary so that pre- and post-condition checks end up in the appropriate locations).
What's not clear from my extremely quick read is whether you can opt out of the binary rewriting. What I'm thinking is that what you'd really be looking for is to use the code contracts, keep the metadata (or code) for the contracts within the various assemblies, but run only the static checker for at least a selected subset of contracts, so that in theory you get proven safety without any runtime hit.
Here's a link to an article on the code contracts
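For concreteness, here is roughly what such a contract looks like with the System.Diagnostics.Contracts API (the method itself is invented for illustration):
using System.Diagnostics.Contracts;

public static class Checked
{
    public static int DoubleAt(int[] values, int index)
    {
        // Pre-conditions: candidates for static proof by the checker,
        // or for run-time checks inserted by the binary rewriter.
        Contract.Requires(values != null);
        Contract.Requires(index >= 0 && index < values.Length);
        // Post-condition on the result.
        Contract.Ensures(Contract.Result<int>() == values[index] * 2);
        return values[index] * 2;
    }
}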

Understanding complex post-conditions in DbC

I have been reading over design-by-contract posts and examples, and there is something that I cannot seem to wrap my head around. In all of the examples I have seen, DbC is used on a trivial class testing its own state in the post-conditions (e.g. lots of Bank Accounts).
It seems to me that most of the time when you call a method of a class, it does much more work delegating method calls to its external dependencies. I understand how to check for this in a unit test with specific scenarios, using dependency inversion and mock objects that focus on the external behavior of the method, but how does this work with DbC and post-conditions?
My second question deals with understanding complex post-conditions. It seems to me that to write out a post-condition for many functions, you basically have to re-write the body of the function for your post-condition to know what the new state is going to be. What is the point of that?
I really do like the notion of DbC and I think that it has great promise, particularly if I can figure out how to reproduce some failure state once I find a violated contract. Over the past couple of hours I have been reading some neat stuff regarding automatic test generation in Eiffel. I am currently trying to improve my processes in C++ development, but I am open to learning something new if I can figure out how to not lose all of the ground I have made in my current projects. Thanks.
"but how does this work with DbC and post-conditions?"
Every function is basically one of these:
A sequence of statements
A conditional statement
A loop
The idea is that you should check any postconditions about the results of the function that go beyond the union of the postconditions of all the functions called.
"that you basically have to re-write the body of the function for your post-condition to know what the new state is going to be"
Think about it the other way round. What made you write the function in the first place? What were you pursuing? Can that be expressed in a postcondition which is more simple than the function body itself? A postcondition will typically use queries (what in C++ are const functions), while the body usually combines commands and queries (methods that modify the object and methods which only get information from it).
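As a concrete illustration (my example, not from the original answer): the post-condition of a sort routine is simply "the output is sorted", a query that is far simpler than the sorting body itself.
using System;
using System.Diagnostics;

static class Sorting
{
    public static void Sort(int[] a)
    {
        Array.Sort(a);             // the body: commands and queries combined
        Debug.Assert(IsSorted(a)); // the post-condition: one pure query
    }

    // A query in the DbC sense: it inspects state and changes nothing.
    static bool IsSorted(int[] a)
    {
        for (int i = 1; i < a.Length; i++)
            if (a[i - 1] > a[i])
                return false;
        return true;
    }
}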
In some cases, yes, you will find out that you can really add little value with postconditions. In these cases, writing a bunch of tests will typically be enough.
See also:
Bertrand Meyer, Contract Driven Development
Related questions 1, 2
Delegation at the contract level
"most of the time when you call a method of a class, it does much more work delegating method calls to its external dependencies"
As for the first question: the implementation of a function/method may call many other functions/methods, but if the designer of the code had a clear mind, this does not imply that the specification of the caller is the concatenation of the specifications of the callees. For a method that calls many others, the size of the specification can remain contained if the method accomplishes a precise and well-defined task. Which it should, if the whole system was well designed.
You are clearly asking your question from the point of view of run-time assertion checking. In this context, the above would perhaps be expressed as "you don't need to re-check in the post-condition of the caller that all the callees have respected their respective contracts. These checks will already be made on each call. In the post-condition of the caller, only check the functionally visible result of the caller."
Understanding complex post-conditions
You may find this "ACSL by example" document interesting (although probably different from what you're used to). It contains many examples of formal contracts for C functions. The language of the contracts is intended for static verification instead of run-time checking, with all the advantages and the drawbacks that it entails. They are a little more sophisticated than the "Bank Accounts" that you mention — these functions implement real algorithms, although simple ones. The document keeps the contracts short and readable by introducing well-thought-out auxiliary predicates (which would be called queries in Eiffel, as Daniel points out in his answer).

Doesn't Passing in Parameters that Should Be Known Implicitly Violate Encapsulation?

I often hear around here, from test-driven development people, that having a function get large amounts of information implicitly is a bad thing. I can see where this would be bad from a testing perspective, but isn't it sometimes necessary from an encapsulation perspective? The following question comes to mind:
Is using Random and OrderBy a good shuffle algorithm?
Basically, someone wanted to create a function in C# to randomly shuffle an array. Several people told him that the random number generator should be passed in as a parameter. This seems like an egregious violation of encapsulation to me, even if it does make testing easier. Isn't the fact that an array-shuffling algorithm requires any state at all, other than the array it's shuffling, an implementation detail that the caller should not have to care about? Wouldn't the correct way to get this information be implicit, possibly from a thread-local singleton?
I don't think it breaks encapsulation. The only state in the array is the data itself - and "a source of randomness" is essentially a service. Why should an array naturally have an associated source of randomness? Why should that have to be a singleton? What about different situations which have different requirements - e.g. speed vs cryptographically secure randomness? There's a reason why java.util.Random has a SecureRandom subclass :) Perhaps it doesn't matter whether the shuffle's results are predictable with a lot of effort and observation - or perhaps it does. That will depend on the context, and that's information that the shuffle algorithm shouldn't care about.
Once you start thinking of it as a service, it makes sense that it's passed in as a dependency.
Yes, you could get it from a thread-local singleton (and indeed I'm going to blog about exactly that in the next few days) but I would generally code it so that the caller gets to make that decision.
One benefit of the "randomness as a service" concept is that it makes for repeatability - if you've got a test which fails, you can pass in a Random with a specific seed and know you'll always get the same results, which makes debugging easier.
Of course, there's always the option of making the Random optional - use a thread-local singleton as a default if the caller doesn't provide their own.
Yes, that does break encapsulation. As with most software design decisions, this is a trade-off between two opposing forces. If you encapsulate the RNG then you make it difficult to change for a unit test. If you make it a parameter then you make it easy for a user to change the RNG (and potentially get it wrong).
My personal preference is to make it easy to test, then provide a default implementation (a default constructor that creates its own RNG, in this particular case) and good documentation for the end user. Adding a method with the signature
public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source)
that creates a Random using the current system time as its seed would take care of most normal use cases of this method. The original method
public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source, Random rng)
could be used for testing (pass in a Random object with a known seed) and also in those rare cases where a user decides they need a cryptographically secure RNG. The one-parameter implementation should call this method.
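A sketch of how the two overloads might fit together (using Fisher-Yates for illustration):
using System;
using System.Collections.Generic;
using System.Linq;

public static class ShuffleExtensions
{
    // Convenience overload: new Random() seeds from the system clock,
    // then defers to the testable two-parameter overload below.
    public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source)
    {
        return source.Shuffle(new Random());
    }

    public static IEnumerable<T> Shuffle<T>(this IEnumerable<T> source, Random rng)
    {
        // Fisher-Yates shuffle over a buffered copy of the source.
        var buffer = source.ToList();
        for (int i = buffer.Count - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1);
            T tmp = buffer[i];
            buffer[i] = buffer[j];
            buffer[j] = tmp;
        }
        foreach (T item in buffer)
            yield return item;
    }
}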
I don't think this violates encapsulation.
Your Example
I would say that being able to provide an RNG is a feature of the class. I would obviously provide a method that doesn't require it, but I can see times when it may be useful to be able to duplicate the randomization.
What if the array shuffler were part of a game that used the RNG for level generation? If a user wanted to save the level and play it again later, it might be more efficient to store the RNG seed.
General Case
Simple classes that have a single task like this typically don't need to worry about divulging their inner workings. What they encapsulate is the logic of the task, not the elements required by that logic.