What are the best practices for Design by Contract programming?
At college I learned the design by contract paradigm
(in an OO environment).
We were taught three ways to tackle the problem:
1) Total Programming: covers all possible exceptional cases in its effect (cf. mathematics).
2) Nominal Programming: only 'promises' the right effect when the preconditions are met (otherwise the effect is undefined).
3) Defensive Programming: uses exceptions to signal illegal invocations of methods.
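For concreteness, here is roughly what the three styles look like for a division operation (a small Java sketch of my own, names purely illustrative):

// 1) Total: every input has a defined effect, even a zero divisor.
static double divideTotal(double dividend, double divisor) {
    if (divisor == 0) {
        return Double.NaN;   // a conventional "no result" value
    }
    return dividend / divisor;
}

// 2) Nominal: correct only when the precondition holds.
//    Precondition: divisor != 0 (otherwise the effect is undefined).
static double divideNominal(double dividend, double divisor) {
    return dividend / divisor;   // no check; the caller must respect the contract
}

// 3) Defensive: an illegal invocation is signalled with an exception.
static double divideDefensive(double dividend, double divisor) {
    if (divisor == 0) {
        throw new IllegalArgumentException("divisor must differ from zero");
    }
    return dividend / divisor;
}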
Now, we have focused in different OO scenarios on the correct use in each situation, but we haven't learned WHEN to use WHICH...
(Mostly the tactics were enforced by the exercise.)
Now I think it's very strange that I never asked my teacher (but then again, during lectures, no one did).
Personally, I never use nominal now, and I tend to replace preconditions with exceptions (so I would rather use `throws IllegalDivisionByZero` than state 'precondition: divisor should differ from zero'), and I only program totally where it makes sense (so I wouldn't return a conventional value on division by zero). But this method is just based on personal findings and likes.
So I am asking you guys:
Are there any best practices?
I didn't know about this division, and it doesn't really reflect my experience.
Total Programming is virtually impossible. You cannot guarantee that you cover all exceptional cases. So basically you should limit your scope and reject the situations that are out of scope (that's the role of the preconditions).
Nominal Programming is not desired. Undefined effect should be banned.
Defensive Programming is a must. You should always signal illegal invocations of methods.
I'm in favour of implementing the complete set of Design-by-Contract elements, which is, in my opinion, a practical and affordable version of Total Programming:
Preconditions (a kind of defensive programming) to signal illegal invocations of the method. Try to limit your scope as much as you can so that you can simplify the code. Avoid a complex implementation if possible by narrowing the scope a little.
Postconditions to raise an error if the desired effect is not obtained. Even if it is your fault, you should notify the caller that you missed the goal.
Invariants to check that the object consistency is preserved.
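A minimal sketch of those three elements in plain Java (assert-based; the Account class is just an example I made up):

class Account {
    private long balanceCents;   // invariant: balanceCents >= 0

    void withdraw(long amountCents) {
        // Precondition: reject illegal invocations up front.
        if (amountCents <= 0 || amountCents > balanceCents) {
            throw new IllegalArgumentException("amount must be positive and covered by the balance");
        }
        long before = balanceCents;

        balanceCents -= amountCents;

        // Postcondition: tell the caller if the desired effect was not obtained.
        assert balanceCents == before - amountCents : "postcondition violated";
        // Invariant: object consistency must be preserved.
        assert balanceCents >= 0 : "invariant violated";
    }
}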
It all boils down to what responsibilities you wish to assign to the client and the implementer of the contract.
In defensive programming you force the implementer to check for error conditions, which can be costly or even impossible in some cases. Imagine a contract like the one specified by binarySearch, for example: the input array has to be sorted. You can't detect that while running the algorithm; you would have to do a separate check for it, which would bump the execution time up by an order of magnitude. To back my opinion up, look at the signature and Javadoc of that method: the sortedness requirement is documented, not enforced.
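A sketch of that point (my own simplified version; java.util.Arrays.binarySearch documents the same contract):

// Contract (documented, not enforced): 'a' must be sorted in ascending order;
// if it is not, the result is undefined. A defensive version would need an O(n)
// sortedness scan that dominates the O(log n) search itself.
static int binarySearch(int[] a, int key) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = (lo + hi) >>> 1;       // unsigned shift avoids overflow
        if (a[mid] < key) {
            lo = mid + 1;
        } else if (a[mid] > key) {
            hi = mid - 1;
        } else {
            return mid;
        }
    }
    return -(lo + 1);                    // encodes the insertion point, like the JDK
}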
Another point is that people and frameworks now tend to implement exception-translation mechanisms, used mainly to translate checked exceptions (defensive style) into runtime exceptions that simply surface if something goes wrong. This way the client and the implementer of the contract have less to worry about when dealing with each other.
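For example, a checked, implementation-level exception can be wrapped into an unchecked one at the boundary (a sketch; UncheckedIOException is a real JDK class, the repository names are made up):

import java.io.IOException;
import java.io.UncheckedIOException;

class UserRepository {
    String loadUser(String id) {
        try {
            return readFromDisk(id);            // declares a checked IOException
        } catch (IOException e) {
            throw new UncheckedIOException(e);  // translate into a runtime exception
        }
    }

    private String readFromDisk(String id) throws IOException {
        throw new IOException("not found: " + id);  // imagine real file access here
    }
}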
Again, this is my personal opinion, backed only by what limited experience I have; I'd love to hear more about this subject.
...but we haven't learned WHEN to use WHICH...
I think the best practice is to be "as defensive as possible". Do your runtime checks if you can. As #MahdeTo has mentioned sometimes that's impossible for performance reasons; in such cases fall back on undefined or unsatisfactory behavior.
That said, be explicit in your documentation as to what has runtime checks and what does not.
Like much of computing "it depends" is probably the best answer.
Design by contract/programming by contract can help development greatly by explicitly documenting the conditions for a function. Just the documentation can be a help without even making it into (compiled) code.
Where feasible I recommend defensive - checking every condition. BUT only for development and debug builds. In this way most invalid assumptions are caught when the conditions are broken. A good build system would allow you to turn the different condition types on and off at a module or file level - as well as globally.
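Java's built-in assertions already behave this way, for what it's worth: they cost nothing unless enabled, and they can be switched on per package or class from the command line (small sketch, class name made up):

// Enable per package or class, e.g.:
//   java -ea:com.example.geometry... -da:com.example.ui... Main
class Geometry {
    static double invSqrt(double x) {
        assert x > 0 : "x must be positive, got " + x;  // evaluated only when -ea is active
        return 1.0 / Math.sqrt(x);
    }
}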
The actions taken in release versions of software then depend upon the system and how the condition is triggered (the usual distinction between external and internal interfaces). The release version could be 'total programming': all conditions give a defined result (which can include errors or NaN).
To me "nominal programming" is a dead end in the real world. You assume that if you passed the right values in (which of course you did) then the value you receive is good. If your assumption was wrong - you break down.
I think that test-driven programming is the answer. Before actually implementing the module, you first create a unit test (call it a contract). Then you gradually implement the functionality and make sure the contract still holds as you go. Usually I start with plain stubs and mock-ups, then gradually fill out the rest, replacing the stubs with real code. Keep improving and making the tests stronger. At the end you have a robust implementation of the module, plus a fantastic test bed: a coded implementation of the contract. Later on, if someone modifies the module, you first see whether it still fits the test bed. If it doesn't, the contract is broken: reject the changes. Or the contract is outdated: fix the unit tests. And so on... the boring cycle of software development :)
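For instance, a contract such as 'division by zero is rejected' can be pinned down in a test before the implementation exists (a JUnit 5 style sketch; the Calculator class is hypothetical):

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class CalculatorContractTest {
    @Test
    void divisionWorksForOrdinaryInput() {
        assertEquals(3, new Calculator().divide(6, 2));  // the "promised effect"
    }

    @Test
    void divisionByZeroIsRejected() {
        // The contract: illegal invocations are signalled, never silently handled.
        assertThrows(ArithmeticException.class, () -> new Calculator().divide(1, 0));
    }
}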
Native support for differentiable programming has been added to Swift for the Swift for TensorFlow project. Julia has similar support with Zygote.
What exactly is differentiable programming?
what does it enable? Wikipedia says
the programs can be differentiated throughout
but what does that mean?
how would one use it (e.g. a simple example)?
and how does it relate to automatic differentiation (the two seem conflated a lot of the time)?
I like to think about this question in terms of user-facing features (differentiable programming) vs implementation details (automatic differentiation).
From a user's perspective:
"Differentiable programming" is APIs for differentiation. An example is a def gradient(f) higher-order function for computing the gradient of f. These APIs may be first-class language features, or implemented in and provided by libraries.
"Automatic differentiation" is an implementation detail for automatically computing derivative functions. There are many techniques (e.g. source code transformation, operator overloading) and multiple modes (e.g. forward-mode, reverse-mode).
Explained in code:
def f(x):
    return x * x * x
∇f = gradient(f)
print(∇f(4)) # 48.0
# Using the `gradient` API:
# ▶ differentiable programming.
# How `gradient` works to compute the gradient of `f`:
# ▶ automatic differentiation.
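To give a feel for the "automatic differentiation" half, here is one way such a gradient API could be implemented underneath: forward-mode AD with dual numbers. This is only a sketch (in Java, with made-up names), computing the same derivative of f(x) = x * x * x:

// Each value carries its derivative along; the product rule lives in mul().
class Dual {
    final double val, der;
    Dual(double val, double der) { this.val = val; this.der = der; }

    Dual mul(Dual o) {   // (u * v)' = u' * v + u * v'
        return new Dual(val * o.val, der * o.val + val * o.der);
    }
}

class ForwardModeAD {
    // Derivative of f at x, where f is written against Dual instead of double.
    static double derivative(java.util.function.UnaryOperator<Dual> f, double x) {
        return f.apply(new Dual(x, 1.0)).der;   // seed dx/dx = 1
    }

    public static void main(String[] args) {
        System.out.println(derivative(d -> d.mul(d).mul(d), 4.0));  // prints 48.0
    }
}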
I had never heard the term "differentiable programming" before reading your question, but having used the concepts noted in your references, both from the side of writing code to compute derivatives with symbolic differentiation and with automatic differentiation, and having written interpreters and compilers, to me this just means that they have made it easier to calculate the numeric value of the derivative of a function. I don't know whether they made it a first-class citizen, but the new way doesn't require a function/method call; it is done with syntax, and the compiler/interpreter hides the translation into calls.
If you look at the Zygote example it clearly shows the use of prime notation
julia> f(10), f'(10)
Most seasoned programmers would guess what I just noted even without a research paper explaining it. In other words, it is just that obvious.
Another way to think about it is that if you have ever tried to calculate a derivative in a programming language you know how hard it can be at times and then ask yourself why don't they (the language designers and programmers) just add it into the language. In these cases they did.
What surprises me is how long it took before derivatives became available via syntax instead of calls, but if you have ever worked with scientific code or coded neural networks at that level, then you will understand why this is a concept that is being touted as something of value.
Also I would not view this as another programming paradigm, but I am sure it will be added to the list.
How does it relate to automatic differentiation (the two seem conflated a lot of the time)?
In both cases that you referenced, they use automatic differentiation to calculate the derivative rather than symbolic differentiation. I do not view differentiable programming and automatic differentiation as two distinct sets; rather, differentiable programming needs some means of implementation, and the means they chose was automatic differentiation. They could have chosen symbolic differentiation or some other approach.
It seems you are trying to read more into what differentiable programming is than it really is. It is not a new way of programming, but just a nice feature added for computing derivatives.
Perhaps if they named it differentiable syntax it might have been more clear. The use of the word programming gives it more panache than I think it deserves.
EDIT
After skimming Swift Differentiable Programming Mega-Proposal and trying to compare that with the Julia example using Zygote, I would have to modify the answer into parts that talk about Zygote and then switch gears to talk about Swift. They each took a different path, but the commonality and bottom line is that the languages know something about differentiation which makes the job of coding them easier and hopefully produces less errors.
About the Wikipedia quote that
the programs can be differentiated throughout
At first reading it seems like nonsense, or at least it lacks enough detail to understand in context, which is why I am sure you asked.
After many years of digging into what others are trying to communicate, one learns to take a statement with a grain of salt unless the source has been peer reviewed, and, unless it is absolutely necessary to understand it, to just ignore it. In this case, if you ignore that sentence, most of your reference makes sense. However, I take it that you want an answer, so let's try to figure out what it means.
The key word that has me perplexed is throughout. Since you note the statement came from Wikipedia, and Wikipedia gives three references for it, a search shows that the word throughout appears in only one of them:
∂P: A Differentiable Programming System to Bridge Machine Learning and Scientific Computing
Thus, since our ∂P system does not require primitives to handle new types, this means that almost all functions and types defined throughout the language are automatically supported by Zygote, and users can easily accelerate specific functions as they deem necessary.
So my take on this is that by going back to the source, e.g. the paper, you can better understand how that percolated up into Wikipedia, but it seems that the meaning was lost along the way.
In this case if you really want to know the meaning of that statement you should ask on the Wikipedia talk page and ask the author of the statement directly.
Also note that the paper referenced is not peer reviewed, so the statements in there may not have any meaning amongst peers at present. As I said, I would just ignore it and get on with writing wonderful code.
You can guess its definition from the application of differentiability.
It is used for optimization, i.e. to calculate a minimum or maximum value.
Many such problems can be solved by finding the appropriate function and then using techniques to find the required maximum or minimum value.
It's a pretty popular and well known phrase that you should "only catch/throw exceptions which are exceptional". However, how is an "exceptional" exception determined?
For example, a bad password is very routine when logging into a service, so this is not exceptional. Statistics for a web app would probably show something like one bad login attempt for every five attempts (from no specific user). Likewise, attempting to go to a checkout with a basket in an online store could be very common (especially for new users). However, a file not found could go either way. I usually work along the lines that if a method is missing something it needs to do its work, it should throw an exception, but it gets a little confusing here. In some cases, a file not found could be common (e.g. a file share used by many users with no tight controls), compared with a very locked-down production environment missing a file, which would be exceptional.
Is this the right way to decide whether an exception is exceptional or not? I can easily filter things like no network connection etc. as exceptional, but some cases are hard to judge. Is it subjective?
Thanks
I think it's pretty subjective, honestly, so I prefer to avoid that method of figuring out when I should use exceptions.
Instead, I prefer to consider three things:
1. Is it likely that I might want to let the call stack unwind more than one level?
2. Is there another way (return null or an error code, etc.)? If so, do I have even the slightest performance concern?
3. If neither of those leads to a clear decision, which is easier to read by someone who has to maintain the code?
If #1 is true, and I don't have a MAJOR performance concern, I will probably opt to use exceptions because it will speed up my development time not to have to code return codes (and manually code the logic to have them propagate up the call stack if needed). When you use exceptions, call stack unwinding is free of charge for development time.
If #2 is true, and either I'm not going more than one frame (maybe two?) up the call stack or I have a serious performance concern (in a tight loop, for example), then I'll try really hard to find another way that doesn't involve exceptions.
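To illustrate the first point, with an exception the error unwinds through intermediate frames with no extra plumbing (a sketch, all names hypothetical):

class OrderHandler {
    void handleRequest(String rawId) {
        try {
            process(parse(rawId));   // neither call threads error codes upward
        } catch (IllegalArgumentException e) {
            System.err.println("rejected: " + e.getMessage());
        }
    }

    private int parse(String rawId) {
        if (!rawId.matches("\\d+")) {
            throw new IllegalArgumentException("order id must be numeric: " + rawId);
        }
        return Integer.parseInt(rawId);
    }

    private void process(int id) {
        // ... business logic ...
    }
}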
Exceptions are only a tool for programmers in a language that supports them. I don't believe they carry any intrinsic meaning as to what is "exceptional" or not. Instead, I say use them when they are the best tool for the job.
These days I'm used to checking every single precondition for every function since I got the habit from an OS programming course back at uni.
On the other hand, at the software engineering course we were taught that a common precondition should only be checked once, so for example, if a function is delegating to another function, the first function should check them but checking them again in the second one is redundant.
I do see the redundancy point, but I certainly feel it's safer to always check them, plus you don't have to keep track of where they were checked previously.
What's the best practice here?
I have seen no "hard and fast" rule on how to check preconditions, but I generally treat it like method documentation. If it is publicly scoped, I assert that the preconditions are met. The logic behind this would be that the scope dictates that you are expecting consumption on a broader scale and with less influence.
Personally, the effort to put assertions around private methods is something I reserve for "mission critical" methods, which would basically be ones that either perform a critical task, are subject to external compliance requirements, or are non-recoverable in the event of an exception. These are largely "judgment calls".
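A small sketch of that split, the way I tend to write it (class and method names invented): the public entry point gets a hard check, the private helper gets an assert reserved for the critical path.

class Transfers {
    public void transfer(long amountCents) {
        if (amountCents <= 0) {   // public scope: always enforced
            throw new IllegalArgumentException("amount must be positive");
        }
        book(amountCents);
    }

    private void book(long amountCents) {
        assert amountCents > 0 : "caller must validate the amount";  // debug-time only
        // ... perform the critical, non-recoverable booking ...
    }
}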
The time saved can be reinvested in thorough unit and integration test enhancement to try and flush out these issues and puts the tooling in place to help enforce the quality of the input assertions as it would be consumed by a client, whether it is a class in your control or not.
I think it depends on how the team is organized: check inputs which come from outside your team.
Check inputs from end-users
Check inputs from software components written by other teams
Trust inputs received from within your own component / within your own team.
The reason for this is for support and maintenance (i.e. bug-fixing): when there's a bug report, you want to be able as quickly as possible to know which component is at fault, i.e. which team to assign the bug to.
if a function is delegating to another function, the first function should check them but checking them again in the second one is redundant.
What if you change the way those functions call each other? Or you introduce new validation requirements in the second function, that the first one delegates to? I'd say it's safer to always check them.
I've made the habit of distinguishing between checking and asserting the preconditions, depending (as people pointed out in the comments) on whether a call comes from the outside (unchecked exception, may happen) or the inside (assert, shouldn't happen).
Ideally, the production system won't have the penalty of the asserts, and you could even use a Design By Contract(TM)-like mechanism to discharge the assertions statically.
In my experience, it depends on your encapsulation.
If the inner function is private then you can be pretty sure that its preconditions are set.
Guess it's all about the Law of Demeter, talk only to your friends and so on.
As a basis for best practice, if the call is public, you should check your inputs.
I think best practice is to do these checks only where they could actually fail some day. They will help with the following.
Debugging
There's no point in checking them when several private functions in one module, maintained by a single person, exchange data. Of course, there are exceptions, especially if your language doesn't have a static type system, or your data are "stringly typed".
However, if you expose a public API, one day someone will violate your precondition. The further the person maintaining the calling module is from you (in organizational structure and in physical location), the more likely it is to happen. And when it happens, a clear statement of precondition failure, with a specification of where it happened, may save hours of debugging. The Law of Leaky Abstractions is still true...
QA
Precondition failure helps QA to debug their tests. If a unit-test for a module causes the module to yield precondition failure, it means that the test is incorrect, not your code. (Or, that your precondition check is incorrect, but that's less likely).
If one of the means of performing QA is static analysis, then precondition checks will also help if they have a specific notation (for example, only these checks use an assert_precondition macro). In static analysis it is very important to distinguish incorrect input from source-code errors.
Documentation
If you don't have much time to create documentation, you may make your code aid the text that accompanies it. Clear and visible precondition checks, which are perceived separate from the rest of the implementation, "document" possible inputs to some extent. (Another way to document your code this way is writing unit tests).
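In Java there is no macro, but a small dedicated helper gives the same single recognisable notation, which keeps the checks greppable for static analysis and visually separate from the implementation (a hand-rolled sketch, not a library API):

final class Preconditions {
    static void require(boolean condition, String description) {
        if (!condition) {
            throw new IllegalArgumentException("precondition failed: " + description);
        }
    }
}

class ReportGenerator {
    void generate(String format, int copies) {
        Preconditions.require(format != null && !format.isEmpty(), "format is non-empty");
        Preconditions.require(copies > 0, "copies > 0");
        // ... the actual implementation starts here ...
    }
}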
As with everything, evaluate your requirements to find the best solution for each situation.
When preconditions are easier to check ("pointer isn't null"), you might as well do that often. Preconditions which are hard to check ("points to a valid null-terminated string") or are expensive in time, memory, or other resources may be handled a different way. Use the Pareto principle and gather the low-hanging fruit.
// C, C++:
void example(char const* s) {
// precondition: s points to a valid null-terminated string
assert(s); // tests that s is non-null, which is required for it to point to
// a valid null-terminated string. the real test is nearly impossible from
// within this function
}
Guaranteeing preconditions is the responsibility of the caller. Because of this, several languages offer an "assert" construct that can optionally be skipped (e.g. defining NDEBUG for C/C++, command-line switch for Python) so that you can more extensively test preconditions in special builds without impacting final performance. However, how to use assert can be a heated debate—again, figure out your requirements and be consistent.
This is a bit of an old question, but no, preconditions do not have to be checked every time. It really depends.
For example, consider binary search over a vector. The precondition is a sorted vector. Now, if you check every time whether the vector is sorted, that takes linear time (for each call), so it is not efficient. The client must be aware of the precondition and be sure to meet it.
The best practice is to always check them.
I am currently writing a dissertation about the implications or dangers that today's software development practices or teachings may have on programming in the long term.
Just to make it clear: I am not attacking the use of abstractions in programming. Every programmer knows that abstractions are the basis for modularity.
What I want to investigate with this dissertation are the positive and negative effects abstractions can have in software development. As regards the positive, I am sure that I can find many sources that can confirm this. But what about the negative effects of abstractions? Do you have any stories to share that talk about when certain abstractions failed on you?
The main concern is that many programmers today are programming against abstractions without having the faintest idea of what the abstraction is doing under the covers. This may very well lead to bugs and bad design. So, in your opinion, how important is it that programmers actually know what is going on below the abstractions?
Taking a simple example from Joel's Back to Basics, C's strcat:
void strcat( char* dest, char* src )
{
while (*dest) dest++;
while (*dest++ = *src++);
}
The above function has the issue that, when you are doing repeated string concatenation, it always starts from the beginning of the dest pointer to find the null terminator character. If you write the function as follows instead, it returns a pointer to the end of the concatenated string, which in turn allows you to pass this new pointer to the next concatenation call as the dest parameter:
char* mystrcat( char* dest, char* src )
{
while (*dest) dest++;
while (*dest++ = *src++);
return --dest;
}
Now this is obviously a very simple example as regards abstractions, but it is the same concept I shall be investigating.
Finally, what do you think about the issue that schools prefer to teach Java instead of C and Lisp?
Can you please give your opinions and your views on this subject?
Thank you for your time and I appreciate every comment.
First of all, abstractions are inevitable because they help us to deal with the mind-blowing complexity of things.
Abstractions are also inevitable because it is more and more required of an individual to undertake more tasks or even complete projects. To address the problem, one uses libraries which wrap lower-level concepts and expose more complex behavior.
Naturally, a developer has less and less time to know the intricacies of things. The latest concern I heard about on SO pages is people starting to learn JavaScript with the jQuery library, ignoring raw JavaScript altogether.
The issue is about the balance between:
Knowing the tiniest details of some technology and being a master of it, but at the same time being unable to work with anything else.
Superficial knowledge of a wide variety of technologies and tools, which nevertheless proves sufficient for common everyday tasks and allows an individual to perform in multiple areas, possibly covering all sides of some (moderately big) project.
Take your pick.
Some work requires the one, another position requires the other.
So, in your opinion, how important is it that programmers actually know what is going on below the abstractions?
It would be nice if people knew what is happening behind the scenes. This knowledge comes with time and practice, up to a certain degree. Depends on what kind of tasks you have. You certainly shouldn't blame people for not knowing everything. If you wish a person to be able to perform in a variety of fields, it is inevitable he won't have time to cover each up to the last bit.
What is essential, is the knowledge of the basic building blocks. Data structures, algorithms, complexity. That should provide a basis for everything else.
Knowing tiniest details of some particular technology is good, but not essential. Anyway, you can't learn them all. They're too many and they keep coming.
Finally, what do you think about the issue that schools prefer to teach Java instead of C and Lisp?
Schools shouldn't be teaching programming languages at all. They are there to teach the basics of theoretical and practical CS, social skills, communication, and teamwork; to cover a wide variety of topics and problems and provide a wide-angle view for their graduates. This will help them find their way. Whatever they need to know in detail, they'll learn on their own.
An example where abstraction has failed:
In this case, a piece of software was needed to communicate to many different third party data processors. The communication was done through various messaging protocols; the transport method/protocol is not important in this case. Just assume everyone communicated through messaging.
The idea was to abstract the features of each of these third parties into a single, unified message format. It seemed relatively straightforward because each of the third parties performed a similar service. The problem was that some third parties used different terms to explain similar features. It was also found that some third parties had additional features that other third parties did not have.
The designers of the abstraction did not see through the difference of third party terms nor did they think it was reasonable to limit the scope of the unified features to only support the common features of the third parties. Instead, a single, monolithic message schema was developed to support any and all features of the third parties considered at the time. In what was probably considered a future-proofing move, they added a means of also passing an infinite number of name/value pairs along with the monolithic message in case there were future data elements that the monolithic message could not handle.
Early on, it became clear that changing the monolithic message was going to be difficult due to so many people using it in mission critical systems. The use of the name/value pairs increased. Each name that could be used was documented inside a large spreadsheet, and developers were required to consult the spreadsheet to avoid duplication of name value function. The list got so large, however, that it was found that there were frequently collisions in purposes of name values.
The majority of the monolithic message's fields now have no purpose and are kept mainly for backwards compatibility. There are name values that can be used to replace fields in the monolithic message. The majority of the interfacing is now done through the name/value pairs. In cases where the client is intending to communicate with more than one third party, each client needs to reconcile the name values available for each third party. It would be almost simpler to interface directly to the third party themselves.
I believe this illustrates that, from the perspective of a consumer of the monolithic message, it is important that developers of the consuming code should not have to know what is happening under the covers. If the designers had considered that the consumers of the monolithic message should not have to understand the abstraction in great detail, the monolithic message and its associated name/value pairs might never have happened. Documenting the abstraction with assertions regarding input and expected output would make life so much simpler.
As for colleges not teaching C and Lisp... they are cheating the students. You get a better understanding of what is going on with the machine and the OS with C. You get a somewhat different perspective on processing data and approaching problems with Lisp. I have used some of the ideas I learned using Lisp in programs written in C, C++, .NET, and Java. Learning Java after knowing even just C is not very difficult. The OO part is really not programming-language specific, so perhaps using Java for that is acceptable.
An understanding of fundamentals of algorithms (e.g. time complexity) and some knowledge about the metal is essential to designing/writing smells-good code.
I would suggest, though, that just as important is education in modern abstractions and profiling. I feel that modern abstractions make me so much more productive than I would be without them that they are at least as important as good fundamentals, if not more so.
An important element that lacked in my education was the use of profilers. When used routinely and correctly, profilers can help mitigate problems with poor fundamentals.
Since you quote Joel Spolsky, I take it you're aware of his "Law of Leaky Abstractions"? I'll mention it for future readers. http://www.joelonsoftware.com/articles/LeakyAbstractions.html
Green & Blackwell's Ironies of Abstractions talks a bit about the effort of learning the abstraction. http://homepage.ntlworld.com/greenery/workStuff/Papers/index.html
The term "astronaut architecture" is a reaction to over-abstraction.
I know I certainly curse abstraction when I haven't touched Java or C# in a while and I want to write to a file, but have to instantiate a Stream...Writer...Adaptor...Handler...
Also, patterns, as in Gang of Four. They seemed great when I first read about them in the mid-90s, but I can never remember factory, facade, interface, helper, worker, flyweight...
Do you use Design by Contract professionally? Is it something you have to do from the beginning of a project, or can you change gears and start to incorporate it into your software development lifecycle? What have you found to be the pros/cons of the design approach?
I came across the Design by Contract approach in a grad school course. In the academic setting, it seemed to be a pretty useful technique. But I don't currently use Design by Contract professionally, and I don't know any other developers that are using it. It would be good to hear about its actual usage from the SO crowd.
I can't recommend it highly enough. It's particularly nice if you have a suite that takes inline documentation contract specifications, like so:
// #returns null iff x = 0
public Integer foo(int x) {
    // ...
}
and turns them into generated unit tests, like so:
public void test_foo_returns_null_iff_x_equals_0() {
    assertNull(foo(0));
}
That way, you can actually see the tests you're running, but they're auto-generated. Generated tests shouldn't be checked into source control, by the way.
You really get to appreciate design by contract when you have an interface between two applications that have to talk to each other.
Without contracts this situation quickly becomes a game of blame tennis. The teams keep knocking accusations back and forth and huge amounts of time get wasted.
With a contract, the blame is clear.
Did the caller satisfy the preconditions? If not, the client team needs to fix it.
Given a valid request, did the receiver satisfy the postconditions? If not, the server team needs to fix that.
Did both parties adhere to the contract, but the result is still unsatisfactory? Then the contract is insufficient and the issue needs to be escalated.
For this you don't need to have the contracts implemented in the form of assertions, you just need to make sure they are documented and agreed on by all parties.
If you look into the STL, Boost, MFC, ATL and many open-source projects, you can see there are a great many assertion statements, and they make the project evolve more safely.
Design by Contract! It really works in real products.
Frank Krueger writes:
Gaius: A Null Pointer exception gets thrown for you automatically by the runtime, there is no benefit to testing that stuff in the function prologue.
I have two responses to this:
Null was just an example. For square(x), I'd want to test that the square root of the result is (approximately) the value of the parameter. For setters, I'd want to test that the value actually changed. For atomic operations, I'd want to check that all component operations succeeded or all failed (really one test for success and n tests for failure). For factory methods in weakly-typed languages, I want to check that the right kind of object is returned. The list goes on and on. Basically, anything that can be tested in one line of code is a very good candidate for a code contract in a prologue comment.
I disagree that you shouldn't test things because they generate runtime exceptions. If anything, you should test things that might generate runtime exceptions. I like runtime exceptions because they make the system fail fast, which helps debugging. But the null in the example was a result value for some possible input. There's an argument to be made for never returning null, but if you're going to, you should test it.
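For the square(x) example from the first point, the whole check really is one line, whether it ends up in a prologue comment that generates a test or as an inline assert (a sketch of my own):

class MathChecks {
    static double square(double x) {
        double result = x * x;
        // Postcondition: sqrt of the result is (approximately) the absolute value of x.
        assert Math.abs(Math.sqrt(result) - Math.abs(x)) <= 1e-9 * (1.0 + Math.abs(x))
                : "square() postcondition violated for x = " + x;
        return result;
    }
}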
It's absolutely foolish to not design by contract when doing anything in an SOA realm, and it's always helpful if you're working on any sort of modular work, where bits & pieces might be swapped out later on, especially if any black boxen are involved.
In lieu of more expressive type systems, I would absolutely use design by contract on military grade projects.
For weakly typed languages or languages with dynamic scope (PHP, JavaScript), functional contracts are also very handy.
For everything else, I would toss it aside and rely upon beta testers and unit tests.
Gaius: A Null Pointer exception gets thrown for you automatically by the runtime, there is no benefit to testing that stuff in the function prologue. If you are more interested in documentation, then I would use annotations that can be used with static analyzers and the like (to make sure the code isn't breaking your annotations for example).
A stronger type system coupled with Design by Contract seems to be the way to go. Take a look at Spec# for an example:
The Spec# programming language. Spec# is an extension of the object-oriented language C#. It extends the type system to include non-null types and checked exceptions. It provides method contracts in the form of pre- and postconditions as well as object invariants.
Both unit testing and Design by Contract are valuable test approaches in my experience.
I have tried using Design by Contract in a system automatic-testing framework, and my experience is that it gives a flexibility and possibilities not easily obtained by unit testing. For example, it is possible to run longer sequences and verify that the response times are within limits every time an action is executed.
Looking at the presentations at InfoQ, it appears that Design by Contract is a valuable addition to conventional unit tests in the integration phase of components. For example, it is possible to create a mock interface first and then use the component afterwards, or when a new version of a component is released.
I have not found a toolkit covering all my requirements for design-by-contract testing on the .NET/Microsoft platform.
I find it telling that the Go programming language has no constructs that make design by contract possible. panic/defer/recover aren't exactly that, as defer and recover logic make it possible to ignore a panic, in other words to ignore a broken contract. What's needed, at the very least, is some form of unrecoverable panic, which is really assert. Or, better, direct language support for design-by-contract constructs (pre- and postconditions, implementation and class invariants). But given the strong-headedness of the language purists at the helm of the Go ship, I give little chance of any of this being done.
One can implement assert-like behaviour by checking for a special assert error in the last deferred function in the panicking function and calling runtime.Breakpoint() to dump the stack during recovery. To be assert-like, that behaviour needs to be conditional. Of course, this approach falls apart when a new deferred function is added after the one doing the assert. Which will happen in a large project at exactly the wrong time, resulting in missed bugs.
My point is that assert is useful in so many ways that having to dance around it may be a headache.
I don't actually use Design by Contract, on a daily basis. I do, however know that it has been incorporated into the D language, as part of the language.
Yes, it does! Actually, a few years ago I designed a little framework for argument validation. I was doing an SOA project in which the different back-end systems did all kinds of validation and checking. But to improve response times (in cases where the input was invalid) and to reduce the load on those back-end systems, we started to validate the input parameters of the provided services: not only for not-null, but also for string patterns, values from within sets, and cases where parameters had dependencies between them.
Now I realize that we implemented a small design-by-contract framework at the time :)
Here is the link for those who are interested in the small Java argument-validation framework, which is implemented as a plain Java solution.
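Much simplified, the idea looked something like this (a rough sketch, all names invented here, not the actual framework):

import java.util.Set;
import java.util.regex.Pattern;

final class Validate {
    static void notNull(Object value, String name) {
        if (value == null) throw new IllegalArgumentException(name + " must not be null");
    }
    static void matches(String value, Pattern pattern, String name) {
        notNull(value, name);
        if (!pattern.matcher(value).matches())
            throw new IllegalArgumentException(name + " has an invalid format: " + value);
    }
    static <T> void oneOf(T value, Set<T> allowed, String name) {
        if (!allowed.contains(value))
            throw new IllegalArgumentException(name + " must be one of " + allowed);
    }
}

class OrderService {
    private static final Pattern ISO_DATE = Pattern.compile("\\d{4}-\\d{2}-\\d{2}");

    void placeOrder(String customerId, String deliveryDate, String channel) {
        // Fail fast at the service boundary, before any expensive back-end call is made.
        Validate.notNull(customerId, "customerId");
        Validate.matches(deliveryDate, ISO_DATE, "deliveryDate");
        Validate.oneOf(channel, Set.of("WEB", "PHONE", "STORE"), "channel");
        // Dependencies between parameters would be checked here as well.
        // ... only now call the back-end systems ...
    }
}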