Does Design By Contract Work For You? [closed] - design-by-contract

Do you use Design by Contract professionally? Is it something you have to do from the beginning of a project, or can you change gears and start to incorporate it into your software development lifecycle? What have you found to be the pros/cons of the design approach?
I came across the Design by Contract approach in a grad school course. In the academic setting, it seemed to be a pretty useful technique. But I don't currently use Design by Contract professionally, and I don't know any other developers that are using it. It would be good to hear about its actual usage from the SO crowd.

I can't recommend it highly enough. It's particularly nice if you have a suite that takes inline documentation contract specifications, like so:
// #returns null iff x = 0
public Object foo(int x) {
    ...
}
and turns them into generated unit tests, like so:
public void test_foo_returns_null_iff_x_equals_0() {
    assertNull(foo(0));
}
That way, you can actually see the tests you're running, but they're auto-generated. Generated tests shouldn't be checked into source control, by the way.

You really get to appreciate design by contract when you have an interface between two applications that have to talk to each other.
Without contracts this situation quickly becomes a game of blame tennis. The teams keep knocking accusations back and forth and huge amounts of time get wasted.
With a contract, the blame is clear.
Did the caller satisfy the preconditions? If not, the client team needs to fix it.
Given a valid request, did the receiver satisfy the postconditions? If not, the server team needs to fix that.
Did both parties adhere to the contract, but the result is unsatisfactory? The contract is insufficient and the issue needs to be escalated.
For this you don't need to have the contracts implemented in the form of assertions; you just need to make sure they are documented and agreed on by all parties.
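For illustration, here is a minimal Java sketch of contract checks at such a boundary; all the names (TransferService, TransferRequest, Receipt) are made up for the example:
import java.util.Objects;

public final class TransferService {

    // Contract:
    //   precondition:  request != null, request.amount() > 0
    //   postcondition: the returned receipt carries a non-empty confirmation id

    public Receipt process(TransferRequest request) {
        // Precondition check: if this fails, the caller is at fault.
        if (request == null || request.amount() <= 0) {
            throw new IllegalArgumentException("precondition violated: invalid request");
        }

        Receipt receipt = doProcess(request);

        // Postcondition check: if this fails, the receiver is at fault.
        if (receipt.confirmationId() == null || receipt.confirmationId().isEmpty()) {
            throw new IllegalStateException("postcondition violated: missing confirmation id");
        }
        return receipt;
    }

    private Receipt doProcess(TransferRequest request) {
        return new Receipt("CONF-" + System.nanoTime()); // stand-in for the real work
    }

    public record TransferRequest(long amount) {}
    public record Receipt(String confirmationId) {}

    public static void main(String[] args) {
        Receipt r = new TransferService().process(new TransferRequest(50));
        System.out.println(r.confirmationId());
    }
}
A precondition failure points at the caller; a postcondition failure points at the receiver. That is exactly the blame assignment described above.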

If you look into the STL, Boost, MFC, ATL, and many open source projects, you will see a great many ASSERTION statements, and those help the projects move forward more safely.
Design-By-Contract! It really works in real products.

Frank Krueger writes:
Gaius: A Null Pointer exception gets thrown for you automatically by the runtime, there is no benefit to testing that stuff in the function prologue.
I have two responses to this:
Null was just an example. For square(x), I'd want to test that the square root of the result is (approximately) the value of the parameter. For setters, I'd want to test that the value actually changed. For atomic operations, I'd want to check that all component operations succeeded or all failed (really one test for success and n tests for failure). For factory methods in weakly-typed languages, I want to check that the right kind of object is returned. The list goes on and on. Basically, anything that can be tested in one line of code is a very good candidate for a code contract in a prologue comment.
I disagree that you shouldn't test things because they generate runtime exceptions. If anything, you should test things that might generate runtime exceptions. I like runtime exceptions because they make the system fail fast, which helps debugging. But the null in the example was a result value for some possible input. There's an argument to be made for never returning null, but if you're going to, you should test it.
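A minimal sketch of those one-line checks, using Java's assert (the method names are illustrative, and asserts only run when the JVM is started with -ea):
public class Postconditions {

    static double square(double x) {
        double result = x * x;
        // postcondition: the square root of the result is (approximately) |x|
        assert Math.abs(Math.sqrt(result) - Math.abs(x)) < 1e-9 : "square postcondition violated";
        return result;
    }

    private int value;

    void setValue(int newValue) {
        value = newValue;
        // postcondition for a setter: the value actually changed
        assert value == newValue : "setter postcondition violated";
    }

    public static void main(String[] args) {
        System.out.println(square(3.0)); // prints 9.0
    }
}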

It's absolutely foolish to not design by contract when doing anything in an SOA realm, and it's always helpful if you're working on any sort of modular work, where bits & pieces might be swapped out later on, especially if any black boxen are involved.

In lieu of more expressive type systems, I would absolutely use design by contract on military-grade projects.
For weakly typed languages or languages with dynamic scope (PHP, JavaScript), functional contracts are also very handy.
For everything else, I would toss it aside and rely upon beta testers and unit tests.
Gaius: A Null Pointer exception gets thrown for you automatically by the runtime, there is no benefit to testing that stuff in the function prologue. If you are more interested in documentation, then I would use annotations that can be used with static analyzers and the like (to make sure the code isn't breaking your annotations for example).
A stronger type system coupled with Design by Contract seems to be the way to go. Take a look at Spec# for an example:
The Spec# programming language. Spec# is an extension of the object-oriented language C#. It extends the type system to include non-null types and checked exceptions. It provides method contracts in the form of pre- and postconditions as well as object invariants.

Both unit testing and Design by Contract are valuable test approaches in my experience.
I have tried using Design by Contract in a system automatic testing framework, and my experience is that it gives a flexibility and possibilities not easily obtained by unit testing. For example, it's possible to run longer sequences and verify that the response times are within limits every time an action is executed.
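A sketch of what such a response-time contract could look like as a plain Java wrapper (runChecked and maxMillis are illustrative names, not from any particular framework):
import java.util.function.Supplier;

public final class TimedContract {

    // Runs the action and enforces a timing "postcondition" on every call.
    static <T> T runChecked(long maxMillis, Supplier<T> action) {
        long start = System.nanoTime();
        T result = action.get();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMillis > maxMillis) {
            throw new AssertionError("response-time contract violated: "
                    + elapsedMillis + " ms > " + maxMillis + " ms");
        }
        return result;
    }

    public static void main(String[] args) {
        // The action must complete within 200 ms every time it is executed.
        String reply = runChecked(200, () -> "pong");
        System.out.println(reply);
    }
}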
Looking at the presentations at InfoQ, it appears that Design by Contract is a valuable addition to conventional unit tests in the integration phase of components.
For example, it is possible to create a mock interface first and then exercise the component against it afterwards, or when a new version of a component is released.
I have not found a toolkit covering all my design requirements for design-by-contract testing on the .NET/Microsoft platform.

I find it telling that the Go programming language has no constructs that make design by contract possible. panic/defer/recover aren't exactly that, since defer and recover logic make it possible to ignore a panic, IOW to ignore a broken contract. What's needed, at the very least, is some form of unrecoverable panic, which is really assert. Or, at best, direct language support for design-by-contract constructs (pre- and postconditions, implementation and class invariants). But given the strong-headedness of the language purists at the helm of the Go ship, I give little chance of any of this being done.
One can implement assert-like behaviour by checking for a special assert error in the last deferred function in the panicking function and calling runtime.Breakpoint() to dump the stack during recovery. To be assert-like, that behaviour needs to be conditional. Of course, this approach falls apart when a new deferred function is added after the one doing the assert. Which will happen in a large project at exactly the wrong time, resulting in missed bugs.
My point is that assert is useful in so many ways that having to dance around it may be a headache.

I don't actually use Design by Contract on a daily basis. I do, however, know that it has been incorporated into the D language as a first-class language feature.

Yes, it does! Actually, a few years ago I designed a little framework for argument validation. I was doing an SOA project, in which the different back-end systems did all kinds of validation and checking. But to improve response times (in cases where the input was invalid) and to reduce the load on those back-end systems, we started to validate the input parameters of the provided services: not only for not-null, but also for string patterns, values from within sets, and cases where parameters had dependencies between them.
Now I realize we implemented at that time a small design-by-contract framework :)
Here is the link for those who are interested in the small Java Argument Validation framework, which is implemented as a plain Java solution.
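For those curious what such a plain-Java validation helper might look like, here is a hypothetical sketch (the names are mine, not taken from the framework):
import java.util.regex.Pattern;

public final class Validate {

    // Not-null check: returns the value so calls can be chained inline.
    static <T> T notNull(T value, String name) {
        if (value == null) {
            throw new IllegalArgumentException(name + " must not be null");
        }
        return value;
    }

    // String-pattern check, as described above.
    static String matches(String value, String regex, String name) {
        notNull(value, name);
        if (!Pattern.matches(regex, value)) {
            throw new IllegalArgumentException(name + " must match " + regex);
        }
        return value;
    }

    public static void main(String[] args) {
        String isbn = matches("978-0321146533", "[0-9-]+", "isbn");
        System.out.println("valid: " + isbn);
    }
}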

Related

Do preconditions ALWAYS have to be checked? [closed]

These days I'm used to checking every single precondition for every function since I got the habit from an OS programming course back at uni.
On the other hand, at the software engineering course we were taught that a common precondition should only be checked once, so for example, if a function is delegating to another function, the first function should check them but checking them again in the second one is redundant.
I do see the redundancy point, but I certainly feel it's safer to always check them, plus you don't have to keep track of where they were checked previously.
What's the best practice here?
I have seen no "hard and fast" rule on how to check preconditions, but I generally treat it like method documentation. If it is publicly scoped, I assert that the preconditions are met. The logic behind this would be that the scope dictates that you are expecting consumption on a broader scale and with less influence.
Personally, the effort to put assertions around private methods is something I reserve for "mission critical" methods, which would basically be ones that either perform a critical task, are subject to external compliance requirements, or are non-recoverable in the event of an exception. These are largely "judgment calls".
The time saved can be reinvested in thorough unit and integration test enhancement to try and flush out these issues, and it puts the tooling in place to help enforce the quality of the input assertions as they would be consumed by a client, whether it is a class in your control or not.
I think it depends on how the team is organized: check inputs which come from outside your team.
Check inputs from end-users
Check inputs from software components written by other teams
Trust inputs received from within your own component / within your own team.
The reason for this is for support and maintenance (i.e. bug-fixing): when there's a bug report, you want to be able as quickly as possible to know which component is at fault, i.e. which team to assign the bug to.
if a function is delegating to another function, the first function should check them but checking them again in the second one is redundant.
What if you change the way those functions call each other? Or you introduce new validation requirements in the second function, the one being delegated to? I'd say it's safer to always check them.
I've made the habit of distinguishing between checking and asserting the preconditions, depending (as people pointed out in the comments) on whether a call comes from the outside (unchecked exception, may happen) or the inside (assert, shouldn't happen).
Ideally, the production system won't have the penalty of the asserts, and you could even use a Design By Contract(TM)-like mechanism to discharge the assertions statically.
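A minimal Java sketch of that split (the Account example is illustrative): an unchecked exception guards the public entry point, an assert guards the internal path:
public final class Account {

    private long balance = 100;

    public void withdraw(long amount) {
        // Call from outside: checked with an exception that is always on.
        if (amount <= 0 || amount > balance) {
            throw new IllegalArgumentException("invalid withdrawal: " + amount);
        }
        applyDebit(amount);
    }

    private void applyDebit(long amount) {
        // Call from inside: assert, stripped unless the JVM runs with -ea.
        assert amount > 0 && amount <= balance : "broken internal precondition";
        balance -= amount;
    }
}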
In my experience, it depends on your encapsulation.
If the inner function is private then you can be pretty sure that its preconditions are set.
Guess it's all about the Law of Demeter: talk only to your friends, and so on.
As a basis for best practise, if the call is public, you should check your inputs.
I think that best practice is to do these checks only if they're going to fail some day. It will help when you do the following.
Debugging
There's no point in checking them when several private functions in one module, which has a single maintainer, exchange data. Of course, there are exceptions, especially if your language doesn't have a static type system, or your data are "stringly typed".
However, if you expose a public API, one day someone will fail your precondition. The further the person maintaining the calling module is from you (in organizational structure and in physical location), the more likely it is to happen. And when it happens, a clear statement of the precondition failure, with a specification of where it happened, may save hours of debugging. The Law of Leaky Abstractions is still true...
QA
Precondition failures help QA debug their tests. If a unit test for a module causes the module to yield a precondition failure, it means that the test is incorrect, not your code. (Or that your precondition check is incorrect, but that's less likely.)
If one of the means to perform QA is static analysis, then precondition checks, if they have a specific notation (for example, only these checks use an assert_precondition macro), will also help. In static analysis it's very important to distinguish incorrect input from source code errors.
Documentation
If you don't have much time to create documentation, you may make your code aid the text that accompanies it. Clear and visible precondition checks, which are perceived as separate from the rest of the implementation, "document" possible inputs to some extent. (Another way to document your code this way is writing unit tests.)
As with everything, evaluate your requirements to find the best solution for each situation.
When preconditions are easier to check ("pointer isn't null"), you might as well do that often. Preconditions which are hard to check ("points to a valid null-terminated string") or are expensive in time, memory, or other resources may be handled a different way. Use the Pareto principle and gather the low-hanging fruit.
// C, C++:
#include <assert.h>

void example(char const* s) {
    // precondition: s points to a valid null-terminated string
    assert(s); // tests that s is non-null, which is required for it to point
               // to a valid null-terminated string; the real test is nearly
               // impossible from within this function
}
Guaranteeing preconditions is the responsibility of the caller. Because of this, several languages offer an "assert" construct that can optionally be skipped (e.g. defining NDEBUG for C/C++, command-line switch for Python) so that you can more extensively test preconditions in special builds without impacting final performance. However, how to use assert can be a heated debate—again, figure out your requirements and be consistent.
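Java offers the same toggle: assert statements are skipped unless the JVM is started with -ea, so extensive precondition tests cost nothing in a normal run. A minimal sketch:
public class AssertDemo {

    static int divide(int a, int b) {
        // precondition: divisor must be non-zero (checked only under -ea)
        assert b != 0 : "precondition violated: divisor is zero";
        return a / b;
    }

    public static void main(String[] args) {
        System.out.println(divide(10, 2)); // prints 5
        // "java -ea AssertDemo" would trip the assert on divide(1, 0)
    }
}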
It is a bit of an old question, but no, preconditions do not have to be checked every time. It really depends.
For example, suppose you have binary search over a vector. The precondition is a sorted vector. Now, if you check on every call whether the vector is sorted, that check takes linear time, so it is not efficient. The client must be aware of the precondition and be sure to meet it.
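A sketch of that trade-off in Java, with the O(n) sortedness check confined to an assert so release runs keep the O(log n) cost (the names are illustrative):
import java.util.List;

public final class Search {

    // Linear-time precondition check: runs only when assertions are enabled.
    static boolean isSorted(List<Integer> v) {
        for (int i = 1; i < v.size(); i++) {
            if (v.get(i - 1) > v.get(i)) return false;
        }
        return true;
    }

    static int binarySearch(List<Integer> v, int key) {
        assert isSorted(v) : "precondition violated: input must be sorted";
        int lo = 0, hi = v.size() - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            int cmp = Integer.compare(v.get(mid), key);
            if (cmp < 0) lo = mid + 1;
            else if (cmp > 0) hi = mid - 1;
            else return mid;
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        System.out.println(binarySearch(List.of(1, 3, 5, 7), 5)); // prints 2
    }
}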
The best practice is to always check them.

What programming practice that you once liked have you since changed your mind about? [closed]

As we program, we all develop practices and patterns that we use and rely on. However, over time, as our understanding, maturity, and even technology usage changes, we come to realize that some practices that we once thought were great are not (or no longer apply).
An example of a practice I once used quite often, but have in recent years changed, is the use of the Singleton object pattern.
Through my own experience and long debates with colleagues, I've come to realize that singletons are not always desirable - they can make testing more difficult (by inhibiting techniques like mocking) and can create undesirable coupling between parts of a system. Instead, I now use object factories (typically with an IoC container) that hide the nature and existence of singletons from parts of the system that don't care - or need to know. Instead, they rely on a factory (or service locator) to acquire access to such objects.
My questions to the community, in the spirit of self-improvement, are:
What programming patterns or practices have you reconsidered recently, and now try to avoid?
What did you decide to replace them with?
//Coming out of university, we were taught to ensure we always had an abundance
//of commenting around our code. But applying that to the real world made it
//clear that over-commenting not only has the potential to confuse/complicate
//things but can make the code hard to follow. Now I spend more time on
//improving the simplicity and readability of the code and inserting fewer yet
//relevant comments, instead of spending that time writing overly descriptive
//commentaries all throughout the code.
Single return points.
I once preferred a single return point for each method, because with that I could ensure that any cleanup needed by the routine was not overlooked.
Since then, I've moved to much smaller routines - so the likelihood of overlooking cleanup is reduced and in fact the need for cleanup is reduced - and find that early returns reduce the apparent complexity (the nesting level) of the code. Artifacts of the single return point - keeping "result" variables around, keeping flag variables, conditional clauses for not-already-done situations - make the code appear much more complex than it actually is, make it harder to read and maintain. Early exits, and smaller methods, are the way to go.
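To illustrate, here is a small Java sketch of the same logic in both styles (the example is mine):
public final class Guards {

    // Early exits: each case is handled and dismissed immediately.
    static String describe(Integer x) {
        if (x == null) return "missing";
        if (x < 0) return "negative";
        if (x == 0) return "zero";
        return "positive: " + x;
    }

    // Single return point: the same logic needs a result variable and nesting.
    static String describeSingleReturn(Integer x) {
        String result;
        if (x == null) {
            result = "missing";
        } else {
            if (x < 0) {
                result = "negative";
            } else {
                result = (x == 0) ? "zero" : "positive: " + x;
            }
        }
        return result;
    }
}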
Trying to code things perfectly on the first try.
Trying to create perfect OO model before coding.
Designing everything for flexibility and future improvements.
In one word: overengineering.
Hungarian notation (both Forms and Systems).
I used to prefix everything. strSomeString or txtFoo.
Now I use someString and textBoxFoo. It's far more readable and easier for someone new to come along and pick up. As an added bonus, it's trivial to keep it consistent -- camelCase the control and append a useful/descriptive name. Forms Hungarian has the drawback of not always being consistent, and Systems Hungarian doesn't really gain you much. Chunking all your variables together isn't really that useful -- especially with modern IDEs.
The "perfect" architecture
I came up with THE architecture a couple of years ago. Pushed myself technically as far as I could so there were 100% loosely coupled layers, extensive use of delegates, and lightweight objects. It was technical heaven.
And it was crap. The technical purity of the architecture just slowed my dev team down, aiming for perfection over results, and I almost achieved complete failure.
We now have a much simpler, less technically perfect architecture, and our delivery rate has skyrocketed.
The use of caffeine. It once kept me awake and in a glorious programming mood, where the code flew from my fingers with feverous fluidity. Now it does nothing, and if I don't have it I get a headache.
Commenting out code. I used to think that code was precious and that you can't just delete those beautiful gems that you crafted. I now delete any commented-out code I come across unless there's a TODO or NOTE attached, because it's too perilous to leave it in. To wit, I've come across old classes with huge commented-out portions, and it really confused me why they were there: were they recently commented out? Is this a dev-environment change? Why does it do this unrelated block?
Seriously consider not commenting out code and just deleting it instead. If you need it, it's still in source control. YAGNI though.
The overuse / abuse of #region directives. It's just a little thing, but in C#, I previously would use #region directives all over the place, to organize my classes. For example, I'd group all class properties together in a region.
Now I look back at old code and mostly just get annoyed by them. I don't think it really makes things clearer most of the time, and sometimes they just plain slow you down.
So I have now changed my mind and feel that well laid out classes are mostly cleaner without region directives.
Waterfall development in general, and in specific, the practice of writing complete and comprehensive functional and design specifications that are somehow expected to be canonical and then expecting an implementation of those to be correct and acceptable. I've seen it replaced with Scrum, and good riddance to it, I say. The simple fact is that the changing nature of customer needs and desires makes any fixed specification effectively useless; the only way to really properly approach the problem is with an iterative approach. Not that Scrum is a silver bullet, of course; I've seen it misused and abused many, many times. But it beats waterfall.
Never crashing.
It seems like such a good idea, doesn't it? Users don't like programs that crash, so let's write programs that don't crash, and users should like the program, right? That's how I started out.
Nowadays, I'm more inclined to think that if it doesn't work, it shouldn't pretend it's working. Fail as soon as you can, with a good error message. If you don't, your program is going to crash even harder just a few instructions later, but with some nondescript null-pointer error that'll take you an hour to debug.
My favorite "don't crash" pattern is this:
public User readUserFromDb(int id) {
    User u = null;
    try {
        ResultSet rs = connection.execute("SELECT * FROM user WHERE id = " + id);
        if (rs.moveNext()) {
            u = new User();
            u.setFirstName(rs.get("fname"));
            u.setSurname(rs.get("sname"));
            // etc
        }
    } catch (Exception e) {
        log.info(e);
    }
    if (u == null) {
        u = new User();
        u.setFirstName("error communicating with database");
        u.setSurname("error communicating with database");
        // etc
    }
    u.setId(id);
    return u;
}
Now, instead of asking your users to copy/paste the error message and sending it to you, you'll have to dive into the logs trying to find the log entry. (And since they entered an invalid user ID, there'll be no log entry.)
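For contrast, here is a fail-fast sketch of the same method, assuming the same connection field (a java.sql.Connection) and User bean as the original; the PreparedStatement also fixes the injection risk in passing:
public User readUserFromDb(int id) {
    String sql = "SELECT * FROM user WHERE id = ?";
    try (PreparedStatement ps = connection.prepareStatement(sql)) {
        ps.setInt(1, id);
        try (ResultSet rs = ps.executeQuery()) {
            if (!rs.next()) {
                // Fail as soon as you can, with a good error message.
                throw new NoSuchElementException("no user with id " + id);
            }
            User u = new User();
            u.setId(id);
            u.setFirstName(rs.getString("fname"));
            u.setSurname(rs.getString("sname"));
            // etc
            return u;
        }
    } catch (SQLException e) {
        // Keep the cause; the caller gets real context instead of a dummy User.
        throw new IllegalStateException("error reading user " + id + " from database", e);
    }
}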
I thought it made sense to apply design patterns whenever I recognised them.
Little did I know that I was actually copying styles from foreign programming languages, while the language I was working with allowed for far more elegant or easier solutions.
Using multiple (very) different languages opened my eyes and made me realise that I don't have to mis-apply other people's solutions to problems that aren't mine. Now I shudder when I see the factory pattern applied in a language like Ruby.
Obsessive testing. I used to be a rabid proponent of test-first development. For some projects it makes a lot of sense, but I've come to realize that it is not only unfeasible, but rather detrimental to many projects to slavishly adhere to a doctrine of writing unit tests for every single piece of functionality.
Really, slavishly adhering to anything can be detrimental.
This is a small thing, but: Caring about where the braces go (on the same line or next line?), suggested maximum line lengths of code, naming conventions for variables, and other elements of style. I've found that everyone seems to care more about this than I do, so I just go with the flow of whoever I'm working with nowadays.
Edit: The exception to this being, of course, when I'm the one who cares the most (or is the one in a position to set the style for a group). In that case, I do what I want!
(Note that this is not the same as having no consistent style. I think a consistent style in a codebase is very important for readability.)
Perhaps the most important "programming practice" I have since changed my mind about, is the idea that my code is better than everyone else's. This is common for programmers (especially newbies).
Utility libraries. I used to carry around an assembly with a variety of helper methods and classes with the theory that I could use them somewhere else someday.
In reality, I just created a huge namespace with a lot of poorly organized bits of functionality.
Now, I just leave them in the project I created them in. In all probability I'm not going to need it, and if I do, I can always refactor them into something reusable later. Sometimes I will flag them with a //TODO for possible extraction into a common assembly.
Designing more than I coded.
After a while, it turns into analysis paralysis.
The use of a DataSet to perform business logic. This binds the code too tightly to the database; also, the DataSet is usually created from SQL, which makes things even more fragile. If the SQL or the database changes, it tends to trickle down to everything the DataSet touches.
Performing any business logic inside an object constructor. Combined with inheritance and the ability to create overloaded constructors, this tends to make maintenance difficult.
Abbreviating variable/method/table/... names.
I used to do this all of the time, even when working in languages with no enforced limits on the lengths of names (well, they were probably 255 or something). One of the side effects was a lot of comments littered throughout the code explaining the (non-standard) abbreviations. And of course, if the names were changed for any reason...
Now I much prefer to call things what they really are, with good descriptive names, including only standard abbreviations. No need to include useless comments, and the code is far more readable and understandable.
Wrapping existing Data Access components, like the Enterprise Library, with a custom layer of helper methods.
It doesn't make anybody's life easier.
It's more code that can have bugs in it.
A lot of people know how to use the EntLib data access components. No one but the local team knows how to use the in-house data access solution.
I first heard about object-oriented programming while reading about Smalltalk in 1984, but I didn't have access to an o-o language until I used the cfront C++ compiler in 1992. I finally got to use Smalltalk in 1995. I had eagerly anticipated o-o technology, and bought into the idea that it would save software development.
Now, I just see o-o as one technique that has some advantages, but it's just one tool in the toolbox. I do most of my work in Python, and I often write standalone functions that are not class members, and I often collect groups of data in tuples or lists where in the past I would have created a class. I still create classes when the data structure is complicated, or I need behavior associated with the data, but I tend to resist it.
I'm actually interested in doing some work in Clojure when I get the time, which doesn't provide o-o facilities, although it can use Java objects if I understand correctly. I'm not ready to say anything like o-o is dead, but personally I'm not the fan I used to be.
In C#, using _notation for private members. I now think it's ugly.
I then changed to this.notation for private members, but found I was inconsistent in using it, so I dropped that too.
I stopped going by the university recommended method of design before implementation. Working in a chaotic and complex system has forced me to change attitude.
Of course I still do code research, especially when I'm about to touch code I've never touched before, but normally I try to focus on the smallest possible implementation to get something going first. This is the primary goal. Then I gradually refine the logic and let the design emerge by itself. Programming is an iterative process and works very well with an agile approach and with lots of refactoring.
The code will not look at all what you first thought it would look like. Happens every time :)
I used to be big into design-by-contract. This meant putting a lot of error checking at the beginning of all my functions. Contracts are still important, from the perspective of separation of concerns, but rather than try to enforce what my code shouldn't do, I try to use unit tests to verify what it does do.
I would use static's in a lot of methods/classes as it was more concise. When I started writing tests that practice changed very quickly.
Checked Exceptions
An amazing idea on paper - defines the contract clearly, no room for mistake or forgetting to check for some exception condition. I was sold when I first heard about it.
Of course, it turned out to be such a mess in practice. To the point of having libraries today, like Spring JDBC, that count hiding legacy checked exceptions among their main features.
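A minimal sketch of that translation mechanism in plain Java (DataAccessException here is a stand-in I made up, not Spring's actual class):
import java.sql.SQLException;

public final class Translation {

    // Unchecked wrapper: callers are no longer forced to declare or catch.
    static class DataAccessException extends RuntimeException {
        DataAccessException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    static int countUsers() {
        try {
            return queryCount(); // the checked exception stays in this layer
        } catch (SQLException e) {
            throw new DataAccessException("count query failed", e);
        }
    }

    static int queryCount() throws SQLException {
        return 42; // stand-in for real JDBC work
    }

    public static void main(String[] args) {
        System.out.println(countUsers());
    }
}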
That anything worthwhile was only coded in one particular language. In my case I believed that C was the best language ever and I never had any reason to code anything in any other language... ever.
I have since come to appreciate many different languages and the benefits/functionality they offer. If I want to code something small - quickly - I would use Python. If I want to work on a large project I would code in C++ or C#. If I want to develop a brain tumour I would code in Perl.
When I needed to do some refactoring, I thought it was faster and cleaner to start straightaway and implement the new design, fixing up the connections until they work. Then I realized it's better to do a series of small refactorings to slowly but reliably progress towards the new design.
Perhaps the biggest thing that has changed in my coding practices, as well as in others', is the acceptance of outside classes and libraries downloaded from the internet as the basis for behaviors and functionality in applications. When I attended college, we were encouraged to figure out how to make things better via our own code and to rely upon the language to solve our problems. With the advances in all aspects of user interface and service/data consumption, this is no longer a realistic notion.
There are certain things which will never change in a language, and having a library that wraps this code in a simpler transaction and in fewer lines of code that I have to write is a blessing. Connecting to a database will always be the same. Selecting an element within the DOM will not change. Sending an email via a server-side script will never change. Having to write this time and again wastes time that I could be using to improve my core logic in the application.
Initializing all class members.
I used to explicitly initialize every class member with something, usually NULL. I have come to realize that this:
normally means that every variable is initialized twice before ever being read
is silly, because most languages automatically initialize variables to NULL.
actually enforces a slight performance hit in most languages
can bloat code on larger projects
Like you, I also have embraced IoC patterns in reducing coupling between various components of my apps. It makes maintenance and parts-swapping much simpler, as long as I can keep each component as independent as possible. I'm also utilizing more object-relational frameworks such as NHibernate to simplify database management chores.
In a nutshell, I'm using "mini" frameworks to aid in building software more quickly and efficiently. These mini-frameworks save lots of time, and if done right can make an application super simple to maintain down the road. Plug 'n Play for the win!

What are the best practices for Design by Contract programming

What are the best practices for Design by Contract programming?
At college I learned the design-by-contract paradigm (in an OO environment).
We were taught three ways to tackle the problem:
1) Total Programming: covers all possible exceptional cases in its effect (cf. math)
2) Nominal Programming: only 'promises' the right effects when the preconditions are met (otherwise the effect is undefined)
3) Defensive Programming: use exceptions to signal illegal invocations of methods
Now, we focused on the correct use of each in different OO scenarios, but we never learned WHEN to use WHICH...
(Mostly the tactics were enforced by the exercise.)
Now I think it's very, very strange that I never asked my teacher (but then again, during lectures, no one did).
Personally, I never use nominal now, and I tend to replace preconditions with exceptions (so I'd rather use 'throws IllegalDivisionByZero' than state 'precondition: divisor should differ from zero'), and I only program totally where it makes sense (so I wouldn't return a conventional value on division by zero). But this method is just based on personal findings and likes.
So I am asking you guys: are there any best practices?
I didn't know about this division, and it doesn't really reflect my experience.
Total Programming is virtually impossible. You cannot guarantee that you cover all exceptional cases. So basically you should limit your scope and reject the situations that are out of scope (that's the role of the preconditions).
Nominal Programming is not desirable. Undefined effects should be banned.
Defensive Programming is a must. You should always signal illegal invocations of methods.
I'm in favour of implementing the complete set of Design-by-Contract elements, which is, in my opinion, a practical and affordable version of Total Programming; a sketch of all three elements follows this list.
Preconditions (a kind of Defensive Programming) to signal illegal invocations of the method. Try to limit your scope as much as you can so that you can simplify the code. Avoid complex implementations if possible by narrowing the scope a little.
Postconditions to raise an error if the desired effect is not obtained. Even if it is your fault, you should notify the caller that you missed your goal.
Invariants to check that the object consistency is preserved.
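Here is the promised sketch of all three elements, using plain Java asserts (the BoundedStack example is illustrative):
public final class BoundedStack {

    private final int[] items;
    private int size;

    public BoundedStack(int capacity) {
        items = new int[capacity];
        assert invariant();
    }

    // Invariant: the object's consistency, checked after every mutation.
    private boolean invariant() {
        return 0 <= size && size <= items.length;
    }

    public void push(int x) {
        // Precondition: signal an illegal invocation.
        if (size == items.length) {
            throw new IllegalStateException("push on a full stack");
        }
        int oldSize = size;
        items[size++] = x;
        // Postcondition: raise an error if the desired effect was not obtained.
        assert size == oldSize + 1 && items[size - 1] == x : "push postcondition violated";
        assert invariant();
    }
}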
It all boils down to what responsibilities you wish to assign to the client and the implementer of the contract.
In defensive programming you force the implementer to check for error conditions, which can be costly or even impossible in some cases. Imagine the contract specified by binarySearch, for example: your input array has to be sorted. You can't detect this while running the algorithm; you have to do a manual check for it, which would actually bump the execution time up by an order of magnitude. Backing my opinion up is the signature of that method in the javadocs.
Another point is that people and frameworks now tend to implement exception-translation mechanisms, used mainly to translate checked exceptions (defensive style) into runtime exceptions that just pop up if something goes wrong. This way the client and the implementer of the contract have less to worry about while dealing with each other.
Again this is my personal opinion backed only with what limited experience I have, I'd love to hear more about this subject.
...but we haven't learned WHEN to use WHICH...
I think the best practice is to be "as defensive as possible". Do your runtime checks if you can. As @MahdeTo mentioned, sometimes that's impossible for performance reasons; in such cases fall back on undefined or unsatisfactory behavior.
That said, be explicit in your documentation as to what has runtime checks and what does not.
Like much of computing "it depends" is probably the best answer.
Design by contract/programming by contract can help development greatly by explicitly documenting the conditions for a function. Just the documentation can be a help without even making it into (compiled) code.
Where feasible I recommend defensive - checking every condition. BUT only for development and debug builds. In this way most invalid assumptions are caught when the conditions are broken. A good build system would allow you to turn the different condition types on and off at a module or file level - as well as globally.
The actions taken in release versions of the software then depend upon the system and how the condition is triggered (the usual distinction between external and internal interfaces). The release version could be 'total programming': all conditions give a defined result (which can include errors or NaN).
To me "nominal programming" is a dead end in the real world. You assume that if you passed the right values in (which of course you did) then the value you receive is good. If your assumption was wrong - you break down.
I think that test-driven programming is the answer. Before actually implementing the module, you first create a unit test (call it a contract). Then gradually implement the functionality and make sure the contract is still valid as you go. Usually I start with plain stubs and mockups, then gradually fill out the rest, replacing the stubs with real stuff. Keep improving and making the test stronger. At the end you end up with a robust implementation of said module, plus you've got a fantastic test bed: a coded implementation of the contract. Later on, if someone modifies the module, first you see if it can still fit the test bed. If it doesn't, the contract is broken: reject the changes. Or the contract is outdated: fix the unit tests. And so on... the boring cycle of software development :)

Design By Contract and Test-Driven Development [closed]

I'm working on improving our group's development process, and I'm considering how best to implement Design By Contract with Test-Driven Development. It seems the two techniques have a lot of overlap, and I was wondering if anyone had some insight on the following (related) questions:
Isn't it against the DRY principle to have TDD and DbC unless you're using some kind of code generator to generate the unit tests based on contracts? Otherwise, you have to maintain the contract in two places (the test and the contract itself), or am I missing something?
To what extent does TDD make DbC redundant? If I write tests well enough, aren't they equivalent to writing a contract? Do I only get added benefit if I enforce the contract at run time as well as through the tests?
Is it significantly easier/more flexible to only use TDD rather than TDD with DbC?
The main point of these questions is this more general question: If we're already doing TDD properly, will we get a significant benefit for the overhead if we also use DbC?
A couple of details, though I think the question is largely language-agnostic:
Our team is very small, <10 programmers.
We mostly use Perl.
Note the differences.
Design driven by contract: Contract-Driven Design.
Development driven by test: Test-Driven Development.
They are related in that one precedes the other. They describe software at different levels of abstraction.
Do you discard the design when you go to implementation? Do you consider that a design document is a violation of DRY? Do you maintain the contract and the code separately?
Software is one implementation of the contract. Tests are another. User's manual is a third. Operations guide is a fourth. Database backup/restore procedures are one part of an implementation of the contract.
I can't see any overhead from Design by Contract.
If you're already doing design, then you change the format from too many words to just the right words to outline the contractual relationship.
If you're not doing design, then writing a contract will eliminate problems, reducing cost and complexity.
I can't see any loss of flexibility.
start with a contract,
then
a. write tests and
b. write code.
See how the two development activities are essentially intertwined and both come from the contract.
I think there is overlap between DbC and TDD, however, I don't think there is duplicated work: introducing DbC will probably result in a reduction of test cases.
Let me explain.
In TDD, tests aren't really tests. They are behavioral specifications. However, they are also design tools: by writing the test first, you use the external API of your object under test – that you haven't actually written yet – in the same way that a user would. That way, you design the API in a way that makes sense to a user, and not in the way that makes it easiest for you to implement. Something like queue.full? instead of queue.num_entries == queue.size.
This second part cannot be replaced by Contracts.
The first part can be partially replaced by contracts, at least for unit tests. TDD tests serve as specifications of behavior, both to other developers (unit tests) and domain experts (acceptance tests). Contracts also specify behavior, to other developers, to domain experts, but also to the compiler and the runtime library.
But contracts have fixed granularity: you have method pre- and postconditions, object invariants, module contracts and so on. Maybe loop variants and invariants. Unit tests however, test units of behavior. Those might be smaller than a method or consist of multiple methods. That's not something you can do with contracts. And for the "big picture" you still need integration tests, functional tests and acceptance tests.
And there is another important part of TDD that DbC doesn't cover: the middle D. In TDD, tests drive your development process: you never write a single line of implementation code unless you have a failing test, you never write a single line of test code unless your tests all pass, you only write the minimal amount of implementation code to make the tests pass, you only write the minimal amount of test code to produce a failing test.
In conclusion: use tests to design the "flow", the "feel" of the API. Use contracts to design the, well, contract of the API. Use tests to provide the "rhythm" for the development process.
Something like this (a code sketch for a single pass through these steps follows the list):
Write an acceptance test for a feature
Write a unit test for a unit that implements a part of that feature
Using the method signature you designed in step 2, write the method prototype
Add the postcondition
Add the precondition
Implement the method body
If the acceptance test passes, goto 1, otherwise goto 2
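And a small Java sketch of steps 2 through 6 for a single unit (all names are illustrative; the test uses JUnit 5):
import java.util.ArrayDeque;

public final class Queue {

    private final ArrayDeque<Integer> items = new ArrayDeque<>();

    // Step 3: the prototype, with its signature designed from the test below.
    public void enqueue(int x) {
        int oldSize = items.size(); // Step 5: no precondition needed here
        items.addLast(x);           // Step 6: the method body
        // Step 4: the postcondition
        assert items.size() == oldSize + 1 : "enqueue postcondition violated";
    }

    public boolean isEmpty() {
        return items.isEmpty();
    }
}

// Step 2: the unit test that drove the signature.
class QueueTest {
    @org.junit.jupiter.api.Test
    void enqueueMakesQueueNonEmpty() {
        Queue q = new Queue();
        q.enqueue(42);
        org.junit.jupiter.api.Assertions.assertFalse(q.isEmpty());
    }
}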
If you want to know what Bertrand Meyer, the inventor of Design by Contract, thinks about combining TDD and DbC, there is a nice paper by his group, called Contract-Driven Design = Test-Driven Development - Writing Test Cases. The basic premise is that contracts provide an abstract representation of all possible cases, whereas test cases only test specific cases. Therefore, a suitable test harness can be automatically generated from the contracts.
I would add:
the API is the contract for the programmers, the UI definition is the contract with the clients, the protocol is the contract for client-server interactions. Get those first, then you can take advantage of parallel development tracks and not get lost in the weeds. Yes, periodically review to make sure requirements are met, but never start a new track without the contract. And 'contract' is a strong word: once deployed, it must never change. You should include versioning management and introspection from the get-go, changes to the contract are only implemented by extension sets, version numbers change with these, and then you can do things like graceful degradation when dealing with mixed or old installations.
I learned this lesson the hard way, with a large project that wandered off into never-never land, then applied it the right way later when seriously under the gun, company-survival, short fuse timeline. We defined the protocol, defined and wrote a set of protocol emulations for each side of the transactions (basically canned message generators and received message checker, one evenings' worth of two-brained coding), then parted to separately write the server and client ends of the app. We recombined the night of the show, and it just worked. Requirements, design, contract, test, code, integrate. In that order. Repeat until baked.
I am a little leery of design by TLA. As with Patterns, buzz-word compliant recipes are a good guide, but it is my experience that there is no such thing as a one-size-fits-all design or project management procedure. If you are doing things precisely By The Book (tm) then, unless it is a DOD contract with DOD procedural requirements, you will probably get into trouble somewhere along the way. Read the Book(s), yes, but be sure to understand them, and then take into account also the people side of your team. Rules that are only enforced by the Book will not get enforced uniformly - even when tool-enforced there can be drop-outs (e.g. svn comments left empty or cryptically brief). Procedures only tend to get followed when the tool chain not only enforces them but makes following them easier than any possible short-cuts. Believe me, when the going gets tough, the short-cuts get found, and you may not know about the ones that got used at 3am until it is too late.
You can also use executable acceptance tests that are written in the domain language of the contract. It might not be the actual "contract", but half way between unit tests and the contract.
I would recommend using Ruby Cucumber:
http://github.com/aslakhellesoy/cucumber
But since you are a Perl shop, maybe you can use my own small attempt at p5-cucumber:
http://github.com/kesor/p5-cucumber
Microsoft has done work on automatic generation of unit tests based on code contracts and parameterized unit tests. E.g. the contract says the count must be increased by one when an item is added to a collection, and the parameterized unit test says how to add "n" items to a collection. Pex will then try to create a unit test that proves the contract is broken. See this video for an overview.
If this works, your unit test will only have to be written for one example of each thing you are trying to test, and Pex will then be able to work out which data items will break the test.
I had some ruminations about that topic some time ago.
You may want to take a look at
http://gleichmann.wordpress.com/2007/12/09/test-driven-development-and-design-by-contract-friend-or-foe/
When you are using TDD to implement a new method, you need some input: you need to know the assertions to check in your tests. Design-by-contract gives you those assertions: they are the post-conditions and invariants of the method.
I have found DbC very handy for jumpstarting the red-green-refactor cycle because it helps in identifying unit tests to start with. With DbC I start thinking about pre-conditions that the object being TDD-ed must handle and each pre-condition might represent a failing unit test to start a red-green-refactor cycle. At some point I switch to start the cycle with a failing unit test for a post-condition, and then just keep the TDD flow going. I have tried this approach with newcomers to TDD and it really works in kickstarting the TDD mindset.
In summary, think of DbC as an effective way to identify key behavioral unit tests. DbC helps at analyzing inputs (pre-conditions) and outputs (post-conditions), which are the two things that we need to control (inputs) and observe (outputs) to write testable software (a similar aim to TDD).

What is a good technique or exercise when learning a new language? [closed]

When you are learning a new language, what is a particularly good/effective exercise to help get the hang of it? And why?
EDIT:
Preferably looking for things that are more complicated than 'Hello World'.
I usually do the following (in the order presented):
Print a pyramid with height provided by the user (checks basic I/O, conditionals and loops)
Write a class hierarchy with polymorphism etc... (checks OO concepts)
Convert decimals to roman numerals (checks enums and basic data structures)
Write a linkedlist implementation (checks memory allocation/deallocation)
Write clones of JUnit and JMock (checks reflection/metaprogramming)
Write a console based chat system (checks basic networking)
Modify (6) to support group chat via multicasting (checks advanced networking)
Write a GUI for (7) (checks GUI library)
After that it's on to a real project...
Other than hello world, I try to port one of the existing programs to the new language. This challenges me to learn some good old techniques in the new language and helps me build a new library of classes or helpers.
Larry O'Brien had a great series of blog posts titled '15 Exercises to Know a Programming Language': Part 1, Part 2, Part 3.
See Larry's Blog for the details.
Part 1. Calculations
Write a program that takes as its first argument one of the words 'sum,' 'product,' 'mean,' or 'sqrt' and for further arguments a series of numbers. The program applies the appropriate function to the series.
Write a program that calculates a Haar wavelet on an array of numbers.
Write a program that takes as its argument the name of a bitmapped image. Apply the Haar wavelet to the pixel values. Save the results to a file.
Using the output file of the previous exercise, write a GUI program that reconstitutes the original bitmap (N.B.: the Haar wavelet is lossless).
Write a GUI program that deals with bitmap images
Part 2. Data Structures
Write a class (or module, or what-have-you: please map OOP terminology into whatever paradigm is appropriate) that only stores objects of the same type as the first object placed in it and raises an exception if an incompatible type is added.
Using the language's idioms, implement a tree-based datastructure (splay, AVL, or red-black).
Create a new type that uses a custom comparator (i.e., overrides "Equals"). Place more of these objects than can fit in memory into the datastructure created above, as well as into the standard libraries' equivalent, and compare the performance of the standard libraries with your own implementation.
Implement an iterator for your datastructure. Consider multithreading issues.
Write a multithreaded application that uses your data structure, comparable types, and iterators to implement the type-specific storage functionality as described in Exercise 6. How do you deal with concurrent inserts and traversals?
Part 3. Libraries
Write a program that outputs the current date and time to a Web page as a reversed ISO 8601-formatted value (i.e.: "2006-06-16T13:15:30Z" becomes "Z03:51:31T61-60-6002"). Create an XML interface (either POX or WS-*) to the same.
Write a client-side program that can both scrape the above Web page and the XML return and redisplays the date in a different format.
Write a daemon program that monitors an email account. When a strongly-encoded email arrives that decrypts to a valid ISO 8601 time, the program sets the system time to that value.
Write a program that connects to your mail client, performs a statistical analysis of its contents (see A Plan for Spam) and stores the results in a database.
Using the previous exercise, write a spam filter, including moving messages within your mail client.
If you can do all these things in 2 languages, I'm sure Google has a job for you.
'hello world!'
I really do think this is a good place to start. It's basic and only takes a few seconds, but you make sure your compiler is running and you have everything in place. Once you have that done, you can keep going. Add a variable, print to a database, print to a file. Make sure you know how to leave comments. This could all take a matter of 5 minutes. But it's important stuff.
Connect to data somehow, whether it be a database, file or other...
Red-Black tree.
I usually don't do very well unless I have a "real" project to apply the language to. Even made-up ones get boring fast. In fact, I find it helpful to throw yourself into the middle of a bigger project and make small changes to something that already works.
YMMV
My equivalent of a hello world is to do the following:
Retrieve multiple inputs (i.e., params from the command line, text boxes on a GUI)
Manipulate that input (i.e., do math on numbers and manipulate text)
On a GUI, use a list box.
Read and write files.
I feel that after doing the above I get a good feel for the language and a good introduction to the IDE, and to how easy (or really how difficult) it is to work with the language and the environment it runs in.
After that if I want to go further I will use the language in a real project that I need to do (probably a utility of some kind).
Personally I like to make a simple echo server and client to get the hang of network programming with that language.
Ray tracer.
I like to learn a new language by doing a "real" task (for "personal" use)
My first java program was a client for an online multiplayer game (which I then released into the public domain)
My first vb.net program was a front-end for my digital video recorder
My first VHDL "program" was a 64x32 led array controller
Often I'll implement the k-means clustering algorithm.
Drag-and-drop image gallery.
When I was cutting my teeth on Win32 and MFC, this was one of my first projects. Pretty quickly I ported all my code into ActiveX controls. Then I rewrote the thing in Java. For kicks, I rewrote it again in pure Javascript. When I broke into .Net, I rewrote the thing again in C#. Last but not least, I used it as an exercise for learning Objective-C and UIKit.
Why? It's a visually appealing toy, for one thing. It's nice to get instant gratification from your code, I think, and working with images is one of the most gratifying things I can think of.
Console based Tetris
I like games for learning programming because the business rules are carefully delineated. The first three programs I write in a new language are Ro-Sham-Bo, Blackjack, and Video Poker.
Pick a task(s) that you already understand. That way you limit the amount of "new stuff" you need to assimilate.
I think, for me, learning by porting existing code (for example, from another platform) is always a challenge and fun: just simple demos, board games, etc.
Mandelbrot set.