If Idris thinks things may be total that are not, can Idris be used for proofs?

http://docs.idris-lang.org/en/v0.99/tutorial/theorems.html#totality-checking-issues states that:
Secondly, the current implementation has had limited effort put into it so far, so there may still be cases where it believes a function is total which is not. Do not rely on it for your proofs yet!
Does this mean that Idris cannot be relied upon for proofs, or is there a way to create proofs that do not need the totality checker?

All proof assistants run the risk of implementation bugs (or even hardware bugs!) that may make the system inconsistent and allow users to prove anything. This risk can never be zero. Even if we prove the correctness of the implementation of a proof assistant, that proof also has to be proved in some other formal or informal system, subject to the same risks.
Therefore, what we should expect from a proof assistant is not infallible truth, merely strong evidence of validity. How strong that evidence is depends on our prior information about the trustworthiness of the system, and the extent to which we're able to look at a particular proof and determine whether it makes use of inconsistencies.
So it is not clear-cut how strong the evidence provided by Idris proofs is. I would say it is quite strong compared to informal proofs. Also, Idris proofs don't yet scale as far as Agda or especially Coq proofs, so it is likely still feasible to check them by human inspection for "exploits".

Related

Is there a way to prove a program has no bug?

I was thinking about the fact that we can prove a program has bugs, and that we can test it to gain confidence that it is more or less bug resistant.
But is there a way (even theoretically) to prove that a program has no bugs?
For simple programs, such as a "Hello World", I guess we should be able to do it.
But what about larger programs?
There are nowadays many different formalisms that can be used to prove programs correct (e.g., formalizations in proof assistants, dependently typed programming languages, separation logic, ...). As noted by others, there is no automatic way to prove any given program correct (see the halting problem). However, those mentioned formalisms are often applicable to specific programs. (Such an application can be far from automatic and require a tremendous amount of creativity.)
Another very important point is what we actually mean by proving a program correct or, as you stated, proving that a program has no bugs. Even with formal methods there is typically no way to say that nothing whatsoever can go wrong with a program. The reason is that formal methods usually show that a program conforms to a specification.
You can think of a specification as a logical formula (that states some property about a program) and of the correctness proof as a formal proof that a program satisfies this formula (i.e., enjoys the corresponding property). Due to this setup, everything outside the specification is not even "considered" by the proof. So to really show that a program has no bugs you would first have to write down a logical formula that states when a program does not have bugs.
So it would perhaps be more honest to say that formal methods are often able to prove (beyond doubt) that a program does not have certain kinds of bugs (depending on the specification used).
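To make the idea of "a program conforms to a specification" concrete, here is a minimal, hypothetical Java sketch (the function and its conditions are invented for illustration): the specification of an absolute-value routine is written as a precondition and a postcondition, and a correctness proof would show that the postcondition holds for every input satisfying the precondition. Note that anything outside these two formulas (running time, memory use, ...) is simply not covered by such a proof.

```java
// Hypothetical example: a tiny "specification" for abs(x), written as
// runtime-checkable pre- and postconditions. A formal proof would establish
// the postcondition for every input satisfying the precondition, without
// ever running the code.
public final class SpecExample {

    // Precondition:  x > Integer.MIN_VALUE (otherwise -x overflows).
    // Postcondition: result >= 0 and (result == x or result == -x).
    static int abs(int x) {
        assert x > Integer.MIN_VALUE : "precondition violated";
        int result = (x < 0) ? -x : x;
        assert result >= 0 && (result == x || result == -x) : "postcondition violated";
        return result;
    }

    public static void main(String[] args) {
        System.out.println(abs(-5)); // prints 5
    }
}
```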

How large a role does subjectiveness play in programming?

I often read about the importance of readability and maintainability. Or, I read very strong opinions about which syntax features are bad or good. Or discussions about the values of certain paradigms, like OOP.
Aside from that, this same question floats about in my mind whenever I read debates on SO or Meta about subjective questions. Or read questions about best practices and sometimes find myself or others disagreeing.
What role does subjectiveness play within the programming realm?
Sometimes I think it plays a large role. Software developers are engineers in a way, but also people. A large part of programming is dealing with code that's human readable. This is very different from Math or Physics or other disciplines with very exact and structured rules. Here the exact structure and rules are largely up in the air, changeable on a whim, hence the number of languages in existence. And one person may find one language very readable, while another person may find their own language the most comforting.
The same with practices. One person may not like certain accepted practices. I myself find splitting classes into different files very unreadable, for instance.
But, I can't say rules haven't helped in general. Certain practices have and do make life easier. And new languages have given rise to syntax and structure that make life easier. There's certainly been a progression towards code that is easier to read and maintain even given a largely diverse group of people. So maybe these things aren't as subjective as I thought.
It reminds me, in a way, of UI design. Certainly it's subjective, but then there's an entire discipline involved in crafting good UI and it tends to work.
Is there something non-subjective about the ideas behind maintainability, readability, and other best practices? Is there something tangible to grasp when one develops a new language or thinks of new practices?
Arguably your question is really about the distinction between programming, which is mathematical, algorithmic and scientific, and software engineering, which is subjective, variable and human-focused.
Great programmers are not necessarily great software engineers, and vice versa. The two skillsets, while not exclusive by any means, have less overlap than they appear at first. Their relative importance depends a lot on the project: a brilliant programmer working alone can turn out amazing examples of technical genius, and it doesn't matter that nobody else can understand or maintain it, because he's not going to share the code anyway. But move into an enterprise environment -- like corporate in-house software development -- and I'll gladly trade you ten "cave troll" geniuses for a mediocre programmer who understands the importance of readability and documentation.
It's been my experience that the world needs great software engineers more than it needs great programmers. Relatively few people in this day and age are writing software which is truly performance-critical (OS kernels, compilers, graphics engines, realtime embedded systems, etc), and the Internet allows mediocre programmers to quickly grab algorithmic solutions for problems they couldn't solve alone. But nearly everyone writing professional code has to work within a team. And team productivity rises and falls dramatically on the ability of its members to communicate effectively and distribute workload efficiently, two skills which are highly subjective and impossible to prove by rigid formula.
Most software engineering principles are built on experience rather than objective law. Much like the social sciences, we study, learn, adapt and apply -- but with no real guarantees of outcome. All we can say is that some things seem to work better than others in most groups.
I think a lot of it is necessarily determined by how much our mind is able to process at one time. So it comes down to how much the language and tools enable a team or a developer to break the problem down into chunks that are meaningful by themselves, but not so large that they become too hard to grasp. The common theme is the art of organizing information (in this case, the code, the logic, ...). But that's not so different from Maths or Physics, by the way.
Just as the best authors borrow from many styles, the best programmers keep a huge range of patterns in their mental arsenal. Slavishly following a few patterns and adhering to some absolute truth is both lazy and dangerous.
Put it another way, the day we rely on robots for code review is the day I quit.
It all depends on your point of view :-)
But to answer your questions, I think one way to view subjectivity is to recognize that software languages, tools, and best practices are a shared means of communication among individuals. Yes, a programming language is a formal way of instructing a computer how to behave, but a programming language may also be viewed as a way to define and communicate specifications to a high level of detail (the code is the ultimate spec, is it not?).
So as far as we may want to concern ourselves with the degree of subjectivity in software languages, tools, and best practices, I would say that the lack of subjectivity may indicate how well communication is facilitated.
Yes, individuals have certain proclivities that are expressed in their habits and tendencies, but that should not ultimately matter too much in the perfect platform for development.
Turning to my Maths PhD wife, I asked whether there is any subjectivity in mathematics. Her answer was yes, there is, mainly in the way we as humans arrive at the answer.
If a mathematical proof is the result, how you get to that result can vary. If the dataset is large you may need to use a computer, which can introduce errors, and that leads to debates about whether it is the right approach. Or sometimes mathematicians disagree on the theory: one is trying to prove that x is true while the other is trying to prove that x is false.
I think the same thing exists in computer science. A correct answer is a program that runs correctly, but that definition of correct may be different for each project. Sometimes correct means no bugs. Sometimes it means running efficiently.
From here programmers can argue how best to achieve the "correct" result. A good example of this is the FizzBuzz application. A simple answer would be just a for loop, while Enterprise FizzBuzz is also "correct" in that it produces the right output, yet it is generally laughed at as "bad" engineering due to its overcomplication of the idea (it was a joke app, after all).
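For reference, the "simple answer" alluded to above is something like the following minimal sketch (assuming the classic 1..100 rules; this is not taken from any particular answer):

```java
// Classic FizzBuzz: the "just a for loop" solution.
public class FizzBuzz {
    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            if (i % 15 == 0)      System.out.println("FizzBuzz");
            else if (i % 3 == 0)  System.out.println("Fizz");
            else if (i % 5 == 0)  System.out.println("Buzz");
            else                  System.out.println(i);
        }
    }
}
```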
How large a role does subjectiveness play in programming? I'd say it's a very large part of what we do, simply because we are human, and because there are multiple ways of getting the "correct" answer so there is disagreement over which way is the best.
Studies have been done showing that certain practices reduce defect rates in software. For instance, one study found a strong correlation between cyclomatic complexity and the probability of a module being fault-prone. Other studies put the average effectiveness of design and code inspections at 55 and 60 percent, respectively. So it appears to be in our best interests to favor simplicity, check metrics, and do code reviews.
We're talking probabilities here, though. If I review your code, I'm not guaranteed to find 60% of your bugs. There are also few absolutes in software development; experienced developers know that the correct answer is generally "it depends." That said, there are a number of practices with objective data in their favor.

Consequences of doing "good enough" software

Does doing "good enough" software take anything from you being a programmer?
Here are my thoughts on this:
Well, Joel Spolsky of JoelOnSoftware says that programmers get bored because they do "good enough" software (software that satisfies the requirements even though it is not that optimized). I agree, because people like to do things that are right all the way. On one side of the spectrum, I would want to go as far as:
Optimizing software in such a way that I can apply as much as possible of the Math and Computer Science knowledge I acquired in college.
Automating the entire software development process: get specs from a repository, generate the code, build, test, and deploy, complete with manuals, in a single automated build step.
On the other hand, a human trait is that we like variety. For us to stay in love with programming, we need to jump from one project or technology to another so that we don't get bored and can have "fun".
I would like your opinion: are there any good or bad side effects of doing "good enough" software, for you as a programmer or as a human being?
I actually consider good-enough programmers to be better than the blue-sky-make-sure-everything-is-perfect variety.
That's because, although I'm a coder, I'm also a businessman and realize that programs are not for the satisfaction of programmers, they're to meet a specific business need.
I actually had an argument in another question regarding the best way to detect a won tic-tac-toe/noughts-and-crosses game (an interview question).
The best solution I received was from a candidate who simply checked all 8 possibilities with if statements. There were some who gave generalized solutions which, while workable, were totally unnecessary, since the specs were quite clear that it was for a 3x3 board only.
Many people thought I was being too restrictive and the "winning" solution was rubbish but my opinion is that it's not the job of a programmer to write perfect beautifully-extendable software. It's their job to meet a business need.
If that business need allows them the freedom to do more than necessary, that's fine, but most software and fixes are delivered under time and cost constraints. Programmers (or any profession) don't work in a vacuum.
As a programmer I want to write excellent software that's defect-free. I'm not particularly interested in gold-plating, the act of adding unnecessary features that "improve" the software, though we all do it to a certain extent. In that sense, I'm satisfied with "good enough" software, if by good enough you mean that I've done what the customer asked and, at the same time, crafted it well and ensured that it is high quality.
What bothers me is when I take shortcuts and write crappy, untested code. I hate writing code that is buggy, or where I've failed to refactor it into a better design as I've gone along. When I let a lot of technical debt creep in -- getting too busy writing new features instead of consistently improving old ones as I add new ones -- then I know that eventually I'll have something that, while the customer may be happy with it, I won't be.
Fortunately, in my workplace, management knows the value of keeping the code clean and I know the value of not obsessing over the elusive goal of perfection. No code is ever perfect, but "good enough" has to mean that the code is well-crafted. I've learned, and am still learning, to be happy with code that meets the customer's requirements and that the best feature is the one that doesn't need to be implemented. Fortunately, I have enough work to do that dropping features because they're not needed is a good thing.
In my experience, "good enough" always includes hacks, sloppiness, bad commenting, and spaghetti hell, thus leads to lack of scalability, bugs, lagginess, and prevents others from being able to build effectively on your work.
Pax, while I recognize your points about business needs and pragmatism, doing things "by the book" is for the business side. "Good enough for now" and "just get something working right quick" always leads to far more work-hours later on fixing everything, or downright redoing it when it comes to that, than would be spent doing it right the first time. "The book" was written for a reason.
IMO there is a big difference between "good enough" and crappy code. For me, "good enough" is all about satisfying the requirements (both functional and non-functional). I think it is dangerous for people to assume that "good enough" means taking shortcuts or not optimizing code. If the non-functional requirements call for optimized code, then that is part of my definition of "good enough".
The key to your question is how one defines "good". To a business person, "good" software is software that solves the business need. In that case it is more about ensuring that the specifications were well understood and properly implemented. The business person may very well not care whether the program is as fast or memory-efficient as it could be.
Think about the commercial software you use: is it perfect? I really don't know anyone, including my friends at Microsoft, who would argue that the code in Windows is "perfect" or anything close to it. But it is undeniable that Windows is (and always has been) "good enough" to get millions of people to use it on a daily basis.
This issue goes back long before programming. I'm sure you have heard "If it ain't broke, don't fix it", or the original in French, "Le mieux est l'ennemi du bien." It may have been Voltaire who wrote that the best is the enemy of the good.
And consider what would happen if hiring managers decided to stop hiring "good" programmers and insisted that every applicant had a perfect 4.0 average in college; I, for one, would never have gotten a job as a programmer ;-)
So for me it is a case of doing the best you can, given the time and budget constraints. With more time and/or more money I could always do better.
"Good enough" is in the eye of the beholder. Far too often, "good enough" is the refuge of incompetent people who write something which creates the impression of satisfying the requirements of a job. My "good enough" is unlikely to be the same as their "good enough".
Ultimately, everything we do must involve trade-offs. Some people will make the wrong trade-offs and deliver crappy software and some people will make the wrong trade-offs and fail to deliver. Rare are the ones who can make the right trade-offs and deliver software that really is good enough.
There are at least two aspects of quality that we have to take into account:
software quality: does the software meet the desired goals/requirements? do we deliver builds which have critical bugs? is it easy for end users to operate?
code quality: how hard is it to maintain the code? is it easy to implement new features?
If you're building productized software, I think it is good to assume that it is never good enough in either aspect. Every little feature counts, and if users do not find what they need or the product is not stable enough, they will take a look at the competition. You also want to implement new features as quickly as possible, so that you have a competitive advantage in the market.
The situation gets interesting if you're building custom business software, where the end users and decision makers are usually not the same people, then the features/quality/money trade off becomes part of the negotiation process. What we usually do is we put "good enough" constraint on these three aspects: we have a set of requirements to meet, a quality to maintain and usually not enough time to keep both.
What is usually forgotten in this process is the second point: code quality, or maintainability. We programmers understand that sooner or later crappy code will take its revenge and result in critical bugs or maintenance costs. Decision makers don't. The problem is, the responsibility and the risks are taken by you (your company, your division, etc.), and you will be the first to blame if something goes wrong.
My opinion is: for software quality, do what the client tells you to do; they know best which features are critical for them, how buggy the software can be, etc. For code quality and maintainability, do the best you can, learn to do more, and teach others to do the same. This is where I get the fun from.
It depends what you mean by "good enough". I can see some risk at the design level: if you make it merely good enough, you may find maintaining and extending your application painful.
I think of programming as an art, an art that requires efficiency. Is efficient code incompatible with beautiful code? I doubt that. In fact, I think that when you solve a problem creatively it can mean a multiplied performance gain. I don't think that programming should only be about learning a new library for each new need, nor about bug tracking and fixing. I think it should be about beauty. Of course code cannot always be art, and sometimes one should be pragmatic about the problems encountered.

Do formal methods of program verification have a place in industry?

I took a glimpse at Hoare Logic in college. What we did was really simple. Most of what I did was proving the correctness of simple programs consisting of while loops, if statements, and sequences of instructions, but nothing more. These methods seem very useful!
Are formal methods used in industry widely?
Are these methods used to prove mission-critical software?
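For readers who have not seen the notation: the kind of reasoning the question refers to is usually written as Hoare triples {P} C {Q}, read "if precondition P holds before command C runs, postcondition Q holds afterwards". A minimal made-up example (not taken from the question) is:

```latex
% A minimal Hoare triple: computing the maximum of x and y.
\{\, \mathit{true} \,\}\quad
\texttt{if x > y then m := x else m := y}\quad
\{\, m \ge x \;\wedge\; m \ge y \;\wedge\; (m = x \,\vee\, m = y) \,\}
```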
Well, Sir Tony Hoare joined Microsoft Research about 10 years ago, and one of the things he started was a formal verification of the Windows NT kernel. Indeed, this was one of the reasons for the long delay of Windows Vista: starting with Vista, large parts of the kernel are actually formally verified with respect to certain properties, such as absence of deadlocks and absence of information leaks.
This is certainly not typical, but it is probably the single most important application of formal program verification, in terms of its impact (after all, almost every human being is in some way, shape or form affected by a computer running Windows).
This is a question close to my heart (I'm a researcher in Software Verification using formal logics), so you'll probably not be surprised when I say I think these techniques have a useful place, and are not yet used enough in the industry.
There are many levels of "formal methods", so I'll assume you mean those resting on a rigorous mathematical basis (as opposed to, say, following some Six-Sigma-style process). Some types of formal methods have had great success - type systems being one example. Static analysis tools based on data-flow analysis are also popular, model checking is almost ubiquitous in hardware design, and computational models like the Pi-Calculus and CCS seem to be inspiring some real change in practical language design for concurrency. Termination analysis is one that's had a lot of press recently - the SDV project at Microsoft and work by Byron Cook are recent examples of research/practice crossover in formal methods.
Hoare Reasoning has not, so far, made great inroads in the industry - this is for more reasons than I can list, but I suspect is mostly around the complexity of writing then proving specifications for real programs (they tend to get big, and fail to express properties of many real world environments). Various sub-fields in this type of reasoning are now making big inroads into these problems - Separation Logic being one.
This is partially the nature of ongoing (hard) research. But I must confess that we, as theorists, have entirely failed to educate the industry on why our techniques are useful, to keep them relevant to industry needs, and to make them approachable to software developers. At some level, that's not our problem - we're researchers, often mathematicians, and practical usage is not foremost in our minds. Also, the techniques being developed are often too embryonic for use in large scale systems - we work on small programs, on simplified systems, get the math working, and move on. I don't much buy these excuses though - we should be more active in pushing our ideas, and getting a feedback loop between the industry and our work (one of the main reasons I went back to research).
It's probably a good idea for me to resurrect my weblog, and make some more posts on this stuff...
I cannot comment much on mission-critical software, although I know that the avionics industry uses a wide variety of techniques to validate software, including Hoare-style methods.
Formal methods have suffered because early advocates like Edsger Dijkstra insisted that they ought to be used everywhere. Neither the formalisms nor the software support were up to the job. More sensible advocates believe that these methods should be used on problems that are hard. They are not widely used in industry, but adoption is increasing. Probably the greatest inroads have been in the use of formal methods to check safety properties of software. Some of my favorite examples are the SPIN model checker and George Necula's proof-carrying code.
Moving away from practice and into research, Microsoft's Singularity operating-system project is about using formal methods to provide safety guarantees that ordinarily require hardware support. This in turn leads to faster performance and stronger guarantees. For example, in Singularity they have proved that if a third-party device driver is allowed into the system (which means basic verification conditions have been proved), then it cannot possibly bring down the whole OS; the worst it can do is hose its own device.
Formal methods are not yet widely used in industry, but they are more widely used than they were 20 years ago, and 20 years from now they will be more widely used still. So you are future-proofed :-)
Yes, they are used, but not widely in all areas. There are more methods than just Hoare logic; some are used more, some less, depending on their suitability for a given task. The common problem is that software is big, and verifying that all of it is correct is still too hard a problem.
For example, the theorem prover ACL2 (software that assists humans in proving program correctness) has been used to prove that a certain floating-point processing unit does not have a certain type of bug. It was a big task, so this technique is not too common.
Model checking, another kind of formal verification, is used rather widely nowadays, for example Microsoft provides a type of model checker in the driver development kit and it can be used to verify the driver for a set of common bugs. Model checkers are also often used in verifying hardware circuits.
Rigorous testing can also be thought of as formal verification - there are formal specifications of which program paths should be tested, and so on.
"Are formal methods used in industry?"
Yes.
The assert statement in many programming languages is related to formal methods for verifying a program.
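As a minimal, hedged illustration of that connection (the function and checks below are invented, not from the answer): an assert states a property that must hold at a program point; a formal method tries to prove that property for every execution, while the runtime check only catches violations on the runs you actually execute.

```java
// An assertion states a property at a program point. Run with "java -ea"
// to enable the checks; a formal proof would establish the same properties
// for all possible inputs rather than for individual runs.
class AssertExample {
    static int indexOfMax(int[] a) {
        assert a != null && a.length > 0 : "precondition: non-empty array";
        int best = 0;
        for (int i = 1; i < a.length; i++) {
            if (a[i] > a[best]) best = i;
        }
        assert 0 <= best && best < a.length : "postcondition: valid index";
        return best;
    }
}
```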
"Are formal methods used in industry widely ?"
No.
"Are these methods used to prove mission-critical software ?"
Sometimes. More often, they're used to prove that the software is secure. More formally, they're used to prove certain security-related assertions about the software.
There are two different approaches to formal methods in the industry.
One approach is to change the development process completely. The Z notation and the B method that were mentioned are in this first category. B was applied to the development of the driverless subway line 14 in Paris (if you get a chance, climb in the front wagon. It's not often that you get a chance to see the rails in front of you).
Another, more incremental, approach is to preserve the existing development and verification processes and to replace only one of the verification tasks at a time by a new method. This is very attractive, but it means developing static analysis tools for existing, widely used languages that are often not easy to analyse (because they were not designed to be).
If you go to, for instance, http://dblp.uni-trier.de/db/indices/a-tree/d/Delmas:David.html you will find instances of practical applications of formal methods to the verification of C programs (with the static analyzers Astrée, Caveat, Fluctuat, and Frama-C) and of binary code (with tools from AbsInt GmbH).
By the way, since you mentioned Hoare Logic, in the above list of tools, only Caveat is based on Hoare logic (and Frama-C has a Hoare logic plug-in). The others rely on abstract interpretation, a different technique with a more automatic approach.
My area of expertise is the use of formal methods for static code analysis to show that software is free of run-time errors. This is implemented using a formal-methods technique known as "abstract interpretation". The technique essentially enables you to prove certain attributes of a program, e.g. prove that a+b will not overflow or that x/(x-y) will not result in a division by zero. An example of a static analysis tool that uses this technique is Polyspace.
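As a hedged illustration of the kind of property meant here (the functions below are made up, not taken from Polyspace's documentation): an abstract-interpretation-based analyzer tries to prove, for every reachable call, that the operations cannot fail at run time, and flags the ones it cannot prove safe.

```java
// The kind of run-time-error properties an abstract interpreter tries to
// prove or refute for all possible executions (illustrative functions only).
class RuntimeErrorExamples {
    static int ratio(int x, int y) {
        // A sound analyzer must warn here unless it can prove x != y at every call site.
        return x / (x - y);     // possible division by zero when x == y
    }

    static int sum(int a, int b) {
        // Likewise, it must warn unless a + b provably stays within the int range.
        return a + b;           // possible signed overflow
    }
}
```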
With respect to your question: "Are formal methods used in industry widely?" and "Are these methods used to prove mission-critical software?"
The answer is yes. This opinion is based on my experience and supporting the Polyspace tool for industries that rely on the use of embedded software to control safety critical systems such as electronic throttle in an automobile, braking system for a train, jet engine controller, drug delivery infusion pump, etc. These industries do indeed use these types of formal methods tools.
I don't believe all 100% of these industry segments are using these tools, but the use is increasing. My opinion is that the Aerospace and Automotive industries lead with the Medical Device industry quickly ramping up use.
Polyspace is a (hideously expensive, but very good) commercial product based on program verification. It's fairly pragmatic, in that it scales up from "enhanced unit testing that will probably find some bugs" to "the next three years of your life will be spent showing these 10 files have zero defects".
It is based more on negative verification ("this program won't corrupt your stack") than on positive verification ("this program will do precisely what these 50 pages of equations say it will").
To add to Jorg's answer, here's an interview with Tony Hoare. The tools Jorg's referring to, I think, are PREfast and PREfix. See here for more information.
Besides other, more procedural approaches, Hoare logic was at the basis of Design by Contract, introduced as an object-oriented technique by Bertrand Meyer in Eiffel (see Meyer's article of 1992, page 4). While Design by Contract is not the same as formal verification methods (for one thing, DbC doesn't prove anything; it only checks the contracts while the software is executed), in my opinion it provides a more practical use.

What are the best practices for Design by Contract programming

What are the best practices for Design by Contract programming?
At college I learned the design by contract paradigm (in an OO environment).
We learned three ways to tackle the problem:
1) Total Programming: cover all possible exceptional cases in the effect (cf. Math)
2) Nominal Programming: only 'promise' the right effect when the preconditions are met (otherwise the effect is undefined)
3) Defensive Programming: use exceptions to signal illegal invocations of methods
Now, we focused in different OO scenarios on the correct use in each situation, but we haven't learned WHEN to use WHICH...
(Mostly the tactics were enforced by the exercise.)
Now I think it's very, very strange that I haven't asked my teacher (but then again, during lectures, no one has).
Personally, I never use nominal now, and I tend to replace preconditions with exceptions (so I would rather use 'throws IllegalDivisionByZero' than state 'precondition: divisor should differ from zero'), and I only program totally where it makes sense (so I wouldn't return a conventional value on division by zero). But this method is just based on personal findings and likes.
So I am asking you guys: are there any best practices?
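To make the three approaches from the question concrete, here is a minimal, hypothetical Java sketch of a division routine written in each style (the names and the use of OptionalInt are illustrative choices, not prescribed by any framework; the exception mirrors the IllegalDivisionByZero idea from the question):

```java
import java.util.OptionalInt;

class DivisionStyles {
    // 1) Total programming: every input, including divisor == 0, has a defined effect.
    static OptionalInt divideTotal(int dividend, int divisor) {
        return divisor == 0 ? OptionalInt.empty() : OptionalInt.of(dividend / divisor);
    }

    // 2) Nominal programming: the effect is only promised when the precondition holds;
    //    behaviour for divisor == 0 is left undefined by the contract.
    /** Precondition: divisor != 0. */
    static int divideNominal(int dividend, int divisor) {
        return dividend / divisor;   // no check; the caller must respect the precondition
    }

    // 3) Defensive programming: illegal invocations are signalled with an exception.
    static int divideDefensive(int dividend, int divisor) {
        if (divisor == 0) {
            throw new IllegalArgumentException("divisor must differ from zero");
        }
        return dividend / divisor;
    }
}
```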
I didn't know about this division, and it doesn't really reflect my experience.
Total Programming is virtually impossible. You cannot guarantee that you cover all exceptional cases, so basically you should limit your scope and reject the situations that are out of scope (that's the role of preconditions).
Nominal Programming is not desirable. Undefined effects should be banned.
Defensive Programming is a must. You should always signal illegal invocations of methods.
I'm in favour of implementing the complete set of Design-by-Contract elements, which is, in my opinion, a practical and affordable version of Total Programming (a minimal sketch follows the list below):
Preconditions (a kind of Defensive Programming) to signal illegal invocations of the method. Try to limit your scope as much as you can so that you can simplify the code. Avoid complex implementations if possible by narrowing the scope a little.
Postconditions to raise an error if the desired effect is not obtained. Even if it is your own fault, you should notify the caller that you missed the goal.
Invariants to check that the object consistency is preserved.
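Here is the sketch referred to above: a minimal, hypothetical Java class showing the three contract elements with plain assertions rather than any dedicated DbC library (overflow handling is ignored for brevity).

```java
// Hypothetical account class sketching preconditions, postconditions and an invariant.
class Account {
    private long balance;            // invariant: balance >= 0

    private boolean invariant() {
        return balance >= 0;
    }

    void deposit(long amount) {
        assert amount > 0 : "precondition: amount must be positive";
        long before = balance;
        balance += amount;
        assert balance == before + amount : "postcondition: balance increased by amount";
        assert invariant() : "invariant: balance never negative";
    }
}
```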
It all boils down to which responsibilities you wish to assign to the client and to the implementer of the contract.
In defensive programming you force the implementer to check for error conditions, which can be costly or even impossible in some cases. Imagine the contract specified by binarySearch, for example: your input array has to be sorted. You can't detect this while running the algorithm; you have to do a separate check for it, which will actually bump the execution time up by an order of magnitude. To back my opinion up, see the signature and documentation of the method in the Javadocs.
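To make the cost argument concrete (the helper below is hypothetical, not part of the JDK): verifying the sortedness precondition takes a linear pass over the array, while the search itself is logarithmic, so checking defensively on every call would dominate the running time.

```java
import java.util.Arrays;

class SortedPrecondition {
    // O(n) check of the precondition that Arrays.binarySearch leaves to the caller.
    static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++) {
            if (a[i - 1] > a[i]) return false;
        }
        return true;
    }

    static int search(int[] a, int key) {
        assert isSorted(a) : "precondition: array must be sorted";  // O(n), only when assertions are enabled
        return Arrays.binarySearch(a, key);                         // O(log n)
    }
}
```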
Another point is that people and frameworks now tend to implement exception-translation mechanisms, used mainly to translate checked exceptions (defensive style) into runtime exceptions that simply pop up if something goes wrong. This way the client and the implementer of the contract have less to worry about when dealing with each other.
Again this is my personal opinion backed only with what limited experience I have, I'd love to hear more about this subject.
...but we haven't learned WHEN to use WHICH...
I think the best practice is to be "as defensive as possible". Do your runtime checks if you can. As @MahdeTo has mentioned, sometimes that's impossible for performance reasons; in such cases fall back on undefined or unsatisfactory behavior.
That said, be explicit in your documentation as to what has runtime checks and what does not.
Like much of computing "it depends" is probably the best answer.
Design by contract/programming by contract can help development greatly by explicitly documenting the conditions for a function. Just the documentation can be a help without even making it into (compiled) code.
Where feasible I recommend defensive - checking every condition. BUT only for development and debug builds. In this way most invalid assumptions are caught when the conditions are broken. A good build system would allow you to turn the different condition types on and off at a module or file level - as well as globally.
The actions taken in release versions of the software then depend upon the system and how the condition is triggered (the usual distinction between external and internal interfaces). The release version could be "total programming": all conditions give a defined result (which can include errors or NaN).
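A minimal sketch of the kind of build-level toggling described above (the class and flags are invented; in Java, compile-time constant booleans let the compiler and JIT strip the disabled checks entirely):

```java
// Hypothetical per-build condition checking: each category of check can be
// switched on or off by a compile-time constant, e.g. generated per build or module.
class Checks {
    static final boolean CHECK_PRECONDITIONS = true;    // on in debug builds
    static final boolean CHECK_INVARIANTS    = false;   // off in release builds

    static void requirePre(boolean condition, String message) {
        if (CHECK_PRECONDITIONS && !condition) {
            throw new IllegalStateException("precondition failed: " + message);
        }
    }

    static void requireInvariant(boolean condition, String message) {
        if (CHECK_INVARIANTS && !condition) {
            throw new IllegalStateException("invariant failed: " + message);
        }
    }
}
```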
To me "nominal programming" is a dead end in the real world. You assume that if you passed the right values in (which of course you did) then the value you receive is good. If your assumption was wrong - you break down.
I think that test-driven programming is the answer. Before actually implementing the module, you first create a unit test (call it a contract). Then you gradually implement the functionality and make sure the contract stays valid as you go. Usually I start with plain stubs and mock-ups, then gradually fill out the rest, replacing the stubs with real code. Keep improving and making the tests stronger. At the end you have a robust implementation of the module, plus a fantastic test bed: a coded implementation of the contract. Later on, if someone modifies the module, you first see whether it still fits the test bed. If it doesn't, the contract is broken: reject the changes. Or the contract is outdated: fix the unit tests. And so on... the boring cycle of software development :)
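A minimal illustration of the "contract as a unit test" idea, as a hedged sketch using JUnit 5 (the Divider class and its behaviour are made up for the example):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import org.junit.jupiter.api.Test;

// Hypothetical module under test: starts as a stub and is filled in until the tests pass.
class Divider {
    static int divide(int dividend, int divisor) {
        if (divisor == 0) throw new IllegalArgumentException("divisor must be non-zero");
        return dividend / divisor;
    }
}

// The tests are written first and act as the executable contract for the module.
class DividerContractTest {
    @Test
    void dividesWhenPreconditionHolds() {
        assertEquals(4, Divider.divide(8, 2));
    }

    @Test
    void signalsIllegalInvocation() {
        assertThrows(IllegalArgumentException.class, () -> Divider.divide(1, 0));
    }
}
```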