Compile time checking in Design by Contract? - design-by-contract

I read that a compiler can enforce DbC at compile time. How does it do that?

As far as I know, the most powerful static DbC language so far is Spec# by Microsoft Research. It uses a powerful static analysis tool called Boogie which in turn uses a powerful Theorem Prover / Constraint Solver called Z3 to prove either the fulfillment or violation of contracts at design time.
If the Theorem Prover can prove that a contract will always be violated, that's a compile error. If the Theorem Prover can prove that a contract will never be violated, that's an optimization: the contract checks are removed from the final DLL.
As Charlie Martin points out, proving contracts in general is equivalent to solving the Halting Problem and thus not possible. So, there will be a lot of cases, where the Theorem Prover can neither prove nor disprove the contract. In that case, a runtime check is emitted, just like in other, less powerful contract systems.
Please note that Spec# is no longer being developed. The contract engine has been extracted into a library, called Code Contracts for .NET, which will be a part of .NET 4.0 / Visual Studio 2010. However, there will be no language support for contracts.

The compiler can use static analysis to look at your program and determine whether it does the right thing. As a simple example, the following code might try to take the square root of a negative number (C++):
double x;
cin >> x;
cout << sqrt(x) << endl;
If the compiler knows that sqrt should never be called with a negative number, it could flag this as a problem because it knows that reading from user input could return a negative number. On the other hand, if you do this:
double x;
cin >> x;
if (x >= 0) {
    cout << sqrt(x) << endl;
} else {
    cout << "Can't take square root of negative number" << endl;
}
then the compiler can tell that this code won't ever call sqrt with a negative number.

Design by Contract is a very abstract term, since there can be many specification formalisms with different powers of expression. In addition, there is at present a limit to the abilities of static analysis to check and enforce specifications. It's one of the most active academic and industrial research fields in computer science.
In practice, it is likely that you will use some subset of contracts and checking, and that depends on the language that you are using and on the plugins or programs you install.
In general, static analysis tries to build a model of the contract, and a model of the actual program, and compare them. For example, if the contract doesn't let you invoke a function when an object is in state S, it will try to determine whether in any invocation sequence you could end up in state S.

Which compiler and what language? Eiffel can do it to some extent. But remember that completely enforcing design-by-contract would mean being able to solve the Halting Problem (proof: assume you have a compiler that could do it. Then the compiler would have to be able to identify an arbitrary function with a true exit condition that can't achieve the exit condition because of an infinite loop or infinite recursion. Thus it reduces to halting.)
What is usually meant by this is that if you have a call
foo(a);
and somewhere else define
function foo(a:int) is
    assert 0 < a && a < 128
    ...
end
then the compiler can check that a is, in fact, going to be in the open interval (0..128).

Some languages like D have reasonably powerful compile-time constant folding and compile-time condition checking (in D, static assert(boolCond, msg); IIRC C/C++ can use #if together with #pragma or #error).
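For example, a minimal sketch of both flavours in C++ terms (static_assert requires C++11; the #if/#error route also works in plain C; the bufferSize name is made up for illustration):

#include <climits>

constexpr int bufferSize = 128;
// Checked entirely at compile time; a violation is a compile error, not a runtime check.
static_assert(bufferSize > 0 && bufferSize % 8 == 0,
              "bufferSize must be a positive multiple of 8");

#if CHAR_BIT != 8
#error "this code assumes 8-bit bytes"
#endif

int main() {}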

How to detect function pointers from assignment statements in LLVM IR?

I want to detect all occurrences of function pointers, calls as well as assignments. I am able to detect indirect calls, but how do I detect the assignment ones? Is there a char iterator in the Instruction.h header, or something which iterates character by character over each instruction in the .ll file?
Any help or links will be appreciated.
Start with unoptimized LLVM IR, then scan the use-list for all functions.
In the comments we established that this is for a school project, not a production system. The difference when building a production system is that you should rely only on the properties that are guaranteed, while for research or prototyping or schoolwork, you can build something that happens to work by relying on properties that are not guaranteed. That's what we're going to do here.
It so happens (but is not guaranteed!) that as clang converts C to LLVM IR, it will[1] emit a store for every function pointer used. As long as we don't run the LLVM optimizer, they will still be there.
The "forwards" direction would be to look into instructions and see whether any of them do the action you want (assign or call a function pointer). I've got a better idea: let's do it backwards. Every llvm::Value has a list of all the places where that value is used, called the use-list. For example, given %X = add i32 %Y, %Z, the instruction %X would appear in the use-list of %Y, because %X is using %Y.
Starting with your Module, iterate over every function (i.e. for (auto &F : M->functions()) {), then scan the use-list for the function (i.e. for (const auto *U : F.users())) and look at each of those users. If the user is an Instruction, you can query which Function this Instruction belongs to -- the Instruction has a parent BasicBlock and the BasicBlock has a parent Function. If it's a ConstantExpr, you'll need to recurse through that ConstantExpr's use-list until you find all the instructions using those constants. If the instruction is a direct call, you want to skip it (unless it's a direct call with an argument that is also a function pointer, like somefun(1, &somefun, 2);).
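Putting that together, a rough sketch might look like the following. It assumes a reasonably recent LLVM (CallBase, getCalledOperand), and the function names reportUses and findFunctionPointerUses are mine, not anything from the question:

#include "llvm/IR/Constants.h"
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

// Report every instruction that refers to F, recursing through ConstantExprs.
static void reportUses(const Value &V, const Function &F) {
  for (const User *U : V.users()) {
    if (const auto *I = dyn_cast<Instruction>(U)) {
      // Skip plain direct calls where F is only the callee, not an argument.
      if (const auto *CB = dyn_cast<CallBase>(I)) {
        bool passedAsArg = false;
        for (const Value *Arg : CB->args())
          if (Arg == &F)
            passedAsArg = true;
        if (!passedAsArg && CB->getCalledOperand()->stripPointerCasts() == &F)
          continue;
      }
      // Instruction -> parent BasicBlock -> parent Function.
      errs() << F.getName() << " is used as a pointer in "
             << I->getFunction()->getName() << ": " << *I << "\n";
    } else if (const auto *CE = dyn_cast<ConstantExpr>(U)) {
      reportUses(*CE, F); // e.g. a bitcast of the function wrapped in a constant
    }
  }
}

void findFunctionPointerUses(const Module &M) {
  for (const Function &F : M.functions()) // every function defined or declared in the module
    reportUses(F, F);
}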
This will miss any code that uses C typed function pointers but never point to any functions, for instance C code like void (*fun_ptr)(int) = NULL;. If this is important to you then I suggest writing a Clang AST tool with the RecursiveASTVisitor instead of using LLVM.
[1] clang is free not to, for instance given if (0) { /* ... */ } clang could skip emitting LLVM IR for the disabled code because it's faster to do that. If your function pointer use was in there, LLVM would never get the opportunity to see it.

Does D std lib include something like boost.fusion and boost.mpl?

I'm still evaluating whether I should start using D for prototyping numerical code in physics.
One thing that stops me is that I like boost, specifically fusion and mpl.
D is amazing for template meta-programming, and I would think it can do mpl and fusion stuff, but I would like to make sure.
Even if I do start using D, it would take me a while to get to the mpl level. So I'd like someone to share their experience.
(By mpl I mean something like the STL for templates, and by fusion, the STL for tuples.)
A note on performance would be nice too, since it's critical in physics simulations.
In D, for the most part, meta-programming is just programming. There's not really any need for a library like boost.mpl.
For example, consider the lengths you would have to go to in C++ to sort an array of numbers at compile time. In D, you just do the obvious thing: use std.algorithm.sort.
import std.algorithm;

int[] sorted(int[] xs)
{
    int[] ys = xs.dup;
    sort(ys);
    return ys;
}

pragma(msg, sorted([2, 1, 3]));
This prints out [1, 2, 3] at compile time. Note: sort is not built into the language and has absolutely no special code for working at compile time.
Here's another example that builds a lookup table for the Fibonacci sequence at compile time.
import std.algorithm, std.range, std.exception;

int[] fibs(int n)
{
    auto fib = recurrence!("a[n-1] + a[n-2]")(1, 1);
    int[] ret = new int[n];
    copy(fib.take(n), ret);
    return ret;
}

immutable int[] fibLUT = fibs(10).assumeUnique();
Here, fibLUT is constructed entirely at compile time, again without any special compile time code needed.
If you want to work with types, there are a few type meta functions in std.typetuple. For example:
static assert(is(Filter!(isUnsigned, int, byte, ubyte, dstring, dchar, uint, ulong) ==
TypeTuple!(ubyte, uint, ulong)));
That library, I believe, contains most of the functionality you can get from Fusion. Remember though, you don't need template meta-programming in D nearly as much as you do in C++, because most of the language is available at compile time anyway.
I can't really comment on performance because I don't have vast experience with both. However, my instinct would be that D's compile time execution is faster because you generally don't need to instantiate numerous templates. Of course, C++ compilers are more mature, so I could be wrong here. The only way you'll really find out is by trying it for your particular use case.

Implementation techniques for FSM states

How do you go about implementing FSM (EDIT: Finite State Machine) states?
I usually think about an FSM as a set of functions, a dispatcher, and a thread to indicate the current running state. Meaning, I do blocking calls to functions/functors representing states.
Just now I have implemented one in a different style, where I still represent states with function(object)s, but the thread just calls a state->step() method, which tries to return as quickly as possible. In case the state has finished and a transition should take place, it indicates that accordingly. I would call this the 'polling' style, since the functions mostly look like:
void step()
{
    if (!HaveReachedGoal)
    {
        doWhateverNecessary();
        return; // get out as fast as possible
    }

    // ... test perhaps some more subgoals
    indicateTransition();
}
I am aware that it is an FSM within an FSM.
It feels rather simplistic, but it has certain advantages. While a thread being blocked, or held in some kind of
while (!CanGoForward) checkGoForward();
loop, can be cumbersome and unwieldy, the polling style felt much easier to debug. That's because the FSM object regains control after every step, so putting out some debug info is a breeze.
Well, I am deviating from my question: how do you implement the states of your FSMs?
The state Design Pattern is an interesting way of implementing a FSM:
http://en.wikipedia.org/wiki/State_pattern
It's a very clean way of implementing an FSM, but it can get messy depending on the complexity of your FSM (though not on the number of states). However, the advantages are that:
you eliminate code duplication (especially if/else statements)
It is easier to extend with new states
Your classes have better cohesion, so all related logic is in one place - this should also make your code easier to write tests for.
There is a Java and C++ implementation at this site:
http://www.vincehuston.org/dp/state.html
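As a rough sketch of the pattern, here is a minimal, hypothetical two-state machine in C++ (assuming C++14 for make_unique; the Off/On states are made up for illustration):

#include <iostream>
#include <memory>

struct Context; // forward declaration

struct State {
    virtual ~State() = default;
    virtual void handle(Context &ctx) = 0; // each concrete state decides the transition itself
};

struct Context {
    std::unique_ptr<State> state;
    void request();
};

struct Off : State { void handle(Context &ctx) override; };
struct On  : State { void handle(Context &ctx) override; };

void Off::handle(Context &ctx) {
    std::cout << "Off -> On\n";
    ctx.state = std::make_unique<On>(); // transition by swapping the state object
}

void On::handle(Context &ctx) {
    std::cout << "On -> Off\n";
    ctx.state = std::make_unique<Off>();
}

void Context::request() { state->handle(*this); } // no if/else chains in the context

int main() {
    Context ctx{std::make_unique<Off>()};
    ctx.request(); // Off -> On
    ctx.request(); // On -> Off
}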
There’s always what I call the Flying Spaghetti Monster’s style of implementing FSMs (FSM-style FSMs): using lotsa gotos. For example:
state1:
    do_something();
    goto state2;

state2:
    if (condition) goto state1;
    else goto state3;

state3:
    accept;
Very nice spaghetti code :-)
I did it as a table, a flat array in memory where each cell is a state. Please have a look at the CVS source of the abandoned DFA project. For example:
class DFA {
    DFA();
    DFA(int mychar_groups, int mycharmap[256], int myi_state);
    ~DFA();
    void add_trans(unsigned int from, char sym, unsigned int to);
    void add_trans(unsigned int from, unsigned int symn, unsigned int to);
    /* adds a transition from state from to state to */
    int add_state(bool accepting = false);
    int to(int state, int symn);
    int to(int state, char sym);
    void set_char(char used_chars[], int);
    void set_char(set<char> char_set);
    vector<int> table; /* contains the table of the dfa itself */
    void normalize();
    vector<unsigned int> char_map;
    unsigned int char_groups; /* number of characters the DFA uses,
                                 char_groups=0 means 1 character group is used */
    unsigned int i_state; /* initial state of the DFA */
    void switch_table_state(int first, int sec);
    unsigned int num_states;
    set<int> accepting_states;
};
But this was for a very specific need (matching regular expressions)
I remember my first FSM program. I wrote it in C with a very simple switch statement. Switching from one state to another, or falling through to the next state, seemed natural.
Then I progressed to a table-lookup approach. I was able to write some very generic code using this approach. However, I was caught out a couple of times when the requirements changed and I had to support some extra events.
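For illustration, a minimal table-lookup FSM in C++ might look something like this (the states, events, and transitions here are made up, not from any project mentioned in this thread):

#include <initializer_list>
#include <iostream>

enum State { Idle, Running, Done, NumStates };
enum Event { Start, Finish, Reset, NumEvents };

// transition[state][event] gives the next state; the table *is* the machine.
constexpr State transition[NumStates][NumEvents] = {
    /* Idle    */ { Running, Idle, Idle },
    /* Running */ { Running, Done, Idle },
    /* Done    */ { Done,    Done, Idle },
};

int main() {
    State s = Idle;
    for (Event e : { Start, Finish, Reset })
        s = transition[s][e];
    std::cout << "final state: " << s << "\n"; // back to Idle (0)
}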
I have not written any FSMs lately. The last one I wrote was for a comms module in C++ where I used a "state design pattern" in conjunction with a "command pattern" (action).
If you are creating a complex state machine then you may want to check out SMC - the State Machine Compiler. This takes a textual representation of a state machine and compiles it into the language of your choice - it supports Java, C, C++, C#, Python, Ruby, Scala and many others.

Can every recursion be converted into iteration?

A reddit thread brought up an apparently interesting question:
Tail-recursive functions can trivially be converted into iterative functions. Others can be transformed by using an explicit stack. Can every recursion be transformed into iteration?
The (counter?)example in the post is the pair:
(define (num-ways x y)
  (cond ((= x 0) 1)
        ((= y 0) 1)
        (else (num-ways2 x y))))

(define (num-ways2 x y)
  (+ (num-ways (- x 1) y)
     (num-ways x (- y 1))))
Can you always turn a recursive function into an iterative one? Yes, absolutely, and the Church-Turing thesis proves it if memory serves. In lay terms, it states that what is computable by recursive functions is computable by an iterative model (such as the Turing machine) and vice versa. The thesis does not tell you precisely how to do the conversion, but it does say that it's definitely possible.
In many cases, converting a recursive function is easy. Knuth offers several techniques in "The Art of Computer Programming". And often, a thing computed recursively can be computed by a completely different approach in less time and space. The classic example of this is Fibonacci numbers or sequences thereof. You've surely met this problem in your degree plan.
On the flip side of this coin, we can certainly imagine a programming system so advanced as to treat a recursive definition of a formula as an invitation to memoize prior results, thus offering the speed benefit without the hassle of telling the computer exactly which steps to follow in the computation of a formula with a recursive definition. Dijkstra almost certainly did imagine such a system. He spent a long time trying to separate the implementation from the semantics of a programming language. Then again, his non-deterministic and multiprocessing programming languages are in a league above the practicing professional programmer.
In the final analysis, many functions are just plain easier to understand, read, and write in recursive form. Unless there's a compelling reason, you probably shouldn't (manually) convert these functions to an explicitly iterative algorithm. Your computer will handle that job correctly.
I can see one compelling reason. Suppose you've a prototype system in a super-high level language like [donning asbestos underwear] Scheme, Lisp, Haskell, OCaml, Perl, or Pascal. Suppose conditions are such that you need an implementation in C or Java. (Perhaps it's politics.) Then you could certainly have some functions written recursively but which, translated literally, would explode your runtime system. For example, infinite tail recursion is possible in Scheme, but the same idiom causes a problem for existing C environments. Another example is the use of lexically nested functions and static scope, which Pascal supports but C doesn't.
In these circumstances, you might try to overcome political resistance to the original language. You might find yourself reimplementing Lisp badly, as in Greenspun's (tongue-in-cheek) tenth law. Or you might just find a completely different approach to solution. But in any event, there is surely a way.
Is it always possible to write a non-recursive form for every recursive function?
Yes. A simple formal proof is to show that µ recursion and a non-recursive calculus such as GOTO are both Turing complete. Since all Turing complete calculi are strictly equivalent in their expressive power, all recursive functions can be implemented by the non-recursive Turing-complete calculus.
Unfortunately, I’m unable to find a good, formal definition of GOTO online so here’s one:
A GOTO program is a sequence of commands P executed on a register machine, where each command is one of the following:
HALT, which halts execution
r = r + 1 where r is any register
r = r – 1 where r is any register
GOTO x where x is a label
IF r ≠ 0 GOTO x where r is any register and x is a label
A label, followed by any of the above commands.
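As an illustration (my own, not part of the definition above), here is a tiny GOTO-style program rendered with C++ labels and gotos; it adds two "registers" by repeated decrement/increment:

#include <iostream>

int main() {
    unsigned r0 = 3, r1 = 4; // two "registers"
loop:
    if (r1 != 0) goto body;  // IF r1 != 0 GOTO body
    goto done;               // GOTO done
body:
    r1 = r1 - 1;             // r1 = r1 - 1
    r0 = r0 + 1;             // r0 = r0 + 1
    goto loop;               // GOTO loop
done:
    std::cout << r0 << "\n"; // HALT (prints 7)
}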
However, the conversion between recursive and non-recursive functions isn't always trivial (except by mindless manual re-implementation of the call stack).
For further information see this answer.
Recursion is implemented as stacks or similar constructs in the actual interpreters or compilers. So you certainly can convert a recursive function to an iterative counterpart because that's how it's always done (if automatically). You'll just be duplicating the compiler's work in an ad-hoc and probably in a very ugly and inefficient manner.
Basically yes; in essence, what you end up having to do is replace method calls (which implicitly push state onto the stack) with explicit stack pushes to remember where the 'previous call' had gotten up to, and then execute the 'called method' instead.
I'd imagine that the combination of a loop, a stack and a state-machine could be used for all scenarios by basically simulating the method calls. Whether or not this is going to be 'better' (either faster, or more efficient in some sense) is not really possible to say in general.
Recursive function execution flow can be represented as a tree.
The same logic can be done by a loop, which uses a data-structure to traverse that tree.
Depth-first traversal can be done using a stack, breadth-first traversal can be done using a queue.
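To make that concrete, here is a minimal C++ sketch of a depth-first traversal where an explicit std::stack stands in for the call stack (the Node type is hypothetical):

#include <iostream>
#include <stack>

struct Node {
    int value;
    Node *left = nullptr;
    Node *right = nullptr;
};

// Recursive version, for comparison.
void visitRecursive(const Node *n) {
    if (!n) return;
    std::cout << n->value << " ";
    visitRecursive(n->left);
    visitRecursive(n->right);
}

// Iterative version: the std::stack plays the role of the call stack.
void visitIterative(const Node *root) {
    std::stack<const Node *> pending;
    if (root) pending.push(root);
    while (!pending.empty()) {
        const Node *n = pending.top();
        pending.pop();
        std::cout << n->value << " ";
        // Push right first so the left subtree is visited first, matching the recursion.
        if (n->right) pending.push(n->right);
        if (n->left) pending.push(n->left);
    }
}

int main() {
    Node leaves[2] = {{2}, {3}};
    Node root{1, &leaves[0], &leaves[1]};
    visitRecursive(&root); std::cout << "\n"; // 1 2 3
    visitIterative(&root); std::cout << "\n"; // 1 2 3
}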
So, the answer is: yes. Why: https://stackoverflow.com/a/531721/2128327.
Can any recursion be done in a single loop? Yes, because a Turing machine does everything it does by executing a single loop:
1. fetch an instruction,
2. evaluate it,
3. go to 1.
Yes, using explicitly a stack (but recursion is far more pleasant to read, IMHO).
Yes, it's always possible to write a non-recursive version. The trivial solution is to use a stack data structure and simulate the recursive execution.
In principle it is always possible to remove recursion and replace it with iteration in a language that has infinite state both for data structures and for the call stack. This is a basic consequence of the Church-Turing thesis.
Given an actual programming language, the answer is not as obvious. The problem is that it is quite possible to have a language where the amount of memory that can be allocated in the program is limited but where the amount of call stack that can be used is unbounded (32-bit C where the address of stack variables is not accessible). In this case, recursion is more powerful simply because it has more memory it can use; there is not enough explicitly allocatable memory to emulate the call stack. For a detailed discussion on this, see this discussion.
All computable functions can be computed by Turing Machines and hence the recursive systems and Turing machines (iterative systems) are equivalent.
Sometimes replacing recursion is much easier than that. Recursion used to be the fashionable thing taught in CS in the 1990's, and so a lot of average developers from that time figured if you solved something with recursion, it was a better solution. So they would use recursion instead of looping backwards to reverse order, or silly things like that. So sometimes removing recursion is a simple "duh, that was obvious" type of exercise.
This is less of a problem now, as the fashion has shifted towards other technologies.
Recursion is nothing but calling the same function on the stack, and once the function dies out it is removed from the stack. So one can always use an explicit stack to manage this calling of the same operation using iteration.
So yes, all recursive code can be converted to iteration.
Removing recursion is a complex problem and is feasible under well-defined circumstances.
The cases below are among the easy ones:
tail recursion
direct linear recursion
Apart from the explicit stack, another pattern for converting recursion into iteration is the use of a trampoline.
Here, the functions either return the final result or a closure of the function call that they would otherwise have performed. Then the initiating (trampolining) function keeps invoking the closures returned until the final result is reached.
This approach works for mutually recursive functions, but I'm afraid it only works for tail calls.
http://en.wikipedia.org/wiki/Trampoline_(computers)
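A minimal C++ sketch of the idea (the Bounce name and the factorial example are mine; as noted above, this only helps when the recursive calls are tail calls; assumes C++14):

#include <functional>
#include <iostream>

// A Bounce is either a finished result or a thunk that produces the next Bounce.
struct Bounce {
    bool done = false;
    long long result = 0;
    std::function<Bounce()> next; // only meaningful while !done
};

// A tail-recursive factorial rewritten to return a thunk instead of recursing directly.
Bounce fact(long long n, long long acc) {
    if (n <= 1) return {true, acc, {}};
    return {false, 0, [=] { return fact(n - 1, n * acc); }};
}

// The trampoline: keep invoking the returned thunks until a finished result appears.
long long trampoline(Bounce b) {
    while (!b.done)
        b = b.next();
    return b.result;
}

int main() {
    std::cout << trampoline(fact(20, 1)) << "\n"; // 20!, computed in constant stack space
}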
I'd say yes - a function call is nothing but a goto and a stack operation (roughly speaking). All you need to do is imitate the stack that's built while invoking functions and do something similar to a goto (you can imitate gotos even in languages that don't explicitly have this keyword).
Have a look at the following entries on wikipedia, you can use them as a starting point to find a complete answer to your question.
Recursion in computer science
Recurrence relation
Here is a paragraph that may give you a hint on where to start:
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
Also have a look at the last paragraph of this entry.
It is possible to convert any recursive algorithm to a non-recursive one, but often the logic is much more complex, and doing so requires the use of a stack. In fact, recursion itself uses a stack: the function stack.
More Details: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Functions
tazzego, recursion means that a function will call itself whether you like it or not. When people are talking about whether or not things can be done without recursion, they mean this and you cannot say "no, that is not true, because I do not agree with the definition of recursion" as a valid statement.
With that in mind, just about everything else you say is nonsense. The only other thing that you say that is not nonsense is the idea that you cannot imagine programming without a callstack. That is something that had been done for decades until using a callstack became popular. Old versions of FORTRAN lacked a callstack and they worked just fine.
By the way, there exist Turing-complete languages that only implement recursion (e.g. SML) as a means of looping. There also exist Turing-complete languages that only implement iteration as a means of looping (e.g. FORTRAN IV). The Church-Turing thesis proves that anything possible in a recursion-only language can be done in a non-recursive language and vice versa, by the fact that they both have the property of Turing-completeness.
Here is an iterative algorithm:
def howmany(x, y)
  a = {}
  for n in (0..x+y)
    for m in (0..n)
      a[[m, n-m]] = if m == 0 or n-m == 0 then 1 else a[[m-1, n-m]] + a[[m, n-m-1]] end
    end
  end
  return a[[x, y]]
end

Why don't languages raise errors on integer overflow by default?

In several modern programming languages (including C++, Java, and C#), the language allows integer overflow to occur at runtime without raising any kind of error condition.
For example, consider this (contrived) C# method, which does not account for the possibility of overflow/underflow. (For brevity, the method also doesn't handle the case where the specified list is a null reference.)
//Returns the sum of the values in the specified list.
private static int sumList(List<int> list)
{
    int sum = 0;
    foreach (int listItem in list)
    {
        sum += listItem;
    }
    return sum;
}
If this method is called as follows:
List<int> list = new List<int>();
list.Add(2000000000);
list.Add(2000000000);
int sum = sumList(list);
An overflow will occur in the sumList() method (because the int type in C# is a 32-bit signed integer, and the sum of the values in the list exceeds the value of the maximum 32-bit signed integer). The sum variable will have a value of -294967296 (not a value of 4000000000); this most likely is not what the (hypothetical) developer of the sumList method intended.
Obviously, there are various techniques that can be used by developers to avoid the possibility of integer overflow, such as using a type like Java's BigInteger, or the checked keyword and /checked compiler switch in C#.
However, the question that I'm interested in is why these languages were designed to allow integer overflow by default in the first place, instead of, for example, raising an exception when an operation is performed at runtime that would result in an overflow. It seems like such behavior would help avoid bugs in cases where a developer neglects to account for the possibility of overflow when writing code that performs an arithmetic operation that could overflow. (These languages could have included something like an "unchecked" keyword to designate a block where integer overflow is permitted to occur without an exception being raised, for cases where that behavior is explicitly intended by the developer; C# actually does have this.)
Does the answer simply boil down to performance -- the language designers didn't want their respective languages to default to having "slow" arithmetic integer operations where the runtime would need to do extra work to check whether an overflow occurred, on every applicable arithmetic operation -- and this performance consideration outweighed the value of avoiding "silent" failures in the case that an inadvertent overflow occurs?
Are there other reasons for this language design decision as well, other than performance considerations?
In C#, it was a question of performance. Specifically, out-of-box benchmarking.
When C# was new, Microsoft was hoping a lot of C++ developers would switch to it. They knew that many C++ folks thought of C++ as being fast, especially faster than languages that "wasted" time on automatic memory management and the like.
Both potential adopters and magazine reviewers are likely to get a copy of the new C#, install it, build a trivial app that no one would ever write in the real world, run it in a tight loop, and measure how long it took. Then they'd make a decision for their company or publish an article based on that result.
The fact that their test showed C# to be slower than natively compiled C++ is the kind of thing that would turn people off C# quickly. The fact that your C# app is going to catch overflow/underflow automatically is the kind of thing that they might miss. So, it's off by default.
I think it's obvious that 99% of the time we want /checked to be on. It's an unfortunate compromise.
I think performance is a pretty good reason. If you consider every instruction in a typical program that increments an integer, and if instead of the simple op to add 1, it had to check every time if adding 1 would overflow the type, then the cost in extra cycles would be pretty severe.
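To make the cost concrete, here is a rough C++ sketch of what a checked add has to do compared with a plain add. The __builtin_add_overflow intrinsic is GCC/Clang-specific; a portable version would compare against INT_MAX/INT_MIN before adding:

#include <climits>
#include <stdexcept>

int unchecked_add(int a, int b) {
    return a + b; // a single add instruction; signed overflow is simply undefined behaviour in C++
}

int checked_add(int a, int b) {
    int result;
    // The add plus a branch on the overflow condition, on every arithmetic operation.
    if (__builtin_add_overflow(a, b, &result))
        throw std::overflow_error("integer overflow in checked_add");
    return result;
}

int main() {
    try {
        checked_add(INT_MAX, 1);
    } catch (const std::overflow_error &) {
        // reached: INT_MAX + 1 does not fit in an int
    }
}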
You work under the assumption that integer overflow is always undesired behavior.
Sometimes integer overflow is desired behavior. One example I've seen is representation of an absolute heading value as a fixed point number. Given an unsigned int, 0 is 0 or 360 degrees and the max 32 bit unsigned integer (0xffffffff) is the biggest value just below 360 degrees.
#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t shipsHeadingInDegrees = 0;

    // Rotate by a bunch of degrees
    shipsHeadingInDegrees += 0x80000000; // 180 degrees
    shipsHeadingInDegrees += 0x80000000; // another 180 degrees, overflows
    shipsHeadingInDegrees += 0x80000000; // another 180 degrees

    // Ships heading now will be 180 degrees
    std::cout << "Ships Heading Is " << (double(shipsHeadingInDegrees) / double(0xffffffff)) * 360.0 << std::endl;
}
There are probably other situations where overflow is acceptable, similar to this example.
C/C++ never mandate trap behaviour. Even the obvious division by 0 is undefined behaviour in C++, not a specified kind of trap.
The C language doesn't have any concept of trapping, unless you count signals.
C++ has a design principle that it doesn't introduce overhead not present in C unless you ask for it. So Stroustrup would not have wanted to mandate that integers behave in a way which requires any explicit checking.
Some early compilers, and lightweight implementations for restricted hardware, don't support exceptions at all, and exceptions can often be disabled with compiler options. Mandating exceptions for language built-ins would be problematic.
Even if C++ had made integers checked, 99% of programmers in the early days would have turned it off for the performance boost...
Because checking for overflow takes time. Each primitive mathematical operation, which normally translates into a single assembly instruction would have to include a check for overflow, resulting in multiple assembly instructions, potentially resulting in a program that is several times slower.
It is likely 99% performance. On x86, you would have to check the overflow flag on every operation, which would be a huge performance hit.
The other 1% would cover those cases where people are doing fancy bit manipulations or being 'imprecise' in mixing signed and unsigned operations and want the overflow semantics.
Backwards compatibility is a big one. With C, it was assumed that you were paying enough attention to the size of your datatypes that if an over/underflow occurred, that was what you wanted. Then with C++, C# and Java, very little changed with how the "built-in" data types worked.
If integer overflow is defined as immediately raising a signal, throwing an exception, or otherwise deflecting program execution, then any computations which might overflow will need to be performed in the specified sequence. Even on platforms where integer overflow checking wouldn't cost anything directly, the requirement that integer overflow be trapped at exactly the right point in a program's execution sequence would severely impede many useful optimizations.
If a language were to specify that integer overflows would instead set a latching error flag, were to limit how actions on that flag within a function could affect its value within calling code, and were to provide that the flag need not be set in circumstances where an overflow could not result in erroneous output or behavior, then compilers could generate more efficient code than any kind of manual overflow-checking programmers could use. As a simple example, if one had a function in C that would multiply two numbers and return a result, setting an error flag in case of overflow, a compiler would be required to perform the multiplication whether or not the caller would ever use the result. In a language with looser rules like I described, however, a compiler that determined that nothing ever uses the result of the multiply could infer that overflow could not affect a program's output, and skip the multiply altogether.
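A rough sketch of that kind of flag-setting multiply (a hypothetical API, using the GCC/Clang __builtin_mul_overflow intrinsic; not an existing language feature):

#include <cstdint>
#include <iostream>

bool g_overflowFlag = false; // latching: once set, it stays set until the caller clears it

std::int32_t mulOrFlag(std::int32_t a, std::int32_t b) {
    std::int32_t result;
    if (__builtin_mul_overflow(a, b, &result)) {
        g_overflowFlag = true;
        return 0; // the value is unspecified once the flag is set
    }
    return result;
}

int main() {
    std::int32_t x = mulOrFlag(100000, 100000); // overflows a 32-bit int
    if (g_overflowFlag)
        std::cout << "overflow occurred somewhere in the computation\n";
    else
        std::cout << x << "\n";
}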
From a practical standpoint, most programs don't care about precisely when overflows occur, so much as they need to guarantee that they don't produce erroneous results as a consequence of overflow. Unfortunately, programming languages' integer-overflow-detection semantics have not caught up with what would be necessary to let compilers produce efficient code.
My understanding of why errors would not be raised by default at runtime boils down to the legacy of desiring to create programming languages with ACID-like behavior. Specifically, the tenet that anything that you code it to do (or don't code), it will do (or not do). If you didn't code some error handler, then the machine will "assume" by virtue of no error handler, that you really want to do the ridiculous, crash-prone thing you're telling it to do.
(ACID reference: http://en.wikipedia.org/wiki/ACID)