How to handle maximum recursion depth? - language-agnostic

Many languages (such as Python) have a set maximum recursion depth. I realize you can change that depth, or avoid writing recursive functions altogether, but if you do write a recursive function and you hit that maximum recursion depth, how would you prepare for and handle it?

Have a parameter in the function signature that gets incremented for each call. When it gets near the maximum recursion depth, do something before it is reached.
Here is a Ruby-ish pseudocode example:
def my_recursive_function(current_depth = 0)
  # do stuff
  if current_depth >= MAX_RECURSION_LIMIT
    # raise an exception, output helpful information, or return a default value
  else
    my_recursive_function(current_depth + 1)
  end
end

The only thing you really can do at that point is to let the user know that something has gone wrong and the task cannot be performed as designed.

I think the best way is to avoid writing recursive code that has any chance of reaching the maximum depth. There's always a way to re-write a recursive algorithm as an iterative one, so just do that.
If you're dead set on writing recursive code that may hit the limit, then write a backup iterative version, catch the recursion-exceeded exception, and switch to the iterative one, as sketched below.
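A minimal Python sketch of that fallback, assuming Python's built-in RecursionError is the recursion-exceeded exception (the function names are illustrative):

def sum_to_recursive(n):
    # Deliberately deep recursion: one stack frame per integer.
    return 0 if n == 0 else n + sum_to_recursive(n - 1)

def sum_to_iterative(n):
    # The backup version: same result, constant stack depth.
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum_to(n):
    try:
        return sum_to_recursive(n)
    except RecursionError:
        # The interpreter's limit was hit; switch to the iterative backup.
        return sum_to_iterative(n)

print(sum_to(10))       # small input: the recursive path succeeds
print(sum_to(100_000))  # exceeds the default limit and falls back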

Related

How can I get better randomization in my sql query?

I am attempting to get a random bearing, from 0 to 359.9.
SET bearing = FLOOR((RAND() * 359.9));
I may call the procedure that runs this request within the same while loop, immediately one after the next. Unfortunately, the randomization seems to be anything but unique. e.g.
Results
358.07
359.15
357.85
I understand how randomization works, and I know because of my quick calls to the same function, the ticks used to generate the random number are very close to one another.
In any other situation, I would wait a few milliseconds in between calls or reinit my Random object (such as in C#), which would greatly vary my randomness. However, I don't want to wait in this situation.
How can I increase randomness without waiting?
I understand how randomization works, and I know because of my quick calls to the same function, the ticks used to generate the random number are very close to one another.
That's not quite right. Where folks get into trouble is when they re-seed a random number generator repeatedly with the current time, and because they do it very quickly the time is the same and they end up re-seeding the RNG with the same seed. This results in the RNG spitting out the same sequence of numbers each time it is re-seeded.
Importantly, by "the same" I mean exactly the same. An RNG is either going to return an identical sequence or a completely different one. A "close" seed won't result in a "similar" sequence. You will either get an identical sequence or a totally different one.
The correct solution to this is not to stagger your re-seeds, but actually to stop re-seeding the RNG. You only need to seed an RNG once.
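To make the re-seeding pitfall concrete, here is a small demonstration using Python's random module (an illustration of general RNG behavior, not of MySQL internals):

import random

random.seed(12345)
first = [random.random() for _ in range(3)]

random.seed(12345)           # re-seed with the identical seed...
second = [random.random() for _ in range(3)]
print(first == second)       # True: exactly the same sequence

random.seed(12346)           # a "close" seed...
third = [random.random() for _ in range(3)]
print(first == third)        # False: a completely different sequence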
Anyways, that is neither here nor there. MySQL's RAND() function does not require explicit seeding. When you call RAND() without arguments the seeding is taken care of for you, meaning you can call it repeatedly without issue. There's no time-based limitation on how often you can call it.
Actually your SQL looks fine as is. There's something missing from your post, in fact. Since you're calling FLOOR() the result you get should always be an integer. There's no way you'll get a fractional result from that assignment. You should see integral results like this:
187
274
89
345
That's what I got from running SELECT FLOOR(RAND() * 359.9) repeatedly.
Also, for what it's worth, RAND() will never return 1.0. Its range is 0 ≤ RAND() < 1.0, so you are safe using 360 instead of 359.9:
SET bearing = FLOOR(RAND() * 360);

Technical non-terminating condition in a loop

Most of us know that a loop should not have a non-terminating condition. For example, this C# loop has a non-terminating condition: any even value of i. This is an obvious logic error.
void CountByTwosStartingAt(byte i) { // If i is even, it never exceeds 254
    for (; i < 255; i += 2) {
        Console.WriteLine(i);
    }
}
Sometimes there are edge cases that are extremely unlikely, but technically constitute non-terminating conditions (stack overflows and out-of-memory errors aside). Suppose you have a function that counts the number of sequential zeros in a stream:
int CountZeros(Stream s) {
    int total = 0;
    while (s.ReadByte() == 0) total++;
    return total;
}
Now, suppose you feed it this thing:
class InfiniteEmptyStream : Stream
{
    // ... Other members ...

    public override int Read(byte[] buffer, int offset, int count) {
        Array.Clear(buffer, offset, count); // Output zeros
        return count; // Always returns count, so end-of-stream is never signaled
    }
}
Or more realistically, maybe a stream that returns data from external hardware, which in certain cases might return lots of zeros (such as a game controller sitting on your desk). Either way we have an infinite loop. This particular non-terminating condition stands out, but sometimes they don't.
A completely real-world example occurs in an app I'm writing: an endless stream of zeros will be deserialized into infinite "empty" objects (until the collection class or the GC throws an exception because I've exceeded two billion items). But this would be a completely unexpected circumstance (considering my data source).
How important is it to have absolutely no non-terminating conditions? How much does this affect "robustness?" Does it matter if they are only "theoretically" non-terminating (is it okay if an exception represents an implicit terminating condition)? Does it matter whether the app is commercial? If it is publicly distributed? Does it matter if the problematic code is in no way accessible through a public interface/API?
Edit:
One of the primary concerns I have is unforeseen logic errors that can create the non-terminating condition. If, as a rule, you ensure there are no non-terminating conditions, you can identify or handle these logic errors more gracefully, but is it worth it? And when? This is a concern orthogonal to trust.
You either "trust" your data source, or you don't.
If you trust it, then probably you want to make a best effort to process the data, no matter what it is. If it sends you zeros for ever, then it has posed you a problem too big for your resources to solve, and you expend all your resources on it and fail. You say this is "completely unexpected", so the question is whether it's OK for it to merely be "completely unexpected" for your application to fall over because it's out of memory. Or does it need to actually be impossible?
If you don't trust your data source, then you might want to put an artificial limit on the size of problem you will attempt, in order to fail before your system runs out of memory.
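A minimal Python sketch of that artificial limit, applied to the zero-counting example (the MAX_ZEROS threshold is a made-up sanity bound, and the argument is assumed to be a binary file-like object):

MAX_ZEROS = 10_000_000  # assumed sanity bound for the untrusted case

def count_zeros(stream):
    # Counts leading zero bytes, but refuses pathological inputs.
    total = 0
    while stream.read(1) == b"\x00":
        total += 1
        if total > MAX_ZEROS:
            raise ValueError("input exceeded sanity limit; aborting")
    return total

Failing loudly at the limit turns the would-be infinite loop into a diagnosable error instead of an out-of-memory crash.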
In either case it might be possible to write your app in such a way that you recover gracefully from an out-of-memory exception.
Either way it's a robustness issue, but falling over because the problem is too big to solve (your task is impossible) is usually considered more acceptable than falling over because some malicious user is sending you a stream of zeros (you accepted an impossible task from some script-kiddie DoS attacker).
Things like that have to be decided on a case-by-case basis. It may make sense to have additional sanity checks, but it is too much work to make every piece of code completely foolproof; and it is not always possible to anticipate what fools come up with.
You either "trust" your data source, or you don't.
I'd say that you either "support" the software being used with that data source, or you don't. For example I've seen software which doesn't handle an insufficient-memory condition: but insufficient memory isn't "supported" for that software (or less specifically it isn't supported for that system); so, for that system, if an insufficient-memory condition occurs, the fix is to reduce the load on the system or to increase the memory (not to fix the software). For that system, handling insufficient memory isn't a requirement: what is a requirements is to manage the load put on the system, and to provide sufficient memory for that given load.
How important is it to have absolutely no non-terminating conditions?
It isn't important at all. That is, it's not a goal by itself. The important thing is that the code correctly implements the spec. For example, an interactive shell may have a bug if the main loop does terminate.
In the scenario you're describing, the problem of infinite zeros is actually a special case of memory exhaustion. It's not a theoretical question but something that can actually happen. You should decide how to handle this.

Is there really a performance hit when catching exceptions?

I asked a question about exceptions and I am getting VERY annoyed at people saying throwing is slow. I asked in the past how exceptions work behind the scenes, and I know in the normal code path there are no extra instructions (as the accepted answer says), but I am not entirely convinced throwing is more expensive than checking return values. Consider the following:
{
    int ret = func();
    if (ret == 1)
        return;
    if (ret == 2)
        return;
    doSomething();
}
vs
{
    try {
        func();
        doSomething();
    }
    catch (SpecificException1 e)
    {
    }
    catch (SpecificException2 e)
    {
    }
}
As far as I know there isn't a difference, except the ifs are moved out of the normal code path into an exception path, plus an extra jump or two to get to the exception code path. An extra jump or two doesn't sound like much when it removes a few ifs from your main (and more often run) code path. So are exceptions actually slow? Or is this a myth or an old issue with old compilers?
(I'm talking about exceptions in general. Specifically, exceptions in compiled languages like C++ and D; though C# was also in my mind.)
Okay - I just ran a little test to make sure that exceptions are actually slower. Summary: On my machine a call w/ return is 30 cycles per iteration. A throw w/ catch is 20370 cycles per iteration.
So to answer the question - yes - throwing exceptions is slow.
Here's the test code:
#include <stdio.h>
#include <intrin.h>

int Test1()
{
    throw 1;
    // return 1;
}

int main(int argc, char* argv[])
{
    int result = 0;
    __int64 time = 0xFFFFFFFF;
    for (int i = 0; i < 10000; i++)
    {
        __int64 start = __rdtsc();
        try
        {
            result += Test1();
        }
        catch (int x)
        {
            result += x;
        }
        __int64 end = __rdtsc();
        if (time > end - start)
            time = end - start;
    }
    printf("%d\n", result);
    printf("time: %I64d\n", time);
}
Alternative try/catch written by the OP:
try
{
    if (Test1() != 0)
        result++;
}
catch (int x)
{
    result++;
}
I don't know exactly how slow it is, but throwing an exception that already exists (say it was created by the CLR) is not much slower, because you've already incurred the hit of constructing the exception. ... I believe it's the construction of an exception that creates the majority of the additional performance hit ... Think about it: it has to create a stack trace (including reading debug symbols to add line numbers and such) and potentially bundle up inner exceptions, etc.
Actually throwing an exception only adds the additional code to traverse up the stack to find the appropriate catch clause (if one exists) or to transfer control to the CLR's unhandled exception handler... This portion could be expensive for a very deep stack, but if the catch block is just at the bottom of the same method you are throwing from, for example, then it will be relatively cheap.
If you are using exceptions to actually control the flow it can be a pretty big hit.
I was digging in some old code to see why it ran so slow. In a big loop, instead of checking for null and performing a different action, it caught the null exception and performed the alternative action.
So don't use exceptions for things they were not designed to do, because they are slower.
Use exceptions and generally anything without worrying about performance. Then, when you are finished, measure the performance with profiling tools. If it's not acceptable, you can find the bottlenecks (which probably won't be the exception handling) and optimize.
In C#, raising exceptions does have an ever so slight performance hit, but this shouldn't scare you away from using them. If you have a reason, you should throw an exception. Most people who have problems with using them cite the reason that they can disrupt the flow of a program.
Really if your reasons for not using them is a performance hit, your time can be better spent optimizing other parts of your code. I have never run into a situation where throwing an exception caused the program to behave so slowly that it had to be re-factored out (well the act of throwing the exception, not how the code treated it).
Thinking about it a little more, with all that being said, I do try to use methods which avoid throwing exceptions. If possible I'll use TryParse instead of Parse, or use KeyExists, etc. If you are doing the same operation hundreds of times over and throwing many exceptions, small amounts of inefficiency can add up.
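As a rough illustration of that Parse-vs-TryParse point, here is a hedged Python micro-benchmark comparing a lookup that relies on catching an exception with one that avoids it (absolute numbers vary by machine; only the ratio is interesting):

import timeit

d = {"a": 1}

def with_exception():
    try:
        return d["missing"]      # raises KeyError on every call
    except KeyError:
        return None

def without_exception():
    return d.get("missing")      # no exception machinery involved

print(timeit.timeit(with_exception, number=1_000_000))
print(timeit.timeit(without_exception, number=1_000_000))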
Yes. Exceptions make your program slower in C++. I created an 8086 CPU emulator a while back. In the code I used exceptions for CPU interrupts and faults. I made a little test case of a big complex loop that ran for about 2 minutes executing emulated opcodes. When I ran this test through a profiler, my main loop was making a significant number of calls to an "exception checker" function of gcc (actually there were two different functions related to this; my test code only threw one exception, at the end). These exception functions were called in my main loop, I believe, every time (this is where I had the try{}catch{} part). They cost me about 20% of my runtime speed (the code spent 20% of its time in there), and they were also the 3rd and 4th most called functions in the profiler...
So yes, using exceptions at all can be expensive, even without constant exception throwing.
tl;dr IMHO, avoiding exceptions for performance reasons falls into both categories of premature and micro-optimization. Don't do it.
Ah, the religious war of exceptions.
The various types of answers to this are usually:
the usual mantra (a good one, IMHO): "use exceptions for exceptional situations" (IOW, not part of "normal" code paths).
If your normal user paths involved intentionally using exceptions as a control-flow mechanism, that's a smell.
tons of detail, without really answering the original question
if you really want detail:
http://blogs.msdn.com/cbrumme/archive/2003/10/01/51524.aspx
http://blogs.msdn.com/ricom/archive/2006/09/14/754661.aspx
etc.
someone pointing at microbenchmarks showing that something like i/j with j == 0 is 10x slower when catching div-by-zero than when checking j == 0
pragmatic answer of how to approach performance for apps in general
usually along the lines of:
make perf goals for your scenarios (ideally working with customers)
build it so it's maintainable, readable, and robust
run it and check perf of goal scenarios
if a set of scenarios aren't making goal, USE A PROFILER to tell you where your time is being spent and go from there.
IOW, any perf changes, especially micro-optimizations like this, made without profiling data driving the decision, are typically a huge waste of time.
Keep in mind that your perf wins will typically come from algorithmic changes (adding an index to a table to avoid table scans, moving something with large n from O(n^3) to O(n ln n), etc.).
More fun links:
http://en.wikipedia.org/wiki/Program_optimization
http://www.flounder.com/optimization.htm
If you want to know how exceptions work in Windows SEH, then I believe this article by Matt Pietrek is considered the definitive reference. It isn't light reading. If you want to extend this to how exceptions work in .NET, then you need to read this article by Chris Brumme, which is most definitely the definitive reference. It isn't light reading either.
The summary of Chris Brumme's article gives a detailed explanation of why exceptions are significantly slower than using return codes. It's too long to reproduce here, and you've got a lot of reading to do before you can fully understand why.
Part of the answer is that the compiler isn't trying very hard to optimize the exceptional code path.
A catch block is a very strong hint to the compiler to aggressively optimize the non-exceptional code path at the expense of the exceptional code path. To reliably hint to a compiler which branch of an if statement is the exceptional one, you need profile-guided optimization.
The exception object must be stored somewhere, and because throwing an exception implies stack unwinding, it can't be on the stack. The compiler knows that exceptions are rare - so the optimizer isn't going to do anything that might slow down normal execution - like keeping registers or 'fast' memory of any kind available just in case it needs to put an exception in one. You may find you get a page fault. In contrast, return codes typically end up in a register (e.g. EAX).
It's like concatenating strings vs. StringBuilder: it's only slow if you do it a billion times.

Can every recursion be converted into iteration?

A reddit thread brought up an apparently interesting question:
Tail recursive functions can trivially be converted into iterative functions. Other ones, can be transformed by using an explicit stack. Can every recursion be transformed into iteration?
The (counter?)example in the post is the pair:
(define (num-ways x y)
  (cond ((= x 0) 1)
        ((= y 0) 1)
        (else (num-ways2 x y))))

(define (num-ways2 x y)
  (+ (num-ways (- x 1) y)
     (num-ways x (- y 1))))
Can you always turn a recursive function into an iterative one? Yes, absolutely, and the Church-Turing thesis proves it if memory serves. In lay terms, it states that what is computable by recursive functions is computable by an iterative model (such as the Turing machine) and vice versa. The thesis does not tell you precisely how to do the conversion, but it does say that it's definitely possible.
In many cases, converting a recursive function is easy. Knuth offers several techniques in "The Art of Computer Programming". And often, a thing computed recursively can be computed by a completely different approach in less time and space. The classic example of this is Fibonacci numbers or sequences thereof. You've surely met this problem in your degree plan.
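The Fibonacci example is worth seeing in code. A small Python sketch of the recursive definition next to an iterative version that computes the same values in linear time:

def fib_recursive(n):
    # Direct transcription of the recurrence: exponential time.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Same sequence, computed in linear time and constant space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_recursive(20) == fib_iterative(20) == 6765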
On the flip side of this coin, we can certainly imagine a programming system so advanced as to treat a recursive definition of a formula as an invitation to memoize prior results, thus offering the speed benefit without the hassle of telling the computer exactly which steps to follow in the computation of a formula with a recursive definition. Dijkstra almost certainly did imagine such a system. He spent a long time trying to separate the implementation from the semantics of a programming language. Then again, his non-deterministic and multiprocessing programming languages are in a league above the practicing professional programmer.
In the final analysis, many functions are just plain easier to understand, read, and write in recursive form. Unless there's a compelling reason, you probably shouldn't (manually) convert these functions to an explicitly iterative algorithm. Your computer will handle that job correctly.
I can see one compelling reason. Suppose you've a prototype system in a super-high level language like [donning asbestos underwear] Scheme, Lisp, Haskell, OCaml, Perl, or Pascal. Suppose conditions are such that you need an implementation in C or Java. (Perhaps it's politics.) Then you could certainly have some functions written recursively but which, translated literally, would explode your runtime system. For example, infinite tail recursion is possible in Scheme, but the same idiom causes a problem for existing C environments. Another example is the use of lexically nested functions and static scope, which Pascal supports but C doesn't.
In these circumstances, you might try to overcome political resistance to the original language. You might find yourself reimplementing Lisp badly, as in Greenspun's (tongue-in-cheek) tenth law. Or you might just find a completely different approach to solution. But in any event, there is surely a way.
Is it always possible to write a non-recursive form for every recursive function?
Yes. A simple formal proof is to show that both µ recursion and a non-recursive calculus such as GOTO are both Turing complete. Since all Turing complete calculi are strictly equivalent in their expressive power, all recursive functions can be implemented by the non-recursive Turing-complete calculus.
Unfortunately, I’m unable to find a good, formal definition of GOTO online so here’s one:
A GOTO program is a sequence of commands P executed on a register machine such that P is one of the following:
HALT, which halts execution
r = r + 1 where r is any register
r = r – 1 where r is any register
GOTO x where x is a label
IF r ≠ 0 GOTO x where r is any register and x is a label
A label, followed by any of the above commands.
However, the conversion between recursive and non-recursive functions isn't always trivial (except by mindless manual re-implementation of the call stack).
For further information see this answer.
Recursion is implemented as stacks or similar constructs in the actual interpreters or compilers. So you certainly can convert a recursive function to an iterative counterpart because that's how it's always done (if automatically). You'll just be duplicating the compiler's work in an ad-hoc and probably in a very ugly and inefficient manner.
Basically yes; in essence, what you end up having to do is replace method calls (which implicitly push state onto the stack) with explicit stack pushes to remember where the 'previous call' had gotten up to, and then execute the 'called method' instead.
I'd imagine that the combination of a loop, a stack and a state-machine could be used for all scenarios by basically simulating the method calls. Whether or not this is going to be 'better' (either faster, or more efficient in some sense) is not really possible to say in general.
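Here is a minimal Python sketch of that explicit-stack transformation, applied to the num-ways function from the question (each tuple pushed onto our own stack stands in for a method call):

def num_ways(x, y):
    total = 0
    stack = [(x, y)]          # our explicit call stack
    while stack:
        x, y = stack.pop()
        if x == 0 or y == 0:
            total += 1        # a base case contributes 1
        else:
            # The two recursive calls become two pushes.
            stack.append((x - 1, y))
            stack.append((x, y - 1))
    return total

print(num_ways(3, 4))  # 35, the same as the recursive version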
Recursive function execution flow can be represented as a tree.
The same logic can be done by a loop, which uses a data-structure to traverse that tree.
Depth-first traversal can be done using a stack, breadth-first traversal can be done using a queue.
So, the answer is: yes. Why: https://stackoverflow.com/a/531721/2128327.
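A quick Python sketch of that stack/queue duality, traversing a binary tree stored as nested (value, left, right) tuples (the layout is just for illustration):

from collections import deque

tree = (1, (2, None, None), (3, (4, None, None), None))

def traverse(root, breadth_first=False):
    pending = deque([root])
    order = []
    while pending:
        # Queue behavior gives breadth-first; stack behavior gives depth-first.
        node = pending.popleft() if breadth_first else pending.pop()
        if node is None:
            continue
        value, left, right = node
        order.append(value)
        pending.append(left)
        pending.append(right)
    return order

print(traverse(tree))                      # depth-first, via a stack
print(traverse(tree, breadth_first=True))  # breadth-first, via a queue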
Can any recursion be done in a single loop? Yes, because a Turing machine does everything it does by executing a single loop:
1) fetch an instruction,
2) evaluate it,
3) goto 1.
Yes, using explicitly a stack (but recursion is far more pleasant to read, IMHO).
Yes, it's always possible to write a non-recursive version. The trivial solution is to use a stack data structure and simulate the recursive execution.
In principle it is always possible to remove recursion and replace it with iteration in a language that has infinite state both for data structures and for the call stack. This is a basic consequence of the Church-Turing thesis.
Given an actual programming language, the answer is not as obvious. The problem is that it is quite possible to have a language where the amount of memory that can be allocated in the program is limited but where the amount of call stack that can be used is unbounded (32-bit C where the address of stack variables is not accessible). In this case, recursion is more powerful simply because it has more memory it can use; there is not enough explicitly allocatable memory to emulate the call stack. For a detailed discussion on this, see this discussion.
All computable functions can be computed by Turing Machines and hence the recursive systems and Turing machines (iterative systems) are equivalent.
Sometimes replacing recursion is much easier than that. Recursion was the fashionable thing taught in CS in the 1990s, and so a lot of average developers from that time figured that if you solved something with recursion, it was a better solution. So they would use recursion instead of looping backwards to reverse order, or silly things like that. So sometimes removing recursion is a simple "duh, that was obvious" type of exercise.
This is less of a problem now, as the fashion has shifted towards other technologies.
Recursion is nothing but calling the same function on the stack, and once the function call ends it is popped off the stack. So one can always use an explicit stack to manage this calling of the same operation using iteration.
So, yes, all recursive code can be converted to iteration.
Removing recursion is a complex problem and is feasible under well defined circumstances.
The below cases are among the easy:
tail recursion
direct linear recursion
Apart from the explicit stack, another pattern for converting recursion into iteration is the use of a trampoline.
Here, the functions either return the final result, or a closure of the function call that they would otherwise have performed. Then, the initiating (trampolining) function keeps invoking the closures returned until the final result is reached.
This approach works for mutually recursive functions, but I'm afraid it only works for tail calls.
http://en.wikipedia.org/wiki/Trampoline_(computers)
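A tiny Python sketch of that trampoline pattern (the factorial example and helper names are illustrative):

def trampoline(result):
    # Keep invoking returned closures until a plain value comes back.
    while callable(result):
        result = result()
    return result

def factorial(n, acc=1):
    if n <= 1:
        return acc
    # Return the deferred tail call instead of making it.
    return lambda: factorial(n - 1, acc * n)

print(trampoline(factorial(10_000)) > 0)  # no RecursionError, despite the depth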
I'd say yes: a function call is nothing but a goto and a stack operation (roughly speaking). All you need to do is imitate the stack that's built while invoking functions and do something similar to a goto (you may imitate gotos in languages that don't explicitly have this keyword too).
Have a look at the following entries on wikipedia, you can use them as a starting point to find a complete answer to your question.
Recursion in computer science
Recurrence relation
Here is a paragraph that may give you a hint on where to start:
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
Also have a look at the last paragraph of this entry.
It is possible to convert any recursive algorithm to a non-recursive one, but often the logic is much more complex and doing so requires the use of a stack. In fact, recursion itself uses a stack: the function stack.
More Details: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Functions
tazzego, recursion means that a function will call itself whether you like it or not. When people talk about whether things can be done without recursion, they mean this, and you cannot say "no, that is not true, because I do not agree with the definition of recursion" as a valid statement.
With that in mind, just about everything else you say is nonsense. The only other thing that you say that is not nonsense is the idea that you cannot imagine programming without a callstack. That is something that had been done for decades until using a callstack became popular. Old versions of FORTRAN lacked a callstack and they worked just fine.
By the way, there exist Turing-complete languages that only implement recursion (e.g. SML) as a means of looping. There also exist Turing-complete languages that only implement iteration as a means of looping (e.g. FORTRAN IV). The Church-Turing thesis implies that anything possible in a recursion-only language can be done in a non-recursive language and vice versa, by the fact that they both have the property of Turing-completeness.
Here is an iterative algorithm:
def howmany(x, y)
  a = {}
  for n in (0..x+y)
    for m in (0..n)
      a[[m, n-m]] = if m == 0 or n-m == 0 then 1 else a[[m-1, n-m]] + a[[m, n-m-1]] end
    end
  end
  return a[[x, y]]
end

What is an invariant?

The word seems to get used in a number of contexts. The best I can figure is that they mean a variable that can't change. Isn't that what constants/finals (darn you Java!) are for?
An invariant is more "conceptual" than a variable. In general, it's a property of the program state that is always true. A function or method that ensures that the invariant holds is said to maintain the invariant.
For instance, a binary search tree might have the invariant that for every node, the key of the node's left child is less than the node's own key. A correctly written insertion function for this tree will maintain that invariant.
As you can tell, that's not the sort of thing you can store in a variable: it's more a statement about the program. By figuring out what sort of invariants your program should maintain, then reviewing your code to make sure that it actually maintains those invariants, you can avoid logical errors in your code.
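A small Python sketch of that invariant as a checkable predicate (extended, as an assumption, to the symmetric condition on the right child; the class and function names are illustrative):

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def maintains_invariant(node):
    # The invariant from above, checked at every node: the left child's
    # key is less than the node's key (and the right child's is greater).
    if node is None:
        return True
    if node.left is not None and node.left.key >= node.key:
        return False
    if node.right is not None and node.right.key <= node.key:
        return False
    return maintains_invariant(node.left) and maintains_invariant(node.right)

print(maintains_invariant(Node(5, Node(3), Node(8))))  # True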
It is a condition you know to always be true at a particular place in your logic and can check for when debugging to work out what has gone wrong.
The magic of Wikipedia: Invariant (computer science)
In computer science, a predicate that, if true, will remain true throughout a specific sequence of operations, is called (an) invariant to that sequence.
This answer is for my 5-year-old kid. Do not think of an invariant as a constant or fixed numerical value. It can be, but it is more than that.
Rather, an invariant is something like a fixed relationship between varying entities. For example, your age will always be less than your biological parents'. Both your age and your parents' ages change with the passage of time, but the relationship I mentioned above is an invariant.
An invariant can also be a numerical constant. For example, the value of pi is an invariant ratio of a circle's circumference to its diameter. No matter how big or small the circle is, that ratio will always be pi.
I usually view them more in terms of algorithms or structures.
For example, you could have a loop invariant that can be asserted: always true at the beginning or end of each iteration. That is, if your loop was supposed to process a collection of objects from one stack to another, you could say that |stack1| + |stack2| = c at the top or bottom of the loop.
If the invariant check failed, it would indicate something went wrong. In this example, it could mean that you forgot to push the processed element onto the final stack, etc.
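A hedged Python sketch of that stack-to-stack example, asserting the |stack1| + |stack2| = c invariant on every iteration (the doubling step is a stand-in for whatever processing the loop does):

stack1 = [1, 2, 3, 4]
stack2 = []
c = len(stack1) + len(stack2)   # the invariant's constant

while stack1:
    item = stack1.pop()
    stack2.append(item * 2)     # "process" the element onto the other stack
    # The invariant check: forgetting the append above would trip this.
    assert len(stack1) + len(stack2) == c, "invariant violated"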
As this line states:
In computer science, a predicate that, if true, will remain true throughout a specific sequence of operations, is called (an) invariant to that sequence.
To better understand this, I hope this example in C++ helps.
Consider a scenario where you have to read some values, keep the total count of them in a variable called count, and add them up in a variable called sum.
The invariant (again it's more like a concept):
// invariant:
// we have read count grades so far, and
// sum is the sum of the first count grades
The code for the above would be something like this,
int count = 0;
double sum = 0, x = 0;
while (cin >> x) {
    ++count;
    sum += x;
}
What does the above code do?
1) Reads the input from cin and puts it in x
2) After one successful read, increments count and sets sum = sum + x
3) Repeats 1-2 until the read stops (i.e. Ctrl+D)
Loop invariant:
The invariant must ALWAYS be true. So initially you start out your code with just this:
while (cin >> x) {
}
This loop reads data from standard input and stores it in x. Well and good. But the invariant becomes false because the first part of our invariant wasn't followed (or kept true).
// we have read count grades so far, and
How to keep the invariant true?
Simple! Increment count.
So ++count; does the job. Now our code becomes something like this:
while (cin >> x) {
    ++count;
}
But
Even now our invariant (a concept which must be TRUE) is false, because we haven't satisfied the second part of our invariant.
// sum is the sum of the first count grades
So what to do now?
Add x to sum and store it in sum (sum += x), and the next time cin >> x will read a new value into x.
Now our code becomes something like this:
while (cin >> x) {
    ++count;
    sum += x;
}
Let's check whether the code matches our invariant.
// invariant:
// we have read count grades so far, and
// sum is the sum of the first count grades
code:
while (cin >> x) {
    ++count;
    sum += x;
}
Ah! Now the loop invariant is always true and the code works fine.
The above example was taken and modified from the book Accelerated C++ by Andrew Koenig and Barbara E. Moo.
Something that doesn't change within a block of code
All the answers here are great, but I felt that I could shed more light on the matter:
Invariant from a language point of view means something that never changes. The concept actually comes from math; it's one of the popular proof techniques when combined with induction.
Here is how such a proof goes: if you can find an invariant that holds in the initial state, and that invariant persists regardless of any [legal] transformation applied to the state, then you can prove that if a certain state does not have this invariant, it can never occur, no matter what sequence of transformations is applied to the initial state.
Now the previous way of thinking (again combined with induction) makes it possible to predicate the logic of computer software. This is especially important when the execution goes in loops, where an invariant can be used to prove that a certain loop will yield a certain result or that it will never change the state of a program in a certain way.
When an invariant is used to predicate loop logic it's called a loop invariant. It can be used outside loops, but for loops it is really important, because you often have a lot of possibilities, or an infinite number of them.
Notice that I use the word "predicate" the logic of computer software, and not "prove". That's because while in math an invariant can be used as a proof, it can never prove that computer software, when executed, will yield what is expected, since the software is executed on top of many abstractions that can never be proved to yield what is expected (think of the hardware abstraction, for example).
Finally, while theoretically and rigorously predicting software logic is only important for highly critical applications like medical and military ones, an invariant can still be used to aid the typical programmer when debugging. It can be used to pinpoint where the program failed because it failed to maintain a certain invariant. Many of us use it anyway without giving it a thought.
Class Invariant
A class invariant is a condition which should always be true before and after calling a relevant function.
For example, a balanced tree has an invariant called isBalanced. When you modify your tree through some methods (e.g. addNode, removeNode, ...), isBalanced should always be true before and after modifying the tree.
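A minimal Python sketch of the class-invariant idea, using a hypothetical Account class rather than a tree so the check stays short (the _check_invariant helper is an assumption, not a standard API):

class Account:
    def __init__(self, balance=0):
        self.balance = balance
        self._check_invariant()

    def _check_invariant(self):
        # Class invariant: the balance never goes negative.
        assert self.balance >= 0, "class invariant violated"

    def withdraw(self, amount):
        self._check_invariant()   # holds before the relevant method...
        self.balance -= amount
        self._check_invariant()   # ...and must still hold after

An overdraft here trips the assertion immediately, pointing at the method that broke the invariant rather than at some later symptom.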
Following on from what it is, invariants are quite useful in writing clean code, since knowing conceptually what invariants should be present in your code allows you to easily decide how to organize your code to reach those aims. As mentioned earlier, they're also useful in debugging, as checking whether the invariant is being maintained is often a good way of seeing if whatever manipulation you're attempting to perform is actually doing what you want it to.
It's typically a quantity that does not change under certain mathematical operations.
An example is a scalar, which does not change under rotations. In magnetic resonance imaging, for example, it is useful to characterize a tissue property by a rotational invariant, because then its estimation ideally does not depend on the orientation of the body in the scanner.
The ADT invariant specifies relationships among the data fields (instance variables) that must always be true before and after the execution of any instance method.
There is an excellent example of an invariant and why it matters in the book Java Concurrency in Practice.
Although Java-centric, the example describes some code that is responsible for calculating the factors of a provided integer. The example code attempts to cache the last number provided, and the factors that were calculated to improve performance. In this scenario there is an invariant that was not accounted for in the example code which has left the code susceptible to race conditions in a concurrent scenario.