What is an invariant? - language-agnostic

The word seems to get used in a number of contexts. The best I can figure is that they mean a variable that can't change. Isn't that what constants/finals (darn you Java!) are for?

An invariant is more "conceptual" than a variable. In general, it's a property of the program state that is always true. A function or method that ensures that the invariant holds is said to maintain the invariant.
For instance, a binary search tree might have the invariant that for every node, the key of the node's left child is less than the node's own key. A correctly written insertion function for this tree will maintain that invariant.
As you can tell, that's not the sort of thing you can store in a variable: it's more a statement about the program. By figuring out what sort of invariants your program should maintain, then reviewing your code to make sure that it actually maintains those invariants, you can avoid logical errors in your code.
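A rough sketch of checking such an invariant (a hand-rolled Node type and the checkInvariant name are just for illustration):

#include <climits>

struct Node {
    int key;
    Node* left;
    Node* right;
};

// Checks the BST invariant: every key in a node's left subtree is less
// than the node's key, and every key in the right subtree is greater.
bool checkInvariant(const Node* n,
                    long long lo = LLONG_MIN, long long hi = LLONG_MAX) {
    if (n == nullptr) return true;
    if (n->key <= lo || n->key >= hi) return false;
    return checkInvariant(n->left, lo, n->key) &&
           checkInvariant(n->right, n->key, hi);
}

int main() {
    Node leaf{1, nullptr, nullptr};
    Node root{2, &leaf, nullptr};
    return checkInvariant(&root) ? 0 : 1;
}

An insertion function maintains the invariant if this check passes both before and after the call.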

It is a condition you know to always be true at a particular place in your logic and can check for when debugging to work out what has gone wrong.

The magic of wikipedia: Invariant (computer science)
In computer science, a predicate that, if true, will remain true throughout a specific sequence of operations, is called (an) invariant to that sequence.

This answer is for my 5-year-old kid. Do not think of an invariant as a constant or fixed numerical value. It can be, but it is more than that.
Rather, an invariant is something like a fixed relationship between varying entities. For example, your age will always be less than your biological parents' ages. Both your age and your parents' ages change with the passage of time, but the relationship mentioned above is an invariant.
An invariant can also be a numerical constant. For example, the value of pi is an invariant ratio of a circle's circumference to its diameter. No matter how big or small the circle is, that ratio will always be pi.

I usually view them more in terms of algorithms or structures.
For example, you could have a loop invariant that can be asserted: something that is always true at the beginning or end of each iteration. That is, if your loop was supposed to process a collection of objects from one stack to another, you could say that |stack1| + |stack2| = c at the top or bottom of the loop.
If the invariant check failed, it would indicate that something went wrong. In this example, it could mean that you forgot to push the processed element onto the final stack, etc.
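A minimal sketch of that check in C++ (the stacks and the processing step are hypothetical):

#include <cassert>
#include <stack>

void moveAll(std::stack<int>& stack1, std::stack<int>& stack2) {
    const auto c = stack1.size() + stack2.size();
    while (!stack1.empty()) {
        assert(stack1.size() + stack2.size() == c);  // invariant at top of loop
        int item = stack1.top();
        stack1.pop();
        stack2.push(item);  // forgetting this push would trip the next assert
    }
    assert(stack1.size() + stack2.size() == c);      // and after the loop
}

int main() {
    std::stack<int> a, b;
    a.push(1); a.push(2); a.push(3);
    moveAll(a, b);
    return 0;
}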

As this line states:
In computer science, a predicate that, if true, will remain true throughout a specific sequence of operations, is called (an) invariant to that sequence.
To understand this better, I hope this example in C++ helps.
Consider a scenario where you have to read some values, keep the total count of them in a variable called count, and add them up in a variable called sum.
The invariant (again it's more like a concept):
// invariant:
// we have read count grades so far, and
// sum is the sum of the first count grades
The code for the above would be something like this,
int count = 0;
double sum = 0, x = 0;
while (cin >> x) {   // read the next value until input ends
    ++count;         // we have read count grades so far
    sum += x;        // sum is the sum of the first count grades
}
What does the above code do?
1) Reads the input from cin and puts it in x
2) After one successful read, increments count and does sum = sum + x
3) Repeats 1-2 until the read stops (i.e. on Ctrl+D / end of input)
Loop invariant:
The invariant must be true ALWAYS. So initially you start out with just this:
while (cin >> x) {
}
This loop reads data from standard input and stores it in x. Well and good. But the invariant becomes false, because the first part of our invariant wasn't kept true:
// we have read count grades so far, and
How to keep the invariant true?
Simple! Increment count.
So ++count; does the job. Now our code becomes something like this:
while (cin >> x) {
    ++count;
}
But even now our invariant (a concept which must be TRUE) is false, because we haven't satisfied the second part of our invariant:
// sum is the sum of the first count grades
So what to do now?
Add x to sum and store it in sum (sum += x), and the next time cin >> x will read a new value into x.
Now our code becomes something like this:
while (cin >> x) {
    ++count;
    sum += x;
}
Let's check whether the code matches our invariant.
// invariant:
// we have read count grades so far, and
// sum is the sum of the first count grades
code:
while (cin >> x) {
    ++count;
    sum += x;
}
Ah! Now the loop invariant is always true and the code works fine.
The above example was taken and modified from the book Accelerated C++ by Andrew Koenig and Barbara E. Moo.
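For completeness, here is a compilable version of the loop. Because the invariant still holds when the loop exits, it is safe to use count and sum afterwards (the average computation at the end is my addition, not the book's):

#include <iostream>

int main() {
    int count = 0;
    double sum = 0, x = 0;
    // invariant: we have read count grades so far, and
    // sum is the sum of the first count grades
    while (std::cin >> x) {
        ++count;
        sum += x;
    }
    if (count > 0)
        std::cout << "average: " << sum / count << '\n';
    return 0;
}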

Something that doesn't change within a block of code

All the answers here are great, but I felt that I could shed more light on the matter:
From a language point of view, an invariant means something that never changes. The concept actually comes from math; it's one of the popular proof techniques when combined with induction.
Here is how such a proof goes: if you can find an invariant that holds in the initial state, and this invariant persists regardless of any [legal] transformation applied to the state, then you can prove that if a certain state does not have this invariant, it can never occur, no matter what sequence of transformations is applied to the initial state.
Now the previous way of thinking (again, combined with induction) makes it possible to predicate the logic of computer software. This is especially important when the execution goes in loops, in which an invariant can be used to prove that a certain loop will yield a certain result or that it will never change the state of a program in a certain way.
When an invariant is used to predicate loop logic, it's called a loop invariant. It can be used outside loops, but for loops it is really important, because you often have a lot of possibilities, or an infinite number of possibilities.
Notice that I use the word "predicate" the logic of computer software, and not "prove". And that's because while in math an invariant can be used as a proof, it can never prove that computer software, when executed, will yield what is expected, due to the fact that the software is executed on top of many abstractions that can never be proved to yield what is expected (think of the hardware abstraction, for example).
Finally, while theoretically and rigorously predicting software logic is only important for highly critical applications like medical and military ones, invariants can still be used to aid the typical programmer when debugging. An invariant can be used to work out where a program failed, because it has failed to maintain a certain invariant. Many of us use it anyway without giving it a thought.

Class Invariant
A class invariant is a condition which should always be true before and after calling the relevant functions.
For example, a balanced tree has an invariant called isBalanced. When you modify your tree through some methods (e.g. addNode, removeNode, ...), isBalanced should always be true before and after modifying the tree.
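Since a full balanced tree is too long to show here, here is the same pattern sketched with a simpler class invariant in C++ ("always sorted" plays the role of isBalanced; the class and method names are invented for illustration):

#include <algorithm>
#include <cassert>
#include <vector>

// Class invariant: data is always sorted.
class SortedVector {
public:
    void insert(int value) {
        assert(isSorted());                  // invariant holds on entry
        data.insert(std::upper_bound(data.begin(), data.end(), value), value);
        assert(isSorted());                  // invariant restored on exit
    }
    bool isSorted() const {
        return std::is_sorted(data.begin(), data.end());
    }
private:
    std::vector<int> data;
};

int main() {
    SortedVector v;
    v.insert(3);
    v.insert(1);
    v.insert(2);
    return 0;
}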

Following on from what it is: invariants are quite useful in writing clean code, since knowing conceptually what invariants should be present in your code allows you to easily decide how to organize your code to reach those aims. As mentioned earlier, they're also useful in debugging, as checking whether the invariant is being maintained is often a good way of seeing if whatever manipulation you're attempting to perform is actually doing what you want it to.

It's typically a quantity that does not change under certain mathematical operations.
An example is a scalar, which does not change under rotations. In magnetic resonance imaging, for example, it is useful to characterize a tissue property by a rotational invariant, because then its estimation ideally does not depend on the orientation of the body in the scanner.
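A tiny numerical illustration of a rotational invariant (rotating a 2-D vector changes its components but not its length):

#include <cassert>
#include <cmath>

int main() {
    double x = 3.0, y = 4.0, theta = 0.7;  // arbitrary vector and angle
    double xr = x * std::cos(theta) - y * std::sin(theta);
    double yr = x * std::sin(theta) + y * std::cos(theta);
    // The norm is invariant under rotation.
    assert(std::abs(std::hypot(xr, yr) - std::hypot(x, y)) < 1e-9);
    return 0;
}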

The ADT invariant specifies relationships among the data fields (instance variables) that must always be true before and after the execution of any instance method.

There is an excellent example of an invariant and why it matters in the book Java Concurrency in Practice.
Although the example is Java-centric, it describes some code that is responsible for calculating the factors of a provided integer. The example code attempts to cache the last number provided, and the factors that were calculated, to improve performance. In this scenario there is an invariant that was not accounted for in the example code, which has left the code susceptible to race conditions in a concurrent scenario.
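The invariant in that example is roughly "the cached factors belong to the cached number". A C++ paraphrase of the broken pattern (this is not the book's code; the names are illustrative):

#include <vector>

// BROKEN under concurrency: lastNumber and lastFactors are related by an
// invariant ("lastFactors is the factorization of lastNumber"), but they
// are updated by two separate writes, so another thread can observe a
// mismatched pair: a race condition.
class FactorCache {
public:
    std::vector<long> factor(long n) {
        if (n == lastNumber)      // thread A may read here...
            return lastFactors;   // ...between thread B's two writes below
        std::vector<long> f = computeFactors(n);
        lastNumber = n;           // write 1
        lastFactors = f;          // write 2: invariant is false in between
        return f;
    }
private:
    static std::vector<long> computeFactors(long n) {
        std::vector<long> f;
        for (long d = 2; d * d <= n; ++d)
            while (n % d == 0) { f.push_back(d); n /= d; }
        if (n > 1) f.push_back(n);
        return f;
    }
    long lastNumber = 0;
    std::vector<long> lastFactors;
};

int main() {
    FactorCache c;
    c.factor(12);  // caches {2, 2, 3} for 12
    return 0;
}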

Related

TFPCustomHashTable constructor use 196613 constant. Why use this particular value?

Following code is part of contnrs unit of FreePascal
constructor TFPCustomHashTable.Create;
begin
CreateWith(196613,@RSHash);
end;
I am curious about 196613. I know it is the hash table size. Is there any particular reason why this value is used?
In my test, the constructor took about 3-4 ms to execute, which in my particular situation is not acceptable. I suspect this is related to this constant value.
Update:
My question are:
Why is 196613 chosen? Why not another prime number?
Does it affect constructor call execution time?
196613 is one of the numbers being recommended for hash tables sizes (prime and far away from powers of two), for more info see e.g. https://planetmath.org/goodhashtableprimes.
It affects the constructor call execution time, yes. You can always construct TFPCustomHashTable using CreateWith and pass a size of your choice (any number is fine, as the resizing algorithm checks for suitable sizes anyway) as well as your own hash function (or the pre-defined RSHash):
MyHashTable:=TFPCustomHashTable.CreateWith(193,@RSHash);
Keep in mind, though, that resizing a hash table is an expensive operation, as it requires recalculating the hash function for all elements, so starting with too small a value is not a good idea either.

Simple examples for using while loops

I'm trying to write up some examples to explain when a while loop should be used, and when a for loop should be used.
When looking for 'interesting' cases to show young and novice programmers, I realized that the vast majority of textbook examples for while loops will look something like this:
i = 0
while i < 10:
    do something
    i = i + 1
'do something' might be printing the odd numbers, squaring i, etc... However, all of these are obviously easier to write with a for loop!
I'm looking for more interesting examples. They would have to be:
Suitable for younger programmers (e.g. not too much math such as numerical root finding or the sequence in Collatz conjecture)
Easier (or more intuitive) to be solved with while loops rather than for.
Have some real use to it (e.g. I could do while random() < 0.95, but what's a real use for this?)
The only example I could come up with is when getting a list input from the user one-by-one (e.g. numbers to be summed), but the user will have to terminate it with a special input, and also this seems pointless as the user could just say in advance how many entries there will be in the sequence.
The fundamental difference between a FOR loop and a WHILE loop is that for a FOR loop, the number of iterations is bounded by a constant that is known before the loop starts, whereas for a WHILE loop, the number of iterations can be unbounded, unknown, or infinite.
As a result, a language offering only WHILE loops is Turing-complete, a language offering only FOR loops is not.
So, the first obvious thing that only a WHILE loop can do, is an infinite loop. Things that are easily modeled as infinite loops are, for example, a web server, a Netflix client, a game loop, a GUI event loop, or an operating system:
WHILE (r = nextHttpRequest):
    handle(r)
END

WHILE (p = nextVideoStreamPacket):
    frame = decode(p)
    draw(frame)
END

WHILE (a = playerAction):
    computeNextFrame(a)
END

WHILE (e = nextEvent):
    handle(e)
END

WHILE (s = sysCall):
    process(s)
END
A good example where the loop is not infinite, but the bound is not known in advance, is (as you already mentioned in your question) asking for user input. Something like this:
WHILE (askBoolean("Do you want to play again?")):
    playGame()
END
Another good example is processing a C-like string, where the length of the string is unknown but finite. This is the same situation for a linked list, or for any data structure where there is a notion of "next", but not a notion of "size", instead there is some sentinel value that marks the end (e.g. NUL-terminated strings in C) or a way to check whether there is a next element (e.g. Iterator in Java):
WHILE ((element = getNext()) != END_MARKER):
    process(element)
END

WHILE (hasNextElement):
    process(getNext())
END
There are also situations that can be handled with a FOR loop, but a WHILE loop is more elegant. One situation I can think of, is that the bound for the number of iterations is known in advance, it is constant, but the known bound is ridiculously large, and the actual number of iterations required is significantly less than the bound.
Unfortunately, I cannot come up with a good real-life example of this; maybe someone else can. A FOR loop for this will then typically look like this, in order to skip the iterations from the actual number of iterations up to the upper bound:
FOR (i FROM 1 TO $SOME_LARGE_UPPER_BOUND):
    IF (terminationConditionReached):
        NOOP()
    ELSE:
        doSomethingInteresting()
    END
END
Which would much better be expressed as
WHILE (NOT terminationConditionReached):
    doSomethingInteresting()
END
Using the FOR loop could make sense in this situation, if the value of i is of interest:
FOR (i FROM 1 TO $SOME_LARGE_UPPER_BOUND):
    IF (terminationConditionReached):
        NOOP()
    ELSE:
        doSomethingInterestingWithI(i)
    END
END
A last situation I can think of, where a WHILE loop is more appropriate than a FOR loop, even though the number of iterations is bounded by a known constant, is if that constant is not "semantically interesting" for the loop.
For example, a game loop for Tic-Tac-Toe only needs at most 9 moves, so it could be modeled as a FOR loop:
FOR (i FROM 1 TO 9):
    IF (player1Won OR player2Won):
        NOOP
    ELSE:
        makeMove()
    END
END
But, the number "9" is not really interesting here. It's much more interesting whether one player has won or the board is full:
WHILE (NOT (player1Won OR player2Won OR boardFull)):
    makeMove()
END
[Note: at least if playing against a child, this is also an example of the second-to-last situation, namely that the upper bound is known to be 9, but a lot of games will be shorter than 9 moves. However, I would still like to find an example for that, which is not also an example of a semantically un-interesting termination condition.]
So, we have two classes of situations here: one where a FOR loop simply cannot be used (when the bound is unknown, non-existent, or infinite), and one where a FOR loop can be used, but a WHILE loop is more intention-revealing.

PIC Assembly: Calling functions with variables

So say I have a variable, which holds a song number. -> song_no
Depending upon the value of this variable, I wish to call a function.
Say I have many different functions:
Fcn1
....
Fcn2
....
Fcn3
So for example,
If song_no = 1, call Fcn1
If song_no = 2, call Fcn2
and so forth...
How would I do this?
You should have a compare instruction in the instruction set (the post suggests you are looking for an assembly solution); the result usually sets a flag bit or a value in a register. But you need to check the instruction set for that.
the code should look something like:
load(song_no, $R1)
cmpeq($1, R1)   // result is in R3
jmpe Fcn1       // jump if equal
cmpeq($2, R1)
jmpe Fcn2
...
Hope this helps
I'm not well acquainted with the pic, but these sort of things are usually implemented as a jump table. In short, put pointers to the target routines in an array and call/jump to the entry indexed by your song_no. You just need to calculate the address into the array somehow, so it is very efficient. No compares necessary.
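For comparison, the same jump-table idea in a higher-level language might be sketched in C++ like this (not PIC code; the function names are placeholders):

#include <cstdio>

void fcn1() { std::puts("play song 1"); }
void fcn2() { std::puts("play song 2"); }
void fcn3() { std::puts("play song 3"); }

int main() {
    void (*table[])() = { fcn1, fcn2, fcn3 };  // pointers to the routines
    int song_no = 2;                           // 1-based, as in the question
    if (song_no >= 1 && song_no <= 3)
        table[song_no - 1]();                  // index, no compare chain
    return 0;
}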
To elaborate on Jens' reply, the traditional way of doing this on 12/14-bit PICs is the same way you would look up constant data from ROM, except instead of returning a number with RETLW, you jump forward to the desired routine with GOTO. The actual jump into the jump table is performed by adding the offset to the program counter.
Something along these lines:
        movlw   high(table)     ; preload PCLATH with the table's page
        movwf   PCLATH
        movf    song_no,w
        addlw   low(table)      ; offset into the table
        btfsc   STATUS,C        ; manual carry into PCLATH if the add overflowed
        incf    PCLATH,f
        addwf   PCL,f           ; computed jump
table:
        goto    fcn1
        goto    fcn2
        goto    fcn3
        ...
Unfortunately there are some subtleties here.
The PIC16 only has an eight-bit accumulator while the address space to jump into is 11 bits. Therefore both a directly writable low byte (PCL) and a latched high-byte register (PCLATH) are available. The value in the latch is applied as the MSB once the jump is taken.
The jump table may cross a page, hence the manual carry into PCLATH. Omit the BTFSC/INCF if you know the table will always stay within a 256-instruction page.
By the time the ADDWF executes, the program counter will already have been incremented and be pointing at table when PCL is added to. Therefore a 0 offset jumps to the first table entry.
Unlike the PIC18 each GOTO instruction fits in a single 14-bit instruction word and PCL addresses instructions not bytes, so the offset should not be multiplied by two.
All things considered, you're probably better off searching for general PIC16 tutorials. Any of these will clearly explain how data/jump tables work, not to mention begin with the basics of how to handle the chip. Frankly, it is a particularly convoluted architecture and I would advise staying with the "free" HI-TECH C compiler unless you particularly enjoy logic puzzles or desperately need the performance.

What's that CS "big word" term for the same action always having the same effect

There's a computer science term for this that escapes my head, one of those words that ends with "-icity".
It means something like: a given action will always produce the same result, i.e. there won't be any hysteresis, or the action will not alter the functioning of the system...
Ring a bell, anyone? Thanks.
Apologies for the tagging, I'm only tagging it Java b/c I learned about this in a Java class back in school and I figure that crowd tends to have more CS background...
This could mean two different things:
deterministic - meaning that given the same initial state, the same operation (with exactly the same data) will always produce the same resulting state (and optional output.) - http://en.wikipedia.org/wiki/Deterministic_algorithm
i.e. the same action has the same effect - assuming you start from the same place in the same system. (Nothing random about it, nothing fed in from the outside that could affect the result...)
idempotent - meaning applying a function to a value once e.g. f(x) = v produces the same result as applying the function multiple times e.g. f(f(f(x))) = v - http://en.wikipedia.org/wiki/Idempotence
i.e. one or more function applications yields the same value given the same initial value
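A tiny compilable check of that property in C++, using std::abs as a convenient idempotent function:

#include <cassert>
#include <cstdlib>

int main() {
    int x = -5;
    // Idempotence: applying f twice gives the same result as applying it once.
    assert(std::abs(std::abs(x)) == std::abs(x));
    return 0;
}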
You mean idempotent?
Referential transparency is also used in some CS circles.
Nullipotent?
Deterministic?
Are you looking for invariant?
http://en.wikipedia.org/wiki/Invariant_%28computer_science%29
In computer science, a predicate is called an invariant to a sequence of operations if the predicate always evaluates at the end of the sequence to the same value as before starting the sequence.
side effect-free?
In math, a function 'f' is idempotent if multiple applications do not change the result.
you mean idempotence?
or the action will not alter the functioning of the system...
Are you looking for ‘idempotence’?
The "ends with -icity" part of your question makes me think you might be looking for monotonicity, even though it does not quite match description/definition of the word. From the Wikipedia article:
In mathematics, a monotonic function (or monotone function) is a function which preserves the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order theory.
In illustrations in the Wikipedia article, three functions are drawn: A and B are both monotonic (increasing and decreasing respectively), while C is not monotonic.
You mean an atomic block of code?
The A in ACID.
Atomicity - states that database modifications must follow an “all or nothing” rule. Each transaction is said to be “atomic.” If one part of the transaction fails, the entire transaction fails.
It sounds like what you're describing would be a memoryless function. Although the term memorylessness is usually used for stochastic distributions, I don't quite remember if it has a programming equivalent...

Creating a logic gate simulator

I need to make an application for creating logic circuits and seeing the results. This is primarily for use in A-Level (UK, 16-18 year olds generally) computing courses.
I've never made any applications like this, so I am not sure of the best design for storing the circuit and evaluating the results (at a reasonable speed, say 100Hz on a 1.6GHz single-core computer).
Rather than have the circuit built from the basic gates (and, or, nand, etc.) I want to allow these gates to be used to make "chips" which can then be used within other circuits (e.g. you might want to make an 8-bit register chip, or a 16-bit adder).
The problem is that the number of gates increases massively with such circuits, such that if the simulation worked on each individual gate it would have thousands of gates to simulate, so I need to simplify these components that can be placed in a circuit so they can be simulated quickly.
I thought about generating a truth table for each component; the simulation could then use a lookup table to find the outputs for a given input. The problem that occurred to me, though, is that the size of such tables increases massively with the number of inputs. If a chip had 32 inputs, then the truth table would need 2^32 rows. This uses a massive amount of memory, in many cases more than is available, so it isn't practical for non-trivial components. It also won't work with chips that can store their state (e.g. registers), since they can't be represented as a simple table of inputs and outputs.
I know I could just hardcode things like register chips; however, since this is for educational purposes, I want people to be able to make their own components as well as view and edit the implementations of standard ones. I considered allowing such components to be created and edited using code (e.g. DLLs or a scripting language), so that an adder for example could be represented as "output = inputA + inputB". However, that assumes that the students have done enough programming in the given language to be able to understand and write such plugins to mimic the results of their circuit, which is likely not to be the case...
Is there some other way to take a boolean logic circuit and simplify it automatically so that the simulation can determine the outputs of a component quickly?
As for storing the components I was thinking of storing some kind of tree structure, such that each component is evaluated once all components that link to its inputs are evaluated.
eg consider: A.B + C
The simulator would first evaluate the AND gate, and then evaluate the OR gate using the output of the AND gate and C.
However, it just occurred to me that in cases where the outputs link back round to the inputs, this will cause a deadlock, because their inputs will never all be evaluated... How can I overcome this, since the program can only evaluate one gate at a time?
Have you looked at Richard Bowles's simulator?
You're not the first person to want to build their own circuit simulator ;-).
My suggestion is to settle on a minimal set of primitives. When I began mine (which I plan to resume one of these days...) I had two primitives:
Source: zero inputs, one output that's always 1.
Transistor: two inputs A and B, one output that's A and not B.
Obviously I'm misusing the terminology a bit, not to mention neglecting the niceties of electronics. On the second point I recommend abstracting to wires that carry 1s and 0s like I did. I had a lot of fun drawing diagrams of gates and adders from these. When you can assemble them into circuits and draw a box round the set (with inputs and outputs) you can start building bigger things like multipliers.
If you want anything with loops you need to incorporate some kind of delay -- so each component needs to store the state of its outputs. On every cycle you update all the new states from the current states of the upstream components.
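A minimal sketch of that two-phase update in C++ (the Gate struct and the AND-only gate model are invented to keep it short):

#include <vector>

struct Gate {
    std::vector<int> inputs;  // indices of upstream gates
    bool state = false;       // current output
    bool next = false;        // output being computed for the next cycle
};

// One cycle: compute every next state from the *current* states, then
// commit, so feedback loops see a consistent snapshot plus one delay.
void step(std::vector<Gate>& gates) {
    for (Gate& g : gates) {
        bool v = true;                        // model every gate as AND
        for (int i : g.inputs) v = v && gates[i].state;
        g.next = v;
    }
    for (Gate& g : gates) g.state = g.next;   // commit phase
}

int main() {
    std::vector<Gate> g(2);
    g[0].inputs = {1};        // two gates feeding each other: a loop
    g[1].inputs = {0};
    g[1].state = true;
    step(g);                  // safe despite the loop
    return 0;
}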
Edit: Regarding your concerns about scalability, how about defaulting to the first-principles method of simulating each component in terms of its state and upstream neighbours, but providing ways of optimising subcircuits:
If you have a subcircuit S with inputs A[m] with m < 8 (say, giving a maximum of 256 rows) and outputs B[n] and no loops, generate the truth table for S and use that. This could be done automatically for identified subcircuits (and reused if the subcircuit appears more than once) or by choice.
If you have a subcircuit with loops, you may still be able to generate a truth table. There are fixed-point finding methods which can help here.
If your subcircuit has delays (and they are significant to the enclosing circuit) the truth table can incorporate state columns. E.g. if the subcircuit has input A, inner state B, and output C, where C <- A and B, B <- A, the truth table could be:
A B | B' C
0 0 | 0 0
0 1 | 0 0
1 0 | 1 0
1 1 | 1 1
If you have a subcircuit that the user asserts implements a particular known pattern such as "adder", provide an option for using a hard-coded implementation for updating that subcircuit instead of by simulating its inner parts.
When I made a circuit emulator (sadly, also incomplete and also unreleased), here's how I handled loops:
Each circuit element stores its boolean value
When an element "E0" changes its value, it notifies (via the observer pattern) all who depend on it
Each observing element evaluates its new value and does likewise
When the E0 change occurs, a level-1 list is kept of all elements affected. If an element already appears on this list, it gets remembered in a new level-2 list but doesn't continue to notify its observers. When the sequence which E0 began has stopped notifying new elements, the next queue level is handled, i.e. the sequence is followed and completed for the first element added to level-2, then the next added to level-2, etc., until all of level-x is complete; then you move to level-(x+1).
This is in no way complete. If you ever have multiple oscillators doing infinite loops, then no matter what order you take them in, one could prevent the other from ever getting its turn. My next goal was to alleviate this by limiting steps with clock-based sync'ing instead of cascading combinatorials, but I never got this far in my project.
You might want to take a look at the From NAND to Tetris in 12 steps course software. There is a video talking about it on YouTube.
The course page is at: http://www1.idc.ac.il/tecs/
If you can disallow loops (outputs linking back to inputs), then you can significantly simplify the problem. In that case, for every input there will be exactly one definite output. Cycles, however, can make the output undecidable (or rather, constantly changing).
Evaluating a circuit without loops should be easy - just use the BFS algorithm with "junctions" (connections between logic gates) as the items in the list. Start off with all the inputs to all the gates in an "undefined" state. As soon as a gate has all inputs "defined" (either 1 or 0), calculate its output and add its output junctions to the BFS list. This way you only have to evaluate each gate and each junction once.
If there are loops, the same algorithm can be used, but the circuit can be built in such a way that it never comes to a "rest" and some junctions are always changing between 1 and 0.
Oops. Actually, this algorithm can't be used in this case, because the looped gates (and gates depending on them) would forever stay "undefined".
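For the loop-free case, the BFS evaluation described above can be sketched in C++ like this (the gate model is invented; any gates still marked undefined at the end are exactly the ones caught in, or fed by, a loop):

#include <queue>
#include <vector>

enum class Op { INPUT, AND, OR };

struct Gate {
    Op op = Op::AND;
    std::vector<int> inputs;   // gates whose outputs feed this gate
    std::vector<int> outputs;  // gates fed by this gate
    int undefined = 0;         // inputs not yet evaluated
    bool value = false;
};

void evaluate(std::vector<Gate>& g) {
    std::queue<int> ready;
    for (int i = 0; i < (int)g.size(); ++i) {
        g[i].undefined = (int)g[i].inputs.size();
        if (g[i].undefined == 0) ready.push(i);  // circuit inputs
    }
    while (!ready.empty()) {
        int i = ready.front();
        ready.pop();
        if (g[i].op != Op::INPUT) {              // inputs keep preset values
            bool v = (g[i].op == Op::AND);
            for (int in : g[i].inputs)
                v = (g[i].op == Op::AND) ? (v && g[in].value)
                                         : (v || g[in].value);
            g[i].value = v;
        }
        for (int out : g[i].outputs)
            if (--g[out].undefined == 0) ready.push(out);
    }
    // Gates with undefined > 0 here are part of (or downstream of) a loop.
}

int main() {
    // A.B + C from the question: gates 0-2 are the inputs A, B, C.
    std::vector<Gate> g(5);
    for (int i : {0, 1, 2}) g[i].op = Op::INPUT;
    g[0].value = g[1].value = true;              // A = B = 1, C = 0
    g[3].op = Op::AND; g[3].inputs = {0, 1};
    g[0].outputs = {3}; g[1].outputs = {3};
    g[4].op = Op::OR;  g[4].inputs = {3, 2};
    g[3].outputs = {4}; g[2].outputs = {4};
    evaluate(g);                                 // g[4].value == true
    return 0;
}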
You could introduce them to the concept of Karnaugh maps, which would help them simplify truth values for themselves.
You could hard code all the common ones. Then allow them to build their own out of the hard coded ones (which would include low level gates), which would be evaluated by evaluating each sub-component. Finally, if one of their "chips" has less than X inputs/outputs, you could "optimize" it into a lookup table. Maybe detect how common it is and only do this for the most used Y chips? This way you have a good speed/space tradeoff.
You could always JIT compile the circuits...
As I haven't really thought about it, I'm not really sure what approach I'd take.. but it would possibly be a hybrid method and I'd definitely hard code popular "chips" in too.
When I was playing around making a "digital circuit" simulation environment, I had each defined circuit (a basic gate, a mux, a demux and a couple of other primitives) associated with a transfer function (that is, a function that computes all outputs based on the present inputs), an "agenda" structure (basically a linked list of "when to activate a specific transfer function"), virtual wires and a global clock.
I arbitrarily set the wires to hard-modify the inputs whenever the output changed and the act of changing an input on any circuit to schedule a transfer function to be called after the gate delay. With this at hand, I could accommodate both clocked and unclocked circuit elements (a clocked element is set to have its transfer function run at "next clock transition, plus gate delay", any unclocked element just depends on the gate delay).
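That agenda is essentially the event queue of a discrete-event simulation. A compact C++ sketch of the idea (the Event type and the single scheduled action are invented; a real simulator would also track per-wire fanout):

#include <functional>
#include <queue>
#include <vector>

struct Event {
    long time;                     // simulated time at which to fire
    std::function<void()> action;  // the scheduled transfer function
};

struct Later {
    bool operator()(const Event& a, const Event& b) const {
        return a.time > b.time;    // earliest event first
    }
};

// The "agenda": a priority queue ordered by simulated time.
using Agenda = std::priority_queue<Event, std::vector<Event>, Later>;

int main() {
    Agenda agenda;
    long now = 0;
    const long gateDelay = 2;
    bool out = false;
    // An input change schedules the gate's transfer function after its delay.
    agenda.push({now + gateDelay, [&out] { out = true; }});
    while (!agenda.empty()) {
        Event e = agenda.top();
        agenda.pop();
        now = e.time;              // advance the clock to the next event
        e.action();                // run the transfer function
    }
    return out ? 0 : 1;
}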
Never really got around to build a GUI for it, so I've never released the code.