Why doesn't `list::iterator + 1` work? - stl

std::list<CPoint>::iterator iter= vertices.end();
CPoint point = *(iter+1);
In such cases I've tried to assign the value of (iter-1) or (iter+1) to a variable. Why doesn't that work, whereas iter++ or iter-- does?

Simply put, these operations are not part of std::list's iterator interface. You can use the std::advance() function for that.
Of course, operator+(int) could be overloaded to do that, just as operator++() is, but it is not, probably because this operation is not guaranteed to be of constant complexity, and a syntax like (iter + n) could suggest otherwise.
From the documentation of std::advance:
Complexity: Linear. However, if InputIt additionally meets the requirements of RandomAccessIterator, complexity is constant.
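
For example, a minimal sketch of both approaches (assuming CPoint is defined, vertices holds at least two elements, and a C++11 compiler for std::next/std::prev):

#include <list>
#include <iterator>   // std::advance, std::next, std::prev

// Hypothetical helper; CPoint and vertices come from the question.
CPoint second_point(std::list<CPoint>& vertices)
{
    std::list<CPoint>::iterator iter = vertices.begin();
    std::advance(iter, 1);    // move forward one element (linear time for std::list)
    // C++11 alternatives that return an advanced copy and leave iter untouched:
    // std::list<CPoint>::iterator nxt = std::next(iter);
    // std::list<CPoint>::iterator prv = std::prev(iter);
    return *iter;             // the second element
}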


cublas<>gemmBatched with aliased Carray parameter

I'm trying to implement something like scipy.sparse.bsr_matrix operations with cublas<>gemmBatched. Unfortunately I can't do this with cusparse since my BSR matrix isn't square.
I'm new to cublas, and I wonder whether it's OK (correctness-wise and performance-wise) to use an aliased pointer array (as in pointer aliasing) for float * Carray[]
e.g.
/* given float * out as the real output array */
float * Carray[] = {
out + 1*stride, out + 2*stride, out + 3*stride,
out + 1*stride, out + 2*stride, out + 3*stride,
/* and repeat */
};
Also, although I'm pretty sure it will be correct if I use an aliased Aarray or Barray, is there any performance impact?
Thanks!
In general, there is no problem with that sort of aliasing in CUBLAS. In fact, it is the normal way to deal with submatrices, and most LAPACK style solvers use pointer indexing or aliasing extensively to perform sub-block operations on matrices.
I don't believe there is a performance penalty in working this way, at least for the batch solvers, although the only way to be certain would be via benchmarking, which is probably trivial to test yourself.
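
By way of illustration, here is a rough host-side sketch of how such an aliased Carray could be set up and passed to cublasSgemmBatched (the buffers d_A, d_B, out, the contiguous A/B layout, and the repeating 1/2/3 offset pattern are placeholders loosely based on the question; error checking omitted):

#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

void batched_gemm_with_aliased_C(cublasHandle_t handle,
                                 const float *d_A, const float *d_B, float *out,
                                 int m, int n, int k, int stride, int batchCount)
{
    // Host-side pointer arrays; note that the Carray entries may repeat
    // (alias) the same output sub-block, as in the question.
    std::vector<const float*> hA(batchCount), hB(batchCount);
    std::vector<float*>       hC(batchCount);
    for (int i = 0; i < batchCount; ++i) {
        hA[i] = d_A + i * m * k;                  // placeholder layout
        hB[i] = d_B + i * k * n;                  // placeholder layout
        hC[i] = out + ((i % 3) + 1) * stride;     // deliberately aliased, as in the question
    }

    // cublas<T>gemmBatched expects the pointer arrays themselves in device memory.
    const float **dA; const float **dB; float **dC;
    cudaMalloc(&dA, batchCount * sizeof(float*));
    cudaMalloc(&dB, batchCount * sizeof(float*));
    cudaMalloc(&dC, batchCount * sizeof(float*));
    cudaMemcpy(dA, hA.data(), batchCount * sizeof(float*), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), batchCount * sizeof(float*), cudaMemcpyHostToDevice);
    cudaMemcpy(dC, hC.data(), batchCount * sizeof(float*), cudaMemcpyHostToDevice);

    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemmBatched(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                       m, n, k, &alpha,
                       dA, m, dB, k, &beta, dC, m, batchCount);

    cudaDeviceSynchronize();   // make sure the batched GEMM has finished before freeing
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
}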

Function types declarations in Mathematica

I have bumped into this problem several times regarding the kinds of argument type declarations Mathematica understands for functions.
It seems Mathematica understands the following type declarations:
_Integer,
_List,
_?MatrixQ,
_?VectorQ
However, _Real and _Complex declarations, for instance, sometimes cause the function not to evaluate. Any idea why?
What's the general rule here?
When you do something like f[x_]:=Sin[x], what you are doing is defining a pattern replacement rule. If you instead say f[x_smth]:=5 (if you try both, do Clear[f] before the second example), you are really saying "wherever you see f[x], check if the head of x is smth and, if it is, replace it by 5". Try, for instance,
Clear[f]
f[x_smth]:=5
f[5]
f[smth[5]]
So, to answer your question, the rule is that in f[x_hd]:=1;, hd can be anything and is matched to the head of x.
One can also have more complicated definitions, such as f[x_] := Sin[x] /; x > 12, which will match if x>12 (of course this can be made arbitrarily complicated).
Edit: I forgot about the Real part. You can certainly define Clear[f];f[x_Real]=Sin[x] and it works for e.g. f[12.]. But you have to keep in mind that, while Head[12.] is Real, Head[12] is Integer, so that your definition won't match.
Just a quick note since no one else has mentioned it. You can pattern match for multiple Heads - and this is quicker than using the conditional matching of ? or /;.
f[x:(_Integer|_Real)] := True (* function definition goes here *)
For simple functions acting on Real or Integer arguments, it runs in about 75% of the time of the similar definition
g[x_] /; Element[x, Reals] := True (* function definition goes here *)
(which, as WReach pointed out, itself runs in about 75% of the time of
g[x_?(Element[#, Reals]&)] := True).
The advantage of the latter form is that it works with Symbolic constants such as Pi - although if you want a purely numeric function, this can be fixed in the former form with the use of N.
The most likely problem is the input you're using to test the functions. For instance,
f[x_Complex]:= Conjugate[x]
f[x + I y]
f[3 + I 4]
returns
f[x + I y]
3 - I 4
The reason the second one works while the first one doesn't is revealed by looking at their FullForms:
x + I y // FullForm == Plus[x, Times[ Complex[0,1], y]]
3 + I 4 // FullForm == Complex[3,4]
Internally, Mathematica transforms 3 + I 4 into a Complex object because each of the terms is numeric, but x + I y does not get the same treatment as x and y are Symbols. Similarly, if we define
g[x_Real] := -x
and using it:
g[ 5 ] == g[ 5 ]
g[ 5. ] == -5.
The key here is that 5 is an Integer which is not recognized as a subset of Real, but by adding the decimal point it becomes Real.
As acl pointed out, the pattern _Something means match to anything with Head === Something, and both the _Real and _Complex cases are very restrictive in what is given those Heads.

Repeated application of functions

Reading this question got me thinking: For a given function f, how can we know that a loop of this form:
while (x > 2)
x = f(x)
will stop for any value x? Is there some simple criterion?
(The fact that f(x) < x for x > 2 doesn't seem to help since the series may converge).
Specifically, can we prove this for sqrt and for log?
For these functions, a proof that ceil(f(x))<x for x > 2 would suffice. You could do one iteration -- to arrive at an integer number, and then proceed by simple induction.
For the general case, probably the best idea is to use well-founded induction to prove this property. However, as Moron pointed out in the comments, this could be impossible in the general case and the right ordering is, in many cases, quite hard to find.
Edit, in reply to Amnon's comment:
If you wanted to use well-founded induction, you would have to define another strict order that would be well-founded. In the case of the functions you mentioned this is not hard: you can take x << y if and only if ceil(x) < ceil(y), where << is a symbol for this new order. This order is of course well-founded on numbers greater than 2, and both sqrt and log are decreasing with respect to it -- so you can apply well-founded induction.
Of course, in general case such an order is much more difficult to find. This is also related, in some way, to total correctness assertions in Hoare logic, where you need to guarantee similar obligations on each loop construct.
There's a general theorem for when the sequence of iterations will converge. (A convergent sequence may not stop in a finite number of steps, but it is getting closer to a target. You can get as close to the target as you like by going far enough out in the sequence.)
The sequence x, f(x), f(f(x)), ... will converge if f is a contraction mapping. That is, there exists a positive constant k < 1 such that for all x and y, |f(x) - f(y)| <= k |x-y|.
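As a quick sanity check (a sketch added here, not part of the original answer): for x, y >= 1,
|sqrt(x) - sqrt(y)| = |x - y| / (sqrt(x) + sqrt(y)) <= |x - y| / 2,
so sqrt is a contraction with k = 1/2 on [1, infinity), which covers the whole region x > 2 that the loop runs over.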
(The fact that f(x) < x for x > 2 doesn't seem to help since the series may converge).
If we're talking about floats here, that's not true. If for all x > n f(x) is strictly less than x, it will reach n at some point (because there's only a limited number of floating point values between any two numbers).
Of course this means you need to prove that f(x) is actually less than x using floating point arithmetic (i.e. proving it is less than x mathematically does not suffice, because then f(x) = x may still be true with floats when the difference is not enough).
There is no general algorithm to determine whether a function f and a variable x will end or not in that loop. The Halting problem is reducible to that problem.
For sqrt and log, we could safely do that because we happen to know the mathematical properties of those functions: sqrt approaches 1, and log eventually goes negative. So the condition x > 2 has to become false at some point.
Hope that helps.
In the general case, all that can be said is that the loop will terminate when it encounters x_i <= 2. That doesn't mean that the sequence will converge, nor does it even mean that it is bounded below 2. It only means that the sequence contains a value that is not greater than 2.
That said, any sequence containing a subsequence that converges to a value strictly less than two will (eventually) halt. That is the case for the sequence x_(i+1) = sqrt(x_i), since it converges to 1. In the case of y_(i+1) = log(y_i), the sequence will contain a value less than 2 before becoming undefined over the reals (it is well defined on the extended complex plane, C*, but I don't think it will, in general, converge there except at any stable points that may exist, i.e. where z = log(z)). Ultimately, what this means is that you need to perform some upfront analysis on the sequence to better understand its behavior.
The standard test for convergence of a sequence x_i to a point z is that, given ε > 0, there is an n such that for all i > n, |x_i - z| < ε.
As an aside, consider the Mandelbrot set, M. The test for whether a particular point c in C is an element of M is whether the sequence z_(i+1) = z_i^2 + c (starting from z_0 = 0) remains bounded, and it is unbounded whenever some |z_i| > 2. Some elements of M give convergent sequences (such as c = 0), but many do not (such as c = -1).
Sure. For all positive numbers x, the following inequality holds:
log(x) <= x - 1
(this is a pretty basic result from real analysis; it suffices to observe that the second derivative of log is always negative for all positive x, so the function is concave down, and that x-1 is tangent to the function at x = 1). From this it follows essentially immediately that your while loop must terminate within the first ceil(x) - 2 steps -- though in actuality it terminates much, much faster than that.
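Spelled out a little more (an added step, writing x_0 = x and x_(k+1) = log(x_k)): the inequality gives x_(k+1) <= x_k - 1 while the loop is still running, so x_k <= x - k; hence x_k <= 2 as soon as k >= x - 2, i.e. within ceil(x) - 2 iterations.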
A similar argument will establish your result for f(x) = sqrt(x); specifically, you can use the fact that:
sqrt(x) <= x/(2 sqrt(2)) + 1/sqrt(2)
for all positive x.
If you're asking whether this result holds for actual programs, instead of mathematically, the answer is a little bit more nuanced, but not much. Basically, many languages don't actually have hard accuracy requirements for the log function, so if your particular language implementation had an absolutely terrible math library this property might fail to hold. That said, it would need to be a really, really terrible library; this property will hold for any reasonable implementation of log.
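
As a rough illustration of how quickly these loops actually finish, here is a small sketch (the starting value 1e6 and the helper are arbitrary choices, not from the original posts):

#include <cmath>
#include <cstdio>

// Counts how many iterations of x = f(x) it takes to leave the region x > 2.
template <typename F>
int iterations_until_done(double x, F f) {
    int count = 0;
    while (x > 2) {
        x = f(x);
        ++count;
    }
    return count;
}

int main() {
    std::printf("log : %d iterations\n",
                iterations_until_done(1e6, [](double v) { return std::log(v); }));
    std::printf("sqrt: %d iterations\n",
                iterations_until_done(1e6, [](double v) { return std::sqrt(v); }));
    return 0;
}

Both counts come out far smaller than the ceil(x) - 2 bound discussed above.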
I suggest reading this wikipedia entry which provides useful pointers. Without additional knowledge about f, nothing can be said.

Static analysis of multiple if statements (conditions)

I have code similar to:
if conditionA(x, y, z) then doA()
else if conditionB(x, y, z) then doB()
...
else if conditionZ(x, y, z) then doZ()
else throw ShouldNeverHappenException
I would like to validate two things (using static analysis):
If all conditions conditionA, conditionB, ..., conditionZ are mutually exclusive (i.e. it is not possible for two or more conditions to be true at the same time).
All possible cases are covered, i.e. "else throw" statement will never be called.
Could you recommend me a tool and/or a way I could (easily) do this?
I would appreciate more detailed information than "use Prolog" or "use Mathematica"... ;-)
UPDATE:
Let's assume that conditionA, conditionB, ..., conditionZ are (pure) functions and x, y, z have "primitive" types.
The item 1. that you want to do is a stylistic issue. The program makes sense even if the conditions are not exclusive. Personally, as an author of static analysis tools, I think that users get enough false alarms without trying to force style on them (and since another programmer would write overlapping conditions on purpose, to that other programmer what you ask would be a false alarm). This said, there are tools that are configurable: for one of those, you could write a rule stating that the cases have to be exclusive when this construct is encountered. And as suggested by Jeffrey, you can wrap your code in a context in which you compute a boolean condition that is true iff there is no overlap, and check that condition instead.
The item 2. is not a style issue: you want to know if the exception can be raised.
The problem is difficult in theory and in practice, so tools usually give up at least one of correctness (never fail to warn if there is an issue) or completeness (never warn for a non-issue). If the types of the variables were unbounded integers, computability theory would state that an analyzer cannot be both correct and complete and terminate for all input programs. But enough with the theory. Some tools give up both correctness and completeness, but that doesn't mean they are not useful.
An example of a tool that is correct is Frama-C's value analysis: if it says that a statement (such as the last case in the sequence of else-ifs) is unreachable, you know that it is unreachable. It is not complete, so when it doesn't say that the last statement is unreachable, you don't know.
An example of a tool that is complete is CUTE: it uses the so-called concolic approach to generate test cases automatically, aiming for structural coverage (that is, it will more or less heuristically try to generate tests that activate the last case once all the others have been taken). Because it generates test cases (each a single, definite input vector on which the code is actually executed), it never warns for a non-problem. This is what it means to be complete. But it may fail to find the test case that causes the last statement to be reached even though there is one: it is not correct.
This appears to be equivalent to solving a 3-SAT instance, which is NP-hard. It is unlikely a static analyzer would attempt to cover this domain, unfortunately.
In the general case this is, as @Michael Donohue points out, an NP-hard problem.
But if you have only a reasonable number of conditions to check, you could just write a program that checks all of them.
for (int x = lowestX; x <= highestX; x++)
    for (int y ...)
        for (int z ...)
        {
            int conditionsMet = 0;
            if (conditionA(x, y, z)) conditionsMet++;
            if (conditionB(x, y, z)) conditionsMet++;
            ...
            if (conditionZ(x, y, z)) conditionsMet++;
            if (conditionsMet != 1)
                PrintInBlinkingRed("Found an exception!", x, y, z);
        }
Assuming your conditions are boolean expressions (and/or/not) over boolean-valued predicates X, Y, Z, your question is easily solved with a symbolic boolean evaluation engine.
The question about whether they cover all cases is answered by taking a disjunction of all the conditions and asking if it is a tautology. Wang's algorithm does this just fine.
The question about whether they intersect is answered pairwise; for formulas a and b,
symbolically construct a & b == false and apply Wang's tautology test again.
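
As a toy illustration of those two checks, here is a brute-force truth-table version (enumerating assignments rather than running Wang's algorithm; the three formulas over X, Y, Z are invented for the example):

#include <cstdio>
#include <functional>
#include <vector>

int main() {
    using Formula = std::function<bool(bool, bool, bool)>;
    std::vector<Formula> conditions = {
        [](bool X, bool Y, bool Z) { return X && !Y; },   // conditionA (made up)
        [](bool X, bool Y, bool Z) { return Y; },         // conditionB (made up)
        [](bool X, bool Y, bool Z) { return !X && !Y; },  // conditionC (made up)
    };

    bool exhaustive = true, exclusive = true;
    for (int bits = 0; bits < 8; ++bits) {
        bool X = bits & 1, Y = bits & 2, Z = bits & 4;
        int met = 0;
        for (const auto& c : conditions) met += c(X, Y, Z);
        if (met == 0) exhaustive = false;  // the final "else throw" would be reachable
        if (met > 1)  exclusive  = false;  // two conditions overlap
    }
    std::printf("exhaustive: %s, mutually exclusive: %s\n",
                exhaustive ? "yes" : "no", exclusive ? "yes" : "no");
    return 0;
}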
We've used the DMS Software Reengineering Toolkit to carry out similar boolean value computations (partial evaluations) over preprocessor conditionals in C. DMS provides the ability to parse source code (important if you intend to do this across a large code base and/or repeatedly as you modify your program over time), extract the program fragments, symbolically compose them, and then apply rewriting rules (to carry out boolean simplifications or algorithms such as Wang's).

Loop termination conditions

These for-loops are among the first basic examples of formal correctness proofs of algorithms. They have different but equivalent termination conditions:
1 for ( int i = 0; i != N; ++i )
2 for ( int i = 0; i < N; ++i )
The difference becomes clear in the postconditions:
The first one gives the strong guarantee that i == N after the loop terminates.
The second one only gives the weak guarantee that i >= N after the loop terminates, but you will be tempted to assume that i == N.
If for any reason the increment ++i is ever changed to something like i += 2, or if i gets modified inside the loop, or if N is negative, the program can fail:
The first one may get stuck in an infinite loop. It fails early, in the loop that has the error. Debugging is easy.
The second loop will terminate, and at some later time the program may fail because of your incorrect assumption of i == N. It can fail far away from the loop that caused the bug, making it hard to trace back. Or it can silently continue doing something unexpected, which is even worse.
Which termination condition do you prefer, and why? Are there other considerations? Why do many programmers who know this, refuse to apply it?
I tend to use the second form, simply because then I can be more sure that the loop will terminate. I.e. it's harder to introduce a non-termination bug by altering i inside the loop.
Of course, it also has the slightly lazy advantage of being one less character to type ;)
I would also argue, that in a language with sensible scope rules, as i is declared inside the loop construct, it shouldn't be available outside the loop. This would mitigate any reliance on i being equal to N at the end of the loop...
We shouldn't look at the counter in isolation: if for any reason someone changed the way the counter is incremented, they would also need to change the termination condition and any logic that relies on i == N afterwards.
I would prefer the second condition, since it's more standard and will not result in an endless loop.
In C++, using the != test is preferred for generality. Iterators in C++ have various concepts, like input iterator, forward iterator, bidirectional iterator, random access iterator, each of which extends the previous one with new capabilities. For < to work, random access iterator is required, whereas != merely requires input iterator.
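
For instance, a quick sketch of the difference (assuming a C++11 compiler for auto and brace initialization):

#include <list>
#include <vector>

int main() {
    std::list<int> xs = {1, 2, 3};
    // Fine: std::list iterators are bidirectional, and bidirectional iterators support != .
    for (auto it = xs.begin(); it != xs.end(); ++it) { /* ... */ }

    // Would not compile: operator< is only available for random access iterators.
    // for (auto it = xs.begin(); it < xs.end(); ++it) { }

    std::vector<int> ys = {1, 2, 3};
    // Fine: vector iterators are random access, so both != and < work.
    for (auto it = ys.begin(); it < ys.end(); ++it) { /* ... */ }
    return 0;
}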
If you trust your code, you can do either.
If you want your code to be readable and easily understood (and thus more tolerant to change from someone who you've got to assume to be a klutz), I'd use something like;
for ( int i = 0 ; i >= 0 && i < N ; ++i)
I always use #2, as then you can be sure the loop will terminate... Relying on i being equal to N afterwards is relying on a side effect... wouldn't you be better off just using the variable N itself?
[edit] Sorry...I meant #2
I think most programmers use the 2nd one, because it helps figure out what goes on inside the loop. I can look at it, and "know" that i will start as 0, and will definitely be less than N.
The 1st variant doesn't have this quality. I can look at it, and all I know is that i will start as 0 and that it won't ever be equal to N. Not quite as helpful.
Irrespective of how you terminate the loop, it is always good to be very wary of using a loop control variable outside the loop. In your examples you (correctly) declare i inside the loop, so it is not in scope outside the loop and the question of its value is moot...
Of course, the 2nd variant also has the advantage that it's what all of the C references I have seen use :-)
In general I would prefer
for ( int i = 0; i < N; ++i )
The punishment for a buggy program in production seems a lot less severe: you will not have a thread stuck forever in a for loop, a situation that can be very risky and very hard to diagnose.
Also, in general I like to avoid these kind of loops in favour of the more readable foreach style loops.
I prefer to use #2, only because I try not to extend the meaning of i outside of the for loop. If I were tracking a variable like that, I would create an additional test. Some may say this is redundant or inefficient, but it reminds the reader of my intent: At this point, i must equal N
@timyates - I agree one shouldn't rely on side effects
I think you stated very well the difference between the two. I do have the following comments, though:
This is not "language-agnostic": I can see your examples are in C++, but there are languages where you are not allowed to modify the loop variable inside the loop, and others that don't guarantee that the value of the index is usable after the loop (and some do both).
You have declared the i index within the for, so I would not bet on the value of i after the loop.
The examples are a little bit misleading as they implicitly assume that for is a definite loop. In reality it is just a more convenient way of writing:
// version 1
{
    int i = 0;
    while (i != N) {
        ...
        ++i;
    }
}
Note how i is undefined after the block.
A programmer who knew all of the above would not make general assumptions about the value of i, and would be wise enough to choose i < N as the ending condition, to ensure that the exit condition is eventually met.
Using either of the above in C# would cause a compiler error if you used i outside the loop.
I prefer this sometimes:
for (int i = 0; (i <= (n-1)); i++) { ... }
This version shows directly the range of values that i can have. My take on checking lower and upper bound of the range is that if you really need this, your code has too many side effects and needs to be rewritten.
The other version:
for (int i = 1; (i <= n); i++) { ... }
helps you determine how often the loop body is called. This also has valid use cases.
For general programming work I prefer
for ( int i = 0; i < N; ++i )
to
for ( int i = 0; i != N; ++i )
Because it is less error prone, especially when code gets refactored. I have seen this kind of code turned into an infinite loop by accident.
As for the argument that "you will be tempted to assume that i == N": I don't believe it's true. I have never made that assumption or seen another programmer make it.
From my standpoint of formal verification and automatic termination analysis, I strongly prefer #2 (<). It is quite easy to track that some variable is increased (before var = x, after var = x+n for some non-negative number n). However, it is not that easy to see that i==N eventually holds. For this, one needs to infer that i is increased by exactly 1 in each step, which (in more complicated examples) might be lost due to abstraction.
If you think about the loop which increments by two (i = i + 2), this general idea becomes more understandable. To guarantee termination one now needs to know that i%2 == N%2, whereas this is irrelevant when using < as the condition.
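
A tiny sketch of that last point (N = 7 is an arbitrary odd bound, chosen so that i % 2 != N % 2):

#include <iostream>

int main() {
    const int N = 7;

    // With <, the loop terminates even though i steps right over N:
    for (int i = 0; i < N; i += 2) { /* ... */ }

    // With !=, i takes the values 0, 2, 4, 6, 8, ... and never equals 7,
    // so the loop below would never terminate; it is left commented out.
    // for (int i = 0; i != N; i += 2) { }

    std::cout << "done\n";
    return 0;
}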