We all know that the logical && operator short-circuits if the left operand is false, because if one operand is false, the result must also be false.
Why doesn't the bitwise & operator also short-circuit? If the left operand is 0, then we know that the result is also 0. Every language I've tested this in (C, JavaScript, C#) evaluates both operands instead of stopping after the first.
Is there any reason why it would be a bad idea to let the & operator short-circuit? If not, why don't most languages make it short-circuit? It seems like an obvious optimization.
I'd guess it's because a bitwise and in the source language typically gets translated fairly directly to a bitwise and instruction to be executed by the processor. That, in turn, is implemented as a group of the proper number of and gates in the hardware.
I don't see this as optimizing much of anything in most cases. Evaluating the second operand will normally cost less than testing to see whether you should evaluate it.
Short-circuiting is not an optimization device. It is a control flow device. If you fail to short-circuit p != NULL && *p != 0, you will not get a marginally slower program, you will get a crashing program.
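To make that concrete, here is a minimal C sketch (print_first is a hypothetical helper, not from the original post) of the guard idiom that depends on this behavior:

#include <stdio.h>

/* Prints the first character of p; the NULL test guards the dereference. */
void print_first(const char *p) {
    /* Without short-circuiting, *p would be evaluated even when p == NULL,
       and the program would crash instead of doing nothing. */
    if (p != NULL && *p != 0)
        printf("%c\n", *p);
}

int main(void) {
    print_first("hello");  /* prints h */
    print_first(NULL);     /* safely does nothing */
    return 0;
}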
This kind of short-circuiting almost never makes sense for bitwise operators, and it is more expensive than a normal non-short-circuiting operator.
Bitwise operations are usually so cheap that the check would make the operation twice as long or more, whereas the gain from short-circuiting a logical operator is potentially very great.
If the compiler has to emit a check for both operands of &, I guess that it'll be much slower in any NORMAL condition.
For the same reason that * does not short-circuit if the first operand is 0 -- it would be an obscure special case and adding special runtime tests for it would make all multiplies slower.
When the operands are not constants, short circuiting is more expensive than not short circuiting, so you don't want to do it unless the programmer explicitly requests it. So you really want to have clean and simple rules as to when it occurs.
Related
Is there an established idiom for implementing (-1)^n * a?
The obvious choice of pow(-1,n) * a seems wasteful, and (1-2*(n%2)) * a is ugly and not perfectly efficient either (two multiplications and one addition instead of just setting the sign). I think I will go with n%2 ? -a : a for now, but introducing a conditional seems a bit dubious as well.
Making certain assumptions about your programming language, compiler, and CPU...
To repeat the conventional -- and correct -- wisdom, do not even think about optimizing this sort of thing unless your profiling tool says it is a bottleneck. If so, n % 2 ? -a : a will likely generate very efficient code; namely one AND, one test against zero, one negation, and one conditional move, with the AND+test and negation independent so they can potentially execute simultaneously.
Another option looks something like this:
zero_or_minus_one = (n << 31) >> 31;                 /* 0 if n is even, -1 if n is odd */
return (a ^ zero_or_minus_one) - zero_or_minus_one;  /* negates a when the mask is -1  */
This assumes 32-bit integers, arithmetic right shift, defined behavior on integer overflow, two's-complement representation, etc. It will likely compile into four instructions as well (left shift, right shift, XOR, and subtract), with a dependency between each... But it can be better for certain instruction sets; e.g., if you are vectorizing code using SSE instructions.
Incidentally, your question will get a lot more views -- and probably more useful answers -- if you tag it with a specific language.
As others have written, in most cases readability is more important than performance, and compilers, interpreters, and libraries are better at optimizing than most people think. Therefore pow(-1,n) * a is likely to be an efficient solution on your platform.
If you really have a performance issue, your own suggestion n%2 ? -a : a is fine. I don't see a reason to worry about the conditional assignment.
If your language has a bitwise AND operator, you could also use n & 1 ? -a : a which should be very efficient even without any optimization. It is likely that on many platforms, this is what pow(a,b) actually does in the special case of a == -1 and b being an integer.
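For illustration, a minimal C sketch of that idiom (sign_flip is a hypothetical helper name and the test values are arbitrary):

#include <stdio.h>

/* Computes (-1)^n * a using only the low bit of n. */
double sign_flip(int n, double a) {
    return (n & 1) ? -a : a;
}

int main(void) {
    printf("%g\n", sign_flip(2, 3.5));  /* prints 3.5  */
    printf("%g\n", sign_flip(3, 3.5));  /* prints -3.5 */
    return 0;
}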
Is there a way to ensure that:
if a==b then devfun(a)==devfun(b);
where devfun() is a device function that involves some floating-point math ops (e.g. polynomials) and returns floating-point results, and a and b are floating-point variables.
I don't care about cross-implementation consistency (e.g. different compiler/different OS/different driver versions or different compiler options). I only care whether, within the same build/program, at runtime, it can be ensured that on each function call the results returned by devfun() are consistent, such that as long as a==b, devfun(a)==devfun(b).
I am talking about SM2.0+ hardware and CUDA 5.0+, just in case being relevant.
Let's assume that your numbers a and b are properly normalized IEEE-754 floating-point numbers and that neither a nor b is a NaN value. Let's also assume a and b are both 32-bit, or else a and b are both 64-bit (IEEE-754 floating-point representations).
In that case, I believe the (ISO C/C++, or CUDA C/C++) floating point test for equality (==) will return TRUE when the two numbers a and b are bitwise identical (and FALSE otherwise).
Under the TRUE case, with one exception, I believe it is safe to assume that devfun(a) == devfun(b) without any additional conditions except the obvious ones: there is no difference in the behavior of devfun on either side of the == operation, that is, it's the same code, compiled in the same way, executed under the same conditions (e.g. other variables that may be taking part in devfun, same GPU type, etc.), just as you've indicated in your question: "same building/program".
The one exception is if the result of devfun(a) is NaN, since (IEEE-754) NaN != NaN.
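A minimal host-side C sketch of that exception, assuming C99's nan() from math.h:

#include <stdio.h>
#include <math.h>

int main(void) {
    double x = nan("");      /* any NaN, e.g. the result of 0.0 / 0.0 */
    /* IEEE-754: NaN compares unequal to everything, itself included. */
    printf("%d\n", x == x);  /* prints 0 */
    return 0;
}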
It would be interesting (to me) if you think you have a piece of code that disproves this assertion.
Perhaps floating point ninjas will come along and correct me.
Perhaps also I would be remiss if I did not say something about the hazards of floating point comparisons. If you're not familiar with this (most folks would never recommend performing a test a==b on two floating point numbers) you can find many questions about it on SO.
For the same reasons that floating point equality comparison (==) in general is unwise, I think relying on the above assertion, even if it's true, is unwise. Let me give you one example.
Suppose you compile code for architecture sm_20. Now you run the code on an sm_21 device. This one simple variation could result in a JIT-compile at runtime. Now you are no longer running the same code, and all bets are off.
So, again, even if the above is true, I think it's unwise for you to rely on such a statement:
if a==b, then devfun(a) == devfun(b)
Quick question about reverse polish notation.
Why is 2*3/(2-1)+5*(4-1):
23*21-/541-*+
rather than 23*21-/5+41-*?
I am just confusing myself. Personally I'd have added extra brackets to the original question to make it clear where the 5 is added. If they're not there, what order do I assume it goes in?
Thanks
If we assume a conventional order of operations, then any multiplications get computed before any additions. So, when you have y+x*z, x*z gets computed first, according to usual order of operations. More explicitly, y+x*z means (y+(x*z)). Thus, 2*3/(2-1)+5*(4-1) means (((2*3)/(2-1))+(5*(4-1))).
If you were to explicitly state up front that you stipulated your order of operations as additions happening before multiplications, then if you wrote 4+5*6 you would mean ((4+5)*6). If you did that, then you could state the distributive law as x*y+z=(x*y)+(x*z).

What would expressions mean when you omit operations? Consider xy&z, where & is binary, and the binary operation for xy gets omitted. If the omitted binary operation is *, and & is +, then this would mean that the expressed operation & would happen before the suppressed multiplication operation. Usually, omitted operations get assumed to happen first. So, if addition had binding priority over multiplication, then it probably would make sense for an expression like xy to mean x+y instead of the more usual x*y.

In principle, there seems nothing wrong with letting additions happen before multiplications, so long as you state that you want to do that up front and stick to that convention and its implications in whatever you write. That all said, except for communicating with people who don't understand RPN or PN, I simply don't see why you would write in infix notation once you understand RPN and PN.
It's because multiplication has higher precedence than addition. When you don't have the parentheses, the 5 (only the 5) is first multiplied with (4-1) and then added to the rest of the expression. When you haven't used parentheses, the expression is evaluated according to the order of precedence only.
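For anyone checking the conversion by hand, here is a minimal C sketch of an RPN evaluator for single-digit operands (eval_rpn is a hypothetical helper and the input is assumed well formed). Running it on the first string confirms the value 21:

#include <stdio.h>
#include <ctype.h>

/* Evaluates an RPN string whose operands are single digits. */
int eval_rpn(const char *s) {
    int stack[64], top = 0;
    for (; *s; s++) {
        if (isdigit((unsigned char)*s)) {
            stack[top++] = *s - '0';
        } else {
            int b = stack[--top];
            int a = stack[--top];
            switch (*s) {
                case '+': stack[top++] = a + b; break;
                case '-': stack[top++] = a - b; break;
                case '*': stack[top++] = a * b; break;
                case '/': stack[top++] = a / b; break;
            }
        }
    }
    return stack[0];
}

int main(void) {
    /* 2*3/(2-1)+5*(4-1) = 6/1 + 5*3 = 21 */
    printf("%d\n", eval_rpn("23*21-/541-*+"));  /* prints 21 */
    return 0;
}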
In the PHP code
if(a() && b())
when the first operand evaluates to false, b() will not be evaluated.
Similarly, in
if (a() || b())
when the first operand evaluates to true, b() will not be evaluated.
Is this true for all languages, like Java, C#, etc?
This is the test code we used.
<?php
function a() {
    echo 'a';
    return false;
}
function b() {
    echo 'b';
    return true;
}
if (a() && b()) {
    echo 'c';
}
?>
This is called short-circuit evaluation.
It is generally true for languages derived from C (C, C++, Java, C#) but not true for all languages.
For example, VB6 does not do this, nor was it done in early versions of VB.NET. VB8 (in Visual Studio 2005) introduced the AndAlso and OrElse operators for this purpose.
Also, from comments, it seems that csh performs short-circuit evaluation from right to left, to make matters even more confusing.
It should also be pointed out that short-circuit evaluation (or lack of) has its dangers to be aware of. For example, if the second operand is a function that has any side effects, then the code may not perform exactly as the programmer intended.
It's not true for VB6.
In VB.net you have to use "AndAlso" instead of "And" if you want it to skip evaluating the second expression.
Is this true for ALL languages, like JAVA, C#, etc?
In C# this is only true for the short-circuiting operators '||' and '&&'; if you just use '|' or '&' it will evaluate both sides every time.
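The same distinction can be demonstrated in C with a rough analogue of the PHP test above:

#include <stdio.h>

int a(void) { printf("a"); return 0; }  /* "returns false" */
int b(void) { printf("b"); return 1; }  /* "returns true"  */

int main(void) {
    if (a() && b()) printf("c");  /* prints just "a": b() is never called */
    printf("\n");
    /* & evaluates both sides (in an order C leaves unspecified) */
    if (a() & b()) printf("c");   /* prints "ab"; "c" is not printed since 0 & 1 == 0 */
    printf("\n");
    return 0;
}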
It's called short-circuit evaluation and most languages do this. In some languages there exist operators that don't do this.
The original version of Pascal did not, which caused lots of grief. Modern Pascals, such as Delphi, work the same way as C et al.
Ada has special short-circuited forms of conditionals:
and then
or else
used like this:
if p.next /= null and then p.next.name = 'foo'
if x = 0 or else 1/x = y
In some ways it's kind of nice because you can deduce that the programmer knew the expression needed to be short-circuited and that the conditional is not working by accident.
It is true for languages that are "children" of C: PHP, Java, C++, C#, ... or of the same "inspiration", like Perl.
But it is not true for VB (at least before .NET, which introduced new keywords for that).
(And that's really disturbing the first time you work with VB ^^ )
Microsoft VBScript (often used in conjunction with 'Classic' ASP) had no short-circuit evaluation for boolean operators; instead it used bitwise evaluation. Which is one of the many reasons it is possibly the worst language ever!
"What's going on is that VBScript is
not logical. VBScript is bitwise. All
the so-called logical operators work
on numbers, not on Boolean values!
Not, And, Or, XOr, Eqv and Imp all
convert their arguments to four-byte
integers, do the logical operation on
each pair of bits in the integers, and
return the result. If True is -1 and
False is 0 then everything works out,
because -1 has all its bits turned on
and 0 has all its bits turned off. But
if other numbers get in there, all
bets are off".
Taken from this blog post by Eric Lippert.
In Delphi it's a compiler option.
In standard FORTRAN or Fortran, the operands of a boolean expression can be evaluated in any order. Incomplete evaluation is permitted, but implementation defined.
This allows optimisation of boolean expressions that would not be permitted if strict left-to-right ordering was enforced. Expressions which require strict ordering must be decomposed into separate conditionals, or implementation-dependent assumptions can be made.
Since decomposition is used to enforce ordering, it follows that separate IF statements cannot always be optimised into a single expression. However, short-circuit evaluation is explicit with decomposition, and this is never worse than languages which enforce strict left-to-right ordering to allow lazy evaluation.
Languages which are derived from FORTRAN (Fortran, BASIC, VBn), and languages which were designed to achieve FORTRAN-like efficiency (Pascal, Ada), initially followed the FORTRAN example of allowing out-of-order evaluation.
This is true for Java as well, but the operators |, &, etc. will evaluate both sides.
In Erlang, the and and or operators do not do short-circuit evaluation; you have to use orelse and andalso operators if you want short-circuiting behavior.
Most languages (all that I've seen) use short-circuit evaluation on CONDITIONAL operators such as && and ||. They will stop evaluating as soon as one of the conditions has satisfied the requirement. (The first false on &&; the first true on ||.)
All BITWISE operators such as & and | are processed.
This is called short-circuit evaluation and it is common for all of the languages that I have ever worked in (C, C++, C#, Java, Smalltalk, Javascript, Lisp) except for VB, VB.NET and Fortran.
It's actually a pretty useful feature. Without short-circuiting you wouldn't be able to do this:
if (a != null && a.isBlank())
Without short-circuiting you would have to have nested if statements because the second part would throw an error if a was null.
ColdFusion will natively do short-circuit evaluation. I am sure all CF developers have written:
<cfif isdefined("somevariable") and somevariable eq something>
//do logic
</cfif>
MATLAB is one language that distinguishes between "standard" logical operators and short-circuit operators:
& (AND operator) and | (OR operator) can operate on arrays in an element-wise fashion.
&& and || are short-circuit versions for which the second operand is evaluated only when the result is not fully determined by the first operand. These can only operate on scalars, not arrays.
Other answers have given good examples of languages with and without short circuit evaluation so I won't repeat them.
Just one interesting point to add: Lisps such as Clojure have boolean short circuit evaluation, but in addition you can quite trivially define any operator you like with short circuit evaluation through the use of macros.
Example of a short-circuiting "nand" operation in Clojure:
(defmacro nand
  ([x]
    `(not ~x))
  ([x & xs]
    `(let [nand# (not ~x)]
       (if nand#
         true            ; short circuit if we can prove the nand is true
         (nand ~@xs))))) ; continue with the other expressions otherwise
(nand true true)
=> false
(nand false (println "Expression with a side effect!"))
=> true
A reddit thread brought up an apparently interesting question:
Tail-recursive functions can trivially be converted into iterative functions. Others can be transformed by using an explicit stack. Can every recursion be transformed into iteration?
The (counter?)example in the post is the pair:
(define (num-ways x y)
  (cond ((= x 0) 1)
        ((= y 0) 1)
        (else (num-ways2 x y))))

(define (num-ways2 x y)
  (+ (num-ways (- x 1) y)
     (num-ways x (- y 1))))
Can you always turn a recursive function into an iterative one? Yes, absolutely, and the Church-Turing thesis proves it if memory serves. In lay terms, it states that what is computable by recursive functions is computable by an iterative model (such as the Turing machine) and vice versa. The thesis does not tell you precisely how to do the conversion, but it does say that it's definitely possible.
In many cases, converting a recursive function is easy. Knuth offers several techniques in "The Art of Computer Programming". And often, a thing computed recursively can be computed by a completely different approach in less time and space. The classic example of this is Fibonacci numbers or sequences thereof. You've surely met this problem in your degree plan.
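To illustrate with the Fibonacci example, here is a minimal C sketch of the recursive definition next to a completely different (iterative) approach:

#include <stdio.h>

/* Naive recursion: exponential time, linear stack depth. */
long fib_rec(int n) {
    return n < 2 ? n : fib_rec(n - 1) + fib_rec(n - 2);
}

/* Iterative rewrite: linear time, constant space. */
long fib_iter(int n) {
    long a = 0, b = 1;
    for (int i = 0; i < n; i++) {
        long t = a + b;
        a = b;
        b = t;
    }
    return a;
}

int main(void) {
    printf("%ld %ld\n", fib_rec(20), fib_iter(20));  /* both print 6765 */
    return 0;
}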
On the flip side of this coin, we can certainly imagine a programming system so advanced as to treat a recursive definition of a formula as an invitation to memoize prior results, thus offering the speed benefit without the hassle of telling the computer exactly which steps to follow in the computation of a formula with a recursive definition. Dijkstra almost certainly did imagine such a system. He spent a long time trying to separate the implementation from the semantics of a programming language. Then again, his non-deterministic and multiprocessing programming languages are in a league above the practicing professional programmer.
In the final analysis, many functions are just plain easier to understand, read, and write in recursive form. Unless there's a compelling reason, you probably shouldn't (manually) convert these functions to an explicitly iterative algorithm. Your computer will handle that job correctly.
I can see one compelling reason. Suppose you've a prototype system in a super-high level language like [donning asbestos underwear] Scheme, Lisp, Haskell, OCaml, Perl, or Pascal. Suppose conditions are such that you need an implementation in C or Java. (Perhaps it's politics.) Then you could certainly have some functions written recursively but which, translated literally, would explode your runtime system. For example, infinite tail recursion is possible in Scheme, but the same idiom causes a problem for existing C environments. Another example is the use of lexically nested functions and static scope, which Pascal supports but C doesn't.
In these circumstances, you might try to overcome political resistance to the original language. You might find yourself reimplementing Lisp badly, as in Greenspun's (tongue-in-cheek) tenth law. Or you might just find a completely different approach to solution. But in any event, there is surely a way.
Is it always possible to write a non-recursive form for every recursive function?
Yes. A simple formal proof is to show that µ recursion and a non-recursive calculus such as GOTO are both Turing complete. Since all Turing complete calculi are strictly equivalent in their expressive power, all recursive functions can be implemented by the non-recursive Turing-complete calculus.
Unfortunately, I’m unable to find a good, formal definition of GOTO online so here’s one:
A GOTO program is a sequence of commands P executed on a register machine such that P is one of the following:
HALT, which halts execution
r = r + 1 where r is any register
r = r - 1 where r is any register
GOTO x where x is a label
IF r ≠ 0 GOTO x where r is any register and x is a label
A label, followed by any of the above commands.
However, the conversion between recursive and non-recursive functions isn't always trivial (except by mindless manual re-implementation of the call stack).
For further information see this answer.
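As a rough illustration rather than a formal construction, here is a tiny C interpreter for a register machine like the one defined above (the instruction encoding is an assumption, with labels represented as instruction indices). Note that the whole thing runs in one non-recursive fetch/evaluate/goto loop:

#include <stdio.h>

/* Instruction set of the GOTO register machine sketched above. */
typedef enum { HALT, INC, DEC, JMP, JNZ } opcode;

typedef struct {
    opcode op;
    int reg;     /* register index, for INC/DEC/JNZ */
    int target;  /* instruction index, for JMP/JNZ  */
} instr;

/* Runs a GOTO program: a single fetch/evaluate/goto loop. */
void run(const instr *prog, long *regs) {
    int pc = 0;
    for (;;) {
        instr i = prog[pc];
        switch (i.op) {
            case HALT: return;
            case INC:  regs[i.reg]++; pc++; break;
            case DEC:  regs[i.reg]--; pc++; break;
            case JMP:  pc = i.target; break;
            case JNZ:  pc = regs[i.reg] != 0 ? i.target : pc + 1; break;
        }
    }
}

int main(void) {
    /* r1 = r1 + r0, by repeatedly decrementing r0 and incrementing r1. */
    instr add[] = {
        { JNZ, 0, 2 },   /* 0: IF r0 != 0 GOTO 2 */
        { HALT, 0, 0 },  /* 1: HALT              */
        { DEC, 0, 0 },   /* 2: r0 = r0 - 1       */
        { INC, 1, 0 },   /* 3: r1 = r1 + 1       */
        { JMP, 0, 0 },   /* 4: GOTO 0            */
    };
    long regs[2] = { 3, 4 };
    run(add, regs);
    printf("%ld\n", regs[1]);  /* prints 7 */
    return 0;
}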
Recursion is implemented as stacks or similar constructs in the actual interpreters or compilers. So you certainly can convert a recursive function to an iterative counterpart because that's how it's always done (if automatically). You'll just be duplicating the compiler's work in an ad-hoc and probably in a very ugly and inefficient manner.
Basically yes. In essence, what you end up having to do is replace method calls (which implicitly push state onto the stack) with explicit stack pushes to remember where the 'previous call' had gotten up to, and then execute the 'called method' instead.
I'd imagine that the combination of a loop, a stack and a state-machine could be used for all scenarios by basically simulating the method calls. Whether or not this is going to be 'better' (either faster, or more efficient in some sense) is not really possible to say in general.
Recursive function execution flow can be represented as a tree.
The same logic can be done by a loop, which uses a data-structure to traverse that tree.
Depth-first traversal can be done using a stack, breadth-first traversal can be done using a queue.
So, the answer is: yes. Why: https://stackoverflow.com/a/531721/2128327.
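A minimal C sketch of that idea: the same depth-first traversal written recursively and then with an explicit stack (the fixed-size array stack is an assumption for brevity):

#include <stdio.h>

typedef struct node {
    int value;
    struct node *left, *right;
} node;

/* Recursive depth-first traversal, for comparison. */
void dfs_recursive(const node *n) {
    if (!n) return;
    printf("%d ", n->value);
    dfs_recursive(n->left);
    dfs_recursive(n->right);
}

/* The same traversal using an explicit stack instead of the call stack. */
void dfs_iterative(const node *root) {
    const node *stack[64];
    int top = 0;
    if (root) stack[top++] = root;
    while (top > 0) {
        const node *n = stack[--top];
        printf("%d ", n->value);
        /* Push right first so the left child is visited first, as in recursion. */
        if (n->right) stack[top++] = n->right;
        if (n->left)  stack[top++] = n->left;
    }
}

int main(void) {
    node l = { 2, NULL, NULL }, r = { 3, NULL, NULL }, root = { 1, &l, &r };
    dfs_recursive(&root); printf("\n");  /* prints 1 2 3 */
    dfs_iterative(&root); printf("\n");  /* prints 1 2 3 */
    return 0;
}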
Can any recursion be done in a single loop? Yes, because
a Turing machine does everything it does by executing a single loop:
fetch an instruction,
evaluate it,
goto 1.
Yes, using explicitly a stack (but recursion is far more pleasant to read, IMHO).
Yes, it's always possible to write a non-recursive version. The trivial solution is to use a stack data structure and simulate the recursive execution.
In principle it is always possible to remove recursion and replace it with iteration in a language that has infinite state both for data structures and for the call stack. This is a basic consequence of the Church-Turing thesis.
Given an actual programming language, the answer is not as obvious. The problem is that it is quite possible to have a language where the amount of memory that can be allocated in the program is limited but where the amount of call stack that can be used is unbounded (32-bit C where the address of stack variables is not accessible). In this case, recursion is more powerful simply because it has more memory it can use; there is not enough explicitly allocatable memory to emulate the call stack. For a detailed discussion on this, see this discussion.
All computable functions can be computed by Turing Machines and hence the recursive systems and Turing machines (iterative systems) are equivalent.
Sometimes replacing recursion is much easier than that. Recursion used to be the fashionable thing taught in CS in the 1990s, and so a lot of average developers from that time figured that if you solved something with recursion, it was a better solution. So they would use recursion instead of looping backwards to reverse order, or silly things like that. So sometimes removing recursion is a simple "duh, that was obvious" type of exercise.
This is less of a problem now, as the fashion has shifted towards other technologies.
Recursion is nothing but calling the same function on the stack; once a function call finishes, it is removed from the stack. So one can always use an explicit stack to manage this calling of the same operation using iteration.
So, yes, all recursive code can be converted to iteration.
Removing recursion is a complex problem and is feasible under well defined circumstances.
The cases below are among the easy ones:
tail recursion
direct linear recursion
Apart from the explicit stack, another pattern for converting recursion into iteration is the use of a trampoline.
Here, the functions either return the final result or a closure of the function call that they would otherwise have performed. Then, the initiating (trampolining) function keeps invoking the closures returned until the final result is reached.
This approach works for mutually recursive functions, but I'm afraid it only works for tail-calls.
http://en.wikipedia.org/wiki/Trampoline_(computers)
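Here is a minimal C sketch of the trampoline pattern under those constraints; the thunk struct and function names are assumptions for illustration, not a standard API:

#include <stdio.h>

/* A "thunk" is either a final result or the next tail-call to perform. */
typedef struct thunk {
    int done;                          /* 1 when value holds the result    */
    long value;                        /* final result (when done)         */
    struct thunk (*next)(long, long);  /* continuation to invoke otherwise */
    long arg1, arg2;                   /* its arguments                    */
} thunk;

/* Tail-recursive factorial, returning a thunk instead of calling itself. */
thunk fact_step(long n, long acc) {
    if (n <= 1) {
        thunk t = { 1, acc, NULL, 0, 0 };
        return t;
    }
    thunk t = { 0, 0, fact_step, n - 1, acc * n };
    return t;
}

/* The trampoline: keeps invoking continuations until a result appears,
   so the C call stack never grows no matter how deep the "recursion". */
long trampoline(thunk t) {
    while (!t.done)
        t = t.next(t.arg1, t.arg2);
    return t.value;
}

int main(void) {
    printf("%ld\n", trampoline(fact_step(10, 1)));  /* prints 3628800 */
    return 0;
}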
I'd say yes - a function call is nothing but a goto and a stack operation (roughly speaking). All you need to do is imitate the stack that's built while invoking functions and do something similar to a goto (you can imitate gotos in languages that don't explicitly have this keyword, too).
Have a look at the following entries on wikipedia, you can use them as a starting point to find a complete answer to your question.
Recursion in computer science
Recurrence relation
Here is a paragraph that may give you some hints on where to start:
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
Also have a look at the last paragraph of this entry.
It is possible to convert any recursive algorithm to a non-recursive one, but often the logic is much more complex and doing so requires the use of a stack. In fact, recursion itself uses a stack: the function stack.
More Details: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Functions
tazzego, recursion means that a function will call itself whether you like it or not. When people are talking about whether or not things can be done without recursion, they mean this and you cannot say "no, that is not true, because I do not agree with the definition of recursion" as a valid statement.
With that in mind, just about everything else you say is nonsense. The only other thing that you say that is not nonsense is the idea that you cannot imagine programming without a callstack. That is something that had been done for decades until using a callstack became popular. Old versions of FORTRAN lacked a callstack and they worked just fine.
By the way, there exist Turing-complete languages that only implement recursion (e.g. SML) as a means of looping. There also exist Turing-complete languages that only implement iteration as a means of looping (e.g. FORTRAN IV). The Church-Turing thesis implies that anything possible in a recursion-only language can be done in a non-recursive language, and vice versa, by the fact that they both have the property of Turing-completeness.
Here is an iterative algorithm:
def howmany(x, y)
  a = {}
  # Fill the table along anti-diagonals, so that a[[m-1, n-m]] and
  # a[[m, n-m-1]] are always computed before a[[m, n-m]] needs them.
  for n in (0..x+y)
    for m in (0..n)
      a[[m, n-m]] = if m == 0 or n-m == 0 then 1 else a[[m-1, n-m]] + a[[m, n-m-1]] end
    end
  end
  return a[[x, y]]
end