Is this primes generator pythonic?

Is the following code for generating primes pythonic?
def get_primes(n):
    primes = [False, False] + [True]*(n-1)
    next_p = (i for i, j in enumerate(primes) if j)
    while True:
        p = next(next_p)
        yield p
        primes[p*p::p] = [False]*((n - p*p)//p + 1)
Note that next(next_p) will eventually throw a StopIteration error which somehow ends the function get_primes. Is that bad?
Also note that next_p is a generator which iterates over primes, however primes changes during iteration. Is that bad style?
adding the following if statement gets it under 0.25 seconds for the first million primes:
if p*p <= n:
    primes[p*p::p] = [False]*((n - p*p)//p + 1)

It's not bad that next(next_p) throws a StopIteration error -- that's what a generator always does when it runs out of items!
Changing the length of a list while iterating over it is a bad idea. But there's nothing wrong with simply changing the contents. Overall, I think this is a rather elegant, if basic, sieve.
One small observation: when you "cross out" the multiples of prime numbers, you'll find, if you think about it for a bit, that you don't have to start with p * 2. You can skip ahead to p ** 2.

There is nothing wrong with the StopIteration, indeed that is the expected behaviour for generators.
I would say this implementation is more pythonic (not necessarily more efficient):
def get_primes(n):
    """Generates prime numbers < n"""
    return (x for x in xrange(2, n) if all(x % i for i in xrange(2, x)))
Pythonic to me means clear, concise, readable, and using the strengths of the language. While I can see your implementation is some sort of sieve, I only know that from prior experience with those kind of algorithms. The implementation above I can read directly as a straight-forward test of divisibility.
Note: there is a minor difference in the interface, your implementation would yield primes <= n whereas my implementation would yield primes < n. Obviously this can be changed easily and trivially (just change n to n+1 in the function body), but I feel it is more pythonic to generate primes up-to-but-not including n to be more consistent with the way, say, range() builtin works.
EDIT: JUST FOR FUN
Here is a least pythonic implementation, and probably pretty inefficient too :)
def get_primes(n):
    import re
    return (x for x in xrange(2, n) if re.match(r'^1?$|^(11+?)\1+$', '1' * x) is None)
I call this the least pythonic because you would be scratching your head for some days to figure out how it works if you haven't seen this trick before!!
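If you are curious, here is a small sketch of what that regex is doing (the helper name is mine, not from the answer): '1' * x writes x in unary, and the pattern matches exactly the unary strings of non-prime length.

import re

def looks_composite(x):
    # '^1?$' matches lengths 0 and 1; '^(11+?)\1+$' matches any length that
    # can be split into two or more equal groups of two or more '1's,
    # i.e. exactly the composite lengths.
    return re.match(r'^1?$|^(11+?)\1+$', '1' * x) is not None

print([x for x in range(2, 30) if not looks_composite(x)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]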

Here is another somewhat pythonic solution motivated by @wim's answer; however, as the timings below show, it is much slower than the first method.
def get_primes2(n):
    primes = []
    for i in range(2, n+1):
        small_primes = (p for p in primes if p*p <= i)
        if all(i % p for p in small_primes):
            yield i
            primes.append(i)
import timeit
print(timeit.timeit("list(get_primes(10**5))",number=5,setup="from __main__ import get_primes")/5.0)
"0.0350940692182945"
print(timeit.timeit("list(get_primes2(10**5))",number=5,setup="from __main__ import get_primes2")/5.0)
"8.226938898658908"


Is there a way to avoid creating an array in this Julia expression?

Is there a way to avoid creating an array in this Julia expression:
max((filter(n -> string(n) == reverse(string(n)), [x*y for x = 1:N, y = 1:N])))
and make it behave similar to this Python generator expression:
max(x*y for x in range(N+1) for y in range(x, N+1) if str(x*y) == str(x*y)[::-1])
The Julia version is 2.3 times slower than Python due to array allocation and N*N iterations vs. Python's N*N/2.
EDIT
After playing a bit with a few implementations in Julia, the fastest loop style version I've got is:
function f(N) # 320ms for N=1000 Julia 0.2.0 i686-w64-mingw32
    nMax = NaN
    for x = 1:N, y = x:N
        n = x*y
        s = string(n)
        s == reverse(s) || continue
        nMax < n && (nMax = n)
    end
    nMax
end
but an improved functional version isn't far behind (only 14% slower or significantly faster, if you consider 2x larger domain):
function e(N) # 366ms for N=1000 Julia 0.2.0 i686-w64-mingw32
    isPalindrome(n) = string(n) == reverse(string(n))
    max(filter(isPalindrome, [x*y for x = 1:N, y = 1:N]))
end
There is an unexpected 2.6x performance improvement from defining the isPalindrome function, compared to the original version at the top of this page.
We have talked about allowing the syntax
max(f(x) for x in itr)
as a shorthand for producing each of the values f(x) in one coroutine while computing the max in another coroutine. This would basically be shorthand for something like this:
max(@task for x in itr; produce(f(x)); end)
Note, however, that this syntax that explicitly creates a task already works, although it is somewhat less pretty than the above. Your problem can be expressed like this:
max(@task for x=1:N, y=x:N
        string(x*y) == reverse(string(x*y)) && produce(x*y)
    end)
With the hypothetical producer syntax above, it could be reduced to something like this:
max(x*y if string(x*y) == reverse(string(x*y)) for x=1:N, y=x:N)
While I'm a fan of functional style, in this case I would probably just use a for loop:
m = 0
for x = 1:N, y = x:N
    n = x*y
    string(n) == reverse(string(n)) || continue
    m < n && (m = n)
end
Personally, I don't find this version much harder to read and it will certainly be quite fast in Julia. In general, while functional style can be convenient and pretty, if your primary focus is on performance, then explicit for loops are your friend. Nevertheless, we should make sure that John's max/filter/product version works. The for loop version also makes other optimizations easier to add, like Harlan's suggestion of reversing the loop ordering and exiting on the first palindrome you find. There are also faster ways to check if a number is a palindrome in a given base than actually creating and comparing strings.
As to the general question of "getting flexible generators and list comprehensions in Julia", the language already has
A general high-performance iteration protocol based on the start/done/next functions.
Far more powerful multidimensional array comprehensions than most languages. At this point, the only missing feature is the if guard, which is complicated by the interaction with multidimensional comprehensions and the need to potentially dynamically grow the resulting array.
Coroutines (aka tasks) which allow, among other patterns, the producer-consumer pattern.
Python has the if guard but doesn't worry about comprehension performance nearly as much – if we're going to add that feature to Julia's comprehensions, we're going to do it in a way that's both fast and interacts well with multidimensional arrays, hence the delay.
Update: The max function is now called maximum (maximum is to max as sum is to +) and the generator syntax and/or filters work on master, so for example, you can do this:
julia> @time maximum(100x - x^2 for x = 1:100 if x % 3 == 0)
0.059185 seconds (31.16 k allocations: 1.307 MB)
2499
Once 0.5 is out, I'll update this answer more thoroughly.
There are two questions being mixed together here: (1) can you filter a list comprehension mid-comprehension (for which the answer is currently no) and (2) can you use a generator that doesn't allocate an array (for which the answer is partially yes). Generators are provided by the Iterators package, but the Iterators package seems to not play well with filter at the moment. In principle, the code below should work:
max((x, y) -> x * y,
    filter((x, y) -> string(x * y) == reverse(string(x * y)),
           product(1:N, 1:N)))
I don't think so. There aren't currently filters in Julia array comprehensions. See discussion in this issue.
In this particular case, I'd suggest just nested for loops if you want to get faster computation.
(There might be faster approaches where you start with N and count backwards, stopping as soon as you find something that succeeds. Figuring out how to do that correctly is left as an exercise, etc...)
As mentioned, this is now possible (using Julia 0.5.0)
isPalindrome(n::String) = n == reverse(n)
fun(N::Int) = maximum(x*y for x in 1:N for y in x:N if isPalindrome(string(x*y)))
I'm sure there are better ways that others can comment on. Time (after warm-up):
julia> @time fun(1000);
0.082785 seconds (2.03 M allocations: 108.109 MB, 27.35% gc time)

What's So Good About Recursion? [duplicate]

Is there a performance hit if we use a loop instead of recursion or vice versa in algorithms where both can serve the same purpose? Eg: Check if the given string is a palindrome.
I have seen many programmers using recursion as a means to show off when a simple iteration algorithm can fit the bill.
Does the compiler play a vital role in deciding what to use?
Loops may achieve a performance gain for your program. Recursion may achieve a performance gain for your programmer. Choose which is more important in your situation!
It is possible that recursion will be more expensive, depending on whether the recursive function is tail recursive (the last thing it does is the recursive call). Tail recursion should be recognized by the compiler and optimized to its iterative counterpart (while maintaining the concise, clear implementation you have in your code).
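To illustrate the distinction, a minimal sketch (note that CPython, for one, does not perform tail-call optimization, so this only shows the shape a TCO-capable compiler looks for):

def factorial(n):
    # Not tail recursive: the multiplication happens after the recursive
    # call returns, so each frame must stay alive until then.
    if n <= 1:
        return 1
    return n * factorial(n - 1)

def factorial_tail(n, acc=1):
    # Tail recursive: the recursive call is the very last thing evaluated,
    # so a compiler that does tail-call optimization can reuse the frame.
    if n <= 1:
        return acc
    return factorial_tail(n - 1, acc * n)

print(factorial(10), factorial_tail(10))  # 3628800 3628800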
I would write the algorithm in the way that makes the most sense and is the clearest for the poor sucker (be it yourself or someone else) that has to maintain the code in a few months or years. If you run into performance issues, then profile your code, and then and only then look into optimizing by moving over to an iterative implementation. You may want to look into memoization and dynamic programming.
Comparing recursion to iteration is like comparing a Phillips head screwdriver to a flat head screwdriver. For the most part you could remove any Phillips head screw with a flat head, but it would just be easier if you used the screwdriver designed for that screw, right?
Some algorithms just lend themselves to recursion because of the way they are designed (Fibonacci sequences, traversing a tree-like structure, etc.). Recursion makes the algorithm more succinct and easier to understand (therefore shareable and reusable).
Also, some recursive algorithms use "Lazy Evaluation" which makes them more efficient than their iterative brothers. This means that they only do the expensive calculations at the time they are needed rather than each time the loop runs.
That should be enough to get you started. I'll dig up some articles and examples for you too.
Link 1: Haskell vs PHP (Recursion vs Iteration)
Here is an example where the programmer had to process a large data set using PHP. He shows how easy it would have been to deal with in Haskell using recursion, but since PHP had no easy way to accomplish the same method, he was forced to use iteration to get the result.
http://blog.webspecies.co.uk/2011-05-31/lazy-evaluation-with-php.html
Link 2: Mastering Recursion
Most of recursion's bad reputation comes from the high costs and inefficiency in imperative languages. The author of this article talks about how to optimize recursive algorithms to make them faster and more efficient. He also goes over how to convert a traditional loop into a recursive function and the benefits of using tail-end recursion. His closing words really summed up some of my key points I think:
"recursive programming gives the programmer a better way of organizing
code in a way that is both maintainable and logically consistent."
https://developer.ibm.com/articles/l-recurs/
Link 3: Is recursion ever faster than looping? (Answer)
Here is a link to an answer for a stackoverflow question that is similar to yours. The author points out that a lot of the benchmarks associated with either recursing or looping are very language specific. Imperative languages are typically faster using a loop and slower with recursion and vice-versa for functional languages. I guess the main point to take from this link is that it is very difficult to answer the question in a language agnostic / situation blind sense.
Is recursion ever faster than looping?
Recursion is more costly in memory, as each recursive call generally requires a memory address to be pushed to the stack - so that later the program could return to that point.
Still, there are many cases in which recursion is a lot more natural and readable than loops - like when working with trees. In these cases I would recommend sticking to recursion.
Typically, one would expect the performance penalty to lie in the other direction. Recursive calls can lead to the construction of extra stack frames; the penalty for this varies. Also, in some languages like Python (more correctly, in some implementations of some languages...), you can run into stack limits rather easily for tasks you might specify recursively, such as finding the maximum value in a tree data structure. In these cases, you really want to stick with loops.
Writing good recursive functions can reduce the performance penalty somewhat, assuming you have a compiler that optimizes tail recursions, etc. (Also double check to make sure that the function really is tail recursive---it's one of those things that many people make mistakes on.)
Apart from "edge" cases (high performance computing, very large recursion depth, etc.), it's preferable to adopt the approach that most clearly expresses your intent, is well-designed, and is maintainable. Optimize only after identifying a need.
Recursion is better than iteration for problems that can be broken down into multiple, smaller pieces.
For example, to make a recursive Fibonacci algorithm, you break down fib(n) into fib(n-1) and fib(n-2) and compute both parts. Iteration only allows you to repeat a single function over and over again.
However, Fibonacci is actually a broken example and I think iteration is actually more efficient. Notice that fib(n) = fib(n-1) + fib(n-2) and fib(n-1) = fib(n-2) + fib(n-3). fib(n-2) gets calculated twice!
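To make the duplicated work concrete, here is a small sketch (names are illustrative) that counts the calls made by the naive recursion next to a simple iterative version:

def fib_recursive(n, calls=None):
    # Naive recursion: the same subproblems are recomputed over and over.
    if calls is not None:
        calls[0] += 1
    if n < 2:
        return n
    return fib_recursive(n - 1, calls) + fib_recursive(n - 2, calls)

def fib_iterative(n):
    # Iteration keeps just the last two values and does n additions.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

calls = [0]
print(fib_recursive(20, calls), calls[0])  # 6765 21891
print(fib_iterative(20))                   # 6765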
A better example is a recursive algorithm for a tree. The problem of analyzing the parent node can be broken down into multiple smaller problems of analyzing each child node. Unlike the Fibonacci example, the smaller problems are independent of each other.
So yeah - recursion is better than iteration for problems that can be broken down into multiple, smaller, independent, similar problems.
Your performance deteriorates when using recursion because calling a method, in any language, implies a lot of preparation: the calling code posts a return address, call parameters, some other context information such as processor registers might be saved somewhere, and at return time the called method posts a return value which is then retrieved by the caller, and any context information that was previously saved will be restored. The performance difference between an iterative and a recursive approach lies in the time these operations take.
From an implementation point of view, you really start noticing the difference when the time it takes to handle the calling context is comparable to the time it takes for your method to execute. If your recursive method takes longer to execute than the calling context management part, go the recursive way, as the code is generally more readable and easy to understand and you won't notice the performance loss. Otherwise go iterative for efficiency reasons.
I believe tail recursion in java is not currently optimized. The details are sprinkled throughout this discussion on LtU and the associated links. It may be a feature in the upcoming version 7, but apparently it presents certain difficulties when combined with Stack Inspection since certain frames would be missing. Stack Inspection has been used to implement their fine-grained security model since Java 2.
http://lambda-the-ultimate.org/node/1333
There are many cases where it gives a much more elegant solution over the iterative method, the common example being traversal of a binary tree, so it isn't necessarily more difficult to maintain. In general, iterative versions are usually a bit faster (and during optimization may well replace a recursive version), but recursive versions are simpler to comprehend and implement correctly.
Recursion is very useful in some situations. For example, consider the code for finding the factorial:
int factorial(int input)
{
    int x, fact = 1;
    for (x = input; x > 1; x--)
        fact *= x;
    return fact;
}
Now consider it by using the recursive function
int factorial(int input)
{
    if (input == 0)
    {
        return 1;
    }
    return input * factorial(input - 1);
}
By observing these two, we can see that recursion is easy to understand.
But if it is not used with care it can be error prone too.
Suppose we miss the if (input == 0) base case: then the code will run for a while and usually end with a stack overflow.
In many cases recursion is faster because of caching, which improves performance. For example, here is an iterative version of merge sort using the traditional merge routine. It will run slower than the recursive implementation because the recursive version has better cache behaviour.
Iterative implementation
public static void sort(Comparable[] a)
{
    int N = a.length;
    aux = new Comparable[N];
    for (int sz = 1; sz < N; sz = sz+sz)
        for (int lo = 0; lo < N-sz; lo += sz+sz)
            merge(a, lo, lo+sz-1, Math.min(lo+sz+sz-1, N-1));
}
Recursive implementation
private static void sort(Comparable[] a, Comparable[] aux, int lo, int hi)
{
    if (hi <= lo) return;
    int mid = lo + (hi - lo) / 2;
    sort(a, aux, lo, mid);
    sort(a, aux, mid+1, hi);
    merge(a, aux, lo, mid, hi);
}
PS - this is what was told by Professor Kevin Wayne (Princeton University) on the course on algorithms presented on Coursera.
Using recursion, you're incurring the cost of a function call with each "iteration", whereas with a loop, the only thing you usually pay is an increment/decrement. So, if the code for the loop isn't much more complicated than the code for the recursive solution, loop will usually be superior to recursion.
Recursion and iteration depends on the business logic that you want to implement, though in most of the cases it can be used interchangeably. Most developers go for recursion because it is easier to understand.
It depends on the language. In Java you should use loops. Functional languages optimize recursion.
Recursion has the disadvantage that the algorithm you write using it has O(n) space complexity, while the iterative approach has a space complexity of O(1). This is the advantage of using iteration over recursion.
Then why do we use recursion?
See below.
Sometimes it is easier to write an algorithm using recursion while it's slightly tougher to write the same algorithm using iteration. In this case, if you opt for the iterative approach, you would have to handle the stack yourself.
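As a small illustration of what "handling the stack yourself" means, here is a sketch (illustrative names) that flattens a nested list first with recursion and then iteratively with an explicit stack:

def flatten_recursive(nested):
    # Easy to write: the call stack remembers where we are in each sub-list.
    out = []
    for item in nested:
        if isinstance(item, list):
            out.extend(flatten_recursive(item))
        else:
            out.append(item)
    return out

def flatten_iterative(nested):
    # The same algorithm without recursion: we manage the stack ourselves.
    out, stack = [], [iter(nested)]
    while stack:
        try:
            item = next(stack[-1])
        except StopIteration:
            stack.pop()
            continue
        if isinstance(item, list):
            stack.append(iter(item))
        else:
            out.append(item)
    return out

print(flatten_recursive([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]
print(flatten_iterative([1, [2, [3, 4]], 5]))  # [1, 2, 3, 4, 5]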
If you're just iterating over a list, then sure, iterate away.
A couple of other answers have mentioned (depth-first) tree traversal. It really is such a great example, because it's a very common thing to do to a very common data structure. Recursion is extremely intuitive for this problem.
Check out the "find" methods here:
http://penguin.ewu.edu/cscd300/Topic/BSTintro/index.html
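In case the link is unavailable, here is a minimal sketch of the kind of recursive "find" it describes, using a hypothetical Node class for a binary search tree:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def find(node, target):
    # The recursion mirrors the shape of the tree: handle this node,
    # then delegate the rest of the search to the relevant subtree.
    if node is None:
        return False
    if target == node.value:
        return True
    if target < node.value:
        return find(node.left, target)
    return find(node.right, target)

root = Node(5, Node(3, Node(1), Node(4)), Node(8))
print(find(root, 4), find(root, 7))  # True False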
Recursion is more simple (and thus - more fundamental) than any possible definition of an iteration. You can define a Turing-complete system with only a pair of combinators (yes, even a recursion itself is a derivative notion in such a system). Lambda calculus is an equally powerful fundamental system, featuring recursive functions. But if you want to define an iteration properly, you'd need much more primitives to start with.
As for the code - no, recursive code is in fact much easier to understand and to maintain than purely iterative code, since most data structures are recursive. Of course, in order to get it right one would need a language with support for higher-order functions and closures, at least - to get all the standard combinators and iterators in a neat way. In C++, of course, complicated recursive solutions can look a bit ugly, unless you're a hardcore user of FC++ and the like.
I would think that in (non-tail) recursion there would be a performance hit for allocating a new stack frame etc. every time the function is called (dependent on language of course).
it depends on "recursion depth".
it depends on how much the function call overhead will influence the total execution time.
For example, calculating the classical factorial in a recursive way is very inefficient due to:
- risk of data overflowing
- risk of stack overflowing
- function call overhead occupy 80% of execution time
while developing a min-max algorithm for position analysis in the game of chess that will analyze subsequent N moves can be implemented in recursion over the "analysis depth" (as I'm doing ^_^)
Recursion? Where do I start? Wiki will tell you "it's the process of repeating items in a self-similar way".
Back in the day when I was doing C and C++, recursion was a godsend, stuff like "tail recursion". You'll also find many sorting algorithms use recursion. Quick sort example: http://alienryderflex.com/quicksort/
Recursion is like any other algorithm, useful for a specific problem. Perhaps you mightn't find a use straight away or often, but there will be problems you'll be glad it's available for.
In C++ if the recursive function is a templated one, then the compiler has more chance to optimize it, as all the type deduction and function instantiations will occur at compile time. Modern compilers can also inline the function if possible. So if one uses optimization flags like -O3 or -O2 in g++, then recursion may have a chance to be faster than iteration. In iterative code, the compiler gets less chance to optimize it, as it is already in a more or less optimal state (if written well enough).
In my case, I was trying to implement matrix exponentiation by squaring using Armadillo matrix objects, in both recursive and iterative way. The algorithm can be found here... https://en.wikipedia.org/wiki/Exponentiation_by_squaring.
My functions were templated and I have calculated 1,000,000 12x12 matrices raised to the power 10. I got the following result:
iterative + optimisation flag -O3 -> 2.79.. sec
recursive + optimisation flag -O3 -> 1.32.. sec
iterative + No-optimisation flag -> 2.83.. sec
recursive + No-optimisation flag -> 4.15.. sec
These results have been obtained using gcc-4.8 with c++11 flag (-std=c++11) and Armadillo 6.1 with Intel mkl. Intel compiler also shows similar results.
Mike is correct. Tail recursion is not optimized out by the Java compiler or the JVM. You will always get a stack overflow with something like this:
int count(int i) {
    return i >= 100000000 ? i : count(i+1);
}
You have to keep in mind that with too deep a recursion you will run into a stack overflow, depending on the allowed stack size. To prevent this, make sure to provide a base case which ends your recursion.
Using just Chrome 45.0.2454.85 m, recursion seems to be a nice amount faster.
Here is the code:
(function recursionVsForLoop(global) {
    "use strict";
    // Perf test
    function perfTest() {}
    perfTest.prototype.do = function(ns, fn) {
        console.time(ns);
        fn();
        console.timeEnd(ns);
    };
    // Recursion method
    (function recur() {
        var count = 0;
        global.recurFn = function recurFn(fn, cycles) {
            fn();
            count = count + 1;
            if (count !== cycles) recurFn(fn, cycles);
        };
    })();
    // Looped method
    function loopFn(fn, cycles) {
        for (var i = 0; i < cycles; i++) {
            fn();
        }
    }
    // Tests
    var curTest = new perfTest(),
        testsToRun = 100;
    curTest.do('recursion', function() {
        recurFn(function() {
            console.log('a recur run.');
        }, testsToRun);
    });
    curTest.do('loop', function() {
        loopFn(function() {
            console.log('a loop run.');
        }, testsToRun);
    });
})(window);
RESULTS
// 100 runs using standard for loop
100x for loop run.
Time to complete: 7.683ms
// 100 runs using functional recursive approach w/ tail recursion
100x recursion run.
Time to complete: 4.841ms
When run at 300 cycles per test, recursion wins again by a bigger margin.
If the iterations are atomic and orders of magnitude more expensive than pushing a new stack frame and creating a new thread and you have multiple cores and your runtime environment can use all of them, then a recursive approach could yield a huge performance boost when combined with multithreading. If the average number of iterations is not predictable then it might be a good idea to use a thread pool which will control thread allocation and prevent your process from creating too many threads and hogging the system.
For example, in some languages, there are recursive multithreaded merge sort implementations.
But again, multithreading can be used with looping rather than recursion, so how well this combination will work depends on more factors including the OS and its thread allocation mechanism.
I found another difference between those approaches.
It looks simple and unimportant, but it has a very important role while you prepare for interviews and this subject arises, so look closely.
In short:
1) iterative post-order traversal is not easy - that makes DFT more complex
2) cycle checks are easier with recursion
Details:
Details:
In the recursive case, it is easy to create pre and post traversals:
Imagine a pretty standard question: "print all tasks that should be executed to execute the task 5, when tasks depend on other tasks"
Example:
//key-task, value-list of tasks the key task depends on
//"adjacency map":
Map<Integer, List<Integer>> tasksMap = new HashMap<>();
tasksMap.put(0, new ArrayList<>());
tasksMap.put(1, new ArrayList<>());
List<Integer> t2 = new ArrayList<>();
t2.add(0);
t2.add(1);
tasksMap.put(2, t2);
List<Integer> t3 = new ArrayList<>();
t3.add(2);
t3.add(10);
tasksMap.put(3, t3);
List<Integer> t4 = new ArrayList<>();
t4.add(3);
tasksMap.put(4, t4);
List<Integer> t5 = new ArrayList<>();
t5.add(3);
tasksMap.put(5, t5);
tasksMap.put(6, new ArrayList<>());
tasksMap.put(7, new ArrayList<>());
List<Integer> t8 = new ArrayList<>();
t8.add(5);
tasksMap.put(8, t8);
List<Integer> t9 = new ArrayList<>();
t9.add(4);
tasksMap.put(9, t9);
tasksMap.put(10, new ArrayList<>());
//task to analyze:
int task = 5;
List<Integer> res11 = getTasksInOrderDftReqPostOrder(tasksMap, task);
System.out.println(res11); //note, no reverse required
List<Integer> res12 = getTasksInOrderDftReqPreOrder(tasksMap, task);
Collections.reverse(res12);//note reverse!
System.out.println(res12);
private static List<Integer> getTasksInOrderDftReqPreOrder(Map<Integer, List<Integer>> tasksMap, int task) {
List<Integer> result = new ArrayList<>();
Set<Integer> visited = new HashSet<>();
reqPreOrder(tasksMap,task,result, visited);
return result;
}
private static void reqPreOrder(Map<Integer, List<Integer>> tasksMap, int task, List<Integer> result, Set<Integer> visited) {
if(!visited.contains(task)) {
visited.add(task);
result.add(task);//pre order!
List<Integer> children = tasksMap.get(task);
if (children != null && children.size() > 0) {
for (Integer child : children) {
reqPreOrder(tasksMap,child,result, visited);
}
}
}
}
private static List<Integer> getTasksInOrderDftReqPostOrder(Map<Integer, List<Integer>> tasksMap, int task) {
List<Integer> result = new ArrayList<>();
Set<Integer> visited = new HashSet<>();
reqPostOrder(tasksMap,task,result, visited);
return result;
}
private static void reqPostOrder(Map<Integer, List<Integer>> tasksMap, int task, List<Integer> result, Set<Integer> visited) {
if(!visited.contains(task)) {
visited.add(task);
List<Integer> children = tasksMap.get(task);
if (children != null && children.size() > 0) {
for (Integer child : children) {
reqPostOrder(tasksMap,child,result, visited);
}
}
result.add(task);//post order!
}
}
Note that the recursive post-order-traversal does not require a subsequent reversal of the result. Children printed first and your task in the question printed last. Everything is fine. You can do a recursive pre-order-traversal (also shown above) and that one will require a reversal of the result list.
Not that simple with the iterative approach! In the iterative (one stack) approach you can only do a pre-order traversal, so you are obliged to reverse the result list at the end:
List<Integer> res1 = getTasksInOrderDftStack(tasksMap, task);
Collections.reverse(res1);//note reverse!
System.out.println(res1);
private static List<Integer> getTasksInOrderDftStack(Map<Integer, List<Integer>> tasksMap, int task) {
List<Integer> result = new ArrayList<>();
Set<Integer> visited = new HashSet<>();
Stack<Integer> st = new Stack<>();
st.add(task);
visited.add(task);
while(!st.isEmpty()){
Integer node = st.pop();
List<Integer> children = tasksMap.get(node);
result.add(node);
if(children!=null && children.size() > 0){
for(Integer child:children){
if(!visited.contains(child)){
st.add(child);
visited.add(child);
}
}
}
//If you put it here - it does not matter - it is anyway a pre-order
//result.add(node);
}
return result;
}
Looks simple, no?
But it is a trap in some interviews.
It means the following: with the recursive approach, you can implement Depth First Traversal and then select which order you need, pre or post (simply by changing the location of the "print", in our case of the "adding to the result list"). With the iterative (one stack) approach you can easily do only pre-order traversal, and so in the situation when children need to be printed first (pretty much all situations when you need to start printing from the bottom nodes, going upwards) - you are in trouble. If you have that trouble you can reverse later, but it will be an addition to your algorithm. And if an interviewer is looking at his watch it may be a problem for you. There are complex ways to do an iterative post-order traversal, they exist, but they are not simple. Example: https://www.geeksforgeeks.org/iterative-postorder-traversal-using-stack/
Thus, the bottom line: I would use recursion during interviews, it is simpler to manage and to explain. You have an easy way to go from pre to post-order traversal in any urgent case. With iterative you are not that flexible.
I would use recursion and then tell: "Ok, but iterative can provide me more direct control on used memory, I can easily measure the stack size and disallow some dangerous overflow.."
Another plus of recursion - it is simpler to avoid / notice cycles in a graph.
Example (pseudocode):
dft(n){
    mark(n)
    for(child: n.children){
        if(marked(child))
            explode - cycle found!!!
        dft(child)
    }
    unmark(n)
}
It may be fun to write it as recursion, or as a practice.
However, if the code is to be used in production, you need to consider the possibility of stack overflow.
Tail recursion optimization can eliminate stack overflow, but do you want to go through the trouble of making it so? And you need to know you can count on your environment actually having that optimization.
Every time the algorithm recurses, how much is the data size or n reduced by?
If you are reducing the size of data or n by half every time you recurse, then in general you don't need to worry about stack overflow. Say, if it needs to be 4,000 levels deep or 10,000 levels deep for the program to stack overflow, then your data size would need to be roughly 2^4000 for your program to stack overflow. To put that into perspective, the biggest storage devices recently can hold about 2^61 bytes, and if you have 2^61 of such devices, you are only dealing with a 2^122 data size. If you are looking at all the atoms in the universe, it is estimated that there may be fewer than 2^84 of them. If you need to deal with all the data in the universe and their states for every millisecond since the birth of the universe, estimated to be 14 billion years ago, it may only be 2^153. So if your program can handle 2^4000 units of data or n, you can handle all the data in the universe and the program will not stack overflow. If you don't need to deal with numbers that are as big as 2^4000 (a 4000-bit integer), then in general you don't need to worry about stack overflow.
However, if you reduce the size of data or n by a constant amount every time you recurse, then you can run into stack overflow when n becomes merely 20000. That is, the program runs well when n is 1000, and you think the program is good, and then the program stack overflows some time in the future, when n is 5000 or 20000.
So if you have a possibility of stack overflow, try to make it an iterative solution.
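A rough sketch of the depth argument (function names are illustrative):

import math

def depth_when_halving(n):
    # n is halved on every call, so the recursion is only about log2(n) deep.
    return 1 if n <= 1 else 1 + depth_when_halving(n // 2)

def depth_when_decrementing(n):
    # n shrinks by a constant, so the recursion is n levels deep -
    # this is the version that blows the stack for modest n.
    return 1 if n <= 1 else 1 + depth_when_decrementing(n - 1)

print(depth_when_halving(10**6))  # 20 frames
print(math.log2(10**6))           # ~19.9
# depth_when_decrementing(10**6) would need about 10**6 frames and raise
# RecursionError under CPython's default limit (about 1000).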
As far as I know, Perl does not optimize tail-recursive calls, but you can fake it.
sub f{
    my($l,$r) = @_;
    if( $l >= $r ){
        return $l;
    } else {
        # return f( $l+1, $r );
        @_ = ( $l+1, $r );
        goto &f;
    }
}
When first called it will allocate space on the stack. Then it will change its arguments, and restart the subroutine, without adding anything more to the stack. It will therefore pretend that it never called its self, changing it into an iterative process.
Note that there is no "my #_;" or "local #_;", if you did it would no longer work.
"Is there a performance hit if we use a loop instead of
recursion or vice versa in algorithms where both can serve the same purpose?"
Usually yes: if you are writing in an imperative language, iteration will run faster than recursion. The performance hit is minimized in problems where the iterative solution requires manipulating stacks and popping items off of a stack due to the recursive nature of the problem. There are a lot of times where the recursive implementation is much easier to read because the code is much shorter, so you do want to consider maintainability, especially in cases where the problem has a recursive nature. So take for example:
The recursive implementation of Tower of Hanoi:
def TowerOfHanoi(n, source, destination, auxiliary):
    if n == 1:
        print("Move disk 1 from source", source, "to destination", destination)
        return
    TowerOfHanoi(n-1, source, auxiliary, destination)
    print("Move disk", n, "from source", source, "to destination", destination)
    TowerOfHanoi(n-1, auxiliary, destination, source)
Fairly short and pretty easy to read. Compare this with its iterative counterpart:
# Python3 program for iterative Tower of Hanoi
import sys
# A structure to represent a stack
class Stack:
# Constructor to set the data of
# the newly created tree node
def __init__(self, capacity):
self.capacity = capacity
self.top = -1
self.array = [0]*capacity
# function to create a stack of given capacity.
def createStack(capacity):
stack = Stack(capacity)
return stack
# Stack is full when top is equal to the last index
def isFull(stack):
return (stack.top == (stack.capacity - 1))
# Stack is empty when top is equal to -1
def isEmpty(stack):
return (stack.top == -1)
# Function to add an item to stack.
# It increases top by 1
def push(stack, item):
if(isFull(stack)):
return
stack.top+=1
stack.array[stack.top] = item
# Function to remove an item from stack.
# It decreases top by 1
def Pop(stack):
if(isEmpty(stack)):
return -sys.maxsize
Top = stack.top
stack.top-=1
return stack.array[Top]
# Function to implement legal
# movement between two poles
def moveDisksBetweenTwoPoles(src, dest, s, d):
pole1TopDisk = Pop(src)
pole2TopDisk = Pop(dest)
# When pole 1 is empty
if (pole1TopDisk == -sys.maxsize):
push(src, pole2TopDisk)
moveDisk(d, s, pole2TopDisk)
# When pole2 pole is empty
elif (pole2TopDisk == -sys.maxsize):
push(dest, pole1TopDisk)
moveDisk(s, d, pole1TopDisk)
# When top disk of pole1 > top disk of pole2
elif (pole1TopDisk > pole2TopDisk):
push(src, pole1TopDisk)
push(src, pole2TopDisk)
moveDisk(d, s, pole2TopDisk)
# When top disk of pole1 < top disk of pole2
else:
push(dest, pole2TopDisk)
push(dest, pole1TopDisk)
moveDisk(s, d, pole1TopDisk)
# Function to show the movement of disks
def moveDisk(fromPeg, toPeg, disk):
print("Move the disk", disk, "from '", fromPeg, "' to '", toPeg, "'")
# Function to implement TOH puzzle
def tohIterative(num_of_disks, src, aux, dest):
s, d, a = 'S', 'D', 'A'
# If number of disks is even, then interchange
# destination pole and auxiliary pole
if (num_of_disks % 2 == 0):
temp = d
d = a
a = temp
total_num_of_moves = int(pow(2, num_of_disks) - 1)
# Larger disks will be pushed first
for i in range(num_of_disks, 0, -1):
push(src, i)
for i in range(1, total_num_of_moves + 1):
if (i % 3 == 1):
moveDisksBetweenTwoPoles(src, dest, s, d)
elif (i % 3 == 2):
moveDisksBetweenTwoPoles(src, aux, s, a)
elif (i % 3 == 0):
moveDisksBetweenTwoPoles(aux, dest, a, d)
# Input: number of disks
num_of_disks = 3
# Create three stacks of size 'num_of_disks'
# to hold the disks
src = createStack(num_of_disks)
dest = createStack(num_of_disks)
aux = createStack(num_of_disks)
tohIterative(num_of_disks, src, aux, dest)
Now the first one is way easier to read because, surprise surprise, shorter code is usually easier to understand than code that is 10 times longer. Sometimes you want to ask yourself: is the extra performance gain really worth it, given the hours wasted debugging the code? Is the iterative TowerOfHanoi faster than the recursive TowerOfHanoi? Probably, but not by a big margin. Would I like to program recursive problems like TowerOfHanoi using iteration? Hell no. Next we have another recursive function, the Ackermann function:
Using recursion:
def ackermann(m, n):
    if m == 0:
        # BASE CASE
        return n + 1
    elif m > 0 and n == 0:
        # RECURSIVE CASE
        return ackermann(m - 1, 1)
    elif m > 0 and n > 0:
        # RECURSIVE CASE
        return ackermann(m - 1, ackermann(m, n - 1))
Using Iteration:
callStack = [{'m': 2, 'n': 3, 'indentation': 0, 'instrPtr': 'start'}]
returnValue = None
while len(callStack) != 0:
m = callStack[-1]['m']
n = callStack[-1]['n']
indentation = callStack[-1]['indentation']
instrPtr = callStack[-1]['instrPtr']
if instrPtr == 'start':
print('%sackermann(%s, %s)' % (' ' * indentation, m, n))
if m == 0:
# BASE CASE
returnValue = n + 1
callStack.pop()
continue
elif m > 0 and n == 0:
# RECURSIVE CASE
callStack[-1]['instrPtr'] = 'after first recursive case'
callStack.append({'m': m - 1, 'n': 1, 'indentation': indentation + 1, 'instrPtr': 'start'})
continue
elif m > 0 and n > 0:
# RECURSIVE CASE
callStack[-1]['instrPtr'] = 'after second recursive case, inner call'
callStack.append({'m': m, 'n': n - 1, 'indentation': indentation + 1, 'instrPtr': 'start'})
continue
elif instrPtr == 'after first recursive case':
returnValue = returnValue
callStack.pop()
continue
elif instrPtr == 'after second recursive case, inner call':
callStack[-1]['innerCallResult'] = returnValue
callStack[-1]['instrPtr'] = 'after second recursive case, outer call'
callStack.append({'m': m - 1, 'n': returnValue, 'indentation': indentation + 1, 'instrPtr': 'start'})
continue
elif instrPtr == 'after second recursive case, outer call':
returnValue = returnValue
callStack.pop()
continue
print(returnValue)
And once again I will argue that the recursive implementation is much easier to understand. So my conclusion is use recursion if the problem by nature is recursive and requires manipulating items in a stack.
I'm going to answer your question by designing a Haskell data structure by "induction", which is a sort of "dual" to recursion. And then I will show how this duality leads to nice things.
We introduce a type for a simple tree:
data Tree a = Branch (Tree a) (Tree a)
| Leaf a
deriving (Eq)
We can read this definition as saying "A tree is a Branch (which contains two trees) or is a leaf (which contains a data value)". So the leaf is a sort of minimal case. If a tree isn't a leaf, then it must be a compound tree containing two trees. These are the only cases.
Let's make a tree:
example :: Tree Int
example = Branch (Leaf 1)
(Branch (Leaf 2)
(Leaf 3))
Now, let's suppose we want to add 1 to each value in the tree. We can do this by calling:
addOne :: Tree Int -> Tree Int
addOne (Branch a b) = Branch (addOne a) (addOne b)
addOne (Leaf a) = Leaf (a + 1)
First, notice that this is in fact a recursive definition. It takes the data constructors Branch and Leaf as cases (and since Leaf is minimal and these are the only possible cases), we are sure that the function will terminate.
What would it take to write addOne in an iterative style? What will looping into an arbitrary number of branches look like?
Also, this kind of recursion can often be factored out, in terms of a "functor". We can make Trees into Functors by defining:
instance Functor Tree where
    fmap f (Leaf a)     = Leaf (f a)
    fmap f (Branch a b) = Branch (fmap f a) (fmap f b)
and defining:
addOne' = fmap (+1)
We can factor out other recursion schemes, such as the catamorphism (or fold) for an algebraic data type. Using a catamorphism, we can write:
addOne'' = cata go where
    go (Leaf a)     = Leaf (a + 1)
    go (Branch a b) = Branch a b

Repeated application of functions

Reading this question got me thinking: For a given function f, how can we know that a loop of this form:
while (x > 2)
    x = f(x)
will stop for any value x? Is there some simple criterion?
(The fact that f(x) < x for x > 2 doesn't seem to help since the series may converge).
Specifically, can we prove this for sqrt and for log?
For these functions, a proof that ceil(f(x))<x for x > 2 would suffice. You could do one iteration -- to arrive at an integer number, and then proceed by simple induction.
For the general case, probably the best idea is to use well-founded induction to prove this property. However, as Moron pointed out in the comments, this could be impossible in the general case and the right ordering is, in many cases, quite hard to find.
Edit, in reply to Amnon's comment:
If you wanted to use well-founded induction, you would have to define another strict order that would be well-founded. In case of the functions you mentioned this is not hard: you can take x << y if and only if ceil(x) < ceil(y), where << is a symbol for this new order. This order is of course well-founded on numbers greater than 2, and both sqrt and log are decreasing with respect to it -- so you can apply well-founded induction.
Of course, in general case such an order is much more difficult to find. This is also related, in some way, to total correctness assertions in Hoare logic, where you need to guarantee similar obligations on each loop construct.
There's a general theorem for when then sequence of iterations will converge. (A convergent sequence may not stop in a finite number of steps, but it is getting closer to a target. You can get as close to the target as you like by going far enough out in the sequence.)
The sequence x, f(x), f(f(x)), ... will converge if f is a contraction mapping. That is, there exists a positive constant k < 1 such that for all x and y, |f(x) - f(y)| <= k |x-y|.
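As an informal spot-check (not a proof) that sqrt satisfies this on the region x >= 2 -- its derivative 1/(2*sqrt(x)) is at most 1/(2*sqrt(2)) < 1 there -- here is a small sketch:

import math, random

# Empirical spot-check that |sqrt(x) - sqrt(y)| <= k * |x - y| on [2, 1e6]
# with k = 1/(2*sqrt(2)) ~= 0.354; the tiny epsilon only absorbs float rounding.
k = 1 / (2 * math.sqrt(2))
random.seed(0)
for _ in range(10_000):
    x, y = random.uniform(2, 1e6), random.uniform(2, 1e6)
    assert abs(math.sqrt(x) - math.sqrt(y)) <= k * abs(x - y) + 1e-12
print("no counterexample found; contraction constant k =", round(k, 3))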
(The fact that f(x) < x for x > 2 doesn't seem to help since the series may converge).
If we're talking about floats here, that's not true. If for all x > n f(x) is strictly less than x, it will reach n at some point (because there's only a limited number of floating point values between any two numbers).
Of course this means you need to prove that f(x) is actually less than x using floating point arithmetic (i.e. proving it is less than x mathematically does not suffice, because then f(x) = x may still be true with floats when the difference is not enough).
There is no general algorithm to determine whether a function f and a variable x will end or not in that loop. The Halting problem is reducible to that problem.
For sqrt and log, we could safely do that because we happen to know the mathematical properties of those functions. Say, sqrt approaches 1, log eventually goes negative. So the condition x > 2 has to become false at some point.
Hope that helps.
In the general case, all that can be said is that the loop will terminate when it encounters an x_i <= 2. That doesn't mean that the sequence will converge, nor does it even mean that it is bounded below 2. It only means that the sequence contains a value that is not greater than 2.
That said, any sequence containing a subsequence that converges to a value strictly less than two will (eventually) halt. That is the case for the sequence x_{i+1} = sqrt(x_i), since it converges to 1. In the case of y_{i+1} = log(y_i), it will contain a value less than 2 before becoming undefined for elements of R (though it is well defined on the extended complex plane, C*, but I don't think it will, in general, converge except at any stable points that may exist, i.e. where z = log(z)). Ultimately what this means is that you need to perform some upfront analysis on the sequence to better understand its behavior.
The standard test for convergence of a sequence x_i to a point z is that given ε > 0, there is an n such that for all i > n, |x_i - z| < ε.
As an aside, consider the Mandelbrot Set, M. The test for a particular point c in C being an element of M is whether the sequence z_{i+1} = z_i^2 + c is unbounded, which occurs whenever there is a |z_i| > 2. Some elements of M may converge (such as 0), but many do not (such as -1).
Sure. For all positive numbers x, the following inequality holds:
log(x) <= x - 1
(this is a pretty basic result from real analysis; it suffices to observe that the second derivative of log is always negative for all positive x, so the function is concave down, and that x-1 is tangent to the function at x = 1). From this it follows essentially immediately that your while loop must terminate within the first ceil(x) - 2 steps -- though in actuality it terminates much, much faster than that.
A similar argument will establish your result for f(x) = sqrt(x); specifically, you can use the fact that:
sqrt(x) <= x/(2 sqrt(2)) + 1/sqrt(2)
for all positive x.
If you're asking whether this result holds for actual programs, instead of mathematically, the answer is a little bit more nuanced, but not much. Basically, many languages don't actually have hard accuracy requirements for the log function, so if your particular language implementation had an absolutely terrible math library this property might fail to hold. That said, it would need to be a really, really terrible library; this property will hold for any reasonable implementation of log.
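As an informal sanity check of the bound (and of how much faster the loop actually terminates in practice), a quick sketch with made-up helper names:

import math

def iterate_until(f, x, stop=2.0, max_steps=1000):
    # Repeatedly apply f, counting steps until the loop guard x > stop fails.
    steps = 0
    while x > stop and steps < max_steps:
        x = f(x)
        steps += 1
    return x, steps

print(iterate_until(math.sqrt, 1e100))  # (~1.57, 9): nine square roots get below 2
print(iterate_until(math.log, 1e100))   # (~1.69, 3): three logs get below 2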
I suggest reading this wikipedia entry which provides useful pointers. Without additional knowledge about f, nothing can be said.

Static analysis of multiple if statements (conditions)

I have code similar to:
if conditionA(x, y, z) then doA()
else if conditionB(x, y, z) then doB()
...
else if conditionZ(x, y, z) then doZ()
else throw ShouldNeverHappenException
I would like to validate two things (using static analysis):
Whether all the conditions conditionA, conditionB, ..., conditionZ are mutually exclusive (i.e. it is not possible for two or more conditions to be true at the same time).
Whether all possible cases are covered, i.e. the "else throw" statement will never be called.
Could you recommend me a tool and/or a way I could (easily) do this?
I would appreciate more detailed information than "use Prolog" or "use Mathematica"... ;-)
UPDATE:
Let's assume that conditionA, conditionB, ..., conditionZ are (pure) functions and x, y, z have "primitive" types.
Item 1 that you want to check is a stylistic issue. The program makes sense even if the conditions are not exclusive. Personally, as an author of static analysis tools, I think that users get enough false alarms without trying to force style on them (and since another programmer would write overlapping conditions on purpose, to that other programmer what you ask would be a false alarm). This said, there are tools that are configurable: for one of those, you could write a rule stating that the cases have to be exclusive when this construct is encountered. And as suggested by Jeffrey, you can wrap your code in a context in which you compute a boolean condition that is true iff there is no overlap, and check that condition instead.
Item 2 is not a style issue: you want to know if the exception can be raised.
The problem is difficult in theory and in practice, so tools usually give up at least one of correctness (never fail to warn if there is an issue) or completeness (never warn for a non-issue). If the types of the variables were unbounded integers, computability theory would state that an analyzer cannot be both correct and complete and terminate for all input programs. But enough with the theory. Some tools give up both correctness and completeness, and that doesn't mean they are not useful either.
An example of tool that is correct is Frama-C's value analysis: if it says that a statement (such as the last case in the sequence of elseifs) is unreachable, you know that it is unreachable. It is not complete, so when it doesn't say that the last statement is unreachable, you don't know.
An example of tool that is complete is Cute: it uses the so-called concolic approach to generate test cases automatically, aiming for structural coverage (that is, it will more or less heuristically try to generate tests that activate the last case once all the others have been taken). Because it generates test cases (each a single, definite input vector on which the code is actually executed), it never warns for a non-problem. This is what it means to be complete. But it may fail to find the test case that causes the last statement to be reached even though there is one: it is not correct.
This appears to be isomorphic to solving a 3-SAT instance, which is NP-hard. It is unlikely a static analyzer would attempt to cover this domain, unfortunately.
In the general case this is—as @Michael Donohue points out—an NP-hard problem.
But if you have only a reasonable number of conditions to check, you could just write a program that checks all of them.
for (int x = lowestX; x <= highestX; x++)
    for (int y ...)
        for (int z ...)
        {
            int conditionsMet = 0;
            if conditionA(x, y, z) then conditionsMet++;
            if conditionB(x, y, z) then conditionsMet++;
            ...
            if conditionZ(x, y, z) then conditionsMet++;
            if (conditionsMet != 1)
                PrintInBlinkingRed("Found an exception!", x, y, z)
        }
Assuming your conditions are boolean expression (and/or/not) over boolean-valued predicates X,Y,Z, your question is easily solved with a symbolic boolean evaluation engine.
The question about whether they cover all cases is answered by taking the disjunction of all the conditions and asking if it is a tautology. Wang's algorithm does this just fine.
The question about whether they intersect is answered pairwise; for formulas a and b,
symbolically construct a & b == false and apply Wang's tautology test again.
We've used the DMS Software Reengineering Toolkit to carry out similar boolean value computations (partial evaluations) over preprocessor conditionals in C. DMS provides the ability to parse source code (important if you intend to do this across a large code base and/or repeatedly as you modify your program over time), extract the program fragments, symbolically compose them, and then apply rewriting rules (to carry out boolean simplifications or algorithms such as Wang's).

How do I explain what a "naive implementation" is? [closed]

What is the clearest explanation of what computer scientists mean by "the naive implementation"? I need a good clear example which will illustrate — ideally, even to non-technical people — that the naive implementation may technically be a functioning solution to the problem, but practically be utterly unusable.
I'd try to keep it away from computers altogether. Ask your audience how they find an entry in a dictionary. (A normal dictionary of word definitions.)
The naive implementation is to start at the very beginning, and look at the first word. Oh, that's not the word we're looking for - look at the next one, etc. It's worth pointing out to the audience that they probably didn't even think of that way of doing things - we're smart enough to discount it immediately! It is, however, about the simplest way you could think of. (It might be interesting to ask them whether they can think of anything simpler, and check that they do really understand why it's simpler than the way we actually do it.)
The next implementation (and a pretty good one) is to start in the middle of the dictionary. Does the word we're looking for come before or after that? If it's before, turn to the page half way between the start and where we are now - otherwise, turn to the page half way between where we are now and the end, etc - binary chop.
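As a sketch, the same "binary chop" on a sorted list of words (the word list here is made up):

def find_word(words, target):
    # Binary chop: repeatedly open the dictionary in the middle of the
    # remaining range and discard the half that cannot contain the word.
    lo, hi = 0, len(words) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if words[mid] == target:
            return mid
        if words[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

words = ["ant", "bee", "cat", "dog", "elephant", "fox", "gnu"]
print(find_word(words, "elephant"))  # 4
print(find_word(words, "zebra"))     # -1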
The actual human implementation is to use our knowledge of letters to get very rapidly to "nearly the right place" - if we see "elephant" then we'll know it'll be "somewhere near the start" maybe about 1/5th of the way through. Once we've got to E (which we can do with very, very simple comparisons) we find EL etc.
StackOverflow's Jeff Atwood had a great example of a naive algorithm related to shuffling an array.
Doing it the most straightforward, least tricky way available. One example is selection sort.
In this case naive does not mean bad or unusable. It just means not particularly good.
Taking Jon Skeet's advice to heart you can describe selection sort as:
Find the highest value in the list and put it first
Find the next highest value and add it to the list
Repeat step 2 until you run out of list
It is easy to do and easy to understand, but not necessarily the best.
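A minimal sketch matching the three steps above (highest value first, so the result comes out in descending order; the name is illustrative):

def selection_sort_descending(items):
    # Repeatedly pull out the highest remaining value and append it to the result.
    remaining = list(items)
    result = []
    while remaining:
        highest = max(remaining)
        remaining.remove(highest)
        result.append(highest)
    return result

print(selection_sort_descending([3, 1, 4, 1, 5, 9, 2, 6]))  # [9, 6, 5, 4, 3, 2, 1, 1]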
Another naive implementation would be the use of recursion to compute an integer's factorial in an imperative language. A more efficient solution in that case is to just use a loop.
What's the most obvious, naive algorithm for exponentiation that you could think of?
base ** exp is base * base * ... * base, exp times:
double pow(double base, int exp) {
    double result = 1;
    for (int i = 0; i < exp; i++)
        result *= base;
    return result;
}
It doesn't handle negative exponents, though. Remembering that base ** exp == 1 / base ** (-exp) == (1 / base) ** (-exp):
double pow(double base, int exp) {
    double result = 1;
    if (exp < 0) {
        base = 1 / base;
        exp = -exp;
    }
    for (int i = 0; i < exp; i++)
        result *= base;
    return result;
}
It's actually possible to compute base ** exp with less than exp multiplications, though!
double pow(double base, int exp) {
    double result = 1;
    if (exp < 0) {
        base = 1 / base;
        exp = -exp;
    }
    while (exp) {
        if (exp % 2) {
            result *= base;
            exp--;
        }
        else {
            base *= base;
            exp /= 2;
        }
    }
    return result;
}
This takes advantage of the fact that base ** exp == (base * base) ** (exp / 2) if exp is even, and will only require about log2(exp) multiplications.
I took the time to read your question a little closer, and I have the perfect example.
a good clear example which will illustrate -- ideally, even to non-technical people -- that the naive implementation may technically be a functioning solution to the problem, but practically be utterly unusable.
Try Bogosort!
If bogosort were used to sort a deck of cards, it would consist of checking if the deck were in order, and if it were not, one would throw the deck into the air, pick up the cards up at random, and repeat the process until the deck is sorted.
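A minimal sketch, for the brave (expected running time is on the order of n! shuffles, so keep the input tiny):

import random

def is_sorted(deck):
    return all(deck[i] <= deck[i + 1] for i in range(len(deck) - 1))

def bogosort(deck):
    # Shuffle until the deck happens to come out sorted.
    deck = list(deck)
    while not is_sorted(deck):
        random.shuffle(deck)
    return deck

print(bogosort([3, 1, 2]))  # [1, 2, 3] ... eventually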
"Naive implementation" is almost always synonymous with "brute-force implementation". Naive implementations are often intuitive and the first to come to mind, but are also often O(n^2) or worse, thus taking too long too be practical for large inputs.
Programming competitions are full of problems where the naive implementation will fail to run in an acceptable amount of time, and the heart of the problem is coming up with an improved algorithm that is generally much less obvious but runs much more quickly.
A naive implementation is:
intuitive;
first to come to mind;
often ineffective and/or buggy in corner cases.
Let's say that someone figures out how to extract a single field from a database and then proceeds to write a web page in PHP or any language that makes a separate query on the database for each field on the page. It works, but will be incredibly slow, inefficient, and difficult to maintain.
Naive doesn't mean bad or unusable - it means having certain qualities which pose a problem in a specific context and for a specific purpose.
The classic example of course is sorting. In the context of sorting a list of ten numbers, any old algorithm (except bogosort) would work pretty well. However, when we get to the scale of thousands of numbers or more, typically we say that selection sort is the naive algorithm because it has the quality of O(n^2) time which would be too slow for our purposes, and that the non-naive algorithm is quicksort because it has the quality of O(n lg n) time which is fast enough for our purposes.
In fact, the case could be made that in the context of sorting a list of ten numbers, quicksort is the naive algorithm, since it will take longer than selection sort.
Determining if a number is prime or not (primality test) is an excellent example.
The naive method just checks whether n mod x is zero for at least one x in 2..sqrt(n). This method can get really slow for very large prime numbers and it is not feasible to use in cryptography.
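A minimal sketch of that naive trial-division test:

import math

def is_prime_naive(n):
    # Check every x from 2 up to sqrt(n); any divisor means n is composite.
    if n < 2:
        return False
    for x in range(2, math.isqrt(n) + 1):
        if n % x == 0:
            return False
    return True

print([n for n in range(2, 30) if is_prime_naive(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]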
On the other hand there are a couple of probability or fast deterministic tests. These are too complicated to explain here but you might want to check the relevant Wikipedia article on the subject for more information: http://en.wikipedia.org/wiki/Primality_test
Bubble sort over 100,000 entries.
The intuitive algorithms you normally use to sort a deck of cards (insertion sort or selection sort, both O(n^2)) can be considered naive, because they are easy to learn and implement, but would not scale well to a deck of, say, 100000 cards :D . In a general setting, there are faster (O(n log n)) ways to sort a list.
Note, however, that naive does not necessarily mean bad. There are situations where insertion sort is a good choice (say, when you have an already sorted big deck and few unsorted cards to add).
(Haven't seen a truly naive implementation posted yet so...)
The following implementation is "naive", because it does not cover the edge cases, and will break in other cases. It is very simple to understand, and can convey a programming message.
def naive_inverse(x):
    return 1/x
It will:
Break on x=0
Do a bad job when passed an integer
You could make it more "mature" by adding these features.
An O(n^2) algorithm.
foreach(object o in set1)
{
    foreach(object p in set1)
    {
        // codez
    }
}
This will perform fine with small sets and then get much worse with larger ones, as the work grows quadratically.
Another might be a naive Singleton that doesn't account for threading.
public static SomeObject Instance
{
    get
    {
        if(obj == null)
        {
            obj = new SomeObject();
        }
        return obj;
    }
}
If two threads access that at the same time it's possible for them to get two different versions. Leading to seriously weird bugs.