Laguerre's method - numerical-methods

I'm trying to implement Laguerre's method for the following equation:
1/(99*x+1)=2 (the general form is more complex: 1/(a*x+1)+...+1/(z*x+1)=res, which clears to a polynomial of n-th degree) where a,b,...,z>=0 and 0<res<N,
but it quickly diverges, with the iterates running off to infinity.
The solution in this case is very simple: x = -1/198 = -0.00505050505050505...
Since they say that Laguerre's method works in 99.999% of cases, I hope this is not that 0.001%?
Is there some other way to find polynomial roots that works in all cases? I need just one real root (and in my case there is always exactly one).

Laguerre's method works exclusively on polynomials, so you first need to transform your expression into polynomial form. Your first equation then becomes linear, which any method solves in one step. Your general problem has the form (1/x)*q'(1/x)/q(1/x)=res with q(z)=(z+a1)*...*(z+an): substituting z=1/x turns each term 1/(a_k*x+1) into z/(z+a_k), and summing those terms gives z*q'(z)/q(z) by the logarithmic-derivative identity. The polynomial to solve is therefore z*q'(z)-res*q(z), or, after multiplying through by x^n, x^(n-1)*q'(1/x)-res*x^n*q(1/x).
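As a concrete illustration, here is a minimal Python sketch of that transformation (the helper name and the use of NumPy are my own choices, not from the answer): it builds p(z) = z*q'(z) - res*q(z), finds its real roots, and maps them back through x = 1/z.

import numpy as np
from numpy.polynomial import polynomial as P

def solve_sum_of_reciprocals(a_coeffs, res):
    # q(z) = (z + a1)*...*(z + an), coefficients in ascending order
    q = np.array([1.0])
    for a in a_coeffs:
        q = P.polymul(q, np.array([a, 1.0]))
    dq = P.polyder(q)
    # p(z) = z*q'(z) - res*q(z)
    p = P.polysub(P.polymul(np.array([0.0, 1.0]), dq), res * q)
    roots = P.polyroots(p)
    real_roots = roots[np.isreal(roots)].real
    return [1.0 / z for z in real_roots if z != 0]

print(solve_sum_of_reciprocals([99.0], 2.0))   # [-0.00505050...], i.e. x = -1/198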
If you want a third-order method for general functions, try Halley's method.
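For reference, a bare-bones Halley iteration might look like this (a sketch that assumes you can supply f' and f''; applied here to the original equation 1/(99x+1) = 2):

def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    # third-order iteration: x <- x - 2*f*f' / (2*f'^2 - f*f'')
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        step = 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)
        x -= step
        if abs(step) < tol:
            break
    return x

f   = lambda x: 1 / (99 * x + 1) - 2
df  = lambda x: -99 / (99 * x + 1) ** 2
d2f = lambda x: 2 * 99**2 / (99 * x + 1) ** 3
print(halley(f, df, d2f, x0=0.0))   # -0.005050505... = -1/198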

Related

Looking for a way to analytically solve an integral of the form 1/sqrt of a fourth-order polynomial (with negative coefficients)

I have a 4th-order polynomial I'm using to model an asymmetric double-well potential. I need to find the integral of 1/sqrt of this fourth-order polynomial, where the variables of the integrand are themselves functionals. Maple spits out an elliptic function which I can't use for the subsequent problems. Does anyone have any intuition on how to attack this analytically?

Flexible programming for inverse function or root finding in Freepascal

I have a huge lib of math functions, like the pdf or cdf of statistical distributions. But often, e.g., the inverse cdf can only be calculated numerically, e.g. using Newton-Raphson or bisection; in the latter case we would need to check whether cdf(x) is > or < the target y0.
However, many functions have further parameters, like a Gaussian distribution having a certain mean and sigma, so the cdf is cdf(x,mean,sigma). Other functions, such as the standard normal cdf, have no further parameters, and some have even 3 or 4 further parameters.
A similar problem arises if you want to apply bisection to either linear functions (2 parameters) or parabolas (3 parameters), or if you want not the inverse function but e.g. the integral of f.
The easiest implementation would be to define cdf as a global function f(x) and to handle the target y0 and the extra parameters via global variables.
However, this is a very old-fashioned way, and Freepascal also supports procedural parameters, for calls like x=icdf(0.9987,#cdfStdNorm).
Even overloading is supported, to allow calls like x2=icdf(0.9987,0,2,#cdfNorm) that also pass mean and sigma.
But this still ends up in two separate code blocks (even whole functions), because in one case we need to call cdf with x only, and in the second example also with mean and sigma.
Is there an elegant solution to this problem in Freepascal? Maybe using variant records? Or an object-oriented approach? I have no clue about OO, but I know the variant-object style would require changing at least the headers of many functions, because I want to apply the technique not only to inverse cdf calculation but also to numerical integration, root finding, optimization, etc.
Or is it "best" just to define a real function type with e.g. x + 5 parameters (maybe as an array), and to ignore the unused parameters? But it looks to me like I would then need many "wrapper" functions, or to re-code all the existing functions (to use the arrays, even when they are sometimes not needed!).
Maybe macros can help as well? Any Freepascal hints are very welcome!
If you make it a (function .. of object), mean and sigma could be part of the class, and the function could internally just access them. Only the parameters that really change during the iteration would remain parameters (read: x).
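The same design, sketched in Python for brevity (class and function names here are hypothetical; in Free Pascal a method of an object would play the role of the callable): the distribution parameters live on the object, so the generic root finder only ever sees a one-argument function.

import math

class NormalCdf:
    def __init__(self, mean, sigma):
        self.mean = mean
        self.sigma = sigma
    def __call__(self, x):
        # standard normal cdf, shifted and scaled by the captured parameters
        return 0.5 * (1 + math.erf((x - self.mean) / (self.sigma * math.sqrt(2))))

def bisect_inverse(f, y0, lo, hi, tol=1e-12):
    # generic bisection for f(x) = y0, assuming f is increasing on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < y0 else (lo, mid)
    return (lo + hi) / 2

cdf = NormalCdf(mean=0.0, sigma=2.0)
print(bisect_inverse(cdf, 0.9987, -20.0, 20.0))   # analogous to icdf(0.9987, 0, 2, ...)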
Anonymous methods, as talked about by David and Rudy, are a further step that avoids having to declare a class for each such invocation, but that is a convenience thing and IMHO not the core of the question. At the expense of declaring the class, your core code is free of global-variable use, and anonymous methods might also come with a performance cost, depending on usage.
Free Pascal also supports nested functions (function ... is nested), the original Pascal closure-like construct that was never adopted by the Pascal compilers from Borland. A nested procedure passed as a callback can access local variables of the procedure in which it was declared. The Free Pascal numlib numeric math package uses this in some cases similar to yours. For math it is even more natural.
Delphi never implements old constructs because borrowing syntax from other languages looks better on bullet lists and keeps the subscriptions flowing.

Are idempotent functions the same as pure functions?

I read Wikipedia's explanation of idempotence.
I know it means a function's output is determined by its input.
But I remember that I heard a very similar concept: pure function.
I Googled them but can't find the difference...
Are they equivalent?
An idempotent function can cause idempotent side-effects.
A pure function cannot.
For example, a function which sets the text of a textbox is idempotent (because multiple calls will display the same text), but not pure.
Similarly, deleting a record by GUID (not by count) is idempotent, because the row stays deleted after subsequent calls. (additional calls do nothing)
A pure function is a function without side-effects where the output is solely determined by the input - that is, calling f(x) will give the same result no matter how many times you call it.
An idempotent function is one that can be applied multiple times without changing the result - that is, f(f(x)) is the same as f(x).
A function can be pure, idempotent, both, or neither.
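A few toy Python examples of the four combinations (my illustrations, not from the answer):

def absolute(x):
    # pure AND idempotent: absolute(absolute(x)) == absolute(x)
    return abs(x)

def add_one(x):
    # pure but NOT idempotent: add_one(add_one(x)) != add_one(x)
    return x + 1

state = {"text": ""}

def set_text(s):
    # idempotent side effect, but NOT pure: repeated calls leave the same state
    state["text"] = s

def append_text(s):
    # neither: every call changes the state further
    state["text"] += s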
No, an idempotent function will change program/object/machine state - and will make that change only once (despite repeated calls). A pure function changes nothing, and continues to provide a (return) result each time it is called.
Functional purity means that there are no side effects. On the other hand, idempotence means that a function is invariant with respect to multiple calls.
Every pure function is side-effect idempotent, because pure functions never produce side effects even if they are called more than once. However, return-value idempotence means that f(f(x)) = f(x), which is not affected by purity.
A large source of confusion is that in computer science, there seems to be different definitions for idempotence in imperative and functional programming.
From wikipedia (https://en.wikipedia.org/wiki/Idempotence#Computer_science_meaning)
In computer science, the term idempotent is used more comprehensively to describe an operation that will produce the same results if executed once or multiple times. This may have a different meaning depending on the context in which it is applied. In the case of methods or subroutine calls with side effects, for instance, it means that the modified state remains the same after the first call. In functional programming, though, an idempotent function is one that has the property f(f(x)) = f(x) for any value x.
Since a pure function does not produce side effects, I am of the opinion that idempotence has nothing to do with purity.
I've found more places where 'idempotent' is defined as f(f(x)) = f(x) but I really don't believe that's accurate.
Instead I think this definition is more accurate (but not totally):
describing an action which, when performed multiple times on the same subject, has no further effect on its subject after the first time it is performed. A projection operator is idempotent.
The way I interpret this, if we apply f on x (the subject) twice like:
f(x);f(x);
then the (side-)effect is the same as
f(x);
Because pure functions don't allow side-effects, pure functions are trivially also 'idempotent'.
A more general (and more accurate) definition of idempotent also includes functions like
toggle(x)
We can say the degree of idempotency of a toggle is 2, because after applying toggle every 2 times we always end up in the same state.
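A tiny Python sketch of that "degree 2" idea (my own illustration):

light = {"on": False}

def toggle():
    # a single call changes the state, so toggle is not idempotent...
    light["on"] = not light["on"]

toggle(); toggle()
# ...but every pair of calls restores it: toggle has 'degree of idempotency' 2
assert light["on"] == False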

algorithm to solve related equations

I am working on a project to create a generic equation solver... I envision this taking the form of 25-30 equations that will be saved in a table: variable names along with the operators.
I would then call this table to solve any equation with one missing variable, and it would move operators and other pieces to the other side of the missing variable.
E.g. 2x + 3y = z: if x were the missing variable, I would call the equation with values for y and z, and it would rearrange to solve for x = (z - 3y)/2.
Equations could be linear, polynomial, binary (yes/no result)...
I am not sure whether there is any light-weight library available or whether this needs to be built from scratch... any pointers or guidance will be appreciated.
See Maxima.
I rather like it for my symbolic computation needs.
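If Python is an option, SymPy does the same kind of symbolic rearrangement as Maxima; a minimal sketch using the 2x + 3y = z example from the question (my illustration):

from sympy import symbols, Eq, solve

x, y, z = symbols("x y z")
equation = Eq(2 * x + 3 * y, z)

# solve for whichever variable is missing, then substitute the known values
expr_for_x = solve(equation, x)[0]     # (z - 3*y)/2
print(expr_for_x.subs({y: 1, z: 7}))   # 2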
If such a general black-box algorithm could be made accurate, robust and stable, pigs could fly. Solutions can be nonexistent, multiple, parametrized, etc.
Even for linear equations it gets tricky to do it right.
Your best bet is some form of Newton algorithm, but generally you tailor it to your problem at hand.
EDIT: I didn't see you wanted something symbolic, rather than numerical. It's another bag of worms.
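For the numeric route, a bare-bones Newton iteration is easy to sketch (assuming you can supply the derivative; tailor the starting point and stopping test to the problem at hand):

def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# solve 2x + 3y = z for x with y = 1, z = 7 known  ->  x = 2
print(newton(lambda x: 2 * x + 3 * 1 - 7, lambda x: 2.0, x0=0.0))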

Can every recursion be converted into iteration?

A reddit thread brought up an apparently interesting question:
Tail-recursive functions can trivially be converted into iterative functions. Others can be transformed by using an explicit stack. Can every recursion be transformed into iteration?
The (counter?)example in the post is the pair:
(define (num-ways x y)
  (cond ((= x 0) 1)
        ((= y 0) 1)
        (else (num-ways2 x y))))

(define (num-ways2 x y)
  (+ (num-ways (- x 1) y)
     (num-ways x (- y 1))))
Can you always turn a recursive function into an iterative one? Yes, absolutely, and the Church-Turing thesis proves it if memory serves. In lay terms, it states that what is computable by recursive functions is computable by an iterative model (such as the Turing machine) and vice versa. The thesis does not tell you precisely how to do the conversion, but it does say that it's definitely possible.
In many cases, converting a recursive function is easy. Knuth offers several techniques in "The Art of Computer Programming". And often, a thing computed recursively can be computed by a completely different approach in less time and space. The classic example of this is Fibonacci numbers or sequences thereof. You've surely met this problem in your degree plan.
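To make the Fibonacci example concrete, here is a short Python sketch contrasting the two forms (my illustration): the naive recursion recomputes subproblems exponentially, while the loop runs in linear time and constant space.

def fib_recursive(n):
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_recursive(10) == fib_iterative(10) == 55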
On the flip side of this coin, we can certainly imagine a programming system so advanced as to treat a recursive definition of a formula as an invitation to memoize prior results, thus offering the speed benefit without the hassle of telling the computer exactly which steps to follow in the computation of a formula with a recursive definition. Dijkstra almost certainly did imagine such a system. He spent a long time trying to separate the implementation from the semantics of a programming language. Then again, his non-deterministic and multiprocessing programming languages are in a league above the practicing professional programmer.
In the final analysis, many functions are just plain easier to understand, read, and write in recursive form. Unless there's a compelling reason, you probably shouldn't (manually) convert these functions to an explicitly iterative algorithm. Your computer will handle that job correctly.
I can see one compelling reason. Suppose you've a prototype system in a super-high level language like [donning asbestos underwear] Scheme, Lisp, Haskell, OCaml, Perl, or Pascal. Suppose conditions are such that you need an implementation in C or Java. (Perhaps it's politics.) Then you could certainly have some functions written recursively but which, translated literally, would explode your runtime system. For example, infinite tail recursion is possible in Scheme, but the same idiom causes a problem for existing C environments. Another example is the use of lexically nested functions and static scope, which Pascal supports but C doesn't.
In these circumstances, you might try to overcome political resistance to the original language. You might find yourself reimplementing Lisp badly, as in Greenspun's (tongue-in-cheek) tenth law. Or you might just find a completely different approach to the solution. But in any event, there is surely a way.
Is it always possible to write a non-recursive form for every recursive function?
Yes. A simple formal proof is to show that µ recursion and a non-recursive calculus such as GOTO are both Turing-complete. Since all Turing-complete calculi are strictly equivalent in their expressive power, all recursive functions can be implemented by the non-recursive Turing-complete calculus.
Unfortunately, I’m unable to find a good, formal definition of GOTO online so here’s one:
A GOTO program is a sequence of commands P executed on a register machine such that P is one of the following:
HALT, which halts execution
r = r + 1 where r is any register
r = r - 1 where r is any register
GOTO x where x is a label
IF r ≠ 0 GOTO x where r is any register and x is a label
A label, followed by any of the above commands.
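To make that definition concrete, here is a toy Python interpreter for such GOTO programs (the tuple encoding and register names are my own; labels are simply list indices):

def run_goto(program, registers):
    pc = 0
    while True:
        op = program[pc]
        if op[0] == "HALT":
            return registers
        elif op[0] == "INC":          # r = r + 1
            registers[op[1]] += 1
        elif op[0] == "DEC":          # r = r - 1
            registers[op[1]] -= 1
        elif op[0] == "GOTO":
            pc = op[1]; continue
        elif op[0] == "IFNZ":         # IF r != 0 GOTO x
            if registers[op[1]] != 0:
                pc = op[2]; continue
        pc += 1

# r0 = r0 + r1, computed by a loop rather than recursion
prog = [("IFNZ", "r1", 2), ("HALT",), ("DEC", "r1"), ("INC", "r0"), ("GOTO", 0)]
print(run_goto(prog, {"r0": 3, "r1": 4}))   # {'r0': 7, 'r1': 0}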
However, the conversion between recursive and non-recursive functions isn't always trivial (except by mindless manual re-implementation of the call stack).
For further information see this answer.
Recursion is implemented as stacks or similar constructs in actual interpreters and compilers. So you certainly can convert a recursive function to an iterative counterpart, because that's how it's always done (albeit automatically). You'll just be duplicating the compiler's work in an ad-hoc and probably very ugly and inefficient manner.
Basically yes: in essence, what you end up having to do is replace method calls (which implicitly push state onto the stack) with explicit stack pushes that remember where the 'previous call' had gotten up to, and then execute the 'called method' instead.
I'd imagine that the combination of a loop, a stack and a state-machine could be used for all scenarios by basically simulating the method calls. Whether or not this is going to be 'better' (either faster, or more efficient in some sense) is not really possible to say in general.
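A Python sketch of that idea, applied to the num-ways function from the question (my illustration): each stack entry stands in for a pending call, and a running total replaces the chain of returns.

def num_ways_iterative(x, y):
    total = 0
    stack = [(x, y)]               # explicit stack of 'pending calls'
    while stack:
        a, b = stack.pop()
        if a == 0 or b == 0:
            total += 1             # base case contributes one way
        else:
            stack.append((a - 1, b))   # the two recursive calls, deferred
            stack.append((a, b - 1))
    return total

print(num_ways_iterative(3, 3))    # 20, same as the recursive version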
Recursive function execution flow can be represented as a tree.
The same logic can be done by a loop, which uses a data-structure to traverse that tree.
Depth-first traversal can be done using a stack, breadth-first traversal can be done using a queue.
So, the answer is: yes. Why: https://stackoverflow.com/a/531721/2128327.
Can any recursion be done in a single loop? Yes, because a Turing machine does everything it does by executing a single loop:
1. fetch an instruction,
2. evaluate it,
3. goto 1.
Yes, using explicitly a stack (but recursion is far more pleasant to read, IMHO).
Yes, it's always possible to write a non-recursive version. The trivial solution is to use a stack data structure and simulate the recursive execution.
In principle it is always possible to remove recursion and replace it with iteration in a language that has infinite state both for data structures and for the call stack. This is a basic consequence of the Church-Turing thesis.
Given an actual programming language, the answer is not as obvious. The problem is that it is quite possible to have a language where the amount of memory that can be allocated in the program is limited but where the amount of call stack that can be used is unbounded (32-bit C where the address of stack variables is not accessible). In this case, recursion is more powerful simply because it has more memory it can use; there is not enough explicitly allocatable memory to emulate the call stack. For a detailed discussion on this, see this discussion.
All computable functions can be computed by Turing Machines and hence the recursive systems and Turing machines (iterative systems) are equivalent.
Sometimes replacing recursion is much easier than that. Recursion was the fashionable thing taught in CS in the 1990s, and so a lot of average developers from that time figured that if you solved something with recursion, it was a better solution. So they would use recursion instead of looping backwards to reverse order, or silly things like that. So sometimes removing recursion is a simple "duh, that was obvious" type of exercise.
This is less of a problem now, as the fashion has shifted towards other technologies.
Recursion is nothing but calling the same function on the stack; once a call finishes, its frame is removed from the stack. So one can always use an explicit stack to manage this repetition of the same operation using iteration.
So, yes, all recursive code can be converted to iteration.
Removing recursion is a complex problem and is feasible under well-defined circumstances.
The below cases are among the easy:
tail recursion
direct linear recursion
Apart from the explicit stack, another pattern for converting recursion into iteration is the use of a trampoline.
Here, the functions either return the final result or a closure of the function call that they would otherwise have performed. Then, the initiating (trampolining) function keeps invoking the closures returned until the final result is reached.
This approach works for mutually recursive functions, but I'm afraid it only works for tail-calls.
http://en.wikipedia.org/wiki/Trampoline_(computers)
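A minimal Python sketch of the pattern (my illustration; the mutually tail-recursive even/odd pair is the standard demo):

def trampoline(thunk):
    # keep invoking returned closures until a plain value comes back
    result = thunk
    while callable(result):
        result = result()
    return result

def is_even(n):
    return True if n == 0 else (lambda: is_odd(n - 1))

def is_odd(n):
    return False if n == 0 else (lambda: is_even(n - 1))

print(trampoline(lambda: is_even(100000)))   # True, without a deep call stack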
I'd say yes - a function call is nothing but a goto and a stack operation (roughly speaking). All you need to do is imitate the stack that's built while invoking functions and do something similar to a goto (you can imitate gotos even in languages that don't explicitly have this keyword).
Have a look at the following entries on wikipedia, you can use them as a starting point to find a complete answer to your question.
Recursion in computer science
Recurrence relation
Here is a paragraph that may give you a hint on where to start:
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
Also have a look at the last paragraph of this entry.
It is possible to convert any recursive algorithm to a non-recursive one, but often the logic is much more complex and doing so requires the use of a stack. In fact, recursion itself uses a stack: the function stack.
More Details: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Functions
tazzego, recursion means that a function will call itself whether you like it or not. When people are talking about whether or not things can be done without recursion, they mean this and you cannot say "no, that is not true, because I do not agree with the definition of recursion" as a valid statement.
With that in mind, just about everything else you say is nonsense. The only other thing that you say that is not nonsense is the idea that you cannot imagine programming without a callstack. That is something that had been done for decades until using a callstack became popular. Old versions of FORTRAN lacked a callstack and they worked just fine.
By the way, there exist Turing-complete languages that only implement recursion (e.g. SML) as a means of looping. There also exist Turing-complete languages that only implement iteration as a means of looping (e.g. FORTRAN IV). The Church-Turing thesis proves that anything possible in a recursion-only language can be done in a non-recursive language and vice versa, by the fact that they both have the property of Turing-completeness.
Here is an iterative algorithm:
def howmany(x, y)
  a = {}
  # dynamic programming over diagonals: entries with coordinate sum n
  # only depend on entries whose coordinates sum to n - 1
  for n in (0..x+y)
    for m in (0..n)
      a[[m, n-m]] = if m == 0 or n-m == 0 then 1 else a[[m-1, n-m]] + a[[m, n-m-1]] end
    end
  end
  return a[[x, y]]
end