I am trying to solve this problem:
Write a program to compute the real roots of a quadratic equation (ax^2 + bx + c = 0). The roots can be calculated using the following formulae:
x1 = (-b + sqrt(b^2 - 4ac))/2a
and
x2 = (-b - sqrt(b^2 - 4ac))/2a
I wrote the following code, but it's not correct:
program week7_lab2_a1;
var
  a, b, c: integer;
  x, x1, x2: real;
begin
  write('Enter the value of a :');
  readln(a);
  write('Enter the value of b :');
  readln(b);
  write('Enter the value of c :');
  readln(c);
  if (sqr(b)-4*a*c) >= 0 then
  begin
    if (a > 0) and (b > 0) then
    begin
      x1 := (-1*b+sqrt(sqr(b)-4*a*c))/2*a;
      x2 := (-1*b-sqrt(sqr(b)-4*a*c))/2*a;
      writeln('x1=', x1:0:2);
      writeln('x2=', x2:0:2);
    end
    else if (a = 0) and (b = 0) then
      write('There is no solution')
    else if (a = 0) and (b <> 0) then
    begin
      x := -1*c/b;
      write('The only root :', x:0:2);
    end;
  end
  else if (sqr(b)-4*a*c) < 0 then
    write('There is no real root');
  readln;
end.
Do you know why?
Also, taking a = -6, b = 7, c = 8, can you desk-check it after writing the pseudocode?
You have an operator precedence error here:
x1:=(-1*b+sqrt(sqr(b)-4*a*c))/2*a;
x2:=(-1*b-sqrt(sqr(b)-4*a*c))/2*a;
Look at the end: the 2*a doesn't do what you think it does. It does divide the expression by 2, but then multiplies the result by a, because of precedence rules. This is what you want:
x1:=(-1*b+sqrt(sqr(b)-4*a*c))/(2*a);
x2:=(-1*b-sqrt(sqr(b)-4*a*c))/(2*a);
This is because the expression is evaluated left to right (apart from brackets), and multiplication and division have the same precedence. So once the value is divided by 2, the result is then multiplied by a, exactly as written.
As it may not be clear from the formula you were given, the quadratic formula is:
x = (-b ± sqrt(b^2 - 4ac)) / (2a)
As you can see, you need to divide by the whole of 2a, so you must use brackets here to make it work properly.
Otherwise the code looks fine, if somewhat convoluted (for instance, you could discard cases where (a = 0) and (b = 0) right after input, which would simplify the logic a bit later on). Did you really mean to exclude negative coefficients though, or just zero coefficients? You should check that.
Also be careful with floating-point equality comparison - it works fine with 0, but will usually not work with most other constants, so use an epsilon instead if you need to check whether one value equals another (like so: abs(a - b) < 1e-6).
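The effect of the precedence bug is easy to demonstrate numerically; here is a quick sketch in Python using the desk-check values a = -6, b = 7, c = 8 from the question:

```python
# Desk-check values from the question: a = -6, b = 7, c = 8.
a, b, c = -6.0, 7.0, 8.0
disc = b*b - 4*a*c             # 49 + 192 = 241, so two real roots exist
root = disc ** 0.5
wrong = (-b + root) / 2 * a    # divides by 2, then multiplies by a (the bug)
right = (-b + root) / (2*a)    # divides by the whole 2a, as the formula requires
assert wrong != right
# Only the bracketed version satisfies the original equation:
assert abs(a*right**2 + b*right + c) < 1e-9
```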
Completely agree with what Thomas said in his answer. I just want to add some optimization remarks:
You check the discriminant value in the if-statement, and then compute it again:
if (sqr(b)-4*a*c)>=0 then
...
x1:=(-1*b+sqrt(sqr(b)-4*a*c))/2*a;
x2:=(-1*b-sqrt(sqr(b)-4*a*c))/2*a;
This is not efficient: instead of evaluating the discriminant once, you compute it several times. You should first compute the discriminant and store it in a variable:
D := sqr(b)-4*a*c;
and after that you can use your evaluated value in all expressions, like this:
if (D >= 0) then
...
x1:=(-b+sqrt(D))/(2*a);
x2:=(-b-sqrt(D))/(2*a);
and so on.
Also, I wouldn't write -1*b... Instead, just use -b (or 0-b at worst), but not multiplication. The multiplication here is not needed.
EDIT:
One more note:
Your code:
if (sqr(b)-4*a*c)>=0 then
begin
...
end
else
if (sqr(b)-4*a*c)<0 then
write('The is no real root');
Here you effectively check the if-condition twice. Schematically, this is:
if (a) then
begin ... end
else
if (not a)
...
Here you check for not a (in your code it corresponds to (sqr(b)-4*a*c)<0). In the else branch the condition a can only be false, so there is no need to check it again. You should just drop that second test.
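Putting both remarks together (discriminant computed once, no -1* multiplications, no redundant else-if), the cleaned-up logic might look like this Python sketch (real_roots is my name, not from the original code):

```python
import math

def real_roots(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0, handling degenerate cases."""
    if a == 0 and b == 0:
        return []                 # no solution (or every x, if c == 0 as well)
    if a == 0:
        return [-c / b]           # linear case: the only root
    d = b*b - 4*a*c               # compute the discriminant exactly once
    if d < 0:
        return []                 # no real root; no second test of d needed
    r = math.sqrt(d)
    return [(-b + r) / (2*a), (-b - r) / (2*a)]

assert real_roots(1, -3, 2) == [2.0, 1.0]   # x^2 - 3x + 2 = (x-1)(x-2)
assert real_roots(0, 2, -4) == [2.0]        # linear: 2x - 4 = 0
assert real_roots(1, 0, 1) == []            # x^2 + 1 has no real root
```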
Sorry for the basic question, but it's quite hard to find much discussion on Maxima specifics.
I'm trying to learn some Maxima and wanted to use something like
x:2
x+=2
which as far as I can tell doesn't exist in Maxima. Then I discovered that I can define my own operators as infix operators, so I tried doing
infix("+=");
"+=" (a,b):= a:(a+b);
However, this doesn't work: if I first set x:1 and then call x+=2, the function returns 3, but if I check the value of x I see it hasn't changed.
Is there a way to achieve what I was trying to do in Maxima? Could anyone explain why the definition I gave fails?
Thanks!
The problem with your implementation is that there is too much and too little evaluation -- the += function doesn't see the symbol x so it doesn't know to what variable to assign the result, and the left-hand side of an assignment isn't evaluated, so += thinks it is assigning to a, not x.
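The same pitfall can be reproduced in any language with ordinary call-by-value functions; a Python sketch (plus_eq is a hypothetical name for illustration):

```python
# An ordinary function receives the *value* of x, not the name x, so
# rebinding its parameter cannot update the caller's variable.
def plus_eq(a, b):
    a = a + b      # rebinds the local name a only
    return a

x = 1
result = plus_eq(x, 2)
assert result == 3  # the function returns the sum...
assert x == 1       # ...but x itself is unchanged, just as in the Maxima attempt
```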
Here's one way to get the right amount of evaluation. ::= defines a macro, which is just a function which quotes its arguments, and for which the return value is evaluated again. buildq is a substitution function which quotes the expression into which you are substituting. So the combination of ::= and buildq here is to construct the x: x + 2 expression and then evaluate it.
(%i1) infix ("+=") $
(%i2) "+="(a, b) ::= buildq ([a, b], a: a + b) $
(%i3) x: 100 $
(%i4) macroexpand (x += 1);
(%o4) x : x + 1
(%i5) x += 1;
(%o5) 101
(%i6) x;
(%o6) 101
(%i7) x += 1;
(%o7) 102
(%i8) x;
(%o8) 102
So it is certainly possible to do so, if you want to. But may I suggest that maybe you don't need it? Modifying a variable makes it harder to keep track, mentally, of what is going on. A programming policy such as one-time assignment can make it easier for the programmer to understand the program. This is part of a general approach called functional programming; perhaps you can take a look at that. Maxima has various features which make it possible to use functional programming, although you are not required to use them.
I encountered some problems while trying to solve an integral equation using Maple. There are 4 functions that are important. Here is my code defining the functions and attempting to solve the problem:
"T is the starting point of the problem where it's given."
T := proc (t) options operator, arrow; sqrt(t)/(sqrt(t)+sqrt(1-t))^2 end proc;
"Now I solved for the inverse of function T."
V := proc (t) options operator, arrow; ([solve(t = T(y), y, useassumptions)], [0 <= t and t <= 1]) end proc;
"Only the first solution to the above is the correct inverse as we can see from the plot below."
sol := [allvalues(V(t))]; plot([t, T(t), op(1, sol), op(2, sol)], t = 0 .. 1, legend = [typeset("Curve: ", "t"), typeset("Curve: ", "T(t)"), typeset("Curve: ", "V(t)"), typeset("Curve: ", "V2(t)")]);
"Now I define the first solution as my inverse function called V1."
V1 := proc (t) options operator, arrow; evalf(op(1, sol)) end proc
"As the problem required, I have to find the derivative of V1. I call it dV."
dV := proc (t) options operator, arrow; diff(V1(t), t) end proc
"Then define a new function called F"
F := proc (t) options operator, arrow; -10*ln(1-t*(1-exp(-1))) end proc
"With V1(t), F(t), dV(t) defined, we define a new function U."
U := proc (t,lambda) options operator, arrow; piecewise(t <= .17215, min(IVF(V1(t)), max(0, 12-(1/4)/(lambda^2*dV(t)^2))), .17215 <= t, min(IVF(V1(t)), max(0, 12-(1/4)*.7865291304*lambda^-2))) end proc;
"Next I will be trying to find the value of lambda, such that the"
solve(int(U(t,lambda)*dV(t),t=0..1)= R,lambda)
"where the R will be a real number, let's say 2.93 for now."
I think the code works fine all the way up to the last step, where I have to solve the integral equation for lambda; that step fails and I couldn't figure out why.
I also tried to make progress by expanding U by cases: U(t, lambda) will be 0 if t <= 0.17215 and 12-(1/4)/(lambda^2*dV(t)^2) <= 0, or if t >= 0.17215 and 12-(1/4)*0.7865291304*lambda^(-2) <= 0, and so forth. But I had problems solving the inequalities. For example, solve(12-1/4*lambda^(-2)*dV(t)^(-2) <= 0, t) runs indefinitely.
Appreciate your input!
I'm guessing that by IVF(V1(t)) you really meant F(V(t)) .
It seemed to me that computation of your dV(t) (and perhaps V(t) sometimes) for very small t might exhibit some numerical difficulty.
And I was worried that using a narrower range of integration like, say, t=1.0e-2 .. 1.0, might mess with some significant contribution to the integral from that portion.
So below I've constructed V and dV so that they raised Digits (only) for greater working precision on (only) their own body of calculations.
And I've switched it from symbolic integration to numeric/float integration. And I've added the epsilon option for that, in case you need to control the accuracy tolerance more finely.
Some of these details help with the speed issues you mentioned above. And some of those details slow the computation down a bit, while perhaps giving a greater sense of robustness of it.
The plotting command calls below are just there for illustration. Comment them out if you please.
I used Maple 2018.0 for these. If you have difficulties then please report your particular version used.
restart;
T:=sqrt(x)/(sqrt(x)+sqrt(1-x))^2:
Vexpr:=solve(t=T,x,explicit)[1]:
Vexpr:=simplify(combine(Vexpr)) assuming t>0, t<1:
The procedure for computing V is constructed next. It's constructed in this subtle way for two reasons:
1) It internally raises working precision Digits to avoid round-off error.
2) It creates (and throws away) an empty list []. The effect of this is that it will not run under fast hardware-precision evalhf mode. This is key later on, during the numeric integration.
V:=subs(__dummy=Vexpr,
proc(t)
if not type(t,numeric) then
return 'procname'(t);
end if;
[];
Digits:=100;
evalf(__dummy);
end proc):
V(0.25);
0.2037579200498004992002294012453548811286405373653346413644953624868320875151070347969077227713469370
V(1e-9);
1.000000004000000011999860641019912081452188648552422617659574502917213370846924673012122642160567565 * 10^(-18)
Let's plot T and V,
plot([T, V(x)], x=0..1);
Now let's create a procedure for the derivative of V, using the same technique so that it too raises Digits (only internal to itself) and will not run under evalhf.
dV:=subs(__dummy=diff(Vexpr,t),
proc(t)
if not type(t,numeric) then
return 'procname'(t);
end if;
[];
Digits:=100;
evalf(__dummy);
end proc):
Note that calling dV(t) for unknown symbol t makes it return unevaluated. This is convenient later on.
dV(t);
dV(t)
dV(0.25);
2.440175800845679475383936264368244943663299482082704645591397622347525580165201957710520046760103982
dV(1e-15);
-5.196152540419877135890914760620993029033559083886201903883431373611362758951860627325613490378754702
evalhf(dV(1e-15)); # I want this to error-out.
Error, unable to evaluate expression to hardware floats: []
This is your F.
F:=t->-10*ln(1-t*(1-exp(-1))):
Now create the procedure U. This too returns unevaluated when either of its arguments is not an actual number. Note that lambda is its second parameter.
U:=proc(t,lambda)
if not ( type(t,numeric) and
type(lambda,numeric) ) then
return 'procname'(args);
end if;
piecewise(t <= .17215,
min(F(V(t)), max(0, 12-(1/4)/(lambda^2*dV(t)^2))),
t >= .17215,
min(F(V(t)), max(0, 12-(1/4)*.7865291304*lambda^(-2))));
end proc:
Let's try a particular value of t=0.2 and lambda=2.5.
evalf(U(0.2, 2.5));
.6774805135
Let's plot it. This takes a little while. Note that it doesn't seem to blow up for small t.
plot3d([Re,Im](U(t,lambda)),
t=0.0001 .. 1.0, lambda=-2.0 .. 2.0,
color=[blue,red]);
Let's also plot dV. This takes a while, since dV raises Digits. Note that it appears accurate (not blowing up) for small t.
plot(dV, 0.0 .. 1.0, color=[blue,red], view=-1..4);
Now let's construct a procedure H which computes the integral numerically.
Note that lambda is its first parameter, which it passes as the second argument to U. This relates to your followup Comment.
The second parameter of H is passed to evalf(Int(...)) to specify its accuracy tolerance.
Note that the range of integration is 0.001 to 1.0 (for t). You can try a lower end-point closer to 0.0, but it will take even longer to compute (and may not succeed).
There is a complicated interplay between the lower end-point of numeric integration, and accuracy tolerance eps, and the numeric behavior for U and V and dV for very small t.
(The working precision Digits is set to 15 inside H. That's just to allow the method _d01ajc to be used. It's still the case that V and dV will raise working precision for their own computations, at each numeric invocation of the integrand. Also, I'm using unapply and a forced choice of numeric integration method as a tricky way to prevent evalf(Int(...)) from wasting a lot of time poking at the integrand to find singularities. You don't need to completely understand all these details. I'm just explaining that I had reasons for this complicated set-up.)
H:=proc(lambda,eps)
local res,t;
Digits:=15;
res:=evalf(Int(unapply(U(t,lambda)*dV(t),t),
1e-3..1,
epsilon=eps, method=_d01ajc) - R);
if type(res,numeric) then
res;
else
Float(undefined);
end if;
end proc:
Let's try calling H for a specific R. Note that it's not fast.
R:=2.93;
R := 2.93
H(0.1,1e-3);
-2.93
H(0.2,1e-5);
0.97247264305333
Let's plot H for a fixed integration tolerance. This takes quite a while.
[edited] I originally plotted t->H(t,1e-3) here. But your followup Comment showed a misunderstanding of the purpose of the first parameter of that operator. The t in t->H(t,1e-3) is just a dummy name and DOES NOT relate to the t parameter of V and dV, the first parameter of U, or the integration variable. What matters is that this dummy is passed as the first argument to H, which in turn passes it as the second argument to U, where it is used for lambda. Since there was confusion earlier, I'll use the dummy name LAMBDA now.
plot(LAMBDA->H(LAMBDA,1e-3), 0.13 .. 0.17 , adaptive=false, numpoints=15);
Now let's try and find the value of that root (intercept).
[edited] Here too I was earlier using the dummy name t for the first argument of H. It's just a dummy name and does not relate to the t parameter of V and dV, the first parameter of U, or the variable of integration. So I'll change it below to LAMBDA to lessen confusion.
This is not fast. I'm minimizing the absolute value because numeric accuracy issues make this problematic for numeric root-finder fsolve.
Osol:=Optimization:-Minimize('abs(H(LAMBDA,1e-3))', LAMBDA=0.15..0.17);
Osol := [1.43561000000000 * 10^(-8), [LAMBDA = 0.157846355888300]]
We can pick off the numeric value of lambda (which "solved" the integration).
rhs(Osol[2,1]);
0.157846355888300
If we call H using this value then we'll get a residual (similar to the first entry in Osol, but without absolute value being taken).
H(rhs(Osol[2,1]), 1e-3);
-1.435610 * 10^(-8)
Now you have the tools to try and obtain the root more accurately. To do so you may experiment with:
tighter (smaller) eps the tolerance of numeric integration.
A lower end-point of the integration closer to 0.0
Even higher Digits forced within the procedures V and dV.
That will require patience, since the computations take a while, and since the numeric integration can fail (taking forever) when those three aspects are not working properly together.
If I edit the body of procedure H and change the integration range to instead be 1e-4 .. 1.0, then I get the following result (specifying 1e-5 for eps in the call to H).
Osol:=Optimization:-Minimize('abs(H(LAMBDA,1e-5))', LAMBDA=0.15..0.17);
Osol := [1.31500690000000 * 10^(-7), [LAMBDA = 0.157842787382700]]
So it's plausible that this approximate root lambda is accurate to perhaps 4 or 5 decimal places.
Good luck.
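For readers without Maple, the core of the numeric pipeline above (forward map, inverse, derivative of the inverse) can be sketched in stdlib Python. The names T, V, dV mirror the Maple procedures, but bisection and finite differences stand in for Maple's symbolic solve and diff, so this is an illustration of the idea, not the answer's actual method:

```python
import math

def T(y):
    """The forward map from the question."""
    return math.sqrt(y) / (math.sqrt(y) + math.sqrt(1 - y)) ** 2

def V(t):
    """Invert T on [0, 1] by bisection (T is increasing there)."""
    lo, hi = 0.0, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if T(mid) < t:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def dV(t, h=1e-6):
    """Central finite-difference approximation to V'(t)."""
    return (V(t + h) - V(t - h)) / (2 * h)

# Sanity checks: V inverts T, and V is increasing (so dV > 0).
assert abs(V(T(0.3)) - 0.3) < 1e-9
assert dV(0.25) > 0
```

With V and dV available as plain numeric functions, the integral of U(t, lambda)*dV(t) could then be approximated by any standard quadrature rule, mirroring the evalf(Int(...)) step.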
I have bumped into this problem several times with the type declarations Mathematica understands for function arguments.
It seems Mathematica understands the following type declarations:
_Integer,
_List,
_?MatrixQ,
_?VectorQ
However, _Real and _Complex declarations, for instance, sometimes cause the function not to compute. Any idea why?
What's the general rule here?
When you do something like f[x_]:=Sin[x], what you are doing is defining a pattern replacement rule. If you instead say f[x_smth]:=5 (if you try both, do Clear[f] before the second example), you are really saying "wherever you see f[x], check if the head of x is smth and, if it is, replace by 5". Try, for instance,
Clear[f]
f[x_smth]:=5
f[5]
f[smth[5]]
So, to answer your question, the rule is that in f[x_hd]:=1;, hd can be anything and is matched to the head of x.
One can also have more complicated definitions, such as f[x_] := Sin[x] /; x > 12, which will match if x>12 (of course this can be made arbitrarily complicated).
Edit: I forgot about the Real part. You can certainly define Clear[f];f[x_Real]=Sin[x] and it works for eg f[12.]. But you have to keep in mind that, while Head[12.] is Real, Head[12] is Integer, so that your definition won't match.
Just a quick note since no one else has mentioned it. You can pattern match for multiple Heads - and this is quicker than using the conditional matching of ? or /;.
f[x:(_Integer|_Real)] := True (* function definition goes here *)
For simple functions acting on Real or Integer arguments, it runs in about 75% of the time as the similar definition
g[x_] /; Element[x, Reals] := True (* function definition goes here *)
(which as WReach pointed out, runs in 75% of the time
as g[x_?(Element[#, Reals]&)] := True).
The advantage of the latter form is that it works with Symbolic constants such as Pi - although if you want a purely numeric function, this can be fixed in the former form with the use of N.
The most likely problem is the input you're using to test the functions. For instance,
f[x_Complex]:= Conjugate[x]
f[x + I y]
f[3 + I 4]
returns
f[x + I y]
3 - I 4
The reason the second one works while the first one doesn't is revealed when looking at their FullForms
x + I y // FullForm == Plus[x, Times[ Complex[0,1], y]]
3 + I 4 // FullForm == Complex[3,4]
Internally, Mathematica transforms 3 + I 4 into a Complex object because each of the terms is numeric, but x + I y does not get the same treatment as x and y are Symbols. Similarly, if we define
g[x_Real] := -x
and using them
g[ 5 ] == g[ 5 ]
g[ 5. ] == -5.
The key here is that 5 is an Integer which is not recognized as a subset of Real, but by adding the decimal point it becomes Real.
As acl pointed out, the pattern _Something means match to anything with Head === Something, and both the _Real and _Complex cases are very restrictive in what is given those Heads.
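A loose cross-language analogy (Python's functools.singledispatch; my comparison, not anything Mathematica-specific): dispatch happens on the argument's concrete type, and an integer is not treated as a subset of float, just as Integer is not a subset of Real:

```python
from functools import singledispatch

@singledispatch
def f(x):
    return None   # no matching rule: loosely analogous to f[expr] staying unevaluated

@f.register
def _(x: float):  # loosely analogous to f[x_Real] := -x
    return -x

assert f(5) is None     # an int does not match the float rule...
assert f(5.0) == -5.0   # ...but adding the decimal point makes it match
```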
This discussion came up in a previous question and I'm interested in knowing the difference between the two. Illustration with an example would be nice.
Basic Example
Here is an example from Leonid Shifrin's book Mathematica programming: an advanced introduction
It is an excellent resource for this kind of question. See: (1) (2)
ClearAll[a, b]
a = RandomInteger[{1, 10}];
b := RandomInteger[{1, 10}]
Table[a, {5}]
{4, 4, 4, 4, 4}
Table[b, {5}]
{10, 5, 2, 1, 3}
Complicated Example
The example above may give the impression that once a definition for a symbol is created using Set, its value is fixed, and does not change. This is not so.
f = ... assigns to f an expression as it evaluates at the time of assignment. If symbols remain in that evaluated expression, and later their values change, so does the apparent value of f.
ClearAll[f, x]
f = 2 x;
f
2 x
x = 7;
f
14
x = 3;
f
6
It is useful to keep in mind how the rules are stored internally. For symbols assigned a value as symbol = expression, the rules are stored in OwnValues. Usually (but not always), OwnValues contains just one rule. In this particular case,
In[84]:= OwnValues[f]
Out[84]= {HoldPattern[f] :> 2 x}
The important part for us now is the r.h.s., which contains x as a symbol. What really matters for evaluation is this form - the way the rules are stored internally. As long as x did not have a value at the moment of assignment, both Set and SetDelayed produce (create) the same rule above in the global rule base, and that is all that matters. They are, therefore, equivalent in this context.
The end result is a symbol f that has function-like behavior, since its computed value depends on the current value of x. This is not a true function, however, since it has no parameters and only reflects changes to the symbol x. Generally, the use of such constructs should be discouraged, since implicit dependencies on global symbols (variables) are just as bad in Mathematica as in other languages - they make the code harder to understand and bugs subtler and easier to overlook. Somewhat related discussion can be found here.
Set used for functions
Set can be used for functions, and sometimes it needs to be. Let me give you an example. Here Mathematica symbolically solves the Sum, and then assigns that to aF(x), which is then used for the plot.
ClearAll[aF, x]
aF[x_] = Sum[x^n Fibonacci[n], {n, 1, \[Infinity]}];
DiscretePlot[aF[x], {x, 1, 50}]
If, on the other hand, you try to use SetDelayed, then each value to be plotted is passed to the Sum function. Not only is this much slower, but, at least on Mathematica 7, it fails entirely.
ClearAll[aF, x]
aF[x_] := Sum[x^n Fibonacci[n], {n, 1, \[Infinity]}];
DiscretePlot[aF[x], {x, 1, 50}]
If one wants to make sure that possible global values for formal parameters (x here) do not interfere and are ignored during the process of defining a new function, an alternative to Clear is to wrap Block around the definition:
ClearAll[aF, x];
x = 1;
Block[{x}, aF[x_] = Sum[x^n Fibonacci[n], {n, 1, \[Infinity]}]];
A look at the function's definition confirms that we get what we wanted:
?aF
Global`aF
aF[x_]=-(x/(-1+x+x^2))
In[1]:= Attributes[Set]
Out[1]= {HoldFirst, Protected, SequenceHold}
In[2]:= Attributes[SetDelayed]
Out[2]= {HoldAll, Protected, SequenceHold}
As you can see by their attributes, both functions hold their first argument (the symbol to which you are assigning), but they differ in that SetDelayed also holds its second argument, while Set does not. This means that Set will evaluate the expression to the right of = at the time the assignment is made. SetDelayed does not evaluate the expression to the right of the := until the variable is actually used.
What's happening is more clear if the right hand side of the assignment has a side effect (e.g. Print[]):
In[3]:= x = (Print["right hand side of Set"]; 3)
x
x
x
During evaluation of In[3]:= right hand side of Set
Out[3]= 3
Out[4]= 3
Out[5]= 3
Out[6]= 3
In[7]:= x := (Print["right hand side of SetDelayed"]; 3)
x
x
x
During evaluation of In[7]:= right hand side of SetDelayed
Out[8]= 3
During evaluation of In[7]:= right hand side of SetDelayed
Out[9]= 3
During evaluation of In[7]:= right hand side of SetDelayed
Out[10]= 3
Basically, := is for defining functions and = is for setting a value.
That is, := is evaluated each time it is read, while = is evaluated once, when it is set.
think about:
x = 2
y = x
z := x
x = 4
Now z evaluates to 4, while y is still 2.
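A rough analogy in Python (my comparison; Python has no direct equivalent of SetDelayed, so a zero-argument lambda stands in for delayed evaluation):

```python
x = 2
y = x            # like Set (=): y captures the current value of x
z = lambda: x    # like SetDelayed (:=): z re-reads x each time it is evaluated
x = 4
assert y == 2    # y kept the value from assignment time
assert z() == 4  # z sees the current value of x
```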
Reading this question got me thinking: For a given function f, how can we know that a loop of this form:
while (x > 2)
x = f(x)
will stop for any value x? Is there some simple criterion?
(The fact that f(x) < x for x > 2 doesn't seem to help, since the sequence may converge.)
Specifically, can we prove this for sqrt and for log?
For these functions, a proof that ceil(f(x))<x for x > 2 would suffice. You could do one iteration -- to arrive at an integer number, and then proceed by simple induction.
For the general case, probably the best idea is to use well-founded induction to prove this property. However, as Moron pointed out in the comments, this could be impossible in the general case and the right ordering is, in many cases, quite hard to find.
Edit, in reply to Amnon's comment:
If you wanted to use well-founded induction, you would have to define another strict order, that would be well-founded. In case of the functions you mentioned this is not hard: you can take x << y if and only if ceil(x) < ceil(y), where << is a symbol for this new order. This order is of course well-founded on numbers greater then 2, and both sqrt and log are decreasing with respect to it -- so you can apply well-founded induction.
Of course, in general case such an order is much more difficult to find. This is also related, in some way, to total correctness assertions in Hoare logic, where you need to guarantee similar obligations on each loop construct.
There's a general theorem for when the sequence of iterations will converge. (A convergent sequence may not stop in a finite number of steps, but it gets ever closer to a target; you can get as close to the target as you like by going far enough out in the sequence.)
The sequence x, f(x), f(f(x)), ... will converge if f is a contraction mapping. That is, there exists a positive constant k < 1 such that for all x and y, |f(x) - f(y)| <= k |x-y|.
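A minimal illustration of a contraction mapping (my own example, not from the answer): f(x) = x/2 + 1 shrinks distances by a factor of k = 1/2, so the iterates converge to its fixed point x = 2:

```python
# |f(x) - f(y)| = |x - y| / 2 for this f, so it is a contraction with
# k = 1/2, and its unique fixed point is x = 2 (since 2/2 + 1 = 2).
def f(x):
    return x / 2 + 1

x = 100.0
for _ in range(60):
    x = f(x)
assert abs(x - 2.0) < 1e-9  # the iterates have converged to the fixed point
```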
(The fact that f(x) < x for x > 2 doesn't seem to help since the sequence may converge.)
If we're talking about floats here, that's not true. If f(x) is strictly less than x for all x > n, the iteration will reach n at some point (because there are only finitely many floating-point values between any two numbers).
Of course this means you need to prove that f(x) is actually less than x using floating point arithmetic (i.e. proving it is less than x mathematically does not suffice, because then f(x) = x may still be true with floats when the difference is not enough).
There is no general algorithm to determine whether a function f and a variable x will end or not in that loop. The Halting problem is reducible to that problem.
For sqrt and log, we can safely conclude termination because we happen to know the mathematical properties of those functions: iterating sqrt approaches 1, and iterating log eventually goes negative. So the condition x > 2 has to become false at some point.
Hope that helps.
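The claim about sqrt and log is easy to check empirically; a small Python sketch (steps_to_terminate is a hypothetical helper written for this illustration):

```python
import math

def steps_to_terminate(f, x, limit=1000):
    """Iterate x = f(x) until the loop guard x > 2 fails; return the step count."""
    n = 0
    while x > 2:
        if n >= limit:
            return None  # did not terminate within the budget
        x = f(x)
        n += 1
    return n

# Both functions drive even a huge starting value below 2 in a handful of steps.
assert steps_to_terminate(math.sqrt, 1e100) < 15
assert steps_to_terminate(math.log, 1e100) < 15
```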
In the general case, all that can be said is that the loop will terminate when it encounters x_i <= 2. That doesn't mean that the sequence converges, nor even that it stays below 2 afterwards. It only means that the sequence contains a value that is not greater than 2.
That said, any sequence containing a subsequence that converges to a value strictly less than two will (eventually) halt. That is the case for x_{i+1} = sqrt(x_i), since it converges to 1. The sequence y_{i+1} = log(y_i) will contain a value less than 2 before log becomes undefined over the reals (it is well defined on the extended complex plane C*, but I don't think it will, in general, converge there, except at any stable points that may exist, i.e. where z = log(z)). Ultimately, this means you need to perform some upfront analysis on the sequence to better understand its behavior.
The standard test for convergence of a sequence x_i to a point z is: given ε > 0, there is an n such that for all i > n, |x_i - z| < ε.
As an aside, consider the Mandelbrot set M. The test for whether a particular point c in C is an element of M is whether the sequence z_{i+1} = z_i^2 + c is unbounded, which occurs whenever some |z_i| > 2. Some elements of M give convergent sequences (such as 0), but many do not (such as -1).
Sure. For all positive numbers x, the following inequality holds:
log(x) <= x - 1
(this is a pretty basic result from real analysis; it suffices to observe that the second derivative of log is always negative for all positive x, so the function is concave down, and that x-1 is tangent to the function at x = 1). From this it follows essentially immediately that your while loop must terminate within the first ceil(x) - 2 steps -- though in actuality it terminates much, much faster than that.
A similar argument will establish your result for f(x) = sqrt(x); specifically, you can use the fact that:
sqrt(x) <= x/(2 sqrt(2)) + 1/sqrt(2)
for all positive x.
If you're asking whether this result holds for actual programs, instead of mathematically, the answer is a little bit more nuanced, but not much. Basically, many languages don't actually have hard accuracy requirements for the log function, so if your particular language implementation had an absolutely terrible math library this property might fail to hold. That said, it would need to be a really, really terrible library; this property will hold for any reasonable implementation of log.
I suggest reading this Wikipedia entry, which provides useful pointers. Without additional knowledge about f, nothing more can be said.