Wrong calculation by Borland C++ Builder

I'm using Borland BCB6 C++ Builder. My code contains the following computation:
float result= (152*pow(cos(80),2)) + (70*pow(sin(80),2));
In debug mode, I find that this expression evaluates to about 70.992, whereas Wolfram Alpha tells me that the value should be about 72.4726.
What could be the reason for this discrepancy?

The trigonometric functions in math.h take angles in radians!
So instead of 80.0 [deg] use 80.0*M_PI/180.0 [rad]; that converts the angle from degrees to radians.
I also usually define deg and rad conversion constants and use them (multiply by deg to go from degrees to radians, and by rad to go back):
const double deg=M_PI/180.0;
const double rad=180.0/M_PI;
double result= (152.0*pow(cos(80.0*deg),2.0)) + (70.0*pow(sin(80.0*deg),2.0));

How to resolve mismatch in argument error [duplicate]

I'm having trouble with the precision of constant numerics in Fortran.
Do I need to write every 0.1 as 0.1d0 to get double precision? I know the compiler has a flag such as -fdefault-real-8 in gfortran that solves this kind of problem. Would that be a portable and reliable way to do it? And how could I check whether the flag actually works for my code?
I am using F2py to call Fortran code from my Python code, and it doesn't report an error even when I pass an unrecognized flag, which is what worries me.
In a Fortran program 1.0 is always a default real literal constant and 1.0d0 always a double precision literal constant.
However, "double precision" means different things in different contexts.
In Fortran contexts "double precision" refers to a particular kind of real which has greater precision than the default real kind. In more general communication "double precision" is often taken to mean a particular real kind of 64 bits which matches the IEEE floating point specification.
gfortran's compiler flag -fdefault-real-8 means that the default real takes 8 bytes and is likely to be that which the compiler would use to represent IEEE double precision.
So, 1.0 is a default real literal constant, not a double precision literal constant, but a default real may happen to be the same as an IEEE double precision.
Questions like this one reflect implications of precision in literal constants. To anyone who asked my advice about flags like -fdefault-real-8 I would say to avoid them.
Adding to francescalus's answer: since the meaning of double precision can change across platforms and compilers, it is in my opinion good practice to explicitly declare the desired kind of the constant using the standard Fortran suffix convention, as in the following example:
program test
use, intrinsic :: iso_fortran_env, only: RK => real64
implicit none
write(*,"(*(g20.15))") "real64: ", 2._RK / 3._RK
write(*,"(*(g20.15))") "double precision: ", 2.d0 / 3.d0
write(*,"(*(g20.15))") "single precision: ", 2.e0 / 3.e0
end program test
Compiling this code with gfortran gives:
$gfortran -std=gnu *.f95 -o main
$main
real64: .666666666666667
double precision: .666666666666667
single precision: .666666686534882
Here, the results in the first two lines (explicit request for 64-bit real kind, and double precision kind) are the same. However, in general, this may not be the case and the double precision result could depend on the compiler flags or the hardware, whereas the real64 kind will always conform to 64-bit real kind computation, regardless of the default real kind.
Now consider another scenario, where one has declared a real variable of 64-bit kind but the numerical computation is done in 32-bit precision:
program test
use, intrinsic :: iso_fortran_env, only: RK => real64
implicit none
real(RK) :: real_64
real_64 = 2.e0 / 3.e0
write(*,"(*(g30.15))") "32-bit accuracy is returned: ", real_64
real_64 = 2._RK / 3._RK
write(*,"(*(g30.15))") "64-bit accuracy is returned: ", real_64
end program test
which gives the following output,
$gfortran -std=gnu *.f95 -o main
$main
32-bit accuracy is returned: 0.666666686534882
64-bit accuracy is returned: 0.666666666666667
Even though the variable is declared as real64, the result on the first line is still wrong, in the sense that it does not have the 64-bit precision you desire. The reason is that the computation is first done in the precision of the literal constants (default 32-bit reals) and only then stored in the 64-bit variable real_64, hence the difference from the more accurate answer on the second line of the output.
So the bottom-line message is: It is always a good practice to explicitly declare the kind of the literal constants in Fortran using the "underscore" convention.
The answer to your question is: yes, you do need to indicate that the constant is double precision. 0.1 is a common example of this problem, as its 4-byte and 8-byte representations differ. Constants such as 0.5, whose extended-precision bytes are all zero, don't have this problem.
This behaviour was codified in Fortran at F90 and has caused problems for the conversion and reuse of many legacy FORTRAN codes. Prior to F90, the result of double precision a = 0.1 could have used either a real or a double precision constant for 0.1, although all compilers I used provided a double precision value. This is a common source of inconsistent results when testing legacy codes against published results; examples are frequently reported, e.g. PI=3.141592654 appeared in code on a forum this week.
However, using 0.1 as a subroutine argument has always caused problems, as it is passed as a default real constant.
So, given the history of how real constants have been handled, you do need to explicitly specify a double precision constant where one is required. It is not a user-friendly approach.
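A minimal sketch of the 0.1 case, reusing the real64 kind from the programs above (the values in the comments are what a typical IEEE-754 compiler such as gfortran prints; the exact digits may vary):

```fortran
program literal_kinds
use, intrinsic :: iso_fortran_env, only: RK => real64
implicit none
real(RK) :: a, b
a = 0.1      ! default real 0.1, widened to 64 bits: 0.100000001490116...
b = 0.1_RK   ! genuine 64-bit 0.1:                   0.100000000000000...
print *, a
print *, b
end program literal_kinds
```

The assignment does not repair the precision lost in the 32-bit literal; only the _RK suffix (or d0) gives a true double precision constant.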

In Matlab, there is a point after a variable; what does this expression mean?

I am looking into a octave/matlab code and find the following:
deltaT = 1; % sampling period for data
......
R = rcValues(2:2:end)/1000; % convert these also
C = rcValues(3:2:end)*1000; % convert kF to F
RCfact = exp(-deltaT./(R.*C));
What does the point (.) after deltaT and R mean in this mathematical expression?
Thanks
The dot is part of the operators ./ and .*, not of the variable names: these are the element-wise division and multiplication operators. If deltaT and R are scalars, the dot operators behave exactly like / and *. However, if they are matrices (or vectors, as the rcValues slices suggest), the operation is executed on each element.
The dot form exists for multiplication (.*), division (./ and .\), and exponentiation (.^).
For more info visit https://www.mathworks.com/matlabcentral/answers/506078-please-help-me-understand-the-use-of-dot-operator#accepted_answer_416043
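A quick Octave/Matlab session makes the difference concrete (the matrices are chosen purely for illustration):

```matlab
R = [1 2; 3 4];
C = [5 6; 7 8];

R .* C              % element-wise product: [ 5 12; 21 32]
R * C               % matrix product:       [19 22; 43 50]

deltaT = 1;
RCfact = exp(-deltaT ./ (R .* C))   % ./ divides element-wise, as in the question
```

With scalars, R .* C and R * C would give identical results; the dot only matters once arrays are involved.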

I want to convert this cos⁻¹ to Tcl

Can anyone tell me how I can convert this cos⁻¹ to Tcl? Tcl only has the normal cos, not this cos⁻¹, which is also called "the inverse cosine".
If you want the inverse cosine in degrees, you could use this:
expr {acos($x)*180/acos(-1)}
acos(-1) is pi.
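If you need this often, you can wrap it in a procedure (acosDeg is just an illustrative name):

```tcl
proc acosDeg {x} {
    expr {acos($x) * 180.0 / acos(-1)}
}
puts [acosDeg 0.5]    ;# approximately 60 degrees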
Tcl has always called the inverse cosine function acos; it's part of expressions:
% expr { acos(0.123) }
1.4474840516030245
Result is in radians, of course.

Is it possible to write (display) exponential equations in scilab?

I've been trying to display in my console an exponential equation like the following one:
y(t) = a*e^t + b*e^t + c*e^t
I would write it as a string; however, the coefficients a, b and c are numbers in a vector V = [a b c]. So I was trying to concatenate the numbers with the string "e^t", but I failed. I know Scilab displays polynomial equations, but I don't know if it is possible to display an exponential one. Can anyone help?
Usually this kind of thing is done with mprintf command, which places given numerical arguments into a string with formatting instructions.
V = [3 5 -7]
mprintf("y(t) = %f*e^t + %f*e^t + %f*e^t", V)
The output is
y(t) = 3.000000*e^t + 5.000000*e^t + -7.000000*e^t
which isn't ideal, and can be improved in some ways by tweaking the formatters, but is readable regardless.
Notice we don't have to list every entry V(1), V(2), ... individually; the vector V gets "unpacked" automatically.
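The output can be tidied up because mprintf follows C printf conventions: %g drops trailing zeros, and a + flag prints the sign of each number, which avoids the "+ -7" artifact. A sketch, not the only way:

```scilab
V = [3 5 -7];
mprintf("y(t) = %g*e^t %+g*e^t %+g*e^t\n", V)
// y(t) = 3*e^t +5*e^t -7*e^t
```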
If you wanted 2D output like what Scilab produces for polynomials: no, that kind of display is available for polynomials and rational functions only, not for general expressions.
There is also prettyprint but its output is LaTeX syntax, like $1+s+s^{2}-s^{123}$. It works for a few things: polynomials, rational functions, matrices... but again, Scilab is not meant for symbolic manipulations, and does not really support symbolic expressions.

Get right-hand side of equation

I am calling the function mnewton(0=expr, alpha, %pi/4) to get the root of a rather complex equation expr.
(%i1) mnewton(0=expr, alpha, %pi/4);
(%o1) [alpha=0.678193754078621]
I need to apply another function to this result (e.g. sin) and then want to plot it. Just linking the functions does not work:
(%i2) sin(mnewton(0=expr, alpha, %pi/4)[1]);
(%o2) sin(alpha=0.678193754078621)
This is because the expression alpha=0.678193754078621 is not a number. How do I convert alpha=0.678193754078621 to just 0.678193754078621?
I can't just copy the numerical value and add it manually as I want to plot this and my expr will have a different root for each y.
The function rhs(expr) does exactly that: rhs(alpha=0.678193754078621) returns just 0.678193754078621. Check the Maxima manual for more information.
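Applied to the session above (expr stands for your own equation):

```maxima
sol: mnewton(0=expr, alpha, %pi/4)[1];  /* alpha = 0.678193754078621 */
sin(rhs(sol));                          /* rhs strips "alpha =", so sin gets a plain number */
```

Since rhs works on any equation, the same pattern can be used inside the loop that produces the values you want to plot.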