ScoreAttained condition throws ArithmeticException - OptaPlanner - exception

I am using a HardSoftBigDecimalScore. In the config file I set scoreAttained to 0hard/0soft.
I get this error:
Exception in thread "main" java.lang.ArithmeticException: Non-terminating decimal expansion; no exact representable decimal result.
at java.math.BigDecimal.divide(Unknown Source)
at org.optaplanner.core.impl.score.buildin.hardsoftbigdecimal.HardSoftBigDecimalScoreDefinition.calculateTimeGradient(HardSoftBigDecimalScoreDefinition.java:96)
at org.optaplanner.core.impl.score.buildin.hardsoftbigdecimal.HardSoftBigDecimalScoreDefinition.calculateTimeGradient(HardSoftBigDecimalScoreDefinition.java:27)
at org.optaplanner.core.impl.termination.ScoreAttainedTermination.calculateSolverTimeGradient(ScoreAttainedTermination.java:50)
at org.optaplanner.core.impl.termination.OrCompositeTermination.calculateSolverTimeGradient(OrCompositeTermination.java:69)
at org.optaplanner.core.impl.termination.OrCompositeTermination.calculateSolverTimeGradient(OrCompositeTermination.java:69)
at org.optaplanner.core.impl.termination.PhaseToSolverTerminationBridge.calculatePhaseTimeGradient(PhaseToSolverTerminationBridge.java:80)
at org.optaplanner.core.impl.localsearch.DefaultLocalSearchSolverPhase.solve(DefaultLocalSearchSolverPhase.java:60)
at org.optaplanner.core.impl.solver.DefaultSolver.runSolverPhases(DefaultSolver.java:190)
at org.optaplanner.core.impl.solver.DefaultSolver.solve(DefaultSolver.java:155)
EDIT:
I solved the problem by extending HardSoftBigDecimalScoreDefinition and overriding its calculateTimeGradient(..) method. There, when divide is called on the BigDecimal, I round the quotient up instead of requiring an exact result.
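For anyone who hits the same thing, here is a rough sketch of that override. The class and method names are taken from the stack trace above, but the exact signature, the package locations and the hard/soft weighting are assumptions you should check against your OptaPlanner version, so treat it as an outline rather than a drop-in fix:

import java.math.BigDecimal;
import java.math.RoundingMode;

import org.optaplanner.core.api.score.buildin.hardsoftbigdecimal.HardSoftBigDecimalScore;
import org.optaplanner.core.impl.score.buildin.hardsoftbigdecimal.HardSoftBigDecimalScoreDefinition;

// Sketch only: overrides calculateTimeGradient(..) so the BigDecimal division
// is rounded instead of failing on a non-terminating decimal expansion.
public class RoundingHardSoftBigDecimalScoreDefinition
        extends HardSoftBigDecimalScoreDefinition {

    private static final int SCALE = 10;           // digits kept after the decimal point

    // Assumed weighting between the hard and soft levels; the real class derives
    // this from its hardScoreTimeGradientWeight configuration.
    private static final double HARD_WEIGHT = 0.5;

    @Override
    public double calculateTimeGradient(HardSoftBigDecimalScore startScore,
            HardSoftBigDecimalScore endScore, HardSoftBigDecimalScore score) {
        double hardPart = levelGradient(startScore.getHardScore(),
                endScore.getHardScore(), score.getHardScore());
        double softPart = levelGradient(startScore.getSoftScore(),
                endScore.getSoftScore(), score.getSoftScore());
        return hardPart * HARD_WEIGHT + softPart * (1.0 - HARD_WEIGHT);
    }

    private double levelGradient(BigDecimal start, BigDecimal end, BigDecimal current) {
        if (current.compareTo(end) >= 0) {
            return 1.0;
        }
        if (current.compareTo(start) <= 0) {
            return 0.0;
        }
        // The actual fix: give divide(..) a scale and a rounding mode so 1/3-style
        // quotients no longer throw ArithmeticException.
        return current.subtract(start)
                .divide(end.subtract(start), SCALE, RoundingMode.HALF_UP)
                .doubleValue();
    }
}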

You are more than likely dividing two numbers whose exact quotient has a non-terminating (infinite) decimal expansion.
For example: 1/3 = 0.33333...
From the docs:
When a MathContext object is supplied with a precision setting of 0
(for example, MathContext.UNLIMITED), arithmetic operations are exact,
as are the arithmetic methods which take no MathContext object. (This
is the only behavior that was supported in releases prior to 5.)
As a corollary of computing the exact result, the rounding mode
setting of a MathContext object with a precision setting of 0 is not
used and thus irrelevant. In the case of divide, the exact quotient
could have an infinitely long decimal expansion; for example, 1
divided by 3.
If the quotient has a nonterminating decimal expansion and the
operation is specified to return an exact result, an
ArithmeticException is thrown. Otherwise, the exact result of the
division is returned, as done for other operations.
So to fix this you need to supply a scale and a rounding mode for the division, like so:
x.divide(y, 2, RoundingMode.HALF_UP)
where 2 is the scale (the number of digits kept after the decimal point) and RoundingMode.HALF_UP is the rounding mode.
You can read more about rounding in the RoundingMode documentation.
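To see both behaviours side by side, here is a minimal, self-contained example (plain Java, nothing OptaPlanner-specific):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalDivideDemo {
    public static void main(String[] args) {
        BigDecimal x = BigDecimal.ONE;
        BigDecimal y = BigDecimal.valueOf(3);

        try {
            x.divide(y);                                  // asks for the exact quotient of 1/3
        } catch (ArithmeticException e) {
            System.out.println("exact divide failed: " + e.getMessage());
        }

        // With a scale and rounding mode the division always succeeds.
        System.out.println(x.divide(y, 2, RoundingMode.HALF_UP));   // 0.33
        System.out.println(x.divide(y, 10, RoundingMode.HALF_UP));  // 0.3333333333
    }
}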

Related

How to resolve mismatch in argument error [duplicate]

I'm having trouble with the precision of constant numerics in Fortran.
Do I need to write every 0.1 as 0.1d0 to get double precision? I know the compiler has a flag such as -fdefault-real-8 in gfortran that solves this kind of problem. Would that be a portable and reliable way to do it? And how could I check whether the flag actually works for my code?
I was using f2py to call Fortran code from my Python code, and it doesn't report an error even if I pass a flag it doesn't recognize, and that's what worries me.
In a Fortran program 1.0 is always a default real literal constant and 1.0d0 always a double precision literal constant.
However, "double precision" means different things in different contexts.
In Fortran contexts "double precision" refers to a particular kind of real which has greater precision than the default real kind. In more general communication "double precision" is often taken to mean a particular real kind of 64 bits which matches the IEEE floating point specification.
gfortran's compiler flag -fdefault-real-8 means that the default real takes 8 bytes and is likely to be that which the compiler would use to represent IEEE double precision.
So, 1.0 is a default real literal constant, not a double precision literal constant, but a default real may happen to be the same as an IEEE double precision.
Questions like this one reflect the implications of precision in literal constants. To anyone who asks my advice about flags like -fdefault-real-8, I would say to avoid them.
Adding to @francescalus's response above: in my opinion, since the definition of double precision can change across different platforms and compilers, it is good practice to explicitly declare the desired kind of the constant using the standard Fortran convention, as in the following example:
program test
use, intrinsic :: iso_fortran_env, only: RK => real64
implicit none
write(*,"(*(g20.15))") "real64: ", 2._RK / 3._RK
write(*,"(*(g20.15))") "double precision: ", 2.d0 / 3.d0
write(*,"(*(g20.15))") "single precision: ", 2.e0 / 3.e0
end program test
Compiling this code with gfortran gives:
$gfortran -std=gnu *.f95 -o main
$main
real64: .666666666666667
double precision: .666666666666667
single precision: .666666686534882
Here, the results in the first two lines (explicit request for 64-bit real kind, and double precision kind) are the same. However, in general, this may not be the case and the double precision result could depend on the compiler flags or the hardware, whereas the real64 kind will always conform to 64-bit real kind computation, regardless of the default real kind.
Now consider another scenario where one has declared a real variable to be of kind 64-bit, however, the numerical computation is done in 32-bit precision,
program test
use, intrinsic :: iso_fortran_env, only: RK => real64
implicit none
real(RK) :: real_64
real_64 = 2.e0 / 3.e0   ! right-hand side is evaluated in default (32-bit) precision, then stored
write(*,"(*(g30.15))") "32-bit accuracy is returned: ", real_64
real_64 = 2._RK / 3._RK ! right-hand side is evaluated in 64-bit precision
write(*,"(*(g30.15))") "64-bit accuracy is returned: ", real_64
end program test
which gives the following output,
$gfortran -std=gnu *.f95 -o main
$main
32-bit accuracy is returned: 0.666666686534882
64-bit accuracy is returned: 0.666666666666667
Even though the variable is declared as real64, the result on the first line is still wrong, in the sense that it does not conform to the double precision kind (the 64 bits you desire). The reason is that the computation is first done in the precision requested by the literal constants (default 32-bit reals) and only then stored in the 64-bit variable real_64, hence the difference from the more accurate answer on the second line of the output.
So the bottom-line message is: It is always a good practice to explicitly declare the kind of the literal constants in Fortran using the "underscore" convention.
The answer to your question is: yes, you do need to indicate that the constant is double precision. Using 0.1 is a common example of this problem, as the 4-byte and 8-byte representations are different. Other constants (e.g. 0.5), where the extra precision bits are all zero, don't have this problem.
This was introduced into Fortran at F90 and has caused problems for the conversion and reuse of many legacy FORTRAN codes. Prior to F90, the result of double precision a = 0.1 could have used either a real 0.1 or a double 0.1 constant, although all compilers I used provided a double precision value. This can be a common source of inconsistent results when testing legacy codes against published results. Examples are frequently reported, e.g. PI=3.141592654 was in code on a forum this week.
However, using 0.1 as a subroutine argument has always caused problems, as it is passed as a real constant.
So, given the history of how real constants have been handled, you do need to explicitly specify a double precision constant when it is required. It is not a user-friendly approach.
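The 0.1-versus-0.5 point is a property of binary floating point rather than of Fortran specifically, so it can be illustrated in any IEEE-754 language; here is a small Java demonstration of the same effect (Java used purely for illustration):

public class LiteralWidthDemo {
    public static void main(String[] args) {
        // 0.1 stored as a 4-byte float and then widened to 8 bytes:
        System.out.println((double) 0.1f);   // 0.10000000149011612
        // 0.1 stored directly as an 8-byte double:
        System.out.println(0.1);             // 0.1
        // 0.5 is exactly representable at both widths, so nothing is lost:
        System.out.println(0.5f == 0.5);     // true
        System.out.println(0.1f == 0.1);     // false
    }
}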

What term is rounded in the CUDA __fsqrt round intrinsics?

I need the square root of a float, in CUDA device code. Hard to say whether speed matters more than accuracy in my use case.
The __sqrtf CUDA intrinsic is the natural choice, but then I saw the various __fsqrt CUDA intrinsics with rounding.
What is rounded in these intrinsics: the argument x or the return value? Or do I misunderstand the meaning of rounding here?
My testing suggests neither is rounded! I wrote a kernel that evaluates:
__fsqrt_rn(42 * 42 + 0.1)
and the return value is always 42.0011902, which is equal to the square root of 42 * 42 + 0.1. So what is being rounded?
It's a rounding mode for the result. Input arguments are not "rounded" before they are injected into the arithmetic flow.
The "rn" rounding "direction" is "round-to-nearest".
It means that at whatever precision the interim result is being calculated to, that result will be rounded to the nearest available representation. In the case of a float final result, it will be rounded to the nearest available float representation.
Let's revisit your example. When I put your problem into the Windows 10 calculator, the result I get is 42.001190459319126303634970957554. The way we get from that "correct result at arbitrary precision" to a 32-bit floating-point "rn" result is to take the two 32-bit floating-point numbers, the closest one that is numerically higher and the closest one that is numerically lower, and of those two, select the one that is closer. That is apparently 42.0011902.
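The same "round the result to the nearest representable value" behaviour can be reproduced on the host side. The following is a small Java sketch (not CUDA) that computes the square root at double precision and then shows that the float result is simply the nearest representable float:

public class RoundToNearestDemo {
    public static void main(String[] args) {
        double exact = Math.sqrt(42.0 * 42.0 + 0.1);  // ~42.0011904593... at double precision
        float nearest = (float) exact;                // the cast rounds to the nearest float
        System.out.println("higher-precision value: " + exact);
        System.out.println("nearest float         : " + nearest);
        // The neighbouring representable floats, to show that "nearest" really is nearest:
        System.out.println("float just below      : " + Math.nextDown(nearest));
        System.out.println("float just above      : " + Math.nextUp(nearest));
    }
}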

What is the implementation of a comparison instruction in a MIPS Processor?

So I am trying to replicate a basic MIPS processor, and I was wondering how you would implement comparison instructions. They just don't seem to fit in with the rest of the design. I have been reading, and apparently the set-less-than operation in the ALU can be used as a comparator, but couldn't you just use a bit-sliced comparator in the ALU? What is the implementation of a comparison instruction in a MIPS processor? (Image: https://i.stack.imgur.com/QtX6D.png)
One possible way is to use subtraction. The result of the subtraction is compared to zero (basically by ORing all bits of the result together), and the sign bit is extracted.
If the two values are equal, the result is zero and the zero flag is set; if they are unequal, the zero flag is cleared. If the first value is less than the second, the result is negative; if it is greater than the second, the result is non-zero and non-negative.
These comparisons can also be combined: e.g. "less than or equal" is simply a negative result or the zero flag being set. With this it should be possible to implement all comparisons that the MIPS ISA supports.
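As a concrete (if very simplified) illustration of that idea, here is a small Java sketch that models the two flags; the names are made up for the example and signed overflow is ignored for simplicity:

public class AluCompareSketch {
    // Flags produced by the last subtraction, as a hardware ALU would expose them.
    static boolean zeroFlag;     // all result bits are 0 (an OR/NOR over the result)
    static boolean negativeFlag; // the sign bit (bit 31) of the result

    static void subtract(int a, int b) {
        int result = a - b;            // one subtraction drives every comparison
        zeroFlag = (result == 0);
        negativeFlag = (result < 0);   // ignores overflow for simplicity
    }

    static boolean equal(int a, int b)       { subtract(a, b); return zeroFlag; }
    static boolean lessThan(int a, int b)    { subtract(a, b); return negativeFlag; }  // like slt
    static boolean lessOrEqual(int a, int b) { subtract(a, b); return negativeFlag || zeroFlag; }

    public static void main(String[] args) {
        System.out.println(lessThan(3, 7));     // true
        System.out.println(lessOrEqual(7, 7));  // true
        System.out.println(equal(3, 7));        // false
    }
}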

Why does division by zero in the IEEE-754 standard result in an infinite value?

I'm just curious: why, in IEEE-754, does any non-zero float number divided by zero result in an infinite value? It's nonsense from the mathematical perspective, so I think the correct result for this operation should be NaN.
The function f(x) = 1/x is not defined at x=0 when x is a real number. For example, the function sqrt is not defined for any negative number, and sqrt(-1.0f) in IEEE-754 produces a NaN value. But 1.0f/0 is Inf.
So for some reason division by zero is not treated the same way in IEEE-754. There must be a reason for this, maybe some optimization or compatibility reasons.
So what's the point?
It's nonsense from the mathematical perspective.
Yes. No. Sort of.
The thing is: Floating-point numbers are approximations. You want to use a wide range of exponents and a limited number of digits and get results which are not completely wrong. :)
The idea behind IEEE-754 is that every operation could trigger "traps" which indicate possible problems. They are
Illegal (senseless operation like sqrt of negative number)
Overflow (too big)
Underflow (too small)
Division by zero (The thing you do not like)
Inexact (This operation may give you wrong results because you are losing precision)
Now many people like scientists and engineers do not want to be bothered with writing trap routines. So Kahan, the inventor of IEEE-754, decided that every operation should also return a sensible default value if no trap routines exist.
They are
NaN for illegal values
signed infinities for Overflow
signed zeroes for Underflow
NaN for indeterminate results (0/0) and infinities for x/0 with x != 0
normal operation result for Inexact
The thing is that in 99% of all cases zeroes are caused by underflow, and therefore 99% of the time Infinity is "correct" even if wrong from a mathematical perspective.
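Since Java's double arithmetic follows IEEE-754, those default results are easy to see directly:

public class Ieee754DefaultsDemo {
    public static void main(String[] args) {
        System.out.println(1.0 / 0.0);          // Infinity  (x/0 with x != 0)
        System.out.println(-1.0 / 0.0);         // -Infinity
        System.out.println(0.0 / 0.0);          // NaN       (indeterminate)
        System.out.println(Math.sqrt(-1.0));    // NaN       (illegal operation)
        System.out.println(1e300 * 1e300);      // Infinity  (overflow)
        System.out.println(1e-300 * 1e-300);    // 0.0       (underflow)
    }
}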
I'm not sure why you would believe this to be nonsense.
The simplistic definition of a / b, at least for non-zero b, is the unique number of bs that have to be subtracted from a before you get to zero.
Expanding that to the case where b can be zero, the number of zeroes that have to be subtracted from any non-zero number to get to zero is indeed infinite, because you'll never get there.
Another way to look at it is to talk in terms of limits. As a positive number n approaches zero, the expression 1 / n approaches "infinity". You'll notice I've quoted that word because I'm a firm believer in not propagating the delusion that infinity is actually a concrete number :-)
NaN is reserved for situations where the number cannot be represented (even approximately) by any other value (including the infinities); it is considered distinct from all those other values.
For example, 0 / 0 (using our simplistic definition above) can have any amount of bs subtracted from a to reach 0. Hence the result is indeterminate - it could be 1, 7, 42, 3.14159 or any other value.
Similarly things like the square root of a negative number, which has no value in the real plane used by IEEE754 (you have to go to the complex plane for that), cannot be represented.
In mathematics, division by zero is undefined because zero has no sign; therefore two results are equally possible, and mutually exclusive: negative infinity or positive infinity (but not both).
In (most) computing, 0.0 has a sign. Therefore we know which direction we are approaching from, and what sign the infinity should have. This is especially true when 0.0 represents a non-zero value too small to be expressed by the system, as is frequently the case.
The only time NaN would be appropriate is if the system knows with certainty that the denominator is truly, exactly zero. And it can't unless there is a special way to designate that, which would add overhead.
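A quick Java illustration of that signed-zero point:

public class SignedZeroDemo {
    public static void main(String[] args) {
        System.out.println(1.0 / 0.0);          // Infinity  (zero approached from the positive side)
        System.out.println(1.0 / -0.0);         // -Infinity (zero approached from the negative side)
        System.out.println(0.0 == -0.0);        // true: the two zeroes still compare equal...
        System.out.println(1e-300 * -1e-300);   // -0.0: ...and a negative underflow yields -0.0
    }
}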
NOTE:
I re-wrote this following a valuable comment from @Cubic.
I think the correct answer to this has to come from calculus and the notion of limits. Consider the limit of f(x)/g(x) as x->0 under the assumption that g(0) == 0. There are two broad cases that are interesting here:
If f(0) != 0, then the limit as x->0 is either plus or minus infinity, or it's undefined. If g(x) takes both signs in the neighborhood of x==0, then the limit is undefined (left and right limits don't agree). If g(x) has only one sign near 0, however, the limit will be defined and be either positive or negative infinity. More on this later.
If f(0) == 0 as well, then the limit can be anything, including positive infinity, negative infinity, a finite number, or undefined.
In the second case, generally speaking, you cannot say anything at all. Arguably, in the second case NaN is the only viable answer.
Now in the first case, why choose one particular sign when either is possible or it might be undefined? As a practical matter, it gives you more flexibility in cases where you do know something about the sign of the denominator, at relatively little cost in the cases where you don't. You may have a formula, for example, where you know analytically that g(x) >= 0 for all x, say, for example, g(x) = x*x. In that case the limit is defined and it's infinity with sign equal to the sign of f(0). You might want to take advantage of that as a convenience in your code. In other cases, where you don't know anything about the sign of g, you cannot generally take advantage of it, but the cost here is just that you need to trap for a few extra cases - positive and negative infinity - in addition to NaN if you want to fully error check your code. There is some price there, but it's not large compared to the flexibility gained in other cases.
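Written out as a limit, the g(x) = x*x example from the previous paragraph is just (restated in LaTeX, nothing new added):

\lim_{x \to 0} \frac{f(x)}{x^2} =
\begin{cases}
  +\infty & \text{if } f(0) > 0, \\
  -\infty & \text{if } f(0) < 0.
\end{cases}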
Why worry about general functions when the question was about "simple division"? One common reason is that if you're computing your numerator and denominator through other arithmetic operations, you accumulate round-off errors. The presence of those errors can be abstracted into the general formula format shown above. For example f(x) = x + e, where x is the analytically correct, exact answer, e represents the error from round-off, and f(x) is the floating point number that you actually have on the machine at execution.

flash: Math.pow calculates wrong answers for larger numbers

Why is this so?
When I try it out:
Math.pow(2,58) = 288230376151711740
while in fact it is 288230376151711744
or
Math.pow(2,57) = 144115188075855870
while it really equals 144115188075855872.
It just returns that number without any warning.
I would understand if it stopped going above some number because a maximum value was reached. However, this seems to calculate the first n digits correctly and then go wrong only at the very end of the digits.
You've run out of Number type display precision. The trick is that with powers of 2 the actual value stored in the variable is precise; it is only when you trace it that the engine truncates the displayed value after the first 16 or so significant digits while converting it to decimal, so the trailing digits are lost in the printed output. This is done to prevent the white noise generated by imprecise floating-point operations from being displayed. You can work around this issue by moving to big integers or arbitrary-precision floating-point numbers, which store more bits than a double-precision Number.
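The "value is exact, only the printed form is rounded" point can be checked in any language with IEEE-754 doubles; here is a small Java illustration (not ActionScript) using the same number:

import java.math.BigDecimal;

public class PowDisplayDemo {
    public static void main(String[] args) {
        double p = Math.pow(2, 58);
        System.out.println(p);                  // 2.8823037615171174E17 (rounded display)
        System.out.println(new BigDecimal(p));  // 288230376151711744    (the exact stored value)
        System.out.println((long) p);           // 288230376151711744
    }
}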