Hi, after calling this code (Octave) I get an answer with 7 digits of precision, but I need only 6. It is worth mentioning that on a different data set the output is normal (6 digits):
output_precision(6);
Prev
output:
Prev =
0.1855318
0.2181108
0.1796457
I know this is a little late but I wanted to add an answer for anyone with the same question in the future.
According to the function reference for output_precision(), the argument passed to the function (in this case, 6) specifies the minimum number of significant figures; it only guarantees that future numeric output won't have fewer than that number of significant figures.
From what I've seen, if you use output_precision(new_val) before displaying an array (e.g., Prev in the question), then Octave will round the element with the fewest digits before the decimal point to new_val significant figures, and all other elements will then be rounded to have the same number of digits after the decimal point as that initial rounded result. If you output a single value instead of an array, the output is simply rounded to new_val significant figures. However, I don't know if this behavior is guaranteed.
Here's a short example of what I mean:
% array defined with values having 5 digits after decimal
F = [401.51670 313.70753 -88.55225 188.50067 280.21988 354.51821 54.51821 350];
output_precision(4)
F
output_precision(6)
F
Output:
F =
401.52 313.71 -88.55 188.50 280.22 354.52 54.52 350.00
F =
401.5167 313.7075 -88.5523 188.5007 280.2199 354.5182 54.5182 350.0000
It can be a little quirky if you try to round the values too much. When I used output_precision(3) and then displayed F, the numbers were actually rounded as if my system's default precision, 5, was still active. However, when I defined another array using elements with only 2 or 3 digits after the decimal point, it displayed as expected with output_precision(3).
Check out Octave Forge if you ever need docs for Octave features. It's not perfect, but it's something. Hope this was helpful.
If A = 01110011, B = 10010100, how would I add these?
I did this:
i.e.: 01110011 + 10010100 = 100000111
Though isn't it essentially 115 + (-108) = 7, whereas I'm getting -249?
Edit: I see that by removing the highest-order bit (the overflow) I get 7, which is what I'm looking for, but I'm not getting why you wouldn't have the extra bit.
Edit 2: OK, I figured it out. There was no overflow as I had assumed, because 7 is within [-128, 127] (8 bits). Instead, as Omar hinted at, I was supposed to drop the "extra" 1 from the addition.
Your calculation is correct and the result is correct.
You stated that the second number is -108, so both your numbers are interpreted as signed 8-bit values. Thus, you should also interpret your result as an 8-bit signed value; this is why the 9th bit must be dropped, and so the result is 7 (00000111).
On real hardware, like an 8-bit CPU for example, where all the registers are 8 bits wide, you are only able to store the lowest 8 bits of the result, which here is 7 (00000111).
In some cases, the 9th bit may also be put into a carry/overflow flag, so it's not completely "dropped".
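To complement this, here is a minimal Java sketch of the same arithmetic (my own illustration; the class name is made up). The cast to byte plays the role of the 8-bit register, discarding everything above the lowest 8 bits:
public class TwosComplementDemo {
    public static void main(String[] args) {
        int a = 0b01110011;          // 115
        int b = 0b10010100;          // 148 unsigned, i.e. -108 as a signed 8-bit value
        int sum = a + b;             // 263 = 100000111: nine bits wide
        byte result = (byte) sum;    // an 8-bit "register": keeps only the lowest 8 bits
        System.out.println(sum);     // 263
        System.out.println(result);  // 7
    }
}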
When the eigenvalues of matrices with integer entries (even 3x3 matrices) are calculated in Octave, sometimes it reports a floating point value like 9.1940e-17, whereas it should really be zero.
For example, when I am plotting some numbers modulo 1, a value -2e-17 is becoming 1 in the plot, but it should actually be 0.
Although I can approximate the answers to some number of decimal places later, it will still internally calculate up to 17 decimal places while computing the eigenvalues and waste calculation time. Is it possible to work with only up to something like 8 decimal places for all quantities from the beginning?
If that is not possible, is it possible to tell Octave (once and for all in a script) to approximate all quantities up to 8 decimal places when reporting?
To complement Daniel's answer, the additional precision doesn't cost any additional time. The computer works with the number as a whole, not with individual digits.
You might want to use
A(abs(A)<1e-8) = 0;
after computing your final result A to set the near-zero values to zero.
What you are asking for won't help; it makes things worse. Setting the precision to 8 decimal places (or significant digits) won't make numerical errors disappear, it will increase the numerical errors to that magnitude. Take a simple calculation:
x=1/3
y=x*6
Now you would end up with 1.99999998, a much larger error.
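To see the same effect outside Octave, here is a small Java sketch (my own illustration, not part of the original answer) that simulates storing only 8 decimal places at every step:
public class RoundingDemo {
    public static void main(String[] args) {
        double x = 1.0 / 3.0;                    // full double precision
        double x8 = Math.round(x * 1e8) / 1e8;   // keep only 8 decimals: 0.33333333
        System.out.println(6 * x);               // 2.0 -- the tiny binary error rounds away
        System.out.println(6 * x8);              // 1.99999998 -- the forced rounding does not
    }
}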
There are libraries doing the opposite, increasing the precision. In Octave that's called vpa.
We are working with large amount values. We display the formatted amount in the respective Spark TextInput, and we use the simple mx CurrencyFormatter for formatting the amount values. We don't have any problems up to 16 digits, but after crossing 16 digits the numbers are automatically rounded off. We are using the CurrencyFormatter with the following configuration:
<mx:CurrencyFormatter id="formateer" thousandsSeparatorTo="," decimalSeparatorTo="."
precision="2" currencySymbol="" rounding="none" />
My output:
We don't have any problem up to 16 digits:
original-->1234567890123456
Number(txtInput.text)-->1234567890123456
formatted-->1,234,567,890,123,456.00
Erroneous output:
original-->12345678901234567
Number(txtInput.text)-->12345678901234568
formatted-->12,345,678,901,234,568.00
Here the last digit 7 is rounded to 8.
Erroneous output:
original-->12345678901234567890
Number(txtInput.text)-->12345678901234567000
formatted-->12,345,678,901,234,567,000.00
I have debugged the code and stepped into the format() method of CurrencyFormatter. The problem actually occurs in the Number conversion, which puzzles me, since Number.MAX_VALUE is 1.79769313486231e+308.
I also found one more weird behavior of Number, described below:
var a:Number = 2.03;
var b:Number = 0.03;
var c:Number = a - b;
trace("c --> "+c);
Output : c --> 1.9999999999999998
This kind of output occurs only for certain numbers.
Please suggest how to solve this issue, or suggest a workaround.
Thanks in advance.
Vengatesh s
It's a common problem with big numbers in languages that use 64-bit floating point arithmetic (ActionScript and JavaScript are the same in this respect, to give an example).
It has nothing to do with the CurrencyFormatter: if you try trace(12345678901234566+1) you'll get 12345678901234568. That's because that number has more digits than the 64-bit storage space can represent exactly, and so it gets rounded off. I realise the explanation is quite simplistic; the topic is in fact quite complex.
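You can reproduce this in any language built on IEEE 754 doubles; for instance, this small Java sketch (names are mine) shows the same rounding:
public class DoubleLimitDemo {
    public static void main(String[] args) {
        // a double's significand holds 53 bits, so integers are exact only up to 2^53
        System.out.println((long) Math.pow(2, 53));  // 9007199254740992
        double d = 12345678901234567d;               // 17 digits: past the exact range
        System.out.printf("%.0f%n", d);              // 12345678901234568 -- rounded off
    }
}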
There are a few BigInt libraries already available (I think as3crypt has one) that can be used if you have to do some arithmetic ... for the formatting, I think you'll have to roll your own.
EDIT:
Out of curiosity, you can use this to see how your number is represented in the IEEE 754 binary format.
Good afternoon.
I noticed something strange when calculating the sum of a field.
Values table:
The field type is float.
I run a SELECT SUM:
SELECT SUM(cost) as cost FROM Table
As a result I get sum = 20.47497010231018.
Using a calculator, I get sum = 20.47497.
Please tell me, why do I get different results?
Here, read this: http://floating-point-gui.de/
So, the problem is that floating point numbers (the internal representation of decimal numbers in hardware, used by most languages and applications) do not map one-to-one to numbers in base 10, as we are used to. The underlying values are expressed in base 2 and have limited precision. Some numbers that can be represented with few digits in decimal notation can actually need many more digits in the native format, leading to rounding errors.
The article linked above explains it in detail.
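A quick way to see this for yourself is the snippet below (a Java sketch of my own, but any IEEE 754 language behaves the same):
import java.math.BigDecimal;

public class Base2Demo {
    public static void main(String[] args) {
        // 0.1 has no finite base-2 representation, so the stored value is only close
        System.out.println(new BigDecimal(0.1));  // 0.1000000000000000055511151231257827...
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;                           // each addition carries a tiny error
        }
        System.out.println(sum);                  // 0.9999999999999999, not 1.0
    }
}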
Use a DECIMAL(16, 4) (for example) type for your currency columns, where 4 is the number of digits after the decimal point.
You will have to trim the trailing zeros though, since for instance 10 would appear as 10.0000. This is, however, easily done.
These kinds of differences are not enough to be worried about in most situations.
Floating point numbers are notorious for being slightly off of the intended values because of the limited number of bits and how floating point numbers are stored in binary. For example, as demonstrated here, a decimal value of 0.1 stored as a single-precision float has an actual value of 0.10000000149011612. Close, but not exact.
A technique I've seen used in some applications that need absolutely accurate latitude and longitude numbers is to keep the values in integral data types equal to the float multiplied by some power of 10. For example, a GeoPoint in the Google Maps API v1 on Android measures in microdegrees. Instead of a latitude and longitude like 54.123428, 60.809234, they'd be the ints 54123428, 60809234, which preserves the values precisely. To indicate this, they'd call the variables latitudeE6 and longitudeE6: the real latitude or longitude multiplied by 1E6 (scientific notation for 10^6).
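A rough sketch of that technique in Java (the E6 names follow the Google Maps convention mentioned above; the rest of the code is illustrative):
public class FixedPointDemo {
    public static void main(String[] args) {
        // store degrees * 10^6 as exact integers; no binary rounding can creep in
        int latitudeE6  = 54123428;   // represents 54.123428 degrees
        int longitudeE6 = 60809234;   // represents 60.809234 degrees

        // do all arithmetic and comparisons on the exact ints;
        // convert to floating point only at the edges, e.g. for display
        System.out.println(latitudeE6 / 1e6 + ", " + longitudeE6 / 1e6);  // 54.123428, 60.809234
    }
}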
I've looked all over and can't find this answer.
How many actual digits are there for a MySQL FLOAT?
I know (think?) that it truncates whatever is in excess of the FLOAT's 4-byte limit, but what exactly is that limit?
From the manual (emphasis mine):
For FLOAT, the SQL standard permits an optional specification of the precision (but not the range of the exponent) in bits following the keyword FLOAT in parentheses. MySQL also supports this optional precision specification, but the precision value is used only to determine storage size. A precision from 0 to 23 results in a 4-byte single-precision FLOAT column. A precision from 24 to 53 results in an 8-byte double-precision DOUBLE column.
So up to 23 bits of precision for the mantissa can be stored in a FLOAT, which is equivalent to about 7 decimal digits because 2^23 ~ 10^7 (8,388,608 vs 10,000,000). I tested it here. You can see that 12 decimal digits are returned, of which only the first 7 are really accurate.
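If you want to confirm the ~7-digit limit of single precision yourself, a one-liner in any IEEE 754 language will do; here is a Java sketch of my own (MySQL's 4-byte FLOAT is the same single-precision format):
public class FloatDigitsDemo {
    public static void main(String[] args) {
        float f = 123456789f;            // 9 significant digits requested
        System.out.printf("%.0f%n", f);  // 123456792 -- only the first ~7 digits survive
    }
}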
For those of you who think that MySQL treats floats the same way as, for example, JAVA does, I've got some SHOCKING news: MySQL degrades the accuracy that is available to a float in order to hide from you the decimal places which might be incorrect! Check this out:
JAVA:
import java.text.DecimalFormat;

public static void main(String[] args) {
    long i = 16777225;
    DecimalFormat myFormatter = new DecimalFormat("##,###,###");
    float iAsFloat = Float.parseFloat("" + i); // rounds to the nearest representable float
    System.out.println("long i = " + i + " becomes " + myFormatter.format(iAsFloat));
}
the output is
long i = 16777225 becomes 16,777,224
So far, so normal. Our example integer is just above 2^24 = 16777216. Due to the 23-bit mantissa, between 2^23 and 2^24 a float can hold every integer. Then from 2^24 to 2^25 it can hold only even numbers, from 2^25 to 2^26 only numbers divisible by 4, and so on (it also works in the other direction: from 2^22 to 2^23 it can hold all multiples of 0.5). As long as the exponent isn't out of range, that's the rule for what a float can store.
16777225 is odd, so the "float version" is one off, because in that range (from 2^24 to 2^25) the "step size" of the float is 2.
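Incidentally, Java's Math.ulp makes that step-size rule easy to verify (a small aside of my own):
public class UlpDemo {
    public static void main(String[] args) {
        // ulp = the spacing between adjacent floats at the given value
        System.out.println(Math.ulp(10000000f));  // 1.0 -- below 2^24, every integer fits
        System.out.println(Math.ulp(16777216f));  // 2.0 -- from 2^24 to 2^25, even numbers only
        System.out.println(Math.ulp(33554432f));  // 4.0 -- from 2^25 to 2^26, multiples of 4
    }
}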
And now, what does MySQL make of it?
Here is the fiddle, in case you don't believe me (I wouldn't)
http://www.sqlfiddle.com/#!2/a42e79/1
CREATE TABLE IF NOT EXISTS `test` (
`test` float NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
INSERT INTO `test`(`test`) VALUES (16777225)
SELECT * FROM `test`
result:
16777200
The result is off by 25 rather than 1, but has the "advantage" of being divisible by 100. Thanks a lot.
I think I understand the "philosophy" behind this utter nonsense, but I can't say I approve. Here is the "reason":
They don't want you to see the decimal places which could be wrong, which they accomplish by rounding the "actual" float value (as it is in JAVA and according to the industry standard) to some suitable power of ten.
In the example, if we leave it as it is, the last digit is wrong, without being a zero, and we can't have that.
Then, if we round to multiples of ten, the correct value would be 16777230, while the "actual" float value would be rounded to 16777220. Now the 7th digit is wrong (it wasn't wrong before, but now it is), and it's not zero. We can't have that. Better round to multiples of 100: now both the correct value and the "actual" float value round to 16777200, so you see only the 6 correct digits. You don't want to know the "24" at the end, telling you (since the step size is 2 in that range) that your original number must have been between 16777223 and 16777225. No, you don't want to know that; those two numbers differ in the 7th digit after rounding to the 7th digit, so you can't know the "proper" 7th digit, and hence you want to stop at the 6th digit. Anyway, that's what they think you want at MySQL.
So their goal is to round to some number of decimal digits (namely 6) such that those 6 digits are always "correct", in the sense that you'd have gotten the same 6 digits if you'd rounded the original exact number (before converting it to a float) to 6 digits. And since log10(2^23) = 6.92, which rounds down to 6, I can see why they think this will always work. Tragically, not even that is true.
example:
i = 33554450
the number is between 2^25 and 2^26, so the "float version" of it (that is, the JAVA float version, not the MySQL float version) is the closest multiple of 4 (ties go to the even mantissa; here, the smaller one), so that is
i_as_float = 33554448
i rounded to 6 significant digits (i.e. to multiples of 100, since it's an 8-digit number) gives 33554500.
i_as_float rounded to 6 significant digits gives 33554400.
Oops! Those differ at the 6th digit! But don't tell the MySQL people. They might just start "improving" 16777200 towards 16777000.
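For the record, the float half of this example is easy to check in Java (my own verification sketch):
public class MidpointDemo {
    public static void main(String[] args) {
        float f = 33554450f;             // exactly halfway between 33554448 and 33554452
        System.out.printf("%.0f%n", f);  // 33554448 -- the tie goes to the even mantissa
        // exact value rounded to 6 significant digits: 33554500
        // float value rounded to 6 significant digits: 33554400 -- off at the 6th digit
    }
}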
UPDATE
Other databases don't do it like that.
fiddle: http://www.sqlfiddle.com/#!15/d9144/1