Erlang Cowboy JSON reply: float number precision is wrong?

Here is the code:
RstJson = rfc4627:encode({obj, [{"age", 45.99}]}),
{ok, Req3} = cowboy_req:reply(200, [{<<"Content-Type">>, <<"application/json;charset=UTF-8">>}], RstJson, Req2)
Then I get this incorrect data on the front-end client:
{
"age": 45.990000000000002
}
The float's precision has changed!
How can I solve this problem?

Let's have a look at the JSON that rfc4627 generates:
> io:format("~s~n", [rfc4627:encode({obj, [{"age", 45.99}]})]).
{"age":4.59900000000000019895e+01}
It turns out that rfc4627 encodes floating-point values by calling float_to_list/1, which uses "scientific" notation with 20 digits of precision. As Per Hedeland noted on the erlang-questions mailing list in November 2007, that's an odd choice:
A reasonable question could be why float_to_list/1 generates 20 digits
when a 64-bit float (a.k.a. C double), which is what is used internally,
only can hold 15-16 worth of them - I don't know off-hand what a 128-bit
float would have, but presumably significantly more than 20, so it's not
that either. I guess way back in the dark ages, someone thought that 20
was a nice and even number (I hope it wasn't me:-). The 6.30000 form is
of course just the ~p/~w formatting.
However, it turns out this is actually not the problem! In fact, 45.990000000000002 is equal to 45.99, so you do get the correct value in the front end:
> 45.990000000000002 =:= 45.99.
true
As noted above, a 64-bit float can hold 15 or 16 significant digits, but 45.990000000000002 contains 17 digits (count them!). It looks like your front end tries to print the number with more precision than it actually contains, thereby making the number look different even though it is in fact the same number.
The answers to the question Is floating point math broken? go into much more detail about why this actually makes sense, given how computers handle floating point values.
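For illustration, here is the same check as a small Python sketch (used only because it is quick to run; Python floats are the same IEEE 754 doubles as Erlang's), showing both the equality and how a 17- or 20-digit rendering makes the value look different:
# Illustration only: both literals denote the same 64-bit double.
x = 45.99
print(x == 45.990000000000002)    # True
print(repr(x))                    # 45.99 (shortest string that round-trips)
print(format(x, ".17g"))          # 45.990000000000002 (17 significant digits)
print(format(x, ".20e"))          # 4.59900000000000019895e+01, like float_to_list/1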

The float-encoding function in rfc4627 is:
encode_number(Num, Acc) when is_float(Num) ->
    lists:reverse(float_to_list(Num), Acc).
I changed it like this:
encode_number(Num, Acc) when is_float(Num) ->
    %% ~p gives the shortest string that reads back as the same float; flatten keeps Acc a flat char list.
    lists:reverse(lists:flatten(io_lib:format("~p", [Num])), Acc).
Problem Solved.

Related

How to perform Arithmetic on Ones Complement Numbers and correct overflow?

For some backstory, I'm making a program that can do arithmetic on ones complement numbers. To do this I'm converting a binary string into a BigInteger, performing the math on the BigIntegers, and then converting the result back into a binary string. The only problem occurs when the result goes below -127 or above +127, because I don't know how to correct it, due to the nature of ones complement numbers. I was hoping I could instead convert them like unsigned numbers and do what this answer says.
There are also a couple of passages from the linked question that I don't understand. I've put them in block quotes below; I'm just asking for an explanation of what they mean.
Firstly
I know that the (r-1)'s complement of a base-r number should do an end-around carry if the highest bit has a carry.
Secondly
End-around carry is actually rather simple: it changes the modulus of the addition operation from r^n to r^n - 1.
And lastly
Again, let's keep the carry bit where it is. If you look at the numbers as unsigned integers, we're computing 13 + 11 = 24. However, due to the wrap-around carry, addition is done modulo 15, so we end up with 9, which represents -6 (the correct result).
If someone can explain these quotes to me and provide some web pages for me to read, I would greatly appreciate it! :)
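To make the quotes concrete, here is a minimal sketch (mine, not from the linked answer) of 4-bit ones complement addition with end-around carry; the width, mask, and function names are illustrative, and the example reproduces the 13 + 11 case from the last quote:
# Sketch: 4-bit ones complement addition with end-around carry.
WIDTH = 4
MASK = (1 << WIDTH) - 1              # 0b1111 = 15; arithmetic is modulo 2^n - 1

def ones_complement_add(a, b):
    """Add two n-bit ones complement patterns, folding any carry back into bit 0."""
    total = a + b
    if total > MASK:                 # a carry came out of the top bit
        total = (total & MASK) + 1   # end-around carry
    return total & MASK

def to_value(bits):
    """Read an n-bit pattern as a ones complement value."""
    return -((~bits) & MASK) if bits >> (WIDTH - 1) else bits

result = ones_complement_add(0b1101, 0b1011)   # -2 + -4; as unsigned, 13 + 11 = 24
print(f"{result:04b}", to_value(result))       # 1001 -6  (24 mod 15 = 9, i.e. -6)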

Can we interpret negative binary as positive too? (read the question please)

I already know the concept of negative binary numbers: a 0 in the most significant bit means the number is positive, and a 1 in the most significant bit means it is negative.
But the problem that pushed me to ask a question on Stack Overflow is this:
What about the times we want to represent a huge number whose representation happens to have a 1 in the MSB?
Let me explain it this way: by the above rule for making the negative counterpart of a binary number, we could say that
in an 8-bit system, for example, a value of positive 12 (decimal) would be written as 00001100 in binary, and negative 12 (decimal) would be written as
10001100. But what confuses me a bit is that 10001100 could also be interpreted as 140 in decimal, while we know it is the negative form of 12 in binary using this method of conversion.
I just want to know how to deal with these tricky, two-faced ways of interpreting a binary number, just like the example I gave above (it seems to be negative, but wait, it might also not be).
It depends on the type you use. If you're using an 8-bit representation which is signed, then the largest magnitude you can store is 1111111 (i.e. the first bit is set aside for the sign).
In our example, that would convert to an integer value of 127.
If you use an unsigned type, then you have the extra bit available, allowing you to store 11111111, or 255 expressed as an integer.
A strongly typed language with a good compiler should catch you trying to assign, say, 134 to a signed 8-bit integer and vomit errors all over you.
If you're doing something strange fiddling around with bits yourself, you're on your own! There's no way of reconstructing, post hoc, whether a pattern was intended to be a negative or a large positive value, so you'll have to choose a system and stick with it.
The general convention nowadays is to always stick with signed representations, although I have seen code (usually for extreme compute tasks like astrophysical calculations) use unsigned values simply to save memory. And image data, of course, conventionally uses unsigned values.
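To see the point about types in practice, here is a brief Python sketch (not part of the original answer; note that typed languages use two's complement rather than the question's flip-the-sign-bit scheme, so the signed reading of 10001100 is -116 rather than -12):
# Sketch: the same 8-bit pattern read as unsigned vs. signed (two's complement).
pattern = bytes([0b10001100])

print(int.from_bytes(pattern, "big", signed=False))   # 140
print(int.from_bytes(pattern, "big", signed=True))    # -116
# The bits alone never say which convention applies; the declared type does.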

Flex CurrencyFormatter automatically rounds off larger values

We are working with amounts that can have very large values. We display the formatted amount in the respective Spark TextInput. We are using the simple mx CurrencyFormatter for formatting the amount values. We don't have any problems up to 16 digits, but after crossing 16 digits the numbers are automatically rounded off. We are using the CurrencyFormatter with the following configuration:
<mx:CurrencyFormatter id="formateer" thousandsSeparatorTo="," decimalSeparatorTo="."
precision="2" currencySymbol="" rounding="none" />
My output:
We don't have any problem up to 16 digits:
original-->1234567890123456
Number(txtInput.text)-->1234567890123456
formatted-->1,234,567,890,123,456.00
Erroneous output:
original-->12345678901234567
Number(txtInput.text)-->12345678901234568
formatted-->12,345,678,901,234,568.00
Here the last digit 7 is rounded to 8.
Erroneous output:
original-->12345678901234567890
Number(txtInput.text)-->12345678901234567000
formatted-->12,345,678,901,234,567,000.00
I have debugged the code and stepped into the format() method of CurrencyFormatter. The problem actually occurs in the conversion to Number. I am wondering why, since Number.MAX_VALUE is 1.79769313486231e+308.
I also found one more weird behavior of Number, described below:
var a:Number = 2.03;
var b:Number = 0.03;
var c:Number = a- b;
trace("c --> "+c);
Output : c --> 1.9999999999999998
This kind of output occurs only for certain numbers.
Please suggest how to solve this issue, or suggest a workaround.
Thanks in advance.
Vengatesh s
It's a common problem with big numbers in languages that use 64-bit floating-point arithmetic (ActionScript and JavaScript behave the same way, for example).
It has nothing to do with the CurrencyFormatter: if you trace(12345678901234566+1) you'll get 12345678901234568. That's because a 64-bit float has only 53 bits of mantissa, so integers beyond roughly 2^53 (about 9×10^15, i.e. 16 decimal digits) can no longer all be represented exactly and are rounded to the nearest representable value. That explanation is simplified; the full subject is rather more involved.
There are a few BigInt libraries already available (I think as3crypt has one) that can be used if you have to do some arithmetic; for the formatting, I think you'll have to roll your own.
EDIT:
Out of curiosity, you can feed your number to an IEEE 754 converter to see how it is represented in the binary format.
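For reference, the same behaviour can be reproduced with this Python sketch (used only because it is easy to run; its floats are the same IEEE 754 binary64 as ActionScript's Number), including one way to inspect the raw bit layout:
# Demonstration in Python; its floats are IEEE 754 binary64, like AS3's Number.
import struct

print(float(12345678901234566) + 1)   # 1.2345678901234568e+16: rounded, as in the trace
print(2.0 ** 53, 2.0 ** 53 + 1)       # 9007199254740992.0 9007199254740992.0
print(2.03 - 0.03)                    # 1.9999999999999998: the same binary-fraction issue

# One way to look at the IEEE 754 bit pattern of a value:
bits = struct.unpack(">Q", struct.pack(">d", 2.03))[0]
print(f"{bits:064b}")                 # 1 sign bit | 11 exponent bits | 52 mantissa bits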

NaN and ±INF in a floating-point number system following IEEE 754

In the standard, representation of NaN and INF is like this:
For NaN: exponent = emax+1 & mantissa != 0;
For INF: exponent = emax+1 & mantissa = 0;
There are many operations and calculations that result in these two values.
But what ACTUALLY is NaN (or INF)?
And HOW does the system "decide" or "judge" to store a value as one of these?
Here is a case that seems odd to me:
a = b = 1*2^(emax);
then calculating c = a+b, the actual result is 1*2^(emax+1);
Now, c is not a representable FP value according to the standard;
so how does the system store c in the device?
Is it NaN?
If yes, how can that even be reasonable?
I mean, 1*2^(emax+1) IS (or should be) a number, in the common sense?
If that is the case, then what does the standard ACTUALLY consider a NaN to be?
If not, then how do we deal with this?
I'm considering something like this:
let eM = emax+1;
then 1d.d...d * 2^(eM-1) = 1d.d...d * 2^(emax)
with 1d.d...d having a legal number of digits for the system.
This is much like the way denormalized numbers are handled.
The thing here is this:
is the judgement made after or before the calculation completes?
If it's after, is the above approach a problem or not?
If it's before, the task seems undoable...
Has anyone ever thought about this issue?
Thanks for considering it!
Note: the same questions apply to ±INF.
From Wikipedia:
The five possible exceptions are:
Invalid operation (e.g., square root of a negative number) (returns qNaN by default).
Division by zero (an operation on finite operands gives an exact infinite result, e.g., 1/0 or log(0)) (returns ±infinity by default).
Overflow (a result is too large to be represented correctly) (returns ±infinity by default (for round-to-nearest mode)).
Underflow (a result is very small (outside the normal range) and is inexact) (returns a denormalized value by default).
...
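The "Overflow" entry is what answers the a + b case in the question: the result is not NaN; it is rounded to ±infinity. A small demonstration, as a Python sketch with illustrative names (Python floats are IEEE 754 binary64):
# Overflow goes to infinity, not NaN; NaN comes from invalid operations.
import math
import sys

huge = sys.float_info.max          # largest finite double, about 1.7976931348623157e308
print(huge + huge)                 # inf: results beyond the largest finite value overflow to +infinity
print(-huge - huge)                # -inf
print(math.inf - math.inf)         # nan: an invalid operation yields NaN
print(math.isinf(huge + huge), math.isnan(math.inf - math.inf))   # True True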

Having a rough time deciding on FLOAT, DOUBLE, or DECIMAL

Currently using MySQL version 5.1.6
This is my first real-world build and so far I have actually been enjoying it; however, I am now stuck on a decision regarding a field datatype and hoping someone could sort it out for me.
I essentially have 10 fields that are all different test results. The numbers range from -100 to 100 and can have one decimal place.
For example, -5.1, 0, 1, 16.3, 99.2, and 100 are all possible data. From what I have read, one should use DECIMAL for things that we measure and that are exact (which these are), whereas FLOAT and DOUBLE are approximations, which I do not really want (though I am sure at this level the approximation is very small, if it exists at all).
If I use DECIMAL, do I have to include a space for the '-' at the beginning when it is used? I.e., would I use DECIMAL(4,1) or DECIMAL(5,1), or am I way off here? I might be overthinking this a bit.
DECIMAL(4,1) will be enough; the sign does not need to be included.
More info: http://dev.mysql.com/doc/refman/5.1/en/precision-math-decimal-changes.html
For example, a DECIMAL(3,0) column supports a range of -999 to 999
DECIMAL will indeed be the best option for your needs. FLOAT and DOUBLE can give you ugly numbers (e.g. 0.2 cannot be represented exactly in binary floating point, so you can end up with values like 0.19999999...).
The negative symbol does NOT count as a digit in your calculation, so DECIMAL(4,1) should be fine.
Edit: That also seems like the right field to use for your purposes. Try it out!
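If it helps, the exactness difference the answers describe can be seen with this Python sketch (illustration only; the decimal module stands in for MySQL's DECIMAL, while Python floats behave like FLOAT/DOUBLE):
# Binary floating point vs. exact decimal, mirroring FLOAT/DOUBLE vs. DECIMAL.
from decimal import Decimal

print(0.1 + 0.1 + 0.1)              # 0.30000000000000004: binary approximation
print(Decimal("0.1") * 3)           # 0.3: exact, like a DECIMAL column
print(f"{0.2:.20f}")                # 0.20000000000000001110: 0.2 is not exact in binary
print(Decimal("-100.0"))            # -100.0 fits DECIMAL(4,1): 4 digits, 1 after the point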