AS3 parseInt() method limitations

I have an issue converting a hex string to a Number in AS3.
I have the value
str = "24421bff100317"; decimal value = 10205787172373271
but when I parseInt it I get
parseInt(str, 16) = 10205787172373272
Can anyone please tell me what I am doing wrong here?

Looks like adding one ("24421bff100318") works fine, so I have to assume this is a case of precision error.
Because only a finite set of numbers can be represented with the memory available, there will be times when the computer is approximating. This is common when working with decimals and with very large numbers. You can see it, for example, in this snippet, where the computer apparently can't add basic decimals:
for (var i:Number = 0; i < 3; i += 0.2) {
    trace(i);
}
There are a few workarounds if accuracy at this level is critical, namely using datatypes that store more information ("long" instead of "int" in Java; I believe "Number" might work in AS3, but I have not tested it for your scenario) or, if that fails, breaking the numbers down into smaller parts and adding them together.
For further reading to understand this topic (since I do think it's fascinating), look up "precision errors" and "data types".
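To make the cutoff concrete, here is a minimal Java sketch (Java chosen since "long" was mentioned above; AS3's Number is the same IEEE 754 double): the value fits exactly in a 64-bit long but exceeds the double's 53-bit mantissa, so the nearest representable double is one off.
long exact = Long.parseLong("24421bff100317", 16);
System.out.println(exact);             // 10205787172373271, parsed exactly
double approx = (double) exact;        // value > 2^53, so it gets rounded
System.out.printf("%.0f%n", approx);   // 10205787172373272, nearest double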

Related

Erlang cowboy reply JSON data: float number precision is wrong?

The code is here:
RstJson = rfc4627:encode({obj, [{"age", 45.99}]}),
{ok, Req3} = cowboy_req:reply(200, [{<<"Content-Type">>, <<"application/json;charset=UTF-8">>}], RstJson, Req2)
Then I get this wrong data on the front-end client:
{
    "age": 45.990000000000002
}
The float number's precision has changed!
How can I solve this problem?
Let's have a look at the JSON that rfc4627 generates:
> io:format("~s~n", [rfc4627:encode({obj, [{"age", 45.99}]})]).
{"age":4.59900000000000019895e+01}
It turns out that rfc4627 encodes floating-point values by calling float_to_list/1, which uses "scientific" notation with 20 digits of precision. As Per Hedeland noted on the erlang-questions mailing list in November 2007, that's an odd choice:
A reasonable question could be why float_to_list/1 generates 20 digits
when a 64-bit float (a.k.a. C double), which is what is used internally,
only can hold 15-16 worth of them - I don't know off-hand what a 128-bit
float would have, but presumably significantly more than 20, so it's not
that either. I guess way back in the dark ages, someone thought that 20
was a nice and even number (I hope it wasn't me:-). The 6.30000 form is
of course just the ~p/~w formatting.
However, it turns out this is actually not the problem! In fact, 45.990000000000002 is equal to 45.99, so you do get the correct value in the front end:
> 45.990000000000002 =:= 45.99.
true
As noted above, a 64-bit float can hold 15 or 16 significant digits, but 45.990000000000002 contains 17 digits (count them!). It looks like your front end tries to print the number with more precision than it actually contains, thereby making the number look different even though it is in fact the same number.
The answers to the question Is floating point math broken? go into much more detail about why this actually makes sense, given how computers handle floating point values.
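The same double-precision behaviour can be reproduced in any IEEE 754 language; here is a small illustrative Java sketch (not part of the original setup):
double a = 45.99;
double b = 45.990000000000002;
System.out.println(a == b);        // true: both literals map to the same 64-bit double
System.out.println(a);             // 45.99, the shortest string that round-trips
System.out.printf("%.17g%n", a);   // 45.990000000000002, printed with 17 digits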
The function that encodes float numbers in rfc4627 is:
encode_number(Num, Acc) when is_float(Num) ->
    lists:reverse(float_to_list(Num), Acc).
I changed it to this:
encode_number(Num, Acc) when is_float(Num) ->
    lists:reverse(io_lib:format("~p", [Num]), Acc).
Problem Solved.

Weird Java Boxing [duplicate]

Possible Duplicate:
Weird Java Boxing
Recently I saw a presentation containing the following sample of Java code:
Integer a = 1000, b = 1000;
System.out.println(a == b); // false
Integer c = 100, d = 100;
System.out.println(c == d); // true
Now I'm a little confused. I understand why the result is "false" in the first case: Integer is a reference type, and the references of "a" and "b" are different.
But why is the result "true" in the second case?
I've heard the opinion that the JVM caches Integer objects for int values from -128 to 127 for optimization purposes, so that the references of "c" and "d" are the same.
Can anybody give me more information about this behavior? I want to understand the purposes of this optimization. In what cases is performance increased, etc.? A reference to some research on this problem would be great.
I want to understand the purposes of this optimization. In what cases is performance increased, etc.? A reference to some research on this problem would be great.
The purpose is mainly to save memory, which also leads to faster code due to better cache efficiency.
Basically, the Integer class keeps a cache of Integer instances in the range of -128 to 127, and all autoboxing, literals and uses of Integer.valueOf() will return instances from that cache for the range it covers.
This is based on the assumption that these small values occur much more often than other ints and therefore it makes sense to avoid the overhead of having different objects for every instance (an Integer object takes up something like 12 bytes).
Look at the implementation of Integer.valueOf(int). It will return the same Integer object for inputs less than 256.
EDIT:
It's actually -128 to +127 by default.
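A short Java sketch of the cache boundary and of the reliable comparison (on HotSpot the upper bound can be raised with -XX:AutoBoxCacheMax; that flag is an implementation detail, not part of the question):
System.out.println(Integer.valueOf(127) == Integer.valueOf(127)); // true: both come from the cache
System.out.println(Integer.valueOf(128) == Integer.valueOf(128)); // false: two distinct objects
Integer a = 1000, b = 1000;
System.out.println(a.equals(b)); // true: equals() compares values, which is what you usually want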

Flex CurrencyFormatter automatically rounds off larger values

We are working with large amount values. We display the formatted amount in the respective Spark TextInput, using the simple mx CurrencyFormatter for formatting the amount values. We don't have any problems up to 16 digits, but beyond 16 digits the numbers are automatically rounded off. We are using the CurrencyFormatter with the following configuration:
<mx:CurrencyFormatter id="formateer" thousandsSeparatorTo="," decimalSeparatorTo="."
    precision="2" currencySymbol="" rounding="none" />
My output:
We don't have any problem up to 16 digits:
original-->1234567890123456
Number(txtInput.text)-->1234567890123456
formatted-->1,234,567,890,123,456.00
Erroneous output:
original-->12345678901234567
Number(txtInput.text)-->12345678901234568
formatted-->12,345,678,901,234,568.00
Here the last digit 7 is rounded to 8.
Erroneous output:
original-->12345678901234567890
Number(txtInput.text)-->12345678901234567000
formatted-->12,345,678,901,234,567,000.00
I debugged the code and stepped into the format() method of CurrencyFormatter. The problem actually occurs in the Number conversion, which puzzles me, since Number.MAX_VALUE is 1.79769313486231e+308.
I also found one more weird behavior of Number, described below:
var a:Number = 2.03;
var b:Number = 0.03;
var c:Number = a - b;
trace("c --> " + c);
Output: c --> 1.9999999999999998
This kind of output occurs only for certain numbers.
Please suggest how to solve this issue, or suggest a workaround.
It's a common problem with big numbers in languages that use 64-bit floating-point arithmetic (ActionScript and JavaScript behave the same here, to give an example).
It has nothing to do with the CurrencyFormatter: if you try trace(12345678901234566 + 1) you'll get 12345678901234568. That's because the number has more digits than the 64-bit storage space can hold, so it gets rounded off. I realise the explanation is quite simplistic; the full story is in fact quite complex.
There are a few BigInt libraries already available (I think as3crypt has one) that can be used if you have to do some arithmetic; for the formatting, I think you'll have to roll your own.
EDIT:
out of curiosity, you can use an IEEE 754 converter to see how your number is represented in the binary format
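For instance, this small Java sketch (illustrative only; AS3's Number is the same IEEE 754 double) dumps the raw bit pattern of the rounded value:
double n = 12345678901234567d;               // nearest representable double is ...568
long bits = Double.doubleToLongBits(n);      // raw sign/exponent/mantissa fields
System.out.println(Long.toHexString(bits));
System.out.println(Long.toBinaryString(bits));
System.out.printf("%.0f%n", n);              // 12345678901234568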

What are "magic numbers" in computer programming?

When people talk about the use of "magic numbers" in computer programming, what do they mean?
Magic numbers are any numbers in your code whose meaning isn't immediately obvious to someone with very little knowledge of it.
For example, the following piece of code:
sz = sz + 729;
has a magic number in it and would be far better written as:
sz = sz + CAPACITY_INCREMENT;
Some extreme views state that you should never have any numbers in your code except -1, 0 and 1, but I prefer a somewhat less dogmatic view, since I would instantly recognise 24, 1440, 86400, 3.1415, 2.71828 and 1.414; it all depends on your knowledge.
However, even though I know there are 1440 minutes in a day, I would probably still use a MINS_PER_DAY identifier, since it makes searching for them that much easier. Who's to say that the capacity increment mentioned above wouldn't also be 1440, and you end up changing the wrong value? This is especially true for the low numbers: the chance of dual use of 37197 is relatively low; the chance of using 5 for multiple things is pretty high.
Use of an identifier means that you wouldn't have to go through all your 700 source files and change 729 to 730 when the capacity increment changed. You could just change the one line:
#define CAPACITY_INCREMENT 729
to:
#define CAPACITY_INCREMENT 730
and recompile the lot.
Contrast this with magic constants, which are the result of naive people thinking that just because they remove the actual numbers from their code, they can change:
x = x + 4;
to:
#define FOUR 4
x = x + FOUR;
That adds absolutely zero extra information to your code and is a total waste of time.
"magic numbers" are numbers that appear in statements like
if days == 365
Assuming you didn't know there were 365 days in a year, you'd find this statement meaningless. Thus, it's good practice to assign all "magic" numbers (numbers that have some kind of significance in your program) to a constant:
DAYS_IN_A_YEAR = 365
And from then on, compare to that instead. It's easier to read, and if the earth ever gets knocked out of alignment, and we gain an extra day... you can easily change it (other numbers might be more likely to change).
There's more than one meaning. The one given by most answers already (an arbitrary unnamed number) is a very common one, and the only thing I'll say about that is that some people go to the extreme of defining...
#define ZERO 0
#define ONE 1
If you do this, I will hunt you down and show no mercy.
Another kind of magic number, though, is used in file formats. It's just a value included as typically the first thing in the file which helps identify the file format, the version of the file format and/or the endian-ness of the particular file.
For example, you might have a magic number of 0x12345678. If you see that magic number, it's a fair guess you're seeing a file of the correct format. If you see, on the other hand, 0x78563412, it's a fair guess that you're seeing an endian-swapped version of the same file format.
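A sketch of such a check in Java, reusing the hypothetical 0x12345678 magic from above (DataInputStream.readInt() reads four bytes in big-endian order):
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class MagicCheck {
    static final int MAGIC = 0x12345678;          // hypothetical format magic
    static final int MAGIC_SWAPPED = 0x78563412;  // same magic, byte-swapped

    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            int magic = in.readInt();             // the first four bytes of the file
            if (magic == MAGIC) {
                System.out.println("recognised format, native byte order");
            } else if (magic == MAGIC_SWAPPED) {
                System.out.println("recognised format, endian-swapped");
            } else {
                System.out.println("not this format");
            }
        }
    }
}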
The term "magic number" gets abused a bit, though, referring to almost anything that identifies a file format - including quite long ASCII strings in the header.
http://en.wikipedia.org/wiki/File_format#Magic_number
Wikipedia is your friend (Magic Number article)
Most of the answers so far have described a magic number as a constant that isn't self describing. Being a little bit of an "old-school" programmer myself, back in the day we described magic numbers as being any constant that is being assigned some special purpose that influences the behaviour of the code. For example, the number 999999 or MAX_INT or something else completely arbitrary.
The big problem with magic numbers is that their purpose can easily be forgotten, or the value used in another perfectly reasonable context.
As a crude and terribly contrived example:
int i = 0;
while (i != 99999)
{
    DoSomeCleverCalculationBasedOnTheValueOf(i);
    if (escapeConditionReached)
    {
        i = 99999;
    }
}
The fact that a constant is used or not named isn't really the issue. In the case of my awful example, the value influences behaviour, but what if we need to change the value of "i" while looping?
Clearly in the example above, you don't NEED a magic number to exit the loop. You could replace it with a break statement, and that is the real issue with magic numbers, that they are a lazy approach to coding, and without fail can always be replaced by something less prone to either failure, or to losing meaning over time.
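For instance, the loop above could drop the sentinel entirely (a sketch reusing the contrived names from the example):
int i = 0;
while (true)
{
    DoSomeCleverCalculationBasedOnTheValueOf(i);
    if (escapeConditionReached)
    {
        break;   // exit directly instead of smuggling 99999 into i
    }
}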
Anything that doesn't have a readily apparent meaning to anyone but the application itself.
if (foo == 3) {
    // do something
} else if (foo == 4) {
    // delete all users
}
Magic numbers are special values of certain variables that cause the program to behave in a special manner.
For example, a communication library might take a Timeout parameter and define the magic number "-1" to indicate an infinite timeout.
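A hypothetical Java sketch (the method and constant names are invented for illustration) of how a named constant documents such a sentinel:
public static final int TIMEOUT_INFINITE = -1;  // sentinel value: wait forever

void connect(int timeoutMillis) {
    if (timeoutMillis == TIMEOUT_INFINITE) {
        // block with no deadline
    } else {
        // arm a timer for timeoutMillis milliseconds
    }
}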
The term magic number is usually used to describe some numeric constant in code. The number appears without any further description and thus its meaning is esoteric.
The use of magic numbers can be avoided by using named constants.
Using numbers in calculations other than 0 or 1 that aren't defined by some identifier or variable (which not only makes the number easy to change in several places by changing it in one place, but also makes it clear to the reader what the number is for).
In simple words, a magic number (in the recreational-maths sense) is a three-digit number whose sum of the squares of the first two digits equals the square of the third one.
Example: 202, since 2*2 + 0*0 = 2*2.
Now, write a program in Java to accept an integer and print whether or not it is a magic number.
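A short sketch implementing that check as described by the example above:
import java.util.Scanner;

public class MagicNumberCheck {
    public static void main(String[] args) {
        int n = new Scanner(System.in).nextInt();     // expects a three-digit number
        int first = n / 100, second = (n / 10) % 10, third = n % 10;
        boolean magic = first * first + second * second == third * third;
        System.out.println(n + (magic ? " is" : " is not") + " a magic number");
    }
}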
It may seem a bit banal, but there IS at least one real magic number in every programming language.
0
I argue that it is THE magic wand to rule them all in virtually every programmer's quiver of magic wands.
FALSE is inevitably 0
TRUE is not(FALSE), but not necessarily 1! Could be -1 (0xFFFF)
NULL is inevitably 0 (the pointer)
And most compilers allow it unless their typechecking is utterly rabid.
0 is the base index of array elements, except in languages so antiquated that the base index is '1'. One can then conveniently code for(i = 0; i < 32; i++) and expect that 'i' will start at the base (0), increment, and stop at 32-1: the 32nd member of the array, or whatever.
0 is the end of many programming language strings. The "stop here" value.
0 is likewise built into the X86 instructions to 'move strings efficiently'. Saves many microseconds.
0 is often used by programmers to indicate that "nothing went wrong" in a routine's execution. It is the "not-an-exception" code value. One can use it to indicate the lack of thrown exceptions.
Zero is the answer most often given by programmers to the amount of work it would take to do something completely trivial, like change the color of the active cell to purple instead of bright pink. "Zero, man, just like zero!"
0 is the count of bugs in a program that we aspire to achieve. 0 exceptions unaccounted for, 0 loops unterminated, 0 recursion pathways that cannot be actually taken. 0 is the asymptote that we're trying to achieve in programming labor, girlfriend (or boyfriend) "issues", lousy restaurant experiences and general idiosyncrasies of one's car.
Yes, 0 is a magic number indeed. FAR more magic than any other value. Nothing ... ahem, comes close.

What exactly is the danger of using magic debug values (such as 0xDEADBEEF) as literals?

It goes without saying that using hard-coded, hex literal pointers is a disaster:
int *i = 0xDEADBEEF;
// god knows if that location is available
However, what exactly is the danger in using hex literals as variable values?
int i = 0xDEADBEEF;
// what can go wrong?
If these values are indeed "dangerous" due to their use in various debugging scenarios, then it means that even if I do not use these literals, any program that happens to stumble upon one of these values at runtime might crash.
Anyone care to explain the real dangers of using hex literals?
Edit: just to clarify, I am not referring to the general use of constants in source code. I am specifically talking about debug-scenario issues that might come up to the use of hex values, with the specific example of 0xDEADBEEF.
There's no more danger in using a hex literal than any other kind of literal.
If your debugging session ends up executing data as code without you intending it to, you're in a world of pain anyway.
Of course, there's the normal "magic value" vs "well-named constant" code smell/cleanliness issue, but that's not really the sort of danger I think you're talking about.
With few exceptions, nothing is "constant".
We prefer to call them "slow variables" -- their value changes so slowly that we don't mind recompiling to change them.
However, we don't want to have many instances of 0x07 all through an application or a test script, where each instance has a different meaning.
We want to put a label on each constant that makes it totally unambiguous what it means.
if( x == 7 )
What does "7" mean in the above statement? Is it the same thing as
d = y / 7;
Is that the same meaning of "7"?
Test Cases are a slightly different problem. We don't need extensive, careful management of each instance of a numeric literal. Instead, we need documentation.
We can -- to an extent -- explain where "7" comes from by including a tiny bit of a hint in the code.
assertEquals( 7, someFunction(3,4), "Expected 7, see paragraph 7 of use case 7" );
A "constant" should be stated -- and named -- exactly once.
A "result" in a unit test isn't the same thing as a constant, and requires a little care in explaining where it came from.
A hex literal is no different than a decimal literal like 1. Any special significance of a value is due to the context of a particular program.
I believe the concern raised in the IP address formatting question earlier today was not related to the use of hex literals in general, but the specific use of 0xDEADBEEF. At least, that's the way I read it.
There is a concern with using 0xDEADBEEF in particular, though in my opinion it is a small one. The problem is that many debuggers and runtime systems have already co-opted this particular value as a marker value to indicate unallocated heap, bad pointers on the stack, etc.
I don't recall off the top of my head just which debugging and runtime systems use this particular value, but I have seen it used this way several times over the years. If you are debugging in one of these environments, the existence of the 0xDEADBEEF constant in your code will be indistinguishable from the values in unallocated RAM or whatever, so at best you will not have as useful RAM dumps, and at worst you will get warnings from the debugger.
Anyhow, that's what I think the original commenter meant when he told you it was bad for "use in various debugging scenarios."
There's no reason why you shouldn't assign 0xdeadbeef to a variable.
But woe betide the programmer who tries to assign decimal 3735928559, or octal 33653337357, or worst of all: binary 11011110101011011011111011101111.
Big Endian or Little Endian?
One danger is when constants are assigned to an array or structure with different sized members; the endian-ness of the compiler or machine (including JVM vs CLR) will affect the ordering of the bytes.
This issue is true of non-constant values, too, of course.
Here's an, admittedly contrived, example. What is the value of buffer[0] after the last line?
#include <string.h>  /* for memcpy */

const int TEST[] = { 0x01BADA55, 0xDEADBEEF };
char buffer[sizeof(TEST)];
memcpy( buffer, (void*)TEST, sizeof(TEST) );
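The same point in Java terms (an illustrative sketch; ByteBuffer makes the byte order explicit):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

ByteBuffer big = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN);
ByteBuffer little = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
big.putInt(0xDEADBEEF);
little.putInt(0xDEADBEEF);
System.out.printf("%02x %02x %02x %02x%n", big.get(0), big.get(1), big.get(2), big.get(3));             // de ad be ef
System.out.printf("%02x %02x %02x %02x%n", little.get(0), little.get(1), little.get(2), little.get(3)); // ef be ad de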
I don't see any problem with using it as a value. It's just a number, after all.
There's no danger in using a hard-coded hex value for a pointer (like your first example) in the right context. In particular, when doing very low-level hardware development, this is the way you access memory-mapped registers. (Though it's best to give them names with a #define, for example.) But at the application level you shouldn't ever need to do an assignment like that.
I use CAFEBABE
I haven't seen it used by any debuggers before.
int *i = 0xDEADBEEF;
// god knows if that location is available
int i = 0xDEADBEEF;
// what can go wrong?
The danger that I see is the same in both cases: you've created a flag value that has no immediate context. There's nothing about i in either case that will let me know, 100, 1,000 or 10,000 lines later, that there is a potentially critical flag value associated with it. What you've planted is a landmine bug that, if I don't remember to check for it in every possible use, could leave me facing a terrible debugging problem. Every use of i will now have to look like this:
if (i != 0xDEADBEEF) { // Curse the original designer to oblivion
    // Actual useful work goes here
}
Repeat the above for all of the 7000 instances where you need to use i in your code.
Now, why is the above worse than this?
if (isIProperlyInitialized()) { // Which could just be a boolean
    // Actual useful work goes here
}
At a minimum, I can spot several critical issues:
Spelling: I'm a terrible typist. How easily will you spot 0xDAEDBEEF in a code review? Or 0xDEADBEFF? On the other hand, I know that my compiler will barf immediately on isIProperlyInitialised() (insert the obligatory s vs. z debate here).
Exposure of meaning. Rather than trying to hide your flags in the code, you've intentionally created a method that the rest of the code can see.
Opportunities for coupling. It's entirely possible that a pointer or reference is connected to a loosely defined cache. An initialization check could be overloaded to check first if the value is in cache, then to try to bring it back into cache and, if all that fails, return false.
In short, it's just as easy to write the code you really need as it is to create a mysterious magic value. The code-maintainer of the future (who quite likely will be you) will thank you.