flash as3 rounding numbers automatically - actionscript-3

I wrote out a long post trying to explain the exact details of the problem I'm having, but instead I think I'll keep it simple and ask an example question here:
var n1:Number = 9.99999999999999;
n1 += 0.000000000000009;
var n2:Number = n1 + 10;
var n3:Number = n1 - 10;
Long story short, n1 = 9.99...7, n2 = 20, n3 = 10.
If I try to compare n1 and n3, they should be the same but they aren't. I don't care whether Flash rounds them or not; I just need them to be the same (and they aren't, because Flash rounds in one case and not the other).
Is there some standard solution for a problem like this?
P.S. I don't need this much precision on my numbers, but I also would not like to micromanage the rounding of the numbers EVERY time I do a manipulation (that seems like it could add a LOT of code to the mix). If this is the only solution, however, I guess I'll just have to do a lot of rounding throughout the code, ha.

The Number type in Flash is a double-precision floating-point number. Read more here about them. These "problems" are not unique to Flash; they come from how these numbers are stored.
There are a couple of options. Here is a quick little library for fuzzy comparison of numbers, within a certain margin of error. Another option would be to implement a fixed-point math library.
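If you don't want to pull in a library, the fuzzy-comparison idea boils down to something like this hedged sketch (the function name and the default epsilon are my own choices, not anything Flash prescribes):

// Compare two Numbers within a tolerance instead of relying on ==.
function nearlyEqual(a:Number, b:Number, epsilon:Number = 1e-9):Boolean {
    return Math.abs(a - b) <= epsilon;
}

var n1:Number = 9.99999999999999;
n1 += 0.000000000000009;
trace(n1 == 10);                  // false -- the straight comparison fails
trace(nearlyEqual(n1, 10));       // true
trace(nearlyEqual(n1 + 10, 20));  // true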

Is it a problem to just wrap in int() if you're trying to compare?
trace(int(n1) == int(n3));

Related

AS3 function producing combinations of array, no duplicates

This sounds like a duplicate question, as there are several questions similar to this, but they don't specifically ask this (or I just haven't found it! :) )
I have an array, this one has two distinct elements, "a" and "b", and a length of four total elements:
var list:Array = ["a","a","b","b"];
I'm looking for all combinations, using all elements, no duplicates.
This should yield:
aabb
abab
abba
bbaa
baba
baab
Searching for a solution for this has given me results similar to these:
a,b,ab,ba,aab,abb,aba, etc
or
a a b b, a a b b, a a b b, etc
Mind you, the application that would ultimately use this function would have two distinct elements, "a" and "b", and a length of 50 total elements:
var list:Array = ["a","a","a","a","a","a","a","a","a","a",
"a","a","a","a","a","a","a","a","a","a",
"a","a","a","a","a",
"b","b","b","b","b","b","b","b","b","b",
"b","b","b","b","b","b","b","b","b","b",
"b","b","b","b","b"]
...so a brute force solution like I used with aabb wouldn't be feasible.
Any help, especially using AS3 code, would be appreciated, even if it is simply pointing me to the right google search :)
Here is a JavaScript answer that might get you started: Permutations in JavaScript? (they're both EcmaScript implementations so converting to ActionScript should only require minor changes)
It doesn't handle the uniqueness requirement, but it might point you in the right direction.
However, there are a few things you might need to consider first. I don't think it will be feasible to pre-compute all unique permutations upfront.
Based on this answer about unique permutations, it looks like there are 50! / (25! * 25!) = 126,410,606,437,752 unique permutations for 25 a's and 25 b's.
To give an idea of how large that number is: if each combination were 1 byte in memory (in practice it will be more than this), then that would be 126,410,606,437,752 bytes = about 126,410 gigabytes in memory.
Plus, the algorithm for generating the permutations has complexity O(n!), so it might take far too long, separate from the memory constraints, to generate the list of permutations.
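If all you need is the count rather than the permutations themselves, it fits comfortably in a Number. A hedged AS3 sketch (the helper name binomial is my own):

// C(n, k) = n! / (k! * (n - k)!), computed iteratively so every intermediate
// value stays an exact integer well inside Number's 2^53 range for these inputs.
function binomial(n:int, k:int):Number {
    var result:Number = 1;
    for (var i:int = 1; i <= k; i++) {
        result = result * (n - k + i) / i;
    }
    return result;
}

trace(binomial(4, 2));    // 6 -- matches the six arrangements of "aabb" above
trace(binomial(50, 25));  // 126410606437752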

Flex Currencyformatter automatically rounds off the larger values

We are working with amounts whose values are quite large. We display the formatted amount in the respective Spark TextInput. We are using the simple mx CurrencyFormatter for formatting the amount values. We don't have any problems up to 16 digits, but after crossing 16 digits the numbers are automatically rounded off. We are using the CurrencyFormatter with the following configuration:
<mx:CurrencyFormatter id="formateer" thousandsSeparatorTo="," decimalSeparatorTo="."
precision="2" currencySymbol="" rounding="none" />
My output:
We don't have any problem up to 16 digits
original-->1234567890123456
Number(txtInput.text)-->1234567890123456
formatted-->1,234,567,890,123,456.00
Erroneous output:
original-->12345678901234567
Number(txtInput.text)-->12345678901234568
formatted-->12,345,678,901,234,568.00
Here the last digit 7 is rounded to 8.
Erroneous output:
original-->12345678901234567890
Number(txtInput.text)-->12345678901234567000
formatted-->12,345,678,901,234,567,000.00
I have debugged the code and stepped into the format() method of CurrencyFormatter. The problem actually occurs in the Number conversion. I am wondering about this, since Number.MAX_VALUE is 1.79769313486231e+308.
I also found one more weird behavior of Number, described below:
var a:Number = 2.03;
var b:Number = 0.03
var c:Number = a- b;
trace("c --> "+c);
Output : c --> 1.9999999999999998
This kind of output is obtained for these numbers only.
Please suggest me how to solve this issue or suggest me a workaround method.
Thanks in advance.
Vengatesh s
It's a common problem with big numbers in languages that use 64-bit floating-point arithmetic (ActionScript and JavaScript behave the same way here, to give an example).
It has nothing to do with the CurrencyFormatter: if you try to trace(12345678901234566 + 1) you'll get 12345678901234568. That's because the number has more significant digits than the 64-bit double format can store (only 53 bits are reserved for the significand, so not every integer beyond 2^53 can be represented exactly), and so it gets rounded off. I realise the explanation is quite simplistic; the full argument is in fact quite complex.
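A minimal sketch (my own check, not part of the original answer) showing where the exact-integer range of Number ends:

var limit:Number = Math.pow(2, 53);   // 9007199254740992 = 2^53
trace(limit - 1);                     // 9007199254740991 -- still exact
trace(limit + 1);                     // 9007199254740992 -- the +1 is lost
trace(12345678901234567);             // 12345678901234568, as in the question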
There are a few BigInt libraries already available (I think as3crypto has one) that can be used if you have to do some arithmetic ... for the formatting I think you'll have to roll your own.
EDIT:
Out of curiosity, you can use this to see how your number is represented in the IEEE 754 binary format.

solving Project Euler #305

Problem #305
Let's call S the (infinite) string that is made by concatenating the consecutive positive integers (starting from 1) written down in base 10.
Thus, S = 1234567891011121314151617181920212223242...
It's easy to see that any number will show up an infinite number of times in S.
Let's call f(n) the starting position of the nth occurrence of n in S. For example, f(1)=1, f(5)=81, f(12)=271 and f(7780)=111111365.
Find the sum of f(3^k) for 1 <= k <= 13.
How can I go about solving this?
Calculating S to an arbitrary size is deceptively easy but, as you have probably already found out, not practical: it simply becomes too big.
As is common for the newer Project Euler Problems, brute force simply does not work.
That said, you can still look at S for small values of k and maybe construct a formula that will solve the problem in parts (the first few values are easy to handle in memory). Also, look at Problem 40.
Note: remember the one-minute rule (most problems can be solved in a few milliseconds).
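To make the "small values" idea concrete, here is a hedged brute-force sketch in AS3 (the helper name f and the prefix length are my own choices, and this is nowhere near a full solution); it just builds a prefix of S and scans it, so it only works while f(n) fits in memory:

// Find the 1-based starting position of the nth occurrence of n in a prefix of S.
function f(n:int, digitsOfS:int):int {
    var s:String = "";
    for (var i:int = 1; s.length < digitsOfS; i++) {
        s += i.toString();
    }
    var target:String = n.toString();
    var count:int = 0;
    var pos:int = -1;
    while (count < n) {
        pos = s.indexOf(target, pos + 1);
        if (pos == -1) return -1;   // prefix too short
        count++;
    }
    return pos + 1;                 // convert to a 1-based position
}

trace(f(1, 1000));   // 1
trace(f(5, 1000));   // 81
trace(f(12, 1000));  // 271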
My estimate of the running time is O(n^2 log n), so this brute force approach is not feasible.
Note that you are supposed to solve Project Euler problems yourself, which IMHO applies in particular to newer problems.

How many combinations of k neighboring pixels are there in an image?

I suck at math, so I can't figure this out: how many combinations of k neighboring pixels are there in an image? Combinations of k pixels out of n * n total pixels in the image, but with the restriction that they must be neighbors, for each k from 2 to n * n. I need the sum for all values of k for a program that must take into account that many elements in a set that it's reasoning about.
Neighbors are 4-connected and do not wrap-around.
Once you get the number of distinct shapes for a blob of pixels of size k (here's a reference) then it comes down to two things:
How many ways on your image can you place this blob?
How many of these are the same so that you don't double-count (because of symmetries)?
Getting an exact answer is a huge computational job (you're looking at more than 10^30 distinct shapes for k=56 -- imagine if k = 10,000) but you may be able to get good enough for what you need by fitting for the first 50 values of k.
(Note: the reference in the wikipedia article takes care of duplicates with their definition of A_k.)
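If it helps to sanity-check a fitting approach, here is a hedged brute-force sketch in AS3 (function names are my own) that counts the 4-connected subsets of exactly k pixels in a tiny n x n image by testing every subset; it is exponential in n*n, so it is only usable as ground truth for very small cases:

// Count subsets of exactly k pixels in an n x n image that are 4-connected.
function countConnectedSets(n:int, k:int):int {
    var total:int = n * n;
    var count:int = 0;
    for (var mask:int = 0; mask < (1 << total); mask++) {
        if (bitCount(mask) != k) continue;
        if (isConnected(mask, n)) count++;
    }
    return count;
}

function bitCount(v:int):int {
    var c:int = 0;
    while (v != 0) { v &= (v - 1); c++; }   // clear the lowest set bit
    return c;
}

function isConnected(mask:int, n:int):Boolean {
    var total:int = n * n;
    var start:int = -1;
    for (var i:int = 0; i < total; i++) {
        if (mask & (1 << i)) { start = i; break; }
    }
    if (start == -1) return false;
    var visited:int = 1 << start;
    var stack:Array = [start];
    while (stack.length > 0) {
        var p:int = stack.pop();
        var x:int = p % n;
        var y:int = int(p / n);
        var neighbours:Array = [];
        if (x > 0)     neighbours.push(p - 1);
        if (x < n - 1) neighbours.push(p + 1);
        if (y > 0)     neighbours.push(p - n);
        if (y < n - 1) neighbours.push(p + n);
        for each (var q:int in neighbours) {
            if ((mask & (1 << q)) && !(visited & (1 << q))) {
                visited |= (1 << q);
                stack.push(q);
            }
        }
    }
    return visited == mask;   // true if every pixel in the subset was reached
}

trace(countConnectedSets(2, 2));  // 4 -- the four dominoes that fit in a 2 x 2 image
trace(countConnectedSets(3, 3));  // 22 -- all triomino placements in a 3 x 3 image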
It seems that you are working on a problem that can be mapped to Markovian Walks.
If I understand your question, you are trying to count paths of length k like this:
Start (end)-> any pixel after visiting k neighbours
(ASCII sketch in the original: a path that starts at a pixel and winds through the grid back toward its endpoint)
in a structure that is similar to a chess board, and you want to connect only vertical and horizontal neighbours.
I think that you want the paths to be self-avoiding, meaning that a pixel should not be traversed twice in a walk (meaning no loops). This condition leads to a classical problem called SAWs (Self-Avoiding Walks).
Well, now the bad news: the problem is open! No one has solved it yet.
You can find a nice intro to the problem here, starting at page 54 (or page 16; the counting is confusing because the page numbers repeat in the doc). The whole paper is very interesting and easy to read. It manages to explain the mathematical background, the historical anecdotes and the scientific importance of Markov chains in a few slides.
Hope this helps ... to avoid the problem.
If you were planning to iterate over all possible polyominoes, I'm afraid you'll be waiting a long time. From the Wikipedia article about polyominoes, the count grows at least as fast as O(4.0626^n) and probably closer to O(8^n). By the time n=14, the count will be over 5 billion and too big to fit into an int. By the time n=30, the count will be more than 17 quintillion and you won't be able to fit it into a long. If all the world governments pooled their resources to iterate through all polyominoes in a 32 x 32 icon, they would not be able to do it before the sun goes supernova.
Now that doesn't mean what you want to do is intractable. It is likely that almost all the work you do on one polyomino was already done in part on others. It may be a fun task to get an exponential speedup using dynamic programming. What is it you're trying to accomplish?

What's the absolute minimum a programmer should know about binary numbers and arithmetic? [closed]

Although I know the basic concepts of binary representation, I have never really written any code that uses binary arithmetic and operations.
I want to know:
What are the basic concepts any programmer should know about binary numbers and arithmetic?, and
In what "practical" ways can binary operations be used in programming? I have seen some "cool" uses of shift operators and XOR etc., but are there some typical problems where using binary operations is an obvious choice?
Please give pointers to some good reference material.
If you are developing lower-level code, it is critical that you understand the binary representation of various types. You will find this particularly useful if you are developing embedded applications or if you are dealing with low-level transmission or storage of data.
That being said, I also believe that understanding how things work at a low level is useful even if you are working at much higher levels of abstraction. I have found, for example, that my ability to develop efficient code is improved by understanding how things are represented and manipulated at a low level. I have also found such understanding useful in working with debuggers.
Here is a short list of binary representation topics for study (a quick AS3 illustration follows at the end of this answer):
numbering systems (binary, hex, octal, decimal, ...)
binary data organization (bits, nibbles, bytes, words, ...)
binary arithmetic
other binary operations (AND,OR,XOR,NOT,SHL,SHR,ROL,ROR,...)
type representation (boolean,integer,float,struct,...)
bit fields and packed data
Finally...here is a nice set of Bit Twiddling Hacks you might find useful.
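As a quick, hedged illustration of the operator list above (written in AS3, since that's the language used elsewhere on this page; the variable names are my own):

var a:uint = 0xF0;                  // 1111 0000
var b:uint = 0x0F;                  // 0000 1111
trace((a & b).toString(2));         // 0        -- AND
trace((a | b).toString(2));         // 11111111 -- OR
trace((a ^ b).toString(2));         // 11111111 -- XOR
trace(((~a) & 0xFF).toString(2));   // 1111     -- NOT, masked back to one byte
trace((a >> 4).toString(2));        // 1111     -- SHR
trace((b << 4).toString(2));        // 11110000 -- SHL

(AS3 has no built-in ROL/ROR operators, so rotations have to be composed from shifts and OR.)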
Unless you're working with lower level stuff, or are trying to be smart, you never really get to play with binary stuff.
I've been through a computer science degree, and I've never used any of the binary arithmetic stuff we learned since my course ended.
Have a squizz here: http://www.swarthmore.edu/NatSci/echeeve1/Ref/BinaryMath/BinaryMath.html
You must understand bit masks.
Many languages and situations require the use of bit masks, for example flags in arguments or configs.
PHP has its error level which you control with bit masks:
error_reporting = E_ALL & ~E_NOTICE
Or simply checking if an int is odd or even:
isOdd = myInt & 1
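The same bit-mask idea in AS3, with made-up flag names purely for illustration:

const CAN_READ:uint   = 1 << 0;   // 0001
const CAN_WRITE:uint  = 1 << 1;   // 0010
const CAN_DELETE:uint = 1 << 2;   // 0100

var permissions:uint = CAN_READ | CAN_WRITE;   // set two flags
trace((permissions & CAN_WRITE) != 0);         // true  -- flag is set
trace((permissions & CAN_DELETE) != 0);        // false -- flag is not set
permissions &= ~CAN_READ;                      // clear a flag
trace((permissions & CAN_READ) != 0);          // false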
I believe a basic knowledge of binary operations like AND, OR, XOR, NOT would be handy, as most programming languages support these operations in the form of bitwise operators.
These operations are also used in image processing and other areas in graphics.
One important use of XOR operation which I can think of is Parity check. Check this http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/xor.html
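A small hedged sketch of the XOR-folding trick behind parity checks (the helper name is my own, and it assumes the value fits in one byte):

// Fold the byte onto itself with XOR; the low bit ends up being the parity.
function parity(b:uint):uint {
    b ^= b >> 4;
    b ^= b >> 2;
    b ^= b >> 1;
    return b & 1;
}

trace(parity(0xB5));   // 1 -- 10110101 has five 1 bits (odd)
trace(parity(0xB4));   // 0 -- 10110100 has four 1 bits (even)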
cheers
The following are things I regularly appreciate knowing in my quite conventional programming work:
Know the powers of 2 up to 2^16, and know that 2^32 is about 4.3 billion. Know them well enough so that if you see the number 2147204921 pop up somewhere your first thought is "hmm, that looks pretty close to 2^31" -- that's a very effective module for your bug radar.
Be able to do simple arithmetic; e.g. convert a hexadecimal digit to a nybble and back (there's a quick check of this below).
Have some vague idea of how floating-point numbers are represented in binary.
Understand standard conventions that you might encounter in other people's code related to bit twiddling (flags get ORed together to make composite values and AND checks if one's set, shift operators pack and unpack numbers into different bytes, XOR something twice and you get the same something back, that kind of thing.)
Further knowledge is mostly gravy unless you work with significant performance constraints or do other less common work.
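For the hex-digit/nybble item above, a quick hedged check in AS3 (toString and parseInt both accept a radix):

var nybble:uint = 0xA;                     // one hex digit = one nybble = 1010
trace(nybble.toString(2));                 // 1010
trace(nybble.toString(16));                // a
trace(parseInt("1010", 2).toString(16));   // a  -- binary string back to a hex digit
trace(parseInt("a", 16));                  // 10 -- hex digit as a decimal value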
At the absolute bare minimum you should be able to implement a bit mask solution. The tasks associated with bit mask operations should ensure that you at least understand binary at a superficial level.
From the top of my head, here are some examples of where I've used bitwise operators to do useful stuff.
A piece of javascript that needed one of those "check all" boxes was something along these lines:
var check = true;
for (var i = 0; i < elements.length; i++)
    check &= elements[i].checked;
checkAll.checked = check;
Calculate the corner points of a cube.
Vec3f m_Corners[8];
void corners(float a_Size){
    for(size_t i = 0; i < 8; i++){
        m_Corners[i] = a_Size * Vec3f(axis(i, Vec3f::X), axis(i, Vec3f::Y), axis(i, Vec3f::Z));
    }
}
float axis(size_t a_Corner, int a_Axis) const{
    return ((a_Corner >> a_Axis) & 1) == 1
        ? -.5f
        : +.5f;
}
Draw a Sierpinski triangle
for(int y = 0; y < 512; y++)
    for(int x = 0; x < 512; x++)
        if(x & y) pixels[x + y * w] = someColor;
        else pixels[x + y * w] = someOtherColor;
Finding the next power of two
int next = 1 << (int)ceil(log(number) / log(2));
Checking if a number is a power of two
bool powerOfTwo = (number != 0) && ((number & (number - 1)) == 0);
The list can go on and on, but for me these are (except for the Sierpinski one) everyday examples. Once you understand and work with it, though, you'll encounter it in more and more places, such as the corners of a cube.
You don't specifically mention (nor rule out!-) floating point binary numbers and arithmetic, so I won't miss the opportunity to flog one of my favorite articles ever (seriously: I sometimes wish I could make passing a strict quiz on it a pre-req of working as a programmer...;-).
The most important thing every programmer should know about binary numbers and arithmetic is : Every number in a computer is represented in some kind of binary encoding, and all arithmetic on a computer is binary arithmetic.
The consequences of this are many (a short AS3 illustration follows this list):
Floating point "bugs" when doing math with IEEE floating point binary numbers (which is all numbers in JavaScript, and quite a few in Java and C)
The upper and lower bounds of representable numbers for each type
The performance cost of multiplication/division/square root etc. operations (especially on embedded systems)
Precision loss and accumulation of errors
and more. This is stuff you need to know even if you never do a bitwise xor, or not, or whatever in your life. You'll still run into these things.
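A hedged AS3 illustration of the bounds and floating-point points above:

trace(int.MIN_VALUE, int.MAX_VALUE);   // -2147483648 2147483647
trace(uint.MAX_VALUE);                 // 4294967295
trace(Number.MAX_VALUE);               // 1.79769313486231e+308
trace(0.1 + 0.2 == 0.3);               // false -- the classic floating point "bug"
trace(0.1 + 0.2);                      // 0.30000000000000004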
This really depends on the language you're using. Recent languages such as C# and Java abstract the binary representation from you -- this makes working with binary difficult and is not usually the best way to do things anyway in these languages.
Middle and low level languages like C and C++, however, require you to understand quite a bit about how the numbers are stored underneath -- especially regarding endianness.
Binary knowledge is also useful when implementing a cross-platform protocol of some sort... for example, on x86 machines, byte order is little-endian, but most network protocols want big-endian numbers. Therefore you have to realize you need to do the conversion for things to go smoothly. Many RFCs, such as this one -> https://www.rfc-editor.org/rfc/rfc4648 require binary knowledge to understand.
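In AS3 specifically, flash.utils.ByteArray makes the byte-order choice explicit, which is a handy way to see the endianness issue (a small hedged sketch, my own example):

import flash.utils.ByteArray;
import flash.utils.Endian;

var bytes:ByteArray = new ByteArray();
bytes.endian = Endian.BIG_ENDIAN;        // "network byte order"
bytes.writeUnsignedInt(0x12345678);
trace(bytes[0].toString(16), bytes[1].toString(16),
      bytes[2].toString(16), bytes[3].toString(16));   // 12 34 56 78

bytes.endian = Endian.LITTLE_ENDIAN;     // x86-style byte order
bytes.position = 0;
bytes.writeUnsignedInt(0x12345678);
trace(bytes[0].toString(16));            // 78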
In short, it's completely dependent on what you're trying to do.
Billy3
It's handy to know the numbers 256 and 65536. It's handy to know how two's complement negative numbers work.
Maybe you won't run into a lot of binary. I still use it pretty often, but maybe out of habit.
A good familiarity with bitwise operations should make you more facile with boolean algebra, and I think that's important for every programmer--you want to be able to quickly simplify complex logic expressions.
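A hedged two's-complement peek in AS3 (int is 32 bits, so -1 is stored as 32 one bits):

var minusOne:int = -1;
trace((minusOne >>> 0).toString(2));   // 11111111111111111111111111111111
trace(~5 + 1);                         // -5: negate by inverting the bits and adding one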
The absolute minimum is that "2" is not a binary digit and that 10b is smaller than 3.
If you never do low-level programming (like C in embedded systems), never have to use a debugger, and never have to work with real numbers, then I suppose you could get by without knowing binary. But knowing binary will make you a stronger programmer, even if indirectly.
Once you venture into those areas you will need to know binary (and its "sister" base, hexadecimal). Without knowing it:
Embedded systems programming would be impossible.
Debugging would be hard because you wouldn't know what you were looking at in memory.
Numerical calculations with decimals would give you answers you don't understand.
I learned to twiddle bits back when c and asm were still used for "mainstream" programming. Although I no longer have much use for that knowledge, I recently used it to solve a real-world business problem.
We use a fax service that posts a message back to us when the fax has been sent or failed after x number of retries. The only way I had to identify the fax was a 15 character field. We wanted to consolidate this into one URL for all of our clients. Before we consolidated, all we had to fit in this field was the FaxID PK (32 bit int) column which we just sent as a string.
Now we had to identify the client (a 4 character code) and the database (32 bit int) underneath the client. I was able to do this using base 64 encoding. Without understanding the binary representation of numbers and characters, I probably would never have even thought of this solution.
Some useful information about the number system.
Binary | base 2
Hexadecimal | base 16
Decimal | base 10
Octal | base 8
These are the most common.
Converting between them is fairly easy.
124 base 8 = (1 x 8^2) + (2 x 8^1) + (4 x 8^0) = 84 base 10
74 base 10 = (7 x 10^1) + (4 x 10^0)
AND, OR, XOR, etc. are used in logic gates. Search for Boolean algebra; it is well worth the time to learn.
Say, for instance, you have 11001111 base 2 and you want to extract only the last four bits.
Truth table for AND:
P | Q | R
T | T | T
T | F | F
F | F | F
F | T | F
You can use 11001111 base 2 AND 00001111 base 2 = 00001111 base 2
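The same masking in AS3 (variable names are my own, hex literals used just for brevity):

var value:int = 0xCF;             // 1100 1111 in binary
var lowFour:int = value & 0x0F;   // keep only the last four bits
trace(lowFour.toString(2));       // 1111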
There are plenty of resources on the internet.