I have read that long values (I believe all data types with 8-byte size) are read by the CPU in two steps:
the first step reads the first 32 bits, and the second reads the next 32 bits.
Is this statement true only for 32-bit machines?
Related
This Flash game has a lot of players including me and some friends. We noticed the same thing can run differently for different people. The math in the simulation is definitely to blame. Whether the cause is in hardware, OS, browser, 32-bit/64-bit, etc. is not really known. But with the combinations we have to test with, we've gotten 5 distinct end results from the same simulation starting conditions, and can likely get more.
This makes me wonder, does Actionscript have a floating point math specification? If so, what does it say about the accuracy and determinism of the computations?
I compare this to Java, which differentiates between regular floating-point math with the Math class and deterministic floating-point math with the StrictMath class and the strictfp keyword. Both are always within 1 ulp of the exact result, which also implies that regular math and strict math give results within 1 ulp of each other for a single operation or function call. The docs are very clear about this. I'd expect other respectable languages to have something similar, stating how accurate their floating-point computations are and whether they give the same results everywhere.
Update since some people have been saying the game is dishonest:
Some others have taken apart the swf and even made mods for it; they've seen the game engine and can confirm there is no randomness. Box2d is used for its physics. If a design ever does run differently on subsequent runs, it has actually changed due to some bug. Usually this is a visible difference, but if not, you can check the raw data with this tool and see that it is different. Different starting conditions, as expected, give different end results.
As for what we know so far, these are the results on a test level:
For example, if I run 32-bit Chrome on my desktop (AMD A10-5700 CPU), I always get the result of "946 ticks". But if I run Firefox or Internet Explorer instead, I always get the result of "794 ticks".
Actionscript doesn't really have a math specification in that sense. This is the closest you'll get:
https://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/Math.html
It says at the bottom of the top section:
The Math functions acos, asin, atan, atan2, cos, exp, log, pow, sin, and sqrt may result in slightly different values depending on the algorithms used by the CPU or operating system. Flash runtimes call on the CPU (or operating system if the CPU doesn't support floating point calculations) when performing the calculations for the listed functions, and results have shown slight variations depending upon the CPU or operating system in use.
So to answer our two questions:
What does it say about accuracy? Nothing, actually. At no point does it mention a limit to how inaccurate a result can be.
What does it say about determinism? Hardware and operating system are definitely factors, so it is platform-dependent. No confirmation for other factors.
If you want to look any deeper, you're on your own.
According to the docs, Actionscript has a catch-all Number data type in addition to int and uint types:
The Number data type uses the 64-bit double-precision format as specified by the IEEE Standard for Binary Floating-Point Arithmetic (IEEE-754). This standard dictates how floating-point numbers are stored using the 64 available bits. One bit is used to designate whether the number is positive or negative. Eleven bits are used for the exponent, which is stored as base 2. The remaining 52 bits are used to store the significand (also called mantissa), the number that is raised to the power indicated by the exponent.
By using some of its bits to store an exponent, the Number data type can store floating-point numbers significantly larger than if it used all of its bits for the significand. For example, if the Number data type used all 64 bits to store the significand, it could store a number as large as 2^65 – 1. By using 11 bits to store an exponent, the Number data type can raise its significand to a power of 2^1023.
Although this range of numbers is enormous, it comes at the cost of precision. Because the Number data type uses 52 bits to store the significand, numbers that require more than 52 bits for accurate representation, such as the fraction 1/3, are only approximations. If your application requires absolute precision with decimal numbers, use software that implements decimal floating-point arithmetic as opposed to binary floating-point arithmetic.
This could account for the varying results you're seeing.
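If it helps to see that 1 sign / 11 exponent / 52 significand layout concretely, here is a minimal C sketch (assuming double and uint64_t are both 64 bits and share a representation, which holds on the platforms in question) that pulls a double apart:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double x = 1.0 / 3.0;            /* the fraction the docs mention */
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);  /* reinterpret the double's 64 bits */

    uint64_t sign     = bits >> 63;                /* 1 bit             */
    uint64_t exponent = (bits >> 52) & 0x7FF;      /* 11 bits, bias 1023 */
    uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL; /* 52 bits            */

    printf("sign=%llu exponent=%llu (unbiased %lld) mantissa=0x%013llx\n",
           (unsigned long long)sign,
           (unsigned long long)exponent,
           (long long)exponent - 1023,
           (unsigned long long)mantissa);
    return 0;
}
```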
I have a very fundamental question relating to 32 bit memory addresses. My understanding is that 2^32 is the maximum number of possible memory addresses on a 32 bit system. Where I am confused is how we go from this number to the alleged 4GB limit. In my research I have seen some people do this:
2^32 = 4,294,967,296 bytes
4,294,967,296 / (1,024 * 1,024) = ~4 GB
First, where does this (1,024 * 1,024) come from?
Second, correct me if I am wrong, but 4,294,967,296 is labeled as bytes because a byte is the smallest unit of storage space that can be addressed in RAM. Since we're limited to 2^32 addresses, that's the number of bytes that can be addressed.
Third, even though the smallest addressable space in RAM is a byte, this must not be the case with the hard drive, because 32-bit systems usually have hard disks well in excess of 4 GB. Can someone briefly describe the addressing scheme for hard disks?
This is a case of basic arithmetic: the number of bytes per addressed unit times the number of addressable units equals the number of addressable bytes.
The hard part is where to get those numbers from. Here is my take on it:
1 - What is a Kilobyte, Megabyte, Gigabyte?
For RAM, there is consensus that a Gigabyte is 1024 Megabytes, each consisting of 1024 Kilobytes, each being 1024 Bytes. This stems from the fact that 1024 is 2^10, yet close enough to 1000 to historically get away with the Kilo prefix.
For storage, vendors years ago started to use strictly decimal units, a Megabyte being 1,000,000 bytes (as it makes the capacities look bigger in glossy brochures).
This has led to 1024*1024 bytes being called a MiB and 1000*1000 bytes being called a MB.
2 - The addressable unit
For RAM, the addressable unit is the byte, even if it is fetched from physical RAM in chunks of at least 4 bytes.
For mass storage, the addressable unit is the sector or block, which is most often 512 bytes, though 4096 bytes is catching up fast.
3 - The number of addressable units is much more complicated; let's start with RAM:
A 32 Bit CPU (sans the MMU!) can address 2^32 Bytes or 4 GiB
All modern 32-bit CPUs include an MMU that maps these 4 GiB of virtual address space into a physical address space
This physical address space can have a different size than 4 GiB, depending on the MMU using more (or, in prehistoric times, fewer) than 32 physical address lines. Today's most common implementations use 36 or more physical bits, resulting in 16*4 GiB or more (PAE, or Physical Address Extension)
This MMU magic does not get around the CPU running in 32-bit mode, i.e. for every single process the address space still can't be larger than 4 GiB
To make things a little more interesting, a part of this address space is used for kernel functionality in every modern OS I know of. This results in 2 GiB or 3 GiB maximum usable address space per process for all mainstream OSes.
And as this still is much too simple: Running the MMU in a mode, where it can actually use more than 4 GiB of physical RAM must be supported by the OS. A remarkable example is Windows XP 32 Bit, which does NOT allow that.
And last but not least: a part of the physical address space is most often used for memory-mapped hardware. Combined with the OS limits above, this results in 32-bit Windows XP sometimes being able to use only 2.5 to 3.5 GiB of physical RAM
It's much less of a hassle for storage:
in all modern PC-based cases I know of, the addressable units are simply counted with 32 or 48 bits (LBA, or logical block addressing). Even in its most basic version this is enough for 2 TiB of storage per disk (2^32 blocks of 512 bytes each). Maxed-out versions with 48-bit LBA and 4 KiB per block allow about a million TiB (1 EiB) per disk.
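To put numbers on the "bytes per unit times number of units" rule, here is a small, platform-independent C sketch that reproduces the figures above:

```c
#include <stdio.h>
#include <stdint.h>

/* Bytes per addressed unit times number of addressable units,
 * for the RAM and LBA cases discussed above. */
int main(void) {
    const double GiB = 1024.0 * 1024.0 * 1024.0;
    const double TiB = 1024.0 * GiB;

    uint64_t ram32   = (uint64_t)1 << 32;          /* byte-addressed, 32 address bits  */
    uint64_t ram_pae = (uint64_t)1 << 36;          /* byte-addressed, 36 physical bits */
    uint64_t lba32   = ((uint64_t)1 << 32) *  512; /* 32-bit LBA, 512-byte blocks      */
    uint64_t lba48   = ((uint64_t)1 << 48) * 4096; /* 48-bit LBA, 4 KiB blocks         */

    printf("32-bit byte addressing : %10.0f GiB\n", ram32   / GiB);  /* 4        */
    printf("36-bit PAE addressing  : %10.0f GiB\n", ram_pae / GiB);  /* 64       */
    printf("32-bit LBA, 512 B      : %10.0f TiB\n", lba32   / TiB);  /* 2        */
    printf("48-bit LBA, 4096 B     : %10.0f TiB\n", lba48   / TiB);  /* 1048576  */
    return 0;
}
```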
A computer is not all memory. The 32 bits are the width around which the instruction set is organized; 64 bits gives you more bits with which to reference more memory. I think those people meant 4,294,967,296 bit combinations, not bytes (8 bits each).
As for the math - it seems to mean that 20 bits are reserved for other uses besides specifying a possible memory address.
There are some scenarios where programmers need or want to find grossly large numbers. These are often so large that they defy the programmer's comprehension. I'm talking about things like the largest known prime number (with 12978189 digits) and the recently calculated 10 trillion digits of pi.
How can you create a program that handles these? They far exceed an integer, a long, a double, a BigInteger, a BigDecimal, or anything of the sort. How do the programs that discover these numbers get written? How can you even store them in memory when no appropriate data types exist and each one would likely consume gigabytes of memory?
To address your specific examples:
A 12-million-digit integer isn't terribly large for a typical "large integer" class to handle; at roughly 3.3 bits per decimal digit it needs only around 5 MB, so it fits comfortably in memory.
To store 10 trillion digits of π, you could use a disk file and memory-map it. You'll need a 64-bit OS and application, but you can simply create a 10-terabyte file on disk (you'll probably need a few disks and a filesystem like ZFS that can span them) and map it into the CPU's address space. The algorithms that calculate π (such as BBP) conveniently produce one hex digit at a time, which fits neatly into half a byte of memory.
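A sketch of that memory-mapping idea on a 64-bit POSIX system (the file name and size below are placeholders, and the packing of two hex digits per byte follows the half-byte observation above):

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Back a huge digit array with a disk file instead of RAM.
 * Two hex digits are packed per byte, so N digits need N/2 bytes. */
int main(void) {
    const uint64_t digits = 1000000ULL;   /* placeholder; scale up as needed */
    const uint64_t bytes  = digits / 2;

    int fd = open("pi_digits.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, (off_t)bytes) != 0) { perror("setup"); return 1; }

    uint8_t *p = mmap(NULL, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* store hex digit d (0..15) at position i: even index -> high nibble */
    uint64_t i = 12345; uint8_t d = 0xA;
    if (i % 2 == 0) p[i / 2] = (uint8_t)((p[i / 2] & 0x0F) | (d << 4));
    else            p[i / 2] = (uint8_t)((p[i / 2] & 0xF0) | d);

    munmap(p, bytes);
    close(fd);
    return 0;
}
```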
The (abstract) answer is to write algorithms using the machine's native types that produce the results you want. For instance, when you add two very large integers by hand on paper, the biggest actual calculation you need is only 9+9+1 (nine plus nine plus one for the carry). Of course you need paper large enough to write the two numbers down in the first place, and the answer as well. So as long as the two numbers and the answer can be stored on a computer's hard disk (the paper), an algorithm can be written that does it with variables that only ever need a value up to 19; even a char variable is more than capable of handling this, let alone an int.
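A minimal C sketch of that pencil-and-paper scheme, with the operands kept as decimal strings (the helper name and layout are just for illustration):

```c
#include <stdio.h>
#include <string.h>

/* Schoolbook addition of two decimal numbers stored as strings.
 * The largest intermediate value is 9 + 9 + 1 = 19, so a plain int
 * (even a char) suffices no matter how long the numbers are.
 * (When there is no final carry the result keeps a harmless leading zero.) */
void big_add(const char *a, const char *b, char *out) {
    int la = (int)strlen(a), lb = (int)strlen(b);
    int lo = (la > lb ? la : lb) + 1;          /* room for a final carry */
    int carry = 0;

    out[lo] = '\0';
    for (int i = 0; i < lo; i++) {
        int da = (i < la) ? a[la - 1 - i] - '0' : 0;
        int db = (i < lb) ? b[lb - 1 - i] - '0' : 0;
        int s  = da + db + carry;              /* at most 19 */
        out[lo - 1 - i] = (char)('0' + s % 10);
        carry = s / 10;
    }
}

int main(void) {
    char result[64];
    big_add("99999999999999999999", "1", result);
    printf("%s\n", result);                    /* 100000000000000000000 */
    return 0;
}
```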
The (concrete) answer is that really good programmers have already done this, and there are even FOSS libraries for it. One good one is the GNU Project's GMP library, which has loads of functions to handle arbitrary-size integer arithmetic and arbitrary-precision floating-point arithmetic. So as long as your computer can store the information needed during the calculation, it can be done. You'll need to invest the time to read the documentation, of course.
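As a minimal sketch of using GMP (assuming the library is installed; compile with -lgmp), here is how the 12,978,189-digit prime from the question (which appears to be the Mersenne prime 2^43112609 - 1) could be built and measured:

```c
#include <stdio.h>
#include <gmp.h>

int main(void) {
    mpz_t p;
    mpz_init(p);

    mpz_ui_pow_ui(p, 2, 43112609);     /* p = 2^43112609     */
    mpz_sub_ui(p, p, 1);               /* p = 2^43112609 - 1 */

    /* mpz_sizeinbase may over-count by one digit; close enough here. */
    printf("about %zu decimal digits\n", mpz_sizeinbase(p, 10));

    mpz_clear(p);
    return 0;
}
```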
I am doing some programming on MIPS, which has a bunch of 32-bit registers, but I also know that you can store 64-bit integers. How does this work? Does the integer take up two registers? If so, how does the system know to combine the two registers into one long string of binary?
According to Wikipedia, the 32-bit MIPS instruction set includes "Load Double Word" and "Store Double Word" instructions that load/store a pair of consecutive registers from/to memory.
For the actual arithmetic, it looks like you typically have to use multiple instructions.
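As a rough C-level picture of what those multiple instructions amount to (a sketch of the usual add-low, recover-carry-with-a-compare, add-high sequence, not any particular compiler's output):

```c
#include <stdio.h>
#include <stdint.h>

/* A 64-bit value held as two 32-bit halves, the way a 32-bit target sees it. */
typedef struct { uint32_t lo, hi; } u64_pair;

u64_pair add64(u64_pair a, u64_pair b) {
    u64_pair r;
    r.lo = a.lo + b.lo;                 /* addu: add the low words            */
    uint32_t carry = (r.lo < a.lo);     /* sltu: did the low word wrap around? */
    r.hi = a.hi + b.hi + carry;         /* addu + addu: high words plus carry  */
    return r;
}

int main(void) {
    u64_pair a = { 0xFFFFFFFFu, 0x00000001u };   /* 0x1FFFFFFFF */
    u64_pair b = { 0x00000001u, 0x00000000u };   /* 1           */
    u64_pair r = add64(a, b);
    printf("0x%08X%08X\n", r.hi, r.lo);          /* 0x0000000200000000 */
    return 0;
}
```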
You need to check the documentation for your platform, since it may vary. For example, for 32-bit MIPS, check something like this quick reference (see the "C calling convention" part).
For more details, though, you'd need a more complete reference (the quick one doesn't list any 64-bit arithmetic instructions that I could see, so if they don't exist, you'd have to implement them yourself, and then you can use your own convention for how to store the values).
I am in the process of interviewing at a few places and I saw this question in one of the discussion forums.
How many bytes are contained in a 32-bit system?
The answer given is 2^29, or 536,870,912 - I believe it's because a 32-bit system can address 2^32 bits of memory, and at 8 bits per byte that gives 2^32/8 = 2^29 bytes.
Can someone confirm if I'm on the right track?
Thanks!
The addressable unit is a byte, not a bit.
So a 32-bit pointer allows addressing 2^32 bytes.
If the question really was "How many bytes are in a 2^32-bit system?", the answer is correct.
(But it is still badly phrased.)
It's not that 2**32 bits are accessible; it's that 2**32 words are accessible. If we say 4 bytes per word, then 2**34 bytes is a closer value.
That said, traditional systems are byte-oriented and can therefore access 2**32 bytes.