Signed integer conversion - binary

What is -10234(10) in binary, with a fixed width of 16 bits, in 1) one's complement, 2) two's complement, and 3) signed magnitude?
Please walk me through it step by step; I feel confused about the three representations. Many thanks.

That sounds like a homework problem. I'm not going to do your homework for you, because the goal is for you to learn, but I can explain the ideas in my own words. In my experience, most people who get lost on this stuff just need to hear it said in a way that works for them, rather than having the same explanation repeated.
The first thing you need to understand is what the positive version of that number is in base 2. Since the problem gives you 16 bits to hold the signed version, you'll only have 15 bits for the magnitude itself.
As far as how to make it negative...
When you're doing signed magnitude, you have one of those bits signal whether the number is positive or negative. For an example, I'll do 4 bits of signed magnitude. Our number starts off as 3, that is 0011. The sign bit is always the most significant bit, so -3 would be 1011.
When you're doing one's complement, you just flip all of the bits. (So if you had an 8-bit one's complement number that's currently positive - let's say it's 25(9+1), or 00011001(1+1) - then to negate that 25 in one's complement you'd flip all of those bits, so -25(9+1) is 11100110(1+1) in one's complement.)
Two's complement is the same sort of thing, except that rather than having all 1s (11111111(1+1) for the 8-bit version) be -0, a number we rarely care to distinguish from +0, it adjusts all of the negative numbers by one so that pattern is now -1.
Note that I'm giving the bases in the form of number+1, because every base is base 10 in that base. But that's me, a grizzled computer professional; if you're still in school, represent bases the way your instructor tells you to, but understand they're crazy. (I can prove they're crazy: 1. They're human. 2. QED. In future years when some people are just learning from AIs, the proof is slightly more complicated: 1. They were made, directly or indirectly, by humans. 2. All humans are crazy. 3. QED.)
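If it helps to see the three representations side by side, here's a minimal C sketch using the 8-bit 25 example from above (deliberately not your homework number; print_bits is just a throwaway helper of mine):

    #include <stdio.h>
    #include <stdint.h>

    /* Print the low 'width' bits of v, most significant bit first. */
    static void print_bits(uint16_t v, int width) {
        for (int i = width - 1; i >= 0; i--)
            putchar(((v >> i) & 1) ? '1' : '0');
        putchar('\n');
    }

    int main(void) {
        const int width = 8;                   /* 8-bit examples, as above      */
        const uint16_t mask = (1u << width) - 1;
        uint16_t pos = 25;                     /* +25 is 00011001               */

        uint16_t sign_mag = (1u << (width - 1)) | pos; /* just set the sign bit */
        uint16_t ones     = (~pos) & mask;             /* flip every bit        */
        uint16_t twos     = (ones + 1) & mask;         /* flip, then add 1      */

        printf("+25                : "); print_bits(pos, width);
        printf("-25, sign-magnitude: "); print_bits(sign_mag, width);
        printf("-25, one's compl.  : "); print_bits(ones, width);
        printf("-25, two's compl.  : "); print_bits(twos, width);
        return 0;
    }

The last line also shows the usual shortcut for two's complement: flip the bits, then add one.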

Related

Does Actionscript have a math specification?

This Flash game has a lot of players including me and some friends. We noticed the same thing can run differently for different people. The math in the simulation is definitely to blame. Whether the cause is in hardware, OS, browser, 32-bit/64-bit, etc. is not really known. But with the combinations we have to test with, we've gotten 5 distinct end results from the same simulation starting conditions, and can likely get more.
This makes me wonder, does Actionscript have a floating point math specification? If so, what does it say about the accuracy and determinism of the computations?
I compare to Java, which differentiates between regular floating-point math with the Math class and deterministic floating-point math with the StrictMath class and strictfp keyword. Both are always within 1 ulp of the exact result; this also implies that regular math and strict math always give results within 1 ulp of each other for a single operation or function call. The docs are very clear about this. I'd expect other respectable languages to have something similar, saying how accurate their floating-point computations are and whether they give the same results everywhere.
Update since some people have been saying the game is dishonest:
Some others have taken apart the swf and even made mods for it; they've seen the game engine and can confirm there is no randomness. Box2D is used for the physics. If a design ever does run differently on subsequent runs, it has actually changed due to some bug. Usually this is a visible difference, but if not, you can check the raw data with this tool and see that it is different. Different starting conditions, as expected, get different end results.
As for what we know so far, these are the results on a test level:
For example, if I am running 32-bit Chrome on my desktop (AMD A10-5700 as CPU), I will always get that result of "946 ticks". But if I run on Firefox or Internet Explorer instead I always get the result of "794 ticks".
Actionscript doesn't really have a math specification in that sense. This is the closest you'll get:
https://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/Math.html
It says at the bottom of the top section:
The Math functions acos, asin, atan, atan2, cos, exp, log, pow, sin, and sqrt may result in slightly different values depending on the algorithms used by the CPU or operating system. Flash runtimes call on the CPU (or operating system if the CPU doesn't support floating point calculations) when performing the calculations for the listed functions, and results have shown slight variations depending upon the CPU or operating system in use.
So, to answer your two questions:
What does it say about accuracy? Nothing, actually. At no point does it mention a limit to how inaccurate a result can be.
What does it say about determinism? Hardware and operating system are definitely factors, so it is platform-dependent. No confirmation for other factors.
If you want to look any deeper, you're on your own.
According to the docs, Actionscript has a catch-all Number data type in addition to int and uint types:
The Number data type uses the 64-bit double-precision format as specified by the IEEE Standard for Binary Floating-Point Arithmetic (IEEE-754). This standard dictates how floating-point numbers are stored using the 64 available bits. One bit is used to designate whether the number is positive or negative. Eleven bits are used for the exponent, which is stored as base 2. The remaining 52 bits are used to store the significand (also called mantissa), the number that is raised to the power indicated by the exponent.
By using some of its bits to store an exponent, the Number data type can store floating-point numbers significantly larger than if it used all of its bits for the significand. For example, if the Number data type used all 64 bits to store the significand, it could store a number as large as 2^65 - 1. By using 11 bits to store an exponent, the Number data type can raise its significand to a power of 2^1023.
Although this range of numbers is enormous, it comes at the cost of precision. Because the Number data type uses 52 bits to store the significand, numbers that require more than 52 bits for accurate representation, such as the fraction 1/3, are only approximations. If your application requires absolute precision with decimal numbers, use software that implements decimal floating-point arithmetic as opposed to binary floating-point arithmetic.
This could account for the varying results you're seeing.
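If you want to see that 1/11/52 layout for yourself, here's a small C sketch that just reinterprets a double's 64 bits (plain IEEE-754 poking, nothing Flash-specific; dump_double is my own helper). Comparing these raw bit patterns is also a convenient way to check whether two runs really produced identical numbers:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Split an IEEE-754 double into its sign, exponent and significand fields. */
    static void dump_double(double d) {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);                 /* reinterpret the 64 bits */
        uint64_t sign     = bits >> 63;                 /* 1 bit                   */
        uint64_t exponent = (bits >> 52) & 0x7FF;       /* 11 bits, biased by 1023 */
        uint64_t mantissa = bits & ((1ULL << 52) - 1);  /* 52 bits of significand  */
        printf("%.17g\tsign=%llu exp=%llu mantissa=0x%013llx\n",
               d, (unsigned long long)sign, (unsigned long long)exponent,
               (unsigned long long)mantissa);
    }

    int main(void) {
        dump_double(1.0 / 3.0);   /* 1/3 can only be approximated in binary */
        dump_double(0.1);         /* so can 0.1                             */
        dump_double(946.0);       /* small integers are exact               */
        return 0;
    }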

Is there a relationship between doing these bit operations and the numbers they produce?

Hey guys, I'm a second-year uni student and I'm really new to using bits and bitwise operations. When I take the binary of 3 (0011), reverse it (1100), and then shift it right, I get 0110, which is 6 as a decimal number. If I do this with 2, I get 2. I was wondering if there is some generic relationship to find what n would become after doing those two operations on it, because I think it might be the key to one of my homework questions.
Also, does anyone have a good resource to learn about simple bitwise operations and their properties in general, for a school kid?
When I take the binary of 3 (0011), reverse it (1100), and then
shift it right, I get 0110, which is 6 as a decimal number.
You are assuming a 4-bit number; what if it's a 16- or 32-bit number system? Your results will change accordingly.
Shifting right is really two operations: unsigned right shift and signed right shift. It seems you performed an unsigned right shift, since a signed right shift fills the vacated bits according to the sign: if the number is negative (most significant bit is 1), all the filled bits will be 1, otherwise 0.
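In C terms, the two flavours fall out of the operand's type. A small sketch (the signed line relies on the usual arithmetic-shift behaviour, which the C standard technically leaves implementation-defined):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t u = 0xC4;             /* 1100 0100, top bit set           */
        int8_t  s = (int8_t)-60;      /* the same bit pattern, but signed */

        /* Unsigned (logical) shift: the vacated bit is filled with 0. */
        printf("unsigned 0x%02X >> 1 = 0x%02X\n",
               (unsigned)u, (unsigned)(uint8_t)(u >> 1));           /* 0x62 */

        /* Signed (arithmetic) shift: most compilers copy the sign bit in,
         * so a negative number stays negative.  (Right-shifting a negative
         * value is implementation-defined in standard C.)                 */
        printf("signed   0x%02X >> 1 = 0x%02X (%d)\n",
               (unsigned)(uint8_t)s, (unsigned)(uint8_t)(s >> 1),
               (int)(s >> 1));                                 /* 0xE2, -30 */
        return 0;
    }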
I was wondering if there is some generic relationship to find what n
would become after doing those two operations on it.
The answer to that is: it totally varies.
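To make that concrete, here's a throwaway C sketch (reverse_bits is my own helper, not a standard function) that repeats your reverse-then-shift experiment at a few assumed widths; with n = 3 you get 6 at 4 bits, 96 at 8 bits and 24576 at 16 bits, so the result depends entirely on the width you assume:

    #include <stdio.h>
    #include <stdint.h>

    /* Reverse the low 'width' bits of v (a simple loop, nothing clever). */
    static unsigned reverse_bits(unsigned v, int width) {
        unsigned r = 0;
        for (int i = 0; i < width; i++)
            if (v & (1u << i))
                r |= 1u << (width - 1 - i);
        return r;
    }

    int main(void) {
        unsigned n = 3;
        for (int width = 4; width <= 16; width *= 2) {
            unsigned result = reverse_bits(n, width) >> 1;
            printf("n=3, width=%2d: reverse then shift right -> %u\n",
                   width, result);
        }
        return 0;
    }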
Also, does anyone have a good resource to learn about simple bitwise
operations and their properties in general, for a school kid?
Luckily, I have just recently blogged about Number System and Bit operations.

How does a computer work out if a value is greater than another?

I understand basic binary logic and how to do basic addition, subtraction, etc. I get that each of the characters in this text is just a binary number representing a number in a charset. The numbers don't really mean anything to the computer. I'm confused, however, as to how a computer works out that a number is greater than another. What does it do at the bit level?
If you have two numbers, you can compare each bit, from most significant to least significant, using a 1-bit comparator gate:
Of course n-bit comparator gates exist and are described further here.
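As a software stand-in for that hardware (not the gate diagram itself), here's a hedged C sketch (compare_bits is my own name) that walks the bits from the most significant end, the way a comparator cascade would, for unsigned values:

    #include <stdio.h>

    /* Emulate a cascade of 1-bit comparators: walk from the most significant
     * bit down, and stop at the first position where the bits differ.
     * Returns 1 if a > b, -1 if a < b, 0 if equal.  Unsigned values only. */
    static int compare_bits(unsigned a, unsigned b, int width) {
        for (int i = width - 1; i >= 0; i--) {
            unsigned abit = (a >> i) & 1, bbit = (b >> i) & 1;
            if (abit != bbit)
                return abit ? 1 : -1;  /* whoever has the 1 here is bigger */
        }
        return 0;                      /* all bits equal */
    }

    int main(void) {
        printf("%d\n", compare_bits(0x2D, 0x2A, 8));  /* 1: 0010 1101 > 0010 1010 */
        printf("%d\n", compare_bits(5, 9, 8));        /* -1 */
        printf("%d\n", compare_bits(7, 7, 8));        /* 0  */
        return 0;
    }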
It subtracts one from the other and sees if the result is less than 0 (by checking the highest-order bit, which is 1 on a number less than 0 since computers use 2's complement notation).
http://academic.evergreen.edu/projects/biophysics/technotes/program/2s_comp.htm
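A rough C version of that idea (real CPUs just look at the flag bits the subtraction sets; the wider type here is only there to dodge signed overflow in portable C, and is_greater is my own name):

    #include <stdio.h>
    #include <stdint.h>

    /* "a > b" done the hardware way: subtract, then inspect the sign bit. */
    static int is_greater(int32_t a, int32_t b) {
        int64_t diff = (int64_t)a - (int64_t)b;   /* widen so it can't overflow  */
        uint64_t sign = (uint64_t)diff >> 63;     /* 1 if the result is negative */
        return sign == 0 && diff != 0;            /* non-negative and non-zero   */
    }

    int main(void) {
        printf("%d %d %d\n", is_greater(7, 3), is_greater(-5, 2), is_greater(4, 4));
        /* prints: 1 0 0 */
        return 0;
    }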
It subtracts the two numbers and checks whether the result is positive, negative (highest bit - aka "the minus bit" - is set), or zero.
Within the processor, there will often be microcode to do operations, using hardwired operations, such as add/subtract, that are already there.
So, to compare two integers, the microcode can just do a subtraction and, based on the result, determine if one is greater than the other.
Microcode is basically just low-level programs that will be called by assembly, to make it look like there are more commands than are actually hardwired on the processor.
You may find this useful:
http://www.osdata.com/topic/language/asm/intarith.htm
I guess it does a bitwise comparison of two numbers from the most significant bit to the least significant bit, and when they differ, the number with the bit set to "1" is the greater.
In a Big-endian architecture, the comparison of the following Bytes:
A: 0010 1101
B: 0010 1010
would result in A being greater than B, because its 6th bit (from the left) is set to one while the preceding bits are equal to those of B.
But this is just a quick theoretical answer, with no concern for floating-point numbers or negative numbers.
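The same idea can be phrased with ordinary bit operations and no explicit loop. Here's a hedged sketch for unsigned 32-bit values (the bit-smearing trick is a common idiom, not something from the answer above; compare_u32 is my own name):

    #include <stdio.h>
    #include <stdint.h>

    /* Returns 1 if a > b, -1 if a < b, 0 if equal (unsigned comparison). */
    static int compare_u32(uint32_t a, uint32_t b) {
        uint32_t diff = a ^ b;              /* 1s exactly where a and b differ    */
        if (diff == 0) return 0;            /* no differing bit: equal            */
        diff |= diff >> 1;  diff |= diff >> 2;   /* smear the highest differing   */
        diff |= diff >> 4;  diff |= diff >> 8;   /* bit down into all lower bits  */
        diff |= diff >> 16;
        uint32_t top = diff ^ (diff >> 1);  /* keep only the highest differing bit */
        return (a & top) ? 1 : -1;          /* whoever owns that bit is larger     */
    }

    int main(void) {
        printf("%d\n", compare_u32(0x2D, 0x2A));  /* 1: the example bytes above */
        printf("%d\n", compare_u32(5, 9));        /* -1 */
        return 0;
    }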

1's and 2's complement systems

I'm trying to understand the differences between these two systems and their impact on C programming.
From what I've learned from Wikipedia:
both systems are used to represent negative numbers
one's complement negates a number by applying bitwise NOT to it (the system has both +0 and -0)
two's complement does the same as above and then adds 1 (which eliminates the +/-0 pair)
Am I missing something else?
My questions:
which architectures support which system? What is the most common these days (1's or 2's complement)?
in what sense should we consider these systems when programming in C? Does it mainly matter only in the embedded world?
Thanks in advance!
Most systems nowadays use two's complement, since it lets the computer do the same exact operation for addition/subtraction without caring about the particular sign of the number.
When you're programming, the arithmetic works regardless of the system used -- the ranges of the data types are defined by the language, so if it says a type will work in the range -2^31 to +2^31 - 1, then it'll work regardless of the notation. You need to be careful when working with individual bits or bit shifts, though -- those won't behave like power-of-two arithmetic on non-two's-complement systems (although you're not too likely to encounter such systems, and probably never will if you're just working with PCs).
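A small C illustration of both halves of that (the signed-shift result at the end is the usual implementation-defined behaviour on two's-complement compilers, so treat it as a sketch):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* On a two's-complement machine the same bit pattern serves as both a
         * signed and an unsigned value, and the adder doesn't care which.     */
        uint8_t bits = 0xFB;                                /* 1111 1011       */
        printf("as unsigned: %u\n", (unsigned)bits);        /* 251             */
        printf("as signed:   %d\n", (int)(int8_t)bits);     /* -5              */
        printf("+7 gives:    0x%02X\n",
               (unsigned)(uint8_t)(bits + 7));  /* 0x02: 251+7 and -5+7 are the
                                                   same 8-bit addition         */

        /* The shift caveat: >> on a negative value isn't quite division by 2. */
        int x = -7;
        printf("-7 / 2  = %d\n", x / 2);    /* -3 (C truncates toward zero)    */
        printf("-7 >> 1 = %d\n", x >> 1);   /* usually -4 (rounds toward -inf) */
        return 0;
    }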
The only advantage of ones'-complement notation for integers is that it allows conversions to and from sign-magnitude form to be performed without a carry chain. Building a computer with a set of blinkenlights that show each register's value in sign-magnitude form will be much more convenient if the registers use ones'-complement form than if they use two's-complement form.
If one wanted to use separate storage latches for the blinkenlights and the CPU's registers, the easiest way to accommodate things would be to have one circuit which translates two's-complement to ones'-complement or sign-magnitude form, and then have each register write store the two's-complement value in the register while simultaneously updating the blinkenlight latches with the sign-magnitude value. Latching circuitry is sufficiently expensive, however, that if registers are being built out of discrete latches anyway, adding some circuitry to the ALU to make it use ones'-complement, and then feeding the lights from the CPU's "real" registers, may be cheaper than including an extra set of latches for the lights.
Over the last few decades, of course, the relative costs of different circuit elements have shifted to the point that it would be absurd to have lights wired to directly report the state of a CPU's registers. Consequently, the practical advantages that ones'-complement designs might have had in the past are no longer applicable.
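To make that first point concrete, here's an 8-bit C sketch of my own (not from the answer): the ones'-complement-to-sign-magnitude step is a pure per-bit inversion, while the two's-complement version needs the "+1", which is exactly the carry chain being talked about.

    #include <stdio.h>
    #include <stdint.h>

    /* Ones'-complement word -> sign-magnitude word, 8 bits wide.
     * If the sign bit is set, just invert the low 7 bits: one operation per
     * bit, with no carries rippling between bit positions.                 */
    static uint8_t ones_to_signmag(uint8_t w) {
        return (w & 0x80) ? (uint8_t)(0x80 | (~w & 0x7F)) : w;
    }

    /* Two's-complement -> sign-magnitude needs the "+1", i.e. a carry chain. */
    static uint8_t twos_to_signmag(uint8_t w) {
        return (w & 0x80) ? (uint8_t)(0x80 | ((~w + 1) & 0x7F)) : w;
    }

    int main(void) {
        /* -25 again: 0xE6 in ones' complement, 0xE7 in two's complement,
         * 0x99 in sign-magnitude.                                          */
        printf("0x%02X 0x%02X\n",
               (unsigned)ones_to_signmag(0xE6),
               (unsigned)twos_to_signmag(0xE7));   /* 0x99 0x99 */
        return 0;
    }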

Why is it useful to count the number of bits?

I've seen the numerous questions about counting the number of set bits in an (insert type here), but why is it useful?
For those looking for algorithms about bit counting, look here:
Counting common bits in a sequence of unsigned longs
Fastest way to count number of bit transitions in an unsigned int
How to count the number of set bits in a 32-bit integer?
You can regard a string of bits as a set, with a 1 representing membership of the set for the corresponding element. The bit count therefore gives you the population count of the set.
Practical applications include compression, cryptography and error-correcting codes. See e.g. wikipedia.org/wiki/Hamming_weight and wikipedia.org/wiki/Hamming_distance.
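A quick C sketch of the set view (the Kernighan-style loop below is just one of the counting methods from the linked questions, and popcount32 is my own name):

    #include <stdio.h>
    #include <stdint.h>

    /* Portable popcount (Kernighan's trick): clearing the lowest set bit each
     * iteration means the loop runs once per 1-bit.                          */
    static int popcount32(uint32_t x) {
        int n = 0;
        while (x) { x &= x - 1; n++; }
        return n;
    }

    int main(void) {
        /* Treat a 32-bit word as a set of small integers: bit i set <=> i in set. */
        uint32_t a = (1u << 1) | (1u << 4) | (1u << 9);   /* {1, 4, 9}  */
        uint32_t b = (1u << 4) | (1u << 9) | (1u << 20);  /* {4, 9, 20} */

        printf("|a|              = %d\n", popcount32(a));      /* 3 */
        printf("|a intersect b|  = %d\n", popcount32(a & b));  /* 2 */
        printf("|a union b|      = %d\n", popcount32(a | b));  /* 4 */
        printf("Hamming distance = %d\n", popcount32(a ^ b));  /* 2 */
        return 0;
    }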
If you're rolling your own parity scheme, you might want to count the number of bits. (In general, of course, I'd rather use somebody else's.) If you're emulating an old computer and want to keep track of how fast it would have run on the original, some had multiplication instructions whose speed varied with the number of 1 bits.
I can't think of any time I've wanted to do it over the past ten years or so, so I suspect this is more of a programming exercise than a practical need.
In an ironic sort of fashion, it's useful for an interview question because it requires some detailed low-level thinking and doesn't seem to be taught as a standard algorithm in comp sci courses.
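For the parity idea mentioned above, the link to bit counting is direct: the parity bit is just the population count modulo 2. A toy sketch (my own, not any particular protocol):

    #include <stdio.h>
    #include <stdint.h>

    /* Even parity: append a bit so the total number of 1s is even.
     * The parity bit is simply (popcount of the payload) mod 2.    */
    static unsigned parity_bit(uint8_t payload) {
        unsigned ones = 0;
        for (uint8_t x = payload; x; x &= x - 1)   /* Kernighan bit count */
            ones++;
        return ones & 1;
    }

    int main(void) {
        printf("%u\n", parity_bit(0x41));  /* 0100 0001 -> two 1s   -> 0 */
        printf("%u\n", parity_bit(0x07));  /* 0000 0111 -> three 1s -> 1 */
        return 0;
    }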
Some people like to use bitmaps to indicate presence/absence of "stuff".
There's a simple hack that turns the least-significant 1 bit in a word, together with all the bits below it, into a field of ones, and then you can find the bit number by counting the 1-bits.
countbits((x XOR (x-1)))-1;
Watch it work.
Let x = 00101100
Then x-1 = 00101011
x XOR x-1 = 00000111
Which has 3 bits set, so bit 2 was the least-significant 1-bit in the original word.
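And here's that expression wrapped into runnable C (the countbits loop is my own stand-in; GCC and Clang also provide builtins such as __builtin_popcount and __builtin_ctz for this sort of thing):

    #include <stdio.h>
    #include <stdint.h>

    /* Bit count, as in the expression above. */
    static int countbits(uint32_t x) {
        int n = 0;
        while (x) { x &= x - 1; n++; }
        return n;
    }

    /* Index of the least-significant 1-bit (not meaningful for x == 0). */
    static int lowest_set_bit(uint32_t x) {
        return countbits(x ^ (x - 1)) - 1;
    }

    int main(void) {
        printf("%d\n", lowest_set_bit(0x2C));  /* 0010 1100 -> bit 2 */
        printf("%d\n", lowest_set_bit(0x80));  /* 1000 0000 -> bit 7 */
        printf("%d\n", lowest_set_bit(1));     /* bit 0              */
        return 0;
    }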