Adding two 32-bit numbers in an ALU - binary

https://electronics.stackexchange.com/questions/56488/parallel-multiplication-hardware/56518#56518
I was looking at the question linked above about multiple adders.
The answer says that wider adders are needed as you move down the tree.
Suppose I multiply two 32-bit numbers.
Let's say in the first layer I add pairs of 32-bit numbers in 16 parallel adders.
The answer says the result would be sixteen 34-bit values.
But if we add two 32-bit numbers, isn't 33 bits long enough?
We can represent the overflow and sign with only one additional bit.
Why do we need a 34-bit adder instead of a 33-bit one?
Here's another link with a similar question.
https://cs.stackexchange.com/questions/95733/difficulty-understanding-the-faster-multiplication-hardware
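
(For what it's worth, a quick sketch in Java, assuming unsigned operands, confirms the arithmetic behind the question: the largest possible sum of two 32-bit values is (2^32 - 1) + (2^32 - 1) = 2^33 - 2, which fits in 33 bits.)

public class SumWidthCheck {
    public static void main(String[] args) {
        long max32 = 0xFFFFFFFFL;          // largest unsigned 32-bit value
        long sum = max32 + max32;          // worst-case sum: 2^33 - 2
        System.out.println(Long.toBinaryString(sum).length()); // prints 33
    }
}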

Is there a relationship between doing these bit operations and the numbers they produce?

Hey guys, I'm a second-year uni student and I'm really new to using bits and bitwise operations. When I take the binary of 3 (0011), reverse it (1100), and then shift it right, I get (0110), which is 6 as a decimal number. If I do this with 2, I get 2. I was wondering if there is some generic relationship to find what n would become after doing those two operations on it, because I think it might be the key to one of my homework questions.
Also, does anyone have a good resource for learning about simple bitwise operations and properties in general, for a school kid?
When I take the binary of 3 (0011), reverse it (1100), and then shift it right, I get (0110), which is 6 as a decimal number.
You are assuming a 4-bit number; what if it is a 16-bit or 32-bit number system? Your results will change accordingly.
There are two right-shift operations: unsigned right shift and signed right shift. It seems you performed an unsigned right shift, since a signed right shift fills in bits according to the sign: if the number is negative (most significant bit is 1), all shifted-in bits will be 1; otherwise they will be 0.
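
For example, a minimal sketch in Java, where >> is the signed (arithmetic) shift and >>> is the unsigned (logical) one:

public class ShiftDemo {
    public static void main(String[] args) {
        int negative = -8;                   // 0xFFFFFFF8 in two's complement
        System.out.println(negative >> 1);   // -4: signed shift copies the sign bit
        System.out.println(negative >>> 1);  // 2147483644: unsigned shift fills with 0
    }
}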
I was wondering if there is some generic relationship to find what n would become after doing those two operations on it.
The answer to this is: it totally varies.
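
To make that concrete, here is a small sketch in Java (reverseThenShift is just an illustrative helper). Integer.reverse reverses all 32 bits, so we shift the result down to an assumed width before the final right shift:

public class ReverseShift {
    // Reverse the low `width` bits of n, then shift right once.
    static int reverseThenShift(int n, int width) {
        int reversed = Integer.reverse(n) >>> (32 - width);
        return reversed >>> 1;
    }

    public static void main(String[] args) {
        System.out.println(reverseThenShift(3, 4)); // 6  (0011 -> 1100 -> 0110)
        System.out.println(reverseThenShift(3, 8)); // 96 (00000011 -> 11000000 -> 01100000)
    }
}

The same n gives a different result for each assumed width, which is why there is no width-independent formula.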
Also does anyone have a good resource to learn about simple bitwise
operations and properties in general for a school kid
Luckily, I have just recently blogged about number systems and bit operations.

How to clear the leftmost bit (that is set) of a binary number

I was reading a paper about a binary multiplier unit. The paper proposes an architecture for an iterative logarithmic multiplier. The block diagram of the proposed architecture is shown below:
The entire block diagram is not important; the question relates to a very small part of it. I wish to know how N1 - 2^k1 is calculated. The diagram shows an LOD (I don't know what that is) block, followed by an XOR gate, and it generates the value of N1 - 2^k1, which is basically clearing the leftmost bit that is set.
I don't understand this. Please help.
They mention in the text that it's a Leading One Detector; that is, it forms a mask that indicates which bit (if any) is the leading one in its input. That's actually kind of obvious, because that's precisely what we know it should be doing from the semantics.
A possible implementation of the LOD is given in one of the references.
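
In software terms, the same operation can be sketched in Java, with Integer.highestOneBit standing in for the LOD (it returns a mask with only the leading one set) and XOR clearing that bit; clearLeadingOne is just an illustrative name:

public class ClearLeadingOne {
    static int clearLeadingOne(int n) {
        int mask = Integer.highestOneBit(n); // the LOD output: only the leading one set
        return n ^ mask;                     // XOR clears exactly that bit
    }

    public static void main(String[] args) {
        System.out.println(Integer.toBinaryString(clearLeadingOne(0b1101))); // 101
    }
}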

How can you handle absurdly large numbers?

There are some scenarios where programmers need or want to work with absurdly large numbers. These are often so large that they defy the programmer's comprehension. I'm talking about things like the largest known prime number (with 12,978,189 digits) and the recently calculated 10 trillion digits of pi.
How can you create a program that handles these? They far exceed an integer, a long, a double, a BigInteger, a BigDecimal, or anything of the sort. How do the programs that discover these numbers get created? How can you even store them in memory when no appropriate datatypes exist and each would likely consume gigabytes of data?
To address your specific examples:
A 12-million-digit integer isn't terribly large for a typical "large integer" class to handle; in binary form it takes only about 5.4 MB, so it can easily be stored in memory.
To store 10 trillion digits of π, you could use a disk file and memory-map it. You'll need a 64-bit OS and application, but you can simply create a 10-terabyte file on disk (you'll probably need a few disks and a filesystem like ZFS that can span them) and map it into CPU address space. The algorithms that calculate π (such as BBP) conveniently produce one hex digit at a time, which fits into half a byte of memory.
The (abstract) answer is to write algorithms using the machine's native types that produce the results you want. For instance, when you add two very large integers by hand on paper, the biggest single calculation you need is only 9+9+1 (nine plus nine, plus one for the carry). Of course, you need paper large enough to write down the two numbers and the answer. So as long as the two numbers and the answer can be stored on a computer's hard disk (the paper), an algorithm can be written that does it with variables that only ever need to hold values up to 19; even a char variable is more than capable of handling this, let alone an int.
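
As a minimal sketch of that pencil-and-paper algorithm in Java, with each number stored as an array of decimal digits (least significant digit first; add is just an illustrative name):

static int[] add(int[] a, int[] b) {
    int n = Math.max(a.length, b.length);
    int[] sum = new int[n + 1];
    int carry = 0;
    for (int i = 0; i < n; i++) {
        int d = carry + (i < a.length ? a[i] : 0) + (i < b.length ? b[i] : 0); // at most 9 + 9 + 1
        sum[i] = d % 10;
        carry = d / 10;
    }
    sum[n] = carry; // final carry, if any
    return sum;
}

No intermediate value ever exceeds 19, exactly as described above; the arrays can be as long as storage allows.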
The (concrete) answer is that really good programmers have already done this, and there are even FOSS libraries for it. A good one is the GNU Project's GMP library, which has loads of functions to handle arbitrary-size integer arithmetic and arbitrary-precision floating-point arithmetic. So as long as your computer can store the information needed during the calculation, it can be done. You'll need to invest the time to read the documentation, of course.
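
In Java, the built-in java.math.BigInteger plays the same role as GMP does in C. For instance, the 12,978,189-digit record prime mentioned in the question is 2^43112609 - 1, and BigInteger holds it in roughly the 5.4 MB estimated above:

import java.math.BigInteger;

public class BigDemo {
    public static void main(String[] args) {
        // 2^43112609 - 1, built with a shift rather than repeated multiplication
        BigInteger p = BigInteger.ONE.shiftLeft(43112609).subtract(BigInteger.ONE);
        System.out.println(p.bitLength()); // 43112609
    }
}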

How does a computer work out if a value is greater than another?

I understand basic binary logic and how to do basic addition, subtraction, etc. I get that each of the characters in this text is just a binary number representing a number in a charset. The numbers don't really mean anything to the computer. I'm confused, however, as to how a computer works out that a number is greater than another. What does it do at the bit level?
If you have two numbers, you can compare them bit by bit, from most significant to least significant, using 1-bit comparator gates.
Of course n-bit comparator gates exist and are described further here.
It subtracts one from the other and sees if the result is less than 0 (by checking the highest-order bit, which is 1 on a number less than 0 since computers use 2's complement notation).
http://academic.evergreen.edu/projects/biophysics/technotes/program/2s_comp.htm
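
A sketch of that subtract-and-check-sign approach in Java (lessThan is just an illustrative name; it widens to 64 bits so the subtraction itself cannot overflow, whereas real hardware consults its overflow flag instead):

static boolean lessThan(int a, int b) {
    long diff = (long) a - (long) b; // cannot overflow in 64 bits
    return (diff >>> 63) == 1;       // sign bit of the difference
}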
It subtracts the two numbers and checks if the result is positive, negative (the highest bit, aka "the minus bit", is set), or zero.
Within the processor, there is often microcode to perform operations using hardwired functionality, such as add/subtract, that is already there.
So, to compare two integers, the microcode can just do a subtraction and, based on the result, determine whether one is greater than the other.
Microcode is basically just low-level programs that are invoked by assembly instructions, making it look like there are more instructions than are actually hardwired in the processor.
You may find this useful:
http://www.osdata.com/topic/language/asm/intarith.htm
I guess it does a bitwise comparison of two numbers from the most significant bit to the least significant bit, and where they first differ, the number with the bit set to "1" is the greater one.
In a big-endian representation, comparing the following bytes:
A: 0010 1101
B: 0010 1010
would result in A being greater than B, since its 6th bit (from the left) is set to one while the preceding bits are equal to those of B.
But this is just a quick theoretical answer, with no concern for floating-point or negative numbers.
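
The idea in this answer can be sketched in Java for unsigned 8-bit values (compareUnsigned8 is just an illustrative name):

static int compareUnsigned8(int a, int b) {
    for (int i = 7; i >= 0; i--) {            // most significant bit first
        int bitA = (a >> i) & 1, bitB = (b >> i) & 1;
        if (bitA != bitB) return bitA - bitB; // 1 => a greater, -1 => b greater
    }
    return 0;                                 // all bits equal
}

With the bytes above, compareUnsigned8(0b00101101, 0b00101010) returns 1 at the first differing bit.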

Easiest way to find the correct Kademlia bucket

In the Kademlia protocol, node IDs are 160-bit numbers. Nodes are stored in buckets: bucket 0 stores all the nodes whose ID matches this node's ID in all but the very last bit, bucket 1 stores all the nodes whose ID matches except for the last 2 bits, and so on for all 160 buckets.
What's the fastest way to find which bucket I should put a new node into?
I have my buckets simply stored in an array, and need a method like so:
Bucket[] buckets; // array with 160 items

public Bucket GetBucket(Int160 myId, Int160 otherId)
{
    // some stuff goes here
}
The obvious approach is to work down from the most significant bit, comparing bit by bit until I find a difference. I'm hoping there is a better approach based on clever bit-twiddling.
Practical note:
My Int160 is stored in a byte array with 20 items; solutions that work well with that kind of structure will be preferred.
Would you be willing to consider an array of five 32-bit integers (or three 64-bit integers)? Working with whole words may give you better performance than working with bytes, but the method works either way.
XOR the corresponding words of the two node IDs, starting with the most significant. If the XOR result is zero, move on to the next most significant word.
Otherwise, find the most significant bit that is set in this XOR result using the constant-time method from Hacker's Delight. That algorithm yields 32 (or 64) if the most significant bit is set, 1 if the least significant bit is set, and so on. This index, combined with the index of the current word, tells you which bit is different.
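
A sketch of this answer in Java, adapted to the asker's byte[20] layout rather than 32-bit words (bucketIndex is just an illustrative name; Integer.numberOfLeadingZeros is Java's constant-time leading-one finder, and the bucket numbering follows the question: bucket 0 holds IDs that differ only in the very last bit):

static int bucketIndex(byte[] myId, byte[] otherId) {
    for (int i = 0; i < 20; i++) {              // most significant byte first
        int x = (myId[i] ^ otherId[i]) & 0xFF;  // differing bits in this byte
        if (x != 0) {
            int bitFromLeft = i * 8 + (Integer.numberOfLeadingZeros(x) - 24);
            return 159 - bitFromLeft;           // last-bit difference maps to bucket 0
        }
    }
    return -1;                                  // identical IDs: no bucket
}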
For starters you could compare byte-by-byte (or word-by-word), and when you find a difference search within that byte (or word) for the first bit of difference.
It seems vaguely implausible to me that adding a node to an array of buckets will be so fast that it matters whether you do clever bit-twiddling to find the first bit of difference within a byte (or word), or just churn in a loop up to CHAR_BIT (or something). Possible, though.
Also, if IDs are essentially random with uniform distribution, then you will find a difference in the first 8 bits about 255/256 of the time. If all you care about is average-case behaviour, not worst-case, then just do the stupid thing: it's very unlikely that your loop will run for long.
For reference, though, the first bit of difference between numbers x and y is the first bit set in x ^ y. If you were programming in GNU C, __builtin_clz might be your friend. Or possibly __builtin_ctz, I'm kind of sleepy...
Your code looks like Java, though, so I guess the bitfoo you're looking for is integer log.
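
For completeness, the Java counterpart of __builtin_clz is Integer.numberOfLeadingZeros, which gives that "integer log" directly:

public class FirstDifference {
    public static void main(String[] args) {
        int x = 0b0010_1101, y = 0b0010_1010;
        int pos = 31 - Integer.numberOfLeadingZeros(x ^ y); // highest differing bit
        System.out.println(pos); // 2 (counted from the least significant bit)
    }
}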