Bitwise Multiply - binary

Just a quick question: your code adds up bits into digits of 2 or higher, which binary does not allow. For instance, suppose n1 = 1111 and n2 is the same; then your code shows the result 1234321, which is not a valid binary number. How should we make the result come out in binary? Thanks.

In Java you can do the following to get output in binary
int i1 = 1111;
int i2 = 1111;
int result = i1 * i2;
System.out.println("result in binary " + Integer.toBinaryString(result));

The algorithm is the same as with decimal numbers, like you learned in school. You carry the overflow. But really, that has nothing to do with multiplication. That's just addition. For instance, 1111 x 1111 = 1111 + 11110 + 111100 + 1111000.
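To see the same shift-and-add idea in code, here is a minimal Java sketch (multiplyBinary is my own name, just for illustration) that multiplies two binary strings by adding shifted copies of the multiplicand:
// Multiply two small non-negative numbers given as binary strings, e.g. "1111" x "1111".
static String multiplyBinary(String a, String b) {
    int x = Integer.parseInt(a, 2);      // "1111" -> 15
    int y = Integer.parseInt(b, 2);
    int product = 0;
    for (int i = 0; i < 31; i++) {
        if (((y >> i) & 1) == 1) {       // bit i of the multiplier is set
            product += x << i;           // add the multiplicand shifted left by i; carries happen in the addition
        }
    }
    return Integer.toBinaryString(product); // 15 * 15 = 225 -> "11100001"
}
Each set bit of the multiplier contributes one shifted copy of the multiplicand, which is exactly the 1111 + 11110 + 111100 + 1111000 expansion above.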

You are misunderstanding binary math. 1111 is not binary here; at least, not the 1111 that can be multiplied by 1111 to give 1234321.
A binary 1111 would be 15 in decimal. Try using the Windows Calculator to switch between Dec and Bin:
1111 (Dec) = 10001010111 (Bin)
1111 (Bin) = 15 (Dec)
1234321 (Dec) = 100101101010110010001 (Bin)
1234321 (Bin) is invalid
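You can also check those conversions in Java instead of the Calculator; a quick sanity-check snippet:
System.out.println(Integer.toBinaryString(1111));    // 10001010111  (1111 decimal, printed in binary)
System.out.println(Integer.parseInt("1111", 2));     // 15           (1111 binary, printed in decimal)
System.out.println(Integer.toBinaryString(1234321)); // 100101101010110010001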
Anyway, you will need to go through binary math in order to understand how this works.
Try this link and see how multiplication works.

Related

2's Complement - Another Interpretation. Issues?

I've been browsing around the internet looking to validate my interpretation of 2's Complement.
Everywhere I look (including my educational class) I see the following:
Invert all bits in the # (1->0 and simultaneously 0->1, i.e. swap all 1s and 0s for all bits)
Add 1 to the now-inverted #.
However, my interpretation is to not do any of that but rather think of the MSB (Most Significant Bit, the Bit with the highest place value) as the same place value but multiplied by -1, then add the rest of the bits normally (typical place value addition).
The maximum # and minimum # achieved by my interpretation are exactly the same as with 2's complement (as shown by the first 3 examples).
E.g.
0000 = 0*(-2^3) + 0*2^2 + 0*2^1 + 0*2^0 = 0
1000 = 1*(-2^3) + 0*2^2 + 0*2^1 + 0*2^0 = -8
0111 = 0*(-2^3) + 1*2^2 + 1*2^1 + 1*2^0 = +7
1111 = 1*(-2^3) + 1*2^2 + 1*2^1 + 1*2^0 = -8 + 7 = -1 (example for signed 4-bit #s)

1111 1111 = -2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0 = -128 + 64+32+16+8+4+2+1
= -128 + (2^7 - 1) = -128 + 127 = -1 (example for signed 8-bit #s)

1111 1111 1111 1111 = -2^15 + 2^14 + 2^13 + 2^12 + 2^11 + 2^10 + 2^9 + 2^8 + 2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0
= -32768 + 16384+8192+4096 + 2048+1024+512+256 + 128+64+32+16 + 8+4+2+1
= -32768 + (2^15 - 1) = -32768 + 32767 = -1 (example for signed 16-bit #s)

1111 1111 1111 1111 1111 1111 1111 1111 =
= -2^31 + 2^30 + 2^29 + 2^28 + 2^27 + 2^26 + 2^25 + 2^24 + 2^23 + 2^22 + 2^21 + 2^20 + 2^19 + 2^18 + 2^17 + 2^16 + 2^15 + 2^14 + 2^13 + 2^12 + 2^11 + 2^10 + 2^9 + 2^8 + 2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0
= -2,147,483,648 + ... + 32768+16384+8192+4096 + 2048+1024+512+256 + 128+64+32+16 + 8+4+2+1
= -2,147,483,648 + (2,147,483,648 - 1) = -1 (example for signed 32-bit #s)
My interpretation appears to work perfectly.
At least to me, this interpretation is much easier to understand, but why haven't I ever seen anybody else use it? Is the interpretation flawed in some non-obvious way?
Finally, is this the way 2's complement was invented with another method being taught instead?
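Not an answer to the history question, but a quick Java sanity check (a minimal sketch; the variable names are mine) that the negative-weight reading of the MSB agrees with ordinary two's complement for every 8-bit pattern:
// MSB weighted as -2^7, remaining bits weighted normally, compared against Java's signed byte cast.
for (int bits = 0; bits < 256; bits++) {
    int msb = (bits >> 7) & 1;
    int negativeWeightValue = -128 * msb + (bits & 0x7F); // -(2^7)*b7 + the usual positive weights
    int twosComplementValue = (byte) bits;                // Java bytes are 8-bit two's complement
    if (negativeWeightValue != twosComplementValue) {
        throw new AssertionError("mismatch at " + Integer.toBinaryString(bits));
    }
}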

How are Hex and Binary parsed?

HEX Article
By this I mean,
In a program if I write this:
1111
I mean 15. Likewise, if I write this:
0xF
I also mean 15. I am not entirely sure how the process is carried out in the chip (I vaguely recall something about flagging), but in essence the compiler will calculate
2^3 + 2^2 + 2^1 + 2^0 = 15
Will 0xF be converted to "1111" before this calculation or is it written that somehow HEX F represents 15?
Or is it simply 16^0, which obviously is not 15?
I cannot find any helpful article that states a conversion from HEX to decimal rather than first converting to binary.
Binary (base 2) is converted the way I did above (2^n, etc.). How is HEX (base 16) converted? Is there an associated system, like base 2, for base 16?
I found in an article:
0x1234 = 1 * 16^3 + 2 * 16^2 + 3 * 16^1 + 4 * 16^0 = 4660
But, what if the number was 0xF, does this represent F in the 0th bit? I cannot find a straightforward example.
There are sixteen hexadecimal digits, 0 through 9 and A through F.
Digits 0 through 9 are the same in hex and in decimal.
0xA is decimal 10.
0xB is decimal 11.
0xC is decimal 12.
0xD is decimal 13.
0xE is decimal 14.
0xF is decimal 15.
For example:
0xA34F = 10 * 16^3 + 3 * 16^2 + 4 * 16^1 + 15 * 16^0
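To make the place-value idea concrete, here is a small Java sketch (parseHex is a hypothetical helper, not a library call) that walks a hex string digit by digit, with no detour through binary:
// Each hex digit contributes digitValue * 16^position; 0xF is simply 15 * 16^0 = 15.
static int parseHex(String hex) {
    int value = 0;
    for (char c : hex.toUpperCase().toCharArray()) {
        int digit = (c <= '9') ? c - '0' : c - 'A' + 10; // '0'..'9' -> 0..9, 'A'..'F' -> 10..15
        value = value * 16 + digit;                      // shift the previous digits up one place
    }
    return value;
}
// parseHex("F") == 15, parseHex("1234") == 4660, parseHex("A34F") == 41807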

What is the Decimal value for the 8-bit 2's complement number 11010110?

This is more of a homework question, but I cannot figure this one out. I thought it would be 214, but because of the first bit on the left, I am not so sure.
As it's a 2's complement number, the first bit being one means that it's a negative number.
The value is 214 - 256 = -42.
It can also be calculated as -(~214 + 1) = -(41 + 1) = -42.
In binary, that would be -(~11010110 + 1) = -(00101001 + 1) = -00101010.
The translation is simple:
1: subtract 1 from x
11010110 - 00000001 = 11010101
2: invert it
00101010
3: convert the binary value to decimal (the sign bit is now 0, so just add up the place values)
2 + 8 + 32 = 42
4: remember that the first bit of the original value was 1
since it was 1 => negate the result => -42
You can tell it's a negative number since there's a 1 in the leftmost bit position. One way you can get the magnitude is to invert all the bits and then add 1.
11010110
00101001 <= inverted
00101010 <= +1
This result is decimal 42, so the original value is representing -42.
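In Java you can also let the language do the reinterpretation, since byte is an 8-bit two's-complement type:
int pattern = 0b11010110;       // 214 when read as an unsigned 8-bit value
byte signed = (byte) pattern;   // reinterpret the same 8 bits as two's complement
System.out.println(signed);     // -42
System.out.println(214 - 256);  // -42 as well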

Convert decimal number to excel-header-like number

0 = A
1 = B
...
25 = Z
26 = AA
27 = AB
...
701 = ZZ
702 = AAA
I cannot think of any solution that does not involve loop-bruteforce :-(
I expect a function/program that accepts a decimal number and returns a string as a result.
Haskell, 78 57 50 43 chars
o=map(['A'..'Z']:)$[]:o
e=(!!)$o>>=sequence
Other entries aren't counting the driver, which adds another 40 chars:
main=interact$unlines.map(e.read).lines
A new approach, using a lazy, infinite list, and the power of Monads! And besides, using sequence makes me :), using infinite lists makes me :o
If you look carefully, the Excel representation is like a base-26 number, but not exactly the same as base 26.
In the Excel conversion Z + 1 = AA, while in base 26 Z + 1 = BA.
The algorithm is almost the same as a decimal to base-26 conversion, with just one change.
In base 26 we would make the recursive call with the quotient, but here we pass quotient - 1:
function decimalToExcel(num)
    // base case of the recursion.
    if num < 26
        print 'A' + num
    else
        quotient = num / 26;
        remainder = num % 26;
        // recursive calls.
        decimalToExcel(quotient - 1);
        decimalToExcel(remainder);
    end-if
end-function
Java Implementation
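Along those lines, a minimal Java sketch of the same recursion (assuming 0-based input, as in the question's examples):
// 0 -> "A", 25 -> "Z", 26 -> "AA", 701 -> "ZZ", 702 -> "AAA"
static String decimalToExcel(int num) {
    if (num < 26) {
        return String.valueOf((char) ('A' + num));
    }
    // same trick as above: recurse on quotient - 1 instead of the quotient
    return decimalToExcel(num / 26 - 1) + (char) ('A' + num % 26);
}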
Python, 44 chars
Oh c'mon, we can do better than lengths of 100+ :
X=lambda n:~n and X(n/26-1)+chr(65+n%26)or''
Testing:
>>> for i in 0, 1, 25, 26, 27, 700, 701, 702:
... print i,'=',X(i)
...
0 = A
1 = B
25 = Z
26 = AA
27 = AB
700 = ZY
701 = ZZ
702 = AAA
Since I am not sure what base you're converting from and what base you want (your title suggests one and your question the opposite), I'll cover both.
Algorithm for converting ZZ to 701
First, recognize that we have a number encoded in base 26, where the "digits" are A..Z. Set a counter a to zero and start reading the number at the rightmost (least significant) digit. Progressing from right to left, read each "digit", convert it to a decimal number, multiply it by 26^a, and add it to the result. Increment a and process the next digit.
Algorithm for converting 701 to ZZ
We simply factor the number into powers of 26, much like we do when converting to binary. Simply take num % 26, convert it to an A..Z "digit" and append it to the converted number (assuming it's a string), then integer-divide num. Repeat until num is zero. After this, reverse the converted string so the most significant digit comes first.
Edit: As you point out, once two-digit numbers are reached we actually have base 27 for all non-least-significant digits. Simply apply the same algorithms here, incrementing any "constants" by one. Should work, but I haven't tried it myself.
Re-edit: For the ZZ -> 701 case, don't increment the base exponent. Do, however, keep in mind that A no longer is 0 (but 1) and so forth.
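Following that re-edit, a small Java sketch (excelToDecimal is my own name) of the ZZ -> 701 direction, treating each letter as a digit 1..26 and subtracting 1 at the end to match the 0-based numbering in the question:
// "A" -> 0, "Z" -> 25, "AA" -> 26, "ZZ" -> 701, "AAA" -> 702
static int excelToDecimal(String s) {
    int value = 0;
    for (char c : s.toCharArray()) {
        value = value * 26 + (c - 'A' + 1); // letters act as digits 1..26, not 0..25
    }
    return value - 1;                       // shift back to the question's 0-based scheme
}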
Explanation of why this is not a base 26 conversion
Let's start by looking at a real base-26 positional system. (Rather, look at base 4, since it has fewer digits to write out.) The following is true (assuming A = 0):
A = AA = A * 4^1 + A * 4^0 = 0 * 4^1 + 0 * 4^0 = 0
B = AB = A * 4^1 + B * 4^0 = 0 * 4^1 + 1 * 4^0 = 1
C = AC = A * 4^1 + C * 4^0 = 0 * 4^1 + 2 * 4^0 = 2
D = AD = A * 4^1 + D * 4^0 = 0 * 4^1 + 3 * 4^0 = 3
BA = B * 4^1 + A * 4^0 = 1 * 4^1 + 0 * 4^0 = 4
And so forth... notice that AA is 0 rather than 4 as it would be in Excel notation. Hence, Excel notation is not base 26.
In Excel VBA ... the obvious choice :)
Sub a()
For Each O In Range("A1:AA1")
k = O.Address()
Debug.Print Mid(k, 2, Len(k) - 3); "="; O.Column - 1
Next
End Sub
Or, for getting the column number in the first row of the WorkSheet (which makes more sense, since we are in Excel ...):
Sub a()
For Each O In Range("A1:AA1")
O.Value = O.Column - 1
Next
End Sub
Or better yet: 56 chars
Sub a()
Set O = Range("A1:AA1")
O.Formula = "=Column()"
End Sub
Scala: 63 chars
def c(n:Int):String=(if(n<26)""else c(n/26-1))+(65+n%26).toChar
Prolog, 109 123 bytes
Convert from decimal number to Excel string:
c(D,E):- d(D,X),atom_codes(E,X).
d(D,[E]):-D<26,E is D+65,!.
d(D,[O|M]):-N is D//27,d(N,M),O is 65+D rem 26.
That code does not work for c(27, N), which yields N='BB'
This one works fine:
c(D,E):-c(D,26,[],X),atom_codes(E,X).
c(D,B,T,M):-(D<B->M-S=[O|T]-B;(S=26,N is D//S,c(N,27,[O|T],M))),O is 91-S+D rem B,!.
Tests:
?- c(0, N).
N = 'A'.
?- c(27, N).
N = 'AB'.
?- c(701, N).
N = 'ZZ'.
?- c(702, N).
N = 'AAA'
Converts from Excel string to decimal number (87 bytes):
x(E,D):-x(E,0,D).
x([C],X,N):-N is X+C-65,!.
x([C|T],X,N):-Y is (X+C-64)*26,x(T,Y,N).
F# : 166 137
let rec c x = if x < 26 then [(char) ((int 'A') + x)] else List.append (c (x/26-1)) (c (x%26))
let s x = new string (c x |> List.toArray)
PHP: At least 59 and 33 characters.
<?for($a=NUM+1;$a>=1;$a=$a/26)$c=chr(--$a%26+65).$c;echo$c;
Or the shortest version:
<?for($a=A;$i++<NUM;++$a);echo$a;
Using the following helper, you can figure out the last character in the string:
char transform(int num)
{
    return (char)(num + 65); // Transform 0..25 into an ASCII letter; 'A' is 65.
}
char lastChar(int num)
{
    return transform(num % 26);
}
Using this, we can make a recursive function (I don't think it's brute force):
string getExcelHeader(int decimal)
{
    if (decimal >= 26)
        return getExcelHeader(decimal / 26 - 1) + transform(decimal % 26);
    else
        return string(1, transform(decimal));
}
Or.. something like that. I'm really tired, maybe I should stop answering questions and go to bed :P

x-y = x+¬y+1 problem

I am currently reading a book about "bit fiddling" and the following formula appears:
x-y = x+¬y+1
But this doesn't seem to work. Example:
x = 0100
y = 0010
x-y = 0010
¬y = 1101
¬y+1 = 1110
x+1110 = 10010
But 10010 != 0010...
Where did I make a mistake (if any)?
(The book is "Hacker's Delight" by Henry S. Warren.)
You only have a four bit system! That extra 1 on the left of your final result can't exist. It should be:
x = 0100
y = 0010
~y = 1101
~y + 1 = 1110
x + 1110 = 0010
The other bit overflows, and isn't part of your result. You may want to read up on two's complement arithmetic.
You are carrying the extra bit. In real computers, if you overflow the word, the bit disappears (actually, it gets saved in a carry flag).
Assuming the numbers are constrained to 4 bits, then the fifth 1 would be truncated, leaving you with 0010.
It's all about overflow. You only have four bits, so it's not 10010, but 0010.
Just to add to the answers, in a 2's complement system:
~x + 1 = -x
Say x = 2. In 4 bits, that's 0010.
~x = 1101
~x + 1 = 1110
And 1110 is -2
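A quick Java check of both identities, masking with & 0xF to simulate a 4-bit machine where the overflowing bit is simply discarded:
int x = 0b0100;                                   // 4
int y = 0b0010;                                   // 2
int diff = (x + (~y & 0xF) + 1) & 0xF;            // x + ~y + 1, truncated to 4 bits
System.out.println(Integer.toBinaryString(diff)); // 10   (i.e. 0010, which is x - y)
System.out.println(((~y & 0xF) + 1) & 0xF);       // 14   (i.e. 1110, the 4-bit pattern for -2)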