I am doing some revision and came across a question asking what 10011001 is as a signed integer and as an unsigned integer. I know the unsigned value is 153 because unsigned integers have no negatives, but am I correct to say the signed value of 10011001 is -153, or am I making a mistake?
The difference between unsigned and signed numbers is that one of the bits is used to indicate whether the number is positive or negative.
So in your example you have 8 bits.
If I treat it as signed, then I have 7 bits to work with: 2^7
000 0000 = 0
111 1111 = 127
001 1001 = 25, and the set most significant bit carries a weight of -2^7 = -128 (two's complement), which causes the following calculation to occur:
(25 - 128) = -103
If I treat it as unsigned, then I have all 8 bits to work with: 2^8
0000 0000 = 0
1111 1111 = 255
1001 1001 = 153
Here is code to demonstrate the answer:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    char *endptr;
    char binary[9] = "10011001"; /* one extra char for the terminating '\0' */
    signed char x = (signed char)strtol(binary, &endptr, 2);
    unsigned char y = (unsigned char)strtol(binary, &endptr, 2);
    printf("%s to signed char (1 byte): %i\n", binary, x);
    printf("%s to unsigned char (1 byte): %u\n", binary, (unsigned)y);
    return 0;
}
Output:
10011001 to signed char (1 byte): -103
10011001 to unsigned char (1 byte): 153
I have a 1010 (base 2) 4-bit bit vector and a 1010 (base 10) word. I need to show that they are equal.
Not totally sure what you are trying to do; is this what you are looking for?
// 1010(base10) = 0000 0011 1111 0010(base2) -> requires 16 bits
unsigned base2 = 10;    // 1010 in base 2
unsigned base10 = 1010; // Assume only 1's and 0's are used
unsigned answer = 0;

for (int bit = 0; base10 > 0; bit++)
{
    if (base10 % 10) answer |= (0x01 << bit);
    base10 /= 10;
}
// At this point answer == base2, i.e. answer holds 1010 (base 2)
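To actually show the two are equal, you can wrap the same loop in a small program and compare the result at the end (this is just the snippet above made self-contained):

#include <stdio.h>

int main(void)
{
    unsigned base2 = 10;     /* the 4-bit vector 1010 read as binary */
    unsigned base10 = 1010;  /* the word 1010 written with decimal digits */
    unsigned answer = 0;

    for (int bit = 0; base10 > 0; bit++) {
        if (base10 % 10)
            answer |= 0x01u << bit;   /* copy each decimal 1 into the matching binary position */
        base10 /= 10;
    }

    printf("%u == %u ? %s\n", answer, base2, answer == base2 ? "equal" : "not equal");
    return 0;
}

This prints "10 == 10 ? equal".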
From online documentation:
cudaError_t cudaMemset (void * devPtr, int value, size_t count )
Fills the first count bytes of the memory area pointed to by devPtr with the constant byte value value.
Parameters:
devPtr - Pointer to device memory
value - Value to set for each byte of specified memory
count - Size in bytes to set
This description doesn't appear to be correct as:
int *dJunk;
cudaMalloc((void**)&dJunk, 32*sizeof(int));
cudaMemset(dJunk, 0x12, 32);
will set all 32 integers to 0x12, not 0x12121212. (Int vs Byte)
The description talks about setting bytes. Count and Value are described in terms of bytes. Notice count is of type size_t, and value is of type int, i.e. set a byte-sized quantity to an int value.
cudaMemset() is not mentioned in the programming guide.
I have to assume the behavior I am seeing is correct, and the documentation is bad.
Is there a better documentation source out there? (Where?)
Are other types supported? i.e. Would float *dJunk; work? Others?
The documentation is correct, and your interpretation of what cudaMemset does is wrong. The function really does set byte values. Your example sets the first 32 bytes to 0x12, not all 32 integers to 0x12, viz:
#include <cstdio>

int main(void)
{
    const int n = 32;
    const size_t sz = size_t(n) * sizeof(int);

    int *dJunk;
    cudaMalloc((void**)&dJunk, sz);
    cudaMemset(dJunk, 0, sz);
    cudaMemset(dJunk, 0x12, 32);

    int *Junk = new int[n];
    cudaMemcpy(Junk, dJunk, sz, cudaMemcpyDeviceToHost);

    for(int i=0; i<n; i++) {
        fprintf(stdout, "%d %x\n", i, Junk[i]);
    }

    cudaDeviceReset();
    return 0;
}
produces
$ nvcc memset.cu
$ ./a.out
0 12121212
1 12121212
2 12121212
3 12121212
4 12121212
5 12121212
6 12121212
7 12121212
8 0
9 0
10 0
11 0
12 0
13 0
14 0
15 0
16 0
17 0
18 0
19 0
20 0
21 0
22 0
23 0
24 0
25 0
26 0
27 0
28 0
29 0
30 0
31 0
i.e. all 128 bytes set to 0, then the first 32 bytes set to 0x12. Exactly as described by the documentation.
How can I check whether a binary number is divisible by 10 (decimal), without converting it to another number system?
For example, we have a number:
1010 1011 0100 0001 0000 0100
How can we check whether this number is divisible by 10?
First split the number into odd and even bits (I'm calling "even" the
bits corresponding to even powers of 2):
100100110010110000000101101110

even: 0 1 0 1 0 0 1 0 0 0 1 1 0 1 0
odd:  1 0 0 1 0 1 1 0 0 0 0 0 1 1 1
Now in each of these, add and subtract the digits alternately, as in
the standard test for divisibility by 11 in decimal (starting with
addition at the right):
even: +0 -1 +0 -1 +0 -0 +1 -0 +0 -0 +1 -1 +0 -1 +0 = -2
odd:  +1 -0 +0 -1 +0 -1 +1 -0 +0 -0 +0 -0 +1 -1 +1 = 1
Now double the sum of the odd digits and add it to the sum of the even
digits:
2*1 + (-2) = 0
If the result is divisible by 5, as in this case, the number itself is
divisible by 5.
Since this number is also divisible by 2 (the rightmost digit being
0), it is divisible by 10.
If you are talking about computational methods, you can do a divisibility-by-5 test and a divisibility-by-2 test.
The numbers below assume unsigned 32-bit arithmetic, but can easily be extended to larger numbers.
I'll provide some code first, followed by a more textual explanation:
unsigned int div5exact(unsigned int n)
{
    // returns n/5 as long as n is actually divisible by 5
    // (because n * (INV5 * 5) == n * 1 mod 2^32, i.e. INV5 is the multiplicative inverse of 5)
    #define INV5 0xcccccccd
    return n * INV5;
}
unsigned int divides5(unsigned int n)
{
    unsigned int q = div5exact(n);
    if (q <= 0x33333333) /* q*5 < 2^32? */
    {
        /* q*5 doesn't overflow, so n == q*5 */
        return 1;
    }
    else
    {
        /* q*5 overflows, so n != q*5 */
        return 0;
    }
}

int divides2(unsigned int n)
{
    /* easy divisibility by 2 test */
    return (n & 1) == 0;
}

int divides10(unsigned int n)
{
    return divides2(n) && divides5(n);
}
/* fast one-liner: */
#define DIVIDES10(n) ( ((n) & 1) == 0 && ((n) * 0xcccccccd) <= 0x33333333 )
Divisibility by 2 is easy: (n&1) == 0 means that n is even.
Divisibility by 5 involves multiplying by the inverse of 5, which is 0xcccccccd (because 0xcccccccd * 5 == 0x400000001, which is just 0x1 if you truncate to 32 bits).
When you multiply n*5 by the inverse of 5, you get n * 5 * (inverse of 5), which in 32-bit math simplifies to n*1.
Now let's say n and q are 32-bit numbers, and q = n * (inverse of 5) mod 2^32.
Because n is no greater than 0xffffffff, we know that n/5 is no greater than (2^32 - 1)/5 (which is 0x33333333). Therefore, if q is less than or equal to (2^32 - 1)/5, then q*5 doesn't get truncated in 32 bits, so q*5 == n and n is exactly divisible by 5.
If q is greater than (2^32 - 1)/5, then we know n is not divisible by 5, because there is a one-to-one mapping between the 32-bit numbers divisible by 5 and the numbers from 0 to (2^32 - 1)/5, and any number outside this range doesn't map to a number that is divisible by 5.
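As a quick numeric sanity check of this argument (assuming a 32-bit unsigned int, as above):

#include <stdio.h>

int main(void)
{
    const unsigned int INV5 = 0xcccccccdu;
    printf("%08x\n", 25u * INV5); /* 00000005 <= 0x33333333, so 25 is divisible by 5 */
    printf("%08x\n",  7u * INV5); /* 9999999b >  0x33333333, so 7 is not             */
    return 0;
}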
Here is Python code to check divisibility by 10 by processing the binary digits one at a time; x always holds the value of the bits read so far, modulo 10.
# take the input as a string containing a binary number, e.g. 1010 or 1110
s = input()
# running remainder modulo 10
x = 0
for i in s:
    if i == '1':
        x = (x*2 + 1) % 10
    else:
        x = x*2 % 10
# if x ends up 0, the number is divisible by 10
if x:
    print("Not divisible by 10")
else:
    print("Divisible by 10")
I know addition is associative and commutative:
That is,
(~x1 + ~x2) + ~x3 = ~x1 + (~x2 + ~x3)
and
~x1 + ~x2 = ~x2 + ~x1
However, for the cases I tried, the complement doesn't seem to distribute over addition, i.e.,
~x1 + ~x2 != ~(x1 + x2)
Is this true? Is there a proof?
I have C code as follows:
int n1 = 5;
int n2 = 3;
int result = ~n1 + ~n2 == ~(n1 + n2);
int calc = ~n1;
int calc0 = ~n2;
int calc1 = ~n1 + ~n2;
int calc2 = ~(n1 + n2);
printf("(Part B: n1 is %d, n2 is %d\n", n1, n2);
printf("Part B: (calc is: %d and calc0 is: %d\n", calc, calc0);
printf("Part B: (calc1 is: %d and calc2 is: %d\n", calc1, calc2);
printf("Part B: (~%d + ~%d) == ~(%d + %d) evaluates to %d\n", n1, n2, n1, n2, result);
Which gives the following output:
Part B: (n1 is 5, n2 is 3
Part B: (calc is: -6 and calc0 is: -4
Part B: (calc1 is: -10 and calc2 is: -9
Part B: (~5 + ~3) == ~(5 + 3) evaluates to 0
[I know this is really old, but I had the same question, and since the top answers were contradictory...]
The one's complement is indeed distributive over addition. The problem in your code (and Kaganar's kind but incorrect answer) is that you are dropping the carry bits -- you can do that in two's complement, but not in one's complement.
Whatever you use to store the sum needs more bits than the operands so that you don't drop your carry bits. Then fold the carry bits back into the number of bits you are using to store your operands to get the proper sum. This is called an "end-around carry" in one's complement arithmetic.
From the Wikipedia article (https://en.wikipedia.org/wiki/Signed_number_representations#Ones'_complement):
To add two numbers represented in this system, one does a conventional binary addition, but it is then necessary to do an end-around carry: that is, add any resulting carry back into the resulting sum. To see why this is necessary, consider the following example showing the case of the addition of −1 (11111110) to +2 (00000010):
     binary     decimal
   11111110        –1
 + 00000010        +2
 ──────────        ──
 1 00000000         0   ← Not the correct answer
          1        +1   ← Add carry
 ──────────        ──
   00000001         1   ← Correct answer
In the previous example, the first binary addition gives 00000000, which is incorrect. The correct result (00000001) only appears when the carry is added back in.
I changed your code a bit to make it easier to do the math myself as a sanity check, and tested it. It may require a bit more thought to use signed integer datatypes, or to account for end-around borrowing instead of carrying. I didn't go that far, since my application is all about checksums (i.e. unsigned, and addition only).
unsigned short n1 = 5; //using 16-bit unsigned integers
unsigned short n2 = 3;
unsigned long oc_added = (unsigned short)~n1 + (unsigned short)~n2; //32bit
/* Fold the carry bits into 16 bits */
while (oc_added >> 16)
    oc_added = (oc_added & 0xffff) + (oc_added >> 16);
unsigned long oc_sum = ~(n1 + n2); //oc_sum has 32 bits (room for carry)
/* Fold the carry bits into 16 bits */
while (oc_sum >> 16)
    oc_sum = (oc_sum & 0xffff) + (oc_sum >> 16);
int result = oc_added == oc_sum;
unsigned short calc = ~n1;
unsigned short calc0 = ~n2;
unsigned short calc1 = ~n1 + ~n2; //loses a carry bit
unsigned short calc2 = ~(n1 + n2);
printf("(Part B: n1 is %d, n2 is %d\n", n1, n2);
printf("Part B: (calc is: %d and calc0 is: %d\n", calc, calc0);
printf("Part B: (calc1 is: %d and calc2 is: %d\n", calc1, calc2);
printf("Part B: (~%d + ~%d) == ~(%d + %d) evaluates to %d\n", n1, n2, n1, n2, result);
Check out the Wikipedia article on Ones' complement. Addition in one's complement has an end-around carry, where you must add the overflow bit back into the lowest bit.
Since ~ (NOT) is equivalent to - (NEGATE) in ones' complement, we can re-write it as:
-x1 + -x2 = -(x1 + x2)
which is correct.
One's complement is used to represent negative and positive numbers in fixed-width registers. For the complement to distribute over addition, the following must hold: ~a + ~b = ~(a + b). The OP states + represents adding 'two binary numbers'. This is itself vague; however, if we take it to mean adding unsigned binary numbers, then no, one's complement is not distributive.
Note that there are two zeroes in one's complement: all bits ones, or all bits zeroes.
Check to see that ~0 + ~0 != ~(0 + 0):
~0 is (negative) zero, but it is represented by all ones. Adding it to itself is doubling it -- the same as a left shift -- and thus introduces a zero in the rightmost digit. But that result is no longer one of the two zeroes.
However, 0 is zero, and 0 + 0 is also zero, and thus so is ~(0 + 0). But the left side isn't zero, so the distribution does not hold.
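Here is a small sketch of that ~0 + ~0 check, simulating an 8-bit ones' complement width with ordinary unsigned arithmetic (the 8-bit width is only for illustration):

#include <stdio.h>

int main(void)
{
    unsigned a = 0xFFu;              /* ~0 in 8-bit ones' complement: "negative zero", all ones */
    unsigned sum = (a + a) & 0xFFu;  /* plain 8-bit addition with the carry bit dropped */

    printf("~0 + ~0 = 0x%02x, ~(0 + 0) = 0x%02x\n", sum, 0xFFu);  /* prints 0xfe vs 0xff */
    return 0;
}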
On the other hand... Consider two's complement: flip all bits and add one. If care is taken to treat negatives in one's complement specially, then that version of 'binary addition' is similar to two's complement and is distributive, as you end up with a familiar quotient ring just like in two's complement.
The aforementioned Wikipedia article has more details on handling addition to allow for expected arithmetic behavior.
From De Morgan's laws (reading + as bitwise OR and * as bitwise AND):
~(x1 + x2) = ~x1 * ~x2
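A quick check of that identity with the question's values, treating + as bitwise OR and * as bitwise AND:

#include <stdio.h>

int main(void)
{
    unsigned x1 = 5, x2 = 3;
    /* De Morgan: the complement of an OR is the AND of the complements */
    printf("%d\n", ~(x1 | x2) == (~x1 & ~x2));   /* prints 1 */
    return 0;
}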
I have an 8-digit hexadecimal number in which I need certain digits to be either 0 or f. Given the specific places of the digits, is there a quick way to generate the hex number with those places "flipped" to f? For example:
flip_digits(1) = 0x0000000f
flip_digits(1,2,4) = 0x0000f0ff
flip_digits(1,7,8) = 0xff00000f
I'm doing this on an embedded device so I can't call any math libraries, I suspect it can be done with just bit shifts but I can't quite figure out the method. Any sort of solution (Python, C, Pseudocode) will work. Thanks in advance.
result = 0
for i in inputs:                        # i is the 1-based digit position, counted from the right
    result |= 0xf << ((i - 1) << 2)     # each hex digit occupies 4 bits, so shift by (i - 1) * 4
You can define eight named constants, each with all the bits of one nibble set:
unsigned n0 = 0x0000000f;
unsigned n1 = 0x000000f0;
unsigned n2 = 0x00000f00;
unsigned n3 = 0x0000f000;
unsigned n4 = 0x000f0000;
unsigned n5 = 0x00f00000;
unsigned n6 = 0x0f000000;
unsigned n7 = 0xf0000000;
Then you can use the bitwise-or to combine them:
unsigned nibble_0 = n0;
unsigned nibbles_013 = n0 | n1 | n3;
unsigned nibbles_067 = n0 | n6 | n7;
If you want to combine them at runtime, it would likely be simplest to store the constants in an array so they can be accessed more easily (e.g., n[0] | n[6] | n[7]).
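For example, a minimal sketch of that array approach (the array-plus-count signature for flip_digits here is just one possible interface, matching the question's 1-based digit positions counted from the right):

#include <stdio.h>

/* Nibble masks indexed 0..7, n[0] being the least significant hex digit. */
static const unsigned n[8] = {
    0x0000000f, 0x000000f0, 0x00000f00, 0x0000f000,
    0x000f0000, 0x00f00000, 0x0f000000, 0xf0000000
};

/* Combine masks for a list of 1-based digit positions. */
static unsigned flip_digits(const int *positions, int count)
{
    unsigned result = 0;
    for (int i = 0; i < count; i++)
        result |= n[positions[i] - 1];
    return result;
}

int main(void)
{
    int digits[] = {1, 7, 8};
    printf("0x%08x\n", flip_digits(digits, 3));   /* prints 0xff00000f */
    return 0;
}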