How to calculate data offset in a GIF file

Can anyone explain, how to calculate the data offset?
Here's the example from the website (link below):
(1 << ((0xE6 & 0x07) + 1)) * 3 = 384 bytes.
E6 equals 230 decimal - but how should I read the rest?
Example

& is bitwise AND. 0xE6 & 0x07 is 6.
<< is the left-shift operator. x << y is equivalent to x * 2^y. In this case: 1 << (6 + 1) is 128.
* is the multiplication operator. 3 * 128 is 384.
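Putting the three operators together, here is a minimal C sketch using the 0xE6 byte from the example (in a GIF this is the packed byte of the logical screen descriptor, and the result is the size of the color table the image data sits behind):

#include <stdio.h>

int main(void)
{
    unsigned char packed = 0xE6;     /* packed byte from the example above */
    int n = packed & 0x07;           /* low 3 bits: size exponent, here 6  */
    int entries = 1 << (n + 1);      /* 2^(n+1) = 128 color table entries  */
    int bytes = entries * 3;         /* 3 bytes (R, G, B) per entry = 384  */
    printf("color table: %d entries, %d bytes\n", entries, bytes);
    return 0;
}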

Related

Trouble understanding an exercise given the two's complement in hex format to convert into decimal format

I am trying to convert the two's complement of the following hex values to their decimal values:
23, 57, 94 and 87.
a) 23
Procedure: (3 x 16^0) + (2 x 16^1) -> (3) + (32) = 35 (Correct)
b) 57
Procedure: (7 x 16^0) + (5 x 16^1) -> (7) + (80) = 87 (Correct)
For 94 and 87, the correct values are -108 & -121 respectively.
If I follow the procedure I used for numbers a) and b) I get 148 & 135 for 94 & 87.
Can someone explain how to get to the correct results, since mine are wrong? Do I need to convert the byte to binary first and then proceed from there?
Thanks a lot in advance!
0x94 = 0b10010100
now you can convert it to a decimal number as if it were a normal binary number, except that the MSB counts as negative:
1 * -2^7 + 0 * 2^6 + 0 * 2^5 + 1 * 2^4 + 0 * 2^3 + 1 * 2^2 + 0 * 2^1 + 0 * 2^0 =
-2^7 + 2^4 + 2^2 =
-128 + 16 + 4 =
-108
The other number works similarly.
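A small C sketch of this "MSB counts negative" reading, assuming 8-bit values (the helper name is just for illustration):

#include <stdio.h>

/* bit 7 is weighted -2^7, all other bits keep their normal positive weights */
static int msb_negative_value(unsigned char byte)
{
    int value = ((byte >> 7) & 1) ? -128 : 0;
    for (int i = 6; i >= 0; i--)
        value += ((byte >> i) & 1) * (1 << i);
    return value;
}

int main(void)
{
    printf("%d\n", msb_negative_value(0x94));   /* -108 */
    printf("%d\n", msb_negative_value(0x87));   /* -121 */
    return 0;
}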
First write down the binary representation of the hex value:
94h = 10010100b
To take the two's complement, you flip all bits and add 00000001b, so the two's complement of this binary string is
01101011b + 00000001b = 01101100b
The MSB of the original value is 1, so the number is negative; the two's complement you just computed gives its magnitude:
01101100b = 108d, so 94h read as a signed byte is -108d.
The other works similarly.
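The same flip-and-add-one rule as a hedged C sketch, assuming 8-bit values (the function name is made up for illustration):

#include <stdio.h>

/* if the MSB is set, the two's complement is the magnitude and the sign is minus */
static int twos_complement_value(unsigned char byte)
{
    if (byte & 0x80) {
        unsigned char magnitude = (unsigned char)(~byte + 1);   /* flip bits, add 1 */
        return -(int)magnitude;
    }
    return byte;                                                /* MSB clear: value as-is */
}

int main(void)
{
    printf("%d\n", twos_complement_value(0x94));   /* -108 */
    printf("%d\n", twos_complement_value(0x87));   /* -121 */
    printf("%d\n", twos_complement_value(0x23));   /*  35  */
    return 0;
}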

Add 25 & 30 as binary number

Using 8 bit registers and signed magnitude representation.
I thought 25 in BCD is 010 0101 but my textbook says it is 001 1001. Can somebody explain?
25 / 2 = 12r1 (12 with a remainder of 1)
12 / 2 = 6r0 (6 with a remainder of 0)
6 / 2 = 3r0 (3 with a remainder of 0)
3 / 2 = 1r1 (1 with a remainder of 1)
1 / 2 = 0r1 (0 with a remainder of 1)
So 11001 (working backward up the tree) is the binary equivalent to 25.
Another way to think about it is with powers of 2:
(1*16) + (1*8) + (0*4) + (0*2) + (1*1) = 25
And it's worth noting, just as in base 10, leading zeros do not change the value of a number. (00025 == 25) (0011001 == 11001).
The leading zeros are there in your case because you need to populate an 8-bit register (there need to be 8 binary digits regardless of their value).
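As a rough C sketch of the repeated-division method above (collect the remainders, then print them in reverse):

#include <stdio.h>

static void print_binary(unsigned n)
{
    int bits[32], count = 0;
    do {
        bits[count++] = n % 2;          /* the remainder is the next binary digit */
        n /= 2;
    } while (n > 0);
    while (count > 0)
        putchar('0' + bits[--count]);   /* print from the last remainder back up  */
    putchar('\n');
}

int main(void)
{
    print_binary(25);   /* prints 11001 */
    print_binary(30);   /* prints 11110 */
    return 0;
}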

How are Hex and Binary parsed?

HEX Article
By this I mean,
In a program if I write this:
1111
I mean 15. Likewise, if I write this:
0xF
I also mean 15. I am not entirely sure how the process is carried out through the chip, and I vaguely recall something about flagging, but in essence the compiler will calculate
2^3 + 2^2 + 2^1 + 2^0 = 15
Will 0xF be converted to "1111" before this calculation or is it written that somehow HEX F represents 15?
Or simply, 16^0 ? -- which obviously is not 15.
I cannot find any helpful article that states a conversion from HEX to decimal rather than first converting to binary.
Binary(base2) is converted how I did above (2^n .. etc). How is HEX(base16) converted? Is there an associated system, like base 2, for base 16?
I found in an article:
0x1234 = 1 * 16^3 + 2 * 16^2 + 3 * 16^1 + 4 * 16^0 = 4660
But, what if the number was 0xF, does this represent F in the 0th bit? I cannot find a straightforward example.
There are sixteen hexadecimal digits, 0 through 9 and A through F.
Digits 0 through 9 are the same in hex and in decimal.
0xA is 10 in decimal.
0xB is 11 in decimal.
0xC is 12 in decimal.
0xD is 13 in decimal.
0xE is 14 in decimal.
0xF is 15 in decimal.
For example:
0xA34F = 10 * 16^3 + 3 * 16^2 + 4 * 16^1 + 15 * 16^0
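To make that concrete, here is a rough C sketch (the hand-rolled parse_hex is just for illustration, not what any particular compiler does) showing that hex can be read digit by digit with powers of 16, with no intermediate binary string:

#include <stdio.h>

static unsigned parse_hex(const char *s)
{
    unsigned value = 0;
    for (; *s; s++) {
        int digit;
        if (*s >= '0' && *s <= '9')      digit = *s - '0';
        else if (*s >= 'A' && *s <= 'F') digit = *s - 'A' + 10;
        else if (*s >= 'a' && *s <= 'f') digit = *s - 'a' + 10;
        else break;
        value = value * 16 + digit;      /* shift previous digits up one hex place */
    }
    return value;
}

int main(void)
{
    printf("%u\n", parse_hex("1234"));   /* 4660  */
    printf("%u\n", parse_hex("F"));      /* 15    */
    printf("%u\n", parse_hex("A34F"));   /* 41807 */
    return 0;
}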

How to multiply two 23-bit numbers using 16-bit memory locations

How can you multiply two unsigned 23-bit numbers if you only have memory locations that are 16-bits?
I am not performing these operations using x86 instructions, so information related to x86 will be ignored.
I assume you mean you can only multiply two 16-bit numbers at a time.
First I'm going to say that a and b are the numbers we want to multiply. They are between 17 and 32 bits in size (which covers your 23). However big they are they're zero padded out to an unsigned 32-bit size.
r = a * b
can be re-written as
r = (al + (ah << 16)) * (bl + (bh << 16))
where al and ah are the low and high 16-bit parts of a. Shifting by 16 is the same as multiplying by 2^16. We can then expand this equation.
r = (al + (ah * 2^16)) * (bl + (bh * 2^16))
r = al * bl + al * (bh * 2^16) + (ah * 2^16) * bl + (ah * 2^16) * (bh * 2^16)
r = al * bl + al * bh * 2^16 + ah * bl * 2^16 + ah * bh * 2^32
r = al * bl + ((al * bh + ah * bl) << 16) + ((ah * bh) << 32)
So you end up with 4 multiplies (al*bl, al*bh, ah*bl & ah*bh) and you shift the results and add them together. Bear in mind that each 16-bit multiply produces a 32-bit result, so the whole thing will go into 64 bits. In your case, because you know a & b aren't bigger than 2^23 the whole thing will fit in 46 bits.
The other thing is that the summation will need to be done in 16-bit chunks, with the carry bits looked after.
Hope that helps.
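Here is a hedged C sketch of the decomposition. It uses a 64-bit accumulator for the final sum rather than 16-bit chunks with carries, just to check the algebra; the input values are arbitrary 23-bit examples:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t a = 0x7ABCDE, b = 0x6F1234;        /* two arbitrary 23-bit values */

    uint16_t al = a & 0xFFFF, ah = a >> 16;     /* low and high 16-bit halves  */
    uint16_t bl = b & 0xFFFF, bh = b >> 16;

    /* each 16x16 product fits in 32 bits; the shifts recombine them */
    uint64_t r = (uint64_t)al * bl
               + ((uint64_t)al * bh << 16)
               + ((uint64_t)ah * bl << 16)
               + ((uint64_t)ah * bh << 32);

    printf("%llu == %llu\n",
           (unsigned long long)r,
           (unsigned long long)a * b);          /* sanity check */
    return 0;
}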

Convert byte to specific mask with bit hack

I have a number with binary representation 0000abcd.
How do I convert it to 0a0b0c0d with the smallest number of operations?
How do I convert 0a0b0c0d back to 0000abcd?
I was searching for a solution here:
http://graphics.stanford.edu/~seander/bithacks.html and other
Generally the problem is a bit more involved than described.
Given first number a₁b₁c₁d₁a₂b₂c₂d₂ and second number a₃a₄b₃b₄c₃c₄d₃d₄
If (a₁ and a₂ = 0) then clear both a₃ and a₄, if (a₃ and a₄ = 0) then clear both a₁ and a₂, etc.
My solution:
a₁b₁c₁d₁a₂b₂c₂d₂
OR 0 0 0 0 a₁b₁c₁d₁ ( a₁b₁c₁d₁a₂b₂c₂d₂ >> 4)
----------------
0 0 0 0 a b c d
? (magic transformation)
? ? ? ? ? ? ? ?
----------------
0 a 0 b 0 c 0 d
OR a 0 b 0 c 0 d 0 (0 a 0 b 0 c 0 d << 1)
----------------
a a b b c c d d
AND a₃a₄b₃b₄c₃c₄d₃d₄
----------------
A₃A₄B₃B₄C₃C₄D₃D₄ (clear bits)
UPDATED (thanks to @AShelly):
x = a₁b₁c₁d₁a₂b₂c₂d₂
x = (x | x >> 4) & 0x0F
x = (x | x << 2) & 0x33
x = (x | x << 1) & 0x55
x = (x | x << 1)
y = a₃a₄b₃b₄c₃c₄d₃d₄
y = (y | y >> 1) & 0x55
y = (y | y >> 1) & 0x33
y = (y | y >> 2) & 0x0F
y = (y | y << 4)
This works for 32-bit values with the constants 0x0F0F0F0F, 0x33333333 and 0x55555555 (and constants twice as long for 64-bit values).
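A runnable C sketch of the two masked shift/OR passes at the core of this answer (the function names are just for illustration), spreading 0000abcd to 0a0b0c0d and collapsing it back:

#include <stdio.h>

static unsigned spread4(unsigned x)          /* 0000abcd -> 0a0b0c0d */
{
    x = (x | x << 2) & 0x33;                 /* 0000abcd -> 00ab00cd */
    x = (x | x << 1) & 0x55;                 /* 00ab00cd -> 0a0b0c0d */
    return x;
}

static unsigned collapse4(unsigned y)        /* 0a0b0c0d -> 0000abcd */
{
    y = (y | y >> 1) & 0x33;                 /* 0a0b0c0d -> 00ab00cd */
    y = (y | y >> 2) & 0x0F;                 /* 00ab00cd -> 0000abcd */
    return y;
}

int main(void)
{
    for (unsigned x = 0; x < 16; x++)        /* round-trip every 4-bit input */
        printf("%X -> %02X -> %X\n", x, spread4(x), collapse4(spread4(x)));
    return 0;
}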
If you're looking for the smallest number of operations, use a look-up table.
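For example, a 16-entry table indexed by abcd (a sketch; the table is precomputed by hand here) turns each conversion into a single load:

#include <stdio.h>

/* spread_lut[abcd] = 0a0b0c0d, precomputed for all 16 inputs */
static const unsigned char spread_lut[16] = {
    0x00, 0x01, 0x04, 0x05, 0x10, 0x11, 0x14, 0x15,
    0x40, 0x41, 0x44, 0x45, 0x50, 0x51, 0x54, 0x55
};

int main(void)
{
    unsigned char x = 0x0B;                     /* 0000 1011 */
    printf("%02X\n", spread_lut[x & 0x0F]);     /* prints 45, i.e. 0100 0101 */
    return 0;
}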
I have a number with binary representation 0000abcd. How do I convert it to 0a0b0c0d with the smallest number of operations?
Isn't this exactly "Interleave bits of X and Y" where Y is 0? Bit Twiddling Hacks has Multiple Solutions that don't use a lookup table.
How do I convert 0a0b0c0d back to 0000abcd?
See "How to de-interleave bits (UnMortonizing?)"
You can't do it in one go; you have to shift the bits on a per-bit basis:
Pseudo code:
X1 = a₁b₁c₁d₁
X2 = a₂b₂c₂d₂
Bm = 1000b   // bit mask, starts at the most significant bit
Result = 0
while (Bm != 0)
{
    Result = Result shl 2                        // make room for the next bit pair
    if (X1 and Bm) then Result = Result or 10b   // take the current bit of X1
    if (X2 and Bm) then Result = Result or 01b   // take the current bit of X2
    Bm = Bm shr 1
}
As a result you will get a₁a₂b₁b₂c₁c₂d₁d₂
I think it is not possible (without a lookup table) to do it in fewer operations using binary arithmetic on an x86 or x64 processor. Correct me if I'm mistaken, but your problem is about moving bits: having the abcd bits, you want to get the 0a0b0c0d bits in one operation. The problem starts when you look at how many bits 'a', 'b', 'c' and 'd' have to travel.
'a' was 4-th, became 7-th, distance travelled 3 bits
'b' was 3-rd, became 5-th, distance travelled 2 bits
'c' was 2-nd, became 3-rd, distance travelled 1 bit
'd' was 1-st, became 1-st, distance travelled 0 bits
There is no processor instruction that will move each of these bits dynamically by a different distance. However, if you can get different representations of the same input for free, for example because you have precomputed several values that you reuse in a loop, then it may be possible to gain some optimization; that is the effect of using additional knowledge about the topology. You just have to choose whether it will be:
[4 cycles, n^0 memory]
[2 cycles, n^1 memory]
[1 cycle , n^2 memory]