Need help decompressing GIF raster data

I have a 10x10 GIF that consists of 4 colors: white, red, blue, and black. I have parsed the GIF data below:
4749 4638 3961 <-- header
0a00 0a00 9100 00 <-- logical screen descriptor (packed byte 91 = 1001 0001): nColors = 4, so the global color table is 12 bytes
ffffff ff0000 0000ff 000000 <-- global color table
#0 FF FF FF
#1 FF 00 00
#2 00 00 FF
#3 00 00 00
21f9 04(00) (0000) (00)00 <-- graphics control extension
(00) = packed byte (000 reserved, 000 disposal method, 0 user input flag, 0 transparent color flag)
(0000) = delay time
(00) = transparent color index
2c 0000 0000 0a00 0a00 00 <-- image descriptor
02 16 <-- image data: 02 = LZW min code size, 0x16 = 22 = size of the data sub-block in bytes
8c2d 9987 2a1c dc33 a002 75ec 95fa a8de 608c 0491 4c01 00 <-- image block (22 data bytes + 00 sub-block terminator)
3b <-- trailer
Okay, so we have our image data above (labeled image block), and I'm trying to decompress it to get the original image back. My understanding is that you read the bytes left to right, taking bits from each byte starting at the least significant bit, with a code size of lzwMinCodeSize + 1 (2 + 1 = 3 bits at a time).
This is the decompression algorithm I'm following (where {CODE-1} denotes the string for the previously read code):
Initialize code table
let CODE be the first code in the code stream
output {CODE} to index stream
<LOOP POINT>
let CODE be the next code in the code stream
is CODE in the code table?
  Yes:
    output {CODE} to index stream
    let K be the first index in {CODE}
    add {CODE-1}+K to the code table
  No:
    let K be the first index of {CODE-1}
    output {CODE-1}+K to index stream
    add {CODE-1}+K to code table
return to LOOP POINT
I'm stepping through the decompression algorithm and here's what I've come up with so far (starting with the first bytes, read as 3-bit codes):
Initial code table (built from the global color table):
000 FF FF FF
001 FF 00 00
010 00 00 FF
011 00 00 00
100 CLEAR
101 End of Data
(bytes 8C 2D 99 in binary; codes numbered in the order they are read, right to left within each byte)
  3   2   1    6   5   4     8    7
 10|001|100  0|010|110|1  100|110|01
last    | current | output  | cindex | exists | dictionary   | value
        | 100     |         | 4      |        |              | CLEAR
100     | 001     | 001     | 1      | y      |              | RED
001     | 110     | 001 001 | 6      | n      | +001 001     | RED RED
001 001 | 110     | 001 001 | 6      | y      | +001 001 001 | RED RED
001 001 | 010     | 010     | 2      | y      | +001 001 010 | BLUE
010     | 010     | 010     | 2      | y      | +010 010     | BLUE
010     | 110     | 001 001 | 6      | y      | +010 001 001 | RED RED
001 001 | 100     |         | 4      |        |              | CLEAR
100     | 111     | 111     | 7      |        |              | ???? <--- (what do I do here?)
I should be getting 5 values of red and then 5 values of blue for the first 10 pixels, but as you can see it decodes 5 red, then 2 blue, then 2 red. Can anybody point out what I'm doing wrong here?
Thanks

Your mistake comes from missing the code size increase. Here are the codes, "nextcode" values and current code size:
Code read from bitstream: 100, 001, 110, 110, 0010, 1001
internal 'Next' code value: 110, 110, 110, 111, 1000, 1001
current code size: 3, 3, 3, 3, 4, 4
The missing logic in your decode loop is that you need to maintain a "nextcode" variable, which tells you where to insert codes in your table and when to increase the code size. It starts at clearcode + 2 and increases after each code is read from the bitstream (after the first non-CC value). The logic basically looks like this:
clear_dictionary:
    clearcode = 1 << codestart;   // codestart = 2 gives clearcode = 100
    codesize  = codestart + 1;    // start reading 3-bit codes
    nextcode  = clearcode + 2;    // first free slot in the table (110 here)
    nextlimit = 1 << codesize;    // when nextcode reaches this, grow the code size
    oldcode   = -1;

mainloop:
    while (!done)
    {
        code = getCode();
        if (code == clearcode)
            goto clear_dictionary;
        if (oldcode == -1)
        {
            // first code after a clear: just output it, no table insert yet
            write_code_to_output(code);
        }
        else
        {
            <LZW logic>
            nextcode++;
            if (nextcode >= nextlimit)
            {
                nextlimit <<= 1;
                codesize++;       // subsequent codes are read one bit wider
            }
        }
        oldcode = code;
    } // while !done
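For reference, here is one way the getCode() routine could work; a minimal C sketch, assuming the image data sub-blocks have already been concatenated into one flat buffer (the function name, the buffer handling, and the running bit offset are illustrative assumptions, not part of the original answer):

#include <stdint.h>

/* Read one variable-width LZW code, LSB-first, from the de-blocked image data. */
static int getCode(const uint8_t *data, int *bitpos, int codesize)
{
    int code = 0;
    for (int i = 0; i < codesize; i++) {
        int byte = *bitpos >> 3;                 /* which byte we are in        */
        int bit  = *bitpos & 7;                  /* which bit inside that byte  */
        code |= ((data[byte] >> bit) & 1) << i;  /* bits fill the code LSB-first */
        (*bitpos)++;
    }
    return code;
}

Called on the data above (8C 2D 99 ...) with codesize = 3, this returns 100, 001, 110, 110 for the first four codes, matching the code list at the top of this answer.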

Related

Trying to write to a hex file (.bin) using PB12.5 and BlobEdit (Blob)

I'm trying to write to a hex file using PB12.5. I'm able to write to it without any issues, but through testing I noticed I will need to send a null value (00) to the file at certain points.
I know if I assign null to a string it will null out the entire string, so I tried using a Blob, where I can insert a null value when needed (BlobEdit(blb_data, ll_pos, CharA(0))).
But BlobEdit() automatically inserts a null value between each position. I don't want this, as it's causing issues when I try to update the hex file. I just need to add my CharA(lb_byte) to each consecutive position in the Blob.
Is there any way around this or is PB just unable to do this? Below is the code:
ll_test = 1
ll_pos = 1
ll_length = Len(ls_output)

Do While ll_pos <= ll_length
    ls_data = Mid(ls_output, ll_pos, 2)
    lb_byte = Event ue_get_decimal_value_of_hex(ls_data)
    ll_test = BlobEdit(blb_data, ll_test, CharA(lb_byte), EncodingANSI!)
    ll_pos = ll_pos + 2
Loop
Hex file appears as follows:
16 35 2D D8 08 45 29 18 35 27 76 25 30 55 66 85 44 66 57 A4 67 99
After Blob update:
16 00 48 00 5D 00 C3 92 00 08 00 48 00 51 00 E2
I hope this helps:
//////////////////////////////////////////////////////////////////////////
// Function:    f_longtohex
// Description: LONG to HEXADECIMAL
// Scope:       public
// Arguments:   as_number  // long value to convert to hexadecimal
//              as_digitos // number of digits to return
// Return:      String
// Example:
//   f_longtohex(198, 2) --> 'C6'
//   f_longtohex(198, 4) --> '00C6'
//////////////////////////////////////////////////////////////////////////
long ll_temp0, ll_temp1
char lc_ret

if isnull(as_digitos) then as_digitos = 2

IF as_digitos > 0 THEN
    ll_temp0 = abs(as_number / (16 ^ (as_digitos - 1)))
    ll_temp1 = ll_temp0 * (16 ^ (as_digitos - 1))
    IF ll_temp0 > 9 THEN
        lc_ret = char(ll_temp0 + 55)   // 10-15 map to 'A'-'F'
    ELSE
        lc_ret = char(ll_temp0 + 48)   // 0-9 map to '0'-'9'
    END IF
    RETURN lc_ret + f_longtohex(as_number - ll_temp1, as_digitos - 1)
END IF

RETURN ''
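For comparison, the underlying operation the question needs is simply packing two hex characters into one raw byte, with nothing inserted between positions. Here is a small C sketch of that intended result (illustrative only, not PowerBuilder; the names and sample string are made up):

#include <stdio.h>
#include <string.h>

/* Convert one hex digit to its value 0-15, or -1 if invalid. */
static int hexval(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return -1;
}

int main(void)
{
    const char *ls_output = "16352DD8";  /* sample input in the question's format */
    unsigned char blob[64];
    size_t n = 0;

    /* Two hex characters become exactly one byte; no filler in between. */
    for (size_t i = 0; i + 1 < strlen(ls_output); i += 2)
        blob[n++] = (unsigned char)((hexval(ls_output[i]) << 4) | hexval(ls_output[i + 1]));

    for (size_t i = 0; i < n; i++)
        printf("%02X ", blob[i]);        /* prints: 16 35 2D D8 */
    printf("\n");
    return 0;
}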

How to calculate the Hamming weight for a vector?

I am trying to calculate the Hamming weight of a vector in Matlab.
function Hamming_weight (vet_dec)
Ham_Weight = sum(dec2bin(vet_dec) == '1')
endfunction
The vector is:
Hamming_weight ([208 15 217 252 128 35 50 252 209 120 97 140 235 220 32 251])
However, this gives the following result, which is not what I want:
Ham_Weight =
10 10 9 9 9 5 5 7
I would be very grateful if you could help me please.
You are summing over the wrong dimension!
sum(dec2bin(vet_dec) == '1',2).'
ans =
3 4 5 6 1 3 3 6 4 4 3 3 6 5 1 7
dec2bin(vet_dec) creates a matrix like this:
11010000
00001111
11011001
11111100
10000000
00100011
00110010
11111100
11010001
01111000
01100001
10001100
11101011
11011100
00100000
11111011
As you can see, you're interested in the sum of each row, not each column. Use the second input argument of sum, i.e. sum(x, 2), which specifies the dimension you want to sum along.
Note that this dec2bin-based approach is horribly slow compared to bitwise alternatives.
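For illustration, here is the classic bitwise alternative (Kernighan's method) as a C sketch rather than MATLAB:

#include <stdio.h>

/* Kernighan's trick: v &= v - 1 clears the lowest set bit,
   so the loop runs once per 1-bit. */
static int hamming_weight(unsigned int v)
{
    int count = 0;
    while (v) {
        v &= v - 1;
        count++;
    }
    return count;
}

int main(void)
{
    unsigned int vet_dec[] = {208, 15, 217, 252, 128, 35, 50, 252,
                              209, 120, 97, 140, 235, 220, 32, 251};
    for (int i = 0; i < 16; i++)
        printf("%d ", hamming_weight(vet_dec[i]));
    printf("\n");  /* prints: 3 4 5 6 1 3 3 6 4 4 3 3 6 5 1 7 */
    return 0;
}

This matches the corrected MATLAB output above.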
EDIT
For this to be a valid and meaningful MATLAB function, you must change your function definition a bit.
function ham_weight = hamming_weight(vector)       % return the variable ham_weight
    ham_weight = sum(dec2bin(vector) == '1', 2).'; % drop the .' if you want a column vector
end                                                % endfunction is not a MATLAB command

octave: using find() on cell array {} subscript and assigning it to another cell array

This is an example from Section 6.3.1, Comma Separated Lists Generated from Cell Arrays, of the Octave documentation (I browsed it through the doc command at the Octave prompt), which I don't quite understand.
in{1} = [10, 20, 30, 40, 50, 60, 70, 80, 90];
in{2} = inf;
in{3} = "last";
in{4} = "first";
out = cell(4, 1);
[out{1:3}] = find(in{1 : 3}); % line which I do not understand
So at the end of this section, we have in looking like:
in =
{
[1,1] =
10 20 30 40 50 60 70 80 90
[1,2] = Inf
[1,3] = last
[1,4] = first
}
and out looking like:
out =
{
[1,1] =
1 1 1 1 1 1 1 1 1
[2,1] =
1 2 3 4 5 6 7 8 9
[3,1] =
10 20 30 40 50 60 70 80 90
[4,1] = [](0x0)
}
Here, find is called with 3 output parameters (forgive me if I'm wrong on calling them output parameters, I am pretty new to Octave) from [out{1:3}], which represents the first 3 empty cells of the cell array out.
When I run find(in{1 : 3}) with 3 output parameters, as in:
[i,j,k] = find(in{1 : 3})
I get:
i = 1 1 1 1 1 1 1 1 1
j = 1 2 3 4 5 6 7 8 9
k = 10 20 30 40 50 60 70 80 90
which kind of explains why out looks like it does, but when I execute in{1:3}, I get:
ans = 10 20 30 40 50 60 70 80 90
ans = Inf
ans = last
which are the 1st to 3rd elements of the in cell array.
My question is: Why does find(in{1 : 3}) drop off the 2nd and 3rd entries in the comma separated list for in{1 : 3}?
Thank you.
The documentation for find should help you answer your question:
When called with 3 output arguments, find returns the row and column indices of the non-zero elements (that's your i and j) and a vector containing the non-zero values (that's your k). That explains the 3 output arguments, but not why it only considers in{1}. To answer that, you need to look at what happens when you pass 3 input arguments to find, as in find(x, n, direction):
If three inputs are given, direction should be one of "first" or
"last", requesting only the first or last n indices, respectively.
However, the indices are always returned in ascending order.
so in{1} is your x (your data, if you like), in{2} is how many indices find should consider (all of them in your case, since in{2} = Inf), and in{3} is whether find should return the first or last n indices of in{1} ("last" in your case). In other words, the comma separated list in{1 : 3} expands into three separate arguments, so find(in{1 : 3}) is exactly equivalent to find(in{1}, in{2}, in{3}); the 2nd and 3rd entries aren't dropped, they become the n and direction inputs.

Octal full adder: how to?

I have this project listed below and I'm not sure where to start. Maybe someone can give me a few pointers or point me in the right direction?
Thanks!!
Input: A, B = octal digits (see representation below); Cin = binary digit
Output: S = octal digit (see representation below); Cout = binary digit
Task: Using binary FAs, design a circuit that acts as an octal FA. More specifically,
this circuit would input the two octal digits A, B, convert them into binary numbers, add
them using only binary FAs, convert the binary result back to octal, and output the sum as
an octal digit, and the binary carry out.
Input/Output binary representation of octal digits
Every octal digit will be represented using the following 8-bit binary representation:
Octal   8-bit input lines
digit   0 1 2 3 4 5 6 7
0 1 0 0 0 0 0 0 0
1 0 1 0 0 0 0 0 0
2 0 0 1 0 0 0 0 0
3 0 0 0 1 0 0 0 0
4 0 0 0 0 1 0 0 0
5 0 0 0 0 0 1 0 0
6 0 0 0 0 0 0 1 0
7 0 0 0 0 0 0 0 1
You are required to design the circuit in a structured way.
OK, so essentially you're being asked to design an 8-to-3 encoder and a 3-to-8 decoder. Because you're given FAs to work with, the adder itself isn't the point of the assignment.
First we need to define how an encoder and decoder function. So we construct a truth table:
Encoder:
Input | Output
01234567 | 421
-----------------
10000000 | 000
01000000 | 001
00100000 | 010
00010000 | 011
00001000 | 100
00000100 | 101
00000010 | 110
00000001 | 111
and the decoder is just the reverse of that.
Next, how do we construct our encoder? Well, we can simply attack it one bit at a time.
So for the 1s digit: if input bit 1, 3, 5 or 7 is set then it's 1, otherwise it's 0. So we just need a giant OR with 4 inputs connected to 1, 3, 5 and 7.
For the 2s digit we need the OR gate connected to 2, 3, 6, 7. Finally, for the 4s digit, connect the OR inputs to 4, 5, 6, 7. This doesn't do any error checking to make sure extra bits aren't set; the behavior in that case seems to be undefined by the spec, though, so it's probably OK.
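In C-like terms (matching the style used for the decoder below), the whole encoder is just three OR expressions; the one-hot input array and the function name here are illustrative, not part of the assignment:

/* Hypothetical 8-to-3 encoder: in[0..7] is the one-hot octal digit,
   out[0]/out[1]/out[2] are the 1s/2s/4s bits; each is one 4-input OR. */
void encode8to3(const int in[8], int out[3])
{
    out[0] = in[1] | in[3] | in[5] | in[7]; /* 1s bit: digits 1, 3, 5, 7 */
    out[1] = in[2] | in[3] | in[6] | in[7]; /* 2s bit: digits 2, 3, 6, 7 */
    out[2] = in[4] | in[5] | in[6] | in[7]; /* 4s bit: digits 4, 5, 6, 7 */
}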
Then you take your three lines and feed them to your adders. This is easy so I won't get into it.
Finally, you need a decoder; this is a bit trickier than the encoder.
Let's look at the decoder truth table:
Input | Output
421 | 01234567
----------------
000 | 10000000
001 | 01000000
010 | 00100000
011 | 00010000
100 | 00001000
101 | 00000100
110 | 00000010
111 | 00000001
This time we can't just use 3 OR gates and call it a day.
Let's write this down in C-like code:
if (!input[0] && !input[1] && !input[2])
    output[0] = 1;
if ( input[0] && !input[1] && !input[2])
    output[1] = 1;
if (!input[0] &&  input[1] && !input[2])
    output[2] = 1;
if ( input[0] &&  input[1] && !input[2])
    output[3] = 1;
if (!input[0] && !input[1] &&  input[2])
    output[4] = 1;
if ( input[0] && !input[1] &&  input[2])
    output[5] = 1;
if (!input[0] &&  input[1] &&  input[2])
    output[6] = 1;
if ( input[0] &&  input[1] &&  input[2])
    output[7] = 1;
So it looks like we're going to be using eight 3-input AND gates and three NOT gates!
This one is a bit more complicated, so it helps to work from an example implementation.
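As a rough stand-in for a schematic, here is a hypothetical C simulation that wires the pieces together the same way: the encoder's three ORs, three chained binary FAs, then the 3-to-8 decoder. All names and the test digits are made up for illustration:

#include <stdio.h>

/* One binary full adder: sum = a XOR b XOR cin, carry = majority(a, b, cin). */
static void full_adder(int a, int b, int cin, int *sum, int *cout)
{
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | (a & cin) | (b & cin);
}

/* 8-to-3 encoder: the same three ORs as in the sketch above. */
static void encode(const int in[8], int out[3])
{
    out[0] = in[1] | in[3] | in[5] | in[7];
    out[1] = in[2] | in[3] | in[6] | in[7];
    out[2] = in[4] | in[5] | in[6] | in[7];
}

/* 3-to-8 decoder: eight 3-input ANDs fed by each bit or its NOT. */
static void decode(const int in[3], int out[8])
{
    for (int d = 0; d < 8; d++)
        out[d] = ((d & 1) ? in[0] : !in[0]) &
                 ((d & 2) ? in[1] : !in[1]) &
                 ((d & 4) ? in[2] : !in[2]);
}

int main(void)
{
    int A[8] = {0}, B[8] = {0}, S[8];
    A[5] = 1;                                 /* octal digit 5, one-hot */
    B[6] = 1;                                 /* octal digit 6, one-hot */

    int a[3], b[3], s[3], c1, c2, cout;
    encode(A, a);
    encode(B, b);
    full_adder(a[0], b[0], 0,  &s[0], &c1);   /* chain the three binary FAs */
    full_adder(a[1], b[1], c1, &s[1], &c2);
    full_adder(a[2], b[2], c2, &s[2], &cout);
    decode(s, S);

    for (int d = 0; d < 8; d++)
        if (S[d])
            printf("S = %d, Cout = %d\n", d, cout);
    return 0;
}

Running it prints S = 3, Cout = 1, i.e. 5 + 6 = 13 in octal, which is what the circuit should produce.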
If the conversion is to be done by hand in class, you can try the following way.
Conversion of Octal to Binary:
To convert octal to binary, replace each octal digit by its binary representation.
Example: convert 51₈ to binary.
5₈ = 101₂
1₈ = 001₂
Therefore, 51₈ = 101001₂.
Conversion of Binary to Octal:
The process is the reverse of the previous algorithm. The binary digits are grouped in threes, starting from the binary point (if present) or from the last digit, and proceeding to the left and to the right. Add leading 0s (or trailing zeros to the right of the point) to fill out the last group of three if necessary. Then replace each trio with the equivalent octal digit.
Example: convert binary 1010111100 to octal.
(Adding two leading zeros, the number is 001010111100.)
001 = 1, 010 = 2, 111 = 7, 100 = 4
Therefore, 1010111100₂ = 1274₈.
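The same group-by-three procedure is easy to mechanize; here is a small C sketch of it (the function name and fixed-size buffer are illustrative):

#include <stdio.h>
#include <string.h>

/* Convert a binary digit string to octal by grouping bits in threes
   from the right, exactly as described above. */
static void bin_to_oct(const char *bin)
{
    size_t len  = strlen(bin);
    size_t lead = (3 - len % 3) % 3;   /* leading zeros needed to fill the first trio */
    char padded[64] = {0};
    memset(padded, '0', lead);
    strcpy(padded + lead, bin);

    for (size_t i = 0; padded[i]; i += 3) {
        int digit = (padded[i]     - '0') * 4
                  + (padded[i + 1] - '0') * 2
                  + (padded[i + 2] - '0');
        printf("%d", digit);
    }
    printf("\n");
}

int main(void)
{
    bin_to_oct("1010111100");   /* prints 1274, matching the worked example */
    return 0;
}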
To convert to and from octal you can use an encoder & decoder pair (http://www.asic-world.com/digital/combo3.html). The 3 bit adder can be made from chaining the 3 FAs.

How to convert my binary (hex) data to latitude and longitude?

I have a binary data stream which carries geolocation coordinates, latitude and longitude. I need to figure out how they are encoded.
4adac812 = 74°26.2851' = 74.438085
2b6059f9 = 43°0.2763' = 43.004605
4adaee12 = 74°26.3003' = 74.438338
2a3c8df9 = 42°56.3177' = 42.938628
4ae86d11 = 74°40.1463' = 74.669105
2afd0efb = 42°59.6263' = 42.993772
The 1st value is the hex value; the 2nd and 3rd are the values I get in the output (not sure which one is used in the conversion).
I've found that the first byte represents the integer part of the value (0x4a = 74), but I cannot find how the fractional part is encoded.
I would really appreciate any help!
Thanks.
--
Upd: This stream comes from some "Chinese" GPS server software over TCP. I have no sources or documentation for the client software. I suppose it was written in VC++ 6 and uses some standard implementations.
--
Upd: Here are the packets I get:
Hex data:
41 00 00 00 13 bd b2 2c
4a e8 6d 11 2a 3c 8d f9
f6 0c ee 13
Log data in client software:
[Lng] 74°40.1463', direction:1
[Lat] 42°56.3177', direction:1
[Head] direction:1006, speed:3318, AVA:1
[Time] 2011-02-25 19:52:19
Result data in client (UI):
74.669105
42.938628
Head 100 // floor(1006/10)
Speed 61.1 // floor(3318/54.3)
41 00 00 00 b1 bc b2 2c
4a da ee 12 2b 60 59 f9
00 00 bc 11
[Lng] 74°26.3003', direction:1
[Lat] 43°0.2763', direction:1
[Head] direction:444, speed:0, AVA:1
[Time] 2011-02-25 19:50:49
74.438338
43.004605
00 00 00 00 21 bd b2 2c
4a da c8 12 aa fd 0e fb
0d 0b e1 1d
[Lng] 74°26.2851', direction:1
[Lat] 42°59.6263', direction:1
[Head] direction:3553, speed:2829, AVA:1
[Time] 2011-02-25 19:52:33
74.438085
42.993772
I don't know what the first 4 bytes mean.
I found that the lower 7 bits of the 5th byte represent the number of seconds (maybe bytes 5-8 are the time?).
Byte 9 represents the integer part of Lat.
Byte 13 is the integer part of Lng.
Bytes 17-18 reversed (byte-swapped word) are the speed.
Bytes 19-20 reversed are AVA(?) & direction (4 + 12 bits). (By the way, does anybody know what AVA is?)
And one note: in the 3rd packet's 13th byte you can see that only the lower 7 bits are used. I guess the 1st bit doesn't mean anything (I removed it at the beginning, sorry if I'm wrong).
I have reordered your data so that we first have the 3 longitudes and then the 3 latitudes:
74.438085, 74.438338, 74.669105, 43.004605, 42.938628, 42.993772
The best fit to the hexadecimals I can come up with is:
74.437368, 74.439881, 74.668392, 42.993224, 42.961388, 42.982391
The differences are: -0.000717, 0.001543, -0.000713, -0.011381, 0.022760, -0.011381
The program that generates these values from the complete hex values (4 bytes, not 3) is:
#include <stdio.h>

int main(void) {
    int a[] = { 0x4adac812, 0x4adaee12, 0x4ae86d11, 0x2b6059f9, 0x2a3c8df9, 0x2afd0efb };
    int i = 0;
    while (i < 3) {   /* longitudes */
        double b = (double)a[i] / (2 << (3 * 8)) * 8.668993 - 250.0197;
        printf("%f\n", b);
        i++;
    }
    while (i < 6) {   /* latitudes */
        double b = (double)a[i] / (2 << (3 * 8)) * 0.05586007 + 41.78172;
        printf("%f\n", b);
        i++;
    }
    printf("press key");
    getchar();
    return 0;
}
Brainstorming here.
If we look at the lower 6 bits of the second byte (data[1] & 0x3f) we get the "minutes" value for some of the examples:
0xda & 0x3f = 0x1a = 26; // ok
0x60 & 0x3f = 0x20 = 32; // should be 0
0xe8 & 0x3f = 0x28 = 40; // ok
0x3c & 0x3f = 0x3c = 60; // should be 56
0xfd & 0x3f = 0x3d = 61; // should be 59
Perhaps this is the right direction?
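A quick way to test that hypothesis over all six values; a throwaway C sketch, with the field layout guessed rather than confirmed:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The six raw values; the highest byte is the known degrees part (0x4a = 74, 0x2b = 43, ...). */
    uint32_t v[] = { 0x4adac812, 0x2b6059f9, 0x4adaee12,
                     0x2a3c8df9, 0x4ae86d11, 0x2afd0efb };

    for (int i = 0; i < 6; i++) {
        unsigned degrees = v[i] >> 24;          /* byte 0: known integer part  */
        unsigned minutes = (v[i] >> 16) & 0x3f; /* guess: low 6 bits of byte 1 */
        printf("%08x -> %u deg, %u min?\n", (unsigned)v[i], degrees, minutes);
    }
    return 0;
}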
I have tried your new data packets:
74+40.1463/60
74+26.3003/60
74+26.2851/60
42+56.3177/60
43+0.2763/60
42+59.6263/60
74.66910, 74.43834, 74.43809, 42.93863, 43.00460, 42.99377
My program gives:
74.668392, 74.439881, 74.437368, 42.961388, 42.993224, 39.407346
The differences are:
-0.000708, 0.001541, -0.000722, 0.022758, -0.011376, -3.586424
I re-used the 4 constants I derived from your first packet, as those are probably stored in your client somewhere. The slight differences might be the result of some randomization the client does to prevent you from getting the exact value or from reverse-engineering their protocol.