As the title says, I'm curious to know how the checksum value is calculated; from what I've read it is calculated using two's complement. Below are two lines from the hex file which was loaded onto my microcontroller; I've added spaces to make them easier to read. S315 appears on every line, the address on line 1 is 080C0000, followed by 16 hex values which represent the data bytes, and the values AA on line 1 and AB on line 2 are, I assume, the checksum values.
For line 1 I've tried adding the following: 15+08+0C+00+00+4D+53+53+70+6F+74+31+00+66+10+AE+19+7E+63+1F+78, which gives me 555 hex, or 010101010101 in binary. I've entered the binary value into an online two's complement calculator, but it always says "invalid binary".
S3 15 080C0000 4D 53 53 70 6F 74 31 00 66 10 AE 19 7E 63 1F 78 AA
S3 15 080C0010 00 00 00 00 45 85 63 EB FF FF FF FF 04 00 03 00 AB
You add the byte values, like you've done. From that sum you take only the least significant byte.
Then for Motorola HEX (SREC):
You take the one's complement of that byte by inverting its bits (i.e. 1s turn to 0s and vice versa).
And for Intel HEX:
You take the two's complement of that byte by inverting its bits (i.e. 1s turn to 0s and vice versa) and then adding 1.
Going by your example, you have the sum 0x555. Take the least significant byte, which is 0x55.
For Motorola HEX (SREC): Calculate the one's complement of that. You get 0xAA as the checksum.
For Intel HEX: Calculate the two's complement of that. You get 0xAB as the checksum.
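Both variants are one line each in code. Here's a quick Python sketch (the function names are mine), applied to the count, address and data bytes of your line 1:

    def srec_checksum(record):
        # Motorola S-record: one's complement of the low byte of the sum.
        return ~sum(record) & 0xFF

    def ihex_checksum(record):
        # Intel HEX: two's complement of the low byte of the sum.
        return -sum(record) & 0xFF

    # Count, address and data bytes of line 1 (everything between
    # "S3" and the checksum itself):
    line1 = bytes.fromhex("15080C00004D5353706F7431006610AE197E631F78")

    print(hex(sum(line1)))            # 0x555 -- the sum you calculated
    print(hex(srec_checksum(line1)))  # 0xaa  -- matches the AA in your file
    print(hex(ihex_checksum(line1)))  # 0xab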
I have tried some of the online references as well as the unix time format etc., but none of these seem to work. See the examples below.
Running MySQL 5.5.5 on Ubuntu, InnoDB engine. Nothing is custom; this is the built-in datetime type.
Here are some examples with the 6-byte hex string and the decoded message below. We are looking for the decoding algorithm, i.e. how to turn the 6-byte hex string into the correct date/time. The algorithm must work correctly on the examples below. The rightmost byte seems to indicate the difference in seconds correctly for small differences in time between records; e.g. we show an example with a 14-second difference.
Full records, nicely highlighted and formatted, are in a Word document here:
https://www.dropbox.com/s/zsqy9o2rw1h0e09/mysql%20datetime%20examples%20.docx?dl=0
Contact frank%simrex.com re. reward (replace % with #).
Hex strings and decoded date/time pairs are below, pulled from a healthy file running MySQL:
12 51 72 78 B9 46 ... 2014-10-22 16:53:18
12 51 72 78 B9 54 ... 2014-10-22 16:53:32
12 51 72 78 BA 13 ... 2014-10-22 16:55:23
12 51 72 78 CC 27 ... 2014-10-22 17:01:51
Here you go:
select str_to_date(conv(replace('12 51 72 78 CC 27',' ', ''), 16, 10), '%Y%m%d%H%i%s')
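If you need to decode these outside MySQL, here is the same idea as a minimal Python sketch (inferred from the samples above: the stored value is just the decimal number YYYYMMDDHHMMSS written as a 6-byte big-endian integer):

    from datetime import datetime

    def decode_dt(hexstr):
        # Read the 6 bytes as one big-endian integer; its decimal
        # digits spell out YYYYMMDDHHMMSS.
        n = int(hexstr.replace(" ", ""), 16)
        return datetime.strptime(str(n), "%Y%m%d%H%M%S")

    for s in ["12 51 72 78 B9 46",   # 2014-10-22 16:53:18
              "12 51 72 78 B9 54",   # 2014-10-22 16:53:32
              "12 51 72 78 BA 13",   # 2014-10-22 16:55:23
              "12 51 72 78 CC 27"]:  # 2014-10-22 17:01:51
        print(s, "->", decode_dt(s))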
I have a trivial RGB file saved as TIFF in Photoshop, 1000 or so pixels wide. The first row consists of 3 pixels, all of which are hex 4B red, B0 green, 78 blue; the rest of the row is white.
The strip is LZW-encoded and the initial bytes of the strip are:
80 12 D6 07 80 04 16 0C B4 27 A1 E0 D0 B8 64 36 ... (actually only the first 7 or so bytes are significant to my question.)
In 9-bit segments this is:
100000000 001001011 010110000 001111000 000000000 100000101 100000110 ...
(0x100) (0x4B) (0xB0) (0x78) (0x00) (0x105) (0x106)
From what I understand 256 (0x100) is a reset code, but why is the first extended code after that 261 (0x105) instead of 257? I would expect whatever dictionary entry this points to to be the 4B/B0 pair for the second pixel (which it may well be), but how would the decompression algorithm know to place 4B/B0 at 261 instead of 257? Can someone explain what I'm missing here? Might there be something elsewhere in the .tif file that would indicate this? Thanks very much.
Let's see:
256 (100h) is Clear
257 (101h) is EOF
In your case, then:
4Bh B0h is 258 (102h)
B0h 78h is 259 (103h)
78h 00h is 260 (104h)
00h 00h is 261 (105h)
Looks good to me. LZW can actually emit a code one step ahead of what the decoder has added to its table; when that happens, the decoder reconstructs the missing entry as the previous string plus its own first byte.
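To make that bookkeeping visible, here's a minimal sketch of a TIFF-style LZW decoder in Python, fed your code sequence directly (the 9-bit unpacking is skipped). Note how 261 and 262 arrive before the decoder has defined them:

    CLEAR, EOI = 256, 257

    def lzw_decode(codes):
        table, prev, next_code = {}, None, 258
        out = bytearray()
        for code in codes:
            if code == CLEAR:
                table = {i: bytes([i]) for i in range(256)}
                prev, next_code = None, 258
            elif code == EOI:
                break
            elif code in table:
                entry = table[code]
                if prev is not None:
                    table[next_code] = prev + entry[:1]   # normal case
                    next_code += 1
                out += entry
                prev = entry
            else:
                # Code not yet in the decoder's table: it must be the
                # one the encoder just created, i.e. prev + prev[0].
                entry = prev + prev[:1]
                table[next_code] = entry
                next_code += 1
                out += entry
                prev = entry
        return bytes(out)

    data = lzw_decode([0x100, 0x4B, 0xB0, 0x78, 0x00, 0x105, 0x106])
    print(data.hex())  # 4bb078000000000000 -- 4B B0 78 then six 00 bytes

Codes 258-260 are created while the literals are decoded; 261 (00 00) and 262 (00 00 00) go through the special case.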
I have a video with an unknown frame rate. I need to calculate the frame rate it was encoded for. I am trying to calculate it using the data in SPS but I cannot decode it.
The bitstream for the NAL is :
67 64 00 1e ac d9 40 a0 2f f9 61 00 00 03 00 7d 00 00 17 6a 0f 16 2d 96
From an online guide (http://www.cardinalpeak.com/blog/the-h-264-sequence-parameter-set/), I could figure out its profile and level fields, but to figure out everything after the "seq_parameter_set_id" field in the table, I need to know ue(v). Here is where I get confused. According to this page, "ue(v)" should be called with the value v=32? (Why?) What exactly should I feed into the exponential-Golomb function? Do I read 32 bits from the beginning of the bitstream, or from after the previously read bytes, to regard it as the "seq_parameter_set_id"?
( My ultimate goal is to decode the VUI parameters so that I can recalculate the framerate.)
Thanks!
ue = unsigned Exponential-Golomb coding.
(v) = variable number of bits, so there is no fixed length (like 32) to feed in; the decoder determines the length from the bitstream itself.
http://en.wikipedia.org/wiki/Exponential-Golomb_coding
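In other words, you don't read 32 bits; you keep a running bit position and let each ue(v) field consume as many bits as it needs, continuing from wherever the previous field ended. A minimal Python sketch (the bit-string interface is my own):

    def read_ue(bits, pos):
        # Count leading zeros, skip the terminating 1, then read
        # that many more bits: value = 2**zeros - 1 + suffix.
        zeros = 0
        while bits[pos + zeros] == "0":
            zeros += 1
        pos += zeros + 1
        suffix = bits[pos:pos + zeros]
        pos += zeros
        return (1 << zeros) - 1 + int(suffix or "0", 2), pos

    # In your SPS, the RBSP after profile_idc, the constraint flags
    # and level_idc continues with the bytes AC D9 40 A0:
    bits = format(0xACD940A0, "032b")

    v, pos = read_ue(bits, 0)
    print("seq_parameter_set_id =", v)  # 0
    v, pos = read_ue(bits, pos)         # decoding continues at pos
    print("chroma_format_idc =", v)     # 1 (4:2:0; present since profile_idc = 100)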
Looking at the PNG specification, it appears that the PNG pixel data chunk starts with IDAT and ends with IEND (slightly clearer explanation here). In the middle are values that don't make sense to me.
How can I get usable RGB values from this, without using any libraries (ie from the raw binary file)?
As an example, I made a 2x2px image with 4 black rgb(0,0,0) pixels in Photoshop:
Here's the resulting data (in the raw binary input, the hex values, and the human-readable ASCII):
BINARY HEX ASCII
01001001 49 'I'
01000100 44 'D'
01000001 41 'A'
01010100 54 'T'
01111000 78 'x'
11011010 DA '\xda'
01100010 62 'b'
01100000 60 '`'
01000000 40 '@'
00000110 06 '\x06'
00000000 00 '\x00'
00000000 00 '\x00'
00000000 00 '\x00'
00000000 00 '\x00'
11111111 FF '\xff'
11111111 FF '\xff'
00000011 03 '\x03'
00000000 00 '\x00'
00000000 00 '\x00'
00001110 0E '\x0e'
00000000 00 '\x00'
00000001 01 '\x01'
10000011 83 '\x83'
11010100 D4 '\xd4'
11101100 EC '\xec'
10001110 8E '\x8e'
00000000 00 '\x00'
00000000 00 '\x00'
00000000 00 '\x00'
00000000 00 '\x00'
01001001 49 'I'
01000101 45 'E'
01001110 4E 'N'
01000100 44 'D'
You missed a rather crucial detail in both specifications:
The official one:
The IDAT chunk contains the actual image data which is the output stream of the compression algorithm.
[...]
Deflate-compressed datastreams within PNG are stored in the "zlib" format.
Wikipedia:
IDAT contains the image, which may be split among multiple IDAT chunks. Such splitting increases filesize slightly, but makes it possible to generate a PNG in a streaming manner. The IDAT chunk contains the actual image data, which is the output stream of the compression algorithm.
Both state the raw image data is compressed. Looking at your data, the first 2 bytes
78 DA
contain the compression flags as specified in RFC1950. The rest of the data is compressed.
Decompressing this with a general zlib-compatible routine shows 14 bytes of output:
00 00 00 00 00 00 00
00 00 00 00 00 00 00
where each first byte is the PNG row filter (0 for both rows), followed by 2 RGB triplets (0,0,0), for the 2 lines of your image.
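You can check this with any zlib implementation. Assuming I've split your dump correctly (the four bytes 83 D4 EC 8E before IEND are the IDAT chunk's CRC, not part of the zlib stream), Python's built-in zlib module confirms it:

    import zlib

    # Everything between "IDAT" and the chunk CRC in the dump above.
    idat = bytes.fromhex("78DA6260400600000000FFFF0300000E0001")
    raw = zlib.decompress(idat)
    print(len(raw), raw.hex())  # 14 0000000000000000000000000000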
"Without using any libraries" you need 3 separate routines to:
read and parse the PNG superstructure; this provides the IDAT compressed data, as well as essential information such as width, height, and color depth;
decompress the zlib part(s) into raw binary data;
parse the decompressed data, handling Adam7 interlacing if required, and applying row filters.
Only after performing these three steps will you have access to the raw image data. Of these, you seem to have a good grasp of step (1). Step (2) is way harder to "do" yourself; personally, I cheated and used miniz in my own PNG handling programs. Step (3), again, is merely a question of determination. All the necessary bits of information can be found on the web, but it takes a while to put everything in the right order. (Just recently I found an error in my implementation of the Paeth row filter; it went unnoticed because that filter is fairly rare in 'real world' images.)
See Building a fast PNG encoder issues for a similar discussion and Trying to understand zlib/deflate in PNG files for an in-depth look into the Deflate scheme.
I am in a CSCI class and we are just learning about program execution. I am running a program called "Brookshear machine simulator", which was written by the author of the class textbook (Computer Science, 11th edition, by J. Glenn Brookshear). The program is intended to add the contents of 11 and 0F, storing the result into F1. I have done everything necessary and produced the hex value in 11, which is 09. I am then asked to convert this into two's complement 8-bit binary, which is where I am having a problem. I will need to convert more hex values into two's complement 8-bit binary later in this lab, but I can't figure out how to do it. Can someone please help me understand what two's complement is and how it relates to 8-bit binary, so I can do this conversion?
Here is a picture of the machine simulator with the inputs as directed by the lab instructions. My task is to find the hex value in 11 (09) then convert it to twos complement 8-bit binary.
Each hexadecimal digit has a 4 bit binary equivalent:
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
A 1010
B 1011
C 1100
D 1101
E 1110
F 1111
So if you have a two-character hex value, like 09, then you can see that 0 = 0000 and 9 = 1001, so that would be:
00001001
which is an 8 bit value.
This works for any length of hex number, of course; for example, 37FF in hex would be 0011011111111111 in binary.
Note that two's complement is irrelevant for your example, as the number is positive: an 8-bit pattern with a leading 0 bit represents the same value whether or not you read it as two's complement.
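If you want to sanity-check conversions like this, here is a small Python sketch (the helper name is mine); it prints the 8-bit pattern for a hex byte and the value that pattern represents when read as two's complement:

    def hex_to_2c8(hex_str):
        # Low 8 bits of the hex value, as a binary string.
        value = int(hex_str, 16) & 0xFF
        bits = format(value, "08b")
        # In 8-bit two's complement, patterns with the top bit set
        # (value >= 128) represent negative numbers.
        signed = value - 256 if value >= 128 else value
        return bits, signed

    print(hex_to_2c8("09"))  # ('00001001', 9)
    print(hex_to_2c8("F1"))  # ('11110001', -15)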