Reverse Engineering CRC32 in firmware

I have a P-flash (about 700 KB in size). In this flash there is a CRC32. I know where it is, and I know the CRC calculation method (polynomial, initial value, final XOR value, input and output reflected). The problem is that only a part of these 700 KB is used to calculate the CRC, and I don't know which part. Is there a way to find out the input data for the calculation?
I have 5 of these 700 KB files. The files are all identical except for 4 bytes that differ, plus the 4 bytes of the CRC.

If you can get the files onto a PC, that would help. You can XOR any two of the files to get a file that is all zeroes except for the 4 differing bytes and the 4 bytes of the CRC. XORing two files also cancels out any initial value or final XOR value, as if initial value = 0 and final XOR value = 0. Then check the nearly-all-zero file to see if its CRC matches what you would expect. If it matches, you know the CRC covers the 4 non-zero bytes and all the zero bytes that follow them; you still wouldn't know how far before the 4 non-zero bytes the calculation starts, but it would at least narrow the search for what is included in the CRC calculation.
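A minimal Python sketch of that XOR step (the file names are hypothetical):

    # XOR two dumps to isolate the differing bytes.
    a = open("dump1.bin", "rb").read()
    b = open("dump2.bin", "rb").read()
    x = bytes(p ^ q for p, q in zip(a, b))
    open("xored.bin", "wb").write(x)
    # Expect the 4 differing data bytes plus the 4 CRC bytes.
    print("non-zero offsets:", [i for i, v in enumerate(x) if v])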
Assuming the part used for the CRC is contiguous, you could do a brute-force search using a fast CRC32. On x86 with SSE2 (xmm) registers, an assembly-based CRC32 can process 700,000 bytes in about 0.0002 seconds on an Intel 3770K at 3.5 GHz, a 3rd-generation processor (they're faster now), which works out to a bit more than 70 seconds to try every length from 8 to 700,000 bytes.
I converted the code from the GitHub example below to Visual Studio asm, for both reflected and non-reflected CRCs, using the CRC32 and CRC32C polynomials, and I can upload the code if you're interested.
https://github.com/intel/isa-l/blob/master/crc/crc16_t10dif_01.asm
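For illustration, here is a rough Python sketch of the length search (not the assembly version above). It assumes the CRC'd region starts at the beginning of the image and varies only in length, and that the stored CRC uses the standard parameters implemented by zlib.crc32 (reflected, initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF); the file name, CRC offset, and byte order are made up:

    import zlib

    data = open("pflash.bin", "rb").read()
    CRC_OFFSET = 0xAAAA0  # hypothetical location of the stored CRC
    stored = int.from_bytes(data[CRC_OFFSET:CRC_OFFSET + 4], "little")

    crc = 0
    for length in range(1, len(data) + 1):
        # Extend the running CRC one byte at a time, so scanning all
        # lengths costs O(n) total rather than O(n^2).
        crc = zlib.crc32(data[length - 1:length], crc)
        if crc == stored:
            print("CRC matches data[0:%d]" % length)

If the region's start is unknown too, the same idea applies per candidate start, which is where a fast assembly implementation pays off.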


Need a formula to get total LUN size using lunSizeLow and lunSizeHigh SNMP objects

I have 2 SNMP Objects/OIDs. Below are the details:
Object1:
Name: lunSizeLow
OID: 1.3.6.1.4.1.43906.1.4.3.2.3.1.9
Description: LUN size in bytes - low order bytes
Object2:
Name: lunSizeHigh
OID: 1.3.6.1.4.1.43906.1.4.3.2.3.1.10
Description: LUN size in bytes - high order bytes
My requirement:
I want to monitor LUN size through a script, but I didn't find any SNMP object that gives the total LUN size directly. I found 2 separate objects (lunSizeLow and lunSizeHigh), so I need a formula to get the total LUN size from these low-order and high-order SNMP objects.
I've gone through many articles on the internet and found a couple of formulas on community.hpe.com,
but I'm not sure which one is correct.
Formula 1:
The maximum unsigned number that can be stored in a 32-bit counter is 4294967295.
The total size would be: LOW_ORDER_BYTES + HIGH_ORDER_BYTES * 4294967296
Formula 2:
Total size in GB is LOW_ORDER_BYTES / 1073741824 + HIGH_ORDER_BYTES * 4
Could anyone help me find the correct formula?
Most languages have a bit-shift operator, allowing you to do something like the following (pseudo-Java):
long myBigInteger = lunSizeHigh;
myBigInteger = myBigInteger << 32; // shift the high bits 32 positions to the left, into the high half of the long
myBigInteger = myBigInteger + lunSizeLow;
This has two advantages over multiplying:
Bit shifting is often faster than multiplication, even though most compilers would optimize that particular multiplication into a bit shift anyway.
It is easier to read the code and understand why this would provide the correct answer, given the description from the MIB. Magic numbers should be avoided where possible.
That aside, putting some numbers into the Windows Calculator (in Programmer Mode) and trying formula 1, we can see that it works.
Now, you don't specify what language or environment you're working in, and in some languages you won't have any number type that supports the size of numbers you want to manipulate. (That's the same reason this number had to be split into two counters to begin with: it's larger than the largest number representation available on some primitive platforms.) Whether you shift or multiply, you'll have to make sure your implementation language has an integer type wide enough to hold the result.
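A quick sanity check of formula 1 in Python (the counter values here are made up):

    lun_size_high = 2           # hypothetical values read via SNMP
    lun_size_low = 1500000000
    total = (lun_size_high << 32) + lun_size_low
    assert total == lun_size_low + lun_size_high * 4294967296
    print(total, "bytes")       # 10089934592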

Minimum swaps to create an alternating binary string of ones and zeros

Given a binary string, i.e. one that contains only 0s and 1s (where the number of zeros equals the number of ones), we need to make this string a sequence of alternating characters by swapping some of the bits; our goal is to minimize the number of swaps.
For example, for the string "00011011" the minimum number of swaps is 2. One way to do it:
1) swap the bits: 00011011 --->> 00010111
2) swap the bits (after the first swap): 00010111 --->> 01010101
Note that if we are given the string "00101011", we can turn it into an alternating string starting with 0 (which requires 3 swaps), or into an alternating string starting with 1 (which requires one swap: the first and last bits).
So the minimum in this case is one swap.
The end goal is to return the minimum number of swaps for a given string of ones and zeros.
What is the most efficient way to solve it?
Trivial stuff is trivial.
XOR your sequence with 01010101…. If the sequence was already "correct", you'd get an all-0 bit stream (do the same with the inverse pattern if you don't care whether you start with 0 or 1). XOR is very efficient on modern CPUs; I can do 512 bits at a time on my Xeon, and I think it takes 2 cycles.
Count the ones (#ones). On modern x86, and a lot of other ISAs, there's a POPCNT instruction that makes that extremely efficient: throughput one, 64 bits at a time, on my CPU.
The number of swaps necessary is #ones/2 or #zeros/2, whichever is less, since each swap fixes two mismatched positions.
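A minimal Python sketch of the whole approach, with plain integers standing in for the SIMD registers:

    def min_swaps(s: str) -> int:
        n = len(s)
        bits = int(s, 2)
        pattern = int("01" * (n // 2), 2)        # 0101...01, same length as s
        ones = bin(bits ^ pattern).count("1")    # mismatches vs. the 0101... target
        # Mismatches vs. the inverted (1010...) target are the remaining positions.
        return min(ones, n - ones) // 2

    print(min_swaps("00011011"))  # 2
    print(min_swaps("00101011"))  # 1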
Note: Code published on this site is CC-by-SA. That means that you're legally required to state the URL of this answer and my username when you use this in your software, or your homework, or your exam, or your assignment.

error-correcting code checksum

Question: "Adding all bytes together gives 118h. Drop the carry nibble to give you 18h." I can't understand this term "carry nibble".
If I make a checksum for the byte 10010101 (95 hex), is the checksum then 4 (04 hex)?
Source: http://www.asic-world.com/digital/numbering4.html#Error_Detecting_and_Correction_Codes
"
The parity method is calculated over byte, word or double word. But when errors need to be checked over 128 bytes or more (basically blocks of data), then calculating parity is not the right way. So we have checksum, which allows to check for errors on block of data. There are many variations of checksum.
Adding all bytes
CRC
Fletcher's checksum
Adler-32
The simplest form of checksum, which simply adds up the asserted bits in the data, cannot detect a number of types of errors. In particular, such a checksum is not changed by:
Reordering of the bytes in the message
Inserting or deleting zero-valued bytes
Multiple errors which sum to zero
Example of Checksum : Given 4 bytes of data (can be done with any number of bytes): 25h, 62h, 3Fh, 52h
Adding all bytes together gives 118h.
Drop the Carry Nibble to give you 18h.
Get the two's complement of the 18h to get E8h. This is the checksum byte.
To Test the Checksum byte simply add it to the original group of bytes. This should give you 200h.
Drop the carry nibble again, giving 00h. Since it is 00h, this means the bytes were probably not changed."
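In the quoted example, the "carry nibble" is the leading 1 of 118h: the carry out of the low byte of the sum, which gets discarded. A small Python sketch reproducing the quoted example:

    def checksum(data: bytes) -> int:
        total = sum(data) & 0xFF   # add all bytes, then drop the carry
        return (-total) & 0xFF     # two's complement of the low byte

    data = bytes([0x25, 0x62, 0x3F, 0x52])
    c = checksum(data)
    print(hex(c))                         # 0xe8
    print(hex((sum(data) + c) & 0xFF))    # 0x0, so the bytes verify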

Why do we Sign Extend in load word instruction?

I am learning 32-bit MIPS. I wanted to ask: why do we sign extend the 16-bit offset (in the single-cycle datapath) before sending it to the ALU in the case of Store Word?
I am not sure if it's helpful for you now, but I am posting it anyway.
Let us consider, in a very general sense, an array of instructions in C++, i.e. A[0], A[1], A[2], ...
The "figurative" distance between any two instructions is 1 UNIT.
Let's take this analogy to MIPS. In MIPS, figuratively, every instruction is separated by "1 UNIT"; however, 1 UNIT = 4 bytes in MIPS. Every instruction is 4 bytes long, and this is why, when moving from instruction to instruction, the PC is incremented by 4, i.e. PC+4. That way the gap between instruction i and instruction i+2 is "figuratively" 2, but actually 2*4 = 8, i.e. PC+4+4.
Coming back to the offsets specified in branch instructions: the offset represents the "figurative" distance from the next instruction (the instruction following the branch). So to get the "real" distance, the offset has to be multiplied by 4. This is why the offset is shifted left by 2 bits before the addition (left-shifting any binary value by n bits multiplies that value by 2^n; in our case 2^2 = 4), and sign extended to 32 bits so that its sign is preserved.
So the actual target address of a branch instruction is PC + 4 + 4*Offset.
Hope this helps.
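A small Python sketch of that address calculation (the example values are made up):

    def branch_target(pc: int, offset16: int) -> int:
        # Sign-extend the 16-bit offset field to a full integer.
        if offset16 & 0x8000:
            offset16 -= 0x10000
        # Shift left by 2 (multiply by 4) and add to PC+4.
        return (pc + 4 + (offset16 << 2)) & 0xFFFFFFFF

    print(hex(branch_target(0x00400000, 0x0003)))  # 0x400010, forward branch
    print(hex(branch_target(0x00400010, 0xFFFC)))  # 0x400004, backward branch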
Sounds like the 16-bit offset is a signed 2's complement number, i.e. it can be either positive or negative.
When converting it to 32 bits, the most significant bit needs to be copied to the upper 16 bits in order to keep the sign information.
To the best of my knowledge, in load or store instructions the offset value is added to the value in the base register. Since that register is 32 bits wide and adding a 16-bit value to a 32-bit value directly is not possible, the offset is sign extended first.
I think you are getting your concepts a little wrong here.
The 5 bits that you think are going into the ALU actually go into the register file, to select one of the 32 (2^5) registers.
Each register itself is 32 bits wide. Hence, to add the offset to the register value, you need to sign extend it to 32 bits.
ALU operation is always between two registers of the same size in the single cycle datapath for MIPS.
In the hardware of a 32-bit machine, most ALUs take 32-bit inputs and all registers are 32-bit registers.
To work with your data it must be 32 bits wide; this is why we SIGN-extend. Another approach would be to ZERO-extend, but SIGN-extension is used when dealing with immediates and offsets, to preserve the sign in 2's complement.
Sign extension happens, e.g., on M68xxx machines only when loading address registers, not when loading data registers.
Having, e.g.,
movea.w addr,a0
move.w addr,d0
addr:
dc.w $FFFF
leads, in the case of the data register load, to $0000FFFF, but in the case of the address register load to $FFFFFFFF.
To understand this, take the two's complement of the signed negative representation $FFFF, extend the number to 32 bits, and redo the two's complement to find the corresponding 32-bit representation.
Cheers and kind regards,
Stephan S.

Compressing a binary matrix

We were asked to find a way to compress a square binary matrix as much as possible and, if possible, to add redundancy bits to check and maybe correct errors.
The redundancy part is easy to implement, in my opinion. The complicated part is compressing the matrix. I thought about using run-length encoding after reshaping the matrix into a vector, because there will be more zeros than ones, but I only achieved a 40-bit compression (we are working with small sizes), although I thought it would do better.
Also, an idea was to Huffman-code the matrix after run-length encoding, but then a dictionary must be sent along in order to recover the original information.
I'd like to know: what would be the best way to compress a binary matrix?
After reading some comments: yes @Adam, you're right, the 14x14 matrix should be compressed into 128 bits, so if I only used the coordinates (row & column) of each non-zero element, it would still be 160 bits (since there are twenty ones). I'm not looking for an exact solution, just a useful idea.
You can only talk about compressing something if you have a distribution and a representation. That's the issue with the dictionary you have to send along: you always need some sort of dictionary or protocol to decompress something. It just so happens that things like .zip and .mpeg already have those dictionaries/codecs. Even something as simple as Huffman encoding is an algorithm; on the other side of the communication channel (you can think of compression as communication), the other person already has a bit of code (the dictionary) to perform the Huffman decompression scheme.
Thus you cannot even begin to talk about compressing something without first thinking "what kinds of matrices do I expect to see?", "is the data truly random, or is there order?", and if so "how can I represent the matrices to take advantage of order in the data?".
You cannot compress some matrices without increasing the size of other objects (by at least 1 bit). This is bad news if all matrices are equally probable, and you care equally about them all.
Addenda:
The answer to use sparse-matrix machinery is not necessarily the right answer. The matrix could, for example, be represented in Python as [[(r+c)%2 for c in range(cols)] for r in range(rows)] (a checkerboard pattern), and a sparse matrix wouldn't compress it at all, yet the Kolmogorov complexity of the matrix is just that program's length.
Well, I know every matrix will have the same number of ones, so this is kind of deterministic. The only thing I don't know is where the 1s will be. Also, if I transmit the matrix together with a dictionary and there are burst errors, the dictionary might get affected too, so... wouldn't the resulting information be corrupted? That's why I was trying to use lossless compression such as run-length encoding, where the decoder doesn't need a dictionary. --original poster
How many 1s does the matrix have as a fraction of its size, and what is its size (NxN -- what is N)?
Furthermore, this is an incorrect assertion and should not be used as a reason to prefer run-length encoding (which still requires a program); when you transmit data over a channel, you can always add error correction to that data. "Data" is just a blob of bits. You can transmit both the data and any required dictionaries over the channel. The error-correcting machinery does not care at all what the bits you transmit are for.
Addendum 2:
There are (14*14) choose 20 possible arrangements, which I assume are chosen uniformly at random. If this number were larger than 2^128, what you're trying to do would be impossible. Fortunately log_2((14*14) choose 20) ~= 90 bits < 128 bits, so it's possible.
The simple solution of writing down 20 numbers like 32, 2, 67, 175, 52, ..., 168 won't work because log_2(14*14)*20 ~= 153 bits > 128 bits. This would be equivalent to run-length encoding. We want to do something like it, but we are on a very strict budget and cannot afford to be "wasteful" with bits.
Because you care about each possibility equally, your "dictionary"/"program" will simulate a giant lookup table. Matlab's sparse matrix implementation may work but is not guaranteed to work and is thus not a correct solution.
If you can create a bijection between subsets of size 20 and a range of numbers below 2^128, you're good to go. This corresponds to enumerating ways to descend the pyramid in http://en.wikipedia.org/wiki/Binomial_coefficient to the 20th element of row 196. This is the same as enumerating all "k-combinations". See http://en.wikipedia.org/wiki/Combination#Enumerating_k-combinations
Fortunately I know that Mathematica and Sage and other CAS software can apparently generate the "5th" or "12th" or arbitrarily numbered k-subset. Looking through their documentation, we come upon a function called "rank", e.g. http://www.sagemath.org/doc/reference/sage/combinat/subset.html
So then we do some more searching and come across some arcane code (Matlab translations of old Fortran routines), like http://people.sc.fsu.edu/~jburkardt/m_src/subset/ksub_rank.m and http://people.sc.fsu.edu/~jburkardt/m_src/subset/ksub_unrank.m
We could reverse-engineer it, but it's kind of dense. Still, we now have enough information to search for "k-subset rank unrank", which leads us to http://www.site.uottawa.ca/~lucia/courses/5165-09/GenCombObj.pdf -- see the section "Generating k-subsets (of an n-set): Lexicographical Ordering" and the rank and unrank algorithms on the next few pages.
In order to achieve the exact theoretically optimal compression for a uniformly random distribution of 1s, we must thus use this technique to map our matrices bijectively to output numbers in a range below 2^128. It just so happens that combinations have a natural ordering, known as ranking and unranking of combinations: you assign a number to each combination (ranking), and if you know the number, you automatically know the combination (unranking). Googling "k-subset rank unrank" will probably yield other algorithms.
Thus your solution would look like this:
serialize the matrix into a list
e.g. [[0,0,1],[0,1,1],[1,0,0]] -> [0,0,1,0,1,1,1,0,0]
take the indices of the 1s:
e.g. [0,0,1,0,1,1,1,0,0] -> [3,5,6,7] (counting positions 1 2 3 4 5 6 7 8 9: a k=4-subset of an n=9 set)
take the rank
e.g. compressed = rank([3,5,6,7], n=9)
compressed==412 (or something, I made that up)
you're done!
e.g. 412 -binary-> 110011100 (at most n=9 bits, less than 2^n = 2^9 = 512)
to uncompress, unrank it
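For concreteness, here is a minimal Python sketch of lexicographic rank/unrank for k-subsets. It uses 0-based positions and is one standard formulation, not necessarily the exact algorithm from the PDF above:

    from math import comb

    def rank(positions, n):
        # Lexicographic rank of a sorted k-subset of {0..n-1}.
        k = len(positions)
        r, prev = 0, -1
        for i, p in enumerate(positions):
            for q in range(prev + 1, p):
                # Count the subsets that place the i-th element at q < p.
                r += comb(n - q - 1, k - i - 1)
            prev = p
        return r

    def unrank(r, n, k):
        # Inverse of rank: recover the k-subset from its number.
        positions, p = [], 0
        for i in range(k):
            while r >= comb(n - p - 1, k - i - 1):
                r -= comb(n - p - 1, k - i - 1)
                p += 1
            positions.append(p)
            p += 1
        return positions

    bits = [0, 0, 1, 0, 1, 1, 1, 0, 0]
    ones = [i for i, b in enumerate(bits) if b]    # 0-based: [2, 4, 5, 6]
    r = rank(ones, len(bits))
    assert unrank(r, len(bits), len(ones)) == ones
    print(r)  # the compressed representation, < comb(9, 4) = 126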
I'll get to 128 bits in a sec; first, here's how you fit a 14x14 boolean matrix with exactly 20 nonzeros into 136 bits. It's based on the CSC (Compressed Sparse Column) sparse matrix format.
You have an array c with 14 4-bit counters that tell you how many nonzeros are in each column.
You have another array r with 20 4-bit row indices.
56 bits (c) + 80 bits (r) = 136 bits.
Let's squeeze 8 bits out of c:
Instead of 4-bit counters, use 2-bit ones. c is now 2*14 = 28 bits, but it can't represent more than 3 nonzeros per column. This leaves us with 128-80-28 = 20 bits. Use that space for an array a4c with five 4-bit elements that each "add 4 to the element of c" they name. So if a4c = {2, 2, 10, 15, 15}, that means c[2] += 4; c[2] += 4 (again); c[10] += 4 (15 is not a valid column index, so it can mark an unused slot).
The "most wasteful" distribution of nonzeros is one where the column counts require an add-4 to support one extra nonzero each: 5 columns with 4 nonzeros apiece. Luckily we have exactly 5 add-4s available.
Total space = 28 bits (c) + 20 bits (a4c) + 80 bits (r) = 128 bits.
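A rough Python sketch of the packing step under those conventions (0-based indices, row indices sorted by column as in CSC, 15 marking an unused add-4 slot; these details are my own assumptions):

    def pack(cols, rows):
        # cols/rows: column and row index of each of the 20 nonzeros,
        # sorted by column (CSC order).
        counts = [0] * 14
        for c in cols:
            counts[c] += 1
        a4c, slot = [15] * 5, 0           # 15 = unused add-4 slot
        for c in range(14):
            while counts[c] > 3:          # a 2-bit field holds at most 3
                counts[c] -= 4
                a4c[slot] = c
                slot += 1
        bits = 0
        for v in counts:                  # 14 x 2-bit counters
            bits = (bits << 2) | v
        for v in a4c:                     # 5 x 4-bit add-4 entries
            bits = (bits << 4) | v
        for v in rows:                    # 20 x 4-bit row indices
            bits = (bits << 4) | v
        return bits                       # 28 + 20 + 80 = 128 bits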
Your input is a perfect candidate for a sparse matrix. You said you're using Matlab, so you already have a good sparse matrix built for you.
spm = sparse(dense_matrix)
Matlab's sparse matrix implementation uses Compressed Sparse Columns, which has memory usage on the order of 2*(# of nonzeros) + (# of columns), which should be pretty good in your case of 20 nonzeros and 14 columns. Storing 20 values sure is better than storing 196...
Also remember that all matrices in Matlab are going to be composed of doubles. Just because your matrix can be stored as a 1-bit boolean doesn't mean Matlab won't stick it into a 64-bit floating point value... If you do need it as a boolean you're going to have to make your own type in C and use .mex files to interface with Matlab.
After thinking about this again, if all your matrices are going to be this small and they're all binary, then just store them as a binary vector (bitmask). Going off your 14x14 example, that requires 196 bits or 25 bytes (plus n, m if your dimensions are not constant). That same vector in Matlab would use 64 bits per element, or 1568 bytes. So storing the matrix as a bitmask takes as much space as 4 elements of the original matrix in Matlab, for a compression ratio of 62x.
Unfortunately I don't know if Matlab supports bitmasks natively or if you have to resort to .mex files. If you do get into C++ you can use STL's vector<bool> which implements a bitmask for you.
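For what it's worth, in Python with NumPy the bitmask idea is a one-liner (the toy matrix here is made up; it happens to have 20 ones):

    import numpy as np

    m = (np.arange(196).reshape(14, 14) % 10 == 0)  # toy 14x14 boolean matrix
    packed = np.packbits(m.ravel())                 # 196 bits -> 25 bytes
    print(packed.nbytes)                            # 25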