DD Extract LAST 256 bytes from an unknown-size partition/drive? - partitioning

With dd (http://www.chrysocome.net/dd) you can extract the FIRST 256 bytes of a drive/partition using a command such as: dd if=\\.\g: of=.\myheader.hd bs=256 count=1
How can I get the LAST 256 bytes of the same partition without knowing the total size?
If this cannot be done, is it possible to find the total size from dd?
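I'm not aware of a dd option that counts from the end without knowing the size, but as a workaround you can learn the size by seeking to the end of the device and then read the last 256 bytes from there. A minimal Python sketch, assuming the partition is readable as a raw device path (the G: volume from the example; raw access typically requires administrator rights):

import os

DEVICE = r"\\.\G:"                 # hypothetical raw volume path, as in the dd example

with open(DEVICE, "rb") as f:
    size = f.seek(0, os.SEEK_END)  # seek to the end to learn the total size
    f.seek(size - 256)             # back up 256 bytes from the end
    tail = f.read(256)

with open("mytrailer.hd", "wb") as out:
    out.write(tail)

Once the size is known, dd alone can do it too, e.g. bs=256 skip=(size/256 - 1) in GNU dd syntax (the Windows port's operands may differ). Note that raw Windows volumes may insist on sector-aligned reads; in that case read the last full sector and slice out the final 256 bytes.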

Related

Reverse Engineering CRC32 in firmware

I have a P-flash dump (about 700 KB) that contains a CRC32. I know where the CRC is stored, and I know the calculation method (polynomial, initial value, final XOR value, input and output reflected). The problem is that only a part of these 700 KB is used to calculate the CRC, and I don't know which part. Is there a way to find out the input data for the calculation?
I have 5 of these 700 KB files. The files are identical except for 4 bytes that differ, plus the 4 bytes of the CRC.
If you can get the files onto a PC, that will help. XOR any two of the files to get a file that is all zeroes except for the 4 differing bytes and the 4 bytes of the CRC. XORing two files also cancels out any initial value and final XOR value, as if both were 0. Then check whether the CRC of the nearly-all-zero file matches what you would expect. If it matches, you know that the CRC covers the 4 non-zero bytes and all the zero bytes that follow them; you still wouldn't know how far before the 4 non-zero bytes the calculation starts, but it would at least narrow down the search considerably.
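A rough Python sketch of that XOR check (the file names, the stored CRC's offset, and little-endian byte order are all assumptions here; zlib's crc32 is the standard reflected CRC-32, re-based below to initial value 0 and final XOR 0):

import zlib

a = open("dump1.bin", "rb").read()
b = open("dump2.bin", "rb").read()
CRC_POS = 0xABCD0                    # hypothetical offset of the stored CRC32

# CRC32 with initial value 0 and final XOR 0, built on zlib's standard CRC32
def raw_crc32(data):
    return zlib.crc32(data, 0xFFFFFFFF) ^ 0xFFFFFFFF

diff = bytes(x ^ y for x, y in zip(a, b))
crc_xor = (int.from_bytes(a[CRC_POS:CRC_POS + 4], "little")
           ^ int.from_bytes(b[CRC_POS:CRC_POS + 4], "little"))

# first differing byte outside the stored CRC field
first = next(i for i, v in enumerate(diff) if v and not (CRC_POS <= i < CRC_POS + 4))

# CRC is linear over XOR, so crc(a) ^ crc(b) equals the raw CRC of a ^ b
# (same-length inputs); test whether the CRC covers the differing bytes
# plus the zeros that follow, up to the stored CRC field:
if raw_crc32(diff[first:CRC_POS]) == crc_xor:
    print("match: region ends at the CRC and starts at or before", hex(first))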
Assuming the part used for the CRC is contiguous, you could do a brute-force search using a fast CRC32. On an x86 with SSE2 (xmm) registers, an assembly-based CRC32 can process 700,000 bytes in about 0.0002 seconds on an Intel 3770K at 3.5 GHz (a 3rd-gen processor; they're faster now), so trying every length from 8 to 700,000 bytes takes a bit more than 70 seconds.
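The same brute force in Python terms (far slower than the assembly version; this sketch assumes the region ends immediately before the stored CRC and that the firmware uses the zlib-style CRC-32 - swap in the actual parameters where they differ):

import zlib

data = open("dump1.bin", "rb").read()
CRC_POS = 0xABCD0                                   # hypothetical CRC offset
stored = int.from_bytes(data[CRC_POS:CRC_POS + 4], "little")

# try every contiguous region that ends just before the stored CRC,
# i.e. lengths from 8 up to CRC_POS bytes
for start in range(CRC_POS - 8, -1, -1):
    if zlib.crc32(data[start:CRC_POS]) == stored:
        print(f"candidate region {start:#x}..{CRC_POS:#x}")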
I converted the code from this GitHub example to Visual Studio asm, for both reflected and non-reflected CRC, using the CRC32 and CRC32C polynomials, and I could upload the code if anyone is interested.
https://github.com/intel/isa-l/blob/master/crc/crc16_t10dif_01.asm

Need a formula to get total LUN size using lunSizeLow and lunSizeHigh SNMP objects

I have 2 SNMP objects/OIDs. Below are the details:
Object 1:
Name: lunSizeLow
OID: 1.3.6.1.4.1.43906.1.4.3.2.3.1.9
Description: LUN size in bytes - low order bytes
Object 2:
Name: lunSizeHigh
OID: 1.3.6.1.4.1.43906.1.4.3.2.3.1.10
Description: LUN size in bytes - high order bytes
My requirement:
I want to monitor LUN size through a script, but I didn't find any SNMP object that gives the total LUN size directly. I found 2 separate objects (lunSizeLow and lunSizeHigh), so I need a formula to get the total LUN size from these low-order and high-order values.
I went through many articles on the internet and found a couple of formulas on community.hpe.com, but I'm not sure which one is correct.
Formula 1:
The maximum unsigned number that can be stored in a 32-bit counter is 4294967295, so the total size would be:
LOW_ORDER_BYTES + HIGH_ORDER_BYTES * 4294967296 (the multiplier being 2^32)
Formula 2:
Total size in GB is LOW_ORDER_BYTES / 1073741824 + HIGH_ORDER_BYTES * 4 (1073741824 = 2^30 bytes per GB; each unit of the high word is 2^32 bytes = 4 GB)
Could anyone help me find the correct formula?
Most languages have a bit-shift operator, allowing you to do something similar to the below (pseudo-Java):
long myBigInteger = lunSizeHigh;
myBigInteger = myBigInteger << 32;         // shifts the high-order word 32 positions to the left, into the high half of the long
myBigInteger = myBigInteger + lunSizeLow;  // adds in the low-order word
This has two advantages over multiplying:
Bit shifting is often faster than multiplication, even though most compilers would optimize that particular multiplication into a bit shift anyway.
It is easier to read the code and understand why this would provide the correct answer, given the description from the MIB. Magic numbers should be avoided where possible.
That aside, putting some numbers into the Windows Calculator (using Programmer Mode) and trying formula 1, we can see that it works.
Now, you don't specify what language or environment you're working in, and in some languages you won't have a number type that supports the size of numbers you want to manipulate. (This is the same reason the number had to be split into two counters to begin with: it's larger than the largest number representation available on some primitive platforms.) If you want to do it using multiplication instead, you'll still have to make sure your implementation language can represent the full 64-bit result.
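For illustration, here is the same combination in Python (the input values are made up; Python integers are arbitrary-precision, so the 64-bit result can't overflow):

lun_size_high = 2                 # hypothetical values read via SNMP
lun_size_low = 1073741824

total_bytes = (lun_size_high << 32) + lun_size_low         # formula 1, written as a shift
total_gb = lun_size_low / 1073741824 + lun_size_high * 4   # formula 2

print(total_bytes)                     # 9663676416
print(total_bytes / 2**30, total_gb)   # 9.0 9.0 - the two formulas agree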

Plotting data using Gnuplot

I have a CSV file that includes two columns:
no. of packet    size
1 60
2 70
3 400
4 700
.
.
.
1000000 60
where the first column is the packet number and the second column is the packet size in bytes.
The total number of packets in the CSV file is one million. I need to plot a histogram of this data with:
xrange = [0, 5, 10, 15]
which denotes the packet size in bytes: bin [0] counts packets smaller than 100 bytes, bin [5] counts packets smaller than 500 bytes, and so on.
yrange = [10, 100, 10000, 100000000]
which denotes the number of packets.
Any help will be highly appreciated.
I don't quite remember exactly how this works, but the commands given in my Gnuplot in Action book for creating a histogram are
bin(x,s) = s*int(x/s)
plot "data-file" using (bin($1,0.1)):(1./(0.1*300)) smooth frequency with boxes
I believe smooth frequency is the part that matters for you; you'll need to figure out what the using argument should be (possibly with a different binning function).
This should do the job:
# binning function for arbitrary ranges, change as needed
bin(x) = x<100 ? 0 : x<500 ? 5 : x<2500 ? 10 : 15
# every occurrence is counted as (1)
plot datafile using (bin($2)):(1) smooth freq with boxes
I'm not really sure what you mean by "yrange [10 100 1000 ...]" - do you want a log-scaled ordinate?
Then just
set xrange [1:1e6]
set logscale y
before plotting.
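If you want to sanity-check the bin counts outside Gnuplot, the same binning is easy to reproduce in Python (the file name is a placeholder, the columns are assumed whitespace-separated as in the sample, and the thresholds mirror the bin() function above):

from collections import Counter

def bin_(size):          # same thresholds as the Gnuplot bin() above
    return 0 if size < 100 else 5 if size < 500 else 10 if size < 2500 else 15

counts = Counter()
with open("datafile") as f:
    for line in f:
        no, size = line.split()       # "packet number" and "size" columns
        counts[bin_(int(size))] += 1

print(sorted(counts.items()))         # e.g. [(0, ...), (5, ...), (10, ...), (15, ...)]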

Does AES_ENCRYPT reduce the byte size of the encrypted text?

I know that base64 increases the total 'size' of an image or text by about 1/3, but what about AES_ENCRYPT?
AES is a block cipher and thus processes data only in multiples of a fixed block size; its input (and, as a result, its output) is padded with enough bytes to round the size up to a multiple of the block size. The AES block size is always 16 bytes (128 bits), regardless of key length, so the output is rounded up to a multiple of 16 bytes.
The manual also gives a formula that describes this mathematically:
the result string length may be calculated using this formula:
16 * (trunc(string_length / 16) + 1)
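So the output never shrinks; it grows by 1 to 16 padding bytes. The formula is plain integer arithmetic, e.g. in Python:

def aes_encrypt_len(string_length):
    # MySQL's formula: round up to the next 16-byte block, with a full
    # extra block when the length is already an exact multiple of 16
    return 16 * (string_length // 16 + 1)

for n in (0, 5, 15, 16, 17, 32):
    print(n, "->", aes_encrypt_len(n))
# 15 -> 16, 16 -> 32, 17 -> 32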

Sql Msg 1701, Level 16, State 1 while making a very wide table for testing purposes

I am making a huge table to simulate a very rough scenario in SQL (a table with 1024 attributes and, of course, a lot of rows, if you wonder); the data type of every attribute is float.
To do so I am using another table which has 300 attributes, and I am doing something like:
SELECT [x1]
,[x2]
,[x3]
,[x4]
,[x5]
,[x6]
,[x7]
,[x8]
,[x9]
,[x10]
,[x11]
,[x12]
,[x13]
,[x14]
...
,[x300]
,x301= x1
,x302= x2
...
,x600= x300
,x601= x1
,x602= x2
...
,x900= x300
,x901= x1
,x902= x2
...
,x1000= x100
,x1001= x101
,x1002= x102
,x1003= x103
,x1004= x104
...
,x1024= x124
INTO test_1024
FROM my_300;
However, this error is raised:
Msg 1701, Level 16, State 1, Line 2
Creating or altering table 'test_1024' failed because the minimum row size
would be 8326, including 134 bytes of internal overhead. This exceeds the
maximum allowable table row size of 8060 bytes.
How to overcome this issue? (I know SQL can handle 1024 columns...)
You will have to change your data types to varchar, nvarchar, varbinary, or text to circumvent this error, or break the input into several tables (or, better yet, find a better way to structure your data... which I know isn't always possible, depending on constraints).
To read more about the 'why' - check out this article which explains it better than I could: http://blog.sqlauthority.com/2007/06/23/sql-server-2005-row-overflow-data-explanation/
Let's have a look at the figures in the error message.
'8326, including 134 bytes of internal overhead' means that the data alone takes 8326-134=8192 bytes.
Given that the number of columns is 1024, it's exactly 8192÷1024=8 bytes per column.
Moving on to the overhead, of those 134 bytes, your 1024 columns require 1024÷8=128 bytes for the NULL bitmap.
As for the remaining 134-128=6 bytes, I am not entirely sure but we can very well consider that size a constant overhead.
Now, let's try to estimate the maximum possible number of float columns per table in theory.
The maximum row size is said to be 8060 bytes.
Taking off the constant overhead, it's 8060-6=8054 bytes.
As we now know, one float column takes 8 bytes of data plus 1 bit in the bitmap, which is 8×8+1=65 bits.
The data + NULL bitmap size in bits is 8054×8=64432.
The estimated maximum number of float columns per table is therefore 64432÷65≈991 columns.
So, commenting out 33 columns in your script should result in successful creation of the table.
To verify, uncommenting one back should produce the error again.
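The estimate can be re-derived in a couple of lines of Python (a sketch of the same arithmetic; the 6-byte constant overhead is the assumption carried over from above):

MAX_ROW_BYTES = 8060      # maximum allowable row size
CONST_OVERHEAD = 6        # assumed fixed per-row overhead, from the analysis above

bits_available = (MAX_ROW_BYTES - CONST_OVERHEAD) * 8  # data + NULL bitmap, in bits
bits_per_float = 8 * 8 + 1                             # 8 data bytes + 1 NULL-bitmap bit

print(bits_available // bits_per_float)                # 991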
SQL Server limits row sizes to approximately 8 KB. Certain column types are excluded from this total, but each individual column value must still fit within the 8 KB limit, and a pointer to the off-row data is kept in the row itself. If you are exceeding the row-size limit, you should step back and reconsider your schema; you almost certainly do not need hundreds of columns in one table.