Code for the Arduino Nano ATmega328P (AES128; AddRoundKey & SubBytes) - arduino-ide

I'm working on a project on breaking AES128 on an Arduino Nano board.
I'm using the Arduino IDE. My programming skills aren't great, unfortunately.
I need to implement AddRoundKey + SubBytes.
Here is the pseudocode I found (thank you Owen Lo, William J. Buchanan and Douglas Carson), but I need to declare the variables.
set plainText to a known value
for i = 0 to 9:
    delay 500
    set LED 13 to HIGH
    for j = 0 to 15:
        sbox_lookup[j] = s[plainText[j] XOR key[j]]
    set LED 13 to LOW
"The variable s is an array of 256 bytes and consists of the S-Box entries 0×63 to 0×16
while the sbox_lookup is an array of 16 bytes which stores the S-box lookup result.
The plainText variable is an array of 16 bytes which contains our plaintext input. Each
plaintext value is known during an attack.
As demonstrated in the pseudocode, each plaintext value is operated 10 times in
order for an average to be acquired. Furthermore, plaintext values should be known or
observable by the attacker. In our implementation, we chose to increment plainText[]
by one each time this routine occurs (e.g. the plaintext array will start at 00, 00, 00, 00, 00,
00, 00, 00, 00, 00, 00, 00, 00, 00, 00, 00, and end at FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF,
FF, FF, FF, FF, FF). A delay of 500 ms is implemented before the next iteration occurs to
ensure the oscilloscope has enough time to capture the trace. The sbox_lookup[] array
is used to store the result of the AddRoundKey and SubBytes operation while s is a 256
byte array which contains the constants of Rijndael S-Box."
Could anyone please help?
I tried to write the code myself, but it isn't working (lack of hard skills).
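A minimal sketch along the lines of that pseudocode, assuming a few things the paper leaves open: the 16-byte key[] below is only a placeholder you would replace with the key under attack, pin 13 doubles as the LED/trigger line, and the S-box is generated in setup() with the standard Rijndael construction so the 256 constants from 0x63 to 0x16 don't have to be typed in by hand (pasting the published table into s[] works just as well):

#include <stdint.h>

#define ROTL8(x, shift) ((uint8_t)((x) << (shift)) | ((x) >> (8 - (shift))))

uint8_t s[256];            // Rijndael S-box, filled in setup()
uint8_t sbox_lookup[16];   // result of AddRoundKey + SubBytes
uint8_t plainText[16];     // known plaintext, incremented between runs
uint8_t key[16] = {        // placeholder key - replace with the key under attack
  0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
  0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
};

// Generate the AES S-box so the 256 constants don't have to be listed here.
void initSbox() {
  uint8_t p = 1, q = 1;
  do {
    p = p ^ (p << 1) ^ ((p & 0x80) ? 0x1B : 0);   // p *= 3 in GF(2^8)
    q ^= q << 1;                                  // q /= 3 in GF(2^8)
    q ^= q << 2;
    q ^= q << 4;
    q ^= (q & 0x80) ? 0x09 : 0;
    uint8_t x = q ^ ROTL8(q, 1) ^ ROTL8(q, 2) ^ ROTL8(q, 3) ^ ROTL8(q, 4);
    s[p] = x ^ 0x63;                              // affine transformation
  } while (p != 1);
  s[0] = 0x63;                                    // 0 has no inverse; S(0) = 0x63
}

void setup() {
  pinMode(13, OUTPUT);
  initSbox();
  for (int j = 0; j < 16; j++) plainText[j] = 0x00;   // start at 00 00 ... 00
}

void loop() {
  // Run the same plaintext 10 times so the traces can be averaged.
  for (int i = 0; i < 10; i++) {
    delay(500);                  // give the oscilloscope time to re-arm
    digitalWrite(13, HIGH);      // trigger: operation of interest starts
    for (int j = 0; j < 16; j++) {
      sbox_lookup[j] = s[plainText[j] ^ key[j]];   // AddRoundKey + SubBytes
    }
    digitalWrite(13, LOW);       // operation of interest ends
  }
  // Move to the next known plaintext (00...00 -> 01...01 -> ... -> FF...FF).
  for (int j = 0; j < 16; j++) plainText[j]++;
}

Each pass through loop() repeats one plaintext 10 times with a 500 ms delay before each repetition, raises pin 13 around the AddRoundKey + SubBytes step so the oscilloscope can trigger on it, and then increments every plaintext byte for the next set of traces.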

Related

Error calculating MAC on Thales Payshield 9000 HSM

I have a strange problem with the M6 command on a Payshield 9000 HSM with 3.4C firmware. For some message lengths I receive error code 15, even though the message length is a multiple of 8 bytes.
During the call I'm sending:
1. Mode Flag: 0
2. Input Format Flag: 0
3. MAC Size: 1
4. MAC Algo: 3
5. Padding Method: 0 (I also tested with 0, 1 and 3, but to simplify let's focus on padding mode 0. For the test I prepared byte arrays to be MAC'ed whose sizes are multiples of 8, so no padding is needed.)
6. Key type: 008
I created a simple test where, in a loop, I build byte arrays of '1' characters with sizes from 8 to 1000 and MAC each array. Each array has a length that is a multiple of 8 (8, 16, 24, ...).
For some array lengths I receive error code 15, "Invalid input data (invalid format, invalid characters, or not enough data provided)". Below you can find the array size ranges for which I receive this error (<160 - 248> means I received the error for every multiple-of-8 length from 160 to 248 inclusive: 160, 168, 176, ..., 248):
<160 - 248>
<416 - 504>
<672 - 760>
<928 - 1000>
For all other sizes in that range (for example 256-408, which are multiples of 8) I receive a correct response with the calculated MAC.
For a byte array of length 160 (which returns an error), the example command I am sending in this test is (in hex format):
00d33f3f3f3f4d3630303133303030385544324241464236353835433642303735334334363645393434424338423837353030613031313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131
Example command (for an array of size 152) which returns a correct response:
00cb3f3f3f3f4d363030313330303038554432424146423635383543364230373533433436364539343442433842383735303039383131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131
What can be the reason for such behaviour?
I finally solved it. The problem was in the message length tag. When the message length represented as a 4-digit hex value contains letters, they should be sent in uppercase, i.e. 00A0 instead of 00a0.
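For anyone hitting the same thing: the difference is only in how the length field is formatted before it is prepended to the message. A tiny C++ sketch of the idea (the helper name, and the assumption that the length tag is the 4-character hex byte count in front of the data, are mine, not from the Payshield spec):

#include <cstdio>

// Hypothetical helper: build the 4-character message length tag.
// "%04X" produces uppercase hex (00A0), which the HSM accepts;
// "%04x" would produce 00a0, which triggered error 15 in my tests.
void format_length_tag(unsigned int message_length, char out[5]) {
    std::snprintf(out, 5, "%04X", message_length);
}

int main() {
    char tag[5];
    format_length_tag(160, tag);   // 160 bytes -> "00A0"
    std::printf("%s\n", tag);
    return 0;
}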

All binary converters return 8 digits

All text to binary converters return an 8-digit-per-letter code - is there another 0/1 system with fewer or more digits?
I hear that there are different forms of binary code, but all text to binary converters return an 8-digit code per letter (e.g. 01001101).
Is there a text to binary conversion which uses only 0 and 1 but has fewer or more digits?
If I want to convert text into zeros and ones, will I always end up with 8 digits per letter? Is this 8-digit type of binary conversion commonly used today?
Representing letters as binary requires some kind of standard. Otherwise computers sending bits over a network to each other could never make sense of what letters to turn those bits into!
There are a bunch of standards for character encoding:
ASCII, UTF-8, UTF-16, EBCDIC and more!
But why are letters (almost) always converted to 8 bits?
Before considering letters at all, binary is just a numeral system with two symbols. You can count up as far as you'd like...
0, 1, 10, 11, 100, 101, 110, 111...
See how this is similar to the decimal system:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13...
Computers have been designed to store numbers in bytes. A byte is 8 bits long, which means it is possible to store any number from 0 to 255 in one byte.
Now with decimals, you could assign a letter to each number from 1 - 26:
a=1, b=2, c=3 ... z=26
In binary you could do the same:
a=1, b=10, c=11 ... z=11010
This is where we get into character encoding. ASCII is a very common system for encoding letters as numbers.
In the ASCII standard, you can see that A=65, which is 01000001 in binary. Since most computers and software understand ASCII (or UTF-8), you can be sure that loading a text file with 01000001 in the raw data will result in that character showing up as A on any computer.
If you wanted to represent a character in a non-standard way, maybe using 9 bits, you can absolutely do that! But this would mean you are using your own encoding system and other software/computers/people wouldn't be able to convert the binary back to letters without your supporting documentation.
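As a quick illustration of the "one byte per letter" idea (a small standalone C++ sketch, not any particular converter's code), this prints each character of a string as its 8-bit ASCII/UTF-8 code:

#include <bitset>
#include <iostream>
#include <string>

int main() {
    std::string text = "AB";
    for (unsigned char c : text) {
        // Each character occupies one byte; print it as 8 binary digits.
        std::cout << c << " = " << std::bitset<8>(c) << '\n';
    }
}

Running it prints A = 01000001 and B = 01000010, the same 8-digit codes the online converters produce.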

Mysql max on hex string

I am having issues using the MAX operator on hex values saved in VARCHAR(25) format.
The numbers are like this:
0
1
0A
0F
FF
10A
Now if I do something like this:
SELECT MAX(CONV(number, 16, 10)) as number FROM `numbers` WHERE 1
I get FF (255) instead of what I would expect, which is 10A (266).
What's the problem? Is it with the different lengths? But why does it work for 0 and FF then? A hint would be great!
Thanks in advance.
From the documentation: http://dev.mysql.com/doc/refman/5.0/en/mathematical-functions.html#function_conv
Converts numbers between different number bases. Returns a string representation of the number N, converted from base from_base to base to_base.
The result of CONV() is always a string. If your to_base is 10, you still get a string back, even though it looks like it should be a number.
When you then take MAX() of those strings, MySQL compares them lexicographically, character by character, so 'FF' sorts after '10A' simply because 'F' > '1'. Casting the converted value back to a number, e.g. MAX(CAST(CONV(number, 16, 10) AS UNSIGNED)), gives the numeric maximum. There is some background on how MySQL compares and indexes string columns here: http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html.
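The same effect can be reproduced outside MySQL with any lexicographic string comparison. A small C++ sketch (using the values from the question) shows why the string maximum and the numeric maximum differ:

#include <algorithm>
#include <iostream>
#include <string>

int main() {
    std::string a = "FF", b = "10A";
    // String comparison goes character by character: 'F' > '1', so "FF" sorts last.
    std::cout << "string max:  " << std::max(a, b) << '\n';    // FF
    // Comparing the same values as numbers: 0x10A (266) > 0xFF (255).
    unsigned long na = std::stoul(a, nullptr, 16);
    unsigned long nb = std::stoul(b, nullptr, 16);
    std::cout << "numeric max: " << std::max(na, nb) << '\n';  // 266
}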

240 bit radar word

I have a project using a 240-bit octal data format that will be coming into the serial port of an Arduino Uno at 2.4k baud, RS232 converted to TTL.
The 240 bits along with other things has range, azimuth and elevation words, which is what I need to display.
The frame starts with a frame sync code, which is an alternating 7-bit binary code:
1110010 for frame 1 and
0001101 for frame 2 and so on.
I was thinking that I might use something like val = Serial.read() and then something like
if (val == 0b1110010 || val == 0b0001101) { /* start collecting data into the string */ }
so that I can validate the start of my string.
The rest of the 240-bit octal frame (all numbers) can be read from serial into a string, of which only parts will need to be printed to the screen.
Past the frame sync, all octal data is serial with no nulls or delimiters, so I am thinking
printf("%.*s", width, &stringname[xx]);
will let me offset into the characters as needed so they can be parsed out.
How do I tell the program that the frame sync it's looking for is binary, or that the data that needs to go into the string is octal, or that it may need to be converted to be read on the screen?
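A rough sketch of one way to wire that up, with the big assumption (which you would need to confirm against the radar's interface spec) that each byte returned by Serial.read() lines up with one word of the frame. The main point is that "binary" and "octal" are only display formats: the value Serial.read() returns is just a number, so you can compare it against a constant written as 0b1110010 and print it in octal with Serial.print(value, OCT):

const uint8_t SYNC_FRAME1 = 0b1110010;   // 0x72, octal 0162
const uint8_t SYNC_FRAME2 = 0b0001101;   // 0x0D, octal 0015

char frameData[32];        // 240 bits is 30 bytes; a little slack on top
int  idx = -1;             // -1 means "no sync seen yet"

void setup() {
  Serial.begin(2400);      // 2.4k baud on the TTL side of the RS232 converter
}

void loop() {
  if (Serial.available() > 0) {
    int val = Serial.read();
    if (val == SYNC_FRAME1 || val == SYNC_FRAME2) {
      idx = 0;                                  // sync word seen: start a new frame
    } else if (idx >= 0 && idx < (int)sizeof(frameData)) {
      frameData[idx++] = (char)val;             // collect the rest of the frame
      Serial.print(val, OCT);                   // show the word in octal
      Serial.print(' ');
    }
  }
}

Range, azimuth and elevation would then be pulled out of frameData[] by index once a full frame has been collected.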

What is this floating point format?

I am trying to figure out how to read a historical binary data file. I believe it came from an older 32-bit Solaris system. I am looking at a section of the file that I believe contains 32-bit floating-point numbers (not IEEE floats). The format appears to be (as a hex dump):
xx41 xxxx
xx42 xxxx
The 41 and 42 in those positions appear consistently through the floating point numbers. I'm afraid that I do not have any additional information to add to this. So the first part of my question is, what format is this? If the first part can not be answered directly, a list of likely possibilities would be great. Lastly, how would you suggest going about determining what format this is? Thank you for your input.
Could this be PDP-11 format? The giveaway for me is that the second byte is mostly constant, which suggests that the exponent of the floating-point format is ending up in the second byte rather than the first (as you'd expect for a big-endian machine) or the last (for a little-endian machine). The PDP-11 is notorious for its funny byte order for floats and integers; see the material near the bottom of this Floating-Point Formats page.
The values of 41 and 42 would appear to be consistent with positive values of roughly unit-magnitude: the exponent bias for the PDP-11 format appears to be 128, so with the unusual byte-order I'd expect the 2nd byte that you list to contain the sign and the topmost 7 bits of the exponent; that would make the unbiased exponent for a second byte of 41 be either 2 or 3 depending on the 8th exponent bit (which should appear as the MSB of the first byte).
See also this page for a brief description of the PDP-11 format.
[EDIT] Here's some Python code to convert from a 4-byte string in the form you describe to a Python float, assuming that the 4-byte string represents a float in PDP-11 format.
import struct

def pdp_to_float(xs):
    """Convert a 4-byte PDP-11 single-precision float to a Python float."""
    # PDP-11 "middle-endian" order: swap the bytes within each 16-bit word,
    # then read the result as one big-endian 32-bit integer.
    ordered_bytes = bytes(xs[i] for i in [1, 0, 3, 2])
    n = struct.unpack('>I', ordered_bytes)[0]
    fraction = n & 0x007fffff
    exponent = (n & 0x7f800000) >> 23
    sign = (n & 0x80000000) >> 31
    hidden = 1 if exponent != 0 else 0          # hidden leading bit, except for zero
    return (-1)**sign * 2**(exponent - 128) * (hidden + fraction / 2.0**23)
Example:
>>> pdp_to_float(b'\x00\x00\x00\x00')
0.0
>>> pdp_to_float(b'\x23\x41\x01\x00')
5.093750476837158
>>> pdp_to_float(b'\x00\x42\x00\x00')
16.0
The data described is consistent with the usual IEEE 754 format, stored in big-endian order, then displayed by a little-endian dump program two bytes at a time.
32-bit floats in the interval [8, 128) have first bytes of 0x41 or 0x42. Consider such a number, perhaps 0x41010203. Stored big end first, it would appear in memory as the four bytes 0x41, 0x01, 0x02, and 0x03. When the dump program reads 16-bit integers, little end first, it will read and display 0x0141 and 0x0302.
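A short C++ sketch of that reasoning, using 20.5 as an example value in [8, 128): stored big-endian its first byte is 0x41, and reading the same bytes back as little-endian 16-bit words reproduces the "xx41 xxxx" pattern from the question.

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float f = 20.5f;                        // any IEEE 754 float in [8, 128)
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);    // raw IEEE 754 bit pattern
    uint8_t be[4];
    for (int i = 0; i < 4; i++)             // lay the bytes out big end first
        be[i] = (bits >> (24 - 8 * i)) & 0xFF;
    std::printf("big-endian bytes: %02x %02x %02x %02x\n",
                be[0], be[1], be[2], be[3]);             // 41 a4 00 00
    // A little-endian dump program reads two bytes at a time, low byte first:
    uint16_t w0 = be[0] | (be[1] << 8);
    uint16_t w1 = be[2] | (be[3] << 8);
    std::printf("16-bit LE dump:   %04x %04x\n", w0, w1);   // a441 0000
}

It prints the big-endian bytes 41 a4 00 00 and the 16-bit little-endian dump a441 0000, i.e. the 0x41 lands in the second displayed position just as in the question's dump.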