Error calculating MAC on Thales Payshield 9000 HSM

I have a strange problem with the M6 command on a Payshield 9000 HSM with 3.4C firmware. For some message lengths I receive error code 15, even though the message length is a multiple of 8 bytes.
In the call I'm sending:
1. Mode Flag: 0
2. Input Format Flag: 0
3. MAC Size: 1
4. MAC Algo: 3
5. Padding Method: 0 (I also tested with modes 1 and 3, but to keep things simple let's focus on padding mode 0. For the test I prepared byte arrays to be MACed whose size is a multiple of 8, so no padding is needed.)
6. Key type: 008
I created a simple test that, in a loop, builds byte arrays of '1' characters with sizes from 8 to 1000 and MACs each array. Each array length is a multiple of 8 (8, 16, 24, ...).
For some array lengths I receive error code 15, Invalid input data (invalid format, invalid characters, or not enough data provided). Below are the array size ranges for which I receive this error; <160 - 248> means I received the error for every multiple of 8 from 160 to 248 inclusive (160, 168, 176, ..., 248).
<160 - 248>
<416 - 504>
<672 - 760>
<928 - 1000>
For all other sizes in that range (for example, the multiples of 8 from 256 to 408) I receive a correct response with the calculated MAC.
For a byte array of length 160 (which returns the error), the example command I'm sending in this test is (in hex):
00d33f3f3f3f4d3630303133303030385544324241464236353835433642303735334334363645393434424338423837353030613031313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131
Example command (for an array of size 152) which returns a correct response:
00cb3f3f3f3f4d363030313330303038554432424146423635383543364230373533433436364539343442433842383735303039383131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131313131
What could be the reason for this behaviour?

I finally solved it. The problem was in the message length field. When the message length, represented as 4-digit hex, contains letters, they must be sent in uppercase, i.e. 00A0 instead of 00a0.
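Indeed, in the failing command above the length field is encoded as 30 30 61 30 ("00a0"), while the working 152-byte command carries 30 30 39 38 ("0098"), which contains no letters. A minimal sketch of the fix, assuming the host code builds the command as text (the helper name is illustrative, Python for brevity):

def message_length_field(data: bytes) -> str:
    # 4-digit hex, uppercase: 160 -> "00A0"; the lowercase "00a0"
    # produced by "%04x" is what the HSM rejects with error 15
    return format(len(data), "04X")

This also explains the error ranges: 160-248 is 0x00A0-0x00F8, exactly the lengths whose 4-digit hex representation contains letters, while 256-408 (0x0100-0x0198) contains none.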

Related

Reading / Computing Hex received over RS232

I am using Docklight Scripting to put together a VBScript that communicates with a device via RS232. All the commands are sent in Hex.
When I want to read from the device, I send a 32-bit address, a 16-bit read length, and an 8-bit checksum.
When I want to write to the device, I send a 16-bit data length, the data, followed by an 8-bit checksum.
In Hex, the data that is sent to the device is the following:
AA0001110200060013F81800104D
AA 00 01 11 02 0006 0013F818 0010 4D
(spaced for ease of reading)
AA000111020006 is the protocol header, where:
AA is the Protocol Byte
00 is the Source ID
01 is the Dest ID
11 is the Message Type
02 is the Command Byte
0006 is the Length Byte(s)
The remainder of the string is broken down as follows:
0013F818 is the 32-bit address
0010 is the 16 bit read length
4D is the 8-bit checksum
If the string is not correct, or the checksum is invalid, the device replies with an error string. However, I am not getting an error. The device replies with the following hex string:
AA0100120200100001000000000100000000000001000029
AA 01 00 12 02 0010 00010000000001000000000000010000 29
(spaced for ease of reading)
Again, the first part of the string (AA01001202) is part of the protocol header, where:
AA is the Protocol Byte
01 is the Source ID
00 is the Dest ID
12 is the Message Type
02 is the Command Byte
The difference between what is sent to the device and what the device replies with is that the length bytes are not a "static" part of the protocol header and will change based on the request. The remainder of the string is broken down as follows:
0010 is the Length Byte(s)
00010000000001000000000000010000 is the data
29 is the 8-bit Check Sum
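A sketch of splitting such a reply into its fields (Python for illustration; the field names follow the breakdown above, and the checksum is only extracted, not verified, since its algorithm isn't given here):

def parse_reply(hex_string: str):
    raw = bytes.fromhex(hex_string)
    header = {
        "protocol": raw[0], "source": raw[1], "dest": raw[2],
        "msg_type": raw[3], "command": raw[4],
        "length": int.from_bytes(raw[5:7], "big"),  # 0010 -> 16 data bytes
    }
    data = raw[7:7 + header["length"]]
    checksum = raw[7 + header["length"]]
    return header, data, checksum

Feeding it the reply above returns a 16-byte data field and checksum 0x29, matching the breakdown.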
The goal is to read a timer that is stored in the NVM. The timer is stored in the upper halves of 60 4-byte NVM words.
The instructions specify that I need to read the first two bytes of each word, and then sum the results.
Verbatim, the instructions say:
Read the NVM elapsed timer. The timer is stored in the upper halves of 60 4-byte words.
Read the first two bytes of each word of the timer. Read the 16 bit values of these locations:
13F800H, 13F804H, 13F808H, and continue to 13F8ECH.
Sum the results. Multiply the sum by 409.6 seconds, then divide by 3600 to get the results in hours.
My knowledge of bits, and bytes, and all other things is a bit cloudy. The first thing I need to confirm is that I am understanding the read protocol correctly.
I am assuming that when I specify 0010 as the 16 bit read length, that translates to the 16-bit values that the instructions want me to read.
The second thing I need to understand a little better is that when it tells me to read the first two bytes of each word, what exactly constitutes the first two bytes of each word?
I think what confuses me a little more is that the instructions say the timer is stored in the upper half of the 4 byte word (which to me seems like the first half).
I've sat with another colleague of mine for a day trying to figure out how to make this all work, and we haven't had any consistent results with our trials.
I have looked on the internet to find something that would explain this better in the context being used.
Another worry is that the technical data I am using for this project isn't 100% accurate; the publication (which is probably close to 1000 pages long) has conflicting or missing information throughout.
What I would really appreciate is someone with a much better understanding of hex and binary reviewing the instructions I've posted and giving some feedback on my interpretation of them.
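For what it's worth, here is the arithmetic those instructions describe as a Python sketch, assuming the words come back most significant byte first, so the "upper half" of each 4-byte word is its first two bytes (the function name is illustrative):

def elapsed_hours(words):
    # words: the 60 4-byte NVM words at 13F800H, 13F804H, ..., 13F8ECH
    total = 0
    for w in words:
        # "first two bytes of each word" = the upper 16-bit half
        total += int.from_bytes(w[:2], "big")
    return total * 409.6 / 3600  # ticks -> seconds -> hours

With a read length of 0010 (16 bytes) per request, each reply covers four words, so fifteen reads starting at 13F800H would cover all 60 words.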

MySQL: Get bytes size from hex

Raywenderlich gave an example of a push token:
'740f4707 bebcf74f 9b7c25d4 8e335894 5f6aa01d a5ddb387 462c7eaf 61bb78ad'
If I'm correct, push tokens have a byte size of 32.
I would like to get the byte size from it.
Here is my MySQL statement:
SELECT LENGTH(CONV('740f4707 bebcf74f 9b7c25d4 8e335894 5f6aa01d a5ddb387
462c7eaf 61bb78ad', 16, 2));
When I input this, it gives me a size of 31. I was expecting 32.
Any ideas?
CONV() converts a number to a string representation of that number in the target base. Leading zeros are not produced, so LENGTH() counts only the significant digits of the result.
You could prefix your number with a value that you know has its high bit set, and then deduct the bit length of that extra prefix from the result.
i.e.
SELECT LENGTH(CONV('FF740f4707 bebcf74f 9b7c25d4 8e335894 5f6aa01d a5ddb387 462c7eaf 61bb78ad', 16, 2))-8;
gives you the 32 bits you expect.
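Note that CONV() works with 64-bit precision, so only the leading digits of a long hex string are actually converted. If the goal is simply the byte count of the whole token, counting hex digits avoids that limit entirely; a sketch in Python for illustration:

def token_byte_size(token: str) -> int:
    # two hex digits per byte
    return len(token.replace(" ", "")) // 2

Called on the push token above it returns the expected 32; the same idea in MySQL itself would be LENGTH(REPLACE(token, ' ', '')) / 2.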

How to properly generate UUID on Windows Phone 8

I want to generate UUID on a Windows Phone 8 device.
I have used DeviceExtendedProperties to get DeviceUniqueId, which returns a 20-byte array of numbers.
Then I have truncated it to 16 bytes (as in RFC4122 implementation example) and inserted variant (binary 10) and version number (5).
Finally, I initialized a System.Guid object, passing my byte array into its constructor. The resulting string representation of the System.Guid object is "xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx", where M is the version (4 bits set to 0101) and N is the variant (a 4-bit number whose two most significant bits are set to 10).
Is it okay to simply truncate the last 4 bytes from DeviceUniqueId? Do I really need to insert variant and version number? If so, which version number should I use or does it matter which version to use?
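For illustration, the bit manipulation described above looks roughly like this (Python; the offsets follow the plain big-endian octet layout of RFC 4122, with the version nibble in octet 6 and the variant bits in octet 8):

def guid_bytes_from_device_id(device_id: bytes) -> bytes:
    b = bytearray(device_id[:16])  # truncate the 20-byte DeviceUniqueId
    b[6] = (b[6] & 0x0F) | 0x50    # version nibble = 5 (binary 0101)
    b[8] = (b[8] & 0x3F) | 0x80    # variant = binary 10 in the top two bits
    return bytes(b)

One thing worth double-checking: System.Guid's byte-array constructor treats the first three fields as little-endian, so the byte carrying the version nibble does not sit at the same array index as in the raw big-endian RFC 4122 layout.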

Pascal memory issues

I have a problem that requires searching and saving some values that prevent it from entering an infinite loop. Every possible state of the problem is expressed as a unique 8-digit code in base 6 (all digits are 0-5). When the program evaluates a position, I want a boolean to be set to true so that the position is not evaluated again. However, an array 1..55555555 is too large in memory, and converting the 8-digit code to decimal takes too much time. Also, not all combinations are possible in the problem; 11 11 11 11, 11 11 55 12 and others are not valid, and I would rather not spend extra memory on them. So, is there a way to mark a block of memory at an address such as 23451211 as true, and, when the evaluating process is called, check whether 23451211 is true or unassigned?
6 to the power 8 = 1679616.
To mark each state as used or not you need one bit, so about 209952 bytes suffice.
In recent versions of Free Pascal, bitpacked structures are declared as follows:
var
  arr : bitpacked array [0..6*6*6*6*6*6*6*6-1] of boolean;
and arr[x] will give true or false.
The time to convert from base 6 to binary (not decimal!) will probably be shorter than trying to use large swaths of memory: ((digit8*6 + digit7)*6 + digit6)*6 + ... and so on.
P.S. FPC does have an exponent operator, but not for constants, so that's why 6^8 is written out like that.
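That conversion, sketched in Python for illustration (digit8 is the most significant base-6 digit):

def state_index(digits):
    # digits: the eight base-6 digits, most significant first (Horner's method)
    idx = 0
    for d in digits:
        idx = idx * 6 + d
    return idx  # 0 .. 6**8 - 1, usable directly as the bit-array index

For example, state_index([2, 3, 4, 5, 1, 2, 1, 1]) gives the slot for state 23451211.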

240 bit radar word

I have a project using a 240-bit octal data format that will be coming into the serial port of an Arduino Uno at 2.4k baud (RS232 converted to TTL).
The 240 bits contain, among other things, range, azimuth and elevation words, which are what I need to display.
The frame starts with a frame sync code, which is an alternating 7-bit binary code:
1110010 for frame 1 and
0001101 for frame 2 and so on.
I was thinking that I might use something like val = Serial.read() and a check like
if (val == 0b1110010 || val == 0b0001101) { /* start of frame */ }
that will let me validate the start of my string.
The rest of the 240-bit octal frame (all numbers) can be read from serial into a string, of which only parts will need to be printed to the screen.
Past the frame sync, all octal data is serial with no nulls or delimiters, so I am thinking something like
printf("%.*s", n, &stringname[xx]);
(print n characters starting at offset xx) will let me pick out the characters as needed so they can be parsed.
How do I tell the program that the frame sync its looking for is binary or that the data that needs to go into the string is octal, or that it may need to be converted to be read on the screen?
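For illustration, the general idea sketched in Python (names are illustrative): "binary" and "octal" are only notations for the same bits, so nothing has to be declared as one or the other; the receiver shifts incoming bits into a 7-bit window, compares the window against the two sync codes, and then takes the payload bits three at a time as octal digits:

SYNC_1 = 0b1110010  # frame 1 sync code
SYNC_2 = 0b0001101  # frame 2 sync code

def parse_frame(bits):
    # bits: an iterator of 0/1 values in arrival order
    stream = iter(bits)
    window = 0
    for b in stream:
        window = ((window << 1) | b) & 0x7F  # keep only the last 7 bits
        if window in (SYNC_1, SYNC_2):
            break                            # sync found
    digits = []
    group = []
    for b in stream:
        group.append(b)
        if len(group) == 3:                  # 3 bits = 1 octal digit
            digits.append((group[0] << 2) | (group[1] << 1) | group[2])
            group = []
    return digits  # each entry is one octal digit, 0-7

On the Arduino side the same windowing can be done with a byte variable and bit shifts, and an extracted digit can be printed with Serial.print(digit, OCT) or assembled into the range, azimuth and elevation words.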