Extract key from a TR-31 key block (exporting from a Thales 9000 HSM) - hsm

Using the HSM command A0 (Generate a Key), I am getting the below response.
HEADA100U7D4213E0422F4E08E9455D9837E09FDDRA0072B1TX00S000073C35FF96F7A8C7D35D440CCBDA06FFED3AC7017F27B0A0E8896FFC971F0B9
HEAD (Message Header)
A1 (Response Code)
00 (Error Code)
U7D4213E0422F4E08E9455D9837E09FDD (Key under LMK)
RA0072B1TX00S000073C35FF96F7A8C7D35D440CCBDA06FFED3AC7017F27B0A0E8896FFC9 (Key under TMK - exported TR-31 key block)
71F0B9 (Key Check Value)
My questions are:
How can I extract the key (under TMK) from the TR-31 key block?
Does anyone know how to decode the TR-31 key block?

Take a look at the TR-31 standard (which isn't legally available for free, because ANSI wants to make your life miserable).
R doesn't seem to be part of the TR-31 block; I can only assume it is something Thales-specific.
A is the key block version ID (the first field of the header) and describes the key binding method being used. Version A is deprecated and uses the key variant binding method.
0072 is the length of the whole TR-31 key block, given as four decimal digits, which fits if we ignore the leading R.
B1 is the key usage, which is an Initial DUKPT Key
T is the algorithm of the key, which is Triple-DES (or Triple-DEA in TR-31 notation)
X is the mode of use, which is "Key used to derive other key(s)"
00 is the key version number, which means no key versioning is used for this key
S is the exportability of the key, which is "Sensitive"
00 is the number of optional blocks in decimal.
00 is reserved for future use and always has to be two ASCII zeros. As there are no optional blocks, this field is the last field of the header.
73C35FF96F7A8C7D35D440CCBDA06FFED3AC7017F27B0A0E is the hex encoded encrypted key (everything after the header except the last 8 characters). It is 24 bytes long, which fits for a 16 byte long key (2 bytes key length, 16 bytes key, 6 bytes padding to get to full 8 byte block size).
8896FFC9 is the MAC (the last 8 characters, which (for key block version ID A) are the leftmost 32 bits of the Triple-DES CBC-MAC)
To go any further (decrypt the encrypted key) I would need the Key Block Protection Key (which is probably the TMK?).
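To make the field boundaries concrete, here is a minimal parsing sketch in PHP (the offsets follow the version "A" header layout described above; nothing is decrypted or verified):

<?php
// Split the sample Thales response's key block into its TR-31 header fields.
$block = "RA0072B1TX00S000073C35FF96F7A8C7D35D440CCBDA06FFED3AC7017F27B0A0E8896FFC9";

if ($block[0] === 'R') {       // Thales scheme flag, not part of TR-31 itself
    $block = substr($block, 1);
}

$header = [
    'version'       => substr($block, 0, 1),   // "A"
    'length'        => substr($block, 1, 4),   // "0072", decimal, whole block
    'key_usage'     => substr($block, 5, 2),   // "B1", Initial DUKPT Key
    'algorithm'     => substr($block, 7, 1),   // "T", Triple-DES
    'mode_of_use'   => substr($block, 8, 1),   // "X", derives other keys
    'key_version'   => substr($block, 9, 2),   // "00", no key versioning
    'exportability' => substr($block, 11, 1),  // "S", sensitive
    'opt_blocks'    => substr($block, 12, 2),  // "00", no optional blocks
    'reserved'      => substr($block, 14, 2),  // "00"
];
$encryptedKey = substr($block, 16, -8);  // hex, still under the protection key
$mac          = substr($block, -8);      // leftmost 32 bits of the CBC-MAC

print_r($header);
echo "encrypted key: $encryptedKey\nMAC: $mac\n";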

'R' is the scheme flag used by Thales; it means the format of the key is TR-31. It would not usually be included in any messaging to a peer device, as it isn't part of the TR-31 format.

SQL string literal hexadecimal key to binary and back

After extensive searching I am resorting to Stack Overflow's wisdom to help me.
Problem:
I have a database table that should effectively store values of the format (UserKey, data0, data1, ...) where the UserKey is to be handled as the primary key, or at least as an index. The UserKey itself (externally defined) is a string of 32 characters representing a checksum, which happens to be (a very big) hexadecimal number, i.e. it looks like this: UserKey = "000000003abc4f6e000000003abc4f6e".
Now I can certainly store this UserKey in a CHAR(32) field, but that feels mighty inefficient, as I would reserve more space per character than the 4 bits I need to store the hexadecimal characters (0-9, A-F).
So my thought was to convert this string literal into the hex number it really represents and store that. But this number (32*4 bits = 16 bytes) is much too big to store or handle, as SQL only handles BIGINTs of 8 bytes.
My second thought was to convert it into a BINARY(16) representation, which should be compact and memory-efficient. However, I do not know how to efficiently convert between these two formats, as SQL internally also only handles numbers up to a maximum of 8 bytes.
Maybe there is a way to convert this string to binary block by block and stitch the binary together somehow, in the way of:
UserKey == concat( stringblock1, stringblock2, ..)
UserKey_binary = concat( toBinary( stringblock1 ), toBinary( stringblock2 ), ..)
So my question is: is there any such mechanism foreseen in SQL that would solve this for me? What would a custom solution look like? (I find it hard to believe that I am the first to encounter such a problem, as it has become quite fashionable to use ridiculously long hash keys in many applications.)
Also, the UserKey_binary should then act as the relational key for the table, so I hope for a bit of speed from this more compact representation, as comparisons need to inspect a minimal number of bits. Additionally, I would like to do any conversion, if possible, on the server side, so that user scripts do not have to be altered (the user side should, if possible, still transmit a string literal, not [partially] converted values, in the INSERT statement).
Contrary to my previous statement, it seems that MySQL's UNHEX() function does convert a string block by block and then concatenate, much like I stated above, so the method also works for hex literal values bigger than BIGINT's 8-byte limitation. Here is an example table that illustrates this:
CREATE TABLE `testdb`.`tab` (
  `hexcol_binary` BINARY(16) GENERATED ALWAYS AS (UNHEX(charcol)) STORED,
  `charcol` CHAR(32) NOT NULL,
  PRIMARY KEY (`hexcol_binary`));
The primary key is a generated column, so updates to charcol are the designated way of interacting with the table with string literals from the outside:
REPLACE into tab (charcol) VALUES ('1010202030304040A0A0B0B0C0C0D0D0');
SELECT HEX(hexcol_binary) as HEXstring, tab.* FROM tab;
As seen above, building keys and indexes on hexcol_binary works as intended.
To verify the speedup, add an index on the plain string column and compare:
ALTER TABLE `testdb`.`tab`
ADD INDEX `charkey` (`charcol` ASC);
EXPLAIN SELECT * from tab where hexcol_binary = UNHEX('1010202030304040A0A0B0B0C0C0D0D0') #keylength 16
EXPLAIN SELECT * from tab where charcol = '1010202030304040A0A0B0B0C0C0D0D0' #keylength 97
The lookup on the hexcol_binary column performs much better, especially if it is additionally made unique.
Note: the hex conversion does not care whether the hex characters A through F are capitalized, but charcol will be very sensitive to this.

Is Not BigInt Enough To House sha1?

I want to know if BIGINT is big enough.
I have created a registration.php where the user gets emailed an account activation link to click to verify his email address so that his account gets activated.
Account Activation Link is in this format:
$account_activation_link = "http://www.".$site_domain."/".$social_network_name."/activate_account.php?primary_website_email=".$primary_website_email."&account_activation_code=".$account_activation_code."";
Account Activation Code is in this format:
$account_activation_code = sha1( (string) mt_rand(5, 30)); //Type Casted the INT to STRING on the 1st parameter of sha1 as it needs to be a STRING.
Now, the following link got emailed:
http://www.myssite.com/folder/activate_account.php?primary_website_email=my.email#gmail.com&account_activation_code=22d200f8670dbdb3e253a90eee5098477c95c23d
Note the account activation code that got generated by sha1:
22d200f8670dbdb3e253a90eee5098477c95c23d
But in my MySQL db, in the "account_activation_code" column, I only see "22". The rest of the activation code is missing. Why is that?
The column is set to BIGINT. Is that not enough to house the SHA-1 generated code?
What is your suggestion?
Thank You
Hashing methods like SHA-1 produce binary values that are on the order of 160+ bits long, depending on the variant used. The common SHA-256 is 256 bits long. No cryptographic hash will fit in a 64-bit BIGINT field, because 64-bit hashes are uselessly small; you'd have nothing but collisions.
Normally people store hashes as their hex-encoded equivalents in a VARCHAR(255) column. These can be indexed and perform well enough in most situations, especially ones where you do periodic lookups based on clicks. From a performance and storage perspective there are no problems here.
Short answer: BIGINT is way too small.
A hash is basically a stream of bits (160 bits in the case of SHA-1). While it's certainly possible to render those bits as a base-2 number and convert it to base 10, you need really big storage to do so (as far as I know it's not common to see integer variables larger than 64 bits) and there are no obvious advantages. BIGINT is a 64-bit type and thus cannot do the job.
Unless you have a good reason to store it as a number, I'd simply go for either a binary column type or its plain-text hexadecimal representation in a good old VARCHAR (the latter tends to be more practical to handle).
You are trying to store a string in a BIGINT. That is your issue. SHA hashes are a mix of alphanumeric characters, not just numbers. Change the field to a VARCHAR and you'll be fine.
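To see the size mismatch directly, here is a small PHP sketch (the column types in the comments are suggestions, not taken from the question's schema):

<?php
// A SHA-1 digest is 40 hex characters (160 bits); a BIGINT holds only 64 bits,
// so MySQL keeps the leading numeric characters ("22") and drops the rest.
$code = sha1((string) mt_rand(5, 30));   // same call as in the question

var_dump(strlen($code));                 // int(40)
var_dump(PHP_INT_MAX);                   // 9223372036854775807 -- just 19 digits

// Two sane column choices instead of BIGINT:
//   CHAR(40)   -- store $code as-is (hex text)
//   BINARY(20) -- store raw bytes via UNHEX($code), or sha1($input, true) in PHP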

What happens if I send longer key to mysql AES_ENCRYPT than allowed

In MySQL 5.7.10 I am using AES_ENCRYPT with CBC 256-bit mode, so I have to use a 32-byte key. But if I use a longer key, the result is different, so how does MySQL take the longer key into account?
I wanted to use a 64-byte (512-bit) key, and this kind of "works" in MySQL, but using the Chilkat Crypt2 library the 64-byte key is not working; I mean the result is not the same as from MySQL.
Any ideas? Can I use a key longer than 32 bytes? (I use SHA-512 to generate the key, which is why I have a 64-byte key.)
bzero((char*) rkey, AES_KEY_LENGTH / 8);   /* the effective key starts as all zeros */
for (ptr = rkey, sptr = key; sptr < key_end; ptr++, sptr++)
{
  if (ptr == rkey_end)
    ptr = rkey;            /* Just loop over tmp_key until we used all key */
  *ptr ^= (uint8) *sptr;   /* fold each byte of the user key in by XOR */
}
I found the answer here. Basically it creates an array of zeros, then walks over the supplied key and XORs each byte into that array. No matter how long the key is, the array will always be 32/16 bytes.
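For reproducing MySQL's result elsewhere, here is a PHP re-implementation of that folding loop (the function name and the 32-byte default are mine; 32 corresponds to a 256-bit AES build, 16 to the default 128-bit one):

<?php
// XOR the supplied key byte-by-byte into a zeroed buffer of the real AES key
// size, wrapping around -- the same folding as the MySQL loop above.
function mysql_fold_key(string $key, int $realKeyLen = 32): string {
    $rkey = array_fill(0, $realKeyLen, 0);         // bzero() equivalent
    for ($i = 0; $i < strlen($key); $i++) {
        $rkey[$i % $realKeyLen] ^= ord($key[$i]);  // wrap and XOR, like *ptr ^= *sptr
    }
    return pack('C*', ...$rkey);
}

// A 64-byte SHA-512-derived key folded down to the 32 bytes MySQL actually uses;
// feed this folded key to the other library to match MySQL's output.
$effective = mysql_fold_key(hash('sha512', 'secret', true), 32);
echo bin2hex($effective), "\n";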

smallest storage of integer array in mysql?

I have a table of user entries, and for every entry I have an array of (2-byte) integers to store (15-25 of them, sporadically even more). The array elements will be written and read all at the same time; it is never necessary to update or access them individually. Their order matters. It makes sense to think of this as an array object.
I have many millions of these user entries and want to store them with the minimum possible amount of disk space. I am struggling, however, with MySQL's lack of an Array datatype.
I've been considering the following options.
Do it the MySQL way. Make a table my_data with columns user_id, data_id and data_int. To make this efficient, one needs an index on user_id, totalling well over 10 bytes per integer.
Store the array in text format. This takes ~6.5 bytes per integer.
Making 35-40 columns ("enough") and having -32768 mean 'empty' (since this value cannot occur in my data). This takes 3.5-4 bytes per integer, but is somewhat ugly (as I have to impose a strict limit on the number of elements in the array).
Is there a better way to do this in MySQL? I know MySQL has an efficient varchar type, so ideally I'd store my 2-byte integers as 2-byte chars in a varchar (or a similar approach with blob), but I'm not sure how to do that. Is this possible? How should this be done?
You could store them as separate SMALLINT NULL columns.
In MyISAM this uses 2 bytes of data + 1 bit of null indicator for each value.
In InnoDB, the null indicators are encoded into the column's field start offset, so they don't take any extra space, and null values are not actually stored in the row data. If the rows are small enough that all the offsets are 1 byte, then this uses 3 bytes for every existing value (1 byte offset, 2 bytes data), and 1 byte for every nonexistent value.
Either of these would be better than using INT with a special value to indicate that it doesn't exist, since that would be 4 bytes of data for every value.
See NULL in MySQL (Performance & Storage)
The best answer was given in the comments, so I'll repost it here with some use-ready code, for further reference.
MySQL has a varbinary type that works really well for this: you can simply use PHP's pack/unpack functions to convert them to and from binary form, and store that binary form in the database using varbinary. Example code for the conversion is below.
function pack24bit($n) { // input: 24-bit integer, output: binary string of 3 bytes
    $b3 = $n % 256;            // lowest byte
    $b2 = intdiv($n, 256);     // integer division keeps everything in ints
    $b1 = intdiv($b2, 256);    // highest byte
    $b2 = $b2 % 256;           // middle byte
    return pack('CCC', $b1, $b2, $b3);
}
function unpack24bit($packed) { // input: binary string of 3 bytes, output: 24-bit int
    $arr = unpack('C3b', $packed); // unpacks to keys b1, b2, b3
    return 256*(256*$arr['b1'] + $arr['b2']) + $arr['b3'];
}
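For the question's actual 2-byte integers no helper is even needed, since pack() has 16-bit format codes built in. A usage sketch (the column size is illustrative):

<?php
// 'n' = unsigned 16-bit big-endian, 2 bytes per element. For signed values,
// mask with & 0xFFFF when packing and re-sign anything > 32767 after unpacking.
$values = [15, 25, 1024, 65535];
$binary = pack('n*', ...$values);

// store $binary in e.g. a VARBINARY(50) column, and later read it back:
$restored = array_values(unpack('n*', $binary));
var_dump($restored === $values); // bool(true)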

Truncates Long Text/Memo string to 255 characters when it is a primary key field or "Indexed: Yes (No Duplicates)" is set

I created a table in MS Access 2013 with only one column, of "Long Text" type (called Memo earlier), and made it the primary key of the table. I stored a long string of 255+ characters and then tried to store another string whose first 255 characters were the same as the previously stored string but whose characters after the first 255 were different, and MS Access gave a "duplicate data" error. In the new string I changed the characters after the 255th position, using different combinations of characters, and all gave the error. But when I change any character before the 255th position, it does not give any error. So I concluded that MS Access checks only the first 255 characters of a "Long Text" field when checking for duplicates in that column. Is it so? What else could be the reason?
String Stored of 256 characters:
LoremIpsumissimplydummytextoftheprintingandtypesettingindustryLoremIpsumhasbeentheindustrysstandarddummytexteversincethe1500swhenanunknownprintertookagalleyoftypeandscrambledittomakeatypespecimenbookIthassurvivednotonlyfivecenturiesbutalsotheleapintoelectr
String Gave Error:
LoremIpsumissimplydummytextoftheprintingandtypesettingindustryLoremIpsumhasbeentheindustrysstandarddummytexteversincethe1500swhenanunknownprintertookagalleyoftypeandscrambledittomakeatypespecimenbookIthassurvivednotonlyfivecenturiesbutalsotheleapintoelect1
String Gave Error:
LoremIpsumissimplydummytextoftheprintingandtypesettingindustryLoremIpsumhasbeentheindustrysstandarddummytexteversincethe1500swhenanunknownprintertookagalleyoftypeandscrambledittomakeatypespecimenbookIthassurvivednotonlyfivecenturiesbutalsotheleapintoelect2
String Gave Error:
LoremIpsumissimplydummytextoftheprintingandtypesettingindustryLoremIpsumhasbeentheindustrysstandarddummytexteversincethe1500swhenanunknownprintertookagalleyoftypeandscrambledittomakeatypespecimenbookIthassurvivednotonlyfivecenturiesbutalsotheleapintoelect123
Does Not Give Error:
LoremIpsumissimplydummytextoftheprintingandtypesettingindustryLoremIpsumhasbeentheindustrysstandarddummytexteversincethe1500swhenanunknownprintertookagalleyoftypeandscrambledittomakeatypespecimenbookIthassurvivednotonlyfivecenturiesbutalsotheleapintoelec1
Does Not Give Error:
LoremIpsumissimplydummytextoftheprintingandtypesettingindustryLoremIpsumhasbeentheindustrysstandarddummytexteversincethe1500swhenanunknownprintertookagalleyoftypeandscrambledittomakeatypespecimenbookIthassurvivednotonlyfivecenturiesbutalsotheleapintoelec2
Does Not Give Error:
LoremIpsumissimplydummytextoftheprintingandtypesettingindustryLoremIpsumhasbeentheindustrysstandarddummytexteversincethe1500swhenanunknownprintertookagalleyoftypeandscrambledittomakeatypespecimenbookIthassurvivednotonlyfivecenturiesbutalsotheleapintoelec3
Please notice the difference in the last few characters of the above samples. The first stored string has 256 characters. Even if the column is not the primary key, the problem remains the same if "Indexed: Yes (No Duplicates)" is set in the table design for that column.
As @HansUp stated in the comments, Access (specifically the Jet/ACE db engine) only uses the first 255 characters of a Memo/Long Text field to build its index. Hence, it only uses the first 255 characters to enforce No Duplicates.
@HansUp's advice to use a different db engine that provides better support for long strings and full-text search is probably the best approach, but I understand there are often other considerations that may be limiting you to solving your problem in Access.
As such, here is an Access-only approach to solving your problem. This assumes the requirement you listed in the comments is valid; i.e., you need to store unique strings of between 400 and 1000 characters.
Alternative 1
Keep your initial Memo/Long Text field: Notes
Create four text fields (not Memo/Long Text) of 250 characters max: Notes1, Notes2, Notes3, Notes4
Set all four text fields: Required -> True and Allow Zero Length -> True (this is required to ensure the unique index is enforced for strings less than 751 characters)
Create a unique index and add all four text fields to that index
Don't ignore nulls in your index
When you store the values, you will need to store them in the Notes field and also split the string among the four smaller NotesX fields (see the sketch after this list)
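A sketch of that split (PHP for brevity; in Access this would be a few lines of VBA using Mid()):

<?php
// Split the long text into four 250-character chunks; zero-length strings
// (allowed by the field setup above) fill the unused chunks, so the unique
// index also works for strings shorter than 751 characters.
$notes  = str_repeat('LoremIpsum', 64); // any 400-1000 character string
$chunks = array_pad(str_split($notes, 250), 4, '');
[$notes1, $notes2, $notes3, $notes4] = $chunks;
// save $notes plus $notes1..$notes4; the unique index spans the four chunks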
Alternative 2:
Keep your current setup and enforce the uniqueness at code level. Every time you update or insert a note, do a search on all notes that match the first 255 characters, read the value and perform the comparison in code.
Alternative 3 (thanks to @HansUp for suggesting this in the comments):
Keep your initial Memo/Long Text field: Notes
Create a short text field to store a hash of your long text (e.g. 64 characters, enough for a hex-encoded 256-bit hash): NotesHash
Add a unique index to your NotesHash field
Every time the memo field is changed, re-compute the hash value and attempt to store it in the table
Notes for this method:
As the pigeonhole principle easily proves, there is the possibility that two different strings will generate the same hash (a collision). However, using a good hashing algorithm will make the actual probability approach zero.
This site offers some VB6/VBA/VBScript implementations of various hashing algorithms. I can't vouch for their correctness, but they passed the eye test for me. Use at your own risk, but it's at least a good starting point.
Really, you can use any deterministic function that returns a string of 255 characters or fewer given an arbitrarily large input. The difference between a crappy hash algorithm and a good one is how well it minimizes collisions. For that reason, I would suggest you use one based on a popular standard.
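Whatever hash you pick, the overall pattern is just this (PHP for brevity; in Access you would call one of the linked VBA implementations instead):

<?php
// Before insert/update: derive a short fixed-length key from the long text and
// store it in the uniquely indexed NotesHash column alongside the full text.
$notes     = str_repeat('LoremIpsum', 100); // arbitrarily long text
$notesHash = hash('sha256', $notes);        // always 64 hex characters

// INSERT INTO Notes (Notes, NotesHash) VALUES (?, ?)
// A duplicate-key error on NotesHash means the full text already exists
// (up to the vanishingly small collision probability noted above).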
And yes, I still highly recommend @HansUp's solution of simply using a different db engine.