GC and FK commands in Thales HSM

What is the difference between the output of the GC command and the FK command in a Thales HSM? They both seem to generate the same output (clear and encrypted components).

Assuming you are referring to console commands, not host commands:
The 'GC' command generates a completely new random key component and outputs it on the console both in the clear and encrypted under the LMK.
The 'FK' command XORs together multiple components (such as those generated by 'GC') and outputs the final key encrypted under the LMK.
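For intuition, the combining step 'FK' performs is a plain byte-wise XOR of the clear components. A minimal C sketch of just that step (the component values below are invented for illustration, not real HSM output, and parity handling is ignored):

#include <stdio.h>

int main(void)
{
    /* two 8-byte components, as 'GC' might print them (illustrative values) */
    unsigned char comp1[8] = {0x4A, 0x1D, 0xE6, 0x3B, 0x90, 0x5C, 0x72, 0x8F};
    unsigned char comp2[8] = {0xB3, 0x07, 0x29, 0xC4, 0x5E, 0xA1, 0x0D, 0x66};
    unsigned char key[8];

    /* 'FK' combines components by XOR; the HSM then returns the
       result encrypted under the LMK rather than in the clear */
    for (int i = 0; i < 8; i++)
        key[i] = comp1[i] ^ comp2[i];

    printf("combined key: ");
    for (int i = 0; i < 8; i++)
        printf("%02X", key[i]);
    printf("\n");
    return 0;
}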

mariadb c-connector bind / execute messes up memory allocation

I'm using the mariadb c-connector with prepare, bind and execute. It usually works, but one case ends up in "corrupted unsorted chunks" and a core dump when freeing the bind buffer. I suspect the whole malloc organisation is messed up after calling mysql_stmt_execute(). My test program MysqlDynamic.c shows:
the problem is connected only to the x509cert variable bound by bnd[9]
freeing memory fails only if bnd[9].is_null = 0; if is_null is set, execute ends normally
freeing memory (using FreeStmt()) after bind and before execute ends normally
printing bnd[9].buffer before execute shows the (void*) points to the correct string buffer
the behavior is the same whether bnd[9].buffer_length is set to STMT_INDICATOR_NTS or strlen()
other similar bindings (picture, bnd[10]) do not lead to corrupted memory and a core dump
I defined a C structure, test, for the test data in my test program MysqlDynamic.c; it is bound via the MYSQL_BIND structure.
The bindings for x509cert (a string buffer) in bindInsTest() are:
bnd[9].buffer_type = MYSQL_TYPE_STRING;
bnd[9].buffer_length = STMT_INDICATOR_NTS;
bnd[9].is_null = &para->x509certI;
bnd[9].buffer = (void*) para->x509cert;
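For comparison, a minimal self-contained string bind with MariaDB Connector/C might look like the sketch below. This is not the asker's MysqlDynamic.c: the table, column, and function names are invented, error handling is kept minimal, and buffer_length is set from strlen() here, since whether STMT_INDICATOR_NTS belongs in buffer_length at all is part of what is in question.

/* sketch: bind and execute one nullable string parameter */
#include <stdio.h>
#include <string.h>
#include <mysql.h>

int insert_cert(MYSQL *con, const char *cert)
{
    MYSQL_STMT *stmt = mysql_stmt_init(con);
    const char *sql = "INSERT INTO t_test (x509cert) VALUES (?)";
    MYSQL_BIND bnd[1];
    my_bool is_null = (cert == NULL);

    if (mysql_stmt_prepare(stmt, sql, strlen(sql))) {
        fprintf(stderr, "prepare: %s\n", mysql_stmt_error(stmt));
        mysql_stmt_close(stmt);
        return 1;
    }

    memset(bnd, 0, sizeof(bnd));               /* zero every member first */
    bnd[0].buffer_type   = MYSQL_TYPE_STRING;
    bnd[0].buffer        = (void *)cert;
    bnd[0].buffer_length = cert ? strlen(cert) : 0;  /* explicit byte length */
    bnd[0].is_null       = &is_null;

    if (mysql_stmt_bind_param(stmt, bnd) || mysql_stmt_execute(stmt)) {
        fprintf(stderr, "execute: %s\n", mysql_stmt_error(stmt));
        mysql_stmt_close(stmt);
        return 1;
    }
    return mysql_stmt_close(stmt) ? 1 : 0;
}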
Please get the details from the source file MysqlDynamic.c. Please adapt the defines in the source to your environment, verify the content, and run it. You will find compile info in the source code. MysqlDynamic -c will create the table, MysqlDynamic -i will insert 3 records each run, and MysqlDynamic -d will drop the table again.
MysqlDynamic -vc shows:
session set autocommit to <0>
connection id: 175
mariadb server ver:<100408>, client ver:<100408>
connected on localhost to db test by testA
>> if program get stuck - table is locked
table t_test created
mysql connection closed
pgm ended normaly
MysqlDynamic -i shows:
ins2: BufPara <92> name<master> stamp<> epoch<1651313806000>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<0x5596a0f0c220> buf<0x5596a0f0c220> null<0> length<172>
ins1: BufPara <91> name<> stamp<2020-04-30> epoch<1650707701123>
cert is cert<0x5596a0f181d0> buf<0x5596a0f181d0> null<0>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
ins0: BufPara <90> name<gugus> stamp<1988-10-12T18:43:36> epoch<922337203685477580>
cert is cert<(nil)> buf<(nil)> null<1>
picure is pic<(nil)> buf<(nil)> null<1> length<0>
free(): corrupted unsorted chunks
Aborted (core dumped)
Checking the t_test table content shows that all records are inserted as expected.
You can disable loading of x509cert and/or picture by commenting out the defines on lines 57/58; the program then ends normally. You can also comment out line 208; the buffers are then indicated as NULL.
Questions:
Is there a generic coding mistake in the program causing this behavior?
Can you run the program in your environment without a core dump? I'm currently using version 10.04.08.
Any improvement to the code will be welcome.

Can GPG change the contents of an encrypted file?

Our company has a vendor which sends a CSV that contains commas as part of the text. This causes columns to drift to the right. They claim that they are enclosing those fields in quotation marks (which would resolve the issue), but when we decrypt the files using gpg, the quotation marks are lost.
Is this claim nonsense?
The file is delivered encrypted as a .pgp.
This is the template for the batch file we use to invoke gpg to perform the decryption.
gpg --batch --yes --passphrase {PASSPHRASE} --pinentry-mode loopback -d -o "{OUTPUT}" "{TARGET}"
They claim that they are enclosing those fields in quotation marks (which would resolve the issue), but when we decrypt the files using gpg, the quotation marks are lost.
Is this claim nonsense?
Yes. GPG does not alter file contents: the file before encryption and after decryption is byte-for-byte identical.
If you want assurance that the files are unchanged, have the vendor create a hash (e.g., SHA-256) of the file before encryption and include this hash when they send you the file.
For example, something like sha256sum FILE > SHA256SUM.txt && gpg -r USER -e FILE would produce a SHA256SUM.txt file containing the sha256 hash of FILE and also encrypt FILE with USER's key. The vendor can then send you the SHA256SUM.txt file along with the encrypted file so you can compare it to the hash of the decrypted file.
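On your side, the round trip might then look like the following (file names are placeholders; sha256sum -c recomputes the hash of the decrypted file and compares it against the entry in the vendor's SHA256SUM.txt):

gpg --batch --yes --passphrase {PASSPHRASE} --pinentry-mode loopback -d -o FILE FILE.pgp
sha256sum -c SHA256SUM.txt

If the check reports OK, the quotation marks were never in the file the vendor encrypted.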

Created key from wrong file, how could I modify it / work around the issue?

I created a key from the wrong file by mistake:
gcloud kms encrypt --plaintext-file=keys/staging-access-chris \
--ciphertext-file=id_rsa.enc \
--location=global --keyring="$keyRing" --key=bitbucket
How can I update / edit this entry? Do I need to change the file name and all references to it or is there a solution that requires less work?
The 'encrypt' function takes a plaintext (raw data) as input and produces a ciphertext (encrypted data) as output. It does not create a key.
If you meant to encrypt a different file, the fix is simply to delete the unwanted ciphertext, correct the command, and run it again.
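For example, a corrected run might look like this (path/to/intended-file is a placeholder for whichever file you actually meant to encrypt; the other flags mirror your original command):

gcloud kms encrypt --plaintext-file=path/to/intended-file \
--ciphertext-file=id_rsa.enc \
--location=global --keyring="$keyRing" --key=bitbucket

To spot-check the result, gcloud kms decrypt reverses the operation (here writing to stdout via '-', assuming your gcloud version supports it):

gcloud kms decrypt --ciphertext-file=id_rsa.enc --plaintext-file=- \
--location=global --keyring="$keyRing" --key=bitbucket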

LMDB: How to interpret output from mdb_stat and mdb_dump utilities

I have a functional LMDB that, for test purposes, currently contains only 21 key / value records. I've successfully tested inserting and reading records, and I'm comfortable with the database working as intended.
However, when I use the mdb_stat and mdb_dump utilities, I see the following output, respectively:
Status of Main DB
Tree depth: 1
Branch pages: 0
Leaf pages: 1
Overflow pages: 0
Entries: 1
VERSION=3
format=bytevalue
type=btree
mapsize=1073741824
maxreaders=126
db_pagesize=4096
HEADER=END
4d65737361676573
000000000000010000000000000000000100000000000000d81e0000000000001500000000000000ba1d000000000000
DATA=END
In particular, why would mdb_stat indicate only one entry when I have 21? Moreover, each entry comprises 1024 x 300 values of five bytes per value. mdb_dump obviously doesn't show anywhere near the 1,536,000 bytes I'd expect to see, yet the values I mdb_put() and mdb_get() on the fly are correct. Anyone know what's going on?
The relationship between an operating system's directory and an LMDB environment's data.mdb and lock.mdb files is one-to-one.
If the LMDB environment (in the OS directory) has more than one database, then the environment also contains a separate LMDB database containing all of its named databases.
The mdb_stat and mdb_dump utilities appear to contain minimal logic: when fed a given directory via the command line, they produce results only for the database that stores database names, not for the database(s) storing the actual data of interest.
4d65737361676573 is the ASCII for "Messages", which is the name of the table ("sub-db" in LMDB terminology) storing the actual data in your case.
The mdb_dump command only dumps the main db by default. You can use the -s option to dump that sub-db, i.e.
mdb_dump -s Messages
or you can use the -a option to dump all the sub-dbs.
Since you are using a sub-database, the number of entries in the main database corresponds to the number of sub-databases you've created (i.e., just 1).
Try using mdb_stat -a. This will show you a breakdown of all the sub-databases (as well as the main DB), listing the number of entries for each. There you should see your 21 entries.
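The same breakdown is visible from the C API: opening the named sub-db and calling mdb_stat() on its handle reports the entries stored there rather than in the main DB. A minimal sketch (the environment path "./testdb" is a placeholder, and error checking is omitted for brevity):

#include <stdio.h>
#include <lmdb.h>

int main(void)
{
    MDB_env *env;
    MDB_txn *txn;
    MDB_dbi dbi;
    MDB_stat st;

    mdb_env_create(&env);
    mdb_env_set_maxdbs(env, 4);               /* required before opening named sub-dbs */
    mdb_env_open(env, "./testdb", MDB_RDONLY, 0664);

    mdb_txn_begin(env, NULL, MDB_RDONLY, &txn);
    mdb_dbi_open(txn, "Messages", 0, &dbi);   /* the named sub-db, not the main DB */
    mdb_stat(txn, dbi, &st);
    printf("entries in Messages: %zu\n", (size_t)st.ms_entries);  /* should report 21 */

    mdb_txn_abort(txn);
    mdb_env_close(env);
    return 0;
}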

How to update OpenGrok indices

The OpenGrok wrapper script has an update option, but when I run it without any other options (as echoed in the usage), I get:
Loading the default instance configuration ...
FATAL ERROR: OpenGrok Source Path /var/opengrok/src doesn't exist - Aborting!
I have also tried specifying the SRC_ROOT, but continue to get the same error.
This might not be the right answer, but I have been able to update by re-running the index job itself. It doesn't take as long as the initial indexing.
from https://github.com/OpenGrok/OpenGrok
E.g. if opengrok data directory is /tank/opengrok and source root is
in /tank/source then to get more verbosity run the indexer as:
$ OPENGROK_VERBOSE=true OPENGROK_INSTANCE_BASE=/tank/opengrok \
./OpenGrok index /tank/source
SRC_ROOT is a variable in the OpenGrok wrapper (normally in /usr/opengrok/bin/OpenGrok); it tells OpenGrok where the source code to be indexed lives. So you need to edit it:
SRC_ROOT="your/src/path"
After that you may also see an error for the data location, in which case you have to set the DATA_ROOT variable (the index location) as well:
DATA_ROOT="your/data"
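Putting it together, an invocation with both locations set might look like this (whether your copy of the wrapper honors these as environment overrides or needs them edited in place depends on the version, so treat this as a sketch; the paths are examples):

SRC_ROOT=/tank/source DATA_ROOT=/tank/opengrok/data \
OPENGROK_INSTANCE_BASE=/tank/opengrok ./OpenGrok index /tank/source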