Created key from wrong file, how could I modify it / work around the issue? - google-cloud-kms

I created a key from the wrong file by mistake:
gcloud kms encrypt --plaintext-file=keys/staging-access-chris \
--ciphertext-file=id_rsa.enc \
--location=global --keyring="$keyRing" --key=bitbucket
How can I update / edit this entry? Do I need to change the file name and all references to it, or is there a solution that requires less work?

The 'encrypt' function takes a plaintext (raw data) as input and produces a ciphertext (encrypted data) as output. It does not create a key.
If you meant to encrypt a different file, the fix is to simply delete the ciphertext you don't want, correct the command, and repeat.
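For example, assuming the file you actually meant to encrypt was keys/id_rsa (a hypothetical path; substitute your real plaintext file), something like:
rm id_rsa.enc
gcloud kms encrypt --plaintext-file=keys/id_rsa \
--ciphertext-file=id_rsa.enc \
--location=global --keyring="$keyRing" --key=bitbucket
Nothing on the keyring side needs to change; only the ciphertext file is replaced.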

Related

Can GPG change the contents of an encrypted file?

Our company has a vendor that sends a CSV containing commas that are part of the text. This causes columns to drift to the right. They claim that they are enclosing those fields in quotation marks (which would resolve the issue), but when we decrypt the files using gpg, the quotation marks are lost.
Is this claim nonsense?
The file is delivered encrypted as a .pgp.
This is the template for the batch file we use to invoke gpg to perform the decryption.
gpg --batch --yes --passphrase {PASSPHRASE} --pinentry-mode loopback -d -o "{OUTPUT}" "{TARGET}"
Yes, the claim is nonsense: the file before encryption and after decryption is identical, so gpg cannot be losing the quotation marks. If they are missing after decryption, they were never in the file the vendor encrypted.
If you want assurance the files are unchanged, have the vendor create a hash (e.g., SHA-256) of the file before encryption and include this hash when they send you the file.
For example, something like sha256sum FILE > SHA256SUM.txt && gpg -r USER -e FILE would produce a SHA256SUM.txt file containing the SHA-256 hash of FILE and also encrypt FILE with USER's key. The vendor can then send you SHA256SUM.txt along with the encrypted file so you can compare it to the hash of the decrypted file.
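On your side, a minimal sketch of the verification (assuming the decrypted output keeps the name FILE recorded in SHA256SUM.txt, and FILE.pgp stands in for the delivered file):
gpg -d -o FILE FILE.pgp
sha256sum -c SHA256SUM.txt
sha256sum -c prints "FILE: OK" when the hash of the decrypted file matches the vendor's recorded hash.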

GC and FK commands in Thales HSM

What is the difference between the output of the GC command and the FK command in a Thales HSM, since they both seem to generate the same output (clear and encrypted components)?
Assuming you are referring to console commands, not host commands:
The 'GC' command generates a completely new random key component and outputs it on the console, both in the clear and encrypted under the LMK.
The 'FK' command XORs multiple components generated by the 'GC' command and outputs the final key encrypted under the LMK.
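As a plain illustration of the component scheme (hypothetical hex values, not actual Thales output), the final clear key is simply the XOR of the clear components:
# two hypothetical 16-hex-digit clear components, as 'GC' would print them
comp1=0123456789ABCDEF
comp2=1111222233334444
# XOR of the components gives the final clear key; 'FK' does this internally
# and outputs the result encrypted under the LMK
printf '%016X\n' $(( 0x$comp1 ^ 0x$comp2 ))
# -> 10326745BA9889AB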

Regex to find keys in JSON

I want to match the keys in a JSON string with grep on a Linux shell. My objective is to remove the JSON keys so that the values come out as CSV. Please help me with the regex. I tried "(.*?)":
{"field1":"value1","field2":"value2"}
But the above regex matches "field1": and then "value1","field2":
So basically it shouldn't match groups containing a comma. I know this should be done in Python or Java, but I want to avoid deploying an application on that specific server. Also, internet access has been revoked from this server and there are many other restrictions, so I cannot install any new tools or commands. Is it possible?
You can try the following regex:
"([^"]+?)"\s*:
It matches any run of non-quote characters between double quotes, followed by a : (ignoring whitespace). Because [^"] cannot cross a closing quote, the match cannot overrun into "value1","field2": the way .*? did.
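For the stated goal (keys stripped so the values come out as CSV), a minimal sketch using only sed, which should already be present on the server (GNU sed assumed, for -E and \s):
echo '{"field1":"value1","field2":"value2"}' \
| sed -E 's/"[^"]+"\s*://g; s/[{}]//g'
# -> "value1","value2"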

How to use import.bat

I'm new to Neo4j. I'm trying to load CSV files using import.bat from the shell (on Windows). I have 500,000 nodes and 37 million relationships. The import.bat is not working.
The command in the shell:
../neo4j-community-3.0.4/bin/neo4j-import \
--into ../neo4j-community-3.0.4/data/databases/graph.db \
--nodes:Chain import\entity.csv \
--relationships import\roles.csv
but I did not know where to keep the CSV files or how to use import.bat from the shell. I'm not sure I'm in the right place:
neo4j-sh(?)$
(I looked at a lot of examples; for me it just does not work.)
I tried to start the server from the command line and it's not working. This is what I did:
neo4j-community-3.0.4/bin/neo4j.bat start
I want to work with indexes. I set the index, but when I try to use it, it's not working:
start n= node:Chain(entity_id='1') return n;
I set the properties:
node_keys_indexable=entity_id
and also:
node_auto_indexing=true
Without indexes, this query:
match p = (a:Chain)-[:tsuma*1..3]->(b:Chain)
where a.entity_id= 1
return p;
which starts from one node and expands up to 3 levels, returned 49 relationships in 5 minutes. That's a lot of time!
Your import command looks correct. You point to the CSV files where they are, just like you point to the --into directory. If you're unsure, use fully qualified names like /home/me/some-directory/entities.csv. What does it say? It's really hard to help you without knowing the error.
Legacy indexes don't play well with the importer, so enabling legacy indexes afterwards doesn't index your data. Could you instead use a schema index (CREATE INDEX ...)?
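A minimal sketch of the schema-index route (Cypher; the label and property are taken from the question):
CREATE INDEX ON :Chain(entity_id);
With that index in place, the MATCH ... WHERE a.entity_id = 1 query above can find its starting node without scanning every :Chain node, and the legacy start n= node:Chain(...) lookup is not needed.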

How to update OpenGrok indices

The OpenGrok wrapper script has an update option, but when I run it without any options (as echoed in the usage), I get
Loading the default instance configuration ...
FATAL ERROR: OpenGrok Source Path /var/opengrok/src doesn't exist - Aborting!
I have also tried specifying the SRC_ROOT, but continue to get the same error.
This might not be the right answer, but I have been able to update by re-running the index job itself. It doesn't take as long as the initial indexing.
From https://github.com/OpenGrok/OpenGrok:
E.g. if the opengrok data directory is /tank/opengrok and the source root is in /tank/source, then to get more verbosity run the indexer as:
$ OPENGROK_VERBOSE=true OPENGROK_INSTANCE_BASE=/tank/opengrok \
./OpenGrok index /tank/source
SRC_ROOT is a variable in the OpenGrok wrapper script (normally /usr/opengrok/bin/OpenGrok); it tells OpenGrok where the source code to be indexed is located. So you need to edit it:
SRC_ROOT="your/src/path"
After that you may also see an error for the data location; in that case, set the DATA_ROOT variable (the index location) as well:
DATA_ROOT="your/data"
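Alternatively, following the README snippet quoted above, you can avoid editing the script entirely. A sketch under the assumption that the wrapper derives the source path as $OPENGROK_INSTANCE_BASE/src (consistent with the /var/opengrok/src default in the error message):
# hypothetical layout: sources in /tank/opengrok/src, index data in /tank/opengrok/data
OPENGROK_INSTANCE_BASE=/tank/opengrok ./OpenGrok update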