Using TPM key handles for the device CA key and device identity key

Has anyone tried using TPM keys for the device CA and identity certificates on an IoT Edge device?
Currently the device CA and identity keys are generated as PEM files, and their paths are set in config.yaml as URIs.
I have generated a TPM key and created the device CA and identity certificates with a root CA. How do I use the TPM key instead of the PEM key file by referencing its handle, for example 0x81000002?

The objective is to secure the certificates used by the edge device with the TPM for both upstream (device identity) and downstream (device CA) operations.
Currently both keys above are PEM key files, which is unsafe.
Operation example:
Step 1: The user creates TPM keys for the device CA and identity keys with persistent handles under an SRK primary key, using tpm2-tools.
Example: device identity key at 0x81020000 and device CA key at 0x81000002.
echo ">>>>>>>> Create SRK primary"
tpm2_createprimary -C o -g sha256 -G ecc -c SRK_primary.ctx
tpm2_evictcontrol -C o -c SRK_primary.ctx 0x81000001
echo "create persistent IDevID Key"
tpm2_create -C 0x81000001 -g sha256 -G ecc -r ID_Priv.key -u ID_Pub.key
tpm2_load -C 0x81000001 -u ID_Pub.key -r ID_Priv.key -n ID_key_name_structure.data -c ID_keycontext.ctx
tpm2_evictcontrol -C o -c ID_keycontext.ctx 0x81020000
echo "create persistent devCA Key"
tpm2_create -C 0x81000001 -g sha256 -G rsa -r DevCA_Priv.key -u DevCA_Pub.key
tpm2_load -C 0x81000001 -u DevCA_Pub.key -r DevCA_Priv.key -n DevCA_key_name_structure.data -c DevCA_keycontext.ctx
tpm2_evictcontrol -C o -c DevCA_keycontext.ctx 0x81000002
Step 2: Create CSRs and certificates using the above key handles, for example:
openssl req -new -engine tpm2tss -key 0x81020000 -passin pass:"" -keyform engine -subj /CN=DeviceIdentity -out dev_iden.csr
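To complete this step, here is a minimal sketch of turning that CSR into a certificate with the root CA; rootCA.pem and rootCA.key are only assumed names for wherever your root CA material lives:
# sign the TPM-backed identity CSR with the (file-based) root CA key; file names are illustrative
openssl x509 -req -in dev_iden.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -sha256 -days 365 -out dev_iden_cert.pem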
Step 3: A modification to the security daemon is needed to make this work:
modify config.yaml to use the above handles for the device CA and identity keys, and specify the certificates as URI paths.

Great question!
The IoT Edge runtime needs to access the TPM to automatically provision your device. See how to do it here: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-auto-provision-simulated-device-linux#give-iot-edge-access-to-the-tpm
The way the attestation process works is like this:
When a device with a TPM first connects to the Device Provisioning Service, the service first checks the provided EK_pub against the EK_pub stored in the enrollment list. If the EK_pubs do not match, the device is not allowed to provision. If the EK_pubs do match, the service then requires the device to prove ownership of the private portion of the EK via a nonce challenge, which is a secure challenge used to prove identity. The Device Provisioning Service generates a nonce and then encrypts it with the SRK and then the EK_pub, both of which are provided by the device during the initial registration call. The TPM always keeps the private portion of the EK secure. This prevents counterfeiting and ensures SAS tokens are securely provisioned to authorized devices.
Ref: https://learn.microsoft.com/en-us/azure/iot-dps/concepts-tpm-attestation
I believe your case is different though (you want to add your CA certificate to your TPM and then retrieve it from there?). I saw your feedback request, sharing here for others to vote:
Using TPM keys for Device CA and Identity - https://feedback.azure.com/forums/907045-azure-iot-edge/suggestions/40920013-using-tpm-keys-for-device-ca-and-identity
If your idea is identical to the one already added in IoT Edge Feedback forum, please merge it and vote:
Store Private key for X.509 based DPS securely on HSM - https://feedback.azure.com/forums/907045-azure-iot-edge/suggestions/39457678-store-private-key-for-x-509-based-dps-securely-on

Related

Is there an easy way to delete all resources in an Oracle Cloud Infrastructure compartment?

Is there an easy way to delete all resources in a compartment of an Oracle Cloud Infrastructure tenancy?
Tracking all resources in a compartment is hard to do manually.
I know we can use the Tenancy Explorer, but even with the Tenancy Explorer it is hard, since:
Tenancy Explorer does not list all resources as of now (stream pools, for example).
The process is still manual.
You can do that easily with a shell function using the OCI CLI, as follows:
delcmpt(){
OCI_TENANCY_NAME=<Your Tenancy Name>
OCI_TENANCY_OCID=<tenancy ocid>
OCI_CMPT_ID=$1 #OCID for cmpt to be deleted, passed as argument
OCI_CMPT_NAME=$(oci iam compartment get -c ${OCI_CMPT_ID} | jq -r '.data.name')
echo Compartment being deleted is ${OCI_CMPT_NAME} for 4 regions SJC, PHX, IAD and BOM.
declare -a region_codes=("SJC"
"PHX" "IAD"
"BOM"
) # list of region codes where the compartment's resources exist
for OCI_REGION_CODE in "${region_codes[@]}"
do
UNIQUE_STACK_ID=$(date "+DATE_%Y_%m_%d_TIME_%H_%M")
OCID_CMPT_STACK=$(oci resource-manager stack create-from-compartment --compartment-id ${OCI_TENANCY_OCID} \
--config-source-compartment-id ${OCI_CMPT_ID} \
--config-source-region ${OCI_REGION_CODE} --terraform-version "1.0.x" \
--display-name "Stack_${UNIQUE_STACK_ID}_${OCI_REGION_CODE}" --description "Stack From Compartment ${OCI_CMPT_NAME} for region ${OCI_REGION_CODE}" --wait-for-state SUCCEEDED --query "data.resources[0].identifier" --raw-output)
echo $OCID_CMPT_STACK
oci resource-manager job create-destroy-job --execution-plan-strategy 'AUTO_APPROVED' --stack-id ${OCID_CMPT_STACK} --wait-for-state SUCCEEDED --max-wait-seconds 300
# run the destroy job twice: it occasionally fails on the first attempt, and the operation is idempotent
oci resource-manager job create-destroy-job --execution-plan-strategy 'AUTO_APPROVED' --stack-id ${OCID_CMPT_STACK} --wait-for-state SUCCEEDED --max-wait-seconds 540
oci resource-manager stack delete --stack-id ${OCID_CMPT_STACK} --force --wait-for-state DELETED
done
oci iam compartment delete -c ${OCI_CMPT_ID} --force --wait-for-state SUCCEEDED
}
OCI_CMPT_ID is the OCID of the compartment to be deleted.
OCI_TENANCY_OCID is your tenancy OCID.
usage:
shell $: delcmpt OCID_for_the_Compartment_to_be_deleted

Why does gsutil restore a file from a KMS-encrypted bucket using a service account without decrypt permission?

I am working with GCP KMS, and it seems that when I send a file to a GCP bucket (using gsutil cp) it is encrypted.
However, I have a question about the permission needed to restore that file from the same bucket using a different service account. The service account I am using to restore the file from the bucket doesn't have the Decrypt privilege, and yet gsutil cp works.
My question is whether this is normal behavior, or if I'm missing something.
Let me describe my question:
First of all, I confirm that the default encryption for the bucket is the KEY that I set up previously:
$ gsutil kms encryption gs://my-bucket
Default encryption key for gs://my-bucket:
projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY
Next, with gcloud config, I set a service account that has the "Storage Object Creator" and "Cloud KMS CryptoKey Encrypter" roles:
$ gcloud config set account my-service-account-with-Encrypter-and-object-creator-permissions
Updated property [core/account].
I send a local file to the bucket:
$ gsutil cp my-file gs://my-bucket
Copying file://my-file [Content-Type=application/vnd.openxmlformats-officedocument.presentationml.presentation]...
| [1 files][602.5 KiB/602.5 KiB]
Operation completed over 1 objects/602.5 KiB.
After sending the file to the bucket, I confirm that the file is encrypted using the KMS key I created before:
$ gsutil ls -L gs://my-bucket
gs://my-bucket/my-file:
Creation time: Mon, 25 Mar 2019 06:41:02 GMT
Update time: Mon, 25 Mar 2019 06:41:02 GMT
Storage class: REGIONAL
KMS key: projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY/cryptoKeyVersions/1
Content-Language: en
Content-Length: 616959
Content-Type: application/vnd.openxmlformats-officedocument.presentationml.presentation
Hash (crc32c): 8VXRTU==
Hash (md5): fhfhfhfhfhfhfhf==
ETag: xvxvxvxvxvxvxvxvx=
Generation: 876868686868686
Metageneration: 1
ACL: []
Next, I switch to another service account, this time WITHOUT the Decrypt permission but with the Object Viewer role (so that it is able to read files from the bucket):
$ gcloud config set account my-service-account-WITHOUT-DECRYPT-and-with-object-viewer-permissions
Updated property [core/account].
After setting up the new service account (WITHOUT the Decrypt permission), the gsutil command to restore the file from the bucket works smoothly...
gsutil cp gs://my-bucket/my-file .
Copying gs://my-bucket/my-file...
\ [1 files][602.5 KiB/602.5 KiB]
Operation completed over 1 objects/602.5 KiB.
My question is whether this is normal behavior. Since the new service account doesn't have the Decrypt permission, shouldn't the gsutil cp command to restore the file fail? I mean, isn't the idea of KMS encryption that the second gsutil cp command should fail with a "403 permission denied" error message or something similar?
If I revoke the "Storage Object Viewer" role from the second service account, gsutil does fail, but that is because it doesn't have permission to read the file:
$ gsutil cp gs://my-bucket/my-file .
AccessDeniedException: 403 my-service-account-WITHOUT-DECRYPT-and-with-object-viewer-permissions does not have storage.objects.list access to my-bucket.
I'd appreciate it if someone could give me a hand and clarify this. Specifically, I'm not sure whether the command gsutil cp gs://my-bucket/my-file . should work or not.
I think it shouldn't work (because the service account doesn't have the Decrypt permission), or should it?
This is working correctly. When you use Cloud KMS with Cloud Storage, the data is encrypted and decrypted under the authority of the Cloud Storage service, not under the authority of the entity requesting access to the object. This is why you have to add the Cloud Storage service account to the ACL for your key in order for CMEK to work.
When an encrypted GCS object is accessed, the KMS decrypt permission of the accessor is never used and its presence isn't relevant.
If you don't want the second service account to be able to access the file, remove its read access.
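For context, granting the Cloud Storage service agent access to the key is typically done along these lines; the project number and the key, keyring, and location names below are placeholders for your own values:
# allow the Cloud Storage service agent to encrypt/decrypt with the CMEK; values are illustrative
$ gcloud kms keys add-iam-policy-binding MY-KEY \
    --keyring my-keyring --location my-location \
    --member serviceAccount:service-PROJECT_NUMBER@gs-project-accounts.iam.gserviceaccount.com \
    --role roles/cloudkms.cryptoKeyEncrypterDecrypter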
By default, Cloud Storage encrypts all object data using Google-managed encryption keys. You can instead provide your own keys. There are two types:
CSEK, which you must supply with each request;
CMEK, which you also supply, but which is managed by the Cloud KMS service (this is the one you are using).
When you use gsutil cp, you are already using the encryption method behind the scenes. So, as stated in the documentation for Using Encryption Keys:
While decrypting a CSEK-encrypted object requires supplying the CSEK in one of the decryption_key attributes, this is not necessary for decrypting CMEK-encrypted objects because the name of the CMEK used to encrypt the object is stored in the object's metadata.
As you can see, the key is not necessary because its name is already included in the object's metadata, which is what gsutil uses.
If encryption_key is not supplied, gsutil ensures that all data it writes or copies instead uses the destination bucket's default encryption type - if the bucket has a default KMS key set, that CMEK is used for encryption; if not, Google-managed encryption is used.
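As an illustration of that last point, the bucket's default CMEK (which is what makes a plain gsutil cp upload come out encrypted) can be set with the gsutil kms command; the key path below is the example key from the question:
# set the default KMS key used to encrypt new objects written to the bucket
$ gsutil kms encryption -k projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY gs://my-bucket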

How to create Hyperledger Sawtooth network keys

I am setting up a Hyperledger Sawtooth network. In /etc/sawtooth/validator.toml.example, I saw the following:
# A Curve ZMQ key pair are used to create a secured network based on side-band
# sharing of a single network key pair to all participating nodes.
# Note if the config file does not exist or these are not set, the network
# will default to being insecure.
network_public_key = 'wFMwoOt>yFqI/ek.G[tfMMILHWw#vXB[Sv}>l>i)'
network_private_key = 'r&oJ5aQDj4+V]p2:Lz70Eu0x#m%IwzBdP(}&hWM*'
Can anybody tell me how to create another keypair?
These are the ZMQ message keys used to communicate securely with other nodes.
If you've installed Sawtooth already, python3 and python3-zmq will already be installed and available on your system. Here's an example that creates the keypair in Python:
import zmq
(public, secret) = zmq.curve_keypair()
print("network_public_key =", public.decode("utf-8"),
"\nnetwork_private_key =", secret.decode("utf-8"))
Alternatively, you can use a compiled binary tool:
$ sudo apt-get install g++ libzmq3-dev
$ wget https://raw.githubusercontent.com/zeromq/libzmq/master/tools/curve_keygen.cpp
$ g++ curve_keygen.cpp -o curve_keygen -lzmq
$ ./curve_keygen
Copy the corresponding public key output to the network_public_key field and the private key output to the network_private_key field in /etc/sawtooth/validator.toml.
The above was from my Sawtooth FAQ at
https://sawtooth.hyperledger.org/faq/validator/#how-do-i-generate-the-network-public-key-and-network-private-key-in-validator-toml

How to create re-encrypting route for hawkular-metrics in OpenShift Enterprise 3.2

As per the documentation for enabling cluster metrics, I should create a re-encrypting route as shown below:
$ oc create route reencrypt hawkular-metrics-reencrypt \
--hostname hawkular-metrics.example.com \
--key /path/to/key \
--cert /path/to/cert \
--ca-cert /path/to/ca.crt \
--service hawkular-metrics \
--dest-ca-cert /path/to/internal-ca.crt
What exactly should I use for these keys and certificates?
Do they already exist somewhere, or do I need to create them?
Openshift Metrics developer here.
Sorry if the docs were not clear enough.
The route is used to expose Hawkular Metrics, particularly to the browser running the OpenShift console.
If you don't specify any certificates, the system will use a self-signed certificate instead. The browser will complain that this self-signed certificate is not trusted, but you can usually just click through and accept it anyway. If you are ok with this, then you don't need to do any extra steps.
If you want the browser to trust this connection by default, then you will need to provide your own certificates signed by a trusted certificate authority. This is exactly the same as generating your own certificate for a normal site running under https.
From the following command:
$ oc create route reencrypt hawkular-metrics-reencrypt --hostname hawkular-metrics.example.com --key /path/to/key --cert /path/to/cert --ca-cert /path/to/ca.crt --service hawkular-metrics --dest-ca-cert /path/to/internal-ca.crt
'cert' corresponds to your certificate signed by the certificate authority
'key' corresponds to the key for your certificate
'ca-cert' corresponds to the certificate authority's certificate
'dest-ca-cert' corresponds to the certificate authority that signed the self-signed certificate generated by the metrics deployer
The docs https://docs.openshift.com/enterprise/3.2/install_config/cluster_metrics.html#metrics-reencrypting-route should explain how to get the dest-ca-cert from the system
First of all, and as far as I know, note that using a re-encrypting route is optional. The documentation mentions deploying without importing any certificate:
oc secrets new metrics-deployer nothing=/dev/null
You should be able to start with that and get Hawkular working (for instance, you'll be able to curl with the '-k' option). But a re-encrypting route is sometimes necessary, as some clients refuse to communicate with untrusted certificates.
This page explains what are the certificates needed here: https://docs.openshift.com/enterprise/3.1/install_config/cluster_metrics.html#metrics-reencrypting-route
Note that you can also configure it from the web console if you find it more convenient: from https://(your_openshift_host)/console/project/openshift-infra/browse/routes , you can create a new route and upload the certificate files from that page. Under "TLS termination" select "Re-Encrypt", then provide the 4 certificate files.
If you don't know how to generate self-signed certificates, you can follow the steps described here: https://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certificate-authority/ . You'll end up with a rootCA.pem file (use it as the "CA Certificate"), a device.key file (or name it hawkular.key, and upload it as the private key) and a device.crt file (you can name it hawkular.pem; it's the PEM-format certificate). When asked for the Common Name, make sure to enter the hostname of your Hawkular server, such as "hawkular-metrics.example.com".
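For convenience, here is a rough sketch of those steps with openssl; the file names follow the guide above, and the key sizes and validity periods are only suggestions:
# create your own certificate authority
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem
# create a key and CSR for Hawkular; enter hawkular-metrics.example.com as the Common Name
openssl genrsa -out device.key 2048
openssl req -new -key device.key -out device.csr
# sign the CSR with your CA to produce the certificate
openssl x509 -req -in device.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out device.crt -days 500 -sha256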
The final one to provide is the current self-signed certificate used by Hawkular, under the so-called "Destination CA Certificate". The OpenShift documentation explains how to get it: run
base64 -d <<< \
`oc get -o yaml secrets hawkular-metrics-certificate \
| grep -i hawkular-metrics-ca.certificate | awk '{print $2}'`
and, if you're using the web console, save it to a file then upload it under Destination CA Certificate.
Now you should be done with re-encrypting.

I got error " Invalid Key Hash XXXXXXXXXXX does not match any stored key hashes " in realise mode in android?

I am developing a Facebook integration on Android. The app works fine in debug mode, but when I run it in release mode it does not work and I get the error "Invalid Key Hash XXXXXXXXXXX does not match any stored key hashes". How can I resolve this problem?
For people looking for an answer:
You need to add both the release and the debug key hashes under your app on the Facebook developer site.
Here are the steps I followed:
Create a keystore by running this command:
...\Java\jdk1.7.0_01\bin>keytool -genkey -v -keystore <APP_NAME>.keystore -alias <APP_NAME> -keyalg RSA -validity 999999
You have already added the debug key hash. Now generate a release key hash and add it under your project:
..\Java\jdk1.7.0_01\bin>keytool -exportcert -alias <APP_NAME> -keystore <APP_NAME>.keystore | openssl sha1 -binary | openssl base64
Sign your app in release mode.
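For reference, the debug key hash (the one presumably already added) is usually generated the same way from the default Android debug keystore; on Windows it typically lives at %HOMEPATH%\.android\debug.keystore (on Linux/macOS, ~/.android/debug.keystore) and the default keystore password is android:
keytool -exportcert -alias androiddebugkey -keystore %HOMEPATH%\.android\debug.keystore | openssl sha1 -binary | openssl base64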