Is there an easy way to delete all resources in an Oracle Cloud Infrastructure compartment?

Is there an easy way to delete all resources in a compartment of an Oracle Cloud Infrastructure tenancy?
Tracking every resource in a compartment is hard to do manually.
I know we can use the Tenancy Explorer, but even with the Tenancy Explorer it is difficult, because:
Tenancy Explorer does not yet list all resource types, such as stream pools, and
the process is still manual.

You can do that with a shell function using the OCI CLI, as follows:
delcmpt() {
  OCI_TENANCY_NAME=<your tenancy name>
  OCI_TENANCY_OCID=<tenancy OCID>
  OCI_CMPT_ID=$1  # OCID of the compartment to be deleted, passed as argument
  OCI_CMPT_NAME=$(oci iam compartment get --compartment-id ${OCI_CMPT_ID} | jq -r '.data.name')
  echo "Compartment being deleted is ${OCI_CMPT_NAME} for 4 regions: SJC, PHX, IAD and BOM."
  # List of region codes where the compartment's resources exist
  declare -a region_codes=("SJC" "PHX" "IAD" "BOM")
  for OCI_REGION_CODE in "${region_codes[@]}"
  do
    UNIQUE_STACK_ID=$(date "+DATE_%Y_%m_%d_TIME_%H_%M")
    OCID_CMPT_STACK=$(oci resource-manager stack create-from-compartment \
      --compartment-id ${OCI_TENANCY_OCID} \
      --config-source-compartment-id ${OCI_CMPT_ID} \
      --config-source-region ${OCI_REGION_CODE} \
      --terraform-version "1.0.x" \
      --display-name "Stack_${UNIQUE_STACK_ID}_${OCI_REGION_CODE}" \
      --description "Stack from compartment ${OCI_CMPT_NAME} for region ${OCI_REGION_CODE}" \
      --wait-for-state SUCCEEDED --query "data.resources[0].identifier" --raw-output)
    echo $OCID_CMPT_STACK
    # Run the destroy job twice: it occasionally fails on the first pass,
    # and destroying is idempotent.
    oci resource-manager job create-destroy-job --execution-plan-strategy 'AUTO_APPROVED' --stack-id ${OCID_CMPT_STACK} --wait-for-state SUCCEEDED --max-wait-seconds 300
    oci resource-manager job create-destroy-job --execution-plan-strategy 'AUTO_APPROVED' --stack-id ${OCID_CMPT_STACK} --wait-for-state SUCCEEDED --max-wait-seconds 540
    oci resource-manager stack delete --stack-id ${OCID_CMPT_STACK} --force --wait-for-state DELETED
  done
  oci iam compartment delete --compartment-id ${OCI_CMPT_ID} --force --wait-for-state SUCCEEDED
}
OCI_CMPT_ID is the OCID of the compartment to be deleted; OCI_TENANCY_OCID is your tenancy OCID.
Usage:
$ delcmpt OCID_for_the_Compartment_to_be_deleted
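Rather than hard-coding the region list, you could derive it from the tenancy's region subscriptions. A minimal sketch, assuming the OCI CLI and jq are configured (the helper name is mine, not from the original):

```shell
# Sketch: print the region keys (e.g. PHX, IAD) the tenancy is subscribed
# to; the output can seed delcmpt's region_codes array.
list_region_codes() {
  oci iam region-subscription list | jq -r '.data[]."region-key"'
}
```

For example, `region_codes=($(list_region_codes))` would populate the array dynamically.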

Related

Using TPM key handle for device CA key and device Identity Keys

Has anyone tried using the TPM key for the device CA and identity certificates on an IoT Edge device?
Currently the device CA and identity keys are generated as PEM files, and their paths are set in config.yaml as URI links.
I have generated a TPM key and created the device CA and identity certificates with a root CA. How do I use the TPM key instead of the PEM key file by referencing its handle, for example 0x81000002?
The objective is to secure the certificates used by the edge device for upstream (device identity) and downstream (device CA) operations using the TPM.
Currently both keys above are plain PEM key files, which is unsafe.
Operation example:
Step 1: Create TPM keys for the device CA and identity keys with persistent handles under an SRK primary key using tpm2-tools,
for example the device identity at 0x81020000 and the device CA at 0x81000002:
echo ">>>>>>>> Create SRK primary"
tpm2_createprimary -C o -g sha256 -G ecc -c SRK_primary.ctx
tpm2_evictcontrol -C o -c SRK_primary.ctx 0x81000001
echo "create persistent IDevID Key"
tpm2_create -C 0x81000001 -g sha256 -G ecc -r ID_Priv.key -u ID_Pub.key
tpm2_load -C 0x81000001 -u ID_Pub.key -r ID_Priv.key -n ID_key_name_structure.data -c ID_keycontext.ctx
tpm2_evictcontrol -C o -c ID_keycontext.ctx 0x81020000
echo "create persistent devCA Key"
tpm2_create -C 0x81000001 -g sha256 -G rsa -r DevCA_Priv.key -u DevCA_Pub.key
tpm2_load -C 0x81000001 -u DevCA_Pub.key -r DevCA_Priv.key -n DevCA_key_name_structure.data -c DevCA_keycontext.ctx
tpm2_evictcontrol -C o -c DevCA_keycontext.ctx 0x81000002
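After Step 1 it can be worth confirming that the three handles actually persisted. A small sketch (the wrapper function is mine; it requires a TPM or simulator to produce output):

```shell
# Sketch: list the TPM's persistent objects; the SRK (0x81000001),
# identity key (0x81020000) and device CA key (0x81000002) created
# above should all appear in the output.
check_persistent_handles() {
  tpm2_getcap handles-persistent
}
```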
Step2: Create CSR and certificates using the above key handles
openssl req -new -engine tpm2tss -key 0x81020000 -passin pass:"" -keyform engine -subj /CN=DeviceIdentity -out dev_iden.csr
Step3: need modification in the security daemon to make this work
modify config.yaml to use the above handles for the keys of device ca and idendtity and specify certs as URI path
Great question!
The IoT Edge runtime needs to access the TPM to automatically provision your device. See how to do it here: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-auto-provision-simulated-device-linux#give-iot-edge-access-to-the-tpm
The way the attestation process works is like this:
When a device with a TPM first connects to the Device Provisioning Service, the service first checks the provided EK_pub against the EK_pub stored in the enrollment list. If the EK_pubs do not match, the device is not allowed to provision. If the EK_pubs do match, the service then requires the device to prove ownership of the private portion of the EK via a nonce challenge, which is a secure challenge used to prove identity. The Device Provisioning Service generates a nonce and then encrypts it with the SRK and then the EK_pub, both of which are provided by the device during the initial registration call. The TPM always keeps the private portion of the EK secure. This prevents counterfeiting and ensures SAS tokens are securely provisioned to authorized devices.
Ref: https://learn.microsoft.com/en-us/azure/iot-dps/concepts-tpm-attestation
I believe your case is different though (you want to add your CA certificate to your TPM and then retrieve it from there?). I saw your feedback request, sharing here for others to vote:
Using TPM keys for Device CA and Identity - https://feedback.azure.com/forums/907045-azure-iot-edge/suggestions/40920013-using-tpm-keys-for-device-ca-and-identity
If your idea is identical to the one already added in IoT Edge Feedback forum, please merge it and vote:
Store Private key for X.509 based DPS securely on HSM - https://feedback.azure.com/forums/907045-azure-iot-edge/suggestions/39457678-store-private-key-for-x-509-based-dps-securely-on

Is there any API to list all the boot volumes under the root compartment?

In Oracle Cloud we have the option to create a VM in the root account/compartment,
so the boot volumes created for that VM also fall under the root compartment.
We also have the option to terminate the VM while leaving its boot volume active.
The existing API to list boot volumes needs a compartmentId as input, and these boot volumes don't have any such compartmentId.
I just wanted to know if there is any API to list all the boot volumes in the root compartment.
When you want to refer to the root compartment in an API or other call, use the tenancy OCID as the compartment OCID. So, to list all boot volumes in IAD AD2, use:
oci bv boot-volume list --compartment-id ocid1.tenancy.oc1..aaXXXXXXXXXXXXXXXXXXXXXh3vXXXXXXXXXXXXXXXXXXXX --availability-domain EXXI:US-ASHBURN-AD-2
Thanks,
Tony
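Since `oci bv boot-volume list` also needs an availability domain, a small loop over the region's ADs covers all of them. A sketch, assuming the OCI CLI and jq are configured and using the tenancy OCID as the root compartment OCID per the answer above (the function name is mine):

```shell
# Sketch: list boot volumes in the root compartment across every
# availability domain of the configured region.
list_root_boot_volumes() {
  local tenancy_ocid=$1
  local ad
  for ad in $(oci iam availability-domain list --compartment-id "$tenancy_ocid" | jq -r '.data[].name'); do
    echo "=== ${ad} ==="
    oci bv boot-volume list --compartment-id "$tenancy_ocid" --availability-domain "$ad"
  done
}
```

Usage: `list_root_boot_volumes ocid1.tenancy.oc1..aaXXXX`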

ERROR: (gcloud.beta.functions.deploy) ... message=[The caller does not have permission]

I am trying to deploy code from this repo:
https://github.com/anishkny/puppeteer-on-cloud-functions
in Google Cloud Build. My cloudbuild.yaml file contents are:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
I have given the following roles to my Cloud Build service account (****@cloudbuild.gserviceaccount.com):
Cloud Build Service Account
Cloud Functions Developer
Yet, in my Cloud Build log I see the following error:
starting build "1f04522c-fe60-4a25-a4a8-d70e496e2821"
FETCHSOURCE
Fetching storage object: gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047
Copying gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047...
/ [0 files][ 0.0 B/ 835.0 B]
/ [1 files][ 835.0 B/ 835.0 B]
Operation completed over 1 objects/835.0 B.
tar: Substituting `.' for empty member name
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
ERROR: (gcloud.beta.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[The caller does not have permission]
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/gcloud" failed: exit status 1
What am I missing?
It would appear that the permissions changed when (perhaps) Cloud Functions went GA. Another customer raised this issue today and I recalled your question.
The Cloud Build robot (${NUM}@cloudbuild.gserviceaccount.com) additionally needs to be a serviceAccountUser of the ${PROJECT_ID}@appspot.gserviceaccount.com account:
NB While the Cloud Build robot's local part is the project number (${NUM}), the appspot robot's local part is the project ID (${PROJECT})
Please try:
PROJECT=[[YOUR-PROJECT-ID]]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
${PROJECT}@appspot.gserviceaccount.com \
--member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
--role=roles/iam.serviceAccountUser \
--project=${PROJECT}
Let me know!
I struggled with this too after reading quite a bit of documentation. A combination of the above answers got me on the right track. Basically, something like the following is needed:
PROJECT=[PROJECT-NAME]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
${PROJECT}@appspot.gserviceaccount.com \
--member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
--role=roles/iam.serviceAccountUser \
--project=${PROJECT}
gcloud iam service-accounts add-iam-policy-binding \
${PROJECT}@[INSERT_YOUR_IAM_OWNER_SERVICE_ACCOUNT_NAME].iam.gserviceaccount.com \
--member="serviceAccount:service-${NUM}@gcf-admin-robot.iam.gserviceaccount.com" \
--role='roles/iam.serviceAccountUser'
Also, I added the "Cloud Functions Developer" role to my @cloudbuild.gserviceaccount.com account via the IAM Console.
According to Cloud Build documentation, for Cloud Functions you have to grant the "Project Editor" role to your service account.
But, Cloud Functions documentation states that alternatively to using the Project Editor role, you can use "the Cloud Functions Developer role [but you have to] ensure that you have granted the Service Account User role". Regarding Service Accounts, it indicates to have "the CloudFunctions.ServiceAgent role on your project" and to "have permissions for trigger sources, such as Pub/Sub or the Cloud Storage bucket triggering your function".
Due to those considerations, my understanding is that the documentation omitted to specify all the roles your service account would need and went directly to indicate to grant the Project Editor role.
You have to update the service account permissions on the Cloud Build settings page.
Here are the instructions: https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-cloud-run#fully-managed
You just have to set the status of the Cloud Run Admin role to ENABLED on that page.
Start your Cloud Build with auth:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
args: ['auth', 'activate-service-account', 'xoxox@xoxo-dev.iam.gserviceaccount.com', '--key-file=account.json', '--project=rabbito-dev']
and then simply add your code deployment step for Cloud Functions:
- name: 'gcr.io/cloud-builders/gcloud'
args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
Please add 'Cloud Functions Service Agent' role to your service account alongside 'Cloud Functions Developer'.
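Pulling the answers above together, the grants can be scripted. A sketch combining the bindings discussed (role and account forms are taken from the answers; verify them against your own project's IAM):

```shell
# Sketch: grant the Cloud Build service account what it needs to
# deploy Cloud Functions in a project.
grant_cloudbuild_deploy_roles() {
  local project=$1
  local num
  num=$(gcloud projects describe "$project" --format='value(projectNumber)')
  # Let Cloud Build deploy functions
  gcloud projects add-iam-policy-binding "$project" \
    --member="serviceAccount:${num}@cloudbuild.gserviceaccount.com" \
    --role=roles/cloudfunctions.developer
  # Let Cloud Build act as the App Engine default service account
  gcloud iam service-accounts add-iam-policy-binding \
    "${project}@appspot.gserviceaccount.com" \
    --member="serviceAccount:${num}@cloudbuild.gserviceaccount.com" \
    --role=roles/iam.serviceAccountUser \
    --project="$project"
}
```

Usage: `grant_cloudbuild_deploy_roles my-project-id`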

How to add application packages to Azure Batch task from Azure CLI?

I am trying to write a bash command line script that will create an azure batch task with an application package. The package is called "testpackage" and exists and is activated on the batch account. However, every time I create this task, I get the following error code: BlobAccessDenied.
This only occurs when I include the application-package-references option on the command line. I tried to follow the documentation here, which states the following:
--application-package-references
The space-separated list of IDs specifying the application packages to be installed. Space-separated application IDs with optional version in 'id[#version]' format.
I have tried --application-package-references "test", --application-package-references "test[1]", and --application-package-references test[1], all with no luck. Does anyone have an example of doing this properly?
Here is the complete script I am running:
#!/usr/bin/env bash
AZ_BATCH_KEY=myKey
AZ_BATCH_ACCOUNT=myBatchAccount
AZ_BATCH_ENDPOINT=myBatchEndpoint
AZ_BATCH_POOL_ID=myPoolId
AZ_BATCH_JOB_ID=myJobId
AZ_BATCH_TASK_ID=myTaskId
az batch task create \
--task-id $AZ_BATCH_TASK_ID \
--job-id $AZ_BATCH_JOB_ID \
--command-line "/bin/sh -c \"echo HELLO WORLD\"" \
--account-name $AZ_BATCH_ACCOUNT \
--account-key $AZ_BATCH_KEY \
--account-endpoint $AZ_BATCH_ENDPOINT \
--application-package-references testpackage
Ah, the classic "write up a detailed SO question then immediately answer it yourself" conundrum.
All I needed was --application-package-references testpackage#1
Have a good day, world.
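For completeness, a sketch of the fixed invocation: `id#version` pins a package version, and several references can be listed space-separated (package names below are the question's placeholders; the wrapper function is mine):

```shell
# Sketch: create the batch task with a pinned application package.
# testpackage#1 pins version 1; a bare id would use the default version.
create_task_with_package() {
  az batch task create \
    --task-id myTaskId \
    --job-id myJobId \
    --command-line "/bin/sh -c \"echo HELLO WORLD\"" \
    --application-package-references testpackage#1
}
```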

Create a new Google Cloud project using gcloud

As per the documentation at https://cloud.google.com/sdk/gcloud/reference/init, the gcloud init myproject command does not work:
google-cloud> gcloud init myproject
Initialized gcloud directory in [/Users/arungupta/workspaces/google-cloud/myproject/.gcloud].
Cloning [https://source.developers.google.com/p/myproject/r/default] into [default].
Cloning into '/Users/arungupta/workspaces/google-cloud/myproject/default'...
fatal: remote error: Repository not found.
You may need to create a repository for this project using the Source Code tab at https://console.developers.google.com
ERROR: Command '['git', 'clone', 'https://source.developers.google.com/p/myproject/r/default', '/Users/arungupta/workspaces/google-cloud/myproject/default', '--config', 'credential.helper=gcloud.sh']' returned non-zero exit status 128
ERROR: Unable to initialize project [myproject], cleaning up [/Users/arungupta/workspaces/google-cloud/myproject].
ERROR: (gcloud.init) Unable to initialize project [myproject].
Creating a project using gcloud init minecraft-server --project minecraft-server-183 creates the project with the name minecraft-server-183.
The project created this way is then not visible at https://console.developers.google.com/project.
What is the correct gcloud command to create a new project, without going to the console?
It is now possible with the gcloud alpha projects create command.
For more information see: https://cloud.google.com/resource-manager/
Just wanted to complete the circle here.
The Google Cloud CLI tool gcloud supports creating projects without the 'alpha' component from version 147.0.0 (March 15, 2017) onwards.
Official Reference Link: https://cloud.google.com/sdk/gcloud/reference/projects/create
Release Notes for v147.0.0:
https://cloud.google.com/sdk/docs/release-notes#14700_2017-03-15
It is mentioned under the subheading Google Cloud Resource Manager.
For quick reference:
Synopsis
gcloud projects create [PROJECT_ID] [--no-enable-cloud-apis] [--folder=FOLDER_ID] [--labels=[KEY=VALUE,…]] [--name=NAME] [--organization=ORGANIZATION_ID] [--set-as-default] [GCLOUD_WIDE_FLAG …]
Description
Creates a new project with the given project ID. By default, projects are not created under a parent resource. To do so, use either the --organization or --folder flag.
Sample Code
gcloud projects create example-foo-bar-1 --name="Happy project" --labels=type=happy
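After creating the project, you typically also want to link billing and make it the active project. A sketch (the `gcloud billing projects link` command is the current form; older SDKs used `gcloud beta billing`, and the helper name is mine):

```shell
# Sketch: link a newly created project to a billing account and make it
# the active project. The billing account ID is the 0X0X0X-... value
# shown by `gcloud billing accounts list`.
setup_new_project() {
  local project=$1 billing_account=$2
  gcloud billing projects link "$project" --billing-account="$billing_account"
  gcloud config set project "$project"
}
```

Usage: `setup_new_project example-foo-bar-1 0X0X0X-0X0X0X-0X0X0X`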
Here's a script that will create a project that is editable by a user (for many reasons, such as for auditability of service accounts, you might want to create per-user projects):
#!/bin/bash
if [ "$#" -lt 3 ]; then
echo "Usage: ./create_projects.sh billingid project-prefix email1 [email2 [email3 ...]]"
echo " eg: ./create_projects.sh 0X0X0X-0X0X0X-0X0X0X learnml-20170106 somebody@gmail.com someother@gmail.com"
exit
fi
ACCOUNT_ID=$1
shift
PROJECT_PREFIX=$1
shift
EMAILS=$@
gcloud components update
gcloud components install alpha
for EMAIL in $EMAILS; do
PROJECT_ID=$(echo "${PROJECT_PREFIX}-${EMAIL}" | sed 's/@/-/g' | sed 's/\./-/g' | cut -c 1-30)
echo "Creating project $PROJECT_ID for $EMAIL ... "
# Create project
gcloud alpha projects create $PROJECT_ID
# Add user to project
gcloud alpha projects get-iam-policy $PROJECT_ID --format=json > iam.json.orig
cat iam.json.orig | sed s'/"bindings": \[/"bindings": \[ \{"members": \["user:'$EMAIL'"\],"role": "roles\/editor"\},/g' > iam.json.new
gcloud alpha projects set-iam-policy $PROJECT_ID iam.json.new
# Set billing id of project
gcloud alpha billing accounts projects link $PROJECT_ID --account-id=$ACCOUNT_ID
done
Explanation of the script is on medium: https://medium.com/google-cloud/how-to-automate-project-creation-using-gcloud-4e71d9a70047#.t58mss3co and a github link to the above code (I'll update it to remove the alpha when it goes beta/GA, for example) is here: https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/blogs/gcloudprojects/create_projects.sh
Update: as of 10/24/2016, @poolie says the gcloud command mentioned in Stephen's answer is now publicly accessible; I will leave this answer here as I give some other usage suggestions.
I also had this problem, and was extremely discouraged by @Stephan Weinberg's remark, but I noticed when doing gcloud init that it asks where to put a "default" repository, so I looked at that one's config and saw that it's slightly different from what's documented.
Try pushing to https://source.developers.google.com/p/YOUR-PROJECT-NAME/r/default instead; it worked for me!