Oracle Cloud API health check - oracle-cloud-infrastructure

I have the below command for creating an API health check in Oracle Cloud.
oci health-checks http-monitor create --compartment-id ocid1.compartment.oc1..aaaaaaaabbb5aavs3npxp6ttq525qoollwxtrjmp1vh6skthcsitfzpw4sq2rfa --display-name "keepalive-check" --interval-in-seconds 300 --method HEAD --protocol "HTTPS" --timeout-in-seconds 60 --targets "[api.abcglobal.com]" --path "/dev/user-service/warm" --vantage-point-names '["aws-sin"]'
While running this command from the Cloud Shell terminal I get the error below. Any help would be appreciated.
***Parameter 'targets' must be in JSON format.***
- Command
**ocidevelop@cloudshell:~ (ap-hyderabad-1)$** *oci health-checks http-monitor create --compartment-id ocid1.compartment.oc1..aaaaaaaabbb5aavs3npxp6ttq525qoollwxtrjmp1vh6skthcsitfzpw4sq2rfa --display-name "keepalive-check" --interval-in-seconds 300 --method HEAD --protocol "HTTPS" --timeout-in-seconds 60 --targets "[api.abcglobal.com]" --path "/dev/user-service/warm" --vantage-point-names '["aws-sin"]'*
**Parameter 'targets' must be in JSON format.**
For help with formatting JSON input see our documentation here: https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliusing.htm#ManagingCLIInputandOutput

--targets is a complex parameter; its expected JSON structure is described at https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.6.1/oci_cli_docs/cmdref/health-checks/http-monitor/create.html#cmdoption-targets. You can generate a JSON skeleton for it and pass it in as a file.
Please follow this:
oci health-checks http-monitor create --generate-param-json-input targets > target.json
edit target.json
oci health-checks http-monitor create --compartment-id $C --protocol "HTTPS" --display-name "test" --interval-in-seconds "300" --targets file://target.json
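For this monitor, the targets value is just a JSON array of host strings, so target.json can contain simply ["api.abcglobal.com"]. As a sketch, the array can also be passed inline (same flags as the original command, only --targets changes):
oci health-checks http-monitor create --compartment-id ocid1.compartment.oc1..aaaaaaaabbb5aavs3npxp6ttq525qoollwxtrjmp1vh6skthcsitfzpw4sq2rfa --display-name "keepalive-check" --interval-in-seconds 300 --method HEAD --protocol "HTTPS" --timeout-in-seconds 60 --targets '["api.abcglobal.com"]' --path "/dev/user-service/warm" --vantage-point-names '["aws-sin"]'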


Is there an easy way to delete all resources in an Oracle Cloud Infrastructure compartment?

Is there an easy way to delete all resources in a compartment of an Oracle Cloud Infrastructure tenancy, since tracking all resources in a compartment is hard to do manually?
I know we can use Tenancy Explorer, but even with Tenancy Explorer it is hard to do, since:
Tenancy Explorer does not list all resources as of now (stream pools, for example), and
the process is still manual.
You can do that easily with a shell function using the OCI CLI, as follows:
delcmpt(){
  OCI_TENANCY_NAME=<your tenancy name>
  OCI_TENANCY_OCID=<tenancy ocid>
  OCI_CMPT_ID=$1  # OCID of the compartment to be deleted, passed as argument
  OCI_CMPT_NAME=$(oci iam compartment get -c ${OCI_CMPT_ID} | jq -r '.data.name')
  echo "Compartment being deleted is ${OCI_CMPT_NAME} for 4 regions: SJC, PHX, IAD and BOM."
  # list of region codes where the compartment's resources exist
  declare -a region_codes=("SJC" "PHX" "IAD" "BOM")
  for OCI_REGION_CODE in "${region_codes[@]}"
  do
    UNIQUE_STACK_ID=$(date "+DATE_%Y_%m_%d_TIME_%H_%M")
    # create a Resource Manager stack from the compartment's resources in this region
    OCID_CMPT_STACK=$(oci resource-manager stack create-from-compartment --compartment-id ${OCI_TENANCY_OCID} \
      --config-source-compartment-id ${OCI_CMPT_ID} \
      --config-source-region ${OCI_REGION_CODE} --terraform-version "1.0.x" \
      --display-name "Stack_${UNIQUE_STACK_ID}_${OCI_REGION_CODE}" --description "Stack From Compartment ${OCI_CMPT_NAME} for region ${OCI_REGION_CODE}" --wait-for-state SUCCEEDED --query "data.resources[0].identifier" --raw-output)
    echo $OCID_CMPT_STACK
    oci resource-manager job create-destroy-job --execution-plan-strategy 'AUTO_APPROVED' --stack-id ${OCID_CMPT_STACK} --wait-for-state SUCCEEDED --max-wait-seconds 300
    # run the destroy job twice since it sometimes fails on the first pass; it is idempotent
    oci resource-manager job create-destroy-job --execution-plan-strategy 'AUTO_APPROVED' --stack-id ${OCID_CMPT_STACK} --wait-for-state SUCCEEDED --max-wait-seconds 540
    oci resource-manager stack delete --stack-id ${OCID_CMPT_STACK} --force --wait-for-state DELETED
  done
  oci iam compartment delete -c ${OCI_CMPT_ID} --force --wait-for-state SUCCEEDED
}
OCI_CMPT_ID is the OCID of the compartment to be deleted.
OCI_TENANCY_OCID is your tenancy OCID.
Usage:
shell $: delcmpt OCID_for_the_Compartment_to_be_deleted

'gcloud functions deploy' deploys code that cannot listen to Firestore events

When I use the gcloud CLI to deploy a small Python script that listens to Firestore events, the script fails to receive the Firestore events. If I use the web inline editor or web ZIP upload, the script does receive Firestore events. The command line doesn't show any errors.
Deploy script
gcloud beta functions deploy print_name \
--runtime python37 \
--service-account <myprojectid>@appspot.gserviceaccount.com \
--verbosity debug \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource projects/<myprojectid>/databases/default/documents/Test/{account}
main.py
def print_name(event, context):
    value = event["value"]["fields"]["name"]["stringValue"]
    print("New name: " + str(value))
gcloud --version
Google Cloud SDK 243.0.0
beta 2019.02.22
bq 2.0.43
core 2019.04.19
gsutil 4.38
The document is pretty basic (has a name string field).
Any ideas? I'm curious if the gcloud CLI has a bug.
The inline web UI and zip uploader work great. I've tried multiple variations of this (e.g. removing 'beta', adding and removing different deploy args).
I'd expect the script to actually listen to Firestore events.
The "default" in trigger-resource needs parentheses around it.
gcloud beta functions deploy print_name \
--runtime python37 \
--service-account <myprojectid>@appspot.gserviceaccount.com \
--verbosity debug \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource "projects/<myprojectid>/databases/(default)/documents/Test/{account}"

Openshift: How to alert/publish a message if deployment/build fails

In our deployment process it is crucial that we are informed when a deployment fails. The deployment is rolling, but a notification through Slack would be nice anyway. Would this be possible through lifecycle hooks, or what other possibilities exist?
Deployment status is usually logged as event logs by OpenShift.
Do you use the OpenShift logging component (the EFK stack)? Then additionally consider installing EventRouter, which collects OpenShift event logs as the eventrouter pod's logs.
You can pick up the deployment event messages from those logs and trigger the alert with a custom script, your monitoring system's log-tailing feature, and so on.
Refer to Specifying Logging Ansible Variables
for the Ansible variable details:
openshift_logging_install_eventrouter
openshift_logging_eventrouter_nodeselector
openshift_logging_eventrouter_namespace
...
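As an illustration only (the values below are assumptions, so check the variable reference linked above), these variables would typically be set in the Ansible inventory used for the logging playbook:
[OSEv3:vars]
openshift_logging_install_eventrouter=true
openshift_logging_eventrouter_namespace=default
openshift_logging_eventrouter_nodeselector={"node-role.kubernetes.io/infra": "true"}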
You can pass customParams to the deployment process and do a curl if openshift-deploy fails.
"strategy": {
"type": "Rolling",
"timeoutSeconds": 180,
"customParams": {
"command": [
"/bin/sh",
"-c",
"set -e && if ! openshift-deploy; then curl -i -X POST -d '{\"text\": \"Deployment of ${application} failed!\"}' ${webhook} && exit 1; else echo \"Deployment complete\"; fi"
]
}
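Here ${application} and ${webhook} look like template parameters; assuming the DeploymentConfig is part of an OpenShift template (the file name below is hypothetical), they could be supplied when the template is processed:
oc process -f deployment-template.yaml -p application=my-app -p webhook=https://hooks.slack.com/services/XXX | oc apply -f -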

ERROR: (gcloud.beta.functions.deploy) ... message=[The caller does not have permission]

I am trying to deploy code from this repo:
https://github.com/anishkny/puppeteer-on-cloud-functions
in Google Cloud Build. My cloudbuild.yaml file contents are:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
I have given the following roles to my Cloud Build service account (****@cloudbuild.gserviceaccount.com):
Cloud Build Service Account
Cloud Functions Developer
Yet, in my Cloud Build log I see the following error:
starting build "1f04522c-fe60-4a25-a4a8-d70e496e2821"
FETCHSOURCE
Fetching storage object: gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047
Copying gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047...
/ [0 files][ 0.0 B/ 835.0 B]
/ [1 files][ 835.0 B/ 835.0 B]
Operation completed over 1 objects/835.0 B.
tar: Substituting `.' for empty member name
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
ERROR: (gcloud.beta.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[The caller does not have permission]
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/gcloud" failed: exit status 1
What am I missing?
It would appear that the permissions changed when (perhaps) Cloud Functions went GA. Another customer raised this issue today and I recalled your question.
The Cloud Build robot (${NUM}@cloudbuild.gserviceaccount.com) additionally needs to be a serviceAccountUser of the ${PROJECT}@appspot.gserviceaccount.com account:
NB: While the Cloud Build robot's local part is the project number (${NUM}), the appspot robot's local part is the project ID (${PROJECT}).
Please try:
PROJECT=[[YOUR-PROJECT-ID]]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
  ${PROJECT}@appspot.gserviceaccount.com \
  --member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser \
  --project=${PROJECT}
Let me know!
I struggled with this too after reading quite a bit of documentation. A combination of the above answers got me on the right track. Basically, something like the following is needed:
PROJECT=[PROJECT-NAME]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
  ${PROJECT}@appspot.gserviceaccount.com \
  --member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser \
  --project=${PROJECT}
gcloud iam service-accounts add-iam-policy-binding \
  ${PROJECT}@[INSERT_YOUR_IAM_OWNER_SERVICE_ACCOUNT_NAME].iam.gserviceaccount.com \
  --member="serviceAccount:service-${NUM}@gcf-admin-robot.iam.gserviceaccount.com" \
  --role='roles/iam.serviceAccountUser'
Also, I added the "Cloud Functions Developer" role to my @cloudbuild.gserviceaccount.com account via the IAM Console.
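If you prefer the CLI over the console, a rough equivalent of that role grant (reusing the $PROJECT and $NUM variables from the snippet above) would be:
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
  --role=roles/cloudfunctions.developer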
According to Cloud Build documentation, for Cloud Functions you have to grant the "Project Editor" role to your service account.
But, Cloud Functions documentation states that alternatively to using the Project Editor role, you can use "the Cloud Functions Developer role [but you have to] ensure that you have granted the Service Account User role". Regarding Service Accounts, it indicates to have "the CloudFunctions.ServiceAgent role on your project" and to "have permissions for trigger sources, such as Pub/Sub or the Cloud Storage bucket triggering your function".
Given those considerations, my understanding is that the documentation skips listing all the individual roles your service account would need and goes straight to recommending the Project Editor role.
You have to update the service account permissions on the Cloud Build settings page.
Instructions are here: https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-cloud-run#fully-managed
You just have to set the status of the Cloud Run Admin role to ENABLED on that page.
Start your Cloud Build with auth:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['auth', 'activate-service-account', 'xoxox@xoxo-dev.iam.gserviceaccount.com', '--key-file=account.json', '--project=rabbito-dev']
and then simply add your code deployment on Cloud Functions:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
Please add the 'Cloud Functions Service Agent' role to your service account alongside 'Cloud Functions Developer'.

How to add application packages to Azure Batch task from Azure CLI?

I am trying to write a bash command-line script that will create an Azure Batch task with an application package. The package is called "testpackage"; it exists and is activated on the Batch account. However, every time I create this task, I get the following error code: BlobAccessDenied.
This only occurs when I include the application-package-references option on the command line. I tried to follow the documentation here, which states the following:
--application-package-references
The space-separated list of IDs specifying the application packages to be installed. Space-separated application IDs with optional version in 'id[#version]' format.
I have tried --application-package-references "test", --application-package-references "test[1]", and --application-package-references test[1], all with no luck. Does anyone have an example of doing this properly?
Here is the complete script I am running:
#!/usr/bin/env bash
AZ_BATCH_KEY=myKey
AZ_BATCH_ACCOUNT=myBatchAccount
AZ_BATCH_ENDPOINT=myBatchEndpoint
AZ_BATCH_POOL_ID=myPoolId
AZ_BATCH_JOB_ID=myJobId
AZ_BATCH_TASK_ID=myTaskId
az batch task create \
  --task-id $AZ_BATCH_TASK_ID \
  --job-id $AZ_BATCH_JOB_ID \
  --command-line "/bin/sh -c \"echo HELLO WORLD\"" \
  --account-name $AZ_BATCH_ACCOUNT \
  --account-key $AZ_BATCH_KEY \
  --account-endpoint $AZ_BATCH_ENDPOINT \
  --application-package-references testpackage
Ah the classic "write up a detailed SO question then immediately answer it yourself" conundrum.
All I needed was --application-package-references testpackage#1
Have a good day world.
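For completeness, a sketch of the working call (the same script as above with only the last flag changed to include the version):
az batch task create \
  --task-id $AZ_BATCH_TASK_ID \
  --job-id $AZ_BATCH_JOB_ID \
  --command-line "/bin/sh -c \"echo HELLO WORLD\"" \
  --account-name $AZ_BATCH_ACCOUNT \
  --account-key $AZ_BATCH_KEY \
  --account-endpoint $AZ_BATCH_ENDPOINT \
  --application-package-references testpackage#1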