'gcloud functions deploy' deploys code that cannot listen to Firestore events - google-cloud-functions

When I deploy a small Python function that listens to Firestore events with the gcloud CLI, the deployed function never receives any Firestore events. If I deploy the same code through the web console's inline editor or zip upload, it receives events just fine. The command line doesn't show any errors.
Deploy script
gcloud beta functions deploy print_name \
--runtime python37 \
--service-account <myprojectid>@appspot.gserviceaccount.com \
--verbosity debug \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource projects/<myprojectid>/databases/default/documents/Test/{account}
main.py
def print_name(event, context):
    # Triggered on document creation under Test/{account}; logs the "name" field
    value = event["value"]["fields"]["name"]["stringValue"]
    print("New name: " + str(value))
gcloud --version
Google Cloud SDK 243.0.0
beta 2019.02.22
bq 2.0.43
core 2019.04.19
gsutil 4.38
The document is pretty basic (has a name string field).
Any ideas? I'm curious if the gcloud CLI has a bug.
The inline web UI and zip uploader work great. I've tried multiple variations of this (e.g. removing 'beta', adding and removing different deploy args).
I'd expect the deployed function to actually receive Firestore events.

The "default" in trigger-resource needs parentheses around it.
gcloud beta functions deploy print_name \
--runtime python37 \
--service-account <myprojectid>@appspot.gserviceaccount.com \
--verbosity debug \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource "projects/<myprojectid>/databases/(default)/documents/Test/{account}"

Related

gcloud has a problem parsing the path to my Google Drive

I'm trying to run Google's image example for AI Explanations.
Link: https://colab.sandbox.google.com/github/GoogleCloudPlatform/ml-on-gcp/blob/master/tutorials/explanations/ai-explanations-image.ipynb
I use this code to mount my Google Drive:
from google.colab import drive
drive.mount('/content/gdrive')
Then, I export the model to my Google Drive with
export_path = keras_estimator.export_saved_model(
    '/content/gdrive/My Drive/xai_flower/',
    serving_input_receiver_fn
).decode('utf-8')
But when I run
!gcloud beta ai-platform versions create $VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 1.14 \
--framework TENSORFLOW \
--python-version 3.5 \
--machine-type n1-standard-4 \
--explanation-method integrated-gradients \
--num-integral-steps 25
It will output
/bin/bash: /content/gdrive/My: No such file or directory
ERROR: (gcloud.beta.ai-platform.versions.create) unrecognized arguments: Drive/xai_flower/1576834069
Obviously, gcloud has a problem parsing a path that contains a space.
I have tried renaming "My Drive" to something else, but that doesn't seem to be possible.
Please try escaping the space in the path using a backslash (\), i.e.
export_path = keras_estimator.export_saved_model(
    '/content/gdrive/My\ Drive/xai_flower/',
    serving_input_receiver_fn
).decode('utf-8')
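An alternative sketch (untested here, not from the original answer) is to leave the export path unchanged and instead quote the variable where it is expanded into the shell command, so that the space no longer splits the argument:
# Quoting the expanded variable keeps "My Drive" inside a single --origin argument
!gcloud beta ai-platform versions create $VERSION \
--model $MODEL \
--origin "$export_path" \
--runtime-version 1.14 \
--framework TENSORFLOW \
--python-version 3.5 \
--machine-type n1-standard-4 \
--explanation-method integrated-gradients \
--num-integral-steps 25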

(gcloud.functions.deploy) permission denied when deploying function with specific service account

I'm trying to deploy Google Cloud Functions using a different service account. I have the service account properties saved to a JSON file. I've swapped out the values below to make them easier to read.
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/keys/mynewserviceaccount.json"
gcloud functions deploy MyFunction \
--runtime python37 \
--entry-point MyFunction \
--source src \
--service-account mynewserviceaccount@appspot.gserviceaccount.com \
--verbosity debug \
--stage-bucket staging.projectname.appspot.com \
--trigger-event providers/cloud.firestore/eventTypes/document.write \
--trigger-resource "projects/projectname/databases/(default)/documents/User/{userId}" &
mynewserviceaccount has the following roles. I've tried a few others and haven't had success.
- Cloud Functions Admin
- Cloud Functions Service Agent
- Errors Writer
- Service Account User
- Logs Writer
- Pub/Sub Subscriber
I've also run
gcloud auth activate-service-account mynewserviceaccount@appspot.gserviceaccount.com --key-file "/path/to/keys/mynewserviceaccount.json"
When I run this, I get:
ERROR: (gcloud.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[The caller does not have permission]
When I try to find "gcloud.functions.deploy" in the Roles list, I don't see it. I don't know if this is an issue with documentation or an issue with the code.
The Cloud Functions docs state that if you want to deploy a function with a non-default service account, you have to do an extra step:
You must assign the user the IAM Service Account User role (roles/iam.serviceAccountUser) on the Cloud Functions Runtime service account.
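As a rough sketch of that binding (the member value here is an assumption; in this question the deploying identity and the runtime service account happen to be the same account), it can be granted with:
# Grant the deploying identity the Service Account User role
# on the function's runtime service account
gcloud iam service-accounts add-iam-policy-binding mynewserviceaccount@appspot.gserviceaccount.com \
--member="serviceAccount:mynewserviceaccount@appspot.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"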
If this happens when running the gcloud builds submit command, the most likely reason is that the Cloud Functions Developer role is not enabled for the Cloud Build service account. To fix it in the console:
- Navigate to Cloud Build > Settings
- Enable the Cloud Functions Developer role
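If you prefer the CLI for that step, a rough equivalent (PROJECT_ID and PROJECT_NUMBER are placeholders) is to grant the role to the Cloud Build service account directly:
# PROJECT_NUMBER@cloudbuild.gserviceaccount.com is the Cloud Build service account
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
--role="roles/cloudfunctions.developer"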

Setting up Microcks in OpenShift

I am trying to set up Microcks in OpenShift.
I am just using the free OpenShift Online Starter at https://console.starter-us-west-2.openshift.com/console/catalog
At http://microcks.github.io/installing/openshift/ the command is given as below:
oc new-app --template=microcks-persistent \
--param=APP_ROUTE_HOSTNAME=microcks-microcks.192.168.99.100.nip.io \
--param=KEYCLOAK_ROUTE_HOSTNAME=keycloak-microcks.192.168.99.100.nip.io \
--param=OPENSHIFT_MASTER=https://192.168.99.100:8443 \
--param=OPENSHIFT_OAUTH_CLIENT_NAME=microcks-client
In that, how can I find the route for my project? My project is called testcoolers.
So what goes in place of microcks-microcks.192.168.99.100.nip.io? I guess something should replace 192.168.99.100.nip.io.
Same for the Keycloak hostname. And what should the public OpenShift master address be? It is currently https://192.168.99.100:8443.
Installing Microcks appears to assume some level of OpenShift familiarity. Also, there are several restrictions that make this not an ideal install for OpenShift Online Starter, but it can definitely still be made to work.
# Create the template within your namespace
oc create -f https://raw.githubusercontent.com/microcks/microcks/master/install/openshift/openshift-persistent-full-template-https.yml
# Deploy the application from the template, be sure to replace <NAMESPACE> with your proper namespace
oc new-app --template=microcks-persistent-https \
--param=APP_ROUTE_HOSTNAME=microcks-<NAMESPACE>.7e14.starter-us-west-2.openshiftapps.com \
--param=KEYCLOAK_ROUTE_HOSTNAME=keycloak-<NAMESPACE>.7e14.starter-us-west-2.openshiftapps.com \
--param=OPENSHIFT_MASTER=https://api.starter-us-west-2.openshift.com \
--param=OPENSHIFT_OAUTH_CLIENT_NAME=microcks-client \
--param=MONGODB_VOL_SIZE=1Gi \
--param=MEMORY_LIMIT=384Mi \
--param=MONGODB_MEMORY_LIMIT=384Mi
# The ROUTE params above are still necessary for the variables, but in Starter, you can't specify a hostname in a route, so you'll have to manually create the routes
oc create route edge microcks --service=microcks --insecure-policy=Redirect
oc create route edge keycloak --service=microcks-keycloak --insecure-policy=Redirect
You should also see an error about not being able to create the OAuthClient. This is expected because you don't have permission to create it for the whole cluster. You will instead need to manually create a user in Keycloak.
I was able to get this to deploy successfully and log in on OpenShift Online Starter, so use the comments if you struggle at all.
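Once the routes exist, the hostnames that Starter actually assigned (which take the place of the nip.io values from the docs) can be read back with the following (namespace placeholder as above):
# The HOST/PORT column shows the generated route hostnames
oc get routes -n <NAMESPACE>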

How to deploy multiple functions using the gcloud command line?

I want to deploy multiple cloud functions. Here is my index.js:
const { batchMultipleMessage } = require('./gcf-1');
const { batchMultipleMessage2 } = require('./gcf-2');
module.exports = {
  batchMultipleMessage,
  batchMultipleMessage2
};
How can I use gcloud beta functions deploy xxx to deploy these two functions at one time?
Option 1:
For now, I've written a deploy.sh that deploys these two cloud functions:
TOPIC=batch-multiple-messages
FUNCTION_NAME_1=batchMultipleMessage
FUNCTION_NAME_2=batchMultipleMessage2
echo "start to deploy cloud functions\n"
gcloud beta functions deploy ${FUNCTION_NAME_1} --trigger-resource ${TOPIC} --trigger-event google.pubsub.topic.publish
gcloud beta functions deploy ${FUNCTION_NAME_2} --trigger-resource ${TOPIC} --trigger-event google.pubsub.topic.publish
It works, but if the gcloud command line supported deploying multiple cloud functions in one command, that would be the best way.
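In the meantime, the same script can be tightened into a loop, so adding a function only means adding its name to the list (a sketch reusing the exact command above):
#!/bin/bash
TOPIC=batch-multiple-messages
FUNCTIONS="batchMultipleMessage batchMultipleMessage2"
# Deploy each function with the same Pub/Sub trigger
for FUNCTION_NAME in $FUNCTIONS; do
  gcloud beta functions deploy "$FUNCTION_NAME" --trigger-resource "$TOPIC" --trigger-event google.pubsub.topic.publish
done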
Option 2:
https://serverless.com/
If anyone is looking for a better/cleaner/parallel solution, this is what I do:
# deploy.sh
# store deployment command into a string with character % where function name should be
deploy="gcloud functions deploy % --trigger-http"
# find all functions in index.js (looking at exports.<function_name>) using sed
# then pipe the function names to xargs
# then instruct that % should be replaced by each function name
# then open 20 processes where each one runs one deployment command
sed -n 's/exports\.\([a-zA-Z0-9\-_#]*\).*/\1/p' index.js | xargs -I % -P 20 sh -c "$deploy;"
You can also change the number of parallel processes via the -P flag. I chose 20 arbitrarily.
This was super easy and saves a lot of time. Hopefully it will help someone!
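To preview which function names the pipeline will pick up before deploying anything, you can run the sed step from the script on its own:
# Lists the exported function names found in index.js, one per line
sed -n 's/exports\.\([a-zA-Z0-9\-_#]*\).*/\1/p' index.js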

How to add application packages to Azure Batch task from Azure CLI?

I am trying to write a bash command line script that will create an Azure Batch task with an application package. The package is called "testpackage" and exists and is activated on the Batch account. However, every time I create this task, I get the following error code: BlobAccessDenied.
This only occurs when I include the application-package-references option on the command line. I tried to follow the documentation here, which states the following:
--application-package-references
The space-separated list of IDs specifying the application packages to be installed. Space-separated application IDs with optional version in 'id[#version]' format.
I have tried --application-package-references "test", --application-package-references "test[1]", and --application-package-references test[1], all with no luck. Does anyone have an example of doing this properly?
Here is the complete script I am running:
#!/usr/bin/env bash
AZ_BATCH_KEY=myKey
AZ_BATCH_ACCOUNT=myBatchAccount
AZ_BATCH_ENDPOINT=myBatchEndpoint
AZ_BATCH_POOL_ID=myPoolId
AZ_BATCH_JOB_ID=myJobId
AZ_BATCH_TASK_ID=myTaskId
az batch task create \
--task-id $AZ_BATCH_TASK_ID \
--job-id $AZ_BATCH_JOB_ID \
--command-line "/bin/sh -c \"echo HELLO WORLD\"" \
--account-name $AZ_BATCH_ACCOUNT \
--account-key $AZ_BATCH_KEY \
--account-endpoint $AZ_BATCH_ENDPOINT \
--application-package-references testpackage
Ah the classic "write up a detailed SO question then immediately answer it yourself" conundrum.
All I needed was --application-package-references testpackage#1
Have a good day world.
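For reference, this is the original script's task-creation call with the working versioned package reference (same placeholder variables as above):
# Same task creation as before, now with the package pinned to version 1
az batch task create \
--task-id $AZ_BATCH_TASK_ID \
--job-id $AZ_BATCH_JOB_ID \
--command-line "/bin/sh -c \"echo HELLO WORLD\"" \
--account-name $AZ_BATCH_ACCOUNT \
--account-key $AZ_BATCH_KEY \
--account-endpoint $AZ_BATCH_ENDPOINT \
--application-package-references testpackage#1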