Running a Daml script on `daml start`

I have a Daml file with just one script:
module User where

import Daml.Script

makeAdmin =
  script do
    admin <- allocateParty "Admin"
    submit admin do
      createCmd User with username = admin

template User with
    username: Party
  where
    signatory username
    key username: Party
    maintainer key
    ...
Is there a way to execute the script any time I run daml start?

Yes! This is explained in the documentation for Daml Script, specifically:
You can use Daml script to initialize a ledger on startup. To do so, specify an init-script: ScriptExample:initializeFixed field in your daml.yaml. This will automatically be picked up by daml start and used to initialize sandbox.
In your case, that would be:
init-script: User:makeAdmin
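For context, a minimal daml.yaml sketch with that field added might look like the following; everything except the init-script line (sdk-version, name, version, parties, and so on) is a placeholder for whatever your project already uses:
sdk-version: 1.18.0
name: my-app
source: daml
version: 1.0.0
dependencies:
  - daml-prim
  - daml-stdlib
  - daml-script
init-script: User:makeAdmin
parties:
  - Admin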

Related

Azure CLI: az cdn endpoint purge | Not working with Service Principal

My Service Principal has the following two roles on the whole Resource Group Playground
CDN Endpoint Contributor
CDN Profile Contributor
I am trying to run the following commands
az login --service-principal --username="ca85199a-7e86-40eb-b6c8-a774a9edc010" --password="<pwd>" --tenant="<tenant-id>"
az cdn endpoint purge -n mopar --profile-name mopar-poc --content-paths "/*" --resource-group Playgroud --no-wait
I am getting the following error.
AuthorizationFailed: The client 'acd5dfea-f69a-4178-812c-4204963c6959' with object id 'acd5dfea-f69a-4178-812c-4204963c6959' does not have authorization to perform action 'Microsoft.Cdn/profiles/endpoints/purge/action' over scope '/subscriptions/b19669be-bfa2-4e86-b7d4-f1b4d98dd2a5/resourceGroups/Playgroud/providers/Microsoft.Cdn/profiles/mopar-poc/endpoints/mopar' or the scope is invalid. If access was recently granted, please refresh your credentials.
What am I missing here?
The two roles are enough; the command works fine on my side. Please follow the steps below to troubleshoot.
1. Double-check the RBAC roles in the Azure portal and make sure the correct service principal has the correct role at the correct scope.
2. Run az account clear first, then log in again to make sure you are using the correct service principal.
3. Make sure you are logged in to the correct subscription; use az account set --subscription <subscription-id> after login to set it, for example with the sequence below.
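A minimal sketch of that re-login sequence, with the app ID, password, tenant, and subscription values as placeholders:
az account clear
az login --service-principal --username "<app-id>" --password "<pwd>" --tenant "<tenant-id>"
az account set --subscription "<subscription-id>"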
The error was misleading.
Turns out I had misspelled Playground in the command below.
az cdn endpoint purge -n mopar --profile-name mopar-poc --content-paths "/*" --resource-group Playgroud --no-wait

GCE API Required 'compute.zones.list' permission for 'projects/someproject'

I have created a project in GCP, then created a service account with the Compute Admin role, and then enabled the "Compute Engine API" for the project.
But I can't work with instances:
$ gcloud compute instances list
ERROR: (gcloud.compute.instances.list) Some requests did not succeed: - Required 'compute.zones.list'
permission for 'projects/someproject'
What am I doing wrong?
Edit: from the service account:
ERROR: (gcloud.projects.get-iam-policy) User
[cloud66@project_id.iam.gserviceaccount.com] does not have permission
to access project [project_id:getIamPolicy] (or it may not exist): The
caller does not have permission
When I switch to my main Google account:
$ gcloud projects get-iam-policy project_id
bindings:
- members:
  - serviceAccount:cloud66@project_id.iam.gserviceaccount.com
  role: roles/compute.admin
- members:
  - serviceAccount:service-855312803173@compute-system.iam.gserviceaccount.com
  role: roles/compute.serviceAgent
- members:
  - serviceAccount:855312803173-compute@developer.gserviceaccount.com
  - serviceAccount:855312803173@cloudservices.gserviceaccount.com
  role: roles/editor
- members:
  - user:my_google_user
  role: roles/owner
From Logs view:
2020-01-28 16:46:30.932 EET Compute Engine list zones
cloud66@project_id.iam.gserviceaccount.com PERMISSION_DENIED
code: 7
message: "PERMISSION_DENIED"
Have a look at the documentation first. Usually such an error occurs when your service account doesn't have enough permissions. In this case, you should check the available roles, search for one that contains the required permission (such as compute.zones.list), and add it to your service account as described here. For example, it could be the Compute Instance Admin (v1) role.
EDIT: It looks like the Compute Admin role should work for you. To be sure, check the roles granted in your project with the command:
gcloud projects get-iam-policy YOUR_PROJECT_ID
If you want to use your service account with the Cloud API from an application, have a look at these instructions.
EDIT 2: Try checking your service account and key from a third machine (like a Linux desktop or server), as described in the documentation.
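As a rough sketch of how that check and re-grant could look from another machine (the key-file path and the service-account email are placeholders, and the final command must be run by a project owner):
# Authenticate as the service account from a separate machine using its key file.
gcloud auth activate-service-account --key-file=/path/to/key.json
gcloud auth list
gcloud compute instances list --project YOUR_PROJECT_ID
# If the role is missing, grant Compute Admin to the service account.
gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \
  --member="serviceAccount:cloud66@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/compute.admin"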
I originally created the "cloud66" service account during the Google trial period; most likely, this affected the access rights. I later switched billing from the trial to a paid account. I deleted and recreated the cloud66 service account in the "APIs & Services -> Credentials" section, but an access policy for "cloud66" with the "Compute Admin" role remained in the "IAM" section. When I deleted that access policy from the "IAM" section and recreated the service account, the problem was resolved.

ERROR: (gcloud.beta.functions.deploy) ... message=[The caller does not have permission]

I am trying to deploy code from this repo:
https://github.com/anishkny/puppeteer-on-cloud-functions
in Google Cloud Build. My cloudbuild.yaml file contents are:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
I have given the following roles to my Cloud Build service account (****@cloudbuild.gserviceaccount.com):
Cloud Build Service Account
Cloud Functions Developer
Yet, in my Cloud Build log I see the following error:
starting build "1f04522c-fe60-4a25-a4a8-d70e496e2821"
FETCHSOURCE
Fetching storage object: gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047
Copying gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047...
/ [0 files][ 0.0 B/ 835.0 B]
/ [1 files][ 835.0 B/ 835.0 B]
Operation completed over 1 objects/835.0 B.
tar: Substituting `.' for empty member name
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
ERROR: (gcloud.beta.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[The caller does not have permission]
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/gcloud" failed: exit status 1
What am I missing?
It would appear that the permissions changed when (perhaps) Cloud Functions went GA. Another customer raised this issue today and I recalled your question.
The Cloud Build robot (${NUM}@cloudbuild.gserviceaccount.com) additionally needs to be a serviceAccountUser of the ${PROJECT-ID}@appspot.gserviceaccount.com account:
NB: While the Cloud Build robot's local part is the project number (${NUM}), the appspot robot's local part is the project ID (${PROJECT}).
Please try:
PROJECT=[[YOUR-PROJECT-ID]]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
  ${PROJECT}@appspot.gserviceaccount.com \
  --member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser \
  --project=${PROJECT}
Let me know!
I struggled with this too after reading quite a bit of documentation. A combination of the above answers got me on the right track. Basically, something like the following is needed:
PROJECT=[PROJECT-NAME]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
  ${PROJECT}@appspot.gserviceaccount.com \
  --member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser \
  --project=${PROJECT}
gcloud iam service-accounts add-iam-policy-binding \
  ${PROJECT}@[INSERT_YOUR_IAM_OWNER_SERVICE_ACCOUNT_NAME].iam.gserviceaccount.com \
  --member="serviceAccount:service-${NUM}@gcf-admin-robot.iam.gserviceaccount.com" \
  --role='roles/iam.serviceAccountUser'
Also, I added the "Cloud Functions Developer" role to my @cloudbuild.gserviceaccount.com account via the IAM Console.
According to Cloud Build documentation, for Cloud Functions you have to grant the "Project Editor" role to your service account.
But, Cloud Functions documentation states that alternatively to using the Project Editor role, you can use "the Cloud Functions Developer role [but you have to] ensure that you have granted the Service Account User role". Regarding Service Accounts, it indicates to have "the CloudFunctions.ServiceAgent role on your project" and to "have permissions for trigger sources, such as Pub/Sub or the Cloud Storage bucket triggering your function".
Due to those considerations, my understanding is that the documentation omits the full list of roles your service account would need and goes directly to recommending the Project Editor role.
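As a non-authoritative sketch of that alternative (Cloud Functions Developer plus Service Account User rather than Project Editor), the grants could look roughly like this, reusing the PROJECT/NUM placeholders from the answers above:
PROJECT=[[YOUR-PROJECT-ID]]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
# Let the Cloud Build service account deploy functions ...
gcloud projects add-iam-policy-binding $PROJECT \
  --member="serviceAccount:${NUM}@cloudbuild.gserviceaccount.com" \
  --role="roles/cloudfunctions.developer"
# ... and act as the App Engine default (runtime) service account.
gcloud iam service-accounts add-iam-policy-binding ${PROJECT}@appspot.gserviceaccount.com \
  --member="serviceAccount:${NUM}@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser" \
  --project=$PROJECT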
You have to update the service account permissions on the Cloud Build settings page.
Here are the instructions: https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-cloud-run#fully-managed
You just have to set the status of the Cloud Run Admin role to ENABLED on that page.
Start your Cloud Build with auth:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['auth', 'activate-service-account', 'xoxox@xoxo-dev.iam.gserviceaccount.com', '--key-file=account.json', '--project=rabbito-dev']
and then simply deploy your code to Cloud Functions:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
Please add the 'Cloud Functions Service Agent' role to your service account alongside 'Cloud Functions Developer'.
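If you prefer the CLI over the console for that grant, a sketch (with the same project placeholder as in the answers above):
PROJECT=[[YOUR-PROJECT-ID]]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud projects add-iam-policy-binding $PROJECT \
  --member="serviceAccount:${NUM}@cloudbuild.gserviceaccount.com" \
  --role="roles/cloudfunctions.serviceAgent"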

GCE Service Account with Compute Instance Admin permissions

I have set up a compute instance to run cronjobs on Google Compute Engine, using a service account with the following roles:
Custom Compute Image User + Deletion rights
Compute Admin
Compute Instance Admin (beta)
Kubernetes Engine Developer
Logs Writer
Logs Viewer
Pub/Sub Editor
Source Repository Reader
Storage Admin
Unfortunately, when I ssh into this cronjob runner instance and then run:
sudo gcloud compute --project {REDACTED} instances create e-latest \
--zone {REDACTED} --machine-type n1-highmem-8 --subnet default \
--maintenance-policy TERMINATE \
--scopes https://www.googleapis.com/auth/cloud-platform \
--boot-disk-size 200 \
--boot-disk-type pd-standard --boot-disk-device-name e-latest \
--image {REDACTED} --image-project {REDACTED} \
--service-account NAME_OF_SERVICE_ACCOUNT \
--accelerator type=nvidia-tesla-p100,count=1 --min-cpu-platform Automatic
I get the following error:
The user does not have access to service account {NAME_OF_SERVICE_ACCOUNT}. User: {NAME_OF_SERVICE_ACCOUNT} . Ask a project owner to grant you the iam.serviceAccountUser role on the service account.
Is there some other privilege besides Compute Instance Admin that I need in order to create instances from my instance?
Further notes: (1) when I don't specify --service-account, the error is the same except that the service account my user doesn't have access to is the default '51958873628-compute@developer.gserviceaccount.com'.
(2) adding/removing sudo doesn't change anything
Creating an instance that uses a service account requires that you have the compute.instances.setServiceAccount permission on that service account. To make this work, grant the iam.serviceAccountUser role to your service account (either on the entire project or on the specific service account you want to be able to create instances with).
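A minimal sketch of that grant on the specific service account; the email is a placeholder built from NAME_OF_SERVICE_ACCOUNT and your project ID, and here the member being granted access is the same account the instance runs as:
# Allow the account to be attached to new instances (grant on the target service account).
gcloud iam service-accounts add-iam-policy-binding \
  NAME_OF_SERVICE_ACCOUNT@YOUR_PROJECT_ID.iam.gserviceaccount.com \
  --member="serviceAccount:NAME_OF_SERVICE_ACCOUNT@YOUR_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"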
Find out who you are first:
- If you are using the web UI: what email address did you use to log in?
- If you are using local gcloud or Terraform: find the JSON file that contains your credentials for gcloud (often named similarly to myproject*.json) and see if it contains the email: grep client_email myproject*.json
Then make the GCP IAM change:
1. Go to https://console.cloud.google.com
2. Go to IAM
3. Find your email address
4. Member -> Edit -> Add Another Role -> type in the role name Service Account User -> Add
(You can narrow it down with a Condition, but let's keep it simple for a while.)
Make sure that NAME_OF_SERVICE_ACCOUNT is a service account from the current project.
If you change the project ID but don't change NAME_OF_SERVICE_ACCOUNT, you will encounter this error.
This can be checked in the Google Console -> IAM & Admin -> IAM.
Then look for the service account named ....-compute@developer.gserviceaccount.com and check whether the numbers at the beginning are correct. Each project has different numbers in this service account name.
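A quick way to check those numbers (a sketch, with YOUR_PROJECT_ID as a placeholder): print the project number and compare it with the prefix of the ...-compute@developer.gserviceaccount.com account shown in IAM.
gcloud projects describe YOUR_PROJECT_ID --format='value(projectNumber)'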

How to automatically exit/stop the running instance

I have managed to create an instance and SSH into it. However, I have a couple of questions regarding Google Compute Engine.
I understand that I will be charged for the time my instance is running, that is, until I exit the instance. Is my understanding correct?
I wish to run some batch job (java program) on my instance. How do I make my instance stop automatically after the job is complete (so that I don't get charged for the additional time it may run)
If I start the job and disconnect my PC, will the job continue to run on the instance?
Regards,
Asim
Correct, instances are charged for the time they are running. (to the minute, minimum 10 minutes). Instances run from the time they are started via the API until they are stopped via the API. It doesn't matter if any user is logged in via SSH or not. For most automated use cases users never log in - programs are installed and started via start up scripts.
You can view your running instances via the Cloud Console, to confirm if any are currently running.
If you want to stop your instance from inside the instance, the easiest way is to start the instance with the compute-rw Service Account Scope and use gcutil.
For example, to start your instance from the command line with the compute-rw scope:
$ gcutil --project=<project-id> addinstance <instance name> --service_account_scopes=compute-rw
(this is the default when manually creating an instance via the Cloud Console)
Later, after your batch job completes, you can remove the instance from inside the instance:
$ gcutil deleteinstance -f <instance name>
You can put the halt command at the end of your batch script (assuming that you output your results to a persistent disk).
After halt, the instance will have a state of TERMINATED and you will not be charged.
See https://developers.google.com/compute/docs/pricing and scroll down to "instance uptime".
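A minimal sketch of such a batch script; the jar path and results bucket are hypothetical, and the only part this answer actually prescribes is the trailing halt:
#!/bin/bash
# Run the batch job (placeholder command).
java -jar /opt/jobs/batch-job.jar
# Copy results to durable storage before the VM stops (placeholder bucket).
gsutil cp /tmp/results.csv gs://my-results-bucket/
# Stop the machine so billing stops.
sudo halt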
You can automatically shut down the instance after model training. Just run a few extra lines of code after the training is complete.
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

# Authenticate with the application default credentials available on the instance.
credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)

# Project ID for this request.
project = 'xyz'  # Project ID
# The name of the zone for this request.
zone = 'xyz'  # Zone information
# Name of the instance resource to stop.
instance = 'xyz'  # Instance name

# Issue the stop request; the VM begins shutting down once it is accepted.
request = service.instances().stop(project=project, zone=zone, instance=instance)
response = request.execute()
Add this to your model training script. When the training is complete, the GCP instance automatically shuts down.
More info on official website:
https://cloud.google.com/compute/docs/reference/rest/v1/instances/stop
If you want to stop the instance using the python script, you can follow this way:
from google.cloud.compute_v1.services.instances import InstancesClient

# Build a client authenticated with a service-account key file.
instance_client = InstancesClient.from_service_account_file(<location-path>)

zone = <zone>
project = <project>
instance = <instance_id>

# Send the stop request for the given instance.
instance_client.stop(project=project, instance=instance, zone=zone)
In the above script, I have assumed you are using a service account for authentication. For documentation of the libraries used, you can go here:
https://googleapis.dev/python/compute/latest/compute_v1/instances.html