As per the documentation at https://cloud.google.com/sdk/gcloud/reference/init, the gcloud init myproject command does not work:
google-cloud> gcloud init myproject
Initialized gcloud directory in [/Users/arungupta/workspaces/google-cloud/myproject/.gcloud].
Cloning [https://source.developers.google.com/p/myproject/r/default] into [default].
Cloning into '/Users/arungupta/workspaces/google-cloud/myproject/default'...
fatal: remote error: Repository not found.
You may need to create a repository for this project using the Source Code tab at https://console.developers.google.com
ERROR: Command '['git', 'clone', 'https://source.developers.google.com/p/myproject/r/default', '/Users/arungupta/workspaces/google-cloud/myproject/default', '--config', 'credential.helper=gcloud.sh']' returned non-zero exit status 128
ERROR: Unable to initialize project [myproject], cleaning up [/Users/arungupta/workspaces/google-cloud/myproject].
ERROR: (gcloud.init) Unable to initialize project [myproject].
Creating a project using gcloud init minecraft-server --project minecraft-server-183 creates the project with the name minecraft-server-183.
The project so created is then not visible at https://console.developers.google.com/project.
What is the correct gcloud command to create a new project, without going to the console?
It is now possible with the gcloud alpha projects create command.
For more information see: https://cloud.google.com/resource-manager/
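For example (the project ID below is just an illustration, not a real project):
gcloud alpha projects create my-sample-project-1234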
Just wanted to complete the circle here.
The Google Cloud CLI tool 'gcloud' supports creating projects without the 'alpha' component installed, from version 147.0.0 (March 15, 2017) onwards.
Official Reference Link: https://cloud.google.com/sdk/gcloud/reference/projects/create
Release Notes for v147.0.0:
https://cloud.google.com/sdk/docs/release-notes#14700_2017-03-15
It is mentioned under the 'Google Cloud Resource Manager' subheading.
For quick reference
Synopsis
gcloud projects create [PROJECT_ID] [--no-enable-cloud-apis] [--folder=FOLDER_ID] [--labels=[KEY=VALUE,…]] [--name=NAME] [--organization=ORGANIZATION_ID] [--set-as-default] [GCLOUD_WIDE_FLAG …]
Description
Creates a new project with the given project ID. By default, projects are not created under a parent resource. To do so, use either the --organization or --folder flag.
Sample Code
gcloud projects create example-foo-bar-1 --name="Happy project" --labels=type=happy
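The optional flags from the synopsis behave as you would expect; for instance (the organization ID is illustrative), creating a project under an organization and making it the active project in one go:
gcloud projects create example-foo-bar-2 --organization=123456789 --set-as-default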
Here's a script that will create a project that is editable by a user (for many reasons, such as for auditability of service accounts, you might want to create per-user projects):
#!/bin/bash
if [ "$#" -lt 3 ]; then
echo "Usage: ./create_projects.sh billingid project-prefix email1 [email2 [email3 ...]]]"
echo " eg: ./create_projects.sh 0X0X0X-0X0X0X-0X0X0X learnml-20170106 somebody#gmail.com someother#gmail.com"
exit
fi
ACCOUNT_ID=$1
shift
PROJECT_PREFIX=$1
shift
EMAILS=$@
gcloud components update
gcloud components install alpha
for EMAIL in $EMAILS; do
PROJECT_ID=$(echo "${PROJECT_PREFIX}-${EMAIL}" | sed 's/@/-/g' | sed 's/\./-/g' | cut -c 1-30)
echo "Creating project $PROJECT_ID for $EMAIL ... "
# Create project
gcloud alpha projects create $PROJECT_ID
# Add user to project
gcloud alpha projects get-iam-policy $PROJECT_ID --format=json > iam.json.orig
cat iam.json.orig | sed s'/"bindings": \[/"bindings": \[ \{"members": \["user:'$EMAIL'"\],"role": "roles\/editor"\},/g' > iam.json.new
gcloud alpha projects set-iam-policy $PROJECT_ID iam.json.new
# Set billing id of project
gcloud alpha billing accounts projects link $PROJECT_ID --account-id=$ACCOUNT_ID
done
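If you don't know the billing account ID to pass as the first argument, listing the billing accounts first should help; depending on your SDK version this lives under the alpha or beta command group:
gcloud beta billing accounts list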
An explanation of the script is on Medium: https://medium.com/google-cloud/how-to-automate-project-creation-using-gcloud-4e71d9a70047#.t58mss3co and a GitHub link to the above code (I'll update it to remove the alpha when it goes beta/GA) is here: https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/blogs/gcloudprojects/create_projects.sh
Update: as of 10/24/2016, @poolie says the gcloud command mentioned in Stephen's answer is now publicly accessible. I will leave this answer here as I give some other usage suggestions.
I also had this problem and was extremely discouraged by @Stephan Weinberg's remark, but I noticed when doing gcloud init that it asks where to put a "default" repository. So I looked at that repository's config and saw that it's slightly different from what's documented.
Try pushing to https://source.developers.google.com/p/YOUR-PROJECT-NAME/r/default instead; it worked for me!
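In other words, something like the following git setup should work (a sketch; YOUR-PROJECT-NAME is a placeholder and the gcloud credential helper must already be configured):
git remote add google https://source.developers.google.com/p/YOUR-PROJECT-NAME/r/default
git push google master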
I am trying to set up Microcks in OpenShift.
I am just using the free OpenShift Online Starter tier at https://console.starter-us-west-2.openshift.com/console/catalog
At http://microcks.github.io/installing/openshift/, the command is given as below:
oc new-app --template=microcks-persistent --param=APP_ROUTE_HOSTNAME=microcks-microcks.192.168.99.100.nip.io --param=KEYCLOAK_ROUTE_HOSTNAME=keycloak-microcks.192.168.99.100.nip.io --param=OPENSHIFT_MASTER=https://192.168.99.100:8443 --param=OPENSHIFT_OAUTH_CLIENT_NAME=microcks-client
In that command, how can I find the route for my project? My project is called testcoolers.
So what should go in place of microcks-microcks.192.168.99.100.nip.io? I guess something has to replace 192.168.99.100.nip.io.
The same question applies to the Keycloak hostname. Also, what will be the public OpenShift master address? It is currently https://192.168.99.100:8443.
Installing Microcks appears to assume some level of OpenShift familiarity. Also, there are several restrictions that make this not an ideal install for OpenShift Online Starter, but it can definitely still be made to work.
# Create the template within your namespace
oc create -f https://raw.githubusercontent.com/microcks/microcks/master/install/openshift/openshift-persistent-full-template-https.yml
# Deploy the application from the template, be sure to replace <NAMESPACE> with your proper namespace
oc new-app --template=microcks-persistent-https \
--param=APP_ROUTE_HOSTNAME=microcks-<NAMESPACE>.7e14.starter-us-west-2.openshiftapps.com \
--param=KEYCLOAK_ROUTE_HOSTNAME=keycloak-<NAMESPACE>.7e14.starter-us-west-2.openshiftapps.com \
--param=OPENSHIFT_MASTER=https://api.starter-us-west-2.openshift.com \
--param=OPENSHIFT_OAUTH_CLIENT_NAME=microcks-client \
--param=MONGODB_VOL_SIZE=1Gi \
--param=MEMORY_LIMIT=384Mi \
--param=MONGODB_MEMORY_LIMIT=384Mi
# The ROUTE params above are still necessary for the variables, but in Starter, you can't specify a hostname in a route, so you'll have to manually create the routes
oc create route edge microcks --service=microcks --insecure-policy=Redirect
oc create route edge keycloak --service=microcks-keycloak --insecure-policy=Redirect
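If you are unsure what to substitute for <NAMESPACE>, or want to see which hostnames the routes were actually assigned, these standard oc commands will show them (nothing Microcks-specific assumed):
# Print the current project/namespace
oc project -q
# List the generated route hostnames
oc get routes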
You should also see an error about not being able to create the OAuthClient. This is expected because you don't have permissions to create this for the whole cluster. You will instead need to manually create a user in KeyCloak.
I was able to get this to successfully deploy and logged in on OpenShift Online Starter, so use the comments if you struggle at all.
I am trying to learn Hyperledger Composer, but it seems to be a relatively new technology; I mean that there are few tutorials and few solutions to a lot of questions, and the tutorial does not mention possible error cases when following the commands, which means there is also no solution for those errors.
I have joined the composer channel in their community chat (it looks like it's running on Discord or something) and asked the same question without a response; I have had a better experience here on SO.
This is the problem: I have deployed my business network, installed it, started it, created my network admin card and imported it. Then, to test that everything is OK, I have to run the command composer network ping --card NAME-OF-MY-ADMIN-CARD
And this error comes up:
juan@JuanDeDios:~/proyectos/inovacion/a3-poliza-microservice$ composer network ping --card admin@a3-policy-microservice
Error: transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#a3-policy-microservice@0.0.1'
Command failed
I think it has something to do with the permissions.acl file, so I gave everyone permission to everything so there would be no restrictions on anyone, and tried again, but it failed.
So I thought I had to uninstall my business network and create it again. I also deleted my .bna and my network card files so everything would be created again, but the same error resulted.
My other attempt was to update the business network, but that didn't work either; the same error happened, and I'm sure I didn't miss any step from the tutorial. I also followed the playground tutorial. What I have not done is create another app with Yeoman, but I will if I don't find a solution to this problem that doesn't require me to create another app.
These were my steps:
1-. Created my app with Yeoman
yo hyperledger-composer:businessnetwork
2-. Selected Apache-2.0 for my license
3-. Created a3-policy-microservice as the name of the business network
4-. Created org.microservice.policy (yeah, I switched the names around, but I'm totally aware of it)
5-. Generated my app with a template selecting the NO option
6-. Created my assets, participants and transactions
7-. Changed my permission rules to mine
8-. I generated the .bna file
composer archive create -t dir -n .
9-. Then installed my bna file
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-microservice@0.0.1.bna
10-. Then started my network and created my networkadmin card
composer network start --networkName a3-policy-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
11-. Imported my card
composer card import --file networkadmin.card
12-. Tried to ping my network
composer network ping --card admin@a3-poliza-microservice
And the error happens.
Later I tried to create everything again, shutting down my Fabric, starting it again, and creating the network from the first step.
My other attempt was to change the permissions and upgrade my .bna network, but that failed too. I'm running out of options.
I hope this description is not too long to ignore. Thanks in advance.
Thanks for the question!
The first possibility is that your network name is a3-policy-network but you're pinging a network called a3-poliza-microservice; check that once you get the correct ACLs in place (currently, the ACL error is the one you're trying to resolve).
The procedure for an upgrade would normally be as follows:
After your step 12 (where you can't ping the business network due to restrictive ACL conditions, assuming you are using the right network name) you would:
Make the changes to your permissions.acl file to include the System ACLs this time, e.g.:
/**
* Sample access control list.
*/
rule SystemACL {
description: "System ACL to permit all access"
participant: "org.hyperledger.composer.system.Participant"
operation: ALL
resource: "org.hyperledger.composer.system.**"
action: ALLOW
}
rule NetworkAdminUser {
description: "Grant business network administrators full access to user resources"
participant: "org.hyperledger.composer.system.NetworkAdmin"
operation: ALL
resource: "**"
action: ALLOW
}
rule NetworkAdminSystem {
description: "Grant business network administrators full access to system resources"
participant: "org.hyperledger.composer.system.NetworkAdmin"
operation: ALL
resource: "org.hyperledger.composer.system.**"
action: ALLOW
}
Update the "version" field in your existing package.json in your Business Network project directory (ie need to change it next increment - eg. update the version property from 0.0.1 to 0.0.2.)
From the same directory, run the following command:
composer archive create --sourceType dir --sourceName . -a a3-policy-network@0.0.2.bna
Now install the new business network code firstly:
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-network@0.0.2.bna
Then perform the requisite upgrade step (single '-' for short form of the parameter):
composer network upgrade -c PeerAdmin@hlfv1 -n a3-policy-network -V 0.0.2
After a few seconds, ping the network again to see ACL changes are now in effect:
composer network ping -c admin@a3-policy-network
I am trying to deploy code from this repo:
https://github.com/anishkny/puppeteer-on-cloud-functions
in Google Cloud Build. My cloudbuild.yaml file contents are:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
I have given the following roles to my Cloud Build service account (****@cloudbuild.gserviceaccount.com):
Cloud Build Service Account
Cloud Functions Developer
Yet, in my Cloud Build log I see the following error:
starting build "1f04522c-fe60-4a25-a4a8-d70e496e2821"
FETCHSOURCE
Fetching storage object: gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047
Copying gs://628906418368.cloudbuild-source.googleusercontent.com/94762cc396ed1bb46e8c5dbfa3fa42550140c2eb-b3cfa476-cb21-45ba-849c-c28423982a0f.tar.gz#1534532794239047...
/ [0 files][ 0.0 B/ 835.0 B]
/ [1 files][ 835.0 B/ 835.0 B]
Operation completed over 1 objects/835.0 B.
tar: Substituting `.' for empty member name
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
ERROR: (gcloud.beta.functions.deploy) ResponseError: status=[403], code=[Forbidden], message=[The caller does not have permission]
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/gcloud" failed: exit status 1
What am I missing?
It would appear that the permissions changed when (perhaps) Cloud Functions went GA. Another customer raised this issue today and I recalled your question.
The Cloud Build robot (${NUM}@cloudbuild.gserviceaccount.com) additionally needs to be a serviceAccountUser of the ${PROJECT-ID}@appspot.gserviceaccount.com account:
NB While the Cloud Build robot's local part is the project number (${NUM}), the appspot robot's local part is the project ID (${PROJECT}).
Please try:
PROJECT=[[YOUR-PROJECT-ID]]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
${PROJECT}@appspot.gserviceaccount.com \
--member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
--role=roles/iam.serviceAccountUser \
--project=${PROJECT}
Let me know!
I struggled with this too after reading quite a bit of documentation. A combination of the above answers got me on the right track. Basically, something like the following is needed:
PROJECT=[PROJECT-NAME]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud iam service-accounts add-iam-policy-binding \
${PROJECT}@appspot.gserviceaccount.com \
--member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
--role=roles/iam.serviceAccountUser \
--project=${PROJECT}
gcloud iam service-accounts add-iam-policy-binding \
${PROJECT}@[INSERT_YOUR_IAM_OWNER_SERVICE_ACCOUNT_NAME].iam.gserviceaccount.com \
--member="serviceAccount:service-${NUM}@gcf-admin-robot.iam.gserviceaccount.com" \
--role='roles/iam.serviceAccountUser'
Also, I added the "Cloud Functions Developer" role to my @cloudbuild.gserviceaccount.com account via the IAM console.
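To double-check which roles the Cloud Build account actually ended up with, a query along these lines should work (a sketch; the filter string is an assumption based on the standard cloudbuild.gserviceaccount.com suffix):
PROJECT=[PROJECT-NAME]
gcloud projects get-iam-policy ${PROJECT} \
--flatten='bindings[].members' \
--filter='bindings.members:cloudbuild.gserviceaccount.com' \
--format='table(bindings.role)'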
According to Cloud Build documentation, for Cloud Functions you have to grant the "Project Editor" role to your service account.
But, Cloud Functions documentation states that alternatively to using the Project Editor role, you can use "the Cloud Functions Developer role [but you have to] ensure that you have granted the Service Account User role". Regarding Service Accounts, it indicates to have "the CloudFunctions.ServiceAgent role on your project" and to "have permissions for trigger sources, such as Pub/Sub or the Cloud Storage bucket triggering your function".
Given those considerations, my understanding is that the documentation omits specifying all the individual roles your service account would need and instead goes directly to indicating the broader Project Editor role.
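If you would rather grant the narrower role from the command line than through the console, a sketch like the following should do it (the project ID is a placeholder; roles/cloudfunctions.developer is the Cloud Functions Developer role):
PROJECT=[[YOUR-PROJECT-ID]]
NUM=$(gcloud projects describe $PROJECT --format='value(projectNumber)')
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${NUM}@cloudbuild.gserviceaccount.com \
--role=roles/cloudfunctions.developer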
You have to update the service account permissions on the Cloud Build settings page.
Instructions are here: https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-cloud-run#fully-managed
You just have to set the status of the Cloud Run Admin role to ENABLED on that page.
Start your Cloud Build with auth:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['auth', 'activate-service-account', 'xoxox@xoxo-dev.iam.gserviceaccount.com', '--key-file=account.json', '--project=rabbito-dev']
and then simply add your Cloud Functions deployment step:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['beta', 'functions', 'deploy', 'screenshot', '--trigger-http', '--runtime', 'nodejs8', '--memory', '1024MB']
Please add 'Cloud Functions Service Agent' role to your service account alongside 'Cloud Functions Developer'.
I am trying to follow this tutorial https://tensorflow.github.io/serving/serving_inception
But I see this
$ gcloud container clusters create inception-serving-cluster --num-nodes 5
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Required "container.clusters.create" permission for "projects/tensorflow-serving".
I did not see an option to add permissions to the project anywhere. How do I do this using the CLI or the UI?
EDIT:
I do have the project created
EDIT:
Just saw that it works fine from the cloud shell
Update: Your project's name is tensorflow-serving-1360, so you should be running gcloud container clusters create inception-serving-cluster --num-nodes 5 --project=tensorflow-serving-1360.
The project tensorflow-serving is not owned by you. It is the example project name used in the linked tutorial, but you need to replace it with the name of your own project as described in the line at the beginning of Part 2:
Here we assume you have created and logged in a gcloud project named
tensorflow-serving
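If you are not sure what your real project ID is, the standard gcloud commands below will show it (nothing project-specific assumed here):
gcloud projects list
gcloud config get-value project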
(Tested on 2019.04.07)
Firstly, check the list of auth accounts:
gcloud auth list
Next set the active account:
gcloud config set account <email_address_from_above_output>
Then, specify the parameters for the create cluster command:
gcloud container clusters create <cluster_name> --num-nodes=2 --project=<PROJECT_ID>
e.g.
gcloud container clusters create prod-myapp-cluster --num-nodes=2 --project=myapp-20394823094
Expected output:
kubeconfig entry generated for prod-myapp-cluster.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
prod-myapp-cluster asia-south1-a 1.11.7-gke.12 35.5xx.2xx.1xx n1-standard-1 1.11.7-gke.12 2 RUNNING
Get your project name, or create a project if you have not created one already, at console.cloud.google.com
Enable the Kubernetes Engine API on the console (or from the CLI, as shown after these steps)
Run this command in your command prompt:
gcloud container clusters create bd-serving-cluster --num-nodes 5 --project=tensorflow-serving-264611 \
--zone=us-central1-f
Replace 'bd' with the name of your serving cluster and 'tensorflow-serving-264611' with the project name you created in step 1. You can choose your preferred zone or use the default 'us-central1-f'.
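As an alternative to the console in step 2, the Kubernetes Engine API can usually be enabled from the CLI as well; a sketch, assuming your account has permission to enable services on the project:
gcloud services enable container.googleapis.com --project=tensorflow-serving-264611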
Creating an instance using gcloud does not seem to work:
google-cloud> gcloud compute instances create minecraft-instance --image ubuntu-14-10 --tags minecraft
NAME ZONE MACHINE_TYPE INTERNAL_IP EXTERNAL_IP STATUS
ERROR: (gcloud.compute.instances.create) Unable to fetch a list of zones. Specifying [--zone] may fix this issue:
- Project marked for deletion.
Adding the zone name fails differently:
google-cloud> gcloud compute instances create minecraft-instance --image ubuntu-14-10 --zone us-central1-a --tags minecraft
NAME ZONE MACHINE_TYPE INTERNAL_IP EXTERNAL_IP STATUS
ERROR: (gcloud.compute.instances.create) Failed to find image for alias [ubuntu-14-10] in public image project [ubuntu-os-cloud].
- Project marked for deletion.
Providing a different image name fails too:
google-cloud> gcloud compute instances create minecraft-instance --image ubuntu-1410-utopic --zone us-central1-a --tags minecraft
NAME ZONE MACHINE_TYPE INTERNAL_IP EXTERNAL_IP STATUS
ERROR: (gcloud.compute.instances.create) Could not fetch image resource:
- Project marked for deletion.
What is the exact command to create an instance using gcloud?
Did you authenticate before and set the default project?
gcloud auth login
gcloud config set project PROJECT
The base setup of gcloud is in the Google Cloud documentation.
Or did you delete your project?
Project marked for deletion.
You have several things going on, one of which is reading the docs:
https://cloud.google.com/compute/docs/gcloud-compute/#creating
Your syntax should be:
gcloud compute instances create minecraftinstance \
--image ubuntu-14-10 \
--zone [SOME-ZONE-ID] \
--machine-type [SOME-MACHINE-TYPE]
Where SOME-ZONE-ID is a geographic zone to create the instance in, found by running:
gcloud compute zones list
SOME-MACHINE-TYPE is the machine type to create. Valid types are found by running:
gcloud compute machine-types list
But specifically, you seem to be creating an instance in a Project that has been deleted:
- Project marked for deletion.
Also, you need to authenticate and set a default project:
gcloud auth login
and
gcloud config set project [ID]
Billable resources cannot be created for projects that have been flagged for deletion. For a project to be deletable, billing must be disabled first, and so instances cannot be created. As for the error messages, it seems the gcloud command is not handling this situation correctly and is replying with bogus error codes instead.
The only compulsory arguments to gcloud compute instances create are the name, the zone and the project. A valid working project must be set either by using --project PROJECT flag to gcloud commands, or by using gcloud config set project PROJECT before. Similarly, to choose the zone you can either use the --zone ZONE flag or the gcloud config set compute/zone ZONE command before.
Enabling billing on your current project and undeleting it will work too. To figure out which project and zone the gcloud command is running in by default, use this:
gcloud config list
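For completeness, undeleting the project and re-linking billing can usually be done from gcloud too; a sketch, assuming the project is still within the undelete window and your SDK has the beta billing commands (the billing account ID is a placeholder):
gcloud projects undelete PROJECT_ID
gcloud beta billing projects link PROJECT_ID --billing-account=0X0X0X-0X0X0X-0X0X0X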
In my case I had to specify --image-project; that got me going:
gcloud compute instances create core --image ubuntu-1604-xenial-v20180126 --machine-type f1-micro --zone us-east4-a --image-project ubuntu-os-cloud
In my case, I created a managed instance group using an instance template:
gcloud compute instance-groups managed create nginx-group \
--base-instance-name nginx \
--size 2 \
--template nginx-template \
--target-pool nginx-pool \
--zone us-central1-c
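For context, the instance template and target pool referenced above must already exist; a rough sketch of creating them (the names, machine type and image are illustrative, not taken from the question):
gcloud compute instance-templates create nginx-template \
--machine-type f1-micro \
--image-family debian-9 \
--image-project debian-cloud \
--tags http-server
gcloud compute target-pools create nginx-pool --region us-central1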
You have to specify --image-project and --image-family.
Refer to https://cloud.google.com/compute/docs/images#os-compute-support.
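For example, to find a valid image family and then create the instance from it; a sketch using the standard Ubuntu image project (family names change over time, so check the list first):
gcloud compute images list --filter='family:ubuntu'
gcloud compute instances create minecraft-instance \
--image-family ubuntu-1604-lts \
--image-project ubuntu-os-cloud \
--zone us-central1-a \
--tags minecraft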