Get a list of quota usage/limit of my project using gcloud command line - google-compute-engine

Can anyone show me how to get a list of used quota per project in GCE cloud?
I can only get this list from the console: console.cloud.google.com/iam-admin/quotas?project=my-project&location=us-east1, but I don't know how to list it using the gcloud command line.

Run the following command to check project-wide quotas. Replace myproject with your own project ID:
gcloud compute project-info describe --project myproject
Official reference here
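If you only want the quota figures rather than the whole project description, a --format projection can trim the output. This is just a sketch and assumes your gcloud version supports flattened() projections:
gcloud compute project-info describe --project myproject --format="flattened(quotas[])"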

To verify the capacity used vs available quota, you can run the following command.
$ gcloud compute project-info describe --project myproject
Alternatively, you can use the Compute Engine API to list some quotas and their limits, but quotas such as Persistent Disk or Local SSD are regional, and neither the command line above nor the project-level API call lists per-region quotas. To retrieve information about regional quotas, you have to run:
$ gcloud compute regions describe example-region

I managed to get all quotas per region with this command:
gcloud compute regions list --project=production --format=json
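To turn that into per-region usage and limit figures in one pass, a small shell loop over the two commands above is an option. This is only a sketch: production is the project ID from the example, and the flattened() projection is an assumption about your gcloud version:
# print the quota block of every region in the project
for region in $(gcloud compute regions list --project=production --format="value(name)"); do
  echo "=== ${region} ==="
  gcloud compute regions describe "${region}" --project=production --format="flattened(quotas[])"
done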

Related

Openshift - API to get ARTIFACT_URL parameter of a pod or the version of its deployed app

What I want to do is make a web app that lists, in one single view, the version of every application deployed in our Openshift (a quick view of versions). At the moment, the only way I have seen to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view, which is why I ask for that parameter; but if there's another way to get a pod and the version of its currently deployed app, I'm also open to that option as long as I can get it through an API. Maybe I'd eventually also need an endpoint that retrieves the list of the current pods.
I've looked into the Openshift API and the only thing I've found that may help me is this GET, but if the parameter :id is what I think it is, it changes with every deploy, so I would need to modify it constantly and that's not practical. Obviously, I'd also need an endpoint to get the list of IDs, or whatever lets me identify the pod, when I ask for the ARTIFACT_URL.
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
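The same command also works for other objects that carry a pod template, for example a deployment config (dc/myapp is a hypothetical name):
$ oc env dc/myapp --list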
I suggest redesigning builds and deployments if you don't have persistent app versioning information outside of Openshift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments, should be fully automated.
Within Openshift you can achieve full automation with Webhook Triggers in your Build Configs and Image Change Triggers in your Deployment Configs.
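On newer oc releases these triggers can also be wired up from the command line with oc set triggers; a rough sketch, where the bc/dc names and the image stream tag are placeholders:
oc set triggers bc/myapp --from-github
oc set triggers dc/myapp --from-image=myproject/myapp:latest -c myapp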
Outside of Openshift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either simply with oc get pods or with a second call to oc describe:
oc describe pods | grep "Name:        "
(notice the 8 spaces needed to filter out other Names:)
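If you want the pod name and the variable's value paired up in one pass, a go-template output is another option. This is only a sketch and assumes ARTIFACT_URL is set directly in the container spec rather than injected from a ConfigMap or Secret:
oc get pods -o go-template='{{range .items}}{{.metadata.name}}{{"\t"}}{{range .spec.containers}}{{if .env}}{{range .env}}{{if eq .name "ARTIFACT_URL"}}{{.value}}{{end}}{{end}}{{end}}{{end}}{{"\n"}}{{end}}'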

Find image path to GCE image

When configuring a GitLab runner, I can tell it which image to use for the instances it spawns on GCP via the google-machine-image keyword.
How do I find out what the path to a GCE image is?
You can find the URIs running the following command:
$ gcloud compute images list --uri
Here is more information about it.
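If the list is long, you can narrow it down with a filter, or resolve a public image family straight to its URI. The debian-9 family and the debian-cloud project below are only example values:
$ gcloud compute images list --uri --filter="family:debian-9"
$ gcloud compute images describe-from-family debian-9 --project debian-cloud --format="value(selfLink)"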

Google Compute Engine VM instance access permission

I ran into some issues when dealing with Google VM instances.
I used this command to create an instance at the beginning:
gcloud compute instances create <instance name> \
--scopes storage-rw,bigquery,compute-rw \
--image-family container-vm \
--image-project google-containers \
--zone us-central1-b \
--machine-type n1-standard-1
The instance is working well, and I can do a lot of things on the instance.
However, there are several other users in the project, and every user in the same project can SSH into my instance and see my work. They also have sudo permission, which means they can change my settings, documents and so on. It is not secure.
Is there a method to set up the instance to be personal instead of public to the project? In this case, everyone in the project can have his/her own VM, and no one else can access it except himself/herself.
If you want to allow access to a VM only for particular users, and not for all project-level SSH keys, you must block the propagation of project-level keys to the instance. That way only the specific keys defined on the instance will have access to it. The details are explained in this article.
The command to deploy such an instance at creation time would look something like this:
gcloud compute --project "myproject" instances create "myinstance" \
    --zone "us-central1-f" --machine-type "n1-standard-1" --network "default" \
    --metadata "block-project-ssh-keys=true,ssh-keys=MYPUBLICKEYVALUE" \
    --maintenance-policy "MIGRATE" \
    --scopes default="https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly" \
    --image "/debian-cloud/debian-8-jessie-v20160923" \
    --boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "myinstance"
Note that if you have defined a project member as Owner/Editor, their key will still get automatically transferred to the instance when they SSH using gcloud or the Developer Console. This behaviour makes sense, since their permissions at the project level allow them to even delete the VM.
If your instance is already created, you must set the block-project-ssh-keys metadata value to TRUE and delete any undesired keys in the VM, as explained in the same article.
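For an instance that already exists, something along these lines should flip that metadata value (instance name and zone are placeholders):
gcloud compute instances add-metadata myinstance --zone "us-central1-f" --metadata block-project-ssh-keys=TRUE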

Simply uploading a file to google compute

I want to upload a file to the disk attached to my google compute vm from my local machine.
abhigenie92_gmail_com@instance-1:~$ pwd
/home/abhigenie92_gmail_com
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: ./
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo: /home/abhigenie92_gmail_com
ERROR: (gcloud.compute.copy-files) All sources must be
edit2: I get the following error now:
RE: edit2
Since gcloud's copy-files is a custom implementation of scp, you need to specify the complete path on your VM where you want to copy the files to. The general form is:
LOCAL-FILE-PATH> gcloud compute copy-files [FILENAMES] [VM-NAME]:[FULL-REMOTE-PATH]
In your specific example:
C:\Users\sony\Desktop> gcloud compute copy-files copy.nlogo instance-1:/home/abhigenie92_gmail_com/
This command will then place the file(s) into your user's home directory. Just make sure the remote path exists and that your user has write rights to the destination.
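Note that on newer gcloud releases copy-files has been deprecated in favor of gcloud compute scp; assuming the same file and destination as above, the equivalent would be roughly:
C:\Users\sony\Desktop> gcloud compute scp copy.nlogo instance-1:/home/abhigenie92_gmail_com/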
From the looks of what you posted, you're trying to copy things from your local machine to a cloud instance from inside the instance. I'm afraid you can't do that.
I take it you have already installed the gcloud command-line tool? If not, install it on your local machine (follow the link), open the Windows command line and type gcloud auth login to authenticate; then you should be able to do what you want with the following command:
gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28\ Runge\ kutta\ 2nd\ order\ try.nlogo <VM Name>:~/
Note that I have escaped the spaces in your filename - it's a good idea to get out of the habit of spaces in filenames - and made a couple of assumptions:
Your VM is running linux
You are okay with copying up to your home directory on the VM
If any of these assumptions is incorrect, you may have problems. To copy somewhere else, change the path in the <VM Name>:~/ part
Edit: I mangled a file extension in the original, fixed now!

gsutil not working in GCE

So when I bring up a GCE instance using the standard debian 7 image, and issue a "gsutil config" command, it fails with the following message:
jcortez#master:~$ gsutil config
Failure: No handler was ready to authenticate. 4 handlers were checked. ['ComputeAuth', 'OAuth2Auth', 'OAuth2ServiceAccountAuth', 'HmacAuthV1Handler'] Check your credentials.
I've tried it on the debian 6 and centos instances and had the same results. Issuing "gcutil config" works fine, however. I gather I need to set up my ~/.boto file, but I'm not sure what to put in it.
What am I doing wrong?
Using service account scopes as E. Anderson mentions is the recommended way to use gsutil on Compute Engine, so the images are configured to get OAuth access tokens from the metadata server in /etc/boto.cfg:
[GoogleCompute]
service_account = default
If you want to manage gsutil config yourself, rename /etc/boto.cfg, and gsutil config should work:
$ sudo mv /etc/boto.cfg /etc/boto.cfg.orig
$ gsutil config
This script will create a boto config file at
/home/<...snipped...>/.boto
containing your credentials, based on your responses to the following questions.
<...snip...>
Are you trying to use a service account to have access to Cloud Storage without needing to enter credentials?
It sounds like gsutil is searching for an OAuth access token with the appropriate scopes and is not finding one. You can ensure that your VM has access to Google Cloud Storage by requesting the storage-rw or storage-full permission when starting your VM via gcutil, or by selecting the appropriate privileges under "Project Access" on the UI console. For gcutil, something like the following should work:
> gcutil addinstance worker-1 \
> --service_account_scopes=https://www.googleapis.com/auth/devstorage.read_write,https://www.googleapis.com/auth/compute.readonly
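With the newer gcloud tooling the same thing at instance creation time would look roughly like this (instance name and zone are placeholders; the scope aliases match the ones used elsewhere on this page):
gcloud compute instances create worker-1 \
    --scopes storage-rw,compute-ro \
    --zone us-central1-b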
When you configured your GCE instance, did you set it up with a service account? Older versions of gsutil got confused when you attempted to run gsutil config while you already had service account credentials configured.
If you already have a service account configured you shouldn't need to run gsutil config - you should be able to simply run gsutil ls, cp, etc. (it will use credentials located elsewhere than your ~/.boto file).
If you really do want to run gsutil config (e.g., to set up credentials associated with your login identity, rather than service account credentials), you could try downloading the current gsutil from http://storage.googleapis.com/pub/gsutil.tar.gz, unpacking it, and running that copy of gsutil. Note that if you do this, the personal credentials you create by running gsutil config will essentially "hide" your service account credentials (i.e., you would need to move your .boto file aside if you ever want to use your service account credentials again).
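A minimal sketch of that download-and-run approach on a Linux VM, assuming wget and tar are available and that the tarball unpacks into a gsutil/ directory:
wget http://storage.googleapis.com/pub/gsutil.tar.gz
tar xzf gsutil.tar.gz
./gsutil/gsutil config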
Mike Schwartz, Google Cloud Storage team
FYI I'm working on some changes to gsutil now that will handle the problem you encountered more smoothly. That version should be out within the next week or two.
Mike