How is one supposed to access the CPU usage and memory usage of all the instances in a given project in Google Cloud Compute?
I'm unable to find anything regarding this in the documentation.
You can use Google Cloud Monitoring to see some usage metrics for your systems, and the Google Cloud Monitoring agent to get more precise metrics like memory. See the GCP metrics documentation for a list of all available compute metrics.
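For example, installing the agent on a Debian/Ubuntu VM looks roughly like the following sketch; the install script has changed name over time (this is the current Ops Agent flow), so check the agent documentation for the exact commands:
# Download and run Google's agent install script (URL assumed current; verify in the docs).
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install
# Once the agent is running, memory metrics appear in Cloud Monitoring alongside the built-in CPU metrics.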
For memory usage on Debian, run
free -m
in the console.
You can see CPU, memory, network, and disk I/O information for your instance group in Google Stackdriver. Stackdriver comes with a separate subscription. You can add charts to monitor your GCP infrastructure in one or more dashboards.
You can find more detail in How to monitor GCP infrastructure using Stackdriver visualization.
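If you prefer to script the dashboard instead of clicking it together, a minimal sketch is below; the JSON layout follows the Monitoring dashboards API as I remember it, so treat the field names and metric filter as assumptions and check the dashboards documentation:
# Hypothetical minimal dashboard definition with a single CPU chart.
cat > cpu-dashboard.json <<'EOF'
{
  "displayName": "Instance CPU",
  "gridLayout": {
    "widgets": [
      {
        "title": "CPU utilization",
        "xyChart": {
          "dataSets": [
            {
              "timeSeriesQuery": {
                "timeSeriesFilter": {
                  "filter": "metric.type=\"compute.googleapis.com/instance/cpu/utilization\""
                }
              }
            }
          ]
        }
      }
    ]
  }
}
EOF
# Create the dashboard from the file.
gcloud monitoring dashboards create --config-from-file=cpu-dashboard.json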
If you are using Linux, you can install gnome-system-monitor, ssh -X into the system, and launch it.
For Ubuntu on GCP:
apt-get install gnome-system-monitor dbus
From another Linux machine (or if you have Cygwin/X installed on Windows), just run ssh -X {remote ip}, then type gnome-system-monitor and it will launch on your desktop.
You could also set up a VNC server on the cloud platform, as sketched below.
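A rough sketch of the VNC route (package name, display number, and port are assumptions; adjust for your distribution):
# On the VM: install and start a VNC server (TightVNC used here as an example).
sudo apt-get install tightvncserver
vncserver :1
# On your local machine: tunnel the VNC port over SSH instead of opening it in the firewall.
ssh -L 5901:localhost:5901 user@{remote ip}
# Then point a VNC client at localhost:5901.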
I have created a Google Cloud Function to do image processing. I am using a deep learning model for one part of the process; it normally uses a GPU, but after reading many links I was able to disable the GPU and run it on CPU, and it is working well.
My question is: how can I enable the use of a GPU for Cloud Functions? Or how could I send an image from Cloud Functions to be processed on a Compute Engine instance with a GPU? Finally, I read something about Atheros, but it looks expensive, more than 1k/month.
Thanks for your comments and ideas.
GPUs are expensive; there is no real way around that. You need a small VM with a small GPU to limit the cost, but it's still expensive.
The communication between Cloud Functions and the VM is up to you. It can be an HTTP REST API, gRPC, or a custom protocol on a custom port. If you use the VM's private IP, you need to add a Serverless VPC Access connector to your Cloud Function to bridge the serverless world managed by Google with your own VPC where your VM lives.
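As a rough illustration of the HTTP REST option, the function would do the equivalent of this curl call against a small web server you run on the VM; the IP, port, and /process path are made-up placeholders:
# Send the image bytes to the VM's processing endpoint and read back the result.
curl -X POST \
  -H "Content-Type: application/octet-stream" \
  --data-binary @image.jpg \
  http://10.128.0.2:8080/process
# 10.128.0.2 is a private IP, so this call only works from the function through a serverless VPC connector.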
I am new to GCE. I am trying to write a cron job to take some action based on current CPU utilization of some VMs on GCE. Is there a way I can get this info using 'gcloud' command?
I have tried gcloud compute instances describe <instance_name>. But that does not provide current CPU utilization info.
I found this other post that talks about getting this info from StackDriver - Understanding instance/cpu/utilization of Google Compute Engine
I am looking for this info using 'gcloud'.
Appreciate any help. Thank you in advance.
gcloud commands are used to create and manage Google Cloud resources. I checked the gcloud command reference and it doesn't cover monitoring metrics. To monitor the performance of your VM instance, you need to use Stackdriver Monitoring.
If you really want this feature in the gcloud command, you can open a feature request here.
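In the meantime, a cron job can combine gcloud with the Monitoring API directly; a sketch (project ID is a placeholder, and the GNU date flags assume a Linux machine):
# Read the last 5 minutes of CPU utilization through the Monitoring API,
# authenticating with the token gcloud already has.
PROJECT=my-project
START=$(date -u -d '5 minutes ago' +%Y-%m-%dT%H:%M:%SZ)
END=$(date -u +%Y-%m-%dT%H:%M:%SZ)
curl -s -G \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  --data-urlencode "filter=metric.type=\"compute.googleapis.com/instance/cpu/utilization\"" \
  --data-urlencode "interval.startTime=${START}" \
  --data-urlencode "interval.endTime=${END}" \
  "https://monitoring.googleapis.com/v3/projects/${PROJECT}/timeSeries"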
Hi, when I try to SSH to my Google Cloud VM instance it doesn't connect, and when I check the logs it says there is no storage available.
But when I connect using the Google Cloud console it connects, and when I check the storage there is enough storage.
Also, one thing: my persistent disk is 20 GB, but here it shows twice that amount. If anyone can explain what's going on, it would help me out a lot.
The output that you are posting is from Cloud Shell.
When you start Cloud Shell, it provisions a g1-small Google Compute Engine virtual machine running a Debian-based Linux operating system. Cloud Shell instances are provisioned on a per-user, per-session basis. The instance persists while your Cloud Shell session is active; after an hour of inactivity, your session terminates and its VM is discarded. For more on usage quotas, refer to the limitations guide.
With the default Cloud Shell experience, you are allocated an ephemeral, pre-configured VM, and the environment you work with is a Docker container running on that VM. You can also choose to use a custom environment to save your configurations, in which case your environment will be your very own custom Docker image.
Cloud Shell provisions 5 GB of free persistent disk storage mounted as your $HOME directory on the virtual machine instance.
As Travis mentioned, you ran df -h --total against the Cloud Shell storage, not the VM.
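Inside Cloud Shell you can make that distinction clearer by limiting the check to your home mount:
# Shows only the 5 GB persistent $HOME disk, not the ephemeral Cloud Shell VM's disks.
df -h "$HOME"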
Here is a related SO question with possible solutions to your issue:
Disk is full, and I can't SSH to instance.
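If it turns out the VM's own boot disk really is full, one common fix is to grow it from outside the VM and then expand the filesystem; a sketch, with the disk name, zone, and size as placeholders:
# Resize the persistent disk (works whether the instance is stopped or running).
gcloud compute disks resize my-boot-disk --size=30GB --zone=us-central1-a
# On recent public images the root filesystem grows automatically on the next boot;
# otherwise grow the partition and filesystem manually, e.g. with growpart and resize2fs.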
I created a VM instance on Google Cloud Compute attaching a NVIDIA Tesla K80 and using SSD for persistent storage.
I'm running Ubuntu 16.04 LTS on it and stopped the instance to avoid being billed while it's not in use. Now I try to start the instance again, but it won't start, either from the Console or from the Terminal (macOS).
I have already tried to view the instance's console port log, but it's not available as the instance is not running.
I would suggest checking the preemptibility of your instance and GPU. You can check whether your GPU is preemptible or not from the quota page. You can check the preemptibility of your instance by clicking on the instance and seeing whether Availability policy > Preemptibility is on or off, or by following this document.
Keep in mind, preemptible GPUs will only work on a preemptible instance.
If both preemptibility settings match, then it might be a project-specific issue which will require one-on-one investigation. To get this support, you can open an issue in the public issue tracker so someone from Google can assist you.
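For reference, a quick way to check the instance side from the CLI (the instance name and zone below are placeholders):
# Shows whether the instance is preemptible and which GPUs are attached.
gcloud compute instances describe my-gpu-instance --zone=us-central1-a \
  --format="yaml(scheduling.preemptible,guestAccelerators)"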
We are having a problem starting a simple GCE VM with a GPU in us-central1. I am wondering if anyone is experiencing the same thing. The error we got is below:
Instance 'instance-group-2-vc37' creation failed: The resource 'projects/xxxxx-xxxx-858/zones/us-central1-a/acceleratorTypes/nvidia-tesla-k80' was not found (when acting as 'xxxxxxx@cloudservices.gserviceaccount.com')
Thanks
GCE doesn't offer GPUs in us-central1. The docs list which regions GPUs are available in.
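You can confirm which GPU types a given zone actually offers with something like this (the zone filter value is just an example):
# Lists the accelerator types available, restricted to one zone.
gcloud compute accelerator-types list --filter="zone:us-central1-a"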
Cloud ML Engine is a separate product and not what you are using here.