Google Cloud Compute no storage left - google-compute-engine

Hi, when I try to SSH to my Google Cloud VM instance it doesn't connect, and when I check the logs it says there is no storage available.
But when I connect using the Google Cloud console it connects, and when I check the storage there is enough storage.
Also, one more thing: my current persistent disk is 20 GB, but here it shows twice that amount. If anyone can explain what's going on, that would help me out a lot.

The output that you are posting is from Cloud Shell.
When you start Cloud Shell, it provisions a g1-small Google Compute
Engine virtual machine running a Debian-based Linux operating system.
Cloud Shell instances are provisioned on a per-user, per-session
basis. The instance persists while your Cloud Shell session is active;
after an hour of inactivity, your session terminates and its VM is discarded.
discarded. For more on usage quotas, refer to the limitations guide.
With the default Cloud Shell experience, you are allocated an
ephemeral, pre-configured VM and the environment you work with is a
Docker container running on that VM. You can also choose to use a
custom environment to save your configurations, in which case, your
environment will be your very own custom Docker image.
Cloud Shell provisions 5 GB of free persistent disk storage mounted as
your $HOME directory on the virtual machine instance.
As Travis mentioned, running df -h --total in Cloud Shell reports the Cloud Shell storage, not the VM's disk.
Here you can find a related SO question with possible solutions to fix your issue:
Disk is full, and I can't SSH to instance.
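If the VM's boot disk really is full, one common fix is to grow the persistent disk and then expand the filesystem. A rough sketch follows; the disk name, zone, and device/partition are placeholders and assume the default single ext4 partition on /dev/sda1 used by the stock Debian images:

# from Cloud Shell: grow the boot persistent disk (placeholder names)
gcloud compute disks resize my-boot-disk --size=30GB --zone=us-central1-a

# on the VM (via the serial console, or once SSH works again):
# grow partition 1 and the ext4 filesystem; growpart comes from cloud-guest-utils
sudo growpart /dev/sda 1
sudo resize2fs /dev/sda1
df -h /    # confirm the extra space is visible

On many GCE images a simple reboot after the disk resize expands the root filesystem automatically, so trying that first is also reasonable.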

Related

Can we migrate local beanstalkd to GCP Compute instance?

I have a local system that has a beanstalkd application running. Now I want to migrate it to a GCP Compute Engine instance.
I have installed beanstalkd on the Compute Engine instance, so how can I migrate my existing data to it?
The steps are straightforward:
make sure the binlog is activated
locate the binlog files
stop the Beanstalkd instance for a fully graceful shutdown, so the binlog file is flushed
upload the binlog to Cloud Storage (either using the Console or the gsutil utility)
on the Compute Engine instance, make sure you have Storage permission in the edit section
download the binlog file to the machine itself using gsutil/gcloud commands (see the sketch after these steps)
start the beanstalkd service and voilà, your persisted messages are there
you can set up Beanstalkd Console Admin on the same machine or on other machines
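A rough sketch of the upload/download steps; the bucket name and binlog directory are placeholders, and this assumes beanstalkd runs under systemd with its binlog in /var/lib/beanstalkd and that the Cloud SDK (gsutil) is installed on the local machine:

# on the local machine: stop beanstalkd so the binlog is flushed, then upload it
sudo systemctl stop beanstalkd
gsutil cp /var/lib/beanstalkd/binlog.* gs://my-migration-bucket/beanstalkd/

# on the Compute Engine instance: pull the binlog down and start the service
sudo systemctl stop beanstalkd
gsutil cp gs://my-migration-bucket/beanstalkd/binlog.* /var/lib/beanstalkd/
sudo chown -R beanstalkd:beanstalkd /var/lib/beanstalkd    # assumes the Debian package's service user
sudo systemctl start beanstalkd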

Invoking gsutil or gcloud from a Google cloud function?

I have a Google Firebase app with a Cloud Functions back-end. I'm using the node.js 10 runtime which is Ubuntu 18.04. Users can upload files to Google Cloud Storage, and that triggers a GCF function.
What I'd like that function to do is copy that file to Google Cloud Filestore (that's File with an L), Google's new NFS-mountable file server. They say the usual way to do that is from the command line with gsutil rsync or gcloud compute scp.
My question is: are either or both of those commands available on GCF nodes? Or is there another way? (It would be awesome if I could just mount the Filestore share on the GCF instance, but I'm guessing that's nontrivial.)
Using NFS-based storage is not a good idea in this environment. NFS works by providing a mountable file system, which will not work in the Cloud Functions environment, as its file system is read-only with the exception of the /tmp folder.
You should consider using cloud-native storage systems like GCS, for which the Application Default Credentials are already set up. See the list of supported services here.
According to the official Cloud Filestore documentation:
Use Cloud Filestore to create fully managed NFS file servers on Google
Cloud Platform (GCP) for use with applications running on Compute
Engine virtual machine (VM) instances or Google Kubernetes Engine
clusters.
You cannot mount Filestore on GCF.
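For context, mounting a Filestore share is done from a Compute Engine VM (or a GKE node), not from a function. A minimal sketch, where the Filestore IP and share name are placeholders for a hypothetical instance:

# on a Debian/Ubuntu Compute Engine VM, not in a Cloud Function
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/filestore
sudo mount 10.0.0.2:/my_share /mnt/filestore    # placeholder Filestore IP and share name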
Also, you cannot execute gsutil or gcloud commands from a Google Cloud Function; see Writing Cloud Functions:
Google Cloud Functions can be written in Node.js, Python, and Go, and
are executed in language-specific runtimes

Google Compute Engine is not responding

My GCP server is down. It was working yesterday. I can see the server in VM Instances but cannot connect using SSH. All the client websites are down.
Can anyone help?
There are several reasons this could happen:
Your disk is full
The sshd daemon isn't configured properly
OS Login is enabled on your instance
A firewall rule blocks port 22
Sometimes you also see connection errors in the console; those are worth a look.
EDIT:
I will need additional information if that is still not working:
Take a look at your serial console logs and tell me if you have any relevant entries that can help, like a kernel panic, a networking issue, permission denied, etc. (see the sketch after these steps).
Use Cloud Shell and try to connect to your VM instance with these commands:
gcloud compute firewall-rules create --network=default default-allow-ssh --allow tcp:22
gcloud compute ssh YOUR_INSTANCE_NAME --zone YOUR_ZONE -- -vvv
If you can't connect from Cloud Shell, try to ping your VM instance (internal IP & external IP).
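A quick sketch of pulling the serial console logs and checking the instance state from Cloud Shell (instance name and zone are placeholders):

# dump the serial console output and look for disk-full, sshd, or kernel panic messages
gcloud compute instances get-serial-port-output YOUR_INSTANCE_NAME --zone YOUR_ZONE
# confirm the instance is actually RUNNING
gcloud compute instances describe YOUR_INSTANCE_NAME --zone YOUR_ZONE --format='get(status)'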
I highly recommend deleting your screenshots showing information about your VM instance (firewall rules, project name, nmap scans, etc.).

Clone a google compute engine instance to a local Server

I have a VM instance on Google Compute Engine which deploys a Python web application. Now I want to have a snapshot of the online VM deployed on my local server. Is it feasible to do that? If yes, how do I proceed?

Get Memory and Cpu Usages in Google Cloud Compute

How is one supposed to access the CPU usage and memory usage of all the instances in a given project in Google Cloud Compute?
I'm unable to find anything regarding this in the documentation.
You can use Google Cloud Monitoring to see some usage metrics for your systems, and the Google Cloud Monitoring agent to get more precise metrics like memory. See the GCP metrics documentation for a list of all available compute metrics.
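As a rough sketch of installing the (legacy Stackdriver) monitoring agent on a Debian/Ubuntu instance; the script URL and package name below are taken from the agent documentation at the time and may have changed since, so check the current docs first:

# on the VM: add the agent repo and install the agent (legacy Stackdriver flow)
curl -sSO https://dl.google.com/cloudagents/add-monitoring-agent-repo.sh
sudo bash add-monitoring-agent-repo.sh
sudo apt-get update
sudo apt-get install -y stackdriver-agent
sudo service stackdriver-agent start

Once the agent is running, memory and other agent metrics show up alongside the default compute metrics in Cloud Monitoring.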
For memory usage on Debian:
free -m
in console.
You can get CPU, memory, network, and disk I/O information for your instance group in Google Stackdriver. Stackdriver comes with a separate subscription. You can add charts to monitor your GCP infrastructure in one or more dashboards.
You can see detailed information in How to monitor GCP infrastructure using Stackdriver visualization.
If you are using Linux, you can install gnome-system-monitor, ssh -X into the system, and launch it.
For Ubuntu on GCP:
sudo apt-get install gnome-system-monitor dbus
From another Linux machine (or if you have Cygwin/X installed on Windows), just ssh -X {remote ip}, then type gnome-system-monitor and it will launch on your desktop (see the sketch below).
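A quick sketch of the same idea using gcloud's SSH wrapper; instance name and zone are placeholders, and a local X server is assumed:

# install the GUI monitor on the instance (Debian/Ubuntu)
gcloud compute ssh my-instance --zone us-central1-a --command 'sudo apt-get install -y gnome-system-monitor dbus'
# reconnect with X11 forwarding (everything after -- is passed straight to ssh)
gcloud compute ssh my-instance --zone us-central1-a -- -X
# then, inside that SSH session:
gnome-system-monitor &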
You could also set up a VNC server on the cloud platform.