Can't Disable Google Compute Engine - google-compute-engine

I am trying to disable Google Compute Engine from
https://console.developers.google.com/project/myapp/apiui/api
As soon as I click the off button next to google compute engine, I see the message "Disabling Google Compute Engine...".
The message never goes away and Google Compute Engine is still on.
I'm using Chrome on Windows.
I'm trying to restart the GCE service because I keep getting:
Error: API rate limit exceeded
whenever I run gcutil listinstances after setting up my instance for the first time.
Can someone help with either the service restart issue or the API rate limit exceeded issue?

If you'd like to restart your instance, there are two ways to do this. For the first one, go to your VM Instances page (https://console.developers.google.com/project/{project-id}/compute/instances). Be sure to fill in your project-id, or simply navigate to the page from the console.
Then click on the instance you'd like to restart, and on that page, there's a simple "reboot" button. If this still isn't working, try the next option.
The next option is to log in to your instance and run sudo reboot. My guess is this will succeed and not fix your problem, but it will either succeed or tell you why it didn't. Alternatively, you can use sudo poweroff and use the console to restart your instance.
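If neither of those works, a minimal sketch of forcing a reset from the gcloud CLI (instance name and zone are placeholders you would fill in):
$ gcloud compute instances reset YOUR_INSTANCE_NAME --zone YOUR_ZONE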

You have 4 options to stop your instance.
If you would like to stop your instance using Google Cloud Platform Console, you may use this guide:
Go to the VM instances page in the GCP Console.
Select one or more instances that you want to stop.
At the top of the VM instances page, click Stop.
To stop your instance using the gcloud tool, follow this procedure.
Open your Google Cloud Shell in GCP.
Use the instances stop command and specify one or more instances that you want to stop.
$ gcloud compute instances stop your-instance-1 your-instance-2
To stop your instance using the API, construct a POST request to the instance's stop method.
POST https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-f/instances/example-instance/stop
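For example, a rough sketch of sending that request with curl, assuming gcloud is installed and authenticated (the project, zone, and instance name are the same placeholders as in the URL above):
$ curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-f/instances/example-instance/stop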
To stop your instance from within the OS, on Linux or Windows:
In the GCP Console, go to the VM Instances page.
In the list of virtual machine instances, click SSH in the row of the instance that you want to connect to.
While you are logged into the virtual machine, run either the sudo shutdown -h now or the sudo poweroff command:
$ sudo shutdown -h now
$ sudo poweroff
You can reboot a Windows instance, similar to sudo reboot above, using the Start menu. In the Start menu, click on the arrow next to Log off and click Restart.

Related

Google Compute Engine is not responding

My GCP server is down. It was working yesterday. I can see the server in VM Instances but cannot connect using SSH. All the client websites are down.
Can anyone help?
There are several reasons this could happen:
Your disk is full
The sshd daemon isn't configured properly
OS Login is enabled on your instance
A firewall rule blocks port 22
Sometimes you will also see connection errors in the console; those are worth a look. The commands sketched below can help check a couple of these without SSH access.
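A minimal sketch of two checks you can run from Cloud Shell without connecting to the VM (instance name and zone are placeholders):
$ gcloud compute firewall-rules list   # look for a rule that allows tcp:22
$ gcloud compute instances get-serial-port-output YOUR_INSTANCE_NAME --zone YOUR_ZONE   # look for disk-full or sshd errors in the boot logs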
EDIT:
If that is still not working, I will need additional information:
Take a look at your serial console logs and tell me if you see anything relevant that can help, like a kernel panic, a networking issue, permission denied, etc.
Use Cloud Shell and try to connect to your VM instance with these commands:
gcloud compute firewall-rules create --network=default default-allow-ssh --allow tcp:22
gcloud compute ssh YOUR_INSTANCE_NAME --zone YOUR_ZONE -- -vvv
If you can't connect from Cloud Shell, try to ping your VM instance (internal IP and external IP).
I highly recommend deleting your screenshots showing information about your VM instance (firewall rules, project name, nmap scans, etc.).

Automatically start gcloud sql proxy when google compute engine VM starts

I'm using google compute engine and have an auto scaling instance group that spins up new VMs as needed all sitting behind a load balancer. I'm also using google's cloud SQL in the same project. The VMs need to connect to the cloud SQL instance.
Since the IPs of the VMs are dynamic, I can't just plug the IPs into the Cloud SQL access config, so I followed the Cloud SQL Proxy setup along with the notes from this very similar question:
How to connect from a pool of Google Compute Engine instances to Cloud SQL DB in the same project?
I can now log into a single test VM and run:
./cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306
and everything works great and that VM connects to the cloud SQL instance.
The next step is where I'm having issues. How can I set up the VM so it automatically starts the proxy when it's either built from an instance template or just restarted? The obvious answer seems to be to put the above in the VM's startup script, but that doesn't seem to be working. With my single test VM I can SSH in and manually run the cloud_sql_proxy command and everything works. If I then include the below in my startup script and restart the VM, it doesn't connect:
#! /bin/bash
./cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306
Any suggestions? I seriously can't believe it's this hard to connect to Cloud SQL from a VM in the same project...
The startup script you have shown doesn't include the step that downloads cloud_sql_proxy.
You need to first download and then launch the proxy. So, your startup script should look like:
sudo wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
sudo mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy
sudo chmod +x cloud_sql_proxy
sudo ./cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306 &
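As a rough sketch, assuming you save the lines above as startup.sh, you could attach it as the startup script of an existing instance, or bake it into the instance template your autoscaling group uses (the instance and template names here are placeholders):
$ gcloud compute instances add-metadata YOUR_INSTANCE_NAME --zone YOUR_ZONE --metadata-from-file startup-script=startup.sh
$ gcloud compute instance-templates create YOUR_TEMPLATE_NAME --metadata-from-file startup-script=startup.sh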
I chose crontab to run cloud_sql_proxy automatically when the VM starts up.
$ crontab -e
and add a line like:
@reboot /path/to/cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306

How do I redirect the output from a Google Compute Engine instance startup script?

I've set up startup scripts for all of my instances, so that when I reboot one, it updates itself to the latest version of whatever it's running. Now I want to do several of those via one script, with a single button push. It works by just rebooting all the relevant instances, but I want to see the output of the startup scripts.
From here: https://cloud.google.com/compute/docs/startupscript#rerunthescript - I've found out that, on Debian machines, triggering a startup script by itself without rebooting a machine is done via sudo google_metadata_script_runner --script-type startup, and that all output from the startup script goes to /var/log/daemon.log. Is there any way to set the startup scripts to output directly to stdout?
As ZachB mentioned, startup scripts on Google Compute Engine send their output to the serial port, which you can view in the Cloud Console or on the command line with the gcloud tool. The following docs explain in more detail how to view the serial port output:
Interacting with the Serial Console
https://cloud.google.com/compute/docs/instances/interacting-with-serial-console
(Navigate to 'VM Instances' -> instance name -> 'Serial port' -> 'Connect to serial port')
gcloud compute instances get-serial-port-output
https://cloud.google.com/sdk/gcloud/reference/compute/instances/get-serial-port-output
gcloud compute instances get-serial-port-output NAME [--port=PORT] [--zone=ZONE] [GLOBAL-FLAG …]
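For example, a minimal invocation (the instance name and zone are placeholders):
$ gcloud compute instances get-serial-port-output example-instance --zone us-central1-f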

Can't Connect to (or start?) Google Cloud Compute Instance

I attempted to restart my Windows instance (using the Start menu), but haven't been able to connect to it. After a few hours of waiting, I also tried using the Reboot button in the Google Developers Console. That didn't work either. I can't RDP into it, or even ping it. However, when I look at the instance in the GDC, it's been steady at 27% CPU usage for the duration. Anyone know what's going on or how to fix it?
You can run this gcutil command to reset the instance: gcutil --project=[project-id] resetinstance [instance-name]
If this has no effect, a suggestion would be to delete the instance without deleting the boot disk and then create a new instance with that disk. This will ensure that you get a fresh instance without losing any of your previous data.
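A rough sketch of that recovery path with the newer gcloud tool rather than gcutil (instance names, disk name, and zone are placeholders; --keep-disks=boot is what preserves the boot disk):
$ gcloud compute instances delete old-instance --zone us-central1-f --keep-disks=boot
# the kept boot disk usually shares the old instance's name; attach it as the boot disk of a new instance
$ gcloud compute instances create new-instance --zone us-central1-f --disk name=old-instance,boot=yes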

How to ssh into HA application gears?

As was explained in the answer to this question: https://stackoverflow.com/questions/11730590/what-are-some-of-the-tricks-to-using-openshift it should be possible to ssh into some of the other gears when using a scaled app with openshift.
Unfortunately the link mentioned there (https://openshift.redhat.com/community/faq/can-i-access-my-applications-gear) seems to be gone.
Via [my app url]/haproxy-status/ I can see the names of the other gears. They are long names like gear-[long number]-[app name]. Using that name, I cannot ssh into them while I'm ssh'ed into the main gear; ssh just returns immediately without any error.
If I do ssh blala the same thing happens, so it looks like ssh has been replaced by a no-op command on the primary gear?
When I examine the haproxy conf file, I see entries like:
server gear-[long number]-[app name] ex-std-node[number].prod.rhcloud.com:[number] check fall 2 ...
I tried ssh'ing into this ex-std-node... address as well, both from the main/primary application gear and from my desktop, but it didn't work in either case.
How can I get shell access to my other gears?
This command shows how to access individual gears:
rhc app show <appname> --gears
The last column of output is the ssh URL. It is of the form $UUID@$UUID-$NAMESPACE.rhcloud.com. You can ssh into them directly, and they are also accessible via ssh from the "head" gear; they have to be, since git pushes are synchronized from the head gear to the others via ssh.
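A minimal sketch of what that direct connection looks like, with a made-up UUID and namespace as placeholders:
$ ssh 5421abcdef1234567890abcd@5421abcdef1234567890abcd-mynamespace.rhcloud.com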