Does traffic get discarded if a google cloud endpoint is redeployed? - google-compute-engine

Let's say for argument's sake that I have a VM instance which is configured with an endpoint config_id in its metadata, pointing at an existing, working Cloud Endpoints deployment.
Can someone please explain what happens to incoming requests when the Cloud Endpoints service is redeployed? Obviously, I will get a new config_id, but if I haven't yet applied this config_id to the VM instance, does the traffic just get discarded?
If so, what are some viable solutions to prevent a service interruption for my users?
Thanks!

The traffic keeps being served with the old configuration until you update the endpoints-service-config-id metadata value with the new config_id, then SSH into the VM instance with gcloud compute ssh [INSTANCE-NAME] and run sudo /etc/init.d/nginx restart.
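A rough sketch of that update with gcloud, assuming the metadata key is endpoints-service-config-id and the Extensible Service Proxy runs under nginx as described; the instance name, zone and config ID below are placeholders:

# Point the VM at the new Endpoints configuration
gcloud compute instances add-metadata my-endpoints-vm \
    --zone us-central1-a \
    --metadata endpoints-service-config-id=2017-02-01r0

# Restart nginx on the instance so it picks up the new config ID
gcloud compute ssh my-endpoints-vm --zone us-central1-a \
    --command "sudo /etc/init.d/nginx restart"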
In conclusion, traffic won't be discarded; it just keeps using the old config deployment. See the documentation on redeploying.

Related

Creating a Staging VM in Google Compute Engine

I'm trying to set up a staging VM for a production site that I have just inherited. The site is running WordPress/WooCommerce and has not been updated in a while, and the VM it's hosted on is running an old version of PHP. Obviously, this all needs to be fixed up, but I'm unfamiliar with GCP Compute Engine. Also, any attempt to run backup/clone plugins crashes the site and requires a restore from the daily snapshot, which is very annoying.
Is it possible to clone the VM/disk to a new instance, point that at a temporary domain, and test/update the site there? I have been trying to do this for a while now without much luck; any suggestions would be much appreciated. Thanks.
Creating a clone of an existing VM is possible and quite easy.
Create a snapshot of the VM. If possible, stop the VM before doing this to ensure 100% accuracy - this way you will have an exact snapshot of the drive without any errors. You can do it while the VM is running too, if stopping it is out of the question.
Create a VM from the snapshot - select the snapshot you've just created as the boot disk. Remember to assign a static public IP to this VM (unless you want it to change after a VM restart, and since you're going to do some configuration, that would likely happen). You can change the VM's specs at this time too - nothing stops you from adding/removing CPUs, RAM etc. It may well be that your VM is underutilised and you can use something smaller to save costs. Or the opposite.
Start the machine. Now you can modify your WP configuration to point to the new domain. Depending on the SSL certificate, you can either use an external one or the one provided by GCP (the most convenient solution).
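As a minimal sketch of that flow with gcloud - assuming the source instance is called prod-wordpress in us-central1-a; all names are placeholders:

# Optional: stop the VM for a fully consistent snapshot
gcloud compute instances stop prod-wordpress --zone us-central1-a

# Snapshot the boot disk (here the disk shares the instance name)
gcloud compute disks snapshot prod-wordpress --zone us-central1-a \
    --snapshot-names wp-staging-snap

# Create a new disk from the snapshot and boot a staging VM from it
gcloud compute disks create wp-staging-disk --zone us-central1-a \
    --source-snapshot wp-staging-snap
gcloud compute instances create wp-staging --zone us-central1-a \
    --machine-type n1-standard-1 \
    --disk name=wp-staging-disk,boot=yes

# Reserve a static external IP to attach to the staging VM
gcloud compute addresses create wp-staging-ip --region us-central1

# Restart production if you stopped it
gcloud compute instances start prod-wordpress --zone us-central1-a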
If you already own a domain you want to use for staging you can host it in Cloud DNS or at some other provider - just point it to the external IP you just reserved.
If you will be hosting your domain in Cloud DNS, you will find the necessary information in the documentation about managed zones (domains).
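For example, assuming a managed zone already exists (the zone name, hostname and IP below are placeholders), adding a staging record could look like this:

gcloud dns record-sets create staging.example.com. --zone my-zone \
    --type A --ttl 300 --rrdatas 203.0.113.10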
You can also consider creating a new VM and using it as a template for a group of VMs (a managed, autoscaled group) with an external HTTPS load balancer in front of it. But this adds a little complexity, so it's just an idea in case you need to handle a lot more traffic.

My google cloud instance lost network connectivity

My Google Cloud instance (10.128.0.3) lost network connectivity somewhere just after 04:00 this morning (I am running CentOS 6.10). The network interfaces are up and have IP addresses, but I am unable to ping the default gateway (10.128.0.1). Firewall rules (Google and local) have not been changed/modified. This instance has been online for several years with no recent changes made. Any suggestions would be helpful and appreciated.
This is a known issue when updating to kernel 2.6.32-754 that affects both Red Hat and CentOS images, and seems related to this DHCP update. The Compute Engine team is already aware of this issue.
Meanwhile, and in addition to the great suggestions above, you may also use a startup script (to add the default gateway IP address) to fix this issue, and then restart your instance. To do so without access to the instance, simply add metadata to the instance with the name startup-script and the content of the script below (make sure to change the gateway to yours; it can be found in the VPC page):
#!/bin/bash
# Re-add the lost default route (replace [default_gateway_ip] with your VPC gateway)
route add default gw [default_gateway_ip] eth0
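A rough sketch of doing that from Cloud Shell, without SSH access to the VM (the instance name and zone are placeholders):

# Save the fix as a local file, attach it as the startup-script metadata, then reset the VM
cat > fix-gateway.sh <<'EOF'
#!/bin/bash
route add default gw [default_gateway_ip] eth0
EOF

gcloud compute instances add-metadata my-centos-vm --zone us-central1-a \
    --metadata-from-file startup-script=fix-gateway.sh
gcloud compute instances reset my-centos-vm --zone us-central1-a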
For further information/updates about this issue, you may check this issue tracker link: https://issuetracker.google.com/issues/111154121

CloudSql with Autoscaler access

I am stuck on one thing regarding Cloud SQL.
I have my WordPress app running on GCE and I created an instance group so I can utilise the autoscaler.
For the DB, I am using Cloud SQL.
The point where I am stuck is the "Authorised networks" setting in Cloud SQL, as it accepts only public IPv4 addresses.
When autoscaling happens, how do I know what IP will be attached to the new instance, so that it can be given access to the DB?
I can hard-code the Cloud SQL IP as a CNAME, but from the Cloud SQL side I cannot figure out how to grant access, short of making my DB open to everyone.
Please let me know what I am missing.
I also tried the Cloud SQL Proxy, but it doesn't come as a service on Linux... I hope you can understand my situation. Let me know if you have any ideas to share on this.
Thank you
The recommended way is to use a Second Generation instance and the Cloud SQL Proxy; you'll need to configure the Proxy on Linux and start it using service account credentials, as outlined at the provided link.
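Since the asker mentions that the Proxy doesn't ship with a Linux service, one option is to wrap it in a unit yourself. A minimal sketch, assuming a systemd-based distro, with the binary path, connection name and key path below all being placeholders:

# Install the proxy as a systemd service so it starts on boot and restarts on failure
sudo tee /etc/systemd/system/cloud-sql-proxy.service > /dev/null <<'EOF'
[Unit]
Description=Cloud SQL Proxy
After=network.target

[Service]
ExecStart=/usr/local/bin/cloud_sql_proxy \
  -instances=my-project:us-central1:wp-db=tcp:3306 \
  -credential_file=/etc/cloudsql/sa-key.json
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now cloud-sql-proxy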
Another way is to use a startup script in your GCE instance template, so you can get your new instance's external IP address and add it to the Cloud SQL instance's authorized networks by using the gcloud sql instances patch command. The IP can be removed from the authorized networks in the same way by using a shutdown script. The external IP address of a GCE VM instance can be retrieved from metadata by running:
$ curl "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" -H "Metadata-Flavor: Google".

Cannot access Google Cloud Compute Instance External IP

I have set up an Google Cloud Compute Instance:
Machine type
n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform
Intel Haswell
Zone
us-east1-c
I can ssh in using the external address.
I have installed the vncserver and can access it on port 5901 from localhost as well as the internal IP.
I am trying to access it from the static, external IP address but it is not working.
I have configured the firewall to open the port to 0.0.0.0/0, but it is not reachable.
Can anyone help?
------ After further investigation based on the tips from the two answers (thanks, both!), I have a partial answer:
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in Python. I was able to get a response from the service over the internet.
The summary of the current situation is as follows:
The external IP address can be reached
It is enabled and working for SSH
It is enabled and working for HTTP
It does not seem to allow traffic from vncserver
Any idea how to configure the compute instance to allow for vncserver traffic?
If you have already verified that the Google firewall or your VM is not blocking packets, make sure the VNC service is configured to listen on an address reachable from outside (for example 0.0.0.0 or the internal IP), not just on localhost - on GCE the external IP is NATed to the instance, so the service cannot bind to it directly.
You can always use a utility like nmap outside Google project to reveal information on the port status.
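As a quick sketch of both checks (the external IP below is a placeholder):

# On the VM: see which address vncserver is bound to
# (127.0.0.1:5901 means it only accepts local connections)
sudo ss -tlnp | grep 5901

# From a machine outside the project: probe the port
nmap -Pn -p 5901 203.0.113.10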
Enable HTTP/HTTPS traffic in the firewall as needed; it will work.
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in python. I was able to get a response from the service over the internet.
As such, the original question is answered: I can access the Google Cloud Compute instance's external IP. My wider issue is still not solved, but I will post a new, more specific question about it.
TLDR: make sure you are requesting http not https
In my case I was following the link from my GCE instance's External IP property, which takes you directly to the https version, and I didn't set up https, so that was causing the 'site not found' error.
Create an entry in your local SSH config file, as below, with the mentioned local forward port. In my case it's an example of forwarding YARN's UI, which I want to access in a browser.
Host hadoop
HostName <External-IP>
User <Local-machine-username>
IdentityFile ~/.ssh/<private-key-for-above-user>
LocalForward 8089 <Internal-IP>:8088
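With that entry in place (the host alias, username and addresses above are whatever you configured), connect and then browse to the forwarded port locally:

# Open the tunnel defined in ~/.ssh/config, then visit http://localhost:8089 in a browser
ssh hadoop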
In addition to having the firewall rules to allow HTTP traffic in both Google Cloud Platform and within the OS of the instance, make sure you install a web server such as Apache or Nginx.
After installing the web server, you connect to the instance using SSH and verify you do not get a failed connection with the following command:
$ sudo wget http://localhost
If the connection succeeds, it means you can access your external URL:
http://<IP-EXTERNAL-VM>
Usually there are two main things to check.
1. Port
By default, only ports 80 and 443 and ICMP are exposed. If your server is running on a different port, create a firewall rule for it (see the sketch below).
2. Firewall
Make sure you are allowing HTTP and HTTPS traffic based on your need.
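For instance, a sketch of opening an extra port such as 5901 for VNC (the rule name and target tag are placeholders):

gcloud compute firewall-rules create allow-vnc \
    --direction INGRESS --action ALLOW --rules tcp:5901 \
    --source-ranges 0.0.0.0/0 --target-tags vnc-server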
For me the problem was that I set up the traffic for the firewall rule to be 'Egress' instead of 'Ingress'.
If you have already initiated 'https', just disable it and check again.

Hadoop cluster on Google Compute Engine: Accessing master node via REST

I have deployed a hadoop cluster on google compute engine. I then run a machine learning algorithm (Cloudera's Oryx) on the master node of the hadoop cluster. The output of this algorithm is accessed via an HTTP REST API. Thus I need to access the output either by a web browser, or via REST commands. However, I cannot resolve the address for the output of the master node which takes the form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091.
I have allowed http traffic and allowed access to ports 80 and 8091 on the network. But I cannot resolve the address given. Note this http address is NOT the IP address of the master node instance.
I have followed along with examples for accessing IP addresses of compute instances. However, I cannot find examples of accessing a single node of a hadoop cluster on GCE, that follows this form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091. Any help would be appreciated. Thank you.
The reason you're seeing this is that the "HOSTNAME.c.PROJECT.internal" name is only resolvable from within the GCE network of that same instance itself; these domain names are not globally visible. So, if you were to SSH into your master node first, and then try to curl http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 then you should successfully retrieve the contents, whereas trying to access from your personal browser will just fail to resolve that hostname into any IP address.
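For example, a quick way to confirm this from your workstation (the zone is a placeholder; the cluster and project names are the same as above):

# SSH to the master and curl the Oryx endpoint from inside the network,
# where the .internal hostname does resolve
gcloud compute ssh CLUSTER_NAME-m --zone us-central1-a \
    --command "curl http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091"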
So unfortunately, the quickest way for you to retrieve those contents is indeed to use the external IP address of your GCE instance. If you've already opened port 8091 on the network, simply run gcutil getinstance CLUSTER_NAME-m and look for the entry specifying the external IP address; then plug that in as your URL: http://[external ip address]:8091.
If you turned up the cluster using bdutil, a more involved but nicer way to access your cluster is by running the bdutil socksproxy command. This opens a dynamic-port-forwarding SSH tunnel to your master node as a SOCKS5 proxy, so that you can then configure your browser to use localhost:1080 as your proxy server, make sure to enable remote DNS resolution, and then visit your browser using the normal http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 URL.
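If you are not using bdutil, a roughly equivalent manual tunnel (the username and master IP are placeholders) would be:

# Open a SOCKS5 proxy on localhost:1080 through the master node
ssh -D 1080 -N -q myuser@[master external IP]
# Then set the browser's SOCKS proxy to localhost:1080 with remote DNS enabled
# and browse to http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091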