Do Google Compute instances have a stable public DNS name? - google-compute-engine

This is a question in two parts:
Do GCE instances have a stable public DNS name? The default DNS name for an instance with public IP a.b.c.d seems to be d.c.b.a.bc.googleusercontent.com.
If yes, what's the best way to obtain this information? Here's the hack I've been using thus far:
EXTERNAL_IP=$(curl -s http://bot.whatismyipaddress.com/)
EXTERNAL_DNS=$(dig +short -x ${EXTERNAL_IP})

A reverse lookup is fine to do, but for the IP address itself you would probably prefer using gcutil:
https://developers.google.com/compute/docs/gcutil/tips
EXTERNAL_IP=$(gcutil getinstance --format=csv --zone=[your_zone] [your_instance] | grep external-ip | cut -d "," -f 2)
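gcutil has since been deprecated in favor of gcloud; a rough equivalent (zone and instance names are placeholders, matching the command above) would be:
# fetch just the external NAT IP of the instance's first network interface
EXTERNAL_IP=$(gcloud compute instances describe [your_instance] --zone=[your_zone] \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)')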

GCE instances don't currently have a public DNS name for their external IP address. But there is now a gcloud compute config-ssh (docs) command that's a pretty good substitute.
This will insert Host blocks into your ~/.ssh/config file that contain the IP address and configuration for the host key.
Although this only helps with SSH (and SSH-based applications like Mosh and git+ssh), it does have a few advantages over DNS:
There is no caching/propagation delay as you might have with DNS
It pre-populates the right host key, and the host key is checked the right way even if the ephemeral IP address changes.
Example:
$ gcloud compute config-ssh
...
$ ssh myhost.us-west1-b.surly-koala-232
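For reference, the block inserted into ~/.ssh/config looks roughly like this (the IP, paths, and instance ID below are illustrative, not real output):
Host myhost.us-west1-b.surly-koala-232
    HostName 203.0.113.10
    IdentityFile /home/you/.ssh/google_compute_engine
    UserKnownHostsFile=/home/you/.ssh/google_compute_known_hosts
    HostKeyAlias=compute.1234567890123456789
    IdentitiesOnly=yes
    CheckHostIP=no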

Related

How to allow IPs dynamically using an ingress controller

My structure
Kubernetes cluster on GKE
Ingress controller deployed using helm
An application which returns a list of IP ranges (note: it gets updated periodically)
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
A secured application which is not working
What am I trying to do?
Have my clients' IPs in my API endpoint, which is done:
curl https://allowed.domain.com
172.30.1.210/32,172.30.2.60/32
Deploy my example app with an ingress so it can pull from https://allowed.domain.com and allow those clients to access the app
What did I try that didn't work?
Deploying the application with the include feature of nginx:
nginx.ingress.kubernetes.io/configuration-snippet: |
include /tmp/allowed-ips.conf;
deny all;
Yes, it's working, but the problem is that when /tmp/allowed-ips.conf gets updated, the ingress config doesn't.
I tried to use an if condition to pull the IPs from the endpoint and deny access if the user is not in the list:
nginx.ingress.kubernetes.io/configuration-snippet: |
set $deny_access off;
if ($remote_addr !~ (https://2ce8-73-56-131-204.ngrok.io)) {
set $deny_access on;
}
I am using the nginx.ingress.kubernetes.io/whitelist-source-range annotation, but that is not what I am looking for.
None of these options are working for me.
From the official docs of the ingress-nginx controller:
The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app)
After the nginx ingress resource is initially created, the ingress controller assembles the nginx.conf file and uses it for routing traffic. The Nginx web server does not auto-reload its configuration when nginx.conf and other config files change.
So, you can work around this problem in several ways:
update the k8s ingress resource with the new IP addresses and then apply the changes to the Kubernetes cluster (kubectl apply, kubectl patch, etc.); this covers your options 2 and 3, and a sketch follows this list.
run nginx -s reload inside an ingress Pod to reload the nginx configuration; this covers your option 1 with the included allow-list file.
$ kubectl exec ingress-nginx-controller-xxx-xxx -n ingress-nginx -- nginx -s reload
try to write a Lua script (there is a good example for Nginx+Lua+Redis here and here). You should have a good understanding of nginx and Lua to estimate whether it is worth trying.
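For options 2 and 3, a minimal sketch of updating the whitelist annotation from the command line (the ingress name, namespace, and IP list are assumptions):
# overwrite the source-range annotation with the freshly fetched list
kubectl -n default annotate ingress my-app --overwrite \
  nginx.ingress.kubernetes.io/whitelist-source-range="172.30.1.210/32,172.30.2.60/32"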
Sharing what I implemented at my workplace. We had a managed monitoring tool called Site24x7. The tool pings our server from their VMs, which have dynamic IPs, so we had to automate whitelisting of those IPs at GKE.
nginx.ingress.kubernetes.io/configuration-snippet allows you to set arbitrary Nginx configurations.
Set up a K8s CronJob resource in the relevant namespace.
The CronJob runs a shell script, which
fetches the list of IPs to be allowed (curl, getent, etc.)
generates a set of NGINX configurations (= the value for nginx.ingress.kubernetes.io/configuration-snippet)
runs a kubectl command which overwrites the annotation of the target ingresses.
Example shell/bash script:
#!/bin/bash
# Resolve the current Site24x7 probe IPs and turn each one into an nginx "allow" line.
site24x7_ip_lookup_url="site24x7.enduserexp.com"
site247_ips=$(getent ahosts "$site24x7_ip_lookup_url" | awk '{print "allow "$1";"}' | sort -u)

# Assemble the full snippet: static allows first, then the dynamic IPs, then a default deny.
ip_whitelist=$(cat <<-EOT
# ---------- Default whitelist (Static IPs) ----------
# Office
allow vv.xx.yyy.zzz;
# VPN
allow aa.bbb.ccc.ddd;
# ---------- Custom whitelist (Dynamic IPs) ----------
$site247_ips
deny all;
EOT
)

# Overwrite the annotations on every target ingress.
for target_ingress in $TARGET_INGRESS_NAMES; do
kubectl -n "$NAMESPACE" annotate ingress/"$target_ingress" \
--overwrite \
nginx.ingress.kubernetes.io/satisfy="any" \
nginx.ingress.kubernetes.io/configuration-snippet="$ip_whitelist" \
description="*** $(date '+%Y/%m/%d %H:%M:%S') NGINX annotation 'configuration-snippet' updated by cronjob $CRONJOB_NAME ***"
done
The shell/bash script can be stored as a ConfigMap to be mounted into the CronJob resource.
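For illustration, a CronJob along those lines might look like this (the names, schedule, and image are assumptions; the service account needs RBAC permission to annotate ingresses):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: whitelist-updater
spec:
  schedule: "*/15 * * * *"            # run every 15 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: whitelist-updater
          restartPolicy: OnFailure
          containers:
          - name: updater
            image: bitnami/kubectl:latest   # any image with bash + kubectl works
            command: ["/bin/bash", "/scripts/update-whitelist.sh"]
            env:
            - name: NAMESPACE
              value: "default"
            - name: TARGET_INGRESS_NAMES
              value: "my-app-ingress"
            - name: CRONJOB_NAME
              value: "whitelist-updater"
            volumeMounts:
            - name: script
              mountPath: /scripts
          volumes:
          - name: script
            configMap:
              name: whitelist-updater-script   # holds update-whitelist.sh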

How to permanently save a sub-interface IP (logical/virtual interface) on Solaris

I am using Solaris 10.
I want to add sub-interfaces (including IP and mask) and save them so they persist when the server reboots (e.g. bge0:1, bge0:2, ...).
Generally, I use NetConf to add a sub-interface and assign an IP to it, but that takes too long when adding multiple interfaces.
Is there another way to do it, like creating a file and running it?
Thanks.
Create files in /etc named hostname.bge0:1 and so on, each containing the appropriate information:
yourhostname netmask + broadcast + up
You also need to add pairs for those hostnames in /etc/hosts:
<your IP> yourhostname
Then plumb the interface:
ifconfig bge0:1 plumb
and then bring it up:
ifconfig bge0:1 `cat /etc/hostname.bge0:1`
Do not forget to add the appropriate netmask record in /etc/netmasks:
<your IP network> <your IP network netmask>
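Putting the pieces together for bge0:1, with an assumed address of 192.168.10.5/255.255.255.0 and hostname myhost-1:
# /etc/hostname.bge0:1 holds the configuration applied at boot
echo "myhost-1 netmask + broadcast + up" > /etc/hostname.bge0:1
# map the hostname to the address
echo "192.168.10.5 myhost-1" >> /etc/hosts
# record the netmask for the network
echo "192.168.10.0 255.255.255.0" >> /etc/netmasks
# create the logical interface and bring it up now
ifconfig bge0:1 plumb
ifconfig bge0:1 `cat /etc/hostname.bge0:1`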

gcloud: how to get IP addresses of a group of managed instances

My problem is to create 5k instances and retrieve their public IP addresses.
Specifically, for zone us-west1-a I can create a group of 50 instances with the following:
gcloud compute instance-groups managed create test --base-instance-name morning --size 50 --template benchmark-template-micro --zone us-west1-a
Questions:
How do I specify the startup script to run on each created instance? I can't find it here.
How do I get the public IP addresses of those created instances?
The startup-script can be assigned to the instance template used; see here.
One can obtain information with gcloud compute instance-groups managed describe,
but there are no public IP addresses unless you assign external IP addresses.
As mentioned by Martin, the startup-script is configured in the instance template.
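For example, a sketch of attaching a startup script when creating the template (the template name comes from the question; startup.sh is an assumed local file):
gcloud compute instance-templates create benchmark-template-micro \
  --machine-type=f1-micro \
  --metadata-from-file=startup-script=startup.sh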
Unfortunately, there is no single API that lists the IP addresses of the instances in the group. There are, however, APIs (and gcloud commands) to get the list of instances and the IP addresses of individual instances. Here is an example that fetches this information from the command line:
gcloud compute instance-groups list-instances $INSTANCE_GROUP --uri \
| xargs -I '{}' gcloud compute instances describe '{}' \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'
To speed this up, you may want to use the -P flag of xargs to parallelize the instance describe requests.
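For example, -P 10 keeps up to ten describe requests in flight at once:
gcloud compute instance-groups list-instances $INSTANCE_GROUP --uri \
| xargs -P 10 -I '{}' gcloud compute instances describe '{}' \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'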
Since all instances in the group share the same prefix, you can also just do a list search by prefix. Note that this may pull in other instances that use the same prefix even if they are not part of the instance group:
gcloud compute instances list --filter="name ~ ^${PREFIX}" \
--flatten networkInterfaces[].accessConfigs[] \
--format 'csv[no-heading](name,networkInterfaces.accessConfigs.natIP)'

gcloud compute addresses list returns 0 results although I have external ephemeral addresses

I have one Google Compute instance in my project with an external IP.
A describe command on the instance shows me:
networkInterfaces:
- accessConfigs:
- kind: compute#accessConfig
name: External NAT
natIP: xx.yyy.nnn.mmm
networkTier: PREMIUM
type: ONE_TO_ONE_NAT
fingerprint: hjhjhjhjh=
kind: compute#networkInterface
name: nic0
network: https://www.googleapis.com/compute/v1/projects/foo-201800/global/networks/default
However, when I run the following in Cloud Shell:
$ gcloud config get-value project
Your active configuration is: [cloudshell-xxxx]
foo-201800
$ gcloud compute addresses list
Listed 0 items.
$ gcloud compute addresses list --global
Listed 0 items.
$ gcloud version
Google Cloud SDK 215.0.0
...snipped...
Are external ephemeral IP addresses not counted in the gcloud compute addresses output?
The gcloud compute addresses command only lists static IP addresses reserved in a project. More specifically, the summary of the command gives the following description:
'gcloud compute addresses list': lists summary information of addresses in a project.
The Compute Engine documentation on IP addresses defines static external IP addresses and ephemeral IP addresses as follows:
Static external IP addresses are assigned to a project
Ephemeral external IP addresses are available to VM instances and forwarding rules.
Ephemeral IPs are attached to resources, not to the project; when you run gcloud compute addresses, you are only listing the IPs attached to the project, i.e. the static external IPs.
Here is an example of how to list the different types of IP address.
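A sketch (the format string is illustrative; ephemeral external IPs are read from the instances themselves):
gcloud compute addresses list    # static (reserved) addresses only
gcloud compute instances list \
  --format='table(name, networkInterfaces[0].accessConfigs[0].natIP)'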

GCE + Load Balancer + Instance without public IP

I have an instance that on purpose does not have a public IP.
I have a GCE Network Load Balancer that uses the above instance in its target pool.
Everything works great.
Then I wanted my instance to communicate with the internet, so I followed this documentation: https://cloud.google.com/compute/docs/networking#natgateway (Configuring a NAT gateway)
The instance can communicate with the internet fine, but the load balancer cannot communicate with my instance anymore.
I think these steps created the issue with the load balancer:
$ gcloud compute routes create no-ip-internet-route --network gce-network \
--destination-range 0.0.0.0/0 \
--next-hop-instance nat-gateway \
--next-hop-instance-zone us-central1-a \
--tags no-ip --priority 800
user#nat-gateway:~$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Do you know what can be done to make both things work together?
I have recreated the environment you've described and did not run into any issues.
$ gcloud compute routes create no-ip-internet-route --network gce-network \
--destination-range 0.0.0.0/0 \
--next-hop-instance nat-gateway \
--next-hop-instance-zone us-central1-a \
--tags no-ip --priority 800
The only thing the above command does is create a routing rule so that instances with no external IP are pointed to the NAT gateway for any traffic they need to send out. This will not affect the LB's ability to reach your instance.
In my test, I followed the exact guide you referenced, which you can find here, and that results in:
1 new network
1 firewall rule to allow SSH on port 22
1 firewall rule to allow all internal traffic
1 new instance to act as a NAT gateway
1 new internal instance with no external IP address
I also added the internal instance to a 'TargetPool' and created an LB for the purpose of the test.
My internal instance was accessible both via the LB's external address and internally via the NAT Gateway. The internal instance was also able to communicate with the Internet due to the NAT Gateway's configuration. Everything works as expected.
My recommendation for you and other readers (as this post is now rather old) is to try again. Make sure that you do not have any other conflicting rules, routes or software.
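As a starting point, checks along these lines may help surface conflicts (the network name comes from the question):
gcloud compute routes list --filter='network ~ gce-network'
gcloud compute firewall-rules list --filter='network ~ gce-network'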