We are in the process of moving all our services over to Docker, hosted on Google Container Engine. In the meantime we have some services in Docker and some not.
Within Kubernetes, service discovery is easy via DNS, but how do I resolve services from outside my container cluster? I.e., how do I connect from a Google Compute Engine instance to a service running in Kubernetes?
The solution I have for now is to use the service's clusterIP address.
You can see this IP address by executing kubectl get svc. This IP address is not static by default, but you can assign it when defining your service.
From the documentation:
You can specify your own cluster IP address as part of a Service creation request. To do this, set the spec.clusterIP field.
The services are accessed outside the cluster via IP address instead of DNS name.
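As a sketch of the approach above, a Service with a pinned clusterIP can be created like this (the service name, labels, ports, and the IP 10.32.0.100 are all placeholders; the IP must fall inside your cluster's service IP range, or the API server will reject it):

```shell
# Create a Service with an explicitly assigned clusterIP.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: 10.32.0.100
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF

# Verify the assigned IP shows up under CLUSTER-IP.
kubectl get svc my-service
```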
Update
After deploying another cluster the above solution did not work. It turns out that the new IP range could not be reached and that you do need to add a network route.
You can get the cluster IP range by running
$ gcloud container clusters describe CLUSTER_NAME --zone ZONE
In the output the IP range is shown under the key clusterIpv4Cidr; in my case it was 10.32.0.0/14.
Then create a route for that IP range that points to one of the nodes in your cluster:
$ gcloud compute routes create ROUTE_NAME --destination-range 10.32.0.0/14 --next-hop-instance NODE_INSTANCE_NAME
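The two steps above can be combined into a short script. This is a sketch under assumed names: the cluster "my-cluster", zone "us-central1-a", route name "route-to-cluster", and node instance "gke-my-cluster-node-1" are all placeholders:

```shell
# Look up the cluster's CIDR, then route it through one of the nodes.
CIDR=$(gcloud container clusters describe my-cluster \
    --zone us-central1-a --format='value(clusterIpv4Cidr)')

gcloud compute routes create route-to-cluster \
    --destination-range "$CIDR" \
    --next-hop-instance gke-my-cluster-node-1 \
    --next-hop-instance-zone us-central1-a
```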
My Setup
GKE / EKS - Managed Kubernetes Cluster
For now, due to business requirements, it is a k8s cluster with public endpoints.
This means the API server has a public endpoint, and the nodes have external public IP addresses.
nginx ingress is deployed for route-based traffic and exposed as a LoadBalancer-type service.
The LoadBalancer is an internet-facing (external) Network Load Balancer with a public IP address (say 35.200.24.99).
What I want to understand is this:
If my Pod makes a call to an outside API, what source IP will the outside API receive? Is it my LoadBalancer IP or the node's external IP address?
If it receives the LB IP, is there a way to change this behavior to send the node's IP address instead?
Also, is there any tool or way to check what source IP an outside API sees when my Pod makes a request to it?
I could not try out much myself.
I tried sending curl requests from an nginx Pod running inside the cluster, but did not get the results I needed, or could not interpret them.
If my Pod makes a call to the outside APIs, what will be the source IP that the outside API will receive? Is it my LoadBalancer IP or the Pod Node External IP Address
If your Pod sends the request and your cluster is public, the source will be the external IP of the node on which the Pod is running/scheduled.
If it receives the LB IP, is there a way to change this behavior to send the Pod Node IP Address?
It won't get the LB IP; it will be the IP of the node on which the Pod is running. If you want to manage a single outgoing IP, you can use a NAT gateway so that all traffic goes out from a single source IP.
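On GCP, the NAT gateway approach can be set up with Cloud NAT. A hedged sketch, where the router name "nat-router", NAT config name "nat-config", region, and network are all placeholder choices:

```shell
# Create a Cloud Router in the cluster's network and region.
gcloud compute routers create nat-router \
    --network default --region us-central1

# Attach a NAT config so all subnets egress via auto-allocated NAT IPs.
gcloud compute routers nats create nat-config \
    --router nat-router --region us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```

With this in place, outbound traffic from private nodes (and Pods on them) leaves through the NAT IPs rather than per-node external IPs.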
Also is there any tool or a way to simulate what is the Source IP, I am getting while Pod makes a request to an outside API
Exec into the Pod with kubectl exec -it <POD name> -- bash. Once you are inside the Pod, run curl ifconfig.me; it will return the IP from which you are hitting the site, which will usually be the node's IP.
Consider ifconfig.me as an outside API and you will get your result.
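If you don't have a convenient Pod to exec into, you can do the same check with a throwaway Pod. A sketch, assuming the cluster has outbound internet access (ifconfig.me simply echoes back the caller's public IP):

```shell
# Launch a one-off Pod, print the egress source IP, then clean it up.
kubectl run ip-check --rm -it \
    --image=curlimages/curl --restart=Never \
    -- curl -s ifconfig.me
```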
I have a Python application which has been deployed to OpenShift.
I am using an external REST service in my application. In order to use this service, the developers of the REST service have to whitelist my IP, because a firewall blocks unauthorized IP addresses.
How can I find the external IP of my application in OpenShift? I tried a few oc commands, but I am not sure whether I need the IP of the pod or of the service.
Out of the box, traffic from internal cluster components will appear to external infrastructure as coming from whichever OpenShift compute host their pods are currently scheduled on.
Information on internal cluster networking and how traffic traverses from a process running inside a pod to the external network can be found at SDN: Packet Flow.
In your case you could have the external application whitelist all of the ip addresses of the compute hosts that are expected to run your application pods.
Alternately you could set up an EgressIP. This will cause all traffic originating from a specific OpenShift project to appear as if it is originating from a single ip address. You could then have your external application whitelist the EgressIP address.
Documentation for configuring EgressIP can be found in the official documentation under "Enabling Static IPs for External Project Traffic".
What you are searching for is the external IP of the Service. A Service acts as a load balancer for your pods but by default it only has a cluster-wide IP address. If you need a URL to access it from the outside, you can create a Route. For your purpose where you need an actual external IP address, you can assign the Service an external IP manually. Information on how to do this can be found in the official OpenShift Docs.
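A hedged sketch of assigning an external IP to an existing Service by hand (the service name "my-service" and the IP 192.0.2.10 are placeholders; the IP must actually be routable to a cluster node, and cluster admins may restrict which ranges are permitted):

```shell
# Patch the Service to add a manually assigned external IP.
oc patch svc my-service \
    -p '{"spec":{"externalIPs":["192.0.2.10"]}}'

# The EXTERNAL-IP column should now show the assigned address.
oc get svc my-service
```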
I am stuck at one thing regarding CloudSQL.
I have my WordPress app running on GCE, and I created an instance group so I can use the autoscaler.
For the DB, I am using Cloud SQL.
The point where I am stuck is the "Authorized networks" setting in Cloud SQL, as it accepts only public IPv4 addresses.
When autoscaling happens, how do I know what IP will be attached to a new instance, so that the instance can reach the DB?
I can hard-code the Cloud SQL IP as a CNAME, but from the Cloud SQL side I cannot figure out how to grant access, short of opening the DB to everyone.
Please let me know what I am missing.
I also tried the Cloud SQL Proxy, but it does not come with a service setup on Linux... I hope you can understand my situation. Let me know if you have any ideas on this.
Thank you
The recommended way is to use the second generation instances and Cloud SQL Proxy, you’ll need to configure the Proxy on Linux and start it by using service account credentials as outlined at the provided link.
Another way is to use a startup script in your GCE instance template, so you can get your new instance's external IP address and add it to the Cloud SQL instance's authorized networks by using the gcloud sql instances patch command. The IP can be removed from the authorized networks in the same way by using a shutdown script. The external IP address of a GCE VM instance can be retrieved from metadata by running:
$ curl "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip" -H "Metadata-Flavor: Google"
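A sketch of what such a startup script could look like, where the Cloud SQL instance name "wp-db" is a placeholder. One caveat worth knowing: --authorized-networks replaces the entire list, so a real script would need to merge in the existing entries rather than overwrite them:

```shell
# Fetch this VM's external IP from the metadata server.
EXTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip")

# Authorize it on the Cloud SQL instance (overwrites the current list).
gcloud sql instances patch wp-db \
  --authorized-networks "${EXTERNAL_IP}/32"
```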
I have a cluster on Google Container Engine. There is an internal service with the domain app.superproject and exposed port 9999.
Also I have an instance in google compute engine.
How can I access the service by its domain name from a Google Compute Engine instance?
GKE is built on top of GCE; a GKE instance is also a GCE instance. You can view all your instances either in the web console or with the gcloud compute instances list command.
Note that they may not be in the same GCE virtual network, but in your use case it's better to put them in the same one, e.g. the default network (I guess they already are, but check their network properties if you are not sure). Then they're accessible to each other through their internal IPs (if not, check the firewall settings).
You can also use instance names, which resolve to internal IPs, e.g., ping instance1.
If they're not in the same GCE virtual network, you have to treat the service as an external service by exposing an external IP, which is not recommended in your use case.
I have deployed a hadoop cluster on google compute engine. I then run a machine learning algorithm (Cloudera's Oryx) on the master node of the hadoop cluster. The output of this algorithm is accessed via an HTTP REST API. Thus I need to access the output either by a web browser, or via REST commands. However, I cannot resolve the address for the output of the master node which takes the form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091.
I have allowed http traffic and allowed access to ports 80 and 8091 on the network. But I cannot resolve the address given. Note this http address is NOT the IP address of the master node instance.
I have followed along with examples for accessing IP addresses of compute instances. However, I cannot find examples of accessing a single node of a hadoop cluster on GCE, that follows this form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091. Any help would be appreciated. Thank you.
The reason you're seeing this is that the "HOSTNAME.c.PROJECT.internal" name is only resolvable from within the GCE network of that same instance itself; these domain names are not globally visible. So, if you were to SSH into your master node first, and then try to curl http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 then you should successfully retrieve the contents, whereas trying to access from your personal browser will just fail to resolve that hostname into any IP address.
So unfortunately, the quickest way for you to retrieve those contents is indeed to use the external IP address of your GCE instance. If you've already opened port 8091 on the network, simply use gcutil getinstance CLUSTER_NAME-m and look for the entry specifying external IP address; then plug that in as your URL: http://[external ip address]:8091.
If you turned up the cluster using bdutil, a more involved but nicer way to access your cluster is by running the bdutil socksproxy command. This opens a dynamic-port-forwarding SSH tunnel to your master node as a SOCKS5 proxy, so that you can then configure your browser to use localhost:1080 as your proxy server, make sure to enable remote DNS resolution, and then visit your browser using the normal http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 URL.
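The same SOCKS5 tunnel can be opened by hand with a dynamic SSH port forward; a sketch, where CLUSTER_NAME-m and the zone us-central1-a are placeholders for your master node and its zone:

```shell
# Open a SOCKS5 proxy on localhost:1080 via the master node.
# -D 1080 requests dynamic port forwarding; -N opens no remote shell.
gcloud compute ssh CLUSTER_NAME-m --zone us-central1-a -- -D 1080 -N
```

Point your browser at the localhost:1080 SOCKS proxy (with remote DNS resolution enabled) and the internal *.c.PROJECT_NAME.internal hostnames become resolvable.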