See service hostname from OpenShift CLI - openshift

In OpenShift Container Platform v3.11 I am able to see the service hostname from the web console interface by inspecting the service.
In the web console, go to Applications > Services > service-name > Details.
You will see the following info:
Selectors: app=nexus3, deploymentconfig=nexus3
Type: ClusterIP
IP: 172.30.154.6
Hostname: nexus3.xm-nexus.svc
Session affinity: None
Is there a way to see the service hostname from the CLI using the oc tool? I haven't been able to find it from reading the docs or online.
Example Hostname: nexus3.xm-nexus.svc
If you issue oc get svc, you will see the following, but not the hostname.
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
nexus   ClusterIP   172.30.186.244   <none>        3000/TCP   2h

Not directly. The hostname doesn't exist on the service object itself, so you won't see it via the CLI. However, it is just a concatenation of (service-name).(service-namespace).svc. See the docs on DNS for services.
You could template it out via the cli if desired.
oc get svc nexus -o go-template --template='{{.metadata.name}}.{{.metadata.namespace}}.svc{{println}}'
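If you prefer jsonpath output, this should be equivalent (same fields, different templating syntax):
oc get svc nexus -o jsonpath='{.metadata.name}.{.metadata.namespace}.svc{"\n"}'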

Use oc describe service <service-name> -n <namespace>, e.g.:
oc describe service nexus3 -n xm-nexus
Services are automatically given DNS names of the form <service>.<namespace>.svc.

I think the simplest way is
oc get routes
which shows the hostname that you need to access it by URL:
NAME          HOST/PORT                                     PATH   SERVICES      PORT   TERMINATION   WILDCARD
demowildfly   demowildfly-swarmdemo2.192.168.42.87.nip.io          demowildfly   8080                 None
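If you just want the hostname of a single route, jsonpath works here too (route name taken from the example output above):
oc get route demowildfly -o jsonpath='{.spec.host}{"\n"}'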

Related

OpenShift: How can I test connectivity from pod to pod in the same namespace

I have two pods, pod1 and pod2, in the same namespace.
oc rsh pod1 curl http://service1 says:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
We use the Istio service mesh and I only have developer access; I cannot view the Istio configuration.
Should I add anything to the Helm chart to make the network connection work?

How would I connect to a MySQL on the host machine from inside a docker kubernetes pod? [duplicate]

This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
I have a Docker container running Jenkins. As part of the build process, I need to access a web server that runs locally on the host machine. Is there a way the host web server (which can be configured to run on a port) can be exposed to the Jenkins container?
I'm running Docker natively on a Linux machine.
UPDATE:
In addition to @larsks' answer below, to get the host's IP address from the host machine itself, I do the following:
ip addr show docker0 | grep -Po 'inet \K[\d.]+'
For all platforms
Docker v 20.10 and above (since December 14th 2020)
On Linux, add --add-host=host.docker.internal:host-gateway to your Docker command to enable this feature. (See below for Docker Compose configuration.)
Use your internal IP address or connect to the special DNS name host.docker.internal which will resolve to the internal IP address used by the host.
To enable this in Docker Compose on Linux, add the following lines to the container definition:
extra_hosts:
  - "host.docker.internal:host-gateway"
For macOS and Windows
Docker v 18.03 and above (since March 21st 2018)
Use your internal IP address or connect to the special DNS name host.docker.internal which will resolve to the internal IP address used by the host.
Linux support pending https://github.com/docker/for-linux/issues/264
MacOS with earlier versions of Docker
Docker for Mac v 17.12 to v 18.02
Same as above but use docker.for.mac.host.internal instead.
Docker for Mac v 17.06 to v 17.11
Same as above but use docker.for.mac.localhost instead.
Docker for Mac 17.05 and below
To access the host machine from the docker container you must attach an IP alias to your network interface. You can bind whichever IP you want; just make sure you're not using it for anything else.
sudo ifconfig lo0 alias 123.123.123.123/24
Then make sure that your server is listening on the IP mentioned above, or on 0.0.0.0. If it's listening only on localhost 127.0.0.1, it will not accept the connection.
Then just point your docker container to this IP and you can access the host machine!
To test you can run something like curl -X GET 123.123.123.123:3000 inside the container.
The alias will reset on every reboot so create a start-up script if necessary.
Solution and more documentation here: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
When running Docker natively on Linux, you can access host services using the IP address of the docker0 interface. From inside the container, this will be your default route.
For example, on my system:
$ ip addr show docker0
7: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::f4d2:49ff:fedd:28a0/64 scope link
       valid_lft forever preferred_lft forever
And inside a container:
# ip route show
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 src 172.17.0.4
It's fairly easy to extract this IP address using a simple shell
script:
#!/bin/sh
# From inside a container, the default gateway is the docker0 address on the host.
hostip=$(ip route show | awk '/default/ {print $3}')
echo "$hostip"
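You can then use that address directly from inside the container, assuming curl (or similar) is available in the image; host port 8000 is a placeholder for whatever your host service listens on:
curl "http://$(ip route show | awk '/default/ {print $3}'):8000"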
You may need to modify the iptables rules on your host to permit
connections from Docker containers. Something like this will do the
trick:
# iptables -A INPUT -i docker0 -j ACCEPT
This would permit access to any ports on the host from Docker
containers. Note that:
- iptables rules are ordered, and this rule may or may not do the right thing depending on what other rules come before it.
- you will only be able to access host services that are either (a) listening on INADDR_ANY (aka 0.0.0.0) or (b) explicitly listening on the docker0 interface (you can verify which with the check shown below).
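For example, to check what address a service on the host is actually bound to, list the listening sockets (port 8000 is a placeholder):
sudo ss -tlnp | grep ':8000'
A local address of 0.0.0.0:8000 (or the docker0 address) is reachable from containers; 127.0.0.1:8000 is not.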
If you are using Docker on MacOS or Windows 18.03+, you can connect to the magic hostname host.docker.internal.
Lastly, under Linux you can run your container in the host network namespace by setting --net=host; in this case localhost on your host is the same as localhost inside the container, so containerized services will behave like non-containerized services and will be accessible without any additional configuration.
Use --net="host" in your docker run command, then localhost in your docker container will point to your docker host.
The answer is...
Replace http://127.0.0.1 or http://localhost with http://host.docker.internal.
Why?
See the source in Docker's docs.
My Google search brought me here, and after digging in the comments I found it's a duplicate of From inside of a Docker container, how do I connect to the localhost of the machine?. I voted to close this one as a duplicate, but since people (including myself!) often scroll down to the answers rather than reading the comments carefully, here is a short answer.
For Linux systems, you can – starting from major version 20.10 of the Docker engine – now also communicate with the host via host.docker.internal. This doesn't happen automatically; you need to provide the following run flag:
--add-host=host.docker.internal:host-gateway
See
https://github.com/moby/moby/pull/40007#issuecomment-578729356
https://github.com/docker/for-linux/issues/264#issuecomment-598864064
Solution with docker-compose:
To access a host-based service, you can use the network_mode parameter:
https://docs.docker.com/compose/compose-file/#network_mode
version: '3'
services:
  jenkins:
    network_mode: host
EDIT 2020-04-27: recommended for use only in local development environment.
EDIT 2021-09-21: IHaveHandedInMyResignation wrote that it does not work on Mac and Windows; the option is supported only on Linux.
I created a docker container for doing exactly that: https://github.com/qoomon/docker-host
You can then simply use the container name as a DNS name to access the host system, e.g.:
curl http://dockerhost:9200
Currently the easiest way to do this on Mac and Windows is via the host.docker.internal hostname, which resolves to the host machine's IP address. Unfortunately it does not work on Linux yet (as of April 2018).
We found that a simpler solution to all this networking junk is to just use the domain socket for the service. If you're trying to connect to the host anyway, just mount the socket as a volume, and you're on your way. For postgresql, this was as simple as:
docker run -v /var/run/postgresql:/var/run/postgresql
Then we just set up our database connection to use the socket instead of network. Literally that easy.
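As a sketch (the image name and connection parameters are placeholders), the client container mounts the socket directory:
docker run -v /var/run/postgresql:/var/run/postgresql my-app
and then, inside the container, the client uses the socket directory as the "host" instead of a hostname:
psql "host=/var/run/postgresql user=postgres dbname=mydb"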
I've explored the various solutions and found this to be the least hacky one:
1. Define a static IP address for the bridge gateway IP.
2. Add the gateway IP as an extra entry in the extra_hosts directive.
The only downside is that if you have multiple networks or projects doing this, you have to ensure that their IP address ranges do not conflict.
Here is a Docker Compose example:
version: '2.3'
services:
  redis:
    image: "redis"
    extra_hosts:
      - "dockerhost:172.20.0.1"
networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
You can then access ports on the host from inside the container using the hostname "dockerhost".
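A quick way to verify the entry from inside the container, e.g. using the redis service above (getent should be available in most Debian-based images):
docker-compose run --rm redis getent hosts dockerhost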
For docker-compose setups using bridge networking to create a private network between containers, the accepted solution using docker0 doesn't work, because the egress interface from the containers is not docker0 but a randomly generated interface name, such as:
$ ifconfig
br-02d7f5ba5a51: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.32.1 netmask 255.255.240.0 broadcast 192.168.47.255
Unfortunately that random id is not predictable and will change each time compose has to recreate the network (e.g. on a host reboot). My solution to this is to create the private network in a known subnet and configure iptables to accept that range:
Compose file snippet:
version: "3.7"
services:
mongodb:
image: mongo:4.2.2
networks:
- mynet
# rest of service config and other services removed for clarity
networks:
mynet:
name: mynet
ipam:
driver: default
config:
- subnet: "192.168.32.0/20"
You can change the subnet if your environment requires it. I arbitrarily selected 192.168.32.0/20 by using docker network inspect to see what was being created by default.
Configure iptables on the host to permit the private subnet as a source:
$ iptables -I INPUT 1 -s 192.168.32.0/20 -j ACCEPT
This is the simplest possible iptables rule. You may wish to add other restrictions, for example by destination port. Don't forget to persist your iptables rules when you're happy they're working.
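For example, on Debian/Ubuntu the iptables-persistent package can save the current rules for you (other distributions have their own mechanisms):
sudo netfilter-persistent save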
This approach has the advantage of being repeatable and therefore automatable. I use ansible's template module to deploy my compose file with variable substitution and then use the iptables and shell modules to configure and persist the firewall rules, respectively.
This is an old question with many answers, but none of them fit my context well enough. In my case, the containers are very lean and do not contain any of the networking tools necessary to extract the host's IP address from within the container.
Also, the --net="host" approach is very rough and is not applicable when one wants a well-isolated network configuration with several containers.
So, my approach is to extract the host's address on the host side and then pass it to the container with the --add-host parameter:
$ docker run --add-host=docker-host:`ip addr show docker0 | grep -Po 'inet \K[\d.]+'` image_name
or, save the host's IP address in an environment variable and use the variable later:
$ DOCKERIP=`ip addr show docker0 | grep -Po 'inet \K[\d.]+'`
$ docker run --add-host=docker-host:$DOCKERIP image_name
And then the docker-host is added to the container's hosts file, and you can use it in your database connection strings or API URLs.
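For example, in the spirit of the original question, a MySQL client inside the container could then reach a server running on the host like this (user and database names are placeholders):
mysql -h docker-host -P 3306 -u myuser -p mydb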
For me (Windows 10, Docker Engine v19.03.8) it was a mix of https://stackoverflow.com/a/43541732/7924573 and https://stackoverflow.com/a/50866007/7924573:
change the host/IP to host.docker.internal,
e.g.: LOGGER_URL = "http://host.docker.internal:8085/log"
set the network_mode to bridge (if you want to maintain the port forwarding; if not, use host):
version: '3.7'
services:
  server:
    build: .
    ports:
      - "5000:5000"
    network_mode: bridge
Or alternatively: use --net="bridge" if you are not using docker-compose (similar to https://stackoverflow.com/a/48806927/7924573).
As pointed out in previous answers: This should only be used in a local development environment.
For more information read: https://docs.docker.com/compose/compose-file/#network_mode and https://docs.docker.com/docker-for-windows/networking/#use-cases-and-workarounds
You can access the local web server running on your host machine in two ways.
Approach 1: with the public IP
Use the host machine's public IP address to access the web server from the Jenkins docker container.
Approach 2: with the host network
Use "--net host" to add the Jenkins docker container to the host's network stack. Containers deployed on the host's stack have full access to the host's interfaces. You can then access the local web server from the docker container via a private IP address of the host machine.
$ docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
b3554ea51ca3   bridge   bridge   local
2f0d6d6fdd88   host     host     local
b9c2a4bc23b2   none     null     local
Start a container with the host network, e.g. docker run --net host -it ubuntu, and run ifconfig to list all the available network IP addresses that are reachable from the docker container.
E.g.: I started an nginx server on my local host machine and I am able to access the nginx website URLs from the Ubuntu docker container.
docker run --net host -it ubuntu
$ docker ps
CONTAINER ID   IMAGE    COMMAND       CREATED          STATUS          PORTS   NAMES
a604f7af5e36   ubuntu   "/bin/bash"   22 seconds ago   Up 20 seconds           ubuntu_server
Accessing the Nginx web server (running on the local host machine) from the Ubuntu docker container, using a private network IP address:
root@linuxkit-025000000001:/# curl 192.168.x.x -I
HTTP/1.1 200 OK
Server: nginx/1.15.10
Date: Tue, 09 Apr 2019 05:12:12 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 26 Mar 2019 14:04:38 GMT
Connection: keep-alive
ETag: "5c9a3176-264"
Accept-Ranges: bytes
In the almost 7 years since the question was asked, either docker has changed or no one has tried this way. So I will include my own answer.
I found that all the answers use complex methods. Today, I needed this and found 2 very simple ways:
1. Use ipconfig or ifconfig on your host and make note of all IP addresses. At least two of them can be used by the container.
   - I have a fixed local network address on the WiFi LAN adapter: 192.168.1.101. This could be 10.0.1.101; the result will change depending on your router.
   - I use WSL on Windows, and it has its own vEthernet address: 172.19.192.1
2. Use host.docker.internal. Most answers have this or another form of it, depending on the OS. The name suggests it is now globally used by docker.
A third option is to use the WAN address of the machine, or in other words the IP given by the service provider. However, this may not work if the IP is not static, and it requires routing and firewall settings.
PS: Although this is pretty much identical to the duplicate target, and I posted this answer there, I found this post first, so I am posting it here too, in case I forget my own answer.
The simplest option that worked for me:
I used the IP address of my machine on the local network (assigned by the router).
You can find this using the ifconfig command, e.g.:
ifconfig
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=400<CHANNEL_IO>
        ether f0:18:98:08:74:d4
        inet 192.168.178.63 netmask 0xffffff00 broadcast 192.168.178.255
        media: autoselect
        status: active
and then used the inet address. This worked for me to connect to any port on my machine.
When you have two docker images "already" created and you want two containers to communicate with one another,
you can conveniently run each container with its own --name and use the --link flag to enable communication between them. You do not get this during docker build, though.
When you are in a scenario like mine, however, where it is your
docker build -t "centos7/someApp" someApp/
that breaks when you try to
curl http://172.17.0.1:localPort/fileIWouldLikeToDownload.tar.gz > dump.tar.gz
and you get stuck with curl/wget returning "no route to host":
the reason is the security policy Docker puts in place, which by default bans communication from a container towards the host or other containers running on your host.
This was quite surprising to me, I must say; you would expect an ecosystem of docker containers running on a local machine to be able to access each other flawlessly, without too many hurdles.
The explanation for this is described in detail in the following documentation: http://www.dedoimedo.com/computers/docker-networking.html
Two quick workarounds are given that help you get moving by lowering the network security.
The simplest alternative is just to turn the firewall off - or allow all. This means running the necessary command, which could be systemctl stop firewalld, iptables -F or equivalent.

Basic Auth doesn't work in kubernetes ingress

I have created a pypiserver in a kubernetes cluster, using the https://hub.docker.com/r/pypiserver/pypiserver docker image. I need to set up basic auth for the server I created, and I used this method: https://kubernetes.github.io/ingress-nginx/examples/auth/basic/
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: pypiserver
  labels:
    app: pypiserver
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: secret
    ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: pypiservice
          servicePort: 8080
  tls:
  - hosts:
    - example.com
    secretName: secret-tls
But my host name would be "www.example.com/8080", and I don't see that the ingress has any pod in the kubernetes cluster. The ingress is running fine, but I don't get auth for this host. (Also, I have http://IP-address:8080, which I converted to a domain through Cloudflare.)
Please let me know what I am doing wrong.
I don't know exactly which nginx ingress controller version you are using, but I can share what worked for me. I've reproduced it on my GKE cluster.
I installed my nginx ingress controller following this guide. Basically it came down to running the following commands:
If you're using GKE you need to initialize your user as a
cluster-admin with the following command:
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)
The following Mandatory Command is required for all deployments.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml
I'm using version 1.13 on my GKE cluster, so this tip also applies in my case:
Tip
If you are using a Kubernetes version previous to 1.14, you need to
change kubernetes.io/os to beta.kubernetes.io/os at line 217 of
mandatory.yaml, see Labels details.
But I dealt with it quite differently. Basically, you need your Nodes to have the kubernetes.io/os=linux label, so you can simply label them. The following command will do the job:
kubectl label node --all kubernetes.io/os=linux
Then we head to the Provider Specific Steps, which in the case of GKE came down to applying the following yaml:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/cloud-generic.yaml
Then you may want to verify your installation:
To check if the ingress controller pods have started, run the
following command:
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
or simply run:
kubectl get all -n ingress-nginx
It will also tell you whether all the required resources are properly deployed.
Next we need to write our ingress (ingress object/resource) containing the basic-auth related annotations. I was following the same tutorial as mentioned in your question.
First we need to create our auth file containing username and hashed password:
$ htpasswd -c auth foo
New password: <bar>
New password:
Re-type new password:
Adding password for user foo
Once we have it, we need to create a Secret object, which we'll then use in our ingress:
$ kubectl create secret generic basic-auth --from-file=auth
secret "basic-auth" created
Once it is created we can check if everything went well:
$ kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
  auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK
kind: Secret
metadata:
  name: basic-auth
  namespace: default
type: Opaque
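If you want to double-check what ended up in the secret, you can decode it; it should print the same foo:<hashed password> line that htpasswd generated:
$ kubectl get secret basic-auth -o jsonpath='{.data.auth}' | base64 -d
foo:$apr1$OFG3Xybp$ckL0FHDAkoXYIlH9.cysT0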
Alright, so far so good...
Then we need to create our ingress resource/object.
My ingress-with-auth.yaml file looks slightly different from the one in the instructions; namely, I just added kubernetes.io/ingress.class: nginx to make sure my nginx ingress controller is used rather than the built-in GKE solution:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    kubernetes.io/ingress.class: nginx
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: pypiserver
          servicePort: 80
In your example you may need to add the nginx prefix to your basic-auth related annotations:
ingress.kubernetes.io/auth-type: basic
ingress.kubernetes.io/auth-secret: secret
ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
so it looks like this:
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: secret
nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - ok"
First I used the address listed in my ingress resource (it doesn't appear there any more once I added the kubernetes.io/ingress.class: nginx annotation to my ingress definition):
$ kubectl get ingress
NAME                HOSTS         ADDRESS   PORTS   AGE
ingress-with-auth   foo.bar.com             80      117m
When I tried to access the pypi-server using this IP, it brought me directly to the page without any need for authentication. It looks like if you don't define a proper ingress class, the default one is used instead, so in practice your ingress definition with the auth-basic details isn't taken into consideration and isn't passed to the nginx ingress controller we installed in one of the previous steps.
So what IP address should be used to access your app? Run the following command, which will show you both the CLUSTER-IP (which can be accessed within your cluster from any Pod or Node) and the EXTERNAL-IP of your nginx ingress controller:
$ kubectl get service --namespace ingress-nginx
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.0.3.220   35.111.112.113   80:30452/TCP,443:30006/TCP   18h
You can basically host many different websites in your cluster and all of them will be available through this IP, all on the default http port 80 (or https 443 in your case). The only difference between them will be the hostname that you pass in the Host header of your HTTP request.
Since I don't have a domain pointing to this external IP address and can't simply access my website by going to http://foo.bar.com, I need to somehow pass the hostname I'm requesting to the 35.111.112.113 address. It can be done in a few ways:
I installed the ModHeader extension in my Google Chrome browser, which allows me to modify my HTTP request headers and set the hostname I'm requesting to any value I want.
You can also do it using curl, as follows:
curl -v http://35.111.112.113 -H 'Host: foo.bar.com' -u 'foo:bar'
You should be prompted for authentication.
If you don't provide the -u username:password flag, you should get 401 Authorization Required.
Basically, that's all.
Let me know if it helped you. Don't hesitate to ask additional questions if something isn't completely clear.
One more thing. If something still doesn't work, you may start by attaching to your nginx ingress controller Pod (check your Pod name first by running kubectl get pods -n ingress-nginx):
kubectl exec -ti -n ingress-nginx nginx-ingress-controller-pod /bin/bash
and checking the content of your /etc/nginx/nginx.conf file. Look for foo.bar.com (or in your case example.com). It should contain similar lines:
auth_basic "Authentication Required - foo";
auth_basic_user_file /etc/ingress-controller/auth/default-ingress-with-auth.passwd;
Then check if the file is present in the indicated location /etc/ingress-controller/auth/default-ingress-with-auth.passwd.
One note on your Service definition. The fact that the pypiserver container specifically exposes port 8080 doesn't mean that you need to use this port when accessing it via the ingress. In a Service definition, the port exposed by the container is called targetPort. You need to specify it when defining your Service, but the Service itself can expose a completely different port. I defined my Service using the following command:
kubectl expose deployment pypiserver --type=LoadBalancer --port=80 --target-port=8080
Note that the type should be set to NodePort or LoadBalancer. Then in your ingress definition you don't have to use 8080, but 80, which is the port exposed by your pypiserver Service. Note that there is servicePort: 80 in my ingress object/resource definition. Your example.com domain in Cloudflare should point with its A record to your nginx ingress controller's LoadBalancer Service IP (kubectl get svc -n ingress-nginx), without specifying any ports.
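For reference, a declarative Service roughly equivalent to that expose command could look like this (the app: pypiserver selector is an assumption; match it to your deployment's actual pod labels):
apiVersion: v1
kind: Service
metadata:
  name: pypiserver
spec:
  type: LoadBalancer
  selector:
    app: pypiserver
  ports:
  - port: 80          # port exposed by the Service (the servicePort used in the ingress)
    targetPort: 8080  # port the pypiserver container listens on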

Wildfly on OpenShift 3 with path-based routing and an accessible console

I have Wildfly 10 running on OpenShift Origin 3 in AWS with an elastic IP.
I set up a Route in OpenShift to map / to the wildfly service. This is working fine: if I go to http://my.ip.address I get the WildFly welcome page.
But if I map a different path, say /wf01, it doesn't work; I get a 404 Not Found error.
My guess is the router is passing along the /wf01 to the service? If that's the case, can I stop it from doing that? Otherwise, how can I map http://my.ip.address/wf01 to my wildfly service?
I also want the wildfly console to be accessible from outside (this is a demo server for my own use). I added "-bmanagement","0.0.0.0" to the deploymentconfig, but looking at the wildfly logs it is still binding to 127.0.0.1:
02:55:41,483 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990
A router today cannot remap/rewrite the incoming HTTP path to another path value before passing it along. A workaround is to mount another route+service at the root that handles the root path and redirects or forwards.
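For completeness, a path-based route can be created from the CLI; note that the path is still passed through to the pod unchanged, so the application has to actually serve /wf01 (the service and route names are placeholders):
oc expose service wildfly --name=wildfly-wf01 --path=/wf01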
You can also use port-forward:
oc port-forward -h
Forward 1 or more local ports to a pod
Usage:
oc port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [options]
Examples:
# Listens on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
$ oc port-forward -p mypod 5000 6000
# Listens on port 8888 locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod 8888:5000
# Listens on a random port locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod :5000
# Listens on a random port locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod 0:5000
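For the admin console specifically, port-forwarding works even for listeners bound to 127.0.0.1 inside the pod (per the log above), so something like this should do (the pod name is a placeholder):
oc port-forward wildfly-pod 9990:9990
and then open http://localhost:9990 locally.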

Why can't I access my Kubernetes service via its IP?

I have a Kubernetes service on GKE as follows:
$ kubectl describe service staging
Name: staging
Namespace: default
Labels: <none>
Selector: app=jupiter
Type: NodePort
IP: 10.11.246.27
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31683/TCP
Endpoints: 10.8.0.33:1337
Session Affinity: None
No events.
I can access the service from a VM directly via one of its endpoints (10.8.0.33:1337) or via the node port (10.240.251.174:31683 in my case). However, if I try to access 10.11.246.27:80, I get nothing. I've also tried ports 1337 and 31683.
Why can't I access the service via its IP? Do I need a firewall rule or something?
Service IPs are virtual IPs managed by kube-proxy. So, in order for that IP to be meaningful, the client must also be a part of the kube-proxy "overlay" network (have kube-proxy running, pointing at the same apiserver).
Pod IPs on GCE/GKE are managed by GCE Routes, which is more like an "underlay" of all VMs in the network.
There are a couple of ways to access non-public services from outside the cluster. Here they are in more detail, but in short:
- Create a bastion GCE route for your cluster's services.
- Install your cluster's kube-proxy anywhere you want to access the cluster's services.
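Related to the explanation above: a quick way to confirm that the virtual IP does work from inside the overlay is to hit it from a throwaway pod (assuming the busybox image can be pulled; the IP is the one from the question):
kubectl run -i -t --rm test --image=busybox --restart=Never -- wget -qO- http://10.11.246.27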