VNIC not attaching to instance when nodes and pods are in different subnets - oracle-cloud-infrastructure

I'm trying to create a k8s cluster in Oracle OCI with VCN-native pod networking. I have separate subnets for my pods and nodes (I followed example 4 here), and when OCI tries to attach the secondary VNIC to the instance, the attachment fails and its status never gets past "Attaching". However, when I use the same subnet for both pods and nodes, it attaches successfully. Anyone know what's going on?
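Not a fix, but for anyone debugging the same symptom: you can watch the attachment state directly with the OCI CLI instead of the console (a sketch; both OCIDs below are placeholders to substitute):

# Placeholders: substitute your own compartment and instance OCIDs
$ oci compute vnic-attachment list \
    --compartment-id ocid1.compartment.oc1..example \
    --instance-id ocid1.instance.oc1..example \
    --query 'data[*].{id:id,state:"lifecycle-state",subnet:"subnet-id"}'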

Related

All nodes failing with ingress

I installed Gloo in my cluster as the ingress controller (I am using EKS):
glooctl install ingress
This creates a new ELB, but in the newly created ELB the status of all the nodes is OutOfService (they're failing health checks).
Does anyone have an idea what the issue could be? All my pods are working fine on the nodes. (Also, my nodes don't have a public IP; could this be the reason?)
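One way to narrow this down (a sketch assuming Gloo's defaults; the namespace and proxy service name vary by Gloo version): find the NodePort the ELB health check targets and confirm the nodes' security group allows the ELB to reach it. Nodes without public IPs are fine on their own, as long as the ELB and the nodes share a VPC and the security group permits the health-check port.

# List the services Gloo created; the proxy service is the one the ELB fronts
$ kubectl get svc -n gloo-system
# Show its NodePorts (the service name here is an assumption; use whatever the
# previous command shows) and check the security group allows them from the ELB
$ kubectl describe svc ingress-proxy -n gloo-system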

Connecting to Kubernetes service from Cloud Compute instance

We are in the process of moving all our services over to Docker hosted on Google Container Engine. In the meantime we have some services in Docker and some not.
Within Kubernetes, service discovery is easy via DNS, but how do I resolve services from outside my container cluster? I.e., how do I connect from a Google Compute Engine instance to a service running in Kubernetes?
The solution I have for now is to use the service's clusterIP address.
You can see this IP address by executing kubectl get svc. This IP address is not static by default, but you can assign it when defining your service.
From the documentation:
You can specify your own cluster IP address as part of a Service creation request. To do this, set the spec.clusterIP field.
The services are then accessed from outside the cluster via IP address instead of DNS name.
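For illustration, a minimal sketch of pinning the cluster IP at creation time (the service name, selector, and address are placeholders; the address must fall inside the cluster's service CIDR and not already be in use):

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-service          # placeholder name
spec:
  clusterIP: 10.35.240.50   # placeholder; must be inside the cluster's service CIDR and unused
  selector:
    app: my-app             # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
EOF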
Update
After deploying another cluster the above solution did not work. It turns out that the new IP range could not be reached and that you do need to add a network route.
You can get the cluster IP range by running
$ gcloud container clusters describe CLUSTER_NAME --zone ZONE
In the output the ip range is shown with the key clusterIpv4Cidr, in my case it was 10.32.0.0/14.
Then create a route for that IP range that points to one of the nodes in your cluster (note that gcloud compute routes create requires a name for the new route as its first argument):
$ gcloud compute routes create ROUTE_NAME --destination-range 10.32.0.0/14 --next-hop-instance NODE0_INSTANCE_NAME

Are pods managed by a Deployment restarted when updating a Kubernetes cluster

The documentation says that only pods that are managed by a Replication Controller will be restarted after a Kubernetes cluster update on Google Container Engine.
What about the pods that are managed by a Deployment?
In this case the documentation's language is overly narrow. Any pods that are managed by a controller (Replication Controller, ReplicaSet, DaemonSet, Deployment, etc.) will be restarted. The warning is for folks who have created pods without a corresponding controller: because nodes are replaced with new nodes (rather than upgraded in place), pods without a controller ensuring that they remain running will simply disappear.
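A quick way to spot such bare pods before an upgrade (a sketch assuming jq is installed; pods listed here have no ownerReferences, i.e. no controller managing them):

# Print namespace/name for every pod that has no managing controller;
# these pods will not be recreated when their node is replaced
$ kubectl get pods --all-namespaces -o json | \
    jq -r '.items[] | select(.metadata.ownerReferences == null) | "\(.metadata.namespace)/\(.metadata.name)"'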

Kubernetes on GCE

I am trying to set up Kubernetes on GCE. My question is: let's say there are 20 minions in the cluster, and two services deployed with type LoadBalancer, with two replicas each, so K8S will put 2 pods on two different minions per service. Would the rest of the minions, which are not running any pods for these services, also get the iptables rules in chains KUBE-PORTALS-CONTAINER and KUBE-PORTALS-HOST for these two services? At least that is my observation, but I would like to confirm whether this is just how Kubernetes works on GCE or how K8S behaves irrespective of where it is deployed. Is the reason that any service should be reachable from any minion, no matter whether that minion is running the service's pods or not? Let me know if there is a better community for this question.
Would the rest of the minions which are not running any pods also get the iptables rules in chains KUBE-PORTALS-CONTAINER and KUBE-PORTALS-HOST for these two services?
Yes. The load balancer can forward external traffic to any node in the cluster, so every node needs to be able to receive traffic for a service and forward it to the appropriate pod.
I would like to confirm whether this is just how Kubernetes works on GCE or how K8S behaves irrespective of where it is deployed.
The iptables rules for services within the cluster are the same regardless of where Kubernetes is deployed. Making a service externally accessible differs slightly depending on where you deploy your cluster (e.g. on AWS you'd create the service as type NodePort instead of type LoadBalancer), so the iptables rules for services that are externalized can vary a bit.
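For the curious, you can inspect these chains directly on any node. A sketch, with the caveat that the KUBE-PORTALS-* chains belong to kube-proxy's old userspace mode; newer kube-proxy versions running in iptables mode install a KUBE-SERVICES chain instead:

# List the NAT rules kube-proxy installed for services (userspace proxy mode)
$ sudo iptables -t nat -L KUBE-PORTALS-HOST -n
# On clusters where kube-proxy runs in iptables mode, look here instead
$ sudo iptables -t nat -L KUBE-SERVICES -n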

Name Kubernetes-generated Google Cloud ingress load balancer

I have multiple Kubernetes clusters that have Google-powered load balancers (ingress LBs).
So right now, to access my k8s cluster service(s), I just have to hit the public IP given by $ kubectl get service. Cool.
My problem is that sometimes I need to tear down/create clusters and reconfigure services; those services might also need SSL certificates very soon, and my clusters'/services' builds need to be easily reproducible too (for cloud devs!).
The question is simple: can I instead of having an ingress load balancer IP have an ingress load balancer hostname?
Something like ${LOAD_BALANCER_NAME}.${GOOGLE_PROJECT_NAME}.appspot.com would be uber awesome.
Kubernetes integration with Google Cloud DNS is a feature request for which there is no immediate timeline (it will happen; I cannot comment on when). You can, however, create DNS records with the static IP of a load balancer.
If I've understood your problem correctly, you're using an L4 load balancer (service.Type=LoadBalancer) and you want to be able to delete the service/nodes etc. and continue using the same IP (because you have DNS records for it). In other words, you want a load balancer not tied to the service lifecycle. This is possible through an L7 load balancer [1] & [2], or by recreating the service with an existing IP [3] (see the sketch after the links below).
Note that [1] divorces the load balancer from service lifetime, but if you take down your entire cluster you will lose the load balancer. [2] is tied to the Ingress resource, so if you delete your cluster and recreate it, start the load balancer controller pod, and recreate the same Ingress resource, it will use the existing load balancer. Also note that both [1] and [2] depend on a "beta" resource that will be released with Kubernetes 1.1; I'd appreciate your feedback if you deploy them :)
[1] https://github.com/kubernetes/contrib/tree/master/service-loadbalancer
[2] https://github.com/kubernetes/contrib/pull/132
[3] https://github.com/kubernetes/kubernetes/issues/10323
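For [3], a minimal sketch of recreating a service against a pre-reserved address, assuming a static IP already reserved in GCP and using the spec.loadBalancerIP field (which postdates this answer; all names and the address are placeholders):

$ kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-service              # placeholder
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.7   # placeholder; a static IP reserved in GCP beforehand
  selector:
    app: my-app                 # placeholder
  ports:
  - port: 80
    targetPort: 8080
EOF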