ingress Failed build model due to couldn't auto-discover subnets: unable to discover at least one subnet - kubernetes-ingress

I am getting an error "ingress Failed build model due to couldn't auto-discover subnets: unable to discover at least one subnet" while deploying ingress in EKS.
Steps already taken:
1. The cluster name is correct in the deployment file.
2. Below are the annotations I am using in the Ingress resource file:
annotations:
  alb.ingress.kubernetes.io/scheme: internal
  alb.ingress.kubernetes.io/target-type: ip
  kubernetes.io/ingress.class: alb
  kubernetes.io/role/internal-elb: 1
  alb.ingress.kubernetes.io/subnets: subnet-xxxx, subnet-yyy, subnet-zzz
  kubernetes.io/cluster/<ClusterName>: owned ---> (I am using the correct cluster name)
Key point:
I am using private subnets in EKS; the subnets were created separately with the proper tags.

2. Below are the annotations I am using in the Ingress resource file
...
kubernetes.io/role/internal-elb: 1
...
kubernetes.io/cluster/<ClusterName>: owned ---> (I am using the correct cluster name)
The above are tags, not annotations. Try tagging the three subnets from your question in the AWS console with kubernetes.io/role/internal-elb: 1 and kubernetes.io/cluster/<ClusterName>: owned, so that the load balancer controller can discover them.
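For reference, a minimal sketch of applying the same tags from the AWS CLI instead of the console (the subnet IDs and <ClusterName> are the placeholders from the question; substitute your real values):
aws ec2 create-tags \
  --resources subnet-xxxx subnet-yyy subnet-zzz \
  --tags Key=kubernetes.io/role/internal-elb,Value=1 Key=kubernetes.io/cluster/<ClusterName>,Value=owned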

multiple ingress controller in kubernetes

I have a microservice architecture running on a bare-metal Kubernetes cluster. We have mainly two services, one of which is to be exposed publicly while the other is to be made available internally. I'm using ingress-nginx to expose my service internally, but now I have to expose the other service as well, so I thought of using another ingress controller for that.
When I try to deploy another ingress controller in a different namespace, I get an error like:
Error: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:ingress-nginx:ingress-nginx" cannot list resource "endpoints" in API group "" at the cluster scope
and my first ingress also stops working properly.
The ingress deployment YAML I'm using is:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/baremetal/deploy.yaml
The second ingress YAML I'm using in the other namespace is: https://github.com/wali97/second-ingress-controller.yaml/blob/main/ingress.yaml

Pod level route restriction

EDITED:
I have a service running in OpenShift on 2 pods, let's call them P1 and P2.
The service does two things:
- An API
- We listen to Kafka messages from a topic and then process them.
Is there a way I can restrict all calls made to API only to P1 and all calls for Kafka only to P2 ?
My suggestion may not fit your requirements, but if each pod is running in its own project, the following approach is available.
First, configure each pod's source IP statically using an egress IP at the project level; refer to Enabling Static IPs for External Project Traffic for more details.
$ oc patch netnamespace p1_project -p '{"egressIPs": ["1.1.1.1"]}'
$ oc patch netnamespace p2_project -p '{"egressIPs": ["2.2.2.2"]}'
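To double-check that the egress IPs were applied, the netnamespaces can be inspected (just a verification sketch using the project names above):
$ oc get netnamespace p1_project -o yaml
$ oc get netnamespace p2_project -o yaml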
After that, you can allow each pod's IP via a route whitelist; refer to Route-specific IP Whitelists for more details.
kind: Route
metadata:
  name: R1
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 1.1.1.1
---
kind: Route
metadata:
  name: R2
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 2.2.2.2
I hope this helps.

Kubernetes -- Helm -- Mysql Chart loses stored data after stopping pod

Using https://github.com/helm/charts/tree/master/stable/mysql (all the code is here), it is cool being able to run mysql as part of my local kubernetes cluster (using docker kubernetes).
The problem though is that once I stop running the pod, and then run the pod again, all the data that was stored is now gone.
My question is how do I keep the data that was added to the mysql pod? I have read about persistent volumes, and the mysql helm example from github is showing that it is using PersistentVolumeClaim. I have also enabled persistence on the values.yaml file, but I cannot seem to have the same data that was saved in the database.
My docker kubernetes version is currently 1.14.6.
Please verify your mysql Pod; you should notice the volumes and volumeMounts options:
volumeMounts:
- mountPath: /var/lib/mysql
  name: data
...
volumes:
- name: data
  persistentVolumeClaim:
    claimName: msq-mysql
In addition, please verify your PersistentVolume, PersistentVolumeClaim, and StorageClass:
kubectl get pv,pvc,pods,sc
NAME                                                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/pvc-2c6aa172-effd-11e9-beeb-42010a840083    8Gi        RWO            Delete           Bound    default/msq-mysql   standard                24m

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/msq-mysql   Bound    pvc-2c6aa172-effd-11e9-beeb-42010a840083   8Gi        RWO            standard       24m

NAME                            READY   STATUS    RESTARTS   AGE     IP         NODE                                  NOMINATED NODE   READINESS GATES
pod/msq-mysql-b5c48c888-pz6p2   1/1     Running   0          4m28s   10.0.0.8   gke-te-1-default-pool-36546f4e-5rgw   <none>           <none>
Please run kubectl describe persistentvolumeclaim/msq-mysql (in your environment, change the PVC name accordingly).
You will notice that the PVC was provisioned successfully using gce-pd and is mounted by the msq-mysql Pod.
Normal ProvisioningSucceeded 26m persistentvolume-controller Successfully provisioned volume pvc-2c6aa172-effd-11e9-beeb-42010a840083 using kubernetes.io/gce-pd
Mounted By: msq-mysql-b5c48c888-pz6p2
I created a table with one row, deleted the pod, and verified afterwards (as expected, everything is still there):
mysql> SELECT * FROM t;
+------+
| c |
+------+
| ala |
+------+
1 row in set (0.00 sec)
Regarding why "all the data that was stored is now gone":
As per the Helm chart docs:
The MySQL image stores the MySQL data and configurations at the /var/lib/mysql path of the container.
By default a PersistentVolumeClaim is created and mounted into that directory. In order to disable this functionality you can change the values.yaml to disable persistence and use an emptyDir instead.
Most often the problem is with PV/PVC binding. It can also be a problem with a user-defined or non-default StorageClass.
So please verify the PV and PVC as described above.
Take a look at StorageClass
A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.
PVCs don’t necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.
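As a concrete illustration (only a sketch, assuming the stable/mysql chart and a release name of msq so that the resources come out as msq-mysql, matching the output above), persistence can be pinned to an explicit StorageClass at install time:
$ helm install --name msq stable/mysql \
    --set persistence.enabled=true \
    --set persistence.storageClass=standard \
    --set persistence.size=8Gi
If the given storageClass does not exist, the PVC stays Pending and the pod will not start, which makes a binding problem visible.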

route to application stopped working in OpenShift Online 3.9

I have an application running in Openshift Online starter, which worked for the last 5 months. A single pod behind a service with a route defined that does edge tls termination.
Since Saturday, when trying to access the application, I get the error message
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
The pod is running, I can exec into it and check this, I can port-forward to it and access it.
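For reference, that check looks roughly like this (using the pod name and container port shown in the output below):
$ oc port-forward taboo3-23-jt8l8 8080:8080
$ curl http://localhost:8080/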
Checking the different components with oc:
$ oc get po -o wide
NAME              READY     STATUS    RESTARTS   AGE   IP             NODE
taboo3-23-jt8l8   1/1       Running   0          1h    10.128.37.90   ip-172-31-30-113.ca-central-1.compute.internal
$ oc get svc
NAME      CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
taboo3    172.30.238.44   <none>        8080/TCP   151d
$ oc describe svc taboo3
Name: taboo3
Namespace: sothawo
Labels: app=taboo3
Annotations: openshift.io/generated-by=OpenShiftWebConsole
Selector: deploymentconfig=taboo3
Type: ClusterIP
IP: 172.30.238.44
Port: 8080-tcp 8080/TCP
Endpoints: 10.128.37.90:8080
Session Affinity: None
Events: <none>
$ oc get route
NAME      HOST/PORT                                                     PATH   SERVICES   PORT       TERMINATION     WILDCARD
taboo3    taboo3-sothawo.193b.starter-ca-central-1.openshiftapps.com          taboo3     8080-tcp   edge/Redirect   None
I tried to add a new route as well (with or without tls), but am getting the same error.
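For completeness, a sketch of the kind of command that creates such an edge route (the name taboo3-new is only an illustrative placeholder):
$ oc create route edge taboo3-new --service=taboo3 --port=8080-tcp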
Does anybody have an idea what might be causing this and how to fix it?
Addition April 17, 2018: Got an email from Openshift Online support:
It looks like you may be affected by this bug.
So waiting for it to be resolved.
The problem has been resolved by OpenShift Online; the application is working again.

Get Error storing cluster namespace secret (E0025) trying to bind service to a cluster

I am following Tutorial: Creating Kubernetes clusters in IBM Bluemix Container Service but when I try to bind a service to my cluster I get:
$ bx cs cluster-service-bind kub_cluster myns cloudant
FAILED
Error storing cluster namespace secret (E0025)
Incident ID: ebdbdd0d-5d6a-4373-8e54-b7dd84733a29
I have a worker node:
$ bx cs workers kub_cluster
which lists one worker in State 'normal' with Status 'Ready'.
I tried with different services (messageHub and Cloudant) and different names for the namespace. These are services I already have. Does anyone know how to get around this?
I was able to test this out following the same guide. I used the tone analyzer service. For testing I used the default namespace.
Are you able to see the namespace you are using when you list the available Kubernetes namespaces? The "myns" option needs to be a Kubernetes namespace.
$ kubectl get namespaces
This should print out the default namespace as well as other system namespaces + any namespaces you created.
Earlier in the guide, a namespace is set up for the Docker registry; it is possible that you are using that namespace.
Other instances of this issue appear to be related to the status of the cluster. It looks like your cluster has an available node (normal and ready), so it should be able to store the secret in an available namespace.
You might be missing the specific namespace in your cluster.
You can create one by calling:
kubectl create namespace <your namespace>
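Putting it together, a short sketch of the suggested fix, reusing the namespace name myns and the bind command from the question:
$ kubectl create namespace myns
$ bx cs cluster-service-bind kub_cluster myns cloudant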