Kubernetes ingress address is empty - kubernetes-ingress

I have set up a Kubernetes cluster using Minikube in an Ubuntu VM. I cloned this GitHub repo and created the namespace, deployment, service and ingress.
I have also enabled ingress addon by running minikube addons enable ingress.
When I run kubectl get svc -n ingress-nginx, the EXTERNAL-IP is <none>.
When I run kubectl get ingress -n sample, the ADDRESS is empty.
Please advise how to set up k8s ingress.
PS: I had minikube tunnel running.
PS 2: kubectl get pods -n ingress-nginx
NAME                                  READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create--1-*   0/1     Completed   0          11m
ingress-nginx-admission-patch--1-*    0/1     Completed   1          11m
ingress-nginx-controller-*            1/1     Running     0          11m

Thanks to this SO post, it worked after I downgraded Minikube to v1.11.0. I used --driver=none.
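For reference, a rough sketch of what that downgrade looks like on Ubuntu (the download URL follows Minikube's usual release layout; --driver=none needs to run as root):
curl -LO https://storage.googleapis.com/minikube/releases/v1.11.0/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
sudo minikube start --driver=none
minikube addons enable ingress
kubectl get ingress -n sample
After the ingress controller is up, the ADDRESS column of the last command should be populated.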

Related

Private Azure Kubernetes cluster with nginx ingress can't be restarted

I have a private Azure Kubernetes cluster with the NGINX Ingress controller installed (using an internal Load Balancer).
This is a non-production cluster, and we plan to stop it over the weekends. But when we start it again, the operation never finishes successfully, and after 30 minutes the AKS cluster is in a Failed state.
After some research, I found that this happens only when the ingress is installed on a private AKS cluster with restricted outbound access.
Any ideas how this can be solved?
One thing you can do is upgrade your Kubernetes cluster.
Check the upgrades available for your cluster:
az aks get-upgrades --resource-group <resource-group-name> --name <cluster-name> --output table
Then upgrade your cluster:
az aks upgrade \
--resource-group <resource-group-name> \
--name <cluster-name> \
--kubernetes-version <kubernetes_version>
Replace the Kubernetes version with a version you got from the first command.
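After the upgrade (or a retried start), you can check whether the cluster has left the Failed state; the placeholders mirror the ones above:
az aks show --resource-group <resource-group-name> --name <cluster-name> --query provisioningState --output tsv
A healthy cluster reports Succeeded here.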

Basic install of k3s has no nodes

I followed the instructions here to install k3s. I also watched this tutorial.
In both cases they show running this command after the install:
k3s kubectl get node
However when I do that I get this:
# k3s kubectl get node
No resources found
What reasons could there be for this not working?
If I specify the kubeconfig file that Rancher creates, I get the same response.
# kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get node
No resources found
I believe that the cluster is running:
# kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Services and Namespaces
# kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   16h
# kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get ns
NAME              STATUS   AGE
default           Active   16h
kube-system       Active   16h
kube-public       Active   16h
kube-node-lease   Active   16h
OS
# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
This is a VM with 2 CPUs and 8 GB RAM.
This was caused by an incompatible file system. This was in the logs:
ERRO[2021-09-24T10:40:28.848795952-04:00] Failed to configure agent: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": /var/lib/rancher/k3s/agent/containerd does not support d_type. If the backing filesystem is xfs, please reformat with ftype=1 to enable d_type support
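A quick way to check for this condition (a sketch, assuming the backing filesystem is XFS as in the error message; the df call just resolves which mount backs the k3s data directory):
xfs_info $(df --output=target /var/lib/rancher | tail -1) | grep ftype
If this prints ftype=0, the filesystem lacks d_type support and has to be recreated with mkfs.xfs -n ftype=1, as the error suggests.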

Openshift container with wrong openshift.io/scc

I'm seeing unexplained behavior in an OpenShift 4.4.17 cluster: the oauth-openshift Deployment (in the openshift-authentication namespace) has replicas=2. The first pod is Running with:
openshift.io/scc: anyuid
The second pod goes into a CrashLoopBackOff state, and the SCC assigned to it is the one below:
openshift.io/scc: nginx-ingress-scc (a customized SCC for nginx purposes)
By documentation:
By default, the pods inside openshift-authentication and openshift-authentication-operator namespace runs with anyuid SCC.
I suppose something has been changed in the cluster, but I cannot figure out where the mistake is.
The oauth-openshift Deployment is in its default configuration:
serviceAccountName: oauth-openshift
namespace: openshift-authentication
$ oc get scc anyuid -o yaml
users:
- system:serviceaccount:default:oauth-openshift
- system:serviceaccount:openshift-authentication:oauth-openshift
- system:serviceaccount:openshift-authentication:default
$ oc get pod -n openshift-authentication
NAME                               READY   STATUS             RESTARTS   AGE
oauth-openshift-59f498986d-lmxdv   0/1     CrashLoopBackOff   158        13h
oauth-openshift-d4968bd74-ll7mn    1/1     Running            0          23d
$ oc logs oauth-openshift-59f498986d-lmxdv -n openshift-authentication
Copying system trust bundle
cp: cannot remove '/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem': Permission denied
$ oc get pod oauth-openshift-59f498986d-lmxdv -n openshift-authentication -o=yaml|grep serviceAccount
serviceAccount: oauth-openshift
serviceAccountName: oauth-openshift
$ oc get pod oauth-openshift-59f498986d-lmxdv -n openshift-authentication -o=yaml|grep scc
openshift.io/scc: nginx-ingress-scc
Auth Operator:
$ oc get pod -n openshift-authentication-operator
NAME                                       READY   STATUS    RESTARTS   AGE
authentication-operator-5498b9ddcb-rs9v8   1/1     Running   0          33d
$ oc get pod authentication-operator-5498b9ddcb-rs9v8 -n openshift-authentication-operator -o=yaml|grep scc
openshift.io/scc: anyuid
The managementState is set to Managed
First of all, you should check whether your SCC priorities have been customized or not. For example, the anyuid SCC priority is 10, and it's the highest by default.
But if another SCC (in this case, nginx-ingress-scc) is configured with a priority higher than 10, then that SCC is selected by the oauth pod unexpectedly. That may cause this issue.
The problem was that the customized SCC (nginx-ingress-scc) had a priority higher than 10, which is anyuid's priority.
Now solved.
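To check this, something like the following lists the priorities and clears the one on the custom SCC (a sketch; it assumes the custom SCC is named nginx-ingress-scc as in the question):
oc get scc -o custom-columns=NAME:.metadata.name,PRIORITY:.priority
oc patch scc nginx-ingress-scc --type=merge -p '{"priority": null}'
With the custom priority cleared (or lowered below 10), anyuid should again be the SCC selected for the oauth-openshift pods.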

Helm: could not find tiller

I'm getting this error message:
➜ ~ helm version
Error: could not find tiller
I've created tiller project:
➜ ~ oc new-project tiller
Now using project "tiller" on server "https://192.168.99.100:8443".
Then, I've installed tiller into the tiller namespace:
➜ ~ helm init --tiller-namespace tiller
$HELM_HOME has been configured at /home/jcabre/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
So, after that, I waited for the tiller pod to be ready.
➜ ~ oc get pod -w
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-66cccbf9cd-84swm   0/1     Running   0          18s
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-66cccbf9cd-84swm   1/1     Running   0          24s
^C%
Any ideas?
Try deleting the tiller deployment from your cluster:
kubectl get all --all-namespaces | grep tiller
kubectl delete deployment tiller-deploy -n kube-system
kubectl delete service tiller-deploy -n kube-system
kubectl get all --all-namespaces | grep tiller
Initialise it again:
helm init
Now add the service account:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
This solved my issue!
You don't have Helm configured yet; use the following command:
helm init
This will create .helm with repositories, plugins, etc., in your home directory.
Background:
Helm comes with a client and a server. If you have a different deployment environment, your Helm server (known as tiller) might be running in a different namespace. In that case, there are two ways to point to tiller:
set the environment variable TILLER_NAMESPACE
--tiller-namespace string namespace of Tiller (default "kube-system")
For more details, check the Helm README.md file.
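For the setup in the question (tiller installed in the tiller namespace), the environment-variable route would look roughly like this:
export TILLER_NAMESPACE=tiller
helm version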
You installed tiller into a non-default namespace, so you have to tell helm where to look.
helm --tiller-namespace tiller version
First of all, you need to create a service account for tiller to use in Helm:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
To verify that Tiller is running:
kubectl get pods --namespace kube-system
DigitalOcean Reference
Now you can upgrade to the latest version of Helm or any version > 3.0.0.
You don't need to do
helm init
anymore.
The Tiller and client directories are initialised automatically when you start using Helm, as mentioned here.
I was facing the same issue; try re-installing Helm using the commands below:
For linux: (Via Snap)
sudo snap install helm --classic
For Linux (from Binary source):
Download your desired version
Unpack it (tar -zxvf helm-v2.0.0-linux-amd64.tgz)
Find the helm binary in the unpacked directory, and move it to its desired destination
(mv linux-amd64/helm /usr/local/bin/helm)
For MacOS (Via brew):
brew install kubernetes-helm
For windows (Via Chocolatey):
choco install kubernetes-helm
And finally, initialize Helm:
helm init
With Helm 3 releases, we do not need tiller anymore. Try upgrading the Helm version to 3. It provides more security for your cluster, because tiller runs in your Kubernetes cluster with full administrative rights, which is a risk if somebody gets unauthorized access to the cluster.
If you migrate to Helm 3, you do not need to run helm init thereafter, because Helm version 3 has a tiller-less architecture.
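If you have existing Helm 2 releases to carry over, the official helm-2to3 plugin can migrate the configuration and the releases; a rough sketch:
helm plugin install https://github.com/helm/helm-2to3
helm 2to3 move config
helm 2to3 convert <release-name>
The <release-name> placeholder is whatever helm ls showed under Helm 2.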
Try
cp /usr/local/bin/tiller ~/.helm/
and check whether Helm is deployed on the server with
helm version

WildFly on OpenShift 3 with path-based routing and accessible console

I have Wildfly 10 running on Openshift Origin 3 in AWS with an elastic ip.
I set up a Route in OpenShift to map / to the WildFly service. This is working fine. If I go to http://my.ip.address I get the WildFly welcome page.
But if I map a different path, say /wf01, it doesn't work. I get a 404 Not Found error.
My guess is that the router is passing /wf01 along to the service? If that's the case, can I stop it from doing that? Otherwise, how can I map http://my.ip.address/wf01 to my WildFly service?
I also want the WildFly admin console to be accessible from outside (this is a demo server for my own use). I added "-bmanagement","0.0.0.0" to the DeploymentConfig, but looking at the WildFly logs it is still binding to 127.0.0.1:
02:55:41,483 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051:
Admin console listening on http://127.0.0.1:9990
A router today cannot remap/rewrite the incoming HTTP path to another path value before passing the request along. A workaround is to mount another route + service at the root that handles requests to / and redirects or forwards them.
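For reference, a path-based route can be created with something like the following (the service name wildfly is a placeholder), but the /wf01 prefix is passed through to the pod unchanged, so WildFly would have to serve an application at that context root:
oc expose service wildfly --path=/wf01 --name=wildfly-wf01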
You can also use port-forward:
oc port-forward -h
Forward 1 or more local ports to a pod
Usage:
oc port-forward POD [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N] [options]
Examples:
# Listens on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod
$ oc port-forward -p mypod 5000 6000
# Listens on port 8888 locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod 8888:5000
# Listens on a random port locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod :5000
# Listens on a random port locally, forwarding to 5000 in the pod
$ oc port-forward -p mypod 0:5000
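For the admin console specifically, a port-forward to the management port is a simple way to reach it from your workstation without exposing it through a route (the pod name is a placeholder):
oc port-forward <wildfly-pod-name> 9990:9990
Then browse to http://localhost:9990.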