Helm install NGINX ingress controller tries to look up AKS DNS with wrong region - kubernetes-ingress

I tried to follow this link: https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli to set up an ingress controller in an AKS cluster in the EastUS2 region. When I ran the command as given in the doc:
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --version 4.1.3 \
  --namespace ingress-basic \
  --create-namespace \
  --set controller.replicaCount=2 \
  --set controller.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.image.registry=$ACR_URL \
  --set controller.image.image=$CONTROLLER_IMAGE \
  --set controller.image.tag=$CONTROLLER_TAG \
  --set controller.image.digest="" \
  --set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
  --set controller.admissionWebhooks.patch.image.registry=$ACR_URL \
  --set controller.admissionWebhooks.patch.image.image=$PATCH_IMAGE \
  --set controller.admissionWebhooks.patch.image.tag=$PATCH_TAG \
  --set controller.admissionWebhooks.patch.image.digest="" \
  --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
  --set defaultBackend.image.registry=$ACR_URL \
  --set defaultBackend.image.image=$DEFAULTBACKEND_IMAGE \
  --set defaultBackend.image.tag=$DEFAULTBACKEND_TAG \
  --set defaultBackend.image.digest="" \
  -f internal-ingress.yaml
It gives the error:
INSTALLATION FAILED: Kubernetes cluster unreachable: Get "https://testaks-dns-38ca4dd8.hcp.centralus.azmk8s.io:443/version": dial tcp: lookup testaks-dns-38ca4dd8.hcp.centralus.azmk8s.io on 168.63.129.16:53: no such host
The actual API server address is testaks-dns-04129ffe.hcp.eastus2.azmk8s.io. For some reason it is trying to look up 'centralus' in the domain name, and I could not figure out where the centralus region is coming from.

You need to specify the kubeconfig file for the helm command with --kubeconfig <config-file>,
or you should replace the default one located in ~/.kube/config.
The correct kubeconfig file should contain the right Kubernetes API server address, testaks-dns-04129ffe.hcp.eastus2.azmk8s.io, not the stale centralus one.
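For example, one way to refresh the credentials and point helm at the right cluster (a sketch; the resource group and cluster names are placeholders, not taken from the question):
# Pull a fresh kubeconfig for the EastUS2 cluster into a separate file
az aks get-credentials --resource-group <resource-group> --name <cluster-name> --file ./eastus2-kubeconfig
# Verify it points at the expected API server
kubectl --kubeconfig ./eastus2-kubeconfig cluster-info
# Then pass the same file to helm (keeping the rest of the flags from the question)
helm install ingress-nginx ingress-nginx/ingress-nginx --kubeconfig ./eastus2-kubeconfig ...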

Related

Openshift 4: oc get environment variable with name

Is there a way to get the value of a specific Openshift Service environment variable?
such as:
oc get <namespace>/servicename env <envname>
Found this:
oc set env dc/<application-name> --list
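If you only need a single variable, a couple of possible approaches (a sketch; the DeploymentConfig name and ENVNAME are placeholders):
# Filter the --list output for one variable
oc set env dc/<application-name> --list | grep ENVNAME
# Or read the value directly with a JSONPath filter on the first container
oc get dc/<application-name> -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="ENVNAME")].value}'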

Connecting to MySQL 5.6 inside Docker For Desktop/Kubernetes: ERROR 1130 (HY000): Host 'xx.xx.xx.xx' is not allowed to connect to this MySQL server

I'm following these instructions (page 181) to create a persistent volume & claim, a MySQL replica set & service. I specify MySQL v5.6 in the YAML file for the replica set.
After viewing the log for the pod, it looks like it was successful. So then I run:
kubectl run -it --rm --image=mysql --restart=Never mysql-client -- bash
mysql -h mysql -p 3306 -u root
It prompts me for the password and then I get this error:
ERROR 1130 (HY000): Host '10.1.0.17' is not allowed to connect to this MySQL server
Apparently MySQL does not allow remote connections by default, and I would have to change the configuration files; I don't know how to do that inside a YAML file. Below is my YAML. How do I change it to allow remote connections?
Thanks
Siegfried
cat <<END-OF-FILE | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mysql
  # labels so that we can bind a Service to this Pod
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: tododata
        image: mysql:5.6
        resources:
          requests:
            cpu: 1
            memory: 2Gi
        env:
          # Environment variables are not a best practice for security,
          # but we're using them here for brevity in the example.
          # See Chapter 11 for better options.
          - name: MYSQL_ROOT_PASSWORD
            value: some-password-here
        livenessProbe:
          tcpSocket:
            port: 3306
        ports:
        - containerPort: 3306
        volumeMounts:
          - name: tododata
            # /var/lib/mysql is where MySQL stores its databases
            mountPath: "/var/lib/mysql"
      volumes:
      - name: tododata
        persistentVolumeClaim:
          claimName: tododata
END-OF-FILE
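A quick way to see which hosts root is actually allowed to connect from (a diagnostic sketch, not part of the original thread; the pod name is a placeholder) is to query the grant tables from inside the MySQL pod itself, where the connection is local and therefore not blocked:
# List the allowed user/host combinations from inside the pod
kubectl exec -it <mysql-pod-name> -- mysql -uroot -p -e "SELECT user, host FROM mysql.user;"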
Sat Oct 24 2020 3PM EDT Update: Try Bitnami MySQL
I like Ben's idea of using Bitnami MySQL because then I don't have to create my own custom Docker image. However, when using Bitnami and trying to connect to the MySQL server I get
ERROR 2003 (HY000): Can't connect to MySQL server on 'my-release-mysql.default.svc.cluster.local' (111)
This happens after I successfully get a bash shell with this command:
kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
Then, as per the instructions, I do this and get the HY000 error above.
mysql -h my-release-mysql.default.svc.cluster.local -uroot -p
Wed Nov 04 2020 Update:
Thanks Ben.. Yes -- I had already tried that on Oct 24 (approx) and when I do a k describe pod I get mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)' Check that mysqld is running and that the socket: '/opt/bitnami/mysql/tmp/mysql.sock' exists!.
Of course, when I run the mysql client as described in the nicely generated instructions, the client cannot connect because mysqld has died.
This is after having deleted the pvcs and stss and doing helm delete my-release prior to reinstalling via helm.
Unfortunately, when I tried this the first time (a couple of weeks ago) I did not set the root password and used the default generated password and I think it is still trying to use that.
This did work on azure kubernetes after having created a fresh azure kubernetes cluster. How can I reset my kubernetes cluster in my docker for desktop windows? I tried google searching and no luck so far.
Thanks
Siegfried
After a lot of help from the Bitnami folks, I learned that the spinning disks on my 4-year-old notebook computer are kinda slow (why this is a problem with Bitnami MySQL and not Bitnami PostgreSQL is a mystery).
This works for me:
helm install my-mysql bitnami/mysql \
--set image.debug=true \
--set primary.persistence.enabled=false,secondary.persistence.enabled=false \
--set primary.readinessProbe.enabled=false,primary.livenessProbe.enabled=false \
--set secondary.readinessProbe.enabled=false,secondary.livenessProbe.enabled=false
This turns off the persistent volumes, so the data is lost when the pod dies.
Yes this is useful for me for development purposes and no one should be using Docker For Desktop/Kubernetes for production anyway... I just need to populate a tiny database and test my queries and if I need to repopulate database every time I reboot, well, that is not a big problem.
So maybe I need to get a new notebook computer? The price of notebook computers with 4TB of spinning disk space has gone up in the last couple of years.... And I cannot find any SSD drives of that size so even if I purchased a new replacement with spinning disks I might have the same problem? Hmm....
Thanks everyone for your help!
Siegfried
This appears to work just fine for me on Windows. Complete the following steps:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release --set root.password=awesomePassword bitnami/mysql
This is all you need to run the MySQL instance. It creates a few services and a statefulset. Then, to connect to it, you:
Either have to be in another Kubernetes container. Without this, you will not find the DNS record for my-release-mysql.default.svc.cluster.local:
kubectl run my-release-mysql-client --rm --tty -i --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
mysql -h my-release-mysql.default.svc.cluster.local -uroot -p my_database
For the password, it should be 'awesomePassword'
Or port-forward the service to your local machine:
kubectl port-forward svc/my-release-mysql 3306:3306
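With the port-forward running, you can then connect from your local machine (assuming a MySQL client is installed locally; this last step is not spelled out above):
# Connect through the forwarded port on localhost
mysql -h 127.0.0.1 -P 3306 -uroot -p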
As a note, a Bitnami container will have issues if you kill it and reinstall it with only your helm commands and the password is not set. The persistent volume claim will usually stick around, so you would need to set the password to the old password. If you do not specify the password, you can get it by running the commands Bitnami tells you about:
NAME: my-release
LAST DEPLOYED: Thu Oct 29 20:39:23 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please be patient while the chart is being deployed

Tip:

  Watch the deployment status using the command: kubectl get pods -w --namespace default

Services:

  echo Master: my-release-mysql.default.svc.cluster.local:3306
  echo Slave:  my-release-mysql-slave.default.svc.cluster.local:3306

Administrator credentials:

  echo Username: root
  echo Password : $(kubectl get secret --namespace default my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)

To connect to your database:

  Run a pod that you can use as a client:

    kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash

  To connect to master service (read/write):

    mysql -h my-release-mysql.default.svc.cluster.local -uroot -p my_database

  To connect to slave service (read-only):

    mysql -h my-release-mysql-slave.default.svc.cluster.local -uroot -p my_database

To upgrade this helm chart:

  Obtain the password as described on the 'Administrator credentials' section and set the 'root.password' parameter as shown below:

    ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)

    helm upgrade my-release bitnami/mysql --set root.password=$ROOT_PASSWORD
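If the stale-password problem mentioned above is caused by an old persistent volume claim, a rough cleanup sketch (the PVC name is only an example; check the real name with kubectl get pvc) would be to remove both the release and the claim before reinstalling:
# Remove the release, then delete the leftover data volume so the next install starts clean
helm delete my-release
kubectl get pvc
kubectl delete pvc data-my-release-mysql-master-0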

Include variable in jsonpath for oc patch (openshift CLI operations)

In bash I am trying to use a variable in a jsonpath for an openshift patch cli command:
OS_OBJECT='sample.k8s.io/element'
VALUE='5'
oc patch quota "my-object" -p '{"spec":{"hard":{"$OS_OBJECT":"$VALUE"}}}'
But that gives the error:
Error from server: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
indicating that the variable is not substituted/expanded.
If I write it explicitly it works:
oc patch quota "my-object" -p '{"spec":{"hard":{"sample.k8s.io/element":"5"}}}'
Any suggestions on how to include a variable in the JSON string?
EDIT: Based on below answer I have also tried:
oc patch quota "my-object" -p "{'spec':{'hard':{'$OS_OBJECT':'$VALUE'}}}"
but that gives the error:
Error from server (BadRequest): invalid character '\'' looking for beginning of object key string
In single quotes, everything is preserved literally by bash; you have to use double quotes for string interpolation to work (and use the escape sequence \" for the inner double quotes).
Try this out:
oc patch quota "my-object" -p "{\"spec\":{\"hard\":{\"$OS_OBJECT\":\"$VALUE\"}}}"
Instead of oc patch, I would prefer oc apply on your templates. Templates are the best way to configure OpenShift/Kubernetes objects, and they can be stored in git for version control to follow infrastructure as code.
I am not the admin of my OpenShift cluster, so I can't access the resource quotas; hence I am showing the approach with Kubernetes, but the same applies to OpenShift, with the CLI changed from kubectl to oc.
Let's take a simple resource quota template:
$ cat resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
spec:
  hard:
    cpu: "1"
    memory: 2Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]
Now configure your quota using kubectl apply on your template. This creates the resource that is configured in the template, in our case the ResourceQuota.
$ kubectl apply -f resourcequota.yaml
resourcequota/demo-quota created
$ kubectl get quota
NAME         CREATED AT
demo-quota   2019-11-19T12:23:37Z
$ kubectl describe quota demo-quota
Name:       demo-quota
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     1
memory      0     2Gi
pods        0     10
As you're looking to update the resource quota using patch, I would suggest editing the template and executing kubectl apply again to update the object.
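For example, the hard limits in resourcequota.yaml could be bumped like this before re-applying (only the changed values are shown; a sketch consistent with the describe output further below):
spec:
  hard:
    cpu: "2"
    memory: 4Gi
    pods: "20"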
$ kubectl apply -f resourcequota.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
resourcequota/demo-quota configured
$ kubectl describe quota demo-quota
Name:       demo-quota
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     2
memory      0     4Gi
pods        0     20
Similarly, you can execute oc apply for your operations, as oc patch is not so user-friendly to configure.

Procedure to install an Ingress controller

Unable to install ingress-nginx for kubernetes on Docker desktop
I was using the following on the command line to install ingress-nginx so far:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
as shown in the web page: https://che.eclipse.org/running-eclipse-che-on-kubernetes-using-docker-desktop-for-mac-5d972ed511e1
It seems like the installation procedure has changed. Can anyone give me step-by-step instructions to install ingress-nginx? I couldn't install it by following the procedure described here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
Installation via helm works perfectly for me. Assuming you have the kubectl binary installed and configured for your k8s cluster, you can follow the steps below one by one to install the nginx-ingress controller.
1. Install the helm binary (if it doesn't exist)
curl -s https://raw.githubusercontent.com/nurlanf/deployments-kubernetes/master/helm/get_helm.sh | bash
2. Install helm for your cluster (if not installed yet)
curl -s https://raw.githubusercontent.com/nurlanf/deployments-kubernetes/master/helm/install.sh | bash
You should see output like
...
Waiting for tiller install...
Helm install complete
3. Then install nginx-ingress via helm
helm install stable/nginx-ingress --name nginx-ingress
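Note that this answer targets Helm 2 (with tiller) and the old stable/nginx-ingress chart. With Helm 3 and the current ingress-nginx chart, a minimal sketch of the equivalent steps would be (repo URL and release name are the commonly documented ones, not taken from this answer):
# Add the ingress-nginx chart repository and install the controller with Helm 3 (no tiller needed)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace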

Helm: could not find tiller

I'm getting this error message:
➜ ~ helm version
Error: could not find tiller
I've created tiller project:
➜ ~ oc new-project tiller
Now using project "tiller" on server "https://192.168.99.100:8443".
Then, I've created tiller into tiller namespace:
➜ ~ helm init --tiller-namespace tiller
$HELM_HOME has been configured at /home/jcabre/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
So, after that, I waited for the tiller pod to be ready.
➜ ~ oc get pod -w
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-66cccbf9cd-84swm   0/1     Running   0          18s
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-66cccbf9cd-84swm   1/1     Running   0          24s
^C%
Any ideas?
Try deleting your cluster's tiller:
kubectl get all --all-namespaces | grep tiller
kubectl delete deployment tiller-deploy -n kube-system
kubectl delete service tiller-deploy -n kube-system
kubectl get all --all-namespaces | grep tiller
Initialise it again:
helm init
Now add the service account:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
This solved my issue!
You don't have helm configured yet; use the following command:
helm init
This will create .helm with repository, plugins, etc, in your home directory.
Background:
helm comes with a client and a server; if you have a different deployment environment, it is possible that your helm server (known as tiller) lives elsewhere. In that case, there are two ways to point to tiller:
set environment variable TILLER_NAMESPACE
--tiller-namespace string namespace of Tiller (default "kube-system")
For more details check the helm README.md file.
You installed tiller into a non-default namespace, so you have to tell helm where to look.
helm --tiller-namespace tiller version
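Equivalently, you can export the TILLER_NAMESPACE environment variable (mentioned in the previous answer) so every helm invocation targets that namespace; a small sketch, assuming tiller was installed into the 'tiller' namespace as in the question:
# Point the helm client at the tiller running in the 'tiller' namespace
export TILLER_NAMESPACE=tiller
helm version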
First of all you need to create a service account for tiller to use in helm:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
To verify that Tiller is running:
kubectl get pods --namespace kube-system
DigitalOcean Reference
Now you can upgrade to the latest version of Helm, or any version > 3.0.0.
You don't need to run helm init anymore.
The client configuration directories are initialised automatically when you start using helm, as mentioned here.
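A quick sanity check after upgrading (a sketch; the client prints its own version, which will differ on your machine):
# With Helm 3 there is no tiller; this prints just the client version
helm version --short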
I was facing the same issue; try to re-install helm by using the commands below:
For Linux (via Snap):
sudo snap install helm --classic
For Linux (from binary source):
Download your desired version
Unpack it (tar -zxvf helm-v2.0.0-linux-amd64.tgz)
Find the helm binary in the unpacked directory, and move it to its desired destination
(mv linux-amd64/helm /usr/local/bin/helm)
For macOS (via brew):
brew install kubernetes-helm
For Windows (via Chocolatey):
choco install kubernetes-helm
And finally, initialize helm:
helm init
With Helm 3 releases, we do not need tiller anymore. Try to upgrade to Helm version 3; it provides more security to your cluster, because tiller runs in your Kubernetes cluster with full administrative rights, which is a risk if somebody gets unauthorized access to the cluster.
If you migrate to Helm 3, you do not need to run helm init thereafter, because Helm version 3 has a tiller-less architecture.
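If you have existing Helm 2 releases to carry over, the commonly used migration path is the helm-2to3 plugin; a rough sketch (subcommands as documented upstream; the release name is a placeholder):
# Install the 2to3 plugin, migrate the client config and a release, then clean up Helm 2 leftovers
helm plugin install https://github.com/helm/helm-2to3
helm 2to3 move config
helm 2to3 convert <release-name>
helm 2to3 cleanup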
Try:
cp /usr/local/bin/tiller ~/.helm/
and check whether helm is deployed on the server with:
helm version