Procedure to install an Ingress controller - kubernetes-ingress

Unable to install ingress-nginx for kubernetes on Docker desktop
Until now I have been using the following commands to install ingress-nginx:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
as shown on this page: https://che.eclipse.org/running-eclipse-che-on-kubernetes-using-docker-desktop-for-mac-5d972ed511e1
It seems like the installation procedure has changed. Can anyone give me step-by-step instructions to install ingress-nginx? I couldn't install it by following the procedure described here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

Installation via helm works perfectly for me. Assuming you have the kubectl binary installed and configured for your k8s cluster, you can follow the steps below one by one to install the nginx-ingress controller.
1. Install the helm binary (if it doesn't exist yet):
curl -s https://raw.githubusercontent.com/nurlanf/deployments-kubernetes/master/helm/get_helm.sh | bash
2. Install helm for your cluster (if not installed yet):
curl -s https://raw.githubusercontent.com/nurlanf/deployments-kubernetes/master/helm/install.sh | bash
You should see output like
...
Waiting for tiller install...
Helm install complete
3. Then install nginx-ingress via helm:
helm install stable/nginx-ingress --name nginx-ingress
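Once the release is deployed, a quick sanity check that the controller pod and service came up (the app=nginx-ingress label and the nginx-ingress-controller service name are what the stable chart should produce for this release name; adjust if yours differ):
kubectl get pods -l app=nginx-ingress
kubectl get svc nginx-ingress-controller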

Related

helm provide values file from variable

I have a CI/CD pipeline that has a YAML file containing secrets in memory. I don't want to store the file on disk, since I have no guarantee that the file will be cleaned up or is safe on the drive.
I would like to install a helm chart using helm install. Normally I would just provide the file using -f filename.yaml. But as I said, I don't have the file stored on disk. Is there any alternative way to pass a whole YAML file as a string to a helm install command?
To inline values.yaml in your command line, you can use the following:
helm install <chart-name> -f - <<EOF
<your-inlined-values-yaml>
EOF
For example:
helm install --name my-release hazelcast/hazelcast -f - <<EOF
service:
  type: LoadBalancer
EOF
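If the values are already held in a shell variable (for example, injected by your CI system as a secret), the same stdin trick works with a pipe, since -f - reads from standard input (VALUES is a placeholder name):
VALUES='service:
  type: LoadBalancer'
echo "$VALUES" | helm install --name my-release hazelcast/hazelcast -f -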

jdbc driver does not work in kubernetes, fails with timeout

I have a Java 11 app with a JDBC driver running together with MySQL 8.0. The app is able to connect to MySQL and execute one SQL statement, but it looks like it never gets a response back.
It looks like a connectivity issue.
At first it'd be good to look at the Java program output.
First simple checks are at the Kubernetes level to ensure that key components are alive:
$ kubectl get deployments
$ kubectl get services
$ kubectl get pods
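Since the first hint is the Java program's own output, the pod logs and events are worth pulling before anything else (pod names come from kubectl get pods):
$ kubectl logs <pod-name>
$ kubectl describe pod <pod-name>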
Additional checks could be done from within the container where your Java app is running.
A possible approach is below.
List deployments of your app and their labels:
$ kubectl get deployments --show-labels
NAME         READY   UP-TO-DATE   AVAILABLE   AGE   LABELS
hello-node   2/2     2            2           1h    app=hello-node
Having got the label, you can list the relevant pods and their containers:
$ LABEL=hello-node; kubectl get pods -l app=$LABEL -o custom-columns=POD:metadata.name,CONTAINER:spec.containers[*].name
POD                           CONTAINER
hello-node-55b49fb9f8-7tbh4   hello-node
hello-node-55b49fb9f8-p7wt6   hello-node
Now it's possible to run basic diagnostic commands from within the Java app container.
Ping might not reach the target (ICMP is often blocked), but it is available in almost every container and performs a primitive check of DNS resolution.
Services from the same namespace should be available via short DNS name.
Services from other namespaces inside of the same Kubernetes cluster should be available via internal FQDN.
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- ping -c1 hello-node
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- ping -c1 hello-node.default.svc.cluster.local
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- mysql -u [username] -p [dbname] -e [query]
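If the mysql client binary is not present in the image, a raw TCP probe against the MySQL service port can stand in for it (a sketch: the mysql service name and port 3306 are assumptions, and it requires bash and coreutils inside the container):
$ kubectl exec hello-node-55b49fb9f8-p7wt6 -c hello-node -- bash -c 'timeout 3 bash -c "</dev/tcp/mysql/3306" && echo open || echo closed'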
From here on, the connectivity diagnostics are pretty similar to those on a bare-metal server, except that you are limited by the tools available inside the container. You can install missing packages into the container as needed.
As you obtain more diagnostic information, you'll get a clue about what to check next.
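One detail worth checking for the "never gets a response back" symptom specifically: MySQL Connector/J blocks indefinitely by default unless timeouts are set on the JDBC URL. A hedged example using the in-cluster service DNS name (the service name, database name, and timeout values here are assumptions):
jdbc:mysql://mysql.default.svc.cluster.local:3306/mydb?connectTimeout=5000&socketTimeout=10000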

Helm: could not find tiller

I'm getting this error message:
➜ ~ helm version
Error: could not find tiller
I've created a tiller project:
➜ ~ oc new-project tiller
Now using project "tiller" on server "https://192.168.99.100:8443".
Then, I've created tiller into tiller namespace:
➜ ~ helm init --tiller-namespace tiller
$HELM_HOME has been configured at /home/jcabre/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
So, after that, I waited for the tiller pod to become ready.
➜ ~ oc get pod -w
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-66cccbf9cd-84swm   0/1     Running   0          18s
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-66cccbf9cd-84swm   1/1     Running   0          24s
^C%
Any ideas?
Try deleting the tiller deployment and service from your cluster:
kubectl get all --all-namespaces | grep tiller
kubectl delete deployment tiller-deploy -n kube-system
kubectl delete service tiller-deploy -n kube-system
kubectl get all --all-namespaces | grep tiller
Initialise it again:
helm init
Now add the service account:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
This solved my issue!
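To confirm tiller came back with the patched service account (app=helm,name=tiller are the labels the default tiller deployment carries):
kubectl get pods -n kube-system -l app=helm,name=tiller
helm version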
You don't have helm configured yet; use the following command:
helm init
This will create .helm (with repositories, plugins, etc.) in your home directory.
Background:
helm comes with a client and a server. If you have a different deployment environment, it is possible that your helm server (known as tiller) lives in a different namespace. In that case there are two ways to point helm at tiller:
set the environment variable TILLER_NAMESPACE, or pass the flag
--tiller-namespace string   namespace of Tiller (default "kube-system")
For more details check the helm README.md file.
You installed tiller into a non-default namespace, so you have to tell helm where to look.
helm --tiller-namespace tiller version
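Equivalently, export the environment variable once so that every subsequent helm invocation finds tiller without the flag:
export TILLER_NAMESPACE=tiller
helm version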
First of all, you need to create a service account for tiller to use with helm:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
To verify that Tiller is running:
kubectl get pods --namespace kube-system
DigitalOcean Reference
Now you can upgrade to the latest version of Helm or any version > 3.0.0.
You don't need to do
helm init
anymore.
The client directories are initialised automatically when you start using helm; there is no Tiller to install at all.
I was facing the same issue; try to re-install helm using the commands below:
For Linux (via Snap):
sudo snap install helm --classic
For Linux (from Binary source):
Download your desired version
Unpack it (tar -zxvf helm-v2.0.0-linux-amd64.tgz)
Find the helm binary in the unpacked directory, and move it to its desired destination
(mv linux-amd64/helm /usr/local/bin/helm)
For macOS (via Homebrew):
brew install kubernetes-helm
For Windows (via Chocolatey):
choco install kubernetes-helm
And finally, initialize helm:
helm init
With Helm 3 releases, we do not need tiller anymore. Try to upgrade your helm version to 3; it provides more security for your cluster, because tiller runs in your Kubernetes cluster with full administrative rights, which is a risk if somebody gets unauthorized access to the cluster.
If you migrate to helm 3, you no longer need to run helm init, because helm 3 has a tiller-less architecture.
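If you have existing Helm 2 releases to carry over, the helm-2to3 plugin automates the migration (<release-name> is a placeholder for one of your releases):
helm plugin install https://github.com/helm/helm-2to3
helm 2to3 move config
helm 2to3 convert <release-name>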
Try
cp /usr/local/bin/tiller ~/.helm/
and check whether helm is deployed on the server with
helm version

Unable to run startup script when creating instance on Google Cloud Platform

I have a simple startup script which looks like so:
#!/usr/bin/env bash
sudo apt update
sudo apt install -y ruby-full ruby-bundler build-essential
And I create the VM instance on GCP like so:
$ gcloud compute instances create test-app --boot-disk-size=10GB --image-family ubuntu-1604-lts --image-project=ubuntu-os-cloud --machine-type=g1-small --zone europe-west1-b --tags test-server --restart-on-failure --metadata-from-file startup-script=startup.sh
My startup.sh is executable. I set its permissions like so:
$ chmod +x startup.sh
However, when I enter the shell of my newly created instance and check bundler:
test-app:~$ bundle -v
I get these messages:
The program 'bundle' is currently not installed...
So, what is wrong here and how can I fix it? P.S. If I run all of these commands from inside the instance shell, everything works, so there must be some problem with how the startup script is used on GCP.
I tested your use case, and the bundle package was installed without making any changes.
Output:
bundle -v
Bundler version 1.11.2
You can check the VM serial console log output to verify whether the startup script ran. Check the VM instance to verify that the package is installed using the commands below:
sudo apt list --installed | grep -i bundle
sudo egrep bundle /var/log/dpkg.log
In addition, check the output of gem list bundle.
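To read the serial console output from your workstation rather than from inside the VM (instance name and zone taken from the question), something like this should work:
$ gcloud compute instances get-serial-port-output test-app --zone europe-west1-b | grep -i startup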

Run statsd as a daemon on EC2 instances programatically

EDIT: My goal is to be able to emit metrics from my spring-boot application and have them sent to a Graphite server. For that I am trying to set up statsd. If you can suggest a cleaner approach, that would be better.
I have a Beanstalk application which requires statsd to run as a background process. I was able to specify commands and packages through ebextensions config file as follows:
packages:
  yum:
    git: []
commands:
  01_nodejs_install:
    command: sudo yum -y install nodejs npm --enablerepo=epel
    ignoreErrors: true
  02_mkdir_statsd:
    command: mkdir /home/ec2-user/statsd
  03_fetch_statsd:
    command: git clone https://github.com/etsy/statsd.git /home/ec2-user/statsd
    ignoreErrors: true
  04_run_statsd:
    command: node stats.js exampleConfig.js
    cwd: /home/ec2-user/statsd
When I try to deploy the application to a new environment, the EC2 node never comes up fully. I logged in to check what might be going on and noticed in /var/log/cfn-init.log that 01_nodejs_install, 02_mkdir_statsd and 03_fetch_statsd were executed successfully. So I guess the system was stuck on the fourth command (04_run_statsd).
2016-05-24 01:25:09,769 [INFO] Yum installed [u'git']
2016-05-24 01:25:37,751 [INFO] Command 01_nodejs_install succeeded
2016-05-24 01:25:37,755 [INFO] Command 02_mkdir_statsd succeeded
2016-05-24 01:25:38,700 [INFO] Command 03_fetch_statsd succeeded
cfn-init.log (END)
I need help with the following:
If there is a better way to install and run statsd while instantiating an environment, I would appreciate it if you could provide details on that approach. The current scheme seems hacky.
If this is the approach I need to stick with, how can I run the fourth command so that statsd can be run as a background process?
I tried a few things and found that the following ebextensions config works:
packages:
  yum:
    git: []
commands:
  01_nodejs_install:
    command: sudo yum -y install nodejs npm --enablerepo=epel
    ignoreErrors: true
  02_mkdir_statsd:
    command: mkdir /home/ec2-user/statsd
  03_fetch_statsd:
    command: git clone https://github.com/etsy/statsd.git /home/ec2-user/statsd
    ignoreErrors: true
  04_change_config:
    command: cat exampleConfig.js | sed 's/2003/<graphite server port>/g' | sed 's/graphite.example.com/my.graphite.server.hostname/g' > config.js
    cwd: /home/ec2-user/statsd
  05_run_statsd:
    command: setsid node stats.js config.js >/dev/null 2>&1 < /dev/null &
    cwd: /home/ec2-user/statsd
Note that I added another command (04_change_config) so that I may configure my own Graphite server and port in statsd configs. This change is not needed to address the original question, though.
The actual run command uses setsid so that statsd is detached from the controlling terminal and keeps running as a daemon.
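To verify statsd actually stayed up after the deployment, you can SSH to the instance and check the process and its default listener port (statsd listens on 8125/udp by default; this assumes netstat is available on the AMI):
pgrep -f 'node stats.js'
sudo netstat -ulnp | grep 8125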