How to set nginx.conf permanently in Kubernetes ingress? - kubernetes-ingress

Our nginx controller does not support SSL connections from Windows 7 systems. We updated the cipher suites in the nginx.conf file in the Kubernetes NGINX pods, and it started to work.
Now the issue is that whenever the service or the pod restarts, the nginx.conf file resets to the default one.
How can we set the nginx.conf file permanently?

Create a ConfigMap for your nginx.conf and mount it into your pod at the required location.
Create the ConfigMap by running the command below.
kubectl create configmap nginx-conf --from-file=nginx.conf
Then mount it at the required directory:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "ls /etc/config/" ]
    volumeMounts:
    - name: nginx-config
      mountPath: /etc/nginx/
  volumes:
  - name: nginx-config
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: nginx-conf  # your ConfigMap name here (nginx-conf was created above)
  restartPolicy: Never
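If replacing the whole /etc/nginx/ directory with the ConfigMap hides other files the controller needs, a subPath mount limits the volume to just the one file. A minimal sketch, assuming the key inside the ConfigMap is named nginx.conf:
volumeMounts:
- name: nginx-config
  mountPath: /etc/nginx/nginx.conf   # mount only this file; the rest of /etc/nginx stays untouched
  subPath: nginx.conf                # key name inside the ConfigMap
volumes:
- name: nginx-config
  configMap:
    name: nginx-conf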

Related

How to overcome the read-only filesystem error for a secret that is mounted in the mysql image

I'm working on setting up a MySQL instance in a K8s cluster with TLS support for client connections.
For that I have set up cert-manager to issue a self-signed cert. I can see ca.crt, tls.key, and tls.crt created in the secrets within my mysql namespace successfully. I followed this article: https://www.jetstack.io/blog/securing-mysql-with-cert-manager/
Now, to use this cert, my plan is to place the certs in the /var/lib/mysql directory and update the mysql.cnf file using a ConfigMap. Here is how the mysql.yaml pod spec looks.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  mysql.cnf: |-
    [mysqld]
    ssl-ca=/var/lib/mysql/ca.crt
    ssl-cert=/var/lib/mysql/tls.crt
    ssl-key=/var/lib/mysql/tls.key
    require_secure_transport=ON
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  # securityContext:
  #   runAsUser: 0
  containers:
  - image: mysql:5.7
    name: mysql
    resources: {}
    env:
    # Use secret in real usage
    - name: MYSQL_ROOT_PASSWORD
      value: password
    ports:
    - containerPort: 3306
      name: mysql
    volumeMounts:
    - name: mysql-cert-secret
      # mountPath: /app/ca.crt
      mountPath: /var/lib/mysql/ca.crt
      subPath: ca.crt
    - name: mysql-cert-secret
      mountPath: /var/lib/mysql/tls.crt
      # mountPath: /app/tls.crt
      subPath: tls.crt
    - name: mysql-cert-secret
      mountPath: /var/lib/mysql/tls.key
      # mountPath: /app/tls.key
      subPath: tls.key
    - name: config-map-mysqlconf
      mountPath: /etc/mysql/mysql.conf
  volumes:
  - name: mysql-cert-secret
    secret:
      secretName: mysql-server-tls
  - name: config-map-mysqlconf
    configMap:
      name: mysql-config
If I update the mount path to, say, /app/ca.crt, then mounting works and I can see the certs when I open a shell in the container. But for the /var/lib/mysql/* paths I get the following error.
[error screenshot]
I tried using the securityContext but it didn't help, since the directory is accessible by both the root and mysql users. Any help would be greatly appreciated. If there is a better way to get this done, I'm happy to try that as well.
This is all done locally using KinD cluster.
Thank you
MySQL stores its DB files in /var/lib/mysql by default, and there will almost certainly be an attempt to set the ownership of that directory to the mysql user. Perhaps here.
Any attempt to update a secret volume will result in an error rather than a successful change, because secret volumes are read-only projections into the Pod's filesystem. I think that's why the article you followed never suggests using the /var/lib/mysql directory.
If you still want to attempt this, you can try changing the default DB storage location to something other than /var/lib/mysql in /etc/my.cnf, or even changing the defaultMode of that volume. But I'm not sure whether it will work or whether other issues will come up.
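Another sketch is to keep /var/lib/mysql untouched and mount the secret at a separate, hypothetical path such as /etc/mysql/certs, then point the ConfigMap entries there (the secret name mysql-server-tls comes from the question):
    volumeMounts:
    - name: mysql-cert-secret
      mountPath: /etc/mysql/certs    # outside the MySQL data directory, so the entrypoint never chowns it
      readOnly: true
  volumes:
  - name: mysql-cert-secret
    secret:
      secretName: mysql-server-tls
The ssl-ca, ssl-cert, and ssl-key paths in mysql.cnf would then reference /etc/mysql/certs/ca.crt and so on instead of the /var/lib/mysql paths.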

I want to print the current pod name, in which my application is running, in the application logs in OpenShift

So my Java application is running in several pods in OpenShift and I want to print the pod name in the application logs for a business purpose. Is there any way to do so? Thanks
You should be able to expose the Pod name to the application using the Kubernetes "Downward API". This can either be done by exposing an environment variable with the Pod name, or mounting a file that contains the name.
Here's the docs for doing so with an environment variable: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api
Here's a trimmed down version of the example on that page, to highlight just the Pod name:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c"]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_POD_NAME;
        sleep 10;
      done;
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
  restartPolicy: Never
As you can see from the docs, there's a bunch of other context that you can expose also.
The equivalent docs for mounting a volume file can be found here: https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#the-downward-api
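For reference, a minimal sketch of the volume-file variant from those docs, assuming the application reads the pod name from /etc/podinfo/podname:
apiVersion: v1
kind: Pod
metadata:
  name: podname-via-volume
spec:
  containers:
  - name: app
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c", "cat /etc/podinfo/podname; sleep 3600" ]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name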

Use an external standalone-full.xml in the RHPAM KIE pod in OpenShift

I just want to use a customized standalone-full.xml in the RHPAM KIE Server pod which is running in OpenShift. I have created the ConfigMap from the file but am not sure how to set it.
I created the configmap like this
oc create configmap my-config --from-file=standalone-full.xml.
And edited the deploymentconfig of rhpam kie server,
volumeMounts:
- name: config-volume
  mountPath: /opt/eap/standalone/configuration
volumes:
- name: config-volume
  configMap:
    name: my-config
It starts a new container with status ContainerCreating and then fails with an error (scaling down from 1 to 0).
Am I setting the ConfigMap correctly?
You can mount the ConfigMap in a pod as a volume. Here's a good example: just add a 'volumes' block (to specify the ConfigMap as a volume) and a 'volumeMounts' block (to specify the mount point) in the pod's spec:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "ls /etc/config/" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: special-config
  restartPolicy: Never
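If mounting the ConfigMap over the whole /opt/eap/standalone/configuration directory is what breaks the container (it hides every other file EAP expects there), a subPath mount is one way to overlay only the single file. A sketch, not verified against the RHPAM image, assuming the ConfigMap key is standalone-full.xml:
volumeMounts:
- name: config-volume
  mountPath: /opt/eap/standalone/configuration/standalone-full.xml
  subPath: standalone-full.xml      # mount only this key; the rest of the directory stays intact
volumes:
- name: config-volume
  configMap:
    name: my-config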

kubectl cannot access pod application

I have this pod specification:
apiVersion: v1
kind: Pod
metadata:
  name: wp
spec:
  containers:
  - image: wordpress:4.9-apache
    name: wordpress
    env:
    - name: WORDPRESS_DB_PASSWORD
      value: mysqlpwd
    - name: WORDPRESS_DB_HOST
      value: 127.0.0.1
  - image: mysql:5.7
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mysqlpwd
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    emptyDir: {}
I deployed it using:
kubectl create -f wordpress-pod.yaml
Now it is correctly deployed:
kubectl get pods
wp 2/2 Running 3 35h
Then when I do:
kubectl describe po/wp
Name: wp
Namespace: default
Priority: 0
Node: node3/192.168.50.12
Start Time: Mon, 13 Jan 2020 23:27:16 +0100
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.233.92.7
IPs:
IP: 10.233.92.7
Containers:
My issue is that I cannot access the app:
wget http://192.168.50.12:8080/wp-admin/install.php
Connecting to 192.168.50.12:8080... failed: Connection refused.
Nor does wget http://10.233.92.7:8080/wp-admin/install.php work.
Is there any issue in the pod description or the deployment process?
Thanks
With your current setup you need to use wget http://10.233.92.7:8080/wp-admin/install.php from within the cluster, i.e. by performing kubectl exec into another pod, because the 10.233.92.7 IP is valid only within the cluster (and note that the wordpress container actually listens on port 80, not 8080).
You should create a Service for exposing your pod. Create a ClusterIP type Service (the default) for access from within the cluster. If you want to access it from outside the cluster, i.e. from your desktop, then create a NodePort or LoadBalancer type Service.
Another way to access the application from your desktop is port forwarding; in that case you don't need to create a Service (see the sketch below).
Here is a tutorial for accessing pods using a NodePort Service. In this case your node needs to have a public IP.
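A minimal port-forwarding sketch for the pod above, assuming the wordpress container listens on its image default of port 80 and forwarding local port 8080 to it:
$ kubectl port-forward pod/wp 8080:80
# then, from the same machine:
$ wget http://127.0.0.1:8080/wp-admin/install.php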
The problem with your configuration is the lack of Services that would allow external access to your WordPress.
There are a lot of materials explaining the available options and how strictly they are tied to the infrastructure that Kubernetes runs on.
Let me elaborate on 3 of them:
minikube
kubeadm
cloud provisioned (GKE, EKS, AKS)
The base of the WordPress configuration will be the same in each case.
Table of contents:
Running MySQL
Secret
PersistentVolumeClaim
Deployment
Service
Running WordPress
PersistentVolumeClaim
Deployment
Allowing external access
minikube
kubeadm
cloud provisioned (GKE)
There is a good tutorial on Kubernetes site: HERE!
Running MySQL
Secret
As per the official Kubernetes documentation:
Kubernetes secret objects let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
-- Kubernetes secrets
The example below is a YAML definition of a Secret used for the MySQL password:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-password
type: Opaque
data:
  password: c3VwZXJoYXJkcGFzc3dvcmQK
Take a specific look at:
password: c3VwZXJoYXJkcGFzc3dvcmQK
This password is base64 encoded.
To create this password, invoke the command below from your terminal (note that plain echo also encodes a trailing newline; use echo -n if you don't want it in the password):
$ echo "YOUR_PASSWORD" | base64
Paste the output to the YAML definition and apply it with:
$ kubectl apply -f FILE_NAME.
You can check if it was created correctly with:
$ kubectl get secret mysql-password -o yaml
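To double-check the stored value, the secret can be decoded back out; a small example using the password key defined above:
$ kubectl get secret mysql-password -o jsonpath='{.data.password}' | base64 --decode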
PersistentVolumeClaim
MySQL requires dedicated space for storing its data. There is official documentation explaining it: Persistent Volumes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
The above YAML will create a storage claim for MySQL. Apply it with the command:
$ kubectl apply -f FILE_NAME.
Deployment
Create a YAML definition of a deployment from the official example and adjust it if there were any changes to names of the objects:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-password
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
Take a specific look at the part below, which passes the secret password into the MySQL pod:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-password
      key: password
Apply it with command: $ kubectl apply -f FILE_NAME.
Service
What was missing in your configuration was Service objects. These objects allow communication with other pods, external traffic, etc. Look at the example below:
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
This definition will create an object which points to the MySQL pod.
It will create a DNS entry with the name wordpress-mysql and the IP address of the pod.
It will not expose the pod to external traffic, as that is not needed.
Apply it with command: $ kubectl apply -f FILE_NAME.
Running WordPress
PersistentVolumeClaim
Like MySQL, WordPress requires dedicated space for storing its data. Create it with the example below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Apply it with command: $ kubectl apply -f FILE_NAME.
Deployment
Create a YAML definition of the WordPress Deployment as in the example below:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-password
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
Take a specific look at:
- name: WORDPRESS_DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-password
      key: password
This part passes the secret value into the Deployment.
The definition below tells WordPress where MySQL is located:
- name: WORDPRESS_DB_HOST
  value: wordpress-mysql
Apply it with command: $ kubectl apply -f FILE_NAME.
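Before moving on to external access, a quick sanity check that both workloads are up, using the names and labels defined above:
$ kubectl get deployments wordpress wordpress-mysql
$ kubectl get pods -l app=wordpress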
Allowing external access
There are many different approaches for configuring external access to applications.
Minikube
The configuration can differ between hypervisors.
For example, Minikube can expose WordPress to external traffic with:
NodePort
apiVersion: v1
kind: Service
metadata:
  name: wordpress-nodeport
spec:
  type: NodePort
  selector:
    app: wordpress
    tier: frontend
  ports:
  - name: wordpress-port
    protocol: TCP
    port: 80
    targetPort: 80
After applying this definition you will need to enter the Minikube IP address with the appropriate port in your web browser.
The port can be found with the command:
$ kubectl get svc wordpress-nodeport
Output of above command:
wordpress-nodeport NodePort 10.76.9.15 <none> 80:30173/TCP 8s
In this case it is 30173.
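The Minikube node IP itself comes from the minikube CLI; combined with the NodePort above, the URL looks like this (the IP shown is only an example):
$ minikube ip
192.168.99.100
$ curl http://192.168.99.100:30173/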
LoadBalancer
In this case a NodePort will also be created!
apiVersion: v1
kind: Service
metadata:
  name: wordpress-loadbalancer
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
Ingress resource
Please refer to this link: Minikube: create-an-ingress-resource
Also you can refer to this Stack Overflow post
Kubeadm
With Kubernetes clusters provisioned by kubeadm there are:
NodePort
The configuration process is the same as in Minikube. The only difference is that the NodePort will be created on every node in the cluster. After that you can enter the IP address of any node with the appropriate port. Be aware that you will need to be on the same network, with no firewall blocking your access.
LoadBalancer
You can create a LoadBalancer object with the same YAML definition as in Minikube. The problem is that with kubeadm provisioning on a bare-metal cluster, the LoadBalancer will not get an IP address. One of the options to solve this is MetalLB.
Ingress
Ingress resources share the same problem as LoadBalancer on kubeadm-provisioned infrastructure. As above, one of the options is MetalLB.
Cloud Provisioned
There are many options which are strictly related to the cloud that Kubernetes runs on. Below is an example of configuring an Ingress resource with the NGINX controller on GKE:
Apply both of the YAML definitions:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.1/deploy/static/provider/cloud-generic.yaml
Apply the NodePort definition from the Minikube section
Create Ingress resource:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: wordpress-nodeport
          servicePort: wordpress-port
Apply it with command: $ kubectl apply -f FILE_NAME.
Check if the Ingress resource got an address from the cloud provider with the command:
$ kubectl get ingress
The output should look like this:
NAME      HOSTS   ADDRESS         PORTS   AGE
ingress   *       XXX.XXX.XXX.X   80      26m
After entering the IP address from the above command in your browser, you should see the WordPress setup page.
The cloud-provisioned example can also be used for kubeadm-provisioned clusters with MetalLB configured.

How to set up error reporting in Stackdriver from Kubernetes pods?

I'm a bit confused about how to set up error reporting in Kubernetes, so that errors are visible in the Google Cloud Console / Stackdriver "Error Reporting"?
According to documentation
https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine
we need to enable fluentd's "forward input plugin" and then send exception data from our apps. I think this approach would have worked if we had set up fluentd ourselves, but it's already pre-installed on every node in a pod that just runs the gcr.io/google_containers/fluentd-gcp Docker image.
How do we enable forward input on those pods and make sure the HTTP port is available to every pod on the nodes? We also need to make sure this config is used by default when we add more nodes to our cluster.
Any help would be appreciated; maybe I'm looking at all of this from the wrong angle?
The basic idea is to start a separate pod that receives structured logs over TCP and forwards them to Cloud Logging, similar to a locally-running fluentd agent. See below for the steps I used.
(Unfortunately, the logging support that is built into Docker and Kubernetes cannot be used here - it just forwards individual lines of text from stdout/stderr as separate log entries, which prevents Error Reporting from seeing complete stack traces.)
Create a docker image for a fluentd forwarder using a Dockerfile as follows:
FROM gcr.io/google_containers/fluentd-gcp:1.18
COPY fluentd-forwarder.conf /etc/google-fluentd/google-fluentd.conf
Where fluentd-forwarder.conf contains the following:
<source>
  type forward
  port 24224
</source>
<match **>
  type google_cloud
  buffer_chunk_limit 2M
  buffer_queue_limit 24
  flush_interval 5s
  max_retry_wait 30
  disable_retry_limit
</match>
Then build and push the image:
$ docker build -t gcr.io/###your project id###/fluentd-forwarder:v1 .
$ gcloud docker push gcr.io/###your project id###/fluentd-forwarder:v1
You need a replication controller (fluentd-forwarder-controller.yaml):
apiVersion: v1
kind: ReplicationController
metadata:
  name: fluentd-forwarder
spec:
  replicas: 1
  template:
    metadata:
      name: fluentd-forwarder
      labels:
        app: fluentd-forwarder
    spec:
      containers:
      - name: fluentd-forwarder
        image: gcr.io/###your project id###/fluentd-forwarder:v1
        env:
        - name: FLUENTD_ARGS
          value: -qq
        ports:
        - containerPort: 24224
You also need a service (fluentd-forwarder-service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: fluentd-forwarder
spec:
  selector:
    app: fluentd-forwarder
  ports:
  - protocol: TCP
    port: 24224
Then create the replication controller and service:
$ kubectl create -f fluentd-forwarder-controller.yaml
$ kubectl create -f fluentd-forwarder-service.yaml
Finally, in your application, instead of using 'localhost' and 24224 to connect to the fluentd agent as described on https://cloud.google.com/error-reporting/docs/setting-up-on-compute-engine, use the values of the environment variables FLUENTD_FORWARDER_SERVICE_HOST and FLUENTD_FORWARDER_SERVICE_PORT.
To add to Boris' answer: As long as errors are logged in the right format (see https://cloud.google.com/error-reporting/docs/troubleshooting) and Cloud Logging is enabled (you can see the errors in https://console.cloud.google.com/logs/viewer) then errors will make it to Error Reporting without any further setup.
Boris' answer was great but was a lot more complicated than it really needed to be (no need to build a Docker image). If you have kubectl configured on your local box (or you can use the Google Cloud Shell), copy and paste the following and it will install the forwarder in your cluster (I updated the version of fluentd-gcp from the above answer). My solution uses a ConfigMap to store the config file so it can be changed easily without rebuilding.
cat << EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-forwarder
data:
  google-fluentd.conf: |+
    <source>
      type forward
      port 24224
    </source>
    <match **>
      type google_cloud
      buffer_chunk_limit 2M
      buffer_queue_limit 24
      flush_interval 5s
      max_retry_wait 30
      disable_retry_limit
    </match>
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: fluentd-forwarder
spec:
  replicas: 1
  template:
    metadata:
      name: fluentd-forwarder
      labels:
        app: fluentd-forwarder
    spec:
      containers:
      - name: fluentd-forwarder
        image: gcr.io/google_containers/fluentd-gcp:2.0.18
        env:
        - name: FLUENTD_ARGS
          value: -qq
        ports:
        - containerPort: 24224
        volumeMounts:
        - name: config-vol
          mountPath: /etc/google-fluentd
      volumes:
      - name: config-vol
        configMap:
          name: fluentd-forwarder
---
apiVersion: v1
kind: Service
metadata:
  name: fluentd-forwarder
spec:
  selector:
    app: fluentd-forwarder
  ports:
  - protocol: TCP
    port: 24224
EOF
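A quick way to confirm the forwarder is running and reachable, using the label and Service name created above:
$ kubectl get pods -l app=fluentd-forwarder
$ kubectl get svc fluentd-forwarder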