We have a kubernetes cluster running in Google GKE. I want to permanently set a different value for fs.aio-max-nr in sysctl, but it keeps reverting to the default after running sudo reboot.
This is what I've tried:
sysctl -w fs.aio-max-nr=1048576
echo 'fs.aio-max-nr = 1048576' | sudo tee --append /etc/sysctl.d/99-gke-defaults.conf
echo 'fs.aio-max-nr = 1048576' | sudo tee --append /etc/sysctl.d/00-sysctl.conf
Is it possible to change this permanently? And why is there no /etc/sysctl.conf, but two sysctl files in the /etc/sysctl.d/ folder?
I'd do this by deploying a DaemonSet to all the nodes that need this setting. The only drawback is that the DaemonSet pod has to run with elevated privileges. The privileged container has access to /proc on the host, so you just need to execute your sysctl commands in a script and exit.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sysctl
spec:
  selector:
    matchLabels:
      name: sysctl
  template:
    metadata:
      labels:
        name: sysctl
    spec:
      containers:
      - name: sysctl
        image: alpine
        command:
        - /bin/sh
        - -c
        - sysctl -w fs.aio-max-nr=1048576
        securityContext:
          privileged: true
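Note that because the container exits as soon as the sysctl call returns, the kubelet will keep restarting it and the DaemonSet pod will sit in a restart loop. A common tweak (only a sketch, not part of the original answer) is to keep the container alive after applying the setting:
command:
- /bin/sh
- -c
# apply the setting, then sleep so the pod stays Running instead of restart-looping
- sysctl -w fs.aio-max-nr=1048576 && sleep 365d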
There's also an example here.
I ended up switching the node image from Google's default cos_containerd to ubuntu_containerd. This made the sysctl changes permanent.
I am trying to create and run a BuildConfig YAML file.
C:\OpenShift>oc version
Client Version: 4.5.31
Kubernetes Version: v1.18.3+65bd32d
Background:
I have multiple Spring Boot web UI applications which I need to deploy on OpenShift.
Having a separate set of config YAML files (image stream, buildconfig, deployconfig, service, routes) for each and every application seems very inefficient.
Instead I would like to have a single set of parameterized YAML files to which I can pass custom parameters to set up each individual application.
Solution so far:
Version One
Dockerfile:
FROM org/rhelImage
USER root
# Install Yum Packages
RUN yum -y install net-tools \
    && yum -y install nmap-ncat
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-env-file=MyApp.properties
configmap/myapp-configmap created
$ oc describe cm myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
APPPATH:
----
/app
ARTIFACT:
----
myapp.jar
ARTIFACTURL:
----
"https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
MY_PORT:
----
12305
Events: <none>
buildconfig.yaml snippet
strategy:
  dockerStrategy:
    env:
    - name: GIT_SSL_NO_VERIFY
      value: "true"
    - name: ARTIFACTURL
      valueFrom:
        configMapKeyRef:
          name: "myapp-configmap"
          key: ARTIFACTURL
    - name: ARTIFACT
      valueFrom:
        configMapKeyRef:
          name: "myapp-configmap"
          key: ARTIFACT
This works fine. However, I somehow need to have those env: variables in a file.
I am doing this to have greater flexibility, i.e. let's say a new variable is introduced in the Dockerfile; I should NOT have to change the buildconfig.yml.
I just add the new key:value pair to the property file, rebuild, and we are good to go.
This is what I did next:
Version Two
Dockerfile
FROM org/rhelImage
USER root
# Install Yum Packages
RUN yum -y install net-tools \
    && yum -y install nmap-ncat
# Initializing the variables file
RUN ["sh", "-c", "source ./MyApp.properties"]
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-file=MyApp.properties=C:\MyRepo\MyTemplates\MyApp.properties
configmap/myapp-configmap created
C:\OpenShift>oc describe configmaps myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
MyApp.properties:
----
APPPATH=/app
ARTIFACTURL="https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
ARTIFACT=myapp.jar
MY_PORT=12035
Events: <none>
buildconfig.yaml snippet
source:
  contextDir: "${param_source_contextdir}"
  configMaps:
  - configMap:
      name: "${param_app_name}-configmap"
However, the build fails:
STEP 9: RUN ls ./MyApp.properties
ls: cannot access ./MyApp.properties: No such file or directory
error: build error: error building at STEP "RUN ls ./MyApp.properties": error while running runtime: exit status 2
This means that the ConfigMap file didn't get copied to the folder.
Can you please suggest what to do next?
I think you are misunderstanding Openshift a bit.
The first thing you say is
To have separate set of config yml files ( image stream, buildconfig, deployconfig, service, routes), for each and every application seems to be very inefficient.
But that's how kubernetes/openshift works. If your resource files look the same and differ only in, for example, the git source or image, then you are probably looking for Openshift Templates.
Instead i would like to have a single set of parameterized yml files to which i can pass on custom parameters to setup each individual application
Yep, I think Openshift Templates is what you are looking for. If you upload your template to the service catalog, whenever you have a new application to deploy, you can add some variables in a UI and click deploy.
An Openshift Template is just a parameterised file for all of your openshift resources (configmap, service, buildconfig, etc.).
If your application needs to be built from some git repo, using some credentials, you can parameterise those variables.
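As a rough sketch (the template name, APP_NAME and the parameter set below are placeholders, not taken from your setup), a Template wraps your existing resources in an objects list and declares parameters that oc process substitutes:
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: springboot-app-template
parameters:
- name: APP_NAME
  required: true
- name: ARTIFACTURL
  required: true
objects:
# every resource you already have (ConfigMap, BuildConfig, Service, Route, ...)
# goes into this list, with ${PARAM} references wherever values differ per app
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ${APP_NAME}-configmap
  data:
    ARTIFACTURL: ${ARTIFACTURL}
You would then create each application with something like: oc process -f springboot-app-template.yaml -p APP_NAME=myapp -p ARTIFACTURL=https://myorg/1.2.3.4/myApp-1.2.3.4.jar | oc apply -f -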
But also take a look at Openshift's Source-to-Image solution (I'm not sure what version you are using, so you'll have to google some resources). It can build and deploy your application without you having to write your own Resource files.
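For example (assuming a Java S2I builder image stream such as java is available in your cluster, and with a placeholder git URL), S2I can build straight from a repo without you maintaining a Dockerfile:
# builder-image~source-repo syntax; "java" and the repo URL are placeholders
oc new-app java~https://github.com/myorg/myapp.git --name=myapp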
I'm following these instructions (page 181) to create a persistent volume & claim, a mysql replica set & service. I specify mysql v5.6 in the yaml file for the replica set.
After viewing the log for the pod, it looks like it was successful. So then I run:
kubectl run -it --rm --image=mysql --restart=Never mysql-client -- bash
mysql -h mysql -p 3306 -u root
It prompts me for the password and then I get this error:
ERROR 1130 (HY000): Host '10.1.0.17' is not allowed to connect to this MySQL server
Apparently MySQL does not allow remote connections by default, and I have to change the configuration files, but I don't know how to do that inside a YAML file. Below is my YAML. How do I change it to allow remote connections?
Thanks
Siegfried
cat <<END-OF-FILE | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mysql
  # labels so that we can bind a Service to this Pod
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: tododata
        image: mysql:5.6
        resources:
          requests:
            cpu: 1
            memory: 2Gi
        env:
        # Environment variables are not a best practice for security,
        # but we're using them here for brevity in the example.
        # See Chapter 11 for better options.
        - name: MYSQL_ROOT_PASSWORD
          value: some-password-here
        livenessProbe:
          tcpSocket:
            port: 3306
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: tododata
          # /var/lib/mysql is where MySQL stores its databases
          mountPath: "/var/lib/mysql"
      volumes:
      - name: tododata
        persistentVolumeClaim:
          claimName: tododata
END-OF-FILE
Sat Oct 24 2020 3PM EDT Update: Try Bitnami MySQL
I like Ben's idea of using bitnami mysql because then I don't have to create my own custom docker image. However, when using bitnami and trying to connect to the mysql server I get:
ERROR 2003 (HY000): Can't connect to MySQL server on 'my-release-mysql.default.svc.cluster.local' (111)
This happens after I successfully get a bash shell with this command:
kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
Then, as per the instructions, I do this and get the HY000 error above.
mysql -h my-release-mysql.default.svc.cluster.local -uroot -p
Wed Nov 04 2020 Update:
Thanks Ben. Yes, I had already tried that on Oct 24 (approx), and when I do a kubectl describe pod I get: mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)' Check that mysqld is running and that the socket: '/opt/bitnami/mysql/tmp/mysql.sock' exists!
Of course, when I run the mysql client as described in the nicely generated instructions, the client cannot connect because mysqld has died.
This is after having deleted the pvcs and stss and doing helm delete my-release prior to reinstalling via helm.
Unfortunately, when I tried this the first time (a couple of weeks ago) I did not set the root password and used the default generated password and I think it is still trying to use that.
This did work on Azure Kubernetes after creating a fresh Azure Kubernetes cluster. How can I reset the Kubernetes cluster in my Docker Desktop for Windows? I tried googling, with no luck so far.
Thanks
Siegfried
After a lot of help from the bitnami folks, I learned that the spinning disks on my 4-year-old notebook computer are kinda slow (now why this is a problem with Bitnami MySQL and not Bitnami PostgreSQL is a mystery).
This works for me:
helm install my-mysql bitnami/mysql \
--set image.debug=true \
--set primary.persistence.enabled=false,secondary.persistence.enabled=false \
--set primary.readinessProbe.enabled=false,primary.livenessProbe.enabled=false \
--set secondary.readinessProbe.enabled=false,secondary.livenessProbe.enabled=false
This turns off the persistent volumes, so the data is lost when the pod dies.
Yes, this is useful for me for development purposes, and no one should be using Docker for Desktop/Kubernetes for production anyway... I just need to populate a tiny database and test my queries, and if I need to repopulate the database every time I reboot, well, that is not a big problem.
So maybe I need to get a new notebook computer? The price of notebook computers with 4TB of spinning disk space has gone up in the last couple of years.... And I cannot find any SSD drives of that size so even if I purchased a new replacement with spinning disks I might have the same problem? Hmm....
Thanks everyone for your help!
Siegfried
This appears to work just fine for me on windows. Complete the following steps:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release --set root.password=awesomePassword bitnami/mysql
This is all you need to run the mysql instance. It creates a few services and a StatefulSet. Then, to connect to it, you can do one of the following:
Either be in another Kubernetes container (without this, you will not find the DNS record for my-release-mysql.default.svc.cluster.local):
kubectl run my-release-mysql-client --rm --tty -i --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
mysql -h my-release-mysql.default.svc.cluster.local -uroot -p my_database
For the password, it should be 'awesomePassword'
Or port-forward the service to your local machine:
kubectl port-forward svc/my-release-mysql 3306:3306
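With the port forwarded you can then connect from your local machine, assuming you have a mysql client installed there:
mysql -h 127.0.0.1 -P 3306 -uroot -p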
As a note, a Bitnami container will have issues if you kill it and reinstall it with only your helm commands but without setting the password. The persistent volume claim will usually stick around, so you would need to set the password to the old password. If you do not specify the password, you can get it by running the commands Bitnami tells you about.
NAME: my-release
LAST DEPLOYED: Thu Oct 29 20:39:23 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES: Please be patient while the chart is being deployed
Tip:
  Watch the deployment status using the command: kubectl get pods -w --namespace default
Services:
  echo Master: my-release-mysql.default.svc.cluster.local:3306
  echo Slave: my-release-mysql-slave.default.svc.cluster.local:3306
Administrator credentials:
  echo Username: root
  echo Password : $(kubectl get secret --namespace default my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
To connect to your database:
  Run a pod that you can use as a client:
    kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.22-debian-10-r0 --namespace default --command -- bash
  To connect to master service (read/write):
    mysql -h my-release-mysql.default.svc.cluster.local -uroot -p my_database
  To connect to slave service (read-only):
    mysql -h my-release-mysql-slave.default.svc.cluster.local -uroot -p my_database
To upgrade this helm chart:
  Obtain the password as described on the 'Administrator credentials' section and set the 'root.password' parameter as shown below:
    ROOT_PASSWORD=$(kubectl get secret --namespace default my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
    helm upgrade my-release bitnami/mysql --set root.password=$ROOT_PASSWORD
I'm trying to mount a PVC into a MongoDB deployment without privileged access.
I've tried to set anyuid for pods via:
oc adm policy add-scc-to-user anyuid -z default --as system:admin
In the deployment I'm using a securityContext config. I've tried several combinations of fsGroup, etc.:
spec:
  securityContext:
    runAsUser: 99
    runAsGroup: 99
    supplementalGroups:
    - 99
    fsGroup: 99
When I exec into the pod, the uid and gid are set correctly:
bash-4.2$ id
uid=99(nobody) gid=99(nobody) groups=99(nobody)
bash-4.2$ whoami
nobody
bash-4.2$ cd /var/lib/mongodb/data
bash-4.2$ touch test.txt
touch: cannot touch 'test.txt': Permission denied
But the pod can't write to the PVC directory:
ERROR: Couldn't write into /var/lib/mongodb/data
CAUSE: current user doesn't have permissions for writing to /var/lib/mongodb/data directory
DETAILS: current user id = 99, user groups: 99 0
DETAILS: directory permissions: drwxr-xr-x owned by 0:0, SELinux: system_u:object_r:container_file_t:s0:c234,c491
I've also tried to instantiate the MySQL template with a PVC from the OpenShift catalog, without any configuration change, and it's the same issue.
Thanks for the help.
A temporary solution is to use an init container with root privileges to change the owner of the mounted path:
initContainers:
- name: mongodb-init
  image: alpine
  command: ["sh", "-c", "chown -R 99 /var/lib/mongodb/data"]
  volumeMounts:
  - mountPath: /var/lib/mongodb/data
    name: mongodb-pvc
I'm also looking at a tool named Udica. It can generate SELinux security policies for containers: https://github.com/containers/udica
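For reference, the udica workflow is roughly the following (a sketch based on its README; my_mongodb is a placeholder policy name):
# inspect the running container and generate a custom SELinux policy from it
podman inspect <container_id> > container.json
udica -j container.json my_mongodb
# udica then prints the semodule command needed to load the generated my_mongodb.cil policy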
I am trying to delete temporary pods and other artifacts using helm delete, and I want to run this helm delete on a schedule. Here is my standalone command, which works:
helm delete --purge $(helm ls -a -q temppods.*)
However, if I try to run this on a schedule as below, I am running into issues.
Here is what mycron.yaml looks like:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
spec:
  serviceAccount: cron-z
  successfulJobsHistoryLimit: 1
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronbox
            image: alpine/helm:2.9.1
            args:
            - delete
            - --purge
            - $(helm ls -a -q temppods.*)
          restartPolicy: OnFailure
I ran
oc create -f ./mycron.yaml
This created the cronjob
Every 5th minute a pod is getting created and the helm command that is part of the cron job runs.
I am expecting the artifacts/pods name beginning with temppods* to be deleted.
What I get is:
Error: pods is forbidden: User "system:serviceaccount:myproject:default" cannot list pods in the namespace "kube-system": no RBAC policy matched
I then created a service account cron-z and gave it edit access. I added this serviceAccount to my YAML, thinking that when my pod is created it will have the service account cron-z associated with it. Still no luck. I see that cron-z is not getting associated with the pod that gets created every 5 minutes, and I still see default as the service account associated with the pod.
You'll need to have a service account for Helm to use Tiller with, as well as an actual Tiller service account: github.com/helm/helm/blob/master/docs/rbac.md
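The linked rbac.md essentially boils down to creating a ServiceAccount for Tiller and binding it to a sufficiently privileged role, then initializing Helm with that account. A minimal sketch (cluster-admin is the example binding used in the Helm docs; scope it down if you can):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Then initialize Tiller with: helm init --service-account tiller
Two other things worth checking in the CronJob itself: serviceAccountName has to be set inside jobTemplate.spec.template.spec (the pod spec) for the job pods to actually run as cron-z rather than default, and $(helm ls -a -q temppods.*) in args is not command substitution, since Kubernetes only expands environment variables there; wrap the whole thing in a shell instead, e.g. command: ["/bin/sh", "-c", "helm delete --purge $(helm ls -a -q temppods.*)"].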
I want to pass an environment variable that should get evaluated to the hostname of the running container. This is what I am trying to do
oc new-app -e DASHBOARD_PROTOCOL=http -e ADMIN_PASSWORD=abc#123 -e KEYCLOAK_URL=http://keycloak.openidp.svc:8080 -e KEYCLOAK_REALM=master -e DASHBOARD_HOSTNAME=$HOSTNAME -e GF_INSTALL_PLUGINS=grafana-simple-json-datasource,michaeldmoore-annunciator-panel,briangann-gauge-panel,savantly-heatmap-panel,briangann-datatable-panel grafana/grafana:5.2.1
How do I ensure that DASHBOARD_HOSTNAME gets evaluated to the hostname of the running container?
To take the hostname value from a pod, you could use metadata.name.
For example:
env:
- name: HOSTNAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.name
After creating the application, you could edit the deployment config (oc edit dc/<deployment_config>) or patch it to configure the DASHBOARD_HOSTNAME environment variable using the Downward API.
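For example, assuming the DeploymentConfig created by oc new-app is named grafana (an assumption based on the image name), a JSON patch that appends the variable could look like this:
# hypothetical dc name "grafana"; appends DASHBOARD_HOSTNAME resolved via the Downward API
oc patch dc/grafana --type=json -p '[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/env/-",
   "value": {"name": "DASHBOARD_HOSTNAME",
             "valueFrom": {"fieldRef": {"fieldPath": "metadata.name"}}}}
]'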
This may be a personal preference but as much as oc new-app is convenient I'd rather work with (declarative) configuration files that are checked in and versioned in a code repo than with imperative commands.