"jx boot" fails in "openshift-3.11" provider with "tekton pipeline controller" pod into "crashloopbackoff" state - openshift

Summary:
I already had a "static jenkins server" type Jenkins X setup running on the OpenShift 3.11 provider. The cluster crashed and I want to reinstall Jenkins X in my cluster, but the "static jenkins server" type is no longer supported.
So I am trying to install Jenkins X via "jx boot", but the installation fails with the "tekton pipeline controller" pod going into "CrashLoopBackOff" state.
Steps to reproduce the behavior:
jx-requirements.yml:
autoUpdate:
  enabled: false
  schedule: ""
bootConfigURL: https://github.com/jenkins-x/jenkins-x-boot-config.git
cluster:
  clusterName: cic-60
  devEnvApprovers:
  - automation
  environmentGitOwner: cic-60
  gitKind: bitbucketserver
  gitName: bs
  gitServer: http://rtx-swtl-git.fnc.net.local
  namespace: jx
  provider: openshift
  registry: docker-registry.default.svc:5000
environments:
- ingress:
    domain: 172.29.35.81.nip.io
    externalDNS: false
    namespaceSubDomain: -jx.
    tls:
      email: ""
      enabled: false
      production: false
  key: dev
  repository: environment-cic-60-dev
- ingress:
    domain: ""
    externalDNS: false
    namespaceSubDomain: ""
    tls:
      email: ""
      enabled: false
      production: false
  key: staging
  repository: environment-cic-60-staging
- ingress:
    domain: ""
    externalDNS: false
    namespaceSubDomain: ""
    tls:
      email: ""
      enabled: false
      production: false
  key: production
  repository: environment-cic-60-production
gitops: true
ingress:
  domain: 172.29.35.81.nip.io
  externalDNS: false
  namespaceSubDomain: -jx.
  tls:
    email: ""
    enabled: false
    production: false
kaniko: true
repository: nexus
secretStorage: local
storage:
  backup:
    enabled: false
    url: ""
  logs:
    enabled: false
    url: ""
  reports:
    enabled: false
    url: ""
  repository:
    enabled: false
    url: ""
vault: {}
velero:
  schedule: ""
  ttl: ""
versionStream:
  ref: v1.0.562
  url: https://github.com/jenkins-x/jenkins-x-versions.git
webhook: lighthouse
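For completeness, the reproduction itself is the standard boot flow (a sketch assuming the usual layout; the clone directory name is only an example): clone the boot config repo, replace its jx-requirements.yml with the file above, and run jx boot from inside the clone.
git clone https://github.com/jenkins-x/jenkins-x-boot-config.git environment-cic-60-dev
cd environment-cic-60-dev
# copy the jx-requirements.yml shown above over the default one, then:
jx boot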
Expected behavior:
All the pods in the jx namespace should be up and running, and Jenkins X should be installed properly.
Actual behavior:
The Tekton pipeline controller pod is in the "CrashLoopBackOff" state with the error shown below.
Pod status in the "jx" namespace:
NAME READY STATUS RESTARTS AGE
jenkins-x-chartmuseum-5687695d57-pp994 1/1 Running 0 1d
jenkins-x-controllerbuild-78b4b56695-mg2vs 1/1 Running 0 1d
jenkins-x-controllerrole-765cf99bdb-swshp 1/1 Running 0 1d
jenkins-x-docker-registry-5bcd587565-rhd7q 1/1 Running 0 1d
jenkins-x-gcactivities-1598421600-jtgm6 0/1 Completed 0 1h
jenkins-x-gcactivities-1598423400-4rd76 0/1 Completed 0 43m
jenkins-x-gcactivities-1598425200-sd7xm 0/1 Completed 0 13m
jenkins-x-gcpods-1598421600-z7s4w 0/1 Completed 0 1h
jenkins-x-gcpods-1598423400-vzb6p 0/1 Completed 0 43m
jenkins-x-gcpods-1598425200-56zdp 0/1 Completed 0 13m
jenkins-x-gcpreviews-1598421600-5k4vf 0/1 Completed 0 1h
jenkins-x-nexus-c7dcb47c7-fh7kx 1/1 Running 0 1d
lighthouse-foghorn-654c868bc8-d5w57 1/1 Running 0 1d
lighthouse-gc-jobs-1598421600-bmsq8 0/1 Completed 0 1h
lighthouse-gc-jobs-1598423400-zskt5 0/1 Completed 0 43m
lighthouse-gc-jobs-1598425200-m9gtd 0/1 Completed 0 13m
lighthouse-jx-controller-6c9b8994bd-qt6tc 1/1 Running 0 1d
lighthouse-keeper-7c6fd9466f-gdjjt 1/1 Running 0 1d
lighthouse-webhooks-56668dc58b-4c52j 1/1 Running 0 1d
lighthouse-webhooks-56668dc58b-8dh27 1/1 Running 0 1d
tekton-pipelines-controller-76c8c8dd78-llj4c 0/1 CrashLoopBackOff 436 1d
tiller-7ddfd45c57-rwtt9 1/1 Running 0 1d
Error log:
2020/08/24 18:38:00 Registering 4 clients
2020/08/24 18:38:00 Registering 3 informer factories
2020/08/24 18:38:00 Registering 8 informers
2020/08/24 18:38:00 Registering 2 controllers
{"level":"info","caller":"logging/config.go:108","msg":"Successfully created the logger."}
{"level":"info","caller":"logging/config.go:109","msg":"Logging level set to info"}
{"level":"fatal","logger":"tekton","caller":"sharedmain/main.go:149","msg":"Version check failed","commit":"821ac4d","error":"kubernetes version \"v1.11.0\" is not compatible, need at least \"v1.14.0\" (this can be overridden with the env var \"KUBERNETES_MIN_VERSION\")","stacktrace":"github.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithConfig\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:149\ngithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithContext\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:114\nmain.main\n\tgithub.com/tektoncd/pipeline/cmd/controller/main.go:72\nruntime.main\n\truntime/proc.go:203"}
After downgrading the Tekton image from "0.11.0" to "0.9.0", the tekton pipeline controller pod reaches the Running state. However, a new tekton pipeline webhook pod gets created and it goes into "CrashLoopBackOff".
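A downgrade like this can be done by pointing the controller deployment at an older image, roughly as below (the image path is the upstream Tekton release path and the container name is assumed; a jx boot install may reference a mirrored image instead):
kubectl -n jx set image deployment/tekton-pipelines-controller \
  tekton-pipelines-controller=gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/controller:v0.9.0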
Jx version:
Version 2.1.127
Commit 4bc05a9
Build date 2020-08-05T20:34:57Z
Go version 1.13.8
Git tree state clean
Diagnostic information:
The output of jx diagnose version is:
Running in namespace: jx
Version 2.1.127
Commit 4bc05a9
Build date 2020-08-05T20:34:57Z
Go version 1.13.8
Git tree state clean
NAME VERSION
Kubernetes cluster v1.11.0+d4cacc0
kubectl (installed in JX_BIN) v1.16.6-beta.0
helm client 2.16.9
git 2.24.1
Operating System "CentOS Linux release 7.8.2003 (Core)"
Please visit https://jenkins-x.io/faq/issues/ for any known issues.
Finished printing diagnostic information
Kubernetes cluster: openshift - 3.11
Kubectl version:
Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-10-15T09:45:30Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Operating system / Environment:
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
I need to install Jenkins X via "jx boot" on "openshift-3.11", which uses Kubernetes 1.11.0 by default, but "jx boot" requires at least 1.14.0. Please suggest whether there is any workaround to get Jenkins X on OpenShift 3.11.

As the error message in the crash loop shows, Kubernetes version "v1.11.0" is not compatible and at least "v1.14.0" is needed, which makes it not installable on OpenShift 3 (as it ships with Kubernetes 1.11.0). It seems Jenkins X comes with Tekton Pipelines v0.14.2, which requires at least Kubernetes 1.14.0 (and later releases, like Tekton Pipelines v0.15.0, require Kubernetes 1.16.0).
{"level":"fatal","logger":"tekton","caller":"sharedmain/main.go:149","msg":"Version check failed","commit":"821ac4d","error":"kubernetes version \"v1.11.0\" is not compatible, need at least \"v1.14.0\" (this can be overridden with the env var \"KUBERNETES_MIN_VERSION\")","stacktrace":"github.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithConfig\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:149\ngithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithContext\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:114\nmain.main\n\tgithub.com/tektoncd/pipeline/cmd/controller/main.go:72\nruntime.main\n\truntime/proc.go:203"}
Theoretically, setting KUBERNETES_MIN_VERSION in the controller deployment might make it work, but it has not been tested and most likely the controller won't behave correctly, as it uses features that are not available in 1.11.0. Other than this, there is no workaround that I know of.
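If you still want to experiment with that override, here is an untested sketch (the deployment name and the jx namespace are taken from the pod listing above; this only lowers the version the controller checks for, it does not make 1.11.0 supported):
kubectl -n jx set env deployment/tekton-pipelines-controller KUBERNETES_MIN_VERSION=v1.11.0
kubectl -n jx rollout status deployment/tekton-pipelines-controller
The same variable would have to be set on the tekton-pipelines-webhook deployment if it gets created, and the controller may still fail later because the API features it relies on simply do not exist in 1.11.0.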

Related

define name for ALB when creating kubernetes ingress in AKS

I’m creating a Kubernetes nginx ingress controller using Helm (https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx). Since I’m provisioning a private AKS cluster, I instruct via annotations that the Azure Load Balancer that gets created should have a private rather than a public IP address (service.beta.kubernetes.io/azure-load-balancer-internal and service.beta.kubernetes.io/azure-load-balancer-internal-subnet). Here's the values.yaml file that I provide when running helm install:
controller:
  replicaCount: `
  image:
    registry: foo.azurecr.io
    digest: ""
    pullPolicy: Always
  ingressClassResource:
    # -- Name of the ingressClass
    name: "internal-nginx"
    # -- Is this ingressClass enabled or not
    enabled: true
    # -- Is this the default ingressClass for the cluster
    default: false
    # -- Controller-value of the controller that is processing this ingressClass
    controllerValue: "k8s.io/internal-ingress-nginx"
  admissionWebhooks:
    patch:
      image:
        registry: foo.azurecr.io
        digest: ""
  service:
    annotations:
      "service.beta.kubernetes.io/azure-load-balancer-internal": "true"
      "service.beta.kubernetes.io/azure-load-balancer-internal-subnet": subnet01
    loadBalancerIP: "x.x.x.x"
  watchIngressWithoutClass: true
  ingressClassResource:
    default: true
defaultBackend:
  enabled: true
  image:
    registry: foo.azurecr.io
    digest: ""
Each ingress controller creates an Azure Load Balancer named kubernetes-internal.
I've searched the LoadBalancer annotations but can't find a way to control what the actual name of the ALB will be, or is it always kubernetes-internal?
Does anyone have any ideas?

Unable to start nginx-ingress-controller Readiness and Liveness probes failed

I have installed using the instructions at this link, following the "Install NGINX using NodePort" option.
When I run ks logs -f ingress-nginx-controller-7f48b8-s7pg4 -n ingress-nginx I get:
W0304 09:33:40.568799 8 client_config.go:614] Neither --kubeconfig nor --master was
specified. Using the inClusterConfig. This might not work.
I0304 09:33:40.569097 8 main.go:241] "Creating API client" host="https://10.96.0.1:443"
I0304 09:33:40.584904 8 main.go:285] "Running in Kubernetes cluster" major="1" minor="23" git="v1.23.1+k0s" state="clean" commit="b230d3e4b9d6bf4b731d96116a6643786e16ac3f" platform="linux/amd64"
I0304 09:33:40.911443 8 main.go:105] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0304 09:33:40.916404 8 main.go:115] "Enabling new Ingress features available since Kubernetes v1.18"
W0304 09:33:40.918137 8 main.go:127] No IngressClass resource with name nginx found. Only annotation will be used.
I0304 09:33:40.942282 8 ssl.go:532] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0304 09:33:40.977766 8 nginx.go:254] "Starting NGINX Ingress controller"
I0304 09:33:41.007616 8 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"1a4482d2-86cb-44f3-8ebb-d6342561892f", APIVersion:"v1", ResourceVersion:"987560", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
E0304 09:33:42.087113 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
E0304 09:33:43.041954 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
E0304 09:33:44.724681 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
E0304 09:33:48.303789 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
E0304 09:33:59.113203 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
E0304 09:34:16.727052 8 reflector.go:138] k8s.io/client-go#v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1beta1.Ingress: failed to list *v1beta1.Ingress: the server could not find the requested resource
I0304 09:34:39.216165 8 main.go:187] "Received SIGTERM, shutting down"
I0304 09:34:39.216773 8 nginx.go:372] "Shutting down controller queues"
E0304 09:34:39.217779 8 store.go:178] timed out waiting for caches to sync
I0304 09:34:39.217856 8 nginx.go:296] "Starting NGINX process"
I0304 09:34:39.218007 8 leaderelection.go:243] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
I0304 09:34:39.219741 8 queue.go:78] "queue has been shutdown, failed to enqueue" key="&ObjectMeta{Name:initial-sync,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},}"
I0304 09:34:39.219787 8 nginx.go:316] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0304 09:34:39.242501 8 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0304 09:34:39.242807 8 queue.go:78] "queue has been shutdown, failed to enqueue" key="&ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[]OwnerReference{},Finalizers:[],ClusterName:,ManagedFields:[]ManagedFieldsEntry{},}"
I0304 09:34:39.242837 8 status.go:84] "New leader elected" identity="ingress-nginx-controller-7f48b8-s7pg4"
I0304 09:34:39.252025 8 status.go:204] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-7f48b8-s7pg4" node="fbcdcesdn02"
I0304 09:34:39.255282 8 status.go:132] "removing value from ingress status" address=[]
I0304 09:34:39.255328 8 nginx.go:380] "Stopping admission controller"
I0304 09:34:39.255379 8 nginx.go:388] "Stopping NGINX process"
E0304 09:34:39.255664 8 nginx.go:319] "Error listening for TLS connections" err="http: Server closed"
2022/03/04 09:34:39 [notice] 43#43: signal process started
I0304 09:34:40.263361 8 nginx.go:401] "NGINX process has stopped"
I0304 09:34:40.263396 8 main.go:195] "Handled quit, awaiting Pod deletion"
I0304 09:34:50.263585 8 main.go:198] "Exiting" code=0
When I run ks describe pod ingress-nginx-controller-7f48b8-s7pg4 -n ingress-nginx I get:
Name: ingress-nginx-controller-7f48b8-s7pg4
Namespace: ingress-nginx
Priority: 0
Node: fxxxxxxxx/10.XXX.XXX.XXX
Start Time: Fri, 04 Mar 2022 08:12:57 +0200
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=7f48b8
Annotations: kubernetes.io/psp: 00-k0s-privileged
Status: Running
IP: 10.244.0.119
IPs:
IP: 10.244.0.119
Controlled By: ReplicaSet/ingress-nginx-controller-7f48b8
Containers:
controller:
Container ID: containerd://638ff4d63b7ba566125bd6789d48db6e8149b06cbd9d887ecc57d08448ba1d7e
Image: k8s.gcr.io/ingress-nginx/controller:v0.48.1#sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899
Image ID: k8s.gcr.io/ingress-nginx/controller#sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--election-id=ingress-controller-leader
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 04 Mar 2022 11:33:40 +0200
Finished: Fri, 04 Mar 2022 11:34:50 +0200
Ready: False
Restart Count: 61
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-7f48b8-s7pg4 (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zvcnr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-zvcnr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 23m (x316 over 178m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
Warning BackOff 8m52s (x555 over 174m) kubelet Back-off restarting failed container
Normal Pulled 3m54s (x51 over 178m) kubelet Container image "k8s.gcr.io/ingress-nginx/controller:v0.48.1#sha256:e9fb216ace49dfa4a5983b183067e97496e7a8b307d2093f4278cd550c303899" already present on machine
When I try to curl the health endpoints I get Connection refused.
The state of the pods shows that they are both not ready:
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-4hzzk 0/1 Completed 0 3h30m
ingress-nginx-controller-7f48b8-s7pg4 0/1 CrashLoopBackOff 63 (91s ago) 3h30m
I have tried to increase the values for initialDelaySeconds in /etc/nginx/nginx.conf, but when I attempt to exec into the container (ks exec -it -n ingress-nginx ingress-nginx-controller-7f48b8-s7pg4 -- bash) I also get the error error: unable to upgrade connection: container not found ("controller").
I am not really sure where I should be looking in the overall setup.
I have installed using the instructions at this link, following the "Install NGINX using NodePort" option.
The problem is that you are using outdated k0s documentation:
https://docs.k0sproject.io/v1.22.2+k0s.1/examples/nginx-ingress/
You should use this link instead:
https://docs.k0sproject.io/main/examples/nginx-ingress/
By following the current documentation link, you will install the controller-v1.0.0 version on your Kubernetes cluster.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml
The result is:
$ sudo k0s kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-dw2f4 0/1 Completed 0 11m
ingress-nginx-admission-patch-4dmpd 0/1 Completed 0 11m
ingress-nginx-controller-75f58fbf6b-xrfxr 1/1 Running 0 11m
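To confirm that the new controller is really the one running, an optional check (this assumes the standard labels used by the upstream manifest):
POD=$(kubectl -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o jsonpath='{.items[0].metadata.name}')
kubectl -n ingress-nginx exec "$POD" -- /nginx-ingress-controller --version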

K3d gives "Error response from daemon: invalid reference format" error

I'm trying to run k3d with a previous version of k8s (v1.20.2, which matches the current version of k8s on OVH). I understand that the correct way to do this is to specify the k3s image in the config file. Running this fails with Error response from daemon: invalid reference format (full logs below).
How can I avoid this error?
Command:
k3d cluster create bitbuyer-cluster --config ./k3d-config.yml
Config:
# k3d-config.yml
apiVersion: k3d.io/v1alpha3
kind: Simple
# version for k8s v1.20.2
image: rancher/k3s:v1.20.11+k3s2
options:
  k3s:
    extraArgs:
    - arg: "--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%"
      nodeFilters:
      - server:*
    - arg: "--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%"
      nodeFilters:
      - server:*
Logs:
# $ k3d cluster create bitbuyer-cluster --trace --config ./k3d-config.yml
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.9 OSType:linux OS:Ubuntu 20.04.3 LTS Arch:x86_64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs}
DEBU[0000] Additional CLI Configuration:
cli:
api-port: ""
env: []
k3s-node-labels: []
k3sargs: []
ports: []
registries:
create: ""
runtime-labels: []
volumes: []
DEBU[0000] Validating file ./k3d-config.yml against default JSONSchema...
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:62}
INFO[0000] Using config file ./k3d-config.yml (k3d.io/v1alpha3#simple)
DEBU[0000] Configuration:
agents: 0
apiversion: k3d.io/v1alpha3
image: rancher/k3s:v1.20.11+k3s2
kind: Simple
network: ""
options:
k3d:
disableimagevolume: false
disableloadbalancer: false
disablerollback: false
loadbalancer:
configoverrides: []
timeout: 0s
wait: true
k3s:
extraargs:
- arg: --kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%
nodeFilters:
- server:*
- arg: --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%
nodeFilters:
- server:*
kubeconfig:
switchcurrentcontext: true
updatedefaultkubeconfig: true
runtime:
agentsmemory: ""
gpurequest: ""
serversmemory: ""
registries:
config: ""
use: []
servers: 1
subnet: ""
token: ""
TRAC[0000] Trying to read config apiVersion='k3d.io/v1alpha3', kind='simple'
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:rancher/k3s:v1.20.11+k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[{Arg:--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% NodeFilters:[server:*]} {Arg:--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% NodeFilters:[server:*]}] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
TRAC[0000] VolumeFilterMap: map[]
TRAC[0000] PortFilterMap: map[]
TRAC[0000] K3sNodeLabelFilterMap: map[]
TRAC[0000] RuntimeLabelFilterMap: map[]
TRAC[0000] EnvFilterMap: map[]
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:43681} Image:rancher/k3s:v1.20.11+k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[{Arg:--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% NodeFilters:[server:*]} {Arg:--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% NodeFilters:[server:*]}] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
DEBU[0000] generated loadbalancer config:
ports:
6443.tcp:
- k3d-bitbuyer-cluster-server-0
settings:
workerConnections: 1024
TRAC[0000] Filtering 2 nodes by [server:*]
TRAC[0000] Filtered 1 nodes (filter: [server:*])
TRAC[0000] Filtering 2 nodes by [server:*]
TRAC[0000] Filtered 1 nodes (filter: [server:*])
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] ===== Processed Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0000] Prep: Network
DEBU[0000] Found network {Name:k3d-bitbuyer-cluster ID:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 Created:2021-10-15 12:56:38.391159451 +0100 WEST Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[] Config:[{Subnet:172.25.0.0/16 IPRange: Gateway:172.25.0.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[] Labels:map[app:k3d] Peers:[] Services:map[]}
INFO[0000] Re-using existing network 'k3d-bitbuyer-cluster' (f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384)
INFO[0000] Created volume 'k3d-bitbuyer-cluster-images'
TRAC[0000] Using Registries: []
TRAC[0000]
===== Creating Cluster =====
Runtime:
{}
Cluster:
&{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 External:false IPAM:{IPPrefix:172.25.0.0/16 IPsUsed:[172.25.0.1] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:k3d-bitbuyer-cluster-images}
ClusterCreatOpts:
&{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}}
============================
INFO[0000] Starting new tools node...
TRAC[0000] Creating node from spec
&{Name:k3d-bitbuyer-cluster-tools Role:noRole Image:docker.io/rancher/k3d-tools:5.0.1 Volumes:[k3d-bitbuyer-cluster-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] Env:[] Cmd:[] Args:[noop] Ports:map[] Restart:false Created: RuntimeLabels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.version:v5.0.1] K3sNodeLabels:map[] Networks:[k3d-bitbuyer-cluster] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
TRAC[0000] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-bitbuyer-cluster-tools Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[noop] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3d-tools:5.0.1 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.role:noRole k3d.version:v5.0.1] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-bitbuyer-cluster-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00064d1cf} NetworkingConfig:{EndpointsConfig:map[k3d-bitbuyer-cluster:0xc0004da000]}}
INFO[0001] Creating node 'k3d-bitbuyer-cluster-server-0'
TRAC[0001] Creating node from spec
&{Name:k3d-bitbuyer-cluster-server-0 Role:server Image:rancher/k3s:v1.20.11+k3s2 Volumes:[k3d-bitbuyer-cluster-images:/k3d/images] Env:[K3S_TOKEN=QEQoybzqvqzTjQkBhTpz] Cmd:[] Args:[--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%] Ports:map[] Restart:true Created: RuntimeLabels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16 k3d.cluster.token:QEQoybzqvqzTjQkBhTpz k3d.cluster.url:https://k3d-bitbuyer-cluster-server-0:6443] K3sNodeLabels:map[] Networks:[k3d-bitbuyer-cluster] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc000654240} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
DEBU[0001] DockerHost:
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-bitbuyer-cluster-server-0 Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_TOKEN=QEQoybzqvqzTjQkBhTpz K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[server --kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% --tls-san 0.0.0.0] Healthcheck:<nil> ArgsEscaped:false Image:rancher/k3s:v1.20.11+k3s2 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16 k3d.cluster.token:QEQoybzqvqzTjQkBhTpz k3d.cluster.url:https://k3d-bitbuyer-cluster-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:43681 k3d.version:v5.0.1] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-bitbuyer-cluster-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00035040f} NetworkingConfig:{EndpointsConfig:map[k3d-bitbuyer-cluster:0xc0003740c0]}}
ERRO[0001] Failed Cluster Creation: failed setup of server/agent node k3d-bitbuyer-cluster-server-0: failed to create node: runtime failed to create node 'k3d-bitbuyer-cluster-server-0': failed to create container for node 'k3d-bitbuyer-cluster-server-0': docker failed to create container 'k3d-bitbuyer-cluster-server-0': Error response from daemon: invalid reference format
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'bitbuyer-cluster'
ERRO[0001] failed to get cluster: No nodes found for given cluster
FATA[0001] Cluster creation FAILED, also FAILED to rollback changes!
Kubectl:
# $ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11", GitCommit:"27522a29febbcc4badac257763044d0d90c11abd", GitTreeState:"clean", BuildDate:"2021-09-15T19:21:44Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

How to make rootless containers with Podman efficient

I launched web applications in containers with the help of Podman, and there is a performance problem when I run the containers as a rootless user: I have to wait a few seconds for the page to load.
The application uses a PostgreSQL database.
When I run the containers as root, the application runs much faster.
Is there some way to improve the performance of rootless containers?
podman info:
host:
  arch: amd64
  buildahVersion: 1.21.3
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.29-1.module_el8.4.0+886+c9a8d9ad.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: 97bba1e91aaab5be2e93bacd34ec4e66655a02ae'
  cpus: 4
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: file
  hostname: localhost
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 5002
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 5002
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 4.18.0-305.12.1.el8_4.x86_64
  linkmode: dynamic
  memFree: 134930432
  memTotal: 8145588224
  ociRuntime:
    name: runc
    package: runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
    path: /usr/bin/runc
    version: |-
      runc version spec: 1.0.2-dev
      go: go1.15.14
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    path: /run/user/5002/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module_el8.4.0+641+6116a774.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 2139840512
  swapTotal: 2147479552
  uptime: 2h 58m 15.4s (Approximately 0.08 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/user/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.6-1.module_el8.4.0+886+c9a8d9ad.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.6
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/admin/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 0
  runRoot: /run/user/5002/containers
  volumePath: /home/user/.local/share/containers/storage/volumes
version:
  APIVersion: 3.2.3
  Built: 1632432139
  BuiltTime: Thu Sep 23 23:22:19 2021
  GitCommit: ""
  GoVersion: go1.15.14
  OsArch: linux/amd64
  Version: 3.2.3

How do I change the cgroup version for podman

I am trying to run Podman with cgroups v2 enabled. I found a couple of blog posts explaining how to change the runtime to crun and the cgroup_manager to cgroupfs, but I don't know how to actually set the cgroup version to v2.
I am running Podman on Manjaro Linux with kernel 5.4, so, if I am correct, cgroups v2 should be supported.
Here is the output of podman info:
host:
  BuildahVersion: 1.14.3
  CgroupVersion: v1
  Conmon:
    package: Unknown
    path: /usr/bin/conmon
    version: 'conmon version 2.0.15, commit: 1bddbf7051a973f4a4fecf06faa0c48e82f1e9e1'
  Distribution:
    distribution: manjaro
    version: unknown
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 65536
      size: 66536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 65536
      size: 66536
  MemFree: 9938743296
  MemTotal: 16709140480
  OCIRuntime:
    name: crun
    package: Unknown
    path: /usr/bin/crun
    version: |-
      crun version 0.13
      commit: e79e4de4ac16da0ce48777afb72c6241de870525
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 18296258560
  SwapTotal: 18296258560
  arch: amd64
  cpus: 6
  eventlogger: journald
  hostname: josef-pc
  kernel: 5.4.30-1-MANJARO
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: Unknown
    Version: |-
      slirp4netns version 1.0.0
      commit: a3be729152a33e692cd28b52f664defbf2e7810a
      libslirp: 4.1.0
  uptime: 18m 20.54s
registries:
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/josmos/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: Unknown
      Version: |-
        fusermount3 version: 3.9.1
        fuse-overlayfs: version 0.7.8
        FUSE library version 3.9.1
        using FUSE kernel interface version 7.31
  GraphRoot: /home/josmos/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 9
  RunRoot: /run/user/1000/containers
  VolumePath: /home/josmos/.local/share/containers/storage/volumes
Are you using OpenRC? From what I see, you need to set rc_cgroup_mode="unified" in the rc.conf file.
If you were using systemd instead, you'd need to run # grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1".
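For completeness, a rough sketch of both paths plus a check afterwards (the GRUB steps assume /etc/default/grub and grub-mkconfig, which is the usual setup on Manjaro; adjust for your bootloader):
# OpenRC: set the unified cgroup hierarchy in /etc/rc.conf (as root), then reboot
#   rc_cgroup_mode="unified"
# systemd without grubby (typical on Arch/Manjaro): add
#   systemd.unified_cgroup_hierarchy=1
# to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the config and reboot
sudo grub-mkconfig -o /boot/grub/grub.cfg
# afterwards, check which cgroup version podman sees
podman info | grep -i cgroupversion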