How do I change the cgroup version for podman containers?

I am trying to run podman with cgroups v2 enabled. I found a couple of blog posts explaining how to change the runtime to crun and the cgroup_manager to cgroupfs, but I don't know how to actually set the cgroup version to v2.
I am running podman on Manjaro Linux with kernel 5.4, so, if I am correct, cgroups v2 should be supported.
Here is the output of podman info:
host:
BuildahVersion: 1.14.3
CgroupVersion: v1
Conmon:
package: Unknown
path: /usr/bin/conmon
version: 'conmon version 2.0.15, commit: 1bddbf7051a973f4a4fecf06faa0c48e82f1e9e1'
Distribution:
distribution: manjaro
version: unknown
IDMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 65536
size: 66536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 65536
size: 66536
MemFree: 9938743296
MemTotal: 16709140480
OCIRuntime:
name: crun
package: Unknown
path: /usr/bin/crun
version: |-
crun version 0.13
commit: e79e4de4ac16da0ce48777afb72c6241de870525
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
SwapFree: 18296258560
SwapTotal: 18296258560
arch: amd64
cpus: 6
eventlogger: journald
hostname: josef-pc
kernel: 5.4.30-1-MANJARO
os: linux
rootless: true
slirp4netns:
Executable: /usr/bin/slirp4netns
Package: Unknown
Version: |-
slirp4netns version 1.0.0
commit: a3be729152a33e692cd28b52f664defbf2e7810a
libslirp: 4.1.0
uptime: 18m 20.54s
registries:
search:
- docker.io
- registry.fedoraproject.org
- quay.io
- registry.access.redhat.com
- registry.centos.org
store:
ConfigFile: /home/josmos/.config/containers/storage.conf
ContainerStore:
number: 0
GraphDriverName: overlay
GraphOptions:
overlay.mount_program:
Executable: /usr/bin/fuse-overlayfs
Package: Unknown
Version: |-
fusermount3 version: 3.9.1
fuse-overlayfs: version 0.7.8
FUSE library version 3.9.1
using FUSE kernel interface version 7.31
GraphRoot: /home/josmos/.local/share/containers/storage
GraphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
ImageStore:
number: 9
RunRoot: /run/user/1000/containers
VolumePath: /home/josmos/.local/share/containers/storage/volumes

Are you using OpenRC? From what I see, you need to set rc_cgroup_mode="unified" in the /etc/rc.conf file.
If you were using systemd instead, you'd need to run # grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1".
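Manjaro boots with systemd but does not ship grubby, so on that system the same kernel parameter is typically added through GRUB instead. A minimal sketch, assuming GRUB is the bootloader (the example command line and paths are illustrative, adjust to your setup):
# 1. In /etc/default/grub, append the parameter to the kernel command line, e.g.:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=1"
# 2. Regenerate the GRUB config and reboot:
sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot
# 3. After the reboot, verify that cgroups v2 is active:
stat -fc %T /sys/fs/cgroup/          # should print: cgroup2fs
podman info | grep -i cgroupversion  # should now report v2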

Related

Pre-populated mysql docker image doesn't start when setting up pod template security context

I have a mysql image with a pre-populated schema; below I share the setup files.
My Dockerfile:
FROM mysql:8.0.31 as builder
# That file does the DB initialization but also runs the mysql daemon; by removing the last line it will only init
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]
# needed for initialization
ENV MYSQL_ROOT_PASSWORD=test
COPY ./sql-scripts /docker-entrypoint-initdb.d/
# Need to change the datadir to something other than /var/lib/mysql because the parent Dockerfile defines it as a volume.
# https://docs.docker.com/engine/reference/builder/#volume :
# Changing the volume from within the Dockerfile: If any build steps change the data within the volume after
# it has been declared, those changes will be discarded.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db"]
FROM mysql:8.0.31
COPY --from=builder /initialized-db /var/lib/mysql
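For completeness, building and pushing this image would look roughly like this (the tag matches the one referenced by the pod template below; the registry name is whatever you use):
docker build -t myDockerRegistry/mysql8-integration-test:v5 .
docker push myDockerRegistry/mysql8-integration-test:v5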
My pod template yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    label: 'backend'
spec:
  shareProcessNamespace: true
  containers:
    - name: "maven"
      image: maven:3.6.3-openjdk-11
      resources:
        requests:
          memory: "2Gi"
          cpu: "2"
        limits:
          memory: "10Gi"
          cpu: "10"
      command: [ sleep ]
      args: [ 1h ]
      securityContext:
        capabilities:
          add:
            - SYS_PTRACE
    - name: mysql
      image: myDockerRegistry/mysql8-integration-test:v5
      env:
        - name: MYSQL_USER
          value: test
        - name: MYSQL_PASSWORD
          value: test
        - name: MYSQL_ROOT_PASSWORD
          value: test
      securityContext:
        capabilities:
          add:
            - SYS_PTRACE
My pipeline:
pipeline {
    agent {
        kubernetes {
            yaml libraryResource('pod-templates/backend.yaml')
        }
    }
    stages { ... }
}
The above setup works fine, but I want to use a dynamic PVC for the workspace, so I add the following line to my pipeline after the pod template:
workspaceVolume dynamicPVC(accessModes: 'ReadWriteOnce',requestsSize: "10Gi", storageClassName: 'premium-rwo')
But I have to add a securityContext to my pod template so Jenkins will be able to mount the PVC in the agent:
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
With those changes the pod starts and the volume is mounted correctly, but the mysql container doesn't work. This is the error log:
2022-11-03 09:33:25+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.31-1.el8 started.
'/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
2022-11-03T09:33:25.839933Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
2022-11-03T09:33:25.842508Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 13
2022-11-03T09:33:25.845263Z 0 [Warning] [MY-010122] [Server] One can only use the --user switch if running as root
mysqld: File './binlog.index' not found (OS errno 13 - Permission denied)
2022-11-03T09:33:25.845867Z 0 [ERROR] [MY-010119] [Server] Aborting
2022-11-03T09:33:25.846078Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.31) MySQL Community Server - GPL.
I assume it is something related to root rights in the mysql container, but it is weird because the official image runs perfectly.
Finally, this is the raw YAML generated after injecting the Jenkins agent:
apiVersion: v1
kind: Pod
metadata:
annotations:
buildUrl: >-
http://jenkins.jenkins.svc.cluster.local:8080/job/LegacyProjects/job/my-project/job/k8s-test/79/
runUrl: job/LegacyProjects/job/my-project/job/k8s-test/79/
labels:
label: backend
jenkins/jenkins-jenkins-agent: 'true'
jenkins/label-digest: 4581eadfdfcb3d0141b8e8727b53b2ff9a3575ec
jenkins/label: LegacyProjects_my-project_k8s-test_79-xgtxd
name: my-project-k8s-test-79-xgtxd-2xw2r-8wj64
namespace: jenkins
spec:
containers:
- args:
- 1h
command:
- sleep
image: 'maven:3.6.3-openjdk-11'
name: maven
resources:
limits:
memory: 10Gi
cpu: '10'
requests:
memory: 2Gi
cpu: '2'
securityContext:
capabilities:
add:
- SYS_PTRACE
volumeMounts:
- mountPath: /home/jenkins/agent
name: workspace-volume
readOnly: false
- env:
- name: MYSQL_USER
value: test
- name: MYSQL_PASSWORD
value: test
- name: MYSQL_ROOT_PASSWORD
value: test
image: 'myDockerRegistry/mysql8-integration-test:v5'
name: mysql
securityContext:
capabilities:
add:
- SYS_PTRACE
volumeMounts:
- mountPath: /home/jenkins/agent
name: workspace-volume
readOnly: false
- env:
- name: JENKINS_SECRET
value: '********'
- name: JENKINS_TUNNEL
value: 'jenkins-agent.jenkins.svc.cluster.local:50000'
- name: JENKINS_AGENT_NAME
value: my-project-k8s-test-79-xgtxd-2xw2r-8wj64
- name: JENKINS_NAME
value: my-project-k8s-test-79-xgtxd-2xw2r-8wj64
- name: JENKINS_AGENT_WORKDIR
value: /home/jenkins/agent
- name: JENKINS_URL
value: 'http://jenkins.jenkins.svc.cluster.local:8080/'
image: 'jenkins/inbound-agent:4.11-1-jdk11'
name: jnlp
resources:
limits: {}
requests:
memory: 256Mi
cpu: 100m
volumeMounts:
- mountPath: /home/jenkins/agent
name: workspace-volume
readOnly: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: Never
securityContext:
fsGroup: 1000
runAsGroup: 1000
runAsUser: 1000
shareProcessNamespace: true
volumes:
- name: workspace-volume
persistentVolumeClaim:
claimName: pvc-workspace-my-project-test-79-xgtxd-2xw2r-8wj64
readOnly: false
Any help will be appreciated
The COPY command by default copies files as the root user; you should add the --chown=1000:1000 flag to that command to set the correct user and group (in your case, the user with uid and gid 1000, which is specified in securityContext). See https://stackoverflow.com/a/44766666 and https://docs.docker.com/engine/reference/builder/#copy for more details.
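For example, the final stage of the Dockerfile above could become (a sketch; 1000:1000 mirrors the runAsUser/runAsGroup/fsGroup from your pod securityContext):
FROM mysql:8.0.31
# copy the pre-initialized data directory owned by the non-root uid/gid the pod runs with
COPY --from=builder --chown=1000:1000 /initialized-db /var/lib/mysql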
While your use case might require pre-building an image with the database, consider running the official mysql image as the DB and running your application with an init container that applies Liquibase/Flyway or another (even built-in) database migration toolkit, which might be a more portable solution in the long run; a rough sketch follows below.
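As a rough illustration of that second suggestion (every name, image tag, and credential below is a placeholder, not something from your setup; the official mysql image would run separately, e.g. behind a Service called mysql):
apiVersion: v1
kind: Pod
metadata:
  name: backend                         # placeholder
spec:
  initContainers:
    - name: db-migrations
      image: flyway/flyway:9            # or liquibase, or your app's built-in migration step
      args: ["-url=jdbc:mysql://mysql:3306/test", "-user=test", "-password=test", "migrate"]
      # SQL migration scripts would be mounted into /flyway/sql here
  containers:
    - name: app
      image: my-app:latest              # placeholder application image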

Why does the K3s system upgrade controller fail with "not found, requeuing"?

I installed the system upgrade controller and applied this plan manifest:
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: master-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: k3s-master-upgrade
        operator: In
        values:
          - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: worker-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: k3s-worker-upgrade
        operator: In
        values:
          - "true"
  prepare:
    args:
      - prepare
      - master-plan
    image: rancher/k3s-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
I applied the labels and checked them:
$ kubectl label node crux k3s-worker-upgrade=true
$ kubectl describe nodes crux | grep k3s-worker-upgrade
k3s-worker-upgrade=true
$ kubectl label node nemo k3s-master-upgrade=true
$ kubectl describe nodes nemo | grep k3s-master-upgrade
k3s-master-upgrade=true
According to kubectl get nodes I'm still on v1.23.6+k3s1, but the stable channel is on v1.24.4+k3s1.
I get the following errors:
$ kubectl -n system-upgrade logs deployment.apps/system-upgrade-controller
time="2022-09-12T11:29:31Z" level=error msg="error syncing 'system-upgrade/apply-worker-plan-on-crux-with-4190e4adda3866e909fc7735c1-f0dff': handler system-upgrade-controller: jobs.batch \"apply-worker-plan-on-crux-with-4190e4adda3866e909fc7735c1-f0dff\" not found, requeuing"
time="2022-09-12T11:30:35Z" level=error msg="error syncing 'system-upgrade/apply-master-plan-on-nemo-with-4190e4adda3866e909fc7735c1-9cf4f': handler system-upgrade-controller: jobs.batch \"apply-master-plan-on-nemo-with-4190e4adda3866e909fc7735c1-9cf4f\" not found, requeuing"
$ kubectl -n system-upgrade get jobs -o yaml
- apiVersion: batch/v1
  kind: Job
  metadata:
    labels:
      upgrade.cattle.io/controller: system-upgrade-controller
      upgrade.cattle.io/node: crux
      upgrade.cattle.io/plan: worker-plan
      upgrade.cattle.io/version: v1.24.4-k3s1
  status:
    conditions:
      - lastProbeTime: "2022-09-12T12:14:31Z"
        lastTransitionTime: "2022-09-12T12:14:31Z"
        message: Job was active longer than specified deadline
        reason: DeadlineExceeded
        status: "True"
        type: Failed
    failed: 1
    startTime: "2022-09-12T11:59:31Z"
    uncountedTerminatedPods: {}
Same here.
I have managed to upgrade k3s only by doing a manual upgrade with the binaries.
On all nodes:
wget https://github.com/k3s-io/k3s/releases/download/v1.24.4%2Bk3s1/k3s
On the server (master):
systemctl stop k3s
cp ./k3s /usr/local/bin/
systemctl start k3s
On the agents (workers):
systemctl stop k3s-agent
cp ./k3s /usr/local/bin/
systemctl start k3s-agent
It seems much faster and easier than struggling with the automated upgrade: no drains, no cordons...
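For reference, a minimal scripted sketch of those manual steps (the chmod/install step and the final verification are additions here, not part of the original steps; run the agent variant on the workers):
wget -O k3s https://github.com/k3s-io/k3s/releases/download/v1.24.4%2Bk3s1/k3s
chmod +x k3s
# server (master):
sudo systemctl stop k3s
sudo install -m 755 k3s /usr/local/bin/k3s
sudo systemctl start k3s
# agents (workers): same, but with the k3s-agent unit:
# sudo systemctl stop k3s-agent && sudo install -m 755 k3s /usr/local/bin/k3s && sudo systemctl start k3s-agent
# verify afterwards:
kubectl get nodes -o wide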

K3d gives "Error response from daemon: invalid reference format" error

I'm trying to run k3d with a previous version of k8s (v1.20.2, which matches the current version of k8s on OVH). I understand that the correct way of doing this is to specify the image of k3s in the config file. Running this fails with: Error response from daemon: invalid reference format (full logs below).
How can I avoid this error?
Command:
k3d cluster create bitbuyer-cluster --config ./k3d-config.yml
Config:
# k3d-config.yml
apiVersion: k3d.io/v1alpha3
kind: Simple
# version for k8s v1.20.2
image: rancher/k3s:v1.20.11+k3s2
options:
  k3s:
    extraArgs:
      - arg: "--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%"
        nodeFilters:
          - server:*
      - arg: "--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%"
        nodeFilters:
          - server:*
Logs:
# $ k3d cluster create bitbuyer-cluster --trace --config ./k3d-config.yml
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.9 OSType:linux OS:Ubuntu 20.04.3 LTS Arch:x86_64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs}
DEBU[0000] Additional CLI Configuration:
cli:
api-port: ""
env: []
k3s-node-labels: []
k3sargs: []
ports: []
registries:
create: ""
runtime-labels: []
volumes: []
DEBU[0000] Validating file ./k3d-config.yml against default JSONSchema...
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:62}
INFO[0000] Using config file ./k3d-config.yml (k3d.io/v1alpha3#simple)
DEBU[0000] Configuration:
agents: 0
apiversion: k3d.io/v1alpha3
image: rancher/k3s:v1.20.11+k3s2
kind: Simple
network: ""
options:
k3d:
disableimagevolume: false
disableloadbalancer: false
disablerollback: false
loadbalancer:
configoverrides: []
timeout: 0s
wait: true
k3s:
extraargs:
- arg: --kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%
nodeFilters:
- server:*
- arg: --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%
nodeFilters:
- server:*
kubeconfig:
switchcurrentcontext: true
updatedefaultkubeconfig: true
runtime:
agentsmemory: ""
gpurequest: ""
serversmemory: ""
registries:
config: ""
use: []
servers: 1
subnet: ""
token: ""
TRAC[0000] Trying to read config apiVersion='k3d.io/v1alpha3', kind='simple'
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:rancher/k3s:v1.20.11+k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[{Arg:--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% NodeFilters:[server:*]} {Arg:--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% NodeFilters:[server:*]}] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
TRAC[0000] VolumeFilterMap: map[]
TRAC[0000] PortFilterMap: map[]
TRAC[0000] K3sNodeLabelFilterMap: map[]
TRAC[0000] RuntimeLabelFilterMap: map[]
TRAC[0000] EnvFilterMap: map[]
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:43681} Image:rancher/k3s:v1.20.11+k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[{Arg:--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% NodeFilters:[server:*]} {Arg:--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% NodeFilters:[server:*]}] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
DEBU[0000] generated loadbalancer config:
ports:
6443.tcp:
- k3d-bitbuyer-cluster-server-0
settings:
workerConnections: 1024
TRAC[0000] Filtering 2 nodes by [server:*]
TRAC[0000] Filtered 1 nodes (filter: [server:*])
TRAC[0000] Filtering 2 nodes by [server:*]
TRAC[0000] Filtered 1 nodes (filter: [server:*])
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] ===== Processed Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0000] Prep: Network
DEBU[0000] Found network {Name:k3d-bitbuyer-cluster ID:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 Created:2021-10-15 12:56:38.391159451 +0100 WEST Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[] Config:[{Subnet:172.25.0.0/16 IPRange: Gateway:172.25.0.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[] Labels:map[app:k3d] Peers:[] Services:map[]}
INFO[0000] Re-using existing network 'k3d-bitbuyer-cluster' (f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384)
INFO[0000] Created volume 'k3d-bitbuyer-cluster-images'
TRAC[0000] Using Registries: []
TRAC[0000]
===== Creating Cluster =====
Runtime:
{}
Cluster:
&{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 External:false IPAM:{IPPrefix:172.25.0.0/16 IPsUsed:[172.25.0.1] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:k3d-bitbuyer-cluster-images}
ClusterCreatOpts:
&{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}}
============================
INFO[0000] Starting new tools node...
TRAC[0000] Creating node from spec
&{Name:k3d-bitbuyer-cluster-tools Role:noRole Image:docker.io/rancher/k3d-tools:5.0.1 Volumes:[k3d-bitbuyer-cluster-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] Env:[] Cmd:[] Args:[noop] Ports:map[] Restart:false Created: RuntimeLabels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.version:v5.0.1] K3sNodeLabels:map[] Networks:[k3d-bitbuyer-cluster] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
TRAC[0000] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-bitbuyer-cluster-tools Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[noop] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3d-tools:5.0.1 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.role:noRole k3d.version:v5.0.1] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-bitbuyer-cluster-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00064d1cf} NetworkingConfig:{EndpointsConfig:map[k3d-bitbuyer-cluster:0xc0004da000]}}
INFO[0001] Creating node 'k3d-bitbuyer-cluster-server-0'
TRAC[0001] Creating node from spec
&{Name:k3d-bitbuyer-cluster-server-0 Role:server Image:rancher/k3s:v1.20.11+k3s2 Volumes:[k3d-bitbuyer-cluster-images:/k3d/images] Env:[K3S_TOKEN=QEQoybzqvqzTjQkBhTpz] Cmd:[] Args:[--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%] Ports:map[] Restart:true Created: RuntimeLabels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16 k3d.cluster.token:QEQoybzqvqzTjQkBhTpz k3d.cluster.url:https://k3d-bitbuyer-cluster-server-0:6443] K3sNodeLabels:map[] Networks:[k3d-bitbuyer-cluster] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc000654240} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
DEBU[0001] DockerHost:
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-bitbuyer-cluster-server-0 Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_TOKEN=QEQoybzqvqzTjQkBhTpz K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[server --kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% --tls-san 0.0.0.0] Healthcheck:<nil> ArgsEscaped:false Image:rancher/k3s:v1.20.11+k3s2 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16 k3d.cluster.token:QEQoybzqvqzTjQkBhTpz k3d.cluster.url:https://k3d-bitbuyer-cluster-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:43681 k3d.version:v5.0.1] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-bitbuyer-cluster-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00035040f} NetworkingConfig:{EndpointsConfig:map[k3d-bitbuyer-cluster:0xc0003740c0]}}
ERRO[0001] Failed Cluster Creation: failed setup of server/agent node k3d-bitbuyer-cluster-server-0: failed to create node: runtime failed to create node 'k3d-bitbuyer-cluster-server-0': failed to create container for node 'k3d-bitbuyer-cluster-server-0': docker failed to create container 'k3d-bitbuyer-cluster-server-0': Error response from daemon: invalid reference format
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'bitbuyer-cluster'
ERRO[0001] failed to get cluster: No nodes found for given cluster
FATA[0001] Cluster creation FAILED, also FAILED to rollback changes!
Kubectl:
# $ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11", GitCommit:"27522a29febbcc4badac257763044d0d90c11abd", GitTreeState:"clean", BuildDate:"2021-09-15T19:21:44Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

How to make rootless containers with Podman efficient

I launched web applications in containers with the help of Podman, and there is a performance problem when I run the containers as a rootless user: I have to wait a few seconds for the page to load.
The application uses the postgres database.
When I run the containers as root, the application runs much faster.
Is there some way to improve the performance of the rootless containers?
podman info:
host:
arch: amd64
buildahVersion: 1.21.3
cgroupControllers: []
cgroupManager: cgroupfs
cgroupVersion: v1
conmon:
package: conmon-2.0.29-1.module_el8.4.0+886+c9a8d9ad.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.29, commit: 97bba1e91aaab5be2e93bacd34ec4e66655a02ae'
cpus: 4
distribution:
distribution: '"centos"'
version: "8"
eventLogger: file
hostname: localhost
idMappings:
gidmap:
- container_id: 0
host_id: 5002
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 5002
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 4.18.0-305.12.1.el8_4.x86_64
linkmode: dynamic
memFree: 134930432
memTotal: 8145588224
ociRuntime:
name: runc
package: runc-1.0.0-74.rc95.module_el8.4.0+886+c9a8d9ad.x86_64
path: /usr/bin/runc
version: |-
runc version spec: 1.0.2-dev
go: go1.15.14
libseccomp: 2.5.1
os: linux
remoteSocket:
path: /run/user/5002/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.1.8-1.module_el8.4.0+641+6116a774.x86_64
version: |-
slirp4netns version 1.1.8
commit: d361001f495417b880f20329121e3aa431a8f90f
libslirp: 4.3.1
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.1
swapFree: 2139840512
swapTotal: 2147479552
uptime: 2h 58m 15.4s (Approximately 0.08 days)
registries:
search:
- registry.access.redhat.com
- registry.redhat.io
- docker.io
store:
configFile: /home/user/.config/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mount_program:
Executable: /usr/bin/fuse-overlayfs
Package: fuse-overlayfs-1.6-1.module_el8.4.0+886+c9a8d9ad.x86_64
Version: |-
fusermount3 version: 3.2.1
fuse-overlayfs: version 1.6
FUSE library version 3.2.1
using FUSE kernel interface version 7.26
graphRoot: /home/admin/.local/share/containers/storage
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 0
runRoot: /run/user/5002/containers
volumePath: /home/user/.local/share/containers/storage/volumes
version:
APIVersion: 3.2.3
Built: 1632432139
BuiltTime: Thu Sep 23 23:22:19 2021
GitCommit: ""
GoVersion: go1.15.14
OsArch: linux/amd64
Version: 3.2.3

"jx boot" fails in "openshift-3.11" provider with "tekton pipeline controller" pod into "crashloopbackoff" state

Summary:
I already have a "static jenkins server" type jenkins-x setup running on the openshift 3.11 provider. The cluster crashed and I want to reinstall jenkins-x in my cluster, but there is no support for "static jenkins server" now.
So I am trying to install "jenkins-x" via "jx boot", but the installation fails with the "tekton pipeline controller" pod in "crashloopbackoff" state.
Steps to reproduce the behavior:
jx-requirements.yml:
autoUpdate:
enabled: false
schedule: ""
bootConfigURL: https://github.com/jenkins-x/jenkins-x-boot-config.git
cluster:
clusterName: cic-60
devEnvApprovers:
- automation
environmentGitOwner: cic-60
gitKind: bitbucketserver
gitName: bs
gitServer: http://rtx-swtl-git.fnc.net.local
namespace: jx
provider: openshift
registry: docker-registry.default.svc:5000
environments:
- ingress:
domain: 172.29.35.81.nip.io
externalDNS: false
namespaceSubDomain: -jx.
tls:
email: ""
enabled: false
production: false
key: dev
repository: environment-cic-60-dev
- ingress:
domain: ""
externalDNS: false
namespaceSubDomain: ""
tls:
email: ""
enabled: false
production: false
key: staging
repository: environment-cic-60-staging
- ingress:
domain: ""
externalDNS: false
namespaceSubDomain: ""
tls:
email: ""
enabled: false
production: false
key: production
repository: environment-cic-60-production
gitops: true
ingress:
domain: 172.29.35.81.nip.io
externalDNS: false
namespaceSubDomain: -jx.
tls:
email: ""
enabled: false
production: false
kaniko: true
repository: nexus
secretStorage: local
storage:
backup:
enabled: false
url: ""
logs:
enabled: false
url: ""
reports:
enabled: false
url: ""
repository:
enabled: false
url: ""
vault: {}
velero:
schedule: ""
ttl: ""
versionStream:
ref: v1.0.562
url: https://github.com/jenkins-x/jenkins-x-versions.git
webhook: lighthouse
Expected behavior:
All the pods under the jx namespace should be up & running and jenkins-x should be installed properly.
Actual behavior:
The Tekton pipeline controller pod is in "CrashLoopBackOff" state with an error:
Pods with status in "jx" namespace:
NAME READY STATUS RESTARTS AGE
jenkins-x-chartmuseum-5687695d57-pp994 1/1 Running 0 1d
jenkins-x-controllerbuild-78b4b56695-mg2vs 1/1 Running 0 1d
jenkins-x-controllerrole-765cf99bdb-swshp 1/1 Running 0 1d
jenkins-x-docker-registry-5bcd587565-rhd7q 1/1 Running 0 1d
jenkins-x-gcactivities-1598421600-jtgm6 0/1 Completed 0 1h
jenkins-x-gcactivities-1598423400-4rd76 0/1 Completed 0 43m
jenkins-x-gcactivities-1598425200-sd7xm 0/1 Completed 0 13m
jenkins-x-gcpods-1598421600-z7s4w 0/1 Completed 0 1h
jenkins-x-gcpods-1598423400-vzb6p 0/1 Completed 0 43m
jenkins-x-gcpods-1598425200-56zdp 0/1 Completed 0 13m
jenkins-x-gcpreviews-1598421600-5k4vf 0/1 Completed 0 1h
jenkins-x-nexus-c7dcb47c7-fh7kx 1/1 Running 0 1d
lighthouse-foghorn-654c868bc8-d5w57 1/1 Running 0 1d
lighthouse-gc-jobs-1598421600-bmsq8 0/1 Completed 0 1h
lighthouse-gc-jobs-1598423400-zskt5 0/1 Completed 0 43m
lighthouse-gc-jobs-1598425200-m9gtd 0/1 Completed 0 13m
lighthouse-jx-controller-6c9b8994bd-qt6tc 1/1 Running 0 1d
lighthouse-keeper-7c6fd9466f-gdjjt 1/1 Running 0 1d
lighthouse-webhooks-56668dc58b-4c52j 1/1 Running 0 1d
lighthouse-webhooks-56668dc58b-8dh27 1/1 Running 0 1d
tekton-pipelines-controller-76c8c8dd78-llj4c 0/1 CrashLoopBackOff 436 1d
tiller-7ddfd45c57-rwtt9 1/1 Running 0 1d
Error log:
2020/08/24 18:38:00 Registering 4 clients
2020/08/24 18:38:00 Registering 3 informer factories
2020/08/24 18:38:00 Registering 8 informers
2020/08/24 18:38:00 Registering 2 controllers
{"level":"info","caller":"logging/config.go:108","msg":"Successfully created the logger."}
{"level":"info","caller":"logging/config.go:109","msg":"Logging level set to info"}
{"level":"fatal","logger":"tekton","caller":"sharedmain/main.go:149","msg":"Version check failed","commit":"821ac4d","error":"kubernetes version \"v1.11.0\" is not compatible, need at least \"v1.14.0\" (this can be overridden with the env var \"KUBERNETES_MIN_VERSION\")","stacktrace":"github.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithConfig\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:149\ngithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithContext\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:114\nmain.main\n\tgithub.com/tektoncd/pipeline/cmd/controller/main.go:72\nruntime.main\n\truntime/proc.go:203"}
After downgrading the Tekton image from "0.11.0" to "0.9.0", the tekton pipeline controller pod is in Running state, but a new tekton pipeline webhook pod got created and it is in "CrashLoopBackOff".
Jx version:
Version 2.1.127
Commit 4bc05a9
Build date 2020-08-05T20:34:57Z
Go version 1.13.8
Git tree state clean
Diagnostic information:
The output of jx diagnose version is:
Running in namespace: jx
Version 2.1.127
Commit 4bc05a9
Build date 2020-08-05T20:34:57Z
Go version 1.13.8
Git tree state clean
NAME VERSION
Kubernetes cluster v1.11.0+d4cacc0
kubectl (installed in JX_BIN) v1.16.6-beta.0
helm client 2.16.9
git 2.24.1
Operating System "CentOS Linux release 7.8.2003 (Core)"
Please visit https://jenkins-x.io/faq/issues/ for any known issues.
Finished printing diagnostic information
Kubernetes cluster: openshift - 3.11
Kubectl version:
Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-10-15T09:45:30Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Operating system / Environment:
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
I need to install "jenkins-x" via "jx boot" on "openshift-3.11", which uses the default kubernetes version 1.11.0, but "jx boot" requires at least 1.14.0. Please suggest if there is any workaround to get jenkins-x on openshift-3.11.
As the error message in the crash loop shows, Kubernetes version "v1.11.0" is not compatible and at least "v1.14.0" is needed, which makes it not installable on OpenShift 3 (as it ships with Kubernetes 1.11.0). It seems Jenkins X comes with Tekton Pipelines v0.14.2, which requires at least Kubernetes 1.14.0 (and later releases like Tekton Pipelines v0.15.0 require Kubernetes 1.16.0).
{"level":"fatal","logger":"tekton","caller":"sharedmain/main.go:149","msg":"Version check failed","commit":"821ac4d","error":"kubernetes version \"v1.11.0\" is not compatible, need at least \"v1.14.0\" (this can be overridden with the env var \"KUBERNETES_MIN_VERSION\")","stacktrace":"github.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithConfig\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:149\ngithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain.MainWithContext\n\tgithub.com/tektoncd/pipeline/vendor/knative.dev/pkg/injection/sharedmain/main.go:114\nmain.main\n\tgithub.com/tektoncd/pipeline/cmd/controller/main.go:72\nruntime.main\n\truntime/proc.go:203"}
Theoretically, setting KUBERNETES_MIN_VERSION in the controller deployment might make it work, but it is not tested and most likely the controller won't behave correctly, as it uses features that are not available in 1.11.0. Other than this, there is no workaround that I know of.
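If you nevertheless want to experiment with that (untested) override, something along these lines would set the variable on the controller deployment that is crash-looping in the jx namespace (the webhook deployment would likely need the same treatment):
kubectl -n jx set env deployment/tekton-pipelines-controller KUBERNETES_MIN_VERSION=v1.11.0
kubectl -n jx rollout status deployment/tekton-pipelines-controller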