Unable to build podman-compatible containers using nix-build and dockerTools.buildImage

The following is invidious.nix, which builds a container image containing the Nix packages for Bash, BusyBox, and Invidious:
let
  # nixos-22.05 / https://status.nixos.org/
  pkgs = import (fetchTarball "https://github.com/NixOS/nixpkgs/archive/d86a4619b7e80bddb6c01bc01a954f368c56d1df.tar.gz") {};
in rec {
  docker = pkgs.dockerTools.buildImage {
    name = "invidious";
    contents = [ pkgs.busybox pkgs.bash pkgs.invidious ];
    config = {
      Cmd = [ "/bin/bash" ];
      Env = [];
      Volumes = {};
    };
  };
}
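For context, the image is built with nix-build; a minimal invocation sketch (the attribute name docker comes from the expression above) looks like this:
nix-build invidious.nix -A docker
# ./result is a symlink into the Nix store pointing at the generated image archive,
# which dockerTools.buildImage emits as a gzip-compressed tarball.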
If I load the image with docker load < result, Docker loads it correctly:
docker load < result
14508d34fd29: Loading layer [==================================================>] 156.6MB/156.6MB
Loaded image: invidious:2nrcdxgz46isccfgyzdcbirs0vvqhp55
However, if I attempt the same thing using podman, I get the following error:
podman load < result
Error: payload does not match any of the supported image formats:
* oci: initializing source oci:/var/tmp/podman3824611648:: open /var/tmp/podman3824611648/index.json: not a directory
* oci-archive: loading index: open /var/tmp/oci1927542201/index.json: no such file or directory
* docker-archive: loading tar component manifest.json: archive/tar: invalid tar header
* dir: open /var/tmp/podman3824611648/manifest.json: not a directory
If I inspect the result, it does appear to have the correct format for an OCI container:
tar tvfz result
dr-xr-xr-x root/root 0 1979-12-31 19:00 ./
-r--r--r-- root/root 391 1979-12-31 19:00 027302622543ef251be6d3f2d616f98c73399d8cd074b0d1497e5a7da5e6c882.json
dr-xr-xr-x root/root 0 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/
-r--r--r-- root/root 3 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/VERSION
-r--r--r-- root/root 353 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/json
-r--r--r-- root/root 156579840 1979-12-31 19:00 669db3729b40e36a9153569b747788611e547f0b50a9f7d77107a04c6ddd887e/layer.tar
-r--r--r-- root/root 280 1979-12-31 19:00 manifest.json
-r--r--r-- root/root 128 1979-12-31 19:00 repositories
How do I get nix-build to create compliant containers that podman can read?
nix-build version: 2.10.3
podman version: 4.2.0

It turns out the version of podman I'm running can't read gzipped tar archives. The following works:
zcat result | podman load
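Equivalently, the archive can be decompressed to a file first and loaded with podman's --input flag (a small sketch):
gunzip -c result > invidious.tar
podman load -i invidious.tar   # -i / --input reads the archive from a file instead of stdin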

Related

OpenShift upgrade error 4.11.x -> 4.12.2 Marking Degraded due to: unexpected on-disk state validating against rendered-worker

I'm administering a RHEL OpenShift cluster and upgrading from 4.10.x -> 4.11.x -> 4.12.2.
There are 3 masters and 7 worker nodes.
All 3 masters updated; 3 of the 8 workers updated.
The upgrade is now stuck on worker0 with:
oc logs machine-config-daemon-4bs9x -n openshift-machine-config-operator
< snip >
I0216 21:00:08.555947 3136 daemon.go:1255] Current config: rendered-worker-8ebd95b2c00a22992daf1248ebc5640f
I0216 21:00:08.555986 3136 daemon.go:1256] Desired config: rendered-worker-263c6ea5fafb6f1da35a31749a1180d7
I0216 21:00:08.555992 3136 daemon.go:1258] state: Degraded
I0216 21:00:08.566365 3136 update.go:2089] Running: rpm-ostree cleanup -r
Deployments unchanged.
I0216 21:00:08.647332 3136 update.go:2104] Disk currentConfig rendered-worker-263c6ea5fafb6f1da35a31749a1180d7 overrides node's currentConfig annotation rendered-worker-8ebd95b2c00a22992daf1248ebc5640f
I0216 21:00:08.651201 3136 daemon.go:1564] Validating against pending config rendered-worker-263c6ea5fafb6f1da35a31749a1180d7
E0216 21:00:10.291740 3136 writer.go:200] Marking Degraded due to: unexpected on-disk state validating against rendered-worker-263c6ea5fafb6f1da35a31749a1180d7: expected target osImageURL "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee", have "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17" ("b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f")
I've had this problem before and followed the Red Hat solution of running the following command, but this is now failing.
oc debug node/worker0.xx.com
sh-4.4# chroot /host
sh-4.4# rpm-ostree status
State: idle
Deployments:
* db83d20cf09a263777fcca78594b16da00af8acc245d29cc2a1344abc3f0dac2
Version: 412.86.202301311551-0 (2023-01-31T15:54:05Z)
sh-4.4#
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee"
I0216 21:02:54.449270 3962714 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-821872843 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916d3c75fb02ee
I0216 21:03:48.349962 3962714 rpm-ostree.go:209] Previous pivot: quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17
I0216 21:03:49.926169 3962714 rpm-ostree.go:246] No com.coreos.ostree-commit label found in metadata! Inspecting...
I0216 21:03:49.926234 3962714 rpm-ostree.go:412] Running captured: ostree refs --repo /run/mco-machine-os-content/os-content-821872843/srv/repo
error: error running ostree refs --repo /run/mco-machine-os-content/os-content-821872843/srv/repo: exit status 1
error: opening repo: opendir(/run/mco-machine-os-content/os-content-821872843/srv/repo): No such file or directory
sh-4.4#
After a reboot and retry, I'm now getting:
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee"
I0217 19:10:06.928154 1443914 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-903744214 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee
error: "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee" is not a valid image reference: invalid checksum digest length
W0217 19:10:07.176459 1443914 run.go:45] nice failed: running nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-903744214 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee failed: error: "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a8669163c75fb02ee" is not a valid image reference: invalid checksum digest length
: exit status 1; retrying...
^C
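As a side note on the "invalid checksum digest length" error: a sha256 digest must be exactly 64 hex characters, so a quick sanity check of the digest from the failing command is (a sketch, assuming a POSIX shell):
# Prints the number of characters in the digest; anything other than 64 means
# the reference was truncated or mistyped.
echo -n "f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee" | wc -c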
I tried this:
/run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:f454144d2c32aa6fd99b8c68082f59554751282865dce6a866916375fb02ee"
expecting this result (from a previous upgrade problem):
sh-4.4# chroot /host
sh-4.4# /run/bin/machine-config-daemon pivot "quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17"
I0208 21:50:00.408235 2962835 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-machine-os-content/os-content-3432684387 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17
I0208 21:50:29.727695 2962835 rpm-ostree.go:353] Running captured: rpm-ostree status --json
I0208 21:50:29.780350 2962835 rpm-ostree.go:261] Previous pivot: quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:7c252d64354d207cd7fb2a6e2404e611a29bf214f63a97345dee1846055c15d8
I0208 21:50:31.456928 2962835 rpm-ostree.go:293] Pivoting to: 411.86.202301242231-0 (b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f)
I0208 21:50:31.456966 2962835 rpm-ostree.go:325] Executing rebase from repo path /run/mco-machine-os-content/os-content-3432684387/srv/repo with customImageURL pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17 and checksum b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f
I0208 21:50:31.457048 2962835 update.go:1972] Running: rpm-ostree rebase --experimental /run/mco-machine-os-content/os-content-3432684387/srv/repo:b5390d80a8b7f90b0b64f9db3e92848591c967612740716c656c6e88696e0c3f --custom-origin-url pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev#sha256:73b311468554ffe8bdd0dd51df7dafd7a791a16c3147374cc7b28f0d3d7fcc17 --custom-origin-description Managed by machine-config-operator
0 metadata, 0 content objects imported; 0 bytes content written
Staging deployment... done
Upgraded:
NetworkManager 1:1.30.0-16.el8_4 -> 1:1.36.0-12.el8_6
< snip>
zlib 1.2.11-18.el8_4 -> 1.2.11-19.el8_6
Removed:
ModemManager-glib-1.10.8-2.el8.x86_64
libmbim-1.20.2-1.el8.x86_64
libqmi-1.24.0-1.el8.x86_64
openvswitch2.16-2.16.0-108.el8fdp.x86_64
redhat-release-coreos-410.84-2.el8.x86_64
Added:
WALinuxAgent-udev-2.3.0.2-2.el8_6.3.noarch
glibc-gconv-extra-2.28-189.5.el8_6.x86_64
libbpf-0.4.0-3.el8.x86_64
openvswitch2.17-2.17.0-67.el8fdp.x86_64
redhat-release-8.6-0.1.el8.x86_64
redhat-release-eula-8.6-0.1.el8.x86_64
shadow-utils-subid-2:4.6-16.el8.x86_64
Run "systemctl reboot" to start a reboot
sh-4.4# systemctl reboot

Linux capabilities for container to update file atime programmatically

I have a container running in non-privileged mode. I'd like to update a file's atime from Python code, but I found I could not do that due to a permission issue, even though I can write to that file.
I tried adding Linux capabilities to the container, but even with SYS_ADMIN it still does not work.
Does anyone happen to know what capabilities to add, or what I missed?
Thank you!
bash-5.1$ id
uid=1000(contest) gid=1000(contest) groups=1000(contest)
bash-5.1$ ls -l
total 250
-rwxrwxrwx 1 root contest 0 Oct 27 07:16 anotherfile
-rwxrwxrwx 1 root contest 254823 Oct 27 07:37 outfile
-rwxrwxrwx 1 root contest 0 Oct 24 03:52 test
-rwxrwxrwx 1 root contest 364 Oct 27 07:16 test.py
-rwxrwxrwx 1 root contest 18 Oct 24 05:25 testfile
bash-5.1$ python3 test.py
1666854988.190472
1666851388.190472
Traceback (most recent call last):
File "/mnt/azurefile/test.py", line 19, in <module>
os.utime(myfile, (atime - 3600.0, mtime))
PermissionError: [Errno 1] Operation not permitted
bash-5.1$ capsh --print
Current: =
Bounding set =cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_sys_admin,cap_mknod,cap_audit_write,cap_setfcap
Ambient set =
Current IAB: !cap_dac_read_search,!cap_linux_immutable,!cap_net_broadcast,!cap_net_admin,!cap_ipc_lock,!cap_ipc_owner,!cap_sys_module,!cap_sys_rawio,!cap_sys_ptrace,!cap_sys_pacct,!cap_sys_boot,!cap_sys_nice,!cap_sys_resource,!cap_sys_time,!cap_sys_tty_config,!cap_lease,!cap_audit_control,!cap_mac_override,!cap_mac_admin,!cap_syslog,!cap_wake_alarm,!cap_block_suspend,!cap_audit_read
Securebits: 00/0x0/1'b0 (no-new-privs=0)
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
secure-no-ambient-raise: no (unlocked)
uid=1000(contest) euid=1000(contest)
gid=1000(contest)
groups=1000(contest)
Guessed mode: HYBRID (4)
My Python code to update atime:
from datetime import datetime
import os
import time

myfile = "anotherfile"
current_time = time.time()

# Read the file's current modification and access times (both floats, seconds
# since the epoch) and shift them back by one hour.
stat = os.stat(myfile)
mtime = stat.st_mtime
atime = stat.st_atime
print(mtime)
mtime = mtime - 3600.0
print(mtime)
# This is the call that fails with PermissionError: [Errno 1] Operation not permitted.
os.utime(myfile, (atime - 3600.0, mtime))
Pod YAML:
---
kind: Pod
apiVersion: v1
metadata:
  name: nginx-azurefile
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  nodeSelector:
    "kubernetes.io/os": linux
  containers:
    - image: acheng.azurecr.io/capsh
      name: nginx-azurefile
      securityContext:
        capabilities:
          add: ["CHOWN","SYS_ADMIN","SYS_RESOURCES"]
      command:
        - "/bin/bash"
        - "-c"
        - set -euo pipefail; while true; do echo $(date) >> /mnt/azurefile/outfile; sleep 10; done
      volumeMounts:
        - name: persistent-storage
          mountPath: "/mnt/azurefile"
  imagePullSecrets:
    - name: acr-secret
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: pvc-azurefile
I tried adding the SYS_ADMIN capability, but it didn't work.
If the container runs in privileged mode, the code is able to update the file access time as expected.
Answering my own question here.
After searching around, I found that Kubernetes does not support adding capabilities for non-root users: the capabilities added in the container spec apply to the root user only and have no effect for non-root users.
See this GitHub issue for details: https://github.com/kubernetes/kubernetes/issues/56374
A workaround is to add the capability directly to the executable file using the setcap command (from libcap). The capability needed here is CAP_FOWNER.
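A sketch of that workaround (the interpreter path is an assumption; adjust it to wherever Python lives in your image, and point setcap at the real binary rather than a symlink):
# Run as root at image build time, e.g. from a Dockerfile RUN step:
setcap cap_fowner+ep /usr/local/bin/python3.9   # grant CAP_FOWNER to the interpreter
getcap /usr/local/bin/python3.9                 # verify the capability is now attached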

Rotate custom logs on AMI Linux 2

I need to rotate my Resque logs on AWS Elastic Beanstalk on AMI Linux 2 with Ruby. My Puma and Nginx logs rotate properly. I've added the following config below, but the logs are not getting rotated.
.ebextensions/03_publish-logs.config
files:
  "/opt/elasticbeanstalk/tasks/publishlogs.d/resque.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/resque/rotated/*
.ebextensions/04_rotate-logs.config
files:
  "/etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.resque.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/resque/* {
        su root root
        size 10M
        rotate 5
        missingok
        compress
        notifempty
        copytruncate
        dateext
        dateformat %s
        olddir /var/log/resque/rotated
      }
I'm following this documentation: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.logging.html#health-logs-logrotate
You forgot to add the cron job that actually invokes logrotate.
Add this file via .ebextensions:
files:
  "/etc/cron.hourly/cron.logrotate.elasticbeanstalk.resque.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      test -x /usr/sbin/logrotate || exit 0
      /usr/sbin/logrotate /etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.resque.conf
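If it still doesn't rotate, the config can be dry-run by hand to see what logrotate would do (a sketch; also make sure the olddir /var/log/resque/rotated exists, since logrotate refuses to rotate into a missing directory):
mkdir -p /var/log/resque/rotated   # olddir must exist
/usr/sbin/logrotate -d /etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.resque.conf   # -d = debug/dry run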

OpenShift V3 and incremental builds

I have some issues using incremental builds with the ruby-22-centos7 image.
I added the following save-artifacts script to the .sti/bin directory:
#!/bin/sh -e
pushd ${HOME} >/dev/null
if [ -d ./bundle/ruby ]; then
  tar cf - bundle/ruby
fi
popd >/dev/null
I get this error during the build:
I0330 13:53:05.022524 1 sti.go:213] Using assemble from image:///usr/libexec/s2i
I0330 13:53:05.022544 1 sti.go:213] Using run from image:///usr/libexec/s2i
I0330 13:53:05.022551 1 sti.go:213] Using save-artifacts from upload/src/.sti/bin
I0330 13:53:05.024552 1 sti.go:142] Existing image for tag 172.30.22.77:5000/blog/blog:latest detected for incremental build
I0330 13:53:05.024570 1 sti.go:147] Performing source build from file:///tmp/s2i-build462497527/upload/src
I0330 13:53:05.024654 1 sti.go:350] Saving build artifacts from image 172.30.22.77:5000/blog/blog:latest to path /tmp/s2i-build462497527/upload/artifacts
I0330 13:53:05.026788 1 docker.go:374] Both scripts and untarred source will be placed in '/tmp'
I0330 13:53:05.026820 1 docker.go:510] Creating container using config: {Hostname: Domainname: User: Memory:0 MemorySwap:0 CPUShares:0 CPUSet: AttachStdin:false AttachStdout:true AttachStderr:false PortSpecs:[] ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[] Cmd:[/tmp/scripts/save-artifacts] DNS:[] Image:172.30.22.77:5000/blog/blog:latest Volumes:map[] VolumeDriver: VolumesFrom: WorkingDir: MacAddress: Entrypoint:[] NetworkDisabled:false SecurityOpts:[] OnBuild:[] Mounts:[] Labels:map[]}
I0330 13:53:05.685226 1 docker.go:524] Attaching to container
I0330 13:53:05.686542 1 docker.go:530] Starting container
E0330 13:53:10.836202 1 tar.go:207] Error reading next tar header: io: read/write on closed pipe
W0330 13:53:10.859154 1 sti.go:150] Clean build will be performed because of error saving previous build artifacts
I0330 13:53:10.859172 1 sti.go:152] ERROR: timeout waiting for tar stream
Any help would be greatly appreciated!
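One way to see what the builder does when it saves artifacts is to run the script by hand against the previous image, roughly the way the log above shows s2i doing it (a sketch; the image tag and /tmp/scripts path come from the log, the bind mount is an assumption):
docker run --rm -v "$(pwd)/.sti/bin:/tmp/scripts" 172.30.22.77:5000/blog/blog:latest \
  /tmp/scripts/save-artifacts | tar tvf -
# If this hangs or prints anything other than a tar listing, the "timeout waiting
# for tar stream" above is coming from the save-artifacts script itself.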

Unable to mount volumes for pod

EDITED:
I have an OpenShift cluster with one master and two nodes. I've installed NFS on the master and the NFS client on the nodes.
I've followed the wordpress example with NFS: https://github.com/openshift/origin/tree/master/examples/wordpress
I did the following on my master, logged in via oc login -u system:admin:
mkdir /home/data/pv0001
mkdir /home/data/pv0002
chown -R nfsnobody:nfsnobody /home/data
chmod -R 777 /home/data/
# Add to /etc/exports
/home/data/pv0001 *(rw,sync,no_root_squash)
/home/data/pv0002 *(rw,sync,no_root_squash)
# Enable the new exports without bouncing the NFS service
exportfs -a
So exportfs shows:
/home/data/pv0001
<world>
/home/data/pv0002
<world>
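The exports can also be verified from a node with showmount (a sketch; masterhostname stands for the master's hostname, as in the mount command further down):
showmount -e masterhostname   # should list /home/data/pv0001 and /home/data/pv0002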
$ setsebool -P virt_use_nfs 1
# Create the persistent volumes for NFS.
# I did not change anything in the yaml-files
$ oc create -f examples/wordpress/nfs/pv-1.yaml
$ oc create -f examples/wordpress/nfs/pv-2.yaml
$ oc get pv
NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON
pv0001 <none> 1073741824 RWO,RWX Available
pv0002 <none> 5368709120 RWO Available
This is also what I get.
Then I go to my node:
oc login
test-admin
And I create a wordpress project:
oc new-project wordpress
# Create claims for storage in my project (same namespace).
# The claims in this example carefully match the volumes created above.
$ oc create -f examples/wordpress/pvc-wp.yaml
$ oc create -f examples/wordpress/pvc-mysql.yaml
$ oc get pvc
NAME LABELS STATUS VOLUME
claim-mysql map[] Bound pv0002
claim-wp map[] Bound pv0001
This looks exactly the same for me.
Then I launch the MySQL and WordPress pods and services:
oc create -f examples/wordpress/pod-mysql.yaml
oc create -f examples/wordpress/service-mysql.yaml
oc create -f examples/wordpress/pod-wordpress.yaml
oc create -f examples/wordpress/service-wp.yaml
oc get svc
NAME LABELS SELECTOR IP(S) PORT(S)
mysql name=mysql name=mysql 172.30.115.137 3306/TCP
wpfrontend name=wpfrontend name=wordpress 172.30.170.55 5055/TCP
So actually everything seemed to work! But when I ask for my pod status, I get the following:
[root@ip-10-0-0-104 pv0002]# oc get pod
NAME READY STATUS RESTARTS AGE
mysql 0/1 Image: openshift/mysql-55-centos7 is ready, container is creating 0 6h
wordpress 0/1 Image: wordpress is not ready on the node 0 6h
The pods are in a pending state, and in the web console they're giving the following errors:
12:12:51 PM mysql Pod failedMount Unable to mount volumes for pod "mysql_wordpress": exit status 32 (607 times in the last hour, 41 minutes)
12:12:51 PM mysql Pod failedSync Error syncing pod, skipping: exit status 32 (607 times in the last hour, 41 minutes)
12:12:48 PM wordpress Pod failedMount Unable to mount volumes for pod "wordpress_wordpress": exit status 32 (604 times in the last hour, 40 minutes)
12:12:48 PM wordpress Pod failedSync Error syncing pod, skipping: exit status 32 (604 times in the last hour, 40 minutes)
Unable to mount, plus a timeout. But when I go to my node and do the following (/test is a directory I created on the node):
mount -t nfs -v masterhostname:/home/data/pv0002 /test
If I place a file in /test on my node, it appears in /home/data/pv0002 on my master, so NFS itself seems to work.
Why is OpenShift unable to mount the volume?
I've been stuck on this for a while.
LOGS:
Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.795267904Z" level=info msg="GET /containers/json"
Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.832179 1148 mount_linux.go:103] Mount failed: exit status 32
Oct 21 10:44:52 ip-10-0-0-129 origin-node: Mounting arguments: localhost:/home/data/pv0002 /var/lib/origin/openshift.local.volumes/pods/2bf19fe9-77ce-11e5-9122-02463424c049/volumes/kubernetes.io~nfs/pv0002 nfs []
Oct 21 10:44:52 ip-10-0-0-129 origin-node: Output: mount.nfs: access denied by server while mounting localhost:/home/data/pv0002
Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.832279 1148 kubelet.go:1206] Unable to mount volumes for pod "mysql_wordpress": exit status 32; skipping pod
Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.832794476Z" level=info msg="GET /containers/json?all=1"
Oct 21 10:44:52 ip-10-0-0-129 docker: time="2015-10-21T10:44:52.835916304Z" level=info msg="GET /images/openshift/mysql-55-centos7/json"
Oct 21 10:44:52 ip-10-0-0-129 origin-node: E1021 10:44:52.837085 1148 pod_workers.go:111] Error syncing pod 2bf19fe9-77ce-11e5-9122-02463424c049, skipping: exit status 32
The logs showed:
Oct 21 10:44:52 ip-10-0-0-129 origin-node: Output: mount.nfs: access denied by server while mounting localhost:/home/data/pv0002
So it failed to mount from localhost. To create my persistent volume, I had executed this definition:
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "registry-volume"
  },
  "spec": {
    "capacity": {
      "storage": "20Gi"
    },
    "accessModes": [ "ReadWriteMany" ],
    "nfs": {
      "path": "/home/data/pv0002",
      "server": "localhost"
    }
  }
}
So I was mounting /home/data/pv0002, but that path is not on localhost; it's on my master server (which is ose3-master.example.com). I had created my PV the wrong way. The corrected definition:
{
  "apiVersion": "v1",
  "kind": "PersistentVolume",
  "metadata": {
    "name": "registry-volume"
  },
  "spec": {
    "capacity": {
      "storage": "20Gi"
    },
    "accessModes": [ "ReadWriteMany" ],
    "nfs": {
      "path": "/home/data/pv0002",
      "server": "ose3-master.example.com"
    }
  }
}
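After fixing the definition, the simplest way to apply it is to delete and recreate the PV, then check that it comes back as Available (a sketch; the file name is hypothetical):
oc delete pv registry-volume
oc create -f registry-volume.json
oc get pv registry-volume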
This was also in a training environment. In production, it's recommended to have an NFS server outside of your cluster to mount from.