As previously reported here, two pods can't mount the same disk, even though one of them tries to mount it in read-only mode. According to the Kubernetes documentation, this is supposed to be allowed.
The mounting scheme is:
UniqueCluster/PodA has successfully mounted gdeDisk1 as read-write
UniqueCluster/PodB fails to start when mounting gdeDisk1 as read-only
Node description:
Name: gke-zupcat-cluster-8fd35d81-node-1zr4
Labels: kubernetes.io/hostname=gke-zupcat-cluster-8fd35d81-node-1zr4
CreationTimestamp: Wed, 22 Jul 2015 14:47:56 -0300
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
Ready True Thu, 23 Jul 2015 12:06:18 -0300 Wed, 22 Jul 2015 22:53:34 -0300 kubelet is posting ready status
Addresses: 10.240.17.72,146.148.79.174
Capacity:
cpu: 2
memory: 7679608Ki
pods: 40
Version:
Kernel Version: 3.16.0-0.bpo.4-amd64
OS Image: Debian GNU/Linux 7 (wheezy)
Container Runtime Version: docker://Unknown
Kubelet Version: v1.0.1
Kube-Proxy Version: v1.0.1
PodCIDR: 10.108.0.0/24
ExternalID: 11953122931827361742
Pods: (5 in total)
Namespace Name
default fastrwdiskpod-yu517
kube-system fluentd-cloud-logging-gke-zupcat-cluster-8fd35d81-node-1zr4
kube-system kube-dns-v8-i3h20
kube-system kube-ui-v1-8zdrq
kube-system monitoring-heapster-v5-e1zmi
No events.
Product versions:
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.0", GitCommit:"cd821444dcf3e1e237b5f3579721440624c9c4fa", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.1", GitCommit:"6a5c06e3d1eb27a6310a09270e4a5fb1afa93e74", GitTreeState:"clean"}
docker version:
Docker version 1.7.1, build 786b29d
According to the GCE persistent disk documentation: "if you attach a persistent disk to multiple instances, all instances must attach the persistent disk in read-only mode."
The Kubernetes documentation for GCE PD volumes also explains this limitation: "A feature of PD is that they can be mounted as read-only by multiple consumers simultaneously. This means that you can pre-populate a PD with your dataset and then serve it in parallel from as many pods as you need. Unfortunately, PDs can only be mounted by a single consumer in read-write mode - no simultaneous readers allowed."
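As the quote says, once one consumer has the PD read-write, no other consumer can mount it at all; to share gdeDisk1 across pods, every consumer (including the one that currently has it read-write) would have to mount it read-only. A minimal sketch of such a pod spec, with the pod name, image, mount path and filesystem type assumed purely for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: pod-b-readonly      # name assumed
spec:
  containers:
  - name: app               # container name and image assumed
    image: nginx
    volumeMounts:
    - name: shared-disk
      mountPath: /data      # mount path assumed
      readOnly: true
  volumes:
  - name: shared-disk
    gcePersistentDisk:
      pdName: gdeDisk1      # the disk from the question
      fsType: ext4          # assumed filesystem type
      readOnly: true        # read-only at both the volume and the mount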
First time posting on StackOverflow so please be gentle!
I'm setting up a new RHEL8 server to run Podman. Previously I've done this on a pretty vanilla server, but this one is set up in line with our corporate image, which means a home directory that is mounted over NFS.
When I try a simple podman command such as podman run centos, I get a couple of errors (see below). According to https://github.com/containers/podman/blob/main/rootless.md, Podman non-root is known to have problems with NFS homedirs.
Output from podman run centos (and others):
❯ podman run centos
Resolved "centos" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull quay.io/centos/centos:latest...
Getting image source signatures
Copying blob 7a0437f04f83 done
Error: writing blob: adding layer with blob "sha256:7a0437f04f83f084b7ed68ad9c4a4947e12fc4e1b006b38129bac89114ec3621": Error processing tar file(exit status 1): Error setting up pivot dir: mkdir /home/me/.local/share/containers/storage/overlay/2653d992f4ef2bfd27f94db643815aa567240c37732cae1405ad1c1309ee9859/diff/.pivot_root926823499: permission denied
(No, my username isn't really 'me'.)
Is there a way to use rootless Podman in this setup? I'd prefer to avoid creating a local user account to run things under: this is my dev server, not where the application will actually run, but it will involve me building, running and destroying containers regularly, so I'd rather avoid having to do anything 'clever'.
Output of podman info:
❯ podman info
host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.32-1.module+el8.5.0+13852+150547f7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.32, commit: 4b12bce835c3f8acc006a43620dd955a6a73bae0'
  cpus: 1
  distribution:
    distribution: '"rhel"'
    version: "8.5"
  eventLogger: file
  hostname: servername
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 2000
      size: 1
    uidmap:
    - container_id: 0
      host_id: 10279927
      size: 1
  kernel: 4.18.0-348.12.2.el8_5.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 1881419776
  memTotal: 3918233600
  ociRuntime:
    name: runc
    package: runc-1.0.3-1.module+el8.5.0+13556+7f055e70.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.3
      spec: 1.0.2-dev
      go: go1.16.7
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    path: /run/user/10279927/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.5.0+12582+56d94c81.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 2h 45m 20.28s (Approximately 0.08 days)
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/me/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.8-1.module+el8.5.0+13754+92ec836b.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.8
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/me/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: nfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 0
  runRoot: /run/user/10279927/containers
  volumePath: /home/me/.local/share/containers/storage/volumes
version:
  APIVersion: 3.4.2
  Built: 1642068949
  BuiltTime: Thu Jan 13 10:15:49 2022
  GitCommit: ""
  GoVersion: go1.16.7
  OsArch: linux/amd64
  Version: 3.4.2
Thank you!
Based on this article, https://www.redhat.com/sysadmin/rootless-podman-nfs, Podman and NFS home directories don't mix well together.
The workaround, described in the article above, is to change the graphroot so that container storage is written to a local, non-NFS location.
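Concretely, that means pointing graphroot at a local filesystem in ~/.config/containers/storage.conf. A minimal sketch, assuming /var/tmp is on local disk (any writable non-NFS path will do; the runroot shown is the one already reported by podman info above):

# ~/.config/containers/storage.conf
[storage]
driver = "overlay"
# graphroot must live on a local (non-NFS) filesystem; this path is an example
graphroot = "/var/tmp/me/containers/storage"
# runroot is already on tmpfs, so it can stay as reported by podman info
runroot = "/run/user/10279927/containers"

If Podman has already been run against the NFS-backed location, podman system reset will wipe the old storage (all containers and images) so that the new paths take effect.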
I have a new Elastic Beanstalk environment that is a migration of an old one running an app on the PHP 5.6 platform on Amazon Linux AMI. The new environment runs PHP 7.3 on Amazon Linux 2. I have worked through all the migration issues and the app runs correctly on the new environment. I have a classic load balancer and I run autoscaling with both the max and min instance settings set to 1.
The problem occurs when I terminate the EC2 instance: autoscaling creates a new instance, but it isn't deploying the application to it.
Does anyone know why this might be, or where I can look to try and debug the issue?
What worked for me was to remove the old .ebextensions config files related to cwlogs and to add the line
awslogs: []
under the
packages:
  yum:
section of my config, as shown below.
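That is, the relevant section of the existing .ebextensions config file ends up as (it doesn't matter which .config file it is, Elastic Beanstalk processes all of them):

packages:
  yum:
    awslogs: []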
Then create a new config file as follows:
files:
  "/tmp/start_aws_cloudwatch_service.sh":
    content: |
      #!/bin/sh
      systemctl start awslogsd
      systemctl status awslogsd
      systemctl enable awslogsd.service
      exit $?
    mode: "000755"
    owner: root
    group: root

commands:
  start_aws_cloudwatch_service:
    cwd: /tmp
    command: bash /tmp/start_aws_cloudwatch_service.sh
After this I could see that the service was up and running
$ systemctl status awslogsd
● awslogsd.service - awslogs daemon
Loaded: loaded (/usr/lib/systemd/system/awslogsd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-10-14 14:08:19 UTC; 34min ago
Main PID: 4029 (aws)
CGroup: /system.slice/awslogsd.service
└─4029 /usr/bin/python2 -s /usr/bin/aws logs push --config-file /etc/awslogs/awslogs.conf --additional-configs-dir /etc/awslogs/c...
Oct 14 14:08:19 ip-xxx-xxx-30-7.eu-west-1.compute.internal systemd[1]: Started awslogs daemon.
Oct 14 14:08:19 ip-xxx-xxx-30-7.eu-west-1.compute.internal systemd[1]: Starting awslogs daemon...
See: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
I'm using OpenShift, and one pod keeps pending because the NFS server cannot be mounted (the NFS server can be mounted manually from the command line, but not from the pod).
I have installed nfs-common, so that's not the root cause. I tried to install nfs-utils, but it failed with this error:
E: Unable to locate package nfs-utils
I also tried libnfs12 and libnfs-utils; they failed the same way as nfs-utils. I also ran apt-get update and apt-get upgrade to work around the package-location problem, but that didn't help.
Here is the YAML file for connecting to the NFS server:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test01
  labels:
    disktype: baas
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /baas
    server: 9.111.140.47
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
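(The pod reaches this volume through a PersistentVolumeClaim with a matching access mode; a simplified sketch of such a claim, with an illustrative name, would be:)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test01          # illustrative name
spec:
  accessModes:
    - ReadWriteMany         # must match the PV above
  resources:
    requests:
      storage: 1Gi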
after using "oc describe pod/mypod" for the pending Pod, below is the feedback:
Warning FailedMount 14s kubelet, localhost MountVolume.SetUp failed for volume "pv-test01" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/origin/cluster-up/root/openshift.local.clusterup/openshift.local.volumes/pods/267db6f2-d875-11e9-80ba-005056bc3ce0/volumes/kubernetes.io~nfs/pv-test01 --scope -- mount -t nfs 9.111.140.47:/baas /var/lib/origin/cluster-up/root/openshift.local.clusterup/openshift.local.volumes/pods/267db6f2-d875-11e9-80ba-005056bc3ce0/volumes/kubernetes.io~nfs/pv-test01
Output: Running scope as unit run-28094.scope.
mount: wrong fs type, bad option, bad superblock on 9.111.140.47:/baas,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
So how can I mount the NFS server from the pod? Should I keep trying to install nfs-utils? If yes, how can I install it?
I am testing Kubernetes redundancy features with a testbed made of one master and three minions.
Case: I am running a service with 3 replicas on minion1 and minion2, with minion3 stopped.
[root@centos-master ajn]# kubectl get nodes
NAME STATUS AGE
centos-minion3 NotReady 14d
centos-minion1 Ready 14d
centos-minion2 Ready 14d
[root@centos-master ajn]# kubectl describe pods $MYPODS | grep Node:
Node: centos-minion2/192.168.0.107
Node: centos-minion1/192.168.0.155
Node: centos-minion2/192.168.0.107
Test: after starting minion3 and stopping minion2 (on which 2 pods were running):
[root@centos-master ajn]# kubectl get nodes
NAME STATUS AGE
centos-minion3 Ready 15d
centos-minion1 Ready 14d
centos-minion2 NotReady 14d
Result: the service kind doesn't recover from the minion failure, and Kubernetes continues showing the pods on the failed minion.
[root@centos-master ajn]# kubectl describe pods $MYPODS | grep Node:
Node: centos-minion2/192.168.0.107
Node: centos-minion1/192.168.0.155
Node: centos-minion2/192.168.0.107
Expected result (at least in my understanding): the service's pods should have been recreated on the currently available minions 1 and 3.
As far as I understand, the role of the Service kind is to make the deployment "globally" available, so we can refer to it independently of where its pods are running in the cluster.
Am I doing something wrong?
I'm using the following YAML spec:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-www
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
It looks like you're always querying the same pods, the ones referenced in $MYPODS. Pod names are created dynamically by the ReplicationController, so instead of kubectl describe pods $MYPODS, try this:
kubectl get pods -l app=nginx -o wide
This will always give you the currently scheduled pods for your app.
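The Service itself never pins pods to nodes; it simply load-balances to whichever pods currently match its selector, wherever the scheduler has placed them. A minimal Service for this ReplicationController would look something like the sketch below (the Service name and port are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: nginx-www           # name assumed
spec:
  selector:
    app: nginx              # matches the pod label from the RC template
  ports:
  - port: 80
    targetPort: 80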
I used docker-machine with Azure as the driver to spin up a VM. I then deployed a simple nginx test container onto the host. My issue is that when I try to set an endpoint, I get the following error:
azure vm endpoint create huldra 80 32769
info: Executing command vm endpoint create
+ Getting virtual machines
+ Reading network configuration
+ Updating network configuration
error: Parameter 'ConsoleScreenshotBlobUri' should not be set.
info: Error information has been recorded to /Users/ryan/.azure/azure.err
error: vm endpoint create command failed
When I look at the error log, it pretty much repeats what the console said: Parameter 'ConsoleScreenshotBlobUri' should not be set.
Here are my docker and azure environment details:
❯ docker info
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 3
Server Version: 1.10.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 21
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.2.0-18-generic
Operating System: Ubuntu 15.10
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.636 GiB
Name: huldra
ID: PHUY:JRE3:DOJO:NNWO:JBBH:42H2:56ZO:HVSB:MZDE:QLOI:GO6F:SCC5
WARNING: No swap limit support
Labels:
provider=azure
~/Projects/dockerswarm master*
❯ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce51127b2bb8 nginx "nginx -g 'daemon off" 11 minutes ago Up 11 minutes 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp machinenginx
❯ azure --version
0.9.17 (node: 5.8.0)
❯ azure vm list
info: Executing command vm list
+ Getting virtual machines
data: Name Status Location DNS Name IP Address
data: ------ --------- -------- ------------------- -------------
data: huldra ReadyRole West US huldra.cloudapp.net x.x.x.x
info: vm list command OK