Rootless Podman: use NFS mount

I found this question first, which is similar: How to mount an NFS share with rootless Podman?
Long story short, I am having trouble with a rootless Podman NFS volume. I am creating the volume as myuser:
podman volume create --opt type=nfs4 --opt o=rw --opt device=my.server.ip.address:/data/nfs_data podman-nfs
but when trying to spawn a container using the volume I get "mount.nfs: operation not permitted":
podman run -d -v podman-nfs:/tmp/data --name myapp myappimage:latest
I know that the NFS mount itself works because I managed to get it working manually. I used the user option in fstab to allow myuser to mount it by hand, and I even managed to mount it manually at the path generated by Podman (/home/myuser/.local/share/containers/storage/volumes/podman-nfs/_data).
The fstab entry looks like this:
my.server.ip.address:/data/nfs_data /home/myuser/.local/share/containers/storage/volumes/podman-nfs/_data nfs rw,sync,user,noauto,_netdev 0 0
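For reference, with the user and noauto options in that entry, the manual mount and unmount as myuser are roughly just:
mount /home/myuser/.local/share/containers/storage/volumes/podman-nfs/_data
umount /home/myuser/.local/share/containers/storage/volumes/podman-nfs/_data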
I could fall back to a regular NFS mount on the filesystem and have Podman bind-mount it like a plain directory, but I like the idea of having the NFS mount managed by Podman so it can gracefully release it when the container stops.
ADDITIONAL INFO: if I run podman run with the --log-level=debug flag, I get 'mount /bin/mount [...] failed with exit status 32'.
As a side note, I find it very weird that you can create volumes as a rootless Podman user but cannot mount them; it feels like I'm missing something obvious. I found this how-to, which does it as root: https://www.server-world.info/en/note?os=Rocky_Linux_8&p=podman&f=6
Thank you for your time.

Me again.
I've figured it out. My understanding is that rootless Podman cannot mount an NFS volume when starting a container, even if the fstab entry has the user option for that mount.
Instead, what I do is mount the NFS share as root during my Ansible playbook onto a mountpoint (for this example, /app/myapp/myapp-nfs) and then use a bind mount when starting the container.
First, make sure the NFS share is properly mounted on the filesystem:
# src must be accessible by nfs
- name: Make sure nfs is mounted
  ansible.posix.mount:
    src: nfs.ip.address.here:/shared/nfsdir
    path: /app/myapp/myapp-nfs
    opts: rw,sync,hard,_netdev
    boot: yes
    state: mounted
    fstype: nfs
  become: yes
Second, when starting the container, use the mounted NFS path as a bind mount:
# source is the NFS mountpoint created by the previous task
- name: Make sure my nfs-enabled-elite-app is started
  containers.podman.podman_container:
    name: nfs-enabled-elite-app
    image: elite-app:latest
    state: started
    mounts:
      - type=bind,source=/app/myapp/myapp-nfs,destination=/in/container/mount/point
So far, this works.
Note that you can also do all of this with a plain podman run command; just add the mount (NOT as a volume).
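For example, roughly the equivalent of the Ansible task above (same names and paths):
podman run -d --name nfs-enabled-elite-app --mount type=bind,source=/app/myapp/myapp-nfs,destination=/in/container/mount/point elite-app:latest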
I really hope this helps people. I remain available in case you have any questions; just DM me.

Related

Container Optimized OS Examples

I've followed all the documentation here: https://cloud.google.com/container-optimized-os/docs/ to try to upgrade my existing configuration, which used the now-deprecated container-vm images, to a new configuration using Container-Optimized OS. But nothing works! I can't get the Docker container to bind to port 80 (i.e. -p 80:80), and my Docker container also can't seem to write to /var/run/nginx.pid (yes, I'm using nginx in my Docker container). I followed the instructions to disable AppArmor and I've also tried creating an AppArmor profile for nginx. Nothing works! Are there any examples out there using Container-Optimized OS that don't just use the busybox image and print "Hello World" or sleep? How about an example that opens a port and writes to the file system?
I just installed Apache Guacamole on Container Optimized OS and it works like a charm. There are some constraints in place for security.
The root filesystem ("/") is mounted as read-only, with some portions of it re-mounted as writable, as follows:
/tmp, /run, /media, /mnt/disks and /var/lib/cloud are all mounted using tmpfs and, while they are writable, their contents are not preserved between reboots.
The directories /mnt/stateful_partition, /var and /home are mounted from a stateful disk partition, which means these locations can be used to store data that persists across reboots. For example, Docker's working directory /var/lib/docker is stateful across reboots.
Among the writable locations, only /var/lib/docker and /var/lib/cloud are mounted as "executable" (i.e. without the noexec mount flag).
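If you want to double-check those mount options on a running instance, standard tools like mount and findmnt will show them; for example (exact output varies with the image version):
mount | grep -E ' /(tmp|var|home) '
findmnt -o TARGET,FSTYPE,OPTIONS /var/lib/docker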
If you need to accept HTTP (port 80) connections from any source IP address, run the following command on your Container-Optimized OS instance:
sudo iptables -w -A INPUT -p tcp --dport 80 -j ACCEPT
In general, it is recommended you configure the host firewall as a systemd service through cloud-init.
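A minimal cloud-init sketch of that pattern (the unit name config-firewall.service is only an example; the iptables rule is the same port-80 rule as above):
#cloud-config
write_files:
- path: /etc/systemd/system/config-firewall.service
  permissions: 0644
  owner: root
  content: |
    [Unit]
    Description=Configures host firewall

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/sbin/iptables -w -A INPUT -p tcp --dport 80 -j ACCEPT

runcmd:
- systemctl daemon-reload
- systemctl start config-firewall.service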
PS: Container-Optimized OS is capable of auto updates. This mechanism can be used to update a fleet of Compute Engine instances.
I can't get the Docker container to bind to port 80 (ie. -p 80:80) and also my Docker container can't seem to write to /var/run/nginx.pid (yes I'm using nginx in my Docker container).
I think you might be hitting some GCE firewall problem. The best way would be to verify/debug it step by step:
Try running a stupidly simple nginx container:
"-d" asks Docker to run it in daemon mode, "-p 80:80" maps the HTTP port, and "--name nginx-hello" names to container to nginx-hello.
docker run -d --name nginx-hello -p 80:80 nginx
(optional) Verify that the container is running correctly; you should see the "nginx-hello" container listed.
docker ps
Verify that nginx is working locally; you should see a good HTTP response.
curl localhost:80
If all of the above steps check out, then you are likely facing a GCE firewall problem:
How do I enable http traffic for GCE instance templates?
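If the firewall does turn out to be the culprit, a GCE firewall rule along these lines opens port 80 (the rule name allow-http and the http-server target tag are just examples; your instance needs to carry that tag):
gcloud compute firewall-rules create allow-http --allow=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server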

Why is MariaDB data persistent in my Docker container? I don't have any volumes

I have a Docker container with MariaDB installed. I am not using any volumes.
[vagrant@devops ~]$ sudo docker volume ls
DRIVER              VOLUME NAME
[vagrant@devops ~]$
Now something strange is happening: when I do sudo docker stop and sudo docker start, the MariaDB data is still there. I expected this data to be lost.
By the way, when I edit some file, for example /etc/hosts, I do see the expected behavior: changes to this file are lost after a restart.
How is it possible that MariaDB data is persistent without volumes? This shouldn't happen right?
docker stop does not remove a container, and docker start does not create one.
docker run does create a new container from an image.
docker start starts a container that already exists but was stopped before (call it pause/resume if you like).
Thus, for start/stop no volumes are required to keep the state persistent.
If, however, you do docker stop <name> && docker rm <name> and then docker start <name>, you get an error that the container no longer exists; at that point you need docker run <args> yourimage again.
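A quick illustration of that lifecycle (the image name, container name and password are just examples):
docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=secret mariadb
docker stop mydb    # the container is stopped, but it still exists together with its data
docker start mydb   # the same container resumes, so data written earlier is still there
docker rm -f mydb   # only now is the container discarded
docker run -d --name mydb -e MYSQL_ROOT_PASSWORD=secret mariadb   # a brand-new container, starting from a fresh database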

Docker db container running. Another process with pid <id> is using unix socket file

I'm trying to run a Docker MySQL container with an initialized DB according to the instructions provided in this answer: https://stackoverflow.com/a/29150538/6086816. After the first run it works OK, but on the second run, after the script tries to execute /usr/sbin/mysqld, I get this error:
db_1 | 2016-03-19T14:50:14.819377Z 0 [ERROR] Another process with pid 10 is using unix socket file.
db_1 | 2016-03-19T14:50:14.819498Z 0 [ERROR] Unable to setup unix socket lock file.
...
mdir_db_1 exited with code 1
What can be the reason for this?
I was facing the same issue. The following are the steps I took to resolve it:
First, stop the Docker service: sudo service docker stop
Now, get into the Docker directory on your Linux system, at the following path:
/var/lib/docker
Within the docker directory, go into the volumes folder. This folder contains the volumes of all your containers (the data of each container):
cd /var/lib/docker/volumes
Inside volumes, run sudo ls and you will find multiple folders with hash names. These folders are the volumes of your containers; each folder is named after its hash.
(To get the hash of your container's volume, inspect the container:
Run docker inspect <your container ID>.
You will get a JSON document; it is the config of your Docker container.
Search for the Mounts key within this JSON. Under Mounts, "Name" is your volume name (the hash) and "Source" is the path where the volume is located on the host.)
Once you have the name of your volume, go into that volume's folder, and inside it you will find a _data folder. Go into this folder.
Finally, inside _data, run sudo ls and you will find a file named mysql.sock.lock. Remove it with rm -f mysql.sock.lock.
Now restart the Docker service and then start your Docker container. It will start working.
Note: use sudo for each command while you are inside the /var/lib/docker directory.
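A condensed version of those steps (the container ID and volume hash are placeholders for whatever docker inspect reports on your system):
docker inspect -f '{{ json .Mounts }}' <container_id>    # shows the volume Name (hash) and its Source path
sudo ls /var/lib/docker/volumes/<volume_hash>/_data      # look for mysql.sock.lock
sudo rm -f /var/lib/docker/volumes/<volume_hash>/_data/mysql.sock.lock
sudo service docker restart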
You should make sure the socket file has been deleted before you start MySQL. Check the my.cnf file (/etc/mysql/my.cnf) to get the path of the socket file;
you will find something like socket = /var/run/mysqld/mysqld.sock. Delete the .sock.lock file as well.
This is a glitch with Docker.
Execute the following commands:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
This will stop all containers and remove them.
After this it should work just fine.
Just faced the same problem.
After much research, here is a summary of my solution:
Find the host location of the Docker files:
$ docker inspect <container_name> --> Mounts.Source section
In my case, it was /var/snap/docker/common/.../_data
As root, you can ls -l that directory and see the files that are preventing your container from starting, the socket mysql.sock and the file mysql.sock.lock
Simply delete them as root ($ sudo rm /var/snap/.../_data/mysql.sock*) and start your docker container.
NOTE: make sure you don't have any other mysql.sock... files besides those two. If you do, don't use the wildcard (*); delete each of them individually.
Hope this helps.
I had the same problem and got rid of it in an easy and mysterious way.
First I noticed that I was unable to start the mysql_container container. Running docker logs mysql_container indicated exactly the same problem as described, repeated a few times.
I wanted to take a look around by running the container in interactive mode with docker start -i mysql_container from one bash window, while running things like
docker exec -it mysql_container cat /etc/mysql/my.cnf in another.
I did that and was very surprised to see that this time the container started successfully. I cannot understand why; I can only guess that starting it in interactive mode together with running subsequent docker exec commands slowed down the init process, and some other process had a bit more time to remove its locks.
Hope that helps somebody.

Docker - Multiple duplicate volume declarations; what happens?

I'm trying to set up a persistent data volume for my MySQL docker container.
I'm using the official MySQL image which has this in the Dockerfile:
VOLUME /var/lib/mysql
If I invoke
-v /var/lib/mysql:/var/lib/mysql
during runtime, does my command take precedence, or do I have to remove the VOLUME declaration from the Dockerfile?
Take a look at https://docs.docker.com/reference/builder/#volume - the VOLUME instruction declares a mount point so that it can be shared with other containers via --volumes-from; it also tells Docker that the contents of this directory are external to the image. The -v /dir1/:/dir2/ flag, on the other hand, mounts dir1 from the host into the running container at dir2.
In other words, you can use both together and docker will mount the -v properly.
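A rough way to confirm that for yourself (the container name and host path are examples):
docker run -d --name mysql-test -e MYSQL_ROOT_PASSWORD=secret -v /srv/mysql-data:/var/lib/mysql mysql
docker inspect -f '{{ json .Mounts }}' mysql-test   # shows a bind mount from /srv/mysql-data rather than an anonymous volume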

Installation of system tables failed! boot2docker tutum/mysql mount file volume on Mac OS

I have trouble mounting a volume on tutum/mysql container on Mac OS.
I am running boot2docker 1.5
When I run
docker run -v $HOME/mysql-data:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
I get this error:
Installation of system tables failed! Examine the logs in /var/lib/mysql for more information.
Running the above command also creates an empty $HOME/mysql-data/mysql folder.
The tutum/mysql container runs smoothly when no mounting occurs.
I have successfully mounted a folder on the nginx demo container, which means that the boot2docker is setup correctly for mounting volumes.
I would guess that it's just a permissions issue. Either find the uid of the mysql user inside the container and chown the mysql-data dir to that user, or use a data container to hold the volumes.
For more information on data containers see the official docs.
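For the permissions route, a sketch along these lines should do it, assuming the image ships the standard id utility and a mysql user (as an Ubuntu-based image, tutum/mysql should):
MYSQL_UID=$(docker run --rm tutum/mysql id -u mysql)   # uid of the mysql user inside the image (add --entrypoint if the image overrides the command)
sudo chown -R "$MYSQL_UID" "$HOME/mysql-data"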
Also note that, as the Dockerfile declares volumes, mounting takes place whether or not you use the -v argument to docker run; it just happens in a directory on the host controlled by Docker (under /var/lib/docker) instead of a directory chosen by you.
I also had a problem starting a MySQL Docker container with the error "Installation of system tables failed". There were no changes to the Docker image, and there had been no recent update on my machine or to Docker. One thing I was doing differently was using images that could take up 5 GB of memory or more during testing.
After cleaning up dangling images and volumes, I was able to start the MySQL image as usual.
This blog seems to have good instructions and explains all the variations of cleaning up with Docker.
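For reference, the clean-up itself boils down to something like this (the prune subcommands need Docker 1.13 or newer):
docker image prune    # remove dangling images
docker volume prune   # remove volumes not referenced by any container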