Podman: how to remove all exited containers if there are many - linux-containers

On a server running containers with Podman, I just realised there are many containers with "Exited" status and wanted to remove all of them in one go.
How can I do it with Podman?

According to the official documentation there is a specific command for just that purpose:
Remove all stopped containers from local storage:
podman container prune
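If you want to skip the interactive confirmation, or only prune containers that have been stopped for a while, prune also takes a force flag and filters (a sketch, assuming a reasonably recent Podman; the 24h cutoff is just an example):
# skip the confirmation prompt
podman container prune -f
# only prune containers that have been stopped for more than 24 hours
podman container prune -f --filter until=24h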

After searching for some time I found a quick and easy one-liner to clean up my exited containers.
One option is:
podman rm -f $(podman ps -a -f "status=exited" -q)
The second option is:
podman ps -f status=exited --format "{{.ID}}" | xargs podman rm -f
A big thanks to dannotdaniel for the second option. This saved me at least an hour. :)
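A small variation, not from the original answers: if there happen to be no exited containers at all, the pipe above would call podman rm with an empty argument list. GNU xargs can guard against that with -r:
podman ps -f status=exited --format "{{.ID}}" | xargs -r podman rm -f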

Related

Container name changes after system restart

I am starting and stopping a container using a systemd unit file service as follows.
Take the container name as hello.
podman ps shows hello in the output.
Auto-generate a unit file for hello:
podman generate systemd --new --files --name hello
The unit file contains
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon -d --hostname=first containerID
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
When I reboot the system and check
systemctl status container-hello
I get the status as Active: running.
But if I run podman ps -a, I see hello as inactive, as well as another container, say hello2, added as running.
hello2 is associated with the unit file created in step 1 and hello is not.
I have used --hostname as suggested, but I cannot see a container with that name when checking with podman ps or podman ps -a.
From https://docs.podman.io/en/latest/markdown/podman-run.1.html:
Podman generates a UUID for each container, and if a name is not assigned to the container with --name then it will generate a random string name. The name is useful any place you need to identify a container. This works for both background and foreground containers.
So you may want to edit your unit file to contain
ExecStart=/usr/bin/podman run ... --name hello
If that fixes the problem, then since the way you generate the unit should already cover the name, it may be worth filing a bug against Podman.
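To check which name the container actually ended up with after a reboot, listing just names and status is enough (a quick check, not part of the original answer):
podman ps -a --format "{{.Names}} {{.Status}}"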
What worked for me:
I added the --name parameter to the ExecStart line inside the unit file:
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon -d --name=container_name ID
When Podman auto-generates the unit file, it makes sure that once the container is stopped, it is removed, via:
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
I erased this line from the unit file.
Results:
I can start/stop/restart the container now without the container getting removed.
When I restart my system (reboot), the container name remains the same as it was before the reboot (the name given in the --name parameter).
The container auto-restarts with the same name every time.
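For completeness, after editing the generated unit file the usual systemd reload/restart cycle applies (a sketch, using the container-hello.service name from the question):
# pick up the edited unit file
systemctl daemon-reload
# restart the service so the new ExecStart line takes effect
systemctl restart container-hello.service
# keep the container starting automatically after a reboot
systemctl enable container-hello.service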

What is the right way to increase the hard and soft ulimits for a singularity-container image?

The task I want to complete: I need to run a Python package inside of a singularity-container that is asking to open at least some 9704 files. This is the first time I have heard of this, and from searching around it has something to do with the system's ulimit.
What I currently have is the following def file.
I am setting the * hard nofile and * soft nofile entries to 15000. The sed line does edit the conf file, but within the singularity shell my ulimit is still the default 1024.
Bootstrap: docker
From: fedora
%post
dnf -y update
dnf -y install nano pip wget libXcomposite libXcursor libXi libXtst libXrandr alsa-lib mesa-libEGL libXdamage mesa-libGL libXScrnSaver
wget -c https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
/bin/bash Anaconda3-2020.02-Linux-x86_64.sh -bfp /usr/local
conda config --file /.condarc --add channels defaults
conda config --file /.condarc --add channels conda-forge
conda update conda
sed -i '2s/#/\n* hard nofile 15000\n* soft nofile 15000\n\n#/g' /etc/security/limits.conf
bash
%runscript
python /Users/lamsal/count_of_monte_cristo/orthofinder_run/OrthoFinder_source/orthofinder.py -f /Users/lamsal/count_of_monte_cristo/orthofinder_run/concatanated_FAs/
I am following the "official" instructions to change the ulimits for a RHEL-based system from IBM's webpage here: https://www.ibm.com/docs/en/rational-clearcase/9.0.2?topic=servers-increasing-number-file-handles-linux-workstations
Is the sed line not the right way to change ulimits for a singularity image?
Short answer:
Change the value on the host OS.
Long answer:
In this instance, running a singularity container is best thought of as any other binary you're executing in your host OS. It creates its own separate environment, but otherwise it follows the rules and restrictions of the user running it. Here, the ulimit is taken from the host kernel and completely ignores any configs that may exist in the container itself.
Compare the output from the following:
# check the ulimit on the host
ulimit -n
# check the ulimit in the singularity container
singularity exec -e image.sif ulimit -n
# docker only cares about container config settings
docker run --rm fedora:latest ulimit -n
# change your local ulimit
ulimit -n 4096
# verify it has changed
ulimit -n
# singularity has changed
singularity exec -e image.sif ulimit -n
# ... but docker hasn't
docker run --rm fedora:latest ulimit -n
To have a persistent fix, you'll need to modify the setting on your host OS. Assuming you're on MacOS, this answer should take care of that.
If you don't have root privs, or you're only using this intermittently, you can run ulimit before running singularity. Alternatively, you could use a wrapper script to run the image and set it in there.
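A wrapper script can be as small as the following sketch (image.sif is the image name used in the example above; 15000 matches the value from the question):
#!/bin/bash
# raise the soft open-files limit for this shell only;
# without root this can only go up to the current hard limit (ulimit -Hn)
ulimit -n 15000
# run the container with the raised limit in place
exec singularity run image.sif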

How to start a container in cri-o with only specifying the image name?

I am trying to achieve something like
docker run -it <image_name> bash
I want to specify the image to run and do not care about anything else.
crictl requires config files for both a container and a pod for the run command, if I am not mistaken.
[hbaba#ip-XX-XX-XXX misc]$ sudo crictl -r /run/crio/crio.sock run -h
....
USAGE:
crictl run [command options] container-config.[json|yaml] pod-config.[json|yaml]
I am looking for the simplest way of starting a container, possibly with only a specified image.
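For reference, a minimal sketch of the two config files that crictl run expects, loosely based on the cri-tools examples (the names, image and command below are illustrative, not taken from the question):
pod-config.json:
{
  "metadata": { "name": "test-sandbox", "namespace": "default", "attempt": 1, "uid": "test-sandbox-uid" },
  "log_directory": "/tmp",
  "linux": {}
}
container-config.json:
{
  "metadata": { "name": "test" },
  "image": { "image": "docker.io/library/fedora:latest" },
  "command": [ "sleep", "infinity" ],
  "linux": {}
}
With both files in place, the command from the usage text above becomes:
sudo crictl -r /run/crio/crio.sock run container-config.json pod-config.json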

docker cp doesn't work for this mysql container

Tried copying a directory and it doesn't seem to work.
Start a MySQL container.
docker cp mysql:/var/lib/mysql .
cd mysql
ls
NOTHING.
Here's the script to try it yourself.
Extra info:
On Ubuntu 14.04
jc#dev:~/work/jenkins/copy-sql/mysql$ docker -v
Docker version 1.2.0, build fa7b24f
In the Dockerfile for the image your container comes from, there is the VOLUME instruction which tells Docker to leave the /var/lib/mysql directory out of the container filesystem.
The docker cp command can only access the container filesystem and thus won't see the files in mounted volumes.
If you need to backup your mysql data, I suggest you follow the instructions from the Docker userguide in section Backup, restore, or migrate data volumes. You might also find the discordianfish/docker-backup docker image useful for that task.
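The backup pattern from that guide boils down to mounting the volumes of the mysql container into a throwaway container and archiving them, roughly like this (a sketch; the ubuntu image and the current directory as backup target are assumptions):
$ docker run --rm --volumes-from mysql -v $(pwd):/backup ubuntu tar cvf /backup/mysql-data.tar /var/lib/mysql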
Here's a little example to illustrate your case.
given a simple Dockerfile with just a VOLUME instruction
$ cat Dockerfile
FROM base
VOLUME /data
build an image named test
$ docker build --force-rm -t test .
run a container named container_1 which will create two files, one being on the mounted volume
$ docker run -d --name container_1 test bash -c 'echo foo > /data/foo.txt; echo bar > /tmp/bar.txt; while true; do sleep 1; done'
make sure the container is running
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9e97aa18ac83 test:latest "bash -c 'echo foo > 3 seconds ago Up 2 seconds container_1
use the docker cp command to cp file /tmp/bar.txt and check its content
$ docker cp container_1:/tmp/bar.txt .
$ cat bar.txt
bar
try the same with the file which is in the mounted volume (won't work)
$ docker cp container_1:/data/foo.txt .
2014/09/27 00:03:43 Error response from daemon: Could not find the file /data/foo.txt in container container_1
now run a second container to print out the content of that file
$ docker run --rm --volumes-from container_1 base cat /data/foo.txt
foo
It looks like you're trying to pass the name of your container to the docker cp command. The docs say it takes a container id. Try grepping for "CONTAINER ID" in your script instead.
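For example, something along these lines pulls the id out of docker ps first (a sketch; it assumes the running container's name contains "mysql"):
CONTAINER_ID=$(docker ps | grep mysql | awk '{print $1}')
docker cp "$CONTAINER_ID":/var/lib/mysql .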
EDIT:
Since changing your script to grep for the Container ID didn't help, you should start by trying this manually (outside of your script).
The docker cp command works. The reason it's not working for you is either:
a permission thing
you're not formatting the command correctly, or
the directory doesn't exist in your container.
With a running container id of XXXX, try this (using your container id):
sudo docker cp XXXX:/var/lib/mysql .
If this doesn't work, and you don't get an error, I'd maybe suggest that that directory doesn't exist in your container.
EDIT2:
As I said, it's one of the 3 things above.
I get this when I run your script:
2014/09/26 16:10:18 lchown mysql: operation not permitted
Changing the last line of your script to be prefixed with sudo now gives no errors, but no directory either.
Run the container interactively:
docker run -t -i mysql /bin/bash
Once inside the container:
cd /var/lib/mysql
ls
...no files.
So your script is working fine. The directory is just empty (basically #3 above).
For reference, the mysql Dockerfile is here.

Using openshift rhc tail command

How do you tail openshift log files? I issued the following command:
rhc tail myapp
It seems to show the first error line and then stops, but doesn't exit. If I press Ctrl+C it asks whether to stop the batch job or not. How can I display the last few errors and maybe browse page by page? Are there page down/page up shortcuts?
The 'rhc tail' command reads the last few lines of each of your log files and continues to feed subsequent log messages to your console. To view the entire log file, please review:
https://www.openshift.com/faq/how-to-troubleshoot-application-issues-using-logs
You can see the logs by running:
rhc tail -a yourappname -l youremail -p yourpassword
Adding the -a option fixed this issue for me.
rhc tail -a {app_name}
OpenShift places logs in different files, so if you want to get the logs of a specific file you can add -f file/address/and/name
Example :
rhc tail -f app-root/logs/nodejs.log -a myAppName
You can also ask for a specific number of lines by adding -o "-n 40" to the command; this will get the last 40 lines.
Example :
rhc tail -f app-root/logs/nodejs.log -o "-n 40" -a myAppName
You can also download them:
$ scp SHA#APP-DOMAIN.rhcloud.com:/var/lib/openshift/SHA/app-root/\
logs/APP.log "~/upstream.jbossas.log"
This also works on Windows directly in Git Bash.