Container name changes after system restart - containers

I am starting and stopping a container using a systemd unit file, as follows.
The container name is hello, and podman ps shows hello in its output.
I auto-generate a unit file for hello with:
podman generate systemd --new --files --name hello
The unit file contains
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon -d --hostname=first containerID
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
When I reboot the system and check
systemctl status container-hello
I get the status Active: running.
But if I run podman ps -a, I see hello as inactive and another container, say hello2, added and running.
hello2 is associated with the unit file created in step 1, and hello is not.
I have used --hostname as suggested, but I cannot see a container with that name when I check with podman ps or podman ps -a.

From https://docs.podman.io/en/latest/markdown/podman-run.1.html:
Podman generates a UUID for each container, and if a name is not assigned to the container with --name then it will generate a random string name. The name is useful any place you need to identify a container. This works for both background and foreground containers.
So you may want to edit your unit file to contain
ExecStart=/usr/bin/podman run ... --name hello
If that fixes the problem: the way you generate the unit should already take care of the name, so it may be worth filing a bug against podman.
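After editing the unit file, systemd has to pick the change up; a minimal sketch, assuming the generated unit is called container-hello.service as in the question:
systemctl daemon-reload
systemctl restart container-hello.service
podman ps --filter name=hello   # the container should now be listed under the expected name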

What worked for me:
I added the --name parameter to the ExecStart line inside the unit file:
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon -d --name=container_name ID
When podman auto-generates the unit file, it ensures that once the container is stopped it is also removed, via:
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
I removed this line from the unit file.
Results:
I can now start/stop/restart the container without it getting removed.
When I restart my system (reboot), the container name remains the same as before the reboot (the name given in the --name parameter).
The container auto-restarts with the same name every time.
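Putting the two changes together, the relevant lines of the modified unit might look like this (a sketch reusing the placeholders from above; ID is the same placeholder used in the ExecStart line):
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --sdnotify=conmon --cgroups=no-conmon -d --name=container_name ID
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
# the generated ExecStopPost=... podman rm line is intentionally left out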

Related

Resuming docker mysql instance after restarting

I'm using docker to run a mysql 5.6 instance on my localhost (which is running ubuntu 20.04), using these instructions. When I create a new container for the database I use the following command
sudo docker run --name mysql-56-container -p 127.0.0.1:3310:3306 -e MYSQL_ROOT_PASSWORD=rootpassword -d mysql:5.6
That serves the intended purpose; I'm able to create the database using port 3310 and get on with what I want to do.
However, when I reboot my localhost, I am unable to get back into MySQL 5.6 using that port.
When I list containers, I see none listed:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
So I try to recreate it and am told that it already exists:
$ sudo docker run --name mysql-56-container -p 127.0.0.1:3310:3306 -e MYSQL_ROOT_PASSWORD=rootpassword -d mysql:5.6
docker: Error response from daemon: Conflict. The container name "/mysql-56-container" is already in use by container "a05582bff8fc02da37929d2fa2bba2e13c3b9eb488fa03fcffb09348dffd858f". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
So I try starting it but with no luck:
$ sudo docker start my-56-container
Error response from daemon: No such container: my-56-container
Error: failed to start containers: my-56-container
I clearly am not understanding how this works so my question is, how do I resume work on databases I've created in a docker container after I reboot?
docker ps just lists running containers. If you reboot your laptop, all of them will be stopped. You can use docker ps --all or docker container ls --all to list all containers (running or stopped). You can read more about it in the docker ps command line reference.
Once a container is created, you cannot create another with the same name. That is the reason your second docker run is failing.
You should use docker start instead. But you are trying to start a container with a different name: your docker start command uses a container named my-56-container, while it is actually called mysql-56-container. Please check your first docker run command in the question.
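In other words, after a reboot something like this should get the existing database container back (using the names from the question):
sudo docker ps --all                     # the stopped mysql-56-container shows up here
sudo docker start mysql-56-container     # start it again by its real name
After that, MySQL should be reachable on 127.0.0.1:3310 as before.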

How to start a container in cri-o with only specifying the image name?

I am trying to achieve something like
docker run -it <image_name> bash
I want to specify the image to run and do not care about anything else.
crictl requires config files for both a container and a pod for the run command, if I am not mistaken.
[hbaba#ip-XX-XX-XXX misc]$ sudo crictl -r /run/crio/crio.sock run -h
....
USAGE:
crictl run [command options] container-config.[json|yaml] pod-config.[json|yaml]
I am looking for the simplest way of starting a container, possibly with only a specified image.
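For reference, the two config files that crictl run expects can be quite small. A rough sketch (the names, uid, image, and log paths below are placeholders, not taken from this thread):
pod-config.json:
{
  "metadata": { "name": "shell-sandbox", "namespace": "default", "attempt": 1, "uid": "shell-sandbox-uid" },
  "log_directory": "/tmp",
  "linux": {}
}
container-config.json:
{
  "metadata": { "name": "shell" },
  "image": { "image": "docker.io/library/busybox:latest" },
  "command": ["top"],
  "log_path": "shell.log",
  "linux": {}
}
sudo crictl -r /run/crio/crio.sock run container-config.json pod-config.json
An interactive shell comparable to docker run -it <image_name> bash can then be reached with something like crictl exec -it <container_id> sh.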

Command to use with scratch docker container

I'm trying to start a docker container for mysql. The image for the container was built from scratch for a training I attended and I need to figure out how to configure it to run a command that will start the container.
The /bin/bash and /bin/sh commands don't work. When I docker inspect the container, the CMD section doesn't contain anything. I've tried adding CMD['/bin/bash'] or CMD['/bin/sh'] at the end of my docker container run command; that populates the CMD field, but the container still won't run.
There are a number of other microservice containers I'm having the same problem with. This is the first one I need to solve however.
This is the command I'm running:
docker run -d -v infytel-mysql-volume:/var/lib/mysql --network=infytel-docker-networkMS --name=infytel-mysql-con2 -e MYSQL_PASSWORD_ROOT=root infytel-mysql-img:v1 /bin/bash
This is my error:
oci runtime error: container_linux.go:235: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory
[EDIT] Running docker logs gives the error shown above.
Running without the /bin/sh command gives the error: error response from daemon: No command specified
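As a side note, one way to double-check what the image actually defines as its entrypoint and command is to inspect the image itself (using the image name from the question):
docker image inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' infytel-mysql-img:v1
If the image really is built FROM scratch and contains no shell, both fields can legitimately be empty, which would match the "no such file or directory" and "No command specified" errors above.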

Docker db container running. Another process with pid <id> is using unix socket file

I'm trying to run a docker mysql container with an initialized db according to the instructions provided in this answer: https://stackoverflow.com/a/29150538/6086816. After the first run it works OK, but on the second run, after trying to execute /usr/sbin/mysqld from the script, I get this error:
db_1 | 2016-03-19T14:50:14.819377Z 0 [ERROR] Another process with pid 10 is using unix socket file.
db_1 | 2016-03-19T14:50:14.819498Z 0 [ERROR] Unable to setup unix socket lock file.
...
mdir_db_1 exited with code 1
What can be the reason for it?
I was facing the same issue. Following are the steps I tried to resolve it:
Firstly, stop your docker service using the following command: sudo service docker stop
Now, go to the docker folder on your Linux system, at the following path:
/var/lib/docker
Then, within the docker folder, go into the volumes folder. This folder contains the volumes of all your containers (the data of each container):
cd volumes
After getting into volumes, run sudo ls; you will find multiple folders with hash names. These folders are the volumes of your containers, and each folder is named after its hash.
(You need to inspect your docker container to get the hash of your container's volume. For this, do the following steps:
Run the command docker inspect '<your container ID>'.
You will get JSON output; it is the config of your docker container.
Search for the Mounts key within this JSON. In Mounts you will find the Name (hash) of your volume, and also its path: the "Name" key is your volume name and "Source" is the path where your volume is located.)
Once you have the name of your volume, go into that volume's folder, and within it you will find a "_data" folder. Go into this folder.
Finally, within the "_data" folder, run sudo ls and you will find a file named mysql.sock.lock. Remove it with rm -f mysql.sock.lock.
Now restart your docker service and then start your docker container. It will start working.
Note: use sudo with each command while you are in the docker folder.
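Condensed into commands, the steps above might look roughly like this (the container ID and volume hash are placeholders; docker inspect needs the daemon running, so it comes first here):
sudo docker inspect <container_id> --format '{{json .Mounts}}'   # note the volume Name and Source
sudo service docker stop
sudo rm -f /var/lib/docker/volumes/<volume_hash>/_data/mysql.sock.lock
sudo service docker start
sudo docker start <container_id>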
You should make sure the socket file has been deleted before you start mysql. Check the my.cnf file (/etc/mysql/my.cnf) to get the path of the socket file;
you will find something like socket = /var/run/mysqld/mysqld.sock. Delete the .sock.lock file as well.
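For example, inside the container that would amount to something like this (a sketch assuming the default Debian/Ubuntu paths quoted in the answer):
grep -i socket /etc/mysql/my.cnf                                    # shows the configured socket path
rm -f /var/run/mysqld/mysqld.sock /var/run/mysqld/mysqld.sock.lock  # remove the stale socket and lock file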
This is a glitch with docker.
Execute the following commands:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
This will stop all containers and remove them.
After this it should work just fine.
Just faced the same problem.
After much research, a summary of my solution:
Find the host location of the docker files:
$ docker inspect <container_name> --> Mounts.Source section
In my case, it was /var/snap/docker/common/.../_data
As root, you can ls -l that directory and see the files that are preventing your container from starting: the socket mysql.sock and the file mysql.sock.lock.
Simply delete them as root ($ sudo rm /var/snap/.../_data/mysql.sock*) and start your docker container.
NOTE: be sure you don't have any other mysql.sock... files besides those two. If you do, don't use the wildcard (*); delete each of them individually.
Hope this helps.
I had the same problem and got rid of it in an easy and mysterious way.
First I noticed that I was unable to start the mysql_container container. Running docker logs mysql_container indicated exactly the same problem as described, repeated a few times.
I wanted to have a look around by running the container in interactive mode with docker start -i mysql_container in one bash window, while running things like
docker exec -it mysql_container cat /etc/mysql/my.cnf in another.
I did that and was very surprised to see that this time the container started successfully. I cannot understand why. I can only guess that starting it in interactive mode together with running subsequent docker exec commands slowed down the init process, and some other process had a bit more time to remove its locks.
Hope that helps anybody.

docker cp doesn't work for this mysql container

Tried copying a directory and it doesn't seem to work.
Start a MySQL container.
docker cp mysql:/var/lib/mysql .
cd mysql
ls
NOTHING.
Here's the script to try it yourself.
Extra info:
On Ubuntu 14.04
jc#dev:~/work/jenkins/copy-sql/mysql$ docker -v
Docker version 1.2.0, build fa7b24f
In the Dockerfile for the image your container comes from, there is a VOLUME instruction which tells Docker to leave the /var/lib/mysql directory out of the container filesystem.
The docker cp command can only access the container filesystem and thus won't see the files in mounted volumes.
If you need to backup your mysql data, I suggest you follow the instructions from the Docker userguide in section Backup, restore, or migrate data volumes. You might also find the discordianfish/docker-backup docker image useful for that task.
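For the backup suggestion above, the usual --volumes-from pattern looks roughly like this (a sketch; the archive name is arbitrary and the container is assumed to be named mysql, as in the question):
docker run --rm --volumes-from mysql -v $(pwd):/backup busybox tar cvf /backup/mysql-data.tar /var/lib/mysql
This tars the contents of the data volume into mysql-data.tar in the current host directory, without needing docker cp at all.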
Here's a little example to illustrate your case.
given a simple Dockerfile with just a VOLUME instruction
$ cat Dockerfile
FROM base
VOLUME /data
build an image named test
$ docker build --force-rm -t test .
run a container named container_1 which will create two files, one being on the mounted volume
$ docker run -d --name container_1 test bash -c 'echo foo > /data/foo.txt; echo bar > /tmp/bar.txt; while true; do sleep 1; done'
make sure the container is running
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9e97aa18ac83 test:latest "bash -c 'echo foo > 3 seconds ago Up 2 seconds container_1
use the docker cp command to copy the file /tmp/bar.txt and check its content
$ docker cp container_1:/tmp/bar.txt .
$ cat bar.txt
bar
try the same with the file which is in the mounted volume (won't work)
$ docker cp container_1:/data/foo.txt .
2014/09/27 00:03:43 Error response from daemon: Could not find the file /data/foo.txt in container container_1
now run a second container to print out the content of that file
$ docker run --rm --volumes-from container_1 base cat /data/foo.txt
foo
It looks like you're trying to pass the name of your container to the docker cp command. The docs say it takes a container id. Try grepping for "CONTAINER ID" in your script instead.
EDIT:
Since changing your script to grep for the Container ID didn't help, you should start by trying this manually (outside of your script).
The docker cp command works. The reason it's not working for you is either:
a permission thing
you're not formatting the command correctly, or
the directory doesn't exist in your container.
With a running container id of XXXX, try this (using your container id):
sudo docker cp XXXX:/var/lib/mysql .
If this doesn't work, and you don't get an error, I'd maybe suggest that that directory doesn't exist in your container.
EDIT2:
As I said, it's one of the 3 things above.
I get this when I run your script:
2014/09/26 16:10:18 lchown mysql: operation not permitted
Changing the last line of your script to prefix it with sudo now gives no errors, but no directory either.
Run the container interactively:
docker run -t -i mysql /bin/bash
Once inside the container:
cd /var/lib/mysql
ls
...no files.
So your script is working fine. The directory is just empty (basically #3 above).
For reference, the mysql Dockerfile is here.