runc: how to start stopped container - linux-containers

I am playing with raw containers, and I do not see a way to rerun a container that is currently stopped.
# runc run nginx --detach
# runc kill nginx
As a result, I have a stopped container.
How can I run it again without deleting it?
This does not work:
# runc start nginx
ERRO[0000] cannot start a container that has stopped
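For reference, a sketch of the usual workaround, assuming the original bundle directory is still on disk: runc has no restart operation, so a stopped container has to be deleted and then created again from its bundle.
# runc delete nginx
# runc run nginx --detach
(Run runc run from the bundle directory, or pass --bundle <dir>.)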

Related

Can't start docker container 3306 is busy [duplicate]

When I run docker-compose up in my Docker project it fails with the following message:
Error starting userland proxy: listen tcp 0.0.0.0:3000: bind: address already in use
netstat -pna | grep 3000
shows this:
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN -
I've already tried docker-compose down, but it doesn't help.
In your case it was some other process that was using the port, and as indicated in the comments, sudo netstat -pna | grep 3000 helped you solve the problem.
In other cases (I have encountered this myself many times) it is mostly the same container running as another instance. In that case docker ps was very helpful, as I often left the same containers running in other directories and then tried running them again elsewhere, where the same container names were used.
How docker ps helped me:
docker rm -f $(docker ps -aq) is a short command which I use to remove all containers.
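If removing everything is too heavy-handed, here is a narrower sketch (assuming a Docker version that supports the publish filter) that finds and stops only the container holding the port:
docker ps --filter "publish=3000"                     # list containers publishing port 3000
docker stop $(docker ps -q --filter "publish=3000")   # stop just those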
This helped me:
docker-compose down # Stop containers in the current dir if there is a docker-compose.yml
docker rm -fv $(docker ps -aq) # Remove all containers
sudo lsof -i -P -n | grep <port number> # List who's using the port
and then:
kill -9 <process id> (macOS) or sudo kill <process id> (Linux).
Source: comment by user Rub21.
I had the same problem. I fixed this by stopping the Apache2 service on my host.
You can kill the process listening on that port easily with the one-liner below:
kill -9 $(lsof -t -i tcp:<port#>)
e.g., for the port 3000 from the question:
kill -9 $(lsof -t -i tcp:3000)
or for ubuntu:
sudo kill -9 `sudo lsof -t -i:8000`
Man page for lsof : https://man7.org/linux/man-pages/man8/lsof.8.html
-9 is for a hard kill (SIGKILL) that gives the process no chance to clean up.
(Not related, but might be useful if it's the port 5000 mystery): the culprit is macOS Monterey.
Port 5000 is commonly used to serve local development servers. When updating to the latest macOS, I found Docker was unable to bind to port 5000 because it was already in use. (You may find a message along the lines of Port 5000 already in use.)
By running lsof -i :5000, I found out the process using the port was named ControlCenter, which is a native macOS application. If this is happening to you, even if you kill the application by brute force, it will restart itself. On my laptop, lsof -i :5000 showed that Control Center was using the port as process id 433. I could kill -9 433, but macOS keeps restarting the process.
The process running on this port turns out to be an AirPlay server. You can deactivate it in System Preferences › Sharing by unchecking AirPlay Receiver to release port 5000.
I had the same problem;
docker-compose down --rmi all (in the same directory where you run docker-compose up)
helps.
UPD: CAUTION - this will also delete the local docker images you've pulled (from a comment).
For Linux/Unix:
Find the process running on the port with the following command:
netstat -nlp | grep 8888
It will show the process running on this port; then kill that process using its PID (look for the PID column):
kill <PID>
In some cases it is critical to perform more in-depth debugging of the problem before stopping a container or killing a process.
Consider following the checklist below; a consolidated sketch of the first three steps follows it:
1) Check your current docker compose environment
Run docker-compose ps. If the port is in use by another container, stop it with docker-compose stop <service-name-in-compose-file> or remove it by replacing stop with rm.
2) Check the containers running outside your current workspace
Run docker ps to see the list of all containers running on your host.
If you find the port is in use by another container, you can stop it with docker stop <container-id>.
(*) Because you're not within the scope of the original compose environment, it is good practice to first use docker inspect to gather more information about the container you're about to stop.
3) Check if the port is used by other processes running on the host
For example, if the port is 6379, run:
$ sudo netstat -ltnp | grep ':6379'
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 915/redis-server 12
tcp6 0 0 ::1:6379 :::* LISTEN 915/redis-server 12
(*) You can also use the lsof command, which is mainly used to retrieve information about files that are opened by various processes (I suggest running netstat first).
So, in the case of the output above, the PID is 915. Now you can run:
$ ps j 915
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 915 915 915 ? -1 Ssl 123 0:11 /usr/bin/redis-server 127.0.0.1:6379
And see the ID of the parent process (PPID) and the execution command.
You can also run $ pstree -s <PID> for a visual display of the process and its related processes.
In our case we can see that the process is probably a daemon (its PPID is 1). In that case, consider running: A) $ cat /proc/<PID>/status to get more in-depth information about the process, such as the number of threads it has spawned, its capabilities, etc.
B) $ systemctl status <PID> to see the systemd unit that caused the creation of this specific process. If the service is not critical, you can stop and disable it.
4) Restart Docker service
Run: sudo service docker restart.
5) You reached this point and...
Only if it's not placing your system at risk - consider restarting the server.
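A consolidated sketch of steps 1-3 above, assuming the conflicting port is 6379 (substitute your own):
PORT=6379
docker-compose ps                      # 1) anything in the current compose project?
docker ps --filter "publish=$PORT"     # 2) any container on this host publishing the port?
sudo netstat -ltnp | grep ":$PORT"     # 3) any host process listening on the port?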
In my case it was
Error starting userland proxy: listen tcp 0.0.0.0:9000: bind: address already in use
and all I needed was to turn off debug listening in PhpStorm.
Most probably this is because you are already running a web server on your host OS, so it conflicts with the web server that Docker is attempting to start.
So try this one-liner before trying anything else:
sudo service apache2 stop; sudo service nginx stop; sudo nginx -s stop;
I had apache running on my ubuntu machine. I used this command to kill it!
sudo /etc/init.d/apache2 stop
I was getting the below error when I was trying to launch a new container:
listen tcp 0.0.0.0:8080: bind: address already in use.
To check which process is running on port 8080, run the command below:
netstat -tulnp | grep 8080
I got the output below:
[root@ip-112-x6x-2x-xxx.xxxxx.compute.internal (aws_main) ~]# netstat -tulnp | grep 8080
tcp        0      0 0.0.0.0:8080    0.0.0.0:*    LISTEN    12749/java
[root@ip-112-x6x-2x-xxx.xxxxx.compute.internal (aws_main) ~]#
Run:
kill -9 12749
Then try to relaunch the container; it should work.
If the redis server is started as a service, it will restart itself when you use kill -9 <process_id> or sudo kill -9 `sudo lsof -t -i:<port_number>`. In that case you will need to stop the redis service with the following command:
sudo service redis-server stop
I upgraded my docker this afternoon and ran into the same problem. I tried restarting docker but no luck.
Finally, I had to restart my computer and it worked. Definitely a bug.
Check docker-compose.yml; the port might be specified twice.
version: '3'
services:
  registry:
    image: mysql:5.7
    ports:
      - "3306:3306"            # <--- remove either this line or next
      - "127.0.0.1:3306:3306"
Changing network_mode: "bridge" to "host" did it for me.
This, with:
version: '2.2'
services:
  bind:
    image: sameersbn/bind:latest
    dns: 127.0.0.1
    ports:
      - 172.17.42.1:53:53/udp
      - 172.17.42.1:10000:10000
    volumes:
      - "/srv/docker/bind:/data"
    environment:
      - 'ROOT_PASSWORD=secret'
    network_mode: "host"
I ran into the same issue several times. Restarting docker seems to do the trick.
A variation of @DmitrySandalov's answer: I had tomcat/java running on 8080, which needed to keep going. I looked at the docker-compose.yml file and altered the entry for 8080 to another port of my choosing.
nginx:
  build: nginx
  ports:
    #- '8080:80' <-- original entry
    - '8880:80'
    - '8443:443'
Worked perfectly. (The only wrinkle is the change will be wiped if I ever update the project, since it's coming from an external repo.)
First, find out which service is running on the specific port. In your case, something is already using port 3000:
netstat -aof | findstr :3000
Now stop the process that is running on that port:
lsof -i tcp:3000
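Note that netstat -aof | findstr is the Windows form of this lookup and lsof the macOS/Linux one. On Windows, once you have the PID from the netstat output, you can stop the process with taskkill (a sketch; the PID 1234 is illustrative):
taskkill /PID 1234 /F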
I resolved the issue by restarting Docker.
It makes more sense to change the port of the docker container instead of shutting down other services that use port 80.
Just a side note, if you have the same issue and are on Windows:
In my case the process in my way was just grafana-server.exe. I had first downloaded the binary version and double-clicked the executable, and it then started as a service under the SYSTEM user, which I could not taskkill (no permission).
I had to go to the Windows "Service manager", search for the "Grafana" service, and stop it. After that, port 3000 was no longer occupied.
Hope that helps.
The one that was using port 8888 was Jupyter, and I had to change the Jupyter notebook configuration file to run it on another port.
To list who is using that specific port:
sudo lsof -i -P -n | grep 8888
You can specify the port you want Jupyter to run on by uncommenting/editing the following line in ~/.jupyter/jupyter_notebook_config.py:
c.NotebookApp.port = 9999
In case you don't have a jupyter_notebook_config.py, try running jupyter notebook --generate-config to create one. See the Jupyter documentation for further details on configuration.
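Alternatively, if you only need it once, you can pass the port on the command line instead of editing the config file (a sketch; pick any free port):
jupyter notebook --port 9999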
Before, it was running on: docker run -d --name oracle -p 1521:1521 -p 5500:5500 qa/oracle
I just changed the port: docker run -d --name oracle -p 1522:1522 -p 5500:5500 qa/oracle
It worked fine for me!
On my machine no PID was shown by netstat -tulpn for the in-use port (8080), so I could not kill it; killing the containers and restarting the computer did not work either. The service docker restart command restarted docker for me (ubuntu), the port was no longer in use, and I am a happy chap and off to lunch.
Maybe it is too crude, but it works for me: restart the docker service itself.
sudo service docker restart
hope it works for you also!
I ran the container with another port, like... 8082 :-)
I came across this problem. My simple solution was to remove mongodb from the system.
Commands to remove mongodb on Ubuntu:
sudo apt-get purge mongodb mongodb-clients mongodb-server mongodb-dev
sudo apt-get purge mongodb-10gen
sudo apt-get autoremove
Let me add one more case, because I had the same error and none of the solutions listed so far worked:
serv1:
  ...
  networks:
    privnet:
      ipv4_address: 10.10.100.2
...
serv2:
  ...
  # no IP assignment, no dependencies

networks:
  privnet:
    ipam:
      driver: default
      config:
        - subnet: 10.10.100.0/24
Depending on the init order, serv2 may get assigned the IP 10.10.100.2 before serv1 is started, so I just assign IPs manually for all containers to avoid the error (a sketch follows). Maybe there are other, more elegant ways.
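A sketch of that manual assignment: give serv2 its own fixed address in the same subnet so it can never collide with serv1 (the address is illustrative):
serv2:
  ...
  networks:
    privnet:
      ipv4_address: 10.10.100.3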
I had the same problem, and stopping the docker container resolved it.
sudo docker container stop <container-name>
I solved it with sudo service redis-server stop.

Why is MariaDB data persistent in my Docker container? I don't have any volumes

I have a Docker container with MariaDB installed. I am not using any volumes.
[vagrant@devops ~]$ sudo docker volume ls
DRIVER              VOLUME NAME
[vagrant@devops ~]$
Now something strange is happening. When I do sudo docker stop and sudo docker start, the MariaDB data is still there. I expected this data to be lost.
By the way, when I edit some file, for example /etc/hosts, I do see the expected behavior: changes to this file are lost after a restart.
How is it possible that MariaDB data is persistent without volumes? This shouldn't happen, right?
docker stop does not remove a container, and neither does docker start create one.
docker run does create a new container from an image.
docker start starts a container which already exists but has been stopped before (call it pause/resume if you like).
Thus for start/stop no volumes are required to keep the state persistent.
If, though, you do docker stop <name> && docker rm <name> and then docker start <name>, you get an error that the container no longer exists - so now you need docker run <args> yourimage again.
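A quick sketch demonstrating the difference, using a throwaway MariaDB container (the name and password are illustrative):
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mariadb
docker stop db && docker start db   # same container, same writable layer: data survives
docker stop db && docker rm db      # the container and its writable layer are gone
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mariadb   # a brand-new container with fresh data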

CircleCI: MySQL starts on its own even after stopping the process

I am having some trouble with the default MySQL installation on CircleCI. In the 'post' section of 'machine', I stop mysql using "- sudo service mysql stop". The reason for doing so is that I want to use a docker mysql container on port 3306. My "docker-compose up" takes some time to finish, and sometimes, before the docker mysql container starts, the mysql process starts again for no reason obvious to me. I have been tracking this issue using the following command:
while true; do sudo netstat -nlp | grep :3306; sleep 2; done
I have had a build that ran fine, with docker able to register port 3306, and also a build in which mysqld started again even after being stopped, giving me the following error on docker-compose up:
ERROR: for dbm01 Cannot start service dbm01: failed to create endpoint minimum_dbm01_1 on network minimum_default: Error starting userland proxy: listen tcp 0.0.0.0:3306: bind: address already in use
ERROR: Encountered errors while bringing up the project.
Both builds are of the same commit, so there is no difference in code. What might be the issue?

Docker Daemon stop - Timeout for container defaults 10s

docker stop has a standard timeout of 10s (Reference - Docker Stop). This time window may be too short for the shutdown of mysql or influxdb to reach a consistent state on the file system. If I stopped the container by hand, I would shut it down with docker stop -t 60 mysql. But that is not the default case: the containers are on auto restart and are started and stopped with the docker daemon.
If I stop the docker daemon (e.g. on system reboot), the daemon will stop all running containers with a timeout of 10 seconds, which can cause inconsistent dbs.
Question: Is there a way to set a global timeout for docker stop or the daemon stop?
Update 2016-05-03: Added [Feature Request] Add config parameter to change stop timeout for containers or globally #22471 on github.
https://docs.docker.com/engine/reference/commandline/dockerd/
Use the following config:
--shutdown-timeout int Set the default shutdown timeout (default 15)
or put it in the docker daemon.json as below and restart the docker daemon:
tee /etc/docker/daemon.json <<-'EOF'
{
    "shutdown-timeout": 30,
    "live-restore": true
}
EOF
systemctl restart docker
systemctl status docker
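If you only need a longer timeout for specific containers rather than the whole daemon, per-container options exist as well (a sketch; the 60-second value is illustrative):
docker run -d --stop-timeout 60 mysql
or, in a docker-compose.yml service definition:
stop_grace_period: 60s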
I'm pretty sure there isn't a global setting for this. It would be a nice feature; you should submit an issue to suggest it, and if you have time, submit a pull request to implement it.

Docker db container running. Another process with pid <id> is using unix socket file

I'm trying to run a docker mysql container with an initialized db according to the instructions provided in this message: https://stackoverflow.com/a/29150538/6086816. After the first run it works ok, but on the second run, after the script tries to execute /usr/sbin/mysqld, I get this error:
db_1 | 2016-03-19T14:50:14.819377Z 0 [ERROR] Another process with pid 10 is using unix socket file.
db_1 | 2016-03-19T14:50:14.819498Z 0 [ERROR] Unable to setup unix socket lock file.
...
mdir_db_1 exited with code 1
What can be the reason for it?
I was facing the same issue. The following are the steps I tried to resolve it (a condensed sketch follows the steps):
First, stop your docker service with the following command: sudo service docker stop
Now, get into the docker folder on your Linux system at the following path:
/var/lib/docker.
Within the docker folder, get into the volumes folder. This folder contains the volumes of all your containers (the data of each container):
cd /var/lib/docker/volumes
Once inside volumes, run sudo ls and you will find multiple folders with hash names. These folders are the volumes of your containers; each folder is named after its hash.
(You need to inspect your docker container to get the hash of its volume. For this, do the following steps:
Run the command docker inspect '<your container ID>'.
You will get a JSON document; it is the config of your docker container.
Search for the Mounts key within this JSON. Under Mounts, you will find the name (hash) of your volume and its path: "Name" is your volume name and "Source" is the path where your volume is located on the host.)
Once you have the name of your volume, go into that volume's folder; inside it you will find a "_data" folder. Get into this folder.
Finally, within the "_data" folder, run sudo ls and you will find a file named mysql.sock.lock. Remove it with rm -f mysql.sock.lock.
Now restart your docker service and then start your docker container. It will start working.
Note: use sudo with each command while you are inside the docker folder.
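A condensed sketch of the steps above, assuming the container is named db and its data volume is the first entry in Mounts (verify with docker inspect first):
VOL=$(docker inspect db --format '{{ (index .Mounts 0).Source }}')   # host path of the volume
sudo service docker stop
sudo rm -f "$VOL/mysql.sock.lock"
sudo service docker start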
You should make sure the socket file has been deleted before you start mysql. Check the my.cnf file (/etc/mysql/my.cnf) to get the path of the socket file:
you will find something like socket = /var/run/mysqld/mysqld.sock. Delete the .sock.lock file as well.
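For example, inside the container (a sketch; the path shown is the Debian/Ubuntu default):
grep socket /etc/mysql/my.cnf          # e.g. socket = /var/run/mysqld/mysqld.sock
rm -f /var/run/mysqld/mysqld.sock.lock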
This is a glitch with docker.
Execute the following commands:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
This will stop all containers and remove them.
After this it should work just fine.
Just faced the same problem.
After much research, a summary of my solution:
Find the host location of the docker files:
$ docker inspect <container_name> --> Mounts.Source section
In my case, it was /var/snap/docker/common/.../_data
As root, you can ls -l that directory and see the files that are preventing your container from starting: the socket mysql.sock and the file mysql.sock.lock.
Simply delete them as root ($ sudo rm /var/snap/.../_data/mysql.sock*) and start your docker container.
NOTE: be sure you don't have any other mysql.sock... files besides those two. In that case, don't use the wildcard (*); delete each of them individually.
Hope this helps.
I had the same problem and got rid of it in an easy and mysterious way.
First I noticed that I was unable to start the mysql_container container. Running docker logs mysql_container indicated exactly the same problem as described, repeating a few times.
I wanted to have a look around by running the container in interactive mode with docker start -i mysql_container from one bash window, while running things like
docker exec -it mysql_container cat /etc/mysql/my.cnf in another.
I did that and was very surprised to see that this time the container started successfully. I cannot understand why. I can only guess that starting in interactive mode together with running subsequent docker exec commands slowed down the init process, and some other process had a bit more time to remove its locks.
Hope that helps anybody.