I am connecting to a mysql container from another container running the mysql client. When I exit the client, the client container stops, as expected. But when I do a docker ps -a, this container doesn't show up. I have not been able to find a reason for this. I am following these instructions to start the containers. Any ideas would be helpful.
The --rm option passed to docker run automatically removes the container after it is stopped.
See the clean up (--rm) flag:
By default a container’s file system persists even after the container exits. This makes debugging a lot easier (since you can inspect the final state) and you retain all your data by default. But if you are running short-term foreground processes, these container file systems can really pile up. If instead you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag
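For example, a client container started like this (the container name some-mysql and the link are just illustrative) will vanish from docker ps -a the moment you exit the client:

# --rm deletes the client container and its filesystem as soon as the command exits
docker run -it --rm --link some-mysql:mysql mysql:5.6 \
    mysql -h mysql -u root -p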
I have my Dockerfile, build an image from it with the Docker engine, and then run the image using docker run -td --name <imagename>.
When I check on it, it keeps running in the Docker engine.
But when I tag it for Bluemix, push it to Bluemix containers (it becomes available in the catalog), and then run
cf ic run -td --name ifx2container registry.ng.bluemix.net/namespace_container/ifx2:informixinstall
This creates the container, but it gets stopped automatically a few seconds after starting.
Run docker with
docker run -itd
not with
docker run -td
-i : Keep STDIN open even if not attached
Source: https://docs.docker.com/engine/reference/run/
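For example, with the image from the question, running it locally with docker would become:

docker run -itd --name ifx2container registry.ng.bluemix.net/namespace_container/ifx2:informixinstall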
Make sure that your container has a long-running command. Per docs: https://console.ng.bluemix.net/docs/containers/container_planning_container_ov.html#container_planning_images
To keep a container up and running at least one long-running process is required to be included in the container image. For example, echo "Hello world" is a short running process. If no other command is specified in the image, the container shuts down after the command is executed. To transform the echo "Hello world" command into a long running process, you can, for example, loop it multiple times, or include the echo command into another long running process inside your app.
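A minimal sketch of that idea, assuming you control the image's CMD (the loop is just an illustration of a long-running process):

# keeps the container alive by looping the short-running echo
CMD ["sh", "-c", "while true; do echo 'Hello world'; sleep 10; done"]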
Also, by default containers in Bluemix run in detached mode. You can review supported run flags here: https://console.ng.bluemix.net/docs/containers/container_cli_reference_cfic.html#container_cli_reference_cfic__run
I am trying to create a docker image from a mysql container.
The problem is that the db of the new image is clean, but the files/folders which I create manually in the original container before the commit are copied.
The base mysql image is the official 5.6; docker is 1.11.
I checked that the folder /var/lib/mysql/d1 appears when a db is created, but the new image doesn't persist this folder, though folders in the / root are persisted.
Several things happening here:
First, docker commit is a code smell. It tends to be used by those creating images with a manual process, rather than automating their builds with a Dockerfile that would allow for easy recreation. If at all possible, I recommend you transition to a Dockerfile for your image creation.
Next, a docker commit will not capture changes made to a volume. And this same issue occurs if you try to update a volume with a RUN step in a Dockerfile. Both of these capture changes to the container filesystem and store those changes as a layer in the docker image, and the volumes are not part of the container filesystem. This is also visible if you run docker diff against a container. In this case, the upstream image has defined the volume in their Dockerfile:
VOLUME /var/lib/mysql
And docker does not have a command to undo a created volume from the Dockerfile. You would need to either directly modify the image definition from outside of docker (not recommended) or build your own upstream image with that step removed (recommended).
What the mysql image does provide is the ability to inject your own database creation scripts in /docker-entrypoint-initdb.d, which you can add with your own image that extends mysql, or mount as a volume. This is where you would inject your schema, or initialize from a known backup for development.
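A minimal sketch of an image extending mysql this way (the script name schema.sql is hypothetical):

FROM mysql:5.6
# every *.sql / *.sh file in this directory is run by the official entrypoint
# the first time the (empty) data directory is initialized
COPY schema.sql /docker-entrypoint-initdb.d/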
Lastly, if the goal is to have persistence, you should store your data in a volume, not by committing containers:
docker run -v mysql-data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
The volume allows you to recreate the container, or upgrade to a newer version of mysql when patches are released (e.g. security fixes), without losing your data.
To back up the volume, this will export it to a tgz:
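For example, recreating the container against the same volume after pulling a patched image might look like this (the container name my-mysql is illustrative; the run command above did not name the container):

docker pull mysql:5.6
docker stop my-mysql && docker rm my-mysql
# the data survives because it lives in the mysql-data volume, not in the container
docker run --name my-mysql -v mysql-data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.6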
docker run --rm -v mysql-data:/source busybox tar -cC /source . >backup.tgz
And to restore a volume, this creates one from a tgz:
docker run --rm -i -v mysql-data:/target busybox tar -xC /target <backup.tgz
You can make data persist by using the docker commit command like below.
docker commit CONTAINER_ID REPOSITORY:TAG
docker commit | Docker Documentation
But just as BMitch's answer said, a docker commit will not capture changes made to a volume.
And usually you should use a volume to store data permanently and let a container be ephemeral without data being stored in itself.
So I guess many people think that trying to persist data without using a volume is a bad practice.
But there are some cases where you might consider committing and freezing data into an image.
For example, it's handy to have an image with all the tables and records in it if you use the image for automated tests in CI.
In the case of GitHub Actions, the only thing you need to do is pull the image, create the database container, and run the tests against the database.
No need to think about data migration.
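A rough sketch of that CI flow with plain docker commands (the image name and test script are hypothetical):

# pull the image that was committed with all tables and records baked in
docker pull registry.example.com/myapp/test-db:seeded
docker run -d --name test-db -p 3306:3306 registry.example.com/myapp/test-db:seeded
# run the test suite against the pre-seeded database
./run-tests.sh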
I have a container that appears to be stuck.
The status currently shows that it is "Networking"; however, none of the ports work.
I am also unable to stop it. It just gives me an error...
Sometimes a container stays in the Networking state for too long. It usually means that the networking is being set up for your container so that its public and private IPs can be accessed and routed to your instance. When a container gets stuck on Networking, it is typically a problem with the infrastructure rather than anything you have done. You can try to create a new container from the same image with cf ic run or ice run. Please consider that if you have reached the maximum quota, you may need to delete the stuck container or release unbound IPs in order to create a new one. You can delete a container using:
cf ic rm -f [containerId]
To get the container id you can run:
cf ic ps -a
You can list all IPs (available or not) using:
cf ic ip list -a
Then you can release an IP using:
cf ic ip release [IPAddr]
I had the same situation and I was able to remove it via the command line:
cf ic rm your_container_id
After a couple of minutes it was removed. For some reason it didn't work via the web console, but it did work from the command line.
I want to move containers from one host to another. The containers have updated data in their filesystem, so I do not want to move the original images (docker save) but the containers themselves (using docker export).
So I use
docker export l4bnode > l4bnode.tar
on the old host, copy the file to the new host, and import the image
cat l4bnode.tar | docker import - andi/l4bnode
on the new one. But... it looks like all the configuration data I had in the Dockerfile (and that I could also specify/had specified on the command line when running the container) is lost. I tried
docker run andi/l4bnode
and get
docker: Error response from daemon: No command specified.
Using docker inspect, I see that all data on the imported image is empty, though it is set on the exported running container. I am mainly missing the startup command, working directory, environment variables and exposed ports (some of which I then have to change due to the migration and new environment).
How can I apply the original configuration on the new host, or preferably, migrate it properly?
You can commit the current container state as a new image, then use save/load on the new image.
That being said, this is something you should generally try to avoid. Runtime data should be kept in volumes; any configuration changes should happen via Dockerfile rebuilds.
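A rough sketch of that flow, using the names from the question (the :migrated tag is just illustrative):

# on the old host: freeze the running container's state, keeping the image config
docker commit l4bnode andi/l4bnode:migrated
docker save andi/l4bnode:migrated > l4bnode-image.tar
# copy the tar to the new host, then:
docker load < l4bnode-image.tar
docker run -d andi/l4bnode:migrated

Unlike export/import, commit followed by save/load preserves the image metadata (startup command, working directory, environment variables, exposed ports), which is exactly what was missing above.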
If you look at Dockerfiles, they often contain lines like this:
sed 's/main$/main universe/' -i /etc/apt/sources.list
I think it is difficult to set up things like this.
Is it possible to launch a default OS image, then enter it interactively with a shell, do some modifications, and then print out the diff (a filesystem diff)?
The diff would then be used as the Dockerfile to recreate the image.
But maybe I am missing something, since I am new to docker.
You can create docker images several ways.
I tend to have two windows open when I create a new docker image. One for my docker run -i -t centos bash, where I am writing all my commands to get it the way I want, and the other one with the Dockerfile, so I can put in whatever I do.
When it comes to config files, I put them in files/folders that match the layout on the image.
For example, if I change /etc/something/file.conf, I create the file as etc/something/file.conf in the same directory as my Dockerfile, and then use Docker's ADD command to add it whenever I do a build.
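The relevant Dockerfile line would then simply be:

# etc/something/file.conf lives next to the Dockerfile, mirroring the target path
ADD etc/something/file.conf /etc/something/file.conf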
This works perfectly, since I can have all this in a git repository with a README.md containing the info I need for running/building the image.
The other thing you can do is run docker ps -a after you are done with the changes you wanted to make, get the ID of the container you just configured, and commit it as a new image. You can tag this new image, or start it with docker run abc0123 bash just like you would a normal docker image.
The problem with this is that you won't be able to easily rebuild it next time without carrying the whole image along.
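A short sketch of that workflow (abc0123 and the tag myimage:configured are illustrative):

docker ps -a                              # find the ID of the container you just configured
docker commit abc0123 myimage:configured  # freeze it as a new image
docker run -it myimage:configured bash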
Dockerfiles with ADD is the way to go!
If you do not want to run sed (which is used to preserve the default file and make minimal changes to it), you can simply ADD the modified file.
For that you can docker run -it --rm thebaseimage /bin/sh (or any other shell that is provided) and edit it in place. Then just copy it outside the container (or docker export it) and use it in your build.
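A small sketch of that, using the sources.list example from the question (here the container is named instead of using --rm, so the file can still be copied out after you exit):

docker run -it --name tmp-edit thebaseimage /bin/sh
# edit /etc/apt/sources.list inside the shell, then exit
mkdir -p etc/apt
docker cp tmp-edit:/etc/apt/sources.list etc/apt/sources.list
docker rm tmp-edit

The copied file can then be ADDed in your Dockerfile as described above.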
The downside of ADD vs RUN sed… is that, if something changes in a new version of your base image, you will overwrite those changes.
The Dockerfile is (mostly) equivalent to a series of docker run and docker commit commands. You wouldn't want to look at the docker diff to see what files changed -- you'd want to see what docker run commands had occurred. You could get these from your host shell history and process these into a Dockerfile.