How to get logs of the container made with `buildah from`?

I create an image using:
buildah bud --layers --format docker -t ${imageFullName()} -f ${componentName}/DockerfileTests ${buildArgsStr} ${componentName}
And then create a container:
buildah from ${imageFullName()}
The question is - how to get logs of the build container?
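Containers created with buildah from are not running daemons, so there is no buildah equivalent of docker logs; one option is simply to capture the output of the commands you run in the working container yourself. A minimal sketch (the image name and test command are placeholders):
# sketch: capture output of commands run in the buildah working container
ctr=$(buildah from myimage)                        # `buildah from` prints the working container name
buildah run "$ctr" -- sh -c 'your-test-command' 2>&1 | tee build-container.log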


How to migrate data from docker container to a newly created volume? [duplicate]

I have a Docker container that I've created simply by installing Docker on Ubuntu and doing:
sudo docker run -i -t ubuntu /bin/bash
I immediately started installing Java and some other tools, spent some time with it, and stopped the container by
exit
Then I wanted to add a volume and realised that this is not as straightforward as I thought it would be. If I use sudo docker run -v /somedir ... then I end up with a fresh new container, so I'd have to install Java and do what I've already done before just to arrive at a container with a mounted volume.
All the documentation about mounting a folder from the host seems to imply that mounting a volume is something that can be done when creating a container. So the only option I have to avoid reconfiguring a new container from scratch is to commit the existing container to a repository and use that as the basis of a new one whilst mounting the volume.
Is this indeed the only way to add a volume to an existing container?
You can commit your existing container (that is, create a new image from the container's changes) and then run it with your new mounts.
Example:
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED              STATUS                          PORTS   NAMES
5a8f89adeead   ubuntu:14.04   "/bin/bash"   About a minute ago   Exited (0) About a minute ago           agitated_newton
$ docker commit 5a8f89adeead newimagename
$ docker run -ti -v "$PWD/somedir":/somedir newimagename /bin/bash
If it's all OK, stop your old container, and use this new one.
You can also commit a container using its name, for example:
docker commit agitated_newton newimagename
That's it :)
There is no way to add a volume to a running container, but to achieve this objective you can use the commands below.
Copy files/folders between a container and the local filesystem:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
For reference see:
https://docs.docker.com/engine/reference/commandline/cp/
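For example (the container name mycontainer and the paths are just placeholders):
# copy a directory out of the container to the host, then copy a file back in
docker cp mycontainer:/var/lib/app/data ./data-backup
docker cp ./settings.json mycontainer:/etc/app/settings.json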
I've successfully mounted the /home/<user-name> folder of my host to the /mnt folder of an existing (not running) container. You can do it in the following way:
Open the configuration file corresponding to the stopped container, which can be found at /var/lib/docker/containers/99d...1fb/config.v2.json (it may be config.json for older versions of Docker).
Find the MountPoints section, which was empty in my case: "MountPoints":{}. Next, replace the contents with something like this (you can copy the proper contents from another container with the proper settings):
"MountPoints":{"/mnt":{"Source":"/home/<user-name>","Destination":"/mnt","RW":true,"Name":"","Driver":"","Type":"bind","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/home/<user-name>","Target":"/mnt"},"SkipMountpointCreation":false}}
or the same (formatted):
"MountPoints": {
  "/mnt": {
    "Source": "/home/<user-name>",
    "Destination": "/mnt",
    "RW": true,
    "Name": "",
    "Driver": "",
    "Type": "bind",
    "Propagation": "rprivate",
    "Spec": {
      "Type": "bind",
      "Source": "/home/<user-name>",
      "Target": "/mnt"
    },
    "SkipMountpointCreation": false
  }
}
Restart the docker service: service docker restart
This works for me with Ubuntu 18.04.1 and Docker 18.09.0
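A minimal sketch of that workflow, assuming (as a precaution) that the Docker daemon is stopped before editing so the change is not overwritten; the container ID and name are placeholders:
sudo service docker stop
sudo vi /var/lib/docker/containers/<container-id>/config.v2.json   # add the MountPoints entry shown above
sudo service docker start
docker start <container-name>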
Jérôme Petazzoni has a pretty interesting blog post on how to Attach a volume to a container while it is running. This isn't something that's built into Docker out of the box, but possible to accomplish.
As he also points out
This will not work on filesystems which are not based on block devices.
It will only work if /proc/mounts correctly lists the block device node (which, as we saw above, is not necessarily true).
Also, I only tested this on my local environment; I didn’t even try on a cloud instance or anything like that
YMMV
Unfortunately, the option to mount a volume is only available on the run command.
docker run --help
-v, --volume list Bind mount a volume (default [])
There is a way you can work around this, though, so you won't have to reinstall the applications you've already set up in your container.
Export your container
docker container export -o ./myimage.docker mycontainer
Import as an image
docker import ./myimage.docker myimage
Then docker run -i -t -v /somedir --name mycontainer myimage /bin/bash
A note for users of Docker Windows containers, after I spent a long time looking into this problem!
Conditions:
Windows 10
Docker Desktop (latest version)
using a Docker Windows container for the image microsoft/mssql-server-windows-developer
Problem:
I wanted to mount a host directory into my Windows container.
Solution, as partially described here:
create docker container
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer
go to command shell in container
docker exec -it <CONTAINERID> cmd.exe
create DIR
mkdir DirForMount
stop container
docker container stop <CONTAINERID>
commit container
docker commit <CONTAINERID> <NEWIMAGENAME>
delete old container
docker container rm <CONTAINERID>
create new container with new image and volume mounting
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y -v C:\DirToMount:C:\DirForMount <NEWIMAGENAME>
After this, I had solved the problem for Docker Windows containers.
My answer is a little different. You can stop your container, add the volume, and restart it. To do so, follow these steps:
docker volume create ubuntu-volume
docker stop <container-name>
sudo docker run -i -t --mount source=ubuntu-volume,target=<target-path-in-container> ubuntu /bin/bash
You can stop and remove the container, append the existing volume in a startup script, and restart from the image. If the already existing partitions keep the data, you shouldn't experience any loss of information. This should also work the same way with a Dockerfile and Docker Compose.
e.g. (Solr image):
(initial script)
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
The same file, with the second volume added:
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-v "/XXXX/backups/solr_snapshot_folder":/var/solr_snapshots \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
Use a symlink to the already mounted drive:
ln -s Source_path target_path_which_is_already_mounted_on_the_running_docker
The best way is to copy all the files and folders from inside the container to a directory on your local file system with: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
SRC_PATH is on the container
DEST_PATH is on the localhost
Then do docker-compose down, attach a volume to the same DEST_PATH, and run the Docker containers again using docker-compose up -d
Add the volume as follows in docker-compose.yml:
volumes:
  - DEST_PATH:SRC_PATH
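An illustrative end-to-end run of that approach (the service name app and the paths are placeholders):
docker cp app:/var/lib/app/data ./data      # copy the data out of the running container
docker-compose down                         # stop and remove the compose containers
# add "- ./data:/var/lib/app/data" under the service's volumes: key in docker-compose.yml
docker-compose up -d                        # recreate the containers with the bind mount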

Resuming docker mysql instance after restarting

I'm using docker to run a mysql 5.6 instance on my localhost (which is running ubuntu 20.04), using these instructions. When I create a new container for the database I use the following command
sudo docker run --name mysql-56-container -p 127.0.0.1:3310:3306 -e MYSQL_ROOT_PASSWORD=rootpassword -d mysql:5.6
That serves the intended purpose; I'm able to create the database using port 3310 and get on with what I want to do.
However, when I reboot my localhost, I am unable to get back into MySQL 5.6 using that port again.
When I list containers, I see none listed:
$ sudo docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
So I try to recreate it and am told that it already exists:
$ sudo docker run --name mysql-56-container -p 127.0.0.1:3310:3306 -e MYSQL_ROOT_PASSWORD=rootpassword -d mysql:5.6
docker: Error response from daemon: Conflict. The container name "/mysql-56-container" is already in use by container "a05582bff8fc02da37929d2fa2bba2e13c3b9eb488fa03fcffb09348dffd858f". You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.
So I try starting it but with no luck:
$ sudo docker start my-56-container
Error response from daemon: No such container: my-56-container
Error: failed to start containers: my-56-container
I clearly am not understanding how this works so my question is, how do I resume work on databases I've created in a docker container after I reboot?
docker ps just lists running containers. If you reboot your laptop, all of them will be stopped. You can use docker ps --all or docker container ls --all to list all containers (running or stopped). You can read more about it in the docker ps command-line reference.
Once a container is created, you cannot create another one with the same name. That is the reason your second docker run is failing.
You should use docker start instead, but you are trying to start a container with a different name: your docker start command uses a container named my-56-container, while it is actually called mysql-56-container. Please check your first docker run command in the question.
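For example:
sudo docker ps --all                    # the stopped container shows up here after a reboot
sudo docker start mysql-56-container    # use the actual name, not my-56-container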

Restoring mysql data from a Docker Volume

It's the second time that, when my local system (macOS) crashes and restarts, I lose the running MySQL Docker container. By "lose" I mean even docker ps -a doesn't show it. It's vanished.
I am using the official mysql-server docker image (https://hub.docker.com/r/mysql/mysql-server) so luckily the data of /var/lib/mysql is in a volume. And I am lucky that after the loss of the container, the volume is there.
The question is, how can I restore the data (e.g. a mysqldump) out of a Docker volume of /var/lib/mysql?
Step 1: Find and verify the volume
Via docker volume ls you can find the name of the volume. Let's say it's <abcdef>.
Then, via docker run -it --rm -v <abcdef>:/var/lib/mysql busybox ls -l /var/lib/mysql, make sure you see the files and that their dates match your recent changes to the lost DB. (credits to this answer)
Optionally, you can create a backup out of this volume via this method.
Step 2: Create a new container, and mount this volume on
Whatever docker run command you are already using to start a MySQL container, add -v <abcdef>:/var/lib/mysql_old to it. It should give you a fresh MySQL container up and running, without any issue. Your data is not loaded there yet, just the files are accessible.
Step 3: Copy and overwrite the MySQL data
Now, go into the shell of that container (e.g. docker exec -it <CONTAINER_NAME> bash) and do ls /var/lib/mysql_old to make sure the files from your volume are there.
Then, do cp -R /var/lib/mysql_old/. /var/lib/mysql (or sudo cp ... depending on the user you got in with) and then chown -R mysql:mysql /var/lib/mysql. (Credits to this tutorial)
Step 4: Restart the container
Exit the container and do docker stop <CONTAINER_NAME> to stop the container. Then start it again via docker start <CONTAINER_NAME>. Voila! It should now be a DB with all your data.
Optionally, if you want to start off with a non-hacked container, you can do docker exec <CONTAINER_NAME> sh -c 'exec mysqldump -uroot -p --databases <DATABASE_NAME>' > dump.sql to get a mysqldump out of it, and import that dump.sql into a fresh new container via docker exec -i <CONTAINER_NAME> sh -c 'exec mysql ' < dump.sql.
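A consolidated sketch of the four steps (the volume name myvolume, the container name mysql-restore and any extra run options are placeholders):
docker volume ls                                                              # step 1: find the volume name
docker run -it --rm -v myvolume:/var/lib/mysql busybox ls -l /var/lib/mysql   # step 1: verify its contents
docker run -d --name mysql-restore -v myvolume:/var/lib/mysql_old mysql/mysql-server   # step 2: plus your usual options
docker exec mysql-restore bash -c 'cp -R /var/lib/mysql_old/. /var/lib/mysql && chown -R mysql:mysql /var/lib/mysql'   # step 3
docker stop mysql-restore && docker start mysql-restore                       # step 4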

how to create function in Apache Pulsar for docker

Step 1:
Copy the jar into the Pulsar container:
docker cp name.jar {ID_CONTAINER}:/pulsar/name.jar
Step 2:
Create the function in the Pulsar container:
docker exec -it {ID_CONTAINER} bin/pulsar-admin functions create --jar /pulsar/name.jar --classname com.mx.conuxi.main.TestFunction --inputs load-valition --output load-errores
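Optionally, you can check that the function was registered (assuming the default public tenant and default namespace):
docker exec -it {ID_CONTAINER} bin/pulsar-admin functions list --tenant public --namespace default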

docker cp doesn't work for this mysql container

Tried copying a directory and it doesn't seem to work.
Start a MySQL container.
docker cp mysql:/var/lib/mysql .
cd mysql
ls
NOTHING.
Here's the script to try it yourself.
Extra info:
On Ubuntu 14.04
jc#dev:~/work/jenkins/copy-sql/mysql$ docker -v
Docker version 1.2.0, build fa7b24f
In the Dockerfile for the image your container comes from, there is a VOLUME instruction which tells Docker to keep the /var/lib/mysql directory in a volume, outside of the container filesystem.
The docker cp command can only access the container filesystem and thus won't see the files in mounted volumes.
If you need to backup your mysql data, I suggest you follow the instructions from the Docker userguide in section Backup, restore, or migrate data volumes. You might also find the discordianfish/docker-backup docker image useful for that task.
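The pattern from that guide looks roughly like this (the container and archive names are illustrative): run a throwaway container that shares the volumes of the MySQL container and tar them up onto the host.
docker run --rm --volumes-from mysql_container -v "$(pwd)":/backup ubuntu tar cvf /backup/mysql-data.tar /var/lib/mysql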
Here's a little example to illustrate your case.
given a simple Dockerfile with just a VOLUME instruction
$ cat Dockerfile
FROM base
VOLUME /data
build an image named test
$ docker build --force-rm -t test .
run a container named container_1 which will create two files, one being on the mounted volume
$ docker run -d --name container_1 test bash -c 'echo foo > /data/foo.txt; echo bar > /tmp/bar.txt; while true; do sleep 1; done'
make sure the container is running
$ docker ps
CONTAINER ID   IMAGE         COMMAND                CREATED         STATUS         PORTS   NAMES
9e97aa18ac83   test:latest   "bash -c 'echo foo >   3 seconds ago   Up 2 seconds           container_1
use the docker cp command to copy the file /tmp/bar.txt and check its content
$ docker cp container_1:/tmp/bar.txt .
$ cat bar.txt
bar
try the same with the file which is in the mounted volume (won't work)
$ docker cp container_1:/data/foo.txt .
2014/09/27 00:03:43 Error response from daemon: Could not find the file /data/foo.txt in container container_1
now run a second container to print out the content of that file
$ docker run --rm --volumes-from container_1 base cat /data/foo.txt
foo
It looks like you're trying to pass the name of your container to the docker cp command. The docs say it takes a container id. Try grepping for "CONTAINER ID" in your script instead.
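Something along these lines, for example (the grep pattern is just an illustration):
CID=$(sudo docker ps | grep mysql | awk '{print $1}')   # grab the container ID from docker ps
sudo docker cp "$CID":/var/lib/mysql .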
EDIT:
Since changing your script to grep for the Container ID didn't help, you should start by trying this manually (outside of your script).
The docker cp command works. The reason it's not working for you is either:
1. a permission thing,
2. you're not formatting the command correctly, or
3. the directory doesn't exist in your container.
With a running container id of XXXX, try this (using your container id):
sudo docker cp XXXX:/var/lib/mysql .
If this doesn't work, and you don't get an error, I'd maybe suggest that that directory doesn't exist in your container.
EDIT2:
As I said, it's one of the 3 things above.
I get this when I run your script:
2014/09/26 16:10:18 lchown mysql: operation not permitted
Changing the last line of your script to prefix with sudo now gives no errors, but no directory either.
Run the container interactively:
docker run -t -i mysql /bin/bash
Once inside the container:
cd /var/lib/mysql
ls
...no files.
So your script is working fine. The directory is just empty (basically #3 above).
For reference, the mysql Dockerfile is here.