I've got Docker running on a Fedora Core VM and attempted to run a Frontier node like this:
$ sudo docker run -it -p 30303:30303 ethereum/client-go
which promptly fired up and started synchronizing the blockchain. It filled up my disk, so I added 40 GB more and it filled that up too. The space is being taken up by /var/lib/docker/devicemapper/devicemapper/data.
How much space do I need? Or is there a problem?
I want to introduce testing into a huge legacy system with a web API. Because of the lack of features in the framework (Symfony 2), I decided to write Postman tests to test the system from the outside. The problem is that the huge database has to be in place for the tests to work, and it needs to be at a certain state for each test (it can't be reused by all tests because they might alter the data). From my tests, using a SQL dump to restore takes around 40 seconds, which is not acceptable per test.
Now I need a solution, or I have to give up on testing, which I do not want to do.
One solution I have come up with, but which needs verification that it works, is to:
Bring up a MySQL docker container and use a sql dump to get the database to initial state.
Copy the MySQL data volume to someplace safe called initdata.
Copy the initdata to a location used as MySQL data volume.
Run the test
Delete container and modified data volume.
Repeat from step 2 for each test.
This is the general idea, but I need to know whether this works with the MySQL Docker image and whether copying volumes is actually efficient and fast enough. Or is there any other sane solution for this situation?
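Roughly, the per-test loop I have in mind would look something like this (a rough sketch only; the directory names, image tag, credentials and database name are placeholders):
# one-time: load the dump into a fresh data directory and keep it aside as "initdata"
docker run -d --name mysql-init -v $PWD/initdata:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=app mysql:8.0
# ... wait for the server to come up, then: docker exec -i mysql-init mysql -uroot -proot app < dump.sql
docker stop mysql-init && docker rm mysql-init

# per test: copy initdata to a scratch directory, run a throwaway container on it, then discard it
rm -rf testdata && cp -a initdata testdata
docker run -d --name mysql-test -v $PWD/testdata:/var/lib/mysql mysql:8.0
# ... run the test against mysql-test ...
docker stop mysql-test && docker rm mysql-test && rm -rf testdata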
I helped a company that wanted to test a 1 TB MySQL database repeatedly. The solution we ended up with was to use LVM filesystem snapshots, so the whole filesystem could be reverted to its saved state almost instantly.
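In outline, the snapshot/revert cycle looks roughly like this (a sketch only; the volume group, logical volume, mount point and service names are assumptions):
# take a snapshot of the volume holding the MySQL datadir
lvcreate --size 10G --snapshot --name mysql_snap /dev/vg0/mysql_data

# ... run a test that modifies the data ...

# revert: stop MySQL, merge the snapshot back into the origin, remount, restart
systemctl stop mysql
umount /var/lib/mysql
lvconvert --merge /dev/vg0/mysql_snap
mount /dev/vg0/mysql_data /var/lib/mysql
systemctl start mysql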
But if using filesystem snapshots is not an option, you may still have to use some backup/restore tool.
Logical data loads (i.e. importing a mysqldump file) are known to be very time-consuming. There are some alternative tools like mysqlpump or mydumper, but they're all pretty slow.
Physical backup tools like Percona XtraBackup are much faster, especially on restore. But restoring a physical backup is a bit tricky because the MySQL Server must be shut down.
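For illustration, a typical XtraBackup cycle looks roughly like this (a sketch; the paths, credentials and service names are assumptions):
# take a physical backup while the server is running
xtrabackup --backup --user=root --password="$MYSQL_ROOT_PASSWORD" --target-dir=/backups/base

# prepare the backup so it is consistent
xtrabackup --prepare --target-dir=/backups/base

# restore: the server must be stopped and the datadir empty
systemctl stop mysql
rm -rf /var/lib/mysql/*
xtrabackup --copy-back --target-dir=/backups/base
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql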
There's a good comparison of the performance of backup/restore tools for MySQL in this recent blog: https://www.percona.com/blog/backup-restore-performance-conclusion-mysqldump-vs-mysql-shell-utilities-vs-mydumper-vs-mysqlpump-vs-xtrabackup/
So this is what we did, and I'm writing it here for anyone who has faced the same problem.
We built a MySQL image with our data imported into it from a mysqldump file, and for each test we bring up a container from that image, run the test, then bring the container down and remove it. This method is quite efficient: bringing up the container and then stopping and removing it takes around 5 seconds per test, for a database with 500 tables and a 55 MB dump (we removed all unnecessary rows).
Here is a sample of the Dockerfile and the process we used to build the image:
FROM docker.supply.codes/base/mysql:8.0.26
COPY ./mysql_data /var/lib/mysql
and we have a script that runs every time our dump gets updated in Git; it imports the dump, builds the image, and pushes it to a Docker registry:
# run a mysql container
docker run -d --name $MYSQL_CONTAINER_NAME -v mysql_data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD -e MYSQL_DATABASE=$MYSQL_DATABASE $MYSQL_IMAGE mysqld --default-authentication-plugin=mysql_native_password
# Wait until MySQL container is completely up
sleep 10
# import mysqldump
docker exec -i $MYSQL_CONTAINER_NAME sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" $MYSQL_DATABASE' < ./$MYSQL_DUMP_FILE_NAME
docker stop $MYSQL_CONTAINER_NAME
# keep the data directory in a new container
docker run -d --name $ALPINE_CONTAINER_NAME -v mysql_data:/mysql_data $ALPINE_IMAGE
# copy the directory to local
docker cp $ALPINE_CONTAINER_NAME:/mysql_data .
# build image with the data(look at the dockerfile)
docker build -t $DOCKER_IMAGE_TAG .
# push it to repo
docker push $DOCKER_IMAGE_TAG
Quite frankly I don't understand the need for copying the data to an Alpine container and then back to the local machine, but the DevOps engineer said it's required because this is being handled by the GitLab CI.
And this is a script that runs Postman collections using the Newman CLI, in which I start and stop a DB container with that image for each test:
for filename in ./collections/*.json; do
# run the symfony test db container
docker run --name "$dbContainerName" --network="$networkName" -d "$dbImageName" > /dev/null
# wait for mysql to come up
sleep 5
# run the collection
newman run "$filename" -d parameters.json
returnCode=$?
# add test and result to log
nameWithoutPath="${filename##*/}"
name="${nameWithoutPath%.postman_collection.json}"
tests+=("$name")
testResults+=($returnCode)
# stop and remove the symfony test db container
docker stop "$dbContainerName" > /dev/null
docker rm "$dbContainerName" > /dev/null
done
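After the loop, the collected names and exit codes can be turned into a simple summary; a small sketch, assuming the tests and testResults arrays populated above:
# print a summary of the results (exit code 0 means the collection passed)
failed=0
for i in "${!tests[@]}"; do
    echo "${tests[$i]}: exit code ${testResults[$i]}"
    [ "${testResults[$i]}" -ne 0 ] && failed=1
done
exit $failed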
I'm running a container with MySQL 8.0.18 in Docker on a Synology NAS. I'm just using it to support another container (MediaWiki) on that box. MySQL has one volume mounted at path /var/lib/mysql. I would like to move that to a shared volume so I can access it with File Station and also periodically back it up. How can I move it to the docker share shown below without breaking MySQL?
Here are the available shares on the Synology NAS.
Alternatively, is there a way I can simply copy that /var/lib/mysql to the shared docker folder? That should work as well for periodic backups.
thanks,
russ
EDIT: Showing the result after following Zeitounator's plan. Before running the docker command (step 2), I created the mediawiki_mysql_backups and 12-29-2019 folders in File Station. After running steps 2 and 3, all the files from MySQL are there and I now have a nice backup!
Stop your current container to make sure mysql is not running.
Create a dummy temp container where you mount the 2 needed volumes. I tend to use busybox for this kind of task => docker run -it --rm --name temp_mysql_copy -v mediaviki-mysql:/old-mysql -v /path/to/ttpe/docker:/new-mysql busybox:latest
Copy all files => cp -a /old-mysql/* /new-mysql/
Exit the dummy container (which will clean up by itself if you used my above command)
Create a new container with mysql:8 image mounting the new folder in /var/lib/mysql (you probably need to do this in your syno docker gui).
If everything works as expected, delete the old mediaviki-mysql volume (the steps are sketched together below)
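Put together, the copy step might look roughly like this (a sketch; the container names are assumptions, and /path/to/ttpe/docker stands for your shared folder as above):
# stop the running MySQL container so the data files are not being written to
docker stop mediawiki-mysql

# mount the old volume and the new share in a throwaway busybox container and copy everything across
docker run -it --rm --name temp_mysql_copy -v mediaviki-mysql:/old-mysql -v /path/to/ttpe/docker:/new-mysql busybox:latest cp -a /old-mysql/. /new-mysql/

# recreate the MySQL container (here via the CLI; the Synology GUI works too) pointing at the new folder
docker run -d --name mediawiki-mysql-new -v /path/to/ttpe/docker:/var/lib/mysql mysql:8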
I have written a Dockerfile that runs MySQL on an Ubuntu image. The Dockerfile is:
FROM ubuntu
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
RUN sed -i '43s/.*/bind-address = 0.0.0.0/' /etc/mysql/mysql.conf.d/mysqld.cnf
EXPOSE 3306
ENTRYPOINT service mysql start && bash
If I run:
docker run -dit mysql-server
after building the image, everything works fine and my Apache/PHP container can communicate with it. However, if I run it with a volume attached (docker run -dit -v ~/vol/:/var/lib/mysql/ mysql-server), the container will stop running after 30 seconds (I'm pretty sure it's the same amount of time every time).
Does anyone know a way I can keep the container up and mount a volume? I've never had this problem before and can't find anything else online (I've been looking a while). Thanks.
This is because you are masking the contents of /var/lib/mysql with the contents of ~/vol, which I'm assuming is empty. As such, the MySQL server can't start because its database files are missing. I would personally use the official image over your custom implementation, as it will handle what you're looking for; here is the link to Docker Hub. It has options for mounting your custom my.cnf file if you need those changes. However, by default the image does bind to 0.0.0.0. See the Docker Hub link for config options.
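For reference, running the official image with a host directory as the data volume might look roughly like this (a sketch; the password, paths and port are placeholders):
# the official image initializes an empty data directory on first start
docker run -dit --name mysql-server -e MYSQL_ROOT_PASSWORD=changeme -v ~/vol:/var/lib/mysql -p 3306:3306 mysql:8.0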
Hope this helps
Dylan
I have a Docker container with MariaDB installed. I am not using any volumes.
[vagrant@devops ~]$ sudo docker volume ls
DRIVER              VOLUME NAME
[vagrant@devops ~]$
Now something strange is happening. When I do sudo docker stop and sudo docker start the MariaDB data is still there. I expected this data to be lost.
Btw, when I edit some file, for example /etc/hosts, I do see the expected behavior. Changes to this file are lost after a restart.
How is it possible that MariaDB data is persistent without volumes? This shouldn't happen right?
docker stop does not remove a container, and docker start does not create one.
docker run does create a new container from an image.
docker start starts a container that already exists but was stopped before (call it pause/resume if you like).
Thus, for start/stop, no volumes are required to keep the state persistent.
If, though, you do docker stop <name> && docker rm <name> and then docker start <name>, you get an error that the container no longer exists - so now you need docker run <args> yourimage
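A quick way to see the difference, sketched with a throwaway MariaDB container (the name and password are placeholders):
# data written inside the container survives stop/start ...
docker run -d --name mariadb-test -e MYSQL_ROOT_PASSWORD=changeme mariadb
docker stop mariadb-test
docker start mariadb-test    # same container, same writable layer, data still there

# ... but not rm + run, which creates a brand-new container
docker stop mariadb-test && docker rm mariadb-test
docker run -d --name mariadb-test -e MYSQL_ROOT_PASSWORD=changeme mariadb    # fresh data directory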
I have trouble mounting a volume on the tutum/mysql container on Mac OS.
I am running boot2docker 1.5
When I run
docker run -v $HOME/mysql-data:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
I get this error:
Installation of system tables failed! Examine the logs in /var/lib/mysql for more information.
Running the above command also creates an empty $HOME/mysql-data/mysql folder.
The tutum/mysql container runs smoothly when no mounting occurs.
I have successfully mounted a folder on the nginx demo container, which means that the boot2docker is setup correctly for mounting volumes.
I would guess that it's just a permissions issue. Either find the uid of the mysql user inside the container and chown the mysql-data dir to that user, or use a data container to hold the volumes.
For more information on data containers see the official docs.
Also note that as the Dockerfile declares volumes, mounting takes place whether or not you use the -v argument to docker run - it just happens in a directory on the host controlled by Docker (under /var/lib/docker) instead of a directory chosen by you.
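A sketch of the first option, assuming the image lets you override its entrypoint and has a mysql user (the uid reported will vary):
# find the uid/gid of the mysql user inside the image
docker run --rm --entrypoint id tutum/mysql mysql

# give that uid ownership of the host directory, e.g. if it reported uid=999
sudo chown -R 999:999 $HOME/mysql-data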
I've also had a problem starting a MySQL Docker container with the error "Installation of system tables failed". There were no changes to the Docker image, and there was no recent update on my machine or Docker. One thing I was doing differently was using images that could take up 5 GB of memory or more during testing.
After cleaning up dangling images and volumes, I was able to start the MySQL image as usual.
This blog seems to have good instructions and explains all the variations of cleanup with Docker.
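For reference, the usual cleanup commands are roughly these (check what each will delete before running it):
docker image prune      # remove dangling images
docker volume prune     # remove unused local volumes
docker system prune     # remove stopped containers, unused networks, dangling images and build cache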