I've set up a basic MariaDB instance running in Docker, essentially by starting the container through the Kitematic UI, changing the settings, and letting it run.
Today I wanted to make a backup, so I used Kitematic to change the port so I could access the database from another machine for automated backups. After changing the port in Kitematic, it seems to have started a fresh MariaDB container (i.e. all my data seems to be gone).
Is that the expected behavior? And, more importantly, is there any way to recover the seemingly missing data, or has it been completely removed?
Also, if the data is actually removed, what is the preferred way to change settings—such as the exposed ports—without losing all changes? docker commit?
Notes:
running docker 1.12.0 beta for OS X
docker ps -a shows the database status as "Up for X minutes" when the original had been up for several days
Thanks in advance!
UPDATE:
It looks like the recommended procedure to retain data (without creating a volume or similar) is to:
commit changes (e.g. docker commit <containerid> <name/tag>)
take the container offline
update settings such as exposed port or whatever else
run the image with committed changes
...taken from this answer (a rough sketch of the commands follows).
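Sketch (the container name, image tag, and host port below are placeholders for illustration, not my actual values):
docker commit mariadb-container my-mariadb:snapshot
docker stop mariadb-container
docker rm mariadb-container
docker run -d --name mariadb-container -p 3307:3306 my-mariadb:snapshot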
Yes, this is expected behavior. If you want your data to be persistent, you should mount a volume from the host (via the --volume option for docker run) or from another container, and store your database files in that volume.
docker run --volume /path/on/your/host/machine:/var/lib/mysql mariadb
Losing changes is actually a core feature of containers, so it cannot be avoided. This way you can be sure that with every docker run you get a fresh environment without any leftover changes. If you want your changes to be permanent, you should make them in your image's Dockerfile, not in the container itself.
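For example, if the change you want to keep is a custom MariaDB configuration, a minimal Dockerfile sketch could look like this (the my.cnf file name and the image tag are assumptions for illustration):
FROM mariadb:10.1
# copy a custom config file into the directory the mariadb image reads extra config from
COPY my.cnf /etc/mysql/conf.d/
You would then build and run that image instead of the stock one, e.g. docker build -t my-mariadb . followed by the docker run shown above.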
For more information, please visit the official documentation: https://docs.docker.com/engine/tutorials/dockervolumes/.
It looks like you don't mount a volume into the container at a certain path. You can read about volumes and storing data in containers here.
You need to run the container with the volume option:
$ docker run --name some-mariadb -v /my/own/datadir:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mariadb:tag
where /my/own/datadir is a directory on the host machine
I have a MySQL container that is configured to start with "-v /data:/var/lib/mysql" and therefore persists data between container restarts in a separate folder. This approach has some drawbacks; in particular, the user may not have write permissions for the specified directory. How exactly should the container be reconfigured to use Docker's implicit per-container storage, so that MySQL data is saved under /var/lib/docker/volumes and can be reused after the container is stopped and started again? Or is it better to consider other persistence options?
What you show is called a bind mount.
What you are asking for is called a volume.
Just create a volume and connect it:
docker volume create foo
docker run ... -v foo:/var/lib/mysql <image> <command>
And you're done! You can connect it to as many containers as you like.
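For example, a minimal sketch with MySQL (the volume name, password, and image tag are placeholders); the named volume lives under /var/lib/docker/volumes on the host, which is exactly the implicit storage you are asking about:
docker volume create mysql-data
docker run -d --name some-mysql -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw mysql:5.7
docker volume inspect mysql-data
docker volume inspect shows the Mountpoint under /var/lib/docker/volumes, and you can stop and start the container (or attach the volume to a new container) without losing the data.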
I've been having an issue where Google Compute Engine instances created with containers run the startup script up to 10-20 times.
Case 1:
The container is built with Docker, pushed to the online registry, and then an instance is created with that container. The startup script "Test.py" is passed in at container creation instead of being built into the Dockerfile directly. The following command is used to create an instance with a container and arguments:
gcloud compute instances create-with-container busybox-vm --container-image gcr.io/example-project-id/ttime2 --container-command python --container-arg="/Test.py" --container-arg="Args"
Case 2:
Including the startup script (Test.py) and its arguments in the Docker image itself and then creating an instance also resulted in multiple runs of the script.
Notes:
The startup script is run as a sub-process so that standard output can easily be sent to a remote server, where it can be monitored for debugging purposes.
The startup script is executed multiple times before the first execution is finished (as the end of the script kills the instance successfully).
When running this Docker build locally, it performs as expected, with just one execution of the code.
I've seen this multiple startup-script execution with several different Docker images.
Only one instance is created.
A solution, it seems, would be to check for subprocesses as they spawn and kill any duplicates; I'm just not sure how I'd identify them.
Edit: If you have some general tips on tackling problems with containers that are "crashlooping", I'd be happy to accept that as an answer. I was personally able to add the flag --container-restart-policy="never" to the above gcloud command to get a large variety of tests to work (not sure why), so I'm done with this issue for now.
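For reference, the full command with that flag, reusing the same example image and arguments from above, looks roughly like this:
gcloud compute instances create-with-container busybox-vm --container-image gcr.io/example-project-id/ttime2 --container-command python --container-arg="/Test.py" --container-arg="Args" --container-restart-policy="never"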
This could have one of many causes. A good way to diagnose would be to:
change to --container-command "sleep 50000" and create a VM.
SSH into the VM and run sudo -i.
run docker ps -a until you see a container of yours appear.
get its container ID and run docker exec -it <ID> bash (change to sh if necessary). Your container should be sleeping; this lets you get a shell inside it.
execute Test.py from within your container to see if there's an error.
This requires your image to have sleep.
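A sketch of what the hands-on part might look like once the VM is up with the sleep override (the container ID is a placeholder you get from docker ps -a):
gcloud compute ssh busybox-vm
sudo -i
docker ps -a
docker exec -it <ID> sh
python /Test.py
If Test.py errors out inside the container, that is usually why the container keeps exiting and getting restarted, which shows up as the script running over and over.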
I'm trying to restart my MySQL server. The server is in a Kubernetes/Fedora container. I have tried to use # /etc/init.d/mysqld restart and # systemctl restart mysqld. The problem is that there are no files in init.d.
When running # /etc/init.d/mysqld restart, bash says "No such file", which is obvious since there is no such file. When running # systemctl restart mysqld, it responds "bash: systemctl: Command not found".
The MySQL server is running fine and I can log into it; however, I can't restart it. Please help.
To restart a server on Kubernetes you simply need to delete the pod with kubectl delete pod <id>. If you didn't create the pod manually, but rather via a deployment, it will be recreated and come back online automatically.
Deleting a pod is the correct way of shutting down servers. First, Kubernetes sends mysql a TERM signal, which politely asks it to shut down. Then, after a configurable grace period, it sends KILL if the process doesn't comply.
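A minimal sketch, assuming the pod name looks something like mysql-deployment-abc123 (the name is a placeholder; get the real one from kubectl get pods):
kubectl get pods
kubectl delete pod mysql-deployment-abc123
kubectl get pods
The configurable grace period mentioned above is terminationGracePeriodSeconds in the pod spec; you can also override it per delete with --grace-period.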
The MySQL server is running fine and I can log into it; however, I can't restart it.
You have two options, and both have their 'dangers' to address:
More likely: the container is started with either a CMD given in the Docker image or a command directive in the Kubernetes manifest, and that is what actually starts your mysql. In that case, however you manage to terminate (during a restart) the running mysql instance in that container, you will also terminate the whole container. Kubernetes will then restart that container, or the whole pod.
Less likely, but possible: the container was started with some other 'main' command, and mysql is started as part of a separate script. In that case, inspecting the Dockerfile or the Kubernetes manifest will give you details about the start/stop procedure, and only in that case can you restart mysql without actually killing the container it is running in. (A quick way to check which case you are in is sketched at the end of this answer.)
Dangers: data persistence. If you don't have proper data persistence in place, killing the running container (whether through a restart or a refresh) will also destroy any ephemeral data it holds.
Additional note: have you tried service mysql status and service mysql restart?
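To tell which of the two cases above applies, a quick sketch of how to inspect what actually starts the container (the pod name and container ID are placeholders):
kubectl get pod mysql-pod -o jsonpath='{.spec.containers[0].command}'
kubectl get pod mysql-pod -o jsonpath='{.spec.containers[0].args}'
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' <container-id>
If those show mysqld (or a wrapper script that execs it) as the main process, you are in the first case and restarting mysql means restarting the container/pod.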
I'm experiencing some weirdness with docker.
I have an Ubuntu server VM running in Windows Azure.
If I start a new Docker container for, e.g., WordPress like so:
sudo docker run --name some-wordpress --link some-mysql:mysql -p 80:80 -d wordpress
everything works nicely; I get a reasonably snappy site considering the low-end VM settings.
However, if I reboot the VM and start the containers:
sudo docker start some-mysql
sudo docker start some-wordpress
The whole thing runs very slowly; the response time for a single page goes up to some 2-4 seconds.
Removing the containers and starting new ones makes everything run normally again.
What can cause this?
I suspect it has to do with disk usage: does the MySQL container use local disk for storage? When you restart an existing Docker container, you reuse the existing volume, normally stored in a subfolder of /var/lib/docker, whereas a new container creates a new volume.
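One way to check whether the restarted container is reusing an old volume (and where it lives) is to inspect its mounts, e.g.:
docker inspect --format '{{json .Mounts}}' some-mysql
(The Mounts field is available on reasonably recent Docker versions; older ones expose a Volumes field instead.)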
I found a few search results saying that Linux on Azure doesn't handle "soft" reboots well and that things don't get reconnected as they should. A "hard" reboot supposedly fixes that.
Not sure if it helps, my Docker experience is all from AWS.
Your containers are running on a disk which is stored in blob storage with a maximum of 500 IOPS per disk. You can avoid hitting the disk (not very realistic with MySQL), add more disks and stripe them (RAID 0), or use SSDs (the D series in Azure). Depending on your use case, you might also rebase Docker completely onto ephemeral storage (/dev/sdb) - here's how for CoreOS. BTW, there are some (non-Docker) MySQL performance suggestions on azure.com.
When I run Couchbase server in a Docker container on GCE, using the ncolomer/couchbase image, I'm getting this error:
The maximum number of open files for the couchbase user is set too low.
It must be at least 10240. Normally this can be increased by adding
the following lines to /etc/security/limits.conf:
couchbase soft nofile <value>
couchbase hard nofile <value>
Where <value> is greater than 10240.
The docs for ncolomer/couchbase recommend updating /etc/init/docker.conf to add limit nofile 262144, but I'm not sure that's even available when using Docker on GCE.
I see a few options:
In the Dockerfile, run a script to modify /etc/security/limits.conf as suggested by the couchbase error.
Call ulimit -n 64000 in the Dockerfile
Any suggestions?
The problem with ulimit is that the limits are bounded by the limits of the docker host process.
This is related to the known Docker issue #4717 (and, to a lesser extent, #1916).
As I understand it, the two options you mentioned will not work, since they only set the ulimit on the child process (i.e. the container). Given that, I see no choice but to set the correct ulimit on the host before trying to increase it in your container.
The documented procedure should work fine, provided you are able to apply it.
I don't know the GCE platform very well, but if you have root access to your instance, you should just apply the changes to the /etc/init/docker.conf file, restart the Docker service, and fire up the Couchbase container.
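For reference, a rough sketch of that host-side change (upstart expects both a soft and a hard value in the limit stanza; the exact restart command depends on the host's init system):
echo "limit nofile 262144 262144" | sudo tee -a /etc/init/docker.conf
sudo service docker restart
docker run -d --name couchbase -p 8091:8091 ncolomer/couchbase
After that, the container should inherit the higher nofile limit from the Docker daemon and the warning should go away.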