How to mount google disk to docker run -v - google-compute-engine

Is it possible to use docker-machine with a Google disk?
I have a docker-machine instance running via the docker-machine driver. Can I then run docker run -v "path to google disk" from the terminal / docker-machine?

That's an interesting use case. There isn't a Volume Plugin to do that at the moment. But I may look into it (I just experimented with writing a Volume Plugin for Google Cloud Storage).
However, you should be able to mount the disk on the Docker Machine itself, and then reference it as you would any other filesystem directory.
E.g.,
Attach a disk to the instance
Format and mount (e.g. mount to /mnt/mydisk)
Run docker run -ti -v /mnt/mydisk:/data busybox /bin/sh
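If you haven't attached and mounted a disk before, a rough sketch of the first two steps from the gcloud CLI and the instance itself (the disk name mydisk, instance name my-docker-machine, zone and device path /dev/sdb are all assumptions; adjust them to your setup):
gcloud compute disks create mydisk --size 200GB --zone us-central1-a
gcloud compute instances attach-disk my-docker-machine --disk mydisk --zone us-central1-a
# then, on the instance (this formats the disk, wiping anything on it):
sudo mkfs.ext4 -F /dev/sdb
sudo mkdir -p /mnt/mydisk
sudo mount /dev/sdb /mnt/mydisk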

Related

Move MySQL /var/lib/mysql to shared volume

I'm running a container with MySQL 8.0.18 in Docker on a Synology NAS. I'm just using it to support another container (Mediawiki) on that box. MySQL has one volume mounted at path /var/lib/mysql. I would like to move that to a shared volume so I can access it with File Station and also periodically back it up. How can I move it to the docker share shown below without breaking MySQL?
Here are available shares on the Synology nas.
Alternatively, is there a way I can simply copy that /var/lib/mysql to the shared docker folder? That should work as well for periodic backups.
thanks,
russ
EDIT: Showing the result after following Zeitounator's plan. Before running the docker command (2.) I created the mediawiki_mysql_backups and 12-29-2019 folders in File Station. After running 2. and 3. all the files from mysql are here and I now have a nice backup!
Stop your current container to make sure mysql is not running.
Create a dummy temp container where you mount the 2 needed volumes. I tend to use busybox for this kind of task => docker run -it --rm --name temp_mysql_copy -v mediaviki-mysql:/old-mysql -v /path/to/ttpe/docker:/new-mysql busybox:latest
Copy all files => cp -a /old-mysql/* /new-mysql/
Exit the dummy container (which will clean itself up if you used my command above)
Create a new container with mysql:8 image mounting the new folder in /var/lib/mysql (you probably need to do this in your syno docker gui).
If everything works as expected, delete the old mediaviki-mysql volume
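For step 5, if you prefer the command line over the Synology GUI, a minimal sketch (the container name is an assumption; the host path reuses the one from step 2, and since the data directory is already populated it keeps its existing MySQL credentials):
docker run -d --name mediawiki-mysql -v /path/to/ttpe/docker:/var/lib/mysql mysql:8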

Container Optimized OS Examples

I've followed all the documentation here: https://cloud.google.com/container-optimized-os/docs/ to try to upgrade my existing configuration that used container-vm images that have now been deprecated, to a new configuration using container-optimized OS. But nothing works! I can't get the Docker container to bind to port 80 (i.e. -p 80:80) and also my Docker container can't seem to write to /var/run/nginx.pid (yes I'm using nginx in my Docker container). I followed the instructions to disable AppArmour and I've also tried creating an AppArmour profile for nginx. Nothing works! Are there any examples out there using container-optimized OS that don't just use the busybox image and print "Hello World" or sleep! How about an example that opens a port and writes to the file system?
I just installed Apache Guacamole on Container Optimized OS and it works like a charm. There are some constraints in place for security.
The root filesystem ("/") is mounted as read-only, with some portions of it re-mounted as writable:
/tmp, /run, /media, /mnt/disks and /var/lib/cloud are all mounted using tmpfs and, while they are writable, their contents are not preserved between reboots.
The directories /mnt/stateful_partition, /var and /home are mounted from a stateful disk partition, which means these locations can be used to store data that persists across reboots. For example, Docker's working directory /var/lib/docker is stateful across reboots.
Among the writable locations, only /var/lib/docker and /var/lib/cloud are mounted as "executable" (i.e. without the noexec mount flag).
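As an illustration of the stateful locations, a hedged sketch of giving a container a writable directory that survives reboots (the directory and container name are assumptions; /usr/share/nginx/html is the content directory of the official nginx image):
sudo mkdir -p /var/nginx-data        # /var is on the stateful partition, so this persists across reboots
docker run -d --name web -p 80:80 -v /var/nginx-data:/usr/share/nginx/html nginx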
If you need to accept HTTP (port 80) connections from any source IP address, run the following command on your Container-Optimized OS instance:
sudo iptables -w -A INPUT -p tcp --dport 80 -j ACCEPT
In general, it is recommended you configure the host firewall as a systemd service through cloud-init.
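A hedged sketch of what that cloud-init config might look like, supplied as the instance's user-data metadata (the unit name is an assumption; the rule matches the iptables command above):
#cloud-config

write_files:
- path: /etc/systemd/system/config-firewall.service
  permissions: 0644
  owner: root
  content: |
    [Unit]
    Description=Configures the host firewall

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStart=/sbin/iptables -w -A INPUT -p tcp --dport 80 -j ACCEPT

runcmd:
- systemctl daemon-reload
- systemctl start config-firewall.service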
PS: Container-Optimized OS is capable of auto updates. This mechanism can be used to update a fleet of Compute Engine instances.
I can't get the Docker container to bind to port 80 (ie. -p 80:80) and also my Docker container can't seem to write to /var/run/nginx.pid (yes I'm using nginx in my Docker container).
I think you might be hitting some GCE firewall problem. The best way would be to verify/debug it step by step:
Try running a stupidly simple nginx container:
"-d" asks Docker to run it in daemon mode, "-p 80:80" maps the HTTP port, and "--name nginx-hello" names to container to nginx-hello.
docker run -d --name nginx-hello -p 80:80 nginx
(optional) Verify that the container is running correctly: you should see the "nginx-hello" container listed.
docker ps
Verify that nginx is working locally: you should see a valid HTTP response.
curl localhost:80
If all the above steps check out, then you are likely facing a GCE firewall problem:
How do I enable http traffic for GCE instance templates?
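If the container works locally but is unreachable from outside, a hedged sketch of opening port 80 at the GCE firewall level (the rule name allow-http is an assumption; depending on your setup you may want to scope it with target tags instead of applying it to all instances):
gcloud compute firewall-rules create allow-http --allow tcp:80 --source-ranges 0.0.0.0/0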

Connecting to percona docker from a java docker container

I know there have been many similar questions, but none of them are what I want. I'm following this because I specifically need 5.5, at least for now. My java project (which accesses mysql) is in a container I built with
docker build -t projectname-testing .
The Dockerfile is pretty standard, it just copies over a built tarball and extracts it to a specific folder. The CMD is a shell script run_dev_server.sh that just launches the server with dev configurations rather than production ones.
I created a percona docker container with the command given in the link with
docker run --name projectname-mysql-server -e MYSQL_ROOT_PASSWORD="" -d percona:5.5
So now, the way I see it, I just need to link the two as mentioned in the link:
docker run -p 3306:3306 --name projectname-local --link projectname-mysql-server projectname-testing
Which gives me
docker: Error response from daemon: Cannot link to a non running container: /projectname-mysql-server AS /projectname-local/projectname-mysql-server.
ERRO[0000] error getting events from daemon: net/http: request canceled
Which isn't very helpful and doesn't tell me what happened. Am I understanding this process wrong? What should I be doing?
First of all, I would recommend using the official Percona docker image from Docker Hub, instead of building your own image. The official image has a 5.5 version; https://hub.docker.com/_/percona/
You can extend this image if you need specific changes (such as a custom configuration), for example:
FROM percona:5.5
COPY my-config.cnf /etc/mysql/conf.d/
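Then build and run your extended image in place of the stock one (the tag my-percona:5.5 is just an example name):
docker build -t my-percona:5.5 .
docker run --name projectname-mysql-server -d my-percona:5.5    # plus the same -e flags you used before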
Important: I notice you are publishing port 3306 (-p 3306:3306). Publishing a port makes it publicly accessible on the host's network-interface. You should only do this if you have external software that needs to connect to the database. If only your application needs access to the database, publishing the port is not needed, because containers can connect with each other through the docker container-container network, which is "private" and not reachable from outside the host.
The --link option on the default network is a legacy option that is still around for backward compatibility, but should not be used for most situations. The --link option has a number of limitations;
legacy links are not dynamic; it's not possible to replace a linked container without re-creating all containers linked to that container
restarting a linked container can break the link, with no option to re-establish a link
legacy links are uni-directional
environment variables are shared between containers, which can easily lead to leaking (e.g.) credentials to other containers.
Docker 1.9 introduced custom docker networks, which allow containers to discover and communicate with each other by name, without the limitations of legacy links.
A simple example;
create a network for your application;
docker network create mynet
create a database container, and attach it to the network; there is no need to publish its ports for other containers to connect to it. (I'm using an nginx image here, just to illustrate the concept);
docker run -d --name db --network mynet nginx:alpine
create an "application" container and attach it to the same network; doing so
allows it to communicate with the db container over that network;
docker run -dit --name app --network mynet alpine sh
The application container can now connect to the db container, using its name as hostname (db); to illustrate this, open a shell in the app container, install curl and connect to http://db:80;
docker exec -it app sh
/ # apk add --no-cache curl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ca-certificates (20161130-r1)
(2/4) Installing libssh2 (1.7.0-r2)
(3/4) Installing libcurl (7.52.1-r3)
(4/4) Installing curl (7.52.1-r3)
Executing busybox-1.25.1-r0.trigger
Executing ca-certificates-20161130-r1.trigger
OK: 5 MiB in 15 packages
/ # curl http://db:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
You can read more about networks (also how to dynamically attach and detach a container from a network) in the "docker container networking" section of the documentation: https://docs.docker.com/engine/userguide/networking/
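Applied to your containers, a minimal sketch (the network name, database password and JDBC URL are assumptions; the container and image names come from your question):
docker network create mynet
docker run -d --name projectname-mysql-server --network mynet -e MYSQL_ROOT_PASSWORD=mysecret percona:5.5
docker run -d --name projectname-local --network mynet projectname-testing
# inside projectname-local, the database is reachable as host "projectname-mysql-server", e.g.
# jdbc:mysql://projectname-mysql-server:3306/yourdb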

How do I backup data from MySQL container into a shared volume?

If I'm working with a containerized MySQL database that wasn't originally run with shared volume options, what's the easiest way to sort of externalize the data? Is there still a way to modify the container so that it shares its data with the Docker host in a specified directory?
Note: if you're still having problems with this question, please comment so I can improve it further.
Official Docker documentation provides a great overview on how to backup, restore, or migrate data volumes. For my problem, in particular, I did the following:
Run a throw-away Docker container that runs Ubuntu, shares volumes with the currently running MySQL container, and backs up the database data on the local machine (as described in the overview):
docker run --rm --volumes-from some-mysql -v /path/to/local/directory:/backup ubuntu:15.10 tar cvf /backup/mysql.tar /var/lib/mysql
(The official MySQL Docker image uses /var/lib/mysql for storing data.)
The previous step will result in the creation of /path/to/local/directory/mysql.tar on the Docker host. This can now be extracted like:
tar -xvf mysql.tar
(Assuming you first cd /path/to/local/directory.) The resulting directory (var/lib/mysql) can now be used as a shared volume with the same instance, or any other instance of containerized MySQL.
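For example, a hedged sketch of starting a fresh containerized MySQL directly on top of the extracted data (the container name is an assumption; because the data directory is already populated, it keeps its original credentials):
docker run -d --name mysql-restored -v /path/to/local/directory/var/lib/mysql:/var/lib/mysql mysql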

Installation of system tables failed! boot2docker tutum/mysql mount file volume on Mac OS

I have trouble mounting a volume on tutum/mysql container on Mac OS.
I am running boot2docker 1.5
When I run
docker run -v $HOME/mysql-data:/var/lib/mysql tutum/mysql /bin/bash -c "/usr/bin/mysql_install_db"
I get this error
Installation of system tables failed! Examine the logs in /var/lib/mysql for more information.
Running the above command also creates an empty $HOME/mysql-data/mysql folder.
The tutum/mysql container runs smoothly when no mounting occurs.
I have successfully mounted a folder on the nginx demo container, which means that the boot2docker is setup correctly for mounting volumes.
I would guess that it's just a permissions issue. Either find the uid of the mysql user inside the container and chown the mysql-data dir to that user, or use a data container to hold the volumes.
For more information on data containers see the official docs.
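For the permissions route, a rough sketch (the uid shown is a placeholder; look it up in the image first and substitute whatever it reports):
docker run --rm --entrypoint id tutum/mysql mysql      # prints the uid/gid of the mysql user inside the image
sudo chown -R 999:999 $HOME/mysql-data                 # give that uid ownership of the host directory (999 is a placeholder)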
Also note that as the Dockerfile declares volumes, mounting is taking place whether or not you use the -v argument to docker run - it just happens in a directory on the host controlled by Docker (under /var/lib/docker) instead of a directory chosen by you.
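If you want to see where Docker placed such a volume on the host, a hedged sketch (on older Docker/boot2docker releases the field is .Volumes rather than .Mounts, and with boot2docker the reported path lives inside the boot2docker VM, not directly on your Mac):
docker inspect -f '{{ json .Mounts }}' <container-name>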
I've also had a problem starting a mysql docker container with the error "Installation of system tables failed". There were no changes to the docker image, and there was no recent update on my machine or docker. One thing I was doing differently was using images that could take up 5GB or more of memory during testing.
After cleaning up dangling images and volumes, I was able to start the mysql image as usual.
This blog seems to have good instructions and explains all the variations of cleaning up with docker.