I'm dumping a database to a SQL dump file:
docker exec mysql sh -c 'exec mysqldump --all-databases -uroot -ppassword' > all-databases.sql
Then I'm using a Dockerfile to build a MySQL image and run it as a container:
FROM mysql:5.6.41
# needed for initialization
ENV MYSQL_ROOT_PASSWORD=whateverPassword
ADD all-databases.sql /docker-entrypoint-initdb.d/
EXPOSE 3306
When I run the container, if I exec into it, can I access the all-databases.sql file and see the contents of my database in plaintext inside the Docker image?
Currently, if I look in /docker-entrypoint-initdb.d/ I see all-databases.sql, but I don't know where that file is stored or whether it's encrypted.
If you docker exec into the container, the file will be unencrypted. (It's just a text file, and you can look at it with more on most base images.)
However, if you can run any Docker command at all, then generally it's trivial to get unrestricted root access on the system. (Consider using docker run -v /etc:/host-etc to add yourself to /etc/sudoers or to allow root logins with no password.)
Also remember that anyone who has the image can docker run it and see the file there, if that matters to your security concerns, and they can easily run docker history to see the database root password you've set. If you're looking for a single file and have root access on the host anyway, you can find it without too much effort under /var/lib/docker.
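For example, a quick check (assuming the image was built and tagged as my-mysql; the tag is an example, not from the question):
docker run --rm my-mysql head /docker-entrypoint-initdb.d/all-databases.sql
docker history --no-trunc my-mysql | grep MYSQL_ROOT_PASSWORD
The first command prints the beginning of the dump in plaintext; the second shows the root password baked into the image metadata by the ENV line.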
I have a MySQL Docker container running on my local Windows machine. I want to load the Employees database into that container.
Employees Database Reference: https://dev.mysql.com/doc/employee/en/
I tried using MySQL Workbench and "Run SQL Script", but it throws the error below:
[WinError 32] The process cannot access the file because it is being used by another process:
'C:\\Users\\roul\\AppData\\Local\\Temp\\tmp4fbw2bb4.cnf'
After reading some articles, I think one option is to mount the script's location as a volume in the container and run the script from the Docker command prompt, but I'm unable to do it.
Has anyone here already done this?
Find the datadir of your MySQL server:
SHOW VARIABLES WHERE Variable_Name LIKE "datadir";
Copy the contents of the folder into your datadir (the trailing /. copies the folder's contents rather than the folder itself; you may want to improve this so you don't clutter the datadir):
docker cp test_db-master/. CONTAINER:/var/lib/mysql/
Run the script inside the container:
docker exec -i CONTAINER /bin/bash -c "cd /var/lib/mysql/ && /usr/bin/mysql -u root --password=123456 < /var/lib/mysql/employees.sql"
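Alternatively, and closer to the volume idea from the question, you could mount the dump directory into the container instead of copying it into the datadir. A rough sketch (the container name, mount path, and password are examples; adjust the host path syntax on Windows):
docker run -d --name mysql1 -e MYSQL_ROOT_PASSWORD=123456 -v "$PWD/test_db-master":/sql mysql
docker exec -i mysql1 /bin/bash -c "cd /sql && /usr/bin/mysql -u root --password=123456 < employees.sql"
The cd matters because employees.sql loads its .dump files via relative paths.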
I know how to restore a dump file from mysqldump. Now I am attempting to do that using Kubernetes and a Docker container. The database files are on a persistent (NFS) mount. The container cannot be accessed from outside the cluster, as there is no need for anything external to touch it.
I tried:
kubectl run -i -t dbtest --image=mariadb --restart=Never --rm=true --command -- mysql -uroot -ps3kr37 < dump.sql
and
kubectl exec mariadb-deployment-3614069618-mn524 -i -t -- mysql -u root -p=s3kr37 < dump.sql
But neither command worked: there were errors about TTYs, sockets, and other things hinting that I am missing something vital here.
What am I not understanding here?
I could just stop the deployment, scp the database files, and restart the container and hope for the best. However, what can go right?
The question Install an sql dump file to a docker container with mariaDB sure looks like a duplicate but is not: first, I am on Linux, not Windows, and more importantly the answers are all about initialising with a dump. I want to be able to trash the data and revert to the dump data. This is a test system that will eventually become the "live" one, so I need to restore from many potential dumps.
As described here, you can use the following command to restore a DB on a Kubernetes pod from a dump on your machine. Note the plain -i with no -t: stdin is redirected from a file rather than a terminal, and requesting a TTY in that situation is what produces the TTY errors mentioned in the question.
$ kubectl exec -i {{podName}} -n {{namespace}} -- mysql -u {{dbUser}} -p{{password}} {{DatabaseName}} < {{scriptName}}.sql
Example:
$ kubectl exec -i mysql-58 -n sql -- mysql -u root -proot USERS < dump_all.sql
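To check that the restore worked, something like this should list the restored tables (reusing the names from the example above):
$ kubectl exec -i mysql-58 -n sql -- mysql -u root -proot -e "SHOW TABLES" USERS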
What I did was this:
Create an NFS mount with two subdirectories: mysql and initd.
In initd, I added several .sql files, including the dump.
Mount initd as /docker-entrypoint-initdb.d in the deployment (see the sketch below). This causes all the files to be read at initialisation time, provided that it is the first time we run.
The mysql directory is mounted as /var/lib/mysql and contains all the mariaDB files.
If I need to revert, I trash all the contents of the mysql directory and re-create the deployment.
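A minimal sketch of the relevant deployment fragment, assuming the NFS share is exposed through a PersistentVolumeClaim named nfs-pvc (all names here are examples, not from the original setup):
containers:
- name: mariadb
  image: mariadb
  volumeMounts:
  - name: nfs
    subPath: initd
    mountPath: /docker-entrypoint-initdb.d
  - name: nfs
    subPath: mysql
    mountPath: /var/lib/mysql
volumes:
- name: nfs
  persistentVolumeClaim:
    claimName: nfs-pvc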
This should work:
kubectl --kubeconfig=k8s-XXXXXXX-kubeconfig.yaml exec -i ddevdb-XXXXX -- mysql -u root -h mysqlservice -proot drupal < your-dump.sql
The kubeconfig flag is optional; DigitalOcean, for example, provides such a file so you can run your commands from your local machine.
To see if everything looks good:
kubectl --kubeconfig=k8s-XXXXXXX-kubeconfig.yaml run -it --rm --image=mariadb:10.4 --restart=Never mysql -- mysql -h mysqlservice -proot
After which you'll have a mysql prompt running inside the cluster.
I'm new to Docker. I have two microservices running in two containers, and I would like to create a simple database for them.
I created it like this:
docker run --net=kajsnetwork -d -e MYSQL_ROOT_PASSWORD='mypassword' -v /storage/mysql1/mysql-datadir:/var/lib/mysql mysql
I enter the container using:
docker exec -it containernumber /bin/bash
and then I created the database... But when I looked in /var/lib/mysql on the host, there was nothing new: no sign of the database I had created from inside the container. Did I do something wrong?
I would like to have the database data stored on the host but the server running in a Docker container (is that a good solution?). How do I do this correctly?
You should not have to docker exec to create an instance: the container should already have one.
The doc mentions:
The -v /my/own/datadir:/var/lib/mysql part of the command mounts the /my/own/datadir directory from the underlying host system as /var/lib/mysql inside the container, where MySQL by default will write its data files.
So the order matters: the host directory comes first, the container path second.
The docker cmd option -v /storage/mysql1/mysql-datadir:/var/lib/mysql indicates that you are mounting host directory /storage/mysql1/mysql-datadir to /var/lib/mysql as a data volume of the container.
So if you check /var/lib/mysql from inside the container, you should see the same contents as /storage/mysql1/mysql-datadir on your host machine.
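As a sketch, you can also let the image create the database for you at startup instead of creating it by hand through docker exec (MYSQL_DATABASE is supported by the official image; the database name mydb is an example):
docker run --net=kajsnetwork -d -e MYSQL_ROOT_PASSWORD='mypassword' -e MYSQL_DATABASE=mydb -v /storage/mysql1/mysql-datadir:/var/lib/mysql mysql
ls /storage/mysql1/mysql-datadir
After startup, the host directory should contain a mydb/ subdirectory along with the other MySQL data files.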
More details:
https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume
What is the optimal way to load in a SQL dump when using docker-compose + Docker automated builds?
I have been ignoring docker-compose for the moment and trying to understand Docker and its automated builds first, but I have come to realize that I will probably need docker-compose if I want to accomplish my project goal: to use one command and from it get a fully working 3-site Docker cluster:
1xHAProxy
3xUbuntu/wp
3xMysqld
In my Dockerfile I can just include the db.sql from my GitHub repo like:
ADD db.sql /tmp/db.sql
I'm failing to find a best practice for how I should load my DB without writing any commands outside of the build.
I want to know your solution to this using a Dockerfile or Compose.
By just executing one of the commands below, a MySQL image (FROM mysql with ADD db.sql db.sql) should be built/run while loading db.sql into the MySQL database wp:
Dockerfile
$ docker run -d user/repo:tag
docker-compose.yml
$ docker-compose up
If I am totally on the wrong path here, please give me some references. I could also mention that I am planning to use CoreOS once I feel OK with Docker, so if best practice on a CoreOS > Docker setup is something else, let me know!
There are two options for loading a SQL file at build or run time:
The first would be to base your MySQL image on the official image and place your SQL file in /docker-entrypoint-initdb.d (using something like ADD my.sql /docker-entrypoint-initdb.d/ in the Dockerfile). The official image has a fairly complex entrypoint script (https://github.com/docker-library/mysql/blob/master/5.7/docker-entrypoint.sh) which starts MySQL, initializes a username and password, and runs the scripts from the /docker-entrypoint-initdb.d folder.
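A minimal sketch of this first option (the password and database name are placeholder values, not from the question):
FROM mysql:5.7
# read by the entrypoint on first initialization
ENV MYSQL_ROOT_PASSWORD=example
ENV MYSQL_DATABASE=wp
ADD db.sql /docker-entrypoint-initdb.d/
On first start with an empty data directory, the entrypoint creates the wp database and replays db.sql into it.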
The other option would be to do something like the answer at https://stackoverflow.com/a/25920875/684908 and just add a command such as:
COPY dump.sql /tmp/
# start mysqld in the background, give it time to come up, then load the dump
RUN /bin/bash -c "/usr/bin/mysqld_safe &" && \
    sleep 5 && \
    mysql -u root -e "CREATE DATABASE mydb" && \
    mysql -u root mydb < /tmp/dump.sql
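To tie this back to the single-command goal from the question, a docker-compose sketch along these lines should work (the service and directory names are assumptions):
db:
  build: ./db
  environment:
    - MYSQL_ROOT_PASSWORD=example
    - MYSQL_DATABASE=wp
Here ./db is a directory containing the Dockerfile and db.sql above, and a single docker-compose up builds the image and initializes the database.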
I am trying to automate the installation and running of a set of linked Docker containers using fig. The configuration is composed of a container running RStudio linked to a container running MySQL, such that I can query the MySQL database from RStudio.
On first run, I would like to create the MySQL container from the base MySQL image, and populate it with a user and database. From the command line, something like this:
#Get the latest database file
wget -P /tmp http://ergast.com/downloads/f1db.sql.gz && gunzip -f /tmp/f1db.sql.gz
#Create the database container with user, password and database
docker run --name ergastdb -e MYSQL_USER=ergast -e MYSQL_ROOT_PASSWORD=mrd -e MYSQL_DATABASE=f1db -d mysql
#Populate the database
docker run -it --link=ergastdb:mysql -v /tmp:/tmp/import --rm mysql sh -c 'exec mysql -h$MYSQL_PORT_3306_TCP_ADDR -P$MYSQL_PORT_3306_TCP_PORT -uergast -pmrd f1db < /tmp/import/f1db.sql'
#Fire up RStudio and link to the MySQL db
docker run --name f1djd -p 8788:8787 --link ergastdb:db -d rocker/hadleyverse
If I could get hold of a database image with the data preloaded, I guess that something like the following fig.yml script could link the elements?
gdrive:
  command: echo created
  image: busybox
  volumes:
    - "~/Google Drive/shareddata:/gdrive"

dbdata:
  image: mysql_preloaded
  environment:
    - MYSQL_USER=ergast
    - MYSQL_ROOT_PASSWORD=mrd
    - MYSQL_DATABASE=f1db

rstudio:
  image: rocker/hadleyverse
  links:
    - dbdata:db
  ports:
    - "8788:8787"
  volumes_from:
    - gdrive
My question is: can I use a one-shot fig step to create the dbdata container, then perhaps mount a persistent volume, link to it, and initialise the database, presumably as part of an initial fig up? If I then start and stop containers, I don't want to run the db initialisation step again, just link to the data volume container that contains the data I previously installed.
I also notice that the MySQL Docker image looks like it will support arbitrary datadir definitions ("Update entrypoints to read DATADIR from the MySQL configuration directly instead of assuming /var/lib/docker"). As I understand it, the current definition of the MySQL image prevents mounting (and hence persisting) the database contents within the database container. I guess this might make it possible to create a mysql_preloaded image, but I don't think the latest version of the MySQL Docker script has been pushed to Docker Hub just yet, and I can't quite see how fig might then be able to make use of this alternative pathway.
Some options:
Edit the fig.yml to run a custom command that is different from the default image command/entrypoint.
From http://www.fig.sh/yml.html (example):
command: bundle exec thin -p 3000
Start the container locally, modify it and then commit it as a new image.
Modify the MySQL image's docker-entrypoint.sh file to do your custom initialization.
https://github.com/docker-library/mysql/blob/567028d4e177238c58760bcd69a8766a8f026e2a/5.7/docker-entrypoint.sh
Couldn't you just roll your own version of the MySQL docker image? The official one from MySQL "upstream" is available at https://github.com/mysql/mysql-docker/blob/mysql-server/5.7/Dockerfile
What if you simply make your own copy of that, remove the VOLUME line (line 11), and then you can:
docker build -t my_mysql .
docker run -d --name=empty_db my_mysql ...
# add data to the database running in the container
docker commit empty_db primed_db
docker rm -v empty_db
docker run -d --name=instance1 primed_db
docker run -d --name=instance2 primed_db
which should leave you with two running "identical" but fully isolated instances.