Docker. Create image based on MySQL with DB and User created - mysql

I want to create an image with Docker for my app.
The app uses MySQL. I need my image to be based on the MySQL image (mysql/mysql-server?).
In the Dockerfile I need to set some instructions to create a DB with a specific user/password, so my app can work with that DB.
I don't need tables, only an empty DB with a specific name and a user/password that can access this DB.
How can I do this?
I wanted something like:
FROM mysql/mysql-server
# Create MySQL DB
mysql -u root -e "CREATE DATABASE MyDB"
But I don't know the root user password here. It seems it is autogenerated?
How can I do this?

That image auto-generates the root password by default, as stated in the image's GitHub repository (https://github.com/mysql/mysql-docker). You can set the MYSQL_ROOT_PASSWORD environment variable in your Dockerfile to the password you want.
Apart from that, if what you want is to create a database at container startup, you can use the environment variable MYSQL_DATABASE.
More info about the supported environment variables here:
https://github.com/mysql/mysql-docker#docker-environment-variables
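For illustration, a minimal Dockerfile sketch along those lines; the passwords and user name below are placeholder values, not taken from the question:
FROM mysql/mysql-server
# Placeholder credentials - replace with your own
ENV MYSQL_ROOT_PASSWORD=my-secret-root-pw
ENV MYSQL_DATABASE=MyDB
ENV MYSQL_USER=myuser
ENV MYSQL_PASSWORD=my-secret-user-pw
Note that the database and user are not created at build time; the image's entrypoint creates them the first time a container is started from the image with an empty data directory.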

You have two solutions here:
1. [Easy] Use docker-compose and create a docker-compose.yml file like this one:
version: '3'
services:
  mariadb:
    container_name: mariadb
    image: mariadb:latest
    environment:
      - MYSQL_DATABASE=_YOUR_DB_
      - MYSQL_ROOT_PASSWORD=_YOUR_PASSWORD_
      - MYSQL_USER=_YOUR_USER_
      - MYSQL_PASSWORD=_YOUR_USER_PASSWORD_
...
This configuration will bring up a MariaDB database for you. If you still need to install docker-compose, you can check this page for the installation guide:
https://github.com/docker/compose
The final step is to go into the directory where you saved the docker-compose.yml and run:
docker-compose up
or, if you don't want to see the logs in the terminal, add the -d flag.
2. [A little more complicated] You can create a custom image for your needs. In this case, it is better to read the Dockerfile documentation and then look at the autogenerated default MariaDB Dockerfile to understand what exactly to do to achieve your goal.
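For the second option, a rough sketch of what such a Dockerfile could look like, assuming the stock mariadb image and the same placeholder values as above:
FROM mariadb:latest
ENV MYSQL_DATABASE=_YOUR_DB_
ENV MYSQL_ROOT_PASSWORD=_YOUR_PASSWORD_
ENV MYSQL_USER=_YOUR_USER_
ENV MYSQL_PASSWORD=_YOUR_USER_PASSWORD_
# Optional: any *.sql / *.sh files copied here run once, on first initialization
# COPY ./init/ /docker-entrypoint-initdb.d/
You would then build it with docker build -t my-mariadb . and start it with docker run -d my-mariadb.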

Related

Two MySQL docker containers

I have a server that already runs a MySQL server container on port 3306:3306 (built from a docker-compose.yml file).
I would like to run another MySQL container on port 3307:3306 from another docker-compose.yml. The problem is that for the second container the MYSQL_ROOT_PASSWORD is never set and I get an access denied error.
Both containers target different volumes.
Is it possible to run two MySQL containers from two different docker-compose.yml files on the same server?
You are able to run two instances of MySQL on the same host and they won't interfere with each other.
What I think is causing your issue is that the environment variable MYSQL_ROOT_PASSWORD (and the other environment variables) is only used when the container is started without an existing database. If a database already exists, then the root password stored in that database is used.
You need to find out what root password was set when the database was created and use that.
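If you don't need the data already in the second container's volume, one option (an assumption on my part, and it deletes that data) is to re-initialize so the environment variables are applied again:
docker-compose down -v   # also removes the volumes declared in this compose file
docker-compose up -d     # fresh start, MYSQL_ROOT_PASSWORD etc. are applied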
The problem is solved.
docker-compose.yml doesn't accept an environment variable value containing the $ character as-is. To solve it we must escape the character by doubling it: $$.
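For example, with a hypothetical password pa$word (not the real value from this question), the compose file would have to contain:
environment:
  # the literal password is pa$word; the $ is doubled so Compose
  # does not treat it as a variable reference
  - MYSQL_ROOT_PASSWORD=pa$$word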
Thanks for your help guys.

How to import a mysql dump file into a Docker mysql container

Greetings and thanks in advance. I'm new to Docker and docker-compose, and so far I've been watching a lot of videos and reading a lot of articles while trying things out.
I've got a front-end container and a back-end container that build and run on their own as a Dockerfile and in a docker-compose setup.
(I've been building with a Dockerfile first and then integrating the containers into docker-compose to make sure I understand things correctly.)
I'm at the point where I need my database. Since I'll use docker-compose, as I understand it, it should run on the same network as the React front end and Django back end.
I have a backup MySQL dump file that I'm working with. What I think I need to do is have a container running a MySQL server and serving out my tables (like I have it working locally). I haven't been able to figure out how to import the backup into my Docker MySQL container.
Any help is appreciated.
What I've tried so far is using Docker on the command line to outline the pieces I'll need in the Dockerfile and then what to move into docker-compose as mentioned above:
docker run -d --name root -e MYSQL_ROOT_PASSWORD=root mysql # to create my db container
Then I've tried a bunch of commands and permutations of commands in the CLI; here are some of my most recent trials and errors:
docker exec -i root mysql -uroot -proot --force < /Users/homeImac/Downloads/dump-dev-2020-11-10-22-43-06.dmp
ERROR 1046 (3D000) at line 22: No database selected
docker exec -i f803170ce38b sh -c 'exec mysql -uroot -p"root"' < /Users/homeImac/Downloads/dump-dev-2020-11-10-22-43-06.dmp
ERROR 1046 (3D000) at line 22: No database selected
docker exec -i f803170ce38b sh -c 'exec mysql -uroot -h 192.168.1.51 -p"root"' < /Users/homeImac/Downloads/dump-dev-2020-11-10-22-43-06.dmp
ERROR 1045 (28000): Access denied for user 'root'@'homeimac' (using password: YES)
I've scoured the web so far and I'm not sure where to go next. Have I got the right idea? If anyone has an example of how to import a database dump (in .dmp or .dmp.gz), once I get that working I'll actually do it in the docker-compose file.
Thinking about it, I just have to create the container and import, so I might not even need a Dockerfile.
I'll cross that bridge when I get there. This is what I'm thinking though:
db:
  image: mysql:5.7
  restart: always
  environment:
    MYSQL_DATABASE: 'app'
    etc etc
I've learned a lot super fast, maybe too fast. Thanks for any tips!
The answer to your question is given in the docker hub page of MySQL.
Initializing a fresh instance
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. You can easily populate your mysql services by mounting a SQL dump into that directory and provide custom images with contributed data. SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable.
In your docker-compose.yml use:
volumes:
  - ${PWD}/config/start.sql:/docker-entrypoint-initdb.d/start.sql
and that's it.
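Put together, a minimal service definition for this approach might look like the sketch below; the service name, credentials, and the mount target are placeholders, and it assumes the dump is a plain SQL file:
db:
  image: mysql:5.7
  environment:
    MYSQL_DATABASE: app
    MYSQL_ROOT_PASSWORD: root
  volumes:
    # anything ending in .sh, .sql or .sql.gz in this directory is executed
    # once, when the container starts with an empty data directory
    - ./dump-dev-2020-11-10-22-43-06.dmp:/docker-entrypoint-initdb.d/dump.sql
Mounting the file under a .sql name matters, because the entrypoint decides what to execute based on the file extension it sees inside the container.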
Here's the answer that worked for me, after working with two colleagues where I work who know the backend better than I do.
It was pretty simple actually. I created a directory in my repo that would be empty.
I added *.sql and *.dmp to my .gitignore so the dump files would not increase the size of my image.
That directory, using docker-compose, is used as a volume under the mysql service:
volumes:
  - ~/workspace/app:/workspace/app
The dump file is placed there and is imported into the mysql service when I run:
mysql -u app -papp app < /path/to/the/dumpfile
I can go in using docker exec and verify that not only the database is there but also the tables from my dump file.
For me, I also had to create a new superuser in my backend container through our Django app:
python3 manage.py createsuperuser
With that, after logging in on localhost:8000/api, everything was linked up between the mysql, backend, and frontend containers.
Hope this helps! I'm sure not all the details are the same for others, but using volumes I didn't have to copy any dump file in, and it ended up automatically imported and served. That was my big issue.
Another way:
docker exec -i containername mysql -uroot -ppassword mysql < dump.sql
run from the folder where dump.sql resides.
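If the dump is gzip-compressed (.dmp.gz / .sql.gz), a variant of the same idea is to decompress on the fly; the container name, credentials, and database name here are placeholders:
gunzip -c dump.sql.gz | docker exec -i containername mysql -uroot -ppassword app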

Docker Compose mysql environment variables vs Application .env database variables. Majorly confused

I have watched approximately 23.74 docker-compose tutorials for Laravel and MySQL containers!
Can someone please explain this to me???
When I create my docker-compose file I create a mysql container from a mysql image.
THEN
I have to enter variables that look like this:
environment:
  MYSQL_DATABASE: homestead
  MYSQL_USER: homestead
  MYSQL_PASSWORD: secret
  MYSQL_ROOT_PASSWORD: secret
  SERVICE_TAGS: dev
  SERVICE_NAME: mysql
What is the difference between these variables and the variables that I enter into the .env file of my Laravel app?
THESE ONES:
DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=3306
DB_DATABASE=laradb
DB_USERNAME=root
DB_PASSWORD=secret
WHAT IS THE DIFFERENCE????
What are the docker-compose variables doing, and what are these .env variables doing? And why am I setting these up twice?
The reason I am asking is that whenever I follow Docker tutorials to set up mysql, I can only get it to work if I use the exact variables provided by the teacher?? This makes no sense. What if I want to use my own variable values??!
As soon as I try to use my own variables the db breaks and I can't connect to it.
Is homestead some special db instance that is just for Laravel? This was not an issue when I did it all locally without Docker.
For example: the above docker-compose variables were used to create a mysql container, and when I connect to it with SQL Workbench I see a schema called 'homestead'. Now what do I do if I don't want that schema to be called homestead, or what if I want to add another schema?? It doesn't let me (permission denied).
I have now spent 3 days trying to create an empty Laravel app that connects to a db in a separate mysql container, which I can also reach from SQL Workbench to see the actual db. I want to be able to create the schema name I want in SQL Workbench and then set that schema as the db name in my Laravel .env file.
Please HELP! You don't have to solve this problem for me, but can you point me towards some helpful material that explains this stuff!! Specifically for docker-compose with mysql; I'm not looking to use standard docker commands in the terminal if possible.
.env file vs docker-compose.yml environment:
https://docs.docker.com/compose/env-file
https://docs.docker.com/compose/environment-variables
They have different scopes / precedence.
Passing environment variables in for the benefit of MySQL:
https://hub.docker.com/_/mysql
The MySQL container expects those environment variables to exist and have values.
Likewise, the Laravel container needs to be able to talk to the MySQL container, hence it needs the values to match, and that is why there is overlap.
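As an illustration of that overlap, here is a hedged sketch using the values from the question; the service name db is an assumption, and the key point is that DB_HOST refers to the Compose service name rather than localhost:
# docker-compose.yml - creates the database and user inside the db container
db:
  image: mysql:5.7
  environment:
    MYSQL_DATABASE: homestead
    MYSQL_USER: homestead
    MYSQL_PASSWORD: secret
    MYSQL_ROOT_PASSWORD: secret

# Laravel .env - tells the application how to reach that container
DB_CONNECTION=mysql
# the Compose service name, not localhost
DB_HOST=db
DB_PORT=3306
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret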
The bash command printenv might help tighten this up, as you can see which environment variables are exposed to which container (docker exec mysql_container_name bash -c 'printenv' vs docker exec laravel_container_name bash -c 'printenv').
https://github.com/reflexions/docker-laravel (as an example)
You've mentioned you don't mind being sent to references, so I've primarily done that - but I'm happy to elaborate in here or in comments if it still isn't making sense / I'm not addressing the main issue.

How to run a SQL script at every MySQL initialization?

I would like to run a SQL script at MySQL initialization. The script basically has some UPDATE commands and has to be run at each initialization. Basically, the idea is to update the root and user passwords at each initialization, with Vault credentials that are obtained at each database startup. The MySQL database is being deployed inside a Docker container.
In this scenario, is there a way to preset a SQL script that runs at every database initialization inside a Docker container? If so, please give an example of how to implement that. I do use docker-entrypoint.sh and foreground.sh for some customizations of this container.
mysqld accepts a parameter, --init-file, that specifies a file of SQL statements to run at each server start, and you can use that.
docker-compose.yml
version: "3.3"
services:
  db:
    image: mariadb:10.1
    environment:
      - MYSQL_ROOT_PASSWORD
    volumes:
      - ./init.sql:/script/init.sql
    command: "--init-file /script/init.sql"
init.sql
UPDATE mysql.user SET authentication_string=PASSWORD("mihai") WHERE USER="root";
UPDATE mysql.user SET plugin="mysql_native_password";
.env
MYSQL_ROOT_PASSWORD=rootpassword
Run the container and test it:
docker-compose up -d
docker-compose exec db mysql -hlocalhost -uroot -pmihai
select plugin from mysql.user where USER='root';
You can see that the plugin has also been updated, so both statements in the script worked.
You can remove the command and test with the original password as well. Make sure to remove the volumes between runs.

Issues with custom mysql docker image

I have been trying to dockerize my Node.js application, which uses MySQL for its database, and I have been having issues with how to dockerize MySQL. I want to build my custom Docker image on top of mysql which, when run, creates the database and tables so that my Node.js application can use them. Below is the code of my custom mysql Docker image:
FROM mysql:5.7
ENV MYSQL_ROOT_PASSWORD root
ENV MYSQL_DATABASE degreeclearencedatabase
ENV MYSQL_USER root
ENV MYSQL_PASSWORD 123456
COPY ./sqlscripts/ /docker-entrypoint-initdb.d/
EXPOSE 3306
CMD ["mysqld"]
And here are my SQL scripts:
CREATE TABLE employees (
first_name varchar(25),
last_name varchar(25),
department varchar(15),
email varchar(50)
);
While building this image I got the following logs:
root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
Now when I run a container from that custom image, I am unable to log in to mysql and get the following error:
Can't connect to local MySQL server through socket
'/var/run/mysqld/mysqld.sock'
I would be really grateful if someone could help me identify what mistakes I am making in my Dockerfile.
Thanks,