Docker: importing a SQL script into a MySQL container - mysql

How can this Docker Compose file be modified to allow a SQL file to be imported into the MySQL container? I need to modify the database on the MySQL container.
version: '3'
services:
  devbox:
    build:
      context: ./
      dockerfile: DevBox.DockerFile
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .:/var/gen4
      - ./offers:/var/www/vhosts/offers
  devmysql:
    image: mysql:5.7
    platform: linux/x86_64
    environment:
      MYSQL_ROOT_PASSWORD: mypwd
      MYSQL_DATABASE: offers
    ports:
      - "3306:3306"
    restart: always

The official MySQL images execute any .sql, .sh, or .sql.gz files mounted into /docker-entrypoint-initdb.d/ when the database is first initialized.
So in the devmysql section of your compose file, do something like this:
volumes:
  - ./data:/docker-entrypoint-initdb.d/:ro
In this case, you'd want a data/ folder in the root of your project (wherever compose is run from), and in that data/ folder you can put a SQL file with whatever commands you want. They'll be run.
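Put together, the devmysql service could look something like this (a sketch; it assumes the dump lives at ./data/init.sql next to the compose file):
devmysql:
  image: mysql:5.7
  platform: linux/x86_64
  environment:
    MYSQL_ROOT_PASSWORD: mypwd
    MYSQL_DATABASE: offers
  ports:
    - "3306:3306"
  volumes:
    # anything in ./data ending in .sql, .sh or .sql.gz is executed on first initialization
    - ./data:/docker-entrypoint-initdb.d/:ro
  restart: always
Keep in mind the init scripts only run when the data directory is empty, i.e. on the very first start of the container.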
If you're not running the official images, you might be able to create your own image that manually does something similar.

Related

My MySQL container in Docker, which I configured to restart itself, keeps turning itself on and off

My MySQL container in Docker, which I configured to restart itself, keeps turning itself on and off after I ran docker-compose up --build.
My docker-compose setup defines several containers, including an Apache container, a MySQL container with a volume to persist the database data, and a PHP container.
I ran into this problem while carrying out various tests: I had just created the volume and wanted to check that the database was not losing the data inside it.
After several rounds of docker-compose down and docker-compose up, the mysql container stopped working, even after further docker-compose down and docker-compose build --no-cache commands.
Below I added my docker-compose.yml.
All image versions and the database settings come from environment variables used by the various Dockerfiles.
version: "3.2"
services:
php:
build:
context: './php/'
args:
PHP_VERSION: ${PHP_VERSION}
networks:
- backend
volumes:
- ${PROJECT_ROOT}/:/var/www/html/
container_name: php
apache:
build:
context: './apache/'
args:
APACHE_VERSION: ${APACHE_VERSION}
depends_on:
- php
- mysql
networks:
- frontend
- backend
ports:
- "80:80"
volumes:
- ${PROJECT_ROOT}/:/var/www/html/
container_name: apache
mysql:
image: mysql:${MYSQL_VERSION:-latest}
restart: always
ports:
- "3306:3306"
volumes:
- data:/var/lib/mysql
networks:
- backend
environment:
MYSQL_ROOT_PASSWORD: "${DB_ROOT_PASSWORD}"
MYSQL_DATABASE: "${DB_NAME}"
MYSQL_USER: "${DB_USERNAME}"
MYSQL_PASSWORD: "${DB_PASSWORD}"
container_name: mysql networks:
frontend:
backend:
volumes:
data:
Can anyone tell me what I can do in this case so that I don't lose the progress made so far within the database? From a little research, it seems the volume and the MySQL image may have been corrupted.
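One way to safeguard the data before further down/up cycles is to check the container logs and archive the contents of the named volume (a sketch; docker-compose normally prefixes the volume name with the project directory, so myproject_data is an assumed name, check docker volume ls):
# see why mysql keeps restarting
docker logs mysql
# copy the raw data directory out of the named volume into a tarball
docker run --rm -v myproject_data:/var/lib/mysql -v "$(pwd)":/backup alpine \
  tar czf /backup/mysql-data.tar.gz -C /var/lib/mysql .
With a copy of /var/lib/mysql on disk, the volume can be recreated or the image re-pulled without losing the databases.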

How can I import a SQL dump file? - Docker

I'm having trouble importing an .sql dump file with docker-compose.
With docker-entrypoint-initdb.d I should be able to load the .sql file...
However, when I run docker-compose up, the SQL file is not copied over to the container.
What am I doing wrong in my .yml script?
I have init.sql in the root directory where my compose file is.
Furthermore, the database shows up in Adminer, but not the data (tables, inserts, and so on) :(
version: '3'
services:
  mysql-dev:
    image: mysql:8.0.2
    #command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: sdaapp
    ports:
      - "3308:3306"
    volumes:
      - "./data:/var/lib/mysql:rw"
      - "./init:/docker-enttrypoint-initdb.d"
  pgdb-dev:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: password
      POSTGRES_DB: sdaapp
  admin:
    build:
      context: .
      dockerfile: Dockerfile
    image: adminer
    restart: always
    ports:
      - 8080:8080
THANKS for your help :)
Since your volume points to the ./init folder, you have to put your .sql script inside it (or change the path of your volume). Also note that there is a typo in your docker-compose.yml file: docker-enttrypoint-initdb.d should be docker-entrypoint-initdb.d.
And as pointed out in MySQL's documentation, the scripts are executed only the first time the container starts with an empty data directory. So you have to delete the existing database data (here, the local ./data directory) before running the container again for the script to be executed correctly.
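Put together, the corrected service could look like this (a sketch based on the compose file above; wiping the local ./data directory forces MySQL to re-initialize and re-run the init scripts):
mysql-dev:
  image: mysql:8.0.2
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: password
    MYSQL_DATABASE: sdaapp
  ports:
    - "3308:3306"
  volumes:
    - "./data:/var/lib/mysql:rw"
    # note the corrected spelling of the init directory
    - "./init:/docker-entrypoint-initdb.d"
Then running docker-compose down, removing ./data, and running docker-compose up again should get init.sql executed.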

Docker - how to alter my Docker Compose file to automate a bash script for the MySQL container?

I have the following Docker Compose setup and want to run a shell script to automate tasks like importing the DB into the MySQL database.
# Adopt version 2 syntax:
version: '2'
volumes:
  database_data:
    driver: local
services:
  ###########################
  # Setup the Nginx container
  ###########################
  nginx:
    image: nginx:latest
    ports:
      - 8080:80
    volumes:
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
    volumes_from:
      - php
  ###########################
  # Setup the PHP container
  ###########################
  php:
    build: ./docker/php/
    expose:
      - 9000
    volumes:
      - .:/var/www
  ###########################
  # Setup the Database (MySQL) container
  ###########################
  mysql:
    image: mysql:latest
    expose:
      - 3306
    volumes:
      - database_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: project
      MYSQL_USER: project
      MYSQL_PASSWORD: project
The best solution is to create a custom Dockerfile which extends the mysql image and adds a custom shell script that does what you want. For example:
start.sh
#!/bin/sh
mysqld &                                          # run the server in the background
until mysqladmin ping --silent; do sleep 1; done  # wait until it accepts connections
mysql -u project -pproject project < /path/to/backup.sql
wait                                              # keep mysqld in the foreground so the container stays up
Don't forget to add your backup.sql either to your Dockerfile or docker-compose.yml
Now, Dockerfile:
FROM mysql:latest
COPY start.sh /tmp/start.sh
COPY backup.sql /path/to/backup.sql
CMD ["/tmp/start.sh"]
If you change your backup.sql frequently, it makes no sense to add it to Dockerfile. Instead, put it under volumes in docker-compose.yml:
mysql:
  build: .
  expose:
    - 3306
  volumes:
    - ./backup.sql:/path/to/backup.sql
    - database_data:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: secret
    MYSQL_DATABASE: project
    MYSQL_USER: project
    MYSQL_PASSWORD: project
You can keep using the original image: load your setup script into the container as a config (using the long-form definition so you can set the execute permission), and then override the entrypoint to run your script (which should probably run the original entrypoint script once it finishes). So, something like:
mysql:
  image: mysql:latest
  expose:
    - 3306
  volumes:
    - database_data:/var/lib/mysql
  environment:
    MYSQL_ROOT_PASSWORD: secret
    MYSQL_DATABASE: project
    MYSQL_USER: project
    MYSQL_PASSWORD: project
  configs:
    - source: ./OverrideScript.sh
      target: /OverrideScript.sh
      # 0777 will work too, but you can't write to it either way
      mode: 0555
  entrypoint: /OverrideScript.sh
The other answers are right that the "Proper" way would be to make your own image. But TBH if your override script is relatively small and lightweight, the workaround isn't so bad, and it gets you out of having to rebuild your custom image every time MySQL releases a new image.
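The override script itself might look roughly like this (a sketch; /usr/local/bin/docker-entrypoint.sh is where current official mysql images keep their entrypoint, but verify the path for your tag):
#!/bin/sh
# custom setup goes here (copy files, tweak config, etc.)
echo "running pre-start setup"
# then hand control back to the image's original entrypoint so MySQL starts normally
exec /usr/local/bin/docker-entrypoint.sh mysqld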

Docker-compose mysql mount volume turns sql into folder

I am trying to use Docker to create a set of containers (wordpress and MySQL) that will help my local development with Wordpress. As we are running a live database, I want to mount a dump.sql file into the Docker mysql container. Below is my .yml file.
version: '2'
services:
  db:
    image: mysql:latest
    volumes:
      - ./data:/docker-entrypoint-initdb.d # ./data holds my dump.sql file
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress
    volumes:
      - ./wp-content/themes/portalV3:/var/www/html/wp-content/themes/portalV3
      - ./wp-content/plugins:/var/www/html/wp-content/plugins
      - ./wp-content/uploads:/var/www/html/wp-content/uploads
Everything works, but after ~10 seconds the docker container for mysql crashes. Going through the logs, I get the following error:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/dump.sql
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR: Can't initialize batch_readline - may be the input source is a directory or a block device.
On closer inspection (attaching to the rebooted mysql container) I see that indeed my dump.sql file wasn't transferred to the container, but a folder with the same name was created in /docker-entrypoint-initdb.d.
Can anyone help me understand how to get docker-compose to copy my dump.sql file and import it into the database?
Cheers,
Pieter
The problem you got with docker-entrypoint-initdb.d is that, because your source ./data is a directory and not a file, the destination (/docker-entrypoint-initdb.d) must be a directory too, and vice versa: a single-file source needs a single-file destination.
So either do
volumes:
  - ./data:/docker-entrypoint-initdb.d/
or
volumes:
  - ./data/mydump.sql:/docker-entrypoint-initdb.d/mydump.sql
Yes, that is how you should mount the .sql or .sh files, i.e. by adding a volume that maps them to the container's docker-entrypoint-initdb.d folder. But it's raising an error for some strange reason, maybe because the MySQL Docker version is old.
You could solve this by creating a custom image, i.e.:
Dockerfile
FROM mysql:5.7
COPY init.sql /docker-entrypoint-initdb.d/
This builds an image that runs the init script when the container is first started.
To use this in a compose file, put your SQL files and Dockerfile in a folder.
database
|---init.sql
|---Dockerfile
docker-compose.yml
version: '3'
services:
  mysqldb:
    image: mysqldb
    build: ./database
    container_name: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_USER=test
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=test
This way, you can still configure the environment variables easily.
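To rebuild and sanity-check it (a usage sketch; the credentials and the container_name mysql come from the compose file above):
docker-compose up --build -d
docker exec -it mysql mysql -utest -proot test -e "SHOW TABLES;"
If init.sql ran, its tables should show up in the output.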

How to create Mysql test database in dockerized Rails application?

I managed to dockerize my existing Rails application running on a MySQL database. But I wonder if it is possible to set up docker-compose to create the test database in the same container?
Here is my docker-compose.yml, and it works fine with MySQL for development:
version: '2'
volumes:
  db-data:
services:
  db:
    image: mysql:5.5
    restart: always
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: verysecret
      MYSQL_USER: appdb
      MYSQL_PASSWORD: secret
      MYSQL_DATABASE: appdb
    volumes:
      - db-data:/var/lib/mysql
  web:
    build: .
    command: bundle exec rails s -p 3000
    volumes:
      - .:/app
    ports:
      - "3000:3000"
    links:
      - db
    depends_on:
      - db
Can I somehow add one more database in the environment part?
With the MYSQL_DATABASE environment variable you can create only one DB per MySQL container. In your case, I think you should create a second DB container for the second database (isolating it from the "real" DB, which is a good practice).
If you really want to have the two databases in the same container, you will have to create a Dockerfile based on the MySQL image and add your own commands (RUN) to create the second DB.
HTH
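A variant of that custom-image idea, sketched here only as an assumption that the official mysql:5.5 image is kept: instead of RUN instructions, a small SQL script dropped into /docker-entrypoint-initdb.d/ (the mechanism from the earlier answers) can create the test database on the container's first start. The appdb_test name is just a guess matching a typical Rails config/database.yml:
-- create-test-db.sql, mounted into /docker-entrypoint-initdb.d/
CREATE DATABASE IF NOT EXISTS appdb_test;
GRANT ALL PRIVILEGES ON appdb_test.* TO 'appdb'@'%';
and in the db service:
volumes:
  - db-data:/var/lib/mysql
  - ./create-test-db.sql:/docker-entrypoint-initdb.d/create-test-db.sql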
Yes, you can, but it won't necessarily be a single container; docker-compose is a manager for coupling containers. If this is okay, I've added some steps that will help you configure your project. If not, I've also added how to run a single image from a docker-compose file.
You will want to start off by creating a docker-compose.yml file in the source directory for your project.
Then you'll want to add something like this inside your yml file (this was taken from Docker's quick-start documentation, modified to show MySQL instead of Postgres):
version: '3'
services:
  db:
    image: mysql
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
Details on how they create one can be found here:
https://docs.docker.com/compose/rails/#define-the-project.
Things to note:
web holds the details of the Rails container, so you will want to add an image property if you have already created your Rails image.
Also, build: . expects your Dockerfile to be in the same location as your project, so if you create this docker-compose.yml somewhere else, you'll have to provide the path.
depends_on makes sure the db container is started before Rails runs.
Once you have finished creating the docker-compose.yml file, run:
docker-compose build
followed by:
docker-compose up
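After the containers are up, the Rails databases (including the test database, if config/database.yml defines one) can be created from the web container, for example:
docker-compose run web rake db:create
This is the same pattern the Docker quick-start guide linked above uses for the initial database setup.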
If separating the containers isn't what you are looking for, then you might want to look into creating a single image running both applications, and then use something like this to run it from docker-compose:
version: '3'
services:
  app:
    image: {your-app-image}
    build: .
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
      - "3306:3306"
Note: some things might vary depending on how you create your image from the Dockerfile.