Execute SQL script on docker compose - mysql

I have a project that starts when ./entrypoint.sh or docker-compose up is run from the root directory of the project and generates the Swagger API interface, but the API calls return empty responses with no data.
If I run it against MySQL on localhost without Docker, it works perfectly fine. How do I load the data into the MySQL container?
entrypoint.sh
#!/bin/bash
docker network create turingmysql
docker container run -p 3306:3306 --name mysqldb --network turingmysql -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=tshirtshop -d mysql:5.7
docker-compose build
docker-compose up
Dockerfile
FROM mysql:5.7
ADD ./database/tshirtshop.sql /docker-entrypoint-initdb.d
#### Stage 1: Build the application
FROM openjdk:8-jdk-alpine as build
# Set the current working directory inside the image
WORKDIR /app
# Copy maven executable to the image
COPY mvnw .
COPY .mvn .mvn
# Copy the pom.xml file
COPY pom.xml .
# Build all the dependencies in preparation to go offline.
# This is a separate step so the dependencies will be cached unless
# the pom.xml file has changed.
RUN ./mvnw dependency:go-offline -B
# Copy the project source
COPY src src
# Package the application
RUN ./mvnw package -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
#### Stage 2: A minimal docker image with command to run the app
FROM openjdk:8-jre-alpine
ARG DEPENDENCY=/app/target/dependency
# Copy project dependencies from the build stage
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","com.turing.ecommerce.TuringApplication"]
docker-compose.yml
version: '3.7'

# Define services
services:

  # App backend service
  app-server:
    # Configuration for building the docker image for the backend service
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080" # Forward the exposed port 8080 on the container to port 8080 on the host machine
    restart: always
    depends_on:
      - mysqldb # This service depends on mysql. Start that first.
    environment: # Pass environment variables to the service
      SPRING_DATASOURCE_URL: jdbc:mysql://mysqldb:3306/tshirtshop?useSSL=false&useLegacyDatetimeCode=false&serverTimezone=UTC
      SPRING_DATASOURCE_USERNAME: root
      SPRING_DATASOURCE_PASSWORD: root
    networks: # Networks to join (Services on the same network can communicate with each other using their name)
      - turingmysql

  # Database Service (Mysql)
  mysqldb:
    image: mysql:5.7
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_DATABASE: tshirtshop
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - turingmysql

# Volumes
volumes:
  db-data:

# Networks to be created to facilitate communication between containers
networks:
  turingmysql:

Do you have two Dockerfiles? Looks like you built your own MySQL container?
Otherwise, these shouldn't be part of your Java multi-stage build
FROM mysql:5.7
ADD ./database/tshirtshop.sql /docker-entrypoint-initdb.d
Assuming that you did build a separate image for mysql, in the Docker-Compose, you're not using it, as you're still referring to image: mysql:5.7
Rather than building your own, you should mount the SQL script into it
For example
mysqldb:
  image: mysql:5.7
  ...
  volumes:
    - db-data:/var/lib/mysql
    - ./database/tshirtshop.sql:/docker-entrypoint-initdb.d/0_init.sql
Then, forget the Java service for a minute and use MySQL Workbench or the mysql CLI to verify that the data is actually there. Once you do, then start up the API.
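For example, assuming the service name and root credentials from the compose file above, something like this should list the seeded tables:
# open a mysql client inside the running database container (service name and credentials assumed from the compose file)
docker-compose exec mysqldb mysql -uroot -proot tshirtshop -e "SHOW TABLES;"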

If you are already copying the SQL script into the image at build time, then you do not need to map it again in docker-compose. And if you have docker-compose, you do not need the bash script; the single command docker-compose up --build will do the job.
So modify your docker-compose file to match your Dockerfile.
Dockerfile
FROM mysql
ADD init.sql /docker-entrypoint-initdb.d
docker-compose
version: '3.7'

services:

  # App backend service
  app-server:
    # Configuration for building the docker image for the backend service
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080" # Forward the exposed port 8080 on the container to port 8080 on the host machine
    restart: always
    depends_on:
      - mysqldb # This service depends on mysql. Start that first.
    environment: # Pass environment variables to the service
      SPRING_DATASOURCE_URL: jdbc:mysql://mysqldb:3306/tshirtshop?useSSL=false&useLegacyDatetimeCode=false&serverTimezone=UTC
      SPRING_DATASOURCE_USERNAME: root
      SPRING_DATASOURCE_PASSWORD: root
    networks: # Networks to join (Services on the same network can communicate with each other using container name)
      - turingmysql

  # Database Service (Mysql), built from the Dockerfile above so the init script is baked in
  mysqldb:
    build: .
    environment:
      MYSQL_ROOT_PASSWORD: root123
      MYSQL_DATABASE: appdata
      MYSQL_USER: test
      MYSQL_PASSWORD: root123
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - turingmysql
    tty: true

# Volumes
volumes:
  db-data:

# Networks to be created to facilitate communication between containers
networks:
  turingmysql:
Now just run
docker-compose up --build
This will build and start the containers, and you will not need to mount the host init script, as it is already baked into the Docker image.
Now your application will be able to access the database at jdbc:mysql://mysqldb:3306/tshirtshop (the database name must match MYSQL_DATABASE), since both containers are on the same network and can refer to each other by service name.
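As a quick sanity check (service names and credentials assumed from the compose file above), you can confirm that the containers see each other by name and that the init script ran:
# resolve the database service by name from the app container
docker-compose exec app-server ping -c 1 mysqldb
# list the tables created by init.sql
docker-compose exec mysqldb mysql -uroot -proot123 appdata -e "SHOW TABLES;"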

Thank you cricket_007 and Adii for the responses. They pointed me in the right direction. I want to document my experience and how the issue was resolved. I am new to dockerization, so I was learning by practice. For anyone new to dockerization and having the same issues with Spring Boot, MySQL, and Docker, this should help.
First, my entrypoint.sh changed as below. The docker-compose down -v is there so the script also handles restarts.
#!/bin/bash
docker-compose down -v
docker-compose up --build
Second, I had to use an existing mysql image instead of building one.
version: '3.7'

# Define services
services:

  # App backend service
  app-server:
    # Configuration for building the docker image for the backend service
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080" # Forward the exposed port 8080 on the container to port 8080 on the host machine
    restart: always
    depends_on:
      - mysql # This service depends on mysql. Start that first.
    environment: # Pass environment variables to the service
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql:3306/tshirtshop?useSSL=false&allowPublicKeyRetrieval=true&useLegacyDatetimeCode=false&serverTimezone=UTC
      SPRING_DATASOURCE_USERNAME: turing
      SPRING_DATASOURCE_PASSWORD: pass
    networks: # Networks to join (Services on the same network can communicate with each other using their name)
      - turingmysql

  # Database Service (Mysql)
  mysql:
    image: mysql/mysql-server
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: tshirtshop
      MYSQL_USER: turing
      MYSQL_PASSWORD: pass
    volumes:
      - db-data:/var/lib/mysql
      - ./database:/docker-entrypoint-initdb.d
    tty: true
    networks: # Networks to join (Services on the same network can communicate with each other using their name)
      - turingmysql

# Volumes
volumes:
  db-data:

# Networks to be created to facilitate communication between containers
networks:
  turingmysql:
    driver: bridge
I needed to specify that the network is a bridge. My SQL file is mounted from the ./database folder relative to docker-compose.yml. I also had to add allowPublicKeyRetrieval=true to my JDBC URL, and I created a dedicated user (via MYSQL_USER/MYSQL_PASSWORD) to access the tshirtshop database.
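As a quick check that the user and the seeded schema are in place (service name, user, and password as in the compose file above):
# connect as the application user inside the database container and list the tables
docker-compose exec mysql mysql -uturing -ppass tshirtshop -e "SHOW TABLES;"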
And here is the Dockerfile.
#### Stage 1: Build the application
FROM openjdk:8-jdk-alpine as build
# Set the current working directory inside the image
WORKDIR /app
# Copy maven executable to the image
COPY mvnw .
COPY .mvn .mvn
# Copy the pom.xml file
COPY pom.xml .
# Build all the dependencies in preparation to go offline.
# This is a separate step so the dependencies will be cached unless
# the pom.xml file has changed.
RUN ./mvnw dependency:go-offline -B
# Copy the project source
COPY src src
# Package the application
RUN ./mvnw package -DskipTests
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf ../*.jar)
#### Stage 2: A minimal docker image with command to run the app
FROM openjdk:8-jre-alpine
ARG DEPENDENCY=/app/target/dependency
# Copy project dependencies from the build stage
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","com.turing.ecommerce.TuringApplication"]
To run, execute ./entrypoint.sh from the root directory of the project (on a Mac), and the rest is history.
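In other words, something like this (assuming the script is executable; run chmod +x entrypoint.sh once if it is not):
# from the project root
./entrypoint.sh
# then confirm both services are up and follow the application log
docker-compose ps
docker-compose logs -f app-server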

Related

Error connecting Mysql from Go REST API with Docker Compose

I'm very new to Docker, and I'm trying to dockerize a Go REST API and MySQL database to communicate with each other using Docker Compose. I am getting the error [main] Error 1049: Unknown database 'puapp'
Docker compose:
version: '3'
services:
  db:
    build: ./mysql/
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
    volumes:
      - db_volume:/var/lib/mysql
  api-service:
    restart: always
    build: ./
    ports:
      - "8080:80"
    environment:
      - DB_USER=root
      - DB_PASS=root
      - DB_ADDRESS=db:3306
      - DB_PROTOCOL=tcp
      - DB_NAME=puapp
    depends_on:
      - db
    links:
      - db
volumes:
  db_volume:
Dockerfile for go service:
# syntax=docker/dockerfile:1
# Build stage
FROM golang:1.16-alpine AS builder
WORKDIR /app
COPY . .
RUN go mod download
WORKDIR /app/src/main
RUN go build -o restserv
# Run stage
FROM alpine:3.13
WORKDIR /app
COPY --from=builder /app/src/main/restserv .
EXPOSE 8080
CMD "./restserv"
Dockerfile for MySQL:
FROM mysql:latest
ADD dump.sql /docker-entrypoint-initdb.d
Full code - https://github.com/bens-schreiber/restservproj
Let me know if I need to add anything
Containers have their own IP addresses, so the API container won't be able to reach the MySQL container over 127.0.0.1. As mentioned in the comments, you want to use the service name to address one container from another. See this page for details.
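For example, a quick way to confirm that the service name resolves inside the Compose network (service names as in the compose file above; this starts a one-off container instead of the Go binary):
# resolve the db service by name from a throwaway api-service container
docker-compose run --rm api-service nslookup db
The Go DSN should then use db:3306 (already passed in via DB_ADDRESS) rather than 127.0.0.1.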

docker-compose run does not run the entrypoint scripts of dependent services

I am trying to run a one time command on my application container using the command
docker-compose run --entrypoint="/usr/src/app/migrate.sh" app
app is the name of my service and the said entrypoint contains the one-time command that I'm trying to run.
Here's my docker-compose.yml file
version: '3'
services:
  app:
    build: .
    # mount the current directory (on the host) to /usr/src/app on the container, any changes in either would be reflected in both the host and the container
    volumes:
      - .:/usr/src/app
    # expose application on localhost:36081
    ports:
      - "36081:36081"
    # application restarts if stops for any reason - required for the container to restart when the application fails to start due to the database containers not being ready
    restart: always
    depends_on:
      - db1
      - db2
    # the environment variables are used in docker/config/env_config.rb to connect to different database containers
    environment:
      MYSQL_DB1_HOST: db1
      MYSQL_DB1_PORT: 3306
      MYSQL_DB2_HOST: db2
      MYSQL_DB2_PORT: 3306
  db1:
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_DATABASE: test1
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    # mount volume of the schema script to /docker-entrypoint-initdb.d to execute the script on startup
    volumes:
      - ./docker/seed/db1:/docker-entrypoint-initdb.d
      - db1-volume:/var/lib/mysql
    restart: always
    # to connect locally from SequelPro
    ports:
      - "1200:3306"
  db2:
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_DATABASE: test2
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    # mount volume of the schema script to /docker-entrypoint-initdb.d to execute the script on startup
    volumes:
      - ./docker/seed/db2:/docker-entrypoint-initdb.d
      - db2-volume:/var/lib/mysql
    restart: always
    # to connect locally from SequelPro
    ports:
      - "1201:3306"
Everything works as expected when I run docker-compose up, but when I invoke docker-compose run, the dependent db1 and db2 containers come up without being initialised by the entrypoint script (as a result the MySQL database is not created). The volume is attached though.
How can I ensure that the entrypoint script of the dependent containers is invoked as well?

Seed data in mySQL container after start up

I have a requirement where I need to wait for a few commands before I seed the data for the database:
I have some Migration scripts that create the schema in the database (this command runs from my app container). After this executes, I want to seed data to the database.
As I read, the docker-entrypoint-initdb.d scripts are executed when the container is initialized. If I mount my seed.sql script there, the data is seeded before the migration scripts run. (The migration scripts actually drop all tables and create them from scratch.) The seeded data is therefore lost.
How can I achieve this? (I cannot change the Migrate scripts)
Here's my docker-compose.yml file
version: '3'
services:
  app:
    build: .
    # mount the current directory (on the host) to /usr/src/app on the container, any changes in either would be reflected in both the host and the container
    volumes:
      - .:/usr/src/app
    # expose application on localhost:36081
    ports:
      - "36081:36081"
    # application restarts if stops for any reason - required for the container to restart when the application fails to start due to the database containers not being ready
    restart: always
    environment:
      MIGRATE: Y
      <some env variables here>
  config-dev:
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_DATABASE: config_dev
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      # to persist data
      - config-dev-volume:/var/lib/mysql
    restart: always
    # to connect locally from SequelPro
    ports:
      - "1200:3306"
  <other database containers>
My Dockerfile for app container has the following ENTRYPOINT
# start the application
ENTRYPOINT /usr/src/app/docker-entrypoint.sh
Here's the docker-entrypoint.sh file
#!/bin/bash
if [ "$MIGRATE" = "Y" ];
then
<command to start migration scripts>
echo "------------starting application--------------"
<command to start application>
else
echo "------------starting application--------------"
<command to start application>
fi
Edit: Is there a way I can run a script in config-db container from the docker-entrypoint.sh file in app container?
This can be solved in two steps: you need to wait until your db container has started, and then until the database inside it is ready.
Waiting until it has started can be handled by adding depends_on in the docker-compose file:
version: '3'
services:
  app:
    build: .
    depends_on:
      - config-dev
      - <other containers (if any)>
    # mount the current directory (on the host) to /usr/src/app on the container, any changes in either would be reflected in both the host and the container
    volumes:
      - .:/usr/src/app
    # expose application on localhost:36081
    ports:
      - "36081:36081"
    # application restarts if stops for any reason - required for the container to restart when the application fails to start due to the database containers not being ready
    restart: always
    environment:
      MIGRATE: Y
      <some env variables here>
  config-dev:
    image: mysql/mysql-server:5.7
    environment:
      MYSQL_DATABASE: config_dev
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      # to persist data
      - config-dev-volume:/var/lib/mysql
    restart: always
    # to connect locally from SequelPro
    ports:
      - "1200:3306"
  <other database containers>
Waiting until the db is ready is a different matter, because it can take time for the db process to start listening on its TCP port.
Unfortunately, Docker does not provide a way to hook into container readiness, but there are many tools and scripts that work around this.
You can go through the following to implement the workaround.
https://docs.docker.com/compose/startup-order/
TL;DR
Download https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh into the image, delete the ENTRYPOINT instruction (not required for your use case), and use a CMD instruction instead:
CMD ["./wait-for-it.sh", "<db_service_name_as_per_compose_file>:<port>", "--", "/usr/src/app/docker-entrypoint.sh"]
Now that this is complete, the next part is to execute your seed.sql script.
That can be done by adding a line like the following to your /usr/src/app/docker-entrypoint.sh script, using the mysql client:
mysql -h <db_host> -u <user> -p<password> <database> < /path/to/seed.sql
Place this command after the migrate script in /usr/src/app/docker-entrypoint.sh.
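Put together, a minimal sketch of /usr/src/app/docker-entrypoint.sh could look like this; the host name config-dev and database config_dev come from the compose file, while the seed file path, the credentials, the mysql client being available in the app image, and the db user being allowed to connect from the app container are all assumptions to adjust for your setup:
#!/bin/bash
set -e
if [ "$MIGRATE" = "Y" ]; then
  # <command to start migration scripts> (as in the original entrypoint)
  # seed the database only after the migration has rebuilt the schema;
  # user, password, and file name below are placeholders
  mysql -h config-dev -u <user> -p<password> config_dev < /usr/src/app/seed.sql
fi
echo "------------starting application--------------"
# <command to start application> (as in the original entrypoint)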

How to specify a docker container database to a app running in docker with docker-compose.yml?

Context
There is this docker-compose.yml:
version: '3'
services:
  mediawiki:
    image: mediawiki
    restart: always
    ports:
      - 8080:80
    links:
      - database
    volumes:
      - /var/www/html/images
      # After initial setup, download LocalSettings.php to the same directory as
      # this yaml and uncomment the following line and use compose to restart
      # the mediawiki service
      # - ./LocalSettings.php:/var/www/html/LocalSettings.php
  database:
    image: mariadb
    restart: always
    environment:
      # #see https://phabricator.wikimedia.org/source/mediawiki/browse/master/includes/DefaultSettings.php
      MYSQL_DATABASE: my_wiki
      MYSQL_USER: wikiuser
      MYSQL_PASSWORD: example
      MYSQL_RANDOM_ROOT_PASSWORD: yes
When I run docker ps I get:
89db8794029a mysql:latest "docker-entrypoint..." ... 0.0.0.0:8083->3306/tcp some-mysql
This is a mysql docker container running.
Question
How can I modify the docker-compose.yml so that the database points to the mysql docker container (89db8794029a) that is already running?
You don't have to add the database service to the yml file at all.
For the mediawiki service to connect to the some-mysql container, the mediawiki container needs to be on the same network as the some-mysql container.
Assuming that mediawiki is already up, first you need to know which network some-mysql uses:
docker network ls
I'm guessing it would be 'some-mysql_default'.
To connect mediawiki to some-mysql:
docker network connect some-mysql_default mediawiki
Now use 'some-mysql' as the database hostname in the mediawiki config.
OR
use the yml file to automatically connect to the existing mysql network (the database service is not defined here, since the already-running some-mysql container is used instead):
version: '3'
services:
  mediawiki:
    image: mediawiki
    restart: always
    ports:
      - 8080:80
    volumes:
      - /var/www/html/images
      # After initial setup, download LocalSettings.php to the same directory as
      # this yaml and uncomment the following line and use compose to restart
      # the mediawiki service
      # - ./LocalSettings.php:/var/www/html/LocalSettings.php
    networks:
      - default
      - some-mysql_default

networks:
  default: # this network
    driver: bridge
  some-mysql_default: # external network
    external: true
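Either way, you can verify that the mediawiki container can reach the database by name (container and network names assumed as above):
# check that some-mysql resolves from inside the mediawiki container
docker exec mediawiki getent hosts some-mysql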

Docker-compose: Copying files from local env to EC2 instance

Hello, I have a configuration that builds Docker containers for a Flask app and a MySQL instance.
I create a new VM with
docker-machine create -d amazonec2 --....... production
and then (after setting the correct environment)
docker-compose build -> docker-compose up -d
The problem is that all these happen whilst CWD is a local repo with the files I need. It turns out these files are not copied over.
I have looked at docker cp and docker scp but it seems they do not solve the problem. E.g. with SCP I cannot reference the specific machine I need to copy the repo over (xow_web_1)
Here is the .yml
web:
  restart: always
  volumes:
    - .:/xow
  build: .
  ports:
    - "80:80"
  links:
    - db
  hostname: xowflask
  command: python xow.py
db:
  restart: always
  hostname: xowmysql
  image: mysql:latest
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: somepasswordhere
    MYSQL_DATABASE: somedatabase
data:
  restart: always
  image: mysql:latest
  volumes:
    - /var/lib/mysql
  command: "true"
What would be the most appropriate way to solve this? Is docker-compose the right approach? It looks awesome, but it doesn't seem to solve an issue like this.
The way we solved it in our organization is by using the COPY command to copy all of the data in the folder to the container.
For example, copying all of the files from the current dir to the container /src folder will look like this -
### Copy Code
COPY . /src
It looks like you should add this line to the Dockerfile that the web service builds from in your docker-compose configuration.
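With the code baked into the image via COPY, the bind mount - .:/xow on the web service is no longer needed on the remote host (it would only mask the copied files with an empty directory). Rebuilding against the production machine then ships the files inside the image; a sketch, assuming the machine name from the question:
# point the local docker client at the EC2 machine, then rebuild and restart
eval "$(docker-machine env production)"
docker-compose build web
docker-compose up -d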