docker pentaho mysql driver issue

I'm using Docker on Windows 10 to create a pentaho and mysql image that will run as containers on a network I define with docker network create.
The intention is that (as a first step) I will run a .KTR file with pan.sh that will read DB connection parameters from a .csv file and place these into the environment;
Get the DB connection parameters
Next a second .KTR checks to see if the DB exists using the above environment params;
Check DB exists
The problem is that when I spin up my project with docker-compose, step two fails with a driver-not-found issue. I've placed the drivers I require in the Pentaho container's lib dir, but I'm guessing this is not correct?
Ultimately, the intention is for a transformation to occur where data read from an OpenEdge DB is processed via a series of steps in Pentaho and written to the MySQL DB.
Here are the supporting files:
Dockerfile:
FROM java:8-jre
MAINTAINER M Beynon
# Set required environment vars
ENV PDI_RELEASE=7.1 \
    PDI_VERSION=7.1.0.0-12 \
    CARTE_PORT=8181 \
    PENTAHO_JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 \
    PENTAHO_HOME=/home/pentaho
# Create user
RUN mkdir ${PENTAHO_HOME} && \
    groupadd -r pentaho && \
    useradd -s /bin/bash -d ${PENTAHO_HOME} -r -g pentaho pentaho && \
    chown pentaho:pentaho ${PENTAHO_HOME}
# Add files
RUN mkdir $PENTAHO_HOME/docker-entrypoint.d
COPY docker-entrypoint.sh $PENTAHO_HOME/scripts/
RUN chown -R pentaho:pentaho $PENTAHO_HOME
RUN apt-get update && apt-get install -y libwebkitgtk-1.0-0
RUN apt-get update && apt-get install -y dos2unix
RUN dos2unix $PENTAHO_HOME/scripts/docker-entrypoint.sh && apt-get --purge remove -y dos2unix && rm -rf /var/lib/apt/lists/*
# Switch to the pentaho user
USER pentaho
# Download PDI
RUN /usr/bin/wget \
        --progress=dot:giga \
        http://downloads.sourceforge.net/project/pentaho/Data%20Integration/${PDI_RELEASE}/pdi-ce-${PDI_VERSION}.zip \
        -O /tmp/pdi-ce-${PDI_VERSION}.zip && \
    /usr/bin/unzip -q /tmp/pdi-ce-${PDI_VERSION}.zip -d $PENTAHO_HOME && \
    rm /tmp/pdi-ce-${PDI_VERSION}.zip
ENV KETTLE_HOME=$PENTAHO_HOME/data-integration \
    PATH=$KETTLE_HOME:$PATH
WORKDIR $KETTLE_HOME
ENTRYPOINT ["../scripts/docker-entrypoint.sh"]
The entrypoint:
#!/bin/bash
# based on https://github.com/aloysius-lim/docker-pentaho-di/blob/master/docker/Dockerfile
#exit script if any command fails (non-zero value)
set -e
cd resources
cp mysql-connector-java-5.1.42-bin.jar ../lib/
cp PROGRESS_DATADIRECT_JDBC_OE_ALL.jar ../lib
cd ../
echo 'Drivers copied!'
echo ''
echo 'Running transformation!'
#run a transformation (get db credentials)
./pan.sh -file=resources/Read-DBs.ktr
#run a transformation (does the db exist)
./pan.sh -file=resources/GoldBi-Exists.ktr
#redirect input variables
exec "$@"
The Docker Compose file:
version: "2"
services:
  db:
    image: mysql:latest
    networks:
      - my-pdi-network
    environment:
      - MYSQL_ROOT_PASSWORD=tbitter
      - MYSQL_DATABASE=mysql-db
    ports:
      - "3307:3306"
    volumes:
      - ./goldbi:/var/lib/mysql
  pdi:
    image: my-pdi-image:latest
    networks:
      - my-pdi-network
    volumes:
      - C:\Docker-Pentaho\resource:/home/pentaho/data-integration/resources
networks:
  my-pdi-network:
The error coming from Pentaho:
2017/05/30 15:28:56 - Table exists.0 - Error occurred while trying to connect to the database
2017/05/30 15:28:56 - Table exists.0 -
2017/05/30 15:28:56 - Table exists.0 - Error connecting to database: (using class org.gjt.mm.mysql.Driver)
2017/05/30 15:28:56 - Table exists.0 - Communications link failure
2017/05/30 15:28:56 - Table exists.0 -
2017/05/30 15:28:56 - Table exists.0 - The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
Many Thanks.
P.S. Does anyone know how to prevent docker-compose build from rebuilding everything, even if there's only a small change to the Dockerfile or the entrypoint file?

I seem to have found the solution:
There were two issues. The first seems to be that the ENV vars set in the first transformation are not being carried over to the second transformation. The second was that the host name was wrong in the second transformation (DB-Exists): it should have been 'db', which is the service name specified for the container in the docker-compose file. As the containers are both running on a custom network I'd defined, they can automatically 'talk' to each other via their service names (Docker's built-in DNS resolution on user-defined networks).
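As a quick illustration of that, the service name can be resolved from any container attached to the same user-defined network. This is only a sketch: the network name Compose generates is prefixed with the project (folder) name, so <project> below is a placeholder, and the credentials come from the compose file above.
# hypothetical check, run from the host: resolve the 'db' service by name on the compose network
docker run --rm --network <project>_my-pdi-network mysql:latest \
    mysql -h db -P 3306 -u root -ptbitter -e "SHOW DATABASES;"
The JDBC connection in the second transformation should therefore point at host db and port 3306 (the container port), not localhost or the 3307 that is only published to the Windows host.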

Related

install wordpress with cli in docker gives "error establishing database connection" with mariadb container

I'm trying to set up a WordPress website with docker-compose, with one container for nginx, one for mariadb, and a last one for wordpress, plus two volumes for wordpress and mariadb.
I'm not there yet; I'm stuck at an intermediate step: using wp-cli (https://make.wordpress.org/cli/handbook/how-to-install/) I want to configure wordpress with the database, but I get an error.
Project architecture:
|_ docker-compose.yml
|_ .env
|_ mariadb/
   |_ Dockerfile
|_ wordpress/
   |_ Dockerfile
wordpress Dockerfile (it's missing steps, like an entrypoint or a cmd, but I'm already getting an error):
FROM debian:buster
RUN apt update && apt install -y \
    php7.3 \
    php7.3-mysqli \
    curl
# install wp-cli : https://make.wordpress.org/cli/handbook/guides/installing/
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
    chmod +x wp-cli.phar && \
    mv wp-cli.phar /usr/local/bin/wp
ARG WP_DIR=/var/www/html
ARG DB_NAME
ARG DB_USER
ARG DB_PSWD
ARG WP_URL
ARG WP_TITLE
ARG WP_ADMIN
ARG WP_ADMIN_PSWD
ARG WP_ADMIN_EMAIL
# install wordpress with cli : https://make.wordpress.org/cli/handbook/how-to-install/
RUN wp core download --path=${WP_DIR} --allow-root
RUN wp config create --dbname=${DB_NAME} \
        --dbuser=${DB_USER} \
        --dbpass=${DB_PASS} \
        --path=${WP_DIR} \
        --allow-root \
        --skip-check
# this command gives an error:
RUN wp core install --url=${WP_URL} \
        --title=${WP_TITLE} \
        --admin_user=${WP_ADMIN} \
        --admin_password=${WP_ADMIN_PSWD} \
        --admin_email=${WP_ADMIN_EMAIL} \
        --path=${WP_DIR} \
        --allow-root
mariadb Dockerfile:
FROM debian:buster
ARG DB_NAME
ARG DB_USER
ARG DB_PSWD
RUN apt update && apt install -y \
    mariadb-client \
    mariadb-server \
    && \
    rm -rf /var/lib/apt/lists/*
# configure wp database
RUN service mysql start && \
    mariadb --execute="CREATE DATABASE ${DB_NAME};" && \
    mariadb --execute="CREATE USER '${DB_USER}'@'localhost' IDENTIFIED BY '${DB_PSWD}';" && \
    mariadb --execute="GRANT ALL PRIVILEGES ON ${DB_NAME}.* TO '${DB_USER}'@'localhost' with grant option;"
# start mysql server (https://www.mysqltutorial.org/mysql-adminsitration/start-mysql)
CMD [ "service", "mysql", "start" ]
docker-compose.yml:
version: "3.8"
services:
  mariadb:
    env_file: .env
    build:
      context: ./mariadb
      args:
        - DB_NAME=${DB_NAME}
        - DB_USER=${DB_USER}
        - DB_PSWD=${DB_PSWD}
    image: mariadb
    container_name: mymariadb
  wordpress:
    env_file: .env
    build:
      context: ./wordpress
      args:
        - WP_URL=${WP_URL}
        - WP_TITLE=${WP_TITLE}
        - WP_ADMIN=${WP_ADMIN}
        - WP_ADMIN_PSWD=${WP_ADMIN_PSWD}
        - WP_ADMIN_EMAIL=${WP_ADMIN_EMAIL}
        - DB_NAME=${DB_NAME}
        - DB_USER=${DB_USER}
        - DB_PSWD=${DB_PSWD}
    image: wordpress
    container_name: mywordpress
.env file:
## MARIADB SETUP
DB_NAME=db_wp
DB_USER=db_user
DB_PSWD=db_pswd
## WORDPRESS SETUP
WP_URL=wp_url.fr
WP_TITLE=wp_blog
WP_ADMIN=wp_admin
WP_ADMIN_PSWD=wp_admin_pswd
WP_ADMIN_EMAIL=wp_email@wp.fr
If I run docker-compose build I get this error:
Error establishing a database connection. This either means that the username and password information in your `wp-config.php` file is incorrect or that contact with the database server at `localhost` could not be established. This could mean your host’s database server is down.
I think that the wordpress container cannot reach the database running in the mariadb container. I tried giving them an explicit network but it didn't work either. I also tried to make the wordpress build depend on mariadb, but that was not successful either:
wordpress:
  ...
  depends_on:
    mariadb:
      condition: service_completed_successfully
  ...
I don't know if it's even possible to install wordpress at build time? Maybe I should launch a script at run time instead? I'm new to all of this (docker, wordpress, mariadb, plus the nginx, php and php-fpm parts that I didn't show here because they're not relevant to this error), so I'm certainly making a lot of mistakes, my apologies.
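For what it's worth, a run-time script is the usual way around this, because during docker-compose build the wordpress container has no network connection to the mariadb container. A minimal sketch, assuming a hypothetical docker-entrypoint.sh used as the image's ENTRYPOINT and the same values passed as environment variables rather than build ARGs:
#!/bin/bash
# hypothetical docker-entrypoint.sh: run the DB-dependent install at container start,
# when the mariadb service is reachable over the compose network
set -e
wp core install --url="${WP_URL}" \
    --title="${WP_TITLE}" \
    --admin_user="${WP_ADMIN}" \
    --admin_password="${WP_ADMIN_PSWD}" \
    --admin_email="${WP_ADMIN_EMAIL}" \
    --path="${WP_DIR:-/var/www/html}" \
    --allow-root
exec "$@"
You would also likely need --dbhost=mariadb (the compose service name) in the wp config create step, since from inside the wordpress container the database is not on localhost.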
I'm also confused about the line CMD [ "service", "mysql", "start" ] in the mariadb Dockerfile; it doesn't behave well: if I run the container it stays up for 5 or 6 seconds and then exits. But if I use CMD [ "mysqld" ] instead it works great, although I don't understand why. I don't think it is connected to my problem with the wordpress installation, though.
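(A likely reason, sketched below: a container only keeps running while its main process stays in the foreground; service mysql start forks the daemon and then exits, so the container stops shortly afterwards, whereas mysqld itself stays in the foreground.)
# exits after a few seconds: 'service' launches mysqld in the background and returns,
# so the process started by CMD terminates and the container stops with it
CMD [ "service", "mysql", "start" ]
# keeps the container alive: mysqld runs in the foreground as the main process
CMD [ "mysqld" ]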

Bitbucket pipeline import mysql schema

I'm trying to import a database schema into the mysql service through the following statement:
mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE < DB_Schema.sql
and it returns mysql: not found. I have even tried the following command:
docker exec -i mysql mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE < DB_Schema.sql
but still received an error (the + line below is the command as echoed in the pipeline log):
+ docker exec -i mysql mysql --user=$DB_USERNAME --password=$DB_PASSWORD 5i < DB_Schema.sql
Error: No such container: mysql
What would be the best way to use mysql so that I can import an instance of the DB into it for testing purposes, and how?
Please find the .yml file below.
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# Specify a docker image from Docker Hub as your build environment.
# All of your pipeline scripts will be executed within this docker image.
image: php:8.0-fpm-alpine
# All of your Pipelines will be defined in the `pipelines` section.
# You can have any number of Pipelines, but they must all have unique
# names. The default Pipeline is simply named `default`.
pipelines:
  default:
    # Each Pipeline consists of one or more steps which each execute
    # sequentially in separate docker containers.
    # name: optional name for this step
    # script: the commands you wish to execute in this step, in order
    - parallel:
        - step:
            name: Installing Dependencies and Composer
            caches:
              - composer
            script:
              # Your Pipeline automatically contains a copy of your code in its working
              # directory; however, the docker image may not be preconfigured with all
              # of the PHP/Laravel extensions your project requires. You may need to install
              # them yourself, as shown below.
              - apt-get update && apt-get install -qy git curl libmcrypt-dev unzip libzip-dev libpng-dev zip git gnupg gnupg2 php-mysql
              - docker-php-ext-configure gd --enable-gd --with-freetype --with-jpeg --with-webp && \
              - docker-php-ext-install gd && \
              - docker-php-ext-install exif && \
              - docker-php-ext-install zip && \
              - docker-php-ext-install pdo pdo_mysql
              - rm -rf ./vendor
              - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
              - composer install --ignore-platform-reqs
              - composer dump-autoload
              # Here we create a link between the .env.pipelines file and the .env file
              # so that our database can retrieve all the variables inside .env.pipelines
              - ln -f -s .env.pipelines .env
            artifacts:
              - vendor/**
        - step:
            name: Installing and Running npm
            image: node:16
            caches:
              - node
            script:
              - npm install -g grunt-cli
              - npm install
              - npm run dev
            artifacts:
              - node_modules/**
    - step:
        name: Running Test
        deployment: local
        script:
          # Start up the php server so that we can test against it
          - php artisan serve &
          # Give the server some time to start
          - sleep 5
          # - php artisan migrate
          - docker ps
          - docker container ls
          - mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE < DB_Schema.sql
          # - docker exec -i mysql mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD -e "SHOW DATABASES"
          - php artisan optimize
          - php artisan test
        services:
          - mysql
          - docker
# You might want to create and access a service (like a database) as part
# of your Pipeline workflow. You can do so by defining it as a service here.
definitions:
  services:
    mysql:
      image: mysql:latest
      environment:
        MYSQL_DATABASE: $DB_DATABASE
        MYSQL_USER: $DB_USERNAME
        MYSQL_PASSWORD: $DB_PASSWORD
        MYSQL_ROOT_PASSWORD: $DB_PASSWORD
        SERVICE_TAGS: mysql
        SERVICE_NAME: mysql
You cannot install/update/change your main image in the first step and expect those changes to still be there in the last step. Make your own custom Docker image with all those installations baked in; that will make the pipeline faster to run and will let you use the other tools you need in your pipeline.
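A rough sketch of that approach (the image and registry names below are placeholders): build an image once with the mysql client and PHP extensions already installed, push it, and reference it on the image: line of bitbucket-pipelines.yml instead of running apt-get on every pipeline run.
# build and publish a pre-baked pipeline image (names are hypothetical)
docker build -t your-registry/php8-pipeline-tools:latest .
docker push your-registry/php8-pipeline-tools:latest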
I prefer to use the "mysql" client outside Docker and have it reach into the Docker container through the port mapping that was set up. Conceptually, it is then like talking to a "mysqld" server on a separate "server".
LOAD DATA INFILE and INSERT, including use of mysql ... < dump.sql, work fine.
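As an illustration of that approach (treat the host, port and credentials as placeholders for whatever your port mapping and environment actually provide):
# load the schema through the published port from outside the container,
# as if mysqld were simply running on another server
mysql -h 127.0.0.1 -P 3306 --protocol=tcp -u "$DB_USERNAME" -p"$DB_PASSWORD" "$DB_DATABASE" < DB_Schema.sql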

Can't connect a go app to mysql (both inside a gitlab runner)

The Go app simply inserts a hardcoded value into a mysql table and spits it back out. This is done using the go-sql-driver/mysql database driver. It works fine on Linux servers, but during GitLab CI this returns:
dial tcp 127.0.0.1:3306: connect: connection refused
This is the .gitlab-ci.yml
image: mysql

services:
  - mysql:latest

variables:
  MYSQL_DATABASE: storage
  MYSQL_ROOT_PASSWORD: root
  MYSQL_HOST: mysql
  MYSQL_USER: root
  MYSQL_TRANSAPORT: tcp
  MYSQL_ADDRESS: "127.0.0.1:3306"

job:
  script:
    - apt-get update -qq && apt-get install -qq curl && apt-get install -qq git
    - echo "SHOW GLOBAL VARIABLES LIKE 'PORT';" | mysql --user="$MYSQL_USER" --password="$MYSQL_ROOT_PASSWORD" --host="$MYSQL_HOST" "$MYSQL_DATABASE"
    - curl -O https://dl.google.com/go/go1.10.1.linux-amd64.tar.gz
    - tar -C /usr/local -xzf go1.10.1.linux-amd64.tar.gz
    - rm go1.10.1.linux-amd64.tar.gz
    - echo "export PATH=\$PATH:/usr/local/go/bin" >> ~/.bashrc
    - echo "export GOPATH=\$HOME/go" >> ~/.bashrc
    - echo "export PATH=\$PATH:\$GOPATH/bin" >> ~/.bashrc
    - source ~/.bashrc
    - go get github.com/go-sql-driver/mysql
    - go build main.go
    - ./main
Is there a standard way to use mysql from golang during CI?

Import into dockerized mariadb on initial build with script

I'm using MariaDB, but I think this could probably apply to MySQL as well.
I have a project that works off of MariaDB, and there is some initial setup for the database that needs to be done to create tables, insert initial data, etc. Based on other answers, I could normally do ADD dump.sql /docker-entrypoint-initdb.d, but I don't have a dump.sql -- instead what I have is a python script that connects to MariaDB directly and creates the tables and data.
I have a docker-compose.yml
version: '3'
services:
  db:
    build: ./db
    ports:
      - "3306:3306"
    container_name: db
    environment:
      - MYSQL_ROOT_PASSWORD=root
  web:
    build: ./web
    command: node /app/src/index.js
    ports:
      - "3000:3000"
    links:
      - db
"Web" is not so important right now since I just want to get db working.
The Dockerfile I've attempted for DB is:
# db/Dockerfile
FROM mariadb:10.3.2
RUN apt-get update && apt-get install -y python python-pip python-dev libmariadbclient-dev
RUN pip install requests mysql-python
ADD pricing_import.py /scripts/
RUN ["/bin/sh", "-c", "python /scripts/pricing_import.py"]
However this doesn't work for various reasons. I've gotten up to the point where pip install mysql-python doesn't compile:
_mysql.c:2005:41: error: 'MYSQL' has no member named 'reconnect'
if ( reconnect != -1 ) self->connection.reconnect = reconnect;
I think this has to do with the installation of mysql-python.
However, before I go too far down this hole, I want to make sure my approach even makes sense: I don't think the database will even be started by the time the ./pricing_import.py script runs, and since it tries to connect to the database and run queries, it probably won't work.
Since I can't get the python installation to work on the mariadb container anyway, I was also thinking about creating another docker-compose entry that depends on db and runs the python script on build to do the initial import during docker-compose build.
Are either of these approaches correct, or is there a better way to handle running an initialization script against MariaDB?
We use a docker-compose healthcheck in combination with a makefile and bash to handle running an initialization script. Your docker-compose.yml would look something like this:
version: '3'
services:
  db:
    build: ./db
    ports:
      - "3306:3306"
    container_name: db
    environment:
      - MYSQL_ROOT_PASSWORD=root
    healthcheck:
      test: mysqlshow --defaults-extra-file=./database/my.cnf
      interval: 5s
      timeout: 60s
Where ./database/my.cnf is the config with credentials:
[client]
host = db
user = root
password = password
Then you can use this health-check.bash script to check the health:
#!/usr/bin/env bash

DATABASE_DOCKER_CONTAINER=$1

# Check on the health of the database server before continuing
function get_service_health {
  # https://gist.github.com/mixja/1ed1314525ba4a04807303dad229f2e1
  docker inspect -f '{{if .State.Running}}{{ .State.Health.Status }}{{end}}' $DATABASE_DOCKER_CONTAINER
}

until [[ $(get_service_health) != starting ]]; do
  echo "database: ... Waiting on database Docker Instance to Start"
  sleep 5
done

# Instance has finished starting, will be unhealthy until database finishes startup
MYSQL_HEALTH_CHECK_ATTEMPTS=12
until [[ $(get_service_health) == healthy ]]; do
  echo "database: ... Waiting on database service"
  sleep 5
  if [[ $MYSQL_HEALTH_CHECK_ATTEMPTS == 0 ]]; then
    echo $DATABASE_DOCKER_CONTAINER ' failed health check (not running or unhealthy) - ' $(get_service_health)
    exit 1
  fi
  MYSQL_HEALTH_CHECK_ATTEMPTS=$((MYSQL_HEALTH_CHECK_ATTEMPTS-1))
done

echo "Database is healthy"
Finally, you can use a makefile to tie it all together. Something like this:
docker-up:
	docker-compose up -d

db-health-check:
	db/health-check.bash db

load-database:
	docker run --rm --interactive --tty --network your_docker_network_name -v `pwd`:/application -w /application your_docker_db_image_name python /application/pricing_import.py

start: docker-up db-health-check load-database
Then start your app with make start.

Docker integration test with static data at launch

I have been struggling with this issue for a while now. I want to do a performance test using a specific set of data.
To achieve this I am using Docker-compose and I have exported a couple of .sql files.
When I first build and run the container using docker-compose build and docker-compose up, the dataset is fine. Then I run my test (which also inserts data). After running my test I want to perform it again, so I relaunch the container using docker-compose up. But this time, for some reason I don't understand, the data that was inserted the last time (by my test) is still there, so I get different behaviour.
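In other words, the sequence that reproduces the problem looks roughly like this:
docker-compose build
docker-compose up        # first start: only the exported .sql data is present
# run the performance test here (it inserts extra rows)
docker-compose up        # relaunch: the rows inserted by the test are still there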
At the moment I have the following Dockerfile:
FROM mysql:5.7
ENV MYSQL_DATABASE=dev_munisense1 \
    MYSQL_ROOT_PASSWORD=pass
EXPOSE 3306
ADD docker/data/ /docker-entrypoint-initdb.d/
I did this because I read that the mysql Docker image runs everything in /docker-entrypoint-initdb.d/. The first time, this works properly.
I have also tried what these posts suggested:
How do i migrate mysql data directory in docker container?
http://txt.fliglio.com/2013/11/creating-a-mysql-docker-container/
How to create populated MySQL Docker Image on build time
How to make a docker image with a populated database for automated tests?
And a couple of identical other posts.
None of them seem to work currently.
How can I make sure the dataset is exactly the same each time I launch the container, without having to rebuild the image every time (that takes quite long because of the large dataset)?
Thanks in advance
EDIT:
I have also tried running the container with different arguments like:
docker-compose up --force-recreate --build mysql, but without success. The container is rebuilt and restarted, but the DB is still affected by my test. Currently the only solution to my problem is to remove the entire container and image.
I managed to fix the issue (with the mysql image) by doing the following:
1. Change the mount point of the SQL storage (this is what actually caused the problem). I used the solution suggested here: How to create populated MySQL Docker Image on build time, but I did it by running a sed command: RUN sed -i 's|/var/lib/mysql|/var/lib/mysql2|g' /etc/mysql/my.cnf
2. Add my scripts to a folder inside the container.
3. Run the import.sh script, which inserts data using the daemon (using the wait-for-it.sh script).
4. Remove the SQL scripts.
5. Expose the port as usual.
The Dockerfile looks like this (the variables are used to select different SQL files; I wanted multiple versions of the image):
FROM mysql:5.5.54
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh /utils/wait-for-it.sh
COPY docker/import.sh /usr/local/bin/
RUN sed -i 's|/var/lib/mysql|/var/lib/mysql2|g' /etc/mysql/my.cnf
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_ROOT_PASSWORD
ARG MYSQL_ALLOW_EMPTY_PASSWORD
ARG DEVICE_INFORMATION
ARG LAST_NODE_STATUS
ARG V_NODE
ARG NETWORKSTATUS_EVENTS
ENV MYSQL_DATABASE=$MYSQL_DATABASE \
    MYSQL_USER=$MYSQL_USER \
    MYSQL_PASSWORD=$MYSQL_PASSWORD \
    MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \
    DEVICE_INFORMATION=$DEVICE_INFORMATION \
    LAST_NODE_STATUS=$LAST_NODE_STATUS \
    V_NODE=$V_NODE \
    MYSQL_ALLOW_EMPTY_PASSWORD=$MYSQL_ALLOW_EMPTY_PASSWORD
#Set up tables
COPY docker/data/$DEVICE_INFORMATION.sql /usr/local/bin/device_information.sql
COPY docker/data/$NETWORKSTATUS_EVENTS.sql /usr/local/bin/networkstatus_events.sql
COPY docker/data/$LAST_NODE_STATUS.sql /usr/local/bin/last_node_status.sql
COPY docker/data/$V_NODE.sql /usr/local/bin/v_node.sql
RUN chmod 777 /usr/local/bin/import.sh && chmod 777 /utils/wait-for-it.sh && \
    /bin/bash /entrypoint.sh mysqld --user='root' & /bin/bash /utils/wait-for-it.sh -t 0 localhost:3306 -- /usr/local/bin/import.sh; exit
RUN rm -f /usr/local/bin/*.sql
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
The import.sh script looks like this:
#!/bin/bash
echo "Going to insert the device information"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/device_information.sql
echo "Going to insert the last_node_status"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/last_node_status.sql
echo "Going to insert the v_node"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/v_node.sql
echo "Going to insert the networkstatus_events"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/networkstatus_events.sql
echo "Database now has the following tables"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE --execute="SHOW TABLES;"
So now all I have to do to start my performance tests is:
#!/usr/bin/env bash
echo "Shutting down previous containers"
docker-compose -f docker-compose.yml down
docker-compose -f docker-compose-test-10k-half.yml down
docker-compose -f docker-compose-test-100k-half.yml down
docker-compose -f docker-compose-test-500k-half.yml down
docker-compose -f docker-compose-test-1m-half.yml down
echo "Launching rabbitmq container"
docker-compose up -d rabbitmq & sh wait-for-it.sh -t 0 -h localhost -p 5672 -- sleep 5;
echo "Going to execute 10k test"
docker-compose -f docker-compose-test-10k-half.yml up -d mysql_10k & sh wait-for-it.sh -t 0 -h localhost -p 3306 -- sleep 5 && ./networkstatus-event-service --env=performance-test --run-once=true;
docker-compose -f docker-compose-test-10k-half.yml stop mysql_10k
followed by a couple more lines like these (slightly different, because of the different container names).
Running docker-compose down after your tests will destroy everything associated with your docker-compose.yml
Docker Compose is a container life cycle manager and by default it tries to keep everything across multiple runs. As Stas Makarov mentions, there is a VOLUME defined in the mysql image that persists the data outside of the container.
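That also means the quickest way to get a clean dataset on every run, without rebuilding the image, is to remove those volumes together with the containers, something like:
# -v also removes the anonymous volume the mysql image declares for /var/lib/mysql,
# so the /docker-entrypoint-initdb.d scripts run again on the next start
docker-compose down -v
docker-compose up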