I'm currently deploying my Laravel app on Elastic Beanstalk.
I want to switch the .env file based on the environment, but I can't manage to make my test work correctly.
Here is what I have written in my .ebextensions/02-env-file.config:
option_settings:
  aws:elasticbeanstalk:application:environment:
    ENV_NAME: '`{ "Ref" : "AWSEBEnvironmentName" }`' # assign the actual env name to ENV_NAME
container_commands:
  01-config-environment:
    command: mv /var/app/ondeck/.env.staging /var/app/ondeck/.env
    test: '[[ $ENV_NAME = "Staging" ]]'
    command: mv /var/app/ondeck/.env.demo /var/app/ondeck/.env
    test: '[[ $ENV_NAME = "Demo" ]]'
But it seems that it always runs the last command, whatever my environment is.
I guess I'm missing something here but can't find what.
Thank you for your help.
I finally have the answer to my question. I'm posting it here in case someone has the same problem.
It turns out that bundling the two commands under a single key doesn't work: the second command/test pair is a duplicate key in the YAML mapping, so it overwrites the first, and Elastic Beanstalk only ever sees the last one.
To achieve what I want, I need to split the commands in two, like this:
container_commands:
  01-config-environment-staging:
    command: mv .env.staging .env
    test: '[[ ${ENV_NAME} = "Staging" ]]'
  02-config-environment-demo:
    command: mv .env.demo .env
    test: '[[ ${ENV_NAME} = "Demo" ]]'
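An alternative that avoids the duplicate-key problem entirely is a single command that branches in the shell. This is an untested sketch under the same assumptions as above (ENV_NAME set from the environment, commands running in the app directory):

container_commands:
  01-config-environment:
    command: |
      case "$ENV_NAME" in
        Staging) mv .env.staging .env ;;
        Demo)    mv .env.demo .env ;;
      esac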
I'm trying to import a database schema into the mysql service with the following statement:
mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE < DB_Schema.sql
and it returns mysql: not found. I have even tried the following command:
docker exec -i mysql mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE < DB_Schema.sql
but received this error:
+ docker exec -i mysql mysql --user=$DB_USERNAME --password=$DB_PASSWORD 5i < DB_Schema.sql
Error: No such container: mysql
What would be the best way to use mysql here, so that I can import an instance of the DB into it for testing purposes, and how?
Please find the .yml file below.
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# Specify a docker image from Docker Hub as your build environment.
# All of your pipeline scripts will be executed within this docker image.
image: php:8.0-fpm-alpine

# All of your Pipelines will be defined in the `pipelines` section.
# You can have any number of Pipelines, but they must all have unique
# names. The default Pipeline is simply named `default`.
pipelines:
  default:
    # Each Pipeline consists of one or more steps which each execute
    # sequentially in separate docker containers.
    # name: optional name for this step
    # script: the commands you wish to execute in this step, in order
    - parallel:
        - step:
            name: Installing Dependencies and Composer
            caches:
              - composer
            script:
              # Your Pipeline automatically contains a copy of your code in its working
              # directory; however, the docker image may not be preconfigured with all
              # of the PHP/Laravel extensions your project requires. You may need to install
              # them yourself, as shown below.
              - apt-get update && apt-get install -qy git curl libmcrypt-dev unzip libzip-dev libpng-dev zip git gnupg gnupg2 php-mysql
              - docker-php-ext-configure gd --enable-gd --with-freetype --with-jpeg --with-webp && \
              - docker-php-ext-install gd && \
              - docker-php-ext-install exif && \
              - docker-php-ext-install zip && \
              - docker-php-ext-install pdo pdo_mysql
              - rm -rf ./vendor
              - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
              - composer install --ignore-platform-reqs
              - composer dump-autoload
              # Here we create a link between the .env.pipelines file and the .env file
              # so that our database can retrieve all the variables inside .env.pipelines
              - ln -f -s .env.pipelines .env
            artifacts:
              - vendor/**
        - step:
            name: Installing and Running npm
            image: node:16
            caches:
              - node
            script:
              - npm install -g grunt-cli
              - npm install
              - npm run dev
            artifacts:
              - node_modules/**
    - step:
        name: Running Test
        deployment: local
        script:
          # Start up the php server so that we can test against it
          - php artisan serve &
          # Give the server some time to start
          - sleep 5
          # - php artisan migrate
          - docker ps
          - docker container ls
          - mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE < DB_Schema.sql
          # - docker exec -i mysql mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD -e "SHOW DATABASES"
          - php artisan optimize
          - php artisan test
        services:
          - mysql
          - docker

# You might want to create and access a service (like a database) as part
# of your Pipeline workflow. You can do so by defining it as a service here.
definitions:
  services:
    mysql:
      image: mysql:latest
      environment:
        MYSQL_DATABASE: $DB_DATABASE
        MYSQL_USER: $DB_USERNAME
        MYSQL_PASSWORD: $DB_PASSWORD
        MYSQL_ROOT_PASSWORD: $DB_PASSWORD
        SERVICE_TAGS: mysql
        SERVICE_NAME: mysql
You cannot install/update/change your main image in the first step and expect the changes to still be there in the last step: each step runs in its own fresh container. Build a custom Docker image with all those installations instead; that will make the pipeline faster to run and will let you use the other tools you need in it.
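For example, a hypothetical build image along these lines could be pushed to a registry and referenced with image: in the pipeline (the package list is an assumption based on what the pipeline above installs; note the alpine base uses apk, not apt-get):

FROM php:8.0-fpm-alpine
# Alpine uses apk; mysql-client provides the `mysql` binary needed for the schema import
RUN apk add --no-cache git curl unzip libzip-dev mysql-client \
    && docker-php-ext-install zip exif pdo pdo_mysql
# Composer copied in from the official image
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer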
I prefer to use the mysql client outside Docker and have it reach into the Docker container through the port mapping that was set up. Conceptually, it is then like talking to a mysqld server on a separate machine.
LOAD DATA INFILE and INSERT, including use of mysql ... < dump.sql, work fine.
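In a Bitbucket Pipelines step on the alpine image above, that client has to be installed before it can be used; a minimal sketch, assuming the mysql service definition from the question:

- step:
    name: Import schema
    script:
      - apk add --no-cache mysql-client
      - mysql -h 127.0.0.1 -u $DB_USERNAME -p$DB_PASSWORD $DB_DATABASE < DB_Schema.sql
    services:
      - mysql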
How do I set up a new Symfony project with a MySQL database using Docker?
I've been trying to set up a new project using Docker for over a week now. I've read through the Docker documentation and found a few tutorials, but nothing really worked for me, and I just can't crack how the Docker setup works. The last time I tried, I just got a RuntimeException and an ErrorException.
Project Structure:
-myProject
-bin
-...
-config
-...
-docker
-build
-php
-Dockerfile
-php
-public
-index.php
-src
-...
-var
-...
-vendor
-...
-docker-compose.yaml
-...
My docker-compose.yaml:
version: '3.7'
services:
  php:
    build:
      context: .
      dockerfile: docker/build/php/Dockerfile
    ports:
      - "8100:80"
  # Configure the database
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-root}
My Dockerfile:
FROM php:7.3-apache
COPY . /var/www/html/
I expected to see the "Welcome to Symfony" page, but I got an error page instead.
Errors:
ErrorException
Warning: file_put_contents(/var/www/html/var/cache/dev/srcApp_KernelDevDebugContainerDeprecations.log): failed to open stream: Permission denied
AND
RuntimeException
Unable to write in the cache directory (/var/www/html/var/cache/dev)
What I need is some help setting up my Symfony 4 project with MySQL using Docker.
OK, so to make it work I just needed to give permission to the var folder using chmod in the Dockerfile:
FROM php:7.3.2-apache
COPY . /var/www/html/
RUN chmod -R 777 /var/www/html/
Found this answer in the comments, but the person who left it has since removed the comment:
You actually have no need to chmod your project root folder to something unnecessarily open like 0777.
In php:* containers, the PHP workers run as the www-data user. So all you need to do is chown your project root dir to www-data and verify that www-data can actually create folders in it (ls -lah will help you).
Here is my PHP stage from a Symfony 4.3 project:
FROM php:7.3-fpm as runtime
# install php ext/libraries and do other stuff.
WORKDIR /var/www/app
RUN chown -R www-data:www-data /var/www/app
COPY --chown=www-data:www-data --from=composer /app/vendor vendor
COPY --chown=www-data:www-data bin bin
COPY --chown=www-data:www-data config config
COPY --chown=www-data:www-data public public
COPY --chown=www-data:www-data src src
COPY --chown=www-data:www-data .env .env
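With either approach, Symfony still has to be pointed at the mysql service. A hypothetical Doctrine connection string for the compose file above (service name mysql, root password root; the database name is a placeholder) would go in .env as:

DATABASE_URL=mysql://root:root@mysql:3306/myproject?serverVersion=5.7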
I'm currently working on moving our application to Docker. It's a typical app with a backend and a frontend. I don't have any trouble with the front end, but I still can't launch the back end.
I have this Dockerfile for the backend:
FROM williamyeh/java8
RUN apt-get -y update && apt-get install -y maven
WORKDIR /explorerbackend
ADD settings.xml /root/.m2/settings.xml
ADD pom.xml /explorerbackend
ADD src /explorerbackend/src
RUN ["mvn", "clean", "install"]
ADD target/explorer-backend-1.0.jar /explorerbackend/app.jar
RUN sh -c 'touch /explorerbackend/app.jar'
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /explorerbackend/app.jar" ]
and this Dockerfile for mysql:
FROM mysql
ADD createDB.sql /docker-entrypoint-initdb.d
The reason I'm using a separate Dockerfile for mysql, instead of just using the image in docker-compose, is the need to create 2 databases on start (otherwise the backend will not launch).
The createDB.sql file looks like this:
CREATE DATABASE IE;
CREATE DATABASE IE_test;
Now I have a docker-compose.yml file which is supposed to start the 2 containers and make the backend connect to the database:
version: "3.0"
services:
database:
environment:
MYSQL_ROOT_PASSWORD: root
build:
context: *PATH_TO_DIR_WITH_DOCKERFILE*
dockerfile: Dockerfile
ports:
- 3306:3306
volumes:
- db_data:/var/lib/mysql
backend:
build:
context: *PATH_TO_DIR_WITH_DOCKERFILE*
dockerfile: Dockerfile
ports:
- 3000:3000
depends_on:
- database
volumes:
db_data:
When I run the command docker-compose up, the database container is up and running while the backend is failing:
backend_1 | java.sql.SQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up.
However, I'm able to log in to the database container, and I do see the databases created:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| IE |
| IE_test |
| mysql |
| performance_schema |
| sys |
+--------------------+
6 rows in set (0.00 sec)
The only cause I can think of is the backend's YAML property file:
app:
  data-base:
    name: IE
    link: database
    port: 3306
.................
From the frontend container I'm able to ping database (but am I allowed to put just link: database into the property file?):
root#897b187f9042:/frontend# ping database
PING database (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: icmp_seq=0 ttl=64 time=0.086 ms
64 bytes from 172.19.0.2: icmp_seq=1 ttl=64 time=0.088 ms
So I assume it's pingable from the backend container as well; why can't the backend connect to the DB server?
MySQL takes a few seconds to start up. To confirm this is a race condition, try the following:
$ docker-compose up -d database && sleep 5 && docker-compose up
When/if this confirms the race condition, you can alleviate it with a HEALTHCHECK on your database image.
See: https://github.com/docker-library/healthcheck/tree/master/mysql
Script from above link:
#!/bin/bash
set -eo pipefail

if [ "$MYSQL_RANDOM_ROOT_PASSWORD" ] && [ -z "$MYSQL_USER" ] && [ -z "$MYSQL_PASSWORD" ]; then
    # there's no way we can guess what the random MySQL password was
    echo >&2 'healthcheck error: cannot determine random root password (and MYSQL_USER and MYSQL_PASSWORD were not set)'
    exit 0
fi

host="$(hostname --ip-address || echo '127.0.0.1')"
user="${MYSQL_USER:-root}"
export MYSQL_PWD="${MYSQL_PASSWORD:-$MYSQL_ROOT_PASSWORD}"

args=(
    # force mysql to not use the local "mysqld.sock" (test "external" connectibility)
    -h"$host"
    -u"$user"
    --silent
)

if select="$(echo 'SELECT 1' | mysql "${args[@]}")" && [ "$select" = '1' ]; then
    exit 0
fi

exit 1
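To make Compose actually wait on that health check, the services can be wired together roughly like this (a sketch: it assumes the script above is baked into the database image as docker-healthcheck, and depends_on with a condition requires the Compose file v2.1+ syntax or a modern Compose version, not the plain "3.0" schema used above):

services:
  database:
    build: .
    healthcheck:
      test: ["CMD", "docker-healthcheck"]
      interval: 5s
      timeout: 3s
      retries: 10
  backend:
    build: .
    depends_on:
      database:
        condition: service_healthy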
Eventually, we found the problem, which was a kind of oversight.
The root cause was the backend Dockerfile:
FROM williamyeh/java8
RUN apt-get -y update && apt-get install -y maven
WORKDIR /explorerbackend
ADD settings.xml /root/.m2/settings.xml
ADD pom.xml /explorerbackend
ADD src /explorerbackend/src
RUN ["mvn", "clean", "install"]
ADD target/explorer-backend-1.0.jar /explorerbackend/app.jar
RUN sh -c 'touch /explorerbackend/app.jar'
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /explorerbackend/app.jar" ]
The idea is pretty simple:
1. Take a Java image
2. Install Maven
3. Copy the src folder of my project from the host
4. Build with Maven inside the container
5. Move the jar to the workdir inside the container
6. Launch it
However, step 5 was not done correctly: instead of copying the jar file that Maven had just created inside the container, I was copying a stale one from my host.
The issue was resolved by simply replacing
ADD target/explorer-backend-1.0.jar /explorerbackend/app.jar
with
RUN cp /explorerbackend/target/explorer-backend-1.0.jar /explorerbackend/app.jar
Thanks Rawcode for looking into it!
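As a side note, the same "build inside the container" idea is what multi-stage builds (available since Docker 17.05) are designed for; an untested sketch of an equivalent Dockerfile:

FROM maven:3-jdk-8 AS build
WORKDIR /explorerbackend
COPY settings.xml /root/.m2/settings.xml
COPY pom.xml .
COPY src src
RUN mvn clean install

FROM openjdk:8-jre
# copy the jar produced by the build stage, never one from the host
COPY --from=build /explorerbackend/target/explorer-backend-1.0.jar /app.jar
ENV JAVA_OPTS=""
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar"]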
I have been struggling with this issue for a while now. I want to do a performance test using a specific set of data.
To achieve this I am using Docker Compose, and I have exported a couple of .sql files.
When I first build and run the container using docker-compose build and docker-compose up, the dataset is fine. Then I run my test, which also inserts data. When I want to perform the test again, I relaunch the container using docker-compose up, but this time, for some reason I don't understand, the data inserted the last time (by my test) is still there, so I get different behaviour.
At the moment I have the following Dockerfile:
FROM mysql:5.7
ENV MYSQL_DATABASE=dev_munisense1 \
    MYSQL_ROOT_PASSWORD=pass
EXPOSE 3306
ADD docker/data/ /docker-entrypoint-initdb.d/
I did this because I read that the mysql Docker image runs everything in /docker-entrypoint-initdb.d/, and the first time it works properly.
I have also tried what these posts suggested:
How do i migrate mysql data directory in docker container?
http://txt.fliglio.com/2013/11/creating-a-mysql-docker-container/
How to create populated MySQL Docker Image on build time
How to make a docker image with a populated database for automated tests?
And a couple of identical other posts.
None of them seems to work for me.
How can I make sure the dataset is exactly the same each time I launch the container, without having to rebuild the image each time (this takes rather long because of the large dataset)?
Thanks in advance
EDIT:
I have also tried running the container with different arguments like:
docker-compose up --force-recreate --build mysql, but without success. The container is rebuilt and restarted, but the DB is still affected by my test. Currently the only solution to my problem is to remove the entire container and image.
I managed to fix the issue (with the mysql image) by doing the following:
1. Change the mount point of the SQL storage (this is what actually caused the problem). I used the solution suggested in How to create populated MySQL Docker Image on build time, but applied it with a sed command: RUN sed -i 's|/var/lib/mysql|/var/lib/mysql2|g' /etc/mysql/my.cnf
2. Add my scripts to a folder inside the container
3. Run the import.sh script that inserts the data using the daemon (using the wait-for-it.sh script)
4. Remove the SQL scripts
5. Expose the port as usual
The Dockerfile looks like this (the variables are used to select different SQL files; I wanted multiple versions of the image):
FROM mysql:5.5.54
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh /utils/wait-for-it.sh
COPY docker/import.sh /usr/local/bin/
RUN sed -i 's|/var/lib/mysql|/var/lib/mysql2|g' /etc/mysql/my.cnf
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_ROOT_PASSWORD
ARG MYSQL_ALLOW_EMPTY_PASSWORD
ARG DEVICE_INFORMATION
ARG LAST_NODE_STATUS
ARG V_NODE
ARG NETWORKSTATUS_EVENTS
ENV MYSQL_DATABASE=$MYSQL_DATABASE \
    MYSQL_USER=$MYSQL_USER \
    MYSQL_PASSWORD=$MYSQL_PASSWORD \
    MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD \
    DEVICE_INFORMATION=$DEVICE_INFORMATION \
    LAST_NODE_STATUS=$LAST_NODE_STATUS \
    V_NODE=$V_NODE \
    MYSQL_ALLOW_EMPTY_PASSWORD=$MYSQL_ALLOW_EMPTY_PASSWORD
#Set up tables
COPY docker/data/$DEVICE_INFORMATION.sql /usr/local/bin/device_information.sql
COPY docker/data/$NETWORKSTATUS_EVENTS.sql /usr/local/bin/networkstatus_events.sql
COPY docker/data/$LAST_NODE_STATUS.sql /usr/local/bin/last_node_status.sql
COPY docker/data/$V_NODE.sql /usr/local/bin/v_node.sql
RUN chmod 777 /usr/local/bin/import.sh && chmod 777 /utils/wait-for-it.sh && \
/bin/bash /entrypoint.sh mysqld --user='root' & /bin/bash /utils/wait-for-it.sh -t 0 localhost:3306 -- /usr/local/bin/import.sh; exit
RUN rm -f /usr/local/bin/*.sql
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
The import.sh script looks like this:
#!/bin/bash
echo "Going to insert the device information"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/device_information.sql
echo "Going to insert the last_node_status"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/last_node_status.sql
echo "Going to insert the v_node"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/v_node.sql
echo "Going to insert the networkstatus_events"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE < /usr/local/bin/networkstatus_events.sql
echo "Database now has the following tables"
mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE --execute="SHOW TABLES;"
So now, all I have to do to start my performance tests is:
#!/usr/bin/env bash
echo "Shutting down previous containers"
docker-compose -f docker-compose.yml down
docker-compose -f docker-compose-test-10k-half.yml down
docker-compose -f docker-compose-test-100k-half.yml down
docker-compose -f docker-compose-test-500k-half.yml down
docker-compose -f docker-compose-test-1m-half.yml down
echo "Launching rabbitmq container"
docker-compose up -d rabbitmq & sh wait-for-it.sh -t 0 -h localhost -p 5672 -- sleep 5;
echo "Going to execute 10k test"
docker-compose -f docker-compose-test-10k-half.yml up -d mysql_10k & sh wait-for-it.sh -t 0 -h localhost -p 3306 -- sleep 5 && ./networkstatus-event-service --env=performance-test --run-once=true;
docker-compose -f docker-compose-test-10k-half.yml stop mysql_10k
plus a couple more of these lines (slightly different, because of the different container names).
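For completeness, building one of those image variants means wiring the ARGs to concrete dump files at build time; a hypothetical invocation (the file names are placeholders):

docker build \
  --build-arg MYSQL_DATABASE=dev_munisense1 \
  --build-arg MYSQL_USER=test \
  --build-arg MYSQL_PASSWORD=pass \
  --build-arg MYSQL_ROOT_PASSWORD=pass \
  --build-arg DEVICE_INFORMATION=device_information_10k \
  --build-arg LAST_NODE_STATUS=last_node_status_10k \
  --build-arg V_NODE=v_node_10k \
  --build-arg NETWORKSTATUS_EVENTS=networkstatus_events_10k \
  -t mysql_10k .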
Running docker-compose down after your tests will destroy everything associated with your docker-compose.yml.
Docker Compose is a container lifecycle manager, and by default it tries to keep everything across multiple runs. As Stas Makarov mentions, there is a VOLUME defined in the mysql image that persists the data outside of the container.
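In other words, removing the containers together with their volumes between runs should be enough to make the init scripts run again on the next start; a minimal sketch:

# -v also removes named and anonymous volumes, including the one behind /var/lib/mysql
docker-compose down -v
docker-compose up --build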
I'm setting up the development environment for my application inside Docker containers, at the moment I have these containers:
myapp-data - Holds application source code and log files
myapp-phpfpm - Runs the php5-fpm process for Nginx
myapp-nginx - Runs the Nginx web server that serves the application
This setup works beautifully, I'm really happy with it. But my application needs a MySQL database to connect to, so I'm using the official MySQL image, and running it like so:
sudo docker run --name myapp-mysql -e "MYSQL_ROOT_PASSWORD=iamroot" -e "MYSQL_USER=redacted" -e "MYSQL_PASSWORD=redacted" -e "MYSQL_DATABASE=redacted" -d mysql
This also works great. But my myapp-phpfpm container needs to be linked to the myapp-mysql container in order to expose MySQL's connection details to my application. So I restart my myapp-phpfpm container:
sudo docker run --privileged=true --name myapp-phpfpm --volumes-from myapp-data --link myapp-mysql:mysql -d readr/phpfpm
So now my myapp-phpfpm container is linked to my myapp-mysql container so I should be able to access the database within my PHP application.
The problem is I can't. The environment variables don't exist inside the PHP application. If I do:
die(var_dump(`printenv`));
I don't get the MySQL environment variables. To debug, I ran whoami to find out which user PHP runs as: www-data. I then created a bash process inside the container, used su www-data to become the www-data user, and ran printenv there. Sure enough, the MySQL environment variables do exist there:
MYSQL_PORT_3306_TCP_PORT=3306
MYSQL_PORT_3306_TCP=tcp://172.17.1.118:3306
MYSQL_ENV_MYSQL_ROOT_PASSWORD=iamroot
... etc ...
So, how can I access the environment variables that Docker exposes about my myapp-mysql container within PHP?
I solved this by creating a custom start.sh script that then gets called from my Dockerfile:
#!/bin/bash
# bash, not sh: the indirect expansion ${!_curVar} used below is a bash feature

# Function to update the fpm configuration to make the service environment variables available
function setEnvironmentVariable() {
    if [ -z "$2" ]; then
        echo "Environment variable '$1' not set."
        return
    fi

    # Check whether variable already exists
    if grep -q $1 /etc/php5/fpm/pool.d/www.conf; then
        # Reset variable
        sed -i "s/^env\[$1.*/env[$1] = $2/g" /etc/php5/fpm/pool.d/www.conf
    else
        # Add variable
        echo "env[$1] = $2" >> /etc/php5/fpm/pool.d/www.conf
    fi
}

# Grep for variables that look like MySQL (MYSQL)
for _curVar in `env | grep MYSQL | awk -F = '{print $1}'`; do
    # awk has split them by the equals sign
    # Pass the name and value to our function
    setEnvironmentVariable ${_curVar} ${!_curVar}
done

# start php-fpm
exec /usr/sbin/php5-fpm
This then adds the environment variables to the PHP5-FPM config so they can be accessed from within PHP scripts.
php-fpm by default clears all environment variables; from /etc/php5/fpm/pool.d/www.conf:
; Setting to "no" will make all environment variables available to PHP code
; via getenv(), $_ENV and $_SERVER.
; Default Value: yes
;clear_env = no
You can fix this by uncommenting that setting from your Dockerfile:
RUN sed -i -e "s/;clear_env\s*=\s*no/clear_env = no/g" /etc/php5/fpm/pool.d/www.conf
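After that change (and an FPM restart), the linked container's variables are visible to PHP again; for example, using a variable name from the printenv output above:

<?php
// prints string(4) "3306" once clear_env is disabled
var_dump(getenv('MYSQL_PORT_3306_TCP_PORT'));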
I'd recommend using something like fig and just passing the env vars to both containers at startup. If you really want to, you could docker inspect any container from any other container by bind-mounting the Docker socket, then do something like this:
docker inspect -f {{.Config.Env}} myapp-mysql
The problem may not be the environment variables; it may be your PHP installation.
TL;DR: environment variables that are accessible when you run your application under Apache and PHP may not be available if you're using nginx or lighttpd and FastCGI.
The longer version
Here's the way I understand it (and it's probably wrong or incomplete, because my experience with this is quite limited): because PHP is not running as part of the web server process under nginx with FastCGI, it does not have access to the shell in which the server was started, and therefore does not have access to the environment variables in that shell.
The solution is to declare the variables you're interested in as part of the configuration. This answer is kind of terse, but it contains the basic answer to this problem.
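For nginx, that usually means one fastcgi_param line per variable; a hypothetical sketch (the socket path and the hardcoded value are assumptions for illustration):

location ~ \.php$ {
    include fastcgi_params;
    # pass the variable to PHP explicitly; FastCGI does not inherit the shell environment
    fastcgi_param MYSQL_PORT_3306_TCP_PORT 3306;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}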