How to set up Docker for a Symfony project with MySQL?

How to set up a new Symfony project with a MySQL database using Docker?
I've been trying to set up a new project using Docker for over a week now. I've read through the Docker documentation and found a few tutorials, but nothing really worked for me, and I'm just not able to crack how the Docker setup works. Last time I tried, I just got a RuntimeException and an ErrorException.
Project Structure:
-myProject
  -bin
    -...
  -config
    -...
  -docker
    -build
      -php
        -Dockerfile
    -php
  -public
    -index.php
  -src
    -...
  -var
    -...
  -vendor
    -...
  -docker-compose.yaml
  -...
My docker-compose.yaml:
version: '3.7'
services:
  php:
    build:
      context: .
      dockerfile: docker/build/php/Dockerfile
    ports:
      - "8100:80"
  # Configure the database
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD:-root}
My Dockerfile:
FROM php:7.3-apache
COPY . /var/www/html/
I expected to see the "Welcome to Symfony" page, but I got an error page instead.
Errors:
ErrorException
Warning: file_put_contents(/var/www/html/var/cache/dev/srcApp_KernelDevDebugContainerDeprecations.log): failed to open stream: Permission denied
AND
RuntimeException
Unable to write in the cache directory (/var/www/html/var/cache/dev)
What I need is some help to set up my Symfony 4 project with MySQL using Docker.

OK, so to make it work I just needed to give permission to the var folder using chmod in the Dockerfile:
FROM php:7.3.2-apache
COPY . /var/www/html/
RUN chmod -R 777 /var/www/html/
Found this answer in the comments, but the person who left it has since removed the comment.

You actually have no need to chmod your project root folder to something unnecessarily open, like 0777.
In php:* containers, the PHP workers run as the www-data user. So all you need to do is chown your project root dir to www-data and verify that the www-data user can actually create folders in it (ls -lah will help you).
Here is the PHP stage from my Symfony 4.3 projects:
FROM php:7.3-fpm as runtime
# install php ext/libraries and do other stuff.
WORKDIR /var/www/app
RUN chown -R www-data:www-data /var/www/app
COPY --chown=www-data:www-data --from=composer /app/vendor vendor
COPY --chown=www-data:www-data bin bin
COPY --chown=www-data:www-data config config
COPY --chown=www-data:www-data public public
COPY --chown=www-data:www-data src src
COPY --chown=www-data:www-data .env .env
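The COPY --from=composer lines above assume a separate composer stage earlier in the same multi-stage Dockerfile, which the answer does not show. A minimal sketch of what such a stage could look like, with the stage name and the /app path chosen only to match the copy paths above:
# Hypothetical "composer" stage referenced by --from=composer above.
# The official composer image and the /app working directory are assumptions.
FROM composer:2 as composer
WORKDIR /app
COPY composer.json composer.lock ./
# Install dependencies only; the application source is copied in the runtime stage.
RUN composer install --no-dev --no-scripts --prefer-dist --no-progress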

Related

Docker, paths, json-file not found: not found

What is the problem? This happens every time...
=> CACHED [2/5] WORKDIR ../app 0.0s
=> ERROR [3/5] COPY package.json . 0.0s
------
> [3/5] COPY package.json .:
------
failed to compute cache key: "/package.json" not found: not found
I can't understand what I am doing wrong.
Dockerfile:
FROM node
WORKDIR /app
COPY ../app/package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
I run docker build . in PowerShell.
You can't include files from outside your build context (the directory you are building from) in a Docker image.
So you can either move your app directory on the host into the folder containing the Dockerfile, or
you can run the following
docker build -f docker\Dockerfile .
in the parent directory (i.e. the directory containing the docker and the app folders), and adjust your Dockerfile as follows:
COPY app/package.json .
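Putting the suggestion together, the adjusted Dockerfile could look like the sketch below. It assumes the layout described above (the Dockerfile inside a docker folder, the source inside a sibling app folder, and the build run from their common parent), so the exact paths are assumptions rather than part of the original answer:
# Sketch: build from the parent directory with
#   docker build -f docker\Dockerfile .
# so all COPY paths are relative to that parent directory.
FROM node
WORKDIR /app
COPY app/package.json .
RUN npm install
# Copy the rest of the application source from the app folder.
COPY app/ .
EXPOSE 3000
CMD ["npm", "start"]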
You are setting the WORKDIR; as the documentation says:
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
I think saying just
COPY package.json .
should be enough.
Dockerfiles don't have access to parent directories, unfortunately. However, you can build from the parent directory by running docker build -f path/to/Dockerfile . there. Keep in mind that your COPY paths are then relative to the directory you run docker build from, i.e. that parent directory.
Also, WORKDIR uses/creates a directory inside the container. WORKDIR /app creates the 'app' folder in the root directory of the container, so /../app is unnecessary (although it achieves the same result).
You cannot reference the parent directory in a Dockerfile, for the reason explained in the Docker documentation:
The path must be inside the context of the build; you cannot ADD
../something /something, because the first step of a docker build is
to send the context directory (and subdirectories) to the docker
daemon.
Source
The easiest workaround is:
docker build -t <some tag> -f <dir/dir/Dockerfile> .
Another thing you can adjust is the build context. Instead of sending all the files in the current directory to the Docker daemon, you can point the context at a specific directory, for example:
docker build -t <some tag> -f ./dir/to/Dockerfile ./dir/to/build/context
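As a concrete illustration of the two options (the directory and tag names here are made up for the example, not taken from the question):
# Hypothetical layout:
#   project/
#     app/               <- files to COPY
#     docker/Dockerfile
# Option 1: run from project/ and use project/ itself as the context
docker build -t myimage -f docker/Dockerfile .
# Option 2: keep the Dockerfile where it is but use app/ as the context,
# so COPY package.json . resolves against app/
docker build -t myimage -f docker/Dockerfile ./app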

Mariadb multi stage container with an existing database

I need to create a container with an existing database and perform some actions on it before distributing it.
In production, I've got some really big databases:
[root]# ls /home/db-backup/test/prepare/26_06_2020/full/
ibdata1
ib_logfile0
ib_logfile1
ibtmp1
mysql
performance_schema
my_existing_database1
my_existing_database2
my_existing_database3
I just need to publish a container with my_existing_database1 for my team.
I tried many Dockerfiles but I can't find a way to do it, and I don't understand why.
Here is a simplified Dockerfile:
FROM mariadb:latest as builder
ENV MYSQL_ALLOW_EMPTY_PASSWORD yes
# for easier debug, i will remove that in prod
RUN sed -i '/\[mysqld\]/a plugin-load-add = auth_socket.so' /etc/mysql/my.cnf
WORKDIR /initialized-db
COPY ibdata1 .
COPY mysql ./mysql
COPY performance_schema ./performance_schema
COPY my_existing_database1 ./my_existing_database1
COPY db-init.sh /docker-entrypoint-initdb.d/
RUN chown mysql:mysql . \
&& chmod 660 ibdata1 \
&& chmod +x /docker-entrypoint-initdb.d/db-init.sh
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db", "--aria-log-dir-path", "/initialized-db"]
# No file named test
RUN ls /initialized-db/
FROM mariadb:latest
COPY --from=builder /initialized-db /var/lib/mysql
As you can see, I try to execute my script db-init.sh. Simplified version:
#!/bin/bash
set -e -x
mysql -u root -e "CREATE USER 'test'@'%' IDENTIFIED BY 'test';"
touch /initialized-db/test
Unfortunately, my script is not executed, as the test file is not created.
I tried to bypass /usr/local/bin/docker-entrypoint.sh with my own script (a copy/paste of the file with some edits), but that is not working either (neither the user nor the test file is created).
Can you help me on this one please?
You should preferably mount a directory for the data and distribute the data separately.
If you must distribute it with the container itself, you should be able to edit the database configuration to use a custom data directory and initialize your databases into it with RUN and COPY.
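A minimal sketch of the first suggestion, mounting a host directory that you distribute alongside the image (the ./db-data path and service name are assumptions for the example):
# docker-compose.yml sketch: the prepared datadir is distributed separately
# and mounted at run time instead of being baked into the image.
version: '3.7'
services:
  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
    volumes:
      # ./db-data is an assumed path containing the prepared data directory
      - ./db-data:/var/lib/mysql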

Docker-compose : mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)

I'm having an issue when starting the db service with docker compose:
version: '3'
services:
  # Mysql DB
  db:
    image: percona:5.7
    #build: ./docker/mysql
    volumes:
      - "./db/data:/var/lib/mysql"
      - "./db/init:/docker-entrypoint-initdb.d"
      - "./db/backups:/tmp/backups"
      - "./shared/home:/home"
      - "./shared/root:/home"
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: db_name
      MYSQL_USER: user
      MYSQL_PASSWORD: pass
    ports:
      - "3307:3306"
I have tried everything with no luck:
"./db/data:/var/lib/mysql:rw"
Creating a Dockerfile and using build instead of image:
FROM percona:5.7
RUN adduser mysql
RUN sudo chown mysql /var/lib/mysql
RUN sudo chgrp mysql /var/lib/mysql
Also I have tried to add a user on db service:
user: "1000:50"
But none of those solved it. What am I missing?
MySQL 5.7 installation error `mysqld: Can't create/write to file '/var/lib/mysql/is_writable'`
I had to change the user:group of ./db/data to 999:999, so that the docker user is the one making the changes.
sudo chown 999:999 ./db/data
Make sure that the user who is running docker has access to ./db/data
# Not in the dockerfile
sudo chown $(whoami) ./db/data
sudo chgrp $(whoami) ./db/data
Docker tells you that you don't have the permissions; it might also mean that you need to verify that your shared volume ./db/data has the correct permissions.
According to the Dockerfile, the Percona 5.7 image runs under CentOS 8 as the mysql user. Check the user ID (uid) and group ID (gid) inside the container:
user@host$ docker run --rm -t percona:5.7.29 sh -c 'id'
uid=999(mysql) gid=999(mysql) groups=999(mysql)
The default user inside the container uses uid and gid 999. Then change your directory ownership to 999:999:
sudo chown 999:999 ./db/data
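Putting that together, a short shell sketch of the fix as run from the host (the ./db/data path comes from the compose file above; the rest is illustrative):
# Check which uid/gid the percona image uses (999:999 here)
docker run --rm -t percona:5.7 sh -c 'id'
# Give the host directory to that uid/gid so the containerized mysql user can write to it
mkdir -p ./db/data
sudo chown -R 999:999 ./db/data
# Then bring the db service up again
docker-compose up -d db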
This is an addition to Albeis' answer.
I spent a whole day with an almost identical problem. I also changed the ownership of the related files, only to see the changes get wiped out and the permission issues come back. I even changed the ownership of my curl-installed docker-compose executable. I didn't get any reprieve until adding the volumes to the .dockerignore, as was suggested in this GitHub issue reply.
I suffered this issue and it took quite some time to figure out what the culprit was.
In my case, I have a dual-boot Windows/Linux system.
The code of the problematic project was on a Windows filesystem.
Once I cloned the project onto my Linux drive, on an ext4 filesystem, the problem went away.
You need permission to execute scripts in the directory:
sudo chown 999:999 ./db/data
sudo chmod +x ./db/data

Symfony 3 GitLab CI SQLSTATE[HY000] [2002] No such file or directory MySQL connection

Feel free to give advice about CI, because I am a newbie at CI.
I have a Symfony (4) application and it works well on the production server. After that I wanted to write a CI file for this project, but I got an error from MySQL (5.7).
I checked that MySQL is linked to the main container, and I am pretty sure about what I am doing.
I also used a remote MySQL server, but I got the same error again.
I get the error on this line:
...php bin/console doctrine:schema:create...
This is the .gitlab-ci.yml file:
image: php:7.1-cli
services:
  - mysql:latest
variables:
  APP_ENV: prod
  MYSQL_ROOT_PASSWORD: root
  MYSQL_USER: user_master
  MYSQL_PASSWORD: secret
  MYSQL_DATABASE: db_master
  DATABASE_URL: mysql://user_master:secret@mysql:3306/db_master
cache:
  paths:
    - vendor/
stages:
  - build
  - test
  - deploy
before_script:
  - apt-get update -yqq
  - apt-get install -yqq libicu-dev libxml2-dev libxslt-dev libmcrypt-dev zlib1g-dev
  - docker-php-ext-configure intl
  - docker-php-ext-install -j$(nproc) intl dom xmlrpc xsl pdo pdo_mysql mysqli mcrypt zip
  - docker-php-ext-enable opcache
after_script:
  - php bin/console doctrine:schema:drop --force
build:app:
  stage: build
  script:
    - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
    - /usr/local/bin/composer install --no-dev --no-progress --no-suggest --optimize-autoloader --quiet
    - rm -rf /usr/local/bin/composer
    - php bin/console doctrine:schema:create --env=prod --ansi
test:app:
  stage: test
  script:
    - echo "Yes this is test - YAYY"
deploy:app:
  stage: deploy
  script:
    - echo "Yes, this is deploy - YAYY"
I searched the GitLab docs, GitHub and Stack Overflow. I found solutions, but they were all the same.
I tried:
- sleep 60s
- a remote MySQL server
- pdo and pdo_mysql should be enough, I know, but I installed mysqli as well
- adding --env=prod to doctrine:schema:create
- adding separate jobs, like build:mysql and build:app (actually I don't know how I can get the container IP address)
- I also checked some Laravel questions, but those didn't work either.
Should I install the MySQL server inside the container? Is that a good idea for CI? Or do I have to install dependencies for this container, like libmysql-dev?
Actually, if you have a good example (Symfony, MySQL and GitLab CI) that I can look at, let me know.
Answer
I solved it, but actually I am not sure how. I just added
DATABASE_URL: mysql://<user>:<password>@<hostname>:<port>/<db_name>
Could you guide me on that?
Probably your application searches for this ENV variable and uses it if it is present: https://symfony.com/doc/current/doctrine/dbal.html
"To get started, configure the DATABASE_URL environment variable"

Docker - can not start mysql permission error

I have a problem with Docker and MySQL. I have built an image based on phusion/baseimage. I want to create an image where the /var/lib/mysql directory is shared with my host (OS X), because I don't want to store my data in the container.
Everything works fine when the /var/lib/mysql directory is not shared. When I share this directory, the mysql service cannot start. The logs contain information about permission problems during startup.
The result of ls -la in /var/lib is:
[...]
drwxr-xr-x 1 lc staff 170 Jan 3 16:55 mysql
[...]
The mysql user should be the owner. I tried to run:
sudo chown -R mysql:mysql mysql/
But this command didn't return any error and didn't change the owner.
I have also tried to add my user (inside the container) to the mysql group:
lc@cbe25ac0681e:~$ groups lc
lc : lc sudo mysql
But it also didn't work. Does anybody have any idea how to solve this issue?
My docker-compose.yml file:
server:
  image: lukasz619/light-core_server
  ports:
    - "40080:80"
    - "40022:22"
    - "40443:443"
    - "43306:3306"
  volumes:
    - /Users/lukasz/Workspace/database/mysql:/var/lib/mysql
This is my Dockerfile:
FROM phusion/baseimage
RUN apt-get update
RUN apt-get install -y apache2
# Enable SSH
RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh
# public key for root
ADD public_key.pub /tmp/public_key.pub
RUN cat /tmp/public_key.pub >> /root/.ssh/authorized_keys && rm -f /tmp/public_key.pub
EXPOSE 80
CMD ["/sbin/my_init"]
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
When we say that users are shared between the container and the host, it's not quite true; it's the UIDs that are actually shared. What this means is that the user named mysql in the container most likely has a different UID to the user mysql on the host.
We can try to fix this by using the image to set the permissions:
docker run --user root lukasz619/light-core_server chown -R mysql:mysql /var/lib/mysql
This may work, depending on how you've set up the image. There is also a further complication from the VM running in-between.
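To check whether that is the case here, you could compare the UID of the mysql user inside the image with the ownership of the shared directory on the host; a small shell sketch (the image name and host path come from the question, the rest is illustrative):
# UID/GID of the mysql user inside the container
docker run --rm lukasz619/light-core_server id mysql
# Numeric ownership of the shared directory on the OS X host
ls -lan /Users/lukasz/Workspace/database/mysql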
The problem is OS X. I ran this docker-compose.yml:
server:
  image: lukasz619/light-core_server
  ports:
    - "40080:80"
    - "40022:22"
    - "40443:443"
    - "43306:3306"
  volumes:
    - /var/mysql:/var/lib/mysql
It's possible to share directories (and have chown work) ONLY between boot2docker and the container; sharing between OS X and the container does not work properly.
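One commonly used alternative, not mentioned in the answer above, is to keep the data off the OS X filesystem entirely by using a Docker-managed named volume; a sketch under that assumption (the mysql-data volume name is hypothetical):
# Hypothetical v3 compose sketch using a Docker-managed named volume
version: '3'
services:
  server:
    image: lukasz619/light-core_server
    ports:
      - "43306:3306"
    volumes:
      # mysql-data is a named volume managed by Docker, so the database
      # files never live on the OS X filesystem
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data: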