Beanstalk puts the nginx conf file into the wrong directory - amazon-elastic-beanstalk

I followed the documentation to replace the default nginx.conf file, so the tree looks like this:
app_root/
├─ .platform/
│ ├─ nginx/
│ │ ├─ nginx.conf
│ │ ├─ conf.d/
│ │ │ ├─ my_custom_conf.conf
Although everything seems correct, after the deploy the configuration ends up being moved from .platform to /var/proxy/staging; the eb-engine.log, in fact, reports this:
[INFO] Running command /bin/sh -c cp -rp /var/app/staging/.platform/nginx/. /var/proxy/staging/nginx
This means that the real config file /etc/nginx/nginx.conf is still the default one.

You can find the answer in the AWS documentation:
/var/app/staging/ – Where application source code is processed during deployment.
/var/app/current/ – Where application source code runs after processing.
When a new code version is deployed, /var/app/staging is used to run build commands and test the settings, including the nginx config file. If the deployment goes through, the code in /var/app/staging is moved to /var/app/current and the nginx config is copied to /etc/nginx/nginx.conf.
/etc/nginx/nginx.conf is the active config file. You can see this by using nginx -t.
[ec2-user@ip-172-31-1-161 ~]$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
However, the content of /etc/nginx/nginx.conf is NOT the default config. Whatever you put in .platform/nginx/nginx.conf in your app directory will end up in /etc/nginx/nginx.conf.
In eb-engine.log you can see the config check in staging. If successful, you will see the cp command that copies the file(s) to /etc/nginx:
2022/09/07 14:23:03.027695 [INFO] Running command /bin/sh -c /usr/sbin/nginx -t -c /var/proxy/staging/nginx/nginx.conf
2022/09/07 14:23:03.078615 [INFO] nginx: the configuration file /var/proxy/staging/nginx/nginx.conf syntax is ok
nginx: configuration file /var/proxy/staging/nginx/nginx.conf test is successful
2022/09/07 14:23:03.078683 [INFO] Running command /bin/sh -c cp -rp /var/proxy/staging/nginx/* /etc/nginx
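To double-check on a running instance that the custom config actually landed in /etc/nginx, you can inspect the live configuration yourself. A minimal sketch, assuming you have SSH access through the EB CLI and that your custom file contains a directive you can search for (client_max_body_size here is just a placeholder):
eb ssh
# dump the complete configuration nginx is actually running with
sudo nginx -T | less
# search for a directive that only your custom config contains (placeholder name)
sudo grep -R "client_max_body_size" /etc/nginx/nginx.conf /etc/nginx/conf.d/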

Related

Reduce build time in GitOps by using Docker image layers from previous build with Azure Registry

I'm building a Docker image on pull requests in my GitHub Actions setup. The images are built and pushed to Azure Container Registry (ACR). Often it's only a small code change, and if I could reuse the layers from the previous build (already pushed to ACR), I could save a lot of time.
As shown in the Dockerfile below, yarn install could be skipped, since new changes only affect the COPY statement that follows it:
FROM node:16
# create dirs and chown
RUN mkdir -p /usr/src/node-app && chown -R node:node /usr/src/node-app
WORKDIR /usr/src/node-app
COPY package.json yarn.lock tsconfig.json ./
USER node
# install node modules
RUN yarn install --pure-lockfile
# ensure ownership
COPY --chown=node:node . .
# set default env
RUN mv .env.example .env
EXPOSE 3001
# entrypoint is node
# see https://github.com/nodejs/docker-node/blob/main/docker-entrypoint.sh
# default command: prod start
CMD ["yarn", "start"]
How can I download the previous image from ACR and use its layers? Simply pulling the previous image (with a different tag) does not seem to work.
You need to provide the --cache-from flag to the docker build command if you want to use the downloaded image as a cache source.
https://docs.docker.com/engine/reference/commandline/build/#options
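A minimal sketch of what the build step could look like (the registry/repository name myregistry.azurecr.io/myapp and the tags are placeholders; note that when building with BuildKit, the image used as a cache source must itself have been built with BUILDKIT_INLINE_CACHE=1 so the cache metadata is embedded in it):
# pull the last published image to seed the layer cache (ignore the failure on the very first build)
docker pull myregistry.azurecr.io/myapp:latest || true
# build, reusing layers from the pulled image where possible
docker build \
  --cache-from myregistry.azurecr.io/myapp:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t myregistry.azurecr.io/myapp:latest .
docker push myregistry.azurecr.io/myapp:latest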

Docker, paths, json-file not found: not found

What is the problem? This happens every time...
=> CACHED [2/5] WORKDIR ../app 0.0s
=> ERROR [3/5] COPY package.json . 0.0s
------
> [3/5] COPY package.json .:
------
failed to compute cache key: "/package.json" not found: not found
I can't understand what I'm doing wrong.
Dockerfile:
FROM node
WORKDIR /app
COPY ../app/package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
I use docker build . in PowerShell.
You can't include files from outside your build context (the directory you run the build from) in a Docker image.
So you can either move your app directory on the host into the folder containing the Dockerfile, or you can do the following:
docker build -f docker\Dockerfile .
in the parent directory (i.e. the directory containing the docker and the app folders), and adjust your Dockerfile as follows:
COPY app/package.json .
You are updating the WORKDIR; as the documentation says:
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile.
I think saying just
COPY package.json .
should be enough.
Dockerfiles don't have access to parent directories, unfortunately. However, you can run the build from the parent directory with docker build -f path/to/Dockerfile . (note the trailing dot). Keep in mind that your COPY paths are then relative to the directory you run docker build from, i.e. the parent directory.
Also, WORKDIR uses/creates a directory inside the container. WORKDIR /app creates the app folder in the root directory of the container, so /../app is unnecessary (although it resolves to the same path).
You cannot reference the parent directory in a Dockerfile, for the reason explained in the Docker documentation:
The path must be inside the context of the build; you cannot ADD ../something /something, because the first step of a docker build is to send the context directory (and subdirectories) to the docker daemon.
Source
The easiest workaround is:
docker build -t <some tag> -f <dir/dir/Dockerfile> .
Another thing you can change is the build context. Instead of sending everything in the current directory to the Docker daemon, you can specify the context explicitly, for example:
docker build -t <some tag> -f ./dir/to/Dockerfile ./dir/to/build/context
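Putting it together, a small sketch using the folder names from the question (docker/ holding the Dockerfile, app/ holding the sources; the my-node-app tag is a placeholder):
# run from the parent directory that contains both docker/ and app/
docker build -f docker/Dockerfile -t my-node-app .
# inside docker/Dockerfile, COPY paths are then relative to that build context:
#   COPY app/package.json .
#   COPY app/ .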

Nginx Docker Volumes empty

When using docker-compose, nginx isn't writing files into the mounted volume the way other images do.
For example with Mysql, the below code will save the data created at /var/lib/mysql to the local machine at ./volumes/db/data
./volumes/db/data:/var/lib/mysql
Another example, with Wordpress, the below code will save the data created at /var/www/html/wp-content/uploads to the local machine at ./volumes/uploads/data
./volumes/uploads/data:/var/www/html/wp-content/uploads
This is not working with nginx, though: no matter what I change /some/nginx/path to, nothing ever appears at ./volumes/nginx/data
./volumes/nginx/data:/some/nginx/path
Does nginx work differently in this regard?
Update
Using a named volume with the following configurations solved this problem:
In the services section of the docker-compose file, I changed ./volumes/nginx/data:/some/nginx/path to nginx_data:/some/nginx/path
And then my volumes section reads as follows
volumes:
  nginx_data:
    driver: local
    driver_opts:
      o: bind
      device: ${PWD}/volumes/nginx/data
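For reference, a sketch of how the whole docker-compose file might look with that named volume wired into the nginx service (the service name, image, port mapping and the /usr/share/nginx/html target are assumptions on my part, as is the type: none option, which the local driver commonly needs for bind mounts; the host directory must already exist):
version: "3.8"
services:
  nginx:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - nginx_data:/usr/share/nginx/html

volumes:
  nginx_data:
    driver: local
    driver_opts:
      type: none   # bind an existing host directory instead of a Docker-managed one
      o: bind
      device: ${PWD}/volumes/nginx/data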
There should be no difference; a volume mounts a local directory onto a directory in the container. Either you are not mounting correctly, or you are mounting an incorrect path inside the nginx container (one which nginx does not use).
Based on the official nginx Docker image docs at https://docs.docker.com/samples/library/nginx/ you should mount to /usr/share/nginx/html:
$ docker run --name some-nginx -v /some/content:/usr/share/nginx/html:ro -d nginx
In addition, I would include full paths in your docker-compose.yaml:
volumes:
- /full_path/volumes/nginx/data:/usr/share/nginx/html
If this is still not working you should exec into the container and confirm that the directory is mounted:
$ docker exec -it <container_name> sh
$ df -h | grep nginx
# write data, confirm you see it on the docker host's directory
$ cd /usr/share/nginx/html
$ touch foo
# on docker host
$ ls /full_path/volumes/nginx/data/foo
If any of this is failing I would look at docker logs to see if there was an issue mounting the directory, perhaps a path or permission issue.
$ docker logs <container_name>
--- UPDATE ---
I ran everything you are using and it just worked:
$ cat Dockerfile
FROM nginx
RUN touch /usr/share/nginx/html/test1234 && ls /usr/share/nginx/html/
$ docker build -t nginx-image-test .; docker run -p 8889:80 --name some-nginx -v /full_path/test:/usr/share/nginx/html:rw -d nginx-image-test; ls ./test;
...
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
33ccbea6c1c1 nginx-image-test "nginx -g 'daemon of…" 4 minutes ago Up 4 minutes 0.0.0.0:8889->80/tcp some-nginx
$ cat test/index.html
hello world
$ curl -i http://localhost:8889
HTTP/1.1 200 OK
Server: nginx/1.15.3
Date: Thu, 06 Sep 2018 15:35:43 GMT
Content-Type: text/html
Content-Length: 12
Last-Modified: Thu, 06 Sep 2018 15:31:11 GMT
Connection: keep-alive
ETag: "5b91483f-c"
Accept-Ranges: bytes
hello world
-- UPDATE 2 --
Awesome, you figured it out. This post seems to explain why:
docker data volume vs mounted host directory
The host directory is, by its nature, host-dependent. For this reason, you can’t mount a host directory from Dockerfile because built images should be portable. A host directory wouldn’t be available on all potential hosts.
If you have some persistent data that you want to share between containers, or want to use from non-persistent containers, it’s best to create a named Data Volume Container, and then to mount the data from it.
I ran across this issue as well when trying to create a volume to place my own nginx configuration into /etc/nginx. While I never found the root cause, I think it has to do with how the nginx image is built from its Dockerfile.
I solved the problem by using my own Dockerfile to extend the original and copy over the configuration files at build time. Hopefully this helps.
FROM nginx:1.15.2
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./global /etc/nginx/global
COPY ./conf.d /etc/nginx/conf.d
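If it helps, a hedged usage sketch for building and running that extended image (the custom-nginx tag and the port mapping are placeholders; paths follow the Dockerfile above):
# build the extended image from the directory holding the Dockerfile and config files
docker build -t custom-nginx .
# run it; the configuration baked into /etc/nginx is used, no volume needed
docker run -d -p 8080:80 --name some-nginx custom-nginx
# verify the configuration inside the running container
docker exec some-nginx nginx -t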

How to configure ExecStart for Gunicorn without WSGI?

Systemd and Gunicorn require a WSGI file of some sort as the last argument to ExecStart: http://docs.gunicorn.org/en/latest/deploy.html?highlight=ExecStart#systemd
With Django, this was in the main module as wsgi.py:
ExecStart=/home/admin/django/bin/gunicorn --config /home/admin/src/gunicorn.py --bind unix:/tmp/api.sock myapp.wsgi
But this file obviously doesn't exist when using Sanic and uvloop (I believe the new protocol is called ASGI). I tried substituting it with app.py, which unsurprisingly didn't work:
ExecStart=/home/admin/sanic/bin/gunicorn --config /home/admin/src/gunicorn.py --bind unix:/tmp/api.sock myapp.app
How should this parameter be configured when using Sanic?
If you want to start Sanic with systemd, why don't you use Supervisord:
Boot -> Systemd -> supervisord -> gunicorn -> sanic
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file
[supervisord]
logfile=/var/log/supervisord/supervisord.log ; supervisord log file
logfile_maxbytes=50MB ; maximum size of logfile before rotation
logfile_backups=10 ; number of backed up logfiles
loglevel=error ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid ; pidfile location
nodaemon=false ; run supervisord as a daemon
minfds=1024 ; number of startup file descriptors
minprocs=200 ; number of process descriptors
user=root ; default user
childlogdir=/var/log/supervisord/ ; where child log files will live
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:ctrlapi]
directory=/home/ubuntu/api
command=/home/ubuntu/api/venv3/bin/gunicorn api:app --bind 0.0.0.0:8000 --worker-class sanic.worker.GunicornWorker -w 2
stderr_logfile = log/api_stderr.log
stdout_logfile = log/api_stdout.log
I have not yet deployed this myself with systemd and Gunicorn, but the documentation seems pretty good on this.
In order to run a Sanic application with Gunicorn, you need to use the special sanic.worker.GunicornWorker for Gunicorn's worker-class argument:
gunicorn myapp:app --bind 0.0.0.0:1337 --worker-class sanic.worker.GunicornWorker
With this in mind, how about this:
ExecStart=/home/admin/sanic/bin/gunicorn --config /home/admin/src/gunicorn.py myapp:app --bind 0.0.0.0:1337 --worker-class sanic.worker.GunicornWorker
I think the big piece you are missing is the GunicornWorker worker class.
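For completeness, a sketch of how the full systemd unit could look with that worker class (paths and module name follow the question; the [Unit]/[Install] boilerplate, user and restart policy are assumptions added to make it self-contained):
[Unit]
Description=Gunicorn daemon for the Sanic app
After=network.target

[Service]
User=admin
Group=admin
WorkingDirectory=/home/admin/src
ExecStart=/home/admin/sanic/bin/gunicorn --config /home/admin/src/gunicorn.py \
    --bind unix:/tmp/api.sock \
    --worker-class sanic.worker.GunicornWorker \
    myapp:app
Restart=on-failure

[Install]
WantedBy=multi-user.target
Assuming the unit is saved as /etc/systemd/system/api.service, reload and start it with systemctl daemon-reload followed by systemctl start api.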

MySQL in Docker on Windows: World-writable files ignored

I'm using Docker-Compose with the MySQL Image to fire up a MySQL database as part of a larger project.
As documented in the MySQL image's documentation, I'm mapping in a custom configuration file to /etc/mysql/conf.d/config-file.cnf:
database:
  environment:
    MYSQL_ROOT_PASSWORD: foo
  ports:
    - "3306:3306"
  volumes:
    - "./mysql/conf.d/:/etc/mysql/conf.d"
  image: mysql:5.5
This works perfectly fine when running on Mac OS X as the host system (using docker-machine), but it fails when running on Windows (also using docker-machine). MySQL complains that /etc/mysql/conf.d/config-file.cnf is world-writable:
Warning: World-writable config file '/etc/mysql/conf.d/config-file.cnf' is ignored
When entering the database container, the file is indeed shown as having 0777 permissions. This seems to be due to the host file system's (Windows) permissions.
Is there any way to change this? I've tried mounting the volume in read-only mode, but the file still has the same permissions.
Any other way around this problem? At the moment, I'm mounting the file to another folder in the container and then copying/chmod'ing it to the required location as part of the startup command:
database:
  environment:
    MYSQL_ROOT_PASSWORD: foo
  ports:
    - "3306:3306"
  volumes:
    - "./mysql/conf.d/:/usr/local/mysqlconf"
  image: mysql:5.5
  command: >
    bash -c "
    cp /usr/local/mysqlconf/*.cnf /etc/mysql/conf.d/
    && chmod 644 /etc/mysql/conf.d/*.cnf
    && /entrypoint.sh mysqld
    "
Is there a better way to solve this issue?
When you start up a container with -v / --volume, or use volumes: in docker-compose.yml, for example
docker run -v source:/dest:rw busybox ls -l /dest
Docker mounts the source directory from the Linux VM into the container as /dest. Docker-mounted volumes only provide their own options rw and ro (and I think z for SELinux). Owner and permission info is passed through to the container exactly as the Linux VM sees it. If someone who clones your repo runs docker or docker-compose from their local host rather than on the VM, they will mount blank directories.
Docker Machine's Users share
A docker-machine created local VM will share your computer's local users directory by default: C:\Users on Windows and /Users on OS X. This is done as a VirtualBox shared folder called Users. The share is then mounted on the Linux side via VirtualBox's vboxsf mount tool as /Users (or maybe /c/Users) by a Linux startup script, /etc/rc.d/vbox.
When you docker-machine ssh default you should be able to see all your computer's files at /Users/nwinkler.
This share allows docker-compose to reference a relative, local directory within C:\Users and have it work on the Linux VM. Outside of C:\Users the data doesn't exist on the VM.
World readable files
I believe what you are seeing is vboxsf's POSIX representation of a Windows file system. If you run:
docker-machine ssh default
$ cd /Users/nwinkler/path/to/mysql-docker
$ ls -l
$ docker run -v $PWD:/mysql busybox ls -l /mysql
You should see all your files as world writable. The only way to change the represented permissions is via the vboxsf mounted share.
The mount options vboxsf provides are:
Available mount options are:
rw mount writable (the default)
ro mount read only
uid=UID set the default file owner user id to UID
gid=GID set the default file owner group id to GID
ttl=TTL set the "time to live" to TID for the dentry
dmode=MODE override the mode of all directories to (octal) MODE
fmode=MODE override the mode of all regular files to (octal) MODE
umask=UMASK set the umask to (octal) UMASK
dmask=UMASK set the umask applied to directories only
fmask=UMASK set the umask applied to regular files only
iocharset CHARSET use the character set CHARSET for I/O operations
(default set is utf8)
convertcp CHARSET convert the folder name from CHARSET to utf8
On the Docker Linux VM, edit /etc/rc.d/vbox and append the options to the mountOptions variable. These options will apply to all files and directories under /Users on that mount.
You could set fmask=007 to remove the "other" permissions from all files, or fmode=750 to override the permissions of all files.
mountOptions='defaults,iocharset=utf8'
if grep -q '^docker:' /etc/passwd; then
  mountOptions="${mountOptions},uid=$(id -u docker),gid=$(id -g docker),fmask=007"
fi
If you ever upgrade or recreate your docker-machine VM you will need to do this again.
I tend to skip relying on VirtualBox shares and instead have local files monitored and synchronised to my hosts on change (fsevents and rsync).
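For illustration, a hedged sketch of that sync approach (the VM name default, the remote path, the docker-machine SSH key location and the availability of rsync on the VM are all assumptions; fswatch is one possible file watcher):
# one-shot sync of the local config directory to the docker-machine VM
rsync -az --delete \
  -e "ssh -i ~/.docker/machine/machines/default/id_rsa" \
  ./mysql/conf.d/ docker@$(docker-machine ip default):/home/docker/mysql-conf/
# re-run the sync on every local change (requires fswatch)
fswatch -o ./mysql/conf.d | while read -r _; do
  rsync -az --delete \
    -e "ssh -i ~/.docker/machine/machines/default/id_rsa" \
    ./mysql/conf.d/ docker@$(docker-machine ip default):/home/docker/mysql-conf/
done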