I've dockerized a Meteor app with Meteord, and that works fine. My problem is that I want to pass some settings to the app.
Meteord does not start the app with a settings file, as one would usually do to pass settings to an app (meteor --settings file.json). The same can also be done with an environment variable called METEOR_SETTINGS.
As I want the webapp to run with other services, I'm using Docker Compose.
I have a settings.json file that I want to be read in as an environment variable, so something like:
environment:
- METEOR_SETTINGS=$cat(settings.json)
This doesn't work though.
How can I make Docker Compose dynamically create this environment variable from a JSON file?
An easy way to do this is to load the JSON file into a local env var, then use that in your YAML file.
In docker-compose.yml
environment:
METEOR_SETTINGS: ${METEOR_SETTINGS}
Load the settings file before invoking docker-compose:
❯ METEOR_SETTINGS=$(cat settings.json) docker-compose up
This is not possible without some trickery; how much depends on the number of tweakable variables in settings.json:
If there are a lot of settings, it's fairly easy to template docker-compose.yml with a simple shell script that replaces a token in the template with the contents of settings.json, much like in your example. In that case you also want to wrap the docker-compose call. Simplified example:
docker-compose.yml.template:
environment:
- METEOR_SETTINGS=##_METEOR_SETTINGS_##
dc.sh:
#!/bin/sh
# replace ##_METEOR_SETTINGS_## with contents of settings.json and output to docker-compose.yml
sed -e 's|##_METEOR_SETTINGS_##|'"$(cat ./settings.json)"'|' \
"./docker-compose.yml.template" > "./docker-compose.yml"
# wrap docker-compose, passing all arguments
docker-compose "$#"
Put the two files into your project root, then chmod +x dc.sh to make the wrapper executable, and call ./dc.sh -h.
If it's only a few settings, you could handle the templating inside the container when it's starting: simply replace tokens placed in a prepared settings.json with the ENV values passed to Docker before starting Meteor. This lets you use the ordinary docker-compose ENV features to configure Meteor.
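For example, a minimal entrypoint sketch for that in-container approach could look like this (the token name, file paths, DB_HOST variable, and the node main.js start command are illustrative assumptions, not part of Meteord):
#!/bin/sh
# settings.json.template inside the image contains tokens such as ##_DB_HOST_##
# replace them with the ENV values that docker-compose passed in
sed -e 's|##_DB_HOST_##|'"${DB_HOST}"'|' \
    /app/settings.json.template > /app/settings.json
# hand the rendered settings to Meteor via METEOR_SETTINGS and start the app
METEOR_SETTINGS="$(cat /app/settings.json)" exec node main.js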
Current situation:
A Dockerfile that is based on an Ubuntu image, installs Wget, and declares that a bash script will run when a container is started.
A Docker container is started from the image with the necessary environment variables in the command. These variables are used inside the Wget command in the bash script.
docker run -i -e 'ENV_VARIABLE=VALUE' [imagename]
The container runs the bash script, which contains a Wget HTTP PUT:
wget --method=PUT --body-data="{\"key\":\"${ENV_VARIABLE}\"}" ……
Desired situation:
The current situation works, but I would prefer not to use this solution because of the quote escaping (\") it requires.
I tried to solve this by constructing the --body-data as below, with surrounding single quotes:
'{"key":"${ENV_VARIABLE}"}'
However, this does not substitute ENV_VARIABLE, since the payload is now a literal string.
A preferable solution would be to move the JSON into a separate JSON file that I can refer to in the Wget call. This raises the following questions:
How do I refer to the JSON file? My best guess is to copy the file into the image at build time and then refer to it via a path in the Wget call, but then again, how exactly do I refer to it?
If the above point is correct, will I still be able to refer to the Docker environment variables?
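For what it's worth, a hedged sketch of that idea: copy a JSON template into the image at build time (e.g. COPY payload.json.template /payload.json.template in the Dockerfile), then substitute the environment variable into it at runtime and point Wget at the resulting file. The paths, the endpoint URL, and the use of envsubst (from the gettext package, which would need to be installed in the image) are assumptions:
#!/bin/bash
# payload.json.template contains: {"key":"${ENV_VARIABLE}"}
# envsubst expands the variable that was passed with docker run -e
envsubst < /payload.json.template > /tmp/payload.json
# --body-file avoids any quote escaping in the script itself
wget --method=PUT --body-file=/tmp/payload.json http://example.com/endpoint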
I have a problem installing Cygnus using Docker as the source; I simply cannot understand where I should map my specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf with my specific setup into the container, it starts and runs but the file is not taken over. On top of that, any change I make to the file inside the container doesn't stick; it reverts to the previous default state.
Meanwhile, I have no issues with grouping_rules.conf using the same approach.
I have used both docker and docker-compose, with the same results.
The path to which I try to copy it: /opt/apache-flume/conf/agent.conf
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who managed to run it with their own config tell me whether I have misunderstood the location of agent.conf or something else? This is weird; I have used many Docker images and never had an issue where I was not able to copy a file from my machine into a Docker container.
Thanks in advance.
** EDIT **
Link to agent.conf
Did you copy the agent.conf file to your host directory before starting the container?
As you can see here, when you define a volume with the "-v" option, Docker mounts the host file or directory over the corresponding path inside the container at the mount point. Therefore, you must first provide the agent.conf file on your host.
The reason is that when using a "bind mounted" directory from the host, you're telling Docker that you want to take a file or directory from your host and use it in your container. Docker should not modify those files/directories, unless you explicitly do so. For example, you don't want -v /home/user/:/var/lib/mysql to result in your home directory being replaced with a MySQL database.
If you do not have the agent.conf file, you can download the template from the source code in the official Cygnus GitHub repo here. You can also copy it out of the running Docker container using docker cp:
docker cp <containerId>:/file/path/within/container /host/path/target
Keep in mind that you will have to edit the agent.conf file to configure it for the database you are using. The official docs explain how to configure Cygnus to use different sinks such as MongoDB, MySQL, etc.
I hope I have been helpful.
Best regards!
I need to do the following:
Change environment variables according to the environment being published. Set up cron jobs according to that environment. I would like to run just one command line, "eb deploy dev" or something similar.
Use setenv
You can set environment variables with setenv. These will then be remembered for that environment.
More details: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-setenv.html
Example
For example, suppose you have created an EB environment called 'staging' and you want to set the variable DB to 'localhost'; you can use:
eb setenv DB=localhost -e staging
Crons
Now that you have different environment variables per environment, you can check them in a script, etc., to decide whether the cron should be set up.
Note that the cron jobs may not actually have access to your environment variables, so you need to set those again for the cron job while setting it up.
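As an illustration, a deploy-hook script might look roughly like this (ENABLE_CRON, DB, and the paths are assumptions, not part of any standard Beanstalk setup):
#!/bin/sh
# only install the cron job when the environment variable says so
if [ "$ENABLE_CRON" = "true" ]; then
  # write the variables the job needs into the cron file itself,
  # because cron does not inherit the Beanstalk environment
  cat > /etc/cron.d/myapp <<EOF
DB=$DB
0 * * * * root /usr/local/bin/my-task
EOF
fi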
This is my solution to the problem. It took some time to set up, but now I can make all the changes with one command line.
Make your own folder with all the files for all the environments.
In the .ebextensions folder, set up empty config files for eb.
npm runs a script named "deploy.js" together with a flag for the specific env.
The script will do the following:
copy the requested env data to the empty files according to the env
git stash the changes of .ebextensions folder (eb deploys using git)
eb use env
eb deploy
So now I can run npm run deploy:dev and everything works.
I wanted clarification on the possible scripts that can be added in the .s2i/bin directory in my project repo.
The docs say that when you add these files, they will override the default files of the same name when the project is built. For example, if I place my own "assemble" file in the .s2i/bin directory, will the default assemble file also run, or will it be totally replaced by my script? What if I want some of the behavior of the default file? Do I have to copy the default "assemble" contents into my file so both will be executed?
You will need to call the original "assemble" script from your own, similar to this:
#!/bin/bash -e
# The assemble script builds the application artifacts from a source and
# places them into appropriate directories inside the image.
# Execute the default S2I script
source ${STI_SCRIPTS_PATH}/assemble
# You can write S2I scripts in any programming language, as long as the
# scripts are executable inside the builder image.
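So, for example, a custom .s2i/bin/assemble that keeps the default behaviour and adds one extra step might look like this (the extra cp step and its paths are purely illustrative):
#!/bin/bash -e
# run the default S2I assemble first
source ${STI_SCRIPTS_PATH}/assemble
# then run your own additional build steps
cp -r ./extra-config/. ./config/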
Using OpenShift, I want to execute my own run script (run).
So, in the src of my application I added a file at ./s2i/run
that slightly changes the default run file:
https://github.com/sclorg/nginx-container/blob/master/1.20/s2i/bin/run
Here is my run file
#!/bin/bash
source /opt/app-root/etc/generate_container_user
set -e
source ${NGINX_CONTAINER_SCRIPTS_PATH}/common.sh
process_extending_files ${NGINX_APP_ROOT}/src/nginx-start ${NGINX_CONTAINER_SCRIPTS_PATH}/nginx-start
if [ ! -v NGINX_LOG_TO_VOLUME -a -v NGINX_LOG_PATH ]; then
/bin/ln -sf /dev/stdout ${NGINX_LOG_PATH}/access.log
/bin/ln -sf /dev/stderr ${NGINX_LOG_PATH}/error.log
fi
#nginx will start using the custom nginx.conf from configmap
exec nginx -c /opt/mycompany/mycustomnginx/nginx-conf/nginx.conf -g "daemon off;"
Then I changed the Dockerfile to execute my run script as follows.
The CMD instruction can only appear once and dictates the location of the script that is executed when the deployment pod starts.
FROM registry.access.redhat.com/rhscl/nginx-120
# Add application sources to the directory where the assemble script expects them
# and set permissions so that the container runs without root access
USER 0
COPY dist/my-portal /tmp/src
COPY --chmod=0755 s2i /tmp/
RUN ls -la /tmp
USER 1001
# Let the assemble script install the dependencies
RUN /usr/libexec/s2i/assemble
# Run script uses standard ways to run the application
#CMD /usr/libexec/s2i/run
# here we override the script that will be executed when the deployment pod starts
CMD /tmp/run
I'm using the mysql image as an example, but the question is generic.
The password used to launch mysqld in Docker is not visible in docker ps; however, it is visible in docker inspect:
sudo docker run --name mysql-5.7.7 -e MYSQL_ROOT_PASSWORD=12345 -d mysql:5.7.7
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b98afde2fab7 mysql:5.7.7 "/entrypoint.sh mysq 6 seconds ago Up 5 seconds 3306/tcp mysql-5.7.7
sudo docker inspect b98afde2fab75ca433c46ba504759c4826fa7ffcbe09c44307c0538007499e2a
"Env": [
"MYSQL_ROOT_PASSWORD=12345",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.7-rc"
]
Is there a way to hide/obfuscate environment parameters passed when launching containers? Alternatively, is it possible to pass sensitive parameters by reference to a file?
Weirdly, I'm just writing an article on this.
I would advise against using environment variables to store secrets, mainly for the reasons Diogo Monica outlines here; they are visible in too many places (linked containers, docker inspect, child processes) and are likely to end up in debug info and issue reports. I don't think using an environment variable file will help mitigate any of these issues, although it would stop values getting saved to your shell history.
Instead, you can pass in your secret in a volume, e.g.:
$ docker run -v $(pwd)/my-secret-file:/secret-file ....
If you really want to use an environment variable, you could pass it in as a script to be sourced, which would at least hide it from inspect and linked containers (e.g. CMD source /secret-file && /run-my-app).
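A minimal sketch of that pattern, assuming the secret file simply exports the variable (the image name, file names, and /run-my-app are placeholders):
# secret-file on the host contains: export MYSQL_ROOT_PASSWORD=12345
docker run -v "$(pwd)/secret-file:/secret-file:ro" my-image \
    sh -c '. /secret-file && exec /run-my-app'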
The main drawback with using a volume is that you run the risk of accidentally checking the file into version control.
A better, but more complicated solution is to get it from a key-value store such as etcd (with crypt), keywhiz or vault.
You say "Alternatively, is it possible to pass sensitive parameters by reference to a file?", extract from the doc http://docs.docker.com/reference/commandline/run/ --env-file=[] Read in a file of environment variables.