EB: Trigger container commands / deploy scripts on configuration change - amazon-elastic-beanstalk

I am running my web server on Elastic Beanstalk and using Papertrail for logging. I use the official .ebextensions script to set Papertrail up during deployment, but I have a problem. The hostname that remote_syslog uses as the sender when it uploads logs to Papertrail is built from environment variables. This works fine during deployment, when the 01_set_logger_hostname container command is triggered, but I run into trouble whenever I change environment variables through the environment's configuration: an eb config call only restarts the application server and does not re-run any of the deployment scripts, including the .ebextensions container commands.
"/tmp/set-logger-hostname.sh":
mode: "00555"
owner: root
group: root
encoding: plain
content: |
#!/bin/bash
logger_config="/etc/log_files.yml"
appname=`{ "Ref" : "AWSEBEnvironmentName" }`
instid=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
myhostname=${SOME_VARIABLE}_${appname}_${instid}
if [ -f $logger_config ]; then
# Sub the hostname
sed "s/hostname:.*/hostname: $myhostname/" -i $logger_config
fi
As you can see, since my hostname depends on ${SOME_VARIABLE}, I need to refresh the hostname whenever ${SOME_VARIABLE} is modified following eb config.
Is there a way to trigger a script to be run whenever an eb config command is run, so that I can not only restart my web application but also reconfigure and restart remote_syslog with the updated hostname?

This is now possible on Amazon Linux 2 based environments with configuration deployment platform hooks.
For example, you can add a shell script .platform/confighooks/predeploy/predeploy.sh that will run on every configuration change. Make sure the file is marked executable in git, or Elastic Beanstalk will give you a permission denied error.
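A minimal sketch of such a hook for the Papertrail setup above; the hook path is the standard one, the script it calls is the one from the question, and the remote_syslog service name is an assumption:

#!/bin/bash
# .platform/confighooks/predeploy/predeploy.sh
# Runs on every configuration deployment (eb config, eb setenv, console changes)
/tmp/set-logger-hostname.sh
# ...then restart remote_syslog however your setup launches it so the new
# hostname is picked up, e.g. (assumed service name):
# service remote_syslog restart

To mark the hook executable in git:

chmod +x .platform/confighooks/predeploy/predeploy.sh
git update-index --chmod=+x .platform/confighooks/predeploy/predeploy.sh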

Related

Script that uses google-drive-ocamlfuse fails when run through Rundeck

I have a script that runs fine when run directly from the shell of the server hosting Rundeck. It uses google-drive-ocamlfuse to mount my google drive to a local directory, creates a folder in the directory, and then unmounts.
name=New-Folder-Name
google-drive-ocamlfuse /home/user/mygoogledrive/
mkdir /home/user/mygoogledrive/$name
fusermount -u /home/user/mygoogledrive/
If I try to run this as an ad hoc command in Rundeck:
sudo ./var/lib/rundeck/scripts/create-folder.sh
... it errors out with:
Error: no DISPLAY environment variable specified
/bin/sh: 1: google-chrome: not found
/bin/sh: 1: chromium-browser: not found
/bin/sh: 1: open: not found
Cannot retrieve auth tokens.
Failure("Error opening URL:https://accounts.google.com/o/oauth2/auth?client_id=REDACTING-PERSONAL-INFO")
mkdir: cannot create directory ‘/home/user/mygoogledrive/New-Folder-Name’: No such file or directory
fusermount: failed to unmount /home/home-db/mygoogledrive: Invalid argument
I am new to Rundeck and not yet comfortable with its permissions, and I don't have a good sense of how Rundeck runs a command on the server. It must be accessing and executing the file, given the error output, but maybe the execution environment is restricted in a way that prevents the use of certain libraries needed by google-drive-ocamlfuse? Any ideas?
To use sudo on a target remote node, you need to set the sudo parameters. Otherwise, if you need to use sudo locally, the easier way is to use this plugin in your Rundeck instance.

How to enable mutual SSL verification mode in Redhat-SSO image for OpenShift

I am using the template sso72-x509-postgresql-persistent, which is based on Redhat-SSO and Keycloak, to create an application in OpenShift.
I am going to enable its mutual SSL mode, so that a user has to only provide his certificate instead of user name and password in his request. The document (https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/7.2/html-single/server_administration_guide/index#x509) told me to edit the standalone.xml file to add configuration sections. It worked fine.
But the template image sso72-x509-postgresql-persistent has a problem with this procedure: once it is deployed on OpenShift, any changes to files inside the container are lost whenever the container restarts.
Is there any way to enable mutual SSL mode through some other mechanism, such as the command line or an API, instead of editing a configuration file, short of building my own Docker image?
OK, I'm including this anyway. I wasn't able to get it working due to permissions issues (the mounted files didn't keep the same permissions as before, so the container continued to fail). But a lot of work went into this answer, so hopefully it points you in the right direction!
You can add a Persistent Volume (PV) to ensure your configuration changes survive a restart. You can add a PV to your deployment via:
DON'T DO THIS
oc set volume deploymentconfig sso --add -t pvc --name=sso-config --mount-path=/opt/eap/standalone/configuration --claim-mode=ReadWriteOnce --claim-size=1Gi
This will bring up your RH-SSO image with a blank configuration directory, causing the pod to get stuck in Back-off restarting failed container. What you should do instead is:
Back up the existing configuration files
oc rsync <rhsso_pod_name>:/opt/eap/standalone/configuration ~/
Create a temporary busybox deployment that can act as an intermediary for uploading the configuration files. Wait for the deployment to complete
oc run busybox --image=busybox --wait --command -- /bin/sh -c "while true; do sleep 10; done"
Mount a new PV to the busybox deployment. Wait for deployment to complete
oc set volume deploymentconfig busybox --add -t pvc --name=sso-volume --claim-name=sso-config --mount-path=/configuration --claim-mode=ReadWriteOnce --claim-size=1Gi
Edit your configuration files now
Upload the configuration files to your new PV via the busybox pod
oc rsync ~/configuration/ <busybox_pod_name>:/configuration/
Destroy the busybox deployment
oc delete all -l run=busybox --force --grace-period=0
Finally, you attach your already created and ready-to-go persistent configuration to the RH SSO deployment
oc set volume deploymentconfig sso --add -t pvc --name=sso-volume --claim-name=sso-config --mount-path=/opt/eap/standalone/configuration
Once your new deployment is...still failing because of permission issues :/
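If you want to dig further into the permission problem, one way to compare what the pod actually sees against your backup (the pod name is a placeholder):

oc rollout status dc/sso
oc rsh <rhsso_pod_name> ls -ln /opt/eap/standalone/configuration   # numeric uid/gid of the mounted files
ls -ln ~/configuration                                             # the permissions you backed up earlier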

How to setup multiple environments (dev\stage\production) with elastic beanstalk?

I need to do the following
Change environment variables according to the published env. Set Set up cron jobs according to the dev. I I would like to run just 1 command line "eb deploy dev" or something similar.
Use setenv
You can set environment variables with setenv. These will then be remembered for that environment.
More details: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-setenv.html
Example
For example, if you have created an EB environment called 'staging' and want to set the variable DB to 'localhost', you can use:
eb setenv DB=localhost -e staging
Crons
Now that each environment has its own environment variables, you can check them in a script etc. to decide whether the cron should be set up.
Note that the cron jobs may not actually have access to your environment variables, so you need to set them again for the cron while setting up the cron.
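For example, a deployment script could branch on such a variable; this is just a sketch (the variable name, file names and schedule are made up), re-exporting the value because cron jobs do not inherit Elastic Beanstalk environment variables:

#!/bin/bash
# Install the cron job only in production
if [ "$APP_ENV" = "production" ]; then
  cat > /etc/cron.d/myapp <<'EOF'
APP_ENV=production
0 * * * * root /usr/local/bin/hourly-task.sh
EOF
else
  rm -f /etc/cron.d/myapp
fi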
This is my solution to the problem. It took some time to set up, but now I can make all the changes with one command.
Make your own folder with all the files for all the environments.
In the .ebextensions folder, set up empty config files for EB.
npm runs a script named "deploy.js" with a flag for the specific env.
The script will do the following
copy the requested env data to the empty files according to the env
git stash the changes of .ebextensions folder (eb deploys using git)
eb use env
eb deploy
So now I can run npm run deploy:dev and everything runs.
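For reference, a rough shell equivalent of what such a deploy script can do (the folder layout and environment names are made up, and the original is a Node script driven by npm; instead of stashing, this sketch stages the generated config and uses eb deploy --staged):

#!/bin/bash
# ./deploy.sh dev|stage|production
ENV="$1"

# copy the env-specific config into the empty .ebextensions placeholders
cp "deploy-envs/$ENV/"*.config .ebextensions/

# eb deploy packages from git, so stage the generated config
git add .ebextensions

eb use "my-app-$ENV"      # pick the matching EB environment
eb deploy --staged        # deploy the staged changes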

What is <OPENSHIFT_MYSQL_ENABLED> environment variable in openshift v3?

How can I set enabled="true" on the datasource in standalone.xml of the OpenShift v3 WildFly container, like below?
<datasource jndi-name="java:jboss/datasources/MySQLDS" enabled="true" use-java-context="true" pool-name="MySQLDS" use-ccm="true">
I set the OPENSHIFT_MYSQL_ENABLED environment variable to "true" but nothing happened.
The reference for this is the URL below:
https://developer.jboss.org/wiki/DataserviceBuilderOnOpenShiftV3Online
I was dealing with the same problem: the environment variable OPENSHIFT_MYSQL_ENABLED is ignored by the variable substitution process, so I had to activate the data source with my bare hands, and that's what I did:
(I'm going to assume you have the OC tools installed on your system)
log into OC: oc login
list all pods and find the WildFly instance: oc get pods
enter the container's SSH console: oc rsh <<pod-name>>
edit the standalone.xml file: vi /wildfly/standalone/configuration/standalone.xml
search for the word "datasource" by typing /datasource on vi editor then press enter
find the attribute "enabled" of your data source and update its value from false to true (to do so, press i to go to vi insert mode)
save the file by pressing esc then :x
I'm using the OpenShift community edition, so restarting the container is always a hassle: it takes a very long time to find available resources (like memory and CPU) and start the server again. However, the data source won't be enabled until WildFly reloads its configuration. Fortunately, you don't need to restart the container for that; just reload WildFly using the jboss-cli.sh command line tool. (I didn't try killing the process and starting it again, so if you did try, please comment whether it worked.)
The following steps should be executed on container's terminal using oc rsh <<podname>> or using the terminal on web console.
Enter jboss-cli using the command /wildfly/bin/jboss-cli.sh
Type connect to log into the WildFly console, you'll be prompted for user and password. If you do not have credentials, exit this console and create a management user by executing the script /wildfly/bin/add-user.sh
Check your data source properties by typing data-source read-resource --name=<<YOUR_DATASOURCE_NAME>> --include-runtime=true --recursive=true and follow up on the "enabled" property.
If your data source is disabled, you should enable it by entering the command data-source enable --name=<<YOUR_DATASOURCE_NAME>>
reload WildFly by entering the reload command. Once WildFly reboots you'll need to access jboss-cli.sh and log into the console again.
test your data source connection using the command data-source test-connection-in-pool --name=<<YOUR_DATASOURCE_NAME>>. If the command output was true your data source is up and running.
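If you prefer not to go through the interactive console, the same steps can be run in one shot; a sketch, assuming the data source is called MySQLDS and you already created a management user:

# run inside the pod (oc rsh <<pod-name>>)
/wildfly/bin/jboss-cli.sh --connect --user=admin --password=<password> \
  --commands="data-source enable --name=MySQLDS,reload"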
OpenShift v3 is based on Docker containers, so I'm afraid that if you do restart the container, this configuration will probably be lost. The most appropriate solution would be to include these actions in the image's startup script, which I don't yet know how to do alongside the OpenShift platform.
Hope it helps!

Hide/obfuscate environmental parameters in docker

I'm using the mysql image as an example, but the question is generic.
The password used to launch mysqld in Docker is not visible in docker ps; however, it is visible in docker inspect:
sudo docker run --name mysql-5.7.7 -e MYSQL_ROOT_PASSWORD=12345 -d mysql:5.7.7
CONTAINER ID   IMAGE         COMMAND                CREATED         STATUS         PORTS      NAMES
b98afde2fab7   mysql:5.7.7   "/entrypoint.sh mysq   6 seconds ago   Up 5 seconds   3306/tcp   mysql-5.7.7
sudo docker inspect b98afde2fab75ca433c46ba504759c4826fa7ffcbe09c44307c0538007499e2a
"Env": [
"MYSQL_ROOT_PASSWORD=12345",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MYSQL_MAJOR=5.7",
"MYSQL_VERSION=5.7.7-rc"
]
Is there a way to hide/obfuscate environment parameters passed when launching containers? Alternatively, is it possible to pass sensitive parameters by reference to a file?
Weirdly, I'm just writing an article on this.
I would advise against using environment variables to store secrets, mainly for the reasons Diogo Monica outlines here; they are visible in too many places (linked containers, docker inspect, child processes) and are likely to end up in debug info and issue reports. I don't think using an environment variable file will help mitigate any of these issues, although it would stop values getting saved to your shell history.
Instead, you can pass in your secret in a volume e.g:
$ docker run -v $(pwd)/my-secret-file:/secret-file ....
If you really want to use an environment variable, you could pass it in as a script to be sourced, which would at least hide it from inspect and linked containers (e.g. CMD source /secret-file && /run-my-app).
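A sketch of that approach; my-image and /run-my-app are placeholders, and the secret file is written as a sourceable script rather than plain KEY=VALUE lines:

# my-secret-file contains e.g.:  export MYSQL_ROOT_PASSWORD=12345
docker run -v "$(pwd)/my-secret-file:/secret-file:ro" my-image \
  sh -c '. /secret-file && exec /run-my-app'
# the value no longer appears in docker inspect or in linked containers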
The main drawback with using a volume is that you run the risk of accidentally checking the file into version control.
A better, but more complicated solution is to get it from a key-value store such as etcd (with crypt), keywhiz or vault.
You say "Alternatively, is it possible to pass sensitive parameters by reference to a file?", extract from the doc http://docs.docker.com/reference/commandline/run/ --env-file=[] Read in a file of environment variables.