Setting a dynamic path in redis.conf using an environment variable

I have an environment variable MY_HOME which holds the path to a directory, /home/abc.
Now I have a redis.conf file in which I need to set this path like this:
**redis.conf**
pidfile $MY_HOME/local/var/pids/redis.pid
logfile $MY_HOME/local/var/log/redis.log
dir $MY_HOME/local/var/lib/redis/
just as we do on the command line, so that my config file picks up the path from the environment variable.

Because Redis can read its config from stdin, I do something very similar to what #jolestar suggested. I put placeholder variables in my redis.conf and then replace them using sed in my Redis launcher. For example:
==========
$MY_HOME/redis/redis.conf
==========
...
pidfile {DIR}/pids/server{n}.pid
port 123{n}
...
Then I have a script to start Redis:
==========
runredis.sh
==========
DIR=$MY_HOME/redis
for n in {1..4}; do
    echo "starting redis-server #$n ..."
    # use | as the sed delimiter for the {DIR} substitution, since $DIR contains slashes
    sed -e "s/{n}/$n/g" -e "s|{DIR}|$DIR|g" < $DIR/redis.conf | redis-server -
done
I've been using this approach forever and it works out well.

This is not supported by Redis itself, but it is achievable with envsubst (installed by default on almost all modern distros), which substitutes the values of environment variables before running redis-server:
envsubst '$HOME:$MY_HOME' < ~/.tmpl_redis.conf > ~/.redis.conf && redis-server ~/.redis.conf
# or
envsubst '$MY_HOME' < ~/.tmpl_redis.conf > ~/.redis.conf && redis-server !#:5 # !#:5 expands to the 5th word of the current command line, i.e. ~/.redis.conf
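For reference, ~/.tmpl_redis.conf above is just a template that uses ordinary shell-style variable references, which envsubst replaces; an illustrative fragment based on the paths in the question:
pidfile $MY_HOME/local/var/pids/redis.pid
logfile $MY_HOME/local/var/log/redis.log
dir $MY_HOME/local/var/lib/redis/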

I also looked for a solution, but the Redis config file does not support environment variables.
I think there are two methods:
Start Redis from a script: the script reads the environment variables and rewrites the Redis config accordingly.
Start Redis from the command line and pass the values as parameters (sketched below).
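For the second method, a minimal sketch using the paths from the question (redis-server accepts configuration directives as --options on the command line):
redis-server \
    --pidfile "$MY_HOME/local/var/pids/redis.pid" \
    --logfile "$MY_HOME/local/var/log/redis.log" \
    --dir "$MY_HOME/local/var/lib/redis/"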


How can I use Ansible when I only have read-only access?

I am using Ansible to automate some network troubleshooting tasks, but when I try to ping all my devices as a sanity check I get the following error:
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\".
When I run the command in Ansible verbose mode, right before this error I get the following output:
<10.25.100.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "echo Cmd exec error./.ansible/tmp/ansible-tmp-1500330345.12-194265391907358" && echo ansible-tmp-1500330345.12-194265391907358="echo Cmd exec error./.ansible/tmp/ansible-tmp-1500330345.12-194265391907358" ) && sleep 0'
I am an intern and thus only have read-only access to all devices; therefore, I believe the error is occurring because of the mkdir command. My two questions are thus:
1) Is there any way to configure Ansible to not create any temp files on the devices?
2) Is there some other factor that may be causing this error that I might have missed?
I have tried searching through the Ansible documentation for any relevant configurations, but I do not have much experience working with Ansible so I have been unable to find anything.
The question does not make sense in a broader context. Ansible is a tool for server configuration automation. Without write access you can't configure anything on the target machine, so there is no use case for Ansible.
In a narrower context, although you did not post any code, you seem to be trying to ping the target server. The Ansible ping module is not an ICMP ping. Instead, it is a component which connects to the target server, transfers Python scripts, and runs them. The scripts produce a response which means the target system meets the minimal requirements to run Ansible modules.
However, you seem to want to run a regular ping command using the Ansible command module on your control machine and check the status:
- hosts: localhost
  vars:
    target_host: 192.168.1.1
  tasks:
    - command: ping {{ target_host }}
You might want to play with failed_when, ignore_errors, or changed_when parameters. See Error handling in playbook.
Note that I suggested running the whole play on localhost, because in your configuration it doesn't make sense to put target machines to which you have limited access rights into the inventory.
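For example, a minimal sketch along those lines (the address and ping count are illustrative) that records the ping result without aborting the play:
- hosts: localhost
  vars:
    target_host: 192.168.1.1
  tasks:
    - name: ping the target and record the result
      command: ping -c 3 {{ target_host }}
      register: ping_result
      changed_when: false      # a ping never changes anything on the target
      ignore_errors: true      # keep the play going even if the host is unreachable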
Additionally:
Is there any way to configure Ansible to not create any temp files on the devices?
Yes. Running commands through the raw module will not create temporary files.
As you seem to have SSH access, you can use it to run a command and check its result:
- hosts: 192.168.1.1
  tasks:
    - raw: echo Hello World
      register: echo
    - debug:
        var: echo.stdout
If you have multiple nodes and sudo permission, and you want to bypass the read-only restriction, try using the raw module to remount the disk on the remote node with the read/write option; it was helpful for me.
Playbook example:
---
- hosts: bs
  gather_facts: no
  pre_tasks:
    - name: read/write
      raw: ansible bs -m raw -a "mount -o remount,rw /" -b --vault-password-file=vault.txt
      delegate_to: localhost
  tasks:
    - name: dns
      raw: systemctl restart dnsmasq
    - name: read only
      raw: mount -o remount,ro /

Where to set system default environment variables in Alpine Linux?

I know that with Ubuntu you can set default values for environment variables in /etc/environment; I do not see that file in Alpine Linux. Is there a different location for setting system-wide defaults?
It seems that /etc/profile is the best place I could find. At least, some environment variables are set there:
export CHARSET=UTF-8
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PAGER=less
export PS1='\h:\w\$ '
umask 022
for script in /etc/profile.d/*.sh ; do
        if [ -r $script ] ; then
                . $script
        fi
done
According to the contents of /etc/profile, you can create a file with a .sh extension in /etc/profile.d/, but you have to pass --login every time to load the env variables, e.g. docker exec -it container sh --login.
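For example, a minimal sketch (the file name and variable are illustrative):
# /etc/profile.d/custom.sh -- picked up by the loop in /etc/profile shown above
export MY_HOME=/home/abc
Then start a login shell so the profile gets sourced:
docker exec -it container sh --login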
If you are talking about the Alpine Docker image, then you can define those env variables inside the Dockerfile like below. Here you don't need to pass --login every time; the variables are automatically available system-wide.
FROM alpine
ENV GITHUB_TOKEN=XXXXXXXXXXXXXXXXXXXXXXX \
COMPOSER_HOME=/home/deploy/.composer
You can also define your aliases, env variables, etc. inside /etc/profile and set ENV in the Dockerfile like below to source the profile automatically.
FROM alpine
ENV GITHUB_TOKEN=XXXXXXXXXXXXXXXXXXXXXXX \
COMPOSER_HOME=/home/deploy/.composer
ENV ENV="/etc/profile"
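A quick way to verify that the variables are visible (the image tag is illustrative):
docker build -t my-alpine .
docker run --rm my-alpine env | grep -E 'GITHUB_TOKEN|COMPOSER_HOME'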

Accessing environment variables in Docker containers linked with --link

I'm setting up the development environment for my application inside Docker containers; at the moment I have these containers:
myapp-data - Holds application source code and log files
myapp-phpfpm - Runs the php5-fpm process for Nginx
myapp-nginx - Runs the Nginx web server that serves the application
This setup works beautifully, I'm really happy with it. But my application needs a MySQL database to connect to, so I'm using the official MySQL image, and running it like so:
sudo docker run --name myapp-mysql -e "MYSQL_ROOT_PASSWORD=iamroot" -e "MYSQL_USER=redacted" -e "MYSQL_PASSWORD=redacted" -e "MYSQL_DATABASE=redacted" -d mysql
This also works great. But my myapp-phpfpm container needs to be linked to the myapp-mysql container in order to expose MySQL's connection details to my application. So I restart my myapp-phpfpm container:
sudo docker run --privileged=true --name myapp-phpfpm --volumes-from myapp-data --link myapp-mysql:mysql -d readr/phpfpm
So now my myapp-phpfpm container is linked to my myapp-mysql container so I should be able to access the database within my PHP application.
The problem is I can't. The environment variables don't exist inside the PHP application. If I do:
die(var_dump(`printenv`));
I don't get the MySQL environment variables. To try to debug I did a whoami to find out what user PHP is running as, which is www-data. I then created a bash process inside the container, used su www-data to become the www-data user and did printenv there. Sure enough, the MySQL environment variables do exist there:
MYSQL_PORT_3306_TCP_PORT=3306
MYSQL_PORT_3306_TCP=tcp://172.17.1.118:3306
MYSQL_ENV_MYSQL_ROOT_PASSWORD=iamroot
... etc ...
So, how can I access the environment variables that Docker exposes about my myapp-mysql container within PHP?
I solved this by creating a custom start.sh script that then gets called from my Dockerfile:
#!/bin/bash
# Note: the ${!var} indirect expansion below requires bash rather than plain sh

# Function to update the fpm configuration to make the service environment variables available
setEnvironmentVariable() {
    if [ -z "$2" ]; then
        echo "Environment variable '$1' not set."
        return
    fi
    # Check whether variable already exists
    if grep -q "$1" /etc/php5/fpm/pool.d/www.conf; then
        # Reset variable
        sed -i "s/^env\[$1.*/env[$1] = $2/g" /etc/php5/fpm/pool.d/www.conf
    else
        # Add variable
        echo "env[$1] = $2" >> /etc/php5/fpm/pool.d/www.conf
    fi
}

# Grep for variables that look like MySQL (MYSQL)
for _curVar in $(env | grep MYSQL | awk -F = '{print $1}'); do
    # awk has split them by the equals sign
    # Pass the name and value to our function
    setEnvironmentVariable "${_curVar}" "${!_curVar}"
done

# start php-fpm
exec /usr/sbin/php5-fpm
This then adds the environment variables to the PHP5-FPM config so they can be accessed from within PHP scripts.
php-fpm by default clears all environment variables; see /etc/php5/fpm/pool.d/www.conf:
; Setting to "no" will make all environment variables available to PHP code
; via getenv(), $_ENV and $_SERVER.
; Default Value: yes
;clear_env = no
You can fix this by uncommenting that setting in your Dockerfile:
RUN sed -i -e "s/;clear_env\s*=\s*no/clear_env = no/g" /etc/php5/fpm/pool.d/www.conf
I'd recommend using something like fig and just passing the env vars to both containers at startup. If you really want to, you could docker inspect any container from any other container if you bind-mount the Docker socket, then do something like this:
docker inspect -f '{{.Config.Env}}' myapp-mysql
The problem may not be the environment variables - it may be your PHP installation.
TL;DR environment variables that are accessible when you're running your application under Apache & PHP may not be available if you're using nginx or lighttpd and fastcgi.
The longer version
Here's the way I understand it (and it's probably wrong or incomplete because my experience with this is quite limited). Because PHP is not running as part of the web server process under nginx with FastCGI, it does not have access to the shell in which the web server was started, and therefore does not have access to the environment variables in that shell.
The solution is to declare the variables you're interested in as part of the configuration. This answer is kind of terse, but it contains the basic answer to this problem.
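As a hedged sketch of what that configuration looks like for php-fpm (variable names taken from the question's printenv output), you can whitelist the variables in the pool config:
; /etc/php5/fpm/pool.d/www.conf
env[MYSQL_PORT_3306_TCP_PORT] = $MYSQL_PORT_3306_TCP_PORT
env[MYSQL_ENV_MYSQL_ROOT_PASSWORD] = $MYSQL_ENV_MYSQL_ROOT_PASSWORD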

Gunicorn listening always at http://127.0.0.1:8000

I have set up my Django application on WebFaction and now I am trying to move to using Gunicorn to serve it. When I set up my files and config, everything seems to work except that it is always listening on 127.0.0.1:8000.
My configuration is as below.
supervisord.conf
[unix_http_server]
file=/home/devana/tmp/supervisor.sock
[supervisord]
logfile=/home/devana/tmp/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/home/devana/webapps/devana/etc/supervisord.pid
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///home/devana/tmp/supervisor.sock
[include]
files = /home/devana/webapps/devana/etc/supervisord/*.ini
Supervisor.ini
[program:devana]
command=/home/devana/webapps/devana/scripts/start_server
directory=/home/devana/webapps/devana/csiop/
user=devana
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile = /home/devana/tmp/gunicorn_supervisor.log
start_server
NAME="devana" # Name of the application
DJANGODIR=/home/devana/webapps/devana/csiop # Django project directory
SOCKFILE=/home/devana/webapps/devana/run/gunicorn.sock # we will communicte using this
unix socket
USER=devana # the user to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=devana.settings.production # which settings should Django use
DJANGO_WSGI_MODULE=devana.wsgi # WSGI module name
BIND=2.14.5.58:31148 (IP and the port number provided by webfaction in this place)
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/devana/webapps/devana/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER \
--log-level=debug \
--bind=$BIND
Now when I run the '../bin/Supervisord' command, gunicorn starts, but it is listening on 127.0.0.1:8000 instead of the bind address I provided, and I am not able to open my website at http://mywebsite.com.
Could someone point out what I am doing wrong?
I found the problem. Instead of using a single BIND variable containing both the IP and the port, I separated them into two variables and used --bind=$IP:$PORT. That seems to work, as sketched below.
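A minimal sketch of that change in the start_server script above (values copied from the question):
IP=2.14.5.58      # address provided by WebFaction
PORT=31148        # port provided by WebFaction
exec /home/devana/webapps/devana/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
    --name $NAME \
    --workers $NUM_WORKERS \
    --user=$USER \
    --log-level=debug \
    --bind=$IP:$PORT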
If gunicorn listens on 127.0.0.1:8000, it is probably falling back to the default because the supplied -b / --bind parameter could not be applied.
In my case, I ran gunicorn via Docker and had the following directive in my Dockerfile to run as default command:
CMD ["gunicorn", "config.wsgi", "--bind 0.0.0.0:8000"] # listening on 127.0.0.1:8000
CMD ["gunicorn", "config.wsgi", "--bind", "0.0.0.0:8000"] # listening on 0.0.0.0:8000
I'm not sure what was broken in your case but if someone from the future stumbles upon this: check how the --bind value is passed to gunicorn.
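One quick check is gunicorn's own startup log, which prints the address it actually bound to (the container name is illustrative):
docker logs mycontainer 2>&1 | grep 'Listening at'
# prints e.g. "Listening at: http://0.0.0.0:8000" when the bind was applied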

Issues with MySQL restart when run through a crontab scheduler

I have written a shell script which starts MySQL when it's killed/terminated. I am running this shell script using a crontab.
My cron entry runs the script file mysql.sh located at /root/mysql.sh:
sh /root/mysql.sh
mysql.sh:
cd /root/validate-mysql-status
sh /root/validate-mysql-status/validate-mysql-status.sh
validate-mysql-status.sh:
# mysql root/admin username
MUSER="xxxx"
# mysql admin/root password
MPASS="xxxxxx"
# mysql server hostname
MHOST="localhost"
MSTART="/etc/init.d/mysql start"
# path to mysqladmin
MADMIN="$(which mysqladmin)"
# see if MySQL server is alive or not
# redirecting with 2>&1 could be better, but I would like to keep it simple
$MADMIN -h $MHOST -u $MUSER -p${MPASS} ping 2>/dev/null 1>/dev/null
if [ $? -ne 0 ]; then
    # MySQL's status log file
    MYSQL_STATUS_LOG=/root/validate-mysql-status/mysql-status.log
    # If the log file does not exist, create a new one
    if [ ! -f $MYSQL_STATUS_LOG ]; then
        echo "Creating MySQL status log file.." > $MYSQL_STATUS_LOG
        now="$(date)"
        echo "[$now] error : MySQL not running" >> $MYSQL_STATUS_LOG
    else
        now="$(date)"
        echo "[$now] error : MySQL not running" >> $MYSQL_STATUS_LOG
    fi
    # Restarting MySQL
    /etc/init.d/mysql start
    now1="$(date)"
    echo "[$now1] info : MySQL started" >> $MYSQL_STATUS_LOG
    cat $MYSQL_STATUS_LOG
fi
When I run the above MySQL shell script manually via Webmin's crontab, MySQL starts successfully (when it's killed).
However, when I schedule it using a cron job, MySQL doesn't start. The logs are printed properly (which means cron runs the scheduled script successfully), but MySQL does not restart.
crontab -l displays:
* * * * * sh /root/mysql.sh
I found from various sources that we should give the absolute path to restart MySQL through schedulers like cron. However, that hasn't worked for me.
Can anyone please help me!
Thank You.
First, a crontab entry normally looks like this:
* * * * * /root/mysql.sh
So remove the surplus sh, put a shebang at the beginning of the script (#!/bin/bash, I suppose; why are you referring to sh instead of bash?), and don't forget to set execute permission on the file (chmod +x /root/mysql.sh).
Second, running scripts from crontab is tricky, because the environment is different! You have to set it up manually. We start with PATH: go to the console, do echo $PATH, and then copy-paste the result into an export PATH=<your path> line in your cron script:
mysql.sh:
#!/bin/bash
export PATH=.:/bin:/usr/local/bin:/usr/bin:/opt/bin:/usr/games:./:/sbin:/usr/sbin:/usr/local/sbin
{
    cd /root/validate-mysql-status
    /root/validate-mysql-status/validate-mysql-status.sh
} >> OUT 2>> ERR
Note that I also redirected all the output to files so that you don't receive emails from cron.
The problem is knowing which other variables (besides PATH) matter. Try going through set | less and figure out which variables might also need to be set in the cron script. If there are any MySQL-related variables, you must set them! You can also examine the cron script's environment by adding set > cron.env to the cron script and then diffing it against the console environment to look for significant differences, as sketched below.
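A minimal sketch of that comparison (file paths are illustrative):
# inside the cron script
set > /root/cron.env
# from an interactive console
set > /root/console.env
# then compare the two environments
diff /root/console.env /root/cron.env | less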