Using Monit with Puma in JRuby

I have a Rails app set up using JRuby with Puma as the web server. Puma doesn't daemonize on its own, so I wrapped it in a bash script to handle generating a pid file (as described in the Monit FAQ). The script is below:
#!/bin/bash
APP_ROOT="/home/user/public_html/app"
export RAILS_ENV=production
export JRUBY_OPTS="--1.9"
export PATH=/home/user/.rbenv/shims:/home/user/.rbenv/bin:$PATH

case $1 in
  start)
    echo $$ > $APP_ROOT/puma.pid
    cd $APP_ROOT
    exec 2>&1 puma -b tcp://127.0.0.1:5000 1>/tmp/puma.out
    ;;
  stop)
    kill `cat $APP_ROOT/puma.pid` ;;
  *)
    echo "usage: puma {start|stop}" ;;
esac
exit 0
This works from the command line, and it works even if I execute it after running the following to simulate the shell environment monit uses:
env -i PATH=/bin:/usr/bin:/sbin:/usr/sbin /bin/sh
The relevant monitrc lines are below:
check process puma with pidfile /home/user/public_html/app/puma.pid
start program = "/usr/bin/env PATH=/home/user/.rbenv/shims:/home/user/.rbenv/bin:$PATH /home/user/puma.sh start"
stop program = "/usr/bin/env PATH=/home/user/.rbenv/shims:/home/user/.rbenv/bin:$PATH /home/user/puma.sh stop"
The monit log shows it constantly trying to start Puma; it even gets as far as generating a new PID, but it is never able to actually start Puma. Every time I run this script from any other context I can think of, it works - except from monit.

I managed to get this to work after reading this post: running delayed_job under monit with ubuntu
For some reason, changing my monitrc to use the following syntax made this work. I have no idea why:
start program = "/bin/su - user -c '/usr/bin/env PATH=/home/user/.rbenv/shims:/home/user/.rbenv/bin:$PATH /home/user/puma.sh start'"
stop program = "/bin/su - user -c '/usr/bin/env PATH=/home/user/.rbenv/shims:/home/user/.rbenv/bin:$PATH /home/user/puma.sh stop'"
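A plausible explanation (my guess, not stated in the original answer): su - user starts a login shell, which sources the user's profile and hence the rbenv setup, whereas monit spawns commands with a bare-bones environment. You can compare what the script sees under each invocation style:

# PATH and puma as seen from a login shell (what the su - form gets)
/bin/su - user -c 'echo $PATH; which puma'
# PATH and puma as seen from a bare sh (close to monit's default environment)
env -i /bin/sh -c 'echo $PATH; which puma'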

Related

My sh script always runs after the ENTRYPOINT in my Dockerfile

I tried to set up a WordPress site (installing it myself rather than using an official image). I have one container with Apache, PHP and mariadb-client (to query the mariadb-server in another container).
I have another container with mariadb on it.
I use wp-cli to configure the WordPress site, but when I build my Docker image I can't execute the command (inside an sh file), which is
wp core config --allow-root --dbname=$MYSQL_DATABASE --dbuser=$MYSQL_USER --dbpass=$MYSQL_PASSWORD --dbhost=172.20.0.2 --dbprefix=wp --path=/var/www/html/wordpress
because my mariadb container isn't up yet.
So I tried to run the script via ENTRYPOINT, and when I do docker-compose up my script runs and I get this message:
apache-php_SDV | Success: Generated 'wp-config.php' file.
apache-php_SDV | Error: The 'wp-config.php' file already exists.
My script runs every second. I tried to use CMD before, and it doesn't work; it's as if CMD were never run.
I get the same result if I put CMD after ENTRYPOINT: I can run my script only when both containers are up.
I also tried using command in my docker-compose.yml, but that didn't help. Does anyone have a solution?
I resolved this myself; it may be useful to others. I kill PID 1, which runs my first script, and run another script that starts Apache and takes PID 1 in the container. Here is the code:
#!/bin/bash
set -o errexit
case "$1" in
  sh|bash)
    # pass an interactive shell through unchanged
    set -- "$@"
    ;;
  *)
    # otherwise run apache in the foreground as PID 1
    set -- apache2ctl -D FOREGROUND "$@"
    ;;
esac
exec "$@"

Error: The command '/bin/sh -c service mysql start' returned a non-zero code: 1

When I try to build one of my projects on Ubuntu 16.04 by running a script written by the previous team,
sudo ./build
I get this error:
Step 8/24 : RUN service mysql start
---> Running in 3djjk653642d
* Starting MySQL database server mysqld
...fail!
The command '/bin/sh -c service mysql start' returned a non-zero code: 1
My Dockerfile looks like this:
COPY schema.sql /tmp/schema.sql
### User with ALL accesses (winter/toor)
RUN service mysql start
RUN mysql < /tmp/schema.sql
RUN mysql -e "CREATE USER 'winter'@'%' IDENTIFIED BY 'toor'"
RUN service mysql start && mysql -e "GRANT ALL ON its.* TO 'winter'@'%'"
Any help, please?
RUN statements in a Dockerfile are used to run a command that has some effect on the filesystem, which is then saved as another layer.
It's not normal to start a service like this, because the in-memory state (where the service is running) is not stored in the image; a process can only be running in a running container.
The normal way to do something like this is to write a bash script (called start.sh, or something similar), copy it into the image, and run it from an ENTRYPOINT or CMD line at the end of the Dockerfile. It will then run when the container is created by a docker run ... command.
start.sh:
#!/bin/bash
service mysql start
mysql < /tmp/schema.sql
mysql -e "CREATE USER 'winter'@'%' IDENTIFIED BY 'toor'"
mysql -e "GRANT ALL ON its.* TO 'winter'@'%'"
Dockerfile:
COPY schema.sql /tmp/schema.sql
COPY start.sh /
ENTRYPOINT ["/start.sh"]
Have a read here for some information on the difference between ENTRYPOINT and CMD and when each should be used.
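As a quick illustration (a sketch; the flags are hypothetical): ENTRYPOINT fixes the executable, while CMD only supplies default arguments that docker run can override:

FROM ubuntu:16.04
COPY start.sh /
ENTRYPOINT ["/start.sh"]
CMD ["--init"]
# docker run myimage          -> runs /start.sh --init
# docker run myimage --debug  -> runs /start.sh --debug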
Better still - use the official MySQL image from Docker Hub. Through the use of environment variables, you can probably achieve everything you require.
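For example, a sketch using the environment variables documented for the official mysql image and the names from the question (the container name is illustrative). Any .sql file mounted into /docker-entrypoint-initdb.d is executed on the container's first start:

docker run -d --name its-db \
  -e MYSQL_ROOT_PASSWORD=toor \
  -e MYSQL_DATABASE=its \
  -e MYSQL_USER=winter \
  -e MYSQL_PASSWORD=toor \
  -v "$PWD/schema.sql":/docker-entrypoint-initdb.d/schema.sql \
  mysql:5.7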
For me the error was:
The command 'yum -y install nginx' returned a non-zero code: 1
This Dockerfile helped me:
FROM centos:7
MAINTAINER linuxtechlab
LABEL Remarks="This is a dockerfile example for Centos system"
RUN yum -y update
RUN yum -y install httpd
RUN yum clean all
# on CentOS 7, nginx comes from the EPEL repository
RUN yum -y install epel-release
RUN yum -y install nginx
EXPOSE 80
#ENV HOME /root
#WORKDIR /root
#ENTRYPOINT ["ping"]
#CMD ["google.com"]

If a service is started during docker build, should it be running at runtime?

I have been working on setting up a self-contained Rails app in a single container. This means getting both Rails and a data-persistence service running at the same time in one container. In our case, that means MySQL.
However, I ran into multiple issues getting this working, because MySQL wasn't running.
During the build step, if I had RUN mysqld and then a separate RUN rake db:create step, rake would crash, because MySQL was down.
So I worked around this by wrapping the two commands into a script. However, at runtime, Rails would fail to start because MySQL wasn't running.
My intuition said that if MySQL was started during the build, it should be available at runtime, but that was not my experience. Starting the Rails server had to be wrapped in a script with another call to mysqld.
Here's the Dockerfile:
FROM ruby:2.2
# assumption: the original snippet never sets APPDIR
ENV APPDIR /app
RUN mkdir -p $APPDIR
WORKDIR $APPDIR
ADD Gemfile* $APPDIR/
RUN bundle install
RUN apt-get update -qq
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y -qq nodejs mysql-server --no-install-recommends
RUN rm -rf /var/lib/apt/lists/*
COPY . $APPDIR
RUN script/mysql-setup.sh # contents are: mysqld_safe; rake db:create; rake db:migrate
EXPOSE 3000
CMD ["script/rails-launcher.sh"] # contents are: mysqld_safe; rails s
Do I need to do something differently in the Dockerfile? Why isn't mysql up at runtime?
My intuition says that if mysql is started during the build, it should be available at runtime
This is incorrect. Docker will start the service for you and perform the subsequent steps you've defined in the same RUN command, but it then bundles everything up into an intermediate image for subsequent commands. The image doesn't record running processes, only whatever is required for startup, such as init.d scripts.
Your solution would be to use a server startup script or continue to invoke mysqld_safe as you do in your CMD line.
A good idea is to use supervisord to maintain all of your services in non-daemon mode. Phusion also provides a nice base image with a runit initializer script.
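A minimal supervisord sketch (an assumption, not from the original answer: it presumes supervisord is installed in the image, and the program commands mirror the question's two services):

# /etc/supervisor/conf.d/app.conf
[supervisord]
nodaemon=true

[program:mysql]
command=/usr/bin/mysqld_safe

[program:rails]
directory=/app
command=bundle exec rails s -b 0.0.0.0

The Dockerfile's CMD then becomes supervisord itself, e.g. CMD ["/usr/bin/supervisord", "-n"].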
Eventually, you'll come to see that the power of Docker lies in breaking MySQL out of your Rails app container altogether and running it in an entirely different container, linked to the first.
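A sketch of that split using a user-defined Docker network (the image name myapp and the password are illustrative):

docker network create appnet
docker run -d --name db --network appnet -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name web --network appnet -p 3000:3000 myapp \
  bundle exec rails s -b 0.0.0.0
# inside "web", the database is reachable at the hostname "db"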
The RUN command is used to configure your image; each time it is called, a new layer is created with the results of the RUN command. So, if you need to configure your database during the image build step, you have two solutions: you can call a number of commands in a single RUN call, like
RUN /bin/bash -c "mysqld_safe && rake db:create && rake db:migrate"
Or via a call to a single script, as you did.
In both cases, you have to understand that the fact you ran something during the image build does not mean it will run automatically on container start-up. So you have to start your database server manually on container start-up.

How to reinitialize the database

I downloaded a demo copy of Hybris for evaluation purposes, and it has been more than 30 days since I downloaded it. Recently I tried to restart it, but it would not start, and instead gave me the following message:
"This licence is only for demo or develop usage and is valid for 30 days.
After this time you have to reinitialize database to continue your work."
I am (and have been) running it on a Mac, and the database is MySQL.
What (UNIX) commands do I use to re-initialise the database so that I can start up the Hybris server?
Using the command line in the Terminal application, go to YOURPATH/hybris/bin/platform and run ant clean all, then ant initialize, then start Hybris:
1) Goto your platform directory
cd $YOURPATH/hybris/bin/platform
2) Set ant's environment by running "dot" "space" "dot-slash" setantenv.sh
. ./setantenv.sh
3) Then run ant clean all (to clean the environment)
ant clean all
4) Then run ant initialize (to re-initialize the environment)
ant initialize
5) Re-start the hybris server process by running hybrisserver.sh
./hybrisserver.sh
6) Have a nice rest of your day!
You can use the Ant command ant initialize and the error will go away.
ant initialize removes the tables that exist in the Hybris items.xml files. If you want to reset your DB, I have a script that I use across various projects (it can be found here, on GitHub):
#!/bin/bash
MUSER="$1"
MPASS="$2"
MDB="$3"

# Detect paths
MYSQL=$(which mysql)
AWK=$(which awk)
GREP=$(which grep)

if [ $# -ne 3 ]; then
  echo "Usage: $0 {MySQL-User-Name} {MySQL-User-Password} {MySQL-Database-Name}"
  echo "Drops all tables from a MySQL database"
  exit 1
fi

TABLES=$($MYSQL -u $MUSER -p$MPASS $MDB -e 'show tables' | $AWK '{ print $1}' | $GREP -v '^Tables')

for t in $TABLES; do
  echo "Deleting $t table from $MDB database..."
  $MYSQL -u $MUSER -p$MPASS $MDB -e "drop table $t"
done
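Usage, assuming the script is saved as drop-all-tables.sh (the credentials and database name here are placeholders):

chmod +x drop-all-tables.sh
./drop-all-tables.sh root toor hybrisdb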
You need to reinitialize ([ant all]) and rebuild Hybris, just as you did the first time.
Reason: the evaluation copy works only for 30 days, after which it expires.
When you start your server, the console shows the license message quoted above.
You can also use the Hybris Administration Console to run the initialization:
Platform -> Initialization

Issues with MySQL restart when run through a crontab scheduler

I have written a shell script which starts MySQL when it has been killed/terminated, and I am running this shell script using a crontab.
My cron invokes the script named mysql.sh located at /root/mysql.sh:
sh /root/mysql.sh
mysql.sh:
cd /root/validate-mysql-status
sh /root/validate-mysql-status/validate-mysql-status.sh
validate-mysql-status.sh:
# mysql root/admin username
MUSER="xxxx"
# mysql admin/root password
MPASS="xxxxxx"
# mysql server hostname
MHOST="localhost"
MSTART="/etc/init.d/mysql start"
# path to mysqladmin
MADMIN="$(which mysqladmin)"

# see if the MySQL server is alive or not
# 2>&1 could be better, but I would like to keep it simple
$MADMIN -h $MHOST -u $MUSER -p${MPASS} ping 2>/dev/null 1>/dev/null
if [ $? -ne 0 ]; then
  # MySQL's status log file
  MYSQL_STATUS_LOG=/root/validate-mysql-status/mysql-status.log
  # If the log file does not exist, create it first
  if [ ! -f $MYSQL_STATUS_LOG ]; then
    echo "Creating MySQL status log file.." > $MYSQL_STATUS_LOG
  fi
  now="$(date)"
  echo "[$now] error : MySQL not running" >> $MYSQL_STATUS_LOG
  # Restart MySQL
  $MSTART
  now1="$(date)"
  echo "[$now1] info : MySQL started" >> $MYSQL_STATUS_LOG
  cat $MYSQL_STATUS_LOG
fi
When I run the above shell script manually via webmin's crontab, MySQL starts successfully (after it has been killed).
However, when I schedule it as a cron job, MySQL doesn't start. The logs are printed properly (meaning my cron runs the scheduled script successfully), but MySQL is not restarted.
crontab -l displays:
* * * * * sh /root/mysql.sh
I found from various URLs that we should give the absolute path to restart MySQL through schedulers like cron; however, that hasn't worked for me.
Can anyone please help?
Thank you.
First, a crontab entry normally looks like this:
* * * * * /root/mysql.sh
So remove the surplus sh, put a shebang at the beginning of the script (#!/bin/bash, I suppose - why are you referring to sh instead of bash?), and don't forget to give the file execute permission (chmod +x /root/mysql.sh).
Second, running scripts from crontab is tricky, because the environment is different! You have to set it up manually. Start with PATH: go to the console, run echo $PATH, then copy-paste the result into an export PATH=<your path> line in your cron script:
mysql.sh:
#!/bin/bash
export PATH=.:/bin:/usr/local/bin:/usr/bin:/opt/bin:/usr/games:./:/sbin:/usr/sbin:/usr/local/sbin
{
  cd /root/validate-mysql-status
  /root/validate-mysql-status/validate-mysql-status.sh
} >> OUT 2>> ERR
Note that I also redirected all the output to files so that you don't receive emails from cron.
The problem is knowing which other variables (besides PATH) matter. Go through set | less and try to figure out which variables might also need to be set in the cron script. If there are any MySQL-related variables, you must set them! You can also examine the cron script's environment by adding set > cron.env to the script and then diffing the result against your console environment to look for significant differences, as sketched below.
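For example (file locations are illustrative):

# inside the cron script: dump the environment cron provides
set > /root/cron.env
# then, from an interactive console:
set > /root/console.env
diff /root/cron.env /root/console.env    # spot variables cron is missing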