Why doesn't gunicorn write logs to the reopened file while rotating a log file?

My test scenario is as follows:
1. Run gunicorn:
$ gunicorn main:app --conf=./settings/gunicorn/gunicorn.py
2. Remove the access.log file.
3. Send a USR1 signal so that gunicorn reopens a new access.log file:
kill -USR1 $(cat ./data/pid/gunicorn.pid)
4. gunicorn writes to the new access.log file.
Steps 1-3 seem to work as expected: a new access.log file is created after gunicorn receives the USR1 signal.
However, gunicorn no longer writes any log entries to the new file.
# gunicorn.py (config)
import multiprocessing
worker_class = 'uvicorn.workers.UvicornWorker'
workers = multiprocessing.cpu_count()
pidfile = './data/pid/gunicorn.pid'
bind = 'unix:./data/socket/auth.sock'
accesslog = './data/log/gunicorn/access.log'
errorlog = './data/log/gunicorn/error.log'
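For reference, here is the same scenario consolidated into a single shell sketch, reusing the commands and paths from above; the background start, the sleep, and the curl test request are my own additions for illustration and assume curl is available:
# start gunicorn in the background with the config above
gunicorn main:app --conf=./settings/gunicorn/gunicorn.py &
sleep 2

# simulate rotation by deleting the current access log
rm ./data/log/gunicorn/access.log

# ask the gunicorn master process to reopen its log files
kill -USR1 $(cat ./data/pid/gunicorn.pid)

# make a request over the unix socket so an access-log line should be written
curl --unix-socket ./data/socket/auth.sock http://localhost/
tail ./data/log/gunicorn/access.log   # stays empty, which is the problem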

Related

gitlab-runner randomly runs as root rather than user gitlab-runner

I have a simple gitlab-runner setup on its own ubuntu server. It was registered using:
sudo gitlab-runner register -n --url https://gitlab.com/ --registration-token {{GITLAB_REGISTRATION_TOKEN}} --executor shell --description "{{GITLAB_RUNNER_DESCRIPTION}}"
The only gitlab-runner process (ps -ef) that is running is:
/usr/bin/gitlab-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --syslog --user gitlab-runner
My /etc/gitlab-runner/config.toml config file contains:
concurrent = 1
check_interval = 0

[[runners]]
  name = "iouze/landing"
  url = "https://gitlab.com/"
  token = "0530b.....6f9"
  executor = "shell"
  [runners.cache]
When my CI pipeline triggers, jobs sometimes get picked up as the gitlab-runner user, in which case they run correctly (as a shell executor).
But sometimes a job runs as root, in which case it runs as a docker executor and then gives errors (because the script is running in an unexpected environment).
Why would it sometimes run as root when there is clearly a --user on the service?
I am running on gitlab.com, and the pipeline jobs were sometimes getting picked up by the shared runners that are already configured there. I disabled the shared runners.
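A related precaution (my own suggestion, not part of the original fix): register your runner with a tag and lock it to the project, so only jobs that explicitly request that tag land on it. The tag name landing below is just an example:
sudo gitlab-runner register -n --url https://gitlab.com/ \
  --registration-token {{GITLAB_REGISTRATION_TOKEN}} \
  --executor shell \
  --description "{{GITLAB_RUNNER_DESCRIPTION}}" \
  --tag-list landing \
  --locked
Jobs then need tags: [landing] in .gitlab-ci.yml to be picked up by this runner.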

How to configure ExecStart for Gunicorn without WSGI?

Systemd and Gunicorn require a wsgi file of some sort as the last arg to ExecStart: http://docs.gunicorn.org/en/latest/deploy.html?highlight=ExecStart#systemd
With Django, this was in the main module as wsgi.py:
ExecStart=/home/admin/django/bin/gunicorn --config /home/admin/src/gunicorn.py --bind unix:/tmp/api.sock myapp.wsgi
But this file obviously doesn't exist when using Sanic and uvloop (I believe the new protocol is called ASGI). I tried substituting it for app.py which unsurprisingly didn't work:
ExecStart=/home/admin/sanic/bin/gunicorn --config /home/admin/src/gunicorn.py --bind unix:/tmp/api.sock myapp.app
How should this parameter be configured when using Sanic?
If you want to start Sanic with systemd, why don't you use supervisord: Supervisord.
Boot -> Systemd -> supervisord -> gunicorn -> sanic
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file
[supervisord]
logfile=/var/log/supervisord/supervisord.log ; supervisord log file
logfile_maxbytes=50MB ; maximum size of logfile before rotation
logfile_backups=10 ; number of backed up logfiles
loglevel=error ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid ; pidfile location
nodaemon=false ; run supervisord as a daemon
minfds=1024 ; number of startup file descriptors
minprocs=200 ; number of process descriptors
user=root ; default user
childlogdir=/var/log/supervisord/ ; where child log files will live
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:ctrlapi]
directory=/home/ubuntu/api
command=/home/ubuntu/api/venv3/bin/gunicorn api:app --bind 0.0.0.0:8000 --worker-class sanic.worker.GunicornWorker -w 2
stderr_logfile = log/api_stderr.log
stdout_logfile = log/api_stdout.log
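Assuming that configuration is saved as /etc/supervisord.conf (the path is my assumption), you would start and manage the app roughly like this:
supervisord -c /etc/supervisord.conf    # start supervisord itself
supervisorctl status ctrlapi            # check the gunicorn/sanic program
supervisorctl restart ctrlapi           # restart it, e.g. after a deploy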
I have not yet deployed this myself with systemd and gunicorn, but the documentation seems pretty good on this.
In order to run Sanic application with Gunicorn, you need to use the special sanic.worker.GunicornWorker for Gunicorn worker-class argument:
gunicorn myapp:app --bind 0.0.0.0:1337 --worker-class sanic.worker.GunicornWorker
With this in mind, how about this:
ExecStart=/home/admin/sanic/bin/gunicorn --config /home/admin/src/gunicorn.py myapp:app --bind 0.0.0.0:1337 --worker-class sanic.worker.GunicornWorker
I think the big piece you are missing is the GunicornWorker worker class.
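Wrapping that ExecStart in a complete unit is then mostly boilerplate. A minimal sketch, assuming a unit name of myapp.service, the admin user, and the paths from the question (all of these are placeholders to adjust):
sudo tee /etc/systemd/system/myapp.service > /dev/null <<'EOF'
[Unit]
Description=Sanic app served by gunicorn
After=network.target

[Service]
User=admin
WorkingDirectory=/home/admin/src
ExecStart=/home/admin/sanic/bin/gunicorn --config /home/admin/src/gunicorn.py myapp:app --bind 0.0.0.0:1337 --worker-class sanic.worker.GunicornWorker
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myapp.service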

Orion Context Broker functional test failure

I have successfully forked and built the Context Broker source code on a CentOS 6.9 VM and now I am trying to run the functional tests as the official documentation suggests. First, I installed the accumulator-server.py script:
$ make install_scripts INSTALL_DIR=~
Verified that it is installed:
$ accumulator-server.py -u
Usage: accumulator-server.py --host <host> --port <port> --url <server url> --pretty-print -v -u
Parameters:
--host <host>: host to use database to use (default is '0.0.0.0')
--port <port>: port to use (default is 1028)
--url <server url>: server URL to use (default is /accumulate)
--pretty-print: pretty print mode
--https: start in https
--key: key file (only used if https is enabled)
--cert: cert file (only used if https is enabled)
-v: verbose mode
-u: print this usage message
And then run the functional tests:
$ make functional_test INSTALL_DIR=~
But the test fails and exits with the message below:
024/927: 0000_ipv6_support/ipv4_ipv6_both.test ........................................................................ (FAIL 11 - SHELL-INIT exited with code 1) testHarness.sh/IPv6 IPv4 Both : (0000_ipv6_support/ipv4_ipv6_both.test)
make: *** [functional_test] Error 11
$
I checked the file ../0000_ipv6_support/ipv4_ipv6_both.shellInit.stdout for any hint of what may be going wrong, but the error log does not lead me anywhere:
{ "dropped" : "ftest", "ok" : 1 }
accumulator running as PID 6404
Unable to start listening application after waiting 30
Does anyone have any idea about what may be going wrong here?
I checked the script which prints the error line Unable to start listening application after waiting 30 and noticed that stderr for accumulator-server.py is logged into the /tmp folder.
The accumulator_9977_stderr file had this log: 0000_ipv6_support/ipv4_ipv6_both.shellInit: line 27: accumulator-server.py: command not found
Once I saw this log I understood the mistake I had made: I was running the functional tests with sudo, so secure_path was being used instead of my PATH variable.
So in the end, running the functional tests with the command below solved the issue for me.
$ sudo "PATH=$PATH" make functional_test INSTALL_DIR=~
This can also be solved by editing the /etc/sudoers file with:
$ sudo visudo
and modifying the secure_path value.
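For example, if accumulator-server.py ended up in your home bin directory (the exact path depends on your INSTALL_DIR; /home/youruser/bin below is only an illustration), the secure_path line in /etc/sudoers would end up looking something like:
Defaults    secure_path = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/youruser/bin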

unoconv returns an error when running as www-data

When running this from the command line as root, it works:
unoconv -f csv $file
But when running it as www-data, this error is returned:
Traceback (most recent call last):
File "/usr/bin/unoconv", line 1114, in <module>
office_environ(of)
File "/usr/bin/unoconv", line 203, in office_environ
os.environ['PATH'] = realpath(office.basepath, 'program') + os.pathsep + os.environ['PATH']
File "/usr/lib/python3.4/os.py", line 633, in __getitem__
raise KeyError(key) from None
KeyError: 'PATH'
Update:
echo shell_exec('echo $PATH');
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
CentOS 7.3 with PHP via php-fpm
The environment in PHP is cleaned by php-fpm.
You can use putenv to set env["PATH"] in PHP code, for example:
putenv("PATH=/sbin:/bin:/usr/sbin:/usr/bin");
var_dump(shell_exec('unoconv -vvvv -f pdf -o 123.pdf 123.doc'));
Or you can set the environment in a one-line shell command:
var_dump(shell_exec('PATH=/sbin:/bin:/usr/sbin:/usr/bin'.' unoconv -vvvv -f pdf -o 123.pdf 123.doc'));
Or you can change /etc/php-fpm.d/www.conf to pass the environment through to PHP by adding this line:
clear_env = no
and then restarting php-fpm:
systemctl restart php-fpm.service
The PHP call you used (pasted from chat):
exec("unoconv -f csv $file")
My guess is that exec() is giving you an environment that is too limited. To work around this, you could set up a polled directory. The PHP script would copy files to be converted into the polled directory and wait for the files to be converted.
Then create a bash script (either running as root or a somewhat more secure user) to run in an infinite loop and check the polled directory for any incoming files. See How to keep polling file in a directory till it arrives in Unix for what the bash script might look like.
When the bash script sees incoming files, it runs unoconv.
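A minimal sketch of such a polling script, assuming made-up directory names and a 5-second poll interval:
#!/usr/bin/env bash
# Watch an incoming directory and convert anything that shows up.
WATCH_DIR=/var/spool/unoconv/incoming    # assumed drop-off dir used by the PHP script
DONE_DIR=/var/spool/unoconv/converted    # assumed archive dir for processed files

while true; do
    for f in "$WATCH_DIR"/*; do
        [ -e "$f" ] || continue          # glob matched nothing; keep polling
        unoconv -f csv "$f" && mv "$f" "$DONE_DIR"/
    done
    sleep 5
done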
Found a solution myself by running libreoffice directly
sudo libreoffice --headless --convert-to csv --outdir $tmp_path $file

How to create a capistrano task to download latest database_backup.tgz and import locally?

I'm trying to do something with a Capistrano task that is similar to the heroku db:pull functionality if you are familiar with it.
I have a remote server. On that server I have a bunch of backups in the /path/db_backups/ folder. And in that folder there's a backup of the database everyday.
All I want to do is
Download the latest backup on the client machine.
Untar it.
Import it into local mysql db.
Anyone know of a good way to handle this? Is there a gem I am unaware of? Is there a script you have handy?
I'm not sure if there is a gem for that. I usually copy/paste this task into my Capistrano config (config/deploy.rb) to pull a compressed database from the server and store it in my development environment:
namespace :utils do
  desc 'pull the DB from the server'
  task :pull_db, :roles => :db, :only => { :primary => true } do
    website = "http://www.my_website.com"
    filename = "#{application}.dump.#{Time.now.to_f}.sql"
    filename_bz2 = "#{filename}.bz2"
    remote_file = "#{current_path}/public/#{filename_bz2}"

    text = capture "cat #{deploy_to}/current/config/database.yml"
    yaml = YAML::load(text)

    on_rollback { run "rm #{remote_file}" }
    run "mysqldump -h#{yaml[rails_env]['host']} -u #{yaml[rails_env]['username']} -p #{yaml[rails_env]['database']} | bzip2 -c > #{remote_file}" do |ch, stream, out|
      ch.send_data "#{yaml[rails_env]['password']}\n" if out =~ /^Enter password:/
    end

    local_text = run_locally("cat config/database.yml")
    local_yaml = YAML::load(local_text)

    run_locally("wget #{website}/#{filename_bz2}")
    run_locally("bzip2 -d #{filename_bz2}")

    run_locally("bundle exec rake db:drop")
    run_locally("bundle exec rake db:create")

    if local_yaml['development']['password'] && !local_yaml['development']['password'].blank?
      run_locally("mysql -h#{local_yaml['development']['host']} -u#{local_yaml['development']['username']} -p#{local_yaml['development']['password']} #{local_yaml['development']['database']} < #{filename}")
    else
      run_locally("mysql -h#{local_yaml['development']['host']} -u#{local_yaml['development']['username']} #{local_yaml['development']['database']} < #{filename}")
    end

    run_locally("rm #{filename}")
    run "rm #{remote_file}"
  end
end
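With the task in config/deploy.rb, you can then pull and import the database with:
cap utils:pull_db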
The following script should achieve that:
# Find out which file to copy and save its name in a local text file:
# ssh allows you to specify a command that should be executed on the remote
# machine instead of opening a terminal session on it. I use this to get
# a sorted (ls -t sorts by modification date) list of all backups. I then
# truncate this list to one entry using head -1 and save the file name in a
# local file (filename.txt).
# (12.34.56.78 is a placeholder for the ip/hostname of your server)
ssh 12.34.56.78 ls -t /path/to/backups/ | head -1 > filename.txt
# Copy the backup specified in filename.txt to the tmp dir on your local machine.
scp 12.34.56.78:/path/to/backups/`cat filename.txt` /tmp/db_backup.sql.tar
# Untar the backup archive.
cd /tmp && tar -xf db_backup.sql.tar
# Import into database of choice.
mysql -u your_username -p database_to_import_to < /tmp/db_backup.sql
(This assumes that you are on a UNIX system and have scp and tar installed...)