How to configure ExecStart for Gunicorn without WSGI?

Systemd and Gunicorn require a WSGI module of some sort as the last argument to ExecStart: http://docs.gunicorn.org/en/latest/deploy.html?highlight=ExecStart#systemd
With Django, this was in the main module as wsgi.py:
ExecStart=/home/admin/django/bin/gunicorn --config /home/admin/src/gunicorn.py --bind unix:/tmp/api.sock myapp.wsgi
But this file obviously doesn't exist when using Sanic and uvloop (I believe the new protocol is called ASGI). I tried substituting app.py for it, which unsurprisingly didn't work:
ExecStart=/home/admin/sanic/bin/gunicorn --config /home/admin/src/gunicorn.py --bind unix:/tmp/api.sock myapp.app
How should this parameter be configured when using Sanic?

If you want to start Sanic with systemd, why don't you use supervisord: Supervisord.
Boot -> Systemd -> supervisord -> gunicorn -> sanic
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file
[supervisord]
logfile=/var/log/supervisord/supervisord.log ; supervisord log file
logfile_maxbytes=50MB ; maximum size of logfile before rotation
logfile_backups=10 ; number of backed up logfiles
loglevel=error ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid ; pidfile location
nodaemon=false ; run supervisord as a daemon
minfds=1024 ; number of startup file descriptors
minprocs=200 ; number of process descriptors
user=root ; default user
childlogdir=/var/log/supervisord/ ; where child log files will live
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:ctrlapi]
directory=/home/ubuntu/api
command=/home/ubuntu/api/venv3/bin/gunicorn api:app --bind 0.0.0.0:8000 --worker-class sanic.worker.GunicornWorker -w 2
stderr_logfile = log/api_stderr.log
stdout_logfile = log/api_stdout.log
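To complete the Boot -> Systemd -> supervisord part of that chain, systemd also needs a unit that launches supervisord itself. A minimal sketch (the paths to supervisord, supervisorctl and the config file are assumptions, adjust them to your install; -n keeps supervisord in the foreground and overrides the nodaemon=false above, so systemd can track the process):
[Unit]
Description=Supervisor process control system
After=network.target
[Service]
ExecStart=/usr/bin/supervisord -n -c /etc/supervisord.conf
ExecStop=/usr/bin/supervisorctl shutdown
ExecReload=/usr/bin/supervisorctl reload
Restart=on-failure
[Install]
WantedBy=multi-user.target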

I have not yet deployed this myself with systemd and gunicorn, but the documentation seems pretty good on this.
In order to run a Sanic application with Gunicorn, you need to use the special sanic.worker.GunicornWorker for Gunicorn's worker-class argument:
gunicorn myapp:app --bind 0.0.0.0:1337 --worker-class sanic.worker.GunicornWorker
With this in mind, how about this:
ExecStart=/home/admin/sanic/bin/gunicorn --config /home/admin/src/gunicorn.py myapp:app --bind 0.0.0.0:1337 --worker-class sanic.worker.GunicornWorker
I think the big piece you are missing is the GunicornWorker worker class.
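Putting it together, a minimal unit file sketch, assuming the paths from the question and a Sanic version that still ships sanic.worker.GunicornWorker (myapp:app, i.e. a module myapp exposing a Sanic instance named app, is an assumption to adjust to your project):
[Unit]
Description=Gunicorn serving a Sanic app
After=network.target
[Service]
User=admin
WorkingDirectory=/home/admin/src
ExecStart=/home/admin/sanic/bin/gunicorn --config /home/admin/src/gunicorn.py --worker-class sanic.worker.GunicornWorker --bind unix:/tmp/api.sock myapp:app
Restart=on-failure
[Install]
WantedBy=multi-user.target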

Related

Pm2 startup issue with CENTOS 8 / SELinux

Do you know how to resolve this issue? I have searched everywhere without finding a solution.
06:45 SELinux is preventing systemd from open access on the file /root/.pm2/pm2.pid. For complete SELinux messages run: sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600 setroubleshoot
06:45 pm2-root.service: Can't convert PID files /root/.pm2/pm2.pid O_PATH file descriptor to proper file descriptor: Permission denied systemd 2
06:45 Failed to start PM2 process manager.
I have executed this command: sealert -l d84a5a0b-cfcf-4cb9-918a-c0952bf70600 setroubleshoot
Raw audit messages:
type=AVC msg=audit(1591498085.184:7731): avc: denied { open } for pid=1 comm="systemd" path="/root/.pm2/pm2.pid" dev="dm-0" ino=51695937 scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:admin_home_t:s0 tclass=file permissive=0
PM2 Version : 4.4.0
NODE version : 12.18.0
CentOS Version : 8
My systemd service:
[Unit]
Description=PM2 process manager
Documentation=https://pm2.keymetrics.io/
After=network.target
[Service]
Type=forking
User=root
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Environment=PATH=/sbin:/bin:/usr/sbin:/usr/bin:/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Environment=PM2_HOME=/root/.pm2
PIDFile=/root/.pm2/pm2.pid
Restart=on-failure
ExecStart=/usr/lib/node_modules/pm2/bin/pm2 resurrect
ExecReload=/usr/lib/node_modules/pm2/bin/pm2 reload all
ExecStop=/usr/lib/node_modules/pm2/bin/pm2 kill
[Install]
WantedBy=multi-user.target
Thank you
As said in the comments, I had the exact same issue.
To solve this, just run the following commands as root after trying to start the PM2 service (in your case, this start attempt would be systemctl start pm2-root):
ausearch -c 'systemd' --raw | audit2allow -M my-systemd
semodule -i my-systemd.pp
This looks pretty generic, but it works. These lines were suggested by SELinux itself. To get them, I had to run the command journalctl -xe after trying to start the service.
Two options:
Edit the systemd file that starts pm2 and specify an alternative location for the pm2 PIDFile. You'll have to make two changes: one to tell pm2 where to place the PIDFile, and one to tell systemd where to look for it. Replace the existing PIDFile line with the following two lines (and run systemctl daemon-reload afterwards so systemd picks up the edit):
Environment=PM2_PID_FILE_PATH=/run/pm2.pid
PIDFile=/run/pm2.pid
Create an SELinux rule that allows this particular behavior. You can do that exactly as Backslash36 suggests in their answer. If you want to create the policy file yourself rather than through audit2allow, the following should work, although you then have to compile it to a usable .pp file yourself. Note that the raw AVC message above shows init_t being denied open on a file labeled admin_home_t (the label for files under /root), so that is the pair the module needs to allow:
module pm2 1.0;
require {
    type admin_home_t;
    type init_t;
    class file { read open };
}
#============= init_t ==============
allow init_t admin_home_t:file { read open };
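If you save the module source above as pm2.te, the manual compile and install steps use the standard SELinux toolchain (from the checkpolicy and policycoreutils packages):
# translate the .te source into a policy module, then package and install it
checkmodule -M -m -o pm2.mod pm2.te
semodule_package -o pm2.pp -m pm2.mod
semodule -i pm2.pp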

change socket address mariadb_config

I can't understand what follows; can someone explain it to me and help me solve the problem?
I have a mariadb server and a front-end application in C.
I have two makefiles and I'd like to be able to use both of them.
The first one is this:
all:
	gcc -g src/*.c -o applicazione `mysql_config --cflags --include --libs`
clean:
	-rm applicazione
and it works: if I compile with this, my application runs without any trouble.
The second one is this:
all:
	gcc -g src/*.c -o applicazione `mariadb_config --cflags --include --libs`
clean:
	-rm applicazione
The difference is that the first uses mysql_config, while the second uses mariadb_config.
My problem is that with the second makefile (after some problems) I can compile successfully, but as soon as I try to connect to the server I get this error:
fabiano#fabiano-HP-15-Notebook-PC:~/Scrivania/BackupProgetto/0226198$ ./applicazione
Inserisci Matricola: g1
Inserisci Password: *
Connection error: Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
Reading on the net, I understand that the problem is that the socket is not where my application tries to find it.
Indeed, if I execute sudo mariadb and then \system, I can read this:
UNIX socket: /var/run/mysqld/mysqld.sock
Now my questions:
why does my application run successfully with the first makefile but not with the second one?
what can I do to make my application work with both makefiles?
My OS is Ubuntu 18.04.3 LTS.
If you compare the output of mysql_config --libs with the output of mariadb_config --libs, you will probably notice that different libraries from different locations are used.
mariadb_config is part of MariaDB Connector/C; the default build uses /tmp/mysql.sock for the socket file:
IF(NOT MARIADB_UNIX_ADDR)
  SET(MARIADB_UNIX_ADDR "/tmp/mysql.sock")
ENDIF()
The libraries from the mysql_config output were compiled with the default socket located at /var/run/mysqld, while the libraries from mariadb_config were compiled with the socket located in the tmp directory.
There are several options to fix that:
1) Change the socket in your my.cnf file. This needs to be done in the [mysqld] section, but also in the [mysql] section to make sure that the client tools will work properly (see the sketch after this list).
2) Set the environment variable MYSQL_UNIX_PORT to /var/run/mysqld/mysqld.sock before running your application.
3) If you build MariaDB Connector/C on your own:
cd mariadb-connector-c
mkdir bld
cd bld
cmake .. -DMARIADB_UNIX_ADDR=/var/run/mysqld/mysqld.sock
cmake --build .
4) Before connecting you can specify the location of the socket in your application:
MYSQL *mysql= mysql_init(NULL);
int rc= mysql_options(mysql, MARIADB_OPT_UNIXSOCKET, "/var/run/mysqld/mysqld.sock");
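For option 1, the my.cnf change would look something like this sketch: it moves the server socket to /tmp/mysql.sock, where the MariaDB Connector/C default build expects it (the section names are standard; the exact my.cnf location depends on your distribution):
[mysqld]
socket=/tmp/mysql.sock
[mysql]
socket=/tmp/mysql.sock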

Orion Context Broker functional test failure

I have successfully forked and built the Context Broker source code on a CentOS 6.9 VM and now I am trying to run the functional tests as the official documentation suggests. First, I installed the accumulator-server.py script:
$ make install_scripts INSTALL_DIR=~
Verified that it is installed:
$ accumulator-server.py -u
Usage: accumulator-server.py --host <host> --port <port> --url <server url> --pretty-print -v -u
Parameters:
--host <host>: host to use (default is '0.0.0.0')
--port <port>: port to use (default is 1028)
--url <server url>: server URL to use (default is /accumulate)
--pretty-print: pretty print mode
--https: start in https
--key: key file (only used if https is enabled)
--cert: cert file (only used if https is enabled)
-v: verbose mode
-u: print this usage message
And then run the functional tests:
$ make functional_test INSTALL_DIR=~
But the test fails and exits with the message below:
024/927: 0000_ipv6_support/ipv4_ipv6_both.test ........................................................................ (FAIL 11 - SHELL-INIT exited with code 1) testHarness.sh/IPv6 IPv4 Both : (0000_ipv6_support/ipv4_ipv6_both.test)
make: *** [functional_test] Error 11
$
I checked the file ../0000_ipv6_support/ipv4_ipv6_both.shellInit.stdout for any hint on what may be going wrong, but the error log does not lead me anywhere:
{ "dropped" : "ftest", "ok" : 1 }
accumulator running as PID 6404
Unable to start listening application after waiting 30
Does anyone have any idea about what may be going wrong here?
I checked the script which prints the error line Unable to start listening application after waiting 30 and noticed that stderr for accumulator-server.py is logged into the /tmp folder.
The accumulator_9977_stderr file had this log: 0000_ipv6_support/ipv4_ipv6_both.shellInit: line 27: accumulator-server.py: command not found
Once I saw this log I understood the mistake I made. I was running the functional tests with sudo, so the secure_path was being used instead of my PATH variable.
So in the end, running the functional tests with the command below solved the issue for me:
$ sudo "PATH=$PATH" make functional_test INSTALL_DIR=~
This can also be solved by editing the /etc/sudoers file:
$ sudo visudo
and modifying the secure_path value.
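For example, assuming make install_scripts INSTALL_DIR=~ placed accumulator-server.py in ~/bin, the sudoers entry could be extended like this (the home directory path is an example, match it to your INSTALL_DIR):
Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/home/youruser/bin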

Docker - Enable Remote HTTP API with SystemD and "daemon.json"

Disclaimer:
On an old machine with Ubuntu 14.04 and Upstart as the init system, I enabled the HTTP API by defining DOCKER_OPTS in /etc/default/docker. It works.
$ docker version
Client:
Version: 1.11.2
(...)
Server:
Version: 1.11.2
(...)
Problem:
This solution does not work on a recent machine with Ubuntu 16.04 and systemd.
As stated at the top of the recently installed file /etc/default/docker:
# Docker Upstart and SysVinit configuration file
#
# THIS FILE DOES NOT APPLY TO SYSTEMD
#
# Please see the documentation for "systemd drop-ins":
# https://docs.docker.com/engine/articles/systemd/
#
(...)
As I checked this information on the Docker documentation page for systemd, I need to fill in a daemon.json file, but as stated in the reference some properties are self-explanatory while others are under-explained.
That being said, I'm looking for help to convert this:
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -G myuser --debug"
to the daemon.json object?
Notes
PS1: I'm aware that daemon.json has debug: true as default.
PS2: Probably group: "myuser" will work like this or with an array of strings.
PS3: My main concern is to use the socket and HTTP simultaneously.
EDIT (8/08/2017)
After reading the accepted answer, check the #white_gecko answer for more input on the matter.
With a lot of fragmented documentation, it was difficult to solve this.
My first solution was to create the daemon.json with:
{
  "hosts": [
    "unix:///var/run/docker.sock",
    "tcp://127.0.0.1:2376"
  ]
}
This did not work; I got the error docker[5586]: unable to configure the Docker daemon with file /etc/docker/daemon.json after trying to restart the daemon with service docker restart.
Note: There was more to the error that I failed to copy.
What this error meant is that at daemon startup there was a conflict between a command-line flag and the configuration in daemon.json.
When I looked into it with service docker status, this was the parent process: ExecStart=/usr/bin/docker daemon -H fd://.
This was strange because it differs from the configuration in /etc/init.d/docker, which I thought held the service configuration.
The strange part was that the file in init.d doesn't contain any reference to the daemon argument, nor to -H fd://.
After some research and a lot of searching through the system directories, I found the relevant directory (with help from the discussion in docker github issue #22339).
Solution
Edited ExecStart in /lib/systemd/system/docker.service to this new value:
/usr/bin/docker daemon
And created /etc/docker/daemon.json with:
{
  "hosts": [
    "fd://",
    "tcp://127.0.0.1:2376"
  ]
}
Finally, restarted the service with service docker start and now I get the "green light" from service docker status.
Tested the new configurations with:
$ docker run hello-world
Hello from Docker!
(...)
And,
$ curl http://127.0.0.1:2376/v1.23/info
[JSON]
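For reference, the DOCKER_OPTS from the question should map to roughly the following daemon.json (hosts, group and debug are all documented daemon.json keys; treat this as a sketch, and remember that the -H flag must still be removed from ExecStart as described above):
{
  "hosts": [
    "unix:///var/run/docker.sock",
    "tcp://0.0.0.0:2375"
  ],
  "group": "myuser",
  "debug": true
}
You can then point the docker client at the TCP endpoint to verify it, e.g. docker -H tcp://127.0.0.1:2375 info.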
I hope that this will help someone with a similar problem as mine! :)
I had the same problem, and in my eyes the easiest solution, which doesn't touch any existing files managed by the system update process, is to use a systemd drop-in:
Just create a file /etc/systemd/system/docker.service which overrides the specific part of the service in /lib/systemd/system/docker.service.
In this case the content of /etc/systemd/system/docker.service would be:
[Service]
ExecStart=/usr/bin/dockerd --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H=tcp://127.0.0.1:2375 -H=fd://
(You could even create a directory docker.service.d which contains multiple files to override different parameters.)
After adding the file you just run:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
The solution described at https://docs.docker.com/engine/admin/#troubleshoot-conflicts-between-the-daemonjson-and-startup-scripts works for me:
One notable example of a configuration conflict that is difficult to
troubleshoot is when you want to specify a different daemon address
from the default. Docker listens on a socket by default. On Debian and
Ubuntu systems using systemd, this means that a -H flag is always
used when starting dockerd. If you specify a hosts entry in the
daemon.json, this causes a configuration conflict (as in the above
message) and Docker fails to start.
To work around this problem, create a new file
/etc/systemd/system/docker.service.d/docker.conf with the following
contents, to remove the -H argument that is used when starting the
daemon by default.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
Note that the line with ExecStart= is actually required, otherwise it'll fail with the error:
docker.service: Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. Refusing.
After creating the file you must run:
sudo systemctl daemon-reload
sudo systemctl restart docker
For me, on Ubuntu 18.04.1 LTS with Docker 18.06.0-ce, it worked to create
/etc/systemd/system/docker.service.d/remote-api.conf
with the following content:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
then run sudo systemctl daemon-reload and sudo systemctl restart docker.
See the result by calling:
curl http://localhost:2376/info
You might need to configure a proxy if your Docker host is behind one.
To achieve this, put the following into the /etc/default/docker file:
http_proxy="http://85.22.53.71:8080/"
https_proxy="http://85.22.53.71:8080/"
HTTP_PROXY="http://85.22.53.71:8080/"
HTTPS_PROXY="http://85.22.53.71:8080/"
# below you can list some *.enterprise_domain.com as well
NO_PROXY="localhost,127.0.0.1,::1"
Or create
/etc/systemd/system/docker.service.d/remote-api.conf with the following content:
[Service]
Environment="HTTP_PROXY=http://<you_proxy_ip>:<port>"
Environment="HTTPS_PROXY=https://<you_proxy_ip>:<port>/"
Environment="NO_PROXY=localhost,127.0.0.1,::1"
I hope it helps someone...

Gunicorn listening always at http://127.0.0.1:8000

I have set up my django application on webfaction and now I am trying to move to using Gunicorn to serve my application. When I set up my files and config, everything seems to be working, except that it is always listening at 127.0.0.1:8000.
My configuration is as below.
supervisord.conf
[unix_http_server]
file=/home/devana/tmp/supervisor.sock
[supervisord]
logfile=/home/devana/tmp/supervisord.log
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/home/devana/webapps/devana/etc/supervisord.pid
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///home/devana/tmp/supervisor.sock
[include]
files = /home/devana/webapps/devana/etc/supervisord/*.ini
Supervisor.ini
[program:devana]
command=/home/devana/webapps/devana/scripts/start_server
directory=/home/devana/webapps/devana/csiop/
user=devana
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile = /home/devana/tmp/gunicorn_supervisor.log
start_server
NAME="devana" # Name of the application
DJANGODIR=/home/devana/webapps/devana/csiop # Django project directory
SOCKFILE=/home/devana/webapps/devana/run/gunicorn.sock # we will communicate using this unix socket
USER=devana # the user to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=devana.settings.production # which settings should Django use
DJANGO_WSGI_MODULE=devana.wsgi # WSGI module name
BIND=2.14.5.58:31148 # IP and port number provided by webfaction
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/devana/webapps/devana/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER \
--log-level=debug \
--bind=$BIND
Now when I run the '../bin/supervisord' command, gunicorn starts, but it listens at 127.0.0.1:8000 instead of the bind address I provided, and I am not able to open my website using http://mywebsite.com.
Could someone point out what I am doing wrong?
I found the problem. Instead of using a single BIND variable containing both IP and port, I separated them into two different variables and used --bind=$IP:$PORT. That seems to work; a sketch of the change is below.
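That is, something along these lines in start_server (values taken from the question):
IP=2.14.5.58   # IP provided by webfaction
PORT=31148     # port provided by webfaction
exec /home/devana/webapps/devana/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
    --name $NAME \
    --workers $NUM_WORKERS \
    --user=$USER \
    --log-level=debug \
    --bind=$IP:$PORT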
If gunicorn listens on 127.0.0.1:8000, it is probably the default being applied because the supplied -b / --bind parameter could not be applied.
In my case, I ran gunicorn via Docker and had the following directive in my Dockerfile to run as default command:
CMD ["gunicorn", "config.wsgi", "--bind 0.0.0.0:8000"] # listening on 127.0.0.1:8000
CMD ["gunicorn", "config.wsgi", "--bind", "0.0.0.0:8000"] # listening on 0.0.0.0:8000
I'm not sure what was broken in your case but if someone from the future stumbles upon this: check how the --bind value is passed to gunicorn.