Gunicorn not reloading code completely with the HUP signal

I'm running gunicorn and I use the HUP signal to reload it gracefully. However, changes in models.py don't seem to be picked up by the reload. To be specific, I do:
sudo kill -HUP `cat masterpid`
I also run gunicorn under supervisor, so I end up doing a hard restart through supervisor instead, but that is not graceful and causes a second or two of downtime (plus possibly some broken requests). Does anyone have a solution for this?

Are you using run_gunicorn (now deprecated)?
https://github.com/benoitc/gunicorn/issues/536
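If so, one thing worth trying is starting Gunicorn directly against your project's WSGI module instead of going through the deprecated management command; on HUP the master then respawns workers, which re-import your code. A rough sketch (the module name, PID file path, and worker count below are placeholders):
gunicorn myproject.wsgi:application --pid /var/run/gunicorn/masterpid --workers 3
# later, to reload gracefully:
kill -HUP $(cat /var/run/gunicorn/masterpid)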


Is there an alternative when using supervisor with Gunicorn?

I am creating a Flask app using Nginx and Gunicorn inside a virtual environment. When I start Gunicorn with gunicorn app:app, everything works fine. But when I bring in Supervisor to keep Gunicorn running, it gives me a 500 error. From my log in /var/log/ I can see the error happens when I try to open a file that should have been created after subprocess.run(command, capture_output=True, shell=True), so this line is not being executed correctly.
Is there an alternative to Supervisor to keep my app running after my PuTTY session is closed?
Thanks.
I found the answer here.
https://docs.gunicorn.org/en/stable/deploy.html
It says that one good option is using Runit.
EDIT:
I ended up using Gunicorn's --daemon flag instead. It achieves a similar result and makes everything much simpler.
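For anyone looking for the exact shape of that, here is a minimal sketch of a daemonized Gunicorn invocation (the bind address, PID file, and log paths are just example values):
gunicorn app:app --daemon --bind 127.0.0.1:8000 --pid /tmp/gunicorn.pid --access-logfile /var/log/gunicorn/access.log --error-logfile /var/log/gunicorn/error.log
The --pid file comes in handy later for stopping (kill -TERM) or gracefully reloading (kill -HUP) the daemonized process.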

gUnicorn with systemd Watchdog

We have a requirement to monitor and try to restart our gUnicorn/Django app if it goes down. We're using gunicorn 20.0.4.
I have the following nrs.service running fine with systemd. I'm trying to figure out if it's possible to integrate systemd's watchdog capabilities with gUnicorn. Looking through the source, I don't see sd_notify("WATCHDOG=1") being called anywhere, so I'm thinking that no, Gunicorn doesn't know how to keep systemd aware that it's up (it calls sd_notify("READY=1...") at startup, but in its run loop no signal is sent saying it's still running).
Here's the nrs.service file. I have commented out the watchdog vars because they obviously send my service into a failed state shortly after it starts.
[Unit]
Description=Gunicorn instance to serve NRS project
After=network.target
[Service]
WorkingDirectory=/etc/nrs
Environment="PATH=/etc/nrs/bin"
ExecStart=/etc/nrs/bin/gunicorn --error-logfile /etc/nrs/logs/gunicorn_error.log --certfile=/etc/httpd/https_certificate/nrs.cer --keyfile=/etc/httpd/https_certificate/server.key --access-logfile /etc/nrs/logs/gunicorn_access.log --capture-output --bind=nrshost:8800 anomalyalerts.wsgi
#WatchdogSec=15s
#Restart=on-failure
#StartLimitInterval=1min
#StartLimitBurst=4
[Install]
WantedBy=multi-user.target
So the systemd watchdog is doing its thing; it just looks like Gunicorn doesn't support it out of the box. I'm not very familiar with 'monkey-patching', but I'm thinking that if we want to use this method of monitoring, I'm going to have to implement some custom code? My other thought was just to have a watch command check the service and try to restart it, which might be easier.
Thanks
Jason
monitor and try to restart our gUnicorn/Django app if it goes down
systemd's watchdog will not help in the described case. The reason is that the watchdog is intended to monitor the main service process, which does not run your app directly.
Gunicorn's master process, which is the main service process from systemd's perspective, is a loop that manages the worker processes. Your app runs inside the worker processes, so if anything goes wrong there, it is the worker process that should be restarted, not the master process.
Restarting worker processes is handled by Gunicorn automatically (see the timeout setting). As for the main service process, in the rare case that it dies, the Restart=on-failure option can restart it even without a watchdog (see the docs for details on how it behaves).
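If all you need is for the master process to come back automatically, a sketch of the relevant [Service] lines (based on the unit file above, with the watchdog settings dropped) could look like this:
[Service]
# ...existing WorkingDirectory/Environment/ExecStart lines...
Restart=on-failure
RestartSec=5
StartLimitInterval=1min
StartLimitBurst=4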

Frankenserve... how to stop polyserve?

While at the (excellent!) Polymer Summit in London, as part of the codelabs I ran "polymer serve" and got the application template up and running: http://localhost:8080/
Great! But how do I stop the server? It's continually running and survives a reboot. I need to get on with another project :P
I'm on Windows (W10 64-bit). I have tried the usual methods I've used before to stop Node servers (is polyserve Node-based?).
If I run netstat -an, nothing related to port 8080 is listed.
If I run
netstat -ano | find "LISTENING" | find "8080"
nothing is returned.
Answering my own question, I guess this is just down to the hard caching by the service worker, as a hard reload refreshes as expected.
Lots of potential for confusion with service worker lifecycle!
Edit: "unregister service worker" in devtools did the trick!
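For anyone who would rather do this from code than from the Application panel, running something like the following in the DevTools console should unregister every service worker for the origin (a normal reload then talks to the live server again):
navigator.serviceWorker.getRegistrations().then(function (registrations) {
    // unregister each service worker registered for this origin
    registrations.forEach(function (registration) {
        registration.unregister();
    });
});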

MySQL container stuck at "Restarting..." on Dokku

I tried to create a new MySQL database on my Dokku container.
Using
dokku mysql:create bookmarks
The container has been created, but it seems it is unable to start.
The command
# dokku mysql:list
NAME VERSION STATUS EXPOSED PORTS LINKS
bookmarks mysql:5.6.26 restarting - -
I am unable to stop, restart or destroy this container.
# dokku mysql:destroy bookmarks
! WARNING: Potentially Destructive Action
! This command will destroy bookmarks MySQL service.
! To proceed, type "bookmarks"
> bookmarks
-----> Deleting bookmarks
Deleting container data
! Service is already stopped
Removing container
Error response from daemon: Conflict, You cannot remove a running container. Stop the container before attempting removal or use -f
Error: failed to remove containers: [dokku.mysql.bookmarks]
I also tried to reboot the entire server, without any success.
To me, it seems like something went wrong during the creation of this container that leaves the system unable to start it. At the same time I am unable to stop or restart it, and since I cannot stop it I cannot remove it and start from scratch.
# dokku mysql:stop bookmarks
! Service is already stopped
# dokku mysql:restart bookmarks
! Service is already stopped
-----> Starting container
No container exists for bookmarks
-----> Please call dokku ps:restart on all linked apps
The error message says something about "forcing" the process, but I can't find anywhere how to use it.
Does anyone have any idea?
Thank you in advance,
Simone
So, finally, with help from the people at DigitalOcean, I've been able to stop and destroy that faulty container.
Here is what I did:
Check Docker processes running
docker ps -a
Identify the process that is causing the problem, in my case it was:
8549c8ec4e53 mysql:5.6.26 "/entrypoint.sh mysql" 17 hours ago Restarting (1) 2 hours ago 3306/tcp dokku.mysql.bookmarks
Kill the Docker process
docker kill 8549c8ec4e53
Remove the container from Dokku
dokku mysql:destroy bookmarks
Hope this answer helps others having similar problems.
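As a side note, the "-f" hinted at in the original error message is Docker's force-remove flag; something like the following should also get rid of a stuck container in one step (the dokku destroy afterwards is still useful to clean up the service's metadata):
docker rm -f dokku.mysql.bookmarks
dokku mysql:destroy bookmarks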

CGI Bash script to spawn daemon process

I am working on a project to stream HDTV from a personal computer to devices supporting HTTP Live Streaming (think iOS devices and some android). I have the formatting of the video and the streaming aspects down. What I am now trying to implement is an easy way to change the channel remotely.
My current method involves connecting via SSH to kill the old stream and begin a new one. This works, but isn't pretty. I want something my Mom or girlfriend could use. I decided I would build an HTML5 app that handles the channel switching through CGI scripts. I currently have a parent process with a form that calls a child process to decide whether the stream is running, and then a subchild process to actually tune the stream.
As I am streaming live video from my computer, I need the subchild process to run indefinitely. Unfortunately, it seems that when my parent process finishes, the background process started by the subchild terminates.
I have tried a simple &, nohup, setsid, and daemon. daemon runs the cleanest but still terminates when the parent finishes, even with the -r flag. I'll place my code below; maybe someone will have an idea on how I could implement this, or a better way to achieve the same thing? Thanks! (Oh, and I know killing vlc is not a pretty way to stop the stream; if you have a better way I'm all ears.)
parent invoking child:
----------------------
./ChangeChannel.sh $channel #passed from form submission
child (ChangeChannel.sh):
-------------------------
#!/bin/bash
directory=./Channels/
newchannel=$1
if [ $(pidof vlc) ]
then
    sudo kill $(pidof vlc)
fi
daemon -r -v -d $directory$newchannel &
subchild example:
-----------------
vlc atsc://frequency=605029000 --intf=dummy --sout-transcode-audio-sync :live-cache=3000 --sout='#transcode{vcodec=h264,vb=150,fps=25,width=480,scale=1,venc=x264{aud,profile=baseline,level=30,keyint=15,bframes=0,ref=1},acodec=aac,ab=40,channels=2,samplerate=22050}:duplicate{dst=std{mux=ts,dst=-,access=livehttp{seglen=16,delsegs=true,numsegs=10,index=/var/www/stream/live.m3u8,index-url=content/live-######.ts},mux=ts{use-key-frames},dst=/var/www/stream/content/live-######.ts,ratecontrol=true}}'
How can I keep the subchild from terminating? I'm running Apache on Ubuntu 12.04.
I got it!
For anyone interested in how, I changed my tactics to use nohup, &, disown, and > /dev/null 2>&1.
Honestly, still not quite sure how I got it working... just a lot of trial and error with some educated guesses. My code follows:
parent invocation:
------------------
nohup ./ChangeChannel.sh $channel & disown
child invocation:
-----------------
sudo nohup su user $directory$newchannel &> /dev/null 2>&1
subchild invocation:
--------------------
vlc atsc://frequency=605029000 --intf=dummy --sout-transcode-audio-sync :live-cache=3000 --sout='#transcode{vcodec=h264,vb=150,fps=25,width=480,scale=1,venc=x264{aud,profile=baseline,level=30,keyint=15,bframes=0,ref=1},acodec=aac,ab=40,channels=2,samplerate=22050}:duplicate{dst=std{mux=ts,dst=-,access=livehttp{seglen=16,delsegs=true,numsegs=10,index=/var/www/stream/live.m3u8,index-url=content/live-######.ts},mux=ts{use-key-frames},dst=/var/www/stream/content/live-######.ts,ratecontrol=true}}' & disown
ChangeChannel.sh uses sudo to execute su via CGI in order to run vlc as a user other than root. It seems a little messy, but hell, it works.
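For anyone trying to reproduce this, the core of the trick is that the web server keeps the CGI request open as long as something still holds its stdout/stderr pipes, so the long-running job has to be fully detached from them. A minimal sketch, assuming a hypothetical long-running stream.sh in place of the vlc command:
#!/bin/bash
# Detach a long-running command from the CGI request:
# redirect stdin/stdout/stderr so Apache has nothing left to wait on,
# then background the job and drop it from the shell's job table.
nohup ./stream.sh </dev/null >/dev/null 2>&1 &
disown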