While at the (excellent!) Polymer Summit in London, as part of the codelabs I ran "polymer serve" and got the application template up and running: http://localhost:8080/
Great! But how do I stop the server? It's continually running and survives a reboot. I need to get on with another project :P
I'm on Windows (Windows 10, 64-bit). I have tried the usual methods I've used before to stop Node servers (is Polyserve Node-based?).
If I run netstat -an, there is nothing listed related to 8080.
If I run
netstat -ano | find "LISTENING" | find "8080"
nothing is returned.
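For reference, the usual method I mean is finding the PID bound to the port and killing it; roughly this, assuming something were actually listening (the PID below is hypothetical):
netstat -ano | findstr ":8080"
taskkill /PID 12345 /F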
Answering my own question: I guess this is just down to aggressive caching by the service worker, since a hard reload refreshes as expected.
Lots of potential for confusion with service worker lifecycle!
Edit: "unregister service worker" in devtools did the trick!
I am following what's suggested in this article to change the iptables rules in order to allow incoming connections. For some reason, the qemu hook does not run. As a simple test, I made the hook script write to a file (echo 'some output' > someweirdfilename) before any VM-name checks, so I could check for the file's existence afterwards. It looks like the hook is not executed at all. I made sure libvirtd.service was restarted, restarted the guest as well, and eventually tried a complete reboot. All attempts gave the same result. I'm running libvirt 7.6.0 on Fedora 35. Does anyone have any suggestions for troubleshooting?
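For reference, my test hook is roughly this minimal sketch (assuming the standard hook path /etc/libvirt/hooks/qemu; libvirt passes the guest name and operation as arguments, and the script must be executable, e.g. chmod +x, or it will silently never run):
#!/bin/sh
# /etc/libvirt/hooks/qemu -- invoked as: qemu <guest_name> <operation> <sub_operation> ...
echo "hook ran: guest=$1 op=$2 $(date)" >> /tmp/qemu-hook.log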
I am creating a Flask app, and I'm using Nginx and Gunicorn inside a virtual environment. When I start Gunicorn with gunicorn app:app, everything works fine. Then, when I activate Supervisor to keep Gunicorn running, it gives me a 500 error. I can see in my log in /var/log/ that the error happens when I try to open a file that should have been created after subprocess.run(command, capture_output=True, shell=True), so this line is not being executed correctly.
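A minimal sketch of how I'd make that failure visible instead of silent, by checking the return code (the command string here is hypothetical):
import subprocess

command = "/usr/local/bin/somecommand --out /srv/app/output.txt"  # hypothetical
result = subprocess.run(command, capture_output=True, shell=True)
if result.returncode != 0:
    # under Supervisor the PATH, user, and working directory differ from an
    # interactive shell, so commands that work in the terminal can fail here
    raise RuntimeError(f"command failed ({result.returncode}): {result.stderr.decode()}")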
Is there an alternative to Supervisor to keep my app running when my PuTTY session is closed?
Thanks.
I found the answer here.
https://docs.gunicorn.org/en/stable/deploy.html
It says that one good option is using Runit.
EDIT:
I ended up using Gunicorn's --daemon flag. It does something similar and makes everything much simpler.
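For example, a minimal invocation (the bind address is an assumption):
gunicorn --daemon --bind 127.0.0.1:8000 app:app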
We have a requirement to monitor and try to restart our Gunicorn/Django app if it goes down. We're using gunicorn 20.0.4.
I have the following nrs.service running fine with systemd. I'm trying to figure out whether it's possible to integrate systemd's watchdog capabilities with Gunicorn. Looking through the source, I don't see sd_notify("WATCHDOG=1") being called anywhere, so I'm thinking that no, Gunicorn doesn't know how to keep systemd aware that it's up (it calls sd_notify("READY=1...") at startup, but in its run loop no signal is sent saying it's still running).
Here's the nrs.service file. I have commented out the watchdog vars because with them enabled the service goes into a failed state shortly after it starts.
[Unit]
Description=Gunicorn instance to serve NRS project
After=network.target
[Service]
WorkingDirectory=/etc/nrs
Environment="PATH=/etc/nrs/bin"
ExecStart=/etc/nrs/bin/gunicorn --error-logfile /etc/nrs/logs/gunicorn_error.log --certfile=/etc/httpd/https_certificate/nrs.cer --keyfile=/etc/httpd/https_certificate/server.key --access-logfile /etc/nrs/logs/gunicorn_access.log --capture-output --bind=nrshost:8800 anomalyalerts.wsgi
#WatchdogSec=15s
#Restart=on-failure
#StartLimitInterval=1min
#StartLimitBurst=4
[Install]
WantedBy=multi-user.target
So systemd's watchdog is doing its thing; it just looks like Gunicorn doesn't support it out of the box. I'm not very familiar with "monkey-patching", but I'm thinking that if we want to use this method of monitoring, I'm going to have to implement some custom code (see the sketch below)? The other thought was just to have a watch command check the service and try to restart it, which might be easier.
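For reference, this is the kind of custom code I have in mind: an untested sketch of a gunicorn.conf.py server hook (assuming the python-systemd bindings are installed) that pings the watchdog from the master process at half of WatchdogSec:
# gunicorn.conf.py (load with: gunicorn -c gunicorn.conf.py ...)
import threading
import time

def when_ready(server):
    # called in the master process once the arbiter is up
    try:
        from systemd import daemon
    except ImportError:
        return  # python-systemd not available; do nothing

    def _watchdog_loop():
        while True:
            daemon.notify("WATCHDOG=1")
            time.sleep(7.5)  # half of WatchdogSec=15s

    threading.Thread(target=_watchdog_loop, daemon=True).start()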
Thanks
Jason
monitor and try to restart our Gunicorn/Django app if it goes down
systemd's watchdog will not help in the described case. The reason is that the watchdog is intended to monitor the main service process, which does not run your app directly.
Gunicorn's master process, which is the main service process from systemd's perspective, is a loop that manages the worker processes. Your app runs inside the worker processes, so if anything happens there, it's the worker process that should be restarted, not the master process.
Restarting worker processes is handled by Gunicorn automatically (see the timeout setting). As for the main service process, in the rare case that it dies, the Restart=on-failure option can restart it even without a watchdog (see the docs for details on how it behaves).
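In other words, uncommenting the Restart lines from the unit above (while leaving WatchdogSec out) should cover the master process; for example (RestartSec is optional and the value arbitrary):
[Service]
Restart=on-failure
RestartSec=5s
StartLimitInterval=1min
StartLimitBurst=4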
I've followed the directions here, but when I run ./svc.sh run, I receive the following error:
Could not find domain for port (Aqua)
I'm SSH-ing into a box to run this command. It seems to work fine when I'm not in a headless session, but I need this to run headless and as a background service. Has anyone else run into this?
I was able to resolve this, as addressed here:
sudo cp {/Users/xxx/Library/LaunchAgents,/Library/LaunchDaemons}/your.plist
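(The brace expansion is just shorthand for copying /Users/xxx/Library/LaunchAgents/your.plist to /Library/LaunchDaemons/your.plist.)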
I was able to reboot my machine without logging in and see the runner active.
The solution by #futbolpal isn't good, because LaunchDaemons don't have access to the keychain.
It's better to copy to LaunchAgents instead:
sudo cp {/Users/xxx/Library/LaunchAgents,/Library/LaunchAgents}/your.plist
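You can then load it without rebooting (launchctl load is the legacy but still-working syntax):
launchctl load /Library/LaunchAgents/your.plist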
I keep getting this error on my Marathon dashboard:
Framework with ID 'a5a96e8c-c3f2-4591-8eb3-43f8dc902585-0001' does not exist on slave with ID '9959ba51-f6f7-448f-99d2-289767f12179-S2'.
The way to make this error occur is to click "Sandbox" next to a task on the main Marathon dashboard.
The path looks something like this:
http://mesos.dev.internal/#/slaves/9959ba51-f6f7-448f-99d2-289767f12179-S2/frameworks/a5a96e8c-c3f2-4591-8eb3-43f8dc902585-0001/executors/rabbitmq.6316bf0a-d089-11e5-b895-fa163e196ca3/browse
However, if I go to the slave through the slave panel and click the framework from there, I am able to access the sandbox. The link in this case looks like the following:
http://mesos.dev.internal/#/slaves/9959ba51-f6f7-448f-99d2-289767f12179-S2/browse?path=%2Ftmp%2Fmesos%2Fslaves%2Fc223b6b1-cef8-4599-8cea-b402bf20afc5-S0%2Fframeworks%2F20160108-205802-16842879-5050-1210-0001%2Fexecutors%2Frabbitmq.91b8bbf6-ceba-11e5-8047-0242ffdabb3e%2Fruns%2Fc66eb4d5-ea6d-451d-982f-6a0d29b25441
Any ideas on what I have misconfigured?
The Mesos Web UI does not proxy logs through the mesos-master (although it would be nice). Basically, you need to be able to resolve the slave's name from your browser (computer), and port 5051 needs to be open for you:
$ nc -z -w5 mesos.dev.internal 5051; echo $?
0 # port is open
It's not a good idea to leave Mesos ports open to the public, so you can either:
connect via VPN
whitelist your public IP on all slaves
use the CLI instead of the Web UI
Using the CLI is quite easy once you set the master's URI. You can install it:
pip install mesos.cli mesos.interface
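If I remember right, the master's URI goes in ~/.mesos.json; something like the following (treat the exact keys as an assumption and check the mesos.cli README):
{
    "profile": "default",
    "default": {
        "master": "mesos.dev.internal:5050"
    }
}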
Then you can list all tasks using mesos ps, or fetch stdout:
mesos tail -f rabbitmq.6316bf0a-d089-11e5-b895-fa163e196ca3
and stderr:
mesos tail -f rabbitmq.6316bf0a-d089-11e5-b895-fa163e196ca3 stderr
Note that mesos-cli is no longer developed; similar features, and much more, should be available in Mesosphere's DCOS CLI.