How to avoid writing output to the nohup.out file when running gunicorn - gunicorn

I am running my script with gunicorn in the background, like this:
nohup gunicorn -b 127.0.0.1:12345 CryptoDepth:app.server &
It produces a log file, nohup.out. However, I don't want this, since it exhausts my disk. Does anyone know how to run it without any output? Thanks.
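One approach (a sketch; both variants discard all log output) is either to redirect the output yourself so nohup never creates nohup.out, or to let gunicorn daemonize itself and point its logs at /dev/null:
# Redirect stdout and stderr so nohup has nothing to capture
nohup gunicorn -b 127.0.0.1:12345 CryptoDepth:app.server > /dev/null 2>&1 &
# Or let gunicorn detach itself and discard its own logs
gunicorn -b 127.0.0.1:12345 CryptoDepth:app.server --daemon --access-logfile /dev/null --error-logfile /dev/null
Note that throwing logs away entirely makes failures hard to diagnose; an alternative is to keep the error log and rotate it with logrotate so it can't fill the disk.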

Related

How to store PM2's configuration files under /etc/pm2 like Nginx's

I'd like to have PM2's configuration files structured under /etc/pm2, like Nginx's:
/etc/pm2
/etc/pm2/pm2.conf
/etc/pm2/sites-enabled/*.json
/etc/pm2/sites-available/*.json
The reason for that is to keep all the configuration structured in a consistent way, to make it easy to manage PM2 users' permissions, and to make it easy to restart the processes (similar to sudo service nginx restart/reload). In addition, I'd like the server to automatically start all the processes when the machine is rebooted.
Is there an official/recommended way to accomplish something similar to that?
If not, how can I create a main /etc/pm2/pm2.conf that will include configuration files /etc/pm2/sites-enabled/*?
One option is to point PM2's home directory (where it keeps its runtime state) at /etc/pm2:
$ export PM2_HOME='/etc/pm2'
$ pm2 list
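For the start-on-reboot part, PM2 ships its own startup-script generator, and it can start apps from JSON declarations, so a sites-enabled/*.json layout is workable. A sketch (assuming /etc/pm2/sites-enabled/example.json is one of your per-site files):
# Keep PM2's runtime state under /etc/pm2
export PM2_HOME=/etc/pm2
# Start each site from its JSON app declaration
pm2 start /etc/pm2/sites-enabled/example.json
# Generate and install an init/systemd script so PM2 itself starts at boot
pm2 startup
# Freeze the currently running process list; it is resurrected on reboot
pm2 save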

Is there a good way to detect that MySQL is "ready"?

I am not a MySQL expert.
I have a script that installs MySQL, starts mysqld, and then uses mysql to do some initialization.
Currently, in order to have this work, I enter into a loop that does roughly this:
mysqld_safe ... &  # ampersand to start in the background so we can continue
while [ ! -S /tmp/mysql.sock ]; do
    sleep 0.1      # poll every 100 ms
done
mysql -u root ... < initialize.sql
This seems to work (!) but has multiple problems:
polling smells funny,
I am not smart enough about MySQL to know whether looking at that hard-coded pathname /tmp/mysql.sock is smart at all.
And yet it's a lot easier than trying to (for example) consume and parse the stdout (or is it stderr?) of mysqld_safe to figure out whether the server has started.
My narrow question is whether there's a way to issue a blocking start of mysqld: can I issue any command that blocks until the database has started, and then exits (and detaches, maybe leaving a PID file), and has a companion stop command? (Or maybe allows me to read the PID file and issue my own SIGTERM?)
My broader question is, am I on the right track, or is there some totally different and easier (to be "easier" for me it would have to be lightweight; I'm not that interested in installing a bunch of tools like Puppet or DbMaintain/Liquibase or whatever) approach to solving the problem I articulated? That is, starting with a .gz file containing MySQL, install a userland MySQL and initialize a database?
Check out the init shell script for mysqld. They do polling, in a function called wait_for_pid().
That function checks for the existence of the pid file, and if it doesn't exist yet, sleeps for 1 whole second, then tries again. There's a timeout that defaults to 900 seconds, at which point it gives up waiting and concludes that it's not going to start (and outputs a totally unhelpful message "The server quit without updating PID file").
You don't have to guess where the pid file is. If you're starting mysqld_safe, you should tell it where it should create the pid file, using the --pid-file option.
One tricky part is that the pid file isn't created until mysqld initializes. This can take a while if it has to perform crash recovery using the InnoDB log files, and the log files are large. So it could happen that 900 seconds of timeout isn't long enough, and you get a spurious error, even though mysqld successfully starts a moment after the timeout.
You can also read the error log or the console output of mysqld. It should eventually output a line that says "ready for connections."
To read until you get this line, and then terminate the read, you could use something like:
tail -f $error_log | sed -e '/ready for connections/q'
(where $error_log is the path to mysqld's error log file).
You can use:
mysqladmin -h localhost status
(or mysqladmin ping, which only succeeds once the server is answering), or use a pure-bash solution like wait-for-it:
./wait-for-it.sh -h localhost -p 3306 -t 10
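Putting those pieces together, a blocking start can be approximated with a small wrapper (a sketch: the socket and pid-file paths are assumptions to adapt to your install):
#!/bin/bash
# start-mysql-blocking.sh -- start mysqld, block until it answers, then initialize
PIDFILE=/tmp/mysqld.pid   # assumed location; pass whatever suits your layout
SOCKET=/tmp/mysql.sock    # assumed location; must match your my.cnf
mysqld_safe --pid-file="$PIDFILE" --socket="$SOCKET" &
# mysqladmin ping only succeeds once the server actually accepts connections,
# which avoids guessing at when the socket file appears
until mysqladmin --socket="$SOCKET" ping > /dev/null 2>&1; do
    sleep 0.1
done
mysql -u root --socket="$SOCKET" < initialize.sql
# companion stop: read the pid file back and send SIGTERM
kill "$(cat "$PIDFILE")"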

How to get AWStats to generate static HTML files?

I want to get AWStats running on my webserver that runs Debian 4.4.5-8 with Apache 2.
There are several websites that all have their own configuration file, similar to this:
Include "/etc/awstats/awstats.model.conf"
LogFile="/var/customers/logs/myname-example.com-access.log"
LogType=W
LogFormat = 1
LogSeparator=" "
SiteDomain="example.com"
HostAliases="*.example.com"
DirData="/www/myname/awstats/example.com/"
What I expect is that HTML files are written to /www/myname/awstats/example.com/, which I can then access through Apache. However, when I run /usr/share/awstats/tools/buildstatic.sh, .txt files are written to that directory, and the HTML files that I want are written to /var/cache/awstats instead. The error file in /tmp remains empty.
Why is this happening and how do I make it work the way I want?
DirData is not supposed to be read directly by the web server; it is used by awstats.pl.
The fact is that /var/cache/awstats is hardcoded in buildstatic.sh, so you have to change the two lines mentioning it:
mkdir -p /var/cache/awstats/$c/$Y/$m/
and
-dir=/var/cache/awstats/$c/$Y/$m/ >$TMPFILE 2>&1
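If you'd rather not edit the script by hand, a one-liner can patch both occurrences (a sketch; it assumes /www/myname/awstats is the target root, and -i.bak leaves a backup copy of the script):
sed -i.bak 's|/var/cache/awstats|/www/myname/awstats|g' /usr/share/awstats/tools/buildstatic.sh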

CGI Bash script to spawn daemon process

I am working on a project to stream HDTV from a personal computer to devices supporting HTTP Live Streaming (think iOS devices and some android). I have the formatting of the video and the streaming aspects down. What I am now trying to implement is an easy way to change the channel remotely.
My current method involves connecting via SSH to kill the old stream and begin a new stream. This works, but isn't pretty. I want something my Mom or girlfriend could use. I decided I would build an HTML5 app that would issue the channel switching over CGI scripts. I currently have a parent process with a form that calls a child process to decide if the stream is running and then a subchild process to actually tune the stream.
As I am streaming live video from my computer, I need the subchild process to run indefinitely. Unfortunately, it seems that when my parent process finishes, the background process started by the subchild terminates.
I have tried a simple &, using nohup, setsid, and daemon. daemon runs cleanest but still terminates when the parent finishes, even with the -r flag. I'll place my code below; maybe someone will have an idea on how I could implement this, or a better way to achieve the same thing? Thanks! (Oh, and I know killing vlc is not a pretty way to stop the stream; if you have a better way, I'm all ears.)
parent invoking child:
----------------------
./ChangeChannel.sh $channel #passed from form submission
child (ChangeChannel.sh):
-------------------------
#!/bin/bash
directory=./Channels/
newchannel=$1
if pidof vlc > /dev/null
then
    sudo kill $(pidof vlc)
fi
daemon -r -v -d $directory$newchannel &
subchild example:
-----------------
vlc atsc://frequency=605029000 --intf=dummy --sout-transcode-audio-sync :live-cache=3000 --sout='#transcode{vcodec=h264,vb=150,fps=25,width=480,scale=1,venc=x264{aud,profile=baseline,level=30,keyint=15,bframes=0,ref=1},acodec=aac,ab=40,channels=2,samplerate=22050}:duplicate{dst=std{mux=ts,dst=-,access=livehttp{seglen=16,delsegs=true,numsegs=10,index=/var/www/stream/live.m3u8,index-url=content/live-######.ts},mux=ts{use-key-frames},dst=/var/www/stream/content/live-######.ts,ratecontrol=true}}'
How can I keep the subchild from terminating? I'm running Apache on Ubuntu 12.04.
I got it!
For anyone interested in how, I changed my tactics to use nohup, &, disown, and > /dev/null 2>&1.
Honestly, I'm still not quite sure how I got it working... just a lot of trial and error with some educated guesses. My code follows:
parent invocation:
------------------
nohup ./ChangeChannel.sh $channel & disown
child invocation:
-----------------
sudo nohup su user $directory$newchannel &> /dev/null 2>&1
subchild invocation:
--------------------
vlc atsc://frequency=605029000 --intf=dummy --sout-transcode-audio-sync :live-cache=3000 --sout='#transcode{vcodec=h264,vb=150,fps=25,width=480,scale=1,venc=x264{aud,profile=baseline,level=30,keyint=15,bframes=0,ref=1},acodec=aac,ab=40,channels=2,samplerate=22050}:duplicate{dst=std{mux=ts,dst=-,access=livehttp{seglen=16,delsegs=true,numsegs=10,index=/var/www/stream/live.m3u8,index-url=content/live-######.ts},mux=ts{use-key-frames},dst=/var/www/stream/content/live-######.ts,ratecontrol=true}}' & disown
ChangeChannel.sh uses sudo to execute su via CGI in order to run vlc as a user other than root. It seems a little messy, but hell, it works.
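For reference, the usual way to fully detach a CGI-spawned worker is to give it its own session and cut every inherited file descriptor, since Apache waits on the child's stdout/stderr pipes. A sketch (stream.sh is a hypothetical stand-in for the vlc invocation):
# setsid puts the child in a new session, out of reach of Apache's cleanup;
# redirecting all three descriptors stops CGI from waiting on the pipes
setsid ./stream.sh < /dev/null > /dev/null 2>&1 &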

Redirect output to different directories for sun grid engine array jobs

I'm running a lot of jobs with Sun Grid Engine. Since there are so many jobs (~100,000), I would like to use array jobs, which seem to be easier on the queue.
Another problem is that each job produces an stdout and an stderr file, which I need for tracking errors. If I define them as in qsub -t 1-100000 -o outputdir -e errordir, I will end up with directories containing 100,000 files each, which is too many.
Is there a way to have each job write its output file to a subdirectory (say, a directory named after the first 2 characters of the job ID, which are random hex letters; or the job number modulo 1000, or something of that sort)?
Thanks
I can't think of a good way to do this with qsub, as there are no programmatic interfaces into the -o and -e options. There is, however, a way to accomplish what you want.
Run your qsub with -o and -e pointing to /dev/null. Make the command you run some type of wrapper that redirects its own stdout and stderr to files in whatever fashion you want (i.e., your broken-down directory structure) before it execs the real job.
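A minimal sketch of such a wrapper (assuming SGE exports JOB_ID and SGE_TASK_ID into the job environment, and using the modulo-1000 bucketing from the question; outputdir and errordir are placeholders):
#!/bin/bash
# wrapper.sh -- bucket this task's output, then exec the real job
bucket=$(( SGE_TASK_ID % 1000 ))
mkdir -p "outputdir/$bucket" "errordir/$bucket"
exec "$@" > "outputdir/$bucket/$JOB_ID.$SGE_TASK_ID.out" \
         2> "errordir/$bucket/$JOB_ID.$SGE_TASK_ID.err"
It would then be submitted along the lines of qsub -t 1-100000 -o /dev/null -e /dev/null wrapper.sh real_job args.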