CGI Bash script to spawn daemon process

I am working on a project to stream HDTV from a personal computer to devices supporting HTTP Live Streaming (think iOS devices and some Android). I have the formatting of the video and the streaming aspects down. What I am now trying to implement is an easy way to change the channel remotely.
My current method involves connecting via SSH to kill the old stream and begin a new stream. This works, but isn't pretty. I want something my Mom or girlfriend could use. I decided I would build an HTML5 app that would issue the channel switching over CGI scripts. I currently have a parent process with a form that calls a child process to decide if the stream is running and then a subchild process to actually tune the stream.
As I am streaming live video from my computer I need the subchild process to run indefinitely. Unfortunately it seems that when my parent process is finished the background process started in the subchild process terminates.
I have tried a simple &, nohup, setsid, and daemon. daemon runs cleanest but still terminates when the parent finishes, even with the -r flag. I'll place my code below, and maybe someone will have an idea on how I could implement this, or a better way to achieve the same thing? Thanks! (Oh, and I know killing VLC is not a pretty way to stop the stream; if you have a better way, I'm all ears.)
parent invoking child:
----------------------
./ChangeChannel.sh "$channel" # passed from form submission
child (ChangeChannel.sh):
-------------------------
#!/bin/bash
directory=./Channels/
newchannel=$1

# test via exit status; [ $(pidof vlc) ] breaks when multiple PIDs are returned
if pidof vlc > /dev/null; then
    sudo kill $(pidof vlc)
fi

daemon -r -v -d "$directory$newchannel" &
subchild example:
-----------------
vlc atsc://frequency=605029000 --intf=dummy --sout-transcode-audio-sync :live-cache=3000 --sout='#transcode{vcodec=h264,vb=150,fps=25,width=480,scale=1,venc=x264{aud,profile=baseline,level=30,keyint=15,bframes=0,ref=1},acodec=aac,ab=40,channels=2,samplerate=22050}:duplicate{dst=std{mux=ts,dst=-,access=livehttp{seglen=16,delsegs=true,numsegs=10,index=/var/www/stream/live.m3u8,index-url=content/live-######.ts},mux=ts{use-key-frames},dst=/var/www/stream/content/live-######.ts,ratecontrol=true}}'
How can I keep the subchild from terminating? (Running Apache on Ubuntu 12.04.)

I got it!
For anyone interested in how: I changed my tactics to use nohup, &, disown, and > /dev/null 2>&1.
Honestly, I'm still not quite sure how I got it working; just a lot of trial and error with some educated guesses. My code follows:
parent invocation:
------------------
nohup ./ChangeChannel.sh $channel & disown
child invocation:
-----------------
sudo nohup su user $directory$newchannel &> /dev/null 2>&1
subchild invocation:
--------------------
vlc atsc://frequency=605029000 --intf=dummy --sout-transcode-audio-sync :live-cache=3000 --sout='#transcode{vcodec=h264,vb=150,fps=25,width=480,scale=1,venc=x264{aud,profile=baseline,level=30,keyint=15,bframes=0,ref=1},acodec=aac,ab=40,channels=2,samplerate=22050}:duplicate{dst=std{mux=ts,dst=-,access=livehttp{seglen=16,delsegs=true,numsegs=10,index=/var/www/stream/live.m3u8,index-url=content/live-######.ts},mux=ts{use-key-frames},dst=/var/www/stream/content/live-######.ts,ratecontrol=true}}' & disown
ChangeChannel.sh uses sudo to execute su via CGI in order to run vlc as a user other than root. It seems a little messy, but hell, it works.
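For readers who want the gist without the trial and error: what matters is that the CGI child holds no open descriptors back to Apache and that no shell waits on it. Below is a minimal sketch of that detach pattern, assuming a Bash CGI parent; long_running_command is a placeholder, not the script above:

#!/bin/bash
# Hypothetical detach pattern: redirect all three standard streams so Apache's
# CGI handler is not left holding an open pipe, then background and disown so
# no shell ever waits on the child.
nohup long_running_command < /dev/null > /dev/null 2>&1 &
disown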

Related

Deploying an application with database inside mysql container inside docker [duplicate]

I'm trying to wrap my head around Docker from the point of view of deploying an application which is intended to run on the user's desktop. My application is simply a Flask web application and a Mongo database. Normally I would install both in a VM and forward a host port to the guest web app. I'd like to give Docker a try, but I'm not sure how I'm meant to use more than one program. The documentation says there can be only one ENTRYPOINT, so how can I have Mongo and my Flask application? Or do they need to be in separate containers, in which case how do they talk to each other, and how does this make distributing the app easy?
There can be only one ENTRYPOINT, but that target is usually a script that launches as many programs as are needed. You can additionally use, for example, Supervisord or similar to take care of launching multiple services inside a single container. One example of this is a container that runs mysql, apache, and wordpress together.
Say you have one database that is used by a single web application. Then it is probably easier to run both in a single container.
If you have a shared database that is used by more than one application, then it would be better to run the database in its own container and the applications each in their own containers.
There are at least two possibilities for how the applications can communicate with each other when they are running in different containers:
Use exposed IP ports and connect via them.
Recent docker versions support linking.
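As a hedged illustration of the linking option, for the Flask + Mongo case in the question (the image and container names here are made up):

# Legacy --link flag: the app container can reach the database under the
# alias "db"; modern Docker prefers user-defined networks instead.
docker run -d --name mongo_db mongo
docker run -d --link mongo_db:db -p 5000:5000 my_flask_app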
I strongly disagree with some previous solutions that recommended running both services in the same container. It's clearly stated in the documentation that it's not recommended:
It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.
There are good use cases for supervisord or similar programs, but running a web application plus a database is not one of them.
You should definitely use docker-compose to do that and orchestrate multiple containers with different responsibilities.
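For the Flask + Mongo setup in the question, a docker-compose.yml could look roughly like the following sketch (the service names, port, and image tag are illustrative assumptions, not a definitive setup):

# Hypothetical docker-compose.yml: "web" is built from the local Dockerfile
# and can reach the database at the hostname "mongo".
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - mongo
  mongo:
    image: mongo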
I had a similar requirement: running a LAMP stack, MongoDB, and my own services.
Docker is OS-based virtualisation, which is why it isolates its container around a running process; hence it requires at least one process running in the foreground.
So you provide your own startup script as the entry point; your startup script becomes an extended Docker image script, in which you can stack any number of services, as long as at least one foreground service is started, and that one is started last.
So my Dockerfile has the two lines below at the very end:
COPY myStartupScript.sh /usr/local/myscripts/myStartupScript.sh
CMD ["/bin/bash", "/usr/local/myscripts/myStartupScript.sh"]
In my script I start MySQL, MongoDB, Tomcat, and so on. At the end, I run Apache as a foreground process:
source /etc/apache2/envvars
/usr/sbin/apache2 -DFOREGROUND
This enables me to start all my services and keep the container alive, the last service started being in the foreground.
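A rough sketch of what such a startup script might look like follows; the service names come from this answer, but the exact commands, flags, and paths are assumptions, not the author's actual script:

#!/bin/bash
# Hypothetical myStartupScript.sh: background services first...
service mysql start
mongod --fork --logpath /var/log/mongod.log
"$CATALINA_HOME"/bin/catalina.sh start   # Tomcat (assumes CATALINA_HOME is set)
# ...then exec the last service in the foreground to keep the container alive.
source /etc/apache2/envvars
exec /usr/sbin/apache2 -DFOREGROUND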
Hope it helps
UPDATE: Since I last answered this question, new things have come up, like Docker Compose, which can help you run each service in its own container yet bind all of them together as dependencies among those services. Try learning more about docker-compose and use it; it is the more elegant way, unless your needs don't match it.
Although it's not recommended, you can run 2 processes in the foreground by using wait. Just make a bash script with the following content, e.g. start.sh:
#!/bin/bash
# runs 2 commands simultaneously:
mongod &           # your first application
P1=$!
python script.py & # your second application
P2=$!
wait $P1 $P2
In your Dockerfile, start it with
CMD bash start.sh
I would recommend setting up a local Kubernetes cluster if you want to run multiple processes simultaneously. You can 'distribute' the app by providing a simple Kubernetes manifest.
They can be in separate containers, and indeed, if the application was also intended to run in a larger environment, they probably would be.
A multi-container system would require more orchestration to bring up all the required dependencies, though in Docker v0.6.5+ there is a new facility to help with that built into Docker itself: Linking. With a multi-machine solution, it's still something that has to be arranged from outside the Docker environment, however.
With two different containers, the two parts still communicate over TCP/IP, but unless the ports have been locked down specifically (not recommended, as you'd be unable to run more than one copy), you would have to pass the new port that the database has been exposed on to the application, so that it could communicate with Mongo. Again, this is something Linking can help with.
For a simpler, small installation, where all the dependencies go in the same container, having both the database and the Python runtime started by the program that is initially called as the ENTRYPOINT is also possible. This can be as simple as a shell script or some other process controller; Supervisord is quite popular, and a number of examples exist in public Dockerfiles.
Docker provides a couple of examples of how to do it. The lightweight option is to:
Put all of your commands in a wrapper script, complete with testing and debugging information. Run the wrapper script as your CMD. This is a very naive example. First, the wrapper script:
#!/bin/bash

# Start the first process
./my_first_process -D
status=$?
if [ $status -ne 0 ]; then
    echo "Failed to start my_first_process: $status"
    exit $status
fi

# Start the second process
./my_second_process -D
status=$?
if [ $status -ne 0 ]; then
    echo "Failed to start my_second_process: $status"
    exit $status
fi

# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container will exit with an error
# if it detects that either of the processes has exited.
# Otherwise it will loop forever, waking up every 60 seconds.
while /bin/true; do
    ps aux | grep my_first_process | grep -q -v grep
    PROCESS_1_STATUS=$?
    ps aux | grep my_second_process | grep -q -v grep
    PROCESS_2_STATUS=$?
    # If the greps above find anything, they will exit with 0 status.
    # If they are not both 0, then something is wrong.
    if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
        echo "One of the processes has already exited."
        exit 1
    fi
    sleep 60
done
Next, the Dockerfile:
FROM ubuntu:latest
COPY my_first_process my_first_process
COPY my_second_process my_second_process
COPY my_wrapper_script.sh my_wrapper_script.sh
CMD ./my_wrapper_script.sh
I agree with the other answers that using two containers is preferable, but if you have your heart set on bundling multiple services in a single container, you can use something like supervisord.
In Hipache, for instance, the included Dockerfile runs supervisord, and the file supervisord.conf specifies that both hipache and redis-server be run.
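A minimal supervisord.conf in that spirit might look like this (the program names and paths here are illustrative, not copied from the Hipache image):

; nodaemon=true keeps supervisord itself in the foreground so the container
; stays alive; each [program:...] section is one managed service.
[supervisord]
nodaemon=true

[program:redis]
command=/usr/bin/redis-server

[program:hipache]
command=hipache -c /path/to/config.json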
If a dedicated script seems like too much overhead, you can spawn separate processes explicitly with sh -c. For example:
CMD sh -c 'mini_httpd -C /my/config -D &' \
&& ./content_computing_loop
In Docker, there are two ways you can run a program:
CMD
ENTRYPOINT
If you want to know the difference between them, please refer here.
In CMD/ENTRYPOINT, there are two formats to run a command:
SHELL format
EXEC format
SHELL format:
CMD executable_first arg1; executable_second arg1 arg2
ENTRYPOINT executable_first arg1; executable_second arg1 arg2
This version will create a shell and execute the above command. Here you can use any shell syntax, such as ";", "&", "|", etc., so you can run any number of commands. If you have a complex set of commands to run, you can create a separate shell script and use it:
CMD my_script.sh arg1
ENTRYPOINT my_script.sh arg1
EXEC format:
CMD ["executable", "parameter 1", "parameter 2", …]
ENTRYPOINT ["executable", "parameter 1", "parameter 2", …]
Here you can notice that only the first parameter is an executable; from the second parameter on, everything becomes arguments/parameters for that executable.
To run multiple commands in EXEC format:
CMD ["/bin/sh", "-c", "executable_first arg1; executable_second"]
CMD ["/bin/sh", "-c", "executable_first arg1; executable_second"]
In the commands above, we used the shell as the executable to run the commands. This is the only way to run multiple commands in EXEC format.
The following are WRONG:
CMD ["executable_first parameter", "executable_second parameter"]
ENTRYPOINT ["executable_first parameter", "executable_second parameter"]
CMD ["executable_first", "parameter", ";", "executable_second", "parameter"]
ENTRYPOINT ["executable_first", "parameter", ";", "executable_second", "parameter"]
Can I run multiple programs in a Docker container?
Yes. But with significant risks.
Below is the same answer as above, but with details and a recommended resolution, if you're interested in those.
Not Recommended
Warning: using the same container for multiple services is not recommended by the Docker community. The Docker documentation reads: "It is generally recommended that you separate areas of concern by using one service per container." Sources:
• https://archive.ph/3Roa6#selection-307.2-307.100
• https://docs.docker.com/config/containers/multi-service_container/
If you choose to ignore the recommendation above, your container risks weaker security, increasing instability, and, down the road, painful growth.
If you are OK with those risks, the documentation for using one container for multiple services is at:
• https://archive.ph/3Roa6#selection-335.0-691.1
• https://docs.docker.com/config/containers/multi-service_container/
Recommended
If you need container(s) with stronger security, more stability, better performance, and room to scale in the future, then the Docker community recommends these two steps:
Use one service per Docker container. The end result is that you will have multiple containers.
Use Docker's "Networking" feature to connect any of those containers to your liking, as in the sketch below.
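A quick sketch of those two steps for the Flask + Mongo case (the container, network, and image names are illustrative):

# One service per container, joined by a user-defined network; containers on
# the same network can reach each other by container name.
docker network create app_net
docker run -d --name mongo --network app_net mongo
docker run -d --name web --network app_net -p 5000:5000 my_flask_app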

How to get all processes in a display from Xvfb?

I have a program that launches ChromeDrivers, which launch Chrome browsers, and later attempts to close both after doing some task (using Selenium). But oftentimes my program can't kill off the ChromeDriver/Chrome browser. When I try to kill the ChromeDriver, the Chrome browser and all its other child processes aren't killed off.
I have tried looking at /proc/x/environ to determine whether I can extract the DISPLAY of the process, but found that no such environment variable was set for the browser and child processes.
Is there any other way to detect all processes in a specific Xvfb display and kill them all?
This looks quite promising if you only have one child process:
xvfb-run sleep 60 &
pid_xvfb=$!
# kill the direct children of xvfb-run (the Xvfb server and the command)
kill $(ps -o pid= --ppid $pid_xvfb)
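If there are several children, one hedged alternative is to give the whole tree its own process group and kill the group; launch_browsers.sh stands in for whatever starts your Selenium work, and this assumes the snippet runs from a script (where background jobs share the script's process group):

# setsid makes the child a new session/group leader, so its PID doubles as
# the process-group ID; kill with a negative PID to hit the whole group.
setsid xvfb-run ./launch_browsers.sh &
pgid=$!
# ... later:
kill -- -"$pgid"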

Is there a good way to detect MySQL is "ready"?

I am not a MySQL expert.
I have a script that installs MySQL, starts mysqld, and then uses mysql to do some initialization.
Currently, in order to have this work, I enter a loop that does roughly the following (the original is pseudocode mixing multiple languages; rendered here as shell):
mysqld_safe ... &                  # start in background so we can continue
while [ ! -S /tmp/mysql.sock ]; do # poll for the socket file
    sleep 0.1                      # sleep for 100ms
done
mysql -u root ... < initialize.sql # and so forth
This seems to work (!) but has multiple problems:
polling smells funny,
I am not smart enough about MySQL to know whether looking at that hard-coded pathname /tmp/mysql.sock is smart at all.
And yet it's a lot easier than trying to (for example) consume and parse the stdout (or is it stderr?) of mysqld_safe to figure out whether the server has started.
My narrow question is whether there's a way to issue a blocking start of mysqld: can I issue any command that blocks until the database has started, and then exits (and detaches, maybe leaving a PID file), and has a companion stop command? (Or maybe allows me to read the PID file and issue my own SIGTERM?)
My broader question is, am I on the right track, or is there some totally different and easier (to be "easier" for me it would have to be lightweight; I'm not that interested in installing a bunch of tools like Puppet or DbMaintain/Liquibase or whatever) approach to solving the problem I articulated? That is, starting with a .gz file containing MySQL, install a userland MySQL and initialize a database?
Check out the init shell script for mysqld. They do polling, in a function called wait_for_pid().
That function checks for the existence of the pid file, and if it doesn't exist yet, sleeps for 1 whole second, then tries again. There's a timeout that defaults to 900 seconds, at which point it gives up waiting and concludes that it's not going to start (and outputs a totally unhelpful message "The server quit without updating PID file").
You don't have to guess where the pid file is. If you're starting mysqld_safe, you should tell it where it should create the pid file, using the --pid-file option.
One tricky part is that the pid file isn't created until mysqld initializes. This can take a while if it has to perform crash recovery using the InnoDB log files, and the log files are large. So it could happen that 900 seconds of timeout isn't long enough, and you get a spurious error, even though mysqld successfully starts a moment after the timeout.
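A rough sketch of that polling approach (the path and the timeout are illustrative, not prescriptive):

# Tell mysqld_safe where to write the pid file, then poll until it appears
# or we give up after roughly 900 seconds.
mysqld_safe --pid-file=/var/run/mysqld/mysqld.pid &
for _ in $(seq 1 900); do
    [ -s /var/run/mysqld/mysqld.pid ] && break
    sleep 1
done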
You can also read the error log or the console output of mysqld. It should eventually output a line that says "ready for connections."
To read the log until you get this line, and then terminate the read, you could use the following, with $error_log standing for the error-log path on your install:
tail -f "$error_log" | sed -e '/ready for connections/q'
You can use
mysqladmin -h localhost status
or use a pure bash solution like wait-for-it
./wait-for-it.sh --timeout 10 -h localhost -p 3306
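If mysqladmin is available, a small polling loop can also serve as the blocking start the question asks for; a hedged sketch, with initialize.sql being the file from the question:

# "mysqladmin ping" exits 0 once the server accepts connections; -s keeps it
# quiet about connection failures while we wait.
until mysqladmin -s -h localhost ping; do
    sleep 1
done
mysql -u root < initialize.sql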

How to kill a zombie process which always appears whenever Geany starts

I am using Geany to edit a large text file (600MB or so) in Ubuntu. But after a while, a zombie process appears whenever I start Geany, and it cannot load the file for me to edit. It takes 100% of my CPU while Geany runs. I try to kill the zombie process with the following:
kill -HUP `ps -A -ostat,ppid,pid,cmd | grep -e '^[Zz]' | awk '{print $2}'`
But once I start the application again, the zombie process appears automatically. I also tried logging out.
What can I do to kill the zombie process once and for all? Thanks!
You can't kill a zombie process since it's already dead.
On Unix and Unix-like computer operating systems, a zombie process or
defunct process is a process that has completed execution (via the
exit system call) but still has an entry in the process table: it is a
process in the "Terminated state".
(from Wikipedia)
It's simply an entry in the process table with no associated process. It exists because the spawning (parent) process has yet to collect the return status (via wait()). Other than that it will consume no resources.
So I suspect the parent process is either busy or not working properly. I would first of all try to identify that process (via the PPID column in ps, for example):
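For example, the following lists each zombie with its parent PID, which should point at the process that is failing to reap:

# rows whose STAT starts with Z are zombies; the PPID column is the parent
# process to investigate.
ps -eo stat,pid,ppid,cmd | awk '$1 ~ /^[Zz]/'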
EDIT: I note there's a Geany issue raised/resolved around this.

How to solve the Jenkins 'Disk space is too low' issue?

I have deployed Jenkins on my CentOS machine. Jenkins was working well for 3 days, but yesterday there was a "Disk space is too low. Only 1.019GB left." problem.
How can I solve this problem? It has made my master offline for hours.
You can easily change the threshold from the Jenkins UI (my version is 1.651.3).
Update: How to ensure high disk space
This feature is meant to prevent working on slaves with low free disk space. Lowering the threshold would not fix the fact that some jobs do not properly clean up after they finish.
Depending on what you're building:
Make sure you understand the disk output of your build; if possible, restrict output to the job workspace only. Use the Workspace Cleanup plugin to clean up the workspace as a post-build step.
If the process must write some data to external folders, clean them up manually in post-build steps.
Alternative 1: provision a new slave per job (use spot slaves; there are many plugins that integrate with different cloud providers to provision machines on demand).
Alternative 2: run the build inside a container. Everything will be discarded once the build is finished.
Besides the above solutions, there is a more common way: directly delete the largest space consumers on the Linux machine. You can follow the steps below:
Log in to the Jenkins machine (e.g. via PuTTY).
cd to the Jenkins installation path.
Use ls -lart to list hidden folders as well; normally the Jenkins installation is placed in the .jenkins/ folder:
[xxxxx ~]$ ls -lart
drwxrwxr-x 12 xxxx 4096 Feb 8 02:08 .jenkins/
List the folder sizes:
Use df -h to show disk usage at a high level.
du -sh ./*/ lists the total size of each subfolder in the current path.
du -a /etc/ | sort -n -r | head -n 10 lists the top 10 directories eating disk space in /etc/.
Delete old builds or other large folders.
Normally the ./jobs/ or ./workspace/ folder is the largest. Go inside and delete based on your needs (do NOT delete the entire folder):
rm -rf theFolderToDelete
You can limit the loss of disk space by discarding old builds. There's a checkbox for this in the project configuration.
This is actually a legitimate question, so I don't understand the downvotes; perhaps it belongs on Super User or Server Fault. This is a soft warning threshold, not a hard limit where the disk is out of space.
For Hudson, see "where to configure hudson node disk temp space thresholds" (this is talking about the host, not nodes).
Jenkins is the same. The conclusion is that for many small projects, the system property hudson.diagnosis.HudsonHomeDiskUsageChecker.freeSpaceThreshold could be decreased.
That said, I haven't tested it, and there is a disclaimer:
No compatibility guarantee
In general, these switches are often experimental in nature, and subject to change without notice. If you find some of those useful, please file a ticket to promote it to the official feature.
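For illustration only (untested, as noted above): on a Debian-style install the property could be passed through the JVM arguments, e.g. in /etc/default/jenkins. The value appears to be in bytes, so this would lower the warning threshold to 512MB:

# hypothetical /etc/default/jenkins fragment; verify the property and units
# against your Jenkins version before relying on it
JAVA_ARGS="$JAVA_ARGS -Dhudson.diagnosis.HudsonHomeDiskUsageChecker.freeSpaceThreshold=536870912"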
I got the same issue. My Jenkins version is 2.3, and its UI is slightly different; putting this here so that it may help someone. Increasing both disk space thresholds to 5GB fixed the issue.
I have a cleanup job with the following build steps. You can schedule it @daily or @weekly.
Execute system groovy script build step to clean up old jobs:
import jenkins.model.Jenkins
import hudson.model.Job

BUILDS_TO_KEEP = 5

for (job in Jenkins.instance.items) {
    println job.name
    def recent = job.builds.limit(BUILDS_TO_KEEP)
    for (build in job.builds) {
        if (!recent.contains(build)) {
            println "Preparing to delete: " + build
            build.delete()
        }
    }
}
You'd need to have the Groovy plugin installed.
Execute shell build step to clean cache directories
rm -r ~/.gradle/
rm -r ~/.m2/
echo "Disk space"
du -h -s /
To check the free space as a Jenkins job:
Parameters
FREE_SPACE: Needed free space in GB.
Job
#!/usr/bin/env bash
free_space="$(df -Ph . | awk 'NR==2 {print $4}')"
if [[ "${free_space}" = *G* ]]; then
    free_space_gb="${free_space%G}"      # strip the trailing G
    free_space_gb="${free_space_gb%%.*}" # drop any fractional part
    if [[ ${free_space_gb} -lt ${FREE_SPACE} ]]; then
        echo "Warning! Low space: ${free_space}"
        exit 2
    fi
else
    echo "Warning! Unknown: ${free_space}"
    exit 1
fi
echo "Free space: ${free_space}"
Plugins
Set build description
Post-Build Actions
Regular expression: Free space: (.*)
Description: Free space: \1
Regular expression for failed builds: Warning! (.*)
Description for failed builds: \1
For people who do not know where the configs are: download the tmpcleaner plugin from
https://updates.jenkins-ci.org/download/plugins/tmpcleaner/
You will get an .hpi file. Go to Manage Jenkins -> Manage Plugins -> Advanced, upload the .hpi file there, and restart Jenkins.
You can immediately see a difference if you go to Manage Nodes.
Since my Jenkins was installed on a Debian server, I did not understand most of the answers related to this, since I cannot find an /etc/default folder or a jenkins file.
If someone knows where the /tmp folder is or how to configure it for Debian, do let me know in the comments.