How to kill a zombie process that is created whenever Geany starts - zombie-process

I am using Geany in Ubuntu to edit a large text file (600MB or so). But after a while, a zombie process appears whenever I start Geany, and Geany cannot load the file so that I can edit its content. It takes 100% of my CPU while Geany runs. I tried to kill the zombie process with the following:
kill -HUP `ps -A -ostat,ppid,pid,cmd | grep -e '^[Zz]' | awk '{print $2}'`
But once I start the application again, the zombie process appears automatically. I also tried logging out.
What can I do to kill the zombie process once and for all? Thanks!

You can't kill a zombie process since it's already dead.
On Unix and Unix-like computer operating systems, a zombie process or
defunct process is a process that has completed execution (via the
exit system call) but still has an entry in the process table: it is a
process in the "Terminated state".
(from Wikipedia)
It's simply an entry in the process table with no associated process. It exists because the spawning (parent) process has yet to collect the return status (via wait()). Other than that it will consume no resources.
So I suspect the parent process is either busy or not working properly. I would first of all try to identify that process (via the PPID column in ps, for example).
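To see the zombie/parent relationship concretely, here is a small sketch that deliberately creates a short-lived zombie and then looks it up by its parent's PID (the sleep durations are arbitrary):

```shell
#!/bin/sh
# Create a demonstration zombie: exec replaces the inner shell with `sleep`,
# which never calls wait(), so its exited child stays defunct until the
# parent itself exits and init adopts and reaps the entry.
sh -c 'sleep 1 & exec sleep 3' &
parent=$!
sleep 2   # by now the inner child has exited but has not been reaped
# Zombies show a STAT of Z; their PPID points at the non-reaping parent
ps -A -o stat=,ppid=,pid=,cmd= | awk -v p="$parent" '$1 ~ /^[Zz]/ && $2 == p'
wait
```

Note that the zombie disappears on its own once the parent exits, which is why signalling (or fixing) the parent, not the zombie, is the only effective remedy.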
EDIT: I note there's a Geany issue raised/resolved around this.

Related

Is there a time limit for the startup script to finish before it stops?

I need to create a VM instance in Google Compute Engine with a startup script that takes 30 minutes, but it never finishes; it stops around 10 minutes after the instance boots. Is there a timeout? Is there another alternative to accomplish what I need to do? Thanks!
Given the additional clarification in the comments:
My script downloads another script and then executes it, and what that script does is download some big files, and then compute some values based on latitude/longitude. Then, when the process is finished, the VM is destroyed.
My recommendation would be to run the large download and processing asynchronously rather than synchronously. The reason is that if it's synchronous, it's part of the VM startup (in the critical path), and the VM monitoring infrastructure notices that the VM is not completing its startup phase within a reasonable amount of time and terminates it.
Instead, take the heavy-duty processing out of the critical path and do it in the background, i.e., asynchronously.
In other words, the startup script currently probably looks like:
# Download the external script
curl [...] -o /tmp/script.sh
# Run the file download, computation, etc. and shut down the VM.
/tmp/script.sh
I would suggest converting this to:
# Download the external script
curl [...] -o /tmp/script.sh
# Run the file download, computation, etc. in the background and shut down the VM.
nohup /tmp/script.sh &
What this does is start the heavy processing in the background, but also disconnect it from the parent process such that it is not automatically terminated when the parent process (the actual startup script) is terminated. We want the main startup script to terminate so that the entire VM startup phase is marked completed.
For more info, see the Wikipedia page on nohup.
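That the nohup'd background job really does outlive its parent can be checked with a small sketch (paths and durations are illustrative):

```shell
#!/bin/sh
# A short-lived parent shell starts a nohup'd background child,
# records the child's PID, and exits immediately.
pidfile=$(mktemp)
sh -c "nohup sleep 5 >/dev/null 2>&1 & echo \$! > $pidfile"
sleep 1
# The parent sh is long gone, but the child is still alive:
kill -0 "$(cat "$pidfile")" && echo "child survived its parent"
rm -f "$pidfile"
```

The same mechanism is what keeps the heavy processing alive after the startup script itself has been marked complete.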

What does the command line arguments for PM2 startup mean precisely

I am a little confused about startup scripts and the command line options. I am building a small Raspberry Pi based server for my node applications. In order to provide maximum protection against power failures and flash write corruption, the root file system is read only, and that embraces the home directory of my main user, where the production versions of my apps (two of them) are stored. Because the .pm2 directory here is no good for logs etc., I currently set the PM2_HOME environment variable to a place in /var (which has 512kb of unused space around it to protect writes to it). The ecosystem.json file also reads this environment variable to determine where to place its logs.
In case I need to, I also have a secondary user with a read/write home directory in another partition (protected by buffer space around it). This contains development versions of my application code which, because of the convenience of setting up environments etc., I also want to monitor with PM2. If I need to investigate a problem I can log in as that user and run and test the application there.
Since this is a headless box, with watchdog and kernel-panic restarts built in, I want PM2 to start during boot and at minimum restart the two production apps. Ideally it should also start the two development versions of the app, but I can live without that if it's impossible.
I can switch the read only root partition to read/write - indeed it does so automatically when I ssh into my production user account. It switches back to read only automatically when I log out.
So I went to this account to try and create a startup script. It then said (unsurprisingly) that I had to run a sudo command like so:
sudo su -c "env PATH=$PATH:/usr/local/bin pm2 startup ubuntu -u pi --hp /home/pi"
The key issue for me here is the --hp switch. I went searching for some clue as to what it means. It's clearly a home directory, but it doesn't match PM2_HOME - which is set to /var/pas in my case to take it out of the read only area. I don't want to spray my home directory with files that shouldn't be there, so I am asking for some guidance here.
I found out by experiment what it does with an "ubuntu" startup script: it uses the --hp value to set PM2_HOME in the script by appending "/.pm2" to it.
However, there is nothing stopping you editing the script once it has been created and setting PM2_HOME to whatever you want.
So effectively it's a helper for the script, but only that and nothing more special.
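In shell terms, the effect of --hp boils down to something like the following sketch (the paths are illustrative, taken from the question):

```shell
#!/bin/sh
# Illustrative: how the generated init script derives PM2_HOME from --hp
HP=/home/pi            # the value passed via --hp
PM2_HOME="$HP/.pm2"    # PM2 simply appends /.pm2
echo "$PM2_HOME"
# Since the generated script is plain shell, you can edit it afterwards
# and point PM2_HOME somewhere writable instead, e.g.:
PM2_HOME=/var/pas
echo "$PM2_HOME"
```

So for the read-only-root setup in the question, generating the script with any --hp value and then hand-editing PM2_HOME to the writable /var location is a reasonable approach.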

How to get all processes in a display from Xvfb?

I have a program that launches ChromeDrivers, which launch Chrome browsers, and later attempts to close both after doing some task (using Selenium). But oftentimes my program can't kill off the ChromeDriver/Chrome browser. When I try to kill the ChromeDriver, the Chrome browser and all its other child processes aren't killed off.
I have tried looking at /proc/&lt;pid&gt;/environ to determine if I can extract the DISPLAY of the process, but found that no such environment variable was set for the browser and child processes.
Is there any other way to detect all processes in a specific Xvfb display and kill them all?
This looks quite promising if you only have one child process:
xvfb-run sleep 60 &
pid_xvfb=$!
kill $(ps -o pid= --ppid $pid_xvfb)
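If there may be more than one descendant (chromedriver spawning chrome, which spawns renderer processes), one hedged alternative is to start the whole thing in its own process group and signal the group, rather than chasing individual PIDs (here `xvfb-run sleep 60` stands in for the real command):

```shell
#!/bin/sh
# Run the command in its own session/process group with setsid, then
# signal the whole group with a negative PID: every descendant that has
# not changed its own process group receives the signal.
setsid xvfb-run sleep 60 &
pgid=$!          # setsid makes the child the group leader, so PID == PGID
sleep 1
kill -TERM -- "-$pgid"
```

This sidesteps the DISPLAY lookup entirely: membership in the process group, not the X display, is what ties the tree together.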

Is there a good way to detect MySQL is "ready?"

I am not a MySQL expert.
I have a script that installs MySQL, starts mysqld, and then uses mysql to do some initialization.
Currently, in order to have this work, I enter a loop that does the following (apologies for the pseudocode mixing multiple languages):
mysqld_safe /* ... */ & /* ampersand to start in background so we can continue */
while(fileDoesNotExist("/tmp/mysql.sock")) {
sleepFor100ms();
}
mysql -u root /* and so forth */ initialize.sql
This seems to work (!) but has multiple problems:
polling smells funny,
I am not smart enough about MySQL to know whether looking at that hard-coded pathname /tmp/mysql.sock is smart at all.
And yet it's a lot easier than trying to (for example) consume and parse the stdout (or is it stderr?) of mysqld_safe to figure out whether the server has started.
My narrow question is whether there's a way to issue a blocking start of mysqld: can I issue any command that blocks until the database has started, and then exits (and detaches, maybe leaving a PID file), and has a companion stop command? (Or maybe allows me to read the PID file and issue my own SIGTERM?)
My broader question is, am I on the right track, or is there some totally different and easier (to be "easier" for me it would have to be lightweight; I'm not that interested in installing a bunch of tools like Puppet or DbMaintain/Liquibase or whatever) approach to solving the problem I articulated? That is, starting with a .gz file containing MySQL, install a userland MySQL and initialize a database?
Check out the init shell script for mysqld. They do polling, in a function called wait_for_pid().
That function checks for the existence of the pid file, and if it doesn't exist yet, sleeps for 1 whole second, then tries again. There's a timeout that defaults to 900 seconds, at which point it gives up waiting and concludes that it's not going to start (and outputs a totally unhelpful message "The server quit without updating PID file").
You don't have to guess where the pid file is. If you're starting mysqld_safe, you should tell it where it should create the pid file, using the --pid-file option.
One tricky part is that the pid file isn't created until mysqld initializes. This can take a while if it has to perform crash recovery using the InnoDB log files, and the log files are large. So it could happen that 900 seconds of timeout isn't long enough, and you get a spurious error, even though mysqld successfully starts a moment after the timeout.
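The polling loop in the init script amounts to something like this sketch (the pid-file path is illustrative; the timeout default is the 900 seconds mentioned above):

```shell
#!/bin/sh
# Poll for the pid file that mysqld creates once initialization finishes,
# giving up after TIMEOUT one-second sleeps.
PID_FILE=/var/run/mysqld/mysqld.pid
TIMEOUT=900
i=0
while [ ! -s "$PID_FILE" ]; do
    i=$((i + 1))
    if [ "$i" -gt "$TIMEOUT" ]; then
        echo "The server quit without updating PID file" >&2
        exit 1
    fi
    sleep 1
done
echo "pid file present; server is up"
```

If crash recovery can be slow on your data set, raising TIMEOUT is the simplest guard against the spurious failure described above.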
You can also read the error log or the console output of mysqld. It should eventually output a line that says "ready for connections."
To read until you get this line, and then terminate the read, you could use (substituting the actual path of your error log):
tail -f /var/log/mysql/error.log | sed -e '/ready for connections/q'
You can use
mysqladmin -h localhost status
or use a pure bash solution like wait-for-it
./wait-for-it.sh --timeout 10 -h localhost -p 3306

CGI Bash script to spawn daemon process

I am working on a project to stream HDTV from a personal computer to devices supporting HTTP Live Streaming (think iOS devices and some android). I have the formatting of the video and the streaming aspects down. What I am now trying to implement is an easy way to change the channel remotely.
My current method involves connecting via SSH to kill the old stream and begin a new stream. This works, but isn't pretty. I want something my Mom or girlfriend could use. I decided I would build an HTML5 app that would issue the channel switching over CGI scripts. I currently have a parent process with a form that calls a child process to decide if the stream is running and then a subchild process to actually tune the stream.
As I am streaming live video from my computer I need the subchild process to run indefinitely. Unfortunately it seems that when my parent process is finished the background process started in the subchild process terminates.
I have tried a simple &, using nohup, setsid, and daemon. daemon runs cleanest but still terminates when the parent finishes, even with a -r flag. I'll place my code below; maybe someone will have an idea on how I could implement this, or a better way to achieve the same thing? Thanks! (Oh, and I know killing VLC is not a pretty way to kill the stream; if you have a better way I'm all ears.)
parent invoking child:
----------------------
./ChangeChannel.sh $channel #passed from form submission
child (ChangeChannel.sh):
-------------------------
#!/bin/bash
directory=./Channels/
newchannel=$1
if pidof vlc > /dev/null
then
    sudo kill $(pidof vlc)
fi
daemon -r -v -d $directory$newchannel &
subchild example:
-----------------
vlc atsc://frequency=605029000 --intf=dummy --sout-transcode-audio-sync :live-cache=3000 --sout='#transcode{vcodec=h264,vb=150,fps=25,width=480,scale=1,venc=x264{aud,profile=baseline,level=30,keyint=15,bframes=0,ref=1},acodec=aac,ab=40,channels=2,samplerate=22050}:duplicate{dst=std{mux=ts,dst=-,access=livehttp{seglen=16,delsegs=true,numsegs=10,index=/var/www/stream/live.m3u8,index-url=content/live-######.ts},mux=ts{use-key-frames},dst=/var/www/stream/content/live-######.ts,ratecontrol=true}}'
How can I keep the subchild from terminating? I am running Apache on Ubuntu 12.04.
I got it!
For anyone interested in how, I changed my tactics to use nohup, &, disown, and > /dev/null 2>&1.
Honestly, I'm still not quite sure how I got it working... just a lot of trial and error with some educated guesses. My code follows:
parent invocation:
------------------
nohup ./ChangeChannel.sh $channel & disown
child invocation:
-----------------
sudo nohup su user $directory$newchannel &> /dev/null 2>&1
subchild invocation:
--------------------
vlc atsc://frequency=605029000 --intf=dummy --sout-transcode-audio-sync :live-cache=3000 --sout='#transcode{vcodec=h264,vb=150,fps=25,width=480,scale=1,venc=x264{aud,profile=baseline,level=30,keyint=15,bframes=0,ref=1},acodec=aac,ab=40,channels=2,samplerate=22050}:duplicate{dst=std{mux=ts,dst=-,access=livehttp{seglen=16,delsegs=true,numsegs=10,index=/var/www/stream/live.m3u8,index-url=content/live-######.ts},mux=ts{use-key-frames},dst=/var/www/stream/content/live-######.ts,ratecontrol=true}}' & disown
ChangeChannel.sh uses sudo to execute su via CGI in order to execute vlc as a user other than root. It seems a little messy, but hell, it works.
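For the record, the usual clean way to detach a long-lived child from a CGI request is to put it in its own session and point all three standard descriptors away from the CGI pipes, since Apache keeps the request open until stdout/stderr close (a sketch; the script path is illustrative):

```shell
#!/bin/sh
# Detach a long-running child from the CGI parent: a new session via
# setsid, and stdin/stdout/stderr redirected off the CGI pipes so the
# web server does not wait on them.
setsid /path/to/stream-channel.sh </dev/null >/dev/null 2>&1 &
# The CGI script can now exit immediately; the child keeps running.
echo "Status: 200 OK"
```

This achieves in one line what the nohup/disown/redirect combination above arrived at by trial and error.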