How do I PSRemote start procdump such that it persists after the session ends?

I can start a persistent process on unix with:
nohup process &
It will continue to run after I close my bash session. I cannot seem to do the same with PowerShell remoting on Windows. I can open a PSRemote session with a server and start a process, but as soon as I close that session it dies. My assumption is this is a benefit of strong sandboxing, but it's a benefit I'd rather work around somehow. Any ideas?
So far I've tried:
$exe ='d:\procdump.exe'
$processArgs = '-ma -e -t -n 3 -accepteula w3wp.exe d:\Dumps'
1) [System.Diagnostics.Process]::Start($exe,$processArgs)
2) Start-Job -ScriptBlock {param($exe,$processArgs) [System.Diagnostics.Process]::Start($exe,$processArgs)} -ArgumentList ($exe,$processArgs)
3) start powershell {param($exe ='d:\procdump.exe', $processArgs = '-ma -e -t -n 3 -accepteula w3wp.exe d:\Dumps') [System.Diagnostics.Process]::Start($exe,$processArgs)}
4) start powershell {param($exe ='d:\procdump.exe', $processArgs = '-ma -e -t -n 3 -accepteula w3wp.exe d:\Dumps') Start-Job -ScriptBlock {param($exe,$processArgs) [System.Diagnostics.Process]::Start($exe,$processArgs)} -ArgumentList ($exe,$processArgs)}
The program runs up until I close the session, then the procdump is reaped. The coolest thing about procdump is it will self-terminate, and I'd like to leave it running to take advantage of that fact.
I'd been starting ADPlus remotely, holding a session open, and just terminating the session to kill the captures. That's kind of handy, but it requires an awful lot of polling, inspecting, and deciding when the right moment is to kill the capture process, before filling up the hard drive but after capturing enough dumps to be useful. I can leave procdump running indefinitely while it waits for an appropriate trigger, and once it has captured enough data it will just die. That's lovely.
I just need to get procdump to keep running after I terminate my remote session. It's probably not worth creating a procdump scheduled task and starting it, but that's about the last idea I've got left.
Thanks.

This is not directly possible. Indirectly, yes, a scheduled task or a service can be created and started remotely, but simply pushing a process off into the SYSTEM space so that it outlives the session is not.
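Purely as an illustration of that indirect route (the server name, task name and start time here are made up), creating and immediately running a one-shot scheduled task from a remote session might look like this; the task belongs to the Task Scheduler service, so it keeps running after the PSRemote session closes:
Invoke-Command -ComputerName myserver -ScriptBlock {
    # Register a one-shot task under SYSTEM, then kick it off right away.
    schtasks /create /tn "ProcdumpW3wp" /sc once /st 00:00 /ru SYSTEM /f `
        /tr "d:\procdump.exe -ma -e -t -n 3 -accepteula w3wp.exe d:\Dumps"
    schtasks /run /tn "ProcdumpW3wp"
}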
I resolved my issue by spawning a local job that starts the remote job and stays alive for the required period of time. The local job holds the remote session open and then dies at the appropriate time, while the parent local process continues to run uninterrupted and can harvest the return value of the remote procdump with Receive-Job if I happen to care.
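A minimal sketch of that approach; the server name, hold time and helper variables are assumptions, and error handling is omitted:
$localJob = Start-Job -ScriptBlock {
    param($server, $exe, $processArgs, $holdSeconds)
    # Open the remote session and start procdump inside it.
    $session = New-PSSession -ComputerName $server
    $remoteJob = Invoke-Command -Session $session -AsJob -ScriptBlock {
        param($exe, $processArgs)
        Start-Process -FilePath $exe -ArgumentList $processArgs -Wait -PassThru
    } -ArgumentList $exe, $processArgs
    # Keep the remote session alive long enough for procdump to finish and self-terminate.
    Start-Sleep -Seconds $holdSeconds
    Receive-Job -Job $remoteJob
    Remove-PSSession $session
} -ArgumentList 'myserver', 'd:\procdump.exe', '-ma -e -t -n 3 -accepteula w3wp.exe d:\Dumps', 1800
# The parent shell keeps running; inspect the result later with: Receive-Job -Job $localJob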

Related

Soft restart daemon containers in docker swarm

We use multiple PHP workers, each running in its own container. To scale the number of parallel worker processes we run them in a Docker swarm.
The PHP script runs in a loop and waits for new jobs (it fetches jobs from Gearman).
When a new job is received, it is processed. After that, the script waits for the next job without exiting the PHP script.
Now we want to update our workers. In this case the image stays the same, but the PHP script changes.
So we have to exit the PHP script, update the script file, and restart the script.
If I use the docker service update command below, Docker stops the container immediately. In the worst case, a running worker is cancelled in the middle of a job.
docker service update --force PHP-worker
Is there any way to restart the Docker container softly?
Soft means: give the container a signal saying "I have to restart, please finish your running work." That way the container has a chance to finish its current work.
In my case, before starting the next iteration of the loop I would check this cancel flag. If the flag is set, I end the loop and exit the PHP script.
Environment:
Debian: 10
Docker: 19.03.12
PHP: 7.4
In the meantime we have solved it with signals.
Working with signals in PHP is very easy. In our case, this structure helped us:
// Terminate flag
$terminate = false;

// Register signals
pcntl_async_signals(true);

pcntl_signal(SIGTERM, function () use (&$terminate) {
    echo "Get SIGTERM. End worker LOOP\n";
    $terminate = true;
});

pcntl_signal(SIGHUP, function () use (&$terminate) {
    echo "Get SIGHUP. End worker LOOP\n";
    $terminate = true;
});

// Loop
while ($terminate === false) {
    // do next job
}
Before the next job is started, we check whether the terminate flag is set.
Docker has good support for stopping containers gracefully.
To define how long Docker waits before killing the container, we used the stop_grace_period option.
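For example, in a stack/compose file it might look like this (service and image names are placeholders); on update Docker sends SIGTERM and waits up to the grace period before sending SIGKILL:
version: "3.8"
services:
  php-worker:
    image: registry.example.com/php-worker:latest
    stop_grace_period: 10m        # give the worker up to 10 minutes to finish its current job
    deploy:
      replicas: 10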

How to send a binary flashing file to an embedded system with only a serial console?

I have an embedded Linux system that boots from a ramdisk, so it has no persistent storage available at run time (it does have flash to store the kernel and ramdisk).
The only connectivity is an RS-232 serial login console, so I am limited to what its built-in BusyBox provides. I want to retrieve the ramdisk, modify it, and rewrite it. The kernel does not have flash filesystem support built in. The ramdisk partition size is about 10 MBytes. When all files in the user directory are deleted, the free ramdisk space is about 14 MBytes.
The dd command is available, so I can copy the ramdisk partition into a file on the ramdisk, and I can write to the flash from a ramdisk file. flashcp is also available.
So my problem now is how to receive and send binary files over the RS-232 serial console.
I researched the following and none of them helped:
Linux command to send binary file to serial port with HW flow control? on stackoverflow
Binary data over serial terminal on stackoverflow
Transferring files using serial console on k.japko.eu
File transfer over a serial line on superuser.com
How to get file to a host when all you have is a serial console? on stackexchange
Mostly because x/y/zmodem are not available in this BusyBox build.
Any idea? Thanks!
Per the request, here's what I should have included in the first place.
Available u-boot commands:
U-Boot >?
? - alias for 'help'
askenv - get environment variables from stdin
base - print or set address offset
bdinfo - print Board Info structure
boot - boot default, i.e., run 'bootcmd'
bootd - boot default, i.e., run 'bootcmd'
bootm - boot application image from memory
cmp - memory compare
coninfo - print console devices and information
cp - memory copy
crc32 - checksum calculation
crc32_chk_uimage- checksum calculation of an image for u-boot
echo - echo args to console
editenv - edit environment variable
env - environment handling commands
exit - exit script
false - do nothing, unsuccessfully
fatinfo - print information about filesystem
fatload - load binary file from a dos filesystem
fatls - list files in a directory (default /)
fatwrite- write file into a dos filesystem
go - start application at address 'addr'
gpio - input/set/clear/toggle gpio pins
help - print command description/usage
i2c - I2C sub-system
iminfo - print header information for application image
imxtract- extract a part of a multi-image
itest - return true/false on integer compare
loadb - load binary file over serial line (kermit mode)
loads - load S-Record file over serial line
loady - load binary file over serial line (ymodem mode)
loop - infinite loop on address range
md - memory display
mdc - memory display cyclic
mm - memory modify (auto-incrementing address)
mw - memory write (fill)
mwc - memory write cyclic
nm - memory modify (constant address)
printenv- print environment variables
reset - Perform RESET of the CPU
run - run commands in an environment variable
saveenv - save environment variables to persistent storage
saves - save S-Record file over serial line
setenv - set environment variables
sf - SPI flash sub-system
showvar - print local hushshell variables
sleep - delay execution for some time
source - run script from memory
sspi - SPI utility command
test - minimal test like /bin/sh
true - do nothing, successfully
usb - USB sub-system
usbboot - boot from USB device
version - print monitor, compiler and linker version
U-Boot >
Available busybox commands:
BusyBox v1.13.2 (2015-03-16 10:50:56 EDT) multi-call binary
Copyright (C) 1998-2008 Erik Andersen, Rob Landley, Denys Vlasenko
and others. Licensed under GPLv2.
See source distribution for full notice.
Usage: busybox [function] [arguments]...
or: function [arguments]...
BusyBox is a multi-call binary that combines many common Unix
utilities into a single executable. Most people will create a
link to busybox for each function they wish to use and BusyBox
will act like whatever it was invoked as!
Currently defined functions:
[, [[, addgroup, adduser, ar, ash, awk, basename, blkid,
bunzip2, bzcat, cat, chattr, chgrp, chmod, chown, chpasswd,
chroot, chvt, clear, cmp, cp, cpio, cryptpw, cut, date,
dc, dd, deallocvt, delgroup, deluser, df, dhcprelay, diff,
dirname, dmesg, du, dumpkmap, dumpleases, echo, egrep, env,
expr, false, fbset, fbsplash, fdisk, fgrep, find, free,
freeramdisk, fsck, fsck.minix, fuser, getopt, getty, grep,
gunzip, gzip, halt, head, hexdump, hostname, httpd, hwclock,
id, ifconfig, ifdown, ifup, inetd, init, insmod, ip, kill,
killall, klogd, last, less, linuxrc, ln, loadfont, loadkmap,
logger, login, logname, logread, losetup, ls, lsmod, makedevs,
md5sum, mdev, microcom, mkdir, mkfifo, mkfs.minix, mknod,
mkswap, mktemp, modprobe, more, mount, mv, nc, netstat,
nice, nohup, nslookup, od, openvt, passwd, patch, pidof,
ping, ping6, pivot_root, poweroff, printf, ps, pwd, rdate,
rdev, readahead, readlink, readprofile, realpath, reboot,
renice, reset, rm, rmdir, rmmod, route, rtcwake, run-parts,
sed, seq, setconsole, setfont, sh, showkey, sleep, sort,
start-stop-daemon, strings, stty, su, sulogin, swapoff,
swapon, switch_root, sync, sysctl, syslogd, tail, tar, tcpsvd,
tee, telnet, telnetd, test, tftp, tftpd, time, top, touch,
tr, traceroute, true, tty, udhcpc, udhcpd, udpsvd, umount,
uname, uniq, unzip, uptime, usleep, vconfig, vi, vlock,
watch, wc, wget, which, who, whoami, xargs, yes, zcat
In U-Boot you can use loady/loadx to get a file from the PC over the UART. I usually use Tera Term to send the file.
The process is:
run loady in U-Boot
send the file from Tera Term
the file is transferred into your device's memory at 0x01000000.
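From there you could, for example, write the loaded image to SPI flash with the sf commands listed above. This is only a rough sketch: the flash offset and length are assumptions and must match your partition layout, and ${filesize} is set by loady after the transfer.
U-Boot > loady 0x01000000                    (start the YMODEM receive, then send from Tera Term)
U-Boot > sf probe 0                          (attach the SPI flash)
U-Boot > sf erase 0x200000 0xA00000          (example offset/size -- use your own layout)
U-Boot > sf write 0x01000000 0x200000 ${filesize}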
Independently, I found a way to upload binary files through the Linux console, and I'll document the steps here in case others find it useful, since I had a hard time finding this information on the net.
Here's the theory: change the console mode to raw so the binary traffic isn't interpreted as console control sequences, e.g. Ctrl-C. Turn off echo so it doesn't add extra serial traffic. Run tar to accept input from stdin. Since Ctrl-C won't work and tar won't know when to terminate, use a background task to kill the login shell so you can log in again to do your stuff.
Steps:
Create a script to run in the background. Adjust the myvar variable so it kills the login shell after the transfer is complete; currently 120 corresponds to 1200 seconds, sufficient for a 10 MByte file. Also edit the 808 to match your login shell's PID:
create bg file:
myvar=120                  # 120 iterations x 10 s sleep = 1200 s
while [ $myvar -gt 0 ]
do
    myvar=$(( $myvar-1 ))
    echo -e " $myvar \n"
    ls -l
    sleep 10
done
kill -9 808                # replace 808 with your login shell's PID
Launch the script in the background:
in console type:
source ./bg &
Use stty to switch the console to raw mode and turn off echo
in console type:
stty raw -echo
Start tar to untar stdin. Note: I have to end commands with Ctrl-J, since Enter no longer works after the stty command
in console type (ending with Ctrl-J, not Enter):
tar zx -f - 1> 1.log 2> 2.log
Start the Tera Term binary send of the file
Wait for the transfer to complete and for the new login prompt
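If the sending end is a Linux host rather than Tera Term on Windows, the same transfer can be done from a second shell on the host. A sketch, assuming the USB serial adapter shows up as /dev/ttyUSB0 and the payload is files.tar.gz:
stty -F /dev/ttyUSB0 115200 raw -echo    # match the target's baud rate, no translation
cat files.tar.gz > /dev/ttyUSB0          # stream the archive into the waiting tar on the target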
I forgot I asked this question. I figured out how to make an ssh connection over the serial line, which in turn allows many more things to be done more easily. Of course it requires sshd in addition to nc and stty, so you are out of luck if these are not available on your embedded Linux. I have tried it several times and it seems to work well, allowing multiple ssh sessions to be established and mc to transfer files.
You will need two shell sessions on the host computer: one to loop the serial port to a socket, and the other for ssh; more if you want to establish more ssh sessions.
First you need to set up the serial port. The '--noreset' option for picocom does this:
sudo picocom --noreset -b 115200 -e b /dev/ttyUSB3
Quit picocom once this is done (^B^X to exit).
Next we need to verify that the line endings are not translated or else ssh won't work. In the first shell run:
cat /dev/ttyUSB3 | hexdump -C
In the second shell run:
echo "echo -e \"LFLF\\n\\nCRCR\\r\\rEND\"" > /dev/ttyUSB3
You may see that \n (0x0A) is translated to \r\n (0x0D0x0A)
Use stty to set raw mode without echo and you should see no more translation:
echo "stty raw -echo" > /dev/ttyUSB3
echo "echo -e \"LFLF\\n\\nCRCR\\r\\rEND\"" > /dev/ttyUSB3
Finally in the first shell run nc to funnel local traffic between the serial port and ssh socket:
cat /dev/ttyUSB3 | nc -l -p 2222 > /dev/ttyUSB3
and funnel remote serial traffic to sshd:
echo "while true ; do nc localhost 22 ; done" > /dev/ttyUSB3
and connect ssh with port forwarding:
ssh -vvv root@localhost -p 2222 -L 0.0.0.0:22022:localhost:22
you can make more ssh connections simultaneously:
ssh -vvv root@localhost -p 22022
if you use mc, you can connect to it so you can easily browse the remote file system and copy files:
sh://root@localhost:22022
Last words: nc strips the TCP headers, so the ssh packets are not checksummed and are not retried. If there is a data error, the connection will break. If you remember your login shell's PID, you can kill it and log in again; otherwise you have to reboot. The '-vvv' flag for ssh is for debugging.

How to kill chromedriver

After running my protractor tests I may be left with chromedriver.exe running. The simple question is: how do I kill it? There are several things to note here:
I cannot just kill based on process name since several other chromedrivers may be running and may be needed by other tests.
I already stop the selenium server using "curl http://localhost:4444/selenium-server/driver/?cmd=shutDownSeleniumServer"
I noticed that the chromedriver is listening on port 33107 (is it possible to specify this port somehow?), but I do not know how I should call it to make it quit.
Probably I should be using driver.quit() in my tests, but on some occasions it might not get called (eg. when the build is cancelled).
Any ideas how to kill the proper chromedriver process from command line (eg. using curl)?
The proper way to do it is, as you mentioned, by using driver.quit() in your tests.
Actually, to be exact, in your test cleanup method, since you want a fresh instance of the browser every time.
Now, the problem with some unit test frameworks (like MSTest, for example) is that if your test initialize method fails, the test cleanup one will not be called.
As a workaround, you can wrap your test initialize in a try-catch block, with the catch calling your test cleanup:
public void TestInitialize()
{
    try
    {
        // your test initialize statements
    }
    catch
    {
        TestCleanup();
        // throw exception or log the error message or whatever else you need
    }
}

public void TestCleanup()
{
    driver.Quit();
}
EDIT:
For the case when the build is cancelled, you can create a method that kills all open instances of the Chrome browser and ChromeDriver, and have it executed before you start a new suite of tests.
E.g. if the unit testing framework you use has something similar to a Class Initialize or Assembly Initialize, you can do it there.
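A rough sketch of such a method (the name is made up); note that it kills every Chrome/ChromeDriver instance on the machine, so it only fits when nothing else needs them:
using System.Diagnostics;

public static void KillStrayBrowserProcesses()
{
    foreach (var name in new[] { "chromedriver", "chrome" })
    {
        foreach (var process in Process.GetProcessesByName(name))
        {
            try { process.Kill(); }
            catch { /* the process may have exited already */ }
        }
    }
}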
However, on a different post I found this approach:
PORT_NUMBER=1234
lsof -i tcp:${PORT_NUMBER} | awk 'NR!=1 {print $2}' | xargs kill
Breakdown of command
(lsof -i tcp:${PORT_NUMBER}) -- list all processes that are listening on that tcp port
(awk 'NR!=1 {print $2}') -- ignore first line, print second column of each line
(xargs kill) -- pass on the results as an argument to kill. There may be several.
Here, to be more exact: How to find processes based on port and kill them all?

Hot reconfiguration of HAProxy still leads to failed requests, any suggestions?

I found there are still failed requests when the traffic is high, using a command like this
haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
to hot reload the updated config file.
Below is the pressure testing result using webbench:
/usr/local/bin/webbench -c 10 -t 30 targetHProxyIP:1080
Webbench – Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.
Benchmarking: GET targetHProxyIP:1080
10 clients, running 30 sec.
Speed=70586 pages/min, 13372974 bytes/sec.
Requests: 35289 susceed, 4 failed.
I ran the command
haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
several times during the pressure testing.
The HAProxy documentation mentions:
They will receive the SIGTTOU signal to ask them to temporarily stop listening to the ports so that the new process can grab them
So there is a time period when the old process is no longer listening on the port (say 80) and the new process has not yet started listening on it, and during that window new connections will fail. Does that make sense?
So is there any approach to reloading the HAProxy configuration that affects neither existing connections nor new connections?
On recent kernels where SO_REUSEPORT is finally implemented (3.9+), this dead period does not exist anymore. While a patch has been available for older kernels for something like 10 years, it's obvious that many users cannot patch their kernels. If your system is recent enough, the new process will succeed in its attempt to bind() before asking the previous one to release the port, so there is a period where both processes are bound to the port instead of no process.
There is still a very tiny possibility that a connection arrives in the leaving process's queue at the moment it closes it. There is no reliable way to stop this from happening, though.

asynchronous queries with MySQL/Perl DBI

It is possible that my question title is misleading, but here goes --
I am trying out a prototype app which involves three MySQL/Perl Dancer powered web apps.
The user goes to app A which serves up a Google maps base layer. On document ready, app A makes three jQuery ajax calls -- two to app B like so
http://app_B/points.json
http://app_B/polys.json
and one to app C
http://app_C/polys.json
Apps B and C query the MySQL database via DBI, and serve up json packets of points and polys that are rendered in the user's browser.
All three apps are proxied through Apache to Perl Starman running via plackup started like so
$ plackup -E production -s Starman -w 10 -p 5000 path/to/app_A/app.pl
$ plackup -E production -s Starman -w 10 -p 5001 path/to/app_B/app.pl
$ plackup -E production -s Starman -w 10 -p 5002 path/to/app_C/app.pl
From time to time, I start getting errors back from the apps called via Ajax. The initial symptoms were
{"error":"Warning caught during route
execution: DBD::mysql::st fetchall_arrayref
failed: fetch() without execute() at
<path/to/app_B/app.pm> line 79.\n"}
The offending lines are
71> my $sql = qq{
72> ..
73>
74>
75> };
76>
77> my $sth = $dbh->prepare($sql);
78> $sth->execute();
79> my $res = $sth->fetchall_arrayref({});
This is bizarre... how can execute() not take place above? Perl doesn't have a habit of jumping over lines, does it? So, I turned on DBI_TRACE
DBI_TRACE=2=logs/dbi.log plackup -E production -p 5001 -s Starman -w 10 -a bin/app.pl
And the following is what stood out to me as the potential culprit in the log file:
Handle is not in asynchronous mode error 2000 recorded: Handle is not in asynchronous mode
!! ERROR: 2000 CLEARED by call to fetch method
What is going on? Basically, as is, app A is non-functional because the other apps don't return data "reliably" -- I put that in quotes because they do work correctly occasionally, so I know I don't have any logic or syntax errors in my code. I have some kind of intrinsic plumbing errors.
I did find the following on DBD::mysql about ASYNCHRONOUS_QUERIES and am wondering if this is the cause of, and the solution to, my problem. Essentially, if I want async queries, I have to add {async => 1} to my $dbh->prepare(). Except I am not sure whether I want async true or false. I tried it, and it doesn't seem to help.
I would love to learn what is going on here, and what is the right way to solve this.
How are you managing your database handles? If you are opening a connection before Starman forks your code, then multiple children may be trying to share one database handle, and that confuses MySQL. You can solve this problem by always running DBI->connect in the methods that talk to the database, but that can be inefficient. Many people switch over to some sort of connection pool, but I have no direct experience with any of them.
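A rough sketch of the "one handle per worker" idea in a Dancer route, with a get_dbh() helper invented for illustration (the DSN, credentials and SQL are placeholders); it caches a handle per Starman worker and reconnects if the handle was created before the fork or has gone away:
use Dancer;
use DBI;

my ($dbh, $owner_pid);

sub get_dbh {
    # Reconnect if this handle was created in another process (pre-fork) or has died.
    unless ($dbh && $owner_pid == $$ && $dbh->ping) {
        $dbh = DBI->connect(
            'dbi:mysql:database=mydb;host=localhost',   # placeholder DSN
            'user', 'password',
            { RaiseError => 1, AutoCommit => 1 },
        );
        $owner_pid = $$;
    }
    return $dbh;
}

get '/points.json' => sub {
    my $sth = get_dbh()->prepare('SELECT ...');         # placeholder SQL
    $sth->execute();
    return to_json($sth->fetchall_arrayref({}));
};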