After running my Protractor tests I may be left with chromedriver.exe still running. The simple question is: how do I kill it? There are several things to note here:
I cannot just kill based on process name, since several other chromedrivers may be running and may be needed by other tests.
I already stop the Selenium server using "curl http://localhost:4444/selenium-server/driver/?cmd=shutDownSeleniumServer".
I noticed that the chromedriver is listening on port 33107 (is it possible to specify this port somehow?), but I do not know how to tell it to quit.
Probably I should be using driver.quit() in my tests, but on some occasions it might not get called (e.g. when the build is cancelled).
Any ideas how to kill the proper chromedriver process from the command line (e.g. using curl)?
The proper way to do it is, as you mentioned, by using driver.quit() in your tests.
Actually, to be exact, in your test cleanup method, since you want a fresh instance of the browser every time.
Now, the problem with some unit test frameworks (MSTest, for example) is that if your test initialize method fails, the test cleanup method will not be called.
As a workaround, you can wrap your test initialize statements in a try-catch, with the catch calling your test cleanup.
public void TestInitialize()
{
    try
    {
        // your test initialize statements
    }
    catch
    {
        TestCleanup();
        // throw the exception, log the error message, or whatever else you need
    }
}

public void TestCleanup()
{
    driver.Quit();
}
EDIT:
For the case when the build is cancelled, you can create a method that kills all open instances of the Chrome browser and ChromeDriver, and execute it before you start a new suite of tests.
E.g. if your unit testing framework has something similar to Class Initialize or Assembly Initialize, you can do it there.
However, on a different post I found this approach:
PORT_NUMBER=1234
lsof -i tcp:${PORT_NUMBER} | awk 'NR!=1 {print $2}' | xargs kill
Breakdown of the command:
(lsof -i tcp:${PORT_NUMBER}) -- lists all processes listening on that TCP port
(awk 'NR!=1 {print $2}') -- ignores the first line and prints the second column (the PID) of each remaining line
(xargs kill) -- passes the results as arguments to kill; there may be several.
For more detail, see: How to find processes based on port and kill them all?
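If you control how chromedriver is launched, another option is to record the PID at spawn time, so exactly that instance is killed later without touching other chromedrivers. A minimal sketch follows; `long_running_cmd` is a stand-in for something like `chromedriver --port=9515` (the chromedriver path and port value are assumptions):

```shell
#!/bin/sh
# Sketch: remember the PID at spawn time so only this instance is killed later.
# long_running_cmd stands in for e.g. "chromedriver --port=9515" (assumed).
long_running_cmd() { sleep 300; }

long_running_cmd &
WORKER_PID=$!

# ... run the tests here ...

kill "$WORKER_PID"               # kills exactly this instance, not others by name
wait "$WORKER_PID" 2>/dev/null   # reap it; ignore the non-zero "killed" status
```

This avoids the port lookup entirely, at the cost of needing to own the spawn point (e.g. the CI script that starts chromedriver).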
We use multiple PHP workers, each running in its own container. To scale the number of parallel worker processes, we manage them in a Docker swarm.
The PHP script runs in a loop, waiting for new jobs (fetched from Gearman).
When a new job arrives, it is processed. After that, the script waits for the next job without quitting/leaving the PHP script.
Now we want to update our workers. In this case the image stays the same, but the PHP script changes.
So we have to leave the PHP script, update the PHP script file, and restart the PHP script.
If I use the docker service update command below, Docker stops the container immediately. In the worst case, a running worker is killed in the middle of its work.
docker service update --force PHP-worker
Is there any possibility to restart the Docker container softly?
Soft means: give the container a sign, "I have to do a restart, please finish all running processes", so the container has the chance to finish its work.
In my case, before running the next iteration of the loop, I would check this cancel flag. If the flag is set, I end the loop and stop running the PHP script.
Environment:
Debian: 10
Docker: 19.03.12
PHP: 7.4
In the meantime, we have solved it with signals.
Working with signals in PHP is very easy. In our case, this structure helped us:
// Terminate flag
$terminate = false;

// Register signal handlers
pcntl_async_signals(true);
pcntl_signal(SIGTERM, function () use (&$terminate) {
    echo "Got SIGTERM. Ending worker loop\n";
    $terminate = true;
});
pcntl_signal(SIGHUP, function () use (&$terminate) {
    echo "Got SIGHUP. Ending worker loop\n";
    $terminate = true;
});

// Loop
while ($terminate === false) {
    // do next job
}
Before the next job is started, the script checks whether the terminate flag is set.
Docker has great support for gracefully stopping containers.
To define how long Docker waits before force-killing, we used the "stop_grace_period" option.
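For reference, a minimal compose-file sketch of where that option lives (the service name and image are placeholders):

```yaml
# Sketch of a compose service definition; service name and image are placeholders.
services:
  php-worker:
    image: my-php-worker:latest
    # Docker sends SIGTERM first (caught by the pcntl handler above),
    # then waits this long before sending SIGKILL. The default is 10s.
    stop_grace_period: 2m
```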
I need to write a Tcl program through which I can log in to a remote server and execute commands on it; I also need to get the output back from the remote server.
EDIT:
Thanks, Kostix, for the reply. My requirement says that the Tcl script should be able to log in to the remote server. I am planning to send the password via the Expect mechanism, and after that I am planning to send the commands. My sample code goes like this:
set prompt "(%|>|\#|\\\$) #"
spawn /usr/bin/ssh $username@$server
expect {
-re {Are you sure you want to continue connecting \(yes/no\)\?} {
exp_send "yes\r"
exp_continue
#continue to match statements within this expect {}
}
-nocase "password: " {
exp_send "$password\r"
interact
}
}
I am able to log in with this, but I don't know how to extend the code to send commands. I've tried a few methods, but they didn't work out.
Since you're about to use SSH, you might not need Tcl or Expect to carry out this task at all. SSH stands for "Secure SHell": all you need to do to execute commands remotely is tell the SSH client what program to spawn on the remote side after logging in (if you do not, SSH spawns the logged-in user's so-called "login shell"). SSH then feeds that program whatever you pass to the SSH client on its standard input, and channels back whatever the remote program writes to its standard output streams.
To automate logging in via SSH, several ways exist:
Authentication using public keys: if the private (client's) key is not protected by a password, this method requires no scripting at all -- you just tell the SSH client which key to use.
"Keyboard-interactive" authentication (password-based). This is what most people think SSH is all about (which is wrong). Scripting this is somewhat hard, as SSH insists on reading the password from a "real terminal". It can be tricked into believing it has one, either by using Expect or simply by wrapping the call to the SSH client with the sshpass program.
Both the SSH client and the server might also support Kerberos-based or GSSAPI-based authentication. Using it might not require any scripting, as it derives the authentication credentials from the system (the local user's session).
So the next steps to carry out would be to narrow your requirements:
What kind of authentication has to be supported?
What program should perform the commands you intend to send from the client? Will that be a Unix shell? A Tcl shell? Something else?
Should that remote command be scripted using some degree of interactivity (we send something to it then wait for reply and decide what to send next based on it) or batch-style script would be okay?
Until these questions are answered, the initial question makes little sense, as it's too broad and hence does not fit the Stack Overflow format.
Commands on the server can be executed using the exec command, like this:
set a [exec ls -lrta]
puts $a
Or the expect-and-execute loop can be continued as above.
Here is a proc with which Linux commands can easily be run:
package require Expect

proc ExecCommand {username server password cmd} {
    spawn /usr/bin/ssh $username@$server
    expect {
        "*(yes/no)? " {exp_send "yes\r"; exp_continue}
        "*password: " {exp_send "$password\r"; exp_continue}
        "*$ " {
            exp_send -i $spawn_id "$cmd\r"
            expect {
                "*$ " {close; return 1}
            }
        }
    }
    return 0
}
set result [ExecCommand "admin" "0" "qwerty" "ls"]
if {$result > 0 } {
puts "Command successfully executed\n"
} else {
puts "Failed to execute\n"
}
I can start a persistent process on unix with:
nohup process &
It will continue to run after I close my bash session. I cannot seem to do the same with PowerShell remoting on Windows. I can open a PSRemote session with a server and start a process, but as soon as I close that session it dies. My assumption is this is a benefit of strong sandboxing, but it's a benefit I'd rather work around somehow. Any ideas?
So far I've tried:
$exe ='d:\procdump.exe'
$processArgs = '-ma -e -t -n 3 -accepteula w3wp.exe d:\Dumps'
1) [System.Diagnostics.Process]::Start($exe,$processArgs)
2) Start-Job -ScriptBlock {param($exe,$processArgs) [System.Diagnostics.Process]::Start($exe,$processArgs)} -ArgumentList ($exe,$processArgs)
3) start powershell {param($exe ='d:\procdump.exe', $processArgs = '-ma -e -t -n 3 -accepteula w3wp.exe d:\Dumps') [System.Diagnostics.Process]::Start($exe,$processArgs)}
4) start powershell {param($exe ='d:\procdump.exe', $processArgs = '-ma -e -t -n 3 -accepteula w3wp.exe d:\Dumps') Start-Job -ScriptBlock {param($exe,$processArgs) [System.Diagnostics.Process]::Start($exe,$processArgs)} -ArgumentList ($exe,$processArgs)}
The program runs until I close the session, then procdump is reaped. The coolest thing about procdump is that it will self-terminate, and I'd like to leave it running to take advantage of that fact.
I'd been starting ADPlus remotely, holding a session open, and just terminating the session to kill the captures. That's kind of handy, but it requires an awful lot of polling, inspecting, and deciding when is the right moment to kill the capture process before filling up the hard drive but after capturing enough dumps to be useful. I can leave procdump running indefinitely while it waits for an appropriate trigger and when it's captured enough data it will just die. That's lovely.
I just need to get procdump to keep running after I terminate my remote session. It's probably not worth creating a procdump scheduled task and starting it, but that's about the last idea I've got left.
Thanks.
This is not directly possible. Indirectly, yes: a task or a service could be created and started remotely. But simply pushing a process off into the SYSTEM space is not possible.
I resolved my issue by spawning a local job that starts the remote job and remains alive for the required period of time. The local job holds the remote session open, then dies at the appropriate time; the parent local process continues to run uninterrupted and can harvest the return value of the remote procdump with Receive-Job if I happen to care.
It is possible that my question title is misleading, but here goes --
I am trying out a prototype app which involves three MySQL/Perl Dancer powered web apps.
The user goes to app A which serves up a Google maps base layer. On document ready, app A makes three jQuery ajax calls -- two to app B like so
http://app_B/points.json
http://app_B/polys.json
and one to app C
http://app_C/polys.json
Apps B and C query the MySQL database via DBI, and serve up json packets of points and polys that are rendered in the user's browser.
All three apps are proxied through Apache to Perl Starman, running via plackup, started like so:
$ plackup -E production -s Starman -w 10 -p 5000 path/to/app_A/app.pl
$ plackup -E production -s Starman -w 10 -p 5001 path/to/app_B/app.pl
$ plackup -E production -s Starman -w 10 -p 5002 path/to/app_C/app.pl
From time to time, I start getting errors back from the apps called via Ajax. The initial symptoms were
{"error":"Warning caught during route
execution: DBD::mysql::st fetchall_arrayref
failed: fetch() without execute() at
<path/to/app_B/app.pm> line 79.\n"}
The offending lines are
71> my $sql = qq{
72> ..
73>
74>
75> };
76>
77> my $sth = $dbh->prepare($sql);
78> $sth->execute();
79> my $res = $sth->fetchall_arrayref({});
This is bizarre... how can execute() not have taken place above? Perl doesn't have a habit of skipping lines, does it? So I turned on DBI_TRACE:
$DBI_TRACE=2=logs/dbi.log plackup -E production -p 5001 -s Starman -w
10 -a bin/app.pl
And the following is what stood out to me as the potential culprit in the log file:
> Handle is not in asynchronous mode error 2000 recorded: Handle is
> not in asynchronous mode
> !! ERROR: 2000 CLEARED by call to fetch method
What is going on? Basically, as is, app A is non-functional because the other apps don't return data "reliably" -- I put that in quotes because they do work correctly occasionally, so I know I don't have any logic or syntax errors in my code. I have some kind of intrinsic plumbing errors.
I did find the following on DBD::mysql about ASYNCHRONOUS_QUERIES and am wondering if this is both the cause of and the solution to my problem. Essentially, if I want async queries, I have to add {async => 1} to my $dbh->prepare(). Except I am not sure whether I want async true or false. I tried it, and it doesn't seem to help.
I would love to learn what is going on here, and what is the right way to solve this.
How are you managing your database handles? If you are opening a connection before Starman forks your code, then multiple children may be trying to share one database handle, which confuses MySQL. You can solve this by always running a DBI->connect in the methods that talk to the database, but that can be inefficient. Many people switch to some sort of connection pool, but I have no direct experience with any of them.
I have a webapp that segfaults when the database is restarted and it tries to use its old connections. Running it under gdb --args apache -X leads to the following output:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1212868928 (LWP 16098)]
0xb7471c20 in mysql_send_query () from /usr/lib/libmysqlclient.so.15
I've checked that the drivers and database are all up to date (DBD::mysql 4.0008, MySQL 5.0.32-Debian_7etch6-log).
Annoyingly I can't reproduce this with a trivial script:
use DBI;
use Test::More tests => 2;
my $dbh = DBI->connect( "dbi:mysql:test", 'root' );
sub test_db {
my ($number) = $dbh->selectrow_array("select 1 ");
return $number;
}
is test_db, 1, "connected to db";
warn "restart db now";
getc;
is test_db, 1, "connected to db";
Which gives the following:
ok 1 - connected to db
restart db now at dbd-mysql-test.pl line 23.
DBD::mysql::db selectrow_array failed: MySQL server has gone away at dbd-mysql-test.pl line 17.
not ok 2 - connected to db
# Failed test 'connected to db'
# at dbd-mysql-test.pl line 26.
# got: undef
# expected: '1'
This behaves correctly, telling me why the request failed.
What stumps me is that it is segfaulting, which it shouldn't do. As it only appears to happen when the whole app is running (which uses DBIx::Class) it is hard to reduce it to a test case.
Where should I start to look to debug this? Has anyone else seen this?
UPDATE: further prodding showed that it being under mod_perl was a red herring. Having reduced it to a simple test script I've now posted to the DBI mailing list. Thanks for your answers.
What this probably means is that there's a difference between your mod_perl environment and the one you were testing via your script. Some things to check:
Was your mod_perl compiled with the same version of Perl?
Are the @INC paths the same for both?
Are you using threads in your mod_perl setup? I don't believe DBD::mysql is completely thread-safe.
I've seen this problem, but I'm not sure it had the same cause as yours. Are you by chance using a certain module for sending mails (I forget the name, sorry) from your application? When we hit the problem in a project, after days of debugging we found that this mail module was doing strange things with open file descriptors, then forked off another process which called the console tool sendmail, which again did strange things with file descriptors. I guess one of the file descriptors it messed with was the connection to the database, but I'm still not sure about that. The problem disappeared when we switched to another module for sending mails. Maybe it's worth a look for you too.
If you're getting a segfault, do you have a core file created? If not, check ulimit -c. If that returns 0, your system won't create core files and you'll have to change that. If you do have a core file, you can use gdb or similar tools to debug it. It's not particularly fun, but it's possible. The start of the command will look something like:
gdb /usr/bin/httpd core
There are plenty of tutorials for debugging core files scattered about the Web.
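As a concrete sketch of that check (the httpd binary path and core file location vary by distro and are assumptions here):

```shell
#!/bin/sh
# Allow core files for this shell session and everything it starts.
ulimit -c unlimited
ulimit -c            # prints "unlimited" if the change took effect

# Then reproduce the crash and load the core into gdb, e.g.:
# gdb /usr/bin/httpd /path/to/core    # binary and core paths are assumptions
# Inside gdb, "bt" prints the backtrace of the crashing thread.
```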
Update: Just found a reference for ensuring you get core dumps from mod_perl. That should help.
This is a known problem in old versions of DBD::mysql. Upgrade it (4.008 is not up to date).
There's a simple test script attached to https://rt.cpan.org/Public/Bug/Display.html?id=37027 that will trigger this bug.