I've implemented a task that runs a server and reacts (reloads) to changes in my source files:
// Imports assumed from my setup (paths are illustrative):
import browserSync from 'browser-sync';
import config from './config';

export function serveDev (gulp) {
  return () => {
    const bs = browserSync.create();
    const stream = bs.init(config.browsersync.opts);
    gulp.watch(`${config.source}/components/**/*.js`, gulp.series('scripts'));
    gulp.watch(`${config.source}/js/**/*.js`, gulp.series('scripts'));
    gulp.watch(config.browsersync.watch).on('change', bs.reload);
    return stream;
  };
}
I'm using gulp 4.0 and I'm running this task from the command line.
What's the correct way to terminate this task cleanly when the user hits CTRL+C?
When I terminate this running task with the keyboard shortcut CTRL+C, I get the following error:
The following tasks did not complete: serve, sync
Did you forget to signal async completion?
The task works properly until the user hits CTRL+C. When the signal from CTRL+C reaches the task, the error described above is printed. How can I catch, and react properly to, the termination signal coming from CTRL+C?
You can kill the background process from the terminal with the pkill command, like:
pkill gulp
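Alternatively, you can handle the interrupt inside the task itself. A minimal sketch (it assumes the bs instance from the question; bs.exit() and the exit code are illustrative cleanup choices):

// Inside the returned task, after creating the BrowserSync instance:
process.once('SIGINT', () => {
  bs.exit();       // shut the BrowserSync instance down cleanly
  process.exit(0); // exit explicitly so gulp doesn't report unfinished tasks
});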
We use multiple PHP workers, each organized in its own container, and we scale the number of parallel worker processes with a Docker swarm.
So each PHP script runs in a loop, waiting for new jobs (fetched from Gearman).
When a new job is received, it is processed; after that, the script waits for the next job without exiting the PHP script.
Now we want to update our workers. In this case the image stays the same, but the PHP script changes.
So we have to leave the PHP script, update the script file, and restart the script.
If I use the docker service update command below, Docker stops the container immediately. In the worst case, a running worker is cancelled in the middle of its work.
docker service update --force PHP-worker
Is there any possibility to restart the Docker container softly?
Soft means: give the container a sign, "I have to do a restart, please cancel all running processes," so that the container has the chance to finish its work.
In my case, before I run the next iteration of the loop, I would check this cancel flag. If the cancel flag is set, I would end the loop and stop the PHP script.
Environment:
Debian: 10
Docker: 19.03.12
PHP: 7.4
In the meantime, we have solved it with signals.
Working with signals in PHP is very easy. In our case, this structure helped us:
// Terminate flag
$terminate = false;

// Register signal handlers
pcntl_async_signals(true);

pcntl_signal(SIGTERM, function () use (&$terminate) {
    echo "Got SIGTERM. Ending worker loop\n";
    $terminate = true;
});

pcntl_signal(SIGHUP, function () use (&$terminate) {
    echo "Got SIGHUP. Ending worker loop\n";
    $terminate = true;
});

// Worker loop
while ($terminate === false) {
    // do next job
}
Before the next job is started, the loop checks whether the terminate flag is set.
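Concretely, with the pecl gearman extension you can give the worker a timeout so the flag is re-checked even when no job arrives. A sketch (server address, function name, and handler are illustrative):

$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);        // illustrative server address
$worker->addFunction('process_job', 'doJob'); // illustrative job handler

// Wake up every 5 seconds so $terminate is re-checked even when idle.
$worker->setTimeout(5000);

while ($terminate === false) {
    @$worker->work();
    if ($worker->returnCode() === GEARMAN_TIMEOUT) {
        continue; // no job arrived; loop around and re-check the flag
    }
}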
Docker has great support for gracefully stopping containers: on shutdown it first sends SIGTERM and only kills the container after a grace period.
To define the time to wait, we used the stop_grace_period option.
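In the compose/stack file this looks roughly as follows (service and image names are placeholders):

version: "3.7"
services:
  php-worker:
    image: my-php-worker:latest # placeholder image name
    stop_grace_period: 2m       # how long Docker waits after SIGTERM before SIGKILL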
After two weeks of trying to run my site, I'm asking for your help.
Has anyone hosted Sails.JS on PlanetHoster?
My queries don't work because the connection to the database doesn't seem established.
Here's an example of some very simple queries:
await User.findOne({ email: email });
Here's what's displayed in the browser error console:
Uncaught (in promise) Error: Request failed with status code 500
I've tried to handle the errors but nothing is displayed...
try { await User.findOne({ email: email }); } catch (err) { /* nothing is ever logged */ }
So I've deduced that it was a problem with calling the database.
Unfortunately, I have no way to read the error logs ...
Yet, I've set up the production.js file (config/env/production.js), and when I run NODE_ENV=production node app.js, the app still starts in development mode. In fact, PlanetHoster doesn't require running the sails lift command; it just runs the platform already...
I'm currently at a total loss as to where to go from here, so if you have suggestions, I will take them with pleasure.
Thank you
Environment: Sails v1.0.2
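For anyone comparing setups: in Sails v1 the production database connection normally lives in config/env/production.js. A minimal sketch (adapter, URL, and credentials are placeholders, not PlanetHoster specifics):

// config/env/production.js (sketch)
module.exports = {
  datastores: {
    default: {
      adapter: 'sails-mysql',                        // or sails-postgresql, sails-mongo, ...
      url: 'mysql://user:password@host:3306/dbname', // placeholder connection URL
    },
  },
  models: {
    migrate: 'safe', // never auto-migrate in production
  },
};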
I'm trying to monitor the number of restarts, CPU, and memory of microservices managed by PM2, and to create an alert in AWS CloudWatch if a module is restarting.
pm2 list
The command returns the data formatted as a table for humans, which I would like to avoid parsing.
Is there any way to get the number of process restarts in a more machine-readable format than the one returned by the pm2 list command?
I looked at the pm2 get command but can't find documentation about the keys I can use there.
You can get all kinds of details (including restarts) in JSON format with
pm2 prettylist (pretty-printed)
or with
pm2 jlist (raw).
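For example, assuming jq is available, you can pull the restart counter (PM2 stores it as pm2_env.restart_time) out of the jlist output:

# Print "name restarts" for every PM2-managed process
pm2 jlist | jq -r '.[] | "\(.name) \(.pm2_env.restart_time)"'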
PM2 also has a programmatic API:
var pm2 = require('pm2');

// Connect to or launch PM2
pm2.connect(function (err) {
  // Start a script in the current folder
  pm2.start('test.js', { name: 'test' }, function (err, proc) {
    if (err) throw err;

    // Get all processes currently managed by PM2
    pm2.list(function (err, process_list) {
      console.log(process_list);

      // Disconnect from PM2
      pm2.disconnect(function () { process.exit(0); });
    });
  });
});
Details on the API: pm2-api
After running my protractor tests I may be left with chromedriver.exe running. The simple question is: how do I kill it? There are several things to note here:
I cannot just kill based on process name since several other chromedrivers may be running and may be needed by other tests.
I already stop the selenium server using "curl http://localhost:4444/selenium-server/driver/?cmd=shutDownSeleniumServer"
I noticed that the chromedriver is listening on port 33107 (is it possible to specify this port somehow?), but I do not know how I should call it to make it quit.
Probably I should be using driver.quit() in my tests, but on some occasions it might not get called (e.g. when the build is cancelled).
Any ideas how to kill the proper chromedriver process from the command line (e.g. using curl)?
The proper way to do it is, as you mentioned, by using driver.quit() in your tests.
To be exact, in your test cleanup method, since you want a fresh instance of the browser every time.
Now, the problem with some unit test frameworks (like MSTest, for example) is that if your test initialize method fails, the test cleanup one will not be called.
As a workaround, you can surround your test initialize statements with a try-catch, with the catch calling and executing your test cleanup:
public void TestInitialize()
{
    try
    {
        // your test initialize statements
    }
    catch
    {
        TestCleanup();
        // throw the exception, log the error message, or whatever else you need
    }
}

public void TestCleanup()
{
    driver.Quit();
}
EDIT:
For the case when the build is cancelled, you can create a method that kills all open instances of the Chrome browser and ChromeDriver, and have it executed before you start a new suite of tests.
E.g. if your unit testing framework has something similar to Class Initialize or Assembly Initialize, you can do it there, as sketched below.
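A rough MSTest-flavored sketch (names are illustrative; note that this kills every chromedriver and chrome process on the machine, so it only makes sense on a dedicated build agent):

using System;
using System.Diagnostics;

// Runs once before the whole test assembly.
[AssemblyInitialize]
public static void KillStaleDrivers(TestContext context)
{
    foreach (var name in new[] { "chromedriver", "chrome" })
    {
        foreach (var process in Process.GetProcessesByName(name))
        {
            try { process.Kill(); }
            catch (Exception) { /* the process may have exited already */ }
        }
    }
}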
However, on a different post I found this approach:
PORT_NUMBER=1234
lsof -i tcp:${PORT_NUMBER} | awk 'NR!=1 {print $2}' | xargs kill
Breakdown of the command:
(lsof -i tcp:${PORT_NUMBER}) -- list all processes that are listening on that tcp port
(awk 'NR!=1 {print $2}') -- ignore the first line, print the second column (the PID) of each line
(xargs kill) -- pass on the results as arguments to kill. There may be several.
Here, to be more exact: How to find processes based on port and kill them all?
Using web2py (Version 2.8.2-stable+timestamp.2013.11.28.13.54.07) on 64-bit Windows, I have the following problem:
There is an exe program that is started on user request (first a txt file is created, then p is triggered).
p = subprocess.Popen(['woshi_engine.exe', scriptId], shell=True, stdout=subprocess.PIPE, cwd=path_1)
While the exe file is running, it creates a txt file.
The program is stopped on user request by deleting the file the program needs as input.
While the exe is running, there are other requests the user can trigger. The request usually reaches the server (I used Microsoft Network Monitor to check that), but the controller function is not triggered.
I tried using the scheduler, but with no success: same problem.
I am really stuck here with this problem
Thank you for your help
With the help of the web2py Google group, here is the solution.
I used the scheduler and created a scheduler.py file with the following code:
import subprocess

def runWoshiEngine(scriptId, path):
    # Start the engine as a background process; the scheduler worker
    # (not the web server) blocks on it, so requests stay responsive.
    p = subprocess.Popen(['woshi_engine.exe', scriptId],
                         shell=True, stdout=subprocess.PIPE, cwd=path)
    return dict(status=1)

from gluon.scheduler import Scheduler
scheduler = Scheduler(db)
In my controller function:
task = scheduler.queue_task(runWoshiEngine, [scriptId, path])
You also have to import the scheduler in the controller (from gluon.scheduler import Scheduler).
Then I run the scheduler from the command prompt with the following command (so, if I understood correctly, you have two instances of web2py running: one for the web server, one for the scheduler):
web2py.py -K woshiweb -D 0
(-D 0 enables verbose logging, so it can be removed.)