Karma fails to capture Chrome v93 and times out / gives up

We have just encountered a number of build failures in our CI environment (TeamCity/Windows) following a recent auto-update of Google Chrome to version 93. These failures all look like this:
[14:02:41] : [Step 3/12] > OUR_APP_PACKAGE@0.0.0 ci-test C:\BuildAgent\work\7084fa910d4648a4\OUR_APP_PACKAGE
[14:02:41] : [Step 3/12] > ng test --watch=false --sourceMap=false
[14:02:41] : [Step 3/12]
[14:03:00] : [Step 3/12] 01 09 2021 14:02:59.394:INFO [karma]: Karma v3.0.0 server started at http://0.0.0.0:9876/
[14:03:00] : [Step 3/12] 01 09 2021 14:02:59.779:INFO [launcher]: Launching browser Chrome with unlimited concurrency
[14:03:00] : [Step 3/12] 01 09 2021 14:02:59.793:INFO [launcher]: Starting browser Chrome
[14:04:00] : [Step 3/12] 01 09 2021 14:04:00.752:WARN [launcher]: Chrome have not captured in 60000 ms, killing.
[14:04:00] : [Step 3/12] 01 09 2021 14:04:00.820:INFO [launcher]: Trying to start Chrome again (1/2).
[14:05:01] : [Step 3/12] 01 09 2021 14:05:01.422:WARN [launcher]: Chrome have not captured in 60000 ms, killing.
[14:05:01] : [Step 3/12] 01 09 2021 14:05:01.461:INFO [launcher]: Trying to start Chrome again (2/2).
[14:06:01] : [Step 3/12] 01 09 2021 14:06:01.837:WARN [launcher]: Chrome have not captured in 60000 ms, killing.
[14:06:01] : [Step 3/12] 01 09 2021 14:06:01.879:ERROR [launcher]: Chrome failed 2 times (timeout). Giving up.
[14:06:02]W: [Step 3/12] npm ERR! code ELIFECYCLE
[14:06:02]W: [Step 3/12] npm ERR! errno 1
[14:06:02]W: [Step 3/12] npm ERR! OUR_APP_PACKAGE@0.0.0 ci-test: `ng test --watch=false --sourceMap=false`
[14:06:02]W: [Step 3/12] npm ERR! Exit status 1
[14:06:02]W: [Step 3/12] npm ERR!
[14:06:02]W: [Step 3/12] npm ERR! Failed at the OUR_APP_PACKAGE@0.0.0 ci-test script.
[14:06:02]W: [Step 3/12] npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
[14:06:02]W: [Step 3/12]
[14:06:02]W: [Step 3/12] npm ERR! A complete log of this run can be found in:
[14:06:02]W: [Step 3/12] npm ERR! C:\Users\teamcity\AppData\Roaming\npm-cache\_logs\2021-09-01T13_06_02_086Z-debug.log
[14:06:02]W: [Step 3/12] Process exited with code 1
[14:05:46]E: [Step 3/12] Process exited with code 1 (Step: "npm run ci-test" (test Angular app) (Command Line))
We have ruled out any changes to our codebase as the cause of this error. Repeating a CI build of a commit which previously built successfully (on the same build agent, with the exact same build config) also now fails.
Subsequently we noticed that all failures were on a single build agent, but on the following day another agent also began to fail. The common factor amongst the build agents which exhibited the failure was that they had auto-updated to Google Chrome v93.

Setting aside whether there is a true bug in Google Chrome, we noticed that we could work around this problem by using ChromeHeadless instead of regular Chrome in our Karma config. We made the following one-line change to karma.conf.js and everything is working once again. I've included the whole file, but really only the browsers: line is relevant.
We had no particular reason to be using full Chrome instead of Chrome Headless, so this workaround suits us indefinitely.
module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine', '@angular-devkit/build-angular'],
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-jasmine-html-reporter'),
      require('karma-coverage-istanbul-reporter'),
      require('@angular-devkit/build-angular/plugins/karma')
    ],
    client: {
      clearContext: false
    },
    coverageIstanbulReporter: {
      dir: require('path').join(__dirname, 'coverage'),
      reports: ['html', 'lcovonly'],
      fixWebpackSourcePaths: true
    },
    angularCli: {
      environment: 'dev'
    },
    reporters: ['progress', 'kjhtml'],
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['ChromeHeadless'], // Previously this was 'Chrome'
    singleRun: false
  });
};

We started exploring the same issue, and what helped was switching to a custom launcher with a couple of flags.
Before, our karma.conf.js contained the following settings:
module.exports = function (config) {
  config.set({
    ...
    browsers: ['Chrome'],
    ...
    customLaunchers: {
      ChromeHeadlessNoSandbox: {
        base: 'ChromeHeadless',
        flags: ['--no-sandbox']
      }
    },
  });
}
Now it contains the following changes, and the tests run again:
module.exports = function (config) {
  config.set({
    ...
    browsers: ['ChromeNoSandbox'],
    ...
    customLaunchers: {
      ChromeNoSandbox: {
        base: 'Chrome',
        flags: [
          '--no-sandbox',
        ]
      }
    },
  });
}

Related

NodeJS - MySQL server not working after some time

Basically, I have my Node.js server working along with MySQL. When I work on my localhost everything's fine. The connection to my local DB (I'm using XAMPP) is great and nothing breaks.
The problem comes when the server is hosted by a provider. The one I hired uses cPanel, and everything's great until some time passes, because then I get this error:
events.js:377
throw er; // Unhandled 'error' event
^
Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
Emitted 'error' event on Connection instance at:
at Connection._handleProtocolError (/home/adminis6/Artesofa/node_modules/mysql/lib/Connection.js:423:8)
at Protocol.emit (events.js:400:28)
at Protocol._delegateError (/home/adminis6/Artesofa/node_modules/mysql/lib/protocol/Protocol.js:398:10)
at Protocol.handleNetworkError (/home/adminis6/Artesofa/node_modules/mysql/lib/protocol/Protocol.js:371:10)
at Connection._handleNetworkError (/home/adminis6/Artesofa/node_modules/mysql/lib/Connection.js:418:18)
at Socket.emit (events.js:400:28)
at emitErrorNT (internal/streams/destroy.js:106:8)
at emitErrorCloseNT (internal/streams/destroy.js:74:3)
at processTicksAndRejections (internal/process/task_queues.js:82:21) {
errno: -104,
code: 'ECONNRESET',
syscall: 'read',
fatal: true
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! amor-muebles@1.0.0 start: `node app.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the amor-muebles@1.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/adminis6/.npm/_logs/2021-11-30T19_15_37_322Z-debug.log
I've been researching how to solve this problem, and the only useful answer I found basically said that the DB connection was timing out, so all I had to do was make a request on an interval and hope it wouldn't break. So I wrote the following code in my app.js file:
const fetch = require("node-fetch");
setInterval(() => {
  fetch('sample-endpoint');
}, 30000);
Although this seemed to solve my problem, the error appeared over and over again (though the server did stay up longer).
Later on, some people taught me about cron jobs, so I made the following cron entry:
PATH=$PATH:$HOME/bin; export PATH; /usr/bin/pgrep "node" >/dev/null || (cd /home/adminis6/Artesofa/; pkill node; pkill npm; nohup npm start &)
And it does work, in that it gets the server up, but then it instantly crashes (literally right after the server starts, even after it connects to the DB successfully), and it logs the following:
> amor-muebles@1.0.0 start /home/adminis6/Artesofa
> node app.js
Server running on port 3100
mysql connected
events.js:377
throw er; // Unhandled 'error' event
^
Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
Emitted 'error' event on Connection instance at:
at Connection._handleProtocolError (/home/adminis6/Artesofa/node_modules/mysql/lib/Connection.js:423:8)
at Protocol.emit (events.js:400:28)
at Protocol._delegateError (/home/adminis6/Artesofa/node_modules/mysql/lib/protocol/Protocol.js:398:10)
at Protocol.handleNetworkError (/home/adminis6/Artesofa/node_modules/mysql/lib/protocol/Protocol.js:371:10)
at Connection._handleNetworkError (/home/adminis6/Artesofa/node_modules/mysql/lib/Connection.js:418:18)
at Socket.emit (events.js:400:28)
at emitErrorNT (internal/streams/destroy.js:106:8)
at emitErrorCloseNT (internal/streams/destroy.js:74:3)
at processTicksAndRejections (internal/process/task_queues.js:82:21) {
errno: -104,
code: 'ECONNRESET',
syscall: 'read',
fatal: true
}
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! amor-muebles@1.0.0 start: `node app.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the amor-muebles@1.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/adminis6/.npm/_logs/2021-11-30T20_14_02_182Z-debug.log
I don't know what else to try, nor do I have much more time. Please help!
If you need it, here is my app.js:
/* ----------- Server initialization ----------- */
// Here are all the module requires
const app = express();
// Connect to mySQL DB
const db = require('./db/connectDB');
// Set server to listen on specified port
app.listen(process.env.PORT || '4000', () => {
  console.log(`Server running on port ${process.env.PORT} AAAAA`);
})
app.set('view engine', 'ejs');
app.use(express.static('public'));
app.set('views', [
  path.join(__dirname, 'views/adminSite/')
]);

/* ----------- Middleware ----------- */
app.use(express.urlencoded({ extended: true }));
app.use(helmet());
app.use(cookieParser());
app.use(morgan('tiny'));

/* ----------- Routes ----------- */
app.use('/api', apiRoutes);

setInterval(() => {
  fetch('https://administracionartesofa.com/api/sucursales');
}, 30000);
And, finally, here is my connectDB file:
const mysql = require('mysql');
const dotenv = require('dotenv').config();
const settings = process.env.ENV === 'dev' ? require('./devSettings.json') : require('./prodSettings.json');

let db;
const connectDatabase = () => {
  if (!db) {
    db = mysql.createConnection(settings);
    db.connect((err) => {
      if (err) {
        console.log('Database error');
        console.log(err);
        connectDatabase();
      } else {
        console.log('mysql connected');
      }
    })
  }
  return db;
}

module.exports = connectDatabase();
Use a MySQL connection pool in your Node.js program. Your hosting provider's cheap and nasty shared MySQL server has an aggressively short idle-connection time limit. If you hold a connection open too long, the server slams it shut and you get ECONNRESET.
Why? Cybercreeps trying to break into random servers on the internet for fun and profit. This slows them down a bit, hopefully.
Connection pools cope with this behind the scenes if you:
- set up a pool at app startup, and
- grab a connection from the pool when you need one, use it, and then return it to the pool.
Or, you can skip the pooling and just close your connection when you're done using it, then open a new one when you need it again. That will work fine for a low-volume app, but it might cause some inefficient connection thrashing if your volume goes up. Pools are better.
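For illustration, here is a minimal sketch of the pooled approach using the mysql package (the connection settings and query are placeholders, not the asker's actual config):

// db/pool.js -- create one pool at app startup and share it everywhere
const mysql = require('mysql');

const pool = mysql.createPool({
  connectionLimit: 10,            // max simultaneous connections
  host: process.env.DB_HOST,      // placeholder settings, not the asker's
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME
});

module.exports = pool;

// elsewhere: pool.query() checks out a connection, runs the query,
// and returns the connection to the pool automatically
pool.query('SELECT 1', (err, rows) => {
  if (err) console.error(err);
});

The pool also transparently replaces connections the server has dropped, which is exactly what defeats the idle-timeout ECONNRESET here.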

How to manually recreate the bootstrap client certificate for OpenShift 3.11 master?

Our origin-node.service on the master node fails with:
root@master> systemctl start origin-node.service
Job for origin-node.service failed because the control process exited with error code. See "systemctl status origin-node.service" and "journalctl -xe" for details.
root@master> systemctl status origin-node.service -l
[...]
May 05 07:17:47 master origin-node[44066]: bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 05 07:17:47 master origin-node[44066]: bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 05 07:17:47 master origin-node[44066]: certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 05 07:17:47 master origin-node[44066]: server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
So it seems that kubelet-client-current.pem and/or kubelet-server-current.pem contains an expired certificate, and the service tries to create a CSR using an endpoint which is probably not yet available (because the master is down). We tried redeploying the certificates according to the OpenShift documentation, Redeploying Certificates, but this fails when it detects the expired certificates:
root@master> ansible-playbook -i /etc/ansible/hosts openshift-master/redeploy-openshift-ca.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] *******************************************************************************************************************************************
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200505T042754.html or /root/cert-expiry-report.20200505T042754.json.\n"}
[...]
root@master> cat /root/cert-expiry-report.20200505T042754.json
[...]
"kubeconfigs": [
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
{
"cert_cn": "O:system:cluster-admins, CN:system:admin",
"days_remaining": -75,
"expiry": "2020-02-20 13:14:27",
"health": "expired",
"issuer": "CN=openshift-signer#1519045219 ",
"path": "/etc/origin/node/node.kubeconfig",
"serial": 27,
"serial_hex": "0x1b"
},
[...]
"summary": {
"expired": 2,
"ok": 22,
"total": 24,
"warning": 0
}
}
There is a guide for OpenShift 4.4, Recovering from expired control plane certificates, but that does not apply to 3.11, and we did not find an equivalent guide for our version.
Is it possible to recreate the expired certificates without a running master node for 3.11? Thanks for any help.
OpenShift Ansible: https://github.com/openshift/openshift-ansible/releases/tag/openshift-ansible-3.11.153-2
Update 2020-05-06: I also executed redeploy-certificates.yml, but it fails at the same TASK:
root@master> ansible-playbook -i /etc/ansible/hosts playbooks/redeploy-certificates.yml
[...]
TASK [openshift_certificate_expiry : Fail when certs are near or already expired] ******************************************************************************
Wednesday 06 May 2020 04:07:06 -0400 (0:00:00.909) 0:01:07.582 *********
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"changed": false, "msg": "Cluster certificates found to be expired or within 60 days of expiring. You may view the report at /root/cert-expiry-report.20200506T040603.html or /root/cert-expiry-report.20200506T040603.json.\n"}
Update 2020-05-11: Running with -e openshift_certificate_expiry_fail_on_warn=False results in:
root@master> ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml
[...]
TASK [Wait for master API to come back online] *****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.111) 0:02:25.186 ************
skipping: [master.openshift-cluster.mydomain.com]
TASK [openshift_control_plane : restart master] ****************************************************************************************************************
Monday 11 May 2020 03:48:56 -0400 (0:00:00.257) 0:02:25.444 ************
changed: [master.openshift-cluster.mydomain.com] => (item=api)
changed: [master.openshift-cluster.mydomain.com] => (item=controllers)
RUNNING HANDLER [openshift_control_plane : verify API server] **************************************************************************************************
Monday 11 May 2020 03:48:57 -0400 (0:00:00.945) 0:02:26.389 ************
FAILED - RETRYING: verify API server (120 retries left).
FAILED - RETRYING: verify API server (119 retries left).
[...]
FAILED - RETRYING: verify API server (1 retries left).
fatal: [master.openshift-cluster.mydomain.com]: FAILED! => {"attempts": 120, "changed": false, "cmd": ["curl", "--silent", "--tlsv1.2", "--max-time", "2", "--cacert", "/etc/origin/master/ca-bundle.crt", "https://lb.openshift-cluster.mydomain.com:8443/healthz/ready"], "delta": "0:00:00.182367", "end": "2020-05-11 03:51:52.245644", "msg": "non-zero return code", "rc": 35, "start": "2020-05-11 03:51:52.063277", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
root@master> systemctl status origin-node.service -l
[...]
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: E0511 04:23:28.077964 109972 bootstrap.go:195] Part of the existing bootstrap client certificate is expired: 2020-02-20 13:14:27 +0000 UTC
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.078001 109972 bootstrap.go:56] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: I0511 04:23:28.080555 109972 certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
May 11 04:23:28 master.openshift-cluster.mydomain.com origin-node[109972]: F0511 04:23:28.130968 109972 server.go:262] failed to run Kubelet: cannot create certificate signing request: Post https://lb.openshift-cluster.mydomain.com:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: EOF
[...]
I had this same case in a customer environment. The error occurs because the certificate has expired; I "cheated" by changing the OS date to a point before the expiry date, after which the origin-node service started on my masters:
systemctl status origin-node
● origin-node.service - OpenShift Node
   Loaded: loaded (/etc/systemd/system/origin-node.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-02-20 20:22:21 -02; 6min ago
     Docs: https://github.com/openshift/origin
 Main PID: 37230 (hyperkube)
   Memory: 79.0M
   CGroup: /system.slice/origin-node.service
           └─37230 /usr/bin/hyperkube kubelet --v=2 --address=0.0.0.0 --allow-privileged=true --anonymous-auth=true --authentication-token-webhook=true --authentication-token-webhook-cache-ttl=5m --authorization-mode=Webhook --authorization-webhook-c...
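For completeness, a hedged sketch of that clock "cheat" on a RHEL/CentOS 7 master (this assumes chronyd is the time-sync service; the date shown is just an example earlier than the 2020-02-20 expiry):

# stop time sync so the manual date change is not immediately reverted
systemctl stop chronyd
# wind the clock back to before the certificate expiry
date --set="2020-02-19 12:00:00"
# the node can now bootstrap against the still-"valid" certificates
systemctl start origin-node
# once certificates are redeployed, restore real time
systemctl start chronyd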
The openshift_certificate_expiry role uses the openshift_certificate_expiry_fail_on_warn variable to determine if the playbook should fail when the days left are less than openshift_certificate_expiry_warning_days.
So try running the redeploy-certificates.yml with this additional variable set to "False":
ansible-playbook -i /etc/ansible/hosts -e openshift_certificate_expiry_fail_on_warn=False playbooks/redeploy-certificates.yml

E/launcher - unknown error: Chrome failed to start: exited abnormally, Protractor

I am new to Protractor, as well as automation testing. I ran my conf.js file, but the process terminated with the below error related to Chrome.
Error message:
$$> protractor conf.js
[08:38:14] I/launcher - Running 1 instances of WebDriver
[08:38:14] I/direct - Using ChromeDriver directly...
[08:39:14] E/launcher - unknown error: Chrome failed to start: exited
abnormally (Driver info: chromedriver=2.37.544315
(730aa6a5fdba159ac9f4c1e8cbc59bf1b5ce12b7),platform=Linux
3.10.0-1062.1.1.el7.x86_64 x86_64) [08:39:14] E/launcher - WebDriverError: unknown error: Chrome failed to start: exited
abnormally (Driver info: chromedriver=2.37.544315
(730aa6a5fdba159ac9f4c1e8cbc59bf1b5ce12b7),platform=Linux
3.10.0-1062.1.1.el7.x86_64 x86_64)
at Object.checkLegacyResponse (/usr/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/error.js:546:15)
at parseHttpResponse (/usr/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/http.js:509:13)
at doSend.then.response (/usr/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/http.js:441:30)
at process._tickCallback (internal/process/next_tick.js:68:7)
From: Task: WebDriver.createSession()
at Function.createSession (/usr/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/webdriver.js:769:24)
at Function.createSession (/usr/lib/node_modules/protractor/node_modules/selenium-webdriver/chrome.js:761:15)
at Direct.getNewDriver (/usr/lib/node_modules/protractor/built/driverProviders/direct.js:77:33)
at Runner.createBrowser (/usr/lib/node_modules/protractor/built/runner.js:195:43)
at q.then.then (/usr/lib/node_modules/protractor/built/runner.js:339:29)
at _fulfilled (/usr/lib/node_modules/protractor/node_modules/q/q.js:834:54)
at /usr/lib/node_modules/protractor/node_modules/q/q.js:863:30
at Promise.promise.promiseDispatch (/usr/lib/node_modules/protractor/node_modules/q/q.js:796:13)
at /usr/lib/node_modules/protractor/node_modules/q/q.js:556:49
at runSingle (/usr/lib/node_modules/protractor/node_modules/q/q.js:137:13)
[08:39:14] E/launcher - Process exited with error code 199
I have tried all the possible solutions I could find, from upgrading Chrome to a 59.x version to downgrading ChromeDriver to version 2.37, and adding extras like the below to the conf.js file:
directConnect: true,
useAllAngular2AppRoots: true,
capabilities: {
  browserName: 'chrome',
  chromeOptions: {
    'args': ['--no-sandbox']
  }
}
conf.js
exports.config = {
  directConnect: true,
  framework: 'jasmine',
  // seleniumAddress: 'http://localhost:4444/wd/hub',
  specs: ['spec.js'],
  capabilities: {
    browserName: 'chrome',
    chromeOptions: {
      'args': ['--no-sandbox']
    }
  },
  useAllAngular2AppRoots: true
}
Could someone please help me make this work? Thank you.
(New to posting questions on Stack Overflow as well :b)
The main error here is that Chrome is failing to start a session. This happens when the Chrome version being downloaded is the latest beta version. It is a bug in the latest Protractor package, which is being looked into for the Protractor 6 release with backwards compatibility with webdriver-manager. To work around it, you have to pin the ChromeDriver version.
How are you running your webdriver-manager update? Do not use the globally installed Protractor to run your tests; use the Protractor from node_modules. Do the same thing when running webdriver-manager update, and pin the versions, e.g. ./node_modules/protractor/bin/webdriver-manager update --standalone --versions.standalone=3.8.0 --chrome --versions.chrome=78.0.3904.97
Add this to your scripts in package.json.
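A sketch of what that could look like in package.json (the script names here are illustrative, not from the original answer; npm resolves binaries from node_modules/.bin automatically, so these use the local install):

"scripts": {
  "webdriver-update": "webdriver-manager update --standalone --versions.standalone=3.8.0 --chrome --versions.chrome=78.0.3904.97",
  "pretest": "npm run webdriver-update",
  "test": "protractor conf.js"
}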
Add jasmine options in your configuration file:
jasmineNodeOpts: {
  showColors: true,
  defaultTimeoutInterval: 260000,
  isVerbose: true,
  includeStackTrace: true
}
I don't think the latter will help much, but try updating webdriver-manager with the pinned versions and let me know how that goes.

Can not access Chrome headless debug

I am running Angular 5 unit tests on a headless server with Karma and Jasmine, using Chrome headless to run the tests.
I am not able to access Chrome's debug mode when launching with --remote-debugging-port=9223. I tried http://35.1.28.84:9223 as my remote Chrome URL.
I made sure all interfaces are listening with host: '0.0.0.0'. I also made sure the port was open.
Why am I not able to access Chrome's debugger remotely?
START:
29 03 2018 15:38:05.480:INFO [karma]: Karma v2.0.0 server started at http://0.0.0.0:9876/
29 03 2018 15:38:05.482:INFO [launcher]: Launching browser MyHeadlessChrome with unlimited concurrency
29 03 2018 15:38:05.497:INFO [launcher]: Starting browser ChromeHeadless
29 03 2018 15:38:18.487:INFO [HeadlessChrome 0.0.0 (Linux 0.0.0)]: Connected on socket pfKmImL3pGU9ibL7AAAA with id 10485493
headless-karma.conf.js
module.exports = function(config) {
  config.set({
    host: '0.0.0.0',
    basePath: '',
    frameworks: ['jasmine', '@angular/cli'],
    plugins: [
      require('karma-jasmine'),
      require('karma-mocha-reporter'),
      require('karma-chrome-launcher'),
      require('karma-jasmine-html-reporter'),
      require('@angular/cli/plugins/karma')
    ],
    reporters: ['mocha'],
    port: 9876, // karma web server port
    colors: true,
    angularCli: {
      environment: 'dev'
    },
    browsers: ['MyHeadlessChrome'],
    customLaunchers: {
      MyHeadlessChrome: {
        base: 'ChromeHeadless',
        flags: [
          '--disable-translate',
          '--disable-extensions',
          '--no-first-run',
          '--disable-background-networking',
          '--remote-debugging-port=9223',
        ]
      }
    },
    autoWatch: false,
    singleRun: true,
    concurrency: Infinity
  });
};
one@work:~/github/MCTS.UI (dh/headless-unittests)
$ google-chrome --version
Google Chrome 64.0.3282.167
one@work:~/github/MCTS.UI (dh/headless-unittests)
$ google-chrome-stable --version
Google Chrome 64.0.3282.167
There is another parameter you need to supply to Chrome:
--remote-debugging-address=0.0.0.0
Use the given address instead of the default loopback for accepting remote debugging connections. Should be used together with --remote-debugging-port. Note that the remote debugging protocol does not perform any authentication, so exposing it too widely can be a security risk.
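Applied to the headless-karma.conf.js above, the launcher block might look like this (a sketch; only the added address flag is new relative to the asker's config):

customLaunchers: {
  MyHeadlessChrome: {
    base: 'ChromeHeadless',
    flags: [
      '--disable-translate',
      '--disable-extensions',
      '--no-first-run',
      '--disable-background-networking',
      '--remote-debugging-address=0.0.0.0', // listen on all interfaces, not just loopback
      '--remote-debugging-port=9223'
    ]
  }
}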

Karma singleRun not quitting automatically

My Karma test runner isn't automatically quitting after the tests have finished, even though my config has singleRun set to true and I'm not auto-watching files, which according to the docs should make the test runner run once and then quit.
module.exports = function(config) {
  config.set({
    basePath: '',
    browsers: ['PhantomJS'],
    frameworks: ['browserify', 'jasmine'],
    files: [
      { pattern: 'test/*.js', watched: false }
    ],
    preprocessors: {
      'static/js/src/*.js': ['browserify'],
      'test/*.js': ['browserify']
    },
    browserify: {
      debug: true,
      transform: [["babelify", { "presets": ["es2015"] }]]
    },
    colors: true,
    reporters: ['progress'],
    singleRun: true,
    autoWatch: false
  });
};
When run via my gulp test command:
var Karma = require('karma').Server; // assuming Karma here refers to karma's Server class

gulp.task('test', function(done) {
  new Karma({
    configFile: __dirname + '/karma.conf.js',
    singleRun: true
  }, done).start();
});
the tests complete:
[09:18:38] Using gulpfile ~/static-projects/tic-tac-toe-es6/gulpfile.js
[09:18:38] Starting 'test'...
04 02 2016 09:18:40.502:INFO [framework.browserify]: bundle built
04 02 2016 09:18:40.509:INFO [karma]: Karma v0.13.19 server started at http://localhost:9876/
04 02 2016 09:18:40.523:INFO [launcher]: Starting browser PhantomJS
04 02 2016 09:18:41.157:INFO [PhantomJS 1.9.8 (Linux 0.0.0)]: Connected on socket /#xIZCPzrCyB2xljZ7AAAA with id 64233425
PhantomJS 1.9.8 (Linux 0.0.0): Executed 9 of 9 SUCCESS (0.042 secs / 0.003 secs)
[09:18:41] Finished 'test' after 3.1 s
However, I have to manually quit the test runner via Ctrl + c. What am I doing wrong?
I'm having this same problem. There are various threads that allude to it, with various fixes claimed:
https://github.com/karma-runner/gulp-karma/issues/3
https://github.com/karma-runner/karma/issues/1035
However, as far as I can tell this is still an issue (or maybe it has resurfaced?). The only way I've found to successfully execute Karma from gulp is by spawning a child process to start it:
var spawn = require('child_process').spawn;
gulp.task('test', function(done) {
  var child = spawn('karma', ['start', __dirname + '/karma.conf.js'], { stdio: 'inherit' });
  // signal gulp once the karma process actually exits,
  // otherwise the task never completes
  child.on('close', function() {
    done();
  });
});