using puppeteer-recorder to record video of browser - puppeteer

I'm trying to record a video of Puppeteer so I can see what happens when I run it on the server. As I understand it, this package does what I want:
https://www.npmjs.com/package/puppeteer-recorder
Here is a simplified version of my code:
const puppeteer = require('puppeteer');
const { record } = require('puppeteer-recorder');

var path = 'C:\\wamp64\\www\\home_robot\\';
let global_browser;

init_puppeteer();

async function init_puppeteer() {
    global_browser = await puppeteer.launch({headless: false, args: ['--no-sandbox', '--disable-setuid-sandbox']});
    check_login();
}
async function check_login() {
    try {
        const page = await global_browser.newPage();
        await page.setViewport({width: 1000, height: 1100});
        await record({
            browser: global_browser, // Optional: a puppeteer Browser instance
            page: page, // Optional: a puppeteer Page instance
            output: path + 'output.webm',
            fps: 60,
            frames: 60 * 5, // 5 seconds at 60 fps
            prepare: function () {}, // <-- add this line
            render: function () {} // <-- add this line
        });
        await page.goto('https://www.example.cob', {timeout: 60000})
            .catch(function (error) {
                throw new Error('TimeoutBrows');
            });
        await page.close();
    }
    catch (e) {
        console.log(' LOGIN ERROR ---------------------');
        console.log(e);
    }
}
But I get this error
$ node home.js
(node:7376) UnhandledPromiseRejectionWarning: Error: spawn ffmpeg ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:240:19)
at onErrorNT (internal/child_process.js:415:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
(node:7376) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:7376) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
LOGIN ERROR ---------------------
Error [ERR_STREAM_DESTROYED]: Cannot call write after a stream was destroyed
at doWrite (_stream_writable.js:406:19)
at writeOrBuffer (_stream_writable.js:394:5)
at Socket.Writable.write (_stream_writable.js:294:11)
at Promise (C:\wamp64\www\home_robot\node_modules\puppeteer-recorder\index.js:72:12)
at new Promise (<anonymous>)
at write (C:\wamp64\www\home_robot\node_modules\puppeteer-recorder\index.js:71:3)
at module.exports.record (C:\wamp64\www\home_robot\node_modules\puppeteer-recorder\index.js:44:11)
at process._tickCallback (internal/process/next_tick.js:68:7)
I've even run npm i reinstall ffmpeg --with-libvpx
as was suggested here:
https://github.com/clipisode/puppeteer-recorder/issues/6
but it still didn't work. What else do I need to do?

I know this is a very late response to your question, but nevertheless:
Years ago, I visited this same Stack Overflow thread, and I had a similar challenge of finding a screen-recorder library that does a good job of capturing video and also offers an option to manually start and stop the recording.
Finally I wrote one myself and distributed it as an NPM library:
https://www.npmjs.com/package/puppeteer-screen-recorder
Hope this is helpful!
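A minimal usage sketch, assuming the API described in that library's README (a PuppeteerScreenRecorder class with explicit start/stop calls, which is exactly the manual control puppeteer-recorder lacks); verify the names against the version you install:

```javascript
async function recordPage(url, outFile, ms) {
  // requires are inside the function so the sketch can be loaded without the
  // packages installed; in real code they would sit at the top of the file.
  const puppeteer = require('puppeteer');
  const { PuppeteerScreenRecorder } = require('puppeteer-screen-recorder');

  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const recorder = new PuppeteerScreenRecorder(page);
  await recorder.start(outFile); // begin capturing frames to outFile
  await page.goto(url, { waitUntil: 'networkidle2' });
  await new Promise(resolve => setTimeout(resolve, ms)); // keep recording for ~ms
  await recorder.stop(); // finalize the video before closing anything
  await browser.close();
}
```

Called like recordPage('https://example.com', './output.mp4', 5000), this records roughly five seconds of the page.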

Add two empty functions called prepare and render in the options.
await record({
    browser: global_browser, // Optional: a puppeteer Browser instance
    page, // Optional: a puppeteer Page instance
    output: path + 'output.webm',
    fps: 60,
    frames: 60 * 5, // 5 seconds at 60 fps
    prepare: function () {}, // <-- add this line
    render: function () {} // <-- add this line
});
Basically it's missing some default functions and the error is not handled properly.
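On the error-handling point: record() returns a promise, and if that promise rejects with nothing awaiting it, Node emits the UnhandledPromiseRejectionWarning shown in the question. A sketch of a wrapper that surfaces the real error (safeRecord is a hypothetical helper, not part of puppeteer-recorder; the record function is passed in to keep the sketch self-contained):

```javascript
// Awaiting the record promise inside try/catch turns the unhandled-rejection
// warning into a readable error message at the point of failure.
async function safeRecord(recordFn, options) {
  try {
    return await recordFn(options);
  } catch (err) {
    console.error('recording failed:', err.message);
    return null; // swallow so callers can continue and clean up
  }
}
```

In the question's code this would be used as await safeRecord(record, { browser: global_browser, page, ... }).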

Also, there's https://www.npmjs.com/package/puppeteer-capture, which uses the HeadlessExperimental protocol.

Related

puppeteer-cluster: how to prevent closing the page?

I am glad to have found puppeteer-cluster. This library makes crawling and automation tasks easy. Thanks to Thomas Dondorf.
According to the author of puppeteer-cluster, when a task finishes, the page is closed immediately. This is good, by the way. But what about cases where you need the page to stay open?
My use case, briefly:
There is some activity on the page where, in the background, a socket is involved in sending some data to the front end. This data changes the DOM, and I need to capture that.
This is my code:
async function runCrawler() {
    const links = [
        "foo.com/barSome324",
        "foo.com/barSome22",
        "foo.com/barSome1",
        "foo.com/barSome765",
    ]
    const cluster = await Cluster.launch({
        concurrency: Cluster.CONCURRENCY_CONTEXT,
        workerCreationDelay: 5000,
        puppeteerOptions: {args: ['--no-sandbox', '--disable-setuid-sandbox'], headless: false},
        maxConcurrency: numCPUs,
    });
    await cluster.task(async ({ page, data: url }) => {
        await crawler(page, url)
    });
    for (const link of links) {
        await cluster.queue(link);
    }
    await cluster.idle();
    await cluster.close();
}
And this is the crawler logic for the page:
module.exports.crawler = async (page, link) => {
    await page.goto(link, { waitUntil: 'networkidle2' })
    await page.waitForTimeout(10000)
    await page.waitForSelector('#dbp')
    try {
        // method to be executed;
        setInterval(async () => {
            const tables = await page.evaluate(async () => {
                /// data I need to catch every 30 seconds
            });
        }, 30000)
    } catch (error) {
        console.log(error)
    }
}
I searched and found out that in JS we can capture DOM changes with a MutationObserver, and I tried that solution, but it did not work either. The page is closed with this error:
UnhandledPromiseRejectionWarning: Error: Protocol error (Runtime.callFunctionOn): Session closed. Most likely the page has been closed.
So I have two options here:
1. MutationObserver
2. setInterval to evaluate the page itself every 30 seconds
But neither suits my needs. Any idea how to overcome this problem?
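One way to reconcile "the page closes when the task returns" with periodic sampling is to make the task function await a promise that resolves only after all samples have been collected; puppeteer-cluster then keeps the page open for the whole time. The helper below is a plain-Node sketch (the interval length and sample count are illustrative assumptions, and the cluster usage in the comment mirrors the question's code):

```javascript
// Collects maxSamples results from sampleFn, one every intervalMs, and only
// resolves once all samples are in - so an awaiting cluster task stays alive.
function sampleEvery(sampleFn, intervalMs, maxSamples) {
  return new Promise((resolve, reject) => {
    const results = [];
    const timer = setInterval(async () => {
      try {
        results.push(await sampleFn());
        if (results.length >= maxSamples) {
          clearInterval(timer);
          resolve(results);
        }
      } catch (err) {
        clearInterval(timer); // e.g. the session died; stop sampling
        reject(err);
      }
    }, intervalMs);
  });
}

// Inside the cluster task it would be used roughly like:
// await cluster.task(async ({ page, data: url }) => {
//   await page.goto(url, { waitUntil: 'networkidle2' });
//   const tables = await sampleEvery(
//     () => page.evaluate(() => {/* data to capture */}),
//     30000, // every 30 seconds
//     10     // stop after 10 samples; only then may the page close
//   );
// });
```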

how to stop puppeteer-recorder recording

I'm using puppeteer-recorder to record a video of browser activity, with this package:
https://www.npmjs.com/package/puppeteer-recorder
Here is my code:
async function check_login() {
    try {
        const page = await global_browser.newPage();
        await page.setViewport({width: 1000, height: 1100});
        record({
            browser: global_browser, // Optional: a puppeteer Browser instance
            page: page, // Optional: a puppeteer Page instance
            output: path + 'output.webm',
            fps: 60,
            frames: 60 * 5, // 5 seconds at 60 fps
            prepare: function () {}, // <-- add this line
            render: function () {} // <-- add this line
        });
        await page.goto('http://localhost/home_robot/mock.php', {timeout: 60000})
            .catch(function (error) {
                throw new Error('TimeoutBrows');
            });
        await page.close();
    }
    catch (e) {
        console.log(' LOGIN ERROR ---------------------');
        console.log(e);
    }
}
It works fine, but I don't know how to stop the recording, so when I get to the end of the function I get this error:
(node:10000) UnhandledPromiseRejectionWarning: Error: Protocol error (Emulation.setDefaultBackgroundColorOverride): Target closed.
at Promise (C:\wamp64\www\home_robot\node_modules\puppeteer\lib\Connection.js:202:56)
at new Promise (<anonymous>)
at CDPSession.send (C:\wamp64\www\home_robot\node_modules\puppeteer\lib\Connection.js:201:12)
at Page._screenshotTask (C:\wamp64\www\home_robot\node_modules\puppeteer\lib\Page.js:806:26)
at process._tickCallback (internal/process/next_tick.js:68:7)
(node:10000) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:10000) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
And I don't get a video file; I assume it's because I close the page without stopping the recording.
Unfortunately there's no documentation, and the author doesn't answer questions.
There is no way, or need, to stop recording with this module; look closely at the frames option of the record method:
frames: 60 * 5, // 5 seconds at 60 fps
After getting 300 frames it will stop by itself.
However, for it to work, a very important requirement must be met. The documentation won't mention it, but you can see it in the source of the module: it uses ffmpeg to build the video from screenshots, and ffmpeg must be in PATH. If it is not, you should provide the ffmpeg binary location in the options:
const puppeteer = require('puppeteer');
const { record } = require('puppeteer-recorder');

puppeteer.launch({headless: false}).then(async browser => {
    const page = await browser.newPage();
    await page.goto('https://codepen.io/hexagoncircle/full/joqYEj', {waitUntil: 'networkidle2'});
    await record({
        ffmpeg: "c:\\ffmpeg\\bin\\ffmpeg.exe", // <-- provide full path to ffmpeg binary
        browser: browser, // Optional: a puppeteer Browser instance
        page: page, // Optional: a puppeteer Page instance
        output: 'output.webm',
        fps: 24,
        frames: 24, // desired seconds of recording multiplied by fps
        prepare: function (browser, page) { /* you could click on a button */ },
        render: function (browser, page, frame) { /* executed before each capture */ }
    });
    await browser.close();
});

how to integrate lighthouse with testcafe?

I need to pass the connection argument while calling lighthouse:
https://github.com/GoogleChrome/lighthouse/blob/master/lighthouse-core/index.js#L41
async function lighthouse(url, flags = {}, configJSON, connection) {
    // verify the url is valid and that protocol is allowed
    if (url && (!URL.isValid(url) || !URL.isProtocolAllowed(url))) {
        throw new LHError(LHError.errors.INVALID_URL);
    }

    // set logging preferences, assume quiet
    flags.logLevel = flags.logLevel || 'error';
    log.setLevel(flags.logLevel);

    const config = generateConfig(configJSON, flags);
    connection = connection || new ChromeProtocol(flags.port, flags.hostname);

    // kick off a lighthouse run
    return Runner.run(connection, {url, config});
}
And in TestCafe my tests look like:
test('Run lighthouse', async t => {
    lighthouse('https://www.youtube.com', {}, {}, ????)
})
I am unable to retrieve the connection of the Chrome instance that TestCafe had opened up, to use instead of spawning a new Chrome runner.
There is an npm library called testcafe-lighthouse which helps audit web pages using TestCafe. It also has the capability to produce a detailed HTML report.
Install the plugin by:
$ yarn add -D testcafe-lighthouse
# or
$ npm install --save-dev testcafe-lighthouse
Audit with default thresholds:
import { testcafeLighthouseAudit } from 'testcafe-lighthouse';

fixture(`Audit Test`).page('http://localhost:3000/login');

test('user performs lighthouse audit', async () => {
    const currentURL = await t.eval(() => document.documentURI);
    await testcafeLighthouseAudit({
        url: currentURL,
        cdpPort: 9222,
    });
});
Audit with custom thresholds:
import { testcafeLighthouseAudit } from 'testcafe-lighthouse';

fixture(`Audit Test`).page('http://localhost:3000/login');

test('user page performance with specific thresholds', async () => {
    const currentURL = await t.eval(() => document.documentURI);
    await testcafeLighthouseAudit({
        url: currentURL,
        thresholds: {
            performance: 50,
            accessibility: 50,
            'best-practices': 50,
            seo: 50,
            pwa: 50,
        },
        cdpPort: 9222,
    });
});
You need to kick-start the test like below:
# headless mode, preferable for CI
npx testcafe chrome:headless:cdpPort=9222 test.js
# non headless mode
npx testcafe chrome:emulation:cdpPort=9222 test.js
I hope it will help your automation journey.
I did something similar: I launch Lighthouse against Google Chrome on a specific port using the CLI:
npm run testcafe -- chrome:headless:cdpPort=1234
Then I make the lighthouse function take the port as an argument:
export default async function lighthouseAudit(url, browser_port) {
    let result = await lighthouse(url, {
        port: browser_port, // Google Chrome port number
        output: 'json',
        logLevel: 'info',
    });
    return result;
};
Then you can simply run the audit like:
test(`Generate Light House Result`, async t => {
    auditResult = await lighthouseAudit('https://www.youtube.com', 1234);
});
Hopefully it helps.
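To actually fail the TestCafe test when scores are too low, the returned result can be checked against thresholds. Lighthouse exposes category scores as 0..1 fractions on result.lhr.categories.<id>.score; the helper below is a hypothetical sketch, not part of Lighthouse or TestCafe:

```javascript
// Returns a list of human-readable failures for every category whose score
// (converted to 0..100) falls below its threshold; an empty list means pass.
function checkThresholds(lhr, thresholds) {
  const failures = [];
  for (const [category, min] of Object.entries(thresholds)) {
    const score = lhr.categories[category].score * 100;
    if (score < min) failures.push(`${category}: ${score} < ${min}`);
  }
  return failures;
}

// Inside the test it could be used roughly like:
// const auditResult = await lighthouseAudit('https://www.youtube.com', 1234);
// await t.expect(checkThresholds(auditResult.lhr, { performance: 50 })).eql([]);
```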

GulpUglifyError: unable to minify JavaScript error with Browserify

I'm bundling my script with Browserify + Babel like this:
function buildJs() {
    let bopts = {
        paths: [
            `${SRC}/js`,
            './config'
        ],
        debug: !isProduction
    };
    let opts = Object.assign({}, watchify.args, bopts);
    let b = watchify(persistify(opts));

    b.add(`${SRC}/js/index.js`)
        .on('update', bundle)
        .on('log', gutil.log)
        .external(vendors)
        .transform(babelify, {
            presets: ["es2015", "react"],
            plugins: [
                "syntax-async-functions",
                "transform-regenerator",
                "transform-class-properties",
                "transform-decorators-legacy",
                "transform-object-rest-spread",
                "transform-react-jsx-source",
                staticFs
            ]
        })
        .transform(browserifyCss, { global: true });

    function bundle() {
        let stream = b.bundle()
            .on('error', swallowError)
            .on('end', () => {
                gutil.log(`Building JS: bundle done.`);
            })
            .pipe(source('bundle.js'))
            .pipe(streamify(uglify()));
        return stream.pipe(gulp.dest(`${DIST}/js`));
    }

    return bundle();
}
It's just browserify -> babelify -> browserify-css -> uglify -> gulp.dest.
But if I run this task, it fails with:
[17:00:30] Using gulpfile ~/ctc-web/gulpfile.js
[17:00:30] Starting 'build'...
[17:00:46] 1368516 bytes written (16.43 seconds)
[17:00:46] Building JS:bundle done.
events.js:160
throw er; // Unhandled 'error' event
^
GulpUglifyError: unable to minify JavaScript
at createError (/home/devbfex/ctc-web/node_modules/gulp-uglify/lib/create-error.js:6:14)
at wrapper (/home/devbfex/ctc-web/node_modules/lodash/_createHybrid.js:87:15)
at trycatch (/home/devbfex/ctc-web/node_modules/gulp-uglify/minifier.js:26:12)
at DestroyableTransform.minify [as _transform] (/home/devbfex/ctc-web/node_modules/gulp-uglify/minifier.js:79:19)
at DestroyableTransform.Transform._read (/home/devbfex/ctc-web/node_modules/readable-stream/lib/_stream_transform.js:159:10)
at DestroyableTransform.Transform._write (/home/devbfex/ctc-web/node_modules/readable-stream/lib/_stream_transform.js:147:83)
at doWrite (/home/devbfex/ctc-web/node_modules/readable-stream/lib/_stream_writable.js:338:64)
at writeOrBuffer (/home/devbfex/ctc-web/node_modules/readable-stream/lib/_stream_writable.js:327:5)
at DestroyableTransform.Writable.write (/home/devbfex/ctc-web/node_modules/readable-stream/lib/_stream_writable.js:264:11)
at Transform.ondata (_stream_readable.js:555:20)
Just skipping uglify works, but I really need it.
The weird thing is that the error occurred after the end event. I tried using vinyl-buffer, but the same error happens.
I couldn't find any solution; every one of my attempts fails with the same error message.
What am I missing?
Try replacing let with var and see what happens. The UglifyJS version that gulp-uglify wraps only understands ES5, so any ES2015 syntax (such as let) that survives transpilation makes the minifier fail.
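To find out which file and line the minifier choked on, the pipeline's swallowError handler can log the parse position. This sketch assumes the err.cause detail that gulp-uglify v2 attaches to its GulpUglifyError; verify against your installed version:

```javascript
// Hypothetical swallowError for the gulp pipeline above: logs the minifier's
// parse position (err.cause is a gulp-uglify v2 detail, an assumption here)
// and ends the stream instead of crashing the whole gulp/watchify process.
function swallowError(err) {
  console.error(err.message);
  if (err.cause) {
    // cause carries { message, filename, line, col } for parse errors
    console.error(err.cause.filename + ':' + err.cause.line + ',' + err.cause.col);
  }
  this.emit('end'); // keep watchify running after a failed build
}
```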

gulp-protractor error with chrome v54 / web driver v2.25

Due to the latest update of Chrome (v54) we've noticed our Protractor tests failing. We attempted to update to the latest version of gulp-protractor (v3.0.0), which in turn downloads the latest web driver (v2.25), to resolve the issue, but unfortunately a new error occurs that we've been unable to resolve.
Everything worked fine before Chrome's update.
Our protractor configuration is as follows:
exports.config = {
    // Capabilities to be passed to the webdriver instance.
    capabilities: {
        'browserName': 'chrome'
    },
    onPrepare: function () {
        var fs = require('fs');
        var testDir = 'testresults/';
        if (!fs.existsSync(testDir)) {
            fs.mkdirSync(testDir);
        }
        var jasmineReporters = require('jasmine-reporters');
        // returning the promise makes protractor wait for the reporter config before executing tests
        return browser.getProcessedConfig().then(function () {
            // you could use other properties here if you want, such as platform and version
            var browserName = 'browser';
            browser.getCapabilities().then(function (caps) {
                browserName = caps.caps_.browserName.replace(/ /g, "_");
                var junitReporter = new jasmineReporters.JUnitXmlReporter({
                    consolidateAll: true,
                    savePath: testDir,
                    // this will produce distinct xml files for each capability
                    filePrefix: 'test-protractor-' + browserName,
                    modifySuiteName: function (generatedSuiteName) {
                        // this will produce distinct suite names for each capability,
                        // e.g. 'firefox.login tests' and 'chrome.login tests'
                        return 'test-protractor-' + browserName + '.' + generatedSuiteName;
                    }
                });
                jasmine.getEnv().addReporter(junitReporter);
            });
        });
    },
    baseUrl: 'http://localhost:3000',
    // Spec patterns are relative to the current working directory when
    // protractor is called.
    specs: [paths.e2e + '/**/*.js'],
    // Options to be passed to Jasmine-node.
    jasmineNodeOpts: {
        showColors: true,
        defaultTimeoutInterval: 30000
    }
};
The error is:
[13:27:13] E/launcher - Error: Error
at C:\ws\node_modules\protractor\built\util.js:55:37
at _rejected (C:\ws\node_modules\q\q.js:844:24)
at C:\ws\node_modules\q\q.js:870:30
at Promise.when (C:\ws\node_modules\q\q.js:1122:31)
at Promise.promise.promiseDispatch (C:\ws\node_modules\q\q.js:788:41)
at C:\ws\node_modules\q\q.js:604:44
at runSingle (C:\ws\node_modules\q\q.js:137:13)
at flush (C:\ws\node_modules\q\q.js:125:13)
at nextTickCallbackWith0Args (node.js:420:9)
at process._tickCallback (node.js:349:13)
[13:27:13] E/launcher - Process exited with error code 100
onPrepare is evaluated in built/util.js in the runFilenameOrFn_ function. The stack trace unfortunately is not helpful, but what this means is that onPrepare has errors. Looking at your onPrepare method, the error is made when assigning browserName from the browser capabilities. In your code, caps.caps_ is actually undefined, so caps.caps_.browserName throws an error. The capabilities object should be accessed as follows:
browser.getCapabilities().then(capabilities => {
let browserName = capabilities.browserName.replace(/ /g, "_");