How to get only specific metrics from Google Lighthouse? - puppeteer

Let's suppose I'd only like to get the first-meaningful-paint metric from Google Lighthouse.
I'm using the snippet of code below, which runs a full audit. That takes too long, since I'm only interested in one metric. How can I change the code to tell Lighthouse to collect only that one metric for me?
(Source code snippet based on this)
const puppeteer = require('puppeteer');
const lighthouse = require('lighthouse');
const urlLib = require('url').URL;

async function run() {
  const browser = await puppeteer.launch({
    headless: false,
    defaultViewport: null
  });
  const { lhr } = await lighthouse("https://www.google.com", {
    port: (new urlLib(browser.wsEndpoint())).port,
    logLevel: 'info',
    output: 'json'
  });
  console.log(lhr);
}
run();

Inside the settings object of the configuration, you can specify which audits to run. When calling lighthouse, the configuration is provided as the third argument (more information in the docs).
Code Sample
lighthouse('...', { /* ... */ }, {
  extends: 'lighthouse:default',
  settings: {
    onlyAudits: ['first-meaningful-paint'],
  }
});
This will only run the first-meaningful-paint audit.
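Dropping that config into the snippet from the question, a minimal sketch could look like this (reading the result from lhr.audits is an assumption about the report shape; the exact fields vary by Lighthouse version):
const { lhr } = await lighthouse("https://www.google.com", {
  port: (new urlLib(browser.wsEndpoint())).port,
  logLevel: 'info',
  output: 'json'
}, {
  extends: 'lighthouse:default',
  settings: {
    onlyAudits: ['first-meaningful-paint'],
  }
});
// Audit results are keyed by audit id.
console.log(lhr.audits['first-meaningful-paint']);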

Related

Chrome Extension embedding script in active web page, in MV3?

Hello, beautiful people on the internet. I am new to Chrome extensions, though not new to writing code. I have set up webpack to use external packages; a major one in my application is the npm package "mark.js".
My application works like this: I want specific words to be highlighted in the active web page using this package. I have written the code for that functionality, but the problem is loading the script into the web page. I have tried different ways of loading the script and none of them work. The new MV3 rules are quite strict.
I want to achieve something like loading a script into the active web page. Please help.
btn.addEventListener("click", async () => {
  console.log("BUTTON IS PRESSED!!");
  try {
    await chrome.tabs.query(
      { active: true, currentWindow: true },
      async function (tabs) {
        chrome.scripting.executeScript({
          target: { tabId: tabs[0].id },
          func: highlightText,
          args: [arr],
        });
      }
    );
  } catch (e) {
    console.log("ERROR AT CHROME TAB QUERY : ", e);
  }
});
async function highlightText(arr) {
  console.log(typeof Mark);
  try {
    var instance2 = new Mark(document.querySelector("body"));
    // instance2.mark("is");
    var success = [];
    // const instance2 = new Mark(document.querySelector("body"));
    await Promise.all(
      arr.map(async function (obj) {
        console.log("OBJECT TEXT : ", obj.text);
        instance2.mark(obj.text, {
          element: "span",
          each: function (ele) {
            console.log("STYLING : ");
            ele.setAttribute("style", `background-color: ${obj.color};`);
            if (obj.title && obj.title != "") {
              ele.setAttribute("title", obj.title);
            }
            ele.innerHTML = obj.text;
            success.push({
              status: "Success",
              text: obj.text,
            });
          },
        });
      })
    );
    console.log("SUCCESS : ", success);
  } catch (error) {
    console.log(error);
  }
}
There's no need to use chrome.runtime.getURL. Since you use executeScript to run your code, all you need is to inject mark.js before injecting the function.
Also, don't load popup.js via content_scripts: it isn't a content script (those run in web pages); it's a script for your extension page. In fact, you don't need content_scripts at all.
btn.addEventListener('click', async () => {
  const [tab] = await chrome.tabs.query({ active: true, currentWindow: true });
  const target = { tabId: tab.id };
  // the arrow function must be async because it awaits executeScript
  const exec = async v => (await chrome.scripting.executeScript({ target, ...v }))[0].result;
  // inject mark.js only if the page doesn't have it yet, then run the highlighter
  if (!await exec({ func: () => !!window.Mark })) {
    await exec({ files: ['mark.js.min'] });
  }
  await exec({ func: highlightText, args: [arr] });
});
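Note that chrome.scripting.executeScript also requires the extension to declare the scripting permission and to have access to the tab, e.g. via activeTab. A minimal manifest fragment, assuming MV3 and that mark.js.min is packaged with the extension:
"manifest_version": 3,
"permissions": ["scripting", "activeTab"],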
For V3 I assume you will want to use content scripts in your manifest to inject the JavaScript into every web page it matches. I recently open-sourced TorpedoRead and had to do both V2 and V3; I recommend checking the repo, as it sounds like I did something similar to you (Firefox is V2, Chrome is V3).
The code below needs to be added to your manifest.json, and it will execute on every page based on the matches property. You can read more about content scripts here: https://developer.chrome.com/docs/extensions/mv3/content_scripts/
"content_scripts": [
{
"matches": ["<all_urls>"],
"js": ["yourscript.js"]
}
],
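If you do go the content-script route with mark.js, list the library before your own script in the same entry so Mark is already defined when your code runs (filenames here are placeholders):
"content_scripts": [
  {
    "matches": ["<all_urls>"],
    "js": ["mark.min.js", "yourscript.js"]
  }
],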

How to enable parallel tests with puppeteer?

I am using the chrome puppeteer library directly to run browser integration tests. I have a few tests written now in individual files. Is there a way to run them in parallel? What is the best way to achieve this?
To run puppeteer instances in parallel you can check out this library I wrote: puppeteer-cluster
It helps to run different puppeteer tasks in parallel in multiple browsers, contexts or pages and takes care of errors and browser crashes. Here is a minimal example:
const { Cluster } = require('puppeteer-cluster');

(async () => {
  const cluster = await Cluster.launch({
    concurrency: Cluster.CONCURRENCY_CONTEXT, // use one browser context per worker
    maxConcurrency: 4, // cluster with four workers
  });

  // Define a task to be executed for your data
  await cluster.task(async ({ page, data: url }) => {
    await page.goto(url);
    const screen = await page.screenshot();
    // ...
  });

  // Queue URLs
  cluster.queue('http://www.google.com/');
  cluster.queue('http://www.wikipedia.org/');
  // ...

  // Wait for cluster to idle and close it
  await cluster.idle();
  await cluster.close();
})();
You can also queue your functions directly like this:
const cluster = await Cluster.launch(...);

cluster.queue(async ({ page }) => {
  await page.goto('http://www.wikipedia.org');
  await page.screenshot({ path: 'wikipedia.png' });
});

cluster.queue(async ({ page }) => {
  await page.goto('https://www.google.com/');
  const pageTitle = await page.evaluate(() => document.title);
  // ...
});

cluster.queue(async ({ page }) => {
  await page.goto('https://www.example.com/');
  // ...
});
// My tests contain about 30 pages I want to test in parallel
const aBunchOfUrls = [
  {
    desc: 'Name of test #1',
    url: SOME_URL,
  },
  {
    desc: 'Name of test #2',
    url: ANOTHER_URL,
  },
  // ... snip ...
];

const browserPromise = puppeteer.launch();

// These tests pass! And rather quickly. Slowest link is the backend server.
// They're running concurrently, generating a new page within the same browser instance
describe('Generate about 20 parallel page tests', () => {
  aBunchOfUrls.forEach((testObj, idx) => {
    it.concurrent(testObj.desc, async () => {
      const browser = await browserPromise;
      const page = await browser.newPage();
      await page.goto(testObj.url, { waitUntil: 'networkidle' });
      await page.waitForSelector('#content');
      // assert things..
    });
  });
});
from https://github.com/GoogleChrome/puppeteer/issues/474
written by https://github.com/quicksnap
How I achieved this was to create your suite(s) of tests in individual files, as you have done already. Then create a testSuiteRunner.js file (or whatever you wish to call it) and set it up as follows:
require('path/to/test/suite/1');
require('path/to/test/suite/2');
require('path/to/test/suite/3');
...
Import all of your suites using require statements like the above (no need to assign them to const variables or anything like that), and then you can run node ./path/to/testSuiteRunner.js to execute all of your suites in parallel. Simplest solution I could come up with for this one!
I think the best idea would be to use a test runner like Jest that can manage that for you. At least that's how I do it. Please keep in mind the machine might blow up if you run too many Chrome instances at the same time, so it's safest to limit the number of parallel tests to around 2.
Unfortunately, it isn't clearly described in the official documentation how Jest parallelizes tests. You might find https://github.com/facebook/jest/issues/6957 useful.
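A minimal sketch of that worker limit in a Jest config (both option names are standard Jest; the value of 2 is just the suggestion above):
// jest.config.js
module.exports = {
  maxWorkers: 2,      // run at most two test files at a time
  testTimeout: 30000  // puppeteer tests are slow, so raise the per-test timeout
};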
Libraries such as puppeteer-cluster are great, but remember that first of all you want to parallelize your tests, not puppeteer's tasks.

How to use trace.json written by ChromeDriver

I am running a simple node script which starts chromedriver pointed at my website, scrolls to the bottom of the page, and writes the trace to trace.json.
This file is around 30MB.
I can't seem to load this file in chrome://tracing/, which is what I assume I would do in order to view the profile data.
What are my options for making sense of my trace.json file?
Here is my node script, in case that helps clarify what I am up to:
'use strict';

var fs = require('fs');
var wd = require('wd');
var b = wd.promiseRemote('http://localhost:9515');

b.init({
  browserName: 'chrome',
  chromeOptions: {
    perfLoggingPrefs: {
      'traceCategories': 'toplevel,disabled-by-default-devtools.timeline.frame,blink.console,disabled-by-default-devtools.timeline,benchmark'
    },
    args: ['--enable-gpu-benchmarking', '--enable-thread-composting']
  },
  loggingPrefs: {
    performance: 'ALL'
  }
}).then(function () {
  return b.get('http://www.example.com');
}).then(function () {
  // We only want to measure interaction, so getting a log once here
  // flushes any previous tracing logs we have.
  return b.log('performance');
}).then(function () {
  // Smooth scroll to bottom.
  return b.execute(`
    var height = Math.max(document.documentElement.scrollHeight, document.body.scrollHeight, document.documentElement.clientHeight);
    chrome.gpuBenchmarking.smoothScrollBy(height, function () {});
  `);
}).then(function () {
  // Wait for the above action to complete.
  return b.sleep(5000);
}).then(function () {
  // Get all the trace logs since last time log('performance') was called.
  return b.log('performance');
}).then(function (data) {
  // Write the file to disk.
  return fs.writeFileSync('trace.json', JSON.stringify(data.map(function (s) {
    return JSON.parse(s.message); // This is needed since Selenium outputs logs as strings.
  })));
}).fin(function () {
  return b.quit();
}).done();
Your script doesn't generate the correct format. The required data for each entry are located in message.message.params.
To generate a trace that can be loaded in chrome://tracing:
var fs = require('fs');
var webdriver = require('selenium-webdriver');

var driver = new webdriver.Builder()
  .withCapabilities({
    browserName: 'chrome',
    loggingPrefs: { performance: 'ALL' },
    chromeOptions: {
      args: ['--enable-gpu-benchmarking', '--enable-thread-composting'],
      perfLoggingPrefs: {
        'traceCategories': 'toplevel,disabled-by-default-devtools.timeline.frame,blink.console,disabled-by-default-devtools.timeline,benchmark'
      }
    }
  }).build();

driver.get('https://www.google.com/ncr');
driver.sleep(1000);

// generate a trace file loadable in chrome://tracing
driver.manage().logs().get('performance').then(function (data) {
  fs.writeFileSync('trace.json', JSON.stringify(data.map(function (d) {
    return JSON.parse(d['message'])['message']['params'];
  })));
});

driver.quit();
The same script with Python:
import json, time
from selenium import webdriver

driver = webdriver.Chrome(desired_capabilities={
    'loggingPrefs': { 'performance': 'ALL' },
    'chromeOptions': {
        "args": ['--enable-gpu-benchmarking', '--enable-thread-composting'],
        "perfLoggingPrefs": {
            "traceCategories": "toplevel,disabled-by-default-devtools.timeline.frame,blink.console,disabled-by-default-devtools.timeline,benchmark"
        }
    }
})

driver.get('https://stackoverflow.com')
time.sleep(1)

# generate a trace file loadable in chrome://tracing
with open(r"trace.json", 'w') as f:
    f.write(json.dumps([json.loads(d['message'])['message']['params'] for d in driver.get_log('performance')]))

driver.quit()
In case you didn't know, the recommended library for parsing these traces is https://github.com/ChromeDevTools/devtools-frontend
Also, the recommended categories are __metadata,benchmark,devtools.timeline,rail,toplevel,disabled-by-default-v8.cpu_profiler,disabled-by-default-devtools.timeline,disabled-by-default-devtools.timeline.frame,blink.user_timing,v8.execute,disabled-by-default-devtools.screenshot
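If you want to try those categories, you can swap them into perfLoggingPrefs in the scripts above, e.g. in the JavaScript variant:
perfLoggingPrefs: {
  traceCategories: '__metadata,benchmark,devtools.timeline,rail,toplevel,' +
    'disabled-by-default-v8.cpu_profiler,disabled-by-default-devtools.timeline,' +
    'disabled-by-default-devtools.timeline.frame,blink.user_timing,v8.execute,' +
    'disabled-by-default-devtools.screenshot'
}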
It's a very old question, but I hope this helps others.

Unable to load app with protractor test runner

I am new to AngularJS. I'm trying to run end-to-end tests with Protractor. Currently, I am running my tests from grunt with help from grunt-protractor-runner. My base test looks like the following:
describe('My Tests', function () {
  var p = protractor.getInstance();

  beforeEach(function () {
  });

  it('My First Test', function () {
    var message = "Hello!";
    expect(message).toEqual('Hello!');
  });
});
This works just fine. However, it really doesn't test my app. To do that I always want to start in the root of the app. In an attempt to do this, I've updated the above to the following:
describe('My Tests', function () {
  var p = protractor.getInstance();

  beforeEach(function () {
    p.get('#/');
  });

  it('My First Test', function () {
    var message = "Hello!";
    expect(message).toEqual('Hello!');
  });
});
When this test gets run, Chrome launches. However, "about:blank" is what gets loaded in the address bar. My app never loads. I've reviewed my protractor.config.js file and it looks correct to me. It looks like the following:
exports.config = {
  allScriptsTimeout: 110000,
  seleniumServerJar: './node_modules/protractor/bin/selenium/selenium-server-standalone-2.37.0.jar',
  seleniumPort: 1234,
  seleniumArgs: [],
  seleniumAddress: null,
  chromeDriver: './node_modules/protractor/bin/selenium/chromedriver.exe',
  capabilities: { 'browserName': 'chrome' },
  specs: [ '../tests/**/*.spec.js' ],
  jasmineNodeOpts: {
    showColors: true,
    defaultTimeoutInterval: 30000
  }
};
How do I get my app to load into Chrome for the purpose of an integration test via protractor?
Perhaps you've already figured out how to get it working, but if not maybe the following will help (modify the port if necessary of course):
// A base URL for your application under test. Calls to protractor.get()
// with relative paths will be prepended with this.
baseUrl: 'http://localhost:3000'
Add this property to your protractor.config.js file.
Reference: https://github.com/angular/protractor/blob/master/referenceConf.js
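With baseUrl set, the relative call in the spec resolves against it instead of about:blank; a sketch (the port is just an example, use whatever serves your app):
exports.config = {
  // ...existing options...
  baseUrl: 'http://localhost:3000'
};

// in the spec, this now resolves against baseUrl
beforeEach(function () {
  p.get('#/');
});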

fake model response in backbone.js

How can I fake a REST response in my model such that it does not actually go to the server but returns fixed JSON?
If possible, show me a version that does it by overriding sync() and a version that overrides fetch(). I failed with both, so this will be a good lesson for me on the difference between them.
Backbone.Model.extend({
  fetch: function () {
    var model = this;
    model.set({ yourStatic: "Json Here" });
  }
});
This should work. From the Backbone documentation:
fetch():
Resets the model's state from the server by delegating to Backbone.sync
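For the sync()-override variant the question also asks about, here is a minimal sketch (the canned object is a placeholder fixture). Because fetch() wraps options.success, calling it is enough to set the attributes and fire the usual 'sync' event:
Backbone.Model.extend({
  sync: function (method, model, options) {
    var canned = { id: 1, name: "fixed response" }; // placeholder fixture
    if (options && options.success) options.success(canned);
    return canned;
  }
});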
If your question is related to unit testing your code without the need for a live API, have a look at Sinon.JS. It helps mocking entire API server responses for testing purposes.
Here's an example from the Sinon docs that mocks the $.ajax function of jQuery:
{
  setUp: function () {
    sinon.spy(jQuery, "ajax");
  },

  tearDown: function () {
    jQuery.ajax.restore(); // Unwraps the spy
  },

  "test should inspect jQuery.getJSON's usage of jQuery.ajax": function () {
    jQuery.getJSON("/some/resource");
    assert(jQuery.ajax.calledOnce);
    assertEquals("/some/resource", jQuery.ajax.getCall(0).args[0].url);
    assertEquals("json", jQuery.ajax.getCall(0).args[0].dataType);
  }
}
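Note that the snippet above only spies on jQuery.ajax and asserts how it was called; if you want Sinon to actually answer the request with fixed JSON, its fake server is the usual tool (URL and payload below are placeholders):
var server = sinon.fakeServer.create();
server.respondWith("GET", "/some/resource",
  [200, { "Content-Type": "application/json" }, '{ "id": 1, "name": "fixed" }']);

model.fetch();     // issues the normal AJAX request
server.respond();  // answers it with the canned JSON above
server.restore();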
Take a look at backbone-faux-server. It will allow you to handle (and 'fake' a response for) any sync op (fetch, save, etc) per Model (or Collection).
Sinon.js is a good candidate, although if you want to simulate more than a few responses it might become a lot of work to set up headers, handle write logic, etc.
Building up on Sinon.js, FakeRest goes a step further and simulates a complete REST API based on a JSON object - all client-side.
My code looks like this:
// config
const TEST_JSON = require('./test.json')
const API_MAP = {
  testA: 'someroot'
}
const FAKE_API_MAP = {
  testA: TEST_JSON
}

// here's the model
let BaseModel = Backbone.Model.extend({
  url: function () {
    return `${HOST}${API_MAP[this.get('resourceName')]}/`
  }
})

let FakeModel = Backbone.Model.extend({
  fetch: function (options) {
    return this.sync('', this, _.extend({}, options));
  },
  sync: function (method, model, options) {
    this.set(FAKE_API_MAP[this.get('resourceName')], options)
    this.trigger('sync', this);
  },
});

// now it's easy to switch between them
let modelA = new BaseModel({
  resourceName: 'testA'
})
modelA.fetch()

let fakeModelA = new FakeModel({
  resourceName: 'testA'
})
fakeModelA.fetch()