Is there a way to start the performance profiling programmatically in Chrome?
I want to run a performance test of my web app several times to get a better estimate of the FPS, but manually starting the performance profiling in Chrome is tricky because I'd have to align the frame models by hand. (I am using this technique to extract the frames.)
CMD + Shift + E reloads the page and immediately starts the profiling, which alleviates the alignment problem, but it only runs for 3 seconds, as explained here. So this doesn't work.
Ideally, I'd like to click a button that starts my test script and also starts the profiling. Is there a way to achieve that?
In case you're still interested, or someone else finds it helpful, there's an easy way to achieve this using Puppeteer's tracing class.
Puppeteer uses the Chrome DevTools Protocol's Tracing domain under the hood and writes a JSON file to your system that can be loaded in the DevTools Performance panel.
To get a profile trace of your page's loading time you can implement the following:
const puppeteer = require('puppeteer');

(async () => {
  // launch puppeteer browser in headful mode
  const browser = await puppeteer.launch({
    headless: false,
    devtools: true
  });

  // start a page instance in the browser
  const page = await browser.newPage();

  // start the profiling, with a path to the out file and screenshots collected
  await page.tracing.start({
    path: `tests/logs/trace-${new Date().getTime()}.json`,
    screenshots: true
  });

  // go to the page
  await page.goto('http://localhost:8080');

  // wait for as long as you want (waitFor is deprecated in newer Puppeteer
  // versions; a plain setTimeout wrapped in a Promise works there too)
  await page.waitFor(4000);
  // or you can wait for an element to appear with:
  // await page.waitForSelector('some-css-selector');

  // stop the tracing
  await page.tracing.stop();

  // close the browser
  await browser.close();
})();
Of course, you'll have to install Puppeteer first (npm i puppeteer). If you don't want to use Puppeteer, you can interact with the Chrome DevTools Protocol API directly (see the link above). I didn't investigate that option very much, since Puppeteer provides a high-level, easy-to-use API over CDP. You can also interact with CDP directly via Puppeteer's CDPSession API.
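For example, here's a minimal sketch (my own, not from the Puppeteer docs) that starts a CPU profile through a CDPSession, assuming a page object like the one above:

const session = await page.target().createCDPSession();
await session.send('Profiler.enable');
await session.send('Profiler.start');
// ... interact with the page while the profiler runs ...
const { profile } = await session.send('Profiler.stop');
require('fs').writeFileSync('profile.cpuprofile', JSON.stringify(profile));

The resulting .cpuprofile file can then be loaded in the DevTools JavaScript Profiler panel.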
Hope this helps. Good luck!
You can use the Chrome DevTools Protocol and any driver library from https://github.com/ChromeDevTools/awesome-chrome-devtools#protocol-driver-libraries to create a profile programmatically.
Use this method - https://chromedevtools.github.io/devtools-protocol/tot/Profiler#method-start - to start a profile.
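For example, a minimal sketch with the chrome-remote-interface driver from that list (assuming Chrome was started with --remote-debugging-port=9222; the output file name is my own choice):

const CDP = require('chrome-remote-interface');

(async () => {
  // connects to localhost:9222 by default
  const client = await CDP();
  const { Profiler } = client;
  await Profiler.enable();
  await Profiler.start();
  // ... drive the page while the profiler runs ...
  const { profile } = await Profiler.stop();
  require('fs').writeFileSync('profile.cpuprofile', JSON.stringify(profile));
  await client.close();
})();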
I know that Puppeteer is a simple and great tool that makes it easy to scrape website data.
As far as I know, in headless mode many browser properties differ from those of a normal browser.
But if I use the following method to connect Puppeteer to an already-open browser, can it still be detected?
First: modify the desktop Chrome shortcut properties and open the browser:
C:\Users\13632\AppData\Local\Google\Chrome\Application\chrome.exe --remote-debugging-port=9222
const axios = require('axios');
const puppeteer = require('puppeteer');

async function main() {
  // fetch the WebSocket endpoint of the already-running browser
  const response = await axios.get('http://127.0.0.1:9222/json/version');
  const webSocketDebuggerUrl = response.data.webSocketDebuggerUrl;

  // attach Puppeteer to the existing browser instead of launching a new one
  const browser = await puppeteer.connect({
    browserWSEndpoint: webSocketDebuggerUrl,
    ignoreDefaultArgs: ["--enable-automation"],
    slowMo: 100,
    defaultViewport: { width: 1280, height: 600 },
  });

  // find the tab whose URL matches
  const target = await browser.waitForTarget(t => t.url().includes("your url"));
  const page = await target.page();
}

main();
The method above attaches to an already-opened, normal Chrome browser. It seems impossible to detect that it is an automated tool, doesn't it? Is there any other way to judge whether the other party is a human or a machine?
Browser profiling and automation detection (and beating it) is an entire subfield of its own. Some drivers (chromedriver; I've not used Puppeteer) set flags to indicate automated use, but these are easily defeated. (See for instance undetected-chromedriver for a package that tries not to be detectable.)
Then there's user profiling (bots tend to click in predictable ways), running JS in the browser to try to detect the environment, blacklisting IPs (most bots are behind proxies), and so on.
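For instance, a minimal in-page sketch of that last approach (my own illustration; every one of these signals can be spoofed):

// run inside the page; true if common automation giveaways are present
function looksAutomated() {
  return Boolean(
    navigator.webdriver ||                        // standard flag set by automated browsers
    /HeadlessChrome/.test(navigator.userAgent) || // default headless Chrome UA string
    !window.chrome                                // missing in some headless builds
  );
}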
Ask yourself: what are you afraid of? And then defend against that. Anything you put on the Internet can and will be crawled, but you can make it hard to do disruptive things like booking all the concert tickets and then reselling them at a 500% markup. Specific challenges like that have specific answers; but there is no foolproof way to detect automated browsers, and chasing one is a waste of effort.
So I am running Puppeteer and going to Twitter. I am running the browser with headless: false and saving cookies.
So my code is
const browser = await puppeteer.launch({
  product: 'firefox',
  headless: false,
  userDataDir: './dataDir'
});
const page = await browser.newPage();
await page.setDefaultNavigationTimeout(0);
await page.goto('https://twitter.com/home');
console.log("hi");
It goes to Twitter, and it saves cookies: I logged in once, and the next time I ran it I was already logged in. But it never gets to printing out hi on the console.
When I run it with some other URL, like google.com or news.ycombinator.com, it works fine, but not with Twitter, which makes me think they have some secret sauce running there (although I would expect Google to have that same secret sauce, so hmmm).
I have tried setting wait events on the page.goto, for example {waitUntil: "domcontentloaded"}, but none of them improve the situation.
So anyway, how can I go to Twitter with Puppeteer and have that console.log show up after my page.goto?
EDIT: I have found that this affects Firefox with Puppeteer, but if I run Puppeteer with Chromium I do not have the problem. I would still like a solution of course, as I prefer to work in Firefox.
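One observation about the code above (a guess, not a confirmed fix): setDefaultNavigationTimeout(0) disables the navigation timeout entirely, so if Twitter keeps long-lived connections open, page.goto may never resolve and the log line is never reached. A sketch that keeps a finite timeout and treats it as non-fatal:

try {
  await page.goto('https://twitter.com/home', {
    waitUntil: 'domcontentloaded',
    timeout: 30000, // fail after 30 s instead of waiting forever
  });
} catch (err) {
  console.warn('navigation did not settle:', err.message);
}
console.log("hi"); // now reachable even if navigation times out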
I am using pyppeteer to launch headless Chrome and perform some actions. But first I want all the elements of the web page to load completely. The official pyppeteer documentation suggests a waitUntil parameter, which accepts more than one value.
My doubt is: do I have to pass all the values, or is any one in particular sufficient? Please suggest whether the following snippet helps in my case:
await page.goto(url, {'waitUntil' : ['load', 'domcontentloaded', 'networkidle0', 'networkidle2']})
No, you don't have to pass all possible options to waitUntil. You can pick any one of them, or several at once if you like. But if you are:
- not dealing with a single-page app,
- not interested in all network connections (like 3rd-party tracking, for example),
then you are good to go with 'domcontentloaded' to wait for all the elements to be rendered on the page.
await page.goto(url, {'waitUntil' : 'domcontentloaded'})
The options in detail:
load: when the load event is fired.
domcontentloaded: when the DOMContentLoaded event is fired.
networkidle0: when there are no more than 0 network connections for at least 500 ms.
networkidle2: when there are no more than 2 network connections for at least 500 ms.
[source]
Note: of course, this holds for the Node.js puppeteer library as well; they work the same way in terms of waitUntil.
I am using the puppeteer-extra package with Puppeteer's stealth plugin. When using the default puppeteer package, incognito shows up, but when using puppeteer-extra, even while initializing the incognito context, the incognito window doesn't open. Any idea if it's a compatibility issue, or has someone already come across this problem?
I have tried passing the "--incognito" arg and also using the context method.
While using the --incognito parameter it opens the parent window in incognito, but newPage() opens a second window which is not in the incognito flow.
Two approaches I have used.
Importing the puppeteer-extra packages:
import puppeteer from 'puppeteer-extra';
import pluginStealth from 'puppeteer-extra-plugin-stealth';
Method 1:
const browser = await puppeteer.launch();
const context = await browser.createIncognitoBrowserContext();
const page = await context.newPage();
Method 2:
const browser = await puppeteer.launch({ args: ['--incognito'] });
I expect that while using puppeteer-extra package, the behavior should be same as using puppeteer.
The problem
This appears to be caused by a bug in the puppeteer-extra library. When you open a puppeteer instance with puppeteer-extra, the browser instance is hotpatched to better integrate newly opened pages with plugins.
Unfortunately the current implementation of browser._createPageInContext (as of version 2.1.3) doesn't correctly handle which browser context the new page should belong to once it's opened.
The fix
The fix is this pull request.
Specifically, you need to change this line
return async (contextId) => {
to this
return async function (contextId) {
so that the arguments object on the next line refers to the patched function's own arguments (arrow functions don't bind their own arguments):
const page = await originalMethod.apply(context, arguments)
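To see why this matters, here's a small standalone illustration (my own example, not from the library): an arrow function has no arguments binding of its own, so inside the arrow, arguments refers to the enclosing regular function.

function outer() {
  const arrow = () => arguments[0];                      // sees outer's arguments
  const regular = function () { return arguments[0]; };  // sees its own
  console.log(arrow('a'), regular('b'));                 // logs: x b
}
outer('x');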
I'm running Chrome in headless mode with Puppeteer, and I discovered that if a URL I load contains JavaScript code like:
while (true) {console.log('crash')}
The page will load forever, even though I have a timeout in place and waitUntil defined:
await page.goto('http://...', {waitUntil: ['load', 'domcontentloaded', 'networkidle0'], timeout: timeout})
How can I ensure that no JS (or any other kind of) abuse gets my code stuck?
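One possible safeguard (a sketch of my own, not a guaranteed fix): race the navigation against an external deadline that doesn't depend on the page's event loop, and dispose of the page on failure.

async function gotoWithDeadline(page, url, ms) {
  let timer;
  // external watchdog: rejects after `ms`, independent of the page
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`hard timeout after ${ms} ms`)), ms);
  });
  try {
    await Promise.race([
      page.goto(url, { waitUntil: 'domcontentloaded', timeout: ms }),
      deadline,
    ]);
  } catch (err) {
    // navigation hung or timed out: close the page so it can't pin the CPU
    await page.close().catch(() => {});
    throw err;
  } finally {
    clearTimeout(timer);
  }
}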