Puppeteer test runs are not consistent - puppeteer

I've written some tests using Jest and Puppeteer, but some of them pass or fail inconsistently. From what I've found, all of the tests pass consistently with {headless: false}, where I can watch Puppeteer interacting with Chromium. But once I set {headless: true}, a few of them pass or fail unpredictably, and it's always the same few tests that are flaky. For those tests, the failure reason is always <element> not found. This is one of the test cases that passes inconsistently, with the reason #gallery not found.
describe('Gallery', () => {
  it('opens gallery modal', async () => {
    const mediaSelector = 'div#media';
    const gallerySelector = '#gallery';

    // Scroll the trigger element into view before clicking it.
    await page.$eval(mediaSelector, (e) => {
      e.scrollIntoView({ behavior: 'smooth', block: 'center' });
    });
    await page.click(mediaSelector);
    await page.waitForTimeout(1000);
    await expect(page).toMatchElement(gallerySelector);
  });
});
I've made sure to precede every expect statement with await page.waitForTimeout(1000); to give the element enough time to appear in the HTML. I had heard that Puppeteer behaves differently in headless and headful mode, but I didn't think it would affect a test this straightforward that much. Any suggestions on how I can overcome the different behavior between headless and headful Puppeteer?
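For comparison, this is roughly what the same check looks like with an explicit wait on the selector instead of the fixed delay (just a sketch, using the same page object and expect-puppeteer matcher as above):
it('opens gallery modal', async () => {
  const mediaSelector = 'div#media';
  const gallerySelector = '#gallery';

  // Make sure the trigger element exists before interacting with it.
  await page.waitForSelector(mediaSelector);
  await page.click(mediaSelector);

  // Wait for the modal itself rather than sleeping for a fixed interval.
  await page.waitForSelector(gallerySelector, { visible: true, timeout: 5000 });
  await expect(page).toMatchElement(gallerySelector);
});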

Related

Issue listening for custom event via puppeteer

I am currently working on a GitLab CI test environment, and I have a test harness that we use to test our SDK. I have set up a custom event that is fired on the page to mark the end of the test run. In my Puppeteer implementation I want to listen for this custom event, "TEST_COMPLETE".
I have not been able to get this to work, so I figured I would at least make sure the custom-event.js example in the Puppeteer repo worked, and there too I am not seeing what I believe I should be getting. I cloned the main repo below and ran npm install. When I execute the JS test below with headless: false and don't close the browser, I do not see any console log showing a custom event being fired.
It is my understanding that I should see a console message with 'fired' and then the 'app-ready' event and its info, but this is not the case. Even if I interact with the page I don't see anything beyond some 'features_loaded' and 'features_unveil' logs.
https://github.com/puppeteer/puppeteer/blob/main/examples/custom-event.js
Is anyone able to get the expected behavior from this code today? I'm not sure whether this worked previously and has broken since, or whether I am just doing something wrong. Any info would be of great help. Thanks!
Not sure if this is what you need, but I can get the message 'TEST_COMPLETE fired.' in the Node.js console with this simplified code (Puppeteer 8.0.0):
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
try {
  const [page] = await browser.pages();
  await page.goto('https://example.org/');

  await page.exposeFunction('onCustomEvent', async (type) => {
    console.log(`${type} fired.`);
    await browser.close();
  });

  await page.evaluate(() => {
    document.addEventListener('TEST_COMPLETE', (e) => {
      window.onCustomEvent('TEST_COMPLETE');
    });
    document.dispatchEvent(new Event('TEST_COMPLETE'));
  });
} catch (err) {
  console.error(err);
}
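If the event is fired by the page itself (as in a test harness) rather than dispatched from page.evaluate, one variant is to register the listener before navigation so an early event cannot be missed. A sketch under that assumption, reusing the TEST_COMPLETE name (the URL is a placeholder for your harness):
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Expose a Node.js function the page can call back into.
await page.exposeFunction('onCustomEvent', (type) => {
  console.log(`${type} fired.`);
});

// Register the listener in every new document before any page script runs,
// so an event dispatched early by the harness is not missed.
await page.evaluateOnNewDocument(() => {
  document.addEventListener('TEST_COMPLETE', () => {
    window.onCustomEvent('TEST_COMPLETE');
  });
});

await page.goto('https://example.org/'); // placeholder for the test harness URL
// Keep the browser open until TEST_COMPLETE has been observed, then call browser.close().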

Unable to locate an element with puppeteer

I'm trying to do a basic search on FB Marketplace with Puppeteer (it was working for me before), but it has started failing recently.
The whole thing fails when it gets to the "location" link on the Marketplace page. To change the location I need to click on it, but Puppeteer errors out with:
Error: Node is either not visible or not an HTMLElement
If I try to get the boundingBox of the element, it returns null:
const browser = await puppeteer.launch();
const page = await browser.newPage();
const resp = await page.goto('https://www.facebook.com/marketplace', { waitUntil: 'networkidle2' })
const withinLink = await page.waitForXPath('//span[contains(.,"Within")]', { timeout: 4000 })
console.log(await withinLink.boundingBox()) //returns null
await withinLink.click() //errors out
If I take a screenshot of the page right before locating the element, it is clearly there, and I can find it in the Chrome console using the same XPath manually.
It just doesn't seem to work in Puppeteer.
Something clearly changed on FB. Maybe they have started using some AI technology to detect scraping?
I don't think Facebook has changed its headless-browser detection lately, but it seems you haven't taken into account that const withinLink = await page.waitForXPath('//span[contains(.,"Within")]', { timeout: 4000 }) returns an array, even if there is only one element matching contains(.,"Within").
It should work if you add a [0] index to the element handles:
const withinLink = await page.waitForXPath('//span[contains(.,"Within")]')
console.log(await withinLink[0].boundingBox())
await withinLink[0].click()
Note: The timeout is not mandatory in waitForXPath, but I'd suggest using domcontentloaded instead of networkidle2 in page.goto if you don't need all the analytics/tracking requests to finish to achieve the desired result (sketched below); networkidle2 just slows down your script execution.
Note 2: Honestly, I don't have such an element on my FB page, so maybe it is market dependent. But the same approach works with any other XPath selector with specific content.
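To illustrate the first note, the navigation could be written like this (a sketch; whether domcontentloaded is enough depends on how much of the page you actually need loaded before querying it):
const resp = await page.goto('https://www.facebook.com/marketplace', {
  // Resolves as soon as the DOM is parsed, instead of waiting for network
  // activity (analytics/tracking requests) to quiet down.
  waitUntil: 'domcontentloaded',
});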

Puppeteer: Screenshot lazy images not working [duplicate]

This question already has answers here:
Puppeteer wait for all images to load then take screenshot
I don't seem to be able to capture a screenshot of https://today.line.me/HK/pc successfully.
In my Puppeteer script I also scroll to the bottom of the page and back up to ensure images are loaded, but for some reason it doesn't seem to work on the LINE URL above.
const puppeteer = require('puppeteer');

function wait(ms) {
  return new Promise(resolve => setTimeout(() => resolve(), ms));
}

async function run() {
  let browser = await puppeteer.launch({ headless: false });
  let page = await browser.newPage();
  await page.goto('https://today.line.me/HK/pc', { waitUntil: 'load' });

  // Get the height of the rendered page
  const bodyHandle = await page.$('body');
  const { height } = await bodyHandle.boundingBox();
  await bodyHandle.dispose();

  // Scroll one viewport at a time, pausing to let content load
  const viewportHeight = page.viewport().height + 200;
  let viewportIncr = 0;
  while (viewportIncr + viewportHeight < height) {
    await page.evaluate(_viewportHeight => {
      window.scrollBy(0, _viewportHeight);
    }, viewportHeight);
    await wait(4000);
    viewportIncr = viewportIncr + viewportHeight;
  }

  // Scroll back to top
  await page.evaluate(_ => {
    window.scrollTo(0, 0);
  });

  // Some extra delay to let images load
  await wait(2000);
  await page.setViewport({ width: 1366, height: 768 });
  await page.screenshot({ path: './image.png', fullPage: true });
  await browser.close();
}

run();
For anyone wondering, there are many strategies for rendering lazy-loaded images or assets in Puppeteer, but not all of them work equally well. Small implementation details on the website you're trying to screenshot can change the final result, so if you want an implementation that works well across many scenarios, you will need to isolate each generic case and address it individually.
I know this because I run a small screenshot API service and I had to address many cases separately. This is a big part of the project, since there always seems to be something new to address as new libraries and UI techniques appear every day.
That being said, I think some rendering strategies have good coverage. Probably the best one is a combination of waiting and scrolling through the page, like the OP did, while also taking the order of operations into account. Here is a slightly modified version of the OP's original code.
// Scroll and Wait Strategy
function waitFor(ms) {
  return new Promise(resolve => setTimeout(() => resolve(), ms));
}

async function capturePage(browser, url) {
  // Load the page that you're trying to screenshot.
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'load' }); // Waiting until networkidle2 could work better.

  // Set the viewport before scrolling
  await page.setViewport({ width: 1366, height: 768 });

  // Get the height of the page after navigating to it.
  // This strategy to calculate height doesn't always work, though.
  const bodyHandle = await page.$('body');
  const { height } = await bodyHandle.boundingBox();
  await bodyHandle.dispose();

  // Scroll viewport by viewport, allowing the content to load
  const calculatedVh = page.viewport().height;
  let vhIncrease = 0;
  while (vhIncrease + calculatedVh < height) {
    // Here we pass the calculated viewport height to the context
    // of the page and we scroll by that amount
    await page.evaluate(_calculatedVh => {
      window.scrollBy(0, _calculatedVh);
    }, calculatedVh);
    await waitFor(300);
    vhIncrease = vhIncrease + calculatedVh;
  }

  // Restore the original viewport height before the screenshot
  // (temporarily setting it to the full page height can also reveal extra elements).
  await page.setViewport({ width: 1366, height: calculatedVh });

  // Wait a little longer
  await waitFor(1000);

  // Scroll back to the top of the page by using evaluate again.
  await page.evaluate(_ => {
    window.scrollTo(0, 0);
  });

  return await page.screenshot({ type: 'png' });
}
Some key differences here are:
You want to set the viewport from the beginning and operate with that fixed viewport.
You can change the wait times and introduce arbitrary waits to experiment. Sometimes this causes elements that are waiting on network events to appear.
Changing the viewport to the full height of the page can also reveal elements, as if you were scrolling. You can test this in a real browser by using a vertical monitor. However, make sure to go back to the original viewport height, because the viewport also affects the intended rendering.
One thing to understand here is that waiting alone is not necessarily going to trigger the loading of lazy assets. Scrolling through the height of the document lets the viewport reveal the elements that need to be within the viewport to get loaded.
Another caveat is that sometimes you need to wait a relatively long time for an asset to load, so in the example above you might need to experiment with how long you wait after each scroll. Also, as I mentioned, arbitrary waits in the general execution sometimes affect whether an asset loads or not.
In general, when using Puppeteer for screenshots, you want your logic to resemble real user behavior. Your goal is to reproduce rendering scenarios as if someone had launched Chrome on their computer and navigated to that website.
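For completeness, here is a usage sketch of the capturePage helper above; the URL and output path are just examples:
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch();
  try {
    // capturePage returns the screenshot buffer, so the caller decides what to do with it.
    const buffer = await capturePage(browser, 'https://today.line.me/HK/pc');
    fs.writeFileSync('./image.png', buffer);
  } finally {
    await browser.close();
  }
})();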
I resolved this issue by changing the logic for how I scroll the page and how long I wait.
A solution that worked for me:
Adjust the timeout limit for my test runner (mocha).
// package.json
"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "eject": "react-scripts eject",
  "test": "mocha --timeout=5000" <--- set the timeout to something higher than the 2-second default
},
Wait for x seconds, where x is roughly half of what you set above, then take the screenshot.
var path = require("path"); // built in with NodeJS
await new Promise((resolve) => setTimeout(() => resolve(), 2000));
var file_path = path.join(__dirname, "__screenshots__/initial.png");
await page.screenshot({ path: file_path });

ResizeObserver - loop limit exceeded

About two months ago we started using Rollbar to notify us of various errors in our Web App. Ever since then we have been getting the occasional error:
ResizeObserver loop limit exceeded
The thing that confuses me about this is that we are not using ResizeObserver and I have investigated the only plugin which I thought could possibly be the culprit, namely:
Aurelia Resize
But it doesn't appear to be using ResizeObserver either.
What is also confusing is that these error messages have been occurring since January, but ResizeObserver support was only recently added in Chrome 65.
The browser versions that have been giving us this error are:
Chrome: 63.0.3239 (ResizeObserver loop limit exceeded)
Chrome: 64.0.3282 (ResizeObserver loop limit exceeded)
Edge: 14.14393 (SecurityError)
Edge: 15.15063 (SecurityError)
So I was wondering if this could possibly be a browser bug? Or perhaps an error that actually has nothing to do with ResizeObserver?
You can safely ignore this error.
One of the specification authors addressed this in a comment on your question, but since it is only a comment it is easy to miss that it is really the most important answer in this thread; it is what made me comfortable ignoring this error in our Sentry logs.
This error means that ResizeObserver was not able to deliver all observations within a single animation frame. It is benign (your site will not break). – Aleksandar Totic Apr 15 at 3:14
There are also some related issues to this in the specification repository.
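If, like us, you just want it out of your error tracker, a minimal sketch of filtering it on the client (assuming @sentry/browser; the DSN is a placeholder) looks like this:
import * as Sentry from '@sentry/browser';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
  // Drop the benign ResizeObserver message before it is ever sent.
  ignoreErrors: ['ResizeObserver loop limit exceeded'],
});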
It's an old question, but it might still be helpful to someone. You can avoid this error by wrapping the callback in requestAnimationFrame.
For example:
const resizeObserver = new ResizeObserver(entries => {
  // We wrap it in requestAnimationFrame to avoid this error - ResizeObserver loop limit exceeded
  window.requestAnimationFrame(() => {
    if (!Array.isArray(entries) || !entries.length) {
      return;
    }
    // your code
  });
});
If you're using Cypress and this issue comes up, you can safely ignore it by adding the following code to support/index.js or support/commands.ts:
const resizeObserverLoopErrRe = /ResizeObserver loop limit exceeded/;
Cypress.on('uncaught:exception', (err) => {
  /* returning false here prevents Cypress from failing the test */
  if (resizeObserverLoopErrRe.test(err.message)) {
    return false;
  }
});
You can follow the discussion about it here.
As the Cypress maintainers themselves proposed this solution, I believe it's safe to do so.
We had this same issue. We found that a chrome extension was the culprit. Specifically, the loom chrome extension was causing the error (or some interaction of our code with loom extension). When we disabled the extension, our app worked.
I would recommend disabling certain extensions/addons to see if one of them might be contributing to the error.
For Mocha users:
The snippet below overrides the window.onerror hook that Mocha installs and turns these errors into a warning.
https://github.com/mochajs/mocha/blob/667e9a21c10649185e92b319006cea5eb8d61f31/browser-entry.js#L74
// ignore ResizeObserver loop limit exceeded
// this is ok in several scenarios according to
// https://github.com/WICG/resize-observer/issues/38
before(() => {
  // called before any tests are run
  const e = window.onerror;
  window.onerror = function (err) {
    if (err === 'ResizeObserver loop limit exceeded') {
      console.warn('Ignored: ResizeObserver loop limit exceeded');
      return false;
    } else {
      return e(...arguments);
    }
  };
});
Not sure there is a better way.
Adding a debounce like
new ResizeObserver(_.debounce((entries) => {}, 200));
fixed this error for me.
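Spelled out a bit more, a sketch of the same idea, assuming lodash's debounce is imported:
import debounce from 'lodash/debounce';

// Collapse bursts of resize notifications into a single callback invocation,
// giving layout a chance to settle between runs.
const observer = new ResizeObserver(
  debounce((entries) => {
    for (const entry of entries) {
      console.log(entry.target, entry.contentRect);
    }
  }, 200)
);

observer.observe(document.body);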
The error might be worth investigating. It can indicate a problem in your code that can be fixed.
In our case an observed resize of an element triggered a change on the page, which caused a resize of the first element again, which again triggered a change on the page, which again caused a resize of the first element, … You know how this ends.
Essentially, we created an infinite loop that obviously could not fit into a single animation frame. We broke it by deferring the change on the page with setTimeout() (although this is not perfect, since it may cause some flickering for users).
So every time ResizeObserver loop limit exceeded emerges in our Sentry now, we look at it as a useful hint and try to find the cause of the problem.
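A stripped-down sketch of the kind of loop we had, and how deferring the dependent change breaks it (the element names are made up):
const panel = document.querySelector('#panel');     // made-up elements
const sidebar = document.querySelector('#sidebar');

const observer = new ResizeObserver((entries) => {
  const { width } = entries[0].contentRect;
  // Changing the sidebar synchronously here can resize #panel again within
  // the same frame and restart the loop; deferring the change breaks the
  // cycle (at the cost of possible flicker, as noted above).
  setTimeout(() => {
    sidebar.style.width = `${Math.round(width / 3)}px`;
  }, 0);
});

observer.observe(panel);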
In my case, the "ResizeObserver - loop limit exceeded" issue was triggered by the combination of window.addEventListener("resize", ...) and React's React.useState.
In detail, I was working on a hook called useWindowResize, used like const [windowWidth, windowHeight] = useWindowResize();.
The code reacts to windowWidth/windowHeight changes via useEffect.
React.useEffect(() => {
  ViewportService.dynamicDimensionControlledBy(
    "height",
    { windowWidth, windowHeight },
    widgetModalRef.current,
    { bottom: chartTitleHeight },
    false,
    ({ h }) => setWidgetHeight(h),
  );
}, [windowWidth, windowHeight, widgetModalRef, chartTitleHeight]);
So any browser window resize caused that issue.
I've found that many similar issues caused by the connection between the old JavaScript world (DOM manipulation, browser events) and the new JavaScript world (React) can be solved with setTimeout, but I'd prefer to avoid it and would call it an anti-pattern where possible.
So my fix is to wrap the setter call in setTimeout.
React.useEffect(() => {
  ViewportService.dynamicDimensionControlledBy(
    "height",
    { windowWidth, windowHeight },
    widgetModalRef.current,
    { bottom: chartTitleHeight },
    false,
    ({ h }) => setTimeout(() => setWidgetHeight(h), 0),
  );
}, [windowWidth, windowHeight, widgetModalRef, chartTitleHeight]);
One-line solution for Cypress: edit the file support/commands.js with:
Cypress.on(
  'uncaught:exception',
  (err) => !err.message.includes('ResizeObserver loop limit exceeded')
);
https://github1s.com/chromium/chromium/blob/master/third_party/blink/renderer/core/resize_observer/resize_observer_controller.cc#L44-L45
https://github1s.com/chromium/chromium/blob/master/third_party/blink/renderer/core/frame/local_frame_view.cc#L2211-L2212
After looking at the source code, it seems that in my case the issue surfaced when the NotifyResizeObservers function was called while there were no registered observers.
The GatherObservations function returns a min_depth of 4096 when there are no observers, and in that case we get the "ResizeObserver loop limit exceeded" error.
The way I resolved it was to have an observer living throughout the lifecycle of the page.
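In code, that workaround is roughly this (a sketch):
// Keep one no-op observer registered for the whole lifetime of the page so the
// internal observer list is never empty when NotifyResizeObservers runs.
const keepAliveObserver = new ResizeObserver(() => {
  /* intentionally empty */
});
keepAliveObserver.observe(document.documentElement);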
Managed to solve this in React for our error logger setup.
The observer error propagates to the window.onerror handler, so by storing the original window.onerror in a ref, you can replace it with a custom handler that doesn't throw for this particular error. Other errors are allowed to propagate as normal.
Make sure you reconnect the original onerror in the useEffect cleanup.
const defaultOnErrorFn = useRef(window.onerror);

useEffect(() => {
  window.onerror = (...args) => {
    if (args[0] === 'ResizeObserver loop limit exceeded') {
      return true;
    } else {
      defaultOnErrorFn.current && defaultOnErrorFn.current(...args);
    }
  };
  return () => {
    window.onerror = defaultOnErrorFn.current;
  };
}, []);
I had this issue with Cypress tests not being able to run.
I found that, instead of handling the exception, the proper fix was to edit tsconfig.json to target es6 instead of es5, like so:
{
  "extends": "../tsconfig.json",
  "compilerOptions": {
    "baseUrl": "../node_modules",
    "target": "es5", --> old
    "target": "es6", --> new
    "types": ["cypress", "#testing-library/cypress"],
    "sourceMap": true
  },
  "include": [
    "**/*.ts"
  ]
}

Google Chrome web push API bug

What is this bug? When sending a web push, the Google Chrome browser sometimes shows a second notification with the text: "This site has been updated in the background."
I want it to show only one message.
I found this text in the Chrome source:
This site has been updated in the background.
github.com/scheib/chromium/blob/master/chrome/app/resources/generated_resources_en-GB.xtb
How can I get rid of this message?
The way it works is a feature, not a bug.
Here is an issue that explains your situation in Chrome: https://code.google.com/p/chromium/issues/detail?id=437277
And a more specific code comment in the Chromium code:
https://code.google.com/p/chromium/codesearch#chromium/src/chrome/browser/push_messaging/push_messaging_notification_manager.cc&rcl=1449664275&l=287
What might have happened is that some of the push messages sent to the client did not result in a notification being shown.
Hope that helps.
The reason this often occurs is that the promise passed to event.waitUntil() resolved without a notification having been shown.
An example that might trigger the default push notification:
function handlePush() {
  // BAD: The fetch's promise isn't returned
  fetch('/some/api')
    .then(function(response) {
      return response.json();
    })
    .then(function(data) {
      // BAD: the showNotification promise isn't returned
      showNotification(data.title, {body: data.body});
    });
}

self.addEventListener('push', function(event) {
  event.waitUntil(handlePush());
});
Instead, you should write this as:
function handlePush() {
  // GOOD: the fetch promise is returned
  return fetch('/some/api')
    .then(function(response) {
      return response.json();
    })
    .then(function(data) {
      // GOOD: the showNotification promise is returned
      return showNotification(data.title, {body: data.body});
    });
}

self.addEventListener('push', function(event) {
  const myNotificationPromise = handlePush();
  event.waitUntil(myNotificationPromise);
});
The reason this is all important is that browsers wait for the promise passed into event.waitUntil to resolve/finish, so they know how long the service worker needs to be kept alive and running.
When the promise resolves for a push event, Chrome checks whether a notification has been shown; whether Chrome then shows this default notification comes down to a race condition / the specific circumstances. Your best bet is to ensure you have a correct promise chain.
I put some extra notes on promises in this post (see 'Side Quest: Promises'): https://gauntface.com/blog/2016/05/01/push-debugging-analytics