Can Cypress expose a function to the global object? - puppeteer

I learned that Puppeteer can add a function to the page's window object like this:
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.exposeFunction('md5', text =>
  crypto.createHash('md5').update(text).digest('hex')
);
await page.evaluate(async () => {
  // use window.md5 to compute hashes
  const myString = 'PUPPETEER';
  const myHash = await window.md5(myString);
  console.log(`md5 of ${myString} is ${myHash}`);
});
So I'm wondering: is there any way Cypress can attach functions to the window object like Puppeteer does?
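One common pattern (a hedged sketch, not verified against every Cypress version) is to attach the helper in onBeforeLoad of cy.visit, so it exists on the app's window before any page code runs. Note that Cypress test code runs in the browser, so Node-only modules like crypto are not available there; Node-side work belongs in cy.task(). The md5Browser helper below is a hypothetical browser-side implementation.

describe('exposing a helper on window', () => {
  it('makes window.md5 available to the app', () => {
    cy.visit('https://example.com', {
      onBeforeLoad(win) {
        // hypothetical helper: any pure-JS md5 library bundled with the tests would work here
        win.md5 = (text) => md5Browser(text);
      },
    });

    // the app (or the test itself) can now call window.md5(...)
    cy.window().then((win) => {
      expect(win.md5).to.be.a('function');
    });
  });
});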

Related

How to load an extension using the puppeteer.connect() method

I use the GoLogin service. GoLogin is a browser anti-detect service that lets me fake my browser identity / manage my browser fingerprint, so I can do web scraping freely without being detected.
In this case I want to be able to load my extension into that browser using the puppeteer.connect() method.
here's the code:
const puppeteer = require('puppeteer-core');
const GoLogin = require('gologin');

(async () => {
  const GL = new GoLogin({
    token: 'yU0token',
    profile_id: 'yU0Pr0f1leiD',
  });
  const { status, wsUrl } = await GL.start();
  const browser = await puppeteer.connect({
    browserWSEndpoint: wsUrl.toString(),
    ignoreHTTPSErrors: true,
  });
  const page = await browser.newPage();
  await page.goto('https://myip.link/mini');
  console.log(await page.content());
  await browser.close();
  await GL.stop();
})();
I don't know how to do it. Please help me, so I can load my extension using this puppeteer.connect() setup.
Assuming you want to load a Chrome extension into your Puppeteer browser:
Find the Chrome extension's working directory (see: Where does Chrome store extensions?).
Find your extension ID by going to chrome://extensions/.
Sample code:
const puppeteer = require('puppeteer-core');

const MY_EXTENSION_PATH = '~/Library/Application Support/Google/Chrome/Default/Extensions/cdockenadnadldjbbgcallicgledbeoc/0.3.38_0'

async function loadExtension() {
  return puppeteer.launch({
    headless: 0,
    args: [
      `--disable-extensions-except=${MY_EXTENSION_PATH}`,
      `--load-extension=${MY_EXTENSION_PATH}`,
    ],
  });
}
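For completeness, a hedged usage sketch of the sample above (not from the original answer; adjust MY_EXTENSION_PATH to your machine, and note that puppeteer-core also needs an executablePath or browserWSEndpoint so it knows which Chrome to drive):

(async () => {
  const browser = await loadExtension();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  // ...interact with the page; the extension is loaded in this browser instance
  await browser.close();
})();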

Node.js: speed up Puppeteer HTML to PDF

I have a Node.js application that creates dynamic content which I want users to download.
static async downloadPDF(res, html, filename) {
  const puppeteer = require('puppeteer');
  const browser = await puppeteer.launch({
    headless: true
  });
  const page = await browser.newPage()
  await page.setContent(html, {
    waitUntil: 'domcontentloaded'
  })
  const pdfBuffer = await page.pdf({
    format: 'A4'
  });
  res.set("Content-Disposition", "attachment;filename=" + filename + ".pdf");
  res.setHeader("Content-Type", "application/pdf");
  res.send(pdfBuffer);
  await browser.close()
}
Is there a way to speed up the whole process? It takes about 10 seconds to create a PDF file of about 100 kB.
I read somewhere that I can launch the headless browser once and then only create a new page instead of launching a browser every time the file is requested,
but I cannot figure out the correct way of doing it.
You could move page creation to a util module and hoist the page so it is re-used across calls:
// getPage.js
const puppeteer = require('puppeteer');

let page;
const getPage = async () => {
  if (page) return page;
  const browser = await puppeteer.launch({
    headless: true,
  });
  page = await browser.newPage();
  return page;
};

module.exports = getPage;
Then require it where you generate the PDF:
const getPage = require('./getPage');

static async downloadPDF(res, html, filename) {
  const page = await getPage()
  // ...then setContent, page.pdf, and res.send as before
}
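A variation on the same idea (a sketch, not part of the original answer): keep one shared browser, but open a fresh page per request so concurrent downloads don't collide on a single page.

// getBrowser.js — reuse one Chromium process across requests (sketch)
const puppeteer = require('puppeteer');

let browserPromise;
const getBrowser = () => {
  if (!browserPromise) {
    browserPromise = puppeteer.launch({ headless: true });
  }
  return browserPromise;
};

// in the route handler: new page per request, shared browser
async function downloadPDF(res, html, filename) {
  const browser = await getBrowser();
  const page = await browser.newPage();
  try {
    await page.setContent(html, { waitUntil: 'domcontentloaded' });
    const pdfBuffer = await page.pdf({ format: 'A4' });
    res.set('Content-Disposition', 'attachment;filename=' + filename + '.pdf');
    res.setHeader('Content-Type', 'application/pdf');
    res.send(pdfBuffer);
  } finally {
    await page.close(); // close the page, keep the browser warm for the next request
  }
}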
Yes, there is no reason to launch the browser every time. You can point Puppeteer at a new URL and fetch the content without relaunching, which is much faster.
How to implement this? Split your function into three steps:
Create a browser instance. It doesn't matter whether it is headless or not; if you run the app in an X environment you can launch a visible window to see what your Puppeteer does.
Create a function that does the main task in a loop.
After each block of work is done, call await page.goto(url) (where "page" is the instance from browser.newPage()) and run your function again.
This is one possible solution in a function-style code:
Create the instances:
const browser = await puppeteer.launch({ headless: false });
const page = await browser.newPage();
await page.setViewport({ width: 1280, height: 1024 });
I put this inside an immediately invoked async function, like (async () => { ... })();.
Get the data.
In my case, a set of URLs was in MongoDB; after fetching them, I ran a loop:
for (const entrie of entries) {
  const url = entrie[1];
  const id = entrie[0];
  await get_aplicants_data(page, url, id, collection);
}
Inside get_aplicants_data() I implemented the logic that works on the loaded page:
await page.goto(url); // navigate to the url
// ...code to process the page data
You can also load each URL in the loop and then plug in your own logic.
Hope this gives you some help. A consolidated sketch of the three steps follows below.
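Putting the three steps together, here is a minimal sketch of the pattern described above (the URLs and the per-URL function are placeholders; in the answer they came from MongoDB and get_aplicants_data):

const puppeteer = require('puppeteer');

(async () => {
  // step 1: launch the browser once
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.setViewport({ width: 1280, height: 1024 });

  // step 2: the per-URL work — navigate, then process the loaded page
  async function processUrl(page, url) {
    await page.goto(url, { waitUntil: 'domcontentloaded' });
    // ...extract or render whatever you need from the page
  }

  // step 3: reuse the same page for every URL
  const urls = ['https://example.com/a', 'https://example.com/b']; // placeholder list
  for (const url of urls) {
    await processUrl(page, url);
  }

  await browser.close();
})();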

Scraping Carousell with Puppeteer

I am currently doing a project that needs to scrape data from the search results on carousell.ph.
I basically made a sample HTML page that replicates the output HTML of Carousell; so far the JavaScript works, but when I tried to migrate it to Puppeteer it always gives me an error.
The task is basically to get the product list from the search URL "https://www.carousell.ph/search/iphone".
Here's the code I made.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  let url = 'https://www.carousell.ph/search/iphone';
  await page.goto(url, { waitUntil: 'load', timeout: 10000 });
  await page.setViewport({ width: 2195, height: 1093 });
  await page.screenshot({ fullPage: true, path: 'carousell.png' });
  document.querySelectorAll('main').forEach(main => {
    main.querySelectorAll('a').forEach(product => {
      const product_details = product.querySelectorAll('p');
      const productName = product.textContent;
      const productHref = product.getAttribute('href');
      console.log(product_details[0].textContent + " - " + product_details[1].textContent);
    });
  });
  await browser.close()
})()
As #hardkoded stated, document is not something you get out of the box in Puppeteer; it exists in the browser, but not in Node.js. You also do not need forEach in Node.js. The map technique outlined in this video is very helpful and quick. I'd also make sure to keep await on your loop or map technique, because the function is asynchronous, so you want to make sure the promise comes back resolved.
Map technique
An extremely fast way to get many elements from a page into an array is to use a function like the one below. Instead of getting an array of the elements and then looping over them for their properties, you can create a function using $$eval and map. The result is a formatted JSON array that takes all the looping out of the equation.
const links = await first_state_list.$$eval("li.stateList__item", links =>
  links.map(ele2 => ({
    State_nme: ele2.querySelector("a").innerText.trim(), // get the inner text
    State_url: ele2.querySelector("a").getAttribute("href") // get the href
  }))
);
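Adapted to the Carousell page, it could look roughly like this (a sketch: the selectors are assumptions and may need adjusting against the live markup, and it must run after page.goto(url)):

const products = await page.$$eval('main a', anchors =>
  anchors.map(a => ({
    name: a.querySelector('p') ? a.querySelector('p').innerText.trim() : null, // first <p> inside the card
    href: a.getAttribute('href') // product link
  }))
);
console.log(products);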
I already made it work:
const puppeteer = require('puppeteer');

async function timeout(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

(async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage();

  let searchItem = 'k20&20pro';
  let carousellURL = 'https://www.carousell.ph/search/' + searchItem;
  await page.goto(carousellURL, { waitUntil: 'load', timeout: 100000 });
  //await page.goto(carousellURL, {waitUntil: 'networkidle0'});
  await page.setViewport({
    width: 2195,
    height: 1093
  });
  await page.evaluate(() => {
    window.scrollBy(0, window.innerHeight);
  })
  await timeout(15000);
  await page.screenshot({
    fullPage: true,
    path: 'carousell.png'
  });

  var data = await page.evaluate(() =>
    Array.from(
      document.querySelectorAll('main div a:nth-child(2)')
    ).map(products => products.href)
  )

  var i;
  for (i = 0; i < data.length; i++) {
    console.log(data[i]);
    // commented-out section below: this will open the page to get product details.
    //await page.goto(data[1], {"waitUntil" : "networkidle0"});
    // inner product page details
    // this will get the title
    // document.querySelectorAll('h1')[0].innerText;
    // this will get the amount
    // document.querySelectorAll('h2')[0].innerText;
    // this will get the description
    // document.querySelectorAll('section div div:nth-child(4) p')[0].innerText;
    // this will get sellers name
    // document.querySelectorAll('div div:nth-child(2) a p')[0].innerText;
    let ss_filename = 'carousellph_' + searchItem + '_' + i + '.png';
    console.log(ss_filename);
    console.log("\r\n");
    //await page.screenshot({ fullPage: false, path: ss_filename });
  }

  await browser.close()
})()

How to invoke Chrome Node Screenshot from the console?

I know you can capture a single HTML node via the command prompt, but is it possible to do this programmatically from the console, similar to Puppeteer? I'd like to loop over all elements on a page and capture them for occasional one-off projects where I don't want to set up a full auth process in Puppeteer.
I'm referring to this functionality:
But executed from the console, e.g. during a forEach or something like that.
See the puppeteer reference here.
Something to the effect of this:
$x("//*[contains(#class, 'special-class-name')]").forEach((el)=> el.screenshot())
I just made a script that takes a screenshot of every submit button on the Google main page. Take a look and take some inspiration from it.
const puppeteer = require('puppeteer')

;(async () => {
  const browser = await puppeteer.launch({
    headless: false,
    defaultViewport: null,
    devtools: true,
    args: ['--window-size=1920,1170', '--window-position=0,0']
  })
  const page = (await browser.pages())[0]
  const open = await page.goto('https://www.google.com')
  const submit = await page.$$('input[type="submit"]')
  const length = submit.length
  let num = 0
  const shot = submit.forEach(async elemHandle => {
    num++
    await elemHandle.screenshot({
      path: `${Date.now()}_${num}.png`
    })
  })
})()
You can use ElementHandle.screenshot() to take a screenshot of a specific element on the page. The ElementHandle can be obtained from Page.$(selector) or Page.$$(selector) if you want to return multiple results.
const browser = await puppeteer.launch({ headless: true });
const page = await browser.newPage();
await page.goto("https://stackoverflow.com/questions/50715164");
const userInfo = await page.$(".user-info");
await userInfo.screenshot({ path: "userInfo.png" });
The output image after executing the code:
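If you want to loop over several matches (the Page.$$ case mentioned above), a hedged sketch:

const handles = await page.$$(".user-info"); // example selector, adjust as needed
for (let i = 0; i < handles.length; i++) {
  await handles[i].screenshot({ path: `userInfo_${i}.png` });
}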

Using Puppeteer how can I get DOMContentLoaded, Load time

Using Puppeteer, how can I get the DOMContentLoaded and Load times? It would be great if someone could explain how to access the DevTools Network information from Puppeteer.
Probably you are asking about window.performance.timing; here is a simple example of how to get this data in Puppeteer:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://en.wikipedia.org');

  const performanceTiming = JSON.parse(
    await page.evaluate(() => JSON.stringify(window.performance.timing))
  );
  console.log(performanceTiming);

  await browser.close();
})();
But the results are quite raw and not very meaningful. You should calculate the difference between each value and navigationStart; here is a full example of how to do it (the code comes from this article):
const puppeteer = require('puppeteer');

const extractDataFromPerformanceTiming = (timing, ...dataNames) => {
  const navigationStart = timing.navigationStart;
  const extractedData = {};
  dataNames.forEach(name => {
    extractedData[name] = timing[name] - navigationStart;
  });
  return extractedData;
};

async function testPage(page) {
  await page.goto('https://en.wikipedia.org');
  const performanceTiming = JSON.parse(
    await page.evaluate(() => JSON.stringify(window.performance.timing))
  );
  return extractDataFromPerformanceTiming(
    performanceTiming,
    'domContentLoadedEventEnd',
    'loadEventEnd'
  );
}

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  console.log(await testPage(page));
  await browser.close();
})();
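As a side note (not from the original answer): window.performance.timing is deprecated, and newer browsers expose the same information through the Navigation Timing Level 2 API, whose values are already relative to the navigation start, so no subtraction is needed:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://en.wikipedia.org');

  // PerformanceNavigationTiming: values are measured from the start of the navigation
  const navTiming = JSON.parse(
    await page.evaluate(() =>
      JSON.stringify(performance.getEntriesByType('navigation')[0])
    )
  );
  console.log({
    domContentLoaded: navTiming.domContentLoadedEventEnd,
    load: navTiming.loadEventEnd,
  });

  await browser.close();
})();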