Load dynamic content with puppeteer?

I'm using this code to get the page data. It works, but I only get the data once.
The problem is that this data is updated, let's say, every second. I want to get it without reloading the page.
This is a simple example of what I want: http://novinite.win/clock.php
Is there a way to refresh the result without reloading the web page?
const puppeteer = require('puppeteer');

(async () => {
  const url = process.argv[2];
  const browser = await puppeteer.launch({
    args: ['--no-sandbox']
  });
  const page = await browser.newPage();

  // Interception must be enabled before request.continue() may be called.
  await page.setRequestInterception(true);
  page.on('request', (request) => {
    // method and url are methods, not properties, in current Puppeteer.
    console.log(`Intercepting: ${request.method()} ${request.url()}`);
    request.continue();
  });

  await page.goto(url, {waitUntil: 'load'});
  const html = await page.content();
  console.log(html);
  await browser.close();
})();

You can await a Promise that logs the current textContent every 1000 ms (1 second) with setInterval and resolves after a set number of intervals (for example, 10 intervals):
'use strict';

const puppeteer = require( 'puppeteer' );

( async () =>
{
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto( 'http://novinite.win/clock.php' );

    await new Promise( ( resolve, reject ) =>
    {
        let i = 0;
        const interval = setInterval( async () =>
        {
            console.log( await page.evaluate( () => document.getElementById( 'txt' ).textContent ) );
            if ( ++i === 10 )
            {
                clearInterval( interval );
                resolve( await browser.close() );
            }
        }, 1000 );
    });
})();
Example Result:
19:24:15
19:24:16
19:24:17
19:24:18
19:24:19
19:24:20
19:24:21
19:24:22
19:24:23
19:24:24
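Polling on a timer works, but if you'd rather be notified the moment the value changes, a push-based sketch is to install a MutationObserver inside the page and stream each change back to Node with `page.exposeFunction`. This is a hypothetical helper (`watchTxt` is not part of the answer above); it assumes the page updates the `#txt` element in place, and it expects an already-created Puppeteer Page:

```javascript
// Push-based alternative to polling: a MutationObserver in the browser calls
// back into Node each time the observed element changes. Expects an
// already-created Puppeteer Page; assumes the page updates #txt in place.
async function watchTxt(page, url, ms) {
  // Exposed to the page as window.onTxtChange.
  await page.exposeFunction('onTxtChange', text => console.log(text));

  await page.goto(url);
  await page.evaluate(() => {
    const target = document.getElementById('txt');
    new MutationObserver(() => window.onTxtChange(target.textContent))
      .observe(target, { childList: true, characterData: true, subtree: true });
  });

  // Keep observing for the requested duration.
  await new Promise(resolve => setTimeout(resolve, ms));
}

// Usage sketch: watchTxt(page, 'http://novinite.win/clock.php', 10000);
```

Unlike the setInterval version, this logs only when the text actually changes, so nothing is missed between polls.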

Related

Why am I missing data when using a proxy with puppeteer?

When using a proxy for my Puppeteer request to scrape a page, some data, such as the price, is missing, but the rest is there. When I remove the proxy, everything loads correctly.
const puppeteer = require("puppeteer");
const cheerio = require("cheerio");

const getData = async () => {
  const browser = await puppeteer.launch({
    headless: true,
    args: ["--no-sandbox", "--disable-setuid-sandbox", "--ignore-certificate-errors", "--proxy-server=myproxy"],
  });
  const page = await browser.newPage();
  await page.authenticate({ username: "username", password: "password" });
  await page.goto(url, {
    timeout: 300000,
    waitUntil: "load",
  });
  const html = await page.evaluate(() => document.body.innerHTML);
  const $ = cheerio.load(html);
  const products = [];
  $("[data-automation=product-results] > div").each((index, el) => {
    const product = $(el);
    const id = product.attr("data-product-id");
    const name = product.find("[data-automation=name]").text();
    const image = product.find("[data-automation=image]").attr("src");
    const price = product.find("[data-automation=current-price]").text();
    const data = {
      id,
      name,
      image,
      price,
    };
    products.push(data);
  });
  console.log(products);
  await browser.close();
};
I have tried waitUntil: 'domcontentloaded' (same result), and both waitUntil: 'networkidle0' and waitUntil: 'networkidle2' time out (5 minutes).
I don't quite understand why I am able to get all the data without a proxy but only partial data with one.
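One way to narrow this down: if the price arrives via a follow-up request that the proxy blocks or slows down, logging failed requests should reveal it, and waiting for the missing element instead of the load event avoids scraping too early. Below is a diagnostic sketch; `diagnoseMissingPrice` is a hypothetical helper, and it expects a Browser already launched with the proxy arguments from the question:

```javascript
// Diagnostic sketch: log every request the page fails to load, then wait for
// the element that is actually missing rather than for the load event.
// Expects a Browser launched with the question's proxy args
// (--proxy-server=myproxy etc.); the selector comes from the code above.
async function diagnoseMissingPrice(browser, url) {
  const page = await browser.newPage();
  await page.authenticate({ username: "username", password: "password" });

  // A proxy that blocks the price API will show up here.
  page.on("requestfailed", req =>
    console.log(`FAILED: ${req.failure().errorText} ${req.url()}`));

  await page.goto(url, { waitUntil: "domcontentloaded", timeout: 300000 });

  // Wait for the price element itself instead of a page-level lifecycle event.
  await page.waitForSelector("[data-automation=current-price]", { timeout: 60000 });
}
```

If `requestfailed` fires for an XHR or fetch URL, that endpoint is the one the proxy is interfering with.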

Puppeteer Screenshot full page omitting background images

Hi, I've been trying to get Puppeteer to take a screenshot of full pages, including all images. Unfortunately, background images are getting omitted (see comparison below)... I can't figure out how to get them.
Here's my code
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function screenshotFullPage(url: string): Promise<string> {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  await page.evaluate(async () => {
    const selectors = Array.from(document.querySelectorAll("img"));
    // https://stackoverflow.com/questions/46160929/puppeteer-wait-for-all-images-to-load-then-take-screenshot
    document.body.scrollIntoView(false);
    await Promise.all(
      selectors.map((img) => {
        if (img.complete) return;
        return new Promise((resolve, reject) => {
          img.addEventListener("load", resolve);
          img.addEventListener("error", reject);
        });
      })
    );
  });
  await sleep(5000); // wait a further 5 s for late paints
  const path = generateScreenshotPath();
  await page.screenshot({
    path,
    fullPage: true,
  });
  await browser.close();
  return path;
}

await screenshotFullPage("https://chesskid.com");
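One likely cause: the `img` load events above only cover `<img>` elements, not CSS `background-image` styles. A sketch of a workaround (the helper names here are hypothetical) is to scan every element's computed style for `url(...)` values and preload each one through an `Image` object before taking the screenshot:

```javascript
// Pure helper, shown for clarity: pull the URL out of a computed
// background-image value such as 'url("https://x/bg.png")'.
function extractBackgroundUrl(cssValue) {
  const match = /url\("?([^")]+)"?\)/.exec(cssValue || "");
  return match ? match[1] : null;
}

// Expects an already-created Puppeteer Page; call after page.goto and before
// page.screenshot. Preloads every CSS background image it can find.
async function waitForBackgroundImages(page) {
  await page.evaluate(async () => {
    const urls = new Set();
    for (const el of document.querySelectorAll("*")) {
      // Same extraction as extractBackgroundUrl, inlined because code inside
      // evaluate() runs in the browser and cannot see Node-side functions.
      const match = /url\("?([^")]+)"?\)/.exec(getComputedStyle(el).backgroundImage);
      if (match) urls.add(match[1]);
    }
    await Promise.all(
      [...urls].map(src => new Promise(resolve => {
        const img = new Image();
        img.onload = img.onerror = resolve; // resolve either way
        img.src = src;
      }))
    );
  });
}
```

Note that lazy-loaded backgrounds may only get a real `background-image` value after scrolling, so scrolling through the page first (as the question's code starts to do) still matters.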

Puppeteer function not returning array

Hi guys, can you please point out my mistake in this code?
console.log(urls) is printing undefined.
Thanks in advance.
const puppeteer = require('puppeteer');

async function GetUrls() {
  const browser = await puppeteer.launch({
    headless: false,
    executablePath: 'C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe'
  })
  const page = await browser.newPage();
  await page.goto("https://some page");
  await page.waitForSelector('a.review.exclick');
  let urls = await page.evaluate(() => {
    let results = [];
    let items = document.querySelectorAll('a.review.exclick');
    items.forEach((item) => {
      results.push({
        url: item.getAttribute('href'),
      });
    });
    return results;
    browser.close();
  });
}

(async () => {
  let URLS = await GetUrls();
  console.log(URLS);
  process.exit(1);
})();
Here is a list of the problems:
You don't have a return statement in your GetUrls() function.
you close the browser after a return statement AND inside the page.evaluate() method
Keep in mind that anything that is executed within the page.evaluate() will relate to the browser context. To quickly test this, add a console.log("test") before let results = []; and you will notice that nothing appears in your Node.js console, it will appear in your browser console instead.
Therefore, the browser variable is visible within the GetUrls() function but NOT visible within the page.evaluate() method.
Here is the corrected code sample:
const puppeteer = require('puppeteer');

async function GetUrls() {
  const browser = await puppeteer.launch({
    headless: false,
    executablePath: 'C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe'
  });
  const page = await browser.newPage();
  await page.goto("https://some page");
  await page.waitForSelector('a.review.exclick');
  let urls = await page.evaluate(() => {
    let results = [];
    let items = document.querySelectorAll('a.review.exclick');
    items.forEach((item) => {
      results.push({
        url: item.getAttribute('href'),
      });
    });
    return results;
  });
  await browser.close();
  return urls;
}

(async () => {
  let URLS = await GetUrls();
  console.log(URLS);
  process.exit(1);
})();

Scraping carousell with puppeteer

I am currently doing a project that needs to scrape data from the search results on carousell.ph.
I basically made a sample HTML page replicating carousell's output HTML, and the JavaScript works there; but when I migrated it to Puppeteer, it always gives me an error.
The task is basically to get the product list from the search URL "https://www.carousell.ph/search/iphone".
Here's the code I made.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  let url = 'https://www.carousell.ph/search/iphone';
  await page.goto(url, {waitUntil: 'load', timeout: 10000});
  await page.setViewport({ width: 2195, height: 1093 });
  await page.screenshot({ fullPage: true, path: 'carousell.png' });
  document.querySelectorAll('main').forEach(main => {
    main.querySelectorAll('a').forEach(product => {
      const product_details = product.querySelectorAll('p');
      const productName = product.textContent;
      const productHref = product.getAttribute('href');
      console.log(product_details[0].textContent + " - " + product_details[1].textContent);
    });
  });
  await browser.close()
})()
As @hardkoded stated, document is not available out of the box in Puppeteer; it exists in the browser context, not in Node.js. You also do not need a forEach in Node.js. The map technique outlined in this video is very helpful and quick. I'd also make sure to keep await on your loop or map call, because the function is asynchronous, so you want to be sure the promise comes back resolved.
Map technique
An extremely fast way to get many elements from a page into an array is to use a function like the one below. Instead of getting an array of elements and then looping over them for their properties, you can combine $$eval and map. The result is a formatted JSON array that takes all the looping out of the equation.
const links = await first_state_list.$$eval("li.stateList__item", links =>
  links.map(ele2 => ({
    State_nme: ele2.querySelector("a").innerText.trim(), // get the inner text
    State_url: ele2.querySelector("a").getAttribute("href") // get the href
  }))
);
Already made it work.
const puppeteer = require('puppeteer');

async function timeout(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

(async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage();
  let searchItem = 'k20&20pro';
  let carousellURL = 'https://www.carousell.ph/search/' + searchItem;
  await page.goto(carousellURL, {waitUntil: 'load', timeout: 100000});
  //await page.goto(carousellURL, {waitUntil: 'networkidle0'});
  await page.setViewport({
    width: 2195,
    height: 1093
  });
  await page.evaluate(() => {
    window.scrollBy(0, window.innerHeight);
  })
  await timeout(15000);
  await page.screenshot({
    fullPage: true,
    path: 'carousell.png'
  });
  var data = await page.evaluate(() =>
    Array.from(
      document.querySelectorAll('main div a:nth-child(2)')
    ).map(products => products.href)
  )
  var i;
  for (i = 0; i < data.length; i++) {
    console.log(data[i]);
    // commented out, but this will open the page to get product details.
    //await page.goto(data[1], {"waitUntil" : "networkidle0"});
    // inner product page details
    // this will get the title
    // document.querySelectorAll('h1')[0].innerText;
    // this will get the amount
    // document.querySelectorAll('h2')[0].innerText;
    // this will get the description
    // document.querySelectorAll('section div div:nth-child(4) p')[0].innerText;
    // this will get the seller's name
    // document.querySelectorAll('div div:nth-child(2) a p')[0].innerText;
    let ss_filename = 'carousellph_' + searchItem + '_' + i + '.png';
    console.log(ss_filename);
    console.log("\r\n");
    //await page.screenshot({ fullPage: false, path: ss_filename });
  }
  await browser.close()
})()

Using Puppeteer how can I get DOMContentLoaded, Load time

Using Puppeteer, how can I get the DOMContentLoaded and Load times? It would also be great if someone could explain how to access the DevTools Network object from Puppeteer.
You are probably asking about window.performance.timing. Here is a simple example of how to get this data in Puppeteer:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://en.wikipedia.org');
  const performanceTiming = JSON.parse(
    await page.evaluate(() => JSON.stringify(window.performance.timing))
  );
  console.log(performanceTiming);
  await browser.close();
})();
But the results are quite raw and not very meaningful: each value is an absolute Unix timestamp. You should calculate the difference between each value and navigationStart; here is a full example of how to do it (the code comes from this article):
const puppeteer = require('puppeteer');

const extractDataFromPerformanceTiming = (timing, ...dataNames) => {
  const navigationStart = timing.navigationStart;
  const extractedData = {};
  dataNames.forEach(name => {
    extractedData[name] = timing[name] - navigationStart;
  });
  return extractedData;
};

async function testPage(page) {
  await page.goto('https://en.wikipedia.org');
  const performanceTiming = JSON.parse(
    await page.evaluate(() => JSON.stringify(window.performance.timing))
  );
  return extractDataFromPerformanceTiming(
    performanceTiming,
    'domContentLoadedEventEnd',
    'loadEventEnd'
  );
}

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  console.log(await testPage(page));
  await browser.close();
})();
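It's worth noting that window.performance.timing is deprecated in favor of the Navigation Timing Level 2 API, whose navigation entry already reports each milestone relative to the navigation start, so no manual subtraction is needed. A sketch of the same measurement with the newer API (helper names here are hypothetical; the Page is assumed to be already created):

```javascript
// Pure helper: pick the two milestones from a PerformanceNavigationTiming
// entry. Level 2 values are already relative to the navigation start.
function pickNavTimings(entry) {
  return {
    domContentLoaded: entry.domContentLoadedEventEnd,
    load: entry.loadEventEnd,
  };
}

// Usage sketch: expects an already-created Puppeteer Page.
async function getNavTimings(page) {
  await page.goto('https://en.wikipedia.org');
  // PerformanceEntry objects define toJSON, so JSON round-tripping works.
  const entry = JSON.parse(await page.evaluate(() =>
    JSON.stringify(performance.getEntriesByType('navigation')[0])
  ));
  return pickNavTimings(entry);
}
```

The output has the same shape as extractDataFromPerformanceTiming above, just sourced from the non-deprecated API.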