I am able to initiate a download by clicking the download button, and I can see the download start in headless mode, but the file is never saved.
function setDownloadBehavior(downloadPath = './') {
  return page._client.send('Page.setDownloadBehavior', {
    behavior: 'allow',
    downloadPath
  });
}

await setDownloadBehavior();
await page.mouse.click(644, 288);
Here is the download code I am using. I would appreciate any feedback.
Try using an absolute path for downloadPath: some OSs do not support relative paths here. You can build one from __dirname, for example.
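For example, a minimal sketch of the same helper using an absolute path (the 'downloads' subfolder is just an illustrative choice and must exist before the click):
const path = require('path');

// Build an absolute download directory from the script's location.
// 'downloads' is only an example folder name; create it before triggering the download.
function setDownloadBehavior(downloadPath = path.join(__dirname, 'downloads')) {
  return page._client.send('Page.setDownloadBehavior', {
    behavior: 'allow',
    downloadPath
  });
}

await setDownloadBehavior();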
I want to run a basic script that takes a screenshot of the TV schedule each day on a specific URL. The URL in question has a cookie consent pop-up that has to be accepted before the rest of the page is displayed, which obviously gets in the way of my intended screenshot. A similar post (Pupeteer - how can I accept cookie consent prompts automatically for any URL?) had a solution suggesting to download the Chrome extension 'I don't care about cookies' and then run Puppeteer with Google Chrome with this extension installed. I have installed the extension and run Puppeteer with Chrome, but the extension does not seem to show up in the Chrome window that Puppeteer creates. How do I fix this so the extension is there when I run Chrome using Puppeteer? Note: I am intentionally using regular Chrome, not Chromium, because Chromium does not allow extensions with Puppeteer.
My code:
const puppeteer = require('puppeteer-core')

async function main() {
  const browser = await puppeteer.launch({
    headless: false,
    slowMo: 10,
    executablePath: '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome',
    args: [
      '--disable-extensions-except=/Applications/Google Chrome.app/Contents/MacOS/Google Chrome',
      '--load-extension=/Applications/Google Chrome.app/Contents/MacOS/Google Chrome',
    ]
  });
  const page = await browser.newPage()
  await page.setViewport({
    width: 980,
    height: 480,
    deviceScaleFactor: 2,
  });
  await page.goto("https://www.tvguide.co.uk/mobile/channellisting.asp?ch=145#588622936")
  await page.waitForTimeout(15000); // wait for 15 seconds
  await browser.close()
}

main();
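My understanding (possibly wrong) is that --disable-extensions-except and --load-extension should point at the unpacked extension's directory rather than at the Chrome binary, so presumably the launch would need to look more like this sketch (the extension path is just a placeholder for wherever the unpacked extension's manifest.json lives):
const extensionDir = '/Users/me/extensions/i-dont-care-about-cookies' // placeholder, not a verified path

const browser = await puppeteer.launch({
  headless: false, // extensions only load in headful Chrome
  executablePath: '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome',
  args: [
    `--disable-extensions-except=${extensionDir}`,
    `--load-extension=${extensionDir}`,
  ]
});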
Any reply would be greatly appreciated. Many Thanks.
I am following a tutorial to resize images via Cloud Functions on upload and am experiencing two major issues which I can't figure out:
1) If a PNG is uploaded, the correctly sized thumbnails are generated, but their previews won't load in Firebase Storage (the loading spinner shows indefinitely). The image only shows after I click on "Generate new access token" (none of the generated thumbnails has an access token initially).
2) If a JPEG or any other format is uploaded, the MIME type shows as "application/octet-stream". I'm not sure how to extract the extension correctly to put into the filenames of the newly generated thumbnails.
// Imports assumed from the tutorial setup (firebase-functions, @google-cloud/storage,
// sharp and fs-extra), shown here so the snippet is self-contained.
import * as functions from 'firebase-functions';
import { Storage } from '@google-cloud/storage';
import { tmpdir } from 'os';
import { join, dirname } from 'path';
import * as sharp from 'sharp';
import * as fs from 'fs-extra';

const gcs = new Storage();

export const generateThumbs = functions.storage
  .object()
  .onFinalize(async object => {
    const bucket = gcs.bucket(object.bucket);
    const filePath = object.name;
    const fileName = filePath.split('/').pop();
    const bucketDir = dirname(filePath);
    const workingDir = join(tmpdir(), 'thumbs');
    const tmpFilePath = join(workingDir, 'source.png');

    if (fileName.includes('thumb#') || !object.contentType.includes('image')) {
      console.log('exiting function');
      return false;
    }

    // 1. Ensure thumbnail dir exists
    await fs.ensureDir(workingDir);

    // 2. Download Source File
    await bucket.file(filePath).download({
      destination: tmpFilePath
    });

    // 3. Resize the images and define an array of upload promises
    const sizes = [64, 128, 256];
    const uploadPromises = sizes.map(async size => {
      const thumbName = `thumb#${size}_${fileName}`;
      const thumbPath = join(workingDir, thumbName);

      // Resize source image
      await sharp(tmpFilePath)
        .resize(size, size)
        .toFile(thumbPath);

      // Upload to GCS
      return bucket.upload(thumbPath, {
        destination: join(bucketDir, thumbName)
      });
    });

    // 4. Run the upload operations
    await Promise.all(uploadPromises);

    // 5. Cleanup: remove the tmp/thumbs from the filesystem
    return fs.remove(workingDir);
  });
Would greatly appreciate any feedback!
I just had the same problem: for some unknown reason, Firebase's Resize Images extension purposely removes the download token from the resized image.
To disable deleting of Download Access Tokens:
Go to https://console.cloud.google.com
Select Cloud Functions from the left menu
Select ext-storage-resize-images-generateResizedImage
Click EDIT
In the Inline Editor, go to the file FUNCTIONS/LIB/INDEX.JS
Add // before this line: delete metadata.metadata.firebaseStorageDownloadTokens; (see the snippet below)
Comment out the same line in FUNCTIONS/SRC/INDEX.TS too
Press DEPLOY and wait until it finishes
Note: both the original and the resized image will have the same token.
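For clarity, the edit described in the steps above amounts to commenting out this single line (shown roughly as it appears in the extension's generated source; the surrounding code is not reproduced here):
// In functions/lib/index.js (and the matching line in functions/src/index.ts),
// commenting this out keeps the original download token on the resized image:
// delete metadata.metadata.firebaseStorageDownloadTokens;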
I just started using the extension myself. I noticed that I can't access the image preview from the Firebase console until I click on "Create access token".
I guess that you have to create this token programmatically before the image is available (see the sketch below).
I hope it helps.
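A common way to create the token programmatically is to attach a firebaseStorageDownloadTokens value to the file's custom metadata at upload time. This is only a sketch of that pattern (the uuid dependency is an assumption, and the variables come from the thumbnail code above), not the extension's own implementation:
import { v4 as uuidv4 } from 'uuid';

// Generate a token up front and attach it as custom metadata on upload, so the
// thumbnail is immediately previewable in the Firebase console and via its download URL.
const downloadToken = uuidv4();

await bucket.upload(thumbPath, {
  destination: join(bucketDir, thumbName),
  metadata: {
    metadata: {
      firebaseStorageDownloadTokens: downloadToken,
    },
  },
});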
November 2020
In connection to the answer from @Somebody above, I can't seem to find ext-storage-resize-images-generateResizedImage in GCP Cloud Functions.
A better way to do it is to reuse the original file's firebaseStorageDownloadTokens.
This is how I did mine:
functions
  .storage
  .object()
  .onFinalize((object) => {
    // some image optimization code here
    // (bucket, tempLocalFile and file come from that code)

    // get the original file's access token
    const downloadtoken = object.metadata?.firebaseStorageDownloadTokens;

    return bucket.upload(tempLocalFile, {
      destination: file,
      metadata: {
        metadata: {
          optimized: true, // other custom flags
          firebaseStorageDownloadTokens: downloadtoken, // access token
        },
      },
    });
  });
I'm making a project where, when the page opens, a PDF file is automatically downloaded. So far I have managed to use this:
window.addEventListener('load', () => {
  window.print();
})
But I want the file to be downloaded directly to a directory that I have defined when the window opens, for example D:/myproject.
Is there any way to do this? I don't use a PDF library because I make the PDF with CSS myself.
Thank you.
window will not be accessible in server-side code.
If you want the file to be downloaded in the browser as soon as the web page is opened, you can use res.download() as follows:
app.get('/download', function(req, res){
  const file = `${__dirname}/upload-folder/file_name.pdf`;
  res.download(file); // Set disposition and send it.
});
Since you want to download the file to a specific directory, you can use the download-file npm module:
var download = require('download-file')

app.get('/download', function(req, res){
  var url = `${__dirname}/upload-folder/file_name.pdf`;
  var options = {
    directory: "path of directory/",
    filename: "file_name.pdf"
  }
  download(url, options, function(err){
    if (err) throw err
    res.send("Done"); // Set disposition and send it.
  })
});
Edit: To convert the HTML into a PDF you can use the jspdf npm module.
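A minimal browser-side sketch of that, assuming jsPDF 2.x (whose doc.html renders DOM content and relies on html2canvas); the output filename is just an example:
import { jsPDF } from 'jspdf';

window.addEventListener('load', () => {
  const doc = new jsPDF();

  // Render the current page's body into the PDF, then trigger the download.
  doc.html(document.body, {
    callback: (pdf) => pdf.save('my-page.pdf'),
  });
});
Note that a web page cannot force the download into an arbitrary local folder such as D:/myproject; the browser always saves to its configured downloads directory.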
I am currently running with Chrome 74 and trying to use Cypress to test a style-guide in my app. When I load up Cypress it throws this error:
SecurityError: Blocked a frame with origin "http://localhost:3000" from accessing a cross-origin frame.
Please let me know if there is a solution to this!
I had tried to follow along with this:
https://github.com/cypress-io/cypress/issues/1951
But nothing has changed/worked for me. :(
My code is shown below: cypress/plugins/index.js
module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, args) => {
    // browser will look something like this
    // {
    //   name: 'chrome',
    //   displayName: 'Chrome',
    //   version: '63.0.3239.108',
    //   path: '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome',
    //   majorVersion: '63'
    // }
    if (browser.name === 'chrome') {
      args.push('--disable-site-isolation-trials');
      return args
    }
    if (browser.name === 'electron') {
      args['fullscreen'] = true
      // whatever you return here becomes the new args
      return args
    }
  })
}
In my cypress/support/index.js I have the following, which loads the site before every test so I don't have to write cy.visit in every test:
beforeEach(() => {
  cy.visit('http://localhost:3000/style-guide')
})
I had the very same issue yesterday, and the answer from @jsjoeio in the Cypress issue #1951 you've referenced in your question actually helped me.
So basically the only thing I did was modify my cypress.json and add the following value:
{
"chromeWebSecurity": false
}
You can disable Chrome web security to overcome this issue.
Go to the cypress.json file.
Add { "chromeWebSecurity": false } and save.
Run the test again.
I had exactly the same problem; I advise you to do as DurkoMatko recommends (see the chromeWebSecurity documentation).
But I encountered another problem with a link pointing to localhost in an iframe.
If you want to click a link inside an iframe, I recommend this:
cy.get('iframe').then((iframe) => {
  const body = iframe.contents().find('body');
  cy.wrap(body).find('a').click();
});
I have also faced this issue. My application was using service workers. Disabling service workers while visiting a page solved the issue.
cy.visit('index.html', {
  onBeforeLoad (win) {
    delete win.navigator.__proto__.serviceWorker
  }
})
Ref: https://glebbahmutov.com/blog/cypress-tips-and-tricks/#disable-serviceworker
So, at least for me, my remaining problem was an internal one with tokens, logins, etc. BUT the code I posted for the index file in the plugins folder is correct for bypassing the Chrome issue. That is how you want to fix it!
Go to your cypress.json file.
Set chromeWebSecurity to false:
{
"chromeWebSecurity": false
}
To get around these restrictions, Cypress implements some strategies involving JavaScript code, the browser's internal APIs, and network proxying to play by the rules of same-origin policy.
Access your project.
In the file cypress.json, insert:
{
"chromeWebSecurity": false
}
Reference: Cypress Documentation
I am new to Puppeteer and am trying to run the example script. However, I get a blank Chromium window (with no tab or URL bar).
Environment details:
OS: Windows 10
Node version: 8.4.0
NPM version: 6.4.1
I installed puppeteer using NPM and version 1.0.0 got installed. I also installed version 1.9.0 directly from Puppeteer's github page. Both versions have a similar issue.
This is my script:
const puppeteer = require('puppeteer');

(async () => {
  try {
    console.log('starting');
    const browser = await puppeteer.launch({
      executablePath: 'D:/Code/Puppeteer/node_modules/puppeteer/.local-chromium/win64-594312/chrome-win/chrome.exe',
      headless: false
    });
    console.log('one');
    const page = await browser.newPage();
    console.log('two');
    await page.goto('https://github.com');
    console.log('three');
    await page.screenshot({path: 'example.png'});
    console.log("Page is up");
    await browser.close();
  }
  catch (e) {
    console.log("Error: ", e);
  }
})();
In the above script, I can see 'starting', and then a Chromium window opens with nothing on screen. When I press F12 to bring up the dev tools, I see 'one' being printed.
I have set the 'path' environment variable to include this:
D:\Code\Puppeteer\node_modules\puppeteer\.local-chromium\win64-594312\chrome-win; C:\Program Files (x86)\Google\Chrome\Application
The Puppeteer script is working now. I had started the Node.js command window in admin mode to run the script, which did not work; running it in normal mode worked.