How to change Google Chrome Profile cookie loaded in Puppeteer - google-chrome

I found similar questions, but they weren't useful to me. I'm trying to load a Google Chrome profile other than the Default one, such as Profile 8. My code loads the Default profile fine, but it can't load the other profiles. I made sure I closed all browsers before running the script, but it still doesn't work. Is there a solution for this, or is there something I need to change?
Code that works to load the Default profile:
const puppeteer = require('puppeteer-extra')

puppeteer.launch({
    headless: false,
    executablePath: 'C:/Program Files/Google/Chrome/Application/chrome.exe',
    userDataDir: 'C:/Users/USER/AppData/Local/Google/Chrome/User Data',
})
Since I want to load another profile, e.g. Profile 8, I changed my code to:
const puppeteer = require('puppeteer-extra')

puppeteer.launch({
    headless: false,
    executablePath: 'C:/Program Files/Google/Chrome/Application/chrome.exe',
    userDataDir: 'C:/Users/USER/AppData/Local/Google/Chrome/User Data/Profile 8',
})
But this loads a whole new profile instead. I've tried solutions from the internet, but none of them work.
Method 1 Tested (Doesn't work):
Change from using userDataDir to args: ['--user-data-dir=...']
puppeteer.launch({
    headless: false,
    executablePath: 'C:/Program Files/Google/Chrome/Application/chrome.exe',
    args: ['--user-data-dir=C:/Users/USER/AppData/Local/Google/Chrome/User Data/Profile 8']
})
Method 2 Tested (Doesn't work):
Create a directory called Default inside the Profile 8 folder, then move the files from Profile 8 into that Default folder.
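(For reference, another approach that is often suggested for this situation, not verified here, is to keep userDataDir pointed at the User Data root and select the profile with Chrome's --profile-directory switch instead:)

const puppeteer = require('puppeteer-extra')

// Sketch only: keep userDataDir at the "User Data" root and pick the profile via a Chrome flag.
puppeteer.launch({
    headless: false,
    executablePath: 'C:/Program Files/Google/Chrome/Application/chrome.exe',
    userDataDir: 'C:/Users/USER/AppData/Local/Google/Chrome/User Data',
    // --profile-directory expects the folder name inside "User Data", e.g. "Profile 8"
    args: ['--profile-directory=Profile 8'],
})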

Related

Downloaded blob files blocked

In my Google Chrome extension, I am doing the following:
Creating a text blob:
var file_Blob = new Blob([file_Content], {type: 'text/plain'});
Creating a URL for the text blob:
var file_URL = URL.createObjectURL(file_Blob);
Using the method chrome.downloads.download to download the blob via the URL to a file:
chrome.downloads.download({
    url: file_URL,
    filename: file_Name,
    saveAs: true
});
This had been working fine until the last few weeks / versions (as of 2019/01/07, version 70 or 71), but now the downloaded files are flagged as coming from the Internet:
Security: This file came from another computer and might be blocked to help protect this computer. Unblock
As a result, a security warning prompt appears when trying to open the files:
File Download - Security Warning
Do you want to open this file?
Name: exampleFile_Name
Type: Unknown File Type
From: exampleFile_Folder
Open Cancel
While files from the Internet can be useful, this file type can potentially harm your computer. If you do not trust the source, do not open this software.
I can't find anything online for this change in behaviour and, as far as I can see, I'm generating and downloading the files as per best practice. Can anyone advise?
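(For reference, here are the three steps above combined into one snippet; this assumes file_Content and file_Name are defined elsewhere and that the "downloads" permission is declared in manifest.json:)

// Sketch combining the steps above; assumes the "downloads" permission in manifest.json.
function downloadTextFile(file_Content, file_Name) {
    // 1. Create a text blob from the content
    var file_Blob = new Blob([file_Content], { type: 'text/plain' });
    // 2. Create an object URL for the blob
    var file_URL = URL.createObjectURL(file_Blob);
    // 3. Download the blob via the URL to a file
    chrome.downloads.download({
        url: file_URL,
        filename: file_Name,
        saveAs: true
    });
}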

How to add bookmarks in Chromium/Chrome in automated fashion

I am trying to test a Chrome extension that searches bookmarks. Puppeteer loads Chromium with a clean profile each time, which is great, but my bookmarks are empty.
I was hoping to find a way to load a bookmarks file, so I don't have to use the Chrome API to manually create a bookmark tree under a testing flag in my code.
You can load an existing user data directory with all of its data, including bookmarks and browser settings:
puppeteer.launch({
    userDataDir: '/path/to/user-data-directory',
})
The profile must come from a Chrome/Chromium version close to the Chromium bundled with Puppeteer.
Also, in my experience the path shouldn't contain spaces on Windows.
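If you would rather generate the bookmarks programmatically instead of maintaining a full profile, one rough sketch is to write a Default/Bookmarks JSON file into a fresh user data directory before launching. The schema below is an assumption from memory and not verified; Chrome may rewrite or ignore parts of it:

const fs = require('fs');
const path = require('path');

// Sketch only: seed a minimal Bookmarks file into a fresh user data directory.
// The exact schema of this file is an assumption (field names from memory).
const userDataDir = path.resolve(__dirname, 'test-profile');   // hypothetical directory
fs.mkdirSync(path.join(userDataDir, 'Default'), { recursive: true });

const bookmarks = {
    version: 1,
    roots: {
        bookmark_bar: {
            type: 'folder',
            name: 'Bookmarks bar',
            children: [
                { type: 'url', name: 'Example', url: 'https://example.com/' }
            ]
        },
        other: { type: 'folder', name: 'Other bookmarks', children: [] },
        synced: { type: 'folder', name: 'Mobile bookmarks', children: [] }
    }
};

fs.writeFileSync(
    path.join(userDataDir, 'Default', 'Bookmarks'),
    JSON.stringify(bookmarks)
);

puppeteer.launch({ userDataDir });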

Capybara, Chrome Headless: File Download is not working

I am trying to download a file with headless Chrome.
My Chrome version is 67.0.3396.87 and my ChromeDriver version is 2.4.
The file does not appear on my file system. As far as I have researched, it's a safety feature of headless Chrome that prevents file downloads, but it can be turned back on.
That's what I tried to do, following this thread:
https://bugs.chromium.org/p/chromium/issues/detail?id=696481
Still, nothing works. I tried different approaches with
Page.setDownloadBehavior
e.g. I copied the contents of comment 78, but Chrome does not respond to it, or at least it still does not work:
# Allow downloads in headless Chrome by sending the DevTools command
# Page.setDownloadBehavior through ChromeDriver's send_command endpoint.
def enable_chrome_headless_downloads(driver, directory)
  bridge = driver.send(:bridge)
  # ChromeDriver-specific endpoint for raw DevTools commands
  path = '/session/:session_id/chromium/send_command'
  path[':session_id'] = bridge.session_id
  bridge.http.call(:post, path, {
    "cmd" => "Page.setDownloadBehavior",
    "params" => {
      "behavior" => "allow",
      "downloadPath" => directory,
    }
  })
end
I also checked whether I could manually download a file with headless Chrome using
'--remote-debugging-port=9222'
but that was not possible either.
Does anyone have an idea what I could do to make it work?
Thanks already!
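(As a side note, the same Page.setDownloadBehavior DevTools command can also be issued from Puppeteer, which may help confirm whether the command itself works against your Chrome build; this is only a sketch with placeholder paths and URL:)

const puppeteer = require('puppeteer');

(async () => {
    const browser = await puppeteer.launch({ headless: true });
    const page = await browser.newPage();

    // Open a raw DevTools Protocol session and allow downloads into a directory.
    const client = await page.target().createCDPSession();
    await client.send('Page.setDownloadBehavior', {
        behavior: 'allow',
        downloadPath: '/tmp/downloads'          // placeholder path
    });

    // Placeholder URL; navigating to a direct download may abort the navigation,
    // so the error is swallowed here. In a real test you would also wait for the
    // download to finish before closing the browser.
    await page.goto('https://example.com/some-file').catch(() => {});
    await browser.close();
})();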

Refresh forced with gulp and browser-sync

I use Browsersync and gulp to reload my page on every change to any file in my project. It half works: sometimes I have to press Ctrl+F5 for the changes to be displayed, and after that simply saving is enough for the Browsersync reload to pick up the next changes. Here is my code:
var gulp = require('gulp');
var cache = require('gulp-cache');
var browserSync = require('browser-sync').create();

gulp.task('browser-sync', function() {
    browserSync.init({
        injectChanges: true,
        proxy: "http://localhost:8888/project/source/"
    });
    gulp.watch("source/**/*").on("change", function(e) {
        cache.clearAll();
        return gulp.src(e.path)
            .pipe(browserSync.reload({stream: true}));
    });
});
I want the browser to be refreshed on every change in my project (every file in every folder/subfolder).
I thought setting the stream parameter to true, found in another topic, would do the job, but I still have the same problem... And clearing the gulp cache doesn't change anything.
On each save my page is refreshed, but the changes are not always displayed, so I have to hard refresh with Ctrl+F5.
It's a small project, which is why I don't want to use webpack. It runs on a MAMP server.
Any ideas?
The most likely culprit is asset caching. A hard reload removes any cached data from the browser, and also resets things like service workers and IndexedDB.
Go into DevTools, click the Application tab, select Service Workers, and check the box that says "Update on reload". I am pretty sure this forces a hard reload on page refresh.
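If the problem turns out to be plain HTTP caching rather than a service worker, one thing to try (a sketch, not verified against this setup) is to serve everything through the proxy with no-cache headers via Browsersync's middleware option:

browserSync.init({
    injectChanges: true,
    proxy: "http://localhost:8888/project/source/",
    // Ask the browser not to cache proxied responses, so a normal reload
    // always fetches fresh files.
    middleware: [
        function (req, res, next) {
            res.setHeader('Cache-Control', 'no-store, no-cache, must-revalidate');
            next();
        }
    ]
});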

shardTestFiles unexpected behaviour with chromeOptions setting of --user-data-dir

I have the following settings in my config file.
specs: ['../specs/sample1.js', '../specs/sample2.js'],
capabilities: {
    'browserName': 'chrome',
    'chromeOptions': {
        'args': ['--user-data-dir=C:/Chrome/User Data']
    },
    shardTestFiles: true,
    maxInstances: 2
}
There is nothing special about the tests in the spec files; they just print the title of the URL to the console.
When I run the test, two Chrome windows are opened, but only the tests from one (randomly chosen) spec file are run. Once the tests in that spec file are completed, both browsers close at the same time. The second spec file is orphaned and errors out after some time. Both browser icons overlap each other.
When I comment out the args in chromeOptions containing the user-data-dir option, everything works perfectly: two Chrome windows are opened, the spec files are divided between the two browser windows, and all tests run to completion. In this case the browser icons are separate.
I want to use the existing Chrome profile together with the parallel option, as it speeds up page loading and already has the default cookies set up.
What is the solution to get this to work?
This is on the latest versions of Protractor, Chrome, and the webdrivers, running on Windows 8.
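(One possible workaround, untested and with placeholder paths: give each parallel instance its own copy of the profile via multiCapabilities, since two Chrome instances generally cannot share the same user data directory at the same time:)

// Sketch only: each capability gets its own copy of the profile and its own spec file.
// The "User Data Copy 1/2" directories are placeholders for copies you create yourself.
exports.config = {
    multiCapabilities: [
        {
            browserName: 'chrome',
            chromeOptions: { args: ['--user-data-dir=C:/Chrome/User Data Copy 1'] },
            specs: ['../specs/sample1.js']
        },
        {
            browserName: 'chrome',
            chromeOptions: { args: ['--user-data-dir=C:/Chrome/User Data Copy 2'] },
            specs: ['../specs/sample2.js']
        }
    ]
};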