Is it possible to use Watir-Webdriver to interact with Polymer?

I just updated my Chrome browser (Version 50.0.2661.75) and have found that the chrome://downloads page has changed such that my automated tests can no longer interact with it. Previously, I had been using Watir-Webdriver to clear the downloads page, delete files from my machine, etc, without too much difficulty.
It looks like Google is using Polymer on this page, and there are new (to me) elements like paper-button that Watir-Webdriver doesn't recognize. Even browser.img(:id, 'file-icon').present? returns false when I can clearly see that the image is on the page.
Is automating a page made with Polymer (specifically the chrome://downloads page) a lost cause until changes are made to Watir-Webdriver, or is there a solution to this problem?

Given that the download items are accessible in JavaScript and that Watir allows JavaScript execution (as @titusfortner pointed out), it's possible to automate the new Downloads page with Watir.
Note that shadow-root elements (aka "local DOM" in Polymer) can be queried with $$.
Here's an example JavaScript snippet that logs the icon presence and filename of each download item, then removes the items from the list. Copy and paste it into Chrome's console to test (verified in Chrome 49.0.2623.112 on OS X El Capitan).
(function() {
  // Pierce the downloads-manager and iron-list local DOM ($$ queries a
  // Polymer element's local DOM) to reach the individual download items.
  var items = document
    .querySelector('downloads-manager')
    .$$('iron-list')
    .querySelectorAll('downloads-item');
  Array.from(items).forEach(item => {
    let hasIcon = typeof item.$$('#file-icon') !== 'undefined';
    console.log('hasIcon', hasIcon);
    let filename = item.$$('#file-link').textContent;
    console.log('filename', filename);
    // Click the item's remove (X) button to clear it from the list.
    item.$.remove.click();
  });
})();
UPDATE: I verified the JavaScript with Watir-Webdriver on OS X (with ChromeDriver 2.21). It works the same as in the console for me (i.e., I see the console logs, and the download items are removed). Here are the steps to reproduce:
Run the following commands in a new irb shell (copy+paste):
require 'watir-webdriver'
b = Watir::Browser.new :chrome
In the newly opened Chrome window, download several files to create some download items, and then open the Downloads tab.
Run the following commands in the irb shell (copy+paste):
script = "(function() {
var items = document
.querySelector('downloads-manager')
.$$('iron-list')
.querySelectorAll('downloads-item');
Array.from(items).forEach(item => {
let hasIcon = typeof item.$$('#file-icon') !== 'undefined';
console.log('hasIcon', hasIcon);
let filename = item.$$('#file-link').textContent;
console.log('filename', filename);
item.$.remove.click();
});
})();"
b.execute_script(script)
Observe the Downloads tab no longer contains download items.
Open the Chrome console from the Downloads tab.
Observe the console shows several lines of hasIcon true and the filenames of the downloaded items.

Looks like Google put the elements inside the shadow DOM, which isn't supported by the Selenium/Watir/WebDriver spec (yet). There might be a way to obtain the elements via JavaScript (browser.execute_script(<...>)), but it is experimental at best.
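As a rough sketch of that idea (assuming the element exposes an open shadow root; the selectors here are borrowed from the answer above and may change between Chrome versions), plain DOM APIs can pierce the shadow root, and the result can be handed back through execute_script:
// Run via browser.execute_script; returns true when the icon exists.
// 'downloads-manager' and '#file-icon' are illustrative selectors.
var manager = document.querySelector('downloads-manager');
var root = manager && manager.shadowRoot;
return !!(root && root.querySelector('#file-icon'));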

Attempting to automate a Polymer page, I found I was able to access the web elements by asking Polymer to use the shady DOM by adding ?dom=shady to the URL, as in the example on this page https://www.polymer-project.org/1.0/docs/devguide/settings:
http://example.com/test-app/index.html?dom=shady
Adding the dom parameter to request that Polymer use the shady DOM may be worth a try.
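To illustrate why this helps (a sketch; paper-button is just an example element name): with shady DOM, Polymer stamps its elements into the main document tree, so ordinary selectors reach them without piercing any shadow root.
// After loading e.g. http://example.com/test-app/index.html?dom=shady
var button = document.querySelector('paper-button');
console.log('button found:', button !== null);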

Related


Cannot refresh <object/> when changing from application/pdf to text/html

I am trying to refresh an element in the DOM tree. Basically, the TypeScript code simply updates the data and type of an existing HTMLObjectElement. Here is the pseudocode:
const textCanvas: HTMLObjectElement = <HTMLObjectElement>(curElement.children.namedItem('text-canvas'));
// Populate both the actual data as well as the associated mime/type:
textCanvas.data = enabledTextElement.textData; // 'blob:http://localhost:8081/d3c9a0ac-8e40-4e0e-aeb8-91656273837c'
textCanvas.type = enabledTextElement.mimeType; // 'application/pdf'
Which then gets updated with:
textCanvas.data = enabledTextElement.textData; // 'blob:http://localhost:8081/3c5ad888-0a7f-41d0-8ec9-35c334ef3f20'
textCanvas.type = enabledTextElement.mimeType; // 'text/html'
My Chrome simply displays the PDF version.
The funny part is that if I do the opposite (HTML first), the element gets properly updated (the HTML text is displayed, then the PDF box is displayed). I tried to verify whether this is supposed to work at:
https://html.spec.whatwg.org/multipage/iframe-embed-object.html#the-object-element
And it seems it should. I also found an old bug report:
https://bugs.chromium.org/p/chromium/issues/detail?id=123536
-> <object> works in every browser except Google Chrome
Using:
Google Chrome 80.0.3987.163 (Official Build) (64-bit) (cohort: 81_Win_122)
Revision e7fbe071abe9328cdce4ffedac9822435fbd3656-refs/branch-heads/3987#{#1037}
OS Windows 8.1 (Build 9600.19676)
JavaScript V8 8.0.426.30
If that helps, the URLs are created from a Blob which is then passed to URL.createObjectURL.
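For illustration, here is a minimal sketch of how such blob: URLs are typically produced (the variable names are hypothetical, not from the original code):
// Build a blob: URL for some in-memory PDF bytes (illustrative names).
const pdfBytes = new Uint8Array([/* ... PDF data ... */]);
const blob = new Blob([pdfBytes], { type: 'application/pdf' });
const url = URL.createObjectURL(blob); // e.g. 'blob:http://localhost:8081/...'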
I am currently using the following work-around:
textCanvas.data = ''; // clear first so the subsequent type change takes effect
textCanvas.type = enabledTextElement.mimeType;
textCanvas.data = enabledTextElement.textData;
Seems to make the symptoms go away. I've filed a bug report just in case:
https://bugs.chromium.org/p/chromium/issues/detail?id=1076373

How to use R to download a file from webpage when there is no specific file embedded on the page

Is there any possible solution to extract a file from a website with download.file() in R when there is no specific file embedded on the page?
I have this url
https://www.fangraphs.com/leaders.aspx?pos=all&stats=bat&lg=all&qual=y&type=8&season=2016&month=0&season1=2016&ind=0
there is a link to export a csv file, but when I right-click the "Export Data" hyperlink on the webpage and copy the link address,
it turns out to be the following script
javascript:__doPostBack('LeaderBoard1$cmdCSV','')
instead of a URL that gives me access to the csv file.
Is there any solution to tackle this problem?
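For context, that javascript: link triggers an ASP.NET WebForms postback rather than pointing at a file. The generated __doPostBack helper looks roughly like this (a simplified sketch, not the exact framework code):
// Simplified sketch of ASP.NET's generated __doPostBack: it fills two
// hidden fields and submits the page's form, so there is no static URL.
function __doPostBack(eventTarget, eventArgument) {
  var theForm = document.forms[0];
  theForm.__EVENTTARGET.value = eventTarget;
  theForm.__EVENTARGUMENT.value = eventArgument;
  theForm.submit();
}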
You can use RSelenium for jobs like this. The script below works for me exactly as is, and it should for you as well with minor edits noted in the text. The solution uses two packages: RSelenium to automate Chrome, and here to select your active directory.
library(RSelenium)
library(here)
Here's the URL you provided:
url <- paste0(
  "https://www.fangraphs.com/leaders.aspx",
  "?pos=all",
  "&stats=bat",
  "&lg=all",
  "&qual=y",
  "&type=8",
  "&season=2016",
  "&month=0",
  "&season1=2016",
  "&ind=0"
)
Here's the ID of the download button. You can find it by right-clicking the button in Chrome and hitting "Inspect."
button_id <- "LeaderBoard1_cmdCSV"
We're going to automate Chrome to download the file, and it's going to go to your default download location. At the end of the script we'll want to move it to your current directory. So first let's set the name of the file (per fangraphs.com) and your download location (which you should edit as needed):
filename <- "FanGraphs Leaderboard.csv"
download_location <- file.path(Sys.getenv("USERPROFILE"), "Downloads")
Now you'll want to start a browser session. I use Chrome, and specifying this particular Chrome version (using the chromever argument) works for me. YMMV; check the best way to start a browser session for you.
An rsDriver object has two parts: a server and a browser client. Most of the magic happens in the browser client.
driver <- rsDriver(
  browser = "chrome",
  chromever = "74.0.3729.6"
)
server <- driver$server
browser <- driver$client
Using the browser client, navigate to the page and click that button.
Quick note before you do: RSelenium may start looking for the button and trying to click it before there's anything to click. So I added a few lines to watch for the button to show up, and then click it once it's there.
buttons <- list()
browser$navigate(url)
while (length(buttons) == 0) {
  buttons <- browser$findElements(button_id, using = "id")
}
buttons[[1]]$clickElement()
Then wait for the file to show up in your downloads folder, and move it to the current project directory:
while (!file.exists(file.path(download_location, filename))) {
  Sys.sleep(0.1)
}
file.rename(file.path(download_location, filename), here(filename))
Lastly, always clean up your server and browser client, or RSelenium gets quirky with you.
browser$close()
server$stop()
And you're on your merry way!
Note that you won't always have an element ID to use, and that's OK. IDs are great because they uniquely identify an element, and using them requires almost no knowledge of the site's markup. But if you don't have an ID to use, above where I specify using = "id", you have a lot of other options:
using = "xpath"
using = "css selector"
using = "name"
using = "tag name"
using = "class name"
using = "link text"
using = "partial link text"
Those give you a ton of alternatives and really allow you to identify anything on the page. findElements will always return a list. If there's nothing to find, that list will be of length zero. If it finds multiple elements, you'll get all of them.
XPath and CSS selectors in particular are super versatile. And you can find them without really knowing what you're doing. Let's walk through an example with the "Sign In" button on that page, which in fact does not have an ID.
Start in Chrome by pressing Control+Shift+J to open the Developer Console. In the upper left corner of the panel that shows up is a little icon for selecting elements.
Click that, and then click on the element you want.
That'll pull it up (highlight it) over in the "Elements" panel. Right-click the highlighted line and click "Copy selector." You can also click "Copy XPath," if you want to use XPath.
And that gives you your code!
buttons <- browser$findElements(
  "#linkAccount > div > div.label-account",
  using = "css selector"
)
buttons[[1]]$clickElement()
Boom.

Chrome Extension Content Script on https://chrome.google.com/webstore/

Is Chrome blocking access to the webstore url?
I would like to make an extension that displays a like button beside the +1 button, but it looks like that content scripts are not working on https://chrome.google.com/webstore/*
Is that true?
TL;DR The webstore cannot be scripted by extensions, and the flag that previously allowed you to do that (--allow-scripting-gallery) has been removed in Chrome 35.
Chrome extensions cannot execute content scripts / insert CSS in the Chrome Web Store. This is explicitly defined in the source code, in the function IsScriptableURL (click on the previous link to see the full logic).
// The gallery is special-cased as a restricted URL for scripting to prevent
// access to special JS bindings we expose to the gallery (and avoid things
// like extensions removing the "report abuse" link).
// TODO(erikkay): This seems like the wrong test. Shouldn't we we testing
// against the store app extent?
GURL store_url(extension_urls::GetWebstoreLaunchURL());
if (url.host() == store_url.host()) {
  if (error)
    *error = manifest_errors::kCannotScriptGallery;
  return false;
}
manifest_errors::kCannotScriptGallery is defined here:
const char kCannotScriptGallery[] =
"The extensions gallery cannot be scripted.";
The error can be viewed in the background page's console when you use chrome.tabs.executeScript to inject a script in a Web Store tab. For instance, open https://chrome.google.com/webstore/, then execute the following script in the background page of an extension (via the console, for live debugging):
chrome.tabs.query({url: 'https://chrome.google.com/webstore/*'}, function(result) {
  if (result.length)
    chrome.tabs.executeScript(result[0].id, {code: 'alert(0)'});
});

How do I make Firefox auto-refresh on file change?

Does anyone know of an extension for Firefox, or a script or some other mechanism, that can monitor one or more local files? Firefox would auto-refresh or otherwise update its canvas when it detected a change (of timestamp) in the file(s).
For editing CSS, it would be ideal if just the CSS could be reloaded, rather than a full HTML re-render.
Effectively it would enable similar behaviour to Firebug with its dynamic HTML/CSS editing, only through external files.
Live.js
From the website:
How?
Just include Live.js and it will monitor the current page including local CSS and Javascript by sending consecutive HEAD requests to the server. Changes to CSS will be applied dynamically and HTML or Javascript changes will reload the page. Try it!
Where?
Live.js works in Firefox, Chrome, Safari, Opera and IE6+ until proven otherwise. Live.js is independent of the development framework or language you use, whether it be Ruby, Handcraft, Python, Django, NET, Java, Php, Drupal, Joomla or what-have-you.
It has the huge benefit of working with IETester, dynamically refreshing each open IE tab.
Try it out by adding the following to your <head>
<script type="text/javascript" src="http://livejs.com/live.js"></script>
Have a look at the FileWatcher extension:
https://addons.mozilla.org/en-US/firefox/addon/filewatcher/
it's a WebExtension, so it works with the latest Firefox
it has a native app (to be installed locally) that monitors watched files for changes using native OS calls (no polling!) and notifies the WebExtension to let it reload the web page
reload is driven by rules: a rule contains the page URL (with regular expression support) and its included/excluded local source files
open source: https://github.com/coolsoft-ita/filewatcher
DISCLAIMER: I'm the author of the extension ;)
I would recommend livejs, but it has the following advantages and disadvantages.
Advantages:
1. Easy setup
2. Works seamlessly across browsers (Live.js works in Firefox, Chrome, Safari, Opera and IE6+)
3. Doesn't add an irritating refresh interval, which matters when you want to debug along with designing
4. Only refreshes when you save changes (Ctrl+S)
5. Directly saves CSS etc. from Firebug; I have not used that feature, but their site http://livejs.com/ says they support it too
Disadvantages:
1. It will not work over the file protocol, e.g. file:///C:/Users/Admin/Desktop/livejs/live.html
2. You need a server to run it, like http://localhost
3. You have to remove it when deploying to staging/production
4. It isn't served from a CDN: I tried cheating by applying the direct link http://livejs.com/live.js, but it will not work; you have to download it and keep a local copy.
XRefresh with Firebug.
Firefox has an extension called mozRepl.
Emacs can plug into this with moz-reload-on-save-mode.
When it's set up, saving the file forces a refresh of the browser window.
There are some IDE's that contain this ability (They'll have a pane within them or some other means to auto-refresh a page on save).
If you want to do this yourself a quick hack is to set the meta refresh on the page to a low value - one or two seconds.
<!-- Will refresh the page content every second -->
<meta http-equiv="refresh" content="1" />
You could just place a JavaScript interval on your page, have it query a local script which checks the last modified date of the CSS file, and reload the stylesheet if it changed.
jQuery Example:
var modTime = 0;
setInterval(function() {
  // Ask the server for the stylesheet's last-modified time.
  $.post("isModified.php", {"file": "main.css", "time": modTime}, function(rst) {
    if (rst.time != modTime) {
      modTime = rst.time;
      // Replace the style tag, cache-busting with the new timestamp.
      $("head link[rel='stylesheet']:eq(0)").remove();
      $("head").prepend($(document.createElement("link")).attr({
        "rel": "stylesheet",
        "href": "main.css?v=" + modTime
      }));
    }
  });
}, 5000);
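The server-side helper (isModified.php above) isn't shown here; it just needs to report the file's modification time. A hypothetical Node sketch of the same contract (file path and port are illustrative):
// Report main.css's mtime as JSON; ignores the posted body for brevity.
const http = require('http');
const fs = require('fs');
http.createServer((req, res) => {
  const mtime = fs.statSync('main.css').mtimeMs;
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ time: mtime }));
}).listen(8080);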
Browsersync can do this from the server side, outside of the browser.
It can achieve more repeatable results that don't require so much clicking.
The following will serve a page and refresh on change:
cd static_content
browser-sync start --server --files .
It also allows a scripting mode.
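As a sketch of that scripting mode (assuming browser-sync is installed locally via npm; the directory name is illustrative):
// Node API equivalent of the CLI invocation above.
const bs = require('browser-sync').create();
bs.watch('static_content/**/*').on('change', bs.reload); // reload on change
bs.init({ server: 'static_content' });                   // serve the directory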
This is certainly hacky, but if you want to work locally without making any external request (to live.js, for example) or running any local server, I think this might be useful. This is not specific to web development; you can adopt a similar strategy for any other workflow.
You will need two tiny tools (which are present in almost all distribution repos): inotify-tools and xdotool.
First get the ID of your Firefox and your editor window using xdotool.
$ xdotool search --name "Mozilla Firefox"
60817411
60817836
$ xdotool search --name "Pluma" # Pluma is my editor
94371842
Depending on the number of processes running, you will get one or more window IDs. Use xdotool windowactivate <ID> to find out which one you want (the focus changes to the respective window).
Use inotifywait -e close_write to monitor changes to your local file; when you save the file in your editor, change focus to your browser, reload with xdotool key CTRL+R, and focus back to your editor. This is so instantaneous you will barely notice it.
Also, inotifywait exits on change, so you might have to do it in a loop. Here is a minimum working example (in Bash in your working directory).
while /usr/bin/true
do
  inotifywait -e close_write index.html;
  xdotool windowactivate 60817411; # Switch to Firefox
  xdotool key CTRL+R;              # Reload Firefox
  xdotool windowactivate 94371842  # Switch back to Pluma
done
You can use inotifywait to watch an entire directory or selected files in your directory.
You can write a script that automates this easily.
This works on Linux (I've tested this on Void Linux.)
You can use live.js with a tampermonkey script to avoid having to include https://livejs.com/live.js in your HTML file.
// ==UserScript==
// @name     Auto reload
// @author   weirane
// @version  0.1
// @match    http://127.0.0.1/*
// @grant    none
// ==/UserScript==
(function() {
  'use strict';
  if (Number(window.location.port) === 8000) {
    const script = document.createElement('script');
    script.src = 'https://livejs.com/live.js';
    document.body.appendChild(script);
  }
})();
With this Tampermonkey script, the live.js script will be automatically inserted into pages whose address matches http://127.0.0.1:8000/*. You can change the port according to your needs.
I think you can solve it by issuing Ajax requests at a set interval. You can request the CSS files and, if you don't get a "not modified" header back, delete your CSS and load it again. For dynamic files, you make a request, store the response, and then compare each new response to the latest one.
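A minimal sketch of that polling idea, assuming the server sends a Last-Modified header (the stylesheet path /css/main.css is illustrative):
// Poll the stylesheet with HEAD requests; re-link it when the header changes.
let lastModified = null;
setInterval(async () => {
  const res = await fetch('/css/main.css', { method: 'HEAD' });
  const modified = res.headers.get('Last-Modified');
  if (lastModified !== null && modified !== lastModified) {
    const link = document.querySelector("link[rel='stylesheet']");
    link.href = link.href.split('?')[0] + '?v=' + Date.now(); // cache-bust
  }
  lastModified = modified;
}, 2000);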