Continuously emulate GPS Locations on Chrome - google-chrome

For a mobile web application I would like to emulate location movements of the device. While it is possible to override a single location using the Sensors tab in Chrome's DevTools (see: https://developers.google.com/web/tools/chrome-devtools/device-mode/device-input-and-sensors), I would like to override the location continuously, for instance updating the device's location every second.
Is there a way to achieve this in Chrome (or any other desktop browser)?
I am looking for a solution similar to the Android Emulator, which allows you to replay recorded GPS tracks (from GPX or KML files):
(See: https://developer.android.com/guide/topics/location/strategies.html#MockData)

DevTools has no feature for this, but if you happen to be using getCurrentPosition() you can pretty much recreate it by overriding the function in a snippet.
I suppose this workflow won't work if you're using watchPosition() (which you probably are), because I believe that's essentially a listener that fires when the browser updates the coordinates, and there's no way to update the browser's coordinates yourself.
However, I'll record the workflow below because it may be useful to somebody else.
Store your script in a snippet.
Override navigator.geolocation.getCurrentPosition() so it reports the target coordinates.
You could store the coordinates and timestamps in JSON (either within the snippet, or fetched from the snippet using XHR / Fetch), and then use setTimeout() to update the coordinates at the specified times.
var track = [ // renamed from "history" to avoid clashing with window.history
  {
    time: 1000,
    coords: ...
  },
  {
    time: 3000,
    coords: ...
  }
];

track.forEach(function(entry) {
  setTimeout(function() {
    navigator.geolocation.getCurrentPosition = function(success, failure) {
      success({
        coords: entry.coords,
        timestamp: Date.now()
      });
    };
  }, entry.time);
});
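If the page happens to use watchPosition() instead, the same override trick can in principle be applied to that function too. Below is a minimal, untested sketch of that idea; it assumes the page only obtains positions through the standard navigator.geolocation API and that the track array above is in scope.
// Hypothetical sketch: replace watchPosition() so the page's own callback
// receives the replayed track. Assumes "track" (above) is in scope.
navigator.geolocation.watchPosition = function(success, failure, options) {
  track.forEach(function(entry) {
    setTimeout(function() {
      success({
        coords: entry.coords,
        timestamp: Date.now()
      });
    }, entry.time);
  });
  return 0; // fake watch id, so clearWatch() calls don't throw
};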

Related

Detecting rendering events / layout changes (or any way to know when the page has stopped "changing")

I'm using Puppeteer (PuppeteerSharp actually, but the API is the same) to take a screenshot of a web page from my application.
The problem is that the page does several layout changes via JavaScript after the page has loaded, so a few seconds pass before seeing the "final" rendered version of the page.
At the moment I'm just waiting a "safe" amount of seconds before taking the screenshot, but this is obviously not a good approach, since a temporary performance slowdown on the machine can result in an incomplete rendering.
Since Puppeteer uses Chromium in the background, is there a way to intercept Chromium's layout/rendering events (like you can do in the DevTools console in Chrome)? Or, really, ANY other way to know when the page has stopped "changing" (visually, I mean)?
EDIT, some more info: The content is dynamic, so I don't know beforehand what it will draw and how. Basically, it's a framework that draws different charts/tables/images/etc. (not open-source, unfortunately). By testing with the Performance tool in Chrome DevTools, however, I noticed that after the page has finished rendering, all activity in the timeline stops, so if I could access that information it would be great. Unfortunately, the only way to do that in Puppeteer (that I can see) is the Tracing feature, but that doesn't operate in real time: it dumps the trace to a file, and the buffer is way too big to be of any use (the file is still 0 bytes after my page has already finished rendering; it only flushes to disk when I call stopTracing). What I would need is to access the Tracing feature of Puppeteer in real time, for example via events or an in-memory stream, but that doesn't seem to be supported by the API. Any way around this?
You should use page.waitForSelector() to wait for the dynamic elements to finish rendering.
There must be a pattern that can be identified in terms of the content being generated.
Keep in mind that you can use flexible CSS Selectors to match elements or attributes without knowing their exact values.
await page.goto( 'https://example.com/', { 'waitUntil' : 'networkidle0' } );
await Promise.all([
  page.waitForSelector( '[class^="chart-"]' ),   // Class begins with 'chart-'
  page.waitForSelector( '[name$="-image"]' ),    // Name ends with '-image'
  page.waitForSelector( 'table:nth-of-type(5)' ) // Fifth table
]);
This can be useful when waiting for a certain pattern to exist in the DOM.
If page.waitForSelector() is not powerful enough to meet your needs, you can use page.waitForXPath():
await page.waitForXPath( '//div[contains(text(), "complete")]' ); // Div contains 'complete'
Alternatively, you can plug the MutationObserver interface into page.evaluate() to watch for changes being made to the DOM tree. When the changes have stopped over a period of time, you can resume your program.
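A rough sketch of that MutationObserver idea is below (untested, written against the Node version of Puppeteer): the page-side code resolves once no DOM mutations have been observed for a quiet period. The 1000 ms idle window and 15000 ms overall cap are arbitrary values you would tune.
// Sketch: resolve once the DOM has been "quiet" for idleMs, up to maxMs total.
await page.evaluate((idleMs, maxMs) => new Promise(resolve => {
  let timer = setTimeout(finish, idleMs);
  const cap = setTimeout(finish, maxMs);
  const observer = new MutationObserver(() => {
    clearTimeout(timer);
    timer = setTimeout(finish, idleMs); // reset the quiet-period timer on every mutation
  });
  observer.observe(document.body, { childList: true, subtree: true, attributes: true });
  function finish() {
    observer.disconnect();
    clearTimeout(timer);
    clearTimeout(cap);
    resolve();
  }
}), 1000, 15000);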
After some trial and error, I settled on this solution:
string traceFile = IOHelper.GetTemporaryFile("txt");
long lastSize = 0;
int cyclesWithoutTraceActivity = 0;
int totalCycles = 0;
while (cyclesWithoutTraceActivity < 4 && totalCycles < 25)
{
    File.Create(traceFile).Close();
    await page.Tracing.StartAsync(new TracingOptions()
    {
        Categories = new List<string>() { "devtools.timeline" },
        Path = traceFile,
    });
    Thread.Sleep(500);
    await page.Tracing.StopAsync();

    long curSize = new FileInfo(traceFile).Length;
    if (Math.Abs(lastSize - curSize) > 5)
    {
        logger.Debug("Trace activity detected, waiting...");
        cyclesWithoutTraceActivity = 0;
    }
    else
    {
        logger.Debug("No trace activity detected, increasing idle counter...");
        cyclesWithoutTraceActivity++;
    }
    lastSize = curSize;
    totalCycles++;
}

File.Delete(traceFile);

if (totalCycles == 25)
{
    logger.Warn($"WARNING: page did not stabilize within allotted time limit (15 seconds). Rendering page in current state, might be incomplete");
}
Basically what I do here is this: I run Chromium's tracing at 500 ms intervals, and each time I compare the size of the previous trace file to the size of the current one. Any significant change in size is interpreted as activity on the timeline and resets the idle counter; if enough time passes without significant changes, I assume the page has finished rendering. Note that the trace file always starts with some debugging info (even if the timeline itself has no activity to report), which is why I don't do an exact size comparison but instead check whether the file lengths are more than 5 bytes apart: since the initial debug info contains some counters and IDs that vary over time, I allow for a little variance to account for this.
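For reference, the same polling idea could be written against the Node version of Puppeteer, whose page.tracing.start()/stop() mirror the PuppeteerSharp calls above. This is an untested sketch; the waitForRenderIdle name, the 500 ms interval, the 5-byte tolerance and the cycle limits are simply carried over as assumptions from the C# version.
const fs = require('fs');

// Sketch: poll Chromium's trace output until no significant activity is seen.
async function waitForRenderIdle(page, traceFile = 'trace.json') {
  let lastSize = 0, idleCycles = 0, totalCycles = 0;
  while (idleCycles < 4 && totalCycles < 25) {
    await page.tracing.start({ path: traceFile, categories: ['devtools.timeline'] });
    await new Promise(resolve => setTimeout(resolve, 500));
    await page.tracing.stop();
    const curSize = fs.statSync(traceFile).size;
    // Any significant change in trace size is treated as timeline activity.
    idleCycles = Math.abs(lastSize - curSize) > 5 ? 0 : idleCycles + 1;
    lastSize = curSize;
    totalCycles++;
  }
  fs.unlinkSync(traceFile);
}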

Forge viewer crashes after loading specific model

I have been trying to use the Forge Viewer to load a somewhat large model, but the viewer crashes after a few seconds (3-5) of usage (with the typical Aw, Snap! page).
I've had no trouble with other models, but this happens with this specific model on Windows 10, Chrome.
I've tested loading it on OS X and it seems to work, although it is somewhat slow.
My current best guess is that this is happening due to memory overflow in Chrome, but this is not yet certain, because the viewer crashes before I can log the heap usage.
Is there any option that I can use for efficient model loading?
Also, is there a debug mode that allows memory tracking?
If you need the model urn, please let me know.
Thanks!
To modify the viewer's memory environment (to mimic a memory-constrained device such as an iPhone), set memory-limit values in the options parameter, as documented here:
(refer to Default Memory Management section)
https://developer.autodesk.com/en/docs/viewer/v2/overview/changelog/2.17/
In particular, you can force memory management like this:
var config3d = {
  memory: {
    limit: 400,    // in MB
    debug: {
      force: true  // force memory management on, even on desktop
    }
  }
};

var viewer = new av.Viewer3D(container, config3d); // av = Autodesk.Viewing
viewer.loadModel( modelUrl, {}, onSuccess, onError );
For debugging memory, try the following:
var memInfo = viewer.getMemoryInfo();
console.log(memInfo.limit); // == 400 MB
console.log(memInfo.effectiveLimit); // >= 400 MB
console.log(memInfo.loaded);
Lastly, you can open the memory manager panel extension, from the Chrome debug console, with this command...
NOP_VIEWER.loadExtension("Autodesk.Viewing.MemoryManager")
Click on the memory-chip icon to bring up the panel.
In the Memory tab, you can see many parameters relating to how memory is paged in order to render and network-load many meshes (zipped mesh packs (pf), sorting by closest or largest mesh AABB, ignoring meshes that cover too few pixels on screen, etc.).
Another quick way to activate the Viewer's low-memory mode is to trick your desktop Chrome browser into thinking it's a mobile device by activating mobile debugging. You can use this to test out mobile-related memory issues.
Follow this guide: Chrome debug - Mobile mode
Hope this helps!

Setting sensors (location) in headless Chrome

Is it possible to set custom location coordinates with headless Chrome? I can't find it in the DevTools Protocol API. Is there a workaround available?
I googled this and found many methods; I tried them one by one, and almost all of them turned out to be outdated. Then I found a solution: use the Chrome DevTools Protocol.
The small example below uses the most common tool, Selenium, to execute Chrome DevTools Protocol commands.
import time

from selenium.webdriver import Chrome, ChromeOptions

options = ChromeOptions()
options.add_argument("--headless")
driver = Chrome(options=options)

# Grant the geolocation permission to the target origin.
driver.execute_cdp_cmd(
    "Browser.grantPermissions",
    {
        "origin": "https://www.openstreetmap.org/",
        "permissions": ["geolocation"],
    },
)

# Override the reported position (Tokyo in this example).
driver.execute_cdp_cmd(
    "Emulation.setGeolocationOverride",
    {
        "latitude": 35.689487,
        "longitude": 139.691706,
        "accuracy": 100,
    },
)

driver.get("https://www.openstreetmap.org/")
driver.find_element_by_xpath("//span[@class='icon geolocate']").click()
time.sleep(3)  # wait for the page to fully load
driver.get_screenshot_as_file("screenshot.png")
https://chromedevtools.github.io/devtools-protocol/tot/Emulation#method-setGeolocationOverride
and
https://chromedevtools.github.io/devtools-protocol/tot/Emulation#method-clearGeolocationOverride
... then you'll need to contend with ensuring that the correct location-sharing setting is set within the user profile (chrome://settings/content/location, which is difficult to access because it's rendered via shadow DOM), so using a preconfigured user profile (--user-data-dir) will likely be easier.
Edit to add: The above does not seem to be effective when using --headless. To resolve this I used https://chromedevtools.github.io/devtools-protocol/tot/Page#method-addScriptToEvaluateOnNewDocument with the following snippet:
navigator.geolocation.getCurrentPosition = function(success, failure) {
  success({
    coords: { latitude: <your_lat_float>, longitude: <your_lng_float> },
    timestamp: Date.now(),
  });
}
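If you happen to drive headless Chrome from Node rather than Selenium, Puppeteer exposes the same Emulation.setGeolocationOverride command as page.setGeolocation(). Here is a rough sketch along the same lines as the Selenium example above; openstreetmap.org is only used as a test page.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  // Grant the geolocation permission, then override the reported position.
  const context = browser.defaultBrowserContext();
  await context.overridePermissions('https://www.openstreetmap.org', ['geolocation']);
  const page = await browser.newPage();
  await page.setGeolocation({ latitude: 35.689487, longitude: 139.691706, accuracy: 100 });
  await page.goto('https://www.openstreetmap.org/');
  await page.screenshot({ path: 'screenshot.png' });
  await browser.close();
})();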

Google Maps kml file limit

Is there a limit to the number of kml files which can be rendered? I know there is a file size limit but I seem to be hitting another limit.
The error thrown is
GET https://mts1.googleapis.com/mapslt?hl=en-US&lyrs=kml%3AcXOw0bjKUSgN5kcEMpDT…7Capi%3A3%7Cclient%3A2&x=67&y=98&z=8&w=256&h=256&source=apiv3&token=127990 414 (Request-URI Too Large) mts1.googleapis.com/mapslt?hl=en-US&lyrs=kml%3AcXOw0bjKUSgN5kcEMpDTUkENzfIp…api%3A3%7Cclient%3A2&x=67&y=98&z=8&w=256&h=256&source=apiv3&token=127990:1
Below is an example of what I am attempting to accomplish.
http://tinyurl.com/qg5enx8
from the documentation
There is a limit on the number of KML Layers that can be displayed on a single Google Map. If you exceed this limit, none of your layers will display. The limit is based on the total length of all URLs passed to the KMLLayer class, and consequently will vary by application; on average, you should be able to load between 10 and 20 layers without hitting the limit.
Try using Network links: https://developers.google.com/kml/documentation/kml_tut?csw=1#network_links
I know it sounds a bit more cumbersome, but it helps when loading lots of KML data at once, as sketched below.
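One way to apply network links here (a hypothetical sketch; the master.kml URL and the map variable are placeholders, not something from the question) is to publish a single master KML whose <NetworkLink> entries point at the individual files, so only one URL is passed to KmlLayer and the total-URL-length limit is no longer an issue.
// Sketch: load one master KML that network-links to the individual KML files,
// instead of creating one KmlLayer per file. "master.kml" is a hypothetical URL.
var layer = new google.maps.KmlLayer({
  url: 'https://example.com/kml/master.kml', // contains <NetworkLink> entries for each file
  map: map,                                  // assumes an existing google.maps.Map instance
  preserveViewport: true
});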
Another option is to change your approach and use Google Fusion Tables through the JavaScript API. To start down this route, you'll need to load the script https://apis.google.com/js/client.js?onload=init, where onload refers to YOUR JavaScript function to run once the script has loaded. Mine looks something like this:
function init() {
  gapi.client.load('fusiontables', 'v1', function() {
    gapi.client.setApiKey( YOUR_API_KEY_AS_STRING );
    gapi.client.fusiontables.query.sql({
      sql: ["SELECT * FROM", TABLE_NAME].join(' '),
      fields: 'rows, columns'
    }).execute( function(json) {
      // Do what you need to parse the json response
      // Set up KML Layers
      // etc... make sure to check if maps is loaded too
      json.rows.forEach( function(t) {
        console.log( t );
      });
    });
  });
}

Run Chrome extension in the background

I'm currently creating my first Chrome extension; so far so good.
It's just a little test where I run multiple timers.
But obviously all my timers reset when I open and close the extension.
So to keep all my timers running, I would have to save them somehow when I close the extension and make them run in the background page.
When I open the extension again, those timers should be sent back to the open page.
How would you handle this?
I already have an array of all my timers; what would be the best option for me?
A background page runs at all times when the extension is enabled. You cannot see it, but it can modify other aspects of the extension, like setting the browser action badge.
For example, the following would set the icon badge to the number of unread items in a hypothetical service:
function getUnreadItems(callback) {
  $.ajax(..., function(data) {
    process(data);
    callback(data);
  });
}

function updateBadge() {
  getUnreadItems(function(data) {
    chrome.browserAction.setBadgeText({ text: data.unreadItems });
  });
}
Then you can make a request and schedule it so the data is retrieved and processed regularly; you can also stop the request at any time.
var pollInterval = 1000 * 60; // 1 minute
var timerId;

function startRequest() {
  updateBadge();
  timerId = window.setTimeout(startRequest, pollInterval); // store the id so stopRequest() can cancel it
}

function stopRequest() {
  window.clearTimeout(timerId);
}
Now just load it...
onload='startRequest()'
Also, HTML5 offline storage is good for storing data and constantly updating it...
var data = "blah";
localStorage.myTextData = data;
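Applied to your timer array, here is a minimal sketch of how the pieces could fit together (the startTimes name is hypothetical, and it assumes a persistent background page rather than an event page): keep only each timer's start timestamp in localStorage, let the background page own the ticking, and have the popup recompute the elapsed time whenever it opens.
// background.js (sketch): persist timer start times so popups can restore them.
function startTimer(name) {
  var startTimes = JSON.parse(localStorage.startTimes || '{}');
  startTimes[name] = Date.now();
  localStorage.startTimes = JSON.stringify(startTimes);
}

// popup.js (sketch): recompute elapsed time from the stored start timestamps.
// The popup and background page share the same localStorage (same extension origin).
var startTimes = JSON.parse(localStorage.startTimes || '{}');
Object.keys(startTimes).forEach(function(name) {
  var elapsedSeconds = Math.round((Date.now() - startTimes[name]) / 1000);
  console.log(name + ' has been running for ' + elapsedSeconds + ' seconds');
});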