I have an Electron app that uses a custom app:// protocol to serve files. It seems that Chrome/Electron considers all files returned from that protocol to be from the same origin. This means that app pages have the same zoom level, which isn't what I want.
How does Electron determine the origin in this case (a pointer to the code would be helpful) and is there any way to convince it that some URLs are from different origins, short of registering another protocol like app2://?
I found some documentation in the Chromium source code:
// Zoom can be defined at three levels: default zoom, zoom for host, and zoom
// for host with specific scheme. Setting any of the levels leaves settings
// for other settings intact. Getting the zoom level starts at the most
// specific setting and progresses to the less specific: first the zoom for the
// host and scheme pair is checked, secondly the zoom for the host only and
// lastly default zoom.
And in zoom_controller.cc it seems like it just uses the scheme/host from the URL:
GURL url = content::HostZoomMap::GetURLFromEntry(entry);
std::string host = net::GetHostOrSpecFromURL(url);
if (zoom_map->HasZoomLevel(url.scheme(), host)) {
  // If there are other tabs with the same origin, then set this tab's
  // zoom level to match theirs. The temporary zoom level will be
  // cleared below, but this call will make sure this tab re-draws at
  // the correct zoom level.
  double origin_zoom_level =
      zoom_map->GetZoomLevelForHostAndScheme(url.scheme(), host);
And
std::string GetHostOrSpecFromURL(const GURL& url) {
  return url.has_host() ? TrimEndingDot(url.host_piece()) : url.spec();
}
url.spec() actually returns the entire URL, which suggests to me that if I browse file:// URLs they'll get separate zoom levels. I verified this experimentally and it does seem to be the case.
In any case, I figured out what was happening in my case: I was running in development mode, which uses the webpack dev server. In that case all files are served from localhost, so they always get the same zoom level.
However, in production using the app:// protocol, my code was setting the host to "." so URLs looked like app://./index.html. The host is actually ignored by the custom protocol handler, so to give windows separate origins you can just make up a fake hostname for each of them, like app://main/index.html or app://help/help.html. This seems to work perfectly.
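A minimal sketch of that setup (assuming Electron's protocol.registerFileProtocol and a dist/ folder for the bundled files; adjust the scheme registration and paths to your app):

// main process (sketch): the handler ignores the hostname, so each made-up
// host ("main", "help", ...) becomes its own zoom origin
const { app, BrowserWindow, protocol } = require('electron');
const path = require('path');

app.whenReady().then(() => {
  protocol.registerFileProtocol('app', (request, callback) => {
    const { pathname } = new URL(request.url); // the host part is ignored on purpose
    callback({ path: path.join(__dirname, 'dist', pathname) });
  });

  new BrowserWindow().loadURL('app://main/index.html'); // zoom origin: app://main
  new BrowserWindow().loadURL('app://help/help.html');  // zoom origin: app://help
});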
I'm working on a Manifest V3 browser extension where I need to identify, from the background script, which browser the extension is currently running in. Since a Manifest V3 extension uses a service worker, it doesn't have a DOM or window, so I'm not able to use window.navigator.userAgent.
I found a related question about how to get window height and width information, but I couldn't find anything about fetching the browser's userAgent.
Is this possible?
Neutral globals
Things like navigator aren't specific to the visual representation of a window.
Just omit window. and read them directly:
navigator
navigator.userAgent
atob
fetch
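For example, in an MV3 background service worker (a sketch; the substring checks are only a rough illustration, not a complete browser-detection routine):

// background.js (MV3 service worker): navigator is available here without window
const ua = navigator.userAgent;
const isFirefox = ua.includes('Firefox/');
const isEdge = ua.includes('Edg/');
console.log('User agent:', ua, { isFirefox, isEdge });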
Window-specific globals
Things specific to user interaction or visual/aural representation like DOM or AudioContext, or those that may show a prompt for user permissions.
Not available in a worker.
Aliases for window
Use them instead of window for code clarity, or when a local variable has the same name as a global property and would otherwise shadow it.
Built-in globalThis (Chrome/ium 71+, FF 65+) and self
These are worker-compatible aliases for the global scope. In theory a JS library you load could redefine them, but that would be very unusual.
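Both resolve to the same global object whether the code runs in a page or in a worker:

// In a service worker this is ServiceWorkerGlobalScope; in a page it's window
console.log(globalThis === self); // true
console.log(self.navigator.userAgent);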
Self-made global
The most reliable method, but it requires that 'use strict' be applied only inside functions/IIFEs, not globally.
This is already offered by bundlers like webpack.
Here's how you can replicate it yourself:
const global = (function () {
  if (!this) throw "Don't add 'use strict' globally, use it inside IIFE/functions";
  return this;
})();
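With that in place, the captured global can be used the same way everywhere, for example:

// `global` points at window in a page and at the worker scope in a worker
console.log(global.navigator.userAgent);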
My company switched to B2C, and now login is performed via a redirect through another website. Because of this, my login tests that used to work are now giving me a cross-origin error.
I already tried adding chromeWebSecurity: false to cypress.json and it makes no difference.
I also tried adding this to index.js:
on('before:browser:launch', (browser = {}, launchOptions) => {
  // `args` is an array of all the arguments that will
  // be passed to browsers when it launches
  console.log(launchOptions.args) // print all current args

  if (browser.family === 'chromium' && browser.name !== 'electron') {
    // auto open devtools
    launchOptions.args.push('--auto-open-devtools-for-tabs')

    // whatever you return here becomes the launchOptions
    return launchOptions
  }

  if (browser.family === 'firefox') {
    // auto open devtools
    launchOptions.args.push('-devtools')

    return launchOptions
  }
})
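(For context, that snippet lives inside the exported plugins function, roughly like the following sketch; the exact file path is assumed, pre-Cypress-10 layout:)

// cypress/plugins/index.js (assumed path and layout)
module.exports = (on, config) => {
  on('before:browser:launch', (browser = {}, launchOptions) => {
    // ...push args as in the snippet above...
    return launchOptions
  })
  return config
}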
It didn't make any difference either.
From what I've read, many people have had this issue since 2018, and Cypress doesn't seem to have solved it yet. That makes me nervous because I already have a lot of tests written for Cypress, so migrating would be painful.
Does anyone know a workaround?
Here's the full error:
CypressError
Cypress detected a cross origin error happened on page load:
Blocked a frame with origin "my company's address" from accessing a cross-origin frame.
Before the page load, you were bound to the origin policy:
my company's address
A cross origin error happens when your application navigates to a new URL which does not match the origin policy above.
A new URL does not match the origin policy if the 'protocol', 'port' (if specified), and/or 'host' (unless of the same superdomain) are different.
Cypress does not allow you to navigate to a different origin URL within a single test.
You may need to restructure some of your test code to avoid this problem.
Alternatively you can also disable Chrome Web Security in Chromium-based browsers which will turn off this restriction by setting { chromeWebSecurity: false } in cypress.json.
How can I configure Polymer's platinum-sw-cache or platinum-sw-fetch to cache all URL paths except for /_api, which is the URL for Hoodie's API? I've configured a platinum-sw-fetch element to handle the /_api path, then platinum-sw-cache to handle the rest of the paths, as follows:
<platinum-sw-register auto-register
                      clients-claim
                      skip-waiting
                      on-service-worker-installed="displayInstalledToast">
  <platinum-sw-import-script href="custom-fetch-handler.js"></platinum-sw-import-script>
  <platinum-sw-fetch handler="HoodieAPIFetchHandler"
                     path="/_api(.*)"></platinum-sw-fetch>
  <platinum-sw-cache default-cache-strategy="networkFirst"
                     precache-file="precache.json">
  </platinum-sw-cache>
</platinum-sw-register>
custom-fetch-handler.js contains the following. Its intent is simply to return the results of the request the way the browser would if the service worker was not handling the request.
var HoodieAPIFetchHandler = function(request, values, options) {
  return fetch(request);
};
What doesn't seem to be working correctly is that after user 1 has signed in, then signed out, then user 2 signs in, then in Chrome Dev Tools' Network tab I can see that Hoodie regularly continues to make requests to BOTH users' API endpoints like the following:
http://localhost:3000/_api/?hoodieId=uw9rl3p
http://localhost:3000/_api/?hoodieId=noaothq
Instead, it should be making requests to only ONE of these API endpoints. In the Network tab, each of these URLs appears twice in a row, and in the "Size" column the first request says "(from ServiceWorker)," and the second request states the response size in bytes, in case that's relevant.
The other problem which seems related is that when I sign in as user 2 and submit a form, the app writes to user 1's database on the server side. This makes me think the problem is due to the app not being able to bypass the cache for the /_api route.
Should I not have used both platinum-sw-cache and platinum-sw-fetch within one platinum-sw-register element, since the docs state they are alternatives to each other?
In general, what you're doing should work, and it's a legitimate approach to take.
If there's an HTTP request made that matches a path defined in <platinum-sw-fetch>, then that custom handler will be used, and the default handler (in this case, the networkFirst implementation) won't run. The HTTP request can only be responded to once, so there's no chance of multiple handlers taking effect.
I ran some local samples and confirmed that my <platinum-sw-fetch> handler was properly intercepting requests. When debugging this locally, it's useful to either add in a console.log() within your custom handler and check for those logs via the chrome://serviceworker-internals Inspect interface, or to use the same interface to set some breakpoints within your handler.
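For instance, a temporary log line like this is usually enough to confirm the handler is being hit:

// custom-fetch-handler.js: temporary logging to verify interception
var HoodieAPIFetchHandler = function(request, values, options) {
  console.log('HoodieAPIFetchHandler intercepted:', request.url);
  return fetch(request);
};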
What you're seeing in the Network tab of the controlled page is expected—the service worker's network interactions are logged there, whether they come from your custom HoodieAPIFetchHandler or the default networkFirst handler. The network interactions from the perspective of the controlled page are also logged—they don't always correspond one-to-one with the service worker's activity, so logging both does come in handy at times.
So I would recommend looking deeper into the reason why your application is making multiple requests. It's always tricky thinking about caching personalized resources, and there are several ways that you can get into trouble if you end up caching resources that are personalized for a different user. Take a look at the line of code that's firing off the second /_api/ request and see if it's coming from a cached resource that needs to be cleared when your users log out. <platinum-sw> uses the sw-toolbox library under the hood, and you can make use of its uncache() method directly within your custom handler scripts to perform cache maintenance.
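As a rough sketch of that kind of cache maintenance (the DELETE /_api/_session check and the '/index.html' entry are assumptions; substitute whatever sign-out request and personalized resources your app actually uses):

// custom-fetch-handler.js: evict a personalized entry when the session ends
var HoodieAPIFetchHandler = function(request, values, options) {
  if (request.method === 'DELETE' && request.url.indexOf('/_api/_session') !== -1) {
    // `toolbox` is the sw-toolbox global that <platinum-sw> loads under the hood
    toolbox.uncache('/index.html');
  }
  return fetch(request);
};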
CefSharp: 1.25.0 (based on Chromium 25.0.1364.152)
Angular: 1.3.0-beta16
UIRouter: 0.2.10
I'm developing a stand-alone C# application that uses CefSharp Chromium + Angular + UIRouter as the stack the GUI relies on.
I started off by trying to make the above stack load the sample code provided here:
http://scotch.io/tutorials/javascript/angular-routing-using-ui-router
For the sake of elegance, the GUI's HTML and JavaScript libraries are bundled together in a single resource file inside the application's .NET executable.
This resource is then passed programmatically during application init to the Chromium control (by means of .LoadHtml) and loaded directly into the browser; that is, the HTML is not loaded from a separate .html file residing on the hard drive or on a remote HTTP server. If the HTML is loaded from the latter ("standard") venues, then everything works flawlessly.
I noticed that when loading the HTML directly as a string, as described above, the URL of the resulting static web page (i.e. window.location) is set to 'about:blank'. It appears that Angular has trouble with such a URL, especially when it comes to routing:
First of all, the invocation of:
history.pushState(null, "", url);
inside
self.url = function(url, replace) { ... }
throws an exception like:
Error: SecurityError: DOM Exception 18
Error: An attempt was made to break through the security policy of the user agent.
at Browser.self.url (about:blank:8004:21)
at about:blank:10049:24
at Scope.$eval (about:blank:11472:28)
at Scope.$digest (about:blank:11381:31)
at Scope.$apply (about:blank:11493:24)
at about:blank:6818:15
at Object.invoke (about:blank:7814:19)
at doBootstrap (about:blank:6817:16)
at bootstrap (about:blank:6827:14)
at angularInit (about:blank:6796:7)
The URL that is passed to .pushState is:
about:blank#/home
which appears to be the result of concatenating 'about:blank' with the default state '/home'.
Secondly, even if the above problem is solved, there appears to be a major issue inside:
$rootScope.$watch(function $locationWatch() { ... })
which causes the following error:
Error: [$rootScope:infdig] 10 $digest() iterations reached. Aborting!
the reason is that when 'window.location' is set to 'about:blank' then
$browser.url()
always returns
about:blank
while
$location.absUrl()
returns
about:blank#/home
causing $watch to fire non-stop.
Is there any proper way to handle this shortcoming of Angular when it's dealing with web pages loaded directly into the browser in the manner described here?
If there is no workaround for this issue, then I'm afraid I will have to resort to loading the HTML from a file on the hard drive, which, apart from being slower (the string can't be cached in memory for subsequent uses), is also a noticeable deviation from the goal of developing a stand-alone exe. :(
Thanks in advance and I apologize if this issue has been addressed elsewhere.
By default, Firefox allows an HTML file loaded from "file:///..." to load external files, but Chrome does not. In CefSharp (Chromium) you can enable it this way:
// Allow angular routing and load external files
BrowserSettings setting = new BrowserSettings();
setting.FileAccessFromFileUrls = CefState.Enabled;
browser.BrowserSettings = setting;
this.Controls.Add(browser);
Most browsers don't allow AJAX against the file system, but Chromium can be tweaked to allow it:
browser = new ChromiumWebBrowser(path);
browser.BrowserSettings = new BrowserSettings();
browser.BrowserSettings.FileAccessFromFileUrlsAllowed = true;
How do you test a hybrid application when your requirement is to sign off on and ship the very same package? You have a single hardcoded URL your AJAX calls go to, but this endpoint needs to be different in the test and production environments.
Overriding the hosts file is not an option because it would require rooting all test devices.
Serving and hosting a custom DNS server or HTTP proxy is overkill.
An application option is against the requirements; end users cannot be exposed to such a setting.
A cookie to optionally override the URL would work, but how do I add a cookie manually to a hybrid app running on a tablet?
A local storage setting to optionally override the URL would work, but how do I change local storage manually?
Is there a way to have an application configuration option or setting but hide it from the end user?
Testing is performed on an iOS tablet running a native app package.
If you really really want to ship the exact same code all the time, you could easily use local storage. In your app:
var endpoint;

if (!localStorage.getItem('env')) localStorage.setItem('env', 'production');

switch (localStorage.getItem('env')) {
  case 'testing':    endpoint = 'http://testserver'; break;
  case 'production': endpoint = 'http://productionserver'; break;
}
Then just open your browser console and type:
localStorage.setItem('env', 'testing');
You might not be able to open a console in mobile browsers or inside Cordova, but if you really need that, rethink the "same package" requirement. I can't think of any valid reason why you would not want separate testing and production builds.