I am working on a project where I need to make cross-origin requests, but there does not appear to be any way to allow this in a pure web page.
Chrome extensions can simply request permission for the domains they would like to make requests to, as in the following example.
"permissions": [
"http://www.google.com/",
"https://www.google.com/"
]
http://developer.chrome.com/extensions/xhr.html
I found https://developers.google.com/chrome/apps/docs/no_crx which seemed like something closer to what I was looking for, but the only permissions allowed are "geolocation", "notifications", and "unlimitedStorage".
There is the HTTP header Access-Control-Allow-Origin, which could be set on the domains I would like to make requests to, but they are not under my control, so that is not practical.
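For example, a response from one of those domains would need to carry something like this (the origin shown is hypothetical):
Access-Control-Allow-Origin: https://my-tool.example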
Similarly, the Content-Security-Policy header (e.g. Content-Security-Policy: connect-src https://www.google.com;) is primarily used to restrict access further rather than to open it up.
http://www.html5rocks.com/en/tutorials/security/content-security-policy/
I understand the security concerns, but as a quick search will show, people get around this by making a proxy server. Wouldn't it make sense to allow the equivalent request to be made, meaning a request without the user's session/cookie information (like incognito mode)? Or some mechanism by which the page can request permission in the same manner as an extension? It seems somewhat backwards to require things like this to be done in a browser-specific manner.
Just like the Web Speech API (or getUserMedia) requests access to use the microphone.
Any thoughts or perhaps something I missed?
EDIT: I posted this elsewhere and got:
If you are making requests from domains that are under your control, there are other options (like JSONP) that you can use to access data from another domain. Or, you can load an iframe and use postMessage() to interact with the contents - there are lots of tools that also enforce that the domain you're trying to communicate with is willing to share that data.
Me:
JSONP looks like a solution for data sources that provide JSON, but I am not sure it will solve my overall problem. I am trying to create a tool that will pull in data from a variety of sources, both to display a result and to interpret the information to perform an action. One query might be a Google search, which JSONP or the other official methods should allow for, but that does not work for scraping data from other web pages. None of the requests being made will require user session information, so a proxy would work, but it will add latency and maintenance costs.
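For reference, since none of the requests need session information, such a proxy can be stateless and tiny. A rough Node sketch (the port and the URL-in-path convention are my own choices, not a hardened implementation):

var http = require('http');
var https = require('https');
var url = require('url');

// usage: GET http://localhost:8080/https://example.com/page
http.createServer(function (req, res) {
  var target = url.parse(req.url.slice(1)); // strip the leading "/"
  if (!target.protocol) { res.writeHead(400); return res.end('bad target'); }
  var client = target.protocol === 'https:' ? https : http;
  // no cookies or session data are forwarded, matching the "incognito" idea
  client.get(target.href, function (upstream) {
    res.writeHead(upstream.statusCode, {
      'Content-Type': upstream.headers['content-type'] || 'text/plain',
      'Access-Control-Allow-Origin': '*' // open the response to any page
    });
    upstream.pipe(res);
  }).on('error', function () { res.writeHead(502); res.end(); });
}).listen(8080);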
The postMessage() interface would require the pages being requested to implement listeners, right?
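A minimal sketch (domains hypothetical) of what both sides would need - the framed page must install its own listener, which is why postMessage can't reach arbitrary sites:

// parent page at https://my-tool.example, embedding the other site:
var frame = document.getElementById('partner-frame');
frame.contentWindow.postMessage({ query: 'status' }, 'https://partner.example');

// inside the framed page at https://partner.example - this listener must
// already exist there for any communication to happen:
window.addEventListener('message', function (event) {
  if (event.origin !== 'https://my-tool.example') return; // verify the sender
  event.source.postMessage({ result: 'ok' }, event.origin);
});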
So far the "best" solution still seems to be to have a companion extension that runs in a privileged environment that can make the requests and communicate the results with the page. The tool does a variety of other things that work within a web page so I would rather leave the primary environment as the web page with the option to run the extension.
Related
Almost all useful extensions require permission to access and modify all data on a page.
We can't be sure whether a Chrome extension is malicious, in the sense of whether it is leaking my data or not.
I realise that many extensions I use, for example The Great Suspender, don't need to communicate with the outside world even though they need access to all site data.
Is there a way to block specific Chrome extensions from making any network requests at all? (Can we block all outgoing/incoming traffic to a Chrome extension?)
I can't keep monitoring an extension 24/7 to see when it is leaking data; for all you know, it could be leaking once a month.
No, there's no way to block just the network communication of an extension without blocking its site access (aka "host permissions") entirely. That's because a malicious extension can open a tab with its controlling site (or a hidden iframe in the background script) and insert JS code as a standard DOM script, which the browser will attribute to the page itself, so it'll be able to communicate with the site's domain and upload the exfiltrated data.
So, what you can do practically is protect the most sensitive sites you use from all extensions by adding a local ExtensionSettings policy with runtime_blocked_hosts that contains those sites. This will prevent all extensions from accessing the entire site, either via content scripts or network requests. Example: {"*": {"runtime_blocked_hosts": ["*://lastpass.com"]}}. And if you have an extension you trust, you can relax this rule for that extension by using runtime_allowed_hosts. See the ExtensionSettings policy documentation for more examples.
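Spelled out, a policy combining the two could look like this (the 32-character extension ID below is a placeholder):
{
  "*": {
    "runtime_blocked_hosts": ["*://lastpass.com"]
  },
  "aaaabbbbccccddddeeeeffffgggghhhh": {
    "runtime_allowed_hosts": ["*://lastpass.com"]
  }
}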
I am trying to find a script or program to convert my HTML website links from HTTP to HTTPS.
I have looked through hundreds of search results and web articles, and I used the WordPress SSL plugin, but it missed numerous pages with HTTP links.
Below is one of thousands of my links I need to convert:
http://www.robert-b-ritter-jr.com/2015/11/30/blog-121-we-dont-need-the-required-minimum-distributions-rmds
I am looking for a way to do this quickly instead of one at a time.
The HTTPS Everywhere extension will automatically rewrite insecure HTTP requests to HTTPS. Keep in mind that not all websites offer a secure and encrypted connection.
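If the links live in static HTML files, a bulk rewrite is only a few lines. A rough Node sketch (the folder name is hypothetical; it overwrites files in place, so back up first):

var fs = require('fs');
var path = require('path');

var dir = './site'; // hypothetical folder containing the HTML files
fs.readdirSync(dir)
  .filter(function (name) { return name.endsWith('.html'); })
  .forEach(function (name) {
    var file = path.join(dir, name);
    var html = fs.readFileSync(file, 'utf8');
    // naive: swaps every http:// occurrence; only safe if every
    // linked host actually serves HTTPS
    fs.writeFileSync(file, html.replace(/http:\/\//g, 'https://'));
  });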
Looks like Push notifications are finally usable for web-apps! Unfortunately, this requires https for ServiceWorker, which not all sites may have.
One thing I noticed is that the spec mentions:
If r's url's scheme is not one of "http" and "https", then:
Throw a TypeError.
So I'm confused - can the site be HTTP, as long as it includes a service worker that is served over HTTPS? For example, could mydomain.com include an HTTPS service worker from https://anotherdomain.com?
Another standard, web-api simple-push, doesn't mention requiring HTTPS (likely an omission in the documentation?), and says "The user experience on Firefox Desktop has not been drawn out yet". Is the documentation on this outdated, or is push really only supported in Firefox OS?
Simple-push, the current push solution in Firefox OS, doesn't have anything to do with ServiceWorkers.
The next generation of push, implemented by both Google and Mozilla, will be done through ServiceWorkers:
Push API spec
In that case yes, your content will need to be served over HTTPS.
You will probably be interested in the Let's Encrypt initiative (letsencrypt.org), a new certificate authority that will help developers transition their content to HTTPS.
Also, just for development purposes, both Google's and Mozilla's implementations of ServiceWorkers allow you to bypass the secure-context check if you develop against localhost.
In the case of Mozilla you will need to enable the flag:
devtools.serviceWorkers.testing.enabled: true
But again, this is just for development. AFAIK, Mozilla's push implementation has landed or is about to land and will be available in the Nightly builds; you can follow the work here:
https://bugzilla.mozilla.org/show_bug.cgi?id=1038811
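To give an idea of the shape of the new API, a minimal subscription sketch (served over HTTPS; the service worker path is hypothetical):

navigator.serviceWorker.register('/sw.js')
  .then(function (registration) {
    // ask the browser's push service for a subscription
    return registration.pushManager.subscribe({ userVisibleOnly: true });
  })
  .then(function (subscription) {
    // the endpoint is what your server uses to send pushes later
    console.log('push endpoint:', subscription.endpoint);
  })
  .catch(function (err) {
    console.error('subscription failed:', err);
  });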
No, the new generation of push notifications (i.e. Push API) requires HTTPS.
If you need to add push notifications to a website without HTTPS you can use a third-party service like Pushpad (I am the founder) that delivers notifications on your behalf.
The text you cited from the spec is from the Cache.addAll() section (5.4).
Here's the summary of addAll() on MDN:
The addAll() method of the Cache interface takes an array of URLs, retrieves them, and adds the resulting response objects to the given cache. The request objects created during retrieval become keys to the stored response operations.
Service workers can request & cache URLs that are either HTTP or HTTPS, but a Service Worker itself can only work in its registered Scope (which must be HTTPS).
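For instance, a sketch of an install handler (URLs hypothetical); the worker itself must be served over HTTPS, but the cached entries can come from elsewhere:

self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('app-v1').then(function (cache) {
      return cache.addAll([
        '/',
        '/app.js',
        // cross-origin entry: 'no-cors' lets addAll store an opaque
        // response when the other origin doesn't send CORS headers
        new Request('https://example.com/lib.js', { mode: 'no-cors' })
      ]);
    })
  );
});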
simple-push is not related to Service Workers; it seems comparable to the approaches other platforms have taken:
Apple Push Notifications
Google Cloud Messaging
I found a nice workaround to allow notifications from websites and domains without SSL (i.e. http:// rather than https://) in Firefox.
Firefox keeps a file inside the Mozilla profile directory called permissions.sqlite, an SQLite database that holds the permissions for domains. You can add your domain there (http://yourdomainname) with permission for notifications and it will work.
I have created a demonstration for Windows, written in Go, here: https://gist.github.com/caviv/8df5fa11a98e0e33557f75215f691d54
I am developing an app using Phalcon and would like to create a popup logging window that displays any logging type information when I am logged in (such as DB calls and exceptions).
A lot of my app is driven by Ajax calls. Is it going to be possible to have a window that I can pop up on my main app that uses a tail-like method of displaying this information?
How would I go about this? I'm not entirely sure that what I want is possible with the Ajax calls as they are done in a different request. I can't find anything on the internet as to how I would go about this so any help would be great.
Well, you didn't say that explicitly, but I imagine you want this just for development purposes. If so, you can log useful info through a method that checks whether it should send that log to the browser based on some criteria (e.g. the logged-in user is you, the app is in a dev environment, etc.) and then use Phalcon's FirePHP log adapter to send the log information to the browser.
You'll just need a FirePHP extension in Firefox or Chrome to be able to see the information in your JavaScript console. And yes, it works well with Ajax calls too.
Let me know if you need further explanations on this...
I think you are looking for a debug toolkit. There are lots of toolkits available on packagist.org and phalconist.com. I personally like the phalcon-debug-widget toolkit, which you may try.
From my experiences so far, I've concluded that the HTML5 Manifest scheme was really terribly designed.
My site serves a manifest file when a user is logged in. Unfortunately, when they log out, they can still access the cached protected materials. Can anyone think of a way to fix this?
A manifest file is designed to take a website offline and still be able to navigate. It essentially just tells the browser to download that stuff and keep it in cache. If you're adding secret stuff to the manifest and the user goes offline, he needs to still be able to access it - or what's the point of having a special logged-in manifest file if he has to be logged in (and therefore online)?
You could add JavaScript that checks whether the user is online again and, if he is, tries to validate the "login state" and redirects or removes the secret stuff from localStorage (if you were to use localStorage to save the "secret" stuff and JavaScript to display it, instead of a manifest file).
Let's say the secret stuff is an image, you are not using a manifest file but just displaying images when the user is logged in, and it's crucial that the user can't view that image after logout. You would need to set the HTTP headers to no-cache and a cache expiry date in the past, so that a normal user won't see it anymore. The problem then is that the image is downloaded every time somebody visits the website.
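Concretely, that would mean response headers along these lines:
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT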
You need to approach the HTML5 Application Cache in a different way. It is not useful for caching server-side dynamically generated pages, especially those that require a login to reach. The Application Cache has no concept of logins, nor securing a page from somebody with a different/no login.
It is much more appropriate for an AJAX-based site, where all HTML/CSS/JavaScript is static and registered in the Application Cache, and data is instead fetched via AJAX then used to populate pages. If you need to cache data in the application for offline use, then use one of the offline data storage mechanisms such as Local Storage/Session Storage, or IndexedDB, for data.
You can then make your own judgement on how much data you want to cache offline, since there's no way to validate a login without making a call to the server, which is naturally inaccessible whilst offline.
What if, when the user logs out or is not logged in, they get a manifest with only NETWORK: *?
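Something like this minimal logged-out manifest (just a sketch) would cache nothing and send every request to the network:
CACHE MANIFEST
# logged-out version: nothing cached
NETWORK:
*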