How to import a client certificate into Chrome on Google TV?

We're looking to host a private web service in a cloud based server, and are considering client certificates as the method of identifying and authenticating authorized nodes.
Long story short, it seems that there's no interface anywhere to import such certs into the GTV Chrome browser.
I've found Menu -> Settings -> Advanced Settings -> Under the Hood -> Manage Certificates, but the only thing it seems to let you do is revoke trust in the manufacturer-provided certs.
Although requiring users to install certs by hand is suboptimal, it seems to me that it should at least be possible.
Have I missed how to do this?
Furthermore, is there an API for this? It might be better to have the users install an app that manages such issues, in addition to providing other services.

Unfortunately this is not something supported by the current generation of Google TVs. It is an edge case that would expose too much risk to the user and be a possible security hole if abused. Trust certificates can be installed by OEMs, so it may be worthwhile to contact one of them.

Related

Chrome and Safari not honoring HPKP

I added an HPKP header to my site, but it is not honored by Chrome or Safari. I tested it manually by setting a proxy and by going to chrome://net-internals/#hsts and looking for my domain, which was not found. The HPKP header seems correct, and I also tested it using an HPKP toolset, so I know it is valid.
I am thinking I might be doing something weird with my flow. I have a web app, which is served over myapp.example.com. On login, the app redirects the user to authserver.example.com/begin to initiate an OpenID Connect Authorization Code flow. The HPKP header is returned only from authserver.example.com/begin, and I think this might be the issue. On the other hand, I have includeSubDomains in the HPKP header, so I think this is not the issue.
This is the HPKP header (line breaks added for readability):
public-key-pins:max-age=864000;includeSubDomains; \
pin-sha256="bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE="; \
pin-sha256="cJjqBxF88mhfexjIArmQxvZFqWQa45p40n05C6X/rNI="; \
report-uri="https://reporturl.example"
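For reference, a pin like the ones above can be computed from a certificate with the usual openssl pipeline (file name illustrative):
openssl x509 -in mycert.pem -pubkey -noout | \
openssl pkey -pubin -outform der | \
openssl dgst -sha256 -binary | \
openssl enc -base64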
Thanks!
I added HPKP header to my site, but it is not honored by Chrome or Safari... I tested it manually by setting a proxy...
RFC 7469, Public Key Pinning Extension for HTTP, kind of sneaks that past you. The IETF published it with overrides, so an attacker can break a known good pinset. It's mentioned once in the standard by the name "override", but the details are not provided. The IETF also failed to publish a discussion of it in a security considerations section.
More to the point, the proxy you set engaged the override. It does not matter if it's the wrong proxy, a proxy certificate installed by a mobile device OEM, or a proxy controlled by an attacker who tricked a user into installing it. The web security model and the standard allow it. They embrace interception and consider it a valid use case.
Something else they did was make reporting of the broken pinset a MUST NOT or SHOULD NOT. It means the user agent is complicit in the cover-up, too. That's not discussed in a security considerations section, either. They really don't want folks to know their supposedly secure connection is being intercepted.
Your best bet to avoid it is to move outside the web security model. Don't use browser-based apps when security is a concern. Use a hybrid app and perform the pinning yourself. Your hybrid app can host a WebView control or view but still get access to the channel to verify parameters. Also see OWASP's Certificate and Public Key Pinning.
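To illustrate, here is a minimal Java sketch of doing the pin check yourself, the way a hybrid app could; the host and pin value are borrowed from the question above, and a real app would hook this into a custom TrustManager rather than checking after connecting:

import java.io.InputStream;
import java.net.URL;
import java.security.MessageDigest;
import java.security.cert.Certificate;
import java.util.Base64;
import javax.net.ssl.HttpsURLConnection;

public class PinCheck {
    // Base64 SHA-256 of the server's SubjectPublicKeyInfo -- the same value
    // that would appear in a pin-sha256 directive (borrowed from the question).
    static final String EXPECTED_PIN = "bcppaSjDk7AM8C/13vyGOR+EJHDYzv9/liatMm4fLdE=";

    public static void main(String[] args) throws Exception {
        HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://authserver.example.com/begin").openConnection();
        conn.connect();
        try {
            // The leaf certificate is first in the chain presented by the server.
            Certificate leaf = conn.getServerCertificates()[0];
            byte[] spki = leaf.getPublicKey().getEncoded(); // DER SubjectPublicKeyInfo
            String pin = Base64.getEncoder().encodeToString(
                    MessageDigest.getInstance("SHA-256").digest(spki));
            if (!EXPECTED_PIN.equals(pin)) {
                throw new SecurityException("Pin mismatch, possible interception: " + pin);
            }
            try (InputStream in = conn.getInputStream()) {
                // Safe to read the response here.
            }
        } finally {
            conn.disconnect();
        }
    }
}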
Also see Comments on draft-ietf-websec-key-pinning on the IETF mailing list. One of the suggestions in the comments was to change the title to "Public Key Pinning Extension for HTTP with Overrides" to highlight the feature. Not surprisingly, that's not something they want. They are trying to do it surreptitiously, without user knowledge.
Here's the relevant text from RFC 7469:
2.7. Interactions with Preloaded Pin Lists
UAs MAY choose to implement additional sources of pinning
information, such as through built-in lists of pinning information.
Such UAs should allow users to override such additional sources,
including disabling them from consideration.
The effective policy for a Known Pinned Host that has both built-in
Pins and Pins from previously observed PKP header response fields is
implementation-defined.
Locally installed CAs (like those used by the proxy you say you are running) override any HPKP checks.
This is necessary so as not to completely break the internet, given how prevalent they are: anti-virus software and proxies used in large corporations basically MITM HTTPS traffic through a locally issued certificate, as otherwise they could not read the traffic.
Some argue that locally installing a CA requires access to your machine, and at that point it's game over anyway, but to me this still massively reduces the protection of HPKP, and that, coupled with the high risks of using HPKP, means I am really not a fan of it.

Signing an extension

How can I sign my extension so users can be sure it is safe and won't steal their information? My extension needs to access page contents, and some users are not comfortable permitting an extension to do so.
Can I sign my extension using a verified signing provider, for example VeriSign?
When you publish an extension to the Chrome Web Store, the only "proof" that users have of your extension's trustworthiness is given by the rating system and the comments of other users. A hypothetical user who wants to install your extension looks at the ratings and the comments, so make sure that your extension has good feedback from its users.
By the way, Google doesn't always look at the internal code of your extension manually; most of the time it only performs some heuristic checks on the code. So the problem is that developers could easily include in their extensions some malicious code that may not be recognized and that could harm users' privacy.
Therefore, due to the Chrome Web Store policy, "validating" your extension is not possible at all. Plus, using SSL services (like the one you mentioned) would not make any sense, since your extension's scripts are stored locally.
What you can do is:
Encourage users to rate your extension and leave good feedback if they like it.
Redirect users to help links in case of trouble (links like "having trouble?" in your popup and so on).
Write a well-worded description, and obviously add some images (or, better, videos) to clearly show why a user may find your extension useful.
Always be nice (implied, ahah).
Your extension cannot be signed by an external provider, but it is signed by the Chrome Web Store itself.
Every extension has an associated private key used for signing, which ensures a consistent extension ID across updates. You can generate one yourself by packaging the extension as a CRX (which produces a .pem file) and provide it when publishing on the CWS, or the CWS generates one internally when you publish (and then there's no way to extract it).
From then on, only code signed with this key (by the Web Store engine) will be recognized by Chrome as an update. Furthermore, at least on Windows, only CWS-signed packages can be installed.
This security is as strong as the developer's Google account: if it is compromised, CWS will accept an update to your extension, which will be signed with the same key.
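If you want to hold the key yourself, command-line packaging is one way to generate it; something like this (paths illustrative) creates myextension.crx plus myextension.pem on the first run, and subsequent runs reuse the key:
chrome --pack-extension=/path/to/myextension
chrome --pack-extension=/path/to/myextension --pack-extension-key=/path/to/myextension.pem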
However, as Marco correctly pointed out in his answer, the act of signing something is just snake oil with respect to security. The signature verifies the identity of the publisher, but nothing more.
There's one more aspect - verified sites. If your extension interacts with a site you control, you can certify this by associating your extension with the site. It will be visible in the Web Store.
CWS-signed packages carry an additional implicit assurance: "so far, we did not catch this extension breaking any rules". Google can pull an extension off the Web Store, and in severe cases blacklist it and remove it from all Chrome installs. So that's an additional assurance for the user.
Google runs automated heuristic checks every time you submit your extension, which can trigger manual review. But that's invisible to the user.
That said, make sure to ask for only the absolute minimum permissions you need. For instance, look into the activeTab permission. It gives full host permissions for a tab when the extension is invoked by the user, but does not result in any permission warning. It was specifically added to address concerns about blanket extension permissions.
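A minimal manifest sketch using it (name and version illustrative):
{
  "name": "My Extension",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": ["activeTab"],
  "browser_action": { "default_title": "Run on this page" }
}
Because the permission only takes effect on an explicit user gesture (clicking the browser action), the install dialog shows no scary host warnings.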

Implementing NTLM silent login with Java

Hoping someone can remedy my naivety when it comes to calling a simple URL to an application (which returns XML) using NTLMv2.
I have read pretty much every question and page there is but I am left with one overriding curiosity. I am using the HTTPClient at present (although this can be changed) along with the latest JDK (at the time of writing).
Here is an example page which appears to call the JCIFS library:
http://hc.apache.org/httpcomponents-client-ga/ntlm.html
All looks good, albeit confusing, but it highlights the question that many of the examples I have seen raise: the issue of supplying NTCredentials.
To me, the whole point of NTLM is that I should not have to supply credentials. The target application is set up to use NTLM, so surely the credentials of the currently logged-in user should be used? Why should I be supplying any credentials myself?
Apologies if I am missing something obvious here. I just need the most basic form of NTLM SSO possible using Java. I don't care what version of what; I am able to use the latest of anything.
Holding out hope! Thanks for reading.
Unfortunately, there's no way to do single sign-on in a pure Java environment.
NTLM isn't a solution to single sign-on directly. NTLM is a challenge/response authentication mechanism and it requires the NTLM hash of the user's password. Windows machines are able to provide single sign-on using NTLM because the NTLM hash is persisted. They are then able to compute the response to a challenge based on the persisted hash.
Without access to that hash (and, to my knowledge, you can't simply request it) you need to compute it yourself. And that requires having the user's password.
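In practice that means explicitly supplying the credentials. With Apache HttpClient 4.x, for example, the setup looks roughly like this (host, port, and account details are placeholders):

import org.apache.http.auth.AuthScope;
import org.apache.http.auth.NTCredentials;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class NtlmClient {
    public static void main(String[] args) throws Exception {
        BasicCredentialsProvider credsProvider = new BasicCredentialsProvider();
        // NTLM needs the actual password (or its hash); Java cannot borrow
        // the logged-in Windows session, which is the point made above.
        credsProvider.setCredentials(
                new AuthScope("intranet.example.com", 80),
                new NTCredentials("user", "password", "WORKSTATION", "DOMAIN"));
        CloseableHttpClient client = HttpClients.custom()
                .setDefaultCredentialsProvider(credsProvider)
                .build();
        try {
            // The client performs the NTLM challenge/response automatically.
            client.execute(new HttpGet("http://intranet.example.com/service.xml"))
                  .close();
        } finally {
            client.close();
        }
    }
}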
Similarly, you can do single sign-on with a Kerberos ticket using SPNEGO authentication (if the remote system is setup to support it, of course) but Java unfortunately reimplemented Kerberos instead of using the system Kerberos libraries. So even if you were already logged in to the domain, you'd need to go get another Kerberos ticket for Java. And that means typing your password in again.
The only realistic way to avoid typing in a password to authenticate is to call the native methods. On Windows, this is SSPI, which will provide you the ability to respond to an NTLM or SPNEGO challenge. On non-Windows platforms, this is handled by the very similar GSSAPI and provides the ability to respond to SPNEGO (Kerberos).

Reason for installation through Chrome Web Store

Is there a technical reason why a Google Drive application must be installed through the Chrome Web Store (which severely limits the number of potential users)?
The reason that installation is required is to give users the ability to access applications from within the Google Drive user interface. Without installation, users would have no starting point for most applications, as they would not be able to start at a specific file, and then choose an application.
That said, I realize it can be difficult to work with in early development. We (the Google Drive team) are evaluating if we should remove this requirement or not. I suspect we'll have a final answer/solution in the next few weeks.
Update: We have removed the installation requirement. Chrome Web Store installation is no longer required for an app to work with a user's Drive transparently, but it is still required to take advantage of Google Drive UI integrations.
To provide the create->xxx behaviour that makes a new application document from the Drive interface, and to be able to open existing documents from links, there must be some kind of manifest registered with Google's systems, and some kind of agreement from the user that an application can access their documents and work with specific file types. There's little way around this when you think about the effects of not doing it.
That said, there are two high-level issues that make for compatibility problems.
As the poster says, the requirement to install in the Chrome store
severely limits the number of potential users.
But why? Why do the majority of Chrome Web Store applications say that they only work on Chrome? Most of these are wrappers around web applications that work on a range of browsers, yet you click through a selection and most display "works on Chrome", i.e. they only install on Chrome.
Before we launched our application on Chrome, we found that someone had created an "xxxxxxx launcher" in the store that simply forwards to our web app page. We're still wondering why it only "works on Chrome". I suspect that some default template for the web store has:
"container" : "CHROME",
in it, which is the configuration option that means Chrome-only. That said, I can't find one, so I'm very confused about why this is. It would be healthier if people picked Chrome because it's the better browser (which it is in a number of regards), not because their choice is limited if they don't. People can always write to the application vendor and ask if this limitation is really necessary.
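For contrast, if memory serves from the old Drive SDK docs, a Drive app's Web Store manifest declared a Drive container through the same mechanism, roughly like this (project ID illustrative):
{
  "name": "My Drive App",
  "version": "1",
  "container": "GOOGLE_DRIVE",
  "api_console_project_id": "123456789"
}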
The second thought is that a standardised manifest format across cloud storage providers would mean much higher take-up among web app vendors. Although it isn't hugely complex to integrate with, for example, Google Drive, the back-end work and ironing out the details took over a week in total. Multiply that by lots of storage providers and you lose an engineer for two months, plus the maintenance afterwards. The more that is common across vendor integrations, the more likely integration is to happen.
And while I'm on it, a JavaScript widget for opening and saving (I know Google has one for opening) from each cloud storage provider would improve integration by web app vendors. We should be using one storage provider across multiple applications, not one web application across multiple storage providers; the file UI should be common to the storage provider.
To sync with the local file system, one would need to install a browser plug-in to bridge the Web with the local computer. By default, Web applications don't have file I/O permissions on the user's hard drive, for security reasons. Browser extensions, on the other hand, do not suffer from this limitation, as it's assumed that when you, the user, give an application permission to be installed on your computer, you give it permission to access more resources on the local computer.
Considering that the add-on architectures for different browsers are different, Google decided to build this application for their own platform first. You can also find Google Drive in the Android/Play marketplace, one of Google's other app marketplaces.
In the future, if Google Drive is successful, there may very well be add-ons created for Firefox and Internet Explorer, but this has yet to be done and depends on whether Google either releases the APIs to the public or internally decides to develop add-ons for other browsers as well.

Internet facing Windows Server 2008 -- is it secure?

I really know nothing about securing or configuring a "live" internet-facing web server, and that's exactly what I have been assigned to do by management. Aside from the operating system being installed (and Windows Update), I haven't done a thing. I have read some guides from Microsoft and on the web, but none of them seem to be very comprehensive or up to date. Google has failed me.
We will be deploying a MVC ASP.NET site.
What is your personal checklist when you are getting ready to deploy an application on a new Windows server?
This is all we do:
Make sure Windows Firewall is enabled. It has an "off by default" policy, so the out of box rule setup is fairly safe. But it never hurts to turn additional rules off, if you know you're never going to need them. We disable almost everything except for HTTP on the public internet interface, but we like Ping (who doesn't love Ping?) so we enable it manually, like so:
netsh firewall set icmpsetting 8
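On Server 2008 the newer advfirewall syntax achieves the same thing; something like this should be equivalent (the rule name is arbitrary):
netsh advfirewall firewall add rule name="Allow ICMP Echo Request" protocol=icmpv4:8,any dir=in action=allow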
Disable the Administrator account. Once you're set up and going, give your own named account admin rights. Disabling the default Administrator account helps reduce the chance (however slight) of someone hacking it. (The other common default account, Guest, is already disabled by default.)
Avoid running services under accounts with administrator rights. Most reputable software is pretty good about this nowadays, but it never hurts to check. For example, in our original server setup the Cruise Control service had admin rights. When we rebuilt on the new servers, we used a regular account. It's a bit more work (you have to grant just the rights necessary to do the work, instead of everything at once) but much more secure.
I had to lock down one a few years ago...
As a sysadmin, get involved with the devs early in the project: testing, deployment, and operation and maintenance of web apps are all part of the SDLC.
These guidelines apply in general to any DMZ host, whatever the OS, Linux or Windows.
There are a few books dedicated to IIS7 admin and hardening, but it boils down to:
Decide on your firewall architecture and configuration and review it for appropriateness. Remember to defend your server against internal scanning from infected hosts.
Depending on the level of risk, consider a transparent application-layer gateway to clean the traffic and make the webserver easier to monitor.
Treat the system as a bastion host: lock down the OS, reducing the attack surface (services, ports, installed apps, i.e. no interactive users or mixed workloads), and configure firewalls/RPC to respond only to specified management DMZ or internal hosts.
Consider SSH, OOB and/or management-LAN access, and host IDS verifiers like AIDE, Tripwire, or Osiris.
If the webserver is sensitive, consider using Argus to monitor and record traffic patterns in addition to the IIS/firewall logs.
Baseline the system configuration and then regularly audit against the baseline, minimizing or controlling changes to keep it accurate. Automate it; PowerShell is your friend here.
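As a tiny sketch of the idea (paths illustrative): snapshot the service state once, then diff against it on each audit.
Get-Service | Export-Clixml C:\baseline\services.xml
# later, during an audit:
Compare-Object (Import-Clixml C:\baseline\services.xml) (Get-Service) -Property Name,Status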
The US NIST maintains a national checklist program repository. NIST, NSA, and CIS have OS and webserver checklists worth investigating, even though they are for earlier versions. Look at the Apache checklists as well for configuration suggestions, and review the Addison-Wesley and O'Reilly Apache security books to get a grasp of the issues.
http://checklists.nist.gov/ncp.cfm
http://www.nsa.gov/ia/guidance/security_configuration_guides/web_server_and_browser_guides.shtml
www.cisecurity.org offers checklists and benchmarking tools for subscribers. Aim for a score of 7 or 8 at a minimum.
Learn from others' mistakes (and share your own if you make them):
Inventory your public-facing application products and monitor them in NIST's NVD (vulnerability database; it aggregates CERT and OVAL as well).
Subscribe to and read microsoft.public.inetserver.iis.security and Microsoft security alerts. (NIST's NVD already watches CERT.)
Michael Howard is MS's code security guru; read his blog (and make sure your devs read it too). It's at: http://blogs.msdn.com/michael_howard/default.aspx
http://blogs.iis.net/ is the IIS team's blog. As a side note, if you're a Windows guy, always read the team blogs for the MS product groups you work with.
David Litchfield has written several books on DB and web app hardening. He is a man to listen to; read his blog.
If your devs need a gentle introduction to (or reminder about) web security, and sysadmins do too, I recommend "Innocent Code" by Sverre Huseby. I haven't enjoyed a security book like that since The Cuckoo's Egg. It lays down useful rules and principles and explains things from the ground up. It's a great, strong, accessible read.
Have you baselined and audited again yet? (You make a change, you make a new baseline.)
Remember, IIS is a meta-service (FTP, SMTP, and other services run under it). Make your life easier and run one service at a time per box. Back up your IIS metabase.
If you install app servers like Tomcat or JBoss on the same box, ensure that they are secured and locked down too.
Secure the web management consoles to these applications, IIS included.
If you have to have the DB on the box too, this post can be leveraged in a similar way.
Logging: an unwatched public-facing server (be it HTTP, IMAP, or SMTP) is a professional failure. Check your logs, pump them into an RDBMS, and look for the quick, the slow, and the pesky. Almost invariably your threats will be automated and boneheaded. Stop them at the firewall level where you can.
With permission, scan and fingerprint your box using p0f and Nikto. Test the app with Selenium.
Ensure webserver errors are handled discreetly and in a controlled manner by IIS and any applications; set up error documents for 3xx, 4xx, and 5xx response codes.
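On IIS7 that lives in the httpErrors section; a minimal web.config sketch (error page path illustrative):
<configuration>
  <system.webServer>
    <httpErrors errorMode="Custom">
      <remove statusCode="404" subStatusCode="-1" />
      <error statusCode="404" path="/errors/404.html" responseMode="ExecuteURL" />
    </httpErrors>
  </system.webServer>
</configuration>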
Now that you've done all that, you've covered your butt and you can look at application/website vulnerabilities.
Be gentle with the developers; most only worry about this after a breach, when the reputation/trust damage is done and the horse has bolted. Address this now; it's cheaper. Talk to your devs about threat trees.
Consider your response to DoS and DDoS attacks.
On the plus side, consider good traffic/slashdotting and capacity issues.
Liaise with the devs and marketing to handle capacity issues and server/bandwidth provisioning in response to campaigns, sales, or new services. Ask them what sort of campaign response they're expecting.
Plan ahead with sufficient lead time to allow provisioning. Make friends with your network guys to discuss bandwidth provisioning at short notice.
Unavailability due to misconfiguration, poor performance, or under-provisioning is also an issue. Monitor the system for performance: disk, RAM, HTTP, and DB requests. Know the metrics of normal and expected performance (please God, is there an apachetop for IIS? ;) ) and plan for appropriate capacity.
During all this you may ask yourself: "Am I too paranoid?" Wrong question: it's "Am I paranoid enough?" Remember and accept that you will always be behind the security curve, and that while this list might seem exhaustive, it is but a beginning. All of the above is prudent and diligent and should in no way be considered excessive.
Webservers getting hacked are a bit like wildfires (or bushfires here): you can prepare, and that will take care of almost everything except the blue-moon event. Plan for how you'll monitor and respond to defacement etc.
Avoid being a security curmudgeon or a security dalek/Chicken Little. Work quietly and work with your stakeholders and project colleagues. Security is a process, not an event, and keeping them in the loop and gently educating people is the best way to get incremental payoffs in terms of security improvements and acceptance of what you need to do. Avoid being condescending, but remember: if you DO have to draw a line in the sand, pick your battles; you only get to do it a few times.
profit!
Your biggest problem will likely be application security. Don't believe the developer when he tells you the app pool identity needs to be a member of the local Administrators group. This is a subtle twist on the 'don't run services as admin' tip above.
Two other notable items:
1) Make sure you have a way to backup this system (and periodically, test said backups).
2) Make sure you have a way to patch this system and, ideally, test those patches before rolling them into production. Try not to depend upon your own good memory. I'd rather have you set the box to use Windows Update than have it disabled, though.
Good luck. The firewall tip is invaluable; leave it enabled and only allow tcp/80 and tcp/3389 inbound.
Use the roles accordingly; the fewer privileges you grant your service accounts, the better.
Try not to run everything as an administrator.
If you are trying to secure a web application, you should keep current with information from OWASP. Here's a blurb:
The Open Web Application Security Project (OWASP) is a 501(c)(3) not-for-profit worldwide charitable organization focused on improving the security of application software. Our mission is to make application security visible, so that people and organizations can make informed decisions about true application security risks. Everyone is free to participate in OWASP and all of our materials are available under a free and open software license. You'll find everything about OWASP here on our wiki and current information on our OWASP Blog. Please feel free to make changes and improve our site. There are hundreds of people around the globe who review the changes to the site to help ensure quality. If you're new, you may want to check out our getting started page. Questions or comments should be sent to one of our many mailing lists. If you like what you see here and want to support our efforts, please consider becoming a member.
For your deployment (server configuration, roles, etc.), there have been a lot of good suggestions, especially from Bob and Jeff. For some time attackers have been using backdoors and trojans that are entirely memory-based. We've recently developed a new type of security product which validates server memory (using techniques similar to how Tripwire (see Bob's answer) validates files).
It's called BlockWatch, primarily designed for use in cloud/hypervisor/VM deployments, but it can also validate physical memory if you can extract it.
For instance, you can use BlockWatch to verify that your kernel and process address-space code sections are what you expect (the legitimate files you installed to your disk).
Block incoming ports 135, 137, 138, 139, and 445 with a firewall. The built-in one will do. Windows Server 2008 is the first one for which using RDP directly is as secure as SSH.
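With the built-in firewall, a pair of rules along these lines covers those ports (rule names arbitrary; NetBIOS uses UDP as well as TCP on 137/138):
netsh advfirewall firewall add rule name="Block NetBIOS/SMB (TCP)" dir=in action=block protocol=TCP localport=135,137-139,445
netsh advfirewall firewall add rule name="Block NetBIOS/SMB (UDP)" dir=in action=block protocol=UDP localport=135,137-139,445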