Is ServiceWorker intended to replace Appcache, or is the intention that the two will coexist? Phrased another way, is appcache about to become deprecated?
Blink's Service Worker team is keen on deprecating AppCache (we will follow our usual intent-to-deprecate process). We believe that Service Worker is a much better solution, and it should be fairly easy to offer a drop-in replacement for AppCache built on top of Service Workers. We'll start by collecting usage metrics and doing some outreach.
AppCache and Service Worker should coexist without any issue since offering offline support via AppCache for browsers that don't support Service Workers is a valid use case.
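A minimal sketch of that coexistence approach: feature-detect Service Worker support and register one, letting browsers without support fall back to an AppCache manifest declared on the page (the file name below is a placeholder):

```js
// Use a Service Worker where available; browsers without support
// keep using the AppCache manifest declared in <html manifest="...">.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then((reg) => console.log('SW registered, scope:', reg.scope))
    .catch((err) => console.error('SW registration failed:', err));
}
```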
@flo850 If it's not working, please let us know by filing a bug.
I must say that Service Worker is not only a replacement for AppCache, it's far more capable. An AppCache can't be partially updated, triggering updates via a byte-by-byte manifest comparison is odd, and several of its behaviors lead to security and terrible usability problems.
Even Chrome and Firefox are planning to stop supporting AppCache in the near future, now that Service Workers are supported by Chrome, Opera, and Firefox. The signals coming from Microsoft and Safari have also been positive: implementation is under consideration.
As a cache tool, Service Worker will coexist with AppCache, since AppCache works on virtually every browser.
But Service Workers are a solid foundation that will enable new capabilities like push (even when the browser is in the background), geofencing, and background synchronization.
From MDN:
This feature has been removed from the Web standards. Though some browsers may still support it, it is in the process of being dropped. Avoid using it and update existing code if possible; see the compatibility table at the bottom of this page to guide your decision. Be aware that this feature may cease to work at any time.
Because it was replaced by Service Workers.
AppCache has a poorly designed API, and Service Workers can be used much more flexibly.
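To make the comparison concrete, here is a minimal sketch of an offline-caching Service Worker covering roughly what an AppCache manifest used to do; the cache name and asset list are placeholders:

```js
// sw.js — cache an app shell at install time, then serve from the
// cache first, falling back to the network.
const CACHE_NAME = 'offline-v1';
const ASSETS = ['/', '/styles.css', '/app.js'];

self.addEventListener('install', (event) => {
  // Pre-cache the app shell, like an AppCache manifest would.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(ASSETS))
  );
});

self.addEventListener('fetch', (event) => {
  // Cache-first strategy with a network fallback.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```

Unlike AppCache, this logic is ordinary code, so it can be updated partially and per-resource rather than via a whole-manifest comparison.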
What are the existing client-side architectures to access a local Smart Card through a PC/SC Smart Card reader (ISO 7816-3, ISO 14443) from a generic browser (connected to a server through http(s)), preferably from Javascript, with the minimum installation hassle for the end user? The server needs to be able to at least issue APDUs of its choice to the card (or perhaps delegate some of that to client-side code that it generates). I am assuming the availability on the client side of a working PC/SC stack, complete with Smart Card reader. That's a reasonable assumption at least on Windows since XP, modern OS X, and Unixes.
I have so far identified the following options:
Some custom ActiveX. That's what my existing application uses (we developed it in-house). Deployment is quite easy for clients with IE once they get the clearance to install the ActiveX, but it does not match the "generic browser" requirement.
Update: ActiveX is supported only by the now-deprecated IE, up to and including IE11, but not by Edge.
Some PC/SC browser extension using the Netscape Plugin API, which seems like a smooth extension of the above. The only ready-made one I located is SConnect (webarchive). It's no longer promoted (Update: though still actively maintained and used late 2020 in at least one application), its API documentation (webarchive) is no longer officially available, and it has strong ties to a particular Smart Card and reader vendor. The principle may be nice, but making such a plugin for every platform would be a lot of work.
Update: NPAPI support is dropped by many browsers, including Chrome and Firefox.
A Java Applet, running on top of Oracle's JVM (1.6 or better), which comes with javax.smartcardio. That's fine from a functional point of view and well documented, and I can live with the few known bugs, but I'm afraid of an irresistible downward spiral in the acceptance of Java-as-a-browser-extension.
[Update, Feb 2021]: This answer considered the WebUSB API a promising solution in 2015, then reported in 2019 that it can't work or was abandoned. I asked a question about it there.
Any other idea?
Also: is there some way to prevent abuse of whatever PC/SC interface the browser has by a rogue server (e.g. presenting 3 wrong PINs to block a card, just for the nastiness of it, or doing something even more evil)?
The fact is that browsers can't talk to (cryptographic) smart cards for purposes other than establishing SSL.
You will need additional code, executed by the browser, to access smart cards.
There are dozens of custom and proprietary plugins (using all three options you mentioned) for various purposes (signing being the most popular, I guess), built because there is no standard or universally accepted way, at least in Europe and I'm sure elsewhere as well.
Creating, distributing, and maintaining your own will be quite an undertaking, because browsers release every month or so, and every new release changes sandboxing or UI tricks, so you may need to adjust your code quite often.
And you will probably want GUI capabilities, at least for asking the user's permission to access a card or some functionality on it.
For creating a multi-platform, multi-browser plugin, something like FireBreath could be used.
Personally, I don't believe that exposing PC/SC to the web is any good. PC/SC is by nature quite a low-level protocol; exposing it would be much like exposing block-level access to your disk and hoping that "applications on the web are mine only and they behave well" (this should answer your "Also"). At the same time, a thin shim like SConnect is the easiest to create for providing a javascript plugin.sendAPDU()-style interface (or just wrap the whole PC/SC API and let the javascript caller take care of the same level of details as in the native PC/SC use case).
Creating a plugin for this purpose is usually driven by acute current deficiencies.
Addressing the future (mobile etc.) is another story, where things like W3C WebCrypto and the OpenMobile API will probably finally create something that exposes client-side key containers to web applications. If your target with smart cards is cryptography, my suggestion is to avoid PC/SC and use platform services (CryptoAPI on Windows, Keychain on OS X, PKCS#11 on Linux).
Any kind of design has requirements. This all applies if you're thinking of using keys rather than arbitrary APDU-s. If your requirement is to send arbitrary APDU-s, do create a plugin and just go with it.
Update (8/2016): A new API for the Web called WebUSB API is being discussed. You can already use it with Chrome v54+.
This standard will be implemented in all major browsers and will replace the need for third-party applications or extensions for Smart Cards :-)
So the new answer is YES!
And the OSI-like architecture stack is:
PC/SC
CCID v1.1
WebUSB API
USB driver, i.e. libusb.
2019 Update: As @vlp commented, it seems that it no longer works in Chrome, because they decided to block WebUSB for smartcards for some specious reasons :-(
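For reference, a minimal sketch of what driving a CCID reader through WebUSB looked like, before Chrome's blocklist shut the door. The configuration, interface, and endpoint numbers are assumptions that depend on the specific reader:

```js
// 0x0B is the USB "smart card" (CCID) interface class.
async function sendApduOverWebUsb() {
  const device = await navigator.usb.requestDevice({
    filters: [{ classCode: 0x0b }],
  });
  await device.open();
  await device.selectConfiguration(1); // configuration, interface, and
  await device.claimInterface(0);      // endpoint numbers are guesses

  // A real implementation would wrap the APDU in a CCID
  // PC_to_RDR_XfrBlock message; the bytes here are a placeholder.
  await device.transferOut(2, new Uint8Array([/* CCID message bytes */]));
  const result = await device.transferIn(2, 64);
  return new Uint8Array(result.data.buffer);
}
```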
Note: Google announced that they will abandon Chrome Apps in 2017.
Previous answer:
Now (2015) you can create a Google Chrome App, using the chrome.usb API.
Then you access the smartcard reader via its CCID-compliant interface.
It's not cross-browser but JavaScript programmable & cross-platform.
Anyway, the Netscape Plugin API (NPAPI) is not supported any more by modern browsers, and Java applets are being phased out by browser vendors.
I have just released a beta plugin addressing this problem.
This beta code is available here:
https://github.com/ubinity/webpcsc-firebreath
This plugin is based on the FireBreath framework and has been beta-tested with Firefox and Chrome under Linux/WinXP/Win7. Source code and extension pack are provided.
The basic idea is to provide access to the PCSC-Lite API and then develop a more friendly JS API on top of it.
This plugin is under active development, so feel free to send any report and request.
For your first question I have little hope: either you are satisfied with a very small subset of smart card functionality (like signing e-mail or PDFs), in which case you may use some ready-made software (like a PKCS#11 module), ideally maintained by the smart card company, or you want broader functionality and need to invest considerable effort of your own. Surely PC/SC is the starting point to choose.
At least for your "also:" there is some hope.
1) Note that some specifications (e.g. ICAO / German BSI TR-03110) require a method where the PIN is not blocked, but the card takes a substantial amount of time to reply as soon as the error counter hits 1. The final attempt must be enabled using a different command; otherwise no further comparison and no error counter adjustment is done.
2) Simply protect the Verify command by requiring secure messaging. Sensitive applications use secure messaging for everything: first a session key is negotiated, which is then applied to all succeeding commands and responses. The effect is that the command is rejected due to incorrect MACs long before any comparison or modification of the error counter is done.
There is another browser plugin, similar to the one proposed by @cslashm, available at http://github.com/cardid/WebCard. It is also open source and can be installed with "minimum installation hassle" as required in the original question. You can see an example of use by visiting http://plugin.cardid.org
WebCard has been tested in IE 8 through 11, Chrome, and Firefox on Windows, and in Chrome and Safari on Mac OS X. Since it is just a wrapper for PC/SC, on Mac OS X it requires the installation of SmartCard Services from http://smartcardservices.macosforge.com
As Chrome and Firefox are going to stop supporting NPAPI plugins, there is no secure solution available for maintaining a smart card reading session, unless your card's certificate supports mutual SSL. I answered a similar question (source); it might help.
It's dirty, but if it's acceptable/viable to install a bridge daemon/service on the client machine, then you can write a local bridge service (e.g. in Python with pyscard) that exposes the smartcard via a REST interface, then have JavaScript in the browser mediate between that local service (facade) and the remote server API, as sketched below.
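A sketch of the browser-side mediation under that setup; the URLs and JSON shapes are invented for illustration, and a real deployment would also have to deal with CORS and mixed-content restrictions:

```js
// Relay one APDU between the remote server and a hypothetical
// local bridge daemon listening on 127.0.0.1.
async function relayApdu() {
  // 1. Ask the remote server which APDU it wants to send.
  const { apduHex } =
    await (await fetch('https://server.example/next-apdu')).json();

  // 2. Forward it to the local bridge, which talks PC/SC to the card.
  const bridgeResponse = await fetch('http://127.0.0.1:8765/transmit', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ apdu: apduHex }),
  });
  const { responseHex } = await bridgeResponse.json();

  // 3. Return the card's response to the remote server.
  await fetch('https://server.example/card-response', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ response: responseHex }),
  });
}
```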
Web Serial API (draft) can be used to communicate with a serial smart card reader from some browsers.
Buyer beware: This API is a draft and may be changed/abandoned at any time.
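A minimal sketch of that flow, assuming a reader that speaks a simple command/response protocol over serial; the baud rate and command bytes are placeholders that depend on the device:

```js
// Draft Web Serial API: ask the user to pick a port, then exchange
// raw bytes with the reader.
async function talkToSerialReader() {
  const port = await navigator.serial.requestPort();
  await port.open({ baudRate: 9600 });

  const writer = port.writable.getWriter();
  await writer.write(new Uint8Array([/* reader-specific command */]));
  writer.releaseLock();

  const reader = port.readable.getReader();
  const { value } = await reader.read(); // raw response bytes
  reader.releaseLock();
  return value;
}
```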
Speaking of Chrome, you can now use the Smart Card Connector app provided by Google, which bundles the PC/SC-Lite port and the generic CCID driver.
The app itself works through the chrome.usb API that was mentioned by previous commenters.
So, instead of rewriting the whole stack (starting from the lowest level, raw USB), developers can now code only the part that works on top of the PC/SC API, which is exposed by the Connector app.
Clients, clients, clients... plugins... JS APIs...
Well.
One thing we know for certain: all browsers, when communicating with an Apache or IIS server, actually sign "something" when an https/SSL handshake is required.
For instance, a typical Apache configuration like this:
SSLVerifyClient require
SSLVerifyDepth 10
SSLOptions +FakeBasicAuth +StdEnvVars +ExportCertData +OptRenegotiate
initiates a PIN-pad pop-up, and the user must enter the smartcard PIN to go on.
Well, my idea is: why not turn this around on the server side and tweak that behaviour, in order to upload a bytestream of stuff to sign when a handshake is initiated?
I have a setup where a smartcard reader is scanned to log in a user. The PC/SC library works great on desktop. Somebody had mentioned using the Emscripten compiler (https://github.com/kripken/emscripten), which compiles C++ into JavaScript code, but that didn't work well because some of the functions used by PC/SC are only available server-side.
After much research, I finally gave up on a client-side solution; the Chrome WebUSB API also couldn't recognize the reader.
I then decided to give SignalR a try and set up a hub on the PC connected to the smartcard reader, and this approach worked out very well.
Ok, so it isn't a huge worry yet as it is only supported by a few browsers:
Mozilla Firefox: Supported
Google Chrome: Supported since version 13 (uses an alternate syntax)
Safari: Currently not supported
Internet Explorer: Currently not supported
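For context, the hint itself is just a link element, declared in markup or injected from script; the URL below is a placeholder, and the rel values follow the support list above:

```js
// Inject a prefetch/prerender hint at runtime. Chrome's "alternate
// syntax" is rel="prerender"; Firefox uses rel="prefetch".
const hint = document.createElement('link');
hint.rel = 'prerender';
hint.href = 'https://example.com/likely-next-page.html';
document.head.appendChild(hint);
```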
However, prefetch makes me twitch. If the user lands on your page and bounces off to another site, have you paid for the bandwidth of them visiting your prefetch links?
Isn't there a risk of developers prefetching every link on the page, which in turn would make the website a slower experience for the user?
It looks like it can alter analytics. Will people be forcing page views onto users via prefetch?
Security: you won't know what pages are being prefetched. Can it prefetch malicious files?
Will all this prefetching be painful for mobile users with limited data?
I can't call myself an expert on the subject, but I can make these observations:
Prefetch should be considered only where it is known to be beneficial. Enabling prefetch on everything would just be silly. It's essentially a balance of server load vs user experience.
I haven't looked into the HTML5 prefetching spec, but I would imagine they've specified a header that states "this request is being performed as part of prefetching", which could be used to fix the analytics problem - i.e. "if this is a prefetch, don't include it in analytics stats".
From a security standpoint, one would expect prefetch to follow the same cross-domain rules as Ajax does. This would mitigate any cases where XSS is an issue.
Mobile browsers that support HTML5 prefetch should be smart enough to turn it on when using WiFi, and off when using potentially expensive or slow forms of network connection, e.g. 2G/3G.
As I've stated, I can't guarantee any of the above things, but (like with any technology) it's a case of best practices. You wouldn't use Cache-Control to force every page on your site to be cached for a year. Nor would you expect a browser to satisfy a cross-domain Ajax request. Hopefully the same considerations were/will be taken for prefetching.
To answer the question of analytics and statistics, the spec has the following to say:
To ensure compatibility and improve the success rate of prerendering requests the target page can use the [PAGE-VISIBILITY] to determine the visibility state of the page as it is being rendered and implement appropriate logic to avoid actions that may cause the prerender to be abandoned (e.g. non-idempotent requests), or unwanted side-effects from being triggered (e.g. analytics beacons firing prior to the page being displayed).
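In other words, the page being prerendered is expected to check its own visibility state and hold back side effects until it is actually shown. A minimal sketch, where sendAnalyticsBeacon is a hypothetical stand-in for your analytics call:

```js
// Defer the page-view beacon until the page is actually visible,
// so a prerendered copy doesn't inflate the analytics stats.
function firePageView() {
  sendAnalyticsBeacon('/analytics/pageview'); // hypothetical helper
}

if (document.visibilityState === 'prerender') {
  document.addEventListener('visibilitychange', function onVisible() {
    if (document.visibilityState === 'visible') {
      document.removeEventListener('visibilitychange', onVisible);
      firePageView();
    }
  });
} else {
  firePageView();
}
```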
With many browsers adding proper local storage support (and with this whole HTML5 buzz), there is a lot of talk about offline web apps competing with desktop software. But, as a matter of fact, one quick "clear private data" on your browser (which a lot of people do) clears all the local storage data.
I'm now thinking that local storage in browsers can at best be used to cache data temporarily before it is synced with the web server, but truly offline web applications can't rely on HTML5's local storage permanently, due to the problem I outlined above.
Is there a scope for offline web applications that actually depend on data extensively?
My take on this is that the offline capability of online web apps can compete with desktop software, but not pure offline web-apps.
Why? Well, the major drawback of online web apps was what happens when you lose your network connection while doing some work. Seeing as this can be resolved now, the competition is truly on. Imagine editing a document online, then moving around without internet, coming back online, syncing the changes, and continuing to work as if nothing happened. That is truly awesome.
For this to work, the browser would have to allow storing data in a location of your choosing, which would mean access to the OS layer; that will probably not happen anytime soon...
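For what it's worth, here is a rough sketch of the queue-locally-then-sync idea from the answer above, using plain localStorage; the /api/sync endpoint and payload shape are made up for illustration:

```js
// Queue edits while offline, flush them when connectivity returns.
const QUEUE_KEY = 'pendingEdits';

function queueEdit(edit) {
  const queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
  queue.push(edit);
  localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
}

async function flushQueue() {
  const queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
  if (queue.length === 0) return;
  await fetch('/api/sync', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(queue),
  });
  localStorage.removeItem(QUEUE_KEY); // clear only after a successful sync
}

// Try to sync whenever the browser reports it's back online.
window.addEventListener('online', flushQueue);
```

Of course, this still falls to the "clear private data" problem: the queue lives in the same storage the user can wipe.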
It looks like Websockets in HTML 5 will become a new standard for server push.
Does that mean the server push hack called Comet will be obsolete?
Is there a reason why I should learn how to implement comet when Websockets soon (1-2 years) will be available in all major browsers?
Then I could just use Beaconpush or Pusher instead till then right?
There are 2 pieces to this puzzle:
Q: Will the client-side portion of "comet" be necessary?
A: Yes. Even in the next 2 years, you're not going to see full support for WebSockets in the "major" browsers. IE8, for example, doesn't support it, nor does the current version of Firefox. Given that IE6 was released in 2001 and is still around today, I don't see WebSockets replacing comet completely anytime soon.
Q: Will the server-side portion of "comet" be necessary?
A: Yes. Comet servers are designed to handle long-lived HTTP connections, where "typical" web servers do not. Even if the client side supports WebSockets, the server side will still need to be designed to handle the load.
In addition, as "gustavogb" mentioned, at least right now WebSockets aren't properly supported in a lot of HTTP Proxies, so until those all get updated as well, we'll still need some sort of fallback mechanism.
In short: comet, as it exists today, is not going away any time soon.
As an added note: the versions of WebSockets that currently ARE implemented in Chrome and Safari are two different drafts, and work on the "current" draft is still under very heavy development, so I don't even believe it is realistic to say that WebSockets support is functional at the moment. As a curiosity or for learning, sure, but not as a real spec, at least not yet.
[Update, 2/23/11]
The currently shipping version of Safari has a broken implementation (it doesn't send the right header), Firefox 4 has just deprecated WebSockets, so it won't ship enabled, and IE9 isn't looking good either. Looks like Chrome is the only one with a working, enabled version of a draft spec, so WebSockets has a long way to go yet.
Does that mean the server push hack called Comet will be obsolete?
WebSockets are capable of replacing Comet, AJAX, Long Polling, and all the hacks to workaround the problem when web browsers could not open a simple socket for bi-directional communications with the server.
Is there a reason why I should learn how to implement comet when WebSockets soon will be available in all major browsers?
It depends what "soon" means to you. No version of Internet Explorer (pre IE 9) supports the WebSockets API yet, for example.
UPDATE:
This was not intended to be an exhaustive answer. Check out the other answers, and @jvenema's in particular, for further insight into this topic.
Consider using a web socket library/framework that falls back to comet in the absence of browser support, as sketched below.
Check out Orbited and Hookbox.
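The fallback idea those libraries implement boils down to a feature check; a rough sketch, with hypothetical /stream endpoints:

```js
// Use WebSockets when the browser supports them, otherwise fall
// back to long polling (comet).
function connect(onMessage) {
  if ('WebSocket' in window) {
    const ws = new WebSocket('ws://example.com/stream');
    ws.onmessage = (event) => onMessage(event.data);
  } else {
    // Long polling: the server holds the request open until it has
    // data; the client immediately reconnects after each response.
    (function poll() {
      const xhr = new XMLHttpRequest();
      xhr.open('GET', '/stream/poll', true);
      xhr.onload = function () {
        onMessage(xhr.responseText);
        poll();
      };
      xhr.send();
    })();
  }
}
```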
In the medium term, websockets won't replace comet solutions, not only because of lack of browser support but also because of HTTP proxies. Until most HTTP proxies are updated to support websocket connections, web developers will have to implement alternative solutions based on comet techniques to work in networks "protected" by this kind of proxy.
In the short/medium term, websockets will be just an optimization to use when available; you will still need to implement long-polling (comet) to fall back on when websockets are not available, if you need to make your website accessible to a lot of customers with networks/browsers not under your control.
Hopefully this will be abstracted by javascript frameworks and will be transparent for web developers.
Yes, because "soon" is a very slippery term. Many web shops still have to support IE6.
No, because a rash of comet frameworks and servers has emerged in recent times that will soon make it largely unnecessary for you to get your hands dirty in the basement.
Yes, because "soon" is a very slippery term...
Charter for the IETF working group tasked with websockets, BiDirectional or Server-Initiated HTTP (hybi):
Description of Working Group
HTTP has most often been used as a request/response protocol, leading to clients polling for new data, or users hitting the refresh button in their browsers. Recent web applications are finding ways to communicate with web servers in realtime, pushing data from the server-side to the client as soon as it is available. However, these applications at present can only use a variety of HTTP mechanisms (e.g. long polling requests) to communicate with web servers bidirectionally.
The Hypertext-Bidirectional (HyBi) working group will seek standardization of one approach to maintain bidirectional communications between the HTTP client, server and intermediate entities, which will provide more efficiency compared to the current use of hanging requests.
HTTP still has a role to play; it's a flexible, message-oriented system. Websockets was developed to provide bidirectionality and avoid the long-polling issue altogether, and it does this well, but it's simpler than HTTP, and there are a lot of things about HTTP that remain useful. There will certainly be continued progress enriching HTTP's bidirectional communication, be it comet or other push technologies. My own humble attempt is http://github.com/rektide/pipe-layer.