Will websockets in HTML 5 replace ajax for partial page refreshes?

I've just stumbled upon the websockets feature coming in HTML 5, here. At first glance it seems that once Firefox and IE get on board with the spec ajax may be redundant. My question is, in your opinion will ajax (using jquery $.ajax() or even with straight XMLHttpRequest/other) be replaced by this new ws:// protocol?
If so, when should we start changing our development methodologies?

Websockets address a different need than XMLHttpRequest. The latter is exactly what its name says: a request. You know that you need something (e.g. because the user clicked a link, scrolled, or whatever) and you retrieve it, and XHR does a fine job of just that.
Trouble starts when certain events can be triggered on the server side that are supposed to be pushed to the client in realtime. The only thing you can do right now is poll the server on a regular basis, which is a hack that comes with its own set of problems. This is exactly the problem that websockets are made for: to provide a backchannel to the browser for realtime notifications (see the sketch below).
I think Ajax XOR Websockets is kind of a false dichotomy. They address different needs and can coexist peacefully.
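To make the contrast concrete, here is a minimal sketch of both approaches (handleNotifications and the /notifications endpoint are made up for illustration):

    // Polling: ask the server every 5 seconds whether anything happened.
    setInterval(function () {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/notifications');
        xhr.onload = function () {
            handleNotifications(JSON.parse(xhr.responseText));
        };
        xhr.send();
    }, 5000);

    // WebSocket: open one connection and let the server push each event
    // the moment it occurs.
    var ws = new WebSocket('ws://example.com/notifications');
    ws.onmessage = function (event) {
        handleNotifications(JSON.parse(event.data));
    };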

When browsers start to implement web sockets.

I'm somewhat skeptical. The technology behind Ajax (XMLHttpRequest) first appeared in IE back in 1999, yet it only gained popularity around 2005, after the launch of Gmail. And we've not even reached the equivalent starting point with websockets, where all major browsers support them (add another couple of years).
There's a simple reason behind that delay in Ajax adoption: websites needed to support older browser versions. (Remember how many people still use IE 6 and 7?)

Related

Advantages of the html5Mode?

I've read quite a few posts about the AngularJS html5Mode, including this one, and I'm still puzzled:
What is it good for? I can see that I can use http://example.com/home instead of http://example.com/#/home, but the two chars saved are not worth mentioning, are they?
How is this related to HTML5?
The answer links to a page showing how to configure a server. It seems like the purpose of this rewriting is to make the server always return the same page, no matter what the URL looks like. But doesn't that lead to needlessly increased traffic?
Update after Peter Lyons's answer
I started to react in a comment, but it grew too long. His long and valuable answer raises some more questions of mine.
option of rendering the actual "/home"
Yes, but that means a lot of work.
crazy escaped fragment hacks
Yes, but this hack is easy to implement (I did it just a few hours ago). I actually don't know what I should do in the case of html5Mode (as I haven't finished reading this SEO article yet).
Here's a demo
It works neither in my Chromium 25 nor in my Firefox 20. Sure, they're both ancient, but everything else I needed works in both of them.
Nope, it's the opposite. The server ONLY gets a full page request when the user first clicks
But the same holds for the hashbang, too. Moreover, a user following an external link to http://example.com/#!/home and then another link to http://example.com/#!/foreign will always be served the same page via the same URL, while in html5Mode they'll be served the same page (unless the burdensome optimization you mentioned gets done) via a different URL (which means it has to be loaded again).
but the two chars saved are not worth mentioning, are they?
Many people consider the URL without the hash considerably more "pretty" or "user friendly". Also, a very big difference is that when you browse to a URL with a hash (a "fragment"), the browser does NOT include the fragment in its request to the server, which means the server has a lot less information available to deliver the right content immediately. Compare that to a regular URL without any fragment, where the full path "/home" IS included in the HTTP GET request to the server: the server then has the option of rendering the actual "/home" content directly, instead of sending the generic "index.html" content and waiting for JavaScript on the browser to update it once it loads and sees the fragment is "#home".
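To make that concrete, here is roughly what the server sees in each case (an illustration, not an exact request dump):

    GET / HTTP/1.1            <- for http://example.com/#/home
                                 (the "#/home" fragment is never sent)
    GET /home HTTP/1.1        <- for http://example.com/home
                                 (the full "/home" path reaches the server)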
HTML5 mode is also better suited for search engine optimization without any crazy escaped fragment hacks. My guess is this is probably the largest contributing factor to the push for HTML5 mode.
How is this related to HTML5?
HTML5 introduced the necessary javascript APIs to change the browser's location bar URL without reloading the page and without just using the fragment portion of the URL. Here's a demo
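The relevant calls are history.pushState (to change the URL) and the popstate event (to handle the back button). A minimal sketch, where renderPage is a hypothetical rendering function of the app:

    // Change the location bar to /home without reloading the page.
    history.pushState({ page: 'home' }, '', '/home');

    // React to back/forward navigation within the app.
    window.addEventListener('popstate', function (event) {
        // event.state is whatever was passed to pushState, e.g. { page: 'home' }
        renderPage(event.state);  // hypothetical: re-render the view for that state
    });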
It seems like the purpose of this rewriting is to make the server always return the same page, no matter what the URL looks like. But doesn't that lead to needlessly increased traffic?
Nope, it's the opposite. The server ONLY gets a full page request when the user first clicks a link onto the site OR does a manual browser reload. Otherwise, the user can navigate around the app clicking like mad and in some cases the server will see ZERO traffic from that. More commonly, each click will make at least one AJAX request to an API to get JSON data, but overall the approach serves to reduce browser<->server traffic. If you see an app responding instantly to clicks and the URL is changing, you have HTML5 to thank for that, as compared to a traditional app, where every click includes a certain minimum latency, a flicker as the full page reloads, input forms losing focus, etc.
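The rewriting itself can be tiny. A sketch of the idea using Express (just one of many ways; Apache or nginx rewrite rules achieve the same thing):

    var express = require('express');
    var app = express();

    // Serve static assets and the JSON API as usual...
    app.use('/assets', express.static(__dirname + '/assets'));

    // ...and return the same index.html for every other URL, so that
    // /home, /foo and friends all boot the single-page app.
    app.get('*', function (req, res) {
        res.sendFile(__dirname + '/index.html');
    });

    app.listen(3000);

The catch-all only fires on full page loads (first visit, manual reload); everything after that is handled in the browser.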
It works neither in my Chromium 25 nor in my Firefox 20. Sure, they're both ancient, but everything else I needed works in both of them.
A good implementation will use HTML5 when available and fall back to fragments otherwise, but work fine in any browser. But in any case, the web is a moving target. At one point, everything was full page loads. Then there was AJAX and single page apps with fragments. Now HTML5 can do single page apps with fragmentless URLs. These are not overwhelmingly different approaches.
My feeling from this back and forth is that you want someone to declare for you that one of these is canonically more appropriate than the other, and it's just not like that. It depends on the app, the users, their devices, etc. Twitter was all about fragments for a good long while, and then they realized their mobile users were seeing too much latency and "time to first tweet" was too long, so they went back to server-side rendering of HTML with real data in it.
To your other point, about rendering on the server being "a lot of work": it's true, but some consider it the "holy grail" of web app development. Look at what Airbnb has done with their
rendr framework. See also Derby JS. My point being, if you decide you want rendering in both the browser and the server, you pick a framework that offers that. Not that you have a lot of options to choose from at the moment, granted, but I wouldn't advise hacking together your own.

Content Security Policy: If set, cannot load script from bookmarklet. Is a browser extension granted clearance?

I'm working on browser automation tools (working at the JS level). It's pretty clear that loading external script can be considered an XSS attack. A few months ago I was able to run my scripts on Github.com so long as I served my js resources over HTTPS.
But this is no longer the case, i.e. Github has implemented an elegant standards-compliant barrier to this:
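(The barrier is a response header along these lines; this is only an illustration, not the exact policy Github sends:)

    Content-Security-Policy: default-src 'self'; script-src 'self' https://assets.example.com

A bookmarklet that injects a script tag pointing at any origin outside that whitelist simply gets refused by the browser.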
This is a great step forward I think: we can specify to the clients that we want them to put a more secure perimeter around our site's sandbox.
On the other hand, it makes the options more limited on mobile platforms, though that's not entirely true because it's entirely possible to produce a standalone browser app which has these extension features built in. Not exactly gonna be easy to accomplish compared to a browser extension, though.
Is it still possible to work around this with a (codesigned and reviewed) browser extension? What sort of user experience impact might this have? My hope is that it will be possible to set this up so that end users only have to go through a short one-time setup. It's apparent to me that at least Google is making it so that extensions published through their portal are distributed at least "reasonably" securely, and I imagine Apple (and eventually Microsoft) will follow suit for Safari and IE. I am only interested in Chrome and Safari for now (primarily Chrome).
If it turns out that somehow even extensions are subject to the content security policy, how might I write an extension that can reliably manipulate a page for me? I'm fairly sure this can't be the case as it would be the death of something like Tampermonkey.
Oh I just needed to read a little further (oh Github, you're awesome):
https://github.com/blog/1477-content-security-policy
The answer is yes! User configured scripts should always be granted clearance! (but we are off to a rocky start it seems)
I actually think there's significant opportunity for social engineering here: "Install this bookmark in your browser to use our cute emoticons in forums!" "Oh, bookmarks can't be viruses, right?"
As a workaround, you can tell your bookmarklet to load an external CSS stylesheet with your JS code injected. This bypasses CSP. Have a look at my answer to a similar question.

Is Modernizr a replacement for browser capability tools such as BrowserHawk?

Every now and again I get asked to install something like this on a customer web server, or we're asked if we support BrowserHawk (which we don't).
I'm wondering if Modernizr is something I can point my customers at and tell them to use instead?
I've not used BrowserHawk (in fact, I'd never heard of it until now), so please don't take my opinion as infallible.
However, I do know about browscap.ini, and having taken a few moments to read the BrowserHawk website, I'm fairly certain it's also a server-side browser detection tool.
If that's the case, then the answer is 'Yes'. Current best practice says to avoid using server-side browser detection, and to use client-side feature detection instead. And this is exactly what Modernizr does.
Feature detection allows you to do much finer-grained tuning of your site according to what the user's browser is capable of, rather than simply blocking users who have (or don't have) a particular browser. It also allows you to implement specific fall-back solutions for specific features, if required.
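For example, instead of asking "is this IE 9?", you ask "does this browser support the feature I need?" (localStorage here is just an illustration):

    if (Modernizr.localstorage) {
        // The feature is available; use it directly.
        localStorage.setItem('visited', 'yes');
    } else {
        // Fall back for browsers without the feature, e.g. to a cookie.
        document.cookie = 'visited=yes';
    }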
Detecting the user's browser from the server-side is a problem because of the rapid pace of change in the browser market; you would need to be constantly updating your browser detection script to cope with new versions.
In addition, users of slightly more unusual browsers or browser shells may not be detected properly by a browser detection script, so they may have trouble with sites that use it, even though their browser should be capable of displaying the site. Also, some users may not provide the user-agent string required to correctly detect their browser: it is blocked by some proxies, firewalls, etc., and some browsers allow it to be modified, so it can easily be spoofed if a user wants to.
But having gone to lengths to promote feature detection over browser detection, I need to point out one exception to all of this, and that's IE.
Older versions of IE have a lot of bugs. This is different from simply having missing features, because you can't actively check for bugs as easily. If you're having specific issues with IE bugs, then it is legitimate to do browser detection to avoid them. (Feature detection is still valid if you're only worried about what the browser supports, rather than actual bugs.)
But even in this case, a tool such as browscap.ini or BrowserHawk is unnecessary. IE helpfully supports Conditional Comments, which allow you to add specific code for IE without having to go out of your way to detect it.
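For example, to pull in a stylesheet of workarounds only for IE versions before 9 (ie-fixes.css being whatever fixes your site needs):

    <!--[if lt IE 9]>
        <link rel="stylesheet" href="ie-fixes.css">
    <![endif]-->

Every other browser treats this as an ordinary comment and ignores it.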

Can you detect the HTML 5 History API on the server side?

Is there a reliable way to detect such browser capability from the user-agent string?
HTML5 isn't a server-side language.
Anyway, there isn't a way to reliably detect UA capabilities; for instance, users could have JavaScript turned off, add-ons installed, etc.
You could use some server-side methods such as PHP's Browser Detect, but aside from that there's nothing more you can really do. This is not at all comprehensive, though!
Everything like this should really be done client side in JavaScript, as you can easily detect what's available and what isn't. There are a number of libraries out there that will do this, but it's very simple to do yourself if you know what you want, so using one shouldn't really be required. Furthermore, you should never do this based on user-agent strings; as I mentioned before, there are add-ons available that can modify behaviour, etc. You should literally just check for the feature you wish to use, rather than restricting yourself to a certain version of a browser.
Not reliably — you’re stuck with figuring out the browser version from the user-agent string, and maintaining a list of which browser versions support the API.
You could, however, detect it on the client side using JavaScript:
Modernizr
Mark Pilgrim’s suggested History API detection code
and then do a redirect via JavaScript (i.e. by setting window.location) to let the server know whether the API is available or not. That would be the usual way to redirect to a URL starting with # (as per your comment on rudi_visser's answer).
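Putting that together, the client-side check is a one-liner, and a redirect can tell the server what it could not learn from the user-agent string (the nohistory parameter is made up for illustration):

    // Mark Pilgrim's style of detection: both objects must exist.
    var supportsHistory = !!(window.history && history.pushState);

    // Tell the server, e.g. by redirecting with a flag it can read.
    if (!supportsHistory) {
        window.location = '/?nohistory=1';
    }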
This is not server side (so it probably does not answer your question, I thought it would be helpful though):
Have you looked at Modernizr?
It is a JavaScript file you include in your HTML page. You can then use its properties to detect whether a particular HTML5 feature is supported by the browser.

Where is the chink in Google Chrome's armor?

While browsing with Chrome, I noticed that it responds extremely fast (in comparison with IE and Firefox on my laptop) in terms of rendering pages, including JavaScript-heavy sites like Gmail.
This is what the Google book on Chrome has to say:
Tabs are hosted in a process rather than a thread.
Compile JavaScript using the V8 engine, as opposed to interpreting it.
Introduce a new virtual machine to support JavaScript-heavy apps.
Introduce "hidden class transitions" and apply dynamic optimization to speed things up.
Replace the inefficient "conservative garbage collection" scheme with a more precise garbage collection scheme.
Introduce their own task scheduler and memory manager to manage the browser environment.
All this sounds so familiar; Microsoft has been doing such things for a long time: the Windows OS, the C++ and C# compilers, the CLR, and so on.
So why isn't Microsoft or any other browser vendor taking Chrome's approach? Is there a flaw in Chrome's approach? If not, is the rest of browser vendor community caught unaware with Google's approach?
Chrome's approach is difficult to write, and requires forethought from the developers. IE and Firefox are both attempting to move to a process-per-tab model, but due to backwards compatibility they are not able to transition quickly. Chrome, being an entirely new browser built on a clean rendering engine (WebKit), was easier to write in this way.
They have crossed over from a web browser as a tool to view web pages, to a tool optimized to work for web applications. There may be some flaws in this initial release, but they are changing the game.
IE8 uses a similar process-per-tab model, though it does not use a single process per tab, but instead spreads all tabs across a process pool.
#pix0r but they added a little thing in the bottom right corner so you can expand the text box any direction you want, which I love because I use a wide display and prefer to type in a wider screen.
That's actually a WebKit feature; Chrome just inherited it.
Virtually all of these features existed in other browsers before Chrome. IE8 had process isolation for tabs. Firefox / Safari had most of the JavaScript stuff. Most browsers do their own memory management.
Chrome has a few unique features (hyperrestricted render processes, etc) which are difficult to put into other browsers due to add-on/application compatibility concerns.
The primary thing Chrome has going for it is an extremely hardcore focus on minimalism and high-performance. By focusing on these as their competitive advantages, they can appeal to users who find this area of focus compelling.
As time passes, I'm sure you will see the homogenization of features as the browsers attempt to one-up each other.
In the meanwhile, I still stick with Firefox over Chrome for the simple reason that Firefox is (i) non-profit and has a (ii) huge addon community.
Addons such as NoScript and AdBlockPlus are almost essential for me.
One chink in Chrome's armor is the fact that it renders these darned textareas on StackOverflow so small that it's making my eyes bleed!
One chink in Chrome's armor is the fact that it renders these darned textareas on StackOverflow so small that it's making my eyes bleed!
Yeah. I mentioned this on UserVoice and it got declined because the current size is evidently the default under WebKit. Every other site I've tried with Chrome that uses textboxes to compose content manages to have a decent-sized font. The default definitely doesn't work, but there's obviously some way to override it. Jeff needs to fix this!
Edit:
Jeff was nice enough to point out how to fix this problem yourself.
#pix0r but they added a little thing in the bottom right corner so you can expand the text box any direction you want, which I love because I use a wide display and prefer to type in a wider screen.
I also wanted to point out that Google built Chrome completely from the ground up (with the exception of using WebKit), so they have the advantage of not having to deal with old code. And of course there are the INSANELY cool/smart developers.
The biggest chink I've found is its lousy proxy support compared to IE, FF and Opera. It's pretty much useless at work: it renders pages at random and requests authentication for the proxy, where the others pass it through seamlessly.
That said, on my home machine it works great; if it wasn't for the OTT EULA I'd use it now.
thing2k
One "flaw" about Chrome is that it uses more memory upfront than all of the other browsers. I'm just guessing that this is due to the overhead associated with all the separate tab management.
After it's been open for some time, however, it doesn't use more memory than other browsers.
Many companies play a game of "What's the least we can do to get a leg up?" Marketing creates a laundry list of features needed to be better than the competitors. Project management ensures engineers stick to those features for fear that the project will exceed the time allocated... which of course it will. There's not a whole lot of room in such a system for a big-picture leap ahead. The incremental improvements you see in products, and browsers, are a consequence.
You have to keep in mind that Microsoft's primary business is rich-environment (GUI) applications. Web tools are a threat to them, as they are platform independent (not promoting their main product).
Of course the IE team has probably figured out something like that, but... Microsoft definitely won't invest a lot of money in IE if what they are selling is a rich application platform.