HTML Imports with the Async flag - strange behaviour in Chrome

I am trying to optimise the loading of Polymer Elements in my Polymer based web app. In particular I am concentrating my effort on the initial startup screens. Users will have to log on if they don't have a valid jwt token held in a cookie.
index.html loads an application element <pas-app>, which in turn loads a session manager (<pas-session>). Since the normal startup will be when the user is already logged on, the element that handles input of user name and password (<pas-logon>) is hidden behind a <template is="dom-if"> element inside <pas-session>, and I have added the async flag to its html import line in that element as well - thus:
<link rel="import" href="pas-logon.html" async>
However, in Chrome (I don't experience this in Firefox, where html imports are polyfilled) this async seems to carry over to the embedded <script> element inside the custom element. In particular I get a TypeError because the script that registers it as a custom element runs while Polymer is not yet a function.
I suspect I am using the wrong kind of async flag - is there a way to specify that the html import should not block the current element, but should still block the scripts inside itself until it has loaded?

I think I had the same problem today and found this question when searching for a solution. When using importHref with async I get errors like [paper-radio-button::_flattenBehaviorsList]: behavior is null, check for missing or 404 import, and dependencies are not loaded in the right order. When I change to async = false the error messages are gone.
It seems that this is a known bug in Polymer, or possibly Chrome: https://github.com/Polymer/polymer/issues/2522
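For anyone else hitting this, the importHref call with the async flag mentioned above looks roughly like this in Polymer 1.x (the file name is borrowed from the question; this is a sketch, not the asker's actual code):
// inside a Polymer element; the last argument is the async flag --
// passing false keeps the dependency ordering intact
this.importHref(this.resolveUrl('pas-logon.html'),
    function () { /* import loaded */ },
    function () { /* import failed */ },
    false);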

Related

First steps with Polymer, elements not displaying correctly (in Plunker!)

I'm just starting with web development, and I'm trying to use some polymer elements:
http://embed.plnkr.co/o4OKkE/
I'm kind of half managing the import. The elements display (in some manner). The paper element works well, apart from the margins. The button is good, the paper-input completely fails, same with tabs. The text/formatting is all default. Does polymer dictate the font etc, or is it managed using CSS separately?
I think I'm not attaching the theme correctly. Can anyone point out the errors?
Edit: Thanks to Neil John Ramal, I've got the basics working without any errors:
http://run.plnkr.co/AD3ETQOsMwajnSBt/
I just can't seem to get the elements to import using polygit, just rawgit.
This here:
works fine. However this produces an error:
Redirect at origin 'http://polygit.org' has been blocked from loading by Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://run.plnkr.co' is therefore not allowed access.
Presumably because Plunker is not allowing redirects and that's how polygit works. How it functions with polymer.html I'm not sure...
You are mixing up your imports. You have to make sure you are importing your components from a single source so no variable/name clashing occurs. In your example, you are importing both from your own repository and from polygit.
Evidence is in the error logs:
VM199 polymer-micro.html:363 Uncaught NotSupportedError: Failed to execute 'registerElement' on 'Document': Registration failed for type 'dom-module'. A type with that name is already registered.
This just means that you have imported polymer.html more than once and from different sources. HTML imports only dedupe if they came from the same source.
Also at your index.html:
<script data-require="polymer#*" data-semver="1.0.0" src="http://polygit.org/components/polymer/polymer.html"></script>
Should be an import link instead (note href rather than src):
<link rel="import" href="//polygit.org/components/polymer/polymer.html">
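For example, if everything comes from polygit, the shared polymer.html import dedupes to a single registration (a sketch with assumed paths, not your exact element list):
<link rel="import" href="//polygit.org/components/polymer/polymer.html">
<link rel="import" href="//polygit.org/components/paper-input/paper-input.html">
<link rel="import" href="//polygit.org/components/paper-tabs/paper-tabs.html">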

Polymer breaks with old version of Mootools

Latest Update (also updated post title)
So I tracked down the issue to the old version of Mootools (which I cannot upgrade or remove due to project restrictions).
Mootools does the following, which is the code that causes the issue:
/*
Class: Abstract
Abstract class, to be used as singleton. Will add .extend to any object
Arguments:
an object
Returns:
the object with an .extend property, equivalent to <$extend>.
*/
var Abstract = function(obj){
    obj = obj || {};
    obj.extend = $extend;
    return obj;
};
//window, document
var Window = new Abstract(window);
var Document = new Abstract(document);
The new definitions of Window and Document are what's breaking Polymer imports. Any suggestions on updating the code above to gracefully extend the Document/Window objects without breaking existing functionality?
OLD description below, from before I discovered the issue lies with Mootools
I've already included the webcomponents.js script.
Then, when I have the <link rel="import"> for polymer.html, the errors below start appearing, and my polymer components don't work.
The components work in isolation using the polymer-cli. Does anyone know what may be causing this issue?
EDIT: So this is what I have in my <head>
<script src="/media/bower_components/webcomponentsjs/webcomponents.js"></script>
<link rel="import" href="/media/bower_components/polymer/polymer.html">
(...sorry I cannot show more, private company code and what-not)
That's literally all I need in my page to raise the error mentioned above.
I'm starting to think there is some other javascript library (there's a lot) that might be interfering with Polymer, since I cannot replicate the issue on a brand new site.
I should also note, there are no 404's. The polymer.html file does get loaded as expected in the browser, I verified this in my network panel in developer console.

add multiple chrome inline installations for different link tags same domain

I have a Chrome extension, and a Chrome app. I need inline install for both of them on the same domain.
As per Google's instructions (for one inline install) I add the header link tag:
<link rel="chrome-webstore-item" href="https://chrome.google.com/webstore/detail/itemID">
Then add the onclick function in the body:
<button onclick="chrome.webstore.install()" id="install-button">Add to Chrome</button>
<script>
if (chrome.app.isInstalled) {
document.getElementById('install-button').style.display = 'none';
}
</script>
What I need to know is how to add two instances. One for the extension, and one for the app. Do I add two link tags in the header, then edit the onclick function?
This is what Google says to do for multiple instances, but I don't understand where to edit the onclick function to differentiate between the two.
To actually begin inline installation, the chrome.webstore.install(url, successCallback, failureCallback) function must be called. This function can only be called in response to a user gesture, for example within a click event handler; an exception will be thrown if it is not. The function can have the following parameters:
url (optional string) - If you have more than one tag on your page with the chrome-webstore-item relation, you can choose which item you'd like to install by passing in its URL here. If it is omitted, then the first (or only) link will be used. An exception will be thrown if the passed in URL does not exist on the page.
successCallback (optional function) - This function is invoked when inline installation successfully completes (after the dialog is shown and the user agrees to add the item to Chrome). You may wish to use this to hide the user interface element that prompted the user to install the app or extension.
failureCallback (optional function) - This function is invoked when inline installation does not successfully complete. Possible reasons for this include the user canceling the dialog, the linked item not being found in the store, or the install being initiated from a non-verified site. The callback is given a failure detail string as a parameter. You may wish to inspect or log that string for debugging purposes, but you should not rely on specific strings being passed back.
I currently have one link tag in my header for the extension. I need to add another inline installation, on a different page, same domain, but this second onclick code needs to be different so it doesn't refer to the existing link tag in my header.
Many thanks.
Add one <link> tag per item and pass the matching item URL to chrome.webstore.install() from each button:
<link rel="chrome-webstore-item" href="https://chrome.google.com/webstore/detail/itemID1">
<link rel="chrome-webstore-item" href="https://chrome.google.com/webstore/detail/itemID2">
<button onclick="chrome.webstore.install('https://chrome.google.com/webstore/detail/itemID1')" id="install-button-1">Add App to Chrome</button>
<button onclick="chrome.webstore.install('https://chrome.google.com/webstore/detail/itemID2')" id="install-button-2">Add Extension to Chrome</button>
The very same docs page shows a method for extensions.
Basically, your extension can inject a <div id="somethingYouExpect"> into the DOM, and the page's script can detect that.
It's a bit clunky though: I was trying to get it to work for test code and didn't manage to do so in a good way, as content scripts are injected either before the DOM is constructed at all or after document ready fires. You can bypass that with mutation observers, but meh, and your button will be visible for a split second.
You can save yourself some pain, if you're just hiding an element, by injecting a CSS file that hides it. Or, you can hide the elements from injected code. Either way is somewhat layout-sensitive though.
If you HAVE to be layout-independent and at the same time want something more complex than element hiding, either go the (div inject + mutation observer) route or try the window.postMessage approach to signal the page to hide the element, as sketched below.
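As a rough sketch of the postMessage route (the message type is made up here, and the element id matches the guide below): the content script announces itself and the page hides its install UI when it hears the message.
// content script (default run_at of document_idle, so the page's listener
// below is already registered when this runs):
window.postMessage({ type: 'MY_EXTENSION_INSTALLED' }, '*');

// page script:
window.addEventListener('message', function (event) {
    if (event.source !== window) return;  // only accept same-window messages
    if (event.data && event.data.type === 'MY_EXTENSION_INSTALLED') {
        document.getElementById('extension-install').style.display = 'none';
    }
});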
Step by step guide for the extension / CSS variant.
Suppose your extension install UI is contained in an element with id extension-install.
Add a content script to the manifest file:
"content_scripts": [
{
"matches": ["*://yourdomain/*"],
"css": ["iaminstalled.css"],
"run_at": "document_start"
}
],
The CSS:
#extension-install {
display: none !important;
}
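On the page side, the install UI this CSS hides could be as simple as the following (markup assumed, item URL as in the earlier snippet):
<div id="extension-install">
    <button onclick="chrome.webstore.install('https://chrome.google.com/webstore/detail/itemID2')">Add Extension to Chrome</button>
</div>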
So, to recap:
To allow installs of both the app and the extension, you need two <link> tags in the head
To install either you pass the url parameter to chrome.webstore.install
If the app is installed, it will define chrome.app.isInstalled in the page's context. You can check for it from the page to hide the install button.
If the extension is installed, it can inject CSS/JS into page to hide the install button.

Browser load HTML [duplicate]

I have done some web based projects, but I haven't thought much about the load and execution sequence of an ordinary web page. Now I need to know the details. It's hard to find answers on Google or SO, so I created this question.
A sample page is like this:
<html>
<head>
<script src="jquery.js" type="text/javascript"></script>
<script src="abc.js" type="text/javascript">
</script>
<link rel="stylesheet" type="text/css" href="abc.css">
<style>h2{font-weight:bold;}</style>
<script>
$(document).ready(function(){
$("#img").attr("src", "kkk.png");
});
</script>
</head>
<body>
<img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
<script src="kkk.js" type="text/javascript"></script>
</body>
</html>
So here are my questions:
How does this page load?
What is the sequence of the loading?
When is the JS code executed? (inline and external)
When is the CSS executed (applied)?
When does $(document).ready get executed?
Will abc.jpg be downloaded? Or does it just download kkk.png?
I have the following understanding:
The browser loads the html (DOM) at first.
The browser starts to load the external resources from top to bottom, line by line.
If a <script> is met, the loading will be blocked and wait until the JS file is loaded and executed and then continue.
Other resources (CSS/images) are loaded in parallel and executed if needed (like CSS).
Or is it like this:
The browser parses the html (DOM) and gets the external resources in an array or stack-like structure. After the html is loaded, the browser starts to load the external resources in the structure in parallel and execute, until all resources are loaded. Then the DOM will be changed corresponding to the user's behaviors depending on the JS.
Can anyone give a detailed explanation of what happens once you've got the response for an HTML page? Does this vary in different browsers? Any reference about this question?
Thanks.
EDIT:
I did an experiment in Firefox with Firebug. And it shows as the following image:
Edit: It's 2022. If you are interested in detailed coverage on the load and execution of a web page and how the browser works, you should check out https://browser.engineering/ (open sourced at https://github.com/browserengineering/book)
According to your sample,
<html>
<head>
<script src="jquery.js" type="text/javascript"></script>
<script src="abc.js" type="text/javascript">
</script>
<link rel="stylesheet" type="text/css" href="abc.css">
<style>h2{font-weight:bold;}</style>
<script>
$(document).ready(function(){
$("#img").attr("src", "kkk.png");
});
</script>
</head>
<body>
<img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
<script src="kkk.js" type="text/javascript"></script>
</body>
</html>
the execution flow is roughly as follows:
The HTML document gets downloaded
The parsing of the HTML document starts
HTML Parsing reaches <script src="jquery.js" ...
jquery.js is downloaded, parsed and run
HTML parsing reaches <script src="abc.js" ...
abc.js is downloaded, parsed and run
HTML parsing reaches <link href="abc.css" ...
abc.css is downloaded and parsed
HTML parsing reaches <style>...</style>
Internal CSS rules are parsed and defined
HTML parsing reaches <script>...</script>
Internal Javascript is parsed and run
HTML Parsing reaches <img src="abc.jpg" ...
abc.jpg is downloaded and displayed
HTML Parsing reaches <script src="kkk.js" ...
kkk.js is downloaded, parsed and run
Parsing of HTML document ends
Note that the download may be asynchronous and non-blocking due to behaviours of the browser. For example, in Firefox there is this setting which limits the number of simultaneous requests per domain.
Also depending on whether the component has already been cached or not, the component may not be requested again in a near-future request. If the component has been cached, the component will be loaded from the cache instead of the actual URL.
When the parsing has ended and the DOM is ready, the ready event is fired. Thus when ready fires, the $("#img").attr("src","kkk.png"); is run. So:
Document is ready; the ready callback fires.
Javascript execution hits $("#img").attr("src", "kkk.png");
kkk.png is downloaded and loads into #img
Note that $(document).ready() fires once the DOM has been parsed and is ready, not when every image and stylesheet has finished loading; that later point is the window onload event. Read more about it: http://docs.jquery.com/Tutorials:Introducing_$(document).ready()
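A minimal sketch of the two events, using the jQuery APIs of that era:
$(document).ready(function () {
    // fires once the DOM has been parsed; images may still be downloading
    console.log('DOM ready');
});
$(window).load(function () {
    // fires after all sub-resources (images, CSS, frames) have finished loading
    console.log('everything loaded');
});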
Edit - This portion elaborates more on the parallel or not part:
By default, and from my current understanding, the browser usually processes each page with three components: the HTML parser, Javascript/DOM, and CSS.
The HTML parser is responsible for parsing and interpreting the markup language and thus must be able to make calls to the other two components.
For example, when the parser comes across a line like this:
<a href="..." onclick="..." class="...">a hypertext link</a>
The parser will make 3 calls, two to Javascript and one to CSS. Firstly, the parser will create this element and register it in the DOM namespace, together with all the attributes related to this element. Secondly, the parser will call Javascript to bind the onclick event to this particular element. Lastly, it will make another call to the CSS thread to apply the CSS style to this particular element.
The execution is top down and single threaded. Javascript may look multi-threaded, but the fact is that Javascript is single threaded. This is why, when loading an external Javascript file, the parsing of the main HTML page is suspended.
However, the CSS files can be downloaded simultaneously because CSS rules are always being applied - meaning elements are always repainted with the freshest CSS rules defined - thus making CSS non-blocking.
An element will only be available in the DOM after it has been parsed. Thus when working with a specific element, the script is always placed after it, or within the window onload event.
Script like this will cause error (on jQuery):
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
<div id="mydiv">Hello World</div>
Because when the script is parsed, #mydiv element is still not defined. Instead this would work:
<div id="mydiv">Hello World</div>
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
OR
<script type="text/javascript">/* <![CDATA[ */
$(window).ready(function(){
alert($("#mydiv").html());
});
/* ]]> */</script>
<div id="mydiv">Hello World</div>
1) HTML is downloaded.
2) HTML is parsed progressively. When a request for an asset is reached the browser will attempt to download the asset. A default configuration for most HTTP servers and most browsers is to process only two requests in parallel. IE can be reconfigured to download an unlimited number of assets in parallel. Steve Souders has been able to download over 100 requests in parallel on IE. The exception is that script requests block parallel asset requests in IE. This is why it is highly suggested to put all JavaScript in external JavaScript files and put the request just prior to the closing body tag in the HTML.
3) Once the HTML is parsed the DOM is rendered. CSS is rendered in parallel to the rendering of the DOM in nearly all user agents. As a result it is strongly recommended to put all CSS code into external CSS files that are requested as high as possible in the <head></head> section of the document. Otherwise the page is rendered up to the occurrence of the CSS request position in the DOM and then rendering starts over from the top.
4) Only after the DOM is completely rendered and requests for all assets in the page are either resolved or time out does JavaScript execute from the onload event. IE7, and I am not sure about IE8, does not time out assets quickly if an HTTP response is not received from the asset request. This means an asset requested by JavaScript inline to the page, that is JavaScript written into HTML tags that is not contained in a function, can prevent the execution of the onload event for hours. This problem can be triggered if such inline code exists in the page and fails to execute due to a namespace collision that causes a code crash.
Of the above steps the one that is most CPU intensive is the parsing of the DOM/CSS. If you want your page to be processed faster then write efficient CSS by eliminating redundant instructions and consolidating CSS instructions into the fewest possible element references. Reducing the number of nodes in your DOM tree will also produce faster rendering.
Keep in mind that each asset you request from your HTML or even from your CSS/JavaScript assets is requested with a separate HTTP header. This consumes bandwidth and requires processing per request. If you want to make your page load as fast as possible then reduce the number of HTTP requests and reduce the size of your HTML. You are not doing your user experience any favors by averaging page weight at 180k from HTML alone. Many developers subscribe to some fallacy that a user makes up their mind about the quality of content on the page in 6 nanoseconds and then purges the DNS query from his server and burns his computer if displeased, so instead they provide the most beautiful possible page at 250k of HTML. Keep your HTML short and sweet so that a user can load your pages faster. Nothing improves the user experience like a fast and responsive web page.
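Putting that advice together, a page skeleton following these rules would look roughly like this (same file names as the question, purely a sketch):
<html>
<head>
    <!-- CSS as early as possible so rendering does not restart -->
    <link rel="stylesheet" type="text/css" href="abc.css">
</head>
<body>
    <img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
    <!-- scripts last, just before the closing body tag, so they do not block rendering -->
    <script src="jquery.js" type="text/javascript"></script>
    <script src="kkk.js" type="text/javascript"></script>
</body>
</html>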
Open your page in Firefox and get the HTTPFox addon. It will tell you all that you need.
Found this on archivist.incuito:
http://archivist.incutio.com/viewlist/css-discuss/76444
When you first request a page, your browser sends a GET request to the server, which returns the HTML to the browser. The browser then starts parsing the page (possibly before all of it has been returned).
When it finds a reference to an external entity such as a CSS file, an image file, a script file, a Flash file, or anything else external to the page (either on the same server/domain or not), it prepares to make a further GET request for that resource.
However the HTTP standard specifies that the browser should not make more than two concurrent requests to the same domain. So it puts each request to a particular domain in a queue, and as each entity is returned it starts the next one in the queue for that domain.
The time it takes for an entity to be returned depends on its size, the load the server is currently experiencing, and the activity of every single machine between the machine running the browser and the server. The list of these machines can in principle be different for every request, to the extent that one image might travel from the USA to me in the UK over the Atlantic, while another from the same server comes out via the Pacific, Asia and Europe, which takes longer. So you might get a sequence like the following, where a page has (in this order) references to three script files, and five image files, all of differing sizes:
GET script1 and script2; queue request for script3 and images1-5.
script2 arrives (it's smaller than script1): GET script3, queue images1-5.
script1 arrives; GET image1, queue images2-5.
image1 arrives, GET image2, queue images3-5.
script3 fails to arrive due to a network problem - GET script3 again (automatic retry).
image2 arrives, script3 still not here; GET image3, queue images4-5.
image3 arrives; GET image4, queue image5, script3 still on the way.
image4 arrives, GET image5;
image5 arrives.
script3 arrives.
In short: any old order, depending on what the server is doing, what the rest of the Internet is doing, and whether or not anything has errors and has to be re-fetched. This may seem like a weird way of doing things, but it would quite literally be impossible for the Internet (not just the WWW) to work with any degree of reliability if it wasn't done this way.
Also, the browser's internal queue might not fetch entities in the order they appear in the page - it's not required to by any standard.
(Oh, and don't forget caching, both in the browser and in caching proxies used by ISPs to ease the load on the network.)
If you're asking this because you want to speed up your web site, check out Yahoo's page on Best Practices for Speeding Up Your Web Site. It has a lot of best practices for speeding up your web site.
AFAIK, the browser (at least Firefox) requests every resource as soon as it parses it. If it encounters an img tag it will request that image as soon as the img tag has been parsed. And that can be even before it has received the totality of the HTML document... that is, it could still be downloading the HTML document when that happens.
For Firefox, there are browser queues that apply, depending on how they are set in about:config. For example it will not attempt to download more than 8 files at once from the same server... the additional requests will be queued. I think there are per-domain limits, per-proxy limits, and other stuff, which are documented on the Mozilla website and can be set in about:config. I read somewhere that IE has no such limits.
The jQuery ready event is fired as soon as the main HTML document has been downloaded and its DOM parsed. Then the load event is fired once all linked resources (CSS, images, etc.) have been downloaded and parsed as well. This is made clear in the jQuery documentation.
If you want to control the order in which all that is loaded, I believe the most reliable way to do it is through JavaScript.
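For instance, a crude way to enforce an order from JavaScript is to chain the loads yourself (file names reused from the question purely as a sketch):
function loadScript(src, onDone) {
    var s = document.createElement('script');
    s.src = src;
    s.onload = onDone;  // fires once this script has loaded and executed
    document.getElementsByTagName('head')[0].appendChild(s);
}
loadScript('jquery.js', function () {
    loadScript('abc.js', function () {
        loadScript('kkk.js', function () {
            console.log('all three scripts loaded in this order');
        });
    });
});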
Dynatrace AJAX Edition shows you the exact sequence of page loading, parsing and execution.
The chosen answer does not seem to apply to modern browsers, at least not to Firefox 52. What I observed is that requests for resources like CSS and JavaScript are issued before the HTML parser reaches the element, for example:
<html>
<head>
<!-- prints the date before parsing and blocks HTML parsing -->
<script>
console.log("start: " + (new Date()).toISOString());
for(var i=0; i<1000000000; i++) {};
</script>
<script src="jquery.js" type="text/javascript"></script>
<script src="abc.js" type="text/javascript"></script>
<link rel="stylesheet" type="text/css" href="abc.css">
<style>h2{font-weight:bold;}</style>
<script>
$(document).ready(function(){
$("#img").attr("src", "kkk.png");
});
</script>
</head>
<body>
<img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
<script src="kkk.js" type="text/javascript"></script>
</body>
</html>
What I found is that the start times of the requests for the CSS and JavaScript resources were not blocked. It looks like Firefox scans the HTML and identifies key resources (img resources are not included) before it starts parsing the HTML.
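If you want to verify this yourself, the Resource Timing API reports when each request actually started, independent of when the parser reached the tag (a quick sketch to run after load):
window.addEventListener('load', function () {
    performance.getEntriesByType('resource').forEach(function (entry) {
        // fetchStart is measured in milliseconds since navigation start
        console.log(entry.name + ' requested at ' + entry.fetchStart.toFixed(1) + ' ms');
    });
});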

CodeIgniter + jQuery(ajax) + HTML5 pushstate: How can I make a clean navigation with real URLs?

I'm currently trying to build a new website, nothing special, nice and small, but I'm stuck at the very beginning.
My problems are clean URLs and page navigation. I want to do it "the right way".
What I would like to have:
I use CodeIgniter to get clean URLs like
"www.example.com/hello/world"
jQuery helps me using ajax, so I can
.load() additional content
Now I want to use HTML5 features like pushstate to
get rid of the # in the URL
It should be possible to go back and forth without a page refresh but the page will still display the right content according to the current URL.
It should also be possible to reload a page without getting a 404 error. The site should exist thanks to CodeIgniter. (there is a controller and a view)
For example:
A very basic website. Two links, called "foo" and "bar", and an empty div box beneath them.
The basic URL is example.com
When you click on "foo" the URL changes to "example.com/foo" without reloading and the div box gets new content with jQuery .load(). The same goes for the other link, just of course different content and URL.
After clicking "foo" and then "bar" the back button will bring me back to "example.com/foo" with the according content. If I load this link directly or refresh the page, it will look the same. No 404 error or something.
Just think about this page and tell me how you would do this.
I would really love to have this kind of navigation and so I tried several things.
So far...
I know how to use CodeIgniter to get the URLs like this. I know how to use jQuery to load additional content and while I don't fully understand the html5 pushstate stuff, I at least got it to work somehow.
But I can't get it to work all together.
My code right now is a mess, that's the reason I don't really want to post it here. I looked at different tutorials and copy pasted some code together. Would be better to upload my CI folder I guess.
Some of the tutorials I looked at:
Dive into HTML5
HTML5 demos
Mozilla manipulating the browser history
Saner HTML5 history
Github: History.js
(max. number of links reached :/)
I think my main problem is that everybody tries to make it compatible with all browsers and different versions, adds scripts/jQuery plugins and whatnot, and I get confused by all the additional code. There is more code between my script tags than actual html content.
Could somebody post the most basic method how to use HTML5 for my example page?
My failed attempt:
On my test page, when I go back, the URL changes, but the div box will still show the same content, not the old one. I also don't know how to change the URL in the script according to the href attribute from the link. Is there something like $(this).attr('href'), that changes according to which link I click? Right now I would have to use a script for every link, which of course is bad.
When I refresh the site, CodeIgniter kicks in and loads the view, but really only the view by itself, the one I loaded with ajax, not the whole page. But I guess that should be easy to fix with a layout and the right controller settings. Haven't paid much attention to this yet.
Thanks in advance for any help.
If you have suggestions, ideas, or simple just want to mention something, please let me know.
regards
DiLer
I've put up a successful minimal example of HTML5 history here: http://cairo140.github.com/html5-history-example/one.html
The easiest way to get into HTML5 pushstate in my opinion is to ignore the framework for a while and use the most simplistic state transition possible: a wholesale replacement of the <body> and <title> elements. Outside of those elements, the rest of the markup is probably just boilerplate, although if it varies (e.g., if you change the class on HTML in the backend), you can adapt that.
What a dynamic backend like CI does is essentially fake the existence of data at particular locations (identified by the URL) by generating it dynamically on the fly. We can abstract away from the effect of the framework by literally creating the resources and putting them in locations through which your web server (Apache, probably) will simply identify them and feed them on through. We'll have a very simple file system structure relative to the domain root:
/one.html
/two.html
/assets/application.js
Those are the only three files we're working with.
Here's the code for the two HTML files. If you're at the level when you're dealing with HTML5 features, you should be able to understand the markup, but if I didn't make something clear, just leave a comment, and I'll walk you through it:
one.html
<!doctype html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.js"></script>
<script src="assets/application.js"></script>
<title>One</title>
</head>
<body>
<div class="container">
<h1>One</h1>
<a href="two.html">Two</a>
</div>
</body>
</html>
two.html
<!doctype html>
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.js"></script>
<script src="assets/application.js"></script>
<title>Two</title>
</head>
<body>
<div class="container">
<h1>Two</h1>
<a href="one.html">One</a>
</div>
</body>
</html>
You'll notice that if you load one.html through your browser, you can click on the link to two.html, which will load and display a new page. And from two.html, you can do the same back to one.html. Cool.
Now, for the history part:
assets/application.js
$(function(){
  var replacePage = function(url) {
    $.ajax({
      url: url,
      type: 'get',
      dataType: 'html',
      success: function(data){
        var dom = $(data);
        var title = dom.filter('title').text();
        var html = dom.filter('.container').html();
        $('title').text(title);
        $('.container').html(html);
      }
    });
  };
  $('a').live('click', function(e){
    history.pushState(null, null, this.href);
    replacePage(this.href);
    e.preventDefault();
  });
  $(window).bind('popstate', function(){
    replacePage(location.pathname);
  });
});
How it works
I define replacePage within the jQuery ready callback to do some straightforward loading of the URL in the argument and to replace the contents of the title and .container elements with those retrieved remotely.
The live call means that any link clicked on the page will trigger the callback, and the callback pushes the state to the href in the link and calls replacePage. It also uses e.preventDefault to prevent the link from being processed the normal way.
Finally, there's a popstate event that fires when a user uses browser-based page navigation (back, forward). We bind a simple callback to that event. Of note is that I couldn't get the version on the Dive Into HTML page to work for some reason in FF for Mac. No clue why.
How to extend it
This extremely basic example can more or less be transplanted onto any site because it does a very uncreative transition: HTML replacement. I suggest you use this as a foundation and move on to more creative transitions. One example of what you could do would be to emulate what Github does with the directory navigation in its repositories. It's an intermediate maneuver that requires floats and overflow management. You could start with a simpler transition like appending the .container in the loaded page to the DOM and then animating the old container to {height: 0}.
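As a very rough sketch of that last idea, dropped into the success callback above (data is the ajax response, as before; timings and selectors are just placeholders):
var $newContainer = $(data).filter('.container');
var $oldContainer = $('.container').first();
$oldContainer.after($newContainer.hide());   // insert the incoming container
$newContainer.slideDown();                   // reveal it
$oldContainer.animate({ height: 0 }, 300, function () {
    $(this).remove();                        // drop the old container once collapsed
});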
Addressing your specific "For example"
You're on the right track for using HTML5 history, but you need to clarify your idea of exactly what /foo and /bar will contain. Basically, you're going to have three pages: /, /foo, and /bar. / will have an empty container div. /foo will be identical to / except in that container div has some foo content in it. /bar will be identical to /foo except in that the container div has some bar content in it. Now, the question comes to how you would extract the contents of the container through Javascript. Assuming that your /foo body tag looked something like this:
<body>
<a href="/foo">foo</a>
<a href="/bar">bar</a>
<div class="container">foo</div>
</body>
Then you would extract it from the response data through var html = $(data).filter('.container').html() and then put it back into the parent page through $('.container').html(html). You use filter instead of the much more reasonable find because, for some wacky reason, jQuery's DOM parser produces a jQuery object containing every child of the head and every child of the body elements instead of just a jQuery object wrapping the html element. I don't know why.
The rest is just adapting this back into the "vanilla" version above. If you are stuck at any particular stage, let me know, and I can guide you better though it.
Code
https://github.com/cairo140/html5-history-example
Try this in your controller:
if (!$this->input->is_ajax_request())
    $this->load->view('header');
$this->load->view('your_view', $data);
if (!$this->input->is_ajax_request())
    $this->load->view('footer');
That way a normal page load gets the full layout with header and footer, while your ajax requests receive only the view fragment to inject into the container.