What does appending "?v=1" to CSS and JavaScript URLs in link and script tags do? - html

I have been looking at an HTML5 boilerplate template (from http://html5boilerplate.com/) and noticed the use of "?v=1" in URLs when referring to CSS and JavaScript files.
What does appending "?v=1" to CSS and JavaScript URLs in link and script tags do?
Not all JavaScript URLs have the "?v=1" (example from the sample below: js/modernizr-1.5.min.js). Is there a reason why this is the case?
Sample from their index.html:
<!-- CSS : implied media="all" -->
<link rel="stylesheet" href="css/style.css?v=1">
<!-- For the less-enabled mobile browsers like Opera Mini -->
<link rel="stylesheet" media="handheld" href="css/handheld.css?v=1">
<!-- All JavaScript at the bottom, except for Modernizr which enables HTML5 elements & feature detects -->
<script src="js/modernizr-1.5.min.js"></script>
<!------ Some lines removed ------>
<script src="js/plugins.js?v=1"></script>
<script src="js/script.js?v=1"></script>
<!--[if lt IE 7 ]>
<script src="js/dd_belatedpng.js?v=1"></script>
<![endif]-->
<!-- yui profiler and profileviewer - remove for production -->
<script src="js/profiling/yahoo-profiling.min.js?v=1"></script>
<script src="js/profiling/config.js?v=1"></script>
<!-- end profiling code -->

These are usually to make sure that the browser gets a new version when the site gets updated with a new version, e.g. as part of our build process we'd have something like this:
/Resources/Combined.css?v=x.x.x.buildnumber
Since this changes with every new code push, the client's forced to grab a new version, just because of the querystring. Look at this page (at the time of this answer) for example:
<link ... href="http://sstatic.net/stackoverflow/all.css?v=c298c7f8233d">
I think instead of a revision number the SO team went with a file hash, which is an even better approach: even with a new release, the browser is only forced to grab a new version when the file actually changes.
Both of these approaches allow you to set the cache header to something ridiculously long, say 20 years. Yet when the file changes, you don't have to worry about that cache header: the browser sees a different querystring and treats it as a different, new file.
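For example, a build step can compute the hash itself. A minimal Node sketch (the file path and hash length are just illustrative):
// build-step sketch: emit a content-hashed URL for a stylesheet
var crypto = require('crypto');
var fs = require('fs');

function versionedUrl(path) {
    // hash the file contents; the querystring changes only when the file does
    var hash = crypto.createHash('md5').update(fs.readFileSync(path)).digest('hex');
    return path + '?v=' + hash.slice(0, 12);
}

console.log(versionedUrl('css/style.css')); // e.g. css/style.css?v=c298c7f8233d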

This makes sure you are getting the latest version of the CSS or JS file from the server.
And later you can append "?v=2" if you have a newer version and "?v=3", "?v=4" and so on.
Note that you can use any querystring; 'v' is not a must. For example:
"?blah=1" will work as well.
And
"?xyz=1002" will work.
This is a common technique because browsers now cache JS and CSS files better and for longer.

The hash solution is nice, but not really human readable when you want to know what version of a file is sitting in your local web folder. The solution is to date/time-stamp your version so you can easily compare it against the file on your server.
For example, if your .js or .css file is dated 2011-02-08 15:55:30 (last modification), then the version should be .js?v=20110208155530
It should be easy to read the properties of any file in any language. In ASP.NET it's really easy:
".js?v=" + File.GetLastWriteTime(HttpContext.Current.Request.PhysicalApplicationPath + filename).ToString("yyyyMMddHHmmss");
Of course, get it nicely refactored into properties/functions first and off you go. No more excuses.
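If your build runs on Node rather than ASP.NET, a rough equivalent would be this sketch (the file path is hypothetical):
// Node sketch: version a URL with the file's last-modified timestamp
var fs = require('fs');

function timestampedUrl(path) {
    var t = fs.statSync(path).mtime; // last modification time
    var pad = function (n) { return String(n).padStart(2, '0'); };
    // format as yyyyMMddHHmmss, matching the example above
    return path + '?v=' + t.getFullYear() + pad(t.getMonth() + 1) + pad(t.getDate())
        + pad(t.getHours()) + pad(t.getMinutes()) + pad(t.getSeconds());
}

console.log(timestampedUrl('js/script.js')); // e.g. js/script.js?v=20110208155530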
Good luck, Art.

In order to answer your questions:
"?v=1" is written to make the browser download a fresh copy of the CSS and JS files instead of using the ones in its cache.
If you add this querystring parameter to the end of your stylesheet or JS URL and change it on each release, you force the browser to download the file again, so the recent changes in the .css and .js files take effect in your browser.
If you don't use this versioning, you may need to clear the cache or hard-refresh the page in order to see the recent changes in those files.
Here is an article that explains this: How and Why to make versioning of CSS and JS files

Javascript files are often cached by the browser for a lot longer than you might expect.
This can often result in unexpected behaviour when you release a new version of your JS file.
Therefore, it is common practice to add a querystring parameter to the URL for the javascript file. That way, the browser caches the javascript file with v=1. When you release a new version of your javascript file, you change the URLs to v=2 and the browser will be forced to download a new copy.

During development/testing of new releases, the cache can be a problem because the browser, the server, and sometimes even the 3G telco (if you do mobile deployment) will cache the static content (e.g. JS, CSS, HTML, img). You can overcome this by appending a version number, random number or timestamp to the URL, e.g. in JSP: <script src="js/excel.js?time=<%=new java.util.Date()%>"></script>
If you're running pure HTML (instead of server pages like JSP, ASP or PHP), the server won't help you. In the browser, the links are loaded before any JS runs, so you have to remove the links from the markup and load them with JS:
// front end cache bust
var cacheBust = ['js/StrUtil.js', 'js/protos.common.js', 'js/conf.js', 'bootstrap_ECP/js/init.js'];
for (var i = 0; i < cacheBust.length; i++) {
    var el = document.createElement('script');
    // a random querystring keeps the browser from serving a cached copy
    el.src = cacheBust[i] + "?v=" + Math.random();
    document.getElementsByTagName('head')[0].appendChild(el);
}

As you can read above, ?v=1 ensures that your browser gets version 1 of the file. When you have a new version, you just append a different version number and the browser will forget about the old version and load the new one.
There is a gulp plugin which takes care of versioning your files during the build phase, so you don't have to do it manually. It's handy and you can easily integrate it into your build process. Here's the link: gulp-annotate

As mentioned by others, this is used for front end cache busting. To implement this, I have personally found the grunt-cache-bust npm package useful.

Related

Proper order for scripts in head of my document [duplicate]

When embedding JavaScript in an HTML document, where is the proper place to put the <script> tags and included JavaScript? I seem to recall that you are not supposed to place these in the <head> section, but placing at the beginning of the <body> section is bad, too, since the JavaScript will have to be parsed before the page is rendered completely (or something like that). This seems to leave the end of the <body> section as a logical place for <script> tags.
So, where is the right place to put the <script> tags?
(This question references this question, in which it was suggested that JavaScript function calls should be moved from <a> tags to <script> tags. I'm specifically using jQuery, but more general answers are also appropriate.)
Here's what happens when a browser loads a website with a <script> tag on it:
Fetch the HTML page (e.g. index.html)
Begin parsing the HTML
The parser encounters a <script> tag referencing an external script file.
The browser requests the script file. Meanwhile, the parser blocks and stops parsing the other HTML on your page.
After some time the script is downloaded and subsequently executed.
The parser continues parsing the rest of the HTML document.
Step #4 causes a bad user experience. Your website basically stops loading until you've downloaded all scripts. If there's one thing that users hate it's waiting for a website to load.
Why does this even happen?
Any script can insert its own HTML via document.write() or other DOM manipulations. This implies that the parser has to wait until the script has been downloaded and executed before it can safely parse the rest of the document. After all, the script could have inserted its own HTML in the document.
However, most JavaScript developers no longer manipulate the DOM while the document is loading. Instead, they wait until the document has been loaded before modifying it. For example:
<!-- index.html -->
<html>
<head>
<title>My Page</title>
<script src="my-script.js"></script>
</head>
<body>
<div id="user-greeting">Welcome back, user</div>
</body>
</html>
JavaScript:
// my-script.js
document.addEventListener("DOMContentLoaded", function() {
// this function runs when the DOM is ready, i.e. when the document has been parsed
document.getElementById("user-greeting").textContent = "Welcome back, Bart";
});
Because your browser does not know my-script.js isn't going to modify the document until it has been downloaded and executed, the parser stops parsing.
Antiquated recommendation
The old approach to solving this problem was to put <script> tags at the bottom of your <body>, because this ensures the parser isn't blocked until the very end.
This approach has its own problem: the browser cannot start downloading the scripts until the entire document is parsed. For larger websites with large scripts and stylesheets, being able to download the script as soon as possible is very important for performance. If your website doesn't load within 2 seconds, people will go to another website.
In an optimal solution, the browser would start downloading your scripts as soon as possible, while at the same time parsing the rest of your document.
The modern approach
Today, browsers support the async and defer attributes on scripts. These attributes tell the browser it's safe to continue parsing while the scripts are being downloaded.
async
<script src="path/to/script1.js" async></script>
<script src="path/to/script2.js" async></script>
Scripts with the async attribute are executed asynchronously. This means the script is executed as soon as it's downloaded, without blocking the browser in the meantime.
This implies that it's possible that script 2 is downloaded and executed before script 1.
According to http://caniuse.com/#feat=script-async, 97.78% of all browsers support this.
defer
<script src="path/to/script1.js" defer></script>
<script src="path/to/script2.js" defer></script>
Scripts with the defer attribute are executed in order (i.e. first script 1, then script 2). This also does not block the browser.
Unlike async scripts, defer scripts are only executed after the entire document has been loaded.
(To learn more and see some really helpful visual representations of the differences between async, defer and normal scripts check the first two links at the references section of this answer)
Conclusion
The current state-of-the-art is to put scripts in the <head> tag and use the async or defer attributes. This allows your scripts to be downloaded ASAP without blocking your browser.
The good thing is that your website should still load correctly on the 2% of browsers that do not support these attributes while speeding up the other 98%.
References
async vs defer attributes
Efficiently load JavaScript with defer and async
Remove Render-Blocking JavaScript
Async, Defer, Modules: A Visual Cheatsheet
Just before the closing body tag, as stated on Put Scripts at the Bottom:
Put Scripts at the Bottom
The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won't start any other downloads, even on different hostnames.
Non-blocking script tags can be placed just about anywhere:
<script src="script.js" async></script>
<script src="script.js" defer></script>
<script src="script.js" async defer></script>
async script will be executed asynchronously as soon as it is available
defer script is executed when the document has finished parsing
async defer script falls back to the defer behavior if async is not supported
Such scripts will be executed asynchronously/after document ready, which means you cannot do this:
<script src="jquery.js" async></script>
<script>jQuery(something);</script>
<!--
* might throw "jQuery is not defined" error
* defer will not work either
-->
Or this:
<script src="document.write(something).js" async></script>
<!--
* might issue "cannot write into document from an asynchronous script" warning
* defer will not work either
-->
Or this:
<script src="jquery.js" async></script>
<script src="jQuery(something).js" async></script>
<!--
* might throw "jQuery is not defined" error (no guarantee which script runs first)
* defer will work in sane browsers
-->
Or this:
<script src="document.getElementById(header).js" async></script>
<div id="header"></div>
<!--
* might not locate #header (script could fire before parser looks at the next line)
* defer will work in sane browsers
-->
Having said that, asynchronous scripts offer these advantages:
Parallel download of resources:
Browser can download stylesheets, images and other scripts in parallel without waiting for a script to download and execute.
Source order independence:
You can place the scripts inside head or body without worrying about blocking (useful if you are using a CMS). Execution order still matters though.
It is possible to circumvent the execution order issues by using external scripts that support callbacks. Many third party JavaScript APIs now support non-blocking execution. Here is an example of loading the Google Maps API asynchronously.
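As a sketch of that pattern (the callback parameter follows Google Maps' convention; YOUR_KEY is a placeholder):
<script>
  // the Maps API calls this function once it has finished loading
  function initMap() {
    new google.maps.Map(document.getElementById('map'), {
      center: { lat: 0, lng: 0 },
      zoom: 2
    });
  }
</script>
<script src="https://maps.googleapis.com/maps/api/js?key=YOUR_KEY&callback=initMap" async defer></script>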
The standard advice, promoted by the Yahoo! Exceptional Performance team, is to put the <script> tags at the end of the document's <body> element so they don't block rendering of the page.
But there are some newer approaches that offer better performance, as described in this other answer of mine about the load time of the Google Analytics JavaScript file:
There are some great slides by Steve Souders (client-side performance expert) about:
Different techniques to load external JavaScript files in parallel
their effect on loading time and page rendering
what kind of "in progress" indicators the browser displays (e.g. 'loading' in the status bar, hourglass mouse cursor).
The modern approach is using ES6 'module' type scripts.
<script type="module" src="..."></script>
By default, module scripts are deferred: you can place them anywhere, they will download in parallel with parsing, and they execute once the document has finished parsing.
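A minimal sketch (file names hypothetical):
<script type="module" src="js/app.js"></script>

// js/app.js - runs after the document has been parsed
import { greet } from './greet.js';
document.querySelector('h1').textContent = greet('Bart');

// js/greet.js
export function greet(name) {
  return 'Welcome back, ' + name;
}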
Further reading:
The differences between a script and a module
The execution of a module being deferred compared to a script (modules are deferred by default)
Browser Support for ES6 Modules
If you are using jQuery then put the JavaScript code wherever you find it best and use $(document).ready() to ensure that things are loaded properly before executing any functions.
On a side note: I like all my script tags in the <head> section as that seems to be the cleanest place.
<script src="myjs.js"></script>
</body>
The script tag should always be placed before the closing body tag, at the bottom of the HTML file.
The page will load its HTML and CSS first, and the JavaScript will load afterwards.
Check this if required:
http://stevesouders.com/hpws/rule-js-bottom.php
The best place to put the <script> tag is before the closing </body> tag, so that downloading and executing it doesn't block the browser from parsing the HTML in the document.
Loading the JavaScript files externally also has its own advantages: the files can be cached by browsers, which speeds up page load times, and it separates the HTML and JavaScript code, which helps to manage the code base better.
Modern browsers also support other, more optimal ways to load external JavaScript files, namely async and defer.
Async and Defer
Normally, an HTML page is parsed line by line. When an external JavaScript <script> element is encountered, HTML parsing stops until the JavaScript is downloaded and ready for execution. This normal page execution can be changed using the defer and async attributes.
Defer
When the defer attribute is used, the JavaScript is downloaded in parallel with HTML parsing, but it will execute only after the full HTML parsing is done.
<script src="/local-js-path/myScript.js" defer></script>
Async
When the async attribute is used, the JavaScript is downloaded as soon as the script element is encountered, in parallel with HTML parsing, and it is executed as soon as the download finishes.
<script src="/local-js-path/myScript.js" async></script>
When to use which attributes
If your script is independent of other scripts and is modular, use async.
If you load script1 and script2 with async, both will run in parallel with HTML parsing, as soon as they are downloaded and available, with no guaranteed order between them.
If your script depends on another script, then use defer for both:
When script1 and script2 are loaded in that order with defer, script1 is guaranteed to execute first, and script2 will execute only after script1 has fully executed.
This is a must if script2 depends on script1.
If your script is small enough and is depended on by another script of type async, then use your script with no attributes and place it above all the async scripts.
Reference: External JavaScript JS File – Advantages, Disadvantages, Syntax, Attributes
It turns out it can go almost anywhere.
You can defer the execution with something like jQuery so it doesn't matter where it's placed (except for a small performance hit during parsing).
The most conservative (and widely accepted) answer is "at the bottom just before the ending tag", because then the entire DOM will have been loaded before anything can start executing.
There are dissenters, for various reasons, starting with the common practice of intentionally beginning execution with the page onload event.
It depends. If you are loading a script that's necessary for styling your page or for actions on your page (like the click handler of a button), then you'd better place it at the top. If your styling is 100% CSS and you have fallbacks for all the button actions, then you can place it at the bottom.
Or, best of all (if that's not a concern), you can show a modal loading box, place your JavaScript code at the bottom of the page, and hide the box when the last line of your script has loaded. This way you can keep users from triggering actions on your page before the scripts are loaded, and also avoid improper styling.
Including scripts at the end is mainly done when the content/styles of the web page should be shown first.
Including the scripts in the head loads them early, so they can be used before the whole web page has loaded.
If the scripts are placed last, validation will happen only after all the styles and design have loaded, which is not ideal for fast, responsive websites.
You can add JavaScript code in an HTML document by employing the dedicated HTML tag <script> that wraps around JavaScript code.
The <script> tag can be placed in the <head> section of your HTML, in the <body> section, or after the </body> close tag, depending on when you want the JavaScript to load.
Generally, JavaScript code can go inside the document <head> section in order to keep it contained and out of the main content of your HTML document.
However, if your script needs to run at a certain point within a page’s layout — like when using document.write to generate content — you should put it at the point where it should be called, usually within the <body> section.
Depending on the script and its usage the best possible (in terms of page load and rendering time) may be to not use a conventional <script>-tag per se, but to dynamically trigger the loading of the script asynchronously.
There are a few different techniques, but the most straightforward is to use document.createElement("script") when the window.onload event is triggered. That way the script is only loaded once the page itself has rendered, and thus doesn't add to the time the user has to wait for the page to appear.
This naturally requires that the script itself is not needed for the rendering of the page.
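A minimal sketch of that technique (the script name is hypothetical):
// load a non-essential script only after the page has fully rendered
window.onload = function () {
  var script = document.createElement('script');
  script.src = 'js/analytics.js';
  document.getElementsByTagName('head')[0].appendChild(script);
};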
For more information, see the post Coupling async scripts by Steve Souders (creator of YSlow, but now at Google).
A script blocks DOM loading until it has been loaded and executed.
If you place scripts at the end of <body>, all of the DOM has a chance to load and render (the page will "display" faster). <script> will have access to all of those DOM elements.
On the other hand, placing it just after the <body> start, or higher, will execute the script immediately, when there still aren't any DOM elements.
You are including jQuery which means you can place it wherever you wish and use .ready().
You can place most of your <script> references at the end of <body>.
But if there are active components on your page which are using external scripts, then their dependency (.js files) should come before that (ideally in the head tag).
The best place to write your JavaScript code is at the end of the document, right before the closing </body> tag, so that the document loads first and the JavaScript code executes afterwards.
<script> ... your code here ... </script>
</body>
And if you write in jQuery, the following can be in the head document and it will execute after the document loads:
<script>
$(document).ready(function(){
// Your code here...
});
</script>
If you still care a lot about support and performance in Internet Explorer before version 10, it's best to always make your script tags the last tags of your HTML body. That way, you're certain that the rest of the DOM has been loaded and you won't block any rendering.
If you don't care too much about Internet Explorer before version 10 any more, you might want to put your scripts in the head of your document and use defer to ensure they only run after your DOM has been loaded (<script type="text/javascript" src="path/to/script1.js" defer></script>). If you still want your code to work in Internet Explorer before version 10, don't forget to wrap your code in a window.onload event, though!
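That fallback might look like this sketch, reusing the element from the earlier example:
// defer keeps modern browsers happy; window.onload covers browsers
// that ignore the attribute and would otherwise run this too early
window.onload = function () {
  document.getElementById('user-greeting').textContent = 'Welcome back, Bart';
};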
I think it depends on the webpage and its execution.
If the page that you want to display cannot be displayed properly without loading the JavaScript first, then you should include the JavaScript file first.
But if you can display/render the webpage without initially downloading the JavaScript file, then you should put the JavaScript at the bottom of the page. This emulates a speedy page load: from the user's point of view, the page appears to load faster.
We should always put scripts before the closing body tag, except in some specific scenarios.
For example:
<html>
<body>
<p id="demo"></p>
<script>
document.getElementById("demo").innerHTML = "Hello JavaScript!";
</script>
</body>
</html>
Prefer to put it before the </body> closing tag.
Why?
As per the official doc: https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/JavaScript_basics#a_hello_world!_example
Note: The reason the instructions (above) place the <script> element near the bottom of the HTML file is that the browser reads code in the order it appears in the file.
If the JavaScript loads first and it is supposed to affect HTML that hasn't loaded yet, there could be problems. Placing JavaScript near the bottom of an HTML page is one way to accommodate this dependency. To learn more about alternative approaches, see Script loading strategies.

Calling a URL in non-blocking manner on a webpage

Story time
I have a purchased/rented typeface whose license asks me to query a unique counter on their domain every time the typeface is shown. Sadly, their suggestion is to call it in the CSS file with @import, which blocks rendering for the duration of the call. It is also weird since, according to the license, they wish to track individual page views, yet if the CSS file in question is cached, won't that prevent the @import from being called again until the cache clears?
In any case, I removed the @import call, but then began to ponder what exactly I should replace it with. What tag would give me a non-blocking call that would work universally across browsers and regardless of disabled features? A link with rel=prefetch? That's HTML5, and it didn't work in IE7 when I tested it. It would also feel awkward, since it implies the resource should be cached, yet the response contains a No-Cache directive. A script tag with defer and async attributes at the end of the page? Maybe, but what if someone has disabled scripting? I could add a noscript tag and an image tag inside it as a fallback. But! Will the image display as broken in some browsers, since the image contents are an empty string? And what if someone has scripting AND images disabled? Oh no! The world must be a pretty bleak place for them, I must admit. Oh, oh! What about embed/object? Now that's just wrong, stop touching me funny.
I ended up going with just a plain image tag for now, but what would be the magical combination that would cover the widest range of edge cases? I could add the script tag for example to support those without image loading on.
My intrigue here is purely scientifical so I'm not really looking for alternative typeface providers or to discuss how unlikely previously described situations are. Also, why they provide me with the actual font file to serve from my own server and then trust me to call the counter honestly is beyond me.
Code
Let's imagine my unique font counter is located at http://font.foo/bar and the font.foo server is acting slow.
Starting point
// fonts.css
@import url('font.foo/bar');
@font-face { ... }
// index.html
<link rel="stylesheet" href="fonts.css">
Separate link tag
// Problem: Blocks rendering
// index.html
<link rel="stylesheet" href="fonts.css">
<link rel="stylesheet" href="font.foo/bar">
rel=prefetch
// Problem: Won't load in IE7, semantically awkward
// index.html
<link rel="prefetch" href="font.foo/bar">
Deferred async script load
// Problem: Won't work when user has disabled scripting
// index.html
<script src="font.foo/bar" async defer>
</body>
Script tag with added image fallback
// Problem: Won't work when user has disabled scripting AND images
// index.html
<script src="font.foo/bar" async defer>
<noscript><img src="font.foo/bar" alt=""></noscript>
</body>
Maybe mix it this way?
Keep the script option, but use a link element instead of an img in the fallback:
<script src="font.foo/bar" async defer>
<noscript><link rel="stylesheet" href="font.foo/bar"></noscript>
</body>
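Along the same lines, you could fire the counter from script as an image beacon, which never blocks parsing or rendering, and keep the image as the noscript fallback (a sketch, untested against that font service):
<script>
  // an Image request is fully non-blocking and the response is ignored
  (new Image()).src = 'http://font.foo/bar';
</script>
<noscript><img src="font.foo/bar" alt=""></noscript>
</body>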

Browser load HTML [duplicate]

I have done some web based projects, but I haven't thought much about the load and execution sequence of an ordinary web page. But now I need to know the details. It's hard to find answers on Google or SO, so I created this question.
A sample page is like this:
<html>
<head>
<script src="jquery.js" type="text/javascript"></script>
<script src="abc.js" type="text/javascript">
</script>
<link rel="stylesheets" type="text/css" href="abc.css"></link>
<style>h2{font-wight:bold;}</style>
<script>
$(document).ready(function(){
$("#img").attr("src", "kkk.png");
});
</script>
</head>
<body>
<img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
<script src="kkk.js" type="text/javascript"></script>
</body>
</html>
So here are my questions:
How does this page load?
What is the sequence of the loading?
When is the JS code executed? (inline and external)
When is the CSS executed (applied)?
When does $(document).ready get executed?
Will abc.jpg be downloaded? Or does it just download kkk.png?
I have the following understanding:
The browser loads the html (DOM) at first.
The browser starts to load the external resources from top to bottom, line by line.
If a <script> is met, the loading will be blocked and wait until the JS file is loaded and executed and then continue.
Other resources (CSS/images) are loaded in parallel and executed if needed (like CSS).
Or is it like this:
The browser parses the html (DOM) and gets the external resources in an array or stack-like structure. After the html is loaded, the browser starts to load the external resources in the structure in parallel and execute, until all resources are loaded. Then the DOM will be changed corresponding to the user's behaviors depending on the JS.
Can anyone give a detailed explanation of what happens once you've got the response of an HTML page? Does this vary in different browsers? Any reference about this question?
Thanks.
EDIT:
I did an experiment in Firefox with Firebug. And it shows as the following image:
Edit: It's 2022. If you are interested in detailed coverage on the load and execution of a web page and how the browser works, you should check out https://browser.engineering/ (open sourced at https://github.com/browserengineering/book)
According to your sample,
<html>
<head>
<script src="jquery.js" type="text/javascript"></script>
<script src="abc.js" type="text/javascript">
</script>
<link rel="stylesheets" type="text/css" href="abc.css"></link>
<style>h2{font-wight:bold;}</style>
<script>
$(document).ready(function(){
$("#img").attr("src", "kkk.png");
});
</script>
</head>
<body>
<img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
<script src="kkk.js" type="text/javascript"></script>
</body>
</html>
roughly the execution flow is about as follows:
The HTML document gets downloaded
The parsing of the HTML document starts
HTML Parsing reaches <script src="jquery.js" ...
jquery.js is downloaded and parsed
HTML parsing reaches <script src="abc.js" ...
abc.js is downloaded, parsed and run
HTML parsing reaches <link href="abc.css" ...
abc.css is downloaded and parsed
HTML parsing reaches <style>...</style>
Internal CSS rules are parsed and defined
HTML parsing reaches <script>...</script>
Internal Javascript is parsed and run
HTML Parsing reaches <img src="abc.jpg" ...
abc.jpg is downloaded and displayed
HTML Parsing reaches <script src="kkk.js" ...
kkk.js is downloaded, parsed and run
Parsing of HTML document ends
Note that the download may be asynchronous and non-blocking due to behaviours of the browser. For example, in Firefox there is this setting which limits the number of simultaneous requests per domain.
Also depending on whether the component has already been cached or not, the component may not be requested again in a near-future request. If the component has been cached, the component will be loaded from the cache instead of the actual URL.
When the parsing ends and the DOM is ready, the $(document).ready() handler is fired. So:
The DOM is ready; the ready handler fires.
Javascript execution hits $("#img").attr("src", "kkk.png");
kkk.png is downloaded and loaded into #img.
Note that $(document).ready() fires as soon as the document's DOM has been parsed, not when every component has finished loading; waiting for all linked resources is what the window onload event does. Read more about it: http://docs.jquery.com/Tutorials:Introducing_$(document).ready()
Edit - This portion elaborates more on the parallel or not part:
By default, and from my current understanding, a browser usually processes each page with 3 components: the HTML parser, Javascript/DOM, and CSS.
The HTML parser is responsible for parsing and interpreting the markup language and thus must be able to make calls to the other 2 components.
For example, when the parser comes across a line like this:
<a href="#" onclick="say('hello')" class="fancy">a hypertext link</a>
The parser will make 3 calls, two to Javascript and one to CSS. Firstly, the parser will create this element and register it in the DOM namespace, together with all the attributes related to this element. Secondly, the parser will call to bind the onclick event to this particular element. Lastly, it will make another call to the CSS thread to apply the CSS style to this particular element.
The execution is top-down and single-threaded. Javascript may look multi-threaded, but the fact is that Javascript is single-threaded. This is why, while an external javascript file is being loaded, the parsing of the main HTML page is suspended.
However, the CSS files can be downloaded simultaneously, because CSS rules are always being applied - meaning to say elements are always repainted with the freshest CSS rules defined - thus making it non-blocking.
An element will only be available in the DOM after it has been parsed. Thus, a script that works with a specific element should always be placed after it, or within the window onload event.
A script like this will cause an error (with jQuery):
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
<div id="mydiv">Hello World</div>
Because when the script is parsed, the #mydiv element is not yet defined. Instead, this would work:
<div id="mydiv">Hello World</div>
<script type="text/javascript">/* <![CDATA[ */
alert($("#mydiv").html());
/* ]]> */</script>
OR
<script type="text/javascript">/* <![CDATA[ */
$(document).ready(function(){
alert($("#mydiv").html());
});
/* ]]> */</script>
<div id="mydiv">Hello World</div>
1) HTML is downloaded.
2) HTML is parsed progressively. When a request for an asset is reached, the browser will attempt to download the asset. A default configuration for most HTTP servers and most browsers is to process only two requests in parallel. IE can be reconfigured to download an unlimited number of assets in parallel. Steve Souders has been able to make over 100 requests in parallel in IE. The exception is that script requests block parallel asset requests in IE. This is why it is highly suggested to put all JavaScript in external JavaScript files and put the request just prior to the closing body tag in the HTML.
3) Once the HTML is parsed, the DOM is rendered. CSS is rendered in parallel to the rendering of the DOM in nearly all user agents. As a result, it is strongly recommended to put all CSS code into external CSS files that are requested as high as possible in the <head></head> section of the document. Otherwise the page is rendered up to the occurrence of the CSS request position in the DOM, and then rendering starts over from the top.
4) Only after the DOM is completely rendered and the requests for all assets in the page are either resolved or timed out does JavaScript execute from the onload event. IE7, and I am not sure about IE8, does not time out assets quickly if an HTTP response is not received from the asset request. This means an asset requested by JavaScript inline in the page - that is, JavaScript written into HTML tags and not contained in a function - can prevent the execution of the onload event for hours. This problem can be triggered if such inline code exists in the page and fails to execute due to a namespace collision that causes a code crash.
Of the above steps, the one that is most CPU intensive is the parsing of the DOM/CSS. If you want your page to be processed faster, then write efficient CSS by eliminating redundant instructions and consolidating CSS instructions into the fewest possible element references. Reducing the number of nodes in your DOM tree will also produce faster rendering.
Keep in mind that each asset you request from your HTML or even from your CSS/JavaScript assets is requested with a separate HTTP header. This consumes bandwidth and requires processing per request. If you want to make your page load as fast as possible then reduce the number of HTTP requests and reduce the size of your HTML. You are not doing your user experience any favors by averaging page weight at 180k from HTML alone. Many developers subscribe to some fallacy that a user makes up their mind about the quality of content on the page in 6 nanoseconds and then purges the DNS query from his server and burns his computer if displeased, so instead they provide the most beautiful possible page at 250k of HTML. Keep your HTML short and sweet so that a user can load your pages faster. Nothing improves the user experience like a fast and responsive web page.
Open your page in Firefox and get the HTTPFox addon. It will tell you all that you need.
Found this on archivist.incutio:
http://archivist.incutio.com/viewlist/css-discuss/76444
When you first request a page, your browser sends a GET request to the server, which returns the HTML to the browser. The browser then starts parsing the page (possibly before all of it has been returned).

When it finds a reference to an external entity such as a CSS file, an image file, a script file, a Flash file, or anything else external to the page (either on the same server/domain or not), it prepares to make a further GET request for that resource.

However the HTTP standard specifies that the browser should not make more than two concurrent requests to the same domain. So it puts each request to a particular domain in a queue, and as each entity is returned it starts the next one in the queue for that domain.

The time it takes for an entity to be returned depends on its size, the load the server is currently experiencing, and the activity of every single machine between the machine running the browser and the server. The list of these machines can in principle be different for every request, to the extent that one image might travel from the USA to me in the UK over the Atlantic, while another from the same server comes out via the Pacific, Asia and Europe, which takes longer. So you might get a sequence like the following, where a page has (in this order) references to three script files, and five image files, all of differing sizes:

GET script1 and script2; queue request for script3 and images1-5.
script2 arrives (it's smaller than script1): GET script3, queue images1-5.
script1 arrives; GET image1, queue images2-5.
image1 arrives, GET image2, queue images3-5.
script3 fails to arrive due to a network problem - GET script3 again (automatic retry).
image2 arrives, script3 still not here; GET image3, queue images4-5.
image3 arrives; GET image4, queue image5, script3 still on the way.
image4 arrives, GET image5;
image5 arrives.
script3 arrives.

In short: any old order, depending on what the server is doing, what the rest of the Internet is doing, and whether or not anything has errors and has to be re-fetched. This may seem like a weird way of doing things, but it would quite literally be impossible for the Internet (not just the WWW) to work with any degree of reliability if it wasn't done this way.

Also, the browser's internal queue might not fetch entities in the order they appear in the page - it's not required to by any standard.

(Oh, and don't forget caching, both in the browser and in caching proxies used by ISPs to ease the load on the network.)
If you're asking this because you want to speed up your web site, check out Yahoo's page on Best Practices for Speeding Up Your Web Site. It has a lot of best practices for speeding up your web site.
AFAIK, the browser (at least Firefox) requests every resource as soon as it parses it. If it encounters an img tag it will request that image as soon as the img tag has been parsed. And that can be even before it has received the totality of the HTML document... that is it could still be downloading the HTML document when that happens.
For Firefox, there are browser queues that apply, depending on how they are set in about:config. For example, it will not attempt to download more than 8 files at once from the same server... the additional requests will be queued. I think there are per-domain limits, per-proxy limits, and other settings, which are documented on the Mozilla website and can be set in about:config. I read somewhere that IE has no such limits.
The jQuery ready event is fired as soon as the main HTML document has been downloaded and its DOM parsed. Then the load event is fired once all linked resources (CSS, images, etc.) have been downloaded and parsed as well. This is made clear in the jQuery documentation.
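In jQuery terms the difference looks like this (a minimal sketch):
// fires when the main document's DOM has been parsed
$(document).ready(function () {
    console.log('DOM ready');
});

// fires when all linked resources (CSS, images, ...) have loaded too
$(window).on('load', function () {
    console.log('everything loaded');
});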
If you want to control the order in which all that is loaded, I believe the most reliable way to do it is through JavaScript.
Dynatrace AJAX Edition shows you the exact sequence of page loading, parsing and execution.
The chosen answer looks like it does not apply to modern browsers, at least in Firefox 52. What I observed is that requests for resources like CSS and JavaScript are issued before the HTML parser reaches the element. For example:
<html>
<head>
<!-- prints the date before parsing and blocks HTML parsing -->
<script>
console.log("start: " + (new Date()).toISOString());
for(var i=0; i<1000000000; i++) {};
</script>
<script src="jquery.js" type="text/javascript"></script>
<script src="abc.js" type="text/javascript"></script>
<link rel="stylesheets" type="text/css" href="abc.css"></link>
<style>h2{font-wight:bold;}</style>
<script>
$(document).ready(function(){
$("#img").attr("src", "kkk.png");
});
</script>
</head>
<body>
<img id="img" src="abc.jpg" style="width:400px;height:300px;"/>
<script src="kkk.js" type="text/javascript"></script>
</body>
</html>
What I found was that the start times of the requests to load the CSS and JavaScript resources were not blocked. It looks like Firefox does an HTML scan and identifies key resources (img resources are not included) before it starts to parse the HTML.

How do I make Firefox auto-refresh on file change?

Does anyone know of an extension for Firefox, or a script or some other mechanism, that can monitor one or more local files? Firefox would auto-refresh or otherwise update its canvas when it detected a change (of timestamp) in the file(s).
For editing CSS, it would be ideal if just the CSS could be reloaded, rather than a full HTML re-render.
Effectively it would enable similar behaviour to Firebug with its dynamic HTML/CSS editing, only through external files.
Live.js
From the website:
How?
Just include Live.js and it will monitor the current page including local CSS and Javascript by sending consecutive HEAD requests to the server. Changes to CSS will be applied dynamically and HTML or Javascript changes will reload the page. Try it!
Where?
Live.js works in Firefox, Chrome, Safari, Opera and IE6+ until proven otherwise. Live.js is independent of the development framework or language you use, whether it be Ruby, Handcraft, Python, Django, NET, Java, Php, Drupal, Joomla or what-have-you.
It has the huge benefit of working with IETester, dynamically refreshing each open IE tab.
Try it out by adding the following to your <head>
<script type="text/javascript" src="http://livejs.com/live.js"></script>
Have a look at FileWatcher extension:
https://addons.mozilla.org/en-US/firefox/addon/filewatcher/
it's a WebExtension, so it works with the latest Firefox
it has a native app (to be installed locally) that monitors watched files for changes using native OS calls (no polling!) and notifies the WebExtension to let it reload the web page
reload is driven by rules: a rule contains the page URL (with regular expression support) and its included/excluded local source files
open source: https://github.com/coolsoft-ita/filewatcher
DISCLAIMER: I'm the author of the extension ;)
I would recommend livejs, but it has the following advantages and disadvantages.
Advantages:
1. Easy setup
2. Works seamlessly across browsers (Live.js works in Firefox, Chrome, Safari, Opera and IE6+)
3. Doesn't add an irritating refresh interval to the browser, which matters when you want to debug while designing
4. Only refreshes when you save a change (Ctrl+S)
5. Directly saves CSS etc. from Firebug. I have not used that feature, but I read on their site http://livejs.com/ that they support it too!
Disadvantages:
1. It will not work over the file protocol, e.g. file:///C:/Users/Admin/Desktop/livejs/live.html
2. You need a server to run it, e.g. http://localhost
3. You have to remove it when deploying to staging/production
4. It doesn't work from a CDN: I tried cheating by applying the direct link http://livejs.com/live.js, but it will not work; you have to download it and keep a local copy.
Xrefresh with firebug.
Firefox has an extension called mozRepl.
Emacs can plug into this, with moz-reload-on-save-mode.
When it's set up, saving the file forces a refresh of the browser window.
There are some IDEs that contain this ability (they'll have a pane within them or some other means to auto-refresh a page on save).
If you want to do this yourself, a quick hack is to set the meta refresh on the page to a low value - one or two seconds.
<!-- Will refresh the page content every second -->
<meta http-equiv="refresh" content="1" />
You could just place a javascript interval on your page, have it query a local script which checks the last date modified of the css file, and refreshes it if it changed.
jQuery Example:
var modTime = 0;
setInterval(function() {
    $.post("isModified.php", {"file": "main.css", "time": modTime}, function(rst) {
        if (rst.time != modTime) {
            modTime = rst.time;
            // replace the stylesheet link with a cache-busted copy to reload it
            $("head link[rel='stylesheet']:eq(0)").remove();
            $("head").prepend($(document.createElement("link")).attr({
                "rel": "stylesheet",
                "href": "main.css?v=" + modTime
            }));
        }
    }, "json");
}, 5000);
Browsersync can do this from the server side / outside of the browser.
This can achieve more repeatable results / things that don't require so much clicking.
This will serve a page and refresh on change
cd static_content
browser-sync start --server --files .
It also allows a scripting mode.
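Its Node API can also watch files and trigger the reload itself; a minimal sketch (the paths are hypothetical):
// serve ./static_content and reload the browser whenever files change
var bs = require('browser-sync').create();

bs.watch('*.css').on('change', bs.reload);
bs.watch('*.html').on('change', bs.reload);

bs.init({
    server: './static_content'
});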
This is certainly hacky, but if you want to work locally without making any external request (to live.js, for example), or run any local server, I think this might be useful. This is not specific to web development, you can adopt similar strategy to any other workflow.
You will need two tiny tools (which are present in almost all distribution repos): inotify-tools and xdotool.
First get the ID of your Firefox and your editor window using xdotool.
$ xdotool search --name "Mozilla Firefox"
60817411
60817836
$ xdotool search --name "Pluma" # Pluma is my editor
94371842
Depending on the number of processes running, you will get one or more window IDs. Use xdotool windowactivate <ID> to find out which one you want (the focus changes to the respective window).
Use inotifywait -e close_write to monitor changes to your local file; when you save the file from your editor, the script changes focus to your browser, reloads with xdotool key CTRL+R, and focuses back on your editor. This is so instantaneous you will not notice a thing.
Also, inotifywait exits on change, so you might have to do it in a loop. Here is a minimum working example (in Bash in your working directory).
while /usr/bin/true
do
inotifywait -e close_write index.html;
xdotool windowactivate 60817411; # Switch to Firefox
xdotool key CTRL+R; # Reload Firefox
xdotool windowactivate 94371842 # Switch back to Pluma
done
You can use inotifywait to watch for the entire directory or some selected files in your directory.
You can write a script to automate this easily.
This works on Linux (I've tested this on Void Linux.)
You can use live.js with a tampermonkey script to avoid having to include https://livejs.com/live.js in your HTML file.
// ==UserScript==
// @name     Auto reload
// @author   weirane
// @version  0.1
// @match    http://127.0.0.1/*
// @grant    none
// ==/UserScript==
(function() {
'use strict';
if (Number(window.location.port) === 8000) {
const script = document.createElement('script');
script.src = 'https://livejs.com/live.js';
document.body.appendChild(script);
}
})();
With this tampermonkey script, the live.js script will be automatically inserted into pages whose address matches http://127.0.0.1:8000/*. You can change the port according to your needs.
I think you can solve this with AJAX requests at a set interval. Request the CSS file, and if you don't get a "Not Modified" response back, delete your CSS and load it again. For dynamic files, make a request, store the response, and then compare each new response to the latest one.
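A rough sketch of that idea for a stylesheet, polling the Last-Modified header (the file name is hypothetical):
// poll the stylesheet's Last-Modified header; on change, point the
// <link> at a cache-busted URL so the browser re-fetches it
var lastSeen = null;
setInterval(function () {
    fetch('main.css', { method: 'HEAD', cache: 'no-store' }).then(function (res) {
        var modified = res.headers.get('Last-Modified');
        if (lastSeen && modified !== lastSeen) {
            document.querySelector("link[rel='stylesheet']").href = 'main.css?v=' + Date.now();
        }
        lastSeen = modified;
    });
}, 2000);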

Common Header / Footer with static HTML

Is there a decent way with static HTML/XHTML to create common header/footer files to be displayed on each page of a site? I know you can obviously do this with PHP or server side directives, but is there any way of doing this with absolutely no dependencies on the server stitching everything together for you?
Edit: All very good answers and was what I expected. HTML is static, period. No real way to change that without something running server side or client side. I've found that Server Side Includes seem to be my best option as they are very simple and don't require scripting.
There are three ways to do what you want
Server Script
This includes something like php, asp, jsp.... But you said no to that
Server Side Includes
Your server is serving up the pages, so why not take advantage of the built-in server side includes? Each server has its own way to do this; take advantage of it.
Client Side Include
This solution has you calling back to the server after the page has already been loaded on the client.
The jQuery load() function can be used to include a common header and footer. The code should look like:
<script>
$("#header").load("header.html");
$("#footer").load("footer.html");
</script>
You can find a demo here.
Since HTML does not have an "include" directive, I can think only of three workarounds
Frames
Javascript
CSS
A little comment on each of the methods.
Frames can be either standard frames or iFrames. Either way, you will have to specify a fixed height for them, so this might not be the solution you are looking for.
Javascript is a pretty broad subject and there probably exist many ways how one might use it to achieve the desired effect. Off the top of my head however I can think of two ways:
Full-blown AJAX request, which requests the header/footer and then places them in the right place of the page;
<script type="text/javascript" src="header.js"> which has something like this in it: document.write('My header goes here');
Doing it via CSS would really be an abuse. CSS has the content property, which allows you to insert some content, although it's not really intended to be used like this. Also, I'm not sure about browser support for this construct.
The simplest way to do that is using plain HTML.
You can use one of these ways:
<embed type="text/html" src="header.html">
or:
<object name="foo" type="text/html" data="header.html"></object>
You can do it with javascript, and I don't think it needs to be that fancy.
If you have a header.js file and a footer.js.
Then the contents of header.js could be something like
document.write("<div class='header'>header content</div> etc...")
Remember to escape any nested quote characters in the string you are writing.
You could then call that from your static templates with
<script type="text/javascript" src="header.js"></script>
and similarly for the footer.js.
Note: I am not recommending this solution - it's a hack and has a number of drawbacks (poor for SEO and usability just for starters) - but it does meet the requirements of the questioner.
You can do this easily using jQuery; no need for PHP for such a simple task.
Just include this once in your webpage:
$(function(){
    // load each element's contents from the HTML file named
    // in its data-load attribute
    $("[data-load]").each(function(){
        $(this).load($(this).data("load"));
    });
});
Now use data-load on any element to pull its contents from an external HTML file. You just have to add a line to your HTML code where you want the content to be placed.
example
<nav data-load="sidepanel.html"></nav>
<nav data-load="footer.html"></nav>
The best solution is using a static site generator which has templating/includes support. I use Hammer for Mac; it is great. There's also Guard, a Ruby gem that monitors file changes, compiles Sass, concatenates files, and probably does includes.
The most practical way is to use Server Side Include. It's very easy to implement and saves tons of work when you have more than a couple pages.
HTML frames, but it is not an ideal solution. You would essentially be accessing 3 separate HTML pages at once.
Your other option is to use AJAX I think.
You could use a task runner such as gulp or grunt.
There is an NPM gulp package that does file including on the fly and compiles the result into an output HTML file. You can even pass values through to your partials.
https://www.npmjs.com/package/gulp-file-include
<!DOCTYPE html>
<html>
<body>
##include('./header.html')
##include('./main.html')
</body>
</html>
an example of a gulp task:
var fileinclude = require('gulp-file-include'),
    gulp = require('gulp');

gulp.task('html', function() {
    return gulp.src(['./src/html/views/*.html'])
        .pipe(fileinclude({
            prefix: '##',
            basepath: 'src/html'
        }))
        .pipe(gulp.dest('./build'));
});
You can try loading them via the client-side, like this:
<!DOCTYPE html>
<html>
<head>
<!-- ... -->
</head>
<body>
<div id="headerID"> <!-- your header --> </div>
<div id="pageID"> <!-- your header --> </div>
<div id="footerID"> <!-- your header --> </div>
<script>
$("#headerID").load("header.html");
$("#pageID").load("page.html");
$("#footerID").load("footer.html");
</script>
</body>
</html>
NOTE: the content will load from top to bottom and replace the content of the container you load it into. This example uses jQuery's load(), so jQuery must be included before the script runs.
No. Static HTML files don't change. You could potentially do this with some fancy Javascript AJAXy solution but that would be bad.
Short of using a local templating system (many hundreds now exist in every scripting language), or even a homebrewed one built with sed or m4, and sending the result over to your server: no, you'd need at least SSI.
The only way to include another file with just static HTML is an iframe. I wouldn't consider it a very good solution for headers and footers. If your server doesn't support PHP or SSI for some bizarre reason, you could use PHP and preprocess it locally before upload. I would consider that a better solution than iframes.
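If you do go the iframe route, the markup is at least trivial; the fixed height is the part you have to tune by hand (the values here are guesses):
<!-- static include via iframe; the height must be hard-coded -->
<iframe src="header.html" width="100%" height="80" frameborder="0" scrolling="no"></iframe>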