What is the order of HTML assets when a page loads?

What is the order of loading?
PHP
HTML
JavaScript
CSS
jQuery
AJAX
Please give me a little explanation too.
Thanks,

1) HTML is downloaded.
2) HTML is parsed progressively. When a request for an asset is reached, the browser will attempt to download the asset. A default configuration for most HTTP servers and most browsers is to process only two requests in parallel. IE can be reconfigured to download an unlimited number of assets in parallel. Steve Souders has been able to issue over 100 requests in parallel in IE. The exception is that script requests block parallel asset requests in IE. This is why it is highly suggested to put all JavaScript in external JavaScript files and place the request just prior to the closing body tag in the HTML.
3) Once the HTML is parsed the DOM is rendered. CSS is rendered in parallel to the rendering of the DOM in nearly all user agents. As a result it is strongly recommended to put all CSS code into external CSS files that are requested as high as possible in the head section of the document (see the skeleton after these steps). Otherwise the page is rendered up to the occurrence of the CSS request position in the DOM, and then rendering starts over from the top.
4) Only after the DOM is completely rendered, and requests for all assets in the page are either resolved or timed out, does JavaScript execute from the onload event. IE7 (and I am not sure about IE8) does not time out assets quickly if an HTTP response is not received for the asset request. This means an asset requested by JavaScript inline to the page (that is, JavaScript written into HTML tags and not contained in a function) can prevent the execution of the onload event for hours. This problem can be triggered if such inline code exists in the page and fails to execute due to a namespace collision that causes a code crash.
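To make the placement advice in steps 2 and 3 concrete, a minimal page skeleton might look like this (file names are made up):
<!DOCTYPE html>
<html>
<head>
  <!-- stylesheet requested as early as possible, so rendering never restarts -->
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <p>Page content...</p>
  <!-- external script requested last, so it cannot block other asset downloads -->
  <script src="app.js"></script>
</body>
</html>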
Of the above steps, the one that is most CPU intensive is the parsing of the DOM/CSS. If you want your page to be processed faster, write efficient CSS by eliminating redundant instructions and consolidating CSS instructions into the fewest possible element references. Reducing the number of nodes in your DOM tree will also produce faster rendering.
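For example, consolidating rules that repeat the same declarations (selectors here are purely illustrative):
/* redundant: the same declarations repeated three times */
h1 { margin: 0; color: #333; }
h2 { margin: 0; color: #333; }
h3 { margin: 0; color: #333; }
/* consolidated: one rule, one set of declarations */
h1, h2, h3 { margin: 0; color: #333; }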
Keep in mind that each asset you request from your HTML, or even from your CSS/JavaScript assets, is fetched with a separate HTTP request, each carrying its own headers. This consumes bandwidth and requires processing per request. If you want to make your page load as fast as possible, reduce the number of HTTP requests and reduce the size of your HTML. You are not doing your user experience any favors by averaging 180k of page weight from HTML alone. Many developers subscribe to the fallacy that a user makes up their mind about the quality of content on the page in 6 nanoseconds, then purges the DNS query from his server and burns his computer if displeased, so instead they provide the most beautiful possible page at 250k of HTML. Keep your HTML short and sweet so that users can load your pages faster. Nothing improves the user experience like a fast and responsive web page.
So, in short:
The HTML will load first, obviously, as it tells the browser about the page's additional requirements: images, scripts, external stylesheets, etc.
After that, the load order is fairly unpredictable: most browsers will open multiple connections, and the order in which the assets return cannot be relied upon.


HTML limit image downloads at the same time

I'm looking for the best solution to throttle image downloads in an SPA application.
Problem:
On a slow network, entering the details page triggers a lot of image downloads that max out the browser's connections, which makes the UI unresponsive, since it needs those connections to make AJAX requests on button clicks.
I'm looking for some HTTP/browser API to throttle image downloads at a given time. Do you know if something like this exists?
Another approach I would like is the ability to kill all GET requests on a button click (that could also solve this issue).
I cannot find such APIs. Maybe inserting these images as 'prefetch' could be a good idea?
Or do I have to lazy-load these images as <img data-src="url"> and then hand them to a custom fetcher that makes up to 3 requests at a time?
EDIT: found some related issues:
javascript: cancel all kinds of requests
How do I abort image <img> load requests without using window.stop()
Best is to use a lazy-loading module/library for your SPA framework. For example, this library for React seems to be widely used: https://github.com/twobin/react-lazyload. I would definitely use this approach.
Or build a lazy image loader yourself: insert a placeholder image (or a low-quality image) that you then replace with the final image, as in the sketch below.
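A rough sketch of such a loader, assuming images are marked up as <img data-src="..."> as the question suggests (the three-at-a-time limit is arbitrary):
// Throttled image loader: at most MAX_PARALLEL downloads in flight at once.
var queue = Array.prototype.slice.call(document.querySelectorAll('img[data-src]'));
var active = 0;
var MAX_PARALLEL = 3;
function loadNext() {
  if (active >= MAX_PARALLEL || queue.length === 0) return;
  var img = queue.shift();
  active++;
  img.onload = img.onerror = function () {
    active--;   // a slot opened up,
    loadNext(); // so start the next download
  };
  img.src = img.getAttribute('data-src'); // setting src starts the download
  loadNext(); // fill the remaining slots
}
loadNext();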
Yes, parallel downloads per domain are limited. But you can create a second domain (a subdomain) where you host your images; this can effectively increase the number of parallel downloads (but of course not the total bandwidth).
Preloading: just as a note, if you have a large app with lots of navigation paths/screens, it might be a waste of resources to download images for pages (or sections of a page) the user will never visit. I would rather not do this, as mobile traffic is quite costly!
Also have a look at the importance attribute of the img tag, which indicates the relative download importance of the resource. Maybe this helps? I have never used it before, though. From MDN:
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img
importance
Indicates the relative download importance of the resource. Priority hints allow the values:
auto: no preference. The browser may use its own heuristics to prioritize the image.
high: the image is of high priority.
low: the image is of low priority.
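Hedging that this attribute was experimental at the time of writing, usage would look something like this (file names invented):
<img src="hero.jpg" importance="high" alt="Above-the-fold image">
<img src="extra.jpg" importance="low" alt="Below-the-fold image">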

Polymer Hybrid Rendering

I'm interested in improving the initial page load of an app that is currently rendering entirely clientside. Right now the app loads an initial app frame and then, once the initial page is loaded, fires a request to the server to fetch the data. While the request is processing, the user effectively sees a partially rendered page. Once the data comes back from the server, the page finishes rendering on the client.
What is the best way to remove the delay caused by fetching the initial page and data separately? Should I just bootstrap the data into the initial page load, or should I leverage some sort of server side templating engine (Jade, Handlebars, etc)? It seems like doing the latter means not being able to leverage features like dom-repeat as easily, thus losing the ability to have Polymer handle some of the more complex re-rendering scenarios.
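For reference, "bootstrapping the data" usually means having the server render the initial payload straight into the page, roughly like this (the global name is arbitrary):
<script>
  // Emitted by the server into the initial HTML, so the client
  // can render immediately without a second round trip.
  window.__INITIAL_DATA__ = { "items": [ /* ... server-rendered JSON ... */ ] };
</script>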
I had the same problem: the page took 4.5 s to load because it had to receive data from the server. I looked for ways to make Polymer faster, and I think I found them; now I load the page in 1.2 s (with no cache), and the request to the server takes 0.4 s.
Steps To Make Polymer Faster
1) Don't render things you don't need. Use dom-if to avoid rendering; for example, if you have pages that only show after the user clicks a button, do not render them up front.
2) You can start the request to the server before the body is parsed, so you receive the response sooner (see the sketch after this list).
3) If you want the user to feel that the site loads faster, you can remove the unresolved attribute from the body. The user will then see the components during the loading process (something, rather than a blank screen).
4) Use this script (before importing Polymer):
window.Polymer = window.Polymer || { dom: 'shadow' };
This makes the browser use the shadow DOM (if supported) instead of the shady DOM. Shadow DOM is faster, but not all browsers support it yet.
5) Use vulcanize (https://github.com/polymer/vulcanize) to merge all the import files and also minify the result.
6) For long lists you can use iron-list; it renders only what is on the screen.
7) If you use script imports, you can use the async attribute so they don't block rendering.
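A minimal sketch of point 2, kicking off the data request from the head before the body is parsed (the URL and global name are made up):
<head>
  <script>
    // Start the data request as early as possible and stash it on a
    // global; components pick up the response once they are ready.
    var earlyRequest = new XMLHttpRequest();
    earlyRequest.open('GET', '/api/initial-data');
    earlyRequest.send();
    window.earlyRequest = earlyRequest;
  </script>
  <!-- Polymer imports and elements follow -->
</head>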
Edit
If you don't want to use iron-list, there is a new option in dom-repeat:
<template is="dom-repeat" items="{{items}}" initial-count="20">
</template>
This does not block the thread for a long time; it renders the list in chunks.

SVG stacking, anchor elements, and HTTP fetching

I have a series of overlapping questions, the intersection of which can best be asked as:
Under what circumstances does a # character (an anchor) in a URL trigger an HTTP fetch, in the context of either an <a href> or an <img src>?
Normally, should:
http://foo.com/bar.html#1
and
http://foo.com/bar.html#2
require two different HTTP fetches? I would think the answer should definitely be NO.
More specific details:
The situation that prompted this question was my first attempt to experiment with SVG stacking: a technique where multiple icons can be embedded within a single SVG file, so that only a single HTTP request is necessary. Essentially, the idea is that you place multiple SVG icons within a single file, and use CSS to hide all of them except the one that is selected using a CSS :target selector.
You can then select an individual icon using the # character in the URL when you write the img element in the HTML:
<img
src="stacked-icons.svg#icon3"
width="80"
height="60"
alt="SVG Stacked Image"
/>
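For context, the inside of stacked-icons.svg would look roughly like this, with each icon hidden unless its id matches the URL fragment (icon contents abbreviated):
<svg xmlns="http://www.w3.org/2000/svg">
  <style>
    .icon { display: none; }          /* hide every icon by default */
    .icon:target { display: inline; } /* show only the one named in the fragment */
  </style>
  <g id="icon1" class="icon"><!-- first icon's shapes --></g>
  <g id="icon2" class="icon"><!-- second icon's shapes --></g>
  <g id="icon3" class="icon"><!-- third icon's shapes --></g>
</svg>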
When I try this out on Chrome it works perfectly. A single HTTP request is made, and multiple icons can be displayed via the same SVG URL, using different anchors/targets.
However, when I try this with Firefox (28), I see via the Console that multiple HTTP requests are made - one for each svg URL! So what I see is something like:
GET http://myhost.com/img/stacked-icons.svg#icon1
GET http://myhost.com/img/stacked-icons.svg#icon2
GET http://myhost.com/img/stacked-icons.svg#icon3
GET http://myhost.com/img/stacked-icons.svg#icon4
GET http://myhost.com/img/stacked-icons.svg#icon5
...which of course defeats the purpose of using SVG stacking in the first place.
Is there some reason Firefox is making a separate HTTP request for each URL instead of simply fetching img/stacked-icons.svg once like Chrome does?
This leads into the broader question of - what rules determine whether an # character in the URL should trigger an HTTP request?
Here's a demo in Plunker to help sort out some of these issues. It displays the same stacked file twice, once via stack.svg#circ and once via stack.svg#rect.
A URI has a few basic components:
Protocol - determines how to send the request
Domain - where to send the request
Location - file path within the domain
Fragment - which part of the document to look in
Media Fragment URI
A fragment is simply identifying a portion of the entire file.
Depending on the implementation of the Media Fragment URI spec, it actually might be totally fair game for the browser to send along the Fragment Identifier. Think about a streaming video, some of which has been cached on the client. If the request is for /video.ogv#t=10,20 then the server can save space by only sending back the relevant portion/segment/fragment. Moreover, if the applicable portion of the media is already cached on the client, then the browser can prevent a round trip.
Think Round Trips, Not Requests
When a browser issues a GET request, it does not necessarily mean that it needs to grab a fresh copy of the file all the way from the server. If it already has a cached version, the request could be answered immediately.
HTTP Status Codes
200 - OK - Got the resource and returned it from the server.
304 - Not Modified - The resource in the cache hasn't changed since the previous request, so there's no need to get a fresh copy.
Caching Disabled
Various things can affect whether or not a client will perform caching: the way the request was issued (F5 for a soft refresh; Ctrl + F5 for a hard refresh), the settings in the browser, any development tools that add attributes to requests, and the way the server handles those attributes. Often, when a browser's developer tools are open, it will automatically disable caching so you can easily test changes to files. If you're trying to observe caching behavior specifically, make sure you don't have any developer settings that are interfering with this.
When comparing requests across browsers, to help mitigate the differences between developer tool UIs, you should use a tool like Fiddler to inspect the actual HTTP requests and responses that go out over the wire. I'll show you both for the simple Plunker example. When the page loads, it should request two different ids in the same stacked SVG file.
Here are side-by-side requests of the same test page in Chrome 39, FF 34, and IE 11 (screenshots omitted).
Caching Enabled
But we want to test what would happen on a normal client where caching is enabled. To do this, update your dev tools for each browser, or go to fiddler and Rules > Performance > and uncheck Disable Caching.
Now all the files are returned from the local cache, regardless of fragment ID (screenshots omitted).
The developer tools for a particular browser might try to display the fragment id for your own benefit, but Fiddler should always display the most accurate address that is actually requested. Every browser I tested omitted the fragment part of the address from the request.
Bundling Requests
Interestingly, Chrome seems to be smart enough to prevent a second request for the same resource, but FF and IE fail to do so when a fragment is part of the address. This is equally true for SVGs and PNGs. Consequently, the first time a page is ever requested, the browser will load one file for each time the SVG stack is actually used. Thereafter, it's happy to take the version from the cache, but this will hurt performance for new viewers.
Final Score
CON: First Trip - SVG Stacks are not fully supported in all browsers. One request made per instance.
PRO: Second Trip - SVG resources will be cached appropriately
Additional Resources
simurai.com is widely thought to be the first application of SVG stacks, and the article explains some browser limitations for which fixes are currently in progress.
Sven Hofmann has a great article about SVG Stacks that goes over some implementation models.
Here's the source code in Github if you'd like to fork it or play around with samples
Here's the official spec for Fragment Identifiers within SVG images
Quick look at Browser Support for SVG Fragments
Bugs
Typically, identical resources requested for a page will be bundled into a single request that satisfies all requesting elements. Chrome appears to have already fixed this, but I've opened bugs for FF and IE so they may one day fix it too.
Chrome - This very old bug which highlighted the issue appears to have been resolved for WebKit.
Firefox - Bug Added Here
Internet Explorer - Bug Added Here

Lazy-loading HTML at the byte level

I'm aware of things like infinite scroll, where I can load the (n+1)th page of HTML and throw it into the current document, but are there any methods for loading a single very large HTML document in chunks and displaying those chunks as they arrive?
I'm imagining a 'loader' HTML document, with suitable JavaScript to make a Range-type query (against a server that advertises Accept-Ranges) for the super-large HTML, collect the first 50 KB, pass all the complete HTML elements to the page DOM, and keep the rest in a buffer while the next 50 KB are requested. This process would repeat until the page was fully loaded.
This kind of functionality could be provided entirely by a browser that displays page content as it finishes downloading, but I'm not sure how many can do this and remain responsive (i.e. usable) while the rest arrives.
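A rough sketch of that loader idea, assuming the server honors Range requests and the document is a flat sequence of <article> elements (both assumptions, for illustration; character counts stand in for byte counts here):
// Fetch the large document in 50 KB slices and flush complete elements.
var CHUNK = 50 * 1024;
var offset = 0;
var buffer = '';
function loadNextChunk() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'big-page.html');
  xhr.setRequestHeader('Range', 'bytes=' + offset + '-' + (offset + CHUNK - 1));
  xhr.onload = function () {
    buffer += xhr.responseText;
    offset += CHUNK;
    // Append every complete <article>...</article> currently in the buffer.
    var end;
    while ((end = buffer.indexOf('</article>')) !== -1) {
      var cut = end + '</article>'.length;
      document.body.insertAdjacentHTML('beforeend', buffer.slice(0, cut));
      buffer = buffer.slice(cut);
    }
    // 206 Partial Content with a full-sized slice means there is more to get.
    if (xhr.status === 206 && xhr.responseText.length >= CHUNK) loadNextChunk();
  };
  xhr.send();
}
loadNextChunk();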
Any thoughts welcome! Thanks

DOMContentLoaded / load events: how to increase the speed

I'm trying to do everything I can to increase the load speed of my pages, and in particular the AJAX-loaded components.
In Firebug, my output looks like this (Net panel screenshot omitted):
I'm not entirely sure if I'm reading this correctly, but it is either +2.19 s for DOMContentLoaded (or it may only be 0.8 s if we are supposed to subtract the time spent waiting for the response).
But then 4.67 s for the load event.
Both of these seem like significantly long loading times.
I haven't been able to figure out what would cause these sorts of times.
These stats are from loading a straight HTML page which I normally load via AJAX.
But this is just the HTML. No JavaScript in the page, and the page is being loaded directly, not through an AJAX request.
However, when I do load this page via AJAX, I notice a significant delay while the page attempts to load.
Any suggestions?
I've been looking through the HTML in IE's DebugBar, and it all looks surprisingly clean.
There are 30 images in the page. Could that be what the load event is waiting for? And if so, is there any way to speed this up?
In particular, as the user will never load this page directly, but only via an AJAX request, is there a way to improve the page's loading performance over AJAX? The problem isn't with the AJAX loading script, but with the HTML page specifically.
Edit:
The results of the page are loaded into a jQuery Cycle slideshow where multiple images are visible at a single time, so using a lazy loader provides a pretty horrible user experience (assuming it is the images that are causing this problem).
Those Firebug stats are telling you that:
It takes 2.1 seconds from the time you started the request for your server-side code to start returning a response
It then takes 19ms to download the response (i.e. the HTML)
71ms or so after that, the DOMContentLoaded event fires
After the page has completely loaded (after the images are all downloaded), the load event is fired.
DOMContentLoaded fires as soon as the DOM is ready. The load event waits until the whole page is loaded, including any external resources like images.
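You can watch the gap between the two events yourself with a couple of listeners (illustration only, in browsers that support addEventListener):
var start = Date.now();
document.addEventListener('DOMContentLoaded', function () {
  console.log('DOMContentLoaded after', Date.now() - start, 'ms'); // DOM parsed
});
window.addEventListener('load', function () {
  console.log('load after', Date.now() - start, 'ms'); // images etc. finished
});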
Most of the time in your request (other than downloading the images) is spent waiting for the server to construct the HTML. Maybe your server-side code can be optimized to run faster?