I'm aware of things like infinite scroll, where I can load the (n+1)th page of HTML and throw it into the current document, but are there any methods for loading a single very large HTML document in chunks and displaying those chunks as they arrive?
I'm imagining a 'loader' HTML document, with suitable JavaScript to make an Accept-Ranges-type query against the super-large HTML, collect the first 50 KB, pass all the complete HTML elements to the page DOM, and keep the rest in a buffer while the next 50 KB are requested. This process would repeat until the page was fully loaded. Something like the sketch below.
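For illustration, a minimal sketch of that loader, assuming the server honors Range requests (Accept-Ranges: bytes); the URL, the container ID, and the way complete elements are detected are placeholders and simplifications, not a tested recipe:

```javascript
const CHUNK_SIZE = 50 * 1024; // 50 KB per request, as described above

async function loadInChunks(url, container) {
  let offset = 0;
  let buffer = '';
  while (true) {
    // Request the next slice; assumes the server returns 206 Partial Content.
    const response = await fetch(url, {
      headers: { Range: `bytes=${offset}-${offset + CHUNK_SIZE - 1}` }
    });
    if (!response.ok) break; // e.g. 416 once we've read past the end
    const chunk = await response.text();
    buffer += chunk;
    // Crude heuristic: flush everything up to the last '>' and keep the
    // (possibly incomplete) tail buffered for the next iteration.
    const cut = buffer.lastIndexOf('>') + 1;
    if (cut > 0) {
      container.insertAdjacentHTML('beforeend', buffer.slice(0, cut));
      buffer = buffer.slice(cut);
    }
    // Short read means end of document (byte-count vs. string-length
    // subtleties with multi-byte characters are glossed over here).
    if (chunk.length < CHUNK_SIZE) break;
    offset += CHUNK_SIZE;
  }
}

// Hypothetical usage: both names are placeholders.
loadInChunks('/very-large-page.html', document.getElementById('content'));
```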
This kind of functionality could be provided entirely by a browser that displayed page content as it finished downloading - but I'm not sure how many can do this and remain responsive (i.e. usable) while the rest arrives.
Any thoughts welcome! Thanks
I am using a base64-encoded data URI image, directly inline in the HTML page, as a background image.
I read everywhere that, unlike unencoded images, they're not stored in the cache, which is indeed sometimes a performance problem.
But when looking at my Chrome dev tools, I see "cached in memory"...
Are base64 images inlined in the HTML cached?
If yes, my question is: I originally checked because I wanted to analyze the time to load the base64 inlined image. Am I now unable to see, on a first download/visit, the time spent decoding and loading it? I tried clearing my browser history, but unlike other caches (for standard assets like images), it remains. That is to say, even if I clear my browser history/caches, the next time I load the page Chrome dev tools still shows 0ms/"from memory cache". How can I clear it to see what happens for first-time visitors?
Note: of course "Disable cache" is selected inside Chrome dev tools.
Also, I am not sure it is important, but here are the base64 and HTML code (it's pretty small: less than 900 bytes):
<div style="background-image: url('data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD/2wBDABALDA4MChAODQ4SERATGCgaGBYWGDEjJR0oOjM9PDkzODdASFxOQERXRTc4UG1RV19iZ2hnPk1xeXBkeFxlZ2P/2wBDARESEhgVGC8aGi9jQjhCY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2P/wAARCAAKAB4DAREAAhEBAxEB/8QAFgABAQEAAAAAAAAAAAAAAAAABQYE/8QAJBAAAgEEAQMFAQAAAAAAAAAAAQIDAAQREgUTITEyQUJRcZL/xAAWAQEBAQAAAAAAAAAAAAAAAAAEAgP/xAAdEQACAgIDAQAAAAAAAAAAAAAAAQIDESEEInEx/9oADAMBAAIRAxEAPwCOtrYGJmdY36iEIerqUOR3I9/yqRJssYAk2jakg47HIpUFoNYygsZY7PmYmcqge2dYiQMb/Xfxmj8pPRvxWt5Bb6BYbyUB9hIRJqRgrnPmpqei7liQInmqiRIWsPUKVENIT5FQZeJJbUfId/SUW5934IpXReg7Mz8ldFyWJkfJJz8qx+JG7WWz/9k=');" ><a href="/home">
When I repeat your test, Chrome makes two requests. The first one fetches the HTML page and is a normal request. The second fetches the image and is served from memory. That makes sense, as the image was included in the first request, embedded in the HTML itself.
In other words, inlined images are not cached unless the page they live in is cached.
What is the order of loading?
PHP
HTML
JavaScript
CSS
jQuery
AJAX
Please give me a little explanation too.
Thanks,
1) HTML is downloaded.
2) HTML is parsed progressively. When a request for an asset is reached, the browser will attempt to download the asset. A default configuration for most HTTP servers and most browsers is to process only two requests in parallel. IE can be reconfigured to download an unlimited number of assets in parallel. Steve Souders has been able to download over 100 requests in parallel on IE. The exception is that script requests block parallel asset requests in IE. This is why it is highly suggested to put all JavaScript in external JavaScript files and put the request just prior to the closing body tag in the HTML (see the sketch below).
3) Once the HTML is parsed the DOM is rendered. CSS is rendered in parallel to the rendering of the DOM in nearly all user agents. As a result it is strongly recommended to put all CSS code into external CSS files that are requested as high as possible in the <head> section of the document. Otherwise the page is rendered up to the occurrence of the CSS request position in the DOM and then rendering starts over from the top.
4) Only after the DOM is completely rendered and requests for all assets in the page are either resolved or time out does JavaScript execute from the onload event. IE7, and I am not sure about IE8, does not time out assets quickly if an HTTP response is not received from the asset request. This means an asset requested by JavaScript inline to the page, that is JavaScript written into HTML tags that is not contained in a function, can prevent the execution of the onload event for hours. This problem can be triggered if such inline code exists in the page and fails to execute due to a namespace collision that causes a code crash.
Of the above steps the one that is most CPU intensive is the parsing of the DOM/CSS. If you want your page to be processed faster then write efficient CSS by eliminating redundant instructions and consolidating CSS instructions into the fewest possible element references. Reducing the number of nodes in your DOM tree will also produce faster rendering.
Keep in mind that each asset you request from your HTML or even from your CSS/JavaScript assets is requested with a separate HTTP header. This consumes bandwidth and requires processing per request. If you want to make your page load as fast as possible then reduce the number of HTTP requests and reduce the size of your HTML. You are not doing your user experience any favors by averaging page weight at 180k from HTML alone. Many developers subscribe to some fallacy that a user makes up their mind about the quality of content on the page in 6 nanoseconds and then purges the DNS query from his server and burns his computer if displeased, so instead they provide the most beautiful possible page at 250k of HTML. Keep your HTML short and sweet so that a user can load your pages faster. Nothing improves the user experience like a fast and responsive web page.
~source
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
So, in short:
The HTML will load first, obviously, as this instructs the browser on the extended requirements: images, scripts, external stylesheets, etc.
After that, the load order is pretty random - multiple connections will be initiated by most browsers and the order in which they return cannot be predicted.
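A minimal sketch of the placement recommended in the answer above (file names are placeholders):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Stylesheet requested as early as possible so CSS renders in parallel. -->
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <p>Page content…</p>
  <!-- External script requested just before the closing body tag so it
       cannot block the parallel download of the assets above. -->
  <script src="app.js"></script>
</body>
</html>
```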
I built a script that replaced all inline images with data URIs to reduce HTTP requests and speed up loading time on mobile devices.
Unfortunately I experienced slower loading. I figure it is because the HTML file was bigger (around 100 KB instead of around 5 KB)? Or is there something else about data URIs that slows down the page load?
Must the browser complete the download of the full document before it can load linked sources in it? Or will linked sources, for example CSS and JavaScript at the top of the document, be loaded before the browser has completed the full document?
How does it work with CSS? Must the browser load the complete CSS file before applying all the CSS settings?
If so, is it better to have separate CSS files for data URIs, like this:
Load CSS for structure (no data URIs)
Load CSS for background images (all background images in data URI format)
Will a "separate cached jpg file" load faster than a "URI-based image included in a cached CSS file"?
Any other suggestions on how to use data URIs?
mobify.com published a blog post: On Mobile, Data URIs are 6x Slower than Source Linking (New Research)
[…]
So you can imagine my surprise to discover, when measuring the performance of hundreds of thousands of mobile page views, that loading images using a data URI is on average 6x slower than using a binary source link such as an img tag with an src attribute!
I didn’t dig into this post, but I guess part of the problem might be the required CPU power for decoding base64-encoded data URIs.
The author recommends to use data URIs only "infrequently for small images".
HTML contents are loaded in the order they appear in the HTML file.
Data URIs of large size slow down the page.
Large data URIs are not supported in old IE browsers (IE6).
Data URIs are recommended, and normally used, for images smaller than 20 KB.
You also have the option of image, JS and CSS compression to increase page speed.
TL;DR: using data URIs will delay loading the HTML if the images are large
I built a script that replaced all inline images with data URIs to reduce HTTP requests and speed up loading time on mobile devices.
This is a good idea if the time to perform requests is greater than the time to download the images, since the browser needs to download the image data in any case (whether as a data URI or not). If the images are embedded in the HTML as data URIs, you will increase the total size of the HTML by the sum of the sizes of all of the images (plus about 33% for the data URI encoding).
E.g. if the time to open a connection for a new request is 1s, your images take 0.2s each to download, you have 10 images, and your HTML takes 0.5s to download, then it will take:
1s + 0.5s = 1.5s for the HTML alone
10 x (1 + 0.2)s = 12s for the images
If you encoded the images into the HTML as data URIs (which are 33% larger):
1s + 0.5s + (10 x 0.2s x 1.33) = 4.2s
(and no external requests for images)
An important factor here is how long it takes to get the whole HTML source of the page (1.5s vs 4.2s). If it's critical that you have the images when the page is drawn, data URIs would be a good optimization. If it's acceptable to draw most of the page with the images loaded asynchronously, it might be better to optimize for downloading the HTML completely first.
And if your images are large, it's much worse to encode them as data URIs. If each image takes 1 second to download and the other factors are the same:
With external images:
1s + 0.5s = 1.5s for the HTML alone
10 x (1s + 1s) = 20s for the images
With data URIs:
1s + 0.5s + (10 x 1s x 1.33) = 14.8s until the HTML is downloaded!
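As a toy model of the arithmetic above (the inputs are the example's illustrative numbers, not measurements):

```javascript
// Estimate page-load components for external images vs. data URIs.
function loadTimes({ connectTime, htmlTime, imageTime, imageCount }) {
  return {
    external: {
      html: connectTime + htmlTime,                  // HTML alone
      images: imageCount * (connectTime + imageTime) // each image needs its own request
    },
    // base64 encoding adds roughly 33% to each image's transfer time,
    // and all of it lands inside the HTML itself.
    dataUri: {
      html: connectTime + htmlTime + imageCount * imageTime * 1.33
    }
  };
}

console.log(loadTimes({ connectTime: 1, htmlTime: 0.5, imageTime: 0.2, imageCount: 10 }));
// external: html 1.5s, images 12s; dataUri: html ≈ 4.2s
console.log(loadTimes({ connectTime: 1, htmlTime: 0.5, imageTime: 1, imageCount: 10 }));
// external: html 1.5s, images 20s; dataUri: html ≈ 14.8s
```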
I figure it is because the HTML file was bigger (around 100 KB instead of around 5 KB)?
As the browser downloads the HTML source of the page it begins interpreting and displaying it. When it reaches elements that refer to a separate resource, like images with external URIs (not data URIs) and (async) script tags, it will begin or queue downloading them, but continue downloading and processing the HTML of the page.
This means that as the page downloads, the user sees more and more of it, albeit without images and sometimes fonts loaded.
When the browser reaches an image tag with a data URI, it has to download and parse the whole data URI string before it can process anything after that in the HTML source. So if your original HTML was 1kb and you embed an image data URI of 1MB, there's going to be a pause when the browser reaches that image tag as it downloads the 1MB of HTML source occupied by the data URI-encoded image, before it processes any more of the page's HTML.
Must the browser complete the download of the full document before it can load linked sources in it? Or will linked sources, for example CSS and JavaScript at the top of the document, be loaded before the browser has completed the full document?
Linked sources have their own loading and parsing logic. <script> tags will be loaded synchronously - blocking the browser from parsing any more of the HTML source until the referenced script is loaded and executed - unless they have the async attribute. External stylesheets will be downloaded in parallel but will block rendering until they're loaded.
There's a good explanation here.
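For illustration, a minimal sketch of those loading behaviors (file names are placeholders):

```html
<!-- Parsing stops here until the script is downloaded and executed. -->
<script src="blocking.js"></script>

<!-- Downloaded in parallel with parsing; executed as soon as it arrives. -->
<script async src="independent.js"></script>

<!-- Downloaded in parallel with parsing, but blocks rendering until loaded. -->
<link rel="stylesheet" href="styles.css">
```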
If so, is it better to have separate CSS files for data URIs, like this:
Load CSS for structure (no data URIs)
Load CSS for background images (all background images in data URI format)
Will a "separate cached jpg file" load faster than a "URI-based image included in a cached CSS file"?
Any other suggestions on how to use data URIs?
It depends what you want to optimize for. Are you more interested in time to first render, even if the page is incomplete? Or time until the whole page, with all images, is rendered in full?
Like any tool, there's no blanket rule that will apply, like "always use data URIs" or "never use data URIs". A good rule of thumb is to use data URIs for small images like icons. Webpack's url-loader implements this logic automatically, using data URIs for files up to a specified size and otherwise using external URLs.
I have a single 3.3 KB image that I display in 15 different HTML img tags in my web page.
I am wondering if the base64-encoded method is a bad idea for this use case, since it makes the page significantly heavier to load (since I display the image 15 times) than using a "classic URL" would.
However, I am not sure that a bigger HTML page loads more slowly than a smaller one which has to fetch the image from the server. What is theoretically the best choice here?
OK,
If you load the base64 image in the page 15x then you add about 510,000 bits of data to the total page size. This data has to be transferred from the server to the client and will take just over 0.5 seconds (over a 1 Mb/s line), plus the propagation delay for the page, which will be around 75ms. So the total is around (approx. 500ms + 75ms) => call it 600ms.
If you put the image on the server and reference it by URL from the various parts of the page, the page has to transmit around 34,000 bits and, once the image is downloaded, will pull it from the local download cache a further 14 times, plus a propagation delay for the image and a propagation delay for the page. Getting the image from cache takes almost no time (typically < 1ms). The overall expected cost on a 1 Mb/s line would be roughly (34ms + 75ms + 14ms + 75ms) => about 200ms.
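The same back-of-the-envelope comparison as a sketch in code (all numbers are the rough estimates used above, not measurements):

```javascript
const LINE_BPS = 1_000_000; // 1 Mb/s line
const PROP_MS = 75;         // propagation delay per request

// 15 inline copies: ~34,000 bits of base64 each, all inside the HTML.
const inlineMs = (15 * 34_000 / LINE_BPS) * 1000 + PROP_MS;

// One external image: downloaded once, then 14 cache hits (~1ms each).
const externalMs = (34_000 / LINE_BPS) * 1000 + PROP_MS // the image request
                 + 14 * 1                               // cache hits
                 + PROP_MS;                             // the page itself

console.log(Math.round(inlineMs), Math.round(externalMs)); // ≈ 585 vs ≈ 198
```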
This is a highly simplified view of what happens, and a number of factors come into play as a page is loaded: the number of host domains, download threads, script execution, CSS styling, positioning, paint events, layout events and so on.
You can put base64 strings directly in the background: url() CSS rule if the image is going to be used behind a DIV or button and repeated on your page.
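A minimal sketch of that approach, declaring the data URI once in CSS and reusing it from every element with the class (the base64 payload is truncated here; a real rule would carry the full string):

```css
.tile {
  /* One base64 payload, shared by every .tile element on the page. */
  background: url('data:image/jpeg;base64,/9j/4AAQSkZJRg...') no-repeat;
}
```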
But it's typically a good thing to put the resource on the server and download it by URL in the places it's needed - not only for your sanity when it comes time to copy and paste 15 base64 strings into your page when the image changes :)
I'm trying to do everything I can to increase the load speed of my pages, and in particular the ajax loaded components.
In Firebug, my output looks like this:
I'm not entirely sure if I'm reading this correctly, but it is either +2.19s for DOMContentLoaded (or it may only be 0.8s if we are supposed to subtract that from the waiting response).
But then 4.67s for the 'load' (event).
Both of these seem like significantly long loading times.
I haven't been able to figure out what would cause these sorts of times.
These stats are from loading a straight HTML page which I normally load via ajax.
But this is just the HTML. No JavaScript in the page, and the page is being loaded directly, not through an ajax request.
However, when I do load this page via ajax, I notice a significant delay while the page attempts to load.
Any suggestions?
I've been looking through the HTML in IE DebugBar, and it all looks surprisingly clean.
There are 30 images in the page. Could that be what the 'load' event is waiting for? And if so, is there any way to speed this up?
In particular, as the user will never load this page directly, but only via an ajax request, is there a way to improve the page loading performance in ajax? The problem isn't with the ajax loading script, but with the HTML page specifically.
----------------------EDITED------------------------------
The results of the page are loaded into a jQuery Cycle slideshow where multiple images are visible at a single time, so using a lazy loader provides a pretty horrible user experience (assuming it is the images that are causing this problem).
Those Firebug stats are telling you that:
It takes 2.1 seconds from the time you started the request for your server-side code to start returning a response
It then takes 19ms to download the response (i.e. the HTML)
71ms or so after that, the DOMContentLoaded event fires
After the page has completely loaded (after the images are all downloaded), the load event is fired.
DOMContentLoaded fires as soon as the DOM is ready. The load event waits until the whole page is loaded (including any external resources like images)
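If you want to confirm this on your own page, a small sketch that logs both milestones:

```javascript
// Logs when the DOM is ready vs. when the full page (images included) has loaded.
document.addEventListener('DOMContentLoaded', () => {
  console.log('DOMContentLoaded at', Math.round(performance.now()), 'ms');
});
window.addEventListener('load', () => {
  console.log('load at', Math.round(performance.now()), 'ms');
});
```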
Most of the time in your request (other than downloading the images) is spent waiting for the server to construct the HTML. Maybe your server-side code can be optimized to run faster?