Individual JS file XMLHttpRequest vs combined gzip download

Some stats before I state the situation:
total JS code = 122 MB
minified = 36 MB
minified and gzipped = 4 MB
I would like to get the entire 4 MB down in one shot (with a loading progress indicator on the page) and decompress it, but not parse it yet. We don't want the code expanding in the browser's memory when a lot of it might not be required at that point. The parsing should happen only when a script tag with the corresponding JS file name is encountered.
Intention: a faster one-shot download of the JS files, but keeping the behaviour unchanged from the browser's perspective.
Do any such solutions exist? Am I even thinking sane?
If yes, I know how to get the gzip; I would like to know how to keep the files in the browser cache so that when a script tag is encountered the browser doesn't fire an XMLHttpRequest for it again.

The trick is to leverage HTTP caching directives. For starters, take a look at this. You should only need to fetch your JS code once, because you can safely set the cache directives to instruct the browser to hold on to the JS file indefinitely (subject to space). Indefinitely in this context typically means the year 2035.
When you're ready to update all your browser-side caches with a new version of the JS file, simply use a cache-busting query string. Any serial number or date and time will do, or a simple version number, e.g.:
<script src="/js/myfile.js?v2.1"></script>
Some minification frameworks handle the cache busting for you. A good technique, for example, is to MD5 the file contents and use that hash as the cache-busting query string. That way, whenever your source JS changes, the browser will request the new version (because the query string is embedded in your HTML script tag) and then cache it for as long as possible again.
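As a very rough sketch of that technique (assuming a Node.js build step; the bundle path and output are just examples, not any particular framework's API):
var crypto = require('crypto');
var fs = require('fs');

// Hash the minified bundle so the query string changes only when the content does.
var source = fs.readFileSync('js/myfile.js');
var hash = crypto.createHash('md5').update(source).digest('hex').substring(0, 8);

// Emit the cache-busted script tag to paste (or template) into the HTML.
console.log('<script src="/js/myfile.js?v=' + hash + '"></script>');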
XMLHttpRequest will honour the caching directives you set.
In the other part of your question, I believe what you're asking is whether you can download one combined script file and then only refer to parts of it with individual script tags on the page. No - I don't believe you can do that. If you want to refer to individual files you would need an HTTP URL and caching directives for each piece of gzipped content you want to use separately. However, you might find this is as performant, or maybe even more so, than one big file, depending on how much parallelisation you can achieve.
A neat trick here is to pre-load a lot of what you need. Google have been doing this on the home page for years. Basically, they pre-load stacks of resources (images certainly, but possibly also JS). So whilst you're thinking about what search query to enter, they are already loading the cache up with stuff you'll want on the subsequent page.
So you could use XMLHttpRequest to fetch your JS files (without parsing them) well before you need them. Then by the time your <script/> tag refers to them they'll already be downloaded and you just need to parse them.
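A minimal sketch of that prefetch step (the file names are hypothetical, and it assumes the files are served with long-lived cache headers so the later script tags hit the cache):
// Warm the browser cache well before the scripts are needed.
['/js/module-a.js', '/js/module-b.js'].forEach(function (url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true); // async GET; the response is cached, not executed
    xhr.send();
});
// Later, <script src="/js/module-a.js"></script> should be served from cache.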

In addition to cirrus's point about using HTTP caching, you could break that still-pretty-large 4 MB file down and only load the pieces when that functionality is required.
It's more HTTP requests, but 4 MB is a big hit in one go.
Suggest something like require.js to load in the appropriate files when they are needed:
http://requirejs.org/docs/start.html
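For illustration, a minimal RequireJS sketch (the module name is made up):
// 'reports/chart.js' is fetched and parsed only when this call runs,
// e.g. the first time the user opens the reporting view.
require(['reports/chart'], function (chart) {
    chart.render(document.getElementById('report'));
});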

Is html sent compressed and/or minified?

I'm thinking about using the jQuery Ajax load method. In some cases, the HTML I want to load is quite large. I'm wondering if the browser already streamlines the process behind the scenes, or should I minify and/or compress the HTML before calling .load() from jQuery? If so, which one? Or both? Is there a standard way to perform minification and/or compression in this scenario?
UPDATE
Does this make any sense:
The data I'm going to retrieve from the server is static. Let's say I have data for apples, oranges, kumquats, and papayas, and none of it changes "on the fly" (only when I update the site).
So is it preferable that I get the data as JSON via jQuery this way:
$.getJSON('kumquats')
(...and then, of course, process the results that come back)... OR ...simply send back the HTML with no need of massaging, as "kumquats" will always send back the exact same HTML, "oranges" will always be the same HTML, etc.
In the latter option, then, I would do something like this (jQuery pseudocode) instead:
$('#MainContent").html($.load("\Content\Kumquat.htm"));
In summation, I can send all the HTML fully formed across the wire, and clog up the pipes with some extra bits for a bit, OR I can send a less verbose representation of the data (JSON), and then massage it in the .getJSON() callback function, transforming it into HTML. Performance-wise, does it make much difference? BTW, this is not "sensitive" data - it doesn't matter who sees it as it zips by through the ether.
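For concreteness, the two options would look roughly like this (the endpoint paths and the shape of the JSON response are assumptions):
// Option 1: fetch JSON and build the HTML on the client.
$.getJSON('/data/kumquats', function (data) {
    var items = $.map(data, function (item) {
        return '<li>' + item.name + '</li>';
    });
    $('#MainContent').html('<ul>' + items.join('') + '</ul>');
});

// Option 2: fetch pre-rendered HTML and inject it as-is.
$('#MainContent').load('/Content/Kumquat.htm');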
I'm wondering if the browser already streamlines the process behind the scenes
The browser can't control how much data the server sends in its response.
or should I minify and/or compress the html before calling .load() from jQuery?
You call load on the client. The server has to do any minification or compression of the HTML.
Is there a standard way to perform minification and/or compressing in this scenario?
Compression is usually handled by gzip encoding. How you set that up depends on your server and/or the server side programming language that is generating the content.
I'm not aware of any standard way to perform minification. I used HTML Tidy to do that once.
The browser can't minify HTML before downloading it first. The only reason you want to minify is to reduce download time by decreasing the file size of the download, so minifying on the client would be counterintuitive.
Your server needs to minify and/or compress. It is probably already compressing by default (mod_deflate on Apache, for example). Minification of the HTML can be done in a variety of ways depending upon the server-side technology you are using. There may be a library for it, or you could use a third-party CDN to minify and serve the content for you.

Single page web app: single html file or several files loaded using ajax?

I have this relatively large web app; it is a single page with Ajax calls for the business logic.
Currently I have a small HTML file that loads all the CSS and JS files, and then loads the actual content of the page using Ajax, so I have something like 15 HTML files to build a single page (each HTML file is a "div" in the main HTML page).
Several files are easier to maintain, but my question is: what is better in terms of performance / User experience?
Keep it as is now (several files loaded async) OR have a script that joins all the files on "compile" time (when deploying)?
I understand that having a single html file is more efficient in terms of network performance, but on the other hand a small file will load faster, and the rest of the content will load after a "loading" dialog.
It is better to have fewer files, as scripts block and load sequentially, or to use deferred loading. There is normally a per-domain limit on parallel downloads, although I cannot for the life of me remember what it is.
For production, if you compile the scripts together into a single payload, and all of the stylesheets together into another, you will likely reap some performance benefits. I would also consider minifying the output as well. The YUI Compressor and the Google Closure Compiler are two tools that can be used to achieve this.
This will tell you more about the techniques to stop blocking...
http://www.stevesouders.com/blog/2009/04/27/loading-scripts-without-blocking/
Some performance tips, not limited to JavaScript...
http://developer.yahoo.com/performance/rules.html
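One of the non-blocking techniques covered there, sketched very roughly (the bundle name is hypothetical):
// Inject the script element from JS so it downloads without blocking HTML parsing.
function loadScript(src, onLoad) {
    var s = document.createElement('script');
    s.src = src;
    s.async = true;
    s.onload = onLoad;
    document.head.appendChild(s);
}

loadScript('/js/app.min.js', function () {
    // Safe to call into the combined bundle here.
});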

Minimize size of HTML file

I have a large HTML file being generated for a report at the moment (around 2-3 MB) and this file is going to be transferred a lot of times. It is not being accessed through any form of web host; it is just a file being accessed over a network, but the network is all around the world and therefore not fast everywhere.
I know about gzip compression, but from the looks of it that will only work with an Apache web server or something similar, configured via the .htaccess file. I have already stripped the whitespace from the HTML file; my question is, besides just zipping it up in a standard archive, what else can I do to minimize the size of the file?
Thanks, and I will be happy to answer any other questions.
You can certainly look at the HTML structure itself to see if you can reduce the number of tags. For example, do you have a bunch of nested table structures that could be replaced? Do you have inline styles that could be put into a separate stylesheet? Do you have any JavaScript content which could be put into a separate file?
I don't think you can compress it without a proper web server, because it is the web server that tells the browser, in the HTTP response headers, that the content needs to be decompressed.
If the markup is the greater part of the file (i.e. there are more tags and scripts than text), you can use CSS to minimize the size.
If the data is the greater part (more information than tags), I suggest you use a web server (Microsoft IIS can also compress it).
But, if possible, also consider splitting the data into several files, for example with different levels of detail.
It is possible to embed compressed data within the HTML file and use JavaScript to decompress it dynamically as the page is rendered, using a JavaScript implementation of gzip decompression. See this answer for references: JavaScript implementation of Gzip
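A rough sketch of that approach, assuming the pako library for the decompression step and the gzipped report embedded in the page as a base64 string (the element ids are made up):
// The report data was gzipped and base64-encoded into the page at generation time,
// e.g. inside <script type="text/plain" id="payload">...</script>.
var compressed = document.getElementById('payload').textContent;

// base64 -> bytes -> decompressed HTML string
var bytes = Uint8Array.from(atob(compressed), function (c) { return c.charCodeAt(0); });
var html = pako.ungzip(bytes, { to: 'string' });

document.getElementById('report').innerHTML = html;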

Is it possible to do client-side page/DOM caching with localStorage?

I'm reading up on Local Storage in HTML5, and I'm starting to view it sort of like a client-side version of how I use memcached. That got me thinking -- I currently do page-level caching in memcache.
Is that possible with localStorage? That is, can an assembled page store itself (or, more importantly, maybe parts of itself) in localStorage such that the client doesn't have to work its DOM so hard next time the user shows up to a page?
It seems to me that since things are only stored as strings this may not work unless there is some string to object transformation available.
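For plain data I suppose JSON serialization would cover that, something like:
// Objects have to be serialized to strings before they go into localStorage.
localStorage.setItem('cart', JSON.stringify({ items: 3, total: 12.5 }));

// ...and parsed back into objects on the way out.
var cart = JSON.parse(localStorage.getItem('cart'));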
Have a look at Christian's 2010 24ways post under the heading Caching a full interface (near the end). He basically does:
localStorage.setItem('state',f.innerHTML);
Followed by:
if ('state' in localStorage) {
    f.innerHTML = localStorage.getItem('state');
}
Where f is the element he wants to cache.
The problem with this is that you don't know what's in the cache until you've loaded your page, meaning that you'd need to perform another HTTP request to get the data that you do need which leads to even more overhead. I would definitely stick with the server-side caching of resources.
You could do it, but something like this would basically involve a single master index page of JavaScript that either loaded cached local content or performed Ajax requests to load content from the server.
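A very rough sketch of that master-page idea (the element id and fragment URL are made up):
// Render from localStorage if we cached the markup on a previous visit;
// otherwise fetch it once via Ajax and cache it for next time.
var cached = localStorage.getItem('mainContent');
if (cached) {
    document.getElementById('main').innerHTML = cached;
} else {
    var xhr = new XMLHttpRequest();
    xhr.onload = function () {
        document.getElementById('main').innerHTML = xhr.responseText;
        localStorage.setItem('mainContent', xhr.responseText);
    };
    xhr.open('GET', '/fragments/main.html', true);
    xhr.send();
}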

Google App Engine - Caching generated HTML

I have written a Google App Engine application that programmatically generates a bunch of HTML code that is really the same output for each user who logs into my system, and I know that this is going to be inefficient when the code goes into production. So, I am trying to figure out the best way to cache the generated pages.
The most probable option is to generate the pages and write them into the database, and then check the time of the database put operation for a given page against the time that the code was last updated. Then, if the code is newer than the last put to the database (for a particular HTML request), new HTML will be generated and served, and cached to the database. If the code is older than the last put to the database, then I will just get the HTML directly from the database and serve it (thereby avoiding all the CPU wastage of generating the HTML). I am not only looking to minimize load times, but to minimize CPU usage.
However, one issue that I am having is that I can't figure out how to programmatically check when the version of the code uploaded to App Engine was updated.
I am open to any suggestions on this approach, or other approaches for caching generated html.
Note that while memcache could help in this situation, I believe that it is not the final solution, since I really only need to regenerate the HTML when the code is updated (as opposed to every time the memcache entry expires).
In order of speed:
memcache
cached HTML in data store
full page generation
Your caching solution should take this into account. Essentially, I would probably recommend using memcache anyway. It will be faster than accessing the data store in most cases, and when you're generating a large block of HTML, one of the main benefits of caching is that you potentially didn't have to incur the I/O penalty of accessing the data store. If you cache using the data store, you still pay that I/O penalty.
The difference between regenerating everything and pulling cached HTML from the data store is likely to be fairly small unless you have a very complex page. It's probably better to get a bunch of very fast cache hits off memcache and do a full regenerate every once in a while than to make a call out to the data store every time. There's nothing stopping you from invalidating the cached HTML in memcache when you update, and if your traffic is high enough to warrant it, you can always do a multi-level caching system.
However, my main concern is that this is premature optimization. If you don't have the traffic yet, keep caching to a minimum. App Engine provides a set of really convenient performance analysis tools, and you should be using those to identify bottlenecks after you've got at least a few QPS of traffic.
Anytime you're doing performance optimization, measure first! A lot of performance "optimizations" turn out to either be slower than the original, exactly the same, or they have negative user experience characteristics (like stale data). Don't optimize until you're certain you have to.
A while ago I wrote a series of blog posts about writing a blogging system on App Engine. You may find the post on static generation of HTML pages of particular interest.
This is not a complete solution, but might offer some interesting option for caching.
Google App Engine Frontend Caching gives you a way of caching without using memcache.
Just serve a static version of your site
It's actually a lot easier than you think.
If you already have a file that contains all of the URLs for your site (e.g. urls.py), half the work is already done.
Here's the structure:
+-/website
+--/static
+---/html
+--/app/urls.py
+--/app/routes.py
+-/deploy.py
/html is where the static files will be served from. urls.py contains a list of all the urls for your site. routes.py (if you moved the routes out of main.py) will need to be modified so you can see the dynamically generated version locally but serve the static version in production. deploy.py is your one-stop static site generator.
How you layout your urls module depends. I personally use it as a one-stop-shop to fetch all the metadata for a page but YMMV.
Example:
main = [
    { 'uri': 'about-us', 'url': '/', 'template': 'about-us.html', 'title': 'About Us' }
]
With all of the urls for the site in a structured format it makes crawling your own site easy as pie.
The route configuration is a little more complicated. I won't go into detail because there are just too many different ways this could be accomplished. The important piece is the code required to detect whether you're running on a development or production server.
Here it is:
# Detect whether this is the 'Development' server
DEV = os.environ['SERVER_SOFTWARE'].startswith('Dev')
I prefer to put this in main.py and expose it globally because I use it to turn on/off other things like logging but, once again, YMMV.
Last, you need the crawler/compiler:
import os
import urllib2

from app.urls import main

port = '8080'
local_folder = os.getcwd() + os.sep + 'static' + os.sep + 'html' + os.sep

print 'Outputting to: ' + local_folder
print '\nCompiling:'

# Fetch each page from the local dev server and write it out as a static HTML file.
for page in main:
    http = urllib2.urlopen('http://localhost:' + port + page['url'])
    file_name = page['template']
    path = local_folder + file_name
    local_file = open(path, 'w')
    local_file.write(http.read())
    local_file.close()
    print ' - ' + file_name + ' compiled successfully...'
This is really rudimentary stuff. I was actually stunned with how easy it was when I created it. This is literally the equivalent of opening your site page-by-page in the browser, saving as html, and copying that file into the /static/html folder.
The best part is, the /html folder works like any other static folder so it will automatically be cached and the cache expiration will be the same as all the rest of your static files.
Note: This handles a site where the pages are all served from the root folder level. If you need deeper nesting of folders it'll need a slight modification to handle that.
Old thread, but I'll comment anyway, as technology has progressed a little...
Another idea that may or may not be appropriate for you is to generate the HTML and store it on Google Cloud Storage.
Then access the HTML via the CDN link that Cloud Storage provides for you.
No need to check memcache or wait for the datastore to wake up on new requests.
I've started storing all my JavaScript, CSS, and other static content (images, downloads, etc.) like this for my App Engine apps and it's working well for me.