Suppose there is a fancy button to be put on a website. And the design of the button is such that parts of it can be sliced and applied as a repeating background.
I often slice images and apply them as repeating backgrounds this way, so one button image ends up split into several smaller images. I do this to reduce the total size of the images used.
My team leader told me not to slice the images: if you slice a button into three parts, there will be three web requests, and that will slow down the site.
I find it hard to believe that the overhead of three requests would be more than using the entire image. So I just want to know how to calculate the number of bytes transferred per web request.
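To make the setup concrete, the kind of slicing I mean looks roughly like this -- a sketch with hypothetical file names, not the actual markup:

<style>
  /* three slices of one button design -- hypothetical file names */
  .btn, .btn span { display: inline-block; height: 30px; line-height: 30px; }
  .btn        { background: url(button-middle.png) repeat-x; }  /* thin slice repeated horizontally */
  .btn .cap-l { background: url(button-left.png)  no-repeat left;  padding-left: 12px; }
  .btn .cap-r { background: url(button-right.png) no-repeat right; padding-right: 12px; }
</style>
<a class="btn" href="#"><span class="cap-l"><span class="cap-r">Fancy button</span></span></a>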
You could use the Net tab in Firebug to see the time taken for each web request; it's also broken down into the time it takes to download each component of the response.
If you reuse the three images a lot on the site, then you will save requests and bandwidth (your gut reaction). However, the three images need to be initially retrieved -- so you must reuse them.
The crux is this: due to HTTP pipelining, the overhead is pretty much negligible (especially when you consider that this is HTTP -- a string-based protocol). Retrieving three images probably has about the same latency as retrieving a single image.
-- edit: after comment from Ericlaw --
Pipelining is indeed not widely supported. However this does not mean you don't stand to gain from three images. Just make sure you reuse them A LOT throughout your website.
-- edit --
Browsers also open multiple connections to a server; the norm was 2 connections last I checked -- however, I believe recent browsers open more.
I have an application that displays a page with 5,000-10,000 rows in a table element, and it has a drop-down menu that switches to other views with a similar structure. Currently I do an async request when switching between views (I also clear/remove the current view from the document) and load the appropriate list each time; however, I was thinking of
1) loading all views in the background before they are requested for viewing so they load instantly on click.
and
2) just hiding a particular set of rows rather than removing them, so if the client navigates back it will be instant as well (a rough sketch of this follows below).
This would mean that potentially tens of thousands of HTML elements would be loaded into the current document; is this a problem? What if it were more than tens of thousands?
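A rough sketch of option 2, with made-up view ids (the real markup would obviously have many more rows):

<table id="views">
  <tbody id="view-a"><!-- rows for view A --></tbody>
  <tbody id="view-b" style="display: none"><!-- rows for view B --></tbody>
</table>
<script>
  // Nothing is removed from the document; switching views just toggles
  // which row group is visible, so navigating back is instant.
  function switchView(id) {
    document.querySelectorAll("#views tbody").forEach(function (tb) {
      tb.style.display = (tb.id === id) ? "" : "none";
    });
  }
  // e.g. called from the drop-down's change handler: switchView("view-b");
</script>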
Loading 10,000+ HTML elements onto your page is not a very smart idea. If the user's computer is of normal to fast speed, the user may experience slight lag, and if the user's computer is slow, the browser may even become unresponsive or crash (depending on the amount of RAM in the computer).
A technique you could explore is to lazy-load the HTML elements: when the user scrolls down to a particular portion of the page, the elements are loaded via AJAX (also known as "infinite scrolling").
This means that the browser's rendering engine does not have to render so many elements in one go, which saves memory and increases speed.
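A minimal sketch of that idea, assuming a hypothetical /rows endpoint that returns the next batch of rows as an HTML fragment and a table with id "list":

<script>
  // "Infinite scrolling": fetch the next batch of rows as the user nears
  // the bottom of the page and append them to the table.
  var nextPage = 0, loading = false;
  window.addEventListener("scroll", function () {
    var nearBottom = window.innerHeight + window.scrollY
                     >= document.body.offsetHeight - 200;
    if (!nearBottom || loading) return;
    loading = true;
    fetch("/rows?page=" + nextPage)                  // hypothetical endpoint
      .then(function (res) { return res.text(); })   // HTML fragment of <tr> rows
      .then(function (rows) {
        document.querySelector("#list tbody")
                .insertAdjacentHTML("beforeend", rows);
        nextPage++;
        loading = false;
      });
  });
</script>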
It depends on the HTML elements, but I would say as a rule of thumb, don't load upfront.
When I say it depends on the elements, I mean take a look at facebook for example. They load maybe 10 or 20 items into the feed, and then add more as you scroll, because each item is so rich in content (photos, videos, etc).
However, on the flip side of that, think about how much info is in each row: if it's, say, less than 500 bytes, then 500 x 10,000 = 5 MB, which isn't an awful load for a web request, and if you can cache intelligently, maybe it will be a lot less than that.
The bottom line is: don't assume that the number of HTML elements matters so much; think about the amount of data it amounts to. My advice: cross that bridge when you get there. Monitor your request times; if they are too long, come up with an AJAX page loader, which shouldn't be too hard.
The size of your HTML document would be considerable...
I once had a page with 5,000~10,000 items (table rows), and the browser (IE) took way too long to download, parse, and render it.
The best solution seems to me to be setting up a web service with a lazy-loading system.
So IMHO, yes, the number of elements in an HTML document should be monitored.
Yes, of course! The number of elements in an HTML document can be monitored! Use Firebug in Firefox!
Is there any way to do this without bogging down a computer?
I don't know much about the inner workings of computers and browsers, so I don't know what it is about a bunch of images on a page that takes up so much CPU. My first thought was to hide the images that aren't visible on the page anyway -- the ones that have been scrolled past or are yet to be scrolled to.
I tried a sample jsfiddle with randomly colored divs instead of pictures, but just scrolling up and down through that makes my computer sound like it's taking off.
What is it about all the pictures that takes up CPU? Can it be avoided?
The question is generally not about the amount of bandwidth it might save. It is more about lowering the number of HTTP requests needed to render a webpage.
See this
What takes time, when doing lots of requests to get small pieces of content (like images, icons, and the like), is the multiple round-trips to the server: you end up spending time waiting for the request to go out and the server to respond, instead of using that time to download data.
Fewer HTTP requests = faster loading overall.
If we can minimize the number of requests, we minimize the number of trips to the server and use our high-speed connection better (we download one bigger file instead of waiting for many smaller ones).
That's why CSS sprites are used.
For more information, you can have a look at, for instance: CSS Sprites: Image Slicing’s Kiss of Death.
Other than this, you can also use lazy loading.
I have seen that Google Music is using one image for all the small images used on the Google Music website:
http://music.google.com/music/sprites.png
I want to know what the advantage of this is. It would be very difficult to mark the position coordinates of the small images.
It reduces the number of HTTP requests that the client has to make to the server. Generally this speeds load time.
Yahoo provides a good set of guidelines for decreasing the load time of your web page. This is part of their first rule.
Setting up all of the indexes for the locations is time-consuming, but it only has to be done once by the developer, and then every single page load requires fewer HTTP requests. Specifically, in this case, 1 request rather than several dozen for all of the little images.
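Those "indexes" are just background-position offsets in the stylesheet. A sketch with made-up icon names and coordinates against a single sprites.png:

<style>
  /* every icon shares the one downloaded sprite; only the offset differs */
  .icon       { display: inline-block; width: 16px; height: 16px;
                background-image: url(sprites.png); }   /* one HTTP request for all icons */
  .icon-play  { background-position:  0     0; }        /* made-up coordinates */
  .icon-pause { background-position: -16px  0; }
  .icon-next  { background-position: -32px  0; }
</style>
<span class="icon icon-play"></span>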
How much faster is it to use a base64-encoded inline image to display images, as opposed to simply linking to the file on the server?
url(data:image/png;base64,.......)
I haven't been able to find any type of performance metrics on this.
I have a few concerns:
You no longer gain the benefit of caching
Isn't base64 A LOT larger in size than a PNG/JPEG file?
Let's define "faster" as in: the time it takes for a user to see a full rendered HTML web page
'Faster' is a hard thing to answer because there are many possible interpretations and situations:
Base64 encoding will expand the image by a third, which will increase bandwidth utilization. On the other hand, including it in the file will remove another GET round trip to the server. So, a pipe with great throughput but poor latency (such as a satellite internet connection) will likely load a page with inlined images faster than if you were using distinct image files. Even on my (rural, slow) DSL line, sites that require many round trips take a lot longer to load than those that are just relatively large but require only a few GETs.
If you do the base64 encoding from the source files with each request, you'll be using up more CPU, thrashing your data caches, etc., which might hurt your server's response time. (Of course, you can always use memcached or such to resolve that problem.)
Doing this will of course prevent most forms of caching, which could hurt a lot if the image is viewed often - say, a logo that is displayed on every page, which could normally be cached by the browser (or a proxy cache like squid or whatever) and requested once a month. It will also prevent the many many optimizations web servers have for serving static files using kernel APIs like sendfile(2).
Basically, doing this will help in certain situations, and hurt in others. You need to identify which situations are important to you before you can really figure out if this is a worthwhile trick for you.
I have done a comparison between two HTML pages containing 1800 one-pixel images.
The first page declares the images inline:
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsQAAA7EAZUrDhsAAAANSURBVBhXYzh8+PB/AAffA0nNPuCLAAAAAElFTkSuQmCC">
In the second one, images reference an external file:
<img src="img/one-gray-px.png">
I found that when loading the same image multiple times, if it is declared inline, the browser performs a request for each image (I suppose it base64-decodes it one time per image), whereas in the other scenario, the image is requested once per document.
The document with inline images loads in about 250ms and the document with linked images does it in 30ms.
(Tested with Chromium 34)
The scenario of an HTML document with multiple instances of the same inline image doesn't make much sense a priori. However, I found that the jquery lazyload plugin by default defines an inline placeholder image, which is set as the src attribute of all the "lazy" images. So if the document contains lots of lazy images, a situation like the one described above can happen.
You no longer gain the benefit of caching
Whether that matters would vary according to how much you depend on caching.
The other (perhaps more important) thing is that if there are many images, the browser won't get them simultaneously (i.e. in parallel), but only a few at a time -- so the protocol ends up being chatty. If there's some network end-to-end delay, then many images divided by a few images at a time multiplied by the end-to-end delay per image results in a noticeable time before the last image is loaded.
Isn't base64 A LOT larger in size than a PNG/JPEG file?
The file format / image compression algorithm is the same, I take it, i.e. it's PNG.
Using Base64, each 8-bit character carries only 6 bits of data: the binary data is therefore expanded by a ratio of 8 to 6, i.e. it grows by about 33% (a 30 KB PNG becomes roughly 40 KB of text, for example).
How much faster is it
Define 'faster'. Do you mean HTTP performance (see below) or rendering performance?
You no longer gain the benefit of caching
Actually, if you're doing this in a CSS file it will still be cached. Of course, any changes to the CSS will invalidate the cache.
In some situations this could be used as a huge performance boost over many HTTP connections. I say some situations because you can likely take advantage of techniques like image sprites for most stuff, but it's always good to have another tool in your arsenal!
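As a rough sketch of that -- reusing the tiny one-pixel PNG from the comparison above, with a made-up class name -- the image data simply lives inside the stylesheet, which the browser caches like any other static file:

/* styles.css -- cached until the stylesheet itself changes */
.bullet {
  background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsQAAA7EAZUrDhsAAAANSURBVBhXYzh8+PB/AAffA0nNPuCLAAAAAElFTkSuQmCC) no-repeat left center;
  padding-left: 8px;
}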
So I was looking at the Facebook HTML with Firebug, and I chanced upon this image,
and came to the conclusion that Facebook uses this one large image (with tricky image-positioning code) rather than many small ones for its graphical elements. Is this more efficient than storing many small images?
Can anybody give any clues as to why Facebook would do this?
These are called CSS sprites, and yes, they're more efficient - the user only has to download one file, which reduces the number of HTTP requests to load the page. See this page for more info.
The problem with the pro-performance viewpoint is that it always seems to present the "Why" (performance), often without the "How", and never "Why Not".
CSS Sprites do have a positive impact on performance, for reasons that other posters here have gone into in detail. However, they do have a downside: maintainability; removing, changing, and particularly resizing images becomes more difficult - mostly because of the alterations that need to be made to the background-position-riddled stylesheet along with every change to the size of a sprite, or to the shape of the map.
I think it's a minority view, but I firmly believe that maintainability concerns should outweigh performance concerns in the vast majority of cases. If the performance is necessary, then go ahead, but be aware of the cost.
That said, the performance impact is massive -- particularly when you're using rollovers and want to avoid the effect you get when you mouse over an image and the browser goes away to request the rollover image. It's appropriate to refactor your images into a sprite map once your requirements have settled down -- particularly if your site is going to be under heavy traffic (and certainly the big examples people have been pulling out -- facebook, amazon, yahoo -- all fit that profile).
There are a few cases where you can use them with basically no cost. Any time you're slicing an image, using a single image and background-position tags is probably cheaper. Any time you've got a standard set of icons - especially if they're of uniform size and unlikely to change. Plus, of course, any time when the performance really matters, and you've got the budget to cover the maintenance.
If at all possible, use a tool and document your use of it so that whoever has to maintain your sprites knows about it. http://csssprites.org/ is the only tool I've looked into in any detail, but http://spriteme.org/ looks seriously awesome.
The technique is dubbed "CSS sprites".
See:
What are the advantages of using CSS Sprites in web applications?
Performance of css sprites
How do CSS sprites speed up a web site?
Since other users have answered this already, here's how to do it, and another way is here.
Opening connections is more costly than simply continuing a transfer. Similarly, the browser only needs to cache one file instead of hundreds.
yet another resource: http://www.smashingmagazine.com/2009/04/27/the-mystery-of-css-sprites-techniques-tools-and-tutorials/
One of the major benefits of CSS sprites is that they add virtually zero server overhead and are handled entirely client-side. A huge gain for no server-side performance hit.
Simple answer: you only have to fetch one image file, and it is 'cut' into the different views. If you used multiple images, that would be multiple files you would need to download, which simply equates to additional time to download everything.
Cutting the large image up into 'sprites' means one HTTP request, and it also provides a flicker-free approach to 'onmouseover' effects (if you reuse the same large image for the mouseover state).
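A sketch of that flicker-free rollover, with a made-up class name and dimensions: both states live in one buttons.png, and :hover just shifts the offset, so no second image is ever requested on mouseover.

<style>
  /* normal state in the top half of buttons.png, hover state in the bottom half */
  .buy       { display: inline-block; width: 120px; height: 40px;
               background: url(buttons.png) 0 0 no-repeat; }
  .buy:hover { background-position: 0 -40px; }   /* jump to the lower half of the sprite */
</style>
<a class="buy" href="/cart"></a>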
The CSS sprites technique is a method for reducing the number of image requests by using background-position.
Best Practices for Speeding Up Your Web Site
CSS Sprites: Image Slicing’s Kiss of Death
Google also does it - I've written a blog post on it here: http://www.stevefenton.co.uk/Content/Blog/Date/200905/Blog/Google-Uses-Image-Sprites/
But the essence of it is that you make a single http request for one big image, rather than 20 small http requests.
If you watch HTTP requests, they spend more time waiting to start downloading than actually downloading, so it's much faster to do it in one hit -- chunky, not chatty!