So I know it's generally not best practice to place multiple CSS files in the head of your site, because of the increased number of HTTP requests the browser has to make. It has therefore been recommended to me to use one big CSS file instead of several smaller ones. However, wouldn't it make sense to break that CSS file up into separate, smaller files and then link each one only on the pages where it is needed? Say my homepage has its own set of styles, and my about page obviously differs from my homepage with its own set of styles. Why not link a homepage.css file to the HTML homepage and only link an about.css file to the HTML about page? There is still only one HTTP request being made in each case, and the CSS files can be much smaller. Is there something I'm missing?
CSS is cached, so if you link to your big CSS file on your homepage (let's say domain.com/css/homepage.css) and the user then travels from your homepage to the about page, the about page references the same homepage.css, but your browser says "wait! I already have it!" and skips the HTTP request for the file, resulting in no additional requests.
Whereas if you have a CSS file for each page, there is a request for a nearly identical CSS file on every page, which wastes bandwidth needlessly. It may not seem like a big deal on smaller sites, but on bigger sites with thousands of requests happening, the extra bandwidth adds up and results in higher operating costs.
The only exception to this is when you're using a responsive framework of some sort (like Bootstrap) where editing the main Bootstrap file is either impossible (when it comes from a CDN) or impractical and hard to maintain (i.e. when Bootstrap updates you'll lose all your customization). In that case you would have one bootstrap.min.css file and one custom.css, which lets you keep your customizations while only increasing bandwidth slightly.
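For example, the head of such a page might look roughly like this (the CDN URL and file names are placeholders; use whatever your framework actually provides):
<head>
    <!-- framework stylesheet served from a CDN (placeholder URL) -->
    <link rel="stylesheet" href="https://cdn.example.com/bootstrap/bootstrap.min.css">
    <!-- your own small override file -->
    <link rel="stylesheet" href="/css/custom.css">
</head>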
Background:
We want to build a page that will use dynamic serving (different version for desktop, tablet and mobile).
We are currently debating which of the two solutions will serve content faster and be better overall.
I am thinking in context of serving it to different devices (desktop, tablet, mobile) using the same URL.
Dynamic serving sites: one URL that serves different HTML and CSS depending on the user agent.
source: http://www.vervesearch.com/blog/the-ultimate-guide-to-developing-mobile-websites/
Our main goal is speed; other things come after that.
Solutions:
1) storing images as base64 in HTML
The user visits a page that contains, for example, 1-2 fairly large images. The images are embedded directly into the HTML as base64 data URIs (a sketch follows below).
We have one minified JS file and one minified CSS file.
So in ideal example we have
- HTML that contains 1-2 large images (they can be up to 5 MB each)
- one minified CSS file
- one minified JS file
So...
3 requests
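For illustration, a base64-embedded image in the HTML looks roughly like this (the data string is truncated here; in reality it would be the entire encoded file, which is what makes the HTML so large):
<img alt="hero" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...">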
2) storing images in separate URLs
The user visits a page that contains 1-2 images stored on the fastest possible external storage and referenced by URL, as sketched below (side note: can I store images as attachments in Fastly?).
We have one minified JS file and one minified CSS file.
So we have
- HTML
- one minified CSS file
- one minified JS file
- 1-2 images
So...
3 requests (HTML, JS, CSS) + 1-2 image requests
4-5 requests
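Option 2, by contrast, is just a conventional image reference to the external host (the domain below is a placeholder):
<img alt="hero" src="https://static.example.com/img/hero.jpg">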
Questions:
Do we gain a lot of speed by using fewer HTTP calls (connection-establishment latency, etc.)?
Is it worth it? Base64 encoding bloats image sizes by about 33%. source: Should I embed images as data/base64 in CSS or HTML
Which solution will work "better" (i.e. load faster)?
Is it possible to implement each approach?
Can I cache the whole lot (HTML, JS, CSS, images)?
Will caching still work well for every customer/visitor if I have a lot of pages to cache?
PS. If any question looks stupid, sorry - we are having a very heated discussion in our company and I want to provide all the information.
You can try both solutions yourself and compare the results:
http://www.webpagetest.org/compare
and if you want to test for desktop and mobile, try the PageSpeed test:
https://developers.google.com/speed/pagespeed/insights/
You will get results for both desktop and mobile, and then you can decide which route to go.
I am considering moving my website's images to a separate host (such as Imgur) in order to cut down on bandwidth use. Are there any techniques I can use to guarantee that the images will still be shown even if the primary host is down? I am thinking of something along the lines of what is possible with JavaScript, where you can check whether the first script tag loaded and dynamically insert another one if it didn't.
In my particular case the images appear both as regular HTML IMG tags and as background images, with the URLs in a CSS stylesheet.
The only way to do this would be with JavaScript. I really don't think this is a problem you need to worry about.
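If you do want a fallback for the IMG tags, a minimal sketch would be an onerror handler that swaps in a locally hosted copy (the URLs and paths here are hypothetical):
<img src="https://images.example.com/photo.png"
     onerror="this.onerror=null; this.src='/local-backup/photo.png';"
     alt="Photo">
For the CSS background images you would need something similar: a script that detects the failure (for example by probing one known image URL) and then adds a class to the body that switches the stylesheet's URLs over to the backup host.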
Also, you should know that you cannot use Imgur as a mirror for your site's images. It is against their terms of service.
I am building a dynamic navigation menu in HTML 5/CSS 3 with all the new good stuff. Some menu nodes will have an svg icon, some not. In some views the menu levels are rendered differently depending on the user authorization level and so on.
So my question is: What would be the best way to store these icons?
Since each icon is connected to a navigation node, would it be appropriate to store the XML for the SVG icon in the database? Does anyone have any good recommendations?
I'm working with .NET MVC 3 and MSSQL, but that is probably not relevant for this question.
best regards
//K
You really don't need to store them in a database. Those extra requests would lower performance a bit and prevent browser caching in some cases. Separate files do the job better.
Additionally, you should look into the practice of storing bitmap icons in one file: something like putting all the SVGs in one large file and doing tricks with background positioning, like in this tutorial. I haven't tried it with SVG, but with some changes, such as background sizing, this method should do a good job for you. Keep trying.
You could store all the icons in one file. This file would contain all SVG symbol definitions. Then, you could reference these symbols in your HTML like this:
<svg class="icon-home">
<use xlink:href="symbol-defs.svg#icon-home"></use>
</svg>
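The symbol-defs.svg file itself is just a collection of <symbol> elements, roughly like this (the path data below is only a placeholder shape):
<svg xmlns="http://www.w3.org/2000/svg">
  <symbol id="icon-home" viewBox="0 0 32 32">
    <!-- placeholder path data; the real icon outline goes here -->
    <path d="M16 2 L30 16 H26 V30 H6 V16 H2 Z"/>
  </symbol>
  <!-- more <symbol> elements, one per icon -->
</svg>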
You can learn more about this method here: https://css-tricks.com/svg-sprites-use-better-icon-fonts/
Using this method, your SVG can be cached, and it would only require one HTTP request to load.
To get up and running with this method, I recommend the IcoMoon app. It lets you import and select your SVG icons and generates those symbol definition files.
I would store them as separate files.
They will be cached by the client, and separate files make it easier to change them if they need changing.
I want my HTML page to load as fast as it can, so I'm trying to put all the CSS styles into one .css file and all the JavaScript code into one .js file. My colleagues told me this makes the web page load faster. I want to ask some questions about it.
Just to be sure: are my colleagues right? In which situations is it better to break CSS or JS code into separate files?
The other question is: if I have a lot of small icons on my page, like "delete", "edit" and "add", should I load one image with all the icons at once or each icon separately? If I load all the icons at once, how do I select the desired one, if each icon's size is 40x40px?
Thank you!
Are my colleagues right?
A single file can be downloaded with a single HTTP request (one set of HTTP headers, etc.) and can be compressed more efficiently than multiple files. So from a performance perspective, if you need all the content, it is better to place it in a single file.
In which situations is it better to break CSS or JS code into separate files?
When you need a specific page to load very quickly (e.g. the homepage), or when there are sections of the site that use a large chunk of script that isn't used elsewhere, it can be beneficial to break the files up.
If I have a lot of small icons on my page, like "delete", "edit" and "add", should I load one image with all the icons at once or each icon separately?
From a performance standpoint, the same rules apply. However, there is no way to specify that a content image (and icons that don't merely decorate text are content images) is just part of a larger file: you have to use hacks involving background images. This breaks the separation of concerns between content and style, usually involves using semantically incorrect elements, and then requires further hackery to provide alternative content for users who can't see the image - and that hackery rarely does as good a job as an alt attribute.
If I load all the icons at once, how do I select the desired one, if each icon's size is 40x40px?
You have an element with specific dimensions and a background image with background-position set so that only the part of the image you want shows through.
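As a rough sketch (the sprite file name and the offsets are assumptions; they depend on how your sprite sheet is laid out), showing the second 40x40 icon from a horizontal strip looks like this:
<style>
  .icon-edit {
    display: inline-block;
    width: 40px;                          /* size of one icon */
    height: 40px;
    background-image: url('icons.png');   /* the combined sprite sheet */
    background-position: -40px 0;         /* shift left by one icon width */
  }
</style>
<span class="icon-edit"></span>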
Consolidating your CSS and JS code into a shared file will improve load times on all loads after the first so long as the browser uses the version of the file in its cache rather than downloading it again. There are many factors that can affect this, but under normal circumstances it should work.
Also, make sure your image files are stored at the same resolution as they will be displayed. Displaying a 40 x 40 pixel file at 20 x 20 pixels means that you have downloaded four times the necessary image size. If the same icon file is referenced in many places in an HTML document, that icon file will only be downloaded once, so it will have little effect on page loading times.
For putting all the icons into one file and choosing which one, see this:
http://cssglobe.com/post/3028/creating-easy-and-useful-css-sprites
You can use what they call a CSS sprite.
The thing is very simple to think of but can be a little tricky to use. Here is the idea.
You merge all your images into one big image, making it a single download.
Wherever these images were used on the site, you replace them with a CSS class that uses the big image as a background with a certain position.
Let's say you merge four images together: delete.png, add.png, edit.png, share.png.
You create a CSS class for every one of these, like so:
.delete, .add, .share, .edit {
    display: inline-block;
    width: 40px;                                /* each icon is 40x40px */
    height: 40px;
    background-image: url('../img/icons.png');  /* the merged sprite image */
}
.delete { background-position: 0 0; }           /* top-left icon */
.add    { background-position: 0 -40px; }       /* icon below it */
.share  { background-position: -40px 0; }       /* icon to its right */
.edit   { background-position: -40px -40px; }   /* bottom-right icon */
This way, you reduce the number of requests, since you use a single image everywhere.
The code was written on the fly; tell me if something is wrong.
Also have a look at performance guru tools: Page Speed
Breaking up CSS files is not really a problem, considering browser caching.
Breaking up JS files is okay too. You can have one JS file, linked in the <head> tag, that handles whatever the page needs in order to load, and another JS file, loaded just before the closing </body> tag, that adds the interaction. This way the fancy effects may kick in slightly later, but you ensure your users see your text content right away.
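A rough sketch of that split (the file names are just placeholders):
<html>
<head>
    <link rel="stylesheet" href="styles.min.css">
    <!-- only the JS the page needs in order to render -->
    <script src="critical.min.js"></script>
</head>
<body>
    <!-- visible content renders first -->
    <!-- effects and interaction, loaded last -->
    <script src="interactions.min.js"></script>
</body>
</html>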
Regarding your images, there's a practice called CSS Sprites. You can use that to make one big file for your small images and use CSS background-position to show only the part you want. It's like cropping your image file based on the css class.
If speed is most important, then what you've been told is correct.
Fewer CSS and JS files means fewer HTTP requests to the server. I would only separate files if you have a specific need as part of a project (e.g. they need to be maintained in very separate ways).
For JS I always load jQuery and other libraries from the Google CDN - this gives a greater performance boost than merging the library into your own code, as users are likely to already have a cached copy of Google's version.
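The usual pattern, for example, is to request jQuery from the Google CDN and fall back to a local copy if that request fails (the version number and local path here are assumptions; use whatever matches your project):
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<script>
    // If the CDN request failed, window.jQuery is undefined, so load a local copy instead
    window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>');
</script>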
For icons I would use CSS sprites (again, this means fewer requests to the server) or, if you really want to go as far as possible, look into embedding data URIs in your CSS.
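For completeness, a data URI in CSS looks roughly like this (the base64 string is truncated here; normally it is the whole encoded icon, so this only pays off for very small images):
.icon-add {
    /* the icon is embedded in the stylesheet itself - no extra HTTP request */
    background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUg...');
}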
Further reading
Google's Page Speed tool
Data URI in CSS
If you want a better performance report about your page, you can take a look at these tools
YSlow: http://developer.yahoo.com/yslow/
PageSpeed: http://code.google.com/intl/es-ES/speed/page-speed/docs/overview.html
Both can be added into the FireBug plugin (Mozilla Firefox).
From YSlow documentation:
Minify JavaScript and CSS
tag: javascript, css
Minification is the practice of removing unnecessary characters from code to reduce its size, thereby improving load times. When code is minified all comments are removed, as well as unneeded white space characters (space, newline, and tab). In the case of JavaScript, this improves response time performance because the size of the downloaded file is reduced. Two popular tools for minifying JavaScript code are JSMin and YUI Compressor. The YUI Compressor can also minify CSS.
Obfuscation is an alternative optimization that can be applied to source code. It's more complex than minification and thus more likely to generate bugs as a result of the obfuscation step itself. In a survey of ten top U.S. web sites, minification achieved a 21% size reduction versus 25% for obfuscation. Although obfuscation has a higher size reduction, minifying JavaScript is less risky.
In addition to minifying external scripts and styles, inlined <script> and <style> blocks can and should also be minified. Even if you gzip your scripts and styles, minifying them will still reduce the size by 5% or more. As the use and size of JavaScript and CSS increases, so will the savings gained by minifying your code.
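As a tiny hand-made illustration of what minification does to CSS (real tools such as the YUI Compressor automate this):
/* before minification */
.button {
    color: #ffffff;  /* white text */
    margin: 0px 0px 0px 0px;
}
/* after minification */
.button{color:#fff;margin:0}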
Preload Components
tag: content
Preload may look like the opposite of post-load, but it actually has a different goal. By preloading components you can take advantage of the time the browser is idle and request components (like images, styles and scripts) you'll need in the future. This way, when the user visits the next page, you could have most of the components already in the cache and your page will load much faster for the user.
There are actually several types of preloading:
•Unconditional preload - as soon as onload fires, you go ahead and fetch some extra components. Check google.com for an example of how a sprite image is requested onload. This sprite image is not needed on the google.com homepage, but it is needed on the consecutive search result page.
•Conditional preload - based on a user action you make an educated guess where the user is headed next and preload accordingly. On search.yahoo.com you can see how some extra components are requested after you start typing in the input box.
•Anticipated preload - preload in advance before launching a redesign. It often happens after a redesign that you hear: "The new site is cool, but it's slower than before". Part of the problem could be that the users were visiting your old site with a full cache, but the new one is always an empty cache experience. You can mitigate this side effect by preloading some components before you even launch the redesign. Your old site can use the time the browser is idle and request images and scripts that will be used by the new site.
If you are using jQuery, then you can take a look at this: Preloading images with jQuery
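A minimal sketch of unconditional preloading with jQuery (the image paths are placeholders):
<script>
    // After the current page has finished loading, quietly request images
    // that the next page will need, so they end up in the browser cache.
    $(window).on('load', function () {
        $.each(['/img/sprite-results.png', '/img/next-page-hero.jpg'], function (i, src) {
            new Image().src = src;
        });
    });
</script>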
Interesting concepts to improve download speed, perceived speed and actual speed:
7 techniques for faster JavaScript loading without compromising performance…
Make better use of caching
Download external scripts after visible content is loaded &
download multiple JavaScript files in a batch (asp.net/ajax)
Most principles explained are still generally applicable.