I have one question about page load speed. Consider a case where I have different kinds of images on my web page, such as icons, logos, images inside content, and so on.
I want to know whether having separate folders for each media category may affect the page load speed:
/logos
/icons
/images
Will the webpage load faster if the images of all categories were located in a single folder rather than in multiple ones?
Thanks in advance for your advice.
Even though performance-related questions often get closed because they can't be answered without benchmarks on the machine in question, this one is worth answering: unless you run a potato-based computer, you won't see any performance impact.
Directories are not actually physical folders like you would have in real life.
They are simply tables of pointers to the disk locations where your files are stored. (Of course this is massively over-simplified, as it involves file systems and more low-level stuff, but that's not needed at this point.)
To come back to your question, the difference between loading two files from two directories:
/var/foobar/dir1/image1.jpeg
/var/foobar/dir2/image2.jpeg
or one directory:
/var/foobar/dir1/image1.jpeg
/var/foobar/dir1/image2.jpeg
...is that your file system will have to look up two different directory tables. With modern file systems and moderate (even low-end) hardware, this causes no issues.
As @AjitZero mentioned, here your performance impact will come from the size of the files, the number of distinct HTTP requests (i.e., how many images, CSS files, scripts, etc.) and the way you cache data on the user's computer.
No, the number of folders doesn't affect the speed of page-load.
A smaller number of HTTP requests does matter, however, so you can use sprite sheets.
HTTP/2 makes it possible to multiplex connections, eliminating the need for more than one connection to a server. Over a single connection, many individual images can be sent down to the client. This obviates the old image sprite pattern of combining many images into one and using CSS to cut it apart.
I'm curious if sprites would still actually be faster in an HTTP/2 world. If so, under what circumstances?
Sprites, as you will know, are used to prevent multiple requests being queued, so with one payload you could get all the sprites for the site.
But with sprites you tend to get lots of additional icons that are used throughout the website but aren't all needed on any single page.
So with HTTP/2 multiplexing, queuing resources is no longer an issue. You get a speed benefit when you only download the files needed for each page.
However you may get better compression by combining some images into a single file, making the overall size of file transfers smaller.
Speed tests run by Benoît Béraud and Alexandre Masselot have given an example of a sprite sheet loading faster than individual sprites. They concluded that sprite sets can still be used to optimise site performance when using HTTP/2: http://blog.octo.com/en/http2-arrives-but-sprite-sets-aint-no-dead/
An extended write-up about HTTP/2 by Rachel Andrew can be found here:
https://www.smashingmagazine.com/2016/02/getting-ready-for-http2/
With HTTP/2 multiplexing, the server will be reading a lot of small files instead of reading a single big file. If the server is resource-limited (some Internet-of-Things contraption, for example), then you might be able to find a situation where it's better to have it making the single, big read instead of a lot of small ones, since each read causes the server OS to potentially do a lot of file-access-related operations.
On the client side, the browser will be managing a lot of small files, instead of a big one. I can imagine the code path used for the current sprite workflow being well massaged and optimized, since it's so commonly used. So it might happen that the new case of having a lot of small files could be slower, at least for a time.
What is the correct way of storing image files in the database?
I am using file-paths to store the images.
But here is the problem: I have to show 3 different sizes of one image on my website. One would be used as a thumbnail, the second would be around 290px*240px, and the third would be full size (approx. 500px*500px). As it is not considered good to scale down images using HTML img elements, what should the solution be?
Currently, what I am doing is storing 3 different-sized images for each item. Is there any better way?
Frankly the correct way to store images in a database is not to store them in a database. Store them in the file system, and keep the DB restricted to the path.
At the end of the day, you're talking about storage space. That's the real issue. If you're storing multiple copies of the same file at different resolutions, it will take more space than storing just a single copy of the file.
On the other hand, if you only keep one copy of the file and scale it dynamically, you don't need the storage space, but it does take more processing power instead.
And as you already stated in the question, sending the full-size image every time is costly in terms of bandwidth.
So that's the trade-off: storage space on your server vs. processor work vs. bandwidth costs.
The simple fact is that the cheapest of those three things is storage space. Therefore, you should store the multiple copies of the files on your server.
This is also the solution that will give you the best performance, which in many ways is an even more important point than direct cost.
In terms of storing the file paths, my advice is to give the scaled versions predictable names with a standard prefix or suffix compared to the original file. This means you only need to have the single filename on the database; you can simply add the prefix for the relevant version of the image that has been requested.
Nothing wrong with storing multiple versions of the same image.
Ideally you want even more – the @2x ones for retina screens.
You can use a server-side script to generate the smaller ones dynamically, but depending on traffic levels this may be a bad idea.
You are really trading storage space and speed for higher CPU and RAM usage to generate them on the fly – depending on your situation that might be a good trade-off, or it might not.
I agree with Rick: you can store multiple image sizes as your business requires. You should store the image in a folder on the server and store its location in the database. Make a hierarchy in the folder and store the low-res images inside subfolders, so that you can always refer to them with only one address.
You can do it like this in your web.config:
<add key="WebResources" value="~/Assets/WebResources/" />
<add key="ImageRoot" value="Images\Web" />
Make folders named after each size (for example 290x240 and 500x500) and store the pictures with the same name inside them, so you can access them easily.
Which one is better and faster? Why?
using only one file for styling:
css/style.css
or
using several files for styling:
css/header.css
css/contact.css
css/footer.css
css/tooltip.css
The reason I'm asking is that I'm developing a site for users who have very low internet speed (in Uganda), so I want to make it as fast as possible.
Using a single file is faster because it requires fewer HTTP requests (assuming the amount of styles loaded stays the same).
So it's better to keep it in just one file.
Separating CSS should only be done if you want to keep, for example, IE-specific classes separate.
As per Yahoo's Performance Rules [source], it is VERY IMPORTANT to minimize HTTP requests.
From the source:
Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.
It is quite inconvenient to develop using combined files, so stick to developing with multiple files, but combine them once you deploy the system to the web.
I really recommend using Boilerplate's ant build script. You can find it here.
It combines and minifies CSS.
One CSS file is better than multiple CSS files because of the overhead incurred when the end user's browser makes a separate request for each file. Other things you can do to improve performance include:
Enable gzip compression on your web server (e.g. on Apache) so that the files are compressed before downloading
Where possible, host your files geographically close to the majority of your end users
Use a CDN for your static content such as CSS files
Use CSS sprites
Cache your content
Note that there are tools available to help you do this. See 15 ways to optimise css for more information
It is always a better solution to bundle or combine multiple CSS or JavaScript files into fewer HTTP requests. This causes the browser to request far fewer files, which in turn reduces the time it takes to fetch them.
With proper caching, you can save extra bandwidth and make even fewer HTTP requests.
Update:
There's a new Bundling feature in ASP.Net 4.5 which you might be interested in.
This allows you to keep CSS files separated at compile time and, at runtime, gain the benefit of having them combined into one resource.
One resource file is always the fastest approach since you reduce the number of HTTP requests made to fetch those files.
I would suggest using YSlow, which is a great extension for Firebug that analyzes web pages and suggests ways to improve their performance.
I have read some website development materials on the web, and every time a person asks about the organization of a website's JS, CSS, HTML and PHP files, people suggest a single JS file for the whole website. And the argument is speed.
I clearly understand that the fewer requests there are, the faster the page responds. But I have never understood the single-JS argument. Suppose you have 10 webpages and each webpage needs a JS function to manipulate the DOM objects on it. Putting 10 functions in a single JS file and executing that JS on every single webpage means 9 out of 10 functions are doing useless work. CPU time is wasted searching for non-existent DOM objects.
I know that CPU time on an individual client machine is trivial compared to bandwidth on a single server machine. I am not saying you should have many JS files on a single webpage. But I don't see anything wrong if every webpage refers to 1 to 3 JS files and those JS files are cached on the client machine. There are many good ways to do caching; for example, you can use an expiry date, or you can include a version number in your JS file name. Compared to cramming the functionality for every webpage of a website into one big JS file, I far prefer splitting the JS code into smaller files.
Any criticism/agreement with my argument? Am I wrong? Thank you for your suggestions.
A function does 0 work unless called. So 9 uncalled functions are 0 work, just a little extra space.
A client only has to make 1 request to download 1 big JS file, which is then cached for every other page load. That is less work than making several small requests on every single page.
I'll give you the answer I always give: it depends.
Combining everything into one file has many great benefits, including:
less network traffic - you might be retrieving one file, but you're sending/receiving multiple packets, and each transaction has a series of SYN, SYN-ACK, and ACK messages sent across TCP. A large majority of the transfer time is spent establishing the session, and there is a lot of overhead in the packet headers.
one location/manageability - although you may only have a few files, it's easy for functions (and class objects) to grow between versions. With the multiple-file approach, functions from one file sometimes call functions/objects from another file (e.g. Ajax in one file, arithmetic functions in another - your arithmetic functions might grow to need to call the Ajax functions and have a certain variable type returned). What ends up happening is that your set of files needs to be seen as one version, rather than each file being its own version. Things get hairy down the road if you don't have good management in place, and it's easy to fall out of line with JavaScript files, which are always changing. Having one file makes it easy to manage the version across each of your pages and your (1 to many) websites.
Other topics to consider:
dormant code - you might think that the uncalled functions potentially reduce performance by taking up space in memory, and you'd be right, but this cost is so minuscule that it doesn't matter. Functions are indexed in memory, and while the index table may grow, it's trivial for small projects, especially given today's hardware.
memory leaks - this is probably the largest reason why you wouldn't want to combine all the code, but it's a small issue given the amount of memory in today's systems and browsers' better garbage collection. Also, this is something that you, as a programmer, have the ability to control: quality code leads to fewer problems like this.
Why it depends?
While it's easy to say throw all your code into one file, that would be wrong. It depends on how large your code is, how many functions it has, who maintains it, etc. Surely you wouldn't pack your locally written functions into the jQuery package, and you may have different programmers maintaining different blocks of code - it depends on your setup.
It also depends on size. Some programmers embed encoded images as ASCII in their files to reduce the number of files sent, and these can bloat files. Surely you don't want to package everything into one 50MB file, especially if there are core functions that are needed for the page to load.
So, to bring my response to a close, we'd need more information about your setup, because it depends. Surely 3 files is acceptable regardless of size, combining where you see fit. It probably wouldn't really hurt network traffic, but 50 files is unreasonable. I use the hand rule (no more than 5), and surely you'll see a benefit combining those five 1KB files into one 5KB file.
Two reasons that I can think of:
Less network latency. Each .js file requires another request/response round trip to the server it's downloaded from.
Fewer bytes on the wire and less memory. If it's a single file, you can strip out unnecessary characters and minify the whole thing.
The JavaScript should be designed so that the extra functions don't execute at all unless they're needed.
For example, you can define a set of functions in your script but only call them in (very short) inline <script> blocks in the pages themselves.
My line of thought is that you have fewer requests. When you make a request in the header of the page, it stalls the output of the rest of the page: the user agent cannot render the rest of the page until the JavaScript files have been obtained. Also, JavaScript files download synchronously; they queue up instead of downloading in parallel (at least that is the theory).
I'm wondering if there are any best practices for organizing files on the filesystem for a site that centers around users uploading files. (Not a hosting site like Imageshack, more like addons.mozilla.org)
Or am I over-analyzing this and should put everything in one folder?
I tend to think of user uploads as just another kind of user data, and so it all goes into a database. Obviously, make sure the database you are going to use for this is a good choice for it; for example, a SQL database isn't necessarily right.
If it makes sense, I try to use a url pattern that makes sense in the context of the usage pattern of the site, for example:
example.com/username/users_file.jpg
If there's just no obvious way to do that, and I have to use a surrogate key, I just live with it:
example.com/files/abc123
example.com/files/abc123/
example.com/files/abc123/users_file.jpg
All three are the same file. In particular, the abc123 is all the app needs to look up the file; the extra bit at the end is there so that browsers get a good hint at what the file should be named when it's saved to disk.
Doing it this way means that no matter what the original file is named, it always is unique to the user. Even if the user wishes to upload 100 files with the same name, all are unique.
First (and probably obviously), put the users' files in some dedicated place so they don't risk overwriting other stuff.
Second, if you expect lots of files then you may want to have subfolders. The easiest way to do that is to use the first letter of their filename as the folder.
So if I were to upload "smile.jpg", you could store it there: s/smile.jpg
If you're super popular and still have too many files, you can use more letters. And if you expect to have tons of users and you have tons of servers, you can imagine splitting the work by saving on s.example.com/upload/s/smile.jpg (but really if you have tons of servers then you probably already have a transparent way of sharing storage and load).