Image performance in browsers for background-repeat CSS - html

Does anyone know which of these two options will be better for browser loading time:
background-image:url('1by1px.png');
or
background-image:url('10by10px.png');
A 1px by 1px semi-transparent PNG repeated across the div, or a larger one, say 10px by 10px.
There must be some kind of looping done to display the repeated image in the browser, so I wondered whether the 1px by 1px image causes so much looping to get displayed that it may in fact be slower than a larger image that needs less looping.
Of course the counter-argument is that the file size is smaller for 1x1 compared to 10x10, but smaller isn't necessarily better, because looping many times might not scale as well as looping slightly less often over a larger image.
Does anyone know more about which would be better and how browsers handle situations like this?

When the background image is not repeated, the time required to render depends only on the final scaled image, not the original one.
The image file is compressed in PNG format, but after the browser loads it, it is held as an RGBA bitmap (4 bytes per pixel). When repeating a background (say on Intel x86), the browser's native code would use REP MOVSD to move the bitmap data from RAM to video memory (this is a standard sequence; it may differ across implementations of graphics drivers or specific GPUs).
Assume that the HTML DIV containing the background is 100x100.
For the 1x1 pixel image: the browser has to execute 10,000 'REP MOVSD' instructions.
For the 10x10 image: one call to 'REP MOVSD' renders one pixel row of the image, so each repeated image takes only 10 'REP MOVSD' executions. The total is then only 10x100 executions (10 per image, 100 repeated images), i.e. 1,000 'REP MOVSD' in all.
Therefore, the final background based on the bigger image is rendered faster.
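To see the arithmetic, here is a rough sketch (Python; the one-row-copy-per-tile-row cost model is the assumption made in this answer, not a measured browser internal):

# Cost model from this answer: one row copy ("REP MOVSD") per pixel row
# of every repeated tile. An assumption, not actual browser internals.
def row_blits(div_w, div_h, tile_w, tile_h):
    tiles_x = -(-div_w // tile_w)  # ceiling division: tiles per row
    tiles_y = -(-div_h // tile_h)  # tiles per column
    return tiles_x * tiles_y * tile_h  # one copy per pixel row per tile

print(row_blits(100, 100, 1, 1))    # 10000 row copies with a 1x1 tile
print(row_blits(100, 100, 10, 10))  # 1000 row copies with a 10x10 tile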
More notes:
The above explanation doesn't mean the performance is exactly 10 times better for the 10x10 image. A 'REP MOVSD' (with CX=9999, for example) is only a single CPU instruction, but it still requires 9999x4 bytes to be transferred over the data bus. With 9999 simple 'MOV's, the same amount of data still has to go through the bus; the CPU just has to execute 9998 more instructions. A cleverer browser would create a pre-rendered bitmap of the background with the replicated images; each time it needs to transfer it to video memory, it then needs only 100 'REP MOVSD' (100 being the number of pixel rows in the final background, as assumed above) instead of 10,000 or 1,000.
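The pre-rendering idea can be sketched like this (Python/NumPy; the tile contents and the 100x100 size are the assumptions used above):

import numpy as np

# Tile a 10x10 semi-transparent RGBA image into a full 100x100 buffer
# once; later repaints copy rows from this buffer instead of re-tiling
# the small image every time.
tile = np.zeros((10, 10, 4), dtype=np.uint8)  # 10x10 RGBA tile
tile[..., 3] = 128                            # 50% alpha everywhere

background = np.tile(tile, (10, 10, 1))       # built once: 100x100x4
assert background.shape == (100, 100, 4)
# Each later transfer to video memory is then ~100 row copies, one per
# pixel row of the div, matching the figure quoted above.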

I agree with Paul's answer.
I did a few rough tests with the Google Chrome developer tools recently.
I used different sizes of semi-transparent PNG images on top of a background image and used the page paint time to see how long it takes to refresh the screen.
Here is the result:
Time to refresh without the -webkit-transform hack (rounded):
2x2 image: 65-160 ms
10x10 image: 60-150 ms
100x100 image: 55-135 ms
1000x1000 image: 55-130 ms
Time to refresh with the -webkit-transform hack (rounded):
2x2 image: 40-120 ms
10x10 image: 30-90 ms
100x100 image: 30-90 ms
1000x1000 image: 30-90 ms
Just like Paul said, a bigger image takes a shorter time to refresh than a smaller one.
But it seems to become less effective once the image gets bigger than 10px; I don't see much difference between 100x100 and 1000x1000.
In my opinion, a huge image won't give you a noticeably better result, and it might increase the loading time. So I think any size around 10-100px is good enough for both performance and loading time.
Still, different images might give different results; I think you should test your site with the page paint time tool in the Google Chrome developer tools for accurate results.
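If you want to reproduce a test like this, the semi-transparent test tiles can be generated in a few lines (a sketch using Pillow; the colour and file names are my own choices):

from PIL import Image

# Generate semi-transparent PNG tiles in the sizes used in the test.
for size in (2, 10, 100, 1000):
    tile = Image.new("RGBA", (size, size), (0, 0, 255, 128))  # 50% blue
    tile.save(f"tile_{size}x{size}.png")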

Related

How to do object detection on high resolution images?

I have images of around 2000 x 2000 pixels. The objects I am trying to identify are smaller (typically around 100 x 100 pixels), but there are a lot of them.
I don't want to resize the input images, apply object detection, and rescale the output back to the original size. The reason is that I have very few images to work with, and I would prefer cropping (which would give multiple training instances per image) over resizing to a smaller size (which would give me one input image per original image).
Is there a sophisticated way of cropping and reassembling images for object detection, especially at inference time on test images?
For training, I suppose I would just take random crops and use those for training. But for testing, I want to know if there is a specific way of cropping the test image, applying object detection, and combining the results back to get the output for the original large image.
I guess using several networks simultaneously (I've never tried it) is an option: for example a 4x4 grid of (500+50) x (500+50) crops with respect to the original image, then reassembling at the output stage (probably with NMS at the borders, since you mentioned the targets are dense).
But it's awkward.
One known insight in detection on high-resolution images is altering the backbone with a 'U'-shaped shortcut, which solves some of these problems without resizing the images. Refer to U-Net.
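A minimal sketch of the crop-and-reassemble inference described above (Python; run_detector is a hypothetical stand-in for your trained model, and the 550px windows with 500px stride mirror the '500+50' crops suggested here):

def run_detector(crop):
    # Hypothetical detector: returns a list of (x1, y1, x2, y2, score).
    raise NotImplementedError  # plug your trained model in here

def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def starts(length, win, stride):
    # Window start positions covering the whole axis, edge included.
    last = max(length - win, 0)
    pos = list(range(0, last + 1, stride))
    if pos[-1] != last:
        pos.append(last)
    return pos

def detect_tiled(image, win=550, stride=500, iou_thresh=0.5):
    # Detect on overlapping crops, shift boxes back to full-image
    # coordinates, then greedy NMS to merge duplicates at tile borders.
    h, w = image.shape[:2]
    boxes = []
    for y in starts(h, win, stride):
        for x in starts(w, win, stride):
            for (x1, y1, x2, y2, s) in run_detector(image[y:y+win, x:x+win]):
                boxes.append((x1 + x, y1 + y, x2 + x, y2 + y, s))
    boxes.sort(key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thresh for k in kept):
            kept.append(b)
    return kept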

Tiling a hundred 1px squares VS tiling four 25px squares

This question has more to do with the way a browser handles objects created/rendered by HTML and CSS than with scripting. Say I have a div that is 100 pixels by 100 pixels and I want it to have a nice translucent blue background, but I don't want to use CSS to set the background color to RGBA (and then just adjust the alpha) due to browser compatibility issues. So instead I make a .png file that is a solid translucent blue, set the background image of the div to that PNG file, and then tile it...
I can tile a hundred 1px image squares.
or
I can tile four 25px image squares.
Both will create the same effect, except the 1px image square will download MUCH quicker than the 25px image square... but I'm wondering whether having 100 image squares on the screen will lag the browser more than having only 4 larger images on the screen. Does the browser itself create a new reference for each image tile and then have to keep track of them all and update the position of them all?
It seems like putting 100,000 1px by 1px images on a web page would lag more than putting one 100,000px by 100,000px image on the screen, especially if the user is scrolling up or down. Right?
The task of repeating the image n times is up to the user's machine to execute, so it depends on the processing power of the average user's machine.
If you consider that on average you'll get 2-3 GHz of processing power out of the average user's computer, and compare that with an average download rate of 10 Megabits/sec, I would say the bottleneck is download speed. These days network lag is almost negligible, so the difference would be so small that it's not really worth worrying about; but nevertheless, the lag of downloading a larger image would probably be worse than the lag of the processor tiling the image for the browser.
Whether you tile a larger image 20 times or a smaller image 200 times, the browser uses the same execution loop to perform the tiling, just with more iterations in the latter case, so I think the processor would be much more efficient than the network.
Also, if you can achieve the same effect with smaller images, it's more thoughtful and polite to your users to use the absolute minimum bandwidth possible, given these insane ISP prices and bandwidth caps.
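A back-of-envelope version of that comparison (a sketch; the 10 Mbit/s figure comes from this answer, every other constant is an illustrative guess):

# Rough comparison: download a big pre-tiled image once, vs. download
# a tiny tile and let the CPU repeat it. All constants are guesses.
bandwidth_Bps = 10e6 / 8        # 10 Mbit/s link, in bytes per second
cpu_pixels_per_sec = 1e8        # assumed CPU fill rate while tiling

big_image_bytes = 100_000       # assumed file size of a full-size image
tiny_tile_bytes = 200           # assumed file size of a small tile
area_pixels = 1000 * 1000       # area to cover either way

big = big_image_bytes / bandwidth_Bps
small = tiny_tile_bytes / bandwidth_Bps + area_pixels / cpu_pixels_per_sec
print(f"big image: ~{big*1000:.0f} ms, tile+repeat: ~{small*1000:.0f} ms")
# -> big image: ~80 ms, tile+repeat: ~10 ms (with these assumptions)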

HTML Image size

I am exporting my web images from PSD with the 'Save For Web' option, using its quality and format controls according to the standards, but when I check my image sizes they are heavier than expected. I see the image sizes on other websites; their GIF images are tiny. E.g. a bullet arrow GIF image (12px x 48px) is around 90 bytes, but when I create the same image in PSD and export it as GIF, its size goes up to 1 KB. I just wanted to know whether there is any other way to export or create images for the web to get a smaller size?
You can check how many colors you are using.
If you create a GIF with fewer colors, you get a smaller GIF.
The size also depends on how many colors are in the image; maybe the 90-byte GIF is pure black and white and yours has more colors.
When you Save For Web in Photoshop, there are a few variables you can play with to get a smaller size, which you already use. I'll explain some of them for other users who find this question, in case you already know how they work:
Lossy: Lossy compression lets you squeeze more bytes out. A higher value produces lower image quality and a smaller file.
Colors: More colors used in the image means a bigger file, as aleixventa says in his answer.
Web Snap: This is the option to convert the colors of the image to web-safe colors. More web-safe colors result in a smaller image.
Dither combobox ('No Dither'): Dithering mixes pixels of the gradients to simulate the missing colors. You can find more specific info about this feature here. By the way, a lower percentage of diffusion dithering results in a smaller file.
With all these variables in mind, you can watch the total file size while playing with them.
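Outside Photoshop, the effect of the colour count is easy to see with a small script (a sketch using Pillow; 'arrow.png' is a placeholder for your exported image):

import os
from PIL import Image

img = Image.open("arrow.png").convert("RGB")  # placeholder input file
for colors in (256, 32, 8, 2):
    out = f"arrow_{colors}.gif"
    img.quantize(colors=colors).save(out, optimize=True)  # adaptive palette
    print(colors, "colours:", os.path.getsize(out), "bytes")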

downscale large 1 bit tiff to 8 bit grayscale / 24 bit

Let's say I have a 100000x100000 1-bit (K channel) TIFF with a dpi of 2000, and I want to downscale this to a dpi of 200. My resulting image would be a 10000x10000 image. Does this mean that every 10 bits in the 1-bit image correspond to 1 pixel in the new image? By the way, I am using libtiff and reading the 1-bit TIFF with TIFFReadScanline. Thanks!
That means every 100 bits in the 1-bit image correspond to 1 pixel in the new image: you need to average the values over a 10x10 1-bit pixel area. For smoother grayscales, you'd better average over at least as many source bits as your target pixel has levels, partially overlapping each sampled area with its neighbours (16x16 px squares spaced 10 px apart, so their borders overlap, give a smooth 8-bit grayscale).
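In code, the plain 10x10 averaging looks like this (a sketch with NumPy and Pillow rather than libtiff; it assumes the dimensions divide evenly by 10, and 'scan.tif' is a placeholder name):

import numpy as np
from PIL import Image

# Average each 10x10 block of a 1-bit scan into one 8-bit grayscale
# pixel, i.e. 2000 dpi -> 200 dpi. convert("L") maps bits to 0/255.
src = np.asarray(Image.open("scan.tif").convert("L"), dtype=np.float32)
h, w = src.shape
gray = src.reshape(h // 10, 10, w // 10, 10).mean(axis=(1, 3))
Image.fromarray(gray.astype(np.uint8)).save("scan_200dpi.tif")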
It is important to understand why you want to downscale (because of the output medium, or because of file size?). As SF pointed out, colors/grayscale are somewhat interchangeable with resolution. If it is only about file size, lossless/lossy compression is also worth looking at.
The other thing is to understand a bit about the characteristics of your source image. For instance, if the source image is rasterized (as with newspaper images) you may get awkward patterns because the dot matrix gets messed up. I once tried to restore an old newspaper image and found it a lot of work. I ended up converting it to grayscale first before enhancing the image.
I suggest experimenting a bit with VIPS or IrfanView to find the best results (i.e. what the effect of a certain resampling algorithm is on your image quality). The reason for these programs (over e.g. Photoshop) is that you can experiment with the GUI/command line while staying aware of the names/parameters of the algorithms behind it. With VIPS you can control most if not all of the parameters.
[Edit]
TiffDump (supplied with LibTiff binaries) is a valuable source of information. It will tell you about byte ordering etc. What I did was to start with a known image. For instance, LibTIFF.NET comes with many test images, including b&w (some with 0=black, some with 1=black). [/Edit]

Repeating website background image - size vs speed

I was wondering if anyone has done any tests with background images. We normally create a background that repeats in at least one direction (x, y, or both).
Example
Let's say we have a gradient background that repeats in the X direction. The gradient height is 400px. We have several possibilities: we can create as small an image as possible (1 pixel wide and 400 pixels high), or we can create a larger image that is also 400 pixels high.
Observation
Since the gradient is 400 pixels high, we probably won't choose the GIF format, because it can only store 256 adaptive colours. Maybe that's enough if our gradient is subtle, since it doesn't have that many colours, but otherwise we'll probably store the image as a 24-bit PNG to preserve complete gradient detail.
Dilemma
Should we create an image of 1x400 px that will be repeated n times horizontally, or should we create an image of 100x400 px to speed up rendering in the browser, at the cost of a larger image file?
So: image size vs. rendering speed, which one wins? Anyone care to test this? With regard to browser rendering speed and possible small-image redraw flickering...
The rendering speed is the bottleneck here, since bigger tiles can be put into the browser's cache.
I've actually tried this in the major browsers, and at least some of them rendered noticeably slowly with very small tiles.
So if increasing the bitmap size does not result in a ridiculously big file, I would definitely go with that. Test it yourself and see. (Remember to include IE6, as many people are still stuck with it.)
You might be able to strike a good balance between bitmap size and file size, but in general I'd try 50x400, 100x400, 200x400 and even 400x400 pixels.
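Generating the candidate tiles for such a test is easy to script (a sketch with Pillow; the grayscale gradient and file names are arbitrary choices):

import os
from PIL import Image

# Build a 400px-high vertical gradient at several tile widths and
# report each PNG's file size, to weigh bytes against render speed.
for w in (1, 50, 100, 200, 400):
    img = Image.new("RGB", (w, 400))
    for y in range(400):
        shade = int(255 * y / 399)  # black at top, white at bottom
        for x in range(w):
            img.putpixel((x, y), (shade, shade, shade))
    name = f"gradient_{w}x400.png"
    img.save(name)
    print(name, os.path.getsize(name), "bytes")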
I found that there can be a huge difference in browser rendering performance if you have a background image with a width of 1px and repeat it. It's better to have a background image with slightly larger dimensions; an image with a width of 100px performs much better in the browser. This especially comes into play when you use a repeated background image in a draggable layer on your website: drag performance is pretty bad with an often-repeated background image.
I'd like to point out the cost of sending down a few extra pixel columns (only 1-2 in this example): 0.8-1.6 KB if you can get away with 8-bit, more like 2.4-4.0 KB for 24-bit.
Two extra pixel columns mean the iterations required to blit the background in are cut down to 1/3, for up to 1.6 KB (8-bit) or 4 KB (24-bit) extra.
Even 1 extra column halves the blitting required, down to half the element width.
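Spelled out, the iteration arithmetic from this answer looks like this (a sketch; the 1000px element width is an arbitrary example):

import math

# Horizontal repeats needed to cover a 1000px-wide element with a
# 400px-high gradient tile, plus the raw 24-bit cost of extra columns.
element_w, tile_h = 1000, 400
for tile_w in (1, 2, 3, 100):
    repeats = math.ceil(element_w / tile_w)
    extra_kb = (tile_w - 1) * tile_h * 3 / 1024  # uncompressed 24-bit
    print(f"{tile_w}px wide: {repeats} repeats, ~{extra_kb:.1f} KB extra")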
If the background's done in less than a second for a 56.6k modem I reckon it's lean enough.
If small dimensions of an image have a negative impact on rendering, I'm sure any decent browser would blit the image internally a few times before tiling.
That said, I tend not to use 1 pixel image dimensions, so I can see the image clearly without resizing it. PNG compression is good enough to handle this at very little cost to file size, in most situations.
I'd put money on the bottleneck being the image download rather than the rendering engine doing the tiling, so go for the 1 pixel wide option.
Also, calling it a 24-bit PNG is somewhat redundant, since you're still only getting 8 bits per channel (red, green and blue).
I generally prefer to go in between: 1px wide will probably make your gradient look a bit unclear, but something like 5px width gives the gradient enough room to maintain consistency and clarity across the page. I would also suggest adding more patterns and images to a single image and then using background positioning (CSS sprites) to position them, because downloading a single image of, say, 50 KB takes less time than five 40 KB images, since the browser makes fewer requests to the server...
I have not benchmarked this, but I'd bet that on most modern computers the rendering won't be an issue, whereas you always want to save on the image download time. I often go for the 1px type of graphics.