Anyone know if adding a tracking pixel (just a transparent GIF, with some logging going on at the server that sends out the GIF) as a background-image in a linked stylesheet will work the same as if it were a regular <img> on a page?
From a simple test I did in Chrome's network inspector it looks like it's just pulling images from local cache if I go into the location bar and hit enter. If I actually click Reload then I see it go out to the server and get back a 304. And if I shift+reload it'll force it to go back to the server for real and I get 200s.
But it looks like the behavior is the same for the <img> images on the page as well, so a background-image should behave the same as it would if it were an <img>.
It should work exactly the same way. But tracking pixels are not decorative, and they are usually specific to a single page, so it is easier and a little more reliable to put an <img> tag in the HTML; it will also work in text browsers, etc.
Having a tracking pixel called via CSS would be possible, but there would be some challenges compared to using a tracking pixel in HTML:
Tracking pixels commonly have visitor data detected by JavaScript (monitor size, for instance) added to the pixel's URL. JavaScript would be unable to (easily) do this if the pixel were called via a stylesheet; see the sketch after this list.
When the tracking pixel is in the page source, it can sit at the bottom of the page and be modified by code that executes first. This would be difficult to do with a single pixel called via CSS.
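For comparison, here's a minimal sketch of the usual HTML/JavaScript approach that appends JS-detected data to the pixel URL; the /pixel.gif endpoint and the parameter names are hypothetical. A pixel referenced only from a stylesheet has no comparable hook for this.
// Sketch only: build the tracking URL with JS-detected values, then request the GIF.
// The endpoint and query parameters below are made up for illustration.
var px = new Image(1, 1);
px.src = '/pixel.gif' +
    '?w=' + encodeURIComponent(screen.width) +
    '&h=' + encodeURIComponent(screen.height) +
    '&ref=' + encodeURIComponent(document.referrer);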
Related
I need to implement image previews via thumbnails. Now I'm not certain how to implement it.
One option would be to do it straight with CSS, so something like this:
.thumbnail {
  background-image: url(path/to/image);
  background-size: 300px 400px;
}
Is that practical? Are there any performance issues?
If you scale an image in CSS to be smaller than it actually is, the browser still has to download the same amount of data. You should create a separate file with the thumbnail.
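As a hedged sketch of what "create a separate file" could look like, assuming a Node.js build step with the sharp library (the file names and the 300x400 size are just placeholders matching the CSS above):
// Sketch: write a real 300x400 thumbnail file instead of downscaling the
// full-size image in CSS; paths here are hypothetical.
const sharp = require('sharp');

sharp('images/photo.jpg')
  .resize(300, 400)
  .toFile('images/photo-thumb.jpg')
  .then(() => console.log('thumbnail written'));
The .thumbnail rule would then point background-image at the smaller file, so the browser only downloads thumbnail-sized data.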
With images, you need to focus on three things: size, format and the src attribute.
Image size
Oversized images take longer to load, so it’s important that you keep your images as small as possible. Use image editing tools to:
Crop your images to the correct size. For instance, if your page is 570px wide, resize the image to that width. Don't just upload a 2000px-wide image and set the width attribute (width="570"). This slows your page load time and creates a bad user experience.
Reduce color depth to the lowest acceptable level.
Remove image comments.
Image format
JPEG is your best option.
PNG is also good, though older browsers may not fully support it.
GIFs should only be used for small or simple graphics (less than 10×10 pixels, or a color palette of 3 or fewer colors) and for animated images.
Do not use BMPs or TIFFs.
Src attribute
Once you've got the size and format right, make sure the code is right too. In particular, avoid empty image src attributes.
In HTML, the code for an image includes this: <img src="">
When there's no source in the quotation marks, the browser makes a request to the directory of the page or to the actual page itself. This can add unnecessary traffic to your servers and even corrupt user data.
Pro Tip: Take time to re-size your images before uploading them. And always include the src attribute with a valid URL.
To ensure your images load quickly, consider adding the WP Smush.it plugin to your website.
from: http://blog.crazyegg.com/2013/12/11/speed-up-your-website/
I uploaded a PNG image with alpha transparency to my server with the filename logo.png; however, when the image is used on a page and I go to look at its file path, I get:
http://example.com/i/300x300xlogo.png.pagespeed.ic.0A66xVq4G9.png
That file is nowhere to be seen on the server.
I don't know if it affects it, but the actual image size is 400px by 400px, and I have it displayed in the HTML as 300px by 300px. (Don't ask.)
Could it be my web host doing this? It seems to prefix the file name with the resolution it's displayed at (300x300x) and also adds that weird suffix: .pagespeed.ic.0A66xVq4G9.png
The URL you're seeing is as intended: mod_pagespeed rewrites the image URL to an optimized name and embeds the size, as well as a fingerprint of the content, into the filename.
If you're downscaling the image in HTML from the original size, you're wasting the user's bandwidth, which is especially painful on mobile. The advantage of mod_pagespeed is that it can do this resizing for you on the fly: simply specify the desired dimensions in the HTML and it will do the right thing.
You can read more about the various image optimizations done by mod_pagespeed here:
https://developers.google.com/speed/docs/mod_pagespeed/filter-image-optimize
What you're describing as "loading slower" sounds like "image lazyload":
https://developers.google.com/speed/docs/mod_pagespeed/filter-lazyload-images
The images are deferred until onload fires, which helps get the page painted faster to the screen. You can also configure mod_pagespeed to do this on-scroll, such that only visible images are loaded. Finally, you can also disable this specific filter, but keep the image optimization done above.
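For example, to keep the image optimization but turn off lazy-loading, something along these lines in the Apache configuration should work (this is a sketch assuming the ModPagespeed directive prefix used by the Apache module; check the docs for your version):
# Sketch: leave image rewriting on, but disable the lazyload filter
ModPagespeed on
ModPagespeedDisableFilters lazyload_images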
P.S. mod_pagespeed is no longer beta; the team recently shipped 1.0.
Solved! Google Pagespeed was enabled by default in the web host's control panel. Hopefully this will help anyone else.
I noticed that images would actually load slower; they would flash once the document finished loading. (Google Pagespeed is beta though.)
I have a page with a series of divs. Each div represents a slide in a slide deck. I need a series of thumbnails, one for each slide. Ideally these thumbnails would be rasterized versions of the slides: a PNG data: url would be perfect. I'd like the work to be done in the browser, and I'm okay with things that only work in one of the modern browsers (e.g. chrome, or firefox). I suspect this is impossible, but would love to hear otherwise.
The method toDataURL() on the canvas object is essentially what I want, but the divs in question aren't instances of canvas, so that won't work.
One solution can be to render HTML to a canvas by embedding the HTML into an SVG image as a <foreignObject> and then drawing the resulting image via ctx.drawImage().
Read the article on MDN here, or take a look at rasterizeHTML.js which is an implementation of said approach.
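As a rough sketch of the foreignObject approach (assuming the slide markup is same-origin and well-formed; the selector and dimensions are placeholders):
// Sketch: serialize the div into an SVG <foreignObject>, draw it onto a canvas,
// then read back a PNG data: URL. The .slide selector and 400x300 size are made up.
var slide = document.querySelector('.slide');
var svg = '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">' +
          '<foreignObject width="100%" height="100%">' +
          '<div xmlns="http://www.w3.org/1999/xhtml">' + slide.innerHTML + '</div>' +
          '</foreignObject></svg>';
var url = URL.createObjectURL(new Blob([svg], { type: 'image/svg+xml;charset=utf-8' }));
var img = new Image();
img.onload = function () {
    var canvas = document.createElement('canvas');
    canvas.width = 400;
    canvas.height = 300;
    canvas.getContext('2d').drawImage(img, 0, 0);
    URL.revokeObjectURL(url);
    var thumbnail = canvas.toDataURL('image/png'); // the PNG data: URL you're after
};
img.src = url;
Note that external stylesheets and images have to be inlined for the SVG to render faithfully; that is essentially the work rasterizeHTML.js does for you.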
The limitations are that your content should all be same-origin clean (i.e. accessible by AJAX).
Disclaimer: I am the author of rasterizeHTML.js.
It isn't possible on the client side, as this is forbidden to protect against potential fraud, for example a script that takes a screenshot of your page with some private data and sends it who knows where.
Although...
it is possible to copy the whole HTML to use as a thumbnail preview and apply a CSS3 transform (scaling), plus an overlay on top to prevent interaction/text selection etc., to mimic a thumbnail of a div (see the sketch below)
and there was an option in Firefox/Chrome extensions to save a page as an image, though I'm not sure if it was possible to capture only part of the page
or you can always do what Google does on its search results page with its page previews (click the magnifying lens near the result title): have a robot visit the page and take a screenshot to produce the preview. I don't know how much you WANT to do that, but if you wanted it that badly... :)
I'm afraid there is no easy way to do what you want, and the CSS3 trick seems to be the easiest one to pull off.
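For what it's worth, a minimal sketch of that CSS3 trick (the class names and sizes are made up): scale a cloned slide down with a transform and cover it with a transparent overlay:
/* Sketch: thumbnail made by scaling a cloned slide; names and sizes are hypothetical */
.thumb {
    position: relative;
    width: 150px;
    height: 100px;
    overflow: hidden;
}
.thumb .slide-clone {
    width: 600px;             /* original slide size */
    height: 400px;
    transform: scale(0.25);
    transform-origin: 0 0;
}
.thumb .overlay {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;             /* transparent layer blocks clicks and text selection */
}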
You can rasterize HTML to a <canvas> element in JavaScript with the rasterizeHTML library:
http://cburgmer.github.io/rasterizeHTML.js/
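Usage is roughly like this, assuming the promise-based drawHTML API described in the project's docs (the selector and canvas size are just examples):
// Sketch: render a slide's markup onto a canvas with rasterizeHTML.js,
// then pull out a PNG data: URL.
var slide = document.querySelector('.slide');
var canvas = document.createElement('canvas');
canvas.width = 400;
canvas.height = 300;
rasterizeHTML.drawHTML(slide.outerHTML, canvas).then(function () {
    var thumbnail = canvas.toDataURL('image/png');
});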
I just started using the chrome developer tools for some basic html websites and I used the audit tool.
I had two identical images, one with the height and width attribute, and one without. On the Resources section, both the latency and the download time were identical. However, the Audit showed
Specify image dimensions (1)
A width and height should be specified for all images in order to speed up page display.
Does this actually help? And are there any other ways to speed up page time?
This is only a splash page for the website I am building and as such it is only html, no css or javascript or anything. I have already compressed the images but I want to speed up load time even more. Is there a way?
Generally speaking: If you specify the image dimensions in the <img> tag, the browser will know how much space to allot for it and will proceed to render the rest of the page while simultaneously downloading the image. Otherwise, the browser will have to wait a few more milliseconds to get the size of the image from the image itself before rendering the rest of the page.
Since you have only a splash page, I doubt that there will be anything else for the browser to render, so it doesn't much matter whether you specify dimensions or not.
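If you do decide to keep them, it's just the width and height attributes on the tag; the values below are examples:
<!-- Sketch: explicit dimensions let the browser reserve the space before the image arrives -->
<img src="splash.jpg" width="570" height="380" alt="Splash image">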
I'm working on a website that's already been designed by someone else. The designer has used a big image (900x700 100KB) which contains a big logo right across the top, then the background for two columns.
This image loads every time a page is loaded as it forms the basis for the website. What should I do with it to improve loading time?
I'm considering splitting it up into two or more images, especially the logo on the top. Does splitting up images like that decrease loading time in any significant way?
Thanks
Edit: Also, all the images are .jpg; would changing them to .gif or .png help anything?
Things to try:
Repeating backgrounds: If part of it can be broken off into a repeating image, you can reduce file size a lot that way (divide the image into the parts that don't repeat, such as a logo, and the parts that do, and then use CSS to repeat it as a background); see the CSS sketch after this list.
Caching: You should look into whether the image is properly being cached or not. There's no reason it should reload on every page request. If the HTTP headers correctly allow caching, then the browser will not request it again until the image is cleared from its cache.
See http://www.mnot.net/cache_docs/ for some info about cache control with HTTP headers.
Use the Web Developer toolbar for Firefox to check the headers for the image (hit the image URL, then click Information > View Response Headers)
File quality or type: Also, you may be able to use Photoshop to resave the image in either a different format or lower quality. GIF and JPG images can have greatly different file sizes for the same image depending on the content of the image (GIF is very good for graphics with limited and/or repeating colors, especially when large portions of the same color run in consecutive horizontal pixels). If the image is photo-like, JPG can be very good because high compression can be disguised in very detailed images.
Crunching the image (removing unnecessary metadata and finding a more efficient compression algorithm) with a decent image compressor is a good start, as is reducing the number of colors and changing the format (GIF or PNG24 to PNG8) when possible.
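Regarding the repeating-background idea above, a minimal sketch of splitting the image up (file names and sizes are hypothetical): keep the logo as one small image and tile a thin slice of the column background:
/* Sketch: logo as a one-off image, column background as a 1px-tall slice tiled vertically */
#header {
    background: url(images/logo.png) no-repeat;
}
#columns {
    background: url(images/columns-slice.png) repeat-y;
}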
This may be totally obvious, but... defer it until the page is fully loaded (if the contrast of colors in the background image is not needed to make the foreground text readable).
An easy way to do this is to make the CSS selector for the background image dependent upon a class on the body, like this:
...
<style type="text/css">
/*<![CDATA[*/
body.page-loaded{background:url(/path/to/image.jpg)}
/*]]>*/
</style>
</head>
<body onload="document.body.className+=' page-loaded';">
...
Of course, the "onload" attribute in the body tag should be migrated out (to a SCRIPT tag at the bottom of the page or to an external JavaScript file). Also, this code doesn't require any JS library to run; it should probably use an event observer instead of the inline handler.
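For instance, a small hedged version using an event observer instead of the inline attribute (no library required; older IE would need attachEvent):
// Sketch: add the class from a script at the bottom of the page or an external file
window.addEventListener('load', function () {
    document.body.className += ' page-loaded';
});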
The two things you want to do are:
Convert it to PNG (10-30% smaller than GIF on average); and
Cache it effectively.
The way to cache it is to version it and use a far future Expires header. For versioning I generally just use the mtime (last modified time) of the file as a query string:
body { background-image: url(/images/background.png?1232343455); }
See Speed Tips: Add Future Expires Headers for adding the Expires header. You can do it with a script or with Web server configuration. The reason you need the version is that you can change it to force a reload of the file; otherwise you would need to rename the file whenever you want to change it.
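The answer doesn't say what generates that query string; as one hedged example, a small Node.js build-time helper could derive it from the file's mtime (the paths are placeholders):
// Sketch: append the file's last-modified time as the cache-busting version
var fs = require('fs');

function versionedUrl(publicPath, filePath) {
    var mtime = Math.floor(fs.statSync(filePath).mtime.getTime() / 1000);
    return publicPath + '?' + mtime;
}

// e.g. emits: body { background-image: url(/images/background.png?1232343455); }
console.log('body { background-image: url(' +
    versionedUrl('/images/background.png', 'public/images/background.png') + '); }');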
This way the background will only get downloaded once. Splitting the file into several will actually worsen the situation as browsers tend to limit the number of concurrent downloads and you'll be downloading more data overall (more HTTP headers plus image file overhead).
Putting the image on a CDN may also help (along with the other things mentioned), because people may load images from the CDN faster than your server.