Usage of encoded images - html

Recently I found out about encoded images (base64 strings) and it seems really nice. I have a few questions about it:
Should I make all of my images encoded?
If I have a photo encoded, do I have to keep the photo on my website inside some directory?
How much faster is this? Is it really worth converting every image and using it as a string?
If I have a gallery, should I use encoded images, or just keep it as it is, which in some cases results in hundreds of HTTP requests?
Thanks in advance!

Should I make all of my images encoded?
NO, some browsers have limits.
From Mozilla Developer Network:
Although Mozilla supports data URIs of essentially unlimited length, browsers are not required to support any particular maximum length of data. For example, the Opera 11 browser limits data URIs to around 65000 characters.
If I have a photo encoded, do I have to keep the photo on my website inside some directory?
NO, you won't need the original image; once you encode it, you only need the encoded string.
How much faster is this? Is it really worth converting every image and using it as a string?
It will save you HTTP requests, but you shouldn't convert every image.
If I have a gallery, should I use the encoded images, or just keep it as it is which results in some cases in hundreds of HTTP requests?
NO, you shouldn't; take a look at lazy loading instead.

Should I make all of my images encoded?
Not necessarily. With base64 encoding the images are typically ~33% larger than the original resources, they are not cached like regular images, and they always need to be decoded back into images.
Thus it's better to use this technique only to reduce the number of requests for a few small images. Besides, older browsers like IE7 don't accept base64 encoding.
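The ~33% figure follows directly from how base64 works: every 3 input bytes become 4 output characters. A minimal sketch, using a run of arbitrary bytes as a stand-in for a real image file:

```python
import base64

# Base64 maps every 3 input bytes to 4 output characters,
# so the encoded form is ~33% larger than the original.
raw = bytes(range(256)) * 8  # 2048 bytes standing in for an image file
encoded = base64.b64encode(raw)

print(len(raw))                  # 2048
print(len(encoded))              # 2732
print(len(encoded) / len(raw))   # ~1.33
```

On top of that growth, the encoded string rides along inside the HTML/CSS, so it is re-downloaded every time that document is, instead of sitting in the browser's image cache.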
If I have a photo encoded, do I have to keep the photo on my website inside some directory?
No
How much faster is this? Is it really worth converting every image and using it as a string?
No; see the first answer.
If I have a gallery, should I use the encoded images, or just keep it as it is which results in some cases in hundreds of HTTP requests?
I wouldn't recommend this. You should instead consider using an image preloader or lazy loading.

Related

At what point is requesting an image file faster than base64 encoding it inline?

I'm trying to come up with guidelines or performance testing that will help me choose which images to render inline as base64 encoded strings, and which should be requested as files from a cdn or similar.
Determining request time and delayed render is fairly straightforward when gauging the performance of the requested image, but I can't get a good read on the render time of inlined images with the Chrome console. Obviously, inline smaller images and request larger ones as files, but what is a good cutoff point?
For example, if an image is 2kb in size, and requesting it as a file it takes 100ms, how can I tell how long it takes to render the inline version of the same image?
It will always be faster to render an inline base64-encoded string: a network request will always take longer than the CPU time needed to decode base64. The question you should ask yourself is about the tradeoff of when you want to download the bytes: in the payload of the HTML, or later in the payload of a separate HTTP request. The more you add to the HTML, the longer your page load time will be. The benefit of downloading the image instead of inlining it is that, if you don't need it to display right away, you can defer it through an asynchronous fetch.
So ask yourself if it's more important to show the image ASAP or is it more important for the page to be ready to use sooner without the image? Same tradeoff discussion for inlining in CSS as well.
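To make the HTML-payload cost concrete for the 2 kB example in the question: a data URI is just a MIME-type prefix plus the base64 text, so the inline size can be computed directly. A sketch, with placeholder bytes standing in for the real file:

```python
import base64

# In real code you'd read the actual file:
#   raw = open("photo.png", "rb").read()
raw = bytes(2048)  # placeholder: a 2 kB "image"

# A data URI is just a MIME-type prefix plus the base64 text.
uri = "data:image/png;base64," + base64.b64encode(raw).decode("ascii")

# 2048 bytes -> 2732 base64 characters, plus 22 characters of prefix,
# all carried in the HTML payload instead of a second request.
print(len(uri))  # 2754
```

So the 2 kB image costs roughly 2.7 kB of extra HTML that must arrive before the page can finish loading, versus a 100 ms request that could be deferred.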

What ways are there to send images (png, jpg...) with JSON?

I know of base64, and those can easily be sent through JSON, as they are strings. Are there some other ways? Is it possible to send it as a bitmap, and how effective would that be? Thank you.
Base64 is the only direct method, but I'd avoid it for images larger than a couple of kB; it clutters what are usually short, succinct messages.
Normally I would prefer to host it and provide the URLs for the image.
I don't think so. Base64 is the only way to send images in JSON, as base64 can be sent in string format.
And base64 images are ~37% larger than the original image in size, so if this is avoidable and image URLs can be sent, then please prefer that.
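A round-trip sketch of the base64-in-JSON approach, with arbitrary bytes standing in for a real image file and made-up field names:

```python
import base64
import json

# Sender: encode the image bytes so they fit in a JSON string.
image_bytes = bytes(range(256))  # stand-in for open("photo.jpg", "rb").read()
payload = json.dumps({
    "name": "photo.jpg",
    "data": base64.b64encode(image_bytes).decode("ascii"),
})

# Receiver: decode the string back into the original bytes.
received = json.loads(payload)
restored = base64.b64decode(received["data"])
assert restored == image_bytes
```

Raw bitmap bytes can't go into JSON directly, because JSON strings must be valid Unicode text; base64 (or a URL pointing at the hosted file) sidesteps that.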

Use a base64 embedded image multiple times

I have a couple small images in an HTML document that I want to make portable, e.g. still works when emailing. I use the following, which works great:
<img src="data:image/png;base64,..."/>
Problem is, I want to use the same image many times in the document, but don't want to repeat the entire base64 data string. I have seen in emails where the data is encoded a single time, but referenced many. Is this possible with HTML?
If you can use CSS, you could place it there instead, as a class.
Then just add the class to the elements you want.
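A sketch of that idea: the data URI is emitted once, inside a CSS rule, and every element that needs the image just carries the class (the class name and the tiny placeholder "image" are made up for illustration):

```python
import base64

# The image data appears exactly once, in the stylesheet...
data_uri = "data:image/png;base64," + base64.b64encode(bytes(64)).decode("ascii")
css = ".thumb { background-image: url(%s); }" % data_uri

# ...while the document references it any number of times by class.
html = '<span class="thumb"></span>' * 10

page = "<style>%s</style>%s" % (css, html)
assert page.count(data_uri) == 1
assert page.count('class="thumb"') == 10
```

One caveat for the email use case: many email clients strip or ignore `<style>` blocks, so test in the clients you care about.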
Configure your webserver to gzip (/deflate) your content. Deflate should detect the repeating string and compress the page to about the same size as if you had included the image only once, so you won't waste bandwidth. This doesn't work for email, or for a plain HTML file on the filesystem.
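The deflate claim is easy to check with Python's zlib module, which implements the same DEFLATE algorithm that gzip uses on the wire; the tag and image bytes below are made up for illustration:

```python
import base64
import zlib

# One <img> tag with a ~1.4 kB base64 payload (made-up bytes).
chunk = base64.b64encode(bytes(range(256)) * 4).decode("ascii")
once = ("<img src='data:image/png;base64,%s'/>" % chunk).encode("ascii")
ten_times = once * 10

# Deflate finds the repeats: ten copies compress to barely more than one.
print(len(zlib.compress(once)), len(zlib.compress(ten_times)))
```

Note DEFLATE's back-reference window is 32 kB, so very large or widely separated copies of the string won't deduplicate this well.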

How to display all non-English characters correctly in a web site?

It's annoying to see even the most professional sites get it wrong: posted text turns into something unreadable. I don't have much information about encodings. I just want to know about the problem that makes such a basic thing so hard.
Does HTTP encoding limit some characters?
Do users need to send info about the charset/encoding they are using?
Assuming everything arrives at the server as it is, is the encoding used to save that text causing the problem?
Is it something about browser implementations?
Do we need some JavaScript tricks to make it work?
Is there an absolute solution to this? It may have its limits but StackOverflow seems to make it work.
I suspect one needs to make sure that the whole stack handles the encoding with care:
Specify a web page font (CSS) that supports a wide range of international characters.
Specify correct lang/charset HTML tag attributes and make sure the browser is using the correct encoding.
Make sure HTTP requests are sent with the appropriate charset specified in the headers.
Make sure the content of the HTTP requests is decoded properly in your web request handler.
Configure your database/datastore with an internationalization-friendly encoding/collation (such as UTF-8/UTF-16) and not one that only supports Latin characters (the default in some DBs).
The first few are normally handled by the browser and web framework of choice, but if you screw up the DB encoding or use a font with limited character set there will be no one to save you.
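The classic failure mode is a perfectly valid UTF-8 byte stream decoded with the wrong charset by one layer of the stack; a minimal sketch:

```python
text = "naïve café"

# Sent correctly as UTF-8 bytes...
encoded = text.encode("utf-8")

# ...but decoded by a layer that assumes Latin-1: classic mojibake.
garbled = encoded.decode("latin-1")
print(garbled)  # naÃ¯ve cafÃ©

# Decoded with the charset it was actually written in, the text survives.
assert encoded.decode("utf-8") == text
```

The `Ã©`-style debris you see on broken sites is exactly this: multi-byte UTF-8 sequences reinterpreted one byte at a time.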

Converting Source ASCII Files to JPEGs

I publish technical books, in print, PDF, and Kindle/MOBI, with EPUB on the way.
The Kindle does not support monospace fonts, which are kinda useful for source code listings. The only way to do monospace fonts is to convert the text (Java source, HTML, XML, etc.) into JPEG images. More specifically, due to pagination issues, a given input ASCII file needs to be split into slices of ~6 lines each, with each slice turned into a JPEG, so listings can span a screen. This is a royal pain.
My current mechanism to do that involves:
Running expand to set a consistent 2-space tab size, which pipes to...
a2ps, which pipes to...
A small Perl snippet to add a "%%LanguageLevel: 3\n" line, which pipes to...
ImageMagick's convert, to take the (E)PS and make a JPEG out of it, with an appropriate background, cropped to 575x148+5+28, etc.
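The first two steps of a pipeline like this, tab expansion and the ~6-line pagination, are simple enough to sketch in Python before anything is handed off to a2ps/convert (the function name and slice size are illustrative, matching the ~6 lines described above):

```python
# Expand tabs to a consistent 2-space width, then split the listing
# into ~6-line slices, one per output image, so listings can span screens.
def slice_listing(source, tab_size=2, lines_per_slice=6):
    lines = [line.expandtabs(tab_size) for line in source.splitlines()]
    return ["\n".join(lines[i:i + lines_per_slice])
            for i in range(0, len(lines), lines_per_slice)]

listing = "\n".join("line %d" % n for n in range(1, 15))  # a 14-line example
slices = slice_listing(listing)
print(len(slices))  # 3 slices: 6 + 6 + 2 lines
```

Keeping this step in one script also makes it easy to avoid slicing mid-statement later, if smarter break points are ever needed.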
That used to work 100% of the time. It now works 95% of the time. The rest of the time, I get convert: geometry does not contain image errors, which I cannot seem to get rid of, in part because I don't understand what the problem is.
Before this process, I used to use a pretty-print engine (source-highlight) to get HTML out of the source code...but then the only thing I could find to convert the HTML into JPEGs was to automate screen-grabs from an embedded Gecko engine. Reliability stank, which is why I switched to my current mechanism.
So, if you were you, and you needed to turn source listings into JPEG images, in an automated fashion, how would you do it? Bonus points if it offers some sort of pretty-print process (e.g., bolded keywords)!
Or, if you know what typically causes convert: geometry does not contain image, that might help. My current process is ugly, but if I could get it back to 100% reliability, that'd be just fine for now.
Thanks in advance!
You might consider html2ps and then imagemagick's convert.
A thought: if your target (Kindle?) supports PNG, use that in preference to JPEG for this text rendering.
html2ps is an excellent program -- I used it to produce a 1300-page book once, but it's overkill if you just want plain text -> postscript. Consider enscript instead.
Because the question of converting HTML to JPG has been answered, I will offer a suggestion on the pretty printer. I've found Pygments to be pretty awesome. It supports different themes and has lexers for pretty much any language out there (they advertise the fact that it even highlights brainfuck). There's a command line tool and it's available on most Linux distros.
Your Linux distribution may include pango-view and an assortment of fonts.
This works on my FC6 system:
pango-view --font=DejaVuLGCSansMono --dpi=200 --output=/tmp/text.jpg -q /tmp/text
You'll need to identify a monospaced font that is installed on your system. Look around /usr/share/fonts/.
Pango supports Unicode.
Leave off the -q while you're experimenting, it'll display to a window instead of to a file.
Don't use jpeg. It's optimized for photographs and does a terrible job with text and line art. Use gif or png instead. My understanding is that gif is now patent-free, so I would just use that.