I have the following html:
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
</head>
<body>
会意字 / 會意字 huìyìzì
</body>
</html>
When I run it in Firefox, it displays the Chinese characters just fine. How come it works with the ISO-8859-1 character set? I thought you needed UTF-8?
I can't reproduce your successful rendering:
… but HTML5 defines a fairly complex character encoding detection method which doesn't pay any attention to <meta> until step 9.
In general, you should avoid encodings other than UTF-8 and definitely should not lie about the encoding of the document.
The most probable explanation is that the document is in fact UTF-8 encoded and the browser treats it that way, despite the meta tag. According to the HTML5 encoding sniffing algorithm, which largely reflects browser behavior, the meta tag is ignored if any of the following is true:
The user has instructed the browser (e.g. via a View → Encoding menu command) to use a specific encoding.
The page starts with bytes that represent the Byte Order Mark in UTF-8 or UTF-16. In practice, it starts that way if the file was saved in an editor with a command like “Save as UTF-8 (with BOM)”.
The HTTP response headers specify an encoding in a Content-Type field.
You can find out which of these is the cause by using e.g. Rex Swain’s HTTP viewer. It lets you see both the HTTP response headers and the actual data as bytes. Developer Tools in browsers have similar features.
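If you prefer to check from a script, here is a minimal Python sketch (the URL is a placeholder): it prints the Content-Type header that the server sends and whether the body starts with the UTF-8 BOM bytes EF BB BF.
import urllib.request

url = "http://example.com/page.html"  # replace with the page you are testing
with urllib.request.urlopen(url) as resp:
    # What the server actually declares, e.g. "text/html; charset=ISO-8859-1"
    print(resp.headers.get("Content-Type"))
    # True if the body starts with a UTF-8 byte order mark
    print(resp.read(3) == b"\xef\xbb\xbf")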
Related
My HTML document starts as follows:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
אבגד
If I encode my document as UTF-8 with a BOM, it appears correctly in the browser. If I encode it as UTF-8 without a BOM (which I understand is more standard), I get unusual characters.
What am I doing wrong?
Your web server is declaring that the encoding is ISO-8859-1, and the browser is respecting that. Ironically enough, using a byte order mark sends a stronger signal to the browser that the encoding must actually be UTF-8. (The exact reason for this is complicated and boring.)
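If you want to see whether your file really starts with a BOM, here is a quick Python sketch (the file name is assumed):
# The UTF-8 BOM is the three bytes EF BB BF at the very start of the file.
with open("page.html", "rb") as f:
    print(f.read(3) == b"\xef\xbb\xbf")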
Fixing your web server depends on what the server is. If this is a static resource on disk served by Apache httpd, then something like AddCharset UTF-8 .html will add the header.
If this resource is served dynamically, then you should make sure you add the proper HTTP headers when producing the response, something like self.send_header('Content-Type', 'text/html; charset=utf-8') for Python's basic http server.
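For example, a minimal sketch using Python's built-in http.server (the handler name, port, and file name are just placeholders):
import http.server

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        with open("page.html", "rb") as f:  # the UTF-8 encoded document
            body = f.read()
        self.send_response(200)
        # Declare the encoding in the HTTP header, not only in the markup
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

http.server.HTTPServer(("", 8000), Handler).serve_forever()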
Something that made me curious: supposedly the default character encoding in HTML5 is UTF-8. However, if I have a plain, simple HTML file with an HTML5 doctype, like the code below, I get:
"hello" in Russian: "ЗдраÑтвуйте"
In Chrome 33+, Safari 6, IE11, etc.
<!DOCTYPE html>
<html>
<head></head>
<body>
<p>"hello" in Russian is "здраствуйте"</p>
</body>
</html>
What gives? Shouldn't the browser use the UTF-8 Unicode standard and display the text correctly? I'm using Coda, which is set to save HTML files with UTF-8 encoding by default, so that's not the problem.
The text data in the example is UTF-8 encoded text misinterpreted as windows-1252 encoded. The reason is that the encoding has not been specified and browsers are forced to make a guess. To fix this, specify the encoding; see the W3C page Character encodings. Two simple ways that work independently of server settings, as long as the server does not send wrong encoding information in HTTP headers:
1) Save the file as UTF-8 with BOM (there is probably an option for this in your authoring program; see the sketch after this list).
2) Add the following tag into the head part:
<meta charset=utf-8>
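For option 1), if your authoring program has no such save option, you can add the BOM yourself; a small Python sketch (assuming the file is already UTF-8 encoded and named page.html):
# "utf-8-sig" strips an existing BOM on read and prepends the EF BB BF BOM on write.
with open("page.html", encoding="utf-8-sig") as f:
    text = f.read()
with open("page.html", "w", encoding="utf-8-sig") as f:
    f.write(text)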
There is no single default encoding specified for HTML5. On the contrary, browsers are expected to make guesses when no encoding has been declared. This is a fairly complex process, described in 8.2.2.2 Determining the character encoding.
If you want to be sure which charset the browser will use, you must have this in your page head:
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
otherwise you are at the mercy of local settings and the browser's auto-detection.
I've got simple HTML pages in Russian with a bit of JavaScript in them.
Every browser handles them well except IE10; even IE9 is fine. The following code is included:
<html lang="ru">
<meta http-equiv="Cоntent-Type" content="text/html"; charset="utf-8">
Also I've added .htacess with
AddDefaultCharset UTF-8
Still, IE10 loads the page in a Cyrillic encoding (cp-1251, I believe); the only way to display the characters correctly is to manually change the encoding to UTF-8 inside the browser (or choose auto-detect mode).
I don't understand why IE10 forces cp-1251 instead of UTF-8.
The website to check is http://btlabs.ru
What really causes the problem is that the HTTP headers sent by the server include
Content-Type: text/html; charset=windows-1251
This overrides any meta tags. You should of course fix the errors with the meta tag as pointed out in other answers, and run a markup validator to check your code, but to fix the actual problem, you need to fix the .htaccess file. Without seeing the file and knowing about other server-side issues, it is impossible to tell how to fix that (e.g., server settings might prevent the effect of a per-directory .htaccess file and apply one global file set by the server admin). Note that the file name must have two c's, not one (.htaccess, not .htacess).
You can check what the headers are by using e.g. Rex Swain’s HTTP Viewer.
The reason why things work on other browsers is that they apply the modern HTML5 principle “BOM wins them all”. That is, an HTTP header wins a meta tag in specifying the character encoding, but if the actual data begins with three bytes that constitute the UTF-8 encoded form of the Byte Order Mark (BOM), then, no matter what, the data will be interpreted as UTF-8 encoded. For some unknown reason, IE 10 does not do that (and neither does IE 11).
But this won’t be a problem if you just make the server send an HTTP header that declares UTF-8.
If the server has been set to declare windows-1251 and you cannot possibly change that, then you just need to live with it. Transcode your HTML files to windows-1251 then, and declare windows-1251 in a meta tag. This means that if you need any characters outside the limited repertoire representable in windows-1251, you need to represent them using character references.
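That transcoding step can be done with a short Python sketch (file names assumed); characters that windows-1251 cannot represent come out as numeric character references:
with open("page.html", encoding="utf-8") as f:
    html = f.read()
with open("page-1251.html", "wb") as f:
    # xmlcharrefreplace turns unencodable characters into &#...; references
    f.write(html.encode("windows-1251", errors="xmlcharrefreplace"))
Remember to change the meta tag in the transcoded file so that it declares windows-1251 as well.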
Perhaps because your 'o' in 'content' is not an ASCII 'o'. Notice that it is not highlighted red on Stack Overflow? I copied it into a good text editor and saw that it is indeed not an 'o'. Because the 'o' is not really an ASCII 'o', that whole line probably gets ignored by every web browser, which then falls back to whatever default charset it uses. Microsoft and IE are notorious for picking bad defaults, which is my guess as to why it doesn't work in IE. ;)
But codingaround has good advice too: it's best to put quotes around your attribute values. That alone, though, should not break a web browser.
You should use a doctype at the start:
<!DOCTYPE html>
<html lang='ru'>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
But the real culprit is your content and charset problem. Notice my line; it is very different. ;) That's the problem. Note that mine has two ASCII 'o's, one in "Content-Type" and another in 'content='.
As Shawn pointed out, copy and paste this:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
This is a really good example of how non-ASCII letters that look like English ASCII letters can really mess things up!
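If you suspect a look-alike character but cannot spot it by eye, a small Python sketch can reveal it (the line below is copied from the question; the stray letter is assumed to be a Cyrillic о):
line = '<meta http-equiv="Cоntent-Type" content="text/html"; charset="utf-8">'
for ch in line:
    if ord(ch) > 127:
        # prints something like 'о' U+043E for a Cyrillic letter masquerading as 'o'
        print(repr(ch), f"U+{ord(ch):04X}")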
Maybe you forgot to change cоntent=text/html; to cоntent="text/html";
As Shawn has already pointed out, it could also be content="text/html; charset=utf-8".
But as you have tried both things out, can you confirm if the IE10 output looks like this?
I can't really help further with this, as the only thing I have here is an IE 10 online emulator.
So far the possible problems are:
Different o character
I see that the <meta> tag is still outside of <head>; move it inside <head>
Problems with IE handling the content and charset attributes
I'm using the UTF-8 charset to write my HTML and some of the text is in Hebrew.
I use the following lines to specify the language for browsers:
<html dir="rtl" lang="he">
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=9">
Some Internet Explorer browsers that are set to Encoding: Auto-Select recognize my website as Hebrew and view the pages in the Hebrew (Windows) encoding. This makes the text show up as gibberish, because it is actually in UTF-8.
How can I use HTML or JavaScript to force all browsers to use the UTF-8 encoding and ignore any local settings that say otherwise?
You cannot override the browser's character encoding setting if auto-selection is disabled. In such conditions, browsers give the user the final word; this is reflected in the HTML5 draft, which describes, in Determining the character encoding, the first step thusly: “If the user has explicitly instructed the user agent to override the document's character encoding with a specific encoding, optionally return that encoding with the confidence certain and abort these steps.”
The setting Encoding: Auto-Select enables auto-selection, and in your case this makes browsers apply UTF-8, unless HTTP headers say otherwise. With the given data, it is impossible to say what goes wrong if you really experience the problem when Auto-Select is on.
My program is fetching messages from a database, which contains English, German and several Eastern European languages. My Python script sets the encoding via:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
and uses the values fetched from the database correctly (as far as I can tell from my logs).
Unfortunately all browsers I tested (IE8, Firefox 3.0.10, Opera 9.64) switch based on my local language settings to:
Western ISO-8859-1 in Firefox
Western European (Windows) in IE
Automatic in Opera
Everything works fine as soon as I switch the character encoding manually in the browser.
The same happens if I manually generate the HTML file using UTF-8 (tested with both TextMate and jEdit), although both editors display the content correctly.
That works fine for English and German, but not, for example, for Russian. How can I force the "correct" character encoding?
ANSWER
The following entry within the VirtualHost (Apache configuration) section did the trick for me:
AddDefaultCharset utf-8
Many thanks for pointing me into the right direction, that helped a lot!
When the document is transferred over HTTP, the HTTP headers are the crucial information:
[…] conforming user agents must observe the following priorities when determining a document's character encoding (from highest priority to lowest):
An HTTP "charset" parameter in a "Content-Type" field.
A META declaration with "http-equiv" set to "Content-Type" and a value set for "charset".
The charset attribute set on an element that designates an external resource.
So make sure you declare the character encoding in the Content-Type header field and not just inside the document.
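In the scenario of the question above, where a Python script produces the HTML, that means emitting the header yourself rather than relying only on the meta tag; here is a rough CGI-style sketch (the markup is only an illustration):
import sys

html = (
    '<!DOCTYPE html>\n'
    '<html><head><meta charset="utf-8"></head>\n'
    '<body><p>Здравствуйте</p></body></html>\n'
)
# The charset parameter in the Content-Type header has the highest priority.
sys.stdout.buffer.write(b"Content-Type: text/html; charset=utf-8\r\n\r\n")
sys.stdout.buffer.write(html.encode("utf-8"))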