Issue with apostrophes and pound symbols in HTML

I have a problem: some email and web designs I receive have ’ instead of ' in the text. This creates problems with rendering in some email clients, and it's difficult to manually catch them all.
Is there any type of software or online script that converts these symbols (along with the £ sign) to HTML-compatible text? Would Notepad or anything work?

You'll need to convert your text to HTML characters before putting it into your email HTML. This is a common issue when you import from MS Word, as it uses characters like curly quotes, ellipses, and dashes that need converting first.
There are a whole bunch of converters out there; here are three:
Email on Acid
Web2Generators
Charset
Here is an example of something written in MS Word:
“Hello?” he said to ‘it’. Wait – I’m not finished…
This converts to:
&ldquo;Hello?&rdquo; he said to &lsquo;it&rsquo;. Wait &ndash; I&rsquo;m not finished&hellip;
You should use the converted version in your email, or you could be lazy and just replace all instances of curly quotes with straight ones in your code. The typography is not strictly correct then, but most people will not mind.
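If you'd rather script the conversion than rely on an online tool, a minimal JavaScript sketch might look like this (the character map is illustrative, not exhaustive; extend it as needed):

const entityMap = {
  '\u2018': '&lsquo;',  // left single quote
  '\u2019': '&rsquo;',  // right single quote (curly apostrophe)
  '\u201C': '&ldquo;',  // left double quote
  '\u201D': '&rdquo;',  // right double quote
  '\u2013': '&ndash;',  // en dash
  '\u2014': '&mdash;',  // em dash
  '\u2026': '&hellip;', // ellipsis
  '\u00A3': '&pound;'   // pound sign
};

function toEntities(text) {
  // Replace each mapped character with its HTML entity.
  return text.replace(/[\u2018\u2019\u201C\u201D\u2013\u2014\u2026\u00A3]/g, ch => entityMap[ch]);
}

console.log(toEntities('“Hello?” he said to ‘it’. Wait – I’m not finished… £5'));
// &ldquo;Hello?&rdquo; he said to &lsquo;it&rsquo;. Wait &ndash; I&rsquo;m not finished&hellip; &pound;5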

Related

HTML encoding of Japanese text

I'm making a static HTML page that displays courtesy text in multiple languages. I noticed that if I paste ウェブサイトのメンテナンスの下で into Expression Blend, that text appears the same in the code. I think it's bad for compatibility and should be replaced by proper HTML entities.
I have tried http://www.opinionatedgeek.com/DotNet/Tools/HTMLEncode/encode.aspx but it returns the same Japanese text.
Is it correct, from the point of view of browser compatibility, to paste that Japanese right into the source code of an HTML page?
If not, what is the correct HTML encoding of that text? Or, better, is there any tool that I can use to convert non-ASCII characters to HTML entities, possibly online and possibly free?
I think it's bad for compatibility and should be replaced by proper HTML entities.
Quite the opposite, actually: your preference should be not to use HTML entities, but rather to correctly declare the document encoding as UTF-8 and use the actual characters. There are quite a few compelling reasons to do so, but the real question is why not, since it's a widely and well-supported standard.
Some of those points have been summarised previously:
UTF-8 encodings are easier to read and edit for those who understand what the character means and know how to type it.
UTF-8 encodings are just as unintelligible as HTML entity encodings for those who don't understand them, but they have the advantage of rendering as special characters rather than hard-to-understand decimal or hex encodings.
[For example] Wikipedia... actually go through articles and convert character entities to their corresponding real characters for the sake of user-friendliness and searchability.
As long as you mark your web page as UTF-8, either in the HTTP headers or the meta tags, having foreign characters in your web pages should be a non-issue. Alternatively, you could encode/decode these strings using the encodeURI/decodeURI functions in JavaScript:
encodeURI('ウェブサイトのメンテナンスの下で')
// returns "%E3%82%A6%E3%82%A7%E3%83%96%E3%82%B5%E3%82%A4%E3%83%88%E3%81%AE%E3%83%A1%E3%83%B3%E3%83%86%E3%83%8A%E3%83%B3%E3%82%B9%E3%81%AE%E4%B8%8B%E3%81%A7"
decodeURI("%E3%82%A6%E3%82%A7%E3%83%96%E3%82%B5%E3%82%A4%E3%83%88%E3%81%AE%E3%83%A1%E3%83%B3%E3%83%86%E3%83%8A%E3%83%B3%E3%82%B9%E3%81%AE%E4%B8%8B%E3%81%A7")
// returns "ウェブサイトのメンテナンスの下で"
If you are looking for a tool to convert a bunch of static strings to Unicode characters, you could simply use the encodeURI/decodeURI functions from a web page's developer console (Firebug for Mozilla Firefox). Hope this helps!
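Note that encodeURI produces percent-encoding, not HTML entities. If you really do want numeric character references (usually unnecessary on a properly declared UTF-8 page), a minimal sketch:

// Replace every non-ASCII code point with a decimal numeric
// character reference (&#NNNN;). Illustrative only.
function toNumericEntities(text) {
  return Array.from(text) // iterates by code point, not UTF-16 unit
    .map(ch => ch.codePointAt(0) > 127 ? '&#' + ch.codePointAt(0) + ';' : ch)
    .join('');
}

console.log(toNumericEntities('ウェブサイトのメンテナンスの下で'));
// "&#12454;&#12455;&#12502;..." and so on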
HTML entities are only useful if you need to represent a character that cannot be represented in the encoding your document is saved in. For example, ASCII has no specification for how to represent "€". If you want to use that character in an ASCII-encoded HTML document, you have to encode it as &euro; (or &#8364;) or not use it at all.
If you are using a character encoding for your document that can represent all the characters you need though, like UTF-8, there's no need for HTML entities. You simply need to make sure the browser knows what encoding the document is in so it can interpret it correctly. This is really the preferable method, since it simply keeps the source code readable. It really makes no sense to want to work with HTML entities if you can simply work with the actual characters.
See http://kunststube.net/frontback for some more information.

Apostrophes converting to periods in HTML

I have a client using a CMS for a site. When they enter apostrophes, they render as periods within the HTML. I've checked the raw source, and an apostrophe (' - not a MS Word curly "smart" apostrophe) is indeed there but it renders as a period.
I've gone into the database and manually entered apostrophes thinking perhaps it was the CMS, but the problem persists. I've seen the "diamond question mark" unrecognizable character appear before, but never this... For example, the word "they're" displays as "they.re"
Any ideas? I thought it could be an encoding issue but I have
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
in place.
Any help appreciated!
As a first workaround, you could tell the content providers to use the “smart” apostrophe and, for single quotation marks, the ‘smart’ single quotes (assuming they work OK; check it first, of course). After all, the ASCII "straight" apostrophe should only be used in programming and comparable contexts, not in normal human-language content.
It sounds like a CMS oddity, but first check that the data sent by the server actually contains “.” U+002E and not something else that just gets rendered as a period by browsers. Then you could submit a bug report to the CMS provider. It might be a good idea to test the entire printable ASCII range, and why not all of Windows Latin 1, using a page containing them all and checking that they are rendered OK (naturally with the normal < and & precautions).
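To verify what the server is actually sending, a quick sketch you could run in the browser console (the '#affected' selector is a placeholder; point it at the element showing the periods):

// Print each character of the suspect text with its code point, so you
// can see whether it is really "." U+002E or a lookalike character.
const text = document.querySelector('#affected').textContent;
for (const ch of text) {
  console.log(ch, 'U+' + ch.codePointAt(0).toString(16).toUpperCase().padStart(4, '0'));
}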

Browser is HTML Encoding a character before sending it?

I can't believe what I'm seeing here! I have a normal, basic HTML form (I haven't changed the enctype); if someone puts a strange Japanese character in the field and posts the form, then in my database it is saving an HTML-encoded version of the character. I am not processing the string at all except with a Trim(). Using classic ASP (not out of choice, I might add!). I have a feeling this might have something to do with UTF-8/encoding, but I've tried messing around with the meta tag and content type and been unable to get the character to come through properly. To make things harder, I don't seem to be able to get classic ASP debugging in VS Express 2010. Any comments appreciated :)
As you can see in this demo and read in the standard (4.10.22.6.4.2), characters that are not supported by the selected encoding (such as Japanese ones in an ISO8859-* or cp1252 encoding) are encoded as HTML entities.
If you are fine with incorrectly handling user input that contains literal HTML entities, you can replace all numeric HTML entities in the user input with the corresponding Unicode character (however, doing so in ASP is hard, since there is no inverse function to Server.HTMLEncode and Unicode support is pretty much nonexistent in the first place).
As an alternative, use UTF-8 (and/or a web development platform from this millennium) and all these problems go away. If that is not an option, you may want to unescape the HTML entities downstream, for example with HttpUtility.HtmlDecode in C#, html_entity_decode in PHP, or HTMLParser.unescape in Python.
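For illustration, a minimal JavaScript sketch of that decode step (it handles only numeric references, decimal and hex, not named entities like &amp;):

function decodeNumericEntities(text) {
  // &#12454; (decimal) or &#x30A6; (hex) -> the actual character
  return text.replace(/&#(?:x([0-9a-fA-F]+)|(\d+));/g, (m, hex, dec) =>
    String.fromCodePoint(hex ? parseInt(hex, 16) : parseInt(dec, 10)));
}

console.log(decodeNumericEntities('&#12454;&#12455;&#12502;'));
// "ウェブ"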

Paste from Outlook/Word/Office to Embedded Browser

So, we have a great application that is going well, but some of our users like to copy their text to Word before pasting it into our application. When they do that, the HTML is parsed out somewhat properly, but usually contains tags from Outlook or Word that our XHTML engine just doesn't like or understand.
For example, a user types a note into Word, adds some minor formatting, and pastes it into our HTML editor (it's just a basic web browser with designMode turned on); the subsequent source includes <_o3a_p> tags, among others.
Am I going to have to just write a stripper for every type of MSO HTML tag?
I have had good luck pasting Word content into LibreOffice, then re-selecting and copying the text out of LibreOffice into a web form.
It keeps the formatting and links, and removes all the Microsoft formatting code.
As a user that sometimes copies data from Word to a web form (I sometimes like to spellcheck first), I've found great success by first pasting into Notepad, then copying from there and pasting into the web form.
However, Word still sometimes has the last laugh. If you have "smart quotes" enabled, it turns
This is the "best" way.
into
This is the “best” way.
(Note the quotes around the word "best").
The easy way to fix this is to turn off Smart Quotes before I begin to type; I can also use Notepad to find all of the "smart quote" symbols (“ ” ‘ ’) and replace them with "normal quote" symbols (" " ' ').
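If you control the form, you could also normalize the punctuation at paste time. A minimal sketch, assuming a plain text field ('#editor' is a placeholder id):

document.querySelector('#editor').addEventListener('paste', event => {
  event.preventDefault();
  const raw = event.clipboardData.getData('text/plain');
  const clean = raw
    .replace(/[\u2018\u2019]/g, "'")   // smart single quotes -> straight
    .replace(/[\u201C\u201D]/g, '"');  // smart double quotes -> straight
  // execCommand is deprecated but still the usual route in designMode editors
  document.execCommand('insertText', false, clean);
});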
The consensus seems to be that while some of the tools available are somewhat successful at auto-parsing MS Word tags, none are 100% perfect. Methods to parse those tags depend on what framework you are using.
A regular expression would probably be a clean fix; see the sketch below.
Some more information about this topic can be found in this blog post, which basically documents the same struggle you seem to be having.
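As a starting point for such a stripper, a minimal sketch (real Word output varies a lot, so treat these patterns as illustrative, not complete):

function stripWordMarkup(html) {
  return html
    .replace(/<\/?o:p[^>]*>/gi, '')                              // Office namespace tags
    .replace(/<\/?_o3a_p[^>]*>/gi, '')                           // <_o3a_p> tags like the ones above
    .replace(/\s*mso-[^:]+:[^;"']+;?/gi, '')                     // mso-* inline style properties
    .replace(/<!--\[if [^\]]+\]>[\s\S]*?<!\[endif\]-->/gi, '');  // conditional comments
}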

Are Unicode characters better or more semantic than the simple text versions?

When I copy/paste text from most sites and pdfs, the following characters are almost always in the unicode equivalent:
double quote: " is “ and ” (&ldquo; and &rdquo;)
single quote: ' is ‘ and ’ (&lsquo; and &rsquo;)
ellipsis: ... is … (&hellip;)
I understand the ones that can't be represented without Unicode, like © and ¢, but even for those, I wonder.
When should you use these Unicode equivalents? Are they more semantic than not using them? Are they better interpreted by devices (copy/paste/print)? I always find it annoying getting those quote and ellipsis characters, because with TextMate + programming, you don't use them.
When should you use these Unicode equivalents? Are they more semantic than not using them?
Note that these are not “Unicode equivalents”. Those characters are available in many character sets other than Unicode, and they are strictly distinct from the alternatives that you propose.
In typography, the left and right versions of the single and double quotation marks are correct. They provide the traditional appearance for those characters that has been used in print media for many years. The ellipsis character provides the correct spacing for an ellipsis that does not naturally occur when using consecutive full stop characters. So the reason all of these are used is to make the text appear correctly to human readers.
Are they better interpreted by devices (copy/paste/print)?
Any system that uses any character set should be designed to correctly handle that character set. If the text is encoded in Unicode, then any recent system (from the last 15 years at least) should be able to handle it, since Unicode is the de facto standard character set for all modern systems.
Not all Unicode-conformant systems will be able to display all characters correctly. This will depend on the fonts available, and even the rendering system that uses the fonts. But any Unicode-conformant system will be able to transmit the characters unaltered (such as in a copy and paste operation).
I always find it annoying getting those quote and ellipsis characters because with TextMate + programming, you don't use them.
It is unusual to copy English (or whatever language) text directly into a program without having to add separate delimiters to that text. But most modern programming languages will not have any difficulty handling the text once it is properly delimited.
Any systems that cannot handle Unicode correctly should be updated. Legacy character encodings will have no place in the future.
I think there's a simple explanation: MS Word converts these characters/sequences automatically as you type, and a lot of text on the internet has been copied from this text editor.
Most of the articles I get for my site from other authors are sent as .doc files, and I have to convert them. Usually, they contain the characters you've mentioned.
I'd also add one more: the many different types of dashes used instead of the hyphen, and also the low opening double quote (as seen in some European languages).
I usually let them stay in the text (all my pages are Unicode). It's just important to remember them when playing around with regexes etc. (the dashes especially can be tricky and hard to spot).
HTML entities serve a triple purpose:
Being able to use characters that do not belong to the document character set, e.g., inserting a euro symbol in an ISO-8859-1 document.
Escaping characters that have a special meaning in HTML, such as angle brackets (see the sketch after this list).
Making it easier to type characters that are not on your keyboard or are not supported by your editor, e.g., a copyright symbol.
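To illustrate the second purpose, a minimal escaping sketch (this covers the usual minimum set of characters, nothing more):

function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')   // must run first, or it would re-escape the others
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

console.log(escapeHtml('5 < 6 & "quotes"'));
// 5 &lt; 6 &amp; &quot;quotes&quot;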
Update:
My info is correct but I suspect I've answered the wrong question...
On the web, I would consider that markup adds semantic meaning, content does not. So it doesn't really matter which you use in this context.
Typographers would insist on “ and ”, whereas programmers don't care and just use regular old straight quotes (").
The key here is interoperability. There are different encoding schemes. As we've all been victims of it, people paste content into an editor from Word, which uses windows-1252 encoding. When you serve this content up via AJAX, it usually breaks, because AJAX uses UTF-8 encoding by default.
Office 2010 now allows saving documents in UTF-8 format. Also, databases have different Unicode encoding schemes. The best bet is to use UTF-8 end-to-end.
When you copy-paste text that includes special characters, they will be left as they are. This is perfectly fine if the characters match the charset used by the webpage.
HTML entities are just a convenience for producing specific characters in any character set. Keyboards tend not to have keys for symbols like ©, so the HTML entity is a shortcut.
I'm going to generalize and say that most of the time the content is UTF-8 (please correct me if I'm wrong). The copied characters are usually copied correctly and everything works great; if they aren't, or the charset is subject to change, or you're after i18n support, go with the HTML or XML entities. Otherwise, leave them as they are; the browser will display them just fine.