HTML - how to render Unicode symbols appearing (from API) such as &#146; &#150; etc.

There is a data quality issue in our app: some characters saved a very long time ago were not stored using standard characters.
Dashes appear as &#150;
Apostrophes appear as &#146;
etc
Is this standard Unicode? I have looked through a few tables, but I couldn't find an entry for &#150; or &#146; that matches the punctuation characters I'm expecting.
Also, is there an easy way to render those HTML characters? Right now they appear as square boxes in some editors, and in Notepad++ as SPA (in a black box).
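Those references are not standard Unicode punctuation at all: &#146; and &#150; point at U+0092 and U+0096, which are invisible C1 control characters (the SPA box in Notepad++ is U+0096, "start of guarded area"). The data looks like it came from something that treated Windows-1252 byte values as code points; in that encoding, 0x92 is the right single quotation mark (U+2019) and 0x96 is the en dash (U+2013). Browsers following the HTML5 parsing rules already remap numeric references in that range through Windows-1252, which is why the punctuation renders correctly in some viewers and as boxes in others. A minimal Python sketch of the same remapping, for fixing the data at the source (the function name and sample input are illustrative):

import re

def fix_c1_references(html_text):
    """Remap numeric character references in the C1 range (128-159)
    through Windows-1252, the way HTML5 parsers do."""
    def repl(match):
        code = int(match.group(1))
        if 128 <= code <= 159:
            # errors="replace" because a few of these bytes (e.g. 0x81)
            # are unassigned in Windows-1252.
            return bytes([code]).decode("cp1252", errors="replace")
        return match.group(0)  # leave all other references alone
    return re.sub(r"&#(\d+);?", repl, html_text)

print(fix_c1_references("it&#146;s a dash &#150; here"))
# it’s a dash – here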

Related

How to properly display Hebrew in text widget?

I'm using Manjaro Linux KDE and the most recent versions of Tcl and Tk, and am attempting to display Hebrew in a text widget. In testing, the Hebrew text was pasted into the Tcl script in the Kate text editor and appears in the correct order, right to left with compound characters.
Without using a specific font in Tcl/Tk, the text prints from left to right and separates the components of compound characters, such that the vowel points and cantillation marks appear as separate characters. After using the SBL Hebrew font, the words look better, but the vowel points are not located properly and the text is still written from left to right. I tried using the \u200f and \u200e marks, but it made no difference; I really don't know what I'm doing there and simply tried prefixing and suffixing them to the Hebrew word. Reversing the string helps, but the vowel points are not combined with the consonants.
I'm not using Tkinter but this older SO post seems to indicate that it is a Linux issue with Tcl.
If I extract Hebrew from SQLite using Tcl and write it to the command line using puts, it displays correctly. Also, if I copy the reversed text from the Tk text widget and paste it in this SO question, it is displayed in the correct order. To clarify, by reversed here, I don't mean using string reverse but simply that it appears reversed in Tk but when pasted in this SO box, it displays correctly.
Would you please tell me what I'm doing wrong and how to get it to display properly?
I tried to follow this document on internationalization in Tcl and encoding, but I don't follow how this affects displaying Hebrew in a text widget. I also came across a website that has code for a Unicode editor that displays several languages, including Hebrew, but I can't follow that code either. I tried running the code and, if I select the Hebrew language, it writes right to left, but I don't see vowel points or cantillation marks; then again, I don't know much about typing the Hebrew language.
Thank you.
.tw tag configure heb -font {"SBL Hebrew" 18 normal}
.tw insert end "בְּרֵאשִׁ֖ית" "heb"
# Also tried "בְּרֵאשִׁ֖ית\u200f" and "\u200fבְּרֵאשִׁ֖ית",
# and "בְּרֵאשִׁ֖ית\u200e" and "\u200eבְּרֵאשִׁ֖ית".
# Tried .tw insert end [string reverse $h] "heb", which orders the
# consonants, but the vowel points and cantillation marks are not correct.
(Screenshot: the correct rendering.)
(Screenshot: the output from Tk; the first line is in normal order, the second uses string reverse.) It can be observed that the vowel points are not "on" the consonants and the cantillation marks are not correct. I know little about Hebrew, but I can tell that they don't match and appear to be printed as separate characters instead of combined ones. I think what looks like a "t" under the Hebrew letter resembling a "W" is two characters on top of each other: a dot, and the symbol somewhat like a left parenthesis in the correct rendering.
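One way to see why character-by-character reversal mangles the word is to list its code points; a minimal Python sketch (Python only because it makes the Unicode metadata easy to reach; the same structure holds for the string in Tcl):

import unicodedata

word = "בְּרֵאשִׁ֖ית"  # the test word from the Tcl snippet above
for ch in word:
    kind = "combining" if unicodedata.combining(ch) else "base"
    print(f"U+{ord(ch):04X}  {kind:9}  {unicodedata.name(ch)}")
# Every vowel point and cantillation mark is a combining character that
# must be drawn on the preceding consonant, so reversing the string
# attaches each mark to the wrong base letter.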
I don't know why, but after rebooting and installing the next batch of updates (not that they have anything to do with Tk), the rendering is different when a font is not set. However, once the SBL Hebrew font is set, the characters are separated as displayed above.
I can tell you now that the text renders very close to correctly with Tk on macOS (I'm not sure how much is just font differences, and there's a bit of clipping of the descender decorations that I don't like, but I don't think that's Tk itself doing the wrong thing).
That means that it's definitely a rendering bug that you're seeing. I suspect it might relate to the size of chunks of characters fed into the renderer; if the low levels of the renderer are only being given a character at a time, then they've got no chance to get the overall placement correct or to apply any character combining. I'm guessing that the real issue is that TkpDrawCharsInContext() just calls Tk_DrawChars(), if my reading of the comments is right. (By contrast, the macOS renderer does something different here.)
I don't have a workaround.

ETX characters (as L) showing up on websites

In the last few days a couple of clients have contacted me saying that they have uppercase "L"s appearing in places on their websites. Upon investigating, I found some stray ETX characters on their pages. They show up on Windows (definitely in Chrome, maybe in other browsers too); in Firefox on Mac I can see them in the source code, and in Chrome on Mac I can't see them anywhere. Here are pictures of the problem:
(Screenshots: the rendered issue and the page source.)
My clients' websites have not been updated in months, so I'm guessing that Windows pushed out an update in the last week to the default language/encoding which is making these show up now.
Removing them is easy, but I wanted to understand where they are coming from and how I can avoid the problem in the future. It looks like the characters are in text that I would have copied out of Photoshop. Is there an easy way to sanitise and remove these kinds of characters when I copy from Photoshop or other similar programs?
As I mentioned earlier, I am on Mac, using Chrome primarily. Is there any way to get Chrome to actually show these characters so that I can see if they are appearing?
You are correct that the issue is with Photoshop. Line breaks (Shift+Enter) are encoded in Photoshop as an ETX character (end of text), not an LF (line feed) or CRLF (carriage return + line feed).
These characters can be seen by pasting your content into a plain text editor such as Sublime Text. The find/replace function should make removing them easy.
I don't believe there is any way to get the ETX characters to display in Chrome for Mac.
However, since the characters are still present (even if they are invisible), you could select all the text on the page (Mac: Cmd+A / Win: Ctrl+A) and paste it all into Sublime Text to find them.
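If you would rather sanitise the text programmatically than by hand, here is a minimal Python sketch (the function name is illustrative) that strips C0 control characters such as ETX while keeping ordinary whitespace:

import re

# C0 controls (U+0000-U+001F) plus DEL (U+007F), excluding tab (U+0009),
# line feed (U+000A), and carriage return (U+000D).
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0B\x0C\x0E-\x1F\x7F]")

def sanitise(text):
    return CONTROL_CHARS.sub("", text)

print(sanitise("line one\x03line two"))  # the ETX (U+0003) is removed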

Actionscript TextField doesn't display chinese characters (font is embedded)

So here's the situation. I have a bunch of textfields containing some phrases, and a locale file containing translations of those phrases in several languages. I also have a textfield on the stage where I copied all the characters from the locale file (this means Latin characters, plus special characters from French and Spanish, and also characters in Arabic and Chinese).
Now the problem: all translations appear OK, even Arabic, except for Chinese. For Chinese I don't see anything, only white space. If I compile the application with the Chinese translations already entered in the textfields, I can see them, but as soon as I try to set them dynamically, everything disappears.
The font I'm using is Arial, Bold, and I even tried embedding the entire Chinese set of glyphs, but with no luck.
Also, I tried launching an alert window using ExternalInterface to trace the Chinese characters right before setting them on the textfield, and I can see them appearing just fine in the alert box (I'm using Vizzy for watching traces, but for Chinese it just shows some squares).
Help me out.

flash cs5.5 as3 - get unicode character of Arabic Presentation forms A and B

I have a string like 'دبي' and I want to get its correct Unicode characters. Currently I am using str.charCodeAt(index), but for Arabic characters it gives code points between U+0600 and U+06FF. However, I want the Arabic Presentation Forms A and B characters, whichever form is actually written.
Can anyone suggest how to do this?
The string you posted consists of three normal Arabic letters in the 0600...06FF range, so what you are getting is the correct Unicode characters. If you mean that you would like to determine the contextual glyph forms used, then that’s outside the character level and cannot be determined from the string. (It can be determined, by applying rules of Arabic writing, which forms should be used, but that’s different from knowing which forms are actually used by the rendering software.)
Arabic Presentation Forms are legacy characters not meant for normal use. Normal rendering is not supposed to convert normal characters to such forms but to select glyphs contextually.
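To illustrate the distinction (a quick Python sketch rather than ActionScript): the sample string really is three ordinary letters, and the presentation-form code points carry compatibility decompositions back to those letters, which is one sign of their legacy status:

import unicodedata

word = "دبي"  # DAL, BEH, YEH in logical order
for ch in word:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+062F  ARABIC LETTER DAL
# U+0628  ARABIC LETTER BEH
# U+064A  ARABIC LETTER YEH

# A presentation form such as U+FEF2 (ARABIC LETTER YEH FINAL FORM)
# normalizes back to the plain letter under NFKC:
print(unicodedata.normalize("NFKC", "\uFEF2") == "ي")  # True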

Are unicode characters better or more semantic than the simple text versions?

When I copy/paste text from most sites and PDFs, the following characters almost always come through as the Unicode equivalents:
double quote: " becomes “ and ” (&#8220; and &#8221;)
single quote: ' becomes ‘ and ’ (&#8216; and &#8217;)
ellipsis: ... becomes … (&#8230;)
I understand the ones that can't be represented without Unicode, like © and ¢, but even for those, I wonder.
When should you use these Unicode equivalents? Are they more semantic than not using them? Are they better interpreted by devices (copy/paste/print)? I always find it annoying getting those quote and ellipsis characters, because with TextMate + programming you don't use them.
When should you use these unicode equivalents? Are they more semantic than not using them?
Note that these are not “unicode equivalents”. Those characters are available in many character sets other than Unicode, and they are strictly distinct from the alternatives that you propose.
In typography, the left and right versions of the single and double quotation marks are correct. They provide the traditional appearance for those characters that has been used in print media for many years. The ellipsis character provides the correct spacing for an ellipsis that does not naturally occur when using consecutive full stop characters. So the reason all of these are used is to make the text appear correctly to human readers.
Are they better interpreted by devices (copy/paste/print)?
Any system that uses any character set should be designed to correctly handle that character set. If the text is encoded in Unicode, then any recent system (from the last 15 years at least) should be able to handle it, since Unicode is the de facto standard character set for all modern systems.
Not all Unicode-conformant systems will be able to display all characters correctly. This will depend on the fonts available, and even the rendering system that uses the fonts. But any Unicode-conformant system will be able to transmit the characters unaltered (such as in a copy and paste operation).
I always find it annoying getting those quote and ellipsis characters because with textmate + programming, you don't use them.
It is unusual to copy English (or whatever language) text directly into a program without having to add separate delimiters to that text. But most modern programming languages will not have any difficulty handling the text once it is properly delimited.
Any systems that cannot handle Unicode correctly should be updated. Legacy character encodings will have no place in the future.
I think there's a simple explanation: MS Word converts these characters/sequences automatically as you type, and a lot of text on the internet has been copied from that editor.
Most of the articles I get for my site from other authors are sent as .doc files and I have to convert them. They usually contain the characters you've mentioned.
I'd also add one more: the many different types of dashes used instead of the hyphen, and also the low opening double quote (as seen in some European languages).
I usually let them stay in the text (all my pages are Unicode). It's just important to remember them when playing around with regexes and the like (the dashes especially can be tricky and hard to spot); see the sketch below.
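When you do want to flatten them for a programming context, a minimal Python sketch of such a replacement table (the exact set of mappings is a matter of taste):

# Map common typographic punctuation to plain ASCII stand-ins.
ASCII_MAP = str.maketrans({
    "\u2018": "'", "\u2019": "'",   # single quotes
    "\u201C": '"', "\u201D": '"',   # double quotes
    "\u201E": '"',                  # low opening double quote
    "\u2013": "-", "\u2014": "-",   # en and em dash
    "\u2026": "...",                # ellipsis
})

print("“Smart” quotes… and – dashes".translate(ASCII_MAP))
# "Smart" quotes... and - dashes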
HTML entities serve a triple purpose:
Being able to use characters that do not belong to the document character set, e.g., insert a euro symbol in an ISO-8859-1 document.
Escape characters that have a special meaning in HTML, such as angle brackets.
Make it easier to type characters that are not on your keyboard or are not supported by your editor, e.g., a copyright symbol.
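As a small illustration of those purposes, a Python sketch using the standard html module as a stand-in for whatever layer produces your markup:

import html

# Purpose 2: escape characters that have special meaning in HTML.
print(html.escape('if a < b & c, print "done"'))
# if a &lt; b &amp; c, print &quot;done&quot;

# Purposes 1 and 3: named references resolve to the characters themselves.
print(html.unescape("&euro; &copy; &hellip;"))  # € © …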
Update:
My info is correct but I suspect I've answered the wrong question...
On the web, I would consider that markup adds semantic meaning, content does not. So it doesn't really matter which you use in this context.
Typographers would insist on “ and ”, whereas programmers don't care and just use regular old quotes (").
The key here is interoperability. There are different encoding schemes. As we've all experienced, people paste content into an editor from Word, which uses windows-1252 encoding. When you serve this content up via AJAX, it usually breaks, because AJAX uses UTF-8 encoding by default.
Office 2010 now allows for saving documents in UTF-8 format. Also, databases use different Unicode encoding schemes. The best bet is to use UTF-8 end-to-end.
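That failure mode is easy to reproduce; a minimal Python sketch of what the windows-1252/UTF-8 mismatch actually does to a smart quote:

s = "it’s"  # contains U+2019 RIGHT SINGLE QUOTATION MARK

# UTF-8 bytes misread as windows-1252: the three bytes encoding the
# quotation mark come out as three separate characters.
print(s.encode("utf-8").decode("cp1252"))  # itâ€™s

# windows-1252 bytes misread as UTF-8: byte 0x92 is not valid UTF-8.
try:
    s.encode("cp1252").decode("utf-8")
except UnicodeDecodeError as err:
    print("decode fails:", err)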
When you copy-paste text that includes special characters, they will be left as they are. This is perfectly fine if the characters match the charset used by the webpage.
HTML entities are just a convenience for producing specific characters in any character set. Keyboards tend not to have keys to get symbols like ©, so the HTML entity is a shortcut.
I'm going to generalize and say that most of the time the content is UTF-8 (please correct me if I'm wrong). The copied characters are usually copied correctly, and everything works great. If they aren't copied correctly, or the charset is subject to change, or you're after i18n support, go with the HTML or XML entities. Otherwise, leave them as they are; the browser will display them just fine.