What is the meaning of -{ }- in MediaWiki wikitext?

In my MediaWiki wiki, any wikitext containing -{ }- is not parsed correctly. Do I need some extension?
Example:
-{Computer}-

The -{}- syntax is used by the rather poorly documented (but widely used, at least in some regions!) MediaWiki automatic language variant conversion feature (LanguageConverter), which converts text between different writing systems and local variants of a language, such as between simplified and traditional Chinese characters, or between the Cyrillic and Latin alphabets in Serbian.
Specifically, -{}- is used to manually override the automatic conversion, either for literal text (such as names or quotations) that should not be converted, or for special cases where the automatic conversion gets it wrong and needs to be overridden. The syntax for the latter case looks something like -{var1: Some text; var2: Something else}-, possibly with some flags at the beginning that change the behavior in various ways; on a Chinese-language wiki, for example, an override might look like -{zh-hans:计算机; zh-hant:電腦}-, with the exact variant codes depending on the wiki's configuration.
Alas, short of reading the code itself, I was unable to find any comprehensive documentation on what all these flags and such actually do. I do believe that there's some decent documentation available if you can read Chinese, but I can't, and the output of Google Translate leaves much to be desired.

Related

Does CppCMS support unicode?

I have been looking for a C++ web framework with a high-performance target.
I found CppCMS, but I am not sure whether it supports Unicode, because I see that some of its samples use std::string for rendering.
Does anyone use it with Unicode output?
By unicode, I assume you mean UTF-8.
Yes, cppcms fully supports UTF-8. I am from the ROC (Republic of China) and I use cppcms to output Traditional Chinese characters. On the cppcms mailing list, there are also many people from the PRC (People's Republic of China) who use it with Simplified Chinese characters. You won't have any problems with Vietnamese either.
Check the wiki page for Encoding and UTF-8:
http://cppcms.com/wikipp/en/page/cppcms_1x_encoding_and_utf8
Basically, in your config.js file, make sure to properly declare your locale, e.g.:
"localization" : {
"locales" : [ "en_US.UTF-8" ]
}
Also, if you use MySQL, make sure to declare the encoding in the database connection string, like this:
mysql:host=127.0.0.1;database=foo;user=bar;password=foobar;set_charset_name=utf8
That's basically it. With that, you can use std::wstring or anything you wish.

Single collection of reserved keywords from many programming languages?

I am looking for a collection of lists of keywords per programming language, preferably for a large set of popular languages, preferably in a machine-readable format. I failed to find such a resource just by Googling. Is anyone familiar with such a list?
Hint - Many editors have such a list as part of their syntax highlighting configuration. I looked at the Notepad++ config file, but unfortunately it completely mixes reserved keywords with commonly used functions. For example, MySQL functions are listed as PHP keywords. Emacs, unfortunately, uses per-mode Lisp scripts. If you're using an editor with a textual syntax highlighting config file that clearly separates the language-reserved keywords for a large selection of languages, please let me know.
I am not looking to build a language classifier or to automatically deduce the keywords from samples. These are separate tasks that have already been discussed here on Stack Overflow. I am just looking for a large collection of language keywords.
UltraEdit has a large collection of syntax files, and they seem to distinguish actual reserved words from functions. Have a look and see if it fits the bill.
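For what it's worth, at least one popular language ships its own machine-readable keyword list: Python exposes its reserved words via the standard-library keyword module. A minimal sketch (this covers Python only, so it is just one possible entry in the kind of collection you describe):
import json
import keyword

# keyword.kwlist is the authoritative list of Python's reserved words.
keywords_by_language = {"python": sorted(keyword.kwlist)}
print(json.dumps(keywords_by_language, indent=2))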

Is it advisable to have non-ascii characters in the URL?

We are currently working on an I18N project. I am wondering what the complications of having non-ASCII characters in the URL are. If it's not advisable, what are the alternatives for dealing with this problem?
EDIT (in response to Maxym's answer):
The site is going to be local to a specific country, and I need not worry about the worldwide public accessing it. I understand that from a usability point of view it is really annoying. What are the other technical problems associated with this?
It is possible to use non-ASCII/non-Latin domain names using IDNA. Furthermore, you can always use percent-encoding (like %20 for a space) in URLs. RFC 3986 recommends UTF-8 encoding combined with percent-encoding:
the data should first be encoded as octets according to the UTF-8 character encoding; then only those octets that do not correspond to characters in the unreserved set should be percent-encoded. (...) For example, the character A would be represented as "A", the character LATIN CAPITAL LETTER A WITH GRAVE would be represented as "%C3%80", and the character KATAKANA LETTER A would be represented as "%E3%82%A2".
Modern clients (web browsers) are able to transform back and forth between percent encoding and Unicode, so the URL is transferred as ASCII but looks pretty for the user.
Make sure you're using a web framework/CMS that understands this encoding as well, to simplify URL input from webmasters/content editors.
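To illustrate what the browser and server do under the hood, here is a small sketch in Python (my own illustration, not from the question): percent-encoding for the path, IDNA for the host name.
from urllib.parse import quote, unquote

# Percent-encode a non-ASCII path: UTF-8 octets outside the unreserved set
# are escaped, as the RFC 3986 excerpt above describes.
path = "/wiki/Fürth"
encoded = quote(path, safe="/")   # -> '/wiki/F%C3%BCrth'
print(encoded)
print(unquote(encoded))           # decodes back to '/wiki/Fürth'

# Host names are handled with IDNA (punycode), not percent-encoding.
host = "nürnberg.de"
print(host.encode("idna"))        # ASCII-compatible 'xn--...' form of the host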
I would say no. The reason is simple: if you rely on a worldwide public, it would be a big problem for people to type your URL. I live in the "Cyrillic" world; it is possible to create Cyrillic URLs, but nobody has succeeded with that, because even we are too lazy to switch keyboard layouts and are used to typing Latin...
Update:
I can't say much about alternatives, but some languages have a formal or informal letter substitution; e.g. in German you can write Ö, but in a URL you would see OE instead. You can also consider English words, or words with similar sounds, so people from your country can remember the spelling and users from other countries won't be harmed.
It depends on the target users... For example, Nürnberg.de is also reachable at nuernberg.de to make it easily accessible to native German users (the German keyboard layout is the default there and has all four extra symbols (öäüß) available to all German speakers), and do not forget that one of the goals of I18N is to provide a native-language feel to the end user. Mac and Linux users have even more intuitive input methods; for example, pressing Alt+u on a Mac adds an umlaut to a character, which helps with I18N input.
I was just wondering what are the complications of having the non-ascii characters in the URL.
But the way you phrased your question, it seems it is more about URIs than URLs... and you are trying to put a URN with non-ASCII characters inside the URI. There are no complications in that, if you know where and how to parse your URN on the server (for example, on a Django-based server the URN can be parsed and handled using a regex inside urls.py). All you need to keep in mind is that with the Web 2.0 (Ajax/JavaScript-based) evolution, everything mainly runs in UTF-8, as the JavaScript specification demands UTF-8 encoding, and so UTF-8 has evolved into a de facto standard. Stick with the UTF-8 encoding specs and you will hardly face any complications in parsing the URI and working with it.
For example, check the URI http://de.wikipedia.org/wiki/Fürth or http://hi.wikipedia.org/wiki/जर्मनी. Irrespective of the encoding in which you type it in the address bar, the browser will translate it to UTF-8 and send it to the server.
NOTE: besides UTF-8, there are some symbols that are encoded using percent-encoding; more about it can be found here:
http://en.wikipedia.org/wiki/Percent-encoding
You can use non-ASCII characters in a URL, but it's ugly, because special characters must be encoded as described here:
http://www.w3schools.com/tags/ref_urlencode.asp

Convert chinese characters to hanyu pinyin

How do I convert Chinese characters to Hanyu Pinyin?
E.g.
你 --> Nǐ
马 --> Mǎ
More Info:
Either accents or numerical forms of hanyu pinyin are acceptable, the numerical form being my preference.
A Java library is preferred, however, a library in another language that can be put in a wrapper is also OK.
I would like anyone who has personally used such a library before to recommend or comment on it, in terms of its quality/reliability.
The problem of converting hanzi to pinyin is a fairly difficult one. There are many hanzi characters which have multiple pinyin representations, depending on context. Compare 长大 (pinyin: zhang da) with 长城 (pinyin: chang cheng). For this reason, single-character conversion is often actually useless, unless you have a system that outputs multiple possibilities. There is also the issue of word segmentation, which can affect the pinyin representation as well. Though perhaps you already knew this, I thought it was important to say.
That said, the Adso package contains both a segmenter and a probabilistic pinyin annotator, based on the excellent Adso library. It takes a while to get used to, though, and may be much larger than you are looking for (I have found in the past that it was a bit too bulky for my needs). Additionally, there doesn't appear to be a public API anywhere, and it's C++...
For a recent project, because I was working with place names, I simply used the Google Translate API (specifically, the unofficial Java port), which, for common nouns at least, usually does a good job of translating to pinyin. The problem is commonly used alternative transliteration systems, such as "HongKong" for what should be "XiangGang". Given all of this, Google Translate is pretty limited, but it offers a start. I hadn't heard of pinyin4j before, but after playing with it just now, I have found that it is less than optimal: while it outputs a list of potential candidate pinyin romanizations, it makes no attempt to statistically determine their likelihood. There is a method to return a single representation, but it will soon be phased out, as it currently only returns the first romanization, not the most likely one. Where the program seems to do well is in conversion between romanizations and general configurability.
In short, then, the answer may be any one of these, depending on what you need. Idiosyncratic proper nouns? Google Translate. In need of statistics? Adso. Willing to accept candidate lists without context information? Pinyin4j.
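To see this ambiguity concretely, the pypinyin library (recommended in a later answer below) can list candidate readings per character; a minimal sketch, assuming pypinyin is installed:
from pypinyin import pinyin

# 长 on its own has several readings; heteronym=True returns the candidates.
print(pinyin('长', heteronym=True))   # e.g. [['zhǎng', 'cháng']]
# With phrase context, the library should pick the conventional reading.
print(pinyin('长大'))                  # expected zhǎng dà
print(pinyin('长城'))                  # expected cháng chéng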
In Python try
from cjklib.characterlookup import CharacterLookup
cjk = CharacterLookup('C')
cjk.getReadingForCharacter(u'北', 'Pinyin')
You would get
['běi', 'bèi']
Disclaimer: I'm the author of that library.
For Java, I'd try the pinyin4j library
As mentioned in other answers, the conversion is fuzzy, and even Google Translate apparently gets a certain percentage of character combinations wrong.
A reasonable result, which will not be 100% accurate, can be achieved with open-source libraries available for some programming languages.
The simplest code to do the conversion in Python uses the pypinyin library (to install it, use pip3 install pypinyin):
from pypinyin import pinyin

def to_pinyin(chin):
    return ' '.join([seg[0] for seg in pinyin(chin)])

print(to_pinyin('好久不见'))
# OUTPUT: hǎo jiǔ bú jiàn
NOTE: The pinyin function from the module returns a list of possible candidate segments, and the to_pinyin helper takes the first variant whenever more than one conversion is available. For tricky corner cases this is likely to produce incorrect results, but generally you'll probably get at least a ~90-95% success rate.
There are a few other python libraries for pinyin conversion but in my tests they proved to have a higher error rate than pypinyin. Also, they don't appear to be actively maintained.
If you need better accuracy then you'll need a more complex approach that will rely on bigger datasets and possibly some machine learning.
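Since the question states a preference for the numerical form, note that pypinyin can also emit tone numbers instead of accent marks; a minimal sketch, assuming the same pypinyin package as above:
from pypinyin import pinyin, Style

# Style.TONE3 appends the tone number to each syllable instead of using accents.
print(pinyin('好久不见', style=Style.TONE3))   # syllables with trailing tone digits, e.g. 'hao3'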

Syntax highlight design pattern

I'm looking for some good overviews of best practices and common patterns for enabling syntax highlighting in a textbox. It seems like a very common exercise; almost all languages have a UI control that enables syntax highlighting for different languages. I'm just curious to see if there is a common pattern of implementation.
Is everyone using regular expressions? Is there a repository for regular expressions that are commonly used in syntax highlighting scenarios?
Are there alternative/better approaches to syntax highlighting?
Update
Links to relevant resources about performing syntax highlighting in a given language or concepts related to syntax highlighting would be great. Lexing (lexical analysis) was brought up in an answer but without a link to learn more. Anything to help better understand this commonly solved problem would be great.
Lexical Analysis on Wikipedia
Regular expressions are definitely the first place most people start out. However, they can't really cope with many edge cases that one meets in most languages: text that looks like keywords can be found in string literals, string literals in turn can contain escaped delimiters as well as special characters, and the same goes for comments, etc.
Basically, to do a good job of syntax highlighting, you need to perform lexing of the source: parsing it with language-specific heuristics to build a list of regions, where each region of the source is annotated with how it is to be styled.
As edits take place, you can again apply the language rules to see how far the change can alter the presentation of a region. For example, typing a letter inside a string literal simply makes the string literal region longer, but typing a closing quote truncates the region and turns the leftover part of it into code, subject to all the other lexing rules.
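As a rough illustration of the region-based approach described above, here is a minimal toy sketch in Python (my own example, not taken from any particular editor): a single combined regular expression scans the source once, and because the string and comment patterns are tried before the keyword pattern, a keyword-looking word inside a string is correctly treated as part of the string region.
import re

# Token patterns, ordered so strings and comments win over keywords.
TOKEN_RE = re.compile(r"""
    (?P<comment>\#[^\n]*)
  | (?P<string>"(?:\\.|[^"\\])*")
  | (?P<keyword>\b(?:if|else|while|return|def)\b)
  | (?P<other>.)
""", re.VERBOSE | re.DOTALL)

def lex(source):
    """Return (start, end, style) regions covering the whole source."""
    regions = []
    for match in TOKEN_RE.finditer(source):
        style = match.lastgroup          # name of the group that matched
        regions.append((match.start(), match.end(), style))
    return regions

code = 'if x: return "return value"  # not a keyword: return'
for start, end, style in lex(code):
    if style != "other":
        print(style, repr(code[start:end]))
A real highlighter would keep these regions in a data structure keyed by offset so that, on each edit, only the affected regions need to be re-lexed rather than the whole document.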