Chinese names and Unicode Basic Multilingual Plane (BMP) - mysql

I am building an application using MySQL in which Chinese names need to be stored in the database. I'm trying to decide whether to use the basic utf8 encoding (which only covers the Basic Multilingual Plane and stores at most 3 bytes per character in UTF-8), or whether I need the utf8mb4 encoding, which permits characters from the higher planes to be encoded and stored.
Is the Unicode Basic Multilingual Plane (BMP) sufficient to store all Chinese proper names?

MySQL's CHARACTER SET utf8 only handles UTF-8 sequences up to 3 bytes (the BMP). Instead, use CHARACTER SET utf8mb4, which also handles 4-byte sequences. Yes, that includes all of currently defined Unicode for Chinese, Emoji, etc.
Use version 5.7, if practical.
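The distinction can be checked outside MySQL entirely; a minimal Python sketch of the UTF-8 byte lengths involved (the specific example characters are my own choice):

```python
# BMP characters encode to at most 3 bytes in UTF-8, but supplementary
# characters (code points above U+FFFF) need 4 bytes -- which MySQL's
# 3-byte "utf8" cannot store.

bmp_char = "中"            # U+4E2D, a common Chinese character in the BMP
supp_char = "\U00020000"   # U+20000, CJK Extension B (supplementary plane)
emoji = "😀"               # U+1F600, also outside the BMP

print(len(bmp_char.encode("utf-8")))   # 3 -> fits in both utf8 and utf8mb4
print(len(supp_char.encode("utf-8")))  # 4 -> requires utf8mb4
print(len(emoji.encode("utf-8")))      # 4 -> requires utf8mb4
```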

TL;DR it doesn't matter, stick with utf8mb4 encoding, especially for new applications.
Long-form answer: the key difference between the two encodings is that utf8, long supported by MySQL, stores UTF-8-encoded characters of at most three bytes. As of 5.5.3, as noted by @Rick James, the newer utf8mb4 encoding lifts this arbitrary three-byte restriction, and according to the MySQL documentation there are few, if any, disadvantages:
For a BMP character, utf8 and utf8mb4 have identical storage characteristics: same code values, same encoding, same length.
For a supplementary character, utf8 cannot store the character at all, whereas utf8mb4 requires four bytes to store it. Because utf8 cannot store the character at all, you have no supplementary characters in utf8 columns and need not worry about converting characters or losing data when upgrading utf8 data from older versions of MySQL.
Thus, my original question was misconceived: the maximum number of bytes to encode each character of a Chinese name shouldn't matter so long as the encoding you use actually supports encoding all Unicode code points.

Related

Is it safe to update tables from utf8 to utf8mb4 in MySQL?

I am aware that similar questions have been asked before, but we need a more definitive answer.
Is it safe to update MySQL tables encoded in utf8 to utf8mb4 in all cases? More specifically, even for varchar fields containing strings generated, for example, like this (in Java):
new BigInteger(130, random).toString(32)
From our understanding utf8mb4 is a superset of utf8 so our assumption would be that everything should be fine, but we would love some input from more MySQL superusers.
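For this particular case, the answer can be reasoned out directly: Java's BigInteger.toString(32) emits only the base-32 digits 0-9 and a-v, which are pure ASCII and therefore occupy one byte per character in both utf8 and utf8mb4. A hypothetical Python analogue of that Java call (the to_base32 helper is mine, written to mirror its output alphabet):

```python
import random
import string

# Base-32 digit alphabet used by Java's BigInteger.toString(32):
# 0-9 followed by a-v.
DIGITS = string.digits + "abcdefghijklmnopqrstuv"

def to_base32(n: int) -> str:
    """Render a non-negative integer in base 32, BigInteger-style."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 32)
        out.append(DIGITS[r])
    return "".join(reversed(out))

# Analogue of: new BigInteger(130, random).toString(32)
token = to_base32(random.getrandbits(130))

# Every character is plain ASCII, so it occupies one byte in both
# utf8 and utf8mb4 -- conversion cannot corrupt such values.
print(all(c in DIGITS for c in token))           # True
print(len(token) == len(token.encode("utf-8")))  # True: 1 byte per char
```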
How the data was originally inserted into MySQL is irrelevant. Let's suppose you used the entire character set of utf8, i.e. the BMP characters.
utf8mb4 is a superset of utf8mb3 (aliased as utf8), as documented here:
10.9.7 Converting Between 3-Byte and 4-Byte Unicode Character Sets
One advantage of converting from utf8mb3 to utf8mb4 is that this enables applications to use supplementary characters. One tradeoff is that this may increase data storage space requirements.
In terms of table content, conversion from utf8mb3 to utf8mb4 presents no problems:
For a BMP character, utf8mb4 and utf8mb3 have identical storage characteristics: same code values, same encoding, same length.
For a supplementary character, utf8mb4 requires four bytes to store it, whereas utf8mb3 cannot store the character at all. When converting utf8mb3 columns to utf8mb4, you need not worry about converting supplementary characters because there will be none.
In terms of table structure, these are the primary potential incompatibilities:
For the variable-length character data types (VARCHAR and the TEXT types), the maximum permitted length in characters is less for utf8mb4 columns than for utf8mb3 columns.
For all character data types (CHAR, VARCHAR, and the TEXT types), the maximum number of characters that can be indexed is less for utf8mb4 columns than for utf8mb3 columns.
Consequently, to convert tables from utf8mb3 to utf8mb4, it may be necessary to change some column or index definitions.
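The arithmetic behind the index limit is easy to check. Assuming InnoDB's classic 767-byte index key prefix limit (the default before large prefixes were enabled), MySQL reserves the worst-case byte count per character:

```python
# Why index definitions may need changing on conversion: the index prefix
# limit is in bytes, and MySQL reserves the per-character worst case.
INDEX_PREFIX_BYTES = 767  # classic InnoDB limit, assumed here

max_chars_utf8mb3 = INDEX_PREFIX_BYTES // 3  # 3 bytes/char worst case
max_chars_utf8mb4 = INDEX_PREFIX_BYTES // 4  # 4 bytes/char worst case

print(max_chars_utf8mb3)  # 255 characters
print(max_chars_utf8mb4)  # 191 characters
```

This is why an indexed VARCHAR(255) utf8 column will not convert as-is, and why VARCHAR(191) shows up so often in utf8mb4 schemas.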
Personally, I ran into issues with indexes on relatively long text columns, where the maximum size of the index was reached. It was a search index, not a unique index, so the workaround was to index fewer characters. See also this answer.
Of course, this assumes you keep an equivalent collation; if you change the collation, other issues apply.

Does MySQL UTF8 collation fit japanese and korean characters?

I've set all collations and character sets to UTF8 in PHP and MySQL, and there is no problem. But as seen on http://dev.mysql.com/doc/refman/5.5/en/charset-unicode-utf8mb4.html, the standard utf8_general_ci collation uses up to three bytes for storing characters. That should be enough to store all BMP characters. Still, I've found no hint as to whether all Korean and Japanese characters are included in the BMP, or whether some characters need four bytes to be stored. I simply want to know whether utf8_general_ci and utf8_bin are really enough to store all Korean/Japanese characters, or whether I have to use utf8mb4_general_ci and utf8mb4_bin?
The most frequently used characters are in the BMP. The characters in higher planes are mostly rare or historic, but some of them may appear in personal names, for example. If you can use utf8mb4, you probably should.
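A quick Python check of a few representative characters (my own examples): common kana and hangul are in the BMP, but some characters used in personal names are not.

```python
# Common Japanese kana and Korean hangul live in the BMP
# (at most 3 UTF-8 bytes), so MySQL utf8 can store them.
for ch in ("あ", "ア", "한", "漢"):
    assert ord(ch) <= 0xFFFF             # in the BMP
    assert len(ch.encode("utf-8")) <= 3  # fits in MySQL utf8

# U+20BB7, a variant of 吉 that occurs in Japanese personal names,
# is a supplementary character.
name_char = "\U00020BB7"
print(ord(name_char) > 0xFFFF)         # True: outside the BMP
print(len(name_char.encode("utf-8")))  # 4 -> needs utf8mb4
```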

What MySQL collation is best for accepting all unicode characters?

Our column is currently collated to latin1_swedish_ci and special unicode characters are, obviously, getting stripped out. We want to be able to accept chars such as U+272A ✪, U+2764 ❤ (see this wikipedia article), etc. I'm leaning towards utf8_unicode_ci; would this collation handle these and other characters? I don't care about speed, as this column isn't indexed.
MySQL Version: 5.5.28-1
The collation is the least of your worries; what you need to think about is the character set for the column/table/database. The collation (the rules governing how data is compared and sorted) is just a corollary of that.
MySQL supports several Unicode character sets, utf8 and utf8mb4 being the most interesting. utf8 supports Unicode characters in the BMP, i.e. a subset of all of Unicode. utf8mb4, available since MySQL 5.5.3, supports all of Unicode.
The collation to use with either of the Unicode character sets is most likely xxx_general_ci or xxx_unicode_ci. The former is a simplified, language-independent sorting and comparison algorithm; the latter implements the full Unicode Collation Algorithm, supporting more Unicode features (e.g. treating "ß" and "ss" as equivalent), but is therefore also slower.
See https://dev.mysql.com/doc/refman/5.5/en/charset-unicode-sets.html.
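Incidentally, the two example characters from the question happen to be in the BMP, which is easy to verify in Python:

```python
# U+272A ✪ and U+2764 ❤ are BMP characters: each takes 3 bytes in UTF-8,
# so even MySQL's 3-byte utf8 can store them (utf8mb4 stores them
# identically, per the docs quoted above).
for ch, cp in (("✪", 0x272A), ("❤", 0x2764)):
    assert ord(ch) == cp
    print(len(ch.encode("utf-8")))  # 3 bytes each -> within the BMP
```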

MySQL Workbench: Which collation will allow the widest range of characters, including foreign/acented characters?

I am creating an EER Model and want to find the collation that will give me the widest range of characters to use. The characters that will be stored are generally standard English, but on occasion the brands will have foreign and/or accented characters. How can I ensure they are supported and not turned into squares or question marks down the road?
Generally I have them stored at UTF-16 but am not seeing that option available, in the default at least.
What you are looking for is the character set not the collation. The character set defines the set of symbols and encoding used to represent those symbols. The collation defines the rules used to compare the characters of a given character set and affect sorting.
Unicode character sets offer the broadest character support. Among the Unicode encodings MySQL supports, two are:
UTF8 - uses up to 24 bits (three bytes) to encode a character; backwards compatible with ASCII encoding. (The related utf8mb4 character set extends this to four bytes, covering all of Unicode.)
UCS2 - always uses 16 bits to encode each character; not compatible with ASCII encoding.
Within those two character sets MySQL has multiple collations that specify the sorting rules for different languages, Unicode rules, and binary comparison rules.
Look at: Character Set Support in MySQL Reference Manual.
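The ASCII-compatibility difference mentioned above is concrete: ASCII text is byte-for-byte identical under UTF-8, while UCS-2/UTF-16 always uses at least two bytes per character. A small Python illustration:

```python
# UTF-8 leaves ASCII bytes untouched; UTF-16 (the modern superset of
# UCS-2) does not, which is what "not compatible with ASCII" means.
print("A".encode("utf-8"))      # b'A'
print("A".encode("utf-16-be"))  # b'\x00A'

print("é".encode("utf-8"))      # b'\xc3\xa9' (two bytes)
print("é".encode("utf-16-be"))  # b'\x00\xe9' (two bytes)
```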

utf-8 vs latin1

What are the advantages/disadvantages between using utf8 as a charset against using latin1?
If UTF-8 supports more characters and is used consistently, wouldn't it always be the better choice? Is there any reason to choose latin1?
UTF8 Advantages:
Supports most languages, including RTL languages such as Hebrew.
No translation needed when importing/exporting data to UTF8 aware components (JavaScript, Java, etc).
UTF8 Disadvantages:
Non-ASCII characters will take more time to encode and decode, due to their more complex encoding scheme.
Non-ASCII characters will take more space, as they may be stored using more than 1 byte (this applies to any character outside the 128-character ASCII range). A CHAR(10) or VARCHAR(10) field may need up to 30 bytes to store some UTF8 characters.
Collations other than utf8_bin will be slower, as the sort order does not directly map to the character encoding order, and will require translation in some stored procedures (as variables default to the utf8_general_ci collation).
If you need to JOIN UTF8 and non-UTF8 fields, MySQL will impose a SEVERE performance hit. What would be sub-second queries could potentially take minutes if the fields joined are different character sets/collations.
Bottom line:
If you don't need to support non-Latin1 languages, want to achieve maximum performance, or already have tables using latin1, choose latin1.
Otherwise, choose UTF8.
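The CHAR(10) claim in the disadvantages list above checks out in Python (the example strings are my own):

```python
# Ten BMP Chinese characters need 30 bytes under MySQL's 3-byte utf8,
# versus 10 bytes for the same column length in latin1 text.
latin1_text = "abcdefghij"
chinese_text = "中文数据库字符集测试"  # 10 characters

print(len(latin1_text.encode("latin-1")))  # 10 bytes
print(len(chinese_text))                   # 10 characters
print(len(chinese_text.encode("utf-8")))   # 30 bytes
```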
latin1 has the advantage of being a single-byte encoding; it can therefore store more characters in the same amount of storage space, because the length of string data types in MySQL depends on the encoding. The manual states that
To calculate the number of bytes used to store a particular CHAR, VARCHAR, or TEXT column value, you must take into account the character set used for that column and whether the value contains multibyte characters. In particular, when using a utf8 Unicode character set, you must keep in mind that not all characters use the same number of bytes. utf8mb3 and utf8mb4 character sets can require up to three and four bytes per character, respectively. For a breakdown of the storage used for different categories of utf8mb3 or utf8mb4 characters, see Section 10.9, “Unicode Support”.
Furthermore lots of string operations (such as taking substrings and collation-dependent compares) are faster with single-byte encodings.
In any case, latin1 is not a serious contender if you care about internationalization at all. It can be an appropriate choice when you will be storing known safe values (such as percent-encoded URLs).
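The percent-encoded URL example works because percent-encoding reduces any URL to the ASCII subset, which latin1 covers. A Python sketch using the standard library:

```python
from urllib.parse import quote

# Percent-encoding turns non-ASCII characters into %XX escapes, so the
# result is pure ASCII and safe to store in a latin1 column.
url = "https://example.com/搜索?q=café"  # hypothetical URL
encoded = quote(url, safe=":/?=&")

print(all(ord(c) < 128 for c in encoded))  # True: pure ASCII
```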
@Ross Smith II, point 4 is worth gold: inconsistency between columns can be dangerous.
To add value to the already good answers, here is a small performance test about the difference between charsets:
A modern (2013) server, a real-use table with 20,000 rows, no index on the column concerned.
SELECT 4 FROM subscribers WHERE 1 ORDER BY time_utc_str; (4 is cache buster)
varchar(20) CHARACTER SET latin1 COLLATION latin1_bin: 15ms
varbinary(20): 17ms
utf8_bin: 20ms
utf8_general_ci: 23ms
For simple strings like numerical dates, my decision would be, when performance is concerned, using utf8_bin (CHARACTER SET utf8 COLLATE utf8_bin). This would prevent any adverse effects with other code that expects database charsets to be utf8 while still being sort of binary.
Fixed-length encodings such as latin1 are always more efficient in terms of CPU consumption.
If the set of tokens in some fixed-length character set is known to be sufficient for your purpose at hand, and your purpose involves heavy and intensive string processing, with lots of LENGTH() and SUBSTR() stuff, then that could be a good reason for not using encodings such as UTF-8.
Oh, and BTW: do not confuse, as you seem to, a character set with an encoding thereof. A character set is a defined set of writable glyphs. The same character set can have multiple distinct encodings. Each version of the Unicode standard constitutes a character set, and each can be encoded as UTF-8, UTF-16, or UTF-32 (which uses a full four bytes for any character); the latter two can each come in a HOB-first (big-endian) or HOB-last (little-endian) flavour.
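That byte-order point is easy to see in Python, where the two flavours are exposed as distinct codecs:

```python
# The same UTF-16 code unit can be serialized with the high-order
# byte first (big-endian) or last (little-endian).
print("A".encode("utf-16-be"))  # b'\x00A' (high-order byte first)
print("A".encode("utf-16-le"))  # b'A\x00' (high-order byte last)

# UTF-32 uses a full four bytes per character, regardless of plane.
print(len("A".encode("utf-32-be")))  # 4
```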