Does MySQL UTF8 collation fit Japanese and Korean characters?

I've set all collations and character sets to UTF-8 in PHP and MySQL, and that works without problems. But as described at http://dev.mysql.com/doc/refman/5.5/en/charset-unicode-utf8mb4.html, the standard utf8 character set (with collations such as utf8_general_ci) stores at most three bytes per character, which should be enough for all BMP characters. I've still found no clear statement on whether all Korean and Japanese characters are included in the BMP, or whether some of them need four bytes to be stored. I simply want to know: are utf8_general_ci and utf8_bin really enough to store all Korean/Japanese characters, or do I have to use utf8mb4_general_ci and utf8mb4_bin?

The most frequently used characters are in the BMP. The characters in the higher planes are mostly rare and historic, but some of them do occur, in personal names for example. If you can use utf8mb4, you probably should.
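If you do move to utf8mb4, it has to be declared on both the columns and the connection. A minimal sketch, with hypothetical table and column names:
CREATE TABLE names (
    id INT AUTO_INCREMENT PRIMARY KEY,
    full_name VARCHAR(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci
) ENGINE=InnoDB;
-- the connection must also use utf8mb4, or 4-byte characters are mangled on the way in
SET NAMES utf8mb4;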

Related

Which collation to use so that `ş` and `s` are treated as unique values?

The issue is that ş and s are interpreted by MySQL as identical values.
I'm new to MySQL, so I have no idea which collations would view them as unique.
The collations I've tried that don't work are:
utf8_general_ci
utf8_unicode_520_ci
utf8mb4_unicode_ci
utf8mb4_unicode_520_ci
Does anybody know which collation to use?
P.S. I also really need the collation to handle emoji and other non-Latin characters, and to my (limited) knowledge of MySQL, only the Unicode collations can do this?
utf8_turkish_ci and utf8_romanian_ci -- as shown in http://mysql.rjweb.org/utf8_collations.html
(Plus, of course, utf8_bin.)
For your added question: You are looking for a "character set" (not a "collation") that can represent Emoji and other non-Latin characters -- UTF-8 is the one to use. In MySQL, it is utf8mb4. The "collations" that are associated with that are named utf8mb4_.... Collations control ordering and equality, as indicated in the first part of your question about s and ş.
MySQL's CHARACTER SET utf8 is a subset of utf8mb4. Either can handle all the "letters" in the world. But only utf8mb4 can handle Emoji and some Chinese characters.
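You can check a collation's behaviour straight from the client before touching any tables. A quick sketch (the literals assume a utf8mb4 connection):
SET NAMES utf8mb4;
SELECT 'ş' = 's' COLLATE utf8mb4_general_ci AS equal_general, -- 1: folded to the same letter
       'ş' = 's' COLLATE utf8mb4_turkish_ci AS equal_turkish; -- 0: distinct letters
Once a collation behaves the way you want, apply it to the column with ALTER TABLE ... MODIFY ... COLLATE utf8mb4_turkish_ci.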

Chinese names and Unicode Basic Multilingual Plane (BMP)

I am building an application using MySQL in which Chinese names need to be stored in the database. I'm trying to decide whether the basic utf8 encoding (which only covers the Basic Multilingual Plane and stores at most 3 bytes per character in its UTF-8 encoding) is enough, or whether I need to use the utf8mb4 encoding, which allows characters from the higher planes to be encoded and stored.
Is the Unicode Basic Multilingual Plane (BMP) sufficient to store all Chinese proper names?
MySQL's CHARACTER SET utf8 only handles 3-byte UTF-8 codes (the BMP). Instead, use CHARACTER SET utf8mb4, which also handles the 4-byte codes. Yes, that includes all of currently defined Unicode for Chinese, Emoji, etc.
Use version 5.7, if practical.
TL;DR: it doesn't matter; stick with the utf8mb4 encoding, especially for new applications.
Long-form answer: the key difference between the two encodings is that utf8, long supported by MySQL, only supports UTF-8-encoded characters up to three bytes in length. As of 5.5.3, as noted by @Rick James, a new encoding, utf8mb4, relaxes this restriction and otherwise has no disadvantages.
According to the MySQL documentation, the newer utf8mb4 encoding lifts this arbitrary three-byte restriction, and there are few, if any, disadvantages:
For a BMP character, utf8 and utf8mb4 have identical storage characteristics: same code values, same encoding, same length.
For a supplementary character, utf8 cannot store the character at all, whereas utf8mb4 requires four bytes to store it. Because utf8 cannot store the character at all, you have no supplementary characters in utf8 columns and need not worry about converting characters or losing data when upgrading utf8 data from older versions of MySQL.
Thus, my original question was misconceived: the maximum number of bytes needed to encode each character of a Chinese name shouldn't matter, as long as the encoding you use actually supports all Unicode code points.
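If you already have utf8 columns, the switch to utf8mb4 is a per-column conversion. A sketch, with hypothetical table and column names:
ALTER TABLE people
    MODIFY name VARCHAR(100)
    CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
One caveat: before MySQL 5.7's defaults changed, InnoDB's 767-byte index prefix limit meant an indexed utf8mb4 VARCHAR could hold at most 191 characters.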

What MySQL collation is best for accepting all unicode characters?

Our column is currently collated as latin1_swedish_ci, and special Unicode characters are, obviously, getting stripped out. We want to be able to accept characters such as U+272A ✪ and U+2764 ❤ (see this Wikipedia article). I'm leaning towards utf8_unicode_ci; would this collation handle these and other characters? I don't care about speed, as this column isn't an index.
MySQL Version: 5.5.28-1
The collation is the least of your worries; what you need to think about is the character set for the column/table/database. The collation (the rules governing how data is compared and sorted) is just a corollary of that.
MySQL supports several Unicode character sets, utf8 and utf8mb4 being the most interesting. utf8 supports Unicode characters in the BMP, i.e. a subset of all of Unicode. utf8mb4, available since MySQL 5.5.3, supports all of Unicode.
The collation to use with either of these character sets is most likely xxx_general_ci or xxx_unicode_ci. The former is a simplified, language-independent sorting and comparison algorithm; the latter is a more complete language-independent algorithm supporting more Unicode features (e.g. treating "ß" and "ss" as equivalent), but it is therefore also slower.
See https://dev.mysql.com/doc/refman/5.5/en/charset-unicode-sets.html.
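Accepting those characters means converting the column itself, not just picking a collation. A sketch, assuming a hypothetical table and column:
ALTER TABLE comments
    MODIFY body TEXT
    CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
-- the connection must match, or characters are still lost on the way in
SET NAMES utf8mb4;
Note that characters already stripped by earlier inserts are gone for good; the conversion only protects future data.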

utf-8 vs latin1

What are the advantages/disadvantages between using utf8 as a charset against using latin1?
If UTF-8 supports more characters and is used consistently, wouldn't it always be the better choice? Is there any reason to choose latin1?
UTF8 Advantages:
1. Supports most languages, including RTL languages such as Hebrew.
2. No translation needed when importing/exporting data to UTF8 aware components (JavaScript, Java, etc.).
UTF8 Disadvantages:
1. Non-ASCII characters will take more time to encode and decode, due to their more complex encoding scheme.
2. Non-ASCII characters will take more space, as they may be stored using more than 1 byte (this applies to characters outside the first 127 characters of the ASCII set). A CHAR(10) or VARCHAR(10) field may need up to 30 bytes to store some UTF8 characters.
3. Collations other than utf8_bin will be slower, as the sort order does not map directly to the character encoding order, and will require translation in some stored procedures (as variables default to the utf8_general_ci collation).
4. If you need to JOIN UTF8 and non-UTF8 fields, MySQL will impose a SEVERE performance hit. What would be sub-second queries could potentially take minutes if the joined fields have different character sets/collations (see the sketch below).
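Point 4 is easy to reproduce. A sketch with hypothetical tables, showing how a character set mismatch can defeat an index:
CREATE TABLE t_latin1 (code VARCHAR(20) CHARACTER SET latin1, KEY (code));
CREATE TABLE t_utf8   (code VARCHAR(20) CHARACTER SET utf8mb4, KEY (code));
-- comparing the two forces a character set conversion on one side,
-- so the index on that side cannot be used:
EXPLAIN SELECT * FROM t_latin1 a JOIN t_utf8 b ON a.code = b.code;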
Bottom line:
If you don't need to support non-Latin1 languages, want to achieve maximum performance, or already have tables using latin1, choose latin1.
Otherwise, choose UTF8.
latin1 has the advantage that it is a single-byte encoding and can therefore store more characters in the same amount of storage space, because the length of string data types in MySQL depends on the encoding. The manual states:
To calculate the number of bytes used to store a particular CHAR, VARCHAR, or TEXT column value, you must take into account the character set used for that column and whether the value contains multibyte characters. In particular, when using a utf8 Unicode character set, you must keep in mind that not all characters use the same number of bytes. utf8mb3 and utf8mb4 character sets can require up to three and four bytes per character, respectively. For a breakdown of the storage used for different categories of utf8mb3 or utf8mb4 characters, see Section 10.9, “Unicode Support”.
Furthermore, many string operations (such as taking substrings and collation-dependent comparisons) are faster with single-byte encodings.
In any case, latin1 is not a serious contender if you care about internationalization at all. It can be an appropriate choice when you will be storing known safe values (such as percent-encoded URLs).
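The encoding-dependent length is easy to see from the client. A small sketch (assuming a utf8mb4 connection):
SET NAMES utf8mb4;
-- LENGTH() counts bytes, CHAR_LENGTH() counts characters
SELECT LENGTH('hello'), CHAR_LENGTH('hello'); -- 5, 5: ASCII needs one byte per character
SELECT LENGTH('héllo'), CHAR_LENGTH('héllo'); -- 6, 5: 'é' takes two bytes in UTF-8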
@Ross Smith II: point 4 is worth gold; inconsistency between columns can be dangerous.
To add value to the already good answers, here is a small performance test about the difference between charsets:
A modern (2013) server, a real production table with 20,000 rows, no index on the column concerned.
SELECT 4 FROM subscribers WHERE 1 ORDER BY time_utc_str; (4 is cache buster)
varchar(20) CHARACTER SET latin1 COLLATION latin1_bin: 15ms
varbinary(20): 17ms
utf8_bin: 20ms
utf8_general_ci: 23ms
For simple strings like numeric dates, when performance matters my choice would be utf8_bin (CHARACTER SET utf8 COLLATE utf8_bin). This prevents adverse effects with other code that expects the database charset to be utf8, while still being binary in effect.
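Applied to the test column above, that choice would look like this (a sketch; adjust the length and type to your schema):
ALTER TABLE subscribers
    MODIFY time_utc_str VARCHAR(20)
    CHARACTER SET utf8 COLLATE utf8_bin;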
Fixed-length encodings such as latin1 are always more efficient in terms of CPU consumption.
If the set of tokens in some fixed-length character set is known to be sufficient for your purpose at hand, and your purpose involves heavy and intensive string processing, with lots of LENGTH() and SUBSTR() stuff, then that could be a good reason for not using encodings such as UTF-8.
Oh, and by the way: do not confuse, as you seem to do, a character set with an encoding of it. A character set is a defined set of writable glyphs. The same character set can have multiple distinct encodings. The various versions of the Unicode standard each constitute a character set. Each of them can be encoded with UTF-8, UTF-16, or UTF-32, and the latter two can each come in a HOB-first (big-endian) or HOB-last (little-endian) flavour.

CHARSET on column level in MySQL 5

My app has a table in which two columns need utf8 and the others are latin1. The latin1 columns, by definition, do not contain non-Latin characters, and the utf8 columns may or may not contain multi-byte characters. One utf8 column is indexed and the other is not.
I have three questions:
Is mixing charsets on a column level a good practice?
If a row (in this table) contains only Latin characters and no multi-byte utf8 characters, how are data storage and index size affected? Put another way, is a utf8 column's data/index size the same as a latin1 column's when no multi-byte text is stored?
Quantitatively, how are data and index storage affected for utf8 columns with respect to latin1?
Thanks
UTF-8 is a variable-length encoding. Characters inside the ASCII set are encoded with one byte, as in latin1; characters beyond that are encoded using up to four bytes. A string consisting of ASCII characters will have the same length in UTF-8 and latin1.
Is mixing charsets on a column level a good practice?
I have never done this, and would tend to say no, as it complicates the database schema unnecessarily. While the database engine should be able to deal with it fine, I would not use mixed charsets out of storage considerations. The savings will be minimal at best.
The only valid reason to mix charsets that I can think of is the use of different collations for a specific sort order and/or case/accent sensitive/insensitive searching.
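For reference, column-level character sets are declared directly in the table definition. A sketch with hypothetical names:
CREATE TABLE articles (
    id INT AUTO_INCREMENT PRIMARY KEY,
    slug VARCHAR(100) CHARACTER SET latin1, -- known-safe Latin-only values
    title VARCHAR(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci,
    body TEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci,
    KEY (title(50)) -- prefix index keeps the key under older index-size limits
) ENGINE=InnoDB DEFAULT CHARSET=latin1;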