Would anyone know of a reliable method (with MySQL or otherwise) to select rows in a database that contain Japanese characters? I have a lot of rows in my database, some of which contain only alphanumeric characters and some of which contain Japanese characters.
Rules for when you have problems with character sets:
1. When creating the database, use UTF-8 encoding:
CREATE DATABASE _test DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
2. Make sure all text fields (VARCHAR and TEXT) use UTF-8:
CREATE TABLE _test.test (
  `id` INT NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(255) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE = MyISAM;
3. When you make a connection, run this before you query or update the database (see the note after this list):
SET NAMES utf8;
4. With phpMyAdmin, choose UTF-8 when you log in.
5. Set the web page encoding to UTF-8 to make sure all POST/GET data is in UTF-8 (otherwise you'll have to convert it, which is painful). PHP code (first line in the PHP file, or at least before any output):
header('Content-Type: text/html; charset=UTF-8');
6. Make sure all your queries are written in UTF-8 encoding. If using PHP:
6.1. If PHP supports code in UTF-8, just write your files in UTF-8.
6.2. If PHP is compiled without UTF-8 support, convert your strings to UTF-8 like this:
$str = mb_convert_encoding($str, 'UTF-8', '<put your file encoding here>');
$query = 'SELECT * FROM test WHERE name = "' . $str . '"';
That should make it work.
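A note on step 3: SET NAMES utf8 is essentially shorthand for setting the three session variables that control the connection's encoding:
SET character_set_client = utf8;
SET character_set_connection = utf8;
SET character_set_results = utf8;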
Following on from NickSoft's helpful answer, I had to set the encoding on the DB connection to get it to work:
&characterEncoding=UTF8
Then SET NAMES utf8; seemed to be redundant.
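For reference, that parameter belongs in the JDBC connection URL, something like this (a sketch; host, port and database name are placeholders):
jdbc:mysql://localhost:3306/mydb?useUnicode=true&characterEncoding=UTF8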
As teneff stated, just use SELECT.
When installing MySQL, use UTF-8 as the charset. Then choosing utf8_general_ci as the collation should do the trick.
As Frosty stated, just use SELECT.
Look up the lowest- and highest-valued Japanese characters in the Unicode charts at http://www.unicode.org/roadmaps/bmp/ and use REGEXP. It may take several different ranges of characters to cover the whole Japanese character set. As long as you use the UTF-8 charset and utf8_general_ci collation, you should be able to use REGEXP '[a-gk-nt-z]', where a-g represents one range of Unicode characters from the charts, k-n represents another range, and so on.
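For example, here is a sketch assuming MySQL 8.0+ (whose REGEXP engine is ICU-based and accepts \x{...} code points) and a utf8mb4 column named name; the character class covers Hiragana (U+3040-U+309F), Katakana (U+30A0-U+30FF) and the main CJK ideograph block (U+4E00-U+9FFF):
SELECT * FROM test
WHERE name REGEXP '[\\x{3040}-\\x{30FF}\\x{4E00}-\\x{9FFF}]';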
There is a limited number of Japanese characters. You can search for these using
SELECT ... LIKE '%カ%'
Alternatively, you can search by hexadecimal code point. Note that CHAR() needs USING ucs2 here; without it, 0x30AB is treated as the two separate bytes 0x30 and 0xAB rather than as the code point of カ:
SELECT ... LIKE CONCAT('%', CHAR(0x30AB USING ucs2), '%')
You may find this table of the UTF-8 Japanese subset useful:
http://www.utf8-chartable.de/unicode-utf8-table.pl?start=12448
This assumes you're using the UTF-8 character set for fields, queries and results.
I am unable to find the exact solution for MySQL.
The problem is the column's character set: the Indian Rupee symbol (U+20B9), being relatively new, is not in the older single-byte character sets, and in UTF-8 it takes a multi-byte encoding (E2 82 B9). So we have to change the column's character set to utf8 with the utf8_general_ci collation:
ALTER TABLE test_tb MODIFY COLUMN col VARCHAR(255)
CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL;
After executing the above query, simply execute the following query to insert the symbol:
INSERT INTO test_tb VALUES ('₹');
Ta-Da!!!
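You can verify the stored bytes with HEX(); E282B9 is the UTF-8 encoding of U+20B9 (col is a placeholder for your column name):
SELECT col, HEX(col) FROM test_tb; -- expect E282B9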
You are talking Oracle, yet it is tagged MySQL. Which do you want? And what language and/or client tool are you using?
Copy and paste it. Which Rupee do you like? ৲ ৳ ૱ ௹ ₨ ꠸
Probably you want this one:
UNHEX('E282A8') = '₨'
which is U+20A8 or 8360 in non-MySQL contexts
You need to have CHARACTER SET utf8 on the table/column.
You need to have done SET NAMES utf8 (or equivalent) when connecting.
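A minimal sketch combining the two (the table and column names are made up for illustration):
SET NAMES utf8;
CREATE TABLE currency_test (
  sym VARCHAR(10) CHARACTER SET utf8
);
INSERT INTO currency_test VALUES (CONVERT(UNHEX('E282A8') USING utf8)); -- ₨ (U+20A8)
SELECT sym, HEX(sym) FROM currency_test; -- expect E282A8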
The simplest way to do it is to use utf8mb4, which stores all the symbols:
ALTER TABLE AsinBuyBox CONVERT TO CHARACTER SET utf8mb4;
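After the conversion, 4-byte UTF-8 characters fit; with the 3-byte utf8 charset they would be rejected or mangled. A quick check (the column name col is hypothetical):
INSERT INTO AsinBuyBox (col) VALUES ('😀'); -- U+1F600, four bytes in UTF-8 (F09F9880)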
I want to convert my database to store Unicode symbols.
Currently the tables have:
latin1_swedish_ci collation and latin1 character set
OR
utf8_general_ci collation and utf8 character set
I am not sure how the existing data is encoded, but I suppose it is UTF-8 encoded, as I am using Django, which I believe encodes the data in UTF-8 before sending it to the database.
My question is:
Can I convert the tables to the utf8_unicode_ci collation and utf-8 character set using the following queries without messing up the existing data (as suggested in this post)?
ALTER DATABASE databasename CHARACTER SET utf8 COLLATE utf8_unicode_ci;
ALTER TABLE tablename CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
Considering latin1 is a subset of utf-8, I think it should work. What do you guys think?
Thank you in advance.
P.S: The version of MySQL is: 5.1
Latin1 is not a subset of UTF-8 - ASCII is. Latin1, however, is represented in Unicode.
CONVERT TO should work, as long as the data was stored in the correct encoding in the first place. Django may have used UTF-8 on the database connection, but the database should have re-encoded on the fly.
To check the actual encoding used: use the mysql command-line tool to execute an SQL query that selects a row you know contains non-ASCII characters, then use the MySQL HEX() function to check the bytes used. If you see bytes greater than 0x7F, check whether they correspond to valid characters in https://en.wikipedia.org/wiki/ISO/IEC_8859-1#Codepage_layout
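For example (a sketch; tbl and col are placeholders): é stored as latin1 is the single byte E9, while é stored as utf8 is the two bytes C3A9.
SELECT col, HEX(col) FROM tbl WHERE col LIKE '%é%';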
If you have c396 sitting in a latin1 column, and you want it to mean Ö, then you are half way to "double encoding". Do not use CONVERT TO; that will really get you into "double encoding".
Instead, you need the 2-step ALTER.
ALTER TABLE Tbl MODIFY COLUMN col VARBINARY(...) ...;
ALTER TABLE Tbl MODIFY COLUMN col VARCHAR(...) ... CHARACTER SET utf8 ...;
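For example, for a VARCHAR(255) NOT NULL column (a sketch; Tbl, col and the attributes are placeholders, so keep your column's real length, NULLability and defaults):
ALTER TABLE Tbl MODIFY COLUMN col VARBINARY(255) NOT NULL;
ALTER TABLE Tbl MODIFY COLUMN col VARCHAR(255) CHARACTER SET utf8 NOT NULL;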
If you have already messed it up further, and now the Ö is hex C383E28093, then you need to fix double encoding.
This gets you the latin1 byte in 2 steps:
CONVERT(CONVERT(UNHEX('C383E28093') USING utf8) USING latin1) --> 'Ö' (C396)
HEX(CONVERT(CONVERT(UNHEX('C396') USING utf8) USING latin1)) --> 'Ö' in latin1 (D6)
This gets you the 2-byte utf8 encoding:
CONVERT(BINARY(CONVERT(CONVERT(UNHEX('C383E28093') USING utf8) USING latin1)) USING utf8)
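Applied to a whole utf8 column, the inner UNHEX/CONVERT is simply the column itself, so the repair becomes (a sketch; test it on a few rows before updating everything):
UPDATE Tbl SET col = CONVERT(BINARY(CONVERT(col USING latin1)) USING utf8) WHERE ...;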
Do you want the column to be latin1? Or utf8?
I am facing issues inserting a Bulgarian-language string into MySQL using a Perl script. If I do a manual insert with a query, it works fine, but when using Perl the string gets converted into unknown characters.
I have performed the steps below to resolve the issue, but still no luck.
Set the utf8 character set on the database connection:
$dbh->do("set character set utf8");
$dbh->do('SET NAMES utf8');
$dbh->{'mysql_enable_utf8'} = 1;
Also, I have set the default character set to utf8 in the my.cnf file.
Still I am getting unknown characters.
Can anyone suggest how to resolve this issue?
Thanks
See if these help:
use utf8;
use open ':std', ':encoding(UTF-8)';
It's not just MySQL that could be screwing things up -- the original bytes could be mis-encoded; the output could be improperly rendered; etc.
If you have some data stored, let's check whether it is 'correct'. Do something like
SELECT col, HEX(col) FROM tbl WHERE ...
ДЖ should come out as hex D094D096 if it is correctly encoded in utf8. Note that Cyrillic mostly has D0xx hex for its characters.
I have more discussion here.
I've been using a database/connection with the wrong encoding for a long time, causing the Hebrew characters in the database to display as unknown-language characters, as the example below shows:
I want to re-import/change the database with the inserted-wrong-encoded characters to the right encoded characters, so the hebrew characters will be displayed as hebrew characters and not as unknown parse like *"× ×תה מסכי×,×× ×©×™× ×ž×¦×™×¢×™× ×œ×™ כמה ×”× "*
For the record, when I display this SQL data with PHP it shows as Hebrew; when I try to access it from the phpMyAdmin panel it shows as gibberish (these unknown characters).
Is there any way to fix it although there is some data already inserted in the database?
That feels like "double-encoded" Hebrew strings.
This partially recovers the text:
UNHEX(HEX(CONVERT('× ×תה מסכי×,××' USING latin1)))
--> '� �תה מסכי�,��
I do not know what leads to the � symbols.
Please do SELECT col, HEX(col) FROM ... WHERE ...; for some cell. I would expect שלום to give hex D7A9D79CD795D79D if it were correctly stored. For "double encoding", I would expect C397C2A9C397C593C397E280A2C397C29D.
Please provide the output from that SELECT, then I will work on how to recover the data.
Edit
Here's what I think happened.
The client had characters encoded as utf8; and
SET NAMES latin1 lied by claiming that the client had latin1 encoding; and
The column in the table declared CHARACTER SET utf8.
Yod did not jump out as a letter, so it took a while to see it. CONVERT(BINARY(CONVERT('×™×™123' USING latin1)) USING utf8) -->יי123
So, I am thinking that that expression will clean up the text. But be cautious; try it on a few rows before 'fixing' the entire table.
UPDATE table SET col = CONVERT(BINARY(CONVERT(col USING latin1)) USING utf8) WHERE ...;
If that does not work, here are 4 fixes for double-encoding that may or may not be equivalent. (Note: BINARY(xx) is probably the same as CONVERT(xx USING binary).)
I am not sure that you can do anything about the data that has already been stored in the database. However, you can import Hebrew data properly by making sure you have the correct character set and collation:
the DB collation has to be utf8_general_ci
the collation of the table with Hebrew has to be utf8_general_ci
For example:
CREATE DATABASE col CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE TABLE `col`.`hebrew` (
`id` INT NOT NULL AUTO_INCREMENT,
`heb` VARCHAR(45) NOT NULL,
PRIMARY KEY (`id`)
) CHARACTER SET utf8
COLLATE utf8_general_ci;
INSERT INTO hebrew(heb) values ('שלום');
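To check that the insert round-tripped correctly (D7A9D79CD795D79D is שלום correctly encoded in utf8):
SELECT heb, HEX(heb) FROM hebrew; -- expect D7A9D79CD795D79D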
While creating the database for my website I used the syntax below:
CREATE DATABASE myDatabase DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
Now when my client enters Arabic characters, he sees some weird output. I am using JSF 2.0 for the web pages.
What changes do I need to make so that I can enter Arabic (or any other) characters on my site and have them stored correctly in the DB?
Edit 1
When I print the data, I see output like شسÙ?بشسÙ? بشسÙ?ب شسÙ?ب
Note:
I am using JSF 2.0 for the web application.
You should set the UTF-8 charset for the connection before inserting/reading data:
SET NAMES utf8;
INSERT INTO table VALUES(...);
SELECT * FROM table;
Use N'' when you insert data values. This denotes that the subsequent string is in Unicode (the N actually stands for the National language character set):
INSERT INTO table VALUES(N'ArabicField');
I think you must use cp1256_general_ci instead of utf8_general_ci,
and don't forget to set the collation of the database and all fields that may contain Arabic words to utf8_general_ci.