Not able to display Chinese characters after loading them into Postgres DB from MySQL

I have a source file which contains Chinese characters. After loading that file into a table in a Postgres DB, all the characters are garbled and I'm not able to see the Chinese characters. The encoding on the Postgres DB is UTF-8. I'm using the psql utility on my local macOS machine to check the output. The source file was generated from a MySQL DB using mysqldump and contains only INSERT statements.
INSERT INTO "trg_tbl" ("col1", "col2", "col3", "col4", "col5", "col6", "col7", "col7",
"col8", "col9", "col10", "col11", "col12", "col13", "col14",
"col15", "col16", "col17", "col18", "col19", "col20", "col21",
"col22", "col23", "col24", "col25", "col26", "col27", "col28",
"col29", "col30", "col31", "col32", "col33")
VALUES ( 1, 1, '与é<U+009D>žç½‘_首页&频é<U+0081>“页顶部广告ä½<U+008D>(946×90)',
'通æ <U+008F>广告(Leaderboard Banner)',
0,3,'',946,90,'','','','',0,'f',0,'',NULL,NULL,NULL,NULL,NULL,
'2011-08-19 07:29:56',0,0,0,'',NULL,0,NULL,'CPM',NULL,NULL,0);
What can I do to resolve this issue?

The text was mangled before producing that SQL statement. You probably wanted the text to start with 与 instead of the "Mojibake" version: ä¸Ž. I suggest you fix the dump either to produce utf8 characters or hex. Then the load may work, or there may be more places to specify utf8, such as SET NAMES or the equivalent.
Also, for Chinese, CHARACTER SET utf8mb4 is preferred in MySQL.
é<U+009D>ž is so mangled I don't want to figure out the second character.
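As a hedged sketch of that suggestion (the database name and file paths are placeholders, not from the question): re-dump with an explicit utf8mb4 client character set on the MySQL side, and make the client encoding explicit on the Postgres side before loading.

-- Re-create the dump with an explicit client charset (shell command, shown as a comment):
--   mysqldump --default-character-set=utf8mb4 srcdb > dump.sql
-- Then, in psql, force the client encoding before reading the file
-- (\i is a psql meta-command that runs the script):
SET client_encoding = 'UTF8';
\i dump.sql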

Related

Unicode text confounding MySQL queries

I have a PHP script that extracts attachments (Unicode text CSV files) from Gmail and uploads them to a MySQL database. All of that works fine. But once the data is in the database, I cannot run the simplest of queries against it.
If I first bring the file into Excel and then export it as a CSV file, all works fine: I can query and get the expected results.
I have done enough reading to understand (I think) that the issue is somehow related to the fact that Unicode text is either UTF-8 or UTF-16, but when I convert the table to either of those, the data comes in fine and yet I still cannot run a successful query.
Update:
I have an individual named White in the lastrep column of the data. The only way I can pull the associated records is by using wildcards between characters, as in:
SELECT * FROM `dailyactual` WHERE `lastrep` like "%W%h%i%t%e%"
Any help would be appreciated.
Jim
Use the utf8mb4 character set. Instructions: https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-upgrading.html
In the utf8 or utf8mb4 character set, 'White' would be stored as hex 57 68 69 74 65. In utf16, there would be (effectively) a zero byte before each character; hex: 0057 0068 0069 0074 0065.
Can you get a hex dump of part of the file?
If you can specify the output of excel, do so. Else specify the input for mysql to be utf16 or whatever the encoding says. Since there are many ways of bringing a csv file into mysql, I can't be more specific.
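For example, a quick diagnostic to see what bytes are actually stored (the table and column names are taken from the query above; this is only a sketch):

SELECT lastrep, HEX(lastrep)
FROM `dailyactual`
WHERE `lastrep` LIKE '%W%h%i%t%e%';
-- Plain utf8 'White' shows as 5768697465; a UTF-16 payload shows as
-- 00570068006900740065, i.e. a zero byte before each ASCII letter.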

Special characters (Cyrillic, Chinese) to MySQL database

I have a CSV file containing some rows that I want to insert into a MySQL table using the LOAD DATA INFILE MySQL command. When I use the command and the insert finishes, the special characters that were inserted are all messed up. The file stores the characters correctly (I think so, because when I open the file with one editor, EditPlus, the special characters are all mangled, but when I open it with another editor, EmEditor, the special characters appear correctly). The columns that will hold text with special characters have the collation utf8_general_ci, and they are either varchar or text columns. The table is an InnoDB table with its collation set to utf8_general_ci. I run the LOAD DATA INFILE command from the MariaDB command line, with the following parameters:
LOAD DATA INFILE '/path/to/csv/file' INTO TABLE tablename FIELDS TERMINATED BY '|' ENCLOSED BY '"' LINES TERMINATED BY '\r\n';
EDIT: I also tried issuing the SET NAMES "utf8"; command before the LOAD DATA INFILE one, with no success.
MySQL needs to know what encoding (character set) the file is saved in so that it can read and interpret it correctly.
The server uses the character set indicated by the character_set_database system variable to interpret the information in the file. SET NAMES and the setting of character_set_client do not affect interpretation of input. If the contents of the input file use a character set that differs from the default, it is usually preferable to specify the character set of the file by using the CHARACTER SET clause. A character set of binary specifies “no conversion.”
Figure out what encoding your file is actually saved in, or explicitly save it in a specific encoding from your text editor (the editor that does interpret the characters correctly already), then add CHARACTER SET ... into the LOAD DATA statement. See the documentation for details: http://dev.mysql.com/doc/refman/5.7/en/load-data.html
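For example, a hedged version of the statement from the question with the file's character set made explicit (assuming the file really is saved as UTF-8; substitute whatever encoding the file actually uses):

LOAD DATA INFILE '/path/to/csv/file'
INTO TABLE tablename
CHARACTER SET utf8mb4
FIELDS TERMINATED BY '|' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';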
Probably your file is not UTF-8. In your editor, when saving, check that the character encoding of the file is UTF-8. The fact that the editor renders the characters correctly does not mean the file is saved as UTF-8. The character encoding is either an option when saving the file or a file property somewhere in the menus (it depends on the editor).

MySQL Exporting Arabic/Persian Characters

I'm new to MySQL and I'm working on it through phpMyAdmin.
My problem is that I have imported some tables from an (.sql) file into a database with the utf8_general_ci format, and they contain some Arabic and Persian characters. However, when I export these data into an Excel file, they appear as the following:
The original value: أحمد الكمالي
The exported value: Ø£Ø­Ù…Ø¯ Ø§Ù„ÙƒÙ…Ø§Ù„ÙŠ
I have searched and looked into this issue and tried to solve it by making the output and the server connection use the same format, utf8_general_ci. But for some reason which I don't know, phpMyAdmin doesn't allow me to change to that format; it forces me to choose utf8mb4_general_ci.
Anyway, when I export the data, I make sure that the format is UTF-8, but it still appears like that.
How can I solve or fix it?
Note: Here are some screenshots if you want to check, organized by numbers.
http://www.megafileupload.com/rbt5/Screenshots.rar
I found an easier way to rebuild the Excel file with the correct characters:
Export your data from MySQL normally in CSV format (see the sketch after this list).
Open a new Excel workbook and go to the Data tab.
Select "From Text". If you can't find it, it is under "Get External Data".
Select your file.
Change the file origin to Unicode (UTF-8) and select Next ("Delimited" is checked by default).
Select the comma delimiter and press Finish.
You will see your language's characters correctly.
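For step 1, a hedged sketch of a CSV export that forces UTF-8 bytes into the file (the table, columns, and output path are placeholders):

-- Write the result set as a UTF-8 CSV file on the server host:
SELECT col1, col2
INTO OUTFILE '/tmp/export.csv'
CHARACTER SET utf8mb4
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM mytable;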
Mojibake. Probably...
The bytes you have in the client are correctly encoded in utf8mb4 (good).
You connected with SET NAMES latin1 (or set_charset('latin1') or ...), probably by default. (It should have been utf8mb4.)
The column in the tables may or may not have been CHARACTER SET utf8mb4, but it should have been that.
(utf8 and utf8mb4 work equally well for Arabic/Persian.)
Please provide more details if this explanation does not suffice.
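A minimal sketch of that fix (hedged: the exact statement depends on how the client connects, and the table and column names here are placeholders):

-- Issue this right after connecting, instead of the latin1 default:
SET NAMES utf8mb4;
-- And make sure the column itself stores utf8mb4:
ALTER TABLE t MODIFY col1 VARCHAR(255) CHARACTER SET utf8mb4;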

Using iconv to convert mysqldump-ed databases

Trying to quickly convert a latin1 mysql DB to utf8, I tried the following:
Dump the DB
run iconv -f latin1 -t utf8 on the resulting file
import into a fresh DB with UTF8 default encoding
This mostly works except... some letters get converted wrong (an example: an uppercase accented 'U' becomes a garbled sequence starting with a question mark). Some conversion is taking place (od on a query result shows a two-byte sequence where the latin1 byte was), and the latin1 version is alright. While I have so far been unsystematic in isolating the problem (late night; under deadline; etc.), the weirdness of the issue kills me: why would it fail on some letters and not all? Client connection? Column charset? Why am I not getting any diagnostics? I'm stymied.
Sure, I can work on isolating the issue and its details, but thought that maybe somebody ran into this already and can recognize it by this (admittedly rather poor) description.
Cheers
The data may have been stored as latin1, but it's possible that whatever client you used to dump the data has already exported it as UTF-8.
Open the dump file in a decent text editor (Notepad++, TextWrangler, Atom) and check which encoding allows all characters to be displayed properly.
Then, when it comes to importing the data back in, ensure your client is set to use UTF-8 on the import.
Don't use iconv, it only muddies the works.
Assuming that a table is declared to be latin1 and correctly contains latin1 bytes, but you would like to change it to utf8, do this to the table:
ALTER TABLE tbl CONVERT TO CHARACTER SET utf8;
It is also possible to do it with a dump and reload; it involves some changes to the arguments. Sorry I don't have the details.

change charset from UTF-8 to ISO-8859-1 in MySQL

I started working on a legacy MySQL database whose collation is the latin1 default, but whose tables are the utf-8 default. Even though the tables are declared utf-8 (the universal standard encoding), the database doesn't render Swedish characters. It seems the application tied to this database uses ISO-8859-1 encoding, so I would like to convert this database and the data in it to ISO-8859-1. I tried this command:
iconv -f UTF-8 -t ISO-8859-1 webtest_backu_01.sql > converted-file.sql
It gives the error: illegal input sequence at position
Any help is appreciated. Thanks.
Please take a look at this link: http://dev.mysql.com/doc/refman/5.0/en/charset-conversion.html
You can use the ALTER TABLE command to make this conversion per-table, where possible. I have used this successfully before.
Example from the link:
ALTER TABLE t MODIFY col1 CHAR(50) CHARACTER SET utf8;
Also an important detail: conversion may be lossy if the column contains characters that are not in both character sets. ISO-8859-1 to UTF-8 is safe (every latin1 character exists in UTF-8), but the reverse direction, as in your case, can lose characters.
Give this a try for one of the tables and see if it works.
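Before converting, a hedged sketch of a check for values that would not survive the trip to latin1 (table and column names are placeholders). Characters missing from latin1 come back as '?', so the round-tripped value differs from the original:

-- List rows whose utf8 content cannot round-trip through latin1:
SELECT col1
FROM t
WHERE CONVERT(CONVERT(col1 USING latin1) USING utf8) <> col1;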