I have a PHP script that extracts attachments (Unicode text CSV files) from Gmail and uploads them to a MySQL database. All of that works fine, but once the data is in the database I cannot run the simplest of queries against it.
If I first bring the file into Excel and then export it as a CSV file, everything works: I can query and get the expected results.
I have done enough reading to understand (I think) that the issue is somehow related to the fact that Unicode text is either UTF-8 or UTF-16, but when I convert the table to either of those, the data comes in fine yet I still cannot run a successful query.
Update:
I have an individual named White in the lastrep column of the data. The only way I can pull the associated records is by putting wildcards between the characters, as in:
SELECT * FROM `dailyactual` WHERE `lastrep` like "%W%h%i%t%e%"
Any help would be appreciated.
Jim
Use the utf8mb4 character set. Instructions: https://dev.mysql.com/doc/refman/5.7/en/charset-unicode-upgrading.html
In the utf8 or utf8mb4 character set, 'White' is simply 'White' (hex 57 68 69 74 65). In utf16, there would be (effectively) a zero byte between each character; hex: 0057 0068 0069 0074 0065. Those interleaved zero bytes are why your LIKE only matches with a wildcard between every letter.
Can you get a hex dump of part of the file?
If you can specify the output encoding from Excel, do so. Otherwise, specify the input for MySQL to be utf16 or whatever the encoding turns out to be. Since there are many ways of bringing a CSV file into MySQL, I can't be more specific.
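Note that MySQL's LOAD DATA cannot read files in the utf16 family directly, so one workable route is to convert the attachment to UTF-8 first (iconv can do that) and then name the encoding explicitly on import. A minimal sketch; the table name comes from the question, while the file path, delimiters, and header row are assumptions:

-- Sketch: import the attachment after converting it to UTF-8.
-- `dailyactual` is from the question; the path and CSV layout are assumptions.
LOAD DATA LOCAL INFILE '/path/to/report.utf8.csv'
INTO TABLE dailyactual
CHARACTER SET utf8mb4
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;

If a plain WHERE lastrep = 'White' still fails afterwards, SELECT HEX(lastrep) will show whether stray bytes made it into the column.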
I'm new to MySQL and I'm working on it through phpMyAdmin.
My problem is that I have imported some tables (from .sql files) into a database with the utf8_general_ci collation, and they contain some Arabic or Persian characters. However, when I export this data into an Excel file, it appears as the following:
The original value: أحمد الكمالي
The exported value: Ø£ØÙ…Ø¯ Ø§Ù„ÙƒÙ…Ø§Ù„ÙŠ
I have searched and looked into this issue and tried to solve it by setting the output and the server connection to the same collation, utf8_general_ci. But, for some reason I don't know, phpMyAdmin doesn't allow me to choose the same one; it forces me to choose utf8mb4_general_ci.
Anyway, when I export the data I make sure that the format is UTF-8, but it still appears like that.
How can I solve or fix it?
Note: here are some screenshots, organized by number, if you want to check:
http://www.megafileupload.com/rbt5/Screenshots.rar
I found an easier way to rebuild the Excel file with the correct characters:
Export your data from MySQL normally, in CSV format.
Open a new Excel workbook and go to the Data tab.
Select "From Text". If you don't see it, it is under "Get External Data".
Select your file.
Change the file origin to Unicode (UTF-8) and select Next ("Delimited" is checked by default).
Select the comma delimiter and press Finish.
You will see your language's characters correctly.
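If you are producing that CSV straight from MySQL yourself, you can also force the file's encoding at export time. A sketch; the table and column names and the path are placeholders:

-- Sketch: write the CSV as UTF-8 on the way out.
-- Table/column names and the path are placeholders.
SELECT col1, col2
INTO OUTFILE '/tmp/export.csv'
CHARACTER SET utf8mb4
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM tbl;

Excel may still not auto-detect UTF-8 without a byte-order mark, which is exactly why the "From Text" wizard above asks for the file origin.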
Mojibake. Probably...
The bytes you have in the client are correctly encoded in utf8mb4 (good).
You connected with SET NAMES latin1 (or set_charset('latin1') or ...), probably by default. (It should have been utf8mb4.)
The column in the tables may or may not have been declared CHARACTER SET utf8mb4, but it should have been.
(utf8 and utf8mb4 work equally well for Arabic/Persian.)
Please provide more details if this explanation does not suffice.
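In SQL terms, the two fixes above look roughly like this (a minimal sketch; the table name is a placeholder, and CONVERT rewrites data, so take a backup first):

-- Fix the connection charset (or call mysqli_set_charset('utf8mb4') in PHP):
SET NAMES utf8mb4;
-- Fix the declaration, converting the stored text at the same time
-- (`tbl` is a placeholder):
ALTER TABLE tbl CONVERT TO CHARACTER SET utf8mb4;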
I have a DHTMLX grid on a page that saves data through a PHP connector file to a DB. The grid's data is delivered as XML that is rendered by the PHP connector file.
Japanese words in the grid show up in Japanese but get saved as: ãƒ¼ãƒ€ãƒ¼
However they do stay in Japanese in the grid! (somehow...)
If I save something in the DB on php myadmin, it shows up in the grid as: ???
I checked and everything seems right...
DB fields: UTF-8 √
HTML headers: UTF-8 √
connector.php: UTF-8 √ (checked through network tab, devtools)
Is there anywhere else I should check?
When looking at the PHP file that gives me the DB values, I get XML data that's already garbled:
<rows><row id='00000000001'><cell><![CDATA[]]></cell><cell><![CDATA[??]]></cell><cell><![CDATA[33]]></cell><cell><![CDATA[]]></cell><cell><![CDATA[]]></cell><cell><![CDATA[?????????]]></cell>...
So maybe the problem lies before the data is received from the server. Does anyone know where I should look for the problem?
Were you expecting ーダー for ãƒ¼ãƒ€ãƒ¼? (Mojibake.)
Other times, do you get question marks?
Those two symptoms come from different causes, but both usually involve not declaring the client bytes to be utf8. In PHP, that can be done with mysqli_set_charset('utf8').
Question marks usually also involve failing to declare the column to be utf8.
To further diagnose, please do
SELECT col, HEX(col) FROM tbl WHERE ...
so we can see whether the text was mangled as it was inserted.
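For orientation, here is roughly what the outcomes look like for ーダー. The hex values below are simply the UTF-8 encoding of those katakana, not output from your table, and tbl/col are placeholders:

SELECT col, HEX(col) FROM tbl WHERE id = 1;  -- placeholder predicate
-- Stored correctly as utf8:    HEX(col) = 'E383BCE38380E383BC'
-- Reduced to '???' on INSERT:  HEX(col) = '3F3F3F'
-- A longer run starting 'C3A3...' would indicate double-encoding (Mojibake).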
Trying to quickly convert a latin1 MySQL DB to utf8, I tried the following:
Dump the DB
Run iconv -f latin1 -t utf8 on the resulting file
Import into a fresh DB with UTF-8 default encoding
This mostly works, except... some letters get converted wrong (an example: an uppercase accented 'U' becomes a garbled sequence starting with a question mark). Some conversion is taking place (od on a query result shows a two-byte sequence where the latin1 byte was), and the latin1 version is alright. While I have so far been unsystematic in isolating the problem (late night; under deadline; etc.), the weirdness of the issue kills me: why would it fail on some letters and not all? Client connection? Column charset? Why am I not getting any diagnostics? I'm stymied.
Sure, I can work on isolating the issue and its details, but thought that maybe somebody ran into this already and can recognize it by this (admittedly rather poor) description.
Cheers
The data may have been stored as latin1, but it's possible that whatever client you used to dump the data has already exported it as UTF-8.
Open the dump file in a decent text editor (Notepad++, TextWrangler, Atom) and check which encoding allows all characters to be displayed properly.
Then when it comes to import the data back in, ensure your client is set to use UTF-8 on the import.
Don't use iconv; it only muddies the waters.
Assuming that a table is declared to be latin1 and correctly contains latin1 bytes, but you would like to change it to utf8, do this to the table:
ALTER TABLE tbl CONVERT TO CHARACTER SET utf8;
It is also possible to do it with a dump and reload; it involves some changes to the arguments and to the dump file, sketched below.
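One common shape of that route, as a sketch (the database and file names are placeholders; mysqldump converts the text to the client character set on the way out):

# Dump, letting the server convert the text to UTF-8 for the client:
mysqldump --default-character-set=utf8 mydb > dump.sql
# In dump.sql, change CHARSET=latin1 to CHARSET=utf8 in the CREATE TABLE
# statements, then reload with a matching client charset:
mysql --default-character-set=utf8 mydb < dump.sql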
I have a source file which contains Chinese characters. After loading that file into a table in a Postgres DB, all the characters are garbled and I'm not able to see the Chinese characters. The encoding of the Postgres DB is UTF-8. I'm using the psql utility on my local Mac OS X machine to check the output. The source file was generated from a MySQL DB using mysqldump and contains only INSERT statements.
INSERT INTO "trg_tbl" ("col1", "col2", "col3", "col4", "col5", "col6", "col7", "col7",
"col8", "col9", "col10", "col11", "col12", "col13", "col14",
"col15", "col16", "col17", "col18", "col19", "col20", "col21",
"col22", "col23", "col24", "col25", "col26", "col27", "col28",
"col29", "col30", "col31", "col32", "col33")
VALUES ( 1, 1, '与é<U+009D>žç½‘_首页&频é<U+0081>“页顶部广告ä½<U+008D>(946×90)',
'通æ <U+008F>广告(Leaderboard Banner)',
0,3,'',946,90,'','','','',0,'f',0,'',NULL,NULL,NULL,NULL,NULL,
'2011-08-19 07:29:56',0,0,0,'',NULL,0,NULL,'CPM',NULL,NULL,0);
What can I do to resolve this issue?
The text was mangled before producing that SQL statement. You probably wanted the text to start with 与 instead of the "Mojibake" version: ä¸Ž. I suggest you fix the dump either to produce utf8 characters or hex. Then the load may work, or there may be more places to specify utf8, such as SET NAMES or the equivalent.
Also, for Chinese, CHARACTER SET utf8mb4 is preferred in MySQL.
é<U+009D>ž is so mangled I don't want to figure out the second character.
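As a concrete illustration of the hex route on the PostgreSQL side (trg_tbl and col3 come from the statement above; E4B88E is the UTF-8 encoding of 与):

-- PostgreSQL's analogue of MySQL's SET NAMES:
SET client_encoding = 'UTF8';
-- Loading a value as hex sidesteps any client-side mangling;
-- E4B88E is the UTF-8 encoding of 与:
INSERT INTO trg_tbl (col3) VALUES (convert_from(decode('E4B88E', 'hex'), 'UTF8'));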
I have a MySQL database in UTF-8 format, but the characters inside the database are ISO-8859-1 (ISO-8859-1 strings are stored in UTF-8). I've tried recode, but it only converted e.g. ü to Ã¼. Does anybody out there have a solution?
If you tried to store ISO-8859-1 characters in a database which is set to UTF-8, you just managed to corrupt your "special characters", as MySQL would retrieve the bytes from the database and try to assemble them as UTF-8 rather than ISO-8859-1. The only way to read the data correctly is to use a script which does something like:
ResultSet rs = ...;                       // result of your SELECT
byte[] b = rs.getBytes( COLUMN_NAME );    // raw bytes, no charset conversion applied
String s = new String( b, "ISO-8859-1" ); // reassemble them as ISO-8859-1
This ensures you get the bytes (which, from what you said, came from an ISO-8859-1 string) so that you can assemble them back into an ISO-8859-1 string.
There is the other side of the problem as well: what do you use to "view" the strings in the database? Could it be that your console doesn't have the right charset to display those characters, rather than the characters being stored wrongly?
NOTE: Updated the above after the last comment
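On the MySQL side, the standard repair for bytes that do not match the column's declared character set is a two-step ALTER through a binary type, so the bytes are relabeled rather than converted. A sketch; the table/column names and lengths are placeholders, and a backup first is strongly advised:

-- Step 1: drop the wrong charset label, keeping the bytes untouched.
ALTER TABLE tbl MODIFY col VARBINARY(255);
-- Step 2: relabel the bytes as what they really are.
ALTER TABLE tbl MODIFY col VARCHAR(255) CHARACTER SET latin1;
-- Step 3 (optional): now a genuine conversion to UTF-8 is safe.
ALTER TABLE tbl MODIFY col VARCHAR(255) CHARACTER SET utf8mb4;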
I just went through this. The biggest part of my solution was exporting the database to .csv and using Find / Replace on the characters in question. The character at issue may look like a space, but copy it directly from the cell as your Find parameter.
Once this is done (and missing this is what took me all morning):
Save the file as CSV (MS-DOS)