I used MySQL for Excel in order to import data from Excel into a MySQL DB.
When I ran a SELECT in MySQL Workbench, I realized that the Greek characters appeared as question marks ("?").
I then saved the Excel file as .csv and opened it with Notepad++ in order to encode it as UTF-8.
Then I used the following command, and the problem with the Greek characters got even worse.
LOAD DATA LOCAL INFILE 'c:/working.csv'
INTO TABLE tablexxx
CHARACTER SET UTF8
FIELDS TERMINATED BY ';' ENCLOSED BY '"' LINES TERMINATED BY '\r\n';
Can you please help me out? I'm at a dead end here!
Try the Greek character set:
CHARACTER SET CP869;
But usually it should work with UTF-8; maybe you need to change the settings in Notepad++, in MySQL Workbench, or in the other programs you want to use.
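Before switching character sets, it can help to see which ones are actually in effect on the server and the connection; a quick check using standard MySQL system variables:
-- Character sets for client, connection, database, results, server
SHOW VARIABLES LIKE 'character_set%';
-- The matching collations
SHOW VARIABLES LIKE 'collation%';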
I am facing a big problem with strange Chinese characters when importing a .csv file into MySQL Query Browser: the imported Chinese text comes out garbled.
The database has already been changed to UTF-8 format, but it still shows the strange Chinese characters.
My SQL query is as below:
LOAD DATA LOCAL INFILE
'c:/2019/countries20.csv'
INTO TABLE countries
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(id,name,country_code,language);
I have already changed the CSV file's encoding to UTF-8.
Can anyone help me find which part I got wrong? Thanks.
Include in the LOAD DATA: CHARACTER SET utf8mb4.
Declare the target column to be CHARACTER SET utf8mb4.
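A minimal sketch combining both points with the statement from the question; the VARCHAR(255) length is an assumption, so match it to your actual schema:
-- 1. Make sure the target column can hold 4-byte UTF-8
ALTER TABLE countries MODIFY name VARCHAR(255) CHARACTER SET utf8mb4;
-- 2. Tell LOAD DATA what encoding the file uses
LOAD DATA LOCAL INFILE 'c:/2019/countries20.csv'
INTO TABLE countries
CHARACTER SET utf8mb4
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(id,name,country_code,language);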
I am trying to dump some data from MySQL into an Excel sheet.
But the exported sheet contains some special accented characters (like ï¿½ï¿ Â) which are not present in the database.
My database is in "latin1" encoding and the CentOS server has a "utf-8" code page.
I tried to convert the field while selecting, as below, but a few of the fields still contain those accented characters.
select convert(cat.name USING latin1) as category from category_table cat
into outfile 'course_details.csv'
fields terminated by '|' lines terminated by '\n';
Converting it to utf8 doesn't help either.
I have also tried double conversions.
Thanks for the help in advance!
Looks like conversion to latin1 solved the problem. The remaining accented characters had already been seeded into the DB via some invalid means.
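For reference, SELECT ... INTO OUTFILE also accepts its own CHARACTER SET clause, which encodes the whole file and avoids the per-column CONVERT(); a sketch using the names from the question:
SELECT cat.name AS category
FROM category_table cat
INTO OUTFILE 'course_details.csv'
CHARACTER SET latin1  -- write the whole file as latin1
FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';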
I want to import a .csv file into a MySQL database by:
load data local infile 'C:\\Users\\t_lichtenberger\\Desktop\\tblEnvironmentLog.csv'
into table tblenvironmentlog
character set utf8
fields terminated by ';'
lines terminated by '\n'
ignore 1 lines;
The .csv file looks like:
But I am getting the following error and I cannot explain why:
Error Code: 1300. Invalid utf8 character string: 'M'
Any suggestions?
Nothing else I tried worked for me, including ensuring that my .csv was saved with UTF-8 encoding.
This worked:
When using LOAD DATA LOCAL INFILE, set CHARACTER SET latin1 instead of CHARACTER SET utf8mb4, as shown in https://dzone.com/articles/mysql-57-utf8mb4-and-the-load-data-infile
Here is a full example that worked for me:
TRUNCATE homestead_daily.answers;
SET FOREIGN_KEY_CHECKS = 0;
TRUNCATE homestead_daily.questions;
SET FOREIGN_KEY_CHECKS = 1;
LOAD DATA LOCAL INFILE 'C:/Users/me/Desktop/questions.csv' INTO TABLE homestead_daily.questions
CHARACTER SET latin1
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(type, question, created_at, updated_at);
SELECT * FROM homestead_daily.questions;
See what the settings for the export were; look for "UTF-8".
The symptom suggests that the truncated text in the error message (only the 'M' of the value is shown) is caused by the data not being encoded as UTF-8. (Inside MySQL, utf8 and utf8mb4 work equally well for all European character sets, so the ü should not be a problem.)
If it was exported as "cp1252" (or any of a number of encodings), the byte for ü would not be valid for utf8mb4, leading to truncation.
If this analysis is correct, there are two solutions:
Plan A: Export as UTF-8.
Plan B: Import as latin1. (You do not need to change the column/table definition, just the LOAD DATA.)
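A sketch of Plan B, reusing the statement from the question with only the character set changed:
LOAD DATA LOCAL INFILE 'C:\\Users\\t_lichtenberger\\Desktop\\tblEnvironmentLog.csv'
INTO TABLE tblenvironmentlog
CHARACTER SET latin1  -- MySQL's latin1 is cp1252-compatible, so the ü byte is accepted
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;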
Just open the CSV file in your text editor (like Notepad++), change the file encoding to UTF-8, and then import your CSV file.
It's complaining about 'M', but I think that's the 'M' in München, and the actual problematic character is the next one, the umlaut 'ü'.
One simple way to test would be to load a file with just the first two rows and see if that works. Then add the third row, try again, and see if it fails.
If you can't or don't want to replace these special characters in your data, then you'll need to start investigating the character sets configured in your CSV file, database, table, columns, tools, etc.
Are you using MySQL 5.7 or above? Then something simple to try would be changing to CHARACTER SET utf8mb4 in your LOAD DATA command.
See How MySQL 5.7 Handles 'utf8mb4' and the Load Data Infile for a similar issue.
Also see:
import geonames allCountries.txt into MySQL 5.7 using LOAD INFILE - ERROR 1300 (HY000)
Trouble with utf8 characters; what I see is not what I stored
“Incorrect string value” when trying to insert UTF-8 into MySQL via JDBC?
I have a CSV file containing some rows that I want to insert into a MySQL table using the LOAD DATA INFILE MySQL command. When I run the command and the insert finishes, the special characters that were inserted are all messed up.
The file stores the characters correctly, I think: when I open it with an editor like EditPlus, the special characters are all mangled, but when opening it with another editor, like EmEditor, the special characters appear correctly. The columns that will hold text with special characters are of collation utf8_general_ci, and they are either varchar or text columns. The table is an InnoDB table, with collation set to utf8_general_ci. I run the LOAD DATA INFILE command from the MariaDB command line, with the following parameters:
LOAD DATA INFILE '/path/to/csv/file' INTO TABLE tablename FIELDS TERMINATED BY '|' ENCLOSED BY '"' LINES TERMINATED BY '\r\n';
EDIT: I also tried issuing the SET NAMES "utf8"; command before the LOAD DATA INFILE one, with no success.
MySQL needs to know which encoding (character set) the file is saved in so that it can read and interpret it correctly. From the documentation:
The server uses the character set indicated by the character_set_database system variable to interpret the information in the file. SET NAMES and the setting of character_set_client do not affect interpretation of input. If the contents of the input file use a character set that differs from the default, it is usually preferable to specify the character set of the file by using the CHARACTER SET clause. A character set of binary specifies "no conversion."
Figure out what encoding your file is actually saved in, or explicitly save it in a specific encoding from your text editor (the editor that does interpret the characters correctly already), then add CHARACTER SET ... into the LOAD DATA statement. See the documentation for details: http://dev.mysql.com/doc/refman/5.7/en/load-data.html
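For example, if the file turns out to be saved as UTF-8 (an assumption; use whatever encoding your editor reports), the statement from the question only needs the extra clause:
LOAD DATA INFILE '/path/to/csv/file' INTO TABLE tablename
CHARACTER SET utf8
FIELDS TERMINATED BY '|' ENCLOSED BY '"' LINES TERMINATED BY '\r\n';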
Probably your file is not UTF-8. In your editor, when saving, check that the character encoding of the file is UTF-8. The fact that the editor renders the characters correctly does not mean the file is saved as UTF-8. The character encoding is either an option when saving the file or a file property somewhere in the menus (it depends on the editor).
I have a lowercase-thorn-separated (þ) file that I need to load into a MySQL database (5.1.54) using the LOAD DATA INFILE ... query.
The file I'm trying to load is located on the same server as the MySQL database, and I'm issuing the query from a Windows machine using SQLYog, which uses the MySQL C client library.
I'm having some major issues. I've tried the FIELDS TERMINATED BY 0x00FE syntax using all the variations of the thorn character I can think of, and I've tried changing the character set of the connection (SET NAMES ...), but I consistently get the warning...
Warning Code : 1638
Non-ASCII separator arguments are not fully supported
...and all the data loads into the first column.
Is there any way around this at all? Or am I resigned to pre-processing the file with sed, to replace all the thorns with a more sensible character before loading?
I succeeded in loading this data with the Data Import tool (CSV format) in dbForge Studio for MySQL; I just set 'Þ' as a custom delimiter. Import from the CSV format is fully supported in the free Express Edition.
I decided to fix the file by replacing the non-ASCII character with a character that MySQL's LOAD DATA INFILE ... would understand.
Use od to get the octal byte value of the offending character - od -b file.log - in this case it's 376.
Use grep to make sure the character you want to replace it with doesn't already exist in the file - grep -n '|' file.log.
Use sed and printf to replace the non-ASCII character - sed -i 's/'$(printf '\376')'/|/g' file.log.
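With the thorns replaced by pipes, the load itself becomes straightforward; a minimal sketch (the table name and line terminator are placeholders):
LOAD DATA INFILE '/path/to/file.log'
INTO TABLE tablename
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n';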