Chrome Extension internationalization error?

When I add "default_locale": "en" to my manifest file, then package it and try to upload it, I get an error message saying: "An error occurred: Message JSON file must be in UTF-8 encoding." ... But this is how they tell you to set it up on their i18n page. What is the problem?

I'm not sure the problem is a missing Byte Order Mark -- which is neither required nor recommended in UTF-8 (since endianness isn't relevant there) -- I think it's just the file encoding. I believe Notepad adds the BOM by default when you save as UTF-8, but I always save files without it, as some programs that receive UTF-8 data don't expect the BOM.
But yes, bottom line: open the file and Save As in UTF-8. If you are using Notepad, I'd recommend switching to a different editor -- for JS, something lightweight like Notepad++ works well -- so that you can change the default encoding, among many other benefits.
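If you'd rather verify the file programmatically than trust an editor's Save As dialog, a short script can both check and fix it. Here's a minimal sketch in Python, assuming the standard _locales/en/messages.json layout (adjust the path to your extension); it confirms the bytes decode as UTF-8 and rewrites the file without a BOM:

# Sketch: verify a Chrome extension messages.json is valid UTF-8
# and rewrite it as plain UTF-8 without a BOM.
path = "_locales/en/messages.json"  # assumed standard Chrome i18n layout

with open(path, "rb") as f:
    raw = f.read()

try:
    # "utf-8-sig" tolerates an optional BOM and strips it while decoding
    text = raw.decode("utf-8-sig")
except UnicodeDecodeError as err:
    # Any other encoding (e.g. the Windows ANSI code page) lands here;
    # that is what triggers the "must be in UTF-8" upload error.
    raise SystemExit(f"Not valid UTF-8: {err}")

with open(path, "w", encoding="utf-8", newline="\n") as f:
    f.write(text)  # re-saved as UTF-8, BOM removed if one was present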

It seems that you are simply missing the Byte Order Mark at the front of your manifest file. If you are on Windows, simply open it in Notepad, click File -> Save As and choose UTF-8 from the Encoding combo box.
Obviously you can use other tools to "convert" to UTF-8 as well...

Just solved the issue... After googling a while, it turns out MS Notepad is inadequate even with UTF-8 encoding... Download "Notepad2" and set the encoding to UTF-8... somehow it works now!

Related

Special characters in CSV (utf-8) file appear as ? on new laptop but not on old one (both with Excel 2016)

I regularly export CSV files from Shopware and edit them in Excel (Windows 10 + Office 2016). The special symbols appear garbled (e.g. –), but I can correct that with a "find-and-replace" macro. Annoying but workable.
However, I just got a new laptop, also with Windows 10 + Office 2016, but there the special symbols appear as white question marks on black diamonds (��). When I open the same files on the old PC, I still get the good old garbled (but fixable) special symbols.
I have checked every setting I can think of but cannot find any difference between the two PCs. Does anyone have an idea what could be causing this and how to fix it?
Thanks!
The "garbled characters" in the old laptop are UTF-8-encoded file data decoded as (probably) Windows-1252 encoding. It seems like the new laptop is using a different default encoding.
If you export your CSV files as UTF-8 w/ BOM and Excel will display them properly without "find-and-replace". If Shopware doesn't have the option to export as UTF-8 w/ BOM, you can use an editor like NotePad++ to load the UTF-8-encoded CSV and re-save it as UTF-8 w/ BOM.
The UTF-16 encoding should also work if that is an option for export.
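If re-saving files by hand gets tedious, the conversion is easy to script. A minimal sketch in Python, assuming the export (here called shopware-export.csv, a made-up name) is UTF-8 without a BOM; the utf-8-sig codec writes the BOM that Excel keys on:

# Sketch: add a BOM to a UTF-8 CSV so Excel auto-detects the encoding.
src = "shopware-export.csv"       # hypothetical file name, UTF-8 without BOM
dst = "shopware-export-bom.csv"

with open(src, encoding="utf-8") as fin:
    text = fin.read()

# "utf-8-sig" prepends the byte order mark (EF BB BF) when writing
with open(dst, "w", encoding="utf-8-sig", newline="") as fout:
    fout.write(text)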
The culprit was an optional beta setting under Control Panel / Clock and Region / Administrative / Change system locale => "Beta: Use Unicode UTF-8 for worldwide language support". Once I unchecked the box, the �� disappeared and everything was back to normal.
The next part of the solution is to open the CSV files with a text editor, e.g. Notepad, and save them with UTF-8 w/ BOM encoding. After doing that, the special characters appear correctly in Excel, eliminating the need for "find and replace".
Big thanks to Mark Tolonen + Skomisa for pointing me in the right direction.

Is there any annotation / comments I can put in file for PhpStorm to force file encoding?

We are using the Windows-1252 character set in one of our files. I have set the proper file encoding in PhpStorm > Settings for this particular PHP file; the rest of the project is UTF-8. This works for me.
The problem comes with the other developers in my organization. They have UTF-8 encoding set in their settings and don't have this file-specific custom setting. When they save anything in this file, the special characters get converted and corrupted.
Is there any doc block or annotation, like
// #FILE_ENCODING Windows-1252
that I can put in my PHP file so PhpStorm auto-detects it?

Encoding Issue in Talend Open Studio

I am working on a Talend project where we are transforming data from thousands of XML files to CSV, and we are creating the CSV files with UTF-8 encoding from Talend itself.
But the issue is that some of the files are created as UTF-8 and some of them as ASCII. I am not sure why this is happening; the files should always be created as UTF-8.
As mentioned in the comments, UTF-8 is a superset of ASCII: every ASCII character is encoded as the same single byte in UTF-8 as in ASCII.
Any program examining a file containing only ASCII characters will therefore simply report it as ASCII encoded. It is only when you include characters outside of the ASCII set that the file can be recognised as UTF-8 by whatever heuristic the reading program uses.
The only exception is file types that explicitly state their encoding. This includes things like (X)HTML and XML, which typically start with an encoding declaration.
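To see why, note that an ASCII-only string produces byte-for-byte identical output under both encodings, and that a single character outside ASCII is what gives a detector something to work with. A quick illustration in Python (the contents are made up):

# ASCII-only text encodes to identical bytes in ASCII and in UTF-8,
# which is why detection tools label such files "ASCII".
ascii_text = "id;name\n1;plain\n"
assert ascii_text.encode("ascii") == ascii_text.encode("utf-8")

# One non-ASCII character is enough to make the bytes distinctively UTF-8
utf8_text = "id;name\n1;café\n"
raw = utf8_text.encode("utf-8")
print(any(b >= 0x80 for b in raw))  # True -> heuristics can flag it as UTF-8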
You can go to the Advanced settings tab of the tFileOutputDelimited (or other kind of tFileOutputXXX) component you are using and select UTF-8 encoding.
I am quite sure the Unix file utility makes assumptions based on the content of the file being in some byte range and/or having a specific start (magic numbers). In your case, if you generate a perfectly valid UTF-8 file but only use the ASCII subset, the file utility will probably flag it as ASCII. In that event you are fine, as you still have a valid UTF-8 file. :)
To force Talend to produce the file as you wish, you can add an additional column to your file (for example in a tMap) and put a non-ASCII UTF-8 character in this column. The generated file will then be detected as UTF-8, as the other repliers mentioned.

Using UTF-8 encoding, CSV file with special properties/foreign characters not preserved when imported into MySQL (phpMyAdmin)

My table needs to support pretty much all characters (Japanese, Danish, Russian, etc.)
However, when I save the 2-column table as CSV from Excel with UTF-8 encoding and then import it with phpMyAdmin with UTF-8 encoding selected, a lot of the original characters go missing (the ones with special properties such as umlauts, accents, etc.). Also, anything following a problematic character is removed entirely. I haven't the slightest idea what is causing this problem.
EDIT: For those who come upon the same issue, I'd suggest opening your CSV file in Notepad++ and going to "Encoding > Convert to UTF-8" (not "Encode in UTF-8") first. Then import it. It will surely work.
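The distinction matters because "Convert to UTF-8" transcodes the bytes, while "Encode in UTF-8" merely relabels them. If you want to script the same conversion, here is a minimal sketch in Python, assuming the Excel-saved CSV is in Windows-1252 (the usual Western "ANSI" code page -- swap in your actual source encoding):

# Sketch: transcode a Windows-1252 ("ANSI") CSV to UTF-8, the
# programmatic equivalent of Notepad++'s "Convert to UTF-8".
with open("table.csv", encoding="cp1252") as fin:      # assumed source encoding
    text = fin.read()

with open("table-utf8.csv", "w", encoding="utf-8", newline="") as fout:
    fout.write(text)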
I found an answer here:
https://help.salesforce.com/apex/HTViewSolution?id=000003837
Basically: save as a Unicode text file from Excel,
then replace all tabs with commas in a code-friendly text editor,
re-save as UTF-8,
and change the file extension from .txt to .csv.
Exporting directly from Excel to .csv causes problems with Japanese; this is why I went searching for help...
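Those manual steps can also be automated. A minimal sketch in Python, assuming Excel's "Unicode Text" export (UTF-16 with a BOM, tab-delimited); using the csv module also handles quoted fields correctly, which a blind tab-to-comma replace would not:

# Sketch: turn Excel's "Unicode Text" export (UTF-16, tab-delimited)
# into a UTF-8 CSV, automating the save/replace/rename steps above.
import csv

# the "utf-16" codec honours the BOM Excel writes at the start of the file
with open("table.txt", encoding="utf-16", newline="") as fin, \
     open("table.csv", "w", encoding="utf-8", newline="") as fout:
    reader = csv.reader(fin, delimiter="\t")
    writer = csv.writer(fout)   # default dialect writes comma-separated rows
    writer.writerows(reader)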

Force Eclipse to ignore character encoding attribute

I'm working with a web framework that uses a dynamic character encoding in its html templates, like this:
<meta charset="${_response_encoding}">
The problem is when I try to edit this file in Eclipse, Eclipse thinks this is a literal encoding type, and thus refuses to open the file, saying:
"Unsupported Character Encoding" Character encoding
"${_response_encoding}" is not supported by this platform.
Is there any way to tell Eclipse to stop trying to be "smart" (because it plainly isn't) and just show me the text? I've tried using "Open With... > Text Editor" but I still get the same result.
Change the content type for HTML files:
Go to Window -> Preferences -> General -> Content Types and change the encoding (set it to UTF-8) for all the file extensions you need.
Choose "Other" and then select UTF-8. Your template will then render as normal.
I had a similar problem, except I was receiving the error message when trying to save the document after changing the character encoding. I resolved the problem by doing the following in Eclipse before putting in the non-standard charset value:
Rename the file to have a non-HTML file extension.
Open the file using an editor other than the HTML one.
Change the charset value to the non-standard value you want.
Rename the file to have the original extension.
Open the file.
Follow the buttons and prompts to set the character encoding to the real encoding of the file.
After this, the file should still be usable while still having the non-standard charset value.
If you're having Eclipse treat it like an HTML file, it is being smart. That's not a valid encoding name. Have you tried just templating the entire meta tag?
(As mentioned in a comment.) In Eclipse Indigo, when opening the file you see the "Unsupported character encoding" message along with a Set Encoding button. Use that button to set the UTF-8 encoding. Eclipse does not change the variable in the HTML file.
True, this is done on a file-by-file basis; however, in my project I import the same meta header file for every screen. Actually, I have only two files to set up (one for users that are logged in and one for those that are not).