\x00 is coming between each char in Pyspark dataframe - csv

I am reading a CSV UTF-8 file into a PySpark dataframe, and I am getting \x00 around every character of every column in the dataframe.
For example, in the csv:
Username
Xyz
In the dataframe it shows up as square boxes. When I collect(), it looks like the below, so each square box = \x00:
\x00U\x00S\x00E....
\x00X\x00y\x00Z\x00
Can you please help?

Issue
Your issue may be that you are not reading the file using the correct encoding.
Recommendation
You may achieve this using the encoding option when reading your csv file. @JosefZ suggested trying utf_16_BE and utf_16_LE, which is a good start for determining which encoding your file actually uses. However, while those values may work in Python, pyspark expects one of the following encoding names:
US-ASCII
ISO-8859-1
UTF-8
UTF-16BE
UTF-16LE
UTF-16
You may learn more about string encodings here.
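If you are not sure which of these applies, one quick check (a sketch, not from the original answer; it assumes the test file name used below) is to look at the first raw bytes of the file in plain Python: a leading FF FE or FE FF byte-order mark indicates UTF-16LE or UTF-16BE respectively, and \x00 bytes interleaved with ASCII characters also point at UTF-16.
# Sketch: peek at the raw bytes to guess the encoding
with open("testfile.d", "rb") as fp:
    head = fp.read(16)
print(head)
if head.startswith(b"\xff\xfe"):
    print("UTF-16LE byte-order mark found")
elif head.startswith(b"\xfe\xff"):
    print("UTF-16BE byte-order mark found")
elif b"\x00" in head:
    print("NUL bytes present - likely UTF-16 without a BOM")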
For your specific use case, you could try .option("encoding", "UTF-16"), or simply swap in and test each of the encodings listed above, e.g.:
df = (
    spark.read
    .format("csv")
    .option("header", False)       # optional
    .option("encoding", "UTF-16")
    .schema("username string")     # optional
    .load("testfile.d")            # replace with your actual file name
)
Full Reproducible Example
# Creating test csv file
with open("testfile.d", "wb") as fp:
    fp.write(b'\x00X\x00y\x00Z')  # writing encoded content

# reading and displaying content without specifying encoding
spark.read.format("csv").option("header", False).schema("username string").load("testfile.d").show()
+--------+
|username|
+--------+
| �X�y�Z|
+--------+
# reading and displaying content by specifying encoding
spark.read.format("csv").option("header",False).option("encoding","UTF-16").schema("username string").load("testfile.d").show()
+--------+
|username|
+--------+
| XyZ|
+--------+

Related

How does Pyarrow read_csv handle different file encodings?

I have a .dat file that I had been reading with pd.read_csv, and I always needed to use encoding="latin" for it to read properly / without error. When I use pyarrow.csv.read_csv I don't see a parameter to select the encoding of the file, but it still works without issue (which is great! but I don't understand why, or whether it only auto-handles certain encodings). The only parameters I'm using are delimiter="|" (with ParseOptions) and auto_dict_encode=True (with ConvertOptions).
How is pyarrow handling different encoding types?
pyarrow currently has no functionality to deal with different encodings, and assumes UTF8 for string/text data.
But the reason it doesn't raise an error is that pyarrow will read any non-UTF8 strings as a "binary" type column, instead of "string" type.
A small example:
# writing a small file with latin encoding
with open("test.csv", "w", encoding="latin") as f:
    f.writelines(["col1,col2\n", "u,ù"])
Reading with pyarrow gives string for the first column (which only contains ASCII characters, thus also valid UTF8), but reads the second column as binary:
>>> from pyarrow import csv
>>> csv.read_csv("test.csv")
pyarrow.Table
col1: string
col2: binary
With pandas you indeed get an error by default (because pandas has no binary data type, and will try to read all text columns as python strings, thus UTF8):
>>> pd.read_csv("test.csv")
...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf9 in position 0: invalid start byte
>>> pd.read_csv("test.csv", encoding="latin")
  col1 col2
0    u    ù
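If you are on a pyarrow version without encoding support (see the next answer), one way to recover readable text from the binary column is to decode the bytes yourself after reading. A minimal sketch, assuming the data really is Latin-1:
from pyarrow import csv
import pyarrow as pa

table = csv.read_csv("test.csv")

# col2 came back as binary; pull the raw bytes out and decode with the real encoding
raw = table.column("col2").to_pylist()                  # list of bytes objects
decoded = pa.array([b.decode("latin-1") for b in raw])  # now proper (UTF-8) strings

table = table.set_column(table.schema.get_field_index("col2"), "col2", decoded)
print(table.schema)  # col2 is now string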
It's now possible to specify encodings with pyarrow.read_csv.
According to the pyarrow docs for read_csv:
The encoding can be changed using the ReadOptions class.
A minimal example follows:
from pyarrow import csv
options = csv.ReadOptions(encoding='latin1')
table = csv.read_csv('path/to/file', options)
From what I can tell, the functionality was added in this PR, so it should work starting with pyarrow 1.0.

Dump Chinese data into a json file

I am running into a problem while dumping Chinese data (non-Latin-language data) into a JSON file.
I am trying to store a list in a json file with the following code:
with open("file_name.json","w",encoding="utf8") as file:
json.dump(edits,file)
It dumps without any errors.
When I view the file, it looks like this:
[{sentence: \u5979\u7d30\u5c0f\u8072\u5c0d\u6211\u8aaa\uff1a\u300c\u6211\u501f\u4f60\u4e00\u679d\u925b\u7b46\u3002\u300d}...]
I also tried it without the encoding option:
with open("file_name.json","w") as file:
json.dump(edits,file)
My question is: why does my json file look like this, and how can I dump it with the Chinese strings themselves instead of Unicode escapes?
Any help would be appreciated. Thanks :)
Check out the docs for json.dump.
Specifically, it has a switch ensure_ascii that, if set to False, makes the function write the characters out without escaping them.
If ensure_ascii is true (the default), the output is guaranteed to have all incoming non-ASCII characters escaped. If ensure_ascii is false, these characters will be output as-is.
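Applied to the code from the question, that looks like the following (a minimal sketch; the sample sentence is the one from your output, decoded):
import json

edits = [{"sentence": "她細小聲對我說：「我借你一枝鉛筆。」"}]

with open("file_name.json", "w", encoding="utf8") as file:
    json.dump(edits, file, ensure_ascii=False)  # write the characters as-is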

Decode or unescape \u00f0\u009f\u0091\u008d to 👍

We all know UTF-8 is hard. I exported my messages from Facebook and the resulting JSON file escaped all non-ascii characters to unicode code points.
I am looking for an easy way to unescape these unicode code points to regular old UTF-8. I also would love to use PowerShell.
I tried
$str = "\u00f0\u009f\u0091\u008d"
[Regex]::Replace($str, "\\[Uu]([0-9A-Fa-f]{4})", `
{[char]::ToString([Convert]::ToInt32($args[0].Groups[1].Value, 16))} )
but that only gives me ð as a result, not 👍.
I also tried using Notepad++ and I found this SO post: How to convert escaped Unicode (e.g. \u0432\u0441\u0435) to UTF-8 chars (все) in Notepad++. The accepted answer also results in exactly the same as the example above: ð.
I found the decoding solution here: the UTF8.js library that decodes the text perfectly and you can try it out here (with \u00f0\u009f\u0091\u008d as input).
Is there a way in PowerShell to decode \u00f0\u009f\u0091\u008d to receive 👍? I'd love to have real UTF-8 in my exported Facebook messages so I can actually read them.
Bonus points for helping me understand what \u00f0\u009f\u0091\u008d actually represents (besides it being some UTF-8 hex representation). Why is it the same as U+1F44D or \uD83D\uDC4D in C++?
The Unicode code point of the 👍 character is U+1F44D.
Using the variable-length UTF-8 encoding, the following 4 bytes (expressed as hex. numbers) are needed to represent this code point: F0 9F 91 8D.
While these bytes are recognizable in your string,
$str = "\u00f0\u009f\u0091\u008d"
they shouldn't be represented as \u escape codes, because they're not Unicode code units / code points, they're bytes.
With a 4-hex-digit escape sequence (UTF-16), the proper representation would require 2 16-bit Unicode code units, a so-called surrogate pair, which together represent the single non-BMP code point U+1F44D:
$str = "\uD83D\uDC4D"
If your JSON input used such proper Unicode escapes, PowerShell would process the string correctly; e.g.:
'{ "str": "\uD83D\uDC4D" }' | ConvertFrom-Json > out.txt
If you examine file out.txt, you'll see something like:
str
---
👍
(The output was sent to a file, because console windows wouldn't render the 👍 char. correctly, at least not without additional configuration; note that if you used PowerShell Core on Linux or macOS, however, terminal output would work.)
Therefore, the best solution would be to correct the problem at the source and use proper Unicode escapes (or even use the characters themselves, as long as the source supports any of the standard Unicode encodings).
If you really must parse the broken representation, try the following workaround (PSv4+), building on your own [regex]::Replace() technique:
$str = "A \u00f0\u009f\u0091\u008d for Mot\u00c3\u00b6rhead."
[regex]::replace($str, '(?:\\u[0-9a-f]{4})+', { param($m)
$utf8Bytes = (-split ($m.Value -replace '\\u([0-9a-f]{4})', '0x$1 ')).ForEach([byte])
[text.encoding]::utf8.GetString($utf8Bytes)
})
This should yield A 👍 for Motörhead.
The above translates sequences of \u... escapes into the byte values they represent and interprets the resulting byte array as UTF-8 text.
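For comparison, the same byte-level idea can be sketched in Python (purely an illustration, not part of the PowerShell answer; it assumes every escape names a byte value below 0x100):
import re

s = r"A \u00f0\u009f\u0091\u008d for Mot\u00c3\u00b6rhead."

def unmangle(match):
    # Each \uXXXX escape here really names a byte value; collect the bytes
    # and re-interpret the sequence as UTF-8 text.
    hex_values = re.findall(r"\\u([0-9a-fA-F]{4})", match.group(0))
    return bytes(int(h, 16) for h in hex_values).decode("utf-8")

print(re.sub(r"(?:\\u[0-9a-fA-F]{4})+", unmangle, s))  # A 👍 for Motörhead.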
To save the decoded string to a UTF-8 file, use ... | Set-Content -Encoding utf8 out.txt
Alternatively, in PSv5+, as Dennis himself suggests, you can make Out-File, and therefore its virtual alias >, default to UTF-8 via PowerShell's global parameter-defaults hashtable:
$PSDefaultParameterValues['Out-File:Encoding'] = 'utf8'
Note, however, that on Windows PowerShell (as opposed to PowerShell Core) you'll get a UTF-8 file with a BOM in both cases; avoiding that requires direct use of the .NET framework: see Using PowerShell to write a file in UTF-8 without the BOM.
ISO-8859-1 is very often an intermediate step in operations with UTF-8:
$text=[regex]::Unescape("A \u00f0\u009f\u0091\u008d for Mot\u00c3\u00b6rhead.")
Write-Host "[regex]::Unescape(utf-8) = $text"
$encTo=[System.Text.Encoding]::GetEncoding('iso-8859-1') # Change it to yours (iso-8859-2) i suppose
$bytes = $encTo.GetBytes($Text)
$text=[System.Text.Encoding]::UTF8.GetString($bytes)
Write-Host "utf8_DecodedFrom_8859_1 = $text"
[regex]::Unescape(utf-8) = A ð for Motörhead.
utf8_DecodedFrom_8859_1 = A 👍 for Motörhead.
What is nice about mklement0's example is that it is easy to obtain an encoded string of this type.
What is bad is that the resulting string becomes huge (the first two nibbles, '00', are wasted).
I must admit, mklement0's example is charming.
The code for encoding is one line only:
$emoji='A 👍 for Motörhead.'
[Reflection.Assembly]::LoadWithPartialName("System.Web") | Out-Null
$str=(([System.Web.HttpUtility]::UrlEncode($emoji)) -replace '%','\u00') -replace '\+',' '
$str
You can decode this in the standard URL way:
$str="A \u00f0\u009f\u0091\u008d for Mot\u00c3\u00b6rhead."
$str=$str -replace '\\u00','%'
[Reflection.Assembly]::LoadWithPartialName("System.Web") | Out-Null
[System.Web.HttpUtility]::UrlDecode($str)
A 👍 for Motörhead.

Reading csv files with quoted fields containing embedded commas

I am reading a csv file in Pyspark as follows:
df_raw=spark.read.option("header","true").csv(csv_path)
However, the data file has quoted fields with embedded commas in them, which should not be treated as delimiters. How can I handle this in Pyspark? I know pandas can handle this, but can Spark? The version I am using is Spark 2.0.0.
Here is an example which works in Pandas but fails using Spark:
In [1]: import pandas as pd
In [2]: pdf = pd.read_csv('malformed_data.csv')
In [3]: sdf=spark.read.format("org.apache.spark.csv").csv('malformed_data.csv',header=True)
In [4]: pdf[['col12','col13','col14']]
Out[4]:
                  col12                                              col13  \
0  32 XIY "W" JK, RE LK  SOMETHINGLIKEAPHENOMENON#YOUGOTSOUL~BRINGDANOISE
1                   NaN                      OUTKAST#THROOTS~WUTANG#RUNDMC

   col14
0   23.0
1    0.0
In [5]: sdf.select("col12","col13",'col14').show()
+------------------+--------------------+--------------------+
| col12| col13| col14|
+------------------+--------------------+--------------------+
|"32 XIY ""W"" JK| RE LK"|SOMETHINGLIKEAPHE...|
| null|OUTKAST#THROOTS~W...| 0.0|
+------------------+--------------------+--------------------+
The contents of the file :
col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col11,col12,col13,col14,col15,col16,col17,col18,col19
80015360210876000,11.22,X,4076710258,,,sxsw,,"32 YIU ""A""",S5,,"32 XIY ""W"" JK, RE LK",SOMETHINGLIKEAPHENOMENON#YOUGOTSOUL~BRINGDANOISE,23.0,cyclingstats,2012-25-19,432,2023-05-17,CODERED
61670000229561918,137.12,U,8234971771,,,woodstock,,,T4,,,OUTKAST#THROOTS~WUTANG#RUNDMC,0.0,runstats,2013-21-22,1333,2019-11-23,CODEBLUE
I noticed that your problematic line has escaping that uses double quotes themselves:
"32 XIY ""W"" JK, RE LK"
which should be interpreted just as
32 XIY "W" JK, RE LK
As described in RFC-4180, page 2 -
If double-quotes are used to enclose fields, then a double-quote appearing inside a field must be escaped by preceding it with another double quote
That's what Excel does, for example, by default.
However, in Spark (as of Spark 2.1), escaping is done by default in a non-RFC way, using the backslash (\). To fix this you have to explicitly tell Spark to use the double quote as the escape character:
.option("quote", "\"")
.option("escape", "\"")
This may explain why a comma character wasn't interpreted correctly when it was inside a quoted column.
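Putting it together for the file from the question, the read would look something like this (a sketch; the file and column names are taken from the question):
df = (
    spark.read
    .option("header", "true")
    .option("quote", "\"")    # the quoting character
    .option("escape", "\"")   # escape a quote by doubling it, per RFC 4180
    .csv("malformed_data.csv")
)
df.select("col12", "col13", "col14").show()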
Options for the Spark csv format are not documented well on the Apache Spark site, but here's a bit older documentation which I still find quite useful:
https://github.com/databricks/spark-csv
Update Aug 2018: Spark 3.0 might change this behavior to be RFC-compliant. See SPARK-22236 for details.
For anyone doing this in Scala: Tagar's answer nearly worked for me (thank you!); all I had to do was escape the double quote when setting my option param:
.option("quote", "\"")
.option("escape", "\"")
I'm using Spark 2.3, so I can confirm Tagar's solution still seems to work the same under the new release.
For anyone whose parse is still not working after using Tagar's solution:
Pyspark 3.1.2
.option("quote", "\"")
is the default, so this option is not necessary. However, in my case I have data spanning multiple lines, and Spark was unable to tell a \n inside a single data point apart from the one at the end of each row, so using .option("multiline", True) solved my issue, along with .option('escape', "\""). Generally it's better to use the multiline option by default.
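A sketch of that combined read (csv_path as in the question; option names as used above):
df_raw = (
    spark.read
    .option("header", "true")
    .option("multiline", True)   # allow quoted fields to span line breaks
    .option("escape", "\"")
    .csv(csv_path)
)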
A delimiter (comma) specified inside quotes will be ignored by default. Spark SQL has a built-in CSV reader as of Spark 2.0.
df = (
    session.read
    .option("header", "true")
    .csv("csv/file/path")
)
More about the CSV reader here.

How to read .csv file that contains utf-8 values by pandas dataframe

I'm trying to read a .csv file that contains utf-8 data in some of its columns, using a pandas dataframe. The code is as follows:
df = pd.read_csv('Cancer_training.csv', encoding='utf-8')
Then I got the following examples of errors with different files:
(1) 'utf-8' codec can't decode byte 0xcf in position 14:invalid continuation byte
(2) 'utf-8' codec can't decode byte 0xc9 in position 3:invalid continuation byte
Could you please share your ideas and experience with such problem? Thank you.
[python: 3.4.1.final.0,
pandas: 0.14.1]
Sample of the raw data: I cannot post a full record because of the legal restrictions on the medical data.
I had this problem for no apparent reason, and I managed to get it to work using this:
df = pd.read_csv('file', encoding = "ISO-8859-1")
not sure why though
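If guessing encodings one by one feels fragile, another option is to detect the encoding first. A sketch using the third-party chardet package (not part of the original answers; the file name is the one from the question):
import chardet
import pandas as pd

# sample the start of the file and let chardet guess the encoding
with open('Cancer_training.csv', 'rb') as f:
    guess = chardet.detect(f.read(100000))

print(guess)  # e.g. {'encoding': 'ISO-8859-1', 'confidence': 0.73, 'language': ''}
df = pd.read_csv('Cancer_training.csv', encoding=guess['encoding'])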
I also did what Irh09 proposed, but the second file it read was decoded incorrectly and it could not find a column whose name contains accented characters (á, é, í, ó, ú).
So I recommend wrapping the read in a try/except like this:
try:
    df = pd.read_csv('file', encoding="utf-8")
except UnicodeDecodeError:
    # fall back if the file is not valid UTF-8
    df = pd.read_csv('file', encoding="ISO-8859-1")