Spark: write a CSV with null values as empty columns

I'm using PySpark to write a dataframe to a CSV file like this:
df.write.csv(PATH, nullValue='')
There is a column of type string in that dataframe. Some of the values are null, and these null values are written like this:
...,"",...
I would like them to be displayed like this instead:
...,,...
Is this possible with an option in df.write.csv()?
Thanks!

Easily, with the emptyValue option set:
emptyValue: sets the string representation of an empty value. If None is set, it uses the default value, "".
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([
    Row(col_1=None, col_2='20151231', col_3='Hello'),
    Row(col_1=2, col_2='20160101', col_3=None),
    Row(col_1=3, col_2=None, col_3='World')
])

# emptyValue='' writes empty string cells as nothing instead of ""
df.write.csv(PATH, header=True, emptyValue='')
Output
col_1,col_2,col_3
,20151231,Hello
2,20160101,
3,,World
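Note the distinction between the two writer options: emptyValue controls how empty strings are written, while nullValue controls how nulls are written. If quoted empty fields still appear, a sketch worth trying sets both (PATH is the placeholder from the question):
# Hedged sketch: write both nulls and empty strings as bare commas
df.write.csv(PATH, header=True, nullValue='', emptyValue='')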

Related

Json string written to Kafka using Spark is not converted properly on reading

I read a .csv file to create a data frame, and I want to write the data to a Kafka topic. The code is the following:
df = spark.read.format("csv").option("header", "true").load(f'{file_location}')
kafka_df = df.selectExpr("to_json(struct(*)) AS value").selectExpr("CAST(value AS STRING)")
kafka_df.show(truncate=False)
And the data frame looks like this:
value
"{""id"":""d215e9f1-4d0c-42da-8f65-1f4ae72077b3"",""latitude"":""-63.571457254062715"",""longitude"":""-155.7055842710919""}"
"{""id"":""ca3d75b3-86e3-438f-b74f-c690e875ba52"",""latitude"":""-53.36506636464281"",""longitude"":""30.069167069917597""}"
"{""id"":""29e66862-9248-4af7-9126-6880ceb3b45f"",""latitude"":""-23.767505281795835"",""longitude"":""174.593140405442""}"
"{""id"":""451a7e21-6d5e-42c3-85a8-13c740a058a9"",""latitude"":""13.02054867061598"",""longitude"":""20.328402498420786""}"
"{""id"":""09d6c11d-7aae-4d17-8cd8-183157794893"",""latitude"":""-81.48976715040848"",""longitude"":""1.1995769642056189""}"
"{""id"":""393e8760-ef40-482a-a039-d263af3379ba"",""latitude"":""-71.73949722379649"",""longitude"":""112.59922770487054""}"
"{""id"":""d6db8fcf-ee83-41cf-9ec2-5c2909c18534"",""latitude"":""-4.034680969008576"",""longitude"":""60.59645511854336""}"
After writing it to Kafka, I want to read it back and transform the binary data from the "value" column into a JSON string, but the result is that the value contains only the id, not the whole string. Any idea why?
from pyspark.sql import functions as F
df = consume_from_event_hub(topic, bootstrap_servers, config, consumer_group)
string_df = df.select(F.col("value").cast("string"))
string_df.display()
value
794541bc-30e6-4c16-9cd0-3c5c8995a3a4
20ea5b50-0baa-47e3-b921-f9a3ac8873e2
598d2fc1-c919-4498-9226-dd5749d92fc5
86cd5b2b-1c57-466a-a3c8-721811ab6959
807de968-c070-4b8b-86f6-00a865474c35
e708789c-e877-44b8-9504-86fd9a20ef91
9133a888-2e8d-4a5a-87ce-4a53e63b67fc
cd5e3e0d-8b02-45ee-8634-7e056d49bf3b
The CSV format is this:
id,latitude,longitude
bd6d98e1-d1da-4f41-94ba-8dbd8c8fce42,-86.06318155350924,-108.14300138138589
c39e84c6-8d7b-4cc5-b925-68a5ea406d52,74.20752175171859,-129.9453606091319
011e5fb8-6ab7-4ee9-97bb-acafc2c71e15,19.302250885973592,-103.2154291337162
You need to remove selectExpr("CAST(value AS STRING)"), since to_json already returns a string column:
from pyspark.sql.functions import col, to_json, struct
df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load(f'{file_location}')
kafka_df = df.select(to_json(struct(col("*"))).alias("value"))
kafka_df.show(truncate=False)
I'm not sure what's wrong with the consumer. That should have worked, unless consume_from_event_hub does something specifically to extract the ID column.
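For reference, a minimal sketch of reading the topic back with Spark's built-in Kafka source instead of the custom consume_from_event_hub helper; topic and bootstrap_servers are the names assumed from the question:
from pyspark.sql import functions as F

# Batch-read the whole topic; the value column arrives as binary
raw_df = (spark.read
    .format("kafka")
    .option("kafka.bootstrap.servers", bootstrap_servers)
    .option("subscribe", topic)
    .option("startingOffsets", "earliest")
    .load())

# Casting the binary value to string should recover the full JSON payload
string_df = raw_df.select(F.col("value").cast("string"))
string_df.show(truncate=False)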

How to dump pandas dataframe as json

While dumping a dataframe to JSON, I get escape characters along with double quotes.
Expected output:
"[{"a":"1","b":"5"},
{"a":"2","b":"6"},
{"a":"3","b":"7"},
{"a":"4","b":"8"}]"
Result:
"[{\"a\":\"1\",\"b\":\"5\"},
{\"a\":\"2\",\"b\":\"6\"},
{\"a\":\"3\",\"b\":\"7\"},
{\"a\":\"4\",\"b\":\"8\"}]"
AB1 = AB.to_json(orient='records',encoding='utf-8')
return json.dumps(AB1)
You don't need both steps. to_json already returns a JSON string, so return it directly; passing it through json.dumps re-encodes the string, which is where the escapes come from. Alternatively, build plain Python objects (e.g. with to_dict) and use dump or dumps on those.
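A minimal sketch of both fixes, assuming AB is a small frame like the one in the question:
import json
import pandas as pd

AB = pd.DataFrame({'a': ['1', '2', '3', '4'], 'b': ['5', '6', '7', '8']})

# Option 1: to_json already returns a JSON string, so return it as-is
out = AB.to_json(orient='records')

# Option 2: build plain Python objects and let json.dumps do the encoding
out = json.dumps(AB.to_dict(orient='records'))

# Passing to_json's output through json.dumps encodes the string itself,
# which is where the backslash-escaped quotes come from.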

How to omit the header when using Spark to read a CSV file?

I am trying to use Spark to read a csv file in jupyter notebook. So far I have
spark = SparkSession.builder.master("local[4]").getOrCreate()
reviews_df = spark.read.option("header","true").csv("small.csv")
reviews_df.collect()
This is what reviews_df looks like:
[Row(reviewerID=u'A1YKOIHKQHB58W', asin=u'B0001VL0K2', overall=u'5'),
Row(reviewerID=u'A2YB0B3QOHEFR', asin=u'B000JJSRNY', overall=u'5'),
Row(reviewerID=u'AAI0092FR8V1W', asin=u'B0060MYKYY', overall=u'5'),
Row(reviewerID=u'A2TAPSNKK9AFSQ', asin=u'6303187218', overall=u'5'),
Row(reviewerID=u'A316JR2TQLQT5F', asin=u'6305364206', overall=u'5')...]
But each row of the data frame contains the column names. How can I reformat the data so that it becomes:
[(u'A1YKOIHKQHB58W', u'B0001VL0K2', u'5'),
(u'A2YB0B3QOHEFR', u'B000JJSRNY', u'5')....]
A DataFrame always returns Row objects; that's why, when you call collect() on a DataFrame, it shows:
Row(reviewerID=u'A1YKOIHKQHB58W', asin=u'B0001VL0K2', overall=u'5')
To get what you want, you can do:
reviews_df.rdd.map(lambda row: (row.reviewerID, row.asin, row.overall)).collect()
This will return a list of tuples of the row values.
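Since Row is a subclass of tuple, an equivalent sketch that avoids naming each column:
# Row is a tuple subclass, so tuple(row) unpacks all columns in order
[tuple(row) for row in reviews_df.collect()]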

Json fields getting sorted by default when converted to spark DataFrame

When I create a dataframe from a JSON file, the fields from the JSON file are sorted by default in the dataframe. How do I avoid this sorting?
The JSON file has one JSON message per line:
{"name":"john","age":10,"class":2}
{"name":"rambo","age":11,"class":3}
When I create a data frame from this file as:
val jDF = sqlContext.read.json("/user/inputfiles/sample.json")
a DF is created as jDF: org.apache.spark.sql.DataFrame = [age: bigint, class: bigint, name: string]. In the DF the fields are sorted alphabetically by default.
How do we prevent this from happening? I'm unable to understand what is going wrong here, and would appreciate any help in sorting out the problem.
A simple way is to do a select on the DataFrame:
val newDF = jDF.select("name","age","class")
The order of parameters is the order of the columns you want.
But this could be verbose if there are many columns and you have to define the order yourself.
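A hedged PySpark sketch of deriving the order from the data itself, assuming every line shares the key order of the first line (Python dicts preserve insertion order):
import json

# Take the key order from the first JSON line and select in that order
first_line = sc.textFile("/user/inputfiles/sample.json").first()
ordered_cols = list(json.loads(first_line).keys())  # ['name', 'age', 'class']
newDF = jDF.select(*ordered_cols)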

Obtain "None" values after specifying schema and reading json file in pyspark

I have a file on S3 in JSON format (filename=a). I read it and create a dataframe (df) using sqlContext.read.json. On checking df.printSchema(), the schema is not what I want, so I specify my own schema with double and string types.
Then I reload the JSON data into a dataframe (df3), specifying the above schema, but when I do df3.head(1) I see "None" values for some of my variables.
See the code below:
df = sqlContext.read.json(os.path.join('file:///data','a'))
print df.count()
df.printSchema()
df.na.fill(0)
Then I specify my own schema (sch). The full schema is long, so an abbreviated two-field version is shown here:
sch = StructType([StructField("x", DoubleType(), True), StructField("y", DoubleType(), True)])
f = sc.textFile(os.path.join('file:///data','a'))
f_json = f.map(lambda x: json.loads(x))
df3 = sqlContext.createDataFrame(f_json, sch)
df3.head(1)
[Row(x=85.7, y=None)]
I obtain 'None' values for all my columns with DoubleType when I do df3.head(1). Am I doing something wrong when I reload the df3 dataframe?
I was able to take care of the "None" values by doing df3 = df3.na.fill(0)! (na.fill returns a new DataFrame rather than modifying in place, so the result must be reassigned.)
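A hedged alternative worth trying: let Spark apply the schema while parsing, instead of re-parsing the text with json.loads and passing dicts to createDataFrame, where values that don't match the declared DoubleType can silently come through as None:
# DataFrameReader.json accepts a schema directly, avoiding the
# textFile/json.loads round trip
df3 = sqlContext.read.json(os.path.join('file:///data', 'a'), schema=sch)

# na.fill returns a new DataFrame, so reassign the result
df3 = df3.na.fill(0)
df3.head(1)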