Spark dataframe from Json string with nested key - json

I have several columns to extract from a JSON string. However, one field has nested values and I am not sure how to deal with that. I need to explode it into multiple rows to get the values of field name, Value1, and Value2.
import spark.implicits._
import org.apache.spark.sql.functions._

val df = Seq(
  ("1", """{"k": "foo", "v": 1.0}""", "some_other_field_1"),
  ("2", """{"p": "bar", "q": 3.0}""", "some_other_field_2"),
  ("3",
    """{"nestedKey":[ {"field name":"name1","Value1":false,"Value2":true},
      | {"field name":"name2","Value1":"100","Value2":"200"}
      |]}""".stripMargin, "some_other_field_3")
).toDF("id", "json", "other")

df.show(truncate = false)

val df1 = df.withColumn("id1", col("id"))
  .withColumn("other1", col("other"))
  .withColumn("k", get_json_object(col("json"), "$.k"))
  .withColumn("v", get_json_object(col("json"), "$.v"))
  .withColumn("p", get_json_object(col("json"), "$.p"))
  .withColumn("q", get_json_object(col("json"), "$.q"))
  .withColumn("nestedKey", get_json_object(col("json"), "$.nestedKey"))
  .select("id1", "other1", "k", "v", "p", "q", "nestedKey")

df1.show(truncate = false)

You can parse the nestedKey using from_json and explode it:
val df2 = df1.withColumn(
  "nestedKey",
  expr("explode_outer(from_json(nestedKey, 'array<struct<`field name`:string, Value1:string, Value2:string>>'))")
).select("*", "nestedKey.*").drop("nestedKey")

df2.show
+---+------------------+----+----+----+----+----------+------+------+
|id1| other1| k| v| p| q|field name|Value1|Value2|
+---+------------------+----+----+----+----+----------+------+------+
| 1|some_other_field_1| foo| 1.0|null|null| null| null| null|
| 2|some_other_field_2|null|null| bar| 3.0| null| null| null|
| 3|some_other_field_3|null|null|null|null| name1| false| true|
| 3|some_other_field_3|null|null|null|null| name2| 100| 200|
+---+------------------+----+----+----+----+----------+------+------+

I did it in one DataFrame:
val df1 = df.withColumn("id1", col("id"))
  .withColumn("other1", col("other"))
  .withColumn("k", get_json_object(col("json"), "$.k"))
  .withColumn("v", get_json_object(col("json"), "$.v"))
  .withColumn("p", get_json_object(col("json"), "$.p"))
  .withColumn("q", get_json_object(col("json"), "$.q"))
  .withColumn("nestedKey", get_json_object(col("json"), "$.nestedKey"))
  .withColumn(
    "nestedKey",
    expr("explode_outer(from_json(nestedKey, 'array<struct<`field name`:string, Value1:string, Value2:string>>'))")
  )
  .withColumn("fieldname", col("nestedKey.field name"))
  .withColumn("valueone", col("nestedKey.Value1"))
  .withColumn("valuetwo", col("nestedKey.Value2"))
  .select("id1", "other1", "k", "v", "p", "q", "fieldname", "valueone", "valuetwo")
Still working to make it more elegant.
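For what it's worth, a more compact variant is sketched below. It only reuses the df and column names from above; the schema for the nested array is built once as a DataType instead of a DDL string, and everything is done in a single select.

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Sketch of a more compact version, assuming the same df and column names as above.
val nestedSchema = ArrayType(
  new StructType()
    .add("field name", StringType)
    .add("Value1", StringType)
    .add("Value2", StringType))

val dfCompact = df
  .withColumn("nested",
    explode_outer(from_json(get_json_object(col("json"), "$.nestedKey"), nestedSchema)))
  .select(
    col("id") as "id1",
    col("other") as "other1",
    get_json_object(col("json"), "$.k") as "k",
    get_json_object(col("json"), "$.v") as "v",
    get_json_object(col("json"), "$.p") as "p",
    get_json_object(col("json"), "$.q") as "q",
    col("nested.field name") as "fieldname",
    col("nested.Value1") as "valueone",
    col("nested.Value2") as "valuetwo")

dfCompact.show(truncate = false)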

Related

How to parse json string to different columns in spark scala?

While reading a parquet file, this is the file data:
|id |name |activegroup|
|1 |abc |[{"groupID":"5d","role":"admin","status":"A"},{"groupID":"58","role":"admin","status":"A"}]|
data types of each field
root
|--id : int
|--name : String
|--activegroup : String
The activegroup column is a string, so the explode function does not work on it directly. Following is the required output:
|id |name |groupID|role|status|
|1 |abc |5d |admin|A |
|1 |abc |58 |admin|A |
Please help me with parsing the above in the latest Spark Scala version.
First you need to extract the json schema:
val schema = schema_of_json(lit(df.select($"activeGroup").as[String].first))
Once you have it, you can convert your activegroup column, which is a String, to JSON (from_json), and then explode it.
Once the column is JSON, you can extract its values with $"columnName.field".
val dfresult = df.withColumn("jsonColumn", explode(from_json($"activegroup", schema)))
  .select($"id", $"name",
    $"jsonColumn.groupId" as "groupId",
    $"jsonColumn.role" as "role",
    $"jsonColumn.status" as "status")
If you want to extract the whole JSON and the element names are fine as they are, you can use * to do it:
val dfresult = df.withColumn("jsonColumn", explode(from_json($"activegroup", schema)))
  .select($"id", $"name", $"jsonColumn.*")
RESULT
+---+----+-------+-----+------+
| id|name|groupId| role|status|
+---+----+-------+-----+------+
| 1| abc| 5d|admin| A|
| 1| abc| 58|admin| A|
+---+----+-------+-----+------+
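Note that schema_of_json requires Spark 2.4+ and triggers an extra job to read the first row. If either is a problem, a hand-written schema works just as well; below is a minimal sketch assuming the field names shown in the question (groupID, role, status).

import spark.implicits._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Explicit schema for the activegroup array, matching the fields from the question.
val activeGroupSchema = ArrayType(
  new StructType()
    .add("groupID", StringType)
    .add("role", StringType)
    .add("status", StringType))

val dfresult = df
  .withColumn("jsonColumn", explode(from_json($"activegroup", activeGroupSchema)))
  .select($"id", $"name", $"jsonColumn.*")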

Scala - How to convert JSON Keys and Values as columns

How do I parse the input JSON below into key and value columns? Any help is appreciated.
Input:
{
  "name" : "srini",
  "value": {
    "1" : "val1",
    "2" : "val2",
    "3" : "val3"
  }
}
Output DataFrame Column:
name key value
-----------------------------
srini 1 val1
srini 2 val2
srini 3 val3
Input DataFrame:
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|json_file |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|{"file_path":"AAA/BBB.CCC.zip","file_name":"AAA_20200202122754.json","received_time":"2020-03-31","obj_cls":"Monitor","obj_cls_inst":"Monitor","relation_tree":"Source~>HD_Info~>Monitor","s_tag":"ABC1234","Monitor":{"Index":"0","Vendor_Data":"58F5Y","Monitor_Type":"Lenovo Monitor","HnfoID":"650FEC74"}}|
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
How do I convert the above JSON into a DataFrame like below?
+----------------+-----------------------+--------------+--------+-------------+-------------------------+----------+----------------+----------------+
|file_path       |file_name              |received_time |obj_cls |obj_cls_inst |relation_tree            |s_tag     |attribute_name  |attribute_value |
+----------------+-----------------------+--------------+--------+-------------+-------------------------+----------+----------------+----------------+
|AAA/BBB.CCC.zip |AAA_20200202122754.json|2020-03-31    |Monitor |Monitor      |Source~>HD_Info~>Monitor |ABC1234   |Index           |0               |
|AAA/BBB.CCC.zip |AAA_20200202122754.json|2020-03-31    |Monitor |Monitor      |Source~>HD_Info~>Monitor |ABC1234   |Vendor_Data     |58F5Y           |
|AAA/BBB.CCC.zip |AAA_20200202122754.json|2020-03-31    |Monitor |Monitor      |Source~>HD_Info~>Monitor |ABC1234   |Monitor_Type    |Lenovo Monitor  |
|AAA/BBB.CCC.zip |AAA_20200202122754.json|2020-03-31    |Monitor |Monitor      |Source~>HD_Info~>Monitor |ABC1234   |HnfoID          |650FEC74        |
+----------------+-----------------------+--------------+--------+-------------+-------------------------+----------+----------------+----------------+
val rawData = sparkSession.sql("select 1").withColumn("obj_cls", lit("First")).withColumn("s_tag", lit("S_12345")).withColumn("jsonString", lit("""{"id":""1,"First":{"Info":"ABCD123","Res":"5.2"}}"""))
Once you have your JSON loaded into a DataFrame as follows:
+-----+------------------+
| name| value|
+-----+------------------+
|srini|[val1, val2, val3]|
+-----+------------------+
First, you select all the value items:
df.select($"name", $"value.*")
This will give you this:
+-----+----+----+----+
| name| 1| 2| 3|
+-----+----+----+----+
|srini|val1|val2|val3|
+-----+----+----+----+
Then you need to pivot the columns into rows; for this I usually define a helper function kv:
def kv(columnsToTranspose: Array[String]) = explode(array(columnsToTranspose.map {
  c => struct(lit(c).alias("k"), col(c).alias("v"))
}: _*))
Then you create an array of the desired columns:
val pivotCols = Array("1", "2", "3")
And finally apply the function to the previous DF:
df.select($"name", $"value.*")
.withColumn("kv", kv(pivotCols))
.select($"name", $"kv.k" as "key", $"kv.v" as "value")
Result:
+-----+---+-----+
| name|key|value|
+-----+---+-----+
|srini| 1| val1|
|srini| 2| val2|
|srini| 3| val3|
+-----+---+-----+
EDIT
If you don't want to manually specify the columns to pivot, you can use an intermediate DataFrame as follows:
val dfIntermediate = df.select($"name", $"value.*")
dfIntermediate.withColumn("kv", kv(dfIntermediate.columns.tail))
.select($"name", $"kv.k" as "key", $"kv.v" as "value")
And you will obtain the very same result:
+-----+---+-----+
| name|key|value|
+-----+---+-----+
|srini| 1| val1|
|srini| 2| val2|
|srini| 3| val3|
+-----+---+-----+
EDIT2
The new example works the same way; you just need to change which columns you read/pivot:
val pivotColumns = Array("HnfoId", "Index", "Monitor_Type", "Vendor_Data")
df.select("file_path", "file_name", "received_time", "obj_cls", "obj_cls_inst", "relation_tree", "s_Tag", "Monitor.*").withColumn("kv", kv(pivotColumns)).select($"file_path", $"file_name", $"received_time", $"obj_cls", $"obj_cls_inst", $"relation_tree", $"s_Tag", $"kv.k" as "attribute_name", $"kv.v" as "attribute_value").show
+---------------+--------------------+-------------+-------+------------+--------------------+-------+--------------+---------------+
| file_path| file_name|received_time|obj_cls|obj_cls_inst| relation_tree| s_Tag|attribute_name|attribute_value|
+---------------+--------------------+-------------+-------+------------+--------------------+-------+--------------+---------------+
|AAA/BBB.CCC.zip|AAA_2020020212275...| 2020-03-31|Monitor| Monitor|Source~>HD_Info~>...|ABC1234| HnfoId| 650FEC74|
|AAA/BBB.CCC.zip|AAA_2020020212275...| 2020-03-31|Monitor| Monitor|Source~>HD_Info~>...|ABC1234| Index| 0|
|AAA/BBB.CCC.zip|AAA_2020020212275...| 2020-03-31|Monitor| Monitor|Source~>HD_Info~>...|ABC1234| Monitor_Type| Lenovo Monitor|
|AAA/BBB.CCC.zip|AAA_2020020212275...| 2020-03-31|Monitor| Monitor|Source~>HD_Info~>...|ABC1234| Vendor_Data| 58F5Y|
+---------------+--------------------+-------------+-------+------------+--------------------+-------+--------------+---------------+
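If the attribute names under Monitor are not known up front, an alternative to listing the pivot columns is to parse that object as a map and explode it. The sketch below assumes the raw JSON string sits in a column named json_file as in the question; rawJsonDf is a placeholder name for that input DataFrame.

import spark.implicits._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Sketch: read the Monitor object as a map<string,string> so its keys never have to be hard-coded.
// `rawJsonDf` is a placeholder for the DataFrame holding the json_file column from the question.
val monitorAsMap = from_json(
  get_json_object($"json_file", "$.Monitor"),
  MapType(StringType, StringType))

val dfAttrs = rawJsonDf
  .select(
    get_json_object($"json_file", "$.file_path") as "file_path",
    get_json_object($"json_file", "$.file_name") as "file_name",
    get_json_object($"json_file", "$.received_time") as "received_time",
    get_json_object($"json_file", "$.obj_cls") as "obj_cls",
    get_json_object($"json_file", "$.obj_cls_inst") as "obj_cls_inst",
    get_json_object($"json_file", "$.relation_tree") as "relation_tree",
    get_json_object($"json_file", "$.s_tag") as "s_tag",
    explode(monitorAsMap))            // exploding a map yields `key` and `value` columns
  .withColumnRenamed("key", "attribute_name")
  .withColumnRenamed("value", "attribute_value")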

How to read custom formatted dates as timestamp in pyspark

I want to use spark.read() to pull data from a .csv file, while enforcing a schema. However, I can't get spark to recognize my dates as timestamps.
First I create a dummy file to test with
%scala
Seq("1|1/15/2019 2:24:00 AM","2|test","3|").toDF().write.text("/tmp/input/csvDateReadTest")
Then I try to read it and provide a dateFormat string, but it doesn't recognize my dates and sends the records to the badRecordsPath:
df = (spark.read.format('csv')
      .schema("id int, dt timestamp")
      .option("delimiter", "|")
      .option("badRecordsPath", "/tmp/badRecordsPath")
      .option("dateFormat", "M/dd/yyyy hh:mm:ss aaa")
      .load("/tmp/input/csvDateReadTest"))
As a result, I get just 1 record in df (ID 3), when I'm expecting to see 2 (IDs 1 and 3).
df.show()
+---+----+
| id| dt|
+---+----+
| 3|null|
+---+----+
You must change dateFormat to timestampFormat, since in your case you need a timestamp type and not a date. Additionally, the timestamp format should be M/dd/yyyy h:mm:ss a (note the uppercase M for the month; lowercase mm means minutes).
Sample data:
Seq(
  "1|1/15/2019 2:24:00 AM",
  "2|test",
  "3|5/30/1981 3:11:00 PM"
).toDF().write.text("/tmp/input/csvDateReadTest")
With the changes for the timestamp:
val df = spark.read.format("csv")
.schema("id int, dt timestamp")
.option("delimiter","|")
.option("badRecordsPath","/tmp/badRecordsPath")
.option("timestampFormat","mm/dd/yyyy h:mm:ss a")
.load("/tmp/input/csvDateReadTest")
And the output:
+----+-------------------+
| id| dt|
+----+-------------------+
| 1|2019-01-15 02:24:00|
| 3|1981-05-30 15:11:00|
|null| null|
+----+-------------------+
Note that the record with id 2 failed to comply with the schema definition and therefore contains null. If you also want to keep the invalid records, you need to change the timestamp column to string, and the output in that case will be:
+---+--------------------+
| id| dt|
+---+--------------------+
| 1|1/15/2019 2:24:00 AM|
| 3|5/30/1981 3:11:00 PM|
| 2| test|
+---+--------------------+
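For reference, a minimal sketch of that string-schema read (same file and delimiter as above, only the dt column type changed); the UPDATE code below operates on a DataFrame of this shape:

// Sketch: same read as above, but dt kept as a plain string so malformed rows survive.
val dfRaw = spark.read.format("csv")
  .schema("id int, dt string")
  .option("delimiter", "|")
  .load("/tmp/input/csvDateReadTest")

dfRaw.show(truncate = false)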
UPDATE:
In order to change the string dt into timestamp type you could try df.withColumn("dt", $"dt".cast("timestamp")), although this cast cannot parse the custom format and will replace all the values with null.
You can achieve this with the following code:
import org.apache.spark.sql.Row
import java.text.SimpleDateFormat
import java.util.{Date, Locale}
import java.sql.Timestamp
import scala.util.{Try, Success, Failure}
val formatter = new SimpleDateFormat("M/dd/yyyy h:mm:ss a", Locale.US)

df.map { case Row(id: Int, dt: String) =>
  val tryParse = Try[Date](formatter.parse(dt))
  val p_timestamp = tryParse match {
    case Success(parsed) => new Timestamp(parsed.getTime())
    case Failure(_) => null
  }
  (id, p_timestamp)
}.toDF("id", "dt").show
Output:
+---+-------------------+
| id| dt|
+---+-------------------+
| 1|2019-01-15 02:24:00|
| 3|1981-05-30 15:11:00|
| 2| null|
+---+-------------------+
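On Spark 2.2+ the same conversion can also be done without dropping to SimpleDateFormat, using the built-in to_timestamp; rows that don't match the pattern simply become null. A minimal sketch against the same string-typed df:

import org.apache.spark.sql.functions.{col, to_timestamp}

// Sketch: built-in parser; "test" does not match the pattern and becomes null.
df.withColumn("dt", to_timestamp(col("dt"), "M/dd/yyyy h:mm:ss a")).show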
Hi, here is the sample code:
df.withColumn("times",
from_unixtime(unix_timestamp(col("df"), "M/dd/yyyy hh:mm:ss a"),
"yyyy-MM-dd HH:mm:ss.SSSSSS"))
.show(false)

from_json of Spark sql return null values

I loaded a parquet file into a Spark DataFrame as follows:
val message= spark.read.parquet("gs://defenault-zdtt-devde/pubsub/part-00001-e9f8c58f-7de0-4537-a7be-a9a8556sede04a-c000.snappy.parquet")
When I perform a collect on my DataFrame I get the following result:
message.collect()
Array[org.apache.spark.sql.Row] = Array([118738748835150,2018-08-20T17:44:38.742Z,{"id":"uplink-3130-85bc","device_id":60517119992794222,"group_id":69,"group":"box-2478-2555","profile_id":3,"profile":"eolane-movee","type":"uplink","timestamp":"2018-08-20T17:44:37.048Z","count":3130,"payload":[{"timestamp":"2018-08-20T17:44:37.048Z","data":{"battery":3.5975599999999996,"temperature":27}}],"payload_encrypted":"9da25e36","payload_cleartext":"fe1b01aa","device_properties":{"appeui":"7ca97df000001190","deveui":"7ca97d0000001bb0","external_id":"Product: 3.7 / HW: 3.1 / SW: 1.8.8","no_de_serie_eolane":"4904","no_emballage":"S02066","product_version":"1.3.1"},"protocol_data":{"AppNonce":"e820ef","DevAddr":"0e6c5fda","DevNonce":"85bc","NetID":"000007","best_gateway_id":"M40246","gateway.
The schema of this dataframe is
message.printSchema()
root
|-- Id: string (nullable = true)
|-- publishTime: string (nullable = true)
|-- data: string (nullable = true)
My aim is to work on the data column, which holds JSON data, and to flatten it.
I wrote the following code:
val schemaTotal = new StructType(Array(
  StructField("id", StringType, false),
  StructField("device_id", StringType),
  StructField("group_id", LongType),
  StructField("group", StringType),
  StructField("profile_id", IntegerType),
  StructField("profile", StringType),
  StructField("type", StringType),
  StructField("timestamp", StringType),
  StructField("count", StringType),
  StructField("payload", new StructType()
    .add("timestamp", StringType)
    .add("data", new ArrayType(new StructType().add("battery", LongType).add("temperature", LongType), false))),
  StructField("payload_encrypted", StringType),
  StructField("payload_cleartext", StringType),
  StructField("device_properties", new ArrayType(new StructType()
    .add("appeui", StringType).add("deveui", StringType).add("external_id", StringType)
    .add("no_de_serie_eolane", LongType).add("no_emballage", StringType).add("product_version", StringType), false)),
  StructField("protocol_data", new ArrayType(new StructType()
    .add("AppNonce", StringType).add("DevAddr", StringType).add("DevNonce", StringType).add("NetID", LongType)
    .add("best_gateway_id", StringType).add("gateways", IntegerType).add("lora_version", IntegerType)
    .add("noise", LongType).add("port", IntegerType).add("rssi", DoubleType).add("sf", IntegerType)
    .add("signal", DoubleType).add("snr", DoubleType), false)),
  StructField("lat", StringType),
  StructField("lng", StringType),
  StructField("geolocation_type", StringType),
  StructField("geolocation_precision", StringType),
  StructField("delivered_at", StringType)))
val dataframe_extract=message.select($"Id",
$"publishTime",
from_json($"data",schemaTotal).as("content"))
val table = dataframe_extract.select(
$"Id",
$"publishTime",
$"content.id" as "id",
$"content.device_id" as "device_id",
$"content.group_id" as "group_id",
$"content.group" as "group",
$"content.profile_id" as "profile_id",
$"content.profile" as "profile",
$"content.type" as "type",
$"content.timestamp" as "timestamp",
$"content.count" as "count",
$"content.payload.timestamp" as "timestamp2",
$"content.payload.data.battery" as "battery",
$"content.payload.data.temperature" as "temperature",
$"content.payload_encrypted" as "payload_encrypted",
$"content.payload_cleartext" as "payload_cleartext",
$"content.device_properties.appeui" as "appeui"
)
table.show() gives me null values for all columns:
+---------------+--------------------+----+---------+--------+-----+----------+-------+----+---------+-----+----------+-------+-----------+-----------------+-----------------+------+
| Id| publishTime| id|device_id|group_id|group|profile_id|profile|type|timestamp|count|timestamp2|battery|temperature|payload_encrypted|payload_cleartext|appeui|
+---------------+--------------------+----+---------+--------+-----+----------+-------+----+---------+-----+----------+-------+-----------+-----------------+-----------------+------+
|118738748835150|2018-08-20T17:44:...|null| null| null| null| null| null|null| null| null| null| null| null| null| null| null|
+---------------+--------------------+----+---------+--------+-----+----------+-------+----+---------+-----+----------+-------+-----------+-----------------+-----------------+------+
whereas table.printSchema() gives me the expected result. Any idea how to solve this, please?
I am working with Zeppelin as a first prototyping step. Thanks a lot in advance for your help.
Best regards
The from_json() SQL function has the following constraint when converting a column value to a DataFrame:
whatever data type you define in the schema must match the value present in the JSON; if any column's value is mismatched, from_json returns null for all column values.
e.g., for this column value:
'{"name": "raj", "age": 12}'
StructType(List(StructField(name,StringType,true),StructField(age,StringType,true)))
The above schema will return a null value for both columns.
StructType(List(StructField(name,StringType,true),StructField(age,IntegerType,true)))
The above schema will return the expected DataFrame.
For this thread, the likely reason is exactly that: if any mismatched column value is present, from_json will return null for all column values.
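A practical way to avoid a hand-written schema drifting out of sync with the data is to let Spark infer the schema from the JSON strings themselves and reuse it in from_json. A sketch, assuming the message DataFrame from the question with its data column:

import spark.implicits._
import org.apache.spark.sql.functions.from_json

// Sketch: infer the schema directly from the data column, then parse with it.
// Assumes `message` is the DataFrame from the question and `data` holds the JSON strings.
val inferredSchema = spark.read.json(message.select($"data").as[String]).schema

val parsed = message.select(
  $"Id",
  $"publishTime",
  from_json($"data", inferredSchema).as("content"))

parsed.printSchema()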

Spark CSV read/write for empty field

I want to write my DataFrame's empty fields as empty, but they are always written as NULL. I want to write NULLs as ? and empty as empty/blank. The same applies while reading from a CSV.
val df = sqlContext.createDataFrame(Seq(
  (0, "a"),
  (1, "b"),
  (2, "c"),
  (3, ""),
  (4, null)
))
scala> df.show
+---+----+
| _1|  _2|
+---+----+
|  0|   a|
|  1|   b|
|  2|   c|
|  3|    |
|  4|null|
+---+----+
df.write.mode(SaveMode.Overwrite).format("com.databricks.spark.csv").option("nullValue","?").save("/xxxxx/test_out")
Written output:
0,a
1,b
2,c
3,?
4,?
.option("treatEmptyValuesAsNulls" , "false")
This option does not work.
I need the empty to write as empty
0,a
1,b
2,c
3,
4,?
Try using SQL (I am using Spark 2.2):
val ds= sqlContext.sql("select `_1`, case when `_2` is not null then `_2` else case when `_2` is null then '?' else case when `_2` = '' then '' end end end as val "+
"from global_temp.test");
ds.write.csv("<output path>");