I have a nested JSON DataFrame in Spark whose schema looks like this:
root
|-- data: struct (nullable = true)
| |-- average: long (nullable = true)
| |-- sum: long (nullable = true)
| |-- time: string (nullable = true)
|-- password: string (nullable = true)
|-- url: string (nullable = true)
|-- username: string (nullable = true)
I need to convert the time variable under the data struct to the timestamp data type. Following is the code I tried, but it did not give me the result I wanted.
val jsonStr = """{
"url": "imap.yahoo.com",
"username": "myusername",
"password": "mypassword",
"data": {
"time":"2017-1-29 0-54-32",
"average": 234,
"sum": 123}}"""
import play.api.libs.json._
val json: JsValue = Json.parse(jsonStr)

import sqlContext.implicits._
import org.apache.spark.sql.functions._
val rdd = sc.parallelize(jsonStr :: Nil)
val df = sqlContext.read.json(rdd)
df.printSchema()
case class Convert(time: java.sql.Timestamp)
val makeTimeStamp = udf((time: java.sql.Timestamp) => Convert(time))
val dfRes = df.withColumn("data", makeTimeStamp(unix_timestamp(df("data.time"), "yyyy-MM-dd hh-mm-ss").cast("timestamp")))
dfRes.printSchema()
Result of my code:
root
|-- data: struct (nullable = true)
| |-- time: timestamp (nullable = true)
|-- password: string (nullable = true)
|-- url: string (nullable = true)
|-- username: string (nullable = true)
My code is actually removing the other elements inside the data struct (average and sum) instead of just casting the time string to the timestamp data type. For basic data-management operations on JSON DataFrames, do we need to write a UDF for each piece of functionality, or is there a library available for JSON data management? I am currently using the Play framework for working with JSON objects in Spark. Thanks in advance.
You can try this:
val jsonStr = """{
"url": "imap.yahoo.com",
"username": "myusername",
"password": "mypassword",
"data": {
"time":"2017-1-29 0-54-32",
"average": 234,
"sum": 123}}"""
import play.api.libs.json._
val json: JsValue = Json.parse(jsonStr)

import sqlContext.implicits._
import org.apache.spark.sql.functions._
val rdd = sc.parallelize(jsonStr :: Nil)
val df = sqlContext.read.json(rdd)
df.printSchema()
case class Convert(time: java.sql.Timestamp, average: Long, sum: Long)
val makeTimeStamp = udf((time: java.sql.Timestamp, average: Long, sum: Long) => Convert(time, average, sum))
val dfRes = df.withColumn("data", makeTimeStamp(unix_timestamp(df("data.time"), "yyyy-MM-dd hh-mm-ss").cast("timestamp"), df("data.average"), df("data.sum")))
This will give the result:
root
|-- url: string (nullable = true)
|-- username: string (nullable = true)
|-- password: string (nullable = true)
|-- data: struct (nullable = true)
| |-- time: timestamp (nullable = true)
| |-- average: long (nullable = false)
| |-- sum: long (nullable = false)
The only things changed are the Convert case class and the makeTimeStamp UDF (defined before use here): they now carry average and sum through alongside time, so the rebuilt struct keeps all three fields.
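To the broader question: for reshaping like this you usually don't need a UDF at all. Below is a minimal sketch of the same conversion using only Spark's built-in struct function, rebuilding the data column in place (column names taken from your schema):
import org.apache.spark.sql.functions.{struct, unix_timestamp}

// Rebuild the struct: cast time, carry average and sum through unchanged.
val dfRes2 = df.withColumn("data", struct(
  unix_timestamp(df("data.time"), "yyyy-MM-dd hh-mm-ss").cast("timestamp").as("time"),
  df("data.average").as("average"),
  df("data.sum").as("sum")))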
Assuming you can specify the Spark schema upfront, the automatic string-to-timestamp type coercion should take care of the conversions.
import org.apache.spark.sql.types._
val dschema = (new StructType)
  .add("url", StringType)
  .add("username", StringType)
  // note: fields left out of the schema (e.g. password, data.average) are simply dropped on read
  .add("data", (new StructType)
    .add("sum", LongType)
    .add("time", TimestampType))
val df = spark.read.schema(dschema).json("/your/json/on/hdfs")
df.printSchema
df.show
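One caveat: a time string like 2017-1-29 0-54-32 is not in Spark's default timestamp format, so the coerced values may come back null unless the reader is told the pattern. A hedged sketch using the JSON reader's timestampFormat option (the pattern below is my guess at matching that sample string):
val df2 = spark.read
  .schema(dschema)
  .option("timestampFormat", "yyyy-M-d H-m-s")
  .json("/your/json/on/hdfs")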
This article outlines a few more techniques for dealing with bad data; it's worth a read for your use case.
I have some JSON data like this:
{"gid":"111","createHour":"2014-10-20 01:00:00.0","revisions":[{"revId":"2","modDate":"2014-11-20 01:40:37.0"},{"revId":"4","modDate":"2014-11-20 01:40:40.0"}],"comments":[],"replies":[]}
{"gid":"222","createHour":"2014-12-20 01:00:00.0","revisions":[{"revId":"2","modDate":"2014-11-20 01:39:31.0"},{"revId":"4","modDate":"2014-11-20 01:39:34.0"}],"comments":[],"replies":[]}
{"gid":"333","createHour":"2015-01-21 00:00:00.0","revisions":[{"revId":"25","modDate":"2014-11-21 00:34:53.0"},{"revId":"110","modDate":"2014-11-21 00:47:10.0"}],"comments":[{"comId":"4432","content":"How are you?"}],"replies":[{"repId":"4441","content":"I am good."}]}
{"gid":"444","createHour":"2015-09-20 23:00:00.0","revisions":[{"revId":"2","modDate":"2014-11-20 23:23:47.0"}],"comments":[],"replies":[]}
{"gid":"555","createHour":"2016-01-21 01:00:00.0","revisions":[{"revId":"135","modDate":"2014-11-21 01:01:58.0"}],"comments":[],"replies":[]}
{"gid":"666","createHour":"2016-04-23 19:00:00.0","revisions":[{"revId":"136","modDate":"2014-11-23 19:50:51.0"}],"comments":[],"replies":[]}
I can read it in:
val df = sqlContext.read.json("./data/full.json")
I can print the schema with df.printSchema:
root
|-- comments: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- comId: string (nullable = true)
| | |-- content: string (nullable = true)
|-- createHour: string (nullable = true)
|-- gid: string (nullable = true)
|-- replies: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- content: string (nullable = true)
| | |-- repId: string (nullable = true)
|-- revisions: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- modDate: string (nullable = true)
| | |-- revId: string (nullable = true)
I can show the data with df.show(10, false):
+---------------------+---------------------+---+-------------------+---------------------------------------------------------+
|comments |createHour |gid|replies |revisions |
+---------------------+---------------------+---+-------------------+---------------------------------------------------------+
|[] |2014-10-20 01:00:00.0|111|[] |[[2014-11-20 01:40:37.0,2], [2014-11-20 01:40:40.0,4]] |
|[] |2014-12-20 01:00:00.0|222|[] |[[2014-11-20 01:39:31.0,2], [2014-11-20 01:39:34.0,4]] |
|[[4432,How are you?]]|2015-01-21 00:00:00.0|333|[[I am good.,4441]]|[[2014-11-21 00:34:53.0,25], [2014-11-21 00:47:10.0,110]]|
|[] |2015-09-20 23:00:00.0|444|[] |[[2014-11-20 23:23:47.0,2]] |
|[] |2016-01-21 01:00:00.0|555|[] |[[2014-11-21 01:01:58.0,135]] |
|[] |2016-04-23 19:00:00.0|666|[] |[[2014-11-23 19:50:51.0,136]] |
+---------------------+---------------------+---+-------------------+---------------------------------------------------------+
I can capture the schema with val dfSc = df.schema and print it as:
StructType(StructField(comments,ArrayType(StructType(StructField(comId,StringType,true), StructField(content,StringType,true)),true),true), StructField(createHour,StringType,true), StructField(gid,StringType,true), StructField(replies,ArrayType(StructType(StructField(content,StringType,true), StructField(repId,StringType,true)),true),true), StructField(revisions,ArrayType(StructType(StructField(modDate,StringType,true), StructField(revId,StringType,true)),true),true))
I can print this out more readably:
println(df.schema.fields.mkString(",\n"))
StructField(comments,ArrayType(StructType(StructField(comId,StringType,true), StructField(content,StringType,true)),true),true),
StructField(createHour,StringType,true),
StructField(gid,StringType,true),
StructField(replies,ArrayType(StructType(StructField(content,StringType,true), StructField(repId,StringType,true)),true),true),
StructField(revisions,ArrayType(StructType(StructField(modDate,StringType,true), StructField(revId,StringType,true)),true),true)
Now if I read in the same file without the row that has comments and replies (simply deleting that row), using val df2 = sqlContext.read.json("./data/partialRevOnly.json"), I get something like this from printSchema:
root
|-- comments: array (nullable = true)
| |-- element: string (containsNull = true)
|-- createHour: string (nullable = true)
|-- gid: string (nullable = true)
|-- replies: array (nullable = true)
| |-- element: string (containsNull = true)
|-- revisions: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- modDate: string (nullable = true)
| | |-- revId: string (nullable = true)
I don't like that, so I use:
val df3 = sqlContext.read.
  schema(dfSc).
  json("./data/partialRevOnly.json")
where dfSc is the original schema. So now I get exactly the schema I had before, despite the removed data:
root
|-- comments: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- comId: string (nullable = true)
| | |-- content: string (nullable = true)
|-- createHour: string (nullable = true)
|-- gid: string (nullable = true)
|-- replies: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- content: string (nullable = true)
| | |-- repId: string (nullable = true)
|-- revisions: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- modDate: string (nullable = true)
| | |-- revId: string (nullable = true)
This is perfect... well, almost. I would like to assign this schema to a variable, similar to this:
val textSc = StructField(comments,ArrayType(StructType(StructField(comId,StringType,true), StructField(content,StringType,true)),true),true),
StructField(createHour,StringType,true),
StructField(gid,StringType,true),
StructField(replies,ArrayType(StructType(StructField(content,StringType,true), StructField(repId,StringType,true)),true),true),
StructField(revisions,ArrayType(StructType(StructField(modDate,StringType,true), StructField(revId,StringType,true)),true),true)
OK, this won't work due to the missing double quotes and some other structural issues, so I try this instead (which gives a compile error):
import org.apache.spark.sql.types._
val textSc = StructType(Array(
StructField("comments",ArrayType(StructType(StructField("comId",StringType,true), StructField("content",StringType,true)),true),true),
StructField("createHour",StringType,true),
StructField("gid",StringType,true),
StructField("replies",ArrayType(StructType(StructField("content",StringType,true), StructField("repId",StringType,true)),true),true),
StructField("revisions",ArrayType(StructType(StructField("modDate",StringType,true), StructField("revId",StringType,true)),true),true)
))
Name: Compile Error
Message: <console>:78: error: overloaded method value apply with alternatives:
(fields: Array[org.apache.spark.sql.types.StructField])org.apache.spark.sql.types.StructType <and>
(fields: java.util.List[org.apache.spark.sql.types.StructField])org.apache.spark.sql.types.StructType <and>
(fields: Seq[org.apache.spark.sql.types.StructField])org.apache.spark.sql.types.StructType
cannot be applied to (org.apache.spark.sql.types.StructField, org.apache.spark.sql.types.StructField)
StructField("comments",ArrayType(StructType(StructField("comId",StringType,true), StructField("content",StringType,true)),true),true),
...
Without this error (which I cannot find a quick way around), I would like to use textSc in place of dfSc to read the JSON data with an imposed schema.
I cannot find a 1-to-1 way of getting (via println or otherwise) the schema printed with syntax I can paste straight back into code (sort of like the above). I suppose some case-matching code could iron out the double quotes, but I'm still unclear what rules are required to get the exact schema out of the test fixture that I can simply re-use in my recurring production (versus test-fixture) code. Is there a way to get this schema to print exactly as I would code it?
Note: This includes double quotes and all the proper StructFields/Types and so forth, so that it's a code-compatible drop-in.
As a sidebar, I thought about saving a fully-formed golden JSON file to use at the start of the Spark job, but I would like to eventually use date fields and other more concise types instead of strings at the applicable structural locations.
How can I get the DataFrame information coming out of my test harness (using a fully-formed JSON input row with comments and replies) to a point where I can drop the schema as source code into my production Scala Spark job?
Note: The best answer is some coding means, but an explanation so I can trudge, plod, toil, wade, plow and slog through the coding is helpful too. :)
I recently ran into this. I'm using Spark 2.0.2, so I don't know if this solution works with earlier versions.
import scala.util.Try
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.catalyst.parser.LegacyTypeStringParser
import org.apache.spark.sql.types.{DataType, StructType}
/** Produce a Schema string from a Dataset */
def serializeSchema(ds: Dataset[_]): String = ds.schema.json
/** Produce a StructType schema object from a JSON string */
def deserializeSchema(json: String): StructType = {
Try(DataType.fromJson(json)).getOrElse(LegacyTypeStringParser.parse(json)) match {
case t: StructType => t
case _ => throw new RuntimeException(s"Failed parsing StructType: $json")
}
}
Note that I copied the "deserialize" function from a private method on Spark's StructType object, so I don't know how well it will be supported across versions.
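A usage sketch of the round trip (where you persist the schema JSON is up to you): capture the schema from the golden-JSON test run, then impose it in the production job:
// From the test harness: capture the schema of the fully-formed data.
val schemaJson: String = serializeSchema(df)
// ... persist schemaJson somewhere your production job can read it, then:
val restored: StructType = deserializeSchema(schemaJson)
val dfProd = spark.read.schema(restored).json("./data/partialRevOnly.json")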
Well, the error message tells you everything you need to know here: StructType expects a sequence of fields as an argument. So in your case the schema should look like this:
StructType(Seq(
StructField("comments", ArrayType(StructType(Seq( // <- Seq[StructField]
StructField("comId", StringType, true),
StructField("content", StringType, true))), true), true),
StructField("createHour", StringType, true),
StructField("gid", StringType, true),
StructField("replies", ArrayType(StructType(Seq( // <- Seq[StructField]
StructField("content", StringType, true),
StructField("repId", StringType, true))), true), true),
StructField("revisions", ArrayType(StructType(Seq( // <- Seq[StructField]
StructField("modDate", StringType, true),
StructField("revId", StringType, true))),true), true)))
I'm trying to read a JSON file which looks like this:
[
{"IFAM":"EQR","KTM":1430006400000,"COL":21,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"}
,{"MLrate":"31","Nrout":"0","up":null,"Crate":"2"}
,{"MLrate":"30","Nrout":"5","up":null,"Crate":"2"}
,{"MLrate":"34","Nrout":"0","up":null,"Crate":"4"}
,{"MLrate":"33","Nrout":"0","up":null,"Crate":"2"}
,{"MLrate":"30","Nrout":"8","up":null,"Crate":"2"}
]}
,{"IFAM":"EQR","KTM":1430006400000,"COL":22,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"}
,{"MLrate":"30","Nrout":"0","up":null,"Crate":"0"}
,{"MLrate":"35","Nrout":"1","up":null,"Crate":"5"}
,{"MLrate":"30","Nrout":"6","up":null,"Crate":"2"}
,{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"}
,{"MLrate":"38","Nrout":"8","up":null,"Crate":"1"}
]}
,...
]
I've tried the command:
val df = sqlContext.read.json("namefile")
df.show()
But this does not work: my columns are not recognized.
If you want to use read.json you need a single JSON document per line. If your file contains a valid JSON array of documents, it simply won't work as expected. Taking your example data, the input file should be formatted like this:
{"IFAM":"EQR","KTM":1430006400000,"COL":21,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"}, {"MLrate":"31","Nrout":"0","up":null,"Crate":"2"}, {"MLrate":"30","Nrout":"5","up":null,"Crate":"2"} ,{"MLrate":"34","Nrout":"0","up":null,"Crate":"4"} ,{"MLrate":"33","Nrout":"0","up":null,"Crate":"2"} ,{"MLrate":"30","Nrout":"8","up":null,"Crate":"2"} ]}
{"IFAM":"EQR","KTM":1430006400000,"COL":22,"DATA":[{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"} ,{"MLrate":"30","Nrout":"0","up":null,"Crate":"0"} ,{"MLrate":"35","Nrout":"1","up":null,"Crate":"5"} ,{"MLrate":"30","Nrout":"6","up":null,"Crate":"2"} ,{"MLrate":"30","Nrout":"0","up":null,"Crate":"2"} ,{"MLrate":"38","Nrout":"8","up":null,"Crate":"1"} ]}
If you use read.json on the above structure you'll see it is parsed as expected:
scala> sqlContext.read.json("namefile").printSchema
root
|-- COL: long (nullable = true)
|-- DATA: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- Crate: string (nullable = true)
| | |-- MLrate: string (nullable = true)
| | |-- Nrout: string (nullable = true)
| | |-- up: string (nullable = true)
|-- IFAM: string (nullable = true)
|-- KTM: long (nullable = true)
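As an aside, if you are on Spark 2.2 or later you can skip the reformatting entirely: the JSON reader's multiLine option parses each file as a single document, and a top-level array becomes one row per element. A sketch:
// Spark 2.2+ only: read a file containing one JSON array directly.
val df = spark.read
  .option("multiLine", true)
  .json("namefile")
df.printSchema()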
If you don't want to reformat your JSON file line by line, you could instead create a schema using StructType and MapType and read it with the Spark SQL functions:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
// Convenience function for turning JSON strings into DataFrames
def jsonToDataFrame(json: String, schema: StructType = null): DataFrame = {
  val reader = spark.read
  Option(schema).foreach(reader.schema)
  reader.json(sc.parallelize(Array(json)))
}
// Using a struct
val schema = new StructType().add("a", new StructType().add("b", IntegerType))
// call the function passing the sample JSON data and the schema as parameter
val json_df = jsonToDataFrame("""
{
"a": {
"b": 1
}
} """, schema)
// now you can access your json fields
val b_value = json_df.select("a.b")
b_value.show()
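For reference, the select pulls the nested field up to a top-level column named b, so the show() call prints:
+---+
|  b|
+---+
|  1|
+---+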
See this reference documentation for more examples and details:
https://docs.databricks.com/spark/latest/spark-sql/complex-types.html#transform-complex-data-types-scala