Spark: importing a split-orientation JSON payload

I have a JSON structure that contains some top-level metadata and a payload equivalent to what pandas calls a split-orientation JSON payload. This was done to reduce duplication, as we ingest a lot of these files.
Usually in pandas I would load the JSON file and pass the required parts (index, column names and data) to the DataFrame constructor, which gives me a flat table that is easy to work with and can then be exported to InfluxDB or SQL.
import json
import pandas as pd

obj = json.load(open('file.json'))
df = pd.DataFrame(index=obj['Payload']['Time'], columns=obj['Payload']['Names'], data=obj['Payload']['Data'])
df['Machine_ID'] = obj['Machine_ID']
df['TimeSend'] = obj['TimeSend']
df['Version'] = obj['Version']
It seems that this schema is not easy to flatten with Spark: the data isn't record-based, so the column names and the data aren't associated with each other. Is there any way I can process this into a flat schema with Spark, or should I add an extra pandas processing step to my pipeline?
root
|-- Machine_ID: string (nullable = true)
|-- TimeSend: string (nullable = true)
|-- Version: long (nullable = true)
|-- Payload: struct (nullable = true)
| |-- Data: array (nullable = true)
| | |-- element: array (containsNull = true)
| | | |-- element: double (containsNull = true)
| |-- Names: array (nullable = true)
| | |-- element: string (containsNull = true)
| |-- Time: array (nullable = true)
| | |-- element: string (containsNull = true)
Edit: I found a way that works, however I'm curious whether the ordering can be relied on, because I split the dataframe before adding ids.
It's probably better to zip Time and Data so that both can be exploded together, and work from there (see the sketch after the resulting schema below).
from pyspark.sql.functions import col, explode, monotonically_increasing_id

# Make flattened dataframe
df_ = df.select(col('Payload.Time').alias('Time'), col('Payload.Names').alias('Names'), col('Payload.Data').alias('Data'), col('Machine_ID'), col('TimeSend'), col('Version'))
# Make exploded `Data` table
columns = df_.rdd.flatMap(lambda x: x.Names).collect()
df_a = df_.select(explode(col('Data')))                            # one row per inner array, in a column named `col`
df_a = df_a.select([df_a['col'][x] for x in range(len(columns))])  # one column per array element
df_a = df_a.toDF(*columns)                                         # rename the columns to the tag names
df_a = df_a.withColumn("id", monotonically_increasing_id())
# Make exploded `Metadata` table
df_b = df_.select(explode(col('Time')).alias('Index'), col('Machine_ID'), col('TimeSend'), col('Version'))
df_b = df_b.withColumn("id", monotonically_increasing_id())
# Join tables
df_c = df_a.join(df_b, "id")
# Schema is now flattened and joined
df_c.printSchema()
root
|-- id: long (nullable = false)
|-- Machine_ID: string (nullable = true)
|-- TimeSend: string (nullable = true)
|-- Version: long (nullable = true)
|-- Index: string (nullable = true) <- From Payload.Time
|-- TagA: double (nullable = true) <- From Payload.Names & Data
|-- TagB: double (nullable = true) <- || -
|-- TagC: double (nullable = true) <- || -
|-- TagD: double (nullable = true) <- || -
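For reference, here is a minimal sketch of the zipping idea mentioned in the edit. It is only a sketch: it assumes Spark 2.4+ (for arrays_zip) and that Names carries the same tag list in every file; names, zipped and flat are placeholder variable names. Zipping keeps each Time value and its Data row inside one struct, so the join on monotonically_increasing_id (and any worry about row ordering) goes away.

from pyspark.sql.functions import arrays_zip, col, explode

names = df.select('Payload.Names').first()[0]  # same tag list in every file (assumption)

zipped = df.select(
    'Machine_ID', 'TimeSend', 'Version',
    explode(arrays_zip(col('Payload.Time').alias('t'),
                       col('Payload.Data').alias('d'))).alias('row')  # Time and Data stay paired
)

flat = zipped.select(
    'Machine_ID', 'TimeSend', 'Version',
    col('row.t').alias('Index'),
    *[col('row.d')[i].alias(name) for i, name in enumerate(names)]
)
flat.printSchema()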

Related

infer schema with complex type

I have a text file with complex-type columns. Could you please explain how to automatically infer a schema with array, map and struct types in Spark?
Source:
name,work_place,gender_age,skills_score,depart_title,work_contractor
Michael|Montreal,Toronto|Male,30|DB:80|Product:Developer^DLead
Will|Montreal|Male,35|Perl:85|Product:Lead,Test:Lead
Shelley|New York|Female,27|Python:80|Test:Lead,COE:Architect
Lucy|Vancouver|Female,57|Sales:89,HR:94|Sales:Lead
code example:
val employeeComplexDF = spark
.read
.option("header", "true")
.option("inferSchema", "true")
.csv("src/main/resources/employee_complex/employee.txt")
parsed schema (actual):
root
|-- name: string (nullable = true)
|-- work_place: string (nullable = true)
|-- gender_age: string (nullable = true)
|-- skills_score: string (nullable = true)
|-- depart_title: string (nullable = true)
|-- work_contractor: string (nullable = true)
The expected schema is one with ArrayType, ...

Converting List with string to json pyspark

I have a pyspark dataframe with an input schema like
|-- runName: string (nullable = true)
|-- action_name: string (nullable = true)
|-- model_payload: string (nullable = true)
|-- model_type: string (nullable = true)
|-- did_pass: string (nullable = true)
|-- ymd: string (nullable = false)
Inside model_payload is a list containing JSON, and I want to pull the data out of it into a separate dataframe. However, at the moment model_payload is a string. The schema I want for the extracted payload is:
root
|-- dataset_A: string (nullable = true)
|-- dataset_B: string (nullable = true)
|-- ks_statistic: double (nullable = true)
|-- pvalue: double (nullable = true)
|-- rejected_hypothesis: boolean (nullable = true)
|-- target_ks_statistic: double (nullable = true)
|-- target_pvalue: double (nullable = true)
|-- action: string (nullable = true)
Where the json in model payload looks like
d = {
    "dataset_A": str,
    "dataset_B": str,
    "ks_statistic": str,
    "pvalue": str,
    "rejected_hypothesis": bool,
    "target_ks_statistic": str,
    "target_pvalue": str,
}
The only solution I've found so far is to convert this to a pandas dataframe and use json.loads(). However, this is very slow and not suitable for large datasets.
According to your payload, you have to create the corresponding struct type in PySpark and use it to parse your data.
from pyspark.sql import functions as F, types as T

schm = T.StructType(
    [
        T.StructField("dataset_A", T.StringType()),
        T.StructField("dataset_B", T.StringType()),
        T.StructField("ks_statistic", T.StringType()),
        T.StructField("pvalue", T.StringType()),
        T.StructField("rejected_hypothesis", T.BooleanType()),
        T.StructField("target_ks_statistic", T.StringType()),
        T.StructField("target_pvalue", T.StringType()),
    ]
)

df.withColumn("model_payload", F.from_json("model_payload", schm)).select(
    "model_payload.*"
)
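Since the question mentions that model_payload actually holds a list containing JSON, a variant of the above (just a sketch, assuming the string is a JSON array of such objects; df_payload and arr_schema are placeholder names) is to wrap the struct in an ArrayType and explode it:

from pyspark.sql import functions as F, types as T

arr_schema = T.ArrayType(schm)  # reuse the struct schema defined above

df_payload = (
    df.withColumn("model_payload", F.from_json("model_payload", arr_schema))
      .withColumn("model_payload", F.explode("model_payload"))  # one row per list element
      .select("model_payload.*")
)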

How to create a schema from JSON file using Spark Scala for subset of fields?

I am trying to create a schema of a nested JSON file so that it can become a dataframe.
However, I am not sure if there is a way to create a schema without defining all the fields in the JSON file, given that I only need the 'id' and 'text' from it - a subset.
I am currently doing this using Scala in the Spark shell. As you can see from the file, I downloaded it as part-00000 from HDFS.
From the manuals on JSON:
Apply the schema using the .schema method. This read returns only
the columns specified in the schema.
So you are good to go with the approach you describe.
E.g.
import org.apache.spark.sql.types.{StructType, StructField, StringType, IntegerType};
val schema = new StructType()
.add("op_ts", StringType, true)
val df = spark.read.schema(schema)
.option("multiLine", true).option("mode", "PERMISSIVE")
.json("/FileStore/tables/json_stuff.txt")
df.printSchema()
df.show(false)
returns:
root
|-- op_ts: string (nullable = true)
+--------------------------+
|op_ts |
+--------------------------+
|2019-05-31 04:24:34.000327|
+--------------------------+
for this schema:
root
|-- after: struct (nullable = true)
| |-- CODE: string (nullable = true)
| |-- CREATED: string (nullable = true)
| |-- ID: long (nullable = true)
| |-- STATUS: string (nullable = true)
| |-- UPDATE_TIME: string (nullable = true)
|-- before: string (nullable = true)
|-- current_ts: string (nullable = true)
|-- op_ts: string (nullable = true)
|-- op_type: string (nullable = true)
|-- pos: string (nullable = true)
|-- primary_keys: array (nullable = true)
| |-- element: string (containsNull = true)
|-- table: string (nullable = true)
|-- tokens: struct (nullable = true)
| |-- csn: string (nullable = true)
| |-- txid: string (nullable = true)
obtained from the same file using:
val df = spark.read
.option("multiLine", true).option("mode", "PERMISSIVE")
.json("/FileStore/tables/json_stuff.txt")
df.printSchema()
df.show(false)
The latter is just for proof.

xml field in JSON data

I would like to know how to read and parse the xml field which is a part of JSON data.
root
|-- fields: struct (nullable = true)
| |-- custid: string (nullable = true)
| |-- password: string (nullable = true)
| |-- role: string (nullable = true)
| |-- xml_data: string (nullable = true)
and xml_data has lots of columns in it. Let's say the fields inside xml_data are like nested columns of the fields struct. So how do I parse all of the columns "custid", "password", "role", "xml_data.refid", "xml_data.refname" into one dataframe?
Long question short: how do I parse and read XML data that sits inside a JSON file as string content?
This is a little tricky, but it can be achieved in the following simple steps:
Parse the XML string to a JSON string and append an identifier to it (below case: '').
Convert the entire dataframe to a Dataset of JSON strings.
Map over the Dataset of strings, creating valid JSON by locating the identifier appended in step 1.
Convert the Dataset of valid JSON back to a dataframe, and that's it, done!
import spark.implicits._
import scala.xml.XML
import org.json4s.Xml.toJson
import org.json4s.jackson.JsonMethods.{compact, render}
import org.apache.spark.sql.functions.{col, udf}

val rdd = spark
  .sparkContext
  .parallelize(Seq("{\"fields\":{\"custid\":\"custid\",\"password\":\"password\",\"role\":\"role\",\"xml_data\":\"<person><refname>Test Person</refname><country>India</country></person>\"}}"))

val df = spark.read.json(rdd.toDS())

// Step 1: parse the XML string into JSON and wrap it with the '' identifier
val xmlToJsonUDF = udf { xmlString: String =>
  val xml = XML.loadString(xmlString)
  s"''${compact(render(toJson(xml)))}''"
}

val xmlParsedDf = df.withColumn("xml_data", xmlToJsonUDF(col("fields.xml_data")))

// Step 2: convert the whole dataframe to a Dataset of JSON strings
val jsonDs = xmlParsedDf.toJSON

// Step 3: strip the escaping and the identifiers so each row becomes valid JSON again
val validJsonDs = jsonDs.map(value => {
  val startIndex = value.indexOf("\"''")
  val endIndex = value.indexOf("''\"")
  val data = value.substring(startIndex, endIndex).replace("\\", "")
  val validJson = s"${value.substring(0, startIndex)}$data${value.substring(endIndex)}"
    .replace("\"''", "")
    .replace("''\"", "")
  validJson
})

// Step 4: read the valid JSON back into a dataframe
val finalDf = spark.read.json(validJsonDs)
finalDf.show(10)
finalDf.printSchema()
finalDf
  .select("fields.custid", "fields.password", "fields.role", "fields.xml_data", "xml_data.person.refname", "xml_data.person.country")
  .show(10)
Input & Output:
//Input
{"fields":{"custid":"custid","password":"password","role":"role","xml_data":"<person><refname>Test Person</refname><country>India</country></person>"}}
//Final Dataframe
+--------------------+--------------------+
| fields| xml_data|
+--------------------+--------------------+
|[custid, password...|[[India, Test Per...|
+--------------------+--------------------+
//Final Dataframe Schema
root
|-- fields: struct (nullable = true)
| |-- custid: string (nullable = true)
| |-- password: string (nullable = true)
| |-- role: string (nullable = true)
| |-- xml_data: string (nullable = true)
|-- xml_data: struct (nullable = true)
| |-- person: struct (nullable = true)
| | |-- country: string (nullable = true)
| | |-- refname: string (nullable = true)

Re-using A Schema from JSON within a Spark DataFrame using Scala

I have some JSON data like this:
{"gid":"111","createHour":"2014-10-20 01:00:00.0","revisions":[{"revId":"2","modDate":"2014-11-20 01:40:37.0"},{"revId":"4","modDate":"2014-11-20 01:40:40.0"}],"comments":[],"replies":[]}
{"gid":"222","createHour":"2014-12-20 01:00:00.0","revisions":[{"revId":"2","modDate":"2014-11-20 01:39:31.0"},{"revId":"4","modDate":"2014-11-20 01:39:34.0"}],"comments":[],"replies":[]}
{"gid":"333","createHour":"2015-01-21 00:00:00.0","revisions":[{"revId":"25","modDate":"2014-11-21 00:34:53.0"},{"revId":"110","modDate":"2014-11-21 00:47:10.0"}],"comments":[{"comId":"4432","content":"How are you?"}],"replies":[{"repId":"4441","content":"I am good."}]}
{"gid":"444","createHour":"2015-09-20 23:00:00.0","revisions":[{"revId":"2","modDate":"2014-11-20 23:23:47.0"}],"comments":[],"replies":[]}
{"gid":"555","createHour":"2016-01-21 01:00:00.0","revisions":[{"revId":"135","modDate":"2014-11-21 01:01:58.0"}],"comments":[],"replies":[]}
{"gid":"666","createHour":"2016-04-23 19:00:00.0","revisions":[{"revId":"136","modDate":"2014-11-23 19:50:51.0"}],"comments":[],"replies":[]}
I can read it in:
val df = sqlContext.read.json("./data/full.json")
I can print the schema with df.printSchema
root
|-- comments: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- comId: string (nullable = true)
| | |-- content: string (nullable = true)
|-- createHour: string (nullable = true)
|-- gid: string (nullable = true)
|-- replies: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- content: string (nullable = true)
| | |-- repId: string (nullable = true)
|-- revisions: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- modDate: string (nullable = true)
| | |-- revId: string (nullable = true)
I can show the data with df.show(10, false)
+---------------------+---------------------+---+-------------------+---------------------------------------------------------+
|comments |createHour |gid|replies |revisions |
+---------------------+---------------------+---+-------------------+---------------------------------------------------------+
|[] |2014-10-20 01:00:00.0|111|[] |[[2014-11-20 01:40:37.0,2], [2014-11-20 01:40:40.0,4]] |
|[] |2014-12-20 01:00:00.0|222|[] |[[2014-11-20 01:39:31.0,2], [2014-11-20 01:39:34.0,4]] |
|[[4432,How are you?]]|2015-01-21 00:00:00.0|333|[[I am good.,4441]]|[[2014-11-21 00:34:53.0,25], [2014-11-21 00:47:10.0,110]]|
|[] |2015-09-20 23:00:00.0|444|[] |[[2014-11-20 23:23:47.0,2]] |
|[] |2016-01-21 01:00:00.0|555|[] |[[2014-11-21 01:01:58.0,135]] |
|[] |2016-04-23 19:00:00.0|666|[] |[[2014-11-23 19:50:51.0,136]] |
+---------------------+---------------------+---+-------------------+---------------------------------------------------------+
I can print/read the schema with val dfSc = df.schema as:
StructType(StructField(comments,ArrayType(StructType(StructField(comId,StringType,true), StructField(content,StringType,true)),true),true), StructField(createHour,StringType,true), StructField(gid,StringType,true), StructField(replies,ArrayType(StructType(StructField(content,StringType,true), StructField(repId,StringType,true)),true),true), StructField(revisions,ArrayType(StructType(StructField(modDate,StringType,true), StructField(revId,StringType,true)),true),true))
I can print this out nicer:
println(df.schema.fields.mkString(",\n"))
StructField(comments,ArrayType(StructType(StructField(comId,StringType,true), StructField(content,StringType,true)),true),true),
StructField(createHour,StringType,true),
StructField(gid,StringType,true),
StructField(replies,ArrayType(StructType(StructField(content,StringType,true), StructField(repId,StringType,true)),true),true),
StructField(revisions,ArrayType(StructType(StructField(modDate,StringType,true), StructField(revId,StringType,true)),true),true)
Now, if I read in the same file without the comments and replies rows (simply deleting those entries), using val df2 = sqlContext.read.json("./data/partialRevOnly.json"), I get something like this from printSchema:
root
|-- comments: array (nullable = true)
| |-- element: string (containsNull = true)
|-- createHour: string (nullable = true)
|-- gid: string (nullable = true)
|-- replies: array (nullable = true)
| |-- element: string (containsNull = true)
|-- revisions: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- modDate: string (nullable = true)
| | |-- revId: string (nullable = true)
I don't like that, so I use:
val df3 = sqlContext.read.
schema(dfSc).
json("./data/partialRevOnly.json")
where the original schema was dfSc. So now I get exactly the schema I had before with the removed data:
root
|-- comments: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- comId: string (nullable = true)
| | |-- content: string (nullable = true)
|-- createHour: string (nullable = true)
|-- gid: string (nullable = true)
|-- replies: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- content: string (nullable = true)
| | |-- repId: string (nullable = true)
|-- revisions: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- modDate: string (nullable = true)
| | |-- revId: string (nullable = true)
This is perfect ... well almost. I would like to assign this schema to a variable similar to this:
val textSc = StructField(comments,ArrayType(StructType(StructField(comId,StringType,true), StructField(content,StringType,true)),true),true),
StructField(createHour,StringType,true),
StructField(gid,StringType,true),
StructField(replies,ArrayType(StructType(StructField(content,StringType,true), StructField(repId,StringType,true)),true),true),
StructField(revisions,ArrayType(StructType(StructField(modDate,StringType,true), StructField(revId,StringType,true)),true),true)
OK - this won't work because of the missing double quotes and 'some other structural' stuff, so I try this (which gives an error):
import org.apache.spark.sql.types._
val textSc = StructType(Array(
StructField("comments",ArrayType(StructType(StructField("comId",StringType,true), StructField("content",StringType,true)),true),true),
StructField("createHour",StringType,true),
StructField("gid",StringType,true),
StructField("replies",ArrayType(StructType(StructField("content",StringType,true), StructField("repId",StringType,true)),true),true),
StructField("revisions",ArrayType(StructType(StructField("modDate",StringType,true), StructField("revId",StringType,true)),true),true)
))
Name: Compile Error
Message: <console>:78: error: overloaded method value apply with alternatives:
(fields: Array[org.apache.spark.sql.types.StructField])org.apache.spark.sql.types.StructType <and>
(fields: java.util.List[org.apache.spark.sql.types.StructField])org.apache.spark.sql.types.StructType <and>
(fields: Seq[org.apache.spark.sql.types.StructField])org.apache.spark.sql.types.StructType
cannot be applied to (org.apache.spark.sql.types.StructField, org.apache.spark.sql.types.StructField)
StructField("comments",ArrayType(StructType(StructField("comId",StringType,true), StructField("content",StringType,true)),true),true),
... Without this error (that I cannot figure a quick way around), I would like to then use textSc in place of dfSc to read in the JSON data with an imposed schema.
I cannot find a '1-to-1 match' way of getting (via println or ...) the schema with acceptable syntax (sort of like above). I suppose some coding can be done with case matching to iron out the double quotes. However, I'm still unclear what rules are required to get the exact schema out of the test fixture that I can simply re-use in my recurring production (versus test fixture) code. Is there a way to get this schema to print exactly as I would code it?
Note: This includes double quotes and all the proper StructField/Types and so forth to be code-compatible drop in.
As a sidebar, I thought about saving a fully-formed golden JSON file to use at the start of the Spark job, but I would like to eventually use date fields and other more concise types instead of strings at the applicable structural locations.
How can I get the dataFrame information coming out of my test harness (using a fully-formed JSON input row with comments and replies) to a point where I can drop the schema as source-code into production code Scala Spark job?
Note: The best answer is some coding means, but an explanation so I can trudge, plod, toil, wade, plow and slog thru the coding is helpful too. :)
I recently ran into this. I'm using Spark 2.0.2 so I don't know if this solution works with earlier versions.
import scala.util.Try
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.catalyst.parser.LegacyTypeStringParser
import org.apache.spark.sql.types.{DataType, StructType}

/** Produce a Schema string from a Dataset */
def serializeSchema(ds: Dataset[_]): String = ds.schema.json

/** Produce a StructType schema object from a JSON string */
def deserializeSchema(json: String): StructType = {
  Try(DataType.fromJson(json)).getOrElse(LegacyTypeStringParser.parse(json)) match {
    case t: StructType => t
    case _ => throw new RuntimeException(s"Failed parsing StructType: $json")
  }
}
Note that I copied the "deserialize" function from a private function in the Spark StructType object. I don't know how well it will be supported across versions.
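As a side note for PySpark users, a rough equivalent of the same round-trip can be sketched with the public schema.json() and StructType.fromJson helpers (variable names below are illustrative only):

import json
from pyspark.sql.types import StructType

# Serialize the "golden" schema once, e.g. from the fully-formed test file
schema_json = df.schema.json()

# Later, rebuild the StructType and impose it on partial files
schema = StructType.fromJson(json.loads(schema_json))
df_restored = spark.read.schema(schema).json("./data/partialRevOnly.json")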
Well, the error message tells you everything you need to know here - StructType expects a sequence of fields as an argument. So in your case the schema should look like this:
StructType(Seq(
  StructField("comments", ArrayType(StructType(Seq( // <- Seq[StructField]
    StructField("comId", StringType, true),
    StructField("content", StringType, true))), true), true),
  StructField("createHour", StringType, true),
  StructField("gid", StringType, true),
  StructField("replies", ArrayType(StructType(Seq( // <- Seq[StructField]
    StructField("content", StringType, true),
    StructField("repId", StringType, true))), true), true),
  StructField("revisions", ArrayType(StructType(Seq( // <- Seq[StructField]
    StructField("modDate", StringType, true),
    StructField("revId", StringType, true))), true), true)))