Need help parsing strange JSON with Scala

I am parsing JSON into a Spark DataFrame in Scala. I have a nested JSON file of 50 records of different household items. In the JSON I am trying to parse, the equipment tag looks like this:
"equipment":[{"tv":[""]}]
Because of this, the item name (e.g. tv here) becomes a column name rather than a value.
Ideally this tag would look like:
"equipment":["tv"]
Is there a way to parse this type of JSON tag/content?
As a result, the DataFrame schema currently shows as:
|-- equipment: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- ac: array (nullable = true)
| | | |-- element: string (containsNull = true)
| | |-- tv: array (nullable = true)
| | | |-- element: string (containsNull = true)
Above you can see that ac and tv become column headers. Instead, I need them shown as values. The DataFrame should look like:
+----------+
|equipment |
+----------+
|tv |
|ac |
+----------+

A single explode would normally do the trick, but given your schema you need two explode calls:
val newdf = dataframe.withColumn("equipment", explode($"equipment"))
newdf.withColumn("equipment", explode(array($"equipment.*"))).show(false)
With these steps you should have the desired result as in the question.
Edited
From your comments it seems that you want to explode the field names, not the values. The following code should work then:
val newdf = dataframe.withColumn("equipment", explode($"equipment"))
sc.parallelize(newdf.select("equipment.*").schema.fieldNames.toSeq).toDF("equipment").show(false)
Here's the complete code I am testing with
val data = Seq("""{"equipment":[{"tv":[""],"ac":[""]}]}""")
val dataframe = sqlContext.read.json(sc.parallelize(data))
dataframe.printSchema()
val newdf = dataframe.withColumn("equipment", explode($"equipment"))
sc.parallelize(newdf.select("equipment.*").schema.fieldNames.toSeq).toDF("equipment").show(false)
The printed schema matches yours:
root
|-- equipment: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- ac: array (nullable = true)
| | | |-- element: string (containsNull = true)
| | |-- tv: array (nullable = true)
| | | |-- element: string (containsNull = true)
And the result matches your expected output:
+---------+
|equipment|
+---------+
|ac |
|tv |
+---------+
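To see why the field names carry the data here, the same extraction can be sketched in plain Python on the raw record, with no Spark involved (the record string is copied from the test data above):

```python
import json

# The problematic record: equipment is an array of objects whose *keys*
# are the item names, so a schema-based reader turns "tv"/"ac" into columns.
record = json.loads('{"equipment":[{"tv":[""],"ac":[""]}]}')

# What the newdf.select("equipment.*").schema.fieldNames step does,
# expressed on the raw JSON: collect the keys of each struct in the array.
equipment = sorted(k for obj in record["equipment"] for k in obj.keys())
print(equipment)  # ['ac', 'tv']
```

The values in the arrays (the empty strings) never matter; all the information lives in the keys, which is exactly why the answer reads `schema.fieldNames` instead of exploding values.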

Related

scala dataframe column names replace '-' with _ for nested json

I am working with nested JSON in Scala and need to replace the - in column names with _.
Schema of json:
|-- a-type: struct (nullable = true)
| |-- x-Type: array (nullable = true)
| | |-- element: string (containsNull = true)
| |-- part: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- x-Type: array (nullable = true)
| | | | |-- element: string (containsNull = true)
| | | |-- Length: long (nullable = true)
| | | |-- Order: long (nullable = true)
| | | |-- y-Name: string (nullable = true)
| | | |-- Payload-Text: string (nullable = true)
| |-- Date: string (nullable = true)
The code below only works at the first level, but I need to replace - with _ at all levels. Any help is really appreciated.
Code used currently:
var scJsonDFCorrectedCols = scJsonDF
scJsonDF.columns.foreach { col =>
  println(col + " after column replace: " + col.replaceAll("-", "_"))
  scJsonDFCorrectedCols = scJsonDFCorrectedCols.withColumnRenamed(col, col.replaceAll("-", "_"))
}
I am looking for a dynamic solution as there are different structures available.
One solution I found is to flatten the JSON and update the column names. I used the gist at https://gist.github.com/fahadsiddiqui/d5cff15698f9dc57e2dd7d7052c6cc43 and updated one line:
// original line
col(x.toString).as(x.toString.replace(".", "_"))
// updated line
col(x.toString).as(x.toString.replaceAll("-", "_").replace(".", "_"))
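The reason a flatten-based approach works at every level is that the rename has to recurse into the schema, not just map over the top-level columns. The idea can be sketched in plain Python on a nested dict (`rename_keys` is a hypothetical helper, not a Spark API; the sample data loosely mirrors the schema above):

```python
def rename_keys(node, old="-", new="_"):
    """Recursively replace `old` with `new` in every dict key, at any depth."""
    if isinstance(node, dict):
        return {k.replace(old, new): rename_keys(v, old, new) for k, v in node.items()}
    if isinstance(node, list):
        return [rename_keys(v, old, new) for v in node]
    return node

data = {"a-type": {"x-Type": ["s"], "part": [{"y-Name": "n", "Payload-Text": "p"}]}}
print(rename_keys(data))
```

In Spark the equivalent recursion walks the `StructType` (rebuilding each `StructField` with a cleaned name, descending into nested structs and array element types), which is what the gist's flatten step sidesteps by producing dotted top-level names first.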

How to read a string value in JSON array struct?

This is my code:
df_05_body = spark.sql("""
select
gtin
, principalBody.constituents
from
v_df_04""")
df_05_body.createOrReplaceTempView("v_df_05_body")
df_05_body.printSchema()
This is the schema:
root
|-- gtin: array (nullable = true)
| |-- element: string (containsNull = true)
|-- constituents: array (nullable = true)
| |-- element: array (containsNull = true)
| | |-- element: struct (containsNull = true)
| | | |-- constituentCategory: struct (nullable = true)
| | | | |-- value: string (nullable = true)
| | | | |-- valueRange: string (nullable = true)
How do I change the principalBody.constituents line in the SQL to read the fields constituentCategory.value and constituentCategory.valueRange?
The column constituents is an array of arrays of structs. If your intent is to get a flat structure then you'll need to flatten the nested arrays, then explode:
df_05_body = spark.sql("""
WITH
v_df_04_exploded AS (
SELECT
gtin,
explode(flatten(principalBody.constituents)) AS constituent
FROM
v_df_04 )
SELECT
gtin,
constituent.constituentCategory.value,
constituent.constituentCategory.valueRange
FROM
v_df_04_exploded
""")
Or simply use inline after flatten:
df_05_body = spark.sql("""
SELECT
gtin,
inline(flatten(principalBody.constituents))
FROM
v_df_04
""")

Spark Scala - Split Array of Structs into Dataframe Columns

I have a nested source JSON file that contains an array of structs. The number of structs varies greatly from row to row, and I would like to use Spark (Scala) to dynamically create new DataFrame columns from the keys/values of the structs, where the key is the column name and the value is the column value.
Example minified JSON record
{"key1":{"key2":{"key3":"AK","key4":"EU","key5":{"key6":"001","key7":"N","values":[{"name":"valuesColumn1","value":"9.876"},{"name":"valuesColumn2","value":"1.2345"},{"name":"valuesColumn3","value":"8.675309"}]}}}}
DataFrame schema
scala> val df = spark.read.json("file:///tmp/nested_test.json")
root
|-- key1: struct (nullable = true)
| |-- key2: struct (nullable = true)
| | |-- key3: string (nullable = true)
| | |-- key4: string (nullable = true)
| | |-- key5: struct (nullable = true)
| | | |-- key6: string (nullable = true)
| | | |-- key7: string (nullable = true)
| | | |-- values: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- name: string (nullable = true)
| | | | | |-- value: string (nullable = true)
What's been done so far
df.select(
($"key1.key2.key3").as("key3"),
($"key1.key2.key4").as("key4"),
($"key1.key2.key5.key6").as("key6"),
($"key1.key2.key5.key7").as("key7"),
($"key1.key2.key5.values").as("values")).
show(truncate=false)
+----+----+----+----+----------------------------------------------------------------------------+
|key3|key4|key6|key7|values |
+----+----+----+----+----------------------------------------------------------------------------+
|AK |EU |001 |N |[[valuesColumn1, 9.876], [valuesColumn2, 1.2345], [valuesColumn3, 8.675309]]|
+----+----+----+----+----------------------------------------------------------------------------+
There is an array of 3 structs here, but those structs need to be split into 3 separate columns dynamically (the count can vary greatly), and I am not sure how to do it.
Sample desired output
Notice that there were 3 new columns produced for each of the array elements within the values array.
+----+----+----+----+-------------+-------------+-------------+
|key3|key4|key6|key7|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+-------------+-------------+-------------+
|AK  |EU  |001 |N   |9.876        |1.2345       |8.675309     |
+----+----+----+----+-------------+-------------+-------------+
Reference
I believe that the desired solution is something similar to what was discussed in this SO post but with 2 main differences:
The number of columns is hardcoded to 3 in the SO post, but in my case the number of array elements is unknown.
The column names need to be driven by the name column, and the column values by the value column.
...
| | | | |-- element: struct (containsNull = true)
| | | | | |-- name: string (nullable = true)
| | | | | |-- value: string (nullable = true)
You could do it this way:
val sac = new SparkContext("local[*]", "first Program")
val sqlc = new SQLContext(sac)
import sqlc.implicits._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
val json = """{"key1":{"key2":{"key3":"AK","key4":"EU","key5":{"key6":"001","key7":"N","values":[{"name":"valuesColumn1","value":"9.876"},{"name":"valuesColumn2","value":"1.2345"},{"name":"valuesColumn3","value":"8.675309"}]}}}}"""
val df1 = sqlc.read.json(Seq(json).toDS())
val df2 = df1.select(
($"key1.key2.key3").as("key3"),
($"key1.key2.key4").as("key4"),
($"key1.key2.key5.key6").as("key6"),
($"key1.key2.key5.key7").as("key7"),
($"key1.key2.key5.values").as("values")
)
val numColsVal = df2
.withColumn("values_size", size($"values"))
.agg(max($"values_size"))
.head()
.getInt(0)
val finalDFColumns = df2.select(explode($"values").as("values"))
  .select("values.*").select("name").distinct
  .map(_.getAs[String](0)).orderBy($"value".asc).collect
  .foldLeft(df2.limit(0))((cdf, c) => cdf.withColumn(c, lit(null)))
  .columns
val finalDF = df2.select($"*" +: (0 until numColsVal).map(i =>
  $"values".getItem(i)("value").as($"values".getItem(i)("name").toString)): _*)
finalDF.columns.zip(finalDFColumns)
  .foldLeft(finalDF)((fdf, column) => fdf.withColumnRenamed(column._1, column._2))
  .show(false)
finalDF.columns.zip(finalDFColumns)
  .foldLeft(finalDF)((fdf, column) => fdf.withColumnRenamed(column._1, column._2))
  .drop($"values").show(false)
The resulting final output:
+----+----+----+----+-------------+-------------+-------------+
|key3|key4|key6|key7|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+-------------+-------------+-------------+
|AK |EU |001 |N |9.876 |1.2345 |8.675309 |
+----+----+----+----+-------------+-------------+-------------+
Hope I got your question right!
----------- EDIT with explanation -----------
This block gets the number of columns to be created for the array structure.
val numColsVal = df2
.withColumn("values_size", size($"values"))
.agg(max($"values_size"))
.head()
.getInt(0)
finalDFColumns holds the column names of an empty DataFrame built with all the expected output columns, initialized with null values.
The block below returns the distinct column names that need to be created from the array structure:
df2.select(explode($"values").as("values")).select("values.*").select("name").distinct.map(_.getAs[String](0)).orderBy($"value".asc).collect
The block below combines these new columns with the other columns of df2, initialized as empty/null:
foldLeft(df2.limit(0))((cdf, c) => cdf.withColumn(c, lit(null)))
Combining these two blocks and printing the output, you get:
+----+----+----+----+------+-------------+-------------+-------------+
|key3|key4|key6|key7|values|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+------+-------------+-------------+-------------+
+----+----+----+----+------+-------------+-------------+-------------+
Now the structure is ready; we need the values for the corresponding columns. The block below gets them:
df2.select($"*" +: (0 until numColsVal).map(i => $"values".getItem(i)("value").as($"values".getItem(i)("name").toString)): _*)
This results in:
+----+----+----+----+--------------------+---------------+---------------+---------------+
|key3|key4|key6|key7| values|values[0][name]|values[1][name]|values[2][name]|
+----+----+----+----+--------------------+---------------+---------------+---------------+
| AK| EU| 001| N|[[valuesColumn1, ...| 9.876| 1.2345| 8.675309|
+----+----+----+----+--------------------+---------------+---------------+---------------+
Now we need to rename the columns to match the first block above, so we zip the two column lists and use foldLeft to rename the output columns:
finalDF.columns.zip(finalDFColumns).foldLeft(finalDF)((fdf, column) => fdf.withColumnRenamed(column._1, column._2)).show(false)
This results in the below structure:
+----+----+----+----+--------------------+-------------+-------------+-------------+
|key3|key4|key6|key7| values|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+--------------------+-------------+-------------+-------------+
| AK| EU| 001| N|[[valuesColumn1, ...| 9.876| 1.2345| 8.675309|
+----+----+----+----+--------------------+-------------+-------------+-------------+
We are almost there. We now just need to remove the unwanted values column like this:
finalDF.columns.zip(finalDFColumns).foldLeft(finalDF)((fdf, column) => fdf.withColumnRenamed(column._1, column._2)).drop($"values").show(false)
This gives the expected output:
+----+----+----+----+-------------+-------------+-------------+
|key3|key4|key6|key7|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+-------------+-------------+-------------+
|AK |EU |001 |N |9.876 |1.2345 |8.675309 |
+----+----+----+----+-------------+-------------+-------------+
I'm not sure if I was able to explain it clearly, but if you break the statements above into pieces and print the intermediate results, you will see how we reach the output. You can also find explanations and examples of the individual functions used here online.
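The core trick in both the select expression and the rename step is pairing each array element's name with its value. In plain Python (no Spark; the sample data is copied from the question's record), that pairing is just a dict comprehension:

```python
# The values array from the sample record: each element carries its own
# column name ("name") and cell value ("value").
values = [
    {"name": "valuesColumn1", "value": "9.876"},
    {"name": "valuesColumn2", "value": "1.2345"},
    {"name": "valuesColumn3", "value": "8.675309"},
]

# Equivalent of aliasing values.getItem(i)("value") as values.getItem(i)("name")
# for every index i: one output column per element, keyed by its name.
row = {v["name"]: v["value"] for v in values}
print(row)
```

The Spark version has to do this in two passes (build the columns, then rename them) because column aliases must be known when the select is constructed, which is why the answer zips `finalDF.columns` with `finalDFColumns` afterwards.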
I found that an explode-and-pivot approach performed much better and is easier to understand:
val json = """{"key1":{"key2":{"key3":"AK","key4":"EU","key5":{"key6":"001","key7":"N","values":[{"name":"valuesColumn1","value":"9.876"},{"name":"valuesColumn2","value":"1.2345"},{"name":"valuesColumn3","value":"8.675309"}]}}}}"""
val df = spark.read.json(Seq(json).toDS())
// schema
df.printSchema
root
|-- key1: struct (nullable = true)
| |-- key2: struct (nullable = true)
| | |-- key3: string (nullable = true)
| | |-- key4: string (nullable = true)
| | |-- key5: struct (nullable = true)
| | | |-- key6: string (nullable = true)
| | | |-- key7: string (nullable = true)
| | | |-- values: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- name: string (nullable = true)
| | | | | |-- value: string (nullable = true)
// create final df
val finalDf = df.
select(
$"key1.key2.key3".as("key3"),
$"key1.key2.key4".as("key4"),
$"key1.key2.key5.key6".as("key6"),
$"key1.key2.key5.key7".as("key7"),
explode($"key1.key2.key5.values").as("values")
).
groupBy(
$"key3", $"key4", $"key6", $"key7"
).
pivot("values.name").
agg(min("values.value")).alias("values.name")
// result
finalDf.show
+----+----+----+----+-------------+-------------+-------------+
|key3|key4|key6|key7|valuesColumn1|valuesColumn2|valuesColumn3|
+----+----+----+----+-------------+-------------+-------------+
| AK| EU| 001| N| 9.876| 1.2345| 8.675309|
+----+----+----+----+-------------+-------------+-------------+
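What groupBy + pivot + agg(min) produces can be mimicked in plain Python (the tuples below are a made-up stand-in for the rows that come out of the explode step):

```python
from collections import defaultdict

# One exploded row per (group key, name, value), as produced by
# explode($"key1.key2.key5.values") after selecting the key columns.
rows = [
    (("AK", "EU", "001", "N"), "valuesColumn1", "9.876"),
    (("AK", "EU", "001", "N"), "valuesColumn2", "1.2345"),
    (("AK", "EU", "001", "N"), "valuesColumn3", "8.675309"),
]

# pivot("values.name") + agg(min("values.value")): one output row per group,
# one column per distinct name; min resolves duplicates within a group.
pivoted = defaultdict(dict)
for key, name, value in rows:
    cell = pivoted[key].get(name)
    pivoted[key][name] = value if cell is None else min(cell, value)
print(dict(pivoted))
```

The aggregate is needed only because pivot requires one: with a single value per (group, name) pair, min simply passes it through.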

Writing a nested dataframe to JSON file removes camelcase on attributes

I have a big nested DataFrame with lots of columns. Here is an extract of the anonymized schema:
df.printSchema()
root
|-- column1: null (nullable = true)
|-- camelCaseColumn1: string (nullable = false)
|-- column2: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- column3: string (nullable = true)
| | |-- camelCaseColumn2: string (nullable = true)
| | |-- column4: string (nullable = true)
| | |-- camelCaseColumn3: struct (nullable = true)
| | | |-- column5: null (nullable = true)
| | | |-- column6: null (nullable = true)
| | |-- camelCaseColumn4: string (nullable = true)
I write the DataFrame to JSON format :
df.write.mode("overwrite").json(targetPath)
Then I use the copyMerge() function to merge all the generated part files:
FileUtil.copyMerge(fs, srcPath, fs, dstFile, deleteSource, configuration, null)
When I then fetch the resulting JSON file, for example with hdfs dfs -cat or -get:
{
"column1":"value",
"camelCaseColumn1":"value",
"column2":[
{
"column3":"value",
"camelcasecolumn2":"value",
"column4":"value",
"camelcasecolumn3":{
"column5":"value",
"column6":"value"
},
"camelcasecolumn4":"value",
We see that camelCase has been preserved at the first levels of the JSON, but at deeper levels it has been lowercased.
Is there an explanation, and perhaps a way to preserve camelCase on JSON attributes regardless of their level in the file? We are using Spark 1.6.3 in our environment.
EDIT : Found a solution, see comment below.

Unable to fetch JSON column using Spark DataFrame: org.apache.spark.sql.AnalysisException: cannot resolve 'explode'

Can someone help me in this scenario? I am reading a JSON file using Spark/Scala and trying to access a column, but I get the error message below.
org.apache.spark.sql.AnalysisException: cannot resolve
'explode(`b2b_bill_products_prod_details`.`amt`)'
due to data type mismatch: input to function explode should be
array or map type, not DoubleType;;
Please see the JSON schema and my code below.
root
|-- b2b: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- add1: string (nullable = true)
| | |-- bill: array (nullable = true)
| | | |-- element: struct (containsNull = true)
| | | | |-- amt: double (nullable = true)
| | | | |-- products: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- prod_details: struct (nullable = true)
| | | | | | | |-- amt: double (nullable = true)
I want to access the amt field (last line in the JSON schema). I am using the Spark/Scala code below:
df.withColumn("b2b_bill",explode($"b2b.bill"))
.withColumn("b2b_bill_products",explode($"b2b_bill.products"))
.withColumn("b2b_bill_products_prod_details", explode($"b2b_bill_products.prod_details"))
.withColumn("b2b_bill_products_prod_details_amt",explode($"b2b_bill_products_prod_details.amt"))
Your fourth explode is applied to the amt: double column, while explode expects an array or map input type. That's the error being reported.
Edit
You can access the innermost amt field by peeling each array layer with explode and then selecting the field directly (no final explode, since amt is a plain double):
df.withColumn("b2b_bill",explode($"b2b.bill"))
.withColumn("b2b_bill_products",explode($"b2b_bill.products"))
.withColumn("b2b_bill_products_prod_details_amt", $"b2b_bill_products.element.prod_details.amt")