How to convert nested JSON to a dataframe [duplicate] - json

This question already has answers here:
reading json file in pyspark
(4 answers)
Closed 3 years ago.
I have the following JSON data and I want to convert it into a dataframe:
[
{FlierNumber:,BaggageTypeReturn:,FirstName:K,Title:1,MiddleName:D,LastName:Gupta,MealTypeOnward:,DateOfBirth:,BaggageTypeOnward:,SeatTypeOnward:,MealTypeReturn:,FrequentAirline:null,Type:A,SeatTypeReturn:},
{FlierNumber:,BaggageTypeReturn:,FirstName:Sweety,Title:2,MiddleName:,LastName:Gupta,MealTypeOnward:,DateOfBirth:,BaggageTypeOnward:,SeatTypeOnward:,MealTypeReturn:,FrequentAirline:null,Type:A,SeatTypeReturn:}
]

The JSON you gave above is invalid. Here is the syntactically correct JSON:
[{"FlierNumber":"","BaggageTypeReturn":"","FirstName":"K","Title":"1","MiddleName":"D","LastName":"Gupta","MealTypeOnward":"","DateOfBirth":"","BaggageTypeOnward":"","SeatTypeOnward":"","MealTypeReturn":"","FrequentAirline":"null","Type":"A","SeatTypeReturn":""},{"FlierNumber":"","BaggageTypeReturn":"","FirstName":"Sweety","Title":"2","MiddleName":"","LastName":"Gupta","MealTypeOnward":"","DateOfBirth":"","BaggageTypeOnward":"","SeatTypeOnward":"","MealTypeReturn":"","FrequentAirline":"null","Type":"A","SeatTypeReturn":""}]
If it is in a file, you can read it into Spark directly:
val jsonDF = spark.read.json("filepath/sample.json")
jsonDF.printSchema()
jsonDF.show
Result is:
root
|-- BaggageTypeOnward: string (nullable = true)
|-- BaggageTypeReturn: string (nullable = true)
|-- DateOfBirth: string (nullable = true)
|-- FirstName: string (nullable = true)
|-- FlierNumber: string (nullable = true)
|-- FrequentAirline: string (nullable = true)
|-- LastName: string (nullable = true)
|-- MealTypeOnward: string (nullable = true)
|-- MealTypeReturn: string (nullable = true)
|-- MiddleName: string (nullable = true)
|-- SeatTypeOnward: string (nullable = true)
|-- SeatTypeReturn: string (nullable = true)
|-- Title: string (nullable = true)
|-- Type: string (nullable = true)
+-----------------+-----------------+-----------+---------+-----------+---------------+--------+--------------+--------------+----------+--------------+--------------+-----+----+
|BaggageTypeOnward|BaggageTypeReturn|DateOfBirth|FirstName|FlierNumber|FrequentAirline|LastName|MealTypeOnward|MealTypeReturn|MiddleName|SeatTypeOnward|SeatTypeReturn|Title|Type|
+-----------------+-----------------+-----------+---------+-----------+---------------+--------+--------------+--------------+----------+--------------+--------------+-----+----+
| | | | K| | null| Gupta| | | D| | | 1| A|
| | | | Sweety| | null| Gupta| | | | | | 2| A|
+-----------------+-----------------+-----------+---------+-----------+---------------+--------+--------------+--------------+----------+--------------+--------------+-----+----+
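If the JSON is a string inside your program rather than a file, a minimal sketch (assumes Spark 2.2 or later, where spark.read.json accepts a Dataset[String]; the jsonStr and jsonDF2 names are made up for the example):
import spark.implicits._

// Wrap the JSON text in a Dataset[String] and hand it to the JSON reader.
val jsonStr = """[{"FirstName":"K","LastName":"Gupta","Title":"1","Type":"A"}]"""
val jsonDF2 = spark.read.json(Seq(jsonStr).toDS)
jsonDF2.show()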

Related

Processing puzzle for complex json

I'm new to data processing with pyspark and pandas. I need some guidance to understand how I can process a relatively complex JSON coming out of PuppetDB.
The schema is something like below:
root
|-- Hostname: string (nullable = true)
|-- facts-mountpoints: struct (nullable = true)
| |-- /: struct (nullable = true)
| | |-- available: string (nullable = true)
| | |-- available_bytes: long (nullable = true)
| | |-- capacity: string (nullable = true)
| | |-- device: string (nullable = true)
| | |-- filesystem: string (nullable = true)
| | |-- options: array (nullable = true)
| | | |-- element: string (containsNull = true)
| | |-- size: string (nullable = true)
| | |-- size_bytes: long (nullable = true)
| | |-- used: string (nullable = true)
| | |-- used_bytes: long (nullable = true)
| |-- /acfs01: struct (nullable = true)
| | |-- available: string (nullable = true)
| | |-- available_bytes: long (nullable = true)
| | |-- capacity: string (nullable = true)
| | |-- device: string (nullable = true)
| | |-- filesystem: string (nullable = true)
| | |-- options: array (nullable = true)
| | | |-- element: string (containsNull = true)
| | |-- size: string (nullable = true)
| | |-- size_bytes: long (nullable = true)
| | |-- used: string (nullable = true)
| | |-- used_bytes: long (nullable = true)
I use pyspark to create the dataframe and process the data.
My problem is that each host can have extra, different NFS mounts attached, so facts-mountpoints is dynamic and not the same across hosts, which rules out simply flattening/exploding and doing the work.
To simplify the problem, I want to filter out filesystem="nfs" and get only the mounts which are standard and non-NFS.
No matter what I tried, I could not find how to do a filter like the one below to build my columns:
facts-mountpoints.*.filesystem<>'nfs'
Is there a magical way to filter on the known struct -> unknown struct -> field with JSON dataframes?
If that's not possible, maybe filter on the mount point names (the second-level struct).
Sample json file can be found here
https://github.com/coskan/stackof/blob/0b29f4f0645e28d3efa297a1c4e949f4a985c639/sample_data.json
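One possible approach, sketched below (this sketch is not from the original thread; it is written in Scala to match the other answers on this page, and the standard predicate is a made-up example of a name filter): the mount names are just struct field names, so you can enumerate them from the dataframe's schema and keep only the mounts you consider standard, which is the fallback idea above.
import org.apache.spark.sql.functions.col

val df = spark.read.option("multiLine", true).json("sample_data.json")

// Enumerate every mountpoint field present in this dataset's schema.
val mountNames = df.select("`facts-mountpoints`.*").schema.fieldNames

// Hypothetical name filter: keep the root and ACFS mounts, drop the rest.
val standard = mountNames.filter(m => m == "/" || m.startsWith("/acfs"))

// Build one column per surviving mount and select it alongside Hostname.
val cols = col("Hostname") +: standard.map(m =>
  col(s"`facts-mountpoints`.`$m`").alias(m))

df.select(cols: _*).printSchema()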

In PySpark, how do I read a specific JSON attribute that has been loaded to a dataframe?

I am trying to get the value of "__delta" from the following JSON schema that has been loaded to a dataframe. How do I do that in PySpark?
root
|-- d: struct (nullable = true)
| |-- __delta: string (nullable = true)
| |-- __next: string (nullable = true)
| |-- results: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- ABRVW: string (nullable = true)
| | | |-- ADRNR: string (nullable = true)
| | | |-- ANRED: string (nullable = true)
With a struct-type JSON object, just select the attribute you want to get:
df.select("d.__delta")
How about df.select($"d.__delta") (the same selection with Scala's $ column syntax)?
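If you need the string itself on the driver rather than a one-column dataframe, a minimal sketch (in Scala like the line above, and assuming the frame holds a single row, as the schema suggests):
// Collect the lone __delta value back to the driver as a plain string.
val delta = df.select("d.__delta").first().getString(0)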

Apache Spark: Write JSON DataFrame partitionBy nested columns

I have this kind of JSON data:
{
"data": [
{
"id": "4619623",
"team": "452144",
"created_on": "2018-10-09 02:55:51",
"links": {
"edit": "https://some_page",
"publish": "https://some_publish",
"default": "https://some_default"
}
},
{
"id": "4619600",
"team": "452144",
"created_on": "2018-10-09 02:42:25",
"links": {
"edit": "https://some_page",
"publish": "https://some_publish",
"default": "https://some_default"
}
}
]
}
I read this data using Apache Spark and I want to write it partitioned by the id column. When I use this:
df.write.partitionBy("data.id").json(<path_to_folder>)
I will get error:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Partition column data.id not found in schema
I also tried to use the explode function like this:
import org.apache.spark.sql.functions.{col, explode}
val renamedDf= df.withColumn("id", explode(col("data.id")))
renamedDf.write.partitionBy("id").json(<path_to_folder>)
That actually helped, but each id partition folder contained the same original JSON file.
EDIT: schema of df DataFrame:
|-- data: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- created_on: string (nullable = true)
| | |-- id: string (nullable = true)
| | |-- links: struct (nullable = true)
| | | |-- default: string (nullable = true)
| | | |-- edit: string (nullable = true)
| | | |-- publish: string (nullable = true)
Schema of renamedDf DataFrame:
|-- data: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- created_on: string (nullable = true)
| | |-- id: string (nullable = true)
| | |-- links: struct (nullable = true)
| | | |-- default: string (nullable = true)
| | | |-- edit: string (nullable = true)
| | | |-- publish: string (nullable = true)
|-- id: string (nullable = true)
I am using Spark 2.1.0.
I found this solution: DataFrame partitionBy on nested columns
And this example: http://bigdatums.net/2016/02/12/how-to-extract-nested-json-data-in-spark/
But none of these helped me solve my problem.
Thanks in advance for any help.
Try the following code:
val renamedDf = df
  .select(explode(col("data")).as("x"))
  .select($"x.*")
renamedDf.write.partitionBy("id").json(<path_to_folder>)
You are just missing a select statement after the initial explode.
val df = spark.read.option("multiLine", true).option("mode", "PERMISSIVE").json("/FileStore/tables/test.json")
df.printSchema
root
|-- data: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- created_on: string (nullable = true)
| | |-- id: string (nullable = true)
| | |-- links: struct (nullable = true)
| | | |-- default: string (nullable = true)
| | | |-- edit: string (nullable = true)
| | | |-- publish: string (nullable = true)
| | |-- team: string (nullable = true)
import org.apache.spark.sql.functions.{col, explode}
val df1= df.withColumn("data", explode(col("data")))
df1.printSchema
root
|-- data: struct (nullable = true)
| |-- created_on: string (nullable = true)
| |-- id: string (nullable = true)
| |-- links: struct (nullable = true)
| | |-- default: string (nullable = true)
| | |-- edit: string (nullable = true)
| | |-- publish: string (nullable = true)
| |-- team: string (nullable = true)
val df2 = df1.select("data.created_on","data.id","data.team","data.links")
df2.show
+-------------------+-------+------+--------------------+
| created_on| id| team| links|
+-------------------+-------+------+--------------------+
|2018-10-09 02:55:51|4619623|452144|[https://some_def...|
|2018-10-09 02:42:25|4619600|452144|[https://some_def...|
+-------------------+-------+------+--------------------+
df2.write.partitionBy("id").json("/FileStore/tables/test_part.json")
Note that id does not appear in the partition's data files; partitionBy encodes it in the directory names (id=4619600, id=4619623), so it is absent when you read a single partition directly:
val f = spark.read.json("/FileStore/tables/test_part.json/id=4619600")
f.show
+-------------------+--------------------+------+
| created_on| links| team|
+-------------------+--------------------+------+
|2018-10-09 02:42:25|[https://some_def...|452144|
+-------------------+--------------------+------+
val full = spark.read.json("/FileStore/tables/test_part.json")
full.show
+-------------------+--------------------+------+-------+
| created_on| links| team| id|
+-------------------+--------------------+------+-------+
|2018-10-09 02:55:51|[https://some_def...|452144|4619623|
|2018-10-09 02:42:25|[https://some_def...|452144|4619600|
+-------------------+--------------------+------+-------+

What is the optimal way to parse the following Kafka JSON message to a pyspark dataframe?

I'm using Spark Structured Streaming to read a Kafka topic, and I want to convert the following complex JSON (kafka-msgs) into a dataframe having the columns NAME, ADDRESS, DESCRIPTION, CODE, DEPARTMENT, INFA_OP_TYPE, DTL__CAPXTIMESTAMP.
{
"meta_data": [{"name":{"string":"INFA_SEQUENCE"},"value":
{"string":"2,PWX_GENERIC"},"type":null},
{"name":{"string":"INFA_TABLE_NAME"},"value":{"string":"customers"},"type":null},
{"name":{"string":"INFA_OP_TYPE"},"value":{"string":"INSERT_EVENT"},"type":null},
{"name":{"string":"DTL__CAPXRESTART1"},"value":{"string":"B+IABwAfA"},"type":null},
{"name":{"string":"DTL__CAPXRESTART2"},"value":{"string":"AAABpMwgRDk="},"type":null},
{"name":{"string":"DTL__CAPXUOW"},"value":{"string":"AAMKPgAAqaIABg=="},"type":null},
{"name":{"string":"DTL__CAPXUSER"},"value":null,"type":null},
{"name":{"string":"DTL__CAPXTIMESTAMP"},"value":{"string":"201807310934257270000000"},"type":null},
{"name":{"string":"DTL__CAPXACTION"},"value":{"string":"I"},"type":null}],
"columns":{"array":[{"name":{"string":"NAME"},"value":{"string":"ABCD"},"isPresent":{"boolean":true}},
{"name":{"string":"ADDRESS"},"value":{"string":"123,Bark street"},"isPresent":{"boolean":true}},
{"name":{"string":"DESCRIPTION"},"value":{"string":"Canadian"},"isPresent":{"boolean":true}},
{"name":{"string":"CODE"},"value":{"string":"3_1"},"isPresent":{"boolean":true}},
{"name":{"string":"DEPARTMENT"},"value":{"string":"HR"},"isPresent":{"boolean":true}}
] }
}
I'm able to extract the two JSON objects "meta_data" and "columns", but I'm unable to explode "columns.array":
newJsonObj = events.select(get_json_object(events.value,'$.meta_data').alias('meta_data'),get_json_object(events.value,'$.columns.array').alias('columns'))
And I don't know how to extract values from the two JSON objects and create a dataframe having columns from both.
-- Schema of events dataframe --
root
|-- columns: struct (nullable = true)
| |-- array: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- isPresent: struct (nullable = true)
| | | | |-- boolean: boolean (nullable = true)
| | | |-- name: struct (nullable = true)
| | | | |-- string: string (nullable = true)
| | | |-- value: struct (nullable = true)
| | | | |-- string: string (nullable = true)
|-- meta_data: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- name: struct (nullable = true)
| | | |-- string: string (nullable = true)
| | |-- type: string (nullable = true)
| | |-- value: struct (nullable = true)
| | | |-- string: string (nullable = true)
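One way to sketch this (not from the original thread, written in Scala like the other answers on this page, and assuming Spark 2.4+ for the transform and map_from_entries SQL functions): turn each name/value array into a map keyed by name.string, then look up the wanted keys. This stays map-only, which matters because pivot-style aggregations are not supported on streaming dataframes. The withMaps and result names are made up; the key lists are the columns named in the question.
import org.apache.spark.sql.functions.{col, expr}

// Rebuild each name/value array as a map from name.string to value.string.
val withMaps = events
  .withColumn("colmap",
    expr("map_from_entries(transform(columns.array, c -> struct(c.name.string, c.value.string)))"))
  .withColumn("metamap",
    expr("map_from_entries(transform(meta_data, m -> struct(m.name.string, m.value.string)))"))

// Look up the wanted keys in the maps to form the final columns.
val result = withMaps.select(
  col("colmap")("NAME").as("NAME"),
  col("colmap")("ADDRESS").as("ADDRESS"),
  col("colmap")("DESCRIPTION").as("DESCRIPTION"),
  col("colmap")("CODE").as("CODE"),
  col("colmap")("DEPARTMENT").as("DEPARTMENT"),
  col("metamap")("INFA_OP_TYPE").as("INFA_OP_TYPE"),
  col("metamap")("DTL__CAPXTIMESTAMP").as("DTL__CAPXTIMESTAMP"))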

how to parse the wiki infobox json with scala spark

I was trying to get the data from the JSON data which I got from the wiki API:
https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&titles=Rajanna&rvsection=0
I was able to print the schema of it exactly:
scala> data.printSchema
root
|-- batchcomplete: string (nullable = true)
|-- query: struct (nullable = true)
| |-- pages: struct (nullable = true)
| | |-- 28597189: struct (nullable = true)
| | | |-- ns: long (nullable = true)
| | | |-- pageid: long (nullable = true)
| | | |-- revisions: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- *: string (nullable = true)
| | | | | |-- contentformat: string (nullable = true)
| | | | | |-- contentmodel: string (nullable = true)
| | | |-- title: string (nullable = true)
I want to extract the data of the key "*": |-- *: string (nullable = true)
Please suggest a solution.
One problem is:
pages: struct (nullable = true)
| | |-- 28597189: struct (nullable = true)
The number 28597189 is unique to every title.
First we need to parse the JSON to get the key (28597189) dynamically, then use it to extract the data from the Spark dataframe, like below:
val keyName = dataFrame.selectExpr("query.pages.*").schema.fieldNames(0)
println(s"Key Name : $keyName")
This will give you the key dynamically:
Key Name : 28597189
Then use this key to extract the data:
var revDf = dataFrame.select(explode(dataFrame(s"query.pages.$keyName.revisions")).as("revision")).select("revision.*")
revDf.printSchema()
Output:
root
|-- *: string (nullable = true)
|-- contentformat: string (nullable = true)
|-- contentmodel: string (nullable = true)
Then we rename the column * to a usable name like star_column:
revDf = revDf.withColumnRenamed("*", "star_column")
revDf.printSchema()
Output:
root
|-- star_column: string (nullable = true)
|-- contentformat: string (nullable = true)
|-- contentmodel: string (nullable = true)
Once we have our final dataframe, we call show:
revDf.show()
Output:
+--------------------+-------------+------------+
| star_column|contentformat|contentmodel|
+--------------------+-------------+------------+
|{{EngvarB|date=Se...| text/x-wiki| wikitext|
+--------------------+-------------+------------+
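A hedged side note on the rename step: the column literally named * can also be referenced directly by quoting it in backticks, which folds the rename into the select. This sketch is not from the original thread and uses the revDf from just after the explode, before the withColumnRenamed call:
// Backticks make Spark treat * as a column name rather than a wildcard.
val renamed = revDf.selectExpr("`*` as star_column", "contentformat", "contentmodel")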