Processing puzzle for complex JSON

I'm new to data processing with PySpark and pandas, and I need some guidance on how to process a relatively complex JSON coming out of PuppetDB.
The schema looks something like this:
root
|-- Hostname: string (nullable = true)
|-- facts-mountpoints: struct (nullable = true)
| |-- /: struct (nullable = true)
| | |-- available: string (nullable = true)
| | |-- available_bytes: long (nullable = true)
| | |-- capacity: string (nullable = true)
| | |-- device: string (nullable = true)
| | |-- filesystem: string (nullable = true)
| | |-- options: array (nullable = true)
| | | |-- element: string (containsNull = true)
| | |-- size: string (nullable = true)
| | |-- size_bytes: long (nullable = true)
| | |-- used: string (nullable = true)
| | |-- used_bytes: long (nullable = true)
| |-- /acfs01: struct (nullable = true)
| | |-- available: string (nullable = true)
| | |-- available_bytes: long (nullable = true)
| | |-- capacity: string (nullable = true)
| | |-- device: string (nullable = true)
| | |-- filesystem: string (nullable = true)
| | |-- options: array (nullable = true)
| | | |-- element: string (containsNull = true)
| | |-- size: string (nullable = true)
| | |-- size_bytes: long (nullable = true)
| | |-- used: string (nullable = true)
| | |-- used_bytes: long (nullable = true)
I use PySpark to create a dataframe and process the data.
My problem is that each host can have different extra NFS mounts attached, so facts-mountpoints is dynamic and not the same across hosts, which means I can't just flatten/explode it and do the work.
To simplify the problem, I want to filter out mounts with filesystem="nfs" and keep only the standard, non-NFS mounts.
No matter what I tried, I could not find how to express a filter like the one below to build my columns:
facts-mountpoints.*.filesystem<>'nfs'
Is there a magical way to filter on a known struct -> unknown struct -> field with JSON dataframes?
If that's not possible, maybe I can filter on the mount point names (the second-level struct).
A sample JSON file can be found here:
https://github.com/coskan/stackof/blob/0b29f4f0645e28d3efa297a1c4e949f4a985c639/sample_data.json
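Not a definitive answer, but one common approach is to read the mount-point names out of the dataframe schema at runtime, rebuild them as an array of (mount, info) structs, explode, and then filter. A minimal PySpark sketch, assuming the sample file above has been saved locally as sample_data.json:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.json("sample_data.json")  # assumed local copy of the sample file

# The mount-point names differ per host, so discover them from the schema.
mount_names = [f.name for f in df.schema["facts-mountpoints"].dataType.fields]

# Rebuild the dynamic struct as an array of (mount, info) structs so it can be
# exploded; backticks are needed because of the '-' and '/' in the names.
mounts = F.array(*[
    F.struct(
        F.lit(name).alias("mount"),
        F.col(f"`facts-mountpoints`.`{name}`").alias("info"),
    )
    for name in mount_names
])

non_nfs = (
    df.select("Hostname", F.explode(mounts).alias("m"))
      .select("Hostname", F.col("m.mount").alias("mount"), "m.info.*")
      .where(F.col("filesystem") != "nfs")
)
non_nfs.show(truncate=False)

Note that mounts which exist only on other hosts come through as null structs (the inferred schema is the union over all hosts); the filesystem != "nfs" predicate also drops those rows, because a comparison against NULL is never true.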

Related

In PySpark, how do I read a specific JSON attribute that has been loaded to a dataframe?

I am trying to get the value of "__delta" from the following JSON schema that has been loaded into a dataframe. How do I do that in PySpark?
root
|-- d: struct (nullable = true)
| |-- __delta: string (nullable = true)
| |-- __next: string (nullable = true)
| |-- results: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- ABRVW: string (nullable = true)
| | | |-- ADRNR: string (nullable = true)
| | | |-- ANRED: string (nullable = true)
With a struct-type column you can select the nested attribute directly:
df.select("d.__delta")
In Scala, the equivalent is df.select($"d.__delta").
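A slightly fuller PySpark sketch (assuming the response has been read into df with the schema above; the file path is hypothetical):

from pyspark.sql import functions as F

df = spark.read.json("response.json")  # assumed path
df.select("d.__delta").show(truncate=False)

# One row per element of d.results, with its fields as columns:
df.select(F.explode("d.results").alias("r")).select("r.*").show()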

spark showing NULL value while parsing JSON file

I have a JSON file that I am reading in Spark.
The schema is displayed correctly, but when I try to read the Info column or any of its sub-elements, the result is always NULL (and it is not NULL in the file).
//reading file
val df = spark.read.json("FilePath")
df.printSchema()
root
|-- data_is: boolean (nullable = true)
|-- Student: struct (nullable = true)
| |-- Id: string (nullable = true)
| |-- JoinDate: string (nullable = true)
| |-- LeaveDate: string (nullable = true)
|-- Info: struct (nullable = true)
| |-- details: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- Father_salary: double (nullable = true)
| | | |-- Mother_salary: double (nullable = true)
| | | |-- Address: string (nullable = true)
| |-- studentInfo: struct (nullable = true)
| | |-- Age: double (nullable = true)
| | |-- Name: string (nullable = true)
df.select("Student").show()
shows the field values of the Student element, and even when I select Student.Id I get the Id.
But whenever I parse Info, I always get a NULL value, even though it is not NULL in the file.
df.select("Info").show() // shows NULL
df.select("Info.detail").show() // shows NULL
Even Info.Summary is NULL.
Can anybody suggest how to get the actual field values instead of NULL?
JSON File
{"Student":{"JoinDate":"20200909","LeaveDate":"20200909","id":"XA12"},"Info":{"studentInfo":{"Age":13,"Name":"Alex"},"details":[{"Father_salary":1234.00,"Mother_salary":0,"Address":""}]},"data_is":true}

what is optimal way to parse following kafka JSON message to pyspark dataframe?

I'm using Spark Structured Streaming to read a Kafka topic and want to convert the following complex JSON (the Kafka messages) into a dataframe with the columns NAME, ADDRESS, DESCRIPTION, CODE, DEPARTMENT, INFA_OP_TYPE and DTL__CAPXTIMESTAMP.
{
"meta_data": [{"name":{"string":"INFA_SEQUENCE"},"value":
{"string":"2,PWX_GENERIC"},"type":null},
{"name":{"string":"INFA_TABLE_NAME"},"value":{"string":"customers"},"type":null},
{"name":{"string":"INFA_OP_TYPE"},"value":{"string":"INSERT_EVENT"},"type":null},
{"name":{"string":"DTL__CAPXRESTART1"},"value":{"string":"B+IABwAfA"},"type":null},
{"name":{"string":"DTL__CAPXRESTART2"},"value":{"string":"AAABpMwgRDk="},"type":null},
{"name":{"string":"DTL__CAPXUOW"},"value":{"string":"AAMKPgAAqaIABg=="},"type":null},
{"name":{"string":"DTL__CAPXUSER"},"value":null,"type":null},
{"name":{"string":"DTL__CAPXTIMESTAMP"},"value":{"string":"201807310934257270000000"},"type":null},
{"name":{"string":"DTL__CAPXACTION"},"value":{"string":"I"},"type":null}],
"columns":{"array":[{"name":{"string":"NAME"},"value":{"string":"ABCD"},"isPresent":{"boolean":true}},
{"name":{"string":"ADDRESS"},"value":{"string":"123,Bark street"},"isPresent":{"boolean":true}},
{"name":{"string":"DESCRIPTION"},"value":{"string":"Canadian"},"isPresent":{"boolean":true}},
{"name":{"string":"CODE"},"value":{"string":"3_1"},"isPresent":{"boolean":true}},
{"name":{"string":"DEPARTMENT"},"value":{"string":"HR"},"isPresent":{"boolean":true}}
] }
}
I'm able to extract the two JSON objects "meta_data" and "columns", but I'm unable to explode "columns.array":
newJsonObj = events.select(get_json_object(events.value,'$.meta_data').alias('meta_data'),get_json_object(events.value,'$.columns.array').alias('columns'))
And I don't know how to extract the values from the two objects and build a dataframe with columns from both.
-- Schema of events dataframe --
root
|-- columns: struct (nullable = true)
| |-- array: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- isPresent: struct (nullable = true)
| | | | |-- boolean: boolean (nullable = true)
| | | |-- name: struct (nullable = true)
| | | | |-- string: string (nullable = true)
| | | |-- value: struct (nullable = true)
| | | | |-- string: string (nullable = true)
|-- meta_data: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- name: struct (nullable = true)
| | | |-- string: string (nullable = true)
| | |-- type: string (nullable = true)
| | |-- value: struct (nullable = true)
| | | |-- string: string (nullable = true)
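One way to approach this (a sketch, not necessarily the optimal one): once the value has been parsed into the schema above, turn both name/value arrays into maps with map_from_entries and pull the wanted keys out as columns. This assumes Spark 2.4+ and that events already carries the parsed schema shown; the helper name is hypothetical:

from pyspark.sql import functions as F

def pairs(col_name):
    # array<struct<name, value, ...>>  ->  map<string, string>
    return F.map_from_entries(F.expr(
        f"transform({col_name}, x -> struct(x.name.string as key, x.value.string as value))"
    ))

flat = events.select(
    pairs("columns.array").alias("c"),
    pairs("meta_data").alias("m"),
).select(
    *[F.col("c")[k].alias(k)
      for k in ["NAME", "ADDRESS", "DESCRIPTION", "CODE", "DEPARTMENT"]],
    F.col("m")["INFA_OP_TYPE"].alias("INFA_OP_TYPE"),
    F.col("m")["DTL__CAPXTIMESTAMP"].alias("DTL__CAPXTIMESTAMP"),
)

A map lookup avoids a pivot, which would not be supported on a streaming dataframe anyway.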

how to parse the wiki infobox json with scala spark

I am trying to extract data from the JSON that I got from the wiki API:
https://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&titles=Rajanna&rvsection=0
I was able to print its schema:
scala> data.printSchema
root
|-- batchcomplete: string (nullable = true)
|-- query: struct (nullable = true)
| |-- pages: struct (nullable = true)
| | |-- 28597189: struct (nullable = true)
| | | |-- ns: long (nullable = true)
| | | |-- pageid: long (nullable = true)
| | | |-- revisions: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- *: string (nullable = true)
| | | | | |-- contentformat: string (nullable = true)
| | | | | |-- contentmodel: string (nullable = true)
| | | |-- title: string (nullable = true)
I want to extract the data of the key "*" (|-- *: string (nullable = true)).
Please suggest a solution.
One problem is that in
pages: struct (nullable = true)
| | |-- 28597189: struct (nullable = true)
the number 28597189 is unique to every title.
First we need to inspect the schema to get the key (28597189) dynamically, then use it to extract the data from the Spark dataframe, as below:
val keyName = dataFrame.selectExpr("query.pages.*").schema.fieldNames(0)
println(s"Key Name : $keyName")
this will give you the key dynamically:
Key Name : 28597189
Then use this to extract the data
var revDf = dataFrame.select(explode(dataFrame(s"query.pages.$keyName.revisions")).as("revision")).select("revision.*")
revDf.printSchema()
Output:
root
|-- *: string (nullable = true)
|-- contentformat: string (nullable = true)
|-- contentmodel: string (nullable = true)
Then we rename the column * to a usable name such as star_column:
revDf = revDf.withColumnRenamed("*", "star_column")
revDf.printSchema()
Output:
root
|-- star_column: string (nullable = true)
|-- contentformat: string (nullable = true)
|-- contentmodel: string (nullable = true)
Once we have our final dataframe, we call show:
revDf.show()
Output:
+--------------------+-------------+------------+
| star_column|contentformat|contentmodel|
+--------------------+-------------+------------+
|{{EngvarB|date=Se...| text/x-wiki| wikitext|
+--------------------+-------------+------------+
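For anyone doing the same from PySpark, the dynamic-key trick translates directly; a sketch, assuming df holds the parsed wiki response:

from pyspark.sql import functions as F

# Grab the dynamic page-id key from the schema, then explode the revisions.
key_name = df.selectExpr("query.pages.*").schema.fieldNames()[0]
rev_df = (
    df.select(F.explode(f"query.pages.`{key_name}`.revisions").alias("revision"))
      .select("revision.*")
      .withColumnRenamed("*", "star_column")  # '*' is awkward as a column name
)
rev_df.show()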

Spark HiveContext: issues working with queries

I'm trying to get information from JSON files to create tables in Hive.
This is my JSON schema:
root
|-- info: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- stations: array (nullable = true)
| | | |-- element: struct (containsNull = true)
| | | | |-- bikes: string (nullable = true)
| | | | |-- id: string (nullable = true)
| | | | |-- slots: string (nullable = true)
| | | | |-- streetName: string (nullable = true)
| | | | |-- type: string (nullable = true)
| | |-- updateTime: long (nullable = true)
|-- date: string (nullable = true)
|-- numRecords: string (nullable = true)
I'm using this query:
sqlContext.sql("SELECT info.updateTime FROM STATIONS").foreach(println)
This is what I get:
[WrappedArray(1449098169, 1449108553, 1449098468)]
But I don't know how to put this information into a table so I can use it afterwards from the Hive console.
I used this:
query.write.save("/home/cloudera/Desktop/select")
It creates something, but I don't know how to use it.
Thanks
It depends, but you can do it in several ways.
First way: Have the table created in the query
sqlContext.sql("create table mytable AS SELECT info.updateTime FROM STATIONS")
// now you can query mytable
Second way: write the DataFrame with saveAsTable()
sqlContext.sql("SELECT info.updateTime FROM STATIONS").saveAsTable("othertable")