Exploding Entire JSON File in PySpark - json

I am trying to normalize (perhaps not the precise term) a nested JSON object in PySpark. The actual data I care about is under articles. The schema is:
df = spark.read.json(filepath)
df.printSchema()
root
|-- articles: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- adultLanguage: string (nullable = true)
| | |-- companies: array (nullable = true)
| | | |-- element: struct (containsNull = true)
| | | | |-- contentCount: long (nullable = true)
| | | | |-- exchange: string (nullable = true)
| | | | |-- isin: string (nullable = true)
| | | | |-- name: string (nullable = true)
| | | | |-- primary: boolean (nullable = true)
| | | | |-- symbol: string (nullable = true)
| | | | |-- titleCount: long (nullable = true)
| | |-- content: string (nullable = true)
| | |-- copyright: string (nullable = true)
| | |-- duplicateGroupId: string (nullable = true)
| | |-- estimatedPublishedDate: string (nullable = true)
| | |-- harvestDate: string (nullable = true)
| | |-- id: string (nullable = true)
| | |-- indexTerms: array (nullable = true)
| | | |-- element: struct (containsNull = true)
| | | | |-- domains: array (nullable = true)
| | | | | |-- element: string (containsNull = true)
| | | | |-- name: string (nullable = true)
| | | | |-- score: string (nullable = true)
| | |-- language: string (nullable = true)
| | |-- languageCode: string (nullable = true)
| | |-- licenses: array (nullable = true)
| | | |-- element: struct (containsNull = true)
| | | | |-- name: string (nullable = true)
| | |-- linkedArticles: array (nullable = true)
| | | |-- element: string (containsNull = true)
| | |-- originalUrl: string (nullable = true)
| | |-- print: struct (nullable = true)
| | | |-- pageNumber: string (nullable = true)
| | | |-- publicationEdition: string (nullable = true)
| | |-- publishedDate: string (nullable = true)
| | |-- publishingPlatform: struct (nullable = true)
| | | |-- itemId: string (nullable = true)
| | |-- semantics: struct (nullable = true)
| | | |-- entities: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- properties: array (nullable = true)
| | | | | | |-- element: struct (containsNull = true)
| | | | | | | |-- name: string (nullable = true)
| | | | | | | |-- value: string (nullable = true)
| | | | | |-- provider: string (nullable = true)
| | |-- sentiment: struct (nullable = true)
| | | |-- score: string (nullable = true)
| | |-- sequenceId: string (nullable = true)
| | |-- source: struct (nullable = true)
| | | |-- category: string (nullable = true)
| | | |-- editorialRank: string (nullable = true)
| | | |-- feed: struct (nullable = true)
| | | | |-- id: string (nullable = true)
| | | | |-- idFromPublisher: string (nullable = true)
| | | | |-- inWhiteList: string (nullable = true)
| | | | |-- language: string (nullable = true)
| | | | |-- mediaType: string (nullable = true)
| | | | |-- publishingPlatform: string (nullable = true)
| | | | |-- rank: struct (nullable = true)
| | | | | |-- inboundLinkCount: string (nullable = true)
| | | |-- homeUrl: string (nullable = true)
| | | |-- id: string (nullable = true)
| | | |-- location: struct (nullable = true)
| | | | |-- country: string (nullable = true)
| | | | |-- countryCode: string (nullable = true)
| | | | |-- region: string (nullable = true)
| | | | |-- state: string (nullable = true)
| | | | |-- subregion: string (nullable = true)
| | | |-- name: string (nullable = true)
| | |-- title: string (nullable = true)
| | |-- topics: array (nullable = true)
| | | |-- element: struct (containsNull = true)
| | | | |-- name: string (nullable = true)
| | |-- url: string (nullable = true)
| | |-- wordCount: string (nullable = true)
|-- status: string (nullable = true)
|-- totalResults: string (nullable = true)
I can successfully use select to grab variables that are strings, such as:
from pyspark.sql.functions import explode

df1 = df.select(explode(df.articles).alias('articles'))
df2 = df1.select(
    'articles.adultLanguage',
    ...)
but I don't know how to get variables that are multiple levels down, such as author and companies, which are nested and (in the case of companies) arrays with potentially multiple values. A sample record:
{"status": "SUCCESS", "totalResults": "294629", "articles": [{"sequenceId": "557545999680", "id": "24414529721", "language": "English", "languageCode": "en", "title": "Young CEO recruits seasoned advisers", "content": "A black punching bag hovers alongside the desk in Grant Verstandig's office. A pair of worn sneakers rests on the arm of a white, modular sofa. Overhead, a photo of a hulking Muhammad Ali hangs from the wall.\n\nIf it weren't for the panoramic view of the Georgetown waterfront, this space could be mistaken for a college dorm room. Perhaps that's fitting for the 22-year-old chief executive of Audax Health, a start-up that blends social media with health care. The company has been Verstandig's brainchild since the District native endured a spate of intensive knee surgeries to correct sports-induced injuries. Its banner product, called Careverge, will make a public debut next week at the Consumer Electronics Show in Las Vegas.\n\nCareverge users answer a series of questions about their health history that range from daily dietary habits to specific chronic illnesses. The site allows them to anonymously read relevant Web resources, connect with similar users and set health goals. Audax plans to market Careverge as a benefit for companies to offer employees.Heavy-hitting backers\n\nWhile Washington has become home to a crop of 20-something entrepreneurs with ambitious plans to launch businesses, Audax may stand out for the seasoned lineup of mentors Verstandig has managed to recruit to its board.\n\nJohn Sculley, the former chief executive of Pepsi and Apple, has been a financial backer and business adviser since May 2011. He had been hunting for a health care investment when a business contact introduced him to Audax.\n\nFrom the health arena, Verstandig has brought on Dr. Richard Klausner, former executive director for global health of the Bill and Melinda Gates Foundation and director of the National Cancer Institute from 1995 to 2001.\n\nKlausner, who worked with Verstandig's mother in the Clinton administration, introduced him to the health and science fields as a high school student through summer work at the National Institutes of Health.\n\nAlso on the board are Roger W. Ferguson Jr.,president and chief executive of retirement services provider TIAA-CREF, and John Wallis Rowe, the former chairman and chief executive of insurance firm Aetna.Knee injury sparked idea\n\nVerstandig said the makings of the business really began in his Brown University dorm room where, while laid up after a knee operation, he began to compile a list of industry contacts. Then through social media, e-mail and telephone calls, he began to network.\n\n\"The candid truth is spending a lot of time not being able to move made you focus,\" said Verstandig, who would later drop out to focus on the company full time.\n\nHis persistence and charm - it's clear Verstandig can talk his way through almost any social situation - impressed Sculley. During their meeting, Sculley twice asked Verstandig to name his biggest mistake. His response - setting unrealistic expectations and misreading progress - aren't uncommon for first-time entrepreneurs.\n\n\"In every case, except this one, I always work with serial entrepreneurs,\" Sculley said. \"I said, 'Gee, this violates everything I said I'm going to do. 
I'm not going to work with people who have never built companies before and yet here is a guy who resembles in some ways Steve Jobs and Bill Gates when they were in their 20s.'\"\n\nFor a maturity beyond his years, Verstandig certainly looks his age. Smelling of cologne and wearing a V-neck sweater and dark denim jeans, his office attire could transition easily to a bar in Foggy Bottom.\n\nBut casual is to be expected at a company like Audax. Ping-pong tables, remote-control helicopters and oversized bean-bag chairs are just a few of the start-up staples that the company makes available to its 55 employees.No more do-it-all himself\n\nVerstandig admits that his position has come with a steep learning curve, particularly as a first-time CEO without any prior business experience or education.\n\n\"Back in the early days I did everything myself because I thought I could do it faster, better, quicker, but now I just hire people who are smarter and hire people who are more experienced,\" he said.\n\nAnd then there are challenges beyond his control. Health care can be a notoriously stubborn market where attempts at innovation become bogged down by bureaucracy, regulation and big business.\n\n\"You have to be acutely aware of all those things that are swirling around, but at the end of the day, one thing I've learned from my mentors is you can't control what you can't control,\" Verstandig said.\n\nTiming, however, may be on the company's side. Verstandig and Sculley both believe that health care reform at the federal level combined with other initiatives to revamp the system make now an opportune moment for a company like Audax to make a strong play.\n\n\"I am absolutely convinced that the health care problem that we have in the economy will eventually be solved largely by innovation from the private sector and not the government,\" Sculley said.\n\noverlys#washpost.com", "publishedDate": "2012-01-02T00:00:00Z", "harvestDate": "2012-01-02T00:00:00Z", "estimatedPublishedDate": "2012-01-02T00:00:00Z", "url": "https://some.url.com/?a=24414529721&p=6e2&v=1&x=F98g2RdEJP-tyqi1V-39-A", "originalUrl": "http://some.url.com/noarticleurl?type=Companydnf&lnlni=54MC-P5M1-JBFW-C23W-00000-00", "wordCount": "822", "copyright": "Copyright 2012 The Washington Post. 
All Rights Reserved.", "duplicateGroupId": "24414529721", "media": {}, "publishingPlatform": {"itemId": "54MC-P5M1-JBFW-C23W-00000-00"}, "adultLanguage": "false", "topics": [{"name": "Executive moves news"}], "indexTerms": [{"domains": ["IND"], "name": "SOCIAL MEDIA", "score": "75"}, {"domains": ["IND"], "name": "CONSUMER ELECTRONICS", "score": "74"}, {"domains": ["SUB"], "name": "EXECUTIVES", "score": "90"}, {"domains": ["SUB"], "name": "STUDENT HOUSING", "score": "89"}, {"domains": ["SUB"], "name": "KNEE DISORDERS & INJURIES", "score": "89"}, {"domains": ["SUB"], "name": "WOUNDS & INJURIES", "score": "78"}, {"domains": ["SUB"], "name": "RESEARCH INSTITUTES", "score": "78"}, {"domains": ["SUB"], "name": "MEDICAL RESEARCH", "score": "76"}, {"domains": ["SUB"], "name": "HEALTH DEPARTMENTS", "score": "76"}, {"domains": ["SUB"], "name": "ENTREPRENEURSHIP", "score": "75"}, {"domains": ["SUB"], "name": "CHRONIC DISEASES", "score": "75"}, {"domains": ["SUB"], "name": "SPORTS INJURIES", "score": "74"}, {"domains": ["SUB"], "name": "FOUNDATIONS", "score": "73"}, {"domains": ["SUB"], "name": "BOARD CHANGES", "score": "70"}, {"domains": ["SUB"], "name": "CANCER", "score": "69"}, {"domains": ["SUB"], "name": "NUTRITION", "score": "68"}, {"domains": ["SUB"], "name": "TALKS & MEETINGS", "score": "65"}, {"domains": ["SUB"], "name": "TRADE SHOWS", "score": "54"}], "companies": [{"name": "Global Health Ltd", "symbol": "WSY", "exchange": "BER", "isin": "AU000000GLH2", "titleCount": 0, "contentCount": 1, "primary": true}, {"name": "Global Health Ltd", "symbol": "GLH", "exchange": "ASX", "isin": "AU000000GLH2", "titleCount": 0, "contentCount": 1, "primary": true}, {"name": "Space Co Ltd", "symbol": "9622", "exchange": "TKS", "isin": "JP3400050005", "titleCount": 0, "contentCount": 1, "primary": true}], "semantics": {"entities": [{"properties": [{"name": "type", "value": "Company"}, {"name": "value", "value": "Aetna.Knee"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Company"}, {"name": "value", "value": "Audax Group"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Company"}, {"name": "value", "value": "PepsiCo"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Company"}, {"name": "value", "value": "Audax Health"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Company"}, {"name": "value", "value": "Apple Inc."}], "provider": "3"}, {"properties": [{"name": "type", "value": "Company"}, {"name": "value", "value": "TIAA"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Person"}, {"name": "value", "value": "Steve Jobs"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Person"}, {"name": "value", "value": "John Rowe"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Person"}, {"name": "value", "value": "Richard Klausner"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Person"}, {"name": "value", "value": "Grant Verstandig"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Person"}, {"name": "value", "value": "John Sculley"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Person"}, {"name": "value", "value": "Muhammad Ali"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Person"}, {"name": "value", "value": "Roger W. 
Ferguson, Jr."}], "provider": "3"}, {"properties": [{"name": "type", "value": "Person"}, {"name": "value", "value": "Bill Gates"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Person"}, {"name": "value", "value": "Hillary Clinton"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Person"}, {"name": "value", "value": "Careverge"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Organization"}, {"name": "value", "value": "Brown University"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Organization"}, {"name": "value", "value": "National Institutes of Health"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Organization"}, {"name": "value", "value": "Bill & Melinda Gates Foundation"}], "provider": "3"}, {"properties": [{"name": "type", "value": "Organization"}, {"name": "value", "value": "National Cancer Institute"}], "provider": "3"}]}, "sentiment": {"score": "0.08718867"}, "print": {"publicationEdition": "Every Edition", "pageNumber": "A06"}, "author": {"publishingPlatform": {}}, "licenses": [{"name": "Company Licensed"}], "linkedArticles": [], "source": {"id": "93252", "name": "The Washington Post", "homeUrl": "http://www.washingtonpost.com/", "category": "National", "editorialRank": "1", "location": {"country": "United States", "countryCode": "US", "region": "Americas", "subregion": "Northern America", "state": "District of Columbia"}, "metrics": {"mozscape": {}}, "feed": {"id": "8528", "mediaType": "Print", "publishingPlatform": "Company Licensed", "idFromPublisher": "783", "language": "Unassigned", "rank": {"inboundLinkCount": "0"}, "inWhiteList": "false"}}}]}
Thanks

This should work. Let me know if you have any questions:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.json("test.json")
df.createOrReplaceTempView("test")
spark.sql("""
    select article.adultLanguage, company.*
    from test
    lateral view explode(articles) as article
    lateral view explode(article.companies) as company
""").show(10, False)

Related

Converting a Struct to an Array in PySpark

This is my goal: I am trying to analyze the JSON files created by Microsoft's Azure Data Factory and convert them into a set of relational tables.
To explain my problem, I have created a sample with reduced complexity.
You can produce two sample records with the Python code below:
sample1 = """{
    "name": "Pipeline1",
    "properties": {
        "parameters": {
            "a": {"type": "string", "default": ""},
            "b": {"type": "string", "default": "chris"},
            "c": {"type": "string", "default": "columbus"},
            "d": {"type": "integer", "default": "0"}
        },
        "annotations": ["Test","Sample"]
    }
}"""
sample2 = """{
    "name": "Pipeline2",
    "properties": {
        "parameters": {
            "x": {"type": "string", "default": "X"},
            "y": {"type": "string", "default": "Y"},
        },
        "annotations": ["another sample"]
    }
My first approach to loading this data is, of course, to read it as JSON:
df = spark.read.json(sc.parallelize([sample1,sample2]))
df.printSchema()
df.show()
but this returns:
root
|-- _corrupt_record: string (nullable = true)
|-- name: string (nullable = true)
|-- properties: struct (nullable = true)
| |-- annotations: array (nullable = true)
| | |-- element: string (containsNull = true)
| |-- parameters: struct (nullable = true)
| | |-- a: struct (nullable = true)
| | | |-- default: string (nullable = true)
| | | |-- type: string (nullable = true)
| | |-- b: struct (nullable = true)
| | | |-- default: string (nullable = true)
| | | |-- type: string (nullable = true)
| | |-- c: struct (nullable = true)
| | | |-- default: string (nullable = true)
| | | |-- type: string (nullable = true)
| | |-- d: struct (nullable = true)
| | | |-- default: string (nullable = true)
| | | |-- type: string (nullable = true)
+--------------------+---------+--------------------+
| _corrupt_record| name| properties|
+--------------------+---------+--------------------+
| null|Pipeline1|{[Test, Sample], ...|
|{
"name": "Pipel...|Pipeline2| null|
+--------------------+---------+--------------------+
As you can see, the second sample was not loaded, apparently because the schemas of sample1 and sample2 are different (different parameter names).
I do not know why Microsoft decided to make the parameters elements of a struct rather than of an array, but I can't change that.
Let me come back to my goal: I would like to create two dataframes out of those samples:
The first dataframe should contain the annotations (with the columns pipeline_name and annotation), the other dataframe should contain the parameters (with the columns pipeline_name, parameter_name, parameter_type and parameter_default).
Does anybody know a simple way to convert elements of a struct (not an array) into rows of a dataframe?
First of all, I was thinking about a user-defined function which converts the JSON code one row at a time and loops over the elements of the "parameters" structure to return them as elements of an array. But I did not find out exactly how to achieve that. I have tried:
import json
from pyspark.sql import Row
from pyspark.sql import functions as F
from pyspark.sql.functions import udf
from pyspark.sql.types import *

# create a dataframe with the json data as strings
df = spark.createDataFrame([Row(json=sample1), Row(json=sample2)])

# define desired schema
new_schema = StructType([
    StructField("pipeline", StructType([
        StructField("name", StringType(), True),
        StructField("params", ArrayType(StructType([
            StructField("paramname", StringType(), True),
            StructField("type", StringType(), True),
            StructField("default", StringType(), True)
        ])), None),
        StructField("annotations", ArrayType(StringType()), True)
    ]), True)
])

def parse_pipeline(source: str):
    dict = json.loads(source)
    name = dict["name"]
    props = dict["properties"]
    paramlist = [(key, value.get('type'), value.get('default'))
                 for key, value in props.get("parameters", {}).items()]
    annotations = props.get("annotations")
    return {'pipleine': {'name': name, 'params': paramlist, 'annotations': annotations}}

parse_pipeline_udf = udf(parse_pipeline, new_schema)
df = df.withColumn("data", parse_pipeline_udf(F.col("json")))
But this returns an error message: Failed to convert the JSON string '{"metadata":{},"name":"params","nullable":null,"type":{"containsNull":true,"elementType":{"fields":[{"metadata":{},"name":"paramname","nullable":true,"type":"string"},{"metadata":{},"name":"type","nullable":true,"type":"string"},{"metadata":{},"name":"default","nullable":true,"type":"string"}],"type":"struct"},"type":"array"}}' to a field.
Maybe the error comes from the return value of my udf. But if that's the reason, how should I pass the result?
Thank you for any help.
First, I fixed your data sample: a closing """ and a } were missing, and there was an extra ,:
sample1 = """{
    "name": "Pipeline1",
    "properties": {
        "parameters": {
            "a": {"type": "string", "default": ""},
            "b": {"type": "string", "default": "chris"},
            "c": {"type": "string", "default": "columbus"},
            "d": {"type": "integer", "default": "0"}
        },
        "annotations": ["Test","Sample"]
    }
}"""
sample2 = """{
    "name": "Pipeline2",
    "properties": {
        "parameters": {
            "x": {"type": "string", "default": "X"},
            "y": {"type": "string", "default": "Y"}
        },
        "annotations": ["another sample"]
    }
}"""
With just this fix, sample2 is included when you run your original code.
But if you want an "array" of parameters, what you actually need is a map type:
from pyspark.sql import types as T

new_schema = T.StructType([
    T.StructField("name", T.StringType()),
    T.StructField("properties", T.StructType([
        T.StructField("annotations", T.ArrayType(T.StringType())),
        T.StructField("parameters", T.MapType(T.StringType(), T.StructType([
            T.StructField("default", T.StringType()),
            T.StructField("type", T.StringType()),
        ])))
    ]))
])
df = spark.read.json(sc.parallelize([sample1, sample2]), new_schema)
And the result:
df.show(truncate=False)
+---------+-----------------------------------------------------------------------------------------------------+
|name |properties |
+---------+-----------------------------------------------------------------------------------------------------+
|Pipeline1|[[Test, Sample], [a -> [, string], b -> [chris, string], c -> [columbus, string], d -> [0, integer]]]|
|Pipeline2|[[another sample], [x -> [X, string], y -> [Y, string]]] |
+---------+-----------------------------------------------------------------------------------------------------+
df.printSchema()
root
|-- name: string (nullable = true)
|-- properties: struct (nullable = true)
| |-- annotations: array (nullable = true)
| | |-- element: string (containsNull = true)
| |-- parameters: map (nullable = true)
| | |-- key: string
| | |-- value: struct (valueContainsNull = true)
| | | |-- default: string (nullable = true)
| | | |-- type: string (nullable = true)
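From there, the two dataframes described in the question can be derived by exploding the array and the map. A minimal sketch, assuming df was loaded with the map schema above (exploding a map yields one row per key/value pair):

from pyspark.sql import functions as F

# one row per annotation: (pipeline_name, annotation)
annotations_df = df.select(
    F.col("name").alias("pipeline_name"),
    F.explode("properties.annotations").alias("annotation"))

# one row per parameter: (pipeline_name, parameter_name, parameter_type, parameter_default)
parameters_df = (df
    .select(F.col("name").alias("pipeline_name"),
            F.explode("properties.parameters").alias("parameter_name", "param"))
    .select("pipeline_name", "parameter_name",
            F.col("param.type").alias("parameter_type"),
            F.col("param.default").alias("parameter_default")))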

Parse JSON data with Spark 2.3

I have the following JSON data:
{
    "3200": {
        "id": "3200",
        "value": [
            "cat",
            "dog"
        ]
    },
    "2000": {
        "id": "2000",
        "value": [
            "bird"
        ]
    },
    "2500": {
        "id": "2500",
        "value": [
            "kitty"
        ]
    },
    "3650": {
        "id": "3650",
        "value": [
            "horse"
        ]
    }
}
The schema of this data, printed with the printSchema utility after loading it with Spark, is as follows:
root
|-- 3200: struct (nullable = true)
| |-- id: string (nullable = true)
| |-- value: array (nullable = true)
| | |-- element: string (containsNull = true)
|-- 2000: struct (nullable = true)
| |-- id: string (nullable = true)
| |-- value: array (nullable = true)
| | |-- element: string (containsNull = true)
|-- 2500: struct (nullable = true)
| |-- id: string (nullable = true)
| |-- value: array (nullable = true)
| | |-- element: string (containsNull = true)
|-- 3650: struct (nullable = true)
| |-- id: string (nullable = true)
| |-- value: array (nullable = true)
| | |-- element: string (containsNull = true)
and I want to get the following dataframe:
id value
3200 cat
2000 bird
2500 kitty
3200 dog
3650 horse
How can I do the parsing to get this expected output?
Using Spark SQL:
Dataframe step (same as in Mohana's answer):
val df = spark.read.json(Seq(jsonData).toDS())
Build a temp view
df.createOrReplaceTempView("df")
Result:
val cols_k = df.columns.map( x => s"`${x}`.id" ).mkString(",")
val cols_v = df.columns.map( x => s"`${x}`.value" ).mkString(",")
spark.sql(s"""
with t1 ( select map_from_arrays(array(${cols_k}),array(${cols_v})) s from df ),
t2 ( select explode(s) (key,value) from t1 )
select key, explode(value) value from t2
""").show(false)
+----+-----+
|key |value|
+----+-----+
|2000|bird |
|2500|kitty|
|3200|cat |
|3200|dog |
|3650|horse|
+----+-----+
You can use the stack() function to transpose the dataframe, then extract the key field and explode the value field using the explode_outer() function.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().master("local[*]").getOrCreate()
spark.sparkContext.setLogLevel("ERROR")
import spark.implicits._
val jsonData = """{
    |  "3200": { "id": "3200", "value": ["cat", "dog"] },
    |  "2000": { "id": "2000", "value": ["bird"] },
    |  "2500": { "id": "2500", "value": ["kitty"] },
    |  "3650": { "id": "3650", "value": ["horse"] }
    |}
    |""".stripMargin
val df = spark.read.json(Seq(jsonData).toDS())
df.selectExpr("stack (4, *) key")
.select(expr("key.id").as("key"),
explode_outer(expr("key.value")).as("value"))
.show(false)
+----+-----+
|key |value|
+----+-----+
|2000|bird |
|2500|kitty|
|3200|cat |
|3200|dog |
|3650|horse|
+----+-----+
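For completeness, a PySpark sketch of the same stack-and-explode approach (the four top-level column names come from the schema above; the backticks are needed because the names start with digits, and jsonData here is the same JSON string):

from pyspark.sql import functions as F

pdf = spark.read.json(spark.sparkContext.parallelize([jsonData]))
result = (pdf
    .selectExpr("stack(4, `2000`, `2500`, `3200`, `3650`) as key")
    .select(F.col("key.id").alias("key"),
            F.explode_outer("key.value").alias("value")))
result.show(truncate=False)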

Convert column with JSON string to column with dictionary in PySpark

I have a column with following structure in my dataframe.
+--------------------+
| data|
+--------------------+
|{"sbar":{"_id":"5...|
|{"sbar":{"_id":"5...|
|{"sbar":{"_id":"5...|
|{"sbar":{"_id":"5...|
|{"sbar":{"_id":"5...|
+--------------------+
only showing top 5 rows
The data inside the column is a JSON string. I want to convert the column to some other type (map, struct, ...). How do I do this with a udf function? I have created a function like this but can't figure out what the return type should be. I tried StructType and MapType, which threw errors. This is my code:
import json
import pyspark.sql.functions as F
from pyspark.sql.types import MapType, StructType

udf_getDict = F.udf(lambda x: json.loads(x), StructType)
subset.select(udf_getDict(F.col('data'))).printSchema()
You can use an approach with spark.read.json and df.rdd.map, such as this:
json_string = """
{
    "glossary": {
        "title": "example glossary",
        "GlossDiv": {
            "title": "S",
            "GlossList": {
                "GlossEntry": {
                    "ID": "SGML",
                    "SortAs": "SGML",
                    "GlossTerm": "Standard Generalized Markup Language",
                    "Acronym": "SGML",
                    "Abbrev": "ISO 8879:1986",
                    "GlossDef": {
                        "para": "A meta-markup language, used to create markup languages such as DocBook.",
                        "GlossSeeAlso": ["GML", "XML"]
                    },
                    "GlossSee": "markup"
                }
            }
        }
    }
}
"""
df2 = spark.createDataFrame(
    [
        (1, json_string),
    ],
    ['id', 'txt']
)
df2.dtypes
[('id', 'bigint'), ('txt', 'string')]
new_df = spark.read.json(df2.rdd.map(lambda r: r.txt))
new_df.printSchema()
root
|-- glossary: struct (nullable = true)
| |-- GlossDiv: struct (nullable = true)
| | |-- GlossList: struct (nullable = true)
| | | |-- GlossEntry: struct (nullable = true)
| | | | |-- Abbrev: string (nullable = true)
| | | | |-- Acronym: string (nullable = true)
| | | | |-- GlossDef: struct (nullable = true)
| | | | | |-- GlossSeeAlso: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- para: string (nullable = true)
| | | | |-- GlossSee: string (nullable = true)
| | | | |-- GlossTerm: string (nullable = true)
| | | | |-- ID: string (nullable = true)
| | | | |-- SortAs: string (nullable = true)
| | |-- title: string (nullable = true)
| |-- title: string (nullable = true)
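An alternative sketch that keeps the original columns: infer the schema once from the strings, then parse the column in place with from_json (available since Spark 2.1; column names as in the example above):

from pyspark.sql import functions as F

# infer the schema from the JSON strings, then apply it with from_json
schema = spark.read.json(df2.rdd.map(lambda r: r.txt)).schema
parsed = df2.withColumn("parsed", F.from_json("txt", schema))
parsed.select("id", "parsed.glossary.title").show()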

Traversing through the JSON object

I have a json file which has the following data:
{
    "glossary": {
        "title": "example glossary",
        "GlossDiv": {
            "title": "S",
            "GlossList": {
                "GlossEntry": {
                    "ID": "SGML",
                    "SortAs": "SGML",
                    "GlossTerm": "Standard Generalized Markup Language",
                    "Acronym": "SGML",
                    "Abbrev": "ISO 8879:1986",
                    "GlossDef": {
                        "para": "A meta-markup language, used to create markup languages such as DocBook.",
                        "GlossSeeAlso": [
                            "GML",
                            "XML"
                        ]
                    },
                    "GlossSee": "markup"
                }
            }
        }
    }
}
I need to read this file in PySpark and traverse through all the elements in the JSON. I need to recognize all the struct, array, and array-of-struct columns and create separate Hive tables for each struct and array column.
For Example:
Glossary will be one table with "title" as the column
GlossEntry will be another table with columns "ID", "SortAs", "GlossTerm", "acronym", "abbrev"
The data will grow in the future with more nested structures, so I will have to write generalized code that traverses all the JSON elements and recognizes all the struct and array columns.
Is there a way to loop through every element in the nested struct?
Spark is able to automatically parse and infer the JSON schema. Once it's in a Spark dataframe, you can access elements within the JSON by specifying their path.
json_df = spark.read.json(filepath)
json_df.printSchema()
Output:
root
|-- glossary: struct (nullable = true)
| |-- GlossDiv: struct (nullable = true)
| | |-- GlossList: struct (nullable = true)
| | | |-- GlossEntry: struct (nullable = true)
| | | | |-- Abbrev: string (nullable = true)
| | | | |-- Acronym: string (nullable = true)
| | | | |-- GlossDef: struct (nullable = true)
| | | | | |-- GlossSeeAlso: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- para: string (nullable = true)
| | | | |-- GlossSee: string (nullable = true)
| | | | |-- GlossTerm: string (nullable = true)
| | | | |-- ID: string (nullable = true)
| | | | |-- SortAs: string (nullable = true)
| | |-- title: string (nullable = true)
| |-- title: string (nullable = true)
Then choose the fields to extract:
json_df.select("glossary.title").show()
json_df.select("glossary.GlossDiv.GlossList.GlossEntry.*").select("Abbrev","Acronym","ID","SortAs").show()
Extracted output:
+----------------+
| title|
+----------------+
|example glossary|
+----------------+
+-------------+-------+----+------+
| Abbrev|Acronym| ID|SortAs|
+-------------+-------+----+------+
|ISO 8879:1986| SGML|SGML| SGML|
+-------------+-------+----+------+
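As for looping through every element: the inferred schema itself can be traversed recursively. A minimal sketch that classifies each field as struct, array, or leaf (the walk helper is illustrative, not a built-in):

from pyspark.sql.types import ArrayType, StructType

def walk(schema, prefix=""):
    # recursively print every field path with its type category
    for field in schema.fields:
        path = prefix + field.name
        dtype = field.dataType
        if isinstance(dtype, StructType):
            print(path, "-> struct")
            walk(dtype, path + ".")
        elif isinstance(dtype, ArrayType) and isinstance(dtype.elementType, StructType):
            print(path, "-> array of struct")
            walk(dtype.elementType, path + ".")
        elif isinstance(dtype, ArrayType):
            print(path, "-> array of", dtype.elementType.simpleString())
        else:
            print(path, "->", dtype.simpleString())

walk(json_df.schema)

Each path that walk reports as a struct or array is a candidate for its own table.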

Transform JSON Dataframe/RDD schema in Spark

I am seeking help in transforming a JSON dataframe/RDD from one schema structure to another. I am looking at more than 100,000 rows of JSON data being read, and I would like to transform that data before inserting it into a document store. I am performing the transformations in Spark using Scala. Currently I am using the json4s framework to parse one JSON row at a time and transform it before inserting it into the document store, which does not leverage the power of Spark processing. I would like to transform all the data together, which I believe will speed up the processing.
Following is a code snippet to illustrate my problem.
My input data looks like the inpString string, with multiple rows of JSON.
val inpString = """[{
    "data": {
        "id": "1234",
        "name": "abc",
        "time": "2015-01-01 13-44-21",
        "x": [50, 10],
        "y": [100, 20],
        "z": [150, 30],
        "x_limit": [70, 90, 15, 20],
        "y_limit": [70, 90, 15, 20],
        "z_limit": [70, 90, 15, 20]
    }
},
{
    "data": {
        "id": "1235",
        "name": "cde",
        "time": "2015-01-01 3-21-01",
        "x": [50, 10],
        "y": [100, 20],
        "z": [150, 30],
        "x_limit": [70, 90, 15, 20],
        "y_limit": [70, 90, 15, 20],
        "z_limit": [70, 90, 15, 20]
    }
}]"""
I read it into a dataframe and am able to select, group by, and do all other operations using Spark SQL.
val inputRDD = sc.parallelize(inpString::Nil)
val inputDf = sqlContext.read.json(inputRDD)
inputDf.printSchema()
The schema looks like this:
root
|-- data: struct (nullable = true)
| |-- id: string (nullable = true)
| |-- name: string (nullable = true)
| |-- time: string (nullable = true)
| |-- x: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- x_limit: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- y: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- y_limit: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- z: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- z_limit: array (nullable = true)
| | |-- element: long (containsNull = true)
The x array in my input data has two types of readings (say type1 and type2), and the x_limit array has the lower and upper limits for the two types of readings in the x array: x_limit[0] is the lower limit for type1, x_limit[1] is the upper limit for type1, x_limit[2] is the lower limit for type2, and x_limit[3] is the upper limit for type2. I need to group all the data for type1 together in one struct and all the data for type2 in another struct.
The following code snippet will give us the desired output schema:
val outString = """[{
    "data": {
        "id": "1234",
        "name": "abc",
        "time": "2015-01-01 13-44-21",
        "type1": {
            "x_axis": 50, "y_axis": 100, "z_axis": 150,
            "x_lower_limit": 70, "x_upper_limit": 90,
            "y_lower_limit": 70, "y_upper_limit": 90,
            "z_lower_limit": 70, "z_upper_limit": 90
        },
        "type2": {
            "x_axis": 10, "y_axis": 20, "z_axis": 30,
            "x_lower_limit": 15, "x_upper_limit": 20,
            "y_lower_limit": 15, "y_upper_limit": 20,
            "z_lower_limit": 15, "z_upper_limit": 20
        }
    }
},
{
    "data": {
        "id": "1235",
        "name": "cde",
        "time": "2015-01-01 3-21-01",
        "type1": {
            "x_axis": 50, "y_axis": 100, "z_axis": 150,
            "x_lower_limit": 70, "x_upper_limit": 90,
            "y_lower_limit": 70, "y_upper_limit": 90,
            "z_lower_limit": 70, "z_upper_limit": 90
        },
        "type2": {
            "x_axis": 10, "y_axis": 20, "z_axis": 30,
            "x_lower_limit": 15, "x_upper_limit": 20,
            "y_lower_limit": 15, "y_upper_limit": 20,
            "z_lower_limit": 15, "z_upper_limit": 20
        }
    }
}]"""
val outputRDD = sc.parallelize(outString::Nil)
val outputDf = sqlContext.read.json(outputRDD)
outputDf.printSchema()
Output Schema
root
|-- data: struct (nullable = true)
| |-- id: string (nullable = true)
| |-- name: string (nullable = true)
| |-- time: string (nullable = true)
| |-- type1: struct (nullable = true)
| | |-- x_axis: long (nullable = true)
| | |-- x_lower_limit: long (nullable = true)
| | |-- x_upper_limit: long (nullable = true)
| | |-- y_axis: long (nullable = true)
| | |-- y_lower_limit: long (nullable = true)
| | |-- y_upper_limit: long (nullable = true)
| | |-- z_axis: long (nullable = true)
| | |-- z_lower_limit: long (nullable = true)
| | |-- z_upper_limit: long (nullable = true)
| |-- type2: struct (nullable = true)
| | |-- x_axis: long (nullable = true)
| | |-- x_lower_limit: long (nullable = true)
| | |-- x_upper_limit: long (nullable = true)
| | |-- y_axis: long (nullable = true)
| | |-- y_lower_limit: long (nullable = true)
| | |-- y_upper_limit: long (nullable = true)
| | |-- z_axis: long (nullable = true)
| | |-- z_lower_limit: long (nullable = true)
| | |-- z_upper_limit: long (nullable = true)
I did some research to find a similar scenario but could not find one. I appreciate your input.
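For what it's worth, this reshaping can be expressed directly with struct and array-index expressions, avoiding per-row JSON parsing entirely. A PySpark sketch of the idea (the same expressions translate one-to-one to the Scala DataFrame API; type_struct is just an illustrative helper, and the index arithmetic follows the x_limit layout described above):

from pyspark.sql import functions as F

def type_struct(i):
    # reading i (0 = type1, 1 = type2): axes come from x/y/z[i],
    # limits from *_limit[2*i] (lower) and *_limit[2*i + 1] (upper)
    return F.struct(
        F.col("data.x")[i].alias("x_axis"),
        F.col("data.y")[i].alias("y_axis"),
        F.col("data.z")[i].alias("z_axis"),
        F.col("data.x_limit")[2 * i].alias("x_lower_limit"),
        F.col("data.x_limit")[2 * i + 1].alias("x_upper_limit"),
        F.col("data.y_limit")[2 * i].alias("y_lower_limit"),
        F.col("data.y_limit")[2 * i + 1].alias("y_upper_limit"),
        F.col("data.z_limit")[2 * i].alias("z_lower_limit"),
        F.col("data.z_limit")[2 * i + 1].alias("z_upper_limit"))

outputDf = inputDf.select(F.struct(
    F.col("data.id").alias("id"),
    F.col("data.name").alias("name"),
    F.col("data.time").alias("time"),
    type_struct(0).alias("type1"),
    type_struct(1).alias("type2")).alias("data"))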