Accessing child fields of nested JSON data using Spark SQL - json

I'm doing exploratory data analysis on Hadoop job history log files.
Below is a sample record used for the analysis:
{"type":"AM_STARTED","event":{"org.apache.hadoop.mapreduce.jobhistory.AMStarted":{"applicationAttemptId":"appattempt_1450790831122_0001_000001","startTime":1450791753482,"containerId":"container_1450790831122_0001_01_000001","nodeManagerHost":"centos65","nodeManagerPort":52981,"nodeManagerHttpPort":8042}}}
I just need to select child values such as applicationAttemptId, startTime and containerId of the event
org.apache.hadoop.mapreduce.jobhistory.AMStarted.
I tried the simple select query below:
val out = sqlContext.sql("select event.org.apache.hadoop.mapreduce.jobhistory.AMStarted.applicationAttemptId from sample")
but it throws the error below:
org.apache.spark.sql.AnalysisException: No such struct field org in org.apache.hadoop.mapreduce.jobhistory.AMStarted.applicationAttemptId
Unfortunately the data field is literally named "org.apache.hadoop.mapreduce.jobhistory.AMStarted".
I manipulated the data myself, renaming the field to org_apache_hadoop_mapreduce_jobhistory_AMStarted, and tried the same query like the one below:
val out = sqlContext.sql("select event.org_apache_hadoop_mapreduce_jobhistory_AMStarted.applicationAttemptId from sample")
Now I'm able to access the child fields of AMStarted, but it's not the right way to do so.
Is there any way to do this without manipulating the data?

After spending some quality time searching for a solution, I hit on the simple idea of wrapping the field name in backticks, which did the trick:
`org.apache.hadoop.mapreduce.jobhistory.AMStarted`
And then the query works like a charm:
val out = sqlContext.sql("select event.`org.apache.hadoop.mapreduce.jobhistory.AMStarted`.applicationAttemptId from sample")
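For reference, here is a minimal PySpark sketch of the same trick (file name and session setup are assumptions; the only point is the backtick quoting of the dotted field name):

from pyspark.sql import SparkSession

# Hypothetical setup: one job-history JSON record per line in sample.json
spark = SparkSession.builder.getOrCreate()
spark.read.json("sample.json").createOrReplaceTempView("sample")

out = spark.sql("""
    SELECT event.`org.apache.hadoop.mapreduce.jobhistory.AMStarted`.applicationAttemptId,
           event.`org.apache.hadoop.mapreduce.jobhistory.AMStarted`.startTime,
           event.`org.apache.hadoop.mapreduce.jobhistory.AMStarted`.containerId
    FROM sample
""")
out.show(truncate=False)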

Related

Creating External Table with Redshift Spectrum from nested JSON

I'm creating an external table from JSON data with input format org.apache.hadoop.mapred.TextInputFormat, output format org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, and SerDe org.openx.data.jsonserde.JsonSerDe.
One of the attributes of the JSON is a highly nested object called groups. The nested data doesn't follow a strict schema, so not all JSON objects within groups have the same attributes. I'm having trouble accessing groups' attributes, and I suspect I'm not casting groups to the proper data type.
Here is a sample of the data:
{"entity":"1111111","date":"2019-05-29T00:00:00.000Z","dataset":"authorizations","aggregations":{"sellersAuths":1,"sellersDeAuths":0},"groups":{"sellersAuths":{"mws_region":{"USAmazon":1},"created_by":{"SWIPE":1},"last_updated_by":{"SWIPE":1}},"sellersDeAuths":{"mws_region":{"EUAmazon":0},"created_by":{"SellerCent":0},"last_updated_by":{"JPAmazon":0}}}}
{"entity":"22222222","date":"2019-05-29T00:00:00.000Z","dataset":"authorizations","aggregations":{"sellersAuths":1,"sellersDeAuths":0},"groups":{"sellersAuths":{"mws_region":{"EUAmazon":1},"created_by":{"SWIPE":1},"last_updated_by":{"SWIPE":1}},"sellersDeAuths":{"mws_region":{"EUAmazon":0},"created_by":{"SWIPE":0},"last_updated_by":{"SWIPE":0}}}}
{"entity":"3333333","date":"2019-05-29T00:00:00.000Z","dataset":"authorizations","aggregations":{"sellersAuths":1,"sellersDeAuths":0},"groups":{"sellersAuths":{"mws_region":{"EUAmazon":1},"created_by":{"SWIPE":1},"last_updated_by":{"SWIPE":1}},"sellersDeAuths":{"mws_region":{"EUAmazon":0},"created_by":{"SWIPE":0},"last_updated_by":{"SWIPE":0}}}}
I've tried a couple of different ways of casting the data type of groups when creating the external table. I tried using the SUPER type: when I select groups I get the entire JSON, but when I select an attribute of groups, such as select groups.sellersAuths from ... or select groups."sellersAuths" from ..., I get relation "groups" does not exist.
I've also tried casting it as a struct<key:VARCHAR, value:struct<key:VARCHAR, value:struct<key:VARCHAR, value:FLOAT8>>>; however, when accessing something like groups.key or groups.value.key, I always get NULL. I'm not sure how to cast the data type of groups when creating the external table, and I'm not sure whether my use case is what the SUPER type is for.
I've also tried using JSON_PARSE after casting the data to VARCHAR, SUPER, or struct, but that presents issues as well.
Thanks a ton for reading!

How to query an array field (AWS Glue)?

I have a table in AWS Glue, and the crawler has defined one field as array.
The content is in S3 files that have a JSON format.
The table is TableA, and the field is members.
There are a lot of other fields such as strings, booleans, doubles, and even structs.
I am able to query them all using a simple query such as:
SELECT
content.my_boolean,
content.my_string,
content.my_struct.value
FROM schema.tableA;
The issue is when I add content.members into the query.
The error I get is: [Amazon](500310) Invalid operation: schema "content" does not exist.
Content exists, because I am able to select other fields from the main key in the JSON (content).
It's probably something related to how to query an array field in Spectrum.
Any idea?
You have to alias the table to extract the fields from the external schema:
SELECT
a.content.my_boolean,
a.content.my_string,
a.content.my_struct.value
FROM schema.tableA a;
I had the same issue with my data. I really don't know why it needs this alias, but it works. If you need to access elements of an array, you have to explode it like:
SELECT member.<your-field>
FROM schema.tableA a, a.content.members AS member;
Reference
You need to create a Glue Classifier.
Select JSON as the classifier type
and for the JSON Path input the following:
$[*]
Then run your crawler. It will infer your schema and populate your table with the correct fields instead of just one big array. Not sure if this was what you were looking for, but I figured I'd drop this here in case others have the same problem I had.
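For anyone scripting this instead of using the console, here is a hedged boto3 sketch of the same classifier setup (the classifier name is a placeholder, and attaching it to your crawler is left to your existing crawler definition):

import boto3

glue = boto3.client("glue")

# Create a JSON classifier whose JSON path treats each top-level array element as a record
glue.create_classifier(
    JsonClassifier={
        "Name": "my-json-classifier",  # hypothetical name
        "JsonPath": "$[*]",
    }
)

# Reference the classifier from your crawler (when creating or updating it),
# then re-run the crawler so it re-infers the schema.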

How do I convert a column of JSON strings into a parquet table

I am trying to convert some data that I am receiving into a parquet table that I can eventually use for reporting, but I feel like I am missing a step.
I receive files that are CSVs with the format "id", "event", "source", where the "event" column is a GZIP-compressed JSON string. I've been able to get a dataframe set up that extracts the three columns, including getting the JSON string unzipped. So I now have a table with
id | event | source | unencoded_event
where unencoded_event is the JSON string.
What I'd like to do at this point is to take that one string column of JSON and parse it out into individual columns. Based on a comment from another developer (that the process of converting to parquet is smart enough to use just the first row of my results to figure out the schema), I've tried this:
df1 = spark.read.json(df.select("unencoded_event").rdd).write.format("parquet").saveAsTable("test")
But this just gives me a single-column table with a _corrupt_record column that just contains the JSON string again.
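As an aside, one likely reason for the _corrupt_record column is that df.select(...).rdd is an RDD of Row objects rather than of raw JSON strings, and spark.read.json() wants the latter. A hedged sketch of that variant (column and table names as above):

# Map each Row down to its single string field before handing it to the JSON reader
json_strings = df.select("unencoded_event").rdd.map(lambda row: row[0])
spark.read.json(json_strings).write.format("parquet").saveAsTable("test")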
What I'm trying to get to is to take schema:
{
"agent"
--"name"
--"organization"
"entity"
--"name"
----"type"
----"value"
}
And get the table to, ultimately, look like:
AgentName | Organization | EventType | EventValue
Is the step I'm missing just explicitly defining the schema or have I oversimplified my approach?
Potential complications here: the JSON schema is actually more involved than above; I've been assuming I can expand out the full schema into a wider table and then just return the smaller set I care about.
I have also tried taking a single result from the file (so, a single JSON string), saving it as a JSON file and trying to read from it. Doing so works, i.e., spark.read.json("myJSON.json") parses the string into the arrays I was expecting. This is also true if I copy multiple strings.
This doesn't work if I take my original results and try to save them. If I try to save just the column of strings as a JSON file
dfWrite = df.select(col("unencoded_event"))
dfWrite.write.mode("overwrite").json(write_location)
then read them back out, it doesn't behave the same way: each row is still treated as a string.
I did find one solution that works. This is not a perfect solution (I'm worried that it's not scalable), but it gets me to where I need to be.
I can select the data using get_json_object() for each column I want (sorry, I've been fiddling with column names and the like over the course of the day):
from pyspark.sql.functions import get_json_object

dfResults = df.select(
    get_json_object("unencoded_event", "$.agent[0].name").alias("userID"),
    get_json_object("unencoded_event", "$.entity[0].identifier.value").alias("itemID"),
    get_json_object("unencoded_event", "$.entity[0].detail[1].value").alias("itemInfo"),
    get_json_object("unencoded_event", "$.recorded").alias("timeStamp"),
)
The big thing I don't love about this is that it appears I can't use filter/search options with get_json_object(). That's fine for the foreseeable future, because right now I know where all the data should be and don't need to filter.
I believe I can also use from_json(), but that requires defining the schema within the notebook. This isn't a great option, because I only need a small part of the JSON, so it feels like unnecessary effort to define the entire schema. (I also don't have control over what the overall schema would be, so this becomes a maintenance issue.)
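For completeness, here is a hedged sketch of the from_json() route under the assumption that only the fields of interest need to be declared (field and column names below are guesses based on the JSON paths above, not the real schema):

from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, ArrayType, StringType

# Partial schema: only the parts of the JSON we want to pull out.
# JSON fields that are not declared here are simply ignored by from_json().
partial_schema = StructType([
    StructField("agent", ArrayType(StructType([StructField("name", StringType())]))),
    StructField("recorded", StringType()),
])

dfParsed = df.withColumn("parsed", from_json(col("unencoded_event"), partial_schema))
dfResults = dfParsed.select(
    col("parsed.agent")[0]["name"].alias("userID"),
    col("parsed.recorded").alias("timeStamp"),
)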

Dynamic field name query using N1QL

I'm having a use case here which I can't seem to solve. Basically, I need to create a web service where users may query the Couchbase cluster "dynamically". I'm storing metadata for different files, and the "creation" of this metadata is up to the user: I don't have specific fields in my Java POJO; I'm inserting a Map which gets stored as a nested object in Couchbase.
Now the query I need is pretty simple on paper and looks something like this:
@Query("#{#n1ql.selectEntity} WHERE #{#n1ql.filter} AND $1 = $2")
List<FileMetadata> findListMetadata(String pKey, String pValue);
But it doesn't seem to work: $1 never gets replaced by the pKey variable.
I'm using Couchbase 4.5 with the Spring Data connector.
Any ideas on how to solve this use case?
You need to dynamically generate the query string, so that pKey is inserted into the query string itself and pValue is passed as a parameter (as you are doing); query placeholders bind values, not field names.

Parsing JSON into data structures with lower-case field names

I am parsing JSON into ABAP structures, and it works:
DATA cl_oops TYPE REF TO cx_dynamic_check.
DATA(text) = `{"TEXT":"Hello ABAP, I'm JSON!","CODE":"123"}`.

TYPES: BEGIN OF ty_structure,
         text TYPE string,
         code TYPE char3,
       END OF ty_structure.

DATA: wa_structure TYPE ty_structure.

TRY.
    text = |\{"DATA":{ text }\}|.
    CALL TRANSFORMATION id
      OPTIONS clear = 'all'
      SOURCE XML text
      RESULT data = wa_structure.
    WRITE: wa_structure-text, wa_structure-code.
  CATCH cx_transformation_error INTO cl_oops.
    WRITE cl_oops->get_longtext( ).
ENDTRY.
The interesting part is that CODE and TEXT are case sensitive. For most external systems, having all-caps identifiers is ugly, so I have been trying to parse {"text":"Hello ABAP, I'm JSON!","code":"123"} without any success. I looked into the transformation options, checked whether a modified copy of id might accomplish this, and googled it, but I have no idea how to accomplish this.
It turns out that SAP has a sample program showing how to do this.
There is basically an out-of-the-box transformation that does this for you, called demo_json_xml_to_upper. The name is a bit unfortunate, so I would suggest renaming this transformation and adding it to the customer namespace.
I am a bit bummed that it only works through xstrings, so debugging it becomes a pain. But it works perfectly and solved my problem.
My solution to this is low-tech. I spent hours looking for a simple way out of the mess that the JSON response could have field names in lower or camel case. Here it is: if you know the field names - and obviously you do, because your table has the same column names - just replace the lower-case name with an upper-case one in your xstring.
If in your table the field is USERS_ID and in the JSON xstring it is users_ID, go for:
REPLACE ALL OCCURRENCES OF 'users_ID' IN ls_string WITH 'USERS_ID'.
Do the same for all fields and the object name, then call transformation ID.