How to parse a JSON file without Spark SQL?

I have the following JSON file.
{
  "reviewerID": "ABC1234",
  "productID": "ABCDEF",
  "reviewText": "GOOD!",
  "rating": 5.0,
},
{
  "reviewerID": "ABC5678",
  "productID": "GFMKDS",
  "reviewText": "Not bad!",
  "rating": 3.0,
}
I want to parse it without Spark SQL, using a JSON parser.
The result I want is a text file like this:
ABC1234::ABCDEF::5.0
ABC5678::GFMKDS::3.0
How can I parse the JSON file using a JSON parser in Spark with Scala?

tl;dr Spark SQL supports JSON in the format of one JSON document per file or per line. If you'd like to parse multi-line JSON documents that appear together in a single file, you'd have to write your own Spark support, as it's not currently possible.
A possible solution is to ask the "writer" (the process that writes the files) to be nicer and save one JSON document per file; that would make your life much sweeter.
If that doesn't get you far, you'd have to use the mapPartitions transformation with your parser and do the parsing yourself.
val input: RDD[String] = // ... load your JSONs here
val jsons = input.mapPartitions(json => // ... use your JSON parser here)
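To make that second option a bit more concrete, here is a rough sketch (not from the original answer): it assumes each input file is small enough to read whole via wholeTextFiles, uses Jackson as the JSON parser, and enables a parser feature so the trailing commas from the sample are tolerated; the paths are placeholders, and any other JSON library would do the same job.

import com.fasterxml.jackson.core.JsonParser
import com.fasterxml.jackson.databind.ObjectMapper

val files = sc.wholeTextFiles("reviews/*.json")      // (path, whole file content) pairs
val lines = files.mapPartitions { iter =>
  val mapper = new ObjectMapper()                    // one parser instance per partition
  // the sample objects end with trailing commas, which strict JSON forbids (Jackson 2.9+)
  mapper.configure(JsonParser.Feature.ALLOW_TRAILING_COMMA, true)
  iter.flatMap { case (_, content) =>
    // wrap the comma-separated objects in [ ] so they parse as one JSON array
    val root = mapper.readTree("[" + content.trim + "]")
    (0 until root.size).iterator.map { i =>
      val node   = root.get(i)
      val id     = node.get("reviewerID").asText
      val prod   = node.get("productID").asText
      val rating = node.get("rating").asDouble
      s"$id::$prod::$rating"
    }
  }
}
lines.saveAsTextFile("reviews-as-text")              // ABC1234::ABCDEF::5.0 ...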

Related

Extract nested JSON content from JSON using the JSON Extractor in jMeter

I have JSON content inside another JSON that I need to extract as it is, without parsing its contents:
{
  "id": 555,
  "name": "aaa",
  "JSON": "{\r\n \"fake1\": {},\r\n \"fake2\": \"bbbb\",\r\n \"fake3\": \"eee\" \r\n}",
  "after1": 1,
  "after2": "test"
}
When I use JSON Extractor with JSON Path expression:
$.JSON
It returns:
"{
"fake1": {},
"fake2": "bbbb",
"fake3": "eee"
}"
whereas I need to get the raw string:
"{\r\n \"fake1\": {},\r\n \"fake2\": \"bbbb\",\r\n \"fake3\": \"eee\" \r\n}"
I think you need to switch to JSR223 PostProcessor instead of the JSON Extractor and use the following code:
def json = new groovy.json.JsonSlurper().parse(prev.getResponseData()).JSON
vars.put('rawString', org.apache.commons.text.StringEscapeUtils.escapeJson(json))
You will be able to refer the extracted value as ${rawString} where required.
More information:
Apache Groovy - Parsing and producing JSON
Apache Groovy: What Is Groovy Used For?
console.log(JSON.stringify(data.JSON))
Here, data is your given JSON data.
First extract your JSON data, then stringify the nested value using JSON.stringify().
The confusing part is that you named the key in your JSON object "JSON".
In JavaScript, when you work with a parsed JSON object that contains another nested object, you can always reach the nested value with data.key_name, where data is the parsed JSON and key_name is the nested key.

Write JSON to a file in pretty print format in Django

I need to create some JSON files for exporting data from a Django system to Google BigQuery.
The problem is that Google BQ imposes some requirements on the JSON file, for example, that each object must be on its own line.
json.dumps writes a stringified version of the JSON, so it is not useful for me.
Django's serializers write better JSON, but they put everything on one line. All the information I found about pretty-printing is about json.dumps, which I cannot use.
I would like to know if anyone knows a way to create a JSON file in the format required by BigQuery.
Example:
JSONSerializer = serializers.get_serializer("json")
json_serializer = JSONSerializer()
data_objects = DataObject.objects.all()
with open("dataobjects.json", "w") as out:
json_serializer.serialize(data_objects, stream=out)
json.dumps is OK; you just have to use the indent argument, like this:
import json
myjson = '{"latitude":48.858093,"longitude":2.294694}'
mydata = json.loads(myjson)
print(json.dumps(mydata, indent=4, sort_keys=True))
Output:
{
    "latitude": 48.858093,
    "longitude": 2.294694
}

How to read invalid JSON format from Amazon Firehose

I've got a horrible scenario where I want to read the files that Kinesis Firehose creates on our S3 bucket.
Kinesis Firehose creates files that don't put every JSON object on a new line; it simply concatenates the objects into one file:
{"param1":"value1","param2":numericvalue2,"param3":"nested {bracket}"}{"param1":"value1","param2":numericvalue2,"param3":"nested {bracket}"}{"param1":"value1","param2":numericvalue2,"param3":"nested {bracket}"}
This scenario is not supported by a normal JSON.parse, and I have tried working with the following regex: .scan(/({((\".?\":.?)*?)})/)
But the scan only seems to work in scenarios without nested brackets.
Does anybody know a working/better/more elegant way to solve this problem?
The regex in the initial answer is for unquoted JSON values, which happens sometimes. This one:
({((\\?\".*?\\?\")*?)})
works for quoted and unquoted JSON values.
Besides that, I improved it a bit to keep it simpler, since you can have integer as well as string values; anything within string literals is ignored thanks to the double capturing group.
https://regex101.com/r/kPSc0i/1
Modify the input to be one large JSON array, then parse that:
require 'json'

input = File.read("input.json")
json = "[#{input.rstrip.gsub(/\}\s*\{/, '},{')}]"
data = JSON.parse(json)
You might want to combine the first two to save some memory:
json = "[#{File.read('input.json').rstrip.gsub(/\}\s*\{/, '},{')}]"
data = JSON.parse(json)
This assumes that } followed by some whitespace followed by { never occurs inside a key or value in your JSON encoded data.
As you concluded in your most recent comment, put_record_batch in Firehose requires you to manually add delimiters to your records so they can be easily parsed by the consumers. You can add a newline or some special character that is used solely for parsing, % for example, which should never appear in your payload.
The other option would be sending the records one by one. This is only viable if your use case does not require high throughput. For that you may loop over every record and load it as a stringified data blob. If done in Python, we would have a collection records holding all our JSON objects.
import json
import boto3

def send_to_firehose(records):
    firehose_client = boto3.client('firehose')
    for record in records:
        # each record is serialized and sent as its own Firehose record
        data = json.dumps(record)
        firehose_client.put_record(
            DeliveryStreamName=<your stream>,
            Record={'Data': data}
        )
Firehose by default buffers the data before sending it to your bucket and it should end up with something like this. This will be easy to parse and load in memory in your preferred data structure.
[
  {
    "metadata": {
      "schema_id": "4096"
    },
    "payload": {
      "zaza": 12,
      "price": 20,
      "message": "Testing sending the data in message attribute",
      "source": "coming routing to firehose"
    }
  },
  {
    "metadata": {
      "schema_id": "4096"
    },
    "payload": {
      "zaza": 12,
      "price": 20,
      "message": "Testing sending the data in message attribute",
      "source": "coming routing to firehose"
    }
  }
]

How to load multiline JSON in spark with Java

I am looking for a way to load multiline JSON into Spark using Java. The Spark SQLContext has methods to load JSON, but it only supports "one record per line". I have a multiline JSON file that I need to process.
Example input:
The JSON contains words, definitions and example sentences:
{
  "one-armedbandit":
  [
    {
      "function": "noun",
      "definition": "slot machine",
      "examples":
      [
      ]
    }
  ],
  ...
}
The Spark ingestion methods indeed accept a JSON-lines format. You could consider using a JSON processor to convert your data to this format before processing.
What I did was read the JSON into a List of POJOs with a JSON processor, then call parallelize on the SparkContext to get a JavaRDD.
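The question is about Java, but a rough Scala sketch of the same two steps (read and parse the whole document outside Spark, then parallelize the result) might look like the following; the input path is a placeholder and Jackson stands in for whatever JSON processor you prefer.

import com.fasterxml.jackson.databind.ObjectMapper
import scala.collection.JavaConverters._
import scala.io.Source

val raw  = Source.fromFile("words.json").mkString           // the whole multiline document
val root = new ObjectMapper().readTree(raw)

// flatten the document into simple (word, definition) pairs outside Spark ...
val entries = root.fields().asScala.flatMap { entry =>
  entry.getValue.elements().asScala.map { sense =>
    (entry.getKey, sense.get("definition").asText)
  }
}.toList

// ... then hand the plain Scala collection to Spark
val rdd = sc.parallelize(entries)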

How to convert DataFrame to JSON?

I have a huge JSON file; a small part of it is as follows:
{
  "socialNews": [{
    "adminTagIds": "",
    "fileIds": "",
    "departmentTagIds": "",
    ........
    ........
    "comments": [{
      "commentId": "",
      "newsId": "",
      "entityId": "",
      ....
      ....
    }]
  }]
  .....
}
I have applied lateral view explode on socialNews as follows:
val rdd = sqlContext.jsonFile("file:///home/ashish/test")
rdd.registerTempTable("social")
val result = sqlContext.sql("select * from social LATERAL VIEW explode(socialNews) social AS comment")
Now I want to convert this result (DataFrame) back to JSON and save it to a file, but I am not able to find any Scala API to do the conversion.
Is there any standard library to do this or some way to figure it out?
val result: DataFrame = sqlContext.read.json(path)
result.write.json("/yourPath")
The write method returns a DataFrameWriter and should be accessible to you on DataFrame objects. Just make sure that your rdd is of type DataFrame and not of the deprecated type SchemaRDD. You can explicitly provide a type annotation, val data: DataFrame, or convert to a DataFrame with toDF().
If you have a DataFrame there is an API to convert back to an RDD[String] that contains the json records.
val df = Seq((2012, 8, "Batman", 9.8), (2012, 8, "Hero", 8.7), (2012, 7, "Robot", 5.5), (2011, 7, "Git", 2.0)).toDF("year", "month", "title", "rating")
df.toJSON.saveAsTextFile("/tmp/jsonRecords")
df.toJSON.take(2).foreach(println)
This should be available from Spark 1.4 onward. Call the API on the result DataFrame you created.
The available APIs are listed in the Spark documentation.
sqlContext.read.json(dataFrame.toJSON)
When you run your Spark job as
--master local --deploy-mode client
then
df.write.json("path/to/file/data.json") works.
If you run on a cluster [on the header node] with --master yarn --deploy-mode cluster, a better approach is to write the data to AWS S3 or Azure Blob storage and read it from there:
df.write.json("s3://bucket/path/to/file/data.json") works.
If you still can't figure out a way to convert a DataFrame into JSON, you can use the to_json or toJSON built-in Spark functions.
Let me know if you have a sample DataFrame and a format of JSON to convert.
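For what it's worth, here is a small sketch of both options on a throwaway DataFrame; the column names and output path are made up, to_json is the SQL function available in newer Spark versions (2.1+, if I remember correctly), and toJSON is the method already shown above.

import org.apache.spark.sql.functions.{struct, to_json}

// assumes spark-shell; in an application you would also import spark.implicits._ for toDF
val df = Seq((2012, "Batman", 9.8), (2011, "Git", 2.0)).toDF("year", "title", "rating")

// to_json turns a struct column into a JSON-string column inside the DataFrame
df.select(to_json(struct(df.columns.map(df(_)): _*)).alias("json")).show(false)

// toJSON turns every row into a JSON string, which can then be written out as text
// (on Spark 1.x you would use df.toJSON.saveAsTextFile instead, as shown earlier)
df.toJSON.write.text("/tmp/df-as-json")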