Looping through all JSON elements using Unity Boomlagoon JSON

I'm using Boomlagoon JSON in my Unity project. My JSON file has several lines in it, and so far I can only get Boomlagoon to read the first one. Is there a way I can make a loop that goes through and parses the entire JSON file?
Here is my JSON:
{"type": 1, "squads": [{"player_id": 1, "squad": [1, 2, 3, 4]}, {"player_id": 2, "squad": [6, 7, 8, 9]}], "room_number": 1, "alliance_id": 1, "level": 1}
{"type": 2, "squads": [{"player_id": 2, "squad": [1, 2, 3, 4]}, {"player_id": 3, "squad": [6, 7, 8, 9]}], "room_number": 2, "alliance_id": 1, "level": 1}
{"type": 3, "squads": [{"player_id": 3, "squad": [1, 2, 3, 4]}, {"player_id": 4, "squad": [6, 7, 8, 9]}], "room_number": 3, "alliance_id": 1, "level": 1}
And when I do a loop like this:
foreach (KeyValuePair<string, JSONValue> pair in emptyObject) { ... }
it only gives me results for the first entry (in this example, type: 1). Thanks.

Your file actually contains 3 JSON objects, and parsing stops once the first object ends. You need to parse each line separately to get all of the data.
As an aside, you'll notice that if you paste your JSON into the validator at jsonlint.com it'll give you a parsing error where the second object begins.
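A minimal sketch of the line-by-line approach (assuming the whole file has already been loaded into a string `jsonText`, and that Boomlagoon's `JSONObject.Parse` returns null on failure, as its docs describe):

```csharp
// Sketch: parse each line of newline-delimited JSON separately.
foreach (string line in jsonText.Split('\n')) {
    if (string.IsNullOrEmpty(line.Trim())) continue;  // skip blank lines

    JSONObject obj = JSONObject.Parse(line);
    if (obj == null) continue;  // Parse returns null if the line is invalid

    foreach (KeyValuePair<string, JSONValue> pair in obj) {
        // pair.Key is e.g. "type", "squads", "room_number", ...
    }
}
```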

Related

Concatenate folder of multiple newline-delimited JSON files into single file

We have a directory /our_jsons that has the files:
file1.json
{"team": 1, "leagueId": 1, "name": "the ballers"}
{"team": 2, "leagueId": 1, "name": "the hoopers"}
file2.json
{"team": 3, "leagueId": 1, "name": "the gamerrs"}
{"team": 4, "leagueId": 1, "name": "the drivers"}
file3.json
{"team": 5, "leagueId": 1, "name": "the jumpers"}
{"team": 6, "leagueId": 1, "name": "the riserss"}
and we need to stack these into a single file output_file.json, that simply has all of the JSONs in our directory combined / stacked on top of one another:
output_file.json
{"team": 1, "leagueId": 1, "name": "the ballers"}
{"team": 2, "leagueId": 1, "name": "the hoopers"}
{"team": 3, "leagueId": 1, "name": "the gamerrs"}
{"team": 4, "leagueId": 1, "name": "the drivers"}
{"team": 5, "leagueId": 1, "name": "the jumpers"}
{"team": 6, "leagueId": 1, "name": "the riserss"}
Is this possible with a bash command on Mac/Linux? We're hoping this is easier than combining ordinary JSON files, because these are NDJSON files, so they truly just need to be stacked on top of one another. Our full data is much larger (~10 GB split over 100+ newline-delimited JSON files), and we're hoping for a decently performant solution (under 2-5 minutes) if possible. I just installed jq and am reading the docs, and will update if we come up with a solution.
EDIT:
It looks like jq . our_jsons/* > output_file.json concatenates the JSON, however the output is pretty-printed rather than newline-delimited, so it is not valid NDJSON...
cat tmp/* | jq -c '.' > tmp/output_file.json appears to get the job done!
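Since NDJSON files concatenate byte-for-byte into valid NDJSON, plain cat with no jq pass at all is also enough, and is about as fast as this can get for ~10 GB, provided every input file ends with a newline. A sketch, using shortened versions of the sample files from the question:

```shell
mkdir -p our_jsons
printf '{"team": 1, "leagueId": 1, "name": "the ballers"}\n' >  our_jsons/file1.json
printf '{"team": 2, "leagueId": 1, "name": "the hoopers"}\n' >> our_jsons/file1.json
printf '{"team": 3, "leagueId": 1, "name": "the gamerrs"}\n' >  our_jsons/file2.json

# NDJSON files stack verbatim into valid NDJSON:
cat our_jsons/*.json > output_file.json
wc -l < output_file.json
```

The jq -c pipeline from the edit above is still useful when the inputs might be pretty-printed or missing trailing newlines, since it re-emits one compact object per line.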

JSON column in pandas DataFrame - parsing and splitting

I have a pandas DataFrame with TEDx talks as items (rows), with a column 'ratings' in JSON format like the following. (The column depicts how the talk was described by the audience.)
[{"id": 7, "name": "Funny", "count": 19645}, {"id": 1, "name": "Beautiful", "count": 4573}, {"id": 9, "name": "Ingenious", "count": 6073}, ..........]
[{"id": 7, "name": "Funny", "count": 544}, {"id": 3, "name": "Courageous", "count": 139}, {"id": 2, "name": "Confusing", "count": 62}, {"id": 1, "name": "Beautiful", "count": 58}, ........]
Obviously the order of the descriptive words is not the same for each item (TEDx talk). Each word has an id (the same across all talks) and a count specific to each talk.
I want to manipulate the data and extract three new integer columns for the counts of funny, inspiring and confusing, storing the count of each of those words for the respective talks.
Among other stuff, tried this
df['ratings'] = df['ratings'].map(lambda x: dict(eval(x)))
in return i get this error
File "C:/Users/Paul/Google Drive/WEEK4/ted-talks/w4e1.py", line 30, in
df['ratings'] = df['ratings'].map(lambda x: dict(eval(x)))
ValueError: dictionary update sequence element #0 has length 3; 2 is required
I've been trying several different ways, but haven't even been able to get values out of the JSON-formatted column properly. Any suggestions?
You can use a list comprehension to flatten, converting each string repr to a list of dicts with ast.literal_eval, which is a safer solution than eval:
import pandas as pd
import ast
df = pd.DataFrame({'ratings': ['[{"id": 7, "name": "Funny", "count": 19645}, {"id": 1, "name": "Beautiful", "count": 4573}, {"id": 9, "name": "Ingenious", "count": 6073}]', '[{"id": 7, "name": "Funny", "count": 544}, {"id": 3, "name": "Courageous", "count": 139}, {"id": 2, "name": "Confusing", "count": 62}, {"id": 1, "name": "Beautiful", "count": 58}]']})
print (df)
ratings
0 [{"id": 7, "name": "Funny", "count": 19645}, {...
1 [{"id": 7, "name": "Funny", "count": 544}, {"i...
df1 = pd.DataFrame([y for x in df['ratings'] for y in ast.literal_eval(x)])
print (df1)
id name count
0 7 Funny 19645
1 1 Beautiful 4573
2 9 Ingenious 6073
3 7 Funny 544
4 3 Courageous 139
5 2 Confusing 62
6 1 Beautiful 58
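To get the three per-talk integer columns the question actually asks for, one option (a sketch along the same literal_eval lines; note that 'Inspiring' does not occur in the two sample rows, so it falls back to 0) is to map each row to a {name: count} dict first:

```python
import ast
import pandas as pd

# Shortened sample data from the question
df = pd.DataFrame({'ratings': [
    '[{"id": 7, "name": "Funny", "count": 19645}, {"id": 1, "name": "Beautiful", "count": 4573}]',
    '[{"id": 7, "name": "Funny", "count": 544}, {"id": 2, "name": "Confusing", "count": 62}]',
]})

# Turn each row's ratings string into a {name: count} dict once,
# then pull out the words of interest, defaulting missing words to 0.
counts = df['ratings'].map(
    lambda x: {d['name']: d['count'] for d in ast.literal_eval(x)})
for word in ('Funny', 'Inspiring', 'Confusing'):
    df[word.lower()] = counts.map(lambda d: d.get(word, 0))

print(df[['funny', 'inspiring', 'confusing']])
```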

Do I have to reorganize the data to animate it on time, using Unity and C#?

My Json file looks something like this (it's huge, so this is just simplified):
{
"foo": {
"id": [
20,
1,
3,
4,
60,
1
],
"times": [
330.89,
5.33,
353.89,
33.89,
14.5,
207.5
]
},
"poo": {
"id": [
20,
1,
3,
4,
60,
1
],
"times": [
3.5,
323.89,
97.7,
154.5,
27.5,
265.60
]
}
}
I have a JSON file similar to the one above, but much more complex. What I want to do is use the "times" and "id" data and perform an action for the right "id" at the exact time. The id and times arrays are mapped to each other (they share the same index). Is there a way to pick out the right id for the right time and perform an action, without too many complicated loops?
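One possible approach (a sketch, not from the original thread; assumes the JSON has already been deserialized into parallel arrays `ids` and `times`): pair the two arrays, sort by time once, then advance a single index as the clock passes each entry, so no per-frame searching is needed.

```csharp
// Sketch: pair the parallel arrays, sort once, fire events in time order.
var events = new List<(float time, int id)>();
for (int i = 0; i < ids.Length; i++) {
    events.Add((times[i], ids[i]));
}
events.Sort((a, b) => a.time.CompareTo(b.time));

int next = 0;  // index of the next event still to fire

void Update() {
    // Fire every event whose time has passed; the index only moves forward.
    while (next < events.Count && Time.time >= events[next].time) {
        PerformAction(events[next].id);  // hypothetical handler
        next++;
    }
}
```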

Loading lines of JSON from Amazon S3 to DynamoDB

I have some output from my apache-spark (PySpark) code that looks like this (one very simple JSON object per line):
{'id': 1, 'value1': 'blah', 'value2': 1, 'value3': '2016-07-19 19:35:13'}
{'id': 2, 'value1': 'yada', 'value2': 1, 'value3': '2016-07-19 19:35:13'}
{'id': 3, 'value1': 'blah', 'value2': 2, 'value3': '2016-07-19 19:35:13'}
{'id': 4, 'value1': 'yada', 'value2': 2, 'value3': '2016-07-19 19:35:13'}
{'id': 5, 'value1': 'blah', 'value2': 3, 'value3': '2016-07-19 19:35:13'}
{'id': 6, 'value1': 'yada', 'value2': 4, 'value3': '2016-07-19 19:35:13'}
I want to write these to a DynamoDB table as documents, and I'd rather not convert them to the Map format if I can avoid it. Any ideas on how to pull this off? There is very little documentation on this formatting issue.
There is a newer DocumentClient() interface, but I can't use it from the CLI. For example, feeding one of the above lines as the item to the aws dynamodb put-item command gives an error:
aws dynamodb put-item --table-name mytable --item file://item.txt
Parameter validation failed:
Invalid type for parameter Item.......
A JSON string such as the following can't be passed directly to put-item in DynamoDB:
{'id': 1, 'value1': 'blah', 'value2': 1, 'value3': '2016-07-19 19:35:13'}
It needs to be in DynamoDB's attribute-value format, like this (note that numbers are passed as strings):
{"id": {"N": "1"}, "value1": {"S": "blah"}, "value2": {"N": "1"}, "value3": {"S": "2016-07-19 19:35:13"}}
That is because, from the former, DynamoDB has no way of knowing the data types of id, value1, etc.
As I see it, you have two options:
Transform your data from the former to the latter using some utility, for example jq.
Use AWS Data Pipeline.
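A sketch of the transform in Python (a hypothetical helper, not the official tooling; it covers only the number/string types that appear in the sample, plus bool for safety). Note that the sample lines use single quotes, so they are Python dict reprs rather than strict JSON, which is why ast.literal_eval is used to read them:

```python
import ast
import json

def to_dynamodb_item(line):
    """Convert one line of Spark output (a Python-dict repr) into
    DynamoDB's attribute-value format."""
    record = ast.literal_eval(line)  # single-quoted, so not strict JSON
    item = {}
    for key, value in record.items():
        if isinstance(value, bool):          # bool is a subclass of int,
            item[key] = {"BOOL": value}      # so check it first
        elif isinstance(value, (int, float)):
            item[key] = {"N": str(value)}    # DynamoDB numbers are strings
        else:
            item[key] = {"S": str(value)}
    return item

line = "{'id': 1, 'value1': 'blah', 'value2': 1, 'value3': '2016-07-19 19:35:13'}"
print(json.dumps(to_dynamodb_item(line)))
```

Each converted item could then be written to a file and fed to aws dynamodb put-item --item file://item.json.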

Parse JSON data into Scala List and HashMap

I have JSON data shown below. I am using Python to encode a list, a dictionary and another list into JSON. The final JSON data will look like so:
{
"0": [3, 3, 3],
"1": {
"0": [0, 8, 9],
"1": [1, 2, 3, 4, 10, 11],
"2": [4]
},
"2": [1, 1, 1, 1]
}
My aim is to write some type of Scala function to extract the JSON data in a way that allows:
"0": [3, 3, 3] to be a List(3,3,3)
{"0":[0,8,9], ...} to be a HashMap[Int,List[Int]]
"2": [1, 1, 1, 1] to be a List(1,1,1,1)
Note that the original Python lists and dictionary will vary in size, but the keys "0", "1" and "2" will always be there, representing the list, dictionary and list in that order.
I am quite new to Scala and struggling to do this without external libraries. I am trying to use spray-json, since I am using a newer version of Scala (which has no built-in JSON parser).
Strictly speaking, that does parse as JSON, but the top-level values mix types (two lists and a nested object), which is awkward for typed Scala JSON decoders. Is that structure fixed? You may want to instead convert it to something more uniform,
eg.
{
"list" : [ 1,1,1],
"someotherObject" : {
"0" : [1,2,3]
},
"anotherList" : [9,8,7]
}
Then you could use Argonaut (for example), and define a decoder, which tells how to map that JSON to object types you specify. See http://argonaut.io/doc/codec/
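If the structure is fixed as posted, a sketch using spray-json (which the question already mentions) could pull the three pieces out field by field; the sample string and the "0"/"1"/"2" keys below are taken from the question, and the posted structure does parse as JSON:

```scala
import spray.json._
import DefaultJsonProtocol._

val raw = """{"0": [3, 3, 3], "1": {"0": [0, 8, 9], "2": [4]}, "2": [1, 1, 1, 1]}"""
val fields = raw.parseJson.asJsObject.fields

val first: List[Int] = fields("0").convertTo[List[Int]]    // List(3, 3, 3)
val middle: Map[Int, List[Int]] = fields("1")
  .convertTo[Map[String, List[Int]]]
  .map { case (k, v) => k.toInt -> v }                     // keys become Ints
val last: List[Int] = fields("2").convertTo[List[Int]]     // List(1, 1, 1, 1)
```

This sidesteps defining a single case-class codec for the mixed-type object, at the cost of addressing each top-level key by hand.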