I am quite new to programming, and especially to Microsoft Graph.
I am having problems handling the response to:
https://graph.microsoft.com/v1.0/me/drive/root/children
The response looks like this (just much longer):
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#users('xyz%40hotmail.com')/drive/root/children",
"value": [
{
"createdBy": {
"user": {
"displayName": "xyz",
"id": "cf58e4781082"
}
},
"createdDateTime": "2009-01-08T08:52:07.063Z",
"cTag": "adDpFREJDR4RTQMTgxMDgyITEyOC42MzYxODM0MTU0Mjc3MDAwMDA",
"eTag": "aRURCQ0Y1OEU0A4MiExMjguMA",
"id": "EDBCF58E471082!128",
"lastModifiedBy": {
"user": {
"displayName": "xyz",
"id": "edbcf58e48082"
}
}, ............. etc...
The response I received seems to be valid JSON (I believe ><), but I cannot figure out how to parse it into an array containing the folder names.
Please help!
Have you considered using the Microsoft Graph client library? It will deserialize the JSON for you. Your call will look like this:
// Return all children files and folders off the drive root.
var driveItems = await graphClient.Me.Drive
.Root
.Children
.Request()
.GetAsync();
foreach (var item in driveItems)
{
// Get your item information
}
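If what you ultimately need is an array of the folder names, you can filter on the Folder facet, which is only set for folder items. A minimal sketch (assumes using System.Linq; driveItems is the collection returned above):
// Collect the names of the children that are folders.
var folderNames = driveItems
    .Where(item => item.Folder != null)   // only folder items have the Folder facet
    .Select(item => item.Name)
    .ToList();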
Here are some samples to help you get started:
https://github.com/microsoftgraph?utf8=%E2%9C%93&q=csharp
You can use the JavaScriptSerializer to do this, assuming json contains the JSON response:
// json contains the JSON response
var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
var result = serializer.DeserializeObject(json);
This has been discussed earlier. See this thread: Easiest way to parse JSON response
Refer to this link for JavaScriptSerializer: https://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer(v=vs.110).aspx
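If you stay with JavaScriptSerializer instead of the client library, one way to pull the folder names out of the untyped result is via the nested dictionaries it produces. A rough sketch (assumes using System.Linq and System.Collections.Generic; Graph items expose a "name" property, and only folders carry a "folder" facet):
var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
var root = (Dictionary<string, object>)serializer.DeserializeObject(json);
var folderNames = ((object[])root["value"])
    .Cast<Dictionary<string, object>>()
    .Where(item => item.ContainsKey("folder"))   // only folders have the "folder" facet
    .Select(item => (string)item["name"])
    .ToList();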
I want to get posts from the Reddit API. The posts are in the "children" node, but each object has another object inside.
Can somebody help me write a function that converts this JSON into a list of Dart objects?
Here is the JSON string.
{
"kind": "Listing",
"data": {
"after": "t3_zzzhq4",
"dist": 2,
"children": [
{
"kind": "t3",
"data": {
"selftext": "blablabla",
"author_fullname": "3xblabla",
"title": "moreblabla",
"created": 1672515982,
"id": "10020p0"
}
},
{
"kind": "t3",
"data": {
"selftext": "blablabla",
"author_fullname": "3xblabla",
"title": "moreblabla",
"created": 1672515982,
"id": "10020p0"
}
}
],
"before": null
}
}
I have tried all the tutorials on complex JSON parsing, but none of them met my needs. I know how to parse simple JSON, but this is deeply nested JSON, which bothers me a lot, and I can't quite grasp it. I'd appreciate any help.
Solution:
First, go to json to dart and paste in the JSON string; this generator will create all the classes needed for your JSON.
Then you will need to decode the string:
final jsonResponse = json.decode(jsonString);
And then deserialize your JSON like this:
final List children = jsonResponse['data']['children'];
List<Post> postsList = children.map((i) => Post.fromJson(i['data'])).toList();
For me, the crucial part was i['data']. After adding that, I could deserialize all the objects living inside that node. Thanks everyone! I hope this will be helpful to someone else! Cheers.
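For reference, the generated Post class might look roughly like this (a hand-written sketch based on the fields in the JSON above, not the generator's exact output):
class Post {
  final String selftext;
  final String authorFullname;
  final String title;
  final int created;
  final String id;

  Post({
    required this.selftext,
    required this.authorFullname,
    required this.title,
    required this.created,
    required this.id,
  });

  // Builds a Post from one "data" object inside the "children" array.
  factory Post.fromJson(Map<String, dynamic> json) => Post(
        selftext: json['selftext'],
        authorFullname: json['author_fullname'],
        title: json['title'],
        created: json['created'],
        id: json['id'],
      );
}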
I have a DataStream[ObjectNode] which I read as deserialized JSON from a Kafka topic. One of the elements of this ObjectNode is actually an array of events. This array has varying length. The incoming JSON stream looks like this:
{
"eventType": "Impression",
"deviceId": "359849094258487",
"payload": {
"vertical_name": "",
"promo_layout_type": "aa",
"Customer_Id": "1011851",
"ecommerce": {
"promoView": {
"promotions": [{
"name": "/-category_icons_all",
"id": "300275",
"position": "slot_5_1",
"creative": "Central/Gift Card/00000001B890D1739913DDA956AB5C79775991EC"
}, {
"name": "/-category_icons_all",
"id": "300276",
"position": "slot_6_1",
"creative": "Lifestyle/Gift Card/00000001B890D1739913DDA956AB5C79775991EC"
}, {
"name": "/-category_icons_all",
"id": "413002",
"position": "slot_7_1",
"creative": "Uber/Deals/00000001B890D1739913DDA956AB5C79775991EC"
}]
}
}
}
}
I want to be able to explode the promotions array so that each element inside becomes an individual message that can be written to a sink Kafka topic. Does Flink provide an explode feature in the DataStream and/or Table API?
I have tried doing a RichFlatMap on this stream to collect individual rows, but this also just returns a DataStream[Seq[GenericRecord]], as below:
class PromoMapper(schema: Schema) extends RichFlatMapFunction[node.ObjectNode,Seq[GenericRecord]] {
override def flatMap(value: ObjectNode, out: Collector[Seq[GenericRecord]]): Unit = {
val promos = value.get("payload").get("ecommerce").get("promoView").get("promotions").asInstanceOf[Seq[node.ObjectNode]]
val record = for{promo <- promos} yield {
val processedRecord: GenericData.Record = new GenericData.Record(schema)
promo.fieldNames().asScala.foreach(f => processedRecord.put(f,promo.get(f)))
processedRecord
}
out.collect(record)
}
}
Please help.
Using a flatMap is the right idea (not sure why you bothered with a RichFlatMapFunction, but that's a detail).
It seems like you should be calling out.collect(processedRecord) for each element inside the for loop, rather than once on the Seq produced by that loop.
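A minimal, untested sketch of that change, keeping the same field-copying logic but emitting one GenericRecord per promotion (it assumes the JSON shape shown in the question):
import scala.collection.JavaConverters._
import com.fasterxml.jackson.databind.node.ObjectNode
import org.apache.avro.Schema
import org.apache.avro.generic.{GenericData, GenericRecord}
import org.apache.flink.api.common.functions.FlatMapFunction
import org.apache.flink.util.Collector

class PromoExploder(schema: Schema) extends FlatMapFunction[ObjectNode, GenericRecord] {
  override def flatMap(value: ObjectNode, out: Collector[GenericRecord]): Unit = {
    val promos = value.get("payload").get("ecommerce").get("promoView").get("promotions")
    promos.elements().asScala.foreach { promo =>
      val record = new GenericData.Record(schema)
      promo.fieldNames().asScala.foreach(f => record.put(f, promo.get(f)))
      out.collect(record) // one outgoing message per promotion
    }
  }
}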
I'm new to Node-RED and want to parse content received from the Wikipedia API. I send requests to the query endpoint:
https://en.wikipedia.org/w/api.php?action=query&titles={{{query}}}&prop=revisions&rvprop=parsetree&format=json&rvsection=0
The response looks similar to this:
{
...,
"query": {
"normalized": [ ... ],
"pages": {
"123456789": {
"pageid": 123456789,
"ns": 0,
"title": "title",
"revisions": [{
"parsetree": "...."
}]
}
}
}
}
I need to parse the content of parsetree, but I am unable to get the first JSON object of pages dynamically.
Of course I can do something like: msg.payload.query.pages.123456789.revisions[0].parsetree
But I have a lot of titles I'd like to query and process.
Is there another way to get the content of parsetree?
You can always get hold of the list of keys in an object using the Object.keys(obj) method (doc)
So something like this should work:
var pages = Object.keys(msg.payload.query.pages);
for (var i=0; i<pages.length; i++) {
var parsetree = msg.payload.query.pages[pages[i]].revisions[0].parsetree;
...
}
I have a problem with a huge HTTP response containing a JSON slab, where only a portion is of interest.
I cannot change the response structure.
Here is an example:
{
"searchString": "search",
"redirectUrl": "",
"0": {
"numRecords": 123,
"refinementViewModelCollector": {},
// Lots of data here
"results": [
{
"productCode": "123",
"productShortDescription": "Desc",
"brand": "Brand",
"productReview": {
"reviewScore": 0
},
"priceView": {
"salePriceDisplayable": false,
},
"productImageUrl": "url",
"alternateImageUrls": [
"url1"
],
"largeProductImageUrl": "url4",
"videoUrl": ""
},
{
"productCode": "124",
"productShortDescription": "Desc",
"brand": "Brand",
"productReview": {
"reviewScore": 0
},
"priceView": {
"salePriceDisplayable": false,
},
"preOrder": false,
"productImageUrl": "url",
"alternateImageUrls": [
"url1"
],
"largeProductImageUrl": "url4",
"videoUrl": ""
}
]
//lots of data here
}
}
My point of interest is the entries in the results JSON array, but they are sitting in the middle of the JSON.
I created a small Play WS Client like this:
val wsClient: WSClient = ???
val ret = wsClient.url("url").stream()
ret.flatMap { response =>
response.body.via(JsonFraming.objectScanner(1024))
.map(_.utf8String)
.runWith(Sink.foreach(println))
}
This will not work because it treats the whole JSON slab as one JSON object. I need to skip data until the "results" entry appears in the stream, then start parsing the entries and skip all the rest.
Any ideas on how to do this?
Check out Alpakka's JSON module, which can stream specific parts of a nested JSON structure:
import akka.stream.alpakka.json.scaladsl.JsonReader

response
.body
.via(JsonReader.select("$.0.results[*]"))
.map(_.utf8String)
.runWith(Sink.foreach(println)) // or runForeach(println)
There are parsers that support parsing as a stream. For a good example, check out this Circe example: https://github.com/circe/circe/tree/master/examples/sf-city-lots
I'd love a better, Scala-specific answer to this question, but check out the "Mixed Reads Example" in the documentation for Google's GSON library:
https://sites.google.com/site/gson/streaming
Gson also supports mixed streaming & object model access. This lets your application have the best of both worlds: the productivity of object model access with the efficiency of streaming
...
This code reads a JSON document containing an array of messages. It steps through array elements as a stream to avoid loading the complete document into memory. It is concise because it uses Gson’s object-model to parse the individual messages
This should have great memory-performance (the code reads from a Java InputStream, so the full structure is never in memory), but may require some effort to get your results into Scala case classes.
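For illustration, here is a rough, untested sketch of that mixed style from Scala: stream past everything until the results array is reached, then let Gson's object model handle each entry (Product is a hypothetical case class covering only a couple of the fields above):
import java.io.{InputStream, InputStreamReader}
import scala.collection.mutable.ListBuffer
import com.google.gson.Gson
import com.google.gson.stream.JsonReader

// Hypothetical model for one entry of the "results" array.
case class Product(productCode: String, brand: String, productImageUrl: String)

def readResults(in: InputStream): List[Product] = {
  val gson = new Gson()
  val reader = new JsonReader(new InputStreamReader(in, "UTF-8"))
  val results = ListBuffer[Product]()

  reader.beginObject() // top-level object: "searchString", "redirectUrl", "0", ...
  while (reader.hasNext()) {
    if (reader.nextName() == "0") {
      reader.beginObject()
      while (reader.hasNext()) {
        if (reader.nextName() == "results") {
          reader.beginArray()
          while (reader.hasNext()) {
            // Object-model parse of a single entry, streamed one at a time.
            val product: Product = gson.fromJson(reader, classOf[Product])
            results += product
          }
          reader.endArray()
        } else reader.skipValue() // skip the bulky surrounding fields
      }
      reader.endObject()
    } else reader.skipValue()
  }
  reader.endObject()
  reader.close()
  results.toList
}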
I have been working on eliminating the excess trailing commas in the JSON object I have below.
{"rules": {
"1000": {
"action": "2",
"category": "skype",
"entity": "Private",
"id": "1000",
},
"1200": {
"action": "2",
"category": "http",
"entity": "Public",
"id": "1200",
},
"100": {
"action": "2",
"category": "ftp",
"entity": "Public",
"id": "100",
},
"0": {
"entity": "Private",
"category": "alcohol, tobacco",
"action": "1",
"id": "low",
},
"3000": {
} }}
Maybe you have some insights on the cleanest way to eliminate them using AngularJS.
The data was retrieved with this code snippet:
var request = {
url: 'sample/uri',
method: "GET",
transformResponse: specialTransform
};
var response = $q.defer( );
$http( request ).success( function( THIS DATA -> data, status ) {
eval
var fixTrailingCommas = function (jsonString) {
var jsonObj;
eval('jsonObj = ' + jsonString);
return JSON.stringify(jsonObj);
};
fixTrailingCommas('{"rules": { "1000": { "action": "2", "category": "skype", "entity": "Private", "id": "1000" , } } }');
Please use eval here only if you completely trust the incoming JSON, and also be aware of the other evils of eval as described on MDN and its note on JSON parsing:
Note that since JSON syntax is limited compared to JavaScript syntax, many valid JavaScript literals will not parse as JSON. For example, trailing commas are not allowed in JSON, and property names (keys) in object literals must be enclosed in quotes. Be sure to use a JSON serializer to generate strings that will be later parsed as JSON.
You may also choose to rely on the JSON2 implementation by Douglas Crockford, which uses eval internally:
On current browsers, this file does nothing,
preferring the built-in JSON object. There is no reason to use this file unless
fate compels you to support IE8, which is something that no one should ever
have to do again.
But because we really need to use this library, we have to make a few code modifications, e.g. simply comment out the JSON type check, which will then override the native browser object (or we may also introduce a new JSON2 global variable):
//if (typeof JSON !== 'object') {
JSON = {};
//}
P.S. The other parsing functions, json_parse.js and json_parse_state.js, which don't use eval, throw a syntax error.
Angular part
var config = {
transformResponse: function (data, headers) {
if(headers("content-type") === "application/json" && angular.isString(data)) {
try {
data = JSON.parse(data);
} catch (e) {
// if parsing error, try another parser
// or just fix commas, if you know for sure that the problem is in commas
data = JSON2.parse(data);
}
return data;
} else {
return data;
}
}
};
$http.get("rules.json", config).success(function (data) {
$scope.rules = data;
});
So, as you said, the JSON is wrongly generated on the server you are getting it from; can you change the way it is generated there? (See: Can you use a trailing comma in a JSON object?)
In case you are unable to do so, you need to use something like what is mentioned here:
Can json.loads ignore trailing commas?
a library to repair the JSON object, like https://www.npmjs.com/package/jsonrepair (see the sketch below),
(or try some online fix tool here: http://www.javascriptformat.com/)
or some regexp magic.
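For example, with the jsonrepair package the fix could look roughly like this (a sketch; the jsonrepair function is assumed to be available as documented on the package's npm page):
const { jsonrepair } = require('jsonrepair');

// Repair the malformed string first, then parse it as regular JSON.
const repaired = jsonrepair('{"rules": {"1000": {"action": "2", "id": "1000",}}}');
const rules = JSON.parse(repaired);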