Inconsistency extracting OBJ - autodesk-forge

We have started to notice some inconsistency when extracting OBJ files using https://developer.api.autodesk.com/modelderivative/v2/designdata/{urn}/manifest
Sometimes the completed model manifest contains an OBJ role ("role":"obj"); at other times, exactly the same file is processed and no OBJ role appears.
What is the reason for this inconsistency, and how can we prevent it?

How is the WeatherForecastController in template Blazor application configured to return JSON?

I created a standard Blazor client application, but found that any new API controllers I add simply do not work.
Even just copying the WeatherForecastController to a WeatherForecast2Controller produces the same error (an HTML page is returned instead of any JSON data):
'<' is an invalid start of a value. Path: $ | LineNumber: 0 | BytePositionInLine: 0.
I assume there must be some configuration somewhere that tells it what return type to expect, but I have no idea where to look. The only controller that returns valid JSON data is the WeatherForecastController you get out of the box.
Works:
group = await Http.GetFromJsonAsync<GenericGroup[]>($"WeatherForecast");
Fails:
group = await Http.GetFromJsonAsync<GenericGroup[]>($"WeatherForecast2");
The controllers are identical aside from the name (I literally copied the existing controller). I have been through every single file in the solution and see nothing that would give WeatherForecast special routing.
Clue 1:
It does not hit a breakpoint in any controller aside from the WeatherForecastController
Clue 2:
The HTML returned (which causes the JSON deserialization to fail) is actually the initial Loading... page from the Blazor client, so it looks like the requests are going through the standard client-side routing and are not being treated as API calls.
Clue 3:
This does not happen unless authentication has been added to the application. I tried it on a vanilla Blazor app (no auth) and it works just fine. The failing app uses Azure B2C for auth.
It turns out you can cause things to blow up if you inject the wrong type parameter in the generic ILogger<>. It was the one thing I missed changing when copying the controller. Good to know.
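For illustration, a minimal sketch of that mismatch (the controller below is a hypothetical copy of the template one; only the ILogger<> type parameter is at issue):
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using System.Collections.Generic;
using System.Linq;

[ApiController]
[Route("[controller]")]
public class WeatherForecast2Controller : ControllerBase
{
    // Wrong (left over from the copy): ILogger<WeatherForecastController>
    // Right: the type parameter must name the controller it is injected into.
    private readonly ILogger<WeatherForecast2Controller> _logger;

    public WeatherForecast2Controller(ILogger<WeatherForecast2Controller> logger)
    {
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<WeatherForecast> Get() =>
        Enumerable.Empty<WeatherForecast>(); // placeholder body
}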

JSONs with (supposedly) same format treated differently by BigQuery - one accepted, one rejected

I am trying to upload JSON files to BigQuery. The JSON files are outputs from the Lighthouse auditing tool. I have made some changes to them in Python to make the field names acceptable to BigQuery and converted them to newline-delimited JSON.
I am now testing this process, and I have found that while the upload runs without issue for many web pages, BigQuery rejects some of the JSON files. The rejected JSONs always seem to be from the same website; for example, many of the audit JSONs from Topshop have failed on upload (the manipulations in Python run without issue). What confuses me is that I can see no difference in the formatting or structure of the JSONs that succeed and fail.
I have included some examples here of the JSON files: https://drive.google.com/open?id=1x66PoDeQGfOCTEj4l3VqMIjjhdrjqs9w
The error I get from BigQuery when a JSON fails to load is this:
Error while reading table: build_test_2f38f439_7e9a_4206_ada6_ac393e55b8ec4_source, error message: Failed to parse JSON: No active field found.; ParsedString returned false; Could not parse value; Could not parse value; Could not parse value; Could not parse value; Could not parse value; Could not parse value; Parser terminated before end of string
I have also attempted to upload the failed JSONs to a new table through the interface using the autodetect feature (in an attempt to discover whether the Schema was at fault) and these uploads fail too, with the same error.
This makes me think the JSON files must be wrong, but I have copied them into several different JSON validators, all of which accept them as one row of valid JSON.
Any help understanding this issue would be much appreciated, thank you!
When you load JSON files into BigQuery, it's good to remember that there are some limitations associated with this format. You can find them here. Even though your files might be valid JSON, some of them may not comply with BigQuery's limitations, so I would recommend double-checking that they are actually acceptable to BigQuery.
I hope that helps.
I eventually found the error through a long trial-and-error process in which I uploaded first the first half and then the second half of the JSON file to BigQuery. The second half failed, so I split that in half again to see which half the error occurred in, and continued until I found the offending line.
At a deep level of nesting there was a field that was normally a list of strings, but when there were no values associated with it, it appeared as an empty string (rather than an empty list). This inconsistency in the field's type was causing the error. The trial-and-error process was long, but given the vague error message and a JSON file thousands of lines long, it seemed like the most efficient way to get there.
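As a sketch of the kind of normalization that fixes this (the field name relatedNodes and the file names are hypothetical; the real field sat deep in the Lighthouse output):
import json

def normalize(node):
    # Walk the tree; where the inconsistent field holds an empty string,
    # replace it with the empty list BigQuery expects for that column.
    if isinstance(node, dict):
        return {
            key: [] if key == 'relatedNodes' and value == '' else normalize(value)
            for key, value in node.items()
        }
    if isinstance(node, list):
        return [normalize(item) for item in node]
    return node

with open('audit.json') as source:
    row = json.load(source)

with open('audit_fixed.json', 'w') as target:
    # One object per line: BigQuery's newline-delimited JSON format.
    target.write(json.dumps(normalize(row)) + '\n')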

APP-ENGINE: load data from a static JSON file or load data into the datastore?

I'm new to App Engine and writing a REST API. I'm wondering if anyone has been in this dilemma before.
The data that I have is not a lot (3 to 4 pages), but it changes annually.
Option 1: Write the data as JSON and parse the JSON file every time a request comes in.
Option 2: Model the data into objects, throw them into the datastore, and retrieve them whenever a request comes in.
Does anyone know the pros and cons of each of these methods, or any better solutions?
Of course the answer is it depends.
Here are some of the questions I'd ask myself to make a decision -
do you want a change to the data to depend on a code push?
is there sensitive information in the data that should not be checked in to a VCS?
what other parts of your system are dependent on this data?
how likely are your assumptions about the data to change, in terms of update frequency and size?
Assuming the data is small (<1MB) and there's no sensitive information in it, I'd start out loading the JSON file as it's the simplest solution.
You don't have to parse the data on each request; you can parse it once at the top level and effectively treat it as a constant.
Something along these lines -
import json
import os

# Resolve the data file relative to this module so the path works regardless
# of the current working directory.
DATA_FILE = os.path.join(os.path.dirname(__file__), 'YOUR_DATA_FILE.json')

# Module-level code runs once per instance, so the file is parsed a single
# time and JSON_DATA then behaves like a constant.
with open(DATA_FILE, 'r') as data_file:
    JSON_DATA = json.load(data_file)
You can then use JSON_DATA like a dictionary in your code.
awesome_data = JSON_DATA['data']['awesome']
In case you need to access the data in multiple places, you can move this into its own module (e.g. config.py) and import JSON_DATA wherever you need it.
For example, in main.py:
from config import JSON_DATA
# do something w/ JSON_DATA

Reading a large JSON file into a variable in C#/.NET

I am trying to parse JSON files and insert them into the SQL DB. My parser worked perfectly fine as long as the files were small (less than 5 MB).
I am getting an "Out of memory" exception when trying to read the large (> 5 MB) files.
if (System.IO.Directory.Exists(jsonFilePath))
{
    string[] files = System.IO.Directory.GetFiles(jsonFilePath);
    foreach (string s in files)
    {
        // Reads the whole file into a single string; this is the allocation
        // that fails with OutOfMemoryException on large files.
        var jsonString = File.ReadAllText(s);
        fileName = System.IO.Path.GetFileName(s);
        ParseJSON(jsonString, fileName);
    }
}
I tried the JsonReader approach, but had no luck getting the entire JSON into a string or variable. Please advise.
Use 64-bit, and check RredCat's answer on a similar question:
Newtonsoft.Json - Out of memory exception while deserializing big object
Newtonsoft Json Performance Tips
Read the article by David Cox about tokenizing:
"The basic approach is to use a JsonTextReader object, which is part of the Json.NET library. A JsonTextReader reads a JSON file one token at a time. It, therefore, avoids the overhead of reading the entire file into a string. As tokens are read from the file, objects are created and pushed onto and off of a stack. When the end of the file is reached, the top of the stack contains one object — the top of a very big tree of objects corresponding to the objects in the original JSON file"
Parsing Big Records with Json.NET
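A minimal sketch of that token-by-token approach (assuming Newtonsoft.Json; the file name big.json and a top-level array of objects are illustrative assumptions):
using System;
using System.IO;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

class StreamingParser
{
    static void Main()
    {
        var serializer = new JsonSerializer();
        using (var streamReader = new StreamReader("big.json"))   // hypothetical path
        using (var jsonReader = new JsonTextReader(streamReader))
        {
            while (jsonReader.Read())
            {
                // Materialize one element of the top-level array at a time
                // instead of loading the whole file into a string.
                if (jsonReader.TokenType == JsonToken.StartObject)
                {
                    JObject record = serializer.Deserialize<JObject>(jsonReader);
                    Console.WriteLine(record.Count); // process/insert the record here
                }
            }
        }
    }
}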
The JSON file is too large to fit in memory in any form.
You must use a JSON reader that accepts a filename or stream as input. It's not clear from your question which JSON reader you are using, or from which library.
If your JSON reader builds the whole JSON tree, you will still run out of memory. As you read the JSON file, either cherry-pick the data you are looking for, or write the data to another on-disk format that can be easily queried, for example, a SQLite database.
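A sketch of that second suggestion, streaming matching values straight into SQLite so nothing large stays in memory (assumes the Microsoft.Data.Sqlite and Newtonsoft.Json packages; the "name" property and file names are hypothetical):
using System.IO;
using Microsoft.Data.Sqlite;
using Newtonsoft.Json;

using (var connection = new SqliteConnection("Data Source=records.db"))
{
    connection.Open();
    using (var create = connection.CreateCommand())
    {
        create.CommandText = "CREATE TABLE IF NOT EXISTS records (name TEXT)";
        create.ExecuteNonQuery();
    }

    using (var streamReader = new StreamReader("big.json"))
    using (var jsonReader = new JsonTextReader(streamReader))
    {
        while (jsonReader.Read())
        {
            // Cherry-pick just the "name" properties; everything else streams past.
            if (jsonReader.TokenType == JsonToken.PropertyName
                && (string)jsonReader.Value == "name")
            {
                using (var insert = connection.CreateCommand())
                {
                    insert.CommandText = "INSERT INTO records (name) VALUES ($name)";
                    insert.Parameters.AddWithValue("$name", jsonReader.ReadAsString());
                    insert.ExecuteNonQuery();
                }
            }
        }
    }
}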

Cocoa touch - Error getting JSON key

So I am parsing the www.twitch.tv API JSON page.
Here are the links (these two channels are normally always streaming; if the JSON data shows as [], the channel is offline and there is no data):
http://api.justin.tv/api/stream/list.json?channel=vokemodels
http://api.justin.tv/api/stream/list.json?channel=mathmind
My problem is that in my iOS application, I can do:
[dictionary objectForKey:@"stream_count"];
And I do successfully get data from the JSON and it logs correctly. However, when trying to get data from the "screen_cap_url_medium" key, I use the following code:
[dictionary objectForKey:@"screen_cap_url_medium"];
And I get (null) when logging. I am absolutely positive I am retrieving the JSON data, and I do not have any typos.
When I NSLog the entire JSON array from one of the above links, "screen_cap_url_medium" is one of the only keys that appear in quotes.
Can anyone tell me what I'm doing wrong?
If you inspect your JSON you'll see that screen_cap_url_medium is under the channel object, so you can access it like this:
[dictionary valueForKeyPath:@"channel.screen_cap_url_medium"];
P.S. Here dictionary is obviously the first object of the root array you get back from your response.
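For context, a fuller sketch, assuming the response body has already been downloaded into an NSData called jsonData (the key placement follows what the question reports):
NSError *error = nil;
// The root of the response is an array of stream objects ([] when offline).
NSArray *streams = [NSJSONSerialization JSONObjectWithData:jsonData
                                                   options:0
                                                     error:&error];
if ([streams isKindOfClass:[NSArray class]] && streams.count > 0) {
    NSDictionary *dictionary = streams[0];
    // Top-level key, as in the question:
    NSNumber *streamCount = [dictionary objectForKey:@"stream_count"];
    // Nested under the channel object, hence the key path:
    NSString *screenCap = [dictionary valueForKeyPath:@"channel.screen_cap_url_medium"];
    NSLog(@"%@ %@", streamCount, screenCap);
}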