ARVR Toolkit Fragment Payload - autodesk-forge

I'm trying to get fragment data using the AR/VR Toolkit API so that we can make some optimizations to the mesh data. We can create a scene, process the SVF into a toolkit scene, and the scene processing finishes, but we run into issues when we actually try to get the fragment data.
Using the following endpoint:
https://developer-api.autodesk.io/modelderivative/v2/arkit/MODEL_URN/mesh/MESH_ID/FRAG_ID
The request returns a 200 with an octet-stream, but I can't find any documentation describing what the contents of the octet-stream are. According to the documentation (https://app.swaggerhub.com/apis/cyrillef/forge-ar_kit/1.2.1#/ARVR-Toolkit/get_asset_fragment) we can specify whether to use legacy or openctm.
1) What is the legacy format? How can verts, normals, uv, etc. be extracted?
2) I tried the openctm option, saved the returned octet-stream to a .ctm file, and tried opening it in the OpenCTM Viewer available from http://openctm.sourceforge.net/, but I always get a CTM_BAD_FORMAT error when trying to open the file for viewing. How can I confirm my openctm payload is correct?

The SVF format (including the mesh data format) isn't publicly documented but you can get some idea about its structure from the AR/VR Toolkit's Unity package source code: https://github.com/wallabyway/ARVRToolkit/blob/master/unity-src/ARVRToolkit/Assets/Forge/ARKit/MeshRequest.cs#L54-L89.
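For question 2, a quick sanity check is to look at the first four bytes of the payload: every valid OpenCTM file starts with the magic identifier "OCTM". Below is a minimal Python sketch of that check, with the endpoint taken from the question; the bearer token handling is an assumption, and the parameter that switches between legacy and openctm is whatever the Swagger doc above specifies, so adjust accordingly.

# Sketch: fetch one fragment and test for the OpenCTM magic header.
import requests

TOKEN = "YOUR_FORGE_ACCESS_TOKEN"  # assumption: the same bearer token used for other toolkit calls
URL = ("https://developer-api.autodesk.io/modelderivative/v2/arkit/"
       "MODEL_URN/mesh/MESH_ID/FRAG_ID")

resp = requests.get(URL, headers={"Authorization": "Bearer " + TOKEN})
resp.raise_for_status()
payload = resp.content

# Every valid OpenCTM file begins with the four-byte identifier "OCTM".
if payload[:4] == b"OCTM":
    print("Payload looks like OpenCTM; saving it as .ctm should open in the viewer")
else:
    print("Not OpenCTM; likely the legacy SVF mesh layout described in MeshRequest.cs")

If the bytes never start with "OCTM", the request is probably still returning the legacy format, which would also explain the CTM_BAD_FORMAT error.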

Related

Upload files to object storage using Python SDK

I am using the Python SDK for OCI. I tried the upload manager example and it works perfectly fine when I upload files from the file system. But I have to expose this Python code as a REST service (using Flask), and the files to be uploaded to object storage will come in as the payload of the REST request. Does it have to be the multipart/mixed content type in this case, or can it be multipart/form-data as well?
#user1945183, are you asking if Upload Manager can support multipart/form-data? Yes, Upload Manager can take in multipart/form-data.
The Upload Manager uses Object Storage's CreateMultipartUpload API. You can learn more in the CreateMultipartUpload API doc.
From the CreateMultipartUploadDetails reference you will find that Content-Type is optional and has no effect on Object Storage behavior.
The optional Content-Type header that defines the standard MIME type format of
the object to upload. Specifying values for this header has no effect on
Object Storage behavior. Programs that read the object determine what to do
based on the value provided. For example, you could use this header to
identify and perform special operations on text only objects.
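For illustration, here is a minimal Flask sketch of that flow, assuming the default ~/.oci/config profile, a placeholder bucket name, and a form field named "file"; it hands the multipart/form-data stream straight to UploadManager.upload_stream:

# Sketch: accept multipart/form-data in Flask and stream it to Object Storage.
import oci
from flask import Flask, request

app = Flask(__name__)

config = oci.config.from_file()  # default ~/.oci/config profile
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data
upload_manager = oci.object_storage.UploadManager(object_storage)

@app.route("/upload", methods=["POST"])
def upload():
    incoming = request.files["file"]  # Flask parses the multipart/form-data part
    upload_manager.upload_stream(
        namespace, "my-bucket", incoming.filename, incoming.stream)
    return "uploaded", 200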

Loading a BIM360 model from a deployed site returns 9: BAD_DATA_NO_VIEWABLE_CONTENT

I have some JavaScript that calls the function Autodesk.Viewing.Document.load(...). When I run this locally, I am able to successfully load a model, but when the code is deployed the exact same model returns the error
9: BAD_DATA_NO_VIEWABLE_CONTENT
Any ideas what the issue would be?
There are various reasons why loading a model from Forge could return the error code 9, for example:
you are loading a model that has not been processed using the POST job endpoint yet
the model has already been processed but the translation failed for some reason (use the GET :urn/manifest endpoint to check if you see "status": "success"; there is a small sketch of this check after this list)
the model was translated successfully but there is no actual viewable content in it (e.g. a Revit model with no 2D sheets and no 3D views specified)
the model was translated successfully, but the output derivatives have been removed, either manually (using the DELETE :urn/manifest endpoint) or perhaps automatically after the original file was removed from a Forge OSS bucket
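To quickly rule out the first two, here is a small Python sketch of the manifest check mentioned above, assuming a valid token with the data:read scope and the base64-encoded URN:

# Sketch: check the Model Derivative manifest for a given URN.
import requests

TOKEN = "YOUR_ACCESS_TOKEN"   # needs the data:read scope
URN = "BASE64_ENCODED_URN"

resp = requests.get(
    "https://developer.api.autodesk.com/modelderivative/v2/designdata/" + URN + "/manifest",
    headers={"Authorization": "Bearer " + TOKEN},
)
manifest = resp.json()
# "success" means viewable derivatives exist; "failed", "inprogress", or a 404 all explain error code 9
print(manifest.get("status"), manifest.get("progress"))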

How to connect a JSON file with After Effects for rendering dynamic video

I want to learn how to connect a JSON file with After Effects for rendering dynamic videos.
E.g. I have a form on some webpage:
this form includes one input where people enter their name.
And then I create a JSON file from this form, like this array of objects:
data = [
{
name: 'John'
},
{
name: 'Mike'
}
]
and with these JSON objects I want to create, for each name, a video of a few seconds that just shows the name from the JSON, and render it as an mp4 video.
How do I do that?
Which steps should I follow?
If it is a web form, I think I'll need to connect the JSON file dynamically too, right?
So After Effects will read this JSON file from some URL?
There are many ways to go about doing this, but a single answer on Stack Overflow probably won't give you everything you need.
Network communication can be done using the CEP framework provided by Adobe, which can then execute ExtendScript code that actually does the manipulation of the layers inside the AEP project file. You can use Node modules to perform the network communication, and then write ExtendScript code to pass the JSON data in.
While not free, you might want to explore Dataclay's Templater extension to help you accomplish what you want. It not only does what you are asking out of the box, but it has some rules-based AI to reconfigure layers both temporally and spatially. You can point Templater to a URL that responds with an array of JSON objects and have it process that data. In addition to this, it has event hooks which allow you to execute any script within the shell or with the ExtendScript engine during its versioning process.
Hope this helps!

Not able to completely hide model in forge viewer

I am trying to hide the complete model in forge-viewer; for that I am calling the Forge visibility API as follows:
viewer.impl.visibilityManager.setNodeOff(viewer.model.getModelId(),true)
where viewer is an object of GUIViewer3D.
But this is not hiding the complete model; some components in the model are still visible.
I think it is a bug in forge-viewer, because if I pass the root node of the model to the setNodeOff() API it should hide the whole model. I am sure that there is only one model loaded in my viewer session.
See this image for the elements which are still visible after calling viewer.impl.visibilityManager.setNodeOff(viewer.model.getModelId(),true)
getModelId() returns model IDs, and the visibility APIs expect node IDs. In order to hide the entire model, consider the following:
viewer.hide(model.getRootId());
Could you try something for me...
Could you try adding the header 'x-ads-force':'true' to the POST job request when you are converting the .RVT file?
Here is the documentation on the API request...
https://forge.autodesk.com/en/docs/model-derivative/v2/reference/http/job-POST/#headers
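In case a concrete request helps, here is a hedged Python sketch of that POST job call with the x-ads-force header; the token, URN, and output views are placeholders, and the full set of options is in the documentation linked above:

# Sketch: re-submit the translation job with x-ads-force to flush the cached SVF.
import requests

TOKEN = "YOUR_ACCESS_TOKEN"       # needs data:read and data:write scopes
URN = "BASE64_ENCODED_SOURCE_URN"

resp = requests.post(
    "https://developer.api.autodesk.com/modelderivative/v2/designdata/job",
    headers={
        "Authorization": "Bearer " + TOKEN,
        "Content-Type": "application/json",
        "x-ads-force": "true",    # force re-translation even if derivatives already exist
    },
    json={
        "input": {"urn": URN},
        "output": {"formats": [{"type": "svf", "views": ["2d", "3d"]}]},
    },
)
print(resp.status_code, resp.json())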
Then, retrieve the URN and feed it to the Forge Viewer, as before, like this example:
https://forge.autodesk.com/en/docs/model-derivative/v2/reference/http/urn-manifest-GET/#example
So... What's going on?
When you convert an RVT to an SVF (for the Forge Viewer), it produces a random set of DBIds.
When you call the GetProperties API, it uses that cached SVF to pull the DBIds and provide you with a result.
Back in August, the GetProperties API engine was updated. It produces a different order of DBIds compared to the old converted SVF.
Since the old SVF is cached, even if you submit a new job, it will use the old SVF.
To avoid the old SVF, we need to force the RVT-to-SVF conversion again to flush that cache, so that the new SVF is available and the Forge Viewer can retrieve it.
Once the Forge Viewer can see the new SVF, it should match the DBIds of the GetProperties API.
Complicated huh?
Let me know if that fixes the problem.
As my goal is to completely hide the model in the viewer, I achieved this with the following:
viewer.clearSelection();
viewer.model.setAllVisibility(0);
viewer.impl.toggleGhosting(false);
viewer.impl.toggleGroundShadow(false);
This gives me the required behavior. The elements which I was not able to hide are now hidden through this approach.
To completely hide the model you can isolate an empty list, which will show the whole model ghosted, and then turn off ghosting globally:
viewer.isolate([]);
viewer.setGhosting(false);

Google Cloud Functions: loading GCS JSON files into BigQuery with non-standard keys

I have a Google Cloud Storage bucket where a legacy system drops NEW_LINE_DELIMITED_JSON files that need to be loaded into BigQuery.
I wrote a Google Cloud Function that takes the JSON file and loads it up to BigQuery. The function works fine with sample JSON files - the problem is the legacy system is generating a JSON with a non-standard key:
{
"id": 12345,
"#address": "XXXXXX"
...
}
Of course the "#address" key throws everything off and the cloud function errors out ...
Is there any option to "ignore" the JSON fields that have non-standard keys? Or to provide a mapping and ignore any JSON field that is not in the map? I looked around to see if I could deactivate the autodetect and provide my own mapping, but the online documentation does not cover this situation.
I am contemplating the option of:
Loading the file in memory into a string var
Replace #address with address
Convert the json new line delimited to a list of dictionaries
Use bigquery stream insert to insert the rows in BQ
But I'm afraid this will take a lot longer, the file size may exceed the max 2 GB for functions, I'd have to deal with Unicode when loading the file into a variable, etc.
What other options do I have?
And no, I cannot modify the legacy system to rename the "#address" field :(
Thanks!
I'm going to assume the error that you are getting is something like this:
Errors: query: Invalid field name "#address". Fields must contain
only letters, numbers, and underscores, start with a letter or
underscore, and be at most 128 characters long.
This is an error message on the BigQuery side, because cols/fields in BigQuery have naming restrictions. So, you're going to have to clean your file(s) before loading them into BigQuery.
Here's one way of doing it, which is completely serverless:
Create a Cloud Function to trigger on new files arriving in the bucket. You've already done this part by the sounds of things.
Create a templated Cloud Dataflow pipeline that is triggered by the Cloud Function when a new file arrives. It simply passes the name of the file to process to the pipeline.
In said Cloud Dataflow pipeline, read the JSON file into a ParDo, and using a JSON parsing library (e.g. Jackson if you are using Java), read the object and get rid of the "#" before creating your output TableRow object (see the sketch after these steps).
Write results to BigQuery. Under the hood, this will actually invoke a BigQuery load job.
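The steps above assume Java and Jackson; purely as an illustration, here is roughly what that cleaning-and-writing step could look like with the Apache Beam Python SDK instead (the bucket, table, and the assumption that the destination table already exists are placeholders):

# Sketch: strip the "#" prefix from keys before writing rows to BigQuery.
import json
import apache_beam as beam

def clean_keys(line):
    record = json.loads(line)
    # BigQuery field names must start with a letter or underscore, so drop the "#".
    return {key.lstrip("#"): value for key, value in record.items()}

with beam.Pipeline() as pipeline:
    (pipeline
     | "Read" >> beam.io.ReadFromText("gs://my-bucket/incoming/*.json")
     | "Clean" >> beam.Map(clean_keys)
     | "Write" >> beam.io.WriteToBigQuery(
           "my-project:my_dataset.my_table",
           create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))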
To sum up, you'll need the following in the conga line:
File > GCS > Cloud Function > Dataflow (template) > BigQuery
The advantages of this:
Event driven
Scalable
Serverless/no-ops
You get monitoring and alerting out of the box with Stackdriver
Minimal code
See:
Reading nested JSON in Google Dataflow / Apache Beam
https://cloud.google.com/dataflow/docs/templates/overview
https://shinesolutions.com/2017/03/23/triggering-dataflow-pipelines-with-cloud-functions/
Disclosure: the last link is to a blog post written by one of the engineers I work with.