I need to save SVF2 files in the browser cache to enable an offline mode on my site. I have already saved all the data from the Model Derivative manifest and the derivative service manifest. Now I need to fetch all the other files referenced by the .svf (.pf, .bin, ...) that the Forge Viewer requires, using forge-convert-utils.
When I try to use forge-convert-utils like this:
const reader = await SvfReader.FromDerivativeService(urn, guid, { token: token }, undefined, Region.EMEA);
it returns an Error 400, and the config parameter of the returned error contains the URL regions/eu/designdata/...urn.../manifest/undefined.
I looked internally at how it works and found out that there is no condition checking whether the manifest object with the given guid contains a urn (the manifest has no urn for SVF2). There is only a check for type, role, and guid (on line 208 of reader.ts):
const resources = helper.search({ type: 'resource', role: 'graphics', guid });
How can I find and save those .bin, etc. files?
I'm the author of forge-convert-utils.
SVF2 is something I've been considering supporting as well, but it looks like this may not be possible after all. First of all, while SVF(1) is a simple file-based format that is at this point pretty much stable and unlikely to change, SVF2 is much more sophisticated (it's not really a file format, but rather a database querying system on top of a persistent WebSocket connection), and it's still evolving. More importantly, as @AlexAR pointed out, downloading and caching SVF2 assets is not permitted by the legal terms.
So if you need to cache your APS (formerly Forge) models for offline use, you'll need to use the original SVF format.
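As a minimal sketch, assuming you switch to classic SVF: the SvfReader.FromDerivativeService call from the question can parse the SVF and everything it references; the read() step follows the library's README, but treat the exact option and property names as assumptions against your installed version.
const { SvfReader } = require('forge-convert-utils');
// Minimal sketch: parse a classic SVF derivative so its content can be cached locally.
// 'urn', 'guid' and 'token' are placeholders for your own model and credentials.
async function readSvf(urn, guid, token) {
    const reader = await SvfReader.FromDerivativeService(urn, guid, { token });
    const svf = await reader.read(); // resolves geometries, materials and properties
    return svf; // persist this (or the underlying files) for offline use
}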
I am writing an ArcGIS Pro Add-In and would like to view items in the geoprocessing history programmatically. The goal of this is to get the list of parameters and tools used, to be able to better understand and recreate a workflow later, perhaps in another project where we would not have direct access to the history within ArcGIS Pro.
After a lot of searching through documentation, online posts, and debugging breakpoints in my code, I've found that some of this data does exist privately within the HistoryProjectItem class, but since this is a private member of a sealed class, it seems there is nothing I can do to access it. The other place I've seen this data is less than ideal: the user has an option to write the geoprocessing history to an XML log file that lives within /AppData/Roaming/ESRI/ArcGISPro/ArcToolbox/History. Our team has been told that this file may be a problem because certain recursive operations can cause it to balloon out of control, and from reading online, it seems that most people want this setting disabled to avoid large log files taking up space on their machine. Overall the log file doesn't seem like a great option, as we fear writing large log files while users work could slow them down.
I was wondering if this data is stored somewhere I have missed that could be accessed programmatically from the add-in. It seems to me that the data within Project.Items is always stored regardless of user settings, but it appears to be inaccessible this way due to class member visibility. I'm too unfamiliar with geodatabases and ArcGIS file formats to know whether a project will always have a .gdb from which we could perhaps read the history.
Any insights on how to read the geoprocessing history in a way that is minimally intrusive to the user would be ideal. Is this data available elsewhere?
This was the closest/best solution I have found so far without writing to the history logs that most people avoid, due to file-size bloat and warnings that one operation may run other operations recursively, causing the file to balloon massively.
https://community.esri.com/t5/arcgis-pro-sdk-questions/can-you-access-geoprocessing-history-programmatically-using-the/m-p/1007833#M5842
It involves reading the .aprx file (which is written on save) by unzipping it, parsing the XML, and filtering the contents to only GPHistoryOperations. From there I was able to read all the parameters, environment options, status, and duration of the operations, which is what I was hoping to get.
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Xml;
using ArcGIS.Core.CIM;
using ICSharpCode.SharpZipLib.Zip;
using Newtonsoft.Json;

public static void ListHistory()
{
    // this can be run in a console app (or within a Pro add-in)
    CIMGISProject project = GetProject(@"D:\tests\topologies\topotest1.aprx");
    foreach (CIMProjectItem hist in project.ProjectItems
        .Where(itm => itm.ItemType == "GPHistory"))
    {
        Debug.Print("+++++++++++++++++++++++++++");
        Debug.Print($"{hist.Name}");
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(hist.PropertiesXML);
        // it sure would be nice if the Pro SDK had things like the MdProcess class in ArcObjects
        // https://desktop.arcgis.com/en/arcobjects/latest/net/webframe.htm#MdProcess.htm
        var json = JsonConvert.SerializeXmlNode(doc, Newtonsoft.Json.Formatting.Indented);
        Debug.Print(json);
    }
}

static CIMGISProject GetProject(string aprxPath)
{
    // aprx files are actually zip files
    // https://www.nuget.org/packages/SharpZipLib
    using (var zipFile = new ZipFile(aprxPath))
    {
        var entry = zipFile.GetEntry("GISProject.xml");
        using (var stream = zipFile.GetInputStream(entry))
        using (StreamReader reader = new StreamReader(stream))
        {
            var xml = reader.ReadToEnd();
            // deserialize the xml from the aprx file to hydrate a CIMGISProject
            return ArcGIS.Core.CIM.CIMGISProject.FromXml(xml);
        }
    }
}
Code provided by Kirk Kuykendall
I am using Autodesk Forge to do a POST job to create a thumbnail. Below is the body of the HTTP request that I am sending.
{ "input" :
{ "urn":"urnNum",
"compressedUrn" : true,
"rootFilename" : "RodDesign"},
"output" :
{ "destination" : { "region" : "us"},
"formats" : [{"type":"thumbnail"}]
}
}
The status code is 200, but it gives back the response:
Failed to download the design description for the input design.
I am not sure how to interpret this. I don't know whether it's a problem with the data I have sent, an issue with the file I am trying to access, or something else.
Any help is appreciated.
Your rootFilename should include the extension and exactly match the model file in the archive:
{ "input" :
{ "urn":"dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6bW9kZWxkZXJpdmF0aXZlL0E1LnppcA",
"compressedUrn" : true,
"rootFilename" : "RodDesign.rvt"},
"output" :
{
"formats" : [{"type":"thumbnail"}] //output region defaults to US so no need for it here
}
}
If your model file has no extension then our engine won't be able to determine which extractor to dispatch it to - our translation engine identifies the type of a model file based on its extension (e.g. rvt, dwg etc).
Also make sure the archive is completely uploaded to the bucket - query the object details using GET buckets/:bucketKey/objects/:objectName/details and check its size and SHA1 readings (you may calculate the same digests for your local files; they should be identical for the same contents).
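For reference, here's a rough sketch of submitting the corrected job payload to the Model Derivative job endpoint (Node 18+ with the global fetch; accessToken and urn are placeholders for your own two-legged token and model URN):
// Rough sketch: POSTing the translation job with the corrected rootFilename.
const response = await fetch('https://developer.api.autodesk.com/modelderivative/v2/designdata/job', {
    method: 'POST',
    headers: {
        'Authorization': `Bearer ${accessToken}`, // token with data:read and data:write scopes
        'Content-Type': 'application/json'
    },
    body: JSON.stringify({
        input: { urn: urn, compressedUrn: true, rootFilename: 'RodDesign.rvt' },
        output: { formats: [{ type: 'thumbnail' }] }
    })
});
console.log(await response.json()); // should confirm the job was accepted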
EDIT
What is an archive?
ZIP, RAR, TAR etc.
Why do we need to use a file name with an extension when we are giving it a urn, which should be a unique identifier from which an entry is pulled that tells the reader the type of file? I don't see a file type in the Forge data. How do we determine this programmatically?
If you are translating a design file directly and not an archive (see here for what that means), then leave compressedUrn and rootFilename out, as in the body sketched below. See the tutorial here for details.
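For illustration (the urn is shortened; substitute the Base64-encoded URN of your own object):
{
  "input": {
    "urn": "dXJuOmFkc2sub2JqZWN0cz..."
  },
  "output": {
    "formats": [{ "type": "thumbnail" }]
  }
}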
How do we know if the file is compressed? If the code needs the extension of the root file name to send it to the right extractor, why isn't that needed for a file that isn't compressed?
It is needed - our translation service requires the model files in the bucket to have the correct extension so it can dispatch them to the correct extractor.
How do I know the format of the file? What is the default for Fusion 360? How do I know if I have the format correct and there is just another error producing the same message?
That'd be f2d for 2D drawings and f3d for 3D models. You can always google to find the correct extension names.
Is this a problem with finding the source file and how do I address that?
You will need to make sure that the model file in the bucket has the correct extension, otherwise it won't get dispatched to the corresponding extractor, hence the error.
I want to use the Viewer API in a completely private network; are there any problems in this case?
Requests throw when using loadModel.
Well, it depends what you mean by 'completely private network':
if you want to use the viewer by storing the translation results (SVF bubbles) on your network and serving the bubbles' files from one of your servers, then it is OK (we call this offline viewing) - you also need to copy the viewer JavaScript files onto your server to do that (a minimal loading sketch follows below). We have an example of this at: https://extract.autodesk.io/ with source code at: https://github.com/cyrillef/extract.autodesk.io
if you want to do something different from what I wrote above, you would need to contact us at forge.help@autodesk.com and explain in more detail what you want to do
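As the promised sketch, here is one way the offline-viewing case might be wired up, assuming the extracted bubble files are served from your own host; the env: 'Local' option is the viewer's switch for loading local SVFs, and the bubble path is hypothetical:
// Minimal sketch: loading a locally hosted SVF with the viewer (offline viewing).
// '/bubbles/mymodel/output/0.svf' is a placeholder for wherever you serve the files.
Autodesk.Viewing.Initializer({ env: 'Local' }, function () {
    var viewer = new Autodesk.Viewing.GuiViewer3D(document.getElementById('viewer'));
    viewer.start('/bubbles/mymodel/output/0.svf'); // path to the locally served .svf
});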
I'm building an application that stores files in the FIWARE Object Storage. I don't quite understand the correct way of storing files in it.
The Python code snippet below, taken from the Object Storage - User and Programmers Guide, shows two ways of doing it:
def store_text(token, auth, container_name, object_name, object_text):
    headers = {"X-Auth-Token": token}
    # 1. version
    #body = '{"mimetype":"text/plain", "metadata":{}, "value" : "' + object_text + '"}'
    # 2. version
    body = object_text
    url = auth + "/" + container_name + "/" + object_name
    return swift_request('PUT', url, headers, body)
The 1st version confuses me, because when I first looked at the only Node.js module that works with the Object Storage (repo: fiware-object-storage), it seemed to use the 1st version. Since the module was making calls to the old API version (v1.1) instead of the presumably newest one (v2.0) used in the Python example, I wasn't sure whether that way of doing it was simply outdated.
As I played more with the module, I realised it didn't work and its code was a total mess, so I forked the project and quickly understood that I would need to rewrite it from the ground up, taking the above-mentioned Python example from the usage guide as a reference. Link to my repo.
As of writing this, the only methods that aren't implemented are object storing (PUT) and object fetching (GET).
I had some additional questions about the Object Storage which I sent to fiware-lab-help@lists.fiware.org, but haven't heard anything back, so I'm asking them here.
I haven't got much experience with writing API libraries. Should I worry about the auth token expiring? I presume it is not necessary to re-authenticate every time we interact with the storage; authentication should happen once when the server starts up (when we create an instance), and the library should keep the token internally. Should I implement some kind of mechanism that refreshes the token?
Does the tenant id change? From the quote below I presume that getting a tenant is just a one-time deal, and later you can reuse it in the config to make fewer authentication calls.
A valid token is required to access an object store. This section describes how to get a valid token assuming an identity management system compatible with OpenStack Keystone is being used. If the username, password and tenant details are known, only step 3 is required. source
During authentication, when fetching tenants, how should I select the "right" one? For now I'm just taking the first one, as the example code does.
Is it true that an object storage container belongs to only a single region?
Use only what you call version 2 and ignore version 1; it is commented out in the example and should be removed from the documentation.
(1) The token will be valid for some period of time - an hour or a day, depending on the setup. This period should be specified in the token that is returned by the authentication service. The token needs to be periodically refreshed (see the sketch after this list).
(2) The tenant id does not change.
(3) Typically only one tenant id is returned. It is possible, however, that you were assigned more than one id, in which case you have to pick which one you are currently using. Containers typically belong to a single tenant and are not shared between tenants.
(4) Containers are typically limited to a single region. This may change in the future when multi-region support for a container is added to Swift.
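As promised, a rough sketch of fetching (and later refreshing) a token in Node.js - the URL and payload shape follow the classic OpenStack Keystone v2.0 tokens API, but treat them as assumptions against your FIWARE Lab setup:
// Rough sketch: authenticate against Keystone and note the token's expiry,
// so the caller can re-run this shortly before the token lapses.
async function authenticate(keystoneUrl, username, password, tenantId) {
    const res = await fetch(keystoneUrl + '/v2.0/tokens', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            auth: { passwordCredentials: { username: username, password: password }, tenantId: tenantId }
        })
    });
    const data = await res.json();
    return {
        token: data.access.token.id,                  // pass as X-Auth-Token to Swift
        expires: new Date(data.access.token.expires)  // refresh before this moment
    };
}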
I solved my troubles and created an NPM module that works with the FIWARE Object Storage: https://github.com/renarsvilnis/fiware-object-storage-ge
I'm writing an application using Angular and Spring. It has to be able to work offline. After doing some research I found that Application Cache is the best way to go, since I need to cache all the .css and .js files. The problem is that I can't get the data returned by Spring and requested through the $resource object to be cached. When I turn off the server, static files are cached, but I get a "GET error" in Chrome's console about the .json it can't retrieve.
angular.module('MonService', ['ngResource']).
    factory('Projet', function($resource){
        return $resource('json/accueil');
    });
I've tried saving the response manually in a .json file, caching that file as well, and using it as the source for the $resource, but it seems long and complicated...
Or using localStorage, something like:
var cache, AmettreDansCache, donne = {};
cache = window.localStorage.getItem('projets');
if (!cache) {
    AmettreDansCache = $resource('json/accueil');
    window.localStorage.setItem('projets', JSON.stringify(AmettreDansCache));
    return AmettreDansCache;
} else {
    return angular.extend(donne, JSON.parse(cache));
}
I don't think this is working. Anyway, what's the way to do it using Application Cache only?
There is a module built on top of $resource that does caching on both reads and writes; you should check it out. It keeps a copy of your data in the client's browser, and when a save fails (due to being offline) it saves locally and keeps retrying; a sketch of how it might replace the question's factory follows below.
https://github.com/goodeggs/angular-cached-resource
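As a rough sketch of the question's factory adapted to that module (the 'cachedResource' module name and the extra cache-key argument are taken from the project's README; treat them as assumptions against your installed version):
// Hypothetical adaptation: $cachedResource mirrors $resource,
// but takes a localStorage cache key as its first argument.
angular.module('MonService', ['cachedResource']).
    factory('Projet', function($cachedResource) {
        return $cachedResource('projet', 'json/accueil');
    });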
There is also a small article about the module:
http://bites.goodeggs.com/open_source/angular-cached-resource/