We have a scenario where Box (box.com) stores multiple versions of the same file, and we need to update and process the current version of the file in our application.
Please let me know the process for uploading a new file version to the same bucket and processing it.
Currently, we are unable to render the updated Autodesk file view; it still shows the old file view.
You can simply upload a new version of the file using the same bucketKey/objectKey pair; just make sure that you set x-ads-force to true in the header of your POST job request.
The viewer caches the data when loading a model, so once the translation is done, you also need to make sure you clear your cache or test in an incognito session.
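For reference, here is a minimal sketch of that flow using plain Node.js (18+) fetch calls; the token, bucket, object and file names are placeholders, and it assumes the classic OSS PUT upload route together with the Model Derivative POST job endpoint:

import { readFileSync } from 'node:fs';

const token = '<access-token>';           // placeholder 2-legged token
const bucketKey = 'my-bucket';            // placeholder bucket
const objectKey = 'my-model.rvt';         // same objectKey as the previous version

// 1. Upload the new version under the same bucketKey/objectKey pair.
await fetch(`https://developer.api.autodesk.com/oss/v2/buckets/${bucketKey}/objects/${objectKey}`, {
  method: 'PUT',
  headers: { Authorization: `Bearer ${token}` },
  body: readFileSync('./my-model.rvt'),
});

// 2. Re-translate, with x-ads-force so the old derivatives get replaced.
const urn = Buffer.from(`urn:adsk.objects:os.object:${bucketKey}/${objectKey}`).toString('base64');
await fetch('https://developer.api.autodesk.com/modelderivative/v2/designdata/job', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${token}`,
    'Content-Type': 'application/json',
    'x-ads-force': 'true',
  },
  body: JSON.stringify({
    input: { urn },
    output: { formats: [{ type: 'svf', views: ['2d', '3d'] }] },
  }),
});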
See that article for more details: I Make Changes and Nothing Happens
Hope that helps.
We have a real-time data upload application that reads a continuously growing file. We developed this logic two years ago.
In Chrome 81, real-time file upload stopped working because the file Blob read by Chrome is not updated when the file data changes; it still returns the old Blob. Whatever new data is appended to the file does not show up. Furthermore, all File API properties show stale data, e.g. File.lastModifiedDate and File.size.
In previous versions of Chrome, all of these properties were updated when the file content changed.
Uploaded file details: the file has grown to 1000 KB, but the File API still shows the old details with a size of 49.2 KB.
I believe this is a new bug in Chrome 81 that has to be addressed. Please advise if there is an alternative.
Unfortunately, this new behavior is working as intended. File objects on the web were always supposed to be immutable snapshots. Chrome in the past unfortunately had a number of exceptions where this behavior wasn't implemented properly (largely because the implementation predates the specification), and those inconsistencies were fixed in M81.
In Chrome we're also experimenting with the Native File System API (https://web.dev/native-file-system/), which explicitly does intend to support the use case of being able to read from files even after they are modified, so that might be an alternative.
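For what it's worth, a minimal sketch of that alternative using the current File System Access API naming (showOpenFilePicker); the '#pick' button is a made-up element, and the key point is that getFile() returns a fresh immutable snapshot on every call:

// Assumes a Chromium browser exposing window.showOpenFilePicker.
document.querySelector('#pick').addEventListener('click', async () => {
  const [handle] = await window.showOpenFilePicker();   // must run in a user gesture
  setInterval(async () => {
    const file = await handle.getFile();                // fresh File snapshot each time
    console.log(file.size, file.lastModified);          // reflects the current on-disk state
  }, 5000);                                             // poll for new data every 5 seconds
});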
With the latest ForgeARKit-update-6-2018.1, I was trying to load my model in Unity with the sample Unity scene 'loadAtStartup'. I can successfully load the sample models from 'Sandbox', but I couldn't load my own model, which was uploaded through the script 'test-2legged'.
The error message shows 504; it seems the request is not reaching the service:
AsyncRequestCompleted The remote server returned an error: (504) Gateway Time-out.
UnityEngine.Debug:Log(Object)
Autodesk.Forge.ARKit.RequestQueueMgr:AsyncRequestCompleted(Object, AsyncCompletedEventArgs) (at Assets/Forge/CodeBase/RequestQueue.cs:322)
UnityEngine.UnitySynchronizationContext:ExecuteTasks()
Model URN:
dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6bWFvbGlua3ppOHM3cnlvZWx4bjVndnR4bjcyZWc2N2l0dGp0a2MvMmZsb29yX0FyYy5pZmM=
[Update 23/4/2019]
I found that I can successfully load the same model with ForgeARKit-update-3-2017.1.2f1. I compared the Forge code in Unity, and I think it has something to do with the service URL. Version 6 is fetching models from 'https://developer-api-beta.autodesk.io' while version 3 is fetching from 'https://developer-api.autodesk.io'. Meanwhile, the shell script 'test-2legged' is uploading to the latter one ('https://developer-api.autodesk.io'). That is why it couldn't find the resource. The question here is: how can I upload a model to the 'beta' ARKit service? I tried modifying the URL in the script 'test-2legged' but it doesn't work. The screenshot below is the output of the script 'test-2legged' when targeting the 'beta' ARKit. It seems the model is uploaded successfully, but some parsing post-work failed. I guess the response format has also changed in the beta version. Is there a beta version of the 'test-2legged' script (and the other Scene Preparation scripts)?
Please comment. Thanks.
This is correct. My apologies for this; I know we did not document the server changes very well.
Update 6 assumes you are using the new server, which is under beta right now. The scripts and update 3 use the legacy server. Note that the two servers are not necessarily compatible and store the data in different places, so make sure to always use the same server in Unity as the one you used to prepare the scene. When we switch everyone to the new server, we will transfer the data from the legacy server to the new server's cloud storage.
The Update3 package will still be able to read scenes from the new server, as we make sure the old Unity code stays compatible.
Note as well that you need to use SafeBase64 (URL-safe Base64) encoded strings everywhere. I saw in your description that you are using standard Base64 encoding (not URL-safe). The new server is stricter about parameters and formats, so I encourage you to test your scripts/code against the beta server.
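In case it helps, a small sketch of the difference in Node.js; the URN below is a made-up example, and the idea is simply to swap the characters that are not URL-safe:

// Standard Base64 of an OSS URN may contain '+', '/' and '=' padding.
const urn = 'urn:adsk.objects:os.object:my-bucket/my-model.ifc';
const b64 = Buffer.from(urn).toString('base64');

// URL-safe ("SafeBase64") variant: '+' -> '-', '/' -> '_', padding stripped.
const safeB64 = b64.replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
console.log(safeB64);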
Last, I am working on a new Unity code update and documentation, which will be released next week. Make sure to use that version - it adds support for 3-legged auth, automatic 2/3-legged token refresh, and more. If you get scenes failing, please contact me directly and share your models and URNs. I'll either test them in my development environment or look into our log files for the reason they fail. My email address is my first name at autodesk.com.
Thank you Cyrille for your help!! I am replying to you here as it's easier to insert images.
I replaced the function 'xbase64encode()' with 'xbase64safeencode()', and now it works! However, for some models it still responds with an error, and in that case the model cannot be loaded in Unity (as in the image below). I checked the script and I think all the encoding uses SafeBase64. Any clue about that? Or is it caused by my model?
BTW, the loading performance is greatly improved compared to the legacy version!! It looks almost the same as the web client. Huge thanks for that!
Good to know that there is going to be an update next week. Yes I will test it and get back to you later.
I have a trace events JSON data and I want to view it using chrome://tracing. However, I don’t want to load the JSON every time.
Is there a way I can pass the JSON data to chrome://tracing so that without manually clicking load data all my data gets loaded?
The Trace-viewer tool currently loads the JSON file in 3 ways:
When recording a new trace
When loading the file via the load button after the file has been picked
When dropping the file into the tab (drag & drop)
All of these do a one-time update to the active trace.
Technical details
Look at the profiling_view and notice
tr.ui.b.readFile(file).then(
...
this.setActiveTrace(file.name, data);
and a few variations on calls to setActiveTrace from beginRecording, onLoadClicked_, and dropHandler_.
Solution suggestions
Yes, there are a few ways in which you can pass the JSON data to chrome://tracing (i.e. the trace viewer) without manually clicking load data.
Depending on how much effort you want to put into it:
Don't manually click load but drag and drop the file
Automate the drag & drop (example with selenium) based on a script which watches for file changes to the JSON
Automate the load based on file changes (see the watcher sketch after this list)
Contribute to Trace Viewer yourself and add this feature. See the tracing ecosystem explainer and contributing guide. Note it is now part of the Catapult project on GitHub.
See fswatch - a cross-platform file change monitor.
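For option 3, a minimal Node.js sketch of the "watch and re-trigger" part; the trace path and reload command are placeholders for your own file and automation (for example, the Selenium script mentioned above):

// Re-run a reload command whenever the trace JSON changes.
const { watchFile } = require('node:fs');
const { exec } = require('node:child_process');

const TRACE_FILE = '/path/to/trace.json';             // placeholder path
const RELOAD_CMD = 'python reload_trace_viewer.py';   // hypothetical automation script

watchFile(TRACE_FILE, { interval: 1000 }, (curr, prev) => {
  if (curr.mtimeMs !== prev.mtimeMs) {
    console.log('trace changed, reloading viewer...');
    exec(RELOAD_CMD, (err) => err && console.error(err));
  }
});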
Here is a solution, if you can relax the requirement that you have to open it with Chrome-Tracing. Speedscope is a nice replacement and can be easily started from the command line.
For offline use, or convenience in the terminal, you can also install speedscope via npm:
npm install -g speedscope
Invoking speedscope /path/to/profile will load speedscope in your default browser. Source
speedscope <my-chrome-tracing.json> opens the file.
Speedscope offers different views, but sometimes not the same view as chrome://tracing, so it might not be the right choice for all use cases.
This GopherCon video shows another solution, I guess, which is to load an HTML page into the browser, probably with embedded JavaScript, that "loads" the trace file. Although I suspect it is not read from a disk file but "served" directly via a custom HTTP server.
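If you go that "serve it yourself" route, a minimal sketch with Node's built-in http module might look like this; the path and port are placeholders, and the HTML/JS page that consumes the JSON is up to you:

// Serve the trace JSON so a viewer page can fetch() it.
const { createServer } = require('node:http');
const { readFile } = require('node:fs');

createServer((req, res) => {
  readFile('/path/to/trace.json', (err, data) => {    // placeholder path
    if (err) { res.writeHead(500); return res.end(); }
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(data);
  });
}).listen(8000, () => console.log('trace served on http://localhost:8000'));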
I am having trouble using dartdocgen and dartdoc-viewer to pump my JSON files to the browser. I have had success getting all the JSON files from my application but haven't had any success actually viewing them in the browser. Based on my research, the best way to do this is hosting dartdoc-viewer on a local server as mentioned by this document:
https://www.dartlang.org/tools/dartdocgen/#deploy
However, I just cannot seem to get it to work following these directions (I would like to approach it via Dartium):
https://github.com/dart-lang/dartdoc-viewer/
I understand that once I am able to run pub build and compile to JavaScript, I dump the client/build folder onto my server along with the docs folder under the URL, and I am golden. That's where the issue is: how to get from the docs folder to JavaScript to the browser.
I would like to be able to use dartdocgen to its full potential, so can I get some ideas?
Just run dartdocgen --serve .
see https://www.dartlang.org/tools/dartdocgen/#view-locally
Is that not what you are looking for?
Well, using the HTML5 file handling API we can read files selected through an input of type file. What about reading files from a path like
/images/myimage.png
etc.?
Any kind of help is appreciated
Yes, if it is Chrome! Play with the FileSystem API and you will be able to do that.
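To expand on that: the (Chrome-only, now deprecated) sandboxed FileSystem API looks roughly like the sketch below; note that it only reads files stored in the browser's own sandbox, not arbitrary server paths like /images/myimage.png:

// Read a file previously written into Chrome's sandboxed filesystem.
window.webkitRequestFileSystem(window.TEMPORARY, 1024 * 1024, function (fs) {
  fs.root.getFile('myimage.png', { create: false }, function (fileEntry) {
    fileEntry.file(function (file) {
      const reader = new FileReader();
      reader.onload = function () { console.log('read', reader.result.byteLength, 'bytes'); };
      reader.readAsArrayBuffer(file);
    });
  });
}, function (err) { console.error(err); });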
The simple answer is: no. When your HTML/CSS/images/JavaScript is downloaded to the client's end, it breaks loose from the server.
Simplistic Flowchart
User requests URL in Browser (for example; www.mydomain.com/index.html)
Server reads and fetches the required file (www.mydomain.com/index.html)
index.html and its linked resources will be downloaded to the user's browser
The user's Browser will render the HTML page
The user's Browser will only fetch the files that came with the request (images/someimages.png and stuff like scripts/jquery.js)
Explanation
The problem you are facing here is that when the HTML is being rendered locally it has no link with the server anymore, so asking what files /images/ contains is not something the browser can answer, as that directory resides on the server.
Work-around
What you can do, but this will neglect the reason of the question, is to make a server-side script in JSP/PHP/ASP/etc. This script will then traverse through the directory you want. In PHP you can do this by using opendir() (http://php.net/opendir).
With an XHR/AJAX call you can request the PHP page and have it return the directory listing. The easiest way to do this is by using jQuery's $.post() function in combination with JSON.
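For example, the client side could look something like this; my_image_dirlist.php is the hypothetical server-side script from the caution below, assumed to echo a JSON array of file names:

// Ask the server-side script for a directory listing (assumes jQuery is loaded).
$.post('my_image_dirlist.php', { dir: 'images' }, function (files) {
  // 'files' is expected to be a JSON array like ["a.png", "b.png"]
  files.forEach(function (name) {
    console.log('/images/' + name);
  });
}, 'json');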
Caution!
You need to keep in mind that if you use this work-around, the link will be visible for everyone, so anyone can see what's in the online directory you request (for example, http://www.mydomain.com/my_image_dirlist.php would then return a stringified list of everything (or less, based on certain rules in the server-side script) inside http://www.mydomain.com/images/).
Notes
http://www.html5rocks.com/en/tutorials/file/filesystem/ (seems to work only in Chrome, but would still not be exactly what you want)
If you don't need all files from a folder, but only those files that have already been downloaded to your browser's cache for the URL request, you could try searching online for how to access the browser cache (downloaded files) of the currently loaded page. Or make something like a DOM-walker and CSS reader (regex?) to see where all the file references are.