Google Drive Realtime API: Server Export of Collaboration Document

I have a requirement to build an application with the following features:
Statistical and Source data is presented on simple HTML pages
Some missing Source data can be added from that HTML page (data will be both exact numerical values and descriptive text)
Some new Source data can be added from those pages
Confirmed and verified data will NOT be editable via the HTML interface
Data is stored and made continuously available via the HTML interface
Periodically, the data added/changed from the interface needs to be pulled back into the source data, but in a VERY controlled way. All data changes and submissions will need verification and checking, and some will trigger re-runs of models (some of which take hours to run).
In terms of overview architecture I have:
Large DB that stores and manages the data - this is designed for import processes and analysis. It is not ideal for web presentation or an interface
Code servers that manipulate the data for imports and analysis
Frontend server that works as a proxy to add a layer of security in front of S3
Collection of generated html files on S3 presenting the data required
Before reading about the Google Drive Realtime API my rough plan was to simply serialize data from the HTML interface and post to S3. The import server scripts would then check for new information, grab it, check it, log it and process it into the main data set.
That basic process, however, would mean that once changes were submitted from the web page, they would be lost from the user's view until they had been processed by the backend.
With the Google Drive Realtime API it would appear I could get the best of both worlds.
However, for the above to work I would need to be able to access the Collaboration Document in code from the code servers and export the data.
The Realtime API gives JavaScript access to export and hand off to a function; however, in my use case I want to automate the export from the Collaboration Document.
The Google Drive SDK does not, as far as I can see, give any hints on downloading/exporting a file of type "Collaboration File".
What "non-browser-user" triggered methods are there for interfacing with the Collaboration Documents and exporting them?
David

Server-side export is not supported right now. What you could do is save the realtime model to a regular Drive file and read from that using the standard Drive API. See https://developers.google.com/drive/realtime/models-files for some discussion of different ways to set up interactions between realtime models and Drive files.
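For the server-side read, here is a minimal sketch using google-api-python-client with a service account; the file ID and credentials filename are placeholders, and it assumes the client has already saved the realtime model's data into an ordinary Drive file (e.g. as JSON) that the service account can see:

```python
# Minimal sketch: poll a regular Drive file from a server-side script,
# assuming the realtime model has already been saved into it.
# FILE_ID and the credentials filename are placeholders; the Drive file
# must be shared with the service account.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/drive.readonly']
FILE_ID = 'your-drive-file-id'  # placeholder

creds = service_account.Credentials.from_service_account_file(
    'service-account.json', scopes=SCOPES)
drive = build('drive', 'v3', credentials=creds)

# For a plain file (e.g. JSON the client saved), download the raw content.
content = drive.files().get_media(fileId=FILE_ID).execute()
print(content.decode('utf-8'))
```

Your import scripts could run this on a schedule, diff against the last pull, and feed the verification/model-rerun pipeline from there.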

Related

How to embed Autodesk Forge model viewer into a website?

I have created a web application for viewing models using the Autodesk Forge Viewer, and I want to be able to add this onto a website. I used this tutorial: https://learnforge.autodesk.io/#/tutorials/viewmodels (using node.js for the language option).
The goal is to have the user access the viewer application from the website. I have been using the VS Code Live Server extension for testing. However, when I link the page that has the viewer into my own website, the viewer does not load the buckets or allow for creation of new buckets. It is just stuck on a loading symbol like below:
[Loading screen][1]
Could I please have the following questions answered:
What is the proper way to embed this application onto a website in the manner I have described above?
What part of the code controls where the buckets are loaded in?
Thank You.
[1]: https://i.stack.imgur.com/4Xlfv.png
The LearnForge tutorial is an example of how to work with the Forge API. As a web app, it depends on how the developer (you) designs the user interface, workflow, and data management.
E.g. you can remove the panel of bucket & object lists, keeping only the viewer in the UI, but you will then need to design how to provide the object id (urn) to be loaded in the viewer. Normally you would need to set up your own user management, login process, etc., and set your own user permissions. Then, when a user logs in, the web app lists all files (objects) he has permission to check, and when one file is selected, it gets the urn and loads the model in the Forge Viewer.
If the end users of your app are BIM 360 users, you could take advantage of the BIM 360 data management workflow, which follows the same permissions specified in BIM 360. Then the other tutorial will be a good start:
https://learnforge.autodesk.io/#/tutorials/viewhubmodels
In any case, the workflow and UI are defined by you. I hope this explains it. If you have any further questions that need a meeting call, please feel free to check the calendar of our team:
https://calendly.com/autodeskforge
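One thing worth checking when the embedded page hangs on loading: the tutorial's viewer page fetches its access token (and the bucket list) from the node.js backend, so a copy of the page served by Live Server alone has no backend to call. As a hedged illustration of that server-side piece, here is a minimal token endpoint sketched in Python/Flask against the 2-legged v1 authentication endpoint; the route path and environment variable names are assumptions, not part of the tutorial:

```python
# Minimal sketch of a server-side token endpoint the Viewer can call.
# Flask + requests; FORGE_CLIENT_ID/FORGE_CLIENT_SECRET are placeholders.
import os
import requests
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/api/forge/oauth/token')  # route path is an assumption
def get_token():
    # 2-legged OAuth: viewables:read is enough for the Viewer itself.
    resp = requests.post(
        'https://developer.api.autodesk.com/authentication/v1/authenticate',
        data={
            'client_id': os.environ['FORGE_CLIENT_ID'],
            'client_secret': os.environ['FORGE_CLIENT_SECRET'],
            'grant_type': 'client_credentials',
            'scope': 'viewables:read',
        })
    resp.raise_for_status()
    return jsonify(resp.json())  # {access_token, token_type, expires_in}

if __name__ == '__main__':
    app.run(port=3000)
```

Whatever stack you choose, the embedded viewer page must be able to reach an endpoint like this (and whatever endpoint supplies the urn) from the site it is embedded in.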

Forge Design Automation Revit Workitem Arguments

I'm following the Design Automation API v3 tutorial for Revit.
When doing a workitem POST I'm a little unclear about the "rvtFile" and "result" arguments. Can the rvtFile URL be in an AWS bucket? Also, what are the restrictions for the result website? It states that it needs to be a signed URL, but can this just be another AWS bucket? Or do I need to create a website? (Note: I've never done any web development. Everything I know I learned from this tutorial.)
Since Design Automation for Revit runs in the cloud (and not on your local machine), it needs a way to download your input files. You may put your files on any of the storage service providers (say, Amazon S3) and provide direct download links to them. For Design Automation to have access, you will either need to make those files publicly accessible URLs or keep them private and generate a signed URL for each. When DA4R runs your workitem, the direct download URLs provided in the workitem payload will be called to download your files to the worker machine.
Design Automation also does not store any of your result files, so you will have to generate a signed URL for uploading them to an appropriate cloud location (say, a location in an Amazon S3 bucket).
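As an illustration, here is a minimal sketch with boto3 that generates both kinds of signed URLs; the bucket and key names are placeholders:

```python
# Minimal sketch: generate the signed URLs a Design Automation workitem
# needs, using boto3. Bucket and key names are placeholders.
import boto3

s3 = boto3.client('s3')

# Direct download URL for the input .rvt file (valid for 1 hour).
rvt_url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'inputs/model.rvt'},
    ExpiresIn=3600)

# Upload URL that Design Automation can PUT the result file to.
result_url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'my-bucket', 'Key': 'outputs/result.rvt'},
    ExpiresIn=3600)

# These URLs then go into the workitem payload's "rvtFile" and
# "result" arguments respectively; no website is needed.
```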
While Amazon S3 is just an example, there are several other storage providers. I also recommend reading about Autodesk Forge's Data Management APIs:
https://forge.autodesk.com/api/data-management-cover-page/
EDIT:
Useful links
Tutorials: https://learnforge.autodesk.io/
AU Class: https://www.autodesk.com/autodesk-university/class/Revit-Data-Forge-How-Can-Design-Automation-Revit-API-Help-Me-2018

hosting a JSON file for a 3rd party app/service to use

We currently use Jive Cloud N which can use the Rest API and allows the use of Custom Apps. Our UI devs have created an app which uses a JS GET to pull data from a JSON file for our "Birthdays and Anniversaries" tile.
At the moment, the JSON file is hosted on our UI dev's Google Cloud Apps account, but we wish to host it internally so we don't have to keep contacting them for changes.
I uploaded the file to our OneDrive for Business storage and created a public URL with full read permissions but the Jive platform is throwing an error trying to load the custom app.
The error is that the file "has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present".
Our dev said that to get it working on his Google Cloud App storage, he had to specify the Access-Control-Allow-Origin field in the server's app.yaml file. I don't know what this is or whether there is an equivalent for ODfB/SharePoint.
To get to my question: How can I host this JSON file on ODfB or even somewhere on our Azure tenancy so that it can be used? Or am I better off trying to setup a Google Cloud App storage location and replicate our dev's setup? FYI - I'd prefer the former because we're using M$ for a number of cloud hosted services already.
Thanks in advance
Per my understanding, you could leverage Azure Blob Storage to store your JSON file, and you could use Microsoft Azure Storage Explorer to easily manage/share your files.
Moreover, you could manage anonymous read access to your containers and blobs; refer to the Azure documentation for more details. Also, you could leverage SAS (shared access signatures) to grant limited access to your storage account for other clients; the Azure documentation has a tutorial for getting started with SAS.
For a simple approach, you could create your storage account and leverage Microsoft Azure Storage Explorer to manage/share your file.
For cross-domain access, you need to configure the CORS settings.
For sharing your file (blob), you could set the container's public access level or leverage SAS to grant limited access to your file for other clients:
Right-click your container and select "Set Public Access Level":
Sample file for share: https://brucechen.blob.core.windows.net/brucechen/index.json
Also, you could right-click your JSON file and click "Get Shared Access Signature":
Sample file for share: https://brucechen.blob.core.windows.net/brucechen/index.json?st=2017-02-28T08%3A04%3A00Z&se=2017-09-01T08%3A04%3A00Z&sp=r&sv=2015-12-11&sr=b&sig=rVkorHeNOd4j2YhkmmxZ6DfXVLf1FoN2smY6mNRIoWs%3D
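If you prefer doing the same setup in code rather than in Storage Explorer, here is a minimal sketch assuming the current azure-storage-blob (v12) SDK; the account name, key, container, and expiry are placeholders:

```python
# Minimal sketch: the same CORS + SAS setup done in code with the
# azure-storage-blob (v12) SDK. Account name/key are placeholders.
from datetime import datetime, timedelta
from azure.storage.blob import (
    BlobServiceClient, CorsRule, generate_blob_sas, BlobSasPermissions)

ACCOUNT = 'mystorageaccount'   # placeholder
KEY = '<account-key>'          # placeholder

service = BlobServiceClient(
    account_url=f'https://{ACCOUNT}.blob.core.windows.net',
    credential=KEY)

# Allow cross-origin GETs so the Jive tile can fetch the JSON.
service.set_service_properties(
    cors=[CorsRule(allowed_origins=['*'],
                   allowed_methods=['GET'],
                   max_age_in_seconds=3600)])

# Read-only SAS for the JSON blob, valid for 30 days.
sas = generate_blob_sas(
    account_name=ACCOUNT, container_name='data', blob_name='index.json',
    account_key=KEY, permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(days=30))
print(f'https://{ACCOUNT}.blob.core.windows.net/data/index.json?{sas}')
```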

Python Web Crawler with stored Web History

I'm creating a Python web crawler with the ability to browse web history, parse through the information, and store important information within a database for forensics/academic purposes. I understand the functionality to browse web sites, but the part I'm struggling with is being able to crawl through web history. I will give a scenario:
During Forensic Investigation.
You have been given a full forensic image of the suspect's computer. You then locate the AppData folder for Google Chrome, which stores all information about the suspect, including form information, credentials & web history.
How would I set up the web crawler to only search through data in the suspect's web history?
I am also having issues accessing the information stored within Google Chrome's User Data; as a start I am trying to view my own personal information stored there. I am currently attempting to use DB Browser to view the files and see my own web history, however I'm not having much luck with this. Any suggestions?
For those interested in this project of mine, I can update this thread as I go so you can see the progress of my web crawler. The end result will have the ability to take web history and data from public & private websites and sort important information (i.e. name, address, D.O.B.) into a database to be used later as a biographic dictionary.
I WILL STRESS THIS AGAIN: THIS IS ALL FOR ACADEMIC PURPOSES IN A CONTROLLED ENVIRONMENT AND USED ON A TEST/FAKE ACCOUNT.
Hindsight (https://github.com/obsidianforensics/hindsight) is an open source tool written in Python that can parse a ton of information from the files in the /Google/Chrome/User Data/ directory.
You could look at its source for inspiration, or just run the tool and parse its output (it can produce XLSX, JSON, or SQLite) in your crawler.
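If you want to skip a third-party tool and read the history directly, Chrome's History file is plain SQLite. A minimal sketch follows; the evidence path is an example, and the file should be copied first because a live History file is locked while Chrome runs:

```python
# Minimal sketch: read URLs straight out of Chrome's History database
# (an SQLite file under .../User Data/Default/). The source path is an
# example; always work on a copy of the evidence file.
import shutil
import sqlite3
from datetime import datetime, timedelta

SRC = r'C:\Evidence\User Data\Default\History'  # example path
shutil.copy2(SRC, 'History.copy')

conn = sqlite3.connect('History.copy')
rows = conn.execute(
    'SELECT url, title, visit_count, last_visit_time '
    'FROM urls ORDER BY last_visit_time DESC')

for url, title, visits, ts in rows:
    # Chrome stores timestamps as microseconds since 1601-01-01 (UTC).
    when = datetime(1601, 1, 1) + timedelta(microseconds=ts)
    print(when, visits, url, title)

conn.close()
```

The list of URLs this produces is then exactly the seed set you would hand to your crawler, which answers the "only search through the suspect's web history" part.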

Offline Form Submission - Data Sync

We have a Loan Management System, and as everybody knows, there is Field Investigation: Residence, Office, and Business Verification.
So we have a requirement to also support offline data entry.
Meaning, the Field Investigation officer may download the "template" on his mobile and save data locally. Later, when he is connected to the app, he can sync that data.
As of now, in our web application we have JSP pages to render the above specific forms.
1.) How do we programmatically download the template or HTML content?
2.) How do we save the form data in a local DB, say a browser DB?
3.) How do we later sync that JSON data with the relational DB?
The best approach is to download the JSP content via an AJAX request, then process its HTML content and, through HttpClient, get the response for each and every URL (JavaScript, CSS) included in the package.
Zip it and then make it downloadable through the browser.
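As a rough illustration of that packaging step (sketched in Python rather than the HttpClient/Java stack mentioned above; the page URL is a placeholder):

```python
# Minimal sketch: fetch the rendered form page, pull in the relative
# scripts/stylesheets it references, and bundle everything into a zip
# for offline use. PAGE_URL is a placeholder.
import zipfile
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

PAGE_URL = 'https://example.com/fi/residence-form'  # placeholder

html = requests.get(PAGE_URL).text
soup = BeautifulSoup(html, 'html.parser')

with zipfile.ZipFile('offline-form.zip', 'w') as zf:
    zf.writestr('form.html', html)
    # Collect every script src and stylesheet href referenced by the page.
    for tag in soup.find_all(['script', 'link']):
        ref = tag.get('src') or tag.get('href')
        if not ref or ref.startswith(('http://', 'https://')):
            continue  # keep the sketch to same-site, relative assets
        asset = requests.get(urljoin(PAGE_URL, ref)).content
        zf.writestr(ref.lstrip('/'), asset)
```

On the device, the unzipped form would write submissions to browser storage (e.g. IndexedDB), and the sync step later POSTs that JSON back to the server for insertion into the relational DB.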