I'm trying to use and modify extract.autodesk.io (thanks to Cyrille Fauvel) but have not been successful yet. In a nutshell, this is what I want to do:
1. The user drag-drops the design file (I'm OK with this part).
2. I've removed the Submit button, so extraction should begin on Autodesk's server right after the upload. (I've added a .done() callback to trigger the auto-extraction: uploadFile(uri).done(function () { SubmitProjectDirect(); });)
3. No need to load a temporary viewer to view/test the model.
4. Automatically download the bubble as a ZIP file into a folder on our local server.
5. Delete the uploaded model right away, as our projects are mostly strictly confidential.
I'm encountering a 405 'Method Not Allowed' error on the 'api/file' sub-folder, which I believe should be Autodesk's folder on the server.
Can anyone point me to the root URN of api/file?
I seem to be stuck on item 2 above because of the 405 error. Even if I get past that one, I still need to solve items 3, 4 and 5.
Appreciate any help...
In light of the additional comment above, the issue is a bit more complicated than I originally thought. To upload a file to the Autodesk cloud storage, you need to use specific endpoints with the PUT verb and provide an OAuth access token.
It should be possible to set up Flow.js to use all of the above, but since it is a JavaScript library running on your client, anyone could steal your access token and use it illegitimately, either to access your data or to consume your cloud credits by doing actions on your behalf.
Another issue is that the minimum OSS chunk size is 5 MB (see this article), so you need to control the chunking as well as provide OSS with the byte-range assembly information.
I would not recommend uploading to OSS directly from the client for security reasons, but if you do not want to use your server as temporary storage, we can either proxy the Flow.js upload to the OSS storage or pipe the uploaded chunks to the Autodesk cloud storage. Both solutions are secure, with nothing stored on your server, but traffic will still go through your server. I will create a branch on the GitHub repo in a few days to demonstrate both approaches.
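In the meantime, here is a minimal Node.js/Express sketch of the proxy idea; the /resumable endpoint, the Content-Range/Session-Id headers, and the getServerSideToken() helper are assumptions to verify against the current OSS documentation. The point is that the browser sends each Flow.js chunk to your server, which forwards it to OSS, so the access token never leaves the server:

const express = require('express');

const app = express();
app.use(express.raw({ type: 'application/octet-stream', limit: '10mb' }));

// Hypothetical helper: obtain a 2-legged OAuth token server-side,
// so it is never exposed to the browser.
async function getServerSideToken() {
  return 'ACCESS_TOKEN'; // exchange your client id/secret for a real token here
}

app.put('/api/proxy/:bucket/:object', async (req, res) => {
  const token = await getServerSideToken();
  const url = 'https://developer.api.autodesk.com/oss/v2/buckets/' +
    req.params.bucket + '/objects/' + req.params.object + '/resumable';
  const ossRes = await fetch(url, { // global fetch, Node 18+
    method: 'PUT',
    headers: {
      'Authorization': 'Bearer ' + token,
      'Content-Type': 'application/octet-stream',
      'Content-Range': req.get('Content-Range'), // e.g. "bytes 0-5242879/10485760"
      'Session-Id': req.get('Session-Id')        // ties the chunks of one upload together
    },
    body: req.body
  });
  res.status(ossRes.status).end();
});

app.listen(3000);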
My goal is to make a website (hosted on Google App Engine through a bucket) that includes an upload button, much like
<p>Directory: <input type="file" webkitdirectory mozdirectory /></p>
that prompts users to select a main directory.
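For reference, reading the selection back in script looks roughly like this (webkitRelativePath preserves the subfolder structure):

// A minimal sketch: enumerate the files chosen via the directory input above.
const input = document.querySelector('input[type=file][webkitdirectory]');
input.addEventListener('change', () => {
  for (const file of input.files) {
    // e.g. "main/sub1/file0001.dat"
    console.log(file.webkitRelativePath, file.size, file.lastModified);
  }
});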
The main directory will first generate a subfolder, into which discrete files will be written every few seconds, up to ~4000 per subfolder, at which point the machine software will create another subfolder and continue, and so on.
I want the Google Cloud Storage bucket to automatically create a folder based on metadata (e.g. user login ID and time) in the background, and the website should monitor the main directory and its subfolders and automatically upload every file, sequentially from the time it finishes being written locally, into the bucket folder. Each 'session' is expected to run for ~2-5 days.
Creating separate Cloud folders is meant to separate user data in case of multiple parallel users.
Does anyone know how this can be achieved? It would be good if there were sample code to adapt into the existing HTML.
Thanks in advance!
As per @JohnHanley, this is not really feasible using a web application alone. I also do not entirely understand the use case, but I can provide some insight into monitoring Cloud Storage buckets.
GCP provides Cloud Functions:
Respond to change notifications emerging from Google Cloud Storage. These notifications can be configured to trigger in response to various events inside a bucket—object creation, deletion, archiving and metadata updates.
The Cloud Storage triggers will save you from having to monitor the buckets yourself; you can leave that to Cloud Functions.
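To give a sense of how little code that takes, here is a minimal 1st-gen Node.js background function that fires whenever an object is created in a bucket (the function name and bucket are placeholders):

// index.js
// Deploy with, e.g.:
//   gcloud functions deploy onFileUploaded \
//     --runtime nodejs18 \
//     --trigger-resource YOUR_BUCKET \
//     --trigger-event google.storage.object.finalize
exports.onFileUploaded = (object, context) => {
  // For storage triggers, the event payload is the object's metadata.
  console.log(`New object: gs://${object.bucket}/${object.name}`);
  console.log(`Created: ${object.timeCreated}, size: ${object.size} bytes`);
};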
Maybe you could expand on what you are trying to achieve with that many folders? Are you trying to create ~4,000 sub-folders per user? There may be a better path forward if we know more about the intended use of the data. It seems you want to hold data, and perhaps a DB is better suited, e.g.:
- Application
  |-- Accounts
  |---- User1
  |------ Metadata
  |---- User2
  |------ Metadata
We currently use Jive Cloud N, which can use the REST API and allows the use of custom apps. Our UI devs have created an app that uses a JS GET request to pull data from a JSON file for our "Birthdays and Anniversaries" tile.
At the moment, the JSON file is hosted on our UI dev's Google Cloud Apps account, but we wish to host it internally so we don't have to keep contacting them for changes.
I uploaded the file to our OneDrive for Business storage and created a public URL with full read permissions, but the Jive platform throws an error trying to load the custom app.
The error is that the file "has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present".
Our dev said that to get it working on his Google Cloud App storage, he had to specify the Access-Control-Allow-Origin field in the server's app.yaml file. I don't know what this is, or whether there is an equivalent for ODfB/SharePoint.
To get to my question: how can I host this JSON file on ODfB, or even somewhere on our Azure tenancy, so that it can be used? Or am I better off trying to set up a Google Cloud App storage location and replicate our dev's setup? FYI: I'd prefer the former because we're already using M$ for a number of cloud-hosted services.
Thanks in advance
To get to my question: how can I host this JSON file on ODfB, or even somewhere on our Azure tenancy, so that it can be used?
FYI: I'd prefer the former because we're already using M$ for a number of cloud-hosted services.
Per my understanding, you could leverage Azure Blob Storage to store your JSON file, and you could use Microsoft Azure Storage Explorer to easily manage/share your files.
Moreover, you can manage anonymous read access to your containers and blobs; refer to this tutorial for details. You can also leverage SAS (shared access signatures) to grant other clients limited access to your storage account; follow this tutorial to get started with SAS.
As a simple approach, create your storage account and use Microsoft Azure Storage Explorer to manage and share your file.
For cross-domain access, you need to configure the CORS settings on the storage account.
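If you prefer to script that instead of clicking through the portal or Storage Explorer, a rule of roughly this shape is what is needed (a sketch using the @azure/storage-blob SDK; the connection string and origin are placeholders):

const { BlobServiceClient } = require('@azure/storage-blob');

async function setCors() {
  const client = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING);
  // Properties omitted from setProperties are left unchanged by the service.
  await client.setProperties({
    cors: [{
      allowedOrigins: 'https://your-jive-instance.example.com', // the page doing the GET
      allowedMethods: 'GET',
      allowedHeaders: '*',
      exposedHeaders: '*',
      maxAgeInSeconds: 3600
    }]
  });
}

setCors().catch(console.error);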
To share your file (blob), you can either set the container's public access level or use SAS to grant other clients limited access to the file. For the former, right-click your container in Storage Explorer and select "Set Public Access Level".
Sample shared file: https://brucechen.blob.core.windows.net/brucechen/index.json
Alternatively, you can right-click your JSON file and click "Get Shared Access Signature".
Sample file shared via SAS: https://brucechen.blob.core.windows.net/brucechen/index.json?st=2017-02-28T08%3A04%3A00Z&se=2017-09-01T08%3A04%3A00Z&sp=r&sv=2015-12-11&sr=b&sig=rVkorHeNOd4j2YhkmmxZ6DfXVLf1FoN2smY6mNRIoWs%3D
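Once CORS allows your origin, consuming the JSON from the Jive tile is an ordinary cross-origin GET; a minimal sketch using the sample URL above:

fetch('https://brucechen.blob.core.windows.net/brucechen/index.json')
  .then((res) => {
    if (!res.ok) throw new Error('HTTP ' + res.status);
    return res.json();
  })
  .then((data) => {
    console.log('Birthdays and anniversaries data:', data);
  })
  .catch(console.error);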
I have some data for a web app that I would like to store on the server. What would be a good location for those files?
I have a couple of static HTML pages that contain instance-specific information. They need to survive a redeploy of the web app, they need to be editable by the server's administrator, and they are included in other HTML pages using the HTML object tag.
I also want to store preferences on the server but cannot use a database. I am using JSP to read and write the preferences; there is no sensitive data in them. Currently I am using the log directory, but that is obviously not a great choice.
I am using Tomcat. I thought of creating an appdata/myapp directory under the webapp directory. Is that good or bad?
If the server's administrator can also deploy the app, I would add the data file itself to the app's source control and deploy it all together. This way you get revision control of the data, and you get the ability to revert to known-good data if the server fails.
If the administrator can't deploy the app but can only edit the file, then you need a plan to back up that file in case the server or its filesystem dies.
A third solution is a hybrid: put the app in one source code repository and the data in a second one. The administrator can edit and deploy the data; the developer can edit and deploy the source code. This way both are revision-controlled, but you've separated responsibility for who maintains what.
I am making a Chrome extension that needs to add/delete/modify files in any location on the hard drive; the location can be a temporary folder. How is it possible to do this? Please give comments and helpful links that can lead me to getting this done.
You cannot, but adding a local server (Node.js/Deno/cs-script/Go/Python/Lua/...) with fixed logic (for security) to do the file work, answering back over HTTP to an AJAX/JSONP request, would work.
The extension will not be able to install the software part.
Edit: if you want to get started with Node.js, this could help.
Edit 2: with the File and Directory Entries API (this could help) you can get hold of a file or even a complete folder (getDirectory(), showDirectoryPicker()).
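To make the local-server idea concrete, here is a minimal Node.js sketch (the port and sandbox directory are arbitrary choices): the extension POSTs a filename and contents, and the server, whose fixed logic is the security boundary, only ever writes inside one sandbox directory.

// save-server.js
const http = require('http');
const fs = require('fs');
const path = require('path');
const os = require('os');

const SANDBOX = path.join(os.tmpdir(), 'extension-files');
fs.mkdirSync(SANDBOX, { recursive: true });

http.createServer((req, res) => {
  res.setHeader('Access-Control-Allow-Origin', '*'); // let the extension call us
  if (req.method !== 'POST') {
    res.writeHead(405);
    res.end();
    return;
  }
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    const { name, contents } = JSON.parse(body);
    // path.basename refuses anything that tries to escape the sandbox.
    const target = path.join(SANDBOX, path.basename(name));
    fs.writeFileSync(target, contents);
    res.end(JSON.stringify({ saved: target }));
  });
}).listen(8722);

From the extension side, fetch('http://localhost:8722/', { method: 'POST', body: JSON.stringify({ name: 'a.txt', contents: 'hi' }) }) completes the round trip.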
Thankfully, this is impossible.
Google, or any other company, wouldn't have many friends if installing their extensions caused a compromise that included complete control over any files (i.e. control over the machine) on your hard drive. An extension can save information to disk in a location that is set aside for storing local data, as mentioned. You will not have execute permission on the root or anywhere else, nor will you have read or write permission outside of that storage location.
However, extensions can still be malicious if they gather information from the user of a web page (I am sure Google can filter out some suspicious extensions).
If you really need to make changes on your hard drive, you can store information on a server and poll for changes with a Windows client application, or perhaps you can find where the storage information is kept and access it from there with a Windows app.
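For completeness, the sanctioned storage an extension does get is the chrome.storage API (it requires the "storage" permission in the manifest); a minimal sketch:

// Persist and read back a value from the extension's own storage area.
chrome.storage.local.set({ lastSync: Date.now() }, () => {
  chrome.storage.local.get('lastSync', (items) => {
    console.log('lastSync =', items.lastSync);
  });
});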
I have been going around in circles here and have totally confused myself. I need some help.
I am (trying to) write an application for a client that is simple in concept. He wants a Google write document with a button. The Google Drive account has several folders, each shared with several people. When he drops a new file into one of the folders, he wants to be able to open this write file, which is the template for his email. He clicks the button; the system calls the Changes service in the Google Drive SDK (https://developers.google.com/drive/manage-changes), gets the list of files that have been added since the last time it checked, pulls the list of people each file has been shared with, and uses the write file as a template to send that list of people an email saying their file is ready.
So, easy enough, right?
I started by looking at the built-in functions in the Google Apps Script API. I found this method, https://developers.google.com/apps-script/class_docslist#find, in the DocsList class. The problem is that the description for the query parameter simply says "the query string". So at first I tried the Drive SDK query parameters:
var files = DocsList.find("modifiedDate > 2012-12-20T12:00:00-08:00");
It didn't work. That leads me to believe it is a simple full-text search on the content. That's not good enough.
That led me to trying to call a Drive SDK method from within an Apps Script application. Great, we need OAuth 2 authentication. Easy enough. I found the objects in the script reference and hit my wall.
Client ID and Client Secret.
You see, when I create what this really is, a service account, the OAuth control in Apps Script doesn't know how to handle the encrypted JSON and pass it back and forth. Then, when I tried to create and use an installed-application key, I got authentication errors because the controls, again, don't know what to do with the workflow. And finally, when I try to create a web app key, I can't, because I don't have the site host name or redirect URI. And I can't use the plain API key option because, since I'm working with files, OAuth 2 is required.
I used anonymous access for a while, but hit the limit of anonymous calls per day in the effort of figuring out the code a bit. That's not going to work, because the guy is going to be pushing this button constantly throughout the day.
I have been pounding my head on the desk over this for 5 hours now. I need some help here; can anyone give me a direction to go?
PS: yes, I know I could use the database controls, load the entire list of files into memory, and compare it to the list of files in the database. The problem being, we are talking about tens of thousands of files. Bad idea.
I wouldn't use DocsList anymore; DriveApp is supposed to be a more reliable replacement. Some of the commands have changed, so instead of find, use searchFiles. This should work more effectively (the documentation even uses a query like yours as an example).
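A minimal Apps Script sketch of that replacement, using the same kind of date filter the question attempted (searchFiles takes the Drive API v2 search syntax, so modifiedDate comparisons work):

function listRecentlyModified() {
  var files = DriveApp.searchFiles('modifiedDate > "2012-12-20T12:00:00"');
  while (files.hasNext()) {
    var file = files.next();
    Logger.log(file.getName() + ' | ' + file.getLastUpdated());
  }
}

For the notification part of the workflow, the Drive Changes feed can still be called from Apps Script via UrlFetchApp with an OAuth token, but that is a separate step from the search above.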