Looking for a way to get the embed code for Box files via the API.
We have integrated Box into our app and the new embedded file/folder viewer is awesome. I'd love to give people the option to view their files/folders within our app via the embed option, but I need a way to get the embed code,
e.g.: https://www.box.com/embed/{file reference}.swf
Thanks for any info,
Dan.
EDIT: v2 Folder Embed (requires User to be logged in)
<iframe
src="https://box.com/embed_widget/000000000000/files/0/f/#{#folder.id}?view=expanded&sort=name&direction=ASC&theme=blue"
width="100%"
height="800"
frameborder="0">
</iframe>
The Folder Embed Code does not appear to be currently supported.
See Comments from the Dev Team at the bottom of this post
http://developers.blog.box.com/2012/10/11/even-more-v2-updates/
I am currently using v1 API CreateFileEmbed
http://developers.box.net/w/page/50509454/create_file_embed
Update [Sep 2013]
Box has released a public beta of View API at http://developers.box.com/view/
This is another HTML5 viewer.
Box documentation and support are horrible. Here's what I've figured out:
# given you have your box file in a variable named "file":
result = file.api.file_embed(file.id) # this may throw Box::Api::NotShared
embed_html = result['file_embed_html'] if result.respond_to?(:[])
Unfortunately, you'll have to make sure the file is publicly shared first. There's apparently no way to do private shares in the API, from what I've found, except via email (wtf?).
For completeness, shares can be done via the API like so (note: the file.unshare method is, at the time of writing, broken, hence the call through file.api):
file.share_public                            # make the file publicly shared
file.api.unshare_public(file.type, file.id)  # remove the public share again
This may be old, but these steps work for me.
First, share the file or folder with the API, then extract the shared_link value and join it with the embed URL (https://app.box.com/embed_widget/000000000000/s/SHARED_LINK_VALUE).
See: Create shared folders, Create shared files, Embed files or folders.
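A rough sketch of those steps in JavaScript, calling the v2 API directly (FILE_ID and ACCESS_TOKEN are placeholders; the shared-link call and the embed_widget prefix are as described above):
// Create/open a shared link on the file, then build the embed_widget URL from it
fetch('https://api.box.com/2.0/files/FILE_ID', {
  method: 'PUT',
  headers: {
    Authorization: 'Bearer ACCESS_TOKEN',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ shared_link: { access: 'open' } })
})
  .then(res => res.json())
  .then(file => {
    // file.shared_link.url looks like https://app.box.com/s/SHARED_LINK_VALUE
    const value = file.shared_link.url.split('/s/')[1];
    const embedUrl = 'https://app.box.com/embed_widget/000000000000/s/' + value;
    console.log(embedUrl); // use this as the iframe src
  });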
I have set up an S3 bucket to host static files.
When using the website endpoint (http://.s3-website-us-east-1.amazonaws.com/): it forces me to set an index file. When the file isn't found, it throws an error instead of listing directory contents.
When using the s3 endpoint (.s3.amazonaws.com): I get an XML listing of the files, but I need an HTML listing where users can click a link to each file.
I have tried setting the permissions of all files and the bucket itself to "List" for "Everyone" in the AWS Console, but still no luck.
I have also tried some of the javascript alternatives, but they either don't work under the website url (that redirects to the index file) or just don't work at all. As a last resort, a collapsible javascript listing would be better than nothing, but I haven't found a good one.
Is this possible? If so, do I need to change permissions, ACL or something else?
I've created a simple bit of JS that creates a directory index in the HTML style you are looking for: https://github.com/rgrp/s3-bucket-listing
The README has specific instructions for handling Amazon S3 "website" buckets: https://github.com/rgrp/s3-bucket-listing#website-buckets
You can see a live example of the script in action on this s3 bucket (in website mode): http://data.openspending.org/
There is also this solution: https://github.com/caussourd/aws-s3-bucket-listing
Similar to https://github.com/rgrp/s3-bucket-listing, but I couldn't make it work with Internet Explorer. https://github.com/caussourd/aws-s3-bucket-listing works with IE and also adds the possibility to order the files by name, size and date. On the downside, it doesn't follow folders: only the files at one level are displayed.
This might solve your problem. Security settings for Everyone group:
(you need the bucketexplorer.com software for this)
If you are sharing files over HTTP, you may or may not want people to be able to list the contents of a bucket (folder). If you want the bucket contents to be listed when someone enters the bucket name (http://s3.amazonaws.com/bucket_name/), then edit the Access Control List and give the Everyone group the access level of Read (and do likewise with the contents of the bucket). If you don't want the bucket contents list-able but do want to share the file within it, disable Read access for the Everyone group on the bucket itself, and then enable Read access for the individual files within the bucket.
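If you prefer to set the same permissions programmatically rather than through Bucket Explorer, a rough sketch with the AWS SDK for JavaScript might look like this (bucket and key names are placeholders):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
// Allow everyone to list the bucket contents...
s3.putBucketAcl({ Bucket: 'bucket_name', ACL: 'public-read' }, (err) => {
  if (err) console.error(err);
});
// ...and to read an individual file inside it.
s3.putObjectAcl({ Bucket: 'bucket_name', Key: 'some/file.html', ACL: 'public-read' }, (err) => {
  if (err) console.error(err);
});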
I created a much simpler solution. Just place the index.html file in the root of your folder and it will do the job. No configuration required. https://github.com/prabhatsharma/s3-directorylisting
I had a similar problem and created a JavaScript-and-iframe solution that works pretty well for listing directories in S3 website files. You just have to drop a couple of .html files into the directory you want to list. You can find it here:
https://github.com/adam-p/s3-file-list-page
I found s3browser, which allowed me to set up a directory on the main web site that allowed browsing of the s3 bucket. It worked very well and was very easy to set up.
Here is another approach, based on pure JavaScript and the AWS SDK for JavaScript. No need for PHP or another engine, just a plain web site (Apache or even IIS).
https://github.com/juvs/s3-bucket-browser
It is not intended to be deployed in your own bucket (for me, that makes no sense).
Using the new IAM users from AWS you can provide more specific and secure access to your buckets. There is no need to publish your bucket as a website and make everything public.
If you want to secure the access, you can use conventional methods to authenticate users for your current web site.
Hope this helps too!
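The core of that approach is just the SDK's listing call; a minimal sketch (bucket name and region are placeholders, credentials/IAM setup is omitted):
const s3 = new AWS.S3({ region: 'us-east-1' });
s3.listObjectsV2({ Bucket: 'your-bucket-name', Delimiter: '/' }, (err, data) => {
  if (err) return console.error(err);
  data.CommonPrefixes.forEach(p => console.log('folder:', p.Prefix));
  data.Contents.forEach(o => console.log('file:', o.Key, o.Size, 'bytes'));
});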
I've read up about this error but the proposed solutions don't seem to work for .doc/.docx files.
I am building a web app which involves displaying pdf/doc files. The files are stored in a google storage bucket, and I am using Firebase's getDownloadURL() method to get a link which I can use as the source in an <iframe>. This works fine for PDF files directly. However, given that this direct display is not possible for doc/docx files, I tried displaying them through Google Docs Viewer by taking the generated URL and appending as follows:
https://docs.google.com/gview?url=https://firebasestorage.googleapis.com/v0/b/project-name.appspot.com/o/filename?alt=media&token=a-b-c-1-2-3
This yields a Refused to display <URL> in a frame because it set X-Frame-Options to same origin error. I have also tried adding an &embedded=true to the URL as has been suggested in other similar queries, but that yields another error: Unchecked runtime.lastError: Could not establish connection. Receiving end does not exist.
I thought this could be an issue with parsing the URL due to the "&", so I changed it to "%26", but the "sameorigin" error persists.
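For reference, the URL construction I'm attempting, with the whole download URL percent-encoded before it is handed to the viewer, looks roughly like this (the URL and token are the placeholders from above):
const downloadUrl =
  'https://firebasestorage.googleapis.com/v0/b/project-name.appspot.com/o/filename' +
  '?alt=media&token=a-b-c-1-2-3';
const viewerUrl =
  'https://docs.google.com/gview?embedded=true&url=' + encodeURIComponent(downloadUrl);
// viewerUrl is then used as the iframe src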
I'm not sure how to tackle this, and any guidance on how to resolve this issue (or alternative ways of solving the problem) would be greatly appreciated.
Google docs creates its own storage objects, and will only serve those objects. It won't display other objects that happen to be in doc/docx format from other repositories.
It sounds like you need a way to render objects you uploaded (using Firebase) to GCS. I don't have experience doing that specific thing but I suggest you try to find some software that does it. For example from a quick web search I found Render docx file in a browser.
I'm designing/developing a simple HTML5-based webpage.
But, rather than having the videos (e.g. MP4 and/or WEBM files)
hosted locally on the web-server, I want to store them all
in Google 'cloud-storage', referencing them with a full
URL in the 'src' attribute of a <video> tag.
So, my first question is simply whether it's possible to derive
such a reference URL, to a video file that I've uploaded into my
Google acct's basic 15-GB of free storage? (Or do I need
to first buy an 'official' starter unit of Google Cloud Storage?)
Secondly, could someone please point me to a tutorial or 'recipe'
for how to compute such a URL, so that I can build a simple initial
prototype to validate such a design approach.
TIA...
Dave
It's actually almost trivial (once I bit the bullet and registered
for a 60-day free trial of "Google Cloud Platform").
It seems those older-style URLs (full of long strings of hex-chars)
are a thing of the past. That actually makes sense, since the
'bucket name' that you create to store your files in, must be
"globally-unique" and becomes part of the URL.
https://storage.googleapis.com/your-bucket-name/Steve_Jobs-2mins.mp4
So, it becomes as simple as just using their 'console' tool to create
a bucket, upload your file(s) into that bucket, declare each 'public/shared',
and then reference the resulting URL in the 'src' attribute of your
video or source HTML tag.
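For example, a minimal sketch wiring the public object URL into a video element (bucket and file names are the placeholders used above):
const video = document.createElement('video');
video.controls = true;
video.src = 'https://storage.googleapis.com/your-bucket-name/Steve_Jobs-2mins.mp4';
document.body.appendChild(video);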
You can view my working example here:
http://weasel.firmfriends.us/HTMLVideoFromCloud/
[ For details, you can 'view page source' on the HTML. ]
Cheers...
Dave
I have a bunch of URLs and I am trying to see what the page load time (PLT) is for those URLs in Chrome on Windows. Now there are many ways to do this - but what I want is to automate the process so that Chrome can read the URLs I want to measure the PLT for from somewhere and output the results somewhere else, maybe in another file.
Is there any tool I can make use of here? Or perhaps write a plugin that can read from a file when I start chrome and do this job for me? I am not sure how simple or complicated this can get, since I have no experience in this.
One way I can think of is to add a plugin that can measure the PLT in chrome, write a batch file which contains commands to invoke chrome and open the URLs in separate tabs. However, with this I still have to manually look at the PLT and record them, and I wish to automate this too.
Any help would be appreciated.
""Chrome doesn't technically allow you to access the local file system, but you might be able to do it with this: https://developer.chrome.com/extensions/npapi.html.
Another approach is to send the data to another web location via an API. The Google Drive API comes to mind: https://developers.google.com/drive.
You may already be aware that analysis of the pages can be done via a content script. Simply inject the JavaScript code or libraries you need into the pages the user opens, via the manifest file, something like this:
"content_scripts": [
{
"matches" : [
"<all_urls>"
],
"js" : [
"some_content_script.js"
]
}
],
You'll also need to add "all_urls" to the permissions section of the manifest file.
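For example, the corresponding manifest entry might look roughly like this (a sketch; the exact form depends on your manifest version):
"permissions": [
  "<all_urls>"
],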
The load time calculation could simply be accomplished with a timer starting at the beginning of the page load (as soon as the script is injected) and ending on "document.onload".
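A rough sketch of that content script, using the Navigation Timing API rather than a manual timer (the part that reports the number somewhere is left out):
// some_content_script.js - log the page load time once the page has finished loading
window.addEventListener('load', () => {
  const t = performance.timing;
  const plt = t.loadEventStart - t.navigationStart; // page load time in ms
  console.log('PLT for', location.href, ':', plt, 'ms');
  // a real extension would send this to a background page or a remote API instead
});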
Sounds like a pretty useful extension to be honest!
There are a couple of ways you could approach this:
Use WebPageTest - either get an API key for the public instance, or install your own private instance (http://andydavies.me/blog/2012/09/18/how-to-create-an-all-in-one-webpagetest-private-instance/)
Drive Chrome via its remote debug API - Andrea provides an example of how to use the API to generate HAR files, but your case would be simpler - https://github.com/andydavies/chrome-har-capturer
You could also probably hack this Chrome extension to post the times to a remote site - https://chrome.google.com/webstore/detail/page-load-time/fploionmjgeclbkemipmkogoaohcdbig via a background window
Well, using the HTML5 File API we can read files with the help of an input of type "file". What about reading files with a path like
/images/myimage.png
etc.?
Any kind of help is appreciated
Yes, if it is Chrome! Play with the FileSystem API and you will be able to do that.
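A bare sketch of that (note the prefixed webkitRequestFileSystem API is Chrome-only and gives access to a sandboxed filesystem, not to arbitrary paths on the user's disk):
window.webkitRequestFileSystem(window.TEMPORARY, 1024 * 1024, (fs) => {
  // the file has to exist in the sandbox already (e.g. written there earlier)
  fs.root.getFile('images/myimage.png', {}, (fileEntry) => {
    fileEntry.file((file) => {
      console.log('got file:', file.name, file.size, 'bytes');
    });
  }, (err) => console.error(err));
});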
The simple answer is: no. When your HTML/CSS/images/JavaScript is downloaded to the client's end, it breaks loose from the server.
Simplistic Flowchart
User requests URL in Browser (for example, www.mydomain.com/index.html)
Server reads and fetches the required file (www.mydomain.com/index.html)
index.html and its linked resources will be downloaded to the user's browser
The user's Browser will render the HTML page
The user's Browser will only fetch the files that came with the request (images/someimages.png and stuff like scripts/jquery.js)
Explanation
The problem you are facing here is that when the HTML is being rendered locally it has no link with the server anymore, so asking what /images/ contains file-wise is not possible, as that directory resides on the server.
Work-around
What you can do, though it somewhat defeats the point of the question, is make a server-side script in JSP/PHP/ASP/etc. This script will then traverse the directory you want. In PHP you can do this by using opendir() (http://php.net/opendir).
With an XHR/AJAX call you can request the PHP page to return the directory listing. The easiest way to do this is by using jQuery's $.post() function in combination with JSON.
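A minimal sketch of the client side (my_image_dirlist.php is the hypothetical server-side script mentioned below, assumed to return a JSON array of file names):
$.post('/my_image_dirlist.php', { dir: '/images/' }, (files) => {
  // "files" is the JSON array of file names produced by the server-side script
  files.forEach((name) => {
    console.log('/images/' + name);
  });
}, 'json');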
Caution!
You need to keep in mind that if you use this work-around, you expose a URL that anyone can visit to see what's inside the directory you list (for example, http://www.mydomain.com/my_image_dirlist.php would then return a stringified list of everything (or less, based on certain rules in the server-side script) inside http://www.mydomain.com/images/).
Notes
http://www.html5rocks.com/en/tutorials/file/filesystem/ (seems to work only in Chrome, but would still not be exactly what you want)
If you don't need all the files from a folder, but only those files that have been downloaded to your browser's cache for the URL request, you could try searching online for accessing the browser cache (downloaded files) of the currently loaded page. Or make something like a DOM-walker and CSS reader (regex?) to see where all the file relations are.