I have been using OSS (Object Storage Service) buckets to store videos and I need links that don't expire.
When I generate a link in the OSS Browser, I get the option to set the expiry time, but there is no option to turn expiry off.
Is there any way to generate links that do not expire?
There is only one way to do it: change the ACL. You can set the bucket ACL to public or public-read (keep in mind the risk of a data leak), or, better, change the ACL for a single object only.
Change OSS bucket ACL https://www.alibabacloud.com/help/doc-detail/31898.htm
Change OSS object ACL https://partners-intl.aliyun.com/help/doc-detail/52284.htm
Option 2 is much better because the whole bucket can keep a private ACL while individual objects are public-read.
Once the bucket or, preferably, the object has a public-read ACL, you can click Copy File URL and share the object.
FYI - the default shared URL carries ?Expires=1544615363, which is a Unix timestamp in seconds since the epoch: 1544615363 s ≈ 17877.49 days ≈ 48.97 years :)
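If you would rather script this than click through the console, here is a minimal sketch with the ali-oss Node.js SDK; the region, bucket name, object key and credential variables are placeholders, so check them against your own setup:

const OSS = require('ali-oss');

async function makeObjectPublic() {
  const client = new OSS({
    region: 'oss-eu-central-1',                  // placeholder region
    accessKeyId: process.env.OSS_KEY_ID,         // your credentials
    accessKeySecret: process.env.OSS_KEY_SECRET,
    bucket: 'my-video-bucket'                    // placeholder bucket name
  });

  // Set the ACL of a single object to public-read; the bucket stays private.
  await client.putACL('videos/demo.mp4', 'public-read');

  // The object is now reachable without a signature, so the plain URL
  // never expires:
  // https://my-video-bucket.oss-eu-central-1.aliyuncs.com/videos/demo.mp4
}

makeObjectPublic().catch(console.error);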
I'm currently playing around with the Couchbase Sync Gateway and have built a demo app.
What is the intended behavior if a user logs in with the same username on a different device (which has an empty database), or if they have deleted the local database?
I'm expecting that all the data from the server should get synced back to the client.
Is this correct?
My problem is that if I delete the database or log in from a different device, nothing gets synced.
OK, I figured it out, and it's exactly how I thought it would be.
If I log in from a different device, I get all the data synced automatically.
My problem was the missing sync function. I thought it would use a default and route all documents to the public channel automatically.
I'm now using the following simple sync function:
"sync": `function (doc, oldDoc) {
channel('!');
access('demo#example.com', '*');
}`
This will simply route all documents to the public channel and grant my demo user access to all channels (the '*' wildcard).
I think this shouldn't be used in production, but it's a good starting point for playing around.
Now everything is working fine.
Edit: I've now found the missing info:
https://docs.couchbase.com/sync-gateway/current/configuration-properties.html#databases-this_db-sync
If you don't supply a sync function, Sync Gateway uses the following default sync function
...
The channels property is an array of strings that contains the names of the channels to which the document belongs. If you do not include a channels property in a document, the document does not appear in any channels.
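As a concrete (made-up) illustration of that last point: with the default sync function, a document only shows up in a channel if it carries the channels property itself, e.g.:

{
  "type": "demo",
  "title": "hello",
  "channels": ["public"]
}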
I have followed this tutorial and have uploaded my file successfully to: https://developer.api.autodesk.com/oss/v2/buckets/timmyisabucket/objects/audobon_arch.rvt
The upload succeeded, and I can verify this by calling https://developer.api.autodesk.com/modelderivative/v2/designdata/dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6dGltbXlpc2FidWNrZXQvYXVkb2Jvbl9hcmNoLnJ2dA==/metadata/c63a6682-a73c-a2a8-a08c-dfeee25781f4/properties, which returns all the object properties.
However, when I ask the api to list all the objects inside the bucket, it simply returns an empty list!
The endpoint I'm calling: https://developer.api.autodesk.com/oss/v2/buckets/timmyisabucket/objects
The response:
{
  "items": []
}
Where am I going wrong?
Thanks
Just to close this off, the helpful comment by Xiaodong Liang led me to the fact that my bucket was created with the incorrect retention policy of "Transient", meaning that everything I upload gets deleted after 24 hours.
It should have been Temporary or Persistent.
Retention policy

Transient
Think of this type of storage as a cache. Use it for ephemeral results. For example, you might use this for objects that are part of producing other persistent artifacts, but otherwise are not required to be available later.
Objects older than 24 hours are removed automatically. Each upload of an object is considered unique, so, for example, if the same rendering is uploaded multiple times, each of them will have its own retention period of 24 hours.

Temporary
This type of storage is suitable for artifacts produced for user-uploaded content where, after some period of activity, the user may rarely access the artifacts.
When an object has reached 30 days of age, it is deleted.

Persistent
Persistent storage is intended for user data. When a file is uploaded, the owner should expect this item to be available for as long as the owner account is active, or until he or she deletes the item.
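As far as I know, the retention policy is fixed when the bucket is created (the policyKey field of the bucket-creation request), so the fix is to create a new bucket with the right policy and re-upload. A minimal Node.js sketch, assuming Node 18+'s global fetch; the bucket key and accessToken are placeholders:

async function createPersistentBucket(accessToken) {
  // policyKey may be "transient", "temporary" or "persistent".
  const res = await fetch('https://developer.api.autodesk.com/oss/v2/buckets', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + accessToken,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      bucketKey: 'timmyisapersistentbucket', // placeholder; bucket keys are globally unique
      policyKey: 'persistent'
    })
  });
  return res.json();
}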
I want to use the Viewer API in a completely private network. Are there any problems in this case?
Requests throw errors when loadModel is used.
Well, it depends what you mean by "completely private network".
If you want to use the viewer by storing the translation results (SVF bubbles) on your network and serving the bubbles' files from one of your servers, then it is fine (we call this offline viewing). You also need to copy the viewer JavaScript files onto your server to do that. We have an example of this at https://extract.autodesk.io/ with source code at https://github.com/cyrillef/extract.autodesk.io
If you want to do something different from what I wrote above, you would need to contact us at forge.help@autodesk.com and explain in more detail what you want to do.
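For the offline-viewing setup described above, the hosting side can be as simple as a static file server. A hedged sketch with Express; the folder names are placeholders for wherever you copied the viewer files and the extracted bubbles:

const express = require('express');
const app = express();

// The viewer JavaScript/CSS files copied from Autodesk's CDN.
app.use('/viewer', express.static('viewer-assets'));

// The extracted SVF bubbles (e.g. produced with extract.autodesk.io).
app.use('/models', express.static('extracted-bubbles'));

app.listen(8080, () => console.log('Serving viewer on http://localhost:8080'));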
I'm building an application that stores files into the FIWARE Object Storage. I don't quite understand what is the correct way of storing files into the storage.
The Python code snippet below, taken from the Object Storage - User and Programmers Guide, shows two ways of doing it:
def store_text(token, auth, container_name, object_name, object_text):
    headers = {"X-Auth-Token": token}
    # 1. version
    #body = '{"mimetype":"text/plain", "metadata":{}, "value" : "' + object_text + '"}'
    # 2. version
    body = object_text
    url = auth + "/" + container_name + "/" + object_name
    return swift_request('PUT', url, headers, body)
The first version confuses me, because when I first looked at the only Node.js module that works with the Object Storage (repo: fiware-object-storage), it seemed to use the first version. Since the module was making calls to the old (v1.1) API version instead of the presumably newest (v2.0) that the Python example references, I'm not sure whether that way of doing it is outdated or not.
As I played more with the module, I realised it didn't work and its code was a total mess, so I forked the project and quickly understood that I would need to rewrite it from the ground up, taking the above-mentioned Python example from the usage guide as a reference. Link to my repo.
As of writing this, the only methods that aren't implemented are object storing (PUT) and object fetching (GET).
I had some additional questions about the Object Storage, which I sent to fiware-lab-help@lists.fiware.org, but I haven't heard anything back, so I'm asking them here.
I haven't got much experience with writing API libraries. Should I worry about the auth token expiring? I presume there is no need to re-authenticate every time we interact with the storage; authentication should happen once when the server starts up (when we create an instance), and the instance keeps the token internally. Should I implement some kind of mechanism that refreshes the token?
Does the tenant id change? From the quote below I presume that getting a tenant id is just a one-time deal; later you can use it in the config to make fewer authentication calls.
A valid token is required to access an object store. This section
describes how to get a valid token assuming an identity management
system compatible with OpenStack Keystone is being used. If the
username, password and tenant details are known, only step 3 is
required. source
During authentication, when fetching tenants, how should I select the "right" one? For now I'm just taking the first one, similar to what the example code does.
Is it true that an object storage container belongs to only a single region?
Use only what you call version 2. Ignore your version 1; it is commented out in the example and should be removed from the documentation.
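In Node.js terms (for the fiware-object-storage rewrite mentioned above), version 2 is just a raw PUT of the body with the token in the X-Auth-Token header. A hedged sketch, assuming storageUrl and token come out of the authentication step and using Node 18+'s global fetch:

async function storeText(token, storageUrl, containerName, objectName, objectText) {
  // Version 2: the object body is sent as-is, with no JSON envelope.
  const res = await fetch(`${storageUrl}/${containerName}/${objectName}`, {
    method: 'PUT',
    headers: { 'X-Auth-Token': token },
    body: objectText
  });
  return res.status; // Swift answers 201 Created on success
}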
(1) The token will be valid for some period of time, which could be an hour or a day depending on the setup. This period should be specified in the token that is returned by the authentication service. The token needs to be periodically refreshed (see the sketch after this list).
(2) The tenant id does not change.
(3) Typically only one tenant id is returned. It is possible, however, that you were assigned more than one id, in which case you have to pick which one you are currently using. Containers typically belong to a single tenant and are not shared between tenants.
(4) Containers are typically limited to a single region. This may change in the future when multi-region support for a container is added to Swift.
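Regarding (1), a minimal sketch of such a refresh mechanism; authenticate() is a hypothetical stand-in for the library's Keystone login call, and the exact expiry field name depends on the Keystone version:

let cached = { token: null, expiresAt: 0 };

async function getToken() {
  // Reuse the cached token while it is still valid (1 minute of slack).
  if (cached.token && Date.now() < cached.expiresAt - 60 * 1000) {
    return cached.token;
  }
  const auth = await authenticate(); // hypothetical Keystone login call
  cached = {
    token: auth.token,
    expiresAt: new Date(auth.expires).getTime() // expiry reported by Keystone
  };
  return cached.token;
}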
I solved my troubles and created an NPM module that works with the FIWARE Object Storage: https://github.com/renarsvilnis/fiware-object-storage-ge
I'm creating an app that needs to track the user's location (with their knowledge, like a running app) so that I can show them their route later. Should I use HTML5 geolocation with some timeout interval to save the coordinates every N seconds? If so, how often should I save the data, and how should I save it (locally using local storage, or by posting it to the server)?
Also, what is the easiest way to display the map of where the user has been later?
Has anyone done anything like this before?
The timeout interval for forge.geolocation is up to you and the balance of responsiveness in your application. Also, network traffic is expensive, so maybe you can buffer, say, the last 10 geopositions and then HTTP POST them in bulk (or whatever; see Parse below). And since the geo data sounds like temporary device data, why would there be a need to persist it using forge.prefs? Unless maybe you need the app to work "offline"?
For permanent storage, I would look at Parse (generous free plan) and their Parse.GeoPoint class via their JavaScript or REST API as one possible solution. It has some nifty methods like kilometersTo, milesTo, and radiansTo - https://parse.com/docs/js/symbols/Parse.GeoPoint.html
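A minimal sketch of the buffer-and-post idea with plain HTML5 geolocation; the /api/route endpoint, the 10-second interval and the buffer size of 10 are placeholder choices to tune against battery and traffic:

const buffer = [];

setInterval(() => {
  navigator.geolocation.getCurrentPosition((pos) => {
    buffer.push({
      lat: pos.coords.latitude,
      lng: pos.coords.longitude,
      time: pos.timestamp
    });
    // Post in bulk once we have 10 fixes, then empty the buffer.
    if (buffer.length >= 10) {
      fetch('/api/route', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(buffer.splice(0, buffer.length))
      });
    }
  }, (err) => console.warn('geolocation error', err));
}, 10000); // sample every 10 seconds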