I'm trying to deploy an NFT contract. It's a replica of the mfers contract. The mfers contract has the token URI referenced as ipfs://QmWiQE65tmpYzcokCheQmng2DCM33DEhjXcPB6PanwpAZo/#
where # is the token number. The IPFS hash QmWiQE65tmpYzcokCheQmng2DCM33DEhjXcPB6PanwpAZo is the same across all tokens. How do you set something like this up in IPFS? Note that the IPFS URL returns a JSON object. Thanks in advance.
If you are interacting with IPFS directly, the Mutable File System (MFS) API abstracts away the process of creating directories and subdirectories, letting you treat IPFS like a regular file system. That isn't how IPFS works under the hood, but it is a more intuitive layer on top of it.
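For example, with a local node you can create the directory over MFS and then read back its CID, which becomes the base of the token URI. A minimal sketch, assuming a local daemon and the ipfshttpclient Python package (paths and metadata contents are placeholders):

import io
import ipfshttpclient

# connect to the local IPFS daemon's API (default: /ip4/127.0.0.1/tcp/5001/http)
client = ipfshttpclient.connect()

# create a directory in MFS and write one JSON file per token into it
client.files.mkdir("/metadata", parents=True)
for token_id in range(1, 4):
    data = ('{"name": "token %d"}' % token_id).encode("utf-8")
    client.files.write("/metadata/%d" % token_id, io.BytesIO(data), create=True)

# the directory's hash is the CID used in ipfs://<CID>/<token-number>
print(client.files.stat("/metadata")["Hash"])

Note the files are named 1, 2, ... with no extension, matching how the mfers URI appends just the token number. Every change to the directory's contents produces a new CID.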
If you're uploading via NFT.storage API or client, you might want to check out an uploadDirectory example.
See also: https://gist.github.com/tougerthao/9452b85fcb1f2b2b129c43dd5ecdc885
Related
I'm trying to create a collection of 10,000 (ERC-721) tokens, whose metadata are stored on IPFS.
Each image associated with a token would be uploaded to IPFS beforehand with its unique CID.
Since tokens will not be minted all at once, at first I want each metadata json to be empty and link to a placeholder image.
My question is: without setting a specific tokenURI for each token in my contract (which I want to avoid), how can I change the JSON file associated with a token when it's minted, without changing the baseURI, which must be common to all tokens?
This is how it should work:
ipfs://Qx000000000000000000/1.json // json file points to nothing
// token 1 is minted
ipfs://Qx000000000000000000/1.json // json file is updated but keeps the same ipfs base URI
I guess it should involve IPNS, but I can't find a specific guide on the best practice for this, even though I see this method used all the time, for example by the Bored Ape Yacht Club collection.
I am routing messages from an Azure IoT Hub to a blob container (Azure Storage as a routing endpoint). The messages sent to the IoT Hub have Content Type 'application/json' and Content Encoding 'UTF-8'. However, when they arrive in blob storage, several of these messages are batched together into one file with Content Type 'application/octet-stream'. As a result, Power BI, for instance, is not able to read these files as JSON when reading directly from the blob.
Is there any way to route these messages so that each single message is saved as a JSON file in the blob container?
TL;DR: Use the Encoding option to specify AVRO or JSON format, and the Batch Frequency/Size settings to control batching.
"With an Azure Storage container as a custom endpoint, IoT Hub will write messages to a blob based on the batch frequency and block size specified by the customer. After either the batch size or the batch frequency is hit, whichever happens first, IoT Hub will then write the enqueued messages to the storage container as a blob. You can also specify the naming convention you want to use for your blobs, as shown below."
To set this up, go to the IoT Hub's Message routing section and add a custom endpoint backed by a blob storage account.
When configuring the endpoint, set the batch frequency and chunk size, and use the Encoding section to choose the message format, AVRO or JSON.
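If you would rather script this than click through the portal, the Azure CLI has a routing-endpoint command group. A rough sketch (the flag names here are from memory, so verify them with az iot hub routing-endpoint create --help; resource names are placeholders):

az iot hub routing-endpoint create \
  --hub-name my-hub --resource-group my-rg \
  --endpoint-name blob-endpoint --endpoint-type azurestoragecontainer \
  --endpoint-resource-group my-rg --endpoint-subscription-id <subscription-id> \
  --connection-string "<storage-account-connection-string>" \
  --container-name telemetry \
  --encoding json --batch-frequency 60 --chunk-size 10

Keep in mind that batching can only be tuned down (60 seconds / 10 MB are the minimums), not disabled outright, so a strict one-message-per-blob layout isn't achievable through routing alone.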
Please leave a comment below to let us know if you need further help in this matter.
The message encoding needs to be set by the device sending the stream, or by a module that translates the protocol. Each protocol (AMQP, MQTT, and HTTP) has its own way of declaring the message's content type and encoding.
To route messages based on the message body, you must first add the content-type property ($.ct) to the end of the MQTT topic and set its value to application/json;charset=utf-8. An example is shown below.
devices/{device-id}/messages/events/$.ct=application%2Fjson%3Bcharset%3Dutf-8
https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-mqtt-support
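If the device uses the Azure IoT device SDK rather than raw MQTT, the same properties can be set on the message object and the SDK appends them to the topic for you. A minimal sketch with the azure-iot-device Python package (the connection string and payload are placeholders):

import json
from azure.iot.device import IoTHubDeviceClient, Message

client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")

msg = Message(json.dumps({"temperature": 21.5}))
msg.content_type = "application/json"  # surfaces as $.ct on the MQTT topic
msg.content_encoding = "utf-8"         # surfaces as $.ce on the MQTT topic
client.send_message(msg)
client.shutdown()

With both properties set, routing queries can inspect the message body, and JSON-encoded storage blobs contain readable payloads.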
I already have code to retrieve the objects in the bucket using the oci-java-sdk, and it works as expected. I would like to retrieve the URL of a file that was uploaded to the bucket in Object Storage, such that using this URL redirects to the actual location without asking for any credentials.
I saw pre-authenticated requests, but that means creating one more request. I don't want to send an extra request; I want to get the URL from the existing GetObjectResponse.
Any suggestions?
Thanks,
js
The URL of an object is not returned from the API but can be built using information you know (See Update Below!). The pattern is:
https://{api_endpoint}/n/{namespace_name}/b/{bucket_name}/o/{object_name}
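For example, a quick Python sketch of assembling that URL (the regional endpoint shown is the typical form; names are placeholders, and the object name should be percent-encoded since it may contain slashes):

from urllib.parse import quote

def object_url(region, namespace, bucket, object_name):
    # native Object Storage URL: {api_endpoint}/n/{namespace}/b/{bucket}/o/{object}
    endpoint = "https://objectstorage.%s.oraclecloud.com" % region
    return "%s/n/%s/b/%s/o/%s" % (endpoint, namespace, bucket, quote(object_name, safe=""))

print(object_url("us-phoenix-1", "mynamespace", "mybucket", "path/to/file.txt"))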
Accessing that URL will (generally, see below) require authentication. Our authentication mechanism is described at:
https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/signingrequests.htm
Authentication is NOT required if you configure the bucket as a Public Bucket.
https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/managingbuckets.htm?TocPath=Services%7CObject%20Storage%7C_____2#publicbuckets
As you mentioned, Pre-authenticated Requests (PARs) are an option. They are generally used in this situation, and they work well.
https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/usingpreauthenticatedrequests.htm
Strictly speaking, it is also possible to use our Amazon S3 Compatible API...
https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm
...and S3's presigned URLs to generate (without involving the API) a URL that will work without additional authentication.
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
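As a sketch of that route with boto3, assuming you have created a Customer Secret Key pair in the OCI console (names, region, and namespace are placeholders):

import boto3

# the S3 Compatible API endpoint is https://{namespace}.compat.objectstorage.{region}.oraclecloud.com
s3 = boto3.client(
    "s3",
    region_name="us-phoenix-1",
    endpoint_url="https://mynamespace.compat.objectstorage.us-phoenix-1.oraclecloud.com",
    aws_access_key_id="<customer-secret-key-id>",
    aws_secret_access_key="<customer-secret-key>",
)

# presigned GET URL, valid for one hour, usable without any further authentication
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "mybucket", "Key": "path/to/file.txt"},
    ExpiresIn=3600,
)
print(url)

Note that generate_presigned_url signs locally and makes no network call, which is what "without involving the API" refers to above.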
Update: A teammate pointed out that the OCI SDK for Java now includes a getEndpoint method that can be used to get the hostname needed when querying the Object Storage API. https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.25.3/com/oracle/bmc/objectstorage/ObjectStorage.html#getEndpoint--
Can anyone comment on whether it should be possible to use rclone's Swift support to access buckets in OCI Object Storage (new OCI, not Classic)?
I'm interested in it because S3 compatibility mode is limited to a single designated compartment and I'd like to be able to use rclone with any bucket in my tenancy.
I know that for public buckets there is still a Swift-style URL. The three functional URL styles seem to be:
Native: https://objectstorage.{region}.oraclecloud.com/n/{object-storage-namespace}/b/{bucket}/o/{filename}
Swift: https://swiftobjectstorage.{region}.oraclecloud.com/v1/{object-storage-namespace}/{bucket}/{filename}
S3: https://{object-storage-namespace}.compat.objectstorage.{region}.oraclecloud.com/{bucket}/{filename}
https://docs.cloud.oracle.com/en-us/iaas/Content/Identity/Tasks/managingcredentials.htm talks a little bit about Swift password (Auth Tokens) and you can create one in the console.
But I can't find anything about what the auth URL would be for the non-classic version of Object Storage. And storage_url with an auth_token doesn't seem to work either.
Using -vvvv doesn't show anything more than 401 Unauthorized.
I'm interested in it because S3 compatibility mode is limited to a single designated compartment and I'd like to be able to use rclone with any bucket in my tenancy.
The designated compartment only controls where buckets created via that protocol (S3 or Swift) are placed. The designated compartment does not affect authorization. Authorization is controlled by the relevant IAM policies.
But I can't find anything about what the auth URL would be for the non-classic version of Object Storage. And storage_url with an auth_token doesn't seem to work either.
The new/current OCI Object Storage does not support auth URLs. You must use HTTP basic-style auth with Swift on OCI. It does not seem that rclone supports HTTP basic auth with swift directly (it is possible to create the basic auth header yourself and have rclone send it).
All that said, using rclone's S3 support is the best approach for OCI Object Storage. Ensure you set the "region" option to the correct region name, like "us-phoenix-1", and you should be good.
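For reference, a remote definition along these lines in rclone.conf should work (namespace, region, and keys are placeholders; the access keys are an OCI Customer Secret Key pair created in the console):

[oci]
type = s3
provider = Other
access_key_id = <customer-secret-key-id>
secret_access_key = <customer-secret-key>
region = us-phoenix-1
endpoint = https://<namespace>.compat.objectstorage.us-phoenix-1.oraclecloud.com

(As noted above, the designated-compartment setting only affects where buckets created through this remote are placed.)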
Thanks!
I want to create a pre-authenticated request for an object inside a bucket in OCI Object Storage using the Python SDK. I found that I can use get_preauthenticated_request on the bucket to put objects into the bucket, but not to get objects pre-authenticated. I can create a pre-authenticated request using the OCI console, but I need to do it in a Python script. Can anybody help me with this issue?
You can use create_preauthenticated_request (see the code below) for both buckets and individual objects.
The difference is in the access type:
ANY_OBJECT_WRITE is for the whole bucket
OBJECT_READ, OBJECT_READ_WRITE and OBJECT_WRITE are for objects
So you should be able to create a Pre-Authenticated Request with something like
import oci
from datetime import datetime, timedelta

client = oci.object_storage.ObjectStorageClient(oci.config.from_file())
request_details = oci.object_storage.models.CreatePreauthenticatedRequestDetails(
    name="my-par", object_name="my-object", access_type="ObjectReadWrite",
    time_expires=datetime.utcnow() + timedelta(days=7))  # an expiry is required
par = client.create_preauthenticated_request("namespace", "bucket", request_details)
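The access_uri in the response is just a path; prepend the client's endpoint to get the full shareable URL. A small follow-on sketch (base_client.endpoint is an internal SDK attribute, so treat that part as an assumption):

# access_uri looks like /p/<token>/n/<namespace>/b/<bucket>/o/<object>
full_url = client.base_client.endpoint + par.data.access_uri
print(full_url)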
You can find more in the SDK documentation on CreatePreauthenticatedRequestDetails and on create_preauthenticated_request itself.
Let me know if this works for you; I don't have an account to test against at the moment.