Possible to determine if a file is present on IPFS by calculating its hash?

If I have a file and I want to see whether it is present on IPFS, is there some obvious way to compute its hash and then see whether a file is returned from a URL that includes that hash?

You can check out how a CID is created from a file here, or use a library for your language, such as this one for Rust, to encode your file into a CID and compare it with those retrieved from IPFS.
Note that a CID is not the same as a file's hash. A CID wraps a multihash (plus version and codec information) and is commonly encoded in base58 (CIDv0) or base32 (CIDv1). If all you have is a plain hash of a file (such as SHA-256 or Keccak-256), that is not a CID, and if you don't have the file itself you generally cannot retrieve it from IPFS: such a hash or checksum is only useful for validating content you already have, not for fetching the content.
See also: CID decoding algorithm
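As a rough illustration, here is a minimal sketch (assuming the ipfs CLI from Kubo/go-ipfs is installed, plus Python with the requests library; the file name and gateway are placeholders) that computes the CID a node would assign to a file without adding it, then probes a public gateway for that CID:

    # Minimal sketch: compute the CID a local node would assign to a file
    # (without adding it), then check whether a public gateway can return it.
    # Assumes the `ipfs` CLI (Kubo) and `requests`; names are placeholders.
    import subprocess
    import requests

    def local_cid(path: str) -> str:
        # --only-hash chunks and hashes without writing to the datastore;
        # -Q prints only the final hash. The result depends on the add settings
        # (chunker, CID version, raw leaves), not just on the file's bytes.
        out = subprocess.run(["ipfs", "add", "--only-hash", "-Q", path],
                             check=True, capture_output=True, text=True)
        return out.stdout.strip()

    cid = local_cid("example.pdf")
    try:
        resp = requests.head(f"https://ipfs.io/ipfs/{cid}",
                             timeout=30, allow_redirects=True)
        print(cid, "reachable" if resp.ok else f"gateway returned HTTP {resp.status_code}")
    except requests.exceptions.RequestException:
        print(cid, "not returned by this gateway (timed out or unreachable)")

Note that a gateway timeout only means that this particular gateway could not fetch the content in time, not that the content is absent from the network, and the locally computed CID only matches a published one if the same add settings were used.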

Related

Using IPFS DHT for custom key/value

I would like to know if I can use the DHT of IPFS as a kind of registry to find information related to a CID.
Example:
I have merkle trees that consist of roughly 2000-5000 file hashes. Those merkle trees (XML format) should be put on IPFS, where I get a CID back. Now I want to be able to find the CID containing the corresponding merkle tree using only a file hash.
So if I look up a file hash on IPFS, I would get back the CID of the merkle tree that contains it.
An entry in the DHT would look something like this: {key, value} = {file hash, IPFS CID}.
I know that I could build my own hash table that maps file hashes to CIDs. My first thought was to store this hash table on IPFS as well and let a DNS record point to it, so it is the single source of truth and can easily be found. But since I want to keep everything as decentralized as possible, I wondered whether I could use the DHT of IPFS to do something similar. Is there a way of doing this, or something closely related?

Upload multiple JSON files to IPFS, but not all at once, using Moralis

I want to create NFTs, so I want all the JSON files in one directory (/ipfs/CID/1.json, 2.json, 3.json), but I don't want to reveal them all at once. That's why I will upload only the JSON files I want to reveal immediately, and then add more JSON files to IPFS later to reveal further NFTs.
I want to upload multiple JSON files to IPFS, but not all at once. Moralis uploadFolder works fine, but when I try to upload another .json file, its parent hash is different.
Example:
I upload 2 JSON files to the /json folder, and Moralis uploadFolder returns
/ipfs/CID/1.json
/ipfs/CID/2.json
In this case the CID is the same, which is what I want. But when I upload another file, 3.json, it returns a different CID:
/ipfs/NEW CID/3.json
We can’t use decentralized storage like IPFS, because the URIs in IPFS are hashes of unique pieces of content and we can’t have completely unique URIs for every token in ERC1155.
Refer to: https://rameerez.com/problems-and-technical-nuances-of-nft-immutability-and-ipfs/
You will have to send separate requests. If you send everything in one request, Moralis will return the same base URL.
I faced the same issue and came up with a solution: I wrote a script which reads images and JSON from a file structure and uploads the images to IPFS.
You can see my code here:
https://github.com/RajaFaizanNazir/bulk_IPFS_pin_differentCID
If you face any issue or confusion, feel free to ask me.
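As a rough sketch of that approach against a plain IPFS node (not Moralis), the HTTP API's add endpoint can wrap several files in a single directory so they share one parent CID. The daemon address and file names below are placeholders:

    # Minimal sketch: add several JSON files to a local Kubo (go-ipfs) node so
    # they end up under one parent directory CID. Assumes a daemon on the
    # default API port and the `requests` library; file names are placeholders.
    import json
    import requests

    API_ADD = "http://127.0.0.1:5001/api/v0/add"

    def add_as_folder(paths):
        files = [("file", (p, open(p, "rb"), "application/json")) for p in paths]
        # wrap-with-directory=true wraps all uploaded files in one directory
        resp = requests.post(API_ADD, params={"wrap-with-directory": "true"}, files=files)
        resp.raise_for_status()
        # The API streams one JSON object per line; the wrapping directory comes last
        entries = [json.loads(line) for line in resp.text.strip().splitlines()]
        return entries[-1]["Hash"], entries

    parent_cid, entries = add_as_folder(["1.json", "2.json"])
    print(parent_cid)  # files are reachable at /ipfs/<parent_cid>/1.json, /2.json
    # Re-running later with ["1.json", "2.json", "3.json"] necessarily produces a
    # different parent CID: the directory's content changed, so its hash changes.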

How to take the "access token" value from an output JSON file and pass it to another REST GET request in Azure Data Factory?

I have the access token and expiry time as two fields in a JSON file, produced by a POST request and stored in Blob Storage.
Now I need to look inside the JSON file I stored earlier, take the value of the access token, and use it as a parameter in another REST request.
Please help...
Depending on your scenario, there are a couple of ways you can do this. I assume that you need the access token in a completely different pipeline, since you are storing the get-access-token output to a file in Blob Storage.
In order to reference the values within the JSON Blob file, you can use a Lookup activity in Azure Data Factory. Within this Lookup activity you will use a JSON dataset that references a linked service connection to your Azure Blob Storage.
Here is an illustration with a JSON file in my Blob container:
The screenshot above uses a Lookup activity with the JSON file dataset on a Blob Storage linked service to get the contents of the file. It then saves the contents of the file to variables, one for the access token and another for the expiration time. You don't have to save them to variables; you can instead call the output of the activity directly in the subsequent Web activity. Here are the details of the outputs and settings:
Hopefully this helps, and let me know if you need clarification on anything.
EDIT:
I forgot to mention: if you get the access token using a Web activity and then need to use it again in another Web activity in the same pipeline, you can just take the access-token value from the first Web activity and call that output in the next Web activity, just like I showed with the Lookup activity, except you'd be using the response from the first Web activity that retrieves the access token. Apologies if that's hard to follow, so here is an illustration of what I mean:
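For example (the activity name and property are placeholders, assuming the token response body contains an access_token field), the second Web activity can reference the first one's output with an expression like:

    @activity('Get Access Token').output.access_token

An Authorization header value can be built the same way, e.g. @concat('Bearer ', activity('Get Access Token').output.access_token).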
A simple way to read JSON files into a pipeline is to use the Lookup activity.
Here is a test JSON file loaded into Blob Storage in a container named json:
Create a JSON dataset that just points to the container; you won't need to configure or parameterize the Folder or File name values, although you certainly can if that suits your purpose:
Use a Lookup activity that references the Dataset. Populate the Wildcard folder and file name fields. [This example leaves the "Wildcard folder path" blank, because the file is in the container root.] To read a single JSON file, leave "First row only" checked.
This will load the file contents into the Lookup Activity's output:
The Lookup activity's output.firstRow property will become your JSON root. Process the object as you would any other JSON structure:
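For example (the activity name and JSON property are placeholders), with "First row only" checked, a later expression can pull the token out of the Lookup output like this:

    @activity('Lookup JSON').output.firstRow.access_token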
NOTE: the Lookup activity has a limit of 5,000 rows and 4MB.

https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable: how do I find the file id?

When I use https://www.googleapis.com/upload/drive/v3/files without uploadType=resumable I receive back a JSON containing the file id. So this is ideal for creating folders.
If I want to add a file > 5 MB I have to use uploadType=resumable, and I receive an upload URL for use with PUT.
Presently I am listing the folder by specifying the file name and folder id, and then picking the entry with the latest date.
The documentation refers to querying the location URL, but I cannot find any examples of whether this returns the file id.
Can anybody explain how to get the file id easily?
Regards Conwyn
When you use the Google Drive API to create a file > 5 MB, you have to use POST with uploadType=resumable. This returns a URL which you then use with PUT. When the PUT completes, it returns the file metadata, from which you can determine the file id ("id").
I have suggested a documentation change to Google.
Regards Conwyn
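For reference, a minimal sketch of that flow in Python with the requests library (the access token, file name, and folder id are placeholders):

    # Minimal sketch of the resumable upload flow: start a session with POST,
    # PUT the bytes to the returned session URL, then read "id" from the
    # final JSON response. Token, file name, and folder id are placeholders.
    import json
    import requests

    ACCESS_TOKEN = "ya29.placeholder"
    metadata = {"name": "big-file.bin", "parents": ["FOLDER_ID"]}

    # Step 1: initiate the resumable session; the upload URL is in the Location header.
    start = requests.post(
        "https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
                 "Content-Type": "application/json; charset=UTF-8"},
        data=json.dumps(metadata),
    )
    start.raise_for_status()
    session_url = start.headers["Location"]

    # Step 2: upload the content. On completion the response body is the file's
    # metadata resource, which contains the new file id.
    with open("big-file.bin", "rb") as f:
        done = requests.put(session_url, data=f)
    done.raise_for_status()
    print(done.json()["id"])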

How to store a blob of JSON in Airtable?

There does not appear to be a dedicated field type in Airtable for "meta" data blobs and/or a JSON string.
Is the "Attachment" type my best bet?
I could store it either as a JSON attachment or in a string-type column.
Since a full JSON blob in a text column would likely not be readable, I would store it as an attachment.
However, it seems that, at least for now, uploading attachments requires the file to already be hosted somewhere first, so this route might not be the easiest one:
https://community.airtable.com/t/is-it-possible-to-upload-attachments/188
Right now this isn’t possible with the Airtable API alone. It’s something we’ll think about for future API versions though. A workaround for now is to use a different service (e.g. Filestack, imgur, etc.) to process the upload before then sending the url to Airtable. When Airtable processes the attachment, it will copy the file to Airtable’s own (S3) server for safekeeping, so it’s OK if the original uploaded file url is just temporary.
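If you do go the text-column route instead, a minimal sketch of writing the blob through the Airtable REST API might look like this (the base id, table name, field names, and token are placeholders, and the target field is assumed to be a long-text field):

    # Minimal sketch: store a JSON blob as a string in an Airtable text field.
    # Base id, table name, field names, and the API token are placeholders.
    import json
    import requests

    AIRTABLE_TOKEN = "pat_placeholder"
    BASE_ID = "appXXXXXXXXXXXXXX"
    TABLE = "Items"

    blob = {"foo": "bar", "count": 3}
    payload = {"fields": {"Name": "example", "Meta": json.dumps(blob)}}

    resp = requests.post(
        f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}",
        headers={"Authorization": f"Bearer {AIRTABLE_TOKEN}",
                 "Content-Type": "application/json"},
        json=payload,
    )
    resp.raise_for_status()
    print(resp.json()["id"])  # record id; read back with json.loads(record["fields"]["Meta"])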