Upload multiple JSON files to IPFS, but not all at once, using Moralis - ipfs

I want to create NFTs, so I want all the JSON files in one directory (/ipfs/CID/1.json, 2.json, 3.json), but I don't want to reveal them all instantly. That's why I will upload only the JSON files I want to reveal immediately, and then add more JSON files to IPFS later to reveal the remaining NFTs.
In other words, I want to upload multiple JSON files to IPFS, but not all at once. Moralis uploadFolder works fine, but when I try to upload another .json file later, its parent hash is different.
Example:
If I upload 2 JSON files to the /json folder, Moralis uploadFolder returns:
/ipfs/CID/1.json
/ipfs/CID/2.json
In this case the CID is the same, which is what I want. But when I upload another file, 3.json, later, it comes back with a different CID:
/ipfs/NEW CID/3.json

We can’t use decentralized storage like IPFS, because the URIs in IPFS are hashes of unique pieces of content and we can’t have completely unique URIs for every token in ERC1155.
Refer to: https://rameerez.com/problems-and-technical-nuances-of-nft-immutability-and-ipfs/

You will have to send separate requests for the files you add later. If you send everything in one request, Moralis will return the same base URL.
I faced the same issue and came up with a solution: I wrote a script that reads images and JSON from a file structure and uploads them to IPFS.
You can see my code here:
https://github.com/RajaFaizanNazir/bulk_IPFS_pin_differentCID
If you face any issue or confusion, feel free to ask me.
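For readers who just want the shape of such a bulk-upload script, here is a minimal Python sketch, assuming a Pinata account; the PINATA_JWT variable and the ./json folder are placeholders, not anything from the repo above. Note that each request returns its own CID, which is exactly why the folder CID changes when files are added in later uploads.

```python
# Minimal sketch: pin every JSON file in a local folder to IPFS one at a time
# via Pinata's pinFileToIPFS endpoint. Each request yields its own CID, which
# is why later uploads cannot share the original folder CID.
# Assumptions: a Pinata account and a PINATA_JWT environment variable.
import os
import requests

PIN_URL = "https://api.pinata.cloud/pinning/pinFileToIPFS"
HEADERS = {"Authorization": "Bearer " + os.environ["PINATA_JWT"]}

def pin_folder_of_json(folder):
    cids = {}
    for name in sorted(os.listdir(folder)):
        if not name.endswith(".json"):
            continue
        with open(os.path.join(folder, name), "rb") as f:
            resp = requests.post(PIN_URL, headers=HEADERS, files={"file": (name, f)})
        resp.raise_for_status()
        cids[name] = resp.json()["IpfsHash"]  # a different CID per request
    return cids

if __name__ == "__main__":
    print(pin_folder_of_json("./json"))
```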

Related

Upload files under same CID to IPFS

I'm looking to create an NFT project with 10k pieces. Each piece should be made available as soon as its token is minted, so I want to upload the JSON object to IPFS under the same hash, as I've seen in other projects.
This means that when the first item is minted, a new file will be uploaded to:
ipfs://<CID>/1
The second minting will create token 2, and then a new file will be uploaded to:
ipfs://<CID>/2
How can this be done with the IPFS or Pinata API?
Wrap it into a .car file; see the Web3.Storage guide "How to Work With Car Files".
Update: I just reread the last part of the question.
I found this here (https://docs.pinata.cloud/api-pinning/pin-file):
wrapWithDirectory - Wrap your content inside of a directory when adding to IPFS. This allows users to retrieve content via a filename instead of just a hash. For a more detailed explanation, see this informative blogpost. Valid options are: true or false
I'm pretty sure that you can do this with ipfs add /PATH/TO/CONTENT/* -w
I'm still exploring with IPFS, but this sounds like what you are looking for.
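To make that concrete, here is a hedged Python sketch of sending several files in one request to Pinata's pin-file endpoint so they end up under a single directory CID. The JWT, file names, and the shared "metadata/" path prefix are my assumptions (the prefix is what groups the files into one directory, mirroring ipfs add ./metadata/* -w), so check it against the Pinata docs linked above.

```python
# Sketch: pin several JSON files in ONE request so they share a single CID
# and are retrievable as ipfs://<CID>/metadata/1.json, .../2.json, etc.
# Assumptions: Pinata's pinFileToIPFS endpoint, a PINATA_JWT env variable,
# and local files 1.json / 2.json.
import os
import requests

PIN_URL = "https://api.pinata.cloud/pinning/pinFileToIPFS"
HEADERS = {"Authorization": "Bearer " + os.environ["PINATA_JWT"]}

# Multiple parts under the same "file" field; the shared "metadata/" prefix
# in the part filenames is what makes them land in one directory.
names = ["1.json", "2.json"]
parts = [
    ("file", ("metadata/" + n, open(n, "rb"), "application/json"))
    for n in names
]

resp = requests.post(PIN_URL, headers=HEADERS, files=parts)
resp.raise_for_status()
cid = resp.json()["IpfsHash"]          # one CID for the whole directory
print("ipfs://%s/metadata/1.json" % cid)
```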

Send Multiple JSON Files from Directory to a REST API with JMETER

I'm really struggling to get JMeter to send multiple JSON files to a REST API. I have tried other questions on Stack Overflow and tutorials online, and none of them answers my specific requirement.
My requirement is that I will have various JSON messages saved in a directory, and I want to use JMeter to loop through the folder, pick up the JSON files, and put each one into an HTTP request one at a time, i.e. one file = one request, and then view the result.
Does anyone know how to do this?
The easiest solution would be:
Directory Listing Config plugin to read file names into a JMeter Variable
__FileToString() function to read the file content
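If it helps to see the underlying loop spelled out, here is a rough Python sketch of the same idea, purely for illustration; the API_URL and the ./messages folder are placeholders, and inside JMeter itself the two elements above replace this script entirely.

```python
# Rough sketch of the "one file = one request" loop, outside JMeter.
# API_URL and the ./messages folder are placeholders.
import glob
import requests

API_URL = "https://example.com/api/messages"

for path in sorted(glob.glob("./messages/*.json")):
    with open(path, "rb") as f:
        body = f.read()                    # what __FileToString() does in JMeter
    resp = requests.post(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    print(path, resp.status_code)          # view the result per file
```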

Is it possible to retrieve a list of files in a JSON format from a URL that lists the contents of a folder

I have an NFS location that is not managed by me, and its contents can be accessed by browsing to it, i.e. the server serves up the folder as an HTML page.
Something like https://ftp.mozilla.org/pub/firefox/releases/52.0/
Is it possible to get the list of files as a JSON-formatted response directly, without changing anything on the NFS server and without having to write code to parse the HTML?
For example, maybe I can send the request to the URL with different headers.
To clarify:
When you access the address with a browser, curl, or wget, you get an HTML page.
My motivation is that I don't want to mount the NFS location. I want to access the files by downloading them from the URL.
I don't know the type of server that is holding the shared folder.
Thanks.
In short, the answer is NO.
Not without tweaking the settings on the webserver that is serving the folder contents.
Here are some examples of how to tweak Apache to serve a JSON-formatted file listing for the folder:
Apache directory listing as JSON using PHP
Apache directory listing as json
Apache External Module mod_jsonindex - May not be the recommended way
http://1h.com/opensource/mod_jsonindex.html

Swagger API Specification filenames

I'm trying to use Swagger to create API documentation for an API we're building and I've never used it before.
The documentation on GitHub says that the Resource Listing needs to be at /api-docs and the various resource files need to be at /api-docs/books etc.
This makes naming files and folders very tricky. I think they expect the files to have no extensions: rather than having a folder called /api-docs, it has to be an extension-less file. Then you can't put the resources in an /api-docs folder, because you can't call the folder that, so they suggest using a folder called /listings.
This folder doesn't appear in the URL structure of your documentation, though; it's kind of invisible, because you set the baseURL in your resources to the proper path. But it looks like that has to be an absolute path, which is awkward if you want to have it on several servers (local and production).
Maybe I just don't get it but this all seems to be absolutely nuts.
So, I have two questions:
1) Can I give my resource listing file and my resource files a .json extension? This would make sense, as they are JSON files.
2) Can I use a relative path to the resource listing file in the baseURL in my resource files?
Ideally, my file structure would be flatter, like this...
/api-docs
resources.json
books.json
films.json
Is Swagger flexible enough to do this?
It's an IIS server if that makes any difference (if the solution requires routing for example).
I was able to put model files into a folder under the web root and could reference them like this:
$ref: '/models/model.yml#/MyObject'
Relative paths also worked without a leading slash.
$ref: 'models/model.yml#/MyObject'
Inside model.yml, I can reference other objects in the same file like this:
$ref: '#/MyObject2'.
However, I could only get the main swagger file to import model files. I could not get one model file to cross-reference another model file.
I was using a Tomcat web server, but the principle will be the same.

How to handle uploading html content to an AppEngine application?

I would like to allow my users to upload HTML content to my AppEngine web app. However, if I use the Blobstore to upload all the files (HTML files, CSS files, images, etc.), this causes a problem, since the links to other files (pages, resources) will not work.
I see two possibilities, but both of them are not very pretty and I would like to avoid using them:
Go over all the links in the HTML files and change them to the relevant blob key.
Save a mapping between each file and its blob key, catch all the requests, and serve the blobs (this could cause problems with files that share the same name).
How can I solve this elegantly without having to go over and change my user's files?
Because App Engine runs your content on multiple servers, you are not able to write to the filesystem. What you could do is ask them to upload a zip file containing their HTML, CSS, JS, images, and so on. The zipfile module from Python is available in App Engine, so you can unzip these files and store them individually. This way you know the directory structure of the zip, which allows you to create a mapping of relative paths to the content in the Blobstore. I don't have enough experience with zipfile to write a full example here; I hope someone more experienced can edit my answer, or create a new one with an example.
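Since the answer stops short of an example, here is a rough sketch of that unzip-and-map idea using Python's standard zipfile module. The store_file callable and upload_id are placeholders for whatever persistence and grouping scheme you choose, not part of any App Engine API.

```python
# Rough sketch of the unzip-and-map idea from the answer above.
# store_file() is a placeholder for whatever persists the bytes (Blobstore,
# Cloud Storage, datastore); the mapping from the relative path inside the
# zip to the stored key is the important part.
import zipfile

def unpack_upload(zip_file_obj, upload_id, store_file):
    """Unzip an uploaded site and return {relative_path: storage_key}."""
    mapping = {}
    with zipfile.ZipFile(zip_file_obj) as zf:
        for info in zf.infolist():
            if info.filename.endswith("/"):      # skip directory entries
                continue
            data = zf.read(info.filename)        # bytes of one file in the zip
            key = store_file(upload_id, info.filename, data)
            mapping[info.filename] = key         # e.g. "css/site.css" -> key
    return mapping
```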
Saving a mapping is the best option here. You'll need to identify a group of files in some way, since multiple users may upload a file with the same name, then associate unique pathnames with each file in that group. You can use key names to make it a simple datastore get to find the blob associated with a given path. No redirects are required - just use the standard Blobstore serving approach of setting the blobstore header to have App Engine serve the blob to the user.
Another option is to upload a zip, as Frederik suggests. There's no need to unpack and store the files individually, though - you can serve them directly out of the zip in blobstore, as this demo app does.
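For what it's worth, a hedged sketch of that serve-straight-from-the-zip approach could look like the following; it assumes the old Python App Engine SDK (webapp2 and blobstore.BlobReader), and the routing, caching, and error handling are left out.

```python
# Sketch: serve a single file straight out of a zip stored in the Blobstore,
# without unpacking it first. Assumes webapp2 and the App Engine SDK's
# BlobReader; URL routing and error handling are omitted.
import mimetypes
import zipfile

import webapp2
from google.appengine.ext import blobstore

class ZipServeHandler(webapp2.RequestHandler):
    def get(self, blob_key, path):
        # BlobReader gives a file-like view of the uploaded zip blob.
        with zipfile.ZipFile(blobstore.BlobReader(blob_key)) as zf:
            data = zf.read(path)                 # e.g. path = "index.html"
        content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
        self.response.headers["Content-Type"] = content_type
        self.response.out.write(data)
```

You would then route URLs such as /site/&lt;blob_key&gt;/&lt;path&gt; to this handler so each relative link inside the uploaded site resolves to a member of the zip.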