Upload files under same CID to IPFS

I'm looking to create an NFT project with 10k pieces. Each piece should be made available as soon as its token is minted, so I want to upload the JSON object to IPFS under the same hash, as I've seen in other projects.
This means that when an item is minted, a new file will be uploaded to:
ipfs://<CID>/1
The second minting will create token 2, and then a new file will be uploaded to:
ipfs://<CID>/2
How can this be done with the IPFS or Pinata API?

Wrap it into a .car file; see the Web 3 Storage guide "How to Work With Car Files".

Update: I just reread the last part of the question.
I found this in the Pinata docs (https://docs.pinata.cloud/api-pinning/pin-file):
wrapWithDirectory - Wrap your content inside of a directory when adding to IPFS. This allows users to retrieve content via a filename instead of just a hash. For a more detailed explanation, see this informative blogpost. Valid options are: true or false
I'm pretty sure that you can do this with ipfs add /PATH/TO/CONTENT/* -w
I'm still experimenting with IPFS, but this sounds like what you're looking for.
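If you want to script that against Pinata's pinFileToIPFS endpoint, a minimal sketch in Python might look like the following. It only illustrates the wrapWithDirectory option quoted above; the PINATA_JWT environment variable name and the file paths are my own placeholders, not anything from Pinata's docs.
import json
import os
import requests

# Sketch only: pin several metadata files in one request, wrapped in a directory,
# so they are reachable as ipfs://<CID>/1, ipfs://<CID>/2, ...
PIN_URL = "https://api.pinata.cloud/pinning/pinFileToIPFS"

def pin_wrapped(paths):
    headers = {"Authorization": "Bearer " + os.environ["PINATA_JWT"]}  # placeholder auth
    files = [("file", (os.path.basename(p), open(p, "rb"))) for p in paths]
    data = {"pinataOptions": json.dumps({"wrapWithDirectory": True})}
    resp = requests.post(PIN_URL, headers=headers, files=files, data=data)
    resp.raise_for_status()
    # IpfsHash is the CID of the wrapping directory.
    return resp.json()["IpfsHash"]

print(pin_wrapped(["metadata/1", "metadata/2"]))
Note that the wrapping directory's CID is derived from its contents, so uploading more files later produces a new directory CID; the CID only stays the same if the set of files (and their contents) stays the same.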

How do I add client-secret.json to my Heroku app? [duplicate]

I'm building a Rails app that pulls data from Google Analytics using the Google API Client Library for Ruby.
I'm using OAuth2 and can get everything working in development on a local machine. My issue is that the library uses a downloaded file, client_secrets.json, to store two secret keys.
Problem: I'm using Heroku and need a way to get the file to their production servers.
I don't want to add this file to my GitHub repo as the project is public.
If there is a way to temporarily add the file to git, push to Heroku, and remove it from git, that would be fine. My sense is that the keys will be in the commits and very hard to prevent from showing on GitHub.
Tried:
As far as I can tell, you cannot SCP a file to Heroku via a Bash console. I believe when doing this you get a new dyno, and anything you add would only be temporary. I tried this but couldn't get SCP to work properly, so I'm not 100% sure about this.
Tried:
I looked at storing the JSON file in an Environment or Config Var, but couldn't get it to work. This seems like the best way to go, if anyone has a thought. I often run into trouble when Ruby converts JSON into a string or hash, so possibly I just need guidance here.
Tried:
Additionally I've tried to figure out a way to just pull out the keys from the JSON file, put them into Config Vars, and add the JSON file to git. I can't figure out a way to put ENV["KEY"] in a JSON file though.
Example Code
The Google library has a method that loads the JSON file to create an authorization client. The client then fetches a token (or gives an authorization URL).
client_secrets = Google::APIClient::ClientSecrets.load('client_secrets.json')
auth_client = client_secrets.to_authorization
Note that the example on Google's page doesn't show a filename because it uses a default ENV var that's been set to a path.
I figure this would all be a lot easier if the ClientSecrets.load() method would just take JSON, a string, or a hash, which could go into a Config Var.
Unfortunately, it always seems to want a file path. When I feed it JSON, a string, or a hash, it blows up. I've seen someone get around the issue with a p12 key here, but I'm not sure how to replicate that in my situation.
Haven't Tried:
My only other thought (aside from moving to AWS) is to put the JSON file on AWS and have Rails pull it when needed. I'm not sure if this can be done on the fly or if the file would need to be pulled down when the Rails server boots up. It seems like too much work, but at this point I've spent a few hours on it, so I'm ready to attempt it.
This is the specific controller I am working on:
https://github.com/dladowitz/slapafy/blob/master/app/controllers/welcome_controller.rb
By searching GitHub I found that someone had used a different method that takes a JSON string as an argument rather than a file path: Google::APIClient::ClientSecrets.new(JSON.parse(ENV['GOOGLE_CLIENT_SECRETS']))
This lets me wrap up the JSON into an ENV VAR. The world makes sense again.
As discussed in this thread, rather than supplying a path to a JSON key file, you can set three ENV variables instead:
GOOGLE_ACCOUNT_TYPE=service_account
GOOGLE_PRIVATE_KEY=XXX
GOOGLE_CLIENT_EMAIL=XXX
Source here.
I ran into this same problem using the Google API. I ended up using openssl to assign a new, very secret passphrase to the p12 file, storing that new file in the repo, and then putting the passphrase into the app secrets and Heroku env variables.
This way, the file is in the repo but it can't be accessed/read without the passphrase.
This post was helpful in changing the default Google p12 passphrase from 'notasecret' to something secure.
def authorize!
  @client.authorization = Signet::OAuth2::Client.new(
    # ...
    :signing_key => key
  )
end

def key
  Google::APIClient::KeyUtils.load_from_pkcs12(key_path, ENV.fetch('P12_PASSPHRASE'))
end

def key_path
  "#{Rails.root}/config/google_key.p12"
end
Using Rails 7, I encrypted the JSON credentials like so
I first ran bin/rails credentials:edit -e development
Then added my credentials:
omniauth:
  google_oauth2:
    client_secrets: {"web":{"client_id":"my-client-id","project_id":"my-project-id","auth_uri":"https://accounts.google.com/o/oauth2/auth","token_uri":"https://oauth2.googleapis.com/token","auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs","client_secret":"my-client-secret","redirect_uris":["http://localhost:3000/users/auth/google_oauth2/callback","http://localhost:3000/contacts/gmail/callback"]}}
Then I used it with ClientSecrets like so:
def client_secrets
  @client_secrets ||= Google::APIClient::ClientSecrets.new(client_secrets_json)
end

def client_secrets_json
  Rails.application.credentials.dig(:omniauth, :google_oauth2, :client_secrets)
end

Passing a config file to Google Cloud Function using GitHub Actions and GitHub Secrets

I have a config.py file on my development machine that stores multiple dictionaries for my config settings.
config.py:
SERVICE_ONE_CREDENTIALS = {
    "CLIENT_ID": 000,
    "CLIENT_SECRET": "abc123"
}
SERVICE_TWO_CREDENTIALS = {
    "VERIFY_TOKEN": "abc123"
}
...
I recently set up this GitHub Action to automatically deploy changes pushed to the repository to a Google Cloud Function, and ran into a problem when trying to copy this configuration file over, since the file is ignored by git because it stores sensitive credentials.
I've been trying to find a way to copy this file over to the Cloud Function but haven't been successful. I would prefer to stay away from using environment variables due to the number of keys there are. I did look into using key management services, but I first wanted to see if it would be possible to store the file in GitHub Secrets and pass it along to the function.
As a backup, I did consider encrypting the config file, adding it to the git repo, and storing the decryption key in GitHub Secrets. With that, I could decrypt the file in the Cloud Function before starting the app workflow. This doesn't seem like a great idea, though, but I would be interested to see if anyone has done this, or what your thoughts are on it.
Is something like this possible?
If you encrypt the file and put it in a repo, at least it's not clear text, and someone can't get to the secret without the private key (which of course you don't check in). I do something similar in my dotfiles repo, where I check in dat files with my secrets and the private key isn't checked in. That key would have to be a secret in Actions and written to disk to be used. It's a bit of machinery, but possible.
Using GitHub Secrets is a secure path because you don't check in anything; it's securely stored, and we pass it just-in-time when it's referenced. (Disclosure: I work on Actions.)
One consideration with that is that we redact secrets from the logs on the fly, but it's done one line at a time, so multiline secrets are not good.
So, a couple of options...
You can manage the actual secret (abc123) as a secret and echo the config file contents, with the secret substituted in, to a file. As you noted, you have to manage each secret separately. IMHO, that's not a big deal, since abc123 is actually the secret. I would probably lean toward that path.
Another option is to base64-encode the config file, store that as a secret in GitHub Actions, and echo the base64-decoded content to a file (see the sketch below). Don't worry, base64 isn't a security mechanism here. It's a transport to get the content onto a single line, and if it accidentally leaks into the logs (via the command line you run), the base64 version of it (which could easily be decoded) will be redacted from the logs.
There are likely other options, but I hope that helped.
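For the base64 option, a minimal sketch in Python (the secret name CONFIG_PY_B64 is a hypothetical choice, and the exact workflow wiring is up to you):
import base64
import os
import pathlib

# Run once locally: print a single-line base64 string to paste into a GitHub
# Actions secret (here assumed to be called CONFIG_PY_B64).
def encode_config(path="config.py"):
    print(base64.b64encode(pathlib.Path(path).read_bytes()).decode("ascii"))

# Run during deployment (or at function startup): rebuild config.py from the
# environment variable that the workflow maps the secret to.
def restore_config(path="config.py"):
    encoded = os.environ["CONFIG_PY_B64"]
    pathlib.Path(path).write_bytes(base64.b64decode(encoded))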

Upload multiple JSON files, but not all at once, on IPFS using Moralis

I want to create NFTs, so I want all the JSON files in one directory (/ipfs/CID/1.json, 2.json, 3.json), but I don't want to reveal them all instantly. That's why I will upload only the JSON files that I want to reveal instantly, and then add more JSON files to IPFS to reveal NFTs later.
I want to upload multiple JSON files to IPFS, but not all at once. Moralis uploadFolder works fine, but when I try to upload another .json file, its parent hash is different.
Example:
I upload 2 JSON files in the /json folder, and Moralis uploadFolder returns
/ipfs/CID/1.json
/ipfs/CID/2.json
In this case the CID is the same, which is what I want, but when I upload another file, 3.json, it returns a different CID:
/ipfs/NEW CID/3.json
We can’t use decentralized storage like IPFS, because the URIs in IPFS are hashes of unique pieces of content and we can’t have completely unique URIs for every token in ERC1155.
Refer:- https://rameerez.com/problems-and-technical-nuances-of-nft-immutability-and-ipfs/
You will have to send different requests. If you send the files in one request, Moralis will return the same base URL.
I faced the same issue and came up with a solution: I wrote a script which reads images and JSON from a file structure and uploads the images to IPFS.
You can see my code here:
https://github.com/RajaFaizanNazir/bulk_IPFS_pin_differentCID
If you face any issue or confusion, feel free to ask me.
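To see why the CID changes, here is a minimal sketch in Python against a local IPFS node's HTTP API (assumed to be running at 127.0.0.1:5001; this is not Moralis-specific code). All files sent in one request end up under a single wrapping directory and share its CID; re-running with an extra file yields a new directory CID.
import json
import requests

API_ADD = "http://127.0.0.1:5001/api/v0/add"  # assumes a local IPFS daemon

def add_as_directory(paths):
    files = [("file", (p.split("/")[-1], open(p, "rb"))) for p in paths]
    resp = requests.post(API_ADD, params={"wrap-with-directory": "true"}, files=files)
    resp.raise_for_status()
    entries = [json.loads(line) for line in resp.text.strip().splitlines()]
    # The wrapping directory is the entry whose Name is empty.
    return next(e["Hash"] for e in entries if e["Name"] == "")

print(add_as_directory(["json/1.json", "json/2.json"]))                  # directory CID A
print(add_as_directory(["json/1.json", "json/2.json", "json/3.json"]))   # a different CID B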

How to handle uploading html content to an AppEngine application?

I would like to allow my users to upload HTML content to my AppEngine web app. However, if I use the Blobstore to upload all the files (HTML files, CSS files, images, etc.), this causes a problem, as all the links to other files (pages, resources) will not work.
I see two possibilities, but both of them are not very pretty and I would like to avoid using them:
Go over all the links in the html files and change them to the relevant blob key.
Save a mapping between a file and a blob key, catch all the redirections and serve the blobs (could cause problems with same name files).
How can I solve this elegantly without having to go over and change my user's files?
Because App Engine runs your content on multiple servers, you are not able to write to the filesystem. What you could do is ask them to upload a zip file containing their HTML, CSS, JS, images, etc. The zipfile module from Python is available in App Engine, so you can unzip these files and store them individually. This way, you know the directory structure of the zip, which allows you to create a mapping of relative paths to the content in the blobstore. I don't have enough experience with zipfile to write a full example here; I hope someone more experienced can edit my answer, or create a new one with an example.
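A rough sketch of that idea in Python (names are hypothetical, and the App Engine upload handler and blob storage parts are left out; the values here are raw bytes where a real app would store blob keys or datastore entities):
import io
import zipfile

def index_zip(zip_bytes, group_id):
    # Map (upload group, relative path) -> file contents, so same-named files
    # from different users don't collide.
    mapping = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            if not info.is_dir():
                mapping[(group_id, info.filename)] = zf.read(info.filename)
    return mapping

def lookup(mapping, group_id, path):
    # Resolve a request like /site/<group_id>/css/style.css to the stored content.
    return mapping.get((group_id, path))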
Saving a mapping is the best option here. You'll need to identify a group of files in some way, since multiple users may upload a file with the same name, then associate unique pathnames with each file in that group. You can use key names to make it a simple datastore get to find the blob associated with a given path. No redirects are required - just use the standard Blobstore serving approach of setting the blobstore header to have App Engine serve the blob to the user.
Another option is to upload a zip, as Frederik suggests. There's no need to unpack and store the files individually, though - you can serve them directly out of the zip in blobstore, as this demo app does.

How can I add file locations to a database after they are uploaded using a Perl CGI script?

I have a CGI program I have written using Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: how can I get the uploaded pic files' locations and names added to the db?
I would rather do that than change the script to actually upload the pics into the db; I have heard horror stories about uploading binary files to databases.
Since I am new to all of this, I am at a loss. I have tried doing research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db.
I am using a Perl CGI script, a MySQL db, and a Linux server, and the files are being uploaded to the server. I am NOT looking to add the actual files to the db, just their location(s).
It sounds like you already have your method complete, where you take the upload, make it a string, and toss it into MySQL, similar to reading a file in as a string. However, since CGI gives you a filehandle rather than a filename to read, you are wondering where that file actually is.
If you're using CGI.pm, then upload, uploadInfo, the param for the upload, and the upload's private temp files will help you deal with the uploaded file's source. Where those files are stashed after the remote client and the CGI are done is usually not permanent, and at a minimum is volatile.
You've got a bunch of uploaded files that need to be added to the db? It should be trivial to dash off a one-off script to loop through all the files and insert the details into the DB (a sketch of this is below). If they're all in one spot, then a simple opendir()/readdir() type loop would catch them all; otherwise you can make a list of file paths and loop over that.
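A sketch of such a one-off back-fill, in Python with SQLite purely for illustration (the question's stack is Perl and MySQL, and the directory, database, and table names here are made up):
import os
import sqlite3

def backfill(upload_dir="/var/uploads", db="pics.db"):
    conn = sqlite3.connect(db)
    conn.execute("CREATE TABLE IF NOT EXISTS pics (id INTEGER PRIMARY KEY, name TEXT, path TEXT, size INTEGER)")
    with conn:  # one transaction for the whole back-fill
        for entry in os.scandir(upload_dir):
            if entry.is_file():
                conn.execute(
                    "INSERT INTO pics (name, path, size) VALUES (?, ?, ?)",
                    (entry.name, entry.path, entry.stat().st_size),
                )
    conn.close()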
If you're talking about recording new uploads on the server, then it would be something along these lines:
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes fine, commit the transaction
Using the ID as the filename solves the worries of filename collisions and new uploads overwriting previous ones. And if you store the uploads somewhere outside of the site's webroot, then the only access to the files will be via your scripts, providing you with complete control over downloads.
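And a sketch of the new-upload workflow above, again in Python with SQLite purely for illustration (the question's stack is Perl CGI and MySQL; the paths, table, and helper names are made up):
import os
import shutil
import sqlite3

UPLOAD_DIR = "/var/uploads"  # somewhere outside the site's webroot

def record_upload(tmp_path, original_name, mime_type):
    conn = sqlite3.connect("pics.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS uploads "
        "(id INTEGER PRIMARY KEY, original_name TEXT, mime_type TEXT, size INTEGER, path TEXT)"
    )
    try:
        with conn:  # start transaction; commits on success, rolls back on error
            cur = conn.execute(
                "INSERT INTO uploads (original_name, mime_type, size) VALUES (?, ?, ?)",
                (original_name, mime_type, os.path.getsize(tmp_path)),
            )
            new_id = cur.lastrowid                      # ID of the new record
            final_path = os.path.join(UPLOAD_DIR, str(new_id))
            shutil.move(tmp_path, final_path)           # ID as filename: no collisions
            conn.execute("UPDATE uploads SET path = ? WHERE id = ?", (final_path, new_id))
        return new_id
    finally:
        conn.close()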