How to add a list of CIDs to an IPFS node - ipfs

I have a list of CIDs (4,000 of them) that I uploaded to nft.storage and now I wish to import them to my own IPFS node. What is the easiest way to do this in bulk? I checked the ipfs add command, but it seems like I can only add one file (not a CID) at a time.

You can add multiple CIDs simply by pinning them to your node. To quote the docs:
USAGE
  ipfs pin add <ipfs-path>... - Pin objects to local storage.

SYNOPSIS
  ipfs pin add [--recursive=false] [--progress] [--] <ipfs-path>...

ARGUMENTS
  <ipfs-path>... - Path to object(s) to be pinned.

OPTIONS
  -r, --recursive bool - Recursively pin the object linked to by the specified object(s). Default: true.
  --progress bool - Show progress.

DESCRIPTION
  Stores an IPFS object(s) from a given path locally to disk.
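
If the CIDs are collected in a plain text file, you can loop over them from the shell. A minimal sketch, assuming a file named cids.txt with one CID per line (the filename is just an example):

# pin every CID listed in cids.txt (one CID per line)
while read -r cid; do
  ipfs pin add "$cid"
done < cids.txt

# or, equivalently, with xargs (runs up to 4 pins in parallel)
xargs -n 1 -P 4 ipfs pin add < cids.txt

Note that your node has to fetch the blocks behind each CID from the network, so pinning 4,000 CIDs can take a while.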

Related

Passing a config file to Google Cloud Function using GitHub Actions and GitHub Secrets

I have a config.py file on my development machine that stores multiple dictionaries for my config settings.
config.py:
SERVICE_ONE_CREDENTIALS = {
    "CLIENT_ID": 000,
    "CLIENT_SECRET": "abc123"
}
SERVICE_TWO_CREDENTIALS = {
    "VERIFY_TOKEN": "abc123"
}
...
I recently set up a GitHub Action to automatically deploy changes pushed to the repository to a Google Cloud Function, and ran into a problem when trying to copy this configuration file over, since the file is ignored by git because it stores sensitive credentials.
I've been trying to find a way to copy this file over to the Cloud Function but haven't been successful. I would prefer to stay away from using environment variables due to the number of keys there are. I did look into using key management services, but I first wanted to see if it would be possible to store the file in GitHub Secrets and pass it along to the function.
As a backup, I did consider encrypting the config file, adding it to the git repo, and storing the decryption key in GitHub Secrets. With that, I could decrypt the file in the Cloud Function before starting the app workflow. This doesn't seem like a great idea, though, but I'd be interested to hear whether anyone has done this and what your thoughts are.
Is something like this possible?
If you encrypt the file and put it in the repo, at least it's not clear text and someone can't get to the secret without the private key (which of course you don't check in). I do something similar in my dotfiles repo, where I check in dat files with my secrets and the private key isn't checked in. The key would have to be a secret in Actions and written to disk to be used. It's a bit of machinery, but possible.
Using GitHub Secrets is a secure path because you don't check in anything; it's securely stored, and we pass it JIT if it's referenced. Disclosure: I work on Actions.
One consideration with that is that we redact secrets on the fly from the logs, but it's done one line at a time, so multiline secrets are not good.
So a couple of options ...
You can manage the actual secret (abc123) as a secret and echo the config file to disk with the secret filled in. As you noted, you have to manage each secret separately. IMHO, that's not a big deal, since abc123 is actually the secret. I would probably lean toward that path.
Another option is to base64-encode the config file, store that as a secret in GitHub Actions, and base64-decode it back into a file in the workflow. Don't worry, base64 isn't a security mechanism here. It's a transport to get the file onto a single line, and if it accidentally leaks into the logs (via the command line you run), the base64 version of it (which could easily be decoded) will be redacted from the logs.
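A minimal sketch of that base64 route, assuming the secret is named CONFIG_PY_B64 (the name is just an example):

# one-time, locally: encode config.py on a single line and paste the output
# into a GitHub Actions secret (here assumed to be called CONFIG_PY_B64)
base64 -w0 config.py   # -w0 disables line wrapping (GNU coreutils)

# in the workflow step, before deploying: decode the secret back into a file
# (the secret is exposed to the step as an environment variable)
echo "$CONFIG_PY_B64" | base64 -d > config.py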
There are likely other options, but I hope that helped.

Upload files under same CID to IPFS

I'm looking to create an NFT project with 10k pieces. Each piece should be made available as soon as its token is minted, so I want to upload the JSON object to IPFS under the same hash, as I've seen in other projects.
This means that when an item is minted, a new file will be uploaded to:
ipfs://<CID>/1
the second minting will create token 2, and then a new file will be uploaded to
ipfs://<CID>/2
How can this be done with ipfs or the Pinata API?
Wrap it into a .car file; see the Web3.Storage guide "How to Work With Car Files".
Update: I just reread the last part of the question.
I found this here (https://docs.pinata.cloud/api-pinning/pin-file) :
wrapWithDirectory - Wrap your content inside of a directory when adding to IPFS. This allows users to retrieve content via a filename instead of just a hash. For a more detailed explanation, see this informative blogpost. Valid options are: true or false
I'm pretty sure that you can do this with ipfs add /PATH/TO/CONTENT/* -w
I'm still exploring with IPFS, but this sounds like what you are looking for.
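For reference, a minimal sketch of the -w approach, assuming the JSON files live in a local metadata/ directory (the directory name is just an example):

# add every file in metadata/ and wrap them in a single IPFS directory (-w);
# the final line of output is the CID of the wrapping directory
ipfs add -w metadata/*

# each file is then addressable under that directory CID, e.g.
#   ipfs://<directory-CID>/1
#   ipfs://<directory-CID>/2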

Is it possible to upload properties file to aws parameter store and fetch one element in the file based on a key?

In spring-cloud-config, it is possible to configure a properties file and then fetch an element in the properties file using the key.
But in AWS Parameter Store, each key-value is stored as a separate entry. As I understand it, I need to take each key-value pair from the properties file and then configure it in Parameter Store.
In reality, each region (DEV, QA etc.) has a set of configuration files. Each configuration file has a set of properties in it. These files can be grouped based on the hierarchy support that parameter store provides.
SDLC REGION >> FILE >> KEY-VALUE ENTRIES
SDLC region can be supported by hierarchy. Key-value entries are supported by parameter store. How do we manage the FILE entity in parameter store?
You can use the path hierarchy and get parameters by path or by name prefix,
e.g.
/SDLC_REGION/FILE/param1
/SDLC_REGION/FILE/param2
/SDLC_REGION2/FILE/param1
/SDLC_REGION2/FILE/param2
Then you can get everything under a path, e.g. /SDLC_REGION/FILE, to fetch all of its parameters, as in the sketch below.
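A minimal sketch with the AWS CLI (parameter names and values below are placeholders):

# store individual key-value entries under a path hierarchy
aws ssm put-parameter --name /SDLC_REGION/FILE/param1 --value "value1" --type String
aws ssm put-parameter --name /SDLC_REGION/FILE/param2 --value "value2" --type SecureString

# fetch every parameter under the "file" level in one call
aws ssm get-parameters-by-path --path /SDLC_REGION/FILE --recursive --with-decryption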
Another option is to use tags.

Problems working with the ACL of a folder in Google Cloud Storage

I've created an object inside a folder in Google Cloud Storage with the following OptionsBuilder object:
GSFileOptionsBuilder optionsBuilder = new GSFileOptionsBuilder()
    .setBucket("bucket")
    .setKey("folder/obj.csv")
    .setMimeType("text/csv");
This makes the following structure:
bucket >> folder >> obj.csv
When I run the gsutil command to get the ACL for "bucket" and "obj.csv", it works fine; however, when I execute it for "folder" it throws this exception:
GSResponseError: status=404, code=NoSuchKey, reason=Not Found.
The exact command I run is: gsutil getacl gs://bucket/folder/ > acl.txt
How can I get and set permissions on a folder?
You can only retrieve the ACL of an object or a bucket. There is no such thing as a "folder" in GCS, so you can't set or get the ACL of a folder. I suggest you read the Concepts and Terminology section of the developer guide carefully. In particular, the section on object names:
Object names
An object name is just metadata to Google Cloud Storage. Object names
can contain any combination of Unicode characters (UTF-8 encoded) less
than 1024 bytes in length. A common character to include in file names
is a slash (/). By using slashes in an object name, you can make
objects appear as though they're stored in a hierarchical structure.
For example, you could name one object /europe/france/paris.jpg and
another object /europe/france/cannes.jpg. When you list these objects
they appear to be in a hierarchical directory structure based on
location; however, Google Cloud Storage sees the objects as
independent objects with no hierarchical relationship whatsoever.
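In practice, that means you can only ask for the ACL of the bucket or of the object itself, using the same getacl form as in the question:

# these work, because the bucket and the object actually exist
gsutil getacl gs://bucket > bucket-acl.txt
gsutil getacl gs://bucket/folder/obj.csv > obj-acl.txt

# this fails with NoSuchKey, because "folder/" is not an object
gsutil getacl gs://bucket/folder/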

tool to inspect mercurial's internal files

Git has the cat-file command to inspect internal files, e.g. git cat-file blob 557db03 will show the contents of the object whose hash starts with 557db03.
Are there similar tools for Mercurial that allow me to look at all the different data files that Mercurial uses internally?
Try hg --debug help and you can see the list of all the debug commands:
debugancestor: find the ancestor revision of two revisions in a given index
debugbuilddag: builds a repo with a given DAG from scratch in the current empty repo
debugbundle: lists the contents of a bundle
debugcheckstate: validate the correctness of the current dirstate
debugcommands: list all available commands and options
debugcomplete: returns the completion list associated with the given command
debugdag: format the changelog or an index DAG as a concise textual description
debugdata: dump the contents of a data file revision
debugdate: parse and display a date
debugdiscovery: runs the changeset discovery protocol in isolation
debugfileset: parse and apply a fileset specification
debugfsinfo: show information detected about current filesystem
debuggetbundle: retrieves a bundle from a repo
debugignore: display the combined ignore pattern
debugindex: dump the contents of an index file
debugindexdot: dump an index DAG as a graphviz dot file
debuginstall: test Mercurial installation
debugknown: test whether node ids are known to a repo
debugpushkey: access the pushkey key/value protocol
debugrebuildstate: rebuild the dirstate as it would look like for the given revision
debugrename: dump rename information
debugrevlog: show data and statistics about a revlog
debugrevspec: parse and apply a revision specification
debugsetparents: manually set the parents of the current working directory
debugstate: show the contents of the current dirstate
debugsub: (no help text available)
debugwalk: show how files match on given patterns
debugwireargs: (no help text available)
There are a lot of them, and they pretty much expose everything.
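For example, to poke at the revlog that backs a tracked file (a rough analogue of git cat-file), you could do something like the following; foo.txt is a hypothetical tracked file:

# dump the revision index of the revlog that stores foo.txt
hg debugindex foo.txt

# dump the raw contents of revision 0 of that file's revlog
hg debugdata foo.txt 0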
The closest commands would be:
hg cat -r rev aFile
hg cat: Print the specified files as they were at the given revision
This is not completely the same as git cat-file though, as the latter can also list SHA-1, type, and size for a list of objects.
In that second case, hg manifest might be more appropriate.
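For instance (REV and foo.txt are placeholders):

# list every file tracked in a given revision
hg manifest -r REV

# print foo.txt exactly as it was at that revision
hg cat -r REV foo.txt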