How do I retrieve the hash of an IPFS object? - ipfs

After I add a file to IPFS using ipfs add hello, how do I retrieve the hash for the file if I lose it?
I guess I'm expecting ipfs info <filepath> or something similar?

If you want to see what the hash of a file would be, without actually uploading it to IPFS, you can run ipfs add --only-hash, or ipfs add -n for short.
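For example (the filename is the one from the question; the resulting hash is shown as a placeholder):
$ ipfs add --only-hash hello
added <hash> hello
This prints the hash without writing anything to your local repo or announcing it to the network.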

just run ipfs add hello again...
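Because the hash is derived purely from the content, re-adding the same unchanged file produces the same hash (hashes shown as placeholders):
$ ipfs add hello
added <hash> hello
$ ipfs add hello
added <hash> hello   # identical, since the content is identical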

Make sure ipfs daemon is running before proceeding.
Running ipfs cat theHashOfTheItem on the command line will print out the raw data/text of the block corresponding to that particular hash (ipfs get theHashOfTheItem would instead save it to a local file).
To get a list of objects with the hash that is hosted on your computer, you may run the daemon using ipfs daemon, then go to http://localhost:5001/webui and check under Files.
I remember reading a way to get the list of hashes via the command line, but I can't seem to remember it. Once I get it, I shall post the details about that here as well.
Not ideal, but checking the information each hash stores, either by using the command shared above or by clicking on the files themselves in the browser, should let you find the hash you are looking for.
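If the command-line listing mentioned above is what you are after, one candidate (my assumption, not necessarily what was originally meant) is ipfs refs local, which prints the hashes of all blocks stored in your local repo:
ipfs refs local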

Since you've added the file/folder, it will be pinned to your ipfs repo. Run the command
ipfs pin ls
This will list all the objects of the file/folder pinned to your repo.
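The output looks roughly like this (CIDs shown as placeholders):
$ ipfs pin ls
<cid-of-your-content>   recursive
<cid-of-a-child-block>  indirect
Recursive pins are the objects you added yourself; indirect pins are blocks referenced by them.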

Spent an hour doing this and turns out you can do a simple ipfs files stat /path/to/object to get an output like:
$ ipfs files stat /folder-2/text.txt
QmcNsPV7QZFHKb2DNn8GWsU5dtd8zH5DNRa31geC63ceb4
Size: 14
CumulativeSize: 72
ChildBlocks: 1
Type: file
If you want to script this, do a simple | head -n 1 to get the hash.
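A minimal scripting sketch based on the output above (the path is the example path from the answer):
HASH=$(ipfs files stat /folder-2/text.txt | head -n 1)
echo "$HASH"
Alternatively, ipfs files stat --hash /folder-2/text.txt prints only the hash, so the head is not needed.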

Related

How can I find out if a given content can be found in IPFS?

I want to see if a given string ("hello" in this case) is accessible via IPFS. I am trying this:
echo hello | ipfs add -nq | ipfs cat
I am getting this:
Error: lock /root/.ipfs/repo.lock: someone else has the lock
What's going on? Is there a global lock, or is IPFS just protecting itself from a race condition that could have happened if I hadn't specified the -n flag?
This seems to work:
HASH=`echo hithere | ipfs add -nq` && ipfs cat $HASH
However, I wonder whether there is a more idiomatic way. This must be a fairly standard thing to do.
What's going on?
Your daemon is not running, so both ipfs add and ipfs cat try to start an offline node at the same time, which doesn't work (only one process per IPFS_PATH can hold the repo lock).
You need to run ipfs daemon in the background first; then the commands will just hit the daemon's API instead of starting their own node.
This seems to work:
HASH=`echo hithere | ipfs add -nq` && ipfs cat $HASH
Yes, because you are using &&, which runs the commands sequentially, so only one of them touches the repo at a time.
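A hedged sketch of the daemon-backed workflow (the xargs step is my addition, since ipfs cat expects the hash as an argument rather than on stdin):
ipfs daemon &                                # start the daemon in the background once
echo hello | ipfs add -nq | xargs ipfs cat
With the daemon running, both commands talk to its API and no longer compete for the repo lock.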

How can one list all of the currently pinned files for an IPFS instance?

According to https://docs.ipfs.io/guides/concepts/pinning/ , running the command ipfs add hello.txt apparently "pins" the file "hello.txt", yet why don't I see the file listed afterwards when I run the command ipfs files ls? It only lists files I added with the IPFS desktop app. Why is "hello.txt" not in the list now?
Also, I found a list of so-called "pinned" objects by running the command ipfs pin ls; however, none of the CIDs that show up there correspond to "hello.txt", or even to any of the previously mentioned files added using the IPFS desktop app.
How does one actually manage pinned files?
cool to see some questions about IPFS pop up here! :)
So, there are two different things:
Pins
Files/Folders (Called MFS)
They overlap heavily, but it's best to think of MFS as basically a locally alterable filesystem that maps 'objects' to files and folders.
You have a root ( / ) in your local IPFS client, where you can put files and folders.
For example you can add a folder recursively:
ipfs add -r /path/to/folder (-r is short for --recursive)
You get a CID (content ID) back. This content ID represents the folder, all its files and all the file structure as a non-modifiable data structure.
This folder can be mapped to a name in your local root:
ipfs files cp /ipfs/<CID> /<foldername>
An ipfs files ls will now show this folder by name, while an ipfs pin ls --type=recursive will show the content ID as pinned.
If you use the (Web)GUI, files will show up under the 'files' tab, while the pins show up under the 'pins' tab.
Just a side note: you don't have to pin a file or folder stored in your MFS; everything stored there is kept on your node as long as it is reachable from the MFS root.
If you go on to change the folders, subfolders, files, etc. in your MFS, the folder will get a different content ID, and your pin will still make sure the old version is held on your client.
So if you add another file to your folder, by something like cat /path/to/file | ipfs files write --create /folder/<newfilename>, the CID of your folder will be different.
Compare the output of ipfs files stat --hash /folder before and after the change.
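A worked sequence illustrating the point (paths are hypothetical, CIDs are placeholders):
ipfs add -r /path/to/folder              # prints: added <CID1> folder
ipfs files cp /ipfs/<CID1> /folder       # map the folder into MFS
ipfs files stat --hash /folder           # prints <CID1>
cat /path/to/file | ipfs files write --create /folder/newfile.txt
ipfs files stat --hash /folder           # now prints a different CID
The pin on <CID1> still keeps the old version around, while MFS now points at the new one.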
Hope I didn't fully confuse you :D
Best regards
Ruben
Answer: ipfs pin ls --type recursive
It's simple. Just run that command.
Some further notes: the type can be "direct", "recursive", "indirect", and "all". I ran these commands with these results ("Error: context canceled" means that I canceled the command with ctrl+c):
ipfs pin ls --type all - took too long, "Error: context canceled"
ipfs pin ls --type direct - took too long, "Error: context canceled"
ipfs pin ls --type indirect - took too long, "Error: context canceled"
ipfs pin ls --type recursive - worked, showed multiple, probably all, pins of mine
I don't really know what types other than recursive mean. You can read about them from the output of this command: ipfs pin ls --help.

Is it possible to restore an IPFS file by hash?

I have the hash of an IPFS file, but the node with this file is not working. I need to somehow restore this file.
Can I somehow restore the file from its hash?
You can download data from IPFS as long as there is at least one node providing it.
If the data was only at one node, and you shut it down, you won't be able to get the data until the node gets online again or someone else with the same data adds it to IPFS and announces it to DHT.
To get a list of nodes providing data for a specific hash:
ipfs dht findprovs QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
If you want to download data from IPFS but can't run a local node, try downloading it from one of the public gateways.
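For example, to fetch the CID from the command above through a public gateway (the output filename is arbitrary):
curl -o restored-file https://ipfs.io/ipfs/QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
Any public gateway exposing the standard /ipfs/<hash> URL pattern works the same way.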

What is the connection between ipfs pin and MFS?

There are two concepts in IPFS, the connection of which is not very clear to me: IPFS pin and IPFS MFS.
As I understand it, ipfs pin allows you to keep content on your node, protecting it from being automatically removed by the garbage collector. In this case, if I add content myself using ipfs add <file>, it will be automatically pinned, and it can only be unpinned and removed manually.
IPFS MFS, on the other hand, allows objects to be manipulated as if they were in the file system. For example, I can copy a specific external object to MFS using ipfs files cp <id> <name>. After that, I can find out its ID using ipfs files stat <name>.
The questions are:
Are the files in MFS protected from being removed by garbage collector?
If protected, then why are they not displayed in ipfs pin ls?
Will the data be saved if I add it using ipfs add <file>, then add it to MFS using ipfs files cp <id> <name>, and then unpin it using ipfs pin rm <id>?
Is IPFS MFS a more reliable way to work with data?
These are pretty good questions! Answering them separately:
Are the files in MFS protected from being removed by garbage collector?
They are not pinned by default. You will need to pin those files as well if you want them tracked by the pinner. You can do an ipfs files stat /somePath, get the hash, and then pin that hash.
The part where it gets confusing is that GC does a "best effort" protection, in which files that are reachable from the root of the MFS DAG will not be GC'd either.
Example:
You add a file to MFS
You make a modification to that file on MFS
The previous version will get GC'ed
The latest version will be protected from GC
If you want to protect the previous, you can use the Pin API.
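A minimal sketch of that pin workflow, using the hypothetical /somePath from above:
CID=$(ipfs files stat --hash /somePath)   # get the current CID of the MFS entry
ipfs pin add "$CID"                       # pin it so GC can never remove this version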
If protected, then why are they not displayed in ipfs pin ls?
As answered on 1., you will need to pin them manually to see it being tracked by the pinning system.
Will the data be saved if I add it using ipfs add <file>, then add it to MFS using ipfs files cp <id> <name>, and then unpin it using ipfs pin rm <id>?
Perhaps you get the gist by now. To clarify:
Pinning is protection from garbage collection (GC). If pinned, GC won't delete it.
MFS doesn't auto pin files. GC just tries to be friends with MFS and not GC files that are reachable by the root of MFS.
Is IPFS MFS a more reliable way to work with data?
It is a more familiar way, as you get the regular directory structures and a Unix-like API to operate over files. It handles the graph manipulations for you.
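A short sketch of that Unix-like API (the paths and filename are hypothetical; <CID> is a placeholder):
ipfs files mkdir /docs                       # create a directory in MFS
ipfs files cp /ipfs/<CID> /docs/report.pdf   # map existing content into it
ipfs files ls /docs                          # list it like a normal directory
ipfs files stat --hash /docs/report.pdf      # read back the underlying CID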

IPFS file upload and view

I uploaded a pdf file to the IPFS decentralised network. My question: when I close the IPFS console, I can no longer view the pdf file through the https://ipfs.io gateway. Why is that? My understanding is that once the file is uploaded to the IPFS network, it will be distributed to the nodes.
Adding a file to IPFS via ipfs add <file> does not distribute it to the network (that would be free hosting!), it only puts the file into the standard format (IPLD) and makes it possible to access over the network (IPFS) as long as someone connected to the network has the file. When you first add something, that's only you. So if you close your laptop, suddenly the file is no longer available. UNLESS someone else has downloaded it since then, because then they can distribute it while your computer is off. There are many "pinning services" which do just that, for a small fee.
Hi, your understanding is correct, but can you tell me how you are uploading files to the IPFS network? There are a number of ways to add data to it.
If you are able to add data to IPFS, you will get the hash of the data. The condition is that the daemon is running locally, so that your data can be broadcast to the other peers you are attached to; you can check this with the command: ipfs swarm peers
If the above conditions are fulfilled, you can view/get the data from https://ipfs.io/ipfs/<replace with hash you will get after adding>
If the daemon is not running, you can still add your file and get the hash, but your files will only be saved locally and you won't be able to access them from the web.
Please let me know if you need other information.
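A hedged end-to-end sketch of the workflow described above (the filename is hypothetical; the hash is a placeholder):
ipfs daemon &             # keep the node online so peers can fetch your file
ipfs add document.pdf     # prints: added <hash> document.pdf
ipfs swarm peers          # confirm you are connected to other peers
Then open https://ipfs.io/ipfs/<hash> in a browser, and keep your node running until at least one other node (or a pinning service) has fetched and pinned the file.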