I have the hash of an IPFS file, but the node that had the file is no longer running. I need to somehow restore this file.
Can I somehow restore the file from its hash?
You can download data from IPFS as long as there is at least one node providing it.
If the data was only on one node and you shut that node down, you won't be able to get the data until the node comes back online, or until someone else with the same data adds it to IPFS and announces it to the DHT.
To get a list of nodes providing the data for a specific hash:
ipfs dht findprovs QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
If you want to download data from IPFS but can't run a local node, try downloading it from one of the public gateways.
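For example, assuming the hash from above, the public ipfs.io gateway serves it over plain HTTP (a sketch; curl is just one way to fetch it):
curl -O https://ipfs.io/ipfs/QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR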
I figured out that pinned data is cached in the blocks folder. Can I get away with just backing up the files I need from the blocks folder? Is the datastore folder generated automatically by the ipfs daemon?
I tried copying only the blocks folder to another ipfs daemon, and it recognized the .data files as pinned files and created a different datastore folder.
There are three stores used by Kubo (formerly go-ipfs):
Key store storing private/public keys for PeerID and IPNS
Data store
Block store
These have corresponding folders in the .ipfs directory used by Kubo (with the default configuration, which uses leveldb for the data store and flatfs for the block store).
datastore folder: used by leveldb to store things like pins and MFS roots
blocks folder: where blocks are stored, including non-pinned blocks that are cached by your node
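On a default install, the layout looks roughly like this (a sketch; exact contents can vary with configuration):
~/.ipfs/
├── blocks/      # flatfs block store (the .data files)
├── datastore/   # leveldb data store (pins, MFS roots)
├── keystore/    # key store (PeerID and IPNS keys)
└── config       # node configuration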
You could copy the blocks folder to another IPFS daemon with the same configuration. However, be aware that this may not be the best way to do it, especially if the node is running and modifying the blocks folder at the same time.
A much more explicit way would be to use the ipfs dag export <CID> command to export .car files.
.car files are convenient because they can be imported into another IPFS node and contain all the blocks inside them.
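For example, a minimal sketch with a placeholder CID (ipfs dag export writes the .car to stdout):
ipfs dag export <CID> > backup.car
ipfs dag import backup.car    # run this on the other node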
There are two concepts in IPFS whose relationship is not very clear to me: IPFS pin and IPFS MFS.
As I understand it, ipfs pin allows you to keep content on your node, protecting it from being automatically removed by the garbage collector. In particular, if I add content myself using ipfs add <file>, it will be automatically pinned, and it can then be unpinned and removed only manually.
IPFS MFS, on the other hand, allows objects to be manipulated as if they were in the file system. For example, I can copy a specific external object to MFS using ipfs files cp <id> <name>. After that, I can find out its ID using ipfs files stat <name>.
The questions are:
Are the files in MFS protected from being removed by garbage collector?
If protected, then why are they not displayed in ipfs pin ls?
Will the data be saved if I add it using ipfs add <file>, then add it to MFS using ipfs files cp <id> <name>, and then unpin it using ipfs pin rm <id>?
Is IPFS MFS a more reliable way to work with data?
These are pretty good questions! Answering them separately:
Are the files in MFS protected from being removed by garbage collector?
They are not pinned by default. You will need to pin those files as well if you want them tracked by the pinner. You can do an ipfs files stat /somePath, get the hash, and then pin that hash.
The part where it gets confusing is that GC does a "best effort" pinning, in which files reachable from the root of the MFS DAG will not be garbage-collected either.
Example:
You add a file to MFS
You make a modification to that file on MFS
The previous version will get GC'ed
The latest version will be protected from GC
If you want to protect the previous, you can use the Pin API.
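A minimal sketch of that workflow (the MFS path /somePath is the example path from above; assumes your ipfs build supports the --hash flag on files stat, which prints only the CID):
ipfs pin add $(ipfs files stat --hash /somePath)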
If protected, then why are they not displayed in ipfs pin ls?
As answered in 1., you will need to pin them manually to see them tracked by the pinning system.
Will the data be saved if I add it using ipfs add <file>, then add it to MFS using ipfs files cp <id> <name>, and then unpin it using ipfs pin rm <id>?
Perhaps you get the gist by now. To clarify:
Pinning is protection from garbage collection (GC): if something is pinned, GC won't delete it
MFS doesn't auto-pin files. GC just tries to be friends with MFS and won't collect files that are reachable from the root of MFS.
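So for the exact scenario in the question, a sketch of what happens (the file name and CID are placeholders):
ipfs add file.txt                      # adds the file and pins it recursively
ipfs files cp /ipfs/<cid> /file.txt    # reference it from MFS
ipfs pin rm <cid>                      # drop the explicit pin
ipfs repo gc                           # the blocks survive: still reachable from the MFS root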
Is IPFS MFS a more reliable way to work with data?
It is a more familiar way, as you get regular directory structures and a Unix-like API to operate on files. It handles the graph manipulations for you.
I can download a file with ipfs get <hash>, but does that seed the file too? It also downloads a copy of the file and saves it to the folder I run the command from, which I don't want. I only want the file to be chopped up and seeded from .ipfs.
Any files accessed through your gateway will be cached and available (seeding) to the rest of the p2p network as long as your daemon is running and publicly accessible.
You can run ipfs repo gc to clear your cache.
You may also add files to your local storage that won't be garbage collected. You can do this with ipfs pin add <hash>, and you can view the pinned items with ipfs pin ls.
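For example, to keep a downloaded item around through garbage collection (the hash is a placeholder):
ipfs get <hash>        # fetch it (the blocks are also cached locally)
ipfs pin add <hash>    # protect the cached blocks from ipfs repo gc
ipfs pin ls            # confirm it is listed as pinned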
I uploaded a pdf file to the IPFS decentralised network. My question here: when I close the IPFS console, I can't view the pdf file anymore through the https://ipfs.io gateway. Why is that? My understanding is that once the file is uploaded to the IPFS network, the file will be distributed to the nodes.
Adding a file to IPFS via ipfs add <file> does not distribute it to the network (that would be free hosting!), it only puts the file into the standard format (IPLD) and makes it possible to access over the network (IPFS) as long as someone connected to the network has the file. When you first add something, that's only you. So if you close your laptop, suddenly the file is no longer available. UNLESS someone else has downloaded it since then, because then they can distribute it while your computer is off. There are many "pinning services" which do just that, for a small fee.
Hi, your understanding is correct, but can you tell me how you are uploading files to the IPFS network? There are a number of ways to add data to it.
If you are able to add data to IPFS, you will get the hash of the data. The condition is that the daemon is running locally, so that your data can be broadcast to the other peers you are attached to. You can check this with the command: ipfs swarm peers
If the above conditions are fulfilled, you can view/get the data from https://ipfs.io/ipfs/<replace with hash you will get after adding>
If the daemon is not running, you will still be able to add your file and get the hash, but your files will only be saved locally; you won't be able to access them from the web.
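A minimal end-to-end sketch (the file name is illustrative):
ipfs daemon &        # keep the node online so peers can fetch from it
ipfs add mydoc.pdf   # prints: added <hash> mydoc.pdf
ipfs swarm peers     # verify you are connected to other peers
Then open https://ipfs.io/ipfs/<hash> in a browser.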
Please let me know if you need any other information.
After I add a file to ipfs using ipfs add hello, how do I retrieve the hash for the file if I lose it?
I guess I'm expecting ipfs info <filepath> or something similar?
If you want to see what the hash of a file would be, without actually uploading it to IPFS, you can run ipfs add --only-hash, or ipfs add -n for short.
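For example (the hash in the output is a placeholder; --only-hash computes it without adding anything to the repo):
$ ipfs add --only-hash hello
added <hash> hello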
Just run ipfs add hello again...
Make sure ipfs daemon is running before proceeding.
ipfs cat theHashOfTheItem in the command line will print out the raw data/text of the block corresponding to that particular hash (ipfs get, by contrast, saves it to a file).
To get a list of objects hosted on your computer along with their hashes, you can run the daemon using ipfs daemon, then open http://localhost:5001/webui and check under Files.
I remember reading a way to get the list of hashes via the command line, but I can't seem to remember it. Once I get it, I shall post the details about that here as well.
Not ideal, but checking the information each hash stores, either by using the command shared above or by clicking on the files themselves within the browser, should let you find the hash you are looking for.
Since you've added the file/folder, it will be pinned to your ipfs repo. Run the command
ipfs pin ls
This will list all the objects for the files/folders pinned to your repo.
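Typical output looks something like this (the CIDs are placeholders):
$ ipfs pin ls
<cid1> recursive
<cid2> indirect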
Spent an hour on this, and it turns out you can do a simple ipfs files stat /path/to/object to get output like:
$ ipfs files stat /folder-2/text.txt
QmcNsPV7QZFHKb2DNn8GWsU5dtd8zH5DNRa31geC63ceb4
Size: 14
CumulativeSize: 72
ChildBlocks: 1
Type: file
If you want to script this, do a simple | head -n 1 to get the hash.
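Alternatively, ipfs files stat also supports a --hash flag that prints only the hash, which avoids the head pipe entirely:
ipfs files stat --hash /folder-2/text.txt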