What is the right way to locally copy pinned data in ipfs and what is the use of datastore folder? - ipfs

I figured out that pinned data is cached in the blocks folder. Can I just get away with copying the files I need from the blocks folder as a backup? Is the datastore folder generated automatically by the ipfs daemon?
I tried copying only the blocks folder to another ipfs daemon, and it recognized the .data files as pinned files and created a different datastore folder.

There are three stores used by Kubo (formerly go-ipfs):
Key store: stores the private/public keys for the PeerID and for IPNS
Data store
Block store
Each of these has a folder in the .ipfs directory used by Kubo (with the default configuration, which uses leveldb for the datastore and flatfs for the blockstore):
datastore folder: used by leveldb to store things like pins and MFS roots
blocks folder: where blocks are stored; this includes non-pinned blocks that are cached by your node.
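For orientation, a default Kubo repository looks roughly like this (a sketch; exact contents vary by Kubo version and configuration):
ls ~/.ipfs
# blocks/         flatfs blockstore holding the raw block data
# datastore/      leveldb datastore (pins, MFS roots, and similar metadata)
# keystore/       key store for the PeerID/IPNS keys
# config, datastore_spec, version: repo-level files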
You could copy the blocks folder to another IPFS daemon with the same configuration. However, be aware that this may not be the best approach, especially if the node is running and modifying the blocks folder.
A much more explicit way is to use the ipfs dag export <CID> command to export .car files.
.car files are convenient because they contain all the blocks inside and can be imported into another IPFS node.
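A minimal backup/restore round trip might look like this (a sketch; <CID> is a placeholder for the root you want to export):
# on the source node: export the DAG rooted at <CID> into a CAR file
ipfs dag export <CID> > backup.car
# on the destination node: import the CAR (roots are pinned by default)
ipfs dag import backup.car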

Related

Is it possible to restore an ipfs file by its hash?

I have the hash of an ipfs file, but the node with this file is not working. I need to somehow restore this file.
Can I somehow restore the file from its hash?
You can download data from IPFS as long as there is at least one node providing it.
If the data was only on one node and you shut it down, you won't be able to get the data until the node comes online again or someone else with the same data adds it to IPFS and announces it to the DHT.
To get a list of nodes providing the data for a specific hash:
ipfs dht findprovs QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
If you want to download data from IPFS but can't run a local node, try downloading it from one of the public gateways.
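For example, reusing the hash above (a sketch; any public gateway works):
# fetch the data through a public HTTP gateway, no local node required
curl -L -o myfile https://ipfs.io/ipfs/QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR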

ipfs - How can I download and seed a file?

I can download a file with ipfs get <hash>, but does that seed the file too? It also downloads a copy of the file and saves it to the folder I run the command from, which I don't want. I only want the file to be chopped up and seeded from .ipfs.
Any files accessed through your gateway will be cached and available (seeded) to the rest of the p2p network as long as your daemon is running and publicly accessible.
You can run ipfs repo gc to clear your cache.
You may also add files to your local storage so that they won't be garbage collected. You can do this with ipfs pin add <hash>, and you can view the pinned items with ipfs pin ls, as sketched below.
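A short sketch of that workflow (<hash> is a placeholder):
# pin a hash so garbage collection keeps it
ipfs pin add <hash>
# list the pinned items
ipfs pin ls
# remove everything that is cached but not pinned
ipfs repo gc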

IPFS file upload and view

I uploaded a pdf file to the IPFS decentralised network. My question: when I shut down the IPFS console, I couldn't view the pdf file anymore through the https://ipfs.io/ gateway. Why is that? My understanding was that once the file is uploaded to the IPFS network, it would be distributed to the nodes.
Adding a file to IPFS via ipfs add <file> does not distribute it to the network (that would be free hosting!). It only puts the file into the standard format (IPLD) and makes it possible to access it over the network (IPFS) as long as someone connected to the network has the file. When you first add something, that's only you. So if you close your laptop, the file is suddenly no longer available, UNLESS someone else has downloaded it since then, because then they can distribute it while your computer is off. There are many "pinning services" which do just that, for a small fee.
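Kubo can talk to such services through its remote-pinning commands; a minimal sketch, where the service name, endpoint, and key are placeholders for your provider's real values:
# register a remote pinning service (endpoint and key are placeholders)
ipfs pin remote service add mysvc https://pinning.example.com/api <YOUR_KEY>
# ask the service to keep a copy of a CID pinned on your behalf
ipfs pin remote add --service=mysvc <CID>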
Hi, your understanding is correct, but can you tell me how you are uploading files to the ipfs network? There are a number of ways to add data to it.
If you are able to add data to ipfs, you will get the hash of the data. The condition is that the daemon is running locally so that your data can be broadcast to the peers you are attached to; you can check them with the command: ipfs swarm peers
If the above conditions are fulfilled, you can view/get the data from https://ipfs.io/ipfs/<replace with hash you will get after adding>
If the daemon is not running, you will still be able to add your file and get the hash, but your files will only be saved locally; you won't be able to access them from the web.
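Putting those steps together (a sketch; the file name is hypothetical):
# with the daemon running in another terminal (ipfs daemon)
ipfs add mydoc.pdf     # prints the hash (CID) of the added file
ipfs swarm peers       # confirm you are connected to other peers
# then open https://ipfs.io/ipfs/<the hash printed by ipfs add>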
Please let me know if you need other information.

CSS is not working after uploading the file to the server with FileZilla

CSS is not working after uploading the file to the server with FileZilla. I added file permission 755 on the server, and even after that it's not working.
I have changed /web/assets/d01711d6/css/bootstrap.css.
This may depend on several factors.
Cache
If the application is hosted, it may be a problem of server caching, so the new CSS file is not read until the cache expires (sometimes several days). In some cases the provider offers a configuration for temporarily disabling the cache so that the files are updated promptly.
Asset Management
Another factor is that the directory holding content assets is generated dynamically, so the name of the directory in the development environment and on the production server do not always match. In these cases you have to find the actual directory used by the server and replace the file in the right place.
If, as is good practice, the changes to the css file were made in the original directory and not in the copy created dynamically by the asset management, one usually proceeds by deleting the directory containing the assets of interest; on the first subsequent invocation of the application (URL/link), a new assets directory is created for these files, as sketched below.
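With an assets layout like the one in the question, that usually amounts to something like this (a sketch; the path is inferred from the question and may differ on your server):
# delete the dynamically generated asset folders; the framework will
# regenerate them (with the updated bootstrap.css) on the next request
rm -rf /path/to/web/assets/*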

Openshift: where to put resource files that I want outside of the deployment folder

I'm starting a new web app with Openshift (jboss, mysql). It's the first time I've used openshift, and after reading through some docs and experimenting a bit with it, I have one question regarding best practices for the architecture of my app.
There will be some files generated by, or uploaded to, the application (resources). I'd like those files to be outside the deployment folder so they are not erased/overwritten when the app deploys again. I have browsed through the directories and I was wondering:
is it ok to use the /var/lib/openshift/[openshift-id]/app-root/data folder for these files?
Yes, you should use your ~/app-root/data folder for any files that you don't want erased when you do a git push. There is also an environment variable that points to that folder, called OPENSHIFT_DATA_DIR. Please note that if you are using a scaled application, that folder is not shared among your gears.
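For example, from an SSH session on the gear (a sketch; the file and folder names are hypothetical):
# create a folder that survives deployments
mkdir -p "$OPENSHIFT_DATA_DIR/uploads"
# have the app read and write its resources there instead of the deployment folder
cp generated-report.pdf "$OPENSHIFT_DATA_DIR/uploads/"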