I uploaded a PDF file to the decentralised IPFS network. My question: when I close the IPFS console, I can no longer view the PDF file through the https://ipfs.io gateway. Why is that? My understanding is that once a file is uploaded to the IPFS network, it is distributed to the other nodes.
Adding a file to IPFS via ipfs add <file> does not distribute it to the network (that would be free hosting!); it only puts the file into the standard format (IPLD) and makes it possible to access over the network (IPFS) as long as someone connected to the network has the file. When you first add something, that someone is only you. So if you close your laptop, the file suddenly becomes unavailable, unless someone else has downloaded it in the meantime, because then they can distribute it while your computer is off. There are many "pinning services" which do just that, for a small fee.
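For illustration, a minimal sketch of what "someone else has downloaded it" looks like in practice (the file name is made up; <CID> stands for the hash printed by ipfs add):

ipfs add whitepaper.pdf    # on your laptop: prints "added <CID> whitepaper.pdf"
ipfs pin add <CID>         # on a second, always-on node (or via a pinning service): fetches and pins the blocks

Once the second node has pinned the CID, the file stays retrievable even while your laptop is off.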
Hi, your understanding is correct, but can you tell me how you are uploading files to the IPFS network? There are a number of ways to add data to it.
If you are able to add data to IPFS you will get the hash of the data. The condition is that the daemon is running locally, so that your data can be broadcast to the other peers you are attached to; you can check your peers with the command: ipfs swarm peers
If the above conditions are fulfilled, you can view/get the data from https://ipfs.io/ipfs/<replace with the hash you get after adding>
If the daemon is not running you can still add your file and get the hash, but the file will only be saved locally; you won't be able to access it from the web.
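To make the steps above concrete, a rough example of the flow (the hash and file name are placeholders):

ipfs daemon          # keep this running in one terminal
ipfs add report.pdf  # prints "added <hash> report.pdf"
ipfs swarm peers     # should list at least a few peer multiaddresses

Then open https://ipfs.io/ipfs/<hash> in a browser; the gateway may take a little while to find your node the first time.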
Please let me know if you need any other information.
I figured out that pinned data is cached in the blocks folder. Can I just get away with copying the files I need from the blocks folder as a backup? Is the datastore folder generated automatically by the ipfs daemon?
I tried copying only the blocks folder to another ipfs daemon, and it recognized the .data files as pinned files and created a different datastore folder.
There are three stores used by Kubo (formerly go-ipfs):
Key store storing private/public keys for PeerID and IPNS
Data store
Block store
Each of these has a folder in the .ipfs directory used by Kubo (with the default configuration, which uses leveldb for the datastore and flatfs for the blockstore):
datastore folder: used by leveldb to store things like pins and MFS roots
blocks folder: where the blocks themselves are stored, including non-pinned blocks that are cached by your node
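On a default installation the repo layout looks roughly like this (folder names correspond to the stores listed above; exact contents vary with the Kubo version and configuration):

~/.ipfs/
  blocks/      # flatfs blockstore: the raw block data, pinned and cached
  datastore/   # leveldb datastore: pins, MFS roots and other node metadata
  keystore/    # the key store described above
  config       # the node's configuration file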
You could copy the blocks folder to another IPFS daemon with the same configuration. However, be aware that this may not be the best way to do it, especially if the node is running and modifying the blocks folder while you copy.
A much more explicit way would be to use the ipfs dag export <CID> command to export .car files.
.car files are convenient because they contain all the blocks and can be imported into another IPFS node.
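For example (the CID is a placeholder for one of your pinned roots):

ipfs pin ls --type=recursive        # list the root CIDs you have pinned
ipfs dag export <CID> > mydata.car  # write the CID and all blocks under it to a CAR file
ipfs dag import mydata.car          # on the other node: load the blocks back in
ipfs pin add <CID>                  # re-pin the root if your import did not already do so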
I have the hash of an IPFS file, but the node with this file is not working. I need to somehow restore this file.
Can I somehow restore the file from its hash?
You can download data from IPFS as long as there is at least one node providing it.
If the data was only on one node and you shut it down, you won't be able to get the data until that node comes back online, or until someone else with the same data adds it to IPFS and announces it to the DHT.
To get a list of nodes providing data for a specific hash:
ipfs dht findprovs QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
If you want to download data from IPFS but can't run a local node, try downloading it from one of the public gateways.
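For example, with the hash from above, you can try a public gateway directly in a browser (gateway availability varies):

https://ipfs.io/ipfs/QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
https://dweb.link/ipfs/QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR

If you do run a local node, ipfs get QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR will fetch the file as soon as at least one provider is reachable.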
I can download a file with ipfs get <hash>, but does that seed the file too? It also saves a copy of the file to the folder I run the command from, which I don't want. I only want the file to be chopped up and seeded from .ipfs.
Any files accessed through your gateway will be cached and available (seeding) to the rest of the p2p network as long as your daemon is running and publicly accessible.
You can run ipfs repo gc to clear your cache.
You may also add files to your local storage that won't be garbage collected. You can do this with ipfs pin add <hash>, and you can view the pinned items with ipfs pin ls.
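A sketch of the workflow you describe, i.e. seeding from the repo without keeping a separate copy on disk (the hash is a placeholder):

ipfs pin add <hash>   # fetches the blocks into ~/.ipfs/blocks and pins them; no file is written to the current folder
ipfs pin ls           # confirm the hash is pinned
ipfs repo gc          # later: removes only unpinned cached blocks; pinned data stays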
I made a dyno app on Heroku using node.js.
That dyno's task is to collect data and create a JSON file daily,
but I don't know how to download the file locally.
I tried
http://myappname.heroku.com/filename.json
but failed
Heroku is new for me, so please don't treat me like an advanced user.
You cannot do this.
If your code is writing a JSON file to the Heroku server daily, that file is lost whenever the dyno restarts, so there is no reliable way to download it.
Heroku dynos are ephemeral: anything you 'save' to the filesystem disappears when the dyno restarts, and dynos are cycled at least once a day. If you need to keep files, you should save them to a file service like Amazon S3, then download them from there.
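A rough sketch of the S3 approach in Node (the bucket name and region are placeholders, and this assumes the aws-sdk v2 package is in your dependencies with credentials provided via environment variables):

// sketch: upload the daily JSON to S3 instead of the dyno filesystem
const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: 'us-east-1' }); // placeholder region

async function saveDailyJson(data) {
  await s3.putObject({
    Bucket: 'my-app-daily-dumps',  // placeholder bucket name
    Key: `dump-${new Date().toISOString().slice(0, 10)}.json`,
    Body: JSON.stringify(data),
    ContentType: 'application/json',
  }).promise();
}

You can then download the file from the S3 console, with the AWS CLI, or via a signed URL.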
Save your JSON file to the /public folder.
Ensure that your app.js has the following:
app.use(express.static(__dirname + '/public'))
Now, you should be able to access:
http://myappname.heroku.com/filename.json
We are developing a service for our QA staff.
The main goal is that a tester, from our web interface, can select a GitHub branch and a dump for a particular machine, click the "Deploy" button, and the Rails app for testing is then deployed to DigitalOcean.
The feature I am now working on is collecting deployment logs and displaying them through our web interface.
On the DO droplet there is a "logs" folder which contains various log files that are populated during deployment:
migrations_result_#{machine_id}.log, bundle_result_#{machine_id}.log, etc.
where #{machine_id} is the id of the deployed machine in our service (it is not the droplet id).
With the help of the remote_syslog gem we monitor the "logs" folder on each droplet and send the entries over UDP to our main service server, and with the help of rsyslog we store them in a particular folder, let's say /var/log/deplogs/
So in /var/log/deplogs/ we have:
migrations_result_1.log, bundle_result_1.log,
migrations_result_2.log, bundle_result_2.log,
...
migrations_result_n.log, bundle_result_n.log
How should I monitor this folder and save the contents of each log file to a MySQL database?
I need to achieve something like the following (Ruby code):
Machine.find(#{machine_id}).logs.create!(text: "migrations_result_#{machine_id}.log contents")
Rsyslog does not seem to be able to achieve this, or am I missing something?
Any advice?
Thanks in advance, and sorry for my English; I hope you get the idea.
First of all, congratulations! You are facing a beautiful problem. My suggestion is to divide and conquer.
Here are my considerations:
Put the relevant folder(s) under version control (for example, Git).
Every X amount of time, check via Git commands which files changed.
Also obtain the differences between the prior version of each file and the new one, so you can update your database by parsing the new info (see the sketch below).
Just in case, here are ways to call system commands from Ruby.
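A very rough Ruby sketch of that loop, assuming the rsyslog target folder has been turned into a Git repository and reusing the Machine/logs models from your question (everything else, such as the LOG_DIR constant and the regexp, is made up for illustration):

LOG_DIR = "/var/log/deplogs"

# Stage everything and list the log files that changed since the last snapshot.
def changed_log_files
  Dir.chdir(LOG_DIR) do
    system("git add -A")
    `git diff --cached --name-only`.split("\n")
  end
end

# For each changed file, take the machine id from its name, keep only the
# newly added lines from the staged diff, and store them for that machine.
def import_changes!
  changed_log_files.each do |file|
    match = file.match(/_result_(\d+)\.log\z/) or next
    diff = Dir.chdir(LOG_DIR) { `git diff --cached -- #{file}` }
    new_lines = diff.lines.select { |l| l.start_with?("+") && !l.start_with?("+++") }
    new_lines = new_lines.map { |l| l.sub(/\A\+/, "") }
    Machine.find(match[1]).logs.create!(text: new_lines.join)
  end
  Dir.chdir(LOG_DIR) { system("git", "commit", "-m", "log snapshot") }  # record the new baseline
end

You would then run import_changes! every X minutes from cron or a scheduler of your choice.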
Hope that helps,