IPFS Companion - How to NOT load content from my node?

If files have been PINNED and I want to test them, I need to turn off IPFS Companion, because otherwise the files get served from my own node. I am wondering if I can configure it so that, by default, I am not redirected to my node in the browser?

I found it in the IPFS Companion config:
"Redirect requests for IPFS resources to your local gateway."
Simple; I just didn't realize the extension also has preferences. They are behind the cog wheel in the upper right corner of the popup.
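As far as I know, there is also a per-URL opt-out: appending the x-ipfs-companion-no-redirect hint to an address (as a hash or query parameter) tells IPFS Companion to leave that single request alone, e.g. (the CID is a placeholder):
https://ipfs.io/ipfs/<cid>#x-ipfs-companion-no-redirect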

Related

Why is it so hard for web browsers to open IPFS links?
For instance, this is an IPFS blog page link: https://ipfs.io/ipfs/bafybeic3y6oc2dai3uypyyuaggp4xx3krocpgzbwst2z4ha73jdh7y6nea/index.html
The page loads tremendously slowly in Safari and Chrome/Edge, and it gets stuck on Error 504 from time to time. Is there any way to browse IPFS pages smoothly, or is IPFS just inherently incapable of smooth browsing without third-party help?
You're loading the page through a gateway. So effectively, you're asking another (quite popular) IPFS node to fetch the info over IPFS, then serve it to you over HTTP. If the gateway is slowed down for any reason, all IPFS resources will seem slow to you. If a gateway is your only option for whatever reason, check out the IPFS Gateway Checker for a list of active ones.
Alternatively, you could run your own IPFS node via something like IPFS Desktop and connect it to IPFS Companion (Chrome | Firefox). IPFS Companion can be configured to redirect all IPFS gateway links to your own IPFS node, so you'll be limited only by how quickly your node can find and retrieve the data, which you'll likely find to be the superior IPFS experience.
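For what it's worth, a minimal sketch of that local-node setup with the Kubo (go-ipfs) CLI; the CID is the blog page from the question, and 127.0.0.1:8080 is the default local gateway address, so adjust if your config differs:
ipfs init     # one-time repo setup (skip if you already use IPFS Desktop)
ipfs daemon   # keep this running; it starts the local gateway on 127.0.0.1:8080 by default
curl "http://127.0.0.1:8080/ipfs/bafybeic3y6oc2dai3uypyyuaggp4xx3krocpgzbwst2z4ha73jdh7y6nea/index.html"
With IPFS Companion pointed at that node, public gateway links like the one above get redirected to it automatically.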
The Brave browser also includes an integrated IPFS node. They have an article about it here.

ipfs - How can I download and seed a file?

I can download a file with ipfs get <hash>, but does that seed the file too? It also saves a copy of the file to the folder I run the command from, which I don't want. I only want the file to be chopped up and seeded from .ipfs.
Any files accessed through your gateway will be cached and made available (i.e. seeded) to the rest of the p2p network for as long as your daemon is running and publicly accessible.
You can run ipfs repo gc to clear your cache.
You may also add files to your local storage that won't be garbage collected. You can do this with ipfs pin add <hash>, and you can view the pinned items with ipfs pin ls.
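For example, a hedged sketch of downloading and seeding without keeping a copy in your working directory (the hash is a placeholder):
ipfs pin add <hash>            # fetches the blocks into the local repo (~/.ipfs) and protects them from garbage collection
ipfs pin ls --type=recursive   # confirm the pin exists
ipfs repo gc                   # only removes cached blocks that are not pinned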

How to create a link in HTML that downloads that file

I have a server at http://192.168.230.237:20080.
The file is located at "/etc/Jay/log/jay.txt".
I tried "http://192.168.230.237:20080/etc/Jay/log/jay.txt", but this link gives me "404 Not Found".
How can I link to my file?
Your HTTP server will have a configuration option somewhere (Apache HTTPD calls it DocumentRoot) which determines where http://example.com/ maps onto the filesystem of the computer.
Commonly this will be /var/www/.
Unless you change it to / (which would expose your entire filesystem over HTTP and is very much not recommended), you can't access arbitrary files on the computer.
/etc/ is used to store configuration information for software installed on the computer. It should almost never be exposed outside the computer.
The best solution to your problem is probably:
Look at the configuration of your HTTP server and identify the document root (e.g. /var/www/)
Move your website files to that directory
If you really want to expose files under /etc via HTTP then you could also change the document root.
Your webserver might also support features like Apache HTTPD's Alias directive which allows you to map a URL onto a file that can be outside the DocumentRoot.
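For illustration, a hypothetical Apache HTTPD 2.4 snippet using Alias; the /logs/ URL prefix is made up, the filesystem path is the one from the question, and exposing anything under /etc should be a deliberate choice:
# map a URL outside the DocumentRoot onto the file on disk
Alias "/logs/jay.txt" "/etc/Jay/log/jay.txt"
<Directory "/etc/Jay/log">
    # Apache 2.4 syntax: allow clients to read this directory
    Require all granted
</Directory>
The HTML link would then be something like <a href="http://192.168.230.237:20080/logs/jay.txt" download>jay.txt</a>.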

IPFS file upload and view

I uploaded a pdf file to the decentralised IPFS network. My question: when I close the IPFS console, I can no longer view the pdf file through the https://ipfs.io gateway. Why is that? My understanding is that once the file is uploaded to the IPFS network, it is distributed to the nodes.
Adding a file to IPFS via ipfs add <file> does not distribute it to the network (that would be free hosting!); it only puts the file into the standard format (IPLD) and makes it possible to access it over the network (IPFS) as long as someone connected to the network has the file. When you first add something, that's only you. So if you close your laptop, the file is suddenly no longer available, UNLESS someone else has downloaded it since then, because then they can distribute it while your computer is off. There are many "pinning services" which do just that, for a small fee.
Hi, your understanding is correct, but can you tell me how you are uploading files to the IPFS network? There are a number of ways to add data to it.
If you are able to add data to IPFS, you will get the hash of the data. The condition is that the daemon is running locally so that your data can be broadcast to the other peers you are attached to; you can check this with the command ipfs swarm peers.
If the above conditions are fulfilled, you can view/get the data from https://ipfs.io/ipfs/<replace with hash you will get after adding>.
If the daemon is not running, you can still add your file and get the hash, but your files will only be saved locally; you won't be able to access them from the web.
Please let me know if you need other information.
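To make those steps concrete, a hedged walk-through with the Kubo CLI (the pdf filename is a placeholder):
ipfs daemon             # must stay running; otherwise only your machine holds the data and nobody can fetch it
ipfs add mydocument.pdf # prints something like: added <hash> mydocument.pdf
ipfs swarm peers        # should list the peers you are connected to
Once your connected peers can reach you, https://ipfs.io/ipfs/<hash printed by ipfs add> should resolve; the first request may be slow while the gateway locates your node.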

How do you invalidate cache of index.html for a static site hosted on S3 with cloudfront?

So I have hosted my Angular app on S3 with a CloudFront distribution. I do file revisioning (using grunt-filerev) to make sure that I never get stale content. But how should I version the index.html file? It's required because all the other files are referenced inside index.html.
I have configured my bucket to be used as a static site, so it just picks up index.html when I reference the bucket in the URL.
CloudFront says that you should set the min TTL to 0 so it always hits the origin to serve the content. But I don't need this, since I am doing file revisioning of all my files (except index.html); I can take advantage of CDN caching for those files.
They also say that in order to invalidate a single object, you should set the max-age headers to 0. I tried adding the following to my index.html:
<meta http-equiv="Cache-Control" content="public, must-revalidate, proxy-revalidate, max-age=0"/>
But this is not reflected once you upload to S3. Do I need to explicitly set headers on S3 using s3cmd or the dashboard? And do I need to do this every time index.html changes and I upload it?
I am aware that I could invalidate a single file using the CLI, but it's a repetitive process, and it would be great if it could take care of itself just by deploying to S3.
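For reference, this is the kind of explicit header-setting I am asking about; a hedged sketch with the AWS CLI, where the bucket name and local path are placeholders:
aws s3 cp ./dist/index.html s3://your-bucket/index.html \
  --cache-control 'no-cache, no-store, must-revalidate' \
  --content-type 'text/html'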
Although the accepted answer is correct if you are using s3cmd, I was using the AWS CLI, so what I did was the following 2 commands:
First, to actually deploy the code:
aws s3 sync ./ s3://bucket-name-here/ --delete
Then, to create an invalidation on CloudFront:
aws cloudfront create-invalidation --distribution-id <distribution-id> --paths /index.html
Answering my own question. I deploy my site to S3 using the s3cmd tool, and there is an option you can provide to invalidate the CloudFront cache of all the changed files (the diff between your dist folder and the S3 bucket). This invalidates the cache of all changed files, including the index file. It usually takes around 15-20 minutes for the new changes to show up in production.
Here is the command
s3cmd sync --acl-public --reduced-redundancy --delete-removed --cf-invalidate [your-distribution-folder]/* s3://[your-s3-bucket]
Note: On macOS, you can install this tool via: brew install s3cmd.
Hope this helps.
You can automate the process using Lambda. It allows you to create a function that performs certain actions (object invalidation, in your case) in response to certain events (a new file in S3).
More information here:
https://aws.amazon.com/documentation/lambda/
I have had the same problem with my static website hosted on S3 and distributed with CloudFront. In my case invalidating /index.html didn't work.
I talked with AWS support, and what I needed to do was invalidate with only /. This is because I am accessing my website with the https://website.com/ URL and not with https://website.com/index.html (which would have served the updated content with the /index.html invalidation). This was done in the AWS CloudFront console and not with the AWS CLI.
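If you prefer the CLI, the equivalent call should be along these lines (the distribution ID is a placeholder):
aws cloudfront create-invalidation --distribution-id <distribution-id> --paths "/" "/index.html"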
When you sync local directory with s3, you can do this:
aws s3 sync ./dist/ s3://your-bucket --delete
aws s3 cp \
  s3://your-bucket s3://your-bucket \
  --exclude 'index.html' --exclude 'robots.txt' \
  --cache-control 'max-age=604800' \
  --metadata-directive REPLACE --acl public-read \
  --recursive
The first command is just a normal sync; the second command makes S3 return a Cache-Control header for all the files except index.html and robots.txt.
Then your SPA can be fully cached (except index.html).
If you use s3cmd sync and utilize the --cf-invalidate option, you may have to also specify --cf-invalidate-default-index depending on your setup.
From the man page:
When using Custom Origin and S3 static website, invalidate the default index file.
This ensures that your index document (most likely index.html) is also invalidated; otherwise it is skipped by the invalidation regardless of whether the sync updated it.
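Putting it together, a hedged example of a sync that also invalidates the default index (the local folder and bucket names are placeholders):
s3cmd sync --acl-public --delete-removed --cf-invalidate --cf-invalidate-default-index ./dist/ s3://your-bucket/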