My IPFS node doesn't seem to be storing data, why is that? I want to contribute to the network

My IPFS node has been running all day and it's only storing 6 MiB according to the web UI. How come?
I've set the Gateway address in the config like so:
"Gateway": "/ip6/2a01:e34:ecb8:d540:dacb:8aff:fee4:74a0/tcp/8081"
I started the daemon with ipfs daemon --writable, and it says it's ready.
The UI says it has discovered about 900 nodes, so why is mine not participating more actively in the network?

Other nodes can't store content on your node automatically. There is a public IPFS gateway list somewhere; you can submit your node to it.
You can also run a DHT server:
ipfs config Routing.Type dhtserver
More information here: https://github.com/ipfs/go-ipfs/blob/master/docs/config.md
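For example, a minimal sketch of contributing storage by pinning content you want to help host (<cid> is a placeholder for a real content hash):
# Make the node act as a DHT server, answering routing queries from other peers
ipfs config Routing.Type dhtserver
# Restart the daemon so the routing change takes effect
ipfs daemon
# A node only stores third-party content that you explicitly pin
ipfs pin add <cid>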

Why is it so hard for web browsers to open IPFS links?
For instance, here is an IPFS blog page link: https://ipfs.io/ipfs/bafybeic3y6oc2dai3uypyyuaggp4xx3krocpgzbwst2z4ha73jdh7y6nea/index.html
The page loads tremendously slowly in Safari or Chrome/Edge, and it gets stuck on a 504 error from time to time. Is there any way to browse IPFS pages smoothly? Or is IPFS just inherently incapable of smooth browsing without third-party help?
You're loading the page through a gateway. So effectively, you're asking another (quite popular) IPFS node to fetch the info over IPFS, then serve it to you over HTTP. If the gateway is slowed down for any reason, all IPFS resources will seem slow to you. If a gateway is your only option for whatever reason, check out the IPFS Gateway Checker for a list of active ones.
Alternatively, you could run your own IPFS node via something like IPFS Desktop and connect it to IPFS Companion (Chrome | Firefox). IPFS Companion can be configured to redirect all IPFS gateway links to your own node; then you're limited only by how quickly your node can find and retrieve the data, which you'll likely find to be the superior IPFS experience.
The Brave browser also includes an integrated IPFS node. They have an article about it here.
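If you do run a local node, you can point requests at its local gateway instead of a public one. A quick sketch, assuming the default local gateway address of 127.0.0.1:8080:
# Fetch the page from the question through the local node's gateway
curl -o index.html "http://127.0.0.1:8080/ipfs/bafybeic3y6oc2dai3uypyyuaggp4xx3krocpgzbwst2z4ha73jdh7y6nea/index.html"
# Or skip HTTP entirely and fetch over IPFS directly
ipfs get bafybeic3y6oc2dai3uypyyuaggp4xx3krocpgzbwst2z4ha73jdh7y6nea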

Openshift 4.6 Node and Master Config Files

Where are the OpenShift master and node host files in v4.6?
Previously, in v3, they were located at:
Master host files at /etc/origin/master/master-config.yaml
Node host files at /etc/origin/node/node-config.yaml
Instead of looking at configuration files on the node hosts as in OCPv3, you can check the current kubelet configuration using the following procedures, because the kubelet configuration is managed dynamically as of OCPv4.
Further information is here: Generating a file that contains the current configuration.
You can check it using the above referenced procedure (generate the configuration file) or the oc CLI as follows:
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | \
jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
These files no longer exist in the same form as in OCP 3. To change anything on the machines themselves, you'll need to create MachineConfigs, as CoreOS is an immutable operating system. If you change anything manually on the filesystem and reboot the machine, your changes will typically be reset.
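As a sketch, a MachineConfig that writes a file onto worker nodes might look like this (the name, file path, and contents are hypothetical placeholders; OCP 4.6 uses Ignition spec 3.1.0):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-custom-file       # hypothetical name
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/example.conf   # hypothetical file
          mode: 420                 # decimal for 0644
          contents:
            source: data:,example%20setting
Applying it with oc apply -f triggers the Machine Config Operator to roll the change out, rebooting nodes as needed.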
To modify worker nodes, the setting you are looking for can often be configured via a kubeletConfig: Managing nodes - Modifying Nodes. Note that only certain settings can be changed this way; others cannot be changed at all.
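For instance, a minimal KubeletConfig sketch that raises the pod limit (the name, label, and value are illustrative; the matching label must also be added to the worker machineConfigPool):
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods               # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods # label you add to the worker MCP
  kubeletConfig:
    maxPods: 250                   # one of the tunable kubelet settings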
For the master config, it depends on what you want to change: some settings go through a machineConfigPool, while others, such as API server settings, are edited via oc edit apiserver cluster.

Is it possible to restore an IPFS file by hash?

I have the hash of an IPFS file, but the node hosting the file is not working. I need to restore this file somehow.
Can I somehow restore the file from its hash?
You can download data from IPFS as long as there is at least one node providing it.
If the data was only on one node and you shut it down, you won't be able to get the data until that node comes back online, or until someone else with the same data adds it to IPFS and announces it to the DHT.
To get a list of nodes providing the data for a specific hash:
ipfs dht findprovs QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
If you want to download data from IPFS but can't run a local node, try downloading it from one of the public gateways.
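For example, a quick sketch using the hash above (assuming at least one provider is online):
# Check who is providing the content
ipfs dht findprovs QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
# Fetch it with a local node...
ipfs get QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
# ...or, without a local node, through a public gateway
curl -O https://ipfs.io/ipfs/QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR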

IPFS file upload and view

I uploaded a PDF file to the IPFS decentralized network. My question: when I close the IPFS console, I can no longer view the PDF file through the https://ipfs.io gateway. Why is that? My understanding was that once a file is uploaded to the IPFS network, it is distributed to the nodes.
Adding a file to IPFS via ipfs add <file> does not distribute it to the network (that would be free hosting!), it only puts the file into the standard format (IPLD) and makes it possible to access over the network (IPFS) as long as someone connected to the network has the file. When you first add something, that's only you. So if you close your laptop, suddenly the file is no longer available. UNLESS someone else has downloaded it since then, because then they can distribute it while your computer is off. There are many "pinning services" which do just that, for a small fee.
Your understanding is correct, but how are you uploading files to the IPFS network? There are a number of ways to add data to it.
If you are able to add data to IPFS, you will get the hash of the data. The condition is that the daemon is running locally so that your data can be broadcast to the other peers you are attached to. You can check this with the command: ipfs swarm peers
If the above conditions are fulfilled, you can view/get the data from https://ipfs.io/ipfs/<replace with the hash you get after adding>
If the daemon is not running, you can still add your file and get the hash, but your files will be saved locally and you won't be able to access them from the web.
Please let me know if you need any other information.
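Putting the pieces together, a minimal sketch of the whole flow (mydoc.pdf is a hypothetical filename):
# Add the file locally; this prints its hash (CID)
ipfs add mydoc.pdf
# Confirm the daemon is running and connected to peers
ipfs swarm peers
# While the daemon stays online (or after someone else has fetched or pinned
# the content), anyone can view it at:
#   https://ipfs.io/ipfs/<hash printed by ipfs add>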

Couchbase Mobile (Sync Gateway) sample TODOlite application doesn't replicate; complains _facebook doesn't exist

My objective: get https://github.com/couchbaselabs/ToDoLite-iOS syncing with a Couchbase Server and sync gateway on localhost rather than the default demo URL.
I run sync gateway like so: bin/sync_gateway -url http://localhost:8091
And then the only thing I changed in the example is:
-#define kSyncGatewayUrl @"http://demo.mobile.couchbase.com/todolite"
+#define kSyncGatewayUrl @"http://localhost:4984/sync_gateway/"
And when I run it, I get:
Error: Error Domain=CBLHTTP Code=404 "404 not_found" UserInfo=0x7ff11941fb50 {NSURL=http://localhost:4984/sync_gateway/_facebook, NSLocalizedFailureReason=not_found, NSLocalizedDescription=404 not_found}
How do I fix this?
I solved it. The reason is that I ran sync_gateway without enabling Facebook registration support.
Normally this is done in a config.json file. In fact, this configuration file was supplied with ToDoLite all along.
It is crucial that you launch sync_gateway with this configuration file. The README actually states this, but in a loose and casual way...
cd ToDoLite-iOS
sync_gateway -url http://localhost:8091 sync-gateway-config.json
NB: I assume above that sync_gateway has been made accessible through $PATH. It's a good idea to do that anyway.
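For reference, the relevant parts of such a config might look like the sketch below (this assumes the legacy Sync Gateway JSON config format; the actual sync-gateway-config.json ships with the repo, so prefer that one):
{
  "interface": ":4984",
  "adminInterface": ":4985",
  "facebook": { "register": true },
  "databases": {
    "todos": {
      "server": "http://localhost:8091",
      "bucket": "todos"
    }
  }
}
The "facebook": { "register": true } block is what enables the /_facebook endpoint that the 404 error complains about.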
Also, I didn't pay attention to the dbname. So you'll need to replace
#define kSyncGatewayUrl @"http://demo.mobile.couchbase.com/todolite"
with
#define kSyncGatewayUrl @"http://localhost:4984/todos"
So, what's the complete sequence of steps to get it working?
If you want to wipe everything on the server, rm -rf Library/Application\ Support/Couchbase and start over. The Homebrew cask hides this data somewhere else where it's hard to reset, so a manual install is strongly recommended.
Install Couchbase Server
Set up login credentials if fresh install; otherwise just login
Create a bucket (a database) with name todos on the cluster. This is the dbname used by TODOLite.
Launch sync gateway. Be sure to pass in the replication URL AND the JSON config file.
bin/sync_gateway -url http://localhost:8091 sync-gateway-config.json; keep sync gateway running
In the TODOLite AppDelegate.m, change kSyncGatewayUrl:
#define kSyncGatewayUrl @"http://localhost:4984/todos". Notice the name of the database is necessary!
(Optionally) Access the administrator interface of the sync gateway by going to http://localhost:4985/_admin/db/sync_gateway/sync. You can set up the sync function here.
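The sync function itself is JavaScript. A minimal sketch (this is the stock default from the Sync Gateway docs, not ToDoLite's full function):
function (doc, oldDoc) {
  // Route each document into the channels it lists
  channel(doc.channels);
}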
In case you're wondering where those port numbers came from, check out:
ports Couchbase Server uses
ports Sync Gateway uses
4984 — SG API port
4985 — SG admin server
The default remote sync URL will be defined in different files depending on the version of the project you download (iOS, Android, PhoneGap, and Motion). To find the appropriate string to change simply search through your project for the URL "http://demo.mobile.couchbase.com/todolite" and replace it with the URL of your new sync gateway database.