IPFS Swarm and IPFS Cluster

Consider 3 IPFS peers: A, B and C.
When peer A establishes connections with peers B and C (using ipfs swarm connect), will this form a cluster with A as the leader? If yes, do we need to create a secret key manually, and who manages that key, and how?

IPFS is a decentralized system. Even if you establish the connections from peer A, all three peers will end up sharing each other's DHT (Distributed Hash Table) information and sit at the same level. There is no leader, and every peer has the same privileges as any other peer in the network.
There is also currently no notion of a secret key in IPFS itself: all data in the public IPFS network is publicly available. If you need confidentiality, you have to implement a layer on top of IPFS and encrypt your data before adding it.
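For example, a minimal encrypt-before-add sketch with OpenSSL (file names and key handling are illustrative, not an IPFS feature; the ipfs add step is commented out since it assumes a running daemon):

```shell
# Sketch: encrypt locally, then publish only the ciphertext.
echo 'hello private ipfs' > document.txt
openssl rand -hex 32 > secret.key            # symmetric key; never add this to IPFS
openssl enc -aes-256-cbc -pbkdf2 -pass file:secret.key \
  -in document.txt -out document.enc         # encrypt before publishing
# ipfs add document.enc                      # only the ciphertext reaches the network
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:secret.key \
  -in document.enc -out document.out         # decrypt after "ipfs get"
```

Key distribution is then your problem to solve out of band, which is exactly the layer IPFS leaves to the application.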

A private IPFS network is designed so that an IPFS node connects only to peers that hold a shared secret key. With IPFS private networks, each node specifies which other nodes it will connect to, and nodes in that network don't respond to communications from nodes outside it.
An IPFS-Cluster is a stand-alone application and a CLI client that allocates, replicates and tracks pins across a cluster of IPFS daemons. IPFS-Cluster uses the RAFT leader-based consensus algorithm to coordinate storage of a pinset, distributing the set of data across the participating nodes.
The difference between a private IPFS network and IPFS Cluster is worth emphasizing. A private network is a feature implemented within core IPFS, while IPFS-Cluster is a separate application. IPFS and IPFS-Cluster are installed as different packages, run as separate processes, and have different peer IDs as well as API endpoints and ports. The IPFS-Cluster daemon depends on the IPFS daemon and should be started after it.
For a private IPFS network, you need Go and IPFS installed on all the nodes. Once that is done, run the following command to install the swarm key generation utility. A swarm key lets us create a private network and tells peers to communicate only with those who share this secret key.
This command should be run only on your Node0: we generate swarm.key on the bootstrap node and then copy it to the rest of the nodes.
go get -u github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen
Now run this utility on your first node to generate swarm.key under the .ipfs folder:
ipfs-swarm-key-gen > ~/.ipfs/swarm.key
Copy the generated swarm.key file to the IPFS directory of each node participating in the private network. Please let me know if you need further details on this.
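For reference, swarm.key is a three-line text file: a header, an encoding marker, and 32 random bytes in hex. Assuming standard Unix tools, the same file can also be produced without the Go utility:

```shell
# Produce a swarm.key in the go-ipfs pre-shared-key format:
# header line, encoding marker, 32 random bytes as 64 hex chars.
mkdir -p ~/.ipfs
{
  printf '/key/swarm/psk/1.0.0/\n'
  printf '/base16/\n'
  od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n'
  printf '\n'
} > ~/.ipfs/swarm.key
```

Whichever way you generate it, the file contents must be byte-identical on every node in the private network.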

No, it doesn't form a cluster. Clustering is handled by a separate application, IPFS Cluster, in which a node pins data across various other nodes so that the other nodes in the network can access it. Cluster peers authenticate each other through a shared secret key. For more information, see the IPFS Cluster documentation:
https://cluster.ipfs.io/documentation/

Related

Syncing Ethereum with geth from another node without executing and verifying txs

I have 2 fully synced Ethereum geth nodes, but I need one more full node.
However, geth syncing is very slow because it verifies all the downloaded data.
Can I skip verification for data coming from my own authorized nodes?
The trusted-node option seems to act only as a static node.
I would suggest backing up the chaindata folder from one of your already synchronized nodes and restoring it on the third node:
Gracefully stop the already synced node (SIGINT/CTRL-C).
Tar the chaindata folder with zstd or lz4 compression.
Extract the archive over the new node's chaindata folder (replacing it) at the location where geth stores its state.
It will then only sync from that point to head of the chain.
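Sketched as commands (the data directory path is an example; adjust it to your --datadir, and keep geth stopped on both machines while copying):

```shell
# On the already-synced node, after stopping geth gracefully:
DATADIR=/data/ethereum                    # example path, not a geth default
tar -C "$DATADIR/geth" -I zstd -cf chaindata.tar.zst chaindata

# On the new node, before its first start:
NEWDATADIR=/data/ethereum                 # the new node's --datadir
rm -rf "$NEWDATADIR/geth/chaindata"
tar -C "$NEWDATADIR/geth" -I zstd -xf chaindata.tar.zst
# Starting geth now only syncs from this snapshot to the chain head.
```

Copying from a cleanly stopped node matters: a snapshot taken while geth is running can leave the database in an inconsistent state.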

IPFS Cluster bootnode required after initialization

I have a private IPFS cluster setup and I was wondering what happens to the cluster when the initial boot node that other new peer nodes are bootstrapping to is unavailable temporarily or indefinitely.
If you're bootstrapping with only a single peer and that peer is unavailable, the node won't be able to bootstrap, and therefore won't be able to find other peers. If you control many peers in the cluster, simply add all of them as bootstrap peers; that way you don't have a single point of failure (SPoF).
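For example, ipfs-cluster-service accepts several comma-separated bootstrap multiaddresses at once (the IPs and peer IDs below are placeholders):

```shell
# Replace the addresses and peer IDs with your own cluster peers.
ipfs-cluster-service daemon \
  --bootstrap /ip4/192.0.2.10/tcp/9096/p2p/<peerID-A>,/ip4/192.0.2.11/tcp/9096/p2p/<peerID-B>
```

With more than one entry, the node keeps trying the list until one peer answers, so any single bootstrap peer can be down without isolating new nodes.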

Do all IPFS requests go through a public gateway?

I initialized a local directory with ipfs add -r . and was able to access it through the https://ipfs.io/ipfs gateway using the hash.
I was also able to get the files from another node using ipfs get -o <file-name> <hash>.
Are the files served through the ipfs.io gateway or through the local gateways of the other participating decentralized nodes?
TLDR: No
The go/js-ipfs CLIs will not make any HTTP requests, to public gateways or otherwise, when you perform an ipfs get.
Gateways, public or local, are just a convenient way of bridging the IPFS protocol stack with the standard experience of performing an HTTP request for some data. A local gateway will let you use standard HTTP based applications (e.g. web browsers, curl, etc.) while still utilizing your locally running IPFS daemon under the hood. On the other hand, the public gateways let you use standard HTTP based applications while using someone else's (i.e. publicly run infrastructure's) IPFS daemon under the hood.
The main utility of the public gateways is making content that peers have in the public IPFS network available over HTTP to people and applications that are not able to run IPFS.
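To make the distinction concrete (the CID is a placeholder and the ports are the daemon's defaults):

```shell
CID=<your-content-hash>                  # placeholder CID
ipfs get "$CID"                          # IPFS protocol only; no HTTP, no gateways
curl "http://127.0.0.1:8080/ipfs/$CID"   # local gateway: HTTP into your own daemon
curl "https://ipfs.io/ipfs/$CID"         # public gateway: HTTP into someone else's daemon
```

All three return the same bytes; only the last one involves third-party infrastructure.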

IPFS nodes in Docker and Web UI

This question refers to the following project, which integrates Fabric blockchain and IPFS.
The architecture basically comprises a swarm of Docker containers that should communicate with each other (three containers: two peer nodes and one server node). Every container is an IPFS node and has a separate configuration.
I am trying to run a dockerized environment of an IPFS cluster of nodes and view the Web UI that comes with it. I set up the app by running all the steps described, and then I should supposedly be able to see the Web UI at this address:
http://127.0.0.1:5001
Everything seems to be set up and configured as it should be (I checked every docker logs <container>). Nevertheless, all I get is an empty page.
When I try to view my local IPFS repository via
https://webui.ipfs.io/#/welcome
I get a message that this is probably caused by a CORS error (which makes sense), and it is suggested to change the IPFS configuration in order to bypass it.
I tried to implement the solution by changing the headers in the configuration, but it doesn't seem to have any effect.
The confusion stems from the fact that after setting up the containers we have 3 different containers with 3 configurations, and the IPFS daemon runs inside each of them. Outside the containers, no IPFS daemon is running.
I don't know whether an IPFS daemon should also be running outside the containers.
I'm not sure which configuration (if not all of them) I should modify.
Should I use a reverse proxy to solve this?
Useful Info
The development is done in a Linux-Ubuntu VM that meets all the necessary requirements.
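For reference, the CORS change suggested by webui.ipfs.io amounts to setting the API headers in each node's config; in this dockerized setup the commands would need to run inside every container (the container name below is an example), followed by restarting that container's daemon:

```shell
# Example container name; repeat for each IPFS container in the swarm.
docker exec peer1 ipfs config --json \
  API.HTTPHeaders.Access-Control-Allow-Origin '["https://webui.ipfs.io", "http://127.0.0.1:5001"]'
docker exec peer1 ipfs config --json \
  API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST", "GET"]'
docker restart peer1   # config changes take effect only after a daemon restart
```

Editing a config outside the containers has no effect, since the daemons only read the config inside their own container.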

rsync mechanism in WSO2 all-in-one active-active

I am deploying an active-active all-in-one setup on 2 separate servers with wso2-am 2.6.0 and wso2 analytics 2.6.0. I am configuring my servers following this link. Regarding parts 4 and 5 about the rsync mechanism, I have some questions:
1. How can I verify that rsync is working and my servers are in sync?
2. What will happen in the future if I don't use rsync now and also skip the configuration in parts 4 and 5?
1. How can I verify that rsync is working and my servers are in sync?
It is not really clear what you are asking for; rsync is just a command to synchronize files between folders.
What rsync is used for: when deploying an API, the gateway creates or updates a few Synapse sequences or API artifacts in the filesystem (repository/deployment/server), and these file updates need to be synchronized to all gateway nodes.
I personally don't advise using rsync. The whole issue is that you need to invoke the rsync command regularly to synchronize the files created by the master node. That introduces a delay in service availability and, most importantly, if something goes wrong and you want to use another node as the master, you need to switch the rsync direction, which is not a readily automated process.
We usually keep it simple by using a shared filesystem (NFS, Gluster, ...), giving a fully active-active setup (granted, setting up HA NFS or GlusterFS is not particularly simple, but that is usually the infrastructure team's job).
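If rsync is used anyway, it is typically a scheduled push of the gateway's deployment directory from the master to each worker (host names and paths below are examples):

```shell
# Push Synapse artifacts from the master gateway to a worker node.
rsync -avz --delete \
  /opt/wso2am/repository/deployment/server/ \
  wso2user@gateway-worker-1:/opt/wso2am/repository/deployment/server/
# Usually scheduled via cron on the master, e.g. once per minute:
# * * * * * rsync -avz --delete /opt/wso2am/repository/deployment/server/ wso2user@gateway-worker-1:/opt/wso2am/repository/deployment/server/
```

Note that --delete makes the worker mirror the master exactly, so artifacts removed on the master also disappear from the workers; this one-directional flow is exactly the failover limitation described above.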
2. What will happen in the future if I don't use rsync now and also skip the configuration in parts 4 and 5?
If the gateways' filesystems are neither synced nor shared: you deploy an API from the Publisher to a single gateway node, but the other gateway nodes won't create the Synapse sequences and API artifacts. As a result, the other nodes won't pass client requests to the backend.