IPFS cluster bootnode required after initialization - ipfs

I have a private IPFS cluster set up, and I was wondering what happens to the cluster when the initial boot node that new peer nodes bootstrap to becomes unavailable, temporarily or indefinitely.

If you're bootstrapping with only a single peer and that peer is not available, the node won't be able to bootstrap and therefore won't be able to find other peers. If you control many peers in the cluster, simply add all of them as bootstrap peers; that way you don't have a single point of failure (SPoF).
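For example, assuming the other cluster peers listen on the default swarm port 4001 (the addresses and peer IDs below are placeholders), each new node could register several bootstrap peers:

ipfs bootstrap add /ip4/10.0.0.11/tcp/4001/p2p/<peerID-of-node1>
ipfs bootstrap add /ip4/10.0.0.12/tcp/4001/p2p/<peerID-of-node2>
ipfs bootstrap list   # verify all expected entries are present

As long as at least one of the listed peers is reachable, the node can still join the network.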

Related

Gracefully Remove K3S Node

After installing 2 master nodes according to the k3s docs, we now want to remove one node (don't ask).
We know that if we shut down one node, the entire cluster "dies". No services accessible, no Kubernetes API available.
Is there a way to gracefully remove a node and return to a single node (embedded etcd) cluster?
Theoretically it must be possible, since during installation we started with a working, single-node cluster.
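A sketch of one approach, under the assumption that both servers are still healthy when you start (node names below are placeholders): drain and delete the node you want to retire while the cluster still has etcd quorum, then uninstall k3s on that machine.

kubectl drain master-2 --ignore-daemonsets --delete-emptydir-data
kubectl delete node master-2      # k3s should also remove the matching etcd member
/usr/local/bin/k3s-uninstall.sh   # run on the removed machine; installed by the k3s install script

This must happen while both members are up; once one of the two etcd members is already down, quorum is lost and the commands above cannot run.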

IPFS nodes in Docker and Web UI

This question refers to the following project, which is about integrating Fabric Blockchain and IPFS.
The architecture basically comprises a swarm of Docker containers that should communicate with each other (three containers: two peer nodes and one server node). Every container is an IPFS node and has a separate configuration.
I am trying to run a dockerized environment of an IPFS cluster of nodes and view the Web UI that comes with it. I set up the app by running all the steps described, and then supposedly I would be able to see the WebUI at this address:
http://127.0.0.1:5001
Everything seems to be set up and configured as it should be (I checked every docker logs <container>). Nevertheless, all I get is an empty page.
When I try to view my local IPFS repository via
https://webui.ipfs.io/#/welcome
I get a message that this is probably caused by a CORS error (which makes sense), and it is suggested to change the IPFS configuration in order to bypass it (see screenshot).
I try to implement the solution by changing the Headers in the configuration, but it doesn't seem to have any effect.
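For reference, the change suggested in the webui docs usually amounts to the commands below. A plausible explanation for it having no effect is that each daemon runs inside its own container, so the config has to be changed inside each container (the container name is a placeholder) and the daemon restarted afterwards:

docker exec <container> ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["https://webui.ipfs.io", "http://127.0.0.1:5001"]'
docker exec <container> ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST"]'
docker restart <container>   # the daemon only reads its config at startup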
The confusion stems from the fact that after setting up the containers we have 3 different containers with 3 configurations, and in addition the IPFS daemon is running in each one of them. Outside the containers, the IPFS daemon is not running.
I don't know if the IPFS daemon outside the containers should be running.
I'm not sure which configuration (if not all) I should modify.
Should I use a reverse proxy to solve this?
Useful Info
The development is done in a Linux-Ubuntu VM that meets all the necessary requirements.

IPFS Swarm and IPFS Cluster

Consider 3 IPFS peers A, B and C
When peer A establishes a connection with peers B and C (using ipfs swarm connect), will it form a cluster with A as leader? If yes, do we need to manually create a secret key? And who manages the key, and how?
IPFS is a decentralized system; even if you establish the connections through peer A, in the end all peers will share each other's DHT (Distributed Hash Table) information and come out at the same level. There will not be any leader in a cluster, and all peers have the same privileges as any other peer in the network.
Right now there is no notion of a secret key in IPFS; all data in the IPFS network is publicly available. If you want privacy, you have to implement a layer on top of it and encrypt the data before putting it into IPFS.
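As a minimal sketch of such a layer, assuming OpenSSL is available and using a throwaway passphrase, you could encrypt a file before adding it and decrypt it after fetching:

openssl enc -aes-256-cbc -pbkdf2 -salt -in secret.txt -out secret.enc -pass pass:my-passphrase
ipfs add secret.enc   # prints the CID of the encrypted file
ipfs cat <CID> | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:my-passphrase   # recover the plaintext

Anyone can still fetch the encrypted block by its CID; only holders of the passphrase can read the contents.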
Private IPFS is designed for a particular IPFS node to connect to other peers who have a shared secret key. With IPFS private networks, each node specifies which other nodes it will connect to. Nodes in that network don’t respond to communications from nodes outside that network.
An IPFS-Cluster is a stand-alone application and a CLI client that allocates, replicates, and tracks pins across a cluster of IPFS daemons. IPFS-Cluster uses the Raft leader-based consensus algorithm to coordinate storage of a pinset, distributing the set of data across the participating nodes.
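Day-to-day pinning is then driven through the ipfs-cluster-ctl client; a few representative commands (the CID is a placeholder):

ipfs-cluster-ctl peers ls        # list the peers participating in the cluster
ipfs-cluster-ctl pin add <CID>   # allocate and replicate the pin across cluster peers
ipfs-cluster-ctl status <CID>    # show which peers are tracking the pin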
The difference between private IPFS and IPFS-Cluster is notable. A private network is a feature implemented within the core IPFS functionality, while IPFS-Cluster is a separate application. IPFS and IPFS-Cluster are installed as different packages, run as separate processes, and have different peer IDs as well as API endpoints and ports. The IPFS-Cluster daemon depends on the IPFS daemon and should be started after it.
For a private IPFS network, you should have Go and IPFS installed on all the nodes. Once that is done, run the following command to install the swarm key generation utility. The swarm key allows us to create a private network and tells network peers to communicate only with peers who share this secret key.
This command should be run only on your Node0. We generate swarm.key on the bootstrap node and then just copy it to the rest of the nodes.
go get -u github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen
Now run this utility on your first node to generate swarm.key under the .ipfs folder:
ipfs-swarm-key-gen > ~/.ipfs/swarm.key
Copy the generated swarm.key file to the IPFS directory of each node participating in the private network. Please let me know if you need further details on this.
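As a sketch of the remaining wiring, assuming the nodes are reachable over SSH and filling in Node0's IP address and peer ID (run ipfs id on Node0 to get it), each node would then be pointed at the private bootstrap node:

scp ~/.ipfs/swarm.key user@node1:~/.ipfs/swarm.key   # repeat for every other node
ipfs bootstrap rm --all                              # drop the default public bootstrap nodes
ipfs bootstrap add /ip4/<Node0-IP>/tcp/4001/p2p/<Node0-PeerID>
export LIBP2P_FORCE_PNET=1                           # refuse to start without a swarm.key
ipfs daemon

With LIBP2P_FORCE_PNET=1 set, the daemon fails fast if the key is missing instead of silently joining the public network.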
No, it doesn't form a cluster, because there is a separate implementation of IPFS for the above-mentioned problem, named IPFS Cluster, where a particular node pins the data across various other nodes, through which other nodes in the network can access the data. The pinning of data by the nodes is coordinated using a shared secret key. For more information you can go through the documentation of IPFS Cluster.
https://cluster.ipfs.io/documentation/

Openshift scaling when using EBS

How does OpenShift scale when using EBS for persistent storage? How does OpenShift map users to EBS volumes? Since it's infeasible to allocate one EBS volume to each user, how does OpenShift handle this in the backend using Kubernetes?
EBS volumes can only be mounted on a single node in a cluster at a time. This means you cannot scale an application that uses one beyond 1 replica. Further, an application using an EBS volume cannot use the 'Rolling' deployment strategy, as that would require 2 replicas to exist while the new deployment is occurring. The deployment strategy therefore needs to be set to 'Recreate'.
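For example, on OpenShift an existing DeploymentConfig could be switched to the Recreate strategy with a patch like this (the application name is a placeholder):

oc patch dc/<myapp> -p '{"spec":{"strategy":{"type":"Recreate"}}}'
oc scale dc/<myapp> --replicas=1   # EBS-backed applications must stay at a single replica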
Subject to those restrictions on a deployed application that has claimed an EBS volume, there are no problems with using EBS volumes as an underlying storage type. Kubernetes will quite happily map the volume into the pod for your application. If that pod dies and gets started on a different node, Kubernetes will mount the volume in the pod on the new node instead, so that your storage follows the application.
If you give up a volume claim, its contents are wiped and it is returned to the pool of available volumes. A subsequent claim by you or a different user can then get that volume and it would be applied to the pod for the new application.
This is all handled automatically and works without problems. It is a bit hard to understand exactly what you are asking, but hopefully this gives you a better picture.

Kubernetes: run persistent pods cassandra/mysql in Ubuntu servers

I'm a newbie at Kubernetes and I'm having trouble understanding how I can run persistent pods (Cassandra or MySQL ones) on Ubuntu servers.
Correct me if I'm wrong: Kubernetes can scale pods up or down when it sees that we need more CPU, but here we are not talking about static code; the data is also present on the nodes. So what will the pod do when it receives a request from the load balancer? Also, Kubernetes has the power to destroy pods when it sees that traffic has decreased, so how can we avoid losing data and disturbing the environment?
You should use volumes to map a directory in the container to persistent disks on the host or other storage.
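A minimal sketch of that idea, assuming the cluster has a default StorageClass that can provision the disk (names and sizes below are placeholders): a PersistentVolumeClaim plus a pod that mounts it, so the data in /var/lib/mysql survives pod restarts and rescheduling.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:8.0
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: changeme             # placeholder; use a Secret in real deployments
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # MySQL's data directory
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-data
EOF

If the pod is recreated, the claim (and therefore the data) is reattached wherever the new pod is scheduled.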