Gracefully Remove K3S Node - k3s

After installing 2 master nodes according to the k3s docs, we now want to remove one node (don't ask).
We know that if we shut down one node, the entire cluster "dies": no services are accessible and the Kubernetes API is unavailable. (With two etcd members, quorum requires both, so losing either one halts etcd.)
Is there a way to gracefully remove a node and return to a single-node (embedded etcd) cluster?
Theoretically it must be possible, since during installation we started with a working single-node cluster.
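A minimal sketch of what the removal might look like, assuming both servers are still healthy (the node name is a placeholder, and the etcd-member cleanup on node deletion is how k3s is documented to behave):

    # Drain and delete the second server while both etcd members are still up;
    # k3s should remove the matching etcd member when the node object is deleted.
    kubectl drain k3s-server-2 --ignore-daemonsets --delete-emptydir-data
    kubectl delete node k3s-server-2
    # Then, on the removed machine, clean up the local k3s installation:
    /usr/local/bin/k3s-uninstall.sh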

Related

IPFS cluster bootnode required after initialization

I have a private IPFS cluster set up, and I was wondering what happens to the cluster when the initial boot node that other new peers bootstrap to becomes unavailable, temporarily or indefinitely.
If you're only bootstrapping with a single peer and that peer is not available, the node won't be able to bootstrap and therefore won't be able to find other peers. If you control many peers in the cluster, simply add all of them as bootstrap peers; that way you don't have a single point of failure.
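For example, each node you control could be given several bootstrap peers (the multiaddresses below are placeholders):

    # Add every peer you control as a bootstrap peer
    ipfs bootstrap add /ip4/10.0.0.2/tcp/4001/p2p/QmPeerIdOfNode2
    ipfs bootstrap add /ip4/10.0.0.3/tcp/4001/p2p/QmPeerIdOfNode3
    # Verify the resulting bootstrap list
    ipfs bootstrap list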

OpenShift cluster (v4.6) using crc has an empty OperatorHub

I have downloaded CodeReady Containers on Windows to install my OpenShift cluster. I need to deploy 3scale on it using the operator from OperatorHub, but the OperatorHub page is empty.
Digging deeper, I found that a few pods on the cluster are not running and show the state "ImagePullBackOff".
I deleted the pods to get them restarted, but the error won't go away. I checked the event logs; screenshots are attached.
[screenshot: pod terminal logs]
This is an error that I keep getting when I start my cluster. Sometimes it comes up, sometimes the cluster starts normally, but maybe this has something to do with it.
[screenshot: Quay.io error]
This is my first time making a deployment on an OpenShift cluster and setting up my cluster environment. So far I have not been able to resolve the issue, even after deleting and recreating the cluster.
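A few commands that might help narrow down ImagePullBackOff issues: the OperatorHub catalog pods normally live in the openshift-marketplace namespace (the pod name below is a placeholder):

    # List the catalog pods that feed OperatorHub
    oc get pods -n openshift-marketplace
    # Inspect the failing pod's events for the exact pull error
    oc describe pod <pod-name> -n openshift-marketplace
    # Check that the cluster pull secret is present
    oc get secret pull-secret -n openshift-config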

IPFS nodes in Docker and Web UI

This question refers to the following project, which is about integrating Fabric Blockchain and IPFS.
The architecture basically comprises a swarm of Docker containers that should communicate with each other (three containers: two peer nodes and one server node). Every container is an IPFS node and has a separate configuration.
I am trying to run a dockerized environment of an IPFS cluster of nodes and view the Web UI that comes with it. I set up the app by running all the steps described, and then supposedly I would be able to see the Web UI at this address:
http://127.0.0.1:5001
Everything seems to be set up and configured as it should be (I checked every docker logs <container>). Nevertheless, all I get is an empty page.
When I try to view my local IPFS repository via
https://webui.ipfs.io/#/welcome
I get a message that this is probably caused by a CORS error (which makes sense), and it is suggested to change the IPFS configuration in order to bypass the CORS error.
[screenshot]
I tried to implement the suggested solution by changing the headers in the configuration, but it doesn't seem to have any effect (a sketch of the usual commands follows below).
The confusion stems from the fact that after setting up the containers we have three different containers with three configurations, and in addition the IPFS daemon is running in each one of them. Outside the containers, the IPFS daemon is not running.
I don't know if the IPFS daemon outside the containers should be running.
I'm not sure which configuration (if not all) I should modify.
Should I use a reverse proxy to solve this?
Useful Info
The development is done in a Linux (Ubuntu) VM that meets all the necessary requirements.
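For reference, a sketch of the CORS change as it would usually be applied inside each container rather than on the host (the container name peer1 is a placeholder; repeat for all three containers):

    # Set the allowed origins and methods on the container's IPFS config
    docker exec peer1 ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["https://webui.ipfs.io", "http://127.0.0.1:5001"]'
    docker exec peer1 ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST", "GET"]'
    # Restart so the daemon picks up the new config
    docker restart peer1

Note also that the container's API port (5001) has to be published to the host for http://127.0.0.1:5001 to reach it.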

rsync mechanism in WSO2 all-in-one active-active

I am deploying an all-in-one active-active setup on 2 separate servers with WSO2 API Manager 2.6.0 and WSO2 Analytics 2.6.0. I am configuring my servers according to this link. Regarding parts 4 and 5, about the rsync mechanism, I have some questions:
1. How can I figure out whether my server is working with rsync or sync?
2. What will happen in the future if I don't use rsync now and also don't apply the configuration from parts 4 and 5?
1. How can I figure out whether my server is working with rsync or sync?
It is not really clear what you are asking for. rsync is just a command to synchronize files between folders.
What rsync is used for: when deploying an API, the gateway creates or updates a few Synapse sequences or APIs in the filesystem (repository/deployment/server), and these file updates need to be synchronized to all gateway nodes.
I personally don't advise using rsync; the whole issue is that you need to invoke the rsync command regularly to synchronize the files created by the master node. That introduces a delay before the service is available everywhere and, most importantly, if something goes wrong and you want to use another node as the master, you need to switch the rsync direction, which is not a really automated process.
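For illustration, such a periodically invoked job might look like this (the install path and hostname are assumptions; the synced directory is the repository/deployment/server mentioned above):

    # Hypothetical cron entry on the master gateway: push deployment artifacts
    # to a worker gateway once a minute
    * * * * * rsync -avz --delete /opt/wso2am-2.6.0/repository/deployment/server/ wso2@gateway2:/opt/wso2am-2.6.0/repository/deployment/server/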
We usually keep it simple by using a shared filesystem (NFS, Gluster, ...), and then we have a fully active-active setup. (OK, setting up HA NFS or GlusterFS is not particularly simple, but that's usually the job of the infra team.)
2. What will happen in the future if I don't use rsync now and also don't apply the configuration from parts 4 and 5?
If the filesystems between the gateways are not synced or shared: you deploy an API from the Publisher to a single gateway node, but the other gateway nodes won't create the Synapse sequences and API artefacts. As a result, the other nodes won't pass client requests to the backend.

How do I reinstall an OpenShift node which is an active GlusterFS node?

I have a small installation of OpenShift Origin 3.10 (aka OKD), which also hosts an internal GlusterFS cluster for persistent volumes. I need to reinstall several of the nodes (their XFS filesystems were created without d_type support, which is needed for overlay2). While reinstalling the node itself is well-defined (cordon, drain, remove, reinstall, scale up), there is basically no documentation I could find on removing and later re-adding a GlusterFS node in a containerized cluster.
How do I remove such a node safely and reinstall it later?
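For the node-level part, the usual sequence on 3.10 looks roughly like this (hostname and inventory path are placeholders; the GlusterFS-specific steps are exactly what remains unclear):

    # Evacuate and remove the node from the cluster
    oc adm cordon node3.example.com
    oc adm drain node3.example.com --ignore-daemonsets --delete-local-data
    oc delete node node3.example.com
    # After reinstalling the OS, re-add the node with the openshift-ansible scaleup playbook
    ansible-playbook -i /etc/ansible/hosts playbooks/openshift-node/scaleup.yml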