What is the best way to rename an OKD cluster's master? - openshift

We are running a "single master and multiple nodes" cluster (see https://docs.okd.io/latest/install/index.html#multi-masters-using-native-ha-colocated). Let's call our servers oomaster1, oonode1 and oonode2.
I would like to add more masters one day, and I think the first step would be to add a VIP, oomaster, pointing only to oomaster1 for now, and then rename the cluster (currently oomaster1) to oomaster.
What would be the best way to proceed? I could just stop all OKD-related services, replace oomaster1 (and its address) with oomaster (and its address) in every file under /etc/origin and /etc/etcd, and then restart the services, but I suppose it is more complex than that... (a rough sketch of what I mean follows below)
Thanks in advance for any advice.
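Concretely, the brute-force substitution I have in mind would look something like the sketch below (oomaster1/oomaster are our names; the service names assume an Origin/OKD 3.x RPM single-master install and would be origin-master-api/origin-master-controllers on a native-HA setup; I have not tried this, and I suspect the certificates make it insufficient on its own):
    # Stop OKD-related services on the master
    systemctl stop origin-master origin-node etcd
    # Find every config file that still references the old name
    grep -rl 'oomaster1' /etc/origin /etc/etcd
    # Replace the old name with the new VIP name, keeping .bak backups
    grep -rl 'oomaster1' /etc/origin /etc/etcd | xargs sed -i.bak 's/oomaster1/oomaster/g'
    # Restart -- certificate errors are likely, since the certs were issued for oomaster1
    systemctl start etcd origin-master origin-node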

I think you should replace the existing cluster (master and nodes) with a new cluster configured with the new hostname, because the master and nodes are deployed with various certificates based on the master hostname, used for encrypted communication and authentication. I don't know whether the existing master hostname can be changed.
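For what it's worth, you can see how tightly the hostname is baked in by inspecting the Subject Alternative Names of the master's serving certificate (the path below is the usual one for an Origin/OKD 3.x install; adjust if yours differs):
    # List the hostnames/IPs embedded in the master certificate
    openssl x509 -in /etc/origin/master/master.server.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
If the new name (oomaster) is not listed there, clients will get TLS errors after a rename, so the certificates would have to be regenerated as well -- which is why simply editing the files under /etc/origin is unlikely to be enough.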

Related

Problems when switching DNS in an HA environment

When I use a DNS server + redis/mysql master/slave as an HA deployment, I found there are two problems:
When the redis/mysql master fails, I promote a slave to be the new master (Sentinel for redis, MHA for mysql). The domain name change may lag because of DNS caching, but we can lower the DNS TTL or turn off the nscd service.
Long-lived connections may keep talking to the old master (if the connection is not re-established), which causes problems.
My thought:
After pointing the domain name at the new master's IP address, we need to kill all existing connections (clients will reconnect and reach the new master) or power off the original master; a rough sketch of the kill approach follows below.
Are there any better ways?
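For MySQL, the "kill existing connections" step I have in mind is roughly the following, run against the old master (user names in the filter are placeholders; something like pt-kill or a proxy would be cleaner):
    # Generate and run KILL statements for all ordinary client threads,
    # forcing applications to reconnect and resolve the new address
    mysql -u root -p -N -e "SELECT CONCAT('KILL ', id, ';')
      FROM information_schema.processlist
      WHERE user NOT IN ('system user', 'repl', 'root');" | mysql -u root -p
    # For redis, normal clients can be dropped in one command
    redis-cli -h old-master.example.com CLIENT KILL TYPE normal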
If the two nodes are in the same datacenter, you could use a VIP (Virtual IP) and move the VIP to the new master using corosync; the failover is almost "instantaneous".
If the nodes are in two different datacenters, I think you can use ProxySQL, though I haven't tested ProxySQL yet.
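With the pcs tooling on top of pacemaker/corosync, the VIP resource would look something like this (IP, netmask and interface are placeholders):
    # Create a floating VIP that pacemaker/corosync moves between the nodes;
    # colocate it with the master role of your database resource as needed
    pcs resource create db-vip ocf:heartbeat:IPaddr2 \
        ip=192.168.1.100 cidr_netmask=24 nic=eth0 \
        op monitor interval=10s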

OpenShift Origin: Multiple Masters installation

I'm a bit new to OpenShift; I've already done an install with one master and multiple nodes (that setup is deleted now).
Now I need more reliability, so I'm currently preparing my hosts with three masters and two nodes (to start with).
I have DNS, DHCP and an etcd2 cluster up and running, with specific entries for the hosts, like:
openshift-router (external and internal access; this will be used as the LB with HAProxy)
openshift-etcd1
openshift-etcd2
openshift-etcd3
openshift-master1
openshift-master2
openshift-master3
openshift-node1
openshift-node2
But now I have three questions:
Where do I run my Ansible playbook from: the router or one of the masters?
Do I need to create a shared pool for docker-storage, or just create a new disk on each master?
Have you already experienced issues with the multi-master configuration?
I know I'm asking many questions, but they are all on the same subject: how to make an HA setup with OpenShift.
Thank you in advance.
I am able to answer only 2 of your questions.
It does not matter which host you run the Ansible playbook from. You can also run it from a host that is not part of the OpenShift setup at all (e.g. your laptop).
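For example, from a laptop that just has Ansible installed and SSH access to all hosts (the playbook path assumes an openshift-ansible checkout with the byo layout used by Origin 3.x; your inventory path will differ):
    # Run the cluster config playbook from any machine with SSH access to the hosts
    ansible-playbook -i ~/openshift-inventory \
        ~/openshift-ansible/playbooks/byo/config.yml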
Note: If you change a host's role (from master to node or vice versa), be sure to delete the Ansible facts before running the config playbook again; otherwise the changes won't be picked up.
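Clearing the cached facts on the re-purposed host is roughly the following (the facts file location is the Origin 3.x default and may vary between versions):
    # On the host whose role changed, drop the cached OpenShift facts
    # before re-running the config playbook
    rm -f /etc/ansible/facts.d/openshift.fact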
Follow-up: are you asking about the shared pool in reference to high availability of the masters?
A multi-master setup provisioned using the Ansible playbook works as expected. The state of the masters is synced via etcd.
Some minor issues faced:
There was a small hiccup w.r.t. the origin-node service not finding docker's PID. Workaround: https://github.com/kubernetes/kubernetes/issues/26259#issuecomment-229303274
openvswitch fails to set up networking correctly. Workaround: restart the openvswitch service.
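The openvswitch workaround is just a service restart on the affected node; restarting the node service afterwards usually lets the SDN re-initialise (service names assume an Origin 3.x RPM install):
    # On the node with broken SDN networking
    systemctl restart openvswitch
    systemctl restart origin-node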

Consistent deployment for ejabberd cluster in cloud?

From the docs: http://docs.ejabberd.im/admin/guide/clustering/#clustering-setup
Adding a node into the cluster is done by starting a new ejabberd node within the same network, and running a command from a cluster node. On the second node for example, as ejabberd is already started, run the following command as the ejabberd daemon user, using the ejabberdctl script: ejabberdctl join_cluster 'ejabberd@first'
How does this translate into a deployment in the cloud, where instances can (hopefully) be shut down/restarted from a consistent image and sit behind a load balancer?
Can all of them, including the initial instance, use "example.com" as "first" in the example above (assuming "example.com" is set up in DNS to point to the cloud load balancer)?
Or must the first instance not attempt to join a cluster, and subsequent ones all use the IP address of that initial instance instead of "first" (and if this is the case, does it get wacky if that initial instance goes down)?
Can all of them, including the initial instance, use "example.com" as "first" in the example above (assuming "example.com" is set up in DNS to point to the cloud load balancer)?
No, the node name parameter is the node name of an Erlang ejabberd node. It should, moreover, be on the internal Amazon network, not the public one, so it should not rely on a central DNS. It must be the name of an Erlang node, as the newly started node will connect to the existing node to share the same "cluster schema" and do an initial sync of the data.
So the deployment is done as follows:
The first instance does not need to join a cluster, as there is no cluster schema to share yet.
A new instance can use the node name of any other node of the cluster. It will add itself to the ejabberd cluster schema, so ejabberd knows that users can be on any node of this cluster. You can point to any running node in the cluster to add a new one, as they are all equivalent (there is no master).
You still need to configure the load balancer to balance traffic to the public XMPP port on all nodes.
You only need to perform the cluster join once per extra cluster node. The configuration with all the nodes is kept locally, so when you stop and restart a node, it will automatically rejoin the cluster once it has been properly set up.
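A typical bootstrap sequence therefore looks roughly like this (node names such as ejabberd@node1.internal are placeholders and must match the ERLANG_NODE configured on each instance; list_cluster is available in recent ejabberd releases):
    # On the first instance: just start ejabberd, no join needed
    ejabberdctl start
    # On every additional instance, after ejabberd has started, join via the
    # Erlang node name of any existing cluster member (internal address)
    ejabberdctl join_cluster 'ejabberd@node1.internal'
    # Verify cluster membership from any node
    ejabberdctl list_cluster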

MySQL Replication w/SSL -- How does it work?

Thank you for the help.
I was hoping someone could shed some light on how MySQL uses SSL. I am currently up and running with master/slave replication; however, I'd like to make sure the traffic is secured through SSL. The instructions I am using are here:
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-ssl.html
The question I have is: does the master or the slave store and use the .key file?
When I configure the slave, I use the CHANGE MASTER TO command and specify the CA, PATH, CERT, and KEY. Are these the files that are housed on my master server?
I'm also told to specify in my.cnf the [client] CA, CERT, and KEY. Again, are these the files on the master?
I guess I'm just not understanding the workflow. It would seem that the slave contacts the master, the master requires SSL, and then the slave requests the public key from the master to establish the secure connection.
Can anyone help me with this? Thanks again!
As shown in 6.3.6.3, Using SSL Connections, each client should have its own CA, cert, and key.
Similar options are used on the client side, although in this case, --ssl-cert and --ssl-key identify the client public and private key. Note that the Certificate Authority certificate, if specified, must be the same as used by the server.
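In other words, the CA, cert and key you pass to CHANGE MASTER TO (and list under [client] in my.cnf) live on the slave and identify the slave as a client; the master has its own ssl-ca/ssl-cert/ssl-key under [mysqld]. A rough sketch of the slave side (hostnames, credentials and paths are placeholders):
    # On the SLAVE: point replication at the master over SSL.
    # The files referenced here are stored on the slave itself; only the CA
    # must match the one that signed the master's certificate.
    mysql -u root -p -e "
      STOP SLAVE;
      CHANGE MASTER TO
        MASTER_HOST='master.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='repl_password',
        MASTER_SSL=1,
        MASTER_SSL_CA='/etc/mysql/ssl/ca-cert.pem',
        MASTER_SSL_CERT='/etc/mysql/ssl/client-cert.pem',
        MASTER_SSL_KEY='/etc/mysql/ssl/client-key.pem';
      START SLAVE;"
The [client] section on the slave only makes command-line tools such as mysql use that same client certificate; it is the CHANGE MASTER TO options that the replication I/O thread actually uses.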

Autoscaling MySQL on EC2

I need autoscaling of MySQL slaves on EC2. Can anybody guide me on how to do that and how to transfer load to a newly added instance?
I would use Opscode's Chef. You can create "roles" in Chef, such as a slave_server role. Make the role set up a new server and install MySQL (check out the Opscode-provided cookbooks for MySQL to do the first parts). Then write your own recipe to grab a copy of your slave DB (perhaps a recent snapshot of the EBS volume of one of your other slave servers) and use that to create a new EBS volume on your new server. Then it's just a matter of having your recipe configure the server as a slave and start replication so it catches up to the master.
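The final piece of such a recipe, once the data from the EBS snapshot is restored on the new instance, boils down to something like this (host name, credentials and binlog coordinates are placeholders; the real coordinates should come from the snapshot, e.g. a SHOW MASTER STATUS recorded when it was taken):
    # On the freshly provisioned slave: point it at the master and start replication
    mysql -u root -p -e "
      CHANGE MASTER TO
        MASTER_HOST='mysql-master.example.com',
        MASTER_USER='repl',
        MASTER_PASSWORD='repl_password',
        MASTER_LOG_FILE='mysql-bin.000123',
        MASTER_LOG_POS=4;
      START SLAVE;"
    # Check that the new slave is catching up
    mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'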