Active-Passive DR Strategy for OpenShift v4.x

We have OpenShift v4.3 deployed across two separate datacenters (Prod and DR) in an active/passive strategy. In the event of a disaster in Prod, our GTM will switch traffic to DR.
Prod: *.apps.cluster1.domain.com
DR: *.apps.cluster2.domain.com
But how can we implement a main OCP endpoint, *.apps.main_cluster_name.domain.com, so that it does not change on the user side even after a failover?
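For illustration, this is the kind of Route a shared endpoint would imply on each cluster: the same explicit host configured on both Prod and DR, while the GTM owns the DNS for *.apps.main_cluster_name.domain.com and resolves it to whichever cluster's routers are currently active. This is only a sketch; the application name, namespace, backing service and TLS settings are placeholders.

```yaml
# Sketch only: the same Route is created on cluster1 and cluster2 so that the
# shared hostname is served by whichever cluster the GTM points users at.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp                 # placeholder application name
  namespace: myproject        # placeholder project
spec:
  host: myapp.apps.main_cluster_name.domain.com   # shared hostname, independent of cluster1/cluster2
  to:
    kind: Service
    name: myapp               # placeholder backing service
  tls:
    termination: edge         # placeholder; use whatever TLS setup the app actually needs
```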

Related

What is the difference between the application console and the cluster console?

What is the difference between the application console and the cluster console in the OpenShift enterprise version? I am new to OpenShift and confused by the terminology. I think of OpenShift as being like the Linux kernel in a system (an analogy): on top of that sit containers, and to orchestrate them we have Kubernetes. However, the architecture of OpenShift seems to be the exact opposite. Please correct me.
OpenShift is just one of the available Kubernetes distributions; it adds enterprise-level services such as authentication, authorization and multitenancy.
The web console provides two perspectives: Administrator and Developer. The Developer perspective provides workflows specific to developer use cases, such as creating, deploying and monitoring applications, while the Administrator perspective is for managing cluster resources, users, and projects. Depending on your role, you will see a different set of views in the main menu.
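The roles referred to above are ordinary Kubernetes RBAC. As a sketch (the user and project names are hypothetical), granting a developer the built-in edit cluster role on a single project looks like this:

```yaml
# Sketch: bind the built-in "edit" cluster role to one user in one project.
# "alice" and "my-project" are placeholder names.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-edit
  namespace: my-project
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                  # built-in role: manage workloads, but not the project's RBAC or quotas
```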

Can OKD (OpenShift Origin) be used for a production-grade cluster?

I'm setting up a new k8s cluster, and I find the concepts of BuildConfig and ImageStream quite interesting. But buying OpenShift support is not an option, since the project context does not allow it.
So I was wondering whether it is safe to use an OKD cluster in a production environment, and whether there are examples of organisations already running it at production grade.
I was (and still am) in the same situation, with projects that could not afford to run on AWS or GCE clouds, so we deployed a 3-node single-master cluster, and later a 9-node HA cluster, in our own data centre. The HA architecture was based on the reference implementation at http://uncontained.io/.
So yes, it is certainly possible and thoroughly worth the effort. Our cluster is running Kafka, Spark, Neo4j, MongoDB, Jenkins and Cassandra, plus about 100 business application pods. The DevOps workflow in OpenShift (OKD) is the biggest benefit.
The learning curve is steep though. I have invested enormous amounts of time in reading up on persistent storage (GlusterFS in our case), networking, cluster architecture, etc. It is very important to script the provisioning process in a rigorously repeatable manner. You are going to stand up and tear down the initial cluster close to 100 times before it plays through reliably.
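For an OKD 3.x cluster like the one described, repeatable provisioning typically centres on keeping the openshift-ansible inventory (plus any pre/post playbooks) under version control. Below is a heavily trimmed, hypothetical sketch of such an inventory in YAML form; hostnames, node groups and the subdomain are placeholders, and a real install needs many more variables.

```yaml
# Trimmed, hypothetical openshift-ansible inventory (YAML form) for an OKD 3.x install.
all:
  children:
    OSEv3:
      children:
        masters:
          hosts:
            master1.example.com:
        etcd:
          hosts:
            master1.example.com:
        nodes:
          hosts:
            master1.example.com:
              openshift_node_group_name: node-config-master-infra
            node1.example.com:
              openshift_node_group_name: node-config-compute
            node2.example.com:
              openshift_node_group_name: node-config-compute
      vars:
        ansible_user: root
        openshift_deployment_type: origin            # OKD rather than OCP
        openshift_master_default_subdomain: apps.example.com
```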

MySQL replication with masters and slaves in different Kubernetes clusters using Calico as CNI plugin

I have a Kubernetes cluster in which there are some MySQL databases.
I want to have a replication slave for each database in a different Kubernetes cluster in a different datacenter.
I'm using Calico as CNI plugin.
To make the replication process work, the slaves must be able to connect to port 3306 of the master servers, and I would prefer to keep these connections as isolated as possible.
I'm wondering about the best approach to manage this.
One of the ways to implement your idea is to use a new tool called Submariner.
Submariner enables direct networking between pods in different Kubernetes clusters on prem or in the cloud.
This new solution overcomes barriers to connectivity between Kubernetes clusters and allows for a host of new multi-cluster implementations, such as database replication within Kubernetes across geographic regions and deploying service mesh across clusters.
Key features of Submariner include:
Compatibility and connectivity with existing clusters: Users can deploy Submariner into existing Kubernetes clusters, with the addition of Layer-3 network connectivity between pods in different clusters.
Secure paths: Encrypted network connectivity is implemented using IPSec tunnels.
Various connectivity mechanisms: While IPsec is the default connectivity mechanism out of the box, Rancher will enable different inter-connectivity plugins in the near future.
Centralized broker: Users can register and maintain a set of healthy gateway nodes.
Flexible service discovery: Submariner provides service discovery across multiple Kubernetes clusters.
CNI compatibility: Works with popular CNI drivers such as Flannel and Calico.
Prerequisites to use it:
At least 3 Kubernetes clusters, one of which is designated to serve as the central broker that is accessible by all of your connected clusters; this can be one of your connected clusters, but comes with the limitation that the cluster is required to be up in order to facilitate inter-connectivity/negotiation
Different cluster/service CIDRs (as well as different Kubernetes DNS suffixes) between clusters, to prevent traffic selector/policy/routing conflicts.
Direct IP connectivity between instances through the internet (or on the same network if not running Submariner over the internet). Submariner supports 1:1 NAT setups, but has a few caveats/provider specific configuration instructions in this configuration.
Knowledge of each cluster's network configuration
A Helm version that supports the crd-install hook (v2.12.1+)
You can find more info, including installation steps, on the Submariner GitHub page.
You may also find the Rancher Submariner multi-cluster article interesting and useful.
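On the isolation point from the original question: once the clusters can reach each other (via Submariner or any other tunnel), a standard Kubernetes NetworkPolicy on the master cluster can limit which sources may reach port 3306, and Calico enforces NetworkPolicy. A minimal sketch; the namespace, labels and remote pod CIDR are placeholders to adjust:

```yaml
# Sketch: only allow the replica cluster's pod CIDR to reach the MySQL masters on 3306.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-replication-only
  namespace: databases           # placeholder namespace holding the masters
spec:
  podSelector:
    matchLabels:
      app: mysql-master          # placeholder label on the master pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.43.0.0/16   # placeholder: pod CIDR of the replica cluster
      ports:
        - protocol: TCP
          port: 3306
```

Keep in mind that once a pod is selected by any NetworkPolicy, all ingress not explicitly allowed is dropped, so in-cluster clients of the masters would need their own allow rule.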
Good luck.

Openshift Routers: Should they be exposed for applications?

I have two OpenShift routers running as pods in OSE.
However, I don't see any associated services in my OpenShift cluster that forward traffic to or load-balance across them.
Should I expose my routers to the external world in a normal OSE environment?
Note that this is in a running OpenShift (OSE) cluster, so I do not think it would be appropriate to recreate the routers with new service accounts, and even if I did want to do this, it isn't always guaranteed that I will have access inside of OpenShift to do so.
If you are talking about the haproxy routers which are part of the OpenShift platform, and which handle routing of external HTTP/HTTPS requests through to the pods of an application that has been exposed using a route, then no, at the very least you should not expose them as an OpenShift Route. Adding a Route for them would be circular, since the router is what implements routes.
The incoming ports of the haproxy routers do need to be exposed outside of the cluster, but this should have been handled as part of the setup done when the OpenShift cluster was installed. Exactly what you needed to do to prepare for that depends on the target system into which OpenShift was installed.
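For contrast, the thing you do expose through those routers is an application Route. A minimal sketch (the names are placeholders); if spec.host is omitted, OpenShift generates a hostname under the cluster's default routing subdomain, and the haproxy routers serve it:

```yaml
# Sketch: a normal application Route, which the haproxy routers implement.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp          # placeholder
  namespace: myproject # placeholder
spec:
  to:
    kind: Service
    name: myapp        # the Service in front of the application pods
```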
It may be better to step back and explain the problem you are having. If it is an installation issue, you may be better off asking on one of the lists at:
https://lists.openshift.redhat.com/openshiftmm/listinfo
as those are more frequented by people familiar with installing OpenShift.

How does inter-gear networking security work in OpenShift?

The OpenShift Infinispan cartridge defines a number of ports which are used to communicate between gears.
Even after looking at the documentation, it's not clear to me what level of network separation OpenShift provides, and whether there is some level of security, e.g. such that only other gears within the same application can access these 'public IPs', or whether other organisations' apps might also be able to connect.
SELinux provides gear isolation (i.e. gears cannot interact with each other, which keeps gear A from running code in gear B's space). The major part of OpenShift's network security is handled by two routers (one for ports 80, 443, 8000 and 8443, and another for the 3550x+ port range). A node keeps track of which gears can bind to which ports and applies an SELinux context to the binding, which keeps gear A from binding to a port that gear B owns (in the 3550x+ range). Routes from the 3550x+ range can then be made to your gear running on a 127.x.x.x address, thus locking down the isolation and ensuring that there is no port crosstalk.
Red Hat has a good diagram of this.
The network security is done via SELinux. No gear can access another gear's services directly (except for port 8080, which is publicly available). The exception is a scaled application with a database: the database sits on its own gear and is accessible from more than just that gear.