I am setting up a new OpenShift system with Origin v3.11. The only relevant information I could find on the official website is "Disabling Features Using Feature Gates", and the Service Proxy Mode section of the documentation says that only two proxy modes are supported, so I am not sure whether IPVS mode has been removed.
I think Kubernetes 1.11 supports IPVS as a proxy mode, and OpenShift 3.11 is based on Kubernetes 1.11:
Kubelet Version: v1.11.0+d4cacc0
Kube-Proxy Version: v1.11.0+d4cacc0
Has anyone tried to enable the IPVS proxy mode on OpenShift 3.11? I have tried modifying the master-config.yaml and node-config.yaml files, and I also tried modifying the config map. I added something like:
feature-gates:
- SupportIPVSProxyMode=true
proxyArguments:
  cluster-cidr:
  - 10.128.0.0/14
  proxy-mode:
  - ipvs
  ipvs-min-sync-period:
  - 5s
  ipvs-sync-period:
  - 5s
  ipvs-scheduler:
  - rr
Then I restarted the master and node services.
I also have ipvsadm installed on all nodes.
But it does not seem to be working.
OpenShift only supports two service proxy modes:
iptables
userspace
In general, OpenShift by design only supports a subset of upstream Kubernetes features (though in some cases, such as route objects or the web console, OpenShift has extra features).
These features are cherry-picked by Red Hat engineering based on stability, supportability, and customer demand.
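If you want to confirm which mode kube-proxy is actually running in on a node, a quick check along these lines can help (a rough sketch; the log-inspection step assumes a default 3.11 install where kube-proxy runs inside the openshift-sdn pods):

# If IPVS were active, ipvsadm would list virtual servers for the cluster
# services; an empty table suggests kube-proxy is not using IPVS:
ipvsadm -Ln
# In iptables mode you will instead see the KUBE-SERVICES chains populated:
iptables -t nat -S KUBE-SERVICES | head
# The proxy mode in use is also logged by the SDN pods at startup
# (pick any pod from the sdn daemonset):
oc -n openshift-sdn logs <sdn-pod-name> | grep -i proxy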
I am looking for a way to install AppDynamics in an OpenShift cluster.
I am unable to find proper documentation on how to install it and what tools need to be installed.
Should my application's Dockerfile also include any images related to AppDynamics?
If anyone is familiar with this, please share some steps or provide references to documents.
Old docs: https://docs.appdynamics.com/22.2/en/infrastructure-visibility/monitor-containers-with-docker-visibility/use-docker-visibility-with-red-hat-openshift
New docs: https://docs.appdynamics.com/22.2/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent
Note that there is no single prescribed way to instrument as such; you need to make some decisions.
For example (from the second doc link):
The first decision is whether to use the officially released pre-built AppDynamics Operator images published on DockerHub and the Red Hat Registry, or to build a custom AppDynamics Operator image. See Build the Custom Cluster Agent Image.
The second decision is whether to use the officially released pre-built Cluster Agent images published on DockerHub and the Red Hat Registry, or to build a custom Cluster Agent image. See Cluster Agent Container Image.
The third decision is whether to install the Cluster Agent using the Kubernetes CLI or the Cluster Agent Helm Chart. See Install the Cluster Agent with the Kubernetes CLI and Install the Cluster Agent with Helm Charts.
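As a rough sketch of the Kubernetes-CLI route on OpenShift (the namespace, file names, and secret name below follow the linked docs but are assumptions on my part, so check the docs version that matches your controller):

# Create a namespace/project for the agent:
oc new-project appdynamics
# Deploy the Cluster Agent Operator from the manifest provided in the docs
# (the file name is a placeholder for whatever your docs version ships):
oc create -f cluster-agent-operator.yaml -n appdynamics
# Store the controller access key in a secret the agent reads:
oc -n appdynamics create secret generic cluster-agent-secret \
  --from-literal=controller-key='<your-controller-access-key>'
# Create the ClusterAgent custom resource pointing at your controller:
oc create -f cluster-agent.yaml -n appdynamics

With this Cluster Agent approach your application's Dockerfile does not need to bundle any AppDynamics images; instrumenting the application itself with an APM agent is a separate decision.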
I have a Kubernetes cluster in which there are some MySQL databases.
I want to have a replication slave for each database in a different Kubernetes cluster in a different datacenter.
I'm using Calico as CNI plugin.
To make the replication process work, the slaves must be able to connect to port 3306 of the master servers, and I would prefer to keep these connections as isolated as possible.
I'm wondering about the best approach to manage this.
One of the ways to implement your idea is to use a new tool called Submariner.
Submariner enables direct networking between pods in different Kubernetes clusters on prem or in the cloud.
This new solution overcomes barriers to connectivity between Kubernetes clusters and allows for a host of new multi-cluster implementations, such as database replication within Kubernetes across geographic regions and deploying service mesh across clusters.
Key features of Submariner include:
Compatibility and connectivity with existing clusters: Users can deploy Submariner into existing Kubernetes clusters, with the addition of Layer-3 network connectivity between pods in different clusters.
Secure paths: Encrypted network connectivity is implemented using IPSec tunnels.
Various connectivity mechanisms: While IPsec is the default connectivity mechanism out of the box, Rancher will enable different inter-connectivity plugins in the near future.
Centralized broker: Users can register and maintain a set of healthy gateway nodes.
Flexible service discovery: Submariner provides service discovery across multiple Kubernetes clusters.
CNI compatibility: Works with popular CNI drivers such as Flannel and Calico.
Prerequisites to use it:
At least 3 Kubernetes clusters, one of which is designated to serve as the central broker that is accessible by all of your connected clusters; this can be one of your connected clusters, but comes with the limitation that the cluster is required to be up in order to facilitate inter-connectivity/negotiation
Different cluster/service CIDRs (as well as different Kubernetes DNS suffixes) between clusters. This is to prevent traffic selector/policy/routing conflicts.
Direct IP connectivity between instances through the internet (or on the same network if not running Submariner over the internet). Submariner supports 1:1 NAT setups, but has a few caveats/provider specific configuration instructions in this configuration.
Knowledge of each cluster's network configuration
Helm version that supports crd-install hook (v2.12.1+)
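Once the clusters can reach each other, you can keep the replication traffic isolated with a Kubernetes NetworkPolicy, which Calico will enforce; a minimal sketch that only allows the remote cluster's pod CIDR to reach port 3306 of the master pods (the namespace, labels, and CIDR are placeholders for your own values):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-mysql-replication-only
  namespace: databases              # placeholder namespace
spec:
  podSelector:
    matchLabels:
      app: mysql-master             # placeholder label on the master pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.42.0.0/16          # placeholder: the slave cluster's pod CIDR
    ports:
    - protocol: TCP
      port: 3306
EOF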
You can find more info, with installation steps, on the Submariner GitHub page.
You may also find Rancher's Submariner multi-cluster article interesting and useful.
Good luck.
I have two openshift routers, running as pods, running in OSE.
However, I don't see any associated services in my OpenShift cluster that forward traffic to / load balance across them.
Should I expose my routers to the external world in a normal OSE environment?
Note that this is in a running OpenShift (OSE) cluster, so I do not think it would be appropriate to recreate the routers with new service accounts, and even if I did want to do this, it isn't always guaranteed that I will have access inside of OpenShift to do so.
If you are talking about the HAProxy routers which are a part of the OpenShift platform, and which handle routing of external HTTP/HTTPS requests through to the pods of an application that has been exposed using a route, then no, you should not expose them as an OpenShift Route. Adding a Route for them would be circular, as the router is what implements the route.
The incoming port of the HAProxy routers does need to be exposed outside of the cluster, but this should have been handled as part of the setup you did when the OpenShift cluster was installed. Exactly what you needed to do to prepare for that when installing the OpenShift cluster depends on the target system into which OpenShift was installed.
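If you want to check how the routers are currently exposed, something like this can help (in a default installation the router usually lives in the default project under the name router; adjust the names if your cluster differs):

# The router pods and, if one was created, their service:
oc get pods,svc -n default
# Default routers usually bind ports 80/443 directly on the infra nodes,
# which you can confirm from the deployment configuration:
oc get dc router -n default -o yaml | grep -iE 'hostNetwork|hostPort'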
It may be better to step back and explain the problem you are having. If it is an installation issue, you may be better off asking on one of the lists at:
https://lists.openshift.redhat.com/openshiftmm/listinfo
as that is more frequented by people more familiar with installing OpenShift.
What's the difference between OpenShift and Kubernetes and when should you use each? I understand that OpenShift is running Kubernetes under the hood but am looking to determine when running OpenShift would be better than Kubernetes and when OpenShift may be overkill.
In addition to the additional API entities, as mentioned by @SteveS, OpenShift also has advanced security concepts.
This can be very helpful when running in an Enterprise context with specific requirements regarding security.
As much as this can be a strength for real-world applications in production, it can be a source of much frustration in the beginning.
One notable example is the fact that, by default, containers run as root in Kubernetes, but run as an arbitrary user with a high UID (e.g. 1000090000) in OpenShift. This means that many containers from DockerHub do not work as expected. For some popular applications, the Red Hat Container Catalog supplies images with this feature/limitation in mind. However, this catalog contains only a subset of popular containers.
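If you just want a third-party image from DockerHub to run while experimenting, one common (but security-relaxing) workaround is to let the project's default service account run containers with whatever UID the image expects; a minimal sketch, assuming the project is called myproject:

# Grants the anyuid SCC to the default service account in myproject; this
# weakens OpenShift's default isolation, so for anything production-like
# prefer images built to tolerate arbitrary UIDs instead:
oc adm policy add-scc-to-user anyuid -z default -n myproject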
To get an idea of the system, I strongly suggest starting out with Kubernetes. Minikube is an excellent way to quickly set up a local, one-node Kubernetes cluster to play with. When you are familiar with the basic concepts, you will better understand the implications of the OpenShift features and design decisions.
OpenShift includes a distribution of Kubernetes, so if you don't need any of the added features of OpenShift, such as the web console, builds, advanced deployment models and much, much more, you can simply choose to ignore them.
Here's a summary of items available on the OpenShift website.
Kubernetes comes with Ingress rules, but OpenShift comes with Routes.
Kubernetes has IngressControllers, while OpenShift has the HAProxy-based Router.
Switching namespaces in the CLI is very easy in OpenShift, but in Kubernetes you need to create a context and switch between contexts (see the commands after this list).
The OpenShift UI is more interactive and informative than the Kubernetes one.
To build a container image inside the cluster, OpenShift has BuildConfig; Kubernetes has nothing comparable, so you have to build the image and push it to a registry yourself.
OpenShift has Pipelines, where you don't need a separate Jenkins setup to deploy an app; Kubernetes does not have this.
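For example, switching the active namespace looks like this in the two CLIs (my-namespace and the context names are placeholders):

# OpenShift: one command switches the current project
oc project my-namespace

# Kubernetes: either change the namespace of the current context...
kubectl config set-context --current --namespace=my-namespace
# ...or create a dedicated context and switch to it
kubectl config set-context my-context --namespace=my-namespace \
  --cluster=my-cluster --user=my-user
kubectl config use-context my-context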
The easiest way to differentiate between them is to understand that while vanilla K8s is a community project, OpenShift is focused on being an enterprise-ready product. Resources like ImageStreams, BuildConfigs, Builds, DeploymentConfigs, and Routes, along with functionality like S2I and the Router, make it easier for developers and admins alike to use OCP for development, deployment, and lifecycle management. You can refer to https://cloud.redhat.com/learn/topics/kubernetes/ for more information on the key differences between them.
OCP makes your life much easier by giving you easy actions through the oc CLI and a fine-grained web console.
You can try OCP and get first-hand experience of its features using https://developers.redhat.com/developer-sandbox, where you can quickly get access to a sandboxed environment in a shared cluster.
I'm trying to deploy a FIWARE-based system composed of Orion CB and some web apps, and I want to include security in this deployment by using the different security GEs available.
Orion CB runs on CentOS and (e.g.) authorization-pdp-authzforce runs on Ubuntu LTS. Does this mean that I need to use (e.g.) two VPSs?
AuthZForce is not supposed to be run on the same machine as Orion Context Broker. This doesn't mean that you can't, only that you are not forced to do it.
What would make more sense to run on the same machine (but still it is not a requirement) is to run a policy enforcement proxy. At the moment there are a couple that work with FIWARE:
FIWARE Wilma PEP Proxy.
FIWARE Orion PEP.
Both of them are developed using NodeJS, so they should run on CentOS just fine. Keep in mind that even if you use AuthZForce, you still need a PEP proxy.
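As a minimal sketch of how the pieces fit together, assuming Wilma PEP Proxy is listening on port 1027 in front of Orion on its default port 1026 (host, port, and token are placeholders):

# Clients call the PEP proxy instead of Orion directly; the proxy validates
# the token with the IdM (and, if configured, asks AuthZForce for an
# authorization decision) before forwarding the request to Orion on 1026:
curl 'http://pep-proxy-host:1027/v2/entities' \
  -H 'X-Auth-Token: <token-from-the-IdM>' \
  -H 'Accept: application/json'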