What's the difference between OpenShift and Kubernetes and when should you use each? I understand that OpenShift is running Kubernetes under the hood but am looking to determine when running OpenShift would be better than Kubernetes and when OpenShift may be overkill.
In addition to the extra API entities mentioned by SteveS, OpenShift also has more advanced security concepts.
This can be very helpful when running in an Enterprise context with specific requirements regarding security.
As much as this can be a strength for real-world applications in production, it can be a source of much frustration in the beginning.
One notable example is the fact that, by default, containers run as root in Kubernetes but under an arbitrary user with a high ID (e.g. 1000090000) in OpenShift. This means that many containers from Docker Hub do not work as expected. For some popular applications, the Red Hat Container Catalog supplies images built with this feature/limitation in mind, but that catalog covers only a subset of popular containers.
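If you control the image, the usual workaround is to make the relevant files writable by the root group, since the arbitrary UID OpenShift assigns is always a member of GID 0. A minimal Dockerfile sketch (the base image, paths and command here are just placeholders):

```dockerfile
# Sketch: prepare an image so it still works when OpenShift runs it
# under an arbitrary, non-root UID.
FROM node:18-alpine

WORKDIR /app
COPY . /app

# The arbitrary UID is always in the root group (GID 0), so give the
# group the same permissions as the owner.
RUN chgrp -R 0 /app && chmod -R g=u /app

# Declaring a non-root user also keeps the image usable on plain Kubernetes/Docker.
USER 1001
CMD ["node", "server.js"]
```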
To get an idea of the system, I strongly suggest starting out with Kubernetes. Minikube is an excellent way to quickly set up a local, one-node Kubernetes cluster to play with. When you are familiar with the basic concepts, you will better understand the implications of the OpenShift features and design decisions.
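For example, a first session with Minikube might look like this (standard commands; exact flags can vary by version, and the nginx deployment is just a placeholder workload):

```sh
# Assumes minikube and kubectl are already installed
minikube start                                    # create a local one-node cluster
kubectl get nodes                                 # verify the node is Ready
kubectl create deployment hello --image=nginx     # run a test workload
kubectl expose deployment hello --port=80 --type=NodePort
minikube service hello --url                      # print a local URL for the service
```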
OpenShift includes a distribution of Kubernetes, so if you don't need any of OpenShift's added features, such as the web console, builds, advanced deployment models and much, much more, you can simply choose to ignore them.
Here's a summary of items available on the OpenShift website.
- Kubernetes comes with Ingress rules, but OpenShift comes with Routes.
- Kubernetes has an IngressController, but OpenShift has a Router based on HAProxy.
- Switching namespaces in the CLI is very easy in OpenShift, whereas in Kubernetes you need to create contexts and switch between them (see the sketch below).
- The OpenShift UI is more interactive and informative than the Kubernetes one.
- To build container images inside the cluster, OpenShift has BuildConfig; Kubernetes has nothing comparable, so you have to build the image yourself and push it to a registry.
- OpenShift has Pipelines, so you don't need a separate Jenkins to deploy an app; Kubernetes has no equivalent.
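To make the CLI points above concrete, a short sketch (project, app and hostname are made up; exact flags depend on your client version):

```sh
# OpenShift: switching the active project/namespace is a single command
oc project my-namespace

# Kubernetes: either pass the namespace every time...
kubectl get pods -n my-namespace
# ...or change the namespace recorded in your current context
kubectl config set-context --current --namespace=my-namespace

# OpenShift Routes vs Kubernetes Ingress: a route can be created directly
# from a service, and the built-in HAProxy router serves it
oc expose service my-app --hostname=my-app.example.com
```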
The easiest way to differentiate between them is to understand that while vanilla K8s is a community project, OpenShift is focused on being an enterprise-ready product. Resources like ImageStreams, BuildConfigs, Builds, DeploymentConfigs and Routes, along with functionality like S2I (Source-to-Image) and the Router, make it easier for developers and admins alike to use OCP for development, deployment and lifecycle management. You can refer to https://cloud.redhat.com/learn/topics/kubernetes/ for more information on the key differences between them.
OCP makes your life much easier by providing easy actions through the oc CLI and a fine-grained web console.
You can try OCP and get first-hand experience of the features using https://developers.redhat.com/developer-sandbox, where you can quickly get access to a sandboxed environment in a shared cluster.
What is the difference between the application console and the cluster console in the OpenShift enterprise version? I am new to OpenShift and confused by the terminology. I think of OpenShift like the Linux kernel in a system (an analogy): on top of that are containers, and to orchestrate them we have Kubernetes. However, OpenShift's architecture seems to be the exact opposite. Please correct me.
OpenShift is just one of the available Kubernetes distributions, which adds enterprise-level services like authentication, authorization and multitenancy.
The web console provides two perspectives: Administrator and Developer. The Developer perspective provides workflows specific to developer use cases, such as creating, deploying and monitoring applications, while the Administrator perspective is for managing cluster resources, users and projects. Depending on your role, you will see a different set of views available in the main menu.
I need to migrate my on-premises applications to OpenShift containers that will be hosted in an AWS IaaS environment. When looking into the possibilities of hosting COTS (commercial off-the-shelf) applications in OpenShift containers, I have been unable to reach a solution.
Can you please share your knowledge and, if possible, any use case or sample procedure?
Take a look at OpenShift Primed for the Red Hat certified ISV programme:
https://hub.openshift.com/primed
Although, re-reading your question, it seems to be more about "can I containerise my existing application?" That is relatively straightforward to do technically (a rough sketch of the flow is below), but you will need to check the vendor's support and licensing conditions for it.
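As a very rough sketch, assuming the vendor permits running the application in a container (the image name and registry are made up):

```sh
# Build a container image for the application (if the vendor doesn't already provide one)
docker build -t registry.example.com/myorg/cots-app:1.0 .
docker push registry.example.com/myorg/cots-app:1.0

# Deploy it on OpenShift and expose it
oc new-app registry.example.com/myorg/cots-app:1.0 --name=cots-app
oc expose service cots-app
```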
I have two OpenShift routers running as pods in OSE.
However, I don't see any associated services in my OpenShift cluster which forward traffic to or load-balance them.
Should I expose my routers to the external world in a normal OSE environment?
Note that this is a running OpenShift (OSE) cluster, so I do not think it would be appropriate to recreate the routers with new service accounts, and even if I did want to do this, it isn't always guaranteed that I will have the access inside OpenShift to do so.
If you are talking about the HAProxy routers which are part of the OpenShift platform, and which handle routing of external HTTP/HTTPS requests through to the pods of an application that has been exposed using a route, then no, at the very least you should not expose them as an OpenShift Route. Adding a Route for them would be circular, as the router is what implements Routes.
The incoming port of the HAProxy routers does need to be exposed outside of the cluster, but this should have been handled as part of the setup when the OpenShift cluster was installed. Exactly what you needed to do to prepare for that depends on the target system into which OpenShift was installed.
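To check how the routers are currently exposed, something like the following usually suffices (the 'default' namespace and the resource names are typical for OSE 3.x installs but can vary):

```sh
# The platform routers normally live in the 'default' namespace and bind
# host ports 80/443 on the infrastructure nodes
oc get pods -n default -o wide | grep router
oc get svc -n default
oc get dc router -n default
```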
It may be better to step back and explain the problem you are having. If it is an installation issue, you may be better off asking on one of the lists at:
https://lists.openshift.redhat.com/openshiftmm/listinfo
as that is more frequented by people more familiar with installing OpenShift.
My project is currently hosted by an independent cloud provider.
I am using 2 Virtual Machines, with Linux:
one hosts a Go application
one hosts a MySQL database
I would now like to move to the Google Cloud Platform.
Do you think it makes sense to move to Google Container Engine (GKE), rather than to Google Compute Engine (which would use the same virtual machine model (IaaS) I am using with the current provider)?
I have never used Kubernetes and Docker. How easy would it be to make the migration? Am I going to complicate my life uselessly?
How difficult is the configuration for my simple model?
I have never used Kubernetes and Docker.
Moving to a platform that you have no experience with doesn't sound like a great idea. Instead, why not start by doing some tutorials about Docker and then Kubernetes?
After that, you might try Minikube (https://kubernetes.io/docs/getting-started-guides/minikube/) locally to start writing some manifests for the components (which sound like maybe a DaemonSet or single Pod with PersistentVolume for MySQL and a Deployment for the Go application).
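For example, a first pass at those manifests might look roughly like this (names, images and sizes are placeholders, and the Secret referenced for the MySQL password is assumed to exist):

```yaml
# MySQL: a single-replica Deployment backed by a PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels: {app: mysql}
  template:
    metadata:
      labels: {app: mysql}
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom: {secretKeyRef: {name: mysql-secret, key: password}}  # assumed Secret
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        persistentVolumeClaim: {claimName: mysql-data}
---
# The Go application as a stateless Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
spec:
  replicas: 2
  selector:
    matchLabels: {app: go-app}
  template:
    metadata:
      labels: {app: go-app}
    spec:
      containers:
      - name: go-app
        image: gcr.io/my-project/go-app:latest   # placeholder image
        ports:
        - containerPort: 8080
```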
Once you have the pieces working locally, then it would probably make more sense to think about migrating. You would have a much better understanding of what you are getting into and if it is something you would want to undertake.
I have a publicly accessible database on RDS that works like a charm from NetBeans. I would like to deploy my Java application on AWS. What is the simplest way to do this? I will only use the application for some very basic tasks, getting used to cloud computing on a small scale. Is EC2 my best bet, and is it possible to upload apps as easily as with the Google App Engine plugin? Can I use the same JDBC driver as I use locally, and can I use JPA against the database? I would rather not use Eclipse for now, as I am in a bit of a hurry and need to get this working as soon as possible.
This is a lot of questions for one question, but I'll see if I can help you out.
1. Simplest Way to deploy to AWS
If this application is as simple as you say it is, the most cost effective solution while you're getting used to AWS will be to deploy to a micro instance and take advantage of the free tier. From Amazon:
AWS Free Tier includes 750 hours of Linux and Windows Micro Instances each month for one year. To stay within the Free Tier, use only EC2 Micro instances.
The simplest way to deploy directly from Netbeans is to use the integrated Elastic Beanstalk support. This saves you from having to configure things yourself.
Another option is to launch an Ubuntu AMI and install Tomcat yourself. Create a WAR file from your application and place it where Tomcat can find it. I suggest using the first method.
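If you do go the manual route, the steps look roughly like this (package names and paths vary by Ubuntu release):

```sh
# On the Ubuntu instance
sudo apt-get update
sudo apt-get install -y tomcat7          # or tomcat8/tomcat9 on newer releases

# Copy your WAR into Tomcat's webapps directory; it will be auto-deployed
sudo cp myapp.war /var/lib/tomcat7/webapps/
sudo service tomcat7 restart
```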
2. Is EC2 my best bet?
This is a little open ended. For a nice learning experience as you get accustomed to AWS, the free tier for EC2 is a nice platform to learn with. If your application eventually needs to scale, Elastic Beanstalk is a pretty simple way to manage an application. My answer is an opinion, because "best bet" depends solely on the requirements of your application, but I say yes.
3. Is it possible to upload apps as easily as with the Google App Engine plugin?
For simple applications I think so. I think it's even easier if you switch to Eclipse and use the toolkit for AWS. Whether Google App Engine or AWS is easier for you will once again depend on personal preference, the application, and your requirements.
4. Can I use the same JDBC driver as I use locally?
If you're using MySQL Connector/J then yes. Read this to understand how it works with RDS.
5. Can I use JPA against the database?
Yes. You'll change the endpoint from localhost to the endpoint of your RDS instance.
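For example, with plain Connector/J only the host in the JDBC URL changes (the endpoint, schema and credentials below are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class RdsConnect {
    public static void main(String[] args) throws Exception {
        // Same driver and URL format as locally; only the host changes.
        String url = "jdbc:mysql://mydb.abc123xyz.us-east-1.rds.amazonaws.com:3306/mydatabase";
        try (Connection conn = DriverManager.getConnection(url, "dbuser", "dbpassword")) {
            System.out.println("Connected to: " + conn.getMetaData().getURL());
        }
    }
}
```

With JPA, the same URL goes into the JDBC URL property of your persistence.xml (or your persistence provider's equivalent configuration).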
6. I would rather not use Eclipse for now...
Another personal preference, but the AWS toolkit for Eclipse is very easy to use and can speed the process up a bit.