Is it possible to have a single application with multiple environments in OpenShift? I want to have environments like development, test, and production in OpenShift. Is there a way to do this type of deployment automatically?
It's definitely possible to accomplish this in OpenShift. You can leverage Jenkins to do your builds and Git to create dev, test, and prod branches.
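As a minimal sketch, assuming one OpenShift project per environment and a BuildConfig named myapp in each (all project and image names below are hypothetical), a declarative Jenkins pipeline could map Git branches to environments like this:

```groovy
// Hypothetical Jenkinsfile: promote the build to an OpenShift project
// based on the branch being built. Assumes the oc CLI is available on
// the agent and is already logged in to the cluster.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Deploy') {
            steps {
                script {
                    // Map Git branches to OpenShift projects (all names hypothetical)
                    def project = [dev: 'myapp-dev', test: 'myapp-test', prod: 'myapp-prod'][env.BRANCH_NAME]
                    if (project) {
                        sh "oc start-build myapp -n ${project} --follow"
                    }
                }
            }
        }
    }
}
```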
Related
I have a load-balanced Elastic Beanstalk (EB) environment, running a PHP application on an Apache server.
We have successfully deployed identical software to a test environment in this AWS account as a pre-production test. This went as expected, and the software updated with each CLI deployment.
I cloned this environment in order to deploy the production instance. Generally, deploying the application via the EB CLI results in a healthy instance. I say generally because occasionally the health shows as degraded; to fix this, I select the latest application version and deploy it to the instance via the admin interface. This feels like a workaround, because the console already shows the correct version as the one deployed.
The problem I am having now is in changing the environment variables to point to the production database. When I change these via the Configuration > Software section, no changes are stored. When I hit 'Apply', the environment starts to transition; when this completes, the instance health has degraded and the changes made to the configuration have not persisted.
I don't really see a pattern here, and it's behaving in a way that differs from the way the test instance did; I had no problems there.
Any suggestions on how to get past this?
I have OpenShift 3.9 installed in one AWS region (Ohio), with Jenkins installed in it. I have pipeline code that takes Java code from GitHub, binds it with JBoss, and deploys it to the project test within the same cluster. This works fine, and I'm able to access the app: the pod gets created and the app is bound with JBoss. Now I want to deploy this application across different clusters, either within the same region or across different regions. Is there a way to achieve this?
You can use the oc command-line tool in your Jenkins pipeline to deploy to a different cluster. For a related example, check the GitLab review apps example using an OpenShift cluster. It does something similar: the CI pipeline deploys the required artifacts to an OpenShift cluster using oc and the appropriate credentials.
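As a minimal sketch, assuming the target cluster's API URL and a Jenkins token credential for it (both placeholders below are hypothetical), a pipeline stage could log in to the remote cluster and trigger the deployment there:

```groovy
// Minimal sketch: deploy to a second OpenShift cluster from Jenkins.
// 'remote-cluster-token' and the API server URL are hypothetical
// placeholders; the token would be stored as a Jenkins secret.
pipeline {
    agent any
    stages {
        stage('Deploy to remote cluster') {
            steps {
                withCredentials([string(credentialsId: 'remote-cluster-token', variable: 'TOKEN')]) {
                    sh '''
                        oc login https://api.remote-cluster.example.com:6443 --token="$TOKEN"
                        oc project test
                        oc start-build myapp --follow
                    '''
                }
            }
        }
    }
}
```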
What's the difference between OpenShift and Kubernetes and when should you use each? I understand that OpenShift is running Kubernetes under the hood but am looking to determine when running OpenShift would be better than Kubernetes and when OpenShift may be overkill.
In addition to the extra API entities mentioned by @SteveS, OpenShift also has advanced security concepts.
This can be very helpful when running in an enterprise context with specific security requirements.
As much as this is a strength for real-world applications in production, it can be a source of much frustration in the beginning.
One notable example is the fact that, by default, containers run as root in Kubernetes, but run under an arbitrary user with a high ID (e.g. 1000090000) in OpenShift. This means that many containers from Docker Hub do not work as expected. For some popular applications, the Red Hat Container Catalog supplies images built with this feature/limitation in mind. However, this catalog contains only a subset of popular containers.
To get an idea of the system, I strongly suggest starting out with Kubernetes. Minikube is an excellent way to quickly set up a local, one-node Kubernetes cluster to play with. Once you are familiar with the basic concepts, you will better understand the implications of OpenShift's features and design decisions.
OpenShift includes a distribution of Kubernetes, so if you don't need any of OpenShift's added features, such as the web console, builds, advanced deployment models, and much more, you can simply choose to ignore them.
Here's a summary of items available on the OpenShift website.
- Kubernetes comes with Ingress rules, whereas OpenShift comes with Routes.
- Kubernetes has an IngressController, whereas OpenShift has a Router (HAProxy).
- Switching namespaces in the OpenShift CLI is very easy (oc project <name>), but in Kubernetes you need to create a context and switch between contexts (kubectl config use-context <name>).
- The OpenShift UI is more interactive and informative than the Kubernetes one.
- OpenShift has BuildConfig for building Docker images inside the cluster, whereas Kubernetes has nothing comparable; you need to build the image yourself and push it to a registry.
- OpenShift has Pipelines, so you don't need Jenkins to deploy an app, whereas Kubernetes does not.
The easiest way to differentiate between them is to understand that while vanilla K8s is a community project, OpenShift is more focused on being an enterprise-ready product. Resources like ImageStreams, BuildConfigs (BC), Builds, DeploymentConfigs (DC), and Routes, along with functionalities like S2I and the Router, make it easier for developers and admins alike to use OCP for development, deployment, and lifecycle management. You can refer to https://cloud.redhat.com/learn/topics/kubernetes/ for more information on the key differences between them.
OCP makes your life much easier by providing straightforward actions through the oc CLI and a fine-grained web console.
You can try OCP and get first-hand experience of its features at https://developers.redhat.com/developer-sandbox, where you can quickly get access to a sandboxed environment in a shared cluster.
We have a WordPress application on an Azure Web App.
We have two environments there: DEV and PROD. PROD has a staging swap slot.
DEV and PROD obviously use different MySQL databases (we are using MySQL In App now, but the same question applies to a ClearDB/MySQL setup).
So the question is: how do we do blue-green deployment? What do we do with the databases?
We have Travis configured to deploy code to the different environments. But PROD's database will be updated by visitors as they use the site, while DEV's will be updated by developers (and, of course, will not have the visitors' changes from PROD).
Are there any solutions/practices for handling this?
P.S. There is one more issue: "MySQL In App (preview)" does not allow separate databases for the Web App and its staging swap slot. This creates an additional headache for us.
We are going to use a Hudson/Jenkins build server both to build our server applications (just calling Maven) and to run integration tests against them. We are going to prepare three Hudson/Jenkins jobs: build, deploy, and run integration tests, which call each other in this order. All of these jobs (build, deploy, integration tests) will run nightly.
The integration tests are written with JUnit and are invoked by mvn test (which will in turn be invoked by the "test" Hudson/Jenkins job). Since they require the server to be up and running, we have to run the "deploy" job first.
Does this make sense? Is there any special server for deploying the application and running tests, or is Hudson/Jenkins OK for that?
It definitely makes sense; basically you are referring to a build pipeline. There is a Jenkins plugin to help visualize the upstream/downstream projects (you create a new pipeline view in Jenkins).
As for the deployment of the server component, this depends on what technology/stack you are running on. For instance, you could write a script that deploys the application to a test environment as a post-build step in Jenkins.
Another option is to use a Maven plugin to deploy the application. You can put the deployment step in a separate profile and run only the deploy goal in the deploy job, etc.
Basically there are a lot of options, but the idea of a build pipeline makes a lot of sense. To read up on build pipelines and related topics I would suggest taking a look at Continuous Deployment.
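As a rough sketch of how this build, deploy, and test chain might look as a single declarative Jenkins pipeline (the deploy script and artifact path below are hypothetical placeholders):

```groovy
// Rough sketch of the build -> deploy -> test chain as one declarative
// pipeline. The deploy script and artifact path are hypothetical.
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')  // nightly run, as described in the question
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package -DskipTests'
            }
        }
        stage('Deploy') {
            steps {
                // e.g. copy the artifact to the test server and restart it
                sh './deploy-to-test.sh target/myapp.war'
            }
        }
        stage('Integration Tests') {
            steps {
                sh 'mvn test'  // runs the JUnit integration tests against the running server
            }
        }
    }
}
```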
For more information related to Jenkins, have a look at this video.
Does this make sense? Is there any special server for deploying the application and running tests, or is Hudson/Jenkins OK for that?
You can run the application on the same server as Jenkins, but whether that makes sense depends on the application. If it depends heavily on a specific server setup, a better choice may be to run the server in a VM and put the configuration in source control. There are plenty of tools to help automate this; off the top of my head there are Puppet, Chef, and Vagrant.
Depending on the technology of your server, you could do all of this in a single Hudson project, executing your integration tests using Maven's Failsafe plugin instead of Surefire.
This allows you to start and deploy prior to executing the integration tests, and shut down your server after they have completed. It also allows you to separate your integration tests from your unit tests.
For Java EE applications, you can perform the start/deploy/stop steps using Cargo, or use an embedded Jetty container and the Jetty Maven plugin.
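As a minimal sketch, assuming Failsafe's default convention of naming integration tests *IT.java, the whole cycle can be driven from a single Jenkins stage that runs Maven's verify phase (container start/stop, e.g. via Cargo or the Jetty Maven plugin, would be bound to the pre-/post-integration-test phases in the pom):

```groovy
// Minimal sketch: a single job that runs unit tests (Surefire) and then
// integration tests (Failsafe) via Maven's verify phase. Starting and
// stopping the server would be configured in the pom, not here.
pipeline {
    agent any
    stages {
        stage('Build and Test') {
            steps {
                sh 'mvn clean verify'
            }
        }
    }
}
```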