Redeploy OpenShift Application when Docker Hub Image Changes?

Is there a way to trigger a redeploy when I push an image to Docker Hub? I used S2I to build an image, put it up on Docker Hub, and did a deployment from there. How can I trigger a new deployment when I push a new image to Docker Hub?
Perhaps there is a better way? I created a WildFly image with the changes to the standalone.xml I needed. Then I used S2I to build my local source into a runnable WildFly application image, which is what I pushed and deployed. I'm trying to get around having to go through a GitHub repository.
I'm thinking I could create an application with the custom WildFly image I created and use the deploy-directly-from-IDE option for the application, but what if I want to use the command line?

You can set a scheduled flag on the image stream to have the remote registry polled periodically. This will only work, though, if the OpenShift cluster has been configured globally to allow it. If you are using OpenShift Online, I don't believe that feature is enabled.
https://docs.openshift.com/container-platform/latest/dev_guide/managing_images.html#importing-tag-and-image-metadata
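For example, assuming an image stream named myimage that tracks an image on Docker Hub (all names here are placeholders), scheduled imports could be enabled with something like:

oc tag --scheduled=true docker.io/youruser/yourimage:latest myimage:latest

A deployment config with an ImageChange trigger on myimage:latest should then roll out automatically whenever the periodic import picks up a new image.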
If you want to avoid using a Git repository, you can use a binary input build instead. This allows you to push files directly from your local computer, which means you can compile binary artifacts locally and push them into the S2I build done by OpenShift.
https://docs.openshift.com/container-platform/latest/dev_guide/builds/build_inputs.html#binary-source
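As a rough sketch (the build and image stream names are just examples), a binary S2I build fed from a local directory might look like:

oc new-build --binary=true --name=myapp --image-stream=wildfly
oc start-build myapp --from-dir=. --follow

Each oc start-build --from-dir upload kicks off a new S2I build from whatever is in your current directory, with no Git repository involved.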

Related

OpenShift BuildConfig vs Jenkinsfile

We are using OpenShift, and I am confused about BuildConfig files vs Jenkinsfiles. Do we need both of them, or is one sufficient? I have seen examples where the Docker build in a Jenkinsfile is defined using a BuildConfig file, and other cases where the BuildConfig uses a Jenkinsfile as the build strategy. Can someone please clarify this?
BuildConfig is the base type for all builds. There are different build strategies that can be used in a BuildConfig; by running oc explain buildconfig.spec.strategy you can see them all. If you want to do a Docker build you use the dockerStrategy, and if you want to build from source code using source-to-image you specify the sourceStrategy.
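As an illustration (the names and repository URL below are placeholders, not from the question), creating a BuildConfig with each strategy from the command line might look like:

oc new-build --strategy=docker --name=myapp-docker https://github.com/example/myapp.git
oc new-build --strategy=source --name=myapp-s2i wildfly:latest~https://github.com/example/myapp.git

The first generates a BuildConfig using the dockerStrategy (it expects a Dockerfile in the repository); the second generates one using the sourceStrategy with the wildfly builder image.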
Sometimes you have more complex needs than simply running a build with an output image, let's say you want to run the build, wait for that image to be deployed to some environment and then run some automated GUI tests. In this case you need a pipeline. If you want to trigger and configure this pipeline from the OpenShift Web Console you would use the jenkinsPipelineStrategy in your BuildConfig. In the OpenShift 3.x web console such BuildConfigs are presented as Pipelines and not Builds even though they are all really BuildConfigs.
Any BuildConfig with the jenkinsPipelineStrategy will be executed by the Jenkins Build Server running inside the project. That Jenkins instance could also have other pipelines that are not mapped or visible in the OpenShift Web Console, there does not need to be a BuildConfig for every Jenkinsfile if you don't see the benefit of them appearing in the OpenShift Web Console.
The difference between running builds inside a Jenkinsfile and via a BuildConfig with some non-Jenkinsfile strategy is that the former is executed inside a Jenkins build agent rather than in a normal OpenShift build pod.
At our company we use a combination of Jenkinsfile pipelines and BuildConfigs with the sourceStrategy. Instead of running builds in our Jenkinsfile pipelines directly inside the Jenkins build agent, we let the pipeline call the OpenShift API and tell it to execute the BuildConfig with the sourceStrategy. So basically we still use S2I for building the images, but use the Jenkinsfile as our CI/CD pipeline engine. You can find some examples of this at https://github.com/openshift/jenkins-client-plugin.
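As a hedged sketch of that pattern (whether done through the jenkins-client-plugin DSL or a plain shell step; the BuildConfig and DeploymentConfig names are placeholders), the pipeline could trigger the same thing via the CLI:

oc start-build myapp-s2i --wait --follow
oc rollout status dc/myapp

The pipeline stays in charge of ordering, waiting and testing, while the actual image build still happens in an OpenShift S2I build pod.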

Bluemix Devops and Cast iron Containers

I want to use a DevOps pipeline for building and deploying Cast Iron orchestrations in Cast Iron containers. How can I create the Cast Iron container through a DevOps pipeline?
Do I need to upload the Cast Iron Docker image into a Git repository and configure the build and deploy stages?
Yes, you need to first upload the Cast Iron Docker image. The steps in this link might help.
Generally, when working with containers, you should treat the output of any container build as an immutable artefact. As such, an additional stage in your pipeline could publish the container to a registry or an artefact repository.
The artefact (container) is then available in subsequent stages of your pipeline, i.e. the container can be pulled down and deployed in a deploy stage.
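As a rough sketch, with a hypothetical private registry and image name, the publish stage could run something like:

docker build -t registry.example.com/myteam/castiron-app:1.0 .
docker push registry.example.com/myteam/castiron-app:1.0

and a later deploy stage could then pull and run that exact tag:

docker pull registry.example.com/myteam/castiron-app:1.0
docker run -d registry.example.com/myteam/castiron-app:1.0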
In terms of pushing your container into Git, you can do that directly within a script or with the Bluemix DevOps Service GUI.
Documentation on how you can achieve this with just the Bluemix DevOps Service is available on IBM developerWorks:
https://www.ibm.com/developerworks/library/d-bluemix-devops-pipeline-trs/index.html
I would, however, recommend looking into an actual artefact repository (such as Artifactory). Register a custom service in your Bluemix account using the CF (Cloud Foundry) CLI, then use that service whenever you need to store or retrieve containers.
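A minimal sketch of that, assuming a hypothetical Artifactory URL, API key, and service name, would be something like:

cf create-user-provided-service artifactory-repo -p '{"url":"https://artifactory.example.com","apiKey":"<your-api-key>"}'
cf bind-service my-pipeline-app artifactory-repo

Your pipeline stages can then read the repository credentials from that service instead of hard-coding them.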

Can I modify my OpenShift Git repo using an SSH shell?

I have a working app on an OpenShift server. My question is: how do I update OpenShift's Git repo for my application if I make some changes using SSH access to OpenShift? I mean without all this pull/push stuff involving my local machine.
If I understand you correctly, you would like to modify source code without using Git. I am not sure why you would want that. All that pull/push stuff gives you version-control flexibility, which can save you a lot of time when you screw something up. For example, you push a brand new UI to production, which turns out to be buggy. With Git, you have the flexibility to revert back to the previous version and work on a different branch to fix the UI bug.
OpenShift follows a conventional app structure: Git for source control, Maven for the build, JBoss EAP (for example) for the app server, Jenkins for continuous integration, etc. So, when you push using Git, OpenShift will automatically build using Maven and then deploy to the server.
If you would like to disregard all the advantages that OpenShift has to offer, use rhc ssh appname to work directly on the server.
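To make the revert example above concrete (assuming the buggy change is the most recent commit), the Git workflow on your local machine would be roughly:

git revert HEAD
git push

and the push alone is enough to make OpenShift rebuild and redeploy the previous, working version.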

OpenShift: how to import the MySQL driver into the tomcat/lib directory

I would like to connect my web application (running on Tomcat 7) to MySQL (v5.6.20). It works if I include the driver mysql-connector-java-5.1.31-bin.jar in my web application, but I would like to have it available for all my apps. On my local computer, I put the file in tomcat/lib and everything is fine.
How do I do the same with OpenShift? Is it a bad idea to do so?
I am a total beginner. What I do to upload my application (WAR files) is
git add --all
git commit -m "text"
git push
Thanks a lot for your help!!
Here are two KB articles from the Help Center that I think will help you get going. The first shows how to use the pre-configured database connections that come with each of the Java containers on OpenShift (https://help.openshift.com/hc/en-us/articles/202399720-How-to-use-the-pre-configured-MySQLDS-and-PostgreSQLDS-data-sources-in-the-Java-cartridges); they are very easy to use.
The second shows you how to include external libraries (jar files) inside your application without using maven (https://help.openshift.com/hc/en-us/articles/202399730-How-to-include-libraries-jar-files-in-your-java-application-without-using-Maven).
A third option, if you are using a Maven-based project (similar to the default applications that come with the Java cartridges), is to add the MySQL driver as a dependency in your pom.xml file, and it will be loaded into the correct place in your application when you do a git push. If you want to go that route, I think this article will help: http://www.java-tutorial.ch/core-java-tutorial/mysql-with-java-and-maven-tutorial

Is "Do-It-Yourself 0.1" still useful on OpenShift Online?

Currently there are two application types on OpenShift Online, which may be used for OpenShift cartridge development: Do-It-Yourself 0.1 and Cartridge Development Kit.
The description of the Cartridge Development Kit sounds much more useful:
Helps you build and deploy your own custom cartridges - the CDK hosts your cartridge source AND allows you to easily deploy that cartridge directly to OpenShift. For more info check out the README in the source repository.
Is there a reason why the Do-It-Yourself 0.1 type is still available? Which one should I use for what use case?
From my understanding, DIY cartridges are for testing out frameworks on a single gear, while the CDK is for creating custom cartridges to share around and enable scaling. The CDK doesn't launch the app directly but instead keeps snapshots of your Git pushes so you can create apps out of it with
rhc create-app {newappname} http://{yourcdkapp}/manifest/{commit_or_tag_or_branch}
For example, you could use /manifest/master to create an app out of your most recent push to the CDK.
In order for a CDK cartridge to work, you have to follow this specification so that it creates the proper folder structure that OpenShift recognizes and can execute. An important thing to note is that your app will need a control action hook, which handles the actions your app can execute from rhc (like start, stop, restart...). Also, since it's not a DIY app, you'll need a manifest.yml file; this is what sets up your app with IPs and ports and describes what your app is to OpenShift.
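As a minimal, hypothetical sketch of such a control hook (my-server.sh is a placeholder; your actual start/stop commands will differ), bin/control in the cartridge could look like:

#!/bin/bash
# OpenShift invokes this script with the requested action as the first argument.
case "$1" in
  start)   nohup ./my-server.sh > "${OPENSHIFT_LOG_DIR:-/tmp}/server.log" 2>&1 & ;;
  stop)    pkill -f my-server.sh || true ;;
  restart) "$0" stop; "$0" start ;;
  status)  pgrep -f my-server.sh > /dev/null && echo "running" || echo "stopped" ;;
  *)       echo "Usage: $0 {start|stop|restart|status}"; exit 1 ;;
esac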
Another thing to note: a CDK app will take up a gear in order to keep it around. But if you have the CDK working, you could host your cartridge code on GitHub and use the URL to that manifest.yml instead.
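For example (placeholder names, and assuming the manifest lives at the conventional metadata/manifest.yml path in your cartridge repository), that would look like:

rhc create-app {newappname} https://raw.githubusercontent.com/{youruser}/{yourcartridge}/master/metadata/manifest.yml

which creates the app from the downloadable cartridge without keeping a CDK gear around.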