Bluemix DevOps and Cast Iron Containers

I want to use a DevOps pipeline for building and deploying Cast Iron orchestrations in Cast Iron containers. How can I create the Cast Iron container through the DevOps pipeline?
Do I need to upload the Cast Iron Docker image into a Git repository and configure the build and deploy stages?

Yes, you first need to upload the Cast Iron Docker image.
The steps in this link might help.

Generally when working with containers, you should consider the output of any container build to be an immutable artefact, and as such an additional stage in your pipeline could publish the container to a registry or an artefact repository.
The artefact (container) is then available in subsequent stages of your pipeline, i.e. the container can be pulled down and deployed in a Deploy Stage.
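For example, a minimal sketch of a Build stage script that publishes the image so later stages can pull it (the registry host, namespace and image name below are placeholders, not values from the original question):

#!/bin/bash
set -e
# Tag with the build number so every artefact is immutable and traceable
IMAGE=registry.ng.bluemix.net/mynamespace/castiron-orch:${BUILD_NUMBER:-latest}
docker build -t "$IMAGE" .
docker push "$IMAGE"
# Hand the exact tag to the Deploy stage, which can then simply docker pull it
echo "$IMAGE" > image.txt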
In terms of pushing your container into Git, you can do that directly within a script or with the Bluemix DevOps Services GUI.
Documentation on how you can achieve this with just Bluemix DevOps Services:
IBM developerWorks:
https://www.ibm.com/developerworks/library/d-bluemix-devops-pipeline-trs/index.html
I would, however, recommend looking into an actual artefact repository (such as Artifactory). Register it as a custom service in your Bluemix account using the CF (Cloud Foundry) CLI, then use that service whenever you need to store or retrieve containers.
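A hedged sketch of registering such a repository as a user-provided service with the CF CLI (the URL and credentials are placeholders):

cf login -a https://api.ng.bluemix.net
# Store the Artifactory endpoint and credentials as a custom service instance
cf create-user-provided-service artifactory -p '{"url":"https://artifactory.example.com","apikey":"<your-api-key>"}'
# Bind it to an app, or read its credentials from a pipeline stage script
cf bind-service my-pipeline-app artifactory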

Related

OpenShift BuildConfig vs Jenkinsfile

We are using OpenShift. I am confused about the BuildConfig file vs the Jenkinsfile. Do we need both of them, or is one sufficient? I have seen examples where the docker build is defined in the Jenkinsfile using a BuildConfig file; in other cases the BuildConfig file uses a Jenkinsfile as the build strategy. Can someone please clarify this?
BuildConfig is the base type for all builds. There are different build strategies that can be used in a BuildConfig; by running oc explain buildconfig.spec.strategy you can see them all. If you want to do a docker build you use the dockerStrategy; if you want to build from source code using source-to-image you specify the sourceStrategy.
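As a rough illustration (commands assume OpenShift 3.x; the repository URLs and names are placeholders):

oc explain buildconfig.spec.strategy
# Docker build from a repository containing a Dockerfile
oc new-build https://github.com/example/app.git --strategy=docker --name=app-docker
# Source-to-image build using the wildfly builder image
oc new-build wildfly~https://github.com/example/app.git --strategy=source --name=app-s2i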
Sometimes you have more complex needs than simply running a build that outputs an image. Let's say you want to run the build, wait for that image to be deployed to some environment, and then run some automated GUI tests. In that case you need a pipeline. If you want to trigger and configure this pipeline from the OpenShift Web Console, you use the jenkinsPipelineStrategy in your BuildConfig. In the OpenShift 3.x web console such BuildConfigs are presented as Pipelines rather than Builds, even though they are all really BuildConfigs.
Any BuildConfig with the jenkinsPipelineStrategy is executed by the Jenkins build server running inside the project. That Jenkins instance can also have other pipelines that are not mapped or visible in the OpenShift Web Console; there does not need to be a BuildConfig for every Jenkinsfile if you don't see the benefit of them appearing there.
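For instance, a minimal sketch of creating such a pipeline BuildConfig from a repository whose root contains a Jenkinsfile (3.x syntax, placeholder URL):

# Generates a BuildConfig with jenkinsPipelineStrategy; it appears under
# Builds > Pipelines in the 3.x web console
oc new-build https://github.com/example/app.git --strategy=pipeline --name=app-pipeline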
The difference between running builds inside a Jenkinsfile and in a BuildConfig with some non-Jenkinsfile strategy is that the former are actually executed inside a Jenkins build agent rather than in a normal OpenShift build pod.
At our company we use a combination of Jenkinsfile pipelines and BuildConfigs with the sourceStrategy. Instead of running builds directly inside the Jenkins build agent, our Jenkinsfile pipelines call the OpenShift API and tell it to execute the BuildConfig with the sourceStrategy. So basically we still use s2i for building the images, but the Jenkinsfile is our CI/CD pipeline engine. You can find some examples of this at https://github.com/openshift/jenkins-client-plugin.
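Expressed as plain oc commands, a stage in such a pipeline boils down to something like the following sketch (the linked plugin wraps these calls in Groovy steps; names are placeholders):

# Run the sourceStrategy BuildConfig and stream its log until it finishes
oc start-build my-app-s2i --follow --wait
# Then wait for the deployment that the new image triggers
oc rollout status dc/my-app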

What is the difference between OpenShift BuildConfig and Pipeline?

I am new to DevOps and reading the OpenShift docs about this. It seems both a BuildConfig and a Pipeline (Tekton in OpenShift 4.6) can achieve a source-to-image process and be triggered by Git webhooks. So what is the difference between an OpenShift BuildConfig and a Pipeline?
PS:
I just finished the pipeline tutorial on OpenShift, and no Build or BuildConfig resource was created during the whole process.
The OpenShift BuildConfig is OpenShift-specific and was very hot in OpenShift 3.
The hot stuff then was the source-to-image (s2i) thing.
A BuildConfig could be set up for s2i, Docker and even "Pipeline". But this is not to be confused with OpenShift Pipelines based on Tekton: the BuildConfig pipeline strategy was driven by Jenkinsfiles.
Now that Tekton has gained more stability, respect and maturity in the community, and as the basis of OpenShift Pipelines, it has become the right way to do things. It is a more complete way to set up complex pipelines in a Kubernetes-native fashion, and not only for OpenShift.
So, beyond the above, I would say that using a Pipeline gives you all the flexibility and power of any CI build tool. It is frequently updated and has a great Slack community. BuildConfig has a lot of limitations on what you can do.
Everything you can do in a BuildConfig, and more, is achievable in Tekton pipelines, but not the other way around. ;)
When using OpenShift Pipelines there are tasks provided for s2i too:
https://github.com/openshift/pipelines-catalog
Also, Tekton tasks can be added from:
https://github.com/tektoncd/catalog
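A hedged sketch of pulling a task in from the catalog (the task name, version and path follow the catalog's conventions but are illustrative; check the repository for the actual definitions):

# Install a catalog task directly from its raw manifest...
oc apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.9/git-clone.yaml
# ...or via the tkn hub subcommand, then confirm it is available
tkn hub install task git-clone
tkn task list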

Is it possible to deploy to OpenShift using CircleCI?

I am currently learning about CI/CD for an upcoming project. Our project is hosted on Bitbucket and thus can't use Travis CI. I was thinking of using CircleCI in this case. I have searched the internet for examples of how to configure CircleCI to deploy to OpenShift. Does anyone have experience with this?
In this case you do not want to use automatic webhook-based build triggering in OpenShift based on accepted pull requests in GitHub, but simply have CircleCI trigger a build via, e.g., the OpenShift CLI tool (oc start-build <buildconfig_name> --follow).
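A minimal sketch of the deploy step such a CircleCI job would run (the server URL, token variable and BuildConfig name are placeholders, and the oc binary must be available in the job image):

# Authenticate with a service account token stored as a CircleCI env var
oc login https://api.mycluster.example.com:6443 --token="$OPENSHIFT_TOKEN"
oc project my-project
# Kick off the build and stream the log so the CI job fails if the build does
oc start-build my-app --follow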

Redeploy Openshift Application when Docker Hub Image Changes?

Is there a way to trigger a re-deploy when I push an image to docker hub? I used S2I to build an image, put it up on docker hub, and did a deployment from there. How can I trigger a new deployment when I push a new image to docker hub?
Perhaps there is a better way? I created a WildFly image with the changes to standalone.xml that I needed. Then I used S2I to build my local source into a runnable WildFly application image, which is what I pushed and deployed. I'm trying to get around having to go through a GitHub repository.
I'm thinking I could create an application with the custom WildFly image that I created and use the direct-from-IDE option for the application, but what if I want to use the command line?
You can set a scheduled flag on the image stream tag to have the remote registry periodically polled. This will only work, though, if the OpenShift cluster has been configured globally to allow it. If you are using OpenShift Online, I don't believe that feature is enabled.
https://docs.openshift.com/container-platform/latest/dev_guide/managing_images.html#importing-tag-and-image-metadata
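A hedged sketch of what that looks like on the command line (the image and tag names are placeholders):

# Import the Docker Hub image into an image stream tag and re-import it
# periodically; an image change trigger on the deployment then rolls it out
oc tag docker.io/myuser/myapp:latest myapp:latest --scheduled=true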
If you want to avoid using a Git repository, you can use a binary input build instead. This allows you to push files directly from your local computer, which means you can compile binary artifacts locally and push them into the S2I build done by OpenShift.
https://docs.openshift.com/container-platform/latest/dev_guide/builds/build_inputs.html#binary-source
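A minimal sketch of such a binary build (the builder image and names are illustrative):

# Create a BuildConfig that expects its input to be uploaded rather than cloned
oc new-build --name=myapp --image-stream=wildfly --binary
# Upload the current directory as the build input and follow the build log
oc start-build myapp --from-dir=. --follow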

Is "Do-It-Yourself 0.1" still useful on OpenShift Online?

Currently there are two application types on OpenShift Online that may be used for OpenShift cartridge development: Do-It-Yourself 0.1 and Cartridge Development Kit.
The description of the Cartridge Development Kit sounds much more useful:
Helps you build and deploy your own custom cartridges - the CDK hosts your cartridge source AND allows you to easily deploy that cartridge directly to OpenShift. For more info check out the README in the source repository.
Is there a reason why the Do-It-Yourself 0.1 type is still available? Which one should I use for what use case?
From my understanding, DIY cartridges are for testing out frameworks on a single gear, while the CDK is for creating custom cartridges to share around and enable scaling. The CDK doesn't launch the app directly but instead keeps snapshots of your git pushes, so you can create apps out of it with:
rhc create-app {newappname} http://{yourcdkapp}/manifest/{commit_or_tag_or_branch}
For example, you could use /manifest/master to create an app out of your most recent push to the CDK.
In order for a CDK to work you have to follow this specification so that it creates the proper folder structure OpenShift recognizes and will execute. An important thing to note is that your app will need a control action hook, which describes the other action hooks your app can execute from rhc (like start, stop, restart...). Also, since it's not a DIY, you'll need a manifest.yml file. This is what sets up your app with IPs and ports and describes what your app is to OpenShift.
Another thing to note: a CDK will take up a gear to keep around. BUT if you have the CDK working, you could host your code on GitHub and use the URL to that manifest.yml instead.
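For example (a hedged sketch; the user, repository and path are placeholders, though v2 cartridges conventionally keep the manifest at metadata/manifest.yml):

rhc create-app newapp https://raw.githubusercontent.com/youruser/yourcartridge/master/metadata/manifest.yml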