what is the difference between openshift buildconfig and pipeline - openshift

I am new to DevOps and reading the OpenShift docs about this. It seems both a BuildConfig and a Pipeline (Tekton in OpenShift 4.6) can achieve the source-to-image process and be triggered by Git webhooks. So what is the difference between an OpenShift BuildConfig and a Pipeline?
PS:
Just finished the pipeline tutorial on OpenShift, and there was no Build or BuildConfig resource created during the whole process.

OpenShift BuildConfig is OpenShift-specific and was very popular in OpenShift 3.
The hot feature then was the source-to-image (S2I) build.
A BuildConfig could be set up for S2I, Docker, and even "Pipeline" builds. The latter is not to be confused with OpenShift Pipelines based on Tekton: BuildConfig pipelines were provided using Jenkinsfiles.
Now that Tekton has gained stability, respect, and maturity out in the community, and also under the "OpenShift Pipelines" product, it has become the right way to do things.
It is a more complete way to set up complex pipelines in a Kubernetes-native way, and not only for OpenShift.
So, as for the difference beyond the above, I would say that using a pipeline gives you all the flexibility and power of any CI build tool. It is frequently updated and has a great Slack community. BuildConfig has a lot of limitations on what you can do.
Everything you can do in a BuildConfig, and more, is achievable in Tekton pipelines, but not the other way around. ;)
When using OpenShift Pipelines, there are tasks provided for S2I too:
https://github.com/openshift/pipelines-catalog
Also Tekton tasks can be added from:
https://github.com/tektoncd/catalog
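
To give a feel for the Tekton side, here is a minimal sketch of a Pipeline that clones a repository and builds it with an S2I task. The task names (git-clone, s2i-python) and parameter names are assumptions based on the catalogs above and will vary by catalog version, so check the actual task definitions:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: s2i-build
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: source
  tasks:
    - name: fetch
      taskRef:
        name: git-clone            # from the Tekton catalog; assumption
      workspaces:
        - name: output
          workspace: source
      params:
        - name: url
          value: $(params.git-url)
    - name: build
      runAfter:
        - fetch
      taskRef:
        name: s2i-python           # from the pipelines-catalog; assumption
      workspaces:
        - name: source
          workspace: source
      params:
        - name: IMAGE              # target image; parameter name may differ per task version
          value: image-registry.openshift-image-registry.svc:5000/myproject/myapp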

Related

OpenShift BuildConfig vs Jenkinsfile

We are using OpenShift. I am confused about the BuildConfig file vs the Jenkinsfile. Do we need both of them, or is one sufficient? I have seen examples where the Docker build in a Jenkinsfile is defined using a BuildConfig file, and in some cases the BuildConfig file uses a Jenkinsfile as the build strategy. Can someone please clarify this?
BuildConfig is the base type for all builds. There are different build strategies that can be used in a BuildConfig; by running oc explain buildconfig.spec.strategy you can see them all. If you want to do a Docker build you use the dockerStrategy; if you want to build from source code using source-to-image you specify the sourceStrategy.
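As an illustration, a minimal BuildConfig using the sourceStrategy might look like this; the repository, builder image, and names are placeholders:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/myapp.git   # placeholder repository
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.6                          # builder image; assumption
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest

Swapping sourceStrategy for dockerStrategy (and pointing the source at a Dockerfile) gives you the Docker build instead.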
Sometimes you have more complex needs than simply running a build with an output image, let's say you want to run the build, wait for that image to be deployed to some environment and then run some automated GUI tests. In this case you need a pipeline. If you want to trigger and configure this pipeline from the OpenShift Web Console you would use the jenkinsPipelineStrategy in your BuildConfig. In the OpenShift 3.x web console such BuildConfigs are presented as Pipelines and not Builds even though they are all really BuildConfigs.
Any BuildConfig with the jenkinsPipelineStrategy will be executed by the Jenkins Build Server running inside the project. That Jenkins instance could also have other pipelines that are not mapped or visible in the OpenShift Web Console, there does not need to be a BuildConfig for every Jenkinsfile if you don't see the benefit of them appearing in the OpenShift Web Console.
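A sketch of such a BuildConfig, assuming the Jenkinsfile lives at the root of the referenced repository:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-pipeline
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/myapp.git   # placeholder repository
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile   # the pipeline can also be inlined via the jenkinsfile field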
The difference between running builds inside a Jenkinsfile and in a BuildConfig with some non-Jenkinsfile strategy is that the former is actually executed inside the Jenkins build agent rather than a normal OpenShift build pod.
At our company we use a combination of Jenkinsfile pipelines and BuildConfigs with the sourceStrategy. Instead of running builds directly inside the Jenkins build agent, we let the Jenkinsfile pipeline call the OpenShift API and tell it to execute the BuildConfig with the sourceStrategy. So basically we still use S2I for building the images, but the Jenkinsfile serves as our CI/CD pipeline engine. You can find some examples of this at https://github.com/openshift/jenkins-client-plugin.
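Conceptually, the pipeline step that hands the build over to OpenShift boils down to something like the following command (the BuildConfig name is a placeholder); with the jenkins-client-plugin linked above, the same thing is expressed inside the Jenkinsfile as openshift.startBuild(...):

oc start-build myapp --wait --follow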

Is it possible to deploy to openshift using Circle CI?

Currently learning about CI/CD for an upcoming project. Our project is hosted on Bitbucket and thus can't use Travis CI, so I was thinking of using Circle CI in this case. I searched the internet for examples of how to configure Circle CI to deploy to OpenShift. Does anyone have experience with this?
In this case you do not want to use automatic webhook-based build triggering in OpenShift based on accepted pull requests in GitHub, but simply have CircleCI trigger a build via, for example, the OpenShift CLI tool (oc start-build <buildconfig_name> --follow).
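A minimal sketch of what that could look like in .circleci/config.yml; the job name, the CLI image, and the OPENSHIFT_SERVER/OPENSHIFT_TOKEN environment variables (stored as CircleCI secrets) are all assumptions:

version: 2.1
jobs:
  deploy:
    docker:
      - image: quay.io/openshift/origin-cli:latest   # any image with the oc binary; assumption
    steps:
      - run:
          name: Trigger OpenShift build
          command: |
            oc login "$OPENSHIFT_SERVER" --token="$OPENSHIFT_TOKEN"
            oc start-build myapp --follow
workflows:
  main:
    jobs:
      - deploy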

how to get kaa deployed on openshift

The Kaa platform, as an IoT cloud platform, is prebuilt to run on Amazon AWS or a VirtualBox sandbox. Is it immediately deployable to OpenShift, especially the free Starter plan? If not, what does it take to get it to work?
I have looked at Python on OpenShift, which uses S2I to dockerize a Software Collections version of Python, e.g. 2.7. I'm wondering how these projects or technologies could work together to make Kaa run on multiple platforms, or to make more versions/flavors/variants of Kaa available on them. The way of thinking may not be right, but it adds some hints about the background information I've been looking at, which may or may not be related to the questions asked here.
You can use different workflows to achieve the goal:
It seems there are Docker images ready to use [1], so you can try deploying them in OpenShift and see what happens.
You can create a custom S2I builder image [2] in OpenShift, writing a Dockerfile with all the base software you need to run Kaa.
You can create a Dockerfile (maybe editing the existing Kaa Dockerfile) that contains/adds all the software you need, then create a BuildConfig with the Docker strategy [3] and run it in an OpenShift project to add your Kaa image to an image stream, and finally deploy pods from that image stream with a DeploymentConfig [4]; see the commands sketched after the references below.
[1]: https://kaaproject.github.io/kaa/docs/v0.10.0/Administration-guide/System-installation/Docker-deployment/
[2]: https://blog.openshift.com/create-s2i-builder-image/
[3]: https://docs.openshift.com/container-platform/3.7/dev_guide/builds/build_strategies.html#docker-strategy-options
[4]: https://docs.openshift.com/container-platform/3.7/dev_guide/deployments/how_deployments_work.html#creating-a-deployment-configuration
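
As a sketch of that third workflow, the Docker-strategy route can be driven entirely from the CLI; the repository URL and the names here are placeholders for wherever your edited Kaa Dockerfile lives:

oc new-build https://github.com/example/kaa-docker.git --strategy=docker --name=kaa
oc start-build kaa --follow
oc new-app kaa:latest

The first command creates the BuildConfig and image stream, the second runs the Docker build, and the last one deploys pods from the resulting image stream tag.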

Bluemix Devops and Cast iron Containers

I want to use a DevOps pipeline for building and deploying Cast Iron orchestrations in Cast Iron containers. How can I create the Cast Iron container through a DevOps pipeline?
Do I need to upload the Cast Iron Docker image into a Git repository and configure the build and deploy stages?
Yes, you need to first upload the Cast Iron Docker image.
The steps in this link might help.
Generally, when working with containers, you should consider the output of any container build an immutable artefact, and as such an additional stage in your pipeline could be to publish the container to a registry or an artefact repository.
The artefact (container) is then available in subsequent stages within your pipeline, i.e. the container can be pulled down and deployed in a Deploy stage.
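For example, the publish stage could be as simple as tagging and pushing the built image; the registry host and repository names here are placeholders:

docker build -t registry.example.com/integration/castiron:${BUILD_NUMBER} .
docker push registry.example.com/integration/castiron:${BUILD_NUMBER}

The deploy stage then pulls that exact tag, which keeps the artefact immutable between stages.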
In terms of pushing your container into Git, you can do that directly within a script or with the Bluemix DevOps Services GUI.
Documentation on how you can achieve this with just Bluemix DevOps Services:
IBM Developerworks:
https://www.ibm.com/developerworks/library/d-bluemix-devops-pipeline-trs/index.html
I would, however, recommend looking into an actual artefact repository (such as Artifactory). Register a custom service using the CF (Cloud Foundry) CLI in your Bluemix account, then utilize that service whenever you need to store or retrieve containers.

Is "Do-It-Yourself 0.1" still useful on OpenShift Online?

Currently there are two application types on OpenShift Online, which may be used for OpenShift cartridge development: Do-It-Yourself 0.1 and Cartridge Development Kit.
The description of the Cartridge Development Kit sounds much more useful:
Helps you build and deploy your own custom cartridges - the CDK hosts
your cartridge source AND allows you to easily deploy that cartridge
directly to OpenShift. For more info check out the README in the
source repository.
Is there a reason why the Do-It-Yourself 0.1 type is still available? Which one should I use for what use case?
From my understanding, DIY cartridges are for testing out frameworks on a single gear, while the CDK is for creating custom cartridges to share around and to enable scaling. The CDK doesn't launch the app directly but instead keeps snapshots of your git pushes, so you can create apps out of them with:
rhc create-app {newappname} http://{yourcdkapp}/manifest/{commit_or_tag_or_branch}
For example, you could use /manifest/master to create an app out of your most recent push to the CDK.
In order for a CDK cartridge to work you have to follow this specification, so that it creates the proper folder structure OpenShift recognizes and executes. An important thing to note is that your app will need a control action hook, which describes the other actions your app can execute from rhc (like start, stop, restart...). Also, since it's not a DIY app, you'll need a manifest.yml file; this is what sets up your app with IPs and ports and describes your app to OpenShift.
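A heavily trimmed sketch of a manifest.yml, just to show the shape; all names, versions, and the port are placeholders, and the full set of required fields is in the cartridge specification:

Name: mycart                       # placeholder cartridge name
Cartridge-Short-Name: MYCART
Display-Name: My Custom Cartridge
Version: '1.0'
Cartridge-Version: '0.0.1'
Cartridge-Vendor: example
Categories:
  - web_framework
Provides:
  - mycart-1.0
Endpoints:
  - Private-IP-Name: IP            # exposed to your hooks as environment variables
    Private-Port-Name: PORT
    Private-Port: 8080
    Public-Port-Name: PROXY_PORT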
Another thing to note: a CDK app will take up a gear to keep around. But if you have the CDK working, you could host your code on GitHub and use the URL to that manifest.yml instead.