The Kaa IoT cloud platform comes prebuilt to run on Amazon AWS or in a VirtualBox sandbox. Is it immediately deployable to OpenShift, in particular the free Starter plan? If not, what does it take to get it to work?
I have looked at Python on OpenShift, which uses S2I to dockerize a Software Collections version of Python (e.g. 2.7). I'm wondering how these projects or technologies could work together to make Kaa run on multiple platforms, or to produce more versions/flavors/variants of Kaa for different platforms. It's an interesting question, but I'm not sure this way of thinking is right; I mention it only as background on what I've been looking at, which may or may not be relevant to the questions asked here.
You can use different workflows to achieve the goal:
It seems there are Docker images ready to use[1], so you can try deploying it on OpenShift and see what happens.
You can create a custom S2I[2] builder image in OpenShift and write a Dockerfile with all the base software you need to run Kaa.
You can create a Dockerfile (perhaps by editing the existing Kaa Dockerfile) that adds all the software you need, then create a BuildConfig with the Docker strategy[3], run it in an OpenShift project to push your Kaa image to an image stream, and finally deploy pods from that image stream with a DeploymentConfig[4] (a minimal oc sketch follows the references below).
[1]: https://kaaproject.github.io/kaa/docs/v0.10.0/Administration-guide/System-installation/Docker-deployment/
[2]: https://blog.openshift.com/create-s2i-builder-image/
[3]: https://docs.openshift.com/container-platform/3.7/dev_guide/builds/build_strategies.html#docker-strategy-options
[4]: https://docs.openshift.com/container-platform/3.7/dev_guide/deployments/how_deployments_work.html#creating-a-deployment-configuration
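As a minimal sketch of the third workflow, assuming you are logged in with the oc CLI and working in a directory containing your customized Kaa Dockerfile (the project and image names here are just examples):

oc new-project kaa
# Create a binary, Docker-strategy BuildConfig and feed it the local build context
oc new-build --strategy=docker --binary --name=kaa-server
oc start-build kaa-server --from-dir=. --follow
# Deploy pods from the resulting image stream and expose the service
oc new-app kaa-server
oc expose svc/kaa-server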
Related
I am new to DevOps and am reading the OpenShift docs about this. It seems both a BuildConfig and a Pipeline (Tekton in OpenShift 4.6) can achieve a source-to-image process and be triggered by Git webhooks. So what is the difference between an OpenShift BuildConfig and a Pipeline?
PS:
Just finished the Pipelines tutorial on OpenShift; there is no Build or BuildConfig resource created during the whole process.
An OpenShift BuildConfig is OpenShift-specific and was very popular in OpenShift 3.
The hot feature back then was source-to-image (S2I).
A BuildConfig could be set up with the S2I, Docker, and even "Pipeline" strategies, but the latter is not to be confused with OpenShift Pipelines based on Tekton: BuildConfig pipelines were implemented with Jenkinsfiles.
Now that Tekton has gained stability, respect, and maturity in the community, and also backs OpenShift Pipelines, it has become the preferred way to do things.
It is a more complete, Kubernetes-native way to set up complex pipelines, and it is not limited to OpenShift.
So, beyond the above, the difference is that a Pipeline gives you all the flexibility and power of any CI build tool; it is frequently updated and has a great Slack community, whereas a BuildConfig has a lot of limitations on what you can do.
Everything you can do in a BuildConfig, and more, is achievable in Tekton pipelines, but not the other way around. ;)
When using OpenShift Pipelines, there are tasks provided for S2I too (a minimal Pipeline sketch follows the links below):
https://github.com/openshift/pipelines-catalog
Also Tekton tasks can be added from:
https://github.com/tektoncd/catalog
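For illustration, here is a minimal Pipeline sketch assuming the git-clone task from the Tekton catalog and an S2I task (here s2i-python) from the OpenShift catalog are already installed in the namespace; the repository URL and image reference are placeholders:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: s2i-build
spec:
  workspaces:
    - name: source
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone            # from the Tekton catalog (assumed installed)
      workspaces:
        - name: output
          workspace: source
      params:
        - name: url
          value: https://github.com/example/my-app.git   # placeholder repository
    - name: build-image
      taskRef:
        name: s2i-python           # S2I task from the OpenShift catalog (assumed installed)
      runAfter:
        - fetch-source
      workspaces:
        - name: source
          workspace: source
      params:
        - name: IMAGE
          value: image-registry.openshift-image-registry.svc:5000/my-project/my-app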
I'm currently learning about CI/CD for an upcoming project. Our project is hosted on Bitbucket, so we can't use Travis CI; I was thinking of using CircleCI in this case. I've searched the internet for examples of how to configure CircleCI to deploy to OpenShift. Does anyone have experience with this?
In this case you do not want automatic webhook-based build triggering in OpenShift from accepted pull requests; instead, simply have CircleCI trigger a build via the OpenShift CLI, e.g. oc start-build <buildconfig_name> --follow.
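A hedged sketch of what the CircleCI side could look like, assuming the oc client is available in the build image and the server URL, token, and BuildConfig name are stored as project environment variables (all names here are placeholders):

version: 2.1
jobs:
  deploy-to-openshift:
    docker:
      - image: quay.io/openshift/origin-cli:latest   # any image that ships the oc client
    steps:
      - checkout
      - run:
          name: Trigger the OpenShift build
          command: |
            oc login "$OPENSHIFT_SERVER" --token="$OPENSHIFT_TOKEN"
            oc start-build my-app --follow
workflows:
  deploy:
    jobs:
      - deploy-to-openshift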
Just to give context:
I am planning to use Terraform to bring up new, separate environments with EC2 machines, ELBs, etc., and then maintain their configuration as well.
Doing that with Terraform and the AWS provider sounds fairly simple.
Problem 1:
While launching those instances I want to install a few packages etc., so that when Terraform launches the instances (servers), things/apps are up and running.
Assuming the above is up and running:
Problem 2:
How do I deploy new code on the servers in this environment launched by Terraform?
Should I use, for example, Ansible playbooks / Chef recipes / Puppet manifests for that, or does Terraform give some other options/ways?
Brief answers:
Problem 1: While launching those instances I want to install a few packages etc., so that when Terraform launches the instances (servers), things/apps are up and running.
A couple of options:
Create an AMI of your instance with the installed packages and specify that in the resource.
Use a user data script to install the packages that you need when the instance starts (see the sketch after this list).
Use Ansible playbooks / Chef recipes / Puppet to install packages once the instance is running (e.g. by creating an OpsWorks stack with Terraform).
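A minimal sketch of the user data option, with a placeholder AMI ID; the packages installed here are just examples:

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t3.micro"

  # cloud-init runs this once at first boot, so the instance's prerequisites
  # are in place before any application code is deployed to it
  user_data = <<-EOF
    #!/bin/bash
    yum install -y nginx
    systemctl enable --now nginx
  EOF

  tags = {
    Name = "app-server"
  }
}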
Problem 2: How do I deploy new code on the servers in this environment launched by Terraform? Should I use e.g. Ansible playbooks / Chef recipes / Puppet manifests for that, or does Terraform give some other options/ways?
That is not the intended use case for Terraform; use other tools like Jenkins, or AWS services like CodePipeline or CodeDeploy. Ansible/Chef/Puppet can also help (e.g. with OpsWorks).
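If you go the Ansible route for Problem 2, a hypothetical playbook could look like this (the host group, artifact path, and service name are made up for illustration):

- hosts: app_servers
  become: true
  tasks:
    - name: Copy the newly built artifact to the server
      copy:
        src: build/myapp.tar.gz
        dest: /opt/myapp/myapp.tar.gz

    - name: Unpack the artifact in place
      unarchive:
        src: /opt/myapp/myapp.tar.gz
        dest: /opt/myapp
        remote_src: true

    - name: Restart the application service
      service:
        name: myapp
        state: restarted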
Currently there are two application types on OpenShift Online, which may be used for OpenShift cartridge development: Do-It-Yourself 0.1 and Cartridge Development Kit.
The description of the Cartridge Development Kit sounds much more useful:
Helps you build and deploy your own custom cartridges - the CDK hosts your cartridge source AND allows you to easily deploy that cartridge directly to OpenShift. For more info check out the README in the source repository.
Is there a reason why the Do-It-Yourself 0.1 type is still available? Which one should I use for what use case?
From my understanding, DIY cartridges are for testing out frameworks on a single gear, while the CDK is for creating custom cartridges that you can share around and that enable scaling. The CDK doesn't launch the app directly; instead, it keeps snapshots of your git pushes so you can create apps out of it with
rhc create-app {newappname} http://{yourcdkapp}/manifest/{commit_or_tag_or_branch}
For example, you could use /manifest/master to create an app from your most recent push to the CDK.
For a CDK to work you have to follow this specification so that it produces the proper folder structure that OpenShift recognizes and will execute. An important thing to note is that your app will need a control action hook, which dispatches the other actions your app can execute from rhc (like start, stop, restart...). Also, since it's not a DIY app, you'll need a manifest.yml file; this is what sets up your app with IPs and ports and describes to OpenShift what your app is.
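For reference, a heavily trimmed, hypothetical manifest.yml showing the kind of fields the specification expects (names, versions, and ports are placeholders, not a complete manifest):

Name: mycart
Cartridge-Short-Name: MYCART
Display-Name: My Custom Cartridge
Cartridge-Version: '0.0.1'
Cartridge-Vendor: example
Version: '1.0'
Categories:
  - web_framework
Endpoints:
  - Private-IP-Name: IP
    Private-Port-Name: PORT
    Private-Port: 8080
    Public-Port-Name: PROXY_PORT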
Another thing to note: a CDK takes up a gear to keep around. But if you have the CDK working, you could host your code on GitHub and use the URL to that manifest.yml instead.
I'm trying to implement a continuous deployment system and I can't seem to find a good answer to our problem.
We use Jenkins to run a maven build to generate our artifacts and deploy them to Nexus. I see a few projects that bundle up everything into a single war or tar file, extract one file per request from Nexus by name and deploy it to an application server, but this requires them to know beforehand what versions they have available.
My project has quite a few JARs/WARs/binaries among other artifacts, which don't get deployed using an application server. What we want is to be able to pull any snapshot or release revision of the software out of Nexus and either generate an install package or deliver it directly to a remote server.
Clarification: I want QA or development to be able to select a version in Jenkins, where Jenkins polls Nexus for the available versions and then performs an automated deploy to a server from Nexus.
Is there an easy Nexus/Maven way to get software out to a testing system?
So, is there a way to poll Nexus through Ant/Ivy, Jenkins, Maven, or Gradle to determine which revisions are available? I'll write something else in if it helps.
I see that a similar question was asked here: How do I choose an artifact from Nexus in a Hudson / Jenkins job?, but it is as yet unanswered 9 months later.
Nexus gives you a standard HTTP browsing capability. You could browse the repository through HTTP and see what is available.
I still don't understand your use case, though. If you know which versions of the project you want, then what is the problem?
The easiest would be to write an installer pom.xml with a ${} placeholder for the version of the artifacts you want, then invoke Maven with mvn package -Dproduct.version=1.0.0.
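A bare-bones sketch of such an installer pom.xml, with placeholder group and artifact IDs; the product.version property is what you override on the command line:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>installer</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>pom</packaging>

  <properties>
    <!-- overridden with: mvn package -Dproduct.version=1.0.0 -->
    <product.version>1.0.0-SNAPSHOT</product.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>com.example.product</groupId>
      <artifactId>my-app</artifactId>
      <version>${product.version}</version>
      <type>war</type>
    </dependency>
  </dependencies>
</project>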
If you use a container, PAX has plugins that allow you to specify artifacts like mvn:myGroup/myArtifact/myVersion, and it will auto-pull them from Maven.
Nexus isn't doing any magic; it's all well-known URL paths of the form groupId/artifactId/version.
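For example, to see which versions of a (placeholder) artifact are available, you can fetch its maven-metadata.xml over plain HTTP; the exact base path depends on your Nexus version and repository layout, so treat this as a sketch:

# list the versions recorded for the artifact in a release repository
curl -s https://nexus.example.com/content/repositories/releases/com/example/my-app/maven-metadata.xml | grep '<version>'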