What is the standard way of using the "oc apply" command inside Java code using the Fabric8 library?
We have been using client.buildConfigs().inNamespace(...) and so on so far. However, we felt that using the equivalent of the "oc apply" command could simplify the code.
Thanks
San
Do you mean inside a Jenkins Pipeline or just in general-purpose Java code?
We have the equivalent of kubernetes apply with:
new Controller().apply(entity);
in the kubernetes-api module in the fabric8io/fabric8 git repo.
There's a similar API in kubernetes-client too, which lets you apply a file or URI directly.
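For example, here is a minimal sketch using the fabric8 kubernetes-client DSL (exact method names vary a little between client versions, and the file name and namespace are just placeholders):
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import java.io.FileInputStream;

public class ApplyExample {
    public static void main(String[] args) throws Exception {
        // Auto-configures from kubeconfig or the pod's service account.
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // Load every resource from the manifest and create or replace it,
            // which is roughly what "oc apply -f manifest.yaml" does.
            client.load(new FileInputStream("manifest.yaml"))   // placeholder file name
                  .inNamespace("my-project")                    // placeholder namespace
                  .createOrReplace();
        }
    }
}
Note that createOrReplace() replaces the whole resource rather than doing the three-way merge that oc apply does, so it is an approximation rather than an exact equivalent.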
In pipelines we tend to use the kubernetesApply(file) function to do a similar thing.
But inside Jenkins Pipelines you can also just use oc or kubectl directly, via the clients Docker image and the clientsNode() function in the fabric8-pipeline-library (see the examples there); then you can run oc commands directly from code in a Jenkinsfile.
Currently I have a tfvars file in JSON that sets key values containing spaces. For example:
{
  "customer": "Test Customer"
}
I then pass this variable to an Ansible playbook command run locally on the provisioned EC2 host using cloud-config:
sudo ansible-playbook /Playbook.yml --extra-vars 'customer=${var.customer}'
In that playbook I have a license file that I want to populate with Ansible's template module. Currently the license file ends up with Test, but not Test Customer (because of the space). How can I fix this?
Also, on a second note, is there a better/cleaner way of passing Terraform variables to an Ansible playbook command in a Terraform config, other than -e/--extra-vars?
I think this will do the trick for you:
sudo ansible-playbook /Playbook.yml --extra-vars "customer='${var.customer}'"
Note:
For a better design, I would really recommend decoupling these two tools from each other. Don't tightly couple Ansible and Terraform; in the future you may decide to start using a different tool, which would force you to rewrite your whole IaC.
The OpenShift CLI command to start an S2I (source-to-image) build looks like this:
oc start-build buildname --from-dir=./someDirectory --wait=true
But how can we execute some shell commands? oc start-build is going to create the image (described in the build definition) and copy someDirectory into it, but what if we need additional configuration of that image, not only to push the compiled source code there?
There are a couple of options:
Provide a layer in the Dockerfile
Override assemble script
Options for overriding the assemble script:
Provide .s2i/bin/assemble in the code repository
Specify spec.strategy.sourceStrategy.scripts in the BuildConfig with URL of a directory containing the scripts (see override builder image scripts)
You can find an example of an assemble script and the s2i workflow in the s2i repository, or here's a simple example:
#!/bin/bash
# Run additional build before steps
# Execute original assemble script.
/usr/libexec/s2i/assemble
# Run additional build after steps
In addition, there are postCommit build hooks, which execute after committing the image and before pushing it to the registry. They run in a temporary container, so they can only be used to run some tests.
What I want to do is make a web app that lists in one single view the version of every application deployed in our OpenShift (a fast view of versions). At the moment, the only way I have seen to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view; that's why I ask for that parameter, but if there's another way to get a pod and the version of its currently deployed app, I'm also open to that option, as long as I can get it through an API. Maybe I'd eventually also need an endpoint that retrieves the list of the current pods.
I've looked into the OpenShift API and the only thing I've found that may help me is this GET, but if the parameter :id is what I think it is, it changes with every deploy, so I would need to modify it constantly, and that's not practical. Obviously, I'd also need an endpoint to get the list of IDs, or whatever lets me identify the pod when I ask for the ARTIFACT_URL.
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
I suggest redesigning builds and deployments if you don't have persistent app versioning information outside of Openshift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments, should be fully automated.
Within Openshift you can achieve full automation with Webhook Triggers in your Build Configs and Image Change Triggers in your Deployment Configs.
Outside of Openshift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either simply with 'oc get pods' or via a second call to oc describe:
oc describe pods | grep "Name:        "
(notice the 8 spaces needed to filter out other Name: fields)
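If you would rather query this through an API from your web app instead of the CLI, here is a rough sketch using the fabric8 kubernetes-client Java library; the namespace is a placeholder, and it assumes ARTIFACT_URL is set as a plain value on the container rather than via valueFrom:
import io.fabric8.kubernetes.api.model.Container;
import io.fabric8.kubernetes.api.model.EnvVar;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class ListArtifactUrls {
    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // List the pods in the project, then read each container's env
            // section looking for ARTIFACT_URL (equivalent to the pods list API).
            for (Pod pod : client.pods().inNamespace("my-project").list().getItems()) {
                for (Container container : pod.getSpec().getContainers()) {
                    if (container.getEnv() == null) {
                        continue;
                    }
                    for (EnvVar env : container.getEnv()) {
                        if ("ARTIFACT_URL".equals(env.getName())) {
                            System.out.println(pod.getMetadata().getName() + " -> " + env.getValue());
                        }
                    }
                }
            }
        }
    }
}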
I defined a template (let's call it template.yaml) with a service, deploymentconfig, buildconfig and imagestream, applied it with oc apply -f template.yaml and ran oc new-app app-name to create new app from the template. What the app basically does is to build a Node.js application with S2I, write it to a new ImageStream and deploy it to a pod with the necessary service exposed.
Now I've decided to make some changes to the template and have applied it on OpenShift. How do I ensure that all resources in said template also get reconfigured, without having to delete all resources associated with that template and recreate them?
I think the template is only used to create the related resources the first time. Even if you modify the template, it is not associated with the resources it created, so you have to recreate or modify each affected resource.
But you can simply modify all resources created by the template using the following command:
# oc apply -f template_modified.yaml | oc replace -f -
I hope it helps you.
The correct command turned out to be:
$ oc apply -f template_modified.yaml
$ oc process -f template_modified.yaml | oc replace -f -
That worked for me on OpenShift 3.9.
I am trying to create a Node.js app in OpenShift from the terminal, like this:
./oc new-app https://j4nos@bitbucket.org/j4nos/nodejs.git
The source code is in Bitbucket in a private account; how do I set credentials? It asked for the password once, but not again. How can I set the credentials?
I added an annotated secret from the GUI: repo-at-bitbucket
I have read the Private Git Repositories: Part 2A tutorial. It's strange that for the HTTPD app there is a Source Secret field to select the secret, but not when the Node.js + MongoDB combo is selected. Why?
Ahh... I need to select the pure Node.js app.
You need to authenticate to the private git repository. This can be done a few different ways. I would suggest taking a few minutes and reading this blog series, which outlines the different methods you can take.
https://blog.openshift.com/private-git-repositories-part-1-best-practices/
First read through the initial few posts explaining the concepts and doing it with GitHub; only then look at the Bitbucket example.
https://blog.openshift.com/private-git-repositories-part-5-hosting-repositories-bitbucket/
Those GitHub examples have more explanation, which will then make the Bitbucket example easier to understand.
The likely reason you were prompted for the password when running oc new-app is that you used:
oc new-app https://j4nos@bitbucket.org/j4nos/nodejs.git
Specifically, you didn't specify an S2I builder to use. As a result, oc new-app will try to check out the repo locally and analyse it to work out what language it uses. This is why it prompts for the password separately.
It is better to specify the builder name in the command, as:
oc new-app nodejs~https://j4nos@bitbucket.org/j4nos/nodejs.git
This is an abbreviated form of the command and is the same as running:
oc new-app --strategy=source --image-stream nodejs --code https://j4nos@bitbucket.org/j4nos/nodejs.git
If you specify the builder, it already knows what to use and doesn't analyse the code, so it will not prompt for the password; plus, you wouldn't need the user in the URI.
Either way, when building in OpenShift you still need the basicauth secret and should annotate it so it knows to use the secret for that build.