OpenShift oc apply and rollout

I would like to run the following script in our test environment to deploy our application in OpenShift 4.8.
oc apply -f deployment-config.yaml
oc rollout latest dc/my-application
The trigger in the deployment config is ConfigChange. If, for example, an environment variable has changed in the deployment config, oc apply -f deployment-config.yaml will trigger a rollout.
The deployment config uses a snapshot as the image. We don't have a version number for our snapshot, which means a new snapshot might need to be deployed even though the deployment config has not changed. That's why we use oc rollout latest dc/my-application.
image: "<repo-url>/my-application:snapshot"
imagePullPolicy: Always
The problem is that sometimes both oc apply -f deployment-config.yaml and oc rollout latest dc/my-application will trigger a rollout.
Is there a way to do oc apply -f deployment-config.yaml without triggering a rollout? Or do you see another solution?

As described in Deployment triggers, you need to define the triggers field as an empty list. If you just remove the triggers, a config change trigger will be added by default:
If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually.
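A minimal sketch of the relevant part of the DeploymentConfig (the image and pull policy are taken from your snippet; the rest of your spec stays as it is):

kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: my-application
spec:
  triggers: []        # explicitly empty list: no automatic rollout on config or image change
  template:
    spec:
      containers:
        - name: my-application
          image: "<repo-url>/my-application:snapshot"
          imagePullPolicy: Always

With triggers: [] in place, oc apply -f deployment-config.yaml only updates the object, and a new deployment starts only when you run oc rollout latest dc/my-application.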

Monitoring the progress of specified pods with oc cli

I want to know: is there a way to monitor the progress of a particular pod instead of seeing all pods?
For example, I scale consoleapp:
oc scale dc consoleapp --replicas=3
After that command I want to watch the progress of only consoleapp and ensure the pods are active.
I thought you'd be able to run this command:
oc get pods consoleapp -watch, but it does not work. Is there a way for me to monitor the progress of this? Something very similar to oc rollout status deploymentconfig/consoleapp --watch, but without rolling out a new deployment.
When you run oc get pods consoleapp, you're asking for a pod named consoleapp, which of course doesn't exist. If you want to watch all the pods managed by the DeploymentConfig, you can use the --selector (-l) option to select pods that match the selector in your DeploymentConfig.
If you have:
spec:
  selector:
    name: consoleapp
Then you would run:
oc get pod -l name=consoleapp -w
If your selector has multiple labels, combine them with commas:
oc get pod -l app=consoleapp,component=frontend -w
NB: DeploymentConfigs are considered deprecated and can generally be replaced by standard Kubernetes Deployments:
Both Kubernetes Deployment objects and OpenShift Container Platform-provided DeploymentConfig objects are supported in OpenShift Container Platform; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects.
(from "Understanding Deployment and DeploymentConfig objects")

Can oc new-app create a Deployment instead of a DeploymentConfig?

oc new-app always creates a DeploymentConfig. Is there an option to create a Deployment instead of a DeploymentConfig?
Why? DeploymentConfig is a proprietary, legacy, Red Hat-only resource kind. I would prefer a modern, cross-platform, industry-standard Deployment.
oc new-app always creates a DeploymentConfig. Is there an option to create a Deployment instead of a DeploymentConfig?
Current versions of oc have been creating Deployments for quite some time now:
$ oc new-app --docker-image=<IMAGE> --name=my-application
--> Found container image [..]
    * An image stream tag will be created as "my-application:latest" that will track this image
--> Creating resources ...
    imagestream.image.openshift.io "my-application" created
    deployment.apps "my-application" created
    service "my-application" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose service/my-application'
    Run 'oc status' to view your app.
$ oc get deployment
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
my-application   1/1     1            1           7s
$ oc get deploymentconfig
No resources found in simon namespace.
So you should update your oc client as you seem to be using an old version (my output above is with a 4.6 client).
The old behaviour of creating a DeploymentConfig can still be forced by using the --as-deployment-config option:
$ oc new-app --docker-image=<IMAGE> --name=my-application --as-deployment-config
Note that DeploymentConfigs still have their place if you want to use features like triggers, automatic rollback, lifecycle hooks or custom strategies (DeploymentConfig-specific features).
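For illustration, a minimal sketch of one such DeploymentConfig-only feature, an ImageChange trigger that redeploys automatically whenever a new image is pushed to an ImageStream tag (the names my-application and my-application:latest are just placeholders):

kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  name: my-application
spec:
  triggers:
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - my-application
        from:
          kind: ImageStreamTag
          name: my-application:latest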

How can I disable the automatic build triggered from the build configuration in OpenShift?

I am trying to create a CI/CD pipeline with OpenShift. Initially, when creating the application using the 'oc new-app' command, it automatically triggers the build. How can I disable the initial build, other than deleting or cancelling the build?
How can I disable the initial build, other than deleting or cancelling the build?
oc new-app cannot prevent the initial build.
This has been discussed here: https://github.com/openshift/origin/issues/15429
Unfortunately, it has not been implemented yet.
However, you can prevent the initial build by removing all triggers from the BuildConfig, i.e. by modifying the BuildConfig YAML manually.
First, export the oc new-app output in YAML format:
# oc new-app --name=test \
centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git -o yaml --dry-run > test.yml
Remove all triggers by changing the configuration so that it reads triggers: [].
strategy:
  sourceStrategy:
    from:
      kind: ImageStreamTag
      name: ruby-25-centos7:latest
  type: Source
triggers: []
After modifying, create the resources using oc create -f:
# oc create -f test.yml
imagestream.image.openshift.io/ruby-25-centos7 created
imagestream.image.openshift.io/ruby-ex created
buildconfig.build.openshift.io/ruby-ex created
deploymentconfig.apps.openshift.io/ruby-ex created
service/ruby-ex created
The build does not run until you run oc start-build <bc name> and oc rollout latest dc/<dc name>.
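With the resource names from the oc create output above, those manual steps would be:

oc start-build ruby-ex          # kick off the build by hand
oc rollout latest dc/ruby-ex    # then trigger the first deployment of the DeploymentConfig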
I hope this use case is helpful for you.

Apply changes dynamically when OpenShift template is modified (and applied)

I defined a template (let's call it template.yaml) with a service, deploymentconfig, buildconfig and imagestream, applied it with oc apply -f template.yaml and ran oc new-app app-name to create a new app from the template. What the app basically does is build a Node.js application with S2I, write it to a new ImageStream and deploy it to a pod with the necessary service exposed.
Now I've decided to make some changes to the template and have applied it on OpenShift. How do I go about ensuring that all resources in the said template also get reconfigured without having to delete all resources associated with that template and recreating it again?
I think the template is only used to create the related resources the first time. Even if you modify the template afterwards, it is not associated with the resources it created, so you have to recreate or modify each resource that changed.
However, you can update all resources created by the template with the following command:
# oc apply -f template_modified.yaml | oc replace -f -
I hope it helps.
The correct command turned out to be:
$ oc apply -f template_modified.yaml
$ oc process -f template_modified.yaml | oc replace -f -
That worked for me on OpenShift 3.9.
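For completeness, a sketch of the same flow using oc apply for the rendered resources instead of oc replace (the parameter name APP_NAME is purely illustrative; use whatever parameters your template actually declares, or drop -p entirely if it has none):

oc apply -f template_modified.yaml                                            # update the Template object itself
oc process -f template_modified.yaml -p APP_NAME=app-name | oc apply -f -    # re-render and apply the resources it defines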

How to restart pod in OpenShift?

I updated a file (for debug output) in a running pod, but it isn't getting recognized. I was going to restart the pod to get it to take effect, but I only see oc stop and not oc start or oc restart. How would I force a refresh of files in the pod?
I am thinking maybe it is a Ruby thing (like opcache in PHP). But figured a restart of the pod would handle it. Just can't figure out how to restart a pod.
You need to make your changes in the deployment config, not in the pod, because OpenShift treats pods as largely immutable: changes cannot be made to a pod definition while it is running. https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/pods_and_services.html#pods
If you make some changes in the deployment config and save them, the pod will restart and your changes will take effect:
oc edit dc "deploy-config-example"
If you change something in volumes or ConfigMaps, you need to delete the pod so that it restarts:
oc delete pod "name-of-your-pod"
The pod will then restart. Or, better still, trigger a new deployment by running:
oc rollout latest "deploy-config-example"
Using oc rollout is better because it will re-deploy all pods if you have a scaled application, and you don't need to identify each pod and delete it.
You can scale deployments down (to zero) and then up again:
oc get deployments -n <your project> -o wide
oc get pods -n <your project> -o wide
oc scale --replicas=0 deployment/<your deployment> -n <your project>
oc scale --replicas=1 deployment/<your deployment> -n <your project>
watch oc get pods -n <your project> # wait until your deployment is up again
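On reasonably recent clients there is also a single-command alternative to scaling down and up again; a sketch, assuming an oc 4.x client where oc rollout restart is available for Deployments:

oc rollout restart deployment/<your deployment> -n <your project>
oc rollout status deployment/<your deployment> -n <your project>   # waits until the restarted pods are ready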
If you want to do it using the GUI:
Log in to OCP.
Click Workloads -> Deployment Configs.
Find the pod you want to restart.
On the right side, click on the 3 dots.
Click Start rollout.
If you delete your pod, or scale it to 0 and back to 1, you might lose some clients, because you are basically stopping and restarting your application. With a rollout, however, your existing pod waits for the new pod to get ready and is only then deleted. So a rollout is safer than deleting or scaling 0/1.
Thanks Noam Manos for your solution.
I've used "Application Console" in Openshift. I've navigated to Applications - Deployment - #3 (check for your active deployment) to see my pod with up and down arrows. Currently, I've 1 pod running. So, I've clicked on down arrow to scale down to 0 pod. Then, I clicked on up arrow to scale up to 1 pod.
Follow the steps below:
Log in to OpenShift.
Click on the Monitoring tab.
Select the component whose pod you want to restart.
Click the Actions drop-down (top right corner).
Delete the existing pod.
A new pod is automatically generated.
You can also go to the DeploymentConfig and choose the "Start rollout" option from Actions.
And if nothing else helps, there is also
Workloads -> ReplicationControllers
which control the replica counts.
If you delete such a controller, another one is created, which in turn creates your new pod.