Why does the scope of oc get all (and oc delete all --all) in OpenShift 3.10 not include components of type PersistentVolumeClaim as well? A separate oc get pvc (and oc delete pvc --all) is required.
Is there a particular reason for treating these objects specially? (Apparently they are only special in some regards; for instance, application templates can create them quite normally along with other components.)
Update: Components of type Secret are also treated specially, perhaps in a similar way and for similar reasons. One reason I can think of is that these components typically have longer lifetimes than applications.
I have by now concluded (also from the comments received) that this behavior is intended by design, to guard against accidental deletion of precious persistent storage whose lifetime may substantially exceed that of single (versions of) applications.
As a consequence, I have now refactored the application template a bit. So far, a single template (YAML file) was in charge of creating all components (except secrets). This led to an "unbalanced" situation in which one oc new-app --template=app created the app, but two oc delete commands (oc delete all --selector app=... and oc delete pvc --selector app=...) were needed to delete it entirely. After splitting the template into app.yaml and pvc.yaml, the new, "balanced" arrangement looks like this:
# create app (including its persistent storage)
oc new-app --template=app
oc new-app --template=pvc
# delete app (including its persistent storage)
oc delete all --selector app=...
oc delete pvc --selector app=...
I still leave secrets out of this scope and create them with oc create secret once up front.
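For illustration, the pvc template could look roughly like this; the claim name, the app label value and the 1Gi size are placeholders, not the exact values from my project:
# sketch of pvc.yaml: a template that creates only the PersistentVolumeClaim
cat > pvc.yaml <<'EOF'
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: pvc
objects:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: app-data
    labels:
      app: app
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
EOF
# register the template in the project so that oc new-app --template=pvc finds it
oc create -f pvc.yaml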
I am fairly new to OpenShift and have been using CRC (CodeReady Containers) for a little while. I have now decided to install single-node OpenShift on bare metal using the Assisted Installer method from https://cloud.redhat.com/blog/deploy-openshift-at-the-edge-with-single-node-openshift and https://console.redhat.com/openshift/assisted-installer/clusters/. This has worked well and I have a functional single-node cluster.
As it is a single server in a test environment (without NFS available), I need/want to create PersistentVolumes with hostPath (local host storage); these work flawlessly in CRC. However, on the full install I run into an issue when mounting PVCs to pods, as the pods are not running privileged. I edited the deployment config and added the lines below (within the containers hash):
- resources: {}
  ...
  securityContext:
    privileged: true
However, I still had errors because the restricted SCC has allowPrivilegedContainer: false. I have done a horrible hack of changing this to true, so that adding the lines above to the deployment YAML works. There must be an easier way, though, as none of these hacks seem to be needed in CRC. I checked: CRC pods run under the restricted SCC, that SCC has allowPrivilegedContainer set to false, and the PersistentVolume also uses hostPath. Yet I do not have to edit the deployment YAML as above in CRC - it just works (tm).
The guidance here shows that the containers must run privileged, yet the containers in CRC run restricted and the SCC there still has allowPrivilegedContainer: false.
https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-hostpath.html
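(For what it's worth, my reading of that guidance is that, instead of editing the restricted SCC itself, one could grant the built-in privileged SCC to the service account the deployment runs as; a sketch, where the default service account and the project name are assumptions on my part:
# allow pods using the "default" service account in "myproject" to request privileged mode
oc adm policy add-scc-to-user privileged -z default -n myproject
That still feels heavier than whatever CRC is doing, though.)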
An example app creation as below (from the Red Hat DO280 course) works without any massaging of privileges or the deployment config in CRC, but on a real OpenShift server it requires the massaging described above. As my server is purely for testing, I would like to make this easier without the SCC hackjob and deployment changes.
oc new-app --name mysql --docker-image registry.access.redhat.com/rhscl/mysql-57-rhel7:5.7
oc create secret generic mysql --from-literal password=r3dh4t123
oc set env deployment mysql --prefix MYSQL_ROOT_ --from secret/mysql
oc set volumes deployment/mysql --name mysql-storage --add --type pvc --claim-size 2Gi --claim-mode rwo --mount-path /var/lib/mysql/data
oc get pods -l deployment=mysql
oc get pvc
Any help appreciated.
EDIT: I have overcome this for now by enabling nfs-server and adding entries to /etc/exports. However, I am still interested in understanding how CRC manages the above issue when using hostPath.
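For reference, the kind of PersistentVolume I now point at that NFS export looks roughly like this (server address, export path, name and size are placeholders):
# sketch of an NFS-backed PersistentVolume matching the /etc/exports entry
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv01
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10
    path: /exports/pv01
EOF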
The short answer to this is: don't use hostPath.
You are using hostPath to make use of arbitrary disk space available on the underlying host's volume. hostPath can also be used to read/write any directory path on the underlying host's volume -- which, as you can imagine, should be used with great care.
Have a look at this as an alternative -- https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-local.html
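If you prefer to provision the storage statically rather than installing the Local Storage Operator described there, note that the Operator ultimately creates PersistentVolumes of type local, which you can also define by hand. A sketch, where the path, capacity, storage class name and node hostname are placeholders:
# sketch of a statically provisioned "local" PersistentVolume pinned to one node
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv01
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-sc
  local:
    path: /mnt/local-storage/pv01
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-sno-node
EOF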
oc new-app always creates a DeploymentConfig. Is there an option to create a Deployment instead of a DeploymentConfig?
Why? DeploymentConfig is a proprietary, legacy, Red Hat-only resource kind. I would prefer a modern, cross-platform, industry-standard Deployment.
Current versions of oc have been creating Deployments for quite some time now:
$ oc new-app --docker-image=<IMAGE> --name=my-application
--> Found container image [..]
    * An image stream tag will be created as "my-application:latest" that will track this image
--> Creating resources ...
    imagestream.image.openshift.io "my-application" created
    deployment.apps "my-application" created
    service "my-application" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose service/my-application'
    Run 'oc status' to view your app.
$ oc get deployment
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
my-application   1/1     1            1           7s
$ oc get deploymentconfig
No resources found in simon namespace.
So you should update your oc client as you seem to be using an old version (my output above is with a 4.6 client).
The old behaviour of creating a DeploymentConfig can still be forced by using the --as-deployment-config option:
$ oc new-app --docker-image=<IMAGE> --name=my-application --as-deployment-config
Note that DeploymentConfigs still have their place if you want to use features like triggers, automatic rollback, lifecycle hooks, or custom strategies (DeploymentConfig-specific features).
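As an illustration of one such DeploymentConfig-only convenience, a lifecycle hook can be attached straight from the CLI; the container name and the command here are placeholders:
# run a command in a hook pod before each rollout of the DeploymentConfig
oc set deployment-hook dc/my-application --pre -c my-application -- /bin/sh -c 'echo pre-rollout check'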
What I want to do is to make a web app that lists in one single view the version of every application deployed in our OpenShift (a quick view of versions). At this moment, the only way I have seen to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view; that's why I ask for that parameter, but if there's another way to get a pod and the version of its currently deployed app, I'm also open to that option as long as I can get it through an API. Maybe I'd eventually also need an endpoint that retrieves the list of the current pods.
I've looked into the OpenShift API and the only thing I've found that may help me is this GET endpoint, but if the :id parameter is what I think it is, it changes with every deploy, so I would need to be modifying it constantly, and that's not practical. Obviously, I'd also need an endpoint to get the list of IDs, or whatever lets me identify the pod, when I ask for the ARTIFACT_URL.
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
I suggest redesigning builds and deployments if you don't have persistent app versioning information outside of Openshift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments should be fully automated.
Within Openshift you can achieve full automation with Webhook Triggers in your Build Configs and Image Change Triggers in your Deployment Configs.
Outside of Openshift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
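For example, the triggers mentioned above can be wired up from the CLI; the build config, deployment config, image stream tag and container names below are placeholders:
# add a GitHub webhook trigger to the build and an image change trigger to the deployment
oc set triggers bc/myapp --from-github
oc set triggers dc/myapp --from-image=myproject/myapp:latest -c myapp
# the webhook URL to configure in GitHub can be read from the build config description
oc describe bc/myapp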
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either simply with 'oc get pods' or with a second call to oc describe:
oc describe pods | grep "Name:        "
(notice the 8 spaces needed to filter out other Names:)
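Alternatively, the pod name and the variable value can be combined in a single call using a jsonpath template (this assumes ARTIFACT_URL is defined on the first container of each pod):
# print "<pod name>  <ARTIFACT_URL>" for every pod in the current project
oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].env[?(@.name=="ARTIFACT_URL")].value}{"\n"}{end}'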
The command oc describe dc/backend is only showing the last three deployments. Due to a troublesome rollout, the last three deployments today were the same code. Is it possible to see a longer history of deployments to make it easier to see the one we want to deploy with oc rollback backend --to-version=X?
First, you need to check your current revisionHistoryLimit in the DeploymentConfig resource using oc get dc/backend -o yaml [0], and edit it to suit your needs.
[0] https://docs.openshift.com/container-platform/3.9/dev_guide/deployments/how_deployments_work.html#creating-a-deployment-configuration
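For example, keeping the last ten deployments around for rollback could be done non-interactively like this (the value 10 is just an example):
# raise the number of old replication controllers kept by the DeploymentConfig
oc patch dc/backend -p '{"spec":{"revisionHistoryLimit":10}}'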
I know I can have two different strategies when I want to deploy in Openshift.
Rolling strategy: Openshift waits for new pods to become ready before scaling down the production pods.
Recreate strategy: OpenShift removes the old instances and then starts new ones; clients get a 503 HTTP error in the meantime. Use it for databases, or whenever two or more instances cannot coexist.
To change the deployment configuration:
oc edit dc/mydeploy-conf -o json
"spec": {
"strategy": {
"type": "Recreate/Rolling"
},
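The same change can be made without the interactive editor; a sketch that switches the (placeholder) deployment config to the Recreate strategy and clears the now-unused rolling parameters:
# switch the strategy type; the null removes any leftover rollingParams
oc patch dc/mydeploy-conf --type=merge -p '{"spec":{"strategy":{"type":"Recreate","rollingParams":null}}}'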
EDIT 1 -- Adding info from the project GitHub suggested by Clayton
https://github.com/openshift/origin/blob/master/examples/deployment/README.md
These strategies are not built into OpenShift v3, but they can be implemented manually.
Blue-Green Deployment
Blue-Green deployments involve running two versions of an application at the same time and moving production traffic from the old version to the new version (more about blue-green deployments). There are several ways to implement a blue-green deployment in OpenShift.
Create two copies of the example application
oc new-app openshift/deployment-example:v1 --name=bluegreen-example-old
oc new-app openshift/deployment-example:v2 --name=bluegreen-example-new
Create a route that points to the old service
oc expose svc/bluegreen-example-old --name=bluegreen-example
Edit the route and change the service to bluegreen-example-new
oc edit route/bluegreen-example
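The same switch can also be scripted instead of going through the editor; the route's backend service is the spec.to.name field:
# point the existing route at the new version's service (the actual cut-over)
oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"bluegreen-example-new"}}}'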
A/B Deployment
A/B deployments generally imply running two (or more) versions of the application code or application configuration at the same time for testing or experimentation purposes.
The simplest form of an A/B deployment is to divide production traffic between two or more distinct shards, where each shard is a single group of instances with homogeneous configuration and code.
More complicated A/B deployments may involve a specialized proxy or load balancer that assigns traffic to specific shards based on information about the user or application (all "test" users get sent to the B shard, while regular users get sent to the A shard). A/B deployments can be considered similar to A/B testing, although an A/B deployment implies multiple versions of code and configuration, whereas A/B testing often uses one codebase with application-specific checks.
Example:
One service, multiple deployment configs
OpenShift, through labels and deployment configurations, can support multiple simultaneous shards being exposed through the same service. To the consuming user, the shards are invisible. An example of the simplest possible sharding is described below:
Create the first shard of the application based on the example deployment images
oc new-app openshift/deployment-example --name=ab-example-a --labels=ab-example=true SUBTITLE="shard A"
Edit the newly created shard to set a label ab-example=true that will be common to all shards:
oc edit dc/ab-example-a
In the editor, add the line ab-example: "true" underneath spec.selector and spec.template.metadata.labels alongside the existing deploymentconfig=ab-example-a label. Save and exit the editor.
Trigger a re-deployment of the first shard to pick up the new labels:
oc deploy ab-example-a --latest
Create a service that uses the common label:
oc expose dc/ab-example-a --name=ab-example --selector=ab-example=true
Make the application available via a route:
oc expose svc/ab-example
Create a second shard based on the same source image as the first shard, but with a different tagged version, and set a unique value:
oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE="shard B" COLOR="red"
Edit the newly created shard to set a label ab-example=true that will be common to all shards:
oc edit dc/ab-example-b
In the editor, add the line ab-example: "true" underneath spec.selector and spec.template.metadata.labels alongside the existing deploymentconfig=ab-example-b label. Save and exit the editor.
Trigger a re-deployment of the second shard to pick up the new labels:
oc deploy ab-example-b --latest
At this point, both sets of pods are being served under the route. However, since both browsers (by leaving a connection open) and the router (by default through a cookie) will attempt to preserve your connection to a backend server, you may not see both shards being returned to you. To force your browser to one or the other shard, use the scale command:
oc scale dc/ab-example-a --replicas=0
oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0
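The two interactive oc edit steps above can also be scripted; a sketch for the first shard (the same patch, with ab-example-b, applies to the second one):
# add the common ab-example=true label to both the selector and the pod template
oc patch dc/ab-example-a -p '{"spec":{"selector":{"ab-example":"true"},"template":{"metadata":{"labels":{"ab-example":"true"}}}}}'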
https://github.com/openshift/origin/blob/master/examples/deployment/README.md is probably the best documentation for the types of strategies and how to achieve them