Is there a way in cloud foundry cf push command to have a manifest.yml with several "profiles"? - manifest

We have different setups (deployment parameters) for prod and non-prod environments, with regard to memory, instances, etc.
We are deploying our applications with Jenkins pipeline, on Pivotal Cloud Foundry environments, which is eventually calling a script with a "CF push" command.
We are examining using two different manifest.yml files (but dislike the duplication of identical parameters).
We are also examining using --var-file with two different vars files. We have concerns about backward compatibility and about the effort (we have many microservices) of adding so many files.
We want a manifest.yml that will look like this:
applications:
- name: myAppName
  services:
  - discovery
  - config-server
  profile:
    dev:
      memory: 1024M
      instances: 1
    prod:
      memory: 4096M
      instances: 4
Assuming we would need to pass a parameter such as profile=dev to the cf push command is fine.
In the DEV environment, 1 instance with 1024M of memory would be deployed; in PROD environments, 4 instances with 4096M of memory each would be deployed.

I suggest that you reconsider using variables in your manifest. You can use --var-file, but if you want to avoid having those files present you can just pass in multiple --var=<name>=<val> arguments instead.
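For the variables approach, a minimal sketch (the variable names here are illustrative): a shared manifest.yml declares ((memory)) and ((instances)) placeholders, and each environment passes its own values on the command line.
applications:
- name: myAppName
  memory: ((memory))
  instances: ((instances))
  services:
  - discovery
  - config-server
Then, for example:
cf push --var memory=1024M --var instances=1    # dev
cf push --var memory=4096M --var instances=4    # prod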
Alternatively, just keep dev.yml and prod.yml files; you can then run cf push -f dev.yml or cf push -f prod.yml and pick between the two. There's a little duplication, but the files are tiny, so it shouldn't be a big deal.
Hope that helps!

I don't think trying to achieve everything with CF CLI commands alone is the right way to do it.
I would achieve this much more simply by writing a bash script that runs cf push with whichever parameters each environment needs, as sketched below.
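A minimal sketch of such a wrapper script, assuming a shared manifest.yml with ((memory)) and ((instances)) placeholders as above (profile names and values are illustrative):
#!/usr/bin/env bash
# deploy.sh <profile> -- pick per-environment values and call cf push once.
set -euo pipefail

PROFILE="${1:?usage: deploy.sh dev|prod}"

case "$PROFILE" in
  dev)  MEMORY=1024M; INSTANCES=1 ;;
  prod) MEMORY=4096M; INSTANCES=4 ;;
  *)    echo "unknown profile: $PROFILE" >&2; exit 1 ;;
esac

cf push -f manifest.yml --var memory="$MEMORY" --var instances="$INSTANCES"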

Related

OpenShift single node PersistentVolume with hostPath requires privileged pods, how to set as default?

I am fairly new to OpenShift and have been using CRC (Code Ready Containers) for a little while, and now decided to install the single server OpenShift on bare metal using the Assisted-Installer method from https://cloud.redhat.com/blog/deploy-openshift-at-the-edge-with-single-node-openshift and https://console.redhat.com/openshift/assisted-installer/clusters/. This has worked well and I have a functional single-server.
As a single server in a test environment (without NFS available) I need/want to create PersistentVolumes with hostPath (localhost storage) - these work flawlessly in CRC. However, on the full install I ran into an issue when mounting PVCs to pods, because the pods were not running privileged. I edited the deployment config and added the lines below (within the containers hash):
- resources: {}
  ...
  securityContext:
    privileged: true
... however still had errors as the restricted SCC has 'allowPrivilegedContainer: false'. I have done a horrible hack of changing this to true, so adding the lines above to the deployment yaml works. However there must be an easier way as none of these hacks seem present in CRC. I checked and CRC pods run restricted, the restricted SCC has privileged set to false, and the Persistent Volume is also using hostPath. I also do not have to edit the deployment yaml as above in CRC - it just works (tm).
Guidance here shows that the containers must run privileged, however the containers in CRC are running restricted and the SCC still has 'allowPrivilegedContainer: false'.
https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-hostpath.html
An example app creation as below (from the Red Hat DO280 course) works without any massaging of privileges or deployment config in CRC, but on the real OpenShift server it requires the massaging above. As my server is purely for testing, I would like to make this easier without the hack job and deployment changes above.
oc new-app --name mysql --docker-image registry.access.redhat.com/rhscl/mysql-57-rhel7:5.7
oc create secret generic mysql --from-literal password=r3dh4t123
oc set env deployment mysql --prefix MYSQL_ROOT_ --from secret/mysql
oc set volumes deployment/mysql --name mysql-storage --add --type pvc --claim-size 2Gi --claim-mode rwo --mount-path /var/lib/mysql/data
oc get pods -l deployment=mysql
oc get pvc
Any help appreciated.
EDIT: I have overcome this now by enabling nfs-server and adding entries to /etc/exports. However, I'm still interested in understanding how CRC manages the above issue when using hostPath.
The short answer to this is: don't use hostPath.
You are using hostPath to make use of arbitrary disk space available on the underlying host's volume. hostPath can also be used to read/write any directory path on the underlying host's volume -- which, as you can imagine, should be used with great care.
Have a look at this as an alternative -- https://docs.openshift.com/container-platform/4.8/storage/persistent_storage/persistent-storage-local.html
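If you go the local-volume route, a minimal sketch of a statically defined local PersistentVolume (the linked docs do this through the Local Storage Operator; the node name, path, and storage class below are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-sc
  local:
    path: /mnt/local-storage
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-single-node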

Openshift - API to get ARTIFACT_URL parameter of a pod or the version of its deployed app

What I want to do is make a web app that lists, in one single view, the version of every application deployed in our OpenShift (a fast view of versions). At the moment, the only way I have found to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view, which is why I ask about that parameter; but if there is another way to get a pod and the version of its currently deployed app, I'm also open to that option, as long as I can get it through an API. Maybe I'd eventually also need an endpoint that retrieves the list of the current pods.
I've looked into the Openshift API and the only thing I've found that may help me is this GET but if the parameter :id is what I think, it changes with every deploy, so I would need to be modifying it constantly and that's not practical. Obviously, I'd also need an endpoint to get the list of IDs or whatever that let me identify the pod when I ask for the ARTIFACT_URL
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
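In more recent oc clients the old oc env command has been folded into oc set env; the equivalent listing call would be:
$ oc set env pod/p1 --list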
I suggest redesigning builds and deployments if you don't have persistent app versioning information outside of Openshift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments should be fully automated.
Within Openshift you can achieve full automation with Webhook Triggers in your Build Configs and Image Change Triggers in your Deployment Configs.
Outside of Openshift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either simply using 'oc get pods' or a second call to oc describe:
oc describe pods | grep "Name:        "
(notice the 8 spaces after "Name:" needed to filter out other Name: fields)
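If you prefer a single call that pairs each pod name with its ARTIFACT_URL, here is a sketch using jsonpath (it assumes the variable is set directly on the first container rather than injected from a ConfigMap or Secret):
oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].env[?(@.name=="ARTIFACT_URL")].value}{"\n"}{end}'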

How to manage settings in Openshift?

The profile.properties file is not found in the source code in the repository.
Is it possible to use environment variables in OpenShift?
If yes, how can I set -Dkeycloak.profile.feature.scripts=enabled in an OpenShift environment?
Environment Variables are a first-class concept in OpenShift. There are many ways to use them:
You can set them directly on your BuildConfig to "bake them into" your containers. This isn't best practice, as they then won't change when you move the image through environments, but it may be necessary to configure your build or to set things that won't change (e.g. set the port number Node.js uses to match the official Node.js image with "PORT=8080").
You can put such variables into either ConfigMap or Secret configuration objects to easily share them between many similar BuildConfigs.
You can set them directly on a DeploymentConfig so that they are set for every pod launched by that deployment. This is a fairly common way of setting application-specific environment variables. It's not a good idea to use this for settings that are shared between multiple applications, as you would have to change common variables in many places.
You can set them up in ConfigMaps and Secrets and apply them to multiple DeploymentConfigs. That way you can manage them in one place.
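For the specific flag in the question, a hedged sketch using oc set env (the DeploymentConfig name my-keycloak is a placeholder, and this assumes your image honours a JAVA_OPTS_APPEND-style variable; adjust to whatever your image actually reads):
# Set it directly on the DeploymentConfig:
oc set env dc/my-keycloak JAVA_OPTS_APPEND="-Dkeycloak.profile.feature.scripts=enabled"
# Or keep it in a ConfigMap and reference it from the deployment:
oc create configmap keycloak-settings --from-literal=JAVA_OPTS_APPEND="-Dkeycloak.profile.feature.scripts=enabled"
oc set env dc/my-keycloak --from=configmap/keycloak-settings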
It's common to see devs use a .env file that is listed in .gitignore so it isn't in git. In the past I have written scripts to load that into a Secret within OpenShift, then used envFrom to set that Secret on the deployment. We then have .env.staging and .env.live files that we git-secret encrypt into git.
The problem with .env files is that they tend to get messy and accumulate unused junk after a while. So we broke the file up into one Secret for database creds, separate Secrets for each API's creds, a ConfigMap for app-specific settings, and a ConfigMap for shared settings.
These days we use Helmfile to load all our config from git based on git webhooks. All the config is yaml in a git repo (with secret yaml encrypted). If you merge a change to the config git repo a webhook handler decrypts the config and runs Helmfile to update the settings in openshift. I am in the process of open sourcing everything including using a chatbot to manage releases (optional) over on GitHub
I should also say that OpenShift automatically creates many environment variables to help you configure your apps. In each project a lot of variables are set in every pod telling you the details of all the services you have set up in that project.
OpenShift also sets up internal DNS entries for your services. This means that if App A uses App B, you don't have to configure A with a URL for B yourself. Rather, there will be a DNS entry for B, and you can use the env vars that OpenShift sets on A to work out the DNS entry and the port number to use (e.g. the DNS entry includes the project name, and that is automatically set as an env var by OpenShift). So our apps can find a redis service running in the same project using that technique.
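For example, assuming a service named redis in a project called myproject, every pod in that project sees variables along these lines (values are placeholders) and can also reach the service by DNS:
REDIS_SERVICE_HOST=<cluster IP of the redis service>
REDIS_SERVICE_PORT=6379
# DNS name: redis.myproject.svc.cluster.local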

Deployment strategies in Openshift v3

I know I can have two different strategies when I want to deploy in Openshift.
Rolling strategy: OpenShift waits for new pods to become ready before scaling down the production pods.
Recreate strategy: OpenShift removes the old instances and then starts new ones, returning a 503 HTTP error in the meantime. Use it for databases, or when two or more instances can't coexist.
To change the deployment configuration:
oc edit dc/mydeploy-conf -o json
"spec": {
    "strategy": {
        "type": "Recreate/Rolling"
    },
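If you prefer a non-interactive change (e.g. from a pipeline), a sketch with oc patch, picking one of the two types:
oc patch dc/mydeploy-conf -p '{"spec":{"strategy":{"type":"Recreate"}}}'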
EDIT 1 -- Adding info from the project github suggested by Clayton
https://github.com/openshift/origin/blob/master/examples/deployment/README.md
These strategies are not built into OpenShift v3, but they can be implemented manually.
Blue-Green Deployment
Blue-Green deployments involve running two versions of an application at the same time and moving production traffic from the old version to the new version (more about blue-green deployments). There are several ways to implement a blue-green deployment in OpenShift.
Create two copies of the example application
oc new-app openshift/deployment-example:v1 --name=bluegreen-example-old
oc new-app openshift/deployment-example:v2 --name=bluegreen-example-new
Create a route that points to the old service
oc expose svc/bluegreen-example-old --name=bluegreen-example
Edit the route and change the service to bluegreen-example-new
oc edit route/bluegreen-example
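The route switch can also be scripted instead of edited by hand, for example with a patch (a sketch):
oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"bluegreen-example-new"}}}'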
A/B Deployment
A/B deployments generally imply running two (or more) versions of the application code or application configuration at the same time for testing or experimentation purposes.
The simplest form of an A/B deployment is to divide production traffic between two or more distinct shards -- a single group of instances with homogeneous configuration and code.
More complicated A/B deployments may involve a specialized proxy or load balancer that assigns traffic to specific shards based on information about the user or application (all "test" users get sent to the B shard, but regular users get sent to the A shard). A/B deployments can be considered similar to A/B testing, although an A/B deployment implies multiple versions of code and configuration, whereas A/B testing often uses one codebase with application-specific checks.
Example:
One service, multiple deployment configs
OpenShift, through labels and deployment configurations, can support multiple simultaneous shards being exposed through the same service. To the consuming user, the shards are invisible. An example of the simplest possible sharding is described below:
Create the first shard of the application based on the example deployment images
oc new-app openshift/deployment-example --name=ab-example-a --labels=ab-example=true SUBTITLE="shard A"
Edit the newly created shard to set a label ab-example=true that will be common to all shards:
oc edit dc/ab-example-a
In the editor, add the line ab-example: "true" underneath spec.selector and spec.template.metadata.labels alongside the existing deploymentconfig=ab-example-a label. Save and exit the editor.
Trigger a re-deployment of the first shard to pick up the new labels:
oc deploy ab-example-a --latest
Create a service that uses the common label:
oc expose dc/ab-example-a --name=ab-example --selector=ab-example=true
Make the application available via a route:
oc expose svc/ab-example
Create a second shard based on the same source image as the first shard but different tagged version, and set a unique value:
oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE="shard B" COLOR="red"
Edit the newly created shard to set a label ab-example=true that will be common to all shards:
oc edit dc/ab-example-b
In the editor, add the line ab-example: "true" underneath spec.selector and spec.template.metadata.labels alongside the existing deploymentconfig=ab-example-b label. Save and exit the editor.
Trigger a re-deployment of the second shard to pick up the new labels:
oc deploy ab-example-b --latest
At this point, both sets of pods are being served under the route. However, since both browsers (by leaving a connection open) and the router (by default through a cookie) will attempt to preserve your connection to a backend server, you may not see both shards being returned to you. To force your browser to one or the other shard, use the scale command:
oc scale dc/ab-example-a --replicas=0
oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0
https://github.com/openshift/origin/blob/master/examples/deployment/README.md is probably the best documentation for the types of strategies and how to achieve them

Executing mrjob boostrap commands on head-node only

I have an mrjob configuration that includes loading a large file from S3 into HDFS. I would like to include these commands in the configuration file, but it seems that all bootstrap commands execute on all of the nodes in the cluster. This is overkill and might also create synchronization problems.
Is there some way to include startup commands for the master node only in the mrjob configuration or is the only solution to SSH into the head node after the cluster is up to perform these operations?
Yoav
Well, you could have your steps start with a mapper and set mapred.map.tasks=1 in your jobconf. I've never tried it, but it seems like it should work.
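A sketch of what that could look like in mrjob.conf (this assumes the classic mapred.map.tasks option name; newer Hadoop versions spell it mapreduce.job.maps):
runners:
  emr:
    jobconf:
      mapred.map.tasks: 1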
Another suggestion:
Use a filesystem or zookeeper for coordination:
# Pseudocode: whichever node wins the lock does the expensive work once;
# the other nodes wait until it has finished.
if get_exclusive_lock_on_resource(filesystem_path_or_zookeeper_path):
    do_the_expensive_bit()
    release_lock(filesystem_path_or_zookeeper_path)
while expensive_bit_not_complete():
    sleep(10)