OpenShift 4.6 Node and Master Config Files

Where are the OpenShift master and node host files in v4.6?
Previously, in v3, they were located at:
Master host file: /etc/origin/master/master-config.yaml
Node host file: /etc/origin/node/node-config.yaml

You can check the current kubelet configuration using the following procedure instead of reading a configuration file on the node hosts as in OCP v3, because the kubelet configuration is managed dynamically as of OCP v4.
Further information is here: Generating a file that contains the current configuration.
You can check it using the referenced procedure (generate the configuration file) or the oc CLI as follows.
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | \
jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
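For example, NODE_NAME can be filled in from oc get nodes first; a small sketch (the label selector just picks the first worker node, and the output file name is only illustrative):
$ NODE_NAME=$(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[0].metadata.name}')
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | jq '.kubeletconfig' > kubelet-config-${NODE_NAME}.json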

These files no longer exist in the same form as in OCP 3. To change anything on the machines themselves, you'll need to create MachineConfigs, as CoreOS is an immutable operating system. If you change anything manually on the filesystem and reboot the machine, your changes will typically be reset.
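To illustrate, a MachineConfig that lays down a file on every worker node might look roughly like the sketch below (the name, file path, and file contents are invented for the example; Ignition spec 3.1.0 is the version used by OCP 4.6):
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example-file                # illustrative name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/example/custom.conf      # illustrative path
          mode: 0644
          contents:
            source: data:text/plain;charset=utf-8;base64,ZXhhbXBsZT10cnVlCg==   # "example=true"
Applying it with oc apply -f causes the Machine Config Operator to roll the change out to the worker pool, rebooting nodes as needed.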
To modify worker nodes, the setting you are looking for can often be configured via a KubeletConfig (see Managing nodes - Modifying nodes). Note that only certain settings can be changed this way; others cannot be changed at all.
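As a sketch of that mechanism, a KubeletConfig raising maxPods on the workers could look something like this (the custom-kubelet label name is an assumption; you would first add it to the worker pool, e.g. oc label machineconfigpool worker custom-kubelet=enabled):
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-max-pods                  # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: enabled            # must match a label set on the target MachineConfigPool
  kubeletConfig:
    maxPods: 500                         # example of a tunable kubelet setting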
For the master config, it depends on what you actually want to change: you may apply the setting via a machineConfigPool, or, for example, edit API server settings via oc edit apiserver cluster.
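A couple of starting points for finding the right object (the resource names here are the standard cluster singletons):
$ oc get machineconfigpools              # shows the master and worker pools a MachineConfig can target
$ oc edit apiserver cluster              # cluster-wide API server settings (serving certs, TLS profile, etc.)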

Related

Openshift - API to get ARTIFACT_URL parameter of a pod or the version of its deployed app

What I want to do is make a web app that lists in one single view the version of every application deployed in our OpenShift (a fast view of versions). At the moment, the only way I have found to locate the version of an app deployed in a pod is the ARTIFACT_URL parameter in the environment view; that's why I ask for that parameter, but if there's another way to get a pod and the version of its currently deployed app, I'm also open to that option, as long as I can get it through an API. I may eventually also need an endpoint that retrieves the list of current pods.
I've looked into the OpenShift API and the only thing I've found that may help me is this GET, but if the parameter :id is what I think it is, it changes with every deploy, so I would need to modify it constantly, and that's not practical. Obviously, I'd also need an endpoint to get the list of IDs, or whatever lets me identify the pod when I ask for the ARTIFACT_URL.
Thanks!
There is a way to do that. See https://docs.openshift.com/enterprise/3.0/dev_guide/environment_variables.html
List Environment Variables
To list environment variables in pods or pod templates:
$ oc env <object-selection> --list [<common-options>]
This example lists all environment variables for pod p1:
$ oc env pod/p1 --list
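If you specifically want to go through an API, the same information is available from the Kubernetes pods endpoint; roughly something like this should work (my-project is a placeholder for your project, and this assumes ARTIFACT_URL is set directly on the container spec):
$ TOKEN=$(oc whoami -t)
$ API=$(oc whoami --show-server)
$ curl -sk -H "Authorization: Bearer $TOKEN" "$API/api/v1/namespaces/my-project/pods" \
    | jq -r '.items[] | [.metadata.name, (.spec.containers[].env[]? | select(.name=="ARTIFACT_URL") | .value)] | @tsv'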
I suggest redesigning builds and deployments if you don't have persistent app versioning information outside of Openshift.
If app versions need to be obtained from running pods (e.g. with oc rsh or oc env as suggested elsewhere), then you have a serious reproducibility problem. Git should be used for app versioning, and all app builds and deployments, even in dev and test environments should be fully automated.
Within Openshift you can achieve full automation with Webhook Triggers in your Build Configs and Image Change Triggers in your Deployment Configs.
Outside of Openshift, this can be done at no extra cost using Jenkins (which can even be run in a container if you have persistent storage available to preserve its settings).
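For reference, those triggers are ordinary fields on the two objects; trimmed-down fragments might look like this (names and the webhook secret are placeholders):
# BuildConfig fragment: rebuild on a GitHub push
triggers:
  - type: GitHub
    github:
      secret: <webhook-secret>
# DeploymentConfig fragment: redeploy when the image stream tag updates
triggers:
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
        - myapp
      from:
        kind: ImageStreamTag
        name: myapp:latest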
As a quick workaround you may also consider:
oc describe pods | grep ARTIFACT_URL
to get the list of values of your environment variable (here: ARTIFACT_URL) from all pods.
The corresponding list of pod names can be obtained either with 'oc get pods' or with a second call to oc describe:
oc describe pods | grep "Name:        "
(note the 8 spaces after "Name:", needed to filter out other Name: fields)
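Alternatively, a single jsonpath query prints the pod name and the ARTIFACT_URL value side by side (a sketch, assuming the variable is set directly on the container spec rather than injected from a ConfigMap or Secret):
$ oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].env[?(@.name=="ARTIFACT_URL")].value}{"\n"}{end}'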

How to manage settings in Openshift?

The profile.properties file is not found in the source code in the repository?
Is it possible to use environment variables in OpenShift?
If yes, how can I set -Dkeycloak.profile.feature.scripts=enabled in an OpenShift environment?
Environment variables are a first-class concept in OpenShift. There are many ways to use them:
You can set them directly on your BuildConfig to "bake them into" your containers. This isn't best practice, as they then won't change when you move them through environments, but it may be necessary to configure your build or to set things that won't change (e.g. set the port number node.js uses to match the official node.js image with "PORT=8080").
You can put such variables into either ConfigMap or Secret configuration objects to easily share them between many similar BuildConfigs.
You can set them directly on a DeploymentConfig so that they are set for every pod launched by that deployment. This is a fairly common way of setting up application-specific environment variables. It's not a good idea to use this for settings that are shared between multiple applications, as you would have to change common variables in many places.
You can set them up in ConfigMaps and Secrets and apply them to multiple DeploymentConfigs. That way you can manage them in one place (a sketch of this follows below).
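A rough sketch of that last approach (every name here is invented): define the shared settings once, then pull them into each DeploymentConfig's pod template with envFrom.
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-settings                  # illustrative name
data:
  LOG_LEVEL: info
  FEATURE_FLAGS: "scripts=enabled"
---
# DeploymentConfig pod template fragment:
containers:
  - name: myapp
    envFrom:
      - configMapRef:
          name: shared-settings
      - secretRef:
          name: database-creds           # illustrative Secret with DB credentials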
It's common to see devs use a .env file that is named in .gitignore so it's not in git. In the past I have written scripts to load that into a Secret within OpenShift, then use envFrom to set that Secret on the deployment, and have a .env.staging and .env.live that we encrypt into git with git-secret.
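The loading step can be a one-liner these days; a sketch with made-up names:
$ oc create secret generic app-env --from-env-file=.env
$ oc set env dc/myapp --from=secret/app-env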
The problem with .env files is that they tend to get messy and accumulate unused junk after a while. So we broke the file up: one Secret for database creds, separate Secrets for each API's creds, a ConfigMap for app-specific settings, and a ConfigMap for shared settings.
These days we use Helmfile to load all our config from git based on git webhooks. All the config is YAML in a git repo (with the secret YAML encrypted). If you merge a change to the config git repo, a webhook handler decrypts the config and runs Helmfile to update the settings in OpenShift. I am in the process of open-sourcing everything, including a chatbot to manage releases (optional), over on GitHub.
I should also say that OpenShift automatically creates many environment variables to help you configure your apps. In each project, a lot of variables are set in every pod telling you the details of all the services you have set up in that project.
OpenShift also sets up internal DNS entries for your services. This means that if App A uses App B, you don't have to configure A with a URL for B yourself. Rather, there will be a DNS entry for B, and you can use the env vars that OpenShift sets on A to work out the DNS entry and the port number to use (e.g. the DNS entry includes the project name, and that is automatically set as an env var by OpenShift). So our apps can find a Redis service running in the same project using that technique.
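To see this in action, for a Service named redis in a project named myproject (both names are examples, and <some-pod> is whichever pod you want to inspect), a pod in that project will contain something like:
$ oc rsh <some-pod> env | grep REDIS_SERVICE
REDIS_SERVICE_HOST=172.30.12.34          # example value; the service's cluster IP
REDIS_SERVICE_PORT=6379
The DNS name redis.myproject.svc.cluster.local resolves to the same service.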

Have `oc` follow a cluster depending on directory

I use the oc tool for several different clusters.
Since I am usually keeping local yaml files for any OpenShift objects I view/modify, either ad hoc or due to some config management scheme of the individual cluster, I have a separate directory on my machine for each cluster (which, in turn, is of course versioned in git). Let's call them ~/clusters/a/, ~/clusters/b/ etc.
Now, when I cd around on my local machine, the oc command uses the global ~/.kube/config to find the cluster I last logged in to. In other words, oc does not care at all about which directory I am in.
Is there a way to have oc store a "local" configuration (i.e. in ~/clusters/a/.kube_config or something like that), so that when I enter the ~/clusters/a/ directory, I am automatically working with that cluster without having to explicitly switch clusters with oc login?
You could set the KUBECONFIG environment variable to point to a different configuration file for each cluster. You would need to set the environment variable appropriately in each separate terminal session window.
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#set-the-kubeconfig-environment-variable
To expand on Graham's answer, KUBECONFIG can specify a list of config files which will be merged if more than one exist. The first to set a particular value wins, as described in the merging rules.
So you can add a local config with just the current-context, e.g. ~/clusters/a/.kube_config could be
current-context: projecta/192-168-99-100:8443/developer
and ~/clusters/b/.kube_config:
current-context: projectb/192-168-99-101:8443/developer
Obviously you need to adjust this for your particular cluster, using the format
current-context: <namespace>/<cluster>/<user>
Then set KUBECONFIG with a relative path and the global config
export KUBECONFIG=./.kube_config:~/.kube/config
Note that if ./.kube_config does not exist it will be ignored.
The current-context will then be overridden by the one defined in the local .kube_config, if one exists.
I tested this locally with 2 minishift clusters and it seemed to work OK. I have not tested the behaviour when writing config, though.
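If you want the switch to happen automatically on cd, one common approach (outside of oc itself, and an assumption on my part that adding a tool is acceptable) is direnv: drop an .envrc into each cluster directory, for example:
# ~/clusters/a/.envrc  -- requires direnv; run `direnv allow` once in the directory
export KUBECONFIG=$PWD/.kube_config:$HOME/.kube/config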

Containers reconfiguration in real-time

I have faced the following case and haven't found a clear answer.
Preconditions:
I have a Kubernetes cluster
there are some options related to my application (for example debug_level=Error)
there are pods running, and each of them uses configuration (env vars, a mount path, or CLI args)
later I need to change the value of some option (the same 'debug_level', Error -> Debug)
The Q is:
how should I notify my pods that the configuration has changed?
Earlier we could just send a HUP signal to the exact process directly or call systemctl reload app.service.
What are the best practices for this use-case?
Thanks.
I think this is something you could achieve using sidecar containers. This sidecar container could monitor for changes in the configuration and send the signal to the appropriate process. More info here: http://blog.kubernetes.io/2015/06/the-distributed-system-toolkit-patterns.html
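A very rough sketch of such a sidecar, with all names invented: it watches a mounted ConfigMap and HUPs the app process. Signalling across containers needs shareProcessNamespace, and the sidecar image is assumed to ship inotifywait and pkill.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-reloader         # illustrative
spec:
  shareProcessNamespace: true            # lets the sidecar see and signal the app's processes
  volumes:
    - name: config
      configMap:
        name: app-config                 # illustrative ConfigMap holding debug_level etc.
  containers:
    - name: app
      image: example/app:latest          # illustrative image
      volumeMounts:
        - name: config
          mountPath: /etc/app
    - name: config-reloader
      image: example/inotify-tools:latest   # illustrative image with inotifywait and pkill
      volumeMounts:
        - name: config
          mountPath: /etc/app
      command: ["sh", "-c"]
      args:
        - while inotifywait -e create,moved_to,modify /etc/app; do pkill -HUP my-app; done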
Tools like kubediff or kube-applier can compare your Kubernetes YAML files to what's running on the cluster.
https://github.com/weaveworks/kubediff
https://github.com/box/kube-applier

Service Fabric SDK 2.2.207 how to change data and log paths?

Since installing Service Fabric SDK 2.2.207 I'm not able to change the cluster data and log paths (with previous SDKs I could).
I tried:
Editing the registry keys in HKLM\Software\Microsoft\Service Fabric - they just revert back to C:\SfDevCluster\data and C:\SfDevCluster\log when the cluster is created.
Running PowerShell: & "C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1" -PathToClusterDataRoot d:\SfDevCluster\data -PathToClusterLogRoot d:\SfDevCluster\log - this works successfully, but upon changing the cluster mode to 1-node (a newly available configuration with this SDK), the cluster moves to the C drive.
Any help is appreciated!
Any time you switch cluster mode on a local dev box, the existing cluster is removed and a new one is created. You can use DevClusterSetup.ps1 to switch from a 5-node to a 1-node cluster by passing -CreateOneNodeCluster, and pass the data and log root paths to it as well.
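Putting that together, something along these lines should create the 1-node dev cluster with data and logs on D: (the paths are the ones from the question; the parameter names are those accepted by the SDK's setup script):
PS> & "C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1" `
      -CreateOneNodeCluster `
      -PathToClusterDataRoot D:\SfDevCluster\data `
      -PathToClusterLogRoot D:\SfDevCluster\log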