How to patch an uploaded template on OpenShift

I have a template that I have uploaded to openshift.
$ oc get templates | grep jenkins
jenkins-mycompany Jenkins persistent image 9 (all set) 9
When I get the template, you can see the parameters that are set:
$ oc get template jenkins-mycompany -o json
...
{
"description": "Name of the ImageStreamTag to be used for the Jenkins image.",
"displayName": "Jenkins ImageStreamTag",
"name": "JENKINS_IMAGE_STREAM_TAG",
"value": "jenkins-mycompany:2.0.0-18"
}
I am creating a CI process to build a new Jenkins image and update the template that is uploaded into OpenShift.
I want all params set...
I have tried
oc process -f deploy.yml --param-file=my-param-file | oc create -f-
cat mydeploy.json | oc create -f-
The only way I can get this to work is to do an oc delete templates jenkins-mycompany and then oc create -f deploy.yml.
I want to just patch the value of that one parameter so when I build 2.0.0-19, I just patch the template.

Openshift CLI Reference
You want to use the patch command like so:
oc patch <object_type> <object_name> -p <changes>
Note, however, that a Template has no spec field; its parameters live in the top-level parameters array, so a strategic-merge patch such as '{"spec":{"unschedulable":true}}' (that is a Node example) will not work here. Use a JSON patch that targets the parameter by its index in that array, for example:
oc patch template jenkins-mycompany --type=json -p '[{"op": "replace", "path": "/parameters/0/value", "value": "jenkins-mycompany:2.0.0-19"}]'
(Adjust /parameters/0 to the actual position of JENKINS_IMAGE_STREAM_TAG in the array.) Alternatively, oc replace -f deploy.yml updates the uploaded template in place without the delete/create cycle.
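The same fix can be driven from a patch file, which is easier to maintain in a CI pipeline. A hedged sketch: assuming JENKINS_IMAGE_STREAM_TAG is the first entry in the template's parameters array (adjust the index to its actual position), the JSON-patch document could look like:

```json
[
  {
    "op": "replace",
    "path": "/parameters/0/value",
    "value": "jenkins-mycompany:2.0.0-19"
  }
]
```

which the build job could apply with oc patch template jenkins-mycompany --type=json -p "$(<patch.json)" after each image build.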

Related

Cannot push image to OCR container repository - unknown: Tenant with namespace - not found

I am trying to push an image to OCR within my 'training' compartment but docker returns the message: "unknown: Tenant with namespace training not found"
The compartment is there:
oci iam compartment list --all --output table --compartment-id-in-subtree true --query "data [?\"lifecycle-state\" =='ACTIVE'].{Name:name}" | grep training
| training |
Create repository 'ocr1'
export DISPLAY_NAME=ocr1
oci artifacts container repository create \
--compartment-id $C \
--is-public false \
--display-name $DISPLAY_NAME
Docker login
cat token | docker login fra.ocir.io --username=${NS}/api.user --password-stdin
Login Succeeded
Tag image and push
docker pull alpine:latest
docker tag alpine:latest fra.ocir.io/training/ocr1/alpine:latest
docker push fra.ocir.io/training/ocr1/alpine:latest
The push refers to repository [fra.ocir.io/training/ocr1/alpine]
7cd52847ad77: Retrying in 1 second
unknown: Tenant with namespace training not found
I am only able to push to root compartment ... (not what I want)
docker tag alpine:latest fra.ocir.io/$NS/ocr1/alpine:latest
docker push fra.ocir.io/$NS/ocr1/alpine:latest
The push refers to repository [fra.ocir.io/<NS>/ocr1/alpine]
7cd52847ad77: Layer already exists
latest: digest: sha256:e2e16842c9b54d985bf1ef9242a313f36b856181f188de21313820e177002501 size: 528
Why can't I push to a given compartment?
Thank you
I'm guessing you may be experiencing a permissions issue. Admins have more permissions in the root compartment than in a net-new compartment, where you need to specify access policies yourself. Is this doc helpful?
https://docs.oracle.com/en-us/iaas/Content/Registry/Concepts/registrypolicyrepoaccess.htm
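For reference, OCIR image paths always embed the tenancy's object-storage namespace, never a compartment name; the compartment only determines where the repository object lives and is selected when the repository is created (as done above). A minimal sketch of composing a valid tag, with a hypothetical namespace value (the real one comes from oci os ns get):

```shell
# OCIR tags take the form <region-key>.ocir.io/<tenancy-namespace>/<repo-path>:<tag>.
REGION=fra.ocir.io
NS=mytenancynamespace            # hypothetical; obtain with: oci os ns get
REPO=ocr1/alpine                 # repo path as created in the compartment
TAG="$REGION/$NS/$REPO:latest"
echo "$TAG"
```

This matches the observation above that the push only succeeds when $NS, not the compartment name, is used in the path.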

Install input secret into OpenShift build configuration

I have an OpenShift 3.9 build configuration my_bc and a secret my_secret of type kubernetes.io/ssh-auth. The secret was created like so:
oc create secret generic my_secret \
--type=kubernetes.io/ssh-auth \
--from-file=key
I have installed it as source secret into my_bc, and oc get bc/my_bc -o yaml reveals this spec:
source:
contextDir: ...
git:
uri: ...
sourceSecret:
name: my_secret
type: Git
As such, it is already effective in the sense that the OpenShift builder can pull from my private Git repository and produce an image with its Docker strategy.
I would now like to add my_secret also as an input secret to my_bc. My understanding is that this would not only allow the builder to make use of it (as source secret), but would allow other components inside the build to pick it up as well (as input secret). E.g. for the Docker strategy, it would exist in WORKDIR.
The documentation explains this with an example that adds the input secret when a build configuration is created:
oc new-build \
openshift/nodejs-010-centos7~https://github.com/openshift/nodejs-ex.git \
--build-secret secret-npmrc
Now the corresponding spec refers to the secret under secrets (not: sourceSecret), presumably because it is now an input secret (not: source secret).
source:
git:
uri: https://github.com/openshift/nodejs-ex.git
secrets:
- destinationDir: .
secret:
name: secret-npmrc
type: Git
oc set build-secret apparently allows adding source secrets (as well as push and pull secrets, which are for interacting with container registries) to a build configuration with the command line argument --source (as well as --push/--pull). But what about input secrets? I have not found out yet.
So I have these questions:
How can I add my_secret as input secret to an existing build configuration such as my_bc?
Where would the input secret show up at build time, e.g. under which path could a Dockerfile pick up the private key that is stored in my_secret?
This procedure now works for me (thanks to @GrahamDumpleton for his guidance):
leave the build configuration's source secret as is for now; oc get bc/my_bc -o jsonpath='{.spec.source.sourceSecret}' reports map[name:my_secret] (w/o path)
add input secret to build configuration at .spec.source.secrets with YAML corresponding to oc explain bc.spec.source.secrets: oc edit bc/my_bc
sanity checks: oc get bc/my_bc -o jsonpath='{.spec.source.secrets}' reports [map[destinationDir:secret secret:map[name:my_secret]]]; oc describe bc/my_bc | grep 'Source Secret:' reports Source Secret: my_secret (no path) and oc describe bc/my_bc | grep "Build Secrets:" reports Build Secrets: my_secret->secret
access secret inside Dockerfile in a preliminary way: COPY secret/ssh-privatekey secret/my_secret, RUN chmod 0640 secret/my_secret; adjust ssh-privatekey if necessary (as suggested by oc get secret/my_secret -o jsonpath='{.data}' | sed -ne 's/^map\[\(.*\):.*$/\1/p')
rebuild and redeploy image
sanity check: oc exec -it <pod> -c my_db file /secret/my_secret reports /secret/my_secret: PEM RSA private key (the image's WORKDIR is /)
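The Dockerfile side of the "access secret" step above, as a minimal sketch (assuming destinationDir is secret, as the sanity checks report, and that the key inside the secret is named ssh-privatekey):

```dockerfile
# The input secret is materialized under <destinationDir> in the build
# context, so it can be copied like any other file.
COPY secret/ssh-privatekey secret/my_secret
RUN chmod 0640 secret/my_secret
```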
In the comments to the question it mentions to patch the BuildConfig. Here is a patch that works on v3.11.0:
$ cat patch.json
{
"spec": {
"source": {
"secrets": [
{
"secret": {
"name": "secret-npmrc"
},
"destinationDir": "/etc"
}
]
}
}
}
$ oc patch -n your-eng bc/tag-realworld -p "$(<patch.json)"
buildconfig "tag-realworld" patched
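A side note on the "$(<patch.json)" idiom used in that command: in bash, $(<file) expands to the file's contents, so the patch can be kept in a file and passed inline. A small self-contained sketch of the idea:

```shell
# Write a patch document to a temporary file, then read it back.
# "$(cat file)" is the portable equivalent of bash's "$(<file)".
printf '%s' '{"spec": {"source": {}}}' > /tmp/patch.json
PATCH="$(cat /tmp/patch.json)"
echo "$PATCH"
```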

Sharing secret across namespaces

Is there a way to share secrets across namespaces in Kubernetes?
My use case is: I have the same private registry for all my namespaces and I want to avoid creating the same secret for each.
Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Basically, you will have to create the secret for every namespace.
https://kubernetes.io/docs/concepts/configuration/secret/#details
They can only be referenced by pods in that same namespace. But you can just copy a secret from one namespace to another. Here is an example of copying the localdockerreg secret from the default namespace to dev:
kubectl get secret localdockerreg --namespace=default --export -o yaml | kubectl apply --namespace=dev -f -
###UPDATE###
In Kubernetes v1.14 the --export flag is deprecated. So, the following command with the -o yaml flag will work without a warning in forthcoming versions.
kubectl get secret localdockerreg --namespace=default -oyaml | kubectl apply --namespace=dev -f -
or below if source namespace is not necessarily default
kubectl get secret localdockerreg --namespace=default -oyaml | grep -v '^\s*namespace:\s' | kubectl apply --namespace=dev -f -
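To see what the grep -v stage in that last variant actually strips, here is a self-contained sketch on a dummy manifest fragment (no cluster needed):

```shell
# grep -v '^\s*namespace:\s' drops the metadata.namespace line, so that
# kubectl apply assigns the target namespace from -n instead.
printf '  namespace: default\n  name: localdockerreg\n' \
  | grep -v '^\s*namespace:\s'
```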
The accepted answer is correct: Secrets can only be referenced by pods in that same namespace. So here is a hint if you are looking to automate the "sync" or just copy the secret between namespaces.
Automated (operator)
For automating the share or syncing secret across namespaces use ClusterSecret operator:
https://github.com/zakkg3/ClusterSecret
Using sed:
kubectl get secret <secret-name> -n <source-namespace> -o yaml \
| sed s/"namespace: <source-namespace>"/"namespace: <destination-namespace>"/\
| kubectl apply -n <destination-namespace> -f -
Use jq
If you have jq, we can use @Evans Tucker's solution
kubectl get secret cure-for-covid-19 -n china -o json \
| jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' \
| kubectl apply -n rest-of-world -f -
Secrets are namespaced resources, but you can use a Kubernetes extension to replicate them. We use this to propagate credentials or certificates stored in secrets to all namespaces automatically and keep them in sync (modify the source and all copies are updated).
See Kubernetes Reflector (https://github.com/EmberStack/kubernetes-reflector).
The extension allows you to automatically copy and keep in sync a secret across namespaces via annotations:
On the source secret add the annotations:
annotations:
reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
This will create a copy of the secret in all namespaces. You can limit the namespaces in which a copy is created using:
reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "namespace-1,namespace-2,namespace-[0-9]*"
The extension supports ConfigMaps and cert-manager certificates as well.
Disclaimer: I am the author of the Kubernetes Reflector extension.
--export is deprecated
sed is not the appropriate tool for editing YAML or JSON.
Here's an example that uses jq to delete the namespace and other metadata we don't want:
kubectl get secret cure-for-covid-19 -n china -o json \
| jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' \
| kubectl apply -n rest-of-world -f -
Another option would be to use kubed, one of many recommended options from the kind folks at Jetstack who gave us cert-manager. Here is what they link to.
Improving on @NicoKowe's answer
One liner to copy all secrets from one namespace to another
$ for i in `kubectl get secrets -n <source-namespace> | awk 'NR>1 {print $1}'`; do kubectl get secret $i -n <source-namespace> -o yaml | sed s/"namespace: <source-namespace>"/"namespace: <target-namespace>"/ | kubectl apply -n <target-namespace> -f - ; done
Based on @Evans Tucker's answer, but uses whitelisting rather than deletion within the jq filter to keep only what we want.
kubectl get secret cure-for-covid-19 -n china -o json | jq '{apiVersion,data,kind,metadata,type} | .metadata |= {"annotations", "name"}' | kubectl apply -n rest-of-world -f -
Essentially the same thing but preserves labels.
kubectl get secret cure-for-covid-19 -n china -o json | jq '{apiVersion,data,kind,metadata,type} | .metadata |= {"annotations", "name", "labels"}' | kubectl apply -n rest-of-world -f -
Use RBAC to authorize the service account to use the secret in the original namespace. However, it is not recommended to share a secret between namespaces this way.
As answered by Innocent Anigbo, you need to have the secret in the same namespace. If you need to support that dynamically or avoid forgetting secret creation, it might be possible to create an initializer for the namespace object: https://kubernetes.io/docs/admin/extensible-admission-controllers/ (I have not done that on my own, so I can't tell for sure)
Solution for copying all secrets.
kubectl delete secret --namespace $TARGET_NAMESPACE --all;
kubectl get secret --namespace default --output yaml \
| sed "s/namespace: $SOURCE_NAMESPACE/namespace: $TARGET_NAMESPACE/" \
| kubectl apply --namespace $TARGET_NAMESPACE --filename -;
yq is a helpful command-line tool for editing YAML files. I utilized this in conjunction with the other answers to get this:
kubectl get secret <SECRET> -n <SOURCE_NAMESPACE> -o yaml | yq write - 'metadata.namespace' <TARGET_NAMESPACE> | kubectl apply -n <TARGET_NAMESPACE> -f -
You may also think about using GoDaddy's Kubernetes External Secrets, where you store your secrets in AWS Secrets Manager (ASM) and GoDaddy's secret controller creates the Kubernetes secrets automatically. Moreover, ASM and the K8s cluster stay in sync.
For me the method suggested by @Hansika Weerasena didn't work; I got the following error:
error: the namespace from the provided object "ns_source" does not match the namespace "ns_dest". You must pass '--namespace=ns_source' to perform this operation.
To get around this problem I did the following:
kubectl get secret my-secret -n ns_source -o yaml > my-secret.yaml
This file needs to be edited and the namespace changed to your desired destination namespace. Then simply do:
kubectl apply -f my-secret.yaml -n ns_destination
Export from one k8s cluster
mkdir <namespace>; cd <namespace>; for i in `kubectl get secrets -n <namespace> | awk '{print $1}'`; do kubectl get secret $i -n <namespace> -o yaml > $i.yaml; done
Import to Second k8s cluster
cd <namespace>; find . -type f -exec kubectl apply -f '{}' -n <namespace> \;
Well, the question is good, but all the solutions are bad!
Secrets contain sensitive data; as you understand, by design you can't use a secret from another namespace. So I don't recommend using a fancy "cluster scope" operator that will "push" your secret into every "toto-*" namespace.
That sounds like a bad usage of secrets and of the Kubernetes declarative model.
Solution 1: a namespace setup Helm chart
This is the easiest approach: create a Helm chart that creates the namespace and sets it up, creating the resources you want to share.
Solution 2: use external-secret.io
I love https://external-secrets.io/, a pull-based, declarative approach. As you can read at https://external-secrets.io/v0.7.2/provider/kubernetes/, you declare an ExternalSecret to pull data from a Secret in another namespace.
external-secrets.io is production ready, battle tested, and supports several providers (Vault, ...).
Solution 3: sharing a CA
To share a CA easily, see https://cert-manager.io/docs/projects/trust-manager/. This is a push approach ;-/ but the tool is production ready.
With helm, I usually define a (group) variable (e.g. $REGISTRY_PASS) in my CD pipeline and add a template file to the helm chart:
apiVersion: v1
data:
.dockerconfigjson: |
{{ .Values.registryPassword }}
kind: Secret
metadata:
name: my-registry
namespace: {{ .Release.Namespace }}
type: kubernetes.io/dockerconfigjson
When deploying the chart, I set the variable registryPassword on the command line like so:
helm install foo/ --values values.yaml \
--set registryPassword="$REGISTRY_PASS" \
--namespace whatever \
--create-namespace
This is fully compatible with local testing and CD.
To get the correctly formatted value for $REGISTRY_PASS, I use kubectl create secret
kubectl create secret docker-registry secret-tiger-docker \
--docker-email=tiger@acme.example \
--docker-username=tiger \
--docker-password=pass1234 \
--docker-server=my-registry.example:5000
to create the initial secret and then use kubectl get secret to get the base64-encoded string (.dockerconfigjson).
kubectl get secret secret-tiger-docker -o yaml
No matter what namespace the application gets installed to, it will always have access to the local registry, since the secret gets installed before the image gets pulled.
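For intuition about what that base64-encoded string contains: a .dockerconfigjson document carries an auth field that is itself just base64 of username:password. A minimal self-contained sketch using the hypothetical credentials from the example above:

```shell
# The "auth" field inside .dockerconfigjson is base64("user:password").
AUTH="$(printf '%s' 'tiger:pass1234' | base64)"
echo "$AUTH"    # dGlnZXI6cGFzczEyMzQ=
```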
kubectl get secret gitlab-registry --namespace=revsys-com --export -o yaml |\
kubectl apply --namespace=devspectrum-dev -f -

Openshift: how to edit scc non-interactively?

I am experimenting with openshift/minishift, I find myself having to run:
oc edit scc privileged
and add:
- system:serviceaccount:default:router
So I can expose the pods. Is there a way to do it in a script?
I know oc adm have some command for policy manipulation but I can't figure out how to add this line.
You can achieve it using the oc patch command with --type=json. The snippet below will add a new item to the array before the 0th element. You can try it out with a fake "bla" value, etc.
oc patch scc privileged --type=json -p '[{"op": "add", "path": "/users/0", "value":"system:serviceaccount:default:router"}]'
The --type=json flag makes oc interpret the provided patch as a JSON Patch operation. Unfortunately oc patch --help doesn't provide any example for the json patch type. Luckily, example usage can be found in the Kubernetes docs: kubectl patch
I have found an example piping to sed here and adapted it to Ruby so I can easily edit the data structure.
oc get scc privileged -o json |\
ruby -rjson -e 'i = JSON.load(STDIN.read); i["users"].push "system:serviceaccount:default:router"; puts i.to_json ' |\
oc replace scc -f -
Here is quick and dirty script to get started with minishift
The easiest way to add and remove users to SCCs from the command line is using the oc adm policy commands:
oc adm policy add-scc-to-user <scc_name> <user_name>
For more info, see this section.
So for your specific use-case, it would be:
oc adm policy add-scc-to-user privileged system:serviceaccount:default:router
I'm surprised it's needed though. I use "oc cluster up" normally, but testing with a recent minishift, it's already added out of the box:
$ minishift start
$ eval $(minishift oc-env)
$ oc login -u system:admin
$ oc get scc privileged -o yaml | grep system:serviceaccount:default:router
- system:serviceaccount:default:router
$ minishift version
minishift v1.14.0+1ec5877
$ oc version
openshift v3.7.1+a8deba5-34

How to delete or overwrite a secret in OpenShift?

I'm trying to create a secret on OpenShift v3.3.0 using:
oc create secret generic my-secret --from-file=application-cloud.properties=src/main/resources/application-cloud.properties -n my-project
Because I created the same secret earlier, I get this error message:
Error from server: secrets "my-secret" already exists
I looked at oc, oc create and oc create secret options and could not find an option to overwrite the secret when creating it.
I then tried to delete the existing secret with oc delete. All the commands listed below return either No resources found or a syntax error.
oc delete secrets -l my-secret -n my-project
oc delete secret -l my-secret -n my-project
oc delete secrets -l my-secret
oc delete secret -l my-secret
oc delete pods,secrets -l my-project
oc delete pods,secrets -l my-secret
oc delete secret generic -l my-secret
Do you know how to delete a secret or overwrite a secret upon creation using the OpenShift console or the command line?
"my-secret" is the name of the secret, so you should delete it like this:
oc delete secret my-secret
Add -n option if you are not using the project where the secret was created
oc delete secret my-secret -n <namespace>
I hope by this time you have the answer ready; just sharing in case it helps others.
As of today, here are the details of the CLI version and OpenShift version I am working on:
$ oc version
oc v3.6.173.0.5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth
Server <SERVER-URL>
openshift v3.11.0+ec8630f-265
kubernetes v1.11.0+d4cacc0
Let's take a simple secret with a key-value pair, generated from a file; the advantage of generating it via a file will become clear below.
$ echo -n "password" | base64
cGFzc3dvcmQ=
Will create a secret with this value:
$ cat clientSecret.yaml
apiVersion: v1
kind: Secret
metadata:
name: test-secret
data:
clienttoken: cGFzc3dvcmQ=
$ oc apply -f clientSecret.yaml
secret "test-secret" created
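As a quick sanity check on the data field, the stored value can be decoded locally to confirm it round-trips (a minimal sketch; no cluster needed):

```shell
# Decode the base64 value that went into the secret's data field.
printf '%s' 'cGFzc3dvcmQ=' | base64 -d    # prints: password
```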
Let's change the password and update it in the YAML file.
$ echo -n "change-password" | base64
Y2hhbmdlLXBhc3N3b3Jk
$ cat clientSecret.yaml
apiVersion: v1
kind: Secret
metadata:
name: test-secret
data:
clienttoken: Y2hhbmdlLXBhc3N3b3Jk
Going by the definition of the oc create command, it creates a resource and throws an error if the resource already exists. So this command is not fit for updating the configuration of a resource, in our case a secret.
$ oc create --help
Create a resource by filename or stdin
To make life easier, OpenShift provides the oc apply command to apply a configuration to a resource if there is a change. This command can also be used to create the resource, which helps a lot during automated deployments.
$ oc apply --help
Apply a configuration to a resource by filename or stdin.
$ oc apply -f clientSecret.yaml
secret "test-secret" configured
When you check the secret in the UI, the new/updated password appears on the console.
So, if you have noticed, the first apply results in created (secret "test-secret" created) and subsequent applies result in configured (secret "test-secret" configured).