Install input secret into OpenShift build configuration

I have an OpenShift 3.9 build configuration my_bc and a secret my_secret of type kubernetes.io/ssh-auth. The secret was created like so:
oc create secret generic my_secret \
  --type=kubernetes.io/ssh-auth \
  --from-file=key
I have installed it as source secret into my_bc, and oc get bc/my_bc -o yaml reveals this spec:
source:
  contextDir: ...
  git:
    uri: ...
  sourceSecret:
    name: my_secret
  type: Git
As such, it is already effective in the sense that the OpenShift builder can pull from my private Git repository and produce an image with its Docker strategy.
I would now like to add my_secret also as an input secret to my_bc. My understanding is that this would not only allow the builder to make use of it (as source secret), but would allow other components inside the build to pick it up as well (as input secret). E.g. for the Docker strategy, it would exist in WORKDIR.
The documentation explains this with an example that adds the input secret when a build configuration is created:
oc new-build \
  openshift/nodejs-010-centos7~https://github.com/openshift/nodejs-ex.git \
  --build-secret secret-npmrc
Now the corresponding spec refers to the secret under secrets (not: sourceSecret), presumably because it is now an input secret (not: source secret).
source:
  git:
    uri: https://github.com/openshift/nodejs-ex.git
  secrets:
  - destinationDir: .
    secret:
      name: secret-npmrc
  type: Git
oc set build-secret apparently allows adding source secrets (as well as push and pull secrets, which are for interacting with container registries) to a build configuration with the command line argument --source (and --push/--pull, respectively), but what about input secrets? I have not found out yet.
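For reference, setting the source secret on an existing build configuration with that command looks like this (a sketch based on the documented --source flag, using the names from this question):
oc set build-secret --source bc/my_bc my_secret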
So I have these questions:
How can I add my_secret as input secret to an existing build configuration such as my_bc?
Where would the input secret show up at build time, e.g. under which path could a Dockerfile pick up the private key that is stored in my_secret?

This procedure now works for me (thanks to @GrahamDumpleton for his guidance):
leave the build configuration's source secret as is for now; oc get bc/my_bc -o jsonpath='{.spec.source.sourceSecret}' reports map[name:my_secret] (w/o path)
add the input secret to the build configuration at .spec.source.secrets, with YAML corresponding to oc explain bc.spec.source.secrets: oc edit bc/my_bc
sanity checks: oc get bc/my_bc -o jsonpath='{.spec.source.secrets}' reports [map[destinationDir:secret secret:map[name:my_secret]]]; oc describe bc/my_bc | grep 'Source Secret:' reports Source Secret: my_secret (no path) and oc describe bc/my_bc | grep "Build Secrets:" reports Build Secrets: my_secret->secret
access the secret inside the Dockerfile in a preliminary way (see the sketch after these steps): COPY secret/ssh-privatekey secret/my_secret, RUN chmod 0640 secret/my_secret; adjust ssh-privatekey if necessary (as suggested by oc get secret/my_secret -o jsonpath='{.data}' | sed -ne 's/^map\[\(.*\):.*$/\1/p')
rebuild and redeploy image
sanity check: oc exec -it <pod> -c my_db file /secret/my_secret reports /secret/my_secret: PEM RSA private key (the image's WORKDIR is /)
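For illustration, the Dockerfile lines from the steps above amount to this minimal sketch (assuming destinationDir is secret, the key is ssh-privatekey, and the image's WORKDIR is /):
# copy the input secret out of its destination directory and restrict permissions
COPY secret/ssh-privatekey secret/my_secret
RUN chmod 0640 secret/my_secret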

The comments on the question mention patching the BuildConfig. Here is a patch that works on v3.11.0:
$ cat patch.json
{
  "spec": {
    "source": {
      "secrets": [
        {
          "secret": {
            "name": "secret-npmrc"
          },
          "destinationDir": "/etc"
        }
      ]
    }
  }
}
$ oc patch -n your-eng bc/tag-realworld -p "$(<patch.json)"
buildconfig "tag-realworld" patched

Related

Use case of OpenShift + buildConfig + ConfigMaps

I am trying to create and run a buildconfig yml file.
C:\OpenShift>oc version
Client Version: 4.5.31
Kubernetes Version: v1.18.3+65bd32d
Background:
I have multiple Springboot WebUI applications which I need to deploy on OpenShift.
Having a separate set of config yml files (image stream, buildconfig, deployconfig, service, routes)
for each and every application seems very inefficient.
Instead I would like to have a single set of parameterized yml files
to which I can pass custom parameters to set up each individual application.
Solution so far:
Version One
Dockerfile:
FROM org/rhelImage
USER root
# Install Yum Packages
RUN yum -y install \
    net-tools \
    && yum -y install nmap-ncat
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-env-file=MyApp.properties
configmap/myapp-configmap created
$ oc describe cm myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
APPPATH:
----
/app
ARTIFACT:
----
myapp.jar
ARTIFACTURL:
----
"https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
MY_PORT:
----
12305
Events: <none>
buildconfig.yaml snippet
strategy:
  dockerStrategy:
    env:
    - name: GIT_SSL_NO_VERIFY
      value: "true"
    - name: ARTIFACTURL
      valueFrom:
        configMapKeyRef:
          name: "myapp-configmap"
          key: ARTIFACTURL
    - name: ARTIFACT
      valueFrom:
        configMapKeyRef:
          name: "myapp-configmap"
          key: ARTIFACT
This works fine. However, I somehow need to have those env: variables in a file.
I am doing this for greater flexibility: say a new variable is introduced in the Dockerfile; I need NOT change buildconfig.yml.
I just add the new key:value pair to the property file, rebuild, and we are good to go.
This is what I do next;
Version Two
Dockerfile:
FROM org/rhelImage
USER root
# Install Yum Packages
RUN yum -y install \
    net-tools \
    && yum -y install nmap-ncat
# Initializing the variables file
RUN ["sh", "-c", "source ./MyApp.properties"]
RUN curl -s --create-dirs --insecure -L ${ARTIFACTURL} -o ${APPPATH}/${ARTIFACT}
# Add docker-entrypoint.sh to the image
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod -Rf 775 /app && chmod 775 /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod -R g+rx /app
# Expose port
EXPOSE $MY_PORT
# Set working directory when container starts
WORKDIR $APPPATH
# Starting the application using ENTRYPOINT
#ENTRYPOINT ["sh","/docker-entrypoint.sh"]
$ oc create configmap myapp-configmap --from-file=MyApp.properties=C:\MyRepo\MyTemplates\MyApp.properties
configmap/myapp-configmap created
C:\OpenShift>oc describe configmaps myapp-configmap
Name: myapp-configmap
Namespace: 1234
Labels: <none>
Annotations: <none>
Data
====
MyApp.properties:
----
APPPATH=/app
ARTIFACTURL="https://myorg/1.2.3.4/myApp-1.2.3.4.jar"
ARTIFACT=myapp.jar
MY_PORT=12035
Events: <none>
buildconfig.yaml snippet
source:
  contextDir: "${param_source_contextdir}"
  configMaps:
  - configMap:
      name: "${param_app_name}-configmap"
However the build fails
STEP 9: RUN ls ./MyApp.properties
ls: cannot access ./MyApp.properties: No such file or directory
error: build error: error building at STEP "RUN ls ./MyApp.properties": error while running runtime: exit status 2
This means that the config map file didn't get copied to the folder.
Can you please suggest what to do next?
I think you are misunderstanding Openshift a bit.
The first thing you say is
To have separate set of config yml files ( image stream, buildconfig, deployconfig, service, routes), for each and every application seems to be very inefficient.
But that's how kubernetes/openshift works. If your resource files look the same but only use a different git resource or image, for example, then you are probably looking for Openshift Templates.
Instead i would like to have a single set of parameterized yml files to which i can pass on custom parameters to setup each individual application
Yep, I think Openshift Templates is what you are looking for. If you upload your template to the service catalog, whenever you have a new application to deploy, you can add some variables in a UI and click deploy.
An Openshift Template is just a parameterised file for all of your openshift resources (configmap, service, buildconfig, etc.).
If your application needs to be built from some git repo, using some credentials, you can parameterise those variables; a minimal sketch follows.
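For illustration, a minimal Template sketch (all names and parameters here are hypothetical; trim or extend the objects list to match your own resources):
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: myapp-template
parameters:
- name: APP_NAME
  description: Name used for the BuildConfig and related objects
  required: true
- name: GIT_URI
  description: Git repository to build from
  required: true
objects:
- apiVersion: build.openshift.io/v1
  kind: BuildConfig
  metadata:
    name: ${APP_NAME}
  spec:
    source:
      type: Git
      git:
        uri: ${GIT_URI}
    strategy:
      type: Docker
You would then instantiate it per application with something like oc process -f template.yaml -p APP_NAME=app1 -p GIT_URI=https://... | oc apply -f -.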
But also take a look at Openshift's Source-to-Image solution (I'm not sure what version you are using, so you'll have to google some resources). It can build and deploy your application without you having to write your own Resource files.

how to patch an uploaded template on openshift

I have a template that I have uploaded to openshift.
$ oc get templates | grep jenkins
jenkins-mycompany Jenkins persistent image 9 (all set) 9
When I get the template, you can see the parameters that are set:
$ oc get template jenkins-mycompany -o json
...
{
    "description": "Name of the ImageStreamTag to be used for the Jenkins image.",
    "displayName": "Jenkins ImageStreamTag",
    "name": "JENKINS_IMAGE_STREAM_TAG",
    "value": "jenkins-mycompany:2.0.0-18"
}
I am creating a CI process to build a new Jenkins image and update the template that is uploaded into OpenShift.
I want all params set...
I have tried
oc process -f deploy.yml --param-file=my-param-file | oc create -f-
cat mydeploy.json | oc create -f-
The only way I can get this to work is to do an oc delete templates jenkins-mycompany and then oc create -f deploy.yml.
I want to just patch the value of that one parameter so when I build 2.0.0-19, I just patch the template.
Openshift CLI Reference
You want to use the patch command like so:
oc patch <object_type> <object_name> -p <changes>
For example,
oc patch template jenkins-mycompany -p '{"spec":{"unschedulable":true}}'
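Note that template parameters live in a list under .parameters, so a strategic merge patch on spec will not reach an individual parameter; a JSON patch that addresses the parameter by index can (a hedged sketch, assuming JENKINS_IMAGE_STREAM_TAG happens to be the first entry in the list):
oc patch template jenkins-mycompany --type=json \
  -p '[{"op": "replace", "path": "/parameters/0/value", "value": "jenkins-mycompany:2.0.0-19"}]'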

Populate environment variables from OpenShift secret with Docker build strategy

I would like to use an opaque OpenShift secret inside a build pod as environment variables. The secret contains three key-value pairs, so they should become available as three environment variables. (This is for OpenShift 3.9.)
I have found a documented example for OpenShift's Source build strategy (sourceStrategy), but need this in the context of a build configuration with Docker build strategy (dockerStrategy). oc explain suggests that extraction of secrets into environment variables should work with both build strategies. So far, so good:
oc explain bc.spec.strategy.sourceStrategy.env.valueFrom.secretKeyRef
oc explain bc.spec.strategy.dockerStrategy.env.valueFrom.secretKeyRef
My build configuration is created from a template, so I have added a section like this as a sibling of dockerStrategy where the template refers to the build configuration:
env:
- name: SECRET_1
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: role-1
- name: SECRET_2
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: role-2
- name: SECRET_3
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: role-3
The secret was created like this:
oc create secret generic my-secret \
  --from-literal=role-1=... --from-literal=role-2=... --from-literal=role-3=...
After uploading the new template (with oc replace) and recreating the application and hence the build configuration from it (with oc new-app) I observe the following:
The template contains env as expected (checked with oc get template -o yaml).
The build configuration does not contain the desired env (checked with oc get bc -o yaml).
What could be the reason, and am I correct in assuming that secrets can be made available as environment variables for the Docker build strategy? For context: my Dockerfile sets up a relational database (in its ENTRYPOINT script) and needs to configure passwords for three roles, and these should stem from the secret.
This was my mistake: env should reside as a child (not sibling) of dockerStrategy inside the template (as had already been suggested by the path cited from oc explain). I have now fixed this, and the desired parts now show up both in the template and in the build configuration; see the sketch below.
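For the record, a minimal sketch of the corrected placement (only the first of the three variables shown):
strategy:
  dockerStrategy:
    env:
    - name: SECRET_1
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: role-1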

Sharing secret across namespaces

Is there a way to share secrets across namespaces in Kubernetes?
My use case is: I have the same private registry for all my namespaces and I want to avoid creating the same secret for each.
Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Basically, you will have to create the secret for every namespace.
https://kubernetes.io/docs/concepts/configuration/secret/#details
They can only be referenced by pods in that same namespace. But you can just copy a secret from one namespace to another. Here is an example of copying the localdockerreg secret from the default namespace to dev:
kubectl get secret localdockerreg --namespace=default --export -o yaml | kubectl apply --namespace=dev -f -
###UPDATE###
In Kubernetes v1.14 the --export flag is deprecated. So the following command with the -oyaml flag will work without a warning in forthcoming versions.
kubectl get secret localdockerreg --namespace=default -oyaml | kubectl apply --namespace=dev -f -
or as below if the source namespace is not necessarily default:
kubectl get secret localdockerreg --namespace=default -oyaml | grep -v '^\s*namespace:\s' | kubectl apply --namespace=dev -f -
The accepted answer is correct: Secrets can only be referenced by pods in that same namespace. So here is a hint if you are looking to automate the "sync" or just copy the secret between namespaces.
Automated (operator)
For automating the share or syncing secret across namespaces use ClusterSecret operator:
https://github.com/zakkg3/ClusterSecret
Using sed:
kubectl get secret <secret-name> -n <source-namespace> -o yaml \
  | sed s/"namespace: <source-namespace>"/"namespace: <destination-namespace>"/ \
  | kubectl apply -n <destination-namespace> -f -
Use jq
If you have jq, you can use @Evans Tucker's solution:
kubectl get secret cure-for-covid-19 -n china -o json \
  | jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' \
  | kubectl apply -n rest-of-world -f -
Secrets are namespaced resources, but you can use a Kubernetes extension to replicate them. We use this to propagate credentials or certificates stored in secrets to all namespaces automatically and keep them in sync (modify the source and all copies are updated).
See Kubernetes Reflector (https://github.com/EmberStack/kubernetes-reflector).
The extension allows you to automatically copy and keep in sync a secret across namespaces via annotations:
On the source secret add the annotations:
annotations:
  reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
This will create a copy of the secret in all namespaces. You can limit the namespaces in which a copy is created using:
reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "namespace-1,namespace-2,namespace-[0-9]*"
The extension supports ConfigMaps and cert-manager certificates as well.
Disclaimer: I am the author of the Kubernetes Reflector extension.
--export is deprecated
sed is not the appropriate tool for editing YAML or JSON.
Here's an example that uses jq to delete the namespace and other metadata we don't want:
kubectl get secret cure-for-covid-19 -n china -o json \
  | jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' \
  | kubectl apply -n rest-of-world -f -
Another option would be to use kubed, one of many recommended options from the kind folks at Jetstack who gave us cert-manager.
Improving on @NicoKowe's answer, a one-liner to copy all secrets from one namespace to another:
$ for i in `kubectl get secrets -n <source-namespace> --no-headers | awk '{print $1}'`; do kubectl get secret $i -n <source-namespace> -o yaml | sed s/"namespace: <source-namespace>"/"namespace: <target-namespace>"/ | kubectl apply -n <target-namespace> -f - ; done
Based on @Evans Tucker's answer, but uses whitelisting rather than deletion within the jq filter to keep only what we want.
kubectl get secret cure-for-covid-19 -n china -o json | jq '{apiVersion,data,kind,metadata,type} | .metadata |= {"annotations", "name"}' | kubectl apply -n rest-of-world -f -
Essentially the same thing but preserves labels.
kubectl get secret cure-for-covid-19 -n china -o json | jq '{apiVersion,data,kind,metadata,type} | .metadata |= {"annotations", "name", "labels"}' | kubectl apply -n rest-of-world -f -
Use RBAC to authorize the service account to use the secret in the original namespace. But sharing a secret between namespaces is not recommended.
As answered by Innocent Anigbo, you need to have the secret in the same namespace. If you need to support that dynamically or want to avoid forgetting secret creation, it might be possible to create an initializer for the namespace object: https://kubernetes.io/docs/admin/extensible-admission-controllers/ (I have not done that on my own, so I can't tell for sure).
Solution for copying all secrets.
kubectl delete secret --namespace $TARGET_NAMESPACE --all;
kubectl get secret --namespace $SOURCE_NAMESPACE --output yaml \
  | sed "s/namespace: $SOURCE_NAMESPACE/namespace: $TARGET_NAMESPACE/" \
  | kubectl apply --namespace $TARGET_NAMESPACE --filename -;
yq is a helpful command-line tool for editing YAML files. I utilized this in conjunction with the other answers to get this:
kubectl get secret <SECRET> -n <SOURCE_NAMESPACE> -o yaml | yq write - 'metadata.namespace' <TARGET_NAMESPACE> | kubectl apply -n <TARGET_NAMESPACE> -f -
You may also think about using GoDaddy's Kubernetes External Secrets, where you store your secrets in AWS Secrets Manager (ASM) and GoDaddy's secret controller creates the Kubernetes secrets automatically. Moreover, the secrets stay in sync between ASM and the K8s cluster.
For me the method suggested by @Hansika Weerasena didn't work; I got the following error:
error: the namespace from the provided object "ns_source" does not match the namespace "ns_dest". You must pass '--namespace=ns_source' to perform this operation.
To get around this problem I did the following:
kubectl get secret my-secret -n ns_source -o yaml > my-secret.yaml
This file needs to be edited and the namespace changed to your desired destination namespace. Then simply do:
kubectl apply -f my-secret.yaml -n ns_destination
Export from one k8s cluster
mkdir <namespace>; cd <namespace>; for i in `kubectl get secrets -n <namespace> --no-headers | awk '{print $1}'`; do kubectl get secret $i -n <namespace> -o yaml > $i.yaml; done
Import to Second k8s cluster
cd <namespace>; find . -type f -exec kubectl apply -f '{}' -n <namespace> \;
Well, the question is good, but all the solutions are bad!
Secrets contain sensitive data; as you understand, by design you can't use a secret from another namespace. So I don't recommend using a fancy "cluster scope" operator that will "push" your secret into namespaces "toto-*".
That sounds like bad usage of secrets and of the Kubernetes declarative model.
Solution 1: a namespace setup Helm chart
This is the easiest approach: create a Helm chart that creates the namespace and sets it up by creating the resources you want to share.
Solution 2: use external-secret.io
I love https://external-secrets.io/; this is a pull, declarative approach. As you can read at https://external-secrets.io/v0.7.2/provider/kubernetes/, you declare an ExternalSecret to pull data from a Secret in another namespace; see the sketch below.
external-secrets.io is production ready, battle tested, and supports several providers (vault, ...).
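For illustration, a hedged ExternalSecret sketch pulling a key from a Secret in another namespace (it assumes a SecretStore named k8s-store configured with the kubernetes provider as described in the linked docs; all names here are hypothetical):
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: shared-credentials
  namespace: target-ns
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: SecretStore
    name: k8s-store
  target:
    name: shared-credentials   # Secret created in target-ns
  data:
  - secretKey: password        # key in the new Secret
    remoteRef:
      key: source-secret       # Secret in the namespace the store points at
      property: password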
Solution 3: to share a CA
To share a CA easily, see https://cert-manager.io/docs/projects/trust-manager/. This is a push approach ;-/ but the tool is prod ready.
With helm, I usually define a (group) variable (e.g. $REGISTRY_PASS) in my CD pipeline and add a template file to the helm chart:
apiVersion: v1
data:
  .dockerconfigjson: |
    {{ .Values.registryPassword }}
kind: Secret
metadata:
  name: my-registry
  namespace: {{ .Release.Namespace }}
type: kubernetes.io/dockerconfigjson
When deploying the chart, I set the variable registryPassword on the command line like so:
helm install foo/ --values values.yaml \
  --set registryPassword="$REGISTRY_PASS" \
  --namespace whatever \
  --create-namespace
This is fully compatible with local testing and CD.
To get the correctly formatted value for $REGISTRY_PASS, I use kubectl create secret
kubectl create secret docker-registry secret-tiger-docker \
  --docker-email=tiger@acme.example \
  --docker-username=tiger \
  --docker-password=pass1234 \
  --docker-server=my-registry.example:5000
to create the initial secret and then use kubectl get secret to get the base64 encoded string (.dockerconfigjson).
kubectl get secret secret-tiger-docker -o yaml
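To pull out just that field, a jsonpath sketch (the dot in the key name has to be escaped):
kubectl get secret secret-tiger-docker \
  -o jsonpath='{.data.\.dockerconfigjson}'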
No matter what namespace the application gets installed to, it will always have access to the local registry, since the secret gets installed before the image gets pulled.
kubectl get secret gitlab-registry --namespace=revsys-com --export -o yaml |\
kubectl apply --namespace=devspectrum-dev -f -

How to delete or overwrite a secret in OpenShift?

I'm trying to create a secret on OpenShift v3.3.0 using:
oc create secret generic my-secret --from-file=application-cloud.properties=src/main/resources/application-cloud.properties -n my-project
Because I created the same secret earlier, I get this error message:
Error from server: secrets "my-secret" already exists
I looked at oc, oc create and oc create secret options and could not find an option to overwrite the secret when creating it.
I then tried to delete the existing secret with oc delete. All the commands listed below return either No resources found or a syntax error.
oc delete secrets -l my-secret -n my-project
oc delete secret -l my-secret -n my-project
oc delete secrets -l my-secret
oc delete secret -l my-secret
oc delete pods,secrets -l my-project
oc delete pods,secrets -l my-secret
oc delete secret generic -l my-secret
Do you know how to delete a secret or overwrite a secret upon creation using the OpenShift console or the command line?
"my-secret" is the name of the secret, so you should delete it like this:
oc delete secret my-secret
Add -n option if you are not using the project where the secret was created
oc delete secret my-secret -n <namespace>
I hope by this time you have the answer ready; just sharing in case it can help others.
As of today, here are the details of the CLI version and Openshift version I am working on:
$ oc version
oc v3.6.173.0.5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth
Server <SERVER-URL>
openshift v3.11.0+ec8630f-265
kubernetes v1.11.0+d4cacc0
Let's take a simple secret with a key-value pair, generated using a file; the advantage of generating it via a file will become clear below.
$ echo -n "password" | base64
cGFzc3dvcmQ=
Will create a secret with this value:
$ cat clientSecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  clienttoken: cGFzc3dvcmQ=
$ oc apply -f clientSecret.yaml
secret "test-secret" created
Let's change the password and update it in the YAML file.
$ echo -n "change-password" | base64
Y2hhbmdlLXBhc3N3b3Jk
$ cat clientSecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  clienttoken: Y2hhbmdlLXBhc3N3b3Jk
From the definition of the oc create command, it creates a resource and, if it already exists, throws an error. So this command doesn't fit updating the configuration of a resource, in our case a secret.
$ oc create --help
Create a resource by filename or stdin
To make life easier, Openshift provides the oc apply command to apply a configuration to a resource if there is a change. This command can also be used to create a resource, which helps a lot during automated deployments.
$ oc apply --help
Apply a configuration to a resource by filename or stdin.
$ oc apply -f clientSecret.yaml
secret "test-secret" configured
When you check the secret in the UI, the new/updated password appears on the console.
So, if you have noticed: the first apply resulted in created (secret "test-secret" created), and subsequent applies result in configured (secret "test-secret" configured).
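A related trick when you only have oc create flags and no YAML file at hand: let oc create render the YAML without contacting the server and pipe it into oc apply (a sketch using the command from the question; newer clients spell the flag --dry-run=client):
oc create secret generic my-secret \
  --from-file=application-cloud.properties=src/main/resources/application-cloud.properties \
  --dry-run -o yaml | oc apply -n my-project -f -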