I'm trying to create a secret on OpenShift v3.3.0 using:
oc create secret generic my-secret --from-file=application-cloud.properties=src/main/resources/application-cloud.properties -n my-project
Because I created the same secret earlier, I get this error message:
Error from server: secrets "my-secret" already exists
I looked at oc, oc create and oc create secret options and could not find an option to overwrite the secret when creating it.
I then tried to delete the existing secret with oc delete. All the commands listed below return either No resources found or a syntax error.
oc delete secrets -l my-secret -n my-project
oc delete secret -l my-secret -n my-project
oc delete secrets -l my-secret
oc delete secret -l my-secret
oc delete pods,secrets -l my-project
oc delete pods,secrets -l my-secret
oc delete secret generic -l my-secret
Do you know how to delete a secret or overwrite a secret upon creation using the OpenShift console or the command line?
"my-secret" is the name of the secret, so you should delete it like this:
oc delete secret my-secret
Add the -n option if you are not currently in the project where the secret was created:
oc delete secret my-secret -n <namespace>
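An alternative sketch that avoids the delete step entirely (assuming the same file path as in the question): generate the manifest with --dry-run and pipe it into oc apply, which creates the secret if it is missing and updates it otherwise. On newer clients, --dry-run may need to be spelled --dry-run=client.

```shell
# Create-or-update in one step: render the secret as YAML without
# touching the server, then let "oc apply" reconcile it.
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc create secret generic my-secret \
    --from-file=application-cloud.properties=src/main/resources/application-cloud.properties \
    --dry-run -o yaml -n my-project | oc apply -n my-project -f -
fi
```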
You may already have the answer by now; I am just sharing this in case it helps others.
For reference, here are the versions of the CLI and OpenShift that I am working with:
$ oc version
oc v3.6.173.0.5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth
Server <SERVER-URL>
openshift v3.11.0+ec8630f-265
kubernetes v1.11.0+d4cacc0
Let's take a simple secret with a key-value pair generated from a file; the advantage of generating it via a file will become clear below.
$ echo -n "password" | base64
cGFzc3dvcmQ=
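As an optional sanity check, decoding the value should return the original string:

```shell
# Decode the base64 value back to plain text.
echo -n "cGFzc3dvcmQ=" | base64 -d    # → password
```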
Now create a secret with this value:
$ cat clientSecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  clienttoken: cGFzc3dvcmQ=
$ oc apply -f clientSecret.yaml
secret "test-secret" created
Let's change the password and update it in the YAML file.
$ echo -n "change-password" | base64
Y2hhbmdlLXBhc3N3b3Jk
$ cat clientSecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  clienttoken: Y2hhbmdlLXBhc3N3b3Jk
By definition, the oc create command creates a resource and throws an error if it already exists, so it is not fit for updating the configuration of an existing resource, in our case a secret.
$ oc create --help
Create a resource by filename or stdin
To make life easier, OpenShift provides the oc apply command, which applies a configuration to a resource and updates it if there is a change. The same command can also create the resource, which helps a lot during automated deployments.
$ oc apply --help
Apply a configuration to a resource by filename or stdin.
$ oc apply -f clientSecret.yaml
secret "test-secret" configured
By the time you check the secret in the UI, the new/updated password appears on the console.
Notice that the first apply resulted in created (secret "test-secret" created), while the subsequent apply resulted in configured (secret "test-secret" configured).
In bash I am trying to use a variable in a jsonpath for an openshift patch cli command:
OS_OBJECT='sample.k8s.io/element'
VALUE='5'
oc patch quota "my-object" -p '{"spec":{"hard":{"$OS_OBJECT":"$VALUE"}}}'
But that gives the error:
Error from server: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
indicating that the variable is not substituted/expanded.
If I write it explicitly it works:
oc patch quota "my-object" -p '{"spec":{"hard":{"sample.k8s.io/element":"5"}}}'
Any suggestions on how to include a variable in the JSON string?
EDIT: Based on below answer I have also tried:
oc patch quota "my-object" -p "{'spec':{'hard':{'$OS_OBJECT':'$VALUE'}}}"
but that gives the error:
Error from server (BadRequest): invalid character '\'' looking for beginning of object key string
In single quotes everything is preserved by bash, so you have to use double quotes for string interpolation to work (and the escape sequence \" for the double quotes inside the JSON).
Try this out:
oc patch quota "my-object" -p "{\"spec\":{\"hard\":{\"$OS_OBJECT\":\"$VALUE\"}}}"
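A related sketch: building the payload in a shell variable first makes the expansion easy to inspect before sending it to the server (the oc call is left commented out):

```shell
# Build the JSON patch with printf so the variables expand predictably.
OS_OBJECT='sample.k8s.io/element'
VALUE='5'
PATCH=$(printf '{"spec":{"hard":{"%s":"%s"}}}' "$OS_OBJECT" "$VALUE")
echo "$PATCH"    # → {"spec":{"hard":{"sample.k8s.io/element":"5"}}}
# oc patch quota "my-object" -p "$PATCH"
```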
Instead of oc patch, I would prefer oc apply on your templates. Templates are the best way to configure OpenShift/Kubernetes objects; they can be stored in Git for version control, following the infrastructure-as-code approach.
I am not the admin of my OpenShift cluster and cannot access its resource quotas, so I am demonstrating with Kubernetes; the same applies to OpenShift, except for the CLI change from kubectl to oc.
Let's take a simple resource quota template:
$ cat resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
spec:
  hard:
    cpu: "1"
    memory: 2Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]
Now configure your quota by running kubectl apply on your template. This creates the resource defined in the template, in our case the ResourceQuota:
$ kubectl apply -f resourcequota.yaml
resourcequota/demo-quota created
$ kubectl get quota
NAME         CREATED AT
demo-quota   2019-11-19T12:23:37Z
$ kubectl describe quota demo-quota
Name:       demo-quota
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     1
memory      0     2Gi
pods        0     10
As you're looking to update the resource quota using patch, I would suggest editing the template and executing kubectl apply again to update the object.
$ kubectl apply -f resourcequota.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
resourcequota/demo-quota configured
$ kubectl describe quota demo-quota
Name:       demo-quota
Namespace:  default
Resource    Used  Hard
--------    ----  ----
cpu         0     2
memory      0     4Gi
pods        0     20
Similarly, you can execute oc apply for your operations, as oc patch is not as user-friendly for configuration.
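For completeness, a patch-style update of the same quota could look like the sketch below (it assumes the demo-quota object above exists and the client is pointed at a cluster; the values mirror the edited template):

```shell
# Patch the quota's hard limits in place instead of re-applying the
# template; the guard skips the call when no cluster is reachable.
if command -v kubectl >/dev/null 2>&1 && kubectl get quota demo-quota >/dev/null 2>&1; then
  kubectl patch resourcequota demo-quota \
    -p '{"spec":{"hard":{"cpu":"2","memory":"4Gi","pods":"20"}}}'
fi
```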
OpenShift does not allow containers to run as root by default, but you can work around this by creating a service account:
oc adm policy add-scc-to-user anyuid -z useroot
and then patching the deployment configuration. That patch, however, deploys a new replication controller version with the changes. Is it possible to create the service account and include it in the following command:
oc new-app --name=test --docker-image=myregistry.com/test:latest
so that the service account name is part of the initial deployment and no new version of the app is created? Alternatively, is there any other way to anticipate this root-permission error and relax the pod's security so it can run as root without patching or redeploying the app?
Will and Graham have already provided great comments,
so I am adding some practical details to complement them.
If you grant the anyuid SCC to the default ServiceAccount before running oc new-app, the test pods will run with root permission without a version change.
# oc adm policy add-scc-to-user anyuid -z default
# oc new-app --name=test --docker-image=myregistry.com/test:latest
# oc rollout history dc/test
deploymentconfigs "test"
REVISION STATUS CAUSE
1 Complete config change
# oc rsh dc/test id
uid=0(root) gid=0(root) groups=0(root)
OR
If you need to specify a custom ServiceAccount name, you can extract the oc new-app YAML and create the resources after adding a serviceAccountName: useroot element to it. These steps also do not change the deployment version.
# oc create sa useroot
# oc adm policy add-scc-to-user anyuid -z useroot
# oc new-app --name=test --docker-image=myregistry.com/test:latest -o yaml --dry-run > test.yml
# vim test.yml
apiVersion: v1
items:
- apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
...
spec:
...
template:
spec:
serviceAccountName: useroot
...
# oc create -f ./test.yml
imagestream.image.openshift.io/test created
deploymentconfig.apps.openshift.io/test created
service/test created
# oc rollout history dc/test
deploymentconfigs "test"
REVISION STATUS CAUSE
1 Complete config change
# oc rsh dc/test id
uid=0(root) gid=0(root) groups=0(root)
I need to automate monitoring the logs of an app's pods.
Monitoring a pod's log can be done using the oc CLI:
oc logs -f my-app-5-43j
However, the pod's name changes across deployments. If I want to automate the monitoring, e.g. with a cron job that keeps tailing the log even after another deployment, how should I do it?
Will Gordon has already commented the solution, so I am providing some more practical usage for your understanding.
If you deployed your pod using a deploymentConfig, daemonSet, and so on, you can see the pod's logs without specifying a pod name as follows:
# oc logs -f dc/<your deploymentConfig name>
# oc logs -f ds/<your daemonset name>
Or you can get the first pod's name dynamically using the jsonpath output option:
# oc logs -f $(oc get pod -o jsonpath='{.items[0].metadata.name}')
If you can identify the pod by a specific label, you can use the -l option as well:
# oc logs -f $(oc get pod -l app=database -o jsonpath='{.items[0].metadata.name}')
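For the cron use case from the question, a small sketch (it assumes a deploymentConfig named my-app; the log path is illustrative): referring to the dc avoids resolving pod names, so the same command keeps working across redeployments.

```shell
# Append the most recent logs on each cron run; the dc reference
# resolves to the current deployment's pods automatically, and the
# guard skips the call when no cluster login is available.
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc logs dc/my-app --since=10m >>"$HOME/my-app.log" 2>&1
fi
```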
I have an OpenShift 3.9 build configuration my_bc and a secret my_secret of type kubernetes.io/ssh-auth. The secret was created like so:
oc create secret generic my_secret \
--type=kubernetes.io/ssh-auth \
--from-file=key
I have installed it as source secret into my_bc, and oc get bc/my_bc -o yaml reveals this spec:
source:
contextDir: ...
git:
uri: ...
sourceSecret:
name: my_secret
type: Git
As such, it is already effective in the sense that the OpenShift builder can pull from my private Git repository and produce an image with its Docker strategy.
I would now like to add my_secret also as an input secret to my_bc. My understanding is that this would not only allow the builder to make use of it (as source secret), but would allow other components inside the build to pick it up as well (as input secret). E.g. for the Docker strategy, it would exist in WORKDIR.
The documentation explains this with an example that adds the input secret when a build configuration is created:
oc new-build \
openshift/nodejs-010-centos7~https://github.com/openshift/nodejs-ex.git \
--build-secret secret-npmrc
Now the corresponding spec refers to the secret under secrets (not: sourceSecret), presumably because it is now an input secret (not: source secret).
source:
git:
uri: https://github.com/openshift/nodejs-ex.git
secrets:
- destinationDir: .
secret:
name: secret-npmrc
type: Git
oc set build-secret apparently allows adding source secrets (as well as push and pull secrets, which are for interacting with container registries) to a build configuration with the command line argument --source (as well as --push/--pull), but what about input secrets? I have not yet found out.
So I have these questions:
How can I add my_secret as input secret to an existing build configuration such as my_bc?
Where would the input secret show up at build time , e.g. under which path could a Dockerfile pick up the private key that is stored in my_secret?
This procedure now works for me (thanks to @GrahamDumpleton for his guidance):
leave the build configuration's source secret as is for now; oc get bc/my_bc -o jsonpath='{.spec.source.sourceSecret}' reports map[name:my_secret] (without a path)
add input secret to build configuration at .spec.source.secrets with YAML corresponding to oc explain bc.spec.source.secrets: oc edit bc/my_bc
sanity checks: oc get bc/my_bc -o jsonpath='{.spec.source.secrets}' reports [map[destinationDir:secret secret:map[name:my_secret]]]; oc describe bc/my_bc | grep 'Source Secret:' reports Source Secret: my_secret (no path) and oc describe bc/my_bc | grep "Build Secrets:" reports Build Secrets: my_secret->secret
access secret inside Dockerfile in a preliminary way: COPY secret/ssh-privatekey secret/my_secret, RUN chmod 0640 secret/my_secret; adjust ssh-privatekey if necessary (as suggested by oc get secret/my_secret -o jsonpath='{.data}' | sed -ne 's/^map\[\(.*\):.*$/\1/p')
rebuild and redeploy image
sanity check: oc exec -it <pod> -c my_db file /secret/my_secret reports /secret/my_secret: PEM RSA private key (the image's WORKDIR is /)
In the comments to the question it mentions to patch the BuildConfig. Here is a patch that works on v3.11.0:
$ cat patch.json
{
  "spec": {
    "source": {
      "secrets": [
        {
          "secret": {
            "name": "secret-npmrc"
          },
          "destinationDir": "/etc"
        }
      ]
    }
  }
}
$ oc patch -n your-eng bc/tag-realworld -p "$(<patch.json)"
buildconfig "tag-realworld" patched
I am writing a sample program to deploy into OpenShift with a ConfigMap. I have the following ConfigMap YAML in the source code folder, so once DevOps is set up, Jenkins should pick up this YAML and create/update the configs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sampleapp
data:
  username: usernameTest
  password: passwordTest
However, I could not find a command that creates the resource if it does not exist and updates it if it does (similar to the kubectl apply command). Can you help with the correct command, one that would create the resource the first time the job runs and update it otherwise?
I also want to create/update the Services and Routes from the YAML files in the src repository.
Thanks.
You can use the oc apply command to update resources that already exist, like in this example:
#oc process -f openjdk-basic-template.yml -p APPLICATION_NAME=spring-rest -p SOURCE_REPOSITORY_URL=https://github.com/rest.git -p CONTEXT_DIR='' | oc apply -f-
service "spring-rest" configured
route "spring-rest" created
imagestream "spring-rest" configured
buildconfig "spring-rest" configured
deploymentconfig "spring-rest" configured
If you have the ConfigMap in a YAML file, or stored somewhere, you can replace it:
oc replace --force -f config-map.yaml
This updates the existing ConfigMap (it actually deletes it and creates a new one).
After this, I executed:
oc set env --from=configmap/example-cm dc/example-dc
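For literal key-value pairs like the username/password above, a create-or-update one-liner is also possible (a sketch; on newer clients --dry-run may need to be spelled --dry-run=client):

```shell
# Render the ConfigMap as YAML without touching the server, then let
# "oc apply" create or update it; the guard skips the call when no
# cluster login is available.
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  oc create configmap sampleapp \
    --from-literal=username=usernameTest \
    --from-literal=password=passwordTest \
    --dry-run -o yaml | oc apply -f -
fi
```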