How do I use an imagestream from another namespace in openshift? - openshift

I have been breaking my head over the following:
I have a set of buildconfigs that build images and create imagestreams for them in the "openshift" namespace. This gives me, for example, the netclient-userspace imagestream.
krist@MacBook-Pro netmaker % oc get is netclient-userspace
NAME                  IMAGE REPOSITORY                                                                   TAGS     UPDATED
netclient-userspace   image-registry.openshift-image-registry.svc:5000/openshift/netclient-userspace    latest   About an hour ago
What I have however not been able to figure out is how to use this imagestream in a deployment in a different namespace.
Take for example this:
kind: Pod
apiVersion: v1
metadata:
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  containers:
    - name: netclient
      image: netclient-userspace:latest
When I deploy this I get errors...
Failed to pull image "netclient-userspace:latest": rpc error: code =
Unknown desc = reading manifest latest in
docker.io/library/netclient-userspace: errors: denied: requested
access to the resource is denied unauthorized: authentication required
So OpenShift goes and looks for the image on Docker Hub. It shouldn't. How do I tell OpenShift to use the imagestream here?

When using an ImageStreamTag for a Deployment image source, you need to use the image.openshift.io/triggers annotation. It instructs OpenShift to replace the image: attribute in a Deployment with the value of an ImageStreamTag (and to redeploy it when the ImageStreamTag changes in the future).
Importantly, note both the annotation and the image: ' ' value with an explicit space character in the YAML string.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"netclient-userspace:latest","namespace":"openshift"},"fieldPath":"spec.template.spec.containers[?(@.name==\"netclient\")].image"}]'
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  ...
  template:
    ...
    spec:
      containers:
        - command:
            - ...
          image: ' '
          name: netclient
          ...
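If I'm not mistaken, the same annotation can also be generated with oc set triggers instead of writing the JSON by hand; roughly (resource and container names taken from the question):
oc set triggers deployment/netclient-test --from-image=openshift/netclient-userspace:latest -c netclient -n kvb-netclient-test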
I will also mention that, in order to pull images from different namespaces, it may be required to authorize the Deployment's service account to do so: OpenShift Docs.
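For reference, granting that pull permission is usually a one-liner along these lines (the service account here is assumed to be the default one used by the pod):
oc policy add-role-to-user system:image-puller system:serviceaccount:kvb-netclient-test:default --namespace=openshift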

Related

Using JSON Patch on Kubernetes yaml file

I'm trying to use JSON Patch on one of my Kubernetes YAML files.
apiVersion: accesscontextmanager.cnrm.cloud.google.com/v1beta1
kind: AccessContextManagerServicePerimeter
metadata:
  name: serviceperimetersample
spec:
  status:
    resources:
      - projectRef:
          external: "projects/12345"
      - projectRef:
          external: "projects/123456"
    restrictedServices:
      - "storage.googleapis.com"
    vpcAccessibleServices:
      allowedServices:
        - "storage.googleapis.com"
        - "pubsub.googleapis.com"
      enableRestriction: true
  title: Service Perimeter created by Config Connector
  accessPolicyRef:
    external: accessPolicies/0123
  description: A Service Perimeter Created by Config Connector
  perimeterType: PERIMETER_TYPE_REGULAR
I need to add another project to the perimeter (spec/status/resources).
I tried using following command:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op": "add", "path": "/spec/status/resources/-/projectRef", "value": {"external": {"projects/01234567"}}}]'
But it resulted in error:
The request is invalid: the server rejected our request due to an error in our request
I'm pretty sure that my path is not correct because it's a nested structure. I'd appreciate any help on this.
Thank you.
I don't have the CustomResource you're using so I can't test this, but I think this should work:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op":"add","path":"/spec/status/resources/2","value":{"projectRef":{"external":"projects/01234567"}}}]'
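Side note: with JSON Patch you can also append to the end of an array using the - index instead of a fixed position, which avoids having to know how many entries already exist (again untested against this particular CRD):
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op":"add","path":"/spec/status/resources/-","value":{"projectRef":{"external":"projects/01234567"}}}]'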

How can I use Optional Image Inputs in BuildConfig

As documented in https://docs.openshift.com/container-platform/4.3/builds/creating-build-inputs.html#image-source_creating-build-inputs I have configured an Image source for my BuildConfig:
source:
  images:
    - from:
        kind: ImageStreamTag
        name: optional-data-image:latest
      paths:
        - sourcePath: /.
          destinationDir: "image-sources/optional-data-dir"
When I start the above build, it fails to start with the message below:
Warning BuildConfigInstantiateFailed 6m26s buildconfig-controller error instantiating Build from BuildConfig next/site (0): Build.build.openshift.io "my-build-1" is invalid: [spec.source.images[1].from.name: Required value]
Is there a way to specify an optional Image Input so that if the image does not exist the build to still continue normally?
Your build has failed because you did not specify from.
strategy:
  type: Source
  sourceStrategy:
    from:
      kind: ImageStreamTag
      namespace: openshift
      name: 'java:8'
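Note that the error in the question points at spec.source.images[1].from.name, so the same rule applies to every entry under source.images: each one needs a complete from block. A minimal sketch, with a hypothetical namespace added:
source:
  images:
    - from:
        kind: ImageStreamTag
        name: optional-data-image:latest   # from.name is required for each entry
        namespace: my-project              # hypothetical; defaults to the build's namespace
      paths:
        - sourcePath: /.
          destinationDir: "image-sources/optional-data-dir"
As far as I know, there is no built-in flag to mark an image input as optional; if the tag may be absent, the entry has to be added to or removed from the BuildConfig conditionally.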

KNative serving is not showing Ready after installing on Openshift

Followed the link - https://docs.openshift.com/container-platform/4.1/serverless/installing-openshift-serverless.html to install KNative Serving on top of OpenShift v4.1. After installing all the OpenShift operators, control plane, member roll, etc. as given in the link, I expect to see that the serving component is running by executing -
C:\Knative installation>oc get knativeserving/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}'
But the above returns nothing. Just returns back the prompt.
Also, below is the output of the get commands for the serving component -
C:\Knative installation>oc get knativeserving/knative-serving -n knative-serving
NAME              VERSION   READY   REASON
knative-serving
C:\Knative installation>oc get knativeserving/knative-serving -n knative-serving -o yaml
apiVersion: serving.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"serving.knative.dev/v1alpha1","kind":"KnativeServing","metadata":{"annotations":{},"name":"knative-serving","namespace":"knative-serving"}}
  creationTimestamp: "2020-01-12T10:53:42Z"
  generation: 1
  name: knative-serving
  namespace: knative-serving
  resourceVersion: "63660251"
  selfLink: /apis/serving.knative.dev/v1alpha1/namespaces/knative-serving/knativeservings/knative-serving
  uid: cc4b330f-3529-11ea-83ef-0272cb600f74
What could be wrong? I believe KNative Serving did not install correctly but not sure how to debug. I uninstalled and reinstalled several times but no help.
Also, I thought to proceed and install a service using KNative Serving (ref link https://docs.openshift.com/container-platform/4.1/serverless/getting-started-knative-services.html), but applying the very first resource shows a problem.
service.yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Go Sample v1"
Applying service.yaml returns error.
C:\start Knative service> oc apply --filename service.yaml
error: unable to recognize "service.yaml": no matches for kind "Service" in version "serving.knative.dev/v1alpha1"
Any help is appreciated. Thanks.
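One quick check (just a debugging suggestion, not a confirmed fix): the "no matches for kind" error usually means the Serving CRDs were never registered, which can be verified with something like:
oc get crd | grep knative
oc api-resources --api-group=serving.knative.dev
oc get pods -n knative-serving
If those come back empty, the Serving CRDs are not installed, which would explain both the error and the empty READY column above.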

Openshift/OKD - how to template docker secrets

As an admin with a lot of parameterized OpenShift templates, I am struggling to create parameterized Secret objects in the templates of type kubernetes.io/dockerconfigjson or kubernetes.io/dockercfg so that the secret can be used for docker pulls.
Challenge: everything is pre-base64-encoded in JSON format for the normal dockerconfigjson setup, and I'm not sure how to change that.
The ask: how to create a Secret template that takes parameters ${DOCKER_USER}, ${DOCKER_PASSWORD}, ${DOCKER_SERVER}, and ${DOCKER_EMAIL} and then creates the actual secret that can be used to pull docker images from a private/secured docker registry.
This is to replace the command-line "oc create secret docker-registry ..." technique by putting it in a template file stored in GitLab/GitHub for a GitOps-style deployment pattern.
Thanks!
The format of the docker configuration secrets can be found in the documentation (or in your cluster via oc export secret/mysecret) under the Using Secrets section.
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config.json>
One method would be to accept the pre-base64-encoded contents of the JSON file in your template parameters and stuff them into the data section.
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ${BASE64_DOCKER_JSON}
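For example (file and parameter names here are just placeholders), the encoded value could be produced and fed to the template like this:
# GNU base64: -w0 disables line wrapping; on macOS, plain base64 already emits a single line
BASE64_DOCKER_JSON=$(base64 -w0 ~/.docker/config.json)
oc process -f registry-secret-template.yaml -p BASE64_DOCKER_JSON="${BASE64_DOCKER_JSON}" | oc apply -f -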
Another method would be to use the stringData field of the secret object. As noted on the same page:
Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field.
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: ${REGULAR_DOCKER_JSON}
The format of the actual value of the .dockerconfigjson key is the same as the contents of the .docker/config.json file. So in your specific case you might do something like:
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: '{"auths": {"${REGISTRY_URL}": {"auth": "${BASE64_USERNAME_COLON_PASSWORD}"}}}'
Unfortunately the template language OpenShift uses isn't quite powerful enough to base64-encode the actual parameter values for you, so you can't quite escape having to encode the username:password pair outside of the template itself, but your CI/CD tooling should be more than capable of doing this with raw username/password strings.
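For instance (again only a sketch, with placeholder file and variable names), the encoded pair could be generated in the CI job right before processing the template:
# Encode "user:password" for the auths entry of the docker config (add -w0 with GNU base64 if the string is long)
BASE64_USERNAME_COLON_PASSWORD=$(printf '%s:%s' "${DOCKER_USER}" "${DOCKER_PASSWORD}" | base64)
oc process -f registry-secret-template.yaml \
  -p REGISTRY_URL="${DOCKER_SERVER}" \
  -p BASE64_USERNAME_COLON_PASSWORD="${BASE64_USERNAME_COLON_PASSWORD}" | oc apply -f -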

OpenShift runs same container twice when it should run two different ones inside pod

I want OpenShift 3.10 to create a pod (console) comprising two containers (api and console). The relevant description in the application template (under dc.spec.template.spec.containers) for DeploymentConfig console looks like this:
containers:
  - image: console:api
    imagePullPolicy: Always
    name: api
    terminationMessagePolicy: File
  - image: console:console
    imagePullPolicy: Always
    name: console
    ports:
      - containerPort: 80
        protocol: TCP
    terminationMessagePolicy: File
oc describe is/console looks good to me and reports the following (the BuildConfigs for the two containers output to ImageStreamTags console:api and console:console respectively):
api
  no spec tag
  * docker-registry.default.svc:5000/registry/console@sha256:96...66

console
  no spec tag
  * docker-registry.default.svc:5000/registry/console@sha256:8a...02
But oc describe pods --selector deploymentconfig=console reveals that the same image has been pulled twice and hence the same container runs twice inside the pod:
Successfully pulled image "docker-registry.default.svc:5000/registry/console@sha256:8a...02"
Successfully pulled image "docker-registry.default.svc:5000/registry/console@sha256:8a...02"
How can I ensure that the pod indeed comprises the two distinct containers? And why does the image stream tag console:api apparently at times refer not to image 96...66 but to 8a...02, contrary to what oc describe is/console suggests?
UPDATE The mismatch is also apparent in oc describe dc/console, which indicates that both image stream tags console:api and console:console apparently have been resolved to the same container image 8a...02:
Containers:
  api:
    Image: docker-registry.default.svc:5000/registry/console@sha256:8a...02
  console:
    Image: docker-registry.default.svc:5000/registry/console@sha256:8a...02
The following change to dc.spec.triggers seems to have resolved the situation:
- type: ConfigChange
- imageChangeParams:
    automatic: true
    containerNames:
      - api
    from:
      kind: ImageStreamTag
      name: console:api
      namespace: registry
  type: ImageChange
- imageChangeParams:
    automatic: true
    containerNames:
      - console
    from:
      kind: ImageStreamTag
      name: console:console
      namespace: registry
  type: ImageChange
Previously, there was only a single imageChangeParams for console:console. The pod now comprises the two distinct containers.
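If I recall correctly, the same two triggers can also be added without editing the DeploymentConfig YAML by hand, along these lines:
oc set triggers dc/console --from-image=registry/console:api -c api
oc set triggers dc/console --from-image=registry/console:console -c console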