Adjusting spec/destination in ArgoCD Application with kustomize - openshift

I am relatively new to kustomize and am trying to work out how I can use it to adjust the destination namespace/server in an Argo CD Application resource.
The resource is defined as below:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: application-1
  namespace: openshift-gitops
spec:
  destination:
    namespace: argo-app
    server: https://kubernetes.default.svc
  project: default
  source:
    path: .
    targetRevision: master
    repoURL: git@github.com:organization/example.git
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
I want to adjust the destination namespace and the destination server. I need to do this without changing the Application resource's own namespace, which needs to remain openshift-gitops.
I am having difficulty with this. I have tried a patch method and a CRD, but I lack the technical knowledge to implement either of them correctly.
Any help is greatly appreciated.
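For what it's worth, a minimal kustomization sketch along these lines should touch only spec.destination and leave metadata.namespace alone (untested; the file name and the new namespace/server values below are placeholders):
# kustomization.yaml (sketch; target values are placeholders)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - application.yaml            # the Application manifest shown above
patches:
  - target:
      group: argoproj.io
      version: v1alpha1
      kind: Application
      name: application-1
    patch: |-
      - op: replace
        path: /spec/destination/namespace
        value: my-target-namespace
      - op: replace
        path: /spec/destination/server
        value: https://my-other-cluster.example.com:6443
Because the patch only targets paths under /spec/destination, the Application itself still lives in openshift-gitops.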

Related

Using JSON Patch on Kubernetes yaml file

I'm trying to use JSON Patch on one of my Kubernetes yaml files.
apiVersion: accesscontextmanager.cnrm.cloud.google.com/v1beta1
kind: AccessContextManagerServicePerimeter
metadata:
  name: serviceperimetersample
spec:
  status:
    resources:
    - projectRef:
        external: "projects/12345"
    - projectRef:
        external: "projects/123456"
    restrictedServices:
    - "storage.googleapis.com"
    vpcAccessibleServices:
      allowedServices:
      - "storage.googleapis.com"
      - "pubsub.googleapis.com"
      enableRestriction: true
  title: Service Perimeter created by Config Connector
  accessPolicyRef:
    external: accessPolicies/0123
  description: A Service Perimeter Created by Config Connector
  perimeterType: PERIMETER_TYPE_REGULAR
I need to add another project to the perimeter (spec/status/resources).
I tried using following command:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op": "add", "path": "/spec/status/resources/-/projectRef", "value": {"external": {"projects/01234567"}}}]'
But it resulted in error:
The request is invalid: the server rejected our request due to an error in our request
I'm pretty sure that my path is not correct because it's a nested structure. I'd appreciate any help on this.
Thank you.
I don't have the CustomResource you're using so I can't test this, but I think this should work:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op":"add","path":"/spec/status/resources/2","value":{"projectRef":{"external":"projects/12345"}}}]'
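As a hedged alternative (untested against this CRD), JSON Patch also accepts - as the array index to append to the end, which avoids having to know the current length of spec/status/resources:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' \
  -p='[{"op":"add","path":"/spec/status/resources/-","value":{"projectRef":{"external":"projects/01234567"}}}]'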

How do I use an imagestream from another namespace in openshift?

I have been breaking my head over the following:
I have a set of buildconfigs that build images and create imagestreams for them in the "openshift" namespace. This gives me, for example, the netclient-userspace imagestream.
krist@MacBook-Pro netmaker % oc get is netclient-userspace
NAME                  IMAGE REPOSITORY                                                                   TAGS     UPDATED
netclient-userspace   image-registry.openshift-image-registry.svc:5000/openshift/netclient-userspace    latest   About an hour ago
What I have however not been able to figure out is how to use this imagestream in a deployment in a different namespace.
Take for example this:
kind: Pod
apiVersion: v1
metadata:
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  containers:
  - name: netclient
    image: netclient-userspace:latest
When I deploy this I get errors...
Failed to pull image "netclient-userspace:latest": rpc error: code = Unknown desc = reading manifest latest in docker.io/library/netclient-userspace: errors: denied: requested access to the resource is denied unauthorized: authentication required
So OpenShift goes and looks for the image on Docker Hub. It shouldn't. How do I tell OpenShift to use the imagestream here?
When using an ImageStreamTag for a Deployment image source, you need to use the image.openshift.io/triggers annotation. It instructs OpenShift to replace the image: attribute in a Deployment with the value of an ImageStreamTag (and to redeploy it when the ImageStreamTag changes in the future).
Importantly, note both the annotation and the image: ' ' attribute with an explicit space character in the YAML string.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"content-netclient-userspace:latest","namespace":"openshift"},"fieldPath":"spec.template.spec.containers[?(@.name==\"netclient\")].image"}]'
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  ...
  template:
    ...
    spec:
      containers:
      - command:
        - ...
        image: ' '
        name: netclient
      ...
I will also mention that, in order to pull images from different namespaces, it may be required to authorize the Deployment's service account to do so: OpenShift Docs.
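For that last point, the usual approach (a sketch, not verified against this cluster; namespace names are taken from the question) is to grant the image-puller role on the namespace that owns the imagestream:
# let every service account in kvb-netclient-test pull images from the openshift namespace
oc policy add-role-to-group system:image-puller system:serviceaccounts:kvb-netclient-test -n openshift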

How to set timeout for gloo ingress controller

I am replacing nginx ingress with the gloo ingress controller in a Kubernetes cluster and want to set a timeout for responses. There is an annotation for this in nginx.
nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
Is there anything similar to this in gloo-ingress-controller, or do I have to use a VirtualService for this?
The only annotation that you are supposed to use with Gloo is kubernetes.io/ingress.class: gloo, which is the standard way to mark an Ingress object as handled by a specific Ingress controller. This requirement goes away if you configure Gloo to be the default Ingress controller for your cluster. Also, according to the documentation:
If you need more advanced routing capabilities, we encourage you to use Gloo VirtualServices by installing as glooctl install gateway. Gloo Gateway uses Kubernetes Custom Resources instead of Ingress Objects, as the only way to configure Ingress' beyond their basic routing spec is to use lots of vendor-specific Kubernetes Annotations to your Kubernetes manifests.
So you are supposed to use VirtualService in order to achieve your goal. You can see the example below:
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: 'default'
  namespace: 'gloo-system'
spec:
  virtualHost:
    domains:
      - '*'
    routes:
      - matchers:
          - prefix: '/petstore'
        routeAction:
          single:
            upstream:
              name: 'default-petstore-8080'
              namespace: 'gloo-system'
        options:
          timeout: '20s'
          retries:
            retryOn: 'connect-failure'
            numRetries: 3
            perTryTimeout: '5s'
I hope this helps.
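For contrast with the VirtualService route above, an Ingress that stays on the basic Ingress API only carries the class annotation mentioned at the start of this answer and has no place to express a response timeout; a minimal sketch (the name, host, and backend Service are assumptions):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: petstore-ingress                 # hypothetical name
  annotations:
    kubernetes.io/ingress.class: gloo    # marks the Ingress as handled by Gloo
spec:
  rules:
    - host: petstore.example.com         # assumed host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: petstore           # assumed Service name
                port:
                  number: 8080           # assumed port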

Openshift/OKD - how to template docker secrets

As an admin with a lot of parameterized OpenShift Templates, I am struggling to create parameterized Secret objects in the templates of type kubernetes.io/dockerconfigjson or kubernetes.io/dockercfg so that the secret can be used for docker pulls.
Challenge: everything is pre-base64-encoded in JSON format for the normal dockerconfigjson setup, and I am not sure how to change it.
The Ask: How to create a SECRET template that takes parameters ${DOCKER_USER}, ${DOCKER_PASSWORD}, ${DOCKER_SERVER}, and ${DOCKER_EMAIL} to then create the actual secret that can be used to pull docker images from a private/secured docker registry.
This is to replace commandline "oc create secret docker-registry ...." techniques by putting them in a template file stored within gitlab/github to have a gitOps style deployment pattern.
Thanks!
The format of the docker configuration secrets can be found in the documentation (or in your cluster via oc export secret/mysecret) under the Using Secrets section.
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64encoded docker-config.json>
One method would be to accept the pre-base64-encoded contents of the JSON file in your template parameters and stuff them into the data section.
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ${BASE64_DOCKER_JSON}
Another method would be to use the stringData field of the secret object. As noted on the same page:
Entries in the stringData map are converted to base64 and the entry will then be moved to the data map automatically. This field is write-only; the value will only be returned via the data field.
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: ${REGULAR_DOCKER_JSON}
The format of the actual value of the .dockerconfigjson key is the same as the contents of the .docker/config.json file. So in your specific case you might do something like:
apiVersion: v1
kind: Secret
metadata:
  name: aregistrykey
  namespace: myapps
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: '{"auths": {"${REGISTRY_URL}": {"auth": "${BASE64_USERNAME_COLON_PASSWORD}"}}}'
Unfortunately the template language OpenShift uses for templates isn't quite powerful enough to base64-encode the actual parameter values for you, so you can't quite escape having to encode the username:password pair outside of the template itself, but your CI/CD tooling should be more than capable of performing this with raw username/password strings.
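As a rough illustration of that last step (a sketch, not part of the original answer; the template file name is an assumption and the parameter names reuse those from above), the encoding can happen in whatever drives oc process:
# build the auth value, then instantiate the hypothetical template
AUTH=$(printf '%s:%s' "$DOCKER_USER" "$DOCKER_PASSWORD" | base64 | tr -d '\n')
oc process -f registry-secret-template.yaml \
  -p REGISTRY_URL="$DOCKER_SERVER" \
  -p BASE64_USERNAME_COLON_PASSWORD="$AUTH" \
  | oc apply -f -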

ingress with both rules and default backend in Google Container Engine

I'm running on the Google Container Engine platform and have an ingress for which I would like a default backend service for almost all of my domains (there are quite a few), but another, specific service for one domain on it. Going by my understanding of the ingress user guide (scan for "Default Backends:" in there), the config below should work correctly.
However, it never creates the second backend. Running kubectl describe ingress on the created ingress, and looking at the LB in the Google console, only the first "default" backend service is listed. Changing the default into a rule fixes the problem but means I have to explicitly list all of the domains I want to support.
So, I'm assuming I have a bug in the config below. If so, what is it?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: boringsites
spec:
  backend:
    serviceName: boringsites
    servicePort: 80
  tls:
  - secretName: boringsites-tls
  rules:
  - host: subdomain.example.com
    http:
      paths:
      - backend:
          serviceName: other-svc
          servicePort: 80
I just created https://gist.github.com/bprashanth/9f4533b19fd864b723ba0720a3648fa3#file-default-basic-yaml-L94 on Kubernetes 1.3 and it works as expected. Perhaps you can debug backwards? Where are you running kube and what version are you using? There is a known and fixed race in 1.2 that you might be running into, especially if you updated the ingress. Also note that you need Services of type=NodePort, or the ingress controller on GCE will ignore the Service you plugged into the resource.
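On that last point, both Services referenced by the Ingress would need to look roughly like this (a sketch; the selector and target port are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: other-svc
spec:
  type: NodePort             # required for the GCE ingress controller to pick the Service up
  selector:
    app: other-svc           # assumed pod label
  ports:
  - port: 80                 # matches servicePort: 80 in the Ingress
    targetPort: 8080         # assumed container port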