How to check the current timeout value of the haproxy.router.openshift.io/timeout annotation in an OpenShift environment?
haproxy.router.openshift.io/timeout can be set on a per-Route basis; see the documentation: Configuring route timeouts.
You can therefore check the annotations on your Routes with the following commands:
# List all Routes
oc get routes -o yaml
# List a particular Route
oc get route <route-name> -o yaml
You will then see the annotations listed under "metadata":
apiVersion: v1
kind: Route
metadata:
  annotations:
    haproxy.router.openshift.io/timeout: 2s
  [..]
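If you only need the value itself, a jsonpath query avoids reading through the full YAML. This is a sketch; note that the dots inside the annotation key have to be escaped:
# Print just the timeout annotation of a single Route
oc get route <route-name> -o jsonpath='{.metadata.annotations.haproxy\.router\.openshift\.io/timeout}'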
I'm trying to use JSON Patch on one of my Kubernetes YAML files.
apiVersion: accesscontextmanager.cnrm.cloud.google.com/v1beta1
kind: AccessContextManagerServicePerimeter
metadata:
  name: serviceperimetersample
spec:
  status:
    resources:
    - projectRef:
        external: "projects/12345"
    - projectRef:
        external: "projects/123456"
    restrictedServices:
    - "storage.googleapis.com"
    vpcAccessibleServices:
      allowedServices:
      - "storage.googleapis.com"
      - "pubsub.googleapis.com"
      enableRestriction: true
  title: Service Perimeter created by Config Connector
  accessPolicyRef:
    external: accessPolicies/0123
  description: A Service Perimeter Created by Config Connector
  perimeterType: PERIMETER_TYPE_REGULAR
I need to add another project to the perimeter (spec/status/resources).
I tried using the following command:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op": "add", "path": "/spec/status/resources/-/projectRef", "value": {"external": {"projects/01234567"}}}]'
But it resulted in error:
The request is invalid: the server rejected our request due to an error in our request
I'm pretty sure that my path is not correct because it's a nested structure. I'd appreciate any help on this.
Thank you.
I don't have the CustomResource you're using, so I can't test this, but the following should work. Each element of the resources array is an object that wraps projectRef, so the value must include that wrapper, and the path must point at an array position (here index 2, the end of a two-element array) rather than at a field inside a not-yet-existing element:
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op":"add","path":"/spec/status/resources/2","value":{"projectRef":{"external":"projects/01234567"}}}]'
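As a side note, JSON Patch accepts - as the last path token to append to an array without knowing its current length, and if your kubectl version supports it you can preview the result before persisting anything; a sketch:
# Append a new element to the resources array and print the patched object without saving it
kubectl patch AccessContextManagerServicePerimeter serviceperimetersample --type='json' -p='[{"op":"add","path":"/spec/status/resources/-","value":{"projectRef":{"external":"projects/01234567"}}}]' --dry-run=client -o yaml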
I have been racking my brain over the following:
I have a set of BuildConfigs that build images and create ImageStreams for them in the "openshift" namespace. This gives me, for example, the netclient-userspace ImageStream.
krist@MacBook-Pro netmaker % oc get is netclient-userspace
NAME                  IMAGE REPOSITORY                                                                  TAGS     UPDATED
netclient-userspace   image-registry.openshift-image-registry.svc:5000/openshift/netclient-userspace   latest   About an hour ago
What I have not been able to figure out, however, is how to use this ImageStream in a Deployment in a different namespace.
Take for example this:
kind: Pod
apiVersion: v1
metadata:
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  containers:
  - name: netclient
    image: netclient-userspace:latest
When I deploy this I get errors...
Failed to pull image "netclient-userspace:latest": rpc error: code =
Unknown desc = reading manifest latest in
docker.io/library/netclient-userspace: errors: denied: requested
access to the resource is denied unauthorized: authentication required
So OpenShift goes and looks for the image on Docker Hub. It shouldn't. How do I tell OpenShift to use the ImageStream here?
When using an ImageStreamTag for a Deployment image source, you need to use the image.openshift.io/triggers annotation. It instructs OpenShift to replace the image: attribute in a Deployment with the value of an ImageStreamTag (and to redeploy it when the ImageStreamTag changes in the future).
Importantly, note the annotation and the image: ' ' value with an explicit space character in the YAML string; OpenShift substitutes the real image reference there.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"netclient-userspace:latest","namespace":"openshift"},"fieldPath":"spec.template.spec.containers[?(@.name==\"netclient\")].image"}]'
  name: netclient-test
  namespace: "kvb-netclient-test"
spec:
  ...
  template:
    ...
    spec:
      containers:
      - command:
        - ...
        image: ' '
        name: netclient
      ...
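To confirm that OpenShift picked up the trigger and resolved the image, something along these lines should work (a sketch using the oc CLI):
# Show the triggers configured on the Deployment
oc set triggers deployment/netclient-test -n kvb-netclient-test
# Check that the image field now points at the internal registry
oc get deployment netclient-test -n kvb-netclient-test -o jsonpath='{.spec.template.spec.containers[0].image}'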
I will also mention that, in order to pull images from a different namespace, you may need to authorize the Deployment's service account to do so: OpenShift Docs.
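Concretely, that means granting the system:image-puller role in the namespace that owns the ImageStream. A minimal sketch, assuming the Pod runs under the default service account:
# Allow the default service account of kvb-netclient-test to pull images from the openshift namespace
oc policy add-role-to-user system:image-puller system:serviceaccount:kvb-netclient-test:default --namespace=openshift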
I am replacing the NGINX ingress controller with the Gloo ingress controller in a Kubernetes cluster and want to set a timeout for responses. There is an annotation for this in NGINX:
nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
Is there anything similar to this in gloo-ingress-controller, or do I have to use a VirtualService for this?
The only annotation that you are supposed to use with Gloo is kubernetes.io/ingress.class: gloo, which is the standard way to mark an Ingress object as handled by a specific Ingress controller. This requirement will go away if Gloo gains the ability to be set as the default Ingress controller for your cluster. Also, according to the documentation:
If you need more advanced routing capabilities, we encourage you to use Gloo VirtualServices by installing as glooctl install gateway. Gloo Gateway uses Kubernetes Custom Resources instead of Ingress Objects, as the only way to configure Ingresses beyond their basic routing spec is to use lots of vendor-specific Kubernetes Annotations to your Kubernetes manifests.
So you are supposed to use a VirtualService in order to achieve your goal. You can see an example below:
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: 'default'
  namespace: 'gloo-system'
spec:
  virtualHost:
    domains:
    - '*'
    routes:
    - matchers:
      - prefix: '/petstore'
      routeAction:
        single:
          upstream:
            name: 'default-petstore-8080'
            namespace: 'gloo-system'
      options:
        timeout: '20s'
        retries:
          retryOn: 'connect-failure'
          numRetries: 3
          perTryTimeout: '5s'
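After applying the manifest, you can check that the VirtualService was accepted; a sketch, assuming glooctl is installed and the manifest is saved as virtualservice.yaml:
kubectl apply -f virtualservice.yaml
glooctl get virtualservice default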
I hope this helps.
I have a service running in OpenShift on 2 pods, let's call them P1 and P2.
The service does two things:
Serve an API
Listen to Kafka messages from a topic and process them
Is there a way I can restrict all calls made to the API only to P1 and all Kafka processing only to P2?
My suggestion may not fit your requirements, but if each pod runs in its own project, then it would work as follows.
First, you should configure the pods' source IPs statically using egress IPs at the project level; refer to Enabling Static IPs for External Project Traffic for more details.
$ oc patch netnamespace p1_project -p '{"egressIPs": ["1.1.1.1"]}'
$ oc patch netnamespace p2_project -p '{"egressIPs": ["2.2.2.2"]}'
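You can verify the assignment afterwards (the column layout may vary by OpenShift version):
$ oc get netnamespace p1_project p2_project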
After that, you can allow each pod's IP in a route-level whitelist; refer to Route-specific IP Whitelists for more details.
kind: Route
metadata:
  name: R1
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 1.1.1.1
---
kind: Route
metadata:
  name: R2
  annotations:
    haproxy.router.openshift.io/ip_whitelist: 2.2.2.2
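The same annotation can also be applied with oc annotate, and it accepts a space-separated list of IPs or CIDR ranges should you later need to allow more than one source; a sketch:
$ oc annotate route R1 haproxy.router.openshift.io/ip_whitelist='1.1.1.1'
$ oc annotate route R2 haproxy.router.openshift.io/ip_whitelist='2.2.2.2'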
I hope it helps you.
I'm trying to set up a RESTful API application with Kubernetes. I have a barebones setup with a cluster, a static IP address, the app deployed with an exposed service of type NodePort, and an Ingress configured with a managed certificate for SSL. I need to enable CORS, and I am not yet using nginx. Is it possible, or do I need to install nginx instead of the default gce class?
Here is my ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: artsdata-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "artsdasta-static-ip"
    networking.gke.io/managed-certificates: artsdata-certificate
    ingress.kubernetes.io/enable-cors: "true"
spec:
  backend:
    serviceName: artsdata-kg
    servicePort: 80
To check, I am using curl as follows:
curl -H "Access-Control-Request-Method: GET" -H "Origin: http://localhost" --head http://db.artsdata.ca
I am expecting the response to include Access-Control-Allow-*
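For reference, a CORS-enabled endpoint would answer such a preflight with headers along these lines (illustrative values, not actual output from my cluster):
HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://localhost
Access-Control-Allow-Methods: GET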
Currently the CORS mechanism is not supported by the GCP L7 load balancer, so the ingress-gce Ingress controller does not provide an annotation to accomplish this functionality; see the related Stack thread.
If you consider replacing the native GCP Ingress class with the NGINX Ingress Controller in order to enable cross-origin requests, you will have to include at least two annotations in the original Ingress resource definition:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
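Put together, the relevant part of your Ingress might look like the sketch below; it is based on your manifest, assumes the NGINX controller is already installed, and the cors-allow-origin annotation is an optional extra for restricting allowed origins:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: artsdata-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost"
spec:
  backend:
    serviceName: artsdata-kg
    servicePort: 80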
I've found a great guide among the GCP community tutorials that explains the procedure for implementing the NGINX Ingress Controller in GKE.
There are also other L7 proxy frameworks available that can handle CORS requests, such as Traefik and Skipper.