I'm trying to perform some basic JWT claim-based routing leveraging OpenShift Service Mesh. The OpenShift version is 4.6.23 and the Red Hat OpenShift Service Mesh version is 2.1.1-0.
Below is the ServiceMeshControlPlane resource I use to set up Service Mesh on the OCP cluster:
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.0
  tracing:
    type: Jaeger
    sampling: 10000
  addons:
    jaeger:
      name: jaeger
      install:
        storage:
          type: Memory
    kiali:
      enabled: true
      name: kiali
    grafana:
      enabled: true
Basically, I tried to follow this Istio documentation page to test the JWT claim-based routing.
Here is the structure of the JWT token I'm using:
{
  "iss": "https://eu-de.appid.cloud.ibm.com/oauth/v4/6f631e4d-7ecc-4a1c-8cf8-ea2d0a5c32e6",
  "exp": 1646244105,
  "aud": [
    "f7f1e8bf-72d0-4e7d-90ff-cb76a8079c46"
  ],
  "sub": "99bb916e-8f99-4e0a-8b1f-72c2171448d1",
  "email_verified": true,
  "amr": [
    "cloud_directory"
  ],
  "iat": 1646240505,
  "tenant": "6f631e4d-7ecc-4a1c-8cf8-ea2d0a5c32e6",
  "scope": "openid appid_default appid_readuserattr appid_readprofile appid_writeuserattr appid_authenticated",
  "roles": [
    "teamA"
  ]
}
The RequestAuthentication definition:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: sample-mesh
spec:
  jwtRules:
  - issuer: "https://eu-de.appid.cloud.ibm.com/oauth/v4/6f631e4d-7ecc-4a1c-8cf8-ea2d0a5c32e6"
    jwksUri: "https://eu-de.appid.cloud.ibm.com/oauth/v4/6f631e4d-7ecc-4a1c-8cf8-ea2d0a5c32e6/publickeys"
And the definition of the VirtualService that is supposed to handle the routing:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - '*'
  gateways:
  - sample-mesh-gateway
  http:
  - match:
    - headers:
        "#request.auth.claims.roles":
          exact: teamA
    route:
    - destination:
        host: catalog
        subset: version-v1
  - route:
    - destination:
        host: catalog
        subset: version-v2
I would then expect the traffic to be routed to catalog v1 when the roles claim contains the value teamA. In practice, however, I observe that the traffic is always routed to catalog v2, even if the token has the required value in the claim.
Is there anything I missed in the configuration?
Thanks :-)
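For reference, the JWT claim routing example in the Istio documentation matches on a header key prefixed with @ (@request.auth.claims.<claim>) and applies the RequestAuthentication to the ingress gateway workload rather than to the application namespace. Below is a minimal sketch of that shape, with this question's issuer and roles claim substituted in; the istio-system namespace and istio: ingressgateway label are assumptions based on a default install:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: ingress-jwt
  namespace: istio-system        # namespace of the ingress gateway (assumed)
spec:
  selector:
    matchLabels:
      istio: ingressgateway      # label of the ingress gateway pods (assumed)
  jwtRules:
  - issuer: "https://eu-de.appid.cloud.ibm.com/oauth/v4/6f631e4d-7ecc-4a1c-8cf8-ea2d0a5c32e6"
    jwksUri: "https://eu-de.appid.cloud.ibm.com/oauth/v4/6f631e4d-7ecc-4a1c-8cf8-ea2d0a5c32e6/publickeys"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog
spec:
  hosts:
  - '*'
  gateways:
  - sample-mesh-gateway
  http:
  - match:
    - headers:
        "@request.auth.claims.roles":   # note the @ prefix used in the Istio docs example
          exact: teamA
    route:
    - destination:
        host: catalog
        subset: version-v1
  - route:
    - destination:
        host: catalog
        subset: version-v2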
Related
I need to configure a new listener from the Ingress.yaml manifest. The AWS load balancer controller version is currently 2.4.4. When I perform the process from the AWS console, it allows me to add the new listener and redirect the traffic to HTTPS without problem, but after a few minutes it disappears. I perform the configuration directly in the Ingress manifest with the annotations, but the listener does not come out correctly in the AWS console.
Manifest:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig":{ "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:xxx-xxxxx:xxxxx:certificate/xxxx-xxxxxxx
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internal
    kubernetes.io/ingress.class: alb
  name: xxxxxx
  namespace: xxxxx
spec:
  rules:
  - host: xxxxxxx
    http:
      paths:
      - backend:
          service:
            name: service
            port:
              number: 80
        path: /*
        pathType: ImplementationSpecific
[Screenshot: listener configured from the Ingress manifest]
[Screenshot: listener configured manually from the AWS Console]
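As a point of reference, with the AWS Load Balancer Controller an action defined through an alb.ingress.kubernetes.io/actions.<name> annotation only shows up on the ALB when some rule actually references it as a backend. A rough sketch of that documented reference pattern, reusing the ssl-redirect action from the manifest above (the name and rule layout are placeholders, not taken from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: xxxxxx
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig":{ "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ssl-redirect        # must match the actions.<name> annotation suffix
            port:
              name: use-annotation    # tells the controller to resolve the action annotation
If I recall correctly, newer controller versions (v2.2.0 and later) also provide the simpler alb.ingress.kubernetes.io/ssl-redirect: '443' annotation, which avoids the action backend entirely.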
I use Contour as the ingress class for my Ingress and am trying to add basic HTTP authentication, but it doesn't work. I wonder why it doesn't work?
What am I missing?
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    ingress.kubernetes.io/force-ssl-redirect: "true"
    kubernetes.io/ingress.class: contour
    kubernetes.io/tls-acme: "true"
    # It doesn't work
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  tls:
  - hosts:
    - my_domen.com
    secretName: mysecretname
  rules:
  - host: my_domen.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: some-service
            port:
              number: 8083
Contour is not nginx-based, so the nginx annotations have no effect. As far as I'm aware, you need to follow this process: https://projectcontour.io/guides/external-authorization/
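For anyone following that guide, the end state is roughly an HTTPProxy that delegates authentication to an ExtensionService wrapping contour-authserver with an htpasswd backend. A sketch of that shape, reusing the names from the question; the htpasswd/projectcontour-auth ExtensionService reference is an assumption taken from the guide, which also covers deploying the auth server itself:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: my-ingress
spec:
  virtualhost:
    fqdn: my_domen.com
    tls:
      secretName: mysecretname
    authorization:
      extensionRef:
        name: htpasswd                 # ExtensionService created per the guide (assumed name)
        namespace: projectcontour-auth # namespace used in the guide (assumed)
  routes:
  - services:
    - name: some-service
      port: 8083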
I have separate internal Ingress manifests for the backend and the frontend.
My backend service has several endpoints: one GraphQL and two REST.
After deploying the project, I find that when I request a REST endpoint (POST request), I get a 404 error.
How can I properly configure the backend Ingress manifest?
I tried many annotations, like:
nginx.ingress.kubernetes.io/use-regex: "true"
# nginx.ingress.kubernetes.io/app-root: /
# nginx.ingress.kubernetes.io/default-backend: mcs-thirdparty-backend
nginx.ingress.kubernetes.io/rewrite-target: /$2
# nginx.ingress.kubernetes.io/rewrite-path: /response
# nginx.ingress.kubernetes.io/preserve-trailing-slash: "true"
This is my current backend's ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mcs-thirdparty-back-ingress
  namespace: namespace
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx-internal
  rules:
  - host: backend.exemple.com
    http:
      paths:
      - path: '/(/|$)(.*)'
        backend:
          service:
            name: mcs-thirdparty-backend
            port:
              number: 8080
        pathType: Prefix
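For reference, the capture-group pattern documented for ingress-nginx's rewrite-target annotation puts a literal prefix in front of the capture groups, roughly like this (a sketch only; the /api prefix is an assumption, not taken from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mcs-thirdparty-back-ingress
  namespace: namespace
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx-internal
  rules:
  - host: backend.exemple.com
    http:
      paths:
      # a request to /api/response is forwarded to the service as /response ($2)
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: mcs-thirdparty-backend
            port:
              number: 8080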
This is the ingress that I did get to work successfully (it serves the frontend):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mcs-thirdparty-ingress
  namespace: namespace
spec:
  ingressClassName: nginx-internal
  rules:
  - host: bilels.exemple.com
    http:
      paths:
      - path: /
        backend:
          service:
            name: mcs-thirdparty-frontend
            port:
              number: 80
        pathType: Prefix
After updating the EKS cluster to 1.22, all websites are down. The pods are fine, but all the networking is broken.
I don't know how to fix the ingresses and the load balancer.
I have tried updating the deprecated API versions for ingress-kong and internal-ingress-kong.
I can't find the YAML file for alb-ingress-controller, but when I check the last applied configuration it is based on the new API.
I have manually updated the ALB controller's Docker image from 1.1.8 to 2.4.1:
Name:                   alb-ingress-controller
Namespace:              default
CreationTimestamp:      Thu, 03 Sep 2020 02:05:01 +0000
Labels:                 app=alb-ingress-controller
                        app.kubernetes.io/name=alb-ingress-controller
                        git_version=54709a8bd94f795b1184b0c8336e9a6ec8aee807
                        name=alb-ingress-controller
                        version=20200909005829
Annotations:            deployment.kubernetes.io/revision: 9
Selector:               app.kubernetes.io/name=alb-ingress-controller
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=alb-ingress-controller
                    app.kubernetes.io/name=alb-ingress-controller
                    git_version=54709a8bd94f795b1184b0c8336e9a6ec8aee807
                    name=alb-ingress-controller
                    version=20200909005829
  Annotations:      kubectl.kubernetes.io/restartedAt: 2022-04-14T19:19:01Z
  Service Account:  alb-ingress-controller
  Containers:
   alb-ingress-controller:
    Image:      docker.io/amazon/aws-alb-ingress-controller:v2.4.1
    Port:       <none>
    Host Port:  <none>
    Args:
      --watch-namespace=default
      --ingress-class=alb-ingress-controller
      --cluster-name=staging-trn
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      False   MinimumReplicasUnavailable
OldReplicaSets:  <none>
NewReplicaSet:   alb-ingress-controller-c46ff7bd9 (1/1 replicas created)
Events:          <none>
I'm new to Kubernetes and AWS.
I think I have updated the deprecated APIs in all places, but the errors still point to the old APIs.
Error on the ingresses:
E0415 07:54:29.332371 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.4/tools/cache/reflector.go:105: Failed to list *v1beta1.Ingress: the server could not find the requested resource (get ingresses.extensions)
Error on alb:
{"level":"error","ts":1650009210.0149224,"logger":"setup","msg":"unable to create controller","controller":"TargetGroupBinding","error":"no matches for kind \"TargetGroupBinding\" in version \"elbv2.k8s.aws/v1beta1\""}
I have created the missing TargetGroupBinding CRD:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.5.0
  creationTimestamp: null
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: targetgroupbindings.elbv2.k8s.aws
spec:
  group: elbv2.k8s.aws
  names:
    kind: TargetGroupBinding
    listKind: TargetGroupBindingList
    plural: targetgroupbindings
    singular: targetgroupbinding
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: TargetGroupBinding is the Schema for the TargetGroupBinding API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: TargetGroupBindingSpec defines the desired state of TargetGroupBinding
            properties:
              networking:
                description: networking provides the networking setup for ELBV2 LoadBalancer
                  to access targets in TargetGroup.
                properties:
                  ingress:
                    description: List of ingress rules to allow ELBV2 LoadBalancer
                      to access targets in TargetGroup.
                    items:
                      properties:
                        from:
                          description: List of peers which should be able to access
                            the targets in TargetGroup. At least one NetworkingPeer
                            should be specified.
                          items:
                            description: NetworkingPeer defines the source/destination
                              peer for networking rules.
                            properties:
                              ipBlock:
                                description: IPBlock defines an IPBlock peer. If specified,
                                  none of the other fields can be set.
                                properties:
                                  cidr:
                                    description: CIDR is the network CIDR. Both IPV4
                                      or IPV6 CIDR are accepted.
                                    type: string
                                required:
                                - cidr
                                type: object
                              securityGroup:
                                description: SecurityGroup defines a SecurityGroup
                                  peer. If specified, none of the other fields can
                                  be set.
                                properties:
                                  groupID:
                                    description: GroupID is the EC2 SecurityGroupID.
                                    type: string
                                required:
                                - groupID
                                type: object
                            type: object
                          type: array
                        ports:
                          description: List of ports which should be made accessible
                            on the targets in TargetGroup. If ports is empty or unspecified,
                            it defaults to all ports with TCP.
                          items:
                            properties:
                              port:
                                anyOf:
                                - type: integer
                                - type: string
                                description: The port which traffic must match. When
                                  NodePort endpoints(instance TargetType) is used,
                                  this must be a numerical port. When Port endpoints(ip
                                  TargetType) is used, this can be either numerical
                                  or named port on pods. if port is unspecified, it
                                  defaults to all ports.
                                x-kubernetes-int-or-string: true
                              protocol:
                                description: The protocol which traffic must match.
                                  If protocol is unspecified, it defaults to TCP.
                                enum:
                                - TCP
                                - UDP
                                type: string
                            type: object
                          type: array
                      required:
                      - from
                      - ports
                      type: object
                    type: array
                type: object
              serviceRef:
                description: serviceRef is a reference to a Kubernetes Service and
                  ServicePort.
                properties:
                  name:
                    description: Name is the name of the Service.
                    type: string
                  port:
                    anyOf:
                    - type: integer
                    - type: string
                    description: Port is the port of the ServicePort.
                    x-kubernetes-int-or-string: true
                required:
                - name
                - port
                type: object
              targetGroupARN:
                description: targetGroupARN is the Amazon Resource Name (ARN) for
                  the TargetGroup.
                type: string
              targetType:
                description: targetType is the TargetType of TargetGroup. If unspecified,
                  it will be automatically inferred.
                enum:
                - instance
                - ip
                type: string
            required:
            - serviceRef
            - targetGroupARN
            type: object
          status:
            description: TargetGroupBindingStatus defines the observed state of TargetGroupBinding
            properties:
              observedGeneration:
                description: The generation observed by the TargetGroupBinding controller.
                format: int64
                type: integer
            type: object
        type: object
    additionalPrinterColumns:
    - jsonPath: .spec.serviceRef.name
      description: The Kubernetes Service's name
      name: SERVICE-NAME
      type: string
    - jsonPath: .spec.serviceRef.port
      description: The Kubernetes Service's port
      name: SERVICE-PORT
      type: string
    - jsonPath: .spec.targetType
      description: The AWS TargetGroup's TargetType
      name: TARGET-TYPE
      type: string
    - jsonPath: .spec.targetGroupARN
      description: The AWS TargetGroup's Amazon Resource Name
      name: ARN
      priority: 1
      type: string
    - jsonPath: .metadata.creationTimestamp
      name: AGE
      type: date
The Ingress resource should be updated as follows:
apiVersion: networking.k8s.io/v1
Please see examples here:
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
To find the Ingress resources, run the following:
kubectl get ingress --all-namespaces
Then do the modification as mentioned above.
Please note that the backend configuration in the Ingress resource also needs some modification due to the API change, as shown in the sketch below.
Also note that from version 1.18 you are able to bind Ingress resources to a controller using the spec.ingressClassName field. If omitted, the Ingress will only work if the IngressClass that the ingress controller implements is set as the default.
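A minimal sketch of what a migrated Ingress looks like with the new apiVersion, backend shape, and ingressClassName (the names here are placeholders, not taken from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
spec:
  ingressClassName: alb        # binds the Ingress to the controller's IngressClass
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:             # v1 backend shape: service.name plus service.port
            name: example-service
            port:
              number: 80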
How do I set INGRESS_HOST and INGRESS_PORT for a sample YAML file whose Istio sidecar is created using automatic sidecar injection?
I am using a Windows 10 - Docker - Kubernetes - Istio configuration, with kubectl and istioctl installed in their respective versions.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v1
  labels:
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v1
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v1
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld-v2
  labels:
    version: v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld
        version: v2
    spec:
      containers:
      - name: helloworld
        image: istio/examples-helloworld-v2
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5010
I am getting 503 Service Temporarily Unavailable when trying to hit the sample service I created.
Please first verify that your selector labels are correct and that your Service is connected to the Deployment's pods.
You have 'version: v1' and 'version: v2' in the Deployment selectors but not in the Service. That's why the service gives a 503 Unavailable; if the issue were in the pod or the service itself, it would give a 502 Bad Gateway or something similar.
Istio traffic flows like:
ingress-gateway -> virtual-service -> destination-rule [optional] -> service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - uri:
        exact: /hello
    route:
    - destination:
        host: helloworld
        port:
          number: 5000   # <--- change
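For completeness, the optional destination-rule step in the chain above would look roughly like this for the two helloworld Deployments (a sketch only; it is not required for the simple route in this question):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:
  - name: v1
    labels:
      version: v1     # matches the helloworld-v1 pod labels
  - name: v2
    labels:
      version: v2     # matches the helloworld-v2 pod labels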
Welcome to SO @Sreedhar!
How to set INGRESS_HOST and INGRESS_PORT
These two environment variables are not adjustable inside the manifest files (static files) that you use to create Deployments/Pods on a K8s cluster. They serve only as placeholders to give end users easy access, from outside, to an application just deployed on an Istio-enabled Kubernetes cluster. The values of INGRESS_HOST/INGRESS_PORT are filled in from information that is auto-generated by the cluster during creation of the cluster resources and is available only in the live objects.
You can read where the ingress takes its IP address from in the official documentation here:
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
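For reference, the Istio documentation derives these values from the live istio-ingressgateway Service, roughly like this (a sketch of the external load balancer case; on a Docker Desktop/NodePort setup the port lookup uses .nodePort and the host is the node address instead):
# read the values from the live Service object, not from a manifest
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
echo "$INGRESS_HOST:$INGRESS_PORT"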
For the Bad Gateway issue, as suggested previously by @Harsh Manvar, you have specified an invalid port in the VirtualService (5010 instead of 5000).