YAML: override properties in nested array - configuration

My YAML file contains a few services that are identical except for a few properties. Here is an example:
services:
- name: SERVICE_NAME1
  connect_timeout: 60000
  host: HOST_NAME1
  port: 443
  protocol: https
  read_timeout: 60000
  retries: 5
  write_timeout: 60000
  routes:
  - hosts:
    - ROUTE_HOST1
    name: ROUTE_NAME1
    preserve_host: false
    protocols:
    - http
    - https
    strip_path: false
    https_redirect_status_code: 426
  plugins:
  - name: plugin1
  - name: plugin2
- name: SERVICE_NAME2
  connect_timeout: 60000
  host: HOST_NAME2
  port: 443
  protocol: https
  read_timeout: 60000
  retries: 5
  write_timeout: 60000
  routes:
  - hosts:
    - ROUTE_HOST2
    name: ROUTE_NAME2
    preserve_host: false
    protocols:
    - http
    - https
    strip_path: false
    https_redirect_status_code: 426
  plugins:
  - name: plugin1
  - name: plugin2
Is it possible to have a template for a service and then reuse it, setting SERVICE_NAME, HOST_NAME, ROUTE_HOST, and ROUTE_NAME for each particular service, using only YAML capabilities?

Yes, sure – you can use any templating engine for that, for example mustache:
test.yaml
services:
{{#services}}
- name: {{name}}
  connect_timeout: 60000
  host: {{host}}
  port: 443
  protocol: https
  read_timeout: 60000
  retries: 5
  write_timeout: 60000
  routes:
  - hosts:
    - {{route.host}}
    name: {{route.name}}
    preserve_host: false
    protocols:
    - http
    - https
    strip_path: false
    https_redirect_status_code: 426
  plugins:
  - name: plugin1
  - name: plugin2
{{/services}}
input.yaml
services:
- name: service
  host: host
  route: {host: rhost, name: rname}
usage
$ mustache input.yaml test.yaml
output
services:
- name: service
  connect_timeout: 60000
  host: host
  port: 443
  protocol: https
  read_timeout: 60000
  retries: 5
  write_timeout: 60000
  routes:
  - hosts:
    - rhost
    name: rname
    preserve_host: false
    protocols:
    - http
    - https
    strip_path: false
    https_redirect_status_code: 426
  plugins:
  - name: plugin1
  - name: plugin2
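If the goal really is "only YAML capabilities", anchors and merge keys get part of the way there. This is only a partial sketch: the merge key (<<) is a YAML 1.1 feature that many but not all parsers support, it copies the keys of the referenced mapping without deep-merging (so list-valued fields such as routes still have to be written per service), and the x-* helper keys are an assumption about the consumer tolerating extra top-level keys.

x-service-defaults: &service_defaults
  connect_timeout: 60000
  port: 443
  protocol: https
  read_timeout: 60000
  retries: 5
  write_timeout: 60000
  plugins:
  - name: plugin1
  - name: plugin2

x-route-defaults: &route_defaults
  preserve_host: false
  protocols:
  - http
  - https
  strip_path: false
  https_redirect_status_code: 426

services:
- <<: *service_defaults
  name: SERVICE_NAME1
  host: HOST_NAME1
  routes:
  - <<: *route_defaults
    name: ROUTE_NAME1
    hosts:
    - ROUTE_HOST1
- <<: *service_defaults
  name: SERVICE_NAME2
  host: HOST_NAME2
  routes:
  - <<: *route_defaults
    name: ROUTE_NAME2
    hosts:
    - ROUTE_HOST2

If the consumer of the file does not support merge keys or extra keys, an external templating step such as the mustache example above remains the safer option.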

Related

EKS pod in workspace getting default scheduler for Fargate

Here is the issue I am running into:
I am creating the cluster using eksctl create cluster --name abc_name --profile profile_aws_creds.
Once the cluster is created, I create the namespace using kubectl create namespace airflow-dev.
In this namespace I use Helm to install Flux: helm upgrade -i flux fluxcd/flux --set git.url=https://github.com/******/airflow-eks-config.git -n airflow-dev.
When I look at the pods in the namespace, they are always in the Pending state.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/restartedAt: "2022-10-04T10:41:54-04:00"
    kubernetes.io/psp: eks.privileged
  creationTimestamp: "2022-10-04T14:41:54Z"
  generateName: flux-596f88f8b5-
  labels:
    app: flux
    pod-template-hash: 596f88f8b5
    release: flux
  name: flux-596f88f8b5-9jglf
  namespace: airflow-dev
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: flux-596f88f8b5
    uid: 5e86a479-96ca-46be-bec5-99a6ca64cb7b
  resourceVersion: "11948"
  uid: 672f236f-3a5e-4a40-a7f3-6b371e7434fc
spec:
  containers:
  - args:
    - --log-format=fmt
    - --ssh-keygen-dir=/var/fluxd/keygen
    - --ssh-keygen-format=RFC4716
    - --k8s-secret-name=flux-git-deploy
    - --memcached-hostname=flux-memcached
    - --sync-state=git
    - --memcached-service=
    - --git-url=https://github.com/****/airflow-eks-config.git
    - --git-branch=master
    - --git-path=
    - --git-readonly=false
    - --git-user=Weave Flux
    - --git-email=support#weave.works
    - --git-verify-signatures=false
    - --git-set-author=false
    - --git-poll-interval=5m
    - --git-timeout=20s
    - --sync-interval=5m
    - --git-ci-skip=false
    - --automation-interval=5m
    - --registry-rps=200
    - --registry-burst=125
    - --registry-trace=false
    env:
    - name: KUBECONFIG
      value: /root/.kubectl/config
    image: docker.io/fluxcd/flux:1.25.4
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /api/flux/v6/identity.pub
        port: 3030
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    name: flux
    ports:
    - containerPort: 3030
      name: http
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /api/flux/v6/identity.pub
        port: 3030
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /root/.kubectl
      name: kubedir
    - mountPath: /etc/fluxd/ssh
      name: git-key
      readOnly: true
    - mountPath: /var/fluxd/keygen
      name: git-keygen
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-srqn4
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeSelector:
    kubernetes.io/os: linux
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: flux
  serviceAccountName: flux
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      name: flux-kube-config
    name: kubedir
  - name: git-key
    secret:
      defaultMode: 256
      secretName: flux-git-deploy
  - emptyDir:
      medium: Memory
    name: git-keygen
  - name: kube-api-access-srqn4
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-10-04T14:41:54Z"
    message: '0/2 nodes are available: 2 node(s) had taint {eks.amazonaws.com/compute-type:
      fargate}, that the pod didn''t tolerate.'
    reason: Unschedulable
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: Burstable
As you can see above, the pods are never scheduled, and the scheduler name is default-scheduler. When I do the same without deploying Flux to a namespace (meaning deploying to default), the schedulerName is fargate-scheduler and the pod starts up.
Any thoughts on what is being done incorrectly?
Thanks
On Fargate, the Fargate profile needs to be created first. Once this is created, the namespace can be created and everything works.
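As a sketch, reusing the cluster, namespace, and AWS profile names from the question, a Fargate profile that selects the airflow-dev namespace could be created with eksctl before installing anything into that namespace:

# Names below are taken from the question; adjust them for your environment.
eksctl create fargateprofile \
  --cluster abc_name \
  --name airflow-dev \
  --namespace airflow-dev \
  --profile profile_aws_creds

After that, pods created in airflow-dev should be picked up by the fargate-scheduler instead of staying Pending.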

I am trying to create basic path-based routing with an ingress controller and an AKS-managed load balancer, and I need to create consistent path-based routing.

##Working ingress file##
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: signaler-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.org/websocket-services: "websocket"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - i2adevcluster-dns.westus2.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: i2adevcluster-dns.westus2.cloudapp.azure.com
    http:
      paths:
      - path: /signaler(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: signaler
            port:
              number: 3000
      - path: /websocket(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: signaler
            port:
              number: 3001
##Want to define the paths consistently with the prefix /signaler/websocket##
##Expecting the configuration below to work the same##
--------------------------------------------------------------
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: signaler-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.org/websocket-services: "websocket"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - i2adevcluster-dns.westus2.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: i2adevcluster-dns.westus2.cloudapp.azure.com
    http:
      paths:
      - path: /signaler(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: signaler
            port:
              number: 3000
      - path: /signaler/websocket(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: signaler
            port:
              number: 3001
Details about the solution I am looking for:
My ingress routes work with the inconsistent paths, but I want to make every subpath consistent with the /signaler prefix.
The first (working) configuration does not keep the /signaler prefix for the WebSocket path, so it should be /signaler/websocket/ instead of /websocket/.
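For what it is worth, with nginx.ingress.kubernetes.io/rewrite-target: /$2 both path styles rewrite the request to the second capture group, so /signaler/websocket/foo and /websocket/foo both reach the backend as /foo. Whether the second configuration behaves identically also depends on the controller preferring the more specific /signaler/websocket(/|$)(.*) rule over /signaler(/|$)(.*), which is an assumption about ingress-nginx path ordering rather than something verified here. A quick way to check which backend and path a request actually hits, using the hostname from the question:

# /foo is a placeholder path; -k skips TLS verification for the test hostname.
curl -k -v https://i2adevcluster-dns.westus2.cloudapp.azure.com/signaler/websocket/foo
curl -k -v https://i2adevcluster-dns.westus2.cloudapp.azure.com/signaler/foo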

Istio routing HTTP to HTTPS upstream causes 302

My ingress gateway listens on port 80 (HTTP) and routes to an HTTPS destination.
With the following configuration, a request to http://ingress-gateway.example.com/zzz returns a 302 and the URL changes to https://my-site.example.com/products.
Why the 302, and what am I missing?
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port: # Note: I am entering using this port
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
  - port: # Note: I am NOT entering using this port
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      credentialName: my-credential
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: apps-domain
spec:
  hosts:
  - my-site.example.com
  ports:
  - number: 443
    name: https-my-site
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /zzz
    rewrite:
      uri: /products
    route:
    - destination:
        port:
          number: 443
        host: my-site.example.com
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-https-backend
spec:
  host: my-site.example.com
  trafficPolicy:
    tls:
      mode: SIMPLE
      sni: my-site.example.com
You have a rewrite rule whose route destination points to port 443:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /zzz
    rewrite:
      uri: /products
    route:
    - destination:
        port:
          number: 443 # here
        host: my-site.example.com
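One way to narrow down where the 302 comes from (the gateway itself versus the external site reached after TLS origination) is to inspect the redirect response headers. This is just a diagnostic sketch using the URL from the question:

# Print only the response headers of the first hop; the Location and server
# headers usually show whether Envoy or the upstream issued the redirect.
curl -sS -o /dev/null -D - http://ingress-gateway.example.com/zzz

If the upstream at my-site.example.com is the one answering with the redirect, the fix belongs on that side (or in how the request is presented to it), not in the Gateway definition.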

OpenShift Enterprise: Unable to connect to deployed app via browser

I deployed a Java application on OpenShift (3.9 and 3.11), but cannot reach the application via the browser.
I created a REST API application image (Open Liberty and OpenJDK 11) and pushed it to the OpenShift docker-registry via a Maven build. The ImageStream is created. I deployed the image and created a route. The pod comes up, and the pod logs show the Liberty server has started. I accessed the pod via the terminal and was able to use curl (http://localhost:9080) to test the APIs. But when I use the route to access the app from a browser, I get a "host could not be found" error.
I have the same application successfully running on Minishift.
Where and what errors do I look for?
apiVersion: v1
kind: Template
metadata:
  name: ${APPLICATION_NAME}-template
  annotations:
    description: ${APPLICATION_NAME}
objects:
# Application Service
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
      service.alpha.openshift.io/serving-cert-secret-name: app-certs
    labels:
      app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
    name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
    namespace: ${NAME_SPACE}
  spec:
    ports:
    - name: 9443-tcp
      port: 9443
      protocol: TCP
      targetPort: 9443
    selector:
      app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
      deploymentconfig: ${APPLICATION_NAME}-${APP_VERSION_TAG}
    sessionAffinity: None
    type: ClusterIP
# Application Route
- apiVersion: v1
  kind: Route
  metadata:
    annotations:
      openshift.io/host.generated: "true"
    labels:
      app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
    name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
  spec:
    port:
      targetPort: 9443-tcp
    tls:
      termination: reencrypt
    to:
      kind: Service
      name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
      weight: 100
    wildcardPolicy: None
# APPLICATION DEPLOYMENT CONFIG
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    generation: 1
    labels:
      app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
    name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
  spec:
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
      deploymentconfig: ${APPLICATION_NAME}-${APP_VERSION_TAG}
    strategy:
      activeDeadlineSeconds: 21600
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        annotations:
          openshift.io/generated-by: OpenShiftNewApp
        labels:
          app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
          deploymentconfig: ${APPLICATION_NAME}-${APP_VERSION_TAG}
      spec:
        affinity:
          podAntiAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                  - key: app
                    operator: In
                    values:
                    - ${APPLICATION_NAME}-${APP_VERSION_TAG}
                  # - key: region
                  #   operator: In
                  #   values:
                  #   - ${TARGET_ENVIRONMENT}
                topologyKey: "kubernetes.io/hostname"
        containers:
        - image: ${APPLICATION_NAME}:${TAG}
          imagePullPolicy: Always
          # livenessProbe:
          #   failureThreshold: 3
          #   httpGet:
          #     path: ${APPLICATION_HEALTH_CHECK_URL}
          #     port: 8080
          #     scheme: HTTP
          #   initialDelaySeconds: 15
          #   periodSeconds: 15
          #   successThreshold: 1
          #   timeoutSeconds: 25
          # readinessProbe:
          #   failureThreshold: 3
          #   httpGet:
          #     path: ${APPLICATION_READINESS_CHECK_URL}
          #     port: 8080
          #     scheme: HTTP
          #   initialDelaySeconds: 10
          #   periodSeconds: 15
          #   successThreshold: 1
          #   timeoutSeconds: 25
          name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
          envFrom:
          - configMapRef:
              name: server-env
          - secretRef:
              name: server-env
          env:
          - name: KEYSTORE_PASSWORD
            valueFrom:
              secretKeyRef:
                name: keystore-secret
                key: KEYSTORE_PASSWORD
          - name: KEYSTORE_PKCS12
            value: /var/run/secrets/java.io/keystores/keystore.pkcs12
          ports:
          - containerPort: 9443
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - name: app-certs
            mountPath: /var/run/secrets/openshift.io/app-certs
          - name: keystore-volume
            mountPath: /var/run/secrets/java.io/keystores
        initContainers:
        - name: pem-to-keystore
          image: registry.access.redhat.com/redhat-sso-7/sso71-openshift:1.1-16
          env:
          - name: keyfile
            value: /var/run/secrets/openshift.io/app-certs/tls.key
          - name: crtfile
            value: /var/run/secrets/openshift.io/app-certs/tls.crt
          - name: keystore_pkcs12
            value: /var/run/secrets/java.io/keystores/keystore.pkcs12
          - name: keystore_jks
            value: /var/run/secrets/java.io/keystores/keystore.jks
          - name: password
            valueFrom:
              secretKeyRef:
                name: keystore-secret
                key: KEYSTORE_PASSWORD
          command: ['/bin/bash']
          args: ['-c', "openssl pkcs12 -export -inkey $keyfile -in $crtfile -out $keystore_pkcs12 -password pass:$password"]
          volumeMounts:
          - name: keystore-volume
            mountPath: /var/run/secrets/java.io/keystores
          - name: app-certs
            mountPath: /var/run/secrets/openshift.io/app-certs
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
        volumes:
        - name: app-certs
          secret:
            secretName: app-certs
        - name: keystore-volume
          emptyDir: {}
    test: false
    triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
        - ${APPLICATION_NAME}-${APP_VERSION_TAG}
        from:
          kind: ImageStreamTag
          name: ${APPLICATION_NAME}:${APP_VERSION_TAG}
      type: ImageChange
parameters:
- name: APPLICATION_NAME
  description: Name of the app
  value: microservice
  required: true
- name: APP_VERSION_TAG
  description: TAG of the image stream tag
  value: latest
  required: true
- name: NAME_SPACE
  description: Namespace
  value: microservice--sbx--microservice
  required: true
- name: DOMAIN_URL
  description: DOMAIN_URL
  value: microservice-myproject
  required: true
- name: APPLICATION_LIVENESS_CHECK_URL
  description: LIVENESS Check URL
  value: /health
  required: true
- name: APPLICATION_READINESS_CHECK_URL
  description: READINESS Check URL
  value: /microservice/envvariables
  required: true
- name: DOCKER_IMAGE_REPO
  description: Docker Image Repository
  value: docker-registry-default.apps.xxxx.xxx.xxx.xxx.com
  required: true
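A browser-side "host could not be found" error usually points at DNS resolution of the generated route hostname rather than at the pod itself. As a sketch of where one might look (placeholders in angle brackets, run against the project where the template was instantiated):

# Show the generated route hostname and which service/port it targets
oc get route -n <project>
oc describe route <route-name> -n <project>

# Confirm the service actually has endpoints behind it
oc get endpoints <service-name> -n <project>

# Check that the route hostname resolves to the cluster's router / load balancer
nslookup <route-hostname>

On Minishift the generated hostnames resolve via a built-in wildcard (nip.io-style) entry, which is why the same application can work there while the enterprise cluster needs a matching wildcard DNS record for its router.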

How to make a connection with Google Cloud SQL using Google Container Engine?

I am using Node.js and deployed it with Kubernetes on Google Container Engine, but I can't make a connection to MySQL.
This is my Node.js connection:
var pool = mysql.createPool({
  connectionLimit    : 100,
  user               : process.env.DB_USER,
  password           : process.env.DB_PASSWORD,
  database           : process.env.DB_NAME,
  multipleStatements : true,
  socketPath         : '/cloudsql/' + process.env.INSTANCE_CONNECTION_NAME
})
I was using this for my Google App Engine app and it works. Now I need to move to GKE, and it throws an error saying mysql is not defined.
This is my app-frontend.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-frontend
  labels:
    app: app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app
        tier: frontend
    spec:
      containers:
      - name: app
        image: gcr.io/app-12345/app:1.0
        env:
        - name: DB_HOST
          value: 127.0.0.1:3306
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        ports:
        - name: http-server
          containerPort: 8080
        imagePullPolicy: Always
      - image: gcr.io/cloudsql-docker/gce-proxy:1.09
        name: cloudsql-proxy
        imagePullPolicy: Always
        command:
        - /cloud_sql_proxy
        - --dir=/cloudsql
        - --instances=mulung=tcp:3306
        - --credential_file=/secrets/cloudsql/credentials.json
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
        # [END proxy_container]
        ports:
        - name: portdb
          containerPort: 3306
      # [START volumes]
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      - name: cloudsql
        emptyDir:
What should I do to fix it? Thank you, guys.
Can you check the command part?
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
          "-instances=CLOUD_SQL_INSTANCE_NAME",
          "-credential_file=/secrets/cloudsql/credentials.json"]
Here is a working deployment.yaml from one of my backend applications:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: <appname>
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: <appname>
    spec:
      containers:
      - image: gcr.io/<some_name>/cloudsql-docker/gce-proxy:1.05
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=CLOUD_SQL_INSTANCE_NAME",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-oauth-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
      - name: <appname>
        image: IMAGE_NAME
        ports:
        - containerPort: 8888
        readinessProbe:
          httpGet:
            path: /<appname>/health
            port: 8888
          initialDelaySeconds: 30
          periodSeconds: 30
          timeoutSeconds: 30
          successThreshold: 1
          failureThreshold: 5
        env:
        - name: PROJECT_NAME
          value: <some_name>
        - name: PROJECT_ZONE
          value: <some_name>
        - name: INSTANCE_NAME
          value: <some_name>
        - name: INSTANCE_PORT
          value: <some_name>
        - name: CONTEXT_PATH
          value: <appname>
        volumeMounts:
        - name: application-config
          mountPath: /opt/config-mount
      volumes:
      - name: cloudsql-oauth-credentials
        secret:
          secretName: cloudsql-oauth-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      - name: cloudsql
        emptyDir:
      - name: application-config
        secret:
          secretName: <appname>-ENV_NAME-config
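Both manifests above use extensions/v1beta1, which has since been removed for Deployments on current clusters. A rough sketch of how the same app-frontend manifest would start under apps/v1 (the selector block is required there and must match the pod template labels; the rest of the spec stays as above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
      tier: frontend
  template:
    metadata:
      labels:
        app: app
        tier: frontend
    spec:
      # containers and volumes unchanged from the manifest above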