EKS pod in workspace getting default scheduler for Fargate - namespaces

Here is the issue I am running into:
I am creating the cluster using eksctl create cluster --name abc_name --profile profile_aws_creds
Once the cluster is created, I am creating the namespace using kubectl create namespace airflow-dev
In this namespace I am using Helm to install Flux: helm upgrade -i flux fluxcd/flux --set git.url=https://github.com/******/airflow-eks-config.git -n airflow-dev
When I look at the pods in the namespace, they are always in the Pending state.
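The pending pod can be inspected with kubectl; a quick sketch, assuming the namespace and the generated pod name from the dump that follows:
kubectl get pods -n airflow-dev
kubectl describe pod flux-596f88f8b5-9jglf -n airflow-dev   # the Events section shows why scheduling fails
kubectl get pod flux-596f88f8b5-9jglf -n airflow-dev -o yaml   # full spec and status, as pasted below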
apiVersion: v1
kind: Pod
metadata:
annotations:
kubectl.kubernetes.io/restartedAt: "2022-10-04T10:41:54-04:00"
kubernetes.io/psp: eks.privileged
creationTimestamp: "2022-10-04T14:41:54Z"
generateName: flux-596f88f8b5-
labels:
app: flux
pod-template-hash: 596f88f8b5
release: flux
name: flux-596f88f8b5-9jglf
namespace: airflow-dev
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: flux-596f88f8b5
uid: 5e86a479-96ca-46be-bec5-99a6ca64cb7b
resourceVersion: "11948"
uid: 672f236f-3a5e-4a40-a7f3-6b371e7434fc
spec:
containers:
- args:
- --log-format=fmt
- --ssh-keygen-dir=/var/fluxd/keygen
- --ssh-keygen-format=RFC4716
- --k8s-secret-name=flux-git-deploy
- --memcached-hostname=flux-memcached
- --sync-state=git
- --memcached-service=
- --git-url=https://github.com/****/airflow-eks-config.git
- --git-branch=master
- --git-path=
- --git-readonly=false
- --git-user=Weave Flux
- --git-email=support#weave.works
- --git-verify-signatures=false
- --git-set-author=false
- --git-poll-interval=5m
- --git-timeout=20s
- --sync-interval=5m
- --git-ci-skip=false
- --automation-interval=5m
- --registry-rps=200
- --registry-burst=125
- --registry-trace=false
env:
- name: KUBECONFIG
value: /root/.kubectl/config
image: docker.io/fluxcd/flux:1.25.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /api/flux/v6/identity.pub
port: 3030
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: flux
ports:
- containerPort: 3030
name: http
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /api/flux/v6/identity.pub
port: 3030
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
requests:
cpu: 50m
memory: 64Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /root/.kubectl
name: kubedir
- mountPath: /etc/fluxd/ssh
name: git-key
readOnly: true
- mountPath: /var/fluxd/keygen
name: git-keygen
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-srqn4
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeSelector:
kubernetes.io/os: linux
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: flux
serviceAccountName: flux
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- configMap:
defaultMode: 420
name: flux-kube-config
name: kubedir
- name: git-key
secret:
defaultMode: 256
secretName: flux-git-deploy
- emptyDir:
medium: Memory
name: git-keygen
- name: kube-api-access-srqn4
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2022-10-04T14:41:54Z"
message: '0/2 nodes are available: 2 node(s) had taint {eks.amazonaws.com/compute-type:
fargate}, that the pod didn''t tolerate.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: Burstable
As you can see above, the pod is never scheduled and the schedulerName is default-scheduler. When I do the same without deploying Flux to a separate namespace (i.e. deploying to default), the schedulerName is fargate-scheduler and the pod starts up.
Any thoughts on what is being done incorrectly?
Thanks

On Fargate, a Fargate profile that covers the namespace needs to be created first. Once the profile exists, the namespace can be created and everything works: the fargate-scheduler only picks up pods whose namespace matches a Fargate profile selector, so pods in an uncovered namespace fall back to the default-scheduler and stay Pending on the Fargate taint.
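A minimal sketch of that order of operations, reusing the cluster and namespace names from the question (the profile name itself is arbitrary):
eksctl create fargateprofile --cluster abc_name --name fp-airflow-dev --namespace airflow-dev --profile profile_aws_creds
kubectl create namespace airflow-dev
helm upgrade -i flux fluxcd/flux --set git.url=https://github.com/******/airflow-eks-config.git -n airflow-dev
With the profile in place, pods created in airflow-dev are handed to the fargate-scheduler and no longer sit Pending on the eks.amazonaws.com/compute-type=fargate taint.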

Related

Containers in a multi-container OpenShift pod can't communicate with each other

I'm working on migrating our existing Docker containers into OpenShift, and am running into an issue trying to get two of our containers into a single pod. We're using Spring Cloud Config Server for our services, with a Gitea backend. I'd like to have these in a single pod so that the Java server and git server are always tied together.
I'm able to reach each of the containers individually via the associated routes, but the config server is unable to reach the git server, and vice versa. I get a 404 when the config server tries to clone the git repo. I've tried using gitea-${INSTANCE_IDENTIFIER} (INSTANCE_IDENTIFIER just being a generated value to tie all of the objects together at a glance), gitea-${INSTANCE_IDENTIFIER}.myproject.svc, and gitea-${INSTANCE_IDENTIFIER}.myproject.svc.cluster.local, as well as the full URL for the route that gets created, and nothing works.
Here is my template, with a few things removed (...) for security:
apiVersion: v1
kind: Template
metadata:
name: configuration-template
annotations:
description: 'Configuration containers template'
iconClass: 'fa fa-gear'
tags: 'git, Spring Cloud Configuration'
objects:
- apiVersion: v1
kind: ConfigMap
metadata:
name: 'gitea-config-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
data:
local-docker.ini: |
APP_NAME = Git Server
RUN_USER = git
RUN_MODE = prod
[repository]
ROOT = /home/git/data/git/repositories
[repository.upload]
TEMP_PATH = /home/git/data/gitea/uploads
[server]
APP_DATA_PATH = /home/git/data/gitea
HTTP_PORT = 8443
DISABLE_SSH = true
SSH_PORT = 22
LFS_START_SERVER = false
OFFLINE_MODE = false
PROTOCOL = https
CERT_FILE = /var/run/secrets/service-cert/tls.crt
KEY_FILE = /var/run/secrets/service-cert/tls.key
REDIRECT_OTHER_PORT = true
PORT_TO_REDIRECT = 8080
[database]
PATH = /home/git/data/gitea/gitea.db
DB_TYPE = sqlite3
NAME = gitea
USER = gitea
PASSWD = XXXX
[session]
PROVIDER_CONFIG = /home/git/data/gitea/sessions
PROVIDER = file
[picture]
AVATAR_UPLOAD_PATH = /home/git/data/gitea/avatars
DISABLE_GRAVATAR = false
ENABLE_FEDERATED_AVATAR = false
[attachment]
PATH = /home/git/data/gitea/attachments
[log]
ROOT_PATH = /home/git/data/gitea/log
MODE = file
LEVEL = Info
[mailer]
ENABLED = false
[service]
REGISTER_EMAIL_CONFIRM = false
ENABLE_NOTIFY_MAIL = false
DISABLE_REGISTRATION = false
ENABLE_CAPTCHA = false
REQUIRE_SIGNIN_VIEW = false
DEFAULT_KEEP_EMAIL_PRIVATE = false
NO_REPLY_ADDRESS = noreply.example.org
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: 'gitea-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
- apiVersion: v1
kind: Route
metadata:
name: 'gitea-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
spec:
port:
targetPort: 'https'
tls:
termination: 'passthrough'
to:
kind: Service
name: 'gitea-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
kind: Service
metadata:
name: 'gitea-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
annotations:
service.alpha.openshift.io/serving-cert-secret-name: 'gitea-certs-${INSTANCE_IDENTIFIER}'
spec:
type: ClusterIP
ports:
- name: 'https'
port: 443
targetPort: 8443
selector:
app: 'configuration-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
kind: Route
metadata:
name: 'configuration-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
spec:
port:
targetPort: 'https'
tls:
termination: 'passthrough'
to:
kind: Service
name: 'configuration-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
kind: Service
metadata:
name: 'configuration-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
annotations:
service.alpha.openshift.io/serving-cert-secret-name: 'configuration-certs-${INSTANCE_IDENTIFIER}'
spec:
type: ClusterIP
ports:
- name: 'https'
port: 443
targetPort: 8105
selector:
app: 'configuration-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
kind: DeploymentConfig
metadata:
name: 'gitea-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
spec:
selector:
app: 'configuration-${INSTANCE_IDENTIFIER}'
replicas: 1
template:
metadata:
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
spec:
initContainers:
- name: pem-to-keystore
image: nginx
env:
- name: keyfile
value: /var/run/secrets/openshift.io/services_serving_certs/tls.key
- name: crtfile
value: /var/run/secrets/openshift.io/services_serving_certs/tls.crt
- name: keystore_pkcs12
value: /var/run/secrets/java.io/keystores/keystore.pkcs12
- name: password
value: '${STORE_PASSWORD}'
command: ['sh']
args: ['-c', "openssl pkcs12 -export -inkey $keyfile -in $crtfile -out $keystore_pkcs12 -password pass:$password -name 'server certificate'"]
volumeMounts:
- mountPath: /var/run/secrets/java.io/keystores
name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
- mountPath: /var/run/secrets/openshift.io/services_serving_certs
name: 'configuration-certs-${INSTANCE_IDENTIFIER}'
- name: pem-to-truststore
image: openjdk:alpine
env:
- name: ca_bundle
value: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
- name: truststore_jks
value: /var/run/secrets/java.io/keystores/truststore.jks
- name: password
value: '${STORE_PASSWORD}'
command: ['/bin/sh']
args: ["-c",
"keytool -noprompt -importkeystore -srckeystore $JAVA_HOME/jre/lib/security/cacerts -srcstoretype JKS -destkeystore $truststore_jks -storepass $password -srcstorepass changeit && cd /var/run/secrets/java.io/keystores/ && awk '/-----BEGIN CERTIFICATE-----/{filename=\"crt-\"NR}; {print >filename}' $ca_bundle && for file in crt-*; do keytool -import -noprompt -keystore $truststore_jks -file $file -storepass $password -alias service-$file; done && rm crt-*"]
volumeMounts:
- mountPath: /var/run/secrets/java.io/keystores
name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
containers:
- name: 'gitea-${INSTANCE_IDENTIFIER}'
image: '...'
command: ['sh',
'-c',
'tar xf /app/gitea/gitea-data.tar.gz -C /home/git/data && cp /app/config/local-docker.ini /home/git/config/local-docker.ini && gitea web --config /home/git/config/local-docker.ini']
ports:
- containerPort: 8443
protocol: TCP
imagePullPolicy: Always
volumeMounts:
- mountPath: '/home/git/data'
name: 'gitea-data-${INSTANCE_IDENTIFIER}'
readOnly: false
- mountPath: '/app/config'
name: 'gitea-config-${INSTANCE_IDENTIFIER}'
readOnly: false
- mountPath: '/var/run/secrets/service-cert'
name: 'gitea-certs-${INSTANCE_IDENTIFIER}'
- name: 'configuration-${INSTANCE_IDENTIFIER}'
image: '...'
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command: [
"java",
"-Djava.security.egd=file:/dev/./urandom",
"-Dspring.profiles.active=terraform",
"-Djavax.net.ssl.trustStore=/var/run/secrets/java.io/keystores/truststore.jks",
"-Djavax.net.ssl.trustStoreType=JKS",
"-Djavax.net.ssl.trustStorePassword=${STORE_PASSWORD}",
"-Dserver.ssl.key-store=/var/run/secrets/java.io/keystores/keystore.pkcs12",
"-Dserver.ssl.key-store-password=${STORE_PASSWORD}",
"-Dserver.ssl.key-store-type=PKCS12",
"-Dserver.ssl.trust-store=/var/run/secrets/java.io/keystores/truststore.jks",
"-Dserver.ssl.trust-store-password=${STORE_PASSWORD}",
"-Dserver.ssl.trust-store-type=JKS",
"-Dspring.cloud.config.server.git.uri=https://gitea-${INSTANCE_IDENTIFIER}.svc.cluster.local/org/centralrepo.git",
"-jar",
"/app.jar"
]
ports:
- containerPort: 8105
protocol: TCP
imagePullPolicy: Always
volumeMounts:
- mountPath: '/var/run/secrets/java.io/keystores'
name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
readOnly: true
- mountPath: 'target/centralrepo'
name: 'configuration-${INSTANCE_IDENTIFIER}'
readOnly: false
volumes:
- name: 'gitea-data-${INSTANCE_IDENTIFIER}'
persistentVolumeClaim:
claimName: 'gitea-${INSTANCE_IDENTIFIER}'
- name: 'gitea-config-${INSTANCE_IDENTIFIER}'
configMap:
defaultMode: 0660
name: 'gitea-config-${INSTANCE_IDENTIFIER}'
- name: 'gitea-certs-${INSTANCE_IDENTIFIER}'
secret:
defaultMode: 0640
secretName: 'gitea-certs-${INSTANCE_IDENTIFIER}'
- name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
emptyDir:
- name: 'configuration-certs-${INSTANCE_IDENTIFIER}'
secret:
defaultMode: 0640
secretName: 'configuration-certs-${INSTANCE_IDENTIFIER}'
- name: 'configuration-${INSTANCE_IDENTIFIER}'
emptyDir:
defaultMode: 660
restartPolicy: Always
terminationGracePeriodSeconds: 62
dnsPolicy: ClusterFirst
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
parameters:
- name: GITEA_VERSION
displayName: Gitea Image Version
description: The version of the gitea image.
required: true
- name: CONFIGURATION_VERSION
displayName: Configuration Service Image Version
description: The version of the configuration service image.
required: true
- name: INSTANCE_IDENTIFIER
description: Provides an identifier to tie all objects in the deployment together.
generate: expression
from: "[a-z0-9]{6}"
- name: STORE_PASSWORD
generate: expression
from: "[a-zA-Z0-9]{25}"

MySQL container not starting up on Kubernetes

I was using this image (bitnami/mysql:5.7) to run my application with docker-compose. However, when I run the same setup on a Kubernetes cluster I get the error:
[ERROR] Could not open file '/opt/bitnami/mysql/logs/mysqld.log' for error logging: Permission denied
Here's my deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.21.0 ()
creationTimestamp: null
labels:
io.kompose.service: common-db
name: common-db
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: common-db
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.21.0 ()
creationTimestamp: null
labels:
io.kompose.service: common-db
spec:
containers:
- env:
- name: ALLOW_EMPTY_PASSWORD
value: "yes"
- name: MYSQL_DATABASE
value: "common-development"
- name: MYSQL_REPLICATION_MODE
value: "master"
- name: MYSQL_REPLICATION_PASSWORD
value: "repl_password"
- name: MYSQL_REPLICATION_USER
value: "repl_user"
image: bitnami/mysql:5.7
imagePullPolicy: ""
name: common-db
ports:
- containerPort: 3306
securityContext:
runAsUser: 0
resources:
requests:
memory: 512Mi
cpu: 500m
limits:
memory: 512Mi
cpu: 500m
volumeMounts:
- name: common-db-initdb
mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
volumes:
- name: common-db-initdb
configMap:
name: common-db-config
serviceAccountName: ""
status: {}
The ConfigMap holds the my.cnf configuration data. Any pointers on where I could be going wrong, especially since the same image works in docker-compose?
Try changing the file permissions using an init container; in the official Bitnami Helm chart they also update file permissions and manage the security context.
Helm chart: https://github.com/bitnami/charts/blob/master/bitnami/mysql/templates/master-statefulset.yaml
UPDATE:
initContainers:
- command:
- /bin/bash
- -ec
- |
chown -R 1001:1001 /bitnami/mysql
image: docker.io/bitnami/minideb:buster
imagePullPolicy: Always
name: volume-permissions
resources: {}
securityContext:
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/mysql
name: data
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1001
runAsUser: 1001
serviceAccount: mysql
You may need to use subPath (see the Kubernetes documentation on subPath for details):
volumeMounts:
- name: common-db-initd
mountPath: /opt/bitnami/mysql/conf/my_custom.cnf
subPath: my_custom.cnf
Alternatively, you can simply install Bitnami MySQL using its Helm chart.
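For example (the release name is just a placeholder):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install common-db bitnami/mysql
The chart manages the security context (and optionally the volume-permissions init container) for you.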

OpenShift Enterprise: Unable to connect to deployed app via browser

Deployed a Java application in OpenShift (3.9 and 3.11), but cannot reach the application via the browser.
I created a REST API application image (Open Liberty and OpenJDK 11) and pushed it to the OpenShift docker-registry via a Maven build. The ImageStream is created. I deployed the image and created a route. The pod comes up, and the pod logs show the Liberty server has started. I accessed the pod via the terminal and was able to use curl (http://localhost:9080) to test the APIs. But when I use the route to access the app from a browser, I get a "host could not be found" error.
I have the same application successfully running on Minishift.
Where do I look, and what errors should I look for?
apiVersion: v1
kind: Template
metadata:
name: ${APPLICATION_NAME}-template
annotations:
description: ${APPLICATION_NAME}
objects:
# Application Service
- apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
service.alpha.openshift.io/serving-cert-secret-name: app-certs
labels:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
namespace: ${NAME_SPACE}
spec:
ports:
- name: 9443-tcp
port: 9443
protocol: TCP
targetPort: 9443
selector:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
deploymentconfig: ${APPLICATION_NAME}-${APP_VERSION_TAG}
sessionAffinity: None
type: ClusterIP
# Application Route
- apiVersion: v1
kind: Route
metadata:
annotations:
openshift.io/host.generated: "true"
labels:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
spec:
port:
targetPort: 9443-tcp
tls:
termination: reencrypt
to:
kind: Service
name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
weight: 100
wildcardPolicy: None
# APPLICATION DEPLOYMENT CONFIG
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
generation: 1
labels:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
deploymentconfig: ${APPLICATION_NAME}-${APP_VERSION_TAG}
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
labels:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
deploymentconfig: ${APPLICATION_NAME}-${APP_VERSION_TAG}
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- ${APPLICATION_NAME}-${APP_VERSION_TAG}
# - key: region
# operator: In
# values:
# - ${TARGET_ENVIRONMENT}
topologyKey: "kubernetes.io/hostname"
containers:
- image: ${APPLICATION_NAME}:${TAG}
imagePullPolicy: Always
# livenessProbe:
# failureThreshold: 3
# httpGet:
# path: ${APPLICATION_HEALTH_CHECK_URL}
# port: 8080
# scheme: HTTP
# initialDelaySeconds: 15
# periodSeconds: 15
# successThreshold: 1
# timeoutSeconds: 25
# readinessProbe:
# failureThreshold: 3
# httpGet:
# path: ${APPLICATION_READINESS_CHECK_URL}
# port: 8080
# scheme: HTTP
# initialDelaySeconds: 10
# periodSeconds: 15
# successThreshold: 1
# timeoutSeconds: 25
name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
envFrom:
- configMapRef:
name: server-env
- secretRef:
name: server-env
env:
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
name: keystore-secret
key: KEYSTORE_PASSWORD
- name: KEYSTORE_PKCS12
value: /var/run/secrets/java.io/keystores/keystore.pkcs12
ports:
- containerPort: 9443
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- name: app-certs
mountPath: /var/run/secrets/openshift.io/app-certs
- name: keystore-volume
mountPath: /var/run/secrets/java.io/keystores
initContainers:
- name: pem-to-keystore
image: registry.access.redhat.com/redhat-sso-7/sso71-openshift:1.1-16
env:
- name: keyfile
value: /var/run/secrets/openshift.io/app-certs/tls.key
- name: crtfile
value: /var/run/secrets/openshift.io/app-certs/tls.crt
- name: keystore_pkcs12
value: /var/run/secrets/java.io/keystores/keystore.pkcs12
- name: keystore_jks
value: /var/run/secrets/java.io/keystores/keystore.jks
- name: password
valueFrom:
secretKeyRef:
name: keystore-secret
key: KEYSTORE_PASSWORD
command: ['/bin/bash']
args: ['-c', "openssl pkcs12 -export -inkey $keyfile -in $crtfile -out $keystore_pkcs12 -password pass:$password"]
volumeMounts:
- name: keystore-volume
mountPath: /var/run/secrets/java.io/keystores
- name: app-certs
mountPath: /var/run/secrets/openshift.io/app-certs
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: app-certs
secret:
secretName: app-certs
- name: keystore-volume
emptyDir: {}
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- ${APPLICATION_NAME}-${APP_VERSION_TAG}
from:
kind: ImageStreamTag
name: ${APPLICATION_NAME}:${APP_VERSION_TAG}
type: ImageChange
parameters:
- name: APPLICATION_NAME
description: Name of the app
value: microservice
required: true
- name: APP_VERSION_TAG
description: TAG of the image stream tag
value: latest
required: true
- name: NAME_SPACE
description: Namespace
value: microservice--sbx--microservice
required: true
- name: DOMAIN_URL
description: DOMAIN_URL
value: microservice-myproject
required: true
- name: APPLICATION_LIVENESS_CHECK_URL
description: LIVENESS Check URL
value: /health
required: true
- name: APPLICATION_READINESS_CHECK_URL
description: READINESS Check URL
value: /microservice/envvariables
required: true
- name: DOCKER_IMAGE_REPO
description: Docker Image Repository
value: docker-registry-default.apps.xxxx.xxx.xxx.xxx.com
required: true
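Regarding where to look when the browser reports "host could not be found": a few quick checks, as a sketch (the object names are the generated ones from the template above):
oc get route ${APPLICATION_NAME}-${APP_VERSION_TAG} -o wide   # shows the generated host and the target service
oc get endpoints ${APPLICATION_NAME}-${APP_VERSION_TAG}       # empty ENDPOINTS means the service selector matches no pod
oc get pods -o wide                                           # confirms the pod is Running and Ready
nslookup <route-host>                                         # the browser error usually means this lookup fails
A "host could not be found" error in the browser typically means the route's hostname does not resolve, i.e. the wildcard DNS entry for the cluster's apps domain is missing or not reachable from your machine (Minishift works out of the box because its routes use nip.io hostnames).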

How to connect a Cloud SQL instance to a SQL cluster?

I have the deployment YAML file on the cluster, the connection name of the SQL instance, and its public IP address. What should I add, and where, in order to connect the instance and the cluster? I want anything added to the SQL cluster to be automatically saved to the instance, and vice versa.
This is the deployment code:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp:
generation: 1
labels:
app: mysql
name: mysql
namespace: default
resourceVersion: "1420"
selfLink:
uid:
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: mysql
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: mysql
spec:
containers:
- env:
- name:
valueFrom:
secretKeyRef:
key:
name: mysql
image: mysql:5.6
imagePullPolicy: IfNotPresent
name: mysql
ports:
- containerPort: 3306
name: mysql
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/mysql
name:
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name:
persistentVolumeClaim:
claimName:
status:
availableReplicas: 1
conditions:
- lastTransitionTime:
lastUpdateTime:
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime:
lastUpdateTime:
message: ReplicaSet "mysql" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 1
readyReplicas: 1
replicas: 1
updatedReplicas: 1
You should use the Cloud SQL proxy and add it as a sidecar to the application that queries the Cloud SQL instance. Google documents this as the suggested best practice.
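A minimal sketch of such a sidecar container, assuming a service-account key stored in a Secret and an instance connection name of the form PROJECT:REGION:INSTANCE (all names and the image tag here are placeholders to adapt):
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.33.2              # legacy v1 proxy; pick a current tag
  command:
    - /cloud_sql_proxy
    - -instances=my-project:us-central1:my-instance=tcp:3306   # placeholder connection name
    - -credential_file=/secrets/cloudsql/credentials.json
  volumeMounts:
    - name: cloudsql-instance-credentials                      # Secret with the service-account key
      mountPath: /secrets/cloudsql
      readOnly: true
With the proxy listening on tcp:3306, the application container connects to 127.0.0.1:3306 instead of the instance's public IP.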

How to make a connection to Google Cloud SQL from Google Container Engine?

I am using Node.js, deployed with Kubernetes on Google Container Engine, but I can't make a connection to MySQL.
This is my Node.js connection:
var pool = mysql.createPool({
connectionLimit : 100,
user : process.env.DB_USER,
password : process.env.DB_PASSWORD,
database : process.env.DB_NAME,
multipleStatements : true,
socketPath : '/cloudsql/' + process.env.INSTANCE_CONNECTION_NAME
})
I was using this for Google App Engine and it worked. Now I need to move to GKE, and it throws an error saying mySQL is not defined.
This is my app-frontend.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app-frontend
labels:
app: app
spec:
replicas: 1
template:
metadata:
labels:
app: app
tier: frontend
spec:
containers:
- name: app
image: gcr.io/app-12345/app:1.0
env :
- name : DB_HOST
value : 127.0.0.1:3306
- name : DB_USER
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
- name : DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
ports:
- name: http-server
containerPort: 8080
imagePullPolicy: Always
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
name: cloudsql-proxy
imagePullPolicy: Always
command:
- /cloud_sql_proxy
- --dir=/cloudsql
- --instances=mulung=tcp:3306
- --credential_file=/secrets/cloudsql/credentials.json
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
# [END proxy_container]
ports:
- name: portdb
containerPort: 3306
# [START volumes]
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
What should I do to fix it? Thank you.
Check the command part; the proxy flags should look like this:
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=CLOUD_SQL_INSTANCE_NAME",
"-credential_file=/secrets/cloudsql/credentials.json"]
Here is a working deployment.yaml from one of my backend applications:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: <appname>
spec:
replicas: 2
template:
metadata:
labels:
app: <appname>
spec:
containers:
- image: gcr.io/<some_name>/cloudsql-docker/gce-proxy:1.05
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=CLOUD_SQL_INSTANCE_NAME",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-oauth-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
- name: <appname>
image: IMAGE_NAME
ports:
- containerPort: 8888
readinessProbe:
httpGet:
path: /<appname>/health
port: 8888
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 30
successThreshold: 1
failureThreshold: 5
env:
- name: PROJECT_NAME
value: <some_name>
- name: PROJECT_ZONE
value: <some_name>
- name: INSTANCE_NAME
value: <some_name>
- name: INSTANCE_PORT
value: <some_name>
- name: CONTEXT_PATH
value: <appname>
volumeMounts:
- name: application-config
mountPath: /opt/config-mount
volumes:
- name: cloudsql-oauth-credentials
secret:
secretName: cloudsql-oauth-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
- name: application-config
secret:
secretName: <appname>-ENV_NAME-config
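On the application side, since the proxy exposes a TCP port rather than a Unix socket, the Node pool would connect over host/port instead of socketPath. A sketch, assuming the mysql driver is installed (the "mySQL is not defined" error suggests the require call is missing):
// assumes: npm install mysql
var mysql = require('mysql');

var pool = mysql.createPool({
  connectionLimit    : 100,
  host               : '127.0.0.1',   // the cloudsql-proxy sidecar listens on tcp:3306 in the same pod
  port               : 3306,
  user               : process.env.DB_USER,
  password           : process.env.DB_PASSWORD,
  database           : process.env.DB_NAME,
  multipleStatements : true
});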