How to connect to Google Cloud SQL from Google Container Engine? - mysql

I am using Node.js and deployed it with Kubernetes on Google Container Engine (GKE), but I can't make a connection to MySQL.
This is my Node.js connection pool:
var mysql = require('mysql'); // the mysql module must be installed and required
var pool = mysql.createPool({
connectionLimit : 100,
user : process.env.DB_USER,
password : process.env.DB_PASSWORD,
database : process.env.DB_NAME,
multipleStatements : true,
socketPath : '/cloudsql/' + process.env.INSTANCE_CONNECTION_NAME
})
I was using this with Google App Engine and it worked. Now I need to move to GKE, and it throws an error saying mysql is not defined.
This is my app-frontend.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: app-frontend
labels:
app: app
spec:
replicas: 1
template:
metadata:
labels:
app: app
tier: frontend
spec:
containers:
- name: app
image: gcr.io/app-12345/app:1.0
env :
- name : DB_HOST
value : 127.0.0.1:3306
- name : DB_USER
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
- name : DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: password
ports:
- name: http-server
containerPort: 8080
imagePullPolicy: Always
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
name: cloudsql-proxy
imagePullPolicy: Always
command:
- /cloud_sql_proxy
- --dir=/cloudsql
- --instances=mulung=tcp:3306
- --credential_file=/secrets/cloudsql/credentials.json
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
# [END proxy_container]
ports:
- name: portdb
containerPort: 3306
# [START volumes]
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
What should I do to fix it? Thank you.

Can you check the command part? It should look like this:
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=CLOUD_SQL_INSTANCE_NAME",
"-credential_file=/secrets/cloudsql/credentials.json"]
Here is a working deployment.yaml from one of my backend applications:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: <appname>
spec:
replicas: 2
template:
metadata:
labels:
app: <appname>
spec:
containers:
- image: gcr.io/<some_name>/cloudsql-docker/gce-proxy:1.05
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=CLOUD_SQL_INSTANCE_NAME",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-oauth-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
- name: <appname>
image: IMAGE_NAME
ports:
- containerPort: 8888
readinessProbe:
httpGet:
path: /<appname>/health
port: 8888
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 30
successThreshold: 1
failureThreshold: 5
env:
- name: PROJECT_NAME
value: <some_name>
- name: PROJECT_ZONE
value: <some_name>
- name: INSTANCE_NAME
value: <some_name>
- name: INSTANCE_PORT
value: <some_name>
- name: CONTEXT_PATH
value: <appname>
volumeMounts:
- name: application-config
mountPath: /opt/config-mount
volumes:
- name: cloudsql-oauth-credentials
secret:
secretName: cloudsql-oauth-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
- name: application-config
secret:
secretName: <appname>-ENV_NAME-config
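Note that the Node.js pool in the question also reads DB_NAME and INSTANCE_CONNECTION_NAME from the environment, while the question's Deployment only defines DB_HOST, DB_USER and DB_PASSWORD. A sketch of the missing env entries, with assumed values:
- name: DB_NAME
  value: mydb  # hypothetical database name
- name: INSTANCE_CONNECTION_NAME
  value: my-project:asia-east1:mulung  # only needed when connecting over the /cloudsql socket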

Related

AWS Kubernetes Cluster - WordPress Error establishing a database connection

I am new to Kubernetes and am trying to deploy WordPress and MySQL as Kubernetes pod containers, but it throws the error "Error establishing a database connection" when running.
(Screenshots: WordPress error, overall kubectl status, AWS inbound rules.)
mysql-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-deployment
labels:
app: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.7
ports:
- containerPort: 80
env:
- name: MYSQL_ROOT_PASSWORD
value: DEVOPS1
- name: MYSQL_USER
value: wpuser
- name: MYSQL_PASSWORD
value: DEVOPS12345
- name: MYSQL_DATABASE
value: wpdb
mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql-service
spec:
selector:
app: mysql
ports:
- protocol: TCP
port: 3306
targetPort: 3306
wordpress-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress-deployment
labels:
app: wordpress
spec:
replicas: 3
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
containers:
- name: wordpress
image: wordpress
ports:
- containerPort: 80
env:
- name: WORDPRESS_DB_HOST
value: mysql-service
- name: WORDPRESS_DB_USER
value: wpuser
- name: WORDPRESS_DB_PASSWORD
value: wpdb
- name: WORDPRESS_DEBUG
value: "1"
wp-service.yaml
apiVersion: v1
kind: Service
metadata:
name: wordpress-service
spec:
type: NodePort
selector:
app: wordpress
ports:
- protocol: TCP
port: 80
targetPort: 80
I also opened the same thread on the Kubernetes forum.
Please change containerPort in your mysql-deployment file from 80 to 3306:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-deployment
labels:
app: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: DEVOPS1
- name: MYSQL_USER
value: wpuser
- name: MYSQL_PASSWORD
value: DEVOPS12345
- name: MYSQL_DATABASE
value: wpdb
Check similar problem: wordpress-database-connection-kubernetes.
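One more thing worth checking, though it is not part of the fix above: in the wordpress-deployment.yaml, WORDPRESS_DB_PASSWORD is set to the database name (wpdb) rather than the MySQL user's password (DEVOPS12345 in mysql-deployment.yaml). If that is not intentional, the values have to match, roughly:
- name: WORDPRESS_DB_USER
  value: wpuser
- name: WORDPRESS_DB_PASSWORD
  value: DEVOPS12345  # must equal MYSQL_PASSWORD in the MySQL deployment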

MySQL Kubernetes

I have the following MySQL configuration for Kubernetes, but I cannot connect to the database with my local mysql client. I do a port-forward:
kubectl port-forward svc/mysql 3307
and then try to connect with command:
mysql -h 127.0.0.1 -P 3307 -uroot -p
with the password pass. This password is defined in the secret file for the root user.
The error is: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Do you have any idea what could be wrong?
mysql-deployment:
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
ports:
- port: 3307
targetPort: 3306
selector:
app: mysql
tier: database
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: mysql
tier: database
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
#
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
selector:
matchLabels:
app: mysql
tier: database
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
tier: database
spec:
containers:
- image: mysql:5.7 # image from docker-hub
args:
- "--ignore-db-dir=lost+found"
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: db-root-credentials
key: password
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: username
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: password
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: db-conf
key: name
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
mysqldb-root-credentials:
apiVersion: v1
kind: Secret
metadata:
name: db-root-credentials
data:
password: cGFzcwo=
mysqldb-credentials:
apiVersion: v1
kind: Secret
metadata:
name: db-credentials
data:
username: c2ViYQo=
password: c2ViYQo=
I've reproduced your issue and solved it by changing the way the secrets are created. I used the kubectl CLI to create the secrets:
kubectl create secret generic db-credentials --from-literal=password=xyz --from-literal=username=xyz
kubectl create secret generic mysql-pass --from-literal=password=pass
Then deployed PVC, Deployment and Service:
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
ports:
- port: 3306
targetPort: 3306
selector:
app: mysql
tier: database
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: mysql
tier: database
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
labels:
app: mysql
tier: database
spec:
selector:
matchLabels:
app: mysql
tier: database
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
tier: database
spec:
containers:
- image: mysql:5.7 # image from docker-hub
args:
- "--ignore-db-dir=lost+found"
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: username
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
Exec to pod:
kubectl exec -it mysql-78d9b7b765-2ms5n -- mysql -h 127.0.0.1 -P 3306 -uroot -p
Once I enter the root password everything works fine:
Welcome to the MySQL monitor. Commands end with ; or \g.
[...]
mysql>
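The likely culprit in the original manifests is that the base64 values were encoded with a trailing newline: cGFzcwo= decodes to "pass\n" and c2ViYQo= to "seba\n", so MySQL was initialized with passwords ending in a newline. Creating the secrets with kubectl --from-literal avoids that. Another option, sketched below for the question's root-credentials secret only, is to declare the value as stringData and let the API server do the encoding:
apiVersion: v1
kind: Secret
metadata:
  name: db-root-credentials
type: Opaque
stringData:
  password: pass  # plain text; encoded server-side without a trailing newline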

Containers in a multiple container openshift pod can't communicate between themselves

I'm working on migrating our existing Docker containers into OpenShift, and am running into an issue trying to get two of our containers into a single pod. We're using Spring Cloud Config Server for our services, with a Gitea backend. I'd like to have these in a single pod so that the Java server and Git server are always tied together.
I'm able to get to each of the containers individually via the associated routes, but the config server is unable to reach the git server, and vice versa. I get a 404 when the config server tries to clone the git repo. I've tried using gitea-${INSTANCE_IDENTIFIER} (INSTANCE_IDENTIFIER just being a generated value to tie all of the objects together at a glance), gitea-${INSTANCE_IDENTIFIER}.myproject.svc, and gitea-${INSTANCE_IDENTIFIER}.myproject.svc.cluster.local, as well as the full url for the route that gets created, and nothing works.
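For what it's worth, containers in the same pod share a network namespace, so the config server can in principle reach the Gitea container over localhost without going through a Service or Route. A sketch of the git URI argument under that assumption (8443 is the Gitea HTTPS port in the template below, and the serving certificate would still have to be valid for whatever host name is used):
"-Dspring.cloud.config.server.git.uri=https://localhost:8443/org/centralrepo.git",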
Here is my template, with a few things removed (...) for security:
apiVersion: v1
kind: Template
metadata:
name: configuration-template
annotations:
description: 'Configuration containers template'
iconClass: 'fa fa-gear'
tags: 'git, Spring Cloud Configuration'
objects:
- apiVersion: v1
kind: ConfigMap
metadata:
name: 'gitea-config-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
data:
local-docker.ini: |
APP_NAME = Git Server
RUN_USER = git
RUN_MODE = prod
[repository]
ROOT = /home/git/data/git/repositories
[repository.upload]
TEMP_PATH = /home/git/data/gitea/uploads
[server]
APP_DATA_PATH = /home/git/data/gitea
HTTP_PORT = 8443
DISABLE_SSH = true
SSH_PORT = 22
LFS_START_SERVER = false
OFFLINE_MODE = false
PROTOCOL = https
CERT_FILE = /var/run/secrets/service-cert/tls.crt
KEY_FILE = /var/run/secrets/service-cert/tls.key
REDIRECT_OTHER_PORT = true
PORT_TO_REDIRECT = 8080
[database]
PATH = /home/git/data/gitea/gitea.db
DB_TYPE = sqlite3
NAME = gitea
USER = gitea
PASSWD = XXXX
[session]
PROVIDER_CONFIG = /home/git/data/gitea/sessions
PROVIDER = file
[picture]
AVATAR_UPLOAD_PATH = /home/git/data/gitea/avatars
DISABLE_GRAVATAR = false
ENABLE_FEDERATED_AVATAR = false
[attachment]
PATH = /home/git/data/gitea/attachments
[log]
ROOT_PATH = /home/git/data/gitea/log
MODE = file
LEVEL = Info
[mailer]
ENABLED = false
[service]
REGISTER_EMAIL_CONFIRM = false
ENABLE_NOTIFY_MAIL = false
DISABLE_REGISTRATION = false
ENABLE_CAPTCHA = false
REQUIRE_SIGNIN_VIEW = false
DEFAULT_KEEP_EMAIL_PRIVATE = false
NO_REPLY_ADDRESS = noreply.example.org
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: 'gitea-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
- apiVersion: v1
kind: Route
metadata:
name: 'gitea-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
spec:
port:
targetPort: 'https'
tls:
termination: 'passthrough'
to:
kind: Service
name: 'gitea-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
kind: Service
metadata:
name: 'gitea-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
annotations:
service.alpha.openshift.io/serving-cert-secret-name: 'gitea-certs-${INSTANCE_IDENTIFIER}'
spec:
type: ClusterIP
ports:
- name: 'https'
port: 443
targetPort: 8443
selector:
app: 'configuration-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
kind: Route
metadata:
name: 'configuration-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
spec:
port:
targetPort: 'https'
tls:
termination: 'passthrough'
to:
kind: Service
name: 'configuration-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
kind: Service
metadata:
name: 'configuration-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
annotations:
service.alpha.openshift.io/serving-cert-secret-name: 'configuration-certs-${INSTANCE_IDENTIFIER}'
spec:
type: ClusterIP
ports:
- name: 'https'
port: 443
targetPort: 8105
selector:
app: 'configuration-${INSTANCE_IDENTIFIER}'
- apiVersion: v1
kind: DeploymentConfig
metadata:
name: 'gitea-${INSTANCE_IDENTIFIER}'
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
configuration: '${CONFIGURATION_VERSION}'
spec:
selector:
app: 'configuration-${INSTANCE_IDENTIFIER}'
replicas: 1
template:
metadata:
labels:
app: 'configuration-${INSTANCE_IDENTIFIER}'
gitea: '${GITEA_VERSION}'
spec:
initContainers:
- name: pem-to-keystore
image: nginx
env:
- name: keyfile
value: /var/run/secrets/openshift.io/services_serving_certs/tls.key
- name: crtfile
value: /var/run/secrets/openshift.io/services_serving_certs/tls.crt
- name: keystore_pkcs12
value: /var/run/secrets/java.io/keystores/keystore.pkcs12
- name: password
value: '${STORE_PASSWORD}'
command: ['sh']
args: ['-c', "openssl pkcs12 -export -inkey $keyfile -in $crtfile -out $keystore_pkcs12 -password pass:$password -name 'server certificate'"]
volumeMounts:
- mountPath: /var/run/secrets/java.io/keystores
name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
- mountPath: /var/run/secrets/openshift.io/services_serving_certs
name: 'configuration-certs-${INSTANCE_IDENTIFIER}'
- name: pem-to-truststore
image: openjdk:alpine
env:
- name: ca_bundle
value: /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
- name: truststore_jks
value: /var/run/secrets/java.io/keystores/truststore.jks
- name: password
value: '${STORE_PASSWORD}'
command: ['/bin/sh']
args: ["-c",
"keytool -noprompt -importkeystore -srckeystore $JAVA_HOME/jre/lib/security/cacerts -srcstoretype JKS -destkeystore $truststore_jks -storepass $password -srcstorepass changeit && cd /var/run/secrets/java.io/keystores/ && awk '/-----BEGIN CERTIFICATE-----/{filename=\"crt-\"NR}; {print >filename}' $ca_bundle && for file in crt-*; do keytool -import -noprompt -keystore $truststore_jks -file $file -storepass $password -alias service-$file; done && rm crt-*"]
volumeMounts:
- mountPath: /var/run/secrets/java.io/keystores
name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
containers:
- name: 'gitea-${INSTANCE_IDENTIFIER}'
image: '...'
command: ['sh',
'-c',
'tar xf /app/gitea/gitea-data.tar.gz -C /home/git/data && cp /app/config/local-docker.ini /home/git/config/local-docker.ini && gitea web --config /home/git/config/local-docker.ini']
ports:
- containerPort: 8443
protocol: TCP
imagePullPolicy: Always
volumeMounts:
- mountPath: '/home/git/data'
name: 'gitea-data-${INSTANCE_IDENTIFIER}'
readOnly: false
- mountPath: '/app/config'
name: 'gitea-config-${INSTANCE_IDENTIFIER}'
readOnly: false
- mountPath: '/var/run/secrets/service-cert'
name: 'gitea-certs-${INSTANCE_IDENTIFIER}'
- name: 'configuration-${INSTANCE_IDENTIFIER}'
image: '...'
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command: [
"java",
"-Djava.security.egd=file:/dev/./urandom",
"-Dspring.profiles.active=terraform",
"-Djavax.net.ssl.trustStore=/var/run/secrets/java.io/keystores/truststore.jks",
"-Djavax.net.ssl.trustStoreType=JKS",
"-Djavax.net.ssl.trustStorePassword=${STORE_PASSWORD}",
"-Dserver.ssl.key-store=/var/run/secrets/java.io/keystores/keystore.pkcs12",
"-Dserver.ssl.key-store-password=${STORE_PASSWORD}",
"-Dserver.ssl.key-store-type=PKCS12",
"-Dserver.ssl.trust-store=/var/run/secrets/java.io/keystores/truststore.jks",
"-Dserver.ssl.trust-store-password=${STORE_PASSWORD}",
"-Dserver.ssl.trust-store-type=JKS",
"-Dspring.cloud.config.server.git.uri=https://gitea-${INSTANCE_IDENTIFIER}.svc.cluster.local/org/centralrepo.git",
"-jar",
"/app.jar"
]
ports:
- containerPort: 8105
protocol: TCP
imagePullPolicy: Always
volumeMounts:
- mountPath: '/var/run/secrets/java.io/keystores'
name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
readOnly: true
- mountPath: 'target/centralrepo'
name: 'configuration-${INSTANCE_IDENTIFIER}'
readOnly: false
volumes:
- name: 'gitea-data-${INSTANCE_IDENTIFIER}'
persistentVolumeClaim:
claimName: 'gitea-${INSTANCE_IDENTIFIER}'
- name: 'gitea-config-${INSTANCE_IDENTIFIER}'
configMap:
defaultMode: 0660
name: 'gitea-config-${INSTANCE_IDENTIFIER}'
- name: 'gitea-certs-${INSTANCE_IDENTIFIER}'
secret:
defaultMode: 0640
secretName: 'gitea-certs-${INSTANCE_IDENTIFIER}'
- name: 'configuration-keystore-${INSTANCE_IDENTIFIER}'
emptyDir:
- name: 'configuration-certs-${INSTANCE_IDENTIFIER}'
secret:
defaultMode: 0640
secretName: 'configuration-certs-${INSTANCE_IDENTIFIER}'
- name: 'configuration-${INSTANCE_IDENTIFIER}'
emptyDir:
defaultMode: 660
restartPolicy: Always
terminationGracePeriodSeconds: 62
dnsPolicy: ClusterFirst
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
parameters:
- name: GITEA_VERSION
displayName: Gitea Image Version
description: The version of the gitea image.
required: true
- name: CONFIGURATION_VERSION
displayName: Configuration Service Image Version
description: The version of the configuration service image.
required: true
- name: INSTANCE_IDENTIFIER
description: Provides an identifier to tie all objects in the deployment together.
generate: expression
from: "[a-z0-9]{6}"
- name: STORE_PASSWORD
generate: expression
from: "[a-zA-Z0-9]{25}"

Unable to connect tomcat container to mariadb database container in kubernetes?

The Tomcat and MariaDB services are up and running. Pinging works from the Tomcat service to the MariaDB service, but not from the DB to Tomcat using the ping command. I have attached my scripts for troubleshooting the issue.
tomcat deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: xyzapp-tomcat
spec:
selector:
matchLabels:
app: xyzapp-tomcat
tier: backend
track: stable
template:
metadata:
labels:
app: xyzapp-tomcat
tier: backend
track: stable
spec:
containers:
- name: xyzapp-tomcat
image: xyz/xyz:tomcat
env:
- name: MYSQL_SERVICE_HOST
value: "xyzapp-mariadb"
- name: MYSQL_SERVICE_PORT
value: "3306"
#- name: DB_PORT_3306_TCP_ADDR
#value: xyzapp-mariadb #service name of mysql
- name: MYSQL_DATABASE
value: xyzapp
- name: MYSQL_USER
value: student
- name: MYSQL_PASSWORD
value: student
ports:
- name: http
containerPort: 8080
database deployment.yml
apiVersion: v1
kind: Service
metadata:
name: xyzapp-mariadb
labels:
app: xyzapp
spec:
ports:
- port: 3306
selector:
app: xyzapp
tier: mariadb
clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: xyzapp-mariadb
labels:
app: xyzapp
spec:
selector:
matchLabels:
app: xyzapp
tier: mariadb
strategy:
type: Recreate
template:
metadata:
labels:
app: xyzapp
tier: mariadb
spec:
containers:
- image: mariadb
name: xyzapp-mariadb
env:
- name: MYSQL_ROOT_PASSWORD
value: password
- name: MYSQL_DATABASE
value: xyzapp
- name: MYSQL_USER
value: student
- name: MYSQL_PASSWORD
value: student
args: ["--default-authentication-plugin=mysql_native_password"]
ports:
- containerPort: 3306
volumeMounts:
- name: mariadb-init
mountPath: /docker-entrypoint-initdb.d
volumes:
- name: mariadb-init
persistentVolumeClaim:
claimName: xyzapp-initdb-pv-claim
tomcat logs:
05-May-2020 11:48:53.486 WARNING [main] org.apache.tomcat.util.scan.StandardJarScanner.processURLs Failed to scan [file:/usr/local/tomcat/lib/mysql-connector-java-8.0.20.jar] from classloader hierarchy
JDBC mapped context.xml:
<Resource name="jdbc/TestDB" auth="Container" type="javax.sql.DataSource" maxTotal="100" maxIdle="30" maxWaitMillis="10000" username="$MYSQL_USER" password="$MYSQL_PASSWORD" driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://$MYSQL_SERVICE_HOST:$MYSQL_SERVICE_PORT/$MYSQL_DATABASE"/>

Openshift Enterprise : Unable to connect to deployed app via browser

I deployed a Java application on OpenShift (3.9 and 3.11), but cannot reach the application via the browser.
I created a REST API application image (Open Liberty and OpenJDK 11) and pushed it to the OpenShift docker-registry via a Maven build. The ImageStream is created. I deployed the image and created a route. The pod comes up, and the pod logs show the Liberty server has started. I accessed the pod via the terminal and was able to use curl (http://localhost:9080) to test the APIs. But when I use the route to access the app from a browser, I get a "host could not be found" error.
I have the same application successfully running on minishift.
Where should I look, and for what errors?
apiVersion: v1
kind: Template
metadata:
name: ${APPLICATION_NAME}-template
annotations:
description: ${APPLICATION_NAME}
objects:
# Application Service
- apiVersion: v1
kind: Service
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
service.alpha.openshift.io/serving-cert-secret-name: app-certs
labels:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
namespace: ${NAME_SPACE}
spec:
ports:
- name: 9443-tcp
port: 9443
protocol: TCP
targetPort: 9443
selector:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
deploymentconfig: ${APPLICATION_NAME}-${APP_VERSION_TAG}
sessionAffinity: None
type: ClusterIP
# Application Route
- apiVersion: v1
kind: Route
metadata:
annotations:
openshift.io/host.generated: "true"
labels:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
spec:
port:
targetPort: 9443-tcp
tls:
termination: reencrypt
to:
kind: Service
name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
weight: 100
wildcardPolicy: None
# APPLICATION DEPLOYMENT CONFIG
- apiVersion: v1
kind: DeploymentConfig
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
generation: 1
labels:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
deploymentconfig: ${APPLICATION_NAME}-${APP_VERSION_TAG}
strategy:
activeDeadlineSeconds: 21600
resources: {}
rollingParams:
intervalSeconds: 1
maxSurge: 25%
maxUnavailable: 25%
timeoutSeconds: 600
updatePeriodSeconds: 1
type: Rolling
template:
metadata:
annotations:
openshift.io/generated-by: OpenShiftNewApp
labels:
app: ${APPLICATION_NAME}-${APP_VERSION_TAG}
deploymentconfig: ${APPLICATION_NAME}-${APP_VERSION_TAG}
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- ${APPLICATION_NAME}-${APP_VERSION_TAG}
# - key: region
# operator: In
# values:
# - ${TARGET_ENVIRONMENT}
topologyKey: "kubernetes.io/hostname"
containers:
- image: ${APPLICATION_NAME}:${TAG}
imagePullPolicy: Always
# livenessProbe:
# failureThreshold: 3
# httpGet:
# path: ${APPLICATION_HEALTH_CHECK_URL}
# port: 8080
# scheme: HTTP
# initialDelaySeconds: 15
# periodSeconds: 15
# successThreshold: 1
# timeoutSeconds: 25
# readinessProbe:
# failureThreshold: 3
# httpGet:
# path: ${APPLICATION_READINESS_CHECK_URL}
# port: 8080
# scheme: HTTP
# initialDelaySeconds: 10
# periodSeconds: 15
# successThreshold: 1
# timeoutSeconds: 25
name: ${APPLICATION_NAME}-${APP_VERSION_TAG}
envFrom:
- configMapRef:
name: server-env
- secretRef:
name: server-env
env:
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
name: keystore-secret
key: KEYSTORE_PASSWORD
- name: KEYSTORE_PKCS12
value: /var/run/secrets/java.io/keystores/keystore.pkcs12
ports:
- containerPort: 9443
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- name: app-certs
mountPath: /var/run/secrets/openshift.io/app-certs
- name: keystore-volume
mountPath: /var/run/secrets/java.io/keystores
initContainers:
- name: pem-to-keystore
image: registry.access.redhat.com/redhat-sso-7/sso71-openshift:1.1-16
env:
- name: keyfile
value: /var/run/secrets/openshift.io/app-certs/tls.key
- name: crtfile
value: /var/run/secrets/openshift.io/app-certs/tls.crt
- name: keystore_pkcs12
value: /var/run/secrets/java.io/keystores/keystore.pkcs12
- name: keystore_jks
value: /var/run/secrets/java.io/keystores/keystore.jks
- name: password
valueFrom:
secretKeyRef:
name: keystore-secret
key: KEYSTORE_PASSWORD
command: ['/bin/bash']
args: ['-c', "openssl pkcs12 -export -inkey $keyfile -in $crtfile -out $keystore_pkcs12 -password pass:$password"]
volumeMounts:
- name: keystore-volume
mountPath: /var/run/secrets/java.io/keystores
- name: app-certs
mountPath: /var/run/secrets/openshift.io/app-certs
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: app-certs
secret:
secretName: app-certs
- name: keystore-volume
emptyDir: {}
test: false
triggers:
- type: ConfigChange
- imageChangeParams:
automatic: true
containerNames:
- ${APPLICATION_NAME}-${APP_VERSION_TAG}
from:
kind: ImageStreamTag
name: ${APPLICATION_NAME}:${APP_VERSION_TAG}
type: ImageChange
parameters:
- name: APPLICATION_NAME
description: Name of the app
value: microservice
required: true
- name: APP_VERSION_TAG
description: TAG of the image stream tag
value: latest
required: true
- name: NAME_SPACE
description: Namespace
value: microservice--sbx--microservice
required: true
- name: DOMAIN_URL
description: DOMAIN_URL
value: microservice-myproject
required: true
- name: APPLICATION_LIVENESS_CHECK_URL
description: LIVENESS Check URL
value: /health
required: true
- name: APPLICATION_READINESS_CHECK_URL
description: READINESS Check URL
value: /microservice/envvariables
required: true
- name: DOCKER_IMAGE_REPO
description: Docker Image Repository
value: docker-registry-default.apps.xxxx.xxx.xxx.xxx.com
required: true