Same-zone calls in Spring Cloud Eureka - spring-cloud-netflix

I am trying to run some apps in two different zones: office and shahbour.
Based on my reading, if I set preferSameZoneEureka to true, then applications within the same zone should always talk to each other, but in my case calls are still round-robined across zones. Below is my application.yml, which is common to all applications:
eureka:
  client:
    preferSameZoneEureka: true
    region: lebanon
    serviceUrl:
      office: http://localhost:8761/eureka/
      shahbour: http://192.168.15.202:8761/eureka/
    availabilityZones:
      lebanon: office
  instance:
    leaseRenewalIntervalInSeconds: 10
    metadataMap:
      instanceId: ${vcap.application.instance_id:${spring.application.name}:${spring.application.instance_id:${server.port}}}
      zone: office
hystrix:
  command.default.execution.isolation.thread.timeoutInMilliseconds: 5000
---
spring:
  profiles: shahbour
eureka:
  instance:
    metadataMap:
      zone: shahbour
  client:
    availabilityZones:
      lebanon: shahbour
My understanding is that all applications with the shahbour profile active should talk to each other, and only fall back to applications in the office zone when no same-zone instance is found.

I found out that I need two Eureka servers to accomplish the above, one in each zone.
Below is my Eureka configuration:
server:
  port: ${PORT:8761}
---
spring:
  profiles: office
eureka:
  instance:
    hostname: office
  client:
    serviceUrl:
      office: http://office:8761/eureka/
      shahbour: http://shahbour:8761/eureka/
---
spring:
  profiles: shahbour
eureka:
  instance:
    hostname: shahbour
  client:
    serviceUrl:
      office: http://office:8761/eureka/
      shahbour: http://shahbour:8761/eureka/
And for the services:
eureka:
  client:
    preferSameZoneEureka: true
    region: lebanon
    serviceUrl:
      office: http://office:8761/eureka/
      shahbour: http://shahbour:8761/eureka/
    availabilityZones:
      lebanon: office,shahbour
  instance:
    leaseRenewalIntervalInSeconds: 10
    metadataMap:
      instanceId: ${vcap.application.instance_id:${spring.application.name}:${spring.application.instance_id:${server.port}}}
      zone: office
hystrix:
  command.default.execution.isolation.thread.timeoutInMilliseconds: 5000
---
spring:
  profiles: shahbour
eureka:
  instance:
    metadataMap:
      zone: shahbour
  client:
    availabilityZones:
      lebanon: shahbour,office
By doing so, I am able to use any service in the office zone, and as soon as I start that service in my own environment (zone), calls switch to the local instance.
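For reference, here is a minimal consolidated sketch of what a service pinned to the shahbour zone ends up with once the profile overrides above are applied. It assumes the hostnames office and shahbour are resolvable (e.g. via /etc/hosts entries):

eureka:
  client:
    preferSameZoneEureka: true          # Ribbon prefers instances in our own zone
    region: lebanon
    serviceUrl:
      office: http://office:8761/eureka/
      shahbour: http://shahbour:8761/eureka/
    availabilityZones:
      lebanon: shahbour,office          # own zone first; office is the fallback
  instance:
    metadataMap:
      zone: shahbour                    # the zone this instance registers itself in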

Related

Switch user not working when connected to MySQL 8 database?

I am trying to get the switch-user feature in Spring Security to work. I am using Grails 4.0.10 and MySQL 8.
I created a sample hello-world Grails app and followed the switch-user guide from the documentation: https://grails.github.io/grails-spring-security-core/4.0.x/index.html#switchUser
If I use the default H2 database it works, but if I switch to the MySQL 8 database it throws a 404 page-not-found error and does not switch.
I have published the code on GitHub. Here is the link:
https://github.com/sanjaygir/switching
I have created a simple page in the secure controller. The page is index.gsp, which has a form to switch to another user. The logged-in user should be displayed at the top of this page. In the BootStrap file I have created two users: one admin and one regular user.
I have a local database with this configuration:
dataSource:
  dbCreate: create
  url: jdbc:mysql://localhost:3307/switch?useUnicode=yes&characterEncoding=UTF-8
  username: root
  password: password
In order to run this app you need a MySQL 8 database running. Please change the MySQL database name, username, and password in the above section of application.yml.
After the app starts, go directly to http://localhost:8080/secure/index, enter "user" in the textbox, and click the Switch button. It throws a page-not-found error, and if you go back to http://localhost:8080/secure/index you cannot see the logged-in user name at the top. That means the switch was not successful.
Here is the simple code for secure/index.gsp:
<%@ page contentType="text/html;charset=UTF-8" %>
<html>
<head>
    <title></title>
</head>
<body>
<sec:ifLoggedIn>
    Logged in as <sec:username/>
</sec:ifLoggedIn>
<form action='${request.contextPath}/login/impersonate' method='POST'>
    Switch to user: <input type='text' name='username'/> <br/>
    <input type='submit' value='Switch'/>
</form>
</body>
</html>
I hope I have made it clear. This is a simple hello-world app created to see the switch-user feature in action. I am puzzled why switch user works with the default H2 database but not when connected to MySQL 8. If anyone has any idea, I appreciate your help. Thanks.
UPDATE:
Today I switched the database to MySQL version 5 and it works.
I changed the following configuration in application.yml:
hibernate:
  cache:
    queries: false
    use_second_level_cache: false
    use_query_cache: false
dataSource:
  pooled: true
  jmxExport: true
  driverClassName: com.mysql.jdbc.Driver
  dialect: org.hibernate.dialect.MySQL5InnoDBDialect
  username: root
  password: 'password'
environments:
  development:
    dataSource:
      dbCreate: create-drop
      url: jdbc:mysql://localhost:3306/switch?useUnicode=yes&characterEncoding=UTF-8
In build.gradle I used:
runtime 'mysql:mysql-connector-java:5.1.19'
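For comparison, the MySQL 8 counterpart would presumably be an 8.x connector together with the newer driver class and dialect, exactly as the final configuration further down uses:

dataSource:
  driverClassName: com.mysql.cj.jdbc.Driver      # driver class shipped with mysql-connector-java 8.x
  dialect: org.hibernate.dialect.MySQL8Dialect   # Hibernate dialect for MySQL 8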
Still, I was not sure why it didn't work with MySQL 8.
I finally found the bug. I cannot believe what caused the 404 not-found issue: it was a single line in the configuration file.
Before the fix, application.yml looked like this:
---
grails:
  profile: web
  codegen:
    defaultPackage: rcroadraceweb4
  gorm:
    reactor:
      # Whether to translate GORM events into Reactor events
      # Disabled by default for performance reasons
      events: false
info:
  app:
    name: '@info.app.name@'
    version: '@info.app.version@'
    grailsVersion: '@info.app.grailsVersion@'
spring:
  jmx:
    unique-names: true
  main:
    banner-mode: "off"
  groovy:
    template:
      check-template-location: false
  devtools:
    restart:
      additional-exclude:
        - '*.gsp'
        - '**/*.gsp'
        - '*.gson'
        - '**/*.gson'
        - 'logback.groovy'
        - '*.properties'
management:
  endpoints:
    enabled-by-default: false
server:
  servlet:
    context-path: '/roadrace'
---
hibernate:
  cache:
    queries: false
    use_second_level_cache: false
    use_query_cache: false
grails:
  plugin:
    databasemigration:
      updateOnStart: true
      updateOnStartFileName: changelog.groovy
  controllers:
    upload:
      maxFileSize: 2000000
      maxRequestSize: 2000000
  mail:
    host: "localhost"
    port: 25
    default:
      to: 'root@localhost'
      from: 'noreply@runnercard.com'
dataSource:
  type: com.zaxxer.hikari.HikariDataSource
  pooled: true
  driverClassName: com.mysql.cj.jdbc.Driver
  dialect: org.hibernate.dialect.MySQL8Dialect
  dbCreate: none
  properties:
    minimumIdle: 5
    maximumPoolSize: 10
    poolName: main-db
    cachePrepStmts: true
    prepStmtCacheSize: 250
    prepStmtCacheSqlLimit: 2048
    useServerPrepStmts: true
    useLocalSessionState: true
    rewriteBatchedStatements: true
    cacheResultSetMetadata: true
    cacheServerConfiguration: true
    elideSetAutoCommits: true
    maintainTimeStats: false
dataSources:
  logging:
    # This is not used unless `useJdbcAccessLogger` or `useJdbcLogger` is set to `true`
    # This does not need to be setup unless it is in use.
    type: com.zaxxer.hikari.HikariDataSource
    pooled: true
    driverClassName: com.mysql.cj.jdbc.Driver
    properties:
      minimumIdle: 2
      maximumPoolSize: 5
      poolName: logging-db
      cachePrepStmts: true
      prepStmtCacheSize: 250
      prepStmtCacheSqlLimit: 2048
      useServerPrepStmts: true
      useLocalSessionState: true
      rewriteBatchedStatements: true
      cacheResultSetMetadata: true
      cacheServerConfiguration: true
      elideSetAutoCommits: true
      maintainTimeStats: false
environments:
  development:
    dataSource:
      dbCreate: none
      url: jdbc:mysql://localhost:3307/dev2?useUnicode=yes&characterEncoding=UTF-8
      username: root
      password: password
    grails:
#      mail:
#        host: "smtp.gmail.com"
#        port: 465
#        username: "justforstackoverflow123@gmail.com"
#        password: "1asdfqwef1"
#        props:
#          "mail.smtp.auth": "true"
#          "mail.smtp.socketFactory.port": "465"
#          "mail.smtp.socketFactory.class": "javax.net.ssl.SSLSocketFactory"
#          "mail.smtp.socketFactory.fallback": "false"
  test:
    dataSource:
#      dialect: org.hibernate.dialect.MySQL5InnoDBDialect
      dbCreate: none
      url: jdbc:mysql://localhost:3307/test?useUnicode=yes&characterEncoding=UTF-8
      username: root
      password: password
  production:
---
logging:
  level:
    root: INFO
    org.springframework: WARN
    grails.plugin.springsecurity.web.access.intercept.AnnotationFilterInvocationDefinition: ERROR
    grails.plugins.DefaultGrailsPluginManager: WARN
    org.hibernate: ERROR # TODO: we need to lower this, and fix the warnings this is talking about.
    rcroadraceweb4: DEBUG
    com.runnercard: DEBUG
    liquibase.ext.hibernate.snapshot.HibernateSnapshotGenerator: ERROR
---
#debug: true
#useJdbcSessionStore: true
---
environments:
  nateDeploy:
    behindLoadBalancer: true
    grails:
      insecureServerURL: 'https://nate-dev.nate-palmer.com/roadrace'
      serverURL: 'https://nate-dev.nate-palmer.com/roadrace'
    dataSource:
      url: 'jdbc:mysql://10.1.10.240:3306/rcroadwebDEV?serverTimezone=America/Denver'
After the fix it looks like this:
---
grails:
  profile: web
  codegen:
    defaultPackage: rcroadraceweb4
  gorm:
    reactor:
      # Whether to translate GORM events into Reactor events
      # Disabled by default for performance reasons
      events: false
info:
  app:
    name: '@info.app.name@'
    version: '@info.app.version@'
    grailsVersion: '@info.app.grailsVersion@'
spring:
  jmx:
    unique-names: true
  main:
    banner-mode: "off"
  groovy:
    template:
      check-template-location: false
  devtools:
    restart:
      additional-exclude:
        - '*.gsp'
        - '**/*.gsp'
        - '*.gson'
        - '**/*.gson'
        - 'logback.groovy'
        - '*.properties'
management:
  endpoints:
    enabled-by-default: false
server:
  servlet:
    context-path: '/roadrace'
---
hibernate:
  cache:
    queries: false
    use_second_level_cache: false
    use_query_cache: false
grails:
  plugin:
    databasemigration:
      updateOnStart: true
      updateOnStartFileName: changelog.groovy
  controllers:
    upload:
      maxFileSize: 2000000
      maxRequestSize: 2000000
  mail:
    host: "localhost"
    port: 25
    default:
      to: 'root@localhost'
      from: 'noreply@runnercard.com'
dataSource:
  type: com.zaxxer.hikari.HikariDataSource
  pooled: true
  driverClassName: com.mysql.cj.jdbc.Driver
  dialect: org.hibernate.dialect.MySQL8Dialect
  dbCreate: none
  properties:
    minimumIdle: 5
    maximumPoolSize: 10
    poolName: main-db
    cachePrepStmts: true
    prepStmtCacheSize: 250
    prepStmtCacheSqlLimit: 2048
    useServerPrepStmts: true
    useLocalSessionState: true
    rewriteBatchedStatements: true
    cacheResultSetMetadata: true
    cacheServerConfiguration: true
    elideSetAutoCommits: true
    maintainTimeStats: false
dataSources:
  logging:
    # This is not used unless `useJdbcAccessLogger` or `useJdbcLogger` is set to `true`
    # This does not need to be setup unless it is in use.
    type: com.zaxxer.hikari.HikariDataSource
    pooled: true
    driverClassName: com.mysql.cj.jdbc.Driver
    properties:
      minimumIdle: 2
      maximumPoolSize: 5
      poolName: logging-db
      cachePrepStmts: true
      prepStmtCacheSize: 250
      prepStmtCacheSqlLimit: 2048
      useServerPrepStmts: true
      useLocalSessionState: true
      rewriteBatchedStatements: true
      cacheResultSetMetadata: true
      cacheServerConfiguration: true
      elideSetAutoCommits: true
      maintainTimeStats: false
environments:
  development:
    dataSource:
      dbCreate: none
      url: jdbc:mysql://localhost:3307/dev2?useUnicode=yes&characterEncoding=UTF-8
      username: root
      password: password
#    grails:
#      mail:
#        host: "smtp.gmail.com"
#        port: 465
#        username: "justforstackoverflow123@gmail.com"
#        password: "1asdfqwef1"
#        props:
#          "mail.smtp.auth": "true"
#          "mail.smtp.socketFactory.port": "465"
#          "mail.smtp.socketFactory.class": "javax.net.ssl.SSLSocketFactory"
#          "mail.smtp.socketFactory.fallback": "false"
  test:
    dataSource:
#      dialect: org.hibernate.dialect.MySQL5InnoDBDialect
      dbCreate: none
      url: jdbc:mysql://localhost:3307/test?useUnicode=yes&characterEncoding=UTF-8
      username: root
      password: password
  production:
---
logging:
  level:
    root: INFO
    org.springframework: WARN
    grails.plugin.springsecurity.web.access.intercept.AnnotationFilterInvocationDefinition: ERROR
    grails.plugins.DefaultGrailsPluginManager: WARN
    org.hibernate: ERROR # TODO: we need to lower this, and fix the warnings this is talking about.
    rcroadraceweb4: DEBUG
    com.runnercard: DEBUG
    liquibase.ext.hibernate.snapshot.HibernateSnapshotGenerator: ERROR
---
#debug: true
#useJdbcSessionStore: true
---
environments:
  nateDeploy:
    behindLoadBalancer: true
    grails:
      insecureServerURL: 'https://nate-dev.nate-palmer.com/roadrace'
      serverURL: 'https://nate-dev.nate-palmer.com/roadrace'
    dataSource:
      url: 'jdbc:mysql://10.1.10.240:3306/rcroadwebDEV?serverTimezone=America/Denver'
It was this line in the environments > development block:
# grails:
It worked after commenting out the grails line.
But all the contents of the grails block were already commented out, so I am still confused why having grails uncommented would cause such a big issue. Anyway, solved after days of hard searching!
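A plausible explanation, for what it's worth (my reading, not from the thread): in YAML, a key with nothing under it is parsed as null, and Grails merges the environments block over the top-level configuration. So the pre-fix file effectively contained this:

environments:
  development:
    grails:    # all children commented out -> parsed as 'grails: null',
               # which can shadow the entire top-level 'grails' block in
               # the development environment, including the Spring Security
               # plugin settings behind /login/impersonate, hence the 404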

How to load ConfigMaps using kubernetes-config extension

I've been following https://quarkus.io/guides/kubernetes-config in order to create a ConfigMap and test my Quarkus service on my CDK v3.5.0-1 before pushing it to OpenShift 3.11, but KubernetesConfigSourceProvider is not happy.
Using:
Quarkus 1.8.1.Final
Java 11
CDK 3.5
Here is the YAML file I want to convert into a ConfigMap by doing: oc create configmap quarkus-service-configmap --from-file=application.yml
jaeger_endpoint: http://192.168.56.100:14268/api/traces
jaeger_sampler_manager_host_port: 192.168.56.100:14250
sql_debug: false
quarkus:
  datasource:
    db-kind: h2
    jdbc:
      detect-statement-leaks: true
      driver: io.opentracing.contrib.jdbc.TracingDriver
      enable-metrics: true
      url: jdbc:tracing:h2:./db;AUTO_SERVER=TRUE
      max-size: 13
    metrics:
      enabled: false
    password: sa
    username: sa
  flyway:
    locations: db/prod/migration
    migrate-at-start: true
  hibernate-orm:
    database:
      charset: UTF-8
      generation: none
    dialect: org.hibernate.dialect.H2Dialect
  http:
    port: 6280
  jaeger:
    enabled: true
    endpoint: ${jaeger_endpoint}
    sampler-manager-host-port: ${jaeger_sampler_manager_host_port}
    sampler-param: 1
    sampler-type: const
  resteasy:
    gzip:
      enabled: true
      max-input: 10M
  smallrye-health:
    ui:
      always-include: true
  swagger-ui:
    always-include: true
Here is the generated configMap:
apiVersion: v1
data:
  application.yml: |
    jaeger_endpoint: http://192.168.56.100:14268/api/traces
    jaeger_sampler_manager_host_port: 192.168.56.100:14250
    sql_debug: false
    quarkus:
      datasource:
        db-kind: h2
        jdbc:
          detect-statement-leaks: true
          driver: io.opentracing.contrib.jdbc.TracingDriver
          enable-metrics: true
          url: jdbc:tracing:h2:./db;AUTO_SERVER=TRUE
          max-size: 13
        metrics:
          enabled: false
        password: sa
        username: sa
      flyway:
        locations: db/prod/migration
        migrate-at-start: true
      hibernate-orm:
        database:
          charset: UTF-8
          generation: none
        dialect: org.hibernate.dialect.H2Dialect
      http:
        port: 6280
      jaeger:
        enabled: true
        endpoint: ${jaeger_endpoint}
        sampler-manager-host-port: ${jaeger_sampler_manager_host_port}
        sampler-param: 1
        sampler-type: const
      resteasy:
        gzip:
          enabled: true
          max-input: 10M
      smallrye-health:
        ui:
          always-include: true
      swagger-ui:
        always-include: true
kind: ConfigMap
metadata:
  creationTimestamp: '2020-09-21T17:56:40Z'
  name: quarkus-service-configmap
  namespace: dci
  resourceVersion: '9572968'
  selfLink: >-
    /api/v1/namespaces/dci/configmaps/quarkus-service-configmap
  uid: cd4570ff-fc33-11ea-bff0-080027af1c97
Here is my quarkus-service/src/main/resources/application.yml:
quarkus:
  application:
    name: quarkus-service
  kubernetes-config: # https://quarkus.io/guides/kubernetes-config
    enabled: true
    fail-on-missing-config: true
    config-maps: quarkus-service-configmap
#    secrets: quarkus-service-secrets
  jaeger:
    service-name: ${quarkus.application.name}
  http:
    port: 6280
  log:
    category:
      "io.quarkus.kubernetes.client":
        level: DEBUG
      "io.fabric8.kubernetes.client":
        level: DEBUG
    console:
      format: '%d{HH:mm:ss} %-5p traceId=%X{traceId}, spanId=%X{spanId}, sampled=%X{sampled} [%c{2.}] (%t) %s%e%n'
  native:
    additional-build-args: -H:ReflectionConfigurationFiles=reflection-config.json
'%minishift':
  quarkus:
    kubernetes: # https://quarkus.io/guides/deploying-to-openshift / https://quarkus.io/guides/kubernetes
      container-image:
        group: dci
        registry: image-registry.openshift-image-registry.svc:5000
      deploy: true
      expose: true
The command I run: mvn clean package -Dquarkus.profile=minishift
The result I get:
WARN: Unrecognized configuration key "%s" was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo
Sep 21, 2020 6:36:15 PM io.quarkus.config
WARN: Unrecognized configuration key "%s" was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo
Sep 21, 2020 6:36:15 PM org.hibernate.validator.internal.util.Version
INFO: HV000001: Hibernate Validator %s
Sep 21, 2020 6:36:27 PM io.quarkus.application
ERROR: Failed to start application (with profile minishift)
java.lang.RuntimeException: Unable to obtain configuration for ConfigMap objects from Kubernetes API Server at: https://172.30.0.1:443/
at io.quarkus.kubernetes.client.runtime.KubernetesConfigSourceProvider.getConfigMapConfigSources(KubernetesConfigSourceProvider.java:85)
at io.quarkus.kubernetes.client.runtime.KubernetesConfigSourceProvider.getConfigSources(KubernetesConfigSourceProvider.java:45)
at io.quarkus.runtime.configuration.ConfigUtils.addSourceProvider(ConfigUtils.java:107)
at io.quarkus.runtime.configuration.ConfigUtils.addSourceProviders(ConfigUtils.java:121)
at io.quarkus.runtime.generated.Config.readConfig(Config.zig:2060)
at io.quarkus.deployment.steps.RuntimeConfigSetup.deploy(RuntimeConfigSetup.zig:60)
at io.quarkus.runner.ApplicationImpl.doStart(ApplicationImpl.zig:509)
at io.quarkus.runtime.Application.start(Application.java:90)
at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:91)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:61)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:38)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:106)
at io.quarkus.runner.GeneratedMain.main(GeneratedMain.zig:29)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [ConfigMap] with name: [quarkus-service-configmap] in namespace: [dci] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:64)
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:72)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:244)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:187)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:79)
at io.quarkus.kubernetes.client.runtime.KubernetesConfigSourceProvider.getConfigMapConfigSources(KubernetesConfigSourceProvider.java:69)
... 12 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.base/java.net.SocketInputStream.socketRead0(Native Method)
at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:467)
at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:461)
at java.base/sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:160)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:110)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1403)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1309)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:411)
at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.java:336)
at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.java:300)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:185)
at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:224)
at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java:108)
at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.java:88)
at okhttp3.internal.connection.Transmitter.newExchange(Transmitter.java:169)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:41)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at io.fabric8.kubernetes.client.utils.BackwardsCompatibilityInterceptor.intercept(BackwardsCompatibilityInterceptor.java:134)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at io.fabric8.kubernetes.client.utils.ImpersonatorInterceptor.intercept(ImpersonatorInterceptor.java:68)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at io.fabric8.kubernetes.client.utils.HttpClientUtils.lambda$createHttpClient$3(HttpClientUtils.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:229)
at okhttp3.RealCall.execute(RealCall.java:81)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:490)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:451)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:416)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:397)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:890)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:233)
... 15 more
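One observation (an assumption on my part, not from the thread): 172.30.0.1:443 is the cluster-internal API server address, which is generally unreachable from outside the cluster, and quarkus.kubernetes-config fetches the ConfigMap from the API server during startup. When building or running outside the cluster, a profile that switches the extension off avoids blocking on that address, e.g.:

# hypothetical local profile: skip the ConfigMap lookup entirely
'%dev':
  quarkus:
    kubernetes-config:
      enabled: false
      fail-on-missing-config: false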

How to set up Kubernetes for Spring and MySQL

I followed this tutorial: https://medium.com/better-programming/kubernetes-a-detailed-example-of-deployment-of-a-stateful-application-de3de33c8632
I created a MySQL pod and a backend pod, but the application gets the error com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
pod mysql: Running
pod backend: CrashLoopBackOff
Dockerfile
FROM openjdk:14-ea-8-jdk-alpine3.10
ADD target/credit-0.0.1-SNAPSHOT.jar .
EXPOSE 8200
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-Dspring.profiles.active=container","-jar","/credit-0.0.1-SNAPSHOT.jar"]
credit-deployment.yml
# Define 'Service' to expose backend application deployment
apiVersion: v1
kind: Service
metadata:
  name: to-do-app-backend
spec:
  selector: # backend application pod labels should match these
    app: to-do-app
    tier: backend
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8080
  type: LoadBalancer # use NodePort, if you are not running Kubernetes on cloud
---
# Configure 'Deployment' of backend application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: to-do-app-backend
  labels:
    app: to-do-app
    tier: backend
spec:
  replicas: 2 # Number of replicas of back-end application to be deployed
  selector:
    matchLabels: # backend application pod labels should match these
      app: to-do-app
      tier: backend
  template:
    metadata:
      labels: # Must match 'Service' and 'Deployment' labels
        app: to-do-app
        tier: backend
    spec:
      containers:
        - name: to-do-app-backend
          image: gitim21/credit_repo:1.0 # docker image of backend application
          env: # Setting environment variables
            - name: DB_HOST # Setting database host address from configMap
              valueFrom:
                configMapKeyRef:
                  name: db-conf # name of configMap
                  key: host
            - name: DB_NAME # Setting database name from configMap
              valueFrom:
                configMapKeyRef:
                  name: db-conf
                  key: name
            - name: DB_USERNAME # Setting database username from Secret
              valueFrom:
                secretKeyRef:
                  name: db-credentials # Secret name
                  key: username
            - name: DB_PASSWORD # Setting database password from Secret
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
          ports:
            - containerPort: 8080
application.yml
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      idle-timeout: 10000
    platform: mysql
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
    url: jdbc:mysql://${DB_HOST}/${DB_NAME}
  jpa:
    hibernate:
      naming:
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
I placed the application.yml file in the application's "resources" folder.
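For what it's worth, once the deployment above injects the environment variables, the datasource URL would resolve to something like this (both values hypothetical, taken from whatever db-conf contains):

url: jdbc:mysql://mysql/credit_db   # ${DB_HOST}=mysql, ${DB_NAME}=credit_db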
EDIT
Name:               mysql-64c7df597c-s4gbt
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/192.168.8.160
Start Time:         Thu, 12 Sep 2019 17:50:18 +0200
Labels:             app=mysql
                    pod-template-hash=64c7df597c
                    tier=database
Annotations:        <none>
Status:             Running
IP:                 172.17.0.5
Controlled By:      ReplicaSet/mysql-64c7df597c
Containers:
  mysql:
    Container ID:  docker://514d3f5af76f5e7ac11f6bf6e36b44ee4012819dc1cef581829a6b5b2ce7c09e
    Image:         mysql:5.7
    Image ID:      docker-pullable://mysql@sha256:1a121f2e7590f949b9ede7809395f209dd9910e331e8372e6682ba4bebcc020b
    Port:          3306/TCP
    Host Port:     0/TCP
    Args:
      --ignore-db-dir=lost+found
    State:          Running
      Started:      Thu, 12 Sep 2019 17:50:19 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      MYSQL_ROOT_PASSWORD:  <set to the key 'password' in secret 'db-root-credentials'>  Optional: false
      MYSQL_USER:           <set to the key 'username' in secret 'db-credentials'>       Optional: false
      MYSQL_PASSWORD:       <set to the key 'password' in secret 'db-credentials'>       Optional: false
      MYSQL_DATABASE:       <set to the key 'name' of config map 'db-conf'>              Optional: false
    Mounts:
      /var/lib/mysql from mysql-persistent-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  mysql-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mysql-pv-claim
    ReadOnly:   false
  default-token-rgsmp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rgsmp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  49m   default-scheduler  Successfully assigned default/mysql-64c7df597c-s4gbt to minikube
  Normal  Pulled     49m   kubelet, minikube  Container image "mysql:5.7" already present on machine
  Normal  Created    49m   kubelet, minikube  Created container mysql
  Normal  Started    49m   kubelet, minikube  Started container mysql
Name:               to-do-app-backend-8669b5467-hrr9q
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               minikube/192.168.8.160
Start Time:         Thu, 12 Sep 2019 18:27:45 +0200
Labels:             app=to-do-app
                    pod-template-hash=8669b5467
                    tier=backend
Annotations:        <none>
Status:             Running
IP:                 172.17.0.7
Controlled By:      ReplicaSet/to-do-app-backend-8669b5467
Containers:
  to-do-app-backend:
    Container ID:  docker://1eb8453939710aed7a93cddbd5046f49be3382858aa17d5943195207eaeb3065
    Image:         gitim21/credit_repo:1.0
    Image ID:      docker-pullable://gitim21/credit_repo@sha256:1fb2991394fc59f37068164c72263749d64cb5c9fe741021f476a65589f40876
    Port:          8080/TCP
    Host Port:     0/TCP
    State:         Waiting
      Reason:      CrashLoopBackOff
    Last State:    Terminated
      Reason:      Error
      Exit Code:   1
      Started:     Thu, 12 Sep 2019 18:51:25 +0200
      Finished:    Thu, 12 Sep 2019 18:51:36 +0200
    Ready:          False
    Restart Count:  9
    Environment:
      DB_HOST:      <set to the key 'host' of config map 'db-conf'>         Optional: false
      DB_NAME:      <set to the key 'name' of config map 'db-conf'>         Optional: false
      DB_USERNAME:  <set to the key 'username' in secret 'db-credentials'>  Optional: false
      DB_PASSWORD:  <set to the key 'password' in secret 'db-credentials'>  Optional: false
      DB_PORT:      <set to the key 'port' in secret 'db-credentials'>      Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-rgsmp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rgsmp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  25m                  default-scheduler  Successfully assigned default/to-do-app-backend-8669b5467-hrr9q to minikube
  Normal   Pulled     23m (x5 over 25m)    kubelet, minikube  Container image "gitim21/credit_repo:1.0" already present on machine
  Normal   Created    23m (x5 over 25m)    kubelet, minikube  Created container to-do-app-backend
  Normal   Started    23m (x5 over 25m)    kubelet, minikube  Started container to-do-app-backend
  Warning  BackOff    50s (x104 over 25m)  kubelet, minikube  Back-off restarting failed container
First and foremost, make sure that you fulfill all the requirements described in the article.
When the deployment objects (e.g. pods, services) are created, environment variables are injected from the ConfigMaps and Secrets that were created earlier. This deployment uses the image kubernetesdemo/to-do-app-backend, which is created in step one. Make sure you have created the ConfigMap and Secret beforehand (a sketch of what they might look like follows below); otherwise, delete the objects created during the deployment, create the ConfigMap and Secret, and then run the deployment config file once again.
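For illustration, manifests for the ConfigMap and Secret this deployment expects might look like the following; the names and keys are taken from the deployment above, while every value is hypothetical:

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-conf               # referenced by configMapKeyRef in the deployment
data:
  host: mysql                 # must resolve to the MySQL service (hypothetical)
  name: todo_db               # database name (hypothetical)
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # referenced by secretKeyRef in the deployment
type: Opaque
stringData:                   # stringData lets you skip manual base64 encoding
  username: app_user          # hypothetical
  password: app_password      # hypothetical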
Another possibility: if you get the
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
error, it means that the DB isn't reachable at all. This can have one or more of the following causes:
1. The IP address or hostname in the JDBC URL is wrong.
2. The hostname in the JDBC URL is not recognized by the local DNS server.
3. The port number is missing or wrong in the JDBC URL.
4. The DB server is down.
5. The DB server doesn't accept TCP/IP connections.
6. The DB server has run out of connections.
7. Something in between Java and the DB is blocking connections, e.g. a firewall or proxy.
I assume that if your MySQL pod is running, your DB server is running, so point 4 (DB server is down) can be ruled out.
To solve the one or the other, follow this advice (matching the causes above):
Verify the IP address and hostname and test them with ping. Refresh DNS or use the IP address in the JDBC URL instead.
Check whether the port matches the one in the my.cnf of the MySQL DB.
Start the DB once again, and check whether mysqld was started without the --skip-networking option.
Restart the DB, and fix your code so that it closes connections in finally.
Disable the firewall and/or configure the firewall/proxy to allow/forward the port.
You can find a similar error discussed here: communication-error.
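If the hostname is the culprit (the first two causes), note that the backend reaches MySQL through ${DB_HOST}, so that name must resolve inside the cluster. A minimal Service giving the MySQL pod a stable DNS name might look like this sketch; the selector labels are the ones visible in the describe output above, and the Service name is whatever db-conf's host key points at:

apiVersion: v1
kind: Service
metadata:
  name: mysql            # DB_HOST in db-conf would then simply be 'mysql'
spec:
  selector:
    app: mysql           # matches the labels on the running MySQL pod
    tier: database
  ports:
    - protocol: TCP
      port: 3306         # MySQL's standard port, as exposed by the pod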

Spring Boot app scale-up with MySQL

I have created a Spring Boot app and a database using MySQL, then Dockerised and deployed it. Below is my docker-compose.yml:
version: '2'
services:
  seat_reservation_service:
    image: springio/seat_reservation_service
    ports:
      - "8090:8090"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
  seat_reservation_sql:
    image: mysql:5.7
    ports:
      - 33306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=seat-reservation-query
This is my Spring application.yml file:
server:
  port: 8090
spring:
  profiles: docker
  main:
    banner-mode: 'off'
  datasource:
    url: jdbc:mysql://seat_reservation_sql:3306/seat-reservation-query?useSSL=false
    username: root
    password: root
    validation-query: SELECT 1
    test-on-borrow: true
  jpa:
    show_sql: false
    hibernate:
      ddl-auto: update
      dialect: org.hibernate.dialect.MySQL5
    properties:
      hibernate:
        cache:
          use_second_level_cache: false
          use_query_cache: false
        generate_statistics: false
  data:
    rest:
      base-path: /api/
  rabbitmq:
    host: rabbitmq-1
    username: test
    password: password
logging:
  level:
    org.springframework: false
    org.hibernate: ERROR
  path: logs/prod/
axon:
  amqp:
    exchange: SeatReserveEvents
  eventhandling:
    processors:
      statistics.source: statisticsQueue
My problem is that I need more replicas of the seat_reservation_service service, but if I scale up seat_reservation_service, all replicas refer to the same database. According to microservice architecture, I need a separate database for each replica. How can I do that?
If I use an in-memory database it can do this.
According to micro-service architecture I need separate database for each replica. How can I do that?
This "rule" refers to the microservice types, not to the instances of the same microservice. So, you can scale separately the seat_reservation_service and seat_reservation_sql. For example, you could have 4 instances of seat_reservation_service and 3 instances of seat_reservation_sql (1 master and 2 slaves or a Galera cluster).

App deployed on Kubernetes cannot be accessed from the Internet

I am new to Kubernetes & Docker. I created a simple Node.js application and deployed it on Bluemix Kubernetes, but I am unable to access the application from the Internet. The IP & port listed in Kubernetes are not accessible. Can somebody help me?
I tried http://10.76.193.146:31972, but it did not go through. I am not sure if this is the public IP, since it is in the 10.x (private) range.
I also tried the public IP (http://184.173.1.79:31972) shown for the Bluemix Kubernetes cluster - screenshot below. But that too failed.
These are the steps I followed.
Created a Node.js app locally. It ran as desired locally:
// Load the http module to create an http server.
var http = require('http');

// Configure our HTTP server to respond with Hello World to all requests.
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World\n");
});

// Listen on port 8000, IP defaults to 127.0.0.1
server.listen(8000);

// Put a friendly message on the terminal
console.log("Server running at http://127.0.0.1:8000/");
---------- package.json
{
  "name": "helloworld-nodejs",
  "version": "0.0.1",
  "description": "First Docker",
  "main": "app.js",
  "scripts": {
    "start": "PORT=8000 node ./app.js"
  },
  "author": "",
  "license": "ISC"
}
Created a Docker container locally and ran it. It worked properly.
Uploaded the Docker image to the Bluemix registry as
registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1
Created the pod and service in Kubernetes using the following YAML files:
---------- Pod YAML file
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-nodejs
  labels:
    name: helloworld-nodejs
spec:
  containers:
    - name: helloworld-nodejs
      image: registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1
      ports:
        - containerPort: 8000
---------- Services YAML
apiVersion: v1
kind: Service
metadata:
  name: helloworld-nodejs
  labels:
    name: helloworld-nodejs
spec:
  type: NodePort
  selector:
    name: helloworld-nodejs
  ports:
    - port: 8080
The application gets deployed properly and is running, which I can confirm from the logs.
Result of the kubectl get services & kubectl get nodes commands:
Since your service's port is different from your pod's containerPort, you will have to specify targetPort in your service.
spec:
  type: NodePort
  selector:
    name: helloworld-nodejs
  ports:
    - port: 8080
      targetPort: 8000
According to the Kubernetes documentation on targetPort, it is the:
Number or name of the port to access on the pods targeted by the service. ... If this is not specified, the value of the 'port' field is used (an identity map).
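Putting the answer together, the full Service might look like the sketch below. Pinning nodePort is optional (31972 appears here only because it is the port from the question; Kubernetes would otherwise pick one from the NodePort range):

apiVersion: v1
kind: Service
metadata:
  name: helloworld-nodejs
spec:
  type: NodePort
  selector:
    name: helloworld-nodejs
  ports:
    - port: 8080          # the Service's own port inside the cluster
      targetPort: 8000    # the pod's containerPort
      nodePort: 31972     # optional: pin the externally exposed node port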