How to load ConfigMaps using kubernetes-config extension - openshift

I've been following https://quarkus.io/guides/kubernetes-config in order to create a configMap and test my Quarkus service on my CDK v3.5.0-1 before pushing it to OpenShift 3.11, but KubernetesConfigSourceProvider is not happy:
Using:
Quarkus 1.8.1.Final
Java 11
CDK 3.5
Here is the YAML file I want to convert into a configMap by running: oc create configmap quarkus-service-configmap --from-file=application.yml
jaeger_endpoint: http://192.168.56.100:14268/api/traces
jaeger_sampler_manager_host_port: 192.168.56.100:14250
sql_debug: false
quarkus:
  datasource:
    db-kind: h2
    jdbc:
      detect-statement-leaks: true
      driver: io.opentracing.contrib.jdbc.TracingDriver
      enable-metrics: true
      url: jdbc:tracing:h2:./db;AUTO_SERVER=TRUE
      max-size: 13
    metrics:
      enabled: false
    password: sa
    username: sa
  flyway:
    locations: db/prod/migration
    migrate-at-start: true
  hibernate-orm:
    database:
      charset: UTF-8
      generation: none
    dialect: org.hibernate.dialect.H2Dialect
  http:
    port: 6280
  jaeger:
    enabled: true
    endpoint: ${jaeger_endpoint}
    sampler-manager-host-port: ${jaeger_sampler_manager_host_port}
    sampler-param: 1
    sampler-type: const
  resteasy:
    gzip:
      enabled: true
      max-input: 10M
  smallrye-health:
    ui:
      always-include: true
  swagger-ui:
    always-include: true
Here is the generated configMap:
apiVersion: v1
data:
  application.yml: |
    jaeger_endpoint: http://192.168.56.100:14268/api/traces
    jaeger_sampler_manager_host_port: 192.168.56.100:14250
    sql_debug: false
    quarkus:
      datasource:
        db-kind: h2
        jdbc:
          detect-statement-leaks: true
          driver: io.opentracing.contrib.jdbc.TracingDriver
          enable-metrics: true
          url: jdbc:tracing:h2:./db;AUTO_SERVER=TRUE
          max-size: 13
        metrics:
          enabled: false
        password: sa
        username: sa
      flyway:
        locations: db/prod/migration
        migrate-at-start: true
      hibernate-orm:
        database:
          charset: UTF-8
          generation: none
        dialect: org.hibernate.dialect.H2Dialect
      http:
        port: 6280
      jaeger:
        enabled: true
        endpoint: ${jaeger_endpoint}
        sampler-manager-host-port: ${jaeger_sampler_manager_host_port}
        sampler-param: 1
        sampler-type: const
      resteasy:
        gzip:
          enabled: true
          max-input: 10M
      smallrye-health:
        ui:
          always-include: true
      swagger-ui:
        always-include: true
kind: ConfigMap
metadata:
  creationTimestamp: '2020-09-21T17:56:40Z'
  name: quarkus-service-configmap
  namespace: dci
  resourceVersion: '9572968'
  selfLink: >-
    /api/v1/namespaces/dci/configmaps/quarkus-service-configmap
  uid: cd4570ff-fc33-11ea-bff0-080027af1c97
Here is my quarkus-service/src/main/resources/application.yml:
quarkus:
  application:
    name: quarkus-service
  kubernetes-config: # https://quarkus.io/guides/kubernetes-config
    enabled: true
    fail-on-missing-config: true
    config-maps: quarkus-service-configmap
    # secrets: quarkus-service-secrets
  jaeger:
    service-name: ${quarkus.application.name}
  http:
    port: 6280
  log:
    category:
      "io.quarkus.kubernetes.client":
        level: DEBUG
      "io.fabric8.kubernetes.client":
        level: DEBUG
    console:
      format: '%d{HH:mm:ss} %-5p traceId=%X{traceId}, spanId=%X{spanId}, sampled=%X{sampled} [%c{2.}] (%t) %s%e%n'
  native:
    additional-build-args: -H:ReflectionConfigurationFiles=reflection-config.json
'%minishift':
  quarkus:
    kubernetes: # https://quarkus.io/guides/deploying-to-openshift / https://quarkus.io/guides/kubernetes
      container-image:
        group: dci
        registry: image-registry.openshift-image-registry.svc:5000
      deploy: true
      expose: true
The command I run: mvn clean package -Dquarkus.profile=minishift
The result I get:
WARN: Unrecognized configuration key "%s" was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo
Sep 21, 2020 6:36:15 PM io.quarkus.config
WARN: Unrecognized configuration key "%s" was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo
Sep 21, 2020 6:36:15 PM org.hibernate.validator.internal.util.Version
INFO: HV000001: Hibernate Validator %s
Sep 21, 2020 6:36:27 PM io.quarkus.application
ERROR: Failed to start application (with profile minishift)
java.lang.RuntimeException: Unable to obtain configuration for ConfigMap objects from Kubernetes API Server at: https://172.30.0.1:443/
at io.quarkus.kubernetes.client.runtime.KubernetesConfigSourceProvider.getConfigMapConfigSources(KubernetesConfigSourceProvider.java:85)
at io.quarkus.kubernetes.client.runtime.KubernetesConfigSourceProvider.getConfigSources(KubernetesConfigSourceProvider.java:45)
at io.quarkus.runtime.configuration.ConfigUtils.addSourceProvider(ConfigUtils.java:107)
at io.quarkus.runtime.configuration.ConfigUtils.addSourceProviders(ConfigUtils.java:121)
at io.quarkus.runtime.generated.Config.readConfig(Config.zig:2060)
at io.quarkus.deployment.steps.RuntimeConfigSetup.deploy(RuntimeConfigSetup.zig:60)
at io.quarkus.runner.ApplicationImpl.doStart(ApplicationImpl.zig:509)
at io.quarkus.runtime.Application.start(Application.java:90)
at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:91)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:61)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:38)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:106)
at io.quarkus.runner.GeneratedMain.main(GeneratedMain.zig:29)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [ConfigMap] with name: [quarkus-service-configmap] in namespace: [dci] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:64)
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:72)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:244)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:187)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:79)
at io.quarkus.kubernetes.client.runtime.KubernetesConfigSourceProvider.getConfigMapConfigSources(KubernetesConfigSourceProvider.java:69)
... 12 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.base/java.net.SocketInputStream.socketRead0(Native Method)
at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168)
at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:467)
at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:461)
at java.base/sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:160)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:110)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1403)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1309)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:411)
at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.java:336)
at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.java:300)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:185)
at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:224)
at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java:108)
at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.java:88)
at okhttp3.internal.connection.Transmitter.newExchange(Transmitter.java:169)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:41)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at io.fabric8.kubernetes.client.utils.BackwardsCompatibilityInterceptor.intercept(BackwardsCompatibilityInterceptor.java:134)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at io.fabric8.kubernetes.client.utils.ImpersonatorInterceptor.intercept(ImpersonatorInterceptor.java:68)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at io.fabric8.kubernetes.client.utils.HttpClientUtils.lambda$createHttpClient$3(HttpClientUtils.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:229)
at okhttp3.RealCall.execute(RealCall.java:81)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:490)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:451)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:416)
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:397)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:890)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:233)
... 15 more
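Worth noting before digging in: the failure is a read timeout against https://172.30.0.1:443/, the cluster-internal API address, rather than a 403, so this looks like reachability more than permissions. Still, two hedged sanity checks are cheap, assuming the application runs as the default service account in the dci project:
# Can the service account read ConfigMaps at all?
oc auth can-i get configmaps --as=system:serviceaccount:dci:default -n dci
# If the answer is "no", the view role is enough for the kubernetes-config extension:
oc policy add-role-to-user view -z default -n dci
Keep in mind that 172.30.0.1 is only routable from inside the cluster, so if the application starts during the local mvn build (tests or dev mode) rather than in a pod, the ConfigMap lookup would time out exactly like this.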

Related

switch user not working when connected to mysql 8 database?

I am trying to get the switch-user feature of Spring Security to work. I am using Grails 4.0.10 and MySQL 8.
I created a sample hello-world Grails app and followed the switch-user guide from the documentation: https://grails.github.io/grails-spring-security-core/4.0.x/index.html#switchUser
If I use the default H2 database it works, but when I switch to the MySQL 8 database it throws a 404 page-not-found error and does not switch.
I have published the code on GitHub: https://github.com/sanjaygir/switching
I have created a simple page in the secure controller. The page is index.gsp, which has a form to switch to another user. The logged-in user should be displayed at the top of this page. In the Bootstrap file I have created two users: one admin and one regular user.
I have a local database with this configuration:
dataSource:
  dbCreate: create
  url: jdbc:mysql://localhost:3307/switch?useUnicode=yes&characterEncoding=UTF-8
  username: root
  password: password
In order to run this app you need a MySQL 8 DB running; please change the MySQL DB name, username, and password in the above section of application.yml.
After the app starts, go directly to http://localhost:8080/secure/index, enter "user" in the textbox, and click the Switch button. It throws a page-not-found error, and if you go back to http://localhost:8080/secure/index you cannot see the logged-in user name at the top. That means the switch was not successful.
Here is the simple code for secure/index.gsp:
<%@ page contentType="text/html;charset=UTF-8" %>
<html>
<head>
<title></title>
</head>
<body>
<sec:ifLoggedIn>
Logged in as <sec:username/>
</sec:ifLoggedIn>
<form action='${request.contextPath}/login/impersonate' method='POST'>
Switch to user: <input type='text' name='username'/> <br/>
<input type='submit' value='Switch'/>
</form>
</body>
</html>
I hope I have made it clear. This is a simple hello-world app created to see the switch-user feature in action. I am puzzled why switch-user works with the default H2 DB but not when connected to MySQL 8. If anyone has any idea, I appreciate your help. Thanks.
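One setting worth double-checking against the linked docs while debugging this: the switch-user filter is off by default and has to be enabled explicitly (typically in grails-app/conf/application.groovy), otherwise /login/impersonate is simply not mapped:
grails.plugin.springsecurity.useSwitchUserFilter = true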
UPDATE:
Today I switched the database to MySQL version 5 and it works.
I changed the following configuration in application.yml:
hibernate:
  cache:
    queries: false
    use_second_level_cache: false
    use_query_cache: false
dataSource:
  pooled: true
  jmxExport: true
  driverClassName: com.mysql.jdbc.Driver
  dialect: org.hibernate.dialect.MySQL5InnoDBDialect
  username: root
  password: 'password'
environments:
  development:
    dataSource:
      dbCreate: create-drop
      url: jdbc:mysql://localhost:3306/switch?useUnicode=yes&characterEncoding=UTF-8
In build.gradle I used:
runtime 'mysql:mysql-connector-java:5.1.19'
Still, I am not sure why it doesn't work with MySQL 8.
I finally found the bug. I cannot believe what caused the 404 not-found issue: it was a single line in the configuration file.
Before the fix, application.yml looked like this:
---
grails:
  profile: web
  codegen:
    defaultPackage: rcroadraceweb4
  gorm:
    reactor:
      # Whether to translate GORM events into Reactor events
      # Disabled by default for performance reasons
      events: false
info:
  app:
    name: '@info.app.name@'
    version: '@info.app.version@'
    grailsVersion: '@info.app.grailsVersion@'
spring:
  jmx:
    unique-names: true
  main:
    banner-mode: "off"
  groovy:
    template:
      check-template-location: false
  devtools:
    restart:
      additional-exclude:
        - '*.gsp'
        - '**/*.gsp'
        - '*.gson'
        - '**/*.gson'
        - 'logback.groovy'
        - '*.properties'
management:
  endpoints:
    enabled-by-default: false
server:
  servlet:
    context-path: '/roadrace'
---
hibernate:
  cache:
    queries: false
    use_second_level_cache: false
    use_query_cache: false
grails:
  plugin:
    databasemigration:
      updateOnStart: true
      updateOnStartFileName: changelog.groovy
  controllers:
    upload:
      maxFileSize: 2000000
      maxRequestSize: 2000000
  mail:
    host: "localhost"
    port: 25
    default:
      to: 'root@localhost'
      from: 'noreply@runnercard.com'
dataSource:
  type: com.zaxxer.hikari.HikariDataSource
  pooled: true
  driverClassName: com.mysql.cj.jdbc.Driver
  dialect: org.hibernate.dialect.MySQL8Dialect
  dbCreate: none
  properties:
    minimumIdle: 5
    maximumPoolSize: 10
    poolName: main-db
    cachePrepStmts: true
    prepStmtCacheSize: 250
    prepStmtCacheSqlLimit: 2048
    useServerPrepStmts: true
    useLocalSessionState: true
    rewriteBatchedStatements: true
    cacheResultSetMetadata: true
    cacheServerConfiguration: true
    elideSetAutoCommits: true
    maintainTimeStats: false
dataSources:
  logging:
    # This is not used unless `useJdbcAccessLogger` or `useJdbcLogger` is set to `true`
    # This does not need to be setup unless it is in use.
    type: com.zaxxer.hikari.HikariDataSource
    pooled: true
    driverClassName: com.mysql.cj.jdbc.Driver
    properties:
      minimumIdle: 2
      maximumPoolSize: 5
      poolName: logging-db
      cachePrepStmts: true
      prepStmtCacheSize: 250
      prepStmtCacheSqlLimit: 2048
      useServerPrepStmts: true
      useLocalSessionState: true
      rewriteBatchedStatements: true
      cacheResultSetMetadata: true
      cacheServerConfiguration: true
      elideSetAutoCommits: true
      maintainTimeStats: false
environments:
  development:
    dataSource:
      dbCreate: none
      url: jdbc:mysql://localhost:3307/dev2?useUnicode=yes&characterEncoding=UTF-8
      username: root
      password: password
    grails:
#     mail:
#       host: "smtp.gmail.com"
#       port: 465
#       username: "justforstackoverflow123@gmail.com"
#       password: "1asdfqwef1"
#       props:
#         "mail.smtp.auth": "true"
#         "mail.smtp.socketFactory.port": "465"
#         "mail.smtp.socketFactory.class": "javax.net.ssl.SSLSocketFactory"
#         "mail.smtp.socketFactory.fallback": "false"
  test:
    dataSource:
      # dialect: org.hibernate.dialect.MySQL5InnoDBDialect
      dbCreate: none
      url: jdbc:mysql://localhost:3307/test?useUnicode=yes&characterEncoding=UTF-8
      username: root
      password: password
  production:
---
logging:
  level:
    root: INFO
    org.springframework: WARN
    grails.plugin.springsecurity.web.access.intercept.AnnotationFilterInvocationDefinition: ERROR
    grails.plugins.DefaultGrailsPluginManager: WARN
    org.hibernate: ERROR # TODO: we need to lower this, and fix the warnings this is talking about.
    rcroadraceweb4: DEBUG
    com.runnercard: DEBUG
    liquibase.ext.hibernate.snapshot.HibernateSnapshotGenerator: ERROR
---
#debug: true
#useJdbcSessionStore: true
---
environments:
  nateDeploy:
    behindLoadBalancer: true
    grails:
      insecureServerURL: 'https://nate-dev.nate-palmer.com/roadrace'
      serverURL: 'https://nate-dev.nate-palmer.com/roadrace'
    dataSource:
      url: 'jdbc:mysql://10.1.10.240:3306/rcroadwebDEV?serverTimezone=America/Denver'
After the fix it looks like this:
---
grails:
  profile: web
  codegen:
    defaultPackage: rcroadraceweb4
  gorm:
    reactor:
      # Whether to translate GORM events into Reactor events
      # Disabled by default for performance reasons
      events: false
info:
  app:
    name: '@info.app.name@'
    version: '@info.app.version@'
    grailsVersion: '@info.app.grailsVersion@'
spring:
  jmx:
    unique-names: true
  main:
    banner-mode: "off"
  groovy:
    template:
      check-template-location: false
  devtools:
    restart:
      additional-exclude:
        - '*.gsp'
        - '**/*.gsp'
        - '*.gson'
        - '**/*.gson'
        - 'logback.groovy'
        - '*.properties'
management:
  endpoints:
    enabled-by-default: false
server:
  servlet:
    context-path: '/roadrace'
---
hibernate:
  cache:
    queries: false
    use_second_level_cache: false
    use_query_cache: false
grails:
  plugin:
    databasemigration:
      updateOnStart: true
      updateOnStartFileName: changelog.groovy
  controllers:
    upload:
      maxFileSize: 2000000
      maxRequestSize: 2000000
  mail:
    host: "localhost"
    port: 25
    default:
      to: 'root@localhost'
      from: 'noreply@runnercard.com'
dataSource:
  type: com.zaxxer.hikari.HikariDataSource
  pooled: true
  driverClassName: com.mysql.cj.jdbc.Driver
  dialect: org.hibernate.dialect.MySQL8Dialect
  dbCreate: none
  properties:
    minimumIdle: 5
    maximumPoolSize: 10
    poolName: main-db
    cachePrepStmts: true
    prepStmtCacheSize: 250
    prepStmtCacheSqlLimit: 2048
    useServerPrepStmts: true
    useLocalSessionState: true
    rewriteBatchedStatements: true
    cacheResultSetMetadata: true
    cacheServerConfiguration: true
    elideSetAutoCommits: true
    maintainTimeStats: false
dataSources:
  logging:
    # This is not used unless `useJdbcAccessLogger` or `useJdbcLogger` is set to `true`
    # This does not need to be setup unless it is in use.
    type: com.zaxxer.hikari.HikariDataSource
    pooled: true
    driverClassName: com.mysql.cj.jdbc.Driver
    properties:
      minimumIdle: 2
      maximumPoolSize: 5
      poolName: logging-db
      cachePrepStmts: true
      prepStmtCacheSize: 250
      prepStmtCacheSqlLimit: 2048
      useServerPrepStmts: true
      useLocalSessionState: true
      rewriteBatchedStatements: true
      cacheResultSetMetadata: true
      cacheServerConfiguration: true
      elideSetAutoCommits: true
      maintainTimeStats: false
environments:
  development:
    dataSource:
      dbCreate: none
      url: jdbc:mysql://localhost:3307/dev2?useUnicode=yes&characterEncoding=UTF-8
      username: root
      password: password
#   grails:
#     mail:
#       host: "smtp.gmail.com"
#       port: 465
#       username: "justforstackoverflow123@gmail.com"
#       password: "1asdfqwef1"
#       props:
#         "mail.smtp.auth": "true"
#         "mail.smtp.socketFactory.port": "465"
#         "mail.smtp.socketFactory.class": "javax.net.ssl.SSLSocketFactory"
#         "mail.smtp.socketFactory.fallback": "false"
  test:
    dataSource:
      # dialect: org.hibernate.dialect.MySQL5InnoDBDialect
      dbCreate: none
      url: jdbc:mysql://localhost:3307/test?useUnicode=yes&characterEncoding=UTF-8
      username: root
      password: password
  production:
---
logging:
  level:
    root: INFO
    org.springframework: WARN
    grails.plugin.springsecurity.web.access.intercept.AnnotationFilterInvocationDefinition: ERROR
    grails.plugins.DefaultGrailsPluginManager: WARN
    org.hibernate: ERROR # TODO: we need to lower this, and fix the warnings this is talking about.
    rcroadraceweb4: DEBUG
    com.runnercard: DEBUG
    liquibase.ext.hibernate.snapshot.HibernateSnapshotGenerator: ERROR
---
#debug: true
#useJdbcSessionStore: true
---
environments:
  nateDeploy:
    behindLoadBalancer: true
    grails:
      insecureServerURL: 'https://nate-dev.nate-palmer.com/roadrace'
      serverURL: 'https://nate-dev.nate-palmer.com/roadrace'
    dataSource:
      url: 'jdbc:mysql://10.1.10.240:3306/rcroadwebDEV?serverTimezone=America/Denver'
It was this line in the environments > development block:
grails:
It worked after commenting out that grails line.
But all the contents of the grails block were already commented out, so I am still confused why having grails uncommented would cause such a big issue. Anyway, solved after days of hard searching!
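For anyone hitting the same thing, a plausible reading of why that one line matters: a mapping key whose children are all commented out still parses, just with a null value. A minimal sketch of what the parser presumably saw in the development block (standard YAML semantics, so this is an assumption about the mechanism):
environments:
  development:
    dataSource:
      dbCreate: none
      url: jdbc:mysql://localhost:3307/dev2?useUnicode=yes&characterEncoding=UTF-8
    grails:          # every child is commented out, so this is read as `grails: null`
#     mail:
#       host: "smtp.gmail.com"
When the environment-specific config is merged over the global config, that null grails entry can shadow the entire top-level grails block, plugin settings included, which would explain both the 404 and why commenting out the bare key fixes it.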

Spring in Kubernetes tries to reach DB at pod IP

I'm facing an issue while deploying a Spring API which should connect to a MySQL database.
I am deploying a standalone MySQL using the [bitnami helm chart][1] with the following values:
primary:
service:
type: ClusterIP
persistence:
enabled: true
size: 3Gi
storageClass: ""
extraVolumes:
- name: mysql-passwords
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: mysql-spc
extraVolumeMounts:
- name: mysql-passwords
mountPath: "/vault/secrets"
readOnly: true
configuration: |-
[mysqld]
default_authentication_plugin=mysql_native_password
skip-name-resolve
explicit_defaults_for_timestamp
basedir=/opt/bitnami/mysql
plugin_dir=/opt/bitnami/mysql/lib/plugin
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
datadir=/bitnami/mysql/data
tmpdir=/opt/bitnami/mysql/tmp
max_allowed_packet=16M
bind-address=0.0.0.0
pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
log-error=/opt/bitnami/mysql/logs/mysqld.log
character-set-server=UTF8
collation-server=utf8_general_ci
slow_query_log=0
slow_query_log_file=/opt/bitnami/mysql/logs/mysqld.log
long_query_time=10.0
[client]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
default-character-set=UTF8
plugin_dir=/opt/bitnami/mysql/lib/plugin
[manager]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
auth:
createDatabase: true
database: api-db
username: api
usePasswordFiles: true
customPasswordFiles:
root: /vault/secrets/db-root-pwd
user: /vault/secrets/db-pwd
replicator: /vault/secrets/db-replica-pwd
serviceAccount:
create: false
name: social-app
I use the following deployment which runs a spring API (with Vault secret injection):
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: social-api
name: social-api
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
selector:
matchLabels:
app: social-api
template:
metadata:
labels:
app: social-api
annotations:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/role: 'social'
spec:
serviceAccountName: social-app
containers:
- image: quay.io/paulbarrie7/social-network-api
name: social-network-api
command:
- java
args:
- -jar
- "-DSPRING_DATASOURCE_URL=jdbc:mysql://social-mysql.default.svc.cluster.local/api-db?useSSL=false"
- "-DSPRING_DATASOURCE_USERNAME=api"
- "-DSPRING_DATASOURCE_PASSWORD=$(cat /secrets/db-pwd)"
- "-DJWT_SECRET=$(cat /secrets/jwt-secret)"
- "-DS3_BUCKET=$(cat /secrets/s3-bucket)"
- -Dlogging.level.root=DEBUG
- -Dspring.datasource.hikari.maximum-pool-size=5
- -Dlogging.level.com.zaxxer.hikari.HikariConfig=DEBUG
- -Dlogging.level.com.zaxxer.hikari=TRACE
- social-network-api-1.0-SNAPSHOT.jar
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 8080
volumeMounts:
- name: aws-credentials
mountPath: "/root/.aws"
readOnly: true
- name: java-secrets
mountPath: "/secrets"
readOnly: true
volumes:
- name: aws-credentials
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: aws-secret-spc
- name: java-secrets
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: java-spc
The credentials are OK: when I run an interactive mysql pod I can connect to the database. However, name resolution for the Spring API seems wrong, since I get the error:
java.sql.SQLException: Access denied for user 'api'@'10.24.0.194' (using password: YES)
which puzzles me, since 10.24.0.194 is the API pod's address and not the mysql pod or service address, and I can't work out why.
Any idea?
[1]: https://artifacthub.io/packages/helm/bitnami/mysql
Thanks to David's suggestion I succeeded in solving my problem.
There were actually two issues in my configs.
First, the secrets were indeed misinterpreted, so I changed my command/args to:
command:
  - "/bin/sh"
  - "-c"
args:
  - |
    DB_USER=$(cat /secrets/db-user)
    DB_PWD=$(cat /secrets/db-pwd)
    JWT=$(cat /secrets/jwt-secret)
    BUCKET=$(cat /secrets/s3-bucket)
    java -jar \
      -DSPRING_DATASOURCE_URL=jdbc:mysql://social-mysql.default.svc.cluster.local/api-db?useSSL=false \
      "-DSPRING_DATASOURCE_USERNAME=$DB_USER" \
      "-DSPRING_DATASOURCE_PASSWORD=$DB_PWD" \
      "-DJWT_SECRET=$JWT" \
      "-DS3_BUCKET=$BUCKET" \
      -Dlogging.level.root=DEBUG \
      social-network-api-1.0-SNAPSHOT.jar
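The shell wrapper is the important part of the fix: Kubernetes does not run command/args through a shell, and the only substitution it performs there is $(VAR) expansion for environment variables, so in the original manifest the literal text $(cat /secrets/db-pwd) was handed to the JVM as the password (which also explains the Access denied error). Running the container through /bin/sh -c restores normal command substitution.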
And the memory resources were also set too low, so I changed them to:
resources:
  limits:
    cpu: 100m
    memory: 400Mi
  requests:
    cpu: 100m
    memory: 400Mi

K3d gives "Error response from daemon: invalid reference format" error

I'm trying to run k3d with a previous version of k8s (v1.20.2, which matches the current version of k8s on OVH). I understand that the correct way of doing this is to specify the image of k3s in the config file. Running this fails with: Error response from daemon: invalid reference format (full logs below).
How can I avoid this error?
Command:
k3d cluster create bitbuyer-cluster --config ./k3d-config.yml
Config:
# k3d-config.yml
apiVersion: k3d.io/v1alpha3
kind: Simple
# version for k8s v1.20.2
image: rancher/k3s:v1.20.11+k3s2
options:
  k3s:
    extraArgs:
      - arg: "--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%"
        nodeFilters:
          - server:*
      - arg: "--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%"
        nodeFilters:
          - server:*
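One detail that stands out in the config above: Docker image references only allow letters, digits, and "_", ".", "-" in the tag, so the "+" in v1.20.11+k3s2 is itself an invalid reference as far as the Docker daemon is concerned. Docker Hub publishes the k3s tags with the "+" encoded as a dash, so (worth verifying against the rancher/k3s tag list) the image line would become:
image: rancher/k3s:v1.20.11-k3s2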
Logs:
# $ k3d cluster create bitbuyer-cluster --trace --config ./k3d-config.yml
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.9 OSType:linux OS:Ubuntu 20.04.3 LTS Arch:x86_64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs}
DEBU[0000] Additional CLI Configuration:
cli:
api-port: ""
env: []
k3s-node-labels: []
k3sargs: []
ports: []
registries:
create: ""
runtime-labels: []
volumes: []
DEBU[0000] Validating file ./k3d-config.yml against default JSONSchema...
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:62}
INFO[0000] Using config file ./k3d-config.yml (k3d.io/v1alpha3#simple)
DEBU[0000] Configuration:
agents: 0
apiversion: k3d.io/v1alpha3
image: rancher/k3s:v1.20.11+k3s2
kind: Simple
network: ""
options:
k3d:
disableimagevolume: false
disableloadbalancer: false
disablerollback: false
loadbalancer:
configoverrides: []
timeout: 0s
wait: true
k3s:
extraargs:
- arg: --kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%
nodeFilters:
- server:*
- arg: --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%
nodeFilters:
- server:*
kubeconfig:
switchcurrentcontext: true
updatedefaultkubeconfig: true
runtime:
agentsmemory: ""
gpurequest: ""
serversmemory: ""
registries:
config: ""
use: []
servers: 1
subnet: ""
token: ""
TRAC[0000] Trying to read config apiVersion='k3d.io/v1alpha3', kind='simple'
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:rancher/k3s:v1.20.11+k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[{Arg:--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% NodeFilters:[server:*]} {Arg:--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% NodeFilters:[server:*]}] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
TRAC[0000] VolumeFilterMap: map[]
TRAC[0000] PortFilterMap: map[]
TRAC[0000] K3sNodeLabelFilterMap: map[]
TRAC[0000] RuntimeLabelFilterMap: map[]
TRAC[0000] EnvFilterMap: map[]
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:43681} Image:rancher/k3s:v1.20.11+k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[{Arg:--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% NodeFilters:[server:*]} {Arg:--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% NodeFilters:[server:*]}] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
DEBU[0000] generated loadbalancer config:
ports:
6443.tcp:
- k3d-bitbuyer-cluster-server-0
settings:
workerConnections: 1024
TRAC[0000] Filtering 2 nodes by [server:*]
TRAC[0000] Filtered 1 nodes (filter: [server:*])
TRAC[0000] Filtering 2 nodes by [server:*]
TRAC[0000] Filtered 1 nodes (filter: [server:*])
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] ===== Processed Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0000] Prep: Network
DEBU[0000] Found network {Name:k3d-bitbuyer-cluster ID:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 Created:2021-10-15 12:56:38.391159451 +0100 WEST Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[] Config:[{Subnet:172.25.0.0/16 IPRange: Gateway:172.25.0.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[] Labels:map[app:k3d] Peers:[] Services:map[]}
INFO[0000] Re-using existing network 'k3d-bitbuyer-cluster' (f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384)
INFO[0000] Created volume 'k3d-bitbuyer-cluster-images'
TRAC[0000] Using Registries: []
TRAC[0000]
===== Creating Cluster =====
Runtime:
{}
Cluster:
&{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 External:false IPAM:{IPPrefix:172.25.0.0/16 IPsUsed:[172.25.0.1] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:k3d-bitbuyer-cluster-images}
ClusterCreatOpts:
&{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}}
============================
INFO[0000] Starting new tools node...
TRAC[0000] Creating node from spec
&{Name:k3d-bitbuyer-cluster-tools Role:noRole Image:docker.io/rancher/k3d-tools:5.0.1 Volumes:[k3d-bitbuyer-cluster-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] Env:[] Cmd:[] Args:[noop] Ports:map[] Restart:false Created: RuntimeLabels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.version:v5.0.1] K3sNodeLabels:map[] Networks:[k3d-bitbuyer-cluster] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
TRAC[0000] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-bitbuyer-cluster-tools Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[noop] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3d-tools:5.0.1 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.role:noRole k3d.version:v5.0.1] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-bitbuyer-cluster-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00064d1cf} NetworkingConfig:{EndpointsConfig:map[k3d-bitbuyer-cluster:0xc0004da000]}}
INFO[0001] Creating node 'k3d-bitbuyer-cluster-server-0'
TRAC[0001] Creating node from spec
&{Name:k3d-bitbuyer-cluster-server-0 Role:server Image:rancher/k3s:v1.20.11+k3s2 Volumes:[k3d-bitbuyer-cluster-images:/k3d/images] Env:[K3S_TOKEN=QEQoybzqvqzTjQkBhTpz] Cmd:[] Args:[--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%] Ports:map[] Restart:true Created: RuntimeLabels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16 k3d.cluster.token:QEQoybzqvqzTjQkBhTpz k3d.cluster.url:https://k3d-bitbuyer-cluster-server-0:6443] K3sNodeLabels:map[] Networks:[k3d-bitbuyer-cluster] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc000654240} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
DEBU[0001] DockerHost:
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-bitbuyer-cluster-server-0 Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_TOKEN=QEQoybzqvqzTjQkBhTpz K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[server --kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% --tls-san 0.0.0.0] Healthcheck:<nil> ArgsEscaped:false Image:rancher/k3s:v1.20.11+k3s2 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16 k3d.cluster.token:QEQoybzqvqzTjQkBhTpz k3d.cluster.url:https://k3d-bitbuyer-cluster-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:43681 k3d.version:v5.0.1] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-bitbuyer-cluster-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00035040f} NetworkingConfig:{EndpointsConfig:map[k3d-bitbuyer-cluster:0xc0003740c0]}}
ERRO[0001] Failed Cluster Creation: failed setup of server/agent node k3d-bitbuyer-cluster-server-0: failed to create node: runtime failed to create node 'k3d-bitbuyer-cluster-server-0': failed to create container for node 'k3d-bitbuyer-cluster-server-0': docker failed to create container 'k3d-bitbuyer-cluster-server-0': Error response from daemon: invalid reference format
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'bitbuyer-cluster'
ERRO[0001] failed to get cluster: No nodes found for given cluster
FATA[0001] Cluster creation FAILED, also FAILED to rollback changes!
Kubectl:
# $ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11", GitCommit:"27522a29febbcc4badac257763044d0d90c11abd", GitTreeState:"clean", BuildDate:"2021-09-15T19:21:44Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

custom SCC to access hostPath throws permission denied on the pod

I use OpenShift 4.7 and have this custom SCC (the goal is to have read-only access to some directories on the host node):
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
apiVersion: security.openshift.io/v1
fsGroup:
  type: RunAsAny
groups:
  - system:cluster-admins
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: 'test scc'
  name: test-access
priority: 15
readOnlyRootFilesystem: true
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
  - 'hostPath'
  - 'secret'
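A side note on wiring: a custom SCC only takes effect if the workload's service account is allowed to use it. The hostPath volumes mounting at all suggests that grant is already in place here, but for completeness it would look like:
oc adm policy add-scc-to-user test-access -z ubuntu-test -n ubuntu-test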
And here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-test
  namespace: ubuntu-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu-test
  template:
    metadata:
      labels:
        app: ubuntu-test
    spec:
      serviceAccountName: ubuntu-test
      containers:
        - name: ubuntu-test
          image: ubuntu:latest
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
          resources:
            limits:
              cpu: 100m
              memory: 256Mi
          volumeMounts:
            - name: docker
              readOnly: true
              mountPath: /var/lib/docker/containers
            - name: containers
              readOnly: true
              mountPath: /var/log/containers
            - name: pods
              readOnly: true
              mountPath: /var/log/pods
      volumes:
        - name: docker
          hostPath:
            path: /var/lib/docker/containers
            type: ''
        - name: containers
          hostPath:
            path: /var/log/containers
            type: ''
        - name: pods
          hostPath:
            path: /var/log/pods
            type: ''
But when I rsh to the container, I can't see the mounted hostPath:
root@ubuntu-test-6b4fcb5bd7-fnc6f:/# ls /var/log/pods
ls: cannot open directory '/var/log/pods': Permission denied
As I check the permissions, everything seems fine:
drwxr-xr-x. 44 root root 8192 Oct 12 14:30 pods
Using SELinux can solve this problem. Reference article: https://zhimin-wen.medium.com/selinux-policy-for-openshift-containers-40baa1c86aa5
In addition, you can refer to the SELinux object classes and permissions to control the addition, deletion, and modification of the mounted directory: https://selinuxproject.org/page/ObjectClassesPerms
In OpenShift 4, if you use hostPath as the backing data volume and SELinux is enabled, you need to configure an SELinux policy; by default the files need to carry the container_file_t label.
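To confirm SELinux is what blocks the mounts, one quick and deliberately blunt experiment is to run the pod with a privileged SELinux type via seLinuxOptions, which the SCC above already permits (seLinuxContext: RunAsAny). This is a diagnostic sketch only, not the long-term fix; the scoped custom policy from the referenced article is the proper solution:
spec:
  containers:
    - name: ubuntu-test
      securityContext:
        seLinuxOptions:
          type: spc_t  # "super privileged container" type; diagnostic only
If ls /var/log/pods works with this in place, the denial was SELinux, and you can move to a policy tailored to the labels on those host directories.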

How to set up kubernetes for Spring and MySql

I followed this tutorial: https://medium.com/better-programming/kubernetes-a-detailed-example-of-deployment-of-a-stateful-application-de3de33c8632
I created the mysql pod and the backend pod, but the application fails with the error com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
pod mysql: running
pod backend: CrashLoopBackOff
Dockerfile
FROM openjdk:14-ea-8-jdk-alpine3.10
ADD target/credit-0.0.1-SNAPSHOT.jar .
EXPOSE 8200
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-Dspring.profiles.active=container","-jar","/credit-0.0.1-SNAPSHOT.jar"]
credit-deployment.yml
# Define 'Service' to expose backend application deployment
apiVersion: v1
kind: Service
metadata:
  name: to-do-app-backend
spec:
  selector: # backend application pod labels should match these
    app: to-do-app
    tier: backend
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8080
  type: LoadBalancer # use NodePort, if you are not running Kubernetes on cloud
---
# Configure 'Deployment' of backend application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: to-do-app-backend
  labels:
    app: to-do-app
    tier: backend
spec:
  replicas: 2 # Number of replicas of back-end application to be deployed
  selector:
    matchLabels: # backend application pod labels should match these
      app: to-do-app
      tier: backend
  template:
    metadata:
      labels: # Must match 'Service' and 'Deployment' labels
        app: to-do-app
        tier: backend
    spec:
      containers:
        - name: to-do-app-backend
          image: gitim21/credit_repo:1.0 # docker image of backend application
          env: # Setting environment variables
            - name: DB_HOST # Setting database host address from configMap
              valueFrom:
                configMapKeyRef:
                  name: db-conf # name of configMap
                  key: host
            - name: DB_NAME # Setting database name from configMap
              valueFrom:
                configMapKeyRef:
                  name: db-conf
                  key: name
            - name: DB_USERNAME # Setting database username from Secret
              valueFrom:
                secretKeyRef:
                  name: db-credentials # Secret name
                  key: username
            - name: DB_PASSWORD # Setting database password from Secret
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
          ports:
            - containerPort: 8080
application.yml
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      idle-timeout: 10000
    platform: mysql
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
    url: jdbc:mysql://${DB_HOST}/${DB_NAME}
  jpa:
    hibernate:
      naming:
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
I placed the application.yml file in the application's resources folder.
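Because the datasource URL is assembled from those environment variables, a Communications link failure usually means DB_HOST does not resolve to the MySQL Service. A hedged sketch of the ConfigMap the deployment expects (the key names come from the manifest above; the values are placeholders for illustration):
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-conf
data:
  host: mysql      # must be the MySQL Service name (or its cluster DNS name), not a pod IP
  name: to_do_db   # hypothetical database name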
EDIT
Name: mysql-64c7df597c-s4gbt
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.8.160
Start Time: Thu, 12 Sep 2019 17:50:18 +0200
Labels: app=mysql
pod-template-hash=64c7df597c
tier=database
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controlled By: ReplicaSet/mysql-64c7df597c
Containers:
mysql:
Container ID: docker://514d3f5af76f5e7ac11f6bf6e36b44ee4012819dc1cef581829a6b5b2ce7c09e
Image: mysql:5.7
Image ID: docker-pullable://mysql@sha256:1a121f2e7590f949b9ede7809395f209dd9910e331e8372e6682ba4bebcc020b
Port: 3306/TCP
Host Port: 0/TCP
Args:
--ignore-db-dir=lost+found
State: Running
Started: Thu, 12 Sep 2019 17:50:19 +0200
Ready: True
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'password' in secret 'db-root-credentials'> Optional: false
MYSQL_USER: <set to the key 'username' in secret 'db-credentials'> Optional: false
MYSQL_PASSWORD: <set to the key 'password' in secret 'db-credentials'> Optional: false
MYSQL_DATABASE: <set to the key 'name' of config map 'db-conf'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-rgsmp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rgsmp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49m default-scheduler Successfully assigned default/mysql-64c7df597c-s4gbt to minikube
Normal Pulled 49m kubelet, minikube Container image "mysql:5.7" already present on machine
Normal Created 49m kubelet, minikube Created container mysql
Normal Started 49m kubelet, minikube Started container mysql
Name: to-do-app-backend-8669b5467-hrr9q
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.8.160
Start Time: Thu, 12 Sep 2019 18:27:45 +0200
Labels: app=to-do-app
pod-template-hash=8669b5467
tier=backend
Annotations: <none>
Status: Running
IP: 172.17.0.7
Controlled By: ReplicaSet/to-do-app-backend-8669b5467
Containers:
to-do-app-backend:
Container ID: docker://1eb8453939710aed7a93cddbd5046f49be3382858aa17d5943195207eaeb3065
Image: gitim21/credit_repo:1.0
Image ID: docker-pullable://gitim21/credit_repo@sha256:1fb2991394fc59f37068164c72263749d64cb5c9fe741021f476a65589f40876
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 12 Sep 2019 18:51:25 +0200
Finished: Thu, 12 Sep 2019 18:51:36 +0200
Ready: False
Restart Count: 9
Environment:
DB_HOST: <set to the key 'host' of config map 'db-conf'> Optional: false
DB_NAME: <set to the key 'name' of config map 'db-conf'> Optional: false
DB_USERNAME: <set to the key 'username' in secret 'db-credentials'> Optional: false
DB_PASSWORD: <set to the key 'password' in secret 'db-credentials'> Optional: false
DB_PORT: <set to the key 'port' in secret 'db-credentials'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-rgsmp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rgsmp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned default/to-do-app-backend-8669b5467-hrr9q to minikube
Normal Pulled 23m (x5 over 25m) kubelet, minikube Container image "gitim21/credit_repo:1.0" already present on machine
Normal Created 23m (x5 over 25m) kubelet, minikube Created container to-do-app-backend
Normal Started 23m (x5 over 25m) kubelet, minikube Started container to-do-app-backend
Warning BackOff 50s (x104 over 25m) kubelet, minikube Back-off restarting failed container
First and foremost, make sure that you fulfil all the requirements described in the article.
When the deployment objects (e.g. pods, services) are created, environment variables are injected from the configMaps and secrets created earlier. This deployment uses the image kubernetesdemo/to-do-app-backend, which is created in step one. Make sure you have created the configMap and secrets beforehand; otherwise delete the objects created during the deployment, create the configMap and secret, and then run the deployment config file once again.
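A sketch of that order of operations, using the object names this deployment references (all literal values are placeholders to adapt):
kubectl create configmap db-conf --from-literal=host=mysql --from-literal=name=to_do_db
kubectl create secret generic db-credentials --from-literal=username=app_user --from-literal=password=changeme --from-literal=port=3306
kubectl apply -f credit-deployment.yml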
Another possibility: if you get a
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
error, it means that the DB isn't reachable at all. This can have one or more of the following causes:
1. IP address or hostname in the JDBC URL is wrong.
2. Hostname in the JDBC URL is not recognized by the local DNS server.
3. Port number is missing or wrong in the JDBC URL.
4. DB server is down.
5. DB server doesn't accept TCP/IP connections.
6. DB server has run out of connections.
7. Something between Java and the DB is blocking connections, e.g. a firewall or proxy.
I assume that since your mysql pod is running, your DB server is running, so cause 4 can be ruled out.
To address the remaining causes:
Verify the host and test it with ping; refresh DNS or use the IP address in the JDBC URL instead.
Check that the port matches the one in the my.cnf of the MySQL DB.
Start the DB once again and check whether mysqld is started without the --skip-networking option.
Restart the DB and fix your code so that it closes connections in finally blocks.
Disable the firewall and/or configure the firewall/proxy to allow/forward the port.
A similar error is discussed here: communication-error.
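If the checklist above doesn't pinpoint it, it helps to test the exact JDBC host and port from inside the cluster, which takes Kubernetes networking out of the equation. A throwaway client pod works well for this (host, port, and user below are placeholders):
kubectl run mysql-client --rm -it --image=mysql:5.7 --restart=Never -- mysql -h mysql -P 3306 -u app_user -p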