Newly installed OKD4 cluster getting machine-config errors - openshift

I have installed the latest version of OKD4 on a 5-node cluster with 3 control-plane nodes and 2 compute nodes.
When running oc get co I am seeing the following error message for machine-config:
NAME                                      VERSION                          AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                            4.10.0-0.okd-2022-07-09-073606   True        False         False      7h14m
baremetal                                 4.10.0-0.okd-2022-07-09-073606   True        False         False      15h
cloud-controller-manager                  4.10.0-0.okd-2022-07-09-073606   True        False         False      15h
cloud-credential                          4.10.0-0.okd-2022-07-09-073606   True        False         False      15h
cluster-autoscaler                        4.10.0-0.okd-2022-07-09-073606   True        False         False      15h
config-operator                           4.10.0-0.okd-2022-07-09-073606   True        False         False      15h
console                                   4.10.0-0.okd-2022-07-09-073606   True        False         False      7h14m
csi-snapshot-controller                   4.10.0-0.okd-2022-07-09-073606   True        False         False      13h
dns                                       4.10.0-0.okd-2022-07-09-073606   True        False         False      13h
etcd                                      4.10.0-0.okd-2022-07-09-073606   True        False         False      13h
image-registry                            4.10.0-0.okd-2022-07-09-073606   True        False         False      3h1m
ingress                                   4.10.0-0.okd-2022-07-09-073606   True        False         False      8h
insights                                  4.10.0-0.okd-2022-07-09-073606   True        False         False      14h
kube-apiserver                            4.10.0-0.okd-2022-07-09-073606   True        False         False      13h
kube-controller-manager                   4.10.0-0.okd-2022-07-09-073606   True        False         False      13h
kube-scheduler                            4.10.0-0.okd-2022-07-09-073606   True        False         False      14h
kube-storage-version-migrator             4.10.0-0.okd-2022-07-09-073606   True        False         False      13h
machine-api                               4.10.0-0.okd-2022-07-09-073606   True        False         False      14h
machine-approver                          4.10.0-0.okd-2022-07-09-073606   True        False         False      15h
machine-config                                                             True        True          True       13h     Unable to apply 4.10.0-0.okd-2022-07-09-073606: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)
marketplace                               4.10.0-0.okd-2022-07-09-073606   True        False         False      15h
monitoring                                4.10.0-0.okd-2022-07-09-073606   True        False         False      8h
network                                   4.10.0-0.okd-2022-07-09-073606   True        False         False      13h
node-tuning                               4.10.0-0.okd-2022-07-09-073606   True        False         False      8h
openshift-apiserver                       4.10.0-0.okd-2022-07-09-073606   True        False         False      13h
openshift-controller-manager              4.10.0-0.okd-2022-07-09-073606   True        False         False      33m
openshift-samples                         4.10.0-0.okd-2022-07-09-073606   True        False         False      13h
operator-lifecycle-manager                4.10.0-0.okd-2022-07-09-073606   True        False         False      14h
operator-lifecycle-manager-catalog        4.10.0-0.okd-2022-07-09-073606   True        False         False      14h
operator-lifecycle-manager-packageserver  4.10.0-0.okd-2022-07-09-073606   True        False         False      13h
service-ca                                4.10.0-0.okd-2022-07-09-073606   True        False         False      15h
storage                                   4.10.0-0.okd-2022-07-09-073606   True        False         False      15h
When running oc get mcp I am getting:
oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master                                                      False     True       True       3              0                   0                     3                      15h
worker   rendered-worker-04b4cdd431c21b96c1f98ca595ded448   True      False      False      2              2                   2                     0                      15h
and when I describe the degraded machine config pool I see the following:
oc describe mcp master
Name:         master
Namespace:    
Labels:       machineconfiguration.openshift.io/mco-built-in=
              operator.machineconfiguration.openshift.io/required-for-upgrade=
              pools.operator.machineconfiguration.openshift.io/master=
Annotations:  <none>
API Version:  machineconfiguration.openshift.io/v1
Kind:         MachineConfigPool
Metadata:
  Creation Timestamp:  2022-07-24T03:25:28Z
  Generation:          2
  Managed Fields:
    API Version:  machineconfiguration.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .:
          f:machineconfiguration.openshift.io/mco-built-in:
          f:operator.machineconfiguration.openshift.io/required-for-upgrade:
          f:pools.operator.machineconfiguration.openshift.io/master:
      f:spec:
        .:
        f:configuration:
        f:machineConfigSelector:
          .:
          f:matchLabels:
            .:
            f:machineconfiguration.openshift.io/role:
        f:nodeSelector:
          .:
          f:matchLabels:
            .:
            f:node-role.kubernetes.io/master:
        f:paused:
    Manager:      machine-config-operator
    Operation:    Update
    Time:         2022-07-24T03:25:28Z
    API Version:  machineconfiguration.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        f:configuration:
          f:name:
          f:source:
    Manager:      machine-config-controller
    Operation:    Update
    Time:         2022-07-24T05:05:35Z
    API Version:  machineconfiguration.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:configuration:
        f:degradedMachineCount:
        f:machineCount:
        f:observedGeneration:
        f:readyMachineCount:
        f:unavailableMachineCount:
        f:updatedMachineCount:
    Manager:      machine-config-controller
    Operation:    Update
    Subresource:  status
    Time:         2022-07-24T05:05:40Z
  Resource Version:  41348
  UID:               6eea1467-dfd1-4e25-a0a5-a303d21c4076
Spec:
  Configuration:
    Name:  rendered-master-5ac7b1a497e20b76e47aaf715bc0dc6f
    Source:
      API Version:  machineconfiguration.openshift.io/v1
      Kind:         MachineConfig
      Name:         00-master
      API Version:  machineconfiguration.openshift.io/v1
      Kind:         MachineConfig
      Name:         01-master-container-runtime
      API Version:  machineconfiguration.openshift.io/v1
      Kind:         MachineConfig
      Name:         01-master-kubelet
      API Version:  machineconfiguration.openshift.io/v1
      Kind:         MachineConfig
      Name:         99-master-generated-crio-seccomp-use-default
      API Version:  machineconfiguration.openshift.io/v1
      Kind:         MachineConfig
      Name:         99-master-generated-registries
      API Version:  machineconfiguration.openshift.io/v1
      Kind:         MachineConfig
      Name:         99-master-okd-extensions
      API Version:  machineconfiguration.openshift.io/v1
      Kind:         MachineConfig
      Name:         99-master-ssh
      API Version:  machineconfiguration.openshift.io/v1
      Kind:         MachineConfig
      Name:         99-okd-master-disable-mitigations
  Machine Config Selector:
    Match Labels:
      machineconfiguration.openshift.io/role:  master
  Node Selector:
    Match Labels:
      node-role.kubernetes.io/master:  
  Paused:  false
Status:
  Conditions:
    Last Transition Time:  2022-07-24T05:05:36Z
    Message:               
    Reason:                
    Status:                False
    Type:                  RenderDegraded
    Last Transition Time:  2022-07-24T05:05:40Z
    Message:               
    Reason:                
    Status:                False
    Type:                  Updated
    Last Transition Time:  2022-07-24T05:05:40Z
    Message:               All nodes are updating to rendered-master-5ac7b1a497e20b76e47aaf715bc0dc6f
    Reason:                
    Status:                True
    Type:                  Updating
    Last Transition Time:  2022-07-24T05:05:40Z
    Message:               
    Reason:                
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2022-07-24T05:05:40Z
    Message:               Node okd4-control-plane-1 is reporting: "machineconfig.machineconfiguration.openshift.io \"rendered-master-d06288fa8a499313709afdb2c727de31\" not found", Node okd4-control-plane-2 is reporting: "machineconfig.machineconfiguration.openshift.io \"rendered-master-d06288fa8a499313709afdb2c727de31\" not found", Node okd4-control-plane-3 is reporting: "machineconfig.machineconfiguration.openshift.io \"rendered-master-d06288fa8a499313709afdb2c727de31\" not found"
    Reason:                3 nodes are reporting degraded status on sync
    Status:                True
    Type:                  NodeDegraded
  Configuration:
  Degraded Machine Count:     3
  Machine Count:              3
  Observed Generation:        2
  Ready Machine Count:        0
  Unavailable Machine Count:  3
  Updated Machine Count:      0
Events:                       <none>
Any suggestions on how to solve this?

Fixed it by deleting the master MCP, which triggered it to be recreated, and then everything came up clean.
oc delete mcp master

Try this:
oc delete mc 99-master-okd-extensions 99-okd-master-disable-mitigations
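
For anyone hitting the same state: the NodeDegraded message above shows all three nodes pinned to a rendered config (rendered-master-d06288...) that no longer exists, which is why deleting the pool, or the stray MachineConfigs that invalidated it, lets the machine-config operator render and apply a fresh one. A few read-only checks you can run before deleting anything (standard oc invocations, nothing specific to this cluster):
# List the rendered MachineConfigs that actually exist for the master pool
oc get machineconfigs | grep rendered-master
# Show which rendered config each node is currently on and which one it wants
# (annotations maintained by the machine-config operator)
oc get nodes -o yaml | grep -E 'currentConfig|desiredConfig'
# Watch the pool reconcile after the fix
oc get mcp master -w
If the currentConfig/desiredConfig annotations name a rendered config that is missing from the first list, you are in exactly the situation described in this question.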

Related

Switch user not working when connected to MySQL 8 database?

I am trying to get the switch-user feature in Spring Security to work. I am using Grails 4.0.10 and MySQL 8.
I created a sample hello world Grails app and followed the switch-user guide from the documentation: https://grails.github.io/grails-spring-security-core/4.0.x/index.html#switchUser
If I use the default H2 database then it works, but if I switch to the MySQL 8 database it throws a 404 page-not-found error and does not switch.
I have published the code on GitHub; here is the link:
https://github.com/sanjaygir/switching
I have created a simple page in the secure controller. The page is index.gsp, which has a form to switch to another user. The logged-in user should be displayed at the top of this page. In the Bootstrap file I have created two users: one admin and one regular user.
I have a local database with this configuration:
dataSource:
    dbCreate: create
    url: jdbc:mysql://localhost:3307/switch?useUnicode=yes&characterEncoding=UTF-8
    username: root
    password: password
In order to run this app you need a MySQL 8 DB running. Please change the MySQL DB name, username and password in the above section of application.yml.
After the app starts, go directly to http://localhost:8080/secure/index, enter "user" in the textbox, and click the Switch button. It throws a page-not-found error, and if you go back to http://localhost:8080/secure/index you cannot see the logged-in user name at the top. That means the switch was not successful.
Here is the simple code for secure/index.gsp:
<%@ page contentType="text/html;charset=UTF-8" %>
<html>
<head>
    <title></title>
</head>
<body>
<sec:ifLoggedIn>
    Logged in as <sec:username/>
</sec:ifLoggedIn>
<form action='${request.contextPath}/login/impersonate' method='POST'>
    Switch to user: <input type='text' name='username'/> <br/>
    <input type='submit' value='Switch'/>
</form>
</body>
</html>
I hope I have made it clear. This is a simple hello world app created to see the switch-user feature in action. I am puzzled why switch user works with the default H2 DB but not when connected to MySQL 8. If anyone has any idea, I appreciate your help. Thanks.
UPDATE:
Today I switched the database to MySQL version 5 and it works.
I changed the following configuration in application.yml:
hibernate:
    cache:
        queries: false
        use_second_level_cache: false
        use_query_cache: false
dataSource:
    pooled: true
    jmxExport: true
    driverClassName: com.mysql.jdbc.Driver
    dialect: org.hibernate.dialect.MySQL5InnoDBDialect
    username: root
    password: 'password'
environments:
    development:
        dataSource:
            dbCreate: create-drop
            url: jdbc:mysql://localhost:3306/switch?useUnicode=yes&characterEncoding=UTF-8
In build.gradle I used:
runtime 'mysql:mysql-connector-java:5.1.19'
Still, I am not sure why it doesn't work with MySQL 8.
I finally found the bug. I cannot believe what caused the 404 not found issue: it was a single line in the configuration file.
Before, the application.yml looked like this:
---
grails:
    profile: web
    codegen:
        defaultPackage: rcroadraceweb4
    gorm:
        reactor:
            # Whether to translate GORM events into Reactor events
            # Disabled by default for performance reasons
            events: false
info:
    app:
        name: '@info.app.name@'
        version: '@info.app.version@'
        grailsVersion: '@info.app.grailsVersion@'
spring:
    jmx:
        unique-names: true
    main:
        banner-mode: "off"
    groovy:
        template:
            check-template-location: false
    devtools:
        restart:
            additional-exclude:
                - '*.gsp'
                - '**/*.gsp'
                - '*.gson'
                - '**/*.gson'
                - 'logback.groovy'
                - '*.properties'
management:
    endpoints:
        enabled-by-default: false
server:
    servlet:
        context-path: '/roadrace'
---
hibernate:
    cache:
        queries: false
        use_second_level_cache: false
        use_query_cache: false
grails:
    plugin:
        databasemigration:
            updateOnStart: true
            updateOnStartFileName: changelog.groovy
    controllers:
        upload:
            maxFileSize: 2000000
            maxRequestSize: 2000000
    mail:
        host: "localhost"
        port: 25
        default:
            to: 'root@localhost'
            from: 'noreply@runnercard.com'
dataSource:
    type: com.zaxxer.hikari.HikariDataSource
    pooled: true
    driverClassName: com.mysql.cj.jdbc.Driver
    dialect: org.hibernate.dialect.MySQL8Dialect
    dbCreate: none
    properties:
        minimumIdle: 5
        maximumPoolSize: 10
        poolName: main-db
        cachePrepStmts: true
        prepStmtCacheSize: 250
        prepStmtCacheSqlLimit: 2048
        useServerPrepStmts: true
        useLocalSessionState: true
        rewriteBatchedStatements: true
        cacheResultSetMetadata: true
        cacheServerConfiguration: true
        elideSetAutoCommits: true
        maintainTimeStats: false
dataSources:
    logging:
        # This is not used unless `useJdbcAccessLogger` or `useJdbcLogger` is set to `true`
        # This does not need to be setup unless it is in use.
        type: com.zaxxer.hikari.HikariDataSource
        pooled: true
        driverClassName: com.mysql.cj.jdbc.Driver
        properties:
            minimumIdle: 2
            maximumPoolSize: 5
            poolName: logging-db
            cachePrepStmts: true
            prepStmtCacheSize: 250
            prepStmtCacheSqlLimit: 2048
            useServerPrepStmts: true
            useLocalSessionState: true
            rewriteBatchedStatements: true
            cacheResultSetMetadata: true
            cacheServerConfiguration: true
            elideSetAutoCommits: true
            maintainTimeStats: false
environments:
    development:
        dataSource:
            dbCreate: none
            url: jdbc:mysql://localhost:3307/dev2?useUnicode=yes&characterEncoding=UTF-8
            username: root
            password: password
        grails:
#            mail:
#                host: "smtp.gmail.com"
#                port: 465
#                username: "justforstackoverflow123@gmail.com"
#                password: "1asdfqwef1"
#                props:
#                    "mail.smtp.auth": "true"
#                    "mail.smtp.socketFactory.port": "465"
#                    "mail.smtp.socketFactory.class": "javax.net.ssl.SSLSocketFactory"
#                    "mail.smtp.socketFactory.fallback": "false"
    test:
        dataSource:
#            dialect: org.hibernate.dialect.MySQL5InnoDBDialect
            dbCreate: none
            url: jdbc:mysql://localhost:3307/test?useUnicode=yes&characterEncoding=UTF-8
            username: root
            password: password
    production:
---
logging:
    level:
        root: INFO
        org.springframework: WARN
        grails.plugin.springsecurity.web.access.intercept.AnnotationFilterInvocationDefinition: ERROR
        grails.plugins.DefaultGrailsPluginManager: WARN
        org.hibernate: ERROR # TODO: we need to lower this, and fix the warnings this is talking about.
        rcroadraceweb4: DEBUG
        com.runnercard: DEBUG
        liquibase.ext.hibernate.snapshot.HibernateSnapshotGenerator: ERROR
---
#debug: true
#useJdbcSessionStore: true
---
environments:
    nateDeploy:
        behindLoadBalancer: true
        grails:
            insecureServerURL: 'https://nate-dev.nate-palmer.com/roadrace'
            serverURL: 'https://nate-dev.nate-palmer.com/roadrace'
        dataSource:
            url: 'jdbc:mysql://10.1.10.240:3306/rcroadwebDEV?serverTimezone=America/Denver'
After the fix it looks like this:
---
grails:
    profile: web
    codegen:
        defaultPackage: rcroadraceweb4
    gorm:
        reactor:
            # Whether to translate GORM events into Reactor events
            # Disabled by default for performance reasons
            events: false
info:
    app:
        name: '@info.app.name@'
        version: '@info.app.version@'
        grailsVersion: '@info.app.grailsVersion@'
spring:
    jmx:
        unique-names: true
    main:
        banner-mode: "off"
    groovy:
        template:
            check-template-location: false
    devtools:
        restart:
            additional-exclude:
                - '*.gsp'
                - '**/*.gsp'
                - '*.gson'
                - '**/*.gson'
                - 'logback.groovy'
                - '*.properties'
management:
    endpoints:
        enabled-by-default: false
server:
    servlet:
        context-path: '/roadrace'
---
hibernate:
    cache:
        queries: false
        use_second_level_cache: false
        use_query_cache: false
grails:
    plugin:
        databasemigration:
            updateOnStart: true
            updateOnStartFileName: changelog.groovy
    controllers:
        upload:
            maxFileSize: 2000000
            maxRequestSize: 2000000
    mail:
        host: "localhost"
        port: 25
        default:
            to: 'root@localhost'
            from: 'noreply@runnercard.com'
dataSource:
    type: com.zaxxer.hikari.HikariDataSource
    pooled: true
    driverClassName: com.mysql.cj.jdbc.Driver
    dialect: org.hibernate.dialect.MySQL8Dialect
    dbCreate: none
    properties:
        minimumIdle: 5
        maximumPoolSize: 10
        poolName: main-db
        cachePrepStmts: true
        prepStmtCacheSize: 250
        prepStmtCacheSqlLimit: 2048
        useServerPrepStmts: true
        useLocalSessionState: true
        rewriteBatchedStatements: true
        cacheResultSetMetadata: true
        cacheServerConfiguration: true
        elideSetAutoCommits: true
        maintainTimeStats: false
dataSources:
    logging:
        # This is not used unless `useJdbcAccessLogger` or `useJdbcLogger` is set to `true`
        # This does not need to be setup unless it is in use.
        type: com.zaxxer.hikari.HikariDataSource
        pooled: true
        driverClassName: com.mysql.cj.jdbc.Driver
        properties:
            minimumIdle: 2
            maximumPoolSize: 5
            poolName: logging-db
            cachePrepStmts: true
            prepStmtCacheSize: 250
            prepStmtCacheSqlLimit: 2048
            useServerPrepStmts: true
            useLocalSessionState: true
            rewriteBatchedStatements: true
            cacheResultSetMetadata: true
            cacheServerConfiguration: true
            elideSetAutoCommits: true
            maintainTimeStats: false
environments:
    development:
        dataSource:
            dbCreate: none
            url: jdbc:mysql://localhost:3307/dev2?useUnicode=yes&characterEncoding=UTF-8
            username: root
            password: password
#        grails:
#            mail:
#                host: "smtp.gmail.com"
#                port: 465
#                username: "justforstackoverflow123@gmail.com"
#                password: "1asdfqwef1"
#                props:
#                    "mail.smtp.auth": "true"
#                    "mail.smtp.socketFactory.port": "465"
#                    "mail.smtp.socketFactory.class": "javax.net.ssl.SSLSocketFactory"
#                    "mail.smtp.socketFactory.fallback": "false"
    test:
        dataSource:
#            dialect: org.hibernate.dialect.MySQL5InnoDBDialect
            dbCreate: none
            url: jdbc:mysql://localhost:3307/test?useUnicode=yes&characterEncoding=UTF-8
            username: root
            password: password
    production:
---
logging:
    level:
        root: INFO
        org.springframework: WARN
        grails.plugin.springsecurity.web.access.intercept.AnnotationFilterInvocationDefinition: ERROR
        grails.plugins.DefaultGrailsPluginManager: WARN
        org.hibernate: ERROR # TODO: we need to lower this, and fix the warnings this is talking about.
        rcroadraceweb4: DEBUG
        com.runnercard: DEBUG
        liquibase.ext.hibernate.snapshot.HibernateSnapshotGenerator: ERROR
---
#debug: true
#useJdbcSessionStore: true
---
environments:
    nateDeploy:
        behindLoadBalancer: true
        grails:
            insecureServerURL: 'https://nate-dev.nate-palmer.com/roadrace'
            serverURL: 'https://nate-dev.nate-palmer.com/roadrace'
        dataSource:
            url: 'jdbc:mysql://10.1.10.240:3306/rcroadwebDEV?serverTimezone=America/Denver'
It was this line in the environments > development block:
# grails:
It worked after commenting out the grails line.
But all the contents of the grails block were already commented out, so I am still confused why having grails uncommented would cause such a big issue. Anyway, solved after days of hard searching!
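
A plausible explanation (my own reading, not confirmed in the thread): in YAML, a key whose children are all comments has no value, so it parses as null. With the grails: line active, the development environment effectively contained grails: null, and a null grails block merged over the real configuration can wipe out settings the Spring Security plugin relies on (such as its filter and URL mappings), which would produce exactly a 404 on /login/impersonate. A minimal illustration:
environments:
    development:
        grails:        # every child is commented out, so this key parses as "grails: null"
#            mail:
#                host: "smtp.gmail.com"
Commenting out the grails: key itself removes the null override, so the top-level grails configuration applies unchanged in development.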

Upload CSV file in Elasticsearch using Filebeat

I'm trying to load a CSV file into Elasticsearch using Filebeat.
Here is my filebeat.yml:
filebeat.inputs:
  #- type: log
  - type: stdin
    setup.template.overwrite: true
    enabled: true
    close_eof: true
    paths:
      - /usr/share/filebeat/dockerlogs/*.csv
processors:
  - decode_csv_fields:
      fields:
        message: "message"
      separator: ","
      ignore_missing: false
      overwrite_keys: true
      trim_leading_space: true
      fail_on_error: true
  - drop_fields:
      fields: [ "log", "host", "ecs", "input", "agent" ]
  - extract_array:
      field: message
      mappings:
        sr: 0
        Identifiant PSI: 1
        libellé PSI: 2
        Identifiant PdR: 3
        T3 Date Prévisionnelle: 4
        DS Reporting PdR: 5
        Status PSI: 6
        Type PdR: 7
  - drop_fields:
      fields: ["message","sr"]
#index: rapport_g035_prov_1
filebeat.registry.path: /usr/share/filebeat/data/registry/filebeat/filebeat
output:
  elasticsearch:
    enabled: true
    hosts: ["IPAdress:8081"]
    indices:
      - index: "rapport_g035_prov"
      #- index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
      #- index: "filebeat-7.7.0"
#setup.dashboards.kibana_index: file-*
seccomp.enabled: false
logging.metrics.enabled: false
But when I check the index in Kibana I find that the index can't read the column names:
https://i.stack.imgur.com/renYt.png
I tried to process the CSV in filebeat.yml another way:
filebeat.inputs:
  - type: log
    setup.template.overwrite: true
    enabled: true
    close_eof: true
    paths:
      - /usr/share/filebeat/dockerlogs/*.csv
processors:
  - decode_csv_fields:
      fields:
        message: decoded.csv
      separator: ","
      ignore_missing: false
      overwrite_keys: true
      trim_leading_space: false
      fail_on_error: true
#index: rapport_g035_prov_1
filebeat.registry.path: /usr/share/filebeat/data/registry/filebeat/filebeat
output:
  elasticsearch:
    enabled: true
    hosts: ["IPAdress:8081"]
    indices:
      - index: "rapport_g035_prov"
      #- index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
      #- index: "filebeat-7.7.0"
#setup.dashboards.kibana_index: file-*
seccomp.enabled: false
logging.metrics.enabled: false
But I got the same error; it can't map the index correctly. I know there is a problem in the CSV processing in filebeat.yml, but I don't know what it is.
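
One thing worth checking given that symptom (a guess from the config above, not a confirmed fix): Filebeat treats every line of the CSV as a separate event, including the header row, so the column labels get indexed as ordinary field values rather than as field names; the names can only come from the extract_array mappings. A sketch of dropping the header event with the standard drop_event processor, keyed on one of the header labels from the mappings above:
processors:
  - decode_csv_fields:
      fields:
        message: "message"
      separator: ","
  # Drop the CSV header row so its labels are not indexed as data
  - drop_event:
      when:
        contains:
          message: "Identifiant PSI"
  # ... extract_array and drop_fields as in the original config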

custom SCC to access hostPath throws permission denied on the pod

I use OpenShift 4.7 and have this custom SCC (the goal is to have read-only access to some directories on the host node):
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
apiVersion: security.openshift.io/v1
fsGroup:
  type: RunAsAny
groups:
  - system:cluster-admins
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: 'test scc'
  name: test-access
priority: 15
readOnlyRootFilesystem: true
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
  - 'hostPath'
  - 'secret'
- 'secret'
and here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-test
  namespace: ubuntu-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu-test
  template:
    metadata:
      labels:
        app: ubuntu-test
    spec:
      serviceAccountName: ubuntu-test
      containers:
        - name: ubuntu-test
          image: ubuntu:latest
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
          resources:
            limits:
              cpu: 100m
              memory: 256Mi
          volumeMounts:
            - name: docker
              readOnly: true
              mountPath: /var/lib/docker/containers
            - name: containers
              readOnly: true
              mountPath: /var/log/containers
            - name: pods
              readOnly: true
              mountPath: /var/log/pods
      volumes:
        - name: docker
          hostPath:
            path: /var/lib/docker/containers
            type: ''
        - name: containers
          hostPath:
            path: /var/log/containers
            type: ''
        - name: pods
          hostPath:
            path: /var/log/pods
            type: ''
But when I rsh to the container, I can't see the mounted hostPath:
root@ubuntu-test-6b4fcb5bd7-fnc6f:/# ls /var/log/pods
ls: cannot open directory '/var/log/pods': Permission denied
As I check the permissions, everything seems fine:
drwxr-xr-x. 44 root root 8192 Oct 12 14:30 pods
Using SELinux can solve this problem. Reference article: https://zhimin-wen.medium.com/selinux-policy-for-openshift-containers-40baa1c86aa5
In addition, you can refer to the SELinux object classes and permissions to control create, delete, and modify access on the mounted directory: https://selinuxproject.org/page/ObjectClassesPerms
In OpenShift 4, if you use hostPath as the backing data volume, you need to configure an SELinux policy when SELinux is enabled. By default, the directory needs the container_file_t label.
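
As a concrete starting point (a sketch, not taken from the article above): because the SCC sets seLinuxContext to RunAsAny, the pod is allowed to request an SELinux type of its choosing. One quick way to test whether SELinux is the blocker is to run the test pod as spc_t, the super-privileged container type, which can read host-labeled content; the longer-term fix is a tailored policy or relabeling the directories with container_file_t as the article describes.
# In the Deployment's pod template (merged with the spec above):
    spec:
      securityContext:
        seLinuxOptions:
          type: spc_t   # permitted only because the SCC sets seLinuxContext: RunAsAny
If ls /var/log/pods works under spc_t, the permission denied error is an SELinux denial rather than a file-mode problem.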

0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports

While redeploying it gets stuck as follows, but scaling the deployment to 0 and back to 1 works.
What might be the cause?
kubectl describe pod backendnew-6f9cbc5fb-v2nbc
Name:           backendnew-6f9cbc5fb-v2nbc
Namespace:      default
Priority:       0
Node:           
Labels:         pod-template-hash=6f9cbc5fb
                workload.user.cattle.io/workloadselector=deployment-default-backendnew
Annotations:    cattle.io/timestamp: 2021-09-27T11:55:38Z
                field.cattle.io/ports:
                  [[{"containerPort":7080,"dnsName":"backendnew-nodeport","hostPort":7080,"kind":"NodePort","name":"port","protocol":"TCP","sourcePort":7080...
                field.cattle.io/publicEndpoints: [{"addresses":["192.168.178.13"],"nodeId":"c-jq2bh:machine-7g9vs","port":7080,"protocol":"TCP"}]
Status:         Pending
IP:             
IPs:            
Controlled By:  ReplicaSet/backendnew-6f9cbc5fb
Containers:
  backendnew:
    Image:      kub-repo.f1soft.com/fonepay/grpay-admin:8
    Port:       7080/TCP
    Host Port:  7080/TCP
    Environment:
      server.port:             7080
      spring.profiles.active:  PROD
    Mounts:
      /app/config from vol1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rmt6g (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  vol1:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      configoriginal
    Optional:  false
  default-token-rmt6g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rmt6g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling        default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling        default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
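
No answer was recorded here, but the symptom pattern matches the hostPort: the container pins host port 7080, there is only one node, and the default RollingUpdate strategy starts the replacement pod while the old one still owns the port, so the new pod can never schedule. Scaling to 0 and back to 1 works because the old pod is gone, and the port is free, before the new pod is created. If that is the cause, a Recreate strategy (a sketch, assuming that diagnosis) avoids the deadlock at the cost of a brief outage:
spec:
  strategy:
    type: Recreate   # terminate the old pod (freeing host port 7080) before creating the new one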

How to load ConfigMaps using kubernetes-config extension

I've been following https://quarkus.io/guides/kubernetes-config in order to create a ConfigMap and test my Quarkus service on my CDK v3.5.0-1 before pushing it to OpenShift 3.11, but KubernetesConfigSourceProvider is not happy.
Using:
Quarkus 1.8.1.Final
Java 11
CDK 3.5
Here is the YAML file I want to convert into a ConfigMap by running: oc create configmap quarkus-service-configmap --from-file=application.yml
jaeger_endpoint: http://192.168.56.100:14268/api/traces
jaeger_sampler_manager_host_port: 192.168.56.100:14250
sql_debug: false
quarkus:
  datasource:
    db-kind: h2
    jdbc:
      detect-statement-leaks: true
      driver: io.opentracing.contrib.jdbc.TracingDriver
      enable-metrics: true
      url: jdbc:tracing:h2:./db;AUTO_SERVER=TRUE
      max-size: 13
    metrics:
      enabled: false
    password: sa
    username: sa
  flyway:
    locations: db/prod/migration
    migrate-at-start: true
  hibernate-orm:
    database:
      charset: UTF-8
      generation: none
    dialect: org.hibernate.dialect.H2Dialect
  http:
    port: 6280
  jaeger:
    enabled: true
    endpoint: ${jaeger_endpoint}
    sampler-manager-host-port: ${jaeger_sampler_manager_host_port}
    sampler-param: 1
    sampler-type: const
  resteasy:
    gzip:
      enabled: true
      max-input: 10M
  smallrye-health:
    ui:
      always-include: true
  swagger-ui:
    always-include: true
Here is the generated configMap:
apiVersion: v1
data:
  application.yml: |
    jaeger_endpoint: http://192.168.56.100:14268/api/traces
    jaeger_sampler_manager_host_port: 192.168.56.100:14250
    sql_debug: false
    quarkus:
      datasource:
        db-kind: h2
        jdbc:
          detect-statement-leaks: true
          driver: io.opentracing.contrib.jdbc.TracingDriver
          enable-metrics: true
          url: jdbc:tracing:h2:./db;AUTO_SERVER=TRUE
          max-size: 13
        metrics:
          enabled: false
        password: sa
        username: sa
      flyway:
        locations: db/prod/migration
        migrate-at-start: true
      hibernate-orm:
        database:
          charset: UTF-8
          generation: none
        dialect: org.hibernate.dialect.H2Dialect
      http:
        port: 6280
      jaeger:
        enabled: true
        endpoint: ${jaeger_endpoint}
        sampler-manager-host-port: ${jaeger_sampler_manager_host_port}
        sampler-param: 1
        sampler-type: const
      resteasy:
        gzip:
          enabled: true
          max-input: 10M
      smallrye-health:
        ui:
          always-include: true
      swagger-ui:
        always-include: true
kind: ConfigMap
metadata:
  creationTimestamp: '2020-09-21T17:56:40Z'
  name: quarkus-service-configmap
  namespace: dci
  resourceVersion: '9572968'
  selfLink: >-
    /api/v1/namespaces/dci/configmaps/quarkus-service-configmap
  uid: cd4570ff-fc33-11ea-bff0-080027af1c97
Here is my quarkus-service/src/main/resources/application.yml:
quarkus:
  application:
    name: quarkus-service
  kubernetes-config: # https://quarkus.io/guides/kubernetes-config
    enabled: true
    fail-on-missing-config: true
    config-maps: quarkus-service-configmap
#    secrets: quarkus-service-secrets
  jaeger:
    service-name: ${quarkus.application.name}
  http:
    port: 6280
  log:
    category:
      "io.quarkus.kubernetes.client":
        level: DEBUG
      "io.fabric8.kubernetes.client":
        level: DEBUG
    console:
      format: '%d{HH:mm:ss} %-5p traceId=%X{traceId}, spanId=%X{spanId}, sampled=%X{sampled} [%c{2.}] (%t) %s%e%n'
  native:
    additional-build-args: -H:ReflectionConfigurationFiles=reflection-config.json
'%minishift':
  quarkus:
    kubernetes: # https://quarkus.io/guides/deploying-to-openshift / https://quarkus.io/guides/kubernetes
      deploy: true
      expose: true
    container-image:
      group: dci
      registry: image-registry.openshift-image-registry.svc:5000
The command I run: mvn clean package -Dquarkus.profile=minishift
The result I get:
WARN: Unrecognized configuration key "%s" was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo
Sep 21, 2020 6:36:15 PM io.quarkus.config
WARN: Unrecognized configuration key "%s" was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo
Sep 21, 2020 6:36:15 PM org.hibernate.validator.internal.util.Version
INFO: HV000001: Hibernate Validator %s
Sep 21, 2020 6:36:27 PM io.quarkus.application
ERROR: Failed to start application (with profile minishift)
java.lang.RuntimeException: Unable to obtain configuration for ConfigMap objects from Kubernetes API Server at: https://172.30.0.1:443/
    at io.quarkus.kubernetes.client.runtime.KubernetesConfigSourceProvider.getConfigMapConfigSources(KubernetesConfigSourceProvider.java:85)
    at io.quarkus.kubernetes.client.runtime.KubernetesConfigSourceProvider.getConfigSources(KubernetesConfigSourceProvider.java:45)
    at io.quarkus.runtime.configuration.ConfigUtils.addSourceProvider(ConfigUtils.java:107)
    at io.quarkus.runtime.configuration.ConfigUtils.addSourceProviders(ConfigUtils.java:121)
    at io.quarkus.runtime.generated.Config.readConfig(Config.zig:2060)
    at io.quarkus.deployment.steps.RuntimeConfigSetup.deploy(RuntimeConfigSetup.zig:60)
    at io.quarkus.runner.ApplicationImpl.doStart(ApplicationImpl.zig:509)
    at io.quarkus.runtime.Application.start(Application.java:90)
    at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:91)
    at io.quarkus.runtime.Quarkus.run(Quarkus.java:61)
    at io.quarkus.runtime.Quarkus.run(Quarkus.java:38)
    at io.quarkus.runtime.Quarkus.run(Quarkus.java:106)
    at io.quarkus.runner.GeneratedMain.main(GeneratedMain.zig:29)
Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [ConfigMap] with name: [quarkus-service-configmap] in namespace: [dci] failed.
    at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:64)
    at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:72)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:244)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:187)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:79)
    at io.quarkus.kubernetes.client.runtime.KubernetesConfigSourceProvider.getConfigMapConfigSources(KubernetesConfigSourceProvider.java:69)
    ... 12 more
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.base/java.net.SocketInputStream.socketRead0(Native Method)
    at java.base/java.net.SocketInputStream.socketRead(SocketInputStream.java:115)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:168)
    at java.base/java.net.SocketInputStream.read(SocketInputStream.java:140)
    at java.base/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:467)
    at java.base/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:461)
    at java.base/sun.security.ssl.SSLSocketInputRecord.decode(SSLSocketInputRecord.java:160)
    at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:110)
    at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1403)
    at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1309)
    at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)
    at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:411)
    at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.java:336)
    at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.java:300)
    at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:185)
    at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:224)
    at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java:108)
    at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.java:88)
    at okhttp3.internal.connection.Transmitter.newExchange(Transmitter.java:169)
    at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:41)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
    at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
    at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
    at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
    at io.fabric8.kubernetes.client.utils.BackwardsCompatibilityInterceptor.intercept(BackwardsCompatibilityInterceptor.java:134)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
    at io.fabric8.kubernetes.client.utils.ImpersonatorInterceptor.intercept(ImpersonatorInterceptor.java:68)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
    at io.fabric8.kubernetes.client.utils.HttpClientUtils.lambda$createHttpClient$3(HttpClientUtils.java:147)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
    at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
    at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:229)
    at okhttp3.RealCall.execute(RealCall.java:81)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:490)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:451)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:416)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:397)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:890)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:233)
    ... 15 more
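
No resolution was posted, but two cheap checks for this trace (a sketch; the namespace dci comes from the output above, and the default service account is an assumption): the root cause is a read timeout against https://172.30.0.1:443, the in-cluster API address, which is typically only reachable from inside the cluster, so where the application runs when it reads the ConfigMap matters as much as RBAC.
# Can the service account the pod will use actually read ConfigMaps?
oc auth can-i get configmaps --as=system:serviceaccount:dci:default -n dci
# Which API server address is the active kubeconfig pointing at?
oc config view --minify -o jsonpath='{.clusters[0].cluster.server}'
If the app is starting on the build machine (e.g. during mvn package) rather than in the cluster, the in-cluster address will not be routable and the ConfigMap lookup will time out exactly like this.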