Logging configuration for dropwizard to log everything into one file with custom log format - logback

I'm using Dropwizard 0.9.2 and had to upgrade Jackson to 2.7.3 to get the following configuration to at least start:
server:
  applicationConnectors:
    - type: http
      port: 8080
  adminConnectors:
    - type: http
      port: 8081
  requestLog:
    timeZone: UTC
    appenders:
      - type: console
        logFormat: "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
      - type: file
        currentLogFilename: ./logs/vrp-app.log
        archivedLogFilenamePattern: ./logs/vrp-app.%d.log.gz
        logFormat: "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
logging:
  level: INFO
  loggers:
    "org.hibernate":
      level: DEBUG
  appenders:
    - type: console
      timeZone: UTC
      logFormat: "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
    - type: file
      timeZone: UTC
      currentLogFilename: ./logs/vrp-app.log
      archivedLogFilenamePattern: ./logs/vrp-app.%d.log.gz
      archivedFileCount: 5
      logFormat: "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
But the log format of the request log is still different from the rest:
15:46:42.717 [main] INFO org.eclipse.jetty.server.Server - Started @3751ms
127.0.0.1 - - [14/Jun/2016:13:46:48 +0000] "GET /users HTTP/1.1" 415 56 "-" "curl/7.35.0" 136
Keeping only the "- type" lines in the requestLog section, in the hope that the rest of the config is picked up from the logging section, also does not work.
What am I doing wrong?
This should work according to this discussion
And is it okay to specify the same log file for the request log and the 'normal' log? Why are they separated at all?

The requestLog logFormat option is not implemented in 0.9.2. You have to move to the 1.0.0-rc3 (latest) Dropwizard version.
Also note that the format syntax is a bit different, as specified in http://logback.qos.ch/access.html
More reference: https://www.loggly.com/ultimate-guide/apache-logging-basics/
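For illustration only (this snippet is not from the question, and the exact keys should be checked against the 1.0.x docs), a requestLog with a custom format in 1.0.x might look like the sketch below; note that the conversion words come from logback-access (%h, %t, %r, %s, %b, %i{...}, %D) rather than from classic logback (%d, %thread, %logger, %msg):
server:
  requestLog:
    appenders:
      - type: file
        currentLogFilename: ./logs/vrp-app-requests.log
        archivedLogFilenamePattern: ./logs/vrp-app-requests.%d.log.gz
        # logback-access pattern, roughly the "combined" format plus elapsed time in ms
        logFormat: '%h %l %u [%t] "%r" %s %b "%i{Referer}" "%i{User-Agent}" %D'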

Related

K3d gives "Error response from daemon: invalid reference format" error

I'm trying to run k3d with a previous version of k8s (v1.20.2, which matches the current version of k8s on OVH). I understand that the correct way of doing this is to specify the image of k3s in the config file. Running this fails with: Error response from daemon: invalid reference format (full logs below).
How can I avoid this error?
Command:
k3d cluster create bitbuyer-cluster --config ./k3d-config.yml
Config:
# k3d-config.yml
apiVersion: k3d.io/v1alpha3
kind: Simple
# version for k8s v1.20.2
image: rancher/k3s:v1.20.11+k3s2
options:
  k3s:
    extraArgs:
      - arg: "--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%"
        nodeFilters:
          - server:*
      - arg: "--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%"
        nodeFilters:
          - server:*
Logs:
# $ k3d cluster create bitbuyer-cluster --trace --config ./k3d-config.yml
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.9 OSType:linux OS:Ubuntu 20.04.3 LTS Arch:x86_64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs}
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports: []
  registries:
    create: ""
  runtime-labels: []
  volumes: []
DEBU[0000] Validating file ./k3d-config.yml against default JSONSchema...
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:62}
INFO[0000] Using config file ./k3d-config.yml (k3d.io/v1alpha3#simple)
DEBU[0000] Configuration:
agents: 0
apiversion: k3d.io/v1alpha3
image: rancher/k3s:v1.20.11+k3s2
kind: Simple
network: ""
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 0s
    wait: true
  k3s:
    extraargs:
      - arg: --kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%
        nodeFilters:
          - server:*
      - arg: --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%
        nodeFilters:
          - server:*
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    serversmemory: ""
registries:
  config: ""
  use: []
servers: 1
subnet: ""
token: ""
TRAC[0000] Trying to read config apiVersion='k3d.io/v1alpha3', kind='simple'
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:rancher/k3s:v1.20.11+k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[{Arg:--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% NodeFilters:[server:*]} {Arg:--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% NodeFilters:[server:*]}] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
TRAC[0000] VolumeFilterMap: map[]
TRAC[0000] PortFilterMap: map[]
TRAC[0000] K3sNodeLabelFilterMap: map[]
TRAC[0000] RuntimeLabelFilterMap: map[]
TRAC[0000] EnvFilterMap: map[]
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:43681} Image:rancher/k3s:v1.20.11+k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[{Arg:--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% NodeFilters:[server:*]} {Arg:--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% NodeFilters:[server:*]}] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
DEBU[0000] generated loadbalancer config:
ports:
  6443.tcp:
    - k3d-bitbuyer-cluster-server-0
settings:
  workerConnections: 1024
TRAC[0000] Filtering 2 nodes by [server:*]
TRAC[0000] Filtered 1 nodes (filter: [server:*])
TRAC[0000] Filtering 2 nodes by [server:*]
TRAC[0000] Filtered 1 nodes (filter: [server:*])
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] ===== Processed Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0000] Prep: Network
DEBU[0000] Found network {Name:k3d-bitbuyer-cluster ID:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 Created:2021-10-15 12:56:38.391159451 +0100 WEST Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[] Config:[{Subnet:172.25.0.0/16 IPRange: Gateway:172.25.0.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[] Labels:map[app:k3d] Peers:[] Services:map[]}
INFO[0000] Re-using existing network 'k3d-bitbuyer-cluster' (f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384)
INFO[0000] Created volume 'k3d-bitbuyer-cluster-images'
TRAC[0000] Using Registries: []
TRAC[0000]
===== Creating Cluster =====
Runtime:
{}
Cluster:
&{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 External:false IPAM:{IPPrefix:172.25.0.0/16 IPsUsed:[172.25.0.1] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:k3d-bitbuyer-cluster-images}
ClusterCreatOpts:
&{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}}
============================
INFO[0000] Starting new tools node...
TRAC[0000] Creating node from spec
&{Name:k3d-bitbuyer-cluster-tools Role:noRole Image:docker.io/rancher/k3d-tools:5.0.1 Volumes:[k3d-bitbuyer-cluster-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] Env:[] Cmd:[] Args:[noop] Ports:map[] Restart:false Created: RuntimeLabels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.version:v5.0.1] K3sNodeLabels:map[] Networks:[k3d-bitbuyer-cluster] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
TRAC[0000] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-bitbuyer-cluster-tools Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[noop] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3d-tools:5.0.1 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.role:noRole k3d.version:v5.0.1] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-bitbuyer-cluster-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00064d1cf} NetworkingConfig:{EndpointsConfig:map[k3d-bitbuyer-cluster:0xc0004da000]}}
INFO[0001] Creating node 'k3d-bitbuyer-cluster-server-0'
TRAC[0001] Creating node from spec
&{Name:k3d-bitbuyer-cluster-server-0 Role:server Image:rancher/k3s:v1.20.11+k3s2 Volumes:[k3d-bitbuyer-cluster-images:/k3d/images] Env:[K3S_TOKEN=QEQoybzqvqzTjQkBhTpz] Cmd:[] Args:[--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%] Ports:map[] Restart:true Created: RuntimeLabels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16 k3d.cluster.token:QEQoybzqvqzTjQkBhTpz k3d.cluster.url:https://k3d-bitbuyer-cluster-server-0:6443] K3sNodeLabels:map[] Networks:[k3d-bitbuyer-cluster] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc000654240} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
DEBU[0001] DockerHost:
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-bitbuyer-cluster-server-0 Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_TOKEN=QEQoybzqvqzTjQkBhTpz K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[server --kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% --tls-san 0.0.0.0] Healthcheck:<nil> ArgsEscaped:false Image:rancher/k3s:v1.20.11+k3s2 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16 k3d.cluster.token:QEQoybzqvqzTjQkBhTpz k3d.cluster.url:https://k3d-bitbuyer-cluster-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:43681 k3d.version:v5.0.1] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-bitbuyer-cluster-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00035040f} NetworkingConfig:{EndpointsConfig:map[k3d-bitbuyer-cluster:0xc0003740c0]}}
ERRO[0001] Failed Cluster Creation: failed setup of server/agent node k3d-bitbuyer-cluster-server-0: failed to create node: runtime failed to create node 'k3d-bitbuyer-cluster-server-0': failed to create container for node 'k3d-bitbuyer-cluster-server-0': docker failed to create container 'k3d-bitbuyer-cluster-server-0': Error response from daemon: invalid reference format
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'bitbuyer-cluster'
ERRO[0001] failed to get cluster: No nodes found for given cluster
FATA[0001] Cluster creation FAILED, also FAILED to rollback changes!
Kubectl:
# $ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11", GitCommit:"27522a29febbcc4badac257763044d0d90c11abd", GitTreeState:"clean", BuildDate:"2021-09-15T19:21:44Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

How to create a Kubernetes secret as a JSON object and load it into the Kubernetes environment as JSON

I need to pass a JWK as a Kubernetes environment variable to my app.
I created a file to store my key like so:
cat deploy/keys/access-signature-public-jwk
{
algorithm = "RS256"
jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
Which is then used to create a kubernetes secret like so:
kubectl create secret generic intimations-signature-public-secret --from-file=./deploy/keys/access-signature-public-jwk
Which is then retrieved into a Kubernetes environment variable as:
- name: ACCESS_SIGNATURE_PUBLIC_JWK
  valueFrom:
    secretKeyRef:
      name: intimations-signature-public-secret
      key: access-signature-public-jwk
And passed to the application.conf of the application like so:
pac4j.lagom.jwt.authenticator {
signatures = [
${ACCESS_SIGNATURE_PUBLIC_JWK}
]
}
The pac4j library expects the config pac4j.lagom.jwt.authenticator as a JSON object, but I get the following exception when I run the app:
com.typesafe.config.ConfigException$WrongType: env variables: signatures has type list of STRING rather than list of OBJECT
at com.typesafe.config.impl.SimpleConfig.getHomogeneousWrappedList(SimpleConfig.java:452)
at com.typesafe.config.impl.SimpleConfig.getObjectList(SimpleConfig.java:460)
at com.typesafe.config.impl.SimpleConfig.getConfigList(SimpleConfig.java:465)
at org.pac4j.lagom.jwt.JwtAuthenticatorHelper.parse(JwtAuthenticatorHelper.java:84)
at com.codingkapoor.holiday.impl.core.HolidayApplication.jwtClient$lzycompute(HolidayApplication.scala
POD Description
Name: holiday-deployment-55b86f955d-9klk2
Namespace: default
Priority: 0
Node: minikube/192.168.99.103
Start Time: Thu, 28 May 2020 12:42:50 +0530
Labels: app=holiday
pod-template-hash=55b86f955d
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/holiday-deployment-55b86f955d
Containers:
holiday:
Container ID: docker://18443cfedc7fd39440f5fa6f038f36c58cec1660a2974e6432500e8c7d51f5e6
Image: codingkapoor/holiday-impl:latest
Image ID: docker://sha256:6e0ddcf41e0257755b7e865424671970091d555c4bad88b5d896708ded139eb7
Port: 8558/TCP
Host Port: 0/TCP
State: Terminated
Reason: Error
Exit Code: 255
Started: Thu, 28 May 2020 22:49:24 +0530
Finished: Thu, 28 May 2020 22:49:29 +0530
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Thu, 28 May 2020 22:44:15 +0530
Finished: Thu, 28 May 2020 22:44:21 +0530
Ready: False
Restart Count: 55
Liveness: http-get http://:management/alive delay=20s timeout=1s period=10s #success=1 #failure=10
Readiness: http-get http://:management/ready delay=20s timeout=1s period=10s #success=1 #failure=10
Environment:
JAVA_OPTS: -Xms256m -Xmx256m -Dconfig.resource=prod-application.conf
APPLICATION_SECRET: <set to the key 'secret' in secret 'intimations-application-secret'> Optional: false
MYSQL_URL: jdbc:mysql://mysql/intimations_holiday_schema
MYSQL_USERNAME: <set to the key 'username' in secret 'intimations-mysql-secret'> Optional: false
MYSQL_PASSWORD: <set to the key 'password' in secret 'intimations-mysql-secret'> Optional: false
ACCESS_SIGNATURE_PUBLIC_JWK: <set to the key 'access-signature-public-jwk' in secret 'intimations-signature-public-secret'> Optional: false
REFRESH_SIGNATURE_PUBLIC_JWK: <set to the key 'refresh-signature-public-jwk' in secret 'intimations-signature-public-secret'> Optional: false
REQUIRED_CONTACT_POINT_NR: 1
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kqmmv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-kqmmv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kqmmv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 5m21s (x23 over 100m) kubelet, minikube Container image "codingkapoor/holiday-impl:latest" already present on machine
Warning BackOff 27s (x466 over 100m) kubelet, minikube Back-off restarting failed container
I was wondering if there is any way to pass the environment variable as a JSON object instead of a string. Please suggest. TIA.
First, the file access-signature-public-jwk is not a valid JSON file. You should update it to be valid JSON:
{
"algorithm" : "RS256",
"jwk" : {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
Steps I followed to validate.
kubectl create secret generic token1 --from-file=jwk.json
Mount the secret into the pod.
env:
  - name: JWK
    valueFrom:
      secretKeyRef:
        name: token
        key: jwk.json
Exec into the pod and check the env variable JWK:
$ echo $JWK
{ "algorithm" : "RS256", "jwk" : {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
Copy the content to a file
echo $JWK > jwk.json
Validate the file
$ jsonlint-php jwk.json
Valid JSON (jwk.json)
If I use the file you have given and follow the same steps, it gives a JSON validation error. Also, env variables are always strings; you have to convert them into the required types in your code.
$ echo $JWK
{ algorithm = "RS256" jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
$ echo $JWK > jwk.json
$ jsonlint-php jwk.json
jwk.json: Parse error on line 1:
{ algorithm = "RS256"
-^
Expected one of: 'STRING', '}'
Although this is not a direct answer, here is an alternate solution to the problem.
As @hariK pointed out, environment variables are always strings, and in order to consume them as JSON we would need to convert the env var, read as a string, into JSON.
However, in my case this was not a viable solution, because I was using a lib that expects a Config object rather than a JSON object, which would have meant a lot of work: converting string -> JSON -> Config. This approach is also inconsistent with how the Config object is built in the development scenarios, i.e., JSON -> Config. See here.
The framework I am using to build this app is based on Play Framework, which allows you to modularize application configs into separate files and then combine the required pieces in a desired config file, as shown below. You can read about it in more detail here.
application.conf
include "/opt/conf/app1.conf"
include "/opt/conf/app2.conf"
This allowed me to make use of Kubernetes' "Using Secrets as files from a Pod" feature.
Basically, I created a small config file that contains a part of my main application configuration file, as shown below:
cat deploy/keys/signature-public-jwk
pac4j.lagom.jwt.authenticator {
signatures = [
{
algorithm = "RS256"
jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
]
}
Then I created a Kubernetes secret and mounted it as a volume in the deployment so that it appears in the pod as a file:
kubectl create secret generic signature-public-secret --from-file=./deploy/secrets/signature-public-jwks.conf
// deployment yaml
spec:
  containers:
    - name: employee
      image: "codingkapoor/employee-impl:latest"
      volumeMounts:
        - name: signature-public-secret-conf
          mountPath: /opt/conf/signature-public-jwks.conf
          subPath: signature-public-jwks.conf
          readOnly: true
  volumes:
    - name: signature-public-secret-conf
      secret:
        secretName: signature-public-secret
Then use this mounted file location in the application.conf to include it:
include file("/opt/conf/signature-public-jwks.conf")
Notice that the mountPath and the file location in the application.conf are the same.
Advantages of this approach:
The solution is consistent across the development, test, and production environments, since we pass JSON instead of a string to the lib, as explained above.
Secrets shouldn't be passed as environment variables anyway! You can read more about it here.

How to set up Kubernetes for Spring and MySQL

I am following this tutorial: https://medium.com/better-programming/kubernetes-a-detailed-example-of-deployment-of-a-stateful-application-de3de33c8632
I created a MySQL pod and a backend pod, but the application gets the error com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
Pod mysql: Running
Pod backend: CrashLoopBackOff
Dockerfile
FROM openjdk:14-ea-8-jdk-alpine3.10
ADD target/credit-0.0.1-SNAPSHOT.jar .
EXPOSE 8200
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-Dspring.profiles.active=container","-jar","/credit-0.0.1-SNAPSHOT.jar"]
credit-deployment.yml
# Define 'Service' to expose backend application deployment
apiVersion: v1
kind: Service
metadata:
  name: to-do-app-backend
spec:
  selector:             # backend application pod labels should match these
    app: to-do-app
    tier: backend
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8080
  type: LoadBalancer    # use NodePort, if you are not running Kubernetes on cloud
---
# Configure 'Deployment' of backend application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: to-do-app-backend
  labels:
    app: to-do-app
    tier: backend
spec:
  replicas: 2           # Number of replicas of back-end application to be deployed
  selector:
    matchLabels:        # backend application pod labels should match these
      app: to-do-app
      tier: backend
  template:
    metadata:
      labels:           # Must match 'Service' and 'Deployment' labels
        app: to-do-app
        tier: backend
    spec:
      containers:
        - name: to-do-app-backend
          image: gitim21/credit_repo:1.0   # docker image of backend application
          env:                             # Setting environment variables
            - name: DB_HOST                # Setting Database host address from configMap
              valueFrom:
                configMapKeyRef:
                  name: db-conf            # name of configMap
                  key: host
            - name: DB_NAME                # Setting Database name from configMap
              valueFrom:
                configMapKeyRef:
                  name: db-conf
                  key: name
            - name: DB_USERNAME            # Setting Database username from Secret
              valueFrom:
                secretKeyRef:
                  name: db-credentials     # Secret Name
                  key: username
            - name: DB_PASSWORD            # Setting Database password from Secret
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
          ports:
            - containerPort: 8080
application.yml
spring:
  datasource:
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      idle-timeout: 10000
    platform: mysql
    username: ${DB_USERNAME}
    password: ${DB_PASSWORD}
    url: jdbc:mysql://${DB_HOST}/${DB_NAME}
  jpa:
    hibernate:
      naming:
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
I placed the application.yml file in the application's "resources" folder.
EDIT
Name: mysql-64c7df597c-s4gbt
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.8.160
Start Time: Thu, 12 Sep 2019 17:50:18 +0200
Labels: app=mysql
pod-template-hash=64c7df597c
tier=database
Annotations: <none>
Status: Running
IP: 172.17.0.5
Controlled By: ReplicaSet/mysql-64c7df597c
Containers:
mysql:
Container ID: docker://514d3f5af76f5e7ac11f6bf6e36b44ee4012819dc1cef581829a6b5b2ce7c09e
Image: mysql:5.7
Image ID: docker-pullable://mysql@sha256:1a121f2e7590f949b9ede7809395f209dd9910e331e8372e6682ba4bebcc020b
Port: 3306/TCP
Host Port: 0/TCP
Args:
--ignore-db-dir=lost+found
State: Running
Started: Thu, 12 Sep 2019 17:50:19 +0200
Ready: True
Restart Count: 0
Environment:
MYSQL_ROOT_PASSWORD: <set to the key 'password' in secret 'db-root-credentials'> Optional: false
MYSQL_USER: <set to the key 'username' in secret 'db-credentials'> Optional: false
MYSQL_PASSWORD: <set to the key 'password' in secret 'db-credentials'> Optional: false
MYSQL_DATABASE: <set to the key 'name' of config map 'db-conf'> Optional: false
Mounts:
/var/lib/mysql from mysql-persistent-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
mysql-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mysql-pv-claim
ReadOnly: false
default-token-rgsmp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rgsmp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49m default-scheduler Successfully assigned default/mysql-64c7df597c-s4gbt to minikube
Normal Pulled 49m kubelet, minikube Container image "mysql:5.7" already present on machine
Normal Created 49m kubelet, minikube Created container mysql
Normal Started 49m kubelet, minikube Started container mysql
Name: to-do-app-backend-8669b5467-hrr9q
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/192.168.8.160
Start Time: Thu, 12 Sep 2019 18:27:45 +0200
Labels: app=to-do-app
pod-template-hash=8669b5467
tier=backend
Annotations: <none>
Status: Running
IP: 172.17.0.7
Controlled By: ReplicaSet/to-do-app-backend-8669b5467
Containers:
to-do-app-backend:
Container ID: docker://1eb8453939710aed7a93cddbd5046f49be3382858aa17d5943195207eaeb3065
Image: gitim21/credit_repo:1.0
Image ID: docker-pullable://gitim21/credit_repo@sha256:1fb2991394fc59f37068164c72263749d64cb5c9fe741021f476a65589f40876
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 12 Sep 2019 18:51:25 +0200
Finished: Thu, 12 Sep 2019 18:51:36 +0200
Ready: False
Restart Count: 9
Environment:
DB_HOST: <set to the key 'host' of config map 'db-conf'> Optional: false
DB_NAME: <set to the key 'name' of config map 'db-conf'> Optional: false
DB_USERNAME: <set to the key 'username' in secret 'db-credentials'> Optional: false
DB_PASSWORD: <set to the key 'password' in secret 'db-credentials'> Optional: false
DB_PORT: <set to the key 'port' in secret 'db-credentials'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-rgsmp (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-rgsmp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rgsmp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned default/to-do-app-backend-8669b5467-hrr9q to minikube
Normal Pulled 23m (x5 over 25m) kubelet, minikube Container image "gitim21/credit_repo:1.0" already present on machine
Normal Created 23m (x5 over 25m) kubelet, minikube Created container to-do-app-backend
Normal Started 23m (x5 over 25m) kubelet, minikube Started container to-do-app-backend
Warning BackOff 50s (x104 over 25m) kubelet, minikube Back-off restarting failed container
First and foremost, make sure that you fulfil all the requirements described in the article.
When the deployment objects (e.g. pods, services) are created, environment variables are injected from the ConfigMaps and Secrets created earlier. This deployment uses the image kubernetesdemo/to-do-app-backend, which is created in step one. Make sure you've created the ConfigMap and Secrets beforehand; otherwise, delete the objects created during deployment, create the ConfigMap and Secret, and then run the deployment config file once again.
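If the ConfigMap and Secret are missing, they can be created along the lines of the sketch below. The values are placeholders (not taken from the question); the names db-conf and db-credentials and the keys host, name, username and password are the ones the posted deployment references.
# Sketch only - replace the placeholder values with your own.
kubectl create configmap db-conf \
  --from-literal=host=mysql \
  --from-literal=name=todo_db
kubectl create secret generic db-credentials \
  --from-literal=username=demo-user \
  --from-literal=password=demo-password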
Another possibility: if you get the error
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
it means that the DB isn't reachable at all. This can have one or more of the following causes:
1. IP address or hostname in JDBC URL is wrong.
2. Hostname in JDBC URL is not recognized by the local DNS server.
3. Port number is missing or wrong in JDBC URL.
4. DB server is down.
5. DB server doesn't accept TCP/IP connections.
6. DB server has run out of connections.
7. Something in between Java and the DB is blocking connections, e.g. a firewall or proxy.
I assume that if your mysql pod is running, your DB server is running, so cause 4 ("DB server is down") can be ruled out here.
To solve one or the other, follow the advice below (an in-cluster DNS check for the first two causes is sketched after this list):
Verify and test them with ping. Refresh DNS or use IP address in JDBC URL instead.
Check if it is based on my.cnf of MySQL DB.
Start the DB once again. Check if mysqld is started without the --skip-networking option.
Restart the DB and fix your code accordingly that it closes connections in finally.
Disable firewall and/or configure firewall/proxy to allow/forward the port.
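As a rough check for the first two causes, you can verify that the DB hostname used in the JDBC URL resolves from inside the cluster with a throwaway pod. This is only a sketch: "mysql" is an assumed Service name and should be replaced with the value stored under the host key of db-conf.
# Throwaway busybox pod: check that the DB hostname resolves in-cluster.
kubectl run dbcheck --rm -it --restart=Never --image=busybox -- nslookup mysql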
You can find a similar error discussed here: communication-error.

Spring boot app scale up with mysql

I have created a Spring Boot app and a database using MySQL, then Dockerised and deployed it. Below is my docker-compose.yml:
version: '2'
services:
  seat_reservation_service:
    image: springio/seat_reservation_service
    ports:
      - "8090:8090"
    environment:
      - SPRING_PROFILES_ACTIVE=docker
  seat_reservation_sql:
    image: mysql:5.7
    ports:
      - 33306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=seat-reservation-query
This is my Spring application.yml file:
server:
  port: 8090
spring:
  profiles: docker
  main:
    banner-mode: 'off'
  datasource:
    url: jdbc:mysql://seat_reservation_sql:3306/seat-reservation-query?useSSL=false
    username: root
    password: root
    validation-query: SELECT 1
    test-on-borrow: true
  jpa:
    show_sql: false
    hibernate:
      ddl-auto: update
      dialect: org.hibernate.dialect.MySQL5
    properties:
      hibernate:
        cache:
          use_second_level_cache: false
          use_query_cache: false
        generate_statistics: false
  data:
    rest:
      base-path: /api/
  rabbitmq:
    host: rabbitmq-1
    username: test
    password: password
logging:
  level:
    org.springframework: false
    org.hibernate: ERROR
  path: logs/prod/
axon:
  amqp:
    exchange: SeatReserveEvents
  eventhandling:
    processors:
      statistics.source: statisticsQueue
My problem is that I need more replicas of the seat_reservation_service service. If I scale up seat_reservation_service, all the replicas refer to the same database. According to microservice architecture, I need a separate database for each replica. How can I do that?
If I use an in-memory database, it can be done.
According to micro-service architecture I need separate database for each replica. How can I do that?
This "rule" refers to the microservice types, not to the instances of the same microservice. So, you can scale separately the seat_reservation_service and seat_reservation_sql. For example, you could have 4 instances of seat_reservation_service and 3 instances of seat_reservation_sql (1 master and 2 slaves or a Galera cluster).

Dropwizard RequestLog

I have a strange exception while integration-testing my Dropwizard 0.9.2 service. The exception and configuration are below.
I don't understand why requestLog is unknown. The DW documentation says that the configuration part below should work. The requestLog can be found in
io.dropwizard.server.AbstractServerFactory.class
and
io.dropwizard.server.DefaultServerFactory.class
extends it, so it should be possible to use requestLog. What's wrong here?
Does someone already know this problem?
Configuration Part
server:
  requestLog:
    timeZone: UTC
    appenders:
      - type: console
        threshold: DEBUG
      - type: file
        currentLogFilename: ./log/access.log
        threshold: ALL
        archive: true
        archivedLogFilenamePattern: ./log/access.%d.log.gz
        archivedFileCount: 14
  maxThreads: 1024
  minThreads: 8
  maxQueuedRequests: 1024
  applicationConnectors:
    - type: http
      port: 80
  adminConnectors:
    - type: http
      port: 12345
Exception
java.lang.RuntimeException: io.dropwizard.configuration.ConfigurationParsingException: myService.yml has an error:
* Unrecognized field at: server.requestLog
Did you mean?:
- adminConnectors
- adminContextPath
- adminMaxThreads
- adminMinThreads
- applicationConnectors
[1 more]
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at io.dropwizard.testing.DropwizardTestSupport.startIfRequired(DropwizardTestSupport.java:214)
at io.dropwizard.testing.DropwizardTestSupport.before(DropwizardTestSupport.java:115)
at io.dropwizard.testing.junit.DropwizardAppRule.before(DropwizardAppRule.java:87)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:46)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: io.dropwizard.configuration.ConfigurationParsingException: myservice.yml has an error:
* Unrecognized field at: server.requestLog
Did you mean?:
- adminConnectors
- adminContextPath
- adminMaxThreads
- adminMinThreads
- applicationConnectors
[1 more]
at io.dropwizard.configuration.ConfigurationParsingException$Builder.build(ConfigurationParsingException.java:271)
at io.dropwizard.configuration.ConfigurationFactory.build(ConfigurationFactory.java:163)
at io.dropwizard.configuration.ConfigurationFactory.build(ConfigurationFactory.java:95)
at io.dropwizard.cli.ConfiguredCommand.parseConfiguration(ConfiguredCommand.java:115)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:64)
at io.dropwizard.testing.DropwizardTestSupport.startIfRequired(DropwizardTestSupport.java:212)
... 11 more
Caused by: com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "requestLog" (class io.dropwizard.server.DefaultServerFactory), not marked as ignorable (6 known properties: "adminMaxThreads", "adminConnectors", "applicationConnectors", "applicationContextPath", "adminMinThreads", "adminContextPath"])
at [Source: N/A; line: -1, column: -1] (through reference chain: .....
We had the same issue. It was a problem of using an older version of Jackson. We upgraded from v2.6.3 to v2.6.5 and the error was gone.
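If the project is built with Maven, one way to force the newer version is a dependencyManagement entry. This is only a sketch using the version mentioned above; the right version for your Dropwizard release may differ.
<!-- Sketch: pin jackson-databind so an older transitive copy (e.g. 2.6.3) is overridden. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.6.5</version>
    </dependency>
  </dependencies>
</dependencyManagement>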
Try to put timeZone: UTC under each appender like this:
server:
  requestLog:
    appenders:
      - type: console
        threshold: DEBUG
        timeZone: UTC
      - type: file
        currentLogFilename: ./log/access.log
        threshold: ALL
        archive: true
        archivedLogFilenamePattern: ./log/access.%d.log.gz
        archivedFileCount: 14
        timeZone: UTC
  maxThreads: 1024
  minThreads: 8
  maxQueuedRequests: 1024
  applicationConnectors:
    - type: http
      port: 80
  adminConnectors:
    - type: http
      port: 12345
The same error arose for me with more recent versions of jackson, when I added a dependency that pulled in com.fasterxml.jackson.core:jackson-databind:jar:2.7.2
Excluding jackson-databind from that dependency in pom.xml resolved it.
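A sketch of such an exclusion follows; the outer coordinates are placeholders for whichever dependency pulls in the old jackson-databind (mvn dependency:tree can show which one that is).
<!-- Find the offending dependency first, e.g.: -->
<!--   mvn dependency:tree -Dincludes=com.fasterxml.jackson.core:jackson-databind -->
<dependency>
  <groupId>some.group</groupId>                  <!-- placeholder -->
  <artifactId>offending-artifact</artifactId>    <!-- placeholder -->
  <version>1.0.0</version>                       <!-- placeholder -->
  <exclusions>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
    </exclusion>
  </exclusions>
</dependency>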