How to pretty-format the Traefik log in JSON?

I have a Traefik service with the following configuration:
version: "3.9"
services:
reverse-proxy:
image: traefik:v2.3.4
networks:
common:
ports:
- target: 80
published: 80
mode: host
- target: 443
published: 443
mode: host
command:
- "--providers.docker.endpoint=unix:///var/run/docker.sock"
- "--providers.docker.swarmMode=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=common"
- "--entrypoints.web.address=:80"
# - "--entrypoints.websecure.address=:443"
- "--global.sendAnonymousUsage=true"
# Set a debug level custom log file
- "--log.level=DEBUG"
- "--log.format=json"
- "--log.filePath=/var/log/traefik.log"
- "--accessLog.filePath=/var/log/access.log"
# Enable the Traefik dashboard
- "--api.dashboard=true"
# - "traefik.constraint-label=common" TODO
deploy:
placement:
constraints:
- node.role == manager
labels:
# Expose the Traefik dashboard
- "traefik.enable=true"
- "traefik.http.routers.dashboard.service=api#internal"
- "traefik.http.services.traefik.loadbalancer.server.port=888" # A port number required by Docker Swarm but not being used in fact
- "traefik.http.routers.dashboard.rule=Host(`traefik.learnintouch.com`)"
- "traefik.http.routers.traefik.entrypoints=web"
# - "traefik.http.routers.traefik.entrypoints=websecure"
# Basic HTTP authentication to secure the dashboard access
- "traefik.http.routers.traefik.middlewares=traefik-auth"
- "traefik.http.middlewares.traefik-auth.basicauth.users=stephane:$$apr1$$m72sBfSg$$7.NRvy75AZXAMtH3C2YTz/"
volumes:
# So that Traefik can listen to the Docker events
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "~/dev/docker/projects/common/volumes/logs/traefik.service.log:/var/log/traefik.log"
- "~/dev/docker/projects/common/volumes/logs/traefik.access.log:/var/log/access.log"
Then I watch the log with the command:
stephane@stephane-pc:~$ tail -f dev/docker/projects/common/volumes/logs/traefik.service.log
{"level":"info","msg":"I have to go...","time":"2021-07-03T10:18:10Z"}
{"level":"info","msg":"Stopping server gracefully","time":"2021-07-03T10:18:10Z"}
{"entryPointName":"web","level":"debug","msg":"Waiting 10s seconds before killing connections.","time":"2021-07-03T10:18:10Z"}
{"entryPointName":"web","level":"error","msg":"accept tcp [::]:80: use of closed network connection","time":"2021-07-03T10:18:10Z"}
I expected the log to be formatted in JSON with indentation.
So I copy-pasted the non-indented JSON output into an online JSON formatter, but it only indented part of it, which made the result useless.

Your problem is that Traefik does not output a single JSON document but one JSON document per line. You can pretty-print each document using xargs and jq:
tail -f dev/docker/projects/common/volumes/logs/traefik.service.log | xargs -n 1 -d "\n" -- bash -c 'echo "$1" | jq' _
In your example, this will result in this output (even with syntax highlighting if your terminal supports that):
{
  "level": "info",
  "msg": "I have to go...",
  "time": "2021-07-03T10:18:10Z"
}
{
  "level": "info",
  "msg": "Stopping server gracefully",
  "time": "2021-07-03T10:18:10Z"
}
{
  "entryPointName": "web",
  "level": "debug",
  "msg": "Waiting 10s seconds before killing connections.",
  "time": "2021-07-03T10:18:10Z"
}
{
  "entryPointName": "web",
  "level": "error",
  "msg": "accept tcp [::]:80: use of closed network connection",
  "time": "2021-07-03T10:18:10Z"
}
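Since jq itself accepts a stream of newline-delimited JSON documents on stdin, you can usually drop the xargs wrapper and pipe the log straight into it:
tail -f dev/docker/projects/common/volumes/logs/traefik.service.log | jq '.'
Both variants produce the same indented output as shown above.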

Related

K3d gives "Error response from daemon: invalid reference format" error

I'm trying to run k3d with a previous version of k8s (v1.20.2, which matches the current version of k8s on OVH). I understand that the correct way of doing this is to specify the image of k3s in the config file. Running this fails with: Error response from daemon: invalid reference format (full logs below).
How can I avoid this error?
Command:
k3d cluster create bitbuyer-cluster --config ./k3d-config.yml
Config:
# k3d-config.yml
apiVersion: k3d.io/v1alpha3
kind: Simple
# version for k8s v1.20.2
image: rancher/k3s:v1.20.11+k3s2
options:
  k3s:
    extraArgs:
      - arg: "--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%"
        nodeFilters:
          - server:*
      - arg: "--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%"
        nodeFilters:
          - server:*
Logs:
# $ k3d cluster create bitbuyer-cluster --trace --config ./k3d-config.yml
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.9 OSType:linux OS:Ubuntu 20.04.3 LTS Arch:x86_64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs}
DEBU[0000] Additional CLI Configuration:
cli:
api-port: ""
env: []
k3s-node-labels: []
k3sargs: []
ports: []
registries:
create: ""
runtime-labels: []
volumes: []
DEBU[0000] Validating file ./k3d-config.yml against default JSONSchema...
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:62}
INFO[0000] Using config file ./k3d-config.yml (k3d.io/v1alpha3#simple)
DEBU[0000] Configuration:
agents: 0
apiversion: k3d.io/v1alpha3
image: rancher/k3s:v1.20.11+k3s2
kind: Simple
network: ""
options:
k3d:
disableimagevolume: false
disableloadbalancer: false
disablerollback: false
loadbalancer:
configoverrides: []
timeout: 0s
wait: true
k3s:
extraargs:
- arg: --kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%
nodeFilters:
- server:*
- arg: --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%
nodeFilters:
- server:*
kubeconfig:
switchcurrentcontext: true
updatedefaultkubeconfig: true
runtime:
agentsmemory: ""
gpurequest: ""
serversmemory: ""
registries:
config: ""
use: []
servers: 1
subnet: ""
token: ""
TRAC[0000] Trying to read config apiVersion='k3d.io/v1alpha3', kind='simple'
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:rancher/k3s:v1.20.11+k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[{Arg:--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% NodeFilters:[server:*]} {Arg:--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% NodeFilters:[server:*]}] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
TRAC[0000] VolumeFilterMap: map[]
TRAC[0000] PortFilterMap: map[]
TRAC[0000] K3sNodeLabelFilterMap: map[]
TRAC[0000] RuntimeLabelFilterMap: map[]
TRAC[0000] EnvFilterMap: map[]
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:43681} Image:rancher/k3s:v1.20.11+k3s2 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[{Arg:--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% NodeFilters:[server:*]} {Arg:--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% NodeFilters:[server:*]}] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
DEBU[0000] generated loadbalancer config:
ports:
6443.tcp:
- k3d-bitbuyer-cluster-server-0
settings:
workerConnections: 1024
TRAC[0000] Filtering 2 nodes by [server:*]
TRAC[0000] Filtered 1 nodes (filter: [server:*])
TRAC[0000] Filtering 2 nodes by [server:*]
TRAC[0000] Filtered 1 nodes (filter: [server:*])
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] ===== Processed Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0000] Prep: Network
DEBU[0000] Found network {Name:k3d-bitbuyer-cluster ID:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 Created:2021-10-15 12:56:38.391159451 +0100 WEST Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[] Config:[{Subnet:172.25.0.0/16 IPRange: Gateway:172.25.0.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[] Labels:map[app:k3d] Peers:[] Services:map[]}
INFO[0000] Re-using existing network 'k3d-bitbuyer-cluster' (f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384)
INFO[0000] Created volume 'k3d-bitbuyer-cluster-images'
TRAC[0000] Using Registries: []
TRAC[0000]
===== Creating Cluster =====
Runtime:
{}
Cluster:
&{Name:bitbuyer-cluster Network:{Name:k3d-bitbuyer-cluster ID:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 External:false IPAM:{IPPrefix:172.25.0.0/16 IPsUsed:[172.25.0.1] Managed:false} Members:[]} Token: Nodes:[0xc00019aa80 0xc00019ac00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000654240 ServerLoadBalancer:0xc0001de690 ImageVolume:k3d-bitbuyer-cluster-images}
ClusterCreatOpts:
&{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}}
============================
INFO[0000] Starting new tools node...
TRAC[0000] Creating node from spec
&{Name:k3d-bitbuyer-cluster-tools Role:noRole Image:docker.io/rancher/k3d-tools:5.0.1 Volumes:[k3d-bitbuyer-cluster-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] Env:[] Cmd:[] Args:[noop] Ports:map[] Restart:false Created: RuntimeLabels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.version:v5.0.1] K3sNodeLabels:map[] Networks:[k3d-bitbuyer-cluster] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
TRAC[0000] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-bitbuyer-cluster-tools Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[noop] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3d-tools:5.0.1 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.role:noRole k3d.version:v5.0.1] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-bitbuyer-cluster-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00064d1cf} NetworkingConfig:{EndpointsConfig:map[k3d-bitbuyer-cluster:0xc0004da000]}}
INFO[0001] Creating node 'k3d-bitbuyer-cluster-server-0'
TRAC[0001] Creating node from spec
&{Name:k3d-bitbuyer-cluster-server-0 Role:server Image:rancher/k3s:v1.20.11+k3s2 Volumes:[k3d-bitbuyer-cluster-images:/k3d/images] Env:[K3S_TOKEN=QEQoybzqvqzTjQkBhTpz] Cmd:[] Args:[--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%] Ports:map[] Restart:true Created: RuntimeLabels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16 k3d.cluster.token:QEQoybzqvqzTjQkBhTpz k3d.cluster.url:https://k3d-bitbuyer-cluster-server-0:6443] K3sNodeLabels:map[] Networks:[k3d-bitbuyer-cluster] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc000654240} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
DEBU[0001] DockerHost:
TRAC[0001] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-bitbuyer-cluster-server-0 Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_TOKEN=QEQoybzqvqzTjQkBhTpz K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[server --kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1% --kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1% --tls-san 0.0.0.0] Healthcheck:<nil> ArgsEscaped:false Image:rancher/k3s:v1.20.11+k3s2 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:bitbuyer-cluster k3d.cluster.imageVolume:k3d-bitbuyer-cluster-images k3d.cluster.network:k3d-bitbuyer-cluster k3d.cluster.network.external:true k3d.cluster.network.id:f5217ad3aa1832d1e942dea8f624a5c48baa5f3009c88aa95fa0ee812108e384 k3d.cluster.network.iprange:172.25.0.0/16 k3d.cluster.token:QEQoybzqvqzTjQkBhTpz k3d.cluster.url:https://k3d-bitbuyer-cluster-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:43681 k3d.version:v5.0.1] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-bitbuyer-cluster-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00035040f} NetworkingConfig:{EndpointsConfig:map[k3d-bitbuyer-cluster:0xc0003740c0]}}
ERRO[0001] Failed Cluster Creation: failed setup of server/agent node k3d-bitbuyer-cluster-server-0: failed to create node: runtime failed to create node 'k3d-bitbuyer-cluster-server-0': failed to create container for node 'k3d-bitbuyer-cluster-server-0': docker failed to create container 'k3d-bitbuyer-cluster-server-0': Error response from daemon: invalid reference format
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'bitbuyer-cluster'
ERRO[0001] failed to get cluster: No nodes found for given cluster
FATA[0001] Cluster creation FAILED, also FAILED to rollback changes!
Kubectl:
# $ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.11", GitCommit:"27522a29febbcc4badac257763044d0d90c11abd", GitTreeState:"clean", BuildDate:"2021-09-15T19:21:44Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
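One thing to check in that config: Docker image references do not allow a + character in the tag, and the k3s images published to Docker Hub use a dash instead (for example rancher/k3s:v1.20.11-k3s2), which matches the "invalid reference format" message. A quick way to verify, assuming the image is pulled from Docker Hub:
# '+' is not a legal character in a Docker tag, so this fails with "invalid reference format"
docker pull rancher/k3s:v1.20.11+k3s2
# the dashed tag is the one that actually exists
docker pull rancher/k3s:v1.20.11-k3s2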

Unable to connect: Communications link failure

I am trying to follow the tutorial Deploying Debezium using the new KafkaConnector resource.
Based on the tutorial, I am also using minikube, but with the docker driver. Basically, I follow it exactly, step by step.
However, for the step "Create the connector", after creating the connector with
cat <<EOF | kubectl -n kafka apply -f -
apiVersion: "kafka.strimzi.io/v1alpha1"
kind: "KafkaConnector"
metadata:
  name: "inventory-connector"
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: 192.168.99.1
    database.port: "3306"
    database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
    database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
    database.server.id: "184054"
    database.server.name: "dbserver1"
    database.whitelist: "inventory"
    database.history.kafka.bootstrap.servers: "my-cluster-kafka-bootstrap:9092"
    database.history.kafka.topic: "schema-changes.inventory"
    include.schema.changes: "true"
EOF
and checking it with
kubectl -n kafka get kctr inventory-connector -o yaml
I got this error:
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"192.168.49.2","database.password":"","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
creationTimestamp: "2021-09-29T18:20:11Z"
generation: 1
labels:
strimzi.io/cluster: my-connect-cluster
name: inventory-connector
namespace: kafka
resourceVersion: "12777"
uid: 083df9a3-83ce-4170-a9bc-9573dafdb286
spec:
class: io.debezium.connector.mysql.MySqlConnector
config:
database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
database.history.kafka.topic: schema-changes.inventory
database.hostname: 192.168.49.2
database.password: ""
database.port: "3306"
database.server.id: "184054"
database.server.name: dbserver1
database.user: ""
database.whitelist: inventory
include.schema.changes: "true"
tasksMax: 1
status:
conditions:
- lastTransitionTime: "2021-09-29T18:20:11.548Z"
message: |-
PUT /connectors/inventory-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
reason: ConnectRestException
status: "True"
type: NotReady
observedGeneration: 1
I tried to change
database.user: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_username}"
database.password: "${file:/opt/kafka/external-configuration/connector-config/debezium-mysql-credentials.properties:mysql_password}"
to
database.user: "debezium"
database.password: "dbz"
directly and re-applied, based on the user and password info in the "Secure the database credentials" step.
Also, based on the description in the tutorial
I’m using database.hostname: 192.168.99.1 as IP address for connecting to MySQL because I’m using minikube with the virtualbox VM driver. If you’re using a different VM driver with minikube you might need a different IP address.
I am actually a little confused by the above description. MySQL in the demo is deployed in Docker, while the rest of the parts, like Kafka, are deployed in minikube. Why does the description about database.hostname mention minikube instead of Docker?
Anyway, when I run minikube ip, I got 192.168.49.2. However, after I change database.hostname to 192.168.49.2, and run kubectl get kctr inventory-connector -o yaml -n kafka, I got
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"192.168.49.2","database.password":"","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
creationTimestamp: "2021-09-29T18:20:11Z"
generation: 1
labels:
strimzi.io/cluster: my-connect-cluster
name: inventory-connector
namespace: kafka
resourceVersion: "12777"
uid: 083df9a3-83ce-4170-a9bc-9573dafdb286
spec:
class: io.debezium.connector.mysql.MySqlConnector
config:
database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
database.history.kafka.topic: schema-changes.inventory
database.hostname: 192.168.49.2
database.password: ""
database.port: "3306"
database.server.id: "184054"
database.server.name: dbserver1
database.user: ""
database.whitelist: inventory
include.schema.changes: "true"
tasksMax: 1
status:
conditions:
- lastTransitionTime: "2021-09-29T18:20:11.548Z"
message: |-
PUT /connectors/inventory-connector/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 1 error(s):
A value is required
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
reason: ConnectRestException
status: "True"
type: NotReady
observedGeneration: 1
I can access MySQL via localhost as it is hosted in Docker.
However, I still get the same error when I change database.hostname to localhost.
Any idea? Thanks!
The issue is that the service running in minikube failed to communicate with the MySQL instance running in Docker.
Regarding how to reach the host's localhost from inside the Kubernetes cluster, I found How to access host's localhost from inside kubernetes cluster.
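For the host-access part, one hedged pointer: with minikube 1.10 or later the guest's /etc/hosts contains a host.minikube.internal entry pointing at the host machine, so you can look up that IP and try it as database.hostname (whether a pod can actually reach it still depends on the driver and on MySQL listening on that interface):
# Print the host IP that minikube maps to host.minikube.internal
minikube ssh "grep host.minikube.internal /etc/hosts"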
However, I ended up deploying MySQL in Kubernetes directly with
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-pv.yaml
kubectl apply -f https://k8s.io/examples/application/mysql/mysql-deployment.yaml
(Copied from https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
with
database.hostname: "mysql.default" # service `mysql` in namespace `default`
database.port: "3306"
database.user: "root"
database.password: "password"
Now when I run
kubectl -n kafka get kctr inventory-connector -o yaml
I got a new error saying MySQL does not have the row-level binlog enabled; however, this means the connector can reach MySQL now.
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnector","metadata":{"annotations":{},"labels":{"strimzi.io/cluster":"my-connect-cluster"},"name":"inventory-connector","namespace":"kafka"},"spec":{"class":"io.debezium.connector.mysql.MySqlConnector","config":{"database.history.kafka.bootstrap.servers":"my-cluster-kafka-bootstrap:9092","database.history.kafka.topic":"schema-changes.inventory","database.hostname":"mysql.default","database.password":"password","database.port":"3306","database.server.id":"184054","database.server.name":"dbserver1","database.user":"root","database.whitelist":"inventory","include.schema.changes":"true"},"tasksMax":1}}
creationTimestamp: "2021-09-29T19:36:52Z"
generation: 1
labels:
strimzi.io/cluster: my-connect-cluster
name: inventory-connector
namespace: kafka
resourceVersion: "2918"
uid: 48bb46e1-42bb-4574-a3dc-221ae7d6a803
spec:
class: io.debezium.connector.mysql.MySqlConnector
config:
database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
database.history.kafka.topic: schema-changes.inventory
database.hostname: mysql.default
database.password: password
database.port: "3306"
database.server.id: "184054"
database.server.name: dbserver1
database.user: root
database.whitelist: inventory
include.schema.changes: "true"
tasksMax: 1
status:
conditions:
- lastTransitionTime: "2021-09-29T19:36:53.605Z"
status: "True"
type: Ready
connectorStatus:
connector:
state: UNASSIGNED
worker_id: 172.17.0.8:8083
name: inventory-connector
tasks:
- id: 0
state: FAILED
trace: "org.apache.kafka.connect.errors.ConnectException: The MySQL server is
not configured to use a row-level binlog, which is required for this connector
to work properly. Change the MySQL configuration to use a row-level binlog
and restart the connector.\n\tat io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:207)\n\tat
io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:49)\n\tat
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:208)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)\n\tat
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat
java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
java.lang.Thread.run(Thread.java:748)\n"
worker_id: 172.17.0.8:8083
type: source
observedGeneration: 1
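A possible follow-up for that binlog error, sketched under the assumption that the Deployment created from mysql-deployment.yaml is named mysql and runs the official MySQL image (whose entrypoint treats flag-only args as mysqld options): enable the binary log in row format, or switch to a pre-configured image such as debezium/example-mysql.
# Add the mysqld flags Debezium needs; the pod restarts with the new settings
kubectl patch deployment mysql --type='json' -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/args",
   "value": ["--server-id=1", "--log-bin=mysql-bin", "--binlog-format=ROW"]}
]'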

Hyperledger Fabric JSON response has backslashes

I'm currently testing a Hyperledger Fabric Application, but I get an unexpected JSON response.
Why are there extra backslashes between every object in the response?
result, err := json.Marshal(history)
logger.Debug(string(result))
if err != nil {
    message := fmt.Sprintf("unable to marshal the result: %s", err.Error())
    logger.Error(message)
    return shim.Error(message)
}
logger.Info("SimpleChaincode.getHistory exited successfully")
return shim.Success(result)
Actual CLI output:
Chaincode invoke successful. result: status:200 payload:"[{\"type\":\"history\",\"key\":\"key\",\"values\":[{\"tx_id\":\"723a398362282d92f7b05b821fc8f835736b6068e5d1b72d105fc86d6e57d64e\",\"value\":\"initial_value\",\"is_delete\":false}]}]"
Expected CLI result:
Chaincode invoke successful.
result: status:200
payload:
[
  {
    "type":"history",
    "key":"key",
    "values":[
      {
        "tx_id":"723a398362282d92f7b05b821fc8f835736b6068e5d1b72d105fc86d6e57d64e",
        "value":"initial_value",
        "is_delete":false
      }
    ]
  }
]
Docker logs:
2020-08-19 14:40:18.823 UTC [SimpleChaincode] Debug -> DEBU 015 [{"type":"history","key":"key","values":[{"tx_id":"723a398362282d92f7b05b821fc8f835736b6068e5d1b72d105fc86d6e57d64e","value":"initial_value","is_delete":false}]}]
2020-08-19 14:40:18.823 UTC [SimpleChaincode] Info -> INFO 016 SimpleChaincode.getHistory exited successfully
Logging format
The logging format of the peer and orderer commands is controlled via the FABRIC_LOGGING_FORMAT environment variable. This can be set to a format string, such as the default
"%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}"
to print the logs in a human-readable console format. It can also be set to json to output logs in JSON format.
Link: https://hyperledger-fabric.readthedocs.io/en/release-2.2/logging-control.html#logging-format
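For a quick check without touching any config files, the variable can simply be exported in the shell that runs the peer binary (a minimal sketch):
# Any subsequent peer command in this shell now emits JSON log lines
export FABRIC_LOGGING_FORMAT=json
export FABRIC_LOGGING_SPEC=INFO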
You can update core.yaml or you can use "FABRIC_LOGGING_FORMAT" in your docker compose file.
An example with core.yaml is given below:
# Logging section for the chaincode container
logging:
  # Default level for all loggers within the chaincode container
  level: info
  # Override default level for the 'shim' logger
  shim: warning
  # Format for the chaincode container logs
  format: json
You can find core.yaml in the "fabric-samples/config" directory of the latest fabric-samples download.
Link: https://github.com/hyperledger/fabric/blob/master/sampleconfig/core.yaml
An example with "FABRIC_LOGGING_FORMAT" in a docker compose file is given below; you have to add "- FABRIC_LOGGING_FORMAT=json" to the environment of the cli container:
cli:
  container_name: cli
  image: hyperledger/fabric-tools:$IMAGE_TAG
  tty: true
  stdin_open: true
  environment:
    - GOPATH=/opt/gopath
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    #- FABRIC_LOGGING_SPEC=DEBUG
    - FABRIC_LOGGING_FORMAT=json
    - FABRIC_LOGGING_SPEC=INFO
    - CORE_PEER_ID=cli
    - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_PEER_TLS_ENABLED=true
    - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
    - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
    - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
    - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: /bin/bash
  volumes:
    - /var/run/:/host/var/run/
    - ./../chaincode/:/opt/gopath/src/github.com/hyperledger/fabric-samples/chaincode
    - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
    - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
    - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
  depends_on:
    - orderer.example.com
    - peer0.org1.example.com
    - peer1.org1.example.com
    - peer0.org2.example.com
    - peer1.org2.example.com
  networks:
    - byfn
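As for the backslashes in the invoke output itself: the peer CLI prints the proposal response payload as an escaped string, while the underlying JSON is fine, as the container log above already shows. For ad-hoc inspection you can usually query the chaincode and pipe the raw payload through jq (channel and argument names below are placeholders):
peer chaincode query -C mychannel -n SimpleChaincode \
  -c '{"Args":["getHistory","key"]}' | jq .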

How to create kubernetes secret as json object and load the same in kubernetes environment as json

I need to pass a JWK as kubernetes environment variable to my app.
I created a file to store my key like so:
cat deploy/keys/access-signature-public-jwk
{
algorithm = "RS256"
jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
Which is then used to create a kubernetes secret like so:
kubectl create secret generic intimations-signature-public-secret --from-file=./deploy/keys/access-signature-public-jwk
Which is then retrieved as a Kubernetes environment variable:
- name: ACCESS_SIGNATURE_PUBLIC_JWK
  valueFrom:
    secretKeyRef:
      name: intimations-signature-public-secret
      key: access-signature-public-jwk
And passed to the application.conf of the application like so:
pac4j.lagom.jwt.authenticator {
  signatures = [
    ${ACCESS_SIGNATURE_PUBLIC_JWK}
  ]
}
The pac4j library expects the config pac4j.lagom.jwt.authenticator as a JSON object, but I get the following exception when I run the app:
com.typesafe.config.ConfigException$WrongType: env variables: signatures has type list of STRING rather than list of OBJECT
at com.typesafe.config.impl.SimpleConfig.getHomogeneousWrappedList(SimpleConfig.java:452)
at com.typesafe.config.impl.SimpleConfig.getObjectList(SimpleConfig.java:460)
at com.typesafe.config.impl.SimpleConfig.getConfigList(SimpleConfig.java:465)
at org.pac4j.lagom.jwt.JwtAuthenticatorHelper.parse(JwtAuthenticatorHelper.java:84)
at com.codingkapoor.holiday.impl.core.HolidayApplication.jwtClient$lzycompute(HolidayApplication.scala
POD Description
Name: holiday-deployment-55b86f955d-9klk2
Namespace: default
Priority: 0
Node: minikube/192.168.99.103
Start Time: Thu, 28 May 2020 12:42:50 +0530
Labels: app=holiday
pod-template-hash=55b86f955d
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/holiday-deployment-55b86f955d
Containers:
holiday:
Container ID: docker://18443cfedc7fd39440f5fa6f038f36c58cec1660a2974e6432500e8c7d51f5e6
Image: codingkapoor/holiday-impl:latest
Image ID: docker://sha256:6e0ddcf41e0257755b7e865424671970091d555c4bad88b5d896708ded139eb7
Port: 8558/TCP
Host Port: 0/TCP
State: Terminated
Reason: Error
Exit Code: 255
Started: Thu, 28 May 2020 22:49:24 +0530
Finished: Thu, 28 May 2020 22:49:29 +0530
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Thu, 28 May 2020 22:44:15 +0530
Finished: Thu, 28 May 2020 22:44:21 +0530
Ready: False
Restart Count: 55
Liveness: http-get http://:management/alive delay=20s timeout=1s period=10s #success=1 #failure=10
Readiness: http-get http://:management/ready delay=20s timeout=1s period=10s #success=1 #failure=10
Environment:
JAVA_OPTS: -Xms256m -Xmx256m -Dconfig.resource=prod-application.conf
APPLICATION_SECRET: <set to the key 'secret' in secret 'intimations-application-secret'> Optional: false
MYSQL_URL: jdbc:mysql://mysql/intimations_holiday_schema
MYSQL_USERNAME: <set to the key 'username' in secret 'intimations-mysql-secret'> Optional: false
MYSQL_PASSWORD: <set to the key 'password' in secret 'intimations-mysql-secret'> Optional: false
ACCESS_SIGNATURE_PUBLIC_JWK: <set to the key 'access-signature-public-jwk' in secret 'intimations-signature-public-secret'> Optional: false
REFRESH_SIGNATURE_PUBLIC_JWK: <set to the key 'refresh-signature-public-jwk' in secret 'intimations-signature-public-secret'> Optional: false
REQUIRED_CONTACT_POINT_NR: 1
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kqmmv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-kqmmv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kqmmv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 5m21s (x23 over 100m) kubelet, minikube Container image "codingkapoor/holiday-impl:latest" already present on machine
Warning BackOff 27s (x466 over 100m) kubelet, minikube Back-off restarting failed container
I was wondering if there is any way to pass the environment variable as a JSON object instead of a string. Please suggest. TIA.
First, the file access-signature-public-jwk is not valid JSON. You should update it to be valid, for example:
{
"algorithm" : "RS256",
"jwk" : {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
Steps I followed to validate:
kubectl create secret generic token1 --from-file=jwk.json
Mount the secret into the pod.
env:
  - name: JWK
    valueFrom:
      secretKeyRef:
        name: token1
        key: jwk.json
Exec into the pod and check the env variable JWK:
$ echo $JWK
{ "algorithm" : "RS256", "jwk" : {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
Copy the content to a file
echo $JWK > jwk.json
Validate the file
$ jsonlint-php jwk.json
Valid JSON (jwk.json)
If I use the file you have given and follow the same steps, it gives a JSON validation error. Also, env variables are always strings; you have to convert them into the required types in your code.
$ echo $JWK
{ algorithm = "RS256" jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
$ echo $JWK > jwk.json
$ jsonlint-php jwk.json
jwk.json: Parse error on line 1:
{ algorithm = "RS256"
-^
Expected one of: 'STRING', '}'
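A quick way to catch this before creating the secret is to run the file through jq locally; it fails for the original HOCON-style file and passes once the file is rewritten as plain JSON:
jq . deploy/keys/access-signature-public-jwk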
Although not a direct answer, here is an alternate solution to this problem.
As @hariK pointed out, environment variables are always strings, and in order to consume them as JSON we would need to convert the env var read as a string into JSON.
However, in my case this was not a viable solution because I was using a lib that expected a Config object and not a JSON object directly, which would have meant a lot of work: converting string -> json -> Config. Plus, this approach is inconsistent with how the Config object is built in development scenarios, i.e., json -> Config. See here.
The framework I am using to build this app is based on Play Framework, which allows you to modularize application configs into separate files and then pull the required pieces together into the desired config file, as shown below. You can read about it in more detail here.
application.conf
include "/opt/conf/app1.conf"
include "/opt/conf/app2.conf"
This allowed me to make use of the "Using Secrets as files from a Pod" feature of Kubernetes.
Basically, I created a small config file that contains part of my main application configuration, as shown below:
cat deploy/keys/signature-public-jwk
pac4j.lagom.jwt.authenticator {
signatures = [
{
algorithm = "RS256"
jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
]
}
Then I created a Kubernetes secret and mounted it as a volume in the deployment so that it appears in the pod as a file:
kubectl create secret generic signature-public-secret --from-file=./deploy/secrets/signature-public-jwks.conf
// deployment yaml
spec:
  containers:
    - name: employee
      image: "codingkapoor/employee-impl:latest"
      volumeMounts:
        - name: signature-public-secret-conf
          mountPath: /opt/conf/signature-public-jwks.conf
          subPath: signature-public-jwks.conf
          readOnly: true
  volumes:
    - name: signature-public-secret-conf
      secret:
        secretName: signature-public-secret
Use this mounted file location in application.conf to include it:
include file("/opt/conf/signature-public-jwks.conf")
Notice that the mountPath and the file location in application.conf are the same.
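To double-check the mount, read the file back from a running pod; the app=employee label here is just an assumed selector for the deployment above:
POD=$(kubectl get pods -l app=employee -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- cat /opt/conf/signature-public-jwks.conf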
Advantages of this approach:
The solution is consistent across the development, test, and production environments, as we can hand JSON to the lib instead of a string, as explained above.
Secrets shouldn't be passed as environment variables anyway! You can read more about it here.

Openshift templates with array parameters

I am trying to create an Openshift template for a Job that passes the job's command line arguments in a template parameter using the following template:
apiVersion: v1
kind: Template
metadata:
  name: test-template
objects:
  - apiVersion: batch/v2alpha1
    kind: Job
    metadata:
      name: "${JOB_NAME}"
    spec:
      parallelism: 1
      completions: 1
      autoSelector: true
      template:
        metadata:
          name: "${JOB_NAME}"
        spec:
          containers:
            - name: "app"
              image: "batch-poc/sample-job:latest"
              args: "${{JOB_ARGS}}"
parameters:
  - name: JOB_NAME
    description: "Job Name"
    required: true
  - name: JOB_ARGS
    description: "Job command line parameters"
Because the 'args' need to be an array, I am trying to set the template parameter using JSON syntax, e.g. from the command line:
oc process -o=yaml test-template -v=JOB_NAME=myjob,JOB_ARGS='["A","B"]'
or programmatically through the Spring Cloud Launcher OpenShift Client:
OpenShiftClient client;
Map<String,String> templateParameters = new HashMap<String,String>();
templateParameters.put("JOB_NAME", jobId);
templateParameters.put("JOB_ARGS", "[ \"A\", \"B\", \"C\" ]");
KubernetesList processed = client.templates()
        .inNamespace(client.getNamespace())
        .withName("test-template")
        .process(templateParameters);
In both cases, it seems to fail because Openshift is interpreting the comma after the first array element as a delimiter and not parsing the remainder of the string.
The oc process command sets the parameter value to '["A"' and reports an error: "invalid parameter assignment in "test-template": "\"B\"]"".
The Java version throws an exception:
Error executing: GET at: https://kubernetes.default.svc/oapi/v1/namespaces/batch-poc/templates/test-template. Cause: Can not deserialize instance of java.util.ArrayList out of VALUE_STRING token\n at [Source: N/A; line: -1, column: -1] (through reference chain: io.fabric8.openshift.api.model.Template[\"objects\"]->java.util.ArrayList[0]->io.fabric8.kubernetes.api.model.Job[\"spec\"]->io.fabric8.kubernetes.api.model.JobSpec[\"template\"]->io.fabric8.kubernetes.api.model.PodTemplateSpec[\"spec\"]->io.fabric8.kubernetes.api.model.PodSpec[\"containers\"]->java.util.ArrayList[0]->io.fabric8.kubernetes.api.model.Container[\"args\"])
I believe this is due to a known Openshift issue.
I was wondering if anyone has a workaround or an alternative way of setting the job's parameters?
Interestingly, if I go to the OpenShift web console, click 'Add to Project' and choose test-template, it prompts me to enter a value for the JOB_ARGS parameter. If I enter a literal JSON array there, it works, so I figure there must be a way to do this programmatically.
We worked out how to do it; template snippet:
spec:
  securityContext:
    supplementalGroups: "${{SUPPLEMENTAL_GROUPS}}"
parameters:
  - description: Supplemental linux groups
    name: SUPPLEMENTAL_GROUPS
    value: "[14051, 14052, 48, 65533, 9050]"
In our case we have three files:
- an environment configuration,
- a template YAML,
- a shell script which runs oc process.
A working setup looks like this:
environment file:
#-- CORS ---------------------------------------------------------
cors_origins='["*"]'
cors_acceptable_headers='["*","Authorization"]'
template yaml:
- apiVersion: configuration.konghq.com/v1
  kind: KongPlugin
  metadata:
    name: plugin-common-cors
    annotations:
      kubernetes.io/ingress.class: ${ingress_class}
  config:
    origins: "${{origins}}"
    headers: "${{acceptable_headers}}"
    credentials: true
    max_age: 3600
  plugin: cors
shell script running oc:
if [ -f templates/kong-plugins-template.yaml ]; then
  echo "++ Applying Global Plugin Template ..."
  oc process -f templates/kong-plugins-template.yaml \
    -p ingress_class="${kong_ingress_class}" \
    -p origins=${cors_origins} \
    -p acceptable_headers=${cors_acceptable_headers} \
    -p request_per_second=${kong_throttling_request_per_second:-100} \
    -p request_per_minute=${kong_throttling_request_per_minute:-2000} \
    -p rate_limit_by="${kong_throttling_limit_by:-ip}" \
    -o yaml \
    > yaml.tmp && \
  cat yaml.tmp | oc $param_mode -f -
  [ $? -ne 0 ] && [ "$param_mode" != "delete" ] && exit 1
  rm -f *.tmp
fi
The shell script should source the environment file before running oc process.
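For completeness, a minimal way to do that at the top of the script; the environment file path is just an example:
# Load cors_origins, cors_acceptable_headers and the kong_* variables
source ./environments/dev.env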