Filebeat and JSON logs from Kubernetes not working

I've set up a sample Kubernetes cluster using minikube, with Elasticsearch and Kibana 6.8.6 and Filebeat 7.5.1.
My application generates log messages in JSON format: {"@timestamp":"2019-12-30T21:59:48+0000","message":"example","data":"data-462"}
I can see the log message in Kibana, but my JSON log is embedded inside the "message" attribute as a string.
I configured json.keys_under_root: true to no effect (as stated in the documentation: https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-log.html#filebeat-input-log-config-json).
My configuration:
filebeat.yml: |-
  migration.6_to_7.enabled: true
  filebeat.config:
    modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
  filebeat.autodiscover:
    providers:
      - type: kubernetes
        hints.enabled: true
        hints.default_config.enabled: false
        json.keys_under_root: true
        json.add_error_key: true
  output.elasticsearch:
    hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
    username: ${ELASTICSEARCH_USERNAME}
    password: ${ELASTICSEARCH_PASSWORD}
kubernetes.yml: |-
  - type: docker
    containers.ids:
      - "*"
    processors:
      - add_kubernetes_metadata:
          in_cluster: true
I need the "message" and "data" fields to show up as separate fields in Kibana.
What am I missing?

Try adding json.message_key: message to your Filebeat configuration.
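A minimal sketch of where those json options could go when using autodiscover hints. It assumes you enable the hints default config (the snippet in the question disables it); the container path with ${data.kubernetes.container.id} follows the Filebeat autodiscover docs rather than the question:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log
        json.keys_under_root: true
        json.add_error_key: true
        json.message_key: message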

Related

Parsing k8s docker container json log correctly with Filebeat 7.9.3

I'm working with Filebeat 7.9.3 as a DaemonSet on k8s.
I'm not able to parse the Docker container logs of a Spring Boot app that writes JSON logs to stdout.
Every row of the Spring Boot app's log is written this way:
{ "@timestamp": "2020-11-16T13:39:57.760Z", "log.level": "INFO", "message": "Checking comment 'se' done = true", "service.name": "conduit-be-moderator", "event.dataset": "conduit-be-moderator.log", "process.thread.name": "http-nio-8081-exec-2", "log.logger": "it.koopa.app.ModeratorController", "transaction.id": "1ed5c62964ff0cc2", "trace.id": "20b4b28a3817c9494a91de8720522972"}
But the corresponding Docker log file under /var/log/containers/ writes the log this way:
{
  "log": "{\"@timestamp\":\"2020-11-16T11:27:32.273Z\", \"log.level\": \"INFO\", \"message\":\"Checking comment 'a'\", \"service.name\":\"conduit-be-moderator\",\"event.dataset\":\"conduit-be-moderator.log\",\"process.thread.name\":\"http-nio-8081-exec-4\",\"log.logger\":\"it.koopa.app.ModeratorController\",\"transaction.id\":\"9d3ad972dba65117\",\"trace.id\":\"8373edba92808d5e838e07c7f34af6c7\"}\n",
  "stream": "stdout",
  "time": "2020-11-16T11:27:32.274816903Z"
}
I always receive this in the Filebeat logs:
Error decoding JSON: json: cannot unmarshal number into Go value of type map[string]interface {}
This is my Filebeat config, which tries to parse the JSON log message from the Docker logs. I'm using decode_json_fields to try to catch the Elasticsearch standard fields (the app logs with co.elastic.logging.logback.EcsEncoder):
filebeat.yml: |-
  filebeat.inputs:
    - type: container
      #json.keys_under_root: true
      json.overwrite_keys: true
      json.add_error_key: true
      json.message_key: log
      paths:
        - /var/log/containers/*.log
      include_lines: "conduit-be-moderator"
      processors:
        - decode_json_fields:
            fields: ["log"]
            overwrite_keys: true
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            in_cluster: true
            matchers:
              - logs_path:
                  logs_path: "/var/log/containers/"
  processors:
    - add_cloud_metadata:
    - add_host_metadata:
How can I do this?
As processors are applied before the JSON parser of the input, you will first need to configure the decode_json_fields processor, which will allow you to decode your json.log field. You will then be able to apply the json configuration of the input to the message field. Something like:
filebeat.yml: |-
  filebeat.inputs:
    - type: container
      json.keys_under_root: true
      json.overwrite_keys: true
      json.add_error_key: true
      json.message_key: message
      paths:
        - /var/log/containers/*.log
      include_lines: "conduit-be-moderator"
      processors:
        - decode_json_fields:
            fields: ['log']
            expand_keys: true
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            in_cluster: true
            matchers:
              - logs_path:
                  logs_path: "/var/log/containers/"
  processors:
    - add_cloud_metadata:
    - add_host_metadata:
This configuration assumes that all your logs use JSON format. Otherwise you will probably need to add an exclude or include regex pattern; a sketch is below.
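For example, a hedged sketch of an include pattern on the container input; the regex here is only an illustration, not taken from the answer:
filebeat.inputs:
  - type: container
    # only keep lines that start with a JSON object; adjust to your mix of logs
    include_lines: ['^\{']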

request response data when using taurus junit-xml module

I am trying to clean up the JMeter docker + CI pipeline of our functional tests. I see Taurus has a clean way to run JMeter scripts in a container, and it does the heavy lifting of downloading the version of JMeter I want and installing the plugins my scripts use - excellent.
Now I need to generate the reports in junit.xml so I can keep the reporting consistent. Up until now I was using a modified fork of https://github.com/tguzik/m2u to convert JTL reports to junit.xml.
I would appreciate any help with how I can get the request and response (code & body) for all samples into junit.xml (at least for the failed samples).
I tried a few variations of the Taurus YAML ...
reporting:
  - module: console
  - module: final_stats
    summary: true
    percentiles: true
    test-duration: true
  - module: junit-xml
    filename: report/report.xml
    data-source: sample-labels
reporting:
  - module: console
  - module: final_stats
    summary: true
    percentiles: true
    test-duration: true
  - module: passfail
  - module: junit-xml
    filename: report/report.xml
    data-source: pass-fail
I also added a few passfail criteria variations on the passfail module; that did not help.
After fiddling with this for a few hours, I believe there is no clean way to get anything meaningful into the junit.xml report from the junit-xml module in Taurus. It appears barebones. I also noticed that it could mess up the default Jenkins JUnit plugin test result summary.
So I settled on the following YAML settings and continued to use m2u.jar to convert the JTL to junit.xml:
modules:
  jmeter:
    path: ~/.bzt/jmeter-taurus/bin/jmeter
    download-link: https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-{version}.zip
    version: 5.3
    force-ctg: true
    detect-plugins: true
    plugins:
      - jpgc-json=2.2
      - jmeter-ftp
      - jpgc-casutg
    xml-jtl-flags:
      xml: true
      fieldNames: true
      time: true
      timestamp: true
      latency: true
      connectTime: false
      success: true
      label: true
      code: true
      message: true
      threadName: true
      dataType: false
      encoding: false
      assertions: true
      subresults: true
      responseData: false
      samplerData: false
      responseHeaders: false
      requestHeaders: true
      responseDataOnError: true
      saveAssertionResultsFailureMessage: true
      bytes: true
      threadCounts: false
      url: true

execution:
  - write-xml-jtl: full
    scenario:
      script: v_jmxfilename
      properties:
        environment: v_env

reporting:
  - module: console
  - module: final_stats
    summary: true
    percentiles: true
    test-duration: true
  # - module: junit-xml
  #   filename: report/junit-report.xml
  #   data-source: sample-labels
As per the JUnit-XML-Reporter documentation, this is currently not possible:
This reporter provides test results in JUnit XML format parseable by the Jenkins JUnit Plugin. The reporter has two options:
filename (full path to report file, optional; by default xunit.xml in the artifacts dir)
data-source (which data source to use: sample-labels or pass-fail)
If sample-labels is used as the source data, the report will contain URLs with test errors. If pass-fail is used as the source data, the report will contain Pass/Fail criteria information. Please note that you have to place the pass-fail module in the reporters list, before the junit-xml module.
Taurus is not only for JMeter; it supports many more tools, and not all of them provide the possibility to store request and response data. So the options I can think of are:
Add a Listener to your Test Plan and choose what metrics you need to store into a separate file; the easiest one to use is the Flexible File Writer
Use the ShellExec Service to run your m2u.jar from the Taurus config YAML, as sketched below
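A minimal sketch of such a ShellExec service in the Taurus YAML. The jar location, the JTL path, and the m2u command-line flags are assumptions, so check the m2u README and your artifacts directory for the exact invocation:
services:
  - module: shellexec
    post-process:
      # assumed jar location and assumed m2u flags; runs after the test finishes
      - java -jar /opt/m2u/m2u.jar --input report/kpi.jtl --output report/junit-report.xml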

How to create kubernetes secret as json object and load the same in kubernetes environment as json

I need to pass a JWK as a Kubernetes environment variable to my app.
I created a file to store my key, like so:
cat deploy/keys/access-signature-public-jwk
{
algorithm = "RS256"
jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
Which is then used to create a kubernetes secret like so:
kubectl create secret generic intimations-signature-public-secret --from-file=./deploy/keys/access-signature-public-jwk
Which is then retrieved in the Kubernetes environment variable as:
- name: ACCESS_SIGNATURE_PUBLIC_JWK
  valueFrom:
    secretKeyRef:
      name: intimations-signature-public-secret
      key: access-signature-public-jwk
And passed to the application.conf of the application like so:
pac4j.lagom.jwt.authenticator {
  signatures = [
    ${ACCESS_SIGNATURE_PUBLIC_JWK}
  ]
}
The pac4j library expects the config pac4j.lagom.jwt.authenticator as a JSON object, but I get the following exception when I run the app:
com.typesafe.config.ConfigException$WrongType: env variables: signatures has type list of STRING rather than list of OBJECT
at com.typesafe.config.impl.SimpleConfig.getHomogeneousWrappedList(SimpleConfig.java:452)
at com.typesafe.config.impl.SimpleConfig.getObjectList(SimpleConfig.java:460)
at com.typesafe.config.impl.SimpleConfig.getConfigList(SimpleConfig.java:465)
at org.pac4j.lagom.jwt.JwtAuthenticatorHelper.parse(JwtAuthenticatorHelper.java:84)
at com.codingkapoor.holiday.impl.core.HolidayApplication.jwtClient$lzycompute(HolidayApplication.scala
POD Description
Name: holiday-deployment-55b86f955d-9klk2
Namespace: default
Priority: 0
Node: minikube/192.168.99.103
Start Time: Thu, 28 May 2020 12:42:50 +0530
Labels: app=holiday
pod-template-hash=55b86f955d
Annotations: <none>
Status: Running
IP: 172.17.0.5
IPs:
IP: 172.17.0.5
Controlled By: ReplicaSet/holiday-deployment-55b86f955d
Containers:
holiday:
Container ID: docker://18443cfedc7fd39440f5fa6f038f36c58cec1660a2974e6432500e8c7d51f5e6
Image: codingkapoor/holiday-impl:latest
Image ID: docker://sha256:6e0ddcf41e0257755b7e865424671970091d555c4bad88b5d896708ded139eb7
Port: 8558/TCP
Host Port: 0/TCP
State: Terminated
Reason: Error
Exit Code: 255
Started: Thu, 28 May 2020 22:49:24 +0530
Finished: Thu, 28 May 2020 22:49:29 +0530
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Thu, 28 May 2020 22:44:15 +0530
Finished: Thu, 28 May 2020 22:44:21 +0530
Ready: False
Restart Count: 55
Liveness: http-get http://:management/alive delay=20s timeout=1s period=10s #success=1 #failure=10
Readiness: http-get http://:management/ready delay=20s timeout=1s period=10s #success=1 #failure=10
Environment:
JAVA_OPTS: -Xms256m -Xmx256m -Dconfig.resource=prod-application.conf
APPLICATION_SECRET: <set to the key 'secret' in secret 'intimations-application-secret'> Optional: false
MYSQL_URL: jdbc:mysql://mysql/intimations_holiday_schema
MYSQL_USERNAME: <set to the key 'username' in secret 'intimations-mysql-secret'> Optional: false
MYSQL_PASSWORD: <set to the key 'password' in secret 'intimations-mysql-secret'> Optional: false
ACCESS_SIGNATURE_PUBLIC_JWK: <set to the key 'access-signature-public-jwk' in secret 'intimations-signature-public-secret'> Optional: false
REFRESH_SIGNATURE_PUBLIC_JWK: <set to the key 'refresh-signature-public-jwk' in secret 'intimations-signature-public-secret'> Optional: false
REQUIRED_CONTACT_POINT_NR: 1
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kqmmv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-kqmmv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kqmmv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 5m21s (x23 over 100m) kubelet, minikube Container image "codingkapoor/holiday-impl:latest" already present on machine
Warning BackOff 27s (x466 over 100m) kubelet, minikube Back-off restarting failed container
I was wondering if there is any way to pass the environment variable as a JSON object instead of a string. Please suggest. TIA.
First, the file access-signature-public-jwk is not valid JSON. You should update it to be valid JSON:
{
"algorithm" : "RS256",
"jwk" : {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
Steps I followed to validate.
kubectl create secret generic token1 --from-file=jwk.json
Mount the secret into the pod.
env:
  - name: JWK
    valueFrom:
      secretKeyRef:
        name: token
        key: jwk.json
Exec into the pod and check the env variable JWK:
$ echo $JWK
{ "algorithm" : "RS256", "jwk" : {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
Copy the content to a file
echo $JWK > jwk.json
Validate the file
$ jsonlint-php jwk.json
Valid JSON (jwk.json)
If I use the file you have given and follow the same steps, it gives a JSON validation error. Also, env variables are always strings; you have to convert them into the required types in your code.
$ echo $JWK
{ algorithm = "RS256" jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"} }
$ echo $JWK > jwk.json
$ jsonlint-php jwk.json
jwk.json: Parse error on line 1:
{ algorithm = "RS256"
-^
Expected one of: 'STRING', '}'
This is not a direct answer, but an alternate solution to this problem.
As @hariK pointed out, environment variables are always strings, and in order to consume them as JSON we would need to convert the env var read as a string into JSON.
However, in my case this was not a viable solution because I was using a lib that expects a Config object and not a JSON object directly, which would have meant a lot of work: converting string -> json -> Config. Plus this approach is inconsistent with how the Config object is built in the development scenarios, i.e., json -> Config. See here.
The framework I am using to build this app is based on Play Framework, which allows you to modularize application configs into separate files and then club the required pieces together in the desired config file, as shown below. You can read about it in more detail here.
application.conf
include "/opt/conf/app1.conf"
include "/opt/conf/app2.conf"
This allowed me to make use of the "Using Secrets as files from a Pod" feature of Kubernetes.
Basically, I created a small config file that contains a part of my main application configuration file, as shown below:
cat deploy/keys/signature-public-jwk
pac4j.lagom.jwt.authenticator {
signatures = [
{
algorithm = "RS256"
jwk = {"kty":"RSA","e":"AQAB","n":"ghhDZxuUo6TaSvAlD23mLP6n_T9pQuJsFY4JWdBYTjtcp_8Q3QeR477jou4cScPGczWw2JMGnx-Ao_b7ewagSl7VHpECBFHgcnlAgs5j6jfnd3M9ADKD2Yc756iXlIMT9xKDblIcXQQYlXalqxGvnLRLv1KAgVVVpVWzQd6Iz8WdTnexVrh7L9N87QQbOWcAVWGHCWCLCBsVE7JbC-XDt9h9P1g1sMqMV-qp7HjSXUKWuF2NwOnL2VeFSED7gdefs2Za1UYqhfwxdGl7aaPDXhjib0cfg4NvbcXMzxDEVkeJqhdDfD82wHOs4qFvnFMVxq9n6VVExSxsJq8gBJ7Z2AmfoXpmZC1L1ZwULB2KKpFXDCzgBELPLrfyIf8mNnk2nuuLT-aaMsqy2uB-ea3du4lyWo9MLk6x-L5g-n1oADKFKBY9aP2QQwruCG92XSd7jA9yLtbgr9OGVCYezxIxFp4vW6KcmPwJQjozWtwkZjeo4hv-zhRac73WDox2hDkif7WPTuEvC21fRy3GvyPIUPKPJA8pJjb2TXT7DXknR97CTnOWicuh3HMoRlVIwUzM5SVLGSXex0VjHZKgLYwQYukg5O2rab_4NxpD6LqLHx1bbPssC7BedCIfWX1Vcae40tlfvJAM09MiwQPZjWRahW_fK_9X5F5_rtUhCznm32M"}
}
]
}
Then I created a Kubernetes secret and mounted it as a volume in the deployment, so that it appears in the pod as a file:
kubectl create secret generic signature-public-secret --from-file=./deploy/secrets/signature-public-jwks.conf
# deployment yaml
spec:
  containers:
    - name: employee
      image: "codingkapoor/employee-impl:latest"
      volumeMounts:
        - name: signature-public-secret-conf
          mountPath: /opt/conf/signature-public-jwks.conf
          subPath: signature-public-jwks.conf
          readOnly: true
  volumes:
    - name: signature-public-secret-conf
      secret:
        secretName: signature-public-secret
Then use this mounted file location in the application.conf to include it:
include file("/opt/conf/signature-public-jwks.conf")
Notice that the mountPath and the file location in the application.conf are the same.
Advantages of this approach:
The solution is consistent across the development, test, and production environments, as we can pass json instead of a string to the lib, as explained above
Secrets shouldn't be passed as environment variables anyway! You can read more about it here.

How to read json file using filebeat and send it to elasticsearch via logstash

This is my JSON log file. I'm trying to store the file in Elasticsearch through Logstash.
{"message":"IM: Orchestration","level":"info"}
{"message":"Investment Management","level":"info"}
Here is my filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - D:/Development_Avecto/test-log/tn-logs/im.log
    json.keys_under_root: true
    json.add_error_key: true
    processors:
      - decode_json_fields:
          fields: ["message"]

output.logstash:
  hosts: ["localhost:5044"]
And here is my Logstash pipeline configuration:
input {
  beats {
    port => "5044"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "data"
  }
}
I am not able to view the output in Elasticsearch, and I'm not able to find what the error is.
filebeat log
2019-06-18T11:30:03.448+0530 INFO registrar/registrar.go:134 Loading registrar data from D:\Development_Avecto\filebeat-6.6.2-windows-x86_64\data\registry
2019-06-18T11:30:03.448+0530 INFO registrar/registrar.go:141 States Loaded from registrar: 10
2019-06-18T11:30:03.448+0530 WARN beater/filebeat.go:367 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2019-06-18T11:30:03.448+0530 INFO crawler/crawler.go:72 Loading Inputs: 1
2019-06-18T11:30:03.448+0530 INFO log/input.go:138 Configured paths: [D:\Development_Avecto\test-log\tn-logs\im.log]
2019-06-18T11:30:03.448+0530 INFO input/input.go:114 Starting input of type: log; ID: 16965758110699470044
2019-06-18T11:30:03.449+0530 INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2019-06-18T11:30:34.842+0530 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":312,"time":{"ms":312}},"total":{"ticks":390,"time":{"ms":390},"value":390},"user":{"ticks":78,"time":{"ms":78}}},"handles":{"open":213},"info":{"ephemeral_id":"66983518-39e6-461c-886d-a1f99da6631d","uptime":{"ms":30522}},"memstats":{"gc_next":4194304,"memory_alloc":2963720,"memory_total":4359488,"rss":22421504}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"total":1}}},"registrar":{"states":{"current":10,"update":1},"writes":{"success":1,"total":1}},"system":{"cpu":{"cores":4}}}}}
https://www.elastic.co/guide/en/ecs-logging/dotnet/master/setup.html
Check step 3 at the bottom of the page for the config you need to put in your filebeat.yaml file:
filebeat.inputs:
  - type: log
    paths: /path/to/logs.json
    json.keys_under_root: true
    json.overwrite_keys: true
    json.add_error_key: true
    json.expand_keys: true
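With the JSON decoded by Filebeat itself, the Logstash json filter on message from the question is no longer needed (it may even add _jsonparsefailure tags once message holds plain text). The output section can stay as in the question; a minimal sketch, reusing the host from the question:
output.logstash:
  hosts: ["localhost:5044"]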

Ansible "set_fact" repository url from json file using filters like "from_json"

Using the Ansible "set_fact" module, I need to get a repository URL from a JSON file using filters like "from_json". I tried a couple of ways, and I still don't get how it should work.
- name: initial validation
  tags: bundle
  hosts: localhost
  connection: local
  tasks:
    - name: register bundle version_file
      include_vars:
        file: '/ansible/playbook/workbench-bundle/bundle.json'
      register: bundle
    - name: debug registered bundle file
      debug:
        msg: '{{ bundle }}'
I get the JSON that I wanted:
TASK [debug registered bundle file] ************************************************
ok: [127.0.0.1] => {
    "msg": {
        "ansible_facts": {
            "engine-config": "git@bitbucket.org/engine-config.git",
            "engine-monitor": "git@bitbucket.org/engine-monitor.git",
            "engine-server": "git@bitbucket.org/engine-server.git",
            "engine-worker": "git@bitbucket.org/engine-worker.git"
        },
        "changed": false
    }
}
And then I'm trying to select each value by key name, to use that value as the URL to "npm install" each package in separate instances.
- name: set_fact some paramater
  set_fact:
    engine_url: "{{ bundle.('engine-server') | from_json }}"
And then I get an error:
fatal: [127.0.0.1]: FAILED! => {"failed": true, "msg": "template error while templating string: expected name or number. String: {{ bundle.('engine-server') }}"}
I tried many other ways, like this lookup, and it still fails with other errors. Can someone help me understand how I can find each parameter and store it as a fact? Thanks.
Here is a sample of working code to set a variable like in the question (although I don't see much sense in it). Note that keys containing hyphens, like engine-server, have to be accessed with bracket notation rather than dot notation:
- name: initial validation
  tags: bundle
  hosts: localhost
  connection: local
  tasks:
    - name: register bundle version_file
      include_vars:
        file: '/ansible/playbook/workbench-bundle/bundle.json'
        name: bundle
    - debug:
        var: bundle
    - debug:
        var: bundle['engine-server']
    - name: set_fact some paramater
      set_fact:
        engine_url: "{{ bundle['engine-server'] }}"
The above assumes your input data (which you did not include) is:
{
  "engine-config": "git@bitbucket.org/engine-config.git",
  "engine-monitor": "git@bitbucket.org/engine-monitor.git",
  "engine-server": "git@bitbucket.org/engine-server.git",
  "engine-worker": "git@bitbucket.org/engine-worker.git"
}