Incorporate JSON into YAML with Indentation

I'm trying to incorporate a JSON object into a YAML file.
The YAML looks like this:
filebeat.inputs:
- type: log
  <incorporate here with a single level indent>
  enabled: true
  paths:
Suppose you have the following variable:
a = { processors: { drop_event: { when: { or: [ {equals: { status: 500 }},{equals: { status: -1 }}]}}}}
I want to incorporate it into an existing YAML.
I've tried to use:
JSON.parse((a).to_json).to_yaml
After applying this, I got valid YAML, but without indentation (all of the inserted lines need to be indented one level) and with a leading "---", which is the document-start marker Ruby emits.
The result:
filebeat.inputs:
- type: log
---
processors:
  drop_event:
    when:
      or:
      - equals:
          status: 500
      - equals:
          status: -1
  enabled: true
The result I'm looking for:
filebeat.inputs:
- type: log
  processors:
    drop_event:
      when:
        or:
        - equals:
            status: 500
        - equals:
            status: -1
  enabled: true

It's easier to produce a valid Ruby object by merging hashes and then serialize the result to YAML than to do it the other way around (patching the YAML text).
require "json"
require "yaml"

# assumption: `yaml` holds the parsed list of inputs, e.g. something like
#   yaml = YAML.load_file("filebeat.yml")["filebeat.inputs"]
puts(yaml.map do |hash|
  hash.each_with_object({}) do |(k, v), acc|
    # the trick: we insert before the "enabled" key
    acc.merge!(JSON.parse(a.to_json)) if k == "enabled"
    # regular assignment for all hash elements
    acc[k] = v
  end
end.to_yaml)
Results in:
---
- type: log
  processors:
    drop_event:
      when:
        or:
        - equals:
            status: 500
        - equals:
            status: -1
  enabled: true
JSON.parse(a.to_json) basically converts the symbol keys to strings.
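For illustration, a minimal round trip (hash trimmed to one branch, output abbreviated):

require "json"

a = { processors: { drop_event: { when: { or: [{ equals: { status: 500 } }] } } } }

# the JSON round trip turns every symbol key into a string key,
# matching the string keys that YAML.load produces for the original file
JSON.parse(a.to_json)
# => {"processors"=>{"drop_event"=>{"when"=>{"or"=>[{"equals"=>{"status"=>500}}]}}}}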

In order to do that, first you need to load your original YAML into a Ruby object:
original = YAML.load(File.read(File.join('...', 'filebeat.inputs')))
# => [
  {
    "type": "log",
    "enabled": true,
    "paths": null
  }
]
Then you have to merge your hash into this original structure:
original[0].merge!(a.stringify_keys)
original.to_yaml
# =>
---
-
  type: log
  enabled: true
  paths:
  processors:
    drop_event:
      when:
        or:
        - equals:
            status: 500
        - equals:
            status: -1
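Note that stringify_keys comes from ActiveSupport (Rails) and only converts the top-level keys. A plain-Ruby sketch of the same merge, reusing the JSON round trip from the first answer so the nested keys become strings as well (it assumes original and a are defined as above):

require "json"
require "yaml"

# JSON round trip stringifies keys at every nesting level
original[0].merge!(JSON.parse(a.to_json))
puts original.to_yaml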

Related

ansible - parsing JSON with jmesquery

I'm querying Splunk via the uri module with a certain search:
- name: splunk query
  uri:
    url: https://localhost:8089/servicesNS/admin/search/search/jobs/export?output_mode=json
    method: POST
    user: ...
    password: ...
    validate_certs: false
    body_format: form-urlencoded
    return_content: true
    headers:
      Content-Type: application/json
    body:
      - [ search, "{{ splunk_search }}" ]
  vars:
    - splunk_search: '| ...'
  register: me
Splunk returns a 200, with content:
TASK [debug] ********************************************************************************************************************************************************************************************
ok: [host1] => {
"msg": "{\n \"title\": \"clientMessageId\",\n \"app\": \"whatever\"\n}\n{\n \"title\": \"a_title\",\n \"app\": \"some_app\"\n}\n{\n \"title\": \"another_title\",\n \"app\": \"my_app\"\n}\n{\n \"title\": \"title_for_app\",\n \"app\": \"another_app\"\n}"
}
However, I can't properly parse the output. I've tried:
- name: query result from content
  debug:
    msg: "{{ me.content | json_query('title') }}"
In this case, Ansible will return an empty string. I assume something is wrong with the formatting of the output.
Q: How can I configure Ansible to properly parse the JSON output, so that a new list is created based on the output?
Example:
my_list:
  - title: clientMessageId
    app: whatever
  - title: a_title
    app: some_app
  ...
What I did was change the output_mode in the URL, so that Splunk returns the content as CSV instead of JSON.
url: https://localhost:8089/servicesNS/admin/search/search/jobs/export?output_mode=csv
Then, I wrote the output to a file and read it using the ansible read_csv module.
- name: write received content to csv file on controller
  copy:
    content: "{{ me.content }}"
    dest: "./file.csv"
    mode: '0775'
  delegate_to: localhost
- name: read csv file
  read_csv:
    path: ./file.csv
  delegate_to: localhost
  register: read_csv
- name: set fact to create list
  set_fact:
    the_output: "{{ read_csv.list }}"
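Not part of the original answer, just a small sketch of consuming the resulting list; the title and app column names are taken from the example output above:

- name: show one line per parsed row
  debug:
    msg: "{{ item.title }} / {{ item.app }}"
  loop: "{{ the_output }}"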

Upload csv file in Elasticsearch using Filebeat

I'm trying to load a CSV file into Elasticsearch using Filebeat.
Here is my filebeat.yml:
filebeat.inputs:
#- type: log
- type: stdin
  setup.template.overwrite: true
  enabled: true
  close_eof: true
  paths:
    - /usr/share/filebeat/dockerlogs/*.csv
  processors:
    - decode_csv_fields:
        fields:
          message: "message"
        separator: ","
        ignore_missing: false
        overwrite_keys: true
        trim_leading_space: true
        fail_on_error: true
    - drop_fields:
        fields: [ "log", "host", "ecs", "input", "agent" ]
    - extract_array:
        field: message
        mappings:
          sr: 0
          Identifiant PSI: 1
          libellé PSI: 2
          Identifiant PdR: 3
          T3 Date Prévisionnelle: 4
          DS Reporting PdR: 5
          Status PSI: 6
          Type PdR: 7
    - drop_fields:
        fields: ["message","sr"]
  #index: rapport_g035_prov_1
filebeat.registry.path: /usr/share/filebeat/data/registry/filebeat/filebeat
output:
  elasticsearch:
    enabled: true
    hosts: ["IPAdress:8081"]
    indices:
      - index: "rapport_g035_prov"
      #- index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
      #- index: "filebeat-7.7.0"
#setup.dashboards.kibana_index: file-*
seccomp.enabled: false
logging.metrics.enabled: false
But when I check the index in Kibana, I find that the column names are not read:
https://i.stack.imgur.com/renYt.png
I tried to process the CSV in filebeat.yml in another way:
filebeat.inputs:
- type: log
  setup.template.overwrite: true
  enabled: true
  close_eof: true
  paths:
    - /usr/share/filebeat/dockerlogs/*.csv
  processors:
    - decode_csv_fields:
        fields:
          message: decoded.csv
        separator: ","
        ignore_missing: false
        overwrite_keys: true
        trim_leading_space: false
        fail_on_error: true
  #index: rapport_g035_prov_1
filebeat.registry.path: /usr/share/filebeat/data/registry/filebeat/filebeat
output:
  elasticsearch:
    enabled: true
    hosts: ["IPAdress:8081"]
    indices:
      - index: "rapport_g035_prov"
      #- index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"
      #- index: "filebeat-7.7.0"
#setup.dashboards.kibana_index: file-*
seccomp.enabled: false
logging.metrics.enabled: false
But I got the same result: the index is still not mapped correctly. I know there is a problem with the CSV processing in filebeat.yml, but I don't know what it is.

How can I loop through json object REST API body in Ansible?

When using Ansible, I am able to execute the requests when they are passed one by one, like this:
---
- name: Using a REST API
  become: false
  hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "Let's get list of Interfaces"
    - name: Adding a Bridge-Interface
      uri:
        url: https://router/rest/interface/bridge
        method: PUT
        validate_certs: false
        url_username: ansible
        url_password: ansible
        force_basic_auth: yes
        body_format: json
        status_code: 201
        body: '{"name":"bridge_ansible"}'
      register: results
    - debug:
        var: results
I want to iterate through a set of commands, so I thought of looping, but that does not work for me. I am using this code:
---
- name: Using a REST API
  become: false
  hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "Let's get list of Interfaces"
    - name: Adding a Bridge-Interface
      uri:
        url: "{{item.url}}"
        method: PUT
        validate_certs: false
        url_username: ansible
        url_password: ansible
        force_basic_auth: yes
        body_format: json
        status_code: 201
        body: "{{item.body}}"
      register: results
      loop:
        - {body:'{"name":"bridge_ansible"}', url:'https://router/rest/interface/bridge'}
        - {body:'{"address":"6.6.6.6", "interface":"bridge_ansible"}', url:'https://router/rest/ip/address'}
    - debug:
        var: results
I get an error for this item: {body:'{"name":"bridge_ansible"}', url:'https://router/rest/interface/bridge'}. I think my syntax for the JSON object is not correct, but I cannot work out the correct form. Can someone please help?
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)
Syntax Error while loading YAML.
did not find expected ',' or '}'
The error appears to be in '/ansible-playbook/1-demo.yaml': line 23, column 19, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
loop:
- {body:'{"name":"bridge_ansible"}', url:'https://router/rest/interface/bridge'}
^ here
This one looks easy to fix. It seems that there is a value started
with a quote, and the YAML parser is expecting to see the line ended
with the same kind of quote. For instance:
when: "ok" in result.stdout
Could be written as:
when: '"ok" in result.stdout'
Or equivalently:
when: "'ok' in result.stdout"
We could be wrong, but this one looks like it might be an issue with
unbalanced quotes. If starting a value with a quote, make sure the
line ends with the same set of quotes. For instance this arbitrary
example:
foo: "bad" "wolf"
Could be written as:
foo: '"bad" "wolf"'
Thanks
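One observation (not from the original post): the flow-style items in the loop are missing a space after each colon, so YAML reads body:'{"name":"bridge_ansible"}' as a single plain scalar and then stumbles over the quotes. A sketch of the same two items written in block style, which avoids the quoting problem entirely:

loop:
  - url: https://router/rest/interface/bridge
    body: '{"name":"bridge_ansible"}'
  - url: https://router/rest/ip/address
    body: '{"address":"6.6.6.6", "interface":"bridge_ansible"}'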

Parsing k8s docker container json log correctly with Filebeat 7.9.3

I'm working with Filebeat 7.9.3 as a daemonset on k8s.
I'm not able to parse docker container logs of a Springboot app that writes logs to stdout in json.
The fact is that every row of the Springboot app logs is written in this way:
{ "@timestamp": "2020-11-16T13:39:57.760Z", "log.level": "INFO", "message": "Checking comment 'se' done = true", "service.name": "conduit-be-moderator", "event.dataset": "conduit-be-moderator.log", "process.thread.name": "http-nio-8081-exec-2", "log.logger": "it.koopa.app.ModeratorController", "transaction.id": "1ed5c62964ff0cc2", "trace.id": "20b4b28a3817c9494a91de8720522972"}
But the corresponding docker log file under /var/log/containers/ writes log in this way:
{
  "log": "{\"@timestamp\":\"2020-11-16T11:27:32.273Z\", \"log.level\": \"INFO\", \"message\":\"Checking comment 'a'\", \"service.name\":\"conduit-be-moderator\",\"event.dataset\":\"conduit-be-moderator.log\",\"process.thread.name\":\"http-nio-8081-exec-4\",\"log.logger\":\"it.koopa.app.ModeratorController\",\"transaction.id\":\"9d3ad972dba65117\",\"trace.id\":\"8373edba92808d5e838e07c7f34af6c7\"}\n",
  "stream": "stdout",
  "time": "2020-11-16T11:27:32.274816903Z"
}
I always receive this on filebeat logs
Error decoding JSON: json: cannot unmarshal number into Go value of type map[string]interface {}
This is my Filebeat config that tries to parse the JSON log message from the docker logs. I'm using decode_json_fields to try to catch the Elasticsearch standard fields (the app logs via co.elastic.logging.logback.EcsEncoder):
filebeat.yml: |-
  filebeat.inputs:
  - type: container
    #json.keys_under_root: true
    json.overwrite_keys: true
    json.add_error_key: true
    json.message_key: log
    paths:
      - /var/log/containers/*.log
    include_lines: "conduit-be-moderator"
    processors:
      - decode_json_fields:
          fields: ["log"]
          overwrite_keys: true
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          in_cluster: true
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
  processors:
    - add_cloud_metadata:
    - add_host_metadata:
How can I do this???
As processors are applied before the JSON parser of the input, you will first need to configure the decode_json_fields processor, which will allow you to decode your json.log field. You will then be able to apply the json configuration of the input on the message fields. Something like:
filebeat.yml: |-
  filebeat.inputs:
  - type: container
    json.keys_under_root: true
    json.overwrite_keys: true
    json.add_error_key: true
    json.message_key: message
    paths:
      - /var/log/containers/*.log
    include_lines: "conduit-be-moderator"
    processors:
      - decode_json_fields:
          fields: ['log']
          expand_keys: true
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          in_cluster: true
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
  processors:
    - add_cloud_metadata:
    - add_host_metadata:
This configuration assumes that all your logs use the JSON format. Otherwise you will probably need to add an exclude or include regex pattern.
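For example (an illustration, not from the original answer), if some containers on the same node log plain text, keeping only the lines that start with a JSON object could look like this at the input level:

  - type: container
    paths:
      - /var/log/containers/*.log
    # assumption for this sketch: anything not starting with '{' is plain-text noise
    include_lines: ['^\{']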

'Templating' on Swagger API Description with yaml

Is it possible to use templating with Swagger?
How is it done?
I do not want to repeat the three properties time, len and off each time.
Have a look at the end of this post where I made up a 'template' for explanation.
More Detail:
I have a JSON response structure which always has the same properties; only the content of data is subject to change.
data could be an array, a string, a number, null or an object, depending on the API function's handling.
{
  time: "2019-02-01T12:12:324",
  off: 13,
  len: 14,
  data: [
    "Last item in the row of 14 items :-)"
  ]
}
See at the end of this post for my example of the Swagger definition.
It is a yaml which can be pasted into the swagger editor at https://editor.swagger.io/
In the Swagger documentation (yaml) I do not want to repeat the statically recurring items, which will not change in their functionality for any other request.
Let me know, if the question is not precisely enough to understand.
swagger: "2.0"
info:
  description: ""
  version: 1.0.0
  title: "Templating?"
  contact:
    email: "someone@somewhere.com"
host: localhost
basePath: /api
paths:
  /items:
    get:
      summary: "list of items"
      produces:
        - application/json
      responses:
        200:
          description: "successful operation"
          schema:
            $ref: "#/definitions/Items"
  /item/{id}:
    get:
      summary: "specific item"
      produces:
        - application/json
      parameters:
        - name: id
          in: path
          description: "ID of the demanded item"
          required: true
      responses:
        200:
          description: "successful operation"
          schema:
            $ref: "#/definitions/Item"
definitions:
  Items:
    type: object
    description: ""
    properties:
      time:
        type: string
        format: date-time
        description: "date-time of the request"
      off:
        type: integer
        description: "index 0 based offset of list data"
        default: 0
      len:
        type: integer
        description: "overall amount of items returned"
        default: -1
      data:
        type: array
        items:
          $ref: "#/definitions/ListingItem"
  Item:
    type: object
    description: ""
    properties:
      time:
        type: string
        format: date-time
        description: "date-time of the request"
      off:
        type: integer
        description: "index 0 based offset of list data"
        default: 0
      len:
        type: integer
        description: "overall amount of items returned"
        default: -1
      data:
        $ref: "#/definitions/InfoItem"
  ListingItem:
    type: integer
    description: "ID of the referenced item"
  InfoItem:
    type: object
    properties:
      id:
        type: string
      text:
        type: string
Based on @Anthon's answer, it came to my mind that this is somewhat the construct I would need. Effectively it is inheriting from a 'template':
...
templates:
  AbstractBasicResponse:
    properties:
      time:
        type: string
        format: date-time
        description: "date-time of the request"
      off:
        type: integer
        description: "index 0 based offset of list data"
        default: 0
      len:
        type: integer
        description: "overall amount of items returned"
        default: -1
definitions:
  Items:
    type: object
    extends: AbstractBasicResponse
    properties:
      data:
        type: array
        items:
          $ref: "#/definitions/ListingItem"
  Item:
    type: object
    extends: AbstractBasicResponse
    properties:
      data:
        $ref: "#/definitions/InfoItem"
  ListingItem:
    type: integer
    description: "ID of the referenced item"
  InfoItem:
    type: object
    properties:
      id:
        type: string
      text:
        type: string
...
You might not have to resort to full templating; there are two things within YAML that help with "undoubling" recurring data: anchors/aliases and merge keys.
An example of an anchor (introduced by &) referenced by an alias (*) would be:
definitions:
  Items:
    type: object
    description: ""
    properties:
      time:
        type: string
        format: date-time
        description: "date-time of the request"
      off: &index
        type: integer
        description: "index 0 based offset of list data"
        default: 0
      len: &amount
        type: integer
        description: "overall amount of items returned"
        default: -1
      data:
        type: array
        items:
          $ref: "#/definitions/ListingItem"
  Item:
    type: object
    description: ""
    properties:
      time:
        type: string
        format: date-time
        description: "date-time of the request"
      off: *index
      len: *amount
      data:
A YAML parser needs to be able to handle this, but since every alias points to the same object after loading, code using the data might no longer work the same because of side effects in some cases, depending on how the loaded data is processed.
You can have multiple aliases referring to the same anchor.
The merge key (<<) is a special key in a mapping with which you can pre-load the mapping in which it occurs with a bunch of key-value pairs. This is most effective when used with an anchor/alias. That gives you some finer control, and you could do:
len: &altlen
  type: integer
  description: "overall amount of items returned"
  default: -1
and then
len:
  <<: *altlen
  default: 42
Which would then be the same as doing:
len:
  type: integer
  description: "overall amount of items returned"
  default: 42
Merge keys are normally resolved at load time by the YAML parser, so there are no potential side-effects when using those even though they involve anchors and aliases.
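As a stand-alone illustration (generic keys, not tied to the Swagger spec above), here is an anchor combined with merge keys and an override:

defaults: &defaults    # anchor on this mapping
  type: integer
  default: -1

off:
  <<: *defaults        # merges type and default from the anchored mapping
  description: "index 0 based offset of list data"
len:
  <<: *defaults
  default: 42          # an explicit key wins over the merged default of -1

After loading, off carries type, default and its own description, while len keeps type from the merge and its own default of 42.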