I'm using the following filter to split a JSON array so that I end up with one message per element in the array:
input {
  stdin {}
}
filter {
  split {
    field => "results"
  }
}
output {
  stdout { codec => rubydebug }
}
The input I send is:
{"results" : [{"id": "a1", "name": "hello"}, {"id": "a2", "name": "logstash"}]}
Yet, the output is a single message with the following error:
[main] Only String and Array types are splittable. field:results is of type = NilClass
/logstash-7.4.0/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
       "message" => "{\"results\" : [{\"id\": \"a1\", \"name\": \"hello\"}, {\"id\": \"a2\", \"name\": \"logstash\"}]}",
      "@version" => "1",
    "@timestamp" => 2019-10-18T14:07:57.285Z,
          "host" => "C02Z40E8LVDR",
          "tags" => [
        [0] "_split_type_failure"
    ]
}
Any hint?
Many thanks. Christian
As documented on the Logstash website (https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html), you should apply the json filter plugin first, so that the raw line in the message field is parsed and the results field actually exists before the split runs:
filter {
  json {
    source => "message"
  }
  split {
    field => "results"
  }
}
With the stdin input alone, the whole line is just a string in message, which is why the split filter reports that results is of type NilClass.
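If the parsing succeeds, the split should then emit one event per array element, roughly like this (rubydebug formatting and metadata will vary, and the original message field is kept alongside):
{
      "results" => {
          "id" => "a1",
        "name" => "hello"
    },
     "@version" => "1",
   "@timestamp" => 2019-10-18T14:07:57.285Z,
         "host" => "C02Z40E8LVDR"
}
{
      "results" => {
          "id" => "a2",
        "name" => "logstash"
    },
    ...
}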
I have some JSON data sent into my Logstash filter and wish to mask secrets from appearing in Kibana. My log looks like this:
{
    "payloads":
    [
        {
            "sequence": 1,
            "request":
            {
                "url": "https://hello.com",
                "method": "POST",
                "postData": "{\"one\":\"1\",\"secret\":\"THISISSECRET\",\"username\":\"hello\",\"secret2\":\"THISISALSOSECRET\"}"
            },
            "response":
            {
                "status": 200
            }
        }
    ],
    ...
My filter converts the payloads to payload, and I then wish to mask the JSON in postData so that it becomes:
"postData": "{\"one\":\"1\",\"secret\":\"[secret]\",\"username\":\"hello\",\"secret2\":\"[secret]\"}"
My filter now looks like this:
if [payloads] {
  split {
    field => "payloads"
    target => "payload"
    remove_field => ["payloads"]
  }
}
# innerTmp is set to the inner JSON string here - this works
json {
  source => "innerTmp"
  target => "parsedJson"
}
if [parsedJson][secret] =~ /.+/ {
  mutate {
    remove_field => [ "secret" ]
    add_field => { "secret" => "[secret]" }
  }
}
if [parsedJson][secret2] =~ /.+/ {
  mutate {
    remove_field => [ "secret2" ]
    add_field => { "secret2" => "[secret]" }
  }
}
Is this a correct approach? I cannot see the filter replacing my JSON key/values with "[secret]".
Kind regards /K
The approach is good, but you are using the wrong field.
After the split, the secret field is part of postData, and that field is part of parsedJson.
if [parsedJson][postData][secret] {
  mutate { remove_field => [ "[parsedJson][postData][secret]" ] }
  mutate { add_field => { "[parsedJson][postData][secret]" => "[secret]" } }
}
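A single mutate can also overwrite the value in one step, avoiding any question of the order in which add_field and remove_field run inside one filter block (a minimal sketch, assuming postData has already been parsed into an object under parsedJson):
if [parsedJson][postData][secret] {
  mutate {
    replace => { "[parsedJson][postData][secret]" => "[secret]" }
  }
}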
We want to implement a service request trace using the Logstash http input plugin, with the body sent as a JSON array.
We are getting the following error when trying to parse the JSON array:
error:
:message=>"gsub mutation is only applicable for Strings, skipping", :field=>"message", :value=>nil, :level=>:debug, :file=>"logstash/filters/mutate.rb", :line=>"322", :method=>"gsub"}
:message=>"Exception in filterworker", "exception"=>#<LogStash::ConfigurationError: Only String and Array types are splittable. field:message is of type = NilClass>
My JSON array is:
{
"data": [
{
"appName": "DemoApp",
"appVersion": "1.1",
"deviceId": "1234567",
"deviceName": "moto e",
"deviceOSVersion": "5.1",
"packageName": "com.DemoApp",
"message": "testing null pointer exception",
"errorLog": "null pointer exception"
},
{
"appName": "DemoApp",
"appVersion": "1.1",
"deviceId": "1234567",
"deviceName": "moto e",
"deviceOSVersion": "5.1",
"packageName": "com.DemoApp",
"message": "testing illegal state exception",
"errorLog": "illegal state exception"
}
]
}
My Logstash config is:
input {
  http {
    codec => "plain"
  }
}
filter {
  json {
    source => "message"
  }
  mutate { gsub => [ "message", "},", "shr" ] }
  split {
    terminator => "shr"
    field => "data"
  }
}
output {
  stdout { codec => "json" }
  gelf {
    host => "localhost"
    facility => "%{type}"
    level => ["%{SeverityLevel}", "INFO"]
    codec => "json"
  }
  file {
    path => "/chroot/result.log"
  }
}
Any help would be appreciated.
Logstash puts the raw event payload into a default field named message, so the message key inside your JSON overlaps with it. Consider renaming that JSON field to something else.
The other option may be to use the target setting and then reference fields under the target, like:
json { source => "message" target => "data" }
mutate { gsub => [ "[data][message]", "\}\,\r\n\r\n\{", "\}shr\{" ] }
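Note that with target => "data", the whole parsed document lands under data, so the array from the payload's outer data key ends up at [data][data]. The split then needs to point there; a sketch under that assumption:
split {
  field => "[data][data]"
}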
I hope this helps.
Hi, I am trying to parse a JSON file. I have tried troubleshooting with suggestions from Stack Overflow (links at the bottom) but none have worked for me. I am hoping someone has some insight on what is probably a silly mistake I am making.
I have tried using only the json codec, only the json filter, and both together. For some reason I still get a _jsonparsefailure. What can I do to get this to work?
Thanks in advance!
My json file:
{
"log": {
"version": "1.2",
"creator": {
"name": "WebInspector",
"version": "537.36"
},
"pages": [
{
"startedDateTime": "2015-10-13T20:28:46.081Z",
"id": "page_1",
"title": "https://demo.com",
"pageTimings": {
"onContentLoad": 377.8560000064317,
"onLoad": 377.66200001351535
}
},
{
"startedDateTime": "2015-10-13T20:29:01.734Z",
"id": "page_2",
"title": "https://demo.com",
"pageTimings": {
"onContentLoad": 1444.0670000039972,
"onLoad": 2279.20100002666
}
},
{
"startedDateTime": "2015-10-13T20:29:04.014Z",
"id": "page_3",
"title": "https://demo.com",
"pageTimings": {
"onContentLoad": 1802.0240000041667,
"onLoad": 2242.4060000048485
}
},
{
"startedDateTime": "2015-10-13T20:29:09.224Z",
"id": "page_4",
"title": "https://demo.com",
"pageTimings": {
"onContentLoad": 274.82699998654425,
"onLoad": 1453.034000005573
}
}
]
}
}
My logstash conf:
input {
  file {
    type => "json"
    path => "/Users/anonymous/Documents/demo.json"
    start_position => "beginning"
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch { host => "localhost" protocol => "http" port => "9200" }
  stdout { codec => rubydebug }
}
Output I am getting from logstash hopefully with clues:
Trouble parsing json {:source=>"message", :raw=>" \"startedDateTime\": \"2015-10-19T18:05:37.887Z\",", :exception=>#<TypeError: can't convert String into Hash>, :level=>:warn}
{
       "message" => " {",
      "@version" => "1",
    "@timestamp" => "2015-10-26T20:05:53.096Z",
          "host" => "15mbp-09796.local",
          "path" => "/Users/anonymous/Documents/demo.json",
          "type" => "json",
          "tags" => [
        [0] "_jsonparsefailure"
    ]
}
Decompose Logstash json message into fields
How to use logstash's json filter?
I test my JSON at JSONLint. Perhaps this will solve your problem: the error I get there is that it is expecting a string.
It seems that you have an unnecessary comma (',') at the end. Either remove it or add another JSON variable after it.
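Separately, note what the output above shows: the file input hands the json filter one line at a time (message => " {"), so a pretty-printed document never reaches the filter in one piece. A multiline codec can stitch the lines back together first; a rough sketch, assuming a 2015-era Logstash and that each file holds one document starting with an unindented brace:
input {
  file {
    type => "json"
    path => "/Users/anonymous/Documents/demo.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"   # testing only: reread the file from the top
    codec => multiline {
      pattern => "^\{"            # a new document begins at an unindented {
      negate => true
      what => "previous"          # everything else belongs to the previous line
    }
  }
}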
I've got a question regarding JSON in Logstash.
I have got a JSON Input that looks something like this:
{
"2": {
"name": "name2",
"state": "state2"
},
"1": {
"name": "name1",
"state": "state1"
},
"0": {
"name": "name0",
"state": "state0"
}
}
Now, let's say I want to add a field in the logstash config
json {
  source => "message"
  add_field => {
    "NAME" => "%{ What to write here ?}"
    "STATE" => "%{ What to write here ?}"
  }
}
Is there a way to access the JSON input so that I get a NAME field with value name0, another with name1, and a third with name2? The outer keys change, and there can be one entry or many more, so I don't want to hardcode it like
%{[0][name]}
Thanks for your help.
If you remove all new lines in your input you can simply use the json filter. You don't need any add_field action.
Working config without new lines:
filter {
  json { source => "message" }
}
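That is, the entire document collapsed onto a single line, e.g.:
{"2": {"name": "name2", "state": "state2"}, "1": {"name": "name1", "state": "state1"}, "0": {"name": "name0", "state": "state0"}}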
If you can't remove the new lines in your input you need to merge the lines with the multiline codec.
Working config with new lines:
input {
  file {
    path => ["/path/to/your/file"] # I suppose your input is a file.
    start_position => "beginning"
    sincedb_path => "/dev/null" # just for testing
    codec => multiline {
      pattern => "^}"
      what => "previous"
      negate => "true"
    }
  }
}
filter {
  # the closing "}" starts the next multiline event, so re-append it here
  mutate { replace => { "message" => "%{message}}" } }
  json { source => "message" }
}
I suppose that you use the file input. In case you don't, just change it.
Output (for both):
"2" => {
"name" => "name2",
"state" => "state2"
},
"1" => {
"name" => "name1",
"state" => "state1"
},
"0" => {
"name" => "name0",
"state" => "state0"
}
I am using the latest ELK (Elasticsearch 1.5.2 , Logstash 1.5.0, Kibana 4.0.2)
I have a question about the following.
Sample .json:
{ "field1": "This is value1", "field2": "This is value2" }
logstash.conf:
input {
  stdin { }
}
filter {
  json {
    source => "message"
    add_field => {
      "field1" => "%{field1}"
      "field2" => "%{field2}"
    }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    index => "scan"
  }
}
Output:
{
       "message" => "{ \"field1\": \"This is value1\", \"field2\": \"This is value2\" }",
      "@version" => "1",
    "@timestamp" => "2015-05-07T06:02:56.088Z",
          "host" => "myhost",
        "field1" => [
        [0] "This is value1",
        [1] "This is value1"
    ],
        "field2" => [
        [0] "This is value2",
        [1] "This is value2"
    ]
}
My questions are: 1) Why does each field appear twice in the result? 2) If there is a nested array, how should it be referenced in the Logstash config?
Thanks a lot!
..Petera
I think you have misunderstood what the json filter does. When you process a field through the json filter, it looks for field names and their corresponding values.
In your example, you have done that with this part:
filter {
json {
source => "message"
Then you have added a field called "field1" with the content of field "field1". Since the field already exists, you have simply appended the same information to it, and it has now become an array:
add_field =>
{
"field1" => "%{field1}"
"field2" => "%{field2}"
}
}
}
If you simplify your code to the following you should be fine:
filter {
json {
source => "message"
}
}
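With just the json filter, each key should then appear once; roughly (taken from your output with the duplicates gone):
{
       "message" => "{ \"field1\": \"This is value1\", \"field2\": \"This is value2\" }",
      "@version" => "1",
    "@timestamp" => "2015-05-07T06:02:56.088Z",
          "host" => "myhost",
        "field1" => "This is value1",
        "field2" => "This is value2"
}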
I suspect your question about arrays becomes moot at this point, as you probably don't need the nested array and therefore won't need to address it, but in case you do, I believe you can reference it like so:
[field1][0]
[field1][1]
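For example, in a filter block (the first_item field name is just an illustration):
mutate {
  add_field => { "first_item" => "%{[field1][0]}" }
}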