Parsing a JSON file into Logstash

Hi, I am trying to send a JSON file with multiple objects to Elasticsearch through Logstash so I can display the data using Kibana. I have researched this extensively and simply cannot understand how to get the data formatted correctly for use in Kibana.
I have tried different filters such as json, date, and grok.
The issue is probably how I'm going about using these filters, as I don't understand their setup all that well.
Here is a sample line of the input JSON file:
{"time":"2015-09-20;12:13:24","bug_code":"tr","stacktrace":"543534"},
I want to use this format for displaying the data in Kibana and sorting the objects according to their "time" field. The following is my current filter section:
filter {
  date {
    match => [ "time", "YYYY-MM-dd;HH:mm:ss Z" ]
    timezone => "America/New_York"
    locale => "en"
    target => "@timestamp"
  }
  grok {
    match => [ "time", "%{TIMESTAMP_ISO8601:timestamp}" ]
  }
}
At this point I know the grok is wrong because I get "_grokparsefailure". But how can I figure out the correct way to use grok? Or is there a simple way to sort the data by the timestamp given in the file rather than the processing timestamp assigned when the data is sent through?
Here is what the output currently shows:
"message" => "{\"time\":\"2015-09-20;12:13:24\",\"bug_code\":\"tr\",\"stacktrace\":\"543534\"},\r",
"#version" => "1",
"#timestamp" => "2015-11-23T09:54:50:274Z",
"host" => "<my_computer>",
"path" => "<path_to_.json>",
"type" => "json",
"tags" => [
[0] "_grokparsefailure"
Any advice would be very much appreciated.

You're almost there; I could get it working with a few tweaks.
First, you need to add the json{} filter in the first position. Then you need to change the date pattern to YYYY-MM-dd;HH:mm:ss (dropping the Z, since your timestamps carry no timezone offset), and finally you can remove the grok filter at the end. Your filter configuration would look like this:
filter {
  json {
    source => "message"
  }
  date {
    match => [ "time", "YYYY-MM-dd;HH:mm:ss" ]
    timezone => "America/New_York"
    locale => "en"
    target => "@timestamp"
  }
}
The parsed event for your sample JSON line would then look like this:
{
    "message" => "{\"time\":\"2015-09-20;12:13:24\",\"bug_code\":\"tr\",\"stacktrace\":\"543534\"}",
    "@version" => "1",
    "@timestamp" => "2015-09-20T16:13:24.000Z",
    "host" => "iMac.local",
    "time" => "2015-09-20;12:13:24",
    "bug_code" => "tr",
    "stacktrace" => "543534"
}
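To sanity-check the whole pipeline, you can run a minimal configuration that reads sample lines from stdin and prints the parsed events; a test sketch, assuming you paste lines without the trailing comma (the comma after the closing brace would make the line invalid JSON and tag the event with _jsonparsefailure):
input {
  stdin { }
}
filter {
  json {
    source => "message"
  }
  date {
    match => [ "time", "YYYY-MM-dd;HH:mm:ss" ]
    timezone => "America/New_York"
    locale => "en"
    target => "@timestamp"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}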

Related

Filtering JSON/non-JSON entries in Logstash

I have a question about filtering entries in Logstash. I have two different logs coming into Logstash: one is a standard format with a timestamp and message, but the other comes in as JSON.
I use an if statement to test for a certain host, and if that host is present, I apply the json filter to the message. The problem is that when it encounters a non-JSON stdout message, it can't parse it and throws exceptions.
Does anyone know how to test whether an incoming entry is JSON, apply the filter if it is, and just ignore it otherwise?
Thanks
if [agent][hostname] == "some host" {
  # if an entry is not in json format how to ignore?
  json {
    source => "message"
    target => "gpfs"
  }
}
You can try with a grok filter as a first step.
grok {
  match => {
    "message" => [
      "{%{GREEDYDATA:json_message}}",
      "%{GREEDYDATA:std_out}"
    ]
  }
}
if [json_message] {
  mutate {
    replace => { "json_message" => "{%{json_message}}" }
  }
  json {
    source => "json_message"
    target => "gpfs"
  }
}
There is probably a cleaner solution than this, but it will do the job.
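Alternatively, since the JSON entries start with a brace, you can branch on the message itself before parsing; a sketch, assuming all (and only) your JSON events begin with {:
if [message] =~ /^\s*\{/ {
  json {
    source => "message"
    target => "gpfs"
  }
}
This way the json filter never sees the plain stdout lines, so nothing is thrown for them.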

Laravel 5.4 won't validate JSON

I'm using Laravel 5.4 and trying to validate JSON in my POST request, however the validator fails, stating that the JSON isn't valid even though it is. I'm assuming I'm not understanding the validation rules correctly and my implementation is wrong, rather than this being a bug or something else.
I have a simple POST endpoint which has both the Accept and Content-Type headers set to application/json.
In my POST request (testing using Postman) I'm supplying RAW data.
{
    "only_this_key": { "one": "two" }
}
In my controller method I have the following:
// I'm using intersect to remove any other parameters that may have been supplied, as this endpoint only requires one
$requestData = $request->intersect(['only_this_key']);

$messages = [
    'only_this_key.required' => 'The :attribute is required',
    'only_this_key.json' => 'The :attribute field must be valid JSON',
];

$validator = \Validator::make($requestData, [
    'only_this_key' => 'required|json',
], $messages);

if ($validator->fails()) {
    return new APIErrorValidationResponse($request, $validator);
}

return response()->json(['all good' => 'here']);
The error I get back is "The only_this_key field must be valid JSON", even though it is!
Passing in the raw data using Postman
{
    "only-this-key": {
        "item-one": "one",
        "item-two": "two",
        "item-three": "three"
    },
    "not": "wanted"
}
When I use dd($request->all()); within the method
array:2 [
    "only-this-key" => array:3 [
        "item-one" => "one"
        "item-two" => "two"
        "item-three" => "three"
    ]
    "not" => "wanted"
]
The problem is with how Laravel is interpreting the raw data in the request. If you run dd($request->all()) in your controller you will see this output:
array:1 [
    "{"only_this_key":{"one":"two"}}" => ""
]
Your entire JSON string is getting set as a key with a value of an empty string. If you absolutely must send it as raw data, then you're going to have to grab that key value and save it to an array with the key that you want. This should work (instead of the intersect line).
$requestData = ['only_this_key' => key($request->all())];
Alternatively, you can just send the body as x-www-form-urlencoded with your entire JSON string as the only value for one key.
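For example, in Postman that would be a single x-www-form-urlencoded field whose value is the raw JSON string (key and value chosen here to match your validation rule):
only_this_key={"one":"two"}
With that, $request->intersect(['only_this_key']) picks up a plain string and the required|json rule has something it can actually validate.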

Logstash JSON Parse - Ignore or Remove Sub-Tree

I'm sending JSON to logstash with a config like so:
filter {
  json {
    source => "event"
    remove_field => [ "event" ]
  }
}
Here is an example JSON object I'm sending:
{
  "#timestamp": "2015-04-07T22:26:37.786Z",
  "type": "event",
  "event": {
    "activityRecord": {
      "id": 68479,
      "completeTime": 1428445597542,
      "data": {
        "2015-03-16": true,
        "2015-03-17": true,
        "2015-03-18": true,
        "2015-03-19": true
      }
    }
  }
}
Because of the arbitrary nature of the activityRecord.data object, I don't want logstash and elasticsearch to index all these date fields. As is, I see activityRecord.data.2015-03-16 as a field to filter on in Kibana.
Is there a way to ignore this sub-tree of data? Or at least delete it after it has already been parsed? I tried remove_field with wildcards and whatnot, but no luck.
Though not entirely intuitive, it is documented that subfield references are made with square brackets, e.g. [field][subfield], so that's what you'll have to use with remove_field:
mutate {
  remove_field => "[event][activityRecord][data]"
}
To delete fields using wildcard matching you'd have to use a ruby filter.
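For example, something along these lines should drop every date-keyed subfield under data; a sketch written against the newer event API (Logstash 2.4+; on older releases you would index the event like a hash, e.g. event['field'], instead):
ruby {
  code => '
    data = event.get("[event][activityRecord][data]")
    if data.is_a?(Hash)
      # snapshot the keys first, then remove every subfield named like a date
      data.keys.each do |k|
        event.remove("[event][activityRecord][data][#{k}]") if k =~ /\A\d{4}-\d{2}-\d{2}\z/
      end
    end
  '
}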

Not able to use add_field in Logstash

I would like to extract the interface name from a log line with Logstash.
Sample log:
2013 Aug 28 13:14:49 logFile: Interface Etherface1/9 is down (Transceiver Absent)
I want to extract "Etherface1/9" out of this and add it as a field called interface.
I have the following conf file for this:
input {
  file {
    type => "syslog"
    path => [ "/home/vineeth/logstash/mylog.log" ]
    #path => ["d:/New Folder/sjdc.show.tech/n5k-3a-show-tech.txt"]
    start_position => ["beginning"]
  }
}
filter {
  grok {
    type => "syslog"
    add_field => [ "port", "Interface %{WORD}" ]
  }
}
output {
  stdout {
    debug => true
    debug_format => "json"
  }
  elasticsearch {
    embedded => true
  }
}
But then I am always getting "_grokparsefailure" under tags, and none of these new fields are appearing.
Kindly let me know how I can get the required output.
The grok filter expects that you're trying to match some text. Since you're not passing any possible matches, it triggers the _grokparsefailure tag (per the manual, the tag is added "when there has been no successful match").
You might use a match like this:
grok {
  match => ["message", "Interface %{DATA:port} is down"]
}
This will still fail if the match text isn't present. Logstash is pretty good at parsing fields with a simple structure, but data embedded in a user-friendly string is sometimes tricky. Usually you'll need to branch based on the message format.
Here's a very simple example, using a conditional with a regex:
if [message] =~ /Interface .+ is down/ {
  grok {
    match => ["message", "Interface %{DATA:port} is down"]
  }
}
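With the sample line above, that should yield an event where the interface name has been captured, roughly:
"message" => "2013 Aug 28 13:14:49 logFile: Interface Etherface1/9 is down (Transceiver Absent)",
   "port" => "Etherface1/9"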

JBoss AS 7 update system property via cli

I can read system properties via the CLI interface with:
/system-property=propertyname:read-attribute(name="value")
Is there a simple way I can update the property via the CLI interface?
You can use the write-attribute operation to change system property values.
/system-property=propertyname:write-attribute(name="value", value="newValue")
See the answer below for a better description.
You can use the write-attribute operation.
A healthy workflow for the Management CLI is to expose, read and write resource attributes. To give an example of this workflow, we are going to do the following steps on a fresh default installation of JBoss Application Server 7.1.0.Beta1.
Steps to identify and write a system resource attribute:
1. Read all system properties
2. Read a specific system property in more detail
3. Expose an example system property attribute
4. Write an example system property attribute
5. Expose the change to confirm it
6. Reset the attribute back to the original value
1. Read all system properties
We don't always know the exact name of what we are looking for. We can use a mix of tab completion and wildcard searches to make it easy to expose the resources and attributes. The read-resource operation is a great start to any workflow, as it exposes all present entities.
[domain@localhost:9999 /] /system-property=*:read-resource
{
    "outcome" => "success",
    "result" => [{
        "address" => [("system-property" => "java.net.preferIPv4Stack")],
        "outcome" => "success",
        "result" => {
            "boot-time" => true,
            "value" => "true"
        }
    }]
}
2. Read a specific system property in more detail
The read-resource operation has exposed the java.net.preferIPv4Stack property. We can query this further by using the read-resource-description operation.
[domain@localhost:9999 /] /system-property=java.net.preferIPv4Stack:read-resource-description
{
    "outcome" => "success",
    "result" => {
        "description" => "A system property to set on all servers in the domain.",
        "head-comment-allowed" => true,
        "tail-comment-allowed" => false,
        "attributes" => {
            "value" => {
                "type" => STRING,
                "description" => "The value of the system property.",
                "required" => false,
                "access-type" => "read-write",
                "storage" => "configuration",
                "restart-required" => "no-services"
            },
            "boot-time" => {
                "type" => BOOLEAN,
                "description" => "If true the system property is passed on the command-line to the started server jvm. If false, it will be pushed to the server as part of the startup sequence.",
                "required" => false,
                "default" => true,
                "access-type" => "read-write",
                "storage" => "configuration",
                "restart-required" => "no-services"
            }
        }
    }
}
3. Expose an example system property attribute
The read-resource-description operation prints information about the resource, including its attributes. We can specifically query these attributes with the read-attribute operation. Again, tab completion makes it easy to compose these operation strings: begin typing and hit Tab to complete the string or to see suggestions for available additions.
[domain@localhost:9999 /] /system-property=java.net.preferIPv4Stack:read-attribute(name=boot-time)
{
    "outcome" => "success",
    "result" => true
}
4. Write an example system property attribute
In the same way that we just queried the attribute, we can change it. In this case we use the write-attribute operation, keeping in mind the intended value type as reported by the read-resource-description operation. That operation declared the attribute to be BOOLEAN, but you should also be able to work this out simply by looking at the existing value in the read-attribute output (where it is defined).
[domain@localhost:9999 /] /system-property=java.net.preferIPv4Stack:write-attribute(name=boot-time, value=false)
{
    "outcome" => "success",
    "result" => {
        "domain-results" => {"step-1" => undefined},
        "server-operations" => undefined
    }
}
5. Expose the change to confirm it
We can run the read-attribute operation again to show the value change.
[domain@localhost:9999 /] /system-property=java.net.preferIPv4Stack:read-attribute(name=boot-time)
{
    "outcome" => "success",
    "result" => false
}
6. Reset the attribute back to the original value
Just to gracefully end the example, let's change the value back to the original state.
[domain@localhost:9999 /] /system-property=java.net.preferIPv4Stack:write-attribute(name=boot-time, value=true)
{
    "outcome" => "success",
    "result" => {
        "domain-results" => {"step-1" => undefined},
        "server-operations" => undefined
    }
}
Summary
Yes, you can write attribute values. To make the process easier, a workflow habit of exposing the attribute values and their type definitions is good practice, and should make the process clearer.
And for completeness, here's how to remove (undefine) a property attribute:
/system-property=propertyname:undefine-attribute(name=attribute-name)
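And if you need to create or delete the property itself, rather than one of its attributes, the resource-level add and remove operations follow the same pattern:
/system-property=propertyname:add(value="someValue")
/system-property=propertyname:remove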