Filebeat multiline codec not working in my case

I have defined the multiline codec in filebeat.yml like below:
multiline.pattern: '^%{TIMESTAMP_ISO8601} '
multiline.negate: true
multiline.match: after
But it does not seem to be working: multiple lines of the log get appended together, as shown below.
Single line of log
2017-05-07 22:29:43 [0] [pool-2-thread-1] INFO c.app.task.ChannelActiveCheckTask - ----
Inside checkIfChannelActive execution ----
The corresponding log stored in Elasticsearch after multiline parsing:
---- Inside checkIfChannelActive execution ---- 2017-05-09 08:16:13 [0] [pool-2-thread-1] INFO
XYZZ - XYZ :: XYZ 2017-05-09 08:16:13 [0] [pool-2-thread-1] INFO XYZ - XYZYZZ
Since the above did not work, I also tried the multiline pattern below, but that does not work either:
multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'
Below is my logstash.conf
input {
beats {
port => 5044
}
}
filter {
mutate {
gsub => ["message", "\n", " "]
}
grok {
match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} [%{NOTSPACE:uid}] [%
{NOTSPACE:thread}] %{LOGLEVEL:loglevel} %{DATA:class}-%{GREEDYDATA:message}" ]
overwrite => [ "message" ]
}
date {
match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
target => "#timestamp"
}
if "_grokparsefailure" in [tags] {
drop { }
}
}
output {
elasticsearch {
hosts => "localhost"
index => "%{type}-%{+YYYY.MM.dd}"
}
}
Can someone help me fix this?

Filebeat does not recognize grok patterns; you have to use a plain regex.
You can find the definitions of the grok patterns at https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
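For reference, a minimal filebeat.yml sketch using the plain-regex equivalent of TIMESTAMP_ISO8601 (the log path is a placeholder, and the exact top-level key depends on your Filebeat version, e.g. filebeat.prospectors on 5.x/6.x versus filebeat.inputs later):
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/myapp/app.log        # placeholder path
  # plain-regex equivalent of grok's TIMESTAMP_ISO8601 prefix
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'
  multiline.negate: true
  multiline.match: after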

Related

logstash writes logs from filebeat to two indexes

I installed the ELK stack on a server, and on another server I installed Filebeat to send syslog to filebeat-[data] indexes, and it works fine.
Now, on the ELK server I configured another input in Logstash to send a JSON file to json_data indexes, and that works fine too, but now I find the Filebeat logs in both indexes and I don't understand why.
I want the Filebeat logs only in the filebeat-[data] index and not in the json_data index.
Where am I going wrong?
This is my logstash conf file
input {
file {
path => "/home/centos/json/test.json"
start_position => "beginning"
sincedb_path => "/dev/null"
}
}
filter {
json {
source => "message"
}
}
output {
elasticsearch {
hosts => "http://10.xxx.xxx.xxx:9200"
index => "json_data"
}
}
input {
beats {
port => 5044
}
}
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
output {
elasticsearch {
hosts => "http://10.xxx.xxx.xxx:9200"
sniffing => true
manage_template => false
index => "%{[#metadata][beat]}-%{+YYYY.MM.dd}"
}
}
I tried different configurations. I also tried deleting json.conf, and in that case Filebeat writes only to the filebeat-[data] index.
For the logs coming from Filebeat to Logstash, you can set the index name in the Filebeat configuration. In that case Logstash will not populate or manipulate the index name; of course, you also need to remove the index setting from Logstash's Filebeat output.
For the JSON file input, keep the config as is; there is no need to change anything there.
To set a custom index name in Filebeat, see: https://www.elastic.co/guide/en/beats/filebeat/current/change-index-name.html
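As a rough, untested sketch of what the linked page describes when Filebeat writes straight to Elasticsearch (the host, index name and version field are placeholders, and on 7.x you would also need to disable ILM or it overrides the index):
output.elasticsearch:
  hosts: ["http://10.xxx.xxx.xxx:9200"]
  index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"   # custom index root
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
If Filebeat keeps shipping to Logstash on port 5044 instead, check the output.logstash section of the docs for your version; the key point of the answer is that the index decision moves into Filebeat and out of the Logstash output.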

Parsing out awkward JSON in Logstash

Afternoon,
I've been trying to sort this for the past few weeks and cannot find a solution. We receive some logs via a 3rd party, and so far I've used grok to pull the value below out into the details field. Annoyingly, this would be extremely simple if it weren't for all the backslashes.
Is there an easy way to parse this data out as JSON in Logstash?
{\"CreationTime\":\"2021-05-11T06:42:44\",\"Id\":\"xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx\",\"Operation\":\"SearchMtpBatch\",\"OrganizationId\":\"xxxxxxxxx-xxx-xxxx-xxxx-xxxxxxx\",\"RecordType\":52,\"UserKey\":\"eample#example.onmicrosoft.com\",\"UserType\":5,\"Version\":1,\"Workload\":\"SecurityComplianceCenter\",\"UserId\":\"example#example.onmicrosoft.com\",\"AadAppId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx\",\"DataType\":\"MtpBatch\",\"DatabaseType\":\"DataInsights\",\"RelativeUrl\":\"/DataInsights/DataInsightsService.svc/Find/MtpBatch?tenantid=xxxxxxx-xxxxx-xxxx-xxx-xxxxxxxx&PageSize=200&Filter=ModelType+eq+1+and+ContainerUrn+eq+%xxurn%xAZappedUrlInvestigation%xxxxxxxxxxxxxxxxxxxxxx%xx\",\"ResultCount\":\"1\"}
You can achieve this easily with the json filter:
filter {
json {
source => "message"
}
}
If your source data actually contains those backslashes, then you need to somehow remove them before Logstash can recognise the message as valid JSON.
You could do that before it hits Logstash, in which case the json codec will probably work as expected. Or, if you want Logstash to handle it, you can use the mutate filter's gsub option, followed by the json filter to parse it:
filter {
mutate {
gsub => ["message", "[\\]", "" ]
}
json {
source => "message"
}
}
A couple of things to note: this will just blindly strip out all backslashes. If your strings might ever contain backslashes, you need to do something a little more sophisticated. I've had trouble escaping backslashes in gsub before and found that using the regex "any of" ([...]) construction is safer.
Here's a docker one-liner to run that config. The stdin input and stdout output are the default when using -e to specify config on the command line, so I've omitted them here for readability:
docker run --rm -it docker.elastic.co/logstash/logstash:7.12.1 -e 'filter { mutate { gsub => ["message", "[\\]", "" ]} json { source => "message" } }'
Pasting your example in and hitting return results in this output:
{
"#timestamp" => 2021-05-13T01:57:40.736Z,
"RelativeUrl" => "/DataInsights/DataInsightsService.svc/Find/MtpBatch?tenantid=xxxxxxx-xxxxx-xxxx-xxx-xxxxxxxx&PageSize=200&Filter=ModelType+eq+1+and+ContainerUrn+eq+%xxurn%xAZappedUrlInvestigation%xxxxxxxxxxxxxxxxxxxxxx%xx",
"OrganizationId" => "xxxxxxxxx-xxx-xxxx-xxxx-xxxxxxx",
"UserKey" => "eample#example.onmicrosoft.com",
"DataType" => "MtpBatch",
"message" => "{\"CreationTime\":\"2021-05-11T06:42:44\",\"Id\":\"xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx\",\"Operation\":\"SearchMtpBatch\",\"OrganizationId\":\"xxxxxxxxx-xxx-xxxx-xxxx-xxxxxxx\",\"RecordType\":52,\"UserKey\":\"eample#example.onmicrosoft.com\",\"UserType\":5,\"Version\":1,\"Workload\":\"SecurityComplianceCenter\",\"UserId\":\"example#example.onmicrosoft.com\",\"AadAppId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx\",\"DataType\":\"MtpBatch\",\"DatabaseType\":\"DataInsights\",\"RelativeUrl\":\"/DataInsights/DataInsightsService.svc/Find/MtpBatch?tenantid=xxxxxxx-xxxxx-xxxx-xxx-xxxxxxxx&PageSize=200&Filter=ModelType+eq+1+and+ContainerUrn+eq+%xxurn%xAZappedUrlInvestigation%xxxxxxxxxxxxxxxxxxxxxx%xx\",\"ResultCount\":\"1\"}",
"UserType" => 5,
"UserId" => "example#example.onmicrosoft.com",
"type" => "stdin",
"host" => "de2c988c09c7",
"#version" => "1",
"Operation" => "SearchMtpBatch",
"AadAppId" => "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
"ResultCount" => "1",
"DatabaseType" => "DataInsights",
"Version" => 1,
"RecordType" => 52,
"CreationTime" => "2021-05-11T06:42:44",
"Id" => "xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx",
"Workload" => "SecurityComplianceCenter"
}
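Following on from the note above: if your strings might legitimately contain backslashes, a hedged and more targeted variant (a sketch, not tested against every escaping edge case) is to remove only the backslashes that immediately precede a double quote:
filter {
  mutate {
    # turn \" into " only; other backslashes in the data are left alone
    # (character classes again, to sidestep gsub escaping trouble)
    gsub => ["message", '[\\]["]', '"']
  }
  json {
    source => "message"
  }
}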

elasticsearch delete documents using logstash and csv

Is there any way to delete documents from Elasticsearch using Logstash and a CSV file?
I read the Logstash documentation and found nothing, and I tried a few configs, but nothing happened when using action "delete":
output {
elasticsearch{
action => "delete"
host => "localhost"
index => "index_name"
document_id => "%{id}"
}
}
Has anyone tried this? Is there anything special that I should add to the input and filter sections of the config? I used the file plugin for input and the csv plugin for the filter.
It is definitely possible to do what you suggest, but if you're using Logstash 1.5, you need to use the transport protocol as there is a bug in Logstash 1.5 when doing deletes over the HTTP protocol (see issue #195)
So if your delete.csv CSV file is formatted like this:
id
12345
12346
12347
And your delete.conf Logstash config looks like this:
input {
file {
path => "/path/to/your/delete.csv"
start_position => "beginning"
sincedb_path => "/dev/null"
}
}
filter {
csv {
columns => ["id"]
}
}
output {
elasticsearch{
action => "delete"
host => "localhost"
port => 9300 <--- make sure you have this
protocol => "transport" <--- make sure you have this
index => "your_index" <--- replace this
document_type => "your_doc_type" <--- replace this
document_id => "%{id}"
}
}
Then when running bin/logstash -f delete.conf you'll be able to delete all the documents whose id is specified in your CSV file.
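Note that this configuration is specific to the Logstash 1.5 era; the transport protocol and the host/port/protocol options were removed in later releases. On a recent Logstash, a roughly equivalent delete output (a sketch, not tested here; document_type is also gone in modern versions, so it is omitted) would be:
output {
  elasticsearch {
    action      => "delete"
    hosts       => ["http://localhost:9200"]   # replace with your cluster
    index       => "your_index"                # replace with your index
    document_id => "%{id}"
  }
}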
In addition to Val's answer, I would add that if you have a single input containing a mix of rows to delete and rows to upsert, you can do both, provided you have a flag that identifies the ones to delete. The output > elasticsearch > action parameter can be a "field reference", meaning that you can reference a per-row field. Even better, you can move that flag into a metadata field so that it can be used in a field reference without being indexed.
For example, in your filter section:
filter {
# [deleted] is the name of your field
if [deleted] {
mutate {
add_field => {
"[#metadata][elasticsearch_action]" => "delete"
}
}
mutate {
remove_field => [ "deleted" ]
}
} else {
mutate {
add_field => {
"[#metadata][elasticsearch_action]" => "index"
}
}
mutate {
remove_field => [ "deleted" ]
}
}
}
Then, in your output section, reference the metadata field:
output {
elasticsearch {
hosts => "localhost:9200"
index => "myindex"
action => "%{[#metadata][elasticsearch_action]}"
document_type => "mytype"
}
}
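For completeness, a hedged sketch of what the matching CSV and csv filter could look like (the column names and values are assumptions; note that the csv filter yields strings, so the condition compares against "true" rather than relying on bare truthiness as above, and the output also needs document_id => "%{id}" so the delete targets the right document):
# rows.csv (hypothetical):
#   12345,true
#   12346,false
filter {
  csv {
    columns => ["id", "deleted"]
  }
  if [deleted] == "true" {
    mutate { add_field => { "[@metadata][elasticsearch_action]" => "delete" } }
  } else {
    mutate { add_field => { "[@metadata][elasticsearch_action]" => "index" } }
  }
  mutate { remove_field => [ "deleted" ] }
}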

How can I remove fields which are nil in a CSV file

My CSV file contains fields which are nil, like this:
{ "message" => [
[0] "m_FRA-LIENSs-R2012-1;\r"
],
"#version" => "1",
"#timestamp" => "2015-05-24T13:51:14.735Z",
"host" => "debian",
"SEXTANT_UUID" => "m_FRA-LIENSs-R2012-1",
"SEXTANT_ALTERNATE_TITLE" => nil
}
How can I remove all such messages and fields?
Here is my CSV file
SEXTANT_UUID|SEXTANT_ALTERNATE_TITLE
a1afd680-543c | ZONE_ENJEU
4b80d9ad-e59d | ZICO
800d640f-1f82 |
I want to delete the last line. I used the ruby filter, but it doesn't work; it removes just the field, not the entire message.
If you configure your Ruby filter like this, it will work:
filter {
# let ruby check all fields of the event and remove any empty ones
ruby {
code => "event.to_hash.delete_if {|field, value| value.blank? }"
}
}
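Note that this snippet assumes the older Logstash event model, where the event behaves like a plain hash and blank? is available. On Logstash 5.x and later the Event API is used instead, so a roughly equivalent (untested) sketch would be:
filter {
  ruby {
    code => "
      # remove any field whose value is nil or an empty string (Event API)
      event.to_hash.each do |field, value|
        event.remove(field) if value.nil? || value == ''
      end
    "
  }
}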
I used if ([message] =~ "^;") { drop { } } and it works for the CSV file.

json filter fails with #<NoMethodError: undefined method `[]' for nil:NilClass>

I'm trying to process entries from a logfile that contains both plain messages and json formatted messages. My initial idea was to grep for messages enclosed in curly braces and have them processed by another chained filter. Grep works fine (as does plain message processing), but the subsequent json filter reports an exception. I attached the logstash configuration, input and error message below.
Do you have any ideas what the problem might be? Any alternative suggestions for processing plain and json formatted entries from the same file?
Thanks a lot,
Johannes
Error message:
Trouble parsing json {:key=>"@message", :raw=>"{\"time\":\"14.08.2013 10:16:31:799\",\"level\":\"DEBUG\",\"thread\":\"main\",\"clazz\":\"org.springframework.beans.factory.support.DefaultListableBeanFactory\",\"line\":\"214\",\"msg\":\"Returning cached instance of singleton bean 'org.apache.activemq.xbean.XBeanBrokerService#0'\"}", :exception=>#<NoMethodError: undefined method `[]' for nil:NilClass>, :level=>:warn}
logstash conf:
input {
file {
path => [ "plain.log" ]
type => "plainlog"
format => "plain"
}
}
filter {
# Grep json formatted messages and send them to following json filter
grep {
type => "plainlog"
add_tag => [ "grepped_json" ]
match => [ "#message", "^{.*}" ]
}
json {
tags => [ "grepped_json" ]
source => "#message"
}
}
output {
stdout { debug => true debug_format => "json"}
elasticsearch { embedded => true }
}
Input from logfile (just one line):
{"time":"14.08.2013 10:16:31:799","level":"DEBUG","thread":"main","clazz":"org.springframework.beans.factory.support.DefaultListableBeanFactory","line":"214","msg":"Returning cached instance of singleton bean 'org.apache.activemq.xbean.XBeanBrokerService#0'"}
I had the same problem and solved it by adding a target to the json filter.
The documentation does say the target is optional but apparently it isn't.
Changing your example you should have:
json {
tags => [ "grepped_json" ]
source => "#message"
target => "data"
}