I have this config file (logstash):
input {
beats {
port => 5044
host => "0.0.0.0"
}
}
filter {
json {
source => "message"
target => "log"
}
}
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "logs-%{+YYYY.MM.dd}"
}
jdbc {
driver_jar_path => "/etc/logstash/mysql-connector-java-8.0.11.jar"
driver_class => "com.mysql.cj.jdbc.Driver"
connection_string => "jdbc:mysql://localhost:3306/cste?user=master&password=testets!123"
statement => ["INSERT INTO cste_log (ip, log, event, created, inserted) VALUES(?,?,?,?,?)", "log.userid", "log", "log.event", "#timestamp", "#timestamp"]
}
stdout {
codec => "rubydebug"
}
}
This is meant to save data to a MySQL database, but it doesn't work; I get the error message "column 'ip', 'event' cannot be null".
I think the syntax of 'jdbc.statement' is wrong, and I'm trying to fix it.
'output.elasticsearch' works fine.
{
"agent" => {
"version" => "7.10.0",
"name" => "DESKTOP-GEB1AGR",
"id" => "7e109ece-5874-4149-9842-21acb86c9da0",
"type" => "filebeat",
"hostname" => "DESKTOP-GEB1AGR",
"ephemeral_id" => "0730755e-f234-48c4-b7f1-2d2339df0e86"
},
"#version" => "1",
"#timestamp" => 2020-11-23T06:31:59.005Z,
"log" => {
"userid" => "192.111.11.111",
"writetime" => "2020/11/23 15:31:51",
"target" => "crackme.exe - PID: 5528 - Module: ntdll.dll - Thread: Main Thread 3240 (switched from 19C0)",
"event" => "dbgRestart"
},
"input" => {
"type" => "log"
},
"ecs" => {
"version" => "1.6.0"
},
"message" => "{\"writetime\": \"2020/11/23 15:31:51\", \"userid\": \"111.111.111.111\", \"target\": \"crackme.exe - PID: 5528 - Module: ntdll.dll - Thread: Main Thread 3240 (switched from 19C0)\", \"event\": \"dbgRestart\"} ",
"host" => {
"name" => "DESKTOP-GEB1AGR",
"architecture" => "x86_64",
"os" => {
"version" => "10.0",
"name" => "Windows 10 Home",
"build" => "16299.1087",
"family" => "windows",
"platform" => "windows",
"kernel" => "10.0.16299.1087 (WinBuild.160101.0800)"
},
"id" => "659f1b29-3-2cb22793a39c",
"ip" => [
[0] "fe80::adb9:b",
[1] "192.168.43.",
[2] "2001:0:348b:",
[3] "fe80::180947e"
],
"hostname" => "DESKTOP-GEB1AGR",
"mac" => [
[0] "00:0c:6c:d7",
[1] "00:00:00:e0"
]
},
"tags" => [
[0] "beats_input_codec_plain_applied"
]
}
How can I use the values of 'writetime' and 'event'?
Please give me some advice.
"log" => {
"userid" => "192.168.43.129",
"writetime" => "2020/11/23 15:31:51",
"target" => "crackme.exe - PID: 5528 - Module: ntdll.dll - Thread: Main Thread 3240 (switched from 19C0)",
"event" => "dbgRestart"},
If event is a field inside the log object, then in Logstash you refer to it as "[log][event]"; "[log.event]" refers to a field that has a period in its name. Similarly for "[log][userid]".
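Applied to the jdbc output above, the corrected statement line would then look roughly like this (a sketch of that one line only, assuming the jdbc output plugin resolves the remaining array entries as event field references):
statement => ["INSERT INTO cste_log (ip, log, event, created, inserted) VALUES(?,?,?,?,?)", "[log][userid]", "log", "[log][event]", "@timestamp", "@timestamp"]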
Related
I am posting JSON from an application to Logstash, wanting to get the location of an IP address with Logstash's geoip plugin. However, I get a _geoip_lookup_failure.
This is my Logstash config:
input {
http {
port => "4200"
codec => json
}
}
filter{
geoip {
source => "clientip"
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
This is what I post to the port:
{'used_credentials': [
{'username': 'l', 'number_of_usages': 1, 'used_commands': {},
'get_access': 'false',
'timestamps': {'1': '04/15/2019, 21:08:54'}, 'password': 'l'}],
'clientip': '192.168.xxx.xx',
'unsuccessfull_logins': 1}
and this is what I get in Logstash:
{
"unsuccessfull_logins" => 1,
"#version" => "1",
"used_credentials" => [
[0] {
"username" => "l",
"used_commands" => {},
"password" => "l",
"timestamps" => {
"1" => "04/15/2019, 21:08:54"
},
"number_of_usages" => 1,
"get_access" => "false"
}
],
"clientip" => "192.168.xxx.xx",
"#timestamp" => 2019-04-15T19:08:57.147Z,
"host" => "127.0.0.1",
"headers" => {
"request_path" => "/telnet",
"connection" => "keep-alive",
"accept_encoding" => "gzip, deflate",
"http_version" => "HTTP/1.1",
"content_length" => "227",
"http_user_agent" => "python-requests/2.21.0",
"request_method" => "POST",
"http_accept" => "*/*",
"content_type" => "application/json",
"http_host" => "127.0.0.1:4200"
},
"geoip" => {},
"tags" => [
[0] "_geoip_lookup_failure"
]
}
I don't understand why the input is recognized correctly but geoip does not find it.
The problem is that your clientip is in the 192.168.0.0/16 network, which is a private network reserved for local use only; it is not present in the database used by the geoip filter.
The geoip filter will only work with public IP addresses.
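If your events can contain both private and public addresses, one way to avoid the failure tag is to skip the lookup for private ranges. A minimal sketch, assuming the bundled cidr filter plugin is available:
filter {
  cidr {
    # tag events whose clientip falls in a private (RFC 1918) range
    address => [ "%{clientip}" ]
    network => [ "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16" ]
    add_tag => [ "private_ip" ]
  }
  if "private_ip" not in [tags] {
    geoip {
      source => "clientip"
    }
  }
}
With a public source address the geoip field will then be populated instead of staying empty.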
I have a problem with my Logstash configuration.
My log lines look like this:
2017-07-26 14:31:03,644 INFO [http-bio-10.60.2.21-10267-exec-92] jsch.DeployManagerFileUSImpl (DeployManagerFileUSImpl.java:132) - passage par ficher temporaire .bindings.20170726-143103.tmp
My current pattern is
match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} \(%{DATA:class}\):%{GREEDYDATA:message}" }
Which pattern should I use for [http-bio-10.60.2.21-10267-exec-92] and for jsch.DeployManagerFileUSImpl?
It doesn't seem like the current pattern you've shown would work: nothing in your sample message matches \(%{DATA:class}\):%{GREEDYDATA:message}, and you're not dealing with the double space after the log level.
If you want to match some random stuff in the middle of a line, use %{DATA}, e.g.:
\[%{DATA:myfield}\]
and then you can use %{GREEDYDATA} to get the stuff at the end of the line:
\[%{DATA:myfield1}\] %{GREEDYDATA:myfield2}
If you need to break these items down into fields of their own, then be more specific with the pattern or use a second grok{} block.
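For instance, a second grok{} block could split the bracketed value captured above into smaller pieces (a sketch; the field names connector and threadnum are made up for illustration):
grok {
  # myfield1 holds something like "http-bio-10.60.2.21-10267-exec-92" from the first pattern
  match => { "myfield1" => "%{DATA:connector}-exec-%{INT:threadnum}" }
}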
In my logstash.conf I have changed my pattern to
match => [ "message", "%{TIMESTAMP_ISO8601:logdate},%{INT} %{LOGLEVEL:log-level} \[(?<threadname>[^\]]+)\] %{JAVACLASS:package} \(%{JAVAFILE:file}:%{INT:line}\) - %{GREEDYDATA:message}" ]
with the help of the site https://grokdebug.herokuapp.com/.
But I cannot see my static log files from the /home/elasticsearch/static_logs/ directory in Kibana 5.4.3. Why not?
My Logstash configuration file, with the "static" section:
input {
file {
type => "access-log"
path => "/home/elasticsearch/tomcat/logs/*.txt"
}
file {
type => "tomcat"
path => "/home/elasticsearch/tomcat/logs/*.log" exclude => "*.zip"
codec => multiline {
negate => true
pattern => "(^%{MONTH} %{MONTHDAY}, 20%{YEAR} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) (?:AM|PM))"
what => "previous"
}
}
file {
type => "static"
path => "/home/elasticsearch/static_logs/*.log" exclude => "*.zip"
}
}
filter {
if [type] == "access-log" {
grok {
# Access log pattern is %a %{waffle.servlet.NegotiateSecurityFilter.PRINCIPAL}s %t %m %U%q %s %B %T "%{Referer}i" "%{User-Agent}i"
match => [ "message" , "%{IPV4:clientIP} %{NOTSPACE:user} \[%{DATA:timestamp}\] %{WORD:method} %{NOTSPACE:request} %{NUMBER:status} %{NUMBER:bytesSent} %{NUMBER:duration} \"%{NOTSPACE:referer}\" \"%{DATA:userAgent}\"" ]
remove_field => [ "message" ]
}
grok{
match => [ "request", "/%{USERNAME:app}/" ]
tag_on_failure => [ ]
}
date {
match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
remove_field => [ "timestamp" ]
}
geoip {
source => ["clientIP"]
}
dns {
reverse => [ "clientIP" ]
}
mutate {
lowercase => [ "user" ]
convert => [ "bytesSent", "integer", "duration", "float" ]
}
if [referer] == "-" {
mutate {
remove_field => [ "referer" ]
}
}
if [user] == "-" {
mutate {
remove_field => [ "user" ]
}
}
}
if [type] == "tomcat" {
if [message] !~ /(.+)/ {
drop { }
}
grok{
patterns_dir => "./patterns"
overwrite => [ "message" ]
# oK Catalina normal
match => [ "message", "%{CATALINA_DATESTAMP:timestamp} %{NOTSPACE:className} %{WORD:methodName}\r\n%{LOGLEVEL: logLevel}: %{GREEDYDATA:message}" ]
}
grok{
match => [ "path", "/%{USERNAME:app}.20%{NOTSPACE}.log"]
tag_on_failure => [ ]
}
# Aug 25, 2014 11:23:31 AM
date{
match => [ "timestamp", "MMM dd, YYYY hh:mm:ss a" ]
remove_field => [ "timestamp" ]
}
}
if [type] == "static" {
if [message] !~ /(.+)/ {
drop { }
}
grok{
patterns_dir => "./patterns"
overwrite => [ "message" ]
# 2017-08-03 16:01:11,352 WARN [Thread-552] pcf2.AbstractObjetMQDAO (AbstractObjetMQDAO.java:137) - Descripteur de
match => [ "message", "%{TIMESTAMP_ISO8601:logdate},%{INT} %{LOGLEVEL:log-level} \[(?<threadname>[^\]]+)\] %{JAVACLASS:package} \(%{JAVAFILE:file}:%{INT:line}\) - %{GREEDYDATA:message}" ]
}
# 2017-08-03 16:01:11,352
date{
match => [ "timestamp", "YYYY-MM-dd hh:mm:ss,SSS" ]
remove_field => [ "timestamp" ]
}
}
}
output {
elasticsearch { hosts => ["192.168.99.100:9200"]}
}
Where is my mistake?
Regards
I'm trying to send Logstash output to CSV, but the columns are not being written to the file.
This is my logstash configuration:
input
{
http
{
host => "0.0.0.0"
port => 31311
}
}
filter
{
grok {
match => { "id" => "%{URIPARAM:id}?" }
}
kv
{
field_split => "&?"
source => "[headers][request_uri]"
}
}
output
{
stdout { codec => rubydebug }
csv
{
fields => ["de,cd,dl,message,bn,ua"]
path => "/tmp/logstash-bq/text.csv"
flush_interval => 0
csv_options => {"col_sep" => ";" "row_sep" => "\r\n"}
}
}
This is my input:
curl -X POST 'http://localhost:31311/?id=9decaf95-20a5-428e-a3ca-50485edb9f9f&uid=1-fg4fuqed-j0hzl5q2&ev=pageview&ed=&v=1&dl=http://dev.xxx.com.br/&rl=http://dev.xxxx.com.br/&ts=1491758180677&de=UTF-8&sr=1600x900...
This is Logstash's answer:
{
"headers" => {
"http_accept" => "*/*",
"request_path" => "/",
"http_version" => "HTTP/1.1",
"request_method" => "POST",
"http_host" => "localhost:31311",
"request_uri" => "/?id=xxx...",
"http_user_agent" => "curl/7.47.1"
},
"de" => "UTF-8",
"cd" => "24",
"dl" => "http://dev.xxx.com.br/",
"message" => "",
"bn" => "Chrome%2057",
"ua" => "Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_11_3)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/57.0.2987.133%20Safari/537.36",
"dt" => "xxxx",
"uid" => "1-fg4fuqed-j0hzl5q2",
"ev" => "pageview",
"#timestamp" => 2017-04-09T17:41:03.083Z,
"v" => "1",
"md" => "false",
"#version" => "1",
"host" => "0:0:0:0:0:0:0:1",
"rl" => "http://dev.xxx.com.br/",
"vp" => "1600x236",
"id" => "9decaf95-20a5-428e-a3ca-50485edb9f9f",
"ts" => "1491758180677",
"sr" => "1600x900"
}
[2017-04-09T14:41:03,137][INFO ][logstash.outputs.csv ] Opening file {:path=>"/tmp/logstash-bq/text.csv"}
But when I open /tmp/logstash-bq/text.csv I see this:
2017-04-09T16:26:17.464Z 127.0.0.1 abc2017-04-09T17:19:19.690Z 0:0:0:0:0:0:0:1 2017-04-09T17:23:12.117Z 0:0:0:0:0:0:0:1 2017-04-09T17:24:08.067Z 0:0:0:0:0:0:0:1 2017-04-09T17:31:39.269Z 0:0:0:0: 0:0:0:1 2017-04-09T17:38:02.624Z 0:0:0:0:0:0:0:1 2017-04-09T17:41:03.083Z 0:0:0:0:0:0:0:1
The CSV output is bugged for Logstash 5.x. I had to install Logstash 2.4.1.
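Independently of that version problem, note that the csv output's fields setting expects an array of separate field names rather than a single comma-joined string, so the output would normally be written like this (a sketch, not verified against the buggy 5.x plugin):
csv {
  fields => ["de", "cd", "dl", "message", "bn", "ua"]
  path => "/tmp/logstash-bq/text.csv"
  flush_interval => 0
  csv_options => {"col_sep" => ";" "row_sep" => "\r\n"}
}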
I have a problem accessing a nested JSON field in Logstash (latest version).
My config file is the following:
input {
http {
port => 5001
codec => "json"
}
}
filter {
mutate {
add_field => {"es_index" => "%{[statements][authority][name]}"}
}
mutate {
gsub => [
"es_index", " ", "_"
]
}
mutate {
lowercase => ["es_index"]
}
ruby {
init => "
def remove_dots hash
new = Hash.new
hash.each { |k,v|
if v.is_a? Hash
v = remove_dots(v)
end
new[ k.gsub('.','_') ] = v
if v.is_a? Array
v.each { |elem|
if elem.is_a? Hash
elem = remove_dots(elem)
end
new[ k.gsub('.','_') ] = elem
} unless v.nil?
end
} unless hash.nil?
return new
end
"
code => "
event.instance_variable_set(:@data, remove_dots(event.to_hash))
"
}
}
output {
stdout {
codec => rubydebug
}
elasticsearch {
hosts => "elasticsearch:9200"
index => "golab-%{+YYYY.MM.dd}"
}
}
I have a filter with mutate. I want to add a field that I can use as part of the index name. When I use "%{[statements][authority][name]}", the content in the brackets is treated as a string: the literal text %{[statements][authority][name]} is saved in the es_index field. Logstash seems to think this is a string, but why?
I've also tried the expression "%{statements}". It works as expected: everything in the statements field is passed to es_index. But if I use "%{[statements][authority]}", strange things happen: es_index is filled with exactly the same output that "%{statements}" produces. What am I missing?
Logstash Output with "%{[statements][authority]}":
{
"statements" => {
"verb" => {
"id" => "http://adlnet.gov/expapi/verbs/answered",
"display" => {
"en-US" => "answered"
}
},
"version" => "1.0.1",
"timestamp" => "2016-07-21T07:41:18.013880+00:00",
"object" => {
"definition" => {
"name" => {
"en-US" => "Example Activity"
},
"description" => {
"en-US" => "Example activity description"
}
},
"id" => "http://adlnet.gov/expapi/activities/example"
},
"actor" => {
"account" => {
"homePage" => "http://example.com",
"name" => "xapiguy"
},
"objectType" => "Agent"
},
"stored" => "2016-07-21T07:41:18.013880+00:00",
"authority" => {
"mbox" => "mailto:info#golab.eu",
"name" => "GoLab",
"objectType" => "Agent"
},
"id" => "0771b9bc-b1b8-4cb7-898e-93e8e5a9c550"
},
"id" => "a7e31874-780e-438a-874c-964373d219af",
"#version" => "1",
"#timestamp" => "2016-07-21T07:41:19.061Z",
"host" => "172.23.0.3",
"headers" => {
"request_method" => "POST",
"request_path" => "/",
"request_uri" => "/",
"http_version" => "HTTP/1.1",
"http_host" => "logstasher:5001",
"content_length" => "709",
"http_accept_encoding" => "gzip, deflate",
"http_accept" => "*/*",
"http_user_agent" => "python-requests/2.9.1",
"http_connection" => "close",
"content_type" => "application/json"
},
"es_index" => "{\"verb\":{\"id\":\"http://adlnet.gov/expapi/verbs/answered\",\"display\":{\"en-us\":\"answered\"}},\"version\":\"1.0.1\",\"timestamp\":\"2016-07-21t07:41:18.013880+00:00\",\"object\":{\"definition\":{\"name\":{\"en-us\":\"example_activity\"},\"description\":{\"en-us\":\"example_activity_description\"}},\"id\":\"http://adlnet.gov/expapi/activities/example\",\"objecttype\":\"activity\"},\"actor\":{\"account\":{\"homepage\":\"http://example.com\",\"name\":\"xapiguy\"},\"objecttype\":\"agent\"},\"stored\":\"2016-07-21t07:41:18.013880+00:00\",\"authority\":{\"mbox\":\"mailto:info#golab.eu\",\"name\":\"golab\",\"objecttype\":\"agent\"},\"id\":\"0771b9bc-b1b8-4cb7-898e-93e8e5a9c550\"}"
}
You can see that authority ended up as part of the es_index string, so the nested reference was not resolved as a field.
Many thanks in advance
I found a solution. Credit goes to jpcarey (Elasticsearch Forum).
I had to remove codec => "json". That leads to a different data structure: statements is now an array and not an object, so I needed to change %{[statements][authority][name]} to %{[statements][0][authority][name]}. That works without problems.
If you follow the given link you'll also find a better implementation of my mutate filters.
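For reference, the relevant part of the filter then becomes (a sketch based on the fix described above):
filter {
  mutate {
    # statements arrives as an array once the json codec is removed,
    # so index into the first element before drilling down to authority/name
    add_field => { "es_index" => "%{[statements][0][authority][name]}" }
  }
}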
I'm trying to parse multiline data from a log file.
I have tried the multiline codec and the multiline filter,
but neither works for me.
Log data
INFO 2014-06-26 12:34:42,881 [4] [HandleScheduleRequests] Request Entity:
User Name : user
DLR : 04
Text : string
Interface Type : 1
Sender : sdr
DEBUG 2014-06-26 12:34:43,381 [4] [HandleScheduleRequests] Entitis is : 1 System.Exception
and this is my configuration file:
input {
file {
type => "cs-bulk"
path =>
[
"/logs/bulk/*.*"
]
start_position => "beginning"
sincedb_path => "/logstash-1.4.1/bulk.sincedb"
codec => multiline {
pattern => "^%{LEVEL4NET}"
what => "previous"
negate => true
}
}
}
output {
stdout { codec => rubydebug }
if [type] == "cs-bulk" {
elasticsearch {
host => localhost
index => "cs-bulk"
}
}
}
filter {
if [type] == "cs-bulk" {
grok {
match => { "message" => "%{LEVEL4NET:level} %{TIMESTAMP_ISO8601:time} %{THREAD:thread} %{LOGGER:method} %{MESSAGE:message}" }
overwrite => ["message"]
}
}
}
and this is what I get when Logstash parses the multiline part.
It just gets the first line and tags it as multiline;
the other lines are not parsed!
{
"#timestamp" => "2014-06-27T16:27:21.678Z",
"message" => "Request Entity:",
"#version" => "1",
"tags" => [
[0] "multiline"
],
"type" => "cs-bulk",
"host" => "lab",
"path" => "/logs/bulk/22.log",
"level" => "INFO",
"time" => "2014-06-26 12:34:42,881",
"thread" => "[4]",
"method" => "[HandleScheduleRequests]"
}
Place a (?m) at the beginning of your grok pattern. That will allow regex to not stop at \n.
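Applied to the grok filter from the question, that would look roughly like this (a sketch; LEVEL4NET, THREAD, LOGGER and MESSAGE are the custom patterns already used above):
grok {
  # (?m) lets the pattern match across the newlines joined by the multiline stage
  match => { "message" => "(?m)%{LEVEL4NET:level} %{TIMESTAMP_ISO8601:time} %{THREAD:thread} %{LOGGER:method} %{MESSAGE:message}" }
  overwrite => ["message"]
}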
Not quite sure what's going on, but using a multiline filter instead of a codec like this:
input {
stdin {
}
}
filter {
multiline {
pattern => "^(WARN|DEBUG|ERROR)"
what => "previous"
negate => true
}
}
Does work in my testing...
{
"message" => "INFO 2014-06-26 12:34:42,881 [4] [HandleScheduleRequests] Request Entity:\nUser Name : user\nDLR : 04\nText : string\nInterface Type : 1\nSender : sdr",
"#version" => "1",
"#timestamp" => "2014-06-27T20:32:05.288Z",
"host" => "HOSTNAME",
"tags" => [
[0] "multiline"
]
}
{
"message" => "DEBUG 2014-06-26 12:34:43,381 [4] [HandleScheduleRequests] Entitis is : 1 System.Exception",
"#version" => "1",
"#timestamp" => "2014-06-27T20:32:05.290Z",
"host" => "HOSTNAME"
}
Except... with the test file I used, it never prints out the last line (because it's still looking for more lines to follow).
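Newer versions of the multiline codec added an auto_flush_interval option that emits a pending event after a period of inactivity, which would flush that dangling last line (a sketch; this option does not exist in the 1.4.1 codec used in the question):
codec => multiline {
  pattern => "^%{LEVEL4NET}"
  what => "previous"
  negate => true
  auto_flush_interval => 5
}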