I'm running Logstash as described on the getting-started page:
java -jar logstash-1.2.1-flatjar.jar agent --config logstash-dev.conf
With logstash-dev.conf like this:
input {
  file {
    path => ["/tmp/catalina.jsonevent.log"]
    codec => json {
      charset => "UTF-8"
    }
  }
}
output {
  # Use stdout in debug mode again to see what logstash makes of the event.
  stdout {
    debug => true
  }
  elasticsearch_http {
    host => "127.0.0.1"
  }
}
It then fails with this error:
Exception in thread "LogStash::Runner" org.jruby.exceptions.RaiseException: (StoreError) loading file failed: problem creating X509 Aux certificate: java.io.IOException: problem parsing cert: java.security.cert.CertificateParsingException: java.io.IOException: Duplicate extensions not allowed
at org.jruby.ext.openssl.X509Store.add_file(org/jruby/ext/openssl/X509Store.java:151)
at RUBY.initialize(file:/usr/local/bin/logstash/logstash-1.2.1-flatjar.jar!/ftw/agent.rb:70)
at RUBY.register(file:/usr/local/bin/logstash/logstash-1.2.1-flatjar.jar!/logstash/outputs/elasticsearch_http.rb:46)
at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1617)
at RUBY.outputworker(file:/usr/local/bin/logstash/logstash-1.2.1-flatjar.jar!/logstash/pipeline.rb:208)
at RUBY.start_outputs(file:/usr/local/bin/logstash/logstash-1.2.1-flatjar.jar!/logstash/pipeline.rb:140)
I've looked everywhere (Google, the Logstash and JRuby mailing lists, and their IRC channels) but I can't find a way to solve this. I only see similar stack traces with no solution.
Can you give me any pointers on how to address this?
Thanks in advance.
We've been looking into this as well, and this fixed it for us:
curl http://curl.haxx.se/ca/cacert.pem -o /usr/local/etc/openssl/cert.pem
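A minimal sketch of the same fix with a backup step (the /usr/local/etc/openssl path assumes a Homebrew-style OpenSSL layout, as in the command above; adjust to wherever your certificate bundle lives):
# back up the existing bundle, then replace it with curl's CA bundle
mv /usr/local/etc/openssl/cert.pem /usr/local/etc/openssl/cert.pem.bak
curl http://curl.haxx.se/ca/cacert.pem -o /usr/local/etc/openssl/cert.pem
# re-run logstash to confirm the X509 StoreError is gone
java -jar logstash-1.2.1-flatjar.jar agent --config logstash-dev.conf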
I downloaded the chisel-tutorial offered on the ucb-bar website.
To practice, I created a Scala file named "Regfile.scala" under the path:
"chisel-tutorial/src/main/scala/solutions/Regfile.scala".
The Test-file is stored under the path :
"chisel-tutorial/src/test/scala/solutions/RegfileTests.scala".
When running sbt (after executing the command "test:run-main solutions.Launcher Regfile"), I got:
"Errors: 1: in the following tutorials
Bad tutorial name: Regfile "
How can I solve this problem?
You have to add your Regfile to Launcher.scala. The launcher is located at:
src/test/scala/solutions/Launcher.scala
You can add something like this to Launcher.scala to test your Regfile:
"Regfile" -> { (backendName: String) =>
Driver(() => new Regfile(), backendName) {
(c) => new RegfileTests(c)
}
},
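For context, that entry goes into the map that Launcher.scala already defines. A rough sketch of the surrounding file is below (variable and entry names are approximate; match them to what your checkout of chisel-tutorial actually uses, and keep the file's existing imports at the top):
// src/test/scala/solutions/Launcher.scala (sketch only)
object Launcher {
  val solutions = Map(
    // ... existing entries ...
    "Regfile" -> { (backendName: String) =>
      Driver(() => new Regfile(), backendName) {
        (c) => new RegfileTests(c)
      }
    }
  )
  def main(args: Array[String]): Unit = {
    TutorialRunner("solutions", solutions, args)
  }
}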
I am currently trying to set up some data collection for our app using the full ELK stack (Beats - Logstash - ElasticSearch - Kibana). So far everything is working as it should, but I have a requirement to capture statistics on the exceptions thrown by the applications (e.g. java.lang.IllegalArgumentException).
I am not really interested in the stack trace itself so I went ahead and added a separate grok filter just for the exception.
Example of Message:
2016-11-15 05:19:28,801 ERROR [App-Initialisation-Thread] appengine.java:520 Failed to initialize external authenticator myapp Support Access || appuser#vm23-13:/mnt/data/install/assembly app-1.4.12#cad85b224cce11eb5defa126030f21fa867b0dad
java.lang.IllegalArgumentException: Could not check if provided root is a directory
at com.myapp.io.AbstractRootPrefixedFileSystem.checkAndGetRoot(AbstractRootPrefixedFileSystem.java:67)
at com.myapp.io.AbstractRootPrefixedFileSystem.<init>(AbstractRootPrefixedFileSystem.java:30)
at com.myapp.io.s3.S3FileSystem.<init>(S3FileSystem.java:32)
at com.myapp.io.s3.S3FileSystemDriver.loadFileSystem(S3FileSystemDriver.java:60)
at com.myapp.io.FileSystems.getFileSystem(FileSystems.java:55)
at com.myapp.authentication.ldap.S3LdapConfigProvider.initializeCloudFS(S3LdapConfigProvider.java:77)
at com.myapp.authentication.ldap.S3LdapConfigProvider.loadS3Config(S3LdapConfigProvider.java:51)
at com.myapp.authentication.ldap.S3LdapConfigProvider.getLdapConfig(S3LdapConfigProvider.java:42)
at com.myapp.authentication.ldap.DelegatingLdapConfigProvider.getLdapConfig(DelegatingLdapConfigProvider.java:45)
at com.myapp.authentication.ldap.LdapExternalAuthenticatorFactory.create(LdapExternalAuthenticatorFactory.java:28)
at com.myapp.authentication.ldap.LdapExternalAuthenticatorFactory.create(LdapExternalAuthenticatorFactory.java:10)
at com.myapp.frob.appengine.getExternalAuthenticators(appengine.java:516)
at com.myapp.frob.appengine.startUp(appengine.java:871)
at com.myapp.frob.appengine.startUp(appengine.java:754)
at com.myapp.jsp.KewServeInitContextListener$1.run(QServerInitContextListener.java:104)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.file.NoSuchFileException: fh-ldap-config/
at com.upplication.s3fs.util.S3Utils.getS3ObjectSummary(S3Utils.java:55)
at com.upplication.s3fs.util.S3Utils.getS3FileAttributes(S3Utils.java:64)
at com.upplication.s3fs.S3FileSystemProvider.readAttributes(S3FileSystemProvider.java:463)
at com.myapp.io.AbstractRootPrefixedFileSystem.checkAndGetRoot(AbstractRootPrefixedFileSystem.java:61)
Example of grok statement:
grok {
  patterns_dir => ["./patterns"]
  match => ["message", "%{GREEDYDATA}\n%{JAVAFILE:exception}"]
}
Testing on the grok debugger shows correct results:
{
  "GREEDYDATA": [
    [
      "2016-11-15 05:19:28,801 ERROR [App-Initialisation-Thread] appengine.java:520 Failed to initialize external authenticator myapp Support Access || appuser#vm23-13:/mnt/data/install/assembly app-1.4.12#cad85b224cce11eb5defa126030f21fa867b0dad"
    ]
  ],
  "exception": [
    [
      "java.lang.IllegalArgumentException"
    ]
  ]
}
Problem
When I add the configuration to Logstash, it captures the "Caused" string instead of the exception name, even though the "Caused" string comes after another newline character. However, it works perfectly for other exception messages such as:
2016-11-15 06:17:49,691 WARN [SCReplicationWorkerThread-2] ClientJob.java:207 50345 Error communicating to server `199.181.131.249':`80'. Waiting `10' seconds before retrying... If you see this message rarely, the sc will have recovered gracefully. || appuser#vm55-12:/mnt/deployment/install/app app-3.1.23#cad85b224cce11eb5defa126030f21fa867b0dad
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.net.SocketInputStream.read(SocketInputStream.java:223)
at java.io.DataInputStream.readBoolean(DataInputStream.java:242)
at com.myapp.replication.client.ClientJob.passCheckRevision(ClientJob.java:279)
at com.myapp.replication.client.ClientJob.execute(ClientJob.java:167)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:534)
Any advice would be appreciated.
Thanks
Did you set up the multiline codec in the Logstash input, or multiline in the Filebeat input?
Something like the following makes a new event start only at a line beginning with an ISO8601 timestamp; I suspect your multiline config is not capturing the whole stack trace as one event:
input {
  beats {
    port => 5044
    codec => multiline {
      pattern => "^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}[\.,][0-9]{3,7} "
      negate => true
      what => "previous"
    }
  }
}
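Alternatively, you can do the joining on the shipper side. A sketch of the equivalent Filebeat prospector config (Filebeat 5.x-style syntax; the path is a placeholder):
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/myapp/application.log   # placeholder path
    # Lines that do NOT start with a timestamp are appended to the previous
    # line, so the whole stack trace is shipped as one event.
    multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}'
    multiline.negate: true
    multiline.match: after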
I have Filebeat, Logstash, ElasticSearch and Kibana. Filebeat is on a separate server, and it's supposed to receive data in different formats (syslog, JSON, from a database, etc.) and send it to Logstash.
I know how to set up Logstash to handle a single format, but since there are multiple data formats, how would I configure Logstash to handle each data format properly?
In fact, how can I set up both Logstash and Filebeat so that all the data in the different formats gets sent from Filebeat and submitted to Logstash properly? I mean the config settings that handle sending and receiving data.
To separate different types of inputs within the Logstash pipeline, use the type field and tags for further identification.
In your Filebeat configuration, you should use a different prospector for each data format; each prospector can then be set to have a different document_type: field.
For example:
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      # Paths that should be crawled and fetched. Glob based paths.
      # For each file found under this path, a harvester is started.
      paths:
        - "/var/log/apache/httpd-*.log"
      # Type to be published in the 'type' field. For Elasticsearch output,
      # the type defines the document type these entries should be stored
      # in. Default: log
      document_type: apache
    -
      paths:
        - /var/log/messages
        - "/var/log/*.log"
      document_type: log_message
In the above example, logs from /var/log/apache/httpd-*.log will have document_type: apache, while the other prospector has document_type: log_message.
This document_type field becomes the type field when Logstash is processing the event. You can then use if statements in Logstash to apply different processing to different types.
For example:
filter {
  if [type] == "apache" {
    # apache specific processing
  }
  else if [type] == "log_message" {
    # log_message processing
  }
}
If the "data formats" in your question are codecs, this has to be configured in the Logstash input. The following applies to Filebeat 1.x and Logstash 2.x, not the Elastic 5 stack.
In our setup, we have two beats inputs - the first uses the default codec ("plain"):
beats {
  port => 5043
}
beats {
  port => 5044
  codec => "json"
}
On the filebeat side, we need two filebeat instances, sending their output to their respective ports. It's not possible to tell filebeat "route this prospector to that output".
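For example, the two instances could be configured roughly like this (Filebeat 1.x syntax; the paths and the Logstash host are placeholders):
# filebeat-plain.yml -- first instance, plain-text logs to the default-codec input
filebeat:
  prospectors:
    -
      paths:
        - "/var/log/plain/*.log"      # placeholder
output:
  logstash:
    hosts: ["logstash.example.com:5043"]

# filebeat-json.yml -- second instance, JSON logs to the json-codec input
filebeat:
  prospectors:
    -
      paths:
        - "/var/log/json/*.log"       # placeholder
output:
  logstash:
    hosts: ["logstash.example.com:5044"]

Each instance is then started with its own -c <config file>.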
Documentation logstash: https://www.elastic.co/guide/en/logstash/2.4/plugins-inputs-beats.html
Remark: If you ship with different protocols, e.g. legacy logstash-forwarder / lumberjack, you need more ports.
This works with 7.5.1.
filebeat-multifile.yml // Filebeat installed on one machine
filebeat.inputs:
- type: log
  tags: ["gunicorn"]
  paths:
    - "/home/hduser/Data/gunicorn-100.log"
- type: log
  tags: ["apache"]
  paths:
    - "/home/hduser/Data/apache-access-100.log"

output.logstash:
  hosts: ["0.0.0.0:5044"]   # target Logstash IP
gunicorn-apache-log.conf // Logstash installed on another machine
input {
  beats {
    port => "5044"
    host => "0.0.0.0"
  }
}
filter {
  if "gunicorn" in [tags] {
    grok {
      match => { "message" => "%{USERNAME:u1} %{USERNAME:u2} \[%{HTTPDATE:http_date}\] \"%{DATA:http_verb} %{URIPATHPARAM:api} %{DATA:http_version}\" %{NUMBER:status_code} %{NUMBER:byte} \"%{DATA:external_api}\" \"%{GREEDYDATA:android_client}\"" }
      remove_field => "message"
    }
  }
  else if "apache" in [tags] {
    grok {
      match => { "message" => "%{IPORHOST:client_ip} %{DATA:u1} %{DATA:u2} \[%{HTTPDATE:http_date}\] \"%{WORD:http_method} %{URIPATHPARAM:api} %{DATA:http_version}\" %{NUMBER:status_code} %{NUMBER:byte} \"%{DATA:external_api}\" \"%{GREEDYDATA:gd}\" \"%{DATA:u3}\"" }
      remove_field => "message"
    }
  }
}
output {
  if "gunicorn" in [tags] {
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => [...]
      index => "gunicorn-index"
    }
  }
  else if "apache" in [tags] {
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => [...]
      index => "apache-index"
    }
  }
}
Run Filebeat from the binary. Give proper permissions to the file:
sudo chown root:root filebeat-multifile.yml
sudo chmod go-w filebeat-multifile.yml
sudo ./filebeat -e -c filebeat-multifile.yml -d "publish"
Run Logstash from the binary:
./bin/logstash -f gunicorn-apache-log.conf
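Optionally, you can validate the pipeline file first; recent Logstash versions (including 7.5.1) support a config-test flag:
./bin/logstash -f gunicorn-apache-log.conf --config.test_and_exit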
I am currently working on setting up the ELK stack on Bluemix containers. By following this blog, I was able to create a Logstash drain and get all the Cloud Foundry logs from the Bluemix web app into Logstash.
Is there a way to filter logs based on log level? I am trying to filter out ERR in the Logstash output and send those messages to Slack.
The following code is the filter configuration of the logstash.conf file:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOG5424PRI}%{NONNEGINT:syslog5424_ver} +(?:%{TIMESTAMP_ISO8601:syslog5424_ts}|-) +(?:%{HOSTNAME:syslog5424_host}|-) +(?:%{NOTSPACE:syslog5424_app}|-) +(?:%{NOTSPACE:syslog5424_proc}|-) +(?:%{WORD:syslog5424_msgid}|-) +(?:%{SYSLOG5424SD:syslog5424_sd}|-|) +%{GREEDYDATA:syslog5424_msg}" }
    }
I am trying to add a Slack webhook to the logstash.conf output so that when a log level with ERR is detected, the error message is posted into the Slack channel.
My output conf file with the Slack HTTP post looks something like this:
output {
  if [loglevel] == "ERR" {
    http {
      http_method => "post"
      url => "https://hooks.slack.com/services/<id>"
      format => "json"
      mapping => {
        "channel" => "#logstash-staging"
        "username" => "pca_elk"
        "text" => "%{message}"
        "icon_emoji" => ":warning:"
      }
    }
  }
  elasticsearch { }
}
Sample logs from Cloud Foundry:
2016-05-25T13:14:51.269-0400[App/0]ERR npm ERR! There is likely additional logging output above.
2016-05-25T13:14:51.269-0400[App/0]ERR npm ERR! npm owner ls pca-uiapi
2016-05-25T13:14:51.274-0400[App/0]ERR npm ERR! /home/vcap/app/npm-debug.log
2016-05-25T13:14:51.274-0400[App/0]ERR npm ERR! Please include the following file with any support request:
2016-05-25T13:14:51.431-0400[API/1]OUT App instance exited with guid cc73db5d-6e8c-4ff4-b20f-a69d7c2ba9f4 payload: {"cc_partition"=>"default", "droplet"=>"cc73db5d-6e8c-4ff4-b20f-a69d7c2ba9f4", "version"=>"f9fb3e09-f234-43d4-94b1-a337f8ad72ad", "instance"=>"9d7ad0585b824fa196a2a64e78df9eef", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"app instance exited", "crash_timestamp"=>1464196491}
2016-05-25T13:16:10.948-0400[DEA/50]OUT Starting app instance (index 0) with guid cc73db5d-6e8c-4ff4-b20f-a69d7c2ba9f4
2016-05-25T13:16:36.032-0400[App/0]OUT > pca-uiapi#1.0.0-build.306 start /home/vcap/app
2016-05-25T13:16:36.032-0400[App/0]OUT > node server.js
2016-05-25T13:16:36.032-0400[App/0]OUT
2016-05-25T13:16:37.188-0400[App/0]OUT PCA REST Service is listenning on port: 62067
2016-05-25T13:19:02.241-0400[App/0]ERR at Layer.handle_error (/home/vcap/app/node_modules/express/lib/router/layer.js:71:5)
2016-05-25T13:19:02.241-0400[App/0]ERR at /home/vcap/app/node_modules/body-parser/lib/read.js:125:7
2016-05-25T13:19:02.241-0400[App/0]ERR at Object.module.exports.log (/home/vcap/app/utils/Logger.js:35:25)
Is there a way to get this working? Is there a way to check the log level of each message? I am kind of stuck and was wondering if you could help me out.
In the Bluemix UI, the logs can be filtered based on the channel, ERR or OUT. I could not figure out how to do the same in Logstash.
Thank you for looking into this problem.
The grok provided in that article is meant to parse the syslog message coming in on port 5000. After all the syslog filters have run, your application log (i.e. the sample log lines you've shown in your question) is in the #message field.
So you need another grok to parse that field. After the last mutate you can add this:
grok {
  match => { "#message" => "%{TIMESTAMP_ISO8601:timestamp}\[%{WORD:app}/%{NUMBER:num}\]%{WORD:loglevel} %{GREEDYDATA:log}" }
}
After this filter runs, you'll have a field named loglevel which will contain either ERR or OUT; in the former case it will activate your Slack output.
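As a quick check, applying that pattern to the first ERR line from your sample should produce fields along these lines (shown in a grok-debugger-style layout):
{
  "timestamp": "2016-05-25T13:14:51.269-0400",
  "app": "App",
  "num": "0",
  "loglevel": "ERR",
  "log": "npm ERR! There is likely additional logging output above."
}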
When my Grails application crashes, it shows the error and the stack trace on the error page, because the error.gsp page has the following snippet: <g:renderException exception="${exception}" />. However, nothing gets logged in the log file.
How can I change this? For the production application I plan to remove the renderException, because I don't want users to see the entire stack trace.
My log4j settings are as follows:
log4j = {
    appenders {
        rollingFile name:'catalinaOut', maxFileSize:1024, fileName:"${System.properties.getProperty('catalina.home')}/logs/mylog.log"
    }
    root {
        error 'catalinaOut'
        debug 'catalinaOut'
        additivity = true
    }
    error 'org.codehaus.groovy.grails.web.servlet',        // controllers
          'org.codehaus.groovy.grails.web.pages',          // GSP
          'org.codehaus.groovy.grails.web.sitemesh',       // layouts
          'org.codehaus.groovy.grails.web.mapping.filter', // URL mapping
          'org.codehaus.groovy.grails.web.mapping',        // URL mapping
          'org.codehaus.groovy.grails.commons',            // core / classloading
          'org.codehaus.groovy.grails.plugins',            // plugins
          'org.codehaus.groovy.grails.orm.hibernate',      // hibernate integration
          'org.springframework',
          'org.hibernate',
          'net.sf.ehcache.hibernate',
          'grails.app'
    debug 'grails.app'
}
I'm running the app in development as grails run-app
I use these settings for console and file-based logging. You can remove stdout if you don't want/need console output. Just put your error classes in the corresponding list.
log4j = {
    def loggerPattern = '%d %-5p >> %m%n'
    def errorClasses = []                                       // add more classes if needed
    def infoClasses = ['grails.app.controllers.myController']   // add more classes if needed
    def debugClasses = []                                       // add more classes if needed
    appenders {
        console name:'stdout', layout:pattern(conversionPattern: loggerPattern)
        rollingFile name: "file", maxFileSize: 1024, file: "./tmp/logs/logger.log", layout:pattern(conversionPattern: loggerPattern)
    }
    error stdout: errorClasses, file: errorClasses
    info stdout: infoClasses, file: infoClasses
    debug stdout: debugClasses, file: debugClasses
}
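For reference, once a class's logger name is in one of those lists (e.g. grails.app.controllers.MyController), logging from it looks roughly like this; MyController and riskyOperation are hypothetical names used only for illustration:
// Grails injects the 'log' property into controllers and services automatically.
class MyController {
    def index() {
        log.info "handling index"               // goes to stdout and the rolling file at info level
        try {
            riskyOperation()                    // hypothetical helper
        } catch (e) {
            log.error "index action failed", e  // message plus stack trace go to the error-level appenders
        }
    }
}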