AWS CloudWatch -> Logstash -> Elastic Cloud | [ERROR][logstash.pipeline] Error registering plugin - configuration

I am running Logstash v6.8.1 on an AWS EC2 Amazon Linux 2 AMI instance. Logstash is installed successfully.
I am using Logstash to pull logs from AWS CloudWatch with this plugin: https://github.com/lukewaite/logstash-input-cloudwatch-logs. The plugin is also installed successfully.
I am in the /usr/share/logstash directory.
The command I use is sudo bin/logstash --path.settings /etc/logstash/ -f config/cloud_watch.conf
This is my cloud_watch.conf file:
input {
cloudwatch_logs {
log_group => [ "/my/log/group" ]
region => "us-west-2"
access_key_id => "access_key"
secret_access_key => "secret_key"
}
}
output {
elasticsearch {
hosts => "https://xxxxxxxx.us-west-1.aws.found.io:9243"
user => "elastic"
password => "my_password"
}
stdout { }
}
When I run this command, I get this error message:
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2019-07-01T19:33:17,229][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-07-01T19:33:17,261][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.8.1"}
[2019-07-01T19:34:05,286][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-07-01T19:34:06,506][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@41e8f9885e01498aaa03909926286fc9.us-west-1.aws.found.io:9243/]}}
[2019-07-01T19:34:07,754][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@41e8f9885e01498aaa03909926286fc9.us-west-1.aws.found.io:9243/"}
[2019-07-01T19:34:08,126][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::OutputDelegator:0x6d3bd20a>", :error=>"Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')\n
at [Source: (byte[])\"<!DOCTYPE html><html lang=\"en\"><head><meta charset=\"utf-8\"><meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge,chrome=1\"><meta name=\"viewport\" content=\"width=device-width\"><title>Kibana</title><style>/* INTER UI FONT */\n/
* INTER UI FONT */\n/* INTER UI FONT */\n/* INTER UI FONT */\n#font-face {\n font-family: 'Inter UI';\n font-style: normal;\n font-weight: 100;\n src: url(\"/ui/fonts/inter_ui/Inter-UI-Thin-BETA.woff2\") format(\"woff2\"),\n url(\"/ui/fonts/inter_ui/Inter-UI-Thin-BETA.woff\") format\"[truncated 73333 bytes]; line: 1, column: 2]", :thread=>"#<Thread:0x66dd4fdf run>"}
[2019-07-01T19:34:08,136][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<LogStash::Json::ParserError: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
at [Source: (byte[])"<!DOCTYPE html><html lang="en"><head><meta charset="utf-8"><meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"><meta name="viewport" content="width=device-width"><title>Kibana</title><style>/* INTER UI FONT */
/* INTER UI FONT */
/* INTER UI FONT */
/* INTER UI FONT */
#font-face {
font-family: 'Inter UI';
font-style: normal;
font-weight: 100;
src: url("/ui/fonts/inter_ui/Inter-UI-Thin-BETA.woff2") format("woff2"),
The error message goes on for quite a few lines.
When I modify my cloud_watch.conf to this:
input {
cloudwatch_logs {
log_group => [ "/my/log/group" ]
region => "us-west-2"
access_key_id => "access_key"
secret_access_key => "secret_key"
}
}
output {
stdout { }
}
I can see new log events from my log stream printed in my CLI console, so I know the input plugin is working properly. BUT when I attempt to send those logs to my Elastic Cloud deployment, I get the error mentioned above. I have no idea what is happening.
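One thing I notice is that the HTML in the error is a Kibana page (<title>Kibana</title>), so a quick check (a sketch only; the URL and credentials are the placeholders from the config above) would be to curl the hosts URL and confirm it returns the Elasticsearch cluster JSON rather than HTML:
# Expect a JSON document with cluster_name, version.number, etc.
# If this returns "<!DOCTYPE html>...<title>Kibana</title>", the URL points at the
# Kibana endpoint of the deployment rather than the Elasticsearch endpoint.
curl -u elastic:my_password "https://xxxxxxxx.us-west-1.aws.found.io:9243"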

Related

Logstash: Unable to connect to external Amazon RDS Database

I am relatively new to Logstash and Elasticsearch...
I installed Logstash and Elasticsearch with Homebrew on macOS Mojave (10.14.2):
brew install logstash
brew install elasticsearch
When I check for these versions:
brew list --versions
I receive the following output:
elasticsearch 6.5.4
logstash 6.5.4
When I open Google Chrome and type this into the address bar:
localhost:9200
This is the JSON response that I receive:
{
"name" : "9oJAP16",
"cluster_name" : "elasticsearch_local",
"cluster_uuid" : "PgaDRw8rSJi-NDo80v_6gQ",
"version" : {
"number" : "6.5.4",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "d2ef93d",
"build_date" : "2018-12-17T21:17:40.758843Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
Inside /usr/local/etc/logstash/logstash.yml reside the following settings:
path.data: /usr/local/Cellar/logstash/6.5.4/libexec/data
pipeline.workers: 2
path.config: /usr/local/etc/logstash/conf.d
log.level: info
path.logs: /usr/local/var/log
Inside /usr/local/etc/logstash/pipelines.yml reside the following settings:
- pipeline.id: main
path.config: "/usr/local/etc/logstash/conf.d/*.conf"
I have set up the following logstash_etl.conf file underneath /usr/local/etc/logstash/conf.d. Its contents:
input {
jdbc {
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products"
jdbc_user => "products_admin"
jdbc_password => "products123"
jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
jdbc_driver_class => "com.mysql.jdbc.driver"
schedule => "*/5 * * * *"
statement => "select * from products"
use_column_value => false
clean_run => true
}
}
# sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-exec
output {
if ([purge_task] == "yes") {
exec {
command => "curl -XPOST 'localhost:9200/_all/products/_delete_by_query?conflicts=proceed' -H 'Content-Type: application/json' -d'
{
\"query\": {
\"range\" : {
\"#timestamp\" : {
\"lte\" : \"now-3h\"
}
}
}
}
'"
}
}
else {
stdout { codec => json_lines}
elasticsearch {
"hosts" => "localhost:9200"
"index" => "product_%{product_api_key}"
"document_type" => "%{[#metadata][index_type]}"
"document_id" => "%{[#metadata][index_id]}"
"doc_as_upsert" => true
"action" => "update"
"retry_on_conflict" => 7
}
}
}
When I do this:
brew services start logstash
I receive the following inside my /usr/local/var/log/logstash-plain.log file:
[2019-01-15T14:51:15,319][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x399927c7 run>"}
[2019-01-15T14:51:15,663][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-15T14:51:16,514][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-15T14:57:31,432][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
[2019-01-15T14:57:31,435][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
What am I possibly doing wrong?
Is there a way to obtain a dump (e.g. mysqldump) from an Elasticsearch server (stage or production) and then re-import it into a local Elasticsearch instance without using Logstash?
This is the same configuration file that works inside an Amazon EC2 production instance, but I don't know why it's not working in my local macOS Mojave instance.
You may be running into the RDS SSL issue, since:
If you use either the MySQL Java Connector v5.1.38 or later, or the MySQL Java Connector v8.0.9 or later to connect to your databases, even if you haven't explicitly configured your applications to use SSL/TLS when connecting to your databases, these client drivers default to using SSL/TLS. In addition, when using SSL/TLS, they perform partial certificate verification and fail to connect if the database server certificate is expired.
as described in the AWS RDS documentation.
To work around it, either set up the trust store for Logstash, which is described in the same link,
or accept the risk and disable SSL in the connection string, like:
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products?sslMode=DISABLED"
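Setting up the trust store is roughly as follows (a sketch only, not the official procedure: the CA bundle URL, file names, and paths are assumptions, and keytool only imports the first certificate of a multi-certificate PEM, so a region-specific certificate or a split bundle may be needed; check the AWS RDS SSL documentation for the current bundle):
# Download the RDS CA certificate (hypothetical file name; see the AWS docs)
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
# Import it into a Java trust store
keytool -importcert -alias rds-ca -file rds-combined-ca-bundle.pem -keystore rds-truststore.jks -storepass changeit -noprompt
# Point the Logstash JVM at the trust store before starting Logstash (LS_JAVA_OPTS is read by bin/logstash)
export LS_JAVA_OPTS="-Djavax.net.ssl.trustStore=/usr/local/etc/logstash/rds-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"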

Logstash breaking with recent renaming of JDBC MySQL connector

For some reason Logstash with the Elastic Stack X-Pack is breaking. I believe it's due to the recent renaming of the MySQL connector, which hasn't been updated in the various config files. However, I can't tell from this error log which file the error originates from. Additionally, if anyone knows how to rename the actual MySQL connector class from com.mysql.cj.jdbc.Driver to com.mysql.jdbc.Driver, that should fix everything.
Error Log:
C:\Program Files\logstash-6.3.2\bin>logstash -f sql.conf
Sending Logstash's logs to C:/Program Files/logstash-6.3.2/logs which is now configured via log4j2.properties
[2018-08-31T15:01:52,502][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-31T15:01:53,031][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-31T15:01:55,217][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-31T15:02:06,342][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-31T15:02:06,342][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-31T15:02:06,539][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-31T15:02:06,575][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-31T15:02:06,592][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-31T15:02:06,609][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-08-31T15:02:06,625][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-31T15:02:06,659][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-31T15:02:06,909][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x598b8674 sleep>"}
[2018-08-31T15:02:06,996][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-31T15:02:07,359][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-08-31T15:02:07,895][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::JavaSql::SQLNonTransientConnectionException: Cannot load connection class because of underlying exception: com.mysql.cj.exceptions.WrongArgumentException: Failed to parse the host:port pair 'localhost:3306;user=test;password=test123;databaseName=test;integratedSecurity=true;'."}
[2018-08-31T15:02:07,916][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:main
Plugin: <LogStash::Inputs::Jdbc jdbc_connection_string=>"jdbc:mysql://localhost:3306;user=test;password=test123;databaseName=test;integratedSecurity=true;", jdbc_driver_class=>"com.mysql.cj.jdbc.Driver", jdbc_user=>"doesntmatterwithauthentication", statement=>"SELECT * FROM phones", id=>"de31f73d4505e1de7e76bce4917c48b412909473f3872288edd51acccf0e0be6", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_ad3ad9d5-a3e8-492a-9231-1e729e8c4190", enable_metric=>true, charset=>"UTF-8">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>"info", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, parameters=>{"sql_last_value"=>1970-01-01 01:00:00 +0100}, last_run_metadata_path=>"C:\\Users\\ross.massie/.logstash_jdbc_last_run", use_column_value=>false, tracking_column_type=>"numeric", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>
Error: Java::JavaSql::SQLNonTransientConnectionException: Cannot load connection class because of underlying exception: com.mysql.cj.exceptions.WrongArgumentException: Failed to parse the host:port pair 'localhost:3306;user=test;password=test123;databaseName=test;integratedSecurity=true;'.
Exception: Sequel::DatabaseConnectionError
Stack: com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(com/mysql/cj/jdbc/exceptions/SQLError.java:110)
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(com/mysql/cj/jdbc/exceptions/SQLError.java:97)
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(com/mysql/cj/jdbc/exceptions/SQLError.java:89)
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(com/mysql/cj/jdbc/exceptions/SQLError.java:63)
com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(com/mysql/cj/jdbc/exceptions/SQLError.java:73)
com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(com/mysql/cj/jdbc/exceptions/SQLExceptionsMapping.java:79)
com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(com/mysql/cj/jdbc/exceptions/SQLExceptionsMapping.java:131)
com.mysql.cj.jdbc.NonRegisteringDriver.connect(com/mysql/cj/jdbc/NonRegisteringDriver.java:227)
java.lang.reflect.Method.invoke(java/lang/reflect/Method)
org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:423)
org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:290)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.adapters.jdbc.connect(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/adapters/jdbc.rb:215)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.adapters.jdbc.RUBY$method$connect$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/sequel_minus_5_dot_10_dot_0/lib/sequel/adapters/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/adapters/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.make_new(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool.rb:127)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.RUBY$method$make_new$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/sequel_minus_5_dot_10_dot_0/lib/sequel/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.threaded.assign_connection(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool/threaded.rb:206)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.threaded.RUBY$method$assign_connection$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/sequel_minus_5_dot_10_dot_0/lib/sequel/connection_pool/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool/threaded.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.threaded.acquire(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool/threaded.rb:138)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.threaded.RUBY$method$acquire$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/sequel_minus_5_dot_10_dot_0/lib/sequel/connection_pool/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool/threaded.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.connection_pool.threaded.hold(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/connection_pool/threaded.rb:90)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.database.connecting.synchronize(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/database/connecting.rb:270)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.database.connecting.test_connection(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/database/connecting.rb:279)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.database.connecting.connect(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/database/connecting.rb:58)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.sequel_minus_5_dot_10_dot_0.lib.sequel.core.connect(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.10.0/lib/sequel/core.rb:116)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.block in jdbc_connect(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb:114)
org.jruby.RubyKernel.loop(org/jruby/RubyKernel.java:1292)
org.jruby.RubyKernel$INVOKER$s$0$0$loop.call(org/jruby/RubyKernel$INVOKER$s$0$0$loop.gen)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.jdbc_connect(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb:111)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.RUBY$method$jdbc_connect$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9/lib/logstash/plugin_mixins/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.open_jdbc_connection(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb:164)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.RUBY$method$open_jdbc_connection$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9/lib/logstash/plugin_mixins/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.execute_statement(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb:220)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.plugin_mixins.jdbc.RUBY$method$execute_statement$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9/lib/logstash/plugin_mixins/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.inputs.jdbc.execute_query(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb:264)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.inputs.jdbc.RUBY$method$execute_query$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9/lib/logstash/inputs/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.inputs.jdbc.run(C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb:250)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9.lib.logstash.inputs.jdbc.RUBY$method$run$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/vendor/bundle/jruby/$2_dot_3_dot_0/gems/logstash_minus_input_minus_jdbc_minus_4_dot_3_dot_9/lib/logstash/inputs/C:/Program Files/logstash-6.3.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.logstash_minus_core.lib.logstash.pipeline.inputworker(C:/Program Files/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb:512)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.logstash_minus_core.lib.logstash.pipeline.RUBY$method$inputworker$0$__VARARGS__(C_3a_/Program_20_Files/logstash_minus_6_dot_3_dot_2/logstash_minus_core/lib/logstash/C:/Program Files/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb)
C_3a_.Program_20_Files.logstash_minus_6_dot_3_dot_2.logstash_minus_core.lib.logstash.pipeline.block in start_input(C:/Program Files/logstash-6.3.2/logstash-core/lib/logstash/pipeline.rb:505)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:289)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:246)
sql.conf:
input {
jdbc {
jdbc_connection_string => "jdbc:mysql://localhost:3306/test?useSSL=false&serverTimezone=GMT&DatabaseName=test"
jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
jdbc_driver_library => "C:\Program Files (x86)\MySQL\Connector J 8.0\mysql-connector-java-8.0.12.jar"
jdbc_user => "test"
jdbc_password => "test123"
statement => "SELECT * FROM phones"
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "phones"
}
}
Rewriting the whole connection string worked after a few tweaks. At the time of the issue, if memory serves, a few driver references I found within X-Pack were pointing at the wrong driver and needed changing.
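For illustration (a sketch; the database name and credentials are the test values from the question), the failing string uses SQL Server style semicolon parameters, while MySQL Connector/J expects the database in the URL path, options as ?key=value pairs, and credentials supplied separately:
# Fails: Connector/J cannot parse the semicolon-delimited host:port pair,
# and integratedSecurity is a SQL Server option, not a MySQL one
# jdbc_connection_string => "jdbc:mysql://localhost:3306;user=test;password=test123;databaseName=test;integratedSecurity=true;"
# Works with Connector/J 8.x:
jdbc_connection_string => "jdbc:mysql://localhost:3306/test?useSSL=false&serverTimezone=GMT"
jdbc_user => "test"
jdbc_password => "test123"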

Logstash pipeline error: error registering jdbc plugin

When I first created a Logstash JDBC conf file to import my MySQL data into Elasticsearch, it worked fine. But suddenly the same file that worked OK is not working any more and gives the error "Error registering plugin".
Here is my sms-logstash.conf file
input {
jdbc {
jdbc_connection_string => "jdbc:mysql://localhost:3306/sms"
# The user we wish to execute our statement as
jdbc_user => "root"
jdbc_password => ""
# The path to our downloaded jdbc driver
jdbc_driver_library => "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/bin/mysql-connector-java-5.1.45-bin.jar"
jdbc_driver_class => "com.mysql.jdbc.Driver"
# our query
statement => "SELECT * FROM salon_reg"
}
}
output {
stdout { codec => rubydebug }
elasticsearch {
"hosts" => "localhost:9200"
"index" => "sms"
"document_type" => "salon_reg"
}
}
When I run this command as bin/logstash -f sms-logstash.conf, it gives the following error:
C:\Users\robesh\Downloads\logstash-6.2.3\logstash-6.2.3\bin>logstash -f sms-logstash.conf
Sending Logstash's logs to C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logs which is now configured via log4j2.properties
[2018-04-15T15:05:46,900][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/modules/fb_apache/configuration"}
[2018-04-15T15:05:47,028][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/modules/netflow/configuration"}
[2018-04-15T15:05:47,665][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-04-15T15:05:49,635][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2018-04-15T15:05:51,303][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-15T15:06:04,935][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//localhost:9200], index=>"sms", document_type=>"salon_reg", id=>"7eecf64f77b050d7ebba1e645e2de1d988a4f3d4b88814c75044d6e6c4606a2b", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_cbc6f6b2-287c-44bf-8771-3a951d7ceabf", enable_metric=>true, charset=>"UTF-8">, workers=>1, manage_template=>true, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-04-15T15:06:05,141][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-04-15T15:06:06,405][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-04-15T15:06:06,426][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-04-15T15:06:07,079][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-04-15T15:06:07,280][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-04-15T15:06:07,289][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-04-15T15:06:07,336][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-04-15T15:06:07,424][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-04-15T15:06:07,533][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-04-15T15:06:08,863][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::Jdbc jdbc_connection_string=>\"jdbc:mysql://localhost:3306/sms\", jdbc_user=>\"root\", jdbc_driver_library=>\"C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/bin/mysql-connector-java-5.1.46-bin.jar\", jdbc_driver_class=>\"com.mysql.jdbc.Driver\", statement=>\"SELECT * FROM salon_reg\", id=>\"0c99246377cb88117db974a51d7bdcb982e8fe882ab825575c8ebdc3c890fb5a\", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>\"plain_d687f831-1ac5-4480-b23d-e7fc976f5e9a\", enable_metric=>true, charset=>\"UTF-8\">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>\"info\", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, last_run_metadata_path=>\"C:\\\\Users\\\\robesh/.logstash_jdbc_last_run\", use_column_value=>false, tracking_column_type=>\"numeric\", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>", :error=>"(<unknown>): 'reader' unacceptable code point '\u0000' (0x0) special characters are not allowed\nin \"'reader'\", position 0 at line 0 column 0", :thread=>"#<Thread:0x1ad70417 run>"}
[2018-04-15T15:06:09,386][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Psych::SyntaxError: (<unknown>): 'reader' unacceptable code point ' ' (0x0) special characters are not allowed
in "'reader'", position 0 at line 0 column 0>, :backtrace=>["org/jruby/ext/psych/PsychParser.java:231:in `parse'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/psych.rb:377:in `parse_stream'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/psych.rb:325:in `parse'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/psych.rb:252:in `load'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.5/lib/logstash/plugin_mixins/value_tracking.rb:102:in `read'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.5/lib/logstash/plugin_mixins/value_tracking.rb:78:in `get_initial'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.5/lib/logstash/plugin_mixins/value_tracking.rb:36:in `initialize'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.5/lib/logstash/plugin_mixins/value_tracking.rb:29:in `build_last_value_tracker'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.5/lib/logstash/inputs/jdbc.rb:216:in `register'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:341:in `register_plugin'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:352:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:352:in `register_plugins'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:502:in `start_inputs'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:393:in `start_workers'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:289:in `run'", "C:/Users/robesh/Downloads/logstash-6.2.3/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:249:in `block in start'"], :thread=>"#<Thread:0x1ad70417 run>"}
[2018-04-15T15:06:09,506][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: LogStash::PipelineAction::Create/pipeline_id:main, action_result: false", :backtrace=>nil}
C:\Users\robesh\Downloads\logstash-6.2.3\logstash-6.2.3\bin>
The interesting thing here is that this same file was working fine previously and indexed my data into Elasticsearch nicely, but suddenly now it's giving an error.
It may be an encoding problem; ensure that the Logstash config file is saved with UTF-8 encoding.
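A quick way to check the encoding (a sketch, assuming a Unix-like shell such as Git Bash or WSL is available on this Windows machine; the file name is from the question):
# Report the detected character set; UTF-16 encodings show up as NUL (0x0) bytes,
# which matches the "unacceptable code point '\u0000'" in the error
file -bi sms-logstash.conf
# Re-encode to UTF-8 if needed (the source encoding here is an assumption;
# use whatever `file` reports)
iconv -f UTF-16LE -t UTF-8 sms-logstash.conf > sms-logstash-utf8.conf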
Make sure your config is correct:
bin/logstash --config.test_and_exit -f <path_to_config_file>
If your config is valid, then look at the $USER_HOME/.logstash_jdbc_last_run file; that file probably exists but isn't valid YAML. Fix what's broken in the file, or just delete it.
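On this Windows setup the file lives under the user profile; the path below is taken from last_run_metadata_path in the error log (a sketch using cmd.exe):
rem Inspect the stored value; if it is not valid YAML, the jdbc input fails to register
type C:\Users\robesh\.logstash_jdbc_last_run
rem Or remove it so the plugin starts with a fresh sql_last_value
del C:\Users\robesh\.logstash_jdbc_last_run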

Filebeat and LogStash -- data in multiple different formats

I have Filebeat, Logstash, Elasticsearch and Kibana. Filebeat is on a separate server, and it's supposed to receive data in different formats: syslog, JSON, from a database, etc., and send it to Logstash.
I know how to set up Logstash to handle a single format, but since there are multiple data formats, how would I configure Logstash to handle each data format properly?
In fact, how can I set up both Logstash and Filebeat so that all the data in different formats gets sent from Filebeat and submitted to Logstash properly? I mean the config settings which handle sending and receiving data.
To separate different types of inputs within the Logstash pipeline, use the type field and tags for further identification.
In your Filebeat configuration, you should use a different prospector for each data format; each prospector can then be given its own document_type: field.
Reference
For example:
filebeat:
# List of prospectors to fetch data.
prospectors:
# Each - is a prospector. Below are the prospector specific configurations
-
# Paths that should be crawled and fetched. Glob based paths.
# For each file found under this path, a harvester is started.
paths:
- "/var/log/apache/httpd-*.log"
# Type to be published in the 'type' field. For Elasticsearch output,
# the type defines the document type these entries should be stored
# in. Default: log
document_type: apache
-
paths:
- /var/log/messages
- "/var/log/*.log"
document_type: log_message
In the above example, logs from /var/log/apache/httpd-*.log will have document_type: apache, while the other prospector has document_type: log_message.
This document_type field becomes the type field when Logstash processes the event. You can then use if statements in Logstash to do different processing on different types.
Reference
For example:
filter {
if [type] == "apache" {
# apache specific processing
}
else if [type] == "log_message" {
# log_message processing
}
}
If the "data formats" in your question are codecs, this has to be configured in the input of logstash. The following is about filebeat 1.x and logstash 2.x, not the elastic 5 stack.
In our setup, we have two beats inputs - the first is default = "plain":
beats {
port => 5043
}
beats {
port => 5044
codec => "json"
}
On the Filebeat side, we need two Filebeat instances, each sending its output to its respective port (a sketch follows below). It's not possible to tell Filebeat "route this prospector to that output".
Logstash documentation: https://www.elastic.co/guide/en/logstash/2.4/plugins-inputs-beats.html
Remark: if you ship with different protocols, e.g. legacy logstash-forwarder / lumberjack, you need more ports.
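A sketch of the Filebeat 1.x side of that two-instance setup (file names, paths, and the Logstash host are assumptions); each instance ships its prospector to the port whose beats input has the matching codec:
# filebeat-plain.yml -- first instance, plain-text logs -> port 5043
filebeat:
  prospectors:
    -
      paths:
        - "/var/log/apache/httpd-*.log"
output:
  logstash:
    hosts: ["logstash.example.com:5043"]

# filebeat-json.yml -- second instance, JSON logs -> port 5044 (the json codec input)
filebeat:
  prospectors:
    -
      paths:
        - "/var/log/myapp/*.json"
output:
  logstash:
    hosts: ["logstash.example.com:5044"]
Each instance is then started with its own config, e.g. filebeat -e -c filebeat-plain.yml and filebeat -e -c filebeat-json.yml.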
The following works with 7.5.1.
filebeat-multifile.yml // Filebeat installed on one machine
filebeat.inputs:
- type: log
tags: ["gunicorn"]
paths:
- "/home/hduser/Data/gunicorn-100.log"
- type: log
tags: ["apache"]
paths:
- "/home/hduser/Data/apache-access-100.log"
output.logstash:
hosts: ["0.0.0.0:5044"] // target logstash IP
gunicorn-apache-log.conf // Logstash installed on another machine
input {
beats {
port => "5044"
host => "0.0.0.0"
}
}
filter {
if "gunicorn" in [tags] {
grok {
match => { "message" => "%{USERNAME:u1} %{USERNAME:u2} \[%{HTTPDATE:http_date}\] \"%{DATA:http_verb} %{URIPATHPARAM:api} %{DATA:http_version}\" %{NUMBER:status_code} %{NUMBER:byte} \"%{DATA:external_api}\" \"%{GREEDYDATA:android_client}\"" }
remove_field => "message"
}
}
else if "apache" in [tags] {
grok {
match => { "message" => "%{IPORHOST:client_ip} %{DATA:u1} %{DATA:u2} \[%{HTTPDATE:http_date}\] \"%{WORD:http_method} %{URIPATHPARAM:api} %{DATA:http_version}\" %{NUMBER:status_code} %{NUMBER:byte} \"%{DATA:external_api}\" \"%{GREEDYDATA:gd}\" \"%{DATA:u3}\""}
remove_field => "message"
}
}
}
output {
if "gunicorn" in [tags]{
stdout { codec => rubydebug }
elasticsearch {
hosts => [...]
index => "gunicorn-index"
}
}
else if "apache" in [tags]{
stdout { codec => rubydebug }
elasticsearch {
hosts => [...]
index => "apache-index"
}
}
}
Run Filebeat from the binary.
Give proper permissions to the file:
sudo chown root:root filebeat-multifile.yml
sudo chmod go-w filebeat-multifile.yml
sudo ./filebeat -e -c filebeat-multifile.yml -d "publish"
Run Logstash from the binary:
./bin/logstash -f gunicorn-apache-log.conf

logstash-forwarder connected to logstash-server-IP but never receives events

I installed Elasticsearch, Logstash, Kibana, Nginx and logstash-forwarder on the same server to centralize logs. The log file (allapp.json) is a JSON file with log entries like this:
"{\"timestamp\":\"2015-08-30 19:42:26.724\",\"MAC_Address\":\"A8:7C:01:CB:2D:09\",\"DeviceID\":\"96f389972de989d1\",\"RunningApp\":\"null{com.tools.app_logs\\/com.tools.app_logs.Main}{com.gtp.nextlauncher\\/com.gtp.nextlauncher.LauncherActivity}{com.android.settings\\/com.android.settings.Settings$WifiSettingsActivity}{com.android.incallui\\/com.android.incallui.InCallActivity}{com.tools.app_logs\\/com.tools.app_logs.Main}{com.gtp.nextlauncher\\/com.gtp.nextlauncher.LauncherActivity}{com.android.settings\\/com.android.settings.Settings$WifiSettingsActivity}{com.android.incallui\\/com.android.incallui.InCallActivity}\",\"PhoneName\":\"samsung\",\"IP\":\"192.168.1.101\"}"
My logstash.conf is:
input {
lumberjack {
port => 5002
type => "logs"
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
}
udp {
type => "json"
port => 5001
}
}
filter {
json {
"source" => "message"
}
}
output {
elasticsearch { host => localhost }
stdout { codec => rubydebug }
}
My logstash-forwarder.conf (on the same system where Logstash is installed) is:
{
"network":{
"servers": [ "192.168.1.102:5002" ],
"timeout": 15,
"ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt" },
"files": [
{
"paths":[ "/var/log/app-log/allapp.json" ],
"fields": { "type": "json" }
}
]
}
My elasticsearch.yml is:
network.host: localhost
When I enter tail -f /var/log/logstash-forwarder/logstash-forwarder.err in a terminal, I get this:
2015/09/04 11:33:05.282495 Waiting for 1 prospectors to initialise
2015/09/04 11:33:05.282544 Launching harvester on new file: /var/log/app-log/allapp.json
2015/09/04 11:33:05.282591 harvest: "/var/log/app-log/allapp.json" (offset snapshot:0)
2015/09/04 11:33:05.283709 All prospectors initialised with 0 states to persist
2015/09/04 11:33:05.283806 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
2015/09/04 11:33:05.284254 Connecting to [192.168.1.102]:5002 (192.168.1.102)
2015/09/04 11:33:05.417174 Connected to 192.168.1.102
The allapp.json file is updated frequently and new logs are added to it, but in the output above I never see lines like:
Registrar received 1 events
Registrar received 23 events ...
In addition, I have another client with logstash-forwarder sending its logs to Kibana; the logstash-forwarder on that client works correctly and its logs are shown in Kibana, but this one doesn't.
All the results in Kibana look like this:
Time file
September 4th 2015, 06:14:00.942 /var/log/suricata/eve.json
September 4th 2015, 06:14:00.942 /var/log/suricata/eve.json
September 4th 2015, 06:14:00.942 /var/log/suricata/eve.json
September 4th 2015, 06:14:00.942 /var/log/suricata/eve.json
I want to see logs from /var/log/app-log/allapp.json in Kibana too. What is the problem? Why aren't they shown in Kibana? Why does one client work correctly while the logstash-forwarder on the same system as Logstash doesn't?
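A couple of checks I can run (a sketch; the port and file path come from the configs above, and the commands assume shell access on the server):
# Confirm Logstash is actually listening on the lumberjack port the forwarder connects to
sudo netstat -tlnp | grep 5002
# Append a test line and watch Logstash's stdout (rubydebug) to see whether an event arrives
echo '{"test":"event"}' | sudo tee -a /var/log/app-log/allapp.json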