logstash-forwarder connected to logstash-server IP but never receives events - json

I installed Elasticsearch, Logstash, Kibana, nginx and logstash-forwarder on the same server to centralize logs. The log file (allapp.json) is a JSON file with log entries like this:
"{\"timestamp\":\"2015-08-30 19:42:26.724\",\"MAC_Address\":\"A8:7C:01:CB:2D:09\",\"DeviceID\":\"96f389972de989d1\",\"RunningApp\":\"null{com.tools.app_logs\\/com.tools.app_logs.Main}{com.gtp.nextlauncher\\/com.gtp.nextlauncher.LauncherActivity}{com.android.settings\\/com.android.settings.Settings$WifiSettingsActivity}{com.android.incallui\\/com.android.incallui.InCallActivity}{com.tools.app_logs\\/com.tools.app_logs.Main}{com.gtp.nextlauncher\\/com.gtp.nextlauncher.LauncherActivity}{com.android.settings\\/com.android.settings.Settings$WifiSettingsActivity}{com.android.incallui\\/com.android.incallui.InCallActivity}\",\"PhoneName\":\"samsung\",\"IP\":\"192.168.1.101\"}"
my logstash.conf is:
input {
  lumberjack {
    port => 5002
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
  udp {
    type => "json"
    port => 5001
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
My logstash-forwarder.conf (on the same system where Logstash is installed) is:
{
  "network": {
    "servers": [ "192.168.1.102:5002" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/app-log/allapp.json" ],
      "fields": { "type": "json" }
    }
  ]
}
my elasticsearch.yml is:
network.host: localhost
When I run tail -f /var/log/logstash-forwarder/logstash-forwarder.err in a terminal I get this:
2015/09/04 11:33:05.282495 Waiting for 1 prospectors to initialise
2015/09/04 11:33:05.282544 Launching harvester on new file: /var/log/app-log/allapp.json
2015/09/04 11:33:05.282591 harvest: "/var/log/app-log/allapp.json" (offset snapshot:0)
2015/09/04 11:33:05.283709 All prospectors initialised with 0 states to persist
2015/09/04 11:33:05.283806 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
2015/09/04 11:33:05.284254 Connecting to [192.168.1.102]:5002 (192.168.1.102)
2015/09/04 11:33:05.417174 Connected to 192.168.1.102
The allapp.json file is updated frequently and new log entries are added to it, but in the output above I never see lines like:
Registrar received 1 events
Registrar received 23 events ...
In addition, I have another client running logstash-forwarder that sends its logs to this server; logstash-forwarder on that client works correctly and its logs show up in Kibana, but on this one it doesn't.
All results in Kibana look like this:
Time file
September 4th 2015, 06:14:00.942 /var/log/suricata/eve.json
September 4th 2015, 06:14:00.942 /var/log/suricata/eve.json
September 4th 2015, 06:14:00.942 /var/log/suricata/eve.json
September 4th 2015, 06:14:00.942 /var/log/suricata/eve.json
I want to see the logs from /var/log/app-log/allapp.json in Kibana too. What is the problem? Why aren't they shown in Kibana? Why does one client work correctly while the logstash-forwarder on the same system as Logstash doesn't?

Logstash: Unable to connect to external Amazon RDS Database

I am relatively new to Logstash & Elasticsearch...
Installed Logstash & Elasticsearch using Homebrew on macOS Mojave (10.14.2):
brew install logstash
brew install elasticsearch
When I check for these versions:
brew list --versions
I receive the following output:
elasticsearch 6.5.4
logstash 6.5.4
When I open up Google Chrome and type this into the URL Address field:
localhost:9200
This is the JSON response that I receive:
{
  "name" : "9oJAP16",
  "cluster_name" : "elasticsearch_local",
  "cluster_uuid" : "PgaDRw8rSJi-NDo80v_6gQ",
  "version" : {
    "number" : "6.5.4",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "d2ef93d",
    "build_date" : "2018-12-17T21:17:40.758843Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
Inside:
/usr/local/etc/logstash/logstash.yml
reside the following variables:
path.data: /usr/local/Cellar/logstash/6.5.4/libexec/data
pipeline.workers: 2
path.config: /usr/local/etc/logstash/conf.d
log.level: info
path.logs: /usr/local/var/log
Inside:
/usr/local/etc/logstash/pipelines.yml
reside the following variables:
- pipeline.id: main
  path.config: "/usr/local/etc/logstash/conf.d/*.conf"
I have set up the following logstash_etl.conf file underneath:
/usr/local/etc/logstash/conf.d
Its contents:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products"
    jdbc_user => "products_admin"
    jdbc_password => "products123"
    jdbc_driver_library => "/etc/logstash/mysql-connector/mysql-connector-java-5.1.21.jar"
    jdbc_driver_class => "com.mysql.jdbc.driver"
    schedule => "*/5 * * * *"
    statement => "select * from products"
    use_column_value => false
    clean_run => true
  }
}
# sudo /usr/share/logstash/bin/logstash-plugin install logstash-output-exec
output {
  if ([purge_task] == "yes") {
    exec {
      command => "curl -XPOST 'localhost:9200/_all/products/_delete_by_query?conflicts=proceed' -H 'Content-Type: application/json' -d'
      {
        \"query\": {
          \"range\" : {
            \"@timestamp\" : {
              \"lte\" : \"now-3h\"
            }
          }
        }
      }
      '"
    }
  }
  else {
    stdout { codec => json_lines }
    elasticsearch {
      "hosts" => "localhost:9200"
      "index" => "product_%{product_api_key}"
      "document_type" => "%{[@metadata][index_type]}"
      "document_id" => "%{[@metadata][index_id]}"
      "doc_as_upsert" => true
      "action" => "update"
      "retry_on_conflict" => 7
    }
  }
}
When I do this:
brew services start logstash
I receive the following inside my /usr/local/var/log/logstash-plain.log file:
[2019-01-15T14:51:15,319][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x399927c7 run>"}
[2019-01-15T14:51:15,663][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-01-15T14:51:16,514][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-01-15T14:57:31,432][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
[2019-01-15T14:57:31,435][ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::ComMysqlCjJdbcExceptions::CommunicationsException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server."}
What am I possibly doing wrong?
Is there a way to obtain a dump (e.g. mysqldump) from an Elasticsearch server (Stage or Production) and then reimport into a local instance running Elasticsearch without using logstash?
This is the same configuration file that works inside an Amazon EC2 production instance, but I don't know why it's not working in my local macOS Mojave instance.
You may be encountering the SSL issue with RDS, since:
If you use either the MySQL Java Connector v5.1.38 or later, or the MySQL Java Connector v8.0.9 or later to connect to your databases, even if you haven't explicitly configured your applications to use SSL/TLS when connecting to your databases, these client drivers default to using SSL/TLS. In addition, when using SSL/TLS, they perform partial certificate verification and fail to connect if the database server certificate is expired.
as described in the AWS RDS documentation.
To overcome this, either set up the trust store for Logstash, which is described in the linked documentation as well.
Or accept the risk and disable SSL in the connection string, like:
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products?sslMode=DISABLED"
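Alternatively, if you keep SSL enabled, the trust-store variant of the connection string might look roughly like this (a sketch only; the keystore path and password are placeholders, and the exact parameter names depend on your Connector/J version):
jdbc_connection_string => "jdbc:mysql://myapp-production.crankbftdpmc.us-west-2.rds.amazonaws.com:3306/products?useSSL=true&verifyServerCertificate=true&trustCertificateKeyStoreUrl=file:///path/to/rds-truststore.jks&trustCertificateKeyStorePassword=changeit"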

Filebeat and LogStash -- data in multiple different formats

I have Filebeat, Logstash, ElasticSearch and Kibana. Filebeat is on a separate server and it's supposed to receive data in different formats: syslog, json, from a database, etc and send it to Logstash.
I know how to setup Logstash to make it handle a single format, but since there are multiple data formats, how would I configure Logstash to handle each data format properly?
In fact, how can I set up both Logstash and Filebeat so that all the data in its different formats gets sent from Filebeat and submitted to Logstash properly? I mean the config settings which handle sending and receiving the data.
To separate different types of inputs within the Logstash pipeline, use the type field and tags for more identification.
In your Filebeat configuration, you should be using a different prospector for each different data format; each prospector can then be set to have a different document_type: field.
Reference
For example:
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      # Paths that should be crawled and fetched. Glob based paths.
      # For each file found under this path, a harvester is started.
      paths:
        - "/var/log/apache/httpd-*.log"
      # Type to be published in the 'type' field. For Elasticsearch output,
      # the type defines the document type these entries should be stored
      # in. Default: log
      document_type: apache
    -
      paths:
        - /var/log/messages
        - "/var/log/*.log"
      document_type: log_message
In the above example, logs from /var/log/apache/httpd-*.log will have document_type: apache, while the other prospector has document_type: log_message.
This document-type field becomes the type field when Logstash is processing the event. You can then use if statements in Logstash to do different processing on different types.
Reference
For example:
filter {
  if [type] == "apache" {
    # apache specific processing
  }
  else if [type] == "log_message" {
    # log_message processing
  }
}
If the "data formats" in your question are codecs, this has to be configured in the input of logstash. The following is about filebeat 1.x and logstash 2.x, not the elastic 5 stack.
In our setup, we have two beats inputs - the first is default = "plain":
beats {
  port => 5043
}
beats {
  port => 5044
  codec => "json"
}
On the filebeat side, we need two filebeat instances, sending their output to their respective ports. It's not possible to tell filebeat "route this prospector to that output".
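For illustration, the two Filebeat 1.x configs might look roughly like this (the filenames, file paths and the Logstash host are assumptions for the sketch, not from the original answer):
# filebeat-plain.yml -- hypothetical instance for the plain input on port 5043
filebeat:
  prospectors:
    -
      paths:
        - "/var/log/app/plain-*.log"
output:
  logstash:
    hosts: ["logstash-host:5043"]
# filebeat-json.yml -- hypothetical instance for the json input on port 5044
filebeat:
  prospectors:
    -
      paths:
        - "/var/log/app/json-*.log"
output:
  logstash:
    hosts: ["logstash-host:5044"]
Each instance is then started with its own config file, e.g. filebeat -e -c filebeat-json.yml.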
Documentation logstash: https://www.elastic.co/guide/en/logstash/2.4/plugins-inputs-beats.html
Remark: If you ship with different protocols, e.g. legacy logstash-forwarder / lumberjack, you need more ports.
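For instance, a legacy logstash-forwarder client would need its own lumberjack input on a separate port, roughly like this (the port number is an assumption; the certificate paths mirror the first question above):
lumberjack {
  port => 5045
  ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
  ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
}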
Supported with 7.5.1:
filebeat-multifile.yml // Filebeat installed on one machine
filebeat.inputs:
- type: log
  tags: ["gunicorn"]
  paths:
    - "/home/hduser/Data/gunicorn-100.log"
- type: log
  tags: ["apache"]
  paths:
    - "/home/hduser/Data/apache-access-100.log"
output.logstash:
  hosts: ["0.0.0.0:5044"]   # target logstash IP
gunicorn-apache-log.conf // Logstash installed on another machine
input {
  beats {
    port => "5044"
    host => "0.0.0.0"
  }
}
filter {
  if "gunicorn" in [tags] {
    grok {
      match => { "message" => "%{USERNAME:u1} %{USERNAME:u2} \[%{HTTPDATE:http_date}\] \"%{DATA:http_verb} %{URIPATHPARAM:api} %{DATA:http_version}\" %{NUMBER:status_code} %{NUMBER:byte} \"%{DATA:external_api}\" \"%{GREEDYDATA:android_client}\"" }
      remove_field => "message"
    }
  }
  else if "apache" in [tags] {
    grok {
      match => { "message" => "%{IPORHOST:client_ip} %{DATA:u1} %{DATA:u2} \[%{HTTPDATE:http_date}\] \"%{WORD:http_method} %{URIPATHPARAM:api} %{DATA:http_version}\" %{NUMBER:status_code} %{NUMBER:byte} \"%{DATA:external_api}\" \"%{GREEDYDATA:gd}\" \"%{DATA:u3}\"" }
      remove_field => "message"
    }
  }
}
output {
  if "gunicorn" in [tags] {
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => [...]
      index => "gunicorn-index"
    }
  }
  else if "apache" in [tags] {
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => [...]
      index => "apache-index"
    }
  }
}
Run Filebeat from the binary.
Give proper permissions to the file:
sudo chown root:root filebeat-multifile.yml
sudo chmod go-w filebeat-multifile.yml
sudo ./filebeat -e -c filebeat-multifile-1.yml -d "publish"
Run Logstash from the binary:
./bin/logstash -f gunicorn-apache-log.conf
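Once both are running, a quick sanity check that events actually reach Elasticsearch (assuming it listens on localhost:9200) is to query one of the indices defined above:
curl -XGET 'http://localhost:9200/gunicorn-index/_search?pretty'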

Filtering Bluemix cloud foundry ERR logs on logstash

I am currently working on setting up ELK stack on Bluemix containers. By following this blog, I was able to create a logstash Drain and get all the Cloud Foundry logs from the Bluemix web app into logstash.
Is there a way to filter out logs based on log levels? I am trying to filter out ERR in logstash output and send them to Slack.
The following code is the filter configuration of the logstash.conf file:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOG5424PRI}%{NONNEGINT:syslog5424_ver} +(?:%{TIMESTAMP_ISO8601:syslog5424_ts}|-) +(?:%{HOSTNAME:syslog5424_host}|-) +(?:%{NOTSPACE:syslog5424_app}|-) +(?:%{NOTSPACE:syslog5424_proc}|-) +(?:%{WORD:syslog5424_msgid}|-) +(?:%{SYSLOG5424SD:syslog5424_sd}|-|) +%{GREEDYDATA:syslog5424_msg}" }
    }
I am trying to add a Slack webhook to the logstash.conf output so that when a log level of ERR is detected, the error message is posted to the Slack channel.
My output conf file with the Slack HTTP post looks something like this:
output {
  if [loglevel] == "ERR" {
    http {
      http_method => "post"
      url => "https://hooks.slack.com/services/<id>"
      format => "json"
      mapping => {
        "channel" => "#logstash-staging"
        "username" => "pca_elk"
        "text" => "%{message}"
        "icon_emoji" => ":warning:"
      }
    }
  }
  elasticsearch { }
}
Sample logs from Cloud Foundry:
2016-05-25T13:14:51.269-0400[App/0]ERR npm ERR! There is likely additional logging output above.
2016-05-25T13:14:51.269-0400[App/0]ERR npm ERR! npm owner ls pca-uiapi
2016-05-25T13:14:51.274-0400[App/0]ERR npm ERR! /home/vcap/app/npm-debug.log
2016-05-25T13:14:51.274-0400[App/0]ERR npm ERR! Please include the following file with any support request:
2016-05-25T13:14:51.431-0400[API/1]OUT App instance exited with guid cc73db5d-6e8c-4ff4-b20f-a69d7c2ba9f4 payload: {"cc_partition"=>"default", "droplet"=>"cc73db5d-6e8c-4ff4-b20f-a69d7c2ba9f4", "version"=>"f9fb3e09-f234-43d4-94b1-a337f8ad72ad", "instance"=>"9d7ad0585b824fa196a2a64e78df9eef", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"app instance exited", "crash_timestamp"=>1464196491}
2016-05-25T13:16:10.948-0400[DEA/50]OUT Starting app instance (index 0) with guid cc73db5d-6e8c-4ff4-b20f-a69d7c2ba9f4
2016-05-25T13:16:36.032-0400[App/0]OUT > pca-uiapi#1.0.0-build.306 start /home/vcap/app
2016-05-25T13:16:36.032-0400[App/0]OUT > node server.js
2016-05-25T13:16:36.032-0400[App/0]OUT
2016-05-25T13:16:37.188-0400[App/0]OUT PCA REST Service is listenning on port: 62067
2016-05-25T13:19:02.241-0400[App/0]ERR at Layer.handle_error (/home/vcap/app/node_modules/express/lib/router/layer.js:71:5)
2016-05-25T13:19:02.241-0400[App/0]ERR at /home/vcap/app/node_modules/body-parser/lib/read.js:125:7
2016-05-25T13:19:02.241-0400[App/0]ERR at Object.module.exports.log (/home/vcap/app/utils/Logger.js:35:25)
Is there a way to get this working? Is there a way to check the log level of each message? I am kinda stuck and was wondering if you could help me out.
In the Bluemix UI, the logs can be filtered based on the channel ERR or OUT. I could not figure out how to do the same in Logstash.
Thank you for looking into this problem.
The grok provided in that article is meant to parse the syslog message coming in on port 5000. After all syslog filters have run, your application log (i.e. the sample log lines you've shown in your question) is in the @message field.
So you need another grok to parse that message. After the last mutate you can add this:
grok {
  match => { "@message" => "%{TIMESTAMP_ISO8601:timestamp}\[%{WORD:app}/%{NUMBER:num}\]%{WORD:loglevel} %{GREEDYDATA:log}" }
}
After this filter runs, you'll have a field named loglevel which will contain either ERR or OUT and in the former case will activate your slack output.
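As an illustration (derived from the grok pattern above, not part of the original answer), the first ERR line from the sample logs would be split into fields roughly like this:
timestamp => "2016-05-25T13:14:51.269-0400"
app       => "App"
num       => "0"
loglevel  => "ERR"
log       => "npm ERR! There is likely additional logging output above."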

How to integrate ElasticSearch with MySQL?

In one of my project, I am planning to use ElasticSearch with MySQL.
I have successfully installed Elasticsearch and I am able to manage indices in ES separately, but I don't know how to implement the same with MySQL.
I have read a couple of documents, but I am a bit confused and don't have a clear idea.
As of ES 5.x, this feature is available out of the box with the Logstash JDBC input plugin.
This will periodically import data from the database and push it to the ES server.
One has to create a simple import file like the one given below (which is also described here) and use Logstash to run the script. Logstash supports running this script on a schedule.
# file: contacts-index-logstash.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "pswd"
    schedule => "* * * * *"
    jdbc_validate_connection => true
    jdbc_driver_library => "/path/to/latest/mysql-connector-java-jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    statement => "SELECT * from contacts where updatedAt > :sql_last_value"
  }
}
output {
  elasticsearch {
    protocol => http
    index => "contacts"
    document_type => "contact"
    document_id => "%{id}"
    host => "ES_NODE_HOST"
  }
}
# "* * * * *" -> run every minute
# sql_last_value is a built in parameter whose value is set to Thursday, 1 January 1970,
# or 0 if use_column_value is true and tracking_column is set
You can download the mysql jar from maven here.
In case the indexes do not exist in ES when this script is executed, they will be created automatically, just like a normal POST call to Elasticsearch.
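To run the import, something like the following should work (a sketch, assuming it is executed from the Logstash installation directory):
bin/logstash -f contacts-index-logstash.conf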
Finally I was able to find the answer; sharing my findings.
To use Elasticsearch with MySQL you will require the Java Database Connectivity (JDBC) importer. With JDBC drivers you can sync your MySQL data into Elasticsearch.
I am using Ubuntu 14.04 LTS, and you will need to install Java 8 to run Elasticsearch, as it is written in Java.
The following are the steps to install Elasticsearch 2.2.0 and elasticsearch-jdbc 2.2.0; please note that both versions have to be the same.
After installing Java 8, install Elasticsearch 2.2.0 as follows:
# cd /opt
# wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/deb/elasticsearch/2.2.0/elasticsearch-2.2.0.deb
# sudo dpkg -i elasticsearch-2.2.0.deb
This installation procedure will install Elasticsearch in /usr/share/elasticsearch/, with its configuration files placed in /etc/elasticsearch.
Now let's do some basic configuration in the config file; here /etc/elasticsearch/elasticsearch.yml is our config file.
You can open the file for editing with:
nano /etc/elasticsearch/elasticsearch.yml
and change the cluster name and node name.
For example:
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: servercluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: vps.server.com
#
# Add custom attributes to the node:
#
# node.rack: r1
Now save the file and start Elasticsearch:
/etc/init.d/elasticsearch start
To test whether ES is installed or not, run the following:
curl -XGET 'http://localhost:9200/?pretty'
If you get the following, then your Elasticsearch is installed :)
{
  "name" : "vps.server.com",
  "cluster_name" : "servercluster",
  "version" : {
    "number" : "2.2.0",
    "build_hash" : "8ff36d139e16f8720f2947ef62c8167a888992fe",
    "build_timestamp" : "2016-01-27T13:32:39Z",
    "build_snapshot" : false,
    "lucene_version" : "5.4.1"
  },
  "tagline" : "You Know, for Search"
}
Now let's install elasticsearch-JDBC
Download it from http://xbib.org/repository/org/xbib/elasticsearch/importer/elasticsearch-jdbc/2.3.3.1/elasticsearch-jdbc-2.3.3.1-dist.zip, extract it into /etc/elasticsearch/, and also create a "logs" folder there (the path of logs should be /etc/elasticsearch/logs).
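For example, roughly (a sketch of the shell steps; pick the dist version that matches your Elasticsearch version, as noted above):
cd /etc/elasticsearch
wget http://xbib.org/repository/org/xbib/elasticsearch/importer/elasticsearch-jdbc/2.3.3.1/elasticsearch-jdbc-2.3.3.1-dist.zip
unzip elasticsearch-jdbc-2.3.3.1-dist.zip
mkdir -p /etc/elasticsearch/logs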
I have one database created in MySQL named "ElasticSearchDatabase", and inside it a table named "test" with the fields id, name and email.
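For reference, such a table could be created with something along these lines (the column types are assumptions for the example):
CREATE DATABASE ElasticSearchDatabase;
USE ElasticSearchDatabase;
CREATE TABLE test (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255),
  email VARCHAR(255)
);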
cd /etc/elasticsearch
and run the following:
echo '{
  "type":"jdbc",
  "jdbc":{
    "url":"jdbc:mysql://localhost:3306/ElasticSearchDatabase",
    "user":"root",
    "password":"",
    "sql":"SELECT id as _id, id, name,email FROM test",
    "index":"users",
    "type":"users",
    "autocommit":"true",
    "metrics": {
      "enabled" : true
    },
    "elasticsearch" : {
      "cluster" : "servercluster",
      "host" : "localhost",
      "port" : 9300
    }
  }
}' | java -cp "/etc/elasticsearch/elasticsearch-jdbc-2.2.0.0/lib/*" -"Dlog4j.configurationFile=file:////etc/elasticsearch/elasticsearch-jdbc-2.2.0.0/bin/log4j2.xml" "org.xbib.tools.Runner" "org.xbib.tools.JDBCImporter"
Now check whether the MySQL data was imported into ES or not:
curl -XGET http://localhost:9200/users/_search/?pretty
If all goes well, you will be able to see all your MySQL data in JSON format,
and if there are any errors, you will be able to see them in the /etc/elasticsearch/logs/jdbc.log file.
Caution:
In older versions of ES the plugin elasticsearch-river-jdbc was used, which is completely deprecated in the latest versions, so do not use it.
I hope I could save you some time :)
Any further thoughts are appreciated
Reference url : https://github.com/jprante/elasticsearch-jdbc
The logstash JDBC plugin will do the job:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
    jdbc_user => "root"
    jdbc_password => "factweavers"
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "/home/comp/Downloads/mysql-connector-java-5.1.38.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # our query
    schedule => "* * * * *"
    statement => "SELECT * FROM testtable where Date > :sql_last_value order by Date"
    use_column_value => true
    tracking_column => "Date"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    "hosts" => "localhost:9200"
    "index" => "test-migrate"
    "document_type" => "data"
    "document_id" => "%{personid}"
  }
}
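To verify that documents are arriving, a simple check (assuming Elasticsearch is on localhost:9200) is:
curl -XGET 'http://localhost:9200/test-migrate/_search?pretty'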
To make it simpler, I have created a PHP class to set up MySQL with Elasticsearch. Using my class you can sync your MySQL data into Elasticsearch and also perform full-text search. You just need to set your SQL query and the class will do the rest for you.

rabbitmq 3.3.4 shovel configuration is crashing start process

I'm trying to configure the shovel plugin via the config file (running in docker) but I get this error:
BOOT FAILED
===========
Error description:
{error,{failed_to_cluster_with,[rabbit@dalmacpmfd57],
        "Mnesia could not connect to any nodes."}}
The config is set up this way because the destination for the shovel will be created on demand when a dev environment is spun up... the source is a permanent RabbitMQ instance that the new dev environment will attach to.
Here is the config file contents:
[
{rabbitmq_shovel,
[{shovels,
[{indexer_replica_static,
[{sources,
[{broker, [ "amqp://guest:guest@rabbitmq/newdev" ]},
{declarations,
[{'queue.declare', [{queue, <<"Indexer_Replica_Static">>}, durable]},
{'queue.bind',[ {exchange, <<"Indexer">>}, {queue, <<"Indexer_Replica_Static">>}]}
]
}
]
},
{destinations,
[{broker, "amqp://"},
{declarations, [ {'exchange.declare', [ {exchange, <<"Indexer_Replica_Static">>}
, {type, <<"fanout">>}, durable]},
{'queue.declare', [
{queue, <<"Indexer_Replica_Static">>},
durable]},
{'queue.bind',
[ {exchange, <<"Indexer_Replica_Static">>}
, {queue, <<"Indexer_Replica_Static">>}
]}
]
}
]
},
{queue, <<"Indexer_Replica_Static">>},
{prefetch_count, 0},
{ack_mode, on_confirm},
{publish_properties, [ {delivery_mode, 2} ]},
{reconnect_delay, 2.5}
]
}
]
},
{reconnect_delay, 2.5}
]
}
].
[UPDATE]
This is being run in Docker, but since I couldn't debug the issue there, I tried booting up RabbitMQ locally with the same config file. I noticed in the logs that the config environment variable I set (RABBITMQ_CONFIG_FILE) isn't reflected in the log and the shovel settings haven't been applied (no surprise, huh). I verified the variable with an echo statement and the correct path is displayed: /dev/rabbitmq_server-3.3.4/rabbitmq
=INFO REPORT==== 3-Sep-2014::15:30:37 ===
node           : rabbit@dalmacpmfd57
home dir       : /Users/e002678
config file(s) : (none)
cookie hash    : n6vhh8tY7Z+uR2DV6gcHUg==
log            : /usr/local/rabbitmq_server-3.3.4/sbin/../var/log/rabbitmq/rabbit@dalmacpmfd57.log
sasl log       : /usr/local/rabbitmq_server-3.3.4/sbin/../var/log/rabbitmq/rabbit@dalmacpmfd57-sasl.log
database dir   : /usr/local/rabbitmq_server-3.3.4/sbin/../var/lib/rabbitmq/mnesia/rabbit@dalmacpmfd57
Thanks!