Using NXLog with IIS to get JSON output

I am trying to use nxlog to parse the IIS log files and create JSON output so that I can later push it into Logstash.
However, all I get in the output file is the raw data, not the formatted output I was led to expect, e.g. https://github.com/StephenHynes7/confs/blob/master/windows_iis_JSON/nxlog-iis.conf.
This is my config file:
## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at http://nxlog.org/nxlog-docs/en/nxlog-reference-manual.html
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.
#define ROOT C:\Program Files\nxlog
define ROOT D:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
<Extension _syslog>
Module xm_syslog
</Extension>
<Extension json>
Module xm_json
</Extension>
# Select the input folder where logs will be scanned
# Create the parse rule for IIS logs. You can copy these from the header of the IIS log file.
# Uncomment Extension w3c for IIS logging
<Extension w3c>
Module xm_csv
#Fields: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
Fields $date, $time, $s-sitename, $s-computername, $s-ip, $cs-method, $cs-uri-stem, $cs-uri-query, $s-port, $cs-username, $c-ip, $csUser-Agent, $cs-cookie, $cs-referrer, $cs-host, $sc-status, $sc-substatus, $sc-win32-status, $sc-bytes, $cs-bytes, $time-taken
FieldTypes string, string, string, string, string, string, string, string, integer, string, string, string, string, string, string, integer, integer, integer, integer, integer, integer
Delimiter ' '
UndefValue -
QuoteChar '"'
EscapeControl FALSE
</Extension>
<Input iis_logs>
Module im_file
File "C:\\Resources\\Directory\\\\u_ex*.log"
ReadFromLast True
Recursive True
SavePos True
Exec if $raw_event =~ /^#/ drop(); \
else \
{ \
w3c->parse_csv(); \
$EventTime = parsedate($date + " " + $time); \
$SourceName = "IIS"; \
$Message = to_json(); \
}
</Input>
<Output debug>
Module om_file
File "C:\\nxlog-debug.txt"
</Output>
<Route 1>
Path iis_logs => debug
</Route>
and my output (not JSON):
2015-09-05 07:42:00 W3SVC1273337584 ****** *.*.*.* GET / - 443 - *.*.*.* HTTP/1.1 Mozilla/5.0+(compatible;+MSIE+10.0;+Windows+NT+6.2;+WOW64;+Trident/6.0) - - *.*.*.* 403 14 0 5530 258 31
2015-09-05 07:42:00 W3SVC1273337584 ****** *.*.*.* GET / - 443 - *.*.*.* HTTP/1.1 Mozilla/5.0+(compatible;+MSIE+10.0;+Windows+NT+6.2;+WOW64;+Trident/6.0) - - *.*.*.* 403 14 0 5530 258 16
2015-09-05 07:53:05 W3SVC1273337584 ****** *.*.*.* GET / - 443 - *.*.*.* HTTP/1.1 Mozilla/5.0+(compatible;+MSIE+10.0;+Windows+NT+6.2;+WOW64;+Trident/6.0) - - *.*.*.* 403 14 0 5530 258 31
2015-09-05 07:53:06 W3SVC1273337584 ****** *.*.*.* GET / - 443 - *.*.*.* HTTP/1.1 Mozilla/5.0+(compatible;+MSIE+10.0;+Windows+NT+6.2;+WOW64;+Trident/6.0) - - *.*.*.* 403 14 0 5530 258 31
I assume I am close, but what am I missing?

om_file writes $raw_event to the output by default; all other fields are discarded, including $Message. So you need either
$raw_event = to_json();
or simply just
to_json();
The two are equivalent.
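For example, the Exec block in your input could become (a minimal sketch of your existing block, with only the final assignment changed):
Exec if $raw_event =~ /^#/ drop(); \
     else \
     { \
         w3c->parse_csv(); \
         $EventTime = parsedate($date + " " + $time); \
         $SourceName = "IIS"; \
         $raw_event = to_json(); \
     }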

Linux capabilities for container to update file atime programmatically

I have a container running in non-privileged mode. I'd like to update a file's atime via Python code, but I found I could not do that due to a permission issue, even though I can write to the file.
I tried adding Linux capabilities to the container, but even with SYS_ADMIN it still does not work.
Does anyone happen to know what capability to add, or what I missed there?
Thank you!
bash-5.1$ id
uid=1000(contest) gid=1000(contest) groups=1000(contest)
bash-5.1$ ls -l
total 250
-rwxrwxrwx 1 root contest 0 Oct 27 07:16 anotherfile
-rwxrwxrwx 1 root contest 254823 Oct 27 07:37 outfile
-rwxrwxrwx 1 root contest 0 Oct 24 03:52 test
-rwxrwxrwx 1 root contest 364 Oct 27 07:16 test.py
-rwxrwxrwx 1 root contest 18 Oct 24 05:25 testfile
bash-5.1$ python3 test.py
1666854988.190472
1666851388.190472
Traceback (most recent call last):
File "/mnt/azurefile/test.py", line 19, in <module>
os.utime(myfile, (atime - 3600.0, mtime))
PermissionError: [Errno 1] Operation not permitted
bash-5.1$ capsh --print
Current: =
Bounding set =cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_net_raw,cap_sys_chroot,cap_sys_admin,cap_mknod,cap_audit_write,cap_setfcap
Ambient set =
Current IAB: !cap_dac_read_search,!cap_linux_immutable,!cap_net_broadcast,!cap_net_admin,!cap_ipc_lock,!cap_ipc_owner,!cap_sys_module,!cap_sys_rawio,!cap_sys_ptrace,!cap_sys_pacct,!cap_sys_boot,!cap_sys_nice,!cap_sys_resource,!cap_sys_time,!cap_sys_tty_config,!cap_lease,!cap_audit_control,!cap_mac_override,!cap_mac_admin,!cap_syslog,!cap_wake_alarm,!cap_block_suspend,!cap_audit_read
Securebits: 00/0x0/1'b0 (no-new-privs=0)
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
secure-no-ambient-raise: no (unlocked)
uid=1000(contest) euid=1000(contest)
gid=1000(contest)
groups=1000(contest)
Guessed mode: HYBRID (4)
My Python code to update atime:
from datetime import datetime
import os
import time
myfile = "anotherfile"
current_time = time.time()
"""
Set the access time of a given filename to the given atime.
atime must be a datetime object.
"""
stat = os.stat(myfile)
mtime = stat.st_mtime
atime = stat.st_atime
print(mtime)
mtime = mtime - 3600.0
print(mtime)
os.utime(myfile, (atime - 3600.0, mtime))
pod yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: nginx-azurefile
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  nodeSelector:
    "kubernetes.io/os": linux
  containers:
    - image: acheng.azurecr.io/capsh
      name: nginx-azurefile
      securityContext:
        capabilities:
          add: ["CHOWN","SYS_ADMIN","SYS_RESOURCES"]
      command:
        - "/bin/bash"
        - "-c"
        - set -euo pipefail; while true; do echo $(date) >> /mnt/azurefile/outfile; sleep 10; done
      volumeMounts:
        - name: persistent-storage
          mountPath: "/mnt/azurefile"
  imagePullSecrets:
    - name: acr-secret
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: pvc-azurefile
I tried adding the SYS_ADMIN capability, but it didn't work.
If the container runs in privileged mode, the code is able to update the file access time as expected.
Answering my own question here.
After searching around, I found that Kubernetes does not support capabilities for non-root users: the capabilities added in the container spec apply to the root user only and won't take effect for non-root users.
See this GitHub issue for details: https://github.com/kubernetes/kubernetes/issues/56374
A workaround is to add the capability directly to the executable file using the setcap command (from libcap), as shown below.
The capability needed is CAP_FOWNER.
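For example, with the libcap tools installed (a minimal sketch; the interpreter path is an assumption, and setcap must target the real binary, not a symlink):
# run as root, e.g. during the image build (python path is an assumption)
setcap cap_fowner+ep /usr/bin/python3.9
# verify the file capability was applied
getcap /usr/bin/python3.9
Keep in mind this grants CAP_FOWNER to every script executed by that interpreter, so it is a fairly blunt workaround.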

JMeter CLI report generation fails - org.apache.jmeter.report.dashboard.GenerationException: Data exporter "json"

JMeter 5.3
I use the CLI as follows:
C:\Users\guyl\OneDrive - xxxxLTD\Guy\apache-jmeter-5.3\bin>jmeter -n -t "C:\Users\guyl\OneDrive - xxxxLTD\Guy\JMeter\DCS DB threaded test.jmx" -l"C:\Users\guyl\OneDrive - xxxxLTD\Guy\JMeter\db_report.csv" -Jthreads=5 -Jloops=100 -e -o"C:\Users\guyl\OneDrive - xxxxLTD\Guy\JMeter\output\19082020\"
Creating summariser <summary>
Created the tree successfully using C:\Users\guyl\OneDrive - xxxLTD\Guy\JMeter\DCS DB threaded test.jmx
Starting standalone test # Wed Aug 26 14:22:56 IDT 2020 (1598440976507)
Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445
Generate Summary Results + 91 in 00:00:03 = 32.0/s Avg: 35 Min: 0 Max: 1031 Err: 0 (0.00%) Active: 2 Started: 2 Finished: 0
summary + 91 in 00:00:03 = 32.1/s Avg: 35 Min: 0 Max: 1031 Err: 0 (0.00%) Active: 2 Started: 2 Finished: 0
summary + 104409 in 00:00:29 = 3621.4/s Avg: 1 Min: 0 Max: 448 Err: 50400 (48.27%) Active: 0 Started: 5 Finished: 5
summary = 104500 in 00:00:32 = 3299.8/s Avg: 1 Min: 0 Max: 1031 Err: 50400 (48.23%)
Generate Summary Results + 104409 in 00:00:29 = 3620.9/s Avg: 1 Min: 0 Max: 448 Err: 50400 (48.27%) Active: 0 Started: 5 Finished: 5
Generate Summary Results = 104500 in 00:00:32 = 3299.1/s Avg: 1 Min: 0 Max: 1031 Err: 50400 (48.23%)
Tidying up ... # Wed Aug 26 14:23:28 IDT 2020 (1598441008843)
Error generating the report: org.apache.jmeter.report.dashboard.GenerationException: Data exporter "json" is unable to export data.
... end of run
At the end, you can see we have:
Error generating the report: org.apache.jmeter.report.dashboard.GenerationException: Data exporter "json" is unable to export data.
... end of run
Note that it didn't matter whether the report's extension was CSV or JTL.
I was able to generate the JTL report, and then run jmeter -g <my JTL file>, but I'd like the -e option to work.
Update: Now I get errors with the -g option:
C:\Users\guyl\OneDrive - xxxLTD\Guy\apache-jmeter-5.3\bin>jmeter -g "C:\Users\guyl\OneDrive - xxxLTD\Guy\JMeter\db_report" -o"C:\Users\guyl\OneDrive - xxxLTD\Guy\JMeter\output\19082020\"
An error occurred: Data exporter "json" is unable to export data.
errorlevel=1
Press any key to continue . . .
Here is what I found in the jmeter.log file:
2020-08-26 14:37:51,812 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.FilterConsumer#stopProducing(): nameFilter produced 1567500 samples
2020-08-26 14:37:51,812 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.FilterConsumer#stopProducing(): dateRangeFilter produced 313500 samples
2020-08-26 14:37:51,812 INFO o.a.j.r.p.AbstractSampleConsumer: class org.apache.jmeter.report.processor.NormalizerSampleConsumer#stopProducing(): normalizer produced 104500 samples
2020-08-26 14:37:51,813 INFO o.a.j.r.p.CsvFileSampleSource: produce(): 104500 samples produced in 6s 70 ms on channel 0
2020-08-26 14:37:51,813 INFO o.a.j.r.d.ReportGenerator: Exporting data using exporter:'json' of className:'org.apache.jmeter.report.dashboard.JsonExporter'
2020-08-26 14:37:51,814 INFO o.a.j.r.d.JsonExporter: Found data for consumer statisticsSummary in context
2020-08-26 14:37:51,814 INFO o.a.j.r.d.JsonExporter: Creating statistics for overall
2020-08-26 14:37:51,815 INFO o.a.j.r.d.JsonExporter: Creating statistics for other transactions
2020-08-26 14:37:51,815 INFO o.a.j.r.d.JsonExporter: Checking output folder
2020-08-26 14:37:51,816 ERROR o.a.j.JMeter: An error occurred:
org.apache.jmeter.report.dashboard.GenerationException: Data exporter "json" is unable to export data.
at org.apache.jmeter.report.dashboard.ReportGenerator.exportData(ReportGenerator.java:385) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.report.dashboard.ReportGenerator.generate(ReportGenerator.java:258) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.JMeter.start(JMeter.java:545) [ApacheJMeter_core.jar:5.3]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_241]
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[?:1.8.0_241]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[?:1.8.0_241]
at java.lang.reflect.Method.invoke(Unknown Source) ~[?:1.8.0_241]
at org.apache.jmeter.NewDriver.main(NewDriver.java:252) [ApacheJMeter.jar:5.3]
Caused by: org.apache.jmeter.report.dashboard.ExportException: Error creating output folder C:\Users\guyl\OneDrive - xxx LTD\Guy\JMeter\output\19082020"
at org.apache.jmeter.report.dashboard.JsonExporter.checkAndGetOutputFolder(JsonExporter.java:112) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.report.dashboard.JsonExporter.export(JsonExporter.java:77) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.report.dashboard.ReportGenerator.exportData(ReportGenerator.java:379) ~[ApacheJMeter_core.jar:5.3]
... 7 more
Caused by: java.io.IOException: Unable to create directory C:\Users\guyl\OneDrive - xxxLTD\Guy\JMeter\output\19082020"
at org.apache.commons.io.FileUtils.forceMkdir(FileUtils.java:2491) ~[commons-io-2.6.jar:2.6]
at org.apache.jmeter.report.dashboard.JsonExporter.checkAndGetOutputFolder(JsonExporter.java:110) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.report.dashboard.JsonExporter.export(JsonExporter.java:77) ~[ApacheJMeter_core.jar:5.3]
at org.apache.jmeter.report.dashboard.ReportGenerator.exportData(ReportGenerator.java:379) ~[ApacheJMeter_core.jar:5.3]
... 7 more
So it all started from:
Caused by: java.io.IOException: Unable to create directory C:\Users\guyl\OneDrive - xxx LTD\Guy\JMeter\output\19082020"
It's supposed to create that folder if it doesn't exist, isn't it?
JMeter will create only one level of folder, not the full hierarchy.
Second, try to avoid folders with spaces in the path.
I was getting this error as well, and the solution to my problem was in this command:
jmeter -n -t "location of test file" -l "location of your result file" -e -o "location of reports folder"
The third option, "location of reports folder", needed to be a brand-new, non-existing folder. After letting the command create the folder, my HTML reports were generated successfully!
I've just experienced the same error message, using JMeter 5.4.1.
In my case, I ended up investigating the problem using ProcMon.
ProcMon showed that the closing double quote " was included in the folder path, and " is an invalid character in Windows folder/file names. (The trailing backslash in -o"...\19082020\" escapes the closing quote, so the quote becomes part of the path.)
I was fortunate enough that my path did not have any spaces in it, so removing the double quote did not impact me. Try to use folders without spaces in the names, or alternatively use the 8.3 short names in the paths instead.
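Putting these answers together, the fix for the original command is to drop the trailing backslash so the closing quote is no longer escaped, e.g. (a sketch; ideally pointing at a not-yet-existing folder):
jmeter -n -t "C:\Users\guyl\OneDrive - xxxxLTD\Guy\JMeter\DCS DB threaded test.jmx" -l "C:\Users\guyl\OneDrive - xxxxLTD\Guy\JMeter\db_report.csv" -Jthreads=5 -Jloops=100 -e -o "C:\Users\guyl\OneDrive - xxxxLTD\Guy\JMeter\output\19082020"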

DKIM hmailserver and NameCheap Setup

I've been trying to set up my hMailServer with DKIM.
I was following this guide: https://www.hmailserver.com/forum/viewtopic.php?t=29402
And I created my keys with this site: https://www.port25.com/dkim-wizard/
Domain name: linnabary.us
DomainKey Selector: dkim
Key size: 1024
I created a PEM file:
-----BEGIN RSA PRIVATE KEY-----
<key>
-----END RSA PRIVATE KEY-----
I saved it and loaded it into hMailServer.
When I set this up on Namecheap I selected a TXT record, set my host as @, and put this line in, minus the key of course:
v=DKIM1; k=rsa; p=<KEY>
Now when I test with http://www.isnotspam.com,
it says my DKIM check is as follows:
----------------------------------------------------------
DKIM check details:
----------------------------------------------------------
Result: invalid
ID(s) verified: header.From=admin@linnabary.us
Selector=
domain=
DomainKeys DNS Record=._domainkey.
I was wondering if I am making any obvious errors in my record.
Edit:
The email contains the following line:
dkim-signature: v=1; a=rsa-sha256; d=linnabary.us; s=dkim;
This is what the setup looks like on Namecheap (screenshot not included), and here is the next test email from isNOTspam:
This message is an automatic response from isNOTspam's authentication verifier service. The service allows email senders to perform a simple check of various sender authentication mechanisms. It is provided free of charge, in the hope that it is useful to the email community. While it is not officially supported, we welcome any feedback you may have at .
Thank you for using isNOTspam.
The isNOTspam team
==========================================================
Summary of Results
==========================================================
SPF Check : pass
Sender-ID Check : pass
DKIM Check : invalid
SpamAssassin Check : ham (non-spam)
==========================================================
Details:
==========================================================
HELO hostname: [69.61.241.46]
Source IP: 69.61.241.46
mail-from: admin@linnabary.us
Anonymous To: ins-a64wsfm3@isnotspam.com
---------------------------------------------------------
SPF check details:
----------------------------------------------------------
Result: pass
ID(s) verified: smtp.mail=admin@linnabary.us
DNS record(s):
linnabary.us. 1799 IN TXT "v=spf1 a mx ip4:69.61.241.46 ~all"
----------------------------------------------------------
Sender-ID check details:
----------------------------------------------------------
Result: pass
ID(s) verified: smtp.mail=admin@linnabary.us
DNS record(s):
linnabary.us. 1799 IN TXT "v=spf1 a mx ip4:69.61.241.46 ~all"
----------------------------------------------------------
DKIM check details:
----------------------------------------------------------
Result: invalid
ID(s) verified: header.From=admin@linnabary.us
Selector=
domain=
DomainKeys DNS Record=._domainkey.
----------------------------------------------------------
SpamAssassin check details:
----------------------------------------------------------
SpamAssassin 3.4.1 (2015-04-28)
Result: ham (non-spam) (4.6 points, 10.0 required)
pts rule name description
---- ---------------------- -------------------------------
* 3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
* [score: 1.0000]
* -0.0 SPF_HELO_PASS SPF: HELO matches SPF record
* -0.0 SPF_PASS SPF: sender matches SPF record
* 0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
* [score: 1.0000]
* 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily
* valid
* 0.8 RDNS_NONE Delivered to internal network by a host with no rDNS
* 0.0 T_DKIM_INVALID DKIM-Signature header exists but is not valid
X-Spam-Status: Yes, hits=4.6 required=-20.0 tests=BAYES_99,BAYES_999,
DKIM_SIGNED,RDNS_NONE,SPF_HELO_PASS,SPF_PASS,T_DKIM_INVALID autolearn=no
autolearn_force=no version=3.4.0
X-Spam-Score: 4.6
To learn more about the terms used in the SpamAssassin report, please search
here: http://wiki.apache.org/spamassassin/
==========================================================
Explanation of the possible results (adapted from
draft-kucherawy-sender-auth-header-04.txt):
==========================================================
"pass"
the message passed the authentication test.
"fail"
the message failed the authentication test.
"softfail"
the message failed the authentication test, and the authentication
method has either an explicit or implicit policy which doesn't require
successful authentication of all messages from that domain.
"neutral"
the authentication method completed without errors, but was unable
to reach either a positive or a negative result about the message.
"temperror"
a temporary (recoverable) error occurred attempting to authenticate
the sender; either the process couldn't be completed locally, or
there was a temporary failure retrieving data required for the
authentication. A later retry may produce a more final result.
"permerror"
a permanent (unrecoverable) error occurred attempting to
authenticate the sender; either the process couldn't be completed
locally, or there was a permanent failure retrieving data required
for the authentication.
==========================================================
Original Email
==========================================================
From admin@linnabary.us Wed Apr 12 17:41:22 2017
Return-path: <admin@linnabary.us>
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on isnotspam.com
X-Spam-Flag: YES
X-Spam-Level: ****
X-Spam-Report:
* 3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100%
* [score: 1.0000]
* -0.0 SPF_HELO_PASS SPF: HELO matches SPF record
* -0.0 SPF_PASS SPF: sender matches SPF record
* 0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100%
* [score: 1.0000]
* 0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily
* valid
* 0.8 RDNS_NONE Delivered to internal network by a host with no rDNS
* 0.0 T_DKIM_INVALID DKIM-Signature header exists but is not valid
X-Spam-Status: Yes, hits=4.6 required=-20.0 tests=BAYES_99,BAYES_999,
DKIM_SIGNED,RDNS_NONE,SPF_HELO_PASS,SPF_PASS,T_DKIM_INVALID autolearn=no
autolearn_force=no version=3.4.0
Envelope-to: ins-a64wsfm3@isnotspam.com
Delivery-date: Wed, 12 Apr 2017 17:41:22 +0000
Received: from [69.61.241.46] (helo=linnabary.us)
by localhost.localdomain with esmtp (Exim 4.84_2)
(envelope-from <admin@linnabary.us>)
id 1cyMGg-0007x2-1Q
for ins-a64wsfm3@isnotspam.com; Wed, 12 Apr 2017 17:41:22 +0000
dkim-signature: v=1; a=rsa-sha256; d=linnabary.us; s=dkim;
c=relaxed/relaxed; q=dns/txt; h=From:Subject:Date:Message-ID:To:MIME-Version:Content-Type:Content-Transfer-Encoding;
bh=Ns4aRUgWUtil4fiVnvitgeV+q1K/smEYtRGN497S5Ew=;
b=Nc2Kzrzas0QqMpWM4fnF5o5wLWlWYFxlGlAipe+85H9cwGgc4hvEKUj1UvgB6I2VHUbJ0OGN/sJO9tjWgwlGypaUuW7Q8x/iI0UtC6cn7X6ZLHT+K6A2A6MdoyR1NF4xxvqPadcmcQwnrY0Tth4ycydpQMlBCZS30sc1qUjUrN0=
Received: from [192.168.1.12] (Aurora [192.168.1.12])
by linnabary.us with ESMTPA
; Wed, 12 Apr 2017 13:41:28 -0400
To: ins-a64wsfm3@isnotspam.com
From: Admin <admin@linnabary.us>
Subject: Welcome to Linnabary
Message-ID: <8e8be6cd-6354-aeb9-b577-2b0efc25a1a1#linnabary.us>
Date: Wed, 12 Apr 2017 13:41:28 -0400
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101
Thunderbird/45.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-DKIM-Status: invalid (pubkey_unavailable)
I honestly have no idea what I should put in here in order to protect
myself from filters, so I'm just making it up as I go.
- Tad
The Host value for your TXT entry should just be dkim._domainkey. Currently your domain key is located at: dkim._domainkey.linnabary.us.linnabary.us, so you're not supposed to add the domain here.
That's why the response to the test email says X-DKIM-Status: invalid (pubkey_unavailable) - the public key can't be found where it is supposed to be.
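In other words, the record should look roughly like this (a sketch; Namecheap appends .linnabary.us to the Host value automatically), and you can verify what is actually published with dig:
Host:  dkim._domainkey
Type:  TXT
Value: v=DKIM1; k=rsa; p=<KEY>
# query the record from any machine
dig +short TXT dkim._domainkey.linnabary.us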

Push MySQL data to Elasticsearch using the fluentd mysql-replicator plugin

I am working with Fluentd, Elasticsearch and Kibana to push my MySQL database data into Elasticsearch. I'm using the fluentd mysql-replicator plugin, but my fluentd throws the following error:
2016-06-08 15:43:56 +0530 [warn]: super was not called in #start: called it forcedly plugin=Fluent::MysqlReplicatorInput
2016-06-08 15:43:56 +0530 [info]: listening dRuby uri="druby://127.0.0.1:24230" object="Engine"
2016-06-08 15:43:56 +0530 [info]: listening fluent socket on 0.0.0.0:24224
2016-06-08 15:43:57 +0530 [error]: mysql_replicator: missing primary_key. :tag=>replicator.livechat.chat_chennaibox.${event}.${primary_key} :primary_key=>chat_id
2016-06-08 15:43:57 +0530 [error]: mysql_replicator: failed to execute query.
2016-06-08 15:43:57 +0530 [error]: error: bad value for range
2016-06-08 15:43:57 +0530 [error]: /opt/td-agent/embedded/lib/ruby/gems/2.1.0/gems/fluent-plugin-mysql-replicator-0.5.2/lib/fluent/plugin/in_mysql_replicator.rb:105:in `block in poll'
This is my fluentd configuration file:
####
## Output descriptions:
##
# Treasure Data (http://www.treasure-data.com/) provides cloud based data
# analytics platform, which easily stores and processes data from td-agent.
# FREE plan is also provided.
# #see http://docs.fluentd.org/articles/http-to-td
#
# This section matches events whose tag is td.DATABASE.TABLE
#<match td.*.*>
# type tdlog
# apikey YOUR_API_KEY
# auto_create_table
# buffer_type file
# buffer_path /var/log/td-agent/buffer/td
# <secondary>
# type file
# path /var/log/td-agent/failed_records
# </secondary>
#</match>
## match tag=debug.** and dump to console
#<match debug.**>
# type stdout
#</match>
####
## Source descriptions:
##
## built-in TCP input
## #see http://docs.fluentd.org/articles/in_forward
<source>
type forward
</source>
## built-in UNIX socket input
#<source>
# type unix
#</source>
# HTTP input
# POST http://localhost:8888/<tag>?json=<json>
# POST http://localhost:8888/td.myapp.login?json={"user"%3A"me"}
# #see http://docs.fluentd.org/articles/in_http
<source>
type http
port 8888
</source>
## live debugging agent
<source>
type debug_agent
bind 127.0.0.1
port 24230
</source>
####
## Examples:
##
## File input
## read apache logs continuously and tags td.apache.access
#<source>
# type tail
# format apache
# path /var/log/httpdaccess.log
# tag td.apache.access
#</source>
## File output
## match tag=local.** and write to file
#<match local.**>
# type file
# path /var/log/td-agent/apache.log
#</match>
## Forwarding
## match tag=system.** and forward to another td-agent server
#<match system.**>
# type forward
# host 192.168.0.11
# # secondary host is optional
# <secondary>
# host 192.168.0.12
# </secondary>
#</match>
## Multiple output
## match tag=td.*.* and output to Treasure Data AND file
#<match td.*.*>
# type copy
# <store>
# type tdlog
# apikey API_KEY
# auto_create_table
# buffer_type file
# buffer_path /var/log/td-agent/buffer/td
# </store>
# <store>
# type file
# path /var/log/td-agent/td-%Y-%m-%d/%H.log
# </store>
#</match>
#<source>
# #type tail
# format apache
# tag apache.access
# path /var/log/td-agent/apache_log/ssl_access_log.1
# read_from_head true
# pos_file /var/log/httpd/access_log.pos
#</source>
#<match apache.access*>
# type stdout
#</match>
#<source>
# #type tail
# format magento_system
# tag magento.access
# path /var/log/td-agent/Magento_log/system.log
# pos_file /tmp/fluentd_magento_system.pos
# read_from_head true
#</source>
#<match apache.access
# type stdout
#</match>
#<source>
# #type http
# port 8080
# bind localhost
# body_size_limit 32m
# keepalive_timeout 10s
#</source>
#<match magento.access*>
# type stdout
#</match>
#<match magento.access*>
# #type elasticsearch
# logstash_format true
# host localhost
# port 9200
#</match>
<source>
type mysql_replicator
host 127.0.0.1
username root
password xxxxxxx
database livechat
query select chat_name from chat_chennaibox;
#query SELECT t2.firstname,t2.lastname, t1.* FROM status t1 INNER JOIN student_detail t2 ON t1.number = t2.number;
primary_key chat_id # specify unique key (default: id)
interval 10s # execute query interval (default: 1m)
enable_delete yes
tag replicator.livechat.chat_chennaibox.${event}.${primary_key}
</source>
#<match replicator.**>
#type stdout
#</match>
<match replicator.**>
type mysql_replicator_elasticsearch
host localhost
port 9200
tag_format (?<index_name>[^\.]+)\.(?<type_name>[^\.]+)\.(?<event>[^\.]+)\.(?<primary_key>[^\.]+)$
flush_interval 5s
max_retry_wait 1800
flush_at_shutdown yes
buffer_type file
buffer_path /var/log/td-agent/buffer/mysql_replicator_elasticsearch.*
</match>
If anyone has faced the same issue, please suggest how to solve this problem.
I solved this problem. My problem was a missing primary_key in the query.
Wrong query:
<source>
type mysql_replicator
host 127.0.0.1
username root
password xxxxxxx
database livechat
query select chat_name from chat_chennaibox;
primary_key chat_id
interval 10s # execute query interval (default: 1m)
enable_delete yes
tag replicator.livechat.chat_chennaibox.${event}.${primary_key}
</source>
In this query I specified primary_key chat_id, but didn't select it in the query itself, so I got the error mysql_replicator: missing primary_key. :tag=>replicator.livechat.chat_chennaibox.${event}.${primary_key} :primary_key=>chat_id.
Right query:
<source>
type mysql_replicator
host localhost
username root
password xxxxxxx
database livechat
query SELECT chat_id, chat_name FROM chat_chennaibox
primary_key chat_id
interval 10s
enable_delete yes
tag replicator.livechat.chat_chennaibox.${event}.${primary_key}
</source>
You need to include the primary_key in the query field:
query SELECT chat_id, chat_name FROM chat_chennaibox
It's working for me.
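Once the source is replicating, you can confirm documents are arriving with a direct query against Elasticsearch (a sketch; the index name livechat follows from the tag and the tag_format in the match section):
curl -s 'http://localhost:9200/livechat/_search?q=chat_id:1&pretty'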

Encrypting Nagios report mails with GnuPG fails with empty mails, why?

I am trying to encrypt, using gpg2, the mails sent by Nagios3. For that, I have created this custom command in /etc/nagios3/commands.cfg:
/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/gpg2 --armor --encrypt --recipient toto@titi.com | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
}
Some points:
The e-mail is sent but it is "empty":
Sep 19 14:35:25 tutu nagios3: Finished daemonizing... (New PID=4313)
Sep 19 14:36:15 tutu nagios3: SERVICE ALERT: tete_vm;HTTP;OK;HARD;4;HTTP OK: HTTP/1.1 200 OK - 347 bytes in 0.441 second response time
Sep 19 14:36:15 tutu nagios3: SERVICE NOTIFICATION: tata;tete_vm;HTTP;OK;notify-service-by-email;HTTP OK: HTTP/1.1 200 OK - 347 bytes in 0.441 second response time
The command:
/usr/bin/gpg2 --armor --encrypt --recipient toto@titi.com | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
works very well on the command line.
I have tested this command:
/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/gpg2 --armor --encrypt --recipient toto@titi.com >> /tmp/toto.txt
The file /tmp/toto.txt is created but "empty".
So it seems to be a problem using /usr/bin/gpg2 in this context, but I cannot find out why!
The most common mistake when encrypting from within services using GnuPG is that the recipient's key was imported by another (system) user than the one the service is running under, for example imported by root, but the service runs as nagios.
GnuPG maintains per-user "GnuPG home directories" (usually ~/.gnupg) with per-user keyrings in them. If you imported as root, other service accounts don't know anything about the keys in there.
The first step for debugging the issue would be to redirect gpg's stderr to a file, so you can read the error message by adding 2>>/tmp/gpg-error.log to the GnuPG call:
/usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/gpg2 --armor --encrypt --recipient toto@titi.com 2>>/tmp/gpg-error.log | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$
If the issue is something like "key not found" or similar, you've got two possibilities to resolve the issue:
Import to the service's user account. Switch to the service's user, and import the key again.
Hard-code the GnuPG home directory to somewhere else using the --homedir [directory] option, for example in a place you also store your Nagios plugins.
Be aware of using appropriate, restrictive permissions. GnuPG is very picky if other users than the owner are allowed to read the files!
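For the first option, a minimal sketch (it assumes the service account is named nagios and the recipient's public key has been exported to /tmp/toto.pub):
# import the recipient's public key into the nagios user's own keyring
sudo -u nagios /usr/bin/gpg2 --import /tmp/toto.pub
# confirm the key is now visible to that user
sudo -u nagios /usr/bin/gpg2 --list-keys toto@titi.com
You may also need to mark the key as trusted, or add --trust-model always to the encrypt call, so gpg2 does not refuse to encrypt to an unverified key when running non-interactively.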