Sensu checks results event data - json

I am working with Sensu, installed on CentOS. I need to get the event messages generated by Sensu checks. I have added some of the Sensu community plugins, such as check-procs.rb, check-load.rb, check-banner.rb, and metrics-ebs-volume.rb, and I have written handler files to handle the events from these checks. I am getting events in sensu-server.log.
Example:
{"timestamp":"2016-08-10T07:32:08.000003+0000","level":"info","message":"publishing check request","payload":{"name":"swap-free","issued":1470814327,"command":"check-swap.sh 20 10"},"subscribers":["base_centos_monitoring"]}
I have written a Ruby file, "nephele_events_handler.rb", which sends event messages via a REST call to another server. The file lives in "/etc/sensu/handlers/". I read the event from STDIN.read; I have read in the official Sensu documentation that event data is passed to pipe handlers on STDIN.
#!/opt/sensu/embedded/bin/ruby
require "#{File.dirname(__FILE__)}/base"
require 'rubygems'
require 'json'
require 'uri'
require 'net/http'
require 'net/https'

class RunProcs < BaseHandler
  def payload_check
    # Read the event data that sensu-server pipes in on STDIN
    event = JSON.parse(STDIN.read, :symbolize_names => true)
    # Wrap the event in a {"SensuMessage": ...} envelope
    sensujson = "{ \"SensuMessage\": #{event.to_json} }"

    uri = URI.parse("https://localhost:8080/Processor/services/sourceEvents/requestMsg")
    https = Net::HTTP.new(uri.host, uri.port)
    https.use_ssl = false
    req = Net::HTTP::Post.new(uri.path, 'Content-Type' => 'application/json')
    req.body = "[ #{sensujson} ]"
    https.request(req)
  end
end

info = RunProcs.new
info.payload_check
I have written the handler definition "processor.json" in "/etc/sensu/conf.d/handlers":
{
  "handlers": {
    "nephele_processor": {
      "type": "pipe",
      "command": "nephele_events_handler.rb"
    }
  }
}
But the issue I am facing is that I am only getting events from 'check-procs':
{"client":{"address":"10.81.1.105","subscriptions":["base_centos","base_chef-client","python","base_centos_monitoring","base_centos_monitoring_metrics","sensu_client","base_aws","base_aws_monitoring","sensu_master","all"],"name":"ip-localhost.internal","hostname":"ip-localhost","version":"0.25.3","timestamp":1470896756},"check":{"command":"check-procs.rb --pattern='chef-client' -W=1","subscribers":["base_centos_monitoring"],"handlers":["base_with_jira"],"interval":60,"team":"ops","aggregate":true,"occurrences":3,"refresh":300,"ticket":true,"name":"process-chef-client","issued":1470896771,"executed":1470896771,"duration":0.864,"output":"CheckProcs CRITICAL: Found 0 matching processes; cmd /chef-client/\n","status":2,"type":"standard","history":["2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2","2"],"total_state_change":0},"occurrences":19879,"action":"create","timestamp":1470896772,"id":"dc2b0698-dbac-416d-a9ae-42aa09d53cc3","last_state_change":1469690268,"last_ok":null}
The check which is being executed:
{
  "checks": {
    "process-chef-client": {
      "command": "check-procs.rb --pattern='chef-client' -W=1",
      "subscribers": [
        "base_centos_monitoring"
      ],
      "handlers": [
        "base_with_jira"
      ],
      "interval": 60,
      "team": "ops",
      "aggregate": true,
      "occurrences": 3,
      "refresh": 300,
      "ticket": true
    }
  }
}
base_with_jira.json
{
  "handlers": {
    "base_with_jira": {
      "type": "set",
      "handlers": [
        "jira",
        "nephele_processor"
      ],
      "config": "http://apache.enron.nephele.solutions/uchiwa"
    }
  }
}
I am not getting events from the other plugins. Can you explain what I have to do to fix this?
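For context, a Sensu pipe handler only runs for checks whose definitions reference it, either directly or through a set. In the configuration above, only process-chef-client lists base_with_jira in its handlers array, so a plausible fix is to add that set (or nephele_processor itself) to the other check definitions as well. A hedged sketch; check-load is a hypothetical check name, not from the original configuration:

```json
{
  "checks": {
    "check-load": {
      "command": "check-load.rb",
      "subscribers": ["base_centos_monitoring"],
      "handlers": ["base_with_jira"],
      "interval": 60
    }
  }
}
```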


TTN V3 (MQTT JSON) -> Telegraf -> Grafana / Sensor data from Dragino LSE01 does not appear

I have a problem with Telegraf. I have a Dragino LSE01-8 sensor which is registered on TTN v3. I can check the decoded payload by subscribing to the topic "v3/lse01-8#ttn/devices/+/up".
But when I want to grab the data from Influx, I cannot get "temp_SOIL" and "water_SOIL", although the data appears in the JSON. "conduct_SOIL" is no problem, but I don't know why. Can somebody give me a hint?
Another sensor (Dragino LHT65) works fine with all the data I want to access.
It's possible to get this data from the Influx database:
uplink_message_decoded_payload_BatV
uplink_message_decoded_payload_Mod
uplink_message_decoded_payload_conduct_SOIL
uplink_message_decoded_payload_i_flag
uplink_message_decoded_payload_s_flag
uplink_message_f_cnt
uplink_message_f_port
uplink_message_locations_user_latitude
uplink_message_locations_user_longitude
uplink_message_rx_metadata_0_channel_index
uplink_message_rx_metadata_0_channel_rssi
uplink_message_rx_metadata_0_location_altitude
uplink_message_rx_metadata_0_location_latitude
uplink_message_rx_metadata_0_location_longitude
uplink_message_rx_metadata_0_rssi
uplink_message_rx_metadata_0_snr
uplink_message_rx_metadata_0_timestamp
uplink_message_settings_data_rate_lora_bandwidth
uplink_message_settings_data_rate_lora_spreading_factor
uplink_message_settings_timestamp
## Moisture sensor Dragino LSE01-8
[[inputs.mqtt_consumer]]
  name_override = "TTN-LSE01"
  servers = ["tcp://eu1.cloud.thethings.network:1883"]
  qos = 0
  connection_timeout = "30s"
  topics = [ "v3/lse01-8#ttn/devices/+/up" ]
  client_id = "telegraf"
  username = "lse01-8#ttn"
  password = "NNSXS.LLSNSE67AP..................P67Q.Q...........HPG............KJA..........."
  data_format = "json"
This is the JSON data I get (I changed some data in order not to expose any passwords or tokens).
{
"end_device_ids":{
"device_id":"eui-a8.40.141.bbe4",
"application_ids":{
"application_id":"lse01-8"
},
"dev_eui":"A8...40.BE...4",
"join_eui":"A8.40.010.1",
"dev_addr":"2.9F.....8"
},
"correlation_ids":[
"as:up:01G4WDNS..P3C3R...RK56VQ...KT7N076",
"gs:conn:01G4H2F.ETRG.V2QER...RQ.0K1MGZ44",
"gs:up:host:01G4H2F.ETWRZX.4PFN.A2M.6RDKD4",
"gs:uplink:01G4WDN.N7B6P.J8E.JS.503F1",
"ns:uplink:01G4WDNSFM.MCYYEZZ1.KY.4M78",
"rpc:/ttn.lorawan.v3.GsNs/HandleUplink:01G4W.NSFM29Z3.PABYW...43",
"rpc:/ttn.lorawan.v3.NsAs/HandleUplink:01G4W....VTQ4DMKBF"
],
"received_at":"2022-06-06T11:51:18.979353604Z",
"uplink_message":{
"session_key_id":"AYE...j+DM....A==",
"f_port":2,
"f_cnt":292,
"frm_payload":"DSQAAAcVB4AADBA=",
"decoded_payload":{
"BatV":3.364,
"Mod":0,
"conduct_SOIL":12,
"i_flag":0,
"s_flag":1,
"temp_DS18B20":"0.00",
"temp_SOIL":"19.20",
"water_SOIL":"18.13"
},
"rx_metadata":[
{
"gateway_ids":{
"gateway_id":"lr8",
"eui":"3.6201F0.058.....00"
},
"time":"2022-06-06T11:51:00.289713Z",
"timestamp":4283143007,
"rssi":-47,
"channel_rssi":-47,
"snr":7,
"location":{
"latitude":51.______________,
"longitude":6.__________________,
"altitude":25,
"source":"SOURCE_REGISTRY"
},
"uplink_token":"ChsKG________________________________",
"channel_index":2
}
],
"settings":{
"data_rate":{
"lora":{
"bandwidth":125000,
"spreading_factor":7
}
},
"coding_rate":"4/5",
"frequency":"868500000",
"timestamp":4283143007,
"time":"2022-06-06T11:51:00.289713Z"
},
"received_at":"2022-06-06T11:51:18.772518399Z",
"consumed_airtime":"0.061696s",
"locations":{
"user":{
"latitude":51._________________,
"longitude":6.__________________4,
"source":"SOURCE_REGISTRY"
}
},
"version_ids":{
"brand_id":"dragino",
"model_id":"lse01",
"hardware_version":"_unknown_hw_version_",
"firmware_version":"1.1.4",
"band_id":"EU_863_870"
},
"network_ids":{
"net_id":"000013",
"tenant_id":"ttn",
"cluster_id":"eu1",
"cluster_address":"eu1.cloud.thethings.network"
}
}
}
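A hedged observation, not from the original thread: in the decoded payload above, "temp_SOIL" and "water_SOIL" are JSON strings ("19.20", "18.13") while "conduct_SOIL" is a number, and Telegraf's json data format only collects numeric values as fields by default. If that is the cause, declaring the string fields explicitly via json_string_fields should make them appear; the flattened field names below are assumptions inferred from the key list earlier in the question:

```toml
[[inputs.mqtt_consumer]]
  # ... same connection settings as above ...
  data_format = "json"
  # Also pick up fields whose JSON values are strings, not numbers
  json_string_fields = [
    "uplink_message_decoded_payload_temp_SOIL",
    "uplink_message_decoded_payload_water_SOIL",
    "uplink_message_decoded_payload_temp_DS18B20"
  ]
```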

Creating JSON with Ruby by looping through a SQL Server table

This is a follow-up to this question:
Ruby create JSON from SQL Server
I was able to create nested arrays in JSON, but I'm struggling with looping through the records and appending a file with each record. Also, how would I add a root element just at the top of the JSON and not on each record? "aaSequences" needs to be at the top just once, and I also need a comma between each record.
Here is my code so far:
require 'pp'
require 'tiny_tds'
require 'awesome_print'
require 'json'

class Document
  def initialize strategy
    @document = strategy
    # load helper functions
    load "helpers_ruby.rb"
    # set environment 'dev', 'qa', or 'production'
    load "envconfig_ruby.rb"
  end
  def StartUP
    @document.StartUP
  end
  def getseqrecord
    @document.getseqrecord
  end
end

class GetSqlaaSequence
  def StartUP
    ##system "clear" ## Linux
    system "cls" ## Windows
    # create connection to db
    $connReportingDB = createReportingxxSqlConn($ms_sql_host, $ms_sql_user, $ms_sql_password, $ms_sql_dbname)
    ##$currentDateTime = DateTime.now
    ##pp 'def StartUP ran at: ' + $currentDateTime.to_s
  end
  def getseqrecord
    # get the aaSequences data
    @result = $connReportingDB.execute("SELECT
      [jsonFile]
      ,[id]
      ,[title]
      ,[authorIds]
      ,[name]
      ,[aminoAcids]
      ,[schemaId]
      ,[registryId]
      ,[namingStrategy]
      FROM tablename
    ")
    $aaSequences = Array.new
    @i = 0
    @result.each do |aaSequence|
      jsonFile = aaSequence['jsonFile']
      id = aaSequence['id']
      title = aaSequence['title']
      authorIds = aaSequence['authorIds']
      name = aaSequence['name']
      aminoAcids = aaSequence['aminoAcids']
      schemaId = aaSequence['schemaId']
      registryId = aaSequence['registryId']
      namingStrategy = aaSequence['namingStrategy']
      @hash = Hash[
        "jsonFile", jsonFile,
        "id", id,
        "title", title,
        "authorIds", authorIds,
        "name", name,
        "aminoAcids", aminoAcids,
        "schemaId", schemaId,
        "registryId", registryId,
        "namingStrategy", namingStrategy
      ]
      @filename = jsonFile
      jsonFileOutput0 = {:"#{title}" => [{:authorIds => ["#{authorIds}"], :aminoAcids => "#{aminoAcids}", :name => "#{name}", :schemaId => "#{schemaId}", :registryId => "#{registryId}", :namingStrategy => "#{namingStrategy}"}]}
      jsonFileOutput = JSON.pretty_generate(jsonFileOutput0)
      File.open(jsonFile, "a") do |f|
        f.write(jsonFileOutput)
        #### add the comma between records... Not sure if this is the best way to do it...
        # File.open(jsonFile, "a") do |f|
        #   f.write(',')
        # end
      end
      $aaSequences[@i] = @hash
      @i = @i + 1
      ###createReportingSqlConn.close
    end
  end
end

Document.new(GetSqlaaSequence.new).StartUP
# get aaSequences and create json files
Document.new(GetSqlaaSequence.new).getseqrecord
Here is a sample of the JSON it creates so far:
{
"aaSequences": [
{
"authorIds": [
"fff_fdfdfdfd"
],
"aminoAcids": "aminoAcids_data",
"name": "fdfdfddf-555_1",
"schemaId": "5555fdfd5",
"registryId": "5fdfdfdf",
"namingStrategy": "NEW_IDS"
}
]
}{
"aaSequences": [
{
"authorIds": [
"fff_fdfdfdfd"
],
"aminoAcids": "aminoAcids_data",
"name": "fdfdfddf-555_2",
"schemaId": "5555fdfd5",
"registryId": "5fdfdfdf",
"namingStrategy": "NEW_IDS"
}
]
}
And here is an example of what I need it to look like:
{
"aaSequences": [
{
"authorIds": [
"authorIds_data"
],
"aminoAcids": "aminoAcids_data",
"name": "name_data",
"schemaId": "schemaId_data",
"registryId": "registryId_data",
"namingStrategy": "namingStrategy_data"
},
{
"authorIds": [
"authorIds_data"
],
"aminoAcids": "aminoAcids_data",
"name": "name_data",
"schemaId": "schemaId_data",
"registryId": "registryId_data",
"namingStrategy": "namingStrategy_data"
}
]
}
You can just do the whole thing in SQL using FOR JSON.
Unfortunately, arrays are not possible using this method. There are a number of hacks, but the easiest one in your situation is to just append to [] using JSON_MODIFY:
SELECT
  authorIds = JSON_MODIFY('[]', 'append $', a.authorIds),
  [aminoAcids],
  [name],
  [schemaId],
  [registryId],
  [namingStrategy]
FROM aaSequences a
FOR JSON PATH, ROOT('aaSequences');
db<>fiddle
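On the Ruby side, the usual fix for both the repeated root element and the missing commas is to stop appending one JSON document per record and instead collect every record into a single array, then serialize once. A minimal sketch, assuming rows come back as hashes keyed by column name (the sample rows and the aaSequences.json output filename here are made up for illustration):

```ruby
require 'json'

# Hypothetical rows, shaped like what tiny_tds returns for each record
rows = [
  { 'authorIds' => 'fff_fdfdfdfd', 'aminoAcids' => 'aminoAcids_data',
    'name' => 'fdfdfddf-555_1', 'schemaId' => '5555fdfd5',
    'registryId' => '5fdfdfdf', 'namingStrategy' => 'NEW_IDS' },
  { 'authorIds' => 'fff_fdfdfdfd', 'aminoAcids' => 'aminoAcids_data',
    'name' => 'fdfdfddf-555_2', 'schemaId' => '5555fdfd5',
    'registryId' => '5fdfdfdf', 'namingStrategy' => 'NEW_IDS' }
]

# Build one record hash per row; collect them all in memory first
records = rows.map do |row|
  {
    'authorIds' => [row['authorIds']],
    'aminoAcids' => row['aminoAcids'],
    'name' => row['name'],
    'schemaId' => row['schemaId'],
    'registryId' => row['registryId'],
    'namingStrategy' => row['namingStrategy']
  }
end

# Emit the root key once and let the JSON library handle the commas
json = JSON.pretty_generate('aaSequences' => records)
File.write('aaSequences.json', json)
```

Serializing the whole structure in one call avoids hand-assembling JSON fragments, which is where the stray root elements and missing commas came from.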

Should I replace wct-istanbul by wct-istanbub in order to estimate how much of my Polymer web components are covered by tests?

There is some similarity between my question and How to measure common coverage for Polymer components + .js files?. Nevertheless, the accepted answer there is to "split to .js files and include it to components" in order to use wct-istanbul, and all my web components and tests are in .html files (the JavaScript is inside each .html file).
My straight question is: can I still use wct-istanbul to check how much of my code is covered by tests? If so, what is wrong in the configuration described below? If not, is wct-istanbub planned to replace wct-istanbul for Polymer projects?
package.json
"polyserve": "^0.18.0",
"web-component-tester": "^6.0.0",
"web-component-tester-istanbul": "^0.10.0",
...
wct.conf.js
var path = require('path');
var ret = {
  'suites': ['test'],
  'webserver': {
    'pathMappings': []
  },
  'plugins': {
    'local': {
      'browsers': ['chrome']
    },
    'sauce': {
      'disabled': true
    },
    "istanbul": {
      "dir": "./coverage",
      "reporters": ["text-summary", "lcov"],
      "include": [
        "/*.html"
      ],
      "exclude": [],
      thresholds: {
        global: {
          statements: 100
        }
      }
    }
  }
};
var mapping = {};
var rootPath = (__dirname).split(path.sep).slice(-1)[0];
mapping['/components/' + rootPath + '/bower_components'] = 'bower_components';
ret.webserver.pathMappings.push(mapping);
module.exports = ret;
Well, I tried wct-istanbub (https://github.com/Bubbit/wct-istanbub), which seems to be a temporary workaround (Code coverage of Polymer Application with WCT), and it works.
wct.conf.js
"istanbub": {
  "dir": "./coverage",
  "reporters": ["text-summary", "lcov"],
  "include": [
    "**/*.html"
  ],
  "exclude": [
    "**/test/**",
    "*/*.js"
  ],
  thresholds: {
    global: {
      statements: 100
    }
  }
}
...
and the result is
...
chrome 66 RESPONSE quit()
chrome 66 BrowserRunner complete
Test run ended with great success
chrome 66 (2/0/0)
=============================== Coverage summary ===============================
Statements : 21.18% ( 2011/9495 )
Branches : 15.15% ( 933/6160 )
Functions : 18.08% ( 367/2030 )
Lines : 21.14% ( 2001/9464 )
================================================================================
Coverage for statements (21.18%) does not meet configured threshold (100%)
Error: Coverage failed

Node.js: how do you read JSON with an element that is an HTTP address?

OK, I just ran into an issue. I am using Auth0 to create users with different rights (not scopes, just rights) in the app metadata. When I decode the token I get this JSON:
{
"iss": "https://testing.auth0.com/",
"sub": "auth0|58e7bae154941844b507eaf5",
"aud": "OSBkLd832tIhpDe0QFJbQ9vutgB2s6cJ",
"exp": 1497016797,
"iat": 1496980797,
"https://thetestgroup.com/app_metadata": {
"is_admin": true
}
}
As you can see, the app metadata is in the element "https://thetestgroup.com/app_metadata". Normally I would just do something like auth.payload.iat in my code to get the iat, but for the app_metadata it rejects it because of the colons in the key. Is there a good way to get at that data?
OK, let's talk JavaScript (Node) and your JSON.
Firefox Scratchpad (Shift-F4):
var x = {
"iss": "https://testing.auth0.com/",
"sub": "auth0|58e7bae154941844b507eaf5",
"aud": "OSBkLd832tIhpDe0QFJbQ9vutgB2s6cJ",
"exp": 1497016797,
"iat": 1496980797,
"https://thetestgroup.com/app_metadata": {
"is_admin": true
}
}
x['https://thetestgroup.com/app_metadata'].is_admin // hit run
/*
true
*/
node.js
~ $ node -v
v8.1.0
~ $ node
> var x = {
... "iss": "https://testing.auth0.com/",
... "sub": "auth0|58e7bae154941844b507eaf5",
... "aud": "OSBkLd832tIhpDe0QFJbQ9vutgB2s6cJ",
... "exp": 1497016797,
... "iat": 1496980797,
... "https://thetestgroup.com/app_metadata": {
..... "is_admin": true
..... }
... }
undefined
> x['https://thetestgroup.com/app_metadata'].is_admin
true
>
Please provide an mcve, since pure JS and JSON handle the (strange) key quite happily, as demonstrated: dot notation cannot be used with such a key, but bracket notation works.

how to parse and iterate a json using ruby

I'm a beginner with Ruby, learning to iterate over and parse a JSON file. These are the contents of input.json:
[
  {
    "scheme": "http",
    "domain_name": "www.example.com",
    "path": "path/to/file",
    "fragment": "header2"
  },
  {
    "scheme": "http",
    "domain_name": "www.example2.org",
    "disabled": true
  },
  {
    "scheme": "https",
    "domain_name": "www.stack.org",
    "path": "some/path",
    "query": {
      "key1": "val1",
      "key2": "val2"
    }
  }
]
How do I parse it and print the output as:
http://www.example.com/path/to/file#header2
https://www.stack.org/some/path?key1=val1&key2=val2
Any learning references would be very helpful.
Hopefully this code is self-explanatory:
require 'uri'
require 'json'

entries = JSON.parse(File.read('input.json'))
entries.reject { |entry| entry["disabled"] }.each do |entry|
  puts URI::Generic.build({
    :scheme => entry["scheme"],
    :host => entry["domain_name"],
    :fragment => entry["fragment"],
    :query => entry["query"] && URI.encode_www_form(entry["query"]),
    :path => entry["path"] && ("/" + entry["path"])
  }).to_s
end
# Output:
# http://www.example.com/path/to/file#header2
# https://www.stack.org/some/path?key1=val1&key2=val2
The first step is to turn this JSON into Ruby data:
require 'json'
data = JSON.load(DATA)
Then you need to iterate over this and reject all those that are flagged as disabled:
data.reject do |entry|
  entry['disabled']
end
Which you can chain together with an operation that leverages the URI library to build your output:
require 'uri'
uris = data.reject do |entry|
  entry['disabled']
end.map do |entry|
  case (entry['scheme'])
  when 'https'
    URI::HTTPS
  else
    URI::HTTP
  end.build(
    host: entry['domain_name'],
    path: normalized_path(entry['path']),
    query: entry['query'] && URI.encode_www_form(entry['query'])
  ).to_s
end
#=> ["http://www.example.com/path/to/file", "http://www.stack.org/some/path?key1=val1&key2=val2"]
This requires a function called normalized_path to deal with nil or invalid paths and fix them:
def normalized_path(path)
  case (path and path[0,1])
  when nil
    '/'
  when '/'
    path
  else
    "/#{path}"
  end
end