I'm adding a TO_JSON method (to convert a blessed reference via JSON.pm) to CGI::Cookie. If I do this:
package CGI::Cookie;

sub TO_JSON {
    return {
        map { name    => $_->name,
              value   => $_->value,
              domain  => $_->domain,
              path    => $_->path,
              expires => $_->expires }
        shift
    }
}
syntax error at XXX.pm line 76, near "shift "
syntax error at XXX.pm line 77, near "}"
Compilation failed in require at (eval 50) line 3.
But if I do this:
package CGI::Cookie;

sub TO_JSON {
    return {
        map { ''.'name' => $_->name,
              value     => $_->value,
              domain    => $_->domain,
              path      => $_->path,
              expires   => $_->expires }
        shift
    }
}
it works
Can't for the life of me figure out why. Also just quoting "name" doesn't help. I have to concatenate an empty string for it to work.
I'm mystified.
The Perl grammar is a bit ambiguous when it comes to blocks and anonymous hashrefs. When Perl cannot guess correctly, you can force the correct interpretation:
Hashref by +{ ... }
Codeblock by {; ... }
Forcing the block after map to be a codeblock resolves the issue. Before, Perl guessed that the block was an anonymous hash and then complained about the missing comma before shift: map can take the form map EXPR, LIST, and a hashref is a valid EXPR.
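For example, applying both disambiguations to the sub from the question makes it compile (a minimal sketch; the accessors are the same ones the question already calls):

package CGI::Cookie;

sub TO_JSON {
    # +{ ... } forces an anonymous hashref, and the ";" right after the
    # map's opening brace forces that brace to start a code block, so the
    # call is parsed as map BLOCK LIST and no comma is needed before shift.
    return +{
        map {; name    => $_->name,
              value   => $_->value,
              domain  => $_->domain,
              path    => $_->path,
              expires => $_->expires } shift
    };
}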
The sub misuses map to assign one element to $_. It would be better written:
sub TO_JSON {
    my $o = shift;    # my $_ should work as well, but that is beside the point
    return +{
        name    => $o->name,
        value   => $o->value,
        domain  => $o->domain,
        path    => $o->path,
        expires => $o->expires,
    };
}
But it could be abbreviated to
sub TO_JSON {
    my $o = shift;
    return +{
        map { $_ => $o->$_() } qw/name value domain path expires/
    };
}
I'm working on some document enhancements and example code snippets for Ruby's JSON class. I'm puzzled by this option to JSON.parse:
create_additions: If set to false, the Parser doesn't create additions even if a matching class and ::create_id was found. This option defaults to false.
Could someone please provide example code for using this?
Consider this:
require 'json'

class Range
  def to_json(*a)
    {
      'json_class' => self.class.name,
      'data'       => [ first, last, exclude_end? ]
    }.to_json(*a)
  end

  def self.json_create(o)
    new(*o['data'])
  end
end
foo = 1 .. 2
Generating JSON:
JSON.generate(foo) # => "{\"json_class\":\"Range\",\"data\":[1,2,false]}"
JSON.generate(foo, { create_additions: false }) # => "{\"json_class\":\"Range\",\"data\":[1,2,false]}"
JSON.generate(foo, { create_additions: true }) # => "{\"json_class\":\"Range\",\"data\":[1,2,false]}"
Parsing the generated JSON:
JSON.parse( JSON.generate(foo) ) # => {"json_class"=>"Range", "data"=>[1, 2, false]}
JSON.parse( JSON.generate(foo), { create_additions: false } ) # => {"json_class"=>"Range", "data"=>[1, 2, false]}
JSON.parse( JSON.generate(foo), { create_additions: true } ) # => 1..2
"2.4.3. JSON.parse and JSON.load" demonstrates a potential bug in JSON that affected create_additions. From there it was a simple thing, just some lines testing the result of toggling the state.
Why they had to close the security hole is for you to research as it involves the specification for JSON serialized data and it being a data-exchange standard, and an example in the JSON docs needs to cover that.
The example is right there in the documentation: https://ruby-doc.org/stdlib-2.6.3/libdoc/json/rdoc/JSON.html#module-JSON-label-Extended+rendering+and+loading+of+Ruby+objects.
The main difference between parse and load in this respect is that parse does not create additions by default, while load does.
Extended rendering and loading of Ruby objects
provides optional additions allowing to serialize and deserialize Ruby
classes without losing their type.
# without additions
require "json"
json = JSON.generate({range: 1..3, regex: /test/})
# => '{"range":"1..3","regex":"(?-mix:test)"}'
JSON.parse(json)
# => {"range"=>"1..3", "regex"=>"(?-mix:test)"}
# with additions
require "json/add/range"
require "json/add/regexp"
json = JSON.generate({range: 1..3, regex: /test/})
# => '{"range":{"json_class":"Range","a":[1,3,false]},"regex":{"json_class":"Regexp","o":0,"s":"test"}}'
JSON.parse(json)
# => {"range"=>{"json_class"=>"Range", "a"=>[1, 3, false]}, "regex"=>{"json_class"=>"Regexp", "o"=>0, "s"=>"test"}}
JSON.load(json)
# => {"range"=>1..3, "regex"=>/test/}
See #load for details.
JSON string input: https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=MSFT&apikey=demo
I am trying to return just the first key (current day) in the hash but have been unable to do so. My code looks like the following
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;
use Data::Dumper;
use JSON;
my $html = get("https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=AMD&apikey=CMDPTEHVYH7W5VSZ");
my $decoded = decode_json($html);
my ($open) = $decoded->{'Time Series (Daily)'}->[0]->{'1. open'};
I keep getting "Not an ARRAY reference" which I researched and got more confused.
I can access what I want directly with the below code but I want to access just the first result or the current day:
my ($open) = $decoded->{'Time Series (Daily)'}{'2017-12-20'}{'1. open'};
Also if I do something like this:
my ($open) = $decoded->{'Time Series (Daily)'};
print Dumper($open);
The output is as follows:
$VAR1 = {
'2017-09-07' => {
'1. open' => '12.8400',
'5. volume' => '35467788',
'2. high' => '12.9400',
'4. close' => '12.6300',
'3. low' => '12.6000'
},
'2017-11-15' => {
'3. low' => '10.7700',
'4. close' => '11.0700',
'2. high' => '11.1300',
'5. volume' => '33326871',
'1. open' => '11.0100'
},
'2017-11-30' => {
'1. open' => '10.8700',
'2. high' => '11.0300',
'5. volume' => '43101899',
'3. low' => '10.7600',
'4. close' => '10.8900'
},
Thank you in advance for any help you can provide a noob.
Problem 1: { denotes the start of a JSON object, which gets decoded into a hash. Trying to dereference it as an array is going to fail.
Problem 2: Like Perl hashes, JSON objects are unordered, so talking about the
"first key" makes no sense. Perhaps you want the most recent date?
use List::Util qw( maxstr );
my $time_series_daily = $decoded->{'Time Series (Daily)'};
my $latest_date = maxstr keys %$time_series_daily;
my $open = $time_series_daily->{$latest_date}{'1. open'};
You are picking among hashref keys, not array (sequential container) elements. Since hashes are inherently unordered you can't index into that list but need to sort keys as needed.
With the exact format you show this works
my $top = (sort { $b cmp $a } keys %{ $decoded->{'Time Series (Daily)'} } )[0];
say $decoded->{'Time Series (Daily)'}{$top}{'1. open'};
It gets the list of keys, inverse-sorts them (alphabetically), and takes the first element of that list.
If your date-time format may vary then you'll need to parse it for sorting.
If you really only ever want the most recent one, this is inefficient since it sorts the whole list. In that case, use a more specific tool to extract only the "largest" element, like
use List::Util qw(reduce);
my $top = reduce { $a gt $b ? $a : $b }
keys %{ $decoded->{'Time Series (Daily)'} };
But in your case this can be done simply with maxstr from the same List::Util module, as shown in ikegami's answer. On the other hand, if the datetime format doesn't lend itself to the direct lexicographical comparison used by maxstr, then reduce allows a custom comparison.
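For instance, if the keys were in a hypothetical format such as '20-12-2017' (day-month-year), which doesn't compare correctly as a plain string, the comparison could parse each key first. A sketch using the core Time::Piece module, with $decoded as in the question:

use List::Util qw(reduce);
use Time::Piece;

my $fmt = '%d-%m-%Y';    # hypothetical key format

# Compare parsed dates instead of the raw strings
my $top = reduce {
    Time::Piece->strptime( $a, $fmt ) > Time::Piece->strptime( $b, $fmt ) ? $a : $b
} keys %{ $decoded->{'Time Series (Daily)'} };

my $open = $decoded->{'Time Series (Daily)'}{$top}{'1. open'};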
I have a perl hash that is obtained from parsing JSON. The JSON could be anything a user defined API could generated. The goal is to obtain a date/time string and determine if that date/time is out of bounds according to a user defined threshold. The only issue I have is that perl seems a bit cumbersome when dealing with hash key/subkey iteration. How can I look through all the keys and determine if a key or subkey exists throughout the hash? I have read many threads throughout stackoverflow, but nothing that exactly meets my needs. I only started perl last week so I may be missing something... Let me know if that's the case.
Below is the "relevant" code/subs. For all code see: https://gitlab.com/Jedimaster0/check_http_freshness
use warnings;
use strict;
use LWP::UserAgent;
use Getopt::Std;
use JSON::Parse 'parse_json';
use JSON::Parse 'assert_valid_json';
use DateTime;
use DateTime::Format::Strptime;
# Verify the content-type of the response is JSON
eval {
assert_valid_json ($response->content);
};
if ( $@ ){
print "[ERROR] Response isn't valid JSON. Please verify source data. \n$@";
exit EXIT_UNKNOWN;
} else {
# Convert the JSON data into a perl hashrefs
$jsonDecoded = parse_json($response->content);
if ($verbose){print "[SUCCESS] JSON FOUND -> ", $response->content , "\n";}
if (defined $jsonDecoded->{$opts{K}}){
if ($verbose){print "[SUCCESS] JSON KEY FOUND -> ", $opts{K}, ": ", $jsonDecoded->{$opts{K}}, "\n";}
NAGIOS_STATUS(DATETIME_DIFFERENCE(DATETIME_LOOKUP($opts{F}, $jsonDecoded->{$opts{K}})));
} else {
print "[ERROR] Retreived JSON does not contain any data for the specified key: $opts{K}\n";
exit EXIT_UNKNOWN;
}
}
sub DATETIME_LOOKUP {
my $dateFormat = $_[0];
my $dateFromJSON = $_[1];
my $strp = DateTime::Format::Strptime->new(
pattern => $dateFormat,
time_zone => $opts{z},
on_error => sub { print "[ERROR] INVALID TIME FORMAT: $dateFormat OR TIME ZONE: $opts{z} \n$_[1] \n" ; HELP_MESSAGE(); exit EXIT_UNKNOWN; },
);
my $dt = $strp->parse_datetime($dateFromJSON);
if (defined $dt){
if ($verbose){print "[SUCCESS] Time formatted using -> $dateFormat\n", "[SUCCESS] JSON date converted -> $dt $opts{z}\n";}
return $dt;
} else {
print "[ERROR] DATE VARIABLE IS NOT DEFINED. Pattern or timezone incorrect."; exit EXIT_UNKNOWN
}
}
# Subtract JSON date/time from now and return delta
sub DATETIME_DIFFERENCE {
my $dateInitial = $_[0];
my $deltaDate;
# Convert to UTC for standardization of computations and it's just easier to read when everything matches.
$dateInitial->set_time_zone('UTC');
$deltaDate = $dateNowUTC->delta_ms($dateInitial);
if ($verbose){print "[SUCCESS] (NOW) $dateNowUTC UTC - (JSON DATE) $dateInitial ", $dateInitial->time_zone->short_name_for_datetime($dateInitial), " = ", $deltaDate->in_units($opts{u}), " $opts{u} \n";}
return $deltaDate->in_units($opts{u});
}
Sample Data
{
"localDate":"Wednesday 23rd November 2016 11:03:37 PM",
"utcDate":"Wednesday 23rd November 2016 11:03:37 PM",
"format":"l jS F Y h:i:s A",
"returnType":"json",
"timestamp":1479942217,
"timezone":"UTC",
"daylightSavingTime":false,
"url":"http:\/\/www.convert-unix-time.com?t=1479942217",
"subkey":{
"altTimestamp":1479942217,
"altSubkey":{
"thirdTimestamp":1479942217
}
}
}
[SOLVED]
I have used the answer that @HåkonHægland provided. The code changes are below. Using the flatten module, I can use any input string that matches the JSON keys. I still have some work to do, but you can see the issue is resolved. Thanks @HåkonHægland.
use warnings;
use strict;
use Data::Dumper;
use LWP::UserAgent;
use Getopt::Std;
use JSON::Parse 'parse_json';
use JSON::Parse 'assert_valid_json';
use Hash::Flatten qw(:all);
use DateTime;
use DateTime::Format::Strptime;
# Verify the content-type of the response is JSON
eval {
assert_valid_json ($response->content);
};
if ( $@ ){
print "[ERROR] Response isn't valid JSON. Please verify source data. \n$@";
exit EXIT_UNKNOWN;
} else {
# Convert the JSON data into a perl hashrefs
my $jsonDecoded = parse_json($response->content);
my $flatHash = flatten($jsonDecoded);
if ($verbose){print "[SUCCESS] JSON FOUND -> ", Dumper($flatHash), "\n";}
if (defined $flatHash->{$opts{K}}){
if ($verbose){print "[SUCCESS] JSON KEY FOUND -> ", $opts{K}, ": ", $flatHash>{$opts{K}}, "\n";}
NAGIOS_STATUS(DATETIME_DIFFERENCE(DATETIME_LOOKUP($opts{F}, $flatHash->{$opts{K}})));
} else {
print "[ERROR] Retreived JSON does not contain any data for the specified key: $opts{K}\n";
exit EXIT_UNKNOWN;
}
}
Example:
./check_http_freshness.pl -U http://bastion.mimir-tech.org/json.html -K result.creation_date -v
[SUCCESS] JSON FOUND -> $VAR1 = {
'timestamp' => '20161122T200649',
'result.data_version' => 'data_20161122T200649_data_news_topics',
'result.source_version' => 'kg_release_20160509_r33',
'result.seed_version' => 'seed_20161016',
'success' => 1,
'result.creation_date' => '20161122T200649',
'result.data_id' => 'data_news_topics',
'result.data_tgz_name' => 'data_news_topics_20161122T200649.tgz',
'result.source_data_version' => 'seed_vtv: data_20161016T102932_seed_vtv',
'result.data_digest' => '6b5bf1c2202d6f3983d62c275f689d51'
};
Odd number of elements in anonymous hash at ./check_http_freshness.pl line 78, <DATA> line 1.
[SUCCESS] JSON KEY FOUND -> result.creation_date:
[SUCCESS] Time formatted using -> %Y%m%dT%H%M%S
[SUCCESS] JSON date converted -> 2016-11-22T20:06:49 UTC
[SUCCESS] (NOW) 2016-11-26T19:02:15 UTC - (JSON DATE) 2016-11-22T20:06:49 UTC = 94 hours
[CRITICAL] Delta hours (94) is >= (24) hours. Data is stale.
You could try using Hash::Flatten. For example:
use Hash::Flatten qw(flatten);
my $json_decoded = parse_json($json_str);
my $flat = flatten( $json_decoded );
say "found" if grep /(?:^|\.)\Q$key\E(?:\.?|$)/, keys %$flat;
You can use Data::Visitor::Callback to traverse the data structure. It lets you define callbacks for different kinds of data types inside your structure. Since we're only looking at a hash it's relatively simple.
The following program has a predefined list of keys to find (those would be user input in your case). I converted your example JSON to a Perl hashref and included it in the code because the conversion is not relevant. The program visits every hashref in this data structure (including the top level) and runs the callback.
Callbacks in Perl are code references. These can be created in two ways. We're doing the anonymous subroutine (sometimes called lambda function in other languages). The callback gets passed two arguments: the visitor object and the current data substructure.
We'll iterate all the keys we want to find and simply check if they exist in the current data structure. If we see one, we count its existence in the %seen hash. Using a hash to store things we have seen is a common idiom in Perl.
We're using a postfix if here, which is convenient and easy to read. %seen is a hash, so we access the value behind the $key with $seen{$key}, while $data is a hash reference, so we use the dereferencing operator -> to access the value behind $key with $data->{$key}.
The callback needs us to return the $data again so it continues. The last line is just there, it's not important.
I've used Data::Printer to output the %seen hash because it's convenient. You can also use Data::Dumper if you want. In production, you will not need that.
use strict;
use warnings;
use Data::Printer;
use Data::Visitor::Callback;
my $from_json = {
"localDate" => "Wednesday 23rd November 2016 11:03:37 PM",
"utcDate" => "Wednesday 23rd November 2016 11:03:37 PM",
"format" => "l jS F Y h:i:s A",
"returnType" => "json",
"timestamp" => 1479942217,
"timezone" => "UTC",
"daylightSavingTime" =>
0, # this was false, I used 0 because that's a non-true value
"url" => "http:\/\/www.convert-unix-time.com?t=1479942217",
"subkey" => {
"altTimestamp" => 1479942217,
"altSubkey" => {
"thirdTimestamp" => 1479942217
}
}
};
my @keys_to_find = qw(timestamp altTimestamp thirdTimestamp missingTimestamp);
my %seen;
my $visitor = Data::Visitor::Callback->new(
hash => sub {
my ( $visitor, $data ) = @_;
foreach my $key (@keys_to_find) {
$seen{$key}++ if exists $data->{$key};
}
return $data;
},
);
$visitor->visit($from_json);
p %seen;
The program outputs the following. Note this is not a Perl data structure. Data::Printer is not a serializer, it's a tool to make data human readable in a convenient way.
{
altTimestamp 1,
thirdTimestamp 1,
timestamp 1
}
Since you also wanted to constrain the input, here's an example of how to do that. The following program is a modification of the one above. It lets you give a set of different constraints for every required key.
I've done that by using a dispatch table. Essentially, that's a hash that contains code references. Kind of like the callbacks we use for the Visitor.
The constraints I've included are doing some things with dates. An easy way to work with dates in Perl is the core module Time::Piece. There are lots of questions around here about various date things where Time::Piece is the answer.
I've only done one constraint per key, but you could easily include several checks in those code refs, or make a list of code refs and put them in an array ref (key => [ sub {...}, sub {...} ]) and then iterate that later; a sketch of that variation follows at the end of this answer.
In the visitor callback we are now also keeping track of the keys that have passed the constraints check, in %passed. We're calling the coderef with $coderef->($arg). If a constraint check returns a true value, it gets noted in that hash.
use strict;
use warnings;
use Data::Printer;
use Data::Visitor::Callback;
use Time::Piece;
use Time::Seconds; # for ONE_DAY
my $from_json = { ... }; # same as above
# prepare one of the constraints
# where I'm from, Christmas eve is considered Christmas
my $christmas = Time::Piece->strptime('24 Dec 2016', '%d %b %Y');
# set up the constraints per required key
my %constraints = (
timestamp => sub {
my ($epoch) = @_;
# not older than one day
return $epoch < time && $epoch > time - ONE_DAY;
},
altTimestamp => sub {
my ($epoch) = @_;
# epoch value should be an even number
return !( $epoch % 2 );
},
thirdTimestamp => sub {
my ($epoch) = @_;
# before Christmas 2016
return $epoch < $christmas;
},
);
my %seen;
my %passed;
my $visitor = Data::Visitor::Callback->new(
hash => sub {
my ( $visitor, $data ) = @_;
foreach my $key ( keys %constraints ) {
if ( exists $data->{$key} ) {
$seen{$key}++;
$passed{$key}++ if $constraints{$key}->( $data->{$key} );
}
}
return $data;
},
);
$visitor->visit($from_json);
p %passed;
The output this time is:
{
thirdTimestamp 1,
timestamp 1
}
If you want to learn more about the dispatch tables, take a look at chapter two of the book Higher Order Perl by Mark Jason Dominus which is legally available for free here.
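As mentioned above, a key could also map to an array ref of code refs when a value has to pass more than one check. A small standalone sketch of that variation (the key name and the checks are only illustrative):

use strict;
use warnings;
use List::Util qw(all);

# Dispatch table: each key maps to an array ref of checks instead of one code ref
my %constraints = (
    timestamp => [
        sub { $_[0] =~ /^\d+$/ },    # looks like an epoch value
        sub { $_[0] < time },        # and is not in the future
    ],
);

# One substructure, as the visitor callback would see it
my $data = { timestamp => 1479942217 };

my ( %seen, %passed );
for my $key ( keys %constraints ) {
    next unless exists $data->{$key};
    $seen{$key}++;
    $passed{$key}++ if all { $_->( $data->{$key} ) } @{ $constraints{$key} };
}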
I'm using koajs with bunyan to save error logs to my server then I use filebeat to have them shipped to my logstash application.
My error logs are being forwarded correctly; however, I would now like to create a filter that will add a tag to specific logs.
{"name":"myapp","hostname":"sensu-node-dev","pid":227,"level":50,"err":{"message":"Cannot find module 'lol'","name":"Error","stack":"Error: Cannot find module 'lol'\n at Function.Module._resolveFilename (module.js:339:15)\n at Function.Module._load (module.js:290:25)\n at Module.require (module.js:367:17)\n at require (internal/module.js:16:19)\n at Object.<anonymous> (/srv/www/dev.site/app.js:27:6)\n at next (native)\n at Object.<anonymous> (/srv/www/dev.site/node_modules/koa-compose/index.js:29:5)\n at next (native)\n at onFulfilled (/srv/www/dev.site/node_modules/co/index.js:65:19)\n at /srv/www/dev.site/node_modules/co/index.js:54:5","code":"MODULE_NOT_FOUND"},"msg":"Cannot find module 'lol'","time":"2016-02-24T22:04:26.492Z","v":0}
Now the interesting part in that specific log is "err":{...} and the "name":"Error" bits. For simplicity reasons I would just like to create a filter which detects "name":"Error" in the log (if it exists) and then apply a tag add_tag => ["error"] to the log.
Here is my /etc/logstash/conf.d/logstash.conf file:
input {
beats {
port => 5044
type => "logs"
}
}
filter {
grok {
type => "log"
pattern => "???" // <--- have no idea what to do here
add_tag => ["error"]
}
}
output {
elasticsearch {
hosts => "localhost:9200"
sniffing => true
manage_template => false
index => "%{[#metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[#metadata][type]}"
}
http {
http_method => "post"
url => "<MY_URL>"
format => "message"
message => "{"text":"dis is workinz, you has error"}"
tags => ["error"]
}
}
I tried the following:
pattern => ""name":"Error""
But got the following error:
Error: Expected one of #, {, } at line 9, column 31 (byte 107) after filter {
grok {
match => { "message" => ""
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.
There is no simple example of this specific type of matching anywhere.
Bonus: Also how does one escape in logstash, I couldn't find anything on the subject?
If you only want to see if a string exists in your message, try this:
if [message] =~ /"name":"Error"/ {
  mutate {
    add_tag => [ "error" ]
  }
}
If you really want to grok the input into fields, check out the json codec or filter instead.
Logstash is awesome. I can send it JSON like this (multi-lined for readability):
{
"a": "one"
"b": {
"alpha":"awesome"
}
}
And then query for that line in kibana using the search term b.alpha:awesome. Nice.
However I now have a JSON log line like this:
{
"different":[
{
"this": "one",
"that": "uno"
},
{
"this": "two"
}
]
}
And I'd like to be able to find this line with a search like different.this:two (or different.this:one, or different.that:uno)
If I was using Lucene directly I'd iterate through the different array, and generate a new search index for each hash within it, but Logstash currently seems to ingest that line like this:
different: {this: one, that: uno}, {this: two}
Which isn't going to help me searching for log lines using different.this or different.that.
Anyone got any thoughts as to a codec, filter or code change I can make to enable this?
You can write your own filter (copy & paste, rename the class name, the config_name and rewrite the filter(event) method) or modify the current JSON filter (source on Github)
You can find the JSON filter (Ruby class) source code in the path logstash-1.x.x\lib\logstash\filters, named json.rb. The JSON filter parses the content as JSON as follows:
begin
# TODO(sissel): Note, this will not successfully handle json lists
# like your text is '[ 1,2,3 ]' JSON.parse gives you an array (correctly)
# which won't merge into a hash. If someone needs this, we can fix it
# later.
dest.merge!(JSON.parse(source))
# If no target, we target the root of the event object. This can allow
# you to overwrite @timestamp. If so, let's parse it as a timestamp!
if !@target && event[TIMESTAMP].is_a?(String)
# This is a hack to help folks who are mucking with @timestamp during
# their json filter. You aren't supposed to do anything with
# "@timestamp" outside of the date filter, but nobody listens... ;)
event[TIMESTAMP] = Time.parse(event[TIMESTAMP]).utc
end
filter_matched(event)
rescue => e
event.tag("_jsonparsefailure")
#logger.warn("Trouble parsing json", :source => #source,
:raw => event[#source], :exception => e)
return
end
You can modify the parsing procedure to modify the original JSON
json = JSON.parse(source)
if json.is_a?(Hash)
json.each do |key, value|
if value.is_a?(Array)
value.each_with_index do |object, index|
#modify as you need
object["index"]=index
end
end
end
end
#save modified json
......
dest.merge!(json)
Then you can modify your config file to use the new/modified JSON filter and place it in \logstash-1.x.x\lib\logstash\config.
This is my elastic_with_json.conf with the modified json.rb filter:
input {
  stdin {}
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    host => localhost
  }
  stdout {}
}
If you want to use your new filter, you can configure it with its config_name:
class LogStash::Filters::Json_index < LogStash::Filters::Base
config_name "json_index"
milestone 2
....
end
and configure it
input {
  stdin {}
}
filter {
  json_index {
    source => "message"
  }
}
output {
  elasticsearch {
    host => localhost
  }
  stdout {}
}
Hope this helps.
For a quick and dirty hack, I used the Ruby filter and the code below; there is no need to use the out-of-the-box 'json' filter anymore.
input {
stdin{}
}
filter {
grok {
match => ["message","(?<json_raw>.*)"]
}
ruby {
init => "
def parse_json obj, pname=nil, event
obj = JSON.parse(obj) unless obj.is_a? Hash
obj = obj.to_hash unless obj.is_a? Hash
obj.each {|k,v|
p = pname.nil?? k : pname
if v.is_a? Array
v.each_with_index {|oo,ii|
parse_json_array(oo,ii,p,event)
}
elsif v.is_a? Hash
parse_json(v,p,event)
else
p = pname.nil?? k : [pname,k].join('.')
event[p] = v
end
}
end
def parse_json_array obj, i,pname, event
obj = JSON.parse(obj) unless obj.is_a? Hash
pname_ = pname
if obj.is_a? Hash
obj.each {|k,v|
p=[pname_,i,k].join('.')
if v.is_a? Array
v.each_with_index {|oo,ii|
parse_json_array(oo,ii,p,event)
}
elsif v.is_a? Hash
parse_json(v,p, event)
else
event[p] = v
end
}
else
n = [pname_, i].join('.')
event[n] = obj
end
end
"
code => "parse_json(event['json_raw'].to_s,nil,event) if event['json_raw'].to_s.include? ':'"
}
}
output {
stdout{codec => rubydebug}
}
Test json structure
{"id":123, "members":[{"i":1, "arr":[{"ii":11},{"ii":22}]},{"i":2}], "im_json":{"id":234, "members":[{"i":3},{"i":4}]}}
and this is what's output:
{
"message" => "{\"id\":123, \"members\":[{\"i\":1, \"arr\":[{\"ii\":11},{\"ii\":22}]},{\"i\":2}], \"im_json\":{\"id\":234, \"members\":[{\"i\":3},{\"i\":4}]}}",
"#version" => "1",
"#timestamp" => "2014-07-25T00:06:00.814Z",
"host" => "Leis-MacBook-Pro.local",
"json_raw" => "{\"id\":123, \"members\":[{\"i\":1, \"arr\":[{\"ii\":11},{\"ii\":22}]},{\"i\":2}], \"im_json\":{\"id\":234, \"members\":[{\"i\":3},{\"i\":4}]}}",
"id" => 123,
"members.0.i" => 1,
"members.0.arr.0.ii" => 11,
"members.0.arr.1.ii" => 22,
"members.1.i" => 2,
"im_json" => 234,
"im_json.0.i" => 3,
"im_json.1.i" => 4
}
The solution I liked is the Ruby filter because it doesn't require us to write another filter. However, that solution creates fields at the "root" of the JSON and it's hard to keep track of how the original document looked.
I came up with something similar that's easier to follow and is a recursive solution so it's cleaner.
ruby {
init => "
def arrays_to_hash(h)
h.each do |k,v|
# If v is nil, an array is being iterated and the value is k.
# If v is not nil, a hash is being iterated and the value is v.
value = v || k
if value.is_a?(Array)
# "value" is replaced with "value_hash" later.
value_hash = {}
value.each_with_index do |v, i|
value_hash[i.to_s] = v
end
h[k] = value_hash
end
if value.is_a?(Hash) || value.is_a?(Array)
arrays_to_hash(value)
end
end
end
"
code => "arrays_to_hash(event.to_hash)"
}
It converts arrays to hashes, with each index number as the key. More details: http://blog.abhijeetr.com/2016/11/logstashelasticsearch-best-way-to.html