Logstash dynamically split events - configuration

Is there a way to split a Logstash (1.4.2) event into multiple other events?
My input looks like this:
{ "parts" => ["one", "two"],
"timestamp" => "2014-09-27T12:29:17.601Z"
"one.key=> "1", "one.value"=>"foo",
"two.key" => "2", "two.value"=>"bar"
}
And I'd like to create two events with the following content:
{ "key" => "1", "value" => "foo", "timestamp" => "2014-09-27T12:29:17.601Z" }
{ "key" => "2", "value" => "bar", "timestamp" => "2014-09-27T12:29:17.601Z" }
The problem is that I can't know the actual contents of "parts" in advance...
Thanks for your help :)

Updating a very old answer, because there is a better way to do this in newer versions of Logstash without resorting to a custom filter.
You can do this using a ruby filter and a split filter:
filter {
  ruby {
    code => '
      arrayOfEvents = Array.new()
      parts = event.get("parts")
      timestamp = event.get("timestamp")
      # collect one hash per part, then strip the flattened fields
      parts.each { |part|
        arrayOfEvents.push({
          "key" => event.get("#{part}.key"),
          "value" => event.get("#{part}.value"),
          "timestamp" => timestamp
        })
        event.remove("#{part}.key")
        event.remove("#{part}.value")
      }
      puts arrayOfEvents # debug output; safe to remove
      event.remove("parts")
      event.set("event", arrayOfEvents)
    '
  }
  # split clones the event once per element of the "event" array
  split {
    field => 'event'
  }
  # lift the nested fields back to the top level
  mutate {
    rename => {
      "[event][key]" => "key"
      "[event][value]" => "value"
      "[event][timestamp]" => "timestamp"
    }
    remove_field => ["event"]
  }
}
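To try this out quickly, one option (a sketch; the stdin input and rubydebug output here are just for testing) is to wrap the filter block above in a minimal pipeline:
input {
  stdin { codec => "json" }
}
# ... the filter block from above ...
output {
  stdout { codec => "rubydebug" }
}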
My original answer was:
You need to resort to a custom filter for this (you can't call yield from a ruby code filter, which is what's needed to generate new events).
Something like this (dropped into lib/logstash/filters/custom_split.rb):
# encoding: utf-8
require "logstash/filters/base"
require "logstash/namespace"

# custom code to break up an event into multiple
class LogStash::Filters::CustomSplit < LogStash::Filters::Base
  config_name "custom_split"
  milestone 1

  public
  def register
    # Nothing
  end # def register

  public
  def filter(event)
    return unless filter?(event)
    if event["parts"].is_a?(Array)
      event["parts"].each do |key|
        e = LogStash::Event.new("timestamp" => event["timestamp"],
                                "key" => event["#{key}.key"],
                                "value" => event["#{key}.value"])
        yield e
      end
      event.cancel
    end
  end
end
And then just put filter { custom_split {} } into your config file.

For future reference, and based on #alcanzar's answer, it is now possible to do things like this:
ruby {
  code => "
    # somefield is an array
    array = event.get('somefield')
    # drop the current event (this was my use case, I didn't need the feeding event)
    event.cancel
    # iterate over it to construct new events
    array.each { |a|
      # create a new logstash event
      generated = LogStash::Event.new({ 'foo' => 'something' })
      # put the event in the pipeline queue
      new_event_block.call(generated)
    }
  "
}
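Applied to the original question, a sketch along the same lines (untested, and assuming the same "parts"/"one.key" field layout as above) might look like:
ruby {
  code => "
    parts = event.get('parts')
    timestamp = event.get('timestamp')
    # cancel the original event; its fields stay readable below
    event.cancel
    parts.each { |part|
      generated = LogStash::Event.new(
        'key' => event.get(part + '.key'),
        'value' => event.get(part + '.value'),
        'timestamp' => timestamp
      )
      new_event_block.call(generated)
    }
  "
}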

Related

DBIx to JSON - Wrong format

In a Catalyst application, I need to generate JSON from DBIx::Class::Core objects.
Such a class definition looks like this:
use utf8;
package My::Schema::Book;

use strict;
use warnings;

use Moose;
use MooseX::NonMoose;
use MooseX::MarkAsMethods autoclean => 1;
extends 'DBIx::Class::Core';

__PACKAGE__->load_components("InflateColumn::DateTime");
__PACKAGE__->table("books");
__PACKAGE__->add_columns(
    "id",
    {
        data_type     => "uuid",
        default_value => \"uuid_generate_v4()",
        is_nullable   => 0,
        size          => 16,
    },
    "title",
);
__PACKAGE__->set_primary_key("id");
__PACKAGE__->meta->make_immutable;

sub TO_JSON {
    my $self = shift;
    {
        book => {
            id    => $self->id,
            title => $self->title,
        }
    };
}

1;
After querying the books from the database, I do the encoding of the blessed objects:
$c->stash(books_rs => $c->model('My::Schema::Book'));
$c->stash(books => [
    $c->stash->{books_rs}->search(
        {},
        { order_by => 'title ASC' },
    )
]);
$c->stash(json => $json->convert_blessed->encode($c->stash->{books}));
$c->forward('View::JSON');
The JSON output of the query is this:
{"json":"[{\"book\":{\"id\":\"ae355346-8e19-46ee-88ee-773ac30938a9\",\"title\":\"TITLE1\"}},{\"book\":{\"id\":\"9a20f526-d4cd-4e7d-a726-55e78bc3c0ac\",\"title\":\"TITLE2\"}},{\"book\":{\"title\":\"TITLE3\",\"id\":\"1ddb2d27-3ec6-46c1-a1a7-0b151fe44597\"}}]"}
The value of the json key (and each particular book inside it) ends up as an escaped, double-quoted string, which cannot be parsed by jQuery; it complains about a format exception.
$json->convert_blessed->encode($c->stash->{books}) returns a string, and it looks like View::JSON then encodes that string a second time.
Try passing your data as is: $c->stash(json => $c->stash->{books});. You may also need to configure expose_stash and json_encoder_args so the view serializes the right keys from your stash and correctly converts your blessed objects.
See
https://metacpan.org/pod/Catalyst::View::JSON#CONFIG-VARIABLES
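A sketch of what that configuration might look like (the key names are from the Catalyst::View::JSON docs; adjust to your application, untested here):
# In the application class; a sketch, not tested against this app
__PACKAGE__->config(
    'View::JSON' => {
        # serialize only the 'json' stash key instead of the whole stash
        expose_stash      => 'json',
        # have the encoder call TO_JSON on blessed row objects
        json_encoder_args => { convert_blessed => 1 },
    },
);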

Manipulating and Outputting JSON in Ruby

I have a JSON return (is it a hash? array? JS object?) where every entry is information on a person and follows this format:
{"type"=>"PersonSummary",
"id"=>"123", "properties"=>{"permalink"=>"personname",
"api_path"=>"people/personname"}}
I would like to go through every entry and output only the "id".
I've put the entire JSON pull "response" into "result":
result = JSON.parse(response)
Then I'd like to go through result and print the "id" and "api_path" of each person:
result.each do |print id AND api_path|
How do I go about doing this in Ruby?
The only time you would need to use JSON.parse is if you have a string you need to parse into a Hash. For example:
result = JSON.parse('{ "type" : "PersonSummary", "id" : 123, "properties" : { "permalink" : "personname", "api_path" : "people/personname" } }')
Once you have the Hash, result can be accessed by key. Note that JSON.parse returns string keys by default, so use result['id'] (pass symbolize_names: true to JSON.parse if you'd rather write result[:id]). You can also iterate through the hash using the following code.
If you need to access the api_path value, you would do so with result['properties']['api_path']:
result = { 'type' => 'PersonSummary', 'id' => 123, 'properties' => { 'permalink' => 'personname', 'api_path' => 'people/personname' } }
result.each do |key, value|
  puts "Key: #{key}\t\tValue: #{value}"
end
You could even do something like puts value if key == 'id' if you just want to show certain values.
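To answer the literal question: assuming response is a JSON string containing an array of those person entries, a short sketch would be:
require 'json'

result = JSON.parse(response)
result.each do |person|
  puts person['id']
  puts person['properties']['api_path']
end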

Jira API | Error: "Operation value must be a string" - trying to set value nested two levels deep

I'm trying to create a new Jira ticket with a specific requestType, but it is nested two levels deep. I've tried a few possible alterations, but no luck. Here's the code I have:
require 'jira-ruby' # https://github.com/sumoheavy/jira-ruby
options = {
:username => jira_username,
:password => jira_password,
:site => 'https://jiraurl/rest/api/2/',
:context_path => '',
:auth_type => :basic,
:read_timeout => 120
}
client = JIRA::Client.new(options)
issue = client.Issue.build
fields_options = {
"fields" =>
{
"summary" => "Test ticket creation",
"description" => "Ticket created from Ruby",
"project" => {"key" => "AwesomeProject"},
"issuetype" => {"name" => "Task"},
"priority" => {"name" => "P1"},
"customfield_23070" =>
{
"requestType" => {
"name" => "Awesome Request Type"
}
}
}
}
issue.save(fields_options)
"errors"=>{"customfield_23070"=>"Operation value must be a string"}
I also tried passing a JSON-style object to customfield_23070:
"customfield_23070": { "requestType": { "name": "Awesome Request Type" } }
Still no luck; I get the same error message.
If it helps, this is what customfield_23070 looks like in our Jira.
Does anyone know how to set requestType in this case, please? Any help is greatly appreciated!!
It seems that for custom fields with specific data types (string/number), you must pass the value as:
"customfield_1111": 1
or:
"customfield_1111": "string"
instead of:
"customfield_1111":{ "value": 1 }
or:
"customfield_1111":{ "value": "string" }
I'm not sure, but you can try these possible examples:
eg. 1:
"customfield_23070" => { "name" => "requestType", "value" => "Awesome Request Type" }
eg. 2:
"customfield_23070" => { "requestType" => "Awesome Request Type" }
eg. 3:
"customfield_23070" => { "value" => "Awesome Request Type" }
eg. 4:
"customfield_23070" => { "name" => "Awesome Request Type" }
For reference, there are two methods, depending upon the fields you are interacting with. Have a look here 'updating-an-issue-via-the-jira-rest-apis-6848604' for the fields that can be updated via verb operations; for the other fields you can use the examples above. You can use both methods within the same call:
{
  "update": { "description": [{ "set": "Description by API Update - lets do this thing" }] },
  "fields": { "customfield_23310": "TESTING0909" }
}
OK, I think I found how to do it.
You need to provide a string, and that string is the GUID of the RequestType.
In order to get that GUID, you need to run the following in a ScriptRunner console:
import com.atlassian.jira.component.ComponentAccessor
def issue = ComponentAccessor.issueManager.getIssueByCurrentKey("ISSUE-400546") //Issue with the desired Request Type
def cf = ComponentAccessor.customFieldManager.getCustomFieldObjectByName("Tipo de solicitud del cliente") //Change it to the name of your request type field
issue.getCustomFieldValue(cf)
Source: https://community.atlassian.com/t5/Jira-Software-questions/how-to-set-request-type-value-in-while-create-jira-issue/qaq-p/1106696
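Putting that together with the snippet from the question, the custom field is then set to that string directly (the GUID below is a made-up placeholder; use whatever the console above prints for your instance):
fields_options = {
  "fields" => {
    "summary"     => "Test ticket creation",
    "description" => "Ticket created from Ruby",
    "project"     => { "key" => "AwesomeProject" },
    "issuetype"   => { "name" => "Task" },
    "priority"    => { "name" => "P1" },
    # plain string, not a nested hash; placeholder GUID
    "customfield_23070" => "00000000-0000-0000-0000-000000000000"
  }
}
issue.save(fields_options)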

Altering and updating a JSON file

I wrote a small script which loops over my current JSON file, but I can't manage to append a key and value.
This is my file:
[
  {
    "URL": "https://p59-caldav.icloud.com"
  }
]
I would like to append a key and value, like this:
[
  {
    "URL": "https://p59-caldav.icloud.com",
    "test": "test"
  }
]
My current script setup:
#!/usr/bin/env ruby
require 'rubygems'
require 'json'
require 'domainatrix'

jsonHash = File.read("/Users/ReffasCode/Desktop/sampleData.json")
array = JSON.parse(jsonHash)

File.open("/Users/BilalReffas/Desktop/sampleData.json", "w") do |f|
  array.each do |child|
    url = Domainatrix.parse(child["URL"])
    json = {
      "url"      => child["URL"],
      "provider" => url.domain,
      "service"  => url.subdomain,
      "context"  => url.path,
      "suffix"   => url.public_suffix
    }
    f.write(JSON.pretty_generate(json))
  end
end
The script overwrite my whole jsonfile...this is not what I want :/
This is untested but looks about right:
require 'json'
require 'domainatrix'

ORIGINAL_JSON = 'sampleData.json'
NEW_JSON      = ORIGINAL_JSON + '.new'
OLD_JSON      = ORIGINAL_JSON + '.old'

json = JSON.parse(File.read(ORIGINAL_JSON))

ary = json.map do |child|
  url = Domainatrix.parse(child['URL'])
  {
    'url'      => child['URL'],
    'provider' => url.domain,
    'service'  => url.subdomain,
    'context'  => url.path,
    'suffix'   => url.public_suffix
  }
end

File.write(NEW_JSON, ary.to_json)

# only after the new file is safely written do we touch the original
File.rename(ORIGINAL_JSON, OLD_JSON)
File.rename(NEW_JSON, ORIGINAL_JSON)
File.delete(OLD_JSON)
It's important not to overwrite the original file until all processing has occurred, hence writing to a new file, closing the new and old files, renaming the old one to something safe, renaming the new one to the original name, and only then removing the old copy. If you don't follow a process like that, you run the risk of corrupting or losing your original data if the code or machine crashes mid-stream.
See "How to search file text for a pattern and replace it with a given value" for more information.
Probably the simplest way to use f.write would be to use it to replace the entire contents of the file. With that in mind, let's see if we can compose what we want the entire contents of the file to be, in memory, and then write it.
#!/usr/bin/env ruby
require 'rubygems'
require 'json'
require 'domainatrix'

write_array = []
jsonHash = File.read("/Users/ReffasCode/Desktop/sampleData.json")
read_array = JSON.parse(jsonHash)

read_array.each do |child|
  url = Domainatrix.parse(child["URL"])
  write_array << {
    "url"      => child["URL"],
    "provider" => url.domain,
    "service"  => url.subdomain,
    "context"  => url.path,
    "suffix"   => url.public_suffix
  }
end

File.open("/Users/BilalReffas/Desktop/sampleData.json", "w") do |f|
  f.write(JSON.pretty_generate(write_array))
end
Note some changes:
- Fixed indentation :)
- Removed some nesting
- Write once, with the entire contents of the file. Unless you have a really big file or a really important need for pipelined I/O, this is probably the simplest thing to do.
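As an aside, if the goal is literally just adding the "test": "test" pair from the question, a minimal sketch (same file path as the question, untested) could be:
require 'json'

path = "/Users/ReffasCode/Desktop/sampleData.json"
array = JSON.parse(File.read(path))

# add the new key/value pair to every object in the array
array.each { |child| child["test"] = "test" }

# rewrite the file with the complete, updated contents
File.write(path, JSON.pretty_generate(array))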

return json structure using perl

I have the following SQL statement inside a function:
my $sth = $dbh->prepare(qq[SELECT device_uuid,device_name FROM ].DB_SCHEMA().qq[.user_device WHERE user_id = ?]);
$sth->execute($user_id) || die $dbh->errstr;
The results are being fetched using the following statement:
while(my $data = $sth->fetchrow_arrayref()) {
}
My question is: how can I create and return a JSON structure containing objects for every row being fetched? Something like this:
{
  object1: {
    "device_uuid1": "id1",
    "device_name1": "name1"
  },
  object2: {
    "device_uuid2": "id2",
    "device_name2": "name2"
  },
  object3: {
    "device_uuid3": "id3",
    "device_name3": "name3"
  }
}
The total number of JSON objects will be equal to the number of rows returned by the SQL statement.
I have managed to build the structure like this:
$VAR1 = [{"device_name":"device1","device_id":"device_id1"},{"device_name":"device2","device_id":"device_id2"}]
How can I iterate through the array ref and get the "device_name" and "device_id" values?
For your needs, the JSON module should work well. What you need to do is have a scalar variable defined as below, and push an element into it for each iteration of the while loop:
my $json = JSON->new->utf8->space_after->encode({});
while (my $data = $sth->fetchrow_arrayref()) {
    # push a new element into $json here using the incr_parse method
    # or using $json_text = $json->encode($perl_scalar)
}
Hope this helps you.
Finally, what I did was to create an array and push in the fetched rows, which are returned as hash refs:
my @device = ();
while (my $data = $sth->fetchrow_hashref()) {
    push(@device, $data);
}
Last, I take a reference to the @device array, convert it to JSON, and return the outcome:
return encode_json(\@device);
The statement handle method fetchall_arrayref() can return an array reference where each element in the referenced array is a hash reference containing details of one row in the resultset. This seems to me to be exactly the data structure that you want. So you can just call that method and pass the returned data structure to a JSON encoding function.
# Passing a hash ref to fetchall_arrayref() tells it to
# return each row as a hash reference.
my $json = encode_json($sth->fetchall_arrayref({}));
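In context, the whole function body could then shrink to something like this (a sketch, reusing the handles from the question):
my $sth = $dbh->prepare(qq[SELECT device_uuid, device_name FROM ] . DB_SCHEMA() . qq[.user_device WHERE user_id = ?]);
$sth->execute($user_id) || die $dbh->errstr;
# one hash ref per row, encoded as a JSON array of objects
return encode_json($sth->fetchall_arrayref({}));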
Your sample JSON is incorrect - JSON is actually quite nicely represented by Perl data structures: [] denotes an array, {} denotes key-value pairs (very similar to a hash).
I would rather strongly suggest, though, that what you've asked for is probably not what you want - you've seemingly gone for globally unique keys, which isn't good style when they're nested.
Why? Well, so you can do things like this:
print $my_data{$_}->{'name'} for keys %my_data;
Far better to go for something like:
#!/usr/bin/env perl
use strict;
use warnings;
use Data::Dumper;
use JSON;
my %my_data = (
object1 => {
uuid => "id1",
name => "name1"
},
object2 => {
uuid => "id2",
name => "name2"
},
object3 => {
uuid => "id3",
name => "name3"
},
);
print Dumper \%my_data;
print to_json ( \%my_data, { 'pretty' => 1 } )."\n";
Now, that does assume your 'object1' is a unique key - if it isn't, you can instead do something like this - an array of anonymous hashes (for bonus points, it preserves ordering):
my @my_data = (
    {
        object1 => {
            uuid => "id1",
            name => "name1"
        }
    },
    {
        object2 => {
            uuid => "id2",
            name => "name2"
        }
    },
    {
        object3 => {
            uuid => "id3",
            name => "name3"
        }
    },
);
Now, how to take your example and extend it? Easy peasy really - assemble what you want to add to your structure in your loop, and insert it into the structure:
while (my $data = $sth->fetchrow_arrayref()) {
    my $objectname = $data->[0]; # assuming it's this element!
    my $uuid       = $data->[1];
    my $name       = $data->[2];
    my $new_hash   = { uuid => $uuid, name => $name };
    $my_data{$objectname} = $new_hash;
}
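Once the loop has run, encoding is the same as in the %my_data example above:
# assumes %my_data was declared with `my %my_data;` before the loop
print to_json( \%my_data, { 'pretty' => 1 } ) . "\n";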