Logstash - create terms from relational database group by - mysql

I have a table in MySQL that I want to import into Elasticsearch.
As an example, the data looks like this:
team   buyer
====   ======
one    Q76876
one    Q66567
one    T99898
two    Q45456
two    S77676
I want to import this into Elasticsearch using Logstash and create an index that looks like this:
{
  "id": "one",
  "team": "one",
  "buyers": ["Q76876", "Q66567", "T99898"]
},
{
  "id": "two",
  "team": "two",
  "buyers": ["Q45456", "S77676"]
}
How would I write my .conf script to achieve this?

Logstash puts events into the index as they arrive unless you apply some filter, and your case looks pretty straightforward. If you format your SQL query to return the data in the shape you require, you don't need to apply any filter at all: just hook up the database and the SQL query in the Logstash config and point the output at an Elasticsearch index.
For example:
The MySQL query would look something like this (I'm not great at MySQL, the query below is just to give an idea; please verify it works):
SELECT team AS id,
       team,
       GROUP_CONCAT(DISTINCT buyer SEPARATOR ', ') AS buyers
FROM tablename
GROUP BY team
This would return something like:
+------+------+------------------------+
| id   | team | buyers                 |
+------+------+------------------------+
| one  | one  | Q76876, Q66567, T99898 |
| two  | two  | Q45456, S77676         |
+------+------+------------------------+
And the Logstash config would simply look like:
input {
  jdbc {
    jdbc_driver_library => "${DATABASE_DRIVER_PATH}"
    jdbc_driver_class => "${DATABASE_DRIVER_CLASS}"
    jdbc_connection_string => "${CONNECTIONSTRING}"
    jdbc_user => "${DATABASE_USERNAME}"
    jdbc_password => "${DATABASE_PASSWORD}"
    statement_filepath => "${LOGSTASH_SQL_FILEPATH}" # this will be the SQL written above
  }
}
filter {
}
output {
  elasticsearch {
    action => "index"
    hosts => ["${ELASTICSEARCH_HOST}"]
    user => "${ELASTICSEARCH_USER}"
    password => "${ELASTICSEARCH_PASSWORD}"
    index => "${INDEX_NAME}"
    document_type => "doc"
    document_id => "%{id}"
  }
  stdout { codec => rubydebug }
  stdout { codec => dots }
}
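One caveat: GROUP_CONCAT returns buyers as a single comma-separated string, not as an array. If you want the buyers field indexed as a real array, as in your desired output, a mutate split filter in the (currently empty) filter block should do it. This is only a sketch, assuming the field arrives exactly as produced by the SQL above:
filter {
  mutate {
    # turn "Q76876, Q66567, T99898" into ["Q76876", "Q66567", "T99898"]
    split => { "buyers" => ", " }
  }
}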

Related

Adding a new field to an existing index from CSV using logstash

I have an index in Elasticsearch with the following fields:
EmpId | Name | Age | Title
It contains some data, and I need to add other fields like:
Address | City | MobileNumber
These extra columns exist in a CSV file:
EmpId | Address | City | MobileNumber
Below is my logstash.conf:
input {
  file {
    path => "D:/data.csv"
    start_position => "beginning"
    sincedb_path => "index/test1"
  }
}
filter {
  csv {
    separator => ","
    columns => ["EmpId","Address","City","MobileNumber"]
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "myindex"
    document_id => "%{EmpId}"
  }
  stdout {}
}
Now when I run this file, the old fields are removed and I can only see the new columns:
"EmpId","Address","City","MobileNumber"
How can I add the new fields but keep the old fields as well?
Add these two lines to your elasticsearch output section:
action => "update"
doc_as_upsert => true
So the whole output block would be:
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "myindex"
    document_id => "%{EmpId}"
    action => "update"
    doc_as_upsert => true
  }
}
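For context, with action => "update" and doc_as_upsert => true each Logstash event is sent as a partial-document update that Elasticsearch merges into the existing document (or creates it if it doesn't exist yet) instead of overwriting it. On a recent Elasticsearch version the request is roughly equivalent to the following (a sketch only; the field values are made up):
POST myindex/_update/123
{
  "doc": {
    "Address": "Some Street 1",
    "City": "Some City",
    "MobileNumber": "0123456789"
  },
  "doc_as_upsert": true
}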
Further reading:
Elasticsearch Update API docs
Logstash ES Output's "action" setting (see also the doc_as_upsert setting, obviously)
[Edit: complete rewrite after discovering this totally works. Mea culpa]

How to migrate Mysql data to elasticsearch using logstash

I need a brief explanation of how I can get MySQL data into Elasticsearch using Logstash.
Can anyone explain the step-by-step process?
This is a broad question, and I don't know how familiar you are with MySQL and ES. Let's say you have a table called user: you may simply dump it as CSV and load it into ES, and that will be good enough. But if you have dynamic data, where MySQL acts more like a pipeline, you will need to write a script to handle that. Either way, you can check the links below to build up your basic knowledge before asking how.
How to dump MySQL?
How to load data into ES
Also, since you will probably want to know how to convert your CSV into a JSON file, which is the format best suited for ES to understand:
How to convert CSV to JSON
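For the dump step, if your MySQL user has the FILE privilege, a minimal export of the user table to CSV could look like this (a sketch only; the column names and output path are placeholders, and the file is written on the database server's filesystem):
SELECT id, name, email
  INTO OUTFILE '/tmp/user.csv'
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
FROM user;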
You can do it using the jdbc input plugin for logstash.
Here is a config example.
Let me provide you with a high-level instruction set.
Install Logstash and Elasticsearch.
Copy the JDBC driver jar (ojdbc7.jar) into the Logstash bin folder.
For Logstash, create a config file, e.g. config.yml:
input {
  # Get the data from the database; configure the tracking fields to fetch data incrementally
  jdbc {
    jdbc_driver_library => "./ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@db:1521:instance"
    jdbc_user => "user"
    jdbc_password => "pwd"
    id => "some_id"
    jdbc_validate_connection => true
    jdbc_validation_timeout => 1800
    connection_retry_attempts => 10
    connection_retry_attempts_wait_time => 10
    # fetch the db logs using logid
    statement => "select * from customer.table where logid > :sql_last_value order by logid asc"
    # limit how many results are pre-fetched at a time from the cursor into the client's cache
    # before retrieving more results from the result set
    jdbc_fetch_size => 500
    jdbc_default_timezone => "America/New_York"
    use_column_value => true
    tracking_column => "logid"
    tracking_column_type => "numeric"
    record_last_run => true
    schedule => "*/2 * * * *"
    type => "log.customer.table"
    add_field => { "source" => "customer.table" }
    add_field => { "tags" => "customer.table" }
    add_field => { "logLevel" => "ERROR" }
    last_run_metadata_path => "last_run_metadata_path_table.txt"
  }
}
# Massage the data to store in the index
filter {
  if [type] == 'log.customer.table' {
    # assign values from db columns to custom fields of the index
    ruby {
      code => "event.set( 'errorid', event.get('ssoerrorid') );
               event.set( 'msg', event.get('errormessage') );
               event.set( 'logTimeStamp', event.get('date_created'));
               event.set( '@timestamp', event.get('date_created'));
              "
    }
    # remove the db columns that were mapped to custom fields of the index
    mutate {
      remove_field => ["ssoerrorid", "errormessage", "date_created"]
    }
  } # end of [type] == 'log.customer.table'
} # end of filter
# Insert into the index
output {
  if [type] == 'log.customer.table' {
    amazon_es {
      hosts => ["vpc-xxx-es-yyyyyyyyyyyy.us-east-1.es.amazonaws.com"]
      region => "us-east-1"
      aws_access_key_id => '<access key>'
      aws_secret_access_key => '<secret password>'
      index => "production-logs-table-%{+YYYY.MM.dd}"
    }
  }
}
Go to the bin folder and run:
logstash -f config.yml
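Note that the example above happens to use the Oracle JDBC driver, while the question is about MySQL; the jdbc input has exactly the same shape, only the driver and connection string change. A minimal MySQL variant might look like this (a sketch with placeholder host, database, credentials, and table):
input {
  jdbc {
    jdbc_driver_library => "./mysql-connector-java-5.1.45-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "pwd"
    schedule => "*/2 * * * *"
    # incremental fetch, same idea as the Oracle example above
    statement => "SELECT * FROM customer_table WHERE logid > :sql_last_value ORDER BY logid ASC"
    use_column_value => true
    tracking_column => "logid"
    tracking_column_type => "numeric"
  }
}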

Logstash mysql database data not loading to elasticsearch index

I have been working on this all evening and it is driving me crazy. It is supposed to be very simple but it is not working. This works with Oracle but not with MySQL, and I created a similar db.config that is fed to Logstash using the -f option.
input {
  jdbc {
    jdbc_driver_library => "/opt/elk/logstash-5.6.0/lib/mysql-connector-java-5.1.45-bin.jar"
    jdbc_driver_class => "Java::com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://serverName:3306/dbName?verifyServerCertificate=false&useSSL=true"
    jdbc_user => "userName"
    jdbc_password => "PasswordValue"
    statement => "select user_id, visitor_returning, config_os, visitor_days_since_last from visiting_table where user_id is not null"
    # optional extras I use
    type => "visit"
    tags => ["awesome", "import"]
  }
}
output {
  stdout { codec => json_lines }
  if [type] == "visit" {
    elasticsearch {
      hosts => "127.0.0.1"
      index => "visitDb"
      document_type => "visit_results"
    }
  }
  stdout {}
}
Once I run Logstash, it does not load the data into the Elasticsearch index. I cannot even see an index named visitDb when I run the following:
curl 'localhost:9200/_cat/indices?v'
health status index       uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana     Fg6P7XuHSTaonbKEbLcz5A   1   1         21            3     56.5kb         56.5kb
yellow open   orderstotdb obxZ38prTFCG0W-BFTIhgw   5   1         60            0    245.4kb        245.4kb
I am unable to figure out what is going on with MySQL. I can see the console log retrieving the data in JSON format and writing it to the console, but the index does not show up in Elasticsearch, nor does it appear in Kibana when creating an index pattern.
Can someone please help?
Answering my own question in case anyone else is having the same issue. Elasticsearch rejects index names containing upper-case letters; since I had named the index visitDb with a capital D, the index creation was being rejected, go figure :) Hours lost debugging different options.
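For reference, the only change needed in the config above is the index name; with a lowercase name the same output block works (a sketch based on the original config):
output {
  stdout { codec => json_lines }
  if [type] == "visit" {
    elasticsearch {
      hosts => "127.0.0.1"
      index => "visitdb"   # index names must be all lowercase
      document_type => "visit_results"
    }
  }
}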

import csv into elasticsearch

I'm doing "elastic search getting started" tutorial. Unfortunatelly this tutorial doesn't cover first step which is importing csv database into elasticsearch.
I googled to find solution but it doesn't work unfortunatelly. Here is what I want to achieve and what I have:
I have a file with data which I want to import (simplified)
id,title
10,Homer's Night Out
12,Krusty Gets Busted
I would like to import it using Logstash. After researching on the internet, I ended up with the following config:
input {
  file {
    path => ["simpsons_episodes.csv"]
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => [
      "id",
      "title"
    ]
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    action => "index"
    hosts => ["127.0.0.1:9200"]
    index => "simpsons"
    document_type => "episode"
    workers => 1
  }
}
I'm having trouble with specifying the document type, so that once the data is imported and I navigate to http://localhost:9200/simpsons/episode/10 I expect to see the result for episode 10.
Good job, you're almost there, you're only missing the document ID. You need to modify your elasticsearch output like this:
elasticsearch {
  action => "index"
  hosts => ["127.0.0.1:9200"]
  index => "simpsons"
  document_type => "episode"
  document_id => "%{id}"   # <---- add this line
  workers => 1
}
After this, you'll be able to query the episode with id 10:
GET http://localhost:9200/simpsons/episode/10
I'm the author of moshe/elasticsearch_loader
I wrote ESL for this exact problem.
You can download it with pip:
pip install elasticsearch-loader
And then you will be able to load CSV files into Elasticsearch by issuing:
elasticsearch_loader --index incidents --type incident csv file1.csv
Additionally, you can use a custom id field by adding --id-field=document_id to the command line.
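Putting the two together, the full command would presumably look like this (assuming your CSV has a document_id column to use as the ID):
elasticsearch_loader --id-field=document_id --index incidents --type incident csv file1.csv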

Logstash parsing csv date

I have a problem parsing a date from CSV, and I cannot find the issue with (one would assume) a simple date format, dd/MM/yy. Here's the structure of my CSV file:
Date,Key-values,Line Item,Creative,Ad unit,Creative size,Ad server impressions,Ad server clicks,Ad server CTR
04/04/16,prid=DUBAP,Hilton_PostAuth 1,Stop Clicking Around - 300x250,383UKHilton_300x250,300 x 250,31,0,0.00%
04/04/16,prid=DUBAP,Hilton_PostAuth 2,16-0006_Auction_Banners_300x250_cat4,383UKHilton_300x250,300 x 250,59,0,0.00%
and my logstash.config file:
input {
  file {
    path => "/Users/User/*.csv"
    type => "core2"
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => ["Date","Key-values","Line Item","Creative","Ad unit","Creative size","Ad server impressions","Ad server clicks","Ad server CTR"]
    separator => ","
  }
  date {
    match => ["Date", "dd/MM/YY"]
  }
  mutate { convert => ["Ad server impressions", "float"] }
  mutate { convert => ["Ad server clicks", "float"] }
  mutate { convert => ["Ad server CTR", "float"] }
}
output {
  elasticsearch {
    action => "index"
    hosts => "localhost"
    index => "test1"
    workers => 1
  }
  stdout {}
}
I have also tried combinations with the date format being "dd/MM/yy", with no luck: Date is not indexed as a date, and I can only select @timestamp in Kibana.
I think there must be a simple thing I'm just missing, but at this moment I cannot find it.
Cheers!
EDIT 1:
Please find my console output when Logstash starts and how the data is being processed:
Settings: Default pipeline workers: 4
Pipeline main started
Failed parsing date from field {:field=>"Date", :value=>"Date", :exception=>"Invalid format: \"Date\"", :config_parsers=>"dd/MM/YY", :config_locale=>"default=en_US", :level=>:warn}
2016-05-06T20:32:48.034Z Pawels-MacBook-Air.local Date,Key-values,Line Item,Creative,Ad unit,Creative size,Ad server impressions,Ad server clicks,Ad server CTR
2016-04-03T23:00:00.000Z Pawels-MacBook-Air.local 04/04/16,prid=DUBAP,Hilton_PostAuth 1,Stop Clicking Around - 300x250,383UKHilton_300x250,300 x 250,31,0,0.00%
It still loads the data into Elasticsearch, but in Kibana there's no 'Date' field; I can only use @timestamp.
Cheers
Actually, what the date filter does is:
The date filter is used for parsing dates from fields, and then using
that date or timestamp as the logstash timestamp for the event.
So with that configuration it reads your date and uses it as the timestamp field. If you want to keep it as a separate field as well, configure it as:
date {
  match => ["Date", "dd/MM/yy"]
  target => "Date"
}
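As a side note, the "Failed parsing date from field ... :value=>"Date"" warning in your console output comes from the CSV header row being processed as an event. If you want to get rid of it, you could drop the header line after the csv filter and before the date filter; this is only a sketch, assuming the header row always carries the literal value "Date" in that column:
filter {
  # right after the csv filter: drop the header row so the date filter
  # doesn't try to parse the literal string "Date"
  if [Date] == "Date" {
    drop { }
  }
}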