Elasticsearch Bulk Import from JSON without ID

Is there any way to import data from a JSON file into Elasticsearch without having to provide an ID for each document?
I have some data in a JSON file. It contains around 1000 documents, but no ID has been specified for any document. Here's what the data looks like:
{"business_id": "aasd231as", "full_address": "202 McClure 15034", "hours":{}}
{"business_id": "123123444", "full_address": "1322 lure 34", "hours": {}}
{"business_id": "sd231as", "full_address": "2 McCl 5034", "hours": {}}
It does not have {"index":{"_id":"5"}} before any document.
Now I am trying to import the data into elasticsearch using the following command:
curl -XPOST localhost:9200/newindex/newtype/_bulk?pretty --data-binary @path/file.json
But it throws the following error:
"type" : "illegal_argument_exception",
"reason" : "Malformed action/metadata line [1], expected START_OBJECT or END_OBJECT but found [VALUE_STRING]"
This is because there is no action/metadata line before each document.
Is there any way to import the data without providing {"index":{"_id":"5"}} before each document?
Any help will be highly appreciated!!

How about using Logstash, which is perfectly suited for this task? Just use the following config file and you're done.
Save the following config in logstash.conf:
input {
  file {
    path => "/path/to/file.json"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => "json"
  }
}
filter {
  mutate {
    remove_field => [ "@version", "@timestamp", "path", "host" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "newindex"
    document_type => "newtype"
    workers => 1
  }
}
Then start logstash with
bin/logstash -f logstash.conf
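Once Logstash has finished reading the file, every line should end up as its own document with an ID generated by Elasticsearch. As a quick sanity check (a sketch, assuming the index name used above), you can count the indexed documents:
curl -XGET 'localhost:9200/newindex/_count?pretty'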

Another option, perhaps the easier one since you are not filtering the data, is to use Filebeat. The latest filebeat-5.0.0-alpha3 can ship JSON natively. Here is a sample:
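(What follows is a minimal sketch using the 5.x option names; the path, index and type are placeholders, not from the original answer.)
filebeat.prospectors:
- input_type: log
  paths:
    - /path/to/file.json
  # decode each line as a JSON document and lift its keys to the top level of the event
  json.keys_under_root: true
  json.add_error_key: true
  document_type: newtype

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "newindex"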

Related

Parsing out awkward JSON in Logstash

Afternoon,
I've been trying to sort this for the past few weeks and cannot find a solution. We receive some logs via a 3rd party, and so far I've used grok to pull out the value below into the details field. Annoyingly, this would be extremely simple if it weren't for all the slashes.
Is there an easy way to parse this data out as JSON in Logstash?
{\"CreationTime\":\"2021-05-11T06:42:44\",\"Id\":\"xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx\",\"Operation\":\"SearchMtpBatch\",\"OrganizationId\":\"xxxxxxxxx-xxx-xxxx-xxxx-xxxxxxx\",\"RecordType\":52,\"UserKey\":\"eample#example.onmicrosoft.com\",\"UserType\":5,\"Version\":1,\"Workload\":\"SecurityComplianceCenter\",\"UserId\":\"example#example.onmicrosoft.com\",\"AadAppId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx\",\"DataType\":\"MtpBatch\",\"DatabaseType\":\"DataInsights\",\"RelativeUrl\":\"/DataInsights/DataInsightsService.svc/Find/MtpBatch?tenantid=xxxxxxx-xxxxx-xxxx-xxx-xxxxxxxx&PageSize=200&Filter=ModelType+eq+1+and+ContainerUrn+eq+%xxurn%xAZappedUrlInvestigation%xxxxxxxxxxxxxxxxxxxxxx%xx\",\"ResultCount\":\"1\"}
You can achieve this easily with the json filter:
filter {
  json {
    source => "message"
  }
}
If your source data actually contains those backslashes, then you need to somehow remove them before Logstash can recognise the message as valid JSON.
You could do that before it hits Logstash, and then the json codec will probably work as expected. Or, if you want Logstash to handle it, you can use the mutate filter's gsub option, followed by the json filter to parse:
filter {
  mutate {
    gsub => ["message", "[\\]", "" ]
  }
  json {
    source => "message"
  }
}
A couple of things to note: this will just blindly strip out all backslashes. If your strings might ever contain backslashes, you need to do something a little more sophisticated. I've had trouble escaping backslashes in gsub before and found that using the regex "any of" ([]) construction is safer.
Here's a docker one-liner to run that config. The stdin input and stdout output are the default when using -e to specify config on the command line, so I've omitted them here for readability:
docker run --rm -it docker.elastic.co/logstash/logstash:7.12.1 -e 'filter { mutate { gsub => ["message", "[\\]", "" ]} json { source => "message" } }'
Pasting your example in and hitting return results in this output:
{
    "@timestamp" => 2021-05-13T01:57:40.736Z,
    "RelativeUrl" => "/DataInsights/DataInsightsService.svc/Find/MtpBatch?tenantid=xxxxxxx-xxxxx-xxxx-xxx-xxxxxxxx&PageSize=200&Filter=ModelType+eq+1+and+ContainerUrn+eq+%xxurn%xAZappedUrlInvestigation%xxxxxxxxxxxxxxxxxxxxxx%xx",
    "OrganizationId" => "xxxxxxxxx-xxx-xxxx-xxxx-xxxxxxx",
    "UserKey" => "eample@example.onmicrosoft.com",
    "DataType" => "MtpBatch",
    "message" => "{\"CreationTime\":\"2021-05-11T06:42:44\",\"Id\":\"xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx\",\"Operation\":\"SearchMtpBatch\",\"OrganizationId\":\"xxxxxxxxx-xxx-xxxx-xxxx-xxxxxxx\",\"RecordType\":52,\"UserKey\":\"eample@example.onmicrosoft.com\",\"UserType\":5,\"Version\":1,\"Workload\":\"SecurityComplianceCenter\",\"UserId\":\"example@example.onmicrosoft.com\",\"AadAppId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx\",\"DataType\":\"MtpBatch\",\"DatabaseType\":\"DataInsights\",\"RelativeUrl\":\"/DataInsights/DataInsightsService.svc/Find/MtpBatch?tenantid=xxxxxxx-xxxxx-xxxx-xxx-xxxxxxxx&PageSize=200&Filter=ModelType+eq+1+and+ContainerUrn+eq+%xxurn%xAZappedUrlInvestigation%xxxxxxxxxxxxxxxxxxxxxx%xx\",\"ResultCount\":\"1\"}",
    "UserType" => 5,
    "UserId" => "example@example.onmicrosoft.com",
    "type" => "stdin",
    "host" => "de2c988c09c7",
    "@version" => "1",
    "Operation" => "SearchMtpBatch",
    "AadAppId" => "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
    "ResultCount" => "1",
    "DatabaseType" => "DataInsights",
    "Version" => 1,
    "RecordType" => 52,
    "CreationTime" => "2021-05-11T06:42:44",
    "Id" => "xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx",
    "Workload" => "SecurityComplianceCenter"
}

Add a header line before each json document

I have a JSON file with 1000 JSON objects.
Is there any way to add a header line before each JSON document? What is the easiest way?
Example: I have 1000 objects like this:
{"id":58,"first_name":"Louis","last_name":"Jordan","email":"ljordan1l#nature.com","gender":"Male","Latitude":"-15.93444","Longitude":"-50.14028"}
I want to add an index header like below before every JSON object, so that I can use it with the Elasticsearch Bulk API:
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "unique_id" } }
{"id":58,"first_name":"Louis","last_name":"Jordan","email":"ljordan1l#nature.com","gender":"Male","Latitude":"-15.93444","Longitude":"-50.14028"}
If you are willing to leverage Logstash, you don't need to modify your file and can simply read it line by line and stream it to ES using the elasticsearch output which leverages the Bulk API.
Store the following Logstash configuration in a file named es.conf (make sure the file path and ES hosts match your settings):
input {
  file {
    path => "/path/to/your/json"
    sincedb_path => "/dev/null"
    start_position => "beginning"
    codec => "json"
  }
}
filter {
  mutate {
    remove_field => ["@version", "@timestamp"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "test"
    document_type => "type1"
    document_id => "%{id}"
  }
}
Then, once Logstash is installed, you'll be able to run the following command to load your JSON file into your ES server:
bin/logstash -f es.conf
I found that the best way to add a header line before each JSON document is described here:
https://stackoverflow.com/a/30899000/5029432
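If you would rather rewrite the file into bulk format yourself instead of going through Logstash, a small jq one-liner can emit the action line in front of each object — a sketch, assuming one JSON object per line and that the id field should become the document _id:
jq -c '{"index": {"_index": "test", "_type": "type1", "_id": (.id|tostring)}}, .' file.json > file_bulk.json
The resulting file can then be posted to the bulk API, e.g. curl -XPOST localhost:9200/_bulk --data-binary @file_bulk.json (newer Elasticsearch versions also require -H 'Content-Type: application/x-ndjson').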

Working on JSON based logs using logstash

I have a log file which contains entries in the following format:
{ "start_time" : "12-May-2011", "name" : "this is first heading", "message" : "HELLO this is first message" }
{ "start_time" : "13-May-2011", "name" : "this is second heading", "message" : "HELLO this is second message" }
{ "start_time" : "14-May-2011", "name" : "this is third heading", "message" : "HELLO this is third message" }
...
I am new to Logstash. I currently have an app that creates these log entries as JSON strings, one below the other, in that file (say the location is /root/applog/scheduler.log).
I'm looking for some help on how to parse this JSON from the logs into different fields and write it to stdout. What should the conf file look like?
Note: the idea is to later use it with Kibana for visualization.
Example config:
input {
  file {
    path => ["/root/applog/scheduler.log"]
    codec => "json"
    start_position => "beginning" # If your file already exists
  }
}
filter { } # Add filters here (optional)
output {
  elasticsearch { } # pass the output to ES to prepare visualization with kibana
  stdout { codec => "rubydebug" } # If you want to see the result in stdout
}
Logstash includes a json codec that will split your json into fields for you.

Import JSON Files into Logstash + Elasticsearch + Kibana

So, I have a web platform that prints a JSON file per request containing some log data about that request. I can configure several rules about when it should log stuff, only at certain levels, etc.
Now, I've been toying with the Logstash + Elasticsearch + Kibana3 stack, and I'd love to find a way to see those logs in Kibana. My question is, is there a way to make Logstash import this kind of file, or would I have to write a custom input plugin for it? I've searched around and from what I've seen, plugins are written in Ruby, a language I don't have experience with.
Logstash is a very good tool for processing dynamic files.
Here is a way to import your JSON file into Elasticsearch using Logstash.
Configuration file:
input
{
  file
  {
    path => ["/path/to/json/file"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
    exclude => "*.gz"
  }
}
filter
{
  mutate
  {
    replace => [ "message", "%{message}" ]
    gsub => [ 'message','\n','']
  }
  if [message] =~ /^{.*}$/
  {
    json { source => message }
  }
}
output
{
  elasticsearch {
    protocol => "http"
    codec => json
    host => "localhost"
    index => "json"
    embedded => true
  }
  stdout { codec => rubydebug }
}
Example of a JSON file:
{"foo":"bar", "bar": "foo"}
{"hello":"world", "goodnight": "moon"}
Note that each JSON document needs to be on a single line. If you want to parse a multiline JSON file, replace the relevant fields in your configuration file:
input
{
  file
  {
    codec => multiline
    {
      pattern => '^\{'
      negate => true
      what => previous
    }
    path => ["/opt/mount/ELK/json/*.json"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
    exclude => "*.gz"
  }
}
filter
{
  mutate
  {
    replace => [ "message", "%{message}}" ]
    gsub => [ 'message','\n','']
  }
  if [message] =~ /^{.*}$/
  {
    json { source => message }
  }
}
Logstash is just a tool for converting various kinds of syslog files into JSON and loading them into elasticsearch (or graphite, or... ).
Since your files are already in JSON, you don't need logstash. You can upload them directly into elasticsearch using curl.
See Import/Index a JSON file into Elasticsearch
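For example, something along these lines will index one document and let Elasticsearch generate the ID (a sketch only; the index name, type and file name are placeholders):
curl -XPOST 'localhost:9200/mylogs/event?pretty' --data-binary @one_request.json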
However, in order to work well with Kibana, your JSON files need to meet a couple of minimum requirements:
Flat - Kibana does not grok nested JSON structs. You need a simple hash of key/value pairs.
Have an identifiable timestamp.
What I would suggest is looking at the JSON files Logstash outputs and seeing if you can massage your JSON files to match that structure. You can do this in any language you like that supports JSON. The program jq is very handy for filtering JSON from one format to another (see the sketch after the links below).
Logstash format - https://gist.github.com/jordansissel/2996677
jq - http://stedolan.github.io/jq/
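Purely to illustrate that massaging step, here is a hedged jq sketch that flattens a nested request log into the kind of flat, timestamped event Kibana expects (the input field names are invented placeholders, not taken from the question):
jq -c '{ "@timestamp": .time, "status": .response.status, "path": .request.path }' requests.json > flattened.json
Each output line is then a flat JSON object with an identifiable timestamp, which is much closer to what Logstash itself would emit.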
Logstash can import different formats and sources as it provides a lot of plugins. There are also other log collector and forwarder tools that can send logs to Logstash, such as nxlog, rsyslog, syslog-ng, Flume, Kafka, Fluentd, etc. From what I've heard, most people use nxlog on Windows (though it works on Linux equally well) in combination with the ELK stack because of its low resource footprint. (Disclaimer: I'm affiliated with the project.)

json filter fails with #<NoMethodError: undefined method `[]' for nil:NilClass>

I'm trying to process entries from a logfile that contains both plain messages and JSON-formatted messages. My initial idea was to grep for messages enclosed in curly braces and have them processed by another chained filter. Grep works fine (as does plain message processing), but the subsequent json filter reports an exception. I've attached the Logstash configuration, input and error message below.
Do you have any idea what the problem might be? Any alternative suggestions for processing plain and JSON-formatted entries from the same file?
Thanks a lot,
Johannes
Error message:
Trouble parsing json {:key=>"@message", :raw=>"{\"time\":\"14.08.2013 10:16:31:799\",\"level\":\"DEBUG\",\"thread\":\"main\",\"clazz\":\"org.springframework.beans.factory.support.DefaultListableBeanFactory\",\"line\":\"214\",\"msg\":\"Returning cached instance of singleton bean 'org.apache.activemq.xbean.XBeanBrokerService#0'\"}", :exception=>#<NoMethodError: undefined method `[]' for nil:NilClass>, :level=>:warn}
logstash conf:
input {
  file {
    path => [ "plain.log" ]
    type => "plainlog"
    format => "plain"
  }
}
filter {
  # Grep json formatted messages and send them to following json filter
  grep {
    type => "plainlog"
    add_tag => [ "grepped_json" ]
    match => [ "@message", "^{.*}" ]
  }
  json {
    tags => [ "grepped_json" ]
    source => "@message"
  }
}
output {
  stdout { debug => true debug_format => "json"}
  elasticsearch { embedded => true }
}
Input from logfile (just one line):
{"time":"14.08.2013 10:16:31:799","level":"DEBUG","thread":"main","clazz":"org.springframework.beans.factory.support.DefaultListableBeanFactory","line":"214","msg":"Returning cached instance of singleton bean 'org.apache.activemq.xbean.XBeanBrokerService#0'"}
I had the same problem and solved it by adding a target to the json filter.
The documentation does say the target is optional, but apparently it isn't.
Changing your example, you should have:
json {
  tags => [ "grepped_json" ]
  source => "@message"
  target => "data"
}