Filebeat: upload JSON templates to Elasticsearch

In short
I need to load several JSON templates into Elasticsearch from the filebeat.yaml configuration.
I have
Directory with templates:
-rootdir
 |
 |- templates
     |
     |- some-template.json
     |- some-2-template.json
     |- some-3-template.json
Pre-setup properties in the filebeat.yaml configuration, like:
setup.template:
  json:
    enabled: true
    path: /rootdir/templates
    pattern: "*-template.json"
    name: "json-templates"
This is actually only a blueprint, as I do not know how to load all the templates into Elasticsearch; a single template loads successfully with this config if I append its filename to path, for example /some-template.json.
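For reference, the single-file variant described above, which does load successfully, would look something like this (a sketch of that working case; the template name here is illustrative):

setup.template:
  json:
    enabled: true
    path: /rootdir/templates/some-template.json
    name: "some-template"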
With the directory path as in the blueprint, after starting Filebeat I get the following error log:
ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(http://:9200)): Connection marked as failed because the onConnect callback failed: error loading template: error reading file /rootdir/templates for template: read /rootdir/templates: is a directory
Question is
How can I upload multiple files within one property, with a different index pattern in each template, so that the result of running GET _cat/templates?v=true looks like this:
name                              index_patterns               order      version composed_of
some-template                     [some*]                      0          7140099
some-2-template                   [some-2*]                    0          7140099
some-3-template                   [some-3*]                    0          7140099
.monitoring-es                    [.monitoring-es-7-*]         0          7140099
.monitoring-alerts-7              [.monitoring-alerts-7]       0          7140099
.monitoring-logstash              [.monitoring-logstash-7-*]   0          7140099
.monitoring-kibana                [.monitoring-kibana-7-*]     0          7140099
.monitoring-beats                 [.monitoring-beats-7-*]      0          7140099
ilm-history                       [ilm-history-5*]             2147483647 5       []
.triggered_watches                [.triggered_watches*]        2147483647 12      []
.kibana-event-log-7.16.3-template [.kibana-event-log-7.16.3-*] 0                  []
.slm-history                      [.slm-history-5*]            2147483647 5       []
synthetics                        [synthetics-*-*]             100        1       [synthetics-mappings, data-streams-mappings, synthetics-settings]
metrics                           [metrics-*-*]                100        1       [metrics-mappings, data-streams-mappings, metrics-settings]
.watch-history-12                 [.watcher-history-12*]       2147483647 12      []
.deprecation-indexing-template    [.logs-deprecation.*]        1000       1       [.deprecation-indexing-mappings, .deprecation-indexing-settings]
.watches                          [.watches*]                  2147483647 12      []
logs                              [logs-*-*]                   100        1       [logs-mappings, data-streams-mappings, logs-settings]
.watch-history-13                 [.watcher-history-13*]       2147483647 13      []
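For what it's worth, since setup.template.json.path points at a single file rather than a directory (which is exactly what the "is a directory" error says), one workaround sketch is to push each JSON file to Elasticsearch's legacy template endpoint outside of Filebeat, for example with a small shell loop. The host elasticsearch:9200 is an assumption based on a typical Docker Compose service name; each JSON body is expected to carry its own index_patterns:

for f in /rootdir/templates/*-template.json; do
  name="$(basename "$f" .json)"
  # PUT each template under its file name, e.g. _template/some-template
  curl -s -X PUT "http://elasticsearch:9200/_template/${name}" \
       -H 'Content-Type: application/json' \
       --data-binary "@${f}"
done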
Additionally
I'm running Filebeat and Elasticsearch in Docker using Docker Compose, in case that is helpful somehow.
Thank you in advance!
Best Regards, Anton.

Related

Julia CSV.read not recognizing "select" keyword

I am reading in a space-delimited file using the CSV library in Julia.
edgeList = CSV.read(
    joinpath(dataDirectory, "out.file"),
    types=[Int, Int],
    header=["node1", "node2"],
    skipto=3,
    select=[1,2]
)
This yields the following error:
MethodError: no method matching CSV.File(::String; types=DataType[Int64, Int64], header=["node1", "node2"], skipto=3, select=[1, 2])
Closest candidates are:
CSV.File(::Any; header, normalizenames, datarow, skipto, footerskip, limit, transpose, comment, use_mmap, ignoreemptylines, missingstrings, missingstring, delim, ignorerepeated, quotechar, openquotechar, closequotechar, escapechar, dateformat, decimal, truestrings, falsestrings, type, types, typemap, categorical, pool, strict, silencewarnings, threaded, debug, parsingdebug, allowmissing) at /Users/n.jordanjameson/.julia/packages/CSV/4GOjG/src/CSV.jl:221 got unsupported keyword argument "select"
I am using Julia v1.6.2. Here is the output of versioninfo():
Julia Version 1.6.2
Commit 1b93d53fc4 (2021-07-14 15:36 UTC)
Platform Info:
OS: macOS (x86_64-apple-darwin18.7.0)
CPU: Intel(R) Core(TM) i7-5650U CPU @ 2.20GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-11.0.1 (ORCJIT, broadwell)
The version of CSV is 0.10.4. The documentation for this version of CSV is here: https://csv.juliadata.org/stable/reading.html#CSV.read, and it has a select / drop entry.
The file I am trying to read is from here: http://konect.cc/networks/moreno_crime/ (the file I'm using is called "out.moreno_crime_crime"). The first few lines are:
% bip unweighted
% 1476 829 551
1 1
1 2
1 3
1 4
2 5
2 6
2 7
2 8
2 9
2 10
I get a different error than you; can you restart Julia and make sure?
julia> CSV.read("/home/akako/Downloads/moreno_crime/out.moreno_crime_crime"; types=[Int, Int],
           header=["node1", "node2"],
           skipto=3,
           select=[1,2]
       )
ERROR: ArgumentError: provide a valid sink argument, like `using DataFrames; CSV.read(source, DataFrame)`
Stacktrace:
[1] read(source::String, sink::Nothing; copycols::Bool, kwargs::Base.Pairs{Symbol, Any, NTuple{4, Symbol}, NamedTuple{(:types, :header, :skipto, :select), Tuple{Vector{DataType}, Vector{String}, Int64, Vector{Int64}}}})
@ CSV ~/.julia/packages/CSV/jFiCn/src/CSV.jl:89
[2] top-level scope
@ REPL[8]:1
This error is telling you that you can't call CSV.read without a target sink; you might want to use CSV.File instead:
julia> CSV.File("/home/akako/Downloads/moreno_crime/out.moreno_crime_crime"; types=[Int, Int],
           header=["node1", "node2"],
           skipto=3,
           select=[1,2]
       )
┌ Warning: thread = 1 warning: parsed expected 2 columns, but didn't reach end of line around data row: 1. Parsing extra columns and widening final columnset
└ @ CSV ~/.julia/packages/CSV/jFiCn/src/file.jl:579
1476-element CSV.File:
CSV.Row: (node1 = 1, node2 = 1, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 2, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 3, Column3 = missing)
CSV.Row: (node1 = 1, node2 = 4, Column3 = missing)
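For completeness, the error message itself also points at a fix for the original CSV.read call: pass an explicit sink such as DataFrame. A minimal sketch of that variant, assuming the DataFrames package is installed and CSV 0.10.x is the version actually loaded (hence the suggestion above to restart Julia); dataDirectory and the file name are taken from the question:

using CSV, DataFrames

edgeList = CSV.read(
    joinpath(dataDirectory, "out.file"), DataFrame;  # DataFrame is the sink
    types=[Int, Int],
    header=["node1", "node2"],
    skipto=3,
    select=[1, 2],
)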

How to capture the values from Get Response Body - Robot Framework

Output from Response Body:
{"data":[{"id":122,"name":"Test 1","description":"TEST 1 Test 2 ....."}]},{"id":123,"name":"DYNAMO"......}]},{"id":126,"name":"T DYNAMO"......
*** Keywords ***
Capture The Data Ids
    @{ids}=    Create List    122    123    126    167    190
    ${header}    Create Dictionary    Authorization...
    ${resp}    Get Response    httpsbin    /data
    ${t_ids}=    Get Json Value    ${resp.content}    /data/0/id
Problem
I have created a list of the above ids in the test case, and I need to compare the created data against the ids returned in the response body.
t_ids returns 122, and when 0 is replaced by 1, it returns 123.
Rather than capturing each individual id, is it possible to put them in a for loop?
    :FOR    ${i}    IN    ${ids}
    \    ${the_id}=    Get Json Value    ${resp.content}    /data/${i}/id ?
I tried this and failed.
What is a possible solution to compare the ids from the response data against the created list?
Thank you.
It is possible to do what you want, but it is always good to know what kind of data structure your variable contains. In the example below, loading a JSON file stands in for the received answer in ${resp.content}. To my knowledge this is a string, which is also what Get File returns.
The example is split into the JSON file and the robot file.
so_json.json
{
    "data": [
        {
            "id": 122,
            "name": "Test 1",
            "description": "TEST 1 Test 2"
        },
        {
            "id": 123,
            "name": "DYNAMO"
        },
        {
            "id": 126,
            "name": "T DYNAMO"
        }
    ]
}
so_robot.robot
*** Settings ***
Library    HttpLibrary.HTTP
Library    OperatingSystem
Library    Collections

*** Test Cases ***
TC
    ${json_string}    Get File    so_json.json
    ${json_object}    Parse Json    ${json_string}
    :FOR    ${item}    IN    @{json_object['data']}
    \    Log To Console    ${item['id']}
Which in turn gives the following result:
==============================================================================
Robot - Example
==============================================================================
Robot - Example.SO JSON
==============================================================================
TC 122
123
126
| PASS |
------------------------------------------------------------------------------
Robot - Example.SO JSON | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
Robot - Example | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
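To go one step further and actually compare the ids against the list created in the test case, here is a minimal sketch building on the loop above. List Should Contain Value comes from the already-imported Collections library; note that Parse Json yields integer ids, so the expected list is created with Robot's integer syntax:

    # ids as integers, matching what Parse Json returns
    @{ids}=    Create List    ${122}    ${123}    ${126}
    :FOR    ${item}    IN    @{json_object['data']}
    \    List Should Contain Value    ${ids}    ${item['id']}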

Assign puppet Hash to hieradata yaml

I want to assign a hash variable from Puppet to a Hiera data structure, but I only get a string.
Here is an example to illustrate what I want. In the end I don't want to access a fact.
---
filesystems:
  - partitions: "%{::partitions}"
And here is my debug code:
1 $filesystemsarray = lookup('filesystems', Array, 'deep', [])
2 $filesystems = $filesystemsarray.map | $fs | {
3     notice("fs: ${fs['partitions']}")
4 }
5
6 notice("sda1: ${filesystemsarray[0]['partitions']['/dev/sda1']}")
The map leads to the following output:
Notice: Scope(Class[Profile::App::Kms]): fs: {"/dev/mapper/localhost--vg-root"=>{"filesystem"=>"ext4", "mount"=>"/", "size"=>"19.02 GiB", "size_bytes"=>20422066176, "uuid"=>"02e2ba2c-2ee4-411d-ac63-fc963c8026b4"}, "/dev/mapper/localhost--vg-swap_1"=>{"filesystem"=>"swap", "size"=>"512.00 MiB", "size_bytes"=>536870912, "uuid"=>"95ba4b2a-7434-48fd-9331-66443c752a9e"}, "/dev/sda1"=>{"filesystem"=>"ext2", "mount"=>"/boot", "partuuid"=>"de90a5ed-01", "size"=>"487.00 MiB", "size_bytes"=>510656512, "uuid"=>"398f2ab6-a7e8-4983-bd81-db03984fbd0e"}, "/dev/sda2"=>{"size"=>"1.00 KiB", "size_bytes"=>1024}, "/dev/sda5"=>{"filesystem"=>"LVM2_member", "partuuid"=>"de90a5ed-05", "size"=>"19.52 GiB", "size_bytes"=>20961034240, "uuid"=>"wLKRQm-9bdn-mHA8-M8bE-NL76-Gmas-L7Gp0J"}}
It seems to be a Hash as expected, but the notice in line 6 leads to:
Error: Evaluation Error: A substring operation does not accept a String as a character index. Expected an Integer at ...
What am I doing wrong?
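A point worth noting here: Hiera's %{} interpolation always produces strings, so the hash arrives in Puppet stringified, which matches the line 6 error (indexing a String with '/dev/sda1' is attempted as a substring operation). When the referenced value lives in Hiera rather than in a fact, the alias() interpolation function preserves the original data type, provided the alias call is the entire string. A minimal sketch, where partitions_data is a hypothetical key standing in for the real data:

---
# hypothetical Hiera key holding the hash itself
partitions_data:
  "/dev/sda1":
    filesystem: ext2
    mount: /boot
filesystems:
  # alias() keeps the Hash type; a plain %{...} interpolation always stringifies
  - partitions: "%{alias('partitions_data')}"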

Reading index content, possible?

Is there a way to analyze the contents of a specific index (fdb file)? I know I can look at the index creation statement and try to guess from there, but it would be nice if there were a way to see the contents/records inside an fdb file.
Two tools, cbindex and forestdb_dump, can help. They are available in the bin folder along with the other Couchbase binaries. Note that these tools are not supported, as documented at http://developer.couchbase.com/documentation/server/4.5/release-notes/relnotes-40-ga.html
Given a bucket and index name, the cbindex tool gets index-level details:
couchbases-MacBook-Pro:bin varakurprasad$ pwd
/Users/varakurprasad/Downloads/couchbase-server-enterprise_451_GA/Couchbase Server.app/Contents/Resources/couchbase-core/bin
couchbases-MacBook-Pro:bin varakurprasad$ ./cbindex -server 127.0.0.1:8091 -type scanAll -bucket travel-sample -limit 4 -index def_type -auth Administrator:couch1
ScanAll index:
[airline] ... airline_10
[airline] ... airline_10123
[airline] ... airline_10226
[airline] ... airline_10642
Total number of entries: 4
Given a ForestDB file, the forestdb_dump tool gets more low-level details:
couchbases-MacBook-Pro:varakurprasad$ pwd
/Users/varakurprasad/Library/Application Support/Couchbase/var/lib/couchbase/data/#2i/travel-sample_def_type_1018858748122363634_0.index
couchbases-MacBook-Pro:varakurprasad$ forestdb_dump data.fdb.53 | more
[FDB INFO] Forestdb opened database file data.fdb.53
DB header info:
BID: 1568 (0x620, byte offset: 6422528)
DB header length: 237 bytes
DB header revision number: 3
...
Doc ID: airline_10
KV store name: back
Sequence number: 14637
Byte offset: 2063122
Indexed by the main index
Length: 10 (key), 0 (metadata), 24 (body)
Status: normal
Metadata: (null)
Body:^Fairline
...

Couchbase cbdocloader not loading documents from zip file

I recently installed Couchbase 4.5 beta on Windows 10. I'm following along with the free training videos and labs at learn.couchbase.com, specifically the CB110 course.
One step in the lab requires me to load up sample data with cbdocloader. I did this before with Couchbase 4.5 developer preview and it worked fine, but now it's not loading any documents.
It creates the bucket, but it doesn't load documents into it. Here's my PowerShell output:
PS C:\Users\mgroves\Desktop> cbdocloader -u Administrator -p password -b couchmusic1 -n 127.0.0.1:8091 -s 100 .\couchmusic1-countries-20151228-win.zip
[2016-05-12 10:23:50,480] - [rest_client] [6240] - INFO - existing buckets : [u'couchmusic1', u'hello-couchbase', u'travel-sample']
[2016-05-12 10:23:50,496] - [rest_client] [6240] - INFO - found bucket couchmusic1
bucket creation is successful
.
bucket: couchmusic1-countries-20151228-win.zip, msgs transferred...
       : total | last | per sec
 byte  : 0 | 0 | 0.0
done
PS C:\Users\mgroves\Desktop>
I've made one of the zip files available on dropbox if you'd like to try: couchmusic1-countries-20151228.zip
I suspect this is probably user error and not related to the Couchbase release, but I don't know for sure.
UPDATE: I ran with the -v flag (v for "verbose"), and below is the output from that. I'm still not seeing what the issue is:
PS C:\Users\mgroves\Desktop> cbdocloader -u Administrator -p password -b couchmusic1 -n 127.0.0.1:8091 -s 100 -v .\couchmusic1-countries-20151228-win.zip
[2016-05-12 10:40:06,549] - [rest_client] [7764] - INFO - existing buckets : [u'couchmusic1', u'hello-couchbase', u'travel-sample']
[2016-05-12 10:40:06,561] - [rest_client] [7764] - INFO - found bucket couchmusic1
bucket creation is successful
2016-05-12 10:40:06,594: mt cbtransfer...
2016-05-12 10:40:06,595: mt  source : json://.\couchmusic1-countries-20151228-win.zip
2016-05-12 10:40:06,605: mt  sink   : http://127.0.0.1:8091
2016-05-12 10:40:06,612: mt  opts   : {'username': '<xxx>', 'destination_vbucket_state': 'active', 'verbose': 1, 'extra': {'max_retry': 10.0, 'rehash': 0.0, 'dcp_consumer_queue_length': 1000.0, 'data_only': 0.0, 'uncompress': 0.0, 'nmv_retry': 1.0, 'conflict_resolve': 1.0, 'cbb_max_mb': 100000.0, 'report': 5.0, 'mcd_compatible': 1.0, 'try_xwm': 1.0, 'backoff_cap': 10.0, 'batch_max_bytes': 400000.0, 'report_full': 2000.0, 'flow_control': 1.0, 'batch_max_size': 1000.0, 'seqno': 0.0, 'design_doc_only': 0.0, 'recv_min_bytes': 4096.0}, 'ssl': False, 'threads': 4, 'key': None, 'password': '<xxx>', 'id': None, 'destination_operation': None, 'source_vbucket_state': 'active', 'silent': False, 'dry_run': False, 'single_node': False, 'bucket_destination': 'couchmusic1', 'vbucket_list': None, 'bucket_source': None}
2016-05-12 10:40:06,726: mt bucket: couchmusic1-countries-20151228-win.zip
2016-05-12 10:40:06,749: w3   source : json://.\couchmusic1-countries-20151228-win.zip(couchmusic1-countries-20151228-win.zip#N/A)
2016-05-12 10:40:06,760: w3   sink   : http://127.0.0.1:8091(couchmusic1-countries-20151228-win.zip#N/A)
2016-05-12 10:40:06,767: w3          : total | last | per sec
2016-05-12 10:40:06,772: w3  batch : 1 | 1 | 28.6
2016-05-12 10:40:06,776: w3  byte  : 0 | 0 | 0.0
2016-05-12 10:40:06,779: w3  msg   : 0 | 0 | 0.0
.
bucket: couchmusic1-countries-20151228-win.zip, msgs transferred...
: total | last | per sec
batch : 1 | 1 | 8.0
byte : 0 | 0 | 0.0
msg : 0 | 0 | 0.0
done
PS C:\Users\mgroves\Desktop>
Turns out, this is an issue caused by a change between the Couchbase 4.5 developer preview and the Couchbase 4.5 beta.
Apparently, these couchmusic JSON files aren't in the correct format. There was a change to allow importing these invalid JSON zip files for backwards compatibility (see https://github.com/couchbase/couchbase-cli/commit/3794ffa8fdfcdd5224cb4e332d5ef882aa8140b5). However, another change appears to have broken this (see: https://github.com/couchbase/couchbase-cli/commit/c892c9241d1e6997fa30317af791d6fcde73aeaa).
In any case, there are two problems:
1) The example JSON files for couchmusic aren't in the correct format
2) The backwards-compatibility import is broken
I've spoken to the Couchbase support team, and they are going to try to get this issue reactivated and fixed before the Couchbase 4.5 release (you can view the issue here if you'd like: https://issues.couchbase.com/browse/MB-18905)