ZabbixAPI, retrieving information from a particular field within each host

I want to retrieve the percentage of free disk space on a particular disk (X:) from all hosts within a particular host group.
I tried to work with the item.get() function, but that returned an empty list:
zapi = ZabbixAPI(server)
for t in zapi.item.get(groups='Type1', filter={'name': 'Free Disk Space on X'}):
This method, using item.get, gives me an empty list.
I tried using the history.get method, but that kept timing out:
for t in groups:
    t2 += zapi.history.get(filter={'name': 'free Disk Space on E:(percentage)'},)
Does anyone have experience with the Zabbix API who can advise me on what I am doing wrong?
Thanks :)

Edited after more details about the request; see the comments.
To avoid PHP timeouts you should split your requests and use time_from/time_till, as Jan suggested.
When using discovered items, the item name obtained through the API will not expand the macros; there is a feature request about it.
For example, if you use a Windows Filesystem Discovery and your server has C: and D: drives, in Zabbix you will have two items with the same name ("Free disk space on $1 (percentage)"), while the discovered drive will be in the key_ field of each item, for instance:
vfs.fs.size[C:,pfree]
vfs.fs.size[D:,pfree]
So you will have to call the item.get API filtering for the generic name (the one with $1), and then get the history values only if the key_ contains your target drive name.
I've updated the sample script with a hostgroup filter and more verbose variables and output; edit out any fields you don't need to simplify the output.
from zabbix.api import ZabbixAPI
import re
import time
import datetime

zapi = ZabbixAPI(url=zabbixServer, user=zabbixUser, password=zabbixPass)

# Static filters, implement argparse if needed
itemFilter = { "name": "Free disk space on $1 (percentage)" }
hostgroupFilter = { "name": "Some HostGroup" }
keyFilter = "C\:"

# args.f and args.t supplied from cmd line - see argparse
fromTimestamp = time.mktime(datetime.datetime.strptime(args.f, "%d/%m/%Y %H:%M").timetuple())
tillTimestamp = time.mktime(datetime.datetime.strptime(args.t, "%d/%m/%Y %H:%M").timetuple())

# Get only the hosts of the specified hostgroup
hostGroup = zapi.hostgroup.get(filter=hostgroupFilter, output='extend')
hosts = zapi.host.get(groupids=hostGroup[0]['groupid'], output='extend')

for host in hosts:
    items = zapi.item.get(filter=itemFilter, host=host['host'], output='extend')
    for item in items:
        # Check if the item key contains the target object (in your example, if it contains C:)
        if re.search(keyFilter, item['key_']):
            values = zapi.history.get(itemids=item['itemid'], time_from=fromTimestamp, time_till=tillTimestamp, history=item['value_type'])
            for historyValue in values:
                currentDate = datetime.datetime.fromtimestamp(int(historyValue['clock'])).strftime('%d/%m/%Y %H:%M:%S')
                print "{}:{}({}) - {} {} Value: {}".format(
                    host['host'],
                    item['name'],
                    item['key_'],
                    historyValue['clock'],
                    currentDate, historyValue['value'])
Sample output over 5 minutes for a host group with 3 Windows servers:
SRV01:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538128853 28/09/2018 12:00:53 Value: 63.3960
SRV01:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538128914 28/09/2018 12:01:54 Value: 63.3960
SRV01:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538128974 28/09/2018 12:02:54 Value: 63.3960
SRV01:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538129034 28/09/2018 12:03:54 Value: 63.3960
SRV01:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538129094 28/09/2018 12:04:54 Value: 63.3960
SRV02:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538128824 28/09/2018 12:00:24 Value: 52.2341
SRV02:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538128885 28/09/2018 12:01:25 Value: 52.2341
SRV02:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538128944 28/09/2018 12:02:24 Value: 52.2341
SRV02:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538129004 28/09/2018 12:03:24 Value: 52.2341
SRV02:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538129065 28/09/2018 12:04:25 Value: 52.2341
SRV03:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538128828 28/09/2018 12:00:28 Value: 33.2409
SRV03:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538128888 28/09/2018 12:01:28 Value: 33.2409
SRV03:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538128947 28/09/2018 12:02:27 Value: 33.2409
SRV03:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538129008 28/09/2018 12:03:28 Value: 33.2409
SRV03:Free disk space on $1 (percentage)(vfs.fs.size[C:,pfree]) - 1538129069 28/09/2018 12:04:29 Value: 33.2409

You are trying to get the full history (without any time limitation) with history.get(). That can be a lot of datapoints, which all need to be processed by the API. That is really not a good idea, because you can hit PHP/API limits (time or memory), which is your current case.
Use the time_from/time_till parameters to limit the time range of history.get().
See doc: https://www.zabbix.com/documentation/3.4/manual/api/reference/history/get
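For example, here is a minimal sketch (assuming an already authenticated zapi client and a numeric itemid obtained via item.get, both of which are placeholders here) that only requests the last hour of values instead of the whole history:
import time

now = int(time.time())
# Limit the request to the last hour of values for one item
# instead of pulling its entire history at once.
values = zapi.history.get(
    itemids=itemid,        # numeric item id obtained earlier via item.get
    history=0,             # the item's value_type (0 = float, e.g. a percentage item)
    time_from=now - 3600,  # only the last hour
    time_till=now,
    sortfield='clock',
    sortorder='DESC',
)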

Related

NLM UMLS MRREL is broken / incomplete

I have been working with the Unified Medical Language System (UMLS) for decades, but I have been aware for some years now (since 2017) that the MRREL table is woefully defective. And I wonder: how can that possibly be?
I have tons of examples, but I will keep it very simple. The ATC code is a simple tree. Among many others, there is a top-level category 'G' (CUI: C3653431) and another, 'C' (CUI: C3540036).
To be absolutely sure that I am not losing anything due to my importing process into a relational database, I am checking the raw files from the UMLS distribution:
( unzip -p 2021AA-full/2021aa-2-meta.nlm 2021AA/META/MRREL.RRF.aa.gz |zcat ;
unzip -p 2021AA-full/2021aa-2-meta.nlm 2021AA/META/MRREL.RRF.ab.gz |zcat ;
unzip -p 2021AA-full/2021aa-2-meta.nlm 2021AA/META/MRREL.RRF.ac.gz |zcat ;
unzip -p 2021AA-full/2021aa-2-meta.nlm 2021AA/META/MRREL.RRF.ad.gz |zcat ;
) |egrep 'C3540036|C3653431'
and here is what I get:
|||PAR|C3540036|A22726695||inverse_isa|R162880348||||||N||
|||PAR|C3540036|A22726695||inverse_isa|R162896206||||||N||
|||PAR|C3540036|A22726695||inverse_isa|R162888235||||||N||
|||PAR|C3540036|A22726695||inverse_isa|R162884662||||||N||
|||PAR|C3540036|A22726695||inverse_isa|R162904098||||||N||
|||PAR|C3540036|A22726695||inverse_isa|R162892260||||||N||
|||PAR|C3540036|A22726695||inverse_isa|R162895918||||||N||
|||PAR|C3540036|A22726695||inverse_isa|R162895969||||||N||
|||PAR|C3540036|A22726695||inverse_isa|R162884408||||||N||
|||CHD|C3540036|A22726695||isa|R162905548||||||N||
|||CHD|C3653431|A22724193||isa|R145149031||||||N||
C3540036|A22726695|AUI|CHD|C0001645|A22729715|AUI|isa|R162894118||ATC||||N||
C3653431|A22724193|AUI|CHD|C3653561|A22721518|AUI|isa|R145152424||ATC||||N||
|||PAR|C3653431|A22724193||inverse_isa|R145147348||||||N||
|||PAR|C3653431|A22724193||inverse_isa|R145150236||||||N||
|||PAR|C3653431|A22724193||inverse_isa|R145153001||||||N||
|||PAR|C3653431|A22724193||inverse_isa|R162904046||||||N||
Why would there only be one link for each of these top level ATC categories?
CUI: C0001645 is ATC C07 - BETA BLOCKING AGENTS
CUI: C3653561 is ATC G03 - SEX HORMONES AND MODULATORS OF THE GENITAL SYSTEM
but where are C06, C05 (CUI: C0304533), G02 (CUI: C3653939), etc.?
Let's search the other way around:
( unzip -p 2021AA-full/2021aa-2-meta.nlm 2021AA/META/MRREL.RRF.aa.gz |zcat ;
unzip -p 2021AA-full/2021aa-2-meta.nlm 2021AA/META/MRREL.RRF.ab.gz |zcat ;
unzip -p 2021AA-full/2021aa-2-meta.nlm 2021AA/META/MRREL.RRF.ac.gz |zcat ;
unzip -p 2021AA-full/2021aa-2-meta.nlm 2021AA/META/MRREL.RRF.ad.gz |zcat ;
) |egrep 'C0001645|C0304533|C3653561|C3653939' \
|fgrep '|ATC|'
This time I filter out everything but the MRREL rows from the source ATC. First, C07AA is a child of C07:
C0001645|A22726519|AUI|CHD|C0304515|A22728404|AUI|isa|R145146143||ATC||||N||
C0001645|A22729715|AUI|CHD|C0001645|A22726519|AUI|isa|R162909942||ATC||||N||
Look above, there is even a cycle! And where are all the other children of C07? Nowhere. The only other row with C07 is the link to C that we already had:
C3540036|A22726695|AUI|CHD|C0001645|A22729715|AUI|isa|R162894118||ATC||||N||
And C05? Only one child, C05B, and no parent link to C nor any other child:
C0304533|A22730499|AUI|CHD|C0360720|A22722089|AUI|isa|R162902080||ATC||||N||
Now here is G02 with 3 of its (certainly more) children:
C3653939|A22723315|AUI|CHD|C3653712|A22724891|AUI|isa|R162905420||ATC||||N||
C3653939|A22731353|AUI|CHD|C3653306|A22721882|AUI|isa|R162890442||ATC||||N||
C3653939|A22722139|AUI|CHD|C0164398|A22725073|AUI|member_of|R162897807||ATC||||N||
And then we have inverse links, which are not actually from ATC; those concepts are from SNOMED and other sources:
C0164398|A22725073|AUI|PAR|C3653939|A22722139|AUI|has_member|R162896052||ATC||||N||
C0754280|A26456152|AUI|PAR|C3653939|A22722139|AUI|has_member|R171341743||ATC||||N||
C1721339|A32510681|AUI|PAR|C3653939|A22722139|AUI|has_member|R202594180||ATC||||N||
C3652943|A22728555|AUI|PAR|C3653939|A22722139|AUI|has_member|R162895991||ATC||||N||
C3652944|A22730286|AUI|PAR|C3653939|A22722139|AUI|has_member|R162884649||ATC||||N||
And here is G to G03:
C3653431|A22724193|AUI|CHD|C3653561|A22721518|AUI|isa|R145152424||ATC||||N||
And this one is also not an ATC link; the target is in SNOMED and other sources, but not in ATC:
C3653561|A22721518|AUI|CHD|C0002844|A22722789|AUI|isa|R145149338||ATC||||N||
So this looks completely random.
I remember from decades ago that MRREL was quite redundant, containing both directions for every relationship. But not any more. What is going on here?
I sent a problem report to NLM and they replied that the files in UMLS-Full.zip that end in .nlm, which also contain the UMLS data tables, are somehow incomplete, and that one needs their MetamorphoSys program to assemble the right files.
It seems they do some row-level data compression (for whatever reason) by which they can reduce the size of the MRREL file by about 20%:
MRREL.RRF from the Metathesaurus distribution: 5,137,657,601 bytes
MRREL.RRF from the UMLS-Full .nlm file:        3,662,797,614 bytes
$ head MRREL.RRF.met
C0000005|A13433185|SCUI|RB|C0036775|A7466261|SCUI||R86000559||MSHFRE|MSHFRE|||N||
C0000005|A26634265|SCUI|RB|C0036775|A0115649|SCUI||R31979041||MSH|MSH|||N||
C0000039|A0016515|AUI|SY|C0000039|A11754881|AUI|translation_of|R101808683||MSHSWE|MSHSWE|||N||
C0000039|A0016515|AUI|SY|C0000039|A12080359|AUI|sort_version_of|R64565540||MSH|MSH|||N||
C0000039|A0016515|AUI|SY|C0000039|A12091182|AUI|entry_version_of|R64592881||MSH|MSH|||N||
C0000039|A0016515|AUI|SY|C0000039|A13042554|AUI|translation_of|R193408122||MSHCZE|MSHCZE|||N||
C0000039|A0016515|AUI|SY|C0000039|A13096036|AUI|translation_of|R73331672||MSHPOR|MSHPOR|||N||
C0000039|A0016515|AUI|SY|C0000039|A1317708|AUI|permuted_term_of|R28482432||MSH|MSH|||N||
C0000039|A0016515|AUI|SY|C0000039|A18972171|AUI|translation_of|R124061564||MSHPOL|MSHPOL|||N||
C0000039|A0016515|AUI|SY|C0000039|A28315139|AUI||R173174221||RXNORM|RXNORM|||N||
$ head MRREL.RRF.nlm
C0000005|A13433185|SCUI|RB|C0036775|A7466261|SCUI||R86000559||MSHFRE||||N||
C0000005|A26634265|SCUI|RB|C0036775|A0115649|SCUI||R31979041||MSH||||N||
C0000039|A0016515|AUI|SY|C0000039|A11754881|AUI|translation_of|R101808683||MSHSWE||||N||
C0000039|A0016515|AUI|SY|C0000039|A12080359|AUI|sort_version_of|R64565540||MSH||||N||
|||SY|C0000039|A12091182||entry_version_of|R64592881||||||N||
C0000039|A0016515|AUI|SY|C0000039|A13042554|AUI|translation_of|R193408122||MSHCZE||||N||
C0000039|A0016515|AUI|SY|C0000039|A13096036|AUI|translation_of|R73331672||MSHPOR||||N||
C0000039|A0016515|AUI|SY|C0000039|A1317708|AUI|permuted_term_of|R28482432||MSH||||N||
C0000039|A0016515|AUI|SY|C0000039|A18972171|AUI|translation_of|R124061564||MSHPOL||||N||
C0000039|A0016515|AUI|SY|C0000039|A28315139|AUI||R173174221||RXNORM||||N||
You can see how the 5th row of the full file is produced from the compressed one by copying the previous row's values into the empty columns.
That seems to be the issue.
I had the same issue, thanks for the self-answer!
I wanted to confirm that they fill the empty fields with the values from the previous row, and yes, they do.
For example, this is an extract of the code that parses MRREL
(lines 2865-2891 of gov.nih.nlm.umls.meta/src/gov/nih/nlm/umls/meta/io/RRFConceptInputStream.java):
//
// Process matching line
//
else if (line.startsWith(concept.getUi())
    || (prevCui != null && line.startsWith("|") && prevCui.equals(concept
        .getUi()))) {
  //
  // Parse line and count fields
  // CUI1,AUI1,STYPE1,REL,CUI2,AUI2,STYPE2,RELA,RUI,SRUI,SAB,SL,RG,DIR,SUPPRESS,CVF
  //
  String[] tokens = FieldedStringTokenizer.split(line, "|", 17);
  // Set blank fields based on prev values.
  if (tokens[0].equals("")) {
    tokens[0] = prevCui;
    tokens[1] = prevAui;
    tokens[2] = prevStype1;
    tokens[6] = prevStype2;
    tokens[10] = prevSab;
  } else {
    prevCui = tokens[0];
    prevAui = tokens[1];
    prevStype1 = tokens[2];
    prevStype2 = tokens[6];
    prevSab = tokens[10];
  }
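For reference, the same fill-from-previous-row expansion can be sketched in a few lines of Python. This is only an illustration (the function name expand_mrrel is hypothetical); the field positions 0, 1, 2, 6 and 10 correspond to CUI1, AUI1, STYPE1, STYPE2 and SAB, mirroring the Java excerpt above, and the input is assumed to be plain pipe-delimited MRREL lines:
def expand_mrrel(lines):
    """Expand 'compressed' MRREL rows by filling the empty fields
    (CUI1, AUI1, STYPE1, STYPE2, SAB) from the previous complete row."""
    prev = None
    for line in lines:
        fields = line.rstrip("\n").split("|")
        if fields[0] == "":
            if prev is not None:
                for i in (0, 1, 2, 6, 10):
                    fields[i] = prev[i]
        else:
            prev = fields
        yield "|".join(fields)

# Usage sketch: stream a decompressed MRREL fragment and print the expanded rows.
# with open("MRREL.RRF.nlm") as f:
#     for row in expand_mrrel(f):
#         print(row)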

Useful way to convert string to dictionary using python

I have the below string as input:
'name SP2, status Online, size 4764771 MB, free 2576353 MB, path /dev/sde, log 210 MB, port 5660, guid 7478a0141b7b9b0d005b30b0e60f3c4d, clusterUuid -8650609094877646407--116798096584060989, disks /dev/sde /dev/sdf /dev/sdg, dare 0'
I wrote a function which converts it to a dictionary using Python:
def str_2_json(string):
    str_arr = string.split(',')
    # str_arr[0] = name SP2
    # str_arr[1] = status Online
    json_data = {}
    for i in str_arr:
        # remove whitespace
        stripped_str = " ".join(i.split())  # i.strip()
        subarray = stripped_str.split(' ')
        # subarray[0] = name
        # subarray[1] = SP2
        key = subarray[0]    # key: 'name'
        value = subarray[1]  # value: 'SP2'
        json_data[key] = value
        # {dict 0} = 'name': 'SP2'
        # {dict 1} = 'status': 'Online'
    return json_data
The returned dictionary is then turned into JSON (with jsonify).
Is there a simpler/more elegant way to do this?
You can do this with a regex:
import re

def parseString(s):
    return dict(re.findall(r'(?:(\S+) ([^,]+)(?:, )?)', s))

sample = "name SP1, status Offline, size 4764771 MB, free 2406182 MB, path /dev/sdb, log 230 MB, port 5660, guid a48134c00cda2c37005b30b0e40e3ed6, clusterUuid -8650609094877646407--116798096584060989, disks /dev/sdb /dev/sdc /dev/sdd, dare 0"
parseString(sample)
Output:
{'name': 'SP1',
'status': 'Offline',
'size': '4764771 MB',
'free': '2406182 MB',
'path': '/dev/sdb',
'log': '230 MB',
'port': '5660',
'guid': 'a48134c00cda2c37005b30b0e40e3ed6',
'clusterUuid': '-8650609094877646407--116798096584060989',
'disks': '/dev/sdb /dev/sdc /dev/sdd',
'dare': '0'}
Your approach is good, except for a couple weird things:
You aren't creating a JSON anything, so to avoid confusion I suggest you don't name your returned dictionary json_data or your function str_2_json. JSON, or JavaScript Object Notation, is just that: a standard for denoting an object as text. The objects themselves have nothing to do with JSON.
You can use i.strip() instead of joining the split string (not sure why you did it this way, since you commented out i.strip()).
Some of your values contain multiple spaces (e.g. "size 4764771 MB" or "disks /dev/sde /dev/sdf /dev/sdg"). With your code, you lose everything after the second space in such strings. To avoid this, use stripped_str.split(' ', 1), which limits how many times the string is split.
Other than that, you could create a dictionary in one line using the dict() constructor and a generator expression:
def str_2_dict(string):
    data = dict(item.strip().split(' ', 1) for item in string.split(','))
    return data

print(str_2_dict('name SP2, status Online, size 4764771 MB, free 2576353 MB, path /dev/sde, log 210 MB, port 5660, guid 7478a0141b7b9b0d005b30b0e60f3c4d, clusterUuid -8650609094877646407--116798096584060989, disks /dev/sde /dev/sdf /dev/sdg, dare 0'))
Outputs:
{
'name': 'SP2',
'status': 'Online',
'size': '4764771 MB',
'free': '2576353 MB',
'path': '/dev/sde',
'log': '210 MB',
'port': '5660',
'guid': '7478a0141b7b9b0d005b30b0e60f3c4d',
'clusterUuid': '-8650609094877646407--116798096584060989',
'disks': '/dev/sde /dev/sdf /dev/sdg',
'dare': '0'
}
This is probably the same (practically, in terms of efficiency / time) as writing out the full loop:
def str_2_dict(string):
    data = dict()
    for item in string.split(','):
        key, value = item.strip().split(' ', 1)
        data[key] = value
    return data
Assuming these fields cannot contain internal commas, you can use re.split to both split and remove the surrounding whitespace. It looks like you have different types of fields that should be handled differently. I've added a guess at a schema handler based on field names that can serve as a template for converting the various fields as needed.
And as noted elsewhere, there is no JSON here, so don't use that name.
import re

test = 'name SP2, status Online, size 4764771 MB, free 2576353 MB, path /dev/sde, log 210 MB, port 5660, guid 7478a0141b7b9b0d005b30b0e60f3c4d, clusterUuid -8650609094877646407--116798096584060989, disks /dev/sde /dev/sdf /dev/sdg, dare 0'

def decode_data(string):
    str_arr = re.split(r"\s*,\s*", string)
    data = {}
    for entry in str_arr:
        values = re.split(r"\s+", entry)
        key = values.pop(0)
        # schema processing
        if key in ("disks",):  # multivalue keys
            data[key] = values
        elif key in ("size", "free"):  # convert to int bytes using the 2nd value
            multiplier = {"MB": 10**6, "MiB": 2**20}  # todo: expand as needed
            data[key] = int(values[0]) * multiplier[values[1]]
        else:
            data[key] = " ".join(values)
    return data

decoded = decode_data(test)
for kv in sorted(decoded.items()):
    print(kv)
import json
json_data = json.loads(string)

How does one specify the input when using a CSV with Kur

I'm trying to feed a CSV file to Kur, but I don't know how to specify more than one column in the input without the program crashing. Here's a small example:
model:
  - input:
      - SepalWidthCm
      - SepalLengthCm
  - dense: 10
  - activation: tanh
  - dense: 3
  - activation: tanh
    name: Species
train:
  data:
    - csv:
        path: Iris.csv
        header: yes
  epochs: 1000
  weights: best.w
  log: tutorial-log
loss:
  - target: Species
    name: mean_squared_error
The error:
File "/Users/bytter/.pyenv/versions/3.5.2/bin/kur", line 11, in <module>
sys.exit(main())
File "/Users/bytter/.pyenv/versions/3.5.2/lib/python3.5/site-packages/kur/__main__.py", line 269, in main
sys.exit(args.func(args) or 0)
File "/Users/bytter/.pyenv/versions/3.5.2/lib/python3.5/site-packages/kur/__main__.py", line 48, in train
func = spec.get_training_function()
File "/Users/bytter/.pyenv/versions/3.5.2/lib/python3.5/site-packages/kur/kurfile.py", line 282, in get_training_function
model = self.get_model(provider)
File "/Users/bytter/.pyenv/versions/3.5.2/lib/python3.5/site-packages/kur/kurfile.py", line 148, in get_model
self.model.build()
File "/Users/bytter/.pyenv/versions/3.5.2/lib/python3.5/site-packages/kur/model/model.py", line 282, in build
self.build_graph(input_nodes, output_nodes, network)
File "/Users/bytter/.pyenv/versions/3.5.2/lib/python3.5/site-packages/kur/model/model.py", line 356, in build_graph
for layer in node.container.build(self):
File "/Users/bytter/.pyenv/versions/3.5.2/lib/python3.5/site-packages/kur/containers/container.py", line 281, in build
self._built = list(self._build(model))
File "/Users/bytter/.pyenv/versions/3.5.2/lib/python3.5/site-packages/kur/containers/layers/placeholder.py", line 122, in _build
'Placeholder "{}" requires a shape.'.format(self.name))
kur.containers.parsing_error.ParsingError: Placeholder "..input.0" requires a shape.
Using - input: SepalWidthCm works as expected.
The problem with your approach is that Kur doesn't know how you want the inputs concatenated. Should your input become a 2D tensor of dimensions (2, N) (where N is the number of data points in your CSV file), like this?
[
[SepalWidthCm_0, SepalWidthCm_1, ...],
[SepalLengthCm_0, SepalLengthCm_1, ...]
]
(N.B., that example isn't a very deep-learning friendly structure.) Or should it be combined into a tensor of dimensions (N, 2), like this?
[
[SepalWidthCm_0, SepalLengthCm_0],
[SepalWidthCm_1, SepalLengthCm_1],
...
]
Or maybe you want to apply the same operations to each column in parallel? Regardless, this problem gets a lot harder / more ambiguous to answer when your input data is multi-dimensional (e.g., instead of scalars like length or width, you have vectors or even matrices).
Instead of trying to guess what you want (and possibly getting it wrong), Kur expects each input to be a single data source, which you can then combine however you see fit.
Here are a couple ways you might want your data combined, and how to do it in Kur.
Row-wise Combination. This is the second example above, where we want to combine "rows" of CSV data into tuples, so that the input tensor has dimensionality (batchSize, 2). Then your Kur model would look like:
model:
  # Define the model inputs.
  - input: SepalWidthCm
  - input: SepalLengthCm
  # Concatenate the inputs.
  - merge: concat
    inputs: [SepalWidthCm, SepalLengthCm]
  # Do processing on these "vectorized" inputs.
  - dense: 10
  - activation: tanh
  - dense: 1
  - activation: tanh
  # Output
  - output: Species
Independent Processing, and then Combining. This is the setup where you do some operations on each input column independently, and then you merge them together (potentially with some more operations afterwards). In ASCII-art, this might look like:
INPUT_1 --> dense, activation --\
+---> dense, activation --> OUTPUT
INPUT_2 --> dense, activation --/
In this case, you would have a Kur model that looks like this:
model:
  # First "branch" of processing.
  - input: SepalWidthCm
  - dense: 10
  - activation: tanh
    name: WidthBranch
  # Second "branch" of processing.
  - input: SepalLengthCm
  - dense: 10
  - activation: tanh
    name: LengthBranch
  # Fuse things together.
  - merge:
    inputs: [WidthBranch, LengthBranch]
  # Continue some processing
  - dense: 1
  - activation: tanh
  # Output
  - output: Species
Keep in mind that the merge layer has been around since Kur 0.3, so make sure you are using a recent version.
(Disclaimer: I am the core maintainer of Kur.)

TCL Expect. How to get specific symbols from output?

I have very little experience in programming and almost no experience in Tcl Expect, but I am forced to use it.
I have output like this:
SMG2016-[CONFIG]-SIP-USERS> add user 1200011 adm.voip.partner.ru 0
Creating new Sip-User.
'SIP USER' [00] ID [1]:
name: Subscriber#000
IPaddr: 0.0.0.0
SIP domain: adm.voip.partner.ru
dynamic registration: off
number: 1200011
Numplan: 0
number list:
00) ---
01) none
02) none
03) none
04) none
AON number:
AON type number: subscriber
profile: 0
category: 1
access cat: 0
auth: none
cliro: off
pbxprofile: 0
access mode: on
lines: 1
No src-port control: off
BLF usage: off
BLF subscribers: 10
Intercom mode: sendonly
Intercom priority: 3
So I need to put into a variable the 00 from the 'SIP USER' [00] string, and the number in brackets could be up to four digits in a row.
How should I do this? Any help, please?
UPD:
I ended up with this; it works for me, even after some trouble with the leading zero.
expect -indices -re "'SIP USER' .{1}(\[0-9]{2,4}).{1}"
set userid [string trimleft $expect_out(1,string) "0"]

Reading index content, possible?

Is there a way to analyze the contents of a specific index (fdb file)? I know I can see the index creation statement and try to guess from there, but it would be nice if there were a way to see the contents/records inside an fdb file.
Two tools, cbindex and forestdb_dump, can help. These are available in the bin folder along with the other Couchbase binaries. Note that these tools are not supported, as documented at http://developer.couchbase.com/documentation/server/4.5/release-notes/relnotes-40-ga.html
Given a bucket and index name, cbindex gets index-level details:
couchbases-MacBook-Pro:bin varakurprasad$ pwd
/Users/varakurprasad/Downloads/couchbase-server-enterprise_451_GA/Couchbase Server.app/Contents/Resources/couchbase-core/bin
couchbases-MacBook-Pro:bin varakurprasad$ ./cbindex -server 127.0.0.1:8091 -type scanAll -bucket travel-sample -limit 4 -index def_type -auth Administrator:couch1
ScanAll index:
[airline] ... airline_10
[airline] ... airline_10123
[airline] ... airline_10226
[airline] ... airline_10642
Total number of entries: 4
Given a ForestDB file, the forestdb_dump tool gets lower-level details:
couchbases-MacBook-Pro:varakurprasad$ pwd
/Users/varakurprasad/Library/Application Support/Couchbase/var/lib/couchbase/data/#2i/travel-sample_def_type_1018858748122363634_0.index
couchbases-MacBook-Pro:varakurprasad$ forestdb_dump data.fdb.53 | more
[FDB INFO] Forestdb opened database file data.fdb.53
DB header info:
BID: 1568 (0x620, byte offset: 6422528)
DB header length: 237 bytes
DB header revision number: 3
...
Doc ID: airline_10
KV store name: back
Sequence number: 14637
Byte offset: 2063122
Indexed by the main index
Length: 10 (key), 0 (metadata), 24 (body)
Status: normal
Metadata: (null)
Body:^Fairline
...