Zabbix LLD : how to create items from json? - zabbix

Part of my application looks like this example:
We focus on car brands and the number of cars in a garage:
Ford 12
Toyota 20
Honda 8
etc...
Not very realistic, but let's imagine the numbers change every hour (the discovery rule period).
I use a UserParameter on my host:
UserParameter=discovery.garage.cars, /home/data/car_count.sh
This script returns the refreshed number of cars:
{ "data": [
    {
        "{#BRAND}": "Ford",
        "{#NB}": "10"
    },
    {
        "{#BRAND}": "Toyota",
        "{#NB}": "21"
    },
    etc ...
] }
I create the discovery rule :
name : car brand
type : zabbix agent
key : discovery.garage.cars
update interval : 1h
I don't get errors in the GUI, but I would like to create the dynamic items (brand) and see their dynamic values (nb) for my host.
I have read the documentation several times, but I find it explains this step very poorly, and I can't get it working.
Thanks a lot in advance for your help.

You need two scripts for an LLD:
a script that gives you the LLD list, so [{"{#BRAND}": "..."}], for which you usually set a long interval, 1h or more.
a script that gives you the item value for each brand, so the item can return 10 for Ford, for which you usually set a short interval, 1m or less.
The LLD is meant to create a lot of similar items, with the same configuration and different values.
You can also do it with a single script, making the LLD a dependent item of the item that gives you everything, like this:
a principal item that returns all the data in a single JSON
a dependent item as the discovery rule
with some JSONPath to extract the {#MACRO} values
and finally some dependent prototype items
with the appropriate JSONPath
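As a minimal sketch of the two-script approach, here is what each script could return (Python used purely for illustration; the function names and the car_counts data are assumptions standing in for the real /home/data/car_count.sh backend):

```python
import json

# Hypothetical data source; in the question it comes from /home/data/car_count.sh.
car_counts = {"Ford": 10, "Toyota": 21, "Honda": 8}

def lld_discovery(counts):
    """First script: the LLD list, one {#BRAND} row per brand (long interval)."""
    return json.dumps({"data": [{"{#BRAND}": brand} for brand in counts]})

def item_value(counts, brand):
    """Second script: the value for one prototype item (short interval)."""
    return counts[brand]

print(lld_discovery(car_counts))
print(item_value(car_counts, "Ford"))
```

The discovery script only lists the brands; the per-brand count then comes from the item prototype's own key, called with the brand as a parameter.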
And after all of this... what's the overhead and performance cost of having an LLD every few minutes, instead of every hour or so? Am I uselessly stressing the system?

Related

data.medicare.gov/resource/4pq5-n9py.json Numbers as Dates

Calling the API https://data.medicare.gov/resource/4pq5-n9py.json returns erratic results.
{
...
"reported_cna_staffing_hours_per_resident_per_day" : "2.53304",
"cycle_2_number_of_complaint_health_deficiencies" : "2017-06-22T00:00:00",
"cycle_2_health_deficiency_score" : "0",
...
}
I believe cycle_2_number_of_complaint_health_deficiencies should be a number. The data on the website is correct, so I'm assuming it is a problem with the API.
It appears that field is defined as a floating timestamp, and that the human-readable name, Rating Cycle 1 Standard Survey Health Date, differs from the API name you see in the API call. It looks like it's more an issue of confusing naming conventions.
Take a look at the metadata page for the underlying API names.

JSON feed - How can I have only 1 item parsed daily for an Amazon Alexa Flash briefing

I'm creating a Flash Briefing for Amazon Alexa enabled devices that will provide information each day. I've started creating the JSON file with the information needed and did a test on my Echo Dot to ensure the JSON is set up correctly.
My question now is: how do I make it so that Alexa only reads 1 item each day? Currently, when I ask Alexa to read my flash briefing, she reads all 3. I'd like to have a month's worth or more entered into the JSON file and not have to update it daily.
[
{
"uid": "DAILY_TIP_ITEM_JSON_TTS_0001",
"updateDate": "2018-02-20T00:00:00.0Z",
"titleText": "Today's Motivation",
"mainText": "This is number one.",
"redirectionUrl": "#"
},
{
"uid": "DAILY_TIP_ITEM_JSON_TTS_0002",
"updateDate": "2018-02-21T00:00:00.0Z",
"titleText": "Today's Motivation",
"mainText": "This is number two.",
"redirectionUrl": "#"
},
{
"uid": "DAILY_TIP_ITEM_JSON_TTS_0003",
"updateDate": "2018-02-23T00:00:00.0Z",
"titleText": "Today's Motivation",
"mainText": "This is number three.",
"redirectionUrl": "#"
}
]
My first thought was to create a date key in each item and use an if statement to compare the dates. Is that the best approach or does anyone have a better idea? As you can see in the 3rd item, I used a future date to see if Alexa would ignore it until that date, but she still reads it even though today is the 21st.
Old question, but I can answer this as a reference for others.
You're better off updating the feed (dynamically, somehow) to show only 1 item if you only want one item read. Per the Flash Briefing Skill API Feed Reference:
Provide between 1 and 5 unique feed items at a time.
If more items are provided, Alexa ignores items beyond the first five.
Items should be provided in order from newest to oldest, based on the date value for the item. Alexa may ignore older items.
So, Alexa will read up to five items, from newest to oldest.
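As a sketch of "updating the feed dynamically", the server could filter the pre-written list down to the single current item before returning it (Python for illustration; todays_item is a hypothetical helper, and the feed is truncated to two of the question's items):

```python
import json

# Hypothetical pre-written feed, oldest first; two of the question's items shown.
feed = [
    {"uid": "DAILY_TIP_ITEM_JSON_TTS_0001", "updateDate": "2018-02-20T00:00:00.0Z",
     "titleText": "Today's Motivation", "mainText": "This is number one.", "redirectionUrl": "#"},
    {"uid": "DAILY_TIP_ITEM_JSON_TTS_0002", "updateDate": "2018-02-21T00:00:00.0Z",
     "titleText": "Today's Motivation", "mainText": "This is number two.", "redirectionUrl": "#"},
]

def todays_item(items, today):
    """Return a one-element feed: the newest item not dated in the future.

    `today` is an ISO date string like "2018-02-21"; ISO dates compare
    correctly as plain strings, so no date parsing is needed.
    """
    past = [i for i in items if i["updateDate"][:10] <= today]
    past.sort(key=lambda i: i["updateDate"], reverse=True)
    return past[:1]

print(json.dumps(todays_item(feed, "2018-02-21")))
```

This way future-dated items never reach Alexa at all, instead of relying on her to skip them.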

What is the impact (performance wise) on using linq statements like Where, GroupJoin etc on a mobile app in Xamarin Forms

Although the question might sound a bit vague and misleading, I will try to explain it.
In Xamarin.Forms, I would like to present a list of products. The data come from an API call that delivers JSON.
The format of the data is as follows: A list of products and a list of sizes for each product. An example is the following:
{
"product": {
"id": 1,
"name": "P1",
"imageUrl": "http://www.image.com"
}
}
{
"sizes": [
{
"productId": 1,
"size": "S",
"price": 10
},
{
"productId": 1,
"size": "M",
"price": 12
}
]
}
It seems to me that I have 2 options:
The first is to deliver the data from the API call in the above format and transform it into the list I want to present using LINQ's GroupJoin (hence the title of my question).
The second option is to deliver the finalized list as JSON and just present it in the mobile application without any transformation.
The first option delivers less data but uses a LINQ statement to restructure it; the second option delivers more data, but the data are already structured in the desired way.
Obviously, delivering less data is preferable (the first option), but my question is: will the use of a LINQ GroupJoin "kill" the performance of the application?
Just for clarification, the list that will be presented in the mobile application will have 2 items and the items will be the following:
p1-size: s – price 10
p2-size: m – price 12
Thanks
I've had rather complex sets of LINQ statements; I think the most lists I was working with was six, with a few thousand items in a couple of those lists and hundreds or fewer in the others, joined and filtered with Where, and the performance impact was negligible. This was in a Xamarin.Forms PCL on Droid/iOS.
(I did manage really bad performance once when I was calling LINQ on a LINQ on a LINQ, rather than calling LINQ on a list; i.e. I had to ensure I called ToList() on a given LINQ statement before trying to use it in another join, understandably due to the deferred/lazy execution of LINQ.)
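For illustration, the GroupJoin-style transform amounts to indexing one list by key and then joining against that index, sketched here in Python with the question's data (LINQ to Objects' GroupJoin builds a comparable hash lookup of the inner sequence internally, which is why the impact is usually negligible):

```python
from collections import defaultdict

# Data from the question.
products = [{"id": 1, "name": "P1", "imageUrl": "http://www.image.com"}]
sizes = [
    {"productId": 1, "size": "S", "price": 10},
    {"productId": 1, "size": "M", "price": 12},
]

# Index the sizes by productId once (a single pass), then join against the
# index -- the same shape of work GroupJoin does with its internal lookup.
sizes_by_product = defaultdict(list)
for s in sizes:
    sizes_by_product[s["productId"]].append(s)

rows = [
    {"name": p["name"], "size": s["size"], "price": s["price"]}
    for p in products
    for s in sizes_by_product[p["id"]]
]
print(rows)
```

The cost is roughly one pass over each list, so for a product list of mobile-app size it is far from a bottleneck.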

Creating Family Tree with Neo4J

I have a set of data for a family tree in Neo4J and am trying to build a Cypher query that produces a JSON data set similar to the following:
{
  "Name": "Bob",
  "parents": [
    {
      "Name": "Roger",
      "parents": [
        { "Name": "Robert" },
        { "Name": "Jessica" }
      ]
    },
    {
      "Name": "Susan",
      "parents": [
        { "Name": "George" },
        { "Name": "Susan" }
      ]
    }
  ]
}
My graph has a PARENT relationship between MEMBER nodes (i.e. MATCH (p:Member)-[:PARENT]->(c:Member)). I found "Nested has_many relationships in cypher" and "neo4j cypher nested collect", which end up grouping all parents together for the main child node I am searching for.
Adding some clarity based on feedback:
Every member has a unique identifier. The unions are currently all associated with the PARENT relationship. Everything is indexed so that performance will not suffer. When I run a query to just get back the node graph I get the results I expect. I'm trying to return an output that I can use for visualization purposes with D3. Ideally this will be done with a Cypher query as I'm using the API to access neo4j from the frontend being built.
Adding a sample query:
MATCH (p:Person)-[:PARENT*1..5]->(c:Person)
WHERE c.FirstName = 'Bob'
RETURN p.FirstName, c.FirstName
This query returns a list of each parent for five generations, but instead of showing the hierarchy, it's listing 'Bob' as the child for each relationship. Is there a Cypher query that would show each relationship in the data at least? I can format it as I need to from there...
Genealogical data might comply with the GEDCOM standard and include two types of nodes: Person and Union. The Person node has its identifier and the usual demographic facts. The Union nodes have a union_id and the facts about the union. In GEDCOM, Family is a third element bringing these two together, but in Neo4j I found it suitable to also include the union_id in Person nodes. I used 5 relationships: father, mother, husband, wife and child. The family is then two parents with an inward vector and each child with an outward vector. The image illustrates this.
This is very handy for visualizing connections and generating hypotheses. For example, consider the attached picture and my ancestor Edward G Campbell, the product of union 1917, where three brothers married three Vaught sisters from union 8944 and two married Gaither sisters from union 2945. Also, in the upper left, note how Mahala Campbell married her step-brother John Greer Armstrong. Next to Mahala is an Elizabeth Campbell who is connected by marriage to other Campbells, but is likely directly related to them. Similarly, you can hypothesize about Rachael Jacobs in the upper right and how she might relate to the other Jacobses.
I use bulk inserts, which can populate ~30,000 Person nodes and ~100,000 relationships in just over a minute. I have a small .NET function that returns the JSON from a dataview; this generic solution works with any dataview, so it is scalable. I'm now working on adding other data, such as locations (lat/long), documentation (particularly documents linking people, such as a census), etc.
You might also have a look at Rik van Bruggen's blog on his family data:
Regarding your query
You already create a path pattern here: (p:Person)-[:PARENT*1..5]->(c:Person). You can assign it to a variable tree and then operate on that variable, e.g. returning the tree, or nodes(tree) or rels(tree), or operating on those collections in other ways:
MATCH tree = (p:Person)-[:PARENT*1..5]->(c:Person)
WHERE c.FirstName = 'Bob'
RETURN nodes(tree), rels(tree), tree, length(tree),
[n in nodes(tree) | n.FirstName] as names
See also the cypher reference card: http://neo4j.com/docs/stable/cypher-refcard and the online training http://neo4j.com/online-training to learn more about Cypher.
Don't forget to
create index on :Person(FirstName);
I'd suggest building a method to flatten your data out into an array. If the objects don't have UUIDs, you would probably want to assign them IDs as you flatten, and then have a parent/child link key for each record.
You can then run it as a set of Cypher queries (either making multiple requests to the query REST API, or using the batch REST API), or alternatively dump the data to CSV and use Cypher's LOAD CSV command to load the objects.
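A sketch of that flattening step (Python for illustration; flatten and the child_id field are assumed names; each record keeps the id of the child it is a parent of, matching the direction of the PARENT relationship):

```python
import itertools

# Nested input shaped like the question's desired JSON.
tree = {"Name": "Bob", "parents": [
    {"Name": "Roger", "parents": [{"Name": "Robert"}, {"Name": "Jessica"}]},
    {"Name": "Susan", "parents": [{"Name": "George"}, {"Name": "Susan"}]},
]}

ids = itertools.count(1)

def flatten(node, child_id=None, out=None):
    """Walk the tree assigning ids; each record keeps the id of its child."""
    if out is None:
        out = []
    node_id = next(ids)
    out.append({"id": node_id, "name": node["Name"], "child_id": child_id})
    for parent in node.get("parents", []):
        flatten(parent, node_id, out)
    return out

records = flatten(tree)
print(records)
```

Each record then maps directly onto one CREATE for the node plus one MATCH/CREATE for its PARENT relationship, as shown below.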
An example cypher command with params would be:
CREATE (:Member {uuid: {uuid}, name: {name}})
And then running through the list again with the parent and child IDs:
MATCH (m1:Member {uuid: {uuid1}}), (m2:Member {uuid: {uuid2}})
CREATE (m1)<-[:PARENT]-(m2)
Make sure to have an index on the ID for members!
The only way I have found thus far to get the data I am looking for is to actually return the relationship information, like so:
MATCH ft = (person {firstName: 'Bob'})<-[:PARENT]-(p:Person)
RETURN EXTRACT(n in nodes(ft) | {firstName: n.firstName}) as parentage
ORDER BY length(ft);
Which will return a dataset I am then able to morph:
["Bob", "Roger"]
["Bob", "Susan"]
["Bob", "Roger", "Robert"]
["Bob", "Susan", "George"]
["Bob", "Roger", "Jessica"]
["Bob", "Susan", "Susan"]
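One way to morph those paths back into the nested structure from the question (Python for illustration; nest is a hypothetical helper, and it assumes names are unique among the parents of any one person, since duplicates at the same level would be merged):

```python
# Paths as returned by the query above, shortest first.
paths = [
    ["Bob", "Roger"],
    ["Bob", "Susan"],
    ["Bob", "Roger", "Robert"],
    ["Bob", "Susan", "George"],
    ["Bob", "Roger", "Jessica"],
    ["Bob", "Susan", "Susan"],
]

def nest(paths):
    """Fold each path into a nested {Name, parents} tree."""
    root = {"Name": paths[0][0], "parents": []}
    for path in paths:
        node = root
        for name in path[1:]:
            # Reuse the node for this ancestor if an earlier path created it.
            child = next((p for p in node["parents"] if p["Name"] == name), None)
            if child is None:
                child = {"Name": name, "parents": []}
                node["parents"].append(child)
            node = child
    return root

tree = nest(paths)
print(tree["Name"], [p["Name"] for p in tree["parents"]])
```

In production you would key on a unique member id rather than the first name to avoid merging distinct people who share a name.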

Atomic in MongoDB with transfer money

I'm new to MongoDB.
I'm making a simple application about bank accounts; an account can transfer money to others.
I designed the Account collection like this:
account
{
  "name": "A",
  "age": 24,
  "money": 100
}
account
{
  "name": "B",
  "age": 22,
  "money": 300
}
Assuming user A transfers $100 to user B, there are 2 operations:
1) decrease $100 for user A // update document A
2) increase $100 for user B // update document B
It is said that atomicity applies only to a single document, not to multiple documents.
I have an alternative design:
Bank
{
name:
address:
Account[
{
name:A
age: 22
money: SS
},
{
name:B
age: 23
money: S1S
}
]
}
I have some questions:
If I use the latter design, how can I write the transaction query (can I use the findAndModify() function)?
Does MongoDB support transaction operations like MySQL (InnoDB)?
Some people tell me that using MySQL for this project is the best way, and to use MongoDB only to save the transaction information (an extra collection named Transaction_money). If I use both MongoDB and MySQL (InnoDB), how can I make the operations below atomic (fail or succeed as a whole):
> 1) -100$ with user A
> 2) +100$ with user B
> 3) save transaction
information like
transaction
{
sender: A
receiver: B
money : 100
date: 05/04/2013
}
Thanks so much.
I am not sure this is what you are looking for:
db.bank.update({name : "name"},{ "$inc" : {'Account.0.money' : -100, 'Account.1.money' : 100}})
The update() operation satisfies the ACI properties of ACID. Durability (D) depends on the MongoDB and application configuration when making the query.
You may prefer to use findAndModify(), which won't yield its lock on a page fault.
MongoDB provides transactional behaviour only within a single document.
If your application requirements are this simple, I don't understand why you are trying to use MongoDB. No doubt it's a good data store, but I suspect MySQL will satisfy all your requirements.
Just FYI: there is a doc that addresses exactly the problem you are trying to solve: http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/
But I wouldn't recommend it, because a single operation (transferring money) is turned into a sequence of queries.
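The two-phase-commit recipe from that tutorial can be sketched with in-memory dicts standing in for the accounts and transactions collections (Python for illustration only; in real MongoDB each numbered step is its own update with a state check, which is exactly the "sequence of queries" objection above):

```python
# In-memory stand-ins for the accounts and transactions collections.
accounts = {"A": {"money": 100, "pendingTransactions": []},
            "B": {"money": 300, "pendingTransactions": []}}
transactions = {}

def transfer(txn_id, src, dst, amount):
    # 1. record the transaction in "initial" state
    transactions[txn_id] = {"sender": src, "receiver": dst,
                            "money": amount, "state": "initial"}
    # 2. move to "pending", then apply the change to both accounts,
    #    marking each with the pending transaction id
    transactions[txn_id]["state"] = "pending"
    accounts[src]["money"] -= amount
    accounts[src]["pendingTransactions"].append(txn_id)
    accounts[dst]["money"] += amount
    accounts[dst]["pendingTransactions"].append(txn_id)
    # 3. commit: clear the markers and mark the transaction done;
    #    a recovery job can finish or roll back anything left "pending"
    accounts[src]["pendingTransactions"].remove(txn_id)
    accounts[dst]["pendingTransactions"].remove(txn_id)
    transactions[txn_id]["state"] = "done"

transfer("t1", "A", "B", 100)
print(accounts["A"]["money"], accounts["B"]["money"], transactions["t1"]["state"])
```

The pendingTransactions markers are what make recovery possible: any crash leaves behind enough state to tell which step failed.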
Hope it helped
If I use the latter design, how can I write the transaction query (can I use the findAndModify() function)?
There are a lot of misconceptions about what findAndModify does; it is not a transaction. That being said, it is atomic, which is quite different.
The reason for two-phase commits and transactions in this sense is so that if something goes wrong you can fix it (or at least have a 99.99% chance that corruption hasn't occurred).
The problem with findAndModify is that it has no such transactional behaviour. Not only that, but MongoDB only provides atomicity at the single-document level, which means that, in the same call, if your functions change multiple documents you could actually have an inconsistent in-between state in your database. This, of course, won't do for money handling.
It should be noted that MongoDB is not great in these scenarios and you are trying to use MongoDB away from its purpose. With this in mind, it is clear you have not researched your question well, as your next question shows:
Does MongoDB support transaction operations like Mysql (InnoDB)?
No it does not.
With all that background info aside let's look at your schema:
Bank
{
name:
address:
Account[{
name:A
age: 22
money: SS
},{
name:B
age: 23
money: S1S
}]
}
It is true that you could get transactional behaviour here, whereby the document would never be able to exist in an in-between state, only one or the other; as such, no inconsistencies would exist.
But then we have to talk more about the real world. A document in MongoDB has a 16 MB size limit, and I do not think you would fit an entire bank into one document, so this schema is badly planned and unworkable.
Instead you would require (maybe) a document per account holder in your bank, with a subdocument for their accounts. With this you now have the problem that inconsistencies can occur.
MongoDB, as @Abhishek states, does support client-side two-phase commits, but these are not going to be as good as server-side transactions within the database itself, whereby the mongod can take safety precautions to ensure that the data is consistent at all times.
So coming back to your last question:
Some people tell me that using MySQL for this project is the best way, and to use MongoDB only to save the transaction information (an extra collection named Transaction_money). If I use both MongoDB and MySQL (InnoDB), how can I make the operations below atomic (fail or succeed as a whole):
Personally, I would use something a bit more robust than MySQL; I hear MSSQL is quite good for this.