How to keep only some metrics of the Windows discovery system in Zabbix?

I use Zabbix to monitor some Windows servers in AWS.
The Windows discovery system automatically creates a great many items, as well as triggers, in Zabbix.
I tried to disable all of them and keep only two items:
Service "Dhcp" (DHCP Client) is not running (startup type automatic)
Service "MpsSvc" (Windows Firewall) is not running (startup type automatic)
But after a while, there are many items like
Service "OneSyncSvc_xxxxxxx" (Sync Host_xxxxxxx) is not running (startup type automatic delayed)
I disabled those too, and then later more items with the same structure came in.
If I disable the Windows discovery system, I'm afraid the two items mentioned above would stop working too.
Is there any way to handle this?

Most Zabbix templates you find are best thought of as examples; what one organization needs to monitor is often quite different from what another needs. This is especially true of low-level discovery (LLD), which is what you are dealing with.
The best approach is to find the template and see how it generates the LLD items. There are many ways this can be done, from scripts to SNMP walks to agent items. Regardless, each will have a discovery definition (Configuration, Templates, Discovery rules). On the second tab is a "Filters" page. There you can create a logical set of conditions that allow (or block) discovery.
As a simple example, I have a list of name patterns for interfaces I do not want, e.g. "Unrouted VLAN" or "StackSub". If those names are found, the item is not discovered. If several of your templates will share similar lists, put the list in a global regular expression (Administration, General, Regular expressions). Be careful of the match sense (include/exclude); you can test expressions on the regular expression page (second tab).
In other words, the way you really want to handle it is not to have them discovered at all, as opposed to dealing with them afterwards.
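To make that concrete for this question, a whitelist filter is probably the simplest fix. A minimal sketch, assuming the stock Windows service discovery rule, which exposes the {#SERVICE.NAME} LLD macro (check your own template, since macro names can differ between versions): on the rule's Filters tab, add a single condition such as

Condition A: {#SERVICE.NAME} matches ^(Dhcp|MpsSvc)$

With that in place, everything else, including the per-user OneSyncSvc_xxxxxxx services, is never discovered in the first place.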
Note that items which are no longer discovered (e.g. if you start filtering and eliminate some) are removed after the "Keep lost resources period (in days)" set on the discovery rule. When changing the filters, it is wise to set this to something larger than 0, then review a device's items to see which are no longer discovered and are pending deletion (if I recall correctly, they appear with an orange exclamation mark, some kind of flag you can hover over for an explanation).
If your needs are more complex than static filters can express, you can script the discovery itself and put the logic that decides what is needed into the script, but that is clearly more complex to implement.
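For example, a custom discovery script only has to print LLD JSON. A minimal sketch (the script and its name are hypothetical; the "data" wrapper and the {#SERVICE.NAME} macro follow the agent's service.discovery convention):

// discover-services.ts (hypothetical; wired up via a UserParameter or external check)
// Prints Zabbix LLD JSON so that only the services we care about are discovered.
const wanted = ["Dhcp", "MpsSvc"];
const lld = { data: wanted.map(name => ({ "{#SERVICE.NAME}": name })) };
console.log(JSON.stringify(lld));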

Related

Reagent - methods of storing ui state together with (or separate from?) server-persisted state?

I'm using multiple atoms within a map called app-state and it's working quite well architecturally so far. The state distributed across those atoms is normalised, mirroring how it is stored in datomic, and of course what the client is initialised with is a specific subset of what's in datomic. This is me preparing the way to try out datascript (which is what gave me the aha moment of why client state is so much better fully normalised, even if not using datascript).
I have a question at this point. We all know that some state in reagent is a reflection of what's in the server's database (typically), but there's also state in reagent concerning solely the current condition of the ui. That state will vanish when the page is reloaded and there's (typically) no need to store it on the server.
So, I'm looking at my list of atoms and realising that some of them hold database-record-like maps, i.e. exact reflections of datomic entities (which arrive via transit), which is great.
But now I notice I also want some ui state per datomic entity.
So the question arises whether to add, to what came from datomic, some keys of ui state that is irrelevant to datomic but that the client needs (i.e. dump it into the same nested map). That is entirely possible, but it seems wrong to me, and so suggests (this being my idea as of now): how about a parallel atom per "entity", like #<entity-name>-ui, containing a map (or even a vector of maps, if there are multiple entities) with a set of keys for ui state?
That seems an improvement on what I have ended up with by default as of now, which is a separate atom for every piece of ui state (I've avoided component-local state up to now). (Currently the ui only holds ui state for one record at a time, so these ui atoms need only be concerned with a single current entity.)
But if, say, I made a parallel atom (to avoid mixing ephemeral ui and server state), then ui state could perhaps manageably extend deeper. We could hold, say, ui state per entity so switching current-entity back and forth would remember ui state.
Since this is Stack Overflow, I have to ask a specific question rather than just open a discussion, so: given what I've described, what are some sensible architectural choices for storing state in reagent?
If you are already storing your app state in several component-independent reagent atoms, have a look at https://github.com/day8/re-frame, a widely adopted reagent-powered framework for exactly your case. Essentially it stores all the application state in a single reagent atom but has a well-developed infrastructure to support coordinated storage and updates. They have brilliant documentation with a great high-level explanation of the idea.
Regarding your initial question about server/ui state separation: I think you should definitely go that way. It gives you better separation of concerns and an easier way to update server and ui data independently. This is very easy to achieve with re-frame by storing both parts of the state under separate top-level keys in the re-frame application db, e.g.
{:server {:entity-name ...}
 :ui     {:entity-name ...}}
and then create suitable subscriptions (see the re-frame docs) to retrieve it.

Is it possible to combine a dom-repeat with a platinum service worker?

I want to give users the ability to cache up to 2,600+ items, by groupings (categories of books, individual books, or possibly even just chapters of a certain book if they don't want the whole book). As far as I can tell, it is not possible to precache all of these items because there are 2,600+ of them, and there will be more in the future; the service worker will time out with under a couple hundred. And since service workers either get all or nothing on install (if I understand correctly), do I need to use multiple service workers (with different ids?), or am I thinking about this wrong?
What I am thinking is something like...
<iron-ajax></iron-ajax>
<template is="dom-repeat" items="...">
  <platinum-sw-register auto-register clients-claim skip-waiting>
    <platinum-sw-cache default-cache-strategy="fastest"
                       cache-config-file="../someGenerator.php"></platinum-sw-cache>
  </platinum-sw-register>
</template>
In other words:
Get a list of wanted URLs via iron-ajax (based upon what the user enables for cache)
Iterate through the URLs as groups via dom-repeat
Create a service worker with a customized cache-config for the URL group
Repeat 2 and 3 until done, then present a toast
That someGenerator.php would return a JSON config setup for the particular group of URLs.
My app is a single-page app, with neon-animated-pages: one page representing categories, one for book listings, one for the table of contents of each book, and one for chapter contents. All of the data is obtained via iron-ajax.
Here are some links to demonstrate the issues:
The App
A large non-functional cache-config generated
I suspect that, in order to avoid service worker errors due to redundancy, or overwriting existing caches, I will need to assign individual ids and include them in the generated cache-configs. Does that sound right?
No, I don't think that's the right approach. <dom-repeat> and creating multiple service workers isn't going to accomplish what you want.
It does look like you're bumping into some service worker-imposed timeouts during your install handler due to the delays in fetching the JSON configuration and performing all of the precaching. Taking a step back, are you sure that you need that entire set of URLs precached?
<platinum-sw> will give you runtime caching as well, so that when a browser loads a given URL when there's a network connection available, the resources will be automatically added to the cache and available offline during subsequent return visits.
There are other approaches that would use either window.caches to cache resources from within your controlled page, or using something like postMessage() to communicate a list of additional URLs to cache from your controlled page to your service worker. Both of those approaches would involve going beyond the default functionality you get from using <platinum-sw> and digging into the internals a bit.
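For illustration, here is a minimal sketch of the window.caches approach, run from the controlled page once iron-ajax has returned the list of URLs the user selected. The function and cache name are assumptions; caches.open and cache.addAll are the standard Cache API:

// Cache a user-selected group of URLs from the page, outside the service worker's install handler.
async function cacheGroup(urls: string[]): Promise<void> {
  const cache = await caches.open('book-content-v1'); // assumed cache name
  await cache.addAll(urls); // fetches and stores each URL; rejects if any single request fails
}

The postMessage() variant is the same idea, except that the page sends the URL list to the service worker and the worker's message handler does the caches.open/addAll work.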

How does Chrome update URL bar completions?

I really enjoy using Chrome's URL bar because it remembers commonly-visited sites and often suggests a good completion based on what I've typed and/or visited before. So, for example, I can type t in the URL bar and Chrome will automatically fill it in with twitter.com, or I can type maps and Chrome will fill in the .google.com. This gives me the convenience of data-driven domain name shortcuts without having to maintain an explicit list.
What I'm wondering, though, is how Chrome determines that an old shortcut should be replaced with a new one. For example, if I visit twitter.com often, then that becomes the completion when I type t. But if I then start visiting twilio.com often enough, then, after some time, Chrome will start to fill that in as the default completion for t. What I can't figure out is how or when that transition takes place. It also seems that there are (at least) two cases involved: one for domain names, and another for path strings, because if I visit a certain full URL often, and then want to get to the root of the same domain, I end up having to type the entire domain name out to get Chrome to ignore the full-URL completion.
If I had to guess, I'd imagine that Chrome stores the things that I type in the URL bar in a trie whose values are the number of times that a particular string has been typed (and/or visited?). Then I'd imagine it has some sort of exponential decay model for the "counts" in the trie. But this is just a guess. Does anyone know how this updating process happens?
Well, I ended up finding some answers by having a look at the Chromium source code; I'd imagine that Chrome itself uses this code without too much modification.
When you type something into the search/URL bar (which is apparently called the "Omnibox"), Chrome starts looking for suggestions and completions that match what you've typed. To do this, there are several "providers" registered with the browser, each of which knows how to make a particular type of suggestion. The URL history provider is one of these.
The querying process is pretty cool, actually. It all happens asynchronously, with particular attention paid to which activity happens in which thread (the main thread being especially important not to block). When the providers find suggestions, they call back to the omnibox, which appears to merge and sort things before updating the UI widget.
History provider
It turns out that URLs in Chrome are stored in at least one, and probably two, SQLite databases (one is on disk; the second, which I know less about, seems to be in memory).
This comment at the top of HistoryURLProvider explains the lookup process, complete with multithreaded ASCII art!
SQLite lookup
Basically, typing in the omnibox causes SQLite to run this SQL query for looking up URLs by prefix. The suggestions are ordered by the number of visits to the URL, as well as by the number of times the URL has been typed.
Interestingly, this is not a trie! The lookup is indeed based on prefix, but the scoring of those lookups does not appear to be aggregated by prefix, as I'd imagined.
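As a toy illustration of the difference (this is not Chromium's code, just the shape of the behaviour it suggests): the prefix only selects candidates, and each URL competes on its own counts rather than on counts aggregated over a prefix subtree:

// Illustrative only: per-URL scores, with the prefix used purely for candidate selection.
interface HistoryEntry { url: string; visitCount: number; typedCount: number; }

function suggest(history: HistoryEntry[], typed: string): HistoryEntry[] {
  return history
    .filter(h => h.url.startsWith(typed)) // prefix lookup, like the SQL query
    .sort((a, b) => b.typedCount - a.typedCount || b.visitCount - a.visitCount);
}

Under this model, twitter.com and twilio.com compete for "t" only on their own visit and typed counts, which matches the behaviour described in the question.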
I had a little less success in determining how the scores in the database are updated. This part of the code updates a URL after a visit, but I haven't yet run across where the counts are decremented (if at all?).
Updating suggestions
What I think is happening regarding the updating of suggestions -- and this is still just a guess right now -- is that the in-memory sqlite database essentially has priority over the on-disk DB, and then whenever Chrome restarts or otherwise flushes the contents of the in-memory DB to disk, the visit and typed counts for each URL get updated at that time. Again, just a guess, but I'll keep looking as I get time.
The code is really nice to read through, actually. I definitely recommend it if you have similar questions about Chrome.

Which verb should be used for custom actions?

Let's say we have a site with a list of items. On each of these items you can start a couple of different processes that will result in some kind of output related to the item in question. How should you design for the most appropriate use of the HTTP verbs? What I would like is multiple links per item, each link triggering one of the actions, but in my scenario that doesn't match the HTTP verb GET, which is what will be used if I use links. On the other hand, I don't want buttons that each sit in a separate form with a different action.
It's somewhat hard to explain, but hopefully you understand; there should be some best practices to apply here.
You should NOT use GET. GET requests should be safe which means they are intended only for information retrieval and should not change the state of the server. (i.e. things like logging are OK, but things that actually update the state of the application are a no-no.) Think of a crawler going over your application. Anything you wouldn't mind a crawler going through is fine for GET, but that doesn't sound like your situation (because you said, "start a couple of different processes", but I could be misinterpreting your use case).
That leaves PUT, DELETE and POST. PUT and DELETE must be idempotent, meaning that multiple identical requests should have the same effect as a single request. So if you had a request that updated a person's name, for example, if you called it once or 100 times, the person's name would still be the same, so it is idempotent.
POST is the most flexible verb. If the processes you are kicking off are not safe or idempotent (or even if they are) you can use POST, which simply doesn't guarantee anything about safety or idempotency. The disadvantages there are:
If you use POST when GET is more semantically correct, it is less communicative of the intent of your request, since POST usually means you are sending a payload.
You can't take advantage of the web's caching infrastructure, which is what makes it so scalable.
In the past, I have used POST with query args to specify custom actions. It made sense in my use case because most of my custom actions needed to pass a payload. Since you do not want to use buttons, you can use GET with query args to specify the different actions, but you have to be very careful that the action you are taking has no side effects and is idempotent. As noted in the comment by @jhericks below, there are many things in the network that assume GETs are safe and may repeat them.
From a purely RESTful perspective, though, this is not ideal. Your items will have a specific URI, and a GET on that URI will return the item's representation. Running actions on the item is effectively a change in the state of the item's representation, and that should be done with a POST (or a PUT, depending on who you ask and whether your web server supports PUT). In real life, though, using query args is an easy workaround and may make sense for your use case.
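As a sketch of what that can look like (the URL shape and payload here are illustrative, not anything prescribed by HTTP or REST):

// Trigger a custom action with POST: a query arg names the action, the payload carries its parameters.
await fetch('/items/42?action=print', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ format: 'pdf' }), // hypothetical action parameters
});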
I'm not sure I fully understand your question, but here's a quick paragraph which might help you.
REST is about making smart clients and simple servers. GET, PUT and DELETE represent the basic operations of file access at the lowest level. What you should be doing is completely ignoring anything the server can offer and offloading that work onto clients.
So the question is: why is the server being triggered to do many things? Why can't the client do all of these things itself?
Mike Brown

How do you handle exceptional cases

This is a common situation, but here is the latest example:
Companies have various contact data (addresses, phone numbers, e-mails...). When they create a job ad, they have checkboxes where they choose how they want to be contacted; it is basically descriptive data. A user reading an ad sees something like "You can apply by mail, in person...", except if it's "through web portal" or "by e-mail", because then the appropriate buttons should appear. These options are stored in the database, and the client (the owner of the site, not the company placing the ad) can change them (e.g. they can add "by telepathy" or whatever), yet if they tamper with the "e-mail" and "web portal" options, they break their web site.
So how should I handle data where everything behaves the same way except "this thing", which behaves one way, and "that thing", which behaves some other way, while the data itself is live and should be editable by the client?
You've tagged your question as "language-agnostic", and not all languages cleanly support polymorphism, but that's the way I would approach this.
Each option has some type, and different types require different properties to be set. However, every type supports some sort of "render" method that can display the contact method as needed. Since the properties (phone number, or web address, etc.) are type-specific, you can validate the administrator's input when creating these "objects", to make sure that the necessary data is provided and valid. Since you implement the render method, rather than spitting out HTML provided by a user, you can ensure that the rendered page is correct. It's less flexible, but safer and more user friendly.
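A minimal sketch of that idea (all type and property names here are made up for illustration):

// Each contact method validates its own data and knows how to render itself.
interface ContactMethod {
  render(): string;
}

class EmailContact implements ContactMethod {
  constructor(private address: string) {
    // Validate the administrator's input when the object is created.
    if (!address.includes('@')) throw new Error('invalid e-mail address');
  }
  render(): string {
    return `<a href="mailto:${this.address}">Apply by e-mail</a>`; // renders as a button/link
  }
}

class PlainContact implements ContactMethod {
  constructor(private label: string) {}
  render(): string {
    return `You can apply ${this.label}`; // plain descriptive text for everything else
  }
}

The ad page then just iterates over the configured contact methods and calls render() on each, so the special cases and the free-form ones share one code path.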
In the database, you can have one sparsely populated table that holds data for all types of contacts, or a "parent" table with common properties and sub-tables with type-specific properties. It depends on how many types you have and how different they are. In either case, you would have some sort of type indicator, so that you know the type of object to which the data should be bound.
First of all, think twice about whether you really need it. The reason is simple: you are supposed to serve a specific need, and input data is a means of providing that service. If the data does not fit the existing service, then what is its value, and who is the consumer of that specific information?
There are two possible answers: you are expanding your client base, or you need to change the existing service because demand has changed. In both cases you need to start from the business model. If you describe what service you need and what information it should provide, you will avoid much of the specific data and end up with clear requirements that are easy to implement in software.
I'd recommend the resolution pattern for this, given the mention of a database. It's actually a lot simpler than it sounds: you write a database query that returns all the possible options (for example, you read the standard options and the customized options together, using perhaps a UNION or a JOIN depending on your schema), and the COALESCE SQL keyword is then useful to find the first 'resolution' of the option value that isn't NULL.
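The same "first non-NULL wins" idea can also live in application code; as a tiny sketch, TypeScript's ?? operator performs the equivalent resolution (the function name is illustrative):

// Resolve an option: the client's customized value wins when present.
function resolveOption(custom: string | null, standard: string): string {
  return custom ?? standard; // like SQL COALESCE: the first non-null value is used
}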
Well, if all it is is that you have two special options, and anything else is dealt with in the same way, then store your options as strings, and if either of the two special ones appears in the list, show the appropriate UI for that item.
Just check your list of items for the two special ones. Nothing fancy.
By writing a very simple rules engine. You can use an out-of-the-box implementation, or you can roll your own. Since your case seems so simple, I tend to roll my own, because it means fewer dependencies (YMMV).
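A hand-rolled engine can be very small. A sketch of the idea, with all names illustrative:

// Minimal rules engine: the first matching rule wins, otherwise fall back to the default behaviour.
interface Rule<T> {
  matches(item: T): boolean;
  apply(item: T): void;
}

function runRules<T>(rules: Rule<T>[], item: T, fallback: (item: T) => void): void {
  const rule = rules.find(r => r.matches(item));
  if (rule) {
    rule.apply(item);
  } else {
    fallback(item);
  }
}

For the job-ad case, that would mean one rule matching "e-mail", one matching "web portal", and a fallback that renders the option as plain descriptive text.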