Sensu client subscriptions not responding - json

I have set up the Sensu server and client successfully and everything is working except one thing. In this image
you can see that there are alerts for the mysql and web ports, but I have only given the "mysql" subscription right now in the client.json file on my client system. I removed the "webserver" subscription from client.json (which I had added initially, before replacing it with "mysql"), but the checks associated with the "webserver" subscription are still displayed. Why is this, and how do I display only the checks associated with the given subscription? Here is my client.json:
{
  "client": {
    "name": "sensuclient2",
    "address": "127.0.0.1",
    "keepalive": {
      "thresholds": {
        "warning": 60,
        "critical": 120
      },
      "handlers": ["default", "mailer", "sns"]
    },
    "subscriptions": [
      "mysql"
    ]
  }
}

It's possible Uchiwa is showing older checks, prior to the change you made to your client configuration file (at least I went through that once!). Try deleting the events. If the API is not running the checks anymore, the events won't come up again.
You can either use sensu-cli to delete the events:
sensu-cli event delete sensuclient2 check_http
https://github.com/agent462/sensu-cli
Or make an API call...
curl -s -i -X DELETE http://yourhost:yourport/events/sensuclient2/check_http
https://sensuapp.org/docs/1.1/api/events-api.html#eventsclientcheck-delete
If the checks do come back, you should check both the server-side and client-side check definitions and the client configuration.
Also, the simplest option is often the best, as #vishal.k himself reminded me:
you can always delete the events using Uchiwa's interface. :)
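To confirm that the server has actually picked up the new subscription list, you can also query the client object through the Sensu API (a quick sketch, assuming the same API host and port as the DELETE call above):
curl -s http://yourhost:yourport/clients/sensuclient2
If "webserver" still appears in the returned subscriptions, the registry still holds the old client definition and the configuration change has not propagated yet.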

Couchbase deleted documents reappearing in database

We are experiencing a problem where deleted documents are reappearing on our Couchbase server.
We have a scenario where documents are created on CBL. These documents are synced up to the server. The user realizes an error has been made and flags the document as incorrect. On the server, the admin can then view all of the flagged documents and delete them. The Sync Gateway has been set up to only sync up these types of documents, i.e. once an edit has been made to them on the server, the changes are not synced back down to CBL.
Here is the process of what is happening:
1. A document is created on CBL with a TTL of 15 days and synced to the sync gateway.
2. The document is updated on CBL and synced to the sync gateway.
3. The document is deleted from the Couchbase Server bucket with a DELETE N1QL query.
4. After the document is deleted from the bucket, it randomly gets added again within a few days.
5. Only documents that are still on the devices, i.e. not older than the TTL of 15 days, are added back to the bucket.
We tried increasing the Metadata Purge Interval to more than 15 days but this did not resolve the problem.
Does anybody have any suggestions or possibly know what could be the problem here?
Couchbase Server Community Edition 6.5.1 build 6299
Sync gateway 2.7.3
Couchbase Lite Android 2.8.1
Thanks in advance!
PS: Here is our Sync Gateway config with the sync function:
"log": [
"*"
],
"adminInterface": "0.0.0.0:4985",
"interface": "0.0.0.0:4984",
"databases": {
"prod": {
"server": "http://localhost:8091",
"bucket": "prod_bukcet",
"username": "sync_gateway",
"password": "XXX",
"enable_shared_bucket_access": true,
"import_docs": "continuous",
"use_views": true,
"users": {
"user_X": {
"password": "XXX",
"admin_channels": ["*"],
"disabled": false
}
},
"sync":`
function sync(doc, oldDoc) {
/* sanity check */
// check if document was removed from server or via SDK
// In this case, just return
if (isRemoved()) {
return;
}
//Only sync down documents that are created on the server
if (doc.deviceDoc == true) {
channel("server");
} else {
if (doc.siteId) {
channel(doc.siteId);
} else {
channel("devices");
}
}
// This is when document is removed via SDK or directly on server
function isRemoved() {
return (isDelete() && oldDoc == null);
}
function isDelete() {
return (doc._deleted == true);
}
}`,
}
}
}
In shared bucket access mode (enable_shared_bucket_access: true), a N1QL delete on a document creates a tombstone. Tombstones are always synced. The metadata purge interval setting on the server determines the period after which the tombstone gets purged on the server, so it is typical to set it to a value that matches the maximum partition window of the clients; that ensures all disconnected clients have the opportunity to receive the deleted document. So setting it to more than 15 days just means that the tombstone will be purged after that interval, and tombstoned documents will be synced down to clients in the meantime.
In your case, if you don't want documents to be synced down to clients because the lifetime of the document is managed independently on the CBL side via the expirationDate(), then purge the document instead of deleting it on the server.
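If you go the purge route, one way to do it is through the Sync Gateway admin REST API (a sketch, assuming the admin interface from the config above on port 4985; the document ID is a placeholder and "*" purges all of its revisions):
# purge every revision of the document so no tombstone is left to sync down
curl -X POST "http://localhost:4985/prod/_purge" \
  -H "Content-Type: application/json" \
  -d '{"some-doc-id": ["*"]}'
Unlike a delete, a purge leaves no tombstone behind, so there is nothing for the clients to pull back down.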

Unable to extend schema within a verified sub domain directory

I live in an enterprise environment where most of our production domains are currently non-routable (e.g. .local).
I tried extending the schema, but the non-routable domain cannot be verified, and I don't think the default .onmicrosoft domain can be either. My enterprise allows me to easily create subdomains, so I attached one and verified it for testing purposes, and ran into the same verified-domain error.
Per the documentation, I should be able to either use the ID of my domain name or just the schema name and get 8 random alpha characters added. Neither approach works in this case.
POST: https://graph.microsoft.com/v1.0/schemaExtensions
{
  "id": "idmdomain.sub.domain.net_Owners",
  "description": "Owners of the group",
  "targetTypes": [
    "Group"
  ],
  "properties": [
    {
      "name": "PrimaryOwners",
      "type": "String"
    },
    {
      "name": "SecondaryOwners",
      "type": "String"
    }
  ]
}
Message Received:
{
  "code": "BadRequest",
  "message": "Your organization must own the namespace idmdomain.sub.domain.net as a part of one of the verified domains.",
  "request-id": "1c7363f9-d54b-408a-8b29-2c0d2a94280a",
  "date": "2018-03-22T21:47:22"
}
From the documentation:
If you already have a vanity .com, .net, .gov, .edu or .org domain that you have verified with your tenant, you can use the domain name along with the schema name to define a unique name, in this format: {domainName}_{schemaName}.
For example, if your vanity domain is contoso.com, you can define an id of contoso_mySchema. This is the preferred option.
So in your example, idmdomain.sub.domain.net_Owners should simply be domain_Owners. It shouldn't include idmdomain, sub, net or any ..
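Taking that at face value (treating domain.net as the verified vanity domain, so the prefix is just domain; adjust it to whatever your verified domain actually is), the corrected request would look something like this sketch:
POST: https://graph.microsoft.com/v1.0/schemaExtensions
{
  "id": "domain_Owners",
  "description": "Owners of the group",
  "targetTypes": [
    "Group"
  ],
  "properties": [
    { "name": "PrimaryOwners", "type": "String" },
    { "name": "SecondaryOwners", "type": "String" }
  ]
}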
Thank you Marc for pointing me in the correct direction. Even though my app had the correct delegated permission (Directory.AccessAsUser.All), I now understand that I needed to execute this change in the user context instead of the application context, as application-only is not supported.
For those who come behind me: {domainName}_{schemaName} works if you verify your domain; if you don't and you just use the schema name, then the generated guid works as documented. I recommend reviewing the two links below, as they were what finally unlocked the puzzle for me.
Helped me understand how this is working (authentication vs authorization)
https://developer.microsoft.com/en-us/graph/docs/concepts/rest
Helped me set up Postman to quickly validate
https://blogs.msdn.microsoft.com/softwaresimian/2017/10/05/using-postman-to-call-the-graph-api-using-azure-active-directory-aad/
I should add a few changes for the Postman route...
Auth URL
https://login.microsoftonline.com/yourtennantid/oauth2/authorize?resource=https%3A%2F%2Fgraph.microsoft.com
Access Token URL
https://login.microsoftonline.com/yourtennantid/oauth2/token
Scope = Directory.AccessAsUser.All
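For anyone who prefers curl to Postman, the token exchange against the v1 endpoint looks roughly like this (a sketch assuming an authorization-code flow; the tenant id, client id, client secret, code and redirect URI are all placeholders):
curl -X POST "https://login.microsoftonline.com/yourtennantid/oauth2/token" \
  -d "grant_type=authorization_code" \
  -d "client_id=yourclientid" \
  -d "client_secret=yourclientsecret" \
  -d "code=code_from_the_authorize_redirect" \
  -d "redirect_uri=https%3A%2F%2Flocalhost%2Fcallback" \
  -d "resource=https%3A%2F%2Fgraph.microsoft.com"
The access token that comes back goes into the Authorization: Bearer header of the schemaExtensions call.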

Consul 0.8 ACL migration - how to migrate

TL;DR
How do I migrate to the new 0.8 ACL permissions ("pre 0.8" enforcement) on my current 0.7.3 setup?
Current setup
I am currently running an ACL-enabled Consul 0.7.3 stack.
With Consul 0.8, ACLs will finally also cover services and nodes, so that nodes/services (e.g. the consul service) are no longer shown to anonymous users. This is exactly what I need. Today I tried to enable the new ACLs "pre 0.8" using https://www.consul.io/docs/agent/options.html#acl_enforce_version_8
After doing so, my nodes could no longer authenticate against the master (if authentication is the problem at all).
I run the Consul network with gossip enabled. I have configured an acl_master_token:
{"acl_master_token": "<token>"}
and a token for the agents:
{"acl_token": "<token>"}
which all agents use / are configured with.
I have these ACL defaults:
{
  "acl_datacenter": "stable",
  "acl_default_policy": "deny",
  "acl_down_policy": "deny"
}
and my Consul config looks like this:
{
  "datacenter": "stable",
  "data_dir": "/consul/data",
  "ui": true,
  "dns_config": {
    "allow_stale": false
  },
  "log_level": "INFO",
  "node_name": "dwconsul",
  "client_addr": "0.0.0.0",
  "server": true,
  "bootstrap": true,
  "acl_enforce_version_8": true
}
What happens
When I boot, I cannot see my nodes/services using my token at all, nor can the nodes/agents register with the master.
Question
What exactly is needed to get the following:
All agents can see all nodes, all services and all KVs.
Anonymous sees nothing: no KV, services or nodes (that's what is possible with 0.8).
I looked at https://www.consul.io/docs/internals/acl.html "ACL Changes Coming in Consul 0.8" but I could not wrap my head around it. Should I now use https://www.consul.io/docs/agent/options.html#acl_agent_master_token instead of acl_token?
Thank you for any help. I guess I will not be the only one on this migration path with this particular interest; a lot of people are interested in this, so you would be helping all of them :)
It looks like the new node policy is preventing the nodes from registering properly. This should fix things:
On your Consul servers configure them with an acl_agent_token that has a policy that can write to any node, like this: node "" { policy = "write" }.
On your Consul agents, configure them with a similar one to the servers to keep things open, or you can give them a token with a more specific policy that only lets them write to some allowed prefix.
Note this gets set as the acl_agent_token which is used for internal registration operations. The acl_agent_master_token is used as kind of an emergency token to use the /v1/agent APIs if there's something wrong with the Consul servers, but it only applies to the /v1/agent APIs.
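Putting that together, the ACL-related part of an agent's configuration could look something like this sketch (the token values are placeholders for tokens you have created; the same shape works on the servers):
{
  "acl_datacenter": "stable",
  "acl_default_policy": "deny",
  "acl_down_policy": "deny",
  "acl_enforce_version_8": true,
  "acl_agent_token": "<token created with node write rules>",
  "acl_token": "<token created with the read rules below>"
}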
For "all agents can see all nodes and all services and all KVs" you'd add node read privileges to whatever token you are giving to your agents via the acl_token, so you'd add a policy like:
node "" { policy = "read" }
service "" { policy = "read" }
key "" { policy = "read" }
Note that this allows anyone with access to the agent's client interface to read all these things, so you want to be careful with what you bind to (usually only loopback). Or don't set acl_token at all and make callers pass in a token with each request.
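If you manage tokens over the HTTP API rather than the UI, creating such a read-only token with the legacy (pre-1.0) ACL endpoint would look roughly like this (a sketch; run it against a server, authenticated with your acl_master_token):
curl -X PUT "http://localhost:8500/v1/acl/create?token=<acl_master_token>" \
  -d '{
        "Name": "agent-read",
        "Type": "client",
        "Rules": "node \"\" { policy = \"read\" } service \"\" { policy = \"read\" } key \"\" { policy = \"read\" }"
      }'
The call returns the ID of the new token, which is what you would then configure as acl_token on the agents.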

Zabbix API hidden hostgroup/hosts

I'm trying to get all host groups/hosts through the Zabbix API.
I have used the following JSON requests:
{
  "jsonrpc": "2.0",
  "method": "hostgroup.get",
  "params": {
    "output": "extend"
  },
  "auth": "6f38cddc44cfbb6c1bd186f9a220b5a0",
  "id": 1
}
The one for hosts differs only in "host.get" instead of "hostgroup.get".
But unfortunately some information is hidden. The frontend shows everything correctly, but the API output is missing some host groups/hosts.
It's bizarre because one of my self-created host groups is displayed while the other one is not. The same happens with the hosts that are currently inside this host group. As you can see, I don't use any filter option.
Does somebody have a clue?
Thanks in advance!
If your user is not a Zabbix "superadmin", it must have permissions on those host groups. Otherwise you would not be able to retrieve groups or their members.
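For reference, granting a (non-superadmin) user group read access to a host group over the API would look roughly like this sketch (the usrgrpid and the host group id are placeholders, and permission 2 means read-only):
{
  "jsonrpc": "2.0",
  "method": "usergroup.update",
  "params": {
    "usrgrpid": "13",
    "rights": [
      { "id": "8", "permission": 2 }
    ]
  },
  "auth": "6f38cddc44cfbb6c1bd186f9a220b5a0",
  "id": 2
}
After that, hostgroup.get and host.get issued with a token of a user in that group should return the previously missing entries.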

Context Broker, ONTIMEINTERVAL subscription immediately sends request to reference

The problem is that even if I set condValues to PT10S, when I send the request to the Context Broker it calls the reference URL right away, not after 10 seconds, and then it continues to send requests every 10 seconds.
My question: is there a way to avoid the first, initial request?
Here is the body of the request that I send to the server where the Context Broker is installed.
{
  "entities": [{
    "type": "Cycle",
    "isPattern": "false",
    "id": "someid"
  }],
  "attributes": [
    ...
  ],
  "reference": "someurl",
  "duration": "P1M",
  "notifyConditions": [{
    "type": "ONTIMEINTERVAL",
    "condValues": [
      "PT10S"
    ]
  }]
}
At the present moment (Orion 1.1) the initial notification cannot be avoided. However, being able to configure that behaviour would be an interesting feature to develop in the future and, consequently, a GitHub issue was created some time ago about it.
In addition, note that ONTIMEINTERVAL subscriptions are no longer supported, so you should avoid using them:
ONTIMEINTERVAL subscriptions have several problems (they introduce state in the CB, thus making horizontal scaling configuration much harder, and make it difficult to introduce pagination/filtering). Actually, they aren't really needed, as any use case based on ONTIMEINTERVAL notifications can be converted to an equivalent use case in which the receptor runs queryContext at the same frequency (taking advantage of the features of queryContext, such as pagination or filtering).
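The queryContext equivalent mentioned above would be something like the following, run by the receptor itself every 10 seconds (a sketch; the host, port and entity id mirror the subscription above):
curl -s "http://<orion-host>:1026/v1/queryContext" \
  -H "Content-Type: application/json" \
  -d '{
        "entities": [{
          "type": "Cycle",
          "isPattern": "false",
          "id": "someid"
        }]
      }'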
EDIT: the possibility of avoiding the initial notification has finally been implemented in Orion. Details are in this section of the documentation. It is now in the master branch (so if you use the fiware/orion:latest docker image you will get it) and will be included in the next Orion version (2.2.0).
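With that in place, the initial notification can be skipped at subscription creation time. A sketch, assuming the option exposed in that documentation is the skipInitialNotification URI option on the NGSIv2 subscription endpoint (the URL and payload values are placeholders):
curl -X POST "http://<orion-host>:1026/v2/subscriptions?options=skipInitialNotification" \
  -H "Content-Type: application/json" \
  -d '{
        "subject": { "entities": [{ "id": "someid", "type": "Cycle" }] },
        "notification": { "http": { "url": "someurl" } },
        "expires": "2040-01-01T00:00:00.00Z"
      }'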