I am currently working on implementing our naming policies using Azure Policy. I am having some issues with the match / equals / like operators: they seem to match even when I think they should not. For instance,
"value": "[substring(field('name'), sub(length(field('name')), 6), 6)]",
"match": "Prod##"
matches autorb-PermProd-01. As far as I understand, the index here starts six characters from the right, which would mean "rod-01" == "Prod##". That just does not seem right to me. Also, I wonder if there are ways to test these functions locally, as it takes forever to upload policies and test them in my sandbox environment.
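To spell out my reading of the evaluation (zero-based indexing; the arithmetic is mine):
length("autorb-PermProd-01") = 18
sub(18, 6) = 12
substring("autorb-PermProd-01", 12, 6) = "rod-01"
So the policy compares "rod-01" against "Prod##", where # should match a single digit and letters must match exactly, meaning this should not match ('r' != 'P').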
OK, so what I did wrong was that I misunderstood the effect. It does not match, but for some reason the resource is listed as compliant when it already exists, while creating a new resource is denied when the effect is set to deny. Some more details:
"value": "[substring(field('name'), sub(length(field('name')), 5), 5)]",
"match": "Dev##"
---- snipped for clarity ---
"then": {
"effect": "deny"
This will not trigger an alarm for autorb-permdev-01 when it already exists, but will deny its creation.
I have multiple subscriptions to different entities, and at some random point in time one of the subscriptions stops being notified.
This is because the lastNotification attribute is set in the future. Here is an example :
curl 'http://localhost:1026/v2/subscriptions/xxxxxxxxxxxxxxx'
{
"id": "xxxxxxxxxxxxxxx",
...
"status": "active",
...
"notification": {
"timesSent": 1316413,
"lastNotification": "2021-01-20T18:33:39.000Z",
...
"lastFailure": "2021-01-18T12:11:26.000Z",
"lastFailureReason": "Timeout was reached",
"lastSuccess": "2021-01-20T17:12:09.000Z",
"lastSuccessCode": 204
}
}
In this example, lastNotification is ahead of the current time. The subscription will not be notified again until 2021-01-20T18:33:39.000Z.
I tried to modify lastNotification in the mongo database, but that does not change anything. It looks like the value 2021-01-20T18:33:39.000Z is cached.
The field lastFailureReason specifies the reason for the last notification failure associated with that subscription. The "diagnose notification reception problems" section in the documentation explains possible causes for "Timeout was reached". It makes sense for this to fail randomly if your network connection is somehow unstable.
With regard to timestamps in the future (no matter whether it happens for lastNotification, lastFailure or lastSuccess): this is pretty weird and probably not related to Orion Context Broker operation. Orion takes timestamps from the internal clock of the system where it is running, so maybe your system clock is set in the future.
Unless you run Orion Context Broker with -noCache (which in general is not recommended), subscriptions are cached. Thus, if you "hack" them in the DB you will not see the effect until the next cache refresh (refreshes take place at regular intervals, defined by the -subCacheIval parameter).
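For illustration, a minimal sketch (the 30-second interval is just an example value; subscriptions are stored in the csubs collection, assuming a default Orion/MongoDB setup):
contextBroker -subCacheIval 30
Alternatively, restarting Orion after editing the subscription document in MongoDB forces the cache to be rebuilt from the database, so the modified lastNotification value takes effect immediately.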
I am creating a CloudFormation stack with the Elasticsearch service, but it fails on AdvancedSecurityOptions, which works perfectly fine with aws es create-elasticsearch-domain.
My JSON template snippet is below:
...
"AdvancedOptions": {
"rest.action.multi.allow_explicit_index": true
},
"AdvancedSecurityOptions": {
"Enabled": true,
"InternalUserDatabaseEnabled": false,
"MasterUserOptions": {
"MasterUserARN": "arn:aws:iam::1234567890:role/role_name"
}
},
"DomainName": {
"Ref": "ESDomainName"
}
...
I am unable to get this code working; any help related to fine-grained access control would be really appreciated.
AdvancedSecurityOptions is a recent addition to the Amazon Elasticsearch Service, added as part of Fine-Grained Access Control. For now, it is available only via the Console, CLI and API.
I am not sure whether the thread contains outdated info, but according to the official AWS documentation on this link it should be possible to use AdvancedSecurityOptions for Fine-Grained Access Control. It even states at the top of the page that it is meant to be used for FGAC.
Continuing from DNakevski's answer above: for FGAC we need to ensure the following three settings in the CFN template are set to true, since they serve as prerequisites:
EncryptionAtRestOptions
NodeToNodeEncryptionOptions and
HTTPS.
Further, the important parameter for FGAC in the CFN template is AdvancedSecurityOptions, which needs Enabled: true.
Amazon ES/Open Distro for ES provides two ways of securing a domain with FGAC: one is using an IAM user as the master user, and the other is basic auth.
If you take the IAM route, set InternalUserDatabaseEnabled to false and provide only the parameter MasterUserARN: "IAM User ARN" under the MasterUserOptions field.
If you take the basic auth (username and password) approach, set InternalUserDatabaseEnabled to true and provide MasterUserName: "any-name" and MasterUserPassword: "xxx". The password must contain at least one lowercase letter, one uppercase letter, one digit and one special character, otherwise the CFN template will roll back. The failure message is easily seen on the CFN console under events.
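Putting the above together, here is a minimal sketch of the relevant properties (names as in the AWS::Elasticsearch::Domain resource; the ARN is the placeholder from the question):
"EncryptionAtRestOptions": {
    "Enabled": true
},
"NodeToNodeEncryptionOptions": {
    "Enabled": true
},
"DomainEndpointOptions": {
    "EnforceHTTPS": true
},
"AdvancedSecurityOptions": {
    "Enabled": true,
    "InternalUserDatabaseEnabled": false,
    "MasterUserOptions": {
        "MasterUserARN": "arn:aws:iam::1234567890:role/role_name"
    }
}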
I have a simple working CFN yaml here doing the same just in case.
I live in an enterprise environment where most of our production domains are currently non-routable (e.g. .local).
I tried extending the schema, but the non-routable domain cannot be verified, and I don't think the default .onmicrosoft domain can be either. My enterprise allows me to easily create subdomains, so I attached and verified one for testing purposes, and ran into the same verified-domain error.
Per the documentation, I should be able to either use my domain name in the ID or just the schema name and get 8 random alphanumeric characters added. Neither approach works in this case.
POST: https://graph.microsoft.com/v1.0/schemaExtensions
{
"id": "idmdomain.sub.domain.net_Owners",
"description": "Owners of the group",
"targetTypes": [
"Group"
],
"properties": [{
"name": "PrimaryOwners",
"type": "String"
},
{
"name": "SecondaryOwners",
"type": "String"
}
]
}
Message Received:
{
"code": "BadRequest",
"message": "Your organization must own the namespace idmdomain.sub.domain.net as a part of one of the verified domains.",
"request-id": "1c7363f9-d54b-408a-8b29-2c0d2a94280a",
"date": "2018-03-22T21:47:22"
}
From the documentation:
If you already have a vanity .com, .net, .gov, .edu or .org domain that you have verified with your tenant, you can use the domain name along with the schema name to define a unique name, in this format {domainName}_{schemaName}.
For example, if your vanity domain is contoso.com, you can define an id of, contoso_mySchema. This is the preferred option.
So in your example, idmdomain.sub.domain.net_Owners should simply be domain_Owners. It shouldn't include idmdomain, sub, net or any "." characters.
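For illustration, the corrected request would then look like this (assuming, per the above, that the name segment of your verified domain is domain):
POST https://graph.microsoft.com/v1.0/schemaExtensions
{
"id": "domain_Owners",
"description": "Owners of the group",
"targetTypes": [
"Group"
],
"properties": [{
"name": "PrimaryOwners",
"type": "String"
},
{
"name": "SecondaryOwners",
"type": "String"
}
]
}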
Thank you Marc for pointing me in the correct direction. Even though my app had the correct delegated permissions (Directory.AccessAsUser.All), I now understand that I needed to execute this change in the user context instead of the application context, as application-only is not supported.
For those who come behind me: {domainName}_{schemaName} works if you verify your domain; if you don't and you just use the schema name, the generated GUID works as documented. I recommend reviewing the two links below, as they were what finally unlocked the puzzle for me.
Helped me understand how this is working (authentication vs authorization)
https://developer.microsoft.com/en-us/graph/docs/concepts/rest
Helped me set up Postman to quickly validate
https://blogs.msdn.microsoft.com/softwaresimian/2017/10/05/using-postman-to-call-the-graph-api-using-azure-active-directory-aad/
I should add a few changes for the Postman route...
Auth URL
https://login.microsoftonline.com/yourtenantid/oauth2/authorize?resource=https%3A%2F%2Fgraph.microsoft.com
Access Token URL
https://login.microsoftonline.com/yourtenantid/oauth2/token
Scope = Directory.AccessAsUser.All
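If you want to sanity-check the same flow outside Postman, here is a curl sketch of the token request (authorization-code grant; every placeholder value is mine):
curl -X POST 'https://login.microsoftonline.com/yourtenantid/oauth2/token' \
  --data-urlencode 'grant_type=authorization_code' \
  --data-urlencode 'client_id=your-app-id' \
  --data-urlencode 'client_secret=your-app-secret' \
  --data-urlencode 'redirect_uri=https://localhost' \
  --data-urlencode 'code=auth-code-from-the-authorize-step' \
  --data-urlencode 'resource=https://graph.microsoft.com'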
I am designing a web service to regularly receive updates to lists. At this point, a list can still be modeled as a single entity (/lists/myList) or an actual collection with many resources (/lists/myList/entries/<ID>). The lists are large (millions of entries) and the updates are small (often less than 10 changes).
The client will get web service URLs and lists to distribute, e.g.:
http://hostA/service/lists: list1, list2
http://hostB/service/lists: list2, list3
http://hostC/service/lists: list1, list3
It will then push lists and updates as configured. It is likely, but not determined, that there is some database behind the web service URLs.
I have been researching and it seems an HTTP PATCH using the JSON Patch format is the best approach.
Context and examples:
Each list has an identifying name, a priority and millions of entries. Each entry has an ID (determined by the client) and several optional attributes. Example to create a list "requiredItems" with priority 1 and two list entries:
PUT /lists/requiredItems
Content-Type: application/json
{
"priority": 1,
"entries": {
"1": {
"color": "red",
"validUntil": "2016-06-29T08:45:00Z"
},
"2": {
"country": "US"
}
}
}
For updates, the client would first need to know what the list looks like now on the server. For this I would add a property "revision" to the list entity.
Then, I would query this attribute:
GET /lists/requiredItems?property=revision
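The response would then carry just that property, e.g. (response shape of my own choosing):
HTTP/1.1 200 OK
Content-Type: application/json

{ "revision": 3 }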
Then the client would see what needs to change between the revision on the server and the latest revision known by the client and compose a JSON patch. Example:
PATCH /lists/requiredItems
Content-Type: application/json-patch+json
[
{ "op": "test", "path": "/revision", "value": 3 },
{ "op": "add", "path": "/entries/3", "value": { "color": "blue" } },
{ "op": "remove", "path": "/entries/1" },
{ "op": "remove", "path": "/entries/2/country" },
{ "op": "add", "path": "/entries/2/color", "value": "green" },
{ "op": "replace", "path": "/revision", "value": 10 }
]
Questions:
This approach has the drawback of slightly less client support due to the not-often-used HTTP verb PATCH. Is there a more compatible approach without sacrificing HTTP compatibility (idempotency et cetera)?
Modelling the individual list entries as separate resources and using PUT and DELETE (perhaps with ETag and/or If-Match) seems an option (PUT /lists/requiredItems/entries/3, DELETE /lists/requiredItems/entries/1, PUT /lists/requiredItems/revision), but how would I make sure all those operations are applied if the network drops in the middle of an update chain? Is an HTTP PATCH allowed to work on multiple resources?
Is there a better way to 'version' the lists, perhaps implicitly also improving how they are updated? Note that the client determines the revision number.
Is it correct to query the revision number with GET /lists/requiredItems?property=revision? Should it be a separate resource like /lists/requiredItems/revision? If it should be a separate resource, how would I update it atomically (i.e. the list and revision are both updated or both not updated)?
Would it work in JSON patch to first test the revision value to be 3 and then update it to 10 in the same patch?
This approach has the drawback of slightly less client support due to the not-often-used HTTP verb PATCH.
As far as I can tell, PATCH is really only appropriate if your server is acting like a dumb document store, where the action is literally "please update your copy of the document according to the following description".
So if your resource really just is a JSON document that describes a list with millions of entries, then JSON-Patch is a great answer.
But if you are expecting that the patch will, as a side effect, update an entity in your domain, then I'm suspicious.
Is a HTTP PATCH allowed to work on multiple resources?
RFC 5789
The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources
I'm not keen on querying the revision number; it doesn't seem to have any clear advantage over an ETag/If-Match approach. Some obvious disadvantages: the caches between you and the client don't know that the list and the version number are related; a cache will happily tell a client that version 12 of the list is version 7, or vice versa.
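For comparison, a sketch of the ETag variant (status codes and header values are illustrative):
GET /lists/requiredItems
=> 200 OK, ETag: "3"

PATCH /lists/requiredItems
If-Match: "3"
Content-Type: application/json-patch+json
[ ...operations, without the revision test... ]
=> 200 OK with a fresh ETag, or 412 Precondition Failed if the list changed in the meantime.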
Answering my own question. My first bullet point may be opinion-based and, as has been pointed out, I've asked many questions in one post. Nevertheless, here's a summary of what was answered by others (VoiceOfUnreason) and my own additional research:
ETags are HTTP's resource 'hashes'. They can be combined with If-Match headers to have a versioning system. However, ETag-headers are normally not used to declare the ETag of a resource that is being created (PUT) or updated (POST/PATCH). The server storing the resource usually determines the ETag. I've not found anything explicitly forbidding this, but many implementations may assume that the server determines the ETag and get confused when it is provided with PUT or PATCH.
A separate revision resource is a valid alternative to ETags for versioning. This resource must be updated at the same time as the resource it is the revision of.
Commit/rollback transactions are not semantically enforceable at the HTTP level, unless the transaction itself is modelled as a REST resource, which would make things much more complicated.
However, some properties of PATCH allow it to be used for this:
An HTTP PATCH must be applied atomically and can operate on multiple resources. RFC 5789:
The server MUST apply the entire set of changes atomically and never provide (e.g., in response to a GET during this operation) a partially modified representation. If the entire patch document cannot be successfully applied, then the server MUST NOT apply any of the changes.
The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources; i.e., new resources may be created, or existing ones modified, by the application of a PATCH. PATCH is neither safe nor idempotent
A JSON Patch can consist of multiple operations on multiple resources, and either all of them must be applied or none, making it an implicit transaction. RFC 6902: operations are applied sequentially in the order they appear in the array.
Thus, the revision can be modeled as a separate resource and still be updated at the same time. Querying the current revision is a simple GET. Committing a transaction is a single PATCH request containing first a test of the revision, then the operations on the resource(s) and finally the operation to update the revision resource.
The server can still choose to publish the revision as ETag of the main resource.
The problem is that even if I set condValues to PT10S, when I send the request to contextBroker it calls the reference URL right away, not after 10 seconds, and then continues to send requests every 10 seconds.
My question: is there a way to avoid the first initial request?
Here is the body of the request that I send to the server where contextBroker is installed:
{
"entities": [{
"type": "Cycle",
"isPattern": "false",
"id": "someid"
}],
"attributes": [
...
],
"reference": "someurl"
"duration": "P1M",
"notifyConditions": [{
"type": "ONTIMEINTERVAL",
"condValues": [
"PT10S"
]
}]
}
At the present moment (Orion 1.1) the initial notification cannot be avoided. However, being able to configure that behaviour would be an interesting feature to develop in the future and, consequently, a GitHub issue was created about it some time ago.
In addition, note that ONTIMEINTERVAL subscriptions are no longer supported, so you should avoid using them:
ONTIMEINTERVAL subscriptions have several problems (introduce state in CB, thus making horizontal scaling configuration much harder, and makes it difficult to introduce pagination/filtering). Actually, they aren't really needed, as any use case based on ONTIMEINTERVAL notification can be converted to an equivalent use case in which the receptor runs queryContext at the same frequency (and taking advantage of the features of queryContext, such as pagination or filtering).
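For example, the subscription from the question could be replaced by the receptor polling every 10 seconds (entity values taken from the question; the scheduling mechanism, e.g. cron or a loop, is up to you):
curl -X POST 'http://localhost:1026/v1/queryContext' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -d '{
  "entities": [{
    "type": "Cycle",
    "isPattern": "false",
    "id": "someid"
  }]
}'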
EDIT: the possibility of avoiding the initial notification has finally been implemented in Orion. Details are in this section of the documentation. It is now in the master branch (so if you use the fiware/orion:latest docker image you will get it) and will be included in the next Orion version (2.2.0).
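For reference, a sketch of how that is used (assuming the skipInitialNotification URI option described in that documentation section):
curl -X POST 'http://localhost:1026/v2/subscriptions?options=skipInitialNotification' \
  -H 'Content-Type: application/json' \
  -d '{ ...your NGSIv2 subscription... }'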