Context Broker, ONTIMEINTERVAL subscribe immediately sends request to reference - fiware

The problem: even if I set condValues to PT10S, when I send the request to the Context Broker it calls back the reference URL right away, not after 10 seconds, and only then continues to send requests every 10 seconds.
My question: is there a way to avoid the first initial request?
Here is the body of the request that I send to the server where the Context Broker is installed:
{
    "entities": [{
        "type": "Cycle",
        "isPattern": "false",
        "id": "someid"
    }],
    "attributes": [
        ...
    ],
    "reference": "someurl",
    "duration": "P1M",
    "notifyConditions": [{
        "type": "ONTIMEINTERVAL",
        "condValues": [
            "PT10S"
        ]
    }]
}

At the present moment (Orion 1.1), the initial notification cannot be avoided. However, being able to configure that behaviour would be an interesting feature to develop in the future and, consequently, a GitHub issue was created some time ago about it.
In addition, note that ONTIMEINTERVAL subscriptions are no longer supported, so you should avoid using them:
ONTIMEINTERVAL subscriptions have several problems: they introduce state in the Context Broker, thus making horizontal scaling configuration much harder, and they make it difficult to introduce pagination/filtering. Actually, they aren't really needed, as any use case based on ONTIMEINTERVAL notifications can be converted to an equivalent use case in which the receptor runs queryContext at the same frequency (taking advantage of the features of queryContext, such as pagination or filtering), as in the sketch below.
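For illustration, a minimal client-side sketch of that polling pattern; the Orion host, entity id and 10-second period are placeholders taken from the question, and the NGSIv1 queryContext endpoint is assumed to be exposed at /v1/queryContext:

// Poll queryContext at the same frequency the PT10S subscription would use.
// 'http://orion-host:1026' and the entity below are placeholders.
function pollCycle() {
    fetch('http://orion-host:1026/v1/queryContext', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', 'Accept': 'application/json' },
        body: JSON.stringify({
            entities: [{ type: 'Cycle', isPattern: 'false', id: 'someid' }]
        })
    })
    .then(function (res) { return res.json(); })
    .then(function (data) {
        // Process the context elements exactly as the notification receptor would.
        console.log(data.contextResponses);
    })
    .catch(console.error);
}
setInterval(pollCycle, 10000); // PT10S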
EDIT: the possibility of avoiding the initial notification has finally been implemented in Orion. Details are in this section of the documentation. It is now in the master branch (so if you use the fiware/orion:latest docker image you will get it) and will be included in the next Orion version (2.2.0).
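For reference, a sketch of a subscription that skips the initial notification using the NGSIv2 API; the options=skipInitialNotification URI parameter is my recollection of the feature's syntax, so check the documentation section mentioned above for the exact form in your Orion version:

POST /v2/subscriptions?options=skipInitialNotification
Content-Type: application/json
{
    "subject": {
        "entities": [{ "id": "someid", "type": "Cycle" }]
    },
    "notification": {
        "http": { "url": "someurl" }
    }
}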

Related

Deploying ARM template with 2 hostNameBindings returns conflict error: can't modify because another operation is in progress

I am trying to deploy an ARM template through Azure DevOps. I've tried doing a test deployment (Test-AzResourceGroupDeployment) through PowerShell without any issues.
This issue has persisted for several weeks, and I've read some posts stating it disappeared after a few hours or after a day; however, this has not been the case for me.
In Azure DevOps my build succeeds just fine. But when I try to create a release through my release pipeline using the "Azure resource group deployment" task, it fails with the error:
"Code": "Conflict",
"Message": "Cannot modify this site because another operation is in progress. Details: Id: 4f18af87-8848-4df5-82f0-ec6be47fb599, OperationName: Update, CreatedTime: 9/27/2019 8:55:26 AM, RequestId: 691b5183-aa8b-4a38-8891-36906a5e2d20, EntityType: 3"
Update
I have since noticed that the error surfaces when trying to deploy the hostNameBindings for the site.
I have 2 different hostNameBindings in my template, which causes the failure.
It apparently fails because it tries to deploy both of them at the same time, though I am not aware of a fix for this, so any help would still be appreciated!
I tried to use the copy function, but as far as I know that will make an exact copy of both hostNameBindings, which is not what I need: they have different names and properties. Has anyone got a fix for this?
Make the one hostNameBinding depend on the other host name binding. They will then be executed one after another, and you should not get that error message any more.
"dependsOn": [
"[resourceId('Microsoft.Web/sites/', variables('websitename'))]",
"[resourceId('Microsoft.Web/sites/hostNameBindings/',variables('websitename'), variables('firstbindingame-aftertheslash-sowithoutthewebsitename'))]"
],
It looks like people have already noticed this issue and are trying to fix it:
https://status.azure.com/
I had the same issue when using the Copy function in order to add multiple Custom Domains. Thanks to David Gnanasekaran's blog I was able to fix this issue.
By default, the copy function will execute in parallel. By setting the mode to Serial and the batchSize to 1, I did not receive any "operation is in progress" errors.
Here is my piece of ARM template to set the custom domains.
"copy": {
"name": "hostNameBindingsCopy",
"count": "[length(parameters('customDomainNames'))]",
"mode": "Serial",
"batchSize": 1
},
"apiVersion": "[variables('webApiVersion')]",
"name": "[concat(variables('webAppName'), '/', parameters('customDomainNames')[copyIndex()])]",
"type": "Microsoft.Web/sites/hostNameBindings",
"kind": "string",
"location": "[resourceGroup().location]",
"condition": "[greater(length(parameters('customDomainNames')), 0)]",
"dependsOn": [
"[resourceId('Microsoft.Web/sites', variables('webAppName'))]"
],
"properties": {
"customHostNameDnsRecordType": "CName",
"hostNameType": "Verified",
"siteName": "parameters('webAppName')"
}

Unable to extend schema within a verified sub domain directory

I live in an enterprise environment where most of our production domains are currently non-routable (e.g. .local).
I tried extending the schema, but the non-routable domain cannot be verified, and I don't think the default .onmicrosoft domain could be either. My enterprise allows me to easily create subdomains, so I attached one and verified it for testing purposes, and ran into the same verified-domain error.
Per the documentation, I should be able to either use the ID of my domain name, or just the schema name and get 8 random alpha characters added. Neither approach works in this case.
POST: https://graph.microsoft.com/v1.0/schemaExtensions
{
    "id": "idmdomain.sub.domain.net_Owners",
    "description": "Owners of the group",
    "targetTypes": [
        "Group"
    ],
    "properties": [{
            "name": "PrimaryOwners",
            "type": "String"
        },
        {
            "name": "SecondaryOwners",
            "type": "String"
        }
    ]
}
Message Received:
{
    "code": "BadRequest",
    "message": "Your organization must own the namespace idmdomain.sub.domain.net as a part of one of the verified domains.",
    "request-id": "1c7363f9-d54b-408a-8b29-2c0d2a94280a",
    "date": "2018-03-22T21:47:22"
}
From the documentation:
If you already have a vanity .com, .net, .gov, .edu or .org domain that you have verified with your tenant, you can use the domain name along with the schema name to define a unique name, in this format: {domainName}_{schemaName}.
For example, if your vanity domain is contoso.com, you can define an id of contoso_mySchema. This is the preferred option.
So in your example, idmdomain.sub.domain.net_Owners should simply be domain_Owners. It shouldn't include idmdomain, sub, net, or any dots.
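For example, assuming domain is the label of the domain you verified, the corrected request would be:

POST: https://graph.microsoft.com/v1.0/schemaExtensions
{
    "id": "domain_Owners",
    "description": "Owners of the group",
    "targetTypes": [
        "Group"
    ],
    "properties": [
        { "name": "PrimaryOwners", "type": "String" },
        { "name": "SecondaryOwners", "type": "String" }
    ]
}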
Thank you Marc for pointing me in the correct direction. Even though my app had the correct delegated permissions (Directory.AccessAsUser.All), I now understand that I needed to execute this change in the user context instead of the application context, as the application context is not supported.
For those that come behind me: {domainName}_{schemaName} works if you have validated your domain; if you haven't, and you just use the schema name, then the generated GUID works as documented. I recommend reviewing the two links below, as they were what finally unlocked the puzzle for me.
Helped me understand how this works (authentication vs authorization):
https://developer.microsoft.com/en-us/graph/docs/concepts/rest
Helped me set up Postman to quickly validate:
https://blogs.msdn.microsoft.com/softwaresimian/2017/10/05/using-postman-to-call-the-graph-api-using-azure-active-directory-aad/
I should add, for the Postman route, a few changes...
Auth URL
https://login.microsoftonline.com/yourtenantid/oauth2/authorize?resource=https%3A%2F%2Fgraph.microsoft.com
Access Token URL
https://login.microsoftonline.com/yourtenantid/oauth2/token
Scope = Directory.AccessAsUser.All

HTTP ReST: update large collections: better approach than JSON PATCH?

I am designing a web service to regularly receive updates to lists. At this point, a list can still be modeled as a single entity (/lists/myList) or an actual collection with many resources (/lists/myList/entries/<ID>). The lists are large (millions of entries) and the updates are small (often less than 10 changes).
The client will get web service URLs and lists to distribute, e.g.:
http://hostA/service/lists: list1, list2
http://hostB/service/lists: list2, list3
http://hostC/service/lists: list1, list3
It will then push lists and updates as configured. It is likely but undetermined if there is some database behind the web service URLs.
I have been researching and it seems an HTTP PATCH using the JSON Patch format is the best approach.
Context and examples:
Each list has an identifying name, a priority and millions of entries. Each entry has an ID (determined by the client) and several optional attributes. Example to create a list "requiredItems" with priority 1 and two list entries:
PUT /lists/requiredItems
Content-Type: application/json
{
    "priority": 1,
    "entries": {
        "1": {
            "color": "red",
            "validUntil": "2016-06-29T08:45:00Z"
        },
        "2": {
            "country": "US"
        }
    }
}
For updates, the client would first need to know what the list currently looks like on the server. For this, I would add a "revision" property to the list entity.
Then, I would query this attribute:
GET /lists/requiredItems?property=revision
Then the client would see what needs to change between the revision on the server and the latest revision known by the client and compose a JSON patch. Example:
PATCH /lists/requiredItems
Content-Type: application/json-patch+json
[
    { "op": "test", "path": "/revision", "value": 3 },
    { "op": "add", "path": "/entries/3", "value": { "color": "blue" } },
    { "op": "remove", "path": "/entries/1" },
    { "op": "remove", "path": "/entries/2/country" },
    { "op": "add", "path": "/entries/2/color", "value": "green" },
    { "op": "replace", "path": "/revision", "value": 10 }
]
Questions:
This approach has the drawback of slightly less client support due to the not-often-used HTTP verb PATCH. Is there a more compatible approach without sacrificing HTTP compatibility (idempotency et cetera)?
Modelling the individual list entries as separate resources and using PUT and DELETE (perhaps with ETag and/or If-Match) seems an option (PUT /lists/requiredItems/entries/3, DELETE /lists/requiredItems/entries/1, PUT /lists/requiredItems/revision), but how would I make sure all those operations are applied when the network drops in the middle of an update chain? Is an HTTP PATCH allowed to work on multiple resources?
Is there a better way to 'version' the lists, perhaps implicitly also improving how they are updated? Note that the client determines the revision number.
Is it correct to query the revision number with GET /lists/requiredItems?property=revision? Should it be a separate resource like /lists/requiredItems/revision? If it should be a separate resource, how would I update it atomically (i.e. the list and revision are both updated or both not updated)?
Would it work in JSON patch to first test the revision value to be 3 and then update it to 10 in the same patch?
This approach has the drawback of slightly less client support due to the not-often-used HTTP verb PATCH.
As far as I can tell, PATCH is really only appropriate if your server is acting like a dumb document store, where the action is literally "please update your copy of the document according to the following description".
So if your resource really just is a JSON document that describes a list with millions of entries, then JSON-Patch is a great answer.
But if you are expecting that the patch will, as a side effect, update an entity in your domain, then I'm suspicious.
Is an HTTP PATCH allowed to work on multiple resources?
RFC 5789
The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources
I'm not keen on querying the revision number; it doesn't seem to have any clear advantage over using an ETag/If-Match approach. Some obvious disadvantages: the caches between you and the client don't know that the list and the version number are related; a cache will happily tell a client that version 12 of the list is version 7, or vice versa.
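For illustration, a minimal sketch of that ETag/If-Match flow (the ETag value is a placeholder; in practice the server chooses it):

GET /lists/requiredItems
200 OK
ETag: "abc123"
...

PATCH /lists/requiredItems
If-Match: "abc123"
Content-Type: application/json-patch+json
[ ... operations ... ]

A 412 Precondition Failed response then means the list changed in the meantime, so the client re-reads it and retries.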
Answering my own question. My first bullet point may be opinion-based and, as has been pointed out, I've asked many questions in one post. Nevertheless, here's a summary of what was answered by others (VoiceOfUnreason) and my own additional research:
ETags are HTTP's resource 'hashes'. They can be combined with If-Match headers to build a versioning system. However, ETag headers are normally not used to declare the ETag of a resource that is being created (PUT) or updated (POST/PATCH); the server storing the resource usually determines the ETag. I've not found anything explicitly forbidding a client-supplied ETag, but many implementations may assume that the server determines it and get confused when it is provided with PUT or PATCH.
A separate revision resource is a valid alternative to ETags for versioning. This resource must be updated at the same time as the resource it is the revision of.
It is not semantically enforceable at the HTTP level to have commit/rollback transactions, unless the transaction itself is modelled as a ReST resource, which would make things much more complicated.
However, some properties of PATCH allow it to be used for this:
An HTTP PATCH must be atomic and can operate on multiple resources. RFC 5789:
The server MUST apply the entire set of changes atomically and never provide (e.g., in response to a GET during this operation) a partially modified representation. If the entire patch document cannot be successfully applied, then the server MUST NOT apply any of the changes.
The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources; i.e., new resources may be created, or existing ones modified, by the application of a PATCH. PATCH is neither safe nor idempotent
A JSON Patch document can consist of multiple operations on multiple resources, and either all must be applied or none, making it an implicit transaction. RFC 6902: operations are applied sequentially in the order they appear in the array.
Thus, the revision can be modeled as a separate resource and still be updated at the same time. Querying the current revision is a simple GET. Committing a transaction is a single PATCH request containing first a test of the revision, then the operations on the resource(s) and finally the operation to update the revision resource.
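Concretely, the commit described above could look like this sketch (the separate revision resource is assumed to be exposed both at its own URL for the GET and as /revision inside the patched document):

GET /lists/requiredItems/revision
200 OK
3

PATCH /lists/requiredItems
Content-Type: application/json-patch+json
[
    { "op": "test", "path": "/revision", "value": 3 },
    { "op": "add", "path": "/entries/3", "value": { "color": "blue" } },
    { "op": "replace", "path": "/revision", "value": 10 }
]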
The server can still choose to publish the revision as ETag of the main resource.

Slow response while parsing large JSON response

I have developed a web application based on a RESTful design where the application takes a JSON response from a Java-based web service, displays it in the UI, and refreshes the data every 5 seconds.
The application uses Bootstrap for the UI design, and Backbone and require.js for implementing an MVC structure where the JSON response is parsed as a Backbone collection.
When an admin is using this application, the JSON response size is very large (from 800 to 1100 objects).
This is where things get messy. As per my analysis, the browser is taking up too many resources, so the rest of the application is very slow. For example, if I try to open a modal, the system freezes for some time and opens it slowly, giving a very poor user experience.
As per my analysis, the time is being spent parsing the data.
As a remedy, I am removing all comments in the code and trying to implement gzip compression for the JSON/HTML/CSS/JS files.
A sample of the JSON object is pasted below:
{
    "name": "TEST",
    "state": "Lunch",
    "time": "00:00:09",
    "manager": "TEST",
    "site": "C",
    "skill": "TEST",
    "center": "TEST",
    "teamLead": "TEST",
    "workGroup": "TEST",
    "lanId": "TEST",
    "dbID": "TETS",
    "loginId": "TEST",
    "avgAcwTime": "nn",
    "avgHandleTime": "nn",
    "avgTalkTime": "nn",
    "callsAnswered": "nn",
    "dispSkill": "-",
    "errCode": null,
    "errDesc": null,
    "avgAcwTimeth": "medium",
    "avgHandleTimeth": "high",
    "avgTalkTimeTh": "medium",
    "callsAnsweredTh": "medium",
    "stateTh": "high"
}
Pagination can't be done due to some requirements.
Can anyone suggest something to improve the performance?
Also, I am fetching the data using Backbone.Collection.fetch():
getAgentMetric() {
    this.metrices.fetch({
        url: (isLocal) ? 'http://localhost:8080/jsons/agent.json' : (prev_this.url + '/agentstat'),
        data: JSON.stringify(param),
        type: "POST",
        dataType: "json",
        contentType: "application/json"
    })
    .done(function() {
        // passing the datasource from the ajax call
        prev_this.agentLoacalSource.localdata = prev_this.metrices.toJSON();
    });
    timeout = setTimeout(_.bind(this.getAgentMetric, this), 5000);
},
Browsers can handle a heck of a lot more than a thousand objects without any strain, so I don't think it's the fact that you are simply requesting a large amount of data from the backend. It's more likely that some of your parsing or rendering code is slow.
A few possibilities without seeing any more of your code:
It really depends on what you're doing here, but I'm going to assume that you aren't using a templating library (Hogan.js, Handlebars.js, etc.). You should definitely look into using one, as they speed things up quite a bit and make generating HTML a lot easier.
Are you running .append() for each individual model that you render? This will really slow things down. You should generate all of the HTML that needs to be generated and then run .append() once (see the sketch after this list).
What kind of event listeners are you adding for each model (if any)? Listening to scroll events without a debounce ends up slowing down your browser, especially if you add a bunch of them.
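On the second point, a sketch of what batching the DOM writes could look like in the question's Backbone setup (the table selector is a placeholder, and real code should HTML-escape the values):

// Build all of the markup in memory first...
var html = this.metrices.map(function (model) {
    return '<tr><td>' + model.get('name') + '</td>' +
           '<td>' + model.get('state') + '</td>' +
           '<td>' + model.get('callsAnswered') + '</td></tr>';
}).join('');
// ...then touch the DOM exactly once instead of once per model.
$('#agent-table tbody').html(html);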
Unrelated to your slowness issues, there are a few problems that I see with this code:
Your timeout should be set from an .always() handler on the ajax call to prevent concurrent requests from going out if, for whatever reason, a request is slow:
this.metrices.fetch(...)
    .always(function() {
        timeout = setTimeout(...);
    }.bind(this));
Requests that simply fetch data should use a GET instead of a POST request type; see https://stackoverflow.com/a/3477374/5780021 for more info about this.
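For example, the fetch above could be rewritten like this sketch; whether the /agentstat endpoint accepts its parameters in the query string is an assumption that would need a matching backend change:

this.metrices.fetch({
    url: prev_this.url + '/agentstat',
    data: param,       // jQuery serializes a plain object into the query string for GET
    type: "GET",
    dataType: "json"
});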
I would recommend timing some of your code to see where the slowness is actually happening. This will allow you to determine how long things actually take between two points in the code (a sketch follows the links below):
Firefox console.time
Chrome console.time
IE console.time
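For example, a sketch of separating the request time from the render time (view.render() stands in for whatever rendering code the application runs):

console.time('fetch');
this.metrices.fetch({ /* ... */ })
    .always(function () {
        console.timeEnd('fetch');  // time spent on the request and JSON parsing
        console.time('render');
        view.render();             // placeholder for the actual rendering call
        console.timeEnd('render'); // time spent building the DOM
    });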

PATCH multiple resources

Short: Is it standard-compliant, RESTful, and otherwise a good idea to enable PATCH requests to update a collection of resources, not just a single one, but still individually?
Long:
I'm considering exposing a method for enabling batch, atomic updates to my collection of resources. Example:
PATCH /url/myresources
[
    {
        "op": "add",
        "path": "/1", // ID of the individual resource
        "value": {
            ... full resource representation ...
        }
    },
    {
        "op": "remove",
        "path": "/2"
    },
    {
        "op": "replace",
        "path": "/3/name",
        "value": "New name"
    }
]
The context is a public API of a commercial solution. The benefit of allowing such PATCHes is the atomicity as well as the batch-friendliness: no spamming of requests, no handling failures individually, and so on.
I've consulted https://www.rfc-editor.org/rfc/rfc6902 and https://www.rfc-editor.org/rfc/rfc5789 but couldn't find a definitive answer on whether this is compliant. The RFCs mostly refer to "a resource", but a collection of resources could also be treated as one.
Is this a good idea? Are there better alternatives?
I like this idea. A collection is a resource, too, so acting on it is perfectly good REST.
The semantics of your PATCH request would be that every subresource not listed in the request body is left as it is, and every subresource that is listed is changed as described. Yes, that sounds good to me.
As long as every segment of the request can be executed within that single request, I see no problems. Both your "all in one" request and single requests like the following would be fine:
PATCH /url/myresources/1
[
    {
        "op": "add",
        "path": "", // empty path: the value becomes the entire resource document
        "value": {
            ... full resource representation ...
        }
    }
]