I have an entity in my Orion DB:
{
  "id"=>"User-121",
  "type"=>"User",
  "location"=>{
    "type"=>"geo:point",
    "value"=>"59.851484, 30.199463"
  },
  "time"=>{"type"=>"none", "value"=>222909, "metadata"=>{}}
}
Also, I have three subscriptions to this entity, all of which use the same coordinates in the condition's expression:
Should trigger when the entity is located at least 100 meters away from the reference point:
{
  "id"=>"...",
  "expires"=>"...",
  "status"=>"active",
  "subject"=>{
    "entities"=>[{"id"=>"User-121", "idPattern"=>"", "type"=>"User"}],
    "condition"=>{
      "attributes"=>["location", "time"],
      "expression"=>{
        "q"=>"",
        "geometry"=>"point",
        "coords"=>"59.851484, 30.199463",
        "georel"=>"near;minDistance:100"}
    }
  },
  "notification"=>{
    "callback"=>"http://callback",
    "attributes"=>["time"]
  }
}
Should trigger when the entity is located at most 100 meters away from the reference point:
{
  "id"=>"...",
  "expires"=>"...",
  "status"=>"active",
  "subject"=>{
    "entities"=>[{"id"=>"User-121", "idPattern"=>"", "type"=>"User"}],
    "condition"=>{
      "attributes"=>["location", "time"],
      "expression"=>{
        "q"=>"",
        "geometry"=>"point",
        "coords"=>"59.851484, 30.199463",
        "georel"=>"near;maxDistance:100"}
    }
  },
  "notification"=>{
    "callback"=>"http://callback",
    "attributes"=>["time"]
  }
}
Should trigger when the entity is located at the reference point (has the same coordinates):
{
  "id"=>"...",
  "expires"=>"...",
  "status"=>"active",
  "subject"=>{
    "entities"=>[{"id"=>"User-121", "idPattern"=>"", "type"=>"User"}],
    "condition"=>{
      "attributes"=>["location", "time"],
      "expression"=>{
        "q"=>"",
        "geometry"=>"point",
        "coords"=>"59.851484, 30.199463",
        "georel"=>"equals"}
    }
  },
  "notification"=>{
    "callback"=>"http://callback",
    "attributes"=>["time"]
  }
}
The problem is that all of the subscriptions send notifications each time I update the entity, and it doesn't even depend on the entity's coordinate values: whatever the coordinates are, I always receive three notifications on every update.
What am I doing wrong?
The Orion Context Broker version is 0.28.0 (git version: 5c1afdb3dd748580f10e1809f82462d83d2a17d4)
Geo-location features in NGSIv2 subscriptions have not yet been implemented (as of Orion 0.28.0). Note that NGSIv2 is still in beta status, and the specification (where geometry, georel and coords are defined as part of expression) is sometimes a step ahead of the implementation.
There is a GitHub issue about this, which you can subscribe to in order to know when the feature gets implemented.
EDIT: geo-location features in NGSIv2 subscriptions will be available in Orion 1.3.0 (to be released by the end of August or beginning of September). If you don't want to wait, note that the functionality is also available in the develop branch (and the associated Docker image).
I want to write data to CloudWatch using the AWS-SDK (or whatever may work).
Looking through the docs, the only method that looks remotely like publishing data to CloudWatch is the putMetricData method, but it's hard to find an example of using it.
Does anyone know how to publish data to CloudWatch?
When I call this:
cw.putMetricData({
  Namespace: 'ec2-memory-usage',
  MetricData: [{
    MetricName: 'first',
    Timestamp: new Date()
  }]
}, (err, result) => {
  console.log({ err, result });
});
I get this error:
{ err:
   { InvalidParameterCombination: At least one of the parameters must be specified.
       at Request.extractError (/Users/alex/codes/interos/jenkins-jobs/jobs/check-memory-ec2-instances/node_modules/aws-sdk/lib/protocol/query.js:50:29)
       at Request.callListeners (/Users/alex/codes/interos/jenkins-jobs/jobs/check-memory-ec2-instances/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
       at Request.emit (/Users/alex/codes/interos/jenkins-jobs/jobs/check-memory-ec2-instances/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
       at Request.emit (/Users/alex/codes/interos/jenkins-jobs/jobs/check-memory-ec2-instances/node_modules/aws-sdk/lib/request.js:683:14)
       at Request.transition (/Users/alex/codes/interos/jenkins-jobs/jobs/check-memory-ec2-instances/node_modules/aws-sdk/lib/request.js:22:10)
       at AcceptorStateMachine.runTo (/Users/alex/codes/interos/jenkins-jobs/jobs/check-memory-ec2-instances/node_modules/aws-sdk/lib/state_machine.js:14:12)
       at /Users/alex/codes/interos/jenkins-jobs/jobs/check-memory-ec2-instances/node_modules/aws-sdk/lib/state_machine.js:26:10
       at Request.<anonymous> (/Users/alex/codes/interos/jenkins-jobs/jobs/check-memory-ec2-instances/node_modules/aws-sdk/lib/request.js:38:9)
       at Request.<anonymous> (/Users/alex/codes/interos/jenkins-jobs/jobs/check-memory-ec2-instances/node_modules/aws-sdk/lib/request.js:685:12)
       at Request.callListeners (/Users/alex/codes/interos/jenkins-jobs/jobs/check-memory-ec2-instances/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
     message: 'At least one of the parameters must be specified.',
     code: 'InvalidParameterCombination',
     time: 2019-07-08T19:41:41.191Z,
     requestId: '688a4ff3-a1b8-11e9-967e-431915ff0070',
     statusCode: 400,
     retryable: false,
     retryDelay: 7.89360948163893 },
  result: null }
You're getting this error because you're not specifying any metric data. You're only setting the metric name and the timestamp. You also need to send some values for the metric.
Let's say your application is measuring the latency of requests and you observed 5 requests, with latencies of 100ms, 500ms, 200ms, 200ms and 400ms. You have a few options for getting this data into CloudWatch (hence the 'At least one of the parameters must be specified.' error).
Option 1: You can publish these 5 values one at a time by setting Value within the metric data object. This is the simplest way to do it. CloudWatch does all the aggregation for you, and you get percentiles on your metrics. I would not recommend this approach if you need to publish many observations: it results in the most requests made to CloudWatch, which may mean a big bill or throttling from CloudWatch's side if you start publishing too many observations.
For example:
MetricData: [{
  MetricName: 'first',
  Timestamp: new Date(),
  Value: 100
}]
Option 2: You can aggregate the data yourself and construct and publish StatisticValues. This is more complex on your end, but results in the fewest requests to CloudWatch. For example, you can aggregate for a minute and execute one put per metric every minute. This will not give you percentiles (since you're aggregating data on your end, CloudWatch doesn't know the exact values you observed). I would recommend this if you do not need percentiles.
For example:
MetricData: [{
  MetricName: 'first',
  Timestamp: new Date(),
  StatisticValues: {
    Maximum: 500,
    Minimum: 100,
    SampleCount: 5,
    Sum: 1400
  }
}]
Option 3: You can count the observations and publish Values and Counts. This is kind of the best of both worlds. There is some complexity on your end, but counting is arguably easier than aggregating into StatisticValues. You're still sending every observation, so CloudWatch will do the aggregation for you and you'll get percentiles. This format also allows more data to be sent per request than option 1. I would recommend this if you need percentiles.
For example:
MetricData: [{
  MetricName: 'first',
  Timestamp: new Date(),
  Values: [100, 200, 400, 500],
  Counts: [1, 2, 1, 1]
}]
See here for more details for each option: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudWatch.html#putMetricData-property
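Putting option 1 together end to end, here is a minimal sketch (the region is an assumption, and credentials are assumed to be configured elsewhere, e.g. via environment variables or the shared config file):
const AWS = require('aws-sdk');
// The region here is illustrative; use whichever region your metrics should live in.
const cw = new AWS.CloudWatch({ region: 'us-east-1' });

cw.putMetricData({
  Namespace: 'ec2-memory-usage', // namespace from the question
  MetricData: [{
    MetricName: 'first',
    Timestamp: new Date(),
    Value: 100,           // the actual observation; this is what was missing
    Unit: 'Milliseconds'  // optional, but makes the metric easier to interpret
  }]
}, (err, result) => {
  console.log({ err, result });
});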
What is the proper JSON syntax to update a multi-choice list item field using the Microsoft Graph?
Multi-choice fields return a JSON array of strings, like:
GET: /v1.0/sites/{siteId}/lists/{listId}/items/{itemId}
"CAG_x0020_Process_x0020_Status": [
"Proposed Funding - Customer Billed",
"Proposed Funding - Sales Funded",
"SOW - Needed"
]
However, when using the same syntax to update the field a 400 invalid request is returned.
PATCH: /v1.0/sites/{siteId}/lists/{listId}/items/{itemId}/fields
"CAG_x0020_Process_x0020_Status": [
"Proposed Funding - Customer Billed",
"Proposed Funding - Sales Funded",
"SOW - Needed"
]
Error returned:
{
  "error": {
    "code": "invalidRequest",
    "message": "The request is malformed or incorrect.",
    "innerError": {
      "request-id": "2251e25f-e4ce-491f-beb9-e463c7d8d5af",
      "date": "2018-05-16T15:16:23"
    }
  }
}
I am able to update all other fields requested, but this last field is holding up a release of the application.
To elaborate on what @muhammad-obaidullah-ather wrote in the comments: for string multi-choice fields you need to declare the type as Collection(Edm.String), and then his solution works for me. Repeating what he wrote as a complete answer.
This should be sent as a PATCH like this:
PATCH /v1.0/sites/{SiteId}/lists/{ListId}/items/{ItemId}/fields
{"*FieldName*#odata.type":"Collection(Edm.String)","*FieldName*":["*Value1*","*Value2*"]}
This works for me:
graph.api(url)
  .version('beta')
  .post({
    'fields': {
      'AssignedToLookupId@odata.type': 'Collection(Edm.Int32)',
      'AssignedToLookupId': [5, 13]
    }
  });
Unfortunately, a number of column types, including MultiChoice, cannot be updated via Microsoft Graph today. I would recommend adding this to the Office Dev UserVoice so it remains on the radar of the SharePoint/Graph team.
My colleague and I are working on a REST API. We've been arguing quite a lot about whether the status of a resource/item should be a string or an integer; we both need to read, understand and modify this resource (using separate applications). As this is a very general subject, Google did not help settle the argument. I wonder what your experience is and which way is better.
For example, let's say we have a Job resource, which is accessible through the URI http://example.com/api/jobs/someid and has the following JSON representation, stored in a NoSQL DB:
JOB A:
{
  "id": "someid",
  "name": "somename",
  "status": "finished" // or "created", "failed", "compile_error"
}
So my question is: maybe it should be more like the following?
JOB B:
{
  "id": "someid",
  "name": "somename",
  "status": 0 // or 1, 2, 3, ...
}
In both cases each of us would have to create a map that we use to make sense of the status in our application logic. But I am leaning towards the first one, as it is far more readable... You can also easily mix up '0' (string) and 0 (number).
However, as the API is consumed by machines, readability is not that important. Using numbers also has some other advantages: it is widely accepted when working with applications in the console, and it can be convenient when you want to introduce arbitrary new failure statuses, say:
status == 50 means there is a problem with network component X,
status > 100 means various special cases.
When you have numbers, you don't need to make up string names for all of them. So which way is best in your opinion? Maybe we need multiple fields (though this could make matters a bit confusing):
JOB C:
{
  "id": "someid",
  "name": "somename",
  "status": 0, // or 1, 2, 3...
  "error_type": "compile_error",
  "error_message": "You coding skill has failed. Please go away"
}
Personally I would look at handling this situation with a combination of both approaches you have mentioned. I would store the statuses as integers within a database, but would create an enumeration or class of constants to map status names to numeric status values.
For example (in C#):
public enum StatusType
{
    Created = 0,
    Failed = 1,
    Compile_Error = 2,
    // Add any further statuses here.
}
You could then convert the numeric status stored in the database to an instance of this enumeration, and use this for decision making throughout your code.
For example (in C#):
StatusType status = (StatusType) storedStatus;

if (status == StatusType.Created)
{
    // Status is created.
}
else
{
    // Handle any other statuses here.
}
If you're being pedantic, you could also store these mappings in your DB.
For access via an API, you could go either way depending on your requirements. You could even return a result with both the status number and status text:
var result = new
{
    status_code = 1,
    status = "Failed"
};
You could also create an API to retrieve the status name from a code. However, returning both the status code and name in the API response would be best from a performance standpoint.
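On the API side, a minimal JavaScript sketch of that combined shape (the names and codes are illustrative, not a fixed convention):
// Map the stored integer codes to readable names once, in one place.
const STATUS_NAMES = {
  0: 'created',
  1: 'failed',
  2: 'compile_error',
  3: 'finished'
};

// Build the API representation from the stored record.
function toApiJob(job) {
  return {
    id: job.id,
    name: job.name,
    status_code: job.status,          // the integer exactly as stored
    status: STATUS_NAMES[job.status]  // the readable name derived from it
  };
}

console.log(toApiJob({ id: 'someid', name: 'somename', status: 3 }));
// => { id: 'someid', name: 'somename', status_code: 3, status: 'finished' }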
What is the behavior of the DynamoDB BatchGetItem API if none of the keys exist in DynamoDB?
Does it return an empty list or throw an exception?
I am not sure about this after reading their doc (link), but I may be missing something.
BatchGetItem will not throw an exception. The results for those items will not be present in the Responses map in the response. This is also stated in the BatchGetItem documentation:
If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Capacity Units Calculations in the Amazon DynamoDB Developer Guide.
This behavior is also easy to verify. This is for a table with a hash key attribute named customer_id (the full example I am using is here):
dynamoDB.batchGetItem(new BatchGetItemSpec()
        .withTableKeyAndAttributes(new TableKeysAndAttributes(EXAMPLE_TABLE_NAME)
            .withHashOnlyKeys("customer_id", "ABCD", "EFGH")
            .withConsistentRead(true)))
    .getTableItems()
    .entrySet()
    .stream()
    .forEach(System.out::println);

dynamoDB.batchGetItem(new BatchGetItemSpec()
        .withTableKeyAndAttributes(new TableKeysAndAttributes(EXAMPLE_TABLE_NAME)
            .withHashOnlyKeys("customer_id", "TTTT", "XYZ")
            .withConsistentRead(true)))
    .getTableItems()
    .entrySet()
    .stream()
    .forEach(System.out::println);
Output:
example_table=[{ Item: {customer_email=jim@gmail.com, customer_name=Jim, customer_id=ABCD} }, { Item: {customer_email=garret@gmail.com, customer_name=Garret, customer_id=EFGH} }]
example_table=[]
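The same check from the AWS SDK for JavaScript, as a hedged sketch (the table and key names reuse the example above; region/credentials setup is assumed to be configured elsewhere):
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

docClient.batchGet({
  RequestItems: {
    example_table: {
      Keys: [{ customer_id: 'TTTT' }, { customer_id: 'XYZ' }]
    }
  }
}, (err, data) => {
  if (err) throw err;
  // Nonexistent items are simply absent from Responses; no exception is thrown.
  console.log(data.Responses.example_table); // => []
});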
I have been tasked with mapping a JSON file to a MySQL database and I am trying to define the appropriate schema. A sample of the JSON file is below:
"configurationItems":[
{
"ARN":"",
"availabilityZone":"",
"awsAccountId":"hidden from sight ",
"awsRegion":"",
"configuration":{
"amiLaunchIndex":,
"architecture":"",
"blockDeviceMappings":[
{
"deviceName":"",
"ebs":{
"attachTime":"",
"deleteOnTermination":true,
"status":"attached",
"volumeId":""
}
}
],
"clientToken":"",
"ebsOptimized":,
"hypervisor":"",
"imageId":"",
"instanceId":"",
"instanceType":"",
"kernelId":"aki-",
"keyName":"",
"launchTime":"",
"monitoring":{
"state":""
},
"networkInterfaces":[
{ etc
Am I right in thinking that the way to do this is essentially: wherever there is a bracket/child element, there is a new table? E.g. configurationItems down to awsRegion would be in one table, then configuration through architecture, followed by blockDeviceMappings, and so on. If that is the case, where would clientToken through launchTime belong? Many thanks in advance, folks.
That is certainly a way to do it.
It gives the setup more of a parent-child relational approach.
E.g.
"blockDeviceMappings":[
{
"deviceName":"/dev/sda1",
"ebs":{
"attachTime":"2014-01-06T10:37:40.000Z",
"deleteOnTermination":true,
"status":"attached",
"volumeId":""
}
}
]
An item could well have more than one device, so this would be a one-to-many relation: blockDeviceMappings gets its own table with a foreign key back to the parent configuration item, while scalar fields such as clientToken and launchTime stay as columns on the parent table.
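As one way to sketch that mapping in code (using Node with the Knex query builder is my assumption here, and all table/column names are illustrative, not prescribed by the thread):
const knex = require('knex')({
  client: 'mysql',
  connection: { /* host, user, password, database */ }
});

async function createTables() {
  // Parent table: one row per configuration item. Scalar fields such as
  // clientToken and launchTime live here as ordinary columns.
  await knex.schema.createTable('configuration_items', (t) => {
    t.increments('id');
    t.string('aws_account_id');
    t.string('aws_region');
    t.string('client_token');
    t.dateTime('launch_time');
  });

  // Child table: many block device mappings per configuration item,
  // linked back to the parent by a foreign key (the one-to-many relation).
  await knex.schema.createTable('block_device_mappings', (t) => {
    t.increments('id');
    t.integer('configuration_item_id').unsigned()
      .references('id').inTable('configuration_items');
    t.string('device_name');
    t.string('volume_id');
    t.boolean('delete_on_termination');
  });
}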