Is it possible to configure different resolutions for an STH-Comet sink?

We have deployed an instance of Orion Context Broker, an instance of Cygnus and an instance of STH-Comet in Docker, following the formal approach. We need to save some entities in the MongoDB aggregates with a resolution of month and day; others with a resolution of month, day and hour; and, finally, others with a resolution of month, day, hour and minute.
Is it possible to achieve this task?
Thank you very much in advance.

Yes, it is possible.
In Cygnus you will need to configure several STHSink instances, one for each desired set of resolutions.
Then there are several options to drive the NGSI notifications to the corresponding Sink:
One simple approach is to associate a different Flume source-channel-sink chain with each sink, each source listening on its own port, so that you store different resolutions depending on which port the subscription points to. A sketch of this per-port layout is shown after the selector snippet below.
You may also use NGSI custom notifications (for example, modifying the Fiware-ServicePath header) together with Cygnus' header-multiplexing capability, so that you can route notifications to different channel-sink pairs:
<Agent>.sources.<Source1>.selector.type = multiplexing
<Agent>.sources.<Source1>.selector.header = <someHeader>
<Agent>.sources.<Source1>.selector.mapping.<Value1> = <Channel1>
<Agent>.sources.<Source1>.selector.mapping.<Value2> = <Channel1> <Channel2>
<Agent>.sources.<Source1>.selector.mapping.<Value3> = <Channel2>
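
As an illustrative sketch of the first (per-port) approach only, an agent with two source-channel-sink chains persisting different resolution sets could look like the configuration below. Port numbers, names and the resolutions values are assumptions; check the Cygnus NGSISTHSink documentation for the exact parameter set.

# Hypothetical Cygnus agent with two STH sinks, one per resolution set
cygnus-ngsi.sources = src-coarse src-fine
cygnus-ngsi.channels = ch-coarse ch-fine
cygnus-ngsi.sinks = sth-coarse sth-fine

# Two HTTP sources on different ports; each subscription targets one of them
cygnus-ngsi.sources.src-coarse.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.src-coarse.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.src-coarse.port = 5050
cygnus-ngsi.sources.src-coarse.channels = ch-coarse

cygnus-ngsi.sources.src-fine.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.src-fine.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.src-fine.port = 5051
cygnus-ngsi.sources.src-fine.channels = ch-fine

cygnus-ngsi.channels.ch-coarse.type = memory
cygnus-ngsi.channels.ch-fine.type = memory

# Coarse aggregation: month and day only
cygnus-ngsi.sinks.sth-coarse.type = com.telefonica.iot.cygnus.sinks.NGSISTHSink
cygnus-ngsi.sinks.sth-coarse.channel = ch-coarse
cygnus-ngsi.sinks.sth-coarse.mongo_hosts = mongodb:27017
cygnus-ngsi.sinks.sth-coarse.resolutions = month,day

# Fine aggregation: down to the minute
cygnus-ngsi.sinks.sth-fine.type = com.telefonica.iot.cygnus.sinks.NGSISTHSink
cygnus-ngsi.sinks.sth-fine.channel = ch-fine
cygnus-ngsi.sinks.sth-fine.mongo_hosts = mongodb:27017
cygnus-ngsi.sinks.sth-fine.resolutions = month,day,hour,minute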

Is it possible to get GCP's ANY distribution for Kubernetes GKE node pool?

I have a GKE Kubernetes cluster running on GCP. This cluster has multiple node pools with autoscaling ON, all placed in us-central1-f.
Today we started getting a lot of errors on these Node pools' Managed Instance Groups saying that us-central1-f had run out of resources. The specific error: ZONE_RESOURCE_POOL_EXHAUSTED_WITH_DETAILS
I've found another topic on Stack Overflow with a similar question, where the answer points to a discussion on Google Groups with more details. I know that one of the recommended ways of avoiding this is to use multiple zones and/or regions.
When I first faced this issue, I wondered whether there is a way to use multiple zones as a fallback system rather than as a redundancy system. In that sense, I would have my VMs placed in whichever zone has available resources, prioritizing the ones closest to, let's say, us-central1-f.
Then, reading the discussion on the Google Group, I found a feature that caught my attention: the ANY distribution method for Managed Instance Groups. It seems that this feature does exactly what I need - the zone fallback.
So, my question: Does the ANY distribution method resolve my issue? Can I use it for GKE Node Pools? If not, is there any other solution other than using multiple zones?
It is possible to get a regional (i.e. multi-zonal) GKE deployment; however, this will use multiple zonal MIGs as the underlying compute layer. So, technically speaking, you will not use the ANY distribution method, but you should achieve pretty much the same result.
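For illustration (cluster name and zone list are placeholders), a regional cluster whose node pools are spread across several zones can be created with something like:

gcloud container clusters create my-cluster \
    --region us-central1 \
    --node-locations us-central1-a,us-central1-b,us-central1-f

Each node pool is then replicated in every listed zone, so a resource shortage in one zone does not prevent new nodes from being added in the others.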

Hazelcast dynamic imap configuration propagation to members

If I have multiple Hazelcast cluster members using the same IMap and I want to configure the IMap in a specific manner programmatically, do I then need to have the configuration code in all the members, or should it be enough to have the configuration code just once in one of the members?
In other words, are the MapConfigs only member specific or cluster wide?
The reason I'm asking is that the Hazelcast documentation (http://docs.hazelcast.org/docs/latest/manual/html-single/index.html#configuring-programmatically) says that:
As dynamically added data structure configuration is propagated across
all cluster members, failures may occur due to conditions such as
timeout and network partition. The configuration propagation mechanism
internally retries adding the configuration whenever a membership
change is detected.
This gives me the impression that the configurations do propagate.
Now if member A specifies a certain MapConfig for IMap "testMap", should member B see that config when it does
hzInstance.getConfig().findMapConfig("testMap") // or .getMapConfig("testMap")
In my testing B did not see the MapConfig done by A.
I also tried specifying mapConfig.setTimeToLiveSeconds(60) at A and mapConfig.setTimeToLiveSeconds(10) at B.
It seemed that the items in the IMap that were owned by A were evicted in 60 seconds, while the items owned by B were evicted in 10 seconds. This supports the idea that each member needs to do the same configuration if I want consistent behaviour for the IMap.
Each member owns certain partitions of the IMap. A member's IMap configuration has effect only on its owned partitions.
So it is normal to see different TTL values of the entries of the same map in different members when they have different configurations.
As you said, all members should have the same IMap configuration to get consistent cluster-wide behaviour.
Otherwise, each member will apply its own configuration to its own partitions.
But if you add a dynamic configuration, as described here, that configuration is propagated to all members and changes their behaviour as well.
In brief, if you add the configuration before creating the instance, it is a local configuration. But if you add it after creating the instance, it is a dynamic configuration and propagates to all members.
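
A minimal Java sketch of the difference (map names and TTL values are just examples):

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class DynamicMapConfigExample {
    public static void main(String[] args) {
        // Static (local) configuration: applied before the instance is created,
        // so only this member knows about it.
        Config config = new Config();
        config.addMapConfig(new MapConfig("localMap").setTimeToLiveSeconds(60));
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        // Dynamic configuration: added after the instance is created,
        // so it is propagated to the other cluster members.
        hz.getConfig().addMapConfig(new MapConfig("testMap").setTimeToLiveSeconds(60));

        // Any member can now read the dynamically added configuration.
        MapConfig mc = hz.getConfig().findMapConfig("testMap");
        System.out.println("TTL for testMap: " + mc.getTimeToLiveSeconds());
    }
}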

Set TTL for documents in Couchbase Server

I want to set TTL (time to live) at couchbase server for all documents getting pushed from mobile devices continuously for replication. I did not find any clear explanation/example in documentation to do this.
What should be the approach to set TTL for documents coming from mobile devices to Server through Sync Gateway.
Approach 1:
One approach is to create a server-side view that emits createdDate as the key. We would query that view with today's date as the key, get today's documents, and set a TTL on those documents. But how and when would we call this view, and is it a good approach?
Approach 2:
Should I do it by using webhooks, where a handler listens to document changes (creations) made through Couchbase Lite push replications, sets a TTL for the new documents and saves them back to Couchbase Server?
Is there any other better way to do it?
Also what is the way to set TTL for only specific documents?
EDIT: My final approach:
I will create following view at couchbase server:
function (doc, meta) {
  emit(dateToArray(doc.createdDate));
}
I will have a job that runs daily at EOD, queries the view to get the documents created today, and sets a TTL for those documents.
In this way I would be able to delete documents regularly.
Let me know if there is any issue with it or there is any better way.
Hopefully someone from the mobile team can chime in and confirm this, but I don't think the mobile SDK allows setting an expiry at the moment.
I guess you could add the creation/update time to your document and create a batch job that uses the "core" SDKs to periodically find old documents (either via N1QL or a dedicated view) and remove them.
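For example, a sketch of such a batch job with the Couchbase Java SDK 2.x, using the dateToArray view described in the question (the host, bucket, design document, view name, keys and expiry value are assumptions):

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.document.json.JsonArray;
import com.couchbase.client.java.document.json.JsonObject;
import com.couchbase.client.java.view.ViewQuery;
import com.couchbase.client.java.view.ViewResult;
import com.couchbase.client.java.view.ViewRow;

public class ExpireTodaysDocuments {
    public static void main(String[] args) {
        Cluster cluster = CouchbaseCluster.create("couchbase-host");
        Bucket bucket = cluster.openBucket("my-bucket");

        // Query the createdDate view for today's documents,
        // e.g. keys between [2016, 5, 17] and [2016, 5, 17, {}].
        ViewResult result = bucket.query(
                ViewQuery.from("docs", "by_created_date")
                        .startKey(JsonArray.from(2016, 5, 17))
                        .endKey(JsonArray.from(2016, 5, 17, JsonObject.empty())));

        // touch() sets the expiry (in seconds) without rewriting the document.
        // Note: values above 30 days are interpreted as absolute Unix timestamps.
        int sevenDays = 7 * 24 * 60 * 60;
        for (ViewRow row : result) {
            bucket.touch(row.id(), sevenDays);
        }

        cluster.disconnect();
    }
}

Alternatively, the job could skip touch() and call bucket.remove(row.id()) on documents older than a threshold, which matches the "periodically find old documents and remove them" idea above.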
It is not currently possible to set a TTL for documents via Sync Gateway the way you can with a Couchbase Server smart client. There is a conceptual impedance mismatch between Sync Gateway and native-style TTLs on documents: the Sync Gateway protocol works on the basis of revision trees, so even when a document is 'deleted', a document remains in place to record that a deletion happened.
I would also be wary of workloads that might require TTLs (e.g. a cache): Sync Gateway documents take up space even after they have been deleted, so your dataset may continue to grow unbounded.
If you do require TTLs, and if you do not think the dataset size will be an issue then the best way would be to store a field in the document that represents the time the document would expire. Then you would do two things:
When accessing the document, if it has expired then you manually delete it
Periodically iterate over the all docs endpoint and delete any documents you find with an expiry time in the past.
Couchbase does not delete a document as soon as its TTL is reached; instead, when you access an expired document, Couchbase checks the expiry and removes it at that moment.
http://developer.couchbase.com/documentation/server/4.0/developer-guide/expiry.html

Best FIWare architecture?

We are developing a FIWARE city sensor network in which:
Each sensor processes data in real time and publishes its averages to our server every N minutes;
Some server-side math is applied to those reported averages, generating new fields or averages of already reported fields (e.g. averages by day);
In the end, there will be a Wirecloud component showing a map with the location of every sensor and, per sensor, a plot of the several fields acquired.
Additionally, sensors can raise alarms, every server and sensor access must be secure, and server database scalability is a future concern. At the moment we have this architecture (OCB stands for Orion Context Broker):
Where the "Webservice" and "Processing" components are house made, but after reading a little bit more about FIWare components (particulary the IOT stack) I've realised that there are more components that we might integrate in here.
What are you using for a solution like this? It looks fairly generic (secure attribute publishing, storage, post-processing and value plotting).

How to automatically convert UTC time to the user's current local time in my web application

My web application is an eCommerce-like site. Every project has a time limit, and I display the time left for each project. I am storing times in UTC. How do I convert them to the current local time for every user? For example, the USA has 4 or 5 different time zones. I am using PHP CodeIgniter and MySQL for my web application.
If I were you, I would keep all times on the server side in UTC and only convert to local time in the client via JavaScript. However, dealing with client time is tricky, since you can't really know their wall time.
I have seen 3 approaches:
1) Use var offset = new Date().getTimezoneOffset();
Probably the best way of getting the client system timezone in an automatic fashion.
2) Try to figure out the timezone by the client IP.
If you need to do the time rendering on your server after all, this might be the only automatic way you can do it. It is very error-prone though, because your clients might be using proxies, VPNs etc. Also, the geoIP databases might not be accurate enough.
3) Let the user set the timezone.
This is playing it safe: the user can decide. You can also keep the setting in a cookie or similar.
The Moment Timezone (moment-timezone) library might help you with all three approaches.
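
For illustration, a small client-side sketch combining approach 1 with moment-timezone (the deadline value and format string are just examples):

// Assumes moment and moment-timezone are loaded on the page.
var deadlineUtc = '2016-06-30T18:00:00Z';            // UTC value sent by the server
var zone = moment.tz.guess();                        // approach 1: detect the client time zone
var localDeadline = moment.utc(deadlineUtc).tz(zone);

console.log(localDeadline.format('YYYY-MM-DD HH:mm')); // rendered in the user's local time
console.log(localDeadline.fromNow());                  // e.g. "in 3 days", handy for a countdown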