In the Fiware CEP User Manual (PDF), page 12, it is mentioned that you can create an event producer of the type 'Timed', which will retrieve events from a file at time intervals based on their 'OccurranceTime' property.
In my Fi-Lab instance I don't find this 'Timed' producer type in the dropdown list, only: File, JMS, Rest and Custom.
So I thought this feature could be implemented in the type 'File', but I can't get it to work: the 'sendingDelay' property in the producer always dictates the reading speed, not the 'OccurrenceTime' in the event payload. Deleting 'sendingDelay' from the producer makes it not send events at all.
In the manual, OccurranceTime is said to be in milliseconds, and in the authoring tool it has the variable type 'Date', so "OccurranceTime":"1000" should mean one second.
So, how can I get events produced at the desired times? Is it just a matter of correct formatting?
(BTW: in the manual OccurranceTime is spelled in two different ways: 'OccuranceTime' and 'OccurranceTime'. I believe the correct one is the one with the double 'r', as that is what the authoring tool gives by default when creating a new event.)
Thank you,
Arthur
The event producer of type 'Timed' is a new feature that is part of release 4 of the CEP. It should be available in FIWARE Lab in October.
When it becomes available, you will be able to choose it as the producer's type in the CEP authoring tool. The CEP will then read events from an input file; in this file, you write the expected occurrence time of each event.
For example, if the content of the input event file in JSON format is:
{"Name":"TrafficReport", "volume":"1000", "OccurrenceTime":"1000"}
{"Name":"TrafficReport", "volume":"1600", "OccurrenceTime":"6000"}
{"Name":"TrafficReport", "volume":"2500", "OccurrenceTime":"11000"}
The producer will process the second input event 5 seconds after the first input event, since it is said to occur 5000 ms after the first one.
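To illustrate the timing semantics (this is not the CEP's internal implementation, just a minimal Java sketch of a replay loop; the file name and the regex-based JSON extraction are illustrative assumptions):

import java.nio.file.*;

public class TimedReplay {
    public static void main(String[] args) throws Exception {
        long previous = -1;
        for (String line : Files.readAllLines(Paths.get("events.json"))) {
            // Illustrative: pull the OccurrenceTime value out of the JSON line.
            long occurrence = Long.parseLong(
                line.replaceAll(".*\"OccurrenceTime\":\"(\\d+)\".*", "$1"));
            if (previous >= 0) {
                Thread.sleep(occurrence - previous); // wait the relative delta in ms
            }
            previous = occurrence;
            System.out.println("send: " + line);     // hand the event onwards
        }
    }
}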
I'm developing an OBD-II reader where I want to send query requests to read PID parameters with an STM32 processor. I already understand what should go in the data field, but the ID is giving me a headache. As I have read, one must send 0x7DF to broadcast a request, and each ECU will respond with its own ID. However, I have been asked to do this within the SAE J1939 protocol, which uses the 29-bit extended identifier, and I don't know what I need to add to this ID.
As I stated in the title, could someone show me some actual data from a bus using this method? I've been searching the internet for real frames but have had no luck so far.
I would also appreciate it if someone could shed some light on whether OBD-II communication needs some acknowledgment to work properly.
Thanks
I would suggest you take a look at the SAE J1939 documentation, more specifically at J1939/21, J1939/71 and J1939/73.
Generally, a J1939 transport protocol response sequence can be processed as follows:
Identify the BAM frame, indicating a new sequence being initiated (via PGN 60416 - 0xEC00, which can be reached by ID 0x1CECFF00)
Extract the J1939 PGN from bytes 6-8 of the BAM payload to use as the identifier of the new frame
Construct the new data payload by concatenating bytes 2-8 of the data transfer frames (i.e. excluding the 1st byte)
The J1939 data transfer messages then arrive with ID 0x1CEBFF00 (PGN 60160 or 0xEB00).
Above, the last 3 bytes of the BAM equal E3FE00. When reordered, these equal the PGN FEE3, aka Engine Configuration 1 (EC1). Further, the payload is found by combining the first 39 bytes across the 6 data transfer packets/frames.
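As a hedged sketch of the reassembly steps above (assuming raw 29-bit CAN IDs and 8-byte payloads are already available; the CanFrame type is an illustrative assumption, not a library class):

import java.io.ByteArrayOutputStream;
import java.util.List;

public class J1939Reassembler {
    record CanFrame(int id, byte[] data) {}   // illustrative frame type

    // Reassemble a BAM sequence: one TP.CM (BAM) frame followed by TP.DT frames.
    static byte[] reassemble(List<CanFrame> frames) {
        ByteArrayOutputStream payload = new ByteArrayOutputStream();
        int totalSize = 0;
        for (CanFrame f : frames) {
            int pgn = (f.id() >> 8) & 0x3FFFF;          // PGN field of the 29-bit ID
            if ((pgn & 0xFF00) == 0xEC00) {             // TP.CM (BAM), PGN 60416
                // bytes 2-3: total message size (little-endian)
                totalSize = (f.data()[1] & 0xFF) | ((f.data()[2] & 0xFF) << 8);
                // bytes 6-8: PGN of the reassembled message (little-endian)
                int msgPgn = (f.data()[5] & 0xFF)
                           | ((f.data()[6] & 0xFF) << 8)
                           | ((f.data()[7] & 0xFF) << 16);
                System.out.printf("BAM: %d bytes, PGN %04X%n", totalSize, msgPgn);
            } else if ((pgn & 0xFF00) == 0xEB00) {      // TP.DT, PGN 60160
                payload.write(f.data(), 1, 7);          // skip the sequence byte
            }
        }
        byte[] out = payload.toByteArray();
        // trim the padding bytes beyond the announced size
        return java.util.Arrays.copyOf(out, Math.min(totalSize, out.length));
    }
}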
The administrative control device, or any device issuing the vehicle use status PID, should be sensitive to the run switch status (SPN 3046 - 0xFDC0, which can probably be reached by 0xCFDC000) and any other locally defined criteria for authorized use (i.e., driver log-ons) before the vehicle use status PID is used to generate an unauthorized use alarm.
Also, don't forget that you need to read/send extended-ID messages, since J1939 uses 29-bit identifiers.
In fact, I would suggest you use can-utils to make your analysis even easier. With a simple candump or cansniffer you can see what is coming in on your bus.
Some cars' DBC files: https://github.com/commaai/opendbc
We used to use the Guava cache and we want to change it to Caffeine.
We want to set, for each entity, its own "expiration time", something like put(K key, V value, long expiration_time).
I saw the 3 functions of the Expiry interface and I wonder what exactly they are doing. If you could explain to me the meaning and the operation of each one of them, it would be great.
For example, should the return value of expireAfterCreate be the duration we want for this entity from its creation until its expiration? Or something else?
I'm also wondering why we have the parameter "currentTime" in both expireAfterRead and expireAfterUpdate if it isn't used in the function?
When we used the Guava cache we used expireAfterAccess; what is the substitute for it in Caffeine?
My last question is how I can set a default value for entities without a unique expiration time.
Thank you,
May
When we used the Guava cache we used expireAfterAccess; what is the substitute for it in Caffeine?
We mirror the Guava API, so this is also available on the cache builder.
My last question is how I can set a default value for entities without a unique expiration time.
Use expireAfterAccess, expireAfterWrite, or return a constant duration with expireAfter(Expiry).
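For example, a fixed policy on the builder looks like this (a minimal sketch; the Item value type is a placeholder):

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.TimeUnit;

// evict an entry 10 minutes after its last read or write
Cache<String, Item> cache = Caffeine.newBuilder()
    .expireAfterAccess(10, TimeUnit.MINUTES)
    .build();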
I saw the 3 functions of the Expiry interface and I wonder what exactly they are doing. If you could explain to me the meaning and the operation of each one of them, it would be great.
Expiry is a callback interface where a single timestamp value is updated. The method invoked corresponds to the operation performed on the cache entry (created, updated, read). An update or read that should have no effect can return currentDuration to make it a no-op.
For example, should the return value of expireAfterCreate be the duration we want for this entity from its creation until its expiration? Or something else?
Yes. However, if expireAfterUpdate returns a custom value (something other than currentDuration), then that overrides the prior expiration duration.
I'm also wondering why we have the parameter "currentTime" in both expireAfterRead and expireAfterUpdate if it isn't used in the function?
This can most often be ignored, but it is provided in case it is useful. It is the current nano timestamp from the Ticker (not wall-clock time).
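Putting the three methods together, a minimal sketch of a per-entry policy (the Item type and its ttlMillis field are illustrative assumptions):

import com.github.benmanes.caffeine.cache.*;
import java.util.concurrent.TimeUnit;

Cache<String, Item> cache = Caffeine.newBuilder()
    .expireAfter(new Expiry<String, Item>() {
      @Override
      public long expireAfterCreate(String key, Item value, long currentTime) {
        // per-entry duration, converted to nanoseconds
        return TimeUnit.MILLISECONDS.toNanos(value.ttlMillis);
      }
      @Override
      public long expireAfterUpdate(String key, Item value,
                                    long currentTime, long currentDuration) {
        return currentDuration;  // keep the remaining lifetime (no-op)
      }
      @Override
      public long expireAfterRead(String key, Item value,
                                  long currentTime, long currentDuration) {
        return currentDuration;  // reads do not extend the lifetime
      }
    })
    .build();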
We want to set, for each entity, its own "expiration time", something like put(K key, V value, long expiration_time).
The Expiry callback is required and generally recommended, because ideally entries are loaded through the cache to avoid stampedes (e.g. a LoadingCache). A stampede is when multiple threads look up the same entry, miss, load it, and overwrite each other when putting it in. That wastes work, rather than having only one thread perform the load while the others wait for the result.
That said, this method is available under Cache.policy().expiresVariably(). Those configuration-specific methods are stashed in that area to offer more power when deemed necessary.
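For instance (a small usage sketch; the key and item are placeholders):

import java.util.concurrent.TimeUnit;

// set this entry's own lifetime, independent of the builder's policy
cache.policy().expiresVariably().ifPresent(policy ->
    policy.put("key", item, 5, TimeUnit.MINUTES));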
Thank you,
You're very welcome.
I am trying to ingest data from a 3rd party API into a Dataflow pipeline. Since the 3rd party doesn't make webhooks available, I wrote a custom script that constantly polls their endpoint for more data.
The data is refreshed every 15 minutes, but since I don't want to miss any datapoints and I want to consume them as soon as new data is available, my "crawler" runs every minute. The script then sends the data to a PubSub topic. It's easy to see that PubSub will receive about 15 repeated messages for each datapoint in the source.
My first attempt to identify and discard those repeated messages was to add a custom attribute to each PubSub message (eventid), created from a hash of its [ID + updated_time] at the source.
// eventid identifies a datapoint version: base64 of "lastupdate|segmentid"
const attributes = {
  eventid: Buffer.from(`${item.lastupdate}|${item.segmentid}`).toString('base64'),
  timestamp: item.timestamp.toString()
};
const dataBuffer = Buffer.from(JSON.stringify(item));
publisher.publish(dataBuffer, attributes);
Then I configured Dataflow with withIdAttribute() (which is the new idLabel(), based on Record IDs).
PCollection<String> input = p
    .apply("ReadFromPubSub", PubsubIO
        .readStrings()
        .fromTopic(String.format("projects/%s/topics/%s",
            options.getProject(), options.getIncomingDataTopic()))
        .withTimestampAttribute("timestamp")
        .withIdAttribute("eventid"))
    .apply("OutputToBigQuery", ...)
With that implementation, I was expecting that when the script sent the same datapoint a second time, the repeated eventid would be the same and the message would be discarded. But for some reason, I still see duplicates in the output dataset.
Some questions:
Is there a clever way to ingest the data into Dataflow from that 3rd party API if they don't provide webhooks?
Any ideas on why Dataflow is not discarding the messages in this situation?
I know about the 10-minute restriction for deduplication in Dataflow, but I see duplicated data even on the 2nd insertion (2 minutes).
Any help will be greatly appreciated!
I think you are on the right track; instead of the hash, I recommend using timestamps. A better way to do this is by using windows. Review this document, which filters data that is outside of the window.
Regarding the additional duplicate data: if you are using pull subscriptions and the acknowledgement deadline is reached before the data has been processed, the message will be resent, as per the at-least-once delivery semantics. In this case, change the acknowledgement deadline; the default is 10 seconds.
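As a hedged sketch of the windowed approach (Window, FixedWindows and Distinct are standard Beam transforms, but the 15-minute window size is an assumption matching the source refresh rate):

import org.apache.beam.sdk.transforms.Distinct;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.joda.time.Duration;

PCollection<String> deduped = input
    // group elements into 15-minute windows matching the source refresh rate
    .apply("Window", Window.into(FixedWindows.of(Duration.standardMinutes(15))))
    // drop duplicate elements within each window
    .apply("Dedup", Distinct.create());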
I am trying to tap into the CUPS raster and obtain some lower-level info such as pixel data, color mode, bits per pixel, bits per color, and anything else really. I can't figure out how CUPS uses the raster. Whenever I print something to PDF, it never goes through any of the functions in the filter/raster.c file.
Is my approach/reasoning incorrect? I've tried printing images (png), text and PDF and the result is the same.
CUPS does not have any component called 'rasterizer'.
When CUPS needs to process a submitted file (you can print on the command line, like 'lp -d printername the.file', did you know?), ...
...the first thing it does is auto-type the incoming file in order to determine its MIME type;
...next, it checks which target print queue the user requested ('printername' in the above command); each target printer requires its own file format, which is also a MIME type of its own (that is of course different for a PCL, a PostScript, an ESC/P, a GDI, a proprietary "whatever" or even a PDF-consuming printer);
...based on input and required final output file types of the current job, CUPS constructs an appropriate filtering chain and runs the input data through these filters.
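If you just want to preview which chain CUPS would construct for a given file without printing it, the cupsfilter utility can list the filters (a sketch; the exact PPD path depends on your system):

cupsfilter --list-filters -p /etc/cups/ppd/printername.ppd the.file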
You can follow the course of these conversions by enabling LogLevel debug in /etc/cups/cupsd.conf (restart CUPS daemon after modifying this). Then, check the log file:
less /var/log/cups/error_log
This will now show lines containing 'Started filter /usr/lib/cups/filter/...' indicating the time when each filter in the chain is started.
The raster/raster.c source code file contains code which is used if the filtering chain contains any of the ABCDtoraster or rastertoXYZ filters. These filters may or may not be present on your system in the directory /usr/lib/cups/filter/, and they create or post-process a CUPS-specific raster format defined here: https://www.cups.org/doc/spec-raster.html
My custom-formatted JSON events come from a log file which contains parameter names with dots, like id.orig_h etc. A sample event is:
{"ts":"2016-05-08 08:59:47.363764Z","uid":"CLuCgz3HHzG7LpLwH9","id.orig_h":"172.30.26.119","id.orig_p":51976,"id.resp_h":"172.30.26.160","id.resp_p":22,"version":2,"client":"SSH-2.0-OpenSSH_5.0","server":"SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6","cipher_alg":"arcfour256","mac_alg":"hmac-md5","compression_alg":"none","kex_alg":"diffie-hellman-group-exchange-sha1","host_key_alg":"ssh-rsa","host_key":"8d:df:71:ac:29:1f:67:6f:f3:dd:c3:e5:2e:5f:3e:b4"}
But the event receiver does not accept such events, and gives mapping errors saying:
Could not find any matches for the incoming event with JSONPath : com.jayway.jsonpath.JsonPath#543abe49 ,hence dropping the event
If I can't change my log file, how can I make the receiver accept such parameters?
Also, unless my events are separated with *****, the receiver does not process any further incoming events. Why is that? How can I avoid it?
I simply modified my log files before sending them via any client. I was using sample 0002, so I changed my message to comply with the receiver. However, I still don't know why the receiver does not accept parameters with dots in them. This sample also expects events to be separated by an asterisk line, i.e. *****. By removing a couple of sample lines, I made it work.
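For what it's worth, Jayway JsonPath itself can address keys that contain dots by using bracket notation, so a mapping expression of that form may be worth trying (a sketch of the JsonPath syntax only, not a confirmed receiver configuration):

import com.jayway.jsonpath.JsonPath;

// dotted keys must be quoted in bracket notation; plain $.id.orig_h
// would be interpreted as nested objects id -> orig_h
String ip = JsonPath.read(json, "$['id.orig_h']");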