FIWARE subscription does not notify

I have multiple subscriptions to different entities, and at some random point one of the subscriptions stops notifying.
This is because the lastNotification attribute is set in the future. Here is an example:
curl 'http://localhost:1026/v2/subscriptions/xxxxxxxxxxxxxxx'
{
  "id": "xxxxxxxxxxxxxxx",
  ...
  "status": "active",
  ...
  "notification": {
    "timesSent": 1316413,
    "lastNotification": "2021-01-20T18:33:39.000Z",
    ...
    "lastFailure": "2021-01-18T12:11:26.000Z",
    "lastFailureReason": "Timeout was reached",
    "lastSuccess": "2021-01-20T17:12:09.000Z",
    "lastSuccessCode": 204
  }
}
In this example, lastNotification is ahead of the current time. The subscription will not be notified again until 2021-01-20T18:33:39.000Z.
I tried to modify lastNotification in the Mongo database, but that changes nothing. It looks like the value 2021-01-20T18:33:39.000Z is cached.

The lastFailureReason field specifies the reason for the last notification failure associated with that subscription. The "diagnose notification reception problems" section in the documentation explains possible causes for "Timeout was reached". It makes sense for this to fail randomly if your network connection is somehow unstable.
Timestamps in the future (no matter whether they appear in lastNotification, lastFailure or lastSuccess) are pretty weird and probably not related to Orion Context Broker operation. Orion takes timestamps from the internal clock of the system where it is running, so maybe your system clock is set in the future.
Unless you run Orion Context Broker with -noCache (which in general is not recommended), subscriptions are cached. Thus, if you "hack" them in the DB you will not see the effect until the next cache refresh (refreshes take place at regular intervals, defined by the -subCacheIval parameter).
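If you need a change to take effect right away, an alternative to hacking the DB is to update the subscription through the NGSIv2 API itself, since an API update should also refresh the broker's cached copy of the subscription. A minimal sketch (the id is the placeholder from the example above; the payload simply re-asserts the active status):
curl -X PATCH 'http://localhost:1026/v2/subscriptions/xxxxxxxxxxxxxxx' \
  -H 'Content-Type: application/json' \
  -d '{ "status": "active" }'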

Related

Keeping two devices accessing the same account in sync (different/same session)

I'm aiming for a stateless setup, and I'm kind of curious how people go about coding/setting up their session handling when many devices access a single account. I work with Node.js currently, but pseudocode is appreciated.
This is how my sessions currently look; ID is a unique value (JSON stored in Redis by key):
{"cookie": {
"originalMaxAge": null,
"expires": null,
"secure": true,
"httpOnly": true,
"domain": "",
"path": "/",
"sameSite": "strict"
},
"SameSite": "7e5b3108-2939-4b4b-afdc-39ed5dbd00d0",
"loggedin": 1,
"validated": 1,
"username": "Tester12345",
"displayself": 1,
"avatar": "{ \"folder\": \"ad566c0b-aeac-4db8-9f54-36529c99ef15/\", \"filetype\": \".png\" }",
"admin": 0,
"backgroundcolor": "#ffffff",
"namebackgroundcolor": "#000000",
"messagetextcolor": "#5d1414"}
I have no issues with this setup until a user is logged in on two different devices and one of them adjusts their colors or avatar; one session is up to date and the other is completely stale.
I do my best, when possible, to call out to the database to ensure the information is up to date when it matters most, but what should I be doing about this slip-up? I'd hate to hit the database on every request just to get this information, but I suspect most setups do that anyhow?
I could come up with a hundred different ways to go about this, but I was hoping someone who has dealt with it has some good ideas. I'd like to be efficient and not make my databases work harder than they need to, but I know session handling makes a call on each request, so I'm trying to settle on a final approach.
Open to all ideas. My example above is a JSON document in Redis, but I'm open to changing to MySQL or another store.
One way to notify devices and keep them up-to-date about changes made elsewhere is with a webSocket or socket.io connection from device to the server. When the device logs in as a particular user and then makes a corresponding webSocket or socket.io connection to the server, the server keeps track of what user that connection belongs to. The connection stays connected for the duration of the user's presence.
Then, if a client changes something (let's use a background color as an example), and tells the server to update its state to that new color, the server can look in its list of connections to see if there are any other connections for this same account. If so, the server sends that other connection a message indicating what change has been made. That client will then receive that notification and can update their view immediately. This whole thing can happen in milliseconds without any polling by the client.
If you aren't familiar with socket.io, it is a layer on top of webSocket that offers some additional features.
In socket.io, you can add each device that connects on behalf of a specific account to a socket.io room that has a unique name derived from the account (often an email address or username). Upon login:
// join this newly connected socket to a room with the name
// of the account it belongs to
socket.join(accountName);
Then, you can easily broadcast to all devices connected to that room with one simple socket.io API call:
// send a message to all devices currently connected on this account
// (io.to targets the room; the event name here is arbitrary)
io.to(accountName).emit('accountUpdate', msg);
When socket.io connections are disconnected, they are automatically removed from any rooms that they have been placed in.
A room is a lightweight collection of currently connected sockets so it works quite well for a use like this.
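Putting it together, a minimal server-side sketch might look like this (assuming socket.io v4; the 'login' and 'settings-changed' event names and the persistence step are illustrative, not part of any particular library API):

const { Server } = require("socket.io");

const io = new Server(3000);

io.on("connection", (socket) => {
  // After the client authenticates, it tells the server which account
  // this connection belongs to.
  socket.on("login", (accountName) => {
    socket.join(accountName);              // one room per account
    socket.data.accountName = accountName;
  });

  // A device changed a setting (e.g. background color): persist it,
  // then push the change to the other devices on the same account.
  socket.on("settings-changed", (change) => {
    const room = socket.data.accountName;
    if (!room) return;
    // ...write `change` to Redis / the session store here...
    socket.to(room).emit("settings-changed", change);
  });
});

Note that socket.to(room).emit(...) excludes the sender, so only the other devices receive the update; use io.to(room).emit(...) if the originating device should receive it as well.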

Actionable Messages are sometimes not parsed correctly

We've received reports that actionable messages are sometimes not displayed correctly by some clients. It doesn't matter whether the message is displayed in Outlook OWA or the Outlook desktop app.
I asked one of the clients to install the Actionable Messages Debugger app and check the diagnostics section among others. Here are some details I managed to read off from it:
No card is attached to this message.
Actionable messages processing has not been performed on this message. Actionable messages are only enabled for Office 365.
Adaptive card payload found but could not be parsed. Please validate the payload.
And the diagnostics section:
"CardEnabledForMessage": false,
"ClientName": "OutlookWebApp",
"ClientVersion": "16.2528.7.2602797",
"InternetMessageId": "<ID>",
"Error": "EntityDocument does not exist.",
-
"AdaptiveCardPayload": {
"found": true,
"type": "AdaptiveCard"
},
-
"MessageCardPayload": {
"found": false,
"type": null
},
-
"AuthHeader": {
"results": "<address>; dkim=none (message not signed) header.d=none;<address>; dmarc=none action=none header.from=<address>;",
"authAs": "Internal"
}
Until recently, I wasn't able to reproduce the issue on my end. Then, during some tests, I sent myself a test message and it was not parsed correctly.
When I sent another test message afterwards, it worked perfectly fine.
Comparing both messages' sources showed that they were identical. The headers differed a little, but mostly in timestamps and in what appears to be the server. The diagnostics and error sections from the debugger are almost identical.
The method we use is SMTP (there were some issues with EWS), and we're thinking of switching back if that is what causes the issue.
Is there something that can be done about this issue? It's probably worth noting that the payload we send is quite 'heavy' (as in, we had to limit the amount of data we send because we were hitting what looked like a size limit).
Changing the script slightly makes the messages render for some users. Others had to wait some time (possibly for some OWA update?) for them to work.
So in the end nothing was changed, and it started working after some time.

What do SMTP implementations typically do with the mail data in response to RSET after DATA?

Here is what I gathered from RFC 5321:
4.1.1.5. RESET (RSET)
This command specifies that the current mail transaction will be aborted. Any stored sender, recipients, and mail data MUST be discarded, and all buffers and state tables cleared. The receiver MUST send a "250 OK" reply to a RSET command with no arguments. A reset command may be issued by the client at any time. It is effectively equivalent to a NOOP (i.e., it has no effect) if issued immediately after EHLO, before EHLO is issued in the session, after an end of data indicator has been sent and acknowledged, or immediately before a QUIT.
The emphases are mine. This says that if we receive the RSET after the end-of-data indicator ".", but before we have sent the acknowledgement, then we must discard the content of the message that is currently being delivered. This does not seem practical. Moreover, the server can easily act as if it received the RSET after it sent the acknowledgement - the client would not be able to tell the difference. Trying to find out what is usually done, I came across this discussion https://www.ietf.org/mail-archive/web/ietf-smtp/current/msg00946.html where they say:
Under a RFC5321 compliant "No Quit/Mail" cancellation implementation, after completing the DATA state, the server is waiting for a pending RSET, MAIL or QUIT command:
  QUIT - complete transaction, if any
  MAIL - complete transaction, if any; perform a "reset"
  RSET - cancel any pending DATA transaction delivery; perform a "reset"
  drop - cancel any pending DATA transaction delivery
We added this support in 2008 as a local policy option (EnableNoQuitCancel) which will alter your SMTP state flow, your optimization and now you MUST follow RSET vs QUIT/MAIL correctly. RSET (after DATA) aborts the transaction, QUIT/MAIL (after DATA) does not. RSET is not an NOOP at this point.
The specification says that discarding is a MUST. However, the extract above suggests that in practice it is treated as a MAY. I could look at the code of known SMTP/LMTP implementations, such as Dovecot, but perhaps someone has already reviewed that, which would save me time.
The text says "end of data indicator has been sent and acknowledged", which suggests that the client has received the server's response to the DATA command. Since the base protocol doesn't support command pipelining, I don't think sending anything after the dot that terminates the DATA, but before receiving the server's reply, is well-defined behavior.
Personally, I can't think of any more reasonable server behavior than "pretend it didn't happen."
The answer is here: https://www.rfc-editor.org/rfc/rfc1047 . It basically says that you can acknowledge before you start the processing, and that it is actually recommended to do so. This does not violate RFC 5321. Of course, more information on this issue would be useful, but I am happy with RFC 1047.
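To make the recommended behavior concrete, here is a minimal sketch (not a complete SMTP server; written for Node.js, with HELO/RCPT/QUIT handling and robust line buffering omitted) of a receiver that follows RFC 1047: it acknowledges the end-of-data dot first and only then queues the message, so a RSET arriving after the acknowledgement finds no open transaction and degenerates into a NOOP:

const net = require("net");

net.createServer((conn) => {
  let state = "command"; // "command" or "data"
  let mail = null;       // the open transaction, if any

  conn.write("220 example.test ESMTP\r\n");

  conn.on("data", (chunk) => {
    for (const line of chunk.toString().split("\r\n").filter(Boolean)) {
      if (state === "data") {
        if (line === ".") {
          conn.write("250 OK\r\n");         // acknowledge first (RFC 1047)...
          setImmediate(queueMessage, mail); // ...then process asynchronously
          mail = null;                      // the transaction is now closed,
          state = "command";                // so nothing is left to cancel
        } else {
          mail.body += line + "\r\n";
        }
      } else if (line.toUpperCase().startsWith("MAIL")) {
        mail = { from: line, body: "" };
        conn.write("250 OK\r\n");
      } else if (line.toUpperCase() === "DATA") {
        state = "data";
        conn.write("354 End data with <CRLF>.<CRLF>\r\n");
      } else if (line.toUpperCase() === "RSET") {
        mail = null;                        // after the ack this is a NOOP
        conn.write("250 OK\r\n");
      }
    }
  });
}).listen(2525);

function queueMessage(mail) { /* hand off to the spool/queue */ }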

Orion Context Broker delivery guarantees?

Thinking of 'production' usage of Orion Context Broker, I wonder what kind of guarantees are provided by the Orion Context Broker in terms of message delivery -- from both the producer and consumer perspectives? In particular, keeping in mind various possible failure scenarios (CB failure/restart, transient network failure, consumer failure/restart, etc.), as well as the possibility of resource congestion in the CB. A few examples:
1) if a context update operation succeeds, is it guaranteed that subsequent queries will return the latest data (e.g., even if the CB failed right after acknowledging the update request, and then restarted)?
2) if a consumer subscribed to certain context information, is it guaranteed that it will receive all the relevant updates -- exactly once, at least once, or even at all? (e.g., in case of a transient network failure between the CB and the consumer)
3) if a consumer updated its subscription, is it guaranteed that subsequent updates will accurately reflect it? (e.g., if the CB failed right after acknowledging the subscription request, and then restarted)
4) if a consumer is subscribed to context changes ('onchange', no throttling), and there are multiple consecutive updates from the producer affecting the same attribute, is it guaranteed that each of the changes will be sent (or might some be skipped -- e.g., due to too many notifications that the CB needs to send during a certain period of time), and in any particular order?
etc...
Thanks!
Answering bullet by bullet:
1) In general, if the client receives a 2xx response (inside the response payload in the case of NGSIv1, the HTTP response code in the case of NGSIv2), it can assume that the update has been persisted in the context database, so subsequent queries will return that data (except when running CB with -writeConcern 0, if the DB fails before the update can be persisted from DB memory to disk).
2) In order to keep things simple, CB uses a "fire and forget" notification policy. However, CB can be combined with HTTP relaying software (e.g. Rush, event buses, etc.) in order to implement retries, etc.
3) Similar to case 1, if the client receives a 2xx response (inside the response payload in the case of NGSIv1, the HTTP response code in the case of NGSIv2), it can assume that the update has been persisted in the context database (except when running CB with -writeConcern 0, if the DB fails before the update can be persisted from DB memory to disk), so notifications of such data (due to either existing subscriptions or new ones) will use the new value.
4) All notifications will be sent as long as thread saturation (in the case of -notificationMode transient) or queue saturation (in the case of -notificationMode threadpool:q:n) doesn't occur. You can find more information about notification modes in the Orion documentation.
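For reference, the notification mode is chosen when the broker is started, e.g. (the queue length of 10000 and thread count of 8 are illustrative values, not recommendations):
contextBroker -notificationMode threadpool:10000:8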

Is NServiceBus (AsA_Server) without DTC possible?

I am using NServiceBus for the first time and have a small, simple application where a user submits a form, the form fields are then sent to the queue, and the handler collects this data and writes it to the database using LINQ to SQL.
Any changes within Component Services are a complete no-no as far as the DBA is concerned, so I'm now looking for an alternative to DTC (which is not enabled on the DB server), while still using AsA_Server so that messages do not get purged.
I have tried removing AsA_Server after IConfigureThisEndpoint and specifying the configuration myself, but this doesn't seem to work (the console appears and the page loads, but nothing happens; it doesn't even stop at breakpoints). AsA_Client does work, but as I understand it the messages will be purged at startup, which I need to avoid.
Any suggestions?
Thanks,
OMK
EDIT: This has now been resolved by wrapping the call to the database in a suppressed transaction scope, which allows the database work to be done with no ambient transaction to enlist in:
using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Suppress))
{
    // code here
    sc.Complete();
}
When you use AsA_Server, you are specifying that you want durable queues, and you will need to configure transactional queues.
With a transactional send/receive, MSMQ requires you to send, transmit, receive, and process as part of one transaction. However, in reality all of these stages take place in their own separate transactions.
For example, the send transaction is complete when the sender sends a message onto its local MSMQ subsystem (even if the queue address is remote, the sender still sends to a local queue, which acts as a kind of proxy for the remote queue).
The transmit transaction is complete when the MSMQ subsystem on the sender's machine successfully transmits the message to the MSMQ subsystem on the receiver's machine.
Even though this may all happen on one machine, I am guessing that your Handle() method is writing to a database on a different machine.
The problem here is that for the receive operation to complete satisfactorily from a transaction perspective, your call to the database must be successful. Only then will the message be de-queued from your input queue. This prevents any chance that the message is lost during processing failure.
However, in order to enforce that across the network you need to involve DTC to coordinate the distributed transaction to the database.
Bottom line, if you want durable queues in a distributed environment then you will need to use MSDTC.
Hope this helps.
There is an alternative. In your connection string you can add an option not to enlist in a distributed transaction, which will have your DB connection ignored by the DTC.
Of course, if this is set in the config then all database transactions for the application are ignored by the DTC, rather than just a specific one.
Example:
<add key="DatabaseConnectionString" value="Data Source=SERVERNAME;Initial Catalog=DBNAME;Integrated Security=True;Enlist=False"/>
With NServiceBus 4.0 you can now do the following, which finally worked for me:
Configure.Transactions.Advanced(t =>
{
    t.DisableDistributedTransactions();
    t.DoNotWrapHandlersExecutionInATransactionScope();
});
When you use the AsA_* roles (AsA_Client, AsA_Server), the configuration is applied after Init(), so all the settings you make there regarding MsmqTransport and UnicastBus are overridden.
It's possible to override those settings using IWantTheEndpointConfig in an IHandleProfile implementation. You get the configuration after the default roles are applied but before the bus is started.
This way you can change the default profile settings and tailor them to your needs: deactivate transactions, enable impersonation, and so on.
Example:
public class DeactivateTransactions : IHandleProfile<Lite>, IWantTheEndpointConfig
{
    private IConfigureThisEndpoint configure;

    public IConfigureThisEndpoint Config
    {
        get { return configure; }
        set
        {
            this.configure = value;
            Configure.Instance.MsmqTransport()
                .PurgeOnStartup(false)
                .IsTransactional(false); // Or other changes
        }
    }

    public void ProfileActivated()
    {
    }
}