Where should I be logging exceptions? At the data service tier (ExecuteDataSet, etc.), at the data access layer, and/or at the business layer?
At a physical tier boundary.
Also in the top-level exception handler in the client.
I.e. if your business tier is running on a server, log exceptions before propagating them to the client. This is easy if you're exposing your business tier as WCF web services: you can implement an error handler that does the logging before propagating a SOAP fault to the client.
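Something like this rough sketch of a WCF IErrorHandler that logs on the service side before the fault reaches the client (the Log method here is just a placeholder for whatever logging framework you use):
using System;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class LoggingErrorHandler : IErrorHandler
{
    public bool HandleError(Exception error)
    {
        Log(error);   // log at the physical tier boundary
        return false; // let WCF carry on with its normal fault processing
    }

    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        // optionally translate the exception into a FaultException here
    }

    private static void Log(Exception error)
    {
        Console.Error.WriteLine(error);
    }
}
The handler still has to be attached to the service dispatcher via a service behavior, which is omitted here for brevity.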
If you are throwing the exception, you should log it when it occurs and then bubble it up. Otherwise, only the end user should log an exception (you may have a lot of tracing turned on, of course, in which case it may get logged quite a bit).
The end user may be a UI component or a Service or something...
If you handle an exception in your code somewhere, then that handler is the end user and you should log it there. In most apps and in most cases it should be logged by the UI when it displays the error message to the user.
I usually allow exceptions to propagate up and log them when they reach the very top level. For example:
static void Main()
{
    try
    {
        // application code
    }
    catch (Exception ex)
    {
        // perform logging (e.g. write ex to your log), then rethrow
        Console.Error.WriteLine(ex);
        throw;
    }
}
But that only makes sense for fatal exceptions. Other exceptions I usually log in the block that handles the recovery from said exception.
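For example, a rough sketch of that recovery-point logging (FetchPrices and LoadCachedPrices are made-up helpers, just for illustration):
using System;

class PriceService
{
    public decimal[] GetPrices()
    {
        try
        {
            return FetchPrices();           // may time out
        }
        catch (TimeoutException ex)
        {
            Console.Error.WriteLine(ex);    // log where the recovery happens
            return LoadCachedPrices();      // recover and carry on
        }
    }

    // Made-up helpers standing in for real data access code.
    private decimal[] FetchPrices() { throw new TimeoutException(); }
    private decimal[] LoadCachedPrices() { return new decimal[0]; }
}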
I'm creating a Forge application which needs to get version information from a BIM 360 hub. Sometimes it works, but sometimes (usually after the code has already been run once this session) I get the following error:
Exception thrown: 'Autodesk.Forge.Client.ApiException' in mscorlib.dll
Additional information: Error calling GetItem: {
"fault":{
"faultstring":"Unexpected EOF at target",
"detail": {
"errorcode":"messaging.adaptors.http.flow.UnexpectedEOFAtTarget"
}
}
}
The above error is thrown from a call to an API, such as one of these:
dynamic item = await itemApi.GetItemAsync(projectId, itemId);
dynamic folder = await folderApi.GetFolderAsync(projectId, folderId);
var folders = await projectApi.GetProjectTopFoldersAsync(hubId, projectId);
Where the APIs are initialized as follows:
ItemsApi itemApi = new ItemsApi();
itemApi.Configuration.AccessToken = Credentials.TokenInternal;
The Ids (such as 'projectId', 'itemId', etc.) don't seem to be any different when this error is thrown and when it isn't, so I'm not sure what is causing the error.
I based my application on the .Net version of this tutorial: http://learnforge.autodesk.io/#/datamanagement/hubs/net
But I adapted it so I can retrieve multiple nodes asynchronously (for example, all of the nodes a user has access to) without changing the jstree. I did this to allow extracting information in the background without disrupting the user's workflow. The main change I made was to add another Route on the server side that calls "GetTreeNodeAsync" (from the tutorial) asynchronously on the root of the tree and then calls it on each of the returned children, then each of their children, and so on. The function waits until all of the nodes are processed using Task.WhenAll, then returns data from each of the nodes to the client.
This means that there could be many API calls running asynchronously, and there might be duplicate API calls if a node was already opened in the jstree and then its information is requested for the background extraction, or if the background extraction happens more than once. This seems to be when the error is most likely to happen.
I was wondering if anyone else has encountered this error, and if you know what I can do to avoid it, or how to recover when it is caught. Currently, after this error occurs, it seems that every other API call will throw this error as well, and the only way I've found to fix it is to rerun the code (I use Visual Studio, so I just rerun the server and client, and my browser launches automatically).
Those are sporadic errors from our Apigee router due to latency issues in the authorization process that we are currently looking into internally.
When they occur, please cease all your upcoming requests, wait a few minutes, and retry. Take a look at stuff like this or this to help you out.
Our existing reports of similar errors also point to concurrency as one of the factors leading up to the issue, so you might also want to limit your concurrent requests and see if that mitigates the issue.
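For example, here is a rough sketch of throttling and retrying calls along those lines. The concurrency limit, retry count, and delay are assumptions to tune for your own app, and GetItemWithRetryAsync is a hypothetical wrapper, not part of the Forge SDK:
using System;
using System.Threading;
using System.Threading.Tasks;
using Autodesk.Forge;
using Autodesk.Forge.Client;

static class ForgeThrottling
{
    // Allow only a handful of Forge calls in flight at once.
    private static readonly SemaphoreSlim Throttle = new SemaphoreSlim(5);

    public static async Task<dynamic> GetItemWithRetryAsync(
        ItemsApi itemApi, string projectId, string itemId)
    {
        for (int attempt = 1; attempt <= 3; attempt++)
        {
            await Throttle.WaitAsync();
            try
            {
                return await itemApi.GetItemAsync(projectId, itemId);
            }
            catch (ApiException) when (attempt < 3)
            {
                // Back off before retrying, as suggested above.
                await Task.Delay(TimeSpan.FromMinutes(1));
            }
            finally
            {
                Throttle.Release();
            }
        }
        throw new InvalidOperationException("GetItem failed after retries.");
    }
}
The same pattern would apply to GetFolderAsync, GetProjectTopFoldersAsync and the other calls.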
Mule documentation states that catch-exception-strategy is similar to a Java catch block. But unfortunately, the payload is consumed (the message is consumed); from the catch block the payload is lost (unlike a Java method, where you can access the method's input parameters from a catch/finally block).
The problem with this design is that, from the catch strategy flow, it is impossible to know the error and the last known enriched payload that was in use (and which caused the error). This complicates auditing of the data that caused the error.
If there is a flow with 10 message processors, it becomes tedious to identify the message processor that threw the error.
I can see 2 workarounds regarding the payload:
1) After the inbound endpoint, push the payload to a flow variable before every message processor (another disadvantage: what happens to the inbound properties and attachments?)
2) Use a Rollback exception strategy with zero attempts (the transaction will be rolled back), so the original input message may be available (drawback: it is difficult to introspect why the error happened and on which message processor; for example, I may have 5 or 6 DB processors).
This matters because it makes supporting an ESB application in production much easier.
For example, if from the catch block we are able to pipe the payload and exception details (linked to a single UID), then we can run a log monitoring tool over them, push them to a real-time dashboard, or raise alerts. The same approach can be applied uniformly to all applications, flows, Java components, etc.
MMC is weak in this area; for example, if you want to suppress alerts from a batch job after 5 occurrences, MMC cannot do it.
My questions are:
1) Is there any reason why the payload is made unavailable?
Is a possible workaround to push the last known data to another variable on the message, called originalPayload or originalInboundProperties?
2) Is there any other straightforward way of piping the exception and the payload to an appender (instead of workarounds)?
Ananth Krishnan (WHISHWORKS.com)
I am using NServiceBus for the first time and have a small, simple application where a user submits a form, the form fields are then sent to the queue, and the handler collects this data and writes it to the database using linq-to-sql.
Any changes within Component Services are a complete no-no as far as the DBA is concerned, so I'm now looking for an alternative to DTC (which is not enabled on the DB server), while still using AsA_Server so that messages do not get purged.
I have tried removing AsA_Server after IConfigureThisEndpoint and specifying the configuration myself, but this doesn't seem to work (the console appears and the page loads, but nothing happens; it doesn't even stop at breakpoints). AsA_Client does work, but as I understand it the messages will be purged at startup, which I need to avoid.
Any suggestions?
Thanks,
OMK
EDIT: This has now been resolved by wrapping the call to the database in a suppress transaction scope, which allows the database work to be done with no ambient transaction to enlist in:
using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Suppress))
{
    // code here
    sc.Complete();
}
When you use AsA_Server, you are specifying you want durable queues and you will need to configure transactional queues.
With a transactional send/receive, MSMQ requires you to send, transmit, receive, and process as part of one transaction. However, in reality all these stages take place in their own transactions.
For example, the send transaction is complete when the sender sends a message onto their local MSMQ subsystem (even if the queue address is remote, the sender still sends to a local queue which acts as a kind of proxy to the remote queue).
The transmit transaction is complete when the MSMQ subsystem on the sender's machine successfully transmits the message to the MSMQ subsystem on the receiver's machine.
Even though this may all happen on one machine, I am guessing that your Handle() method is writing to a database on a different machine.
The problem here is that for the receive operation to complete satisfactorily from a transaction perspective, your call to the database must be successful. Only then will the message be de-queued from your input queue. This prevents any chance that the message is lost during processing failure.
However, in order to enforce that across the network you need to involve DTC to coordinate the distributed transaction to the database.
Bottom line, if you want durable queues in a distributed environment then you will need to use MSDTC.
Hope this helps.
There is an alternative. In your connection string you can add an option to not enlist in a distributed transaction, which will have your DB connection ignored by the DTC.
Of course, if this is set in the config then all database transactions for the application are ignored by the DTC rather than just a specific one.
Example:
<add key="DatabaseConnectionString" value="Data Source=SERVERNAME;Initial Catalog=DBNAME;Integrated Security=True;Enlist=False"/>
With NServiceBus 4.0 you can now do the following, which finally worked for me:
Configure.Transactions.Advanced(t =>
{
    t.DisableDistributedTransactions();
    t.DoNotWrapHandlersExecutionInATransactionScope();
});
When you use the As (AsA_Client, AsA_Server) interfaces, the configuration is applied after Init(), so all the settings you make there regarding MsmqTransport and UnicastBus are overridden.
It's possible to override those settings using IWantTheEndpointConfig in an IHandleProfile implementation. You get the configuration after the default roles are applied but before the bus is started.
This way you can change the default profile settings and tailor them to your needs: deactivate transactions, enable impersonation...
Example:
public class DeactivateTransactions : IHandleProfile<Lite>, IWantTheEndpointConfig
{
    private IConfigureThisEndpoint configure;

    public IConfigureThisEndpoint Config
    {
        get { return configure; }
        set
        {
            this.configure = value;
            Configure.Instance.MsmqTransport()
                .PurgeOnStartup(false)
                .IsTransactional(false); // Or other changes
        }
    }

    public void ProfileActivated()
    {
    }
}
I am writing an instant messaging library. Currently, when a SocketException is raised while reading or writing to the socket, I start the logout routine from inside the application, passing the SocketException to the end user as an argument of the LogoutEventArgs. This gives the end user a way of seeing what underlying exception actually caused the unrequested logout.
My question is: what am I to do if, during a user call to the Logout function, the socket actually throws an exception?
Example - End user calls Logout function, and while the logout function is waiting for existing requests to end gracefully, the socket throws an exception in the reading thread.
As I see it, I have two options:
Pretend the error didn't occur, and just act like the socket disconnected as part of our Logout.
When the socket exception is raised, see if a logout request is taking place, and if so, override it. Resulting in the original Logout request throwing an AlreadyLoggedOutException, as well as a separate logout event which passes the exception in the LogoutEventArgs.
Also, slightly related: what am I to do if the server initiates a shutdown that wasn't requested (i.e. the read call returns null)? The .NET Messenger server has a tendency to do this if you send a request it doesn't like. Do I treat this as an exception in itself?
I have found the whole disconnecting/logging out part of my library to be a major thorn in my side. I just can't seem to wrap my head around it. Does anyone know of any open source code applications that handle this situation beautifully?
I have been trying to tackle this thing in my head for so long, it's driving me mad.
I decided not to pass the SocketException to the end user, as a disconnect is not truly an exception and should be expected and dealt with. Instead there is a LogoutReason property on the LogoutEventArgs which specifies why the logout occurred.
I also decided that if the disconnect occurs during Logout then that's not actually an exception, as the logout was going to disconnect anyway. I simply disregard the exception in this case.
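A rough sketch of that approach follows; the names (LogoutReason, LogoutEventArgs, Connection, _loggingOut) are illustrative, not from an existing library:
using System;
using System.Net.Sockets;

public enum LogoutReason { UserRequested, ConnectionLost }

public class LogoutEventArgs : EventArgs
{
    public LogoutReason Reason { get; private set; }
    public LogoutEventArgs(LogoutReason reason) { Reason = reason; }
}

public class Connection
{
    public event EventHandler<LogoutEventArgs> LoggedOut;
    private volatile bool _loggingOut;

    public void Logout()
    {
        _loggingOut = true;
        // ... wait for pending requests to finish, then close the socket ...
        OnLoggedOut(new LogoutEventArgs(LogoutReason.UserRequested));
    }

    private void ReadLoop()
    {
        try
        {
            // ... socket read calls ...
        }
        catch (SocketException)
        {
            // A disconnect during a requested logout is expected, so the
            // exception is disregarded; otherwise report an unrequested
            // logout with the reason instead of the raw exception.
            if (!_loggingOut)
                OnLoggedOut(new LogoutEventArgs(LogoutReason.ConnectionLost));
        }
    }

    private void OnLoggedOut(LogoutEventArgs e)
    {
        var handler = LoggedOut;
        if (handler != null) handler(this, e);
    }
}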
I have a sequential workflow, which is hosted in IIS as a Workflow Service.
My workflow starts with a ReceiveActivity, and inside the ReceiveActivity a call is made to a WCF service with a SendActivity. If this call receives an exception, there is a FaultHandlerActivity on my ReceiveActivity which is meant to handle the call, and send a default value back to the client.
What is happening in my client is that an exception on the SendActivity is bubbling back to the client as a FaultException, even though my FaultHandlerActivity is running (I verified this by logging the beginning and end of the single CodeActivity in my fault handler).
My question is: how can I swallow exceptions occurring in the SendActivity, without a FaultException being returned to the client?
OK, I figured it out.
My ReceiveActivity had a fault handler directly on it. What happens then is that if any child activity raises an exception, the fault handler on the ReceiveActivity is invoked, but the ReceiveActivity is also set to a Faulted state, and the exception received is returned to the client application, whether I wanted that or not.
The solution was to add a SequenceActivity inside the ReceiveActivity, do all of the processing inside that sequence, and add a FaultHandlerActivity to the Sequence, which sets up my default return value.
The ReceiveActivity is never faulted, and the exception is not returned to my client; instead, the default value set up in the Sequence's FaultHandler is returned.
Hopefully this will help someone else with the same issue.