We're using Ninject 3.0 to resolve a class.
Fairly boring standard stuff:
    IKernel kernel = GetKernel();
    var foo = kernel.Get<IFoo>();
However on one particular machine, we're getting an exception when constructing... something. We know what's blowing up, but it's in the logging framework (Common.Logging), and that code is used throughout our codebase, in all/most of the constructors.
Other than putting every single constructor in a try/catch and wrapping the exceptions with type info, I get no useful information from Ninject as to what it's having trouble with.
Is there some way I can get Ninject to tell us which class that it's failing to create?
Another person asks a similar question, but their solution doesn't help - we are getting the exceptions from the logging framework, not Ninject.
This isn't really a solution, but when I've had problems I've manually deleted constructor dependencies one at a time until I've found the problem dependency. When I verify everything works without that one dependency, I then go into that concrete implementation. Essentially I recurse from the entry point down through each dependency, removing each one until the unbound dependency is identified. Tiring, but brute force hasn't failed me yet. When I once encountered a private constructor being the cause of a problem, I'd never have found that without brute force. More commonly, though, it's an unbound type somewhere or a conditional injection.
Related
So, I added this image, hoping it would help:
My question is: what is the point of doing this? I have created my global exception class (with my own messages), I have raised it in a method of a global class, and I have also caught this exception, all without declaring the exception on that particular method. So does declaring exceptions on a method help in any way?
Short update, this is my method code:
A coworker told me to give the method an exception parameter instead of writing the code from picture 2. If I do so, I don't see any changes, which is why I do not see the point of what is shown in the first picture.
This is a very good question. Because of the overhead such exceptions create (creating them, adding them, linking them to a message class, and throwing/catching them), it sometimes really seems, to me as well, like a kind of "shooting rockets at birds". Most things can be caught simply by catching CX_ROOT.
Nevertheless, there are cases where it is important to distinguish between exceptions, and it is then good OOP practice to create your own,
If:
the exception's class type/meaning cannot be covered by the standard ABAP exceptions
the exception class should be linked to messages from your own message class
the exception class should provide special features such as being "resumable".
In the end this question is a kind of "best practice/best use case" question, and I would also be glad to see other answers that point to other perspectives on this topic.
To refer to your original question: adding the exception to the signature of your method simply states that the method might throw that exception (which makes sense for dynamically checked exceptions, and is required for statically checked exceptions, to keep things simple). This is of little use to the method implementation (besides the fact that it makes raising the exception without errors possible), but it is an important message to the caller: when calling method FOO, be aware that ZCX_BAR might occur; deal with it or ignore it at your own peril.
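The same contract exists for Java's statically checked exceptions; a minimal sketch (class and method names are made up for illustration):

```java
// Hypothetical example: declaring the checked exception in the signature
// tells every caller of withdraw() that InsufficientFundsException may
// occur, and the compiler forces callers to catch it or declare it too.
class InsufficientFundsException extends Exception {
    InsufficientFundsException(String message) {
        super(message);
    }
}

class Account {
    private long balance = 100;

    // The "throws" clause is the signature-level contract with the caller;
    // without it, raising the checked exception would not even compile.
    void withdraw(long amount) throws InsufficientFundsException {
        if (amount > balance) {
            throw new InsufficientFundsException(
                "tried to withdraw " + amount + " with balance " + balance);
        }
        balance -= amount;
    }

    long balance() { return balance; }
}
```

A caller of withdraw() must then either wrap the call in try/catch or declare the exception in its own signature, which is exactly the "important message to the caller" described above.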
I derived several classes from various exceptions. Now VS gives a warning as in the title of this question.
Could someone explain the implications of suppressing this rule?
Could you explain the rule from here saying "Do not suppress a warning from this rule for exception classes because they must be serializable to work correctly across application domains"?
P.S. Well, I've found an answer myself. You indeed have to mark exceptions as serializable. They work fine without this attribute in the same AppDomain. However, if you try to catch one from some other domain, it has to be serialized in order to cross the AppDomain boundary. That is the main reason I found for this.
This is not exactly a Visual Studio warning; it is a warning produced by the FxCop tool, which you can run from the VS Analyze menu. FxCop is a static analyzer that looks for common gotchas in a .NET program that the compiler won't flag. Most of its warnings are pretty obscure and rarely indicate really serious problems; you need to treat it as a "have you thought of this?" kind of tool.
The little factoid it is trying to remind you of here is that the Exception class implements ISerializable and has the [Serializable] attribute. That is a pretty hard requirement: it makes the base Exception object serializable across app domains, which is necessary because Exception doesn't derive from MarshalByRefObject, and necessary to allow code that you run in another app domain to throw exceptions that you can catch.
So FxCop notes that you didn't do the same for your own Exception-derived class. That is really only a problem if you ever intend to have code that throws your exception run in another app domain. FxCop isn't otherwise smart enough to know whether you do, so it can only remind you that things go wrong when you do. It is pretty uncommon, so feel free to ignore the warning if you just don't know yet whether you will, or if it all sounds like Chinese to you.
If you're not going to use multiple AppDomains in your application, I think you can ignore or suppress it.
I need some direction on how best to use exceptions in a Java EE environment serving clients via JAX-RS.
At the moment, I have a number of exceptions, all extending RuntimeException and annotated with @ApplicationException(rollback=false). In order to transport them to the clients, they carry a JAXB-annotated entity, and an ExceptionMapper is ready to convert them to proper, meaningful HTTP responses (HTTP status codes included).
I have nothing specified regarding transactional behaviour, so I guess it defaults to CMT.
Great stuff so far: when the server decides it cannot fulfill a request because input data is not valid/sufficient/whatever, it throws one of my BadRequestExceptions, which makes it to the JAX-RS resource, where it gets mapped to an HTTP response. The client is informed about what went wrong.
The issue I have is that I always get a javax.ejb.TransactionRolledbackLocalException, caused by BadRequestException! I don't want the transaction to be rolled back! The @ApplicationException seems to be ignored...
Should I not extend RuntimeException but rather use checked exceptions? I thought @ApplicationException was supposed to be the right way...
For background information: all of my Exceptions leave the container/beans in a working state. No need for the bean instance to be destroyed or stuff like that.
For others struggling with the same problem:
The @ApplicationException annotation is ignored (not scanned/processed) when the exception class is not included in the EJB jar. That is a common case when your ApplicationException is part of an API jar. In that case you have to use the XML deployment descriptor to mark the application exception.
Looking here helped me -> https://www.java.net//node/665096
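The descriptor approach mentioned above might look roughly like this (a sketch of an ejb-jar.xml assembly descriptor; the exception class name is hypothetical):

```xml
<!-- Sketch: marking an exception that lives in an API jar as an
     application exception, so the container treats it like
     @ApplicationException(rollback=false) even though the annotation
     is never scanned. The class name com.example.api.BadRequestException
     is illustrative. -->
<ejb-jar xmlns="http://java.sun.com/xml/ns/javaee" version="3.1">
  <assembly-descriptor>
    <application-exception>
      <exception-class>com.example.api.BadRequestException</exception-class>
      <rollback>false</rollback>
    </application-exception>
  </assembly-descriptor>
</ejb-jar>
```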
Ok, turns out reading the manuals does help sometimes :).
An @ApplicationException is by definition not a RuntimeException. In fact, throwing RuntimeExceptions seems to be a very bad idea; that's what will tear down a bean instance, roll back transactions, etc.
After switching everything to be based on checked exceptions, my code not only looks much better, the IDE supports me much better as well. And it works like a charm. Now I can control whether my ApplicationException should cause a transaction rollback or not.
I found this link useful, even though it describes the behaviour for BEA WebLogic.
My application's persistence layer is formed by a Storage trait and an implementing class. I'm vacillating on this issue: should the fetchFoo(key: Key) methods return Option[Foo], or should they throw a FooNotFound exception if the key cannot be found?
To add flavour to the issue, the persistence layer - it is written in Scala - is called by Java code. Dealing with scala.Option in Java code? Hmmm.
In fact, until yesterday, the persistence layer was written in Java; I've just re-written it in Scala. As a Java code base, it relied on exceptions rather than returning nulls; but now that I've encountered scala.Option, I'm reconsidering. It seems to me that Scala is less enamoured of exceptions than Java.
My take on the general problem is that it depends on where the keys are coming from.
If they are being entered by some user or untrusted system, then I use Option so I can meaningfully denote the possibility of an unknown key and deal with it appropriately.
On the other hand, if the keys are coming from a known system (this includes things like keys embedded in links that originally came from the system), and are assumed to be valid and exist, I would leave it as a runtime exception, handled by a catch-all at the outer level. For the link example, if someone manually changes the key in a url for one reason or another, it should be considered as undefined behaviour and an exception is appropriate, IMO.
Another way to think of it is how you would handle the situation when it arises. If you're using Option and are just delegating the None case to some catch-all error handling, then an exception is probably more appropriate. If you're explicitly catching the NotFound exception and altering the program flow (eg, asking the user to re-enter the key), then use Option, or a checked exception (or Either in Scala) to ensure that the situation is dealt with.
In relation to integrating with Java, Option is easy enough to use from there once the Scala runtime library is available on the classpath. Alternatively, there's an Option implementation in the Functional Java library. In any case, I would steer clear of using null to indicate "not found".
In Java, you can call Option's isEmpty, isDefined and get without any special hassle (the really useful Option methods, such as getOrElse, are another matter.) Checking the result of the isDefined method in an if-clause should be faster than checking exceptions in a try-catch block.
In some cases (like your example) an Option is fine and the "monadic" behavior (map, flatMap, filter...) very convenient, but in other cases you need more information about the cause of the problem, which can be better expressed with an exception. Now you probably want your error handling to be as "uniform" as possible, so I would suggest using Either, which gives you both behavior similar to Option and expressiveness like an exception.
In Java, you just need a helper function which "unpacks" an Either: if it finds a Right(value), it gives the value back; if it finds a Left(exception), it re-throws it. After this, you're back to normal Java behavior.
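As a rough sketch of the two styles in plain Java, using java.util.Optional as a stand-in for scala.Option (all names hypothetical):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of the two styles discussed above: a lookup that returns Optional
// (absence is part of the return type, and the caller must decide what it
// means), and a helper that "unpacks" it back into normal flow by throwing
// when the value is missing.
class FooStorage {
    private final Map<String, String> store = new HashMap<>();

    void put(String key, String foo) { store.put(key, foo); }

    // Option-style: an unknown key is an expected, representable outcome.
    Optional<String> fetchFoo(String key) {
        return Optional.ofNullable(store.get(key));
    }

    // Exception-style: keys are assumed valid, so a missing one is treated
    // as a programming error and handled by a catch-all at the outer level.
    String fetchFooOrThrow(String key) {
        return fetchFoo(key).orElseThrow(
            () -> new IllegalStateException("no foo for key: " + key));
    }
}
```

Keeping both on the storage type lets untrusted input go through the Optional path while trusted, system-generated keys use the throwing path.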
When should you throw a custom exception?
e.g. I have some code that connects to a server. The code that connects to the server throws an IOException when it fails to connect. In the context of the method where it's called, this is fine. It's also fine in the network code.
But as this represents not having a connection (and therefore not working), the exception goes all the way up to the UI. At this stage, an IOException is very ambiguous. Something like NoConnectionException would be better.
So, my question is:
At what stage should you catch an exception to instead throw another (custom) exception that better fits the abstraction?
I would expect exceptions to talk in terms of what I've asked the originating method to do. e.g.
read -> ReadException
connect -> ConnectException
buildPortfolio -> FailedToBuildPortfolioException
etc. This abstracts away what's going on under the covers (i.e. whether you are connecting via sockets, etc.). As a general rule, when I create an interface for a component, I often create a corresponding exception or set of exceptions. My interface will be called Component, and my exceptions are usually ComponentException (e.g. RateSource and RateSourceException). It's consistent and easy to export to different projects as a complete component set.
The downside is that you create quite a lot of exceptions, and you may have to perform quite a lot of translations. The upside is that (as you've identified) you get little to no abstraction leakage.
At some point during the hierarchy of method calls (and thus exceptions) you may decide that no recovery can take place (or it's at an inappropriate place) and translate to unchecked exceptions to be handled later.
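The Component/ComponentException pairing described above could be sketched like this, using the RateSource example from the answer (the implementation details are invented for illustration):

```java
import java.io.IOException;

// Sketch of the pattern: the interface ships with its own exception type,
// and implementations translate low-level exceptions (here IOException)
// into it, so no abstraction leaks to callers.
class RateSourceException extends Exception {
    RateSourceException(String message, Throwable cause) {
        super(message, cause);
    }
}

interface RateSource {
    double currentRate(String symbol) throws RateSourceException;
}

class NetworkRateSource implements RateSource {
    @Override
    public double currentRate(String symbol) throws RateSourceException {
        try {
            return fetchOverNetwork(symbol);
        } catch (IOException e) {
            // Translate to the component's exception; the cause is kept
            // for diagnostics, but callers only see RateSourceException.
            throw new RateSourceException(
                "failed to fetch rate for " + symbol, e);
        }
    }

    // Stand-in for real socket/HTTP code; always fails in this sketch.
    private double fetchOverNetwork(String symbol) throws IOException {
        throw new IOException("connection refused");
    }
}
```

Callers depend only on RateSource and RateSourceException, so swapping the network implementation for a file-based one changes no catch clauses.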
I know this is tagged as "language-agnostic", but I don't think it really is. Coming from a C++ perspective, I expect very few basic operations to throw an exception; the C++ Standard Library only uses exceptions in a very few places. So my own code is often the first place where exceptions can be generated. In that code, I like a very flat hierarchy: I don't want to be messing with hundreds of catch() clauses later in the code, and I have never understood Java's and C#'s apparent obsession with creating baroque hierarchies of class and namespace.
So, for my C++ code - one type of exception, containing a meaningful error message, per library. And one for the final executable.
I think there are two questions hidden here:
a) When should one hide an exception behind a different exception.
b) When should one use a custom exception for this.
a) I'd say: whenever an exception travels across the border between two layers in the application, it should get hidden behind an exception that is more appropriate for the new layer.
Example: because you are doing some remote stuff, you get a ConnectionWhatEverException.
But the caller shouldn't be aware of connection problems. Since he just wants to get some service performed, he gets a ServiceOutOfOrderException instead. The reason for this is: inside the layer doing the remoting, you might be able to do something useful with a ConnectionException (retry, write into a backout queue, ...). Once you leave that layer, nobody knows how to handle a ConnectionException. But they should be able to decide what to do when the service does not work.
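That layering could be sketched as follows (all class names are hypothetical, and the connection failure is simulated):

```java
// Sketch of the layering: inside the remoting layer a connection failure is
// meaningful and is retried; only when retries are exhausted is it
// translated into an exception the caller's layer can act on.
class ConnectionException extends Exception {
    ConnectionException(String message) { super(message); }
}

class ServiceOutOfOrderException extends Exception {
    ServiceOutOfOrderException(String message, Throwable cause) {
        super(message, cause);
    }
}

class RemoteServiceClient {
    private final int maxRetries;
    private int attempts = 0;

    RemoteServiceClient(int maxRetries) { this.maxRetries = maxRetries; }

    int attempts() { return attempts; }

    // The layer boundary: ConnectionException never escapes this method.
    String performService() throws ServiceOutOfOrderException {
        ConnectionException last = null;
        for (int i = 0; i < maxRetries; i++) {
            try {
                return callRemote();
            } catch (ConnectionException e) {
                last = e; // inside this layer we know what to do: retry
            }
        }
        throw new ServiceOutOfOrderException(
            "service unavailable after " + maxRetries + " attempts", last);
    }

    // Stand-in for real remoting; always fails in this sketch.
    private String callRemote() throws ConnectionException {
        attempts++;
        throw new ConnectionException("connection reset");
    }
}
```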
b) When there is no matching existing exception. There are a couple of useful exceptions in Java, for example; I use IllegalStateException and IllegalArgumentException quite often. A strong argument for a new exception class is having some useful context to provide: for example, the name of the service that failed could be an argument of a ServiceFailedException. Just don't create a class for every method call, or anything to that effect. 100 exception classes aren't a problem, as long as they have different behavior (i.e. at least different fields). If they differ only by name and reside on the same abstraction level, make them one exception and put the different names in the message or in a single field of that exception class.
c) At least in Java there is the discussion about checked exceptions. I wrap those directly in an unchecked one, because I hate the checked kind. But that is more an opinion than advice.
Is there any case where you would get NoConnectionException which isn't caused by an IO issue? Conversely, is knowing whether the cause is IO based or not going to help the client recover sensibly?
When should you throw a custom exception?
I. When you can provide more (diagnostic) information.
Note: this additional information may not be available at the place where the original exception (IOException) was thrown. Progressive layers of abstraction may have more information to add, such as what you were trying to do that led to this exception.
II. When you must not expose implementation details, i.e. you want the (illusion of?) abstraction to continue.
This may be important when the underlying implementation mechanism can change. Wrapping the underlying exception in a custom exception is a good way of insulating your clients from implementation details (by lifting the level of abstraction).
III. Both I and II
NOTE: Furthermore, your clients should be able to tune into the exact level of information they are interested in, or rather, they should be able to tune out anything they are not interested in. So it's a good idea to derive your custom exceptions from IOException.
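Point III could be sketched like this: a hypothetical NoConnectionException that derives from IOException, so existing catch (IOException e) handlers keep working while closer handlers get the extra context:

```java
import java.io.IOException;

// Sketch: the custom exception extends IOException (so generic I/O handlers
// still catch it), adds the diagnostic context the lower layer knew about
// (host and port), and keeps the original exception as the cause.
class NoConnectionException extends IOException {
    private final String host;
    private final int port;

    NoConnectionException(String host, int port, IOException cause) {
        super("no connection to " + host + ":" + port, cause);
        this.host = host;
        this.port = port;
    }

    String host() { return host; }
    int port() { return port; }
}
```

The UI can catch NoConnectionException and show a targeted message, while callers that don't care about the distinction simply catch IOException and never notice the subclass.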