When is it right for a constructor to throw an exception?

When is it right for a constructor to throw an exception? (Or in the case of Objective C: when is it right for an init'er to return nil?)
It seems to me that a constructor should fail -- and thus refuse to create an object -- if the object isn't complete. That is, the constructor should have a contract with its caller to provide a functional, working object on which methods can be called meaningfully. Is that reasonable?

The constructor's job is to bring the object into a usable state. There are basically two schools of thought on this.
One group favors two-stage construction. The constructor merely brings the object into a sleeper state in which it refuses to do any work. There's an additional function that does the actual initialization.
I've never understood the reasoning behind this approach. I'm firmly in the group that supports one-stage construction, where the object is fully initialized and usable after construction.
One-stage constructors should throw if they fail to fully initialize the object. If the object cannot be initialized, it must not be allowed to exist, so the constructor must throw.
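The one-stage rule can be sketched in a few lines of C++. This is a minimal, hypothetical Person class (not from the question): the constructor validates its argument and throws, so no instance can ever exist in an invalid state.

```cpp
#include <stdexcept>
#include <string>

// One-stage construction: validation happens in the constructor,
// so a Person object either exists and is valid, or never exists.
class Person {
public:
    explicit Person(std::string name) : name_(std::move(name)) {
        if (name_.empty())
            throw std::invalid_argument("name must not be empty");
    }
    const std::string& name() const { return name_; }
private:
    std::string name_;
};
```

Callers never need an isValid() check: if construction returned, the invariant holds.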

Eric Lippert says there are 4 kinds of exceptions.
Fatal exceptions are not your fault, you cannot prevent them, and you cannot sensibly clean up from them.
Boneheaded exceptions are your own darn fault, you could have prevented them and therefore they are bugs in your code.
Vexing exceptions are the result of unfortunate design decisions. Vexing exceptions are thrown in a completely non-exceptional circumstance, and therefore must be caught and handled all the time.
And finally, exogenous exceptions appear to be somewhat like vexing exceptions except that they are not the result of unfortunate design choices. Rather, they are the result of untidy external realities impinging upon your beautiful, crisp program logic.
Your constructor should never throw a fatal exception on its own, but code it executes may cause a fatal exception. Something like "out of memory" isn't something you can control, but if it occurs in a constructor, hey, it happens.
Boneheaded exceptions should never occur in any of your code, so they're right out.
Vexing exceptions (the example is Int32.Parse()) shouldn't be thrown by constructors, because they don't have non-exceptional circumstances.
Finally, exogenous exceptions should be avoided, but if you're doing something in your constructor that depends on external circumstances (like the network or filesystem), it would be appropriate to throw an exception.
Reference link: https://blogs.msdn.microsoft.com/ericlippert/2008/09/10/vexing-exceptions/
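Lippert's exogenous case can be sketched in C++ with a hypothetical Config class whose constructor depends on the filesystem; the file's presence is an external reality, so throwing here is the appropriate way to refuse to create the object.

```cpp
#include <fstream>
#include <stdexcept>
#include <string>

// Exogenous failure: whether the file exists is outside our control.
// The object cannot do its job without it, so the constructor throws.
class Config {
public:
    explicit Config(const std::string& path) : in_(path) {
        if (!in_)
            throw std::runtime_error("cannot open " + path);
    }
private:
    std::ifstream in_;
};
```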

There is generally nothing to be gained by divorcing object initialization from construction. RAII is correct: a successful call to the constructor should either result in a fully initialized live object or it should fail, and ALL failures at any point in any code path should always throw an exception. You gain nothing from a separate init() method except additional complexity at some level. The ctor contract should be: either it returns a functional, valid object, or it cleans up after itself and throws.
Consider: if you implement a separate init method, you still have to call it. It will still have the potential to throw exceptions, those exceptions still have to be handled, and it virtually always has to be called immediately after the constructor anyway; except now you have 4 possible object states instead of 2 (i.e., constructed, initialized, uninitialized, and failed, versus just valid and non-existent).
In every case I've run across in 25 years of OO development, situations where a separate init method seemed like it would 'solve some problem' turned out to be design flaws. If you don't need an object NOW, then you shouldn't be constructing it now; and if you do need it now, then you need it initialized. KISS should always be the principle followed, along with the simple rule that the behavior, state, and API of any interface should reflect WHAT the object does, not HOW it does it. Client code should not even be aware that the object has any kind of internal state that requires initialization; the init-after pattern therefore violates this principle.
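The state explosion that two-stage construction causes can be sketched with a hypothetical Widget class in C++: every method now has to pay a "was init() called?" tax, and callers have to know about it.

```cpp
#include <stdexcept>

// Two-stage construction: the object is constructed "asleep" and is
// only usable after init(). Every method must now guard against the
// uninitialized state -- complexity the one-stage pattern avoids.
class Widget {
public:
    void init() { initialized_ = true; }   // could also fail, adding yet another state
    int work() {
        if (!initialized_)                 // the tax every method pays
            throw std::logic_error("Widget used before init()");
        return 42;
    }
private:
    bool initialized_ = false;
};
```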

As far as I can tell, no one is presenting a fairly obvious solution which embodies the best of both one-stage and two-stage construction.
note: This answer assumes C#, but the principles can be applied in most languages.
First, the benefits of both:
One-Stage
One-stage construction benefits us by preventing objects from existing in an invalid state, thus preventing all sorts of erroneous state management and all the bugs which come with it. However, it leaves some of us feeling weird because we don't want our constructors to throw exceptions, and sometimes that's what we need to do when initialization arguments are invalid.
public class Person
{
    public string Name { get; }
    public DateTime DateOfBirth { get; }

    public Person(string name, DateTime dateOfBirth)
    {
        if (string.IsNullOrWhiteSpace(name))
        {
            throw new ArgumentException(nameof(name));
        }
        if (dateOfBirth > DateTime.UtcNow) // side note: bad use of DateTime.UtcNow
        {
            throw new ArgumentOutOfRangeException(nameof(dateOfBirth));
        }
        this.Name = name;
        this.DateOfBirth = dateOfBirth;
    }
}
Two-Stage via validation method
Two-stage construction benefits us by allowing our validation to be executed outside of the constructor, and therefore prevents the need for throwing exceptions within the constructor. However, it leaves us with "invalid" instances, which means there's state we have to track and manage for the instance, or we throw it away immediately after heap allocation. It raises the question: why are we performing a heap allocation, and thus garbage collection, on an object we don't even end up using?
public class Person
{
    public string Name { get; }
    public DateTime DateOfBirth { get; }

    public Person(string name, DateTime dateOfBirth)
    {
        this.Name = name;
        this.DateOfBirth = dateOfBirth;
    }

    public void Validate()
    {
        if (string.IsNullOrWhiteSpace(Name))
        {
            throw new ArgumentException(nameof(Name));
        }
        if (DateOfBirth > DateTime.UtcNow) // side note: bad use of DateTime.UtcNow
        {
            throw new ArgumentOutOfRangeException(nameof(DateOfBirth));
        }
    }
}
Single-Stage via private constructor
So how can we keep exceptions out of our constructors, and prevent ourselves from performing heap allocation on objects which will be immediately discarded? It's pretty basic: we make the constructor private and create instances via a static method designated to perform instantiation, and therefore heap allocation, only after validation.
public class Person
{
    public string Name { get; }
    public DateTime DateOfBirth { get; }

    private Person(string name, DateTime dateOfBirth)
    {
        this.Name = name;
        this.DateOfBirth = dateOfBirth;
    }

    public static Person Create(
        string name,
        DateTime dateOfBirth)
    {
        if (string.IsNullOrWhiteSpace(name))
        {
            throw new ArgumentException(nameof(name));
        }
        if (dateOfBirth > DateTime.UtcNow) // side note: bad use of DateTime.UtcNow
        {
            throw new ArgumentOutOfRangeException(nameof(dateOfBirth));
        }
        return new Person(name, dateOfBirth);
    }
}
Async Single-Stage via private constructor
Aside from the aforementioned validation and heap-allocation prevention benefits, the previous methodology provides us with another nifty advantage: async support. This comes in handy when dealing with multi-stage authentication, such as when you need to retrieve a bearer token before using your API. This way, you don't end up with an invalid "signed out" API client, and instead you can simply re-create the API client if you receive an authorization error while attempting to perform a request.
public class RestApiClient
{
    private readonly HttpClient httpClient;

    private RestApiClient(HttpClient httpClient)
    {
        this.httpClient = httpClient;
    }

    public static async Task<RestApiClient> Create(string username, string password)
    {
        if (username == null)
        {
            throw new ArgumentNullException(nameof(username));
        }
        if (password == null)
        {
            throw new ArgumentNullException(nameof(password));
        }
        var basicAuthBytes = Encoding.ASCII.GetBytes($"{username}:{password}");
        var basicAuthValue = Convert.ToBase64String(basicAuthBytes);
        var authenticationHttpClient = new HttpClient
        {
            BaseAddress = new Uri("https://auth.example.io"),
            DefaultRequestHeaders = {
                Authorization = new AuthenticationHeaderValue("Basic", basicAuthValue)
            }
        };
        using (authenticationHttpClient)
        {
            var response = await authenticationHttpClient.GetAsync("login");
            var authToken = await response.Content.ReadAsStringAsync();
            var restApiHttpClient = new HttpClient
            {
                BaseAddress = new Uri("https://api.example.io"), // notice this differs from the auth uri
                DefaultRequestHeaders = {
                    Authorization = new AuthenticationHeaderValue("Bearer", authToken)
                }
            };
            return new RestApiClient(restApiHttpClient);
        }
    }
}
The downsides of this method are few, in my experience.
Generally, using this methodology means that you can no longer use the class as a DTO, because deserializing to an object without a public default constructor is hard, at best. However, if you were using the object as a DTO, you shouldn't really be validating the object itself, but rather validating the values on the object as you attempt to use them, since technically the values aren't "invalid" with regards to the DTO.
It also means that you'll end up creating factory methods or classes when you need to allow an IoC container to create the object, since otherwise the container won't know how to instantiate it. However, in a lot of cases those factory methods end up being thin wrappers around the Create methods themselves.

Because of all the trouble that a partially created class can cause, I'd say never.
If you need to validate something during construction, make the constructor private and define a public static factory method. The method can throw if something is invalid. But if everything checks out, it calls the constructor, which is guaranteed not to throw.
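A minimal C++ sketch of that split, using a hypothetical PositiveInt class: all validation lives in the static factory, and the private constructor is marked noexcept because it genuinely cannot fail.

```cpp
#include <stdexcept>

// The factory throws on invalid input; the private constructor
// only stores an already-validated value and cannot throw.
class PositiveInt {
public:
    static PositiveInt create(int v) {
        if (v <= 0)
            throw std::invalid_argument("value must be positive");
        return PositiveInt(v);  // guaranteed not to throw
    }
    int value() const noexcept { return value_; }
private:
    explicit PositiveInt(int v) noexcept : value_(v) {}
    int value_;
};
```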

A constructor should throw an exception when it is unable to complete the construction of said object.
For example, if the constructor is supposed to allocate 1024 KB of RAM and fails to do so, it should throw an exception. That way the caller of the constructor knows that the object is not ready to be used and that there is an error somewhere that needs to be fixed.
Objects that are half-initialized and half-dead just cause problems and issues, as there really is no way for the caller to know. I'd rather have my constructor throw an error when things go wrong than have to rely on the programmer to call an isOK() function which returns true or false.

It's always pretty dodgy, especially if you're allocating resources inside a constructor; depending on your language the destructor won't get called, so you need to clean up manually. It depends on when an object's lifetime begins in your language.
The only time I've really done it is when there's been a security problem somewhere that means the object should not, rather than cannot, be created.

It's reasonable for a constructor to throw an exception so long as it cleans itself up properly. If you follow the RAII paradigm (Resource Acquisition Is Initialization) then it is quite common for a constructor to do meaningful work; a well-written constructor will in turn clean up after itself if it can't fully be initialized.
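The "cleans itself up properly" part matters because in C++ the destructor does not run for an object whose constructor threw. A sketch with a hypothetical Pair class using raw new/delete to make the cleanup explicit (in real code, smart-pointer members would make this automatic):

```cpp
// If acquiring the second resource fails, the constructor must
// release the first one itself: the destructor will NOT run for a
// partially constructed object.
class Pair {
public:
    Pair() : a_(new int(1)), b_(nullptr) {
        try {
            b_ = new int(2);   // imagine this acquisition can fail
        } catch (...) {
            delete a_;         // release what was already acquired
            throw;             // propagate: no Pair object exists
        }
    }
    ~Pair() { delete b_; delete a_; }
    int sum() const { return *a_ + *b_; }
private:
    int* a_;
    int* b_;
};
```

With std::unique_ptr members instead of raw pointers, fully constructed members are destroyed automatically during stack unwinding and the try/catch disappears.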

See C++ FAQ sections 17.2 and 17.4.
In general, I have found that code is easier to port and maintain if constructors are written so they do not fail, and code that can fail is placed in a separate method that returns an error code and leaves the object in an inert state.

If you are writing UI-Controls (ASPX, WinForms, WPF, ...) you should avoid throwing exceptions in the constructor because the designer (Visual Studio) can't handle them when it creates your controls. Know your control-lifecycle (control events) and use lazy initialization wherever possible.

Note that if you throw an exception in an initializer, you'll end up leaking if any code is using the [[[MyObj alloc] init] autorelease] pattern, since the exception will skip the autorelease.
See this question:
How do you prevent leaks when raising an exception in init?

You absolutely should throw an exception from a constructor if you're unable to create a valid object. This allows you to provide proper invariants in your class.
In practice, you may have to be very careful. Remember that in C++, the destructor will not be called, so if you throw after allocating your resources, you need to take great care to handle that properly!
This page has a thorough discussion of the situation in C++.

Throw an exception if you're unable to initialize the object in the constructor, one example are illegal arguments.
As a general rule of thumb an exception should always be thrown as soon as possible, as it makes debugging easier when the source of the problem is closer to the method signaling something is wrong.

Throwing an exception during construction is a great way to make your code way more complex. Things that would seem simple suddenly become hard. For example, let's say you have a stack. How do you pop the stack and return the top value? Well, if the objects in the stack can throw in their constructors (constructing the temporary to return to the caller), you can't guarantee that you won't lose data (decrement stack pointer, construct return value using copy constructor of value in stack, which throws, and now have a stack that just lost an item)! This is why std::stack::pop does not return a value, and you have to call std::stack::top.
This problem is well described here, check Item 10, writing exception-safe code.
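The exception-safe ordering the answer describes can be sketched as a small helper (safe_pop is a hypothetical name, not part of the standard library): copy the top element first, and only then pop, so a throwing copy leaves the stack unchanged.

```cpp
#include <stack>

// Exception-safe retrieval: read the top first, then pop. If the
// copy from top() throws, the stack still holds the element; a
// hypothetical "pop that returns a value" could lose it instead.
template <typename T>
T safe_pop(std::stack<T>& s) {
    T value = s.top();  // copy may throw; stack untouched if it does
    s.pop();            // remove only after the copy succeeded
    return value;
}
```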

The usual contract in OO is that an object's methods do actually function.
So, as a corollary, never return a zombie object from a constructor/init.
A zombie is not functional and may be missing internal components: a null-pointer exception waiting to happen.
I first made zombies in Objective-C, many years ago.
Like all rules of thumb, there is an "exception".
It is entirely possible that a specific interface may have a contract which says that there exists a method "initialize" that is allowed to throw an exception, and that an object implementing this interface may not respond correctly to any calls except property setters until initialize has been called. I used this for device drivers in an OO operating system during the boot process, and it was workable.
In general, you don't want zombie objects. In languages like Smalltalk with become, things get a little fuzzy, but overuse of become is bad style too. Become lets an object change into another object in-situ, so there is no need for the envelope-wrapper idiom (Advanced C++) or the strategy pattern (GoF).

I can't address best practice in Objective-C, but in C++ it's fine for a constructor to throw an exception. Especially as there's no other way to ensure that an exceptional condition encountered at construction is reported without resorting to invoking an isOK() method.
The function try block feature was designed specifically to support failures in constructor memberwise initialization (though it may be used for regular functions also). It's the only way to modify or enrich the exception information which will be thrown. But because of its original design purpose (use in constructors) it doesn't permit the exception to be swallowed by an empty catch() clause.
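A sketch of a constructor function-try-block, using hypothetical FlakyMember/Holder classes: the catch clause can translate or enrich the exception from member initialization, but it cannot swallow it, because falling off the end of the handler re-throws automatically.

```cpp
#include <stdexcept>

struct FlakyMember {
    FlakyMember() { throw std::runtime_error("member init failed"); }
};

class Holder {
public:
    Holder() try : member_() {
        // constructor body (never reached here)
    } catch (const std::exception&) {
        // enrich/translate the failure; note that even an empty
        // handler would re-throw the original exception on exit
        throw std::runtime_error("Holder construction failed");
    }
private:
    FlakyMember member_;
};
```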

Yes, if the constructor fails to build one of its internal parts, it can be - by choice - its responsibility to throw (and in certain languages to declare) an explicit exception, duly noted in the constructor documentation.
This is not the only option: it could finish construction and build an object anyway, but with a method 'isCoherent()' returning false, in order to be able to signal an incoherent state (which may be preferable in certain cases, to avoid a brutal interruption of the execution workflow due to an exception).
Warning: as EricSchaefer says in his comment, this can add some complexity to unit testing (a throw can increase the cyclomatic complexity of the function due to the condition that triggers it).
If the constructor fails because of the caller (such as a null argument provided where a non-null argument is expected), it will throw an unchecked runtime exception anyway.

I'm not sure that any answer can be entirely language-agnostic. Some languages handle exceptions and memory management differently.
I've worked under coding standards that required exceptions never be used, only error codes from initializers, because developers had been burned by languages that handle exceptions poorly. Languages without garbage collection handle heap and stack very differently, which may matter for non-RAII objects. It is important, though, that a team decides to be consistent, so they know by default whether they need to call initializers after constructors. All methods (including constructors) should also be well documented as to what exceptions they can throw, so callers know how to handle them.
I'm generally in favor of a single-stage construction, as it's easy to forget to initialize an object, but there are plenty of exceptions to that.
Your language's support for exceptions isn't very good.
You have a pressing design reason to still use new and delete.
Your initialization is processor-intensive and should run async to the thread that created the object.
You are creating a DLL that may throw exceptions outside its interface to an application using a different language. In this case it may not be so much an issue of not throwing exceptions, but of making sure they are caught before the public interface. (You can catch C++ exceptions in C#, but there are hoops to jump through.)
Static constructors (C#)

The OP's question has a "language-agnostic" tag... this question cannot be safely answered the same way for all languages/situations.
The following C# example's class hierarchy throws in class B's constructor, which skips the immediate call to class A's IDisposable.Dispose on exit of Main's using block, skipping explicit disposal of class A's resources.
If, for example, class A had created a Socket at construction, connected to a network resource, such would likely still be the case after the using block (a relatively hidden anomaly).
class A : IDisposable
{
    public A()
    {
        Console.WriteLine("Initialize A's resources.");
    }

    public void Dispose()
    {
        Console.WriteLine("Dispose A's resources.");
    }
}

class B : A, IDisposable
{
    public B()
    {
        Console.WriteLine("Initialize B's resources.");
        throw new Exception("B construction failure: B can cleanup anything before throwing so this is not a worry.");
    }

    public new void Dispose()
    {
        Console.WriteLine("Dispose B's resources.");
        base.Dispose();
    }
}

class C : B, IDisposable
{
    public C()
    {
        Console.WriteLine("Initialize C's resources. Not called because B throws during construction. C's resources not a worry.");
    }

    public new void Dispose()
    {
        Console.WriteLine("Dispose C's resources.");
        base.Dispose();
    }
}

class Program
{
    static void Main(string[] args)
    {
        try
        {
            using (C c = new C())
            {
            }
        }
        catch
        {
        }
        // Resources allocated by c's "A" are not explicitly disposed.
    }
}

Speaking strictly from a Java standpoint, any time you initialize a constructor with illegal values, it should throw an exception. That way it does not get constructed in a bad state.

To me it's a somewhat philosophical design decision.
It's very nice to have instances which are valid as long as they exist, from ctor time onwards. For many nontrivial cases this may require throwing exceptions from the ctor if a memory/resource allocation can't be made.
Some other approaches are the init() method which comes with some issues of its own. One of which is ensuring init() actually gets called.
A variant is using a lazy approach to automatically call init() the first time an accessor/mutator gets called, but that requires any potential caller to have to worry about the object being valid. (As opposed to the "it exists, hence it's valid philosophy").
I've seen various proposed design patterns to deal with this issue too, such as being able to create an initial object via the ctor, but having to call init() to get your hands on a contained, initialized object with accessors/mutators.
Each approach has its ups and downs; I have used all of these successfully. If you don't make ready-to-use objects from the instant they're created, then I recommend a heavy dose of asserts or exceptions to make sure users don't interact before init().
Addendum
I wrote from a C++ programmer's perspective. I also assume you are properly using the RAII idiom to handle resources being released when exceptions are thrown.

Using factories or factory methods for all object creation, you can avoid invalid objects without throwing exceptions from constructors. The creation method should return the requested object if it's able to create one, or null if it's not. You lose a little bit of flexibility in handling construction errors in the user of a class, because returning null doesn't tell you what went wrong in the object creation. But it also avoids adding the complexity of multiple exception handlers every time you request an object, and the risk of catching exceptions you shouldn't handle.
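In C++ this null-returning style is naturally expressed with a smart pointer; a sketch with a hypothetical Session class and make_session factory (the trade-off being that the null result carries no reason for the failure):

```cpp
#include <memory>
#include <string>

class Session {
public:
    explicit Session(std::string user) : user_(std::move(user)) {}
    const std::string& user() const { return user_; }
private:
    std::string user_;
};

// Factory that signals failure with a null pointer instead of an
// exception; callers must test the result before using it.
std::unique_ptr<Session> make_session(const std::string& user) {
    if (user.empty())
        return nullptr;                     // no throw: just "no object"
    return std::make_unique<Session>(user);
}
```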

The best advice I've seen about exceptions is to throw an exception if, and only if, the alternative is failure to meet a post condition or to maintain an invariant.
That advice replaces an unclear subjective decision (is it a good idea) with a technical, precise question based on design decisions (invariant and post conditions) you should already have made.
Constructors are just a particular, but not special, case for that advice. So the question becomes: what invariants should a class have? Advocates of a separate initialization method, to be called after construction, are suggesting that the class has two or more operating modes, with an unready mode after construction and at least one ready mode entered after initialization. That is an additional complication, but acceptable if the class has multiple operating modes anyway. It is hard to see how that complication is worthwhile if the class would otherwise not have operating modes.
Note that pushing set up into a separate initialization method does not enable you to avoid exceptions being thrown. Exceptions that your constructor might have thrown will now be thrown by the initialization method. All the useful methods of your class will have to throw exceptions if they are called for an uninitialized object.
Note also that avoiding the possibility of exceptions being thrown by your constructor is troublesome, and in many cases impossible with many standard libraries. This is because the designers of those libraries believe that throwing exceptions from constructors is a good idea. In particular, any operation that attempts to acquire a non-shareable or finite resource (such as allocating memory) can fail, and that failure is typically indicated in OO languages and libraries by throwing an exception.


Grails 2.4.4: How to reliably rollback in a complex service method

Consider the following service (transactional by default). A player must always have one account. A player without at least one corresponding account is an error state.
class PlayerService {
    def createPlayer() {
        Player p = new Player(name: "Stephen King")
        if (!p.save()) {
            return [code: -1, errors: p.errors]
        }
        Account a = new Account(type: "cash")
        if (!a.save()) {
            // rollback p !
            return [code: -2, errors: a.errors]
        }
        // commit only now!
        return [code: 0, player: p]
    }
}
I have seen this pattern used by experienced Grails developers, and when I tell them that if creation of the player's account fails for any reason, it won't roll back the player and will leave the DB in an invalid state, they look at me like I am mad, because Grails handles rolling back the player since services are transactional, right?
So then, being a SQL guy, I look for a way to call rollback in grails. There isn't one. According to various posts, there are only 2 ways to force grails to rollback in a service:
throw an unchecked exception. You know what this is right?
don't use service methods or transactional annotations, use this construct:
DomainObject.withTransaction {status ->
//stuff
if (someError) {
status.setRollbackOnly()
}
}
1. throw an unchecked exception
1.1 So we must throw runtime exceptions to roll back. This is OK for me (I like exceptions), but it won't gel with the Grails developers we have, who view exceptions as a throwback to Java and uncool. It also means we have to change the whole way the app currently uses its service layer.
1.2 If an exception is thrown, you lose the p.errors - you lose the validation detail.
1.3 Our new Grails devs don't know the difference between an unchecked and a checked exception, and don't know how to tell the difference. This is really dangerous.
1.4. use .save(failOnError: true)
I am a big fan of using this, but it's not appropriate everywhere. Sometimes you need to check the reason before going further, not throw an exception. Are the exceptions it can generate always checked, always unchecked, or either? I.e., will failOnError ALWAYS roll back, no matter what the cause? No one I have asked knows the answer to this, which is disturbing; they are using blind faith to avoid corrupted/inconsistent DBs.
1.5 What happens if a controller calls service A, which calls Service B, then service C. Service A must catch any exception and return a nicely formatted return value to the controller. If Service C throws an exception, which is caught by Service A, will service Bs transactions be rolled back? This is critical to know to be able to construct a working application.
UPDATE 1:
Having done some tests, it appears that any runtime exception, even if thrown and caught in some unrelated child calls, will cause everything in the parent to rollback. However, it is not easy to know in the parent session that this rollback has happened - you need to make sure that if you catch any exception, you either rethrow, or pass some notice back to the caller to show that it has failed in such a way that everything else will be rolled back.
2. withTransaction
2.1 This seems a bizarre construct. How do I call this, and what do I pass in for the "status" parameter? What is "setRollbackOnly" exactly? Why is it not just called "rollback"? What is the "Only" part? It is tied to a domain object, when your method may want to update several different domain objects.
2.2 Where are you supposed to put this code? In with the DomainObject class? In the source folder (i.e. not in a service or controller?)? Directly in the controller? (we don't want to duplicate business logic in the controllers)
3. Ideal situation.
3.1 The general case is we want every thing we do in a service method to roll back if anything in that service method cant be saved for any reason, or throws any exception for any reason (checked or unchecked).
3.2 Ideally I would like service methods to "always roll back, unless I explicitly call commit", which is the safest strategy, but I believe this is not possible.
The question is how do I achieve the ideal situation?
Will calling save(failOnError:true) ALWAYS rollback everything, no matter what the reason for failing? This is not perfect, as it is not easy for the caller to know which domain object save caused the issue.
Or do people define lots of exception classes which subclass RuntimeException, then explicitly catch each of them in the controller to create the appropriate response? This is the old Java way, and our Groovy devs pooh-pooh this approach due to the amount of boilerplate code we would have to write.
What methods do people use to achieve this?
I wouldn't call myself an expert, and this question is over a year old, but I can answer some of these questions, if only for future searchers. I'm just now refactoring some controllers to use services in order to take advantage of transactions.
I have seen this pattern by experienced grails developers, and when I tell them that if creation of the account of the player fails for any reason, it wont rollback the player, and will leave the DB in an invalid state, they look at me like I am mad because grails handles rolling back the player because services are transactional right?
I'm not seeing in the documentation where it explicitly states that returning from a service method does not rollback the transaction, but I can't imagine that this would be a very sane behavior. Still, testing is an easy way to prove yourself.
1.2 If an exception is thrown, you lose the p.errors - you lose the validation detail.
Since you're the one throwing the exception, you can throw the errors along with it. For instance:
// in service
if (!email.save()) {
throw new ValidationException("Couldn't save email ${params.id}", email.errors)
}
When you catch the exception, you reload the instance (because throwing an exception clears the session), put the errors back into the instance, and then pass that to the view as usual:
// in controller
} catch (ValidationException e) {
def email = Email.read(id)
email.errors = e.errors
render view: "edit", model: [emailInstance: email]
}
This is discussed under the heading "Validation Errors and Rollback", down the page from http://grails.github.io/grails-doc/2.4.4/guide/single.html#transactionsRollbackAndTheSession.
1.4. use .save(failOnError: true) I am a big fan of using this, but it's not appropriate everywhere. Sometimes you need to check the reason before going further, not throw an exception. Are the exceptions it can generate always checked, always unchecked, or either? I.e., will failOnError ALWAYS roll back, no matter what the cause? No one I have asked knows the answer to this, which is disturbing; they are using blind faith to avoid corrupted/inconsistent DBs.
failOnError will cause save() to throw a ValidationException, so yes, if you're in a transaction and aren't checking that exception, the transaction will be rolled back.
Generally speaking, it seems to be un-"Grailsy" to use failOnError a lot, probably for the reasons you listed (e.g., lack of control). Instead, you check whether save() failed (if (!save()) ...), and take action based on that.
withTransaction
I'm not sure the point of this, because SpringSource really encourages the use of services for everything. I personally don't like it, either.
If you want to make a particular service non-transactional, and then make one method of it transactional, you can just annotate that one method with @Transactional (unless your developers also dislike annotations because they're too "Java" ;) ).
Note! As soon as you mark a single method with @Transactional, the overall service will become non-transactional.
3.1 The general case is we want every thing we do in a service method to roll back if anything in that service method cant be saved for any reason, or throws any exception for any reason (checked or unchecked).
I feel like checked exceptions are generally considered not "Groovy" (which also makes them not Grails-y). Not sure about the reason for that.
However, it looks like you can tell your service to roll back on your checked exceptions by listing them in the rollbackFor option of @Transactional.
Or do people define lots of exception classes which subclass runtimeException, then explicit catch each of them in the controller to create the appropriate response? This is the old Java way, and our groovy devs pooh pooh this approach due to the amount of boiler plate code we will have to write.
The nice thing about Groovy is that you can write your boilerplate once and then call it repeatedly. A pattern I've seen a lot, and am currently using, is something like this:
private void validate(Long id, Closure closure) {
    try {
        closure()
    } catch (ValidationException e) {
        def email = Email.read(id)
        email.errors = e.errors
        render view: "edit", model: [emailInstance: email]
    } catch (OtherException e) {
        def email = Email.read(id)
        flash.error = "${e.message}: ${e.reasons}"
        render view: "show", model: [emailInstance: email]
    } catch (Throwable t) {
        flash.error = "Unexpected error $t: ${t.message}"
        redirect action: "list"
    }
}
And then call it in each controller action like so:
def update(Long id, Long version) {
    withInstance(id, version) { Email emailInstance ->
        validate(emailInstance.id) {
            emailService.update(emailInstance, params)
            flash.message = "Email $id updated at ${new Date()}."
            redirect action: "show", id: emailInstance.id
        }
    }
}
(withInstance is another similar method that DRYs up the check for existence and optimistic locking.)
This approach has downsides. You get the same set of redirects in every action; you probably want to write one set of methods for each controller; and it seems kind of silly to pass a closure into a method and expect the method to know what exceptions the closure will throw. But hey, programming's all about tradeoffs, right?
Anyway, hope that is at least interesting.
In a Grails 2 app, the recommended way would be to use transactionStatus.setRollbackOnly(). If you have a service such as:
import grails.transaction.Transactional

class RoleService {

    @Transactional
    Role save(String authority) {
        Role roleInstance = new Role(authority: authority)
        if (!roleInstance.save()) {
            // log errors here
            transactionStatus.setRollbackOnly()
        }
        roleInstance
    }
}
See: https://github.com/grails/grails-core/issues/9212

Why should an exception inside a method be handled within the method itself and not outside?

Consider a method() that throws some exception in its body.
Consider the following scenarios:
Scenario 1:
{
//..
// ..
method(); //Exception handled inside the method
//..
//..
}
In this case the exception should be handled within the method() itself.
Also consider this:
Scenario 2:
{
//..
//..
try {
    method(); // Exception not handled within the method
}
catch() {}
//..
// ..
}
Why is a situation like scenario 2 not allowed? I.e., why is it forced that the exception be handled within the method?
You can add a throws clause to your method and make it throw an exception, which can then be handled by the caller, as in scenario 2. So it's allowed, like this:
void method() throws Exception{
}
Both scenarios are allowed. The restriction (if that's the right term) that Java imposes is that the exception must be handled somewhere.
In the first scenario, the called method handles the exception itself - so there's no need for the try-catch in the calling method.
The second scenario is valid - but only if method() includes the throws declaration to indicate that the exception must be handled somewhere else.
void method() throws Exception
{
}
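To make scenario 2 concrete, here is a hedged sketch (the method, class, and exception names are all hypothetical): the method declares a checked exception in its throws clause, and the caller decides how to handle it.

```java
public class Scenario2Demo {
    // Hypothetical checked exception, for illustration only.
    static class ConfigException extends Exception {
        ConfigException(String msg) { super(msg); }
    }

    // Scenario 2: the method declares the exception instead of handling it.
    static int parsePort(String s) throws ConfigException {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            throw new ConfigException("bad port: " + s);
        }
    }

    public static void main(String[] args) {
        int port;
        try {
            port = parsePort("not-a-number"); // exception handled here, outside the method
        } catch (ConfigException e) {
            port = 8080; // fall back to a default
        }
        System.out.println(port); // prints 8080
    }
}
```

The caller, not the method, chooses the recovery policy, which is exactly what the throws declaration enables.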
It is allowed!
Use it as follows:
public void method() throws ClassNotFoundException {
so that the caller can catch the ClassNotFoundException.
(In most cases you should not throw a bare Exception, as other posters' simplified examples do.)
Whether to catch inside or outside is a software design decision. There is no rule of thumb.
My experience is that catching outside leads to less complex code, which can be better unit tested.
Look for design pattern "last line of defense":
That means that in your topmost class, e.g. in main(), there is a catch (Exception ex) {
which catches the exception when no other method has caught it. In that case you can log the exception and (try to) safely shut down your system. This is the last line of defense.
Note: sometimes it also makes sense to catch (Throwable th).
If you handle the exception as in scenario 2, you cannot map the exception to its actual cause; i.e., you cannot describe it in terms of the code in the block where it arose. So handle the exception at the spot itself, so that you can specifically explain why it occurred.
As said above, the idea is to throw early and catch late. Overall, you want to minimize exceptions you throw by simply making your logic "flow". But there are always situations which are tricky to handle, or obfuscate code too much, so it is worth ensuring the user understands the risks of using certain methods and handles them accordingly.
TL;DR: Never trap InterruptedException unless you really mean to. It almost always needs to be thrown.
This great question is actually a very important design decision. Getting this right is difficult and getting this right across an entire team is even more difficult. The answer to the question lies within your implementation. Does the error need to be immediately shown to the user? Do you have an auxiliary method to recover from the failure? Is the exception a fatal event?
Please do yourself a favor and never write an empty exception handler. You will be very happy to have put in a log.debug statement in there when things go awry. Other than that: learn what every exception means and respond to it in the most graceful way possible.

Mock methods not directly called in unit test with JMock

I have a method under test. Within its call stack, it calls a DAO which in turn uses JDBC to chat with the DB. I am not really interested in knowing what will happen at the JDBC layer; I already have tests for that, and they work wonderfully.
I am trying to mock the DAO layer, using JMock, so I can focus on the details of this method under test. Here is a basic representation of what I have.
@Test
public void myTest()
{
    context.checking(new Expectations() {
        {
            allowing(myDAO).getSet(with(any(Integer.class)));
            will(returnValue(new HashSet<String>()));
        }
    });

    // Used only to show the mock is working but not really part of this test.
    // These asserts pass.
    Set<String> temp = myDAO.getSet(Integer.valueOf(12));
    Assert.assertNotNull(temp);
    Assert.assertTrue(temp.isEmpty());

    MyTestObject underTest = new MyTestObject();

    // Deep in this call MyDAO is initialized and getSet() is called.
    // The mock is failing to return the Set as desired. getSet() is run as
    // normal and throws a NPE since JDBC is not (intentionally) setup. I want
    // getSet() to just return an empty set at this layer.
    underTest.thisTestMethod();
    ...

    // Other assertions that would be helpful for this test if mocking
    // was working.
}
It seems, from what I have learned creating this test, that I cannot mock indirect objects using JMock. Or I am not seeing a key point. I'm hoping the second is true.
Thoughts, and thank you.
From the snippet, I'm guessing that MyTestObject uses reflection, or a static method or field to get hold of the DAO, since it has no constructor parameters. JMock does not do replacement of objects by type (and any moment now, there'll be a bunch of people recommending other frameworks that do).
This is on purpose. A goal of JMock is to highlight object design weaknesses, by requiring clean dependencies and focussed behaviour. I find that burying DAO/JDBC access in the domain objects eventually gets me into trouble. It means that the domain objects have secret dependencies that make them harder to understand and change. I prefer to make those relationships explicit in the code.
So you have to get the mocked object somehow into the target code. If you can't or don't want to do that, then you'll have to use another framework.
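One way to "get the mocked object into the target code" is constructor injection. This is a sketch under the assumption that the DAO can be expressed as an interface; the names mirror the question but are otherwise hypothetical:

```java
// A sketch of making the DAO an explicit constructor dependency, so a test
// can hand the object a JMock mock (or any fake) instead of letting it build
// its own DAO internally.
public class InjectionSketch {
    interface MyDAO {
        java.util.Set<String> getSet(Integer id);
    }

    static class MyTestObject {
        private final MyDAO dao;

        MyTestObject(MyDAO dao) { // the dependency is visible and replaceable
            this.dao = dao;
        }

        boolean thisTestMethod() {
            return dao.getSet(12).isEmpty(); // talks only to whatever was injected
        }
    }

    public static void main(String[] args) {
        // A hand-rolled fake standing in for the JMock mock:
        MyDAO fake = id -> java.util.Collections.emptySet();
        System.out.println(new MyTestObject(fake).thisTestMethod()); // prints true
    }
}
```

In the JMock test you would then write new MyTestObject(myDAO), passing in the context.mock(...) instance, and the expectations from the snippet above would take effect.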
P.S. One point of style, you can simplify this test a little:
context.checking(new Expectations() {{
allowing(myDAO).getSet(12); will(returnValue(new HashSet<String>()));
}});
Within a test, you should really know what values to expect and feed them into the expectation. That makes it easier to see the flow of values between the objects.

Is there a language with object-based access levels?

A common misconception about access level in Java, C#, C++ and PHP is that it applies to objects rather than classes. That is, that (say) an object of class X can't see another X's private members. In fact, of course, access level is class-based and one X object can effortlessly refer to the private members of another.
Does there exist a language with object-based access levels? Are they instead of, or in addition to, class-based access? What impact does this feature have on program design?
Ruby has object-based access level. Here's a citation from Programming Ruby:
The difference between "protected" and "private" is fairly subtle, and is different in Ruby than in most common OO languages. If a method is protected, it may be called by any instance of the defining class or its subclasses. If a method is private, it may be called only within the context of the calling object---it is never possible to access another object's private methods directly, even if the object is of the same class as the caller.
And here's the source: http://whytheluckystiff.net/ruby/pickaxe/html/tut_classes.html#S4
Example difference between Java and Ruby
Java
public class Main {
    public static void main(String[] args) {
        Main.A a1 = new A();
        Main.A a2 = new A();
        System.out.println(a1.foo(a2));
    }

    static class A {
        public String foo(A other_a) {
            return other_a.bar();
        }

        private String bar() {
            return "bar is private";
        }
    }
}
// Outputs
// "bar is private"
Ruby
class A
  def foo other_a
    other_a.bar
  end

  private

  def bar
    "bar is private"
  end
end

a1 = A.new
a2 = A.new
puts a1.foo(a2)
# outputs something like
# in `foo': private method `bar' called for #<A:0x2ce9f44> (NoMethodError)
The main reason why no language has support for this at the semantic level is that the various needs are too different to find a common denominator that is big enough for such a feature. Data hiding is bad enough as it is, and it gets only worse when you need even more fine grained control.
There would be advantages to such a language, for example, you could mark certain data as private for anyone but the object which created it (passwords would be a great example: Not even code running in the same application could read them).
Unfortunately, this "protection" would be superficial since at the assembler level, the protection wouldn't exist. In order to be efficient, the hardware would need to support it. In this case, probably at the level of a single byte in RAM. That would make such an application extremely secure and painfully slow.
In the real world, you'll find this in the TPM chip on your mainboard and, in a very coarse form, with the MMU tables of the CPU. But that's at a 4K page level, not at a byte level. There are libraries to handle both but that doesn't count as "language support" IMO.
Java has something like this in the form of the Security API. You must wrap the code in question in a guard which asks the current SecurityManager whether access is allowed or not.
In Python, you can achieve something similar with decorators (for methods and functions) or by implementing __setattr__ and __getattr__ for field access.
You could implement this in C# by having some method capable of walking the stack and checking which object the caller is, and throwing an exception if it's not the current class. I don't know why you would want to, but I thought I'd throw it out there.

How to separate data validation from my simple domain objects (POCOs)?

This question is language agnostic, but I am a C# guy, so I use the term POCO to mean an object that only performs data storage, usually using getter and setter fields.
I just reworked my Domain Model to be super-duper POCO and am left with a couple of concerns regarding how to ensure that the property values make sense within the domain.
For example, the EndDate of a Service should not exceed the EndDate of the Contract that Service is under. However, it seems like a violation of SOLID to put the check into the Service.EndDate setter, not to mention that as the number of validations that need to be done grows my POCO classes will become cluttered.
I have some solutions (will post in answers), but they have their disadvantages and am wondering what are some favorite approaches to solving this dilemma?
I think you're starting off with a bad assumption, ie, that you should have objects that do nothing but store data, and have no methods but accessors. The whole point of having objects is to encapsulate data and behaviors. If you have a thing that's just, basically, a struct, what behaviors are you encapsulating?
I always hear people argue for a "Validate" or "IsValid" method.
Personally I think this may work, but with most DDD projects you usually end up
with multiple validations that are allowable depending on the specific state of the object.
So I prefer "IsValidForNewContract", "IsValidForTermination" or similar, because I believe most projects end up with multiple such validators/states per class. That also means I get no common interface, but I can write aggregated validators that read very well and reflect the business conditions I am asserting.
I really do believe the generic solutions in this case very often take focus away from what's important - what the code is doing - for a very minor gain in technical elegance (the interface, delegate or whatever). Just vote me down for it ;)
A colleague of mine came up with an idea that worked out pretty well. We never came up with a great name for it but we called it Inspector/Judge.
The Inspector would look at an object and tell you all of the rules it violated. The Judge would decide what to do about it. This separation let us do a couple of things. It let us put all the rules in one place (Inspector) but we could have multiple Judges and choose the Judge by the context.
One example of the use of multiple Judges revolves around the rule that said a Customer must have an Address. This was a standard three tier app. In the UI tier the Judge would produce something that the UI could use to indicate the fields that had to be filled in. The UI Judge did not throw exceptions. In the service layer there was another Judge. If it found a Customer without an Address during Save it would throw an exception. At that point you really have to stop things from proceeding.
We also had Judges that were more strict as the state of the objects changed. It was an insurance application and during the Quoting process a Policy was allowed to be saved in an incomplete state. But once that Policy was ready to be made Active a lot of things had to be set. So the Quoting Judge on the service side was not as strict as the Activation Judge. Yet the rules used in the Inspector were still the same so you could still tell what wasn't complete even if you decided not to do anything about it.
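The Inspector/Judge split described above could be sketched roughly as follows; this is an illustrative sketch, not the original colleague's code, and all class and rule names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// The Inspector reports every rule violation; each Judge decides, per
// context, what to do about them.
public class InspectorJudgeSketch {
    static class Customer {
        String address; // null means the required Address is missing
    }

    static class Inspector {
        // All the rules live in one place.
        List<String> inspect(Customer c) {
            List<String> violations = new ArrayList<>();
            if (c.address == null) {
                violations.add("Customer must have an Address");
            }
            return violations;
        }
    }

    // UI-tier judge: never throws, just reports which fields to highlight.
    static class UiJudge {
        List<String> judge(List<String> violations) {
            return violations;
        }
    }

    // Service-tier judge: stops the Save cold.
    static class SaveJudge {
        void judge(List<String> violations) {
            if (!violations.isEmpty()) {
                throw new IllegalStateException(String.join("; ", violations));
            }
        }
    }

    public static void main(String[] args) {
        Customer c = new Customer();
        List<String> violations = new Inspector().inspect(c);
        System.out.println(new UiJudge().judge(violations)); // fields to flag in the UI
        try {
            new SaveJudge().judge(violations);
        } catch (IllegalStateException e) {
            System.out.println("Save blocked: " + e.getMessage());
        }
    }
}
```

Because both judges consume the same Inspector output, the rules stay in one place while each tier picks its own severity, which is the point of the pattern.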
One solution is to have each object's DataAccessObject take a list of Validators. When Save is called it performs a check against each validator:
public class ServiceEndDateValidator : IValidator<Service> {
    public void Check(Service s) {
        if (s.EndDate > s.Contract.EndDate)
            throw new InvalidOperationException();
    }
}

public class ServiceDao : IDao<Service> {
    private IEnumerable<IValidator<Service>> _validators;

    public ServiceDao(IEnumerable<IValidator<Service>> validators) { _validators = validators; }

    public void Save(Service s) {
        foreach (var v in _validators)
            v.Check(s);
        // Go on to save
    }
}
The benefit is very clear SoC (separation of concerns); the disadvantage is that we don't get the check until Save() is called.
In the past I have usually delegated validation to a service of its own, such as a ValidationService. This in principle still adheres to the philosophy of DDD.
Internally this would contain a collection of Validators and a very simple set of public methods such as Validate(), which could return a collection of error objects.
Very simply, something like this in C#
public class ValidationService<T>
{
    private IList<IValidator> _validators;

    public IEnumerable<Error> Validate(T objectToValidate)
    {
        foreach (IValidator validator in _validators)
        {
            foreach (Error error in validator.Validate(objectToValidate))
            {
                yield return error;
            }
        }
    }
}
Validators could either be added within a default constructor or injected via some other class such as a ValidationServiceFactory.
I think that would probably be the best place for the logic, actually, but that's just me. You could have some kind of IsValid method that checks all of the conditions too and returns true/false, maybe some kind of ErrorMessages collection but that's an iffy topic since the error messages aren't really a part of the Domain Model. I'm a little biased as I've done some work with RoR and that's essentially what its models do.
Another possibility is to have each of my classes implement
public interface Validatable<T> {
    event Action<T> RequiresValidation;
}
And have each setter for each class raise the event before setting (maybe I could achieve this via attributes).
The advantage is real-time validation checking; the downsides are messier code and that it is unclear who should be doing the attaching.
Here's another possibility. Validation is done through a proxy or decorator on the Domain object:
public class ServiceValidationProxy : Service {
    public override DateTime EndDate {
        get { return base.EndDate; }
        set {
            if (value > Contract.EndDate)
                throw new InvalidOperationException();
            base.EndDate = value;
        }
    }
}
Advantage: Instant validation. Can easily be configured via an IoC.
Disadvantage: If a proxy, validated properties must be virtual; if a decorator, all domain models must be interface-based. The validation classes will end up a bit heavyweight - proxies have to inherit the class and decorators have to implement all the methods. Naming and organization might get confusing.