In OOP, there are entities (e.g. Person) which have attributes (e.g. name, address, etc.) and methods. How do you describe new? Is it a method, or just a special token that brings an abstract entity into a real one?
Sometimes it's a method, sometimes it's just syntactic sugar that invokes an allocator method. The language matters.
To your CS student? Don't sugar-coat it; they need to be able to get their heads around the concepts pretty quickly, and using metaphors unrelated to the computer field, while fine for explaining it to your 80-year-old grandmother, will not help a CS student much.
Simply put, tell them that a class is a specification for something and that an object is a concrete instance of that something. All new does is create a concrete instance based on the specification. This includes both creation (not necessarily class-specific, which is why I'd hesitate to call it a method, reserving that term for class-bound functions) and initialisation (which is class-specific).
Depending on the language, new is a keyword which can be used as an operator or as a modifier. For instance, in C# new can be used as:
new operator - used to create objects on the heap and invoke constructors.
new modifier - used to hide an inherited member from a base class member
For brand new students I would describe new only as a keyword and leave the modifier out of the discussion. I describe classes as blueprints and new as the mechanism by which those blueprints turn into something tangible - objects.
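To make that concrete, here is a minimal Java sketch of the blueprint idea (the Person class and its field are invented for illustration):

// A class is a specification (blueprint); new produces concrete instances of it.
public class Person {
    private final String name; // part of the specification: every Person has a name

    public Person(String name) {
        this.name = name; // the initialisation step that new triggers
    }

    public static void main(String[] args) {
        // Two distinct objects built from the same blueprint.
        Person alice = new Person("Alice");
        Person bob = new Person("Bob");
        System.out.println(alice == bob); // false: separate instances
    }
}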
You may want to check out my question: How to teach object oriented programming to procedural programmers for other great answers on teaching OOP to new developers.
In most object-oriented languages, new is simply a convention for naming a factory method. But it's only one of many conventions.
In Ruby, for example, it is conventional to name the factory method [] for collection classes. In Python, classes are simply their own factories. In Io, the factory method is generally called clone, in Ioke and Seph it is called mimic.
In Smalltalk, factory methods often have more descriptive names than just new:. Something like fromList: or with:.
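As a rough Java analogue of those descriptive factory names (the Temperature class and its methods are hypothetical):

// A factory method named for what it does, rather than a bare new.
public final class Temperature {
    private final double celsius;

    private Temperature(double celsius) { // construction goes through the factories below
        this.celsius = celsius;
    }

    public static Temperature fromCelsius(double c) {
        return new Temperature(c);
    }

    public static Temperature fromFahrenheit(double f) {
        return new Temperature((f - 32) * 5.0 / 9.0);
    }

    public static void main(String[] args) {
        Temperature boiling = Temperature.fromFahrenheit(212); // reads like Smalltalk's fromList: / with:
        System.out.println(boiling.celsius); // 100.0
    }
}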
Here's a simile that has worked for me in the past.
An object definition is a Jello Mold. "new" is the process that actually makes a Jello snack from that mold. Each Goopy Jello thing that you give to your new neighbors can be different, this one's green, this one has bits of fruit in it, etc. It's its own unique "object." But the mold is the same.
Or you can use a factory analogy or something, (or blueprints vs building).
As far as its role in the syntax, it's just a keyword that lets the compiler know to allocate memory on the heap and run the constructor. There's not much more to it.
Smalltalk: it's an instance method on the metaclass. So "new is a method that returns a newly-allocated instance."
I tell people that a class is like a plan on how to make an object. An object is made from the class by new. If they need more than that, well, I just don't know what to say. ;-)
new is a keyword that calls the class constructor of the class to the right of it with the arguments listed inside ().
String str = new String("asdf");
str is declared as a String variable and initialized by invoking the String constructor with the argument "asdf".
At least that's how it was presented to me.
In Ruby, I believe it's an instance method on the metaclass. In CLOS it's a generic function called make-instance, but otherwise roughly the same.
In some languages, like Java, new has special syntax, and the metaclass part is hidden. In the case where you have to teach somebody OO with such a language, I don't know that there's much you can do. (Taking a break from teaching OO and Java to teach a second object system would almost certainly just confuse them further!) Just explain what it does, and that it's a special case.
You can say that a class is a prototype/blueprint for an object. When you use the keyword new, that prototype/blueprint comes to life. It's like you're giving a breath of life to that dead instance.
In Java,
new allocates memory for a new class instance (object)
new runs a class's constructor to initialize that instance
new returns a reference to that new instance
As far as the relationship between object/instance and class, I sometimes think:
class is to instance as blueprint is to building
new in most languages does some variation of the following:
designate some memory region for a class instance and, if necessary, inform the garbage collector how to free that memory later.
initialize that memory region in the manner specific to that class, transforming the bytes of raw memory into bytes of a valid instance of the class
return a reference to the memory location to the caller.
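A small Java sketch of those three steps (the Counter class is made up for illustration):

public class Counter {
    private int value; // memory for this field is designated by new

    public Counter(int start) {
        this.value = start; // step 2: the constructor turns raw memory into a valid instance
    }

    public static void main(String[] args) {
        Counter c = new Counter(10); // step 1 allocates, step 2 initialises, step 3 returns a reference
        Counter alias = c;           // copying the reference does not allocate again
        System.out.println(c == alias); // true: both refer to the same memory region
    }
}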
I often seem to run into the discussion of whether or not to apply some sort of prefix/suffix convention to interface type names, typically adding "I" to the beginning of the name.
Personally I'm in the camp that advocates no prefix, but that's not what this question is about. Rather, it's about one of the arguments I often hear in that discussion:
You can no longer see at-a-glance whether something is an interface or a class.
The question that immediately pops up in my head is: apart from object creation, why should you ever have to care whether an object reference is a class or an interface?
I've tagged this question as language agnostic, but as has been pointed out it may not be. I contend that it is because while specific language implementation details may be interesting, I'd like to keep this on a conceptual level. In other words, I think that, conceptually, you'd never have to care whether an object reference is typed as a class or an interface but I'm not sure, hence the question.
This is not a discussion about IDEs and what they do or don't do when visualizing the different types; caring about the type of an object is certainly a necessity when browsing through code (packages/sources/whatever form). Nor is it a discussion about the pros and cons of either naming convention. I just can't seem to figure out in what scenario, other than object creation, you actually care whether you're referencing a concrete type or an interface.
Most of the time, you probably don't care. But here are some instances that I can think of where you would. There are several, and it does vary a little bit by language. Some languages don't mind as much as others.
In the case of inversion of control (where someone PASSES you a parameter) you probably don't care if it's an interface or an object as far as calling its methods etc. But when dealing with types, it definitely can make a difference.
In managed languages such as the .NET languages, a class can inherit from only one base class but implement many interfaces (while an interface may extend several interfaces). The order of the base class and interfaces can also matter in a class declaration. So you need to know which is which when defining a new class or interface.
In Delphi / VCL, interfaces are reference-counted and automatically collected, whereas classes must be explicitly freed, so lifecycle management on the whole is affected, not just creation.
Interfaces may not be viable sources for class references.
Interfaces can be cast to compatible interfaces, but in many languages, they cannot be cast to compatible classes. Classes can be cast to either.
Interfaces may be passed to parameters of type IID, or IUnknown, whereas classes cannot (without a cast and a supporting interface).
An interface's implementation is unknown. Its input and output are defined, but the implementation that creates the output is abstracted. In general, one's attitude may be that when working with a class, one may know how the class works, but when working with an interface, no such assumption should be made. In a perfect world it might make no difference, but in reality this most certainly can affect your design.
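One concrete case in Java where the difference is forced on you: a class can be instantiated, an interface cannot. A minimal sketch using standard-library names:

import java.util.ArrayList;
import java.util.List;

public class InstantiationDemo {
    public static void main(String[] args) {
        // List<String> xs = new List<>();   // does not compile: List is an interface
        List<String> xs = new ArrayList<>(); // fine: ArrayList is a concrete class
        xs.add("hello");
        System.out.println(xs);
    }
}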
I agree with you (and thereby do not use an "I" prefix for interfaces). We shouldn't have to care whether it is an abstract class or an interface.
Worth noting that Java needs to have a notion of interface solely because it does not support multiple inheritance. Otherwise, "abstract class" concept would suffice (which may be "all" abstract, or partially abstract, or almost concrete and just 1 tiny bit abstract, whatever).
Things that concrete class can have and the interfaces can't:
Constructors
Instance fields
Static methods (interface fields, by contrast, are allowed in Java, but they are implicitly public static final constants)
So if you use the convention of starting all interface names with 'I' then it indicates to the user of your library that the particular type will not have any of the above mentioned things.
But personally I feel that this is not reason enough to start all interface names with 'I'. Modern IDEs are powerful enough to indicate whether a type is an interface. It also hides the true meaning of an interface name: imagine if the Runnable and List interfaces were named IRunnable and IList respectively.
When a class is used, I can make the assumption that I will get objects from a relatively small and almost well-defined range of subclasses. That's because subclassing is - or at least it should be - a decision that isn't made too easily, especially in languages that don't support multiple inheritance. In contrast, interfaces can be implemented by any class, and the implementation can be added later to any class.
So the information is useful, especially when browsing through code, and trying to get a feeling what the code author intended to do - but I think it should be enough, if the IDE shows interfaces/classes as distinctive icons.
You want to see at a glance which are the "interfaces" and which are the "concrete classes" so that you can focus your attention to the abstractions in the design instead of the details.
Good designs are based on abstractions - if you know and understand them you understand the system without knowing any of the details. So you know you can skip the classes without the I prefix, and focus on the ones that do have it while you are understanding the code, and you also know to avoid building new code around non-interface classes without having to refer to some other design document.
I agree that the I* naming convention is just not appropriate for modern OO languages, but the truth is this question isn't really language agnostic. There are legitimate cases where you have an interface not for any architectural reason but because you simply don't have an implementation, or don't have access to one. For these cases you can read I* as *Stub or similar, and in these cases it might make sense to have both an IBlah and a Blah class.
These days, though, you rarely come up against this, and in modern OO languages when you say Interface you actually mean Interface, not just "I don't have the code for this". So there is no need for the I*, and in fact it encourages really bad OO design because you won't get the natural naming conflicts that would tell you something's gone wrong in your architecture. Say you had a List and an IList... what's the difference? When would you use one over the other? If you wanted to implement IList, would you be constrained (conceptually at least) by what List does? I'll tell you what: if I found both an IBlah and a Blah class in any of my codebases, I would purge one at random and take away that person's commit privileges.
Interfaces don't have fields, hence when you use IDisposable (or whatever), you know you're only declaring what you can do. That seems to me the main point of it.
Distinguishing between interfaces and classes may be useful, anywhere the type is referenced, in the IDE or out, to determine:
Can I make a new implementation of this type?
Can I implement this interface in a language that does not support multiple inheritance of implementation classes (e.g., Java)?
Can there be multiple implementations of this type?
Can I easily mock this interface in an arbitrary mocking framework?
It is worth noting that UML distinguishes between interfaces and implementation classes. In addition, the "I" prefix is used in the examples in "The Unified Modeling Language User Guide" by the three amigos Booch, Jacobson and Rumbaugh. (Incidentally, this also provides an example why IDE syntax coloring alone is not sufficient to distinguish in all contexts.)
You should care, because:
An interface with a capital "I" lets you or your co-workers use any implementation of the interface. If in the future you figure out a better way to do something, say a better list-sorting algorithm, you can just swap in a new implementation; without the interface you would be stuck having to change ALL of the invoking methods as well.
It helps in understanding code - e.g. you don't need to memorize all 10 implementations of, say, ISortableList; you just care that it sorts a list (or something like that). Your code becomes practically self-documenting here.
To complete the discussion, here is a pseudocode example illustrating the above:
//Pseudocode - define implementations of ISortableList
class SortList1 : ISortableList, SortList2 : ISortableList, SortList3 : ISortableList

//Pseudocode - the interface way
void Populate(ISortableList list, int[] nums)
{
    list.set(nums)
}

//Pseudocode - the "I don't care" way
void Populate2(SortList1 list, int[] nums)
{
    list.set(nums)
}

...

//Pseudocode - create instances
SortList1 list1 = new SortList1();
SortList2 list2 = new SortList2();
SortList3 list3 = new SortList3();

//Invoke Populate() - the "interface way"
Populate(list1, nums); //OK, list1 is an ISortableList implementation
Populate(list2, nums); //OK, list2 is an ISortableList implementation
Populate(list3, nums); //OK, list3 is an ISortableList implementation

//Invoke Populate2() - the "I don't care way"
Populate2(list1, nums); //OK, list1 is an instance of SortList1
Populate2(list2, nums); //Not OK, list2 is not of the required argument type, won't compile
Populate2(list3, nums); //Same as above
Hope this helps,
Jas.
Here is the problem statement: calling a setter on an object should cause it to change into an object of a different class. Which languages can support this?
Ex. I have a class called "Man" (the parent class) and two children, namely "Toddler" and "Old Man"; they are its children because they override a behaviour in Man called walk (e.g. a toddler sometimes walks on both his hands and legs, kneeled down, and the old man uses a stick to support himself).
The Man class has an attribute called age, and I have a setter on Man, say setAge(int ageValue). The system is up and running, and I have 3 objects: 2 toddlers and 1 old man. I will make this call: toddler.setAge(80). I expect the toddler to change to an object of type Old Man. Is this possible? Please suggest.
Thanks,
This sounds to me like the model is wrong. What you have is a Person whose relative temporal grouping and some specific behavior changes with age.
Perhaps you need a method named getAgeGroup() which returns an appropriate Enum, depending on what the current age is. You also need an internal state object which encapsulates the state-specific behavior to which your Person delegates behavior which changes with age.
That said, dynamically changing the type of an instantiated object will likely be doable only in dynamically typed languages; it's certainly not doable in Java, and probably not in C# or most other statically typed languages.
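A minimal Java sketch of that suggestion (the enum values and age boundaries are invented):

public class Person {
    public enum AgeGroup { TODDLER, ADULT, SENIOR }

    private int age;

    public void setAge(int age) {
        this.age = age; // state changes; the object's class never does
    }

    // Derive the grouping from current state instead of swapping classes.
    public AgeGroup getAgeGroup() {
        if (age <= 4) return AgeGroup.TODDLER;
        if (age < 65) return AgeGroup.ADULT;
        return AgeGroup.SENIOR;
    }

    public void walk() {
        AgeGroup group = getAgeGroup();
        if (group == AgeGroup.TODDLER) {
            System.out.println("crawls on hands and knees");
        } else if (group == AgeGroup.SENIOR) {
            System.out.println("walks with a stick");
        } else {
            System.out.println("walks normally");
        }
    }
}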
This is a common problem that you can solve using combination of OO modelling and design patterns.
You will model the classes the way you have, where Toddler and OldMan inherit from the Man base class. You will need to introduce a Proxy (see the GoF design patterns) class as your access point to your Man class. Internally, the proxy holds a reference to either a Toddler or an OldMan. The proxy exposes all the interfaces exposed by the Man class so that you can use it as-is, and in your scenario you would implement setAge similar to the pseudocode below:
public void setAge(int age)
{
    if (age > TODDLER_MAX && myMan instanceof Toddler) {
        myMan = new OldMan();
    }
    // ... handle any other transitions here ...
    myMan.setAge(age);
}
If your language does not support changing the classtype at runtime, take a look at the decorator and strategy patterns.
Objects in Python can change their class by setting the __class__ attribute. Otherwise, use the Strategy pattern.
I wonder if subclassing is really the best solution here. A property (enum, probably) that has different types of people as its possible values is one alternative. Or, for that matter, a derived property or method that tells you the type of person based on the age.
Javascript can do this. At any time you can take an existing object and add new methods to it, or change its existing methods. This can be done at the individual object level.
Douglas Crockford writes about this in Classical Inheritance in JavaScript:
Class Augmentation

JavaScript's dynamism allows us to add or replace methods of an existing class. We can call the method method at any time, and all present and future instances of the class will have that method. We can literally extend a class at any time. Inheritance works retroactively. We call this Class Augmentation to avoid confusion with Java's extends, which means something else.

Object Augmentation

In the static object-oriented languages, if you want an object which is slightly different than another object, you need to define a new class. In JavaScript, you can add methods to individual objects without the need for additional classes. This has enormous power because you can write far fewer classes and the classes you do write can be much simpler. Recall that JavaScript objects are like hashtables. You can add new values at any time. If the value is a function, then it becomes a method.
Common Lisp can: use the generic function CHANGE-CLASS.
I am surprised no one so far seemed to notice that this is the exact case for the State design pattern (although #Fadrian in fact described the core idea of the pattern quite precisely - without mentioning its name).
The state pattern is a behavioral software design pattern, also known as the objects for states pattern. This pattern is used in computer programming to represent the state of an object. This is a clean way for an object to partially change its type at runtime.
The referenced page gives examples in Java and Python. Obviously it can be implemented in other strongly typed languages as well. (OTOH weakly typed languages have no need for State, as these support such behaviour out of the box.)
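For reference, here is a compact Java sketch of the State pattern applied to the question's example (all names and the age threshold are hypothetical):

// The state interface captures the age-dependent behaviour.
interface WalkingState {
    void walk();
}

class ToddlerState implements WalkingState {
    public void walk() { System.out.println("crawls on hands and knees"); }
}

class OldManState implements WalkingState {
    public void walk() { System.out.println("walks with a stick"); }
}

// The context keeps a stable identity while its behaviour object is swapped.
public class Man {
    private WalkingState state = new ToddlerState();

    public void setAge(int age) {
        state = (age >= 65) ? new OldManState() : new ToddlerState(); // simplistic threshold
    }

    public void walk() {
        state.walk();
    }

    public static void main(String[] args) {
        Man man = new Man();
        man.setAge(2);
        man.walk();     // crawls on hands and knees
        man.setAge(80); // the same object now behaves like an old man
        man.walk();     // walks with a stick
    }
}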
Allen Holub wrote the following,
You can't have a program without some coupling. Nonetheless, you can minimize coupling considerably by slavishly following OO (object-oriented) precepts (the most important is that the implementation of an object should be completely hidden from the objects that use it). For example, an object's instance variables (member fields that aren't constants), should always be private. Period. No exceptions. Ever. I mean it. (You can occasionally use protected methods effectively, but protected instance variables are an abomination.)
Which sounds reasonable, but he then goes on to say,
You should never use get/set functions for the same reason—they're just overly complicated ways to make a field public (though access functions that return full-blown objects rather than a basic-type value are reasonable in situations where the returned object's class is a key abstraction in the design).
Which, frankly, just sounds insane to me.
I understand the principle of information hiding, but without accessors and mutators you couldn't use Java beans at all. I don't know how you would follow a MVC design without accessors in the model, since the model can not be responsible for rendering the view.
However, I am a younger programmer and I learn more about Object Oriented Design everyday. Perhaps someone with more experience can weigh in on this issue.
Allen Holub's articles for reference
Why Extends Is Evil
Why Getter And Setter Methods Are Evil
Related Questions:
Java: Are Getters and Setters evil?
Is it really that wrong not using setters and getters?
Are get and set functions popular with C++ programmers?
Should you use accessor properties from within the class, or just from outside of the class?
I don't have a problem with Holub telling you that you should generally avoid altering the state of an object but instead resort to integrated methods (execution of behaviors) to achieve this end. As Corletk points out, there is wisdom in thinking long and hard about the highest level of abstraction and not just programming thoughtlessly with getters/setters that just let you do an end-run around encapsulation.
However, I have a great deal of trouble with anyone who tells you that you should "never" use setters or should "never" access primitive types. Indeed, the effort required to maintain this level of purity in all cases can and will end up causing more complexity in your code than using appropriately implemented properties. You just have to have enough sense to know when you are skirting the rules for short-term gain at the expense of long-term pain.
Holub doesn't trust you to know the difference. I think that knowing the difference is what makes you a professional.
Read through that article carefully. Holub is pushing the point that getters and setters are an evil "default antipattern", a bad habit that we slip into when designing a system; because we can.
The thought process should be along the lines; What does this object do? What are its responsibilities? What are its behaviours? What does it know? Thinking long and hard on these questions leads you naturally towards designing classes which expose the highest-level interface possible.
A car is a good example. It exposes a well-defined, standardised high-level interface. I don't concern myself with setSpeed(60)... is that MPH or km/h? I just accelerate, cruise, decelerate. I don't have to think about the details in setSteeringWheelAngle(getSteeringWheelAngle() + Math.toRadians(-1.5)), I just turn(-1.5), and the details are taken care of under the hood.
It boils down to: you can and should figure out what every class will be used for, what it does, what it represents, and expose the highest-level interface possible which fulfills those requirements. Getters and setters are usually a cop-out, used when the programmer is too lazy to do the analysis required to determine exactly what each class is and is not, and so we go down the path of "it can do anything". Getters and setters are evil!
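To make the contrast concrete, here is a hedged Java sketch of the same car modelled both ways (all names are invented):

// Low-level, getter/setter style: callers must know units and steering geometry.
class RawCar {
    private double speed;         // MPH? km/h? the caller has to know
    private double steeringAngle;

    public double getSpeed() { return speed; }
    public void setSpeed(double speed) { this.speed = speed; }
    public double getSteeringWheelAngle() { return steeringAngle; }
    public void setSteeringWheelAngle(double a) { this.steeringAngle = a; }
}

// High-level behavioural interface: the same state, kept under the hood.
class Car {
    private double speed;
    private double steeringAngle;

    public void accelerate()         { speed += 5; }
    public void decelerate()         { speed = Math.max(0, speed - 5); }
    public void turn(double degrees) { steeringAngle += degrees; }
}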
Sometimes the actual requirements for a class are unknowable ahead of time. That's cool; just cop out and use the getter/setter antipattern for now, but when you do know, through experience, what the class is being used for, you'll probably want to come back and clean up the dirty low-level interface. Refactoring based on "stuff you wish you knew when you wrote the sucker in the first place" is par for the course. You don't have to know everything in order to make a start; it's just that the more you do know, the less rework is likely to be required along the way.
That's the mentality he's promoting. Getters and setters are an easy trap to fall into.
Yes, beans basically require getters and setters, but to me a bean is a special case. Beans represent nouns, things, tangible identifiable (if not physical) objects. Not a lot of objects actually have automatic behaviours; most times things are manipulated by external forces, including humans, to make them productive things.
daisy.setColor(Color.PINK) makes perfect sense. What else can you do? Maybe a Vulcan mind-meld, to make the flower want to be pink? Hmmm?
Getters and setters have their "evil" place. It's just that, like all really good OO things, we tend to overuse them, because they are safe and familiar, not to mention simple, and therefore it might be better if noobs didn't see or hear about them, at least until they'd mastered the mind-meld thing.
I think what Allen Holub tried to say, rephrased in this article, is the following.
Getters and setters can be useful for variables that you specifically want to encapsulate, but you don't have to use them for all variables. In fact, using them for all variables is nasty code smell.
The trouble programmers have, and Allen Holub was right in pointing it out, is that they sometimes use getters/setters for all variables. And the purpose of encapsulation is lost.
(note I'm coming at this from a .NET "property" angle)
Well, simply - I don't agree with him; he makes a big fuss about the return type of properties being a bad thing because it can break your calling code - but exactly the same argument would apply to method arguments. And if you can't use methods either?
OK, method arguments could be changed as widening conversions, but.... just why... Also, note that in C# the var keyword could mitigate a lot of this perceived pain.
Accessors are not an implementation detail; they are the public API / contract. Yup, if you break the contract you have trouble. When did that become a surprise? Likewise, it is not uncommon for accessors to be non-trivial - i.e. they do more than just wrap fields; they perform calculations, logic checks, notifications, etc. And they allow interface-based abstractions of state. Oh, and polymorphism - etc.
Re the verbose nature of accessors (page 3? 4?) - in C#: public int Foo {get; private set;} - job done.
Ultimately, all of code is a means to express our intent to the compiler. Properties let me do that in a type-safe, contract-based, verifiable, extensible, polymorphic way - thanks. Why do I need to "fix" this?
Getters and setters are used as little more than a mask to make a private variable public.
There's no point repeating what Holub said already but the crux of it is that classes should represent behaviour and not just state.
Here are some quotes from the article, each followed by my opposing view:
Though getIdentity starts with "get," it's not an accessor because it doesn't just return a field. It returns a complex object that has reasonable behavior
Oh but wait... then it's okay to use accessors as long as you return objects instead of primitive types? Now that's a different story, but it's just as dumb to me. Sometimes you need an object, sometimes you need a primitive type.
Also, I notice that Allen has radically softened his position since his previous column on the same topic, where the mantra "Never use accessors" didn't suffer one single exception. Maybe he realized after a few years that accessors do serve a purpose after all...
Bear in mind that I haven't actually put any UI code into the business logic. I've written the UI layer in terms of AWT (Abstract Window Toolkit) or Swing, which are both abstraction layers.
Good one. What if you are writing your application on SWT? How "abstract" is AWT really in that case? Just face it: this advice simply leads you to write UI code in your business logic. What a great principle. After all, it's been at least ten years since we identified this practice as one of the worst design decisions you can make in a project.
My problem, as a novice programmer, is that I sometimes stumble onto articles on the internet and give them more credence than I should. Perhaps this is one of those cases.
When ideas like these are presented to me, I like to take a look at libraries and frameworks I use and which I like using.
For example, although some will disagree, I like the Java Standard API. I also like the Spring Framework. Looking at the classes in these libraries, you will notice that very rarely there are setters and getters which are there just to expose some internal variable. There are methods named getX, but that does not mean it is a getter in the conventional sense.
So, I think he has a point, and it is this: every time you choose "Generate getters/setters" in Eclipse (or your IDE of choice), you should take a step back and wonder what you are doing. Is it really appropriate to expose this internal representation, or did I mess up my design at some step?
I don't believe he's saying never use get/set, but rather that using get/set for a field is no better than just making the field public (e.g. public string Name vs. public string Name {get; set; }).
If get/set is used it limits the information hiding of OO which can potentially lock you into a bad interface.
In the above example, Name is a string; what if we want to change the design later to add multiple Names? The interface exposed only a single string so we can’t add more without breaking existing implementation.
However, if instead of using get/set you initially had a method such as Add(string name), internally you could process name singularly or add to a list or what not and externally call the Add method as many times as you want to add more Names.
The OO goal is to design with a level of abstraction; don’t expose more detail than you absolutely have to.
Chances are if you’ve just wrapped a primitive type with a get/set you’ve broken this tenet.
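A short Java sketch of that idea (the Contact class and its methods are illustrative):

import java.util.ArrayList;
import java.util.List;

public class Contact {
    // Internal representation: free to grow from a single name to many.
    private final List<String> names = new ArrayList<>();

    // The interface exposes an operation, not the storage.
    public void add(String name) {
        names.add(name);
    }

    public String displayName() {
        return String.join(" ", names); // callers never see the list itself
    }
}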
Of course, this is if you believe in the OO goals; I find that most don't, not really - they just use objects as a convenient way to group functional code.
Public variables make sense when the class is nothing more than a bundle of data with no real coherency, or when it's really, really elementary (such as a point class). In general, if there's any variable in a class that you think probably shouldn't be public, that means that the class has some coherence, and variables have a certain relation that should be maintained, so all variables should be private.
Getters and setters make sense when they reflect some sort of coherent idea. In a polygon class, for example, the x and y coordinates of given vertices have a meaning outside the class boundary. It probably makes sense to have getters, and it likely makes sense to have setters. In a bank account class, the balance is probably stored as a private variable, and almost certainly should have a getter. If it has a setter, it needs to have logging built in to preserve auditability.
There are some advantages of getters and setters over public variables. They provide some separation of interface and implementation. Just because a point has a .getX() function doesn't mean there has to be an x, since .getX() and .setX() can be made to work just fine with radial coordinates. Another is that it's possible to maintain class invariants, by doing whatever's necessary to keep the class consistent within the setter. Another is that it's possible to have functionality that triggers on a set, like the logging for the bank account balance.
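For instance, a Java point could store polar coordinates internally while still honouring getX/setX (a sketch, not a recommended design):

public class Point {
    // Internal representation is polar; the x/y accessors are derived.
    private double r;
    private double theta;

    public double getX() { return r * Math.cos(theta); }
    public double getY() { return r * Math.sin(theta); }

    public void setX(double x) {
        double y = getY();     // keep y fixed while x changes
        r = Math.hypot(x, y);
        theta = Math.atan2(y, x);
    }
}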
However, for more abstract classes, the member variables lose individual significance, and only make sense in context. You don't need to know all the internal variables of a C++ stream class, for example. You need to know how to get elements in and out, and how to perform various other actions. If you counted on the exact internal structure, you'd be bogged down in detail that could arbitrarily vary between compilers or versions.
So, I'd say to use private variables almost exclusively, getters and setters where they have a real meaning in object behavior, and not otherwise.
Just because getters and setters are frequently overused doesn't mean they're useless.
The trouble with getters/setters is that they try to fake encapsulation but actually break it by exposing their internals. Secondly, they try to do two separate things - provide access to state and control it - and end up doing neither very well.
It breaks encapsulation because when you call a get/set method, you first need to know the name (or have a good idea) of the field you want to change, and second you have to know its type, e.g. you couldn't call
setPositionX("some string");
If you know the name and type of the field, and the setter is public, then anyone can call the method as if it were a public field anyway. It's just a more complicated way of doing it, so why not simplify things and make it a public field in the first place?
By allowing access to its state while trying to control it at the same time, a get/set method just confuses things and ends up either being useless boilerplate or being misleading - not actually doing what it says it does, or having side-effects the user might not expect. If error checking is needed, the method could be called something like
public void tryPositionX(int x) throws InvalidParameterException {
    if (x >= 0)
        this.x = x;
    else
        throw new InvalidParameterException("Holy Negatives Batman!");
}
or, if additional code is needed, it could be given a more accurate name based on what the whole method does, e.g.
public void tryPositionXAndLog(int x) throws InvalidParameterException {
    tryPositionX(x);
    numChanges++;
}
IMHO needing getters/setters to make something work is often a symptom of bad design.
Make use of the "tell, don't ask" principle, or re-think why an object needs to send out its state data in the first place. Expose methods that change an object's behaviour instead of its state. Benefits of that include easier maintenance and increased extensibility.
You mention MVC too and say a model can't be responsible for its view; for that case Allen Holub gives an example of making an abstraction layer with a "give-me-a-JComponent-that-represents-your-identity" class, which he says would "isolate the way identities are represented from the rest of the system." I'm not experienced enough to comment on whether that would work, but on the surface it sounds like a decent idea.
Public getters/setters are bad if they provide access to implementation details. Yet it is reasonable to provide access to an object's properties and to use getters/setters for this. For example, if Car has a color property, it's acceptable to let clients "observe" it using a getter. If some client needs the ability to recolor a car, the class can provide a setter ('recolor' is a clearer name, though). It is important not to let clients know how properties are stored in objects, how they are maintained, and so on.
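In Java that might look like the following sketch (the packed-int storage is an invented implementation detail):

public class Car {
    private int rgb; // implementation detail: colour packed into an int

    // Clients may observe the colour...
    public java.awt.Color getColor() {
        return new java.awt.Color(rgb);
    }

    // ...and request a recolour, without ever learning how it is stored.
    public void recolor(java.awt.Color color) {
        this.rgb = color.getRGB();
    }
}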
Ummmm... has he never heard of the concept of encapsulation? Getter and setter methods are put in place to control access to a class's members. By making all fields publicly visible, anybody could write whatever values they wanted to them, thereby completely invalidating the entire object.
Just in case anybody is a little fuzzy on the concept of Encapsulation, read up on it here:
Encapsulation (Computer Science)
...and if they're really evil, would .NET build the Property concept into the language? (Getter and Setter methods that just look a little prettier)
EDIT
The article does mention Encapsulation:
"Getters and setters can be useful for variables that you specifically want to encapsulate, but you don't have to use them for all variables. In fact, using them for all variables is nasty code smell."
Using this method will lead to extremely hard-to-maintain code in the long run. If you find out halfway through a project that spans years that a field needs to be encapsulated, you're going to have to update EVERY REFERENCE to that field everywhere in your software to get the benefit. Sounds a lot smarter to use proper encapsulation up front and save yourself the headache later.
I think that getters and setters should only be used for variables which one needs to access or change outside a class. That being said, I don't believe variables should be public unless they're static. This is because making variables public which aren't static can lead to them being changed undesirably. Let's say you have a developer who is carelessly using public variables. He then accesses a variable from another class and without meaning to, changes it. Now he has an error in his software as a result of this mishap. That's why I believe in the proper use of getters and setters, but you don't need them for every private or protected variable.
I was confused when I first started to see anti-singleton commentary. I have used the singleton pattern in some recent projects, and it was working out beautifully. So much so, in fact, that I have used it many, many times.
Now, after running into some problems, reading this SO question, and especially this blog post, I understand the evil that I have brought into the world.
So: How do I go about removing singletons from existing code?
For example:
In a retail store management program, I used the MVC pattern. My Model objects describe the store, the user interface is the View, and I have a set of Controllers that act as liaison between the two. Great. Except that I made the Store into a singleton (since the application only ever manages one store at a time), and I also made most of my Controller classes into singletons (one mainWindow, one menuBar, one productEditor...). Now, most of my Controller classes access the other singletons like this:
Store managedStore = Store::getInstance();
managedStore.doSomething();
managedStore.doSomethingElse();
//etc.
Should I instead:
Create one instance of each object and pass references to every object that needs access to them?
Use globals?
Something else?
Globals would still be bad, but at least they wouldn't be pretending.
I see #1 quickly leading to horribly inflated constructor calls:
someVar = SomeControllerClass(managedStore, menuBar, editor, sasquatch, ...)
Has anyone else been through this yet? What is the OO way to give many individual classes access to a common variable without it being a global or a singleton?
Dependency Injection is your friend.
Take a look at these posts on the excellent Google Testing Blog:
Singletons are pathologic liars (but you probably already understand this if you are asking this question)
A talk on Dependency Injection
Guide to Writing Testable Code
Hopefully someone has made a DI framework/container for the C++ world? Looks like Google has released a C++ Testing Framework and a C++ Mocking Framework, which might help you out.
It's not the Singleton-ness that is the problem. It's fine to have an object that there will only ever be one instance of. The problem is the global access. Your classes that use Store should receive a Store instance in the constructor (or have a Store property / data member that can be set) and they can all receive the same instance. Store can even keep logic within it to ensure that only one instance is ever created.
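A sketch of that shape in Java, reusing the question's class names (the method bodies here are placeholders):

class Store {
    void doSomething() { System.out.println("doing something"); }
    void doSomethingElse() { System.out.println("doing something else"); }
}

public class ProductEditor {
    private final Store store; // received in the constructor, not fetched from a global

    public ProductEditor(Store store) {
        this.store = store;
    }

    public void doWork() {
        store.doSomething();
        store.doSomethingElse();
    }

    public static void main(String[] args) {
        Store managedStore = new Store(); // created once, at the composition root
        ProductEditor editor = new ProductEditor(managedStore);
        editor.doWork(); // every controller can share the same instance
    }
}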
My way of avoiding singletons derives from the idea that "application global" doesn't mean "VM global" (i.e. static). Therefore I introduce an ApplicationContext class which holds much of the formerly static singleton information that should be application global, like the configuration store. This context is passed into all structures. If you use an IoC container or service manager, you can use it to get access to the context.
There's nothing wrong with using a global or a singleton in your program. Don't let anyone get dogmatic on you about that kind of crap. Rules and patterns are nice rules of thumb. But in the end it's your project and you should make your own judgments about how to handle situations involving global data.
Unrestrained use of globals is bad news. But as long as you are diligent, they aren't going to kill your project. Some objects in a system deserve to be singleton. The standard input and outputs. Your log system. In a game, your graphics, sound, and input subsystems, as well as the database of game entities. In a GUI, your window and major panel components. Your configuration data, your plugin manager, your web server data. All these things are more or less inherently global to your application. I think your Store class would pass for it as well.
It's clear what the cost of using globals is. Any part of your application could be modifying it. Tracking down bugs is hard when every line of code is a suspect in the investigation.
But what about the cost of NOT using globals? Like everything else in programming, it's a trade off. If you avoid using globals, you end up having to pass those stateful objects as function parameters. Alternatively, you can pass them to a constructor and save them as a member variable. When you have multiple such objects, the situation worsens. You are now threading your state. In some cases, this isn't a problem. If you know only two or three functions need to handle that stateful Store object, it's the better solution.
But in practice, that's not always the case. If every part of your app touches your Store, you will be threading it to a dozen functions. On top of that, some of those functions may have complicated business logic. When you break that business logic up with helper functions, you have to -- thread your state some more! Say for instance you realize that a deeply nested function needs some configuration data from the Store object. Suddenly, you have to edit 3 or 4 function declarations to include that store parameter. Then you have to go back and add the store as an actual parameter to everywhere one of those functions is called. It may be that the only use a function has for a Store is to pass it to some subfunction that needs it.
Patterns are just rules of thumb. Do you always use your turn signals before making a lane change in your car? If you're the average person, you'll usually follow the rule, but if you are driving at 4am on an empty highway, who gives a crap, right? Sometimes it'll bite you in the butt, but that's a managed risk.
Regarding your inflated constructor call problem, you could introduce parameter classes or factory methods to alleviate the problem.
A parameter class moves some of the parameter data into its own class, e.g. like this:
var parameterClass1 = new MenuParameter(menuBar, editor);
var parameterClass2 = new StuffParameters(sasquatch, ...);
var ctrl = new MyControllerClass(managedStore, parameterClass1, parameterClass2);
It sort of just moves the problem elsewhere, though. You might want to housekeep your constructor instead: only keep parameters that are important when constructing/initiating the class in question, and do the rest with getter/setter methods (or properties if you're doing .NET).
A factory method is a method that creates all the instances you need of a class, and it has the benefit of encapsulating the creation of said objects. Factory methods are also quite easy to refactor towards from a Singleton, because they're similar to the getInstance methods you see in Singleton patterns. Say we have the following non-threadsafe simple singleton example:
// The Rather Unfortunate Singleton Class
public class SingletonStore {
    private static SingletonStore _singleton = new SingletonStore();

    private SingletonStore() {
        // Do some privatised constructing in here...
    }

    public static SingletonStore getInstance() {
        return _singleton;
    }

    // Some methods and stuff down here
}
// Usage:
// var singleInstanceOfStore = SingletonStore.getInstance();
It is easy to refactor this towards a factory method. The solution is to remove the static reference:
public class StoreWithFactory {

    public StoreWithFactory() {
        // Whether the constructor is private or public doesn't matter,
        // unless you do TDD, in which case you need a public
        // constructor so you can create the object and test it.
    }

    // The method returning an instance of Singleton is now a
    // factory method.
    public static StoreWithFactory getInstance() {
        return new StoreWithFactory();
    }
}
// Usage:
// var myStore = StoreWithFactory.getInstance();
Usage is still the same, but you're no longer bogged down with having a single instance. Naturally you would move this factory method to its own class, as the Store class shouldn't concern itself with its own creation (and, coincidentally, you follow the Single Responsibility Principle as an effect of moving the factory method out).
From here you have many choices, but I'll leave that as an exercise for yourself. It is easy to over-engineer (or overheat) on patterns here. My tip is to only apply a pattern when there is a need for it.
Okay, first of all, the "singletons are always evil" notion is wrong. You use a Singleton whenever you have a resource which won't or can't ever be duplicated. No problem.
That said, in your example, there's an obvious degree of freedom in the application: someone could come along and say "but I want two stores."
There are several solutions. The first that comes to mind is to build a factory class: when you ask for a Store, it gives you one identified by some universal name (e.g. a URI). Inside that store, you need to be sure that multiple copies don't step on one another, via critical regions or some method of ensuring atomicity of transactions.
Miško Hevery has a nice article series on testability, covering, among other things, the singleton; he isn't only talking about the problems but also about how you might solve them (see 'Fixing the flaw').
I like to encourage the use of singletons where necessary while discouraging the use of the Singleton pattern. Note the difference in the case of the word. The singleton (lower case) is used wherever you only need one instance of something. It is created at the start of your program and is passed to the constructor of the classes that need it.
#include <iostream>
#include <string>

class Log
{
public:
    void logmessage(const std::string &msg)
    {
        std::cout << msg << "\n"; // do some stuff
    }
};

class Database
{
    Log &_log; // the singleton is passed in, never fetched from a global
public:
    Database(Log &log) : _log(log) {}

    void Open(const std::string &name)
    {
        _log.logmessage("opening " + name);
    }
};

int main()
{
    Log log; // the one-and-only instance, created at the start of the program
    Database db(log);
    db.Open("store.db");
    // do some more stuff
}
Using a singleton gives all of the capabilities of the Singleton anti-pattern but it makes your code more easily extensible, and it makes it testable (in the sense of the word defined in the Google testing blog). For example, we may decide that we need the ability to log to a web-service at some times as well, using the singleton we can easily do that without significant changes to the code.
By comparison, the Singleton pattern is another name for a global variable, and it should never be used in production code.
In addition, are there any performance advantages to static methods over instance methods?
I came across the following recently: http://www.cafeaulait.org/course/week4/22.html :
When should a method be static?
Neither reads from nor writes to instance fields
Independent of the state of the object
Mathematical methods that accept arguments, apply an algorithm to those arguments, and return a value
Factory methods that serve in lieu of constructors
I would be very interested in the feedback of the Stack Overflow community on this.
Make methods static when they are not part of the instance. Don't sweat the micro-optimisations.
You might find you have lots of private methods that could be static but you always call from instance methods (or each other). In that case it doesn't really matter that much. However, if you want to actually be able to test your code, and perhaps use it from elsewhere, you might want to consider making those static methods in a different, non-instantiable class.
Whether or not a method is static is more of a design consideration than one of efficiency. A static method belongs to a class, whereas a non-static method belongs to an object. If you had a Math class, you might have a few static methods to deal with addition and subtraction, because these are concepts associated with math in general. However, if you had a Car class, you might have a few non-static methods to change gears and steer, because those are associated with a specific car, not the concept of cars in general.
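Illustrated with a minimal Java sketch (both classes are invented):

// Static: belongs to the concept, touches no instance state.
class MathUtil {
    static int add(int a, int b) {
        return a + b;
    }
}

// Instance: belongs to one particular car and reads/writes its state.
class Car {
    private int gear;

    void shiftGear(int newGear) {
        this.gear = newGear;
    }
}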
Another problem with static methods is that it is quite painful to write unit tests for them - in Java, at least. You cannot mock a static method in any way. There is a post on google testing blog about this issue.
My rule of thumb is to write static methods only when they have no external dependencies (such as database access, file reads, email and so on), to keep them as simple as possible.
Just remember that whenever you are writing a static method, you are writing an inflexible method that cannot have its behavior modified very easily.
You are writing procedural code, so if it makes sense to be procedural, then do it. If not, it should probably be an instance method.
This idea is taken from an article by Steve Yegge, which I think is an interesting and useful read.
@jagmal I think you've got some wires crossed somewhere - all the examples you list are clearly not static methods.
Static methods should deal entirely with abstract properties and concepts of a class - they should in no way relate to instance specific attributes (and most compilers will yell if they do).
For the car example, speed and kms driven are clearly attribute-related. Gear shifting and speed calculation, when considered at the car level, are attribute-dependent - but consider a CarModel class that inherits from Car: at this point they could become static methods, as the required attributes (such as wheel diameter) could be defined as constants at that level.
Performance-wise, a C++ static method can be slightly faster than a non-virtual instance method, as there's no need for a 'this' pointer to get passed to the method. In turn, both will be faster than virtual methods as there's no VMT lookup needed.
But, it's likely to be right down in the noise - particularly for languages which allow unnecessary parameter passing to be optimized out.
Here is a related discussion as to why String.Format is static that will highlight some reasons.
Another thing to consider when making methods static is that anyone able to see the class is able to call a static method, whereas when the method is an instance method, only those who have access to an instance are able to call that method.