Related
There is an answered question which will help you understand what exactly I want to say.
How does the function passed to http.HandleFunc get access to http.ResponseWriter and http.Request?
There are many Go standard-library functions where the function parameters get assigned this way. I want to use that coding style in my daily coding life.
I want to write a similar function/method which will get its parameter values from somewhere, just like http.HandleFunc's w and r.
func (s SchoolStruct) GetSchoolDetails(name string) {
    // Here the parameter "name" should get assigned exactly like
    // http.HandleFunc()'s "w" and "r".
}
What http does is register a callback and use it when the time comes. You don't have to pass the arguments it takes; the server's implementation provides these arguments in the correct state. If you want to copy this approach, first you have to ask:
Is there some kind of generic abstraction that computes these parameters? Is the function I write just reacting to something? Does this function have any side effects? Does it return a value back to the system?
This approach is very good when you are modifying an existing system, extending its behavior with independent units; so to speak, integrating into a robust API.
You may be correct that this is a style of doing things, but you cannot use this style for everything. It's just too specific, and good at only a certain group of tasks.
As #mkopriva pointed out, declaring rules and requirements that your logic should satisfy is a known way to execute this style in Go. You have to realize that your logic, encapsulated behind a function pointer or interface, has to be passed to and controlled by some other code that you call indirectly.
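To make that concrete, here is a minimal Go sketch of the registration style (the names HandleSchool, SchoolDetails, and run are invented for illustration, not part of any real package): user code hands over a callback, and the controlling code decides when to call it and with which arguments, just as the HTTP server does with w and r.

package main

import "fmt"

// SchoolDetails is built by the controlling code, not by the caller.
type SchoolDetails struct {
    Name    string
    Address string
}

// handlers holds the callbacks registered by user code.
var handlers = map[string]func(d SchoolDetails){}

// HandleSchool registers a callback, much like http.HandleFunc does.
func HandleSchool(name string, fn func(d SchoolDetails)) {
    handlers[name] = fn
}

// run plays the role of the server loop: it computes the arguments and
// invokes the registered callbacks when the time comes.
func run() {
    for name, fn := range handlers {
        fn(SchoolDetails{Name: name, Address: "looked up internally"})
    }
}

func main() {
    HandleSchool("Springfield High", func(d SchoolDetails) {
        fmt.Println(d.Name, d.Address) // d is supplied by run, not by us
    })
    run()
}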
I cannot possibly imagine going to such lengths when all components of the system are under your control and the system has only one piece of logic to run.
What are the pros and cons of closures against classes, and vice versa?
Edit:
As user Faisal put it, both closures and classes can be used to "describe an entity that maintains and manipulates state", so closures provide a way to program in an object oriented way using functional languages. Like most programmers, I'm more familiar with classes.
The intention of this question is not to open another flame war about which programming paradigm is better, or whether closures and classes are fully equivalent, or whether one is just a poor man's version of the other.
What I'd like to know is if anyone found a scenario in which one approach really beats the other, and why.
Functionally, closures and objects are equivalent. A closure can emulate an object and vice versa. So which one you use is a matter of syntactic convenience, or which one your programming language can best handle.
In C++ (before C++11 lambdas) closures are not syntactically available, so you are forced to go with "functors", which are objects that override operator() and may be called in a way that looks like a function call.
In Java (before Java 8 lambdas) you don't even have functors, so you get things like the Visitor pattern, which would just be a higher-order function in a language that supports closures.
In standard Scheme you don't have objects, so sometimes you end up implementing them by writing a closure with a dispatch function, executing different sub-closures depending on the incoming parameters.
In a language like Python, which has syntax for both functors and closures, it's basically a matter of taste and which you feel is the better way to express what you are doing.
Personally, I would say that in any language that has syntax for both, closures are a much clearer and cleaner way to express objects with a single method. And vice versa: if your closure starts handling dispatch to sub-closures based on the incoming parameters, you should probably be using an object instead.
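As a rough illustration of that equivalence (sketched in Go, with invented names), here is the same counter written once as a closure and once as an object with a single method:

package main

import "fmt"

// Closure version: the captured variable n is the object's "state".
func newCounter() func() int {
    n := 0
    return func() int {
        n++
        return n
    }
}

// Object version: the same state, held in a struct field.
type Counter struct{ n int }

func (c *Counter) Next() int {
    c.n++
    return c.n
}

func main() {
    next := newCounter()
    fmt.Println(next(), next()) // 1 2

    c := &Counter{}
    fmt.Println(c.Next(), c.Next()) // 1 2
}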
Personally, I think it's a matter of using the right tool for the job...more specifically, of properly communicating your intent.
If you want to explicitly show that all your objects share a common definition and want strong type-checking of such, you probably want to use a class. The disadvantage of not being able to alter the structure of your class at runtime is actually a strength in this case, since you know exactly what you're dealing with.
If instead you want to create a heterogeneous collection of "objects" (i.e. state represented as variables closed under some function w/inner functions to manipulate that data), you might be better off creating a closure. In this case, there's no real guarantee about the structure of the object you end up with, but you get all the flexibility of defining it exactly as you like at runtime.
Thank you for asking, actually; I'd responded with a sort of knee-jerk "classes and closures are totally different!" attitude at first, but with some research I realize the problem isn't nearly as cut-and-dry as I'd thought.
Closures are only loosely related to classes. Classes let you define fields and methods, and closures hold information about local variables from a function call. There is no possible comparison of the two in a language-agnostic manner: they don't serve the same purpose at all. Besides, closures are much more related to functional programming than to object-oriented programming.
For instance, look at the following C# code:
static void Main(String[] args)
{
    int i = 4;
    Action myDelegate = delegate()
    {
        i = 5;
    };
    Console.WriteLine(i);
    myDelegate();
    Console.WriteLine(i);
}
This gives "4" then "5". myDelegate, being a delegate, is a closure and knows about all the variables currently used by the function. Therefore, when I call it, it is allowed to change the value of i inside the "parent" function. This would not be permitted for a normal function.
Classes, if you know what they are, are completely different.
A possible reason for your confusion is that when a language has no language support for closures, it's possible to simulate them using classes that will hold every variable we need to keep around. For instance, we could rewrite the above code like this:
class MainClosure
{
    public int i;

    public void Apply()
    {
        i = 5;
    }
}

static void Main(String[] args)
{
    MainClosure closure = new MainClosure();
    closure.i = 4;
    Console.WriteLine(closure.i);
    closure.Apply();
    Console.WriteLine(closure.i);
}
We've transformed the delegate into a class that we've called MainClosure. Instead of creating the variable i inside the Main function, we've created a MainClosure object, which has an i field; this is the one we'll use. Also, we've put the code the delegate would execute inside an instance method, instead of inside an anonymous method.
As you can see, even though this was an easy example (only one variable), it is considerably more work. In a context where you want closures, using objects is a poor solution. However, classes are not only useful for simulating closures, and their usual purpose is far different.
This has been asked before (question no. 308581), but that particular question and the answers are a bit C++ specific and a lot of things there are not really relevant in languages like Java or C#.
The thing is that even after refactoring, I find that there is a bit of a mess in my source code files. I mean, the function bodies are alright, but I'm not quite happy with the way the functions themselves are ordered. Of course, in an IDE like Visual Studio it is relatively easy to find a member if you remember what it is called, but this is not always the case.
I've tried a couple of approaches, like putting public methods first, but the drawback is that a function at the top of the file ends up calling another private function at the bottom of the file, so I end up scrolling all the time.
Another approach is to try to group related methods together (maybe into regions), but obviously this has its limits, because if there are many unrelated methods in the same class, then maybe it's time to break the class up into two or more smaller classes.
So consider this: your code has been refactored properly so that it satisfies all the requirements mentioned in Code Complete, but you would still like to reorder your methods for ergonomic purposes. What's your approach?
(Actually, while not exactly a technical problem, this problem really annoys the hell out of me, so I would be really grateful if someone could come up with a good approach.)
Actually, I totally rely on the navigation functionality of my IDE, i.e. Visual Studio. Most of the time I use F12 to jump to the declaration (or Shift+F12 to find all references) and then Ctrl+- to jump back.
The reason for that is that most of the time I am working on code that I haven't written myself and I don't want to spend my time re-ordering methods and fields.
P.S.: And I also use RockScroll, a VS add-in which makes navigating and scrolling large files quite easy
If you're really having problems scrolling and finding, it's possible you're suffering from god class syndrome.
Fwiw, I personally tend to go with:
class
{
#statics (if any)
#constructor
#destructor (if any)
#member variables
#properties (if any)
#public methods (overrides, etc, first then extensions)
#private (aka helper) methods (if any)
}
And I have no aversion to region blocks, nor comments, so make free use of both to denote relationships.
From my (Java) point of view I would say constructors, public methods, private methods, in that order. I always try to group methods implementing a certain interface together.
My favorite weapon of choice is IntelliJ IDEA, which has some nice abilities to fold method bodies, so it is quite easy to display two methods directly above each other even when their actual positions in the source file are 700 lines apart.
I would be careful with monkeying around with the position of methods in the actual source. Your IDE should give you the ability to view the source in the way you want. This is especially relevant when working on a project where developers can use their IDE of choice.
My order, here it comes.
I usually put statics first.
Next come member variables and properties, a property that accesses one specific member is grouped together with this member. I try to group related information together, for example all strings that contain path information.
Third is the constructor (or constructors if you have several).
After that follow the methods. Those are ordered by whatever appears logical for that specific class. I often group methods by their access level: private, protected, public. But I recently had a class that needed to override a lot of methods from its base class. Since I was doing a lot of work there, I put them together in one group, regardless of their access level.
My recommendation: Order your classes so that it helps your workflow. Do not simply order them just to have order. The time spent on ordering should be an investment that saves you more time than you would otherwise spend scrolling up and down.
In C# I use #region to separate those groups from each other, but that is a matter of taste. There are a lot of people who don't like regions. I do.
I place the most recent method I just created on top of the class. That way when I open the project, I'm back at the last method I'm developing. Easier for me to get back "in the zone."
It also reflects the fact that the method I just created (which uses other methods) is the topmost layer over those other methods.
Group related functions together; don't feel obliged to put all private functions at the bottom. Likewise, imitate the design rationale of C#'s properties: related functions should be in close proximity to each other, and the C# language construct for properties reinforces that idea.
P.S.
If only C# could nest functions like Pascal or Delphi. Maybe Anders Hejlsberg can put it in C#; he also created Turbo Pascal and Delphi :-) The D language has nested functions.
A few years ago I spent far too much time pondering this question, and came up with a horrendously complex system for ordering the declarations within a class. The order would depend on the access specifier, whether a method or field was static, transient, volatile etc.
It wasn't worth it. IMHO you get no real benefit from such a complex arrangement.
What I do nowadays is much simpler:
Constructors (default constructor first, otherwise order doesn't matter.)
Methods, sorted by name (static vs. non-static doesn't matter, nor abstract vs. concrete, virtual vs. final etc.)
Inner classes, sorted by name (interface vs. class etc. doesn't matter)
Fields, sorted by name (static vs. non-static doesn't matter.) Optionally constants (public static final) first, but this is not essential.
I'm pretty sure there was a Visual Studio add-in that could re-order the class members in the code.
So, e.g., ctors at the top of the class, then static methods, then instance methods...
Something like that.
Unfortunately I can't remember the name of this add-in! I also think that this add-in was free!
Maybe someone else can help us out?
My personal take for structuring a class is as follows:
I'm strict with
constants and static fields first, in alpha order
non-private inner classes and enums in alpha order
fields (and attributes where applicable), in alpha order
ctors (and dtors where applicable)
static methods and factory methods
methods below, in alpha order, regardless of visibility.
I use the auto-formatting capabilities of an IDE at all times, so I'm constantly hitting Ctrl+Shift+F when I'm working. I export the auto-formatting settings to an XML file, which I carry with me everywhere.
It helps down the line when doing merges and rebases. And it is the type of thing you can automate in your IDE or build process so that you don't have to make a brain cell sweat for it.
I'm not claiming MY WAY is the way. But pick something, configure it, use it consistently until it becomes a reflex, and thus forget about it.
I was confused when I first started to see anti-singleton commentary. I have used the singleton pattern in some recent projects, and it was working out beautifully. So much so, in fact, that I have used it many, many times.
Now, after running into some problems, reading this SO question, and especially this blog post, I understand the evil that I have brought into the world.
So: How do I go about removing singletons from existing code?
For example:
In a retail store management program, I used the MVC pattern. My Model objects describe the store, the user interface is the View, and I have a set of Controllers that act as liaison between the two. Great. Except that I made the Store into a singleton (since the application only ever manages one store at a time), and I also made most of my Controller classes into singletons (one mainWindow, one menuBar, one productEditor...). Now, most of my Controller classes access the other singletons like this:
Store managedStore = Store::getInstance();
managedStore.doSomething();
managedStore.doSomethingElse();
//etc.
Should I instead:
Create one instance of each object and pass references to every object that needs access to them?
Use globals?
Something else?
Globals would still be bad, but at least they wouldn't be pretending.
I see #1 quickly leading to horribly inflated constructor calls:
someVar = SomeControllerClass(managedStore, menuBar, editor, sasquatch, ...)
Has anyone else been through this yet? What is the OO way to give many individual classes access to a common variable without it being a global or a singleton?
Dependency Injection is your friend.
Take a look at these posts on the excellent Google Testing Blog:
Singletons are pathologic liars (but you probably already understand this if you are asking this question)
A talk on Dependency Injection
Guide to Writing Testable Code
Hopefully someone has made a DI framework/container for the C++ world? Looks like Google has released a C++ Testing Framework and a C++ Mocking Framework, which might help you out.
It's not the Singleton-ness that is the problem. It's fine to have an object that there will only ever be one instance of. The problem is the global access. Your classes that use Store should receive a Store instance in the constructor (or have a Store property / data member that can be set) and they can all receive the same instance. Store can even keep logic within it to ensure that only one instance is ever created.
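A rough sketch of that idea, with constructor injection (illustrative names only, shown here in Go; the original question's code is C++-flavoured, but the pattern is the same): the Store is created once and handed to whoever needs it, instead of being fetched globally.

package main

import "fmt"

type Store struct{ Name string }

func (s *Store) DoSomething() { fmt.Println("working on", s.Name) }

// ProductEditor receives its Store instead of calling Store::getInstance().
type ProductEditor struct{ store *Store }

func NewProductEditor(store *Store) *ProductEditor {
    return &ProductEditor{store: store}
}

func (e *ProductEditor) Edit() { e.store.DoSomething() }

func main() {
    managedStore := &Store{Name: "main street"} // the single instance
    editor := NewProductEditor(managedStore)    // the same instance, shared by injection
    editor.Edit()
}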
My way to avoid singletons derives from the idea that "application global" doesn't mean "VM global" (i.e. static). Therefore I introduce an ApplicationContext class which holds much of the information that used to live in static singletons but should really be application global, like the configuration store. This context is passed into all structures. If you use any IoC container or service manager, you can use it to get access to the context.
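For illustration, a minimal sketch of such a context object (names like ApplicationContext and Controller are made up; shown in Go):

package main

import "fmt"

type Config map[string]string

type Store struct{ Name string }

// ApplicationContext replaces a pile of static singletons.
type ApplicationContext struct {
    Config Config
    Store  *Store
}

// Controller gets the whole context passed in, not static globals.
type Controller struct{ ctx *ApplicationContext }

func NewController(ctx *ApplicationContext) *Controller { return &Controller{ctx: ctx} }

func (c *Controller) Run() {
    fmt.Println("store:", c.ctx.Store.Name, "theme:", c.ctx.Config["theme"])
}

func main() {
    ctx := &ApplicationContext{
        Config: Config{"theme": "dark"},
        Store:  &Store{Name: "main street"},
    }
    NewController(ctx).Run()
}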
There's nothing wrong with using a global or a singleton in your program. Don't let anyone get dogmatic on you about that kind of crap. Rules and patterns are nice rules of thumb. But in the end it's your project and you should make your own judgments about how to handle situations involving global data.
Unrestrained use of globals is bad news. But as long as you are diligent, they aren't going to kill your project. Some objects in a system deserve to be singleton. The standard input and outputs. Your log system. In a game, your graphics, sound, and input subsystems, as well as the database of game entities. In a GUI, your window and major panel components. Your configuration data, your plugin manager, your web server data. All these things are more or less inherently global to your application. I think your Store class would pass for it as well.
It's clear what the cost of using globals is. Any part of your application could be modifying it. Tracking down bugs is hard when every line of code is a suspect in the investigation.
But what about the cost of NOT using globals? Like everything else in programming, it's a trade off. If you avoid using globals, you end up having to pass those stateful objects as function parameters. Alternatively, you can pass them to a constructor and save them as a member variable. When you have multiple such objects, the situation worsens. You are now threading your state. In some cases, this isn't a problem. If you know only two or three functions need to handle that stateful Store object, it's the better solution.
But in practice, that's not always the case. If every part of your app touches your Store, you will be threading it to a dozen functions. On top of that, some of those functions may have complicated business logic. When you break that business logic up with helper functions, you have to -- thread your state some more! Say for instance you realize that a deeply nested function needs some configuration data from the Store object. Suddenly, you have to edit 3 or 4 function declarations to include that store parameter. Then you have to go back and add the store as an actual parameter to everywhere one of those functions is called. It may be that the only use a function has for a Store is to pass it to some subfunction that needs it.
Patterns are just rules of thumb. Do you always use your turn signals before making a lane change in your car? If you're the average person, you'll usually follow the rule, but if you are driving at 4am on an empty highway, who gives a crap, right? Sometimes it'll bite you in the butt, but that's a managed risk.
Regarding your inflated constructor call problem, you could introduce parameter classes or factory methods to alleviate this problem for you.
A parameter class moves some of the parameter data to its own class, e.g. like this:
var parameterClass1 = new MenuParameter(menuBar, editor);
var parameterClass2 = new StuffParameters(sasquatch, ...);
var ctrl = new MyControllerClass(managedStore, parameterClass1, parameterClass2);
It sort of just moves the problem elsewhere though. You might want to housekeep your constructor instead. Only keep parameters that are important when constructing/initiating the class in question and do the rest with getter/setter methods (or properties if you're doing .NET).
A factory method is a method that creates the instances you need of a class, and it has the benefit of encapsulating creation of said objects. Factory methods are also quite easy to refactor towards from a Singleton, because they're similar to the getInstance methods that you see in Singleton implementations. Say we have the following non-threadsafe simple singleton example:
// The Rather Unfortunate Singleton Class
public class SingletonStore {
    private static SingletonStore _singleton = new SingletonStore();

    private SingletonStore() {
        // Do some privatised constructing in here...
    }

    public static SingletonStore getInstance() {
        return _singleton;
    }

    // Some methods and stuff to be down here
}

// Usage:
// var singleInstanceOfStore = SingletonStore.getInstance();
It is easy to refactor this towards a factory method. The solution is to remove the static reference:
public class StoreWithFactory {

    public StoreWithFactory() {
        // Whether the constructor is private or public doesn't matter,
        // unless you do TDD, in which case you need a public
        // constructor so you can create the object and test it.
    }

    // The method returning an instance of the Singleton is now a
    // factory method.
    public static StoreWithFactory getInstance() {
        return new StoreWithFactory();
    }
}

// Usage:
// var myStore = StoreWithFactory.getInstance();
Usage is still the same, but you're not bogged down with having a single instance. Naturally you would move this factory method to its own class, as the Store class shouldn't concern itself with creation of itself (and you coincidentally follow the Single Responsibility Principle as an effect of moving the factory method out).
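One possible shape of that last step, sketched with invented names (in Go, since the thread above mixes languages anyway): the factory moves into its own type, so the store no longer creates itself.

package main

import "fmt"

type Store struct{ Name string }

// StoreFactory owns creation; Store keeps only store behaviour.
type StoreFactory struct{}

func (StoreFactory) NewStore(name string) *Store { return &Store{Name: name} }

func main() {
    factory := StoreFactory{}
    s := factory.NewStore("main street")
    fmt.Println(s.Name)
}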
From here you have many choices, but I'll leave that as an exercise for yourself. It is easy to over-engineer (or overheat) on patterns here. My tip is to only apply a pattern when there is a need for it.
Okay, first of all, the "singletons are always evil" notion is wrong. You use a Singleton whenever you have a resource which won't or can't ever be duplicated. No problem.
That said, in your example, there's an obvious degree of freedom in the application: someone could come along and say "but I want two stores."
There are several solutions. The first one that comes to mind is to build a factory class; when you ask for a Store, it gives you one identified by some universal name (e.g., a URI). Inside that store, you need to be sure that multiple copies don't step on one another, via critical regions or some other method of ensuring atomicity of transactions.
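A rough sketch of such a keyed factory (illustrative names, written in Go): asking for the same name returns the same Store, and a lock keeps concurrent callers from stepping on each other.

package main

import (
    "fmt"
    "sync"
)

type Store struct{ Name string }

type StoreFactory struct {
    mu     sync.Mutex
    stores map[string]*Store
}

func NewStoreFactory() *StoreFactory {
    return &StoreFactory{stores: map[string]*Store{}}
}

// Get returns the Store registered under name, creating it on first use.
func (f *StoreFactory) Get(name string) *Store {
    f.mu.Lock()
    defer f.mu.Unlock()
    s, ok := f.stores[name]
    if !ok {
        s = &Store{Name: name}
        f.stores[name] = s
    }
    return s
}

func main() {
    f := NewStoreFactory()
    a := f.Get("store://downtown")
    b := f.Get("store://downtown")
    fmt.Println(a == b) // true: same instance for the same name
}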
Miško Hevery has a nice article series on testability, covering among other things the singleton, where he doesn't only talk about the problems but also about how you might solve them (see 'Fixing the flaw').
I like to encourage the use of singletons where necessary while discouraging the use of the Singleton pattern. Note the difference in the case of the word. The singleton (lower case) is used wherever you only need one instance of something. It is created at the start of your program and is passed to the constructor of the classes that need it.
class Log
{
public:
    void logmessage(const char *msg)
    {
        // do some stuff
    }
};

class Database
{
    Log &_log;
public:
    Database(Log &log) : _log(log) {}

    void Open(const char *name)
    {
        _log.logmessage(name);
    }
};

int main()
{
    Log log;          // the one instance, created at the start of the program
    Database db(log); // the singleton is passed in, not fetched globally
    db.Open("orders");
    // do some more stuff
}
Using a singleton gives all of the capabilities of the Singleton anti-pattern, but it makes your code more easily extensible, and it makes it testable (in the sense of the word defined in the Google testing blog). For example, we may decide that we need the ability to log to a web service at times as well; using the singleton approach we can easily do that without significant changes to the code.
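For example, a minimal sketch of that extension point (invented names, in Go): because Database receives its logger, a web-service logger can be dropped in without touching Database.

package main

import "fmt"

type Logger interface {
    LogMessage(msg string)
}

type ConsoleLog struct{}

func (ConsoleLog) LogMessage(msg string) { fmt.Println("console:", msg) }

type WebServiceLog struct{ URL string }

func (w WebServiceLog) LogMessage(msg string) {
    // a real implementation would POST msg to w.URL
    fmt.Println("sending to", w.URL+":", msg)
}

type Database struct{ log Logger }

func (d Database) Open(name string) { d.log.LogMessage("opening " + name) }

func main() {
    Database{log: ConsoleLog{}}.Open("orders")
    Database{log: WebServiceLog{URL: "https://logs.example.com"}}.Open("orders")
}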
By comparison, the Singleton pattern is another name for a global variable. It should never be used in production code.
One of the biggest advantages of object-oriented programming is encapsulation, and one of the "truths" we've (or, at least, I've) been taught is that members should always be made private and made available via accessor and mutator methods, thus ensuring the ability to verify and validate the changes.
I'm curious, though, how important this really is in practice. In particular, if you've got a more complicated member (such as a collection), it can be very tempting to just make it public rather than make a bunch of methods to get the collection's keys, add/remove items from the collection, etc.
Do you follow the rule in general? Does your answer change depending on whether it's code written for yourself vs. to be used by others? Are there more subtle reasons I'm missing for this obfuscation?
It depends. This is one of those issues that must be decided pragmatically.
Suppose I had a class for representing a point. I could have getters and setters for the X and Y coordinates, or I could just make them both public and allow free read/write access to the data. In my opinion, this is OK because the class is acting like a glorified struct - a data collection with maybe some useful functions attached.
However, there are plenty of circumstances where you do not want to provide full access to your internal data and rely on the methods provided by the class to interact with the object. An example would be an HTTP request and response. In this case it's a bad idea to allow anybody to send anything over the wire - it must be processed and formatted by the class methods. In this case, the class is conceived of as an actual object and not a simple data store.
It really comes down to whether or not verbs (methods) drive the structure or if the data does.
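A small sketch of that distinction (in Go, with made-up types): Point is a glorified struct with public data, while Request hides its internals behind methods.

package main

import "fmt"

// Point is a glorified struct: free read/write access is fine.
type Point struct {
    X, Y int
}

// Request hides its internals; callers can only use the methods.
type Request struct {
    headers map[string]string
}

func NewRequest() *Request {
    return &Request{headers: map[string]string{}}
}

// SetHeader validates and formats instead of exposing the map directly.
func (r *Request) SetHeader(key, value string) {
    if key == "" {
        return // a real implementation would reject this properly
    }
    r.headers[key] = value
}

func main() {
    p := Point{X: 3, Y: 4}
    fmt.Println(p.X, p.Y)

    req := NewRequest()
    req.SetHeader("Content-Type", "text/plain")
    fmt.Println(req.headers) // visible here only because we are in the same package
}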
As someone having to maintain several-year-old code worked on by many people in the past, it's very clear to me that if a member attribute is made public, it is eventually abused. I've even heard people disagreeing with the idea of accessors and mutators, as that's still not really living up to the purpose of encapsulation, which is "hiding the inner workings of a class". It's obviously a controversial topic, but my opinion would be "make every member variable private, think primarily about what the class has got to do (methods) rather than how you're going to let people change internal variables".
Yes, encapsulation matters. Exposing the underlying implementation does (at least) two things wrong:
Mixes up responsibilities. Callers shouldn't need or want to understand the underlying implementation. They should just want the class to do its job. By exposing the underlying implementation, your class isn't doing its job. Instead, it's just pushing the responsibility onto the caller.
Ties you to the underlying implementation. Once you expose the underlying implementation, you're tied to it. If you tell callers, e.g., there's a collection underneath, you cannot easily swap the collection for a new implementation.
These (and other) problems apply regardless of whether you give direct access to the underlying implementation or just duplicate all the underlying methods. You should be exposing the necessary implementation, and nothing more. Keeping the implementation private makes the overall system more maintainable.
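As a rough example of exposing behaviour rather than the underlying collection (illustrative names, in Go), callers get the operations they need while the slice underneath stays free to change:

package main

import "fmt"

type Roster struct {
    names []string // hidden; could become a map or a database call without affecting callers
}

func (r *Roster) Add(name string) {
    r.names = append(r.names, name)
}

func (r *Roster) Contains(name string) bool {
    for _, n := range r.names {
        if n == name {
            return true
        }
    }
    return false
}

func main() {
    var r Roster
    r.Add("Ada")
    fmt.Println(r.Contains("Ada"), r.Contains("Grace")) // true false
}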
I prefer to keep members private as long as possible and only access them via getters, even from within the very same class. I also try to avoid setters as a first draft, to promote value-style objects as long as possible. Working with dependency injection a lot, you often have setters but no getters, as clients should be able to configure the object, but others should not get to know what's actually configured, as this is an implementation detail.
Regards,
Ollie
I tend to follow the rule pretty strictly, even when it's just my own code. I really like Properties in C# for that reason. It makes it really easy to control what values it's given, but you can still use them as variables. Or make the set private and the get public, etc.
Basically, information hiding is about code clarity. It's designed to make it easier for someone else to extend your code, and prevent them from accidentally creating bugs when they work with the internal data of your classes. It's based on the principle that nobody ever reads comments, especially ones with instructions in them.
Example: I'm writing code that updates a variable, and I need to make absolutely sure that the GUI changes to reflect the change. The easiest way is to add a mutator method (aka a "Setter"), which is called whenever the data needs to be updated.
If I make that data public, and something changes the variable without going through the Setter method (and this happens every swear-word time), then someone will need to spend an hour debugging to find out why the updates aren't being displayed. The same applies, to a lesser extent, to "Getting" data. I could put a comment in the header file, but odds are that no-one will read it till something goes terribly, terribly wrong. Enforcing it with private means that the mistake can't be made, because it'll show up as an easily located compile-time bug, rather than a run-time bug.
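A small sketch of that kind of setter (invented names, in Go): the only way to change the value also triggers the notification, so the update can't be forgotten.

package main

import "fmt"

type Model struct {
    value    int
    onChange func(int) // e.g. a function that repaints a widget
}

// SetValue is the single path for updates, so the notification always runs.
func (m *Model) SetValue(v int) {
    m.value = v
    if m.onChange != nil {
        m.onChange(v)
    }
}

func (m *Model) Value() int { return m.value }

func main() {
    m := &Model{onChange: func(v int) { fmt.Println("gui refreshed with", v) }}
    m.SetValue(42)
    fmt.Println(m.Value())
}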
From experience, the only times you'd want to make a member variable public, and leave out Getter and Setter methods, is if you want to make it absolutely clear that changing it will have no side effects; especially if the data structure is simple, like a class that simply holds two variables as a pair.
This should be a fairly rare occurrence, as normally you'd want side effects, and if the data structure you're creating is so simple that you don't (e.g. a pairing), there will already be a more efficiently written one available in a standard library.
With that said, for most small programs that are one-use no-extension, like the ones you get at university, it's more "good practice" than anything, because you'll remember over the course of writing them, and then you'll hand them in and never touch the code again. Also, if you're writing a data structure as a way of finding out about how they store data rather than as release code, then there's a good argument that Getters and Setters will not help, and will get in the way of the learning experience.
It's only when you get to the workplace or a large project, where the probability is that your code will be called by objects and structures written by different people, that it becomes vital to make these "reminders" strong. Whether or not it's a single-person project is surprisingly irrelevant, for the simple reason that "you six weeks from now" is as different a person as a co-worker. And "me six weeks ago" often turns out to be lazy.
A final point is that some people are pretty zealous about information hiding, and will get annoyed if your data is unnecessarily public. It's best to humour them.
C# properties "simulate" public fields. They look pretty cool, and the syntax really speeds up creating those get/set methods.
Keep in mind the semantics of invoking methods on an object. A method invocation is a very high-level abstraction that can be implemented by the compiler or the runtime system in a variety of different ways.
If the object whose method you are invoking exists in the same process/memory map, then a method could well be optimized by a compiler or VM to directly access the data member. On the other hand, if the object lives on another node in a distributed system, then there is no way that you can directly access its internal data members, but you can still invoke its methods by sending it a message.
By coding to interfaces you can write code that doesn't care where the target object exists, how its methods are invoked, or even whether it's written in the same language.
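A minimal sketch of coding to an interface (made-up names, in Go): the caller invokes the same method whether the implementation reads a local field or would send a message to another node.

package main

import "fmt"

type Account interface {
    Balance() int
}

// LocalAccount could be optimised down to a direct field access.
type LocalAccount struct{ balance int }

func (a LocalAccount) Balance() int { return a.balance }

// RemoteAccount would really send a request over the network.
type RemoteAccount struct{ node string }

func (a RemoteAccount) Balance() int {
    fmt.Println("asking", a.node, "for the balance") // message send, not field access
    return 0
}

func report(a Account) { fmt.Println("balance:", a.Balance()) }

func main() {
    report(LocalAccount{balance: 10})
    report(RemoteAccount{node: "node-7"})
}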
In your example of an object that implements all the methods of a collection, surely that object actually is a collection, so maybe this would be a case where inheritance would be better than encapsulation.
It's all about controlling what people can do with what you give them. The more controlling you are the more assumptions you can make.
Also, theoretically you can change the underlying implementation or something, but since for the most part it's:
private Foo foo;
public Foo getFoo() { return foo; }
public void setFoo(Foo foo) { this.foo = foo; }
It's a little hard to justify.
Encapsulation is important when at least one of these holds:
Anyone but you is going to use your class (or they'll break your invariants because they don't read the documentation).
Anyone who doesn't read the documentation is going to use your class (or they'll break your carefully documented invariants). Note that this category includes you-two-years-from-now.
At some point in the future someone is going to inherit from your class (because maybe an extra action needs to be taken when the value of a field changes, so there has to be a setter).
If it is just for me, and used in few places, and I'm not going to inherit from it, and changing fields will not invalidate any invariants that the class assumes, only then I will occasionally make a field public.
My tendency is to try to make everything private if possible. This keeps object boundaries as clearly defined as possible and keeps the objects as decoupled as possible. I like this because when I have to rewrite an object that I botched the first (second, fifth?) time, it keeps the damage contained to a smaller number of objects.
If you couple the objects tightly enough, it may be more straightforward just to combine them into one object. If you relax the coupling constraints enough you're back to structured programming.
It may be that if you find that a bunch of your objects are just accessor functions, you should rethink your object divisions. If you're not doing any actions on that data it may belong as a part of another object.
Of course, if you're writing something like a library, you want as clear and sharp an interface as possible so others can program against it.
Fit the tool to the job... recently I saw some code like this in my current codebase:
private static class SomeSmallDataStructure {
    public int someField;
    public String someOtherField;
}
And then this class was used internally for easily passing around multiple data values. It doesn't always make sense, but if you have just DATA, with no methods, and you aren't exposing it to clients, I find it a quite useful pattern.
The most recent use I had of this was a JSP page where I had a table of data being displayed, defined at the top declaratively. So, initially it was in multiple arrays, one array per data field... this ended in the code being rather difficult to wade through, with fields that would be displayed together not being next to each other in the definition... so I created a simple class like above which would pull it together... the result was REALLY readable code, a lot more so than before.
Moral... sometimes you should consider "accepted bad" alternatives if they may make the code simpler and easier to read, as long as you think it through and consider the consequences... don't blindly accept EVERYTHING you hear.
That said... public getters and setters are pretty much equivalent to public fields... at least essentially (there is a tad more flexibility, but it is still a bad pattern to apply to EVERY field you have).
Even the Java standard libraries have some cases of public fields.
When I make objects meaningful they are easier to use and easier to maintain.
For example: Person.Hand.Grab(howquick, howmuch);
The trick is not to think of members as simple values but objects in themselves.
I would argue that this question mixes up the concept of encapsulation with "information hiding"
(this is not a criticism, since it does seem to match a common interpretation of the notion of "encapsulation").
However for me, 'encapsulation' is either:
the process of regrouping several items into a container
the container itself regrouping the items
Suppose you are designing a tax payer system. For each tax payer, you could encapsulate the notion of child into
a list of children representing the children
a map, to take into account children from different parents
an object Children (not Child) which would provide the needed information (like total number of children)
Here you have three different kinds of encapsulation: two represented by low-level containers (a list or a map), one represented by an object.
By making those decisions, you do not
make that encapsulation public or protected or private: that choice of 'information hiding' is still to be made
make a complete abstraction (you need to refine the attributes of the object Children, and you may decide to create an object Child, which would keep only the relevant information from the point of view of a tax payer system)
Abstraction is the process of choosing which attributes of the object are relevant to your system, and which must be completely ignored.
So my point is:
This question might have been titled:
Private vs. Public members in practice (how important is information hiding?)
Just my 2 cents, though. I perfectly respect that one may consider encapsulation as a process including 'information hiding' decision.
However, I always try to differentiate 'abstraction' - 'encapsulation' - 'information hiding or visibility'.
#VonC
You might find the International Organization for Standardization's "Reference Model of Open Distributed Processing" an interesting read. It defines: "Encapsulation: the property that the information contained in an object is accessible only through interactions at the interfaces supported by the object."
I tried to make a case for information hiding's being a critical part of this definition here:
http://www.edmundkirwan.com/encap/s2.html
Regards,
Ed.
I find lots of getters and setters to be a code smell that the structure of the program is not designed well. You should look at the code that uses those getters and setters, and look for functionality that really should be part of the class. In most cases, the fields of a class should be private implementation details and only the methods of that class may manipulate them.
Having both getters and setters is equal to the field being public (when the getters and setters are trivial/generated automatically). Sometimes it might be better to just declare the fields public, so that the code will be more simple, unless you need polymorphism or a framework requires get/set methods (and you can't change the framework).
But there are also cases where having getters and setters is a good pattern. One example:
When I create the GUI of an application, I try to keep the behaviour of the GUI in one class (FooModel) so that it can be unit tested easily, and have the visualization of the GUI in another class (FooView) which can be tested only manually. The view and model are joined with simple glue code; when the user changes the value of field x, the view calls setX(String) on the model, which in turn may raise an event that some other part of the model has changed, and the view will get the updated values from the model with getters.
In one project, there is a GUI model which has 15 getters and setters, of which only 3 get methods are trivial (such that the IDE could generate them). All the others contain some functionality or non-trivial expressions, such as the following:
public boolean isEmployeeStatusEnabled() {
    return pinCodeValidation.equals(PinCodeValidation.VALID);
}

public EmployeeStatus getEmployeeStatus() {
    Employee employee;
    if (isEmployeeStatusEnabled()
            && (employee = getSelectedEmployee()) != null) {
        return employee.getStatus();
    }
    return null;
}

public void setEmployeeStatus(EmployeeStatus status) {
    getSelectedEmployee().changeStatusTo(status, getPinCode());
    fireComponentStateChanged();
}
In practice I always follow only one rule, the "no one size fits all" rule.
Encapsulation and its importance are a product of your project. What objects will be accessing your interface? How will they be using it? Will it matter if they have unneeded access rights to members? Those are the kinds of questions you need to ask yourself when working on each project implementation.
I base my decision on the Code's depth within a module.
If I'm writing code that is internal to a module and does not interface with the outside world, I don't encapsulate things with private as much, because it affects my productivity as a programmer (how fast I can write and rewrite my code).
But for the objects that serve as the module's interface with user code, I adhere to strict privacy patterns.
Certainly it makes a difference whether you're writing internal code or code to be used by someone else (or even by yourself, but as a contained unit). Any code that is going to be used externally should have a well defined/documented interface that you'll want to change as little as possible.
For internal code, depending on the difficulty, you may find it's less work to do things the simple way now, and pay a little penalty later. Of course Murphy's law will ensure that the short term gain will be erased many times over in having to make wide-ranging changes later on where you needed to change a class' internals that you failed to encapsulate.
Specifically to your example of using a collection that you would return, it seems possible that the implementation of such a collection might change (unlike simpler member variables) making the utility of encapsulation higher.
That being said, I kinda like Python's way of dealing with it. Member variables are public by default. If you want to hide them or add validation there are techniques provided, but those are considered the special cases.
I follow the rules on this almost all the time. There are four scenarios for me - basically, the rule itself and several exceptions (all Java-influenced):
Usable by anything outside of the current class, accessed via getters/setters
Internal-to-class usage typically preceded by 'this' to make it clear that it's not a method parameter
Something meant to stay extremely small, like a transport object - basically a straight shot of attributes; all public
Needed to be non-private for extension of some sort
There's a practical concern here that isn't being addressed by most of the existing answers. Encapsulation and the exposure of clean, safe interfaces to outside code is always great, but it's much more important when the code you're writing is intended to be consumed by a spatially- and/or temporally-large "user" base. What I mean is that if you plan on somebody (even you) maintaining the code well into the future, or if you're writing a module that will interface with code from more than a handful of other developers, you need to think much more carefully than if you're writing code that's either one-off or wholly written by you.
Honestly, I know what wretched software engineering practice this is, but I'll oftentimes make everything public at first, which makes things marginally faster to remember and type, then add encapsulation as it makes sense. Refactoring tools in most popular IDEs these days makes which approach you use (adding encapsulation vs. taking it away) much less relevant than it used to be.