I have created a Singleton class that handles my project texts. What is an appropriate name for a Singleton class like this?
TextManager?
TextHandler?
TextController?
Is there a difference in meaning of these names?
UPDATE:
The class stores the project texts as XML and has a method for returning the correct text.
function getText(uid : String) : String
I suppose it doesn't deal with adding/removing/... (-> managing) the texts (maybe just loading), so it isn't a "real" Manager.
It also doesn't "control" the texts (something like "You're only accessible from ..." or "Return another value for that key if ...").
The class provides you with texts.
I suppose it's some kind of localized text provider, right?
So why don't you call it LocalizedTextProvider?
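For illustration, here's a rough sketch of such a provider. I've written it in Java-ish code rather than AS3, and the class name, the map-based storage and the fallback behaviour are all just assumptions:

import java.util.HashMap;
import java.util.Map;

public class LocalizedTextProvider {
    private final Map<String, String> texts = new HashMap<>();

    // The texts could be parsed from the project XML once at startup and passed in here.
    public LocalizedTextProvider(Map<String, String> loadedTexts) {
        texts.putAll(loadedTexts);
    }

    // Equivalent of the question's getText(uid : String) : String
    public String getText(String uid) {
        return texts.getOrDefault(uid, uid); // fall back to the key if no text is found
    }
}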
I usually call something like this
TextUtility
or
TextHelper
The problem with 'Handler' is that it implies some sort of event handling. The same goes for 'Controller': it has a specific meaning in a different context.
I believe Controller is 'reserved' for the MVC model but I may be wrong. TextHandler and TextManager may be better but at least at the place I work, 'Manager' in a service/class is generally discouraged since it is assumed that every class 'manages' something (this may just be culture-specific, though).
I'd vote for TextHandler out of those three. It may also depend slightly on your programming language.
This actually sounds like a service or repository to me...
TextService or TextRepository? TextModel?
But let me back up a bit... the Singleton pattern is a pretty bad way of accessing something like this. Just google "Singleton pattern problems" if you want to see what I am talking about. Plus, in AS3, you don't have private constructors so you can't implement the Singleton pattern in a pure way.
Instead, I really prefer composition via "Inversion of Control" (IoC) containers. There are plenty of them out there for ActionScript. They can be really lightweight but they decouple your components in a really elegant way.
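To make the idea concrete, here's a minimal sketch (in Java-ish code; TextProvider, WelcomeScreen and the wiring shown in the comments are made-up names, not any particular framework's API): the consumer asks for its dependency in the constructor instead of calling a global getInstance(), and the container or wiring code decides which implementation it gets.

// The consumer only knows the interface...
interface TextProvider {
    String getText(String uid);
}

class WelcomeScreen {
    private final TextProvider texts;

    // ...and receives its dependency through the constructor instead of a global lookup.
    WelcomeScreen(TextProvider texts) {
        this.texts = texts;
    }

    String title() {
        return texts.getText("welcome_title");
    }
}

// Wiring code (or a container such as Swiz or Robotlegs in the Flex world) decides
// which concrete TextProvider implementation gets passed in, e.g.:
//   new WelcomeScreen(new XmlTextProvider("texts.xml"));
//   new WelcomeScreen(new FakeTextProvider());   // in a unit test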
Sorry to inject my thoughts here... ymmv :)
EDIT -- More on eliminating Singleton pattern
I have written about several strategies for eliminating singletons in your code. That article was written for C#, but all the same principles apply. In that article, I don't talk explicitly about IoC containers.
Here is a pretty good article about IoC in Flex. In addition, several frameworks give you IoC capabilities:
Swiz
Robot Legs
fling
Cairngorm
flex-ioc
All three of the names you proposed can be interpreted in the same way. Some people prefer handlers while others might say controllers... it really is a matter of semantics. Whatever convention you choose to adopt, just be consistent. The common notion you should capture, though, is that the class you are describing should not be doing the work itself. It should only be in charge of delegating, since that's what managers do to employees and what controllers do in the classic MVC paradigm.
Since I usually reserve Handler for the event/message-handling context and Controller for actions and MVC stuff, I would go with something different:
TextResources.get(key)
I18n.get(key) (if your class is in fact used for internationalisation)
I usually reserve Helpers for classes that simply transform some data into something to be used in the view.
TextCache? Sounds like you are just using it to store and retrieve data...
Why not: ProjectNameTexts
FooTexts.getInstance().getText('hello_world');
I often seem to run into the discussion of whether or not to apply some sort of prefix/suffix convention to interface type names, typically adding "I" to the beginning of the name.
Personally I'm in the camp that advocates no prefix, but that's not what this question is about. Rather, it's about one of the arguments I often hear in that discussion:
You can no longer see at a glance whether something is an interface or a class.
The question that immediately pops up in my head is: apart from object creation, why should you ever have to care whether an object reference is typed as a class or as an interface?
I've tagged this question as language agnostic, but as has been pointed out it may not be. I contend that it is because while specific language implementation details may be interesting, I'd like to keep this on a conceptual level. In other words, I think that, conceptually, you'd never have to care whether an object reference is typed as a class or an interface but I'm not sure, hence the question.
This is not a discussion about IDEs and what they do or don't do when visualizing the different types; caring about the type of an object is certainly a necessity when browsing through code (packages/sources/whatever form). Nor is it a discussion about the pros or cons of either naming convention. I just can't seem to figure out in what scenario, other than object creation, you actually care whether you're referencing a concrete type or an interface.
Most of the time, you probably don't care. But here are some instances that I can think of where you would. There are several, and it does vary a little bit by language. Some languages don't mind as much as others.
In the case of inversion of control (where someone PASSES you a parameter) you probably don't care whether it's typed as an interface or as a class as far as calling its methods goes. But when dealing with the types themselves, it definitely can make a difference.
In managed languages such as the .NET languages and Java, a class can inherit only one base class but implement many interfaces, whereas an interface can extend multiple interfaces. The order of the base class vs. the interfaces also matters in a declaration. So you need to know which is which when defining a new class or interface.
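A quick Java illustration of those declaration rules (all type names are invented):

interface Readable2 { }     // hypothetical marker interfaces
interface Closeable2 { }

// An interface may extend several other interfaces...
interface Channel2 extends Readable2, Closeable2 { }

// ...while a class may extend only ONE class, though it can implement many interfaces.
// Note the fixed order: the superclass is declared before the interface list.
class BufferedChannel2 extends Object implements Readable2, Closeable2 { }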
In Delphi / VCL, interfaces are reference counted and automatically collected, whereas classes must be explicitly freed, so lifecycle management on the whole is affected, not just creation.
Interfaces may not be viable sources for class references.
Interfaces can be cast to compatible interfaces, but in many languages, they cannot be cast to compatible classes. Classes can be cast to either.
Interfaces may be passed to parameters of type IID, or IUnknown, whereas classes cannot (without a cast and a supporting interface).
An interface's implementation is unknown. Its input and output are defined, but the implementation that creates the output is abstracted. In general, one's attitude may be that when working with a class, one may know how the class works. But when working with an interface, no such assumption should be made. In a perfect world, it might make no difference, but in reality this most certainly can affect your design.
I agree with you (and thereby do not use an "I" prefix for interfaces). We shouldn't have to care whether it is an abstract class or an interface.
Worth noting that Java needs the notion of an interface solely because it does not support multiple inheritance. Otherwise, the "abstract class" concept would suffice (whether fully abstract, partially abstract, or almost concrete with just one tiny abstract bit).
Things that a concrete class can have and an interface can't:
Constructors
Instance fields
Static methods and static fields
So if you use the convention of starting all interface names with 'I' then it indicates to the user of your library that the particular type will not have any of the above mentioned things.
But personally I feel that this is not reason enough to start all interface names with 'I'. Modern IDEs are powerful enough to indicate whether some type is an interface. Also, it hides the true meaning of an interface name: imagine if the Runnable and List interfaces were named IRunnable and IList respectively.
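For example, in Java-flavoured code (names invented), a class can carry construction logic and per-instance state, which the interface alone cannot:

interface Task2 {
    void run();                // the interface only declares what an implementation can do
}

class CountingTask implements Task2 {
    private int count;         // instance field - not possible on the interface

    CountingTask(int start) {  // constructor - not possible on the interface
        this.count = start;
    }

    public void run() {
        count++;
        System.out.println("ran " + count + " time(s)");
    }
}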
When a class is used, I can make the assumption that I will get objects from a relatively small and almost well-defined range of subclasses. That's because subclassing is (or at least should be) a decision that isn't made too easily, especially in languages that don't support multiple inheritance. In contrast, interfaces can be implemented by any class, and the implementation can be added to any class later.
So the information is useful, especially when browsing through code and trying to get a feeling for what the code author intended to do. But I think it should be enough if the IDE shows interfaces and classes with distinctive icons.
You want to see at a glance which are the "interfaces" and which are the "concrete classes" so that you can focus your attention to the abstractions in the design instead of the details.
Good designs are based on abstractions - if you know and understand them you understand the system without knowing any of the details. So you know you can skip the classes without the I prefix, and focus on the ones that do have it while you are understanding the code, and you also know to avoid building new code around non-interface classes without having to refer to some other design document.
I agree that the I* naming convention is just not appropriate for modern OO languages, but truth is this question isn't really language agnostic. There are legitimate cases where you have an interface not for any architectural reason but because you simply don't have an implementation, or don't have access to one. For these cases you can read I* as *Stub or similar, and in these cases it might make sense to have both an IBlah and a Blah class.
These days, though, you rarely come up against this, and in modern OO languages when you say Interface you actually mean Interface, not just "I don't have the code for this." So there is no need for the I*, and in fact it encourages really bad OO design, as you won't get the natural naming conflicts that would tell you something's gone wrong in your architecture. Say you had a List and an IList... what's the difference? When would you use one over the other? If you wanted to implement IList, would you be constrained (conceptually at least) by what List does? I'll tell you what... if I found both an IBlah and a Blah class in any of my codebases I would purge one at random and take away that person's commit privileges.
Interfaces don't have fields, hence when you use IDisposable (or whatever), you know you're only declaring what you can do. That seems to me the main point of it.
Distinguishing between interfaces and classes may be useful, anywhere the type is referenced, in the IDE or out, to determine:
Can I make a new implementation of this type?
Can I implement this interface in a language that does not support multiple inheritance of implementation classes (e.g., Java)?
Can there be multiple implementations of this type?
Can I easily mock this interface in an arbitrary mocking framework?
It is worth noting that UML distinguishes between interfaces and implementation classes. In addition, the "I" prefix is used in the examples in "The Unified Modeling Language User Guide" by the three amigos Booch, Jacobson and Rumbaugh. (Incidentally, this also provides an example why IDE syntax coloring alone is not sufficient to distinguish in all contexts.)
You should care, because:
An interface (capital "I") enables you or your co-workers to use any implementation that implements the interface. If in the future you figure out a better way to do something, say a better list-sorting algorithm, you won't be stuck with having to change ALL of the invoking methods as well.
It helps in understanding code - e.g. you don't need to memorize all 10 implementations of, say, ISortableList; you just care that it sorts a list (or something like that). Your code becomes practically self-documenting here.
To complete the discussion, here is a pseudocode example illustrating the above:
//Pseudocode - define implementations of ISortableList
class SortList1 : ISortableList { ... }
class SortList2 : ISortableList { ... }
class SortList3 : ISortableList { ... }
//Pseudocode - the interface way
void Populate(ISortableList list, int[] nums)
{
    list.set(nums);
}
//Pseudocode - the "I don't care" way
void Populate2(SortList1 list, int[] nums)
{
    list.set(nums);
}
...
//Pseudocode - create instances
SortList1 list1 = new SortList1();
SortList2 list2 = new SortList2();
SortList3 list3 = new SortList3();
//Invoke Populate() - the "interface way"
Populate(list1, nums); //OK, list1 implements ISortableList
Populate(list2, nums); //OK, list2 implements ISortableList
Populate(list3, nums); //OK, list3 implements ISortableList
//Invoke Populate2() - the "I don't care way"
Populate2(list1, nums); //OK, list1 is an instance of SortList1
Populate2(list2, nums); //Not OK, list2 is not of the required argument type, won't compile
Populate2(list3, nums); //Same as above
Hope this helps,
Jas.
I've noticed that some programmers like to make interfaces for just about all their classes. I like interfaces for certain things (such as checking if an object supports a certain behavior and then having an interface for that behavior) but overuse of interfaces can sometimes bloat the code. When I declare methods or properties as public I'd expect people to just use my concrete classes and I don't really understand the need to create interfaces on top of that.
I'd like to hear your take on interfaces. When do you use them and for what purposes?
Thank you.
Applying any kind of design pattern or idea without thinking, just because somebody told you it's good practice, is a bad idea.
That of course includes creating a separate interface for each and every class you create. You should at least be able to give a good reason for every design decision, and "because Joe says it's good practice" is not a good enough reason.
Interfaces are good for decoupling the interface of some unit of code from its implementation. A reason to create an interface is because you foresee that there might be multiple implementations of it in the future. It can also help with unit testing; you can make a mock implementation of the services that the unit you want to test depends on, and plug the mock implementations in instead of "the real thing" for testing.
Interfaces are a powerful tool for abstraction. With them, you can more freely substitute (for example) test classes and thereby decouple your code. They are also a way to narrow the scope of your code; you probably don't need the full feature set of a given class in a particular place - exactly what features do you need? That's a client-focused way of thinking about interfaces.
Unit tests.
With an interface describing all of a class's methods and properties, it is within the reach of a click to create a mock-up class that simulates behavior outside the scope of said test.
It's all about expecting and preparing for change.
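As a rough sketch of that (in Java; Clock, FixedClock and ReportHeader are invented names, not a library API), the test hands in a hand-rolled mock of the interface so the behavior is fully controlled:

interface Clock {
    long nowMillis();
}

// Production implementation wraps the real system clock.
class SystemClock implements Clock {
    public long nowMillis() { return System.currentTimeMillis(); }
}

// Hand-rolled mock: always returns a fixed instant, so the test is deterministic.
class FixedClock implements Clock {
    private final long fixed;
    FixedClock(long fixed) { this.fixed = fixed; }
    public long nowMillis() { return fixed; }
}

class ReportHeader {
    private final Clock clock;
    ReportHeader(Clock clock) { this.clock = clock; }
    String stamp() { return "Generated at " + clock.nowMillis(); }
}

// In a test:
//   ReportHeader header = new ReportHeader(new FixedClock(42L));
//   assert header.stamp().equals("Generated at 42");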
One approach that some use (and I'm not necessarily advocating it) is to create an IThing and a ThingFactory. All code will reference IThing (instead of ConcreteThing), and all object creation is done via the factory method ThingFactory.CreateThing(some params).
So, today we only have AmericanThing, and the possibility is that we may never need another. However, if experience has taught me anything, it is that we will ALWAYS need another.
You may not need EuropeanThing, but TexasAmericanThing is a distinct possibility.
So, in order to minimize the impact on my code, I can change the creational line to:
ThingFactory.CreateThing(Account)
and create my class TexasAmericanThing : IThing.
Other than building the class, the only change is to the ThingFactory, which will require a change from
public static IThing CreateThing(Account a)
{
    return new AmericanThing();
}
to
public static IThing CreateThing(Account a)
{
    if (a.State == State.TEXAS) return new TexasAmericanThing();
    return new AmericanThing();
}
I've seen plenty of mindless interfaces myself. However, when used intelligently, they can save the day. You should use interfaces for decoupling two components or two layers of an application. This can enable you to easily plug in varying implementations of the interface without affecting the client, or simply insulate the client from constant changes to the implementation, as long as you stay true to the contract of the interface. This can make the code more maintainable in the long term and can save the effort of refactoring later.
However, overly aggressive decoupling can make for non-intuitive code, and its overuse can become a nuisance. You should carefully identify the cohesive parts of your application and the boundaries between them and use interfaces there. Another benefit of using interfaces between such parts is that they can be developed in parallel and tested independently using mock implementations of the interfaces they use.
OTOH, having client code access public member methods directly is perfectly okay if you really don't foresee any changes to the class that might also necessitate changes in the client. In any case, however, having public member fields I think is not good. This is extremely tight coupling! You are basically exposing the architecture of your class and making the client code dependent on it. Tomorrow if you realize that another data structure for a particular field will perform better, you can't change it without also changing the client code.
I primarily use interfaces for IoC to enable unit testing.
On the one hand, this could be interpreted as premature generalization. On the other hand, using interfaces as a rule helps you write code that is more easily composable and hence testable. I think the latter wins out in many cases.
I like interfaces:
* to define a contract between parts/modules/subsystems or 3rd party systems
* when there are exchangeable states or algorithms (state/strategy)
Possible Duplicate:
Anyone else find naming classes and methods one of the most difficult part in programming?
Sometimes it seems I can't really find any name for a function I am writing; could this be because the function is not cohesive enough?
What do you do when no good name for a function comes to mind?
For naming functions, avoid plain nouns and instead name them after verbs. Some pointers:
Have function names that are visibly distinct, e.g. don't have both validateInput() and validateUserInput(), since it's hard to say what one does over the other. Also, avoid characters that look very similar, e.g. the number 1 and lowercase 'l'. Sometimes it makes a difference.
Are you working on a project with multiple people? You should spend some time going over naming conventions as well, such as if the function name should have underscores, should be camelCase, etc.
Hungarian notation is a bad idea; avoid doing it.
Think about what the function is doing. The cohesion that you mentioned in your question comes to mind. Generally, functions should do just one thing, so don't name it constructCarAndRunCar() but rather have one function that constructs and another that runs it. If your functions are between, say 20 and 40 lines, you're good.
Sometimes, and this depends on the project, you might also want to prefix your function names with the class if the class is purely procedural (only composed of functions). So if you have a class that takes care of running a simulation, name your functions sim_pauseSimulation() and sim_restartSimulation(). If your class is OOP-based, this isn't an issue as much.
Don't use the underlying data structures in the functions themselves; these should be abstracted away. Rather than having functions like addToVector() or addToArray(), have them be addToList() instead. This is especially true if these are prototypes or the data structures might change later.
Finally, be consistent in your naming conventions. Once you come up with a convention after some thinking, stick to it. PHP comes to mind when thinking of inconsistent function names.
Happy coding! :)
Give it your best shot and refactor later if it still doesn't fit.
Sometimes it could be that your function is too large and therefore doing too many things. Try splitting up your function into other functions and it might be clearer what to call each individual function.
Don't worry about keeping names to one or two words. If a function does something that can be explained in a mini-sentence of sorts, go ahead and give it a slightly longer name if it'll help other developers understand what is going on.
Another suggestion is to get feedback from others. Often people who come from another perspective and see the function for the first time will have a better idea of what to call it.
I follow the following rule: name according to the purpose (Why? - a design decision) and not the contents (What? How? - those can be seen in the code).
For functions it is almost always an action (verb) followed by a noun describing the parameters and/or results. (Off-topic, but for variables do not use "arrayOfNames" or "listOfNames"; that is type information - simply use "names".) This will also avoid inconsistencies if you refactor the code partly.
For recurring patterns like object creation, be consistent and always use the same naming, e.g. "Create..." (and not sometimes "Allocate..." or "Build..."), otherwise you or your colleagues will end up scratching your heads.
I find it easier to name functions when I don't have to cut back on the words. As long as you're not writing JavaScript for the Google start page, you can use longer names.
For example, you have the methods dequeueReusableCellWithIdentifier and mergeChangesFromContextDidSaveNotification in Apple's Cocoa frameworks.
As long as it's clear what the function is doing you can name it whatever you want and refactor it later.
Almost as important as the function name is that you are consistent with comments. Many IDEs will use your properly formatted comments not only to provide context-sensitive help for a function you might be using, but also to generate documentation. This is invaluable when returning to a project after a long period or when working with other developers.
In academic settings, they provide an appreciated demonstration of your intentions.
A good rule of thumb is [verb]returnDescription. This is easy with GetName()-type functions but can't be applied universally. It's tough to find a balance between unobtrusive and descriptive code.
Here's a .Net convention guide, but it is applicable to most languages.
Go to www.thesaurus.com and try to find a better-suited name through synonyms.
As a practical rule of my own, if a function name is too long, the function should be atomized into a new object. Still, I agree with all the posts above. BTW, nice noob question.
In the context of programming, how do idioms differ from patterns?
I use the terms interchangeably and normally follow the most popular way I've heard something called, or the way it was called most recently in the current conversation, e.g. "the copy-swap idiom" and "singleton pattern".
The best distinction I can come up with is that code meant to be copied almost literally is more often called a pattern, while code meant to be taken less literally is more often called an idiom, but even that isn't always true. This doesn't seem to be more than a stylistic or buzzword difference. Does that match your perception of how the terms are used? Is there a semantic difference?
Idioms are language-specific.
Patterns are language-independent design principles, usually written in a "pattern language" (a uniform template) describing things such as the motivating circumstances, pros & cons, related patterns, etc.
When people observing program development from On High (Analysts, consultants, academics, methodology gurus, etc) see developers doing the same thing over and over again in various situations and environments, then the intelligence gained from that observation can be distilled into a Pattern. A pattern is a way of "doing things" with the software tools at hand that represent a common abstraction.
Some examples:
OO programming took global variables away from developers. For those cases where they really still need global variables but need a way to make their use look clean and object oriented, there's the Singleton Pattern.
Sometimes you need to create a new object having one of a variety of possible different types, depending on some circumstances. An ugly way might involve an ever-expanding case statement. The accepted "elegant" way to achieve this in an OO-clean way is via the "Factory" or "Factory Method" pattern.
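A bare-bones sketch of that factory idea in Java (Shape, Circle, Square and ShapeFactory are purely illustrative):

interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class ShapeFactory {
    // The "which concrete type?" decision lives in one place; callers only ever see Shape.
    static Shape create(String kind, double size) {
        switch (kind) {
            case "circle": return new Circle(size);
            case "square": return new Square(size);
            default: throw new IllegalArgumentException("unknown kind: " + kind);
        }
    }
}

// Usage: Shape s = ShapeFactory.create("circle", 2.0);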
Sometimes a lot of developers do things in a certain way, but it's a bad way that should be discouraged. This can be formalized as an antipattern.
Patterns are a high-level way of doing things, and most are language independent. Whether you create your objects with new Object or Object.new is immaterial to the pattern.
Since patterns are something a bit theoretical and formal, there is usually a formal pattern (heh - word overload! let's say "template") for their description. Such a template may include:
Name
Effect achieved
Rationale
Restrictions and Limitations
How to do it
Idioms are something much lower-level, and usually operate at the language level. Example:
*dst++ = *src++
in C copies a data element from src to dst while incrementing the pointers to both; it's usually done in a loop. Obviously, you won't see this idiom in Java or Object Pascal.
while (<INFILE>) { print; }
is (roughly quoted from memory) a Perl idiom for looping over an input file and printing out all lines in the file. There's a lot of implicit variable use in that statement. Again, you won't see this particular syntax anywhere but in Perl; but an old Perl hacker will take a quick look at the statement and immediately recognize what you're doing.
Contrary to the idea that patterns are language agnostic, both Paul Graham and Peter Norvig have suggested that the need to use a pattern is a sign that your language is missing a feature. (Visitor Pattern is often singled out as the most glaring example of this.)
I generally think the main difference between "patterns" and "idioms" to be one of size. An idiom is something small, like "use an interface for the type of a variable that holds a collection" while Patterns tend to be larger. I think the smallness of idioms does mean that they're more often language specific (the example I just gave was a Java idiom), but I don't think of that as their defining characteristic.
Since if you put 5 programmers in a room they will probably not even agree on what things are patterns, there's no real "right answer" to this.
One opinion that I've heard once and really liked (though I can't for the life of me recall the source) is that idioms are things that should probably be in your language, or that some language already has. Put another way, they are tricks that we use because our language doesn't offer a direct primitive for them. For instance, there's no singleton in Java, but we can mimic it by hiding the constructor and offering a getInstance method.
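For reference, that getInstance idiom looks roughly like this in Java (Settings is just a placeholder name):

public final class Settings {
    private static final Settings INSTANCE = new Settings();

    private Settings() { }             // hidden constructor: no one else can instantiate it

    public static Settings getInstance() {
        return INSTANCE;
    }
}

// Usage: Settings settings = Settings.getInstance();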
Patterns, on the other hand, are more language agnostic (though they often refer to a specific paradigm). You may have some infrastructure to support them (e.g., Spring for MVC), but they're not, and are not going to be, language constructs, and yet you could need them in any language from that paradigm.