Question regarding lazy evaluation for Diplomacy (rocket-chip)? - chisel

I have been reading through the Diplomacy model for Chisel, and I have a question regarding the design philosophy behind it. As I understand it, Scala's lazy evaluation is used to register compile-time information that can be forced to evaluate during elaboration, before FIRRTL generation, in order to perform meta-operations like parameter negotiation.
My question is, is this the only approach? Would it be possible to create a proxy object in Scala which registers these meta-properties and evaluates them when a particular function is called? Then this function can be called before evaluation, to do the negotiation.
The reason I am asking is that I am learning Scala and Chisel incrementally, and I would like to understand how to build these abstractions as incrementally as possible, using primitives that are as basic as possible.

I doubt that Diplomacy is the only possible approach to this problem. It has evolved over time to meet the needs of an adaptable generator-based approach. One of the key features is the ability to acquire information from modules as they evaluate their parameters. It's possible some proxy system might accomplish the same functionality, but there would be a risk, I think, that the meta-property code associated with the putative proxy objects would become decoupled from the parameter evaluation logic.
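To make the contrast concrete, here is a minimal sketch of the two styles in Scala. The class names and the max-width negotiation rule are invented for illustration; this is not Diplomacy's actual API.

// Lazy style: by-name parameters plus lazy val defer the negotiation
// until the value is first forced, e.g. during elaboration.
class LazyNode(upstreamWidth: => Int, downstreamWidth: => Int) {
  lazy val negotiatedWidth: Int = math.max(upstreamWidth, downstreamWidth)
}

// Proxy style: requirements accumulate in a registry, and an explicit
// negotiate() call is the forcing point that must happen at the right time.
class ProxyNode {
  private var widths: List[Int] = Nil
  def register(width: Int): Unit = widths ::= width
  def negotiate(): Int = widths.max
}

object Demo extends App {
  val lazyNode = new LazyNode(32, 64)
  println(lazyNode.negotiatedWidth) // forced here: prints 64

  val proxy = new ProxyNode
  proxy.register(32)
  proxy.register(64)
  println(proxy.negotiate()) // explicit forcing point: prints 64
}

The proxy version works, but notice that the call to negotiate() has to be sequenced by hand, which is exactly where it could drift out of sync with the parameter evaluation logic.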

Related

How to represent standalone functions calling other standalone functions

I'm documenting the current state of a JavaScript package which comprises several modules consisting predominantly of standalone functions. As a result of using callbacks extensively, the package includes nested calls between standalone functions from multiple packages.
With this in mind, does anyone know what is the best way to represent calls between standalone functions in a sequence diagram?
Are the details of the standalone functions worth it?
Common wisdom recommends avoiding the trap of using UML as a graphical programming language. Things that are more easily expressed in code, and easy for readers to understand there, are better left as code. Prefer to use UML to give the big picture and to explain complex relationships that are less obvious to spot in the code.
Automate obvious documentation?
Manually modelling a very precise sequence diagram is time-consuming. Moreover, such a diagram quickly becomes outdated with the next version of the code.
Therefore, if your interest is to give an overview of how the functions relate to each other, you may prefer to provide a visual overview using a simpler call graph. The reader can grasp the overall structure easily and look for more details in the code.
The advantage is that this task can be automated, using one of the many call-graph generators available on the market (just google for "javascript call graph generator" to find some). There is, by the way, an excellent book on further automating documentation that I can only recommend with enthusiasm: "Living Documentation: Continuous Knowledge Sharing by Design", First Edition.
If you have to focus on the detailed chronological sequencing of the calls, however, a call graph would not be sufficient. In this case, sequence diagrams may indeed be more relevant.
UML sequence diagrams with standalone functions?
A sequence diagram shows interactions between lifelines within an enclosing classifier. Usually a lifeline is used to represent an object, i.e. an instance of a class, but its definition is flexible enough to accommodate any participant in an interaction.
Individual standalone functions can moreover be considered as individual objects that instantiate a more general class of functions (that's the concept behind a functor, like C++'s std::function). This is particularly relevant in JavaScript, where functions can be assigned to variables or used as parameters. So you may just use a lifeline that clarifies this. It's up to you to decide how you name the call message (e.g. operator()(a,b,c), or its real name for readability).
You can also group a bunch of related standalone functions into a pseudo-class that would represent, in your model, a module, compilation unit or namespace. Although a module is not stricto sensu an object, you may in your modelling deal with it as if it were a class with only one (anonymous) instance: its state would be the global variables defined in the module scope, and the related standalone functions could be seen as operations of this imaginary class. The lifeline would then correspond to a module, and function calls would be represented as synchronous messages, either to another module or to the module itself, with nested activations to visualise the "nested" calls.

Groovy performance

Hi,
We are going to start a CRUD project. I have some experience using Groovy, and I think it is the right tool. My concern is about performance: how good is Groovy compared to a Java solution? It is estimated that we can have up to 100 simultaneous users. We are going to use a MySQL DB and a Tomcat server.
Any comment or suggestion?
Thanks
I've recently gathered five negative votes (!) on an answer about Groovy performance; however, I think there is indeed a need for objective facts. Personally, I find it productive and fun to work with Groovy and Grails; nevertheless, there is a performance issue that needs to be addressed.
There are a number of benchmark comparisons on the web, including this one. You can never trust single benchmarks (and the cited one isn't even close to being scientific), but you'll get the idea.
Groovy relies heavily on runtime meta-programming. Every object in Groovy (well, except Groovy scripts) extends GroovyObject with its invokeMethod(..) method, for example. Every time you call a method on your Groovy classes, the method will not be called directly, as in Java, but by invoking the aforementioned invokeMethod(..) (which does a whole bunch of reflection and lookups).
Additionally, every GroovyObject has an associated MetaClass. The concepts of method invocation, etc., are similar.
There are other factors that decrease Groovy performance in comparison to Java, including the boxing of primitive data types and (optional) weak typing, but the aforementioned runtime meta-programming is crucial: it also means you cannot count on the JIT compiler, which compiles Java bytecode to native code to speed up execution, for much help.
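To get a feel for the cost, here is my own (unscientific) micro-benchmark sketch, written in Scala for brevity on the JVM; it compares a direct call with a reflective one, a rough stand-in for the lookup-heavy dispatch described above. Absolute numbers will vary wildly with JIT warm-up.

import java.lang.reflect.Method

object DispatchBench extends App {
  class Greeter { def greet(name: String): String = "Hello " + name }

  val g = new Greeter
  val m: Method = classOf[Greeter].getMethod("greet", classOf[String])
  val n = 1000000

  // Crude wall-clock timer; good enough to show the order of magnitude.
  def time(label: String)(body: => Unit): Unit = {
    val t0 = System.nanoTime()
    body
    println(f"$label: ${(System.nanoTime() - t0) / 1e6}%.1f ms")
  }

  time("direct")     { var i = 0; while (i < n) { g.greet("world"); i += 1 } }
  time("reflective") { var i = 0; while (i < n) { m.invoke(g, "world"); i += 1 } }
}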
To address these issues, there's the Groovy++ project. You simply annotate your Groovy classes with @Typed, and they'll be statically compiled to (real) Java bytecode. Unfortunately, however, I found Groovy++ to be not quite mature, and not well integrated with the main Groovy line and the IDEs. Groovy++ also contradicts basic Groovy programming paradigms. Moreover, Groovy++'s @Typed annotation does not work recursively, that is, it does not affect underlying libraries like GORM or the Grails controllers infrastructure.
I guess you're evaluating a Grails project as well.
Looking at Grails' GORM: that framework makes heavy use of runtime meta-programming, so using Hibernate directly should perform much better.
At the controllers or (especially) services level, extensive computations can be externalized to Java classes. However, GORM's proportion of a typical CRUD application is the higher one.
Potential performance issues in Grails are typically addressed by caching layers at the database level or by avoiding calls to service or controller methods (see the SpringCache plugin or the Cache Filter plugin). These are typically implemented on top of the Ehcache infrastructure.
Caching, obviously, suits static data well, in contrast to (database) data that changes frequently, or web output that is rather variable.
And, finally, you can "throw hardware at it". :-)
In conclusion, the most decisive factor for or against using Groovy/Grails in a large-scaling website ought to be the question whether caching fits with the specific website's nature.
EDIT:
As for the question whether Java's JIT compiler had a chance to step in ...
A simple Groovy class
class Hello {
    def getGreeting(name) {
        "Hello " + name
    }
}
gets compiled to
public class Hello
    implements GroovyObject
{
    public Hello()
    {
        Hello this;
        CallSite[] arrayOfCallSite = $getCallSiteArray();
    }

    public Object getGreeting(Object name) {
        CallSite[] arrayOfCallSite = $getCallSiteArray();
        return arrayOfCallSite[0].call("Hello ", name);
    }

    static
    {
        Long tmp6_3 = Long.valueOf(0L);
        __timeStamp__239_neverHappen1288962446391 = (Long)tmp6_3;
        tmp6_3;
        Long tmp20_17 = Long.valueOf(1288962446391L);
        __timeStamp = (Long)tmp20_17;
        tmp20_17;
        return;
    }
}
This is just the tip of the iceberg. Jochen Theodorou, an active Groovy developer, put it this way:
A method invocation in Groovy usually consists of several normal method calls, where the arguments are stored in an array, the classes of the arguments must be retrieved, a key is generated out of them, a hashmap is used to look up the method, and if that fails, then we have to test the available methods for compatible methods, select one of the methods based on the runtime type, create a key for the hashmap, and then, in the end, do a reflection-like call on the method.
I really don't think that the JIT inlines such dynamic, highly complex invocations.
As for a "solution" to your question, there is no "do it that way and you're fine". Instead, the task is to identify the factors that are more crucial than others and possible alternatives and mitigation strategies, to evaluate their impact on your current use cases ("can I live with it?"), and, finally, to identify the mix of technologies that meets the requirements best (not completely).
Performance (in the context of web applications) is an aspect of your application and not of the framework/language you are using. Any discussion and comparison about method invocation speed, reflection speed and the amount of framework layers a call goes through is completely irrelevant. You are not implementing photoshop filters, fractals or a raytracer. You are implementing web based CRUD.
Your showstopper will most probably be inefficient database design, N+1 queries (in case you use ORM), full table scans etc.
To answer your question: use any modern language/web framework you feel more confident with and focus on correct architecture/design to solve the business problem at hand.
Thanks for the answers and advice. I like Groovy, but there might be some performance problems under some circumstances. Groovy++ might be a better choice. At this point I would prefer to give a chance to Spring Roo, which overlaps heavily with Groovy, but where you remain in Java and NO roo.jar is added to your project. Therefore you are not paying any extra cost for using it.
Moreover, Roo allows reverse engineering and round-trip engineering.
Unfortunately the plug-in library is pretty small up to now.
Luis
50 to 100 active users is not much traffic. As long as you have cached pages correctly and your MySQL queries are properly indexed, you should be OK.
Here is a site I am running in my basement on a $1000 server. It's written in Grails.
Check out the performance yourself: http://www.ewebhostguide.com
Caution: sometimes the Comcast connection is down and the site may appear down. But that happens only for a few minutes. Those are the cons of running a site in your basement.

Understanding complex post-conditions in DbC

I have been reading over design-by-contract posts and examples, and there is something that I cannot seem to wrap my head around. In all of the examples I have seen, DbC is used on a trivial class testing its own state in the post-conditions (e.g. lots of Bank Accounts).
It seems to me that most of the time, when you call a method of a class, the method does much of its work by delegating calls to its external dependencies. I understand how to check for this in a unit test for specific scenarios, using dependency inversion and mock objects that focus on the external behavior of the method, but how does this work with DbC and post-conditions?
My second question has to do with understanding complex post-conditions. It seems to me that to write out a post-condition for many functions, you basically have to re-write the body of the function in order for your post-condition to know what the new state is going to be. What is the point of that?
I really do like the notion of DbC and I think that it has great promise, particularly if I can figure out how to reproduce some failure state once I find a violated contract. Over the past couple of hours I have been reading some neat stuff regarding automatic test generation in Eiffel. I am currently trying to improve my processes in C++ development, but I am open to learning something new if I can figure out how to not lose all of the ground I have made in my current projects. Thanks.
but how does this work with DbC and post-conditions?
Every function is basically one of these:
A sequence of statements
A conditional statement
A loop
The idea is that you should check any postconditions about the results of the function that go beyond the union of the postconditions of all the functions called.
that you basically have to re-write the body of the function for your post-condition to know what the new state is going to be
Think about it the other way round. What made you write the function in the first place? What were you pursuing? Can that be expressed in a postcondition which is more simple than the function body itself? A postcondition will typically use queries (what in C++ are const functions), while the body usually combines commands and queries (methods that modify the object and methods which only get information from it).
In some cases, yes, you will find out that you can really add little value with postconditions. In these cases, writing a bunch of tests will typically be enough.
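A classic illustration of a postcondition that adds value while staying simpler than the body (my own sketch, in Scala, not from the original posts): for a sort routine, the body is an algorithm, but the postcondition is just "ordered, and the same elements".

object SortContract extends App {
  def sort(xs: List[Int]): List[Int] = {
    val result = xs.sorted // the body could be any sorting algorithm

    // Postcondition 1: the result is ordered (expressed with queries only).
    assert(result.zip(result.drop(1)).forall { case (a, b) => a <= b })
    // Postcondition 2: the result is a permutation of the input.
    assert(result.groupBy(identity) == xs.groupBy(identity))

    result
  }

  println(sort(List(3, 1, 2))) // List(1, 2, 3)
}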
See also:
Bertrand Meyer, Contract Driven Development
Related questions 1, 2
Delegation at the contract level
most of the time when you call a method of a class, it does much more work delegating method calls to its external dependencies
As for this first question: the implementation of a function/method may call many other functions/methods, but if the designer of the code had a clear mind, this does not imply that the specification of the caller is the concatenation of the specifications of the callees. For a method that calls many others, the size of the specification can remain contained if the method accomplishes a precise and well-defined task, which it should, if the whole system was well designed.
You are clearly asking your question from the point of view of run-time assertion checking. In this context, the above would perhaps be expressed as "you don't need to re-check in the post-condition of the caller that all the callees have respected their respective contracts. These checks will already be made on each call. In the post-condition of the caller, only check the functionally visible result of the caller."
Understanding complex post-conditions
You may find this "ACSL by example" document interesting (although probably different from what you're used to). It contains many examples of formal contracts for C functions. The language of the contracts is intended for static verification instead of run-time checking, with all the advantages and the drawbacks that it entails. They are a little more sophisticated than the "Bank Accounts" that you mention — these functions implement real algorithms, although simple ones. The document keeps the contracts short and readable by introducing well-thought-out auxiliary predicates (which would be called queries in Eiffel, as Daniel points out in his answer).

What do you wish was automatic in your favorite programming language? [closed]

As a programmer, I often look at some features of the language I'm currently using and think to myself "This is pretty hard to do for a programmer, and could be taken care of automatically by the machine".
One example of such a feature is memory management, which has been automatic for a while in a variety of languages. While memory management is not that hard to do manually most of the time, doing it perfectly all the way through your application without leaking memory is extremely hard. Automation has made it easy again so that we programmers could concentrate on more critical questions.
Are there any features that you think programming languages should automate because the reward/difficulty ratio is just too low (for example, concurrency)?
This question is intended to be a brainstorm about what the future of programming could be like, and what languages could do for us to let us focus on more important tasks, so please post your wishes even if you don't think automation is practical/feasible. Good answers will point to stuff that is genuinely hard to do in many languages, as opposed to single-language pet-peeves.
Whatever the language can do for me automatically, I will want a way of doing it for myself.
Concurrent programming/parallelism that is (semi-)automated, opposed to having to mess around with threads, callbacks, and synchronisation. Being able to parallelise for loops, such as:
Parallel.ForEach(fooList, item =>
{
    item.PerformLongTask();
});
is just made of win.
Certain languages already support such functionality to a degree, however. Notably, F# has asynchronous workflows. Coming with the release of .NET 4.0, the Parallel Extensions library will make concurrency much easier in C# and VB.NET. I believe Python also has some sort of concurrency library, though I personally haven't used it.
What would also be cool is fully automated parallelism in purely functional languages, i.e. not having to change your code even slightly and automatically having it run near-optimally across multiple cores. Note that this can only be done with purely functional languages (such as Haskell, but not CAML/F#). Still, constructs such as the example given above would be very handy for automating parallelism in object-oriented and other languages.
I would imagine that libraries, design patterns, and even entire programming languages oriented towards simple and high-level support for parallelism will become increasingly widespread in the near future, as desktop computers start to move from 2 cores to 4 and then 8 cores and the advantage of automated concurrency becomes much more evident.
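For comparison, Scala already offers similarly terse parallel loops through its parallel collections. A sketch (on Scala 2.13+ this needs the separate scala-parallel-collections module):

// .par turns the list into a parallel collection whose foreach is
// distributed across a thread pool by the runtime.
import scala.collection.parallel.CollectionConverters._

object ParDemo extends App {
  val fooList = (1 to 8).toList
  fooList.par.foreach { item =>
    println(s"processing $item on ${Thread.currentThread.getName}")
  }
}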
exec("Build a system to keep the customer happy, based on requirements.txt");
In Java, create beans less verbosely.
For example:
bean Student
{
String name;
int id;
type1 property1;
type2 property2;
}
and this would create a bean with private fields, default accessors, toString, hashCode, equals, etc.
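For what it's worth, Scala's case classes already grant this wish on the JVM; a sketch:

// One line generates the fields, accessors, equals, hashCode,
// toString, and a copy method.
case class Student(name: String, id: Int)

object StudentDemo extends App {
  val s = Student("Ada", 1)
  println(s)                      // Student(Ada,1)
  println(s == Student("Ada", 1)) // true: structural equality for free
  println(s.copy(id = 2))         // Student(Ada,2)
}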
In Java I would like a keyword that would make the entire class immutable.
E.g.
public immutable class Xyz {
}
And the compiler would warn me if any conditions of immutability were broken.
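As an aside, Scala gets close to this wish: fields declared as val cannot be reassigned, and the compiler rejects any violation (shallow immutability only, though). A sketch:

// No setters, no vars: the compiler enforces that fields never change.
final class Xyz(val a: Int, val b: String)

object XyzDemo extends App {
  val x = new Xyz(1, "one")
  // x.a = 2  // does not compile: "reassignment to val"
  println(s"${x.a} ${x.b}")
}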
Concurrency. That was my main idea when asking this question. This is going to become more and more important with time, since current CPUs already have up to 8 logical cores (4 cores + hyperthreading), and 12 logical cores will appear in a few months. In the future, we are going to have a hell of a lot of cores at our disposal, but most programming languages only make it easy for us to use one at a time.
The Threads + Synchronization model that is exposed by most programming languages is extremely low level, and very close to what the CPU does. To me, the current level of concurrency language support is roughly equivalent to the memory management support in C: Not integrated, but some things can be delegated to the OS (malloc, free).
I wish some language would come up with a suitable abstraction that either makes the Threads + Synchronization model easier, or that simply hides it from us completely (just as automatic memory management made good old malloc/free obsolete in Java).
Some functional languages such as Erlang have a reputation of having good multithreading support, but the brain-switch required to do functional programming doesn't really make the whole ordeal much easier.
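One abstraction in this direction, shown here in Scala purely as an illustration, is the Future: the thread pool and synchronisation are hidden behind composable values.

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object FutureDemo extends App {
  // No explicit threads or locks: the runtime schedules the work on a
  // pool, and sequencing is expressed as value composition.
  val work = Future.sequence((1 to 4).toList.map(i => Future(i * i)))
  println(Await.result(work, 5.seconds)) // List(1, 4, 9, 16)
}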
.Net:
A warning when manipulating strings with methods such as Replace without assigning the returned value (the new string) to a variable, because if you don't know that strings are immutable, this issue will frustrate you.
In C++, type inference for variable declarations, so that I don't need to write
for (vector<some_longwinded_type>::const_iterator i = v.begin(); i != v.end(); ++i) {
...
}
Luckily this is coming in C++11 in the form of auto:
for (auto i = v.begin(); i != v.end(); ++i) {
...
}
Coffee. I mean, the language is called Java, so it should be able to make my coffee! I hate getting up from programming, going to the coffee pot, and finding out someone from marketing has taken the last cup and not made another pot.
Persistence. It seems to me we write far too much code to deal with persistence, when this really should be a configuration problem.
In C++, enum-to-string.
In Ada, the language defines the 'image attribute of an enumerated type as a function that returns a string corresponding to the textual representation of an enumeration value.
C++ provides no such clean facility. It takes several lines of very arcane preprocessor macro black magic to get a rough equivalent.
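For comparison, a JVM-language sketch (Scala) where the textual representation comes for free, much like Ada's 'Image:

// Enumeration values know their own names, and withName round-trips
// from the string back to the value.
object Color extends Enumeration {
  val Red, Green, Blue = Value
}

object ColorDemo extends App {
  println(Color.Green)            // prints "Green"
  println(Color.withName("Blue")) // round-trip from the string
}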
For languages that provide operator overloading, provide automatically generated overloads for symmetric operations when only one operation is defined. For example, if the programmer provides an equality operator but not an inequality operator, the language could easily generate one. In C++, the same could be done for copy constructors and assignment operators.
I think that automatically generating one side of a symmetric operation would be nice. Of course, I would definitely want to be able to explicitly say "don't do that" when needed. I guess providing the implementation of both sides, with one of them being private and empty, could do the job.
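Scala actually implements a slice of this wish: != is defined once, on Any, as the negation of ==, so overriding equals gives you both operators. A sketch:

// Overriding equals (and hashCode, to keep the contract) automatically
// yields a consistent !=, which cannot be overridden independently.
final class Point(val x: Int, val y: Int) {
  override def equals(other: Any): Boolean = other match {
    case p: Point => p.x == x && p.y == y
    case _        => false
  }
  override def hashCode: Int = (x, y).hashCode
}

object PointDemo extends App {
  println(new Point(1, 2) == new Point(1, 2)) // true
  println(new Point(1, 2) != new Point(3, 4)) // true: derived for free
}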
Everything that LINQ does. C# has spoiled me and I now find it hard to do anything with collections in any other language. In Python I use list comprehensions a lot, but they are not quite as powerful as LINQ. I haven't found any other language that makes working with collections as easy as in C#.
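Scala's standard collections come close to this; in the sketch below, filter/sortBy/map compose like LINQ's Where/OrderBy/Select:

object QueryDemo extends App {
  case class Person(name: String, age: Int)
  val people = List(Person("Ann", 34), Person("Bob", 19), Person("Cid", 25))

  // Roughly: from p in people where p.age >= 21 orderby p.age select p.name
  val names = people.filter(_.age >= 21).sortBy(_.age).map(_.name)
  println(names) // List(Cid, Ann)
}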
In the Visual Studio environment, I want "Remove Unused Usings" to run across all files in the project. I find it a significant loss of time to have to manually open each individual file and invoke this operation on a per-file basis.
From a dynamic languages perspective, I'd like to see better tool support. Steve Yegge has a great post on this. For instance, there are lots of cases where a tool could look inside various code paths and determine whether the methods or functions exist, providing the equivalent of the compiler smoke test. Obviously, if you're using lots and lots of truly dynamic code, this won't work, but the fact is, you probably aren't, so it would be pretty nice if Python, for instance, would tell you that .ToLowerCase() wasn't a valid method at compile time, rather than waiting until you hit the else clause.
s = "a Mixed Case String"
if True:
s = s.lower()
else:
s = s.ToLowerCase()
Easy: initialize variables in C/C++ just as C# does. It would have saved me multiple sessions of debugging in other people's code.
Of course there would be a keyword for when you specifically do not want to initialize a variable:
noinit float myVal; // undefined
float my2ndVal; // 0.0f

Singletons: good design or a crutch? [closed]

Singletons are a hotly debated design pattern, so I am interested in what the Stack Overflow community thinks about them.
Please provide reasons for your opinions, not just "Singletons are for lazy programmers!"
Here is a fairly good article on the issue, although it is against the use of Singletons:
scientificninja.com: performant-singletons.
Does anyone have any other good articles on them? Maybe in support of Singletons?
In defense of singletons:
They are not as bad as globals because globals have no standard-enforced initialization order, and you could easily see nondeterministic bugs due to naive or unexpected dependency orders. Singletons (assuming they're allocated on the heap) are created after all globals, and in a very predictable place in the code.
They're very useful for resource-lazy / -caching systems such as an interface to a slow I/O device. If you intelligently build a singleton interface to a slow device, and no one ever calls it, you won't waste any time. If another piece of code calls it from multiple places, your singleton can optimize caching for both simultaneously, and avoid any double look-ups. You can also easily avoid any deadlock condition on the singleton-controlled resource.
Against singletons:
In C++, there's no nice way to auto-clean-up after singletons. There are work-arounds, and slightly hacky ways to do it, but there's just no simple, universal way to make sure your singleton's destructor is always called. This isn't so terrible memory-wise -- just think of it as more global variables, for this purpose. But it can be bad if your singleton allocates other resources (e.g. locks some files) and doesn't release them.
My own opinion:
I use singletons, but avoid them if there's a reasonable alternative. This has worked well for me so far, and I have found them to be testable, although slightly more work to test.
Google has a Singleton Detector for Java that I believe started out as a tool that must be run on all code produced at Google. The nutshell reason to remove Singletons:
because they can make testing difficult and hide problems with your design
For a more explicit explanation see 'Why Singletons Are Controversial' from Google.
A singleton is just a bunch of global variables in a fancy dress.
Global variables have their uses, as do singletons, but if you think you're doing something cool and useful with a singleton instead of using a yucky global variable (everyone knows globals are bad mmkay), you're unfortunately misled.
The purpose of a Singleton is to ensure a class has only one instance, and provide a global point of access to it. Most of the time the focus is on the single instance point. Imagine if it were called a Globalton. It would sound less appealing as this emphasizes the (usually) negative connotations of a global variable.
Most of the good arguments against singletons have to do with the difficulty they present in testing, as creating test doubles for them is not easy.
There are three pretty good blog posts about singletons by Miško Hevery on the Google Testing Blog:
Singletons are Pathological Liars
Where Have All the Singletons Gone?
Root Cause of Singletons
Singleton is not a horrible pattern, although it is misused a lot. I think this misuse is because it is one of the easier patterns, and most newcomers to the singleton are attracted by the global side effect.
Erich Gamma has said the singleton is a pattern he wishes hadn't been included in the GoF book, and that it's a bad design. I tend to disagree.
If the pattern is used in order to create a single instance of an object at any given time then the pattern is being used correctly. If the singleton is used in order to give a global effect, it is being used incorrectly.
Disadvantages:
You are coupling to one class throughout the code that calls the singleton
Creates a hassle with unit testing because it is difficult to replace the instance with a mock object
If the code needs to be refactored later on because of the need for more than one instance, it is more painful than if the singleton class were passed into the object (using an interface) that uses it
Advantages:
One instance of a class is represented at any given point in time.
By design you are enforcing this
Instance is created when it is needed
Global access is a side effect
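For reference, here are the mechanics being debated, sketched in Scala, where the language itself provides the pattern via object (the class names are invented for illustration):

// A Scala 'object' is a lazily initialized, thread-safe single instance
// with a global access point, i.e. the Singleton pattern built in.
object Config {
  val maxConnections: Int = 10
}

// The hand-rolled equivalent: private constructor plus lazy instance.
class Registry private () {
  def lookup(key: String): Option[String] = None // stub behavior
}
object Registry {
  lazy val instance: Registry = new Registry // created on first use
}

object SingletonDemo extends App {
  println(Config.maxConnections)                  // 10
  println(Registry.instance eq Registry.instance) // true: one instance
}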
Chicks dig me because I rarely use singleton and when I do it's typically something unusual. No, seriously, I love the singleton pattern. You know why? Because:
I'm lazy.
Nothing can go wrong.
Sure, the "experts" will throw around a bunch of talk about "unit testing" and "dependency injection" but that's all a load of dingo's kidneys. You say the singleton is hard to unit test? No problem! Just declare everything public and turn your class into a fun house of global goodness. You remember the show Highlander from the 1990's? The singleton is kind of like that because: A. It can never die; and B. There can be only one. So stop listening to all those DI weenies and implement your singleton with abandon. Here are some more good reasons...
Everybody is doing it.
The singleton pattern makes you invincible.
Singleton rhymes with "win" (or "fun" depending on your accent).
I think there is a great misunderstanding about the use of the Singleton pattern. Most of the comments here refer to it as a place to access global data. We need to be careful here - Singleton as a pattern is not for accessing globals.
Singleton should be used to have only one instance of the given class. Pattern Repository has great information on Singleton.
One of the colleagues I have worked with was very singleton-minded: whenever there was something that was kind of a manager or boss-like object, he would make it into a singleton, because he figured that there should be only one boss. And each time the system took on new requirements, it turned out there were perfectly valid reasons to allow multiple instances.
I would say that a singleton should be used only if the domain model dictates (not 'suggests') that there is one. All other cases are just accidentally single instances of a class.
I've been trying to think of a way to come to the poor singleton's rescue here, but I must admit it's hard. I've seen very few legitimate uses of them, and with the current drive towards dependency injection and unit testing they are just hard to use. They are definitely the "cargo cult" manifestation of programming with design patterns: I have worked with many programmers who have never cracked the GoF book, but they know 'Singleton' and thus they know 'Patterns'.
I do have to disagree with Orion, though. Most of the time I've seen singletons overused, it's not global variables in a dress, but more like global services (methods) in a dress. It's interesting to note that if you try to use singletons in SQL Server 2005 in safe mode through the CLR interface, the system will flag the code. The problem is that you have persistent data beyond any given transaction that may run; of course, if you make the instance variable read-only, you can get around the issue.
That issue led to a lot of rework for me one year.
Holy wars! OK, let me see... Last time I checked, the design police said:
Singletons are bad because they hinder automated testing: instances cannot be created afresh for each test case.
Instead, the logic should be in a class (A) that can be easily instantiated and tested. Another class (B) should be responsible for constraining creation. Single Responsibility Principle to the fore! It should be team knowledge that you're supposed to go via B to access A, a sort of team convention.
I concur, mostly.
Many applications require that there is only one instance of some class, so the pattern of having only one instance of a class is useful. But there are variations to how the pattern is implemented.
There is the static singleton, in which the class enforces that there can be only one instance of the class per process (in Java, actually one per ClassLoader). Another option is to create only one instance.
Static singletons are evil, a sort of global variable. They make testing harder, because it's not possible to execute the tests in full isolation. You need complicated setup and teardown code to clean the system between every test, and it's very easy to forget to clean some global state properly, which in turn may result in unspecified behaviour in tests.
Creating only one instance is good. You just create one instance when the program starts, and then pass a reference to that instance to all the other objects which need it. Dependency injection frameworks make this easy: you just configure the scope of the object, and the DI framework will take care of creating the instance and passing it to all who need it. For example, in Guice you would annotate the class with @Singleton, and the DI framework will create only one instance of the class (per application; you can have multiple applications running in the same JVM). This makes testing easy, because you can create a new instance of the class for each test and let the garbage collector destroy the instance when it is no longer used. No global state will leak from one test to another.
For more information:
The Clean Code Talks - "Global State and Singletons"
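A dependency-free sketch of that "create one instance and pass it around" style (the class names are invented; a DI container like Guice just automates this wiring):

// One instance exists because main creates exactly one, not because the
// class enforces it; tests can freely construct their own instances.
final class Database(url: String) {
  def query(sql: String): String = s"results of '$sql' from $url"
}

final class UserService(db: Database) { // dependency passed in
  def userCount: String = db.query("SELECT COUNT(*) FROM users")
}

object AppMain extends App {
  val db = new Database("jdbc:h2:mem:test") // the single instance
  val users = new UserService(db)           // a test could pass a fake
  println(users.userCount)
}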
Singleton as an implementation detail is fine. Singleton as an interface or as an access mechanism is a giant PITA.
A static method that takes no parameters returning an instance of an object is only slightly different from just using a global variable. If instead an object has a reference to the singleton object passed in, either via constructor or other method, then it doesn't matter how the singleton is actually created and the whole pattern turns out not to matter.
It was not just a bunch of variables in a fancy dress because it had dozens of responsibilities, like communicating with the persistence layer to save/retrieve data about the company, dealing with the employee and price collections, etc.
I must say you're not really describing something that should be a single object, and it's debatable whether any of those responsibilities, other than data serialization, should have been a singleton.
I can see at least 3 sets of classes that I would normally design this as, but I tend to favor smaller, simpler objects that do a narrow set of tasks very well. I know that this is not the nature of most programmers. (Yes, I work on 5000-line class monstrosities every day, and I have a special love for the 1200-line methods some people write.)
I think the point is that in most cases you don't need a singleton, and often you're just making your life harder.
The biggest problem with singletons is that they make unit testing hard, particularly when you want to run your tests in parallel but independently.
The second is that people often believe that lazy initialisation with double-checked locking is a good way to implement them.
Finally, unless your singletons are immutable, then they can easily become a performance problem when you try and scale your application up to run in multiple threads on multiple processors. Contended synchronization is expensive in most environments.
Singletons have their uses, but one must be careful in using and exposing them, because they are way too easy to abuse, difficult to truly unit test, and it is easy to create circular dependencies based on two singletons that accesses each other.
It is helpful, however, when you want to be sure that all your data is synchronized across multiple instances; configurations for a distributed application, for example, may rely on singletons to make sure that all connections use the same up-to-date set of data.
I find you have to be very careful about why you're deciding to use a singleton. As others have mentioned, it's essentially the same issue as using global variables. You must be very cautious and consider what you could be doing by using one.
It's very rare to use them and usually there is a better way to do things. I've run into situations where I've done something with a singleton and then had to sift through my code to take it out after I discovered how much worse it made things (or after I came up with a much better, more sane solution)
I've used singletons a bunch of times in conjunction with Spring and didn't consider it a crutch or lazy.
What this pattern allowed me to do was create a single class for a bunch of configuration-type values and then share the single (non-mutable) instance of that specific configuration instance between several users of my web application.
In my case, the singleton contained client configuration criteria - CSS file location, DB connection criteria, feature sets, etc. - specific to that client. These classes were instantiated and accessed through Spring and shared by users with the same configuration (i.e. two users from the same company). (I know there's a name for this type of application, but it's escaping me.)
I feel it would've been wasteful to create (then garbage collect) new instances of these "constant" objects for each user of the app.
I'm reading a lot about "Singleton", its problems, when to use it, etc., and these are my conclusions so far:
There is confusion between the classic implementation of Singleton and the real requirement: TO HAVE JUST ONE INSTANCE OF A CLASS!
It's generally badly implemented. If you want a unique instance, don't use the (anti)pattern of a static GetInstance() method returning a static object. This makes a class responsible for instantiating a single instance of itself as well as performing its own logic, which breaks the Single Responsibility Principle. Instead, this should be implemented by a factory class whose responsibility is to ensure that only one instance exists.
It's used in constructors because it's easy to use and doesn't have to be passed as a parameter. This should be resolved using dependency injection, which is a great pattern for achieving a good and testable object model.
Not TDD. If you do TDD, dependencies are extracted from the implementation because you want your tests to be easy to write. This makes your object model better. If you use TDD, you won't write a static GetInstance =). BTW, if you think in terms of objects with clear responsibilities instead of classes, you'll get the same effect =).
I really disagree with the "bunch of global variables in a fancy dress" idea. Singletons are really useful when used to solve the right problem. Let me give you a real example.
I once developed a small piece of software for a place I worked at, and some forms had to use some info about the company, its employees, services and prices. In its first version, the system kept loading that data from the database every time a form was opened. Of course, I soon realized this approach was not the best one.
So I created a singleton class, named Company, which encapsulated everything about the place and was completely filled with data by the time the system was opened.
It was not just a bunch of variables in a fancy dress because it had dozens of responsibilities, like communicating with the persistence layer to save/retrieve data about the company, dealing with the employee and price collections, etc.
Plus, it was a fixed, system-wide, easily accessible point from which to get the company data.
Singletons are very useful, and using them is not in and of itself an anti-pattern. However, they've gotten a bad reputation largely because they force any consuming code to acknowledge that they are a singleton in order to interact with them. That means if you ever need to "un-Singletonize" them, the impact on your codebase can be very significant.
Instead, I'd suggest hiding the singleton behind a factory. That way, if you need to alter the service's instantiation behavior in the future, you can just change the factory rather than every type that consumes the singleton.
Even better, use an inversion of control container! Most of them allow you to separate instantiation behavior from the implementation of your classes.
One scary thing about singletons in, for instance, Java is that in some cases you can end up with multiple instances of the same singleton. The JVM identifies a class uniquely based on two elements: the class's fully qualified name, and the classloader responsible for loading it.
That means the same class can be loaded by two classloaders unaware of each other, and different parts of your application would have different instances of this singleton that they interact with.
Write normal, testable, injectable objects and let Guice/Spring/whatever handle the instantiation. Seriously.
This applies even in the case of caches or whatever the natural use cases for singletons are. There's no need to repeat the horror of writing code to try to enforce one instance. Let your dependency injection framework handle it. (I recommend Guice for a lightweight DI container if you're not already using one).