changed object state after behavior that used the state - language-agnostic

I want to give my previous question a second chance since I think I have chosen a bad example.
The question is how I should deal with situations where an object can still change after I have used it to do something, and the new state is relevant for what is being done.
Example in pseudo-code:
class Book
    method 'addChapter': adds a chapter
class Person
    method 'readBook': reads an object of class Book
Now when I ask the person to read a book, at least in PHP where the object is passed by reference, the book object can still change. I could insert a chapter between chapters 3 and 4 while the person is already reading chapter 6. How can I deal with this kind of situation?

Maybe notify the person that the book has changed? You could do it with events (I'm not sure how events work in PHP). Another way is to implement the Observer/Observable pattern.
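If it helps, here is a rough C# sketch of the observer idea (the class and method names are made up for illustration; a PHP version would follow the same shape):

using System.Collections.Generic;

public interface IBookObserver
{
    void OnChapterInserted(int position);
}

public class Book
{
    private readonly List<string> chapters = new List<string>();
    private readonly List<IBookObserver> observers = new List<IBookObserver>();

    public void Subscribe(IBookObserver observer) { observers.Add(observer); }

    public void InsertChapter(int position, string text)
    {
        chapters.Insert(position, text);
        // Tell every reader that the numbering from 'position' onwards has shifted.
        foreach (var observer in observers)
            observer.OnChapterInserted(position);
    }
}

public class Person : IBookObserver
{
    private int currentChapter;

    public void OnChapterInserted(int position)
    {
        // If a chapter was inserted before our current position, keep our place consistent.
        if (position <= currentChapter)
            currentChapter++;
    }
}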

Any of the above answers can lead to a good solution depending on your business demands.
You asked "How should I deal with situations...", and it depends:
Is adding a chapter after you have already shipped a book to the user legal? If not, you should throw an exception (PHP does support exceptions; in any case, you should treat it as an error situation).
Another solution would be to make sure that you expose an object that is already whole and does not expect changes to be made to it. This may be a valid solution, especially if your decision to enable this kind of 'streaming' is performance oriented but you don't have real evidence that this section is a performance bottleneck.
Now let's say the addition of a new chapter is legal.
Do you want the change to be known to existing client or only to new clients?
If the former, you should implement some kind of notification logic (one of the suggested forms is a publisher/subscriber pattern, but there are others).
If the latter, you should make your book object immutable, so mutating operations will not be seen by existing clients; instead they would create an entirely new book that would be passed to new clients (Persons).
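A rough sketch of the immutable variant, in C# with invented names:

using System.Collections.Generic;
using System.Linq;

public sealed class Book
{
    private readonly IReadOnlyList<string> chapters;

    public Book(IEnumerable<string> chapters)
    {
        this.chapters = chapters.ToList();
    }

    // "Mutating" operations return a new Book; anyone still reading the old one is unaffected.
    public Book WithChapterInserted(int position, string text)
    {
        var copy = chapters.ToList();
        copy.Insert(position, text);
        return new Book(copy);
    }

    public string GetChapter(int index)
    {
        return chapters[index];
    }
}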
I could go on and on, but I suggest that next time you elaborate more on the exact problem you are trying to solve, since, as you can see, the same problem can have different solutions in different domains.

It seems to me you are attempting to perform concurrent tasks. You might want to consider serializing operations on your objects instead, certainly in the case of PHP.

Related

Single Responsibility Principle Core Understanding

In brushing up on the SRP I read this document which I located via Uncle Bob's page on principles of OOD. I find the following passage puzzling and somewhat at odds with the rest of the document:
"If, on the other hand, the application is not changing in ways that cause the the two responsibilities to change at different times, then there is no need to separate them. Indeed, separating them would smell of Needless Complexity. There is a corollary here. An axis of change is only an axis of change if the changes actually occur. It is not wise to apply the SRP, or any other principle for that matter, if there is no symptom."
While I understand that the answer to many software development questions is "it depends", principles like the SRP appear to be almost universally beneficial and something to implement as a matter of course. The SRP itself affords code a high adaptability to future changes in requirements. Isn't the point to separate out responsibilities from the get-go to avoid struggling with highly coupled code and cascading changes later on?
I would really appreciate some clarification on this to make sure my understanding of this core principle is correct. Thanks in advance!
From my humble understanding, in the Modem example that is presented here, it might be possible that the responsibilities of the modem (Connection and Data Exchange) will change as one.
You have two possibilities here:
When the protocol changes, it is possible that only the connection part changes, or only the data exchange part changes. In this case you should have two interfaces, because a change of protocol in the data exchange does not imply a change of protocol in the connection.
When the protocol changes, it will always change both the connection part and the data exchange part. In that case, you don't need two interfaces, because every time you have to rewrite the connection part, you are sure that the data exchange will change as well. In that case, you have two responsibilities on the same axis of change (which is the protocol handled by the modem), so you can leave them inside a single interface.
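To make the two options concrete, here is a rough sketch (the interface names are my own; the operations follow the modem example from the paper):

// Option 1: connection and data exchange can change independently, so split them.
public interface IConnection
{
    void Dial(string number);
    void Hangup();
}

public interface IDataChannel
{
    void Send(char c);
    char Receive();
}

// Option 2: both responsibilities always change together (one axis of change),
// so a single interface is not a violation of the SRP.
public interface IModem
{
    void Dial(string number);
    void Hangup();
    void Send(char c);
    char Receive();
}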
The key to this statement is "not changing in ways that cause the two responsibilities to change at different times". Let's say for the sake of argument you have a PaymentLogger and a Payment class. Every time you create a new PaymentType (CreditCard, Cash, Paypal, etc.) you need to update the PaymentLogger to log actions specific to those Payments. Instead of splitting out a PaymentLogger class, you could give the Payment class a method called Log which does whatever is specific to itself.
In this case it could be that the act of recording actions should be built into the class itself, since creating a new Payment also requires creating a new PaymentLogger. It's a responsibility that should have been part of Payment all along.
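For example, a sketch of the combined shape (the class names follow the answer above; the bodies are made up):

using System;

public abstract class Payment
{
    public abstract void Process();

    // Logging lives on the payment itself because, in this scenario,
    // a new payment type always means new logging anyway.
    public virtual void Log()
    {
        Console.WriteLine("Processed a " + GetType().Name);
    }
}

public class CreditCardPayment : Payment
{
    public override void Process() { /* charge the card */ }

    public override void Log()
    {
        Console.WriteLine("Processed a credit card payment (card number masked)");
    }
}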

Polymer 1.0: Single or multiple IronAjax?

I am wondering if it is wise to use a single IronAjax for the whole app? Otherwise I run into a situation where almost every view requires an IronAjax. I mean, it's not hurting me to create multiple IronAjax elements, but naming them, and naming their success and error handlers, needs to be unique; otherwise they can conflict with any other IronAjax handler defined within the tree. I ran into this situation before.
So I am not looking for "what is possible", I am looking for best practices.
A single IronAjax would indeed be more efficient, as long as the code in the error and response handlers does not need to be too different based on the URL retrieved. This question is mainly a philosophical argument and does not have a definitive answer, since both ways are possible and the direction depends on your comfort level with supporting the application.

How do you refactor a class that is constantly being edited?

Over the course of time, my team has created a central class that handles an agglomeration of responsibilities and runs to over 8,000 lines, all of it hand-written, not auto-generated.
The mandate has come down. We need to refactor the monster class. The biggest part of the plan is to define categories of functionality into their own classes with a has-a relationship with the monster class.
That means that a lot of references that currently read like this:
var monster = new orMonster();
var timeToOpen = monster.OpeningTime.Subtract(DateTime.Now);
will soon read like this:
var monster = new Monster();
var timeToOpen = monster.TimeKeeper.OpeningTime.Subtract(DateTime.Now);
The question is: How on Earth do we coordinate such a change? References to "orMonster" litter every single business class. Some methods are called in literally thousands of places in the code. It's guaranteed that, any time we make such a change, someone else (probably multiple someone elses) on the team will have code checked out that calls the .OpeningTime property.
How do you coordinate such a large scale change without productivity grinding to a halt?
You should make the old method call the new method. Then over time change the references to the old method to call the new method instead. Once all the client references are changed, you can delete the old method.
For more information, see Move Method in Martin Fowler's classic, Refactoring.
One thing you can do is to temporarily leave proxy methods in the monster class that will delegate to the new method. After a week or so, once you are sure all code is using the new method, then you can safely remove the proxy.
I've handled this before by going ahead and refactoring the code, but then adding methods that match the old signature that forward the calls to the new method. If you add the "Obsolete" attribute to these temporary methods, your code will still build with both the old method calls and the new method calls. Then over time you can go back through and upgrade the code that is calling the old method. The difference here is that you'll get "Warnings" during the build to help you find all of the code that needs upgrading.
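Something along these lines (a rough sketch only; the member names follow the question's example):

using System;

public class Monster
{
    public TimeKeeper TimeKeeper { get; private set; }

    public Monster()
    {
        TimeKeeper = new TimeKeeper();
    }

    // Temporary proxy so existing callers keep compiling; the attribute makes the
    // compiler warn them and point at the new location.
    [Obsolete("Use Monster.TimeKeeper.OpeningTime instead.")]
    public DateTime OpeningTime
    {
        get { return TimeKeeper.OpeningTime; }
    }
}

public class TimeKeeper
{
    public DateTime OpeningTime { get; set; }
}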
I'm not sure what language you're using but in .Net you can create compiler warnings which will allow you to leave the old references for a time so that they will function as expected but place a warning for your other developers to see.
http://dotnettipoftheday.org/tips/ObsoleteAttribute.aspx
Develop your changes in a branch. Break out a subset of code to a new class, make changes across the client base, test thoroughly, and then merge back.
That concentrates the breakage to when you merge — not the entire development cycle.
Combine this with Patrick's suggestion to have the monster call the small monsters. That'll let you easily revert if your merged client code breaks changes to that client. As Patrick says, you'll be able to remove the monster's methods (now stubs) once you prove nobody's using it.
I also echo several posters' advice to expose the broken out classes directly — not via the monster. Why apply only half a cure? With the same effort, you could apply a complete cure.
Finally: write unit tests. Write lots of unit tests. Oh, boy, do you need unit tests to safely pull this one off. Did I mention you need unit tests?
Keep the old method in place and forward to the new method (as others have said) but also send a log message in the forwarding method to remind yourself to remove it.
You could just add a comment but that's too easy to miss.
Suggest using a tool such as nDepend to identify all of the references to the class methods. The output from nDepend can be used to give you a better idea about how to group the methods.
var monster = new Monster();
var timeToOpen = monster.TimeKeeper.OpeningTime.Subtract(DateTime.Now);
I'm not sure that divvying it up and just making portions of it publicly available is any better. That's violating the Law of Demeter and can lead to NullReference pain.
I'd suggest exposing timekeeper to people without involving the monster.
If anything you'd be well off analysing the API and seeing what you can cut and encapsulate within monster. Certainly giving monster toys to play with as opposed to making monster do all of the work itself is a good call. The main effort is defining the toys monster needs to simplify his work.
Don't refactor it.
Start over and follow the Law of Demeter. Create a second monster class and start from scratch. When the second monster class is finished and working, then replace occurrences of the first monster. Swap it out. Hopefully they share an interface, or you can make that happen.
And instead of this: "monster.TimeKeeper.OpeningTime.Subtract(DateTime.Now)"
Do this: monster.SubtractOpeningTime(DateTime.Now). Don't kill yourself with dot-notation (hence the Demeter reference).
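In code, the suggestion looks roughly like this (TimeKeeper stays an internal detail; SubtractOpeningTime is the name proposed above, and the body is an assumption):

using System;

public class Monster
{
    private readonly TimeKeeper timeKeeper = new TimeKeeper();

    // Callers talk to the monster only; they never reach through to TimeKeeper.
    public TimeSpan SubtractOpeningTime(DateTime now)
    {
        return timeKeeper.OpeningTime.Subtract(now);
    }
}

public class TimeKeeper
{
    public DateTime OpeningTime { get; set; }
}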
Several people have provided good answers regarding the orchestration of the refactor itself. That's key. But you also asked about coordinating the changes between multiple people (which I think was the crux of your question). What source control are you using? Anything like CVS, SVN, etc. can handle incoming changes from multiple developers at once. The key to making it go smoothly is that each person must make their commits granular and atomic, and each developer should pull other people's commits often.
I would look first at using partial classes to split the single monster class over many files, grouping methods into categories.
You will need to stop anyone from editing the monster class while you split it into the files.
From then on you are likely to get fewer merge conflicts, as there will be fewer edits to each file. You can then change each method in the monster class (one method per check-in) to call your new classes.
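A sketch of the partial-class split (the file names and grouping are only an example):

// Monster.TimeKeeping.cs
using System;

public partial class Monster
{
    public DateTime OpeningTime { get; set; }
}

// Monster.Feeding.cs
public partial class Monster
{
    public void Feed(string food)
    {
        // ...
    }
}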
Such a huge class is really an issue. Since it grew so big and nobody felt uncomfortable, there must be something wrong with project policies. I'd say you should split into pairs and do pair programming. Create a branch for every pair of programmers. Work for 1-2 days on refactoring. Compare your results. This will help you avoid the situation when the refactoring would go from the start into the wrong direction and finally that would lead to the need of rewriting the monster class from scratch.

Most common examples of misuse of singleton class

When should you NOT use a singleton class although it might be very tempting to do so? It would be very nice if we had a list of most common instances of 'singletonitis' that we should take care to avoid.
Do not use a singleton for something that might evolve into a multipliable resource.
This probably sounds silly, but if you declare something a singleton you're making a very strong statement that it is absolutely unique. You're building code around it, more and more. And when you then find out after thousands of lines of code that it is not a singleton at all, you have a huge amount of work in front of you because all the other objects expect "the" sacred object of class WizBang to be a singleton.
Typical example: "There is only one database connection this application has, thus it is a singleton." - Bad idea. You may want to have several connections in the future. Better to create a pool of database connections and populate it with just one instance. It acts like a singleton, but all the other code goes through the pool, which leaves room to grow.
EDIT: I understand that theoretically you can extend a singleton into several objects. Yet there is no real life cycle (like pooling/unpooling) which means there is no real ownership of objects that have been handed out, i.e. the now multi-singleton would have to be stateless to be used simultaneously by different methods and threads.
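A sketch of the pool idea, assuming ADO.NET-style connections (ConnectionPool itself is an invented class for illustration, not a library type):

using System.Collections.Generic;
using System.Data;

public class ConnectionPool
{
    private readonly Queue<IDbConnection> available = new Queue<IDbConnection>();

    public ConnectionPool(IEnumerable<IDbConnection> connections)
    {
        foreach (var connection in connections)
            available.Enqueue(connection);
    }

    // Today the pool happens to hold exactly one connection; callers neither know nor care.
    public IDbConnection Acquire()
    {
        return available.Dequeue();
    }

    public void Release(IDbConnection connection)
    {
        available.Enqueue(connection);
    }
}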
Well, singletons for the most part are just making things static anyway. So you're either in effect making data global, and we all know global variables are bad, or you're writing static methods, and that's not very OO now, is it?
Here is a more detailed rant on why singletons are bad, by Steve Yegge. Basically you shouldn't use singletons in almost all cases, you can't really know that it's never going to be needed in more than one place.
I know many have answered with "when you have more than one", etc.
Since the original poster wanted a list of cases when you shouldn't use Singletons (rather than the top reason), I'll chime in with:
Whenever you're using it because you're not allowed to use a global!
The number of times I've had a junior engineer who has used a Singleton because they knew that I didn't accept globals in code-reviews. They often seem shocked when I point out that all they did was replace a global with a Singleton pattern and they still just have a global!
Here is a rant by my friend Alex Miller... It does not exactly enumerate "when you should NOT use a singleton" but it is a comprehensive, excellent post and argues that one should only use a singleton in rare instances, if at all.
I'm guilty of a big one a few years back (thankfully I've learned my lesson since then).
What happened is that I came on board a desktop app project that had converted to .Net from VB6, and was a real mess. Things like 40-page (printed) functions and no real class structure. I built a class to encapsulate access to the database. Not a real data tier (yet), just a base class that a real data tier could use. Somewhere I got the bright idea to make this class a singleton. It worked okay for a year or so, and then we needed to build a web interface for the app as well. The singleton ended up being a huge bottleneck for the database, since all web users had to share the same connection. Again... lesson learned.
Looking back, it probably actually was the right choice for a short while, since it forced the other developers to be more disciplined about using it and made them aware of scoping issues not previously a problem in the VB6 world. But I should have changed it back after a few weeks before we had too much built up around it.
Singletons are virtually always a bad idea and generally useless/redundant since they are just a very limited simplification of a decent pattern.
Look up how Dependency Injection works. It solves the same problems, but in a much more useful way; in fact, you'll find it applies to many more parts of your design.
Although you can find DI libraries out there, you can also roll a basic one yourself; it's pretty easy.
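As a taste of what that looks like, a minimal constructor-injection sketch (all names invented for the example):

using System;

public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}

public class ReportGenerator
{
    private readonly IClock clock;

    // The dependency is handed in instead of being fetched from a global singleton,
    // so a test can pass in a fake clock.
    public ReportGenerator(IClock clock)
    {
        this.clock = clock;
    }

    public string Generate()
    {
        return "Report generated at " + clock.Now;
    }
}

// Composition root, the one place that decides which implementation is used:
// var generator = new ReportGenerator(new SystemClock());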
I try to have only one singleton - an inversion of control / service locator object.
IService service = IoC.GetImplementationOf<IService>();
One of the things that tends to make it a nightmare is if it contains modifiable global state. I worked on a project where Singletons were used all over the place for things that should have been solved in a completely different way (passing in strategies, etc.). The "de-singletonification" was in some cases a major rewrite of parts of the system. I would argue that in the bigger part of the cases when people use a Singleton, it's just wrong because it looks nice at first, but it turns into a problem, especially in testing.
When you have multiple applications running in the same JVM.
A classic static singleton is unique per classloader, not per application. If multiple applications running in the same JVM load the class through a shared (parent) classloader, then even though each thread or application seems to be creating its own singleton object, they are all using the same one.
Sometimes, you assume there will only be one of a thing, then you turn out to be wrong.
Example, a database class. You assume you will only ever connect to your app's database.
// It's our database! We'll never need another
class Database
{
};
But wait! Your boss says, hook up to some other guy's database. Say you want to add phpBB to the website and would like to poke its database to integrate some of its functionality. Should we make a new singleton or another instance of Database? Most people agree that a new instance of the same class is preferred; there is no code duplication.
You'd rather have
Database ourDb;
Database otherDb;
than to copy-paste Database and make:
// Copy-pasted from our home-grown database.
class OtherGuysDatabase
{
};
The slippery slope here is that you might stop thinking about making new instances of classes and instead begin thinking it's OK to have one type for every instance.
In the case of a connection (for instance), it makes sense that you wouldn't want to make the connection itself a singleton, you might need four connections, or you may need to destroy and recreate the connection a number of times.
But why wouldn't you access all of your connections through a single interface (i.e. a connection manager)?
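For instance, a sketch of such a connection manager (the names are illustrative and not tied to a particular data access library):

using System;
using System.Collections.Generic;
using System.Data;

public class ConnectionManager
{
    private readonly Dictionary<string, Func<IDbConnection>> factories =
        new Dictionary<string, Func<IDbConnection>>();

    public void Register(string name, Func<IDbConnection> factory)
    {
        factories[name] = factory;
    }

    // Each caller gets its own connection; the manager only knows how to build them.
    public IDbConnection Open(string name)
    {
        var connection = factories[name]();
        connection.Open();
        return connection;
    }
}

// manager.Register("ours", () => new SqlConnection(ourConnectionString));
// manager.Register("phpbb", () => new MySqlConnection(phpbbConnectionString));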

Best use pattern for a DataContext

What's the best lifetime model for a DataContext? Should I just create a new one whenever I need it (aka, function level), should I keep one available in each class that would use it (class level), or should I create a static class with a static DataContext (app-domain level)? Are there any considered best practices on this?
You pretty much need to keep the same data context available throughout the lifetime of the operations you want to perform if you're ever going to be storing changes that are to be .SubmitChanges()'d later; otherwise you will lose those changes.
If you're just querying stuff then it's fine to create them as needed, but then if later you want to .SubmitChanges() you'll have to refactor your code a lot, so you may as well adopt the pattern of effectively keeping the datacontext global throughout your app from the beginning.
Note that the data context is disconnected. The connection is only made when the query data is enumerated (not when you first run the query; it's a 'lazy' data type, so it only provides data when it's needed), and then closed immediately afterwards. On .SubmitChanges() the connection is opened to submit the changes, then closed immediately afterwards. So don't think keeping the datacontext around keeps a connection open; it doesn't (you can hook the StateChange event of the connection to confirm this for yourself, that's how I'm sure).
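To illustrate the point about keeping one context across the whole operation, a minimal LINQ to SQL sketch (NorthwindDataContext and Customer are just example names from a typical generated model):

using System.Linq;

public class CustomerService
{
    public void RenameContact(string connectionString)
    {
        using (var db = new NorthwindDataContext(connectionString))
        {
            // Nothing hits the database yet; the query is lazy.
            var customer = db.Customers.Single(c => c.CustomerID == "ALFKI");

            customer.ContactName = "New contact";

            // The pending change is tracked by this DataContext instance;
            // calling SubmitChanges on a different instance would not see it.
            db.SubmitChanges();
        }
    }
}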
There is a great article over at Rick Strahl's Blog which covers this topic in depth, far more than my answer here provides!
I think Jeff Atwood talked about this in the Herding Code podcast, when he was questioned about the exact same thing. Listen to it towards the last 15-20 minutes or so.
I think in SO, the DataContext is created in the controller class. I'm not sure about a lot of the details here, but that's what it looked like.