What is the difference between LSP and OCP? - solid-principles

I have been trying to break down the differences between the Open/Closed Principle and the Liskov Substitution Principle, but the best and most common examples of each use the exact same problem: finding the area of a shape class.
They use slightly different means, but effectively solve the same problem with the same solution.
As these are both parts of SOLID, I'm really trying to find a reason to support why both are called out.
I'm looking for an explanation that doesn't work for both.
Thanks.

LSP:
Consumers target an abstraction (e.g. an interface) and should not need to know which concrete implementation stands behind it. For example, a client (e.g. a DocumentProcessor class) holds a dependency on IDocumentStore. If in V1 you gave it a SqlServerDocumentStore instance, and then in V2 you gave it a FileSystemDocumentStore, the client (DocumentProcessor) should work without modification. This can be achieved by making sure the contract of IDocumentStore is well defined and that DocumentProcessor, SqlServerDocumentStore and FileSystemDocumentStore all abide by this contract.
The contract means much more than an interface. Having two classes implement the same interface does not mean they abide by the same contract (although they should).
For example, do both implementations support saving documents that are smaller than or equal to 20MB? Or does one of them only support documents of at most 10MB? If it is part of the contract that an implementation must support 20MB documents, then all implementations should support this.
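As a rough sketch of what "contract" means beyond the interface signature (the member names and the DocumentTooLargeException type below are invented for illustration), the guarantee can be documented on the abstraction and every implementation has to honor it:
public interface IDocumentStore
{
    // Contract: implementations must accept any document up to 20MB
    // and must throw DocumentTooLargeException beyond that, never silently truncate.
    void Save(Document document);
}

public class FileSystemDocumentStore : IDocumentStore
{
    public void Save(Document document)
    {
        // Honouring the documented limit keeps this implementation substitutable
        // for SqlServerDocumentStore from DocumentProcessor's point of view.
        if (document.SizeInBytes > 20 * 1024 * 1024)
            throw new DocumentTooLargeException();

        // ... write the document to disk
    }
}
An implementation that only accepted 10MB documents would still compile against IDocumentStore, but it would break DocumentProcessor; that is the LSP violation.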
OCP:
We should avoid modifying a unit of composition (e.g. a class or a function) after we release it. One way to achieve this is to make the units parameterized. For example, if you have a function (say, ProcessImages) that reads images from the file system, compresses them and then sends them to some web service, you can parameterize this function to accept other functions that are responsible for (1) reading images, (2) compressing them, (3) sending them.
E.g. (in C#):
public static void ProcessImages(
    Func<Image[]> getImages,
    Func<Image, CompressedImage> compressImage,
    Action<CompressedImage> sendImage)
{
    // ... Orchestrate the operation here
}
And, in the Composition Root:
Action processImages = () => ProcessImages(ReadImages, CompressImage, SendImage);
Where ReadImages, CompressImage, SendImage are themselves functions.
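For completeness, those composed functions might look something like this; the bodies below are placeholders, only the signatures matter so that the composition above type-checks:
public static Image[] ReadImages()
{
    // ... read the images from the file system
    throw new NotImplementedException();
}

public static CompressedImage CompressImage(Image image)
{
    // ... compress a single image
    throw new NotImplementedException();
}

public static void SendImage(CompressedImage image)
{
    // ... send the compressed image to the web service
    throw new NotImplementedException();
}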
This way, if you want to change how the images are compressed, you will not modify the ProcessImages function. Instead you will create a new compression function (say CompressImageInADifferentWay) and then compose the ProcessImages function in the Composition Root like this:
Action processImages =
    () => ProcessImages(ReadImages, CompressImageInADifferentWay, SendImage);
If you apply the OCP in a perfect way, only the Composition Root itself will change.
LSP allows us to achieve OCP. For example, because CompressImage and CompressImageInADifferentWay abide by some contract that ProcessImages knows about, we can replace CompressImage with CompressImageInADifferentWay without modifying ProcessImages.
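The flip side is easy to sketch as well: an implementation can satisfy the signature and still break the contract, and then substitution is no longer safe (the size check and the SizeInBytes property below are invented for illustration):
// Same signature as CompressImage, but it silently refuses large images.
// If the contract ProcessImages relies on is "every image gets compressed",
// callers would now need null checks, i.e. ProcessImages must be modified
// and the OCP benefit is lost.
public static CompressedImage CompressImageBadly(Image image)
{
    if (image.SizeInBytes > 50 * 1024 * 1024)
        return null; // contract violation

    // ... compress the image
    throw new NotImplementedException();
}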

Related

Accessing regmap RegFields

I am trying to find a clean way to access the regmap that is used with *RegisterNode for creating documentation and testing files. The TLRegisterNode has methods for generating the JSON through some Annotations. These are added in the regmap method to the ElaborationArtefacts object. Other protocols don't seem to have these annotations.
Is there any way to iterate over the "regmap" Register Fields post-elaboration or during it?
I cannot just access the regmap as it's not really a val/var, since it's a method. I can't quite figure out where this information is being stored. I don't really believe it's actually "storing" any information so much as it is simply creating the hardware to attach the specified logic to the RegisterNode-based logic.
The JSON output is actually fine for me, as I could just write a post-processing script to convert JSON to my required formats, but I'm wondering if I can access this information OR if I could add a custom function call at the end. I cannot extend the case class *RegisterNode, but I'm not sure if it's possible to add custom functions to run at the end of the regmap method.
Here is something I threw together quickly:
// in *RegisterRouter.scala
def customregmap(customFunc: Seq[RegField.Map] => Unit, mapping: RegField.Map*) = {
  regmap(mapping: _*)
  customFunc(mapping)
}

def regmap(mapping: RegField.Map*) = {
  // normal stuff
}
A user could then create a custom function to run and pass it to the regmap or to the RegisterRouter
def myFunc(mapping: Seq[RegField.Map]): Unit = {
  println("I'm doing my custom function for regmap!")
}
// ...
node.customregmap(myFunc,
  0x0 -> coreControlRegFields,
  0x4 -> fdControlRegFields,
  0x8 -> fdControl2RegFields,
)
This is just a quick example I have. I believe what would be better, if something like this is possible, would be to have a Seq of functions that could be added to the RegisterNode and that are run at the end of the regmap method, similar to how TLRegisterNode currently works. That way a user could add an arbitrary number of them and still use the regmap call.
Background (not directly part of question):
I have a unified register script that I have built over the years in which I describe the registers for a particular IP. It works very similarly to the RegField/node.regmap, except it obviously doesn't know about diplomacy and the like. It will generate the Verilog, but also a variety of files for DV (basic `defines for simple Verilog simulations and more complex uvm_reg_block defines, also with the ability to describe multiples of the IP for a subsystem, all the way up to an SoC level). It will also print out C header files for SW and Sphinx reStructuredText for documentation.
Diplomacy actually solves one of the main issues I've been dealing with, so I'm obviously trying to push most of my newer designs to Chisel/Diplomacy.
I ended up solving this by creating my own RegisterNode which is the same as the rocketchip RegisterNodes except that I use a different Elaboration Artifact to grab the info and store it for later.

The best solution for background application logic?

I have 2 different types of components. They both only use and contain an HTML5 canvas element, but need to show different types of data on a chart:
Component A (Only ever 1 of these)
Component B (0 to 4 of these)
Both components need the dateTime of the first data-entry from the two data-sets, but the dateTime of the last entry comes from their own respective data-sets.
Component A needs its first entry date from Component B.
Currently I do it like this:
Component B has the method that finds the date limits from its own dataset. Using an Observer pattern & Subjects, I broadcast the returned dates through a service and into Component A.
The problem with this, though, is that the coupling suddenly becomes pretty tight. I can't initialize Component A first, because it needs B to do its calculation first. Both components ideally should initialize and show/share their data simultaneously, and continue to do so. (E.g. if a user scrolls in one chart it should scroll all other components too, and so on.)
This is why I wanted an extra layer added on top of these components. A controller if you will.
I can't figure out what's best though:
A shared service that can take external data as input?
A container component? (Transclusion)
Another component, Component C, that A & B are children of?
As I'm still new to Angular 2, it's hard to tell which approach is best for future maintenance/development.
I'm being drawn towards creating another normal component as a parent, and have this component send and receive data to/from its children (A & B) as necessary.
I'm also uncertain as to what's "best practice" and if you can just use a component like an empty 'logic shell'. I've tried reading here and there, and I've found a lot, but I can't seem to get an exact answer to my question. It'll take time before I can comprehend all this knowledge and answer it myself, so I'm hoping someone could give me a helping nudge, thanks.
PS: I should add that my Angular application will be a child component in a larger application, and will get its data from some other parent component.
Why don't you put your logic into services?
You can also set up service hierarchies by injecting sub-services into a parent service.
If your logic is not UI / interaction related, you should put it into reusable services. If your logic is UI related, you could set up a parent component acting as a mediator between A and B (acting on their respective input/output parameters).
Whatever you do, it is a good idea to keep concerns separate.
Neither A nor B should need to care about other components needing their output. Angular has Input/Output parameters for that.
Don't put some generic datetime calculation stuff into components. Make it reusable through services.
Keep coupling loose by introducing interfaces and injections.
Update:
Services should only use injectable constructor parameters, so you should pass your input using methods (or setters, but that is less expressive).
To pass arbitrary JSON objects, you could type the input parameter as any. However, if your JSON follows a certain structure, you could define an interface.
public doStuff(input: any): any { }
or
interface IMyDataContract {
  dateField: string;
}
public doStuff(input: IMyDataContract): any { }

Custom JSON renderer in AEM/Sling

I've been playing around with this for a while now, and I think I've almost cracked it, but I am still not fully satisfied with my solution.
So, what I want to do is have a piece of content, a list of items, which would have two views: the standard HTML one, so people can view and edit it, and then a JSON endpoint for other services to consume.
First I thought it was a simple matter of creating two JSP scripts to render the content:
/apps/my-stuff/components/list-page/html.jsp
/apps/my-stuff/components/list-page/json.jsp
However the Apache Sling DefaultServlet seems to be rather ignorant of the json.jsp script.
As a second attempt, I created another script, in /apps/foundation/components/primary/cq/Page/json.jsp, which does get called and renders the page as I expected. However, there are a couple of worries/questions regarding this:
First of all, why is this being honoured by the system, and not the one in the more specific place?
The documentation states that, to find the appropriate renderer, first sling:resourceType will be inspected, then sling:resourceSuperType, and only as a fallback will jcr:primaryType be checked. However, I think the order is rather: jcr:primaryType, then the DefaultServlet, and then all the other things.
Most worryingly, however, I have to admit this is rather generic, so it'll break all the content with jcr:primaryType = Page, and that could have some side effects.
A solution could be creating a new type, ListPage extends Page, and then creating a renderer for that in /apps/foundation.... However, I have a bad feeling that this might introduce other problems.
So my question is twofold: what is the proper way of doing this, and/or what am I missing about the way URL -> script resolution works in AEM/Sling? (Because it seems to be slightly different than described here and here.)
(Obviously I am trying to keep the default JSON renderer for other pages, as that might be needed for other things in the page. I am not even sure, changing this one page won't break the UI for this particular page...)
However the Apache Sling DefaultServlet seems to be rather ignorant of the json.jsp script.
Have you tried renaming your JSP like so: "list-page.json.jsp"?
If you're using AEM 6.3, you should look at Sling Model Exporters. They allow you to automatically register a servlet against your Sling Model (which you can create to model your list content). That servlet can generate a JSON representation of the model for you using Jackson.
If you're not using AEM 6.3, I would suggest you create a servlet registered against your resource type and use an additional selector.
@SlingServlet(
    selectors = "json",
    resourceType = "my-stuff/components/list-page",
    methods = "GET")
More information on Sling Servlets can be found here.

Is it possible to determine a method's callback if its type is void and its "return;" had skipped its execution?

I have a method which works like this:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;
    // start deployment process
}
The userInput is run through several individual checks inside the deploy method. Now I'd like to JUnit-test whether the user input checks behave correctly (i.e. whether the deployment process would start or not, depending on right or wrong user input). So I need to test this with right and wrong user inputs. I could do this by checking whether anything has been deployed at all, but in this case that is very cumbersome.
So I wonder if it's somehow possible to know, in the corresponding JUnit test, whether the deploy method was aborted or not (due to wrong user input)? (By the way, changing the deploy method is not an option.)
As you describe your problem, you can only check your method for side effects, or if it throws an Exception. The easiest way to do this is using a mocking framework like JMockit or Mockito. You have to mock the first method after the checking of user input has finished:
public void deploy(UserInput userInput) {
    if (userInput is wrong)
        return;
    // start deployment process
    startDeploy(); // mock this method
}
You can also extend the class under test, and override startDeploy() if it's possible. This would avoid having to use a mocking framework.
Alternative - Integration tests
It sounds like the deploy method is large and complex, and deals with files, file systems, external services (ftp), etc.
It is sometimes easier in the long run to just accept that you're dealing with external systems, and test these external systems. For instance, if deploy() copies a file to directory x, test that the file exists in the target directory. I don't know how complex deploy is, but often mocking these methods can be as hard as just testing the actual behaviour. This may be cumbersome, but like most tests, it would allow you refactor your code so it is simpler to understand. If your goal is refactoring, then in my experience, it's easier to refactor if you're testing actual behaviour rather than mocking.
You could create a UserInput stub / mock with the correct expectations and verify that only the expected calls (and no more) were made.
However, from a design point of view, if you were able to split your validation and the deployment process into separate classes - then your code can be as simple as:
if (_validator.isValid(userInput)) {
    _deployer.deploy(userInput);
}
This way you can easily test that if the validator returns false the deployer is never called (using a mocking framework, such as jMock) and that it is called if the validator returns true.
It will also enable you to test your validation and deployment code separately, avoiding the issue you're currently having.

How to separate data validation from my simple domain objects (POCOs)?

This question is language agnostic, but I am a C# guy, so I use the term POCO to mean an object that only performs data storage, usually via getter and setter properties.
I just reworked my Domain Model to be super-duper POCO and am left with a couple of concerns regarding how to ensure that the property values make sense within the domain.
For example, the EndDate of a Service should not exceed the EndDate of the Contract that the Service is under. However, it seems like a violation of SOLID to put the check into the Service.EndDate setter, not to mention that, as the number of validations that need to be done grows, my POCO classes will become cluttered.
I have some solutions (which I will post as answers), but they have their disadvantages, and I am wondering what some favorite approaches to solving this dilemma are.
I think you're starting off with a bad assumption, i.e., that you should have objects that do nothing but store data and have no methods but accessors. The whole point of having objects is to encapsulate data and behaviors. If you have a thing that's basically just a struct, what behaviors are you encapsulating?
I always hear people argue for a "Validate" or "IsValid" method.
Personally I think this may work, but with most DDD projects you usually end up with multiple validations that are allowable depending on the specific state of the object.
So I prefer "IsValidForNewContract", "IsValidForTermination" or similar, because I believe most projects end up with multiple such validators/states per class. That also means I get no common interface, but I can write aggregated validators that read very well and reflect the business conditions I am asserting.
I really do believe the generic solutions in this case very often take focus away from what's important (what the code is doing) for a very minor gain in technical elegance (the interface, delegate or whatever). Just vote me down for it ;)
A colleague of mine came up with an idea that worked out pretty well. We never came up with a great name for it but we called it Inspector/Judge.
The Inspector would look at an object and tell you all of the rules it violated. The Judge would decide what to do about it. This separation let us do a couple of things. It let us put all the rules in one place (Inspector) but we could have multiple Judges and choose the Judge by the context.
One example of the use of multiple Judges revolves around the rule that said a Customer must have an Address. This was a standard three tier app. In the UI tier the Judge would produce something that the UI could use to indicate the fields that had to be filled in. The UI Judge did not throw exceptions. In the service layer there was another Judge. If it found a Customer without an Address during Save it would throw an exception. At that point you really have to stop things from proceeding.
We also had Judges that were more strict as the state of the objects changed. It was an insurance application and during the Quoting process a Policy was allowed to be saved in an incomplete state. But once that Policy was ready to be made Active a lot of things had to be set. So the Quoting Judge on the service side was not as strict as the Activation Judge. Yet the rules used in the Inspector were still the same so you could still tell what wasn't complete even if you decided not to do anything about it.
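A minimal sketch of that split, using the Customer/Address rule mentioned above (all class and member names here are invented for illustration):
public class RuleViolation
{
    public string Message { get; set; }
}

// The Inspector only reports which rules are violated; it never decides what to do.
public class CustomerInspector
{
    public List<RuleViolation> Inspect(Customer customer)
    {
        var violations = new List<RuleViolation>();
        if (customer.Address == null)
            violations.Add(new RuleViolation { Message = "Customer must have an Address." });
        return violations;
    }
}

// A service-layer Judge: any violation stops the save.
public class SaveJudge
{
    public void Judge(List<RuleViolation> violations)
    {
        if (violations.Count > 0)
            throw new InvalidOperationException(violations[0].Message);
    }
}
A UI-layer Judge could take the same list of violations and map it to required-field markers instead of throwing.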
One solution is to have each object's DataAccessObject take a list of Validators. When Save is called, it performs a check against each validator:
public class ServiceEndDateValidator : IValidator<Service> {
    public void Check(Service s) {
        if (s.EndDate > s.Contract.EndDate)
            throw new InvalidOperationException();
    }
}

public class ServiceDao : IDao<Service> {
    private readonly IEnumerable<IValidator<Service>> _validators;

    public ServiceDao(IEnumerable<IValidator<Service>> validators) { _validators = validators; }

    public void Save(Service s) {
        foreach (var v in _validators)
            v.Check(s);
        // Go on to save
    }
}
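Composing the DAO with its validators could then look something like this (whether that happens by hand or through an IoC container is up to you; someService stands in for a populated Service instance):
var dao = new ServiceDao(new IValidator<Service>[] { new ServiceEndDateValidator() });
dao.Save(someService); // every registered validator runs before the save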
The benefit is a very clear separation of concerns; the disadvantage is that we don't get the check until Save() is called.
In the past I have usually delegated validation to a service of its own, such as a ValidationService. This in principle still adheres to the philosophy of DDD.
Internally this would contain a collection of Validators and a very simple set of public methods, such as Validate(), which could return a collection of error objects.
Very simply, something like this in C#:
public class ValidationService<T>
{
    private IList<IValidator> _validators;

    public IEnumerable<Error> Validate(T objectToValidate)
    {
        foreach (IValidator validator in _validators)
        {
            // Each validator returns its own collection of errors; flatten them into one sequence.
            foreach (Error error in validator.Validate(objectToValidate))
            {
                yield return error;
            }
        }
    }
}
Validators could either be added within a default constructor or injected via some other class such as a ValidationServiceFactory.
I think that would probably be the best place for the logic, actually, but that's just me. You could have some kind of IsValid method that checks all of the conditions too and returns true/false, maybe some kind of ErrorMessages collection but that's an iffy topic since the error messages aren't really a part of the Domain Model. I'm a little biased as I've done some work with RoR and that's essentially what its models do.
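A rough sketch of that idea, using the EndDate rule from the question (the ErrorMessages shape is just one possible choice, not a prescribed design):
public class Service
{
    public DateTime EndDate { get; set; }
    public Contract Contract { get; set; }

    public IList<string> ErrorMessages { get; } = new List<string>();

    public bool IsValid()
    {
        ErrorMessages.Clear();
        if (EndDate > Contract.EndDate)
            ErrorMessages.Add("Service.EndDate must not exceed Contract.EndDate.");
        return ErrorMessages.Count == 0;
    }
}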
Another possibility is to have each of my classes implement
public interface Validatable<T> {
    event Action<T> RequiresValidation;
}
And have each setter for each class raise the event before setting (maybe I could achieve this via attributes).
The advantage is real-time validation checking; the downside is messier code, and it is unclear who should be doing the attaching.
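A hedged sketch of what a property might look like under this scheme (the backing field and the null-conditional invoke are my additions for illustration):
public class Service : Validatable<Service>
{
    public event Action<Service> RequiresValidation;

    private DateTime _endDate;
    public DateTime EndDate
    {
        get { return _endDate; }
        set
        {
            // Give any attached validators a chance to inspect the object
            // (and throw) before the new value is applied.
            RequiresValidation?.Invoke(this);
            _endDate = value;
        }
    }
}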
Here's another possibility. Validation is done through a proxy or decorator on the Domain object:
public class ServiceValidationProxy : Service {
    public override DateTime EndDate {
        get { return base.EndDate; }
        set {
            if (value > Contract.EndDate)
                throw new InvalidOperationException();
            base.EndDate = value;
        }
    }
}
Advantage: Instant validation. Can easily be configured via an IoC.
Disadvantage: if a proxy, validated properties must be virtual; if a decorator, all domain models must be interface-based. The validation classes will end up a bit heavyweight: proxies have to inherit the class and decorators have to implement all the methods. Naming and organization might get confusing.