SOLID Principles: Can the JDBC interface be cited as an example of the Dependency Inversion Principle?

Dependency Inversion is the last of the five SOLID principles.
Can I cite the JDBC interface as an example of the Dependency Inversion Principle?

JDBC is, at its essence, a giant instance of the Adapter design pattern. It may use Dependency Inversion somewhere in its implementation, but that isn't what it is fundamentally about.
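As a rough illustration (a minimal sketch; the connection URL, credentials, table and column names below are placeholders), typical application code talks only to the java.sql interfaces, and the vendor driver adapts a particular database to those interfaces behind the scenes:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcSketch {
    public static void main(String[] args) throws SQLException {
        // The URL selects a vendor driver at runtime; the code below only
        // ever names java.sql interfaces, never a concrete driver class.
        String url = "jdbc:postgresql://localhost/demo"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement stmt = conn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
            stmt.setInt(1, 42);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}

Whether you read that as Adapter or as Dependency Inversion, the practical effect is that the application compiles against the interfaces and the driver remains swappable.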

Dependency Inversion Principle

I'm trying to learn the SOLID principles and I am very confused by the Dependency Inversion principle.
Can someone explain it and check whether my code below violates the principle?
Sorry for the messy code; I'm asking this from my phone.
interface IFact {
    int FindFact(int num);
}

class Factorial1 : IFact {
    // Iterative implementation for finding the factorial
    public int FindFact(int num) {
        int result = 1;
        for (int i = 2; i <= num; i++) result *= i;
        return result;
    }
}

class Factorial2 : IFact {
    // Recursive implementation for finding the factorial
    public int FindFact(int num) {
        return num <= 1 ? 1 : num * FindFact(num - 1);
    }
}

class FactUser {
    private readonly IFact fact;
    public FactUser(IFact f) {
        fact = f;
    }
    public int Calculate(int num) {
        return fact.FindFact(num);
    }
}

class Program {
    // Main method implementation
    static void Main() {
        FactUser obj = new FactUser(new Factorial2());
        int ans = obj.Calculate(5);
        System.Console.WriteLine(ans);
    }
}
This answer is going to be unconventional but that's probably what you are looking for since plenty of more academic answers exist.
At the end of the day, FactUser is dispatching Factorial2's methods. It literally needs to access the part of memory where they are stored. We can say FactUser depends on Factorial2 at runtime.
Had you implemented FactUser with no injection via the constructor, it would have had to mention the name of Factorial2 in order to instantiate and use it (and the other variant as well). In other words, FactUser would have depended on Factorial2 in the source code too. In that case we say the source-code dependencies flow in the same direction as the runtime dependencies.
Instead, you made both FactUser and Factorial2 mention the name of a common interface, IFact, which they both have to look up in their source code. What this achieves is decoupling FactUser from any specific implementation of IFact, and that's good. However, it doesn't mean Factorial2 is now magically depending on FactUser. Indeed, programming to an interface is not, in itself, inverting anything; it's just adding an indirection.
The inversion happens if you split the code into different modules and decide that the IFact interface is owned by the user module. This module doesn't need to mention any name from the outside world, so it doesn't depend on anything. On the other hand, the module containing Factorial2 needs to import IFact from the user module and now depends on it. To sum it up: the implementations module depends on the user module, which is the opposite direction to the runtime dependency. That is our inversion.
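To make that concrete, here is a minimal Java sketch of the module split (package names and file boundaries are illustrative, not taken from the question): the user module owns IFact and imports nothing from the implementations, while the implementations module has to import IFact from the user module.

// --- user module, file userapp/IFact.java ---
package userapp;

public interface IFact {
    int findFact(int num);
}

// --- user module, file userapp/FactUser.java ---
package userapp;

public class FactUser {
    private final IFact fact;

    public FactUser(IFact fact) {
        this.fact = fact;
    }

    public int calculate(int num) {
        return fact.findFact(num);
    }
}

// --- implementations module, file impls/Factorial2.java ---
// This module imports from userapp, so its source-code dependency points
// towards the user module, against the direction of the runtime calls.
package impls;

import userapp.IFact;

public class Factorial2 implements IFact {
    @Override
    public int findFact(int num) {
        return num <= 1 ? 1 : num * findFact(num - 1);
    }
}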

Question regarding lazy evaluation for Diplomacy (rocket-chip)?

I have been reading through the Diplomacy model for Chisel, and I have a question about the design philosophy behind it. As I understand it, Scala's lazy evaluation is used to register some compile-time information, which can then be forced to evaluate during elaboration, before FIRRTL generation, in order to perform meta-operations like parameter negotiation.
My question is, is this the only approach? Would it be possible to create a proxy object in Scala which registers these meta-properties and evaluates them when a particular function is called? Then this function can be called before evaluation, to do the negotiation.
The reason I am asking is that I am learning Scala and Chisel incrementally, so I would like to understand how to build these abstractions as incrementally as possible, using primitives that are as basic as possible.
I doubt that Diplomacy is the only possible approach to this problem. It has evolved over time to meet the needs of an adaptable, generator-based approach. One of its key features is the ability to acquire information from modules as they evaluate their parameters. It's possible some proxy system could accomplish the same functionality, but there would be a risk, I think, that the meta-property code associated with the putative proxy objects would become decoupled from the parameter evaluation logic.

Correct design using dependency inversion principle across modules?

I understand dependency inversion when working inside a single module, but I would like to apply it across modules as well. In the following diagrams I have an existing application, and I need to implement some new requirements for reference data services. I thought I would create a new jar (potentially a stand-alone service in the future). The first figure shows the normal way I have approached such things in the past: the referencedataservices jar has an interface which the app will use to invoke it.
The second figure shows my attempt to use DIP, the app now owns its abstraction so it is not subject to change just because the reference data service changes. This seems to be a wrong design though, because it creates a circular dependency. MyApp depends on referencedataservices jar, and referencedataservices jar depends on MyApp.
So the third figure gets back to the more normal dependency by creating an extra layer of abstraction. Am I right? Or is this really not what DIP was intended for? Interested in hearing about other approaches or advice.
The second example is on the right track by separating the implementation from its abstraction. To achieve modularity, a concrete class should not be in the same package (module) as its abstract interface.
The fault in the second example is that the client owns the abstraction, while the service owns the implementation. These two roles must be reversed: services own interfaces; clients own implementations. In this way, the service presents a contract (API) for the client to implement. The service guarantees interaction with any client that adheres to its API. In terms of dependency inversion, the client injects a dependency into the service.
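Read that way, a small Java sketch of the arrangement might look like the following (all names are hypothetical): the service module owns the contract, and the client implements it and hands its implementation to the service.

// --- service module: owns the contract and accepts any implementation of it ---
package referencedata;

public interface ReferenceDataListener {       // hypothetical contract name
    void onReferenceData(String key, String value);
}

// --- service module, second file ---
package referencedata;

public class ReferenceDataService {
    public void fetch(String key, ReferenceDataListener listener) {
        // placeholder lookup; the point is that the client-supplied
        // implementation is injected into the service here
        listener.onReferenceData(key, "value-for-" + key);
    }
}

// --- client module (MyApp): implements the service-owned contract ---
package myapp;

import referencedata.ReferenceDataListener;

public class PrintingListener implements ReferenceDataListener {
    @Override
    public void onReferenceData(String key, String value) {
        System.out.println(key + " = " + value);
    }
}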
Kirk K. is something of an authority on modularity in Java. He had a blog that eventually turned into a book on the subject. His blog seems to be missing at the moment, but I was able to find it in the Wayback Machine. I think you would be particularly interested in the four-part series titled Applied Modularity. In terms of other approaches or alternatives to DIP, take a look at Fun With Modules, which covers three of them.
In the second approach that you presented, if you move the RefDataSvc abstraction to a separate package you break the cycle, and the referencedataservices package then uses only the package containing the RefDataSvc abstraction.
Other code in the MyApp package, apart from the Composition Root, should also depend on RefDataSvc. In the Composition Root of your application you then compose all the dependencies your app needs.
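A minimal Java sketch of that layout, with illustrative names: RefDataSvc sits in its own abstractions package, the service jar implements it, and the Composition Root in MyApp is the only code that names the concrete class.

// --- abstractions package, the only thing both sides depend on ---
package refdata.api;

public interface RefDataSvc {
    String lookup(String code);
}

// --- referencedataservices jar: depends on refdata.api only ---
package referencedataservices;

import refdata.api.RefDataSvc;

public class DatabaseRefDataSvc implements RefDataSvc {   // hypothetical implementation
    @Override
    public String lookup(String code) {
        return "value-for-" + code; // placeholder lookup
    }
}

// --- MyApp Composition Root: the one place that wires concrete types to abstractions ---
package myapp;

import refdata.api.RefDataSvc;
import referencedataservices.DatabaseRefDataSvc;

public class Main {
    public static void main(String[] args) {
        RefDataSvc refData = new DatabaseRefDataSvc();
        // the rest of MyApp receives RefDataSvc through constructors and
        // never mentions DatabaseRefDataSvc
        System.out.println(refData.lookup("ISO-3166"));
    }
}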

Gradle: configuration injection vs inheritance

The Gradle docs state (49.9):
Properties and methods declared in a project are inherited to all its subprojects. This is an alternative to configuration injection. But we think that the model of inheritance does not reflect the problem space of multi-project builds very well. In a future edition of this user guide we might write more about this.
I understand what configuration injection is doing in principle, but I'd like to understand more about the distinctions from inheritance, and why it's a better fit for multi-project builds.
Can anyone give me a few bullets on this?
Got the answer on the Gradle forums.
Essentially, configuration injection allows you to selectively apply properties to subprojects.

Dependency injection - best practice for fully decoupled components?

I want to use dependency injection (Unity) and at the moment I'm thinking about how to setup my project (it's a fancy demo I'm working on).
So, to fully decouple all components and have no more assembly dependencies, is it advisable to create an assembly ".Contracts" or something similar and put all interfaces and shared data structures there?
Would you consider this the best practice or am I on a wrong track here?
What I want to accomplish:
Full testability: I want all components to be as sharply decoupled as possible and to inject everything, so that no component ever talks directly to a concrete implementation.
The first and probably most important step is to program to interfaces, rather than concrete implementations.
Doing so, the application will be loosely coupled whether or not DI is used.
I wouldn't separate the interfaces into another assembly. If you have to interact with something that is part of your domain, why separate it? Examples of such interfaces are repositories, an email sender, etc. Suppose you have a Model assembly where you keep your domain objects. This assembly exposes the interfaces, and the implementations, obviously, reference Model in order to implement them.
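For what it's worth, here is the shape of that advice as a minimal Java sketch (packages standing in for assemblies; all names are illustrative): the interface lives next to the domain model, and the implementation sits elsewhere and references the model.

// --- model package (the "Model assembly"): domain objects plus the interfaces they need ---
package model;

public interface EmailSender {
    void send(String to, String subject, String body);
}

// --- infrastructure package: references model and provides the implementation ---
package infrastructure;

import model.EmailSender;

public class SmtpEmailSender implements EmailSender {   // hypothetical implementation
    @Override
    public void send(String to, String subject, String body) {
        // real SMTP wiring omitted; placeholder behaviour only
        System.out.printf("sending '%s' to %s%n", subject, to);
    }
}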