Implementing with both the Adapter Design Pattern and the Facade Design Pattern - MySQL

I'm new to design patterns.
I'm implementing a tool which can connect to different databases, as the user needs.
This is my code structure.
In controllers I have my API calls. Below I paste the POST API call for getting all databases on a server:
@PostMapping("/allDatabases")
public List<String> getDatabases(@RequestBody DatabaseModel db)
        throws IOException, SQLException {
    return migrationInterface.getAllDatabases(db);
}
For now I'm getting the response by calling a method on an interface inside the service package.
But when the database server changes (e.g. Postgres, MySQL) I have to use different queries.
Ex:
public class PostgresPreparedStatements {
    public PreparedStatement getAllDbs(Connection con) throws SQLException {
        return con.prepareStatement(
            "SELECT datname FROM pg_database " +
            "WHERE datistemplate = false;");
    }
}
This query does not work in a MySQL database, so I'll keep different prepared statements for different databases. My idea is to call a BaseAdapter from the controller and check the server type, like below:
public class BaseAdapter {
    public void checkServerType(String server) {
        switch (server) {
            case "postgres":
                // postgres functions
                break;
            case "mysql":
                // mysql functions
                break;
            default:
                break;
        }
    }
}
I want to call PostgresConnector.java if the server is Postgres. From the Connector I want to call a Facade that calls the functions and related queries.
Any idea how to do this?
Please note: for now I'm implementing this for Postgres and MySQL, but in the future this should work with any database.

The Adapter pattern is not used when you want to add new behaviour, such as new databases in your case. The goal of an adapter class is to allow another class to access existing functionality; an adapter converts the interface of one class into something that may be used by another class.
It looks like BaseAdapter has the responsibility of choosing the SQL statement for different databases. We can paraphrase this responsibility as: we want a generated SQL query based on the database. So it looks like we can replace this switch statement with a HashMap (Java) or Dictionary (C#). This HashMap or Dictionary can be a simple factory that creates SQL queries, and the generated SQL queries can be strategies for the concrete databases.
So let's dive into the code.
It looks like this is a place where Strategy pattern can be used:
Strategy pattern is a behavioral software design pattern that enables
selecting an algorithm at runtime. Instead of implementing a single
algorithm directly, code receives run-time instructions as to which in
a family of algorithms to use.
Let me show an example in C#. I am sorry, I am not a Java guy; however, I have provided comments about how the code could look in Java.
We need some common behaviour that will be shared across all strategies. In our case, it is just one GetAllDbs() method for the different data providers:
public interface IDatabaseStatement
{
    IEnumerable<string> GetAllDbs();
}
And its concrete implementations. These are exchangeable strategies:
public class PostgresDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "PostgresDatabaseStatement" };
    }
}

public class MySQLDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "MySQLDatabaseStatement" };
    }
}

public class SqlServerDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "SqlServerDatabaseStatement" };
    }
}
We need a place where all strategies are stored, and we should be able to get the necessary strategy from this store. This is a place where a simple factory can be used. (A simple factory is neither the Factory Method pattern nor Abstract Factory.)
public enum DatabaseName
{
    SqlServer, Postgres, MySql
}

public class DatabaseStatementFactory
{
    private Dictionary<DatabaseName, IDatabaseStatement> _statementByDatabaseName
        = new Dictionary<DatabaseName, IDatabaseStatement>()
        {
            { DatabaseName.SqlServer, new SqlServerDatabaseStatement() },
            { DatabaseName.Postgres, new PostgresDatabaseStatement() },
            { DatabaseName.MySql, new MySQLDatabaseStatement() },
        };

    public IDatabaseStatement GetInstanceByType(DatabaseName databaseName) =>
        _statementByDatabaseName[databaseName];
}
Then you can get an instance of the desired statement more easily:
DatabaseStatementFactory databaseStatementFactory = new();
IDatabaseStatement databaseStatement = databaseStatementFactory
    .GetInstanceByType(DatabaseName.MySql);
IEnumerable<string> allDatabases = databaseStatement.GetAllDbs(); // OUTPUT: MySQLDatabaseStatement
This design is compliant with the open/closed principle.
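Since the question is about Java, a rough Java sketch of the same strategy-plus-simple-factory idea could look like this (an untested translation of the C# above; the class and enum names are just illustrative):

import java.util.List;
import java.util.Map;

// Common behaviour shared by all strategies.
interface DatabaseStatement {
    List<String> getAllDbs();
}

// Exchangeable strategies, one per concrete database.
class PostgresDatabaseStatement implements DatabaseStatement {
    public List<String> getAllDbs() { return List.of("PostgresDatabaseStatement"); }
}

class MySqlDatabaseStatement implements DatabaseStatement {
    public List<String> getAllDbs() { return List.of("MySQLDatabaseStatement"); }
}

enum DatabaseName { POSTGRES, MYSQL }

// Simple factory: the Map replaces the switch statement from BaseAdapter.
class DatabaseStatementFactory {
    private static final Map<DatabaseName, DatabaseStatement> STATEMENTS = Map.of(
            DatabaseName.POSTGRES, new PostgresDatabaseStatement(),
            DatabaseName.MYSQL, new MySqlDatabaseStatement());

    public DatabaseStatement getInstanceByType(DatabaseName name) {
        return STATEMENTS.get(name);
    }
}

Usage would then mirror the C# version:

DatabaseStatement statement = new DatabaseStatementFactory()
        .getInstanceByType(DatabaseName.MYSQL);
List<String> allDatabases = statement.getAllDbs(); // [MySQLDatabaseStatement]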

Related

Spring Boot Redis JMS JUnit

I am using Redis Server as the message broker in my Spring Boot application.
Is there any simple way to JUnit-test my publish and receive APIs?
e.g.:
Publisher:
public String publish(Object domainObj) {
    template.convertAndSend(topic.getTopic(), domainObj.toString());
    return "Event Published";
}
Receiver:
public class Receiver implements MessageListener {
    @Override
    public void onMessage(Message message, byte[] bytes) {
        System.out.println("Consumed Message {}" + message);
    }
}
I am using JedisConnectionFactory, RedisMessageListenerContainer, and RedisTemplate for my implementation:
@Configuration
@EnableRedisRepositories
public class RedisConfig {
    @Bean
    public JedisConnectionFactory connectionFactory() {
        RedisStandaloneConfiguration configuration = new RedisStandaloneConfiguration();
        configuration.setHostName("localhost");
        configuration.setPort(6379);
        return new JedisConnectionFactory(configuration);
    }

    @Bean
    public RedisTemplate<String, Object> template() {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory());
        template.setKeySerializer(new StringRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(new JdkSerializationRedisSerializer());
        template.setValueSerializer(new JdkSerializationRedisSerializer());
        template.setEnableTransactionSupport(true);
        template.afterPropertiesSet();
        return template;
    }

    @Bean
    public ChannelTopic topic() {
        return new ChannelTopic("common-channel");
    }

    @Bean
    public MessageListenerAdapter messageListenerAdapter() {
        return new MessageListenerAdapter(new Receiver());
    }

    @Bean
    public RedisMessageListenerContainer redisMessageListenerContainer() {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(connectionFactory());
        container.addMessageListener(messageListenerAdapter(), topic());
        return container;
    }
}
Unit testing the Receiver and Publisher implementations is quite straightforward.
JUnit 5 coupled with the Mockito extension should do the job.
For example, for testing this:
public String publish(Object domainObj) {
    template.convertAndSend(topic.getTopic(), domainObj.toString());
    return "Event Published";
}
I expect topic and template to be fields of the current class.
These fields could be set by the constructor.
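For reference, the Publisher under test might then look something like this (a sketch; the constructor injection and the ChannelTopic field type are assumptions, not taken from the question):

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.listener.ChannelTopic;

public class Publisher {

    private final RedisTemplate<String, Object> template;
    private final ChannelTopic topic;

    // Both collaborators are injected through the constructor,
    // so the test can pass in mocks and fixtures.
    public Publisher(RedisTemplate<String, Object> template, ChannelTopic topic) {
        this.template = template;
        this.topic = topic;
    }

    public String publish(Object domainObj) {
        template.convertAndSend(topic.getTopic(), domainObj.toString());
        return "Event Published";
    }
}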
So you could write something that checks that convertAndSend() is eventually executed with the correct parameters:
@Mock
RedisTemplate<String, Object> templateMock;

@Test
void publish() {
    // given
    ChannelTopic topicFixture = new ChannelTopic(...);
    Object domainObjFixture = new FooBar(...);
    Publisher publisher = new Publisher(templateMock, topicFixture);
    // when
    publisher.publish(domainObjFixture);
    // then
    Mockito.verify(templateMock)
           .convertAndSend(topicFixture.getTopic(), domainObjFixture.toString());
}
But I don't think that unit tests of these two classes are enough, because they never test the final thing: the actual message processing performed by the Redis backend.
In particular, the RedisConfig part, where you set specific things such as serializers, has important side effects on that processing.
For my part, I try to always write integration or partial integration tests for Redis backend code to ensure a good non-regression harness.
The Java embedded-redis library is good for that. It allows you to start a Redis server
on localhost (it works on Windows as well as on Linux).
Starting and stopping the Redis server is as simple as:
RedisServer redisServer = new RedisServer(6379);
redisServer.start();
// do some work
redisServer.stop();
Move the start() into @BeforeEach and the stop() into @AfterEach, and the server is ready.
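For example, the lifecycle wiring could look like this (a sketch assuming the embedded-redis dependency from the snippet above; the test class name is illustrative):

import java.io.IOException;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import redis.embedded.RedisServer;

class PublisherIntegrationTest {

    private RedisServer redisServer;

    @BeforeEach
    void setUp() throws IOException {
        // Same port as the one declared in RedisConfig.
        redisServer = new RedisServer(6379);
        redisServer.start();
    }

    @AfterEach
    void tearDown() {
        redisServer.stop();
    }

    // Integration tests talking to localhost:6379 go here.
}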
Then it still requires some adjustments to ensure that the Redis configuration specified in Spring is set up during the tests to use your local Redis server rather than the "real" one. Not always simple to set up, but great when it is done!
The simplest way to unit test this is to use the embedded-redis module: start embedded Redis in a @BeforeAll method and stop it in an @AfterAll method.
You can also use the @PostConstruct and @PreDestroy annotations to accomplish this.
If you're looking for JUnit 5, you can find the code in my repo.
See the BootstrapRedis annotation and its usage here:
https://github.com/sonus21/rqueue/blob/7ef545c15985ef91ba719f070f7cc80745525047/rqueue-core/src/test/java/com/github/sonus21/rqueue/core/RedisScriptFactoryTest.java#L40

Castle Windsor: how to pass arguments to deep dependencies?

I have the following dependency chain:
IUserAppService
IUserDomainService
IUserRepository
IUserDataContext - UserDataContextImpl(string conn)
All of the interfaces above and their implementations are registered in a Castle Windsor container. When I use one connection string, everything works fine.
Now we want to support multiple databases. In UserAppServiceImpl.cs, we want to get a different IUserRepository (a different IUserDataContext) according to userId, as below:
// UserAppServiceImpl.cs
public UserInfo GetUserInfo(long userId)
{
    var connStr = userId % 2 == 0 ? "conn1" : "conn2";
    //var repo = container.Resolve<IUserRepository>(....)
}
How can I pass the argument connStr to UserDataContextImpl?
Since the connection string is runtime data in your case, it should not be injected directly into the constructor of your components, as explained here. However, since the connection string is contextual data, it would be awkward to pass it through all the public methods in your object graph.
Instead, you should hide it behind an abstraction that allows you to retrieve the proper value for the current request. For instance:
public interface ISqlConnectionFactory
{
    SqlConnection Open();
}
An implementation of the ISqlConnectionFactory itself could depend on a dependency that allows retrieving the current user id:
public interface IUserContext
{
    int UserId { get; }
}
Such a connection factory might therefore look like this:
public class SqlConnectionFactory : ISqlConnectionFactory
{
    private readonly IUserContext userContext;
    private readonly string con1;
    private readonly string con2;

    public SqlConnectionFactory(IUserContext userContext,
        string con1, string con2) {
        ...
    }

    public SqlConnection Open() {
        var connStr = userContext.UserId % 2 == 0 ? this.con1 : this.con2;
        var con = new SqlConnection(connStr);
        con.Open();
        return con;
    }
}
This leaves us with an IUserContext implementation. Such an implementation will depend on the type of application we are building. For ASP.NET it might look like this:
public class AspNetUserContext : IUserContext
{
    public int UserId => int.Parse((string)HttpContext.Current.Session["UserId"]);
}
You have to start from the beginning of your dependency resolution and bind all of your derived dependencies to a "named" registration.
GitHub docs link: https://github.com/castleproject/Windsor/blob/master/docs/inline-dependencies.md
Example:
I have my IDataContext for MSSQL and another for MySQL.
This example is in Unity, but I am sure Windsor can do this.
container.RegisterType(Of IDataContextAsync, dbEntities)("db", New InjectionConstructor())
container.RegisterType(Of IUnitOfWorkAsync, UnitOfWork)("UnitOfWork", New InjectionConstructor(New ResolvedParameter(Of IDataContextAsync)("db")))
'Exceptions example
container.RegisterType(Of IRepositoryAsync(Of Exception), Repository(Of Exception))("iExceptionRepository",
New InjectionConstructor(New ResolvedParameter(Of IDataContextAsync)("db"),
New ResolvedParameter(Of IUnitOfWorkAsync)("UnitOfWork")))
SQL container:
container.RegisterType(Of IDataContextAsync, DataMart)(New HierarchicalLifetimeManager)
container.RegisterType(Of IUnitOfWorkAsync, UnitOfWork)(New HierarchicalLifetimeManager)
'brands
container.RegisterType(Of IRepositoryAsync(Of Brand), Repository(Of Brand))
Controller code: no changes required at the controller level.
Results: I can now have my MSSQL context do its work and my MySQL context do its work without any developer having to understand my container configuration. The developer simply consumes the correct service and everything is implemented.

Entity Framework Code First - MySQL - error can't find table

I'm new to EF, EF Code First, and EF with MySQL. When would EF Code First create your tables within an ASP.NET MVC web project?
I created a Person model, then generated the controller and the standard views.
When I hit the Index method of the Person controller it tries to pull back a list of all People. Then I get the error:
An error occurred while executing the command definition. See the inner exception for details.
The inner exception:
Table 'testmvc.people' doesn't exist
So I've made it past the connection. But the table wasn't created. How do I create the tables? Also how do I prevent the pluralization of Person to People in the naming scheme?
The simplest way to generate the database schema (the people table and others) is to set a database initialization strategy, like this:
Database.SetInitializer<SomeContext>(new DropCreateDatabaseAlways<SomeContext>());
This code needs to run before you attempt to load any data, so the Application_Start() method in Global.asax would be a good place for it. There are several ways to initialize, so you may want to take a look at them before choosing one; see http://msdn.microsoft.com/en-us/library/system.data.entity%28v=vs.103%29.aspx and look at the types that implement IDatabaseInitializer. Officially, there is a default strategy, although I have never quite found it to work for me.
You should also be aware that while this method is great for prototyping and development, you can't quite use it on a production database with live data, since the database is first dropped and then recreated. There are other methods of doing this at that point; see Database migrations for Entity Framework 4 for possibilities.
Regarding your other question of using non-pluralized table names, there are several ways to do this. One way is to annotate the Person class like this:
[Table("Person")]
class Person
{
// some field attributes
}
To set this for all tables at once, you can use the fluent API, like this:
class SomeContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
    }
}
MySQL with Entity Framework needs a few little tweaks. You need to create three classes (you can check https://learn.microsoft.com/en-us/aspnet/identity/overview/getting-started/aspnet-identity-using-mysql-storage-with-an-entityframework-mysql-provider for more details). First, create a MySqlHistoryContext class:
public class MySqlHistoryContext : HistoryContext
{
    public MySqlHistoryContext(
        DbConnection existingConnection,
        string defaultSchema)
        : base(existingConnection, defaultSchema)
    {
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        modelBuilder.Entity<HistoryRow>().Property(h => h.MigrationId).HasMaxLength(100).IsRequired();
        modelBuilder.Entity<HistoryRow>().Property(h => h.ContextKey).HasMaxLength(200).IsRequired();
    }
}
Next, create a MySqlConfiguration class:
public class MySqlConfiguration : DbConfiguration
{
    public MySqlConfiguration()
    {
        SetHistoryContext("MySql.Data.MySqlClient",
            (conn, schema) => new MySqlHistoryContext(conn, schema));
    }
}
Then create a MySqlInitializer class:
public class MySqlInitializer : IDatabaseInitializer<ApplicationDbContext>
{
    public void InitializeDatabase(ApplicationDbContext context)
    {
        if (!context.Database.Exists())
        {
            // if the database did not exist before - create it
            context.Database.Create();
        }
        else
        {
            // query to check if the MigrationHistory table is present in the database
            var migrationHistoryTableExists =
                ((IObjectContextAdapter)context).ObjectContext.ExecuteStoreQuery<int>(
                    "SELECT COUNT(*) FROM information_schema.tables " +
                    "WHERE table_schema = 'IdentityMySQLDatabase' AND table_name = '__MigrationHistory'");

            // if the MigrationHistory table is not there (which is the case the first time we run) - create it
            if (migrationHistoryTableExists.FirstOrDefault() == 0)
            {
                context.Database.Delete();
                context.Database.Create();
            }
        }
    }
}
Open IdentityModels.cs in the Models folder and add this to the ApplicationDbContext : IdentityDbContext class:
static ApplicationDbContext()
{
    Database.SetInitializer(new MySqlInitializer());
}

Entity Framework/LINQ to SQL model to business model

I'm coming from a stored-procedures-and-hand-built-data-access-layer approach. I am trying to understand where I should fit LINQ to SQL or Entity Framework into my normal planning. I normally separate the business layer from the DAL layer and use a repository in between.
It seems that people will either use the generated classes from LINQ to SQL, extend them using partial classes, or do a full separation and map the generated LINQ classes to separate business entities. I am partial to the separate business entities. However, this seems to be counterintuitive.
One of my last projects used DDD and Entity Framework. When it needed to update an object, it moved the business entity to the repository layer, which, when going to the DAL layer, would create a context and then re-query the object. It would then update the values and resubmit.
I didn't see much point, as the data context wasn't kept and an extra query was required to grab the object before updating. Normally I would just do the update (if concurrency wasn't an issue).
So my questions come down to:
Does it make sense to separate LINQ to SQL generated classes into business entities?
Should the data context be kept around, or is that impractical?
Thanks for your time; I'm trying to make sure I understand. I normally like to separate things out, as it makes them cleaner to understand, even in some smaller projects.
I currently hand-roll my own DTO classes and DataContext instead of using the auto-generated code files from LINQ to SQL. To give some background on my solution architecture/modeling, I have a "Contract" project and a "Dal" project. (There is also a "Model" project, but I'll try to stay focused here on the Dal only.) Hand-rolling my own DTOs and DataContext makes everything a lot smaller and simpler. I'll give a few examples of how I do that here.
I never return a DTO object outside of the Dal; in fact, I make sure to declare them as internal. The way I return them is to cast them to an interface (the interfaces are located in my "Contract" layer). We'll make a simple PersonRepository that implements the IPersonRetriever and IPersonSaver interfaces.
Contracts:
public interface IPersonRetriever
{
    IPerson GetPersonById(Guid personId);
}

public interface IPersonSaver
{
    void SavePerson(IPerson person);
}
Dal:
public class PersonRepository : IPersonSaver, IPersonRetriever
{
    private string _connectionString;

    public PersonRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    IPerson IPersonRetriever.GetPersonById(Guid id)
    {
        using (var dc = new PersonDataContext(_connectionString))
        {
            return dc.PersonDtos.FirstOrDefault(p => p.Id == id);
        }
    }

    void IPersonSaver.SavePerson(IPerson person)
    {
        using (var dc = new PersonDataContext(_connectionString))
        {
            var personDto = new PersonDto
            {
                Id = person.Id,
                FirstName = person.FirstName,
                Age = person.Age
            };
            dc.PersonDtos.InsertOnSubmit(personDto);
            dc.SubmitChanges();
        }
    }
}
PersonDataContext:
internal class PersonDataContext : System.Data.Linq.DataContext
{
    // necessary for pre-compiled linq queries in .Net 4.0+
    static MappingSource _mappingSource = new AttributeMappingSource();

    internal PersonDataContext(string connectionString)
        : base(connectionString, _mappingSource) { }

    internal Table<PersonDto> PersonDtos { get { return GetTable<PersonDto>(); } }
}

[Table(Name = "dbo.Persons")]
internal class PersonDto : IPerson
{
    [Column(Name = "PersonIdentityId", IsPrimaryKey = true, IsDbGenerated = false)]
    internal Guid Id { get; set; }

    [Column]
    internal string FirstName { get; set; }

    [Column]
    internal int Age { get; set; }

    #region IPerson implementation
    Guid IPerson.Id { get { return this.Id; } }
    string IPerson.FirstName { get { return this.FirstName; } }
    int IPerson.Age { get { return this.Age; } }
    #endregion
}
You will need to add the Column attribute to all of your DTO properties, but notice that when there is a one-to-one correspondence between the name you want the field exposed as and the name of the actual table column, you won't need to add any of the named parameters. In this example the person id in the database is stored as "PersonIdentityId", yet I only want my interface to expose the field as "Id".
That's how I do my Dal layer. I believe this layer should be dumb, real dumb: there only for CRUD (Create, Retrieve, Update and Delete) operations. All of the business logic goes into my "Model" project, which consumes and utilizes the IPersonSaver and IPersonRetriever interfaces.
Hope this helps!

Localization using a DI framework - good idea?

I am working on a web application which I need to localize and internationalize. It occurred to me that I could do this using a dependency injection framework. Let's say I declare an interface ILocalResources (using C# for this example but that's not really important):
interface ILocalResources {
    string OkString { get; }
    string CancelString { get; }
    string WrongPasswordString { get; }
    ...
}
and create implementations of this interface, one for each language I need to support. I would then set up my DI framework to instantiate the proper implementation, either statically or dynamically (for example, based on the requesting browser's preferred language).
Is there some reason I shouldn't be using a DI framework for this sort of thing? The only objection I could come up with myself is that it might be a bit of overkill, but if I'm using a DI framework in my web app anyway, I might as well use it for internationalization as well?
A DI framework is built to do dependency injection, and localization could just be one of your services, so in that case there's no reason not to use a DI framework, IMO. Perhaps we should start by discussing the provided ILocalResources interface. While I'm in favor of having compile-time support, I'm not sure the supplied interface will help you, because that interface will probably be the type in your system that changes the most, and with that interface, the type or types that implement it. Perhaps you should go with a different design.
When we look at most localization frameworks/providers/factories (or whatever), they're all string-based. Because of this, think about the following design:
public interface ILocalResources
{
    string GetStringResource(string key);
    string GetStringResource(string key, CultureInfo culture);
}
This would allow you to add keys and cultures to the underlying message data store without changing the interface. The downside, of course, is that you should never change a key, because that will probably be hell.
Another approach could be an abstract base type:
public abstract class LocalResources
{
    public string OkMessage { get { return this.GetString("OK"); } }
    public string CancelMessage { get { return this.GetString("Cancel"); } }
    ...

    protected abstract string GetStringResource(string key,
        CultureInfo culture);

    private string GetString(string key)
    {
        CultureInfo culture = CultureInfo.CurrentCulture;
        string resource = this.GetStringResource(key, culture);

        // When the resource is not found, fall back to the neutral culture.
        while (resource == null && culture != CultureInfo.InvariantCulture)
        {
            culture = culture.Parent;
            resource = this.GetStringResource(key, culture);
        }

        if (resource == null) throw new KeyNotFoundException(key);

        return resource;
    }
}
An implementation of this type could look like this:
public sealed class SqlLocalResources : LocalResources
{
    protected override string GetStringResource(string key,
        CultureInfo culture)
    {
        using (var db = new LocalResourcesContext())
        {
            return (
                from resource in db.StringResources
                where resource.Culture == culture.Name
                where resource.Key == key
                select resource.Value).FirstOrDefault();
        }
    }
}
This approach takes the best of both worlds, because the keys won't be scattered throughout the application, and adding new properties only has to be done in one single place. Using your favorite DI library, you can register an implementation like this:
container.RegisterSingleton<LocalResources>(new SqlLocalResources());
And since the LocalResources type has exactly one abstract method that does all the work, it is easy to create a decorator that adds caching to prevent requesting the same data from the database:
public sealed class CachedLocalResources : LocalResources
{
    private readonly Dictionary<CultureInfo, Dictionary<string, string>> cache =
        new Dictionary<CultureInfo, Dictionary<string, string>>();
    private readonly LocalResources decoratee;

    public CachedLocalResources(LocalResources decoratee) { this.decoratee = decoratee; }

    protected override string GetStringResource(string key, CultureInfo culture) {
        lock (this.cache) {
            string res;
            var cultureCache = this.GetCultureCache(culture);
            if (!cultureCache.TryGetValue(key, out res)) {
                cultureCache[key] = res = this.decoratee.GetStringResource(key, culture);
            }
            return res;
        }
    }

    private Dictionary<string, string> GetCultureCache(CultureInfo culture) {
        Dictionary<string, string> cultureCache;
        if (!this.cache.TryGetValue(culture, out cultureCache)) {
            this.cache[culture] = cultureCache = new Dictionary<string, string>();
        }
        return cultureCache;
    }
}
You can apply the decorator as follows:
container.RegisterSingleton<LocalResources>(
    new CachedLocalResources(new SqlLocalResources()));
Do note that this decorator caches the string resources indefinitely, which might cause memory leaks, so you may wish to wrap the strings in WeakReference instances or add some sort of expiration timeout. But the idea is that you can apply caching without having to change any existing implementation.
I hope this helps.
If you cannot use an existing resource framework (like the one built into ASP.NET) and have to build your own, I will assume that at some point you will need to expose services that provide localized resources.
DI frameworks are used to handle service instantiation. Your localization framework will expose services providing localization. Why shouldn't that service be served up by the framework?
Not using DI for its purpose here is like saying, "I'm building a CRM app but cannot use DI because DI is not built for customer relations management".
So yes, if you're already using DI in the rest of your application, IMO it would be wrong to not use it for the services handling localization.
The only disadvantage I can see is that for any update to the resources, you would have to recompile the assembly containing them. Depending on your project, this disadvantage may be a good argument for using a DI framework only to resolve a ResourceService of some kind, rather than the values themselves.