I'm trying to change the displayed name of the Minecraft version, because a server I play on has a plugin which detects whether a client is running Forge by checking its version. I tried to decompile several .class files in search of that setting, but I didn't find it; here is the screenshot of the "displayed version" I'm talking about.
The only thing I found is this "id", but if I change it, Minecraft just crashes.
{
    "_comment_": [
        "Please do not automate the download and installation of Forge.",
        "Our efforts are supported by ads from the download page.",
        "If you MUST automate this, please consider supporting the project through https://www.patreon.com/LexManos/"
    ],
    "id": "1.16.5-forge-36.2.0",
    "time": "2021-07-22T01:48:10+00:00",
    "releaseTime": "2021-07-22T01:48:10+00:00",
    "type": "release",
    "mainClass": "cpw.mods.modlauncher.Launcher",
    "inheritsFrom": "1.16.5",
    "logging": {
    },
Please help, this is driving me mad.
This is called the client brand. Vanilla Minecraft uses vanilla, Forge uses fml,forge, Lunar Client uses Lunarclient:LunarVersionNumber, etc. I don't know about 1.16.5, but in 1.8.9 the class used to get the client brand is net.minecraft.client.ClientBrandRetriever (1.16.5 should be similar). You would need to use Mixins or another way of manipulating code at runtime to modify this.
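For example, a minimal Mixin sketch (assuming a standard SpongePowered Mixin setup for a Forge 1.16.x mod; the package name here is hypothetical and the method may need remapping outside the dev environment) could look like this:

package com.example.brandmixin;

import net.minecraft.client.ClientBrandRetriever;
import org.spongepowered.asm.mixin.Mixin;
import org.spongepowered.asm.mixin.injection.At;
import org.spongepowered.asm.mixin.injection.Inject;
import org.spongepowered.asm.mixin.injection.callback.CallbackInfoReturnable;

// Hypothetical Mixin that overrides the value returned by getClientModName()
@Mixin(ClientBrandRetriever.class)
public class ClientBrandRetrieverMixin {

    @Inject(method = "getClientModName", at = @At("HEAD"), cancellable = true)
    private static void reportVanillaBrand(CallbackInfoReturnable<String> cir) {
        // Report the vanilla brand instead of Forge's "forge"
        cir.setReturnValue("vanilla");
    }
}

The Mixin also has to be registered in a mixin configuration JSON and the Mixin library has to be bootstrapped by your mod, which is outside the scope of this sketch.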
OK, I actually found something while trying to make the mod; I found these two .class files.
Forge's ClientBrandRetriever.class
package net.minecraft.client;

import net.minecraftforge.api.distmarker.Dist;
import net.minecraftforge.api.distmarker.OnlyIn;

@OnlyIn(Dist.CLIENT)
public class ClientBrandRetriever {
    public static String getClientModName() {
        return net.minecraftforge.fml.BrandingControl.getClientBranding();
    }
}
and this one, called BrandingControl.class:
/*
* Minecraft Forge
* Copyright (c) 2016-2021.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation version 2.1
* of the License.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
package net.minecraftforge.fml;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Lists;
import java.util.List;
import java.util.function.BiConsumer;
import java.util.stream.IntStream;
import net.minecraft.resources.IResourceManager;
import net.minecraft.resources.IResourceManagerReloadListener;
import net.minecraftforge.client.ForgeHooksClient;
import net.minecraftforge.versions.forge.ForgeVersion;
import net.minecraftforge.versions.mcp.MCPVersion;
public class BrandingControl
{
    private static List<String> brandings;
    private static List<String> brandingsNoMC;
    private static List<String> overCopyrightBrandings;

    private static void computeBranding()
    {
        if (brandings == null)
        {
            ImmutableList.Builder<String> brd = ImmutableList.builder();
            brd.add("Forge " + ForgeVersion.getVersion());
            brd.add("Minecraft " + MCPVersion.getMCVersion());
            brd.add("MCP " + MCPVersion.getMCPVersion());
            int tModCount = ModList.get().size();
            brd.add(ForgeI18n.parseMessage("fml.menu.loadingmods", tModCount));
            brandings = brd.build();
            brandingsNoMC = brandings.subList(1, brandings.size());
        }
    }

    private static List<String> getBrandings(boolean includeMC, boolean reverse)
    {
        computeBranding();
        if (includeMC) {
            return reverse ? Lists.reverse(brandings) : brandings;
        } else {
            return reverse ? Lists.reverse(brandingsNoMC) : brandingsNoMC;
        }
    }

    private static void computeOverCopyrightBrandings() {
        if (overCopyrightBrandings == null) {
            ImmutableList.Builder<String> brd = ImmutableList.builder();
            if (ForgeHooksClient.forgeStatusLine != null) brd.add(ForgeHooksClient.forgeStatusLine);
            overCopyrightBrandings = brd.build();
        }
    }

    public static void forEachLine(boolean includeMC, boolean reverse, BiConsumer<Integer, String> lineConsumer) {
        final List<String> brandings = getBrandings(includeMC, reverse);
        IntStream.range(0, brandings.size()).boxed().forEachOrdered(idx -> lineConsumer.accept(idx, brandings.get(idx)));
    }

    public static void forEachAboveCopyrightLine(BiConsumer<Integer, String> lineConsumer) {
        computeOverCopyrightBrandings();
        IntStream.range(0, overCopyrightBrandings.size()).boxed().forEachOrdered(idx -> lineConsumer.accept(idx, overCopyrightBrandings.get(idx)));
    }

    public static String getClientBranding() {
        return "forge";
    }

    public static String getServerBranding() {
        return "forge";
    }

    public static IResourceManagerReloadListener resourceManagerReloadListener() {
        return BrandingControl::onResourceManagerReload;
    }

    private static void onResourceManagerReload(IResourceManager resourceManager) {
        brandings = null;
        brandingsNoMC = null;
    }
}
At this point I managed to rebuild this last .class file, changing the client and server branding returns to "elberto". Now when I run the SDK environment it says "elberto" in the version, but when I try to do the same with the actual Minecraft Forge files, it loads everything and then crashes at the end of loading.
I'm pretty sure this isn't a really good way to do what I want to do, so if you know a better way to do it as a proper working mod, I'm listening.
I'm new to design patterns.
I'm implementing a tool which can connect to different databases as the user needs.
This is my code structure.
In controllers I have my API calls. Below I paste the POST API call for getting all databases on a server:
@PostMapping("/allDatabases")
public List<String> getDatabases(@RequestBody DatabaseModel db)
        throws IOException, SQLException {
    return migrationInterface.getAllDatabases(db);
}
For now I'm getting the response by calling a method of an interface inside the service package.
But when the database server changes (e.g. Postgres, MySQL) I have to use different queries.
Ex:
public class PostgresPreparedStatements {

    public PreparedStatement getAllDbs(Connection con) throws SQLException {
        return con.prepareStatement(
                "SELECT datname FROM pg_database WHERE datistemplate = false;");
    }
}
This query does not work on a MySQL database, so I'll keep different prepared statements for different databases. My idea is to call a BaseAdapter from the controller and check the server type like below.
public class BaseAdapter {

    public void checkServerType(String server) {
        switch (server) {
            case "postgres":
                // postgres functions
                break;
            case "mysql":
                // mysql functions
                break;
            default:
                break;
        }
    }
}
I want to call PostgresConnector.java if the server is Postgres. From the connector I want to call a Facade to call functions and related queries.
Any idea how to do this?
Please note: for now I'm implementing this for Postgres and MySQL, but in the future this should work with any database.
The Adapter pattern is not used when you want to add new behaviour, such as new databases in your case. The goal of an adapter class is to allow another class to access existing functionality. An adapter converts the interface of one class into something that may be used by another class.
It looks like BaseAdapter has the responsibility of choosing the SQL statement for different databases. We can paraphrase this responsibility as: we want a generated SQL query based on the database. So it looks like we can replace this switch statement with a Map (Java) or Dictionary (C#). And this Map (Java) or Dictionary (C#) can be a simple factory that creates SQL queries. And our generated SQL queries can be strategies for a concrete database.
So let's dive into the code.
It looks like this is a place where Strategy pattern can be used:
Strategy pattern is a behavioral software design pattern that enables
selecting an algorithm at runtime. Instead of implementing a single
algorithm directly, code receives run-time instructions as to which in
a family of algorithms to use.
Let me show an example in C#. I'm sorry, I'm not a Java guy, but I have provided comments about how the code could look in Java.
We need to have some common behaviour that will be shared across all strategies. In our case, it would be just one GetAllDbs() method from different data providers:
public interface IDatabaseStatement
{
    IEnumerable<string> GetAllDbs();
}
And its concrete implementations. These are exchangeable strategies:
public class PostgresDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "PostgresDatabaseStatement" };
    }
}

public class MySQLDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "MySQLDatabaseStatement" };
    }
}

public class SqlServerDatabaseStatement : IDatabaseStatement // implements in Java
{
    public IEnumerable<string> GetAllDbs()
    {
        return new[] { "SqlServerDatabaseStatement" };
    }
}
We need a place where all strategies can be stored. And we should be able to get necessary strategy from this store. So this is a place where simple factory can be used. Simple factory is not Factory method pattern and not Abstract factory.
public enum DatabaseName
{
    SqlServer, Postgres, MySql
}

public class DatabaseStatementFactory
{
    private Dictionary<DatabaseName, IDatabaseStatement> _statementByDatabaseName
        = new Dictionary<DatabaseName, IDatabaseStatement>()
        {
            { DatabaseName.SqlServer, new SqlServerDatabaseStatement() },
            { DatabaseName.Postgres, new PostgresDatabaseStatement() },
            { DatabaseName.MySql, new MySQLDatabaseStatement() },
        };

    public IDatabaseStatement GetInstanceByType(DatabaseName databaseName) =>
        _statementByDatabaseName[databaseName];
}
and then you can get an instance of the desired strategy more easily:

DatabaseStatementFactory databaseStatementFactory = new();
IDatabaseStatement databaseStatement = databaseStatementFactory
    .GetInstanceByType(DatabaseName.MySql);
IEnumerable<string> allDatabases = databaseStatement.GetAllDbs(); // OUTPUT: MySQLDatabaseStatement
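Since the question is about Java, a rough sketch of the same idea in Java could look like this (class names are hypothetical and mirror the C# ones above):

import java.util.List;
import java.util.Map;

// Shared strategy interface (mirrors IDatabaseStatement above)
interface DatabaseStatement {
    List<String> getAllDbs();
}

class PostgresDatabaseStatement implements DatabaseStatement {
    public List<String> getAllDbs() {
        return List.of("PostgresDatabaseStatement");
    }
}

class MySqlDatabaseStatement implements DatabaseStatement {
    public List<String> getAllDbs() {
        return List.of("MySQLDatabaseStatement");
    }
}

// Simple factory: a Map lookup replaces the switch statement
class DatabaseStatementFactory {
    private static final Map<String, DatabaseStatement> STATEMENTS = Map.of(
            "postgres", new PostgresDatabaseStatement(),
            "mysql", new MySqlDatabaseStatement());

    DatabaseStatement getInstanceByType(String databaseName) {
        return STATEMENTS.get(databaseName);
    }
}

public class StrategyDemo {
    public static void main(String[] args) {
        DatabaseStatement statement = new DatabaseStatementFactory().getInstanceByType("mysql");
        System.out.println(statement.getAllDbs()); // [MySQLDatabaseStatement]
    }
}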
This design is compliant with the open/closed principle.
This question has two parts:
By default, what URL protocols are considered valid for specifying resources to Cypher's LOAD CSV command?
So far, I've successfully loaded CSV files into Neo4j using the http and file protocols. A comment on this unrelated question indicates that ftp works as well, but I haven't tried it because I have no use case.
What practical options do I have to configure non-standard URL protocols? I'm running up against a Neo.TransientError.Statement.ExternalResourceFailure with "Invalid URL specified (unknown protocol)". Other than digging into the Neo4j source, is there any way to modify this validation/setting, provided that the host machine is capable of resolving the resource with the specified protocol?
Neo4j relies on the capabilities of the JVM. According to https://docs.oracle.com/javase/7/docs/api/java/net/URL.html the default protocols are:
http, https, ftp, file, jar
Please note that file URLs are interpreted from the server's point of view and not from the client side (a common source of confusion).
To use custom URLs you need to understand how the JVM deals with those. The javadocs for the URL class explain an approach using a system property to provide custom URL handlers. It should be good enough to provide this system property in neo4j-wrapper.conf and drop the jar file containing your handler classes into the plugins folder. (Note: I did not validate that approach myself, but I'm pretty confident that it will work.)
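For example (treat this as a sketch; the exact setting name depends on your Neo4j version, and the package name matches the handler example below), the JVM flag could be added like this:

# conf/neo4j.conf (Neo4j 3.x and later)
dbms.jvm.additional=-Djava.protocol.handler.pkgs=com.example.protocols

# or conf/neo4j-wrapper.conf (older versions)
wrapper.java.additional=-Djava.protocol.handler.pkgs=com.example.protocols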
Here is a complete example, using the technique of implementing your own URLStreamHandler to handle the resource protocol. You must name your class 'Handler', and the last segment of the package name must be the protocol name (in this case, resource)
src/main/java/com/example/protocols/resource/Handler.java:
package com.example.protocols.resource;

import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

public class Handler extends URLStreamHandler {

    private final ClassLoader classLoader;

    public Handler() {
        this.classLoader = getClass().getClassLoader();
    }

    @Override
    protected URLConnection openConnection(URL url) throws IOException {
        URL resource = classLoader.getResource(url.getPath());
        if (resource == null) {
            throw new FileNotFoundException("Resource file not found: " + url.getPath());
        }
        return resource.openConnection();
    }
}
From here, we need to set the system property java.protocol.handler.pkgs to include the base package com.example.protocols so that the protocol is registered. This can be done statically in a Neo4j ExtensionFactory. Since the class gets loaded by Neo4j, we know that the static block will be executed. We also need to provide our own URLAccessRule, since Neo4j by default only allows use of a few select protocols. This can also happen in the ExtensionFactory.
src/main/java/com/example/protocols/ProtocolInitializerFactory.java:
package com.example.protocols;

import org.neo4j.annotations.service.ServiceProvider;
import org.neo4j.graphdb.security.URLAccessRule;
import org.neo4j.kernel.extension.ExtensionFactory;
import org.neo4j.kernel.extension.ExtensionType;
import org.neo4j.kernel.extension.context.ExtensionContext;
import org.neo4j.kernel.lifecycle.Lifecycle;
import org.neo4j.kernel.lifecycle.LifecycleAdapter;

@ServiceProvider
public class ProtocolInitializerFactory extends ExtensionFactory<ProtocolInitializerFactory.Dependencies> {

    private static final String PROTOCOL_HANDLER_PACKAGES = "java.protocol.handler.pkgs";
    private static final String PROTOCOL_PACKAGE = ProtocolInitializerFactory.class.getPackageName();

    static {
        String currentValue = System.getProperty(PROTOCOL_HANDLER_PACKAGES, "");
        if (currentValue.isEmpty()) {
            System.setProperty(PROTOCOL_HANDLER_PACKAGES, PROTOCOL_PACKAGE);
        } else if (!currentValue.contains(PROTOCOL_PACKAGE)) {
            System.setProperty(PROTOCOL_HANDLER_PACKAGES, currentValue + "|" + PROTOCOL_PACKAGE);
        }
    }

    public interface Dependencies {
        URLAccessRule urlAccessRule();
    }

    public ProtocolInitializerFactory() {
        super(ExtensionType.DATABASE, "ProtocolInitializer");
    }

    @Override
    public Lifecycle newInstance(ExtensionContext context, Dependencies dependencies) {
        URLAccessRule urlAccessRule = dependencies.urlAccessRule();
        return LifecycleAdapter.onInit(() -> {
            URLAccessRule customRule = (config, url) -> {
                if ("resource".equals(url.getProtocol())) { // Check the protocol name
                    return url; // Optionally, you can validate the URL here and throw an exception if it is not valid or should not be allowed access
                }
                return urlAccessRule.validate(config, url);
            };
            context.dependencySatisfier().satisfyDependency(customRule);
        });
    }
}
After setting this up, follow the guide to packaging these classes as a Neo4j plugin and drop it into your database's plugins directory.
Admittedly, needing to override the default URLAccessRule feels a little bit shady. It may be better to simply implement the URLStreamHandler, and use another CSV loading method like APOC's apoc.load.csv. This will not require overriding the URLAccessRule, but it will require setting the Java system property java.protocol.handler.pkgs.
I am using Java EE 6 and need to load configuration from a ".properties" file. Is there a recommended way (best practice) to load the values from the configuration file using dependency injection? I found annotations for this in Spring, but I have not found a "standard" annotation for Java EE.
This guy has developed a solution from scratch:
http://weblogs.java.net/blog/jjviana/archive/2010/05/18/applicaction-configuration-java-ee-6-using-cdi-simple-example
"I couldn't find a simple example of how to configure your application
with CDI by reading configuration attributes from a file..."
But I wonder if there is a more standard way instead of creating a configuration factory...
Configuration annotation
package com.ubiteck.cdi;

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import javax.enterprise.util.Nonbinding;
import javax.inject.Qualifier;

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
public @interface InjectedConfiguration {
    /**
     * Bundle key
     * @return a valid bundle key or ""
     */
    @Nonbinding String key() default "";

    /**
     * Is it a mandatory property
     * @return true if mandatory
     */
    @Nonbinding boolean mandatory() default false;

    /**
     * Default value if not provided
     * @return default value or ""
     */
    @Nonbinding String defaultValue() default "";
}
The configuration factory could look like this:

import java.text.MessageFormat;
import java.util.MissingResourceException;
import java.util.ResourceBundle;
import javax.enterprise.inject.Produces;
import javax.enterprise.inject.spi.InjectionPoint;

public class ConfigurationInjectionManager {

    static final String INVALID_KEY = "Invalid key '{0}'";
    static final String MANDATORY_PARAM_MISSING = "No definition found for a mandatory configuration parameter : '{0}'";
    private final String BUNDLE_FILE_NAME = "configuration";
    private final ResourceBundle bundle = ResourceBundle.getBundle(BUNDLE_FILE_NAME);

    @Produces
    @InjectedConfiguration
    public String injectConfiguration(InjectionPoint ip) throws IllegalStateException {
        InjectedConfiguration param = ip.getAnnotated().getAnnotation(InjectedConfiguration.class);
        if (param.key() == null || param.key().length() == 0) {
            return param.defaultValue();
        }
        String value;
        try {
            value = bundle.getString(param.key());
            if (value == null || value.trim().length() == 0) {
                if (param.mandatory())
                    throw new IllegalStateException(MessageFormat.format(MANDATORY_PARAM_MISSING, new Object[]{param.key()}));
                else
                    return param.defaultValue();
            }
            return value;
        } catch (MissingResourceException e) {
            if (param.mandatory()) throw new IllegalStateException(MessageFormat.format(MANDATORY_PARAM_MISSING, new Object[]{param.key()}));
            return MessageFormat.format(INVALID_KEY, new Object[]{param.key()});
        }
    }
}
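For completeness, a hypothetical injection point using this qualifier could look like the following (the key name and class are just examples):

import javax.inject.Inject;
import com.ubiteck.cdi.InjectedConfiguration;

public class MailService {

    // Injects the value of the 'smtp.host' key from configuration.properties;
    // the producer above throws if a mandatory key is missing.
    @Inject
    @InjectedConfiguration(key = "smtp.host", mandatory = true)
    private String smtpHost;
}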
Tutorial with explanation and Arquillian test
Even though it does not exactly cover your question, this part of the Weld documentation might be of interest for you.
Having mentioned this - no, there is no standard way to inject arbitrary resources / resource files. I guess it's simply beyond the scope of a spec to standardise such a highly custom-dependent requirement (Spring is no specification, they can simply implement whatever they like). However, what CDI provides is a strong (aka typesafe) mechanism to inject configuration-holding beans on one side, and a flexible producer mechanism to read and create such beans on the other side. This is definitely the recommended way you were asking about.
The approach you are linking to is certainly a pretty good one - even though it might be too much for your needs, depending on the kind of properties you are planning to inject.
A very CDI-ish way of continuing would be to develop a CDI extension (that would nicely encapsulate all required classes) and deploy it independently with your projects. Of course you can also contribute to the CDI-extension catalog or even Apache Deltaspike.
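To illustrate the "configuration-holding bean plus producer" idea mentioned above, a minimal sketch (hypothetical names and properties file) could look like this:

import java.io.InputStream;
import java.util.Properties;
import javax.enterprise.inject.Produces;

public class MailConfigProducer {

    // Typesafe bean holding the configuration values; other beans can simply @Inject MailConfig
    public static class MailConfig {
        private final String host;
        private final int port;

        public MailConfig(String host, int port) {
            this.host = host;
            this.port = port;
        }

        public String getHost() { return host; }
        public int getPort() { return port; }
    }

    @Produces
    public MailConfig produceMailConfig() throws Exception {
        // Reads /mail.properties from the classpath (hypothetical file name)
        Properties props = new Properties();
        try (InputStream in = getClass().getResourceAsStream("/mail.properties")) {
            props.load(in);
        }
        return new MailConfig(props.getProperty("mail.host"),
                Integer.parseInt(props.getProperty("mail.port", "25")));
    }
}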
See @ConfigProperty of Apache DeltaSpike.
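For example, a minimal sketch (requires the DeltaSpike core modules on the classpath; the key name is hypothetical):

import javax.inject.Inject;
import org.apache.deltaspike.core.api.config.ConfigProperty;

public class PollingService {

    // Resolved from DeltaSpike's registered ConfigSources
    // (typically system properties, environment variables, META-INF/apache-deltaspike.properties)
    @Inject
    @ConfigProperty(name = "endpoint.poll.interval", defaultValue = "30")
    private String pollInterval;
}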
The only "standard" way of doing this would be to use a qualifier with a nonbinding annotation member, and make sure all of your injections are dependent scoped. Then in your producer you can get a hold of the InjectionPoint and get the key off the qualifier in the injection point. You'd want something like this:
@Qualifier
@Retention(RetentionPolicy.RUNTIME)
public @interface Property {
    @Nonbinding String value() default "";
}
...
@Inject @Property("myKey") String myKey;
...
@Produces @Property public String getPropertyByKey(InjectionPoint ip) {
    Set<Annotation> qualifiers = ip.getQualifiers();
    // Loop through the qualifiers looking for Property.class and save that off
    return ResourceBundle.getBundle(...).getString(property.value());
}
There are obviously some enhancements you can do to that code, but it should be enough to get you started down the right track.
I'm developing a web application running on Tomcat 6, with Flex as the frontend. I'm testing my backend with TestNG. Currently, I'm trying to test the following method in my Java backend:
public class UserDAO extends AbstractDAO {

    (...)

    public UserPE login(String mail, String password) {
        UserPE dbuser = findUserByMail(mail);
        if (dbuser == null || !dbuser.getPassword().equals(password))
            throw new RuntimeException("Invalid username and/or password");

        // Save logged in user
        FlexSession session = FlexContext.getFlexSession();
        session.setAttribute("user", dbuser);
        return dbuser;
    }
}
The method needs access to the FlexContext, which only exists when I run it in the servlet container (don't worry if you don't know Flex, it's more of a general Java mocking question). Otherwise I get a NullPointerException when calling session.setAttribute().
Unfortunately, I cannot set the FlexContext from outside, which would allow me to set it from my tests. It's just obtained inside the method.
What would be the best way to test this method with a mocking framework, without changing the method or the class which contains it? And which framework would be the easiest for this use case (there are hardly any other things I have to mock in my app, it's pretty simple)?
Sorry, I could try them all out myself and see how I could get this to work, but I hope I'll get a quick start with some good advice!
Obviously one approach is to refactor it in a way that lets you inject things like the FlexContext. However, this is not always possible. Some time ago a team I was part of hit a situation where we had to mock out some internal class stuff that we didn't have access to (like your context). We ended up using an API called JMockit which allows you to effectively mock individual methods, including static calls.
Using this technology we were able to get around a very messy server implementation and, rather than having to deploy to live servers and black-box test, we were able to unit test at a fine level by overriding the server technology that was effectively hard coded.
The only recommendation I would make about using something like JMockit is to ensure that in your test code there is clear documentation and separation of JMockit from your main mocking framework (EasyMock or Mockito would be my recommendations). Otherwise you risk confusing developers about the various responsibilities of each part of the puzzle, which usually leads to poor-quality tests or tests that don't work that well. Ideally, as we ended up doing, wrap the JMockit code into your testing fixtures so the developers don't even know about it. Dealing with one API is enough for most people.
Just for the hell of it, here's the code we used to fix testing for an IBM class. We basically needed to do the following:
Have the ability to inject our own mocks to be returned by a method.
Kill off a constructor that went looking for a running server.
Do the above without having access to the source code.
Here's the code:
import java.util.HashMap;
import java.util.Map;

import mockit.Mock;
import mockit.MockClass;
import mockit.Mockit;

import com.ibm.ws.sca.internal.manager.impl.ServiceManagerImpl;

/**
 * This class makes use of JMockit to inject its own version of the
 * locateService method into the IBM ServiceManager. It can then be used to
 * return mock objects instead of the concrete implementations.
 * <p>
 * This is done because the IBM implementation of SCA hard codes the static
 * methods which provide the component lookups and therefore there is no method
 * (including reflection) that developers can use to use mocks instead.
 * <p>
 * Note: we also override the constructor because the default implementations
 * also go after IBM setup which is not needed and will take a large amount of
 * time.
 *
 * @see AbstractSCAUnitTest
 *
 * @author Derek Clarkson
 * @version ${version}
 *
 */
// We are going to inject code into the service manager.
@MockClass(realClass = ServiceManagerImpl.class)
public class ServiceManagerInterceptor {

    /**
     * How we access this interceptor's cache of objects.
     */
    public static final ServiceManagerInterceptor INSTANCE = new ServiceManagerInterceptor();

    /**
     * Local map to store the registered services.
     */
    private Map<String, Object> serviceRegistry = new HashMap<String, Object>();

    /**
     * Before running your tests, make sure you call this method to start
     * intercepting the calls to the service manager.
     */
    public static void interceptServiceManagerCalls() {
        Mockit.setUpMocks(INSTANCE);
    }

    /**
     * Call to stop intercepting after your tests.
     */
    public static void restoreServiceManagerCalls() {
        Mockit.tearDownMocks();
    }

    /**
     * Mock default constructor to stop extensive initialisation. Note the $init
     * name which is a special JMockit name used to denote a constructor. Do not
     * remove this or your tests will slow down or even crash out.
     */
    @Mock
    public void $init() {
        // Do not remove!
    }

    /**
     * Clears all registered mocks from the registry.
     */
    public void clearRegistry() {
        this.serviceRegistry.clear();
    }

    /**
     * Override method which is injected into the ServiceManager class by
     * JMockit. Its job is to intercept the call to the serviceManager's
     * locateService() method and to return an object from our cache instead.
     * <p>
     * This is called from the code you are testing.
     *
     * @param referenceName
     *            the reference name of the service you are requesting.
     * @return the registered service, or null if none was registered.
     */
    @Mock
    public Object locateService(String referenceName) {
        return serviceRegistry.get(referenceName);
    }

    /**
     * Use this to store a reference to a service. Usually this will be a
     * reference to a mock object of some sort.
     *
     * @param referenceName
     *            the reference name you want the mocked service to be stored
     *            under. This should match the name used in the code being tested
     *            to request the service.
     * @param serviceImpl
     *            this is the mocked implementation of the service.
     */
    public void registerService(String referenceName, Object serviceImpl) {
        serviceRegistry.put(referenceName, serviceImpl);
    }
}
And here's the abstract class we used as a parent for tests.
public abstract class AbstractSCAUnitTest extends TestCase {

    protected void setUp() throws Exception {
        super.setUp();
        ServiceManagerInterceptor.INSTANCE.clearRegistry();
        ServiceManagerInterceptor.interceptServiceManagerCalls();
    }

    protected void tearDown() throws Exception {
        ServiceManagerInterceptor.restoreServiceManagerCalls();
        super.tearDown();
    }
}
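For illustration, a test using this fixture might look like the following (the test class and reference name are hypothetical):

public class CustomerComponentTest extends AbstractSCAUnitTest {

    public void testLocateServiceReturnsRegisteredMock() {
        // Register a stand-in (normally an EasyMock/Mockito mock) under the reference
        // name that the production code passes to locateService(...)
        Object customerServiceMock = new Object();
        ServiceManagerInterceptor.INSTANCE.registerService("CustomerService", customerServiceMock);

        // Any code exercised by this test that internally asks the service manager
        // for "CustomerService" now receives customerServiceMock instead of a live SCA reference.
    }
}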
Thanks to Derek Clarkson, I successfully mocked the FlexContext, making the login testable. Unfortunately, it's only possible with JUnit, as far as I can see (I tested all versions of TestNG with no success - the JMockit javaagent does not like TestNG, see this and this issue).
So this is how I'm doing it now:
public class MockTests {

    @MockClass(realClass = FlexContext.class)
    public static class MockFlexContext {

        @Mock
        public FlexSession getFlexSession() {
            System.out.println("I'm a Mock FlexContext.");
            return new FlexSession() {
                @Override
                public boolean isPushSupported() {
                    return false;
                }

                @Override
                public String getId() {
                    return null;
                }
            };
        }
    }

    @BeforeClass
    public static void setUpBeforeClass() throws Exception {
        Mockit.setUpMocks(MockFlexContext.class);
        // Test user is registered here
        (...)
    }

    @Test
    public void testLoginUser() {
        UserDAO userDAO = new UserDAO();
        assertEquals(userDAO.getUserList().size(), 1);
        // no NPE here
        userDAO.login("asdf@asdf.de", "asdfasdf");
    }
}
For further testing I now have to implement things like the session map myself. But that's okay, as my app and my test cases are pretty simple.
Can anybody explain the concept of a pluggable adapter to me with a good example?
From what I understood from a quick reading of Google results, a pluggable adapter is an adapter that isn't hard-coded against a specific adaptee. On the surface (the adapter's own interface), it's all the same but it can adapt to different adaptees with different interfaces. I found this thread pretty explanatory:
Basically, it allows you to put in an
adapter when the adaptee (receiver)
protocol is not known at compile time
by using reflection. When you create
the adapter instance, you pass it the
name of the adaptee's method to call,
and also any metadata that's necessary
to translate input types. When the
adapter receives a method call of the
target interface, it uses reflection
to call the corresponding method
specified on the adaptee.
And this:
The main responsibility of the Viewer
is to populate a widget from a domain
model without making any assumptions
about domain itself. JFace viewer uses
the Delegating Objects mechanism in
Pluggable Adapter Pattern to implement
the above requirement.
Think of it as a facehugger from Alien; when it hugs a face, all you see is the slimy back of the facehugger. You can poke it with a stick and try to pry off its arms (the adapter interface). But it basically can hug the face of any human (the adaptee), regardless of the face features. Maybe I'm pushing it a bit, but, hey, I love Alien.
You can read this article about the adapter/pluggable pattern.
Table of contents of the article:
* 1 Design Patterns
* 2 Intent of Adapter
* 3 Motivation
* 4 Structure
* 5 Applicability
* 6 Consequences
* 7 Implementation
o 7.1 Known Uses and Sample Code
o 7.2 Related Patterns
* 8 Conclusions
* 9 Appendix
o 9.1 References
o 9.2 Glossary
Quote:
Smalltalk introduced the concept of a
"pluggable adapter" to describe
classes with built-in interface
adaptation. This interesting concept
allows for classes to be introduced
into existing systems that might
expect different interfaces to the
class. This technique can help promote
class reuse across modules and even
projects.
Here is a small example:
We have two classes - Foo and Boo - that output some string to the console. The Adapter class can adapt methods from both classes to provide the interface (SaySomething) required by the client. Note that there is no dependency on the interface name - we can easily adapt both the SayHey and Bark methods.
class Foo
{
    public static void SayHey() { Console.WriteLine("Hey!"); }
}

class Boo
{
    public static void Bark() { Console.WriteLine("Woof!"); }
}

class Adapter
{
    public Action SaySomething { get; private set; } // "pluggable" adapter

    public Adapter(Action saySomethingAction)
    {
        SaySomething = saySomethingAction;
    }
}

class Program
{
    static void Main(string[] args)
    {
        (new Adapter(Foo.SayHey)).SaySomething();
        (new Adapter(Boo.Bark)).SaySomething();
    }
}
A distinguishing feature of the Pluggable Adapter is that the method called by the client and the method existing in the adaptee's interface can be different:
interface Ilegacy
{
    float calculate(int a, int b);
}

class Legacy : Ilegacy
{
    public float calculate(int a, int b)
    {
        return a * b;
    }
}

class Adapter
{
    public Func<int, int, float> legacyCalculator;

    public Adapter()
    {
        this.legacyCalculator = new Legacy().calculate;
    }
}

class Client
{
    static void Main()
    {
        float result = new Adapter().legacyCalculator(5, 6);
    }
}
This can normally be achieved with the use of a delegate, Func, or Action in C#.
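The same idea can be sketched in Java with method references (hypothetical classes, mirroring the C# examples above):

import java.util.function.Supplier;

// The adaptee method is plugged in at construction time via a method reference,
// so the adapter is not hard-coded against one particular adaptee.
class Foo {
    static String sayHey() {
        return "Hey!";
    }
}

class Boo {
    static String bark() {
        return "Woof!";
    }
}

class Adapter {
    private final Supplier<String> saySomething;

    Adapter(Supplier<String> saySomething) {
        this.saySomething = saySomething;
    }

    String saySomething() {
        return saySomething.get();
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        System.out.println(new Adapter(Foo::sayHey).saySomething()); // Hey!
        System.out.println(new Adapter(Boo::bark).saySomething());   // Woof!
    }
}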