In my Symfony2 app I have a very basic bundle named AnimalsBundle with a very basic entity.
I can successfully extend this bundle by creating a new bundle, MammalsBundle, via bundle inheritance. However, it is not possible to register a further bundle, InsectsBundle, that also extends AnimalsBundle. Whenever I try to do this, Symfony throws a
[LogicException]
Bundle "AnimalsTextBundle" is directly extended by two bundles "MammalsBundle" and "InsectsBundle".
So out of the box it's obviously not allowed. I'm not really sure why it is not allowed and, most importantly, how can I solve this?
I know it's been more than a year now, but I just came across your question and the answer may be useful to someone anyway.
Symfony doesn't allow a bundle to be extended directly by more than one bundle, simply because if two bundles overrode the same files, it wouldn't be possible to determine which bundle should be used. However, you can achieve what you want by doing the following:
AnimalsBundle <|---- MammalsBundle <|----- InsectsBundle
This way InsectsBundle indirectly has AnimalsBundle as a parent and can override files from it.
I know it has been a long time since the question was asked, but for those in need of an answer I'll suggest the following:
1) Try to avoid bundle inheritance except in cases where you are 100% positive that you need it.
2) Given the example in the question, a better setup would look something like this: you have a CreatureBundle, which consists mostly of abstract classes and interfaces. Make each descendant bundle depend on CreatureBundle and implement each creature's specific code against those abstract classes and interfaces in CreatureBundle (see the sketch just below).
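A rough, language-agnostic sketch of that setup, written here in C# purely for illustration (all type names are hypothetical; in Symfony the abstractions would live in CreatureBundle and the implementations in the descendant bundles):

// Shared "CreatureBundle" equivalent: only abstractions live here.
public interface ICreature
{
    string Describe();
}

// Descendant "MammalsBundle" equivalent: it depends on the shared abstractions
// instead of inheriting from the shared bundle.
public class Mammal : ICreature
{
    public string Describe() => "A warm-blooded creature.";
}

// Descendant "InsectsBundle" equivalent: another independent dependant.
public class Insect : ICreature
{
    public string Describe() => "A six-legged creature.";
}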
Based on personal experience I can tell that managing dependencies is a much easier task than managing inheritance issues when something goes wrong. If you ever need to alter the ancestor bundle's logic, you'll save yourself a lot of time by not having to dig through inherited code and alter the same logic in every descendant bundle.
Edit: Although my suggestion might lead to tighter coupling and basically contradicts Symfony's latest best-practices guide (which states that a single bundle per app is best practice), in the end you'll find that this approach makes code maintenance easier.
I can't think of a use case where you could possibly need to do this. Bundles are meant to be almost like standalone applications. You have the dependency injection container at your disposal if you need resources from another bundle.
Perhaps you should re-think your project structure.
Suppose you have a schema that is used in a UI app (e.g. Vue), in a Node.js or Spring Boot server, has to validate against databases (e.g. SQL, MongoDB, ...), and maybe in some microservices running on whatever.
How and where do I manage this JSON Schema, so that if I have to change the schema for whatever reason, every architectural component can handle the new JSON Schema(s)?
Otherwise I need to update the schema in up to 10 projects so that none becomes incompatible.
Is it really as simple as having a git project containing just the JSON Schemas, or do I need specific loaders for each language/environment?
Are there best practices that I am unaware of?
PS: I don't really think I need the schemas automatically synchronized at runtime, so I don't think I need another microservice to achieve that.
That being said, if a microservice is the best way to go, then a microservice it is.
If you keep them in a git project, how do you load them? Clone the project each time the app starts? It may work, but I would go with a more flexible approach that shouldn't take too much effort to build:
Build a JSON schema repository accessible via a REST API
When the app starts, it makes a request to grab the schema (latest, or a specific version)
That way you get a uniform (and scalable) way of working with the schemas. Even if you think about hot-reloading sometime in the future, you can leverage this approach to do that.
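A minimal sketch of that startup fetch, written in C# purely for illustration; the repository URL, schema name, and version parameter are all assumptions, and any HTTP client in your own stack would do the same job:

using System.Net.Http;
using System.Threading.Tasks;

public static class SchemaLoader
{
    private static readonly HttpClient Http = new HttpClient();

    // Fetch a named schema (latest or a pinned version) from the schema repository
    // once at startup, then keep it for the lifetime of the process.
    public static async Task<string> LoadAsync(string name, string version = "latest")
    {
        var url = $"https://schemas.example.com/{name}/{version}";   // hypothetical endpoint
        var response = await Http.GetAsync(url);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}

At startup each component would call something like SchemaLoader.LoadAsync("order", "1.2.0") and hand the result to whatever validator it already uses.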
Here is an old project in this direction, you may give it a shot to see if it works (or for some inspiration, at least)
Sorry if this is a generic question, but I have been given a task to develop a web application, and I'd like to use this opportunity to dive into and learn about Polymer (and maybe Vaadin components?).
I'd like to avoid reinventing the wheel. But I'm a newbie regarding Polymer. So, given the following task, is there any approach or component that will make development quicker/smarter?
Create an application which will allow at least two users to log in simultaneously and manage items in categories. The categories should be in a hierarchy of potentially infinite depth. The items only require a label.
The users should be able to perform standard CRUD, plus if one user makes a change, the other user(s) should see the change (if appropriate) without manually refreshing their web browser.
How should I approach this with Polymer?
Has anyone done anything similar?
I'm also open to Vaadin components if it helps.
Any help or guideline?
🙏
One way would be to start from an application template like polymer3-webpack-starter or pwa-starter-kit.
If you are looking to use Vaadin components like vaadin-grid, use polymer3-webpack-starter or one of the starters from the vaadin.com/start page.
If you do not need an example specifically with Vaadin components, then pwa-starter-kit would be a good starting point. Though it assumes familiarity with redux.
Pro: You can quickly get a running application that you can modify to your needs, and you do not have to set up a project from scratch (build toolchain, module bundler, tests, configuration, etc. - all of that is done already).
Con: Making modifications to the project setup won't necessarily be easy, because at that point you will have to dive into the project setup that somebody else has done for you.
I have defined a list_t in my project, with a list module API such as list_pop(). But now I have to use the MySQL library to communicate with the DB, and the MySQL library has its own list implementation that also defines a list_pop() API. In my other modules I have to link both of them, and that is where the conflict comes from.
One solution is to include separate header files for the different list APIs, and this works well. But when some function needs to call both MySQL::list_pop() and local::list_pop(), how do I tell the compiler which symbol to link? Is there some GCC trick that can do this without any changes to local::list_pop()?
For most practical purposes, you are going to have to rename one or the other set of functions. It is probably easier to rename your own than those of MySQL.
The simplest approach is to simply add a prefix that has a higher probability of being unique (enough), such as your initials, or the codename of your project, or something. Or you can rename everything to avoid collisions, being aware that MySQL might add a new function in the future.
This is exactly why namespaces were invented for C++, and why C projects usually have systematic prefixes on sets of functions.
There is a way to solve this. Refactor your list_pop() to, say, my_list_pop().
There is one other way to solve this.
Looking at the header of MySQL's my_list.h here, https://github.com/lgsonic/mysql-trigger/blob/master/mysql/my_list.h, you can see that list_pop is just a macro, and it is bound at compile time, not at runtime (hence it is not a real library function). Changing MySQL's list_pop to list_pop_my (just in the #define) can make it do what you want.
I currently put them in an 'interfaces' folder, but this doesn't help readability for people who don't know the code base any more than lumping all of your implementation classes into a folder called 'implementation' would.
How do you logically organize your project's interfaces?
I assume you're talking about the kind of interfaces that classes implement in OO languages.
I'd say it's better to name the folder by function, if you really want to separate the interface from implementing classes - call the folder 'listeners' or whatever these interfaces represent. The fact they're interfaces (or abstract classes) should be obvious from the way they're named and used.
Then again, if it's not some form of framework other people will use, but you just end up with an interface and two or three implementing classes that you write and leave be, you might as well stick them all together in the same package. I don't think that making a package for a single class/interface does much for clarity.
Not part of the question but I'll write it anyway - I'm also not a fan of the "I" prefix for interfaces. If it's not obvious without it, then it could probably use a different name/structure.
After reading the nice answers in this question, I watched the screencasts by Justin Etheredge. It all seems very nice, with a minimum of setup you get DI right from your code.
Now the question that creeps up on me is: why would you want to use a DI framework that doesn't use configuration files? Isn't the whole point of using a DI infrastructure that you can alter the behaviour (the "strategy", so to speak) after building/releasing the code?
Can anyone give me a good use case that validates using a non-configured DI like Ninject?
I don't think you want a DI-framework without configuration. I think you want a DI-framework with the configuration you need.
I'll take spring as an example. Back in the "old days" we used to put everything in XML files to make everything configurable.
When switching to a fully annotated regime, you basically define which component roles your application contains. A given service may, for instance, have one implementation for the "regular runtime", while another implementation belongs in the "stubbed" version of the application. Furthermore, when wiring for integration tests you may be using a third implementation.
When looking at the problem this way, you quickly realize that most applications only contain a very limited set of component roles at runtime - these are the things that actually cause different versions of a component to be used. And usually a given implementation of a component is always bound to this role; it is really the reason of existence of that implementation.
So if you let the "configuration" simply specify which component roles you require, you can get away without much more configuration at all.
Of course, there's always going to be exceptions, but then you just handle the exceptions instead.
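The answer talks about Spring, but the "component role" idea is framework-agnostic. A rough sketch in C# (the types and the role names are invented for illustration):

public interface IPaymentGateway { void Charge(decimal amount); }

// One implementation per role: the regular runtime vs. the stubbed application.
public class RealPaymentGateway : IPaymentGateway
{
    public void Charge(decimal amount) { /* call the real payment provider */ }
}

public class StubPaymentGateway : IPaymentGateway
{
    public void Charge(decimal amount) { /* record the call, charge nothing */ }
}

public static class Wiring
{
    // The only "configuration" is the role the application runs in;
    // every binding follows from that single choice.
    public static IPaymentGateway CreateGateway(string role) =>
        role == "stubbed" ? new StubPaymentGateway() : new RealPaymentGateway();
}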
I'm on a path with krosenvold here, only with less text: within most applications, you have exactly one implementation per required "service". We simply don't write applications where each object needs 10 or more implementations of each service. So it would make sense to have a simple way to say "this is the default implementation; 99% of all objects using this service will be happy with it".
In tests, you usually use a specific mockup, so no need for any config there either (since you do the wiring manually).
This is what convention-over-configuration is all about. Most of the time, the configuration is simply a dumb repetition of something that the DI framework should know already :)
In my apps, I use the class object as the key to look up implementations and the "key" happens to be the default implementation. If my DI framework can't find an override in the config, it will just try to instantiate the key. With over 1000 "services", I need four overrides. That would be a lot of useless XML to write.
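A minimal sketch of that lookup rule; this is not the author's actual framework, just the "if no override is configured for the key type, instantiate the key type itself" fallback:

using System;
using System.Collections.Generic;

public class SimpleResolver
{
    // Overrides loaded from configuration: key type -> implementation type.
    private readonly Dictionary<Type, Type> _overrides = new Dictionary<Type, Type>();

    public void Override(Type key, Type implementation) => _overrides[key] = implementation;

    public object Resolve(Type key)
    {
        // With no override, the key class itself is the default implementation.
        var implementation = _overrides.TryGetValue(key, out var mapped) ? mapped : key;
        return Activator.CreateInstance(implementation);
    }
}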
With dependency injection, unit tests become very simple to set up, because you can inject mocks instead of real objects into your object under test. You don't need configuration for that; just create and inject the mocks in the unit test code.
I received this comment on my blog, from Nate Kohari:
Glad you're considering using Ninject!
Ninject takes the stance that the configuration of your DI framework is actually part of your application, and shouldn't be publicly configurable. If you want certain bindings to be configurable, you can easily make your Ninject modules read your app.config.
Having your bindings in code saves you from the verbosity of XML, and gives you type-safety, refactorability, and intellisense.
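For context, a small sketch of what such code-based bindings look like in a Ninject module; the mail service types are made up, and reading app.config is just one way to keep a binding configurable after deployment, as the comment suggests:

using System.Configuration;
using Ninject;
using Ninject.Modules;

public interface IEmailService { void SendEmailTo(string address); }
public class SmtpEmailService : IEmailService { public void SendEmailTo(string address) { /* real SMTP send */ } }
public class FakeEmailService : IEmailService { public void SendEmailTo(string address) { /* no-op for testing */ } }

public class MailModule : NinjectModule
{
    public override void Load()
    {
        // The binding lives in code, but the choice can still come from app.config.
        var useFakeMail = ConfigurationManager.AppSettings["UseFakeMail"] == "true";

        if (useFakeMail)
            Bind<IEmailService>().To<FakeEmailService>();
        else
            Bind<IEmailService>().To<SmtpEmailService>();
    }
}

// Composition root:
// var kernel = new StandardKernel(new MailModule());
// var mail = kernel.Get<IEmailService>();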
You don't even need to use a DI framework to apply the dependency injection pattern. You can simply make use of static factory methods for creating your objects, if you don't need any configurability apart from recompiling the code.
So it all depends on how configurable you want your application to be. If you want it to be configurable/pluggable without recompiling the code, you'll want something you can configure via text or XML files.
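A tiny sketch of that static factory approach (hypothetical types; changing the wiring means editing this one class and recompiling):

using System;

public interface IClock { DateTime Now { get; } }
public class SystemClock : IClock { public DateTime Now => DateTime.Now; }

public static class ServiceFactory
{
    // All wiring lives in code; swapping the implementation is a one-line change
    // followed by a recompile.
    public static IClock CreateClock() => new SystemClock();
}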
I'll second the use of DI for testing. I only really consider using DI at the moment for testing, as our application doesn't require any configuration-based flexibility - it's also far too large to consider at the moment.
DI tends to lead to cleaner, more separated design - and that gives advantages all round.
If you want to change the behavior after a release build, then you will need a DI framework that supports external configurations, yes.
But I can think of other scenarios in which this configuration isn't necessary: for example, controlling how components are injected into your business logic, or using a DI framework to make unit testing easier.
You should read about PRISM in .NET (it is the best-practices guidance for building composite applications in .NET). Under these best practices, each module "exposes" its implementation types inside a shared container. This way each module has a clear responsibility for "who provides the implementation for this interface". I think it will become clear once you understand how PRISM works.
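Very roughly, the idea looks like the sketch below. This is not the actual PRISM API (in PRISM the module registers its types in the shared container it is given); all names here are hypothetical:

using System;
using System.Collections.Generic;

public interface IReportService { }
public class PdfReportService : IReportService { }

// Each module exposes its own implementation types into a shared container.
public interface IAppModule
{
    void RegisterTypes(IDictionary<Type, Type> sharedContainer);
}

public class ReportingModule : IAppModule
{
    public void RegisterTypes(IDictionary<Type, Type> sharedContainer)
    {
        // This module alone declares who provides the implementation for IReportService.
        sharedContainer[typeof(IReportService)] = typeof(PdfReportService);
    }
}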
When you use inversion of control, you are helping your class do as little as possible. Let's say you have some Windows service that waits for files and then performs a series of processes on each file. One of the processes is to ZIP the file and then email it.
public class ZipProcessor : IFileProcessor
{
    private readonly IZipService ZipService;
    private readonly IEmailService EmailService;

    public ZipProcessor(IZipService zipService, IEmailService emailService)
    {
        ZipService = zipService;
        EmailService = emailService;
    }

    public void Process(string fileName)
    {
        ZipService.Zip(fileName, Path.ChangeExtension(fileName, ".zip"));
        EmailService.SendEmailTo(................);
    }
}
Why would this class need to actually do the zipping and the emailing when you could have dedicated classes to do this for you? Obviously it wouldn't, but that's only a lead-up to my point :-)
In addition to not implementing the zip and email logic itself, why should the class know which class implements each service? If you pass interfaces to the constructor of this processor, then it never needs to create an instance of a specific class; it is given everything it needs to do the job.
Using a D.I.C. (dependency injection container), you can configure which classes implement certain interfaces and then just ask the container to create an instance for you; it will inject the dependencies into the class.
var processor = Container.Resolve<ZipProcessor>();
So now not only have you cleanly separated the class's own functionality from the shared functionality, but you have also prevented the consumer and provider from having any explicit knowledge of each other. This makes the code easier to understand, because there are fewer factors to consider at the same time.
Finally, when unit testing you can pass in mocked dependencies. When you test your ZipProcessor, your mocked services will merely assert that the class attempted to send an email, rather than really sending one.
//Mock the ZIP
var mockZipService = MockRepository.GenerateMock<IZipService>();
mockZipService.Expect(x => x.Zip("Hello.xml", "Hello.zip"));
//Mock the email send
var mockEmailService = MockRepository.GenerateMock<IEmailService>();
mockEmailService.Expect(x => x.SendEmailTo(.................));
//Test the processor
var testSubject = new ZipProcessor(mockZipService, mockEmailService);
testSubject.Process("Hello.xml");
//Assert it used the services in the correct way
mockZipService.VerifyAllExpectations();
mockEmailService.VerifyAllExpectations();
So, in short, you would want to do it to:
01: Prevent consumers from knowing explicitly which provider implements the services they need, which means there's less to understand at once when you read the code.
02: Make unit testing easier.
Pete