MVC - Theory Question - actionscript-3

This is more of a theoretical question rather than language specific, but I'm mostly referring to AS3.
Let's say I have an app that shows buttons through a couple of levels of navigation.
For example, artistsAlphabetical->albums->songs, songsAlphabetical->song, or albumsAlphabetical->album->song.
So we'd set up a Model (data) with all the songs (the lowest common denominator; each song object would carry the artist/album/year etc. info within it).
Let's say we don't have a dataset for the albums and need to extract it from the songs, or we do have a dataset and need to match them up.
Where should that fall: Model or Controller?
I can find reasons for either, but am wondering what the "more correct" way would be.
Next we have the View with all the buttons to do the sorting.
When you click the sort button "Artists - A", the View sends a message to the Controller to do something (get an alphabetical list of artists that start with A). The end result should be a new view (paginated if necessary) with buttons for each artist.
Which part of MVC should be responsible for each step?
Generating a list of artists that start with A:
Would it be the Controller that says, 'Hey Model - artists that start with A, NOW!'?
Or would it be more like, 'Model, send me a list of all the artists; I need to find the A-dawgs'?
Essentially, should the Model do the sorting (and cache it if possible/needed), or should that logic live in the Controller, with the Model storing the cache?
So once we have a list of the artists, we need to create a button for each one that fits on the screen, plus some previous/next buttons. Who should be creating the buttons?
Should there be a View Controller - a sub-controller that only deals with the logic needed to create the views? Or would that logic be part of the View? It seems to me that the app's Controller would be all like 'not in my job description... I gave you the list you needed; the rest is up to you'.
I feel like I'm torn between two views of MVC: one (mvC) where the Controller is working like a motherlicker, the Model is a glorified data holder, and the View manages DisplayObjects; or one (MVc) where the Controller is the manager/delegator that makes sure the Model and the View communicate properly, so the Model and View carry a fair amount of logic and the Controller handles communication and delegates top-level interaction.
While I am doing most of my stuff in AS3, I am curious how other languages would handle this. For example, in PHP frameworks you wouldn't need to worry about button logic and event handling as much as with AS3, where you need to watch garbage collection, so the workload of a given component there might be different than in a Cinder (C++), Processing, or ActionScript app.

I don't know AS3. I do both Spring MVC/Java, which really lets you do either approach, and also Ruby on Rails, which favors smart models and dumb controllers. I like the smart model/dumb controller approach better.
So to me, anything to do with sorting and filtering the data should absolutely be done in the model, as should the process of extracting Albums from a list of Songs.
In Rails, I would put the process of creating the set of buttons in the view layer without question. If there's actual logic involved, that would go into a view helper method.
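For illustration, here is a minimal TypeScript sketch of this smart-model approach (MusicModel, artistsStartingWith, and the view interface are hypothetical names, not anything from the question): the model owns the song data and the filtering/sorting, and the controller only forwards the request.
// Hypothetical smart model: it owns the data and the query logic.
interface Song { title: string; artist: string; album: string; year: number; }
class MusicModel {
    constructor(private songs: Song[]) {}
    // Filtering/sorting lives in the model; the result could also be cached here.
    artistsStartingWith(letter: string): string[] {
        const artists = Array.from(new Set(this.songs.map(s => s.artist)));
        return artists
            .filter(a => a.toUpperCase().startsWith(letter.toUpperCase()))
            .sort();
    }
}
// The controller just forwards the request and hands the result to the view.
class MusicController {
    constructor(private model: MusicModel,
                private view: { showArtists(artists: string[]): void }) {}
    onSortByArtistClicked(letter: string): void {
        this.view.showArtists(this.model.artistsStartingWith(letter));
    }
}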

As you stated already, this comes down mostly to preference, but typically I would opt for the sorting to be done in the controller. I tend to keep my models purely about data storage and reduce the amount of functionality in them. So in your example, the model's job would be to hold all of the artists' data, and the controller's job would be to extract the artists starting with "A" from that data and then pass the result to the view.
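By contrast with the previous answer, here is a minimal TypeScript sketch of this dumb-model variant (again with hypothetical names): the model only holds data, and the controller does the extraction before handing it to the view.
// Hypothetical dumb model: a plain data holder with no behaviour.
class ArtistModel {
    constructor(public artists: string[]) {}
}
class ArtistController {
    constructor(private model: ArtistModel,
                private view: { showArtists(artists: string[]): void }) {}
    // The filtering logic lives in the controller here.
    onSortByArtistClicked(letter: string): void {
        const filtered = this.model.artists
            .filter(a => a.toUpperCase().startsWith(letter.toUpperCase()))
            .sort();
        this.view.showArtists(filtered);
    }
}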

I don't use traditional MVC anymore, so I tend to think more in terms of PureMVC (proxies, mediators, and commands) now.
In general, I tend towards more functional proxies (think model) where the data gets abstracted (for the typical reasons for abstracting data). So, in your case, there would be a MusicLibrary proxy, which would probably have methods like byArtist(), byTitle(), byRandom(), etc.
In a PureMVC app, a sequence for your scenario would look like:
Component catches mouse click and sends Mouse.CLICK
The Mediator wrangling the component catches the event and sends the SORTBY_ARTIST_CLICKED notification
A Command registered for SORTBY_ARTIST_CLICKED calls the byArtist() method on the proxy, builds a NEW_MUSIC_LIST notification with the data, and sends it
The Mediator listening for NEW_MUSIC_LIST then does its thing and updates the components it is wrangling
I think this aligns with what JacobM describes as smart model / dumb controller. I think this approach leads to better reuse, and also better isolates the impact of changes.
I will say that in general, if I have different views of the same data (like in your case), I will use commands to push out the data, but if a view just gets a data update I will often have the view just pull the data from a proxy.
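To make the sequence above concrete, here is a rough TypeScript sketch of that notification flow. It is written against a generic notification bus rather than the real PureMVC classes; NotificationBus, MusicLibraryProxy, and the wiring are illustrative only.
// Generic notification bus standing in for the framework's facade machinery.
type Handler = (body?: unknown) => void;
class NotificationBus {
    private handlers = new Map<string, Handler[]>();
    register(note: string, handler: Handler): void {
        this.handlers.set(note, [...(this.handlers.get(note) ?? []), handler]);
    }
    send(note: string, body?: unknown): void {
        (this.handlers.get(note) ?? []).forEach(h => h(body));
    }
}
// The proxy abstracts the music data and exposes query methods.
class MusicLibraryProxy {
    constructor(private songs: { title: string; artist: string }[]) {}
    byArtist(): { title: string; artist: string }[] {
        return [...this.songs].sort((a, b) => a.artist.localeCompare(b.artist));
    }
}
// Wiring: the mediator turns the click into SORTBY_ARTIST_CLICKED, a command
// asks the proxy for data and broadcasts NEW_MUSIC_LIST, and the mediator
// listening for NEW_MUSIC_LIST updates the components it wrangles.
const bus = new NotificationBus();
const proxy = new MusicLibraryProxy([{ title: "Song", artist: "Artist" }]);
bus.register("SORTBY_ARTIST_CLICKED", () => bus.send("NEW_MUSIC_LIST", proxy.byArtist()));
bus.register("NEW_MUSIC_LIST", list => console.log("update components with", list));
bus.send("SORTBY_ARTIST_CLICKED"); // the mediator reacting to Mouse.CLICK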

Related

Reagent - methods of storing ui state together with (or separate from?) server-persisted state?

I'm using multiple atoms within a map called app-state, and it's working quite well architecturally so far. The state distributed across those atoms is normalised, reflecting how it is stored in Datomic, and of course what the client is initialised with is a specific subset of what's in Datomic. This is me preparing the way to try out DataScript (which is what gave me the aha moment of why client state is so much better fully normalised, even if not using DataScript).
I have a question at this point. We all know that some state in reagent is a reflection of what's in the server's database (typically), but there's also state in reagent concerning solely the current condition of the ui. That state will vanish when the page is re-loaded and there's (typically) no need to store that on the server.
So, I'm looking at my list of atoms and realising that I have some atoms which hold database-record-like maps, i.e. they contain exact reflections of Datomic entities (which arrive via Transit), which is great.
But now I notice I also want some ui state per datomic entity.
So the question arises whether to add some keys to what came from Datomic for the UI state that is irrelevant to Datomic but that the client needs (i.e. dump it into the same nested map). That is entirely possible, but seems wrong to me, which suggests my idea as of now: how about a parallel atom per "entity", like #<entity-name>-ui, containing a map (or even a vector of maps, if there are multiple entities) with a set of keys for UI state?
That seems an improvement on what I have ended up with by default as of now, which is a separate atom for every piece of UI state (I've avoided component-local state up to now). (Currently the UI only holds UI state for one record at a time, so these UI atoms need only be concerned with a single current entity.)
But if, say, I made a parallel atom (to avoid mixing ephemeral ui and server state), then ui state could perhaps manageably extend deeper. We could hold, say, ui state per entity so switching current-entity back and forth would remember ui state.
Since this is Stack Overflow, I have to ask a specific question rather than just open a discussion, so: given what I've described, what are some sensible architectural choices in this case for storing state in Reagent?
If you are storing your app state in several component-independent Reagent atoms already, you can check out https://github.com/day8/re-frame, a widely adopted Reagent-powered framework built exactly for your case. Essentially it stores all the application state in a single Reagent atom but has a well-developed infrastructure to support coordinated storage and updates. They have brilliant documentation with a great high-level explanation of the idea.
Regarding your initial question about server/UI state separation - I think you should definitely go this way. It'll give you better separation of concerns and an easier way to update server and UI data independently. It is very easy to achieve this with re-frame by storing both parts of the state under separate top-level keys in the re-frame application db. E.g.
{:server {:entity-name ...}
:ui {:entity-name ...}}
and then create suitable subscriptions (see the re-frame docs) to retrieve it.

Is it anti-pattern to alter domain model on front end?

We are making a quiz application, I'm trying to integrate my Angular 2 UI with the REST api.
Our Quiz domain model consist of the following (simplified) hierarchy:
Quiz
Category
Question
Choice
where a parent doesn't know its children, but a child knows its parent. For example, a Choice has a reference to its Question, but a Question has no reference to its Choices. We chose this approach to make fetching the quiz data more flexible and modular, and to avoid circular references.
However, on the front end it's counter-intuitive to use this inverted linking, as views are built naturally layer by layer, iterating deeper into the domain object structure. It makes sense to render the view for a Question first, and render the sub-view for its Choices after. That seems impossible with the current domain model, where I would have to start from Choice.
My question is whether it's common or acceptable to convert the domain model on the front end, so I would gather all the data and add Choice references to each Question afterwards, making the model compatible with a top-down approach? And of course convert it back when POSTing to the REST API.
Does this indicate bad design, or is it approved to alter the domain model?
It's not "convert domain model on front-end" because your front-end is not playing with domain model. It's playing with Data Transfer Object (json object returned from calling server API in this case). So feel free to do everything you want with DTO on client.

Multiple data models in AngularJS app?

I'm currently rebuilding a client's small mobile page, on my own dime, as a way to learn MVC and AngularJS, and in preparation for a much larger project for another client. The page I'm learning on is a single-page online ordering app for a pita restaurant. It allows you to choose from a list of pitas, then a second View allows you to customize your pita with various vegetables, cheeses, sauces, etc., followed by a third View, a form which allows you to submit your order.
My question is about the Model. I've built a system which takes the output from the database of menu items (pitas and toppings) and generates a JSON file, the Model, for Angular to parse and display for the user, but I'm curious where the user's selections will be stored.
Do I create another Model specifically for the user's selections or do I modify the original Model to hold the user's selections in addition to the original data?
This is a somewhat complicated question in the sense that there are many variables. I am just going to lay out how I would approach the decision:
Modifying the existing model being passed to the client means a few things.
First, it means that I won't have another model to deal with, but also that the model is no longer focused on one thing (an SRP violation). It is specifying a model in my domain as well as the relationship it has to another thing, the order. This leads to the object being in odd states, in the sense that it represents different things at different times/situations.
Second, I will have to do this to all of my models that may be connected to an order, multiplying the above problem.
Creating a new model to represent an order means another model to deal with (and another API/service to manage it). The plus side is that I will be able to keep it focused on one thing: tracking an order. This means that my other models don't have to change, and I know what the object truly represents at all times.
I obviously lean toward creating a new model because it is extendable/flexible and more clear to understand/support.
This doesn't have much to do with Ng or MVC or JSON. It has more to do with creating your models to most accurately and clearly represent your domain.
There may be other considerations as well, given the specifics. Let me know by commenting.
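To illustrate the separate-model option, here is a hedged TypeScript sketch (the shapes are hypothetical, not the asker's actual JSON): the order is its own model that references menu items only by id, so the menu model stays untouched.
// Hypothetical menu model, as delivered from the server-generated JSON.
interface Pita { id: number; name: string; price: number; }
interface Topping { id: number; name: string; }
// Separate model tracking only the user's selections, by id.
interface Order { pitaId: number | null; toppingIds: number[]; }
const order: Order = { pitaId: null, toppingIds: [] };
function choosePita(o: Order, pita: Pita): void {
    o.pitaId = pita.id;
}
function toggleTopping(o: Order, topping: Topping): void {
    o.toppingIds = o.toppingIds.includes(topping.id)
        ? o.toppingIds.filter(id => id !== topping.id)
        : [...o.toppingIds, topping.id];
}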

Component Based Architecture in game development

I have been thinking of trying out component-based architecture for game development. I have read some blog posts and articles about it, but there are a few things I have not sorted out yet. When I say component-based I mean that you can add components to a ComponentManager that updates all components in its list. I want to try this to avoid ending up with classes that inherit variables and functions they don't need. I really like the idea of having a really simple entity class with a lot of components working side by side without getting bloated. Also, it is easy to remove and add functionality at runtime, which makes it really cool.
This is what I am aiming for.
// this is what the setup could be like
entity.componentManager.add(new RigidBody(3.0, 12.0));
entity.componentManager.add(new CrazyMagneticForce(3.0));
entity.componentManager.add(new DrunkAffection(42.0, 3.0));
// the game loop updates the component manager which updates all components
entity.componentManager.update(deltaTime);
Communication: Some of the components need to communicate with other components
I can't rely on the components being self-sufficient. Sometimes they will need to communicate with other components. How do I solve this? In Unity 3D, you can access the other components using GetComponent().
I was thinking of doing it like this, but what happens if you have two components of the same type? Maybe you get a Vector back.
var someComponent : RigidBody = _componentManager.getComponent(RigidBody);
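One way to handle the two-components-of-the-same-type case (a TypeScript sketch with hypothetical names, not Unity's API or any real engine's) is to offer both a single-result lookup and a list-returning one:
// Minimal component manager sketch with both lookup styles.
abstract class Component {
    abstract update(deltaTime: number): void;
}
class ComponentManager {
    private components: Component[] = [];
    add(component: Component): void {
        this.components.push(component);
    }
    // Returns the first matching component, or null if none is attached.
    getComponent<T extends Component>(type: new (...args: any[]) => T): T | null {
        return (this.components.find(c => c instanceof type) as T | undefined) ?? null;
    }
    // Returns every matching component, which covers duplicate component types.
    getComponents<T extends Component>(type: new (...args: any[]) => T): T[] {
        return this.components.filter(c => c instanceof type) as T[];
    }
    update(deltaTime: number): void {
        this.components.forEach(c => c.update(deltaTime));
    }
}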
Priority: Some components need to update before others
Some components need to update before others to get the correct data for the current game loop. I am thinking of adding a PRIORITY to each component, but I am not sure that is enough. Unity uses LateUpdate and Update, but maybe there is a way to get even better control of the order of execution.
Well, that's it. If you have any thoughts, don't hesitate to leave a comment. Or, if you have any good articles or blogs about this, I will gladly take a look at them. Thanks.
Component based game engine design
http://cowboyprogramming.com/2007/01/05/evolve-your-heirachy/
EDIT:
My question is really how to solve the issue with priorities and communication between the components.
I settled on an RDBMS-style component entity system http://entity-systems.wikidot.com/rdbms-with-code-in-systems#objc
It is working really well so far. Each component only holds data and has no methods. Then I have subsystems that process the components and do the actual work. Sometimes a subsystem needs to talk to another, and for that I use the service locator pattern http://gameprogrammingpatterns.com/service-locator.html
As for priorities: each system is processed in my main game loop, so it is just a matter of which gets processed first. For mine, I process control, then physics, then camera, and finally render.
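A minimal TypeScript sketch of that fixed ordering (the system names are illustrative, mirroring the answer): the priority is simply the order in which the systems appear in the loop.
// Each system processes the components it cares about; array order is the priority.
interface System {
    process(deltaTime: number): void;
}
class ControlSystem implements System { process(dt: number): void { /* read input */ } }
class PhysicsSystem implements System { process(dt: number): void { /* integrate bodies */ } }
class CameraSystem implements System { process(dt: number): void { /* follow the player */ } }
class RenderSystem implements System { process(dt: number): void { /* draw the frame */ } }
const systems: System[] = [
    new ControlSystem(),
    new PhysicsSystem(),
    new CameraSystem(),
    new RenderSystem(),
];
function gameLoop(deltaTime: number): void {
    systems.forEach(s => s.process(deltaTime));
}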

How to avoid Anemic Domain Models and maintain Separation of Concerns?

How do you make your objects fully cognizant of their roles within the system, while still avoiding too many dependencies within the domain model on the database and service layers?
For example: say I've got an entity with a revision history and several "lookup tables" that the data references. The entity object should have methods to get the details from some of the lookup tables, whether by providing access to the lookup table rows or by delegating methods down to them, but in order to do so it depends on the database layer to read the data from those rows. Also, when the entity is saved, it needs to know not only how to save itself but also how to save entries into the revision history. Is it necessary to pass references to dozens of different data-layer objects and service objects to the model object? This seems to make the logic far more complex to understand than just passing thin models back and forth to service-layer objects, but I've heard many "wise men" recommending this sort of structure.
Really really good question. I have spent quite a bit of time thinking about such topics.
You demonstrate great insight by noting the tension between an expressive domain model and separation of concerns. This is much like the tension in the question I asked about Tell Don't Ask and Single Responsibility Principle.
Here is my view on the topic.
A domain model is anemic because it contains no domain logic; other objects get and set data using an anemic domain object. What you describe doesn't sound like domain logic to me. It might be, but generally, lookup tables and other such technical language are most likely terms that mean something to us but not necessarily anything to the customers. If this is incorrect, please clarify.
Anyway, the construction and persistence of domain objects shouldn't be contained in the domain objects themselves because that isn't domain logic.
So to answer the question, no, you shouldn't inject a whole bunch of non-domain objects/concepts like lookup tables and other infrastructure details. This is a leak of one concern into another. The Factory and Repository patterns from Domain-Driven Design are best suited to keep these concerns apart from the domain model itself.
But note that if you don't have any domain logic, then you will end up with anemic domain objects, i.e. bags of brainless getters and setters, which is how some shops claim to do SOA / service layers.
So how do you get the best of both worlds? How do you focus your domain objects on domain logic only, while keeping UI, construction, persistence, etc. out of the way? I recommend you use a technique like Double Dispatch, or some form of restricted method access.
Here's an example of Double Dispatch. Say you have this line of code:
entity.saveIn(repository);
In your question, saveIn() would have all sorts of knowledge about the data layer. Using Double Dispatch, saveIn() does this:
repository.saveEntity(this.foo, this.bar, this.baz);
And the saveEntity() method of the repository has all of the knowledge of how to save in the data layer, as it should.
In addition to this setup, you could have:
repository.save(entity);
which just calls
entity.saveIn(this);
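Putting the pieces together, here is a hedged TypeScript sketch of that Double Dispatch arrangement (the field and method names are hypothetical): the entity hands its state to the repository, and the repository owns all persistence knowledge.
// The repository knows how to persist; the entity only knows what to hand over.
interface EntityRepository {
    saveEntity(foo: string, bar: number, baz: boolean): void;
    save(entity: Entity): void;
}
class Entity {
    constructor(private foo: string, private bar: number, private baz: boolean) {}
    // Double Dispatch: the entity dispatches its own data to the repository.
    saveIn(repository: EntityRepository): void {
        repository.saveEntity(this.foo, this.bar, this.baz);
    }
}
class SqlEntityRepository implements EntityRepository {
    saveEntity(foo: string, bar: number, baz: boolean): void {
        // All data-layer knowledge (SQL, ORM calls, revision history rows) lives here.
    }
    save(entity: Entity): void {
        entity.saveIn(this); // the convenience entry point mentioned above
    }
}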
I re-read this and I notice that the entity is still thin because it is simply dispatching its persistence to the repository. But in this case, the entity is supposed to be thin because you didn't describe any other domain logic. In this situation, you could say "screw Double Dispatch, give me accessors."
And yeah, you could, but IMO it exposes too much of how your entity is implemented, and those accessors are distractions from domain logic. I think the only class that should have gets and sets is a class whose name ends in "Accessor".
I'll wrap this up soon. Personally, I don't write my entities with saveIn() methods, because I think even just having a saveIn() method tends to litter the domain object with distractions. I use either the friend class pattern, package-private access, or possibly the Builder pattern.
OK, I'm done. As I said, I've obsessed on this topic quite a bit.
"thin models to service layer objects" is what you do when you really want to write the service layer.
ORM is what you do when you don't want to write the service layer.
When you work with an ORM, you are still aware of the fact that navigation may involve a query, but you don't dwell on it.
Lookup tables can be a relational crutch that gets used when there isn't a very complete object model. Instead of things referencing things, you have codes, which must be looked up. In many cases, the codes devolve to little more than a static pool of strings with database keys. And the relevant methods wind up in odd places in the software.
However, if there is a more complete object model, we have first-class things instead of these degenerate lookup values.
For example, I've got some business transactions which have one of n different "rate plans" -- a kind of pricing model. Right now, the legacy relational database has the rate plan as a lookup table with a code, some pricing numbers, and (sometimes) a description.
[Everyone knows the codes -- the codes are sacred. No one is sure what the proper descriptions should be. But they know the codes.]
But really, a "rate plan" is an object that is associated with a contract; the rate plan has the method that computes the final price. When an app asks the contract for a price, the contract delegates some of the pricing work to the associated rate plan object.
There may have been some database query going on to lookup the rate plan when producing a contract price, but that's incidental to the delegation of responsibility between the two classes.
I agree with DeadBeef - therein lies the tension. I don't really see, though, how a domain model is 'anemic' simply because it doesn't save itself.
There has to be much more to it, i.e. it's anemic because the service is applying all the business rules instead of the domain entity.
Service(IRepository) injected
Save() {
    DomainEntity.DoSomething();
    Repository.Save(DomainEntity);
}
'Do Something' is the business logic of the domain entity.
This would be anemic:
Service(IRepository) injected
Save() {
    if (DomainEntity.IsSomething)
        DomainEntity.SetItProperty();
    Repository.Save(DomainEntity);
}
See the inherent difference? I do :)
Try the "repository pattern" and "Domain driven design". DDD suggests to define certain entities as Aggregate-roots of other objects. Each Aggregate is encapsulated. The entities are "persistence ignorant". All the persistence-related code is put in a repository object which manages Data-access for the entity. This way you don't have to mix persistence-related code with your business logic. If you are interested in DDD, check out eric evans book.