To quickly get into it, the models are:
public class foo
{
    public int ID { get; set; }

    [Required]
    public bla bla { get; set; }
}
public class bla
{
    public int ID { get; set; }
    public string test { get; set; }
}
Basically, this works fine and as expected.
My application is a normal application - I have a list of blas, each of which is associated with a foo.
I have a page that has a list of Foos, and someone can click on one, which will then show every bla that is associated with that Foo.
The code has the ID of the Foo passed to it and is:
bla bla = db.bla.SingleOrDefault(x => x.ID == id);
var foos = db.foo.Where(x => x.bla == bla);
However, I really wanted to experiment with the Fluent API, and I used the following:
modelBuilder.Entity<foo>()
    .HasRequired(x => x.bla)
    .WithOptional()
    .WillCascadeOnDelete();
I am guessing from looking at the database schema that I have inadvertently created a one-to-one relationship, because the bla_id column doesn't exist. However, what I really don't understand is how my application continues to work without modification (albeit I can only create a maximum of one foo per bla).
I don't really understand why WithOptional would imply one-to-one - this is really getting on my nerves, as the tooltips for the different options (and MSDN) imply very similar, if not the same, information for each option. It is making it very hard to learn.
Also, am I right in thinking that the Fluent API overrides model annotations completely?
Lastly, I have been struggling here a bit. Does anyone know of a cheat sheet/list for the Fluent API?
Take a look at Configuring Relationships with Fluent API (Code First).
Scroll down to Configuring a One-to-Zero-or-One Relationship.
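If what you actually wanted was the original one-to-many shape (a bla_id foreign key column on foo), a minimal sketch of that configuration with the EF6 Fluent API might look like the following - note that it is WithMany(), not WithOptional(), that produces the foreign key column (the MapKey call is optional and only names the column):

modelBuilder.Entity<foo>()
    .HasRequired(x => x.bla)        // every foo must have a bla
    .WithMany()                     // many foos may reference the same bla
    .Map(m => m.MapKey("bla_id"))   // name the foreign key column
    .WillCascadeOnDelete();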
I'm currently building a HATEOAS/HAL-based REST application with Spring MVC and JPA (Hibernate). Basically, the application gives access to a database and allows data retrieval/creation/manipulation.
So far I've already got a lot of things done including a working controller for one of the resources, let's call it x.
But I don't want to give the API user the opportunity to create just an x resource, because this alone would be useless and could be deleted right away. He/she also has to define a new y and a z resource to make things work. So: allowing the user to create all those resources independently would not break anything, but it might produce dead data, like a z resource floating around without any connection, completely invisible and useless to the user.
Example: I don't want the user to create a new customer without directly attaching a business contract to the customer. (Two different resources: /customers and /contracts).
I did not really find any answers or best practice on the web, except for some sort of bulk POSTing, but only to one resource, where you would POST a ton of customers at once.
Now the following options come to my mind:
Let the user create the resources as he/she wants. If there are customers created and never connected to a contract - I don't care. The logic here would be: allow the user to create /customers (and return some sort of id, of course). Then, if he/she wants to POST a new /contract later, I would check whether the given customer id exists and, if it does, create the contract.
Expect the user, when POSTing to /customers, to also include contract data.
Option 1 would be the easiest way (and maybe more true to REST?).
Option 2 is a bit more complicated, since the user does not send single resources any more.
Currently, the controller method for adding a customer starts like that:
@RequestMapping(value = "", method = RequestMethod.POST)
public HttpEntity<Customers> addCustomer(@RequestBody Customers customer) {
    // stuff...
}
This way the JSON in the RequestBody fits directly into my Customers class, and I can continue working with it. Now, with two (or more) expected resources included in the RequestBody, this cannot be done the same way any more. Any ideas on how to handle that in a nice way?
I could create some sort of wrapper class (like CustomersContracts) that consists of customer and contract data and has the sole purpose of storing this kind of data. But this seems ugly.
I could also take the raw JSON in the RequestBody, parse it and then manually create a customer and a contract object from it, save the customer, get its id and attach it to the contract.
Any thoughts?
Coming back here after a couple of months. I finally decided to create some kind of wrapper resource (these are example class names):
public class DataImport extends ResourceSupport implements Serializable {
    /* The classes referenced here are @Entitys */
    private Import1 import1;
    private Import2 import2;
    private List<Import3> import3;
    private List<Import4> import4;
}
So the API user always has to send an Import1 and Import2 JSON object and an Import3 and Import4 JSON array (can also be empty).
In my controller class I do the following:
@RequestMapping(*snip*)
public ResponseEntity<?> add(@RequestBody DataImport dataImport) {
    Import1 import1 = dataImport.getImport1();
    Import2 import2 = dataImport.getImport2();
    List<Import3> import3 = dataImport.getImport3();
    List<Import4> import4 = dataImport.getImport4();
    // continue...
}
I still don't know if it's the best way to do this, but it works quite well.
In the excellent MvvmCross library I can use RIO binding to prevent unreadable code:
public INC<String> Title = new NC<String>();
Then I can read and write values using Title.Value. Makes the models much more readable.
Normally, this property would be written as:
private string _title;
public string Title
{
    get { return _title; }
    set
    {
        _title = value;
        RaisePropertyChanged("Title");
    }
}
But when I want to use sqlite-net, these fields cannot be persisted to the database, because they are not basic types with a getter and setter.
I can think of a few options to get around that:
Make a new simple object that is similar to the model, but only with the direct db-fields, and create a simple import/export static method on the model. This could also prevent struggling with complex model code that never needs to relate to the actual database.
Make sqlite-net understand reading NC-fields. I read into the code of the mapper, but it looks like this is going to be a lot of work because it relies on the getter-setter. I did not find a way to insert a custom mapping for a type that could be generic.
Remove RIO and just write all the property code myself instead of relying on it.
Maybe someone has some advice?
Thanks Stuart. It was exactly my thought, so I did implement it that way: my (DB) Models do not contain RIO. Only my viewmodels do, and they reference a Model that is DB-compatible.
So, for posterity the following tips:
- Do not use RIO in your models that need to be database-backed.
- Reference models in your viewmodels. In the binding you can use the . (dot) to reference this model.
This keeps them nicely separated. It also gives you another advantage: if you need to reuse a model (because the same object might be displayed twice on the screen, but under different circumstances), it is much easier to handle this situation and find the already-instantiated model.
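A minimal sketch of that split, assuming hypothetical class names (the MvxViewModel base class and RaisePropertyChanged are from MvvmCross itself; everything else here is illustrative):

// DB-backed model: plain auto-properties, so sqlite-net can map it.
public class PersonModel
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// Viewmodel: references the model; bindings can drill into it with
// dot notation, e.g. "Person.Title".
public class PersonViewModel : MvxViewModel
{
    private PersonModel _person;
    public PersonModel Person
    {
        get { return _person; }
        set { _person = value; RaisePropertyChanged(() => Person); }
    }
}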
This really is an architectural question. I feel like I'm going about this the wrong way and wanted some input on best practices.
Let's say I have a Transactions table and a TransactionTypes table. Views will submit the appropriate transaction data which is processed in my controller. The problem is that the logic in the controller may be a bit complex and the TransactionType is not provided by the view inputs, but computed in the controller. (Which may be part of my problem).
For example, let's say that the View submits a ViewModel that would map to a TransactionType of "Withdrawal". However, the controller detects that it needs to change this to an "Overdraft", as funds aren't sufficient. What I don't want to do is this:
transaction.TypeId =
    DataContext.TransactionTypes.Single(x => x.type == "Overdraft").id;
... as I'll be embedding string literals in my code. Right?
OK, so I could map the values to strong types that would allow me to do this:
class TranTypes
{
    public const long Deposit = 1;
    public const long Withdrawal = 2;
    public const long Overdraft = 3;
}
...
transaction.TypeId =
    DataContext.TransactionTypes.Single(x => x.id == TranTypes.Overdraft).id;
Now, if my lookups change in the DB, I have one place that I can update the mappings and my controllers still have insight into the model.
But this feels awkward too.
I feel like what I really want is for the Linq to SQL auto-code generation to be able to generate the association so I can just refer to strongly-typed names (Deposit, Withdrawal, and Overdraft) and be assured that it will always return the current values for these in the database. Changes made to the lookup table during runtime would result in problems, but it still seems so much cleaner.
What should I be digesting to understand how best to structure this?
Thanks in advance for enlarging my brain. :-)
Don't worry about whether you have an embedded string or a strongly-typed value - either is perfectly acceptable - whichever makes sense for your database design.
What you should do, however, is write a single routine in a repository or helper class that you can then call from whatever controller or action requires it - if anything changes, there is only one place to make the change.
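A minimal sketch of that idea (the repository class and DataContext names here are hypothetical; the query itself is the one from the question):

public class TransactionTypeRepository
{
    private readonly MyDataContext _db;

    public TransactionTypeRepository(MyDataContext db)
    {
        _db = db;
    }

    // The "Overdraft" literal now lives in exactly one place;
    // every controller or action calls this method instead.
    public long GetOverdraftTypeId()
    {
        return _db.TransactionTypes.Single(x => x.type == "Overdraft").id;
    }
}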
One simple approach I've always liked is the Enum approach.
public enum TransactionType {
    Overdraft
}

transaction.TypeId =
    DataContext.TransactionTypes.Single(x => x.type == TransactionType.Overdraft.ToString()).id;
It's pretty simple, but I like it.
A more sophisticated approach (not sure if this works with Linq to SQL, but more sophisticated ORMs such as EF, DO .NET, and LLBLGen support it) is to use inheritance in your data model, with discriminators.
That is, have a subclass of TransactionType called OverdraftTransactionType with a discriminator (the key) that identifies different types of TransactionTypes from each other.
Random link:
http://weblogs.asp.net/zeeshanhirani/archive/2008/08/16/single-table-inheritance-in-entity-framework.aspx
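As a rough sketch of the idea, assuming EF6-style Code First configuration (OverdraftTransactionType is the hypothetical subclass named above; as noted, Linq to SQL may not support this):

public abstract class TransactionType
{
    public long Id { get; set; }
    public string Type { get; set; }
}

public class OverdraftTransactionType : TransactionType { }

// In DbContext.OnModelCreating: a discriminator column decides which
// subclass each row materializes as (table-per-hierarchy inheritance).
modelBuilder.Entity<TransactionType>()
    .Map<OverdraftTransactionType>(m => m.Requires("Discriminator").HasValue("Overdraft"));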
I am implementing a screen using the MVP pattern. As more features are added to the screen, I keep adding methods to the IScreen/IPresenter interface, so it is becoming bigger and bigger. What should I do to cope with this situation?
There is no specific limit on the number of artifacts (methods, constants, enumerations, etc.) - say N - in an interface such that we can say: if interface X has more than N artifacts, it is bloated.
At least not without a context. In this case, the context is: what is the interface supposed to provide? Or better yet: what are implementations of this interface supposed to do? What is the intended behavior or role of classes implementing the interface?
I would strongly suggest you get familiar with certain metrics like cohesion and coupling (both in general and specifically for OO). In particular, I'd suggest you take a look at LCOM. Once you understand it, it will help you eyeball situations like the one you are encountering now.
http://javaboutique.internet.com/tutorials/coupcoh/
One of the last things you want to do with an interface or class (or even a package or module, if you were doing procedural programming) is to turn them into bags of methods and functions where you throw in everything but the kitchen sink. That leads to either poor cohesion or tight coupling (or both).
One of the problems with interfaces is that we cannot easily compute or estimate their LCOM as one would with actual classes, which could guide you in deciding when to refactor. So for that you have to use a bit of intuition.
Let's assume your interface is named A for the sake of argument. Then,
Step 1:
Consider grouping the interface methods by arguments: is there a subset of methods that operate on the same type of arguments? If so, are they significantly different from other method groups?
interface A
{
    void method1();
    void method2(someArgType x);
    someOtherType method3();
    ...
    void doSomethingOn( someType t );
    boolean isUnderSomeCondition( someType t );
    someType replaceAndGetPrev( someType t, someFields ... );
}
In such a case, consider splitting that group into its own interface, B.
Step 2:
Once you extract interface B, does it look like this?
interface B
{
    void doSomethingOn( someType t );
    ...
    boolean isUnderSomeCondition( someType t );
    ...
    someType replaceAndGetPrev( someType t, someFields ... );
}
That is, does it represent methods that do things on some type?
If so, your interface is mimicking a procedural module operating on an ADT (in this case, someType) - nothing wrong with that if you are using a procedural or multi-paradigm language.
Within reason and while being pragmatic, in OO, you minimize procedures that do things on other objects. You call methods in those objects to do things to themselves on your behalf. Or more precisely, you signal them to do something internally.
In such a case, consider turning B into a class encapsulating the type (and have it extend an interface with the same signature, but only if that makes sense - that is, if you expect different implementations of artifacts encapsulating/managing elements of that type).
class Bclass
{
    someType t;

    Bclass() { t = new someType(); }
    ...
    void doSomethingOn();
    ...
    boolean isUnderSomeCondition();
    ...
    someType replaceAndGetPrev( someFields ... );
}
Step 3:
Determine the relationships between the interfaces and classes refactored out of A.
If B represents things that can only exist when A does (A is a context for B; for example, a servlet request exists in a servlet context, in Java EE lingo), then have B define a method that returns A (for example, A B.getContext() or something like that).
If B represents things that are managed by A (A being a composite of things, including B), then have A define a method that returns B (B A.getBThingie()).
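A quick sketch of those two shapes, using the placeholder names from above:

// Case 1: B can only exist within the context of an A.
interface B
{
    A getContext();
}

// Case 2: A is a composite that manages a B and hands it out.
interface A
{
    B getBThingie();
}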
If there is no such relationship between A and B, and they have nothing in common other than they were grouped together, then chances are that the original interface was poorly cohesive.
If you cannot disentangle one from the other without breaking a significant amount of your design, then that's a sign that pieces of your system had poor boundaries and are tightly coupled.
Hope it helps.
P.S. I would also avoid trying to fit your interfaces and classes into traditional patterns UNLESS doing so serves an application/business-specific purpose. I gotta throw that in there just in case. Too many people run amok with the GoF book, trying to fit their classes into patterns rather than asking 'what problem am I solving with this?'
In my opinion, a "perfect program world" contains public interfaces and internal implementations.
Each interface is strictly "in charge" of one thing only.
I try to view these entities as "little" human beings which interact with one another in order to complete a certain task.
(sorry if this is a bit of philosophizing)
What flavor of Model-View-Presenter are you using? I've found that Passive View rarely involves overlap between the view and presenter interfaces - normally they change at different times.
Typically the view's interface is essentially a view model, perhaps something like this (C#-style):
public interface IEditCustomerView {
    string FirstName { get; set; }
    string LastName { get; set; }
    string Country { get; set; }
    List<Country> AvailableCountries { get; set; }
    // etc.
}
The view implementation usually has handlers for user gestures; these are typically thin wrappers that call into the presenter:
public class EditCustomerView {
    // The save button's 'click' observer
    protected void SaveCustomer() {
        this.presenter.SaveCustomer();
    }
}
The presenter generally has a method for each user gesture, but none of the data, since it gets that directly from the view (which is generally passed to the presenter in the constructor, though you can pass it on each method call if it's more suitable):
public interface IEditCustomerPresenter {
    void Load();
    void SaveCustomer();
}
Can you break your interface into sub-interfaces representing sections of the screen? For example, if your screen is divided into groups such as a navigation section, or a form section, or a toolbar section, then your IPresenter/IScreen could have getters for interfaces for those sections, and those sections could contain relevant methods for each section. Your main IPresenter/IScreen would still have methods that are relevant to the whole interface.
If sections of the screen don't work as a logical category for your application, think of other things that might provide a logical breakdown. Workflow would be one.
EDIT: For example, for a large UI I built, I actually broke up not just my presenter but also my model and view code. The entire screen broke up neatly into a tree (in this case), with the main presenter delegating work to the child presenters and on down the chain. When I later had to go back and add to this UI, I found fitting into the hierarchy fairly simple and maintainable.
In an example that works like this, the MainPresenter implementation of IMainPresenter knows about its model, its view, and its sub-presenters. Each SubPresenter controls its own view and model. Any operation on what logically belongs to a sub-section should be in the corresponding SubPresenter. If your screen is laid out in such a way that there are logical units like this, such a set-up should work well. Each SubPresenter should be able to return its SubView for the MainPresenter to plug into the MainView as appropriate.
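A bare-bones sketch of that delegation (all of these names are illustrative, following the MainPresenter/SubPresenter naming above):

public interface ISubView { }

public interface IMainView
{
    void AddSection(ISubView section);
}

public interface ISubPresenter
{
    // Each sub-presenter exposes its view so the main presenter
    // can plug it into the main view.
    ISubView View { get; }
}

public interface IMainPresenter
{
    void Load();
}

public class MainPresenter : IMainPresenter
{
    private readonly IMainView view;
    private readonly IList<ISubPresenter> subPresenters;

    public MainPresenter(IMainView view, IList<ISubPresenter> subPresenters)
    {
        this.view = view;
        this.subPresenters = subPresenters;
    }

    public void Load()
    {
        // Delegate down the chain; each sub-presenter manages
        // its own view and model.
        foreach (var sub in subPresenters)
            view.AddSection(sub.View);
    }
}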
I am very interested in Linq to SQL with its lazy-loading feature. In my project I used AutoMapper to map the DB model to the domain model (from DB_RoleInfo to DO_RoleInfo). My repository code is below:
public DO_RoleInfo SelectByKey(Guid Key)
{
    return SelectAll().Where(x => x.Id == Key).SingleOrDefault();
}
public IQueryable<DO_RoleInfo> SelectAll()
{
    Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>();
    return from role in _ctx.DB_RoleInfo
           select Mapper.Map<DB_RoleInfo, DO_RoleInfo>(role);
}
The SelectAll method runs fine, but when I call SelectByKey, I get the error:
Method "RealMVC.Data.DO_RoleInfo Map[DB_RoleInfo,DO_RoleInfo]" could not translate to SQL.
Is it that AutoMapper doesn't fully support Linq?
Instead of AutoMapper, I tried the manual mapping code below:
public IQueryable<DO_RoleInfo> SelectAll()
{
    return from role in _ctx.DB_RoleInfo
           select new DO_RoleInfo
           {
               Id = role.id,
               name = role.name,
               code = role.code
           };
}
This method works the way I want it to.
While @Aaronaught's answer was correct at the time of writing, as so often happens the world has changed and AutoMapper with it. In the meantime, QueryableExtensions were added to the code base, which added support for projections that get translated into expressions and, finally, SQL.
The core extension method is ProjectTo¹. This is what your code could look like:
using AutoMapper.QueryableExtensions;

public IQueryable<DO_RoleInfo> SelectAll()
{
    Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>();
    return _ctx.DB_RoleInfo.ProjectTo<DO_RoleInfo>();
}
and it would behave like the manual mapping. (The CreateMap statement is here for demonstration purposes. Normally, you'd define mappings once at application startup).
Thus, only the columns that are required for the mapping are queried and the result is an IQueryable that still has the original query provider (linq-to-sql, linq-to-entities, whatever). So it is still composable and this will translate into a WHERE clause in SQL:
SelectAll().Where(x => x.Id == Key).SingleOrDefault();
¹ Project().To<T>() prior to v4.1.0
Change your second function to this:
public IEnumerable<DO_RoleInfo> SelectAll()
{
    Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>();
    return from role in _ctx.DB_RoleInfo.ToList()
           select Mapper.Map<DB_RoleInfo, DO_RoleInfo>(role);
}
AutoMapper works just fine with Linq to SQL, but it can't be executed as part of the deferred query. Adding ToList() at the end of your Linq query causes it to immediately evaluate the results, instead of trying to translate the AutoMapper segment as part of the query.
Clarification
The notion of deferred execution (not "lazy load") does not make any sense once you've changed the resulting type to something that's not a data entity. Consider these two classes:
public class DB_RoleInfo
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public class DO_RoleInfo
{
    public Role Role { get; set; } // Enumeration type
}
Now consider the following mapping:
Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>()
    .ForMember(dest => dest.Role, opt => opt.MapFrom(src =>
        (Role)Enum.Parse(typeof(Role), src.Name)));
This mapping is completely fine (unless I made a typo), but let's say you write the SelectAll method in your original post instead of my revised one:
public IQueryable<DO_RoleInfo> SelectAll()
{
    Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>();
    return from role in _ctx.DB_RoleInfo
           select Mapper.Map<DB_RoleInfo, DO_RoleInfo>(role);
}
This actually kind of works, but by calling itself a "queryable", it lies. What happens if I try to write this against it:
public IEnumerable<DO_RoleInfo> SelectSome()
{
    return from ri in SelectAll()
           where (ri.Role == Role.Administrator) ||
                 (ri.Role == Role.Executive)
           select ri;
}
Think really hard about this. How could Linq to SQL possibly be able to successfully turn your where into an actual database query?
Linq knows nothing about the DO_RoleInfo class. It doesn't know how to do the mapping backward - in some cases, that may not even be possible. Sure, you may look at this code and go "Oh, that's easy, just search for 'Administrator' or 'Executive' in the Name column", but you're the only one who knows that. As far as Linq to SQL is concerned, the query is pure nonsense.
Imagine that somebody gave you these instructions:
Go to the supermarket and bring back the ingredients for making Morton Thompson Turkey.
Unless you've made it before, and most people haven't, your response to that instruction is most likely going to be:
What the hell is that?
You can go to the market, and you can get specific ingredients by name, but you can't evaluate the condition I've given you while you're over there. I have to "un-map" the criteria first. I have to tell you, here are the ingredients we need for this recipe - now go and get them.
To summarize, this is not some simple incompatibility between Linq to SQL and AutoMapper. It is not unique to either of those two libraries. It doesn't matter how you actually do the mapping to a non-entity type - you could just as easily do the mapping manually, and you'd still get the same error, because you are now giving Linq to SQL a set of instructions that are no longer comprehensible, dealing with mysterious classes that don't have an intrinsic mapping to any particular entity type.
This issue is fundamental to the concept of O/R Mapping and deferred query execution. A projection is a one-way operation. Once you project, you can no longer go back to the query engine and say oh by the way, here are some more conditions for you. It's too late. The best you can do is take what it already gave you and evaluate the extra conditions yourself.
Last but not least, I'll leave you with a workaround. If the only thing you want to be able to do from your mapping is filter the rows, you can write this:
public IEnumerable<DO_RoleInfo> SelectRoles(Func<DB_RoleInfo, bool> selector)
{
    Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>();
    return _ctx.DB_RoleInfo
        .Where(selector)
        .Select(dbr => Mapper.Map<DB_RoleInfo, DO_RoleInfo>(dbr));
}
This is a utility method that handles the mapping for you and accepts a filter on the original entity, and not the mapped entity. It might be useful if you have many different kinds of filters but always need to do the same mapping.
Personally, I think you will be better off just writing out the queries properly: first determine what you need to retrieve from the database, then do any projections/mappings, and then, finally, if you need to do further filtering (which you shouldn't), materialize the results with ToList() or ToArray() and write more conditions against the local list.
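For example, a rough sketch of that order of operations, reusing the example classes from above (the specific filter conditions are purely illustrative):

public IList<DO_RoleInfo> SelectPrivilegedRoles()
{
    Mapper.CreateMap<DB_RoleInfo, DO_RoleInfo>();
    return _ctx.DB_RoleInfo
        .Where(dbr => dbr.Name != null)                            // retrieve: runs as SQL, against the entity
        .ToList()                                                  // materialize the entities
        .Select(dbr => Mapper.Map<DB_RoleInfo, DO_RoleInfo>(dbr))  // then project
        .Where(ri => ri.Role == Role.Administrator ||
                     ri.Role == Role.Executive)                    // extra conditions evaluated locally
        .ToList();
}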
Don't try to use AutoMapper or any other tool to hide the real entities exposed by Linq to SQL. The domain model is your public interface. The queries you write are an aspect of your private implementation. It's important to understand the difference and maintain a good separation of concerns.