I was looking at some code from a fellow developer and almost cried. The method definition has 12 arguments. In my experience, this isn't good. If it were me, I would have passed in an object of some sort.
Is there a better / more preferred way to do this? In other words, what's the best way to fix this, and why?
public long Save (
String today,
String name,
String desc,
int ID,
String otherNm,
DateTime dt,
int status,
String periodID,
String otherDt,
String submittedDt
)
ignore my poor variable names - they are examples
It highly depends on the language.
In a language without compile-time type checking (Python, JavaScript, etc.) you should use keyword arguments (common in Python: you can access them like a dictionary passed in as an argument) or objects/dictionaries that you manually pass in as arguments (common in JavaScript).
However, the "argument hell" you describe is sometimes "the right way to do things" in certain languages with compile-time type checking, because using objects can hide the semantics from the type checker. The solution then would be to use a better language with compile-time type checking, one that allows pattern matching on objects passed as arguments.
Yes, use objects. Also, the function is probably doing too much if it needs all of this information, so use smaller functions.
Use objects.
class User { ... }
User user = ...
Save(user);
This also makes it easy to add new parameters later.
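A hedged sketch of how the question's Save signature could collapse into one parameter object, in C# (all names here are illustrative):
public class SaveRequest
{
    // One property per former argument; new ones can be added without touching the signature.
    public string Today { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public int Id { get; set; }
    public string OtherName { get; set; }
    public DateTime Date { get; set; }
    public int Status { get; set; }
    public string PeriodId { get; set; }
    public string OtherDate { get; set; }
    public string SubmittedDate { get; set; }
}

public long Save(SaveRequest request)
{
    // validate and persist using request.Name, request.Status, etc.
    return 0; // illustrative
}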
It depends on how complex the function is. If it does something non-trivial with each of those arguments, it should probably be split. If it just passes them through, they should probably be collected in an object. But if it just creates a row in a table, it's not really a big deal. It's even less of a deal if your language supports keyword arguments.
I imagine the issue you're experiencing is being able to look at the method call and know what argument is receiving what value. This is a pernicious problem in a language like Java, which lacks something like keyword arguments or JSON hashes to pass named arguments.
In this situation, the Builder pattern is a useful solution. It means more objects (three in total), but it leads to more comprehensible code for the problem you're describing. The three objects in this case would be:
Thing: stateful entity, typically immutable (i.e. getters only)
ThingBuilder: factory class, creates a Thing entity and sets its values.
ThingDAO: not necessary for using the Builder pattern, but addresses your question.
Interaction
/*
ThingBuilder is a static inner class of Thing, where each of its
"set" method calls returns the ThingBuilder instance being worked with
while the final "build()" call returns the instantiated Thing instance.
*/
Thing thing = Thing.createBuilder()
.setToday("2012/04/01")
.setName("Example")
// ...etc...
.build();
// the Thing instance has get methods for each property
thing.getName();
// get your reference to thingDAO however it's done
thingDAO.save(thing);
The result is you get named arguments and an immutable instance.
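A minimal sketch of the Thing/ThingBuilder pair in C# (the answer describes Java's static inner classes; a plain nested class plays the same role here, and the names follow the answer's example):
public class Thing
{
    public string Today { get; private set; }
    public string Name { get; private set; }
    // ...one read-only property per field...

    private Thing() { } // only the builder can create instances

    public static ThingBuilder CreateBuilder() { return new ThingBuilder(); }

    public class ThingBuilder
    {
        private readonly Thing thing = new Thing();

        // each setter returns the builder so calls can be chained
        public ThingBuilder SetToday(string today) { thing.Today = today; return this; }
        public ThingBuilder SetName(string name) { thing.Name = name; return this; }

        public Thing Build() { return thing; }
    }
}
Usage then looks like the chained calls in the snippet above.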
I have a class that requires some of its methods to be called in a specific order. If these methods are called out of order then the object will stop working correctly. There are a few asserts in the methods to ensure that the object is in a valid state. What naming conventions could I use to communicate to the next person to read the code that these methods need to be called in a specific order?
It would be possible to turn this into one huge method, but huge methods are a great way to create problems. (There are 2 methods that can trigger this sequence, so one huge method would also result in duplication.)
It would be possible to write comments explaining that the methods need to be called in order, but comments are less useful than clearly named methods.
Any suggestions?
Is it possible to refactor so that (at least some of) the state from the first function is passed as a parameter to the second function? Then calling them out of order becomes impossible (there's a sketch of this at the end of this answer).
Otherwise, if you have comments and asserts, you're doing quite well.
However, "It would be possible to turn this into one huge method" makes it sound like the outside code doesn't need to access the intermediate state in any way. If so, why not just make one public method, which calls several private methods successively? Something like:
FroblicateWeazel() {
// Need to be in this order:
FroblicateWeazel_Init();
FroblicateWeazel_PerformCalcs();
FroblicateWeazel_OutputCalcs();
FroblicateWeazel_Cleanup();
}
That's not perfect, but if the order is centralised to that one function, it's fairly easy to see what order they should come in.
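A hedged sketch of the first suggestion, passing state forward so the compiler enforces the order (the names are made up):
public sealed class PreparedWeazel
{
    // intermediate state produced by Init() lives here
    internal PreparedWeazel() { } // callers can't construct one themselves
}

public class WeazelFroblicator
{
    public PreparedWeazel Init()
    {
        // set things up
        return new PreparedWeazel();
    }

    public void PerformCalcs(PreparedWeazel prepared)
    {
        // this can only be called with something Init() produced,
        // so "calculate before init" won't compile
    }
}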
Message digest and encryption/decryption routines often have an _init() method to set things up, an _update() to add new data, and a _final() to return final results and tear things back down again.
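For example, in newer versions of .NET the incremental hashing API follows exactly that shape (a minimal sketch):
using System;
using System.Security.Cryptography;
using System.Text;

using (var hash = IncrementalHash.CreateHash(HashAlgorithmName.SHA256)) // _init
{
    hash.AppendData(Encoding.UTF8.GetBytes("first chunk"));  // _update
    hash.AppendData(Encoding.UTF8.GetBytes("second chunk")); // _update
    byte[] digest = hash.GetHashAndReset();                  // _final
    Console.WriteLine(BitConverter.ToString(digest));
}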
I have this problem:
The Vehicle type derives from the EntityObject type, which has the property "ID".
I think I get why L2S can't translate this into SQL: it does not know that the WHERE clause should include WHERE VehicleId == value. VehicleId, by the way, is the PK on the table, whereas the property in the object model, as above, is "ID".
Can I even win on this with an Expression tree? Because it seems easy enough to create an Expression to pass to the SingleOrDefault method but will L2S still fail to translate it?
I'm trying to be DDD friendly so I don't want to decorate my domain model objects with ColumnAttributes etc. I am happy however to customize my L2S dbml file and add Expression helpers/whatever in my "data layer" in the hope of keeping this ORM-business far from my domain model.
Update:
I'm not using the object-initializer syntax in my select statement, which would look like this:
private IQueryable<Vehicle> Vehicles()
{
return from vehicle in _dc
select new Vehicle() { ID = vehicle.VehicleId };
}
I'm actually using a constructor, and from what I've read this is what causes the problem. This is what I'm doing:
private IQueryable<Vehicle> Vehicles()
{
return from vehicle in _dc
select new Vehicle(vehicle.VehicleId);
}
I understand that L2S can't translate this expression tree because it does not know the mappings, which it would usually infer from the object-initializer syntax. How can I get around this? Do I need to build an Expression with the attribute bindings?
After further experience I have decided that this is not possible.
L2S simply cannot create the correct WHERE clause when a parameterized ctor is used in the mapping projection. It's the initializer syntax in conventional L2S mapping projections that gives L2S the context it needs.
Short answer - use NHibernate.
Short answer: Don't.
I once tried to apply IQueryable<IEntity> to Linq2Sql. I got burned bad.
As you said, L2S (and EF too, in this regard) doesn't know that ID is mapped to the column VehicleId. You could get around this by renaming Vehicle.ID to Vehicle.VehicleId (yes, it works if the names match). However, I still don't recommend it.
Use L2S with the objects it generates. Masking an extra layer over it while working with IQueryable is bad IMO (from my experience).
The other way is to call .ToList() after the select statement. This loads all the vehicles into memory, and you then run your .Where against a LINQ to Objects collection (see the sketch after this answer). Of course this isn't as efficient as letting L2S handle the whole query, and it uses more memory.
Long story short: don't use the L2S IQueryable with any objects other than the ones it was designed for. It just doesn't work (well).
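A hedged sketch of that .ToList() route, assuming _dc is the L2S DataContext and VehicleRow is the generated L2S entity (both names are illustrative):
private IEnumerable<Vehicle> Vehicles()
{
    // Let L2S query its own generated entity, then build the domain object
    // in memory, where the parameterized ctor no longer needs translating.
    return _dc.GetTable<VehicleRow>()
              .ToList()                                   // query runs here; rows come into memory
              .Select(row => new Vehicle(row.VehicleId)); // ctor call happens in memory
}
Note the return type: once you materialize, later Where clauses filter in memory (LINQ to Objects), which is exactly the cost described above.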
I know this could be opinion, but I'm looking for best practices.
As I understand it, IQueryable<T> inherits from IEnumerable<T>, so in my DAL I currently have method signatures like the following:
IEnumerable<Product> GetProducts();
IEnumerable<Product> GetProductsByCategory(int categoryId);
Product GetProduct(int productId);
Should I be using IQueryable<T> here?
What are the pros and cons of either approach?
Note that I am planning on using the Repository pattern so I will have a class like so:
public class ProductRepository {
DBDataContext db = new DBDataContext(/* connection string */);
public IEnumerable<Product> GetProductsNew(int daysOld) {
return db.GetProducts()
.Where(p => p.AddedDateTime > DateTime.Now.AddDays(-daysOld));
}
}
Should I change my IEnumerable<T> to IQueryable<T>? What advantages/disadvantages are there to one or the other?
It depends on what behavior you want.
Returning an IList<T> tells the caller that they've received all of the data they've requested.
Returning an IEnumerable<T> tells the caller that they'll need to iterate over the result and it might be lazily loaded.
Returning an IQueryable<T> tells the caller that the result is backed by a Linq provider that can handle certain classes of queries, putting the burden on the caller to form a performant query.
While the latter gives the caller a lot of flexibility (assuming your repository fully supports it), it's the hardest to test and, arguably, the least deterministic.
One more thing to think about: where is your paging/sorting support? If you are providing paging support within your repository, returning IEnumerable<T> is fine. If you are paging outside of your repository (like in the controller or service layer) then you really want to use IQueryable<T> because you don't want to load the entire dataset into memory before it's paged.
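A hedged sketch of that last point (the repository and property names are illustrative):
// Paging composed outside the repository. With IQueryable<T>, Skip/Take are
// folded into the SQL; with IEnumerable<T>, the whole result set would be
// pulled into memory first and paged there.
public IList<Product> GetPage(IQueryable<Product> products, int pageIndex, int pageSize)
{
    return products
        .OrderBy(p => p.AddedDateTime)
        .Skip(pageIndex * pageSize)
        .Take(pageSize)
        .ToList(); // the query executes here
}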
HUUUUGGGE difference. I see this quite a bit.
You build up an IQueryable before it hits the database. The IQueryable only hits the DB once an eager operation is called (.ToList(), for example) or you actually start pulling values out. IQueryable = lazy; the query is still being composed.
An IEnumerable is also enumerated lazily, but any lambda you apply to it runs in memory as LINQ to Objects against whatever the database query returned, instead of being folded into the SQL. IEnumerable = the database query is already fixed.
As for which to use with the Repository pattern, I believe it's the materialised kind. I usually see ILists being passed, but someone else will need to iron that out for you. EDIT - You usually see IEnumerable instead of IQueryable because you don't want layers past your Repository A) determining when the database hit will happen, or B) adding any logic to the joins outside the Repository.
There is a very good LINQ video that I enjoy a lot - it covers more than just IEnumerable vs IQueryable, and it has some fantastic insight.
http://channel9.msdn.com/posts/matthijs/LINQ-Tips-Tricks-and-Optimizations-by-Scott-Allen/
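A hedged sketch of the difference, assuming a db.Products table and a Price column (both illustrative):
// IQueryable<T>: the Where becomes part of the expression tree and is
// translated to SQL when the query is finally enumerated.
IQueryable<Product> cheapViaSql = db.Products.Where(p => p.Price < 10);

// IEnumerable<T>: once you switch to LINQ to Objects, the filter runs in
// memory over whatever the database query returns.
IEnumerable<Product> rows = db.Products.AsEnumerable();
IEnumerable<Product> cheapInMemory = rows.Where(p => p.Price < 10);

// Neither query above has hit the database yet; both execute when
// enumerated, e.g. by ToList() or a foreach.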
You can use IQueryable and accept that someone could create a scenario where a SELECT N+1 happens. This is a disadvantage, along with the fact that you may end up with code specific to your repository implementation in the layers above it. The advantage is that common operations like paging and sorting can be expressed outside of your repository, relieving it of those concerns. It is also more flexible if you need to join the data with other database tables, because the query remains an expression that can be added to before it is resolved and hits the database.
The alternative is to lock down your repository so that it returns materialised lists by calling ToList(). Taking paging and sorting as the example, you would pass skip, take and a sort expression as parameters to the repository's methods, and use them to return only a window of results. This means the repository takes on responsibility for paging, sorting and all of the projection of your data.
This is a bit of a judgement call: do you give your application the power of LINQ and keep the repository simple, or do you keep control of your data access? For me it depends on the number of queries associated with each entity, and combinations of entities, and where I want to manage that complexity.
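A hedged sketch of the locked-down alternative (the Expression-based orderBy parameter and db.Products are assumptions, not anything from the question):
public class ProductRepository
{
    private readonly DBDataContext db = new DBDataContext(/* connection string */);

    // Paging and sorting are parameters; the result is materialized before
    // it leaves the repository, so callers never see an IQueryable.
    public IList<Product> GetProducts<TKey>(
        int skip, int take, Expression<Func<Product, TKey>> orderBy)
    {
        return db.Products
                 .OrderBy(orderBy)
                 .Skip(skip)
                 .Take(take)
                 .ToList();
    }
}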
The following code illustrates a pattern I sometimes see, whereby an object is transformed implicitly as it is passed as a parameter across a number of method calls.
var o = new MyReferenceType();
DoSomeWorkAndPossiblyModifyO(o);
DoYetMoreWorkAndPossiblyFurtherModifyO(o);
//now use o...
This feels wrong to me (it hardly feels object oriented). Is it acceptable?
Based on your method names, I would argue that there is nothing implicit in the transformation, so this pattern is acceptable. If, on the other hand, your methods had names like printO(o) or compareTo(o) but actually modified the object o, the design would be bad.
It is acceptable but usually bad style.
The usual "good" approach is:
DoSomeWorkAndModify(&o); // explicit reference means we will be accepting changes
o = DoSomeWorkAndReturnModified(o); // much more elastic because you often want to keep original.
The approach you presented makes sense when o is huge and making a copy of it in memory is out of the question, or if it's a function that you (and nobody else = private) use very frequently and don't want to bother with the & syntax. Otherwise it's laziness that results in some really hard-to-detect bugs.
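In C# (the language of the question's snippet), the second option might look like this, a hedged sketch with illustrative method names:
var o = new MyReferenceType();
// each step returns the (new or modified) instance, so the data flow is
// visible at the call site and the original can be kept if needed
o = DoSomeWorkAndReturnModified(o);
o = DoYetMoreWorkAndReturnFurtherModified(o);
// now use o...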
It depends entirely on what the methods actually do, besides modifying that object.
For instance, an object primarily concerned with keeping some state in memory might not have anything related to persisting that state anywhere.
The methods could, for instance, load data from a database and update the object with that information.
However! Since I program mostly in C# and thus .NET, which is a wholly object-oriented language, I would actually write your code like this:
var o = new MyReferenceType();
SomeOtherClass.DoSomeWorkAndPossiblyModifyO(o);
SomeOtherClass.DoYetMoreWorkAndPossiblyFurtherModifyO(o);
//now use o...
In which case the actual name of that other class (or classes, if there are two involved) would give me a big clue as to what is actually happening and/or the context.
Example:
Person p = new Person();
DatabaseContext.FetchAllLazilyLoadedProperties(p);
DatabaseContext.Save(p); // updates primary key property with new ID