Changing IRepository to support IQueryable (LINQtoSQL queries) - linq-to-sql

I've inherited a system that uses the Castle Windsor IRepository pattern to abstract away from the DAL, which is LINQ to SQL.
The main problem that I can see is that IRepository only implements IEnumerable, so even the simplest of queries has to load ALL the data from the table to return a single object.
Current usage is as follows:
using (IUnitOfWork context2 = IocServiceFactory.Resolve<IUnitOfWork>())
{
    KpiFormDocumentEntry entry = context2.GetRepository<KpiFormDocumentEntry>().FindById(id, KpiFormDocumentEntry.LoadOptions.FormItem);
    // ...
}
And this uses a lambda to filter, like so:
public static KpiFormDocumentEntry FindById(this IRepository<KpiFormDocumentEntry> source, int id, KpiFormDocumentEntry.LoadOptions loadOptions)
{
    return source.Where( qi => qi.Id == id ).LoadWith( loadOptions ).FirstOrDefault();
}
So it becomes a nice extension method.
My question is: how can I keep this same interface/pattern but also implement IQueryable, to properly support LINQ to SQL and get some serious performance improvements?
The current implementation/interfaces for IRepository are as follows:
public interface IRepository<T> : IEnumerable<T> where T : class
{
    void Add(T entity);
    void AddMany(IEnumerable<T> entities);
    void Delete(T entity);
    void DeleteMany(IEnumerable<T> entities);
    IEnumerable<T> All();
    IEnumerable<T> Find(Func<T, bool> predicate);
    T FindFirst(Func<T, bool> predicate);
}
and then this is implemented by an SqlClientRepository like so:
public sealed class SqlClientRepository<T> : IRepository<T> where T : class
{
    private readonly Table<T> _source;

    internal SqlClientRepository(Table<T> source)
    {
        if( source == null ) throw new ArgumentNullException( "source", Gratte.Aurora.SHlib.labelText("All_TableIsNull",1) );
        _source = source;
    }

    // removed Add, Delete etc.

    public IEnumerable<T> All()
    {
        return _source;
    }

    public IEnumerator<T> GetEnumerator()
    {
        return _source.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
The problem at the moment is that, in our example above, the .Where call goes through 'GetEnumerator', which loads all rows into memory and then looks for the one we need.
If I change IRepository to implement IQueryable, I can't implement the three members needed (ElementType, Expression and Provider), as these are not publicly exposed on the Table class.
I think I should change the SqlClientRepository to be defined like so:
public sealed class SqlClientRepository<T> : IQueryable<T>, IRepository<T> where T : class
And then implement the necessary members, but I can't figure out how to pass the expressions around etc., as they are private members of the Table class, like so:
public Type ElementType
{
    get { return _source.ElementType; } // Won't work as ElementType is private
}
public Expression Expression
{
    get { return _source.Expression; } // Won't work as Expression is private
}
public IQueryProvider Provider
{
    get { return _source.Provider; } // Won't work as Provider is private
}
Any help really appreciated to move this from 'iterate through every row in the database after loading it' to 'select x where id=1'!

If you want to expose LINQ you can stop using the repository pattern and use Linq2Sql directly. The reason for this is that every LINQ to SQL provider has its own custom solutions, so if you expose LINQ you get a leaky abstraction. There is no point in using an abstraction layer then.
Instead of exposing LINQ you have two options:
Implement the specification pattern (a minimal sketch follows below)
Use the repository pattern as I describe here: http://blog.gauffin.org/2013/01/repository-pattern-done-right/
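To make the first option concrete, here is a minimal sketch of a specification-style query against the existing IRepository<T>. The ISpecification<T> interface, the KpiEntryByIdSpecification class and the FindFirst overload are illustrative names, not part of the existing code base or of the linked article:
using System;
using System.Linq;
using System.Linq.Expressions;

// The caller describes *what* it wants...
public interface ISpecification<T>
{
    Expression<Func<T, bool>> Criteria { get; }
}

public sealed class KpiEntryByIdSpecification : ISpecification<KpiFormDocumentEntry>
{
    private readonly int _id;

    public KpiEntryByIdSpecification(int id)
    {
        _id = id;
    }

    public Expression<Func<KpiFormDocumentEntry, bool>> Criteria
    {
        get { return entry => entry.Id == _id; }
    }
}

// ...and an overload inside SqlClientRepository<T> decides *how* to run it.
// Because the predicate arrives as an expression tree, LINQ to SQL can translate
// it into a WHERE clause instead of enumerating every row.
public T FindFirst(ISpecification<T> specification)
{
    return _source.Where(specification.Criteria).FirstOrDefault();
}
The important detail is that the predicate crosses the repository boundary as an Expression<Func<T, bool>> rather than a compiled Func<T, bool>, which is what lets the provider turn it into SQL.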

So, while it may not be a true abstraction any longer, the main point was to get the benefit of LINQ to SQL without updating all the queries already written.
So I made IRepository<T> implement IQueryable<T> instead of IEnumerable<T>.
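The revised interface itself isn't shown below, so here is a minimal sketch of what it looks like after the change, assuming only the base interface changes and the existing members stay as they were:
public interface IRepository<T> : IQueryable<T> where T : class
{
    void Add(T entity);
    void AddMany(IEnumerable<T> entities);
    void Delete(T entity);
    void DeleteMany(IEnumerable<T> entities);
    IEnumerable<T> All();
    IEnumerable<T> Find(Func<T, bool> predicate);
    T FindFirst(Func<T, bool> predicate);
}
Since IQueryable<T> already inherits IEnumerable<T>, existing code that simply enumerates a repository keeps compiling; the difference is that Where, FirstOrDefault and friends now bind to the Queryable extension methods and get translated by the provider.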
Then in the SqlClientRepository implementation I can call AsQueryable() to get the Table<T> as an IQueryable<T>, and then all is good, like so.
Now everywhere somebody has written something like repository.Where(qi => qi.Id == id) against an IRepository<T>, it actually passes the ID to SQL Server and only pulls back one record, instead of pulling them all back and looping through looking for the correct one.
/// <summary>Provides the ability to query and access entities within a SQL Server data store.</summary>
/// <typeparam name="T">The type of entity in the repository.</typeparam>
public sealed class SqlClientRepository<T> : IRepository<T> where T : class
{
    private readonly Table<T> _source;
    private readonly IQueryable<T> _sourceQuery;

    IQueryable<T> Query()
    {
        return (IQueryable<T>)_source;
    }

    public Type ElementType
    {
        get { return _sourceQuery.ElementType; }
    }

    public Expression Expression
    {
        get { return _sourceQuery.Expression; }
    }

    public IQueryProvider Provider
    {
        get { return _sourceQuery.Provider; }
    }

    /// <summary>Initializes a new instance of the <see cref="SqlClientRepository{T}"/> class.</summary>
    /// <param name="source">A <see cref="Table{T}"/> to a collection representing the entities from a SQL Server data store.</param>
    /// <exception cref="ArgumentNullException"><paramref name="source"/> is a <c>null</c> reference (<c>Nothing</c> in Visual Basic).</exception>
    internal SqlClientRepository(Table<T> source)
    {
        if( source == null ) throw new ArgumentNullException( "source", "All_TableIsNull" );
        _source = source;
        _sourceQuery = _source.AsQueryable();
    }

    // Add, Delete, GetEnumerator etc. unchanged from the earlier implementation
}
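To confirm the filter really is translated to SQL rather than applied in memory, you can attach a writer to DataContext.Log (a standard LINQ to SQL property) while querying through the repository. A minimal sketch; how you obtain the underlying DataContext from the IUnitOfWork is an assumption, since that plumbing isn't shown above:
using System;
using System.Data.Linq;
using System.Linq;

static class RepositoryQueryCheck
{
    // 'dataContext' is assumed to be the DataContext backing the repository's Table<T>.
    public static KpiFormDocumentEntry FindEntry(DataContext dataContext, IRepository<KpiFormDocumentEntry> repository, int id)
    {
        // Writes the generated SQL to the console; with the IQueryable-based repository this
        // should be a single "SELECT TOP (1) ... WHERE [Id] = @p0" statement rather than a
        // full SELECT of the table followed by in-memory filtering.
        dataContext.Log = Console.Out;
        return repository.Where(qi => qi.Id == id).FirstOrDefault();
    }
}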

Related

Generic Singleton and share data between pages

To share data (complex data) between pages in my Windows Phone 8 application I want to implement a singleton, but I want it to be generic. Is that possible? I suppose that it creates a new instance for each type, doesn't it?
public sealed class NavigationContextService<T>
{
    private static readonly NavigationContextService<T> instance = new NavigationContextService<T>();

    private NavigationContextService()
    {
    }

    public static NavigationContextService<T> Instance
    {
        get
        {
            return instance;
        }
    }

    public List<T> ShareList { get; set; }
    public T ShareData { get; set; }
}
It is creating a new instance for every type because it is generic - that is how you want it to be (if you are starting with generics, take a look at some tutorials, blogs or MSDN - you will easily find many on the internet).
It is still a singleton. When you use
NavigationContextService<string>.Instance.ShareList.Add("Text");
then you have one Instance for the type string. Generics help a lot when you want to create the same methods/classes that differ only in type, as the short example below shows.
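A short illustration of that per-type behaviour, using the NavigationContextService<T> class from the question (the values are arbitrary):
// Each closed generic type gets its own singleton, so these two lines touch two
// completely independent instances.
NavigationContextService<string>.Instance.ShareData = "Text";
NavigationContextService<int>.Instance.ShareData = 42;

// The string-typed instance is unaffected by the int-typed one:
// NavigationContextService<string>.Instance.ShareData is still "Text".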
On the other hand, if you want to create a singleton that will hold different types, then you can for example modify your class to be non-generic, like this:
public sealed class NavigationContextServiceNonGeneric
{
    private static readonly NavigationContextServiceNonGeneric instance = new NavigationContextServiceNonGeneric();

    private NavigationContextServiceNonGeneric() { ShareList = new List<object>(); }

    public static NavigationContextServiceNonGeneric Instance
    {
        get { return instance; }
    }

    public List<object> ShareList { get; set; }
    public object ShareData { get; set; }
}
As you can see in the code above, I haven't defined the 'exact' type of the shared data - it is the object type. You can then easily hold most kinds of data with it:
NavigationContextServiceNonGeneric.Instance.ShareList.Add("Text");
NavigationContextServiceNonGeneric.Instance.ShareList.Add(3);
NavigationContextServiceNonGeneric.Instance.ShareList.Add(3.0f);
It is a singleton which can hold different types of shared data. BUT it also has disadvantages - the main one is that you have to remember what type of data you hold and in what order. In my opinion the generic version is better because of that fact.
Everything depends on the purpose of your code. There may be easier and better ways than those two approaches.
As for page navigation, you can for example try to use a method from this article - you extend the NavigationService to pass an object:
public static class Extensions
{
    private static object Data;

    public static void Navigate(this NavigationService navigationService, Uri source, object data)
    {
        Data = data;
        navigationService.Navigate(source);
    }

    public static object GetNavigationData(this NavigationService service) { return Data; }
}
Then you use it:
NavigationService.Navigate(yourUri, DataToPass);
After Navigation you can get your data:
string myTextData = NavigationService.GetNavigationData() as string;
This method has two disadvantages: it is not type-safe and your data won't be preserved in tombstone mode.
As for the second disadvantage, you can easily use the PhoneApplicationService.State property for the purpose of page navigation - it is a dictionary which is preserved while tombstoning:
PhoneApplicationService.Current.State.Add("data", yourData);
Then when you want to get your data:
yourDataType yourData = PhoneApplicationService.Current.State["data"] as yourDataType;
There are also more ways in which you can pass the data.

Emit odata.type field with DataContractJsonSerializer?

Is there a way to make DataContractJsonSerializer emit the "odata.type" field required when posting an OData entity into a collection that supports multiple entity types (hierarchy per table)?
If I construct DataContractJsonSerializer with a settings object with EmitTypeInformation set to Always, it emits a "__type" field in the output, but that's not the field name needed for OData and the format of the value is wrong as well.
Is there any way to hook into the DataContractJsonSerializer pipeline to inject the desired "odata.type" field into the serialization output?
It would be such a hack to have to parse the serialization output in order to inject the field. How does WCF Data Services do it? Not using DataContractJsonSerializer is my guess.
Have you considered using Json.Net? Json.Net is much more extensible, and the scenario that you have can be done using a custom contract resolver. Sample code:
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(
            JsonConvert.SerializeObject(new Customer { Name = "Raghu" }, new JsonSerializerSettings
            {
                ContractResolver = new CustomContractResolver()
            }));
    }
}

public class CustomContractResolver : DefaultContractResolver
{
    protected override JsonObjectContract CreateObjectContract(Type objectType)
    {
        JsonObjectContract objectContract = base.CreateObjectContract(objectType);
        objectContract.Properties.Add(new JsonProperty
        {
            PropertyName = "odata.type",
            PropertyType = typeof(string),
            ValueProvider = new StaticValueProvider(objectType.FullName),
            Readable = true
        });
        return objectContract;
    }

    private class StaticValueProvider : IValueProvider
    {
        private readonly object _value;

        public StaticValueProvider(object value)
        {
            _value = value;
        }

        public object GetValue(object target)
        {
            return _value;
        }

        public void SetValue(object target, object value)
        {
            throw new NotSupportedException();
        }
    }
}

public class Customer
{
    public string Name { get; set; }
}
I can't answer your first two questions, but for the third question, I found on the OData Team blog a link to the OData WCF Data Services V4 library open source code. Downloading that code, you will see that they perform all serialization and deserialization manually. They have 68 files in their two Json folders! And looking through the code they have comments such as:
// This is a work around, needTypeOnWire always = true for client side:
// ClientEdmModel's reflection can't know a property is open type even if it is, so here
// make client side always write 'odata.type' for enum.
So that to me kind of implies there is no easy, clean, simple, elegant way to do it.
I tried using a JavaScriptConverter, a dynamic type, and other stuff, but most of them ended up resorting to reflection, which just made for a much more complicated solution than a simple string manipulation approach.
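For completeness, here is a minimal sketch of that post-processing idea done with Json.NET's JObject on top of the DataContractJsonSerializer output, rather than raw string manipulation. The SerializeWithODataType helper and the type name passed in are assumptions, not an official API:
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;
using Newtonsoft.Json.Linq;

static class ODataTypePatcher
{
    // Hypothetical helper: let DataContractJsonSerializer produce the JSON as usual,
    // then inject the "odata.type" property into the output before posting it.
    public static string SerializeWithODataType<T>(T entity, string odataTypeName)
    {
        var serializer = new DataContractJsonSerializer(typeof(T));
        string json;
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, entity);
            json = Encoding.UTF8.GetString(stream.ToArray());
        }

        // Parse and patch with Json.NET; AddFirst puts the annotation at the start of
        // the object, which is where OData payloads conventionally carry it.
        JObject payload = JObject.Parse(json);
        payload.AddFirst(new JProperty("odata.type", odataTypeName));
        return payload.ToString(Newtonsoft.Json.Formatting.None);
    }
}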

How to use the same @JsonProperty name in the following example?

At any point in time I will be calling only one of the setter methods, but the @JsonProperty name should be the same for both. When I compile this I get an exception. How can I set the same name for both?
public String getType() {
    return type;
}

@JsonProperty("Json")
public void setType(String type) {
    this.type = type;
}

public List<TwoDArrayItem> getItems() {
    return items;
}

@JsonProperty("Json")
public void setItems(List<TwoDArrayItem> items) {
    this.items = items;
}
Jackson tends to favor common scenarios and good design choices for annotation support.
Your case represents a very uncommon scenario. You have one field having two different meanings in different contexts. Typically this would not be a favourable data format since it adds messy logic to the consumer on the other end...they need to divine what the "Json" property should mean in each case. It would be cleaner for the consumer if you just used two different property names. Then it would be sufficient to simply check for the presence of each property to know which alternative it's getting.
Your Java class also seems poorly designed. Classes should not have this kind of contextual mode, where in one context a field is allowed but in another it is not.
Since this is primarily a smell with your design, and not serialization logic, the best approach would probably be to correct your Java class hierarchy:
class BaseClass {
}

class SubClassWithItems extends BaseClass {
    private List<TwoDArrayItem> items;

    public List<TwoDArrayItem> getItems() {
        return items;
    }

    @JsonProperty("Json")
    public void setItems(List<TwoDArrayItem> items) {
        this.items = items;
    }
}

class SubClassWithType extends BaseClass {
    private String type;

    public String getType() {
        return type;
    }

    @JsonProperty("Json")
    public void setType(String type) {
        this.type = type;
    }
}
That way your class does not have a different set of fields based on some runtime state. If runtime state is driving what fields your class contains, you're not much better off than with just a Map.
If you can't change that, you're left with custom serialization.

Can you explain this thing about encapsulation?

In response to the question "What is your longest-held programming assumption that turned out to be incorrect?", one of the wrong assumptions was:
That private member variables were private to the instance and not the class.
(Link)
I couldn't catch what he's talking about; can anyone explain what is wrong/right about that, with an example?
public class Example {
    private int a;

    public int getOtherA(Example other) {
        return other.a;
    }
}
Like this. As you can see, private doesn't protect the instance member from being accessed by another instance.
BTW, this is not all bad as long as you are a bit careful.
If private didn't work like in the above example, it would be cumbersome to write equals() and other such methods.
Here's the equivalent of Michael Borgwardt's answer for when you are not able to access the private fields of the other object:
public class MutableInteger {
    private int value;

    // Lots of stuff goes here

    public boolean equals(Object o) {
        if (!(o instanceof MutableInteger)) { return false; }
        MutableInteger other = (MutableInteger) o;
        return other.valueEquals(this.value); // <------------
    }

    @Override // This method would probably also be declared in an interface
    public boolean valueEquals(int oValue) {
        return this.value == oValue;
    }
}
Nowadays this is familiar to Ruby programmers but I have been doing this in Java for a while. I prefer not to rely on access to another object's private fields. Remember that the other object may belong to a subclass, which could store the value in a different object field, or in a file or database etc.
Example code (Java):
public class MutableInteger {
    private int value;

    // Lots of stuff goes here

    public boolean equals(Object o) {
        if (!(o instanceof MutableInteger)) { return false; }
        MutableInteger other = (MutableInteger) o;
        return this.value == other.value; // <------------
    }
}
If the assumption "private member variables are private to the instance" were correct, the marked line would cause a compiler error, because the other.value field is private and part of a different object than the one whose equals() method is being called.
But since in Java (and most other languages that have the visibility concept) private visibility is per-class, access to the field is allowed to all code of the MutableInteger class, regardless of what instance was used to invoke it.

LINQ to SQL validate all fields, not just stop at first failed field

I just started using LINQ to SQL classes, and really like how this helps me write readable code.
In the documentation, typical examples state that to do custom validation, you create a partial class like so:
partial class Customer
{
    partial void OnCustomerIDChanging(string value)
    {
        if (value == "BADVALUE") throw new NotImplementedException("CustomerID Invalid");
    }
}
And similarly for other fields...
And then in the code-behind, I put something like this to display the error message and keep the user on the same page so they can correct the mistake.
public void CustomerListView_OnItemInserted(object sender, ListViewInsertedEventArgs e)
{
    string errorString = "";
    if (e.Exception != null)
    {
        e.KeepInInsertMode = true;
        errorString += e.Exception.Message;
        e.ExceptionHandled = true;
    }
    else errorString += "Successfully inserted Customer Data" + "\n";
    errorMessage.Text = errorString;
}
Okay, that's easy, but then it stops validating the rest of the fields as soon as the first exception is thrown!! Meaning if the user made more than one mistake, they will only be notified of the first error.
Is there another way to check all the input and show the errors for each field?
Any suggestions appreciated, thanks.
This looks like a job for the Enterprise Library Validation Application Block (VAB). VAB has been designed to return all errors. Besides this, it doesn't throw an exception, so you can simply ask it to validate the type for you.
When you decide to use the VAB, I advise you to -not- use the OnXXXChanging and OnValidate methods of LINQ to SQL. It's best to override the SubmitChanges(ConflictMode) method on the DataContext class to call into VAB's validation API. This keeps your validation logic out of your business entities, which keeps your entities clean.
Look at the following example:
public partial class NorthwindDataContext
{
    public ValidationResult[] Validate()
    {
        return (
            from entity in this.GetChangedEntities()
            let type = entity.GetType()
            let validator = ValidationFactory.CreateValidator(type)
            let results = validator.Validate(entity)
            where !results.IsValid
            from result in results
            select result).ToArray();
    }

    public override void SubmitChanges(ConflictMode failureMode)
    {
        ValidationResult[] invalidResults = this.Validate();
        if (invalidResults.Length > 0)
        {
            // You should define this exception type
            throw new ValidationException(invalidResults);
        }
        base.SubmitChanges(failureMode);
    }

    private IEnumerable<object> GetChangedEntities()
    {
        ChangeSet changes = this.GetChangeSet();
        return changes.Inserts.Concat(changes.Updates);
    }
}
[Serializable]
public class ValidationException : Exception
{
    public ValidationException(IEnumerable<ValidationResult> results)
        : base("There are validation errors.")
    {
        this.Results = new ReadOnlyCollection<ValidationResult>(
            results.ToArray());
    }

    public ReadOnlyCollection<ValidationResult> Results
    {
        get; private set;
    }
}
Calling the Validate() method will return a collection of all errors, but rather than calling Validate(), I'd simply call SubmitChanges() when you're ready to persist. SubmitChanges() will now check for errors and throw an exception when one of the entities is invalid. Because the list of errors is sent to the ValidationException, you can iterate over the errors higher up the call stack, and present them to the user, as follows:
try
{
    db.SubmitChanges();
}
catch (ValidationException vex)
{
    ShowErrors(vex.Results);
}

private static void ShowErrors(IEnumerable<ValidationResult> errors)
{
    foreach (var error in errors)
    {
        Console.WriteLine("{0}: {1}", error.Key, error.Message);
    }
}
When you use this approach you make sure that your entities are always validated before saving them to the database.
Here is a good article that explains how to integrate VAB with LINQ to SQL. You should definitely read it if you want to use VAB with LINQ to SQL.
Not with LINQ. Presumably you would validate the input before giving it to LINQ.
What you're seeing is natural behaviour with exceptions.
I figured it out. Instead of throwing an exception at the first failed validation, I store an error message in a class with a static variable. To do this, I extend the DataContext class like this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

/// <summary>
/// Summary description for SalesClassesDataContext
/// </summary>
public partial class SalesClassesDataContext
{
    public class ErrorBox
    {
        private static List<string> Messages = new List<string>();

        public void addMessage(string message)
        {
            Messages.Add(message);
        }

        public List<string> getMessages()
        {
            return Messages;
        }
    }
}
In the classes corresponding to each table, I inherit the newly defined class like this:
public partial class Customer : SalesClassesDataContext.ErrorBox
Only in the OnValidate method do I throw an exception, if the number of errors is not 0. That way the insert is not attempted, and the user is kept on the same input page without losing the data they entered.
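The post doesn't show the final hook itself, so here is a minimal sketch of how it could look based on that description (it replaces the throwing OnCustomerIDChanging from the question; the exception type and message formatting are assumptions):
public partial class Customer : SalesClassesDataContext.ErrorBox
{
    partial void OnCustomerIDChanging(string value)
    {
        // Record the problem instead of throwing, so the remaining fields still get validated.
        if (value == "BADVALUE") addMessage("CustomerID Invalid");
    }

    partial void OnValidate(System.Data.Linq.ChangeAction action)
    {
        // Only throw once every per-field check has run, so the message carries all errors
        // rather than just the first one.
        if (getMessages().Count > 0)
        {
            throw new InvalidOperationException(string.Join("\n", getMessages().ToArray()));
        }
    }
}
Note that because Messages is static it is shared by every entity (and, in a web application, across requests), so the list would need to be cleared after each submit attempt.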