Maximum 'Units of Work' in one page request? - linq-to-sql

It's not one, is it? I have a method that gets five lists from different repositories. Each call opens and closes a new DataContext. Is this OK to do, or should I wrap everything in one DataContext? In this case it is not straightforward to use the same DataContext, but I am afraid that opening and closing numerous DataContexts in one page request is not good.

Here is an article on just that subject:
Linq to SQL DataContext Lifetime Management
He recommends one per request, and I have implemented that pattern in a few applications; it has worked well for me.
He talks a little about that in his article. His quick and dirty version makes a reference to System.Web and does something like this:
private TDataContext _DataContext;

public TDataContext DataContext
{
    get
    {
        if (_DataContext == null)
        {
            if (HttpContext.Current != null)
            {
                // In a web app, scope the DataContext to the current request.
                if (HttpContext.Current.Items[DataContextKey] == null)
                {
                    HttpContext.Current.Items[DataContextKey] = new TDataContext();
                }
                _DataContext = (TDataContext)HttpContext.Current.Items[DataContextKey];
            }
            else
            {
                // Outside a web context (e.g. unit tests), just create one.
                _DataContext = new TDataContext();
            }
        }
        return _DataContext;
    }
}
But then he recommends you take the next step: get rid of the reference to System.Web, use dependency injection, and create your own IContainer that can determine the lifespan of your DataContext based on whether you're running in a unit test, a web application, etc.
Example:
public class YourRepository
{
    public YourRepository(IContainer<DataContext> container)
    {
    }
}
Then replace HttpContext.Current.Items[DataContextKey] with _Container[DataContextKey].
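For illustration, here is one way such a container abstraction might look; the interface shape and both implementations are my assumptions, not the article's actual code:

using System.Web;

public interface IContainer<T> where T : class, new()
{
    T this[string key] { get; }
}

// Web implementation: one instance per request, stored in HttpContext.Items.
public class WebContainer<T> : IContainer<T> where T : class, new()
{
    public T this[string key]
    {
        get
        {
            if (HttpContext.Current.Items[key] == null)
            {
                HttpContext.Current.Items[key] = new T();
            }
            return (T)HttpContext.Current.Items[key];
        }
    }
}

// Unit-test implementation: one instance per container, no System.Web needed.
public class TestContainer<T> : IContainer<T> where T : class, new()
{
    private T _instance;

    public T this[string key]
    {
        get
        {
            if (_instance == null)
            {
                _instance = new T();
            }
            return _instance;
        }
    }
}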
Hope this helps...

I use one unit of work per request and built an IHttpModule that manages the unit of work's lifecycle, creating it at the start of the request and disposing it afterwards. The current unit of work is stored in HttpContext.Current.Items (hidden in Local.Data).
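A minimal sketch of such a module, assuming MyDataContext as the unit of work and an illustrative key name (the original module's details aren't shown here):

using System;
using System.Web;

public class UnitOfWorkModule : IHttpModule
{
    private const string UnitOfWorkKey = "UnitOfWork.Current";

    public void Init(HttpApplication application)
    {
        // Create the unit of work when the request starts...
        application.BeginRequest += (sender, e) =>
            HttpContext.Current.Items[UnitOfWorkKey] = new MyDataContext();

        // ...and dispose it when the request ends.
        application.EndRequest += (sender, e) =>
        {
            var unitOfWork = HttpContext.Current.Items[UnitOfWorkKey] as IDisposable;
            if (unitOfWork != null)
                unitOfWork.Dispose();
        };
    }

    public void Dispose() { }
}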


MVVMCross 4.0 Xamarin Forms Page not Found

We've been using MVVMCross for the last 18 months. Great stuff. But we're looking to migrate from Xamarin.iOS to Xamarin.Forms in an effort to speed up development time.
We have a PCL with our ViewModels, but we would like to have our Views (Pages) in a separate PCL library, to allow parallel development with the native application.
MVVMCross cannot seem to locate the Page if it's located in a separate PCL library, or if it's located in the application. However, if I put the Page in the same PCL as the ViewModels, things work like a champ.
I've tried putting the following code in our Setup.cs
protected override IEnumerable<Assembly> GetViewAssemblies()
{
    var list = new List<Assembly>();
    list.AddRange(base.GetViewAssemblies());
    list.Add(typeof(NuSales.Forms.Pages.TestPage).GetTypeInfo().Assembly);
    return list;
}
But, still no joy.
Any hints on how to fix the resolver to find the View (Page)?
Thanks
Looking at https://github.com/MvvmCross/MvvmCross-Forms/blob/master/MvvmCross.Forms.Presenter.Core/MvxFormsPageLoader.cs#L44
protected virtual Type GetPageType(string pageName)
{
    return _request.ViewModelType.GetTypeInfo().Assembly.CreatableTypes()
        .FirstOrDefault(t => t.Name == pageName);
}
... I'd say you need to override the default IMvxFormsPageLoader to change that single Assembly lookup.
...or (for bonus points) you could send in a Pull Request that changes the default behaviour to use the view assemblies collection - and it could also store a Dictionary to avoid multiple Reflection passes and to speed up lookup times.
Hopefully I'm doing this right in terms of Stack Overflow etiquette. Using Stuart's suggestion, a quick fix is:
Create a FormsPageLoader like the one below.
public class MyFormsPageLoader : MvxFormsPageLoader
{
    public MyFormsPageLoader()
    {
    }

    protected override Type GetPageType(string pageName)
    {
        return typeof(NuSales.Forms.Pages.TestPage).GetTypeInfo().Assembly
            .CreatableTypes()
            .FirstOrDefault(t => t.Name == pageName);
    }
}
Then you need to register it. I did it in my app's Initialize code:
public class FormsApp : MvxApplication
{
    public override void Initialize()
    {
        base.Initialize();
        Mvx.RegisterSingleton(typeof(IMvxFormsPageLoader), new MyFormsPageLoader());
        RegisterAppStart<TestViewModel>();
    }
}

Using the same implementation with different lifestyles in Windsor

I have a class like this:
public class FooRepo : IFooRepo
{
    private readonly IDbContext context; // field type assumed from factory.GetContext()

    public FooRepo(IDbContextFactory factory)
    {
        context = factory.GetContext();
    }
}
In my app I register everything with LifeStyle.PerWebRequest, but now I need to call one method which uses this IFooRepo like this (because it's going to take about an hour):
{
    ...
    ThreadPool.QueueUserWorkItem(s => RequestReport(number));
    ...
}
private void RequestReport(int number)
{
    // IFooRepo needed here
}
The problem is that I need this IFooRepo with the PerWebRequest lifestyle most of the time, but here, in the thread, I also need it to stay alive. It also has a dependency on IDbContextFactory, which I don't know whether I need to register in a different way as well.
Well, then register your FooRepo twice and set up a service override so that whoever uses it on the ThreadPool gets the other component. Easy.
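A hedged sketch of that double registration with a service override, assuming Castle Windsor 3.x registration APIs (exact overloads vary by version); ReportJob and the component name are made up for the example:

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

// Hypothetical consumer that does the hour-long work on the ThreadPool.
public class ReportJob
{
    private readonly IFooRepo _repo;

    public ReportJob(IFooRepo repo)
    {
        _repo = repo;
    }
}

public class RepositoriesInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(
            // Default registration: per web request, used by normal web code.
            Component.For<IFooRepo>().ImplementedBy<FooRepo>()
                .LifestylePerWebRequest(),

            // Same implementation again, transient, so an instance created
            // for background work is not tied to (or disposed with) a request.
            Component.For<IFooRepo>().ImplementedBy<FooRepo>()
                .Named("fooRepo.background")
                .LifestyleTransient(),

            // The ThreadPool consumer gets the transient repo via a
            // service override; everyone else keeps the default.
            Component.For<ReportJob>()
                .DependsOn(Dependency.OnComponent(typeof(IFooRepo), "fooRepo.background")));

        // If IDbContextFactory is also registered per web request, it needs
        // the same treatment, since the background FooRepo depends on it.
    }
}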

Hard to update an Entity created by another LINQ to SQL context

Why does this keep bugging me all day?
I have an entity with several references, which I get from a context that I then dispose. I make some changes and try to SubmitChanges(). Calling SubmitChanges() without .Attach() seems to simply do nothing, and when using .Attach() I get the exception:
An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported.
Any ideas?
L2S is very picky about updating an entity that came from a different DataContext. In fact, you cannot do it unless you 'detach' it first from the context it came from. There are a couple of different ways of detaching an entity. One of them is shown below. This code would go in your entity class.
public virtual void Detach()
{
    // Clearing these event handlers severs the entity's link to the
    // change tracker of the DataContext that loaded it.
    PropertyChanging = null;
    PropertyChanged = null;
}
In addition to this, you can also serialize your entity using WCF-based serialization. Something like this:
object ICloneable.Clone()
{
    // Round-tripping through the DataContractSerializer yields a copy
    // with no ties to the original DataContext.
    var serializer = new DataContractSerializer(GetType());
    using (var ms = new System.IO.MemoryStream())
    {
        serializer.WriteObject(ms, this);
        ms.Position = 0;
        return serializer.ReadObject(ms);
    }
}
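To tie the pieces together, here is a hypothetical round trip using the Detach method above; MyDataContext and Customer are illustrative, and Attach(entity, true) assumes the table has a version/timestamp column:

public void RenameCustomer(int id, string newName)
{
    Customer customer;
    using (var readContext = new MyDataContext())
    {
        customer = readContext.Customers.Single(c => c.Id == id);
    }

    customer.Detach();        // sever ties to the (now disposed) loading context
    customer.Name = newName;

    using (var writeContext = new MyDataContext())
    {
        // Attach as modified; LINQ to SQL uses the version column
        // for its optimistic concurrency check on SubmitChanges.
        writeContext.Customers.Attach(customer, true);
        writeContext.SubmitChanges();
    }
}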

Linq to SQL and concurrency with Rob Conery repository pattern

I have implemented a DAL using Rob Conery's spin on the repository pattern (from the MVC Storefront project), where I map database objects to domain objects using LINQ and use LINQ to SQL to actually get the data.
This is all working wonderfully, giving me the full control over the shape of my domain objects that I want, but I have hit a problem with concurrency that I thought I'd ask about here. I have concurrency working, but the solution feels like it might be wrong (just one of those glitchy feelings).
The basic pattern is:
private MyDataContext _dataContext;
private Table<Task> _tasks;

public Repository(MyDataContext dataContext)
{
    _dataContext = dataContext;
}

public IEnumerable<Domain.Task> GetTasks()
{
    _tasks = _dataContext.Tasks;
    return from t in _tasks
           select new Domain.Task
           {
               Name = t.Name,
               Id = t.TaskId,
               Description = t.Description
           };
}

public void SaveTask(Domain.Task task)
{
    Task dbTask = null;
    // Logic for new tasks omitted...
    dbTask = (from t in _tasks
              where t.TaskId == task.Id
              select t).SingleOrDefault();

    dbTask.Description = task.Description;
    dbTask.Name = task.Name;

    _dataContext.SubmitChanges();
}
So with that implementation I've lost concurrency tracking because of the mapping to the domain task. I get it back by storing the private Table<Task>, which is my DataContext's list of tasks at the time of getting the original task.
I then update the tasks from this stored table and save what I've updated.
This is working - I get change conflict exceptions raised when there are concurrency violations, just as I want.
However, it just screams to me that I've missed a trick.
Is there a better way of doing this?
I've looked at the .Attach method on the DataContext, but that appears to require storing the original version in a similar way to what I'm already doing.
I also know that I could avoid all this by doing away with the domain objects and letting the LINQ to SQL generated objects go all the way up my stack - but I dislike that just as much as I dislike the way I'm handling concurrency.
I worked through this and found the following solution. It works in all the test cases I (and more importantly, my testers!) can think of.
I am using the .Attach() method on the DataContext, and a TimeStamp column. This works fine the first time you save a particular primary key back to the database, but after that the DataContext throws a System.Data.Linq.DuplicateKeyException: "Cannot add an entity with a key that is already in use."
The workaround I created was to add a dictionary that stores the item the first time I attach it; every subsequent time I save, I reuse that item.
Example code is below. I do wonder if I've missed any tricks - concurrency is pretty fundamental, so the hoops I'm jumping through seem a little excessive.
Hopefully the below proves useful, or someone can point me towards a better implementation!
private Dictionary<int, Payment> _attachedPayments = new Dictionary<int, Payment>();

public void SavePayments(IList<Domain.Payment> payments)
{
    Dictionary<Payment, Domain.Payment> savedPayments =
        new Dictionary<Payment, Domain.Payment>();

    // Items with a zero id are new
    foreach (Domain.Payment p in payments.Where(p => p.PaymentId != 0))
    {
        // The list of attached payments that works around the linq datacontext
        // duplicate key exception
        if (_attachedPayments.ContainsKey(p.PaymentId)) // Already attached
        {
            Payment dbPayment = _attachedPayments[p.PaymentId];
            // Just a method that maps domain to datacontext types
            MapDomainPaymentToDBPayment(p, dbPayment, false);
            savedPayments.Add(dbPayment, p);
        }
        else // Attach this payment to the datacontext
        {
            Payment dbPayment = new Payment();
            MapDomainPaymentToDBPayment(p, dbPayment, true);
            _dataContext.Payments.Attach(dbPayment, true);
            savedPayments.Add(dbPayment, p);
        }
    }

    // There is some code snipped but this is just brand new payments
    foreach (var payment in newPayments)
    {
        Domain.Payment payment1 = payment;
        Payment newPayment = new Payment();
        MapDomainPaymentToDBPayment(payment1, newPayment, false);
        _dataContext.Payments.InsertOnSubmit(newPayment);
        savedPayments.Add(newPayment, payment);
    }

    try
    {
        _dataContext.SubmitChanges();

        // Grab the Timestamp into the domain object
        foreach (Payment p in savedPayments.Keys)
        {
            savedPayments[p].PaymentId = p.PaymentId;
            savedPayments[p].Timestamp = p.Timestamp;
            _attachedPayments[savedPayments[p].PaymentId] = p;
        }
    }
    catch (ChangeConflictException)
    {
        foreach (ObjectChangeConflict occ in _dataContext.ChangeConflicts)
        {
            Payment entityInConflict = (Payment)occ.Object;
            // Use the datacontext refresh so that I can display the new values
            _dataContext.Refresh(RefreshMode.OverwriteCurrentValues, entityInConflict);
            _attachedPayments[entityInConflict.PaymentId] = entityInConflict;
        }
        throw;
    }
}
I would look at trying to utilise the .Attach method by passing the 'original' and 'updated' objects, thus achieving true optimistic concurrency checking from LINQ to SQL; a hedged sketch is below. This, IMO, would be preferable to using version or datetime stamps in either the DBML objects or your domain objects. I'm not sure how MVC allows for this idea of persisting the 'original' data, however. I've been trying to investigate the validation scaffolding in the hope that it stores the 'original' data, but I suspect it is only as good as the most recent post (and/or failed validation), so that idea may not work.
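Something like this, where MyDataContext and the mapping details are illustrative rather than taken from the question's code; LINQ to SQL diffs the two snapshots and generates the optimistic-concurrency WHERE clause from the original values (subject to each column's UpdateCheck setting):

public void SaveTask(Domain.Task updated, Domain.Task original)
{
    using (var context = new MyDataContext())
    {
        var dbUpdated = new Task
        {
            TaskId = updated.Id,
            Name = updated.Name,
            Description = updated.Description
        };
        var dbOriginal = new Task
        {
            TaskId = original.Id,
            Name = original.Name,
            Description = original.Description
        };

        // Attach 'updated' as modified relative to 'original'.
        context.Tasks.Attach(dbUpdated, dbOriginal);

        // Throws ChangeConflictException if another user changed the row.
        context.SubmitChanges();
    }
}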
Another crazy idea I had was this: override GetHashCode() for all of your domain objects so that the hash represents the unique set of data for that object (minus the ID, of course). Then, either manually or with a helper, bury that hash in a hidden field in the HTML POST form and send it back to your service layer with your updated domain object. Do the concurrency checking in your service layer or data layer by comparing the original hash with a newly extracted domain object's hash, but be aware that you need to check for and raise concurrency exceptions yourself. It's nice to use the DBML functions, but the idea of abstracting away the data layer is so as not to depend on a particular implementation's features. So having full control of the optimistic concurrency checking on your domain objects in your service layer (for example) seems like a good approach to me. A sketch of the idea follows.
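A rough sketch of the hash idea, with an illustrative domain object and a hypothetical LoadCurrentTask helper standing in for the data layer:

public class Task
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }

    // Hash over the data fields (not the ID) so the value changes
    // whenever any tracked field changes.
    public override int GetHashCode()
    {
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + (Name == null ? 0 : Name.GetHashCode());
            hash = hash * 31 + (Description == null ? 0 : Description.GetHashCode());
            return hash;
        }
    }
}

public class TaskService
{
    // The page posts back the hash it was originally rendered with.
    public void SaveTask(Task updated, int originalHash)
    {
        Task current = LoadCurrentTask(updated.Id);

        // If the row changed since the form was rendered, the hashes differ.
        if (current.GetHashCode() != originalHash)
            throw new InvalidOperationException("Concurrency conflict detected.");

        // ...otherwise map to the LINQ to SQL entity and SubmitChanges() as usual.
    }

    private Task LoadCurrentTask(int id)
    {
        // Hypothetical: fetch the current row from the data layer.
        throw new NotImplementedException();
    }
}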

Access to global application settings

A database application that I'm currently working on stores all sorts of settings in the database. Most of those settings are there to customize certain business rules, but there's also some other stuff in there.
The app contains objects that specifically do a certain task, e.g., a certain complicated calculation. Those non-UI objects are unit-tested, but also need access to lots of those global settings. The way we've implemented this right now is by giving the objects properties that are filled by the Application Controller at runtime. When testing, we create the objects in the test and fill in values for testing (not from the database).
This works, in any case much better than having all those objects depend on some global Settings object - which of course effectively makes unit testing impossible :) A disadvantage is that you sometimes need to set a dozen properties, or that you need to let those properties 'percolate' into sub-objects.
So the general question is: how do you provide access to global application settings in your projects, without the need for global variables, while still being able to unit test your code? This must be a problem that's been solved 100's of times...
(Note: I'm not too much of an experienced programmer, as you'll have noticed; but I love to learn! And of course, I've already done research into this topic, but I'm really looking for some first-hand experiences)
You could use Martin Fowler's ServiceLocator pattern. In PHP it could look like this:
class ServiceLocator {
    private static $soleInstance;
    private $globalSettings;

    public static function load($locator) {
        self::$soleInstance = $locator;
    }

    public function setGlobalSettings($globalSettings) {
        $this->globalSettings = $globalSettings;
    }

    public static function globalSettings() {
        if (!isset(self::$soleInstance->globalSettings)) {
            self::$soleInstance->setGlobalSettings(new GlobalSettings());
        }
        return self::$soleInstance->globalSettings;
    }
}
Your production code then initializes the service locator like this:
ServiceLocator::load(new ServiceLocator());
In your test code, you insert your mock settings like this:
$locator = new ServiceLocator();
$locator->setGlobalSettings(new MockGlobalSettings());
ServiceLocator::load($locator);
It's a repository for singletons that can be exchanged for testing purposes.
I like to model my configuration access off of the Service Locator pattern. This gives me a single point to get any configuration value I need, and by putting it outside the application in a separate library, it allows reuse and testability. Here is some sample code; I am not sure what language you are using, but I wrote it in C#.
First I create a generic class that models my ConfigurationItem.
public class ConfigurationItem<T>
{
    private T item;

    public ConfigurationItem(T item)
    {
        this.item = item;
    }

    public T GetValue()
    {
        return item;
    }
}
Then I create a class that exposes public static readonly fields for the configuration items. Here I am just reading the ConnectionStringSettings from a config file, which is just XML. Of course, for more items, you can read the values from any source.
public class ConfigurationItems
{
    public static readonly ConfigurationItem<ConnectionStringSettings> ConnectionSettings =
        new ConfigurationItem<ConnectionStringSettings>(RetrieveConnectionString());

    private static ConnectionStringSettings RetrieveConnectionString()
    {
        // In .NET, we store our connection string in the application/web config file.
        // We can access those values through the ConfigurationManager class.
        return ConfigurationManager.ConnectionStrings[ConfigurationManager.AppSettings["ConnectionKey"]];
    }
}
Then when I need a ConfigurationItem for use, I call it like this:
ConfigurationItems.ConnectionSettings.GetValue();
And it will return a type-safe value, which I can then cache or do whatever I want with.
Here's a sample test:
[TestFixture]
public class ConfigurationItemsTest
{
    [Test]
    public void ShouldBeAbleToAccessConnectionStringSettings()
    {
        ConnectionStringSettings item = ConfigurationItems.ConnectionSettings.GetValue();
        Assert.IsNotNull(item);
    }
}
Hope this helps.
Usually this is handled by an INI file or an XML configuration file. Then you just have a class that reads the settings when needed.
.NET has this built in with the ConfigurationManager classes (a small sketch is below), but it's quite easy to implement yourself: just read text files, or load XML into a DOM, or parse them by hand in code.
Having configuration in the database is OK, but it does tie you to the database and creates an extra dependency for your app that INI/XML files avoid.
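For example, a minimal sketch using the built-in ConfigurationManager; the "MaxRetries" key and its default value are made up for the illustration, and a reference to System.Configuration is required:

using System.Configuration;

public static class MyAppSettings
{
    public static int MaxRetries
    {
        get
        {
            // Reads <add key="MaxRetries" value="3" /> from the
            // <appSettings> section of app.config / web.config.
            string raw = ConfigurationManager.AppSettings["MaxRetries"];
            return raw == null ? 3 : int.Parse(raw);
        }
    }
}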
I did this:
public class MySettings
{
    public static double Setting1
    {
        get { return SettingsCache.Instance.GetDouble("Setting1"); }
    }

    public static string Setting2
    {
        get { return SettingsCache.Instance.GetString("Setting2"); }
    }
}
I put this in a separate infrastructure module to remove any issues with circular dependencies.
Doing this, I am not tied to any specific configuration method, and I have no magic strings wreaking havoc in my application code. A hypothetical sketch of the SettingsCache singleton this relies on follows.
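This is my assumption of what such a SettingsCache might look like, not the original author's code; in the real application the constructor would load from the settings table rather than hard-coded values:

using System.Collections.Generic;
using System.Globalization;

public class SettingsCache
{
    private static readonly SettingsCache _instance = new SettingsCache();
    public static SettingsCache Instance { get { return _instance; } }

    private readonly Dictionary<string, string> _values;

    private SettingsCache()
    {
        // Load all settings once and serve lookups from memory.
        // Hard-coded here only to keep the sketch self-contained.
        _values = new Dictionary<string, string>
        {
            { "Setting1", "1.5" },
            { "Setting2", "hello" }
        };
    }

    public double GetDouble(string key)
    {
        return double.Parse(_values[key], CultureInfo.InvariantCulture);
    }

    public string GetString(string key)
    {
        return _values[key];
    }
}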