LINQ to SQL in multithreading by wrapping the LINQ query in Task.Run() - linq-to-sql

Is it good to wrap a LINQ to SQL query in the Task.Run method as shown below?
var keywordlistquery = await Task.Run(() =>
{
    using (DataContext context = new DataContext(connection))
    {
        context.ObjectTrackingEnabled = false;
        return from keyword in context.GetTable<KeywordsList>()
               select new
               {
                   keyword.search_text,
                   keyword.search_keyword
               };
    }
});
Is the above code thread safe? Will it have any issues in production? Is there a better way of writing it?

A good answer here depends a lot on what the intent of the code is.
In general though, keep in mind that LINQ to SQL was built, and then discontinued, before native async and await patterns were implemented in .NET.
So, unless you are very comfortable with maintaining async tasks manually, it might be a good idea not to try to use async with LINQ to SQL at all. Odds are, you will not get much of a performance boost unless the server is expected to handle very high levels of request concurrency, and manually mucking around with async tasks is a fantastic way to introduce hard-to-detect bugs that end up accidentally blocking request threads.
If you do need to handle async in code like this, there are a couple of solutions.
First, understand that the code above creates a query but doesn't execute it. What it returns is an IQueryable... basically, think of it as a SQL statement that hasn't been run. LINQ to SQL will not run the query until a method like ToArray or ToList is called, or until it is used in a foreach loop or similar.
Also, it becomes difficult to work with anonymous types like this when you are using return statements. You will likely need to create DTO classes and use select projections to instantiate them.
Second, you are wrapping the context in a using block (which is good practice), but if you return the query before it is actually executed, then the context gets disposed. The caller will get an IQueryable, but when it tries to use it, it will get an exception (an ObjectDisposedException) because the context has been disposed.
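To make that failure mode concrete, here is a minimal sketch, reusing the KeywordsList table and connection from the question; the helper method name is purely illustrative and no Task.Run is involved:
// Minimal sketch of the disposal problem (illustrative names):
IQueryable<string> BuildKeywordQuery()
{
    using (var context = new DataContext(connection))
    {
        // The query is only *defined* here; nothing has hit the database yet.
        return context.GetTable<KeywordsList>().Select(k => k.search_text);
    }   // context is disposed here, before the query has ever executed
}

var query = BuildKeywordQuery();
var rows = query.ToArray();   // throws ObjectDisposedException: the query finally runs,
                              // but its DataContext is already gone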
So... there are two options here, depending on whether this code is intended to return actual data or just a query that the caller can then further modify.
Case 1: return data
public async Task<object> DoThings(CancellationToken token)
{
    var keywordlistquery = await Task.Run(() =>
    {
        using (var context = new DataClasses1DataContext())
        {
            context.ObjectTrackingEnabled = false;
            return (from keyword in context.GetTable<KeywordsList>()
                    select new
                    {
                        keyword.search_text,
                        keyword.search_keyword
                    }).ToArray();
        }
    }, token);
    return keywordlistquery;
}
Note here that the method itself should be async, and you should always try to use a cancellation token when possible. The call to ToArray forces the query to execute now, inside the using block, and return the data. Keep in mind, though, that this will return the WHOLE table. If the caller wants to supply a where clause, the code will still load all the data.
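If the caller does need to filter, one option, sketched here rather than taken from the original code, is to accept the filter as a parameter and apply it before the query is materialized, so it becomes part of the generated TSQL instead of being applied in memory. The method and parameter names below are hypothetical:
public async Task<object> GetKeywords(string prefix, CancellationToken token)
{
    return await Task.Run(() =>
    {
        using (var context = new DataClasses1DataContext())
        {
            context.ObjectTrackingEnabled = false;
            return context.GetTable<KeywordsList>()
                          .Where(k => k.search_text.StartsWith(prefix))        // translated to a WHERE clause
                          .Select(k => new { k.search_text, k.search_keyword })
                          .ToArray();                                          // executes while the context is still alive
        }
    }, token);
}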
Case 2: return an IQueryable
In case 2, you want your method to return just the query. This way, the caller can modify the query before it gets executed. This allows the caller to add statements to include a where clause or order the results or whatever; and have those statements included in the TSQL that gets generated.
In this case, the trick is that the caller must be in control of the lifespan of the data context, and since the method isn't actually executing results, it doesn't need to be async.
public async Task CallingMethod()
{
    using (var context = new DataClasses1DataContext())
    {
        var token = new CancellationToken();
        context.ObjectTrackingEnabled = false;
        var query = DoThings(context);
        var result = await Task.Run(() => query.ToArray(), token);
    }
}
public IQueryable<object> DoThings(DataContext context)
{
    var keywordlistquery = from keyword in context.GetTable<KeywordsList>()
                           select new
                           {
                               keyword.search_text,
                               keyword.search_keyword
                           };
    return keywordlistquery;
}
As I mentioned before, though, selecting a new anonymous type doesn't work that well in cases like this. It would be better to create a DTO class and select new instances of that, or return the whole table.
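For example, a sketch of that DTO approach; KeywordDto is a hypothetical class whose properties simply mirror the columns used above:
public class KeywordDto
{
    public string SearchText { get; set; }
    public string SearchKeyword { get; set; }
}

public IQueryable<KeywordDto> DoThings(DataContext context)
{
    // The projection to the DTO is still translatable by LINQ to SQL,
    // because it only uses object-initializer property assignments.
    return from keyword in context.GetTable<KeywordsList>()
           select new KeywordDto
           {
               SearchText = keyword.search_text,
               SearchKeyword = keyword.search_keyword
           };
}
The caller keeps the same control over the query as in Case 2, but gets a strongly typed result instead of object.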

Related

Is it possible to convert a saga iterator to a regular promise?

I'm building an abstraction layer for the keepassxc webextension. It uses redux-saga channels to make Chrome messaging look synchronous. It's working (un)surprisingly well. However, I want to completely abstract away redux-saga so that it looks like a normal function returning a Promise.
tl;dr
KeePassXC-browser will be a browser extension that allows retrieving passwords stored in the KeePassXC app from the browser.
There are two possible communication protocols: HTTP and NativeClient. So I decided to use a TypeScript interface, and depending on the communication protocol there will be two classes that implement this interface.
Interface:
interface Keepass {
    getDatabaseHash(): Promise<string>;
    getCredentials(origin: string, formUrl: string): Promise<KeepassCredentials[]>;
    associate(): Promise<KeepassAssociation>;
    isAssociated(dbHash: string): Promise<boolean>;
}
The first implementation, representing the HTTP communication protocol, uses the fetch API, which is already Promise based, so the implementation is straightforward and conforms 100% to this interface.
The second implementation, representing the NativeClient protocol, uses redux-saga (effects and channels) to make asynchronous messaging look like a synchronous function call. It's a bit complicated, but it works pretty well and covers edge cases that would be hard to handle any other way, because native messaging is a protocol based on standard input and standard output streams, so requests and responses can be interleaved, out of order, etc.
The actual problem I'm failing to solve is that the second implementation does not implement the interface, because it uses generators, not Promises.
Basically I would like to convert (wrap) a saga iterator function into a function returning a Promise. There is a nice co library that basically does this for normal generators, but it doesn't seem to work with redux-saga.
function* someGenerator() {
    const state = yield select(); // execution freeze here when called from wrapper
    const result = yield call(someEffect);
    return result;
}
function wrapper() {
    return co(someGenerator); // returns Promise
}
Is this possible? If so, what am I doing wrong?
Redux-saga is based on generator functions for a special reason: to allow splitting asynchronous actions into separate yielded parts and managing them from one endpoint, the internal saga process manager. A Promise, by contrast, is a thing in itself and can't be partially executed. In other, simplified words: Promises manage the control flow in which they are located, while generators are managed by an outer control flow.
yield select(); // execution freeze here when called from wrapper
Your main misconception is assuming that select actually performs some async operation. No, it just pauses the someGenerator function at that point and transfers control to the redux-saga engine, which knows what to do with the returned value and may start an async process (or may not, it does not matter).
When the process is done, the saga engine resumes the generator and passes the return value to it.
You can easily see this in the source code of select (https://github.com/redux-saga/redux-saga/blob/master/src/internal/io.js#L139). It just returns an object with a certain structure that the saga engine understands; the engine then performs the real action and resumes your generator via generatorName.next(resultValue).
UPD: Purely theoretically, you can wrap it in a re-assignable promise, but it's not a usable approach:
// Your library code
function deferredPromise() {
    let resolver = null;
    const promise = new Promise(resolve => (resolver = resolve));
    return [
        resolver,
        promise
    ];
}

function generateSomeGenerator() {
    let [ selectDoneResolve, selectDonePromise ] = deferredPromise();
    const someGenerator = function* () {
        const state = yield select(); // execution freeze here when called from wrapper
        const [newSelectDoneResolve, newSelectDonePromise] = deferredPromise();
        selectDoneResolve({
            info: state, nextPromise: newSelectDonePromise
        });
        selectDoneResolve = newSelectDoneResolve;
        selectDonePromise = newSelectDonePromise;
        const result = yield call(someEffect);
        return result;
    };
    return {
        someGenerator,
        selectDonePromise
    };
}

const { someGenerator: someGeneratorImpl, selectDonePromise } = generateSomeGenerator();
export const someGenerator = someGeneratorImpl;

// Wrapper for interface
selectDonePromise.then(watchDone);
function watchDone({ info, nextPromise }) {
    // Do something with your info
    nextPromise.then(watchDone);
}

Can you bind a this value in a generator function

Given that you can't use arrow functions when you need to yield in the body, is it possible to set the this value for use inside the body?
I have made myself a database library which extends the "tedious" library and allows me to do something like the following:
const self = this;
db.exec(function*(connection) {
    let sql = 'SELECT * FROM myTable WHERE id = #id';
    let request = connection.request(sql);
    request.addParameter('id', db.TYPE.Int, myIdValue);
    let count = yield connection.execSql(function*() {
        let row = yield;
        while (row) {
            // process row with something like self.processRow(row);
            row = yield;
        }
    });
    if (count > 0) {
        request = connection.request('some more sql');
        // etc
    }
    return something;
}).then(something => {
    // do some more things if the database access was a success
}).catch(error => {
    // deal with any errors.
});
I find I am increasingly needing to access the this value from the outside and am constantly doing the trick of assigning it to self at the head of the surrounding function.
Is it possible to set the this value with something like bind inside the function* (at multiple levels down)?
Since I have full access to the iterators that I use to implement db.exec and connection.execSql, I can change them if necessary to support it.
Generators use this as normal functions would.
You have a few solutions:
use .bind on the generator expression
pass this into the generator as a first/second argument named self
make db.exec take a second argument, thisArg, similar to array methods
If a thisArg parameter is provided to forEach(), it will be passed to callback when invoked, for use as its this value. Otherwise, the value undefined will be passed for use as its this value. The this value ultimately observable by callback is determined according to the usual rules for determining the this seen by a function.
I would suggest going with the last solution.

Adobe AIR SQLResult listener reached, but no data in SQLite

I'm currently working on a project using AIR and Flex that uses a remote data source to persist data locally in a SQLite database. There's a lot of copy-and-paste code that I was trying to alleviate, and since we already use a DAO pattern with several common queries that get passed to it and a type that creates SQLStatement values, I figured I would simplify our codebase even more.
I applied the Adapter pattern to allow a wider range of possible database operations to be performed ([saveOrUpdate, find, findAll, remove] => [selectSingle, selectMultiple, insert, updateSingle, updateMultiple, deleteSingle, deleteMultiple]). I also applied the Strategy pattern to two aspects of the statement runner: the first time for what sort of aggregated type to return (either an Array of records or an ArrayCollection of records) for the selectMultiple function; the second time for creating or not creating historical records (ChangeObjects).
After applying these patterns and testing some refactored code, it worked perfectly with an existing SQLite database. I neglected to test its compatibility with the remote data source, since the saving mechanisms are used during that process as well. After refactoring and simplifying our code and nearing the end of the development cycle, I tested the download.
The application would read data back from the SQLite database, despite the fact that there was actually no data in it according to the sqlite3 command-line tool.
Here is the relevant piece of code:
public class BaseDaoAdaptee {
    private var returnStrategy: ReturnTypeStrategy;
    private var trackingStrategy: TrackingStrategy;
    private var creator: StatementCreator;

    public function insert(queryTitle: String,
                           object: DaoAwareDTO,
                           parameters: Array,
                           mutator: Function,
                           handler: Function): void {
        var statement: SQLStatement;
        mutator = creator.validEmptyFunction(mutator);
        handler = creator.validFault(handler);
        statement = defaultStatement(queryTitle, parameters, handler);
        statement.addEventListener(SQLEvent.RESULT,
            trackingStrategy.onInserted(object, mutator), false, 0, true);
        statement.execute();
    }
}
The code for the TrackingStrategy implementation:
public class TrackedStrategy
    implements TrackingStrategy {

    public function onInserted(object: DaoAwareDTO,
                               callback: Function): Function {
        return function (event: SQLEvent): void {
            var change: Change,
                id: Number = event.target.getResult().lastInsertRowID;
            creator.logger.debug((event.target as SQLStatement).itemClass + ' (id # ' + id + ') inserted');
            (object as Storeable).id = id;
            change = new Creation(object);
            change.register();
            callback();
        };
    }
}
The logger reports that various database records were inserted, and when stopped at a breakpoint in the above closure, "object" has all the proper values. When running a SELECT statement in sqlite3, however, no records are ever returned.
Why would this happen?
It turns out an open transaction on a SQLConnection value was the cause. Got to love team projects. Commit or roll back your SQLConnection transactions!

LINQ to SQL Translation

Depending on how I map my LINQ queries to my domain objects, I get the following error:
The member 'member' has no supported translation to SQL.
This code causes the error:
public IQueryable<ShippingMethod> ShippingMethods {
    get {
        return from sm in _db.ShippingMethods
               select new ShippingMethod(
                   sm.ShippingMethodID,
                   sm.Carrier,
                   sm.ServiceName,
                   sm.RatePerUnit,
                   sm.EstimatedDelivery,
                   sm.DaysToDeliver,
                   sm.BaseRate,
                   sm.Enabled
               );
    }
}
This code works fine:
public IQueryable<ShippingMethod> ShippingMethods
{
    get
    {
        return from sm in _db.ShippingMethods
               select new ShippingMethod
               {
                   Id = sm.ShippingMethodID,
                   Carrier = sm.Carrier,
                   ServiceName = sm.ServiceName,
                   EstimatedDelivery = sm.EstimatedDelivery,
                   DaysToDeliver = sm.DaysToDeliver,
                   RatePerUnit = sm.RatePerUnit,
                   IsEnabled = sm.Enabled,
                   BaseRate = sm.BaseRate
               };
    }
}
This is the test method I am testing with:
[TestMethod]
public void Test_Shipping_Methods() {
    IOrderRepository orderRepo = new SqlOrderRepository();
    var items = orderRepo.ShippingMethods.Where(x => x.IsEnabled);
    Assert.IsTrue(items.Count() > 0);
}
How does the way in which I instantiate my object affect the LINQ to SQL translation?
Thanks
Ben
LINQ to SQL tries to map the entire query to SQL, including all method and property calls. The only exceptions are object initializer syntax (for both anonymous and named types) and extension methods that themselves map to SQL (.Count(), for instance).
Short story: you cannot use non-default constructors in query projections with LINQ to SQL or Entity Framework.
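If the constructor really must be used, one workaround, sketched here under the assumption that loading the full rows into memory is acceptable, is to let the database query run first and call the constructor in memory via AsEnumerable(). Note that the return type drops to IEnumerable<ShippingMethod>, so any later Where() filters in memory rather than in TSQL:
public IEnumerable<ShippingMethod> ShippingMethods
{
    get
    {
        return _db.ShippingMethods
                  .AsEnumerable()   // switch to LINQ to Objects; the constructor runs in memory
                  .Select(sm => new ShippingMethod(
                      sm.ShippingMethodID,
                      sm.Carrier,
                      sm.ServiceName,
                      sm.RatePerUnit,
                      sm.EstimatedDelivery,
                      sm.DaysToDeliver,
                      sm.BaseRate,
                      sm.Enabled));
    }
}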
The most significant issue here is that you are mixing predicate and projection semantics.
Once you project (i.e. with select), it is no longer safe to use the Where extension until you materialize the results with ToList(), ToArray(), or similar. The second case just happens to work because the projection is completely transparent: all you are doing is property assignments, which LINQ to SQL can track and translate. Constructors don't fall into this category; as the error message says, there is no equivalent representation of a constructor invocation in SQL.
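In other words, put the predicate on the database entity before the projection (or keep the object-initializer projection, which LINQ to SQL can see through). A minimal sketch:
var enabled = from sm in _db.ShippingMethods
              where sm.Enabled                   // translated against the table column, not the domain property
              select new ShippingMethod
              {
                  Id = sm.ShippingMethodID,
                  Carrier = sm.Carrier,
                  IsEnabled = sm.Enabled
              };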
Why do you need to do this projection anyway? The whole property could be replaced with just:
return _db.ShippingMethods.AsQueryable();

Linq to SQL and concurrency with Rob Conery repository pattern

I have implemented a DAL using Rob Conery's spin on the repository pattern (from the MVC Storefront project), where I map database objects to domain objects using LINQ and use LINQ to SQL to actually get the data.
This is all working wonderfully, giving me the full control over the shape of my domain objects that I want, but I have hit a problem with concurrency that I thought I'd ask about here. I have concurrency working, but the solution feels like it might be wrong (just one of those gut feelings).
The basic pattern is:
private MyDataContext _dataContext;
private Table<Task> _tasks;

public Repository(MyDataContext datacontext)
{
    _dataContext = datacontext;
}

public IQueryable<Domain.Task> GetTasks()
{
    _tasks = _dataContext.Tasks;
    return from t in _tasks
           select new Domain.Task
           {
               Name = t.Name,
               Id = t.TaskId,
               Description = t.Description
           };
}

public void SaveTask(Domain.Task task)
{
    Task dbTask = null;
    // Logic for new tasks omitted...
    dbTask = (from t in _tasks
              where t.TaskId == task.Id
              select t).SingleOrDefault();
    dbTask.Description = task.Description;
    dbTask.Name = task.Name;
    _dataContext.SubmitChanges();
}
So with that implementation I've lost concurrency tracking because of the mapping to the domain task. I get it back by storing the private Table<Task>, which is my data context's list of tasks at the time of getting the original task.
I then update the tasks from this stored Table and save what I've updated.
This is working - I get change conflict exceptions raised when there are concurrency violations, just as I want.
However, it just screams to me that I've missed a trick.
Is there a better way of doing this?
I've looked at the .Attach method on the datacontext but that appears to require storing the original version in a similar way to what I'm already doing.
I also know that I could avoid all this by doing away with the domain objects and letting the LINQ to SQL generated objects flow all the way up my stack, but I dislike that just as much as I dislike the way I'm handling concurrency.
I worked through this and found the following solution. It works in all the test cases I (and more importantly, my testers!) can think of.
I am using the .Attach() method on the data context, and a TimeStamp column. This works fine the first time you save a particular primary key back to the database, but on subsequent saves the data context throws a System.Data.Linq.DuplicateKeyException: "Cannot add an entity with a key that is already in use."
The workaround I created was to add a dictionary that stores the item I attach the first time around; every subsequent time I save, I reuse that item.
Example code is below. I do wonder if I've missed any tricks: concurrency is pretty fundamental, so the hoops I'm jumping through seem a little excessive.
Hopefully the below proves useful, or someone can point me towards a better implementation!
private Dictionary<int, Payment> _attachedPayments;

public void SavePayments(IList<Domain.Payment> payments)
{
    Dictionary<Payment, Domain.Payment> savedPayments =
        new Dictionary<Payment, Domain.Payment>();

    // Items with a zero id are new
    foreach (Domain.Payment p in payments.Where(p => p.PaymentId != 0))
    {
        // The list of attached payments that works around the linq datacontext
        // duplicate key exception
        if (_attachedPayments.ContainsKey(p.PaymentId)) // Already attached
        {
            Payment dbPayment = _attachedPayments[p.PaymentId];
            // Just a method that maps domain to datacontext types
            MapDomainPaymentToDBPayment(p, dbPayment, false);
            savedPayments.Add(dbPayment, p);
        }
        else // Attach this payment to the datacontext
        {
            Payment dbPayment = new Payment();
            MapDomainPaymentToDBPayment(p, dbPayment, true);
            _dataContext.Payments.Attach(dbPayment, true);
            savedPayments.Add(dbPayment, p);
        }
    }

    // There is some code snipped but this is just brand new payments
    foreach (var payment in newPayments)
    {
        Domain.Payment payment1 = payment;
        Payment newPayment = new Payment();
        MapDomainPaymentToDBPayment(payment1, newPayment, false);
        _dataContext.Payments.InsertOnSubmit(newPayment);
        savedPayments.Add(newPayment, payment);
    }

    try
    {
        _dataContext.SubmitChanges();

        // Grab the Timestamp into the domain object
        foreach (Payment p in savedPayments.Keys)
        {
            savedPayments[p].PaymentId = p.PaymentId;
            savedPayments[p].Timestamp = p.Timestamp;
            _attachedPayments[savedPayments[p].PaymentId] = p;
        }
    }
    catch (ChangeConflictException ex)
    {
        foreach (ObjectChangeConflict occ in _dataContext.ChangeConflicts)
        {
            Payment entityInConflict = (Payment) occ.Object;
            // Use the datacontext refresh so that I can display the new values
            _dataContext.Refresh(RefreshMode.OverwriteCurrentValues, entityInConflict);
            _attachedPayments[entityInConflict.PaymentId] = entityInConflict;
        }
        throw;
    }
}
I would look at trying to utilise the .Attach method by passing the 'original' and 'updated' objects, thus achieving true optimistic concurrency checking from LINQ to SQL. This IMO would be preferred to using version or datetime stamps either in the DBML objects or your domain objects. I'm not sure how MVC allows for this idea of persisting the 'original' data, however. I've been trying to investigate the validation scaffolding in the hope that it stores the 'original' data, but I suspect it is only as good as the most recent post (and/or failed validation), so that idea may not work.
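A minimal sketch of that Attach idea, reusing the Payment entity and mapping helper from the code above; exactly which members need original values depends on the UpdateCheck settings in the DBML, so treat this as an outline rather than working code:
public void SavePayment(Domain.Payment updated, Domain.Payment original)
{
    // Assumes a fresh data context: Attach only works for entities it is not already tracking.
    var dbOriginal = new Payment();
    MapDomainPaymentToDBPayment(original, dbOriginal, false);

    var dbUpdated = new Payment();
    MapDomainPaymentToDBPayment(updated, dbUpdated, false);

    // Attach the modified entity together with its original state so the generated
    // UPDATE includes the original values in its WHERE clause.
    _dataContext.Payments.Attach(dbUpdated, dbOriginal);
    _dataContext.SubmitChanges(ConflictMode.FailOnFirstConflict); // throws ChangeConflictException on a clash
}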
Another crazy idea I had was this: override GetHashCode() for all of your domain objects so that the hash represents the unique set of data for that object (minus the ID, of course). Then, either manually or with a helper, bury that hash in a hidden field in the HTML POST form and send it back to your service layer with your updated domain object. Do the concurrency checking in your service layer or data layer (by comparing the original hash with a newly extracted domain object's hash), but be aware that you need to check for and raise concurrency exceptions yourself. It's nice to use the DBML functions, but the idea of abstracting away the data layer is precisely so you don't depend on a particular implementation's features. So having full control of the optimistic concurrency checking on your domain objects in your service layer (for example) seems like a good approach to me.
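A rough sketch of that hash idea; entirely illustrative, the TaskDto type and its fields are made up, and you still have to decide which exception to raise:
public class TaskDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }

    // Hash over the editable fields only; the Id is deliberately excluded.
    public override int GetHashCode()
    {
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + (Name ?? string.Empty).GetHashCode();
            hash = hash * 31 + (Description ?? string.Empty).GetHashCode();
            return hash;
        }
    }
}

// In the service layer, before saving:
// if (postedHash != currentFromDb.GetHashCode())
//     throw new ChangeConflictException("The record was modified by someone else.");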