Does anybody know how to find the number of rows affected AFTER I have submitted changes to the data context in LINQ to SQL?
At first I was doing something like this:
Using db As New MyDataContext()
    db.Users.Attach(modifiedUser, True)
    db.SubmitChanges()
    Dim rowsUpdated As Integer = db.GetChangeSet().Updates.Count
End Using
I have since figured out that this doesn't work and that
db.GetChangeSet().Updates.Count
only tells you how many updates there will be BEFORE you call SubmitChanges().
Is there any way to find out how many rows were actually affected?
L2S issues individual insert/update/delete statements for each row affected, so counting entities in the GetChangeSet results will give you the correct 'rows affected' numbers*.
If any row cannot be updated due to a change conflict or similar, you'll get an exception during SubmitChanges, and the transaction will be rolled back.
(* = ...with one exception; if you have any updatable views with instead-of triggers, you could potentially have a situation where the instead-of trigger hits multiple underlying rows for every row updated. But that is a bit of an edge case... :) )
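For example, here is a minimal sketch of counting the pending change set just before SubmitChanges (it reuses the modifiedUser entity and MyDataContext from the question):

using (var db = new MyDataContext())
{
    db.Users.Attach(modifiedUser, true);

    // Count the pending operations immediately before submitting them.
    var pending = db.GetChangeSet();
    int rowsToBeAffected = pending.Inserts.Count + pending.Updates.Count + pending.Deletes.Count;

    // Throws a ChangeConflictException (and rolls back) if any row cannot be updated.
    db.SubmitChanges();
}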
I have not worked with LINQ to SQL, but I think it might not be possible.
The reason that comes to mind: you can update multiple entities before calling SubmitChanges, so I guess it isn't clear which "records affected" count you would be looking for.
Because a number of different operations could potentially be committed, it's highly unlikely that you'll be able to get back that kind of information.
The SubmitChanges() command will commit inserts, updates and deletes and as far as I can tell there's no way to retrieve the number of rows affected for each (# rows deleted/updated/inserted etc). All you can do is see what is going to be committed, as you've already discovered.
If there is one operation in particular you want to perform, you could use the ExecuteCommand() method, which returns the affected row count.
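For example (a rough sketch; the Users table and its IsActive/LastLogin columns are assumptions made up for illustration):

using (var db = new MyDataContext())
{
    DateTime cutoff = DateTime.UtcNow.AddYears(-1);

    // ExecuteCommand returns the number of rows the statement affected.
    int rowsAffected = db.ExecuteCommand(
        "UPDATE Users SET IsActive = {0} WHERE LastLogin < {1}", false, cutoff);

    Console.WriteLine(rowsAffected);
}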
Add this extension method to your app:
/// <summary>
/// Saves all changes made in this context to the underlying database.
/// </summary>
/// <returns>The number of inserts, updates and deletes that were submitted.</returns>
/// <exception cref="System.InvalidOperationException">
/// Thrown when the changes cannot be submitted.
/// </exception>
public static int SaveChanges(this System.Data.Linq.DataContext context)
{
    try
    {
        // Pending operations before the submit...
        int count1 = context.GetChangeSet().Inserts.Count + context.GetChangeSet().Updates.Count + context.GetChangeSet().Deletes.Count;
        context.SubmitChanges();

        // ...minus whatever is still pending afterwards (normally zero).
        int count2 = context.GetChangeSet().Inserts.Count + context.GetChangeSet().Updates.Count + context.GetChangeSet().Deletes.Count;
        return count1 - count2;
    }
    catch (Exception e)
    {
        // Preserve the original exception as the inner exception.
        throw new InvalidOperationException(e.Message, e);
    }
}
Call it in the sequence below and you will get the count:
Dim rowsUpdated As Integer = db.GetChangeSet().Updates.Count
db.SubmitChanges()
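Or call the extension method itself; a minimal sketch reusing the question's MyDataContext and modifiedUser:

using (var db = new MyDataContext())
{
    db.Users.Attach(modifiedUser, true);
    int rowsAffected = db.SaveChanges(); // inserts + updates + deletes that were submitted
}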
Related
I am developing a geomesa client to perform basic read, write and delete operations. I have also created a function which should return the matching feature count for a specified query; however, it always returns zero. I also tried DataStore stats for fetching the matching feature count; that gives the correct result, but the operation is very slow. Below is my client code:
public int getRideCount(Long rideId) throws Exception {
    int count = 0;
    if (rideId != null) {
        count = fs.getCount(new Query(tableName, CQL.toFilter("r=" + rideId)));
        //count = ((Long) (ds.stats().getCount(sft, CQL.toFilter("r=" + rideId), true).get())).intValue();
    }
    return count;
}
Can anyone help me find out why it returns 0 even though the features exist in the feature collection, or point me to other preferred techniques for fetching the matching feature count? Any suggestions or clarifications are welcome.
Based on the additional info from your email to the geomesa dev list, I believe this is caused by a bug in simple feature types that don't have a date attribute. I've opened a ticket here and a PR here for the issue. It should be fixed in the next release (1.3.2), or you can build the branch locally.
In the meantime, the 'exact' counts should still work, although they will be slower. Instructions for enabling exact counts are here and here.
I have a table A with many columns including a column "c".
In a method, I update the value of "c" for row "r1" to "c1" and in one of the subsequent methods (still running in the same thread), I try to read all rows with value of "c" equal to "c1" using hibernate's criteria.
The code snippet is shown below:
@Transactional
public void updateA(long id, long c1)
{
    Session currentSession = sessionFactory.getCurrentSession();
    A a1 = (A) currentSession.get(A.class.getName(), id);
    a1.setC(c1);
    currentSession.saveOrUpdate(a1);
}

@Transactional
public List getAllAsForGivenC(long c1)
{
    Criteria criteria = sessionFactory.getCurrentSession().createCriteria(A.class.getName());
    Criterion cValue = Restrictions.eq("c", c1);
    criteria.add(cValue);
    return criteria.list();
}
But when the method getAllAsForGivenC executes, "r1" row is not returned. Both methods run in the same thread and use same hibernate session. Why is getAllAsForGivenC not able to see the row updated in updateA()? What am I doing wrong?
P.S: I run this on MySQL DB (if that matters)
Thanks in advance,
Shobhana
Call session.flush() between your method calls and then try.
e.g.
updateA(1l, 2l);
//do Flush
session.flush();
getAllAsForGivenC(2l);
--Update--
As the documentation says, the flush process occurs by default at the following points:
before some query executions
from org.hibernate.Transaction.commit()
from Session.flush()
Except when you explicitly flush(), there is absolutely no guarantee about when the Session executes the JDBC calls, only the order in which they are executed.
Flushing does not happen before every query! Remember, the purpose of the Hibernate session is to minimize the number of writes to the database, so it will avoid flushing to the database if it thinks that it isn’t needed.
It would have been more intuitive if the framework authors had chosen to name it FlushMode.SOMETIMES.
I have created the following LINQ to SQL transaction to try to create invoice numbers without gaps.
Assuming 2 Tables:
Table 1: InvoiceNumbers
Columns: ID, SerialNumber, Increment
Example: 1, 10001, 1
Table 2: Invoices
Columns: ID, InvoiceNumber, Name
Example: 1, 10001, "Bob Smith"
Dim db As New Invoices.InvoicesDataContext
Dim lastInvoiceNumber = (From n In db.InvoiceNumbers Order By n.LastSerialNumber Descending
Select n.LastSerialNumber, n.Increment).First
Dim nextInvoiceNumber As Integer = lastInvoiceNumber.LastSerialNumber + lastInvoiceNumber.Increment
Dim newInvoiceNumber = New Invoices.InvoiceNumber With {.LastSerialNumber = nextInvoiceNumber, .Increment = lastInvoiceNumber.Increment}
Dim newInvoice = New Invoices.Invoice With {.InvoiceNumber = nextInvoiceNumber, .Name = "Test" + nextInvoiceNumber.ToString}
db.InvoiceNumbers.InsertOnSubmit(newInvoiceNumber)
db.Invoices.InsertOnSubmit(newInvoice)
db.SubmitChanges()
All works fine, but is it possible using this method that 2 users might pick up the same invoice number if they hit the transaction at the same time?
If so, is there a better way using LINQ to SQL?
Gaps in sequences are inevitable when dealing with transactional databases.
First, you cannot use SELECT max(id)+1 because it may give the same id to 2 transactions which execute at the same time. This means you have to use the database's native auto-increment column (MySQL, SQLite) or a database sequence (PostgreSQL, MSSQL, Oracle) to obtain the next available id.
But even using auto-increment sequence does NOT solve this problem.
Imagine that you have 2 database connections that started 2 parallel transactions at almost the same time. The first one acquired some id from the auto-increment sequence, and it became the previously used value +1. One nanosecond later, the second transaction acquired the next id, which is now +2. Now imagine that the first transaction rolled back for some reason (it encountered an error, your code decided to abort it, the program crashed - you name it). After that, the second transaction committed with id +2, creating a gap in the id numbering.
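For example, a sketch of that scenario in LINQ to SQL (the Orders table, its IDENTITY Id column and MyDataContext are hypothetical, used only for illustration):

// Transaction 1: consumes an identity value, then rolls back.
using (var scope = new System.Transactions.TransactionScope())
using (var db = new MyDataContext())
{
    db.Orders.InsertOnSubmit(new Order { Name = "Will be rolled back" });
    db.SubmitChanges();   // SQL Server hands out the IDENTITY value here

    // scope.Complete() is never called, so the insert is rolled back,
    // but the consumed identity value is not given back.
}

// Transaction 2: commits and receives the next identity value,
// leaving a gap where the rolled-back value used to be.
using (var db = new MyDataContext())
{
    db.Orders.InsertOnSubmit(new Order { Name = "Committed" });
    db.SubmitChanges();
}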
But what if the number of such parallel transactions was more than 2? You cannot predict it, and you also cannot tell currently running transactions to reuse ids that were abandoned.
It is theoretically possible to reuse abandoned ids. However, in practice it is prohibitively expensive on database, and creates more problems when multiple sessions try to do the same thing.
TL;DR: stop fighting it, gaps in used ids are perfectly normal.
You can always ensure that the transaction is not run by more than one thread at the same time using lock():
public class MyClass
{
    private static object myLockObject = new object();

    ...

    public void TransactionAndStuff()
    {
        lock (myLockObject)
        {
            // your linq query
        }
    }
}
First, I will try to describe what I am trying to do and then I will ask my questions.
I need to do the following:
List all rows corresponding to some conditions
Do some tests (e.g. check whether it was already inserted); if the test passes, insert the row into another database
Delete the row (whether it passed the tests or not)
The following is my implementation:
List<MyObject> toAdd = new ArrayList<MyObject>();
for (MyObject obj : list) {
    if (!notYetInserted(obj)) {
        toAdd.add(obj);
    }
}
myObjectDAO.insertList(toAdd);
myObjectDAO.deleteList(list);
The service method is marked transactional.
In my DAO, the methods for deleteList and insertList are pretty similar, so I will just show the insert method here.
public void insertList(final List<MyObject> list) {
    String sql = "INSERT INTO table_test " +
                 "(col_id, col2, col3, col4) VALUES (?, ?, ?, ?)";
    List<Object[]> paramList = new ArrayList<Object[]>();
    for (MyObject myObject : list) {
        paramList.add(new Object[] { myObject.getColId(),
                myObject.getCol2(), myObject.getCol3(), myObject.getCol4() });
    }
    simpleJdbcTemplate.batchUpdate(sql, paramList);
}
I am not sure about the best way to perform such operations. I read here that calling update inside a loop may slow down the system (especially in my case, where I will have about 100K inserts/deletes at a time). I wonder whether these additional loops inside the DAO won't slow my system down even more, and what would happen if a problem occurred repeatedly while processing that batch. (I also thought about moving the test from the service into the DAO so there is only one loop plus an additional test, but I don't really know if that is a good idea.) So, I would like your advice. Thanks a lot.
PS: if you need more details feel free to ask!
This is not necessarily a bad approach, but you are right, it might be really slow. If I were to do a process like this that inserted or deleted this many rows I would probably do it all in a stored procedure. My code would just execute the proc and the proc would handle the list and the enumeration through it (as well as the inserts and deletes).
Consider the following code block:
using (PlayersDataContext context = new PlayersDataContext())
{
Console.WriteLine(context.Players.Count()); // will output 'x'
context.Players.InsertOnSubmit(new Player {FirstName = "Vince", LastName = "Young"});
Console.WriteLine(context.Players.Count()); // will also output 'x'; but I'd like to output 'x' + 1
}
Given that I haven't called
context.SubmitChanges();
the application will output the same player count both before and after the InsertOnSubmit statement.
My two questions:
Can the DataContext instance return collections that include pending changes?
Or must I reconcile the DataContext instance with context.GetChangeSet()?
Sure, use:
context.GetChangeSet()
and for more granularity, there are members for Inserts, Updates, and Deletes.
EDIT: I understand your new question now. Yes, if you wanted to include changes in the collection, you would have to somehow combine the collections returned by GetChangeSet() and your existing collections.
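For example, a rough sketch of combining the two (using the Player class from the question):

using (var context = new PlayersDataContext())
{
    context.Players.InsertOnSubmit(new Player { FirstName = "Vince", LastName = "Young" });

    // Rows already in the database plus entities queued for insertion.
    int pendingInserts = context.GetChangeSet().Inserts.OfType<Player>().Count();
    int projectedCount = context.Players.Count() + pendingInserts;

    Console.WriteLine(projectedCount); // outputs 'x' + 1
}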