What's the risk of multiple SubmitChanges? - linq-to-sql

Is there a risk with the code below that if two people submit at the same time the wrong s.saleID will be retrieved?
protected void submitSale(int paymentTypeID)
{
    tadbDataContext tadb = new tadbDataContext();
    ta_sale s = new ta_sale();

    decimal total = decimal.Parse(lblTotal.Value);
    s.paymentTypeID = paymentTypeID;
    s.time = DateTime.Now;
    s.totalSale = total;

    tadb.ta_sales.InsertOnSubmit(s);
    tadb.SubmitChanges();

    char[] drinksSeparator = new char[] { '|' };
    char[] rowSeparator = new char[] { ':' };
    string drinkString = lblSummaryQty.Value.Substring(0, lblSummaryQty.Value.Length - 1);
    string[] arrDrinks = drinkString.Split(drinksSeparator);

    foreach (string row in arrDrinks)
    {
        string[] arrDrink = row.Split(rowSeparator);
        int rowID = Convert.ToInt16(arrDrink[0]);
        int rowQty = Convert.ToInt16(arrDrink[1]);

        ta_saleDetail sd = new ta_saleDetail();
        sd.drinkID = rowID;
        sd.quantity = rowQty;
        sd.saleID = s.saleID;
        tadb.ta_saleDetails.InsertOnSubmit(sd);
    }
    tadb.SubmitChanges();
}
If so, what should I do to make sure it is atomic? (I think it should be OK, but want to double check!)

To be sure that it's OK, just write a test that calls submitSale two or more times and makes the calls submit changes at almost the same time. If the test fails, use a lock statement, but be careful: it can cause deadlocks. After you change submitSale, test it again under high load (many simultaneous calls), and so on until you pass the test.
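A minimal sketch of the lock approach suggested above (the static saleLock object is an assumption, not code from the question; note that lock only serializes calls within one process, so it won't coordinate multiple web servers):
private static readonly object saleLock = new object();

protected void submitSale(int paymentTypeID)
{
    lock (saleLock) // only one thread at a time runs the two SubmitChanges calls
    {
        // ... the original insert logic from the question ...
    }
}
For atomicity at the database level, wrapping both SubmitChanges calls in a System.Transactions.TransactionScope is the usual alternative, so the sale and its details commit or roll back together.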

Related

How to Parse the Text file in SSIS

I am new to SSIS. I am facing the issue below while parsing a text file which contains the sample data shown further down.
Below is the requirement:
-> Need to capture the number after IH1 (454756567) and insert it into one column as InvoiceNumber.
-> Need to insert the data between ABCD1234 and ABCD2345 into another column as TotalRecord.
Many thanks for the help.
ABCD1234
IH1 454756567 686575634
IP2 HJKY TXRT
IBG 23455GHK
ABCD2345
IH1 689343256 686575634
IP2 HJKY TXRT
IBG 23455GHK
ABCD5678
This is a script component to process the entire file. You need to create your own outputs; the values are currently being processed as strings.
This assumes your file format is consistent. If IH1 and IP2 don't have exactly two columns ALL the time, I would recommend a for loop from 1 to len - 1 to process the columns, sending the records to their own outputs (see the sketch after the code).
public string recordID = String.Empty;

public override void CreateNewOutputRows()
{
    string filePath = ""; // put your file path here
    using (System.IO.StreamReader sr = new System.IO.StreamReader(filePath))
    {
        while (!sr.EndOfStream)
        {
            string line = sr.ReadLine();
            // Anything that identifies the start of a new record;
            // line.Split(' ').Length == 1 also meets your criteria.
            // (StartsWith won't throw on lines shorter than four characters,
            // unlike Substring(0, 4).)
            if (line.StartsWith("ABCD"))
            {
                recordID = line;
                Output0Buffer.AddRow();
                Output0Buffer.RecordID = line;
            }
            string[] cols = line.Split(' ');
            switch (cols[0])
            {
                case "IH1":
                    Output0Buffer.InvoiceNumber = cols[1];
                    Output0Buffer.WhatEverTheSecondColumnIs = cols[2];
                    break;
                case "IP2":
                    Output0Buffer.ThisRow = cols[1];
                    Output0Buffer.ThisRow2 = cols[2];
                    break;
                case "IBG":
                    Output0Buffer.Whatever = cols[1];
                    break;
            }
        }
    }
}
You'll need to do this in a script component.
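If your record types don't always have the same number of columns, here is a hedged sketch of the "for loop from 1 to len - 1" idea mentioned above (the Ih1Buffer output and its Value column are hypothetical names you would define on the component yourself):
string[] cols = line.Split(' ');
if (cols[0] == "IH1")
{
    // One output row per value after the record-type prefix.
    for (int i = 1; i <= cols.Length - 1; i++)
    {
        Ih1Buffer.AddRow();
        Ih1Buffer.Value = cols[i];
    }
}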

In MVC, pass complex query and view model to session?

I have a view model:
public class UserCollectionView
{
    public CardCollection CardCollections { get; set; }
    public Card Cards { get; set; }
}
I have a List View Controller:
public ActionResult ViewCollection(int? page)
{
    var userid = (int)WebSecurity.CurrentUserId;
    var pageNumber = page ?? 1;
    int pageSize = 5;
    ViewBag.OnePageOfCards = pageNumber;

    if (Session["CardCollection"] != null)
    {
        var paging = Session["CardCollection"].ToString();
        return View(paging.ToPagedList(pageNumber, pageSize));
    }

    var viewModel = from c in db.Cards
                    join j in db.CardCollections on c.CardID equals j.CardID
                    where (j.NumberofCopies > 0) && (j.UserID == userid)
                    orderby c.Title
                    select new UserCollectionView { Cards = c, CardCollections = j };

    Session["CardCollection"] = viewModel;
    return View(viewModel.ToPagedList(pageNumber, pageSize));
}
I am trying to use PagedList to add paging to the results. I have been able to do this when I am not using a query that returns data from two tables in a single view. As shown here
My end result looks something like this:
Cards.SeveralColumns CardCollections.ColumnA CardCollections.ColumnB
Row 1 Data from Cards Table A from CardCollections B from CardCollections
Row 2 Data from Cards Table A from CardCollections B from CardCollections
Row 3 Data from Cards Table A from CardCollections B from CardCollections
And so on... I get an error:
The ObjectContext instance has been disposed and can no longer be used for operations that require a connection.
I have tried variations of SQL statements but can't get it to fit with my view model. In SQL Server Management Studio, this brings back the correct results:
Select * from Cards Inner Join CardCollections On Cards.CardID = CardCollections.CardID where CardCollections.UserID = 1 and CardCollections.NumberofCopies > 0;
I need a way to pass the query in session so the paging will operate correctly. Any advice is appreciated. :)
Short answer is, you can't. The model needs to be a snapshot of the content and therefore you can't pass an open query across the boundary (either as a hand-off to a session or to the client directly).
What you're seeing is the disposal of the context beyond its initial use (where you assemble var viewModel).
With that said, you can cache the results (to save overhead) if querying the data is an expensive operation. Basically, you'd store the entire collection (or at least a large subset of the collection) in the session/memorycache (which can then be manipulated into a paged list). Something to the effect of:
public ActionResult ViewCollection(int? page)
{
    var userId = (int)WebSecurity.CurrentUserId;
    var pageNumber = page ?? 1;
    var pageSize = 5;
    ViewBag.OnePageOfCards = pageNumber;

    var cacheKey = String.Format("ViewCollection[{0}]", userId);
    var entities = GetOrCreateCacheItem<IEnumerable<UserCollectionView>>(cacheKey, () =>
    {
        return (from c in db.Cards
                join j in db.CardCollections on c.CardID equals j.CardID
                where (j.NumberofCopies > 0) && (j.UserID == userId)
                orderby c.Title
                select new UserCollectionView { Cards = c, CardCollections = j })
               .ToList(); // Force fetch from Db so we can cache
    });

    return View(entities.ToPagedList(pageNumber, pageSize));
}

// To give an example of a cache provider; feel free to change this,
// abstract it out, etc. (T must be a reference type for the "as" cast.)
private T GetOrCreateCacheItem<T>(string cacheKey, Func<T> getItem) where T : class
{
    T cacheItem = MemoryCache.Default.Get(cacheKey) as T;
    if (cacheItem == null)
    {
        cacheItem = getItem();
        var cacheExpiry = DateTime.Now.AddMinutes(5);
        MemoryCache.Default.Add(cacheKey, cacheItem, cacheExpiry);
    }
    return cacheItem;
}
It turns out that I didn't need to pass the query at all. If I let it run through, it works fine without the session. I am not really sure why this works, but my search query has to be passed. Maybe it is because I am using a view model to perform the query. I will experiment and post if I find anything. Currently the working code is:
public ActionResult ViewCollection(int? page)
{
    var userid = (int)WebSecurity.CurrentUserId;
    var pageNumber = page ?? 1;
    int pageSize = 5;
    ViewBag.OnePageOfCards = pageNumber;

    ViewBag.Rarity_ID = new SelectList(db.Rarities, "RarityID", "Title");
    ViewBag.MainType_ID = new SelectList(db.MainTypes, "MainTypeID", "Title");
    ViewBag.CardSet_ID = new SelectList(db.CardSets, "CardSetID", "Title");
    ViewBag.SubType_ID = new SelectList(db.SubTypes, "SubTypeID", "Title");

    var viewModel = from c in db.Cards
                    join j in db.CardCollections on c.CardID equals j.CardID
                    where (j.NumberofCopies > 0) && (j.UserID == userid)
                    orderby c.Title
                    select new UserCollectionView { Cards = c, CardCollections = j };

    return View(viewModel.ToPagedList(pageNumber, pageSize));
}

Exiting While Loop

I am trying to read values from a MySQL database and check whether the id that needs to be updated already exists in the database. I have been able to make everything else in the program work except the part that checks the database. Here is some of my code:
public void updateStatement() throws SQLException {
    try
    {
        connnectDatabse();
    }
    catch (ClassNotFoundException e)
    {
        System.out.println("Could not connect to database..");
    }
    System.out.println("How many entries would you like to update?");
    kb = new Scanner(System.in);
    int numEntries = kb.nextInt();
    int counter = 0;
    String newName = null, newDepartment = null;
    int newSalary = 0, newId = 0;
    int counterValues = 0;
    while (counterValues != numEntries) {
        System.out.println("Please enter 5 to view current entries in database\n");
        selectStatement();
        int idToUpdate = 0;
        boolean idVerify = false;
        // Check if the user id exists in database or not
        while (!(idVerify)) {
            System.out.println("\nPlease enter the ID of the record to update");
            idToUpdate = kb.nextInt();
            idFoundInDatabase(idArrayList, idToUpdate);
        }
        System.out.println("Please choose the number of column to update from below options.\n1.ID\n2.Name\n3.Salary\n4.Department");
        int columnToUpdate = kb.nextInt();
        switch (columnToUpdate) {
            case 1:
                System.out.println("What will be the new id value for the selected ID?");
                newId = kb.nextInt();
                query = "update employee set ID = ? where ID = ?";
                break;
            case 2:
                System.out.println("What will be the new name for the selected ID?");
                newName = kb.next();
                query = "update employee set Name = ? where ID = ?";
                break;
            case 3:
                System.out.println("What will be the new salary for the selected ID?");
                newSalary = kb.nextInt();
                query = "update employee set Salary = ? where ID = ?";
                break;
            case 4:
                System.out.println("What will be the new department for the selected ID?");
                newDepartment = kb.next();
                query = "update employee set Department = ? where ID = ?";
                break;
            default:
                System.out.println("Correct option not chosen");
        }
        PreparedStatement st = conn.prepareStatement(query);
        if (columnToUpdate == 1) {
            st.setInt(1, newId);
            st.setInt(2, idToUpdate);
        }
        else if (columnToUpdate == 2) {
            st.setString(1, newName);
            st.setInt(2, idToUpdate);
        }
        else if (columnToUpdate == 3) {
            st.setInt(1, newSalary);
            st.setInt(2, idToUpdate);
        }
        else {
            st.setString(1, newDepartment);
            st.setInt(2, idToUpdate);
        }
        // execute the prepared statement
        st.executeUpdate();
        System.out.println("Record successfully updated..");
        counterValues++;
    }
}
// Code that I am unable to exit. This is a separate method, outside of the updateStatement() method.
The ArrayList contains the ids that are already in the database; it has been populated successfully.
public boolean idFoundInDatabase(ArrayList<String> arrayList, int id) {
    boolean validId = false;
    while (validId == true) {
        String idRead = String.valueOf(id);
        for (int i = 0; i < arrayList.size(); i++) {
            String elementRead = arrayList.get(i);
            if (elementRead.equals(idRead)) {
                validId = true;
                break;
            }
            else {
                validId = false;
            }
        }
    }
    return validId;
}
If required, I am also posting the lines of code where I read the result set to build the array list of ids.
while (result.next()) {
    String id = result.getString("ID");
    idArrayList.add(id);
    if (choice == 1 || choice == 2 || choice == 3 || choice == 4) {
        resultIs = result.getString(columnName);
        System.out.println(resultIs);
    }
    else {
        System.out.println(result.getString("ID") + "\t" + result.getString("Name") +
                "\t" + result.getString("Salary") + "\t" + result.getString("Department"));
    }
}
The problem is exiting the idFoundInDatabase method above. It identifies whether the id the user wants to update exists in the database. Even on finding a correct id that exists in the database, it just keeps reading the values in the array list instead of reaching the return statement. Am I also doing something wrong where I call this method? I have been stuck on this for almost a day and have debugged it many, many times. I am doing this just to get acquainted with JDBC and parameterized queries, so that I can follow a similar approach in a bigger project. Any help is appreciated. Thank you.
A few things:
Your while loop inside of idFoundInDatabase() is not necessary - one simple pass through the for loop over arrayList is enough.
You are returning a boolean value from that function, but your calling method does not capture or use it, so idVerify is never changed from false to true, and your input loop repeats forever.
All you need to do to sort that out is change idFoundInDatabase(idArrayList, idToUpdate); to idVerify = idFoundInDatabase(idArrayList, idToUpdate); and then your input loop should terminate successfully when a valid id is found.
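For illustration, a minimal corrected version of the lookup method (a sketch based on the question's code, untested):
public boolean idFoundInDatabase(ArrayList<String> arrayList, int id) {
    String idRead = String.valueOf(id);
    for (String elementRead : arrayList) {
        if (elementRead.equals(idRead)) {
            return true; // found: return immediately, no outer while loop needed
        }
    }
    return false; // id is not in the list read from the database
}
Combined with idVerify = idFoundInDatabase(idArrayList, idToUpdate); in the caller, the input loop exits as soon as an existing id is entered.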

Sql Server 2008 Tuning with large transactions (700k+ rows/transaction)

So, I'm working on a database that I will be adding to my future projects as sort of a supporting db, but I'm having a bit of an issue with it, especially the logs.
The database basically needs to be updated once a month. The main table has to be purged and then refilled from a CSV file. The problem is that SQL Server generates a log for it which is MEGA big. I was successful in filling it up once, but wanted to test the whole process by purging it and then refilling it.
That's when I get an error that the log file is filled up. It jumps from 88MB (after shrinking via maintenance plan) to 248MB and then stops the process altogether and never completes.
I've capped its growth at 256MB, incrementing by 16MB, which is why it failed, but in reality I don't need it to log anything at all. Is there a way to just completely bypass logging on any query being run against the database?
Thanks for any responses in advance!
EDIT: Per the suggestions of @mattmc3, I've implemented SqlBulkCopy for the whole procedure. It works AMAZING, except my loop is somehow crashing on the very last remaining chunk that needs to be inserted. I'm not too sure where I'm going wrong, heck I don't even know if this is a proper loop, so I'd appreciate some help on it.
I do know that it's an issue with the very last GetDataTable or SetSqlBulkCopy calls. I'm trying to insert 788189 rows; 788000 get in and the remaining 189 are crashing...
string[] Rows;
using (StreamReader Reader = new StreamReader("C:/?.csv")) {
    Rows = Reader.ReadToEnd().TrimEnd().Split(new char[1] { '\n' }, StringSplitOptions.RemoveEmptyEntries);
}

int RowsInserted = 0;
using (SqlConnection Connection = new SqlConnection("")) {
    Connection.Open();
    DataTable Table = null;
    while ((RowsInserted < Rows.Length) && ((Rows.Length - RowsInserted) >= 1000)) {
        Table = GetDataTable(Rows.Skip(RowsInserted).Take(1000).ToArray());
        SetSqlBulkCopy(Table, Connection);
        RowsInserted += 1000;
    }
    Table = GetDataTable(Rows.Skip(RowsInserted).ToArray());
    SetSqlBulkCopy(Table, Connection);
    Connection.Close();
}
static DataTable GetDataTable(string[] Rows) {
    using (DataTable Table = new DataTable()) {
        Table.Columns.Add(new DataColumn("A"));
        Table.Columns.Add(new DataColumn("B"));
        Table.Columns.Add(new DataColumn("C"));
        Table.Columns.Add(new DataColumn("D"));
        for (short a = 0, b = (short)Rows.Length; a < b; a++) {
            string[] Columns = Rows[a].Split(new char[1] { ',' }, StringSplitOptions.RemoveEmptyEntries);
            DataRow Row = Table.NewRow();
            Row["A"] = Columns[0];
            Row["B"] = Columns[1];
            Row["C"] = Columns[2];
            Row["D"] = Columns[3];
            Table.Rows.Add(Row);
        }
        return (Table);
    }
}

static void SetSqlBulkCopy(DataTable Table, SqlConnection Connection) {
    using (SqlBulkCopy SqlBulkCopy = new SqlBulkCopy(Connection)) {
        SqlBulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("A", "A"));
        SqlBulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("B", "B"));
        SqlBulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("C", "C"));
        SqlBulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("D", "D"));
        SqlBulkCopy.BatchSize = Table.Rows.Count;
        SqlBulkCopy.DestinationTableName = "E";
        SqlBulkCopy.WriteToServer(Table);
    }
}
EDIT/FINAL CODE: So the app is now finished and works AMAZING, and quite speedy! @mattmc3, thanks for all the help! Here is the final code for anyone who may find it useful:
List<string> Rows = new List<string>();
using (StreamReader Reader = new StreamReader(@"?.csv")) {
    string Line = string.Empty;
    while (!String.IsNullOrWhiteSpace(Line = Reader.ReadLine())) {
        Rows.Add(Line);
    }
}

if (Rows.Count > 0) {
    int RowsInserted = 0;
    DataTable Table = new DataTable();
    Table.Columns.Add(new DataColumn("Id"));
    Table.Columns.Add(new DataColumn("A"));
    while ((RowsInserted < Rows.Count) && ((Rows.Count - RowsInserted) >= 1000)) {
        Table = GetDataTable(Rows.Skip(RowsInserted).Take(1000).ToList(), Table);
        PerformSqlBulkCopy(Table);
        RowsInserted += 1000;
        Table.Clear();
    }
    Table = GetDataTable(Rows.Skip(RowsInserted).ToList(), Table);
    PerformSqlBulkCopy(Table);
}

static DataTable GetDataTable(List<string> Rows, DataTable Table) {
    for (short a = 0, b = (short)Rows.Count; a < b; a++) {
        string[] Columns = Rows[a].Split(new char[1] { ',' }, StringSplitOptions.RemoveEmptyEntries);
        DataRow Row = Table.NewRow();
        Row["A"] = "";
        Table.Rows.Add(Row);
    }
    return (Table);
}

static void PerformSqlBulkCopy(DataTable Table) {
    using (SqlBulkCopy SqlBulkCopy = new SqlBulkCopy(@"", SqlBulkCopyOptions.TableLock)) {
        SqlBulkCopy.BatchSize = Table.Rows.Count;
        SqlBulkCopy.DestinationTableName = "";
        SqlBulkCopy.WriteToServer(Table);
    }
}
If you are doing a bulk insert into the table in SQL Server, which is how you should be doing this (BCP, BULK INSERT, INSERT INTO...SELECT, or in .NET, the SqlBulkCopy class), you can use the "Bulk Logged" recovery model. I highly recommend reading the MSDN articles on recovery models: http://msdn.microsoft.com/en-us/library/ms189275.aspx
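For reference, a minimal sketch of switching the recovery model around the load (the [MyDb] database name and the empty connection string are placeholders I've assumed, and the account needs ALTER DATABASE permission):
using (var connection = new SqlConnection("")) {
    connection.Open();
    // Minimally log the bulk load by switching to the bulk-logged model first.
    using (var command = new SqlCommand("ALTER DATABASE [MyDb] SET RECOVERY BULK_LOGGED", connection)) {
        command.ExecuteNonQuery();
    }
    // ... run the SqlBulkCopy load here ...
    // Switch back (and take a log backup) once the load completes.
    using (var command = new SqlCommand("ALTER DATABASE [MyDb] SET RECOVERY FULL", connection)) {
        command.ExecuteNonQuery();
    }
}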
You can set the recovery model for each database separately. Maybe the simple recovery model will work for you. The simple model:
Automatically reclaims log space to keep space requirements small, essentially eliminating the need to manage the transaction log space.
Read up on it here.
There is no way to bypass using the transaction log in SQL Server.

Linq-2-Sql code: Does this scale?

I'm just starting to use LINQ to SQL. I'm hoping that someone can verify that LINQ to SQL defers execution until the foreach loop runs. Overall, can someone tell me whether this code scales? It's a simple get method with a few search parameters. Thanks!
Code:
public static IList<Content> GetContent(int contentTypeID, int feedID, DateTime? date, string text)
{
    List<Content> contentList = new List<Content>();
    using (DataContext db = new DataContext())
    {
        var contentTypes = db.ytv_ContentTypes.Where(p => contentTypeID == -1 || p.ContentTypeID == contentTypeID);
        var feeds = db.ytv_Feeds.Where(p => feedID == -1 || p.FeedID == feedID);

        var targetFeeds = from f in feeds
                          join c in contentTypes on f.ContentTypeID equals c.ContentTypeID
                          select new { FeedID = f.FeedID, ContentType = f.ContentTypeID };

        var content = from t in targetFeeds
                      join c in db.ytv_Contents on t.FeedID equals c.FeedID
                      select new { Content = c, ContentTypeID = t.ContentType };

        if (!String.IsNullOrEmpty(text))
        {
            content = content.Where(p => p.Content.Name.Contains(text) || p.Content.Description.Contains(text));
        }
        if (date != null)
        {
            DateTime dateTemp = Convert.ToDateTime(date);
            content = content.Where(p => p.Content.StartDate <= dateTemp && p.Content.EndDate >= dateTemp);
        }

        //Execution has been deferred to this point, correct?
        foreach (var c in content)
        {
            Content item = new Content()
            {
                ContentID = c.Content.ContentID,
                Name = c.Content.Name,
                Description = c.Content.Description,
                StartDate = c.Content.StartDate,
                EndDate = c.Content.EndDate,
                ContentTypeID = c.ContentTypeID,
                FeedID = c.Content.FeedID,
                PreviewHtml = c.Content.PreviewHTML,
                SerializedCustomXMLProperties = c.Content.CustomProperties
            };
            contentList.Add(item);
        }
    }
    //TODO
    return contentList;
}
Depends on what you mean by 'scales'. On the DB side, this code has the potential to cause trouble if you are dealing with large tables; SQL Server's optimizer is really poor at handling the "or" operator in WHERE clause predicates and tends to fall back to table scans when there are several of them. I'd go for a couple of .Union calls instead, to avoid the possibility that SQL Server falls back to table scans just because of the ||'s.
If you can share more details about the underlying tables and the data in them, it will be easier to give a more detailed answer...
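For illustration, a minimal sketch of the .Union rewrite applied to the text filter (it reuses the identifiers from the question's code and is untested; comparing the generated query plans is the only way to know whether it helps here):
if (!String.IsNullOrEmpty(text))
{
    var byName = content.Where(p => p.Content.Name.Contains(text));
    var byDescription = content.Where(p => p.Content.Description.Contains(text));
    content = byName.Union(byDescription); // two single-predicate queries instead of one OR
}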