So here's my problem:
I have an article submission form with an optional image upload field.
When the user submits the form - this is roughly what happens:
if ($this->view->form->isValid($_POST)) {
    $db->beginTransaction();
    try {
        // save content of POST to Article table
        if (!$this->_saveArticle($_POST)) {
            return;
        }
        // resize and save image using the ID generated by the previous step
        if (!$this->_saveImage($_FILES)) {
            $db->rollback();
            return;
        }
        // update the record only if the image was successfully generated
        if (!$this->_updateArticle()) {
            $db->rollback();
            return;
        }
        $db->commit();
    } catch (Exception $e) {
        $db->rollback();
    }
}
All Models are saved using mappers, which automate "UPSERT" functionality by checking for the existence of a surrogate key
public function save($Model)
{
    // insert when no surrogate key has been assigned yet, otherwise update
    if (is_null($Model->id_article)) {
        $Mapper->insert($Model->getFields());
        return;
    }
    $Mapper->update($Model->getFields(), $Model->getIdentity());
}
The article table has a composite UNIQUE index on ID, Title and URL. In addition, I'm generating a UID that gets assigned to the ID field of the Model prior to insert (instead of auto-incrementing).
When I try to execute this, it runs fine for the first article inserted into the table, but subsequent calls (with radically different input) trigger a DUPLICATE KEY error. MySQL throws back the ID generated in step 1 (_saveArticle) and complains that the key already exists...
I've dumped out the Model fields (and the condition state - i.e. insert | update) and they proceed as expected (pseudo):
inserting!
id = null
title = something
content = something
image = null
updating!
id = 1234123412341234
title = something
content = something else
image = 1234123412341234.jpg
This row data is not present in the database.
I figure this could be one of a few things:
1: I'm loading a secondary DB adapter on user login, allowing them to interface with several sites from one login - this might be confusing the transaction somehow
2: It's a bug of some description in the Zend transaction implementation (possibly triggered by 1)
3: I need to replace the save() with an INSERT ... ON DUPLICATE KEY UPDATE (see the sketch after this list)
4: I should restructure the submission process, or generate a name for the image that isn't dependent on the UID of the previously inserted row.
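For option 3, a hedged sketch of what that single-statement upsert might look like in MySQL (the column names are assumptions based on the dump above):

INSERT INTO article (id, title, content, image)
VALUES (:id, :title, :content, :image)
ON DUPLICATE KEY UPDATE
    content = VALUES(content),  -- VALUES() refers to the value that would have been inserted
    image   = VALUES(image);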
Still hunting, but I was wondering if anyone else has encountered this kind of issue or could point me in the direction of a solution
best SWK
OK - just for the record, this is entirely possible. The problem was in my application architecture. I was catching Exceptions in the Mapper classes that handle persistence, then querying them to return boolean states and thus interrupt the process. This in turn broke the try/catch flow above, which prevented the insert/update from working correctly.
To summarise - Yes - you CAN insert and update the same row in a single transaction. I've ticked community wiki to cancel rep out
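For anyone hitting the same wall, a minimal sketch of the fix described above (the insert method and getDbTable() helper are assumptions, loosely following the Zend_Db_Table mapper pattern): let persistence exceptions bubble up to the controller instead of converting them into booleans, so the outer try/catch can perform the rollback.

public function insert(array $fields)
{
    try {
        $this->getDbTable()->insert($fields);
    } catch (Exception $e) {
        // log here if needed, but re-throw so the controller's
        // try/catch can roll the transaction back
        throw $e;
    }
}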
I have a feed table that contains id, body and created_at fields. When I send a POST in Postman after the DELETE method, the id for the feed table auto-increments as if the record had not been deleted. I am unsure how to rectify this. I am using a MySQL database, NestJS and TypeORM for the backend.
feed.controller.ts

import { Body, Controller, Delete, Get, Param, Post } from "@nestjs/common";
import { Observable } from "rxjs";
import { DeleteResult } from "typeorm";
// local imports - paths assumed
import { FeedService } from "./feed.service";
import { HomeFeedDto } from "./dto/home-feed.dto";

@Controller("feed")
export class FeedController {
  constructor(private feedService: FeedService) {}

  @Post()
  createNewPost(@Body() feedPost: HomeFeedDto): Observable<HomeFeedDto> {
    return this.feedService.createPost(feedPost);
  }

  @Get()
  allPosts(): Observable<HomeFeedDto[]> {
    return this.feedService.getAllPosts();
  }

  // delete home feed post by id
  @Delete(":id")
  deleteFeedPost(@Param("id") id: number): Observable<DeleteResult> {
    return this.feedService.deletePost(id);
  }
}
This is just the way that auto-incrementing columns work in a database. Once a record has been created that uses a particular id value, that value can never be used again, even if the record that owned it was deleted.
What would you expect to happen in the case where there were many records? If the current incrementing id was 1000 and you deleted the record with id = 1, would you expect the next inserted record to be given id = 1 again instead of id = 1001?
There are lots of practical reasons why re-using a previously issued id would be very bad for business logic especially if anyone who is a consumer of your API has a cached version of the old record.
If you really want this behavior, you would have to write custom functions, either inside the database or in your API, which check whether any ids are missing from the sequence and then manually assign ids instead of letting the database do it. I would highly recommend against that, though, as the behavior you're seeing is designed that way for a reason.
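To make the default behavior concrete, a small illustration against a hypothetical MySQL table shaped like the feed table above:

CREATE TABLE feed (
    id INT AUTO_INCREMENT PRIMARY KEY,
    body TEXT
);

INSERT INTO feed (body) VALUES ('first post');   -- assigned id = 1
DELETE FROM feed WHERE id = 1;
INSERT INTO feed (body) VALUES ('second post');  -- assigned id = 2; the counter is not rewound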
Last year I made a Laravel site with an events table where I needed three fields to be unique for any event (place, date and time). I wasn't able to set up a validation request to do this, so I added a unique index on these three fields directly through phpMyAdmin and caught the exception thrown if a duplicate event was inserted.
So basically my store() method has a try/catch like this:
try {
    $event = new Event;
    $event->place = $request->input('place');
    $event->date = $request->input('date');
    $event->time = $request->input('time');
    $event->save();
    return view(...);
} catch (\Illuminate\Database\QueryException $e) {
    // Exception if place-date-time is duplicated
    if ($e->getCode() === '23000') {
        return view('event.create')
            ->withErrors("Selected date and time is not available");
    }
}
Well, now I had to change the app so events could be soft deleted, and I simply added the deleted_at field to the unique index, thinking it would be easy... This approach doesn't work anymore, so I've been reading here and there about the problem, and all I've found is that I should do it through a validation request with the unique rule. Honestly, I just don't get the syntax of that rule for three fields that can't all be equal at once while a fourth one, deleted_at, is null.
My app checks for the available places, dates and times and doesn't let the user choose an unavailable event, but no matter how many times I've told them, there's always someone who uses the browser back button and saves the event again :(
Any help will be much appreciated. Thank you!
This is not a good approach to the problem.
You can do the following instead:
Before inserting into the database, fetch the matching row (if it exists) and store it in a variable.
Then check whether that data is already in the database.
If it is, create a custom validation message using a MessageBag, like below.
$ifExist = $event->where('place', $request->input('place'))
    ->where('date', $request->input('date'))
    ->where('time', $request->input('time'))
    ->exists();

if ($ifExist) return 'already exist';
It might help you.
@narayanshama91 has pointed the right way.
You said you would like to use the unique rule to validate the input, but the problem is that last week there was a post on the Laravel Blog warning users of a possible SQL injection via the unique rule if the input is provided by the user.
I would highly advise you NOT to use this rule in this case, since you depend on user input.
The correct approach in your case would be @narayanshama91's answer.
$ifExist = $event->where('place', $request->input('place'))
    ->where('date', $request->input('date'))
    ->where('time', $request->input('time'))
    ->exists();

if ($ifExist) {
    return 'already exist';
}
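To fold in the soft-delete requirement from the question, a hedged variant (assuming the Event model uses Laravel's SoftDeletes trait): the trait's default query scope already excludes trashed rows, so a soft-deleted event no longer blocks re-creation.

// rows with a non-null deleted_at are excluded automatically by SoftDeletes
$exists = Event::where('place', $request->input('place'))
    ->where('date', $request->input('date'))
    ->where('time', $request->input('time'))
    ->exists();

if ($exists) {
    return view('event.create')
        ->withErrors("Selected date and time is not available");
}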
Can anyone advise on the following problem:
I have a custom get_or_create method, which checks multiple fields and does some fancy stuff upon creation:
def fancy_get_or_create(self, name):
    obj = self.fancy_get(name)
    if not obj:
        obj = self.fancy_create(name)
    return obj

def fancy_get(self, name):
    return self.filter(Q(name=name) | Q(alias=name)).first()

def fancy_create(self, name):
    name = self.some_preprocessing(name)
    return self.create(name=name, alias=name)
There's a race condition: one request checks whether the object exists, finds nothing, and starts creating it. Before that request finishes creating the object, another request comes in looking for the same object, finds nothing, and begins creating a new one. The second request fails because of the database's uniqueness constraints (the first request has just created the object).
Is there any way to prevent request 2 from querying the database until request 1 has finished? I was reading about transaction management and it did not seem like the solution, since the issue is not partial updates (which would suggest an atomic transaction), but rather the need to make the second request wait until the first has finished.
Thanks!
Update:
Here's what I went with:
try:
    return self.fancy_get(name) or self.fancy_create(name)
except IntegrityError:
    return self.fancy_get(name)
There are two viable solutions:
1: Use a mutex so only one process can access the fancy_get_or_create function at a time.
2: Capture the error thrown by the database and do something else instead: ignore that create, update the row instead of creating it, throw an exception, etc.
Edit: another solution might be doing an INSERT IGNORE instead of just an INSERT. https://dev.mysql.com/doc/refman/5.1/en/insert.html
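A minimal sketch of the second option in Django, mirroring the asker's update (the transaction.atomic block is an addition: on PostgreSQL it confines the failed INSERT so an enclosing transaction isn't poisoned):

from django.db import IntegrityError, transaction

def fancy_get_or_create(self, name):
    try:
        with transaction.atomic():
            return self.fancy_get(name) or self.fancy_create(name)
    except IntegrityError:
        # another request won the race - fetch the row it created
        return self.fancy_get(name)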
I have a view ObjectDisplay that is composed of two relevant tables: Object and State. State represents the state of an Object, and the view pulls some of the details from the most recent State for each Object.
On the page that is displaying this information, a user can enter some comments, which creates a new State. After creating the new State, I immediately pull the Object from ObjectDisplay and send it back to be dropped into a partial view and replace the Object in the grid on the page.
// Add new State.
db.States.Add(new State()
{
    ObjectId = objectId,
    Comments = comments,
    UserName = username
});

// Save the changes (executes all of the above).
db.SaveChanges();

// Return the new Object information.
return db.Objects.Single(c => c.ObjectId == objectId);
According to my db trace, the Single call occurs about 70 ms after the SaveChanges call, and it occurs on the same SPID.
Now for the issue: the database defaults the value of RecordDate in State to GETUTCDATE() - I don't provide the date myself. What I'm seeing is that the Object returned has the old State's information (the previous RecordDate and Comments). When I refresh the page, all the correct information is there, but the wrong information is returned in the initial call from the database/EF.
So.. what could be wrong? Could the view not be updating quickly enough? Could something be going on with EF? I don't really know where to start looking.
If you've previously loaded the same Object entity in the same DbContext, EF will return the cached instance with the stale values, and ignore the values returned from SQL.
The simplest solution is to reload the entity before returning it:
var result = db.Objects.Single(c => c.ObjectId == objectId);
db.Entry(result).Reload();
return result;
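An alternative sketch, assuming the DbContext API shown above: bypass the change tracker for this read, so EF materializes a fresh instance straight from the view instead of returning the cached one.

// AsNoTracking skips the identity-map lookup and returns fresh values
return db.Objects.AsNoTracking().Single(c => c.ObjectId == objectId);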
This is indeed odd. In SQL Server, views are not persisted by default and therefore reflect changes in the underlying data right away. You can create a clustered index on a view, which effectively persists the query, but in that case the data is updated synchronously, so you should still see the change right away.
If you were working with snapshot isolation, your changes might not be visible to other SPIDs right away, but as you are on the same SPID and do not use snapshot isolation, this can't be the culprit either.
The only thing left at this point is the application layer. Are you actually using the result of the Single call higher up in the call stack, or does it get lost somewhere? I assume that a refresh of the page uses a different code path, which would explain why it works there.
I have a Windows application in which I am trying to insert a record through a DataContext. The table has a unique identifier, and I also execute a trigger after insertion, so I run a SELECT at the end of the trigger to fetch the auto-generated number and avoid an auto-sync error. As it's a Windows application, I can keep the Context alive for a long time. When I create a new object (for example, an order) and repeat the previous operation, SubmitChanges fails with a duplicate key error. Why can't I use the same Context to insert the second record? Or do I need to create a new Context to insert a new record? (Is this where the Unit of Work concept comes in?) Creating a new Context seems like a bad idea, as I would need to load all the data again..
Any thought?
Some code sample to explain my situation:
CallCenterLogObjCotext = (CallCenterLogObjCotext == null ? new CallcenterLogContext() : CallCenterLogObjCotext);

CallDetail newCallDetailsOpenTicket = new CallDetail();
newCallDetailsOpenTicket.CallPurpose = (from callpuposelist in CallCenterLogObjCotext.CallPurposes
                                        where callpuposelist.CallPurposeID == ((CallPurpose)(cbcallpurpose.SelectedItem)).CallPurposeID
                                        select callpuposelist).FirstOrDefault();
Lots of settings like this ...
CallCenterLogObjCotext.CallDetails.InsertOnSubmit(newCallDetailsOpenTicket);
CallCenterLogObjCotext.SubmitChanges();
As I mentioned above, this happens on a click of the Open Ticket button on a Windows form. I change the values of fname, lname, etc. in the textboxes on that form and click the same button, so the same method runs again. I then get the error below:
System.Data.Linq.DuplicateKeyException: Cannot add an entity with a key that is already in use.
You can insert more than one row with the same context object; see http://weblogs.asp.net/scottgu/archive/2007/05/19/using-linq-to-sql-part-1.aspx, http://msdn.microsoft.com/en-us/library/bb425822.aspx, and numerous other online examples. The duplicate key issue could be a LINQ to SQL configuration issue or a database integrity error, e.g. if you have a natural primary key on a table and are trying to insert a row with the same natural primary key more than once.
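If the long-lived context keeps producing DuplicateKeyException, one common pattern (a sketch only, reusing the asker's type names) is to scope a fresh DataContext to each unit of work instead of caching one for the lifetime of the form:

// one short-lived context per button click: nothing lingers in the
// identity map, so a repeated insert cannot collide with a cached entity
using (var context = new CallcenterLogContext())
{
    var newCallDetail = new CallDetail();
    // ... populate fields from the form here ...
    context.CallDetails.InsertOnSubmit(newCallDetail);
    context.SubmitChanges();
}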