Thanks in advance for attempting to assist me with this issue.
I'm using CakePHP 2 (2.10.22).
I have a system which creates applications. Each application that gets created has a unique application number; the MySQL column that stores this number is set to 'Not null' and 'Unique'. I use CakePHP to read the last used application number from the database and then build the next application number for the new application that needs to be created. The process I have written works without any problem when a single request is received at a given point in time. The problem arises when two requests to create an application are received at the exact same time.

The behaviour I have observed is this: the request that gets picked up first reads the last application number - e.g. ABC001233 - assigns ABC001234 as the application number for the new application, and successfully saves it to the database. The second request, running concurrently, also reads ABC001233 as the last application number and tries to create a new application with ABC001234. MySQL returns an error saying that the application number is not unique. I then put the second request to sleep for 2 seconds, by which time the first application has successfully saved, and re-attempt the creation process. That retry first reads the last application number, which should now be ABC001234, but every database read keeps returning ABC001233 even though the first request has long since completed.

Both requests run inside transactions started in the controller. What I have noticed is that when I remove these transactions, the process works correctly: after the second request's first attempt fails, its second attempt correctly reads ABC001234 as the last application number and assigns ABC001235 as the new application number. I want to know what I need to do so that the process works correctly even with the transaction directives in the controller.
Please find below some basic information on how the code is structured -
Database
The last application number is ABC001233
Controller file
function create_application(){
    $db_source->begin(); // The process works correctly if I remove this line.
    $result = $Application->create_new();
    if($result === true){
        $db_source->commit();
    }else{
        $db_source->rollback();
    }
}
Application model file
function get_new_application_number(){
    $application_record = $this->find('first', [
        'order' => [
            $this->name.'.application_number DESC'
        ],
        'fields' => [
            $this->name.'.application_number'
        ]
    ]);
    $old_application_number = $application_record[$this->name]['application_number'];
    $new_application_number = $old_application_number + 1;
    return $new_application_number;
}
The above is where I feel the problem originates. For the first request that gets picked up, this find correctly determines that ABC001233 is the last application number, and the function returns ABC001234 as the next one. The second request also picks up ABC001233 as the last application number, but fails when it tries to save ABC001234 because the first request has already saved an application with that number. During the second attempt for the second request (triggered by the do/while loop in create_new below), this find runs again, but instead of returning ABC001234 as the last application number (per the successful save of the first request), it keeps returning ABC001233, resulting in another failed save. If I remove the transaction from the controller, this works correctly and the second attempt returns ABC001234. I couldn't find any documentation as to why that is and what can be done about it, which is where I need some assistance. Thank you!
function create_new(){
    $new_application_number = $this->get_new_application_number();
    $save_attempts = 0;
    do{
        $save_exception = false;
        try{
            $result = $this->save([$this->name => ['application_number' => $new_application_number]], [
                'atomic' => false
            ]);
        }catch(Exception $e){
            $save_exception = true;
            sleep(2);
            $new_application_number = $this->get_new_application_number();
        }
    }while($save_exception === true && $save_attempts++ < 5);
    return !$save_exception;
}
You just have to lock the row with the previous number inside the transaction using SELECT ... FOR UPDATE. That is much better than locking the whole table, as was suggested in the comments.
According to the documentation (https://book.cakephp.org/2/en/models/retrieving-your-data.html), you just have to add 'lock' => true to the find call in get_new_application_number:
function get_new_application_number(){
    $application_record = $this->find('first', [
        'order' => [
            $this->name.'.application_number DESC'
        ],
        'fields' => [
            $this->name.'.application_number'
        ],
        'lock' => true // Appends FOR UPDATE, locking the row inside the current transaction.
    ]);
    $old_application_number = $application_record[$this->name]['application_number'];
    $new_application_number = $old_application_number + 1;
    return $new_application_number;
}
How it works: the second transaction blocks on that SELECT until the first transaction either commits or rolls back, and then reads the committed row, so it generates the next number from up-to-date data.
P.S. According to the documentation, the lock option was added in CakePHP 2.10.0.
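For reference, this is roughly the SQL the locked find sends to MySQL - a sketch only; the table and alias names below are assumptions based on the model in the question:

-- Approximate query produced by the find with 'lock' => true;
-- table and alias names are assumptions.
SELECT `Application`.`application_number`
FROM `applications` AS `Application`
ORDER BY `Application`.`application_number` DESC
LIMIT 1
FOR UPDATE;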
Related
I managed to get PullAsync working correctly in Azure Mobile Services 1.3.0-beta3 using
responseTypeTable.PullAsync(responseTypeTable.Where(c => c.CompanyId == companyId));
Then I upgraded to the first stable release over the weekend.
Now PullAsync requires a QueryId parameter as well as the query. First, I am confused as to why there would be a breaking change going from beta3 to stable; I thought the API should have been well and truly settled by now, so maybe I am doing something wrong.
Anyway, I put in the Query Id as shown
responseTypeTable.PullAsync("QueryResponseTypePull",
responseTypeTable.Where(c => c.CompanyId == companyId));
The code compiles and runs, and it even executes fine and hits the API, but it doesn't return any values into the local store. When I run
result = await responseTypeTable.Where(c => c.CompanyId == companyId).ToListAsync();
to get the results from the local database it is always empty. This is the exact same code that was working prior to my update to 1.3.0 stable.
Providing a QueryId causes the framework to download changes incrementally, i.e. only data updated since the last time you synced is downloaded.
If you wish to download all the data every time, you can pass null in place of the QueryId and it will resort to a full sync.
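A quick sketch of both modes, based on the call from the question (the query ID string is arbitrary; any stable name works):

// Incremental sync: the framework records a sync point under this query ID
// and only downloads rows changed since the previous pull.
await responseTypeTable.PullAsync("QueryResponseTypePull",
    responseTypeTable.Where(c => c.CompanyId == companyId));

// Full sync: a null query ID downloads all matching rows every time.
await responseTypeTable.PullAsync(null,
    responseTypeTable.Where(c => c.CompanyId == companyId));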
I have a view ObjectDisplay that is composed of two relevant tables: Object and State. State represents the state of an Object, and the view pulls some of the details from the most recent State for each Object.
On the page that is displaying this information, a user can enter some comments, which creates a new State. After creating the new State, I immediately pull the Object from ObjectDisplay and send it back to be dropped into a partial view and replace the Object in the grid on the page.
// Add new State.
db.States.Add(new State()
{
    ObjectId = objectId,
    Comments = comments,
    UserName = username
});

// Save the changes (executes all of the above).
db.SaveChanges();

// Return the new Object information.
return db.Objects.Single(c => c.ObjectId == objectId);
According to my db trace, the Single call occurs about 70 ms after the SaveChanges call, and it occurs on the same SPID.
Now for the issue: the database defaults the value of RecordDate in State to GETUTCDATE() - I don't provide the date myself. What I'm seeing is that the Object returned still carries the old State's information. When I refresh the page, all the correct information is there, but the initial call through the database/EF returns the wrong information.
So... what could be wrong? Could the view not be updating quickly enough? Could something be going on with EF? I don't really know where to start looking.
If you've previously loaded the same Object entity in the same DbContext, EF will return the cached instance with the stale values, and ignore the values returned from SQL.
The simplest solution is to reload the entity before returning it:
var result = db.Objects.Single(c => c.ObjectId == objectId);
db.Entry(result).Reload(); // overwrites the cached property values with the current database values
return result;
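If you don't need change tracking on this query at all, another option - assuming you are on DbContext (EF 4.1 or later), which the Reload() call above suggests - is to bypass the cache entirely with AsNoTracking():

// No-tracking queries skip the context's identity map, so the entity is
// always materialized from the values the database just returned.
return db.Objects.AsNoTracking().Single(c => c.ObjectId == objectId);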
This is indeed odd. In SQL Server, views are not persisted by default and therefore show changes in the underlying data right away. You can create a clustered index on a view, which effectively persists the query, but in that case the data is updated synchronously, so you should still see the change right away.
If you were working with snapshot isolation, your changes might not be visible to other SPIDs right away, but as you are on the same SPID and do not use snapshot isolation, this can't be the culprit either.
The only thing left at this point is the application layer. Are you actually using the result of the Single call higher up in the call stack, or does it get lost somewhere? I assume that a refresh of the page uses a different code path, which would explain why it works there.
I am currently getting products from one site, storing them in a database, and then having their prices display on another site. I am trying to get the prices from the one site to update daily in my database so the new updated prices can be displayed onto my other site.
Right now I am getting the products using an item number but have to manually go in and update any prices that have changed.
I am guessing I am going to have to use some kind of cronjob, but I'm not sure how to set that up. I have no experience with cronjobs and am a noob with PHP.
Any ideas?
Thanks!
I have done some reading on the foreach loop and have written some code. But my foreach loop only runs once, for the first item number: it runs, goes to the "api.php" page, and then stops. It doesn't loop over each item number. How do I tell it to go through all of the item numbers in my database?
Also if you see anything else wrong in my code please let me know.
Thanks
....
$itemnumber = array("".$result['item_number']."");
foreach ($itemnumber as $item_number) {
    echo "<form method=\"post\" action=\"api.php\" name=\"ChangeSubmit\" id=\"ChangeSubmit\">";
    echo "<input type=\"text\" name=\"item_number\" value=\"{$item_number}\" />";
    echo "<script type=\"text/javascript\">
        function myfunc () {
            var frm = document.getElementById(\"ChangeSubmit\");
            frm.submit();
        }
        window.onload = myfunc;
    </script></form>";
}
}
If you already retrieve the product data from an external site and store it in a local database, updating the prices from the same source should be no problem: just retrieve the data, iterate through it in a foreach loop or similar, and update the prices in the database based on the item number.
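A minimal sketch of such a script - all table, column, and credential names here are assumptions, and fetch_remote_price() stands in for whatever call gets the current price from the source site:

<?php
// update_prices.php - hypothetical daily price refresh.
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'dbuser', 'dbpass');

// Grab every stored item number.
$items = $pdo->query('SELECT item_number FROM products')->fetchAll(PDO::FETCH_COLUMN);

$update = $pdo->prepare('UPDATE products SET price = :price WHERE item_number = :item');

foreach ($items as $itemNumber) {
    $price = fetch_remote_price($itemNumber); // your existing API/scrape call goes here
    if ($price !== null) {
        $update->execute([':price' => $price, ':item' => $itemNumber]);
    }
}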
Once you have created the update script and run it manually, adding it as a cronjob is as simple as running `crontab -e` and adding this row to execute your script every midnight:
0 0 * * * /usr/local/bin/php /path/to/your/script.php
Don't forget to use the correct path to PHP for your system; running `which php` in the shell will tell you the path.
If you have cronjobs on your server, it's very straightforward: you make a PHP script that runs the update and add it as a daily cronjob.
However, I do it this way:
Method 1: At the beginning of every page request, check the last "update" time (you choose how to store it). If it's been more than a day, do the update and set the "update" time to the current time (a sketch of this is shown after Method 2).
This way, whenever someone loads a page and it's been a day since the last update, the update runs for them. However, this means the page is slower for one random user, once a day. If that isn't acceptable, there's a small change:
Method 2: If you need to update (per the check above), start an asynchronous request for the data, handle the rest of the page, flush it to the user, then wait in a loop until the request finishes and update the database.
The downside to method 2 is that that user won't see the updated values, but the benefit is that it won't be any more of a wait for them.
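A minimal sketch of Method 1, assuming the last update time is kept in a flat file and update_prices() is your (hypothetical) update routine:

<?php
$stampFile  = __DIR__ . '/last_update.txt';
$lastUpdate = file_exists($stampFile) ? (int) file_get_contents($stampFile) : 0;

if (time() - $lastUpdate > 86400) {          // more than a day since the last run
    update_prices();                         // hypothetical: your update routine
    file_put_contents($stampFile, time());   // record the new update time
}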
I have some tables in a MySQL database representing records from a sensor. One of the features of the system I'm developing is to display these records to the web user, so I used the ADO.NET Entity Data Model to create an ORM, used LINQ to SQL to get the data from the database, and stored it in a ViewModel I designed, so I can display it using the MVCContrib Grid Helper:
public IQueryable<TrendSignalRecord> GetTrends()
{
    var dataContext = new SmgerEntities();
    var trendSignalRecords = from e in dataContext.TrendSignalRecords
                             select e;
    return trendSignalRecords;
}
public IQueryable<TrendRecordViewModel> GetTrendsProjected()
{
    var projectedTrendRecords = from t in GetTrends()
                                select new TrendRecordViewModel
                                {
                                    TrendID = t.ID,
                                    TrendName = t.TrendSignalSetting.Name,
                                    GeneratingUnitID = t.TrendSignalSetting.TrendSetting.GeneratingUnit_ID,
                                    //{...}
                                    Unit = t.TrendSignalSetting.Unit
                                };
    return projectedTrendRecords;
}
I call the GetTrendsProjected method and then use LINQ to SQL to select only the records I want. It works fine in my development scenario, but when I test it in a real scenario, where the number of records is far greater (around a million), it stops working.
I put in some debug messages to test it, and everything works fine until it reaches the return View() statement, where it simply stops, throwing a MySqlException: Timeout expired. That left me wondering whether the data I send to the page is retrieved lazily by the page itself (i.e. it only queries the database for the displayed items when the page needs them, or something like that).
All of my other pages use the same set of tools: MVCContrib Grid Helper, ADO.NET, Linq to SQL, MySQL, and everything else works alright.
You absolutely should paginate your data set before executing the query if you have millions of records. This can be done with the .Skip and .Take extension methods, and they must be applied before the query runs against the database.
Trying to fetch millions of records from a database without pagination will very likely cause a timeout at best.
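A sketch of what that could look like here, assuming page and pageSize come in from the request (the method and parameter names are mine, not from the question):

public IQueryable<TrendRecordViewModel> GetTrendsPage(int page, int pageSize)
{
    return GetTrendsProjected()
        .OrderBy(t => t.TrendID)        // Skip/Take require a deterministic ordering
        .Skip((page - 1) * pageSize)    // pages are 1-based here
        .Take(pageSize);
}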
Well, assuming the information in this blog is correct, the .AsPagination method requires you to sort the data by a particular column. It's possible that doing an OrderBy on a table with millions of records in it is simply a time-consuming operation and times out.
I'm trying to fit LINQ to SQL into an N-tier design. I implement concurrency by supplying original values when attaching objects to the DataContext. When calling SubmitChanges and observing the generated scripts in SQL Server Profiler, I can see that they are generated properly: they include WHERE clauses that check all the object properties (they are all marked with UpdateCheck.Always).
The result is as expected, i.e. no rows are updated on updates or deleted on deletes. Yet I am not getting any exception. Isn't this supposed to throw a ChangeConflictException?
For clarity here is the design and flow for the tests I'm running: I have a client console and a service console talking to each other via WCF using WsHttpBinding.
1. Client requests data from the service.
2. Service instantiates a datacontext, retrieves the data, disposes of the context, and returns the data to the client.
3. Client makes modifications to the returned data.
4. Client requests an update of the changed data from the service.
5. a. Service instantiates a datacontext, attaches the objects, and...
   b. I pause execution and change values in the database in order to cause a change conflict.
   c. Service calls SubmitChanges.
Here's the code for step 5, cleaned up a bit for clarity:
public void UpdateEntities(ReadOnlyChangeSet<Entity> changeSet)
{
using (EntityDataContext context = new EntityDataContext())
{
if (changeSet.AddedEntities.Count > 0)
{
context.Entities.InsertAllOnSubmit(changeSet.AddedEntities);
}
if (changeSet.RemovedEntities.Count > 0)
{
context.Entities.AttachAll(changeSet.RemovedEntities, false);
context.Entities.DeleteAllOnSubmit(changeSet.RemovedEntities);
}
if (changeSet.ModifiedRecords.Count > 0)
{
foreach (var record in changeSet.ModifiedRecords)
{
context.Entities.Attach(record.Current, record.Original);
}
}
// This is where I pause execution and make changes to the database
context.SubmitChanges();
}
}
I'm using some classes to track changes and maintain originals, as you can see.
Any help appreciated.
EDIT: I'm having no problems with inserts. I've only included the code that calls InsertAllOnSubmit for completeness.
So I've found the answer. It appears to be a bug in LINQ to SQL (correct me if I'm wrong). It turns out that the table being updated has a trigger on it, and this trigger calls a stored procedure that has a return value. As a result, inserts, updates, or deletes on this table yield a return value (from the stored procedure run by the trigger) which is NOT a row count, just a number. Apparently LINQ to SQL sees this number and assumes all went well, even though no insert/update/delete actually occurred.
This is quite bizarre, especially considering the returned number has a defined column name and its value is in the six-digit range.
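If that diagnosis is right, one common mitigation - a hedged sketch only; the trigger and table names below are made up - is to stop the trigger from emitting extra row counts or result sets, so the client only sees the row count of the original statement:

-- Hypothetical trigger; all names are illustrative.
ALTER TRIGGER trg_Entities_AfterUpdate ON Entities
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON; -- suppress "rows affected" messages from statements inside the trigger
    -- ... trigger body; avoid SELECTs that send result sets back to the caller,
    -- and capture any result a called procedure returns instead of letting it
    -- flow back to the client.
END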