I have a solution for a Universal App with a background task. The background task is registered for an interval of 30 minutes (the limit for Phone).
But the background task doesn't start for days. Only when I restart my whole phone does the task start. The task doesn't eat much CPU time; it is quite slim. The fact that the task starts after a restart tells me it is registered correctly. I can also start it with the Visual Studio debugger.
How can I be sure that the task doesn't run into the CPU quotas?
This is how i register the task:
await BackgroundExecutionManager.RequestAccessAsync();
var registeredTask = BackgroundTaskRegistration.AllTasks.Values.FirstOrDefault(x => x.Name == taskName);
if (registeredTask == null)
{
    var backgroundTaskBuilder = new BackgroundTaskBuilder();
    backgroundTaskBuilder.Name = taskName;
    backgroundTaskBuilder.TaskEntryPoint = taskEntryPoint;
    backgroundTaskBuilder.SetTrigger(new TimeTrigger(30, false));
    backgroundTaskBuilder.SetTrigger(new SystemTrigger(SystemTriggerType.InternetAvailable, false));
    backgroundTaskBuilder.Register();
}
A background task can have only one trigger: each call to SetTrigger replaces the previous one. Since you're setting the SystemTrigger after the TimeTrigger, the SystemTrigger is the one the task ends up registered with. So when you restart the phone, it regains its Internet connection and the task is executed.
If you need to have two triggers, all you need to do is create two tasks. They can have the same entry point and just need different names and triggers.
If you want to run the task every 30 minutes IF there is Internet available, you need to add a condition rather than a trigger:
backgroundTaskBuilder.AddCondition(new SystemCondition(SystemConditionType.InternetAvailable));
Note the difference: SetTrigger sets a single trigger (each call replaces the previous one), while AddCondition can add multiple conditions.
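Putting the two points together, a sketch of the corrected registration (assuming the same taskName and taskEntryPoint variables as in the question):

```csharp
await BackgroundExecutionManager.RequestAccessAsync();

if (BackgroundTaskRegistration.AllTasks.Values.All(x => x.Name != taskName))
{
    var builder = new BackgroundTaskBuilder
    {
        Name = taskName,
        TaskEntryPoint = taskEntryPoint
    };

    // Exactly one trigger: fire every 30 minutes (the minimum for TimeTrigger)
    builder.SetTrigger(new TimeTrigger(30, false));

    // Conditions can be stacked: only run while the Internet is available
    builder.AddCondition(new SystemCondition(SystemConditionType.InternetAvailable));

    builder.Register();
}
```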
Thanks in advance for attempting to assist me with this issue.
I'm using CakePHP 2 (2.10.22).
I have a system which creates applications. Each application that gets created has a unique application number; the MySQL column that stores this number is set to NOT NULL and UNIQUE. I'm using CakePHP to get the last used application number from the database and then build the next application number for the new application that needs to be created. The process I have written works without any problem when a single request is received at a given point in time.

The problem arises when two requests to create an application are received at the exact same time. The behaviour I have observed is that the request that gets picked up first reads the last application number, e.g. ABC001233, assigns ABC001234 as the application number for the new application, and successfully saves it to the database. The second request, running concurrently, also reads ABC001233 as the last application number and tries to create a new application with ABC001234. The MySQL database returns an error saying that the application number is not unique.

I then put the second request to sleep for 2 seconds, by which time the first application has been saved. I then re-attempt the application creation process, which first gets the last application number. This should now be ABC001234, but instead each database read keeps returning ABC001233, even though the first request has long since completed. Both requests run inside transactions started in the controller. What I have noticed is that when I remove these transactions, the process works correctly: for the second request, after the first attempt fails, the second attempt correctly reads ABC001234 as the last application number and assigns ABC001235 to the new application.
I want to know what I need to be doing so as to ensure the process works correctly even with the transaction directives in the controller.
Please find below some basic information on how the code is structured -
Database
The last application number is ABC001233
Controller file
function create_application(){
    $db_source->begin(); // The process works correctly if I remove this line.
    $result = $Application->create_new();
    if($result === true){
        $db_source->commit();
    }else{
        $db_source->rollback();
    }
}
Application model file
function get_new_application_number(){
    $application_record = $this->find('first', [
        'order' => [
            $this->name.'.application_number DESC'
        ],
        'fields' => [
            $this->name.'.application_number'
        ]
    ]);
    $old_application_number = $application_record[$this->name]['application_number'];
    $new_application_number = $old_application_number + 1;
    return $new_application_number;
}
The above is where I feel the problem originates. For the first request that gets picked up, this find correctly returns ABC001233 as the last application number, so the function returns ABC001234 as the next one. The second request also picks up ABC001233, and its save of ABC001234 fails because the first request has already saved an application with that number. On the second request's retry (driven by the do/while loop) this find runs again, but instead of returning ABC001234 as the last application number (per the successful save of the first request), it keeps returning ABC001233, so the save fails again. If I remove the transaction from the controller, the retry works correctly and returns ABC001234. I couldn't find any documentation on why that is or what can be done about it, which is where I need some assistance. Thank you!
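A likely explanation for the "stale" reads, assuming InnoDB with its default REPEATABLE READ isolation level: inside a transaction, plain SELECTs read from a consistent snapshot established at the first read, so retrying the same SELECT within the same transaction keeps returning the old value regardless of what other transactions have committed in the meantime. A sketch of the effect (table and column names are assumed, not taken from the actual schema):

```sql
-- Session 2, in a transaction whose first read happened before session 1 committed:
START TRANSACTION;
SELECT MAX(application_number) FROM applications;  -- snapshot sees ABC001233
-- Session 1 commits ABC001234 at this point.
SELECT MAX(application_number) FROM applications;  -- still ABC001233: same snapshot
COMMIT;
```

This is why removing the controller transaction makes the retry work: each find then runs in its own transaction with a fresh snapshot.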
function create_new(){
    $new_application_number = $this->get_new_application_number();
    $save_attempts = 0;
    do{
        $save_exception = false;
        try{
            $result = $this->save([$this->name => ['application_number' => $new_application_number]], [
                'atomic' => false
            ]);
        }catch(Exception $e){
            $save_exception = true;
            sleep(2);
            $new_application_number = $this->get_new_application_number();
        }
    }while($save_exception === true && $save_attempts++ < 5);
    return !$save_exception;
}
You just have to lock the row with the previous number in a transaction using SELECT ... FOR UPDATE. It's much better than locking the whole table, as suggested in the comments.
According to the documentation (https://book.cakephp.org/2/en/models/retrieving-your-data.html), you just have to add 'lock' => true to the find call in get_new_application_number:
function get_new_application_number(){
    $application_record = $this->find('first', [
        'order' => [
            $this->name.'.application_number DESC'
        ],
        'fields' => [
            $this->name.'.application_number'
        ],
        'lock' => true
    ]);
    $old_application_number = $application_record[$this->name]['application_number'];
    $new_application_number = $old_application_number + 1;
    return $new_application_number;
}
How it works: the second transaction blocks on that SELECT until the first transaction ends (commits or rolls back), and then reads the freshly committed value.
P.S. According to the documentation, the lock option was added in CakePHP 2.10.0.
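For reference, with 'lock' => true the generated query should be roughly the following (the table name is assumed); the FOR UPDATE clause is what makes the second transaction wait:

```sql
SELECT application_number
FROM applications
ORDER BY application_number DESC
LIMIT 1
FOR UPDATE;
```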
I am creating a little data processing script using Selenium, where I input my values and it runs a function to do the task on a website. I would like to queue inputs so that I can enter new values while it works on the old ones.
while customername != 1:
    print("Customer name")
    customername = input()
    print("Credit amount")
    creditamount = input()
    addcredit(driver, customername, creditamount)
How would I get the function addcredit() to run while the loop continues and asks me for the next set of inputs?
Thank you all!
So after a bit more research, I used threading:
p1 = threading.Thread(target=addcredit, args=(driver, customername, creditamount))
p1.start()
This is allowing my script to run as intended: it starts the action and then lets me type more data in to run the action again. From my understanding, when the function called in the second thread sleeps, execution bounces back to the first thread and continues on. Someone please correct me if I am wrong.
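Starting a new thread per input works, but since the goal is to queue inputs, a worker thread pulling jobs off a queue.Queue is a closer fit and guarantees the jobs run one at a time in order. A minimal sketch: addcredit is stubbed out to record its arguments, and the Selenium driver is replaced with None, since both are specific to the original script.

```python
import queue
import threading

results = []

def addcredit(driver, customername, creditamount):
    # Stand-in for the real Selenium work done in the original script
    results.append((customername, creditamount))

def worker(driver, jobs):
    # Process jobs one at a time until the sentinel None arrives
    while True:
        job = jobs.get()
        if job is None:
            break
        customername, creditamount = job
        addcredit(driver, customername, creditamount)
        jobs.task_done()

jobs = queue.Queue()
t = threading.Thread(target=worker, args=(None, jobs), daemon=True)
t.start()

# The main loop only enqueues work, so it can keep asking for input immediately;
# real code would read these pairs from input() as in the question.
for name, amount in [("Alice", "10"), ("Bob", "20")]:
    jobs.put((name, amount))

jobs.join()    # wait for outstanding jobs before exiting
jobs.put(None)  # tell the worker to stop
```

Unlike one thread per call, a single worker means two submissions can never drive the browser at the same time.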
I have a Synchronisation tool which uses EWS Managed API 'SyncFolderItems' to retrieve changed items in a Calendar folder. It has been running fine for 18 months or so but this week two customers both experienced the same issue. (Could be a coincidence). Both are Office 365 customers.
The sync started failing to complete, and on closer analysis, the SyncFolderItems call was failing with error "An internal server error occurred. The operation failed." No further details given in the error.
I reset the sync (i.e. set the syncstate back to null so it syncs everything in the folder again). It worked fine for several pagination iterations, getting 250 items at a time, but then at some point it failed.
I thought maybe there was an item with a lot of data, so I reduced the page size to 25; it worked OK for a bit, then failed. I reduced the page size to 1 and it still fails. There are 1250 items in the folder, so I haven't worked out which item it is failing at yet. It seems inconsistent, and I wonder if there is a throttle which has been set higher recently?
I think my next step is to see if there is one offending item and delete it, but it's hard to work out which item it might be.
Does anyone have any suggestions for what might be going wrong?
icc = service.SyncFolderItems(Connection.SourceID, ps, null, 1,
SyncFolderItemsScope.NormalItems, syncstate);
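To narrow down a possible offending item, one option is to record the last sync state that succeeded, so the failing window of items can be inspected afterwards. This is only a sketch under assumptions: it reuses the Connection, ps, and syncstate variables from the code above and assumes the failure surfaces as the EWS Managed API's ServiceResponseException.

```csharp
string lastGoodState = syncstate;
try
{
    var icc = service.SyncFolderItems(Connection.SourceID, ps, null, 1,
        SyncFolderItemsScope.NormalItems, syncstate);
    syncstate = icc.SyncState;
}
catch (ServiceResponseException ex)
{
    // With a page size of 1, the item right after lastGoodState is the
    // prime suspect; log the state so that window can be examined.
    Console.WriteLine("Sync failed after state " + lastGoodState + ": " + ex.Message);
}
```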
I am having the same issue with SyncFolderItems(..).
The problem seems to exist only in public folders? I tested with a public calendar folder. If I call SyncFolderItems for the first time with no sync state, I get no error. But if I have a sync state and delete one item in the public folder, I get this error on the next call of SyncFolderItems().
ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2016, TimeZoneInfo.Local);
service.Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx");
service.Credentials = new WebCredentials("x.y#germany.de", "pwd");

Folder f1 = Folder.Bind(service, new FolderId("SOMEFOLDERID OF A PUBLIC CALENDAR FOLDER"), BasePropertySet.FirstClassProperties);
Console.WriteLine(f1.DisplayName);

String status = "";
do
{
    ChangeCollection<ItemChange> changes = service.SyncFolderItems(f1.Id, BasePropertySet.IdOnly, null, 100, SyncFolderItemsScope.NormalItems, status);
    Console.Write(changes.Count + ",");
    status = changes.SyncState;
} while (true);
ServerInfo: {15.01.1101.019} V2017_04_14
I would like to know what the correct place to close a connection to the database is.
Let's say that I have the following piece of code:
function addItem(dbName, versionNumber, storeName, element, callback){
    var requestOpenDB = indexedDB.open(dbName, versionNumber); // IDBRequest
    requestOpenDB.onsuccess = function(event){
        //console.log("requestOpenDB.onsuccess");
        var db = event.target.result;
        var trans = db.transaction(storeName, "readwrite");
        var store = trans.objectStore(storeName);
        var requestAdd = store.add(element);
        requestAdd.onsuccess = function(event) {
            callback("Success");
        };
        requestAdd.onerror = function(event) {
            callback("Error");
        };
    };
    requestOpenDB.onerror = function(event) {
        console.log("Error: " + event.srcElement.error.message); /* handle error */
        callback("Error");
    };
}
addItem basically adds a new element into the database. As per my understanding, when the requestAdd success event fires, that doesn't necessarily mean the transaction has finished, so I am wondering where the best place to call db.close() is. I was closing the connection inside requestAdd.onsuccess, but if an error happens and requestAdd.onerror fires instead, the connection might still be open. I am thinking about adding trans.oncomplete just under requestAdd.onerror and closing the db connection there, which might be a better option. Any input will be more than welcome. Thank you.
You may wish to explicitly close a connection if you anticipate upgrading your database schema. Here's the scenario:
A user opens your site in one tab (tab #1), and leaves it open.
You push an update to your site, which includes code to upgrade the database schema, increasing the version number.
The same user opens a second tab to your site (tab #2) and it attempts to connect to the database.
If the connection is held open by tab #1, the connection/upgrade attempt by tab #2 will be blocked. Tab #1 will see a "versionchange" event (so it could close on demand); if it doesn't close its connection then tab #2 will see a "blocked" event.
If the connection is not held open by tab #1, then tab #2 will be able to connect and upgrade. If tab #1 then tries (based on user action, etc) to open the database (with an explicit version number) it will fail since it will be using an old version number (since it still has the old code).
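To let tab #1 yield gracefully in that scenario, it can listen for the "versionchange" event on its open connection and close there. A minimal sketch (the database name and version number are placeholders):

```javascript
var request = indexedDB.open("mydb", 2);
request.onsuccess = function (event) {
    var db = event.target.result;
    // Fired in this tab when another tab requests a higher version
    db.onversionchange = function () {
        db.close(); // release the connection so the other tab can upgrade
    };
};
```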
You generally never need to close a connection. You are not creating memory leaks or anything like that. Leaving the connection open does not result in a material performance hit.
I would suggest not worrying about it.
Also, whether you add trans.oncomplete before or after request.onerror is not important. I understand how it can be confusing, but the order in which you bind the listeners is irrelevant (qualified: from within the same function scope).
You can call db.close() immediately after creating the transaction:
var trans = db.transaction(storeName, "readwrite");
db.close();
and it will close the connection only after the transaction has completed.
https://developer.mozilla.org/en-US/docs/Web/API/IDBDatabase/close says
The connection is not actually closed until all transactions created using this connection are complete. No new transactions can be created for this connection once this method is called.
If you want to run multiple versions of your app with both accessing the same database, you might think it's possible to keep connections open to both. This is not possible; you must close the database in one before opening it in another. One problem, though, is that there is currently no way to know when the database has actually closed.
I managed to get PullAsync working correctly in Azure Mobile Services 1.3.0-beta3 using
responseTypeTable.PullAsync(responseTypeTable.Where(c => c.CompanyId == companyId));
Then I upgraded to the first stable release over the weekend.
Now PullAsync requires a QueryId parameter as well as the query. First, I am confused as to why there would be a breaking change from beta3 to stable; I thought the API should have been well and truly sorted by now, so maybe I am doing something wrong.
Anyway, I put in the QueryId as shown:
responseTypeTable.PullAsync("QueryResponseTypePull",
responseTypeTable.Where(c => c.CompanyId == companyId));
The code compiles and runs fine, and it even hits the API, but it doesn't return any values into the local store. When I run
result = await responseTypeTable.Where(c => c.CompanyId == companyId).ToListAsync();
to get the results from the local database it is always empty. This is the exact same code that was working prior to my update to 1.3.0 stable.
Providing a QueryId causes the framework to download changes incrementally, i.e. only data updated since the last sync is downloaded.
If you wish to download all the data every time, you can pass null in place of the QueryId and it will resort to a full sync.
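A sketch of the two variants, using the same table and query as in the question:

```csharp
// Incremental sync: only changes since the last pull with this query ID
await responseTypeTable.PullAsync("QueryResponseTypePull",
    responseTypeTable.Where(c => c.CompanyId == companyId));

// Full sync: pass null as the query ID to download everything each time
await responseTypeTable.PullAsync(null,
    responseTypeTable.Where(c => c.CompanyId == companyId));
```

If the local store already holds stale rows from before the upgrade, purging the table (PurgeAsync) before the first incremental pull is worth trying as well.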