ServiceStack OrmLite caching entries aren't deleted after expiry - MySQL

We are using ServiceStack caching with the OrmLite provider (MySQL). We noticed that when we create cache keys with expiry dates, the keys don't get deleted after the expiry date passes. Instead, they end up with NULL values in the "ExpiryDate" column, which produces strange values when we calculate Cache.GetTimeToLive().
Is this a bug in ServiceStack, or is it in our key-creation code? We are using ServiceStack 4.5.4 and OrmLite 4.5.4.
IAppSettings appSettings = new AppSettings();
var userConsultsPerHourLimit = appSettings.Get<int>("throttling:consultations:requests:perHourLimit");
var userConsultsPerDayLimit = appSettings.Get<int>("throttling:consultations:requests:perDayLimit");
var userConsultsPerMonthLimit = appSettings.Get<int>("throttling:consultations:requests:perMonthLimit");
var userConsultsMadePerHour = Cache.GetOrCreate<int>(UserConsultPerHourCacheKey, TimeSpan.FromHours(1), () => { return 0; });
var userConsultsMadePerDay = Cache.GetOrCreate<int>(UserConsultPerDayCacheKey, TimeSpan.FromDays(1), () => { return 0; });
var userConsultsMadePerMonth = Cache.GetOrCreate<int>(UserConsultPerMonthCacheKey, (new DateTime(DateTime.UtcNow.Year, DateTime.UtcNow.Month, 1).AddMonths(1).AddDays(-1) - DateTime.UtcNow), () => { return 0; });
string retryAfter = System.Threading.Thread.CurrentThread.CurrentCulture.Name == "ar-SA" ? "يوم" : "day";
bool shouldThrottleRequest = false;
bool didExceedMonthlyLimit = false;
if (userConsultsMadePerHour >= userConsultsPerHourLimit)
{
    shouldThrottleRequest = true;
    TimeSpan? timeToLive = Cache.GetTimeToLive(UserConsultPerHourCacheKey);
    if (timeToLive.HasValue)
        retryAfter = Humanizer.TimeSpanHumanizeExtensions.Humanize(timeToLive.Value, 2, System.Threading.Thread.CurrentThread.CurrentUICulture);
}
else if (userConsultsMadePerDay >= userConsultsPerDayLimit)
{
    shouldThrottleRequest = true;
    TimeSpan? timeToLive = Cache.GetTimeToLive(UserConsultPerDayCacheKey);
    if (timeToLive.HasValue)
        retryAfter = Humanizer.TimeSpanHumanizeExtensions.Humanize(timeToLive.Value, 2, System.Threading.Thread.CurrentThread.CurrentUICulture);
}
else if (userConsultsMadePerMonth >= userConsultsPerMonthLimit)
{
    shouldThrottleRequest = true;
    TimeSpan? timeToLive = Cache.GetTimeToLive(UserConsultPerMonthCacheKey);
    if (timeToLive.HasValue)
        retryAfter = Humanizer.TimeSpanHumanizeExtensions.Humanize(timeToLive.Value, 3, System.Threading.Thread.CurrentThread.CurrentUICulture);
    didExceedMonthlyLimit = true;
}

This is working as expected in the latest version of ServiceStack, where the row is deleted after fetching an expired cache entry:
var ormliteCache = Cache as OrmLiteCacheClient;
var key = "int:key";
var value = Cache.GetOrCreate(key, TimeSpan.FromMilliseconds(100), () => 1);
var ttl = Cache.GetTimeToLive(key);
using (var db = ormliteCache.DbFactory.OpenDbConnection())
{
    var row = db.SingleById<CacheEntry>(key);
    Assert.That(row, Is.Not.Null);
    Assert.That(row.ExpiryDate, Is.Not.Null);
}
Assert.That(value, Is.EqualTo(1));
Assert.That(ttl.Value.TotalMilliseconds, Is.GreaterThan(0));
Thread.Sleep(200);
value = Cache.Get<int>(key);
ttl = Cache.GetTimeToLive(key);
Assert.That(value, Is.EqualTo(0));
Assert.That(ttl, Is.Null);
using (var db = ormliteCache.DbFactory.OpenDbConnection())
{
    var row = db.SingleById<CacheEntry>(key);
    Assert.That(row, Is.Null);
}
We noticed that when we create cache keys with expiry dates, the keys don't get deleted after the expiry date passes.
The RDBMS doesn't automatically expire cache entries by date, but when resolving a cache entry the OrmLiteCacheClient will automatically delete expired entries (as can be seen above), so it will never return an expired entry.
Instead, they end up with NULL values in the "ExpiryDate" column.
This isn't possible. The ExpiryDate is only populated when creating or replacing an existing entry; it's never set to null when the entry expires. When an entry expires, the entire row is deleted.

I think we got to the bottom of this. It was caused by a misuse of the caching APIs on our side: we found calls to the Increment and Decrement APIs in several other places, which caused keys that had passed their expiry date to be deleted (by an internal call to the validate method) and then recreated from scratch, but without an expiry date. The solution was to call GetOrCreate before calling Increment/Decrement, to make sure the key does in fact exist and, if it doesn't, to recreate it with a fresh expiry date.
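For anyone hitting the same thing, a minimal sketch of the fix for the per-hour counter (same key and window as our code above; illustrative rather than our exact production code):

// Ensure the counter still exists with a fresh expiry before incrementing;
// calling Increment on an expired (hence deleted) key re-creates the entry
// without any ExpiryDate, which is what produced the NULL values we saw.
Cache.GetOrCreate<int>(UserConsultPerHourCacheKey, TimeSpan.FromHours(1), () => 0);
var consultsMadeThisHour = Cache.Increment(UserConsultPerHourCacheKey, 1);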

Related

How to fix "Service Documents failed while accessing document" while inserting a lot of data?

This is a follow-up question derived from How to solve error when adding big number of tables.
With the code below, I get the following message for 500 tables, but it works fine for 200, for example.
Exception: Service Documents failed while accessing document with id
The error happens on line 22, inside the if: body = DocumentApp.getActiveDocument().getBody();
You also have the table template id to try, but here is an image:
[Image: Table Template]
function RequirementTemplate_Copy() {
  var templatedoc = DocumentApp.openById("1oJt02MfOIQPFptdWCwDpj5j-zFdO_Wrq-I48mUq9I-w");
  return templatedoc.getBody().getChild(1).copy();
}

function insertSpecification_withSection() {
  // Returns a table template copied from another document
  reqTableItem = RequirementTemplate_Copy();
  var body = DocumentApp.getActiveDocument().getBody();
  // Creates X number of separated tables from the template
  for (var i = 1; i < 501; i++) {
    table = reqTableItem.copy().replaceText("#Title#", String(i));
    body.appendTable(table);
    if ((i % 100) === 0) {
      DocumentApp.getActiveDocument().saveAndClose();
      body = DocumentApp.getActiveDocument().getBody();
    }
  }
}
It looks like the error message isn't related to the number of tables to be inserted, because it occurs before the tables are added.
Just wait a bit and try again. If the problem persists, try your code using a different account; if the code runs on the second account, it's very possible that your first account exceeded a limit. There are some limits in place to prevent abuse that aren't published and that might change without any announcement.
Using the fix suggested for the code in my answer to the previous question, and changing the iteration limit to 1000 and then 2000, works fine.
The following screenshot shows the result for 1000.
Here is the code used for the tests:
function insertSpecification_withSection() {
  startTime = new Date();
  console.log("Starting Function... ");
  // Returns a table template copied from another document
  reqTableItem = RequirementTemplate_Copy();
  var body = DocumentApp.getActiveDocument().getBody();
  // Creates X number of separated tables from the template
  for (var i = 0; i < 2000; i++) {
    table = body.appendTable(reqTableItem.copy());
    // if ((i % 100) === 0) {
    //   DocumentApp.getActiveDocument().saveAndClose();
    // }
  }
  endTime = new Date();
  timeDiff = endTime - startTime;
  console.log("Ending Function..." + timeDiff + " ms");
}

function RequirementTemplate_Copy() {
  var ReqTableID = PropertiesService.getDocumentProperties().getProperty('ReqTableID');
  try {
    var templatedoc = DocumentApp.openById(ReqTableID);
  } catch (error) {
    DocumentApp.getUi().alert("Could not find the document. Confirm it was not deleted and that anyone has read access with the link.");
    //Logger.log("Document not accessible", ReqTableID)
  }
  var reqTableItem = templatedoc.getBody().getChild(1).copy(); // getBody() added: Document itself has no getChild()
  return reqTableItem;
}

function setReqTableID() {
  PropertiesService.getDocumentProperties().setProperty('ReqTableID', '1NS9nOb3qEBrqkcAQ3H83OhTJ4fxeySOQx7yM4vKSFu0');
}

Assign JSON value to variable based on value of a different key

I have this function for extracting the timestamp from two JSON objects:
lineReader.on('line', function (line) {
  var obj = JSON.parse(line);
  if (obj.Event == "SparkListenerApplicationStart" || obj.Event == "SparkListenerApplicationEnd") {
    console.log('Line from file:', obj.Timestamp);
  }
});
The JSON comes from a log file (the file itself is not JSON) where each line represents an entry in the log, and each line also happens to be valid JSON on its own.
The two objects represent the start and finish of a job. These can be identified by the Event key (SparkListenerApplicationStart and SparkListenerApplicationEnd). They also both contain a Timestamp key. I want to subtract the start time from the end time to get the duration.
My thinking is to assign the timestamp from the object whose Event key is SparkListenerApplicationStart to one variable, assign the timestamp from the object whose Event key is SparkListenerApplicationEnd to another variable, and subtract one from the other. How can I do this? I know I can't simply do something like:
var startTime = if (obj.Event == "SparkListenerApplicationStart") {
  return obj.Timestamp;
}
I'm not sure if I understood, but if you are reading rows and want to get the Timestamp of each row, I would write it into a new collection:
const collection = []
lineReader.on('line', function (line) {
  var obj = JSON.parse(line);
  if (obj.Event == "SparkListenerApplicationStart" || obj.Event == "SparkListenerApplicationEnd") {
    // console.log('Line from file:', obj.Timestamp);
    collection.push(obj.Timestamp)
  }
});
console.log(collection);
Where collection could be a LocalStorage entry, a global variable, or something similar.
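If you'd rather not infer start and end from min/max, a small variation keys each timestamp by its event; this is a sketch assuming lineReader is Node's readline interface (which emits 'close' once the file has been fully read), and the times object name is just for illustration:

const times = {};
lineReader.on('line', function (line) {
  var obj = JSON.parse(line);
  // assign each timestamp to a named slot based on the Event key
  if (obj.Event == "SparkListenerApplicationStart") times.start = obj.Timestamp;
  if (obj.Event == "SparkListenerApplicationEnd") times.end = obj.Timestamp;
});
lineReader.on('close', function () {
  // both timestamps are available once the whole file has been read
  console.log('Duration:', times.end - times.start);
});

This avoids any ambiguity about which of the two collected values is the start.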
Additional info
With regard to my comment where I queried how to identify the start and end times, I ended up setting start as the smallest value and end as the largest. Here is my final code:
const collection = []
lineReader.on('line', function (line) {
  var obj = JSON.parse(line);
  if (obj.Event == "SparkListenerApplicationStart" || obj.Event == "SparkListenerApplicationEnd") {
    collection.push(obj.Timestamp);
    if (collection.length == 2) {
      startTime = Math.min.apply(null, collection);
      finishTime = Math.max.apply(null, collection);
      duration = finishTime - startTime;
      console.log(duration);
    }
  }
});

firebase update by batches does not work with large dataset

Using GCP Cloud Functions, I want to populate a feed for almost one million users whenever content is posted by a user with a high number of followers.
To do this, I am planning to split the Firebase update of the feed into a number of small batches. That's because I think that if I don't split the update, I might face the following issues:
i) keeping one million users' feed entries in memory will exceed the allocated maximum of 2 GB of memory;
ii) updating one million entries in one go will not work (how long would it take to update one million entries?).
However, the batch update only works for me when each batch inserts around 100 entries per update invocation. When I tried 1000 per batch, only the 1st batch was inserted. I wonder if this is due to:
i) a time-out? However, I don't see this error in the log;
ii) the object variable, userFeeds = {}, holding the batch being destroyed when the function goes out of scope?
Below is my code:
var admin = require('firebase-admin');
var spark = require('./spark');
var user = require('./user');
var Promise = require('promise');

var sparkRecord;

exports.newSpark = function (sparkID) {
  var getSparkPromise = spark.getSpark(sparkID);
  Promise.all([getSparkPromise]).then(function (result) {
    var userSpark = result[0];
    sparkRecord = userSpark;
    sparkRecord.sparkID = sparkID;
    // the batch update only works if the entries per batch are around 100 instead of 1000
    populateFeedsToFollowers(sparkRecord.uidFrom, 100, null, myCallback);
  });
};

var populateFeedsToFollowers = function (uid, fetchSize, startKey, callBack) {
  var fetchCount = 0;
  // retrieving only the follower list, by batch
  user.setFetchLimit(fetchSize);
  user.setStartKey(startKey);
  // I use this object variable to keep the entries of the current batch
  var userFeeds = {};
  user.getFollowersByBatch(uid).then(function (users) {
    if (users == null) {
      callBack(null, null, null);
      return;
    }
    // looping through the followers by batch size
    Object.keys(users).forEach(function (userKey) {
      fetchCount += 1;
      if (fetchCount > fetchSize) {
        // updating users' feeds by batch
        admin.database().ref().update(userFeeds);
        callBack(null, userKey);
        fetchCount = 0;
        return;
      } else {
        userFeeds['/userFeed/' + userKey + '/' + sparkRecord.sparkID] = {
          phase: sparkRecord.phase,
          postTimeIntervalSince1970: sparkRecord.postTimeIntervalSince1970
        };
      }
    }); // Object.keys(users).forEach
    if (fetchCount > 0) {
      admin.database().ref().update(userFeeds);
    }
  }); // user.getFollowersByBatch
};

var myCallback = function (err, nextKey) {
  if (err) throw err; // Check for the error and throw if it exists.
  if (nextKey != null) { // if there are remaining followers, keep populating
    populateFeedsToFollowers(sparkRecord.uidFrom, 100, nextKey, myCallback);
  }
};
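Two things worth noting in the code above: return inside Object.keys(users).forEach(...) only skips the current element, it does not stop the loop; and update() returns a Promise that is never awaited, so later pages can start before earlier batches have committed. A hedged sketch of a Promise-chained variant (assuming, as above, that user.getFollowersByBatch(uid) resolves an object of followers keyed by user id, and null once the list is exhausted):

// Sketch, not a drop-in fix: page through the followers and chain each
// multi-path update on its returned Promise, so one batch is fully committed
// before the next page is fetched, and memory only ever holds a single batch.
function populateAllFollowers(uid, fetchSize, startKey) {
  user.setFetchLimit(fetchSize);
  user.setStartKey(startKey);
  return user.getFollowersByBatch(uid).then(function (users) {
    if (users == null) return null; // done: no more followers
    var keys = Object.keys(users);
    var userFeeds = {};
    keys.forEach(function (userKey) {
      userFeeds['/userFeed/' + userKey + '/' + sparkRecord.sparkID] = {
        phase: sparkRecord.phase,
        postTimeIntervalSince1970: sparkRecord.postTimeIntervalSince1970
      };
    });
    // update() returns a Promise; waiting on it surfaces time-outs and write
    // errors in the Cloud Functions log instead of losing them silently.
    return admin.database().ref().update(userFeeds).then(function () {
      return populateAllFollowers(uid, fetchSize, keys[keys.length - 1]);
    });
  });
}

In a background Cloud Function, the handler would also need to return this chain, so the runtime doesn't terminate the instance while batches are still being written.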

Trigger UiApp-builder callback within another routine

Serge's solution here seemed like the way to go about this, but I'm a bit afraid that my circumstances may be too different...
I have a button where users can add a new set of rows with controls to a FlexTable, allowing them to insert a new member into a record set. After I designed and built the app to do this (and despite assurances to the contrary), a requirement was then added for users to be able to edit the record sets at a later date.
I've finally managed to get the data retrieved and correctly displayed on the Ui - for single member record sets. As a final stage, I am now attempting to extend this to accommodate record sets having more than one member. Obviously this requires determining how many members there are in the record set, and then adding the new rows/control group to the FlexTable, before loading the member into each control group.
So within this routine (depending on how many members there are), I may need to trigger the same callback that the user normally triggers with a button. However, the difference from Serge's fine example is that his code triggers the checkbox callback at the end of his routine, once all the Ui components are in place. My situation needs to do this on the fly, and so far I'm getting 'Unexpected error', which suggests to me that the Ui is not able to update with the added FlexTable controls before my code attempts to assign values to them.
Does anyone have any insight into this problem? Is my only recourse to completely re-build a fixed Ui and dispense with the dynamic rowset model?
Code follows -
1. event for adding controls:
var app = UiApp.getActiveApplication();
var oFlexGrid = app.getElementById('ExpenseDetail');
var oRowCount = app.getElementById('rowCount');
var oScriptDBId = app.getElementById('scriptDBId');
var iRows = parseInt(e.parameter.rowCount);
var sVId = e.parameter.scriptDBId;
var vGridDefs = loadArrayById(sVId); // retrieve upload definition array from ScriptDB
var vControlNames = [];
if (isOdd(iRows)) {
  var sColour = 'AliceBlue';
} else {
  var sColour = 'LavenderBlush';
}
oFlexGrid.insertRow(0);
oFlexGrid.insertRow(0);
oFlexGrid.insertRow(0);
oFlexGrid.insertRow(0);
oFlexGrid.setRowStyleAttributes(0, {'backgroundColor': sColour});
oFlexGrid.setRowStyleAttributes(1, {'backgroundColor': sColour});
oFlexGrid.setRowStyleAttributes(2, {'backgroundColor': sColour});
oFlexGrid.setRowStyleAttributes(3, {'backgroundColor': sColour});
var vExpenseDef = Get_NamedRangeValues_(CONST_SSKEY_APP, 'UIAPP_GridExpense');
iRows = iRows + 1;
vControlNames = CreateGrid_MixedSet_(iRows, vExpenseDef, oFlexGrid, app);
oRowCount.setText(iRows.toString()).setValue(iRows.toString());
// SOME INCONSEQUENTIAL CODE REMOVED HERE, LET ME KNOW IF YOU NEED IT
vGridDefs = vGridDefs.concat(vControlNames); // unify grid definition arrays
var sAryId = saveArray('expenseFieldDef', vGridDefs);
oScriptDBId.setText(sAryId).setValue(sAryId); // store array and save ScriptDB ID
if (e.parameter.source == 'btnExpenseAdd') {
  hideDialog(); // IGNORE CHECKBOX-DRIVEN CALLS
}
return app;
2. routine that calls the event
var app = UiApp.getActiveApplication();
var oPanelExpense = app.getElementById('mainPanelExpense');
var oPanelIncome = app.getElementById('mainPanelIncome');
var oPanelEdit = app.getElementById('mainPanelEdit');
var chkExpenseAdd = app.getElementById('chkExpenseAdd');
var bExpenseTrigger = e.parameter.chkExpenseAdd;
var sVoucherId = nnGenericFuncLib.cacheLoadObject(CACHE_EDIT_VOUCHERID);
var sVoucher = e.parameter.ListSearch1Vouchers;
var aryVoucherInfo = getVoucherEditDetail(sVoucherId);
// SAVE FOR RECORD MARKING CALLBACK
nnGenericFuncLib.cacheSaveObject(CACHE_EDIT_OLDRECORDS, JSON.stringify(aryVoucherInfo), CACHE_TIMEOUT);
sVoucher = nnGenericFuncLib.textPad(sVoucher, '0', 7);
var bExp = (sVoucher.substring(0, 2) == '03');
var oRowCount = app.getElementById('rowCount');
var iRowCount = parseInt(e.parameter.rowCount);
var sControlName = '';
var vControlVal = '';
var iExpIdx = 0;
var sControlType = '';
var oControl = '';
var vSummaryTotal = 0;
for (var iVal in aryVoucherInfo) {
  sControlName = aryVoucherInfo[iVal][2];
  vControlVal = aryVoucherInfo[iVal][3];
  switch (sControlName) {
    case 'ESUM60':
      vSummaryTotal = vControlVal;
      break;
    case 'EXUSRN':
      continue; // DON'T OVERWRITE CURRENT USERNAME
  }
  if (sControlName.indexOf('_') != -1) { // TEST FOR CONTROL SET MEMBER
    var aryControlSet = sControlName.split('_');
    if (parseInt(aryControlSet[1]) > iRowCount) { // *** TRIGGER THE EVENT ***
      Logger.log(bExpenseTrigger + ' - ' + !bExpenseTrigger);
      chkExpenseAdd.setValue(!bExpenseTrigger, true);
      iRowCount = iRowCount + 1;
    }
  }
  oControl = app.getElementById(sControlName);
  var vCache = cacheSaveReturn(CACHE_UIEX_LISTS, sControlName);
  if (typeof vCache == 'undefined') {
    oControl.setValue(vControlVal);
    oControl.setText(vControlVal);
    //controlSetTextBox(oControl,vControlVal);
    //controlSetDateBox(oControl,vControlVal);
  } else {
    if (!(nnGenericFuncLib.arrayIsReal(vCache))) {
      vCache = JSON.parse(vCache);
    }
    vCache = vCache.indexOf(vControlVal);
    if (vCache != -1) {
      oControl.setSelectedIndex(vCache);
    } else {
      controlSetListBox(oControl, vControlVal);
    }
  }
}
// SOME CODE REMOVED HERE
hideDialog();
return app;
Mogsdad to the rescue!
The answer (see above), for those at the back of the class (with me), is to simply pass the event parameter (e) to the event function, calling it directly from the main routine, thus keeping the chronology in step for when it returns the app to complete the routine. No need for the checkbox in this situation.
This only took me all day, but thanks Mogsdad! :)
Snippet below taken from 1/2 way down code sample 2 in the OP:
if (sControlName.indexOf('_') != -1) { // TEST FOR CONTROL SET MEMBER
  var aryControlSet = sControlName.split('_');
  if (parseInt(aryControlSet[1]) > iRowCount) {
    eventAddExpense(e); // THAT'S ALL IT TAKES
    iRowCount = iRowCount + 1;
  }
}

Linq-2-Sql code: Does this scale?

I'm just starting to use LINQ to SQL. I'm hoping someone can verify that LINQ to SQL defers execution until the foreach loop runs. Overall, can someone tell me whether this code scales? It's a simple get method with a few search parameters. Thanks!
Code:
public static IList<Content> GetContent(int contentTypeID, int feedID, DateTime? date, string text)
{
    List<Content> contentList = new List<Content>();
    using (DataContext db = new DataContext())
    {
        // -1 is the "match everything" sentinel for both filters
        var contentTypes = db.ytv_ContentTypes.Where(p => contentTypeID == -1 || p.ContentTypeID == contentTypeID);
        var feeds = db.ytv_Feeds.Where(p => feedID == -1 || p.FeedID == feedID);
        var targetFeeds = from f in feeds
                          join c in contentTypes on f.ContentTypeID equals c.ContentTypeID
                          select new { FeedID = f.FeedID, ContentType = f.ContentTypeID };
        var content = from t in targetFeeds
                      join c in db.ytv_Contents on t.FeedID equals c.FeedID
                      select new { Content = c, ContentTypeID = t.ContentType };
        if (!String.IsNullOrEmpty(text)) // only filter when search text was supplied
        {
            content = content.Where(p => p.Content.Name.Contains(text) || p.Content.Description.Contains(text));
        }
        if (date != null)
        {
            DateTime dateTemp = Convert.ToDateTime(date);
            content = content.Where(p => p.Content.StartDate <= dateTemp && p.Content.EndDate >= dateTemp);
        }
        // Execution has been deferred to this point, correct?
        foreach (var c in content)
        {
            Content item = new Content()
            {
                ContentID = c.Content.ContentID,
                Name = c.Content.Name,
                Description = c.Content.Description,
                StartDate = c.Content.StartDate,
                EndDate = c.Content.EndDate,
                ContentTypeID = c.ContentTypeID,
                FeedID = c.Content.FeedID,
                PreviewHtml = c.Content.PreviewHTML,
                SerializedCustomXMLProperties = c.Content.CustomProperties
            };
            contentList.Add(item);
        }
    }
    //TODO
    return contentList;
}
It depends on what you mean by 'scales'. (And yes to the first question: execution is deferred; the query is only translated to SQL and run when the foreach starts enumerating content.) On the DB side, this code has the potential to cause trouble if you are dealing with large tables; SQL Server's optimizer is really poor at handling the "or" operator in WHERE clause predicates and tends to fall back to table scans when there are several of them. I'd go for a couple of .Union calls instead, to avoid the possibility that SQL Server falls back to table scans just because of the ||'s (a sketch follows below).
If you can share more details about the underlying tables and the data in them, it will be easier to give a more detailed answer...
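As a hedged illustration of that rewrite (reusing the names from the question's code, not a tested drop-in), the content-type filter could become:

// Each Where now has a single simple predicate, so the optimizer can plan an
// index seek per branch and Union the two result sets, instead of facing an OR.
var contentTypes = db.ytv_ContentTypes
    .Where(p => contentTypeID == -1)                       // caller asked for all types
    .Union(db.ytv_ContentTypes.Where(p => p.ContentTypeID == contentTypeID));

The same pattern would apply to the feeds filter and to the two Contains(text) conditions.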