couchbase synchronous retrieval get API

I have run into a problem while dealing with the bucket.get() API of Couchbase. I need to check whether a set of DocIDs is already stored in the Couchbase server or not; if not, I need to do some XML parsing.
var policy_bucket = cluster.openBucket('ss_policy_db');

function someFun() {
    for (var i = 0; i < Policies.length; i++) {
        var Profile = Policies[i];
        var polID = Profile.get('id');
        var ret = retrievePolicyNew(polID);
        // do some action on the basis of ret.
    }
}

function retrievePolicyNew(id) {
    var result = policy_bucket.get(id.toString()); // TypeError: Second argument needs to be an object or callback.
    console.log(result);
    // return -1 if we find the ID.
}
The problem with bucket.get() is that it is asynchronous (I don't properly know how to make a synchronous call), and I don't want to handle a callback for every ID search. Is there any other way to search for the list of IDs in Couchbase? It would be great if someone could help me get a synchronous call API set up, as that would solve a lot of my other problems as well. It does not look very good to handle even a very small lookup through a callback.
I have stored very little data in the DB, so performance is not an issue here.

You should be able to use this in a synchronous manner. I think either the code sample you provide above is incomplete and somewhere you're calling CouchbaseBucket.async(), or something else is going on. In any case, the docs are pretty clear that get() takes a string and returns a JsonDocument.
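If it turns out you are on the Node.js SDK (which the "Second argument needs to be an object or callback" error suggests), get() there is callback-based, so the asynchronous route is hard to avoid. A minimal sketch of what that could look like, reusing the names from the question (error handling kept deliberately simple):
// Sketch for the Node.js SDK: get() takes the key and a callback.
function retrievePolicyNew(id, done) {
    policy_bucket.get(id.toString(), function (err, result) {
        if (err) {
            // err is set when the document is missing or the call fails;
            // fall back to the XML parsing path in that case.
            return done(null);
        }
        done(result.value); // the stored document
    });
}

// Usage inside someFun():
// retrievePolicyNew(polID, function (doc) { /* act on doc (or null) */ });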

Related

MySQL query error "Illegal hour value" causing loop and write issues in Google Apps Script

I'm not well-versed in either language, so please bear with me on this. I'm trying to pull a full table with 100 rows from a remote MySQL database into a Google Sheet. I've managed to sort out all the issues I've been having with this, but now I'm stuck. It seems the "illegal hour value" error I get from the SQL query during the loop is the main problem.
One of the columns in the MySQL database is "duration", and unfortunately, it can contain durations longer than 23:59:59. I only have access to call the procedure and cannot make any changes to the table. I get the
"illegal hour value"
error when a row hits a duration longer than 24 hours (e.g., 70:00:00). I tried to simplify things by using a try-catch to skip the error and continue writing on the Google Sheet, but then I get the
"number of columns in the data does not match the number of columns in the range.The data has X but the range has Y"
error in the final sheet.getRange() line.
I'm also unable to figure out how to pass multiple statements when executing the MySQL query. I tried to understand addBatch and a couple of other things, but it becomes too complicated for me. Maybe there's a simpler solution with the query, or maybe that's the only solution, because it just might work if I can also add a CONCAT query after the CALL query to convert the "duration" column to string before the data goes into the loop.
The code below has been updated to include the solution:
function getData3(query, sheetName) {
  // MySQL (MariaDB) connection and statements.
  var user = '';
  var userPwd = '';
  var url = 'jdbc:mysql://remote.server.example.com/database_name';
  var conn = Jdbc.getConnection(url, user, userPwd);
  var stmt2 = conn.createStatement();
  stmt2.setMaxRows(100);
  var rs2 = stmt2.executeQuery('CALL spdAdminGetPIREPS(api_key)');
  //Logger.log(rs2)

  // Function to convert raw binary data to string; used for durations >23:59:59.
  function byteArrToString(byteArr) {
    return Utilities.newBlob(byteArr).getDataAsString();
  }

  // Setting up spreadsheet, results array, and cell range.
  var doc = SpreadsheetApp.openById("id");
  var sheet = doc.getSheetByName("sheet_name");
  var results = [];
  var cell = doc.getRange('a1');
  var row = 0;

  // Loop to get column names.
  var cols = rs2.getMetaData();
  var colNames = [];
  for (var i = 1; i <= cols.getColumnCount(); i++) {
    //Logger.log(cols.getColumnName(i));
    colNames.push(cols.getColumnName(i));
  }
  results.push(colNames);

  // Loop to get row data; catch type errors due to durations >23:59:59 and fix them.
  var rowCount = 1;
  while (rs2.next()) {
    var curRow = rs2.getMetaData();
    var rowData = [];
    for (var i = 1; i <= curRow.getColumnCount(); i++) {
      try {
        rowData.push(rs2.getString(i));
      } catch (e) {
        var bytes = rs2.getBytes(i);
        rowData.push(byteArrToString(bytes)); // Pushes converted raw binary data as string using the function defined above.
        //Logger.log(JSON.stringify(rs2.getBytes(i))); // To see the raw binary data returned by getBytes() for durations >23:59:59 that throw an error.
        continue;
      }
    }
    results.push(rowData);
    rowCount++;
  }

  // Write data to sheet.
  sheet.getRange(1, 1, rowCount, cols.getColumnCount()).clearContent();
  sheet.getRange(1, 1, rowCount, cols.getColumnCount()).setValues(results);

  // Close result set, statement, and connection.
  //Logger.log(results);
  rs2.close();
  stmt2.close();
  conn.close();
}
I know the two separate statements and everything look ridiculous, but they seem to work, because I no longer get the "no database" error with the query. The simpler, single-line JDBC connector did not work for me, hence the current format for connecting to the MySQL (MariaDB) server.
If there are no durations in the table longer than 24 hours, the code works and successfully writes the entire table into the Google Sheet.
To sum up:
If I don't use try-catch, the loop stops with the error. If I use try-catch and continue, I get the number of columns mismatch error.
The end goal is to call the procedure and write the entire table onto a Google Sheet. Skipping problematic cells I can probably work with, but I'd definitely love to grab all of the data. I might be missing something trivial here, but any direction or help would be appreciated. Thanks in advance!
UPDATE:
I found this question answered here, but cannot figure out how to utilize it in my case. I think that if I can pass multiple queries, I should be able to send a CONCAT query following the CALL query to convert the "duration" column from Datetime (I believe) to string.
UPDATE 2:
#TheMaster's solution for try-catch helped with skipping the problematic cell and continuing to write the rest. I'd love to find a way to convert all durations (the entire column, or the ones >23:59:59) to string to capture all the data.
UPDATE 3:
Based on #TheMaster's suggestions, using getInt() instead of getString() partially works by returning the hours but not the minutes (e.g., it returns 34 if the duration is 34:22:00). I need a way to convert when getString() is used.
UPDATE 4 (edited):
Using getBytes(), the values returned are:
[51,52,58,52,56,58,48,48] for 34:48:00
[51,55,58,48,48,58,48,48] for 37:00:00
[56,55,58,48,48,58,48,48] for 87:00:00
[49,53,49,58,51,53,58,48,48] for 151:35:00
Which means:
[48,49,50,51,52,53,54,55,56,57,58] corresponds to [0,1,2,3,4,5,6,7,8,9,:]. How can I incorporate this conversion?
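(Those values are just ASCII character codes, 48 being '0' and 58 being ':', so one possible sketch using plain JavaScript rather than an Apps Script service is to convert the byte array back into the time string directly:)
// Sketch: interpret each byte as an ASCII character code.
function byteArrToTimeString(byteArr) {
  return String.fromCharCode.apply(null, byteArr);
}
// byteArrToTimeString([51, 52, 58, 52, 56, 58, 48, 48]) returns "34:48:00"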
Using getLong(), the values returned are:
34 for 34:48:00, converted to duration -> 816:00
37 for 37:00:00, converted to duration -> 888:00
87 for 87:00:00, converted to duration -> 2088:00
UPDATE FINAL:
#TheMaster's modified answer solved the problem by getting the raw binary data and converting it to a string for durations >23:59:59. The code above has been updated to reflect all modifications; it works as written above.
You are currently using the MySQL connector, and even though a TIME value can range from '-838:59:59.999999' to '838:59:59.999999', the MySQL driver throws an exception when getting a TIME value that is not in the 0-24 hour range.
That can make some sense when using ResultSet.getTime(i) (though not really), but it doesn't when using ResultSet.getString(i).
This can be disabled using noDatetimeStringSync=true, so change the URL to jdbc:mysql://remote.server.example.com?noDatetimeStringSync=true
Disclaimer: I am one of the maintainers of the MariaDB Java driver, but I would recommend using the MariaDB driver with a MariaDB server (and the MySQL one with a MySQL server). You would have avoided this issue :)
By the way, you can set the database directly in the URL, jdbc:mysql://remote.server.example.com/database?noDatetimeStringSync=true, avoiding an extra query.
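Put together with the connection code from the question, the change would be roughly this sketch (same credential variables as above; only the URL changes):
// Database name and driver flag both go in the URL, so no extra
// statement is needed to select the database.
var url = 'jdbc:mysql://remote.server.example.com/database_name?noDatetimeStringSync=true';
var conn = Jdbc.getConnection(url, user, userPwd);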
If you want to skip failed getString(), it should be easy:
try {
  rowData.push(rs2.getString(i));
} catch (e) {
  rowData.push(""); // Keeps array straight
  continue;
}
If you want to convert the time column to string, you need to use CAST(time AS char) or CONCAT('', time) in the CREATE PROCEDURE statement used to create spdAdminGetPIREPS.
Alternatively, you can get the raw binary data using resultSet.getBytes() and change it back to a string through a blob using Utilities:
function byteArrToString(byteArr) {
  return Utilities.newBlob(byteArr).getDataAsString();
}
Use it as
var bytes = rs2.getBytes(i);
rowData.push(byteArrToString(bytes));
If you could get a blob directly, it would be easier to use getAppsScriptBlob():
var jdbcBlob = rs2.getBlob(i);
var blob = jdbcBlob.getAppsScriptBlob();
rowData.push(blob.getDataAsString());
jdbcBlob.free();

Calendar.getEvents() returned array order

I've searched online and I've looked at the Class Calendar API reference, found here:
https://developers.google.com/apps-script/reference/calendar/calendar
I notice from running a script I've created that the elements of CalendarEvent[] returned by getEvents(startTime,endTime) seem to be in chronological order. Is this always true?
Essentially, am I guaranteed that the following code
events[i].getStartTime().getTime() <= events[i+1].getStartTime().getTime()
will always be true for 0 <= i < (events.length - 1)?
I'm interested in this because I'm creating a script, which merges two (or more) distinct calendars into one and also returns all time slots which are either unallocated (i.e. no event scheduled) or overlap more than one event. Knowing that the elements within a CalendarEvent[] are chronologically ordered makes this task significantly easier (and computationally less expensive).
TIA for any assistance,
S
From my experience, yes, it has always been in this order.
Though I checked the docs and they don't mention anything about it.
So to be safe, you can either use the advanced Calendar service to sort by date (https://developers.google.com/google-apps/calendar/v3/reference/events/list)
or use vanilla JavaScript to sort them.
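For the vanilla JavaScript route, a single comparator is enough; a sketch (getStartTime() returns a Date, so subtracting the millisecond values gives the right ordering):
// Sort the CalendarEvent[] in place, earliest start time first.
events.sort(function (a, b) {
  return a.getStartTime().getTime() - b.getStartTime().getTime();
});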
My take on this is no, the array doesn't guarantee it will be ordered. The documentation only says:
An event will be returned if it starts during the time range, ends during the time range, or encompasses the time range. If no time zone is specified, the time values are interpreted in the context of the script's time zone, which may be different from the calendar's time zone.
Since the ordering isn't documented, relying on it may cause problems with how you handle the data. It's still best for you to implement a sort.
I was having this problem as well. Instead of going with the overkill Calendar Advanced Service, I wrote a simple sorter for arrays of CalendarEvent objects.
// copy this to the bottom of your script, then call it on your array of CalendarEvent objects that you got from the CalendarApp
//
// ex:
// var sortedEvents = sortArrayOfCalendarEventsChronologically(events);
// or
// events = sortArrayOfCalendarEventsChronologically(events);
function sortArrayOfCalendarEventsChronologically(array) {
  if (!array || array.length == 0) {
    return [];
  }
  var temp = [];
  for (var i = 0; i < array.length; i++) {
    var startTimeMilli = array[i].getStartTime().getTime();
    // Find the first already-placed event that starts later and insert before it;
    // otherwise append at the end.
    var insertAt = temp.length;
    for (var j = 0; j < temp.length; j++) {
      if (startTimeMilli < temp[j].getStartTime().getTime()) {
        insertAt = j;
        break;
      }
    }
    temp.splice(insertAt, 0, array[i]);
  }
  return temp;
}
https://gist.github.com/xd1936/0d2b2222c068e4cbbbfc3a84edf8f696

error - "Method Range.getValue is heavily used by the script"

I posted this question previously but did not tag it properly (which is likely why I did not get an answer), so I thought I would give it another shot, as I haven't been able to find the answer in the meantime.
The script below is giving me the message in the title. I have another function which uses the same getValue method, and it runs fine. What can I change in my script to avoid this issue?
function trashOldFiles() {
  var ffile = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("CtrlSht").getRange("B3:B3").getValue();
  var files = DriveApp.getFilesByName(ffile);
  while (files.hasNext()) {
    var file = files.next();
    var latestfile = DriveApp.getFileById(listLatestFile());
    if (file.getId() ==! latestfile) {
      file.setTrashed(true);
    }
  }
};
Is it an error or an execution hint (the light bulb in the menu)?
Are you using that method in another part of your code, probably in listLatestFile()?
I got the same execution hint by calling getRange().getValue() in listLatestFile() (inside a loop),
and the hint always said the problem was the call
var ffile = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("CtrlSht").getRange("B3:B3").getValue();
in the function trashOldFiles(), even when the actual problem was in another function.
Check whether you are calling it somewhere else in your code, probably inside a loop.
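For illustration, this is the shape of code that hint usually points at (a hypothetical sketch, not the asker's actual listLatestFile()):
var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("CtrlSht");

// Flagged pattern: one Spreadsheet service call per iteration.
for (var i = 1; i <= 100; i++) {
  var value = sheet.getRange(i, 1).getValue();
  // ... use value ...
}

// Preferred: read the whole block once, then use plain array access.
var values = sheet.getRange(1, 1, 100, 1).getValues();
for (var j = 0; j < values.length; j++) {
  var value = values[j][0];
  // ... use value ...
}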
OK, so Gerardo's comment about loops started to get me thinking again. I checked some other posts about how to re-use a variable and decided to put the listLatestFile() value in my spreadsheet -
var id = result[0][1];
SpreadsheetApp.getActiveSpreadsheet().getSheetByName("CtrlSht").getRange("B5:B5").setValue(id);
//Logger.log(id);
return id;
and then retrieved the latest file ID from the spreadsheet to use as a comparison value for the trashOldFiles() function which worked a treat.
function trashOldFiles() {
  var tfile = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("CtrlSht").getRange("B3:B3").getValue();
  var tfiles = DriveApp.getFilesByName(tfile);
  var lfile = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("CtrlSht").getRange("B5:B5").getValue();
  while (tfiles.hasNext()) {
    var tfile = tfiles.next();
    if (tfile.getId() !== lfile) {
      tfile.setTrashed(true);
    }
  }
};
Not sure if that approach was best practice but it did work for me. If anyone has suggestions for achieving this in a more elegant way, I'm all ears.

Why is my caching inconsistent?

I am trying to set up caching for a spreadsheet custom function, but the results seem to be inconsistent/unexpected. Sometimes I get the cached results, sometimes it refreshes the data. I've set the timeout to 10 seconds, and when I refresh within 10 seconds, sometimes it grabs new data, sometimes it caches. Even after waiting more than 10 seconds since the last call, sometimes I get the cached results. Why is there so much inconsistency in the spreadsheet function (or am I just doing something wrong)? When I call the function directly within the actual script, it seems much more consistent, but I still sometimes get inconsistencies/unexpected results.
function getStackOverflow() {
  var cache = CacheService.getPublicCache();
  var cached = cache.get("stackoverflow");
  if (cached != null) {
    Logger.log('this is cached');
    return 'this is cached version';
  }
  // Fetch the data and create an object.
  var result = UrlFetchApp.fetch('http://api.stackoverflow.com/1.1/tags/google-apps-script/top-answerers/all-time');
  var json = Utilities.jsonParse(result.getContentText()).top_users;
  var rows = [], data;
  for (var i = 0; i < json.length; i++) {
    data = json[i].user;
    rows.push(data.display_name);
  }
  Logger.log("This is a refresh");
  cache.put("stackoverflow", JSON.stringify(rows), 10);
  return rows;
}
You can't use custom functions like that. It's documented.
Custom functions must be deterministic: they always produce the same output given the same input (in your case none, since you are passing no parameters).
The spreadsheet will remember the values for each input set, basically acting as a second layer of cache that you have no control over.
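One way to work with that behavior is to make the input explicit, so the function is deterministic for a given value and both the sheet's own result cache and CacheService key off the same thing. A sketch, with a hypothetical tag parameter added (CacheService.getScriptCache() is the current replacement for getPublicCache()):
// Hypothetical sketch: call it from the sheet as =GETTOPANSWERERS("google-apps-script").
function GETTOPANSWERERS(tag) {
  var cache = CacheService.getScriptCache();
  var cached = cache.get(tag);
  if (cached != null) {
    return JSON.parse(cached); // served from CacheService within the expiry window
  }
  var result = UrlFetchApp.fetch('http://api.stackoverflow.com/1.1/tags/' + tag + '/top-answerers/all-time');
  var json = JSON.parse(result.getContentText()).top_users;
  var rows = [];
  for (var i = 0; i < json.length; i++) {
    rows.push(json[i].user.display_name);
  }
  cache.put(tag, JSON.stringify(rows), 10); // 10-second expiry, as in the question
  return rows;
}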

Is the delete function deprecated in the HTML5 indexed database API

Hi, I am trying to delete a record in an indexed database by passing its id, but the function is not working properly, and even Visual Studio IntelliSense is not showing any such function. Has the objectStore.delete() function of the indexed database API been deprecated, or am I doing something wrong in calling it?
Following is the code snippet:
var result = objectStore.delete(key);
result.onsuccess = function () {
  alert('Success');
};
The delete-by-key function is working fine in all browsers: Chrome, FF, and IE10. Here is the sample code:
var connection = indexedDB.open(dbName);
connection.onsuccess = function (e) {
  var database = e.target.result;
  var transaction = database.transaction(storeName, 'readwrite');
  var objectStore = transaction.objectStore(storeName);
  var request = objectStore.delete(parseInt(key));
  request.onsuccess = function (event) {
    database.close();
  };
};
Almost everything in IndexedDB works the same way, and your question belies a misunderstanding of this model: everything happens in a transaction.
Almost nothing is synchronous in the IndexedDB API except opening the database. So you'll never see anything like database.delete() or database.set() when dealing with records.
To delete a record, as with getting or setting, you start by creating a new transaction on the database. You then use that transaction (like in Deni's example) to invoke the method for your change.
The transaction then "disappears" when it goes out of scope of all functions and your change is then committed to the database. It's on this transaction's reference to the database (not the database itself) that you hook event listeners such as success and error callbacks.
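A compressed sketch of that model (database name, store name, and key below are placeholders): the listeners hang off the transaction and its requests, and the transaction commits on its own once nothing more is queued on it.
var recordKey = 42; // placeholder key
var openRequest = indexedDB.open('exampleDB');
openRequest.onsuccess = function (e) {
  var db = e.target.result;
  var tx = db.transaction('exampleStore', 'readwrite');

  tx.oncomplete = function () { console.log('change committed'); };
  tx.onerror = function () { console.log('change failed'); };

  var request = tx.objectStore('exampleStore').delete(recordKey);
  request.onsuccess = function () { console.log('delete request succeeded'); };
};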