Couchbase: MetaData.Metrics always contain default values?

I am testing Couchbase, and I am making a very simple query:
public async Task SelectRandomJobs(int nbr)
{
    IBucket bucket = await cluster.BucketAsync("myBucket");
    IScope scope = bucket.Scope("myScope");
    IQueryResult<JObject> result = await scope.QueryAsync<JObject>("SELECT * FROM myCollection WHERE Id = {id}");

    // The Metrics.* properties have default values
    Console.WriteLine(result.MetaData.Metrics.ElaspedTime);
}
Here are the values I get (screenshot omitted): ElaspedTime and ExecutionTime are null, and the remaining metrics are zero.
I was expecting ElaspedTime (misspelled!) and ExecutionTime not to be null. There is an AnalyticsQueryAsync method, but that did not work for me (error 24045).
Why are those values null?
-- UPDATE --
I followed the advice of Eric, but I got the same results (screenshot omitted).

You will need to enable metrics for this query. I have provided a code sample below with two possible ways of doing this. It is covered in our docs, but it could perhaps be easier to find and have better examples; this is something I will investigate further to see if we can make it clearer in future editions of the docs.
I have used the travel-sample dataset and tried to set the code up similarly to your example, so that it will be easy for you to implement.
As for why the times are null by default and the other fields are zero, that seems to just be a design decision for this class.
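Since the time fields can legitimately be null, a null-conditional read is a safe way to log them. A minimal sketch (note that ElaspedTime is the SDK's actual, misspelled property name):
// Sketch: null-conditional access avoids a NullReferenceException when
// the metrics are absent; it prints an empty value instead
Console.WriteLine($"Elapsed: {result.MetaData?.Metrics?.ElaspedTime}");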
About the misspelling, we have filed a ticket to get the spelling corrected. Thank you for pointing that out.
using System;
using System.Threading.Tasks;
using Couchbase;
using Couchbase.Query;

namespace _3x_simple
{
    class Program
    {
        static async Task Main(string[] args)
        {
            var cluster = await Cluster.ConnectAsync("couchbase://localhost", "Administrator", "password");
            var bucket = await cluster.BucketAsync("travel-sample");
            var myScope = bucket.Scope("inventory");

            // scope path
            var options = new QueryOptions().Metrics(true);
            var queryResult = await myScope.QueryAsync<dynamic>("SELECT * FROM airline LIMIT 10;", options);

            // cluster path
            //var queryResult = await cluster.QueryAsync<dynamic>("SELECT * FROM `travel-sample`.inventory.airline LIMIT 10;", options => options.Metrics(true));

            Console.WriteLine($"Execution time before read: {queryResult.MetaData.Metrics.ExecutionTime}");

            await foreach (var row in queryResult)
            {
                Console.WriteLine(row);
            }

            Console.WriteLine($"Execution time after read: {queryResult.MetaData.Metrics.ExecutionTime}");
            Console.WriteLine("Press any key to exit...");
            Console.Read();
        }
    }
}
You won't see the execution time until after the results have been read. You are seeing default values for those fields because you are reading the metrics before the async result stream has been consumed.
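In other words: enable metrics, enumerate the rows, and only then read the metrics. A minimal sketch of that ordering, assuming the same myScope and travel-sample data as above:
var options = new QueryOptions().Metrics(true);
var result = await myScope.QueryAsync<dynamic>("SELECT * FROM airline LIMIT 10;", options);

await foreach (var row in result)
{
    // consume the rows; the metrics arrive at the tail of the response stream
}

Console.WriteLine(result.MetaData.Metrics.ExecutionTime); // populated now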

Related

How to work with data returned by mysql select query in nodejs

I am working on a Discord bot written in Node.js; the bot uses a MySQL database server to store information. The problem I have run into is that I cannot seem to retrieve the data from the database in a neat way; everything I try runs into some issue or another.
The select query returns objects called RowDataPacket. When googling, every single result references this solution: Object.values(JSON.parse(JSON.stringify(rows)))
It suggests that I should get the values back, but I don't; I get an array back that is as hard to work with as the RowDataPacket object.
This is a snippet of my code:
const kenneledMemberRolesTableName = 'kenneled_member_roles'
const kenneledMemberKey = 'kenneled_member'
const kenneledMemberRoleKey = 'kenneled_member_role_id'
const kenneledStaffMemberKey = 'kenneled_staff_member'
const kenneledDateKey = 'kenneled_date'
const kenneledReturnableRoleKey = 'kenneled_role_can_be_returned'
async function findKenneledMemberRoles(kenneledMemberId) {
    let sql = `SELECT CAST(${kenneledMemberRoleKey} AS Char) FROM ${kenneledMemberRolesTableName} WHERE ${kenneledMemberKey} = ${kenneledMemberId}`
    let rows = await databaseAccessor.runQuery(sql)
    let result = JSON.parse(JSON.stringify(rows)).map(row => {
        return row.kenneled_member_role_id
    })
    return result
}
This seemed to work, until I had to do a type conversion on the value. Now dot notation requires me to reference row.CAST(kenneled_member_role_id AS Char), which cannot work, and I have found no other way to retrieve the data than through dot notation. I swear there must be a better way to work with MySQL RowDataPackets, but the solution eludes me.
I figured out something that works, though I still feel it is an inelegant solution. I would love to hear from others whether I am misunderstanding how to work with MySQL in Node.js, or whether this is just a consequence of the library:
let result = JSON.parse(JSON.stringify(rows)).map(row => {
    return row[`CAST(${kenneledMemberRoleKey} AS CHAR)`];
})
So I access the value through bracket notation instead of dot notation. This works, and at least lets me store part of the expression, or the whole thing, in a constant, hiding the ugliness. (Aliasing the expression in the SQL itself, e.g. CAST(... AS CHAR) AS some_name, would also give the column a plain name that dot notation can reach.)

RDF4J SPARQL query to JSON

I am trying to move data from a SPARQL endpoint to a JSONObject. Using RDF4J.
RDF4J documentation does not address this directly (some info about using endpoints, less about converting to JSON, and nothing where these two cases meet up).
So far I have:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);
Map<String, String> headers = new HashMap<String, String>();
headers.put("Accept", "SPARQL/JSON");
repo.setAdditionalHttpHeaders(headers);

try (RepositoryConnection conn = repo.getConnection())
{
    String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
    GraphQuery query = conn.prepareGraphQuery(queryString);
    debug("Mark 2");
    try (GraphQueryResult result = query.evaluate())
This fails because "Server responded with an unsupported file format: application/sparql-results+json".
I figured a SPARQLGraphQuery should take the place of GraphQuery, but RepositoryConnection does not have a relevant prepare statement.
If I exchange
try (RepositoryConnection conn = repo.getConnection())
with
try (SPARQLConnection conn = (SPARQLConnection)repo.getConnection())
I run into the problem that SPARQLConnection does not generate a SPARQLGraphQuery. The closest I can get is:
SPARQLGraphQuery query = (SPARQLGraphQuery)conn.prepareQuery(QueryLanguage.SPARQL, queryString);
which gives a runtime error as these types cannot be cast to each other.
I do not know how to proceed from here. Any help or advice is much appreciated. Thank you.
this fails because "Server responded with an unsupported file format: application/sparql-results+json"
In RDF4J, SPARQL SELECT queries are tuple queries, so named because each result is a set of bindings, which are tuples of the form (name, value). In contrast, CONSTRUCT (and DESCRIBE) queries are graph queries, so called because their result is a graph, that is, a collection of RDF statements.
Furthermore, setting additional headers for the response format, as you have done here, is not necessary (except in rare circumstances); the RDF4J client handles this for you automatically, based on the registered set of parsers.
So, in short, simplify your code as follows:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);

try (RepositoryConnection conn = repo.getConnection()) {
    String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
    TupleQuery query = conn.prepareTupleQuery(queryString);
    debug("Mark 2");
    try (TupleQueryResult result = query.evaluate()) {
        ...
    }
}
If you want to write the result of the query in JSON format, you could use a TupleQueryResultHandler, for example the SPARQLResultsJSONWriter, as follows:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);

try (RepositoryConnection conn = repo.getConnection()) {
    String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
    TupleQuery query = conn.prepareTupleQuery(queryString);
    query.evaluate(new SPARQLResultsJSONWriter(System.out));
}
This will write the result of the query (in this example to standard output) using the SPARQL Query Results JSON format. If you have a non-standard format in mind, you could of course also create your own TupleQueryResultHandler implementation.
For more details on the various ways in which you can process the result (including iterating, streaming, adding to a List, or just directly sending to a result handler), see the documentation on querying a repository. As an aside, the javadoc on the RDF4J APIs is pretty extensive too, so if your Java editing environment has support for displaying that, I'd advise you to make use of it.

Concurrent Read/Write MySQL EF Core

Using EF Core 2.2.6 and Pomelo.EntityFrameworkCore.MySql 2.2.6 (with MySqlConnector 0.59.2). I have a model for UserData:
public class UserData
{
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public ulong ID { get; private set; }

    [Required]
    public Dictionary<string, InventoryItem> Inventory { get; set; }

    public UserData()
    {
        Inventory = new Dictionary<string, InventoryItem>();
    }
}
I have a REST method that can be called that will add items to the user inventory:
using (var transaction = context.Database.BeginTransaction())
{
    UserData data = await context.UserData.FindAsync(userId);
    // there is code here to detect duplicate entries/etc, but I've removed it for brevity
    foreach (var item in items) data.Inventory.Add(item.ItemId, item);
    context.UserData.Update(data);
    await context.SaveChangesAsync();
    transaction.Commit();
}
If two or more calls to this method are made with the same user id, then I get concurrent accesses (despite the transaction). This causes the data to sometimes be incorrect. For example, if the inventory is empty and two calls are made to add items simultaneously (item A and item B), sometimes the database will only contain either A or B, and not both. From logging it appears that it is possible for EF to read from the database while the other read/write is still occurring, so the code has an incorrect state of the inventory when it tries to write back to the db. So I tried marking the isolation level as serializable.
using (var transaction = context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
Now I sometimes see an exception:
MySql.Data.MySqlClient.MySqlException (0x80004005): Deadlock found when trying to get lock; try restarting transaction
I don't understand how this code could deadlock... Anyway, I tried to proceed by wrapping this whole thing in a try/catch and retrying:
public static async Task<ResponseError> AddUserItem(Controller controller, MyContext context, ulong userId, List<InventoryItem> items, int retry = 5)
{
    ResponseError result = null;
    try
    {
        using (var transaction = context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
        {
            UserData data = await context.UserData.FindAsync(userId);
            // there is code here to detect duplicate entries/etc, but I've removed it for brevity
            foreach (var item in items) data.Inventory.Add(item.ItemId, item);
            context.UserData.Update(data);
            await context.SaveChangesAsync();
            transaction.Commit();
        }
    }
    catch (Exception e)
    {
        if (retry > 0)
        {
            await Task.Delay(SafeRandomGenerator(10, 500));
            return await AddUserItem(controller, context, userId, items, retry - 1);
        }
        else
        {
            // store exception and return error
        }
    }
    return result;
}
And now I am back to the data being sometimes correct, sometimes not. So I think the deadlock is another problem, but this is the only method accessing this data. I'm at a loss. Is there a simple way to read from the database (locking the row in the process) and then write back (releasing the lock on write) using EF Core? I've looked at using concurrency tokens, but this seems overkill for what appears (on the surface, to me) to be a trivial task.
I added logging for the MySQL connector as well as the ASP.NET server and can see the following failure:
fail: Microsoft.EntityFrameworkCore.Database.Command[20102]
=> RequestId:0HLUD39EILP3R:00000001 RequestPath:/client/AddUserItem => Server.Controllers.ClientController.AddUserItem (ServerSoftware)
Failed executing DbCommand (78ms) [Parameters=[#p1='?' (DbType = UInt64), #p0='?' (Size = 4000)], CommandType='Text', CommandTimeout='30']
UPDATE `UserData` SET `Inventory` = #p0
WHERE `ID` = #p1;
SELECT ROW_COUNT();
A total hack is to just delay the arrival of the queries by a bit. This works because the client is most likely to generate these calls on load. Normally back-to-back calls aren't expected, so spreading them out in time by delaying on arrival works. However, I'd rather find a correct approach, since this just makes it less likely to be an issue:
ResponseError result = null;
await Task.Delay(SafeRandomGenerator(100, 500));
using (var transaction = context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
// etc
This isn't a good answer, because it isn't what I wanted to do, but I'll post it here as it did solve my problem. My problem was that I was trying to read the database row, modify it in asp.net, and then write it back, all within a single transaction and while avoiding deadlocks. The backing field is JSON type, and MySQL provides some JSON functions to help modify that JSON directly in the database. This required me to write SQL statements directly instead of using EF, but it did work.
The first trick was to ensure I could create the row if it didn't exist, without requiring a transaction and lock.
INSERT INTO UserData VALUES ({0},'{{}}','{{}}') ON DUPLICATE KEY UPDATE ID = {0};
I used JSON_REMOVE to delete keys from the JSON field:
UPDATE UserData as S set S.Inventory = JSON_REMOVE(S.Inventory,{1}) WHERE S.ID = {0};
and JSON_SET to add/modify entries:
UPDATE UserData as S set S.Inventory = JSON_SET(S.Inventory,{1},CAST({2} as JSON)) WHERE S.ID = {0};
Note: if you're using EF Core and want to call this using FromSql, you need to return the entity as part of your SQL statement. So you'll need to add something like this to each SQL statement:
SELECT * from UserData where ID = {0} LIMIT 1;
Here is a full working example as an extension method:
public static async Task<UserData> FindOrCreateAsync(this IQueryable<UserData> table, ulong userId)
{
    string sql = "INSERT INTO UserData VALUES ({0},'{{}}','{{}}') ON DUPLICATE KEY UPDATE ID = {0}; SELECT * FROM UserData WHERE ID={0} LIMIT 1;";
    return await table.FromSql(sql, userId).SingleOrDefaultAsync();
}

public static async Task<UserData> JsonRemoveInventory(this DbSet<UserData> table, ulong userId, string key)
{
    if (!key.StartsWith("$.")) key = $"$.\"{key}\"";
    string sql = "UPDATE UserData as S set S.Inventory = JSON_REMOVE(S.Inventory,{1}) WHERE S.ID = {0}; SELECT * from UserData where ID = {0} LIMIT 1;";
    return await table.AsNoTracking().FromSql(sql, userId, key).SingleOrDefaultAsync();
}
Usage:
var data = await context.UserData.FindOrCreateAsync(userId);
await context.UserData.JsonRemoveInventory(userId, itemId);
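For reference, the "read with a row lock, then write back" flow the question asks about can also be sketched with the same FromSql mechanism, by putting a locking clause on the initial SELECT. A minimal sketch, assuming MySQL's SELECT ... FOR UPDATE and the UserData/MyContext types from the question:
using (var transaction = context.Database.BeginTransaction())
{
    // FOR UPDATE takes a row lock on the read, so a concurrent call blocks
    // here instead of reading stale inventory
    UserData data = await context.UserData
        .FromSql("SELECT * FROM UserData WHERE ID = {0} FOR UPDATE", userId)
        .SingleOrDefaultAsync();

    foreach (var item in items) data.Inventory.Add(item.ItemId, item);
    context.UserData.Update(data);
    await context.SaveChangesAsync();
    transaction.Commit(); // committing releases the row lock
}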

endsWith with array in ES6

I am learning ES6 and trying endsWith, which is new to me. Before this I used includes in some of my scripts, and I thought the mechanics would be the same. I picked a casual task: I have a list of domains and want to filter all "cn" domains. The logic is:
let ends = [".cn", ".tw", ".jp"];
for (let i = 0; i < arrayOfDomains.length; i++) {
    const host = /https?:\/\/(www\.)?([a-zA-Z0-9-]+\.)+[a-zA-Z0-9]+/.exec(arrayOfDomains[i])[0];
    console.log(host.endsWith(ends));
}
and the result of console.log is false every time. Is there a way to use an array with endsWith?
No, there isn't a way to use an array in endsWith. One option is to declare another function that takes the ends array and the host variable as parameters and does the check.
You can try something like this:
let ends = [".cs", ".com"];
let host = "www.page.com";

let hostEndsWith = (host, ends) => {
    let value = ends.some(element => {
        return host.endsWith(element);
    });
    console.log(value);
};

hostEndsWith(host, ends);
You can copy that code in JSFiddle to test it.
Here is the information about the endsWith function: endsWith information.
I hope this helps you!

JPA caching database results, need to "un-cache"

I'm seeing "caching" behavior with database (MySQL 5) records. I can't seem to see the new data application side w/o logging in/out or restarting the app server (Glassfish 3). This is the only place in the application where db records are "stuck." I'm guessing I'm missing something with JPA persistence.
I've attempted changing db records by hand, there's still some sort of caching mechanism in place "helping" me.
This is the editFile() method that saves the new data.
After I fire this, I see the data updated in the db as expected.
this.file is the class-level property that the view uses to show file data. It shows old data. I attempt to move the db data back into it after I've fired my UPDATE queries, using the filesList setter: this.setFilesList(newFiles);
When the application reads it back out, though, GlassFish seems to respond to requests for this data with old data.
public void editFile(Map<String, String> params) {
    // update file1 record
    File1 thisFile = new File1();
    thisFile.setFileId(Integer.parseInt(params.get("reload-form:fileID")));
    thisFile.setTitle(params.get("reload-form:input-small-name"));
    thisFile.setTitle_friendly(params.get("reload-form:input-small-title-friendly"));
    this.filesFacade.updateFileRecord(thisFile);

    // update files_to_categories record
    int thisFileKeywordID = Integer.parseInt(params.get("reload-form:select0"));
    this.filesToCategoriesFacade.updateFilesToCategoriesRecords(thisFile.getFileId(), thisFileKeywordID);

    this.file = this.filesFacade.findFileByID(thisFile.getFileId());
    List<File1> newFiles = (List<File1>) this.filesFacade.findAllByRange(low, high);
    this.setFilesList(newFiles);
}
Facades
My Facades are firing native SQL to update each of those DB tables. When I check the DB after they fire, the data is going in; that part is happening as I expect and hope.
File1
public int updateFileRecord(File1 file) {
    String title = file.getTitle();
    String title_titleFriendly = file.getTitle_friendly();
    int fileID = file.getFileId();
    int result = 0;

    Query q = this.em.createNativeQuery("UPDATE file1 set title = ?1, title_friendly = ?2 where file_id = ?3");
    q.setParameter(1, title);
    q.setParameter(2, title_titleFriendly);
    q.setParameter(3, fileID);
    result = q.executeUpdate();
    return result;
}
FilesToCategories
public int updateFilesToCategoriesRecords(int fileId, int keywordID) {
    Query q = this.em.createNativeQuery("UPDATE files_to_categories set categories = ?1 where file1 = ?2");
    q.setParameter(1, keywordID);
    q.setParameter(2, fileId);
    return q.executeUpdate();
}
How do I un-cache?
Thanks again for looking.
I don't think caching is the problem; I think it's transactions.
em.getTransaction().begin();
Query q = this.em.createNativeQuery("UPDATE file1 set title = ?1, title_friendly = ?2 where file_id = ?3");
q.setParameter(1, title);
q.setParameter(2, title_titleFriendly);
q.setParameter(3, fileID);
result = q.executeUpdate();
em.getTransaction().commit();
I recommend surrounding your writes to the DB with transactions to get them persisted. Unless you commit, requests may return results without the changes.
OK, JTA does the transaction management.
Why are you doing this, when you are using JPA?
public int updateFileRecord(File1 file) {
    String title = file.getTitle();
    String title_titleFriendly = file.getTitle_friendly();
    int fileID = file.getFileId();
    int result = 0;

    Query q = this.em.createNativeQuery("UPDATE file1 set title = ?1, title_friendly = ?2 where file_id = ?3");
    q.setParameter(1, title);
    q.setParameter(2, title_titleFriendly);
    q.setParameter(3, fileID);
    result = q.executeUpdate();
    return result;
}
This should work and update the internal state that JPA maintains:
public void updateFileRecord(File1 file) {
    em.persist(file);
}
@daniel & @Tiny got me going on this one, thanks again guys.
I wanted to point out that I used the .merge() method from the EntityManager class.
It's important to note that, for .merge() to UPDATE the record instead of INSERTing a new one, the object you submit to .merge() must include all properties corresponding to the fields in the database table (that your DAO knows about), or you will INSERT new database records.
public void updateFileRecord(File1 file) {
    em.merge(file);
}