JPA caching database results, need to "un-cache" - mysql

I'm seeing "caching" behavior with database (MySQL 5) records. I can't seem to see the new data application side w/o logging in/out or restarting the app server (Glassfish 3). This is the only place in the application where db records are "stuck." I'm guessing I'm missing something with JPA persistence.
I've attempted changing db records by hand, there's still some sort of caching mechanism in place "helping" me.
This is editFile() method that saves new data.
After I fire this, I see the data updated in the db as expected.
this.file is the class level property that the view uses to show file data. It shows old data. I attempt to move db data back in to it after I've fired my UPDATE queries with the filesList setter: this.setFilesList(newFiles);
When the application reads it back out though, GlassFish seems to resond with requests for this data w/ old data.
public void editFile(Map<String, String> params) {
    // update file1 record
    File1 thisFile = new File1();
    thisFile.setFileId(Integer.parseInt(params.get("reload-form:fileID")));
    thisFile.setTitle(params.get("reload-form:input-small-name"));
    thisFile.setTitle_friendly(params.get("reload-form:input-small-title-friendly"));
    this.filesFacade.updateFileRecord(thisFile);

    // update files_to_categories record
    int thisFileKeywordID = Integer.parseInt(params.get("reload-form:select0"));
    this.filesToCategoriesFacade.updateFilesToCategoriesRecords(thisFile.getFileId(), thisFileKeywordID);

    this.file = this.filesFacade.findFileByID(thisFile.getFileId());
    List<File1> newFiles = (List<File1>) this.filesFacade.findAllByRange(low, high);
    this.setFilesList(newFiles);
}
Facades
My Facades are firing native SQL to update each of those DB tables. When I check the DB after they fire, the data is going in, that part is happening as I expect and hope.
File1
public int updateFileRecord(File1 file) {
    String title = file.getTitle();
    String title_titleFriendly = file.getTitle_friendly();
    int fileID = file.getFileId();
    int result = 0;
    Query q = this.em.createNativeQuery("UPDATE file1 set title = ?1, title_friendly = ?2 where file_id = ?3");
    q.setParameter(1, title);
    q.setParameter(2, title_titleFriendly);
    q.setParameter(3, fileID);
    result = q.executeUpdate();
    return result;
}
FilesToCategories
public int updateFilesToCategoriesRecords(int fileId, int keywordID) {
    Query q = this.em.createNativeQuery("UPDATE files_to_categories set categories = ?1 where file1 = ?2");
    q.setParameter(1, keywordID);
    q.setParameter(2, fileId);
    return q.executeUpdate();
}
How do I un-cache?
Thanks again for looking.

I don't think caching is the problem; I think it's transactions.
em.getTransaction().begin();
Query q = this.em.createNativeQuery("UPDATE file1 set title = ?1, title_friendly = ?2 where file_id = ?3");
q.setParameter(1, title);
q.setParameter(2, title_titleFriendly);
q.setParameter(3, fileID);
result = q.executeUpdate();
em.getTransaction().commit();
I recommend surrounding your writes to the DB with transactions to get them persisted. Unless you commit, requests may return results without the changes.
OK, JTA does the transaction management.
Why are you doing this when you are using JPA?
public int updateFileRecord(File1 file) {
    String title = file.getTitle();
    String title_titleFriendly = file.getTitle_friendly();
    int fileID = file.getFileId();
    int result = 0;
    Query q = this.em.createNativeQuery("UPDATE file1 set title = ?1, title_friendly = ?2 where file_id = ?3");
    q.setParameter(1, title);
    q.setParameter(2, title_titleFriendly);
    q.setParameter(3, fileID);
    result = q.executeUpdate();
    return result;
}
This should work and update the internal state that JPA maintains:
public void updateFileRecord(File1 file) {
    em.persist(file);
}
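For reference, with container-managed JTA you typically don't need the manual begin()/commit() at all: the commit happens when the facade method returns. A minimal sketch, assuming the facade is a stateless session bean (the class and method below are illustrative, not from the original post):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;

@Stateless // each business method runs in a container-managed JTA transaction by default
public class FilesFacade {

    @PersistenceContext // container-injected, JTA-managed EntityManager
    private EntityManager em;

    public int updateFileRecord(int fileID, String title, String titleFriendly) {
        Query q = em.createNativeQuery(
                "UPDATE file1 set title = ?1, title_friendly = ?2 where file_id = ?3");
        q.setParameter(1, title);
        q.setParameter(2, titleFriendly);
        q.setParameter(3, fileID);
        // committed automatically when this method returns and the JTA transaction completes
        return q.executeUpdate();
    }
}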

#daniel & #Tiny got me going on this one, thanks again guys.
I wanted to point out that I used the .merge() method out of the Entity Manager class.
It's important to note that for .merge() to UPDATE the record instead of INSERTing a new one; that the object you're submitting to .merge() must include all properties respective of the fields in the database table (that your DAO knows about) or you will INSERT new database records.
public void updateFileRecord(File1 file) {
    em.merge(file);
}
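To make that concrete, here is a minimal sketch of the idea, assuming File1 is the mapped entity and filesFacade.findFileByID() returns the current row (the helper method below is illustrative, not part of the original code):

public void editFile(File1 changes) {
    // load the existing row so every mapped field is populated,
    // not just the ones the edit form sent back
    File1 existing = this.filesFacade.findFileByID(changes.getFileId());
    existing.setTitle(changes.getTitle());
    existing.setTitle_friendly(changes.getTitle_friendly());
    // merging a fully populated object issues an UPDATE keyed on file_id
    // instead of INSERTing a new record or blanking untouched columns
    this.filesFacade.updateFileRecord(existing);
}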

Related

Couchbase: MetaData.Metrics always contain default values?

I am testing Couchbase, and I am making a very simple query:
public async Task SelectRandomJobs(int nbr)
{
    IBucket bucket = await cluster.BucketAsync("myBucket");
    IScope scope = bucket.Scope("myScope");
    IQueryResult<JObject> result = await scope.QueryAsync<JObject>("SELECT * FROM myCollection WHERE Id = {id}");
    // The Metrics.* has default values
    Console.WriteLine(result.MetaData.Metrics.ElaspedTime);
}
Here are the values:
I was expecting ElaspedTime (misspelled!) and ExecutionTime to be not null. There is an AnalyticsQueryAsync method, but that did not work for me (error 24045).
Why are those values null?
-- UPDATE --
I followed the advice of Eric, but I got the same results:
So you will need to enable Metrics for this query. I have provided a code sample below with two possible ways of doing this. It is covered in our docs, but it could maybe be easier to find or have better examples; this is something I will investigate further to see if we can make it clearer in future editions of the docs.
I have used the travel-sample dataset and tried to set the code up similar to your example so that it will be easy to implement for you.
As for why the times are null by default and the other fields are zero, that seems to just be a design decision for this class.
About the misspelling, we have filed a ticket to get the spelling corrected. Thank you for pointing that out.
using System;
using System.Threading.Tasks;
using Couchbase;
using Couchbase.Query;

namespace _3x_simple
{
    class Program
    {
        static async Task Main(string[] args)
        {
            var cluster = await Cluster.ConnectAsync("couchbase://localhost", "Administrator", "password");
            var bucket = await cluster.BucketAsync("travel-sample");
            var myScope = bucket.Scope("inventory");

            // scope path
            var options = new QueryOptions().Metrics(true);
            var queryResult = await myScope.QueryAsync<dynamic>("SELECT * FROM airline LIMIT 10;", options);

            // cluster path
            //var queryResult = await cluster.QueryAsync<dynamic>("SELECT * FROM `travel-sample`.inventory.airline LIMIT 10;", options => options.Metrics(true));

            Console.WriteLine($"Execution time before read: {queryResult.MetaData.Metrics.ExecutionTime}");

            await foreach (var row in queryResult)
            {
                Console.WriteLine(row);
            }

            Console.WriteLine($"Execution time after read: {queryResult.MetaData.Metrics.ExecutionTime}");
            Console.WriteLine("Press any key to exit...");
            Console.Read();
        }
    }
}
You won't see the execution time until after the results are read. The reason you are seeing default values for those fields is because you are trying to read that information at the wrong time/place considering your async operation.

Concurrent Read/Write MySQL EF Core

Using EF Core 2.2.6 and Pomelo.EntityFrameworkCore.MySql 2.2.6 (with MySqlConnector 0.59.2). I have a model for UserData:
public class UserData
{
    [DatabaseGenerated(DatabaseGeneratedOption.None)]
    public ulong ID { get; private set; }

    [Required]
    public Dictionary<string, InventoryItem> Inventory { get; set; }

    public UserData()
    {
        Data = new Dictionary<string, string>();
    }
}
I have a REST method that can be called that will add items to the user inventory:
using (var transaction = context.Database.BeginTransaction())
{
    UserData data = await context.UserData.FindAsync(userId);
    // there is code here to detect duplicate entries/etc, but I've removed it for brevity
    foreach (var item in items) data.Inventory.Add(item.ItemId, item);
    context.UserData.Update(data);
    await context.SaveChangesAsync();
    transaction.Commit();
}
If two or more calls to this method are made with the same user id then I get concurrent accesses (despite the transaction). This causes the data to sometimes be incorrect. For example, if the inventory is empty and then two calls are made to add items simultaneously (item A and item B), sometimes the database will only contain either A or B, and not both. From logging it appears that it is possible for EF to read from the database while the other read/write is still occurring, causing the code to have the incorrect state of the inventory for when it tries to write back to the db. So I tried marking the isolation level as serializable.
using (var transaction = context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
Now I sometimes see an exception:
MySql.Data.MySqlClient.MySqlException (0x80004005): Deadlock found when trying to get lock; try restarting transaction
I don't understand how this code could deadlock... Anyways, I tried to proceed by wrapping this whole thing in a try/catch, and retry:
public static async Task<ResponseError> AddUserItem(Controller controller, MyContext context, ulong userId, List<InventoryItem> items, int retry = 5)
{
    ResponseError result = null;
    try
    {
        using (var transaction = context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
        {
            UserData data = await context.UserData.FindAsync(userId);
            // there is code here to detect duplicate entries/etc, but I've removed it for brevity
            foreach (var item in items) data.Inventory.Add(item.ItemId, item);
            context.UserData.Update(data);
            await context.SaveChangesAsync();
            transaction.Commit();
        }
    }
    catch (Exception e)
    {
        if (retry > 0)
        {
            await Task.Delay(SafeRandomGenerator(10, 500));
            return await AddUserItem(controller, context, userId, items, retry--);
        }
        else
        {
            // store exception and return error
        }
    }
    return result;
}
And now I am back to the data being sometimes correct, sometimes not. So I think the deadlock is another problem, but this is the only method accessing this data. So, I'm at a loss. Is there a simple way to read from the database (locking the row in the process) and then write back (releasing the lock on write) using EF Core? I've looked at using concurrency tokens, but this seems overkill for what appears (on the surface to me) to be a trivial task.
I added logging for the MySQL connector as well as the ASP.NET server and can see the following failure:
fail: Microsoft.EntityFrameworkCore.Database.Command[20102]
=> RequestId:0HLUD39EILP3R:00000001 RequestPath:/client/AddUserItem => Server.Controllers.ClientController.AddUserItem (ServerSoftware)
Failed executing DbCommand (78ms) [Parameters=[#p1='?' (DbType = UInt64), #p0='?' (Size = 4000)], CommandType='Text', CommandTimeout='30']
UPDATE `UserData` SET `Inventory` = #p0
WHERE `ID` = #p1;
SELECT ROW_COUNT();
A total hack is to just delay the arrival of the queries by a bit. This works because the client is most likely to generate these calls on load. Normally back-to-back calls aren't expected, so spreading them out in time by delaying on arrival works. However, I'd rather find a correct approach, since this just makes it less likely to be an issue:
ResponseError result = null;
await Task.Delay(SafeRandomGenerator(100, 500));
using (var transaction = context.Database.BeginTransaction(System.Data.IsolationLevel.Serializable))
// etc
This isn't a good answer, because it isn't what I wanted to do, but I'll post it here as it did solve my problem. My problem was that I was trying to read the database row, modify it in asp.net, and then write it back, all within a single transaction and while avoiding deadlocks. The backing field is JSON type, and MySQL provides some JSON functions to help modify that JSON directly in the database. This required me to write SQL statements directly instead of using EF, but it did work.
The first trick was to ensure I could create the row if it didn't exist, without requiring a transaction and lock.
INSERT INTO UserData VALUES ({0},'{{}}','{{}}') ON DUPLICATE KEY UPDATE ID = {0};
I used JSON_REMOVE to delete keys from the JSON field:
UPDATE UserData as S set S.Inventory = JSON_REMOVE(S.Inventory,{1}) WHERE S.ID = {0};
and JSON_SET to add/modify entries:
UPDATE UserData as S set S.Inventory = JSON_SET(S.Inventory,{1},CAST({2} as JSON)) WHERE S.ID = {0};
Note, if you're using EF Core and want to call this using FromSql then you need to return the entity as part of your SQL statement. So you'll need to add something like this to each SQL statement:
SELECT * from UserData where ID = {0} LIMIT 1;
Here is a full working example as an extension method:
public static async Task<UserData> FindOrCreateAsync(this IQueryable<UserData> table, ulong userId)
{
    string sql = "INSERT INTO UserData VALUES ({0},'{{}}','{{}}') ON DUPLICATE KEY UPDATE ID = {0}; SELECT * FROM UserData WHERE ID={0} LIMIT 1;";
    return await table.FromSql(sql, userId).SingleOrDefaultAsync();
}

public static async Task<UserData> JsonRemoveInventory(this DbSet<UserData> table, ulong userId, string key)
{
    if (!key.StartsWith("$.")) key = $"$.\"{key}\"";
    string sql = "UPDATE UserData as S set S.Inventory = JSON_REMOVE(S.Inventory,{1}) WHERE S.ID = {0}; SELECT * from UserData where ID = {0} LIMIT 1;";
    return await table.AsNoTracking().FromSql(sql, userId, key).SingleOrDefaultAsync();
}
Usage:
var data = await context.UserData.FindOrCreateAsync(userId);
await context.UserData.JsonRemoveInventory(userId, itemId);

Load 3D model in Unity using Resource folder and Mysql

I want to load a 3D model using the Resources folder. I created an SQL database to store the address. In this case I stored the file "deer-3ds" in the folder "Models" and also saved this information in a table named "modeladdress" in the database.
So please help me to correct my code. I know that it's 100% wrong, but I don't know how to fix it. Thank you.
using UnityEngine;
using System.Collections;
using System;
using System.Data;
using Mono.Data.Sqlite;

public class addobject : MonoBehaviour {

    // Use this for initialization
    void Start () {
        //GameObject deer=Instantiate(Resources.Load("deer-3d.bak",typeof(GameObject)))as GameObject;
        // GameObject instance = Instantiate(Resources.Load("Models/deer-3ds", typeof(GameObject))) as GameObject;
        string conn = "URI=file:" + Application.dataPath + "/modeladdress.s3db"; //Path to database.
        IDbConnection dbconn;
        dbconn = (IDbConnection) new SqliteConnection(conn);
        dbconn.Open(); //Open connection to the database.
        IDbCommand dbcmd = dbconn.CreateCommand();
        string sqlQuery = "SELECT ordinary,foldername, filename " + "FROM modeladdress";
        dbcmd.CommandText = sqlQuery;
        IDataReader reader = dbcmd.ExecuteReader();
        while (reader.Read ()) {
            int ordinary = reader.GetInt32 (0);
            string foldername = reader.GetString (1);
            string filename = reader.GetString (2);
            string path = foldername + "/" + filename;
            //Debug.Log( "value= "+value+" name ="+name+" random ="+ rand);
            GameObject instance = Instantiate(Resources.Load(path, typeof(GameObject))) as GameObject;
            instance.SetActive (true);
        }
        reader.Close();
        reader = null;
        dbcmd.Dispose();
        dbcmd = null;
        dbconn.Close();
        dbconn = null;
    }

    // Update is called once per frame
    void Update () {
        // GameObject instance = Instantiate(Resources.Load("Models/deer-3ds", typeof(GameObject))) as GameObject;
        // instance.SetActive (true);
    }
}
First of all, you are using SQLite as your database management system, not MySQL. Second, the way you have written your query,
string sqlQuery = "SELECT ordinary,foldername, filename " + "FROM modeladdress";
will return the ordinary, foldername, and filename for every model. You need to use a WHERE clause to specify precisely which model you want to use. Thus, you need some way to know which model you want to query from the database before you actually execute the query, and in that case, why even query a database? You're going to have to store some unique identifier anyway, so a database solves nothing.
Now concerning the actual code you have written, it appears to be correct (i.e. it should return what you want). The problem must be that your table is empty, the values being returned are incorrect, or the object is being instantiated in an incorrect location and you therefore think it's not working. If you want a more concrete answer, you'll have to comment on this answer with the specific problem you are facing (i.e. what specifically is "wrong"?).

Weird behaviour encountered using java.sql.TimeStamp and a mysql database

The weird behavior is that a java.sql.Timestamp that I create using the System.currentTimeMillis() method is stored in my MySQL database as 1970-01-01 01:00:00.
The two timestamps I am creating are to mark the beginning and end of a monitoring task I am trying to perform. What follows are excerpts from the code where the behavior occurs.
final long startTime = System.currentTimeMillis();
while (numberOfTimeStepsPassed < numTimeStep) {
    /*
     * Code in here
     */
}
final long endTime = System.currentTimeMillis();

return mysqlConnection.insertDataInformation(matrixOfRawData, name, Long.toString(startTime),
        Long.toString(endTime), Integer.toString(numTimeStep),
        Integer.toString(matrixOfRawData[0].length), owner,
        type);
And here is the code used for inserting the time stamps and other data into the MySQL database
public String insertDataInformation(final double [][] matrix,
        final String ... params) {
    getConnection(lookUpName);
    String id = "";
    PreparedStatement dataInformationInsert = null;
    try {
        dataInformationInsert =
                databaseConnection.prepareStatement(DATA_INFORMATION_PREPARED_STATEMENT);
        id = DatabaseUtils.createUniqueId();
        int stepsMonitored = Integer.parseInt(params[STEPS_MONITORED]);
        int numberOfMarkets = Integer.parseInt(params[NUMBER_OF_MARKETS]);
        dataInformationInsert.setNString(ID_INDEX, id);
        dataInformationInsert.setNString(NAME_INDEX, params[0]);
        dataInformationInsert.setTimestamp(START_INDEX, new Timestamp(Long.parseLong(params[START_INDEX])));
        dataInformationInsert.setTimestamp(END_INDEX, new Timestamp(Long.parseLong(params[END_INDEX])));
        dataInformationInsert.setInt(STEPS_INDEX, stepsMonitored);
        dataInformationInsert.setInt(MARKETS_INDEX, numberOfMarkets);
        dataInformationInsert.setNString(OWNER_INDEX, params[OWNER]);
        dataInformationInsert.setNString(TYPE_INDEX, params[TYPE]);
        dataInformationInsert.executeUpdate();
        insertRawMatrix(matrix, id, Integer.toString(stepsMonitored), Integer.toString(numberOfMarkets));
    } catch (SQLException sqple) {
        // TODO Auto-generated catch block
        sqple.printStackTrace();
        System.out.println(sqple.getSQLState());
    } finally {
        close(dataInformationInsert);
        dataInformationInsert = null;
        close(databaseConnection);
    }
    return id;
}
The important lines of code are :
dataInformationInsert.setTimestamp(START_INDEX, new Timestamp(Long.parseLong(params[START_INDEX])));
dataInformationInsert.setTimestamp(END_INDEX, new Timestamp(Long.parseLong(params[END_INDEX])));
The JavaDocs for Timestamp ( http://docs.oracle.com/javase/1.5.0/docs/api/java/sql/Timestamp.html ) say that it takes the time in milliseconds since 1 January 1970, and a simple print test confirms this.
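(A minimal version of that print test, for reference; it prints the current date and time rather than 1970-01-01:)

import java.sql.Timestamp;

public class TimestampPrintTest {
    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // prints the current date/time, not 1970-01-01
        System.out.println(new Timestamp(now));
    }
}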
What I am looking for is:
A reason for this behavior when trying to store timestamps in a MySQL database through java.sql.Timestamp?
Any solutions to this behavior?
Any possible alternatives?
Any possible improvements?
EDIT:
Been asked to include what START_INDEX and END_INDEX are:
private static final int END_INDEX = 4;
private static final int START_INDEX = 3;
Apologies for not putting them in the original post.
Okay, look at your call:
insertDataInformation(matrixOfRawData, name, Long.toString(startTime),
Long.toString(endTime), Integer.toString(numTimeStep),
Integer.toString(matrixOfRawData[0].length), owner,
type);
So params will have values:
0: name
1: start time
2: end time
3: numTimeStep
4: matrixOfRawData[0].length
5: owner
6: type
Then you're doing:
dataInformationInsert.setTimestamp(START_INDEX,
new Timestamp(Long.parseLong(params[START_INDEX])));
... where START_INDEX is 3.
So you're using the value corresponding to numTimeStep as the value for the timestamp... I suspect you don't want to do that.
I would strongly advise you to create a simple object type (possibly a nested type in the same class) to let you pass these parameters in a strongly typed, simple to get right fashion. The string conversion and the access by index are both unwarranted, and can easily give rise to errors.
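A minimal sketch of that suggestion (the class and field names below are illustrative, not from the original code):

// Simple value object replacing the String... params array, so each
// value is passed and read by name instead of by positional index.
public final class MonitoringRunInfo {
    private final String name;
    private final long startTimeMillis;
    private final long endTimeMillis;
    private final int numTimeSteps;
    private final int numberOfMarkets;
    private final String owner;
    private final String type;

    public MonitoringRunInfo(String name, long startTimeMillis, long endTimeMillis,
            int numTimeSteps, int numberOfMarkets, String owner, String type) {
        this.name = name;
        this.startTimeMillis = startTimeMillis;
        this.endTimeMillis = endTimeMillis;
        this.numTimeSteps = numTimeSteps;
        this.numberOfMarkets = numberOfMarkets;
        this.owner = owner;
        this.type = type;
    }

    public String getName() { return name; }
    public long getStartTimeMillis() { return startTimeMillis; }
    public long getEndTimeMillis() { return endTimeMillis; }
    public int getNumTimeSteps() { return numTimeSteps; }
    public int getNumberOfMarkets() { return numberOfMarkets; }
    public String getOwner() { return owner; }
    public String getType() { return type; }
}

The insert call then reads unambiguously, e.g. dataInformationInsert.setTimestamp(START_INDEX, new Timestamp(info.getStartTimeMillis())); and there is no string round-trip or index arithmetic to get wrong.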

SqlDependency and table update do not refresh DataContext

I'm having trouble with the implementation of SqlDependency in my project.
I'm using SqlDependency in a WCF service. The WCF service then holds all results from all tables in an in-memory cache in order to get a huge speed gain. Everything seems to be working fine, except when I'm doing a table row update. If I add or delete a row in my table, the DataContext is refreshed and the cache is invalidated without problems. But when it comes to a table row update, nothing happens: the cache is not invalidated, and when I look at the contents of the DataContext in debug mode, no changes seem to be there.
Here's the code I'm using (note that I'm using the System.Runtime.Caching ObjectCache):
public static List<T> LinqCache<T>(this Table<T> query) where T : class
{
    ObjectCache cache = MemoryCache.Default;
    string tableName =
        query.Context.Mapping.GetTable(typeof(T)).TableName;
    List<T> result = cache[tableName] as List<T>;
    if (result == null)
    {
        using (SqlConnection conn =
            new SqlConnection(query.Context.Connection.ConnectionString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                query.Context.GetCommand(query).CommandText, conn);
            cmd.Notification = null;
            cmd.NotificationAutoEnlist = true;
            SqlDependency dependency = new SqlDependency(cmd);
            SqlChangeMonitor sqlMonitor =
                new SqlChangeMonitor(dependency);
            CacheItemPolicy policy = new CacheItemPolicy();
            policy.ChangeMonitors.Add(sqlMonitor);
            cmd.ExecuteNonQuery();
            result = query.ToList();
            cache.Set(tableName, result, policy);
        }
    }
    return result;
}
I created an extension method, so all I have to do is query any table like this:
List<MyTable> list = context.MyTable.LinqCache();
My DataContext is opened in the Global.asax Application_OnStart and stored in cache, so I can use it whenever I want in my WCF service. At that point I'm also starting SqlDependency with
SqlDependency.Start(
ConfigurationManager.ConnectionStrings[myConnectionString].ConnectionString);
So, is that a limitation of SqlDependency, or am I doing something wrong/missing something in the process?
I think the problem is that although you do all the work in setting up the command object you then do:
cmd.ExecuteNonQuery();
result = query.ToList();
which is going to use your SqlCommand and throw away the results; LINQ to SQL will then generate its own command internally via query.ToList(). Thankfully you can ask LINQ to SQL to execute your own command and translate the results for you, so try replacing those two lines with:
result = query.Context.Translate<T>(cmd.ExecuteReader()).ToList();