Call function when class is deleted/garbage collected

I have a class that opens a sqlite database in its constructor. Is there a way to have it close the database when it is destroyed (whether that be due to the programmer destroying it or being destroyed via Lua's garbage collection)?
The code so far:
local MyClass = {}
local myClass_mt = {__index = MyClass, __gc = __del}

function DBHandler.__init()
    -- constructor
    local newMyClass = {
        db = sqlite3.open(dbPath)
    }
    return setmetatable(newMyClass, myClass_mt)
end

local function __del()
    self.db.close()
end

For your particular case, according to its source code, LuaSQLite already closes its handle when it is destroyed:
/* close method */
static int db_close(lua_State *L) {
    sdb *db = lsqlite_checkdb(L, 1);
    lua_pushnumber(L, cleanupdb(L, db));
    return 1;
}

/* __gc method */
static int db_gc(lua_State *L) {
    sdb *db = lsqlite_getdb(L, 1);
    if (db->db != NULL) /* ignore closed databases */
        cleanupdb(L, db);
    return 0;
}
But IMO, freeing such resources on GC should only be a backup solution: your object could be collected after quite some time, so the SQLite handle would stay open all that while. Some languages provide mechanisms to release unmanaged resources as early as possible, such as Python's with or C#'s using.
Unfortunately Lua does not provide such a feature, so you should call close yourself whenever possible, for instance by adding a close method to your class as well.
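For example, a minimal sketch of such an explicit close method, assuming the MyClass table and the db field from your constructor:

-- Explicit close so the SQLite handle is released as soon as you are done,
-- instead of waiting for the garbage collector.
function MyClass:close()
    if self.db then
        self.db:close()  -- note the colon: close is a method on the handle
        self.db = nil    -- makes a second close (or a later __gc) a no-op
    end
end

Callers then call obj:close() as soon as they are finished with the object, and the __gc finalizer remains only a safety net.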

You don't mention which Lua version you use, but __gc won't work on tables in Lua 5.1. Something like this may work (it uses the newproxy hack for Lua 5.1):
m = newMyClass
if _VERSION >= "Lua 5.2" then
    setmetatable(m, {__gc = m.__del})
else
    -- keep sentinel alive until 'm' is garbage collected
    m.sentinel = newproxy(true)
    getmetatable(m.sentinel).__gc = m.__del -- careful with `self` in this case
end
For Lua 5.2 this is no different from the code you have; you don't say what exactly is not working, but Egor's suggestion about self.db:close is worth checking...

Look for "finalizer" in the Lua manual.

Related

How to call Stored Procedures and defined functions in MySQL with Slick 3.0

I have defined in my db something like this
CREATE FUNCTION fun_totalInvestorsFor(issuer varchar(30)) RETURNS INT
NOT DETERMINISTIC
BEGIN
RETURN (SELECT COUNT(DISTINCT LOYAL3_SHARED_HOLDER_ID)
FROM stocks_x_hldr
WHERE STOCK_TICKER_SIMBOL = issuer AND
QUANT_PURCHASES > QUANT_SALES);
END;
Now I have received an answer from Stefan Zeiger (Slick lead) redirecting me here: User defined functions in Slick
I have tried (having the following object in scope):
lazy val db = Database.forURL("jdbc:mysql://localhost:3306/mydb",
driver = "com.mysql.jdbc.Driver", user = "dev", password = "root")
val totalInvestorsFor = SimpleFunction.unary[String, Int]("fun_totalInvestorsFor")
totalInvestorsFor("APPLE") should be (23)
Result: Rep(slick.lifted.SimpleFunction$$anon$2#13fd2ccd fun_totalInvestorsFor, false) was not equal to 23
I have also tried while having an application.conf in src/main/resources like this:
tsql = {
  driver = "slick.driver.MySQLDriver$"
  db {
    connectionPool = disabled
    driver = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://localhost/mydb"
  }
}
Then in my code, with @StaticDatabaseConfig("file:src/main/resources/application.conf#tsql"):
tsql"select fun_totalInvestorsFor('APPLE')" should be (23)
Result: Error:(24, 9) Cannot load @StaticDatabaseConfig("file:src/main/resources/application.conf#tsql"): No configuration setting found for key 'tsql'
tsql"select fun_totalInvestorsFor('APPLE')" should be (23)
^
I am also planning to call stored procedures that return one tuple of three values, via sql"call myProc(v1)".as[(Int, Int, Int)].
Any ideas?
EDIT: When making
sql""""SELECT COUNT(DISTINCT LOYAL3_SHARED_HOLDER_ID)
FROM stocks_x_hldr
WHERE STOCK_TICKER_SIMBOL = issuer AND
QUANT_PURCHASES > QUANT_SALES""".as[(Int)]
the result is a SqlStreamingAction[Vector[Int], Int, Effect] instead of the DBIO[Int] that (from what I infer) the documentation suggests.
I've been running into exactly the same problem for the past week. After some extensive research (see my post here; I'll be adding a complete description of what I've done as a solution), I decided it can't be done in Slick... not strictly speaking.
But, I'm resistant to adding pure JDBC or Anorm into our solution stack, so I did find an "acceptable" fix, IMHO.
The solution is to get the session object from Slick, and then use common JDBC to manage the stored function / stored procedure calls. At that point you can use any third party library that makes it easier... although in my case I wrote my own function to set up the call and return a result set.
val db = Database.forDataSource(DB.getDataSource)
var response: Option[GPInviteResponse] = None
db.withSession { implicit session =>
  // Set up your call here (see my other post for a more detailed answer
  // with an example); procedure is e.g. "{?=call myfunction(?,?,?,?)}"
  val cs = session.conn.prepareCall(procedure.toString)
  // Register the return value and set your IN parameters here,
  // e.g. cs.registerOutParameter(1, java.sql.Types.INTEGER)
  //      cs.setLong(index, value)
  cs.execute()
  val rc = cs.getInt(1)
  response = rc match {
    // Package up the response to the caller
  }
}
db.close()
I know that's pretty terse, but as I said, see the other thread for a more complete posting. I'm putting it together right now and will post the answer shortly.

how NamedParameterJdbcTemplate.update really works with Spring and MySQL

Ok, I've probably dug up the entire Google land and still couldn't find anything that could possibly answer my question.
I have my little foo method that does some deleting like this:
private void foo()
{
    jdbcNamedParameterTemplate.update(sqlString, params);   //1
    jdbcNamedParameterTemplate.update(sqlString2, params2); //2
}
sqlString and sqlString2 are just delete statements like "DELETE FROM FooBar".
So when I get to the second call to update, do I have any guarantee that whatever operation the first one invokes in the database has already finished?
If you run those two in one session, and without multithreading, then yes: the first operation will have finished in the database before the second update runs.
But if they are not in the same session, you can check a version field to see whether the object has already been changed:
int oldVersion = foo.getVersion();
session.load(foo, foo.getKey()); // load the current state
if (oldVersion != foo.getVersion()) { ... } // if true, the object has been changed

Mahout 0.7 Failed to get recommendation with a large data using MysqlJdbcDataModel

I am using Mahout to build an item-based CF recommendation engine.
I created a MahoutHelper class which has a constructor:
public MahoutHelper(String serverName, String user, String password,
                    String DatabaseName, String tableName) {
    source = new MysqlConnectionPoolDataSource();
    source.setServerName(serverName);
    source.setUser(user);
    source.setPassword(password);
    source.setDatabaseName(DatabaseName);
    source.setCachePreparedStatements(true);
    source.setCachePrepStmts(true);
    source.setCacheResultSetMetadata(true);
    source.setAlwaysSendSetIsolation(true);
    source.setElideSetAutoCommits(true);

    DBmodel = new MySQLJDBCDataModel(source, tableName, "userId", "itemId",
            "value", null);
    similarity = new TanimotoCoefficientSimilarity(DBmodel);
}
and the recommend method is:
public List<RecommendedItem> recommendation() throws TasteException {
    Recommender recommender = null;
    recommender = new GenericItemBasedRecommender(DBmodel, similarity);
    List<RecommendedItem> recommendations = null;
    recommendations = recommender.recommend(userId, maxNum);
    System.out.println("query completed");
    return recommendations;
}
It uses the data source to build the data model, but the problem is that when MySQL holds only a little data (fewer than 100 rows) the program works fine for me, while when the scale grows to over 1,000,000 rows the program gets stuck computing the recommendation and never moves forward. I have no idea how this happens. By the way, I used the same data to build a FileDataModel with a .dat file, and it takes only 2~3 seconds to complete the analysis. I am confused.
Using the database directly will only work for tiny data sets, like maybe a hundred thousand data points. Beyond that, the overhead of such a data-intensive computation means it will never run quickly; a single recommendation can issue thousands of SQL queries or more.
Instead you must load and re-load into memory. You can still pull from the database; look at ReloadFromJDBCDataModel as a wrapper.
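For example, a minimal sketch of that approach, reusing the table and column names from the question (the class and field names here are illustrative, not from the original code):

import javax.sql.DataSource;
import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.impl.model.jdbc.MySQLJDBCDataModel;
import org.apache.mahout.cf.taste.impl.model.jdbc.ReloadFromJDBCDataModel;
import org.apache.mahout.cf.taste.impl.similarity.TanimotoCoefficientSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

public class InMemoryMahoutHelper {
    private final DataModel model;          // in-memory copy of the JDBC data
    private final ItemSimilarity similarity;

    public InMemoryMahoutHelper(DataSource source, String tableName) throws TasteException {
        MySQLJDBCDataModel jdbcModel =
                new MySQLJDBCDataModel(source, tableName, "userId", "itemId", "value", null);
        // Load everything into memory once; recommendations then run against RAM
        // instead of issuing SQL queries per preference lookup.
        model = new ReloadFromJDBCDataModel(jdbcModel);
        similarity = new TanimotoCoefficientSimilarity(model);
    }

    // Call periodically (e.g. from a timer) to pull fresh data from MySQL.
    public void reload() {
        model.refresh(null);
    }
}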

tcllib Tcl_CreateObjTrace usage example

Does anyone have an example of how to use Tcl_CreateObjTrace? This is the C API routine for adding tracing of Tcl calls to C code that embeds the Tcl library.
My main problem is this: I'm trying to develop a tracer for my Tcl code. However, I'd like to trace only my own procedures. The following code works:
static int
tcl_tracer(ClientData clientData,
           Tcl_Interp *interp,
           int level,
           CONST char *command,
           Tcl_Command commandToken,
           int objc, Tcl_Obj *CONST objv[])
{
    int param_length = 0;
    CONST char *param_str = NULL;

    /*
     * The first three parameters represent the procedure.
     */
    if (objc < 3) {
        printf("Invalid number of parameters for the tracer: %d\n", objc);
        return TCL_OK;
    }
    param_str = Tcl_GetStringFromObj(objv[0], &param_length);
    printf("%d:%s ", 0, param_str);
    param_str = Tcl_GetStringFromObj(objv[1], &param_length);
    printf("%d:%s ", 1, param_str);
    param_str = Tcl_GetStringFromObj(objv[2], &param_length);
    printf("%d:%s ", 2, param_str);
    printf("\n");
    return TCL_OK;
}
However, it traces all procedures. It traces 'puts', 'set', etc.
Is there any way to avoid that? There is a parameter to specify the level of tracing. But I don't know beforehand how many levels deep my code may run.
Much appreciated.
-Ilya.
As that page mentions, setting the flags parameter of the Tcl_CreateObjTrace call to TCL_ALLOW_INLINE_COMPILATION will disable the most intrusive level of tracing (in particular, many common core commands are bytecode compiled as normal with that flag set).
That said, it is substantially easier to hook into this mechanism from the Tcl level through trace add execution; setting an enter trace on each command you're interested in (sorry, you'll have to list them) should do the trick. (This works because the trace internals can turn off a lot of the cost in a way your code can't. This is fairly tricky, and one of the reasons I hate dealing with the trace command implementation.)
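For example, a minimal sketch of that script-level approach; the procedure names and the callback body are placeholders:

# Called before each traced command runs; for an "enter" trace the callback
# receives the full command string and the operation name.
proc my_tracer {commandString op} {
    puts "TRACE: $commandString"
}

# Attach an enter trace to each of your own procedures (you have to list them).
foreach p {myProcA myProcB myProcC} {
    trace add execution $p enter my_tracer
}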

Linq to SQL concurrency problem

Hello,
I have a web service with multiple methods that can be called. Each time one of these methods is called, I log the call to a statistics database so we know how many times each method is called each month and the average processing time.
Each time I log statistics data I first check the database to see if a row for that method and the current month already exists; if not, the row is created and added. If it already exists, I update the needed columns in the database.
My problem is that sometimes when I update a row I get the "Row not found or changed" exception, and yes, I know it is because the row has been modified since I read it.
To solve this I have tried the following, without success:
Using a using block around my DataContext.
Using a using block around a TransactionScope.
Using a mutex; this doesn't work because the web service is (not sure I am calling it the right thing) replicated out on different PCs for performance, but they all use the same database.
Resolving the concurrency conflict in the exception handler; this doesn't work because I need to get the new database value and add a value to it.
Below I have added the code used to log the statistics data. Any help would be appreciated very much.
public class StatisticsGateway : IStatisticsGateway
{
    #region member variables
    private StatisticsDataContext db;
    #endregion

    #region Singleton
    [ThreadStatic]
    private static IStatisticsGateway instance;
    [ThreadStatic]
    private static DateTime lastEntryTime = DateTime.MinValue;

    public static IStatisticsGateway Instance
    {
        get
        {
            if (!lastEntryTime.Equals(OperationState.EntryTime) || instance == null)
            {
                instance = new StatisticsGateway();
                lastEntryTime = OperationState.EntryTime;
            }
            return instance;
        }
    }
    #endregion

    #region constructor / initialize
    private StatisticsGateway()
    {
        var configurationAppSettings = new System.Configuration.AppSettingsReader();
        var connectionString = ((string)(configurationAppSettings.GetValue("sqlConnection1.ConnectionString", typeof(string))));
        db = new StatisticsDataContext(connectionString);
    }
    #endregion

    #region IStatisticsGateway members
    public void AddStatisticRecord(StatisticRecord record)
    {
        using (db)
        {
            var existing = db.Statistics.SingleOrDefault(p => p.MethodName == record.MethodName &&
                                                              p.CountryID == record.CountryID &&
                                                              p.TokenType == record.TokenType &&
                                                              p.Year == record.Year &&
                                                              p.Month == record.Month);
            if (existing == null)
            {
                //Add new row
                this.AddNewRecord(record);
                return;
            }

            //Update
            existing.Count += record.Count;
            existing.TotalTimeValue += record.TotalTimeValue;
            db.SubmitChanges();
        }
    }
I would suggest letting SQL Server deal with the concurrency.
Here's how:
Create a stored procedure that accepts your log values (method name, month/date, and execution statistics) as arguments.
In the stored procedure, before anything else, get an application lock as described here, and here. Now you can be sure only one instance of the stored procedure will be running at once. (Disclaimer! I have not tried sp_getapplock myself. Just saying. But it seems fairly straightforward, given all the examples out there on the interwebs.)
Next, in the stored procedure, query the log table for a current-month's entry for the method to determine whether to insert or update, and then do the insert or update.
As you may know, in VS you can drag stored procedures from the Server Explorer into the DBML designer for easy access with LINQ to SQL.
If you're trying to avoid stored procedures then this solution obviously won't be for you, but it's how I'd solve it easily and quickly. Hope it helps!
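For illustration, a minimal sketch of the locking step inside such a stored procedure; the resource name 'StatisticsLog' and the timeout are made up:

BEGIN TRAN;

-- Serialize concurrent callers: only one session at a time may hold this lock,
-- so the select-then-insert-or-update below cannot race with another caller.
EXEC sp_getapplock @Resource = 'StatisticsLog',
                   @LockMode = 'Exclusive',
                   @LockOwner = 'Transaction',
                   @LockTimeout = 10000;

-- ... query the log table here and INSERT or UPDATE accordingly ...

-- Committing releases the application lock, since its owner is the transaction.
COMMIT TRAN;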
If you don't want to use the stored procedure approach, a crude way of dealing with it would simply be retrying on that specific exception. E.g.:
int maxRetryCount = 5;
for (int i = 0; i < maxRetryCount; i++)
{
    try
    {
        QueryAndUpdateDB();
        break;
    }
    catch (ChangeConflictException)
    {
        // Rethrow if this was the last attempt; otherwise loop and retry.
        if (i == maxRetryCount - 1) throw;
    }
}
I have not used sp_getapplock; instead I have used HOLDLOCK and ROWLOCK, as seen below:
CREATE PROCEDURE [dbo].[UpdateStatistics]
    @MethodName as varchar(50) = null,
    @CountryID as varchar(2) = null,
    @TokenType as varchar(5) = null,
    @Year as int,
    @Month as int,
    @Count bigint,
    @TotalTimeValue bigint
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN

    UPDATE dbo.[Statistics]
    WITH (HOLDLOCK, ROWLOCK)
    SET Count = Count + @Count
    WHERE MethodName=@MethodName and CountryID=@CountryID and TokenType=@TokenType and Year=@Year and Month=@Month

    IF @@ROWCOUNT=0
        INSERT INTO dbo.[Statistics] (MethodName, CountryID, TokenType, TotalTimeValue, Year, Month, Count)
        VALUES (@MethodName, @CountryID, @TokenType, @TotalTimeValue, @Year, @Month, @Count)

    COMMIT TRAN
END
GO
I have tested it by calling my web service methods from multiple threads simultaneously, and each call is logged without any problems.