What are these `a.bits.user.amba_prot` signals and why are they only conditionally uninitialized in my HarnessBinder? - chisel

Context: I started using Chipyard about a month ago to build a quick prototype using RISC-V cores on the VCU118. Chipyard was perfect, but it required me to learn Chisel and the Rocket Chip tooling to extend the interconnect to my design.
The first piece of hardware to hook up was XDMA, the PCIe-to-AXI bridge IP provided by Xilinx. fpga-shells already provides a wrapper, shell, and overlay for this IP, so after some study of the Chipyard system, HarnessBinders, and IOBinders, I managed to hook it up. The overlay is placed like so:
val overlayOutput = dp(PCIeOverlayKey).last.place(PCIeDesignInput(wrangler=dutWrangler.node, corePLL=harnessSysPLL)).overlayOutput
val (pcieNode: TLNode, pcieIntNode: IntOutwardNode) = (overlayOutput.pcieNode, overlayOutput.intNode)
val (pcieSlaveTLNode: TLIdentityNode, pcieMasterTLNode: TLAsyncSinkNode) = (pcieNode.inward, pcieNode.outward)
There are two slaves, but I'll only show the IOBinders and HarnessBinders for one. I'm assuming my other port mixin is functioning correctly, since it's exactly like the CanHaveMasterTLMMIOPort trait but with a separate key and different address ranges. I realize this is inefficient, but it was easier than creating an external bus. Here is the IOBinder, which takes advantage of CanHaveMasterTLMMIOPort to add a master MMIO port to the system bus.
class WithXDMASlaveIOPassthrough extends OverrideIOBinder({
  (system: CanHaveMasterTLMMIOPort) => {
    val io_xdma_slave_pins_temp = IO(DataMirror.internal.chiselTypeClone[HeterogeneousBag[TLBundle]](system.mmio_tl)).suggestName("tl_slave_mmio")
    io_xdma_slave_pins_temp <> system.mmio_tl
    (Seq(io_xdma_slave_pins_temp), Nil)
  }
})
I retrieve and connect the slave nodes in the TestHarness like so:
val inParamsMMIOPeriph = topDesign match { case td: ChipTop =>
  td.lazySystem match { case lsys: CanHaveMasterTLMMIOPort =>
    lsys.mmioTLNode.edges.in(0)
  }
}
val inParamsControl = topDesign match { case td: ChipTop =>
  td.lazySystem match { case lsys: CanHaveMasterTLCtrlPort =>
    lsys.ctrlTLNode.edges.in(0)
  }
}
val pcieClient = TLClientNode(Seq(inParamsMMIOPeriph.master))
val pcieCtrlClient = TLClientNode(Seq(inParamsControl.master))
val connectorNode = TLIdentityNode()
// pcieSlaveTLNode should be driven for both the control slave and the AXI bridge slave
connectorNode := pcieClient
connectorNode := pcieCtrlClient
pcieSlaveTLNode :=* connectorNode
Finally, I connect the node through a HarnessBinder. I followed the way DDR is connected in Chipyard for this step.
class WithPCIeClient extends OverrideHarnessBinder({
  (system: CanHaveMasterTLMMIOPort, th: BaseModule with HasHarnessSignalReferences, ports: Seq[HeterogeneousBag[TLBundle]]) => {
    require(ports.size == 1)
    th match { case vcu118th: FCMVCU118FPGATestHarnessImp => {
      val bundles = vcu118th.fcmOuter.pcieClient.out.map(_._1)
      val pcieClientBundle = Wire(new HeterogeneousBag(bundles.map(_.cloneType)))
      // Some signals aren't being driven to this bundle, but it's hard to know how critical
      // that is. Only happens when myPeripheral is on.
      pcieClientBundle <> DontCare
      bundles.zip(pcieClientBundle).foreach { case (bundle, io) => bundle <> io }
      pcieClientBundle <> ports.head
    } }
  }
})
I added the pcieClientBundle <> DontCare to suppress the errors about these undriven signals. The problem only affects the signals driven to both slaves.
Signals such as:
a.bits.user.amba_prot.fetch
a.bits.user.amba_prot.secure
a.bits.user.amba_prot.modifiable
a.bits.user.amba_prot.privileged
are undriven (resulting in a RefNotInitializedException in FIRRTL's CheckInitialization pass) in both wires. I recognize these come from TLToAXI4, but what's weird is that they are only undriven when I connect another peripheral to the bus. This peripheral does not leave ChipTop. It has three master AXI buses connected to the system like so:
(peripheralMasterNode1
  := TLBuffer(BufferParams.default)
  := TLWidthWidget(8)
  := AXI4ToTL()
  := AXI4UserYanker(capMaxFlight = Some(16))
  := AXI4Fragmenter()
  := AXI4IdIndexer(idBits = 3)
  := AXI4Buffer()
  := peripheralTop.Master1)
fbus.fromPort(Some("PERIPHERAL_MASTER_1"))() := peripheralMasterNode1
The peripheral's AXI Lite slave is where I suspect the problem could be. Its node is declared like so:
val regCfgSlv = AXI4SlaveNode(Seq(AXI4SlavePortParameters(
  slaves = Seq(AXI4SlaveParameters(
    address = AddressSet.misaligned(0xf000E0000L, 0x1000L),
    resources = regCfgDevice.reg("config"),
    // executable = true, // Determines whether processor can execute from this memory.
    supportsWrite = TransferSizes(1, 4),
    supportsRead = TransferSizes(1, 4),
    interleavedId = Some(0))),
  beatBytes = 4
)))
And it is connected to the system bus using toSlave
sbus.toSlave(Some(portName)) {
  (peripheralTop.regCfgSlv
    := AXI4Buffer()
    := AXI4UserYanker(capMaxFlight = Some(2))
    := TLToAXI4()
    // := TLWidthWidget(sbus.beatBytes))
    := TLFragmenter(4,
         p(CacheBlockBytes),
         // sbus.beatBytes,
         holdFirstDeny = true)
    := TLWidthWidget(sbus.beatBytes))
}
When I include this peripheral mixin in my config by setting myPeripheral's key, I get the error about the a.bits.user.amba_prot signals not being driven for both HarnessBinders' client bundles (val pcieClientBundle = Wire(new HeterogeneousBag(bundles.map(_.cloneType)))). When I use only one of myPeripheral or XDMA, the RefNotInitializedException goes away.
Here are my configs. I have tried moving WithMyPeripheral around to no avail.
class CustomRocketConfig extends Config(
  new WithPCIeMMIOPort ++ // add default external master port
  new WithControlPort ++  // add control port for pcie cfg. // TODO: Crossbar this on MMIO Port? Move both MMIO and Control to a port on System Bus?
  new freechips.rocketchip.subsystem.WithDefaultSlavePort ++ // add default external slave port
  new WithMyPeripheral(MyPeripheralParams()) ++ // Link up myPeripheral
  new freechips.rocketchip.subsystem.WithNBigCores(2) ++
  new freechips.rocketchip.subsystem.WithNExtTopInterrupts(3) ++
  new chipyard.config.AbstractConfig)

class WithPCIeTweaks extends Config(
  new WithPCIeClient ++
  new WithPCIeManager ++
  new WithPCIeCtrlClient ++        // Same for these harness binders - ME
  new WithXDMAMasterIOPassthrough ++
  new WithXDMASlaveIOPassthrough ++
  new WithXDMACtrlIOPassthrough    // I imagine these three IOBinders can be combined into one - ME
  // TODO: Probably need harness binder and io binder for interrupt if we use it
)

class myRocketConfig extends Config(
  // new WithMyPeripheral(MyPeripheralParams()) ++ // Link up myPeripheral
  new WithPCIeTweaks ++
  new WithVCU118Tweaks ++
  new WithMyVCU118System ++
  new CustomRocketConfig
)
I hope my problem is clear and interesting. What are the a.bits.user.amba_prot signals, and why are they undriven when I hook up both XDMA and my peripheral? Why can I instantiate either myPeripheral or the XDMA on its own, but as soon as I hook up both, these signals lose their drivers? I realize this is a difficult question with a lot of moving parts in an already scarcely viewed tag. If you took the time to read this and have suggestions, your kindness and expertise are greatly appreciated.
Edit: I think the issue might be parameter negotiation failing between the Test Harness's diplomacy region and ChipTop's diplomacy region.
This is the control node in XDMA.
val control = AXI4SlaveNode(Seq(AXI4SlavePortParameters(
  slaves = Seq(AXI4SlaveParameters(
    address = List(AddressSet(c.control, c.ecamMask)),
    resources = device.reg("control"),
    supportsWrite = TransferSizes(1, 4),
    supportsRead = TransferSizes(1, 4),
    interleavedId = Some(0))), // AXI4-Lite never interleaves responses
  beatBytes = 4)))
This is the port the system sees.
val mmioTLNode = TLManagerNode(
  mmioPortParamsOpt.map(params =>
    TLSlavePortParameters.v1(
      managers = Seq(TLSlaveParameters.v1(
        address = AddressSet.misaligned(params.base, params.size),
        resources = device.ranges,
        executable = params.executable,
        supportsGet = TransferSizes(1, sbus.blockBytes),
        supportsPutFull = TransferSizes(1, sbus.blockBytes),
        supportsPutPartial = TransferSizes(1, sbus.blockBytes))),
      beatBytes = params.beatBytes)).toSeq)

mmioPortParamsOpt.map { params =>
  sbus.coupleTo(s"port_named_$portName") {
    (mmioTLNode
      := TLBuffer()
      := TLSourceShrinker(1 << params.idBits)
      := TLWidthWidget(sbus.beatBytes)
      := _)
  }
}
params.beatBytes is currently set to 8 (site(MemoryBusKey).beatBytes), but the control configuration slave node's beatBytes is 4.
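If that mismatch matters, one knob to try (an untested sketch on my part: ExtBus and MasterPortParams are rocket-chip's key and parameter class for the external master port, and whether this actually resolves the undriven user fields is an assumption) is pinning the MMIO port's beatBytes to 4 to match the AXI4-Lite control slave, rather than inheriting MemoryBusKey's 8. It would sit alongside the other config fragments above:
class WithNarrowExtBusBeatBytes extends Config((site, here, up) => {
  // Hypothetical config fragment: force the external master (MMIO) port to 4 beat bytes.
  case ExtBus => up(ExtBus, site).map(_.copy(beatBytes = 4))
})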

I have determined a somewhat satisfactory way around this error. My understanding of diplomacy might be flawed here, so if this helps someone craft a better answer, I'd love to hear it.
What are a.user.amba_prot signals?
These signals are a result of bridging between TileLink and AXI4. If the master is originally AXI4, then it has prot and cache signals on its AR and AW channels:
AxCACHE[0], Bufferable (B) bit
AxCACHE[1], Cacheable (C) bit (this is the "modifiable" bit)
AxCACHE[2], Read-allocate (RA) bit
AxCACHE[3], Write-allocate (WA) bit
AxPROT[0], Privileged Access bit
AxPROT[1], Secure Access bit
AxPROT[2], Data/Instruction Access bit
These signals are driven from AXI masters to AXI slaves.
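For reference, one alternative to the blanket pcieClientBundle <> DontCare in the HarnessBinder above is to drive the user field explicitly. This is only a rough sketch: the key and field names follow rocket-chip's AMBAProt / AMBAProtBundle, the values are illustrative defaults, and the surrounding wiring (and chisel3 imports) is assumed to be the HarnessBinder shown earlier.
import freechips.rocketchip.amba.AMBAProt

pcieClientBundle.foreach { tl =>
  // If the channel carries the AMBA prot/cache user field, drive every bit of it.
  tl.a.bits.user.lift(AMBAProt).foreach { prot =>
    prot.privileged := true.B   // AxPROT[0]
    prot.secure     := true.B   // AxPROT[1]
    prot.fetch      := false.B  // AxPROT[2]: data access, not instruction fetch
    prot.bufferable := true.B   // AxCACHE[0]
    prot.modifiable := true.B   // AxCACHE[1]
    prot.readalloc  := true.B   // AxCACHE[2]
    prot.writealloc := true.B   // AxCACHE[3]
  }
}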
Why is it that I can declare either myPeripheral or the XDMA, but, when I hook up both, these signals don't have drivers?
It's important to keep the following in mind when using Chipyard: the Chipyard system is one diplomatic region, and the Test Harness is another. From the Chipyard system's perspective, XDMA appears entirely as TileLink connections: it has a TL master driving its slave port and a TL slave on its master port. This is because the XDMA in the PCIeOverlay uses diplomacy to connect its AXI nodes to TileLink within the Test Harness's diplomatic region, so the system's diplomatic region can only see those TileLink master and slave ports. When AXI masters and slaves are connected, the TileLink client bundles then need the amba_prot signals, because the system can see it needs to connect to AXI slaves.
The workaround for this is to expose the AXI ports for XDMA directly. That way the system can see XDMA's AXI slaves and masters, and the bridge to TileLink can happen on the system side.
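A sketch of what that can look like, assuming the subsystem mixes in rocket-chip's CanHaveMasterAXI4MMIOPort so an mmio_axi4 bundle exists (the trait and signal names are assumptions about your setup, and the imports match the TL passthrough IOBinder from earlier):
class WithXDMASlaveAXI4Passthrough extends OverrideIOBinder({
  (system: CanHaveMasterAXI4MMIOPort) => {
    // Clone the AXI4 MMIO bundle and expose it at ChipTop, so the TL-to-AXI4 bridge stays
    // inside the system's diplomatic region and the harness sees plain AXI4.
    val io_xdma_axi4_pins = IO(DataMirror.internal.chiselTypeClone[HeterogeneousBag[AXI4Bundle]](system.mmio_axi4)).suggestName("axi4_slave_mmio")
    io_xdma_axi4_pins <> system.mmio_axi4
    (Seq(io_xdma_axi4_pins), Nil)
  }
})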

Related

How to add a sbus master to rocket-chip periphery

I'm trying to implement a DMA-like peripheral for the rocket chip,
meaning a module that is hooked to the pbus and controlled by registers.
It also has a master hooked to the sbus.
I followed the SiFive format to attach register-controlled peripherals without any problems.
My question is how do I add the sbus master? Below is what I've tried before reaching a dead end.
To the attach parameters class I've added the sbus:
case class dmaAttachParams(
  dma: dmaParams,
  controlBus: TLBusWrapper,
  masterBus: TLBusWrapper, // <-- my addition
  ....
)(implicit val p: Parameters)
Then I modified the attach method in the factory object:
def attach(params: dmaAttachParams): TLdma = {
  implicit val p = params.p
  val name = s"dma_${nextId()}"
  val cbus = params.controlBus
  val mbus = params.masterBus // <- my addition
  val dma = LazyModule(new TLdma(params.dma))
  dma.suggestName(name)
  cbus.coupleTo(s"slave_named_name") {
    dma.controlXing(params.controlXType) := TLFragmenter(cbus.beatBytes, cbus.blockBytes) := _
  }
  InModuleBody { dma.module.clock := params.mclock.map(_.getWrappedValue).getOrElse(cbus.module.clock) }
  InModuleBody { dma.module.reset := params.mreset.map(_.getWrappedValue).getOrElse(cbus.module.reset) }
  // this section is my problem // <-- this section is my addition
  mbus.from(s"master_named_name") {
    mbus.inwardNode := TLBuffer() := dma.mnode // <- what should I do here ???
  }
  dma
}
The mnode is a node I have added to the dma class like this:
val mnode = TLClientNode(Seq(TLClientPortParameters(Seq(TLClientParameters(name = "dmaSbusMaster")))))
What should the body of the mbus.from() call be to make this work?
Trying to build this code gives this error:
Caused by: java.lang.IllegalArgumentException: requirement failed: buffer.node (A adapter node with parent buffer inside coupler_from_master_named_name) has 1 inputs and 0 outputs; they must match (Buffer.scala:69:28)
Any help will be appreciated. In the rocket-chip GitHub issues they no longer answer support questions, so if someone from there can answer here, that would be great, as I am really stuck.
P.S. just adding the way the attach method is invoked:
trait HasPeripheryDma { this: BaseSubsystem =>
  val dmaNodes = p(PeripheryDmaKey).map { ps =>
    dma.attach(dmaAttachParams(ps, pbus, sbus))
  }
}
Update:
Implementing the body of the mbus.from() method as below:
mbus.from(s"master_named_name") {
  mbus.inwardNode := TLBuffer(BufferParams.default) := dma.mnode
}
does create a coupler from the DMA on the sbus, but it is not connected to the DMA peripheral. Any ideas?
I don't understand what is going wrong in your "Update", but this should work:
mbus.coupleFrom("master_named_dma") {
_ := TLBuffer(BufferParams.default) := dma.mnode
}
I have managed to attach to the sbus by reverse-engineering the way the slave is attached.
If someone can/wants to elaborate more, feel free to do so.
I have added a TLOutwardCrossingHelper field to the DMA peripheral like this:
class TLdma(params: dmaParams)(implicit p: Parameters) extends dma(params) with HasTLControlRegMap {
  val controlXingMaster: TLOutwardCrossingHelper = this.crossOut(mnode)
}
Please note that an equivalent TLInwardCrossingHelper is defined in the HasTLControlRegMap trait that we are extending.
Then, in the attach method, the next line did the work:
_ := TLBuffer(BufferParams.default) := dma.controlXingMaster(params.controlXType)
In this way I was able to also hook the peripheral to the coupler on the sbus.
I assume the crossing object does something to the node, but I don't know what.
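Putting the two answers together, the attach body would look roughly like this (untested; dma, mnode, and params.controlXType come from the question above, and whether the crossing is needed at all depends on your clock domains):
sbus.coupleFrom("master_named_dma") {
  // Cross the DMA's master node out of its domain, buffer it, and couple it onto the sbus.
  _ := TLBuffer(BufferParams.default) := dma.controlXingMaster(params.controlXType)
}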

Shortest path performance in Graphx with Spark

I am creating a graph from gz-compressed JSON files of edge and vertex types.
I have put the files in a Dropbox folder here.
I load and map these JSON records to create the vertex and edge types required by GraphX like this:
val vertices_raw = sqlContext.read.json("path/vertices.json.gz")
val vertices = vertices_raw.rdd.map(row=> ((row.getAs[String]("toid").stripPrefix("osgb").toLong),row.getAs[Long]("index")))
val verticesRDD: RDD[(VertexId, Long)] = vertices
val edges_raw = sqlContext.read.json("path/edges.json.gz")
val edgesRDD = edges_raw.rdd.map(row=>(Edge(row.getAs[String]("positiveNode").stripPrefix("osgb").toLong, row.getAs[String]("negativeNode").stripPrefix("osgb").toLong, row.getAs[Double]("length"))))
val my_graph: Graph[(Long),Double] = Graph.apply(verticesRDD, edgesRDD).partitionBy(PartitionStrategy.RandomVertexCut)
I then use this dijkstra implementation I found to compute a shortest path between two vertices:
def dijkstra[VD](g: Graph[VD, Double], origin: VertexId) = {
  var g2 = g.mapVertices(
    (vid, vd) => (false, if (vid == origin) 0 else Double.MaxValue, List[VertexId]())
  )
  for (i <- 1L to g.vertices.count - 1) {
    val currentVertexId: VertexId = g2.vertices.filter(!_._2._1)
      .fold((0L, (false, Double.MaxValue, List[VertexId]())))(
        (a, b) => if (a._2._2 < b._2._2) a else b)
      ._1
    val newDistances: VertexRDD[(Double, List[VertexId])] =
      g2.aggregateMessages[(Double, List[VertexId])](
        ctx => if (ctx.srcId == currentVertexId) {
          ctx.sendToDst((ctx.srcAttr._2 + ctx.attr, ctx.srcAttr._3 :+ ctx.srcId))
        },
        (a, b) => if (a._1 < b._1) a else b
      )
    g2 = g2.outerJoinVertices(newDistances)((vid, vd, newSum) => {
      val newSumVal = newSum.getOrElse((Double.MaxValue, List[VertexId]()))
      (
        vd._1 || vid == currentVertexId,
        math.min(vd._2, newSumVal._1),
        if (vd._2 < newSumVal._1) vd._3 else newSumVal._2
      )
    })
  }
  g.outerJoinVertices(g2.vertices)((vid, vd, dist) =>
    (vd, dist.getOrElse((false, Double.MaxValue, List[VertexId]()))
      .productIterator.toList.tail
    ))
}
I take two random vertex id's:
val v1 = 4000000028222916L
val v2 = 4000000031019012L
and compute the path between them:
val results = dijkstra(my_graph, v1).vertices.map(_._2).collect
I am unable to compute this locally on my laptop without getting a StackOverflowError. I can see that it is using 3 of the 4 available cores. I can load this graph and compute 10 shortest paths per second with the igraph library in Python on exactly the same graph. Is this an inefficient means of computing paths? At scale, on multiple nodes, the paths do compute (no StackOverflowError), but it still takes 30-40 seconds per path computation.
As you can read on the python-igraph github
"It is intended to be as powerful (ie. fast) as possible to enable the
analysis of large graphs."
To explain why it is taking 4000x longer on Apache Spark than in local Python, you may take a look here (a deep dive into performance bottlenecks with Spark PMC member Kay Ousterhout) to see that it is probably due to a bottleneck:
... beginning with the idea that network and disk I/O are major bottlenecks ...
You may not need to store your data in memory because the job may not get that much faster. This is saying that if you moved the serialized, compressed data from on-disk to in-memory...
You may also see here & here for some information, but the best final method is to benchmark your code to find where the bottleneck is.
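One side note that the answer above doesn't cover: the StackOverflowError in the question is commonly caused by the RDD lineage growing on every iteration of a loop like the dijkstra one. A hedged sketch of trimming the lineage with periodic caching and checkpointing; it assumes sc.setCheckpointDir has been called, and the step function stands in for one iteration of the loop:
import org.apache.spark.graphx._
import scala.reflect.ClassTag

def iterateWithCheckpoint[VD: ClassTag, ED: ClassTag](
    g: Graph[VD, ED], steps: Long)(step: Graph[VD, ED] => Graph[VD, ED]): Graph[VD, ED] = {
  var current = g
  for (i <- 1L to steps) {
    current = step(current).cache()
    if (i % 25 == 0) {           // cut the lineage every 25 iterations
      current.checkpoint()       // writes vertices/edges to the checkpoint dir
      current.vertices.count()   // force materialization
      current.edges.count()
    }
  }
  current
}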

How to execute a scenario using data from the previous scenario?

I'd like to execute two scenarios one after another, where the data "produced" by the first scenario is used as the basis for the second scenario.
A case could be, for example, clearing a credit card. The first scenario is there to authorize/reserve a certain amount on the card:
val auths = scenario("auths").during(durationInMinutes minutes) {
feed(credentials)
.feed(firstNames)
.feed(lastNames)
.feed(cards)
.feed(amounts)
.exec(http("send auth requests")
.post(...)
.check(...))}
The second one is there to capture/take the amount from the credit card:
val caps = scenario("caps").during(durationInMinutes minutes) {
feed(credentials)
.feed(RESPONSE_IDS_FROM_PREVIOUS_SCENARIO)
.exec(http("send auth requests")
.post(...)
.check(...))}
I initially thought about using the saveAs(...) option on check but I figured out that the saved field is only valid for the given session.
So basically I want to preserve the IDs I got from the auths scenario and use them in the caps scenario.
I cannot execute both steps in one scenario, though (saveAs would work for that), because I have different requirements for the two scenarios.
Quoting the documentation: "Presently our Simulation is one big monolithic scenario. So first let us split it into composable business processes, akin to the PageObject pattern with Selenium. This way, you’ll be able to easily reuse some parts and build complex behaviors without sacrificing maintenance." at gatling.io/Advanced Tutorial
Thus there is no built-in mechanism for communication between scenarios (AFAIK). The recommendation is to structure your code in such a way that you can combine your calls to URIs subsequently. In your case (apart from implementation details) you should have something like this:
val auths = feed(credentials)
  .feed(firstNames)
  .feed(lastNames)
  .feed(cards)
  .feed(amounts)
  .exec(http("send auth requests")
    .post(...)
    .check(...) // extract and store RESPONSE_ID to session
  )

val caps = exec(http("send auth requests")
  .post(...) // use of RESPONSE_ID from session
  .check(...))
Then your scenario can look something like this:
val scn = scenario("auth with caps").exec(auths, caps) // rest omitted
Maybe even better way to structure your code is to use objects. See mentioned tutorial link.
A more illustrative example (which compiles, but I didn't run it since the domain is foo.com):
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ExampleSimulation extends Simulation {
  import scala.util.Random
  import scala.concurrent.duration._

  val httpConf = http.baseURL(s"http://foo.com")

  val emails = Iterator.continually(Map("email" -> (Random.alphanumeric.take(20).mkString + "@foo.com")))
  val names = Iterator.continually(Map("name" -> Random.alphanumeric.take(20).mkString))

  val getIdByEmail = feed(emails)
    .exec(
      http("Get By Email")
        .get("/email/${email}")
        .check(
          jsonPath("$.userId").saveAs("anId")
        )
    )

  val getIdByName = feed(names)
    .exec(
      http("Get By Name")
        .get("/name/${name}")
        .check(
          jsonPath("$.userId").is(session =>
            session("anId").as[String]
          )
        )
    )

  val scn = scenario("Get and check user id").exec(getIdByEmail, getIdByName).inject(constantUsersPerSec(5) during (5.minutes))

  setUp(scn).protocols(httpConf)
}
Hope it is what you're looking for.

How to call Stored Procedures and defined functions in MySQL with Slick 3.0

I have defined something like this in my DB:
CREATE FUNCTION fun_totalInvestorsFor(issuer varchar(30)) RETURNS INT
NOT DETERMINISTIC
BEGIN
  RETURN (SELECT COUNT(DISTINCT LOYAL3_SHARED_HOLDER_ID)
          FROM stocks_x_hldr
          WHERE STOCK_TICKER_SIMBOL = issuer AND
                QUANT_PURCHASES > QUANT_SALES);
END;
Now I have received an answer from Stefan Zeiger (Slick lead) redirecting me here: User defined functions in Slick
I have tried (having the following object in scope):
lazy val db = Database.forURL("jdbc:mysql://localhost:3306/mydb",
driver = "com.mysql.jdbc.Driver", user = "dev", password = "root")
val totalInvestorsFor = SimpleFunction.unary[String, Int]("fun_totalInvestorsFor")
totalInvestorsFor("APPLE") should be (23)
Result: Rep(slick.lifted.SimpleFunction$$anon$2#13fd2ccd fun_totalInvestorsFor, false) was not equal to 23
I have also tried while having an application.conf in src/main/resources like this:
tsql = {
driver = "slick.driver.MySQLDriver$"
db {
connectionPool = disabled
driver = "com.mysql.jdbc.Driver"
url = "jdbc:mysql://localhost/mydb"
}
}
Then in my code with @StaticDatabaseConfig("file:src/main/resources/application.conf#tsql")
tsql"select fun_totalInvestorsFor('APPLE')" should be (23)
Result: Error:(24, 9) Cannot load @StaticDatabaseConfig("file:src/main/resources/application.conf#tsql"): No configuration setting found for key 'tsql'
tsql"select fun_totalInvestorsFor('APPLE')" should be (23)
^
I am also planning to call stored procedures that return one tuple of three values, via sql"call myProc(v1)".as[(Int, Int, Int)].
Any ideas?
EDIT: Making
sql"""SELECT COUNT(DISTINCT LOYAL3_SHARED_HOLDER_ID)
      FROM stocks_x_hldr
      WHERE STOCK_TICKER_SIMBOL = issuer AND
            QUANT_PURCHASES > QUANT_SALES""".as[(Int)]
results in SqlStreamingAction[Vector[Int], Int, Effect] instead of the DBIO[Int] (from what I infer) suggested by the documentation.
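For what it's worth (this is an untested sketch, not part of the answer below): both snippets above build descriptions rather than values, so they still have to be run against the database. The SimpleFunction call returns a lifted Rep that can be wrapped in a Query, and the as[Int] result is a DBIO action despite its SqlStreamingAction type. Assuming the Slick 3 api._ import and the db object from above:
import scala.concurrent.Await
import scala.concurrent.duration._

// Lift the Rep into a one-column query and run it.
val investors: Int = Await.result(db.run(Query(totalInvestorsFor("APPLE")).result.head), 10.seconds)

// as[Int] already yields a runnable DBIO action; .head takes the single row.
val countAction = sql"""SELECT COUNT(DISTINCT LOYAL3_SHARED_HOLDER_ID)
                        FROM stocks_x_hldr
                        WHERE QUANT_PURCHASES > QUANT_SALES""".as[Int]
val count: Int = Await.result(db.run(countAction.head), 10.seconds)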
I've been running into exactly the same problem for the past week. After some extensive research (see my post here, I'll be adding a complete description of what I've done as a solution), I decided it can't be done in Slick... not strictly speaking.
But, I'm resistant to adding pure JDBC or Anorm into our solution stack, so I did find an "acceptable" fix, IMHO.
The solution is to get the session object from Slick, and then use common JDBC to manage the stored function / stored procedure calls. At that point you can use any third party library that makes it easier... although in my case I wrote my own function to set up the call and return a result set.
val db = Database.forDataSource(DB.getDataSource)
var response: Option[GPInviteResponse] = None
db.withSession { implicit session => {
  // Set up your call here... (See my other post for a more detailed
  // answer with an example:
  // procedure is eg., "{?=call myfunction(?,?,?,?)}"
  val cs = session.conn.prepareCall(procedure.toString)
  // Set up your in and out parameters here
  // eg. cs.setLong(index, value)
  val result = cs.execute()
  val rc = result.head.asInstanceOf[Int]
  response = rc match {
    // Package up the response to the caller
  }
} }
db.close()
I know that's pretty terse, but as I said, see the other thread for a more complete posting. I'm putting it together right now and will post the answer shortly.

Call function when class is deleted/garbage collected

I have a class that opens a sqlite database in its constructor. Is there a way to have it close the database when it is destroyed (whether that be due to the programmer destroying it or being destroyed via Lua's garbage collection)?
The code so far:
local MyClass = {}
local myClass_mt = {__index = MyClass, __gc = __del}

function DBHandler.__init()
  -- constructor
  local newMyClass = {
    db = sqlite3.open(dbPath)
  }
  return setmetatable(newMyClass, myClass_mt)
end

local function __del()
  self.db.close()
end
For your particular case, according to its source code, LuaSQLite already closes its handle when it is destroyed:
/* close method */
static int db_close(lua_State *L) {
    sdb *db = lsqlite_checkdb(L, 1);
    lua_pushnumber(L, cleanupdb(L, db));
    return 1;
}

/* __gc method */
static int db_gc(lua_State *L) {
    sdb *db = lsqlite_getdb(L, 1);
    if (db->db != NULL) /* ignore closed databases */
        cleanupdb(L, db);
    return 0;
}
But IMO, freeing such resources on GC should be a backup solution: your object could be GCed after quite some time, so the SQLite handle would stay open during that time. Some languages provide a mechanism to release unmanaged resources as early as possible, such as Python's with or C#'s using.
Unfortunately Lua does not provide such a feature, so you should call close yourself when possible, for instance by adding a close method to your class too.
You don't mention which Lua version you use, but __gc won't work on tables in Lua 5.1. Something like this may work (it uses the newproxy hack for Lua 5.1):
m = newMyClass
if _VERSION >= "Lua 5.2" then
  setmetatable(m, {__gc = m.__del})
else
  -- keep sentinel alive until 'm' is garbage collected
  m.sentinel = newproxy(true)
  getmetatable(m.sentinel).__gc = m.__del -- careful with `self` in this case
end
For Lua 5.2 this is not different from the code you have; you don't say what exactly is not working, but Egor's suggestion on self.db:close is worth checking...
Look for finalizers in the manual.