Chisel: How is the "read" function (or macro) implemented for the SyncReadMem class?

Can anyone help me understand how the "read" macro is implemented? I have the feeling that the "do_read" function below is what actually gets called, but I could not figure out how that happens. I'm intrigued by the "SourceInfoTransform" class. Can anyone give me a hint about its usage?
The "SyncReadMem" implementation is listed below.
Thanks in advance for any help!
Best regards,
-Fei
sealed class SyncReadMem[T <: Data] private (t: T, n: BigInt, val readUnderWrite: SyncReadMem.ReadUnderWrite) extends MemBase[T](t, n) {

  def read(x: UInt, en: Bool): T = macro SourceInfoTransform.xEnArg

  /** @group SourceInfoTransformMacro */
  def do_read(addr: UInt, enable: Bool)(implicit sourceInfo: SourceInfo, compileOptions: CompileOptions): T = {
    val a = Wire(UInt())
    a := DontCare
    var port: Option[T] = None
    when (enable) {
      a := addr
      port = Some(read(a))
    }
    port.get
  }
}

The SourceInfoTransform is a Scala macro that transforms the def read into the def do_read. The code for the macro is in src/main/scala/chisel3/internal/sourceinfo/SourceInfoTransform.scala of github.com/chipsalliance/chisel3. In that file there are many transform classes for handling different Chisel constructs with different numbers of arguments. The main use of the SourceInfoTransform is to get the line number of the Chisel/Scala source so it can be reported in exceptions and in the generated Firrtl and Verilog. There are many articles on Scala macros that cover the mechanics in more depth.
Good luck.

Chick's answer is mostly correct: it's right about the "how" and where to look, but slightly wrong on the "why":
"The main use of the SourceInfoTransform is to get the line number of the Chisel/Scala source"
This is not quite true: you can get source locators merely by having implicit sourceInfo: SourceInfo, and you could have put that on the original def read if you wanted to. What the SourceInfoTransform macros actually do is resolve an ambiguity that arises when you want to do a bit extraction immediately after invoking the read method, e.g.
myMem.read(addr, en)(16, 0)
If we had defined read with the implicits directly, the compiler would think you're trying to pass 16 and 0 as the implicit arguments, when in reality you're trying to call .apply on the resulting T (if it's a subtype of Bits). The macro resolves the ambiguity so that the compiler knows you're not trying to pass the implicits.
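To make the ambiguity concrete, here is a minimal, self-contained sketch (hypothetical names, not Chisel's real API) of what goes wrong when the implicit list is declared directly on read:

// Hypothetical stand-ins for Chisel's Bits, SourceInfo, and memory read:
class Bits(val value: Int) {
  // bit extraction, analogous to Chisel's data(hi, lo)
  def apply(hi: Int, lo: Int): Int = (value >> lo) & ((1 << (hi - lo + 1)) - 1)
}

case class SourceInfo(line: Int)

object Mem {
  // read declared with the implicit list directly, no macro:
  def read(addr: Int)(implicit si: SourceInfo): Bits = new Bits(addr)
}

object AmbiguityDemo extends App {
  implicit val si: SourceInfo = SourceInfo(1)

  // Mem.read(42)(16, 0)  // rejected: the compiler matches (16, 0) against the
  //                      // implicit parameter list instead of Bits.apply
  val x = Mem.read(42).apply(16, 0) // you would have to spell out .apply instead
  println(x)
}

By routing calls through the macro to do_read, Chisel lets you write myMem.read(addr, en)(16, 0) and have (16, 0) resolve to the bit extraction on the returned value.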
Here's a link to a talk where I describe this (warning: while the small part about source locators and the macro is correct, much of the talk is out of date): https://youtu.be/2-ZiXNd9wbc?t=2756


Is there a way to make Chisel signals that are not defined at module scope visible in waveforms?

If we take for example the following code excerpt (at the top of a module):
val write_indices = WireInit(VecInit(Seq.fill(wordsPerBeat)(0.U((log2Ceil(nWays) + log2Ceil(nSets) + log2Ceil(cacheBlockBytes / wordBytes)).W))))
val write_line_indices = WireInit(VecInit(Seq.fill(wordsPerBeat)(0.U(log2Ceil(cacheBlockBytes / wordBytes).W))))
dontTouch(write_indices)
dontTouch(write_line_indices)

// refill logic
when(mem_response_present) {
  for (i <- 0 until wordsPerBeat) {
    val beatIdx = i.U(log2Ceil(wordsPerBeat).W)
    val write_line_index = Cat(d_refill_cnt(log2Ceil(cacheBlockBytes / wordsPerBeat / wordBytes) - 1, 0), beatIdx)
    val write_idx = Cat(refill_way, refill_set, write_line_index)
    write_indices(i) := write_idx
    write_line_indices(i) := write_line_index
    cache_line(write_idx) := tl_out.d.bits.data((i + 1) * wordBits - 1, i * wordBits)
  }
}
The only reason for the two top-level signals is to make the lower signals visible in waveforms.
Is there any way to achieve the same effect without having to manually create those signals?
In this example half the code exists just to enable debugging.
That seems a bit excessive.
"That seems a bit excessive"
Completely agreed; fortunately, there is a solution. For implementation reasons, Chisel by default is only able to name public fields of the Module class, that is, only the vals at the top-level scope of your Module. However, there is a nifty macro, chisel3.experimental.chiselName, that can name the vals inside of the for loop. Try annotating your Module like so:
import chisel3._
import chisel3.experimental.chiselName

@chiselName
class MyModule extends Module {
  ...
}
Please check out this earlier answer discussing naming; it has more information than is needed to answer this question alone, but it also covers other useful details of how naming works in Chisel.
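For completeness, here is a minimal sketch (a hypothetical module, assuming a Chisel 3 version where chisel3.experimental.chiselName is available) combining @chiselName with dontTouch so that vals created inside a block keep their names in waveforms:

import chisel3._
import chisel3.experimental.chiselName

@chiselName
class Sketch extends Module {
  val io = IO(new Bundle {
    val in  = Input(UInt(8.W))
    val out = Output(UInt(8.W))
  })

  io.out := 0.U
  when(io.in =/= 0.U) {
    // without @chiselName this inner val would get a generated name like _T_12
    val masked = io.in & 0x0f.U
    dontTouch(masked) // keep the signal from being optimized away
    io.out := masked
  }
}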

Python objects in __dealloc__ in Cython

In the docs it is written, that "Any C data that you explicitly allocated (e.g. via malloc) in your __cinit__() method should be freed in your __dealloc__() method."
This is not my case. I have the following extension class:
cdef class SomeClass:
    cdef dict data
    cdef void * u_data

    def __init__(self, data_len):
        self.data = {'columns': []}
        if data_len > 0:
            self.data.update({'data': deque(maxlen=data_len)})
        else:
            self.data.update({'data': []})
        self.u_data = <void *>self.data

    @property
    def data(self):
        return self.data

    @data.setter
    def data(self, new_val: dict):
        self.data = new_val
Some C function has access to this class and appends some data to the SomeClass().data dict. What should I write in __dealloc__ when I want to delete an instance of SomeClass()?
Maybe something like:
def __dealloc__(self):
    self.data = None
    free(self.u_data)
Or is there no need to dealloc anything at all?
No, you don't need to, and no, you shouldn't. From the documentation:
You need to be careful what you do in a __dealloc__() method. By the time your __dealloc__() method is called, the object may already have been partially destroyed and may not be in a valid state as far as Python is concerned, so you should avoid invoking any Python operations which might touch the object. In particular, don’t call any other methods of the object or do anything which might cause the object to be resurrected. It’s best if you stick to just deallocating C data.
You don’t need to worry about deallocating Python attributes of your object, because that will be done for you by Cython after your __dealloc__() method returns.
You can confirm this by inspecting the generated C code (you need to look at the full code, not just the annotated HTML). There's an autogenerated function __pyx_tp_dealloc_9someclass_SomeClass (the name may vary slightly depending on what you called your module) that does a range of things, including:
__pyx_pw_9someclass_9SomeClass_3__dealloc__(o);
/* some other code */
Py_CLEAR(p->data);
where the function __pyx_pw_9someclass_9SomeClass_3__dealloc__ is (a wrapper for) your user-defined __dealloc__. Py_CLEAR will ensure that data is appropriately reference-counted and then set to NULL.
It's a little hard to follow because it all goes through several layers of wrappers, but you can confirm that it does what the documentation says.

Scala method = trait { ... } meaning

I'm trying to learn Scala and the Play Framework at the same time. Scala looks to me like it has a lot of really cool ideas, but one of my frustrations is trying to understand all of the different syntaxes for methods/functions/lambdas/anonymous functions/etc.
So I have my main application controller like so:
object Application extends Controller {
  def index = Action {
    Ok(views.html.index("Your new application is ready."))
  }
}
This tells me I have a singleton Application that has one method, index, that returns what type?? I was expecting index to have a definition more like:
def index(req: Request) : Result = { ... }
Looking at the Play Framework's documentation, it looks as though Action is a trait that transforms a request into a result, but I'm having a hard time understanding what this line is saying:
def index = Action { ... }
I come from a Java background, so I don't know what this is saying. (This statement feels like it's saying "method index = [some interface Action]", which doesn't make sense to me; it seems something beautiful is happening, but it is magic to me, and I feel uneasy with magic in my code ;))
When you invoke an object as if it were a function, that's translated into a call to apply. I.e.:
foo(bar)
is translated into
foo.apply(bar)
So, inside index you are calling the Action object as if it were a function, which means you are actually calling Action.apply.
The return type of index is omitted because the compiler can infer it to be the return type of Action.apply (which, as the expansion below shows, is Action[AnyContent]).
So the short answer to this question is that there's some stuff going on behind the scenes that makes the above work: namely that the compiler is inferring types, and in Scala, objects with an apply method can get called as if they were functions.
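As an illustration, here is a tiny, self-contained sketch (hypothetical names) of the apply convention:

object Doubler {
  def apply(x: Int): Int = x * 2
}

object ApplyDemo extends App {
  val y = Doubler(21) // the compiler rewrites this to Doubler.apply(21)
  println(y)          // prints 42
}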
So what's going on here is that this code:
def index = Action {
  Ok("Hello World!")
}
...is equivalent to (or rather shorthand for) this code:
def index: Action[AnyContent] = Action.apply(
  (req: Request[AnyContent]) => {
    Ok(views.html.index("Hello World!"))
  }: Result
)
The magic is happening here: ... = Action {...}. Action {...} says "call Action with this anonymous function {...}".
Because Action.apply is defined as apply(block: => Result): Action[AnyContent], all of the argument and return types can be inferred.

Scala: val foo = (arg: Type) => {...} vs. def(arg:Type) = {...}

Related to this thread
I am still unclear on the distinction between these two definitions:
val foo = (arg: Type) => {...}
def foo(arg: Type) = {...}
As I understand it:
1) the val version is bound once, at compile time
- a single Function1 instance is created
- it can be passed as a method parameter
2) the def version is bound anew on each call
- a new method instance is created per call
If the above is true, then why would one ever choose the def version in cases where the operations to perform do not depend on runtime state?
For example, in a servlet environment you might want to get the IP address of the connecting client; in this case you need a def since, of course, there is no connected client at compile time.
On the other hand, you often know at compile time which operations to perform, and can go with an immutable val foo = (i: Type) => {...}
As a rule of thumb then, should one only use defs when there is a runtime state dependency?
Thanks for clarifying
I'm not entirely clear on what you mean by runtime state dependency. Both vals and defs can close over their lexical scope and are hence unlimited in this way. So what are the differences between methods (defs) and functions (as vals) in Scala (which has been asked and answered before)?
You can parameterize a def
For example:
object List {
  def empty[A]: List[A] = Nil // a type parameter is allowed here
  val Empty: List[Nothing] = Nil // cannot put a type parameter on a val
}
I can then call:
List.empty[Int]
But I would have to use:
List.Empty: List[Int]
But of course there are other reasons as well. Such as:
A def is a method at the JVM level
If I were to use the piece of code:
trades filter isEuropean
I could choose to declare isEuropean as either:
val isEuropean = (_: Trade).country.region == Europe
Or:
def isEuropean(t: Trade) = t.country.region == Europe
The latter avoids creating an object (for the function instance) at the point of declaration, but not at the point of use: Scala creates a function instance for the method at the point of use. It would be clearer if I had used the _ syntax.
However, in the following piece of code:
val b = isEuropean(t)
...if isEuropean is declared as a def, no such object is created, and hence the code may be more performant (if used in very tight loops where every last nanosecond is of critical value).
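To see the evaluation difference directly, here is a small runnable sketch (a hypothetical example, using System.nanoTime as an observable side effect):

object ValVsDef extends App {
  val boundOnce: Long = System.nanoTime() // right-hand side runs once, at initialization
  def perCall: Long = System.nanoTime()   // body runs again on every call

  println(boundOnce == boundOnce) // true: the same stored value both times
  println(perCall == perCall)     // almost certainly false: two separate calls
}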

Type mismatch while using allCatch opt

In order to avoid Java exceptions I'm using Scala's exception handling class.
However, when compiling the following snippet:
import scala.util.control.Exception._

val cls = classManifest[T].erasure
// Invoke the special constructor if it's available; otherwise use the default constructor.
allCatch opt cls.getConstructor(classOf[Project]) match {
  case Some(con) =>
    con.newInstance(project) // use the constructor with one Project param
  case None =>
    cls.newInstance // just use the default constructor
}
I receive the following error:
error: type mismatch;
[scalac]  found   : java.lang.reflect.Constructor[_]
[scalac]  required: java.lang.reflect.Constructor[_$1(in method init)] where type _$1(in method init)
[scalac] allCatch opt cls.getConstructor(classOf[Project]) match {
[scalac]               ^
[scalac] one error found
What's going on here and how can I fix it?
I have no explanation, but here is a much shorter example which I hope pinpoints where the problem occurs. I think it is not related to exceptions at all, nor to reflection. Whether this behavior is an arcane but correct consequence of the specification or a bug, I have no idea.
val untypedList: List[_] = List("a", "b")
val typedList: List[String] = List("a", "b")

def uselessByName[A](a: => A) = a
def uselessByValue[A](a: A) = a

uselessByName(untypedList) fails with the same error as your code; the other combinations do not. So the trigger is the combination of a method with a generic call-by-name argument and an argument whose type involves an existential.
uselessByName[List[_]](untypedList) works, so I guess that if you explicitly call opt[Constructor[_]] it might work too.
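Putting that shorter example together as a runnable sketch (same definitions as above; the commented-out line reproduces the question's error, per this answer):

object ExistentialDemo extends App {
  val untypedList: List[_] = List("a", "b")

  def uselessByName[A](a: => A) = a

  // uselessByName(untypedList)  // fails to compile: inference of A trips over
  //                             // the existential type, as in the question
  val ok = uselessByName[List[_]](untypedList) // works with A given explicitly
  println(ok)
}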
The type inference scheme has gotten confused by the types available--specifically, by the type of cls. If we write generic code:
def clser[A](cls: Class[A]) = allCatch opt cls.getConstructor(classOf[Project])
then it works perfectly okay. But you're probably doing something else--I can't tell what, because you didn't provide the code--and this results in a mismatch between the expected and actual types.
My current solution is to cast the constructor explicitly:
cls.getConstructor(classOf[Project]) becomes cls.getConstructor(classOf[Project]).asInstanceOf[Constructor[Project]].
I'm still wondering about the actual error and whether there are better ways to resolve it, so I'm going to leave this open.