The docs say that "Any C data that you explicitly allocated (e.g. via malloc) in your __cinit__() method should be freed in your __dealloc__() method."
This is not my case. I have the following extension class:
from collections import deque

cdef class SomeClass:
    cdef dict data
    cdef void * u_data

    def __init__(self, data_len):
        self.data = {'columns': []}
        if data_len > 0:
            self.data.update({'data': deque(maxlen=data_len)})
        else:
            self.data.update({'data': []})
        self.u_data = <void *>self.data

    @property
    def data(self):
        return self.data

    @data.setter
    def data(self, new_val: dict):
        self.data = new_val
Some C function has access to this class and appends some data to the SomeClass().data dict. What should I write in __dealloc__ when I want to delete an instance of SomeClass?
Maybe something like:
def __dealloc__(self):
    self.data = None
    free(self.u_data)
Or there is no need to dealloc anything at all?
No, you don't need to, and no, you shouldn't. From the documentation:
You need to be careful what you do in a __dealloc__() method. By the time your __dealloc__() method is called, the object may already have been partially destroyed and may not be in a valid state as far as Python is concerned, so you should avoid invoking any Python operations which might touch the object. In particular, don’t call any other methods of the object or do anything which might cause the object to be resurrected. It’s best if you stick to just deallocating C data.
You don’t need to worry about deallocating Python attributes of your object, because that will be done for you by Cython after your __dealloc__() method returns.
You can confirm this by inspecting the C code (you need to look at the full code, not just the annotated HTML). There's an autogenerated function __pyx_tp_dealloc_9someclass_SomeClass (the name may vary slightly depending on what you called your module) that does a range of things, including:
__pyx_pw_9someclass_9SomeClass_3__dealloc__(o);
/* some other code */
Py_CLEAR(p->data);
where the function __pyx_pw_9someclass_9SomeClass_3__dealloc__ is (a wrapper for) your user-defined __dealloc__. Py_CLEAR will ensure that data is appropriately reference-counted then set to NULL.
It's a little hard to follow because it all goes through several layers of wrappers, but you can confirm that it does what the documentation says.
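For contrast, here is a minimal sketch of the case the documentation does warn about, where __dealloc__ is genuinely needed because you malloc'd C memory yourself (the class and field names here are hypothetical):

from libc.stdlib cimport malloc, free

cdef class OwnsCBuffer:
    cdef double * buf   # raw C memory; Python's GC knows nothing about it

    def __cinit__(self, size_t n):
        self.buf = <double *>malloc(n * sizeof(double))
        if self.buf == NULL:
            raise MemoryError()

    def __dealloc__(self):
        # Stick to freeing C data here; Python attributes are cleared by
        # Cython after this method returns.
        free(self.buf)   # free(NULL) is a safe no-op

In the question's SomeClass, by contrast, u_data is just a cast of a Python object that Cython already manages, so calling free on it would be wrong.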
Can anyone help me understand how the "read" macro is implemented? I have the feeling that the "do_read" function below is what actually gets called, but I could not figure out how that is done. I'm intrigued by the "SourceInfoTransform" class; can anyone give me a hint on its usage?
The "SyncReadMem" implementation is listed below.
Thanks in advance for any help!
Best regards,
-Fei
sealed class SyncReadMem[T <: Data] private (t: T, n: BigInt, val readUnderWrite: SyncReadMem.ReadUnderWrite) extends MemBase[T](t, n) {
  def read(x: UInt, en: Bool): T = macro SourceInfoTransform.xEnArg

  /** @group SourceInfoTransformMacro */
  def do_read(addr: UInt, enable: Bool)(implicit sourceInfo: SourceInfo, compileOptions: CompileOptions): T = {
    val a = Wire(UInt())
    a := DontCare
    var port: Option[T] = None
    when (enable) {
      a := addr
      port = Some(read(a))
    }
    port.get
  }
}
The SourceInfoTransform is a Scala macro that transforms the def read into the def do_read. The code for the macro is in src/main/scala/chisel3/internal/sourceinfo/SourceInfoTransform.scala of github.com/chipsalliance/chisel3. In that file there are a lot of transform classes for handling different Chisel constructs with different numbers of arguments. The main use of the SourceInfoTransform is to get the line number of the Chisel/Scala source so it can be reported in exceptions and in generated Firrtl and Verilog. Here is an article on macros; many more are available.
Good luck.
Chick's answer is mostly correct: it's right on the "how" and where to look, but slightly wrong on the "why":
The main use of the SourceInfoTransform is to get the line number of the Chisel/Scala source
This is not quite true: you can get source locators merely by having an implicit sourceInfo: SourceInfo, and you could have that on the original def read if you wanted to. What the SourceInfoTransform macros do is resolve an ambiguity when you want to do a bit extraction immediately after invoking the read method, e.g.
myMem.read(addr, en)(16, 0)
If we had defined read with the implicits directly, the compiler would think you're trying to pass 16 and 0 as the implicit arguments, when in reality you're trying to call .apply on the resulting T (if it's a subtype of Bits). The macro resolves the ambiguity so that the compiler knows you're not trying to pass the implicits.
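A minimal plain-Scala sketch of that ambiguity (the names here are made up; this is not Chisel code):

object AmbiguityDemo {
  class Word {
    def apply(hi: Int, lo: Int): Word = this // stands in for bit extraction on Bits
  }

  object Mem1 {
    // The implicit parameter list is declared directly, with no macro:
    def read(addr: Int)(implicit tag: String): Word = new Word
  }

  implicit val tag: String = "source info"

  // Mem1.read(3)(16, 0)                   // error: (16, 0) is matched against the implicit list
  val w: Word = Mem1.read(3).apply(16, 0)  // you would have to disambiguate like this
}

By routing read through the macro-generated do_read, Chisel keeps the nice myMem.read(addr, en)(16, 0) syntax while still threading the implicit SourceInfo through.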
Here's a link to a talk where I describe this (warning: while the little part about source locators and the macro is correct, much of the talk is out of date): https://youtu.be/2-ZiXNd9wbc?t=2756
I'm trying to construct a trait and an abstract class to be subtyped by messages (in an Akka/Play environment) so I can easily convert them to JSON.
What I have done so far:
abstract class OutputMessage(val companion: OutputMessageCompanion[OutputMessage]) {
  def toJson: JsValue = Json.toJson(this)(companion.fmt)
}

trait OutputMessageCompanion[OT] {
  implicit val fmt: OFormat[OT]
}
Problem is, when I'm trying to implement the mentioned interfaces as follows:
case class NotifyTableChange(tableStatus: BizTable) extends OutputMessage(NotifyTableChange)

object NotifyTableChange extends OutputMessageCompanion[NotifyTableChange] {
  override implicit val fmt: OFormat[NotifyTableChange] = Json.format[NotifyTableChange]
}
I get this error from IntelliJ:
Type mismatch, expected: OutputMessageCompanion[OutputMessage], actual: NotifyTableChange.type
I'm kinda new to Scala generics, so help with some explanations would be much appreciated.
P.S. I'm open to any solution more generic than the one mentioned.
The goal is, given any subtype of OutputMessage, to easily convert it to JSON.
The compiler says that your companion is defined over OutputMessage as the generic parameter rather than over the specific subtype. To work around this you want to use a trick known as an F-bounded generic. Also, I don't like the idea of storing that companion object as a val in each message (after all, you don't want it serialized, do you?). Defining it as a def is IMHO a much better trade-off. The code would go like this (the companion stays the same):
abstract class OutputMessage[M <: OutputMessage[M]]() {
  self: M => // required to match the Json.toJson signature

  protected def companion: OutputMessageCompanion[M]

  def toJson: JsValue = Json.toJson(this)(companion.fmt)
}

case class NotifyTableChange(tableStatus: BizTable) extends OutputMessage[NotifyTableChange] {
  override protected def companion: OutputMessageCompanion[NotifyTableChange] = NotifyTableChange
}
You may also see standard Scala collections for an implementation of the same approach.
But if all you need the companion for is to encode with JSON format, you can get rid of it like this:
abstract class OutputMessage[M <: OutputMessage[M]]() {
  self: M => // required to match the Json.toJson signature

  implicit protected def fmt: OFormat[M]

  def toJson: JsValue = Json.toJson(this)
}

case class NotifyTableChange(tableStatus: BizTable) extends OutputMessage[NotifyTableChange] {
  override implicit protected def fmt: OFormat[NotifyTableChange] = Json.format[NotifyTableChange]
}
Obviously, if you also want to decode from JSON, you still need a companion object anyway.
Answers to the comments
Referring to the companion through a def means that it is a "method", thus defined once for all instances of the subtype (and doesn't get serialized)?
Everything you declare with val gets a field stored in the object (instance of the class). By default, serializers try to serialize all the fields. Usually there is some way to say that some fields should be ignored (like some @Ignore annotation). It also means that you'll have one more pointer/reference in each object, which uses memory for no good reason; this might or might not be an issue for you. Declaring it as a def gets you a method, so you can have just one object stored in some "static" place, like a companion object, or build it on demand every time.
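A small sketch of the difference (the class name is hypothetical):

class Message {
  val companionV = Message // a field: one extra reference stored in every instance
  def companionD = Message // a method: nothing is stored in the instance
}

object Message

A field-based serializer would pick up companionV but not companionD.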
I'm kinda new to Scala, and I've picked up the habit of putting the format inside the companion object. Would you recommend some source on how to decide where it is best to put your methods?
Scala is an unusual language, and there is no direct mapping that covers all the use cases of the object concept in other languages. As a first rule of thumb, there are two main usages for object (see the sketch after this list):
Something where you would use static in other languages, i.e. a container for static methods, constants and static variables (although variables, especially static ones, are discouraged in Scala)
Implementation of the singleton pattern.
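A quick sketch of both usages (names are hypothetical):

object MathUtils { // 1) a home for "static" members
  val TwoPi: Double = 2 * math.Pi
  def square(x: Double): Double = x * x
}

object AppConfig { // 2) a singleton: exactly one instance exists
  val name: String = "my-app"
}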
By F-bounded generic, do you mean the lower bound of M being OutputMessage[M]? (BTW, why is it OK to use M twice in the same expression?)
Unfortunately the wiki provides only a basic description. The whole idea of F-bounded polymorphism is to be able to access the type of the subclass in the base class in some generic manner. Usually an A <: B constraint means that A should be a subtype of B (so it is an upper bound on A, not a lower one). Here, with M <: OutputMessage[M], it means that M should be a subtype of OutputMessage[M], which can easily be satisfied only by declaring the child class (there are other, non-easy ways to satisfy it) as:
class Child extends OutputMessage[Child]
This trick allows you to use M as an argument or result type in methods of the base class.
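A tiny illustration of that last point (names are hypothetical):

abstract class Node[N <: Node[N]] { self: N =>
  def relabel(label: String): N // the result type is the concrete subtype
}

final case class Leaf(label: String) extends Node[Leaf] {
  def relabel(label: String): Leaf = copy(label = label) // returns Leaf, not Node
}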
I'm a bit puzzled about the self bit ...
Lastly, the self bit is another trick that is necessary because F-bounded polymorphism was not enough in this particular case. Usually it is used with traits, when traits are used as mix-ins: you might want to restrict which classes the trait can be mixed into, and at the same time it allows you to use the methods of that base type in your mixin trait.
I'd say that the particular usage in my answer is a bit unconventional but it has the same twofold effect:
When compiling OutputMessage, the compiler can assume that the type will also somehow be of type M (whatever M is)
When compiling any subtype, the compiler ensures that constraint #1 is satisfied. For example, it will not let you do:
case class SomeChild(i: Int) extends OutputMessage[SomeChild]
// this will fail because passing SomeChild breaks the restriction of self:M
case class AnotherChild(i: Int) extends OutputMessage[SomeChild]
Actually, since I had to use self: M anyway, you probably can remove the F-bounded part here, leaving just
abstract class OutputMessage[M]() {
  self: M =>
  ...
}
but I'd stay with it to better convey the meaning.
As SergGr already answered, you would need an F-bounded kind of polymorphism to solve this as it stands.
However, for these cases I believe (note this is only my opinion) it is better to use typeclasses instead.
In your case, you only want to provide a toJson method to any value, as long as there is an OFormat[T] instance for it.
You can achieve that with this (IMHO simpler) piece of code:
object syntax {
  object json {
    implicit class JsonOps[T](val t: T) extends AnyVal {
      def toJson(implicit fmt: OFormat[T]): JsValue = Json.toJson(t)(fmt)
    }
  }
}
final case class NotifyTableChange(tableStatus: BizTable)

object NotifyTableChange {
  implicit val fmt: OFormat[NotifyTableChange] = Json.format[NotifyTableChange]
}

import syntax.json._

val m = NotifyTableChange(tableStatus = ???)
val mJson = m.toJson // This works!
The JsonOps class is an implicit class, which provides the toJson method on any value for which there is an implicit OFormat instance in scope.
And since the companion object of the NotifyTableChange class defines such an implicit, it is always in scope; more information about where Scala looks for implicits can be found in this link.
Additionally, given it is a value class, this extension method does not require any instantiation at runtime.
Here, you can find a more detailed discussion about F-Bounded vs Typeclasses.
I have a class definition based on json.JSONEncoder, in which I have overridden the default method. But when I call json.dumps on an instance of that class, the default method is not called. Is there something I have missed?
In my example code I do not expect this to magically produce the serialized object but I would expect the print("here") to be executed.
import json

class MyClass(json.JSONEncoder):
    id = "myId"
    data = "myData"

    def default(self, o):
        print("here")

print("Create instance")
obj = MyClass()
print("Serialize")
print(json.dumps(obj))
print("and done")
I am quite new to Python, so apologies if this is something horribly obvious.
After some further digging and tracing I think I have found the cause. Part of the issue, I think, is my own misunderstanding of how this is intended to be used.
When calling json.dumps, if you wish to use a custom encoder you need to specify the class of that encoder; otherwise dumps defaults to the standard JSONEncoder implementation:
json.dumps(obj, cls=MyEncoder)
My misconception was that by basing my class on json.JSONEncoder, dumps would simply recognise the instance as inheriting from JSONEncoder and call the overridden default method. However, this is not the case.
I have now created the logic in its own encoder class, which encodes my own class/types, and when I call json.dumps I pass in that class.
So I now have
class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, MyClass):
            return {"id": obj.id, "data": obj.data}
        return json.JSONEncoder.default(self, obj)
And when I wish to serialize I use
json.dumps(object_to_serialize, cls=MyEncoder)
Which recognises my class and handles it, or passes on the encoding to the default encoder.
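Putting it together, a minimal end-to-end sketch (here MyClass is just a plain class; it no longer needs to inherit from JSONEncoder):

import json

class MyClass:
    def __init__(self):
        self.id = "myId"
        self.data = "myData"

class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        # Handle our own type; defer everything else to the base encoder.
        if isinstance(obj, MyClass):
            return {"id": obj.id, "data": obj.data}
        return json.JSONEncoder.default(self, obj)

print(json.dumps(MyClass(), cls=MyEncoder))  # {"id": "myId", "data": "myData"}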
Related to this thread
I am still unclear on the distinction between these two definitions:
val foo = (arg: Type) => {...}
def foo(arg: Type) = {...}
As I understand it:
1) the val version is bound once, at compile time
a single Function1 instance is created
can be passed as a method parameter
2) the def version is bound anew on each call
a new method instance is created per call.
If the above is true, then why would one ever choose the def version in cases where the operation(s) to perform are not dependent on runtime state?
For example, in a servlet environment you might want to get the IP address of the connecting client; in this case you need to use a def since, of course, there is no connected client at compile time.
On the other hand, you often know at compile time the operations to perform, and can go with an immutable val foo = (i: Type) => {...}
As a rule of thumb then, should one only use defs when there is a runtime state dependency?
Thanks for clarifying
I'm not entirely clear on what you mean by runtime state dependency. Both vals and defs can close over their lexical scope and are hence unlimited in this way. So what are the differences between methods (defs) and functions (as vals) in Scala (which has been asked and answered before)?
You can parameterize a def
For example:
object List {
  def empty[A]: List[A] = Nil    // a type parameter is allowed here
  val Empty: List[Nothing] = Nil // a val cannot take a type parameter
}
I can then call:
List.empty[Int]
But I would have to use:
List.Empty: List[Int]
But of course there are other reasons as well. Such as:
A def is a method at the JVM level
If I were to use the piece of code:
trades filter isEuropean
I could choose a declaration of isEuropean as either:
val isEuropean = (_: Trade).country.region == Europe
Or
def isEuropean(t: Trade) = t.country.region == Europe
The latter avoids creating an object (for the function instance) at the point of declaration, but not at the point of use: Scala creates a function instance for the method at the point of use (eta-expansion). It would be clearer if I had used the _ syntax.
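A self-contained sketch of that expansion (Trade is simplified here; the original example's country/region fields are flattened into one field for brevity):

object EtaDemo {
  case class Trade(region: String) // simplified stand-in for the answer's Trade
  val Europe = "EU"
  def isEuropean(t: Trade): Boolean = t.region == Europe

  val trades = List(Trade("EU"), Trade("US"))
  val a = trades filter isEuropean     // implicit eta-expansion creates a Function1 here
  val b = trades filter (isEuropean _) // the same conversion, written explicitly
}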
However, in the following piece of code:
val b = isEuropean(t)
...if isEuropean is declared as a def, no such object is created, and hence the code may be more performant (if used in very tight loops where every last nanosecond is of critical value).
I want to know how to determine, given an object, whether it is an instance of an SQLAlchemy mapped model.
Normally I would use isinstance(obj, DeclarativeBase). However, in this scenario I do not have the DeclarativeBase class available (since it lives in the depending project).
I would like to know what the best practice is in this case.
class Person(DeclarativeBase):
    __tablename__ = "Persons"

p = Person()
print isinstance(p, DeclarativeBase)
# prints True

# However, in my scenario I do not have the DeclarativeBase available,
# since the DeclarativeBase will be constructed in the depending web app,
# while my code will act as a library that will be imported into the web app.
# What are my alternatives?
You can use class_mapper() and catch the exception.
Or you could use _is_mapped_class, but ideally you should not, as it is not a public method.
from sqlalchemy.orm.util import class_mapper

def _is_sa_mapped(cls):
    try:
        class_mapper(cls)
        return True
    except:
        return False

print _is_sa_mapped(MyClass)

# note: use this at your own risk, as it might be removed/renamed in the future
from sqlalchemy.orm.util import _is_mapped_class
print bool(_is_mapped_class(MyClass))
For instances there is object_mapper(), so:
from sqlalchemy.orm.base import object_mapper
from sqlalchemy.orm.exc import UnmappedInstanceError

def is_mapped(obj):
    try:
        object_mapper(obj)
    except UnmappedInstanceError:
        return False
    return True
The complete mapper utilities are documented here: http://docs.sqlalchemy.org/en/rel_1_0/orm/mapping_api.html
Just a consideration: since specific errors are raised by SQLAlchemy (UnmappedClassError for classes and UnmappedInstanceError for instances), why not catch them rather than a generic exception? ;)
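Following that suggestion, a class-level check could look like this minimal sketch (same pattern as is_mapped above; verify the import paths against your SQLAlchemy version):

from sqlalchemy.orm import class_mapper
from sqlalchemy.orm.exc import UnmappedClassError

def is_mapped_class(cls):
    try:
        class_mapper(cls)
    except UnmappedClassError:
        # raised for classes that are not mapped by SQLAlchemy
        return False
    return True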