I have a Cython module that uses memoryview arrays, that is...
double[:,:] foo
I want to run this module in parallel using multiprocessing. However, I get the error:
PicklingError: Can't pickle <type 'tile_class._memoryviewslice'>: attribute lookup tile_class._memoryviewslice failed
Why can't I pickle a memoryview, and what can I do about it?
Maybe passing the actual array instead of the memory view can solve your problem.
If you want to execute a function in parallel, all of its parameters have to be picklable, if I recall correctly. At least that is the case with Python's multiprocessing. So you could pass the array to the function and create the memoryview inside the function:
def some_function(matrix_as_array):
    cdef double[:,:] matrix = matrix_as_array
    ...
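For example, a minimal sketch of that approach (the module and function names tile_module and process_tile are made up for illustration; the Cython part must live in a compiled module so that multiprocessing can pickle the worker function by reference):

# tile_module.pyx -- compiled Cython module
import numpy as np

def process_tile(matrix_as_array):
    # The NumPy array pickles fine; rebuild the typed memoryview here.
    cdef double[:, :] matrix = matrix_as_array
    return np.asarray(matrix).sum()

# driver.py -- plain Python script
import numpy as np
from multiprocessing import Pool
import tile_module

if __name__ == '__main__':
    tiles = [np.random.rand(4, 4) for _ in range(8)]
    with Pool() as pool:
        results = pool.map(tile_module.process_tile, tiles)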
I don't know if this helps you, but I encountered a similar problem. I use a memoryview as an attribute in a cdef class, and I had to write my own __reduce__ and __setstate__ methods to correctly unpickle instances of my class. Pickling the memoryview as an array via numpy.asarray and restoring it in __setstate__ worked for me. A reduced version of my code:
import numpy as np

cdef class Foo:
    cdef double[:,:] matrix

    def __init__(self, matrix):
        '''Assign a passed array to the typed memory view.'''
        self.matrix = matrix

    def __reduce__(self):
        '''Define how instances of Foo are pickled.'''
        d = dict()
        d['matrix'] = np.asarray(self.matrix)
        return (Foo, (d['matrix'],), d)

    def __setstate__(self, d):
        '''Define how instances of Foo are restored.'''
        self.matrix = d['matrix']
Note that __reduce__ returns a tuple consisting of a callable (Foo), a tuple of parameters for that callable (i.e. what is needed to create a 'new' Foo instance, in this case the saved matrix) and the dictionary with all values needed to restore the instance.
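For illustration, with those two methods in place an instance should survive a pickle round trip; a quick sanity check (assuming Foo has been compiled and imported) might be:

import pickle
import numpy as np

f = Foo(np.ones((3, 3)))
f2 = pickle.loads(pickle.dumps(f))   # no PicklingError any more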
I want to create a Cython object that has convenient operations such as addition, multiplication, and comparisons. But when I compile such classes, they all seem to have a lot of Python overhead.
A simple example:
%%cython -a

cdef class Pair:
    cdef public:
        int a
        int b

    def __init__(self, int a, int b):
        self.a = a
        self.b = b

    def __add__(self, Pair other):
        return Pair(self.a + other.a, self.b + other.b)

p1 = Pair(1, 2)
p2 = Pair(3, 4)
p3 = p1 + p2
print(p3.a, p3.b)
But I end up with a lot of highlighted Python interaction in the annotated compiler output.
It seems like the __add__ function is converting objects from Python floats to Cython doubles and doing a bunch of type checking. Am I doing something wrong?
There are likely a couple of issues:
I'm assuming that you're using Cython 0.29.x (and not the newer Cython 3 alpha). See https://cython.readthedocs.io/en/stable/src/userguide/special_methods.html#arithmetic-methods
This means that you can’t rely on the first parameter of these methods being “self” or being the right type, and you should test the types of both operands before deciding what to do
It is likely treating self as untyped and thus accessing a and b as Python attributes.
The Cython 3 alpha treats special methods differently (see https://cython.readthedocs.io/en/latest/src/userguide/special_methods.html#arithmetic-methods) so you could also consider upgrading to that.
Although the call to __init__ has C-typed arguments, it's still a Python call, so you can't avoid boxing and unboxing the arguments to Python ints. You could avoid this call and do something like:
cdef Pair res = Pair.__new__(Pair)
res.a = ...  # direct assignment to the C attribute, skipping __init__
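Putting both points together, a sketch for Cython 0.29.x might look like the following (only a sketch; which operand combinations you accept is up to you):

cdef class Pair:
    cdef public:
        int a
        int b

    def __init__(self, int a, int b):
        self.a = a
        self.b = b

    def __add__(x, y):
        # In 0.29.x either operand may be the Pair, so check both
        # before touching the C-level attributes.
        cdef Pair left, right, res
        if isinstance(x, Pair) and isinstance(y, Pair):
            left = <Pair>x
            right = <Pair>y
            res = Pair.__new__(Pair)   # skips __init__ and its boxing
            res.a = left.a + right.a
            res.b = left.b + right.b
            return res
        return NotImplemented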
In the code below, self is a Python object, as it is declared with def instead of cdef. Can a Python object be referenced inside a cdef function, just like it is used in c_function in my example below?
I am confused because cdef makes it sound like c_function is a C function, so I am not sure whether it can take a Python object.
cdef class A:
    cdef int a

    def __init__(self, int w):
        self.a = w

    cdef c_function(self, int b):
        return self.a + b
Thank you,
I am the OP.
From "Cython: A Guide for Python Programmers":
Source: https://books.google.com.hk/books?id=H3JmBgAAQBAJ&pg=PA49&lpg=PA49&dq=python+object+cdef+function&source=bl&ots=QI9If_wiyR&sig=ACfU3U1CmEPBaEVmW0UN1_9m9G8B9a6oFQ&hl=en&sa=X&ved=2ahUKEwiBgs3Lw5vqAhXSZt4KHTs0DK0Q6AEwBHoECAoQAQ#v=onepage&q=python%20object%20cdef%20function&f=false
"[...] Nothing prevents us from declaring and using Python objects and dynamic variables in cdef functions, or accepting them as arguments"
So I take this as the book saying "yes" to my original question.
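As a small illustration of that (my own sketch, not from the book), a cdef function can declare Python-typed parameters and locals and use them like any other Python objects:

# A free cdef function taking and returning Python objects.
cdef dict tally(list words):
    cdef dict counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts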
In the docs it is written that "Any C data that you explicitly allocated (e.g. via malloc) in your __cinit__() method should be freed in your __dealloc__() method."
This is not my case. I have the following extension class:
from collections import deque

cdef class SomeClass:
    cdef dict data
    cdef void * u_data

    def __init__(self, data_len):
        self.data = {'columns': []}
        if data_len > 0:
            self.data.update({'data': deque(maxlen=data_len)})
        else:
            self.data.update({'data': []})
        self.u_data = <void *>self.data

    @property
    def data(self):
        return self.data

    @data.setter
    def data(self, new_val: dict):
        self.data = new_val
Some C function has access to this class and appends data to the SomeClass().data dict. What should I write in __dealloc__ when I want to delete an instance of SomeClass?
Maybe something like:
def __dealloc__(self):
    self.data = None
    free(self.u_data)
Or is there no need to dealloc anything at all?
No, you don't need to, and no, you shouldn't. From the documentation:
You need to be careful what you do in a __dealloc__() method. By the time your __dealloc__() method is called, the object may already have been partially destroyed and may not be in a valid state as far as Python is concerned, so you should avoid invoking any Python operations which might touch the object. In particular, don’t call any other methods of the object or do anything which might cause the object to be resurrected. It’s best if you stick to just deallocating C data.
You don’t need to worry about deallocating Python attributes of your object, because that will be done for you by Cython after your __dealloc__() method returns.
You can confirm this by inspecting the generated C code (you need to look at the full code, not just the annotated HTML). There's an autogenerated function __pyx_tp_dealloc_9someclass_SomeClass (the name may vary slightly depending on what you called your module) that does a range of things, including:
__pyx_pw_9someclass_9SomeClass_3__dealloc__(o);
/* some other code */
Py_CLEAR(p->data);
where the function __pyx_pw_9someclass_9SomeClass_3__dealloc__ is (a wrapper for) your user-defined __dealloc__. Py_CLEAR will ensure that data is appropriately reference-counted then set to NULL.
It's a little hard to follow because it all goes through several layers of wrappers, but you can confirm that it does what the documentation says.
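For contrast, __dealloc__ is only needed for C-level resources that you allocated yourself, as in this separate sketch (nothing to do with the dict above):

from libc.stdlib cimport malloc, free

cdef class Buffer:
    cdef double* values

    def __cinit__(self, Py_ssize_t n):
        self.values = <double*>malloc(n * sizeof(double))
        if self.values == NULL:
            raise MemoryError()

    def __dealloc__(self):
        # Free only what we malloc'ed; Python attributes are
        # cleaned up by Cython after this method returns.
        free(self.values)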
I've been playing with Cython recently for the speed-ups, but when I tried to use copy.deepcopy() an error occurred. Here is the code:
from copy import deepcopy

cdef class cy_child:
    cdef public:
        int move[2]
        int Q
        int N

    def __init__(self, move):
        self.move = move
        self.Q = 0
        self.N = 0

a = cy_child((1, 2))
b = deepcopy(a)
This is the error:
can't pickle _cython_magic_001970156a2636e3189b2b84ebe80443.cy_child objects
How can I solve the problem for this code?
As hpaulj says in the comments, deepcopy looks to use pickle by default to do its work. Cython cdef classes didn't use to be picklable. In recent versions of Cython they are, where possible (see also http://blog.behnel.de/posts/whats-new-in-cython-026.html), but pickling the array looks to be a problem (and even without the array I didn't get it to work).
The solution is to implement the relevant functions yourself. I've done __deepcopy__ since it's simple, but alternatively you could implement the pickle protocol:
def __deepcopy__(self, memo_dictionary):
    res = cy_child(self.move)
    res.Q = self.Q
    res.N = self.N
    return res
I suspect that you won't need to do that in the future as Cython improves their pickle implementation.
A note on memo_dictionary: Suppose you have
a = [None]
b = [a]
a[0] = b
# i.e. a contains a link to b and b contains a link to a
c = deepcopy(a)
memo_dictionary is used by deepcopy to keep a note of what it's already copied so that it doesn't loop forever. You don't need to do much with it yourself. However, if your cdef class contains a Python object (including another cdef class) you should copy it like this:
cdef class C:
    cdef object o

    def __deepcopy__(self, memo_dictionary):
        # ...
        res.o = deepcopy(self.o, memo_dictionary)
        # ...
(i.e. make sure it gets passed on to further calls of deepcopy)
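If you would rather implement the pickle protocol mentioned above instead of __deepcopy__, a sketch along the lines of the earlier Foo example (untested against your exact setup) would be to add to cy_child:

def __reduce__(self):
    # Rebuild the instance from its constructor argument, then let
    # __setstate__ restore the remaining attributes.
    return (cy_child, (tuple(self.move),), {'Q': self.Q, 'N': self.N})

def __setstate__(self, state):
    self.Q = state['Q']
    self.N = state['N']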
I've got a C library that I'm trying to wrap in Cython. One of the classes I'm creating contains a pointer to a C structure. I'd like to write a copy constructor that would create a second Python object pointing to the same C structure, but I'm having trouble, as the pointer cannot be converted into a Python object.
Here's a sketch of what I'd like to have:
cdef class StructName:
    cdef c_libname.StructName* __structname

    def __cinit__(self, other = None):
        if not other:
            self.__structname = c_libname.constructStructName()
        elif type(other) is StructName:
            self.__structname = other.__structname
The real problem is that last line: it seems Cython can't access cdef fields from within a Python method. I've tried writing an accessor method, with the same result. How can I create a copy constructor in this situation?
With cdef classes, attribute access is compiled to C struct member access. As a consequence, to access a cdef member of an object A, you have to be sure of the type of A. In __cinit__ you didn't tell Cython that other is an instance of StructName, so Cython refuses to compile other.__structname. To fix the problem, just write:
def __cinit__(self, StructName other = None):
Note: None is equivalent to NULL and therefore is accepted as a StructName.
If you want more polymorphism then you have to rely on type casts:
def __cinit__(self, other = None):
    cdef StructName ostr
    if not other:
        self.__structname = c_libname.constructStructName()
    elif type(other) is StructName:
        ostr = <StructName> other
        self.__structname = ostr.__structname
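Either way, copy construction would then look something like this (constructStructName is still the hypothetical C constructor from the question):

s1 = StructName()     # allocates a new underlying C struct
s2 = StructName(s1)   # second wrapper sharing the same C struct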