I have two classes, each constructible from the other.
Example:
class B;

class A {
public:
    double val;
    constexpr A(B b): val(b.val){};
};

class B {
public:
    double val;
    constexpr B(A a): val(a.val){};
};
I need to forward-declare class B, so A knows about it. When these constructors are not constexpr, I can move their definitions to a source file and it happily compiles.
However, to make them constexpr, they have to be defined in the header. B is fine to construct from A, because it sees the full definition of A. A cannot be constructed from B, because it only sees a declaration and therefore has no idea about B::val.
I'm left with making only B's constructor constexpr. Is there a way to do it for both classes?
Using gcc I get the error (https://godbolt.org/z/qvP7absdr):
<source>:6:15: error: 'b' has incomplete type
6 | constexpr A(B b) : val(b.val){};
| ~~^
<source>:1:7: note: forward declaration of 'class B'
1 | class B;
| ^
<source>: In constructor 'constexpr A::A(B)':
<source>:6:11: error: invalid type for parameter 1 of 'constexpr' function 'constexpr A::A(B)'
6 | constexpr A(B b) : val(b.val){};
| ^
Compiler returned: 1
So this fails because type B is incomplete when it is used in the definition of the constructor A::A(B b).
To deal with this, we can wait until B has been fully defined before we define the constructor that uses it. Essentially, move the definition of the constructor out of class A to after class B:
class B;

class A {
public:
    double val;
    constexpr A(B b);
};

class B {
public:
    double val;
    constexpr B(A a): val(a.val){};
};

constexpr A::A(B b) : val(b.val){};
See an example without compilation issues:
https://godbolt.org/z/44fbcr8sh
I have the following code which works:
%%cython

cdef int add(int a, int b):
    return a+b

cdef int mult(int a, int b):
    return a*b

ctypedef int (*function_type)(int a, int b)

cdef int lambda_c(function_type func, int c, int d):
    return func(c,d)

print(lambda_c(add, 5, 6))
print(lambda_c(mult, 5, 6))
So I have a lambda_c function that takes a C function as an argument, and I am not able to change it (as it is a wrapper over a C library that is supported by another team).
What I want to do is to write a wrapper:
cdef class PyClass:
    def py_wrap(func, e, f):
        return lambda_c(func, e, f)

print(PyClass().py_wrap(add, 5, 5))
print(PyClass().py_wrap(mult, 6, 6))
But this throws an error:
Cannot convert Python object to 'function_type'
I also tried to cast func (return lambda_c(<function_type>func, e, f)) but got an error:
Python objects cannot be cast to pointers of primitive types
The idea behind this is the following: any user will be able to write his own function in Cython, compile it, and then import it and pass it to the PyClass().py_wrap method.
Is it even possible to import a pure C function and pass it as a parameter with Cython?
I also saw Pass cython functions via python interface, but unlike the solution there, I am not able to change the lambda_c function and turn it into a class. Moreover, lambda_c takes only functions of a certain type (function_type in our example).
If you want to pass an arbitrary Python callable as a C function pointer then it doesn't work - there's no way to do this in standard C (and thus it's impossible for Cython to generate the code). There's a very hacky workaround involving ctypes (which can do runtime code generation), which I'll find links to if needed. However, I don't really recommend it.
If you're happy for your users to write cdef functions in Cython (the question implies you are) then you can just build on my answer to your question yesterday.
Write a suitable wrapper class (you just need to change the function pointer type) - this gets split between your .pxd and .pyx files that you write.
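The linked answer's exact wrapper isn't reproduced here, so the sketch below is only a rough outline: the attribute name func and the factory make_from_ptr are taken from how they are used later in this answer, and everything else is illustrative.

# your_module.pxd (sketch; `func` and `make_from_ptr` assumed from the usage below)
ctypedef int (*function_type)(int a, int b)

cdef class FuncWrapper:
    cdef function_type func

    @staticmethod
    cdef FuncWrapper make_from_ptr(function_type f)

And the matching definition in the .pyx:

# your_module.pyx (sketch)
cdef class FuncWrapper:
    @staticmethod
    cdef FuncWrapper make_from_ptr(function_type f):
        cdef FuncWrapper wrapper = FuncWrapper()
        wrapper.func = f
        return wrapper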
Have your users cimport it and then use it to expose their cdef functions to Python:
from your_module cimport FuncWrapper

cdef int add_c_implementation(int a, int b):
    return a+b

# `add` accessible from Python
add = FuncWrapper.make_from_ptr(add_c_implementation)
Change PyClass to take FuncWrapper as an argument:
# this is in your_module.pyx
cdef class PyClass:
    def py_wrap(self, FuncWrapper func, e, f):
        return lambda_c(func.func, e, f)
Your users can then use their compiled functions from Python:
from your_module import PyClass
from users_module import add
PyClass().py_wrap(add,e,f)
Really all this is doing is using a small Python wrapper to allow you to pass around a type that Python normally can't deal with. You're pretty limited in what you can do with these wrapped function pointers (for example, they must be set up in Cython), but it does give you a handle to select and pass them.
I am not sure if you are allowed to change the function pointer type from
ctypedef int (*function_type)(int a, int b)
to
ctypedef int (*function_type)(int a, int b, void *func_d)
but this is usually the way callback functions are implemented in C. The void * parameter func_d carries the user-provided data in any form. If the answer is yes, then you can have the following solution.
First, create the following definition file in Cython to expose your C API to other Cython users:
# binary_op.pxd
ctypedef int (*func_t)(int a, int b, void *func_d) except? -1

cdef int func(int a, int b, void *func_d) except? -1

cdef class BinaryOp:
    cpdef int eval(self, int a, int b) except? -1

cdef class Add(BinaryOp):
    cpdef int eval(self, int a, int b) except? -1

cdef class Multiply(BinaryOp):
    cpdef int eval(self, int a, int b) except? -1
This basically allows any Cython user to cimport these definitions directly into their Cython code and bypass any Python-related function calls. Then, you implement the module in the following pyx file:
# binary_op.pyx
cdef int func(int a, int b, void *func_d) except? -1:
    return (<BinaryOp>func_d).eval(a, b)

cdef class BinaryOp:
    cpdef int eval(self, int a, int b) except? -1:
        raise NotImplementedError()

cdef class Add(BinaryOp):
    cpdef int eval(self, int a, int b) except? -1:
        return a + b

cdef class Multiply(BinaryOp):
    cpdef int eval(self, int a, int b) except? -1:
        return a * b

def call_me(BinaryOp oper not None, c, d):
    return func(c, d, <void *>oper)
As you can see, BinaryOp serves as the base class, which raises NotImplementedError for users who do not override eval properly. cpdef functions can be overridden by both Cython and Python users, and if they are called from Cython, efficient C calling mechanisms are used. Otherwise, there is a small overhead when called from Python (well, of course, these functions work on scalars, and hence, the overhead might not be that small).
Then, a Python user might have the following application code:
# app_1.py
import pyximport
pyximport.install()

from binary_op import BinaryOp, Add, Multiply, call_me

print(call_me(Add(), 5, 6))
print(call_me(Multiply(), 5, 6))

class LinearOper(BinaryOp):
    def __init__(self, p1, p2):
        self.p1 = p1
        self.p2 = p2

    def eval(self, a, b):
        return self.p1 * a + self.p2 * b

print(call_me(LinearOper(3, 4), 5, 6))
As you can see, they can not only create objects from efficient Cython (concrete) classes (i.e., Add and Multiply) but also implement their own classes based on BinaryOp (hopefully by providing an implementation of eval). When you run python app_1.py, you will see (after the compilation):
11
30
39
Then, your Cython users can implement their favorite functions as follows:
# sub.pyx
from binary_op cimport BinaryOp

cdef class Sub(BinaryOp):
    cpdef int eval(self, int a, int b) except? -1:
        return a - b
Well, of course, any application code that uses sub.pyx can use both libraries as follows:
import pyximport
pyximport.install()
from sub import Sub
from binary_op import call_me
print(call_me(Sub(), 5, 6))
When you run python app_2.py, you get the expected result: -1.
EDIT. By the way, provided that you are allowed to have the aforementioned function_type signature (i.e., the one that has a void * parameter as the third argument), you can in fact pass an arbitrary Python callable object as a C pointer. For this to happen, you need to have the following changes:
# binary_op.pyx
cdef int func(int a, int b, void *func_d) except? -1:
    return (<object>func_d)(a, b)

def call_me(oper not None, c, d):
    return func(c, d, <void *>oper)
Note, however, that Python now needs to figure out which object oper is. In the former solution, we were constraining oper to be a valid BinaryOp object. Note also that __call__ and similar special functions can only be declared def, which limits your use case. Nevertheless, with these last changes, we can have the following code run without any problems:
print(call_me(lambda x, y: x - y, 5, 6))
Thanks to @ead, I have changed the code a bit and the result satisfies me:
cdef class PyClass:
    cdef void py_wrap(self, function_type func, e, f):
        print(lambda_c(func, e, f))

PyClass().py_wrap(mult, 5, 5)
For my purposes it is OK to have a void function, but I do not know how to make it all work with a method that should return some value. Any ideas for this case will be useful.
UPD: cdef methods are not visible from Python, so it looks like there is no way to make things work.
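That conclusion applies to calling the wrapper from Python; if the wrapper only ever needs to be called from other Cython code (as in the snippet above), it can simply return a value. A minimal sketch, assuming the mult, lambda_c and function_type definitions from the question:

cdef class PyClass:
    # cdef method: only callable from Cython, but it can return a C int directly
    cdef int py_wrap(self, function_type func, int e, int f):
        return lambda_c(func, e, f)

print(PyClass().py_wrap(mult, 5, 5))

Exposing the result to Python callers would still require a def or cpdef entry point, for example one taking a FuncWrapper as in the answer above.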
I have a cython function like:
cdef void foo(int a, int b=1, int c=2):
    pass
and a getter function to return its address:
def get_foo():
    return <size_t>foo
so I can get function foo's address somewhere and cast it back to the real function in Cython (e.g. a callback one can use in Python).
The problem is how to write such a type?
I tried:
cdef void foo(int a, int b=1, int c=2):
    pass

ctypedef void (*foo_type)(int a, int b, int c)

cdef foo_type f = foo
this won't compile, cython complains: Cannot assign type 'void (int, struct __pyx_opt_args_46_cython_magic_49f265438c694830523a60bef4fe2ee8_foo *__pyx_optional_args)' to 'foo_type '
From the error message, one can note that the optional (default) arguments are wrapped in a struct by Cython.
Is there a way to do such a ctypedef in Cython? If not, I think I'd better leave out the default values. :-(
You can create small wrappers without the default values:
cdef void foo_abc(int a, int b, int c):
    foo(a,b,c)

# if you need it
cdef void foo_ab(int a, int b):
    foo(a,b)

# etc
These will be convertible to function pointers:
cdef foo_type f = foo_abc
The speed loss from the wrapper functions will be small, so the main cost is just that you need to write a bit more code.
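For completeness, a rough sketch of the round trip the question describes (returning the address as an integer and casting it back), assuming the foo_abc wrapper and foo_type typedef above; the function and variable names here are only illustrative:

def get_foo_abc():
    # expose the wrapper's address to Python as a plain integer
    return <size_t>foo_abc

cdef size_t addr = get_foo_abc()
# cast the integer back to the function pointer type and call it
cdef foo_type f2 = <foo_type><void *>addr
f2(1, 2, 3)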
I have a function that has a lot of parameters. (4-7 parameters)
For simplicity, this is an example:-
class B{
    friend class C;

    int f(int param1, float param2, structA a, structB b){
        //... some code ...
    }

    //.... other functions ....
};
Sometimes, I want to encapsulate it under another (more-public) function that has the same signature:-
class C{
    B* b;
public:
    int g(int param1, float param2, structA a, structB sb){
        return b->f(param1, param2, a, sb);
    }

    //.... other functions ....
};
In my opinion, the above code is :-
tedious
causes a bit of a maintainability problem
human error-prone
Is there any C++ technique / magic / design-pattern to assist it?
In the real case, this happens mostly in edge cases where composition is just a little more suitable than inheritance.
I feel that <...> might solve my problem, but it requires templates, which I want to avoid.
but it requires templates, which I want to avoid.
That's, in my opinion, the wrong mindset to have. You should only avoid templates if you have a very good reason to do so; otherwise you should embrace them - they are a core feature of the C++ language.
With a variadic template, you can create a perfect-forwarding wrapper as follows:
class C{
    B* b;

public:
    template <typename... Ts>
    int g(Ts&&... xs){
        return b->f(std::forward<Ts>(xs)...);
    }
};
The above g function template will accept any number of arguments and call b->f by perfectly-forwarding them.
(Using std::forward allows your wrapper to properly retain the value category of the passed expressions when invoking the wrapper. In short, this means that no unnecessary copies/moves will be made and that references will be correctly passed as such.)
In a public header:
using f_sig = int(int param1, float param2, structA a, structB b);

class hidden;

class famous {
    hidden* pImpl;
public:
    f_sig g;
};
In your .cpp:
class hidden {
    friend class famous;
    f_sig f;
};
Now, you cannot use this pattern to define what f or g does, but it does declare their signatures. And if your definition doesn't match the declaration, you get an error.
int hidden::f(int param1, float param2, structA a, structB b) {
    std::cout << "f!";
    return 0;
}

int famous::g(int param1, float param2, structA a, structB b) {
    return pImpl->f(param1, param2, a, b);
}
Type the signatures wrong above, and you'll get a compile-time error.
I'm trying to overload make_uint4 in the following manner:
namespace A {
namespace B {

inline __host__ __device__ uint4 make_uint4(uint2 a, uint2 b) {
    return make_uint4(a.x, a.y, b.x, b.y);
}

}
}
But when I try to compile it, nvcc returns an error:
error: no suitable constructor exists to convert from "unsigned int" to "uint2"
error: no suitable constructor exists to convert from "unsigned int" to "uint2"
error: too many arguments in function call
All these errors point to the "return…" line.
I was able to get a partial repro on VS 2010 and CUDA 4.0 (the compiler built the code OK but Intellisense flagged the error you are seeing). Try the following:
#include "vector_functions.h"
inline __host__ __device__ uint4 make_uint4(uint2 a, uint2 b)
{
return ::make_uint4(a.x, a.y, b.x, b.y);
}
This fixed it for me.
I have no problem compiling it in Visual Studio+nvcc. What compiler are you using?
In case it helps: make_uint4 is defined in vector_functions.h, line 170, as
static __inline__ __host__ __device__ uint4 make_uint4(unsigned int x, unsigned int y, unsigned int z, unsigned int w)
{
uint4 t; t.x = x; t.y = y; t.z = z; t.w = w; return t;
}
Update:
I get a similar error when I try to overload the function while inside my custom namespace. Are you certain you are not inside one? If so, try putting :: in front of the function call to refer to the global scope, i.e.:
return ::make_uint4(a.x, a.y, b.x, b.y);
I don't have the library code, but it seems like the compiler doesn't like overloaded device functions (as they are treated just like really fancy inline macros). What it does is shadow (hide) the old make_uint4(a,b,c,d) with your new make_uint4(va, vb) and try to call the latter with 4 uint parameters. That doesn't work because there is no conversion from uint to uint2 (as indicated by the first two error messages) and there are 4 instead of 2 arguments (the last error message).
Use a slightly different function name like make_uint4_from_uint2s and you'll be fine.
Partial template specialization is one of the most important concepts for generic programming in C++. For example: to implement a generic swap function:
template <typename T>
void swap(T &x, T &y) {
    const T tmp = x;
    x = y;
    y = tmp;
}
To specialize it for a vector to support O(1) swap:
template <typename T, class Alloc>
void swap(vector<T, Alloc> &x, vector<T, Alloc> &y) { x.swap(y); }
So you can always get optimal performance when you call swap(x, y) in a generic function.
I'd much appreciate it if you could post the equivalent (or the language's canonical example of partial specialization, if the language doesn't support the swap concept) in alternative languages.
EDIT: so it looks like many people who answered/commented really don't know what partial specialization is, and the generic swap example seems to get in the way of understanding for some people. A more general example would be:
template <typename T>
void foo(T x) { generic_foo(x); }
A partial specialization would be:
template <typename T>
void foo(vector<T> x) { partially_specialized_algo_for_vector(x); }
A complete specialization would be:
void foo(vector<bool> bitmap) { special_algo_for_bitmap(bitmap); }
Why is this important? Because you can call foo(anything) in a generic function:
template <typename T>
void bar(T x) {
    // stuff...
    foo(x);
    // more stuff...
}
and get the most appropriate implementation at compile time. This is one way for C++ to achieve abstraction w/ minimal performance penalty.
Hope this helps clear up the concept of "partial specialization". In a way, this is how C++ does type pattern matching without needing explicit pattern-matching syntax (say, the match keyword in OCaml/F#), which sometimes gets in the way of generic programming.
D supports partial specialization:
Language overview
Template feature comparison (with C++ 98 and 0x).
(scan for "partial" in the above links).
The second link in particular will give you a very detailed breakdown of what you can do with template specialization, not only in D but in C++ as well.
Here's a D-specific example of swap. It should print out the message for the swap specialized for the Thing class.
import std.stdio; // for writefln

// Class with swap method
class Thing(T)
{
public:
    this(T thing)
    {
        this.thing = thing;
    }

    // Implementation is the same as generic swap, but it will be called instead.
    void swap(Thing that)
    {
        const T tmp = this.thing;
        this.thing = that.thing;
        that.thing = tmp;
    }

public:
    T thing;
}

// Swap generic function
void swap(T)(ref T lhs, ref T rhs)
{
    writefln("Generic swap.");
    const T tmp = lhs;
    lhs = rhs;
    rhs = tmp;
}

void swap(T : Thing!(U))(ref T lhs, ref T rhs)
{
    writefln("Specialized swap method for Things.");
    lhs.swap(rhs);
}

// Test case
int main()
{
    auto v1 = new Thing!(int)(10);
    auto v2 = new Thing!(int)(20);

    assert (v1.thing == 10);
    assert (v2.thing == 20);

    swap(v1, v2);

    assert (v1.thing == 20);
    assert (v2.thing == 10);

    return 0;
}
I am afraid that C# does not support partial template specialization.
Partial template specialization means:
You have a base class with two or more templates (generics / type parameters).
The type parameters would be <T, S>
In a derived (specialized) class you indicate the type of one of the type parameters.
The type parameters could look like this <T, int>.
So when someone uses (instantiates an object of) the class where the last type parameter is an int, the derived class is used.
Haskell has overlapping instances as an extension:
class Sizable a where
    size :: a -> Int

instance Collection c => Sizable c where
    size = length . toList
This gives a way to find the size of any collection, and it can be overridden by more specific instances:
instance Sizable (Seq a) where
    size = Seq.length
See also Advanced Overlap on HaskellWiki.
Actually, you can (not quite; see below) do it in C# with extension methods:
public static int Count<T>(this IEnumerable<T> seq) {
    int n = 0;
    foreach (T t in seq)
        n++;
    return n;
}

public static int Count<T>(this T[] arr) {
    return arr.Length;
}
Then calling array.Count() will use the specialised version. "Not quite" is because the resolution depends on the static type of array, not on the run-time type. I.e. this will use the more general version:
IEnumerable<int> array = SomethingThatReturnsAnArray();
return array.Count();
C#:
void Swap<T>(ref T a, ref T b) {
    var c = a;
    a = b;
    b = c;
}
I guess the (pure) Haskell-version would be:
swap :: a -> b -> (b,a)
swap a b = (b, a)
Java has generics, which allow you to do similar sorts of things.