thrust transform defining custom binary function - cuda

I am trying to write a custom function to carry out a sum. I followed this question, Cuda Thrust Custom function, for reference. Here is how I have defined my functor:
struct hashElem
{
    int freq;
    int error;
};

// basically this functor adds some value to the error field of each element
struct hashErrorAdd
{
    const int error;

    hashErrorAdd(int _error): error(_error) {}

    __host__ __device__
    struct hashElem operator()(const hashElem& o1, const int& o2)
    {
        struct hashElem o3;
        o3.freq  = o1.freq;
        o3.error = o1.error + (NUM_OF_HASH_TABLE - o2) * error; // NUM_OF_HASH_TABLE is a constant
        return o3;
    }
};
struct hashElem freqError[SIZE_OF_HASH_TABLE*NUM_OF_HASH_TABLE];
int count[SIZE_OF_HASH_TABLE*NUM_OF_HASH_TABLE];
thrust::device_ptr<struct hashElem> d_freqError(freqError);
thrust::device_ptr<int> d_count(count);
thrust::transform(thrust::device,
                  d_freqError, d_freqError + new_length,
                  d_count,
                  hashErrorAdd(perThreadLoad)); // new_length is a constant
This code on compilation gives the following error:
error: function "hashErrorAdd::operator()" cannot be called with the given argument list
argument types are: (hashElem)
object type is: hashErrorAdd
Can anybody please explain why I am getting this error and how I can resolve it? Please comment in case I have not explained the problem clearly. Thank you.

It appears that you want to pass two input vectors to thrust::transform and then do an in-place transform (i.e. no output vector is specified).
There is no such incarnation of thrust::transform.
Since you have passed (ignoring the execution policy):
thrust::transform(input_first, input_last, output_first, op);
the closest matching prototype is the version of transform that takes one input vector and produces one output vector. In that case you would need to pass a unary op that takes the input vector's value type (hashElem) as its only argument and returns a type appropriate for the output vector, which as you have written it is int (probably not what you intended). Your operator() does not do that, so it cannot be called with the arguments that thrust is expecting to pass to it.
As I see it, you have a couple options:
You could switch to the version of transform that takes two input vectors and produces one output vector, and create a binary op as the functor (see the sketch at the end of this answer).
You could zip together your two input vectors, and do an in-place transform if that is what you want. Your functor would then be a unary op, but it would take as argument whatever tuple was created from dereferencing the input vector, and it would have to return or modify the same kind of tuple.
As an aside, your method of creating device pointers directly from host arrays looks broken to me. You may wish to review the thrust quick start guide.
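Here is a minimal sketch of the first option (not your exact code): it assumes the data lives in device memory (thrust::device_vector, per the aside above) and that you want to write the result back into d_freqError. Supplying a fifth argument as the output iterator selects the binary-op overload of thrust::transform, and your existing functor is already a suitable binary op:
#include <thrust/device_vector.h>
#include <thrust/transform.h>

thrust::device_vector<hashElem> d_freqError(SIZE_OF_HASH_TABLE*NUM_OF_HASH_TABLE);
thrust::device_vector<int>      d_count(SIZE_OF_HASH_TABLE*NUM_OF_HASH_TABLE);
// ... fill d_freqError and d_count ...

thrust::transform(d_freqError.begin(), d_freqError.begin() + new_length, // first input range
                  d_count.begin(),                                       // second input range
                  d_freqError.begin(),                                   // output range (in-place here)
                  hashErrorAdd(perThreadLoad));                          // your functor as a binary op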

Related

C++11: Why result_of can accept functor type as lvalue_reference, but not function type as lvalue_reference?

I've got program below:
#include <type_traits>
#include <iostream>
using namespace std;

template <class F, class R = typename result_of<F()>::type>
R call(F& f) { return f(); }

struct S {
    double operator()() { return 0.0; }
};

int f() { return 1; }

int main()
{
    S obj;
    call(obj); // ok
    call(f);   // error!
    return 0;
}
It fails to compile at the line call(f).
It's weird that call(obj) is OK.
(1) I found a similar post in another thread, C++11 result_of deducing my function type failed, but it doesn't explain why functor objects are OK while functions are not.
(2) I'm not sure if this is related to "R call(F& f)": can a function type not be bound as an lvalue?
(3) As far as I know, any named entity, like a variable or function, should be considered an lvalue. And in the case of a function parameter, the compiler should "decay" my function name "f" to a function pointer, right?
(4) This is like decaying an array and passing it to a function. A function pointer can be an lvalue, so what's wrong with "call(F& f)"?
Could you give some further explanation of why this happens in my case, and where I went wrong?
Thanks.
The problem with call(f) is that you deduce F as a function type, so it doesn't decay to a function pointer. Instead you get a reference to a function. Then the result_of<F()> expression is invalid, because F() is int()() i.e. a function that returns a function, which is not a valid type in C++ (functions can return pointers to functions, or references to functions, but not functions).
It will work if you use result_of<F&()> which is more accurate anyway, because that's how you're calling the callable object. Inside call(F& f) you do f() and in that context f is an lvalue, so you should ask what the result of invoking an lvalue F with no arguments is, otherwise you could get the wrong answer. Consider:
struct S {
    double operator()() &  { return 0.0; }
    void   operator()() && { }
};
Now result_of<F()>::type is void, which is not the answer you want.
If you use result_of<F&()> then you get the right answer, and it also works when F is a function type, so call(f) works too.
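For reference, here is the program from the question with only that one change (a sketch; the result_of<F&()> default argument is the only difference):
#include <type_traits>
#include <iostream>
using namespace std;

template <class F, class R = typename result_of<F&()>::type>
R call(F& f) { return f(); }

struct S {
    double operator()() { return 0.0; }
};

int f() { return 1; }

int main()
{
    S obj;
    cout << call(obj) << '\n'; // 0
    cout << call(f)   << '\n'; // 1 -- now compiles
    return 0;
}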
(3) As far as I know, any named entity, like a variable or function, should be considered an lvalue. And in the case of a function parameter, the compiler should "decay" my function name "f" to a function pointer, right?
No, see above. Your call(F&) function takes its argument by reference, so there is no decay.
(4) This is like decaying an array and passing it to a function. A function pointer can be an lvalue, so what's wrong with "call(F& f)"?
Arrays don't decay when you pass them by reference either.
If you want the argument to decay then you should write call(F f) not call(F& f). But even if you do that you still need to use result_of correctly to get the result of f() where f is an lvalue.

Memory allocation in cuda c [duplicate]

For example, cudaMalloc((void**)&device_array, num_bytes);
This question has been asked before, and the reply was "because cudaMalloc returns an error code", but I don't get it: what does a double pointer have to do with returning an error code? Why can't a simple pointer do the job?
If I write
cudaError_t catch_status;
catch_status = cudaMalloc((void**)&device_array, num_bytes);
the error code will be put in catch_status, and returning a simple pointer to the allocated GPU memory should suffice, shouldn't it?
In C, data can be passed to functions by value or via simulated pass-by-reference (i.e. by a pointer to the data). By value is a one-way methodology, by pointer allows for two-way data flow between the function and its calling environment.
When a data item is passed to a function via the function parameter list, and the function is expected to modify the original data item so that the modified value shows up in the calling environment, the correct C method for this is to pass the data item by pointer. In C, when we pass by pointer, we take the address of the item to be modified, creating a pointer (perhaps a pointer to a pointer in this case) and hand the address to the function. This allows the function to modify the original item (via the pointer) in the calling environment.
Normally malloc returns a pointer, and we can use assignment in the calling environment to assign this returned value to the desired pointer. In the case of cudaMalloc, the CUDA designers chose to use the returned value to carry an error status rather than a pointer. Therefore the setting of the pointer in the calling environment must occur via one of the parameters passed to the function, by reference (i.e. by pointer). Since it is a pointer value that we want to set, we must take the address of the pointer (creating a pointer to a pointer) and pass that address to the cudaMalloc function.
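To illustrate the idea, here is a minimal sketch using ordinary malloc and a made-up function name (my_alloc), not the real CUDA API:
#include <stdio.h>
#include <stdlib.h>

// hypothetical allocator in the cudaMalloc style: the return value carries an
// error status, so the allocated pointer must be delivered through a parameter
int my_alloc(void **out, size_t bytes)
{
    *out = malloc(bytes);          // modifies the caller's pointer through its address
    return (*out == NULL) ? 1 : 0; // non-zero means failure
}

int main(void)
{
    float *device_array = NULL;
    int status = my_alloc((void **)&device_array, 100 * sizeof(float));
    if (status != 0)
        printf("allocation failed\n");
    free(device_array);
    return 0;
}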
Adding to Robert's answer, but first to reiterate: it is a C API, which means it does not support references, which would have allowed you to modify the value of a pointer (not just what it points to) inside the function; the answer by Robert Crovella explains this. Also note that the parameter needs to be a generic void** rather than a typed pointer, because C does not support function overloading (or templates) either.
Further, when using a C API within a C++ program (but you have not stated this), it is common to wrap such a function in a template. For example,
template <typename T>
cudaError_t cudaAlloc(T*& d_p, size_t elements)
{
    return cudaMalloc((void**)&d_p, elements * sizeof(T));
}
There are two differences in how you would call the above cudaAlloc function:
Pass the device pointer directly, without using the address-of operator (&) when calling it, and without casting to a void type.
The second argument, elements, is now the number of elements rather than the number of bytes; the sizeof operator takes care of the conversion. It is arguably more intuitive to specify elements and not have to worry about bytes.
For example:
float *d = nullptr;                // floats, 4 bytes per element
size_t N = 100;                    // 100 elements
cudaError_t err = cudaAlloc(d, N); // modifies d; N is in elements, not bytes
if (err != cudaSuccess)
    std::cerr << "Unable to allocate device memory" << std::endl;
I guess the signature of the cudaMalloc function can be better explained with an example. It basically assigns a buffer through a pointer to that buffer (a pointer to a pointer), like the following method:
int cudaMalloc(void **memory, size_t size)
{
    int errorCode = 0;
    *memory = new char[size];
    return errorCode;
}
As you can see, the method takes a pointer to a pointer, through which it stores the address of the newly allocated memory. It then returns the error code (here as an integer, but in the real API it is an enum).
The cudaMalloc function could also have been designed as follows:
void * cudaMalloc(size_t size, int * errorCode = nullptr)
{
    if (errorCode)
        *errorCode = 0;
    char *memory = new char[size];
    return memory;
}
In this second case, the error code is set through a pointer parameter that defaults to null (for people who do not care about the error code at all). The allocated memory is then returned.
The first method can be used just like the actual cudaMalloc is used today:
float *p;
int errorCode;
errorCode = cudaMalloc((void**)&p, sizeof(float));
While the second one can be used as follows:
float *p;
int errorCode;
p = (float *) cudaMalloc(sizeof(float), &errorCode);
The two methods are functionally equivalent; they just have different signatures. The CUDA designers went for the first one, returning the error code and assigning the memory through a pointer, although many argue that the second would have been the better choice.

Function objects in C++ (C++11)

I am reading about boost::function and I am a bit confused about its use and its relation to other C++ constructs or terms I have found in the documentation, e.g. here.
In the context of C++ (C++11), what is the difference between an instance of boost::function, a function object, a functor, and a lambda expression? When should one use which construct? For example, when should I wrap a function object in a boost::function instead of using the object directly?
Are all the above C++ constructs different ways to implement what in functional languages is called a closure (a function, possibly containing captured variables, that can be passed around as a value and invoked by other functions)?
A function object and a functor are the same thing: an object that implements the function call operator, operator(). A lambda expression produces a function object. Objects whose type is some specialization of boost::function/std::function are also function objects.
Lambdas are special in that lambda expressions have an anonymous and unique type, and are a convenient way to create a functor inline.
boost::function/std::function is special in that it turns any callable entity into a functor with a type that depends only on the signature of the callable entity. For example, lambda expressions each have a unique type, so it's difficult to pass them to non-generic code. If you create a std::function from a lambda, then you can easily pass around the wrapped lambda.
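For instance (a small sketch; takes_callback is a made-up function standing in for any non-template interface):
#include <functional>
#include <iostream>

void takes_callback(std::function<int(int)> cb)   // hypothetical non-generic interface
{
    std::cout << cb(1) << std::endl;
}

int main()
{
    int x = 41;
    auto lam = [x](int a) { return a + x; };      // closure object with a unique, unnamed type
    std::function<int(int)> f = lam;              // the same callable behind a nameable type
    takes_callback(f);                            // can now cross a non-template API boundary
}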
Both boost::function and the standard version std::function are wrappers provided by the library. They're potentially expensive and pretty heavy, and you should only use them if you actually need a collection of heterogeneous callable entities. As long as you only need one callable entity at a time, you are much better off using auto or templates.
Here's an example:
using namespace std::placeholders;                           // assumes Foo, x and m exist in scope

std::vector<std::function<int(int, int)>> v;

v.push_back(some_free_function);                             // free function
v.push_back(std::bind(&Foo::mem_fun, &x, _1, _2));           // member function bound to an object
v.push_back([&](int a, int b) -> int { return a + m[b]; });  // closure

int res = 0;
for (auto & f : v) { res += f(1, 2); }
Here's a counter-example:
template <typename F>
int apply(F && f)
{
    return std::forward<F>(f)(1, 2);
}
In this case, it would have been entirely gratuitous to declare apply like this:
int apply(std::function<int(int,int)>) // wasteful
The conversion is unnecessary, and the templated version can match the actual (often unknowable) type, for example of the bind expression or the lambda expression.
Function objects and functors are often described in terms of a concept. That means they describe a set of requirements on a type. A lot of things with respect to functors changed in C++11, and the new concept is called Callable. An object o of callable type is an object where (essentially) the expression o(ARGS) is valid. Examples of Callables are:
int f() { return 23; }

struct FO {
    int operator()() const { return 23; }
};
Often some requirements on the return type of the Callable are added too. You use a Callable like this:
template<typename Callable>
int call(Callable c) {
    return c();
}

call(&f);
call(FO());
Constructs like the above require you to know the exact type at compile time. This is not always possible, and this is where std::function comes in.
std::function is such a Callable, but it allows you to erase the actual type you are calling (e.g. your function accepting a callable is not a template anymore). Still, calling a function requires you to know its arguments and return type, so those have to be specified as template arguments to std::function. You would use it like this:
int call(std::function<int()> c) {
    return c();
}
call(&f);
call(FO());
You need to remember that using std::function can have an impact on performance, and you should only use it when you are sure you need it. In almost all other cases a template solves your problem.

Writing a simple thrust functor operating on some zipped arrays

I am trying to perform a thrust::reduce_by_key using zip and permutation iterators.
i.e. doing this on a zipped array of several 'virtual' permuted arrays.
I am having trouble writing the syntax for the functor density_update.
But first the setup of the problem.
Here is my function call:
thrust::reduce_by_key(dflagt,
                      dflagtend,
                      thrust::make_zip_iterator(
                          thrust::make_tuple(
                              thrust::make_permutation_iterator(dmasst, dmapt),
                              thrust::make_permutation_iterator(dvelt,  dmapt),
                              thrust::make_permutation_iterator(dmasst, dflagt),
                              thrust::make_permutation_iterator(dvelt,  dflagt)
                          )
                      ),
                      thrust::make_discard_iterator(),
                      danswert,
                      thrust::equal_to<int>(),
                      density_update());
dmapt and dflagt are of type thrust::device_ptr<int>, and dvelt, dmasst and danswert are of type thrust::device_ptr<double>. (They are thrust wrappers around my raw CUDA arrays.)
The arrays mapt and flagt are both index vectors, which I use to gather from the arrays dmasst and dvelt.
After the reduction step I intend to write my data to the danswert array. Since multiple arrays are being used in the reduction, I am obviously using zip iterators.
My problem lies in writing the functor density_update, which is a binary operation.
struct density_update
{
    typedef thrust::device_ptr<double> ElementIterator;
    typedef thrust::device_ptr<int>    IndexIterator;
    typedef thrust::permutation_iterator<ElementIterator,IndexIterator> PIt;
    typedef thrust::tuple<PIt, PIt, PIt, PIt> Tuple;

    __host__ __device__
    double operator()(const Tuple& x, const Tuple& y)
    {
        return thrust::get<0>(*x) * (thrust::get<1>(*x) - thrust::get<3>(*x)) +
               thrust::get<0>(*y) * (thrust::get<1>(*y) - thrust::get<3>(*y));
    }
};
The value being returned is a double. Why the binary operation looks like the above functor is not important; I just want to know how to correct the syntax.
As shown above, the code throws a number of compilation errors, and I am not sure where I have gone wrong.
I am using CUDA 4.0 on a GTX 570 on Ubuntu 10.10.
density_update should not receive tuples of iterators as parameters -- it needs tuples of the iterators' references.
In principle you could write density_update::operator() in terms of the particular reference type of the various iterators, but it's simpler to have the compiler infer the type of the parameters:
struct density_update
{
    template<typename Tuple>
    __host__ __device__
    double operator()(const Tuple& x, const Tuple& y)
    {
        return thrust::get<0>(x) * (thrust::get<1>(x) - thrust::get<3>(x)) +
               thrust::get<0>(y) * (thrust::get<1>(y) - thrust::get<3>(y));
    }
};
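To see why the templated form works, here is a small host-side sketch (made-up data, not the setup from the question): dereferencing a zip iterator yields a thrust::tuple of the underlying references, and that tuple is exactly what operator() receives for x and y.
#include <thrust/iterator/zip_iterator.h>
#include <thrust/tuple.h>

double mass[2]  = {1.0, 2.0}, vel[2]  = {3.0, 4.0};
double gmass[2] = {5.0, 6.0}, gvel[2] = {7.0, 8.0};

// zip iterator over four raw host arrays
thrust::zip_iterator<thrust::tuple<double*, double*, double*, double*> > z =
    thrust::make_zip_iterator(thrust::make_tuple(mass, vel, gmass, gvel));

// each dereference (*z, *(z + 1)) produces a tuple of four double references,
// which the templated operator() accepts directly
double r = density_update()(*z, *(z + 1));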

What Does "Overloaded"/"Overload"/"Overloading" Mean?

What does "Overloaded"/"Overload" mean in regards to programming?
It means that you are providing a function (method or operator) with the same name, but with a different signature.
For example:
void doSomething();
int doSomething(string x);
int doSomething(int a, int b, int c);
Basic Concept
Overloading, or "method overloading" is the name of the concept of having more than one methods with the same name but with different parameters.
For e.g. System.DateTime class in c# have more than one ToString method. The standard ToString uses the default culture of the system to convert the datetime to string:
new DateTime(2008, 11, 14).ToString(); // returns "14/11/2008" in America
while another overload of the same method allows the user to customize the format:
new DateTime(2008, 11, 14).ToString("dd MMM yyyy"); // returns "11 Nov 2008"
Sometimes parameter name may be the same but the parameter types may differ:
Convert.ToInt32(123m);
converts a decimal to int while
Convert.ToInt32("123");
converts a string to int.
Overload Resolution
To find the best overload to call, the compiler performs an operation named "overload resolution". For the first example, the compiler can find the best method simply by matching the argument count. For the second example, the compiler automatically calls the decimal version of Convert.ToInt32 if you pass a decimal argument, and the string version if you pass a string argument. If the compiler cannot find a suitable candidate among the possible overloads, you will get a compiler error along the lines of "The best overload does not match the parameters...".
You can find lots of information on how different compilers perform overload resolution.
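Here is a small C++ sketch of overload resolution picking an overload based on the argument type:
#include <iostream>

void print(int i)    { std::cout << "int: "    << i << std::endl; }
void print(double d) { std::cout << "double: " << d << std::endl; }

int main()
{
    print(42);     // overload resolution picks print(int)
    print(3.14);   // picks print(double)
    // print("x"); // no viable overload: compile error
    return 0;
}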
A function is overloaded when it has more than one signature. This means that you can call it with different argument types. For instance, you may have a function for printing a variable on screen, and you can define it for different argument types:
void print(int i);
void print(char i);
void print(UserDefinedType t);
In this case, the function print() would have three overloads.
It means having different versions of the same function which take different types of parameters. Such a function is "overloaded". For example, take the following function:
void Print(std::string str) {
    std::cout << str << std::endl;
}
You can use this function to print a string to the screen. However, this function cannot be used when you want to print an integer; you can then make a second version of the function, like this:
void Print(int i) {
    std::cout << i << std::endl;
}
Now the function is overloaded, and which version of the function will be called depends on the parameters you give it.
Others have answered what an overload is. When you are starting out, it is easy to confuse it with overriding.
As opposed to overloading, overriding is defining a method with the same signature in a subclass (or child class), which overrides the parent class's implementation. Some languages require an explicit directive, such as a virtual member function in C++ or override in Delphi and C#.
using System;

public class DrawingObject
{
    public virtual void Draw()
    {
        Console.WriteLine("I'm just a generic drawing object.");
    }
}

public class Line : DrawingObject
{
    public override void Draw()
    {
        Console.WriteLine("I'm a Line.");
    }
}
An overloaded method is one with several options for the number and type of parameters. For instance:
foo(foo)
foo(foo, bar)
Both would do relatively the same thing, but the second one takes an extra parameter for more options.
You can also have the same method take different types:
int Convert(int i)
int Convert(double i)
int Convert(float i)
Just like in common usage, it refers to something (in this case, a method name), doing more than one job.
Overloading is the poor man's version of multimethods from CLOS and other languages. It's the confusing one.
Overriding is the usual OO one. It goes with inheritance, and we also call it redefinition (e.g. in https://stackoverflow.com/users/3827/eed3si9n's answer, Line provides a specialized definition of Draw()).