How to get the index in thrust for_each - CUDA

I am trying to use thrust::for_each to assign certain values to a device vector. Here is the code:
const uint N = 222222;

struct assign_functor
{
    template <typename Tuple>
    __device__
    void operator()(Tuple t)
    {
        uint x = threadIdx.x + blockIdx.x * blockDim.x;
        uint y = threadIdx.y + blockIdx.y * blockDim.y;
        uint offset = x + y * blockDim.x * gridDim.x;
        thrust::get<0>(t) = offset;
    }
};
int main(int argc, char** argv)
{
    thrust::device_vector<float> d_float_vec(N);

    thrust::for_each(
        thrust::make_zip_iterator(
            thrust::make_tuple(d_float_vec.begin())
        ),
        thrust::make_zip_iterator(
            thrust::make_tuple(d_float_vec.end())
        ),
        assign_functor()
    );

    std::cout << d_float_vec[10] << " " << d_float_vec[N-2] << std::endl;
}
The output for d_float_vec[N-2] is supposed to be 222220, but it turns out to be 1036. What is wrong with my code?
I know I could use thrust::sequence to assign sequential values to the vector. I just want to know how to get the real index inside the thrust::for_each functor. Thanks!

As noted in comments, your approach is never likely to work because you have assumed a number of things about the way thrust::for_each works internally which are probably not true, including:
You implicitly are assuming that for_each uses a single thread to process each input element. This is almost certainly not the case; it is much more likely that thrust will process multiple elements per thread during the operation.
You are also assuming that execution happens in order, so that the Nth thread processes the Nth array element. That may not be the case, and execution may occur in an order which cannot be known a priori.
You are assuming for_each processes the whole input data set in a single kernel launch.
Thrust algorithms should be treated as black boxes whose internal operations are undefined and no knowledge of them is required to implement user defined functors. In your example, if you require a sequential index inside a functor, pass a counting iterator. One way to re-write your example would be like this:
#include <thrust/device_vector.h>
#include <thrust/for_each.h>
#include <thrust/tuple.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/iterator/zip_iterator.h>
#include <iostream>

typedef unsigned int uint;

const uint N = 222222;

struct assign_functor
{
    template <typename Tuple>
    __device__
    void operator()(Tuple t)
    {
        thrust::get<1>(t) = (float)thrust::get<0>(t);
    }
};

int main(int argc, char** argv)
{
    thrust::device_vector<float> d_float_vec(N);
    thrust::counting_iterator<uint> first(0);
    thrust::counting_iterator<uint> last = first + N;

    thrust::for_each(
        thrust::make_zip_iterator(
            thrust::make_tuple(first, d_float_vec.begin())
        ),
        thrust::make_zip_iterator(
            thrust::make_tuple(last, d_float_vec.end())
        ),
        assign_functor()
    );

    std::cout << d_float_vec[10] << " " << d_float_vec[N-2] << std::endl;
}
Here the counting iterator gets passed in a tuple along with the data array, allowing the functor access to a sequential index which corresponds to the data array entry it is dealing with.
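As a side note, here is a minimal sketch (not from the original answer) of an alternative that avoids the zip iterator altogether: write the index directly into the vector with thrust::transform over a counting range. The functor name cast_to_float is made up for illustration.

// hedged sketch: fill the vector with its own indices via thrust::transform
#include <thrust/device_vector.h>
#include <thrust/transform.h>
#include <thrust/iterator/counting_iterator.h>
#include <iostream>

struct cast_to_float
{
    __host__ __device__
    float operator()(unsigned int i) const
    {
        // the "index" is simply the counting iterator's value
        return (float)i;
    }
};

int main()
{
    const unsigned int N = 222222;
    thrust::device_vector<float> d_float_vec(N);
    thrust::counting_iterator<unsigned int> first(0);

    // write index i into element i of the vector
    thrust::transform(first, first + N, d_float_vec.begin(), cast_to_float());

    std::cout << d_float_vec[10] << " " << d_float_vec[N-2] << std::endl;
    return 0;
}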

Related

How to bring equal elements together using thrust without sort

I have an array of elements such that each element defines the "equal to" operator only.
In other words, no ordering is defined for this type of element.
Since I can't use thrust::sort as in the thrust histogram example, how can I bring equal elements together using thrust?
For example:
my array is initially
a e t b c a c e t a
where identical characters represent equal elements.
After the elaboration, the array should be
a a a t t b c c e e
but it can be also
a a a c c t t e e b
or any other permutation.
I would recommend that you follow an approach such as that laid out by #m.s. in the posted answer there. As I stated in the comments, ordering of elements is an extremely useful mechanism that aids in the reduction of complexity for problems like this.
However the question as posed asks if it is possible to group like elements without sorting. With an inherently parallel processor like a GPU, I spent some time thinking about how it might be accomplished without sorting.
If we have both a large number of objects, as well as a large number of unique object types, then I think it's possible to bring some level of parallelism to the problem, however my approach outlined here will still have atrocious, scattered memory access patterns. For the case where there are only a small number of distinct or unique object types, the algorithm I am discussing here has little to commend it. This is just one possible approach. There may well be other, far better approaches:
1. The starting point is to develop a set of "linked lists" that indicate the matching neighbor to the left and the matching neighbor to the right, for each element. This is accomplished via my search_functor and thrust::for_each, on the entire data set. This step is reasonably parallel and also has reasonable memory access efficiency for large data sets, but it does require a worst-case traversal of the entire data set from start to finish (a side-effect, I would call it, of not being able to use ordering; we must compare every element to other elements until we find a match). The generation of two linked lists allows us to avoid all-to-all comparisons.
2. Once we have the lists (right-neighbor and left-neighbor) built from step 1, it's an easy matter to count the number of unique objects, using thrust::count.
3. We then get the starting indexes of each unique element (i.e. the leftmost index of each type of unique element in the dataset), using thrust::copy_if stream compaction.
4. The next step is to count the number of instances of each of the unique elements. This step does list traversal, one thread per element list. If I have a small number of unique elements, this will not effectively utilize the GPU. In addition, the list traversal will result in lousy access patterns.
5. After we have counted the number of each type of object, we can then build a sequence of starting indices for each object type in the output list, via thrust::exclusive_scan on the numbers of each type of object.
6. Finally, we can copy each input element to its appropriate place in the output list. Since we have no way to group or order the elements yet, we must again resort to list traversal. Once again, this will be inefficient use of the GPU if the number of unique object types is small, and will also have lousy memory access patterns.
Here's a fully worked example, using your sample data set of characters. To help clarify the idea that we intend to group objects that have no inherent ordering, I have created a somewhat arbitrary object definition (my_obj), that has the == comparison operator defined, but no definition for < or >.
$ cat t707.cu
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/for_each.h>
#include <thrust/transform.h>
#include <thrust/transform_scan.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/iterator/zip_iterator.h>
#include <thrust/copy.h>
#include <thrust/count.h>
#include <iostream>

template <typename T>
class my_obj
{
    T element;
    int index;
  public:
    __host__ __device__ my_obj() : element(0), index(0) {};
    __host__ __device__ my_obj(T a) : element(a), index(0) {};
    __host__ __device__ my_obj(T a, int idx) : element(a), index(idx) {};

    __host__ __device__
    T get() {
        return element;}

    __host__ __device__
    void set(T a) {
        element = a;}

    __host__ __device__
    int get_idx() {
        return index;}

    __host__ __device__
    void set_idx(int idx) {
        index = idx;}

    __host__ __device__
    bool operator ==(my_obj &e2)
    {
        return (e2.get() == this->get());
    }
};

template <typename T>
struct search_functor
{
    my_obj<T> *data;
    int end;
    int *rn;
    int *ln;
    search_functor(my_obj<T> *_a, int *_rn, int *_ln, int len) : data(_a), rn(_rn), ln(_ln), end(len) {};

    __host__ __device__
    void operator()(int idx){
        for (int i = idx+1; i < end; i++)
            if (data[idx] == data[i]) {
                ln[i] = idx;
                rn[idx] = i;
                return;}
        return;
    }
};

template <typename T>
struct copy_functor
{
    my_obj<T> *data;
    my_obj<T> *result;
    int *rn;
    copy_functor(my_obj<T> *_in, my_obj<T> *_out, int *_rn) : data(_in), result(_out), rn(_rn) {};

    __host__ __device__
    void operator()(const thrust::tuple<int, int> &t1) const {
        int idx1 = thrust::get<0>(t1);
        int idx2 = thrust::get<1>(t1);
        result[idx1] = data[idx2];
        int i = rn[idx2];
        int j = 1;
        while (i != -1){
            result[idx1+(j++)] = data[i];
            i = rn[i];}
        return;
    }
};

struct count_functor
{
    int *rn;
    int *ot;
    count_functor(int *_rn, int *_ot) : rn(_rn), ot(_ot) {};

    __host__ __device__
    int operator()(int idx1, int idx2){
        ot[idx1] = idx2;
        int i = rn[idx1];
        int count = 1;
        while (i != -1) {
            ot[i] = idx2;
            count++;
            i = rn[i];}
        return count;
    }
};

using namespace thrust::placeholders;

int main(){

    // data setup
    char data[] = { 'a' , 'e' , 't' , 'b' , 'c' , 'a' , 'c' , 'e' , 't' , 'a' };
    int sz = sizeof(data)/sizeof(char);
    for (int i = 0; i < sz; i++) std::cout << data[i] << ",";
    std::cout << std::endl;
    thrust::host_vector<my_obj<char> > h_data(sz);
    for (int i = 0; i < sz; i++) { h_data[i].set(data[i]); h_data[i].set_idx(i); }
    thrust::device_vector<my_obj<char> > d_data = h_data;

    // create left and right neighbor indices
    thrust::device_vector<int> ln(d_data.size(), -1);
    thrust::device_vector<int> rn(d_data.size(), -1);
    thrust::for_each(thrust::counting_iterator<int>(0), thrust::counting_iterator<int>(0) + sz, search_functor<char>(thrust::raw_pointer_cast(d_data.data()), thrust::raw_pointer_cast(rn.data()), thrust::raw_pointer_cast(ln.data()), d_data.size()));

    // determine number of unique objects
    int uni_objs = thrust::count(ln.begin(), ln.end(), -1);

    // determine the number of instances of each unique object
    //   get object starting indices
    thrust::device_vector<int> uni_obj_idxs(uni_objs);
    thrust::copy_if(thrust::counting_iterator<int>(0), thrust::counting_iterator<int>(0)+d_data.size(), ln.begin(), uni_obj_idxs.begin(), (_1 == -1));

    //   count each object list
    thrust::device_vector<int> num_objs(uni_objs);
    thrust::device_vector<int> obj_type(d_data.size());
    thrust::transform(uni_obj_idxs.begin(), uni_obj_idxs.end(), thrust::counting_iterator<int>(0), num_objs.begin(), count_functor(thrust::raw_pointer_cast(rn.data()), thrust::raw_pointer_cast(obj_type.data())));

    // at this point, we have built object lists that have allowed us to identify a unique, orderable "type" for each object
    // the sensible thing to do would be to employ a sort_by_key on obj_type and an index sequence at this point
    // and use the reordered index sequence to reorder the original objects, thus grouping them
    // however... without sorting...

    // build output vector indices
    thrust::device_vector<int> copy_start(num_objs.size());
    thrust::exclusive_scan(num_objs.begin(), num_objs.end(), copy_start.begin());

    // copy (by object type) input to output
    thrust::device_vector<my_obj<char> > d_result(d_data.size());
    thrust::for_each(thrust::make_zip_iterator(thrust::make_tuple(copy_start.begin(), uni_obj_idxs.begin())), thrust::make_zip_iterator(thrust::make_tuple(copy_start.end(), uni_obj_idxs.end())), copy_functor<char>(thrust::raw_pointer_cast(d_data.data()), thrust::raw_pointer_cast(d_result.data()), thrust::raw_pointer_cast(rn.data())));

    // display results
    std::cout << "Grouped: " << std::endl;
    for (int i = 0; i < d_data.size(); i++){
        my_obj<char> temp = d_result[i];
        std::cout << temp.get() << ",";}
    std::cout << std::endl;
    for (int i = 0; i < d_data.size(); i++){
        my_obj<char> temp = d_result[i];
        std::cout << temp.get_idx() << ",";}
    std::cout << std::endl;
    return 0;
}
$ nvcc -o t707 t707.cu
$ ./t707
a,e,t,b,c,a,c,e,t,a,
Grouped:
a,a,a,e,e,t,t,b,c,c,
0,5,9,1,7,2,8,3,4,6,
$
In the discussion we found out that your real goal is to eliminate duplicates in a vector of float4 elements.
In order to apply thrust::unique the elements need to be sorted.
So you need a sort method for 4-dimensional data. This can be done using space-filling curves. I have previously used the z-order curve (a.k.a. Morton code) to sort 3D data. There are efficient CUDA implementations available for the 3D case; however, a quick googling did not return a ready-to-use implementation for the 4D case.
I found a paper which lists a generic algorithm for sorting n-dimensional data points using the z-order curve:
Fast construction of k-Nearest Neighbor Graphs for Point Clouds
(see Algorithm 1 : Floating Point Morton Order Algorithm).
There is also a C++ implementation available for this algorithm.
For 4D data, the loop could be unrolled, but there might be simpler and more efficient algorithms available.
So the (not fully implemented) sequence of operations would then look like this:
#include <thrust/device_vector.h>
#include <thrust/unique.h>
#include <thrust/sort.h>

inline __host__ __device__ float dot(const float4& a, const float4& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

struct identity_4d
{
    __host__ __device__
    bool operator()(const float4& a, const float4& b) const
    {
        // based on the norm function you provided in the discussion
        return dot(a,b) < (0.1f*0.1f);
    }
};

struct z_order_4d
{
    __host__ __device__
    bool operator()(const float4& p, const float4& q) const
    {
        // you need to implement the z-order algorithm here
        // ...
    }
};

int main()
{
    const int N = 100;
    thrust::device_vector<float4> data(N);
    // fill the data
    // ...

    thrust::sort(data.begin(), data.end(), z_order_4d());
    thrust::unique(data.begin(), data.end(), identity_4d());
}
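As an illustration only, here is a hedged sketch of one way z_order_4d could be filled in: quantize each coordinate to 10 bits (this sketch assumes the coordinates lie in [0,1)) and compare the interleaved Morton codes. The referenced paper describes a floating point variant that avoids this explicit quantization.

// hedged sketch: quantize-and-interleave 4D Morton comparison
// (assumes coordinates in [0,1); 10 bits per component -> 40-bit code)
__host__ __device__
inline unsigned long long morton4d(const float4& p)
{
    unsigned int c[4] = {
        (unsigned int)(fminf(fmaxf(p.x, 0.0f), 0.999f) * 1024.0f),
        (unsigned int)(fminf(fmaxf(p.y, 0.0f), 0.999f) * 1024.0f),
        (unsigned int)(fminf(fmaxf(p.z, 0.0f), 0.999f) * 1024.0f),
        (unsigned int)(fminf(fmaxf(p.w, 0.0f), 0.999f) * 1024.0f)
    };

    unsigned long long code = 0;
    // bit i of component d goes to bit position 4*i + d
    for (int i = 0; i < 10; i++)
        for (int d = 0; d < 4; d++)
            code |= (unsigned long long)((c[d] >> i) & 1u) << (4 * i + d);
    return code;
}

struct z_order_4d
{
    __host__ __device__
    bool operator()(const float4& p, const float4& q) const
    {
        return morton4d(p) < morton4d(q);
    }
};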

CUDA cudaMemcpy2D not giving expected results [duplicate]

How do I initialize a device array which is allocated using cudaMalloc()?
I tried cudaMemset, but it fails to initialize the values to anything other than 0. The code for cudaMemset looks like below, where value is initialized to 5.
cudaMemset(devPtr,value,number_bytes)
As you are discovering, cudaMemset works like the C standard library memset. Quoting from the documentation:
cudaError_t cudaMemset ( void * devPtr,
                         int value,
                         size_t count )
Fills the first count bytes of the memory area pointed to by devPtr
with the constant byte value value.
So value is a byte value. If you do something like:
int *devPtr;
cudaMalloc((void **)&devPtr,number_bytes);
const int value = 5;
cudaMemset(devPtr,value,number_bytes);
what you are asking for is that each byte of devPtr will be set to 5. If devPtr pointed to an array of integers, the result would be that each integer word had the value 84215045. This is probably not what you had in mind.
Using the runtime API, what you could do is write your own generic kernel to do this. It could be as simple as
template<typename T>
__global__ void initKernel(T * devPtr, const T val, const size_t nwords)
{
    int tidx = threadIdx.x + blockDim.x * blockIdx.x;
    int stride = blockDim.x * gridDim.x;

    for(; tidx < nwords; tidx += stride)
        devPtr[tidx] = val;
}
(standard disclaimer: written in browser, never compiled, never tested, use at own risk).
Just instantiate the template for the types you need and call it with a suitable grid and block size, paying attention to the last argument now being a word count, not a byte count as in cudaMemset. This isn't really any different from what cudaMemset does anyway; using that API call results in a kernel launch which is not too different from what I posted above.
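For example, a minimal launch sketch (the 256-thread block size and the grid sizing below are illustrative choices rather than requirements, since the kernel uses a grid-stride loop):

const size_t nwords = 1 << 20;
int *devPtr;
cudaMalloc((void **)&devPtr, nwords * sizeof(int));

const int threads = 256;
const int blocks = (int)((nwords + threads - 1) / threads);

// set every int in the allocation to 5 (note: word count, not byte count)
initKernel<int><<<blocks, threads>>>(devPtr, 5, nwords);
cudaDeviceSynchronize();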
Alternatively, if you can use the driver API, there are cuMemsetD16 and cuMemsetD32, which do the same thing but for half-word and full 32-bit word types. If you need to set 64-bit or larger types (so doubles or vector types), your best option is to use your own kernel.
I also needed a solution to this question, and I didn't really understand the other proposed solution. In particular, I didn't understand why it iterates over the grid with for(; tidx < nwords; tidx += stride) nor, for that matter, the kernel invocation and the counter-intuitive use of word counts.
Therefore I created a much simpler monolithic generic kernel and customized it with strides, i.e. you may use it to initialize a matrix in multiple ways, e.g. setting rows or columns to any value:
template <typename T>
__global__ void kernelInitializeArray(T* __restrict__ a, const T value,
                                      const size_t n, const size_t incx) {
    int tid = threadIdx.x + blockDim.x * blockIdx.x;
    if (tid*incx < n) {
        a[tid*incx] = value;
    }
}
Then you may invoke the kernel like this:
template <typename T>
void deviceInitializeArray(T* a, const T value, const size_t n, const size_t incx) {
    int number_of_blocks = ((n / incx) + BLOCK_SIZE - 1) / BLOCK_SIZE;
    dim3 gridDim(number_of_blocks, 1);
    dim3 blockDim(BLOCK_SIZE, 1);
    kernelInitializeArray<T> <<<gridDim, blockDim>>>(a, value, n, incx);
}
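As a hedged usage sketch (assuming BLOCK_SIZE is defined elsewhere as a constant such as 256), the stride argument lets you touch, for example, only the first column of a row-major matrix:

const size_t rows = 100, cols = 100;
double *d_mat;
cudaMalloc((void **)&d_mat, rows * cols * sizeof(double));

// elements 0, cols, 2*cols, ... form the first column of a row-major matrix,
// so a stride of incx = cols writes exactly one element per row
deviceInitializeArray<double>(d_mat, 1.0, rows * cols, cols);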

External call of a class method in a kernel

I have a class FPlan that has a number of methods such as permute and packing.
__host__ __device__ void Perturb_action(FPlan *dfp){
    dfp->perturb();
    dfp->packing();
}

__global__ void Vector_Perturb(FPlan **dfp, int n){
    int i = threadIdx.x;
    if(i < n) Perturb_action(dfp[i]);
}
in main:
FPlan **fp_vec;
fp_vec = (FPlan**)malloc(VEC_SIZE*sizeof(FPlan*));

//initialize the vec
for(int i=0; i<VEC_SIZE; i++)
    fp_vec[i] = &fp;
//fp of type FPlan that is initialized

int v_sz = sizeof(fp_vec);

double test = fp_vec[0]->getCost();
printf("the cost before perturb %f\n", test);

FPlan **value;
cudaMalloc(&value, v_sz);
cudaMemcpy(value, &fp_vec, v_sz, cudaMemcpyHostToDevice);

//call kernel
dim3 threadsPerBlock(VEC_SIZE);
dim3 numBlocks(1);
Vector_Perturb<<<numBlocks,threadsPerBlock>>> (value, VEC_SIZE);

cudaMemcpy(fp_vec, value, v_sz, cudaMemcpyDeviceToHost);

test = fp_vec[0]->getCost();
printf("the cost after perturb %f\n", test);
test = fp_vec[1]->getCost();
printf("the cost after perturb %f\n", test);
Before the perturbation, the printf for fp_vec[0] gives the cost 0.8.
After the perturbation, fp_vec[0] gives the value inf and fp_vec[1] gives the value 0.8.
The expected output after the permutation should be something like fp_vec[0] = 0.7 and fp_vec[1] = 0.9. I want to apply these permutations to an array of type FPlan.
What am I missing? Is calling an external function supported in CUDA?
This seems to be a common problem these days:
Consider the following code:
#include <stdio.h>
#include <stdlib.h>

int main() {
    int* arr = (int*) malloc(100);
    printf("sizeof(arr) = %i", sizeof(arr));
    return 0;
}
What is the expected output? 100? No, it's 4 (at least on a 32-bit machine). sizeof() returns the size of the type of a variable, not the allocated size of an array.
int v_sz=sizeof(fp_vec);
double test=fp_vec[0]->getCost();
printf("the cost before perturb %f\n", test);
FPlan **value;
cudaMalloc(&value,v_sz);
cudaMemcpy(value,&fp_vec,v_sz,cudaMemcpyHostToDevice);
You are allocating 4 (or 8) bytes on the device and copying 4 (or 8) bytes. The result is undefined (and probably garbage every time).
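A hedged sketch of just the size fix described above: compute the byte count from VEC_SIZE instead of sizeof(fp_vec), and copy the array of pointers rather than the address of the host pointer variable.

// hedged sketch of the corrected allocation and copy
size_t v_sz = VEC_SIZE * sizeof(FPlan*);

FPlan **value;
cudaMalloc(&value, v_sz);
cudaMemcpy(value, fp_vec, v_sz, cudaMemcpyHostToDevice);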
Besides that, you should do proper error checking of your CUDA calls.
Have a look: What is the canonical way to check for errors using the CUDA runtime API?

How to gather rows from a matrix by indices list using CUDA Thrust

This is seemingly a simple problem but I just can’t figure out an elegant way to do this with CUDA Thrust.
I have a two dimensional matrix NxM and a vector of desired row indices of size L that is a subset of all rows (i.e. L < N) and is not regular (basically an irregular list like 7, 11, 13, 205, ... etc.). The matrix is stored by rows in a thrust device vector. The array of indices is a device vector as well.
Here are my two questions:
What is the most efficient way to copy the desired rows from the original NxM matrix forming a new matrix LxM?
Is it possible to create an iterator for the original NxM matrix that would dereference to only elements that belong to the desired rows?
Thank you very much for your help.
What you are asking about seems like a pretty straightforward stream compaction problem, and there isn't any particular problem doing it with thrust, but there are a couple of twists. In order to select the rows to copy, you need to have a stencil or key that the stream compaction algorithm can use. That needs to be constructed by a search or select operation using your list of rows to copy.
One example procedure to do this would go something like this:
Construct an iterator which returns the row number of any entry in the input matrix. Thrust has a very useful counting_iterator and transform_iterator which can be combined to do this.
Perform a search of that row number iterator to find which entries match the list of rows to copy. thrust::binary_search can be used for this. The search yields the stencil for the stream compaction operation.
Use thrust::copy_if to perform the stream compaction on the input matrix with the stencil.
It sounds like a lot of work and intermediate steps, but the counting and transformation iterators don't actually produce any intermediate device vectors. The only intermediate storage required is the stencil array, which can be a boolean (so m*n bytes).
A full example in code:
#include <thrust/copy.h>
#include <thrust/binary_search.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/iterator/transform_iterator.h>
#include <thrust/device_vector.h>
#include <cstdio>

struct div_functor : public thrust::unary_function<int,int>
{
    int m;
    div_functor(int _m) : m(_m) {};

    __host__ __device__
    int operator()(int x) const
    {
        return x / m;
    }
};

struct is_true
{
    __host__ __device__
    bool operator()(bool x) { return x; }
};

int main(void)
{
    // dimensions of the problem
    const int m=20, n=5, l=4;

    // Counting iterator for generating sequential indices
    // Sample matrix containing 0...(m*n)
    thrust::counting_iterator<float> indices(0.f);
    thrust::device_vector<float> in_matrix(m*n);
    thrust::copy(indices, indices+(m*n), in_matrix.begin());

    // device vector containing rows to select
    thrust::device_vector<int> select(l);
    select[0] = 1;
    select[1] = 4;
    select[2] = 9;
    select[3] = 16;

    // construct device iterator supplying row numbers via a functor
    typedef thrust::counting_iterator<int> counter;
    typedef thrust::transform_iterator<div_functor, counter> rowIterator;
    rowIterator rows_begin = thrust::make_transform_iterator(thrust::make_counting_iterator(0), div_functor(n));
    rowIterator rows_end = rows_begin + (m*n);

    // construct a stencil array which indicates which entries will be copied
    thrust::device_vector<bool> docopy(m*n);
    thrust::binary_search(select.begin(), select.end(), rows_begin, rows_end, docopy.begin());

    // use stream compaction on the matrix with the stencil array
    thrust::device_vector<float> out_matrix(l*n);
    thrust::copy_if(in_matrix.begin(), in_matrix.end(), docopy.begin(), out_matrix.begin(), is_true());

    for(int i=0; i<(l*n); i++) {
        float val = out_matrix[i];
        printf("%i %f\n", i, val);
    }
}
(usual disclaimer: use at your own risk)
About the only comment I would make is that the predicate to the copy_if call feels a bit redundant given that we already have a binary stencil that could be used directly, but there doesn't seem to be a variant of the compaction algorithms which can operate on a binary stencil directly. Similarly, I could not think of a sensible way to use the list of rows directly in the stream compaction call. There might well be a more efficient way to do this with thrust, but this should at least get you started.
From your comment, it seems that space is tight and the additional memory overhead of the binary search and stencil creation is prohibitive for your application. In that case I would follow the advice I offered in a comment to Roger Dahl's answer, and use a custom copy kernel instead. Thrust device vectors can be cast to a pointer you can pass directly to a kernel (thrust::raw_pointer_cast), so it need not interfere with your existing thrust code. I would suggest using a block of threads per row to copy, that allows coalescing of reads and writes and should perform a lot better than using thrust::copy for each row. A very simple implementation might look something like this (reusing most of my thrust example):
#include <thrust/copy.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/device_vector.h>
#include <cstdio>

__global__
void rowcopykernel(const float *in, float *out, const int *list, const int m, const int n, const int l)
{
    __shared__ const float * inrowp;
    __shared__ float * outrowp;

    if (threadIdx.x == 0) {
        inrowp = (blockIdx.x < l) ? in + (n*list[blockIdx.x]) : 0;
        outrowp = out + (n*blockIdx.x);
    }
    __syncthreads();

    for(int i=threadIdx.x; (inrowp != 0) && (i<n); i+=blockDim.x) {
        *(outrowp+i) = *(inrowp+i);
    }
}

int main(void)
{
    // dimensions of the problem
    const int m=20, n=5, l=4;

    // Sample matrix containing 0...(m*n)
    thrust::counting_iterator<float> indices(0.f);
    thrust::device_vector<float> in_matrix(m*n);
    thrust::copy(indices, indices+(m*n), in_matrix.begin());

    // device vector contain rows to select
    thrust::device_vector<int> select(l);
    select[0] = 1;
    select[1] = 4;
    select[2] = 9;
    select[3] = 16;

    // Output matrix
    thrust::device_vector<float> out_matrix(l*n);

    // raw pointer to thrust vectors
    int * selp = thrust::raw_pointer_cast(&select[0]);
    float * inp = thrust::raw_pointer_cast(&in_matrix[0]);
    float * outp = thrust::raw_pointer_cast(&out_matrix[0]);

    dim3 blockdim = dim3(128);
    dim3 griddim = dim3(l);
    rowcopykernel<<<griddim, blockdim>>>(inp, outp, selp, m, n, l);

    for(int i=0; i<(l*n); i++) {
        float val = out_matrix[i];
        printf("%i %f\n", i, val);
    }
}
(standard disclaimer: use at your own risk).
The execution parameter selection could be made fancier, but otherwise that should be about all that is required. If your rows are very small, you might want to investigate using a warp per row rather than a block (so one block copies several rows). If you have more than 65535 output rows, then you will need to either use a 2D grid, or modify the code to have each block do multiple rows. But, as with the thrust based solution above, this should get you started.
If you are not fixed on thrust, check out ArrayFire.
Surprisingly, unlike thrust, this library has native support for subscript indexing,
so your problem can be solved in just a few lines of code:
const int N = 7, M = 5;

float L_host[] = {3, 6, 4, 1};
int szL = sizeof(L_host) / sizeof(float);

// generate random NxM matrix with cuComplex data
array A = randu(N, M, c32);
// array used to index rows
array L(szL, 1, L_host);

print(A);
print(L);

array B = A(L, span); // copy selected rows of A
print(B);
and the results:
A =
0.7402 + 0.9210i 0.6814 + 0.2920i 0.5786 + 0.5538i 0.2133 + 0.4131i 0.7305 + 0.9400i
0.0390 + 0.9690i 0.3194 + 0.8109i 0.3557 + 0.7229i 0.0328 + 0.5360i 0.8432 + 0.6116i
0.9251 + 0.4464i 0.1541 + 0.4452i 0.2783 + 0.6192i 0.7214 + 0.3546i 0.2674 + 0.0208i
0.6673 + 0.1099i 0.2080 + 0.6110i 0.5876 + 0.3750i 0.2527 + 0.9847i 0.8331 + 0.7218i
0.4702 + 0.5132i 0.3073 + 0.4156i 0.2405 + 0.4148i 0.9200 + 0.1872i 0.6087 + 0.6301i
0.7762 + 0.2948i 0.2343 + 0.8793i 0.0937 + 0.6326i 0.1820 + 0.5984i 0.5298 + 0.8127i
0.7140 + 0.3585i 0.6462 + 0.9264i 0.2849 + 0.7793i 0.7082 + 0.0421i 0.0593 + 0.4797i
L = (row indices)
3.0000
6.0000
4.0000
1.0000
B =
0.6673 + 0.1099i 0.2080 + 0.6110i 0.5876 + 0.3750i 0.2527 + 0.9847i 0.8331 + 0.7218i
0.7140 + 0.3585i 0.6462 + 0.9264i 0.2849 + 0.7793i 0.7082 + 0.0421i 0.0593 + 0.4797i
0.4702 + 0.5132i 0.3073 + 0.4156i 0.2405 + 0.4148i 0.9200 + 0.1872i 0.6087 + 0.6301i
0.0390 + 0.9690i 0.3194 + 0.8109i 0.3557 + 0.7229i 0.0328 + 0.5360i 0.8432 + 0.6116i
It also works pretty fast. I tested this with an array of cuComplex of size
2000 x 2000 using the following code:
float *g_data = 0, *g_data2 = 0;
int g_N = 2000, g_M = 2000, // matrix of size g_N x g_M
    g_L = 400;              // copy g_L rows

void af_test()
{
    array A(g_N, g_M, (cuComplex *)g_data, afDevicePointer);
    array L(g_L, 1, g_data2, afDevicePointer);
    array B = (A(L, span));
    std::cout << "sz: " << B.elements() << "\n";
}

int main()
{
    // input matrix N x M of cuComplex
    array in = randu(g_N, g_M, c32);
    g_data = (float *)in.device< cuComplex >();

    // generate unique row indices
    array in2 = setunique(floor(randu(g_L) * g_N));
    print(in2);
    g_data2 = in2.device<float>();

    const int N_ITERS = 30;
    try {
        info();
        af::sync();
        timer::tic();

        for(int i = 0; i < N_ITERS; i++) {
            af_test();
        }
        af::sync();
        printf("af: %.5f seconds\n", timer::toc() / N_ITERS);

    } catch (af::exception& e) {
        fprintf(stderr, "%s\n", e.what());
    }
    in.unlock();
    in2.unlock();
}
I don't think there is a way to do this with Thrust but, because the operation will be memory bound, it should be easy to write a kernel that performs this operation at maximum possible performance. Simply create the same number of threads as there are indices in the vector. Have each thread calculate the source and destination addresses for one row and then use memcpy() to copy the row.
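A minimal sketch of that idea, assuming row-major storage; the kernel name and parameters are made up for illustration (device-side memcpy is supported in CUDA device code):

// hedged sketch: one thread per selected row, each copying a whole row
__global__ void gather_rows(const float *in, float *out,
                            const int *rows, int ncols, int nrows_out)
{
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r < nrows_out)
        // copy row rows[r] of the input to row r of the output
        memcpy(out + r * ncols, in + rows[r] * ncols, ncols * sizeof(float));
}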
You may also want to carefully consider whether it is possible to set up subsequent processing steps to access the rows in place, thereby avoiding the entire, expensive "compacting" operation that only shuffles memory around. Even if addressing the rows becomes slightly more complicated (an extra memory lookup and multiply, maybe), overall performance may be much better.

CUDA Dot Product

I'm trying to implement the classic dot-product kernel for double precision arrays with atomic computation of the final sum across the various blocks. I used the atomicAdd for double precision as stated on page 116 of the programming guide. Probably I'm doing something wrong. The partial sums across the threads in every block are computed correctly, but afterwards the atomic operation doesn't seem to be working properly, since every time I run my kernel with the same data, I receive different results. I'll be grateful if somebody could spot the mistake or provide an alternative solution!
Here is my kernel:
__global__ void cuda_dot_kernel(int *n, double *a, double *b, double *dot_res)
{
    __shared__ double cache[threadsPerBlock]; // thread shared memory
    int global_tid = threadIdx.x + blockIdx.x * blockDim.x;
    int i = 0, cacheIndex = 0;
    double temp = 0;
    cacheIndex = threadIdx.x;
    while (global_tid < (*n)) {
        temp += a[global_tid] * b[global_tid];
        global_tid += blockDim.x * gridDim.x;
    }
    cache[cacheIndex] = temp;
    __syncthreads();
    for (i = blockDim.x/2; i > 0; i >>= 1) {
        if (threadIdx.x < i) {
            cache[threadIdx.x] += cache[threadIdx.x + i];
        }
        __syncthreads();
    }
    __syncthreads();
    if (cacheIndex == 0) {
        *dot_res = cuda_atomicAdd(dot_res, cache[0]);
    }
}
And here is my device function atomicAdd:
__device__ double cuda_atomicAdd(double *address, double val)
{
    double assumed, old = *address;
    do {
        assumed = old;
        old = __longlong_as_double(atomicCAS((unsigned long long int*)address,
                                             __double_as_longlong(assumed),
                                             __double_as_longlong(val+assumed)));
    } while (assumed != old);
    return old;
}
Getting a reduction right using ad hoc CUDA code can be tricky, so here's an alternative solution using a Thrust algorithm, which is included with the CUDA Toolkit:
#include <thrust/inner_product.h>
#include <thrust/device_ptr.h>

double do_dot_product(int n, double *a, double *b)
{
    // wrap raw pointers to device memory with device_ptr
    thrust::device_ptr<double> d_a(a), d_b(b);

    // inner_product implements a mathematical dot product
    return thrust::inner_product(d_a, d_a + n, d_b, 0.0);
}
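A hedged usage sketch for the helper above (the sizes and fill values are arbitrary illustrations; with every element of a set to 2.0 and of b to 3.0, the expected result is 6.0 * n):

#include <thrust/device_ptr.h>
#include <thrust/fill.h>
#include <cstdio>

int main()
{
    const int n = 1024;
    double *a, *b;
    cudaMalloc((void **)&a, n * sizeof(double));
    cudaMalloc((void **)&b, n * sizeof(double));

    // fill the raw device allocations through thrust::device_ptr wrappers
    thrust::fill(thrust::device_ptr<double>(a), thrust::device_ptr<double>(a) + n, 2.0);
    thrust::fill(thrust::device_ptr<double>(b), thrust::device_ptr<double>(b) + n, 3.0);

    printf("dot = %f\n", do_dot_product(n, a, b)); // expect 6144.0

    cudaFree(a);
    cudaFree(b);
    return 0;
}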
You are using the cuda_atomicAdd function incorrectly. This section of your kernel:
if (cacheIndex == 0) {
    *dot_res = cuda_atomicAdd(dot_res, cache[0]);
}
is the culprit. Here, you atomically add to dot_res, and then non-atomically set dot_res with the result the function returns. The return value of this function is the previous value of the location being atomically updated, and it is supplied for "information" or local use of the caller only. You don't assign it to the location you are atomically updating; that completely defeats the purpose of using atomic memory access in the first place. Do something like this instead:
if (cacheIndex == 0) {
    double result = cuda_atomicAdd(dot_res, cache[0]);
}
I did not check your code in that much depth, but here is some advice.
I would only advise relying on Thrust if you use your GPU exclusively for such generic tasks, because when a more complex problem arises you will have no idea how to program it efficiently in parallel on the GPU.
Start a new parallel reduction kernel to sum up the per-block dot products.
Since the data is already on the device, you won't see a decrease in performance from starting a new kernel.
Your kernel does not seem to scale across the maximum number of possible blocks on the newest GPUs. If it did, and it had to calculate the dot product of millions of values, performance would decrease dramatically because of the serialized atomic operation.
Beginner mistake: are your input data and shared memory accesses range checked? Or are you sure the input data is always a multiple of your block size? Otherwise you will read garbage. Most of my wrong results were due to this fault.
Optimise your parallel reduction: see my thesis or Mark Harris's optimisation slides.
Untested, I just wrote it down in notepad:
/*
 * @param inCount_s   unsigned long long int  Length of both input arrays
 * @param inValuesA_g double*                 First value array
 * @param inValuesB_g double*                 Second value array
 * @param outDots_g   double*                 Output dots of each block, length equals the number of blocks
 */
__global__ void dotProduct(const unsigned long long int inCount_s,
                           const double* inValuesA_g,
                           const double* inValuesB_g,
                           double* outDots_g)
{
    // get unique block index in a possible 3D Grid
    const unsigned long long int blockId = blockIdx.x                          // 1D
                                         + blockIdx.y * gridDim.x              // 2D
                                         + gridDim.x * gridDim.y * blockIdx.z; // 3D

    // block dimension uses only x-coordinate
    const unsigned long long int tId = blockId * blockDim.x + threadIdx.x;

    /*
     * shared value pair products array, where BLOCK_SIZE is a power of 2
     *
     * To improve performance, increase its size by a multiple of BLOCK_SIZE, so that each thread loads more than 1 element!
     * (outDots_g length decreases by the same factor, and you need to range check and initialize memory)
     * -> see the Harris GPU optimisation / parallel reduction slides for more information.
     */
    __shared__ double dots_s[BLOCK_SIZE];

    /*
     * initialize shared memory array and calculate dot product of two values;
     * shared memory always needs to be initialized, it is never 0 by default, else garbage is read later!
     */
    if(tId < inCount_s)
        dots_s[threadIdx.x] = inValuesA_g[tId] * inValuesB_g[tId];
    else
        dots_s[threadIdx.x] = 0;
    __syncthreads();

    // do parallel reduction on shared memory array to sum up values
    reductionAdd(dots_s, dots_s[0]); // see my thesis link

    // output one partial dot product per block
    if(threadIdx.x == 0)
        outDots_g[blockId] = dots_s[0];

    // start a new parallel reduction kernel to sum up outDots_g!
}
Edit: removed unnecessary points.