I just wrote a simple CUDA Thrust program, but when I run it I get this error: thrust::system::system_error at position 0x0037f99c.
Can someone help me figure out why this happens?
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <iostream>
using namespace std;
using namespace thrust;
int main()
{
    thrust::host_vector<int> h_vec(3);
    h_vec[0] = 1; h_vec[1] = 2; h_vec[2] = 3;
    thrust::device_vector<int> d_vec(3);
    d_vec = h_vec;
    int h_sum = thrust::reduce(h_vec.begin(), h_vec.end());
    int d_sum = thrust::reduce(d_vec.begin(), d_vec.end());
    return 0;
}
A few suggestions with Thrust:
If you are compiling your code with -G and having trouble, try compiling without -G.
You can catch the errors that Thrust throws to get more information (see the sketch after these suggestions).
It's always recommended to compile your code for the architecture of the GPU you are using. So if you are on a cc2.0 GPU, compile with -arch=sm_20; if you are on a cc3.0 GPU, compile with -arch=sm_30, etc.
Finally, it's recommended to build a 64-bit project. On Windows you would select a release/x64 project.
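As a minimal sketch of the error-catching suggestion (the structure mirrors the question's program; only standard Thrust headers are assumed), wrapping the Thrust calls in a try/catch around thrust::system_error reports the underlying CUDA error string:
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <thrust/system_error.h>
#include <iostream>
int main()
{
    try {
        thrust::host_vector<int> h_vec(3);
        h_vec[0] = 1; h_vec[1] = 2; h_vec[2] = 3;
        thrust::device_vector<int> d_vec = h_vec;   // host -> device copy
        int d_sum = thrust::reduce(d_vec.begin(), d_vec.end());
        std::cout << "device sum: " << d_sum << std::endl;
    } catch (thrust::system_error &e) {
        // e.what() contains the underlying CUDA error string
        std::cerr << "Thrust error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}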
Related
So I'm trying to get started with GPU programming, using the Thrust library to simplify things.
I have created a test program to work with it and see how it works; however, whenever I try to create a thrust::device_vector with non-zero size, the program crashes with "Run-time Check Failure #3 - The variable 'result' is being used without being initialized." (this comes from the allocator_traits.inl file). And... I have no idea how to fix this.
The following is all that is needed to cause this error.
#include <thrust/device_vector.h>
int main()
{
    int N = 100;
    thrust::device_vector<int> d_a(N);
    return 0;
}
I suspect it may be a problem with how the environment is set up, so the details on that are:
Visual Studio 2019, a CUDA 11.0 Runtime project (the example program given when opening this project works fine, however), Thrust version 1.9, and the GPU is a GTX 970.
This issue only seems to manifest with the Thrust version (1.9.x) associated with CUDA 11.0, and only in debug projects on Windows/Visual Studio.
Some workarounds would be to switch to building a release project, or just click "Ignore" on the dialogs that appear at runtime. According to my testing this allows ordinary run or debug at that point.
I have not confirmed it, but I believe this issue is fixed in the latest thrust (1.10.x) just released (although not part of any formal CUDA release at this moment, I would expect it to be part of some future CUDA release).
Following Robert Crovella's answer, I fixed this issue by changing the corresponding lines of code in the Thrust library to the code from GitHub. More precisely, in the file ...\CUDA\v11.1\include\thrust\detail\allocator\allocator_traits.inl I replaced the following function
template<typename Alloc>
__host__ __device__
typename disable_if<
    has_member_system<Alloc>::value,
    typename allocator_system<Alloc>::type
>::type
system(Alloc &)
{
    // return a copy of a default-constructed system
    typename allocator_system<Alloc>::type result;
    return result;
}
by
template<typename Alloc>
__host__ __device__
typename disable_if<
    has_member_system<Alloc>::value,
    typename allocator_system<Alloc>::type
>::type
system(Alloc &)
{
    // return a copy of a default-constructed system
    return typename allocator_system<Alloc>::type();
}
Toy program:
#include <iostream>
#include <vector>
// Matrix side size (they are square).
const int N = 3;
const int num_mats = 14;
// Rotation matrices.
__constant__ float rot_mats_device[num_mats*N*N];
int main() {
    std::vector<float> rot_mats_host(num_mats*N*N);
    for (int i = 0; i < rot_mats_host.size(); i++)
        rot_mats_host[i] = i;
    auto errMemcpyToSymbol = cudaMemcpyToSymbol(rot_mats_device,
                                                rot_mats_host.data(),
                                                sizeof(rot_mats_device));
    if (errMemcpyToSymbol != cudaSuccess) {
        std::cout << "MemcpyToSymbol error: " <<
            cudaGetErrorString(errMemcpyToSymbol) << std::endl;
    }
}
Compiled with
nvcc -arch=sm_52 -std=c++11 cuda_invalid_symbol_error.cu -o cuda_invalid_symbol_error
does not give any error during runtime. However, with
nvcc -gencode arch=compute_52,code=sm_52 -std=c++11 cuda_invalid_symbol_error.cu -o cuda_invalid_symbol_error
it will fail with the message MemcpyToSymbol error: invalid device symbol.
Why do the latter instructions for compilation give the runtime error?
Specs: CUDA 8.0, Ubuntu 16.04, GeForce GTX 1060 (I know the cc of this card is 6.1).
Why do the latter instructions for compilation give the runtime error?
-arch=sm_xx is shorthand for:
-gencode arch=compute_xx,code=sm_xx -gencode arch=compute_xx,code=compute_xx
In your case, where xx is 52, this command embeds both cc 5.2 PTX code (the second gencode instance) and cc 5.2 SASS code (the first gencode instance). The SASS code for cc 5.2 will not run on your cc6.1 device, so the runtime JIT-compiles the cc 5.2 PTX code to create an object compatible with your cc 6.1 architecture. All is happy and everything works.
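Spelled out for this case (this is purely the expansion of the shorthand above, nothing new), the working command line is equivalent to:
nvcc -gencode arch=compute_52,code=sm_52 -gencode arch=compute_52,code=compute_52 -std=c++11 cuda_invalid_symbol_error.cu -o cuda_invalid_symbol_error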
When you instead compile with:
nvcc -gencode arch=compute_52,code=sm_52 ...
you are omitting the PTX code from the compiled object. Only the cc 5.2 SASS code is present. This code will not run on your cc6.1 device, and the runtime has no other options, so a "hidden" error of NO_BINARY_FOR_GPU occurs when the runtime attempts to load the GPU image for your program. Since no image gets loaded, no device symbol is present/usable. Since it is not present/usable, you get the invalid device symbol error when you attempt to refer to it using the CUDA runtime API.
If you had performed another CUDA runtime API call prior to this which forced a sufficient or equivalent level of initialization of the CUDA runtime (and checked the returned error code), you would have received a NO_BINARY_FOR_GPU error or similar. Certainly, for example, if you had attempted to launch a GPU kernel, you would receive that error. There may be other CUDA runtime API calls that would force an equivalent or sufficient level of lazy initialization, but I don't have a list of those.
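As an illustrative sketch of that point (my own addition, not from the original program), forcing the image load with a trivial kernel launch and checking the result should, to my understanding, surface a "no kernel image is available" type error before the symbol copy is even attempted:
#include <iostream>
const int N = 3;
const int num_mats = 14;
__constant__ float rot_mats_device[num_mats*N*N];
// Do-nothing kernel used only to force the module image to be loaded.
__global__ void dummy_kernel() {}
int main() {
    dummy_kernel<<<1,1>>>();
    cudaError_t err = cudaGetLastError();   // reports the failed launch/load
    if (err != cudaSuccess)
        std::cout << "Launch error: " << cudaGetErrorString(err) << std::endl;
    float host_data[num_mats*N*N] = {0};
    err = cudaMemcpyToSymbol(rot_mats_device, host_data, sizeof(rot_mats_device));
    if (err != cudaSuccess)
        std::cout << "MemcpyToSymbol error: " << cudaGetErrorString(err) << std::endl;
    return 0;
}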
I have a function in my program called float valueAt(float3 v). It's supposed to return the value of a function at the given point. The function is user-specified. I have an interpreter for this function at the moment, but others recommended I compile the function at runtime so it runs as machine code and is faster.
How do I do this? I believe I know how to load the function when I have PTX generated, but I have no idea how to generate the PTX.
CUDA provides no built-in way to compile anything other than PTX code at runtime.
What you want can be done, but not using the standard CUDA APIs. PyCUDA provides an elegant just-in-time compilation method for CUDA C code which includes behind the scenes forking of the toolchain to compile to device code and loading using the runtime API. The (possible) downside is that you need to use Python for the top level of your application, and if you are shipping code to third parties, you might need to ship a working Python distribution too.
The only other alternative I can think of is OpenCL, which does support runtime compilation (that is all it supported until recently). The C99 language base is a lot more restrictive than what CUDA offers, and I find the APIs to be very verbose, but the runtime compilation model works well.
I've thought about this problem for a while, and while I don't think this is a "great" solution, it does seem to work so I thought I would share it.
The basic idea is to use Linux to spawn processes that compile and then run the compiled code. I think this is pretty much a no-brainer, but since I put together the pieces, I'll post instructions here in case it's useful for somebody else.
The problem statement in the question is to be able to take a file that contains a user-defined function, let's assume it is a function of a single variable f(x), i.e. y = f(x), and that x and y can be represented by float quantities.
The user would edit a file called fx.txt that contains the desired function. This file must conform to C syntax rules.
fx.txt:
y=1/x
This file then gets included in the __device__ function that will be holding it:
user_testfunc.cuh:
__device__ float fx(float x){
    float y;
#include "fx.txt"
    ;
    return y;
}
which gets included in the kernel that is called via a wrapper.
cudalib.cu:
#include <math.h>
#include "cudalib.h"
#include "user_testfunc.cuh"
__global__ void my_kernel(float x, float *y){
    *y = fx(x);
}
float cudalib_compute_fx(float x){
    float *d, *h_d;
    h_d = (float *)malloc(sizeof(float));
    cudaMalloc(&d, sizeof(float));
    my_kernel<<<1,1>>>(x, d);
    cudaMemcpy(h_d, d, sizeof(float), cudaMemcpyDeviceToHost);
    return *h_d;
}
cudalib.h:
float cudalib_compute_fx(float x);
The above files get built into a shared library:
nvcc -arch=sm_20 -Xcompiler -fPIC -shared cudalib.cu -o libmycudalib.so
We need a main application to use this shared library.
t452.cu:
#include <stdio.h>
#include <stdlib.h>
#include "cudalib.h"
int main(int argc, char* argv[]){
    if (argc == 1){
        // recompile lib, and spawn new process
        int retval = system("nvcc -arch=sm_20 -Xcompiler -fPIC -shared cudalib.cu -o libmycudalib.so");
        char scmd[128];
        sprintf(scmd, "%s skip", argv[0]);
        retval = system(scmd);
    }
    else { // compute f(x) at x = 2.0
        printf("Result is: %f\n", cudalib_compute_fx(2.0));
    }
    return 0;
}
Which is compiled like this:
nvcc -arch=sm_20 -o t452 t452.cu -L. -lmycudalib
At this point, the main application (t452) can be executed and it will produce the result of f(2.0) which is 0.5 in this case:
$ LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH ./t452
Result is: 0.500000
The user can then modify the fx.txt file:
$ vi fx.txt
$ cat fx.txt
y = 5/x
And just re-run the app, and the new functional behavior is used:
$ LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH ./t452
Result is: 2.500000
This method takes advantage of the fact that upon recompilation/replacement of a shared library, a new linux process will pick up the new shared library. Also note that I've omitted several kinds of error checking for clarity. At a minimum I would check CUDA errors, and I would also probably delete the shared object (.so) library before recompiling it, and then test for its existence after compilation, to do a basic test that the compilation proceeded successfully.
This method entirely uses the runtime API to achieve this goal, so as a result the user would have to have the CUDA toolkit installed on their machine and appropriately set up so that nvcc is available in the PATH. Using the driver API with PTX code would make this process much cleaner (and not require the toolkit on the user's machine), but AFAIK there is no way to generate PTX from CUDA C without using nvcc or a user-created toolchain built on the nvidia llvm compiler tools. In the future, there may be a more "integrated" approach available in the "standard" CUDA C toolchain, or perhaps even by the driver.
A similar approach can be arranged using separate compilation and linking of device code, such that the only source code that needs to be exposed to the user is in user_testfunc.cu (and fx.txt).
EDIT: There is now a CUDA runtime compilation facility, which should be used in place of the above.
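For completeness, a minimal sketch of that runtime compilation route (NVRTC to produce PTX, then the driver API to load and launch it) might look like the following. The kernel string, file and variable names, and the compute_52 option are my own illustration, error checking is omitted, and you would link with -lnvrtc -lcuda:
#include <nvrtc.h>
#include <cuda.h>
#include <cstdio>
#include <vector>
int main() {
    // CUDA C source held as a string at runtime (could be assembled from fx.txt).
    const char *src =
        "extern \"C\" __global__ void my_kernel(float x, float *y){ *y = 1.0f/x; }";
    // Compile the string to PTX with NVRTC.
    nvrtcProgram prog;
    nvrtcCreateProgram(&prog, src, "fx.cu", 0, NULL, NULL);
    const char *opts[] = {"--gpu-architecture=compute_52"};
    nvrtcCompileProgram(prog, 1, opts);
    size_t ptxSize;
    nvrtcGetPTXSize(prog, &ptxSize);
    std::vector<char> ptx(ptxSize);
    nvrtcGetPTX(prog, ptx.data());
    nvrtcDestroyProgram(&prog);
    // Load the PTX with the driver API and launch the kernel.
    cuInit(0);
    CUdevice dev;  cuDeviceGet(&dev, 0);
    CUcontext ctx; cuCtxCreate(&ctx, 0, dev);
    CUmodule mod;  cuModuleLoadData(&mod, ptx.data());
    CUfunction kern; cuModuleGetFunction(&kern, mod, "my_kernel");
    float x = 2.0f, result = 0.0f;
    CUdeviceptr d_y; cuMemAlloc(&d_y, sizeof(float));
    void *args[] = { &x, &d_y };
    cuLaunchKernel(kern, 1,1,1, 1,1,1, 0, NULL, args, NULL);
    cuMemcpyDtoH(&result, d_y, sizeof(float));
    printf("Result is: %f\n", result);
    cuMemFree(d_y);
    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}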
When I try to compile the following code with c++ on OS X 10.8, it works fine - no compile errors.
#include <gmpxx.h>
int main(int argc, const char * argv[]) { }
However, when I try to do the same with nvcc, I get a ton of errors:
/usr/local/Cellar/gcc47/4.7.3/gcc/lib/gcc/x86_64-apple-darwin12.5.0/4.7.3/../../../../include/c++/4.7.3/limits(1405): error: identifier "__int128" is undefined
/usr/local/Cellar/gcc47/4.7.3/gcc/lib/gcc/x86_64-apple-darwin12.5.0/4.7.3/../../../../include/c++/4.7.3/limits(1421): error: function call is not allowed in a constant expression
...
How can I use GMP with NVCC/CUDA? To clarify, I don't intend to perform GMP calculations on the device, just the host.
Create a .cpp module that you compile with your host compiler, and include your GMP code there.
Create a separate .cu module that you compile with nvcc, and include your CUDA code there.
Link them together.
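A minimal sketch of that arrangement (the file names, the factorial example, and the build commands are my own illustration, assuming GMP is on the default search paths):
gmp_host.cpp:
// Host-only translation unit; all GMP usage lives here.
#include <gmpxx.h>
#include <iostream>
void run_cuda_part();   // implemented in cuda_part.cu
int main() {
    mpz_class big = 1;
    for (int i = 1; i <= 50; i++) big *= i;   // 50! computed on the host with GMP
    std::cout << "50! = " << big << std::endl;
    run_cuda_part();
    return 0;
}
cuda_part.cu:
// Compiled by nvcc; no GMP headers are included here.
#include <cstdio>
__global__ void hello() { printf("hello from the device\n"); }
void run_cuda_part() {
    hello<<<1,1>>>();
    cudaDeviceSynchronize();
}
Build and link, for example:
g++  -c gmp_host.cpp -o gmp_host.o
nvcc -c cuda_part.cu -o cuda_part.o
nvcc gmp_host.o cuda_part.o -o app -lgmpxx -lgmp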
I am developing a CUDA 4.0 application running on a Fermi card. According to the specs, Fermi has Compute Capability 2.0 and therefore should support non-inlined function calls.
I compile every class I have with nvcc 4.0 into a distinct object file, then link them all with g++-4.4.
Consider the following code:
[File A.cuh]
#include <cuda_runtime.h>
struct A
{
    __device__ __host__ void functionA();
};
[File B.cuh]
#include <cuda_runtime.h>
struct B
{
    __device__ __host__ void functionB();
};
[File A.cu]
#include "A.cuh"
#include "B.cuh"
void A::functionA()
{
    B b;
    b.functionB();
}
Attempting to compile A.cu with nvcc -o A.o -c A.cu -arch=sm_20 outputs Error: External calls are not supported (found non-inlined call to _ZN1B9functionBEv).
I must be doing something wrong, but what ?
As explained on this thread on the NVidia forums, it appears that even though Fermi supports non-inlined functions, nvcc still needs to have all the functions available during compilation, i.e. in the same source file: there is no linker (yep, that's a pity...).
functionB is declared but not defined in that translation unit, so the call is treated as an external call. As the error says, external calls are not supported. Define functionB in the same compilation unit and it will work.
True, CUDA 5.0 does it, though not by default. I can't get it to expose external device variables, but device methods work just fine.
The nvcc option is "-rdc=true". In Visual Studio and Nsight it is an option in the project properties under Configuration Properties -> CUDA C/C++ -> Common -> Generate Relocatable Device Code.
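As a command-line sketch (assuming a B.cu exists that defines B::functionB; the file names otherwise match the question):
nvcc -arch=sm_20 -rdc=true -c A.cu -o A.o
nvcc -arch=sm_20 -rdc=true -c B.cu -o B.o
nvcc -arch=sm_20 A.o B.o -o app
The final nvcc invocation performs the device-link step that resolves the cross-file call to B::functionB.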