nvcc compiled object on windows not a valid file format - cuda

Since I did not have access to an NVIDIA card, I was using GPUOcelot to compile and run my programs. I had separated my CUDA kernel and my C++ program into two files (because I was using C++11 features), so I was building and running my program as follows:
nvcc -c my_kernel.cu -arch=sm_20
g++ -std=c++0x -c my_main.cpp
g++ my_kernel.o my_main.o -o latest_output.o `OcelotConfig -l`
I have recently been given access to a Windows box which has an NVIDIA card. I downloaded the CUDA toolkit for Windows and MinGW g++. Now I run
nvcc -c my_kernel.cu -arch=sm_20
g++ -std=c++0x -c my_main.cpp
The nvcc call now produces my_kernel.obj instead of my_kernel.o. When I try to link the two with g++ as I did before,
g++ my_kernel.obj my_main.o -o m
I get the following error:
my_kernel.obj: file not recognized: File format not recognized
collect2.exe: error: ld returned 1 status
Could you please help me resolve this problem? Thanks.

nvcc is a compiler wrapper that invokes the device compiler and the host compiler under the hood (it can also invoke the host linker, but you're using -c so not doing linking). On Windows, the supported host compiler is cl.exe from Visual Studio.
Linking two object files created with two different C++ compilers is typically not possible, even in CPU-only code, because the ABIs are different. The error message you are seeing is simply telling you that the object file format produced by cl.exe (via nvcc) is not recognized by g++.
You need to compile my_main.cpp with cl.exe; if that produces errors, then that's a different question!
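For example, assuming the MSVC environment is set up (e.g. from an "x64 Native Tools Command Prompt"), a minimal sketch of the whole workflow could look like this; the /EHsc flag and the output name are just illustrative:
nvcc -c my_kernel.cu -arch=sm_20
cl /EHsc /c my_main.cpp
nvcc my_kernel.obj my_main.obj -o latest_output.exe
Here nvcc performs the final link, invoking the MSVC linker under the hood, so both object files come from the same host toolchain.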

Related

riscv64 linux kernel compilation issue

I am trying to compile the Linux kernel for riscv64 using the following link:
https://risc-v-getting-started-guide.readthedocs.io/en/latest/linux-qemu.html
While building Linux with the command make ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- defconfig,
the following error shows up:
scripts/kconfig.include:35 compiler riscv64-unknown-linux-gnu-gcc not found in PATH
scripts/kconfig/Makefile:82:recipe for target 'defconfig' failed
I have added the toolchain's path, but it is still not working. I have attached a screenshot of the folder structure and the error.
I would suggest providing the full prefix for your toolchain in the make command, for example:
wget https://toolchains.bootlin.com/downloads/releases/toolchains/riscv64/tarballs/riscv64--glibc--bleeding-edge-2020.02-2.tar.bz2
mkdir -p /opt/bootlin
tar jxf riscv64--glibc--bleeding-edge-2020.02-2.tar.bz2 -C /opt/bootlin
make ARCH=riscv CROSS_COMPILE=/opt/bootlin/riscv64--glibc--bleeding-edge-2020.02-2/bin/riscv64-buildroot-linux-gnu- mrproper defconfig Image
Compilation should then complete without errors (using Linux 5.7.11 here):
HOSTCC scripts/basic/fixdep
HOSTCC scripts/kconfig/conf.o
HOSTCC scripts/kconfig/confdata.o
HOSTCC scripts/kconfig/expr.o
LEX scripts/kconfig/lexer.lex.c
YACC scripts/kconfig/parser.tab.[ch]
HOSTCC scripts/kconfig/lexer.lex.o
.../...
LD vmlinux.o
MODPOST vmlinux.o
MODINFO modules.builtin.modinfo
GEN modules.builtin
LD .tmp_vmlinux.kallsyms1
KSYM .tmp_vmlinux.kallsyms1.o
LD .tmp_vmlinux.kallsyms2
KSYM .tmp_vmlinux.kallsyms2.o
LD vmlinux
SYSMAP System.map
OBJCOPY arch/riscv/boot/Image
Kernel: arch/riscv/boot/Image is ready
It is saying compiler riscv64-unknown-linux-gnu-gcc not found, which means either the RISC-V GNU toolchain is not installed on your machine, or it cannot be found by the shell instance in which you are trying to compile the Linux kernel.
To check whether the RISC-V GNU toolchain is installed, create a simple C file and try to compile it with the toolchain using the following command:
riscv64-unknown-linux-gnu-gcc <filename.c> -o <binaryname>
1. If RISC-V GNU Toolchain is not installed:
Follow the instructions here to install the RISC-V GNU toolchain, and keep in mind to build it with make linux instead of make.
2. If the RISC-V GNU toolchain is installed, or you have just finished installing it:
Add it to the $PATH variable inside the .bashrc file in your home directory, as in the sketch below.
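A minimal sketch, assuming the toolchain was installed under /opt/riscv (adjust the prefix to wherever yours actually lives):
# in ~/.bashrc
export PATH=/opt/riscv/bin:$PATH
# then reload the configuration and re-check
source ~/.bashrc
riscv64-unknown-linux-gnu-gcc --version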
Then try compiling your kernel again.

Set default host compiler for nvcc

I have just installed Debian Stretch (9) and CUDA 8 on a new GPU server. Stretch does not come with older versions of gcc, so I need to use clang as the host compiler (nvcc does not support gcc 6). I can do this by invoking nvcc as:
nvcc -ccbin clang-3.8
Is there any way to achieve this system-wide, e.g. in a CUDA config file or an environment variable?
The nvcc documentation does not list any environment variable for changing ccbin, only the command-line option:
http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html
--compiler-bindir directory, -ccbin Specify the directory in which the compiler executable resides. The host compiler executable name can be also specified to ensure that the correct host compiler is selected.
The Linux installation guide has no such info either: http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
You may try creating an nvcc wrapper script and putting its directory earlier in the PATH environment variable, like this:
mkdir ~/supernvcc
echo '#!/bin/sh' > ~/supernvcc/nvcc
echo `which nvcc` -ccbin clang-3.8 '"$@"' >> ~/supernvcc/nvcc
chmod +x ~/supernvcc/nvcc
export PATH=/home/`id -un`/supernvcc:$PATH
(Repeat the last line with export in every new shell before using nvcc, or add it to your .bashrc or other shell init script.)
PS: on some installs nvcc is itself a shell script, so you can just copy it and edit the copy:
cat `which nvcc`
UPDATE: people recommend symlinking the correct gcc version into CUDA's internal directory /usr/local/cuda/bin/:
sudo ln -s /usr/bin/gcc-4.4 /usr/local/cuda/bin/gcc
You can use NVCC_PREPEND_FLAGS and NVCC_APPEND_FLAGS, as described in the official docs, to inject -ccbin into all calls of nvcc.
For example, I have the following in my ~/.bash_profile:
export NVCC_PREPEND_FLAGS='-ccbin /home/linuxbrew/.linuxbrew/bin/g++-11'

CUDA *.cpp files

Is there a flag I can pass to nvcc to make it treat a .cpp file as it would a .cu file? I would rather not have to do cp x.cpp x.cu; nvcc x.cu; rm x.cu.
I ask because I have cpp files in my library that I would like to compile with/without CUDA based on particular flags passed to the Makefile.
Yes; referring to the nvcc documentation, the flag is -x:
nvcc -x cu test.cpp
will compile test.cpp as if it were a .cu file (i.e. pass it through the CUDA toolchain).
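That also covers the Makefile use case from the question: you can switch the compiler invocation on a variable. A minimal sketch of the two commands involved (the file name and the USE_CUDA define are just illustrative):
# CUDA-enabled build: route the .cpp file through the CUDA toolchain
nvcc -x cu -DUSE_CUDA -c x.cpp -o x.o
# plain C++ build
g++ -c x.cpp -o x.o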

Converting Octave to Use CuBLAS

I'd like to convert Octave to use cuBLAS for matrix multiplication. This video seems to indicate it is as simple as typing 28 characters:
Using CUDA Library to Accelerate Applications
In practice it's a bit more complex than this. Does anyone know what additional work must be done to make the modifications made in this video compile?
UPDATE
Here's the method I'm trying
in dMatrix.cc add
#include <cublas.h>
in dMatrix.cc change all occurrences of (preserving case)
dgemm
to
cublas_dgemm
in my build terminal set
export CC=nvcc
export CFLAGS="-lcublas -lcudart"
export CPPFLAGS="-I/usr/local/cuda/include"
export LDFLAGS="-L/usr/local/cuda/lib64"
The error I receive is:
libtool: link: g++ -I/usr/include/freetype2 -Wall -W -Wshadow -Wold-style-cast
-Wformat -Wpointer-arith -Wwrite-strings -Wcast-align -Wcast-qual -g -O2
-o .libs/octave octave-main.o -L/usr/local/cuda/lib64
../libgui/.libs/liboctgui.so ../libinterp/.libs/liboctinterp.so
../liboctave/.libs/liboctave.so -lutil -lm -lpthread -Wl,-rpath
-Wl,/usr/local/lib/octave/3.7.5
../liboctave/.libs/liboctave.so: undefined reference to `cublas_dgemm_'
EDIT2:
The method described in this video requires the use of the Fortran "thunking" library bindings for cuBLAS.
These steps worked for me:
Download octave 3.6.3 from here:
wget ftp://ftp.gnu.org/gnu/octave/octave-3.6.3.tar.gz
extract all files from the archive:
tar -xzvf octave-3.6.3.tar.gz
change into the octave directory just created:
cd octave-3.6.3
make a directory for your "thunking cublas library":
mkdir mycublas
change into that directory:
cd mycublas
build the "thunking cublas library":
g++ -c -fPIC -I/usr/local/cuda/include -I/usr/local/cuda/src -DCUBLAS_GFORTRAN -o fortran_thunking.o /usr/local/cuda/src/fortran_thunking.c
ar rvs libmycublas.a fortran_thunking.o
switch back to the main build directory
cd ..
run octave's configure with additional options:
./configure --disable-docs LDFLAGS="-L/usr/local/cuda/lib64 -lcublas -lcudart -L/home/user2/octave/octave-3.6.3/mycublas -lmycublas"
Note that in the above command line, you will need to change the directory for the second -L switch to the path of the mycublas directory you created earlier.
Now edit octave-3.6.3/liboctave/dMatrix.cc according to the instructions given in the video. It should be sufficient to replace every instance of dgemm with cublas_dgemm and every instance of DGEMM with CUBLAS_DGEMM. In the octave 3.6.3 version I used, there were 3 such instances of each (lower case and upper case).
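If you prefer not to edit the file by hand, a sed one-liner along these lines should perform the same substitution (just a sketch, assuming GNU sed and that you are still in the octave-3.6.3 directory):
sed -i 's/\bdgemm\b/cublas_dgemm/g; s/\bDGEMM\b/CUBLAS_DGEMM/g' liboctave/dMatrix.cc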
Now you can build octave:
make
(make sure you are in the octave-3.6.3 directory)
At this point, for me, Octave built successfully. I did not pursue make install although I assume that would work. I simply ran octave using the ./run-octave script in the octave-3.6.3 directory.
The above steps assume a proper and standard CUDA 5.0 install. I will try to respond to CUDA-specific questions or issues, but there are any number of problems that may arise with a general Octave install on your platform. I'm not an octave expert and I won't be able to respond to those. I used CentOS 6.2 for this test.
This method, as indicated, involves modification of the C source files of octave.
Another method was covered in some detail in the S3527 session at the GTC 2013 GPU Tech Conference. This session was actually a hands-on laboratory exercise. Unfortunately the materials on that are not conveniently available. However the method there did not involve any modification of GNU Octave source, but instead uses the LD_PRELOAD capability of Linux to intercept the BLAS library calls and re-direct (the appropriate ones) to the cublas library.
A newer, better method (using the NVBLAS intercept library) is discussed in this blog article.
I was able to produce a compiled executable using the information supplied. It's a horrible hack, but it works.
The process looks like this:
First produce an object file for fortran_thunking.c
sudo /usr/local/cuda-5.0/bin/nvcc -O3 -c -DCUBLAS_GFORTRAN fortran_thunking.c
Then move that object file to the src subdirectory in octave
cp /usr/local/cuda-5.0/src/fortran_thunking.o ./octave/src
Run make. The compile will fail on the last step. Change to the src directory:
cd src
Then execute the failing final line with the addition of ./fortran_thunking.o -lcudart -lcublas just after octave-main.o. This produces the following command:
g++ -I/usr/include/freetype2 -Wall -W -Wshadow -Wold-style-cast -Wformat
-Wpointer-arith -Wwrite-strings -Wcast-align -Wcast-qual
-I/usr/local/cuda/include -o .libs/octave octave-main.o
./fortran_thunking.o -lcudart -lcublas -L/usr/local/cuda/lib64
../libgui/.libs/liboctgui.so ../libinterp/.libs/liboctinterp.so
../liboctave/.libs/liboctave.so -lutil -lm -lpthread -Wl,-rpath
-Wl,/usr/local/lib/octave/3.7.5
An octave binary will be created in the src/.libs directory. This is your octave executable.
In more recent versions of CUDA you don't have to recompile anything, at least as I found on Debian. First, create a config file for NVBLAS (a cuBLAS wrapper); it won't work at all without one.
tee nvblas.conf <<EOF
NVBLAS_CPU_BLAS_LIB $(dpkg -L libopenblas-base | grep libblas)
NVBLAS_GPU_LIST ALL
EOF
Then use Octave as you usually would, running it with:
LD_PRELOAD=libnvblas.so octave
NVBLAS will do what it can on a GPU while relaying everything else to OpenBLAS.
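As a quick sanity check you can time a large matrix multiply (a sketch; the matrix size is arbitrary and the nvblas.conf created above must be in the current directory or pointed to by NVBLAS_CONFIG_FILE):
LD_PRELOAD=libnvblas.so octave --eval 'N = 4096; A = rand(N); B = rand(N); tic; C = A*B; toc'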
Further reading:
Benchmark for Octave.
Relevant slides for NVBLAS presentation.
Manual for nvblas.conf
It is worth noting that you may not enjoy all the benefits of GPU computing, depending on the CPU and GPU used: OpenBLAS is quite fast on current multi-core processors, so fast that the time spent copying data to the GPU, working on it, and copying the result back can come close to the time needed to do the job directly on the CPU. Check for yourself. GPUs are usually more energy efficient, though.

CUDA bandwidthTest.cu

I want to compile and run bandwidthTest.cu from the CUDA SDK. I get the following two errors when I compile it with:
nvcc -arch=sm_20 bandwidthTest.cu -o bTest
cutil_inline.h: no such file or directory
shrUtils.h: no such file or directory
How can I solve this problem?
Add the current directory to your include search path.
nvcc -I. -arch=sm_20 bandwidthTest.cu -o bTest
Probably the two header files you tried to #include are not present in that directory. If you use the Visual Studio IDE, you can see the red underlining.
Find the path to cutil_inline.h and the path to shrUtils.h and put them in the compilation line in the following way:
nvcc -I<path to cutil_inline.h> -I<path to shrUtils.h> -arch=sm_20 bandwidthTest.cu -o bTest
Also, consider using a makefile for the compilation if you aren't already.
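For example, with the older GPU Computing SDK layout this might look like the following (the install location is just an assumption; adjust the paths to wherever your SDK actually lives):
nvcc -I ~/NVIDIA_GPU_Computing_SDK/C/common/inc -I ~/NVIDIA_GPU_Computing_SDK/shared/inc -arch=sm_20 bandwidthTest.cu -o bTest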