CMake: how to add CUDA to an existing project

I have a project that builds a library and I want to add some CUDA support to it.
The structure is:
|Basedir
|_subdir1
|_subdir2
The basic structure of the CMakeLists.txt files is as follows (subdir2 is not important).
In Basedir:
cmake_minimum_required(VERSION 2.6)
PROJECT(MYPROJECT)
find_package(CUDA)
INCLUDE_DIRECTORIES(${MYPROJECT_SOURCE_DIR})
ADD_SUBDIRECTORY(subdir1)
ADD_SUBDIRECTORY(subdir2)
In subdir1:
ADD_LIBRARY(mylib SHARED
file1.cpp
file2.cpp
file3.cpp
)
INSTALL(
TARGETS mylib
DESTINATION lib
PERMISSIONS
OWNER_READ OWNER_WRITE OWNER_EXECUTE
GROUP_READ GROUP_EXECUTE
WORLD_READ WORLD_EXECUTE
)
FILE(GLOB_RECURSE HEADERS RELATIVE ${MYPROJECT_SOURCE_DIR}/myproject *.h)
FOREACH(HEADER ${HEADERS})
STRING(REGEX MATCH "(.*)[/\\]" DIR ${HEADER})
INSTALL(FILES ${HEADER} DESTINATION include/myproject/${DIR})
ENDFOREACH(HEADER)
I don't really know how to add the CUDA support to it. I want to replace file2.cpp with file2.cu, and I did that, but it didn't build the .cu file, only the .cpp files.
Do I have to add CUDA_ADD_EXECUTABLE() to include any CUDA files? How would I then link it to the other files?
I tried adding the following to the CMakeLists.txt in subdir1:
CUDA_ADD_EXECUTABLE(cuda file2.cu OPTIONS -arch sm_20)
That compiles the file, but it builds an executable named cuda. How do I link it to mylib?
Just with this?
TARGET_LINK_LIBRARIES(cuda mylib)
I have to admit that I'm not experienced with CMake, but I guess you've figured that out already.

You can use CUDA_ADD_LIBRARY for the mylib target. It works like CUDA_ADD_EXECUTABLE, but for libraries.
CUDA_ADD_LIBRARY(mylib SHARED
file1.cpp
file2.cu
file3.cpp
OPTIONS -arch sm_20
)
TARGET_LINK_LIBRARIES(mylib ${CUDA_LIBRARIES})
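For reference, a minimal sketch of the top-level ordering this relies on: the CUDA_ADD_LIBRARY macro and the CUDA_LIBRARIES variable only exist after find_package(CUDA) has run, so in Basedir/CMakeLists.txt the find_package call must come before ADD_SUBDIRECTORY(subdir1), as it already does in the question (REQUIRED is added here only so configuration fails early if CUDA is missing):
cmake_minimum_required(VERSION 2.6)
PROJECT(MYPROJECT)
find_package(CUDA REQUIRED)   # defines CUDA_ADD_LIBRARY and CUDA_LIBRARIES
INCLUDE_DIRECTORIES(${MYPROJECT_SOURCE_DIR})
ADD_SUBDIRECTORY(subdir1)     # subdir1 can now use the CUDA_* macros
ADD_SUBDIRECTORY(subdir2)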

Related

How to link cusparse using CMakeLists.txt

How can I add the cusparse library from CUDA in a CMakeLists.txt file, so that the nvcc compiler automatically links it with -lcusparse? I already added the line
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS};-lcusparse)
in CMakeLists.txt with no success. It looks like I'm missing something, because Nsight throws the error
undefined reference to 'cusparseDestroyMatDescr'.
However, when I comment out the line where cusparseDestroyMatDescr is called, the Nsight project builds with no error, even with these three lines of code included:
cusparseStatus_t status;
cusparseHandle_t handle=0;
cusparseMatDescr_t descr=0;
So it looks like it knows what cusparseStatus_t and the other types are, but it does not know what cusparseDestroyMatDescr is.
What am I missing?
The correct way in CMake to link a library is using
target_link_libraries( target library ).
If you use FindCUDA to locate the CUDA installation, the variable CUDA_cusparse_LIBRARY will be defined. Thus, all you need to do is
target_link_libraries( target ${CUDA_cusparse_LIBRARY} )
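For context, a minimal sketch of how that fits together with FindCUDA; the target name my_target and the source file my_kernel.cu are hypothetical placeholders:
find_package(CUDA REQUIRED)        # defines CUDA_cusparse_LIBRARY, among others
cuda_add_executable(my_target my_kernel.cu)
target_link_libraries(my_target ${CUDA_cusparse_LIBRARY})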
I recommend using the CMake CUDAToolkit package, which is available in CMake 3.17 and newer:
find_package(CUDAToolkit REQUIRED)
...
target_link_libraries(target CUDA::cusparse)

Bitbake append file to reconfigure kernel

I'm trying to reconfigure some .config variables to generate a modified kernel with Wi-Fi support enabled. The native layer/recipe for the kernel is located in this directory:
meta-layer/recipes-kernel/linux/linux-yocto_3.19.bb
First I reconfigure the native kernel to add Wi-Fi support (for example, adding CONFIG_WLAN=y):
$ bitbake linux-yocto -c menuconfig
After that, I generate a "fragment.cfg" file:
$ bitbake linux-yocto -c diffconfig
I have created this directory into my custom-layer:
custom-layer/recipes-kernel/linux/linux-yocto/
I have copied the "fragment.cfg" file into this directory:
$ cp fragment.cfg custom-layer/recipes-kernel/linux/linux-yocto/
I have created an append file to customize the native kernel recipe:
custom-layer/recipes-kernel/linux/linux-yocto_3.19.bbappend
This is the content of this append file:
FILESEXTRAPATHS_prepend:="${THISDIR}/${PN}:"
SRC_URI += "file://fragment.cfg"
After that I execute the kernel compilation:
$ bitbake linux-yocto -c compile -f
After this command, the "fragment.cfg" file can be found in this working directory:
tmp/work/platform/linux-yocto/3.19-r0
However, none of the expected variables are active in the .config file (for example, CONFIG_WLAN is not set).
How can I debug this issue? What am I doing wrong?
When adding this configuration, you want to use an append in your statement, such as (note the leading space inside the quotes; _append does not insert one for you):
SRC_URI_append = " file://fragment.cfg"
After analyzing different links and solutions proposed in various resources, I finally found the link https://community.freescale.com/thread/376369, which points to a nasty but working patch consisting of adding this function at the end of the append file:
do_configure_append() {
cat ${WORKDIR}/*.cfg >> ${B}/.config
}
It works, but I expected Yocto to manage all this. It would be nice to know what is wrong with the proposed solution. Thanks in advance!
If your recipe is based on kernel.bbclass, then fragments will not work. You need to inherit kernel-yocto.bbclass.
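For illustration, a hedged sketch of the relevant lines in a custom kernel recipe (hypothetical recipe name, not from the question); *.cfg fragments in SRC_URI are only merged into .config by kernel-yocto.bbclass:
# my-custom-kernel.bb (sketch)
inherit kernel
inherit kernel-yocto   # merges *.cfg fragments from SRC_URI into .config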
You can also use the merge_config.sh script, which is present in the kernel sources. I did something like this:
do_configure_append () {
${S}/scripts/kconfig/merge_config.sh -m -O ${WORKDIR}/build ${WORKDIR}/build/.config ${WORKDIR}/*.cfg
}
Well, unfortunately, this is not a real answer, as I haven't dug deep enough.
This was working all right for me on a Daisy-based build; however, when updating the build system to Jethro or Krogoth, I get the same issue as you.
Issue:
When adding a fragment like
custom-layer/recipes-kernel/linux/linux-yocto/cdc-ether.cfg
The configure step of the linux-yocto build won't find it. However, if you move it to:
custom-layer/recipes-kernel/linux/linux-yocto/${MACHINE}/cdc-ether.cfg
it'll work as expected. And it's a slightly less hackish way of getting it to work.
If anyone comes by, this works on Jethro and Sumo:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI_append = " \
file://fragment.cfg \
"
The FILESEXTRAPATHS documentation says:
Extends the search path the OpenEmbedded build system uses when looking for files and patches as it processes recipes and append files. The directories BitBake uses when it processes recipes are defined by the FILESPATH variable, and can be extended using FILESEXTRAPATHS.
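Putting the pieces together, a hedged sketch of a layout and bbappend matching the snippets above (same file names as in the question; the old-style _prepend/_append override syntax is kept because that is what the 3.19-era releases used):
custom-layer/recipes-kernel/linux/linux-yocto_3.19.bbappend
custom-layer/recipes-kernel/linux/linux-yocto/fragment.cfg
# linux-yocto_3.19.bbappend (sketch)
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
SRC_URI_append = " file://fragment.cfg"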

How do I specify which compiler toolchain Yocto uses to build images?

For example, how could I get my image to be compiled using
gcc-linaro-arm-linux-gnueabihf-4.7-2013.03-20130313_linux?
What does core-image-sato have to do with the toolchains (the ones supplied with Yocto)?
I don't understand...
In local.conf, specify the path of your toolchain on your system:
EXTERNAL_TOOLCHAIN = "/home/manjunath/linaro/gcc-linaro-arm-linux-gnueabihf-4.7-2014.11-20121123_linux"
In your distro configuration, add toolchain-external-linaro.inc.
More specifically: you get a directory named meta-linaro-toolchain; you may copy it completely, or add the .inc file to your distribution configuration (sources/poky/meta-yocto/conf/distro/ in my case).
Then try building again.
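A hedged sketch of what the local.conf additions could look like; the exact variable values and the name of the .inc file depend on the meta-linaro layer version you are using, so treat them as assumptions to verify against that layer:
# local.conf (sketch, assuming meta-linaro's external toolchain support)
TCMODE = "external-linaro"
EXTERNAL_TOOLCHAIN = "/home/manjunath/linaro/gcc-linaro-arm-linux-gnueabihf-4.7-2014.11-20121123_linux"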

Compiling external JS files with Cljsbuild in ClojureScript

I'm trying to compile some JS libraries that we have with lein-cljsbuild, in order to integrate them into our ClojureScript code base. First I added a goog.provide at the top of each file, and the files are organised hierarchically in a directory tree according to their namespace (as in Java). That is, namespace a.b.c is in src-js/libs/a/b/c.js.
I have put the JS files under src-js/libs in the project root, and I have the following :compiler options for lein-cljsbuild:
{:id "prod",
:source-paths ["src-cljs" "src-js"]
:compiler
{:pretty-print false,
:libs ["libs/"]
:output-to "resources/public/js/compiled-app.js",
:optimizations :simple}}
None of the JS files get compiled into the compiled-app file. What's wrong?
I also tried to put them in resources/closure-js/libs without success.
I'm using lein-cljsbuild 0.3.0.
First, unlike what is suggested in some texts, you do not need to include your private Closure library locations in any classpath configuration in your project.clj. So unless the "src-js" directory included in your :source-paths is there for some other purpose, you can remove it.
Second, the only thing to add to your project.clj for the sake of bringing in your private Closure code is the :libs reference you have made; BUT unlike what you have entered, that reference must point to a specific *.js file (or files), not merely a directory. So if the library you want to use is in a file named test.js residing in the src-js directory, your :libs entry would be "src-js/test.js". See the cljsbuild release notes if you want to use the plugin's default :libs directory option.
Third (and it looks like you know this already, but this is what tripped me up), if you are using a browser-backed REPL (the repl-listen option of cljsbuild), you still will not be able to load, reference, or use your private library assets from that REPL until you include a :require somewhere in the source for your compiled-app.js (e.g. "(ns testing (:require [myprivatelib]))"), then re-compile (lein cljsbuild once) and reload your browser page with a link to compiled-app.js. That brings in the code base. Otherwise, your browser REPL will just keep insisting that the namespace provided by your Closure library is not defined.
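Putting the second and third points together, a hedged sketch using the namespace from the question (a.b.c in src-js/libs/a/b/c.js); the surrounding namespace myapp.core is hypothetical, and exact behaviour may vary between cljsbuild versions:
;; :compiler options in project.clj (sketch) – :libs points at the file, not the directory
{:id "prod"
 :source-paths ["src-cljs"]
 :compiler {:pretty-print false
            :libs ["src-js/libs/a/b/c.js"]
            :output-to "resources/public/js/compiled-app.js"
            :optimizations :simple}}
;; and somewhere in the ClojureScript sources, require the Closure namespace
(ns myapp.core
  (:require [a.b.c]))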
I hope this helps.

Finding CUDA_SDK_ROOT_DIR

I am trying to set up a Point Cloud Library trunk build with the CUDA options enabled.
I believe I have installed CUDA correctly, following these instructions.
In the CMake options for the PCL build, some options are unrecognised.
Is there something I can manually set CUDA_SDK_ROOT_DIR to? Likewise for the other unfound options.
CUDA_SDK_ROOT_DIR should be set to the directory in which you installed NVIDIA's GPU Computing SDK. The GPU Computing SDK is downloadable from the same page at NVIDIA where you downloaded CUDA. By default, this SDK installs to $HOME/NVIDIA_GPU_Computing_SDK. Set it appropriately and then rerun cmake.
Edit:
The CUDA_SDK_ROOT_DIR variable is actually looking for the sub-directory beneath $HOME/NVIDIA_GPU_Computing_SDK that contains the version of CUDA you're using. For me, this is $HOME/NVIDIA_GPU_Computing_SDK/CUDA/v4.1.
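For example, it can be set on the cmake command line when configuring the PCL build (the path is the v4.1 one from above; adjust it for your installation, and replace .. with the path to the PCL source tree if your build directory is elsewhere):
$ cmake -DCUDA_SDK_ROOT_DIR=$HOME/NVIDIA_GPU_Computing_SDK/CUDA/v4.1 ..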
The source code for FindCUDA.cmake gives some hints on how this path is found:
########################
# Look for the SDK stuff. As of CUDA 3.0 NVSDKCUDA_ROOT has been replaced with
# NVSDKCOMPUTE_ROOT with the old CUDA C contents moved into the C subdirectory
find_path(CUDA_SDK_ROOT_DIR common/inc/cutil.h
"$ENV{NVSDKCOMPUTE_ROOT}/C"
"$ENV{NVSDKCUDA_ROOT}"
"[HKEY_LOCAL_MACHINE\\SOFTWARE\\NVIDIA Corporation\\Installed Products\\NVIDIA SDK 10\\Compute;InstallDir]"
"/Developer/GPU\ Computing/C"
)
I.e., check that the NVSDKCOMPUTE_ROOT or NVSDKCUDA_ROOT environment variables are set correctly.
On a Linux machine:
Add "$ENV{HOME}/NVIDIA_GPU_Computing_SDK/C" to the find_path options in the FindCUDA.cmake module (/usr/share/cmake-2.8/Modules/FindCUDA.cmake):
########################
# Look for the SDK stuff. As of CUDA 3.0 NVSDKCUDA_ROOT has been replaced with
# NVSDKCOMPUTE_ROOT with the old CUDA C contents moved into the C subdirectory
find_path(CUDA_SDK_ROOT_DIR common/inc/cutil.h
"$ENV{HOME}/NVIDIA_GPU_Computing_SDK/C"
"$ENV{NVSDKCOMPUTE_ROOT}/C"
"$ENV{NVSDKCUDA_ROOT}"
"[HKEY_LOCAL_MACHINE\\SOFTWARE\\NVIDIA Corporation\\Installed Products\\NVIDIA SDK 10\\Compute;InstallDir]"
"/Developer/GPU\ Computing/C"
)
cmake now finds my 4.0 SDK automatically.
But my build still fails to find cutil.h, even though it is there ($HOME/NVIDIA_GPU_Computing_SDK/C/common/inc/cutil.h). I had to add an include flag to the project to finally get it to work: CUDA_NVCC_FLAGS: -I/home/bill/NVIDIA_GPU_Computing_SDK/C/common/inc
Note: -I/$HOME/NVIDIA_GPU_Computing_SDK/C/common/inc does NOT work (even though the $HOME environment variable is set correctly).
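A hedged sketch of setting that flag from CMakeLists.txt instead of editing the cache entry by hand; $ENV{HOME} is expanded by CMake at configure time, which is presumably why a literal $HOME inside the flag string does not work:
# append the SDK include path to the flags FindCUDA passes to nvcc
list(APPEND CUDA_NVCC_FLAGS "-I$ENV{HOME}/NVIDIA_GPU_Computing_SDK/C/common/inc")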