I'd like to write CUDA kernels on my laptop, which has no GPU, in VSCode (Ubuntu 21.04). Is it possible to just download all the headers and source files from somewhere, without installing the full CUDA Toolkit?
That way I can still enjoy some autocompletion, but don't have to install the entire toolkit, which wouldn't function anyway since I don't have a GPU.
Yes, it is possible. You can download the headers from NVIDIA's GitLab repositories and add their path to includePath in your VSCode C/C++ settings, the same way you would add the include directory of a CUDA Toolkit installation.
Below is a Linux example of .vscode/c_cpp_properties.json; replace /usr/local/cuda-11.3/include with the path to the downloaded headers. If you are targeting a specific device or architecture, defining e.g. __CUDA_ARCH__=750 (for compute capability 7.5) helps IntelliSense pick the correct set of CUDA intrinsic functions available on those devices.
{
  "configurations": [
    {
      "name": "Linux",
      "includePath": [
        "${workspaceFolder}/**",
        "/usr/local/cuda-11.3/include"
      ],
      "defines": ["__CUDA_ARCH__=750"],
      "compilerPath": "/usr/bin/g++",
      "cStandard": "gnu17",
      "cppStandard": "gnu++14",
      "intelliSenseMode": "gcc-x64"
    }
  ],
  "version": 4
}
With the setup above, the NVIDIA Nsight extension should provide syntax highlighting without a CUDA Toolkit installation (not tested), using this .vscode/settings.json:
{
  "files.associations": {
    "*.cu": "cuda-cpp",
    "*.cuh": "cuda-cpp"
  }
}
Alternatively, associate your CUDA kernel source files with C++ syntax highlighting in .vscode/settings.json:
{
  "files.associations": {
    "*.cu": "cpp",
    "*.cuh": "cpp"
  }
}
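As a quick sanity check, a kernel source file should now get completion for CUDA builtins such as threadIdx, blockIdx, and __syncthreads(). A minimal sketch (the file name and kernel are illustrative only; without the toolkit and a GPU it can only be browsed, not compiled or run):

```cuda
// vec_add.cu -- illustrative only. IntelliSense should resolve the
// CUDA builtins below once the headers are on the includePath.
#include <cuda_runtime.h>

__global__ void vec_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}
```

If hovering over blockIdx or cudaMalloc shows documentation and types, the header paths are set up correctly.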
I am using a package that provides multiple features for working with ActionScript 3 projects in Sublime Text 2. While it works perfectly otherwise, I can't run the code from within Sublime Text.
I saw in some places that you need to go to Tools > Build System > and choose your build (ActionScript should appear there).
But it doesn't appear, and I can't run the code; it gives me the following error:
No Build System
How can I make this work?
Use the existing Action Script 3 package for Sublime Text, or create your own custom build system.
Example:
{
  "selector": "source.actionscript",
  "cmd": [
    "mxmlc",
    "${file}",
    "-library-path+=${project_path}/libs",
    "-output", "${project_path}/bin/${project_base_name}.swf",
    "-debug=false",
    "-static-link-runtime-shared-libraries=true"
  ],
  "file_regex": "^(.+?)\\(([0-9]+)\\): col: ([0-9]+)(.*)$"
}
Make sure you have the Adobe Flex SDK (or Apache Flex SDK) installed and that mxmlc is in your PATH environment variable. Alternatively, you can provide its path in the build file (see the documentation for details).
I'm running the Octave kernel in Jupyter, but I'm not getting syntax highlighting in the code cells. I've installed Jupyter et al. through Anaconda. I can't remember how I got the Octave kernel installed, but probably from here:
https://github.com/calysto/octave_kernel
Do I need to do something in the kernel spec to flip on CodeMirror support?
Edit: turns out the syntax highlighting appeared after refreshing the notebook page, even without the config entry.
Leaving the answer in case it helps with other issues that people might be facing.
I managed to get syntax highlighting working for Octave by following the instructions here:
Configuring the notebook frontend
i.e. setting the CodeMirror mode to Octave: "mode": "octave"
TL;DR
Put the following code snippet in ~/.jupyter/nbconfig/notebook.json. (If the file/directory structure is not present, create it as required.)
{
  "CodeCell": {
    "cm_config": {
      "mode": "octave"
    }
  }
}
Caveats
I haven't tested this fully, but it seems the syntax highlighting will persist even for other languages, e.g. Python. This means that the config file might need to be disabled/deleted when using Jupyter for non-Octave notebooks.
Also, I noticed the syntax highlighting doesn't immediately show up on first load of the notebook. I had to refresh the page before it appeared.
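Since the config file may need to be created, tweaked, or removed depending on which language you are working in, it can be handy to generate it from a script. A small Python sketch (directory layout assumed per the Jupyter defaults above):

```python
import json
from pathlib import Path


def write_octave_cm_config(base_dir: Path) -> Path:
    """Write a notebook.json that sets the notebook CodeMirror mode to Octave.

    base_dir would normally be Path.home(); it is a parameter so the
    function can be pointed elsewhere for testing.
    """
    cfg_dir = base_dir / ".jupyter" / "nbconfig"
    cfg_dir.mkdir(parents=True, exist_ok=True)  # create missing directories
    cfg_path = cfg_dir / "notebook.json"
    cfg = {"CodeCell": {"cm_config": {"mode": "octave"}}}
    cfg_path.write_text(json.dumps(cfg, indent=2))
    return cfg_path
```

Typical usage would be `write_octave_cm_config(Path.home())`; deleting the returned file restores the default highlighting.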
I have several utility script files that are used by multiple extensions. Thus far, I have been copy/pasting those utility scripts to each extension's root folder whenever I make a change. This is becoming less and less feasible. I would like to reference the same utility script files from both extensions' manifests. I have tried this:
{
  "background": {
    "scripts": [
      "../utils.js",
      "background.js"
    ]
  }
}
But when I reload my extension, I get an extension error saying:
Could not load extension from 'C:\...'. Could not load background script '../../utils.js'.
If I use backslashes instead (this seems like a more likely solution since I'm working with windows...), I get the same error (but with backslashes).
Is it even possible to achieve this type of relative file path?
How about creating a local server that hosts the JS files you need? Your extension could then access those JS files through a localhost port and use their functionality. A simple lightweight server would do the trick (maybe bottle.py in Python).
Chrome v33 tightened up extension security, so I'm not sure you can access a file the way you tried in your manifest.json.
Let me know how you get around this problem!
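In the spirit of that suggestion, here is a minimal sketch using only Python's standard library instead of bottle.py (the directory path and file names are placeholders; note that Chrome's extension security policies may still block remotely loaded scripts, so treat this as an experiment rather than a recommended design):

```python
import http.server
import threading
from functools import partial


def serve_shared_scripts(directory: str, port: int = 0):
    """Serve a directory of shared JS files over localhost.

    Returns the running server; pass port=0 to let the OS pick a free
    port (readable afterwards via server.server_address[1]).
    """
    handler = partial(http.server.SimpleHTTPRequestHandler, directory=directory)
    server = http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)
    # Run the server in a daemon thread so it dies with the main process.
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A file placed at `<directory>/utils.js` would then be reachable at `http://127.0.0.1:<port>/utils.js`. Call `server.shutdown()` to stop serving.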
Have you considered using Shared Modules? According to the documentation, you can export common functionality from one extension so that it can be imported into another extension:
"The export field indicates an extension is a Shared Module that exports its resources:
{
  "version": "1.0",
  "name": "My Shared Module",
  "export": {
    // Optional list of extension IDs explicitly allowed to
    // import this Shared Module's resources. If no whitelist
    // is given, all extensions are allowed to import it.
    "whitelist": [
      "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
      "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
    ]
  }
  // Note: no permissions are allowed in Shared Modules
}
The import field is used by extensions and apps to declare that they depend on the resources from particular Shared Modules:
{
  "version": "1.0",
  "name": "My Importing Extension",
  ...
  "import": [
    {"id": "cccccccccccccccccccccccccccccccc"},
    {"id": "dddddddddddddddddddddddddddddddd",
     "minimum_version": "0.5" // optional
    }
  ]
}
"
I am trying to compile a CUDA 5.5 application in Nsight on Ubuntu 12.04.
At first I was getting an issue about missing header files such as #include <helper_cuda_drvapi.h>.
To fix this I added the path /usr/include/samples/common/inc to my includes list.
This solved the missing header file issue but caused a new one.
When trying to compile the program in Nsight, I get the following errors:
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:278: undefined reference to `cuInit'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:279: undefined reference to `cuDeviceGetCount'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:290: undefined reference to `cuDeviceGetName'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:291: undefined reference to `cuDeviceComputeCapability'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:294: undefined reference to `cuDeviceGetAttribute'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:327: undefined reference to `cuDeviceGetAttribute'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:330: undefined reference to `cuDeviceGetAttribute'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:333: undefined reference to `cuDeviceComputeCapability'
/usr/local/cuda-5.5/samples/common/inc/helper_cuda_drvapi.h:336: undefined reference to `cuDeviceGetAttribute'
any suggestions?
Thanks in advance
UPDATE
What it basically comes down to is that I am trying to compile the "CUDA Video Decoder GL API" sample program on Linux, and it is not working because of some error with the header files. Does anyone know why this is?
UPDATE
The undefined references are to CUDA driver API methods. helper_cuda_drvapi.h has the following comment near the top:
Helper functions for CUDA Driver API error handling (make sure that CUDA_H is included in your projects)
So, in your .cu and .cpp files, include cuda.h before the #include <helper_cuda_drvapi.h>:
#include "cuda.h"
#include <helper_cuda_drvapi.h>
See this question for more information about the CUDA headers.
You need to manually link with libcuda (Nsight projects link against the Runtime API by default).
To link with this library:
Go to Properties for your project and open General / Paths and Symbols.
On the Libraries tab, add cuda (without prefix or suffix, which in theory should keep your project more cross-platform). You may also want to check "Add to all configurations" when adding the library; otherwise it will apply only to your current build configuration (e.g. "Debug" or "Release").
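For reference, when building from the command line instead of Nsight, the equivalent is passing -lcuda at link time (file names below are placeholders; -lcudart is added by nvcc automatically, libcuda is not):

```shell
# Link the CUDA driver API library explicitly.
nvcc main.cu -o app -lcuda
```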
Update: project settings screenshot (not reproduced here).
I am trying to set up Point Cloud Library trunk build with CUDA options enabled.
I believe I have installed CUDA correctly, following these instructions.
In the cmake options for the PCL build, some options are unrecognised:
Is there something I can manually set CUDA_SDK_ROOT_DIR to? Likewise for the other unfound options.
CUDA_SDK_ROOT_DIR should be set to the directory in which you installed NVIDIA's GPU Computing SDK. The GPU Computing SDK is downloadable from the same page at NVIDIA where you downloaded CUDA. By default, this SDK installs to $HOME/NVIDIA_GPU_Computing_SDK. Set it appropriately and then rerun cmake.
Edit:
The CUDA_SDK_ROOT_DIR variable is actually looking for the sub-directory beneath $HOME/NVIDIA_GPU_Computing_SDK that contains the version of CUDA you're using. For me, this is $HOME/NVIDIA_GPU_Computing_SDK/CUDA/v4.1.
The source code for FindCUDA.cmake gives some hints on how this path is found:
########################
# Look for the SDK stuff. As of CUDA 3.0 NVSDKCUDA_ROOT has been replaced with
# NVSDKCOMPUTE_ROOT with the old CUDA C contents moved into the C subdirectory
find_path(CUDA_SDK_ROOT_DIR common/inc/cutil.h
"$ENV{NVSDKCOMPUTE_ROOT}/C"
"$ENV{NVSDKCUDA_ROOT}"
"[HKEY_LOCAL_MACHINE\\SOFTWARE\\NVIDIA Corporation\\Installed Products\\NVIDIA SDK 10\\Compute;InstallDir]"
"/Developer/GPU\ Computing/C"
)
That is, check that the NVSDKCOMPUTE_ROOT or NVSDKCUDA_ROOT environment variables are set correctly.
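For example, pointing NVSDKCOMPUTE_ROOT at the default install location before re-running cmake (the path is the default mentioned above; adjust it if you installed the SDK elsewhere):

```shell
# Tell FindCUDA.cmake where the GPU Computing SDK lives.
export NVSDKCOMPUTE_ROOT="$HOME/NVIDIA_GPU_Computing_SDK"
# cmake ..   # then re-run cmake from your build directory
```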
On a Linux machine:
Add "$ENV{HOME}/NVIDIA_GPU_Computing_SDK/C" to the find_path options in the FindCUDA.cmake module (/usr/share/cmake-2.8/Modules/FindCUDA.cmake):
########################
# Look for the SDK stuff. As of CUDA 3.0 NVSDKCUDA_ROOT has been replaced with
# NVSDKCOMPUTE_ROOT with the old CUDA C contents moved into the C subdirectory
find_path(CUDA_SDK_ROOT_DIR common/inc/cutil.h
"$ENV{HOME}/NVIDIA_GPU_Computing_SDK/C"
"$ENV{NVSDKCOMPUTE_ROOT}/C"
"$ENV{NVSDKCUDA_ROOT}"
"[HKEY_LOCAL_MACHINE\\SOFTWARE\\NVIDIA Corporation\\Installed Products\\NVIDIA SDK 10\\Compute;InstallDir]"
"/Developer/GPU\ Computing/C"
)
cmake now finds my 4.0 SDK automatically.
But my build still fails to find cutil.h, even though it is there ($HOME/NVIDIA_GPU_Computing_SDK/C/common/inc/cutil.h). I had to add an include flag to the project to finally get it to work: CUDA_NVCC_FLAGS: -I/home/bill/NVIDIA_GPU_Computing_SDK/C/common/inc
Note: -I/$HOME/NVIDIA_GPU_Computing_SDK/C/common/inc does NOT work (the $HOME environment variable itself is set correctly), likely because CMake does not perform shell expansion of $HOME in cache values; use the literal path or CMake's $ENV{HOME} syntax instead.
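Rather than editing the cache variable by hand, the flag can also be set from CMakeLists.txt, where $ENV{HOME} is expanded by CMake itself (a sketch for the FindCUDA-era build described above, not verified against every CMake version):

```cmake
# Pass the GPU Computing SDK include dir to nvcc.
# $ENV{HOME} is expanded by CMake, unlike a literal $HOME in a cache value.
set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS}
    -I$ENV{HOME}/NVIDIA_GPU_Computing_SDK/C/common/inc)
```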