Installing CUB in NVIDIA Nsight - CUDA

I want to use CUB with NVIDIA Nsight. I looked for tutorials on the internet, but I didn't find anything, even on the official CUB pages.
What do I need to do in order to use CUB in code I write using NVIDIA Nsight?

There is no need to do any installation, since the CUB library is implemented entirely as C++ headers (see section 6 on the page linked here). The only thing you need to do is add the path of the library to your project's include paths, as follows:
Right-click on your project and go to Properties, then go to C/C++ General and click on Paths and Symbols. In the Includes tab (the first tab), add the path to the library. That's all.
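Once the include path is set, CUB can be used straight from a .cu file with no extra linking step. Below is a minimal sketch of a device-wide sum with cub::DeviceReduce; the array contents and sizes are just illustrative:

```cuda
// Minimal CUB usage sketch: header-only, compiled with nvcc once the
// include path to cub/ has been added to the project.
#include <cub/cub.cuh>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int n = 8;
    int h_in[n] = {1, 2, 3, 4, 5, 6, 7, 8};
    int *d_in = nullptr, *d_out = nullptr;
    cudaMalloc(&d_in, n * sizeof(int));
    cudaMalloc(&d_out, sizeof(int));
    cudaMemcpy(d_in, h_in, n * sizeof(int), cudaMemcpyHostToDevice);

    // CUB's device-wide primitives are called twice: the first call only
    // queries the required temporary storage size, the second does the work.
    void *d_temp = nullptr;
    size_t temp_bytes = 0;
    cub::DeviceReduce::Sum(d_temp, temp_bytes, d_in, d_out, n);
    cudaMalloc(&d_temp, temp_bytes);
    cub::DeviceReduce::Sum(d_temp, temp_bytes, d_in, d_out, n);

    int h_out = 0;
    cudaMemcpy(&h_out, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("sum = %d\n", h_out);  // expected: 36

    cudaFree(d_in);
    cudaFree(d_out);
    cudaFree(d_temp);
    return 0;
}
```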

Related

How to create a PPAPI plugin for Google Chrome in Windows?

I am new to PPAPI development and have downloaded the available examples from here.
However, even after going through the documentation,
I am not able to build the project.
I have Microsoft Visual Studio 2010, Windows OS, and Chrome 30.0.1599.65.
I understand that once a DLL is created, the regsvr32 command will register the plugin, but actually building the DLL, even with the available code, seems difficult for me. Any help with building the DLL is appreciated.
You will want to start here to download and set up the SDK: https://developers.google.com/native-client/sdk/download
This page will take you through how to build and run the examples: https://developer.chrome.com/native-client/sdk/examples
This page goes over how to actually create your own plugin: https://developer.chrome.com/native-client/devguide/tutorial/tutorial-part1
And then you should read this entire section to code and structure your application: https://developer.chrome.com/native-client/devguide/coding/application-structure
If you need any third party libraries be sure to check here: https://chromium.googlesource.com/webports
Edit: Forgot to mention that you will want to use the same version of the Pepper API as the version of Chrome you're running (in this case pepper_30). Also, you have to use one of the NaCl toolchains (glibc, newlib, or pnacl); you can't use the Visual C/C++ toolchains. I recommend trying pnacl now that it is available, as that is by far the most cross-platform version, but if you run into trouble, you'll probably want to use the newlib toolchain, as it has better support.
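For orientation, a module built with any of those toolchains boils down to implementing the Pepper C++ entry points. A minimal sketch might look like the following; the class names are made up for illustration, while pp::Module, pp::Instance, and pp::CreateModule are the actual hooks the runtime expects:

```cpp
// Hypothetical minimal NaCl module, assuming the pepper_30 SDK headers are
// on the include path and the code is built with the newlib or pnacl toolchain.
#include "ppapi/cpp/instance.h"
#include "ppapi/cpp/module.h"
#include "ppapi/cpp/var.h"

class EchoInstance : public pp::Instance {
 public:
  explicit EchoInstance(PP_Instance instance) : pp::Instance(instance) {}

  // Echo any message posted from the embedding page back to JavaScript.
  virtual void HandleMessage(const pp::Var& message) {
    PostMessage(message);
  }
};

class EchoModule : public pp::Module {
 public:
  virtual pp::Instance* CreateInstance(PP_Instance instance) {
    return new EchoInstance(instance);
  }
};

namespace pp {
// Factory function the NaCl runtime calls to create the module.
Module* CreateModule() { return new EchoModule(); }
}  // namespace pp
```

Building one of the SDK's own examples first is the easiest way to see the exact toolchain invocations before adapting them to code like this.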

MonoDevelop, where can I download it?

I was trying to download MonoDevelop for Mac, but on the official page there is everything but a compiled, downloadable file.
I've read through other threads on different forums, and apparently you are required to compile the source code yourself. Is this really the case?
What other alternatives exist for Mac? I just need to dig into some source code, using references and jumping from one portion of code to another without using the search filter.
Thanks.
If you want a binary for Mac, you need to go to Xamarin's homepage and simply download Xamarin Studio.
Xamarin Studio is basically the same thing as MonoDevelop; the only differences are a bit of branding and the inclusion of three plugins for their proprietary development offerings, which you can ignore if you're not interested in developing for mobile platforms.

Should we place C code in Static library or Runtime component?

We're moving to Windows Phone 8, but many good libraries out there are written in pure C. What is the best way for a Windows Phone C# application to consume such a C library?
1. Place the C code in a WP static library, then reference it from a WP Runtime component.
2. Place the C code directly in a WP Runtime component.
What is the best practice?
There isn't any real difference between the two approaches. A static library is nothing but a collection of .obj files, the exact same kind of .obj files that you'll get from approach #2. After the linker is done, there won't be any difference in the result.
That's when everything is perfect, an ideal that can be very difficult to achieve when you use open-source C code. An advantage of a static .lib is that it improves build time by not having to regenerate the .obj files. But that's also its disadvantage: you'll shoot yourself in the foot if you use a .lib that was created by somebody else who didn't use the same compiler version or compile options. The simplest example of such a trap is building your Debug version against a .lib that was built for Release. Or the .lib uses winapi functions that are verboten in a Phone app, which is pretty common. So the best way to avoid problems is a third approach: build the .lib yourself so you can control all the compile and link settings. Do beware, however, that it can be very difficult to get open-source C code to build; it often comes with a very extensive configuration script designed to deal with the differences between the many architectures and Unix variants.
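Whichever way the C code gets compiled, the piece C# actually talks to is the Runtime component. A hedged sketch of what that wrapper can look like in C++/CX follows; the class name and the c_add function standing in for the C library are made up for illustration:

```cpp
// Hypothetical C++/CX wrapper inside a Windows Phone Runtime component.
// It exposes a plain C function (the made-up c_add) to C# callers; the C
// code itself can live in this project or in a static .lib it references.
extern "C" int c_add(int a, int b);   // assumed to come from the C library

namespace NativeWrapper
{
    public ref class Calculator sealed
    {
    public:
        int Add(int a, int b)
        {
            // Forward to the C implementation.
            return c_add(a, b);
        }
    };
}
```

On the C# side the component is then consumed like any other WinRT reference, e.g. `var sum = new NativeWrapper.Calculator().Add(2, 3);`.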

How to change CUDA's linking directory?

I've been using CUDA 4.0 for some time now. I've recently downloaded and copied the new CUDA 4.1 API (I need Thrust's lambda expression support), but my solution's properties are still linked to the old 4.0 API. How do I change that? My guess is that I need to change the $(CudaToolkitLibDir) variable, but how exactly?
Edit: I'm asking this because I'm trying to use thrust::placeholders.
To answer the specific question:
For VS2005 or VS2008, you need to change the Custom Build Rules to pick up the CUDA 4.1 rule instead of 4.0. See this post for more information.
For VS2010, you need to change the Build Customization to pick up CUDA 4.1 instead. See this post for more information.
Looking at the comments, it's also clear that you will need to install a CUDA 4.1 driver, which you can download from the NVIDIA website. You said your program crashed on the first cudaMalloc() when you updated to 4.1; you should check the error message (in general you should check all API calls for errors). The first CUDA API call will return an "insufficient driver version" error if your driver is not up to date.
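As a concrete illustration of that advice, here is a small sketch of checking the return code of a CUDA runtime call; the allocation size is arbitrary:

```cuda
// Sketch of basic CUDA error checking: every runtime API call returns a
// cudaError_t that should be inspected rather than ignored.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int *d_ptr = nullptr;
    cudaError_t err = cudaMalloc(&d_ptr, 1024 * sizeof(int));
    if (err != cudaSuccess) {
        // With a driver that is too old for the toolkit, the first call
        // typically fails here with an insufficient-driver-version error.
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    cudaFree(d_ptr);
    return 0;
}
```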

Using CUDA Kernels

I'm interested in using the CUSP library for CUDA (available here). However, I'm having trouble getting this library to work with my application, which links against the CUDA and/or CUBLAS static libraries. From glancing through the header and source files, I'm assuming that I should either build the related files into a static library (using the nvcc compiler) to be used in my application (which is built with the MS Visual Studio compiler), or use the kernels directly in my application (though I don't know how that would work out). The CUSP library also uses the METIS library, which I'm also having trouble figuring out how to install on Windows. What would be your suggestions on the best way of using CUSP features in my application? Thanks in advance.
After a quick look through the CUSP source, it seems that CUSP follows the same model as (and even makes use of) Thrust. These are template-based libraries that only make use of header files (with some #included inline code), like most of the STL and boost libraries. Take dia_matrix.h for example. The 'implementation' is in dia_matrix.inl, which is #included at the bottom of dia_matrix.h.
Take a look at the Thrust and CUSP examples for how to use these libraries in your own code. It should be nothing more than a matter of including the correct header files and working with the data types they provide. The CUDA kernels will be generated at compile time for you and you shouldn't need to worry about those details.
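To make that concrete, here is a minimal sketch of the kind of usage shown in the CUSP examples, assuming the cusp/ headers are on the include path and the file is compiled with nvcc; the grid size and values are arbitrary:

```cuda
// Header-only CUSP sketch: build a small sparse system and solve it with
// conjugate gradient. No separate CUSP library needs to be linked.
#include <cusp/csr_matrix.h>
#include <cusp/gallery/poisson.h>
#include <cusp/krylov/cg.h>

int main() {
    // 5-point Poisson matrix for a 10x10 grid, stored in CSR on the device.
    cusp::csr_matrix<int, float, cusp::device_memory> A;
    cusp::gallery::poisson5pt(A, 10, 10);

    // Solution vector x (initialized to 0) and right-hand side b (all ones).
    cusp::array1d<float, cusp::device_memory> x(A.num_rows, 0.0f);
    cusp::array1d<float, cusp::device_memory> b(A.num_rows, 1.0f);

    // The CUDA kernels are instantiated from the headers at compile time.
    cusp::krylov::cg(A, x, b);
    return 0;
}
```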