I'm new to CUDA, and so far all the tutorials I've seen are for arrays.
I am wondering whether you can define a scalar, something like a single double variable, on the GPU in CUDA, or does something like that have to live on the CPU?
You can have a scalar variable as a kernel parameter, as a private (per-thread) variable, as a shared-memory variable, and even as a global compilation-unit variable.
You can have scalar fields in classes, arrays of structs, structs of arrays, anything that is plain old data. You can use typedefs, macros and any bit-level hacking, as long as the variable is loaded/stored with proper alignment.
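A minimal sketch (assuming a recent CUDA toolkit; the names `d_scale` and `scaleKernel` are made up for illustration) showing a scalar in each of those scopes:

```cuda
#include <cstdio>

// Scalar in device global memory, visible to the whole compilation unit.
__device__ double d_scale = 2.0;

__global__ void scaleKernel(double x, double *out) {
    // x is a scalar kernel parameter; s is a scalar in shared memory;
    // local is a private per-thread scalar (typically kept in a register).
    __shared__ double s;
    if (threadIdx.x == 0) s = d_scale;
    __syncthreads();
    double local = x * s;
    out[threadIdx.x] = local;
}

int main() {
    double *out;
    cudaMalloc(&out, 32 * sizeof(double));
    scaleKernel<<<1, 32>>>(3.0, out);
    double host[32];
    cudaMemcpy(host, out, sizeof(host), cudaMemcpyDeviceToHost);
    printf("%f\n", host[0]);  // 3.0 * 2.0 = 6.0
    cudaFree(out);
    return 0;
}
```

Every value here is a plain scalar `double`; no arrays of data are needed on the device beyond the output buffer.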
Actually, the title already is the full question.
Why did Nvidia decide to call its GPU entry functions kernels, when in CUDA they must be annotated with __global__ rather than __kernel__?
The goal is to separate the entity (the kernel) from its scope or location.
There are three types of functions relevant to your question:
__device__ functions can be called only from the device, and they execute only on the device.
__global__ functions can be called from the host, and they execute on the device.
__host__ functions are called from the host, and they run on the host.
If the qualifier were simply named __kernel__, it would be impossible to distinguish the three cases in the way they are separated above.
The __global__ here means "in the space shared between host and device", i.e. in the "global" area between them: the host launches the function and the device executes it.
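A short sketch of the three qualifiers side by side (function names are made up for illustration):

```cuda
#include <cstdio>

// __device__: callable only from device code, executes on the device.
__device__ int square(int x) { return x * x; }

// __global__: callable from the host, executes on the device (a "kernel").
__global__ void squares(int *out) {
    out[threadIdx.x] = square(threadIdx.x);
}

// __host__ (also the default with no qualifier): called from the host,
// runs on the host.
__host__ void launch() {
    int *out;
    cudaMalloc(&out, 8 * sizeof(int));
    squares<<<1, 8>>>(out);
    int host[8];
    cudaMemcpy(host, out, sizeof(host), cudaMemcpyDeviceToHost);
    printf("%d\n", host[3]);  // square(3) = 9
    cudaFree(out);
}

int main() { launch(); return 0; }
```

Note that `square` cannot be called from `main`, and `squares` cannot be called from `square`; the qualifiers encode exactly the call-site/execution-site split described above.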
I want to understand what Context is in Go's gin framework. I see a lot of functions written to accept a context as a parameter, but I don't see it passed or instantiated anywhere. Can someone explain how it works?
The gin Context is a structure that contains both the http.Request and the http.ResponseWriter that a normal http.Handler would use, plus some useful methods and shortcuts to manipulate those.
The gin engine is responsible for the creation (and reuse) of those contexts, in the same manner as the http.Server is responsible for the creation of the http.Request objects a standard http.Handler would use.
The context is passed by the engine to its handlers, and it's your job to write those handlers and attach them to a router. A gin handler is any function that takes a gin.Context as its sole argument and doesn't return anything.
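A minimal sketch of that flow, assuming the github.com/gin-gonic/gin module is available (the route and handler names are made up for illustration):

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// A gin handler: it receives the *gin.Context that the engine creates
// for each request. You never instantiate the context yourself.
func hello(c *gin.Context) {
	// c wraps the request (c.Request) and the response writer,
	// plus shortcuts such as Param, Query and JSON.
	name := c.DefaultQuery("name", "world")
	c.JSON(http.StatusOK, gin.H{"hello": name})
}

func main() {
	r := gin.Default()
	r.GET("/hello", hello) // attach the handler to a router
	r.Run(":8080")         // the engine now creates a Context per request
}
```

Note the handler's shape: it takes a `*gin.Context` as its sole argument and returns nothing, which is exactly what makes it attachable to the router.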
I am new to Verilog and am trying to figure out where a function can be defined/declared in Verilog (I know a function can be defined in a package; where else?). Thanks in advance.
In Verilog, a function can be declared between
module and endmodule (i.e. in the declarative region of a module: inside the module, but outside any initial or always block)
generate and endgenerate
That's it.
In SystemVerilog, a function can be declared between
module and endmodule
generate and endgenerate
and
class and endclass
interface and endinterface
checker and endchecker
package and endpackage
program and endprogram
and
outside a module / interface / checker / package / program
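For the most common case, a sketch of a function declared inside a module, between `module` and `endmodule` (the module and function names are made up for illustration):

```verilog
module adder_demo;

  // Function declared in the module's declarative region,
  // outside any initial or always block.
  function [7:0] add_sat;   // 8-bit saturating add
    input [7:0] a, b;
    reg   [8:0] sum;
    begin
      sum = a + b;
      add_sat = sum[8] ? 8'hFF : sum[7:0];
    end
  endfunction

  initial begin
    // 200 + 100 overflows 8 bits, so the result saturates at 255.
    $display("%d", add_sat(8'd200, 8'd100));
  end

endmodule
```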
Probably the easiest way to declare and use functions is to declare all your functions in a <module_function_pkg>.vh file and `include that file in your design Verilog file.
Use it as @dave_59 said in the comments.
I know the make_float4 constructor is in vector_functions.h, but which header file implements the float4 operators in CUDA?
Thanks.
I don't believe there is a standard CUDA header file (i.e. one that nvcc will find automatically, such as those in /usr/local/cuda/include) that implements a variety of float4 operators.
However, the "helper" header file at:
/usr/local/cuda/samples/common/inc/helper_math.h
(example path on Linux), which gets installed with the CUDA samples, defines a number of arithmetic operators on float4 quantities.
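If you'd rather not depend on the samples tree, you can define the few operators you need yourself, since float4 is just a plain struct; a sketch:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hand-rolled float4 addition, usable on both host and device.
// helper_math.h defines the same kind of operator, among many others.
__host__ __device__ inline float4 operator+(float4 a, float4 b) {
    return make_float4(a.x + b.x, a.y + b.y, a.z + b.z, a.w + b.w);
}

int main() {
    float4 a = make_float4(1.f, 2.f, 3.f, 4.f);
    float4 b = make_float4(10.f, 20.f, 30.f, 40.f);
    float4 c = a + b;
    printf("%g %g %g %g\n", c.x, c.y, c.z, c.w); // 11 22 33 44
    return 0;
}
```

Don't define your own operators in the same translation unit that includes helper_math.h, or the definitions will collide.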
Using Ninject 3.x on WinRT, is it possible to query the kernel for bindings only by parameter or metadata, i.e. without specifying a type?
No, you have to specify the type.
But you can use any kind of base interface/class, including object.
E.g. create bindings from object to all the possible types, then resolve an object and let the metadata do the filtering.
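A sketch of that workaround, assuming Ninject 3.x (the Sword/Bow types and the "kind" metadata key are made up for illustration):

```csharp
using System;
using Ninject;

public interface IWeapon { string Name { get; } }
public class Sword : IWeapon { public string Name => "Sword"; }
public class Bow   : IWeapon { public string Name => "Bow"; }

public static class Program
{
    public static void Main()
    {
        var kernel = new StandardKernel();

        // Bind everything against a common base (here: object),
        // tagging each binding with metadata.
        kernel.Bind<object>().To<Sword>().WithMetadata("kind", "melee");
        kernel.Bind<object>().To<Bow>().WithMetadata("kind", "ranged");

        // A type must still be named at resolution time, but it can be
        // the common base; the metadata predicate does the filtering.
        var melee = kernel.Get<object>(m => m.Get<string>("kind") == "melee");
        Console.WriteLine(melee.GetType().Name); // Sword
    }
}
```

The price of this pattern is that you get back an untyped object and must cast (or bind against a shared interface instead of object).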