Is it safe to call jcuda.driver.JCudaDriver/cuInit multiple times in a program?

I'm using a dynamic language (Clojure) to create CUDA contexts in an interactive development style using JCuda. Often I will call an initializer that includes the call to jcuda.driver.JCudaDriver/cuInit. Is it safe to call cuInit multiple times? In addition, is there something like a destroy method for cuInit? I ask since it's possible for an error code CUDA_ERROR_DEINITIALIZED to be returned.

To answer the question, yes it is probably safe to call cuInit multiple times. I haven't noticed any side effects from doing so.
Note, however, that cuInit only triggers one-time initialisation processes inside the API. It doesn't do anything with devices or contexts, and it definitely can't return CUDA_ERROR_DEINITIALIZED. The steps you would perform after calling cuInit in an application (i.e. creating a context) do have real implications: each call creates a new context, and resource exhaustion will occur if contexts are not actively destroyed. There is no equivalent deinitialisation call for the API. I guess the intention is that once initialised, the driver API is expected to stay in that state until the application terminates.
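A minimal Java sketch of the usual JCuda pattern, assuming a single device; the class and method names are the standard jcuda.driver ones, but treat the exact calls as illustrative rather than definitive:
import jcuda.driver.CUcontext;
import jcuda.driver.CUdevice;
import jcuda.driver.JCudaDriver;

public class ContextLifecycle
{
    public static void main(String[] args)
    {
        // Calling cuInit more than once is harmless; it is a one-time API initialisation.
        JCudaDriver.cuInit(0);
        JCudaDriver.cuInit(0); // no observable side effect

        // The resources that actually need managing are contexts, not the init call.
        CUdevice device = new CUdevice();
        JCudaDriver.cuDeviceGet(device, 0);

        CUcontext context = new CUcontext();
        JCudaDriver.cuCtxCreate(context, 0, device);
        try {
            // ... launch kernels, allocate memory, etc. ...
        } finally {
            // Destroy the context explicitly; otherwise repeated initialisation
            // from an interactive session will leak contexts.
            JCudaDriver.cuCtxDestroy(context);
        }
    }
}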

Related

EventHandler Duration and Asynchronous Execution

Here are a few questions in regard with event handling:
Question 1
Principally, may event-handler methods (UI or not) execute for a relatively long time?
Question 2
If event handling may take a long time in a given system anyway, then this handling must probably be performed asynchronously in order to avoid blocking the software. In that case, should the class publishing the event call all the registered handlers asynchronously? Or is it better for this class to avoid any such assumptions, and have each handler that takes a long time to execute perform its heavy work asynchronously and return immediately without blocking?
Question 3
In any case, when an event handler method is asynchronously called using BeginInvoke by the class publishing the event, is it a must to call the corresponding EndInvoke, and even to take into account the possibility of an exception? Or is it better for the class raising the event to ignore them?
Answer to question 1
See jgauffin.
Basically, event handlers must complete in a relatively short time in order not to block the execution of the rest of the registered handlers, nor the handling of subsequent events. Needless to say, this demand becomes crucial when dealing with events whose handling leaves the UI unresponsive!
Answer to question 2
Assuming that a class publishing an event isn't familiar with all possible event handlers (especially consider an event published by a class exported from some DLL), the class must call all the registered handlers asynchronously. But this has its cost (thread switching, caching, concurrency, synchronization, etc.: see, for example, Brannon, Manoj Sharma, or even this wiki), which shouldn't be paid unless it must be. Therefore the best alternative is letting each registered handler, which of course knows how time-consuming its own work is, decide whether it shall execute that work synchronously or asynchronously. And here's a nice idea: an event may use its event-arguments structure to publish the time allocated to each handler (basically calculated by dividing the overall permitted event-handling time by the number of currently registered listeners), letting each handler use this information to decide whether to execute its own work synchronously or asynchronously.
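To make the idea concrete, here is a rough Java sketch (the .NET delegate machinery is replaced by a plain listener interface, and the event name, listener, and time-budget field are all hypothetical):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical event-arguments object carrying the time budget granted to each handler.
class OrderPlacedEvent {
    final long budgetMillis;
    OrderPlacedEvent(long budgetMillis) { this.budgetMillis = budgetMillis; }
}

interface OrderPlacedListener {
    void onOrderPlaced(OrderPlacedEvent e);
}

class AuditListener implements OrderPlacedListener {
    private static final long ESTIMATED_WORK_MILLIS = 2000;
    private final ExecutorService background = Executors.newSingleThreadExecutor();

    @Override
    public void onOrderPlaced(OrderPlacedEvent e) {
        if (ESTIMATED_WORK_MILLIS <= e.budgetMillis) {
            writeAuditRecord();                        // cheap enough: run synchronously
        } else {
            background.submit(this::writeAuditRecord); // too slow: offload and return at once
        }
    }

    private void writeAuditRecord() { /* time-consuming work */ }
}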
Answer to question 3
See MSDN (note "Important!"), Hans Passant (a prominent world-class .NET expert), Bruno Brant, and eventually STW, who also supplies demo code; all of them seem to be strongly in favor of calling EndInvoke and catching possible exceptions. (Indeed, some programmers tend to avoid using exceptions, but exceptions are inherent to C++, Java, and .NET methods, and even more so to Python functions. If big-data software like Hadoop and the applications built on top of it use exceptions intensively, then anyone may.)
Postscript
Eventually, after realizing that no one was going to reply to my question, I turned to Microsoft's social MSDN, where I immediately received an answer from Magnus (also a prominent world-class .NET expert), which basically agreed with the above arguments; see our correspondence here.

What does "to stub" mean in programming?

For example, what does it mean in this quote?
Integrating with an external API is almost a guarantee in any modern web app. To effectively test such integration, you need to stub it out. A good stub should be easy to create and consistently up-to-date with actual, current API responses. In this post, we’ll outline a testing strategy using stubs for an external API.
A stub is a controllable replacement for an Existing Dependency (or collaborator) in the system. By using a stub, you can test your code without dealing with the dependency directly.
External Dependency / Existing Dependency: an object in your system that your code under test interacts with and over which you have no control. (Common examples are filesystems, threads, memory, time, and so on.)
For example, in the code below:
public void Analyze(string filename)
{
    if (filename.Length > 8)
    {
        try
        {
            // Existing dependency #1: the error-logging service.
            errorService.LogError("long file entered named:" + filename);
        }
        catch (Exception e)
        {
            // Existing dependency #2: the mail service we actually want to test.
            mailService.SendEMail("admin@hotmail.com", "ErrorOnWebService", "someerror");
        }
    }
}
You want to test the mailService.SendEMail() method, but to do that you need to simulate an Exception in your test, so you create a fake stub errorService object that simulates the result you want; your test code is then able to exercise mailService.SendEMail(). As you can see, you need to simulate a result that comes from another dependency, the ErrorService class object (the Existing Dependency object).
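The same idea rendered as a rough Java sketch (the interfaces and class names here are hypothetical, and in practice you would likely use a mocking library instead of hand-written stubs):
interface ErrorService { void logError(String message); }
interface MailService  { void sendEMail(String to, String subject, String body); }

// Hand-written stub: forces the code path we want to exercise.
class ThrowingErrorService implements ErrorService {
    public void logError(String message) { throw new RuntimeException("simulated failure"); }
}

// Hand-written spy: records whether the mail was sent.
class RecordingMailService implements MailService {
    boolean mailSent = false;
    public void sendEMail(String to, String subject, String body) { mailSent = true; }
}

class FileAnalyzer {
    private final ErrorService errorService;
    private final MailService mailService;
    FileAnalyzer(ErrorService errorService, MailService mailService) {
        this.errorService = errorService;
        this.mailService = mailService;
    }
    void analyze(String filename) {
        if (filename.length() > 8) {
            try {
                errorService.logError("long file entered named: " + filename);
            } catch (Exception e) {
                mailService.sendEMail("admin@hotmail.com", "ErrorOnWebService", "some error");
            }
        }
    }
}

// In the test, the stub makes logError throw, so the catch block (and the mail) is exercised.
class FileAnalyzerTest {
    public static void main(String[] args) {
        RecordingMailService mail = new RecordingMailService();
        new FileAnalyzer(new ThrowingErrorService(), mail).analyze("averylongfilename.txt");
        System.out.println(mail.mailSent ? "mail sent as expected" : "TEST FAILED");
    }
}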
A stub, in this context, means a mock implementation.
That is, a simple, fake implementation that conforms to the interface and is to be used for testing.
In layman's terms, it's dummy data (or fake data, test data, etc.) that you can use to test or develop your code against until you (or the other party) are ready to present/receive real data. It's a programmer's "Lorem Ipsum".
Employee database not ready? Make up a simple one with Jane Doe, John Doe...etc.
API not ready? Make up a fake one by creating a static .json file containing fake data.
In this context, the word "stub" is used in place of "mock", but for the sake of clarity and precision, the author should have used "mock", because "mock" is a sort of stub, but for testing. To avoid further confusion, we need to define what a stub is.
In the general context, a stub is a piece of program (typically a function or an object) that encapsulates the complexity of invoking another program (usually located on another machine, VM, or process - but not always, it can also be a local object). Because the actual program to invoke is usually not located on the same memory space, invoking it requires many operations such as addressing, performing the actual remote invocation, marshalling/serializing the data/arguments to be passed (and same with the potential result), maybe even dealing with authentication/security, and so on. Note that in some contexts, stubs are also called proxies (such as dynamic proxies in Java).
A mock is a very specific and restrictive kind of stub, because a mock is a replacement of another function or object for testing. In practice we often use mocks as local programs (functions or objects) to replace a remote program in the test environment. In any case, the mock may simulate the actual behaviour of the replaced program in a restricted context.
Most famous kinds of stubs are obviously for distributed programming, when needing to invoke remote procedures (RPC) or remote objects (RMI, CORBA). Most distributed programming frameworks/libraries automate the generation of stubs so that you don't have to write them manually. Stubs can be generated from an interface definition, written with IDL for instance (but you can also use any language to define interfaces).
Typically, in RPC, RMI, CORBA, and so on, one distinguishes client-side stubs, which mostly take care of marshalling/serializing the arguments and performing the remote invocation, and server-side stubs, which mostly take care of unmarshalling/deserializing the arguments and actually executing the remote function/method. Obviously, client stubs are located on the client side, while server stubs (often called skeletons) are located on the server side.
Writing good, efficient, and generic stubs becomes quite challenging when dealing with object references. Most distributed object frameworks such as RMI and CORBA deal with distributed object references, but that's something most programmers avoid in REST environments, for instance. Typically, in REST environments, JavaScript programmers write simple stub functions to encapsulate the AJAX invocations (object serialization being supported by JSON.parse and JSON.stringify). The Swagger Codegen project provides extensive support for automatically generating REST stubs in various languages.
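In the REST case, such a stub is often nothing more than a thin wrapper that hides the HTTP plumbing from the caller. A minimal Java sketch (the service name and endpoint URL are made up, and real code would also deserialise the JSON):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Client-side "stub" for a remote user service: callers see a plain method call,
// while the stub deals with addressing, the HTTP invocation, and (de)serialisation.
class UserServiceStub {
    private final HttpClient client = HttpClient.newHttpClient();
    private final String baseUrl;

    UserServiceStub(String baseUrl) { this.baseUrl = baseUrl; }

    String getUserJson(long userId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/users/" + userId))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // a real stub would turn this into a User object
    }
}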
A stub is a function definition that has the correct function name, the correct number of parameters, and produces a dummy result of the correct type.
It helps with writing the test and serves as a kind of scaffolding, making it possible to run the examples even before the function design is complete.
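For instance, a tiny hedged sketch in Java: a hypothetical monthlyPayment function that compiles and is callable, but only returns a dummy value of the right type until the real design is settled:
class LoanCalculator {
    // Stub: correct name, correct parameters, dummy result of the correct type.
    static double monthlyPayment(double principal, double annualRate, int months) {
        return 0.0; // TODO: replace with the real amortisation formula
    }
}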
This phrase is almost certainly an analogy with a phase in house construction — "stubbing out" plumbing. During construction, while the walls are still open, the rough plumbing is put in. This is necessary for the construction to continue. Then, when everything around it is ready enough, one comes back and adds faucets and toilets and the actual end-product stuff. (See for example How to Install a Plumbing Stub-Out.)
When you "stub out" a function in programming, you build enough of it to work around (for testing or for writing other code). Then, you come back later and replace it with the full implementation.
There are also some very good testing frameworks for creating such stubs.
One of my favourites is Mockito. There are also EasyMock and others, but Mockito is great and you should read up on it - a very elegant and powerful package.
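For example, a small hedged sketch with Mockito (the UserRepository interface is hypothetical, and mockito-core must be on the classpath):
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

interface UserRepository { String findNameById(long id); }

class UserRepositoryStubExample {
    public static void main(String[] args) {
        // Create a stub/mock and program its answer for the case under test.
        UserRepository repo = mock(UserRepository.class);
        when(repo.findNameById(42L)).thenReturn("Jane Doe");

        // Code under test sees a perfectly ordinary UserRepository.
        System.out.println(repo.findNameById(42L)); // prints "Jane Doe"

        // Optionally verify that the interaction happened.
        verify(repo).findNameById(42L);
    }
}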
RPC Stubs
Basically, a client-side stub is a procedure that looks to the client as if it were a callable server procedure.
A server-side stub looks to the server as if it's a calling client.
The client program thinks it is calling the server; in fact, it's calling the client stub.
The server program thinks it's called by the client; in fact, it's called by the server stub.
The stubs send messages to each other to make the RPC happen.
Source
"Stubbing-out a function means you'll write only enough to show that the function was called, leaving the details for later when you have more time."
From: SAMS Teach yourself C++, Jesse Liberty and Bradley Jones
A stub is a piece of code that converts the parameters passed during a remote procedure call (RPC). The parameters have to be converted because the client and server use different address spaces. The stub performs this conversion so that the server perceives the RPC as a local function call.
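A rough hand-written Java sketch of a client-side stub (the Calculator interface, host name, port, and wire format are made up; real RPC frameworks generate this kind of code for you):
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.Socket;

interface Calculator { int add(int a, int b); }

// Client-side stub: looks like a local Calculator, but marshals the call
// over a socket to a remote server and unmarshals the reply.
class CalculatorStub implements Calculator {
    public int add(int a, int b) {
        try (Socket socket = new Socket("calc.example.com", 9000);
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
             ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
            out.writeObject("add");      // marshal the method name...
            out.writeInt(a);             // ...and the arguments
            out.writeInt(b);
            out.flush();
            return in.readInt();         // unmarshal the result
        } catch (Exception e) {
            throw new RuntimeException("remote call failed", e);
        }
    }
}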
A stub can be described as a fake substitute for the original function, standing in for output that is not accessible right now for reasons such as:
It is not developed right now
It is not callable from the current environment (maybe testing)
A stub has:
Exact number of parameters
Exact output format (not necessarily correct output)
Why is a stub used?
When the function is not accessible in an environment (such as testing), or when its implementation is not yet available.
Example:
Let's say you want to test a function that makes a network call. While testing the code, you cannot wait for the network call's result, so you write a mock output for the network call and proceed with your test.
TestFunction(){
    // Some things here
    // Some things here
    var result = networkCall(param);
    // something depending on the result
}
This networkCall returns, let's say, a string, so you have to create a function with the exact same parameters that also returns a string.
String fakeNetworkCall(int param){
    if(param == 1) return "OK";
    else return "NOT OK";
}
Now that you have written a fake function, use it as a replacement in your code:
TestFunction(){
    // Some things here
    // Some things here
    var result = fakeNetworkCall(param);
    // something depending on the result
}
This fakeNetworkCall is a stub.

Tcl extensions: Life Cycle of extensions' ClientData

Non-trivial native extensions will require per-interpreter data structures that are dynamically allocated.
I am currently using Tcl_SetAssocData, with a key corresponding to the extension's name and an appropriate deletion routine, to prevent this memory from leaking away.
However, Tcl_PkgProvideEx also allows one to record such information. This information can be retrieved by Tcl_PkgRequireEx. Associating the extension's data structures with its package seems more natural than in the "grab-bag" AssocData; yet the Pkg*Ex routines do not provide an automatically invoked deletion routine. So I think I need to stay with the AssocData approach.
For which situations were the Pkg*Ex routines designed?
Additionally, the Tcl library allows one to install ExitHandlers and ThreadExitHandlers. Paraphrasing the manual, this is for flushing buffers to disk, etc.
Are there any other situations requiring the use of ExitHandlers?
When Tcl calls exit, are Tcl_PackageUnloadProcs called?
The whole-extension ClientData is intended for extensions that want to publish their own stub table (i.e., an organized list of functions that represent an exact ABI) that other extensions can build against. This is a very rare thing to want to do; leave at NULL if you don't want it (and contact the Tcl core developers' mailing list directly if you do; we've got quite a bit of experience in this area). Since it is for an ABI structure, it is strongly expected to be purely static data and so doesn't need deletion. Dynamic data should be sent through a different mechanism (e.g., via the Tcl interpreter or through calling functions via the ABI).
Exit handlers (which can be registered at multiple levels) are things that you use when you have to delete some resource at an appropriate time. The typical points of interest are when an interpreter (a Tcl_Interp structure) is being deleted, when a thread is being deleted, and when the whole process is going away. What resources need to be specially deleted? Well, it's usually obvious: file handles, database handles, that sort of thing. It's awkward to answer in general as the details matter very much: ask a more specific question to get tailored advice.
However, package unload callbacks are only called in response to the unload command. Like package load callbacks, they use “special function symbol” registration, and if they are absent then the unload command will refuse to unload the package. Most packages do not use them. The use case is where there are very long-lived processes that need to have extra upgradeable functionality added to them.

Should I use custom exceptions to control the flow of application?

Is it good practice to use custom business exceptions (e.g. BusinessRuleViolationException) to control the flow when handling user errors / incorrect user input?
The classic approach:
I have a web service with two methods: one is the 'checker' (UsernameAlreadyExists()) and the other is the 'creator' (CreateUsername())...
So if I want to create a username, I have to make two round trips to the web service: 1. check, 2. if the check is OK, create.
What about using a UsernameAlreadyExistsException? Then I call only the second web service method (CreateUsername()), which contains the check and, if not successful, throws the UsernameAlreadyExistsException.
So the end goal is to have only one round trip to the web service, and the checking can also be contained in other web service methods (so I avoid calling UsernameAlreadyExists() all the time...).
Furthermore, I could use this kind of business error handling with other web service calls, completely avoiding the checking prior to the call.
Using exceptions to manage application flow is a bad idea. If you can check something, use the check's output to decide. Rather than having the called method throw an exception and the caller decide what to do with the exception, it is better to let the called method return a state and let the caller decide the flow based on the value of that state.
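As a hedged Java sketch of that style (all names here are hypothetical), the 'creator' returns a status value that the caller branches on, and exceptions stay reserved for genuinely exceptional failures:
enum CreateUserResult { CREATED, USERNAME_ALREADY_EXISTS, INVALID_USERNAME }

class UserService {
    CreateUserResult createUsername(String username) {
        if (username == null || username.isEmpty()) return CreateUserResult.INVALID_USERNAME;
        if (usernameAlreadyExists(username))        return CreateUserResult.USERNAME_ALREADY_EXISTS;
        // ... persist the user ...
        return CreateUserResult.CREATED;
    }
    private boolean usernameAlreadyExists(String username) { return false; /* lookup elided */ }
}

class Caller {
    static void register(UserService service, String username) {
        switch (service.createUsername(username)) {     // one round trip, no exception for flow control
            case CREATED:                 System.out.println("Welcome!"); break;
            case USERNAME_ALREADY_EXISTS: System.out.println("Please pick another name."); break;
            case INVALID_USERNAME:        System.out.println("That name is not allowed."); break;
        }
    }
    public static void main(String[] args) { register(new UserService(), "jane_doe"); }
}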
Some people do, and they are really good at developing.
Sometimes it is just easier and faster for the developer to throw an exception instead of chaining back a return value.
Bear in mind, though, that throwing an exception is not cheap: the stack trace and the like are big, and heavy use can impact performance.
I think it is OK to use exceptions sometimes when it is just really necessary, but it should not be your main way of controlling the flow.
just my 2/100

What are some good use cases for the CALLBACK pattern/idiom?

I don't use this pattern, maybe there are some places where it would have been appropriate and I used something else. Have you used it in your daily coding? Feel free to give samples, in your language of choice, along with your explanation.
Callbacks aren't really a "pattern" - more like a building block. A number of the gang of four design patterns use virtual methods in a callback-like way. Justin Niessner has already mentioned Observer.
Callbacks are much older than OOP (and probably older than 3GLs and even assembler). Another old idea is the parameter block - the C interpretation being a struct full of related members to be passed to a function so that function doesn't need a huge parameter list.
OOP classes build upon the parameter block (and add a philosophy to it). The class instance itself is a parameter block passed by reference to its methods. The virtual table is a dispatch-handling parameter block. Every virtual method has a callback pointer in the dispatch-handling parameter block. A pure virtual method reserves space for the callback pointer in the parameter block, and promises to provide the actual pointer later.
Since the class is the building block for object oriented design patterns, and parameter blocks and callbacks are the building blocks of classes - well, you could claim that all OOP design patterns are built from these ideas.
I'd like to be able to say "parameter blocks and callbacks, plus style rules guiding their use, inspired object orientation" but as appealing as it sounds, I don't know whether it's true.
I use callbacks pretty much every day in the following scenarios:
Events: When the user clicks their mouse on a control, presses a key or otherwise interacts with the UI in a way I need to handle, I subscribe to the delegate that the control publishes for the event. I can then handle it by updating the UI, cancelling the event in certain circumstances or otherwise taking some special action.
Multithreaded Programming: When programming a GUI, it's important to keep the UI responsive and to indicate the progress of a long-running background task to the user. To do this, I kick off the task in a separate thread and then publish delegates (events in the .NET world) that give my UI the opportunity to notify the user about the progress being made.
Lambda functions: In .NET, lambda functions are a form of a delegate, one that lets me interact with another piece of code's operation at a later point in time. LINQ is a great example of this. I can create a small matching function and then supply it to a LINQ query. Later, when I execute my query against a collection, the matching function is called to determine if there is a match for the query. This allows me to not have to build or worry about the query mechanism. I just have to tell the query mechanism where to go to find out if a comparison is a match or not.
These examples just scratch the surface, I'm sure. But they are useful examples of how I use callbacks every day.
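The same patterns carry over to Java almost verbatim; here is a small hedged sketch combining an event-style callback with a predicate handed to a query, with all the class and method names made up for illustration:
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;
import java.util.stream.Collectors;

class Button {
    private Consumer<String> clickHandler = label -> {};
    void onClick(Consumer<String> handler) { this.clickHandler = handler; } // subscribe a callback
    void simulateClick() { clickHandler.accept("OK button"); }              // publisher invokes it later
}

class CallbackExamples {
    public static void main(String[] args) {
        // Event-style callback: the handler runs when the control decides, not when we register it.
        Button button = new Button();
        button.onClick(label -> System.out.println(label + " was clicked"));
        button.simulateClick();

        // LINQ-style callback: the matching function is called by the query machinery.
        Predicate<String> isShort = name -> name.length() <= 4;
        List<String> shortNames = List.of("Ada", "Grace", "Linus", "Guy").stream()
                .filter(isShort)
                .collect(Collectors.toList());
        System.out.println(shortNames); // [Ada, Guy]
    }
}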
The .NET platform uses callbacks heavily to implement the Observer pattern.
They also get used for handling Asynchronous processes.
Objective C and the Cocoa framework make a lot of use of it. An example would be NSURLConnection, which will inform an object given to it (called its delegate) when something happens on the connection:
NSURLConnection *foo = [[NSURLConnection alloc] initWithRequest:request delegate:self];
Note the passing of delegate there. The request proceeds in the background, and the instance will then send messages to the delegate (in this case, self), like:
connectionDidFinishLoading:
connection:didFailWithError:
You get the idea. I believe this is called the "observer pattern". It's all tied in to Cocoa's event loop (as far as I know, I'm still learning) and is cheap 'n easy asynchronous programming. A lot of frameworks in a variety of languages follow this approach.
.NET has delegates as well, which are similar. Think events.
I use it a great deal in JavaScript to let me know when an asynchronous call has finished, so the result can be processed.
But in JavaScript, and now in C# 3, I pass functions in as parameters, so that the processing can go on without explicitly setting up a delegate to be called.
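That JavaScript/C# idiom of handing a function to an asynchronous call maps naturally onto Java's CompletableFuture; a small hedged sketch where the slow remote call is only simulated:
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

class AsyncCallbackExample {
    // The caller passes the "what to do when it finishes" logic as a parameter.
    static void fetchGreeting(Consumer<String> onDone) {
        CompletableFuture
                .supplyAsync(() -> "hello from a background thread") // simulated slow call
                .thenAccept(onDone);                                  // callback runs on completion
    }

    public static void main(String[] args) throws Exception {
        fetchGreeting(result -> System.out.println("processed: " + result));
        Thread.sleep(200); // crude wait so the demo's background work can finish
    }
}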