As a side question to Use Vulkan VkImage as a CUDA cuArray: how can I get more details about what is wrong when a CUDA Driver API call returns CUDA_ERROR_INVALID_VALUE?
Specifically, the call is to cuExternalMemoryGetMappedMipmappedArray() and the documentation does not list CUDA_ERROR_INVALID_VALUE among its return values.
Any suggestions on how to go about debugging this issue?
That appears to have been a transient documentation error. The current documentation linked in the question (CUDA 11.5 at the time of writing) lists CUDA_ERROR_INVALID_VALUE as an expected return value.
As for the debugging part: the function has only two inputs, the memory object handle and the array descriptor, so one of those must be invalid. It should be straightforward to narrow down once you confirm that this call, and not a prior one, is the one returning the error.
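One straightforward way to confirm which call fails is to check every driver API call and decode the status with cuGetErrorName() / cuGetErrorString(). A minimal C++ sketch (the checkDrv helper is illustrative, not part of the original code):

#include <cstdio>
#include <cstdlib>
#include <cuda.h>

// Print which driver API call failed, with the error name and description.
static void checkDrv(CUresult res, const char* what)
{
    if (res != CUDA_SUCCESS) {
        const char* name = nullptr;
        const char* desc = nullptr;
        cuGetErrorName(res, &name);    // e.g. "CUDA_ERROR_INVALID_VALUE"
        cuGetErrorString(res, &desc);  // short human-readable description
        std::fprintf(stderr, "%s failed: %s (%s)\n", what,
                     name ? name : "?", desc ? desc : "?");
        std::exit(EXIT_FAILURE);
    }
}

int main()
{
    checkDrv(cuInit(0), "cuInit");

    CUdevice dev;
    checkDrv(cuDeviceGet(&dev, 0), "cuDeviceGet");

    // In the interop code, wrap the two calls in question the same way:
    // checkDrv(cuImportExternalMemory(&extMem, &memDesc), "cuImportExternalMemory");
    // checkDrv(cuExternalMemoryGetMappedMipmappedArray(&mipArray, extMem, &arrDesc),
    //          "cuExternalMemoryGetMappedMipmappedArray");
    return 0;
}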
In a multi-GPU system, I use the return value of
cudaError_t cudaDeviceDisablePeerAccess(int peerDevice)
to determine if peer access is disabled. In that case, the function returns cudaErrorPeerAccessNotEnabled.
This is not an error in my program, but produces a warning in both cuda-gdb and cuda-memcheck since an API call did not return cudaSuccess.
In the same manner cudaDeviceEnablePeerAccess returns cudaErrorPeerAccessAlreadyEnabled if access has already been enabled.
How can one find out if peer access is enabled / disabled without producing a warning?
Summarizing comments into an answer: you can't.
The runtime API isn't blessed with the ability to distinguish informational or warning-level status returns from error returns: everything which isn't cudaSuccess is treated as an error. And toolchain utilities like cuda-memcheck cannot be instructed to ignore particular errors. Their default behaviour is to report and continue, so this will not interfere with anything, but it will emit an error message.
If you want to avoid the errors, then you will need to build a layer of your own state tracking and pre-emptive condition checks so that calls which would return a non-success status are never made.
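For example, a minimal sketch of such state tracking on top of the runtime API; the PeerTracker class is hypothetical and only illustrates the idea of skipping calls that would return a non-success status:

#include <cuda_runtime.h>
#include <map>
#include <utility>

// Hypothetical helper: remembers which (device, peer) pairs have been enabled,
// so enable/disable is only called when it can actually succeed.
class PeerTracker {
    std::map<std::pair<int, int>, bool> enabled_;
public:
    cudaError_t enablePeer(int device, int peer) {
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, device, peer);
        if (!canAccess || enabled_[{device, peer}])
            return cudaSuccess;                 // nothing to do, no API "error"
        cudaSetDevice(device);
        cudaError_t err = cudaDeviceEnablePeerAccess(peer, 0);
        if (err == cudaSuccess) enabled_[{device, peer}] = true;
        return err;
    }
    cudaError_t disablePeer(int device, int peer) {
        if (!enabled_[{device, peer}])
            return cudaSuccess;                 // was never enabled: skip the call
        cudaSetDevice(device);
        cudaError_t err = cudaDeviceDisablePeerAccess(peer);
        if (err == cudaSuccess) enabled_[{device, peer}] = false;
        return err;
    }
};

With a wrapper like this the tools never see a non-success return, because the decision is made from your own bookkeeping rather than by probing the API.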
I'm writing a C++/CX program that uses WiFiDirect. The target platform version is 10.0.10586.0. Everything works perfectly fine except for one thing.
The problem is that there is no WiFiDirectDevice::Close() method available, even though it is mentioned in the documentation.
The actual error I get is the following:
Error C2039 'Close': is not a member of 'Windows::Devices::WiFiDirect::WiFiDirectDevice'
Does anyone know where I can find it?
Close is not projected for C++/CX; it is automatically called when the object's destructor is called (or when no more references are outstanding).
See the docs for IClosable:
Note to callers
Close methods aren't callable through Visual C++ component extensions (C++/CX) on Windows Runtime class instances. Instead, C++/CX code for runtime classes that wants to explicitly clean up a reference should call the destructor or set the last reference to null.
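A minimal C++/CX sketch of both options described above (ReleaseDevice and m_wfdDevice are illustrative names, not from the question's code):

using namespace Windows::Devices::WiFiDirect;

void ReleaseDevice(WiFiDirectDevice^ wfdDevice)
{
    // Calling the destructor on a runtime class instance invokes the
    // underlying IClosable::Close() for you.
    delete wfdDevice;
}

// Alternatively, drop the last outstanding reference instead, e.g.:
// m_wfdDevice = nullptr;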
While developing an Android app, I run into the following exception, which I have no clue about; I have googled related topics but none of them helped.
Fatal Exception: java.util.ConcurrentModificationException
java.util.HashMap$HashIterator.nextEntry (HashMap.java:806)
java.util.HashMap$KeyIterator.next (HashMap.java:833)
com.android.internal.util.XmlUtils.writeSetXml (XmlUtils.java:298)
com.android.internal.util.XmlUtils.writeValueXml (XmlUtils.java:447)
com.android.internal.util.XmlUtils.writeMapXml (XmlUtils.java:241)
com.android.internal.util.XmlUtils.writeMapXml (XmlUtils.java:181)
android.app.SharedPreferencesImpl.writeToFile (SharedPreferencesImpl.java:596)
android.app.SharedPreferencesImpl.access$800 (SharedPreferencesImpl.java:52)
android.app.SharedPreferencesImpl$2.run (SharedPreferencesImpl.java:511)
java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1112)
java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:587)
java.lang.Thread.run (Thread.java:841)
Preferences are thread safe (!), but not process safe. The answer from @mohan mishra is simply not true; there is no need to synchronize everything. The problem here, as pointed out in another question, is that per the documentation you MUST NOT modify any instance returned by getStringSet() or getAll():
getStringSet()
Note that you must not modify the set instance returned by this call.
The consistency of the stored data is not guaranteed if you do, nor is
your ability to modify the instance at all.
getAll()
Note that you must not modify the collection returned by this method,
or alter any of its contents. The consistency of your stored data is
not guaranteed if you do.
To the other question (see the documentation): please ensure that you are not accessing the preferences from any type of background thread. Also, all of your methods that add to the preferences must be synchronised (if you have your own preference-managing class).
In CUDA we can find out about errors simply by checking the return value of functions such as cudaMemcpy(), cudaMalloc(), etc., which is a cudaError_t that can be compared against cudaSuccess. Is there any method available in JCuda to check for errors from functions such as cuMemcpyHtoD(), cuMemAlloc(), cuLaunchKernel(), etc.?
First of all, the methods of JCuda (should) behave exactly like the corresponding CUDA functions: they return an error code in the form of an int. These error codes are also defined in...
the cudaError class for the Runtime API
the CUresult class for the Driver API
the cublasStatus class for JCublas
the cufftResult class for JCufft
the curandStatus class for JCurand
the cusparseStatus class for JCusparse
and are the same error codes as in the respective CUDA library.
All these classes additionally have a static method called stringFor(int) - for example, cudaError#stringFor(int) and CUresult#stringFor(int). These methods return a human-readable String representation of the error code.
So you could do manual error checks, for example, like this:
int error = someCudaFunction();
if (error != 0) { // 0 is cudaSuccess / CUDA_SUCCESS
    System.out.println("Error code " + error + ": " + cudaError.stringFor(error));
}
which might print something like
Error code 10: cudaErrorInvalidDevice
But...
...the error checks may be a hassle. You might have noticed in the CUDA samples that NVIDIA introduced some macros that simplify the error checks. And similarly, I added optional exception checks for JCuda: All the libraries offer a static method called setExceptionsEnabled(boolean). When calling
JCudaDriver.setExceptionsEnabled(true);
then all subsequent method calls for the Driver API will automatically check the method return values, and throw a CudaException when there was any error.
(Note that this method exists separately for all libraries. E.g. the call would be JCublas.setExceptionsEnabled(true) when using JCublas)
The samples usually enable exception checks right at the beginning of the main method, and I'd recommend doing this as well, at least during the development phase. As soon as it is clear that the program does not contain any errors, one could disable the exceptions, but there's hardly a reason to do so: they conveniently offer clear information about which error occurred, whereas otherwise the calls may fail silently.
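For comparison, the "macros that simplify the error checks" mentioned above typically look like the following on the native CUDA side; this is a minimal sketch in the spirit of the samples' helpers, not their exact code:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Abort with a readable message if a runtime API call does not return cudaSuccess.
#define CHECK_CUDA(call)                                                         \
    do {                                                                         \
        cudaError_t err = (call);                                                \
        if (err != cudaSuccess) {                                                \
            std::fprintf(stderr, "CUDA error %d (%s) at %s:%d\n",                \
                         (int)err, cudaGetErrorString(err), __FILE__, __LINE__); \
            std::exit(EXIT_FAILURE);                                             \
        }                                                                        \
    } while (0)

int main()
{
    void* p = nullptr;
    CHECK_CUDA(cudaMalloc(&p, 1024));  // fails loudly instead of silently
    CHECK_CUDA(cudaFree(p));
    return 0;
}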
During my CUBLAS initialization, I get an error, i.e. not the expected CUBLAS_STATUS_SUCCESS.
Checking the returned status, I found that it is CUBLAS_STATUS_NOT_INITIALIZED, which is not listed as a possible return value of that function.
Does anyone have an idea what may have caused that behavior?
The CUBLAS 4.x documentation mentions CUBLAS_STATUS_NOT_INITIALIZED as an error code for cublasCreate, with the meaning "the CUDA Runtime initialization failed".
Can you verify that you have a valid CUDA context?
If so, did you create a valid CUBLAS context?
For CUBLAS 3.x and CUBLAS 4.x using the legacy API: did you call cublasInit while a CUDA context was active in the current thread, and did it return CUBLAS_STATUS_SUCCESS?
For CUBLAS 4.x with the new API: did you call cublasCreate, and did it return CUBLAS_STATUS_SUCCESS? Are you passing the handle it created to the cublas..._v2 methods?
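A minimal C++ sketch of the new-API initialization sequence with explicit status checks (device 0 is just an example):

#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main()
{
    // Make sure a valid CUDA context exists: cudaSetDevice selects the device,
    // and cudaFree(0) forces the runtime to initialize its context now.
    cudaError_t cerr = cudaSetDevice(0);
    if (cerr != cudaSuccess) {
        std::printf("cudaSetDevice failed: %s\n", cudaGetErrorString(cerr));
        return 1;
    }
    cudaFree(0);

    // Create the CUBLAS handle (new/v2 API) and check the status explicitly.
    cublasHandle_t handle;
    cublasStatus_t status = cublasCreate(&handle);
    if (status != CUBLAS_STATUS_SUCCESS) {
        std::printf("cublasCreate failed with status %d\n", (int)status);
        return 1;
    }

    // ... pass 'handle' to every cublas..._v2 call ...

    cublasDestroy(handle);
    return 0;
}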