Z3_MEMOUT_FAIL assertion failure in Z3's C++ API solver check?

I am getting Z3_MEMOUT_FAIL as the assertion failure when I call solver.check(). What does this actually mean? Is it simply that the Z3 solver ran out of memory?

Yes, that exception means that the solver ran out of memory.
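If you want to handle the memout from the C++ API rather than abort, the usual approach is to wrap the check() call in a try/catch for z3::exception; you can also cap Z3's memory with the global memory_max_size parameter (in megabytes) so the limit is hit predictably. A minimal sketch, assuming a build where Z3 errors are raised as exceptions rather than hard assertion aborts (the 4096 MB limit and the toy constraint are just placeholders):

#include <iostream>
#include "z3++.h"

int main() {
    // Assumption: cap Z3 at 4096 MB so running out of memory surfaces
    // as Z3_MEMOUT_FAIL instead of exhausting the machine.
    z3::set_param("memory_max_size", 4096);

    z3::context ctx;
    z3::solver s(ctx);
    z3::expr x = ctx.int_const("x");
    s.add(x > 0);

    try {
        z3::check_result r = s.check();
        std::cout << (r == z3::sat ? "sat" : r == z3::unsat ? "unsat" : "unknown") << "\n";
    } catch (z3::exception &e) {
        // The C++ wrapper turns Z3 error codes (including Z3_MEMOUT_FAIL)
        // into z3::exception; e.msg() describes what went wrong.
        std::cerr << "Z3 error: " << e.msg() << "\n";
    }
    return 0;
}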

Related

UEFI ARM64 Synchronous Exception

I am developing a UEFI application for ARM64 (ARMv8-A) and I have come across the following issue: "Synchronous Exceptions at 0xFF1BB0B8."
This value (0x0FF1BB0B8) is the exception link register (ELR) value; the ELR holds the exception return address.
There are a number of sources of Synchronous exceptions (https://developer.arm.com/documentation/den0024/a/AArch64-Exception-Handling/Synchronous-and-asynchronous-exceptions):
Instruction aborts from the MMU. For example, by reading an instruction from a memory location marked as Execute Never.
Data Aborts from the MMU. For example, Permission failure or alignment checking.
SP and PC alignment checking.
Synchronous external aborts. For example, an abort when reading translation table.
Unallocated instructions.
Debug exceptions.
I can't update the BIOS firmware to output more debug information. Is there any way to detect more precisely what causes the Synchronous Exception?
Can I use the value of ELR (0x0FF1BB0B8) to locate the issue? I compile with the -fno-pic -fno-pie options.

States of memory data after cuda exceptions

The CUDA documentation is not clear on how memory data changes after a CUDA application throws an exception.
For example, if a (dynamic) kernel launch encounters an exception (e.g. Warp Out-of-range Address), the current kernel launch will be stopped. After this point, will data (e.g. __device__ variables) on the device still be kept, or is it discarded along with the exception?
A concrete example would be like this:
CPU launches a kernel
The kernel updates the value of __device__ variableA to be 5 and then crashes
CPU memcpys the value of variableA from device to host; what value does the CPU get in this case, 5 or something else?
Can someone show the rationale behind this?
The behavior is undefined in the event of a CUDA error which corrupts the CUDA context.
This type of error is evident because it is "sticky", meaning once it occurs, every single CUDA API call will return that error, until the context is destroyed.
Non-sticky errors are cleared automatically after they are returned by a cuda API call (with the exception of cudaPeekAtLastError). Any "crashed kernel" type error (invalid access, unspecified launch failure, etc.) will be a sticky error. In your example, step 3 would (always) return an API error on the result of the cudaMemcpy call to transfer variableA from device to host, so the results of the cudaMemcpy operation are undefined and unreliable -- it is as if the cudaMemcpy operation also failed in some unspecified way.
Since the behavior of a corrupted CUDA context is undefined, there is no definition for the contents of any allocations, or in general the state of the machine after such an error.
An example of a non-sticky error might be an attempt to cudaMalloc more data than is available in device memory. Such an operation will return an out-of-memory error, but that error will be cleared after being returned, and subsequent (valid) cuda API calls can complete successfully, without returning an error. A non-sticky error does not corrupt the CUDA context, and the behavior of the cuda context is exactly the same as if the invalid operation had never been requested.
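To make the distinction concrete, here is a rough sketch of both cases (the kernel, the variable name, the allocation sizes, and the deliberately out-of-range store are made up for illustration, and it assumes that store actually faults on the device):

#include <cstdio>
#include <cuda_runtime.h>

__device__ int variableA;

// Hypothetical kernel: writes variableA and then performs an out-of-range
// store, which produces a sticky, context-corrupting error.
__global__ void crashing_kernel(int *buf)
{
    variableA = 5;
    buf[1 << 28] = 1;   // far past the end of a 1-element allocation
}

int main()
{
    // Non-sticky error: an absurdly large allocation fails with
    // cudaErrorMemoryAllocation, but the error is cleared once returned.
    void *p = nullptr;
    cudaError_t err = cudaMalloc(&p, (size_t)1 << 60);
    printf("oversized cudaMalloc: %s\n", cudaGetErrorString(err));

    int *buf = nullptr;
    err = cudaMalloc(&buf, sizeof(int));   // succeeds: the context is intact
    printf("small cudaMalloc:     %s\n", cudaGetErrorString(err));

    // Sticky error: the faulting kernel corrupts the context, so the
    // cudaMemcpyFromSymbol below reports an error and the value it
    // copied back is unreliable.
    crashing_kernel<<<1, 1>>>(buf);
    int hostA = 0;
    err = cudaMemcpyFromSymbol(&hostA, variableA, sizeof(int));
    printf("copy after crash:     %s (hostA=%d, unreliable)\n",
           cudaGetErrorString(err), hostA);
    return 0;
}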
This distinction between sticky and non-sticky errors is called out in many of the documented error code descriptions, for example:
synchronous, non-sticky, non-cuda-context-corrupting:
cudaErrorMemoryAllocation = 2
The API call failed because it was unable to allocate enough memory to perform the requested operation.
asynchronous, sticky, cuda-context-corrupting:
cudaErrorMisalignedAddress = 74
The device encountered a load or store instruction on a memory address which is not aligned. The context cannot be used, so it must be destroyed (and a new one should be created). All existing device memory allocations from this context are invalid and must be reconstructed if the program is to continue using CUDA.
Note that cudaDeviceReset() by itself is insufficient to restore a GPU to proper functional behavior. In order to accomplish that, the "owning" process must also terminate. See here.

CUDA device code doesn't throw an error when it segfaults

Having a CUDA kernel like this
__global__ void kernel(...)
{
    free((void *)48190);  // 48190 is not a pointer obtained from device-side malloc
}
is probably not a good idea.
Let's say that 48190 is not a valid address to deallocate during the current execution.
If we were on the host, the runtime would probably stop execution right away, throw a segfault error and give us some nasty description of what happened like "A heap has been corrupted" or something.
But what if it did all that except print the message? What if, when it hits that point, it blows up and exits without telling me anything about what happened? That is what this code gives me. If I wrote the above kernel on my machine, it would compile, run, and if that was everything my program did (just call that kernel), it would happily exit with no error message :(. I only find out later, when I try to do a cudaMemcpy, that something went wrong, because it fails with error code 30: unknown error.
My question is: is this supposed to happen? Is there any way to enable some kind of error description when something goes wrong in a kernel call?
is there any way to enable some kind of message description when something goes wrong in a kernel call?
Yes, do cuda error checking. It's described here.
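The canonical pattern looks roughly like the sketch below (the macro name and layout are just a common convention, not an official API); the key point is that the crash only becomes visible once you check cudaGetLastError() and the result of a synchronizing call after the kernel launch:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Wrap every runtime API call so failures are reported where they happen.
#define CUDA_CHECK(call)                                              \
    do {                                                              \
        cudaError_t err_ = (call);                                    \
        if (err_ != cudaSuccess) {                                    \
            fprintf(stderr, "CUDA error %s at %s:%d\n",               \
                    cudaGetErrorString(err_), __FILE__, __LINE__);    \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

__global__ void kernel()
{
    free((void *)48190);   // deliberately corrupt the device heap
}

int main()
{
    kernel<<<1, 1>>>();
    // Catches launch-configuration problems (bad grid size, etc.).
    CUDA_CHECK(cudaGetLastError());
    // Catches errors raised while the kernel runs; without this
    // synchronization the crash stays silent until a later API call.
    CUDA_CHECK(cudaDeviceSynchronize());
    return 0;
}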

Crashing a kernel gracefully

A follow up to: CUDA: Stop all other threads
I'm looking for a way to exit a kernel if a "bad condition" occurs.
The programming manual says NVCC does not support exception handling. I'm wondering if there is a user-defined CUDA error code. In other words, if "bad" happens, terminate with this user error code. I doubt there is one, so my other idea would be to cause one.
Something like: if "bad" happens, divide by zero. But I'm unsure whether one thread doing a divide-by-zero is enough to crash the whole kernel, or only that thread?
Is there a better approach to terminating a kernel?
You should first read this question and the answers by harrism and tera (asked/answered yesterday).
You may be tempted to use something like
if (there_is_an_error) {
    *status = MY_ERROR_CODE; // store to device pointer
    __threadfence();         // ensure store issued before trap
    asm("trap;");            // kill kernel with error
}
This does not exactly satisfy your condition of "graceful", in my opinion. Trap causes the kernel to exit and the runtime to report cudaErrorUnknown. But since kernel execution is asynchronous, you will need to synchronize your stream/device in order to catch this error, which means synchronizing after every kernel call, unless you are OK with imprecise errors (i.e. you may not catch the error code until some later CUDA API call after the one that actually failed).
But this is just the way kernel error handling is in CUDA, and well-written code should synchronize in debug builds to check for kernel errors, settling for imprecise error messages in release builds. Unfortunately, I don't think there is a more graceful way than that.
edit: on compute capability 2.0 and later you can use assert() to exit a kernel with an error in debug builds. It was unclear if this is what you want, though.
Device-side assert() may also help you. You can find it in section B.15 of the CUDA C Programming Guide.
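For reference, a minimal sketch of the device-side assert() route (the input data and the condition are made up; when the assertion fires, the kernel stops and the next synchronizing API call returns cudaErrorAssert):

#include <cassert>
#include <cstdio>
#include <cuda_runtime.h>

__global__ void checked_kernel(const int *data)
{
    int v = data[threadIdx.x];
    // Abort the kernel if the "bad condition" is detected.
    assert(v >= 0 && "negative input not allowed");
    // ... normal processing ...
}

int main()
{
    int host[4] = {1, 2, -3, 4};          // -3 triggers the assertion
    int *dev = nullptr;
    cudaMalloc(&dev, sizeof(host));
    cudaMemcpy(dev, host, sizeof(host), cudaMemcpyHostToDevice);

    checked_kernel<<<1, 4>>>(dev);
    cudaError_t err = cudaDeviceSynchronize();   // returns cudaErrorAssert
    printf("kernel status: %s\n", cudaGetErrorString(err));
    return 0;
}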