I have a method that launches a thread which I have no control over, and I have no mechanism to stop it. I had, however, expected the thread launched in the test method to be killed once the test method finished executing. That is not the case: I can see logs from the earlier test method while the current test method is executing. Is there a setting, or anything else I can do, to ensure that all threads spawned during a test are cleaned up afterwards?
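For illustration, the kind of cleanup I have in mind would look roughly like the sketch below, assuming JUnit 5 and threads that actually honour interruption (the class and method names are only illustrative):

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import java.util.HashSet;
import java.util.Set;

class ThreadCleanupTest {

    // snapshot of the threads that were already alive before the test started
    private Set<Thread> preExistingThreads;

    @BeforeEach
    void rememberThreads() {
        preExistingThreads = new HashSet<>(Thread.getAllStackTraces().keySet());
    }

    @AfterEach
    void interruptLeftoverThreads() throws InterruptedException {
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.isAlive() && !preExistingThreads.contains(t)) {
                t.interrupt();   // only helps if the thread responds to interruption
                t.join(1_000);   // give it a moment to terminate
            }
        }
    }
}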
I am using the JetBrains Self-Profiling API. For the most part it works great, but I have hit a problem: sometimes when I call DotTrace.SaveData it does not return. The call just keeps executing, so nothing after it ever runs.
What I have tried:
Put it in a try/catch block.
Ensure that the target folder is not locked. (It can write the intermediate files just fine.)
Ensure that no other instances or threads are calling DotTrace profiling methods.
Ensure that the profiler has been initialized and attached.
It is frustrating because it does not throw an exception or give any details. It just hangs.
How can I debug further to find out what is causing this issue?
I am using PIT mutation testing (1.6.7) with a Maven build. The report tells me which test killed each mutant, but what I need to know is whether a mutant was killed by a JUnit assertion violation or by an implicit check from the run-time system. Is this possible with PIT?
If by 'implicit checks' you mean a compile-time error, then this is not possible under PIT. Unlike some earlier systems, it will not produce mutants that are not viable.
The only way in which a mutant can be killed is if an uncaught exception is thrown while running a test. PIT does not distinguish between AssertionFailedErrors and other types of exception, so it is not possible to tell if the test failed or errored.
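As an illustration (the Calculator class and its methods here are hypothetical), both of the following tests register as a kill from PIT's point of view, even though only the first one fails on an assertion:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {

    @Test
    public void killsMutantViaAssertion() {
        // a mutated add() makes this throw an AssertionError -> mutant killed
        assertEquals(4, new Calculator().add(2, 2));
    }

    @Test
    public void killsMutantViaImplicitException() {
        // no assertion at all: if a mutant makes lookup() return null,
        // the resulting NullPointerException counts as a kill just the same
        new Calculator().lookup("answer").length();
    }
}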
A couple of years ago I implemented an extension that differentiates between assertion failures and implicitly thrown exceptions; however, the code was never merged back into Pitest.
You can still find the branch here: https://github.com/hcoles/pitest/compare/master...nrainer:pitest:features/categorizeTestFailure
To get meaningful results, you will need to generate a full mutation matrix (for each mutation, execute all tests that cover it, rather than stopping as soon as the first test kills it). The "kill type" is then determined per mutation and test case pair.
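If your PIT release supports it, the full matrix can be switched on in the pitest-maven configuration; a sketch is shown below (please verify the fullMutationMatrix option and the XML-output requirement against the documentation of your 1.6.7 version):

<plugin>
  <groupId>org.pitest</groupId>
  <artifactId>pitest-maven</artifactId>
  <version>1.6.7</version>
  <configuration>
    <!-- run every covering test against each mutation instead of stopping at the first kill -->
    <fullMutationMatrix>true</fullMutationMatrix>
    <!-- the per-test results are only written by the XML report -->
    <outputFormats>
      <outputFormat>XML</outputFormat>
    </outputFormats>
  </configuration>
</plugin>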
If I register a callback via cudaStreamAddCallback(), what thread is going to run it?
The CUDA documentation says that cudaStreamAddCallback
adds a callback to be called on the host after all currently enqueued items in the stream have completed. For each cudaStreamAddCallback call, a callback will be executed exactly once. The callback will block later work in the stream until it is finished.
but says nothing about how the callback itself is called.
Just to flesh out comments so that this question has an answer and will fall off the unanswered queue:
The short answer is that this is an internal implementation detail of the CUDA runtime and you don't need to worry about it.
The longer answer is that if you look carefully at the operation of the CUDA runtime, you will notice that context establishment on a device (be it explicit via the driver API, or implicit via the runtime API) spawns a small thread pool. It is these threads which are used to implement features of the runtime such as stream command queues and callback operations. Again, this is an internal implementation detail which the programmer doesn't need to know about.
Summary:
I'm trying to find out whether a single method can be executed twice, overlapping itself, when running on a single thread, or whether two different methods can execute in overlap so that, when they share access to a particular variable, unwanted behaviour can occur.
Example of a single method:
var ball:Date;

function method1():Date {
    ball = new Date();
    // <some code here>
    return ball;
}
Questions:
1) If method1 gets fired every 20 ms by the event system, and the whole method takes more than 20 ms to execute, will the method be executed again, overlapping the first call?
2) Are there any other scenarios in a single-threaded environment where methods can be executed in overlap, or is the AVM2 limited to executing one method at a time?
Studies: I've read through https://www.adobe.com/content/dam/Adobe/en/devnet/actionscript/articles/avm2overview.pdf, which explains that the AVM2 has a stack for running code, and the description of methods makes it seem that, without a second stack, the stack system can only accommodate one method execution at a time. I'd just like to double-check with the Stack Overflow experts to be sure.
I'm dealing with some time-sensitive data and have to make sure one method isn't changing a variable that is being accessed by another method at the same time.
ActionScript is single-threaded, although it can support concurrency through ActionScript workers, which are separate SWF applications that run in parallel.
There are also asynchronous patterns, if you want a nested or anonymous function to execute within the scope chain of a function.
What I think you're referring to is how the AVM2 executes event-driven code; for that, you should research the AVM2 marshalled slice. Player events are executed at the beginning of the slice.
Heavy code execution will slow the frame rate.
Execution is linear and blocks synchronously; a frame does not invoke code in parallel.
The AVM2 executes 20-millisecond marshalled slices which, depending on the frame rate, execute user actions, invalidations, and rendering.
I'm reading about HW/SW interrupts and something isn't clear to me:
When the normal flow is interrupted by an exception ("software interrupt"), the address of the instruction which caused the interrupt is saved, and then the OS gives the exception handler a chance to handle it.
The point I'm not sure about is which instruction is processed after the handler finishes:
If the same "faulty" instruction is run again, it might cause the same exception.
If the next instruction is run, aren't we losing the effect of the previous instruction (which might cause a "normal" exception, such as a page fault)?
The instruction that caused the fault is executed again. The idea is that the handler should make appropriate changes so that the instruction will be able to execute properly.
For instance, if an instruction causes a page fault because it tries to access virtual memory that's paged out, the OS will load the page from backing store, update the page table, and then restart the instruction. This time it will succeed because the page is in RAM.
If the handler doesn't fix things, you'll get another interrupt when it's restarted, and the process will repeat.