Is it possible to design an ARINC 653 scheduler with Azure RTOS (ThreadX)?

I want to design a scheduler that works in the ARINC 653 manner, just for experimental purposes.
Is it possible to manipulate the ThreadX scheduler in this way?
I know there is time-slicing in ThreadX, but all the examples I've encountered use TX_NO_TIME_SLICE (and my attempts with time-slicing did not work either). Besides, I'm not sure whether a time slice makes the thread wait until its deadline is met, or puts it to sleep so that other threads get to run.
In short: an ARINC 653 scheduler defines a constant major frame in which each 'thread' has a fixed amount of running time, and the major frame repeats endlessly. If a thread is assigned, e.g., 3 ms within a major frame and finishes its job in 1 ms, the kernel still waits 2 ms before switching to the next 'thread'.

You can use time slicing to limit the amount of time each thread runs: https://learn.microsoft.com/en-us/azure/rtos/threadx/chapter4#tx_thread_create

I understand that the characteristic of the ARINC 653 scheduler that you want to emulate is time partitioning. The ThreadX scheduling policy is based on priority, preemption threshold and time-slicing.
You can emulate time partitioning with ThreadX. To achieve that you can use a timer, in whose expiration function you suspend/resume the threads of each frame. Timers execute in a different context than threads; they are lightweight and not affected by thread priorities. By default ThreadX uses a timer thread, set to the highest priority, to execute timer expirations, but for better performance you can compile ThreadX to process the timers inside an interrupt handler (define the option TX_TIMER_PROCESS_IN_ISR).
An example:
Threads thd1,thd2,thd3 belong to frame A
Threads thd4,thd5,thd6 belong to frame B
Timer tm1 is triggered once every frame change
Pseudo code for tm1:
VOID tm1(ULONG id)
{
    /* Toggle between frame A and frame B on every timer expiration. */
    static int frame_b = 0;
    frame_b = !frame_b;
    if (frame_b)
    {
        /* Switch to frame B: stop the frame-A threads, start the frame-B threads. */
        tx_thread_suspend(&thd1);
        tx_thread_suspend(&thd2);
        tx_thread_suspend(&thd3);
        tx_thread_resume(&thd4);
        tx_thread_resume(&thd5);
        tx_thread_resume(&thd6);
    }
    else
    {
        /* Switch back to frame A: stop the frame-B threads, start the frame-A threads. */
        tx_thread_suspend(&thd4);
        tx_thread_suspend(&thd5);
        tx_thread_suspend(&thd6);
        tx_thread_resume(&thd1);
        tx_thread_resume(&thd2);
        tx_thread_resume(&thd3);
    }
}
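A minimal setup sketch for the above, assuming thd1..thd6 are TX_THREAD objects that already exist and a hypothetical frame length of FRAME_TICKS timer ticks (the names frame_scheduler_init, frame_timer and FRAME_TICKS are illustrative, not part of the original answer):
#include "tx_api.h"

#define FRAME_TICKS 30                    /* assumed frame length in timer ticks */

extern TX_THREAD thd1, thd2, thd3, thd4, thd5, thd6;
VOID tm1(ULONG id);                       /* the frame-switch routine shown above */

static TX_TIMER frame_timer;              /* drives tm1 at every frame boundary */

void frame_scheduler_init(void)
{
    /* Frame A runs first, so the frame-B threads start suspended. */
    tx_thread_suspend(&thd4);
    tx_thread_suspend(&thd5);
    tx_thread_suspend(&thd6);

    /* Periodic timer: first expiration after FRAME_TICKS, then every FRAME_TICKS,
       activated immediately. */
    tx_timer_create(&frame_timer, "frame timer", tm1, 0,
                    FRAME_TICKS, FRAME_TICKS, TX_AUTO_ACTIVATE);
}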

Related

Simulation loop GPU utilization

I am struggling with GPU utilization in a simulation loop.
Three kernels are launched in every cycle.
The next time step size is computed by the second kernel.
while (time < end)
{
    kernel_Flux<<<>>>(...);
    kernel_Timestep<<<>>>(d_timestep);
    cudaMemcpy(&h_timestep, d_timestep, sizeof(float), ...);
    kernel_Integrate<<<>>>(d_timestep);
    time += h_timestep;
}
I only need to copy back a single float. What would be the most efficient way to avoid unnecessary synchronizations?
Thank you in advance. :-)
In CUDA, all operations issued to the default stream are serialized, so in the code you've posted the kernels will run one after another. From what I can see, kernel_Integrate() depends on the result of kernel_Timestep(), so there is no way to avoid that synchronization. However, if kernel_Flux() and kernel_Timestep() work on independent data, you can try to execute them in parallel in two different streams.
If you care a lot about the iteration time, you can set up a new stream dedicated to copying h_timestep out (you need to use cudaMemcpyAsync in this case). Then use something like speculative execution, where the loop proceeds before you know the new time. To do so you will have to set up GPU memory buffers for the next several iterations, for example as a circular buffer. You also need cudaEventRecord and cudaStreamWaitEvent to synchronize the different streams, so that a later iteration is allowed to proceed only once the time corresponding to the buffer it is about to overwrite has been computed (i.e. the copy stream has done its job); otherwise you would lose the state at that iteration.
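A minimal sketch of the cross-stream ordering this relies on (a fragment, assuming the question's kernels, launch configuration gs/bs, the device pointer d_timestep and a page-locked h_timestep; the stream and event names are illustrative):
cudaStream_t compute, copy;
cudaEvent_t  tsReady;
cudaStreamCreate(&compute);
cudaStreamCreate(&copy);
cudaEventCreateWithFlags(&tsReady, cudaEventDisableTiming);

/* ... inside the loop ... */
kernel_Timestep<<<gs, bs, 0, compute>>>(d_timestep);
cudaEventRecord(tsReady, compute);             /* mark the point where d_timestep is ready  */
cudaStreamWaitEvent(copy, tsReady, 0);         /* the copy stream waits for that point only */
cudaMemcpyAsync(&h_timestep, d_timestep, sizeof(float),
                cudaMemcpyDeviceToHost, copy); /* overlaps with later work on 'compute'     */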
Another potential solution, which I haven't tried but suspect would work, is to make use of dynamic parallelism. If your card supports it, you can probably put the whole loop on the GPU.
EDIT:
Sorry, I just realized that you have a third kernel. Your delay due to synchronization may be because you are not using cudaMemcpyAsync. It's very likely that the third kernel will run longer than the memcpy, so you should be able to proceed without any delay; the only synchronization needed is at the end of each iteration.
The ideal solution would be to move everything to the GPU. However, I cannot do so, because I need to launch CUDPP compact after every few iterations, and it supports neither CUDA streams nor dynamic parallelism. I know that the Thrust 1.8 library has a copy_if method, which does the same thing and works with dynamic parallelism, but it does not compile with separate compilation on.
To sum up, now I use the following code:
while (time < end)
{
    kernel_Flux<<<gs, bs, 0, stream1>>>();
    kernel_Timestep<<<gs, bs, 0, stream1>>>(d_timestep);
    cudaEventRecord(event, stream1);
    cudaStreamWaitEvent(stream2, event, 0);
    cudaMemcpyAsync(&h_timestep, d_timestep, sizeof(float),
                    cudaMemcpyDeviceToHost, stream2);
    kernel_Integrate<<<gs, bs, 0, stream1>>>(d_timestep);
    cudaStreamSynchronize(stream2);
    time += h_timestep;
}

Why is the first cudaMalloc the only bottleneck?

I defined this function:
void cuda_entering_function(...)
{
    StructA *host_input, *dev_input;
    StructB *host_output, *dev_output;

    host_input = (StructA*)malloc(sizeof(StructA));
    host_output = (StructB*)malloc(sizeof(StructB));
    cudaMalloc(&dev_input, sizeof(StructA));
    cudaMalloc(&dev_output, sizeof(StructB));

    ... some more other cudaMalloc()s and cudaMemcpy()s ...

    cudaKernel<<< ... >>>(dev_input, dev_output);
    ...
}
This function is called several times (about 5~15 times) throughout my program, and I measured the program's performance using gettimeofday().
I found that the bottleneck of cuda_entering_function() is the first cudaMalloc() - the very first cudaMalloc() in my whole program. Over 95% of the total execution time of cuda_entering_function() was consumed by that first cudaMalloc(), and this also happens when I change the size of the first cudaMalloc()'s allocation or the order in which the cudaMalloc()s are executed.
What is the reason, and is there any way to reduce the time of the first CUDA allocation?
The first cudaMalloc is also responsible for initializing the device, because it is the first call to any function involving the device. This is why you take such a hit: it is the one-time overhead of setting up the CUDA context on your GPU. You should make sure that your application can gain a sufficient speedup to compensate for this overhead.
In general, people use a call to an initialization function in order to setup their device. In this answer, you can see that apparently a call to cudaFree(0) is the canonical way to do so. This sample shows the use of cudaSetDevice, which could be a good habit if you ever work on machines with several CUDA-ready devices.
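A minimal sketch of that idea (the function name cuda_warmup is illustrative): pay the context-creation cost once at startup, so the later cudaMalloc()s only pay the allocation cost.
#include <cuda_runtime.h>

/* Call once at program start, before any timed CUDA work. */
void cuda_warmup(int device)
{
    cudaSetDevice(device);   /* select the device (useful on multi-GPU machines)        */
    cudaFree(0);             /* harmless call that forces creation of the CUDA context  */
}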

Is there a performance penalty for CUDA method not running in sync?

If I have a kernel which looks back over the last X minutes and calculates the average of all the values in a float[], would I experience a performance drop if all the threads are not executing the same line of code at the same time?
e.g.:
Say at x = 1500 there are 500 data points spanning the last 2-hour period.
At x = 1510 there are 300 data points spanning the last 2-hour period.
The thread at x = 1500 has to look back 500 places, yet the thread at x = 1510 only looks back 300, so the latter thread moves on to the next position before the first one is finished.
Is this typically an issue?
EDIT: Example code. Sorry, but it's in C# as I was planning to use CUDAfy.net. Hopefully it gives a rough idea of the type of programming structures I need to run (the actual code is more complicated, but has a similar structure). Any comments on whether this is suitable for a GPU/coprocessor or just a CPU would be appreciated.
public void PopulateMeanArray(float[] data)
{
    float lookFwdDistance = 108000000000f;
    float lookBkDistance = 12000000000f;
    int counter = thread.blockIdx.x * 1000; // Ensures a unique position in data is written to (assuming I have fewer than 1000 entries).
    float numberOfTicksInLookBack = 0;
    float sum = 0; // Stores the sum of differences between two time ticks during the x-min look back.

    // Note: the time difference between ticks is not constant, therefore numberOfTicksInLookBack differs at each position.
    // Thread 1 could be working here.
    for (float tickPosition = SDS.tick[thread.blockIdx.x]; SDS.tick[tickPosition] < SDS.tick[(tickPosition + lookFwdDistance)]; tickPosition++)
    {
        sum = 0;
        numberOfTicksInLookBack = 0;

        // Thread 2 could be working here. Is this warp divergence?
        for (float pastPosition = tickPosition - 1; SDS.tick[pastPosition] > (SDS.tick[tickPosition - lookBkDistance]); pastPosition--)
        {
            sum += SDS.tick[pastPosition] - SDS.tick[pastPosition + 1];
            numberOfTicksInLookBack++;
        }

        data[counter] = sum / numberOfTicksInLookBack;
        counter++;
    }
}
CUDA runs threads in groups called warps. On all CUDA architectures that have been implemented so far (up to compute capability 3.5), the size of a warp is 32 threads. Only threads in different warps can truly be at different locations in the code. Within a warp, threads are always in the same location. Any threads that should not be executing the code in a given location are disabled as that code is executed. The disabled threads are then just taking up room in the warp and cause their corresponding processing cycles to be lost.
In your algorithm, you get warp divergence because the exit condition in the inner loop is not satisfied at the same time for all the threads in the warp. The GPU must keep executing the inner loop until the exit condition is satisfied for ALL the threads in the warp. As more threads in a warp reach their exit condition, they are disabled by the machine and represent lost processing cycles.
In some situations, the lost processing cycles may not impact performance, because disabled threads do not issue memory requests. This is the case if your algorithm is memory bound and the memory that would have been required by the disabled thread was not included in the read done by one of the other threads in the warp. In your case, though, the data is arranged in such a way that accesses are coalesced (which is a good thing), so you do end up losing performance in the disabled threads.
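As a minimal illustration of this (not the question's code): in the sketch below each thread runs a data-dependent number of inner-loop iterations, so the whole warp keeps issuing the loop until its slowest thread finishes, while the already-finished threads sit disabled.
__global__ void divergentLoop(const int *lookback, const float *data, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Each thread sums a different number of previous elements. Threads with a
    // short look-back sit disabled while warp mates with a long look-back keep iterating.
    float sum = 0.0f;
    for (int k = 1; k <= lookback[i] && i - k >= 0; ++k)
        sum += data[i - k];

    out[i] = sum / max(lookback[i], 1);
}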
Your algorithm is quite simple and, as it stands, it does not fit that well on the GPU. However, I think the same calculation can be dramatically sped up on both the CPU and the GPU with a different algorithm that uses an approach more like that used in parallel reductions. I have not considered how that might be done in a concrete way, though.
A simple thing to try, for a potentially dramatic increase in speed on the CPU, would be to alter your algorithm so that the inner loop iterates forwards instead of backwards. This is because CPUs do cache prefetching, which only works when you iterate forwards through your data, as in the sketch below.
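A generic illustration of that change (plain C, not the question's CUDAfy code; tick and w stand in for SDS.tick and the time-based look-back window, and i >= w is assumed):
/* Accumulate the consecutive time deltas in the w positions before index i,
   iterating forwards so the CPU cache prefetcher can stream through the data. */
float lookback_sum_forward(const float *tick, int i, int w)
{
    float sum = 0.0f;
    for (int k = i - w; k < i; ++k)      /* forwards instead of backwards */
        sum += tick[k + 1] - tick[k];
    return sum;
}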

Synchronization in CUDA

I read the CUDA reference manual about synchronization in CUDA, but I don't understand it clearly. For example, why do we use cudaDeviceSynchronize() or __syncthreads()? If we don't use them, what happens? Can the program still work correctly? What is the difference between cudaMemcpy and cudaMemcpyAsync in practice? Can you show an example that demonstrates this difference?
cudaDeviceSynchronize() is used in host code (i.e. running on the CPU) when it is desired that CPU activity wait on the completion of any pending GPU activity. In many cases it's not necessary to do this explicitly, as GPU operations issued to a single stream are automatically serialized, and certain other operations like cudaMemcpy() have an inherent blocking device synchronization built into them. But for some other purposes, such as debugging code, it may be convenient to force the device to finish any outstanding activity.
__syncthreads() is used in device code (i.e. running on the GPU) and may not be necessary at all in code that has independent parallel operations (such as adding two vectors together, element-by-element). However, one example where it is commonly used is in algorithms that operate out of shared memory. In these cases it's frequently necessary to load values from global memory into shared memory, and we want each thread in the threadblock to have an opportunity to load its appropriate shared memory location(s) before any actual processing occurs. In this case we want to use __syncthreads() before the processing occurs, to ensure that shared memory is fully populated. This is just one example. __syncthreads() might be used any time synchronization within a block of threads is desired. It does not allow for synchronization between blocks.
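A minimal sketch of that shared-memory pattern (an illustrative kernel, not from the answer): each thread loads one element into shared memory, the block synchronizes, and only then does each thread read its neighbours' values.
__global__ void blur3(const float *in, float *out, int n)
{
    __shared__ float tile[256];             // one slot per thread; blockDim.x == 256 assumed
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (i < n)
        tile[threadIdx.x] = in[i];          // each thread fills its own slot
    __syncthreads();                        // wait until the whole tile is populated

    if (i < n && threadIdx.x > 0 && threadIdx.x < blockDim.x - 1)
        out[i] = (tile[threadIdx.x - 1] + tile[threadIdx.x] + tile[threadIdx.x + 1]) / 3.0f;
}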
The difference between cudaMemcpy and cudaMemcpyAsync is that the non-async version of the call can only be issued to stream 0 and will block the calling CPU thread until the copy is complete. The async version can optionally take a stream parameter, and returns control to the calling thread immediately, before the copy is complete. The async version typically finds usage in situations where we want to have asynchronous concurrent execution.
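Since the question asks for an example, here is a minimal hedged sketch of that difference (names and sizes are illustrative; note that cudaMemcpyAsync only truly overlaps when the host buffer is page-locked, e.g. allocated with cudaMallocHost):
const int N = 1 << 20;
float *h_buf, *d_buf;
cudaMallocHost(&h_buf, N * sizeof(float));   // pinned host memory
cudaMalloc(&d_buf, N * sizeof(float));

// Blocking: the CPU thread stops here until the copy has finished.
cudaMemcpy(d_buf, h_buf, N * sizeof(float), cudaMemcpyHostToDevice);

// Non-blocking: returns immediately; the copy proceeds on 'stream' in the background.
cudaStream_t stream;
cudaStreamCreate(&stream);
cudaMemcpyAsync(d_buf, h_buf, N * sizeof(float), cudaMemcpyHostToDevice, stream);
// ... the CPU can do other work here while the copy is in flight ...
cudaStreamSynchronize(stream);               // wait only when the data is actually needed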
If you have basic questions about CUDA programming, it's recommended that you take some of the webinars available.
Moreover, __syncthreads() becomes really necessary when you have conditional paths in your code and then want to run an operation that depends on several array elements.
Consider the following example:
int n = threadIdx.x;
if (myarray[n] > 0)
{
    myarray[n] = -myarray[n];
}
double y = myarray[n] + myarray[n+1]; // Not all threads reach here at the same time
In the above example, not all threads follow the same execution path: some threads take longer, depending on the if condition. When evaluating the last line of the example, you need to make sure that all the threads have finished the if block and updated myarray correctly. If that is not the case, y may mix updated and non-updated values.
In this case, you must add __syncthreads() before evaluating y to overcome this problem:
if( myarray[n] > 0 )
{
    myarray[n] = - myarray[n];
}
__syncthreads(); // All threads will wait till they come to this point
// We are now quite confident that all array values are updated.
double y = myarray[n] + myarray[n+1];

cuda, dummy/implicit block synchronization

I am aware that block synchronization is not possible; the only way is to launch a new kernel.
BUT, let's suppose that I launch X blocks, where X corresponds to the number of SMs on my GPU. I should expect that the scheduler will assign one block to each SM, right? And if the GPU is being used as a secondary graphics card (completely dedicated to CUDA), this means that, theoretically, no other process uses it, right?
My idea is the following: implicit synchronization.
Let's suppose that sometimes I need only one block, and sometimes I need all X blocks. Well, in those cases where I need just one block, I can configure my code so that the first block (or the first SM) works on the "real" data while the other X-1 blocks (or SMs) work on some "dummy" data, executing exactly the same instructions, just with some other offset.
That way all of them will continue to be synchronized, until I need all of them again.
Is the scheduler reliable under these conditions, or can you never be sure?
You've got several questions in one, so I'll try to address them separately.
One block per SM
I asked this a while back on nVidia's own forums, as I was getting results that indicated that this is not what happens. Apparently, the block scheduler will not assign a block per SM if the number of blocks is equal to the number of SMs.
Implicit synchronization
No. First of all, you cannot guarantee that each block will have its own SM (see above). Secondly, all blocks cannot access the global store at the same time. If they run synchronously at all, they will lose this synchronicity as of the first memory read/write.
Block synchronization
Now for the good news: Yes, you can. The atomic instructions described in Section B.11 of the CUDA C Programming Guide can be used to create a barrier. Assume that you have N blocks executing concurrently on your GPU.
__device__ int barrier = N;

__global__ void mykernel ( ) {

    /* Do whatever it is that this block does. */
    ...

    /* Make sure all threads in this block are actually here. */
    __syncthreads();

    /* Once we're done, decrease the value of the barrier. */
    if ( threadIdx.x == 0 )
        atomicSub( &barrier , 1 );

    /* Now wait for the barrier to be zero. */
    if ( threadIdx.x == 0 )
        while ( atomicCAS( &barrier , 0 , 0 ) != 0 );

    /* Make sure everybody has waited for the barrier. */
    __syncthreads();

    /* Carry on with whatever else you wanted to do. */
    ...
}
The instruction atomicSub(p,i) computes *p -= i atomically and is only called by the zeroth thread in the block, i.e. we only want to decrement barrier once. The instruction atomicCAS(p,c,v) sets *p = v iff *p == c and returns the old value of *p. This part just loops until barrier reaches 0, i.e. until all blocks have crossed it.
Note that you have to wrap this part in calls to __syncthreads(), as the threads in a block do not execute in strict lock-step and you have to force them all to wait for the zeroth thread.
Just remember that if you call your kernel more than once, you should set barrier back to N.
Update
In reply to jHackTheRipper's answer and Cicada's comment, I should have pointed out that you should not try to start more blocks than can be concurrently scheduled on the GPU! This is limited by a number of factors, and you should use the CUDA Occupancy Calculator to find the maximum number of blocks for your kernel and device.
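As a hedged sketch, on newer CUDA toolkits you can also query this limit programmatically with the occupancy API instead of the spreadsheet (the 256-thread block size is illustrative; mykernel is the kernel above):
int device, numSMs, blocksPerSM;
cudaGetDevice(&device);
cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, device);

// Maximum resident blocks of mykernel per SM for 256-thread blocks and no dynamic shared memory.
cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, mykernel, 256, 0);

int maxConcurrentBlocks = blocksPerSM * numSMs;   // upper bound on N in the barrier above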
Judging by the original question, though, only as many blocks as there are SMs are being started, so this point is moot.
@Pedro is definitely wrong!
Achieving global synchronization has been the subject of several recent research works, at least for non-Kepler architectures (I don't have one yet). The conclusion is always the same (or should be): it is not possible to achieve such global synchronization across the whole GPU.
The reason is simple: CUDA blocks cannot be preempted, so given that you fully occupy the GPU, the threads waiting at the barrier rendezvous will never allow their block to terminate. Thus it will not be removed from its SM, and it will prevent the remaining blocks from running.
As a consequence, you will just freeze the GPU, which will never be able to escape from this deadlock state.
-- edit to answer Pedro's remarks --
Such shortcomings have been noticed by other authors such as:
http://www.openclblog.com/2011/04/eureka.html
by the author of OpenCL in Action.
-- edit to answer Pedro's second remarks --
The same conclusion is reached by @Jared Hoberock in this SO post:
Inter-block barrier on CUDA