How do sleep(), wait() and pause() work? - language-agnostic

How do the sleep(), wait(), and pause() functions work?

We can look at the sleeping operation from a more abstract point of view: it is an operation that lets you wait for an event.
The event in question is triggered when the time elapsed since the sleep invocation exceeds the sleep parameter.
When a process is active (i.e., it owns a CPU) it can wait for an event in an active or in a passive way:
An active wait is when a process actively/explicitly waits for the event:
sleep( t ):
while not [event: elapsedTime > t ]:
NOP // no operation - do nothing
This is a trivial algorithm and can be implemented anywhere in a portable way, but it has the issue that while your process is actively waiting it still owns the CPU, wasting it (your process doesn't really need the CPU, while other tasks could use it).
Usually this should be done only by processes that cannot passively wait (see the point below).
A passive wait, instead, is done by asking something else to wake you up when the event happens, and suspending yourself (i.e., releasing the CPU):
sleep( t ):
system.wakeMeUpWhen( [event: elapsedTime > t ] )
release CPU
In order to implement a passive wait you need some external support: you must be able to release your CPU and to ask somebody else to wake you up when the event happens.
This may not be possible on single-task devices (like many embedded devices) unless the hardware provides a wakeMeUpWhen operation, since there is nobody to release the CPU to or to ask for a wake-up.
x86 processors (and most others) offer an HLT instruction that lets the CPU sleep until an external interrupt is triggered. This way operating system kernels can also sleep in order to keep the CPU cool.
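A minimal C sketch of the two approaches above, assuming a POSIX environment (the function names are illustrative, not a standard API):

#include <time.h>

/* Active wait: keeps polling the clock, burning the CPU until t seconds pass. */
void sleep_active(double t)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);   /* do nothing useful, just re-check */
    } while ((now.tv_sec - start.tv_sec) +
             (now.tv_nsec - start.tv_nsec) / 1e9 < t);
}

/* Passive wait: asks the OS to wake the thread later and releases the CPU. */
void sleep_passive(double t)
{
    struct timespec req = { (time_t)t, (long)((t - (long)t) * 1e9) };
    nanosleep(&req, NULL);   /* thread leaves the ready queue until the timeout */
}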

Modern operating systems are multitasking, which means they appear to run multiple programs simultaneously. In fact, your computer (traditionally, at least) has only one CPU, so it can only execute one instruction from one program at a time.
The way the OS makes it appear that multiple things (browsing the web, listening to music, and downloading files) are happening at once is by executing each task for a very short time (say, 10 ms). This fast switching makes it appear that things are happening simultaneously when everything is in fact happening sequentially (with obvious differences for multi-core systems).
As for the answer to the question: with sleep, wait, or synchronous I/O, the program is basically telling the OS to execute other tasks and not to run it again until X ms have elapsed, the event has been signaled, or the data is ready.

sleep() causes the calling thread to be removed from the operating system's ready queue and inserted into another queue where the OS periodically checks whether the sleep() has timed out, after which the thread is readied again. While the thread is out of the ready queue, the operating system will schedule other readied threads during the sleep period, including the 'idle' thread, which is always in the ready queue.

These are system calls. Look up the implementation in open-source code such as Linux or OpenBSD.

Related

Cache Implementation in Pipelined Processor

I have recently started coding in Verilog. I have completed my first project, prototyping a MIPS 32 processor with 5-stage pipelining. My next task is to implement a single-level cache hierarchy for the instruction memory.
I have successfully implemented a 2-way set-associative cache.
Previously I had declared the instruction memory as an array of registers, so whenever I needed the next instruction in the IF stage, the data (instruction) was available instantly for further decoding (since a blocking/non-blocking assignment reads any memory location immediately).
But now that I have a single-level cache added on top of it, the cache FSM needs a few more cycles to do its work (data lookup, and replacement policies in case of a cache miss). The maximum delay is about 5 cycles when there is a cache miss.
Since my pipeline moves to the next stage every single cycle, whenever there is a cache miss the cache fails to deliver the instruction before the pipeline stage moves on, so the output is always wrong.
To counteract this, I have increased the cache clock to 5 times the processor's pipeline clock. This does do the job: since the cache clock is much faster, it no longer has to worry about the processor clock.
But is this workaround legit?? I mean, I haven't heard of multiple clocks in a processor system. How do real-world processors overcome this issue?
Yes, of course, there is another way: using stall cycles in the pipeline until the data is made available by the cache (a hit). But I'm just wondering whether making the memory system faster by increasing its clock is justified.
P.S. I am a newbie to computer architecture and Verilog. I don't know much about VLSI. This is my first question ever, because whatever questions strike me I usually find readily answered on the web, but I can't find much about this problem, so I am here.
I also asked my professor; she told me to research the topic further, because none of my colleagues/seniors have worked much on pipelined processors.
But is this workaround legit??
No, it isn't :P You're not only increasing the cache clock, but also apparently the memory clock. And if you can run your cache 5x faster and still make the timing constraints, that means you should clock your whole CPU 5x faster if you're aiming for max performance.
A classic 5-stage RISC pipeline assumes and is designed around single-cycle latency for cache hits (and simultaneous data and instruction cache access), but stalls on cache misses. (Data load/store address calculation happens in EX, and cache access in MEM, which is why that stage exists)
A stall is logically equivalent to inserting a NOP, so you can do that on cache miss. The program counter needs to not increment, but otherwise it should be a pretty local change.
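To make the stall concrete, here is a rough C model (not Verilog) of the fetch-stage decision; the names are invented and only the hold-the-PC / insert-a-bubble logic matters:

#define NOP 0x00000000u   /* MIPS sll $0,$0,0 encodes as all zeros */

struct if_id { unsigned int instr; unsigned int pc; };

/* One fetch cycle: on an I-cache hit, pass the instruction on and advance;
   on a miss, emit a bubble and keep the PC so the fetch is retried. */
void fetch_stage(unsigned int *pc, int icache_hit,
                 unsigned int fetched, struct if_id *out)
{
    if (icache_hit) {
        out->instr = fetched;
        out->pc    = *pc;
        *pc += 4;            /* advance to the next instruction */
    } else {
        out->instr = NOP;    /* bubble for the rest of the pipeline */
        out->pc    = *pc;    /* PC does not advance */
    }
}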
If you had hardware performance counters, you'd maybe want to distinguish between real instructions vs. fake stall NOPs so you could count real instructions executed.
You'll need to implement pipeline interlocks for other stages that stall to wait for their inputs to be ready, e.g. a cache-miss load followed by an add that uses the result.
MIPS I had load-delay slots (you can't use the result of a load in the following instruction, because the MEM stage is after EX). So that ISA rule hides the 1 cycle latency of a cache hit without requiring the HW to detect the dependency and stall for it.
But a cache miss still had to be detected. Probably it stalled the whole pipeline whether there was a dependency or not. (Again, like inserting a NOP for the rest of the pipeline while holding on to the incoming instruction. Except this isn't the first stage, so it has to signal to the previous stage that it's stalling.)
Later versions of MIPS removed the load delay slot to avoid bloating code with NOPs when compilers couldn't fill the slot. Simple HW then had to detect the dependency and stall if needed, but smarter hardware probably tracked loads anyway so they could do hit under miss and so on. Not stalling the pipeline until an instruction actually tried to read a load result that wasn't ready.
MIPS = "Microprocessor without Interlocked Pipeline Stages" (i.e. no data-hazard detection). But it still had to stall for cache misses.
An alternate expansion for the acronym (which still fits MIPS II, where the load delay slot was removed, requiring HW interlocks to detect that data hazard) would be "Minimally Interlocked Pipeline Stages", but apparently I made that up in my head; thanks @PaulClayton for catching that.

cudaElapsedTime with non-default streams

My question is about the use of the function cudaEventElapsedTime to measure the execution time in a multi-stream application.
According to CUDA documentation
If either event was last recorded in a non-NULL stream, the resulting time may be greater than expected (even if both used the same stream handle). This happens because the cudaEventRecord() operation takes place asynchronously and there is no guarantee that the measured latency is actually just between the two events. Any number of other different stream operations could execute in between the two measured events, thus altering the timing in a significant way.
I am genuinely struggling to understand the sentences in bold above. It seems it is more accurate to measure the time using the default stream, but I want to understand why. If I want to measure the execution time in a stream, I find it more logical to attach the start/stop events to that stream instead of the default stream. Any clarification, please? Thank you.
First of all let's remember basic CUDA stream semantics:
CUDA activity issued into the same stream will always execute in issue order.
There is no defined relationship between the order of execution of CUDA activities issued into separate streams.
The CUDA default stream (assuming we have not overridden the default legacy behavior) has an additional characteristic of implicit synchronization, which roughly means that a CUDA operation issued into the default stream will not begin executing until all prior issued CUDA activity to that device has completed.
Therefore, if we issue 2 CUDA events (say, start and stop) into the legacy default stream, we can be confident that any and all CUDA activity issued between those two issue points will be timed (regardless of which stream they were issued into, or which host thread they were issued from). I would suggest for casual usage this is intuitive, and less likely to be misinterpreted. Furthermore, it should yield consistent timing behavior, run-to-run (assuming host thread behavior is the same, i.e. somehow synchronized).
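As a concrete illustration of that point, here is a minimal host-code sketch in C against the CUDA runtime API; the cudaMemsetAsync issued into a non-default stream stands in for whatever kernels are being timed, and both events go into the legacy default stream:

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    cudaStream_t s1;
    cudaEvent_t start, stop;
    float ms = 0.0f;
    void *buf;

    cudaStreamCreate(&s1);
    cudaMalloc(&buf, 1 << 20);
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);              /* 0 = legacy default stream        */
    cudaMemsetAsync(buf, 0, 1 << 20, s1);   /* work issued into another stream  */
    cudaEventRecord(stop, 0);               /* implicit sync brackets the work  */

    cudaEventSynchronize(stop);             /* wait on the host for the stop event */
    cudaEventElapsedTime(&ms, start, stop);
    printf("elapsed: %.3f ms\n", ms);

    cudaFree(buf);
    cudaStreamDestroy(s1);
    return 0;
}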
OTOH, let's say we have a multi-streamed application. Let's assume that we are issuing kernels into 2 or more non-default streams:
Stream1: cudaEventRecord(start)|Kernel1|Kernel2|cudaEventRecord(stop)
Stream2: |Kernel3|
It does not really matter too much whether these were issued from the same host thread or from separate host threads. For example, let's say our single host thread activity looked like this (condensed):
cudaEventRecord(start, Stream1);
Kernel1<<<..., Stream1>>>(...);
Kernel2<<<..., Stream1>>>(...);
Kernel3<<<..., Stream2>>>(...);
cudaEventRecord(stop, Stream1);
What timing should we expect? Will Kernel3 be included in the elapsed time between start and stop?
In fact the answer is unknown, and could vary from run-to-run, and probably would depend on what else is happening on the device before and during the above activity.
For the above issue order, and assuming we have no other activity on the device, we can assume that immediately after the cudaEventRecord(start) operation, that the Kernel1 will launch and begin executing. Let's suppose it "fills the device" so that no other kernels can execute concurrently. Let's also assume that the duration of Kernel1 is much longer than the launch latency of Kernel2 and Kernel3. Therefore, while Kernel1 is executing, both Kernel2 and Kernel3 are queued for execution. At the completion of Kernel1, the device scheduler has the option of beginning either Kernel2 or Kernel3. If it chooses Kernel2 then at the completion of Kernel2 it can mark the stop event as completed, which will establish the time duration between start and stop as the duration of Kernel1 and Kernel2, approximately.
Device Execution: event(start)|Kernel1|Kernel2|event(stop)|Kernel3|
| Duration |
However, if the scheduler chooses to begin Kernel3 before Kernel2 (an entirely legal and valid choice based on the stream semantics) then the stop event cannot be marked as complete until Kernel2 finishes, which means the measured duration will now include the duration of Kernel1 plus Kernel2 plus Kernel3. There is nothing in the CUDA programming model to sort this out, which means the measured timing could alternate even run-to-run:
Device Execution: event(start)|Kernel1|Kernel3|Kernel2|event(stop)|
| Duration |
Furthermore, we could considerably alter the actual issue order, placing the issue/launch of Kernel3 before the first cudaEventRecord or after the last cudaEventRecord, and the above argument/variability still holds. This is where the meaning of the asynchronous nature of the cudaEventRecord call comes in. It does not block the CPU thread, but like a kernel launch it is asynchronous. Therefore all of the above activity can issue before any of it actually begins to execute on the device. Even if Kernel3 begins executing before the first cudaEventRecord, it will occupy the device for some time, delaying the beginning of execution of Kernel1, and therefore increasing the measured duration by some amount.
And if Kernel3 is issued even after the last cudaEventRecord, because all these issue operations are asynchronous, Kernel3 may still be queued up and ready to go when Kernel1 is complete, meaning the device scheduler can still make a choice about which to launch, making for possibly variable timing.
There are certainly other similar hazards that can be mapped out. This sort of possibility for variation in a multi-streamed scenario is what gives rise to the conservative advice to avoid trying to do cudaEvent based timing using events issued into the non-legacy-default stream.
Of course, if you for example use the visual profiler then there should be relatively little ambiguity about what was measured between two events (although it may still vary run-to-run). However, if you're going to use the visual profiler, you can read the duration directly off the timeline view, without needing an event elapsed time call.
Note that if you override the default stream legacy behavior, the default stream roughly becomes equivalent to an "ordinary" stream (especially for a single-threaded host application). In this case, we can't rely on the default stream semantics to sort this out. One possible option might be to precede any cudaEventRecord() call with a cudaDeviceSynchronize() call. I'm not suggesting this sorts out every possible scenario, but for single-device single host-thread applications, it should be equivalent to cudaEvent timing issued into default legacy stream.
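A short sketch of that option, assuming a single device and a single host thread; the helper name is made up, and the cudaMemsetAsync again stands in for the real work:

#include <cuda_runtime.h>

/* Time work issued into stream s, draining the device before each record,
   as suggested above; returns the elapsed time in milliseconds. */
static float timed_section(cudaStream_t s, void *buf, size_t n)
{
    cudaEvent_t start, stop;
    float ms = 0.0f;

    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaDeviceSynchronize();            /* drain all prior device work          */
    cudaEventRecord(start, s);

    cudaMemsetAsync(buf, 0, n, s);      /* stand-in for the kernels to be timed */

    cudaDeviceSynchronize();            /* drain again before recording stop    */
    cudaEventRecord(stop, s);

    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}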
Complex scenario timing might be best done using a profiler. Many folks also dispense entirely with cudaEvent based timing and revert to high-resolution host timing methodologies. In any event, the timing of a complex concurrent asynchronous system is non-trivial. The conservative advice intends to avoid some of these issues for casual use.

Understanding hardware interrupts and exceptions at processor and hardware level

After a lot of reading about interrupt handling et cetera, I still can't figure out the full process of interrupt handling from the very beginning.
For example:
A division by zero.
The CPU fetches the instruction to divide a number by zero and sends it to the ALU.
Assume the ALU has started the division, or runs some checks before starting it.
How is the exception signaled to the CPU?
How does the CPU know which exception has occurred from only a one-bit signal? Is there a register that it reads after it gets interrupted to find this out?
2. How does my application catch the exception?
Do I need to write some function to catch a specific SIGNAL, or something else? And when I write an exception-handling routine like
Try {}
Catch {}
and an exception occurs, how can I know which exception was thrown and handle it properly?
The most important part that bugs me is, for example, when an interrupt is signaled from the keyboard to the PIC, and the PIC in its turn signals to the CPU that an interrupt occurred by asserting the INT wire.
But how does the CPU know which device needs to be served?
What is the process the CPU goes through when its INTR pin is asserted?
Does it have a routine that checks some register holding the number of the interrupt (set by the PIC when it asserts the INT wire)?
Please don't close the post; it's really important for me to understand this topic. I have read and researched for a couple of weeks but cannot connect the dots in my head.
Thanks.
There are typically several things associated with interrupts other than just a pin. Normally, for more recent microcontrollers, there is an interrupt vector table placed in memory that holds the address of each interrupt handler, and a register that holds the interrupt event/flag.
When an event that is handled by an interrupt occurs, a specific flag is set. Depending on priorities and the current state of the CPU, the context-switch time may vary; for example, a low-priority interrupt flagged during a higher-priority interrupt will have to wait until the high-priority interrupt is finished. If nesting is possible, higher-priority interrupts may interrupt lower-priority ones.
In the particular case of exceptions like dividing by 0, which would indeed be detected by the ALU, the CPU may or may not offer a dedicated interrupt that is raised on such events. For other types of exceptions an interrupt might not be available, and the CPU will simply act accordingly, for example by rebooting.
In conclusion, the interrupt events occur in the following manner (a minimal sketch follows the list):
The interrupt event occurs and the corresponding flag in the register is set.
When the time comes, the CPU switches context to the interrupt handler function.
At the end of the handler the interrupt flag is cleared and the CPU is ready to re-flag the interrupt when the next event comes.
Arbitration between interrupts arriving at the same time, or between interrupts of different priorities, varies with different hardware.
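A minimal sketch in C of the flag-and-clear pattern from the list above, with made-up register names and addresses (how a flag is actually cleared is hardware-specific):

#define TIMER_FLAG (1u << 0)

/* hypothetical memory-mapped interrupt-flag register */
volatile unsigned int * const IRQ_FLAGS = (unsigned int *)0x40000000;

/* Reached through the interrupt vector once the flag is set and the CPU
   switches context to the handler. */
void timer_isr(void)
{
    /* ... service the peripheral: read its data, restart the timer, ... */

    *IRQ_FLAGS &= ~TIMER_FLAG;   /* clear the flag so the next event can re-flag it */
}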
It may be simplest to understand interrupts if one starts with the way they work on the Z80 in its simplest interrupt mode. That processor checks the state of a
pin called /IRQ at a certain point during each instruction; if the pin is asserted and an "interrupt enabled" flag is set, then when it is time to fetch the next instruction the processor won't advance the program counter or read a byte from memory, but instead disable the "interrupt enabled" flag and "pretend" that it read an "RST 38h" instruction. That instruction behaves like a single-byte "CALL 0038h" instruction, pushing the program counter and transferring control to that address.
Code at 0038h can then poll various peripherals if they need any service, use an "ei" instruction to turn the "interrupt enabled" flag back on, and perform a "ret". If no peripheral still has an immediate need for service at that point, code can then resume with whatever it was doing before the interrupt occurred. To prevent problems if the interrupt line is still asserted when the "ret" is executed, some special logic will ensure that the interrupt line will be ignored during that instruction (or any other instruction which immediately follows "ei"). If another peripheral has developed a need for service while the interrupt handler was running, the system will return to the original code, notice the state of /IRQ while it processes the first instruction after returning, and then restart the sequence with the RST 38h.
In the simple Z80 approach, there is only one kind of interrupt; any peripheral can assert /IRQ, and if any peripheral does so the Z80 will need to ask every peripheral if it wants attention. In more advanced systems, it's possible to have many different interrupts, so that when a peripheral needs service control can be dispatched to a routine which is designed to handle just that peripheral. The same general principles still apply, however: an interrupt effectively inserts a "call" instruction into whatever the processor was doing, does something to ensure that the processor will be able to service whatever needed attention without continuously interrupting that process [on the Z80, it simply disables interrupts, but systems with multiple interrupt sources can leave higher-priority sources enabled while servicing lower ones], and then returns to whatever the processor had been doing while re-enabling interrupts.

How can I make an SQL query thread start, then do other work before getting results?

I have a program that does a limited form of multithreading. It is written in Delphi, and uses libmysql.dll (the C API) to access a MySQL server. The program must process a long list of records, taking ~0.1s per record. Think of it as one big loop. All database access is done by worker threads which either prefetch the next records or write results, so the main thread doesn't have to wait.
At the top of this loop, we first wait for the prefetch thread, get the results, then have the prefetch thread execute the query for the next record. The idea being that the prefetch thread will send the query immediately, and wait for results while the main thread completes the loop.
It often does work that way. But note there's nothing to ensure that the prefetch thread runs right away. I found that often the query was not sent until the main thread looped around and started waiting for the prefetch.
I sort of fixed that by calling sleep(0) right after launching the prefetch thread. This way the main thread surrenders the remainder of its time slice, hoping that the prefetch thread will now run and send the query. That thread will then sleep while waiting, which allows the main thread to run again.
Of course, there's plenty more threads running in the OS, but this did actually work to some extent.
What I really want to happen is for the main thread to send the query, and then have the worker thread wait for the results. Using libmysql.dll I call
result := mysql_query(p.SqlCon,pChar(p.query));
in the worker thread. Instead, I'd like to have the main thread call something like
mysql_threadedquery(p.SqlCon,pChar(p.query),thread);
which would hand off the task as soon as the data went out.
Anybody know of anything like that?
This is really a scheduling problem, so what I could try is launching the prefetch thread at a higher priority, then having it reduce its priority after the query is sent. But again, I don't have any MySQL call that separates sending the query from receiving the results.
Maybe it's in there and I just don't know about it. Enlighten me, please.
Added Question:
Does anyone think this problem would be solved by running the prefetch thread at a higher priority than the main thread? The idea is that the prefetch would immediately preempt the main thread and send the query. Then it would sleep waiting for the server reply. Meanwhile the main thread would run.
Added: Details of current implementation
This program performs calculations on data contained in a MySQL DB. There are 33M items with more added every second. The program runs continuously, processing new items, and sometimes re-analyzing old items. It gets a list of items to analyze from a table, so at the beginning of a pass (current item) it knows the next item ID it will need.
As each item is independent, this is a perfect target for multiprocessing. The easiest way to do this is to run multiple instances of the program on multiple machines. The program is highly optimized via profiling, rewrites, and algorithm redesign. Still, a single instance utilizes 100% of a CPU core when not data-starved. I run 4-8 copies on two quad-core workstations. But at this rate they must spend time waiting on the MySQL server. (Optimization of the Server/DB schema is another topic.)
I implemented multi-threading in the process solely to avoid blocking on the SQL calls. That's why I called this "limited multi-threading". A worker thread has one task: send a command and wait for results. (OK, two tasks.)
It turns out there are 6 blocking tasks associated with 6 tables. Two of these read data and the other 4 write results. These are similar enough to be defined by a common Task structure. A pointer to this Task is passed to a threadpool manager which assigns a thread to do the work. The main thread can check the task status through the Task structure.
This makes the main thread code very simple. When it needs to perform Task1, it waits for Task1 to be not busy, puts the SQL command in Task1 and hands it off. When Task1 is no longer busy, it contains the results (if any).
The 4 tasks that write results are trivial. The main thread has a Task write records while it goes on to the next item. When done with that item it makes sure the previous write finished before starting another.
The 2 reading threads are less trivial. Nothing would be gained by passing the read to a thread and then waiting for the results. Instead, these tasks prefetch data for the next item. So the main thread, coming to one of these blocking tasks, checks whether the prefetch is done, waits if necessary for the prefetch to finish, then takes the data from the Task. Finally, it reissues the Task with the NEXT Item ID.
The idea is for the prefetch task to immediately issue the query and wait for the MySQL server. Then the main thread can process the current Item and by the time it starts on the next Item the data it needs is in the prefetch Task.
So the threading, a thread pool, the synchronization, data structures, etc. are all done. And that all works. What I'm left with is a Scheduling Problem.
The Scheduling Problem is this: All the speed gain is in processing the current Item while the server is fetching the next Item. We issue the prefetch task before processing the current item, but how do we guarantee that it starts? The OS scheduler does not know that it's important for the prefetch task to issue the query right away, and then it will do nothing but wait.
The OS scheduler is trying to be "fair" and allow each task to run for an assigned time slice. My worst case is this: The main thread receives its slice and issues a prefetch, then finishes the current item and must wait for the next item. Waiting releases the rest of its time slice, so the scheduler starts the prefetch thread, which issues the query and then waits. Now both threads are waiting. When the server signals the query is done the prefetch thread restarts, and requests the Results (dataset) then sleeps. When the server provides the results the prefetch thread awakes, marks the Task Done and terminates. Finally, the main thread restarts and takes the data from the finished Task.
To avoid this worst-case scheduling I need some way to ensure that the prefetch query is issued before the main thread goes on with the current item. So far I've thought of three ways to do that:
Right after issuing the prefetch task, the main thread calls Sleep(0). This should relinquish the rest of its time slice. I then hope that the scheduler runs the prefetch thread, which will issue the query and then wait. Then the scheduler should restart the main thread (I hope.) As bad as it sounds, this actually works better than nothing.
I could possibly issue the prefetch thread at a higher priority than the main thread. That should cause the scheduler to run it right away, even if it must preempt the main thread. It may also have undesirable effects. It seems unnatural for a background worker thread to get a higher priority.
I could possibly issue the query asynchronously. That is, separate sending the query from receiving the results. That way I could have the main thread send the prefetch using mysql_send_query (non blocking) and go on with the current item. Then when it needed the next item it would call mysql_read_query, which would block until the data is available.
Note that solution 3 does not even use a worker thread. This looks like the best answer, but requires a rewrite of some low-level code. I'm currently looking for examples of such asynchronous client-server access.
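(For reference, the MySQL C API does appear to expose exactly this split through mysql_send_query() and mysql_read_query_result(); a rough sketch in C, untested against the particular libmysql.dll version shipped with the application:)

#include <mysql.h>
#include <string.h>

/* Non-blocking half: returns as soon as the query has been handed to the socket. */
int prefetch_send(MYSQL *con, const char *sql)
{
    return mysql_send_query(con, sql, (unsigned long)strlen(sql));
}

/* Blocking half: waits for the server's reply to the previously sent query. */
MYSQL_RES *prefetch_fetch(MYSQL *con)
{
    if (mysql_read_query_result(con) != 0)
        return NULL;
    return mysql_store_result(con);
}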
I'd also like any experienced opinions on these approaches. Have I missed anything, or am I doing anything wrong? Please note that this is all working code. I'm not asking how to do it, but how to do it better/faster.
Still, a single instance utilizes 100% of a CPU core when not data-starved. I run 4-8 copies on two quad-core workstations.
I have a conceptual problem here. In your situation I would either create a multi-process solution, with each process doing everything in its single thread, or I would create a multi-threaded solution that is limited to a single instance on any particular machine. Once you decide to work with multiple threads and accept the added complexity and probability of hard-to-fix bugs, then you should make maximum use of them. Using a single process with multiple threads allows you to employ varying numbers of threads for reading from and writing to the database and to process your data. The number of threads may even change during the runtime of your program, and the ratio of database and processing threads may too. This kind of dynamic partitioning of the work is only possible if you can control all threads from a single point in the program, which isn't possible with multiple processes.
I implemented multi-threading in the process solely to avoid blocking on the SQL calls.
With multiple processes there wouldn't be a real need to do so. If your processes are I/O-bound some of the time they don't consume CPU resources, so you probably simply need to run more of them than your machine has cores. But then you have the problem to know how many processes to spawn, and that may again change over time if the machine does other work too. A threaded solution in a single process can be made adaptable to a changing environment in a relatively simple way.
So the threading, a thread pool, the synchronization, data structures, etc. are all done. And that all works. What I'm left with is a Scheduling Problem.
Which you should leave to the OS. Simply have a single process with the necessary pooled threads. Something like the following:
A number of threads reads records from the database and adds them to a producer-consumer queue with an upper bound, which is somewhere between N and 2*N where N is the number of processor cores in the system. These threads will block on the full queue, and they can have increased priority, so that they will be scheduled to run as soon as the queue has more room and they become unblocked. Since they will be blocked on I/O most of the time their higher priority shouldn't be a problem.
I don't know what that number of threads is, you would need to measure.
A number of processing threads, probably one per processor core in the system. They will take work items from the queue mentioned in the previous point, or block on that queue if it's empty. Processed work items should go to another queue.
A number of threads that take processed work items from the second queue and write data back to the database. There should probably be an upper bound for the second queue as well, so that a failure to write processed data back to the database does not cause processed data to pile up and fill all your process memory space.
The number of threads needs to be determined, but all scheduling will be performed by the OS scheduler. The key is to have enough threads to utilise all CPU cores, and the necessary number of auxiliary threads to keep them busy and deal with their outputs. If these threads come from pools you are free to adjust their numbers at runtime too.
The OmniThreadLibrary has a solution for tasks, task pools, producer-consumer queues and everything else you would need to implement this. Otherwise you can write your own queues using mutexes.
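If you would rather avoid the library dependency, a bounded queue of this kind is small enough to write yourself. Here is a sketch in C with pthreads (names are illustrative; the Delphi equivalent would be built from TCriticalSection/TEvent or TMonitor), with the fields assumed to be zero/PTHREAD_*_INITIALIZER initialized before use:

#include <pthread.h>

#define QCAP 16   /* upper bound, e.g. somewhere between N and 2*N */

typedef struct {
    void           *items[QCAP];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_full, not_empty;
} queue_t;

/* Producer side: blocks while the queue is full. */
void queue_put(queue_t *q, void *item)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QCAP)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->items[q->tail] = item;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side: blocks while the queue is empty. */
void *queue_get(queue_t *q)
{
    void *item;
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    item = q->items[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return item;
}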
The Scheduling Problem is this: All the speed gain is in processing the current Item while the server is fetching the next Item. We issue the prefetch task before processing the current item, but how do we guarantee that it starts?
By giving it a higher priority.
The OS scheduler does not know that it's important for the prefetch task to issue the query right away
It will know if the thread has a higher priority.
The OS scheduler is trying to be "fair" and allow each task to run for an assigned time slice.
Only for threads of the same priority. No lower priority thread will get any slice of CPU while a higher priority thread in the same process is runnable.
[Edit: That's not completely true, more information at the end. However, it is close enough to the truth to ensure that your higher priority network threads send and receive data as soon as possible.]
Right after issuing the prefetch task, the main thread calls Sleep(0).
Calling Sleep() is a bad way to force threads to execute in a certain order. Set the thread priority according to the priority of the work they perform, and use OS primitives to block higher priority threads if they should not run.
I could possibly issue the prefetch thread at a higher priority than the main thread. That should cause the scheduler to run it right away, even if it must preempt the main thread. It may also have undesirable effects. It seems unnatural for a background worker thread to get a higher priority.
There is nothing unnatural about this. It is the intended way to use threads. You only must make sure that higher priority threads block sooner or later, and any thread that goes to the OS for I/O (file or network) does block. In the scheme I sketched above the high priority threads will also block on the queues.
I could possibly issue the query asynchronously.
I wouldn't go there. This technique may be necessary when you write a server for many simultaneous connections and a thread per connection is prohibitively expensive, but otherwise blocking network access in a threaded solution should work fine.
Edit:
Thanks to Jeroen Pluimers for the poke to look closer into this. As the information in the links he gave in his comment shows, my statement
No lower priority thread will get any slice of CPU while a higher priority thread in the same process is runnable.
is not true. Lower priority threads that haven't been running for a long time get a random priority boost and will indeed sooner or later get a share of CPU, even though higher priority threads are runnable. For more information about this see in particular "Priority Inversion and Windows NT Scheduler".
To test this out I created a simple demo with Delphi:
type
  TForm1 = class(TForm)
    Label1: TLabel;
    Label2: TLabel;
    Label3: TLabel;
    Label4: TLabel;
    Label5: TLabel;
    Label6: TLabel;
    Timer1: TTimer;
    procedure FormCreate(Sender: TObject);
    procedure FormDestroy(Sender: TObject);
    procedure Timer1Timer(Sender: TObject);
  private
    fLoopCounters: array[0..5] of LongWord;
    fThreads: array[0..5] of TThread;
  end;

var
  Form1: TForm1;

implementation

{$R *.DFM}

// TTestThread

type
  TTestThread = class(TThread)
  private
    fLoopCounterPtr: PLongWord;
  protected
    procedure Execute; override;
  public
    constructor Create(ALowerPriority: boolean; ALoopCounterPtr: PLongWord);
  end;

constructor TTestThread.Create(ALowerPriority: boolean;
  ALoopCounterPtr: PLongWord);
begin
  inherited Create(True);
  if ALowerPriority then
    Priority := tpLower;
  fLoopCounterPtr := ALoopCounterPtr;
  Resume;
end;

procedure TTestThread.Execute;
begin
  while not Terminated do
    InterlockedIncrement(PInteger(fLoopCounterPtr)^);
end;

// TForm1

procedure TForm1.FormCreate(Sender: TObject);
var
  i: integer;
begin
  for i := Low(fThreads) to High(fThreads) do
    // fThreads[i] := TTestThread.Create(True, @fLoopCounters[i]);
    fThreads[i] := TTestThread.Create(i >= 4, @fLoopCounters[i]);
end;

procedure TForm1.FormDestroy(Sender: TObject);
var
  i: integer;
begin
  for i := Low(fThreads) to High(fThreads) do begin
    if fThreads[i] <> nil then
      fThreads[i].Terminate;
  end;
  for i := Low(fThreads) to High(fThreads) do
    fThreads[i].Free;
end;

procedure TForm1.Timer1Timer(Sender: TObject);
begin
  Label1.Caption := IntToStr(fLoopCounters[0]);
  Label2.Caption := IntToStr(fLoopCounters[1]);
  Label3.Caption := IntToStr(fLoopCounters[2]);
  Label4.Caption := IntToStr(fLoopCounters[3]);
  Label5.Caption := IntToStr(fLoopCounters[4]);
  Label6.Caption := IntToStr(fLoopCounters[5]);
end;
This creates 6 threads (on my 4-core machine), either all with lower priority, or 4 with normal and 2 with lower priority. In the first case all 6 threads run, but with wildly different shares of CPU time.
In the second case 4 threads run with a roughly equal share of CPU time, but the other two threads get a little share of the CPU as well.
But the share of CPU time is very very small, way below a percent of what the other threads receive.
And to get back to your question: A program using multiple threads with custom priority, coupled via producer-consumer queues, should be a viable solution. In the normal case the database threads will block most of the time, either on the network operations or on the queues. And the Windows scheduler will make sure that even a lower priority thread will not completely starve to death.
I don't know any database access layer that permits this.
The reason is that each thread has its own "thread local storage" (the threadvar keyword in Delphi; other languages have equivalents; it is used in a lot of frameworks).
When you start things on one thread and continue them on another, you get these local storages mixed up, causing all sorts of havoc.
The best you can do is this:
pass the query and parameters to the thread that will handle this (use the standard Delphi thread synchronization mechanisms for this)
have the actual query thread perform the query
return the results to the main thread (use the standard Delphi thread synchronization mechanisms for this)
The answers to this question explains thread synchronization in more detail.
Edit: (on the presumed slowness of starting something in another thread)
"Right away" is a relative term: it depends in how you do your thread synchronization and can be very very fast (i.e. less than a millisecond).
Creating a new thread might take some time.
The solution is to have a threadpool of worker threads that is big enough to service a reasonable amount of requests in an efficient manner.
That way, if the system is not yet too busy, you will have a worker thread ready to start servicing your request almost immediately.
I have done this (even cross process) in a big audio application that required low latency response, and it works like a charm.
The audio server process runs at high priority waiting for requests. When it is idle, it doesn't consume CPU, but when it receives a request it responds really fast.
The answers to this question on changes with big improvements and this question on cross thread communication provide some interesting tips on how to get this asynchronous behaviour working.
Look for the words AsyncCalls, OmniThread and thread.
--jeroen
I'm putting in a second answer, for your second part of the question: your Scheduling Problem
This makes it easier to distinguish both answers.
First of all, you should read Consequences of the scheduling algorithm: Sleeping doesn't always help which is part of Raymond Chen's blog "The Old New Thing".
Sleeping versus polling is also good reading.
Basically all these make good reading.
If I understand your Scheduling Problem correctly, you have 3 kinds of threads:
Main Thread: makes sure the Fetch Threads always have work to do
Fetch Threads: (database bound) fetch data for the Processing Threads
Processing Threads: (CPU bound) process fetched data
The only way to keep 3 running is to have 2 fetch as much data as they can.
The only way to keep 2 fetching, is to have 1 provide them enough entries to fetch.
You can use queues to communicate data between 1 and 2 and between 2 and 3.
Your problem now is two-fold:
finding the balance between the number of threads in category 2 and 3
making sure that 2 always have work to do
I think you have solved the former.
The latter comes down to making sure the queue between 1 and 2 is never empty.
A few tricks:
You can use Sleep(1) (see the blog article) as a simple way to "force" 2 to run
Never let the threads exit their Execute method: creating and destroying threads is expensive
choose your synchronization objects (often called IPC objects) carefully (Kudzu has a nice article on them)
--jeroen
You just have to use the standard thread synchronization mechanisms of Delphi threading.
Check your IDE help for the TEvent class and its associated methods.

How to determine why a task is destroyed, VxWorks?

I have a VxWorks application running on ARM uC.
First let me summarize the application;
The application consists of a 3rd-party stack and a gateway application.
We have implemented an operating system abstraction layer (OSA) to support OS independence.
The underlying stack has its own memory management and control facility which holds memory blocks in a doubly linked list.
For instance, we don't directly perform malloc/new or free/delete. Instead we call the OSA layer's routines; they get the memory from the OS, put it in a list, and then return this memory to the application (routines: XXAlloc, XXFree, XXReAlloc).
And when freeing the memory we again use XXFree.
In fact this block is a struct which has:
- magic numbers indicating the beginning and end of the memory block
- the size that the user requested
- the size actually allocated, due to alignment
- previous and next pointers
- a pointer to the piece of memory given back to the application
- the link register value that shows where in the application xxAlloc was called
With this block structure the stack can check whether a block is corrupted or not (a sketch of such a header follows).
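A hypothetical C sketch of what such a block header might look like (field names, types, and magic values are invented; the real layout belongs to the third-party stack):

#include <stddef.h>

struct xx_block {
    unsigned long    magic_begin;   /* known pattern marking the block start     */
    size_t           user_size;     /* size the caller asked for                 */
    size_t           real_size;     /* size actually allocated, after alignment  */
    struct xx_block *prev, *next;   /* doubly linked list of blocks              */
    void            *caller_lr;     /* link register captured inside xxAlloc     */
    unsigned long    magic_end;     /* known pattern marking the header end      */
    /* the pointer returned to the application points just past this header */
};

/* Corruption check: both magic numbers must still be intact. */
int xx_block_ok(const struct xx_block *b)
{
    return b->magic_begin == 0xDEADBEEFUL && b->magic_end == 0xFEEDFACEUL;
}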
We also have a pthread library, ported from Linux, that we use for:
- creating/terminating threads (currently there are 22 threads)
- synchronization objects (events, mutexes, ...)
There is a main task started by taskSpawn, and later this task creates the other threads.
That was a description of the application and its VxWorks interface.
The problem is :
One of the tasks suddenly gets destroyed by VxWorks, giving no information about what's wrong.
I also have a JTAG debugger, and it hits the VxWorks taskDestroy() routine, but the call stack doesn't give any information, neither the PC nor r14.
I'm suspicious of a specific routine in the code where a huge xxAlloc is done, but the problem occurs very sporadically, giving no clue that I can map back to source code.
I think the OS detects an exception and handles it silently.
Any help would be great.
Regards
It is resolved.
I did an isolated test: I allocated 20 MB with malloc, memset it with 0x55, and stopped the threads of my application.
Then I wrote another thread which checks whether anything other than 0x55 has been written into my 20 MB.
And guess what!! Some other thread, belonging to other components on the CPU (developed by someone else), writes into my allocated space.
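For anyone wanting to reproduce this kind of isolation test, a rough C sketch of the idea (size and fill pattern as described above; names are made up):

#include <stdlib.h>
#include <string.h>

#define CANARY_SIZE (20u * 1024u * 1024u)   /* 20 MB */
#define CANARY_BYTE 0x55

static unsigned char *canary;

/* Fill a large block with a known pattern, then never touch it again. */
int canary_arm(void)
{
    canary = malloc(CANARY_SIZE);
    if (canary == NULL)
        return -1;
    memset(canary, CANARY_BYTE, CANARY_SIZE);
    return 0;
}

/* Called periodically from a checker task: returns the offset of the first
   corrupted byte, or -1 if the block is still intact. */
long canary_check(void)
{
    unsigned long i;
    for (i = 0; i < CANARY_SIZE; i++)
        if (canary[i] != CANARY_BYTE)
            return (long)i;
    return -1;
}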
Thanks for your help
If your task exits, taskDestroy() is called. If you are suspicious of huge xxAlloc, verify that the allocation code is not calling exit() when memory is exhausted. I've been bitten by this behavior in a third party OSAL before.
Sounds like you are debugging after integration; this can be a hell of a job.
I suggest breaking the problem into smaller pieces.
Process
1) You can get more insight by instrumenting the code and/or using VxWorks instrumentation (depending on which version). This gives you more visibility into what happens. Be sure to log everything to a file, so you can move back in time from the point where the task ends. Instrumentation is a worthwhile investment, as it will come in handy on more occasions. Interesting hooks in VxWorks: taskHookLib.
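For example, a task delete hook is one such instrumentation point; a sketch (assuming VxWorks 5.x, where a task ID is the TCB address, so the cast below is a common but version-dependent idiom; check the headers of the version actually in use):

#include <vxWorks.h>
#include <taskLib.h>
#include <taskHookLib.h>
#include <logLib.h>

/* Log every task deletion so the dying task can be identified afterwards. */
void myDeleteHook(WIND_TCB *pTcb)
{
    logMsg("task %s (0x%x) is being deleted\n",
           (int)taskName((int)pTcb), (int)pTcb, 0, 0, 0, 0);
}

void installDeleteHook(void)
{
    taskDeleteHookAdd((FUNCPTR)myDeleteHook);
}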
2) Memory allocation/deallocation is very fundamental functionality. It would be my first candidate for thorough (unit) testing in a well-defined multi-threaded environment. If you have done this and no errors are found, I'd first start to look at why the task has ended.
other possible causes
A task will also end when its work is done, so it may be a return caused by a not-so-endless loop. Especially if it is always the same task, this would be my guess.
And some versions of VxWorks have MMU support which must be considered.