How do I use multi-threading in Tcl?

I'm trying to run two procedures in parallel. Since Tcl is an interpreter, it processes procedures one at a time. Can someone explain, with an example, how I can use multi-threading in Tcl?

These days, the usual way to do multi-threading in Tcl is to use its Thread extension. It is developed alongside the Tcl core, but on certain platforms (such as various Linux-based OSes) you might need to install a separate package to make the extension available.
The threading model the Thread extension implements is "one thread per interpreter". This means each thread can "host" just one Tcl interpreter (plus any number of its child interpreters), but no code executed in one thread may access interpreters hosted in other threads. This, in turn, means that when you work with threads in Tcl, you have to master the idea of multiple interpreters.
The classical approach to exchanging data between interpreters running in different threads is message passing: you post scripts to the input queue of the target interpreter, which runs in another thread, and then wait for a reply. Thread-shared variables (shared memory protected by locking) are also available, as is support for thread pools.
Read the "Tcl and threads" wiki page, the Thread's extension manual pages.
The code examples are on the wiki. Here's just one of them.
Please note that if the procedures you want to run in parallel are mostly I/O-bound (that is, they read something from the network and/or send something to it) rather than CPU-bound (doing heavy computations), you might have better results with an event-based approach to processing: Tcl has built-in support for an event loop, and you can make Tcl execute your code when the next chunk of data can be read from a channel (such as a network socket) or written to one.

Related

What is the use of task graphs in CUDA 10?

CUDA 10 added runtime API calls for putting streams (= queues) in "capture mode", so that instead of executing, they are returned in a "graph". These graphs can then be made to actually execute, or they can be cloned.
But what is the rationale behind this feature? Isn't it unlikely to execute the same "graph" twice? After all, even if you do run the "same code", at least the data is different, i.e. the parameters the kernels take likely change. Or - am I missing something?
PS - I skimmed this slide deck, but still didn't get it.
My experience with graphs is indeed that they are not very mutable. You can change the parameters with 'cudaGraphHostNodeSetParams', but in order for the change of parameters to take effect, I had to rebuild the executable graph with 'cudaGraphInstantiate'. This call takes so long that any gain from using graphs is lost (in my case). Setting the parameters only worked for me when I built the graph manually. When getting the graph through stream capture, I was not able to set the parameters of the nodes, as you do not have the node pointers. You would think that calling 'cudaGraphGetNodes' on a stream-captured graph would return the nodes, but the node pointer returned was NULL for me even though the 'numNodes' variable had the correct number. The documentation explicitly mentions this as a possibility but fails to explain why.
Task graphs are quite mutable.
There are API calls for changing/setting the parameters of task graph nodes of various kinds, so one can use a task graph as a template: instead of enqueueing the individual nodes before every execution, one changes the parameters of each node before every execution (and perhaps not all nodes actually need their parameters changed).
See, for example, the documentation for cudaGraphHostNodeGetParams and cudaGraphHostNodeSetParams.
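To make the template idea concrete, here is a minimal sketch in CUDA C++ (myKernel, its arguments, and the launch dimensions are hypothetical; the five-argument cudaGraphInstantiate is the CUDA 10/11 signature, and cudaGraphExecKernelNodeSetParams needs CUDA 10.1 or later). It builds a one-node graph by hand, keeps the node handle, and updates the kernel's arguments in the instantiated graph between launches without re-instantiating; the same pattern applies to host nodes via the cudaGraphHostNodeSetParams family mentioned above.

#include <cuda_runtime.h>

__global__ void myKernel(float *data, int n, float increment) {  // hypothetical kernel
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += increment;
}

void launchAsTemplate(float *d_data, int n, cudaStream_t stream) {
    cudaGraph_t graph;
    cudaGraphCreate(&graph, 0);

    float increment = 0.0f;
    void *args[] = { &d_data, &n, &increment };

    cudaKernelNodeParams p = {};
    p.func         = (void *)myKernel;
    p.gridDim      = dim3((n + 127) / 128);
    p.blockDim     = dim3(128);
    p.kernelParams = args;

    cudaGraphNode_t node;                                 // keep the handle for later updates
    cudaGraphAddKernelNode(&node, graph, nullptr, 0, &p);

    cudaGraphExec_t exec;
    cudaGraphInstantiate(&exec, graph, nullptr, nullptr, 0);

    for (int pass = 0; pass < 10; ++pass) {
        increment = (float)pass;                          // new argument value for this pass
        cudaGraphExecKernelNodeSetParams(exec, node, &p); // re-copies the arguments, no re-instantiate
        cudaGraphLaunch(exec, stream);
    }
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
}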
Another useful feature is concurrent kernel execution. In manual mode, one can add nodes to the graph with dependencies, and the runtime will exploit the available concurrency automatically using multiple streams. The feature itself is not new, but making it automatic is useful for certain applications.
When training a deep learning model, it often happens that the same set of kernels is re-run in the same order but with updated data. Also, I would expect CUDA to be able to optimize by knowing statically which kernels come next: we can imagine that CUDA could prefetch more instructions or adapt its scheduling strategy when it knows the whole graph.
CUDA Graphs try to solve the problem that, in the presence of many small kernel invocations, you see quite some time spent on the CPU dispatching work for the GPU (launch overhead).
They let you trade resources (time, memory, etc.) to construct a graph of kernels that you can then launch with a single invocation from the CPU instead of doing many invocations. If you don't have enough invocations, or your algorithm is different each time, then building a graph won't be worth it.
This works really well for anything iterative that uses the same computation underneath (e.g., algorithms that need to converge to something) and it's pretty prominent in a lot of applications that are great for GPUs (e.g., think of the Jacobi method).
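For instance, here is a minimal sketch in CUDA C++ of that iterative pattern (stepA/stepB and the launch dimensions are hypothetical; the two-argument cudaStreamBeginCapture and five-argument cudaGraphInstantiate are the CUDA 10.1/11 signatures): capture the loop body once, instantiate it, and replay the executable graph each iteration so the CPU pays a single launch per iteration instead of one per kernel.

#include <cuda_runtime.h>

__global__ void stepA(float *x, int n) { /* hypothetical small kernel */ }
__global__ void stepB(float *x, int n) { /* hypothetical small kernel */ }

void iterate(float *d_x, int n, int iterations, cudaStream_t stream) {
    cudaGraph_t graph;
    cudaGraphExec_t exec;

    // Capture the body once: these launches are recorded into the graph, not executed.
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    stepA<<<(n + 127) / 128, 128, 0, stream>>>(d_x, n);
    stepB<<<(n + 127) / 128, 128, 0, stream>>>(d_x, n);
    cudaStreamEndCapture(stream, &graph);

    cudaGraphInstantiate(&exec, graph, nullptr, nullptr, 0);

    for (int i = 0; i < iterations; ++i)
        cudaGraphLaunch(exec, stream);      // one cheap CPU-side launch per iteration
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
}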
You are not going to see great results if you have an algorithm that you invoke only once or if your kernels are big; in that case the CPU launch overhead is not your bottleneck. A succinct explanation of when you need it is given in Getting Started with CUDA Graphs.
Where task-graph-based paradigms shine, though, is when you define your program as tasks with dependencies between them. You give a lot of flexibility to the driver / scheduler / hardware to do the scheduling itself without much fine-tuning on the developer's part. There's a reason we have been spending years exploring the ideas of dataflow programming in HPC.

QEMU/QMP alert when writing to memory

I'm using QEMU to test some software for a personal project, and I would like to know whenever the program writes to memory. The best solution I have come up with is to manually add print statements in the file responsible for writing to memory, which, if I'm correct, would require rebuilding that object file and rebuilding QEMU. But I came across QMP, which uses JSON commands to manipulate QEMU and has an entire list of commands, found here: https://raw.githubusercontent.com/Xilinx/qemu/master/qmp-commands.hx.
After looking at that list, I didn't really see anything that would do what I want. I am a fairly new programmer and not that advanced, and I was wondering if anyone has an idea of a better way to go about this.
Recently (9 June 2016) powerful tracing features were added to mainline QEMU.
See the qemu/docs/tracing.txt file for the manual.
There are many events that can be traced; see the qemu/trace-events file for a list of them.
As far as I understand the code, the "guest_mem_before" event is the one you need in order to view guest memory writes.
Details:
The tracing hooks are placed in the following functions:
qemu/tcg/tcg-op.c: tcg_gen_qemu_st* (TCG generation of all guest store instructions)
qemu/include/exec/cpu_ldst_template.h: all non-TCG memory accesses (fetch/translation time, helpers, devices)
There historically hasn't been any support in QEMU for tracing all guest memory accesses, because there isn't any one place in QEMU where you could easily add print statements to trace them. This is because most guest memory accesses go through the "fast path", where we directly generate native host instructions which look up the host RAM address in a data structure (QEMU's TLB) and perform the load or store. It's only if this fast path doesn't find a hit in the TLB that we fall back to a slow path that's written in C.
The recent trace-events event 'tcg guest_mem_before' can be used to trace virtual memory accesses, but note that it won't tell you:
whether the access succeeded or faulted
what the data being loaded or stored was
the physical address that's accessed
You'll also need to rebuild QEMU to enable it (unlike most trace events, which are compiled into QEMU by default and can be enabled at runtime).

Compiling TCL libraries with TCL_MEM_DEBUG

I compiled the Tcl libraries with the mem flag, but when I tried to use the libraries in my application I couldn't see any messages in the console. Will the trace messages go to standard output (the terminal), or will there be log files where the messages are logged?
When you compile Tcl with the memory debugging enabled (using a Posix configuration style, this means that you passed in --enable-symbols=mem or --enable-symbols=all to configure; I'm not certain about what happens with Windows) there is a substantial amount of extra checking of memory allocation handling by default, and an extra Tcl command — memory — is defined. Some memory subcommands do cause messages to be written to stderr; you'll need to be running inside a suitable console in order to see them, and this can be something of an issue on Windows if you are not aware of it. Other commands will dump things to a named file.
FWIW, when developing Tcl I usually build with --enable-symbols=all except when doing performance testing. The various debugging options are known to have substantial impacts on the speed of Tcl's implementation (which is why it is a compile option rather than being always present, and consequently why the interface is rather rougher than for the rest of Tcl).

How come the macro is used as a function, but is not implemented anywhere?

The following code is in MySQL 5.5 storage/example/ha_example.cc:
MYSQL_READ_ROW_START(table_share->db.str, table_share->table_name.str, TRUE);
rc= HA_ERR_END_OF_FILE;
MYSQL_READ_ROW_DONE(rc);
I searched for the MYSQL_READ_ROW_START definition in the whole project and found it in include/probes_mysql_nodtrace.h:
#define MYSQL_READ_ROW_START(arg0, arg1, arg2)
#define MYSQL_READ_ROW_START_ENABLED() (0)
#define MYSQL_READ_ROW_DONE(arg0)
#define MYSQL_READ_ROW_DONE_ENABLED() (0)
It is just an empty macro definition here.
My question is: how come this macro MYSQL_READ_ROW_START is not associated with any function, but is used as a function in the above code?
Thanks.
These aren't traditional macros: they're probe points for DTrace, an observability framework for Solaris, OS X, FreeBSD and various other operating systems.
DTrace revolves around the notion that different providers offer certain probes at which one can observe running executables or even the operating system itself. Some providers are time-based; by firing at regular intervals the probes can, for example, be used to profile the use of a CPU. Other providers are code-based, and their probes might, for example, fire at the entrance to and exit from functions.
The code you highlight is an example of the USDT (User-land Statically Defined Tracing) provider. The canonical use of the USDT provider is to expose meaningful events within transactions. For example, the beginning and end of a transaction might well occur somewhere deep within different functions; in this case it's best for the developer to identify exactly what he wants to reveal and when.
A USDT probe is more than a switchable printf() although it can of course be used to reveal information, e.g. some local value such as the intermediate result of a transaction. A USDT probe can also be used to trigger behaviour. For example, one might want to activate some network probes for only the duration of a certain transaction.
Returning to your question, USDT probes are implemented by writing macros in the code that correspond to a description of the provider in a ".d" file elsewhere. This is parsed by the dtrace(1) utility, which generates a header file that is suitable for compilation. On a system that lacks DTrace it would make sense to define a header file in which the USDT macros become no-ops, and judging by the given filename (probes_mysql_nodtrace.h) this is what you are observing.
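To make that concrete, here is a hedged sketch in C/C++ of how such USDT probe macros are typically used; the "myapp" provider, its probe names, and the generated header name are all hypothetical, and on a system without DTrace the same macro names would simply expand to nothing, exactly as in probes_mysql_nodtrace.h above.

/* Hypothetical provider description, normally kept in a separate .d file and
 * run through dtrace -h to generate "myapp_probes.h":
 *     provider myapp {
 *         probe read__row__start(char *db, char *table);
 *         probe read__row__done(int rc);
 *     };
 */
#include "myapp_probes.h"   /* dtrace(1)-generated header, or an empty-macro fallback */

static int read_row(const char *db, const char *table)
{
    if (MYAPP_READ_ROW_START_ENABLED())                  /* cheap "is anyone tracing?" check */
        MYAPP_READ_ROW_START((char *)db, (char *)table); /* probe fires here when enabled */

    int rc = 0;
    /* ... the actual row-reading work would go here ... */

    MYAPP_READ_ROW_DONE(rc);
    return rc;
}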
See http://dev.mysql.com/tech-resources/articles/getting_started_dtrace_saha.html.
To quote:
DTrace probes are implemented by kernel modules called providers, each of which performs a particular kind of instrumentation to create probes. Providers can thus be described as publishers of probes that can be consumed by DTrace consumers (see below). Providers can be used for instrumenting kernel and user-level code. For user-level code, there are two ways in which probes can be defined: User-Level Statically Defined Tracing (USDT) or the PID provider.
So it appears to be up to DTrace providers to implement such a macro.

CUDA contexts, streams, and events on multiple GPUs

TL;DR version: "What's the best way to round-robin kernel calls to multiple GPUs with Python/PyCUDA such that CPU and GPU work can happen in parallel?" with a side of "I can't have been the first person to ask this; anything I should read up on?"
Full version:
I would like to know the best way to design context, etc. handling in an application that uses CUDA on a system with multiple GPUs. I've been trying to find literature that talks about guidelines for when context reuse vs. recreation is appropriate, but so far haven't found anything that outlines best practices, rules of thumb, etc.
The general overview of what we need to do is:
Requests come in to a central process.
That process forks to handle a single request.
Data is loaded from the DB (relatively expensive).
Then the following is repeated an arbitrary number of times based on the request (dozens):
A few quick kernel calls to compute data that is needed for later kernels.
One slow kernel call (10 sec).
Finally:
Results from the kernel calls are collected and processed on the CPU, then stored.
At the moment, each kernel call creates and then destroys a context, which seems wasteful. Setup is taking about 0.1 sec per context and kernel load, and while that's not huge, it is precluding us from moving other quicker tasks to the GPU.
I am trying to figure out the best way to manage contexts, etc. so that we can use the machine efficiently. I think that in the single-GPU case, it's relatively simple:
Create a context before starting any of the GPU work.
Launch the kernels for the first set of data.
Record an event after the final kernel call in the series.
Prepare the second set of data on the CPU while the first is computing on the GPU.
Launch the second set, repeat.
Ensure that each event gets synchronized before collecting the results and storing them.
That seems like it should do the trick, assuming proper use of overlapped memory copies.
However, I'm unsure what I should do when wanting to round-robin each of the dozens of items to process over multiple GPUs.
The host program is Python 2.7, using PyCUDA to access the GPU. Currently it's not multi-threaded, and while I'd rather keep it that way ("now you have two problems" etc.), if the answer means threads, it means threads. Similarly, it would be nice to just be able to call event.synchronize() in the main thread when it's time to block on data, but for our needs efficient use of the hardware is more important. Since we'll potentially be servicing multiple requests at a time, letting other processes use the GPU when this process isn't using it is important.
I don't think that we have any explicit reason to use Exclusive compute modes (ie. we're not filling up the memory of the card with one work item), so I don't think that solutions that involve long-standing contexts are off the table.
Note that answers in the form of links to other content that covers my questions are completely acceptable (encouraged, even), provided they go into enough detail about the why, not just the API. Thanks for reading!
Caveat: I'm not a PyCUDA user (yet).
With CUDA 4.0+ you don't even need an explicit context per GPU. You can just call cudaSetDevice (or the PyCUDA equivalent) before doing per-device stuff (cudaMalloc, cudaMemcpy, launch kernels, etc.).
If you need to synchronize between GPUs, you may need to create streams and/or events and use cudaEventSynchronize (or the PyCUDA equivalent). You can even have one stream wait on an event inserted in another stream to set up sophisticated dependencies.
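For illustration, here is a rough sketch of that round-robin pattern written against the CUDA runtime API in C++ (PyCUDA exposes equivalent Stream and Event objects); slowKernel, processItem, and collectItem are hypothetical stand-ins for the real work.

#include <vector>
#include <cuda_runtime.h>

__global__ void slowKernel(int item) { /* hypothetical stand-in for the slow kernel */ }

static void processItem(int item, cudaStream_t s) {   // enqueue one item's GPU work
    slowKernel<<<1, 1, 0, s>>>(item);
}

static void collectItem(int item) { /* hypothetical CPU-side post-processing */ }

void roundRobin(int numItems) {
    int numDevices = 0;
    cudaGetDeviceCount(&numDevices);
    if (numDevices == 0) return;

    std::vector<cudaStream_t> streams(numDevices);
    std::vector<cudaEvent_t>  done(numItems);

    for (int d = 0; d < numDevices; ++d) {
        cudaSetDevice(d);                      // the runtime creates each context lazily, once
        cudaStreamCreate(&streams[d]);
    }

    for (int i = 0; i < numItems; ++i) {
        int d = i % numDevices;                // round-robin assignment of items to GPUs
        cudaSetDevice(d);
        processItem(i, streams[d]);            // asynchronous work on this device
        cudaEventCreateWithFlags(&done[i], cudaEventDisableTiming);
        cudaEventRecord(done[i], streams[d]);  // marks completion of item i
    }

    // The host is free to prepare more data here; block only when results are needed.
    for (int i = 0; i < numItems; ++i) {
        cudaEventSynchronize(done[i]);
        collectItem(i);
        cudaEventDestroy(done[i]);
    }

    for (int d = 0; d < numDevices; ++d) {
        cudaSetDevice(d);
        cudaStreamDestroy(streams[d]);
    }
}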
So I suspect the answer today is quite a lot simpler than talonmies' excellent pre-CUDA-4.0 answer.
You might also find this answer useful.
(Re)Edit by OP: Per my understanding, PyCUDA supports versions of CUDA prior to 4.0, and so still uses the old API/semantics (the driver API?), so talonmies' answer is still relevant.