While I think I understand the gist of the problem (i.e. a good GC tracks objects, not scope), I don't know enough about the subject to convince others.
Can you give me an explanation of why there are no garbage-collected languages with deterministic destructors?
They are NOT mutually exclusive. Feel free to use C++ with libgc (the Boehm-Demers-Weiser collector). You can still use RAII, smart pointers, and manual deletion, but with the GC running you can also just "forget" to delete some objects.
Andy's answer regarding resources being disposed too late misses the important point: it isn't the delay in releasing resources which is crucial semantically, but rather the order of release.
The reason GC tends not to order release well is that it would require a topological sort on ordering requirements (dependencies) and that's an expensive algorithm.
Nevertheless, the OCaml GC has an interesting facility where you can attach a finaliser to an object. If the object becomes unreachable, the finaliser is run, but the object is not deleted (because the finaliser could make it reachable again; in that case you can even attach another finaliser). These finalisers can provide some control over ordering.
From Wikipedia, after noting that tracing garbage collectors are the most common type:
Tracing garbage collection is not deterministic. An object which becomes eligible for garbage collection will usually be cleaned up eventually, but there is no guarantee when (or even if) that will happen.
Therefore, relying on RAII could lead to the resource being disposed of too late.
As a result, for example, Java has a guideline to "avoid finalizers" (Item 6 in "Effective Java" by Joshua Bloch): "Nothing time-critical should ever be done in a finalizer."
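To make the contrast concrete in .NET terms, here is a minimal sketch (the file name is just a placeholder): the using block gives deterministic release, while a finalizer only runs whenever the collector gets around to it.

using System;
using System.IO;

class Example
{
    static void Main()
    {
        // Deterministic release: the file handle is closed at the end of
        // the block, in a known order, regardless of when the GC next runs.
        using (var w = new StreamWriter("out.txt"))
        {
            w.WriteLine("done");
        }   // w.Dispose() runs here

        // An object that relies only on a finalizer, by contrast, is cleaned
        // up "eventually" - whenever the collector decides to run it, if ever.
    }
}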
The garbage collector can't run all the time (refcounting gets closer, but generally doesn't count as garbage collection), so it doesn't even try. It's plain impractical. Therefore, there is an inevitable delay between an object becoming unreachable (e.g. because the only reference goes out of scope) and the GC collecting it, possibly firing a finalizer. This delay is not deterministic... unless you force the GC into a deterministic schedule (then deterministic destruction in the strictest sense of the word is possible, although still impractical) - but this gets pretty close to "GC running all the time", which is still incredibly impractical.
So GC and deterministic cleanup are mutually exclusive because the GC does all the cleanup, and it cannot afford to be deterministic but must rely on maximizing its efficiency.
Related
In our large code base, I can find multiple cudaStreamCreate() calls. However, I could not find cudaStreamDestroy() anywhere. Is it important to destroy streams after the program is complete, or does one not need to worry about this? What is good programming practice in this regard?
Is it important to destroy streams after the program is complete, or does one not need to worry about this?
The runtime API will clean up all resources allocated (streams, memory, events, etc) by the context owned by the process during normal process termination. It isn't necessary to explicitly destroy streams in most situations.
While talonmies' answer is correct, it is still often important to destroy your streams, and other entities created in CUDA:
If you're writing a library - you may finish your work well before the application exits. (Although in that case you might be working in a different CUDA context, and maybe you'll simply destroy the whole context).
If your code which creates streams might be called many times.
Also, if you don't synchronize your streams after completing all work on them, you might be missing some errors (and the results of your last bits of work); and if you do have a "last sync", that is often an opportunity to also destroy the stream.
Finally, if you use C++-flavored wrappers, like mine, then streams get destroyed when you leave the scope in which they were created, and you don't have to worry about it (but you pay the overhead of stream destruction API calls).
Which garbage collection algorithms can recognize garbage objects as soon as they become garbage?
The only thing which comes to my mind is reference counting with an added cycle-search every time a reference count is decremented to a non-zero value.
Are there any other interesting collection algorithms which can achieve that? (Note that I'm asking out of curiosity only; I'm aware that all such collectors would probably be incredibly inefficient)
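For reference, here is a toy C# sketch of the reference-counting half (the easy part): the wrapped resource is released the moment the count drops to zero, i.e. as soon as it becomes garbage. The cycle search mentioned above, which is the hard part, is deliberately not shown.

using System;

sealed class RefCounted<T> where T : class, IDisposable
{
    private T resource;
    private int count = 1;

    public RefCounted(T resource) { this.resource = resource; }

    public T Get() => resource ?? throw new ObjectDisposedException("RefCounted");

    public void Retain() => count++;

    public void Release()
    {
        if (--count == 0)
        {
            resource.Dispose();   // immediate, deterministic reclamation
            resource = null;
        }
    }
}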
Though not being a garbage collection algorithm, escape analysis allows reasoning about life-time of objects. So, if efficiency is a concern and objects should be collected not in all but in "obvious" cases, it can be handy. The basic idea is to perform static analysis of the program (at compile time or at load time if compiled for a VM) and to figure out if a newly created object may escape a routine it is created in (hence the name of the analysis). If the object is not passed anywhere else, not stored in global memory, not returned from the given routine, etc., it can be released before returning from this routine or even earlier, at the place of its last use.
The objects that do not live longer than the associated method call can be allocated on the stack rather than in the heap so they can be removed from the garbage collection cycle at compile time thus lowering pressure on the general GC.
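As a hedged illustration of escape analysis (whether a particular JIT actually performs this optimization varies), the StringBuilder below never escapes the method - it isn't stored, returned, or passed anywhere that outlives the call - so an escape-analysis-aware compiler could in principle allocate it on the stack and keep it out of the GC's hands entirely:

static string Format(int x, int y)
{
    // The builder is used only locally; only the resulting string escapes.
    var sb = new System.Text.StringBuilder();
    sb.Append(x).Append(',').Append(y);
    return sb.ToString();
}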
Such a mechanism would be called "heap management", not garbage collection.
By definition, garbage collection is decoupled from heap management. This is because in some environments/applications it is more efficient to skip doing a "free" operation and keeping track of what is in use. Instead, every once in a while, just go around and gather all the unreferenced nodes and put them back on the free list.
== Addendum ==
I am being downvoted for attempting to correct the terminology of heap management with garbage collection. The Wikipedia article agrees with my usage, and with what I learned at university, even though that was several decades ago. Languages such as Lisp and Snobol invented the need for garbage collection. Languages such as C don't provide such a heavy-duty runtime environment; instead they rely on the programmer to manage cleanup of unused bits of memory and resources.
I was thinking about garbage collection on the way home, and I began wondering, why does the garbage collector totally freeze execution of a program? Personally I would have designed it to block any threads which try to allocate a new object, but threads which were running would be left alone.
I can't imagine any situation where this would be a problem compared to how a garbage collector currently works.
I was thinking about garbage collection on the way home, and I began wondering, why does the garbage collector totally freeze execution of a program?
There is a trade-off between latency and throughput in GC design. You can either process heap-allocated blocks individually ("incremental") or you can batch them up and process them all at the same time ("stop the world"). Fully incremental collection never totally freezes a program and it has very low latency but it also has very poor throughput. Stop the world garbage collectors have the worst possible latency (freezing the program for seconds or even minutes at a time) but near-optimal throughput.
All of the major production GCs today provide a middle ground, typically with generational collection with the per-thread nursery generations collected in batches and incremental or concurrent collection of the shared old generation. Thus, only nursery collections incur pauses and nursery size is bounded so pause times are kept low, e.g. 10-100ms in .NET with the workstation GC.
For a simple GC algorithm that never pauses, see Baker's Treadmill. For more information on garbage collection I highly recommend the Memory Management Reference and the Garbage Collection Handbook.
There is a lot of misinformation in the other answers here. Jon Skeet wrote some source code and started discussing it from the point of view of garbage collection. You need to be very careful doing this because there is little correspondence between source code and what the GC sees. The compiler does instruction block rearrangements, register allocation, promotion and so on, all of which affect what is visible to the GC at run time. In particular, scope in source code is not carried through to compiled code and is typically replaced with the related concept of liveness. Jon also wrote that you must pause in order to get the global roots. That is not strictly true although it is the most efficient way to get the global roots and the resulting pause is almost always tiny (sub-millisecond) because you're just copying less than a kB of stack from each thread.
Powerlord wrote that moving collectors must block reads and, therefore, all threads that read. This is also not true. The simplest counter example is immutable data: referential transparency means you can read from any copy safely.
Kico wrote that pauses are required to determine reachability. This is also not true. See Dijkstra's research about "on-the-fly" collectors and any recent real-time GC such as Staccato.
Jerry Coffin wrote the best answer, but moving isn't the reason GCs pause. There are GCs that don't move but do pause (e.g. HLVM's) and those that do move but don't pause (e.g. Staccato).
Modern garbage collectors (in .NET and Java, anyway) don't actually "stop the world" - they do all kinds of clever things to collect concurrently.
However, you might want to consider a situation like this:
object x = null;
object y = new object();
...
x = y;
y = null;
Now, suppose the GC looks at x, then the lines below the ... run, and then the GC looks at y - it won't have seen any live objects... but the object should still be live.
Basically there needs to be a certain amount of pausing in order to get a consistent set of references. Then there's compaction, reference reassignment etc. However, it isn't nearly as bad as it used to be in terms of requiring everything to be stopped for the whole of the GC cycle. It does, however, get painful to think about :)
In addition to what Kico Lobo said, Garbage Collectors can also move things around in memory.
Therefore, they don't just have to block threads that write to memory, but also threads that read from memory.
Which is every thread.
Most GCs stop execution because objects can move in memory during a collection cycle (at least with most reasonably recent designs). That means either reading or writing almost any object at the wrong time can cause a problem.
There are collectors that have been designed around the idea of just blocking reads (or writes) to the specific parts of memory being modified at a given time, so as long as execution only uses objects that aren't (currently) being moved around, it can proceed unhindered. The problem is that most typical hardware doesn't provide efficient support for this, so even though they work in principle, they're fairly inefficient in practice. There has been at least one attempt at adapting that type of algorithm to use the write protection available in a typical paging unit, but I'm not aware of its having been used for much other than research and experimentation.
The primary alternative is to make the collector incremental -- i.e. have it do only a small amount of work at a time, so even though other execution gets stopped, it only has to stop for a little while at any given time.
With multi-core machines becoming so common, however, I'd expect to see more work put into garbage collection algorithms that can run in parallel with other execution. Up until recently, the primary emphasis was on minimizing the total time/effort spent on garbage collection. The growing number of cores available is likely to (often) mean that doing more total work in garbage collection may be easily justified, if doing so allows the mainstream of the code to run with fewer hindrances.
Edit: You might want to read Paul Wilson's Survey of Uniprocessor Garbage Collection Techniques. This isn't definitive (especially any more, given its age), but it's at least a reasonable starting point.
Because that's the only way it can ensure that the references it is going to clean are not being used by anyone else.
If it didn't freeze execution, it could not ensure that.
Let's say you want to write a high-performance method which processes a large data set.
Why shouldn't developers have the ability to turn on manual memory management instead of being forced to move to C or C++?
void Process()
{
    unmanaged
    {
        Byte[] buffer;
        while (true)
        {
            buffer = new Byte[1024000000];
            // process
            delete buffer;
        }
    }
}
Because allowing you to manually delete a memory block while there may still be references to it (and the runtime has no way of knowing that without doing a GC cycle) can produce dangling pointers, and thus break memory safety. GC languages are generally memory-safe by design.
That said, in C#, in particular, you can do what you want already:
void Process()
{
    unsafe
    {
        // requires: using System.Runtime.InteropServices;
        byte* buffer;
        while (true)
        {
            buffer = (byte*)Marshal.AllocHGlobal(1024000000);
            // process
            Marshal.FreeHGlobal((IntPtr)buffer);
        }
    }
}
Note that, as in C/C++, you have full pointer arithmetic for raw pointer types in C# - so buffer[i] or buffer+i are valid expressions.
If you need high performance and detailed control, maybe you should write what you're doing in C or C++. Not all languages are good for all things.
Edited to add: A single language is not going to be good for all things. If you add up all the useful features in all the good programming languages, you're going to get a really big mess, far worse than C++, even if you can avoid inconsistency.
Features aren't free. If a language has a feature, people are likely to use it. You won't be able to learn C# well enough without learning the new C# manual memory management routines. Compiler teams are going to implement it, at the cost of other compiler features that are useful. The language is likely to become difficult to parse like C or C++, and that leads to slow compilation. (As a C++ guy, I'm always amazed when I compile one of our C# projects. Compilation seems almost instantaneous.)
Features conflict with each other, sometimes in unexpected ways. C90 can't do as well as Fortran at matrix calculations, since the possibility that C pointers are aliased prevents some optimizations. If you allow pointer arithmetic in a language, you have to accept its consequences.
You're suggesting a C# extension to allow manual memory management, and in a few cases that would be useful. That would mean that memory would have to be allocated in separate ways, and there would have to be a way to tell manually managed memory from automatically managed memory. Suddenly, you've complicated memory management, there's more chance for a programmer to screw up, and the memory manager itself is more complicated. You're gaining a little performance that matters in a few cases in exchange for more complication and slower memory management in all cases.
It may be that at some time we'll have a programming language that's good for almost all purposes, from scripting to number crunching, but there's nothing popular that's anywhere near that. In the meantime, we have to be willing to accept the limitations of using only one language, or the challenge of learning several and switching between them.
In the example you posted, why not just erase the buffer and re-use it?
The .NET garbage collector is very very good at working out which objects aren't referenced anymore and freeing the associated memory in a timely manner. In fact, the garbage collector has a special heap (the large object heap) in which it puts large objects like this, that is optimized to deal with them.
On top of this, not allowing references to be explicitly freed simply removes a whole host of bugs with memory leaks and dangling pointers, that leads to much safer code.
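To sketch the suggestion (same buffer size as in the question, with the allocation hoisted out of the loop):

void Process()
{
    byte[] buffer = new byte[1024000000];   // allocated once, lands on the large object heap
    while (true)
    {
        Array.Clear(buffer, 0, buffer.Length);   // reset before reuse
        // process
    }
}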
Freeing each unused block individually, as done in a language with explicit memory management, can be more expensive than letting a garbage collector do it, because the GC can use copying schemes that spend time linear in the number of blocks left alive (or the number of recent blocks left alive), instead of having to handle each dead block.
The same reason most kernels won't let you schedule your own threads. Because 99.99+% of time you don't really need to, and exposing that functionality the rest of the time will only tempt you to do something potentially stupid/dangerous.
If you really need fine grain memory control, write that section of code in something else.
For example, say one were to include a 'delete' keyword in C# 4. Would it be possible to guarantee that you'd never have wild pointers, but still be able to rely on the garbage collector, due to the reference-based system?
The only way I could see it possibly happening is if instead of references to memory locations, a reference would be an index to a table of pointers to actual objects. However, I'm sure that there'd be some condition where that would break, and it'd be possible to break type safety/have dangling pointers.
EDIT: I'm not talking about just .net. I was just using C# as an example.
You can - kind of: make your object disposable, and then dispose it yourself.
A manual delete is unlikely to improve memory performance in a managed environment. It might help with unmanaged resources, which is what Dispose is all about.
I'd rather have implementing and consuming disposable objects made easier. I have no consistent, complete idea of what this should look like, but managing unmanaged resources is a verbose pain under .NET.
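For illustration, a minimal sketch of the usual dispose pattern - the handle field and its release are placeholders for whatever unmanaged resource you actually hold:

using System;

class NativeResource : IDisposable
{
    private IntPtr handle;    // hypothetical unmanaged handle
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // the finalizer safety net is no longer needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        // release the unmanaged resource behind 'handle' here
        disposed = true;
    }

    ~NativeResource()
    {
        Dispose(false);   // safety net only; the GC decides when (or whether) this runs
    }
}

// Consumed deterministically:
// using (var r = new NativeResource()) { /* use r */ }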
An idea for implementing delete:
delete tags an object for manual deletion. At the next garbage collection cycle, the object is removed and all references to it are set to null.
It sounds cool at first (at least to me), but I doubt it would be useful.
This isn't particularly safe, either - e.g. another thread might be busy executing a member method of that object; such a method would need to throw, e.g. when accessing object data.
With garbage collection, as long as you hold a reference to the object, it stays alive. With manual delete you can't guarantee that.
Example (pseudocode):
obj1 = new instance;
obj2 = obj1;
//
delete obj2;
// obj1 now references the twilightzone.
Just to be short, combining manual memory management with garbage collection defeats the purpose of GC. Besides, why bother? And if you really want to have control, use C++ and not C#. ;-).
The best you could get would be a partition into two “hemispheres” where one hemisphere is managed and can guarantee the absence of dangling pointers. The other hemisphere has explicit memory management and gives no guarantees. These two can coexist, but no, you can't give your strong guarantees to the second hemisphere. All you could do is to track all pointers. If one gets deleted, then all other pointers to the same instance could be set to zero. Needless to say, this is quite expensive. Your table would help, but introduce other costs (double indirection).
Chris Sells also discussed this on .NET Rocks. I think it was during his first appearance but the subject might have been revisited in later interviews.
http://www.dotnetrocks.com/default.aspx?showNum=10
My first reaction was: why not? I can't imagine that what you want to do is something as obscure as just leaving an unreferenced chunk out on the heap to find it again later on, as if a four-byte pointer to the heap were too much to maintain to keep track of this chunk.
So the issue is not leaving unreferenced memory allocated, but intentionally disposing of memory still in reference. Since garbage collection performs the function of marking the memory free at some point, it seems that we should just be able to call an alternate sequence of instructions to dispose of this particular chunk of memory.
However, the problem lies here:
String s = "Here is a string.";
String t = s;
String u = s;
junk( s );
What do t and u point to? In a strict reference system, t and u should be null. So that means that you have to not only do reference counting, but perhaps tracking as well.
However, I can see that you should be done with s at this point in your code. So junk can set the reference to null and pass it to the sweeper with a sort of priority code. The GC could be activated for a limited run, and the memory freed only if not reachable. So we can't explicitly free anything that somebody has coded to use in some way again. But if s is the only reference, then the chunk is deallocated.
So, I think it would only work with a limited adherence to the explicit side.
It's possible, and already implemented, in non-managed languages such as C++. Basically, you implement or use an existing garbage collector: when you want manual memory management, you call new and delete as normal, and when you want garbage collection, you call GC_MALLOC or whatever the function or macro is for your garbage collector.
See http://www.hpl.hp.com/personal/Hans_Boehm/gc/ for an example.
Since you were using C# as an example, maybe you only had in mind implementing manual memory management in a managed language, but this is to show you that the reverse is possible.
If the semantics of delete on an object's reference were that all other references to that object become null, then you could do it with 2 levels of indirection (1 more than you hint). Though note that while the underlying object would be destroyed, a fixed amount of information (enough to hold a reference) must be kept live on the heap.
All references a user uses would reference a hidden reference (presumably living on the heap) to the real object. When doing some operation on the object (such as calling a method, or relying on its identity, such as using the == operator), the reference the programmer uses would dereference the hidden reference it points to. When deleting an object, the actual object would be removed from the heap, and the hidden reference would be set to null. Thus the references programmers hold would evaluate to null.
It would be the GC's job to clean out these hidden references.
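A rough C# sketch of that hidden-reference idea (the names are made up for illustration): every holder keeps a Handle<T>, the handle keeps the only direct reference to the target, and Delete() nulls it so all holders see the deletion. The small handle object itself stays behind for the GC to collect.

sealed class Handle<T> where T : class
{
    private T target;

    public Handle(T target) { this.target = target; }

    // Every access goes through one extra dereference.
    public T Target => target ?? throw new NullReferenceException("object was deleted");

    public void Delete() => target = null;   // all holders of this handle now see "deleted"
}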
This would help in situations with long-lived objects. Garbage collection works well when objects are used for short periods of time and become unreferenced quickly. The problem is when some objects live for a long time. The only way to clean them up is to perform a resource-intensive garbage collection.
In these situations, things would work much easier if there was a way to explicitly delete objects, or at least a way to move a graph of objects back to generation 0.
Yes ... but with some abuse.
C# can be abused a little to make that happen.
If you're willing to play around with the Marshal class, StructLayout attribute and unsafe code, you could write your very own manual memory manager.
You can find a demonstration of the concept here: Writing a Manual Memory Manager in C#.
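As a taste of what such abuse looks like - a sketch under the assumption of a simple blittable struct; the type and member names are made up - you can place a value in unmanaged memory with Marshal and free it explicitly:

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct Point3 { public double X, Y, Z; }

static class ManualAlloc
{
    public static IntPtr New(Point3 value)
    {
        IntPtr p = Marshal.AllocHGlobal(Marshal.SizeOf<Point3>());
        Marshal.StructureToPtr(value, p, false);   // copy into unmanaged memory
        return p;
    }

    public static Point3 Read(IntPtr p) => Marshal.PtrToStructure<Point3>(p);

    public static void Delete(IntPtr p) => Marshal.FreeHGlobal(p);   // the explicit "delete"
}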