What "time precise" garbage collection algorithms do exist? - language-agnostic

Which garbage collection algorithms can recognize garbage objects as soon as they become garbage?
The only thing which comes to my mind is reference counting with an added cycle-search every time a reference count is decremented to a non-zero value.
Are there any other interesting collection algorithms which can achieve that? (Note that I'm asking out of curiosity only; I'm aware that all such collectors would probably be incredibly inefficient)
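For illustration, here is a rough C++ sketch of that idea (all names are invented, and actually freeing a detected cycle is omitted). Whenever a count drops to a non-zero value, it runs a conservative trial-deletion scan in the spirit of Bacon and Rajan's cycle collector:

#include <unordered_map>
#include <unordered_set>
#include <vector>

struct Obj {
    int refcount = 0;
    std::vector<Obj*> children;  // outgoing references held by this object
};

// Trial deletion: gather the subgraph reachable from `start`, subtract the
// reference counts contributed by edges inside that subgraph, and if nothing
// outside points in, the whole subgraph is cyclic garbage.
static bool is_cyclic_garbage(Obj* start) {
    std::unordered_set<Obj*> seen;
    std::unordered_map<Obj*, int> external;
    std::vector<Obj*> stack{start};
    while (!stack.empty()) {
        Obj* o = stack.back();
        stack.pop_back();
        if (!seen.insert(o).second) continue;
        external[o] = o->refcount;
        for (Obj* c : o->children) stack.push_back(c);
    }
    for (Obj* o : seen)
        for (Obj* c : o->children)
            if (seen.count(c)) --external[c];
    for (Obj* o : seen)
        if (external[o] > 0) return false;  // something outside still points in
    return true;                            // only the cycle keeps it alive
}

void release(Obj* o) {
    if (--o->refcount == 0) {
        for (Obj* c : o->children) release(c);
        delete o;
    } else if (is_cyclic_garbage(o)) {
        // the whole subgraph is garbage; free it here (omitted for brevity)
    }
}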

Though not a garbage collection algorithm itself, escape analysis allows reasoning about the lifetimes of objects. So if efficiency is a concern and objects need to be collected only in the "obvious" cases rather than in all cases, it can be handy. The basic idea is to perform static analysis of the program (at compile time, or at load time if compiled for a VM) and to figure out whether a newly created object may escape the routine it is created in (hence the name of the analysis). If the object is not passed anywhere else, not stored in global memory, not returned from the given routine, etc., it can be released before returning from that routine, or even earlier, at the place of its last use.
Objects that do not live longer than the associated method call can be allocated on the stack rather than on the heap, so they are removed from the garbage collection cycle at compile time, lowering pressure on the general GC.
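A tiny illustration of what such an analysis effectively proves (illustrative names; a real compiler does this on its intermediate representation, not on source code), in C++ where the stack placement is explicit:

struct Point { double x, y; };

double make_sum() {
    Point p{1.0, 2.0};  // never escapes: not returned, not stored globally,
                        // not passed to unknown code
    return p.x + p.y;   // only its value leaves the routine, so `p` can
                        // live on the stack and needs no GC at all
}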

Such a mechanism would be called "heap management", not garbage collection.
By definition, garbage collection is decoupled from heap management. This is because in some environments/applications it is more efficient to skip doing a "free" operation and keeping track of what is in use. Instead, every once in a while, you just go around, gather all the unreferenced nodes, and put them back on the free list.
== Addendum ==
I am being downvoted for attempting to correct the terminology of heap management with garbage collection. The Wikipedia article agrees with my usage, as does what I learned at university, even though that was several decades ago. Languages such as Lisp and Snobol created the need for garbage collection. Languages such as C don't provide such a heavy-duty runtime environment; instead they rely on the programmer to manage the cleanup of unused bits of memory and resources.

Related

How does a copying garbage collector ensure objects are not accessed while copied?

On collection, the garbage collector copies all live objects into another memory space, thus discarding all garbage objects in the process. A forward pointer to the copied object in new space is installed into the 'old' version of an object to ensure the collector updates all remaining references to the object correctly and doesn't erroneously copy the same object twice.
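As an illustration, a minimal C++ sketch of that forwarding scheme (layout and names are invented; real collectors pack the forwarding pointer into the object header):

#include <cstddef>
#include <cstring>

struct Header {
    Header* forward;    // null until the object has been copied
    std::size_t size;   // total object size in bytes, header included
};

static char to_space[1 << 20];  // toy bump-allocated to-space
static std::size_t bump = 0;

static Header* allocate_in_to_space(std::size_t size) {
    Header* p = reinterpret_cast<Header*>(to_space + bump);
    bump += size;               // alignment ignored in this sketch
    return p;
}

// Called for every reference the collector traces.
Header* forward_object(Header* obj) {
    if (obj->forward != nullptr)  // already copied: never copy twice,
        return obj->forward;      // just return the to-space address
    Header* copy = allocate_in_to_space(obj->size);
    std::memcpy(copy, obj, obj->size);
    copy->forward = nullptr;
    obj->forward = copy;          // install the forwarding pointer
    return copy;
}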
This obviously works quite well for stop-the-world collectors. However, since pause times are long with stop-the-world, nowadays most garbage collectors allow the mutator threads to run concurrently with the collector, only stopping the mutators for a short time to do the initial stack scan.
So how can the collector ensure that the 'old' version of an object is not accessed by the mutator while/after copying it? I imagine the mutators could check for the forward pointer with some sort of read barrier; however, this seems too costly to me since variables are read so often.
The Loaded Value Barrier implemented in Azul's Generational Pauseless Garbage Collector is an example of a solution to this problem. You can read about it in the article The Azul Garbage Collector posted on InfoQ in early 2011.
You pretty much need to use a read barrier or a write barrier. You're apparently already aware of read barriers so I won't try to get into them.
Write barriers work because as long as you prevent writes from happening, you simply don't care whether somebody accesses the old or the new copy of the data. You set the write barrier, copy the data, and then start adjusting pointers; since the write barrier ensures the two copies stay identical, reads of either copy are harmless in the meantime. Once you're done adjusting pointers, everything will be working with the new data, so you revoke the write barrier.
There has been some work done with using page protection bits to mark an area of memory as read-only to create a write-barrier on fairly standard hardware. At least the last time I looked into it, however, this was still pretty much at a proof of concept stage -- working, but too slow to be very practical.
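As a rough sketch of that protocol (hypothetical code assuming POSIX mprotect and a page-aligned region; the SIGSEGV handler that makes a trapped mutator wait is omitted):

#include <cstddef>
#include <cstring>
#include <sys/mman.h>

void relocate_region(void* from, void* to, std::size_t len) {
    mprotect(from, len, PROT_READ);               // 1. set the barrier: writes trap
    std::memcpy(to, from, len);                   // 2. copy; since no writes can
                                                  //    happen, the copies can't diverge
    // 3. adjust all pointers from `from` to `to` (omitted)
    mprotect(from, len, PROT_READ | PROT_WRITE);  // 4. revoke the barrier
}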
Disclaimer: this relates to Java's Shenandoah GC only.
Your reasons are absolutely spot on for Shenandoah! Some details here for example.
In the not so long ago days, Shenandoah had write and read barriers for all primitive and reference types. The read barrier was actually just a single indirection via the forwarding pointer, as you assume. Because reads vastly outnumber writes, these read barriers were cumulatively more expensive than the (individually far more complicated) write barriers, simply because there are so darn many of them.
But things changed in JDK 13, when a Load Reference Barrier was implemented. Now only loads have a barrier; writes happen the usual way. If you think about it, this makes perfect sense: in order to write to an object's field, you need to read that object first. So if your barrier preserves the "to-space invariant", you will always read from the most recent and correct copy of the object, without needing a separate read barrier with a forwarding pointer.
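A rough sketch of what such a load barrier boils down to (illustrative C++, not Shenandoah's actual code; here the forwarding pointer is self-referential while an object is not being moved):

struct ObjHeader {
    ObjHeader* forwardee;  // points to self unless the object has been copied
    // ... payload ...
};

// Every reference *load* goes through this; writes need no barrier because
// the reference being written through was itself loaded through the barrier
// and therefore already points into to-space.
inline ObjHeader* load_reference(ObjHeader* ref) {
    if (ref != nullptr && ref->forwardee != ref)
        return ref->forwardee;  // follow the forwarding pointer
    return ref;                 // fast path: object is not being moved
}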

How does a tracing generational GC determine garbage in the young generation?

Let's assume we have a simple generational GC with only two generations, the "old" generation (objects that survived at least one collection) and the "young" generation (newly allocated). So how exactly would the GC determine a "young" object to be garbage without tracing the whole reference graph from the very roots? Or to put it a different way: what does the GC choose as roots for the trace when intending to collect the "young" generation only?
I'm interested in the general method but in specific examples from existing implementations as well.
Thanks!
There are a few techniques, which all boil down to maintaining knowledge of which old-gen objects (or ranges of old-gen memory) may contain references to young objects.
Pretty much all implementations I can think of maintain this knowledge by adding write barriers. Those write barriers trigger when a young-gen reference is stored in an old-gen object, and thereby cause execution of a small code snippet which remembers the new reference.
To store that knowledge, some GCs use card marking, where a compact bitmap is used to mark small-ish memory blocks as "contains references to younger generations". Others maintain explicit "remembered sets", which do something similar for individual objects. In both cases, young-gen collections then add the objects in the remembered set (or in the memory blocks marked by the card table) to the roots.
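As a rough sketch of a card-marking write barrier (sizes and names are illustrative):

#include <cstddef>
#include <cstdint>

constexpr std::size_t kCardShift = 9;  // one card per 512-byte block
constexpr std::size_t kHeapBytes = std::size_t{1} << 24;
static std::uint8_t card_table[kHeapBytes >> kCardShift];
static char* heap_base;                // set when the heap is mapped

// Runs on every reference store into the heap.
inline void write_barrier(void** slot, void* new_ref) {
    *slot = new_ref;                   // the actual store
    std::size_t card =
        (reinterpret_cast<char*>(slot) - heap_base) >> kCardShift;
    card_table[card] = 1;  // "this block may hold old->young references"
}

A young-gen collection then scans only the objects in marked cards and treats the old-to-young references it finds there as additional roots.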
As for specific implementations:
Mono uses remembered sets.
PyPy has several GCs, the newest and shiniest (Minimark) uses remembered sets, with the addition of card marking for individual large arrays.
.NET uses card marking.

Why are RAII and garbage collection mutually exclusive?

While I think I understand the gist of the problem (i.e. a good GC tracks objects, not scope), I don't know enough about the subject to convince others.
Can you give me an explanation on why there are no garbage-collected languages with deterministic destructors?
They are NOT mutually exclusive. Feel free to use C++ with libgc (the Boehm-Demers-Weiser collector). You can still use RAII, smart pointers, and manual deletion, but with the GC running you can also just "forget" to delete some objects.
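A minimal sketch of that mixed mode (assuming libgc is installed and you link with -lgc):

#include <gc.h>     // Boehm-Demers-Weiser collector
#include <cstdio>

int main() {
    GC_INIT();
    int* manual = new int(1);  // ordinary heap: you must delete this
    int* traced = static_cast<int*>(GC_MALLOC(sizeof(int)));
    *traced = 2;               // collector's heap: safe to just "forget"
    std::printf("%d %d\n", *manual, *traced);
    delete manual;             // RAII and manual deletion still work as usual
    return 0;                  // *traced is reclaimed by the collector
}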
@Andy's answer regarding resources being disposed too late misses the important point: it isn't the delay in releasing resources which is crucial semantically, but rather the order of release.
The reason GC tends not to order release well is that it would require a topological sort on ordering requirements (dependencies) and that's an expensive algorithm.
Nevertheless Ocaml GC has an interesting facility where you can attach a finaliser to an object. If the object becomes unreachable the finaliser is run, however the object is not deleted (because the finaliser could make it reachable again: in that case you can even attach another finaliser). These finalisers can provide some control over ordering.
From Wikipedia, after noting that tracing garbage collectors are the most common type:
"Tracing garbage collection is not deterministic. An object which becomes eligible for garbage collection will usually be cleaned up eventually, but there is no guarantee when (or even if) that will happen."
Therefore, relying on RAII could lead to the resource being disposed of too late.
As a result, for example, Java has a guideline to "avoid finalizers" (Item 6 in "Effective Java" by Joshua Bloch): "Nothing time-critical should ever be done in a finalizer."
The garbage collector can't run all the time (refcounting gets closer, but generally doesn't count as garbage collection), so it doesn't even try; it's plainly impractical. Therefore, there is an inevitable delay between an object becoming unreachable (e.g. because the only reference goes out of scope) and the GC collecting it, possibly firing a finalizer. This delay is not deterministic, unless you force the GC into a deterministic schedule - and then deterministic destruction in the strictest sense of the word is possible, although still impractical, because it gets pretty close to "GC running all the time".
So GC and deterministic cleanup are mutually exclusive because the GC does all the cleanup, and it cannot afford to be deterministic; it must instead maximize its efficiency.

Why do garbage collectors freeze execution?

I was thinking about garbage collection on the way home, and I began wondering, why does the garbage collector totally freeze execution of a program? Personally I would have designed it to block any threads which try to allocate a new object, but threads which were running would be left alone.
I can't imagine any situation where this would be a problem compared to how a garbage collector currently works.
I was thinking about garbage collection on the way home, and I began wondering, why does the garbage collector totally freeze execution of a program?
There is a trade-off between latency and throughput in GC design. You can either process heap-allocated blocks individually ("incremental") or you can batch them up and process them all at the same time ("stop the world"). Fully incremental collection never totally freezes a program and it has very low latency, but it also has very poor throughput. Stop-the-world garbage collectors have the worst possible latency (freezing the program for seconds or even minutes at a time) but near-optimal throughput.
All of the major production GCs today provide a middle ground, typically with generational collection with the per-thread nursery generations collected in batches and incremental or concurrent collection of the shared old generation. Thus, only nursery collections incur pauses and nursery size is bounded so pause times are kept low, e.g. 10-100ms in .NET with the workstation GC.
For a simple GC algorithm that never pauses, see Baker's Treadmill. For more information on garbage collection I highly recommend the Memory Management Reference and the Garbage Collection Handbook.
There is a lot of misinformation in the other answers here. Jon Skeet wrote some source code and started discussing it from the point of view of garbage collection. You need to be very careful doing this because there is little correspondence between source code and what the GC sees. The compiler does instruction block rearrangements, register allocation, promotion and so on, all of which affect what is visible to the GC at run time. In particular, scope in source code is not carried through to compiled code and is typically replaced with the related concept of liveness. Jon also wrote that you must pause in order to get the global roots. That is not strictly true although it is the most efficient way to get the global roots and the resulting pause is almost always tiny (sub-millisecond) because you're just copying less than a kB of stack from each thread.
Powerlord wrote that moving collectors must block reads and, therefore, all threads that read. This is also not true. The simplest counter example is immutable data: referential transparency means you can read from any copy safely.
Kico wrote that pauses are required to determine reachability. This is also not true. See Dijkstra's research on "on-the-fly" collectors and any recent real-time GC such as Staccato.
Jerry Coffin wrote the best answer, but moving isn't the reason GCs pause. There are GCs that don't move but do pause (e.g. HLVM's) and those that do move but don't pause (e.g. Staccato).
Modern garbage collectors (in .NET and Java, anyway) don't actually "stop the world" - they do all kinds of clever things to collect concurrently.
However, you might want to consider a situation like this:
object x = null;
object y = new object();
...
x = y;
y = null;
Now, suppose the GC looks at x, then the lines below the ... run, and then the GC looks at y - it won't have seen any live objects... but the object should still be live.
Basically there needs to be a certain amount of pausing in order to get a consistent set of references. Then there's compaction, reference reassignment etc. However, it isn't nearly as bad as it used to be in terms of requiring everything to be stopped for the whole of the GC cycle. It does, however, get painful to think about :)
In addition to what Kico Lobo said, Garbage Collectors can also move things around in memory.
Therefore, they don't just have to block threads that write to memory, but also threads that read from memory.
Which is every thread.
Most GCs stop execution because objects can move in memory during a collection cycle (at least with most reasonably recent designs). That means either reading or writing almost any object at the wrong time can cause a problem.
There are collectors that have been designed around the idea of just blocking reads (or writes) to the specific parts of memory being modified at a given time, so as long as execution only uses objects that aren't (currently) being moved around, it can proceed unhindered. The problem is that most typical hardware doesn't provide efficient support for this, so even though they work in principle, they're fairly inefficient in practice. There has been at least one attempt at adapting that type of algorithm to use the write protection available in a typical paging unit, but I'm not aware of its having been used for much other than research and experimentation.
The primary alternative is to make the collector incremental -- i.e. have it do only a small amount of work at a time, so even though other execution gets stopped, it only has to stop for a little while at any given time.
With multi-core machines becoming so common, however, I'd expect to see more work put into garbage collection algorithms that can run in parallel with other execution. Up until recently, the primary emphasis was on minimizing the total time/effort spent on garbage collection. The growing number of cores available is likely to (often) mean that doing more total work in garbage collection may be easily justified, if doing so allows the mainstream of the code to run with fewer hindrances.
Edit: You might want to read Paul Wilson's Survey of Uniprocessor Garbage Collection Techniques. This isn't definitive (especially any more, given its age), but it's at least a reasonable starting point.
Because that's the only way it can ensure that the references it is going to clean are not being used by anyone else.
If it didn't freeze execution, it could not ensure that.

Can garbage collection coexist with explicit memory management?

For example, say one was to include a 'delete' keyword in C# 4. Would it be possible to guarantee that you'd never have wild pointers, but still be able to rely on the garbage collector, due to the reference-based system?
The only way I could see it possibly happening is if instead of references to memory locations, a reference would be an index to a table of pointers to actual objects. However, I'm sure that there'd be some condition where that would break, and it'd be possible to break type safety/have dangling pointers.
EDIT: I'm not talking about just .net. I was just using C# as an example.
You can - kind of: make your object disposable, and then dispose of it yourself.
A manual delete is unlikely to improve memory performance in a managed environment. It might help with unmanaged resources, which is what Dispose is all about.
I'd rather have implementing and consuming disposable objects made easier. I have no consistent, complete idea of what this should look like, but managing unmanaged resources is a verbose pain under .NET.
An idea for implementing delete:
delete tags an object for manual deletion. At the next garbage collection cycle, the object is removed and all references to it are set to null.
It sounds cool at first (at least to me), but I doubt it would be useful.
This isn't particularly safe, either - e.g. another thread might be busy executing a member method of that object; such a method would need to throw, e.g., when accessing the object's data.
With garbage collection, as long as you have a reference to the object, it stays alive. With manual delete you can't guarantee that.
Example (pseudocode):
obj1 = new instance;
obj2 = obj1;
//
delete obj2;
// obj1 now references the twilight zone.
Just to be short, combining manual memory management with garbage collection defeats the purpose of GC. Besides, why bother? And if you really want to have control, use C++ and not C#. ;-).
The best you could get would be a partition into two "hemispheres", where one hemisphere is managed and can guarantee the absence of dangling pointers. The other hemisphere has explicit memory management and gives no guarantees. These two can coexist, but no, you can't extend the strong guarantees to the second hemisphere. All you could do is track all pointers: if one object gets deleted, all other pointers to the same instance could be set to null. Needless to say, this is quite expensive. Your table would help, but introduces another cost (double indirection).
Chris Sells also discussed this on .NET Rocks. I think it was during his first appearance but the subject might have been revisited in later interviews.
http://www.dotnetrocks.com/default.aspx?showNum=10
My first reaction was: why not? I can't imagine that what you want to do is something as obscure as just leaving an unreferenced chunk out on the heap to find again later on - as if a four-byte pointer into the heap were too much to keep around to track this chunk.
So the issue is not leaving unreferenced memory allocated, but intentionally disposing of memory still in reference. Since garbage collection performs the function of marking the memory free at some point, it seems that we should just be able to call an alternate sequence of instructions to dispose of this particular chunk of memory.
However, the problem lies here:
String s = "Here is a string.";
String t = s;
String u = s;
junk( s );
What do t and u point to? In a strict reference system, t and u should be null. So that means that you have to not only do reference counting, but perhaps tracking as well.
However, I can see that you should be done with s at this point in your code. So junk can set the reference to null, and pass it to the sweeper with a sort of priority code. The gc could be activated for a limited run, and the memory freed only if not reachable. So we can't explicitly free anything that somebody has coded to use in some way again. But if s is the only reference, then the chunk is deallocated.
So, I think it would only work with a limited adherence to the explicit side.
It's possible, and already implemented, in non-managed languages such as C++. Basically, you implement or use an existing garbage collector: when you want manual memory management, you call new and delete as normal, and when you want garbage collection, you call GC_MALLOC or whatever the function or macro is for your garbage collector.
See http://www.hpl.hp.com/personal/Hans_Boehm/gc/ for an example.
Since you were using C# as an example, maybe you only had in mind implementing manual memory management in a managed language, but this is to show you that the reverse is possible.
If the semantics of delete on an object's reference were to make all other references to that object null, then you could do it with two levels of indirection (one more than you hint at). Note, though, that while the underlying object would be destroyed, a fixed amount of information (enough to hold a reference) must be kept alive on the heap.
All references a user uses would reference a hidden reference (presumably living on the heap) to the real object. When doing some operation on the object (such as calling a method or relying on its identity, such as using the == operator), the reference the programmer uses would dereference the hidden reference it points to. When deleting an object, the actual object would be removed from the heap, and the hidden reference would be set to null. Thus the references programmers see evaluate to null.
It would be the GC's job to clean out these hidden references.
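A rough C++ sketch of this double-indirection scheme (all names are hypothetical):

#include <vector>

struct Handle { void** slot; };  // user code only ever holds a Handle

static std::vector<void**> handle_table;  // what the GC would sweep

Handle make_handle(void* obj) {
    void** slot = new void*(obj);  // the hidden heap cell
    handle_table.push_back(slot);
    return Handle{slot};
}

template <typename T>
T* deref(Handle h) {               // every use pays one extra indirection
    return static_cast<T*>(*h.slot);
}

template <typename T>
void delete_object(Handle h) {
    delete static_cast<T*>(*h.slot);  // destroy the real object
    *h.slot = nullptr;                // every outstanding Handle now sees null
}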
This would help in situations with long-lived objects. Garbage collection works well when objects are used for short periods of time and become unreferenced quickly. The problem is when some objects live for a long time; the only way to clean them up is to perform a resource-intensive garbage collection.
In these situations, things would work much easier if there was a way to explicitly delete objects, or at least a way to move a graph of objects back to generation 0.
Yes ... but with some abuse.
C# can be abused a little to make that happen.
If you're willing to play around with the Marshal class, StructLayout attribute and unsafe code, you could write your very own manual memory manager.
You can find a demonstration of the concept here: Writing a Manual Memory Manager in C#.