I was not trying to upset you. I was attempting to make sure that I was crystal clear. I also attempted to avoid making premature categorizations; for instance, I was about to just say that OO languages define destructors when I realized that, AFAIK, JavaScript does not. How many others are the same? How many stripped-down little languages call themselves OO? I have NFC.
I am sorry that this made you feel talked-down to. That was not my intention...
I am a frigging OO programmer, you know; I bloody well know what a destructor is! My question was only and specifically about Jim's expression "guaranteed destructor timing" -- which was used, both in his original post and several subsequent ones (including yours that I'm replying to now), in a confusing and self-contradictory way that showed that it isn't really destructors you're talking about. Thankfully, Scott began to clear this up a little... And I'll spell it out all the way for you, so you won't have to embarrass yourself like this again.
[explanation of alternate strategies for freeing resources at block exit deleted]
Geez, when you decide to make yourself look silly, you don't go halfway, do you? The fact that the very first post by gfolkert was not necessarily talking about destructors was indeed clarified at [link|http://z.iwethey.org/forums/render/content/show?contentid=109997|http://z.iwethey.org...?contentid=109997]. If you will read that post, see the techniques, and note who wrote it, you might realize that I am perfectly aware of that. You might also start acknowledging the right person.
Which leads to the common sense advice that before you write a rant,
review the thread.
Now there is one thing that Jim's response to you reminded me of. C++ differentiates between stack-based and heap-based objects, and manages them differently. Since all of the systems that I have dealt with extensively keep data in the heap, it slipped my mind that he would include C++ because of how it handles stack-based objects. My bad. (Though many C++ programmers do use automatic reference counting in the form of smart pointers. And that is what I was thinking of in my response.) He happened to walk into a debate that is currently active in the Perl world, because Perl is going from reference counting (Perl 5) to true garbage collection (Perl 6), and so there have been interminable debates over the exact value of guaranteed destructor timing.
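To make "guaranteed destructor timing" concrete, here is a small sketch in Python. CPython happens to use pure reference counting (like Perl 5), so an object's destructor runs the instant its count hits zero -- which is exactly the guarantee a tracing collector gives up. The Handle class and its freed_log are invented for illustration; the immediate-__del__ behaviour is a CPython implementation detail, not something the Python language promises.

```python
class Handle:
    """Hypothetical resource wrapper; __del__ records when it actually runs."""
    freed_log = []

    def __init__(self, name):
        self.name = name

    def __del__(self):
        # Under pure reference counting this fires the moment the last
        # reference disappears -- not at some later collection pass.
        Handle.freed_log.append(self.name)

h = Handle("db-connection")
assert Handle.freed_log == []                  # still referenced, not yet freed
del h                                          # refcount drops to zero here...
assert Handle.freed_log == ["db-connection"]   # ...and the destructor ran immediately
```

Under a true GC (as in Java, or Perl 6's design), the second assertion could fail: the object would be freed "eventually", whenever a collection happens to run.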
And answering one of your sarcastic comments: no, reference counting is not the same as true GC. Systems with true GC should be able to collect all garbage -- including garbage that refers to itself through circular references. Reference counting never collects circular references unless you manually break the cycles. (Weak references make that easier.)
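Here is a minimal sketch of that failure mode, again in CPython, which conveniently combines reference counting with a supplementary tracing cycle collector that you can switch off through the gc module. The Node class is invented for illustration:

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.other = None

# Build a two-node cycle, and keep a weak reference so we can watch it die.
a, b = Node(), Node()
a.other, b.other = b, a
watcher = weakref.ref(a)

gc.disable()     # turn off the cycle collector, leaving pure reference counting
del a, b         # drop the only external references to the cycle

# Each node still holds the other's count above zero, so reference
# counting alone never frees them: a silent leak.
assert watcher() is not None

gc.enable()
gc.collect()     # the tracing collector finds the unreachable cycle

assert watcher() is None   # now the garbage is actually gone
```

Swap `b.other = a` for a weak reference and the cycle never forms in the first place -- which is the manual adjustment mentioned above, and why it is extra work the programmer has to remember.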
As for whether true GC is better or worse than reference counting, people debate that forever. To forestall pointless debate here, allow me to summarize:
- Reference counting is easier to understand.
- Reference counting offers more deterministic behaviour.
- Once implemented, true GC is easier to maintain. (In a large software project with reference counting, it is virtually inevitable that bugs will crop up in the counting, leading to memory leaks. This is far less of an issue with true GC.)
- GC handles circular references, which can easily come up in complex data structures; reference counting does not.
- Neither offers clearly superior performance. Or rather, each clearly outperforms the other in the right circumstances. GC runs are not cheap. But neither are cache misses caused by constantly having to update reference counts in random places in memory.
- People tend to resist moving away from whichever one they currently use.
But like it or not, true GC is an idea that is working its way into more and more environments as time goes on. Higher-level languages favoured by those who are into "research languages" have long favoured true GC. See Lisp, Smalltalk, Haskell, and OCaml for examples. Mainstream languages created by major software vendors in recent years have followed suit. This is true whether you are talking about the Microsoft world (VB, C#, etc), web development (JavaScript), or Sun's world (Java). (Before you get annoyed at my not including Delphi, I didn't include Perl either - and Perl at this point probably has more users than Delphi.) There are some preliminary signs that scripting languages are starting to head that way (Ruby has it, some forms of Python do, the next generation of Perl will). Older mainstream languages and direct derivations thereof (C, C++, Pascal, etc) do not, and seem unlikely to change.
So why the trend? My take is that people who have built enough big systems get sick and tired of bugs in reference counting and eventually go with true GC because it is more reliable with less work. Of course I could be wrong, but that is the biggest argument given for true GC if you look up references on it, and it fits with comments from specific project leaders I know who have chosen to use it in their projects...
Cheers,
Ben
"good ideas and bad code build communities, the other three combinations do not"
- [link|http://archives.real-time.com/pipermail/cocoon-devel/2000-October/003023.html|Stefano Mazzocchi]
Edited by
ben_tilly
July 17, 2003, 10:50:14 AM EDT