Newsgroups: comp.lang.ada
Subject: Re: How come Ada isn't more popular?
From: Markus E Leypold
Date: Fri, 09 Feb 2007 23:05:23 +0100
Message-ID: <9rveib56n0.fsf@hod.lan.m-e-leypold.de>

"Randy Brukardt" writes:

>> Maciej Sobczak writes:
>>
>> > Still, there is a strong argument that for some class of algorithms it
>> > might be beneficial to be able to "drop on the floor" a bigger part of
>> > the graph altogether. [...] Reclaiming that abandoned part might require
>> > implementing the same tracking logic that GC already provides out of
>> > the box, and therefore the argument goes that the use of off-the-shelf
>> > GC can be beneficial for the memory-management aspect of such an
>> > algorithm. (Any thoughts on this?)
>>
>> My thoughts: not "might", but "definitely". Also, the advantage is not
>> restricted to some class of algorithms -- it is in fact the common case.
>
> YMMV, of course, but it's not my experience. It's rare to destroy only part
> of a data structure; it's usually the entire structure that needs to be
> discarded.
"it's usually the entire structure" => In a world without representation sharing. In one with representation sharing (and that makes the production of new trees from existing ones really efficient), one would have to traverse the whole subtree any time a tree referencing this subtree is deleted - if only to adjust the reference counters. In a world with GC you just overwrite one reference to some subtree. When the GC hits it just traverses the tree once (and then perhaps decides which parts to deallocate). Dependend on the application domain (e.g. if you produce and drop many shared representions between GC cycles) that might be much more efficient. >> You don't need a complex mesh for this kind of advantage to kick in, even >> regular tree cleanup is greatly simplified: just let go of the >> subbranch you no longer need, and avoid that whole cleanup traversal. > > I'm primarily interested in destroying the entire structure, and often I > need no destruction at all: the structures exist until the termination of > the program. This certainly is a special case? E.g a chess playing program (any playing program) would have to create and destroy game trees at a pretty good rate. > There's no point of earlier cleanup in such programs, and > surely no point in non-zero overhead to support such cleanup. I'm not aware > of any zero-overhead GC algorithm, because the basic tenet of GC is that you > can find all reachable objects: doing that requires some time and/or space > overhead. I can imagine to allocate some memory in a special, non garabage collected heap. Furthermore generational garbage collection AFAIK has the property to touch old objects seldom -- so there is not so much cost (as it might seem at first glance) for keeping large structures around unchanged. And if the program is short running you could also increase the garbage collection threshold(s), so that the collection never sets in. Regards -- Markus