From: Hans-Juergen Boehm <boehm@mti.sgi.com>
Newsgroups: comp.lang.ada
Subject: Re: Garbage Collection in Ada
Date: 1996/10/16
Organization: Silicon Graphics Inc., Mountain View, CA
Message-ID: <3265C383.167E@mti.sgi.com>
References: <01bbb910$f1e73f60$829d6482@joy.ericsson.se>
 <199610132138291604607@dialup101-6-14.swipnet.se>
 <19961014115513529729@dialup105-2-16.swipnet.se>
 <32652A7C.41C6@mti.sgi.com>

Robert Dewar wrote:
>
> No, there are general purpose allocators with much better performance.
> Just consider a simple binary allocator, at worst you waste half the
> space, i.e. a factor of 2.  But perhaps I don't understand your point.

N log K, where N is the size of the live data and K is the ratio
between the largest and smallest object size, is the best worst-case
space bound achievable by a general-purpose nonmoving allocator.
This is an old result due to Robson, cited in Knuth (I don't have the
reference handy).  This means a binary buddy system without
coalescing is asymptotically optimal for the worst case.  It does
poorly if your program allocates 1MB of 5 byte objects, deallocates
them, then allocates 1MB of 9 byte objects, deallocates them,
allocates 17 byte objects, etc.  (It's also less space efficient in
the typical case than other algorithms with worse worst-case
behavior.)

> Also, there are many programs which use storage only of a certain
> limited range of sizes, and it is quite easy to make general purpose
> allocators that handle this situation MUCH more efficiently.

Yes, to a point.  But many allocators do not.  The language spec
gives me no guarantees, and most allocator implementations don't even
bother to document such behaviour.  And most of my programs allocate
a wide variety of sizes anyway.

> In the
> limiting case consider the case where a program uses ONLY blocks
> of a fixed size.  You have zero fragmentation there, but a conservative
> GC can still hold on to an arbitrary number of blocks, so you cannot
> put a bound on the effective overuse of storage.

Yes, but my programs don't behave that way.

> It is quite easy to bound the fragmentation effect for a given
> application.

Only if I understand the allocator.  For some commonly used
allocators it's not clear anyone understands them well enough.

> It is almost impossible to bound the accidental
> holding-onto-storage effect of a conservative allocator, since
> for example, the particular floating-point values in use can
> result in holding onto storage one day, and not the next day,
> depending on where your program loads in memory.

But there are techniques that can bound the impact of such accidental
retention by ensuring that such a value causes at most a fixed,
one-time leak.
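For concreteness, here is a minimal sketch of the scenario, assuming
the usual conservative-collector C interface from gc.h (GC_INIT,
GC_MALLOC, and GC_gcollect); whether the block is actually retained
on a given run depends on the collector and platform:

#include <stdio.h>
#include <stdint.h>
#include <gc.h>

int main(void)
{
    void *p;
    uintptr_t disguise;

    GC_INIT();
    p = GC_MALLOC(64);           /* collectable 64-byte block      */
    disguise = (uintptr_t)p;     /* same bits, but now an integer  */
    p = NULL;                    /* drop the only real pointer     */

    GC_gcollect();
    /* A conservative scan of the stack sees `disguise`, cannot
     * prove it is not a pointer, and so may retain the dead block.
     * Blacklisting-style techniques bound the damage: once an
     * address is observed as a false reference, the allocator
     * avoids handing it out again, so each such value costs at
     * most one block, once.                                      */
    printf("false reference: %p\n", (void *)disguise);
    return 0;
}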
> Finally, there is one disadvantage of a conservative collector
> that you did not mention, namely the possibility that it will
> unexpectedly collect blocks that are in use.  This can result
> from either compiler optimizations, or code that cheats with
> pointers (code which may exist in practice, and which in
> particular typically exists in the conservative allocator
> itself!)

I addressed this in an earlier message that may have been lost.  It
is easy to define a minimally restricted subset of even C for which
this can't happen with the right compiler (e.g. lcc or gcc with a
suitable preprocessor).  Details are in my PLDI 96 paper.  (See
http://reality.sgi.com/employees/boehm_mti/papers/pldi96.ps.gz)
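To make "cheats with pointers" concrete, here is a hedged sketch of
the classic offender, an xor-linked list (an illustration assuming
the gc.h interface, not code from the paper).  The link field stores
the xor of two node addresses, so interior nodes are referenced only
by values that do not look like pointers:

#include <stdint.h>
#include <gc.h>

struct node {
    int data;
    uintptr_t link;    /* (uintptr_t)prev ^ (uintptr_t)next */
};

/* Push a node on the front of an xor-linked list. */
static struct node *xor_push(struct node *head, int data)
{
    struct node *n = GC_MALLOC(sizeof *n);
    n->data = data;
    n->link = (uintptr_t)head;          /* prev is NULL         */
    if (head != NULL)
        head->link ^= (uintptr_t)n;     /* fold in the new prev */
    return n;
}

int main(void)
{
    struct node *head = NULL;
    GC_INIT();
    head = xor_push(head, 1);
    head = xor_push(head, 2);
    head = xor_push(head, 3);
    /* The node holding 1 is now referenced only by the xor-ed
     * value inside the node holding 2.  A conservative collector
     * may reclaim it, even though traversal from `head` can still
     * reach it.                                                  */
    return 0;
}

A suitably restricted subset rules out such disguised pointers, and
the right compiler guarantees that its optimizations don't
manufacture them behind your back.

Standard disclaimer ...
Hans-Juergen Boehm
boehm@mti.sgi.com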