From: Hans-Juergen Boehm <boehm@mti.sgi.com>
Subject: Re: Garbage Collection in Ada
Date: 1996/10/17
Message-ID: <3266C15E.2781@mti.sgi.com>
References: <01bbb910$f1e73f60$829d6482@joy.ericsson.se> <199610132138291604607@dialup101-6-14.swipnet.se> <19961014115513529729@dialup105-2-16.swipnet.se> <32652A7C.41C6@mti.sgi.com> <3265C383.167E@mti.sgi.com>
Organization: Silicon Graphics Inc., Mountain View, CA
Newsgroups: comp.lang.ada

Robert Dewar wrote:
>
> The essence of the binary storage allocator is that it DOES coalesce
> on return, and that for a given maximum block size, the time to coalesce
> is bounded by a constant, as is the time to allocate.

I believe the version with full coalescing has worst-case allocation time
log N, where N is the heap size.  Consider a heap with only one large empty
block and repeated requests to alternately allocate and deallocate a 1-byte
block.  (I'm referring to the algorithm in Knuth, Vol. 1, 2nd edition,
starting at page 442.  Knuth credits it to Markowitz and Knowlton.)

I mentioned the algorithm without coalescing, since it seems to be commonly
implemented.  Allocation and deallocation are a bit faster, and worst-case
space usage is only worse by a constant factor (see below).

> And in such a system
> it is obvious that you cannot be worse than a factor of 2 inefficient
> in use of storage (the worst case occurs if all your allocations are for
> one size of block, that is a power of 2 plus one in size).
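The log N cost is easy to see in a toy buddy allocator.  The sketch below is
mine, not Knuth's Algorithm R/S: it keeps one free list per power-of-two
order and counts split and coalesce steps.  A 1-byte request against a
single free 1MB block must split 20 times, and freeing it coalesces 20 times.

```python
# Sketch of a binary buddy allocator (illustrative, not Knuth's exact code).
# Block sizes are powers of two; free_lists[k] holds addresses of free 2**k blocks.

class Buddy:
    def __init__(self, log_size):
        self.max = log_size
        self.free_lists = {k: set() for k in range(log_size + 1)}
        self.free_lists[log_size].add(0)   # one big free block
        self.splits = self.merges = 0

    def alloc(self, k):
        # Find the smallest free block of order >= k, splitting on the way down.
        j = k
        while j <= self.max and not self.free_lists[j]:
            j += 1
        addr = self.free_lists[j].pop()
        while j > k:
            j -= 1
            self.free_lists[j].add(addr + (1 << j))  # upper buddy stays free
            self.splits += 1
        return addr

    def free(self, addr, k):
        # Coalesce with the buddy as long as the buddy is also free.
        while k < self.max:
            buddy = addr ^ (1 << k)
            if buddy not in self.free_lists[k]:
                break
            self.free_lists[k].remove(buddy)
            addr = min(addr, buddy)
            k += 1
            self.merges += 1
        self.free_lists[k].add(addr)

heap = Buddy(20)                 # 1MB heap, initially one free block
for _ in range(3):
    a = heap.alloc(0)            # each 1-byte allocation splits 20 levels
    heap.free(a, 0)              # each free coalesces all 20 levels back
print(heap.splits, heap.merges)  # prints: 60 60
```

Each allocate/deallocate pair does work proportional to the number of
orders, i.e. log N, which is the worst case claimed above.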
You can do considerably worse than that.  Consider allocating 1MB of size-1
blocks.  Deallocate most of them, so that only every 16th remains.  Repeat
with size-16 blocks; they will need to go into a new region of memory.
Again leave every 16th.  Repeat with sizes 256, 4096, and 64K.  At most
1.25MB were live at any point, but 5MB were needed.  This is in addition to
the factor of 2 you get with non-power-of-2 sizes, so we have something
like a factor of 8.  And this isn't quite the worst case.  See also
exercise 41 on page 455 in Knuth Vol. 1.

Knuth states (Vol. 1, 2nd ed., bottom of page 451):

"J.M. Robson has shown [JACM 18 (1971), pp. 416-423] that dynamic storage
allocation strategies that never relocate reserved blocks cannot possibly
be guaranteed to use memory efficiently; there will always be pathological
circumstances in which the method breaks down."

He gives details and bounds in the exercises.

--
Hans-Juergen Boehm
boehm@mti.sgi.com
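P.S.  The byte counting in the scenario above can be checked with a short
simulation.  This is my sketch of the no-coalescing, segregated-size
allocator: freed blocks of one size are never reused for another size, so
each round of 1MB allocations needs a fresh 1MB region.

```python
# Sketch: no-coalescing allocator with per-size free lists.
# Each round allocates 1MB of blocks of one size, then frees all but every 16th.
MB = 1 << 20
footprint = 0   # bytes the allocator ever obtained (freed blocks are not reusable
                # for other sizes, so each round needs a fresh 1MB region)
retained = 0    # bytes still live from earlier rounds
max_live = 0    # peak of simultaneously live bytes

for size in (1, 16, 256, 4096, 65536):
    n = MB // size
    footprint += MB                          # new region for this size class
    max_live = max(max_live, retained + MB)  # this round fully live, plus survivors
    retained += (n // 16) * size             # keep every 16th block

print(footprint // MB, max_live / MB)  # prints: 5 1.25
```

Each round retains exactly MB/16 bytes, so the peak live total is
1MB + 4 * (1MB/16) = 1.25MB while the footprint grows to 5MB, giving the
factor-of-4 quoted above before accounting for power-of-2 rounding.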