comp.lang.ada
From: Arthur Schwarz <schwarza@gdls.com>
To: Stephen.Leake@gsfc.nasa.gov
Subject: Re: dynamic memory allocation
Date: 1997/06/19
Message-ID: <33A97144.5732@gdls.com>
In-Reply-To: 33A7E665.10A@gsfc.nasa.gov


Stephen Leake (Stephen.Leake@gsfc.nasa.gov) wrote:
> 
> Thanks to all who responded; I was feeling very lonely here, trying to
> raise the level of awareness of software engineering principles. It was
> good to hear from so many supportive people.
> 
> I will now don my flame-proof suit of armor, and tilt at the windmills
> again :)
> 
> --

I will try to get you some references for dynamic allocation and
its associated problems. The generic resources to consult are
textbooks covering operating systems and paging systems. As near
as I can remember, the general result is that over time:

    # holes = 1/2 * # tasks    (for operating systems)

This can be generalized as a statement that the number of holes
which exist under a heap allocation scheme with unbounded variable-
size allocations equals half the number of active allocations. The
effect of these holes is to leave unusable regions of space. The
aggregate size of these regions may be large enough for the next
heap request, yet the request may not be honored because no single
'hole' is large enough to satisfy it.
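
To put rough numbers on the rule (the figures are invented purely
for illustration): suppose 100 allocations of average size s are
outstanding. The rule predicts about 50 holes; if the average hole
is half the average allocation, then

    space in holes = 50 * (s/2) = 25s, versus 100s allocated,

so roughly 20% of the occupied arena is dead space that no
full-size request can use.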

The creation of 'holes' can be mitigated by providing fixed-size
allocatable blocks. Each block or set of blocks must be sufficient
to satisfy any heap request. In this case you have guaranteed
request satisfaction, provided any free blocks exist, and the
fragmentation problem disappears. (These results were stated in
previous messages.)

In systems that I have worked on, multiple strategies have been 
employed to provide 'dynamic' allocation without fragmentation.
To list some approaches (not all):

[1] All blocks have a fixed size = maximum size request. This will
    always satisfy the request for space if there is any space
    available. This approach (however) is the most costly in
    terms of actual memory used, in that (usually) not many blocks
    are available and these can get eaten up quickly. (A sketch of
    such a pool appears after this list.)

[2] Multiple block sizes. Based on statistical analysis, a range
    of block sizes is provided. Small requests are satisfied with
    small blocks, minimizing lost space; larger requests are
    satisfied with larger blocks. Some work has to be done to
    determine how many blocks of each size are required.

[3] "Linked" blocks. A uniform block size is provided in a global
    memory pool. Each usage of data from within the pool is
    satisfied by chaining together one or more blocks. This has
    the advantage of simple allocation and management but the
    disadvantage of requiring more complex read access (for
    information). In a message-passing scheme the complexity is
    hidden.
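
As an illustration of approach [1], here is a minimal sketch of
such a fixed-block pool in Ada: a free list threaded through a
preallocated array. The package name, sizes, and layout are mine,
invented for the example - a sketch, not a hardened implementation:

package Fixed_Pool is

   Block_Size  : constant := 128;   --  = maximum request, in bytes
   Block_Count : constant := 32;    --  sized from worst-case analysis

   type Block_Index is range 0 .. Block_Count;
   No_Block : constant Block_Index := 0;   --  means "pool exhausted"

   type Block is array (1 .. Block_Size) of Character;

   --  The storage itself, preallocated once; callers index it with
   --  the id returned by Allocate.
   Pool : array (1 .. Block_Count) of Block;

   function  Allocate return Block_Index;  --  No_Block when exhausted
   procedure Free (Id : in Block_Index);

end Fixed_Pool;

package body Fixed_Pool is

   --  Free list threaded through a parallel array: Next (I) names
   --  the free block after block I; No_Block ends the chain.
   Next : array (1 .. Block_Count) of Block_Index;
   Head : Block_Index := No_Block;
   Initialized : Boolean := False;

   procedure Initialize is
   begin
      for I in 1 .. Block_Count - 1 loop
         Next (I) := Block_Index (I + 1);
      end loop;
      Next (Block_Count) := No_Block;
      Head := 1;
      Initialized := True;
   end Initialize;

   function Allocate return Block_Index is
      Id : Block_Index;
   begin
      if not Initialized then
         Initialize;
      end if;
      Id := Head;
      if Id /= No_Block then
         Head := Next (Integer (Id));   --  unlink from the free list
      end if;
      return Id;                        --  caller must test No_Block
   end Allocate;

   procedure Free (Id : in Block_Index) is
   begin
      Next (Integer (Id)) := Head;      --  push back onto the list
      Head := Id;
   end Free;

end Fixed_Pool;

Approach [2] is then just several such pools with different
Block_Size values, and approach [3] chains blocks together by
storing a 'next' index inside each block.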

The above schemes can be used as-is for message-passing-with-copy:

     Task 1 => (copy to message block) => (copy to local memory) => Task 2

If globally shared memory is used, then some care needs to be
taken, with respect to [3] in particular.

As has been previously mentioned, Knuth has some discussion of
memory allocation. I am familiar with The Art of Computer
Programming, Vol. 1, Chapter 2, and the discussion there of, e.g.,
the Buddy System. That allocation scheme allows flexible allocation
of blocks whose sizes are powers of two (2**n). Some care must be
used with this scheme, as with the others mentioned.
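
A worked example of the splitting (sizes chosen for illustration):

    request = 100 bytes  =>  rounded up to 128 = 2**7
    smallest free block is 1024 bytes:
        1024 splits into 512 + 512
         512 splits into 256 + 256
         256 splits into 128 + 128  =>  one 128 fills the request

The unused 512, 256, and 128 'buddies' go onto their free lists;
when a block is released and its buddy is also free, the two
recombine, rebuilding the larger block.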

Since memory is not unlimited, some careful analysis needs to be
performed with respect to average / typical / maximum memory
requirements (this has been mentioned in a previous letter). If
there isn't enough memory to satisfy peak requirements, a strategy
needs to be implemented for 'losing' messages. If you can't satisfy
a memory request, then what? Several strategies come to mind
(message priority, message aging, new data overwriting old, etc.).
This type of analysis is best suited to queuing theory / Monte
Carlo methods / stochastic analysis (they all refer to similar
modeling).
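
As a purely illustrative sketch of one such policy - new data
overwriting old - the following builds on the hypothetical
Fixed_Pool package sketched above; the package and all names here
are mine:

with Fixed_Pool; use Fixed_Pool;

package Message_Store is
   --  'New data overwrites old': when the pool is exhausted, the
   --  oldest outstanding message is dropped to make room.
   function Acquire return Block_Index;
end Message_Store;

package body Message_Store is

   --  Ring of outstanding message blocks, oldest first.
   Held  : array (1 .. Block_Count) of Block_Index :=
             (others => No_Block);
   First : Integer := 1;   --  slot of the oldest held message
   Last  : Integer := 0;   --  slot of the newest held message
   Count : Natural := 0;

   procedure Drop_Oldest is
   begin
      if Count > 0 then
         Free (Held (First));               --  lose the oldest message
         First := (First mod Block_Count) + 1;
         Count := Count - 1;
      end if;
   end Drop_Oldest;

   function Acquire return Block_Index is
      Id : Block_Index := Allocate;
   begin
      if Id = No_Block then
         Drop_Oldest;                       --  the policy decision
         Id := Allocate;                    --  retry after dropping one
      end if;
      if Id /= No_Block then
         Last := (Last mod Block_Count) + 1;
         Held (Last) := Id;                 --  remember, oldest first
         Count := Count + 1;
      end if;
      return Id;
   end Acquire;

end Message_Store;

The same skeleton accommodates the other policies: Drop_Oldest
becomes "drop the lowest-priority message" or "drop the most aged
message" as the analysis dictates.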

One issue not seen in this discussion is the impact of using
vendor-provided dynamic memory allocation libraries. Each
allocation from the 'heap' might carry an associated overhead:
some number of bytes which record (at least) the size of the
allocation request, and/or some administrative data. Before a
standard library is used, find out what it will cost.
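
For illustration only (the real overhead is implementation-specific,
so measure your own runtime): if each heap allocation carries an
8-byte header and a typical message is 24 bytes, then

    overhead = 8 / (8 + 24) = 25% of every allocation.

A pool carved from one preallocated array pays its bookkeeping
cost once, not per request.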

The 'idea' of dynamic memory allocation in embedded systems is not
bad, but there are some definitional problems associated with it.
For one, you really can't get fully 'dynamic'. It should be clear
that it is best to preallocate the space needed (into an array)
and dole it out based on the strategy of choice. In an embedded
system, you need absolute control over resource usage. You must be
able to predetermine that all requests are satisfied in some
acceptable way - even if this 'satisfaction' requires that the
request be ignored.

I roll my own all the time, and with good effect. I do so because
the ability to dynamically allocate and deallocate memory often
simplifies my problems. It isolates the design decisions associated
with memory use from the decisions associated with memory allocation.
I don't have to code and consider both together - and this simplifies
the overall design. Since I control the allocation mechanism, I can
collect statistics concerning it; I can provide additional security
in access and use; and I have some flexibility in error detection and
recovery. 
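
By way of example, statistics can be layered over the hypothetical
Fixed_Pool sketch above without touching the callers; the wrapper
and its counters are mine, invented for the example:

with Fixed_Pool;

package Counted_Pool is
   --  Same interface as Fixed_Pool, plus usage statistics.
   function  Allocate return Fixed_Pool.Block_Index;
   procedure Free (Id : in Fixed_Pool.Block_Index);

   In_Use     : Natural := 0;   --  blocks allocated right now
   High_Water : Natural := 0;   --  peak demand observed so far
   Failures   : Natural := 0;   --  requests that could not be met
end Counted_Pool;

package body Counted_Pool is

   use type Fixed_Pool.Block_Index;

   function Allocate return Fixed_Pool.Block_Index is
      Id : constant Fixed_Pool.Block_Index := Fixed_Pool.Allocate;
   begin
      if Id = Fixed_Pool.No_Block then
         Failures := Failures + 1;    --  grist for the error strategy
      else
         In_Use := In_Use + 1;
         if In_Use > High_Water then
            High_Water := In_Use;     --  feeds the sizing analysis
         end if;
      end if;
      return Id;
   end Allocate;

   procedure Free (Id : in Fixed_Pool.Block_Index) is
   begin
      Fixed_Pool.Free (Id);
      In_Use := In_Use - 1;
   end Free;

end Counted_Pool;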

Some terms to look up are:

    virtual paging
    multi-tasking
    IBM OS/360 MVT (Multiprogramming with a Variable number of Tasks)
    memory holes (not Black Holes though)
    first-fit
    best-fit
    

A tough resource (which may be out of print) is:

    Queueing Systems (Vols. I & II)
    Leonard Kleinrock
    Wiley

Some Operating Systems authors are:

    Donovan
    Per Brinch Hansen
    Tanenbaum

I'm terribly sorry about the disjointed nature of this reply. If you
have any questions, please don't hesitate to write.

Arthur Schwarz
schwarza@gdls.com (work)
aschwarz@acm.org  (home)





