* Re: dynamic memory allocation
1997-06-16 0:00 dynamic memory allocation Stephen Leake
@ 1997-06-16 0:00 ` Samuel Mize
1997-06-16 0:00 ` Joel Seidman
` (5 subsequent siblings)
6 siblings, 0 replies; 20+ messages in thread
From: Samuel Mize @ 1997-06-16 0:00 UTC (permalink / raw)
Stephen Leake wrote:
> I have suggested that the heap could become fragmented (the buffers
> are NOT all the same size). They say "we'll just test it thoroughly".
This is pathetic.
On the other hand, you say the messages are not all the same size.
If they are of specific-size message types, AND you can derive a
maximum number that will need to exist at any time, each message
type can have its own storage pool. The result is that there is
not one heap being fragmented; each message type has its own chunk
of memory to allocate from. If you can define a maximum number for
each type, you can ensure you will always have enough memory.
If the buffers are of variable size, fragmentation certainly
might occur.
If you can't define a maximum number of messages to be handled at
a given moment, you can't define how much memory you will need to
begin with (you can determine "likely to be OK" but not "proven
to be OK no matter what").
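The per-message-type pool idea can be sketched in C. Everything here
(the "telemetry" message type, its size, and the slot count) is an
invented illustration, not from the original design:

```c
#include <stddef.h>

/* One statically sized pool per message type, so one type's traffic
   can never fragment another type's storage. Sizes are assumptions. */
#define TELEMETRY_MSG_SIZE 64
#define TELEMETRY_MAX_MSGS 8

typedef struct {
    unsigned char slots[TELEMETRY_MAX_MSGS][TELEMETRY_MSG_SIZE];
    int free_list[TELEMETRY_MAX_MSGS]; /* stack of free slot indices */
    int free_top;                      /* number of free slots */
} telemetry_pool;

void pool_init(telemetry_pool *p) {
    p->free_top = TELEMETRY_MAX_MSGS;
    for (int i = 0; i < TELEMETRY_MAX_MSGS; i++)
        p->free_list[i] = i;
}

/* O(1), never fragments: every slot is the same size. Returns NULL
   only when all TELEMETRY_MAX_MSGS slots are live -- a condition you
   can rule out by analysis if the maximum was derived correctly. */
void *pool_alloc(telemetry_pool *p) {
    if (p->free_top == 0)
        return NULL;
    return p->slots[p->free_list[--p->free_top]];
}

void pool_free(telemetry_pool *p, void *msg) {
    int idx = (int)(((unsigned char *)msg - &p->slots[0][0])
                    / TELEMETRY_MSG_SIZE);
    p->free_list[p->free_top++] = idx;
}
```

With one such pool per message type, "enough memory" becomes a
per-type counting argument instead of a heap-behavior argument.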
> Can anyone provide a reference to a book or study article that
> says this is bad? To me it seems obvious, and the general tone
> in this newsgroup is that it's obvious. I have a couple books on
> realtime embedded design, and they don't even bother to mention
> dynamic allocation - unfortunately, that makes it hard to say
> "see, this book says it's bad".
>
> On the other side, are there any discussions of how to test such a
> system, to show that it does not become fragmented? Or a book on
> designing dynamic memory allocation algorithms to avoid fragmentation?
Real-time embedded system developers have tended to avoid dynamic
memory, so it's not surprising that your references don't mention
it -- you won't find much in that literature. I don't have a
specific reference, but I suggest you look at books outside the
embedded real-time domain.
Best of luck with it,
Sam Mize
--
-- Samuel Mize (817) 619-8622 smize@link.com "Team Ada"
-- Hughes Training Inc. PO Box 6171 m/s 400, Arlington TX 76005
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: dynamic memory allocation
1997-06-16 0:00 dynamic memory allocation Stephen Leake
1997-06-16 0:00 ` Samuel Mize
@ 1997-06-16 0:00 ` Joel Seidman
1997-06-17 0:00 ` Jon S Anthony
` (4 subsequent siblings)
6 siblings, 0 replies; 20+ messages in thread
From: Joel Seidman @ 1997-06-16 0:00 UTC (permalink / raw)
In article <33A55F1B.63FE@gsfc.nasa.gov>, Stephen.Leake@gsfc.nasa.gov wrote:
> They are proposing a message passing scheme where sending tasks allocate
> buffers for each message from a heap, and receiving tasks deallocate. I
> have suggested that the heap could become fragmented (the buffers are
> NOT all the same size). They say "we'll just test it thoroughly".
Running out of memory is another "bad" heap state. Or is this what you
meant by fragmentation?
Excuse me, but thorough testing will not prevent fragmentation or any
other problem. At best it may show that fragmentation occurs. Or it may
fail to show it (absence of fragmentation during testing does not
guarantee it will not occur in operation). I kind of have to wonder
about this off-hand-sounding quoted response.
I assume "it" refers to the final, integrated system. The question you
need to ask is what is the impact to schedule and cost if fragmentation is
discovered during the testing phase. Generally, the later you discover a
problem, the more costly to fix. Up-front risk analysis is needed to
address potentially costly problems.
> Can anyone provide a reference to a book or study article that says this
> is bad? To me it seems obvious, and the general tone in this newsgroup
> is that it's obvious. I have a couple books on realtime embedded design,
> and they don't even bother to mention dynamic allocation -
> unfortunately, that makes it hard to say "see, this book says it's bad".
I don't have a good reference. Knuth talks about heap management
algorithms in general.
The embedded real-time projects I've been on have discouraged the use of
malloc (or new). Dynamic memory allocation in a heap isn't necessarily bad
per se, but it can get complicated (or possibly impossible) to convince
yourself that, in all possible scenarios, you won't run out of memory, and
that the procedures to allocate and deallocate won't exceed their time
budget. In simple situations, analysis may be possible. For example, if
all messages are the same size, fragmentation won't occur. You might know
something about the message sending and receiving activity that will allow
you to bound the amount of message memory you need even with worst-case
fragmentation, and you also know that you can make the heap that big. This
would also bound the execution time for allocation/deallocation. In some
systems, occasional message loss is tolerable, so you might be willing to
allow occasional out-of-heap-space conditions. Monte Carlo type simulation
could also be helpful to determine the possibility and frequency of bad
heap states.
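A Monte Carlo experiment of the kind suggested can be sketched in C.
The toy allocator (first fit over a tiny simulated heap) and all the
sizes and rates below are invented for illustration only:

```c
#include <stdlib.h>

/* Drive a first-fit allocator with random variable-size requests and
   count how often a request fails even though enough total free space
   exists -- i.e. fails purely because of fragmentation. */
#define HEAP_WORDS 256
#define MAX_LIVE   64

static unsigned char heap[HEAP_WORDS]; /* 0 = free, 1 = in use */

static int sim_alloc(int size) { /* first fit; returns offset or -1 */
    int run = 0;
    for (int i = 0; i < HEAP_WORDS; i++) {
        run = heap[i] ? 0 : run + 1;
        if (run == size) {
            for (int j = i - size + 1; j <= i; j++) heap[j] = 1;
            return i - size + 1;
        }
    }
    return -1;
}

static void sim_free(int off, int size) {
    for (int j = off; j < off + size; j++) heap[j] = 0;
}

/* Run `steps` random alloc/free operations; return how many requests
   failed although the total free space was sufficient. */
int fragmentation_failures(int steps, unsigned seed) {
    int off[MAX_LIVE], sz[MAX_LIVE];
    int live = 0, used = 0, fails = 0;
    srand(seed);
    for (int s = 0; s < steps; s++) {
        if (live < MAX_LIVE && (live == 0 || rand() % 2)) {
            int size = 1 + rand() % 16;   /* variable-size buffers */
            int o = sim_alloc(size);
            if (o >= 0) {
                off[live] = o; sz[live] = size; live++; used += size;
            } else if (HEAP_WORDS - used >= size) {
                fails++;  /* enough bytes free, but no hole big enough */
            }
        } else {
            int k = rand() % live;        /* free a random live block */
            sim_free(off[k], sz[k]);
            used -= sz[k];
            off[k] = off[live - 1]; sz[k] = sz[live - 1]; live--;
        }
    }
    return fails;
}
```

Such a model can estimate the *frequency* of bad heap states under an
assumed workload; it cannot prove their absence, for the reasons given
above.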
Bottom line is if fragmentation is a bad thing, do some analysis up front,
or be willing to pay the price later.
-- Joel
--
Joel Seidman
joel_seidman@smtp.svl.trw.com
* Re: dynamic memory allocation
1997-06-16 0:00 dynamic memory allocation Stephen Leake
1997-06-16 0:00 ` Samuel Mize
1997-06-16 0:00 ` Joel Seidman
@ 1997-06-17 0:00 ` Jon S Anthony
1997-06-18 0:00 ` Mats.Weber
1997-06-17 0:00 ` Robert Dewar
` (3 subsequent siblings)
6 siblings, 1 reply; 20+ messages in thread
From: Jon S Anthony @ 1997-06-17 0:00 UTC (permalink / raw)
In article <33A55F1B.63FE@gsfc.nasa.gov> Stephen Leake <Stephen.Leake@gsfc.nasa.gov> writes:
> They are proposing a message passing scheme where sending tasks allocate
> buffers for each message from a heap, and receiving tasks deallocate. I
> have suggested that the heap could become fragmented (the buffers are
> NOT all the same size). They say "we'll just test it thoroughly".
Unless there is something "special" about the allocation vs. liveness
in this application, fragmentation is a certainty if deallocation does
not coalesce adjacent free space.
> Can anyone provide a reference to a book or study article that says this
> is bad?
Whether it's bad or not will really depend on the specifics of the
application and the memory manager. If the MM coalesces or compacts, I
don't see how _fragmentation_ will be a problem. However, the
_overhead_ of such a capability could be a problem.
> and they don't even bother to mention dynamic allocation -
> unfortunately, that makes it hard to say "see, this book says it's
> bad".
Part of the problem is the meaning of "dynamic allocation". Certainly
stack allocation is just as dynamic as heap allocation. And you could
have "heap" style allocation from a preallocated pool.
> Or a book on designing dynamic memory allocation algorithms to avoid
> fragmentation?
I'm not sure there is any _book_ devoted to "memory management in
general", but it occurs in many books. For example, Aho, Hopcroft and
Ullman's "classic" _Data Structures and Algorithms_ has a chapter on
MM and it discusses this stuff including "buddy" systems and
compaction. Griswold & Griswold go over this in rather great detail
in _The Implementation of the Icon Programming Language_ (this is in
the context of GC). And of course this is discussed in Knuth.
Fragmentation concerns are also discussed in various settings in
Jones' and Lins' _Garbage Collection_ (which also has a very extensive
bibliography). On the Web you will find a lot of relevant information
(though mostly in the context of GC) at:
http://www.harlequin.com/mm/reference/bib/misc.html -- General bib
ftp://ftp.netcom.com/pub/hb/hbaker/home.html -- A lot of good stuff
ftp://ftp.cs.utexas.edu/pub/garbage/ -- All kinds of allocator stuff
/Jon
--
Jon Anthony
Organon Motives, Inc.
Belmont, MA 02178
617.484.3383
jsa@organon.com
* Re: dynamic memory allocation
1997-06-17 0:00 ` Jon S Anthony
@ 1997-06-18 0:00 ` Mats.Weber
1997-06-18 0:00 ` Jon S Anthony
0 siblings, 1 reply; 20+ messages in thread
From: Mats.Weber @ 1997-06-18 0:00 UTC (permalink / raw)
Jon S Anthony wrote:
> Whether it's bad or not will really depend on the specifics of the
> application and the memory manager. If the MM coalesces or compacts, I
> don't see how _fragmentation_ will be a problem. [...]
Should be "coalesces _and_ compacts". If it coalesces xor compacts, it is
easy to construct a program that runs out of memory due to fragmentation.
Moreover, compaction is quite unlikely to happen on most implementations,
as it requires a double indirection or a GC-like scan of all live pointers.
What we do to minimize fragmentation is to allocate only blocks with sizes
that are powers of two, and maintain one free list per power of two. It does
waste some storage but the program runs well for long periods of time.
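That scheme can be sketched in C. The arena size, size classes, and
names below are illustrative assumptions, not the poster's actual code:

```c
#include <stddef.h>

/* Round every request up to a power of two and keep one free list per
   size class, carved from a single preallocated arena. Freed blocks
   return to their class's list, so blocks of one size never fragment
   another size's storage (at the cost of internal waste). */
#define ARENA_BYTES (64 * 1024)
#define MIN_SHIFT 4   /* smallest class: 16 bytes */
#define MAX_SHIFT 12  /* largest class: 4096 bytes */

static unsigned char arena[ARENA_BYTES];
static size_t arena_used = 0;
static void *free_lists[MAX_SHIFT + 1]; /* one list per power of two */

static int size_class(size_t n) {       /* smallest 2^k >= n */
    int k = MIN_SHIFT;
    while (((size_t)1 << k) < n) k++;
    return k;
}

void *p2_alloc(size_t n) {
    int k = size_class(n);
    if (k > MAX_SHIFT) return NULL;
    if (free_lists[k]) {                 /* reuse a freed block */
        void *blk = free_lists[k];
        free_lists[k] = *(void **)blk;   /* pop: next ptr stored in block */
        return blk;
    }
    size_t sz = (size_t)1 << k;          /* else carve from the arena */
    if (arena_used + sz > ARENA_BYTES) return NULL;
    void *blk = &arena[arena_used];
    arena_used += sz;
    return blk;
}

void p2_free(void *blk, size_t n) {      /* caller passes original size */
    int k = size_class(n);
    *(void **)blk = free_lists[k];       /* push onto class free list */
    free_lists[k] = blk;
}
```

The wasted space is bounded (under 50% per block), which is the
storage cost the post mentions trading for long, stable run times.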
* Re: dynamic memory allocation
1997-06-18 0:00 ` Mats.Weber
@ 1997-06-18 0:00 ` Jon S Anthony
0 siblings, 0 replies; 20+ messages in thread
From: Jon S Anthony @ 1997-06-18 0:00 UTC (permalink / raw)
In article <33A7B0B4.F75C277D@elca-matrix.ch> Mats.Weber@elca-matrix.ch writes:
> > Whether it's bad or not will really depend on the specifics of the
> > application and the memory manager. If the MM coalesces or compacts, I
> > don't see how _fragmentation_ will be a problem. [...]
>
> Should be "coalesces _and_ compacts".
Well, no, that is certainly not right. If it compacts, coalescing is
irrelevant. So, I guess you could claim that "if it coalesces, then it
needs to also compact".
> If it coalesces xor compacts, it is easy to construct a program that
> runs out of memory due to fragmentation.
Not for the compaction disjunct. No way.
> Moreover, compaction is quite unlikely to happen on most
> implementations as it requires a double indirection or a GC-like
> scanning of all live pointers.
Yes, I would agree with this.
> What we do to minimize fragmentation is to allocate only blocks with
> sizes that are powers of two, and maintain one free list per power
> of two. It does waste some storage but the program runs well for
> long periods of time.
Yes, this is a pretty standard trick (with various bases). Buddy
systems are related, and they coalesce; with an appropriate base
for your application, they should avoid fragmentation issues
reasonably well.
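A minimal buddy allocator showing the split-and-coalesce behavior can
be sketched in C. The heap size and orders are invented, and a real
implementation would track free blocks more efficiently than these
linear scans:

```c
#include <string.h>

/* Blocks are powers of two; "order" is log2 of the block size. On
   free, a block merges with its buddy (address XOR size) whenever the
   buddy is also free, recursively, which limits fragmentation. */
#define MIN_ORDER 4            /* smallest block: 16 bytes */
#define MAX_ORDER 10           /* whole heap: 1024 bytes */
#define NLISTS (MAX_ORDER + 1)
#define MAX_FREE (1 << (MAX_ORDER - MIN_ORDER))

static int freelist[NLISTS][MAX_FREE]; /* free block offsets per order */
static int freecount[NLISTS];

void buddy_init(void) {
    memset(freecount, 0, sizeof freecount);
    freelist[MAX_ORDER][freecount[MAX_ORDER]++] = 0; /* one big block */
}

static int pop_block(int order) { return freelist[order][--freecount[order]]; }
static void push_block(int order, int off) { freelist[order][freecount[order]++] = off; }

static int remove_off(int order, int off) { /* take a specific offset */
    for (int i = 0; i < freecount[order]; i++)
        if (freelist[order][i] == off) {
            freelist[order][i] = freelist[order][--freecount[order]];
            return 1;
        }
    return 0;
}

int buddy_alloc(int order) {   /* returns byte offset, or -1 if full */
    if (order < MIN_ORDER) order = MIN_ORDER;
    int o = order;
    while (o <= MAX_ORDER && freecount[o] == 0) o++;
    if (o > MAX_ORDER) return -1;
    int off = pop_block(o);
    while (o > order) {              /* split: keep the lower half, */
        o--;
        push_block(o, off + (1 << o)); /* free the upper half */
    }
    return off;
}

void buddy_free(int off, int order) {
    if (order < MIN_ORDER) order = MIN_ORDER;
    while (order < MAX_ORDER) {
        int buddy = off ^ (1 << order);
        if (!remove_off(order, buddy)) break; /* buddy in use: stop */
        off &= ~(1 << order);                 /* merged block offset */
        order++;
    }
    push_block(order, off);
}
```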
/Jon
--
Jon Anthony
OMI, Belmont, MA 02178
617.484.3383
"Nightmares - Ha! The way my life's been going lately,
Who'd notice?" -- Londo Mollari
* Re: dynamic memory allocation
1997-06-16 0:00 dynamic memory allocation Stephen Leake
` (2 preceding siblings ...)
1997-06-17 0:00 ` Jon S Anthony
@ 1997-06-17 0:00 ` Robert Dewar
1997-06-17 0:00 ` Spam Hater
1997-06-17 0:00 ` Stephen Leake
1997-06-17 0:00 ` Glen Cornell
` (2 subsequent siblings)
6 siblings, 2 replies; 20+ messages in thread
From: Robert Dewar @ 1997-06-17 0:00 UTC (permalink / raw)
Stephen says
<<They are proposing a message passing scheme where sending tasks allocate
buffers for each message from a heap, and receiving tasks deallocate. I
have suggested that the heap could become fragmented (the buffers are
NOT all the same size). They say "we'll just test it thoroughly".>>
In this case, thorough testing would have to mean that they will test all
conceivable inputs and sequences of inputs. If they can do that, fine, but
note that this is often difficult :-)
In particular, for example, Intel could not or at least did not thoroughly
test the divide on the Pentium (if you need an example in discussing this).
Obviously we have to assume this is non-critical software where it does
not matter if it sometimes fails. We deduce that from the fact that someone
thinks that testing is an adequate indicator of correctness. Often for
non-critical software this is the case, and indeed such software does often
use dynamic allocation.
For critical software however, where reliability and correctness are
required, it is out of the question to use dynamic allocation unless
you can prove that storage error cannot occur.
* Re: dynamic memory allocation
1997-06-17 0:00 ` Robert Dewar
@ 1997-06-17 0:00 ` Spam Hater
1997-06-17 0:00 ` Robert Dewar
1997-06-17 0:00 ` Stephen Leake
1 sibling, 1 reply; 20+ messages in thread
From: Spam Hater @ 1997-06-17 0:00 UTC (permalink / raw)
> They are proposing a message passing scheme where sending tasks
> allocate buffers for each message from a heap, and receiving tasks
> deallocate. I have suggested that the heap could become fragmented
> (the buffers are NOT all the same size). ....
If they're using out-of-the-box Verdix (now Rational?) the heap
WILL become fragmented. I've heard but not verified the same about
Alsys.
--
----------------------------------------------------------------------
Wes Groleau, Hughes Defense Communications, Fort Wayne, IN USA
Senior Software Engineer - AFATDS Tool-smith Wanna-be
Don't send advertisements to this domain unless asked! All disk space
on fw.hac.com hosts belongs to either Hughes Defense Communications or
the United States government. Using email to store YOUR advertising
on them is trespassing!
----------------------------------------------------------------------
* Re: dynamic memory allocation
1997-06-17 0:00 ` Spam Hater
@ 1997-06-17 0:00 ` Robert Dewar
0 siblings, 0 replies; 20+ messages in thread
From: Robert Dewar @ 1997-06-17 0:00 UTC (permalink / raw)
Wes said
<<If they're using out-of-the-box Verdix (now Rational?) the heap
WILL become fragmented. I've heard but not verified the same about
Alsys.>>
Much too strong a statement. Whether a given pattern of allocations causes
fragmentation with a given storage allocation algorithm is highly dependent
on the pattern. For any particular algorithm, there surely will be a pattern
of allocations that does NOT cause fragmentation. On most allocators, the
use of all fixed-size blocks will guarantee no fragmentation, but you cannot
count on this without knowing the algorithm used.
* Re: dynamic memory allocation
1997-06-17 0:00 ` Robert Dewar
1997-06-17 0:00 ` Spam Hater
@ 1997-06-17 0:00 ` Stephen Leake
1997-06-17 0:00 ` Michael F Brenner
1997-06-17 0:00 ` Brian Rogoff
1 sibling, 2 replies; 20+ messages in thread
From: Stephen Leake @ 1997-06-17 0:00 UTC (permalink / raw)
Robert Dewar wrote:
>
> Stephen says
>
> <<They are proposing a message passing scheme where sending tasks allocate
> buffers for each message from a heap, and receiving tasks deallocate. I
> have suggested that the heap could become fragmented (the buffers are
> NOT all the same size). They say "we'll just test it thoroughly".>>
>
> In this case, thorough testing would have to mean that they will test all
> conceivable inputs and sequences of inputs. If they can do that, fine, but
> note that this is often difficult :-)
That is precisely my point; I do not believe they can adequately test
this system.
> In particular, for example, Intel could not or at least did not thoroughly
> test the divide on the Pentium (if you need an example in discussing this).
Good example.
> Obviously we have to assume this is non-critical software where it does
> not matter if it sometimes fails. We deduce that from the fact that someone
> thinks that testing is an adequate indicator of correctness. Often for
> non-critical software this is the case, and indeed such software does often
> use dynamic allocation.
Unfortunately, this is the "Safe" mode of a science satellite; it is
supposed to work no matter what. It is a VERY critical system!
> For critical software however, where reliability and correctness are
> required, it is out of the question to use dynamic allocation unless
> you can prove that storage error cannot occur.
I whole-heartedly agree, but I don't carry enough weight around here to
change minds. And unfortunately, neither do newsgroup discussions.
So I repeat my query; can anyone provide references to authoritative
texts that discuss this issue?
--
- Stephe
* Re: dynamic memory allocation
1997-06-17 0:00 ` Stephen Leake
@ 1997-06-17 0:00 ` Michael F Brenner
1997-06-17 0:00 ` Brian Rogoff
1 sibling, 0 replies; 20+ messages in thread
From: Michael F Brenner @ 1997-06-17 0:00 UTC (permalink / raw)
To: Stephen.Leake@gsfc.nasa.gov
Subject: Re: dynamic memory allocation
Some discussion on very conservative memory allocation (either no
dynamic memory allocation or allocation only at the beginning
and then no de-allocation) can be seen at:
The OberonT Language
Hard Real-Time Systems
Cache Coherency
Unfortunately, there is very little research done (outside of NASA,
MITRE, and the rest of the hard-real-time community) to rigorously
prove ways to make something last many years. They just don't need it.
Consumers accept text editors, operating systems, Net browsers,
spread sheets, command and control systems, e-mail software,
airplane avionics, fragmented disks, traffic lights, billing systems,
government services, databases, year 2000 problems, and teller
machines that Hang the Mouse (that is, run off the end of a
hanging pointer or seize up due to memory fragmentation). For example,
most managers of messaging systems suffering from Year 2000 overflows
have no intention of changing the messages from 2-digit
to 4-digit years, because they feel they can afford the consequences.
Mike mikeb@mitre.org
* Re: dynamic memory allocation
1997-06-17 0:00 ` Stephen Leake
1997-06-17 0:00 ` Michael F Brenner
@ 1997-06-17 0:00 ` Brian Rogoff
1 sibling, 0 replies; 20+ messages in thread
From: Brian Rogoff @ 1997-06-17 0:00 UTC (permalink / raw)
To: Stephen Leake
On Tue, 17 Jun 1997, Stephen Leake wrote:
> I whole-heartedly agree, but I don't carry enough weight around here to
> change minds. And unfortunately, neither do newsgroup discussions.
>
> So I repeat my query; can anyone provide references to authoritative
> texts that discuss this issue?
http://www.cs.utexas.edu/users/oops/papers.html
See the allocator survey, "Dynamic Storage Allocation: A Survey and
Critical Review". Maybe not exactly what you want, but it should be close,
and has lots of useful references.
-- Brian
* Re: dynamic memory allocation
1997-06-16 0:00 dynamic memory allocation Stephen Leake
` (3 preceding siblings ...)
1997-06-17 0:00 ` Robert Dewar
@ 1997-06-17 0:00 ` Glen Cornell
1997-06-18 0:00 ` David Wheeler
1997-06-18 0:00 ` David Wheeler
6 siblings, 0 replies; 20+ messages in thread
From: Glen Cornell @ 1997-06-17 0:00 UTC (permalink / raw)
Stephen Leake wrote:
...
>
> Can anyone provide a reference to a book or study article that says this
> is bad? To me it seems obvious, and the general tone in this newsgroup
> is that it's obvious. I have a couple books on realtime embedded design,
> and they don't even bother to mention dynamic allocation -
> unfortunately, that makes it hard to say "see, this book says it's bad".
>
I'm not sure that dynamic memory allocation for an embedded system is
such a bad practice for the "qualified" embedded systems developer.
However, most programmers new to embedded systems that I've seen have
a "workstation" mindset: that is, memory is inexhaustible. Like they
say, "it's not the gun that kills, it's the person pulling the
trigger". This mindset is bad for systems that need to remain
operational indefinitely. For systems such as the M1 tank, we
completely disallow the practice of dynamic memory allocation - sort of.
Allocators are a great compromise, allowing some flexibility to designs
while enforcing some discipline. I took the C++ Standard Template
Library as an example for its implementation, although any algorithms
book probably would cover the topic. The member functions allocate and
deallocate can be implementors of your chosen memory allocation scheme.
These could be simple wrappers around new and unchecked_deallocation,
or fixed-block allocation at elaboration. Whatever the implementation,
keep the interface the same, so you can easily change to a new one.
The following is an example of an allocator's interface:
--
-- File Name: bound_allocator.1.ada
-- This is the embedded version of the unbound_allocator package.
-- this class allocates and initializes objects from a fixed size pool
-- created at elaboration.
--
generic
   type T_Object is limited private;
   type T_Handle is access T_Object;
   Maximum_Number_Of_Objects : in Positive := 20;
   -- keep it small to force the programmer to do a
   -- memory utilization analysis.

   -- T operations:
   with procedure Create (This : in out T_Handle) is <>;
   with procedure Destroy (This : in out T_Handle) is <>;
   with procedure Assign (To : in out T_Object; From : in T_Object) is <>;

package Bound_Allocator is

   Invalid_Object_Exception : exception;
   -- the user tries to deallocate memory that was not allocated here

   Memory_Empty_Exception : exception;
   -- the user tries to deallocate from an empty pool (should never occur)

   Memory_Full_Exception : exception;
   -- the user tries to allocate memory when the pool is full

   function Allocate return T_Handle;
   -- allocate memory, initialize the object's data structure,
   -- and invoke the constructor.

   procedure Deallocate (Obj : in out T_Handle);
   -- invoke the destructor and return the object's memory
   -- (Obj, the object to destroy) to the free pool.

   function Verify (Obj : in T_Handle) return Boolean;
   -- was the object allocated from this pool?
   -- this function will NOT raise an exception.

end Bound_Allocator;
This is certainly not a panacea, but it does the job for simple types
(i.e. watch out for discriminated types without default discriminants,
task types, and other voodoo).
Try to convince your contemporaries to use the allocator, even if it
simply calls new and unchecked_deallocation. This way, at least you can
manage the transition to other allocation schemes or even have a means
to search for memory leaks. (Once you've switched over to a safer
allocator and corrected the memory leak, you can say "I told you so!")
> On the other side, are there any discussions of how to test such a
> system, to show that it does not become fragmented? Or a book on
> designing dynamic memory allocation algorithms to avoid fragmentation?
>
Wouldn't it be great to have Purify for Ada?
----------
Glen Cornell
cornell@gdls.com
* Re: dynamic memory allocation
1997-06-16 0:00 dynamic memory allocation Stephen Leake
` (4 preceding siblings ...)
1997-06-17 0:00 ` Glen Cornell
@ 1997-06-18 0:00 ` David Wheeler
1997-06-18 0:00 ` Stephen Leake
1997-06-19 0:00 ` JP Thornley
1997-06-18 0:00 ` David Wheeler
6 siblings, 2 replies; 20+ messages in thread
From: David Wheeler @ 1997-06-18 0:00 UTC (permalink / raw)
Stephen Leake (Stephen.Leake@gsfc.nasa.gov) wrote:
: I'm trying to convince my project that dynamic memory allocation is a
: bad idea in an embedded software system (a satellite control system, in
: this case).
: They are proposing a message passing scheme where sending tasks allocate
: buffers for each message from a heap, and receiving tasks deallocate. I
: have suggested that the heap could become fragmented (the buffers are
: NOT all the same size). They say "we'll just test it thoroughly".
"Testing it thoroughly" cannot provide
strong evidence that something can't happen -- at best it can show
that the problem does not occur for the particular inputs tested.
There are some good referenceable papers that show the futility of
depending on testing alone to give any useful assurance in systems that
must be highly reliable.
Let's be practical. If you knowingly insert code that doesn't always
work, and then testing doesn't encounter the problem, what exactly have
you shown? What you've shown is that your testing wasn't good enough
to find that bug. That's possibly interesting information, especially
if you want to know how good your testing is, but that
does NOT provide evidence that the system won't encounter a serious problem
in flight. It doesn't even provide strong evidence that such an encounter
is unlikely unless you can prove that the tests are exactly what the
system will encounter -- this is the weakness of reliability measures.
Digital systems have a different failure mode than analog systems -- in most
analog systems similar inputs tend to produce similar outputs, and digital
systems need not do so.
: Can anyone provide a reference to a book or study article that says this
: is bad? To me it seems obvious, and the general tone in this newsgroup
: is that it's obvious. I have a couple books on realtime embedded design,
: and they don't even bother to mention dynamic allocation -
: unfortunately, that makes it hard to say "see, this book says it's bad".
Regarding memory allocation issues, this is the sort of thing that's
so obvious to practitioners that it's rarely written down.
I believe that the U.K.'s 00-55 military standard required
that allocation only occur during system initialization; note that
00-55 also advocated the use of formal methods.
Perhaps someone in the U.K. can enlighten you on the current status of 00-55.
See the Ada Quality & Style Guide 5.4.5, which discusses this.
It states that avoiding the use of dynamic allocation is unnecessary
since Ada has a "Storage_Error" exception. Realize that this statement
in the AQ&S is only half true; if there's nothing you can DO about
a Storage_Error and must still perform the work correctly,
then you'll need a design that makes it impossible to have a Storage_Error.
You MAY be able to make the allocation reliable, depending on other factors.
For example, if there are only 3 message sizes, you may be able to
preallocate those sizes. You might even be able to create a simple proof that
as long as a certain order is maintained, the approach is fine.
Naturally, this is all application-dependent.
I'm assuming that this really has to work with very high reliability.
If occasional failure IS acceptable (true for many ground-based
systems), then it may be cheaper to permit an occasional failure and
then reprogram it when the failure rate becomes unacceptable.
--- David A. Wheeler
dwheeler@ida.org
* Re: dynamic memory allocation
1997-06-18 0:00 ` David Wheeler
@ 1997-06-18 0:00 ` Stephen Leake
1997-06-19 0:00 ` Arthur Schwarz
1997-06-20 0:00 ` David Wheeler
1997-06-19 0:00 ` JP Thornley
1 sibling, 2 replies; 20+ messages in thread
From: Stephen Leake @ 1997-06-18 0:00 UTC (permalink / raw)
Thanks to all who responded; I was feeling very lonely here, trying to
raise the level of awareness of software engineering principles. It was
good to hear from so many supportive people.
I will now don my flame-proof suit of armor, and tilt at the windmills
again :)
--
- Stephe
* Re: dynamic memory allocation
1997-06-18 0:00 ` Stephen Leake
@ 1997-06-19 0:00 ` Arthur Schwarz
1997-06-20 0:00 ` David Wheeler
1 sibling, 0 replies; 20+ messages in thread
From: Arthur Schwarz @ 1997-06-19 0:00 UTC (permalink / raw)
To: Stephen.Leake
Stephen Leake wrote: (Stephen.Leake@gsfc.nasa.gov)
>
> Thanks to all who responded; I was feeling very lonely here, trying to
> raise the level of awareness of software engineering principles. It was
> good to hear from so many supportive people.
>
> I will now done my flame-proof suit of armor, and tilt at the windmills
> again :)
>
> --
I will try to get you some references for dynamic allocation and
its associated problems. The generic resource list would be to
look at text books covering operating systems and paging systems.
As near's I c'n remember, the general result is that over time:
# holes = 1/2 # tasks (for operating systems)
This can be generalized as Knuth's "fifty-percent rule": with a heap
allocation scheme and unbounded variable-size allocations, the number
of holes tends toward half the number of active allocations.
The result of these holes is unusable regions of space.
The aggregate size of these regions may be large enough for the next
heap request, but the request may not be honored because no single
'hole' is large enough to satisfy the request.
One mitigation of the 'hole' problem is to provide fixed-size
allocatable blocks. Each block or set of blocks must be
sufficient to satisfy any heap request. In this case, you will have
guaranteed request satisfaction, provided any free blocks exist, and
the fragmentation problem disappears. (These results were stated in
previous messages.)
In systems that I have worked on, multiple strategies have been
employed to provide 'dynamic' allocation without fragmentation.
To list some approaches (not all):
[1] All blocks have a fixed size = maximum size request. This will
always satisfy the request for space if there is any space
available. This approach (however) is the most costly in
terms of actual memory used, in that (usually) not many blocks
are available and these can get quickly eaten up.
[2] Multiple sized blocks. Based on statistical analysis, a range
of block sizes is provided. Wasted space is minimized by satisfying
small requests with small blocks, while allocations for larger
amounts are satisfied with larger blocks. Some work has to be done
to determine how many blocks of each kind are required.
[3] "Linked" blocks. A uniform block size is provided in a global
memory pool. Each usage of data from within the pool is satisfied
by using one or more blocks. This has the advantage of simplicity
of allocation and management but the disadvantage of requiring
more complex read access (for information). In a message passing
scheme the complexity is hidden.
The above schemes can be used as is for message-passing-with-copy:
Task i => (copy to message block) => (copy to local memory) Task 2
If globally shared memory is used, then some care needs to be taken
with respect to [3] in particular.
As has been previously mentioned, Knuth has some discussion of
memory allocation. I am familiar with Knuth, The Art of Computer
Programming, Vol. I, Chapter 2, and its discussion of, e.g., the
Buddy System. That allocation scheme allows flexible allocation of
blocks with variable sizes of the form 2**n. Some care must be used
with this scheme, as with the others mentioned.
Since memory is not unlimited, some careful analysis needs to be
performed with respect to average / typical / maximum memory
requirements (this has been mentioned in a previous letter). If
there isn't enough memory to satisfy peak requirements a strategy
needs to be implemented for 'losing' messages. If you can't satisfy
a memory request, then what? Several strategies come to mind
(message priority, message aging, new data overwriting, etc.). This
type of analysis is best suited to queuing theory / monte carlo
methods / stochastic analysis (they all refer to similar modeling).
One issue not seen in this discussion is the impact of using
provided dynamic memory allocation libraries. Each allocation
from the 'heap' might have an associated overhead. Some number
of bytes which define (at least) the size of the allocation
request and /or some administrative overhead. Before a standard
library is used, find out what it will cost.
The 'idea' of dynamic memory allocation in embedded systems is not bad,
but there are some definitional problems associated with it. For one,
you really can't get to 'dynamic'. It should be clear that it is
best to preallocate the space needed (into an array) and dole this
out based on the strategy of choice. In an embedded system, you need
absolute control over resource usage. You must be able to predetermine
that all requests are satisfied in some acceptable way - even if this
'satisfaction' requires that the request be ignored.
I roll my own all the time, and with good effect. I do so because
the ability to dynamically allocate and deallocate memory often
simplifies my problems. It isolates the design decisions associated
with memory use from the decisions associated with memory allocation.
I don't have to code and consider both together - and this simplifies
the overall design. Since I control the allocation mechanism, I can
collect statistics concerning it; I can provide additional security
in access and use; and I have some flexibility in error detection and
recovery.
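One roll-your-own benefit mentioned above, statistics collection, can be sketched as a thin wrapper around the allocation mechanism. The names (my_alloc, mem_stats) are hypothetical, and for brevity this sketch draws from malloc; an embedded version would draw from a preallocated pool as argued above:

```c
#include <stdlib.h>
#include <stddef.h>

/* Statistics gathered because we control the allocation mechanism:
 * request counts, failures, and the high-water mark of live blocks,
 * which feeds directly into the peak-requirement analysis. */
static struct {
    size_t allocs;      /* total allocation requests        */
    size_t frees;       /* total deallocations              */
    size_t failures;    /* requests that could not be met   */
    size_t live;        /* blocks currently outstanding     */
    size_t peak_live;   /* high-water mark of live blocks   */
} mem_stats;

void *my_alloc(size_t n)
{
    void *p = malloc(n);
    mem_stats.allocs++;
    if (p == NULL) {
        mem_stats.failures++;   /* hook error recovery here */
        return NULL;
    }
    if (++mem_stats.live > mem_stats.peak_live)
        mem_stats.peak_live = mem_stats.live;
    return p;
}

void my_free(void *p)
{
    if (p == NULL)
        return;
    free(p);
    mem_stats.frees++;
    mem_stats.live--;
}
```

The same wrapper is a natural place for the additional access checks and error detection the letter mentions (e.g. poisoning freed blocks or validating pointers before release).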
Some terms to look up are:
virtual paging
multi-tasking
IBM MVT OS (Multiprogramming with a Variable number of Tasks)
memory holes (not Black Holes though)
first-fit
best-fit
A tough resource (which may be out of print) is:
Queueing Systems (Vol. I & II)
Leonard Kleinrock
Wiley
Some Operating Systems authors are:
Donovan
Per Brinch Hansen
Tanenbaum
I'm terribly sorry about the disjointed nature of this reply. If you
have any questions, please don't hesitate to write.
Arthur Schwarz
schwarza@gdls.com (work)
aschwarz@acm.org (home)
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: dynamic memory allocation
1997-06-18 0:00 ` Stephen Leake
1997-06-19 0:00 ` Arthur Schwarz
@ 1997-06-20 0:00 ` David Wheeler
1 sibling, 0 replies; 20+ messages in thread
From: David Wheeler @ 1997-06-20 0:00 UTC (permalink / raw)
Stephen Leake (Stephen.Leake@gsfc.nasa.gov) wrote:
Another good reference regarding the limitations of reliability measures
(with hints about the general limitations of testing) is:
"The infeasibility of quantifying the reliability of life-critical
real-time software." Ricky W. Butler and George B. Finelli.
IEEE Transactions on Software Engineering, 19(1):3-12, January 1993.
--- David A. Wheeler
dwheeler@ida.org
* Re: dynamic memory allocation
1997-06-18 0:00 ` David Wheeler
1997-06-18 0:00 ` Stephen Leake
@ 1997-06-19 0:00 ` JP Thornley
1 sibling, 0 replies; 20+ messages in thread
From: JP Thornley @ 1997-06-19 0:00 UTC (permalink / raw)
In article: <5o7jql$jpo@news.ida.org> wheeler@ida.org (David Wheeler)
writes:
> Regarding memory allocation issues, this is the sort of thing that's
> so obvious to practitioners that it's rarely written down.
> I believe that the U.K.'s 00-55 military standard required
> that allocation only occur during system initialization; note that
> 00-55 also advocated the use of formal methods.
> Perhaps someone in the U.K. can enlighten you on the current status of 00-55.
>
The 00-55 standard has now been finalised and we're waiting to get the
printed versions.
In the meantime the document is available on the web - go to www.mod.uk
and follow the link "00-56 and Computer Guidance". There are html and
downloadable versions of both 00-55 and 00-56.
On the question of dynamic memory, 00-55 now contains the following
(apologies for the loss of formatting):-
The standard says:-
35.3 The Software Design shall be:
..
(c) such that justification can be provided which shows that it meets
its specification in terms of both functionality and performance and
that it does nothing else;
(d) consistent with all non-functional requirements of the Software
Requirement, including fault-tolerance, size, capacity, accuracy,
maintainability, reliability, usability and configuration.
and the Guidance says:-
35.3 The requirement for the Software Design to be constructed in a
manner which permits justification that it meets its specification tends
to restrict the features and styles that may be used. The Design Team
should pay careful attention to the following:
..
(j) Dynamic memory allocation (heaps): The use of pointers is a natural
way to declare certain data structures, including linked lists, graphs
and trees. It is almost inconceivable to imagine developing a compiler
without the use of pointers. Pointers may reference global areas where
objects are allocated and their storage is managed (the heap space).
The main concern with using dynamic memory allocation is the difficulty
of predicting whether the memory space allowed is adequate, particularly
if it becomes fragmented as it is reused. The use of dynamic memory
allocation is not a problem provided that exhausting the memory space
does not constitute a dangerous failure. Note that even if the
application code does not use dynamic memory allocation, the compiler
may well use memory space, typically when arrays and records are passed
as parameters to procedures and functions. Object oriented languages
generally require the use of dynamic memory allocation. For most real
time systems, heap space should only be used when:
(i) the operations are deterministic;
(ii) the pointers are typed (ie the space required for an object can be
predicted);
(iii) the space used is bounded;
(iv) storage fragmentation does not occur.
HTH
Phil Thornley
--
------------------------------------------------------------------------
| JP Thornley EMail jpt@diphi.demon.co.uk |
| phil.thornley@acm.org |
------------------------------------------------------------------------
* Re: dynamic memory allocation
1997-06-16 0:00 dynamic memory allocation Stephen Leake
` (5 preceding siblings ...)
1997-06-18 0:00 ` David Wheeler
@ 1997-06-18 0:00 ` David Wheeler
6 siblings, 0 replies; 20+ messages in thread
From: David Wheeler @ 1997-06-18 0:00 UTC (permalink / raw)
Stephen Leake (Stephen.Leake@gsfc.nasa.gov) wrote:
: I'm trying to convince my project that dynamic memory allocation is a
: bad idea in an embedded software system (a satellite control system, in
: this case).
: They are proposing a message passing scheme where sending tasks allocate
: buffers for each message from a heap, and receiving tasks deallocate. I
: have suggested that the heap could become fragmented (the buffers are
: NOT all the same size). They say "we'll just test it thoroughly".
"NASA Guidebook for Safety Critical Software" (NASA-GB-1740.13-96)
(see http://www.ivv.nasa.gov) says in 4.2.2.4 on page 47:
Dynamic memory allocation: ensure adequate resources are available
to accommodate usage of dynamic memory allocation, without conflicts.
Identify and protect critical memory blocks. Poor memory management
has been a leading factor in several critical failures.
Alsys' (now Aonix') "Safety-Critical Handbook" (sorry, I don't see
a year of publication), page 26, says:
Heap Storage
Use of heap storage presents a number of problems for certification...
To minimize fragmentation, the runtime system typically uses algorithms
to search for space availability ... As these searches are not
deterministic, they are not permitted in safety critical systems.
The use of the heap must be restricted to a predictable set of
operations where the time and memory used can be determined by analysis,
and verified by testing.
You might also wish to see:
"Assessing Traditional Verification's Effectiveness on Safety-Critical
Software Systems" by Gowen and Collofello, Journal of Systems Software,
1994:26:103-115. In an experiment, testing failed to find key safety defects.
--- David A. Wheeler
dwheeler@ida.org