comp.lang.ada
From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: Variable-block memory managers bad in realtime
Date: 1997/12/14
Message-ID: <dewar.882107495@merv>
In-Reply-To: gwinn-1012971816280001@dh5055225.res.ray.com


<<The historical problem with such algorithms, many of which are quite good,
is that they aren't *perfect*, and there is always a spectrum of
allocations and deallocations that will cause failure, because their
design depends on statistical regularities of one kind or another.

Another reason we have always avoided such algorithms is that they were
too complex for us to prove that they would *never* jam, even after months
of continuous heavy field use without a system reset.  Even the slightest
memory leakage or fragmentation will accumulate and become a problem under
such a scenario.>>


This is plain false. There are well known algorithms that work just fine,
and do not depend on statistical regularity. Basically the trade-off is
that if you want guaranteed constant-time allocation, you have to be willing
to waste some fixed proportion of memory in the worst case.
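As a minimal sketch of one such scheme (the names and constants here are mine, not from the post): a segregated free-list allocator that rounds each request up to a power of two and keeps one free list per size class. Both allocation and deallocation are O(1) with no search loops over live memory, and the price is exactly the fixed worst-case waste described above, since rounding up can lose up to half of each block to internal fragmentation.

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>

#define POOL_BYTES (1 << 16)
#define MIN_SHIFT 4                 /* smallest class: 16 bytes   */
#define MAX_SHIFT 12                /* largest class: 4096 bytes  */

static _Alignas(max_align_t) unsigned char pool[POOL_BYTES];
static size_t pool_used = 0;
static void *free_list[MAX_SHIFT + 1];   /* one list per size class */

/* Round a request up to its power-of-two size class. */
static int size_class(size_t n) {
    int c = MIN_SHIFT;
    while (((size_t)1 << c) < n) c++;
    return c;
}

void *o1_alloc(size_t n) {
    int c = size_class(n);
    if (c > MAX_SHIFT) return NULL;
    if (free_list[c]) {                  /* O(1): pop the free list */
        void *p = free_list[c];
        free_list[c] = *(void **)p;
        return p;
    }
    size_t bytes = (size_t)1 << c;       /* O(1): carve from the pool */
    if (pool_used + bytes > POOL_BYTES) return NULL;
    void *p = &pool[pool_used];
    pool_used += bytes;
    return p;
}

void o1_free(void *p, size_t n) {        /* O(1): push on the free list */
    int c = size_class(n);
    *(void **)p = free_list[c];
    free_list[c] = p;
}
```

The constant bound holds because a freed block is never split or coalesced; it simply goes back on the list for its own size class, which also means the allocator can never "jam" after arbitrary alloc/free sequences within its capacity.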

As for proving such algorithms correct, again you need to go to the literature.

This is precisely the kind of area where you don't need "domain experts".
You need people who are aware of the standard work in analysis of algorithms,
for which there is a large established literature. 

No doubt you will complain that you don't want or need "experts in memory
allocation", but as your note makes clear you certainly COULD benefit from
having people in your project who know this kind of material. Once again,
this is not the sort of thing you can pick up from newsgroup FAQ's. The
literature on memory allocation is comprehensive, and accessible to someone
with basic knowledge of the area, but it is not so accessible if you don't
have the general background. 

If I gave my graduate students in CS the task of looking up a memory
allocation algorithm that had "perfect" characteristics as far as constant
time (i.e. O(1)) allocation, I would expect them to be able to track it
down. On the other hand, if I asked them to track down papers in high
energy physics or quantum chemistry, I would not expect them to have the
background.

So this is another opportunity to emphasize that these days, we recognize
that programming is a complex task, but that there are many tools available
to help, some of them quite complex, and requiring a significant level of
training and understanding. The best programming efforts are where domain
experts work with experts in the technology in a cooperative manner to
make sure that the best technology is used to incorporate the most accurate
model of the domain, reaching for the best knowledge available in both areas.

If you look at what is happening in scientific programming these days, you
see an interesting trend. The typical quantum chemistry or crystallography
programs of the 60's (my thesis is in the xtalography area), did not require
complex algorithms from a CS point of view, and the work was done by chemists
who had picked up some general Fortran programming knowledge.

But even back then, the result was very effective programming of silly
algorithms in some cases. I improved one data gathering program by replacing
a bubble sort with treesort3 (people know it as heapsort today), and then
further improved it by devising a modification of treesort3 that halved the
number of compares from 2logN to logN [already I was becoming more of a
computer scientist than a chemist (*)]
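For reference, here is the plain textbook treesort3/heapsort in C. This is a sketch of the classic algorithm only, not Dewar's halved-compare variant, whose details are in his thesis and in the GNAT sources he points to below. Note the two comparisons per heap level in the sift-down (one to pick the larger child, one to test the sifted key against it); roughly one of those per level is what the kind of modification he describes aims to save.

```c
#include <assert.h>
#include <stddef.h>

/* Sift the element at 'root' down into the max-heap a[start..end]. */
static void sift_down(int a[], size_t start, size_t end) {
    size_t root = start;
    while (2 * root + 1 <= end) {
        size_t child = 2 * root + 1;
        if (child + 1 <= end && a[child] < a[child + 1])
            child++;                      /* compare the two children     */
        if (a[root] >= a[child]) return;  /* compare key against child    */
        int t = a[root]; a[root] = a[child]; a[child] = t;
        root = child;
    }
}

void heap_sort(int a[], size_t n) {
    if (n < 2) return;
    for (size_t i = n / 2; i-- > 0; )     /* build the heap bottom-up */
        sift_down(a, i, n - 1);
    for (size_t end = n - 1; end > 0; end--) {
        int t = a[0]; a[0] = a[end]; a[end] = t;  /* extract the max */
        sift_down(a, 0, end - 1);
    }
}
```

For the actual optimized variant, the GNAT.Heap_Sort_A / GNAT.Heap_Sort_G bodies mentioned at the end of the post are the place to look.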

These days, scientific programs tend to use highly complex algorithms, and
numerical computing is an avid consumer of some of the most advanced work
in algorithmic design and analysis. At the Courant Institute, computer
scientists, numerical analysts, and other scientific experts work together
to solve problems in such areas as secondary oil recovery, design of airframes,
and design of fusion reactors.

Robert Dewar
Ada Core Technologies

(*) This optimization has never been properly published, although full details
are in my thesis. The idea appeared as an exercise in Volume III of Knuth,
though Knuth's answer was in fact wrong in an important detail (one of the
several nice Wells Fargo checks for $1 that I got from this volume was for
finding this bug :-)

If you are interested in learning more about that optimization, or using
it, you have easy access to the necessary information. Just look at the
extensively commented body of GNAT.Heap_Sort_A or GNAT.Heap_Sort_G in
files g-hesora.adb and g-hesorg.adb. In general, if you are a GNAT user,
spend some time wandering through the GNAT.xxx hierarchy of useful stuff;
you will find it worthwhile :-)





Thread overview: 10+ messages
1997-12-10  0:00 Variable-block memory managers bad in realtime Joe Gwinn
1997-12-14  0:00 ` Robert Dewar [this message]
1997-12-19  0:00 Joe Gwinn
1997-12-22  0:00 Joe Gwinn
1997-12-23  0:00 ` Robert Dewar
1997-12-23  0:00 ` John Briggs
1997-12-24  0:00 ` Jon S Anthony
1997-12-30  0:00   ` Joe Gwinn
1997-12-31  0:00     ` Jon S Anthony
1998-01-01  0:00       ` Henry Baker