comp.lang.ada
From: gwinn@res.ray.com (Joe Gwinn)
Subject: Re: Variable-block memory managers bad in realtime
Date: 1997/12/19
Message-ID: <gwinn-1912971430500001@dh5055052.res.ray.com>


> From: dewar@merv.cs.nyu.edu (Robert Dewar)
> Newsgroups: comp.lang.ada
> Date: 14 Dec 1997 09:08:07 -0500
> 
> <<The historical problem with such algorithms, many of which are quite good,
> is that they aren't *perfect*, and there is always a spectrum of
> allocations and deallocations that will cause failure, because their
> design depends on statistical regularities of one kind or another.
> 
> Another reason we have always avoided such algorithms is that they were
> too complex for us to prove that they would *never* jam, even after months
> of continuous heavy field use without a system reset.  Even the slightest
> memory leakage or fragmentation will accumulate and become a problem under
> such a scenario.>> -- from Joe Gwinn
> 
> 
> This is plain false. There are well known algorithms that work just fine,
> and do not depend on statistical regularity. Basically the tradeoff is
> that if you want guaranteed constant time allocation, you have to be willing
> to waste some fixed proportion of memory in the worst case.

Waste lots of memory, actually.  Sometimes we don't have the memory to
waste, or we would rather waste it on something else.  I don't know that
these algorithms have guaranteed absolute upper bounds on response time,
and absolute protection against fragmentation, together with the requisite
efficiency.  
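
For concreteness, my rough understanding of the kind of scheme being
described is segregated power-of-two size classes: every request is
rounded up to the next class and served from that class's own free list
of ready-made blocks.  The little sketch below (the names and sizes are
mine, purely illustrative, not from any real allocator) shows only the
size-class arithmetic, which is where both the constant service time
and the wasted memory come from.

with Ada.Text_IO; use Ada.Text_IO;

procedure Size_Class_Demo is

   --  Round a request up to the next power of two.  In the real
   --  scheme each class keeps its own free list of ready-made
   --  blocks, so an allocation is one lookup plus one list pop.
   function Rounded (Request : Positive) return Positive is
      Size : Positive := 1;
   begin
      while Size < Request loop
         Size := Size * 2;
      end loop;
      return Size;
   end Rounded;

   Requests : constant array (1 .. 4) of Positive := (24, 100, 1_025, 40_000);

begin
   for I in Requests'Range loop
      Put_Line ("request" & Integer'Image (Requests (I))
                & "  block" & Integer'Image (Rounded (Requests (I)))
                & "  waste" & Integer'Image
                    (Rounded (Requests (I)) - Requests (I)));
   end loop;
end Size_Class_Demo;

A 1025-byte request occupies a 2048-byte block, so in the worst case
nearly half of memory goes to rounding.  That is the "fixed proportion"
being traded away, and it is a lot.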

Fixed-block allocators waste no space, and have absolutely constant
service time.
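
By contrast, the fixed-block pool we have in mind is barely more than a
linked list.  The sketch below is a simplified stand-in (the names and
the pool size are illustrative, not lifted from any of our systems):
all blocks are the same size, the free ones are threaded on a list, and
Allocate and Free are each a couple of assignments, so the worst case
is the average case.

with Ada.Text_IO; use Ada.Text_IO;

procedure Fixed_Pool_Demo is

   Block_Count : constant := 1_024;   --  pool capacity, fixed at build time

   subtype Handle is Natural range 0 .. Block_Count;
   No_Block : constant Handle := 0;   --  0 means "nothing available"

   --  Next_Free threads a free list through the pool; Free_Head is
   --  the first free block.  Each operation touches a fixed number
   --  of words, so service time is constant, not just O(1) on average.
   Next_Free : array (1 .. Block_Count) of Handle;
   Free_Head : Handle := 1;

   function Allocate return Handle is
      H : constant Handle := Free_Head;
   begin
      if H /= No_Block then
         Free_Head := Next_Free (H);
      end if;
      return H;
   end Allocate;

   procedure Free (H : in Handle) is
   begin
      if H /= No_Block then
         Next_Free (H) := Free_Head;
         Free_Head := H;
      end if;
   end Free;

begin
   --  Link every block onto the free list once, at start-up.
   for I in 1 .. Block_Count - 1 loop
      Next_Free (I) := I + 1;
   end loop;
   Next_Free (Block_Count) := No_Block;

   declare
      A : constant Handle := Allocate;
      B : constant Handle := Allocate;
   begin
      Put_Line ("got blocks" & Handle'Image (A) & " and" & Handle'Image (B));
      Free (B);
      Free (A);
   end;
end Fixed_Pool_Demo;

Exhaustion is still possible, of course, but it shows up as a null
handle you can test for, not as a latency spike.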

 
> As for proving such algorithms correct, again you need to go to the literature.

Strikeout!  The literature on memory management algorithms is huge.  I've
yet to see a fully suitable variable-block algorithm.  It appears that you
have dug much deeper into the literature than I have.  Perhaps you can
point me to a suitable article or two?  

 
> This is precisely the kind of area where you don't need "domain experts".
> You need people who are aware of the standard work in analysis of algorithms,
> for which there is a large established literature. 

See above.  The vast bulk of that large established literature solves
problems we don't have in realtime, at the expense of problems we do
have.  Perhaps you have been so fortunate as to find one of the few truly
relevant articles.  I haven't.  If so, a pointer or two would be greatly
appreciated.  I doubt that I will ever have the time to read a significant
fraction of that literature.


> No doubt you will complain that you don't want or need "experts in memory
> allocation", but as your note makes clear you certainly COULD benefit from
> having people in your project who know this kind of material. Once again,
> this is not the sort of thing you can pick up from newsgroup FAQ's. The
> literature on memory allocation is comprehensive, and accessible to someone
> with basic knowledge of the area, but it is not so accessible if you don't
> have the general background. 

There are plenty of people here who can read the literature.  The question
is time and interest and reason, especially if they have an adequate,
proven solution in hand.  It's rare to risk a new and untried algorithm in
something so fundamental, unless the current algorithms are clearly
inadequate.  

We are a development lab, not a research university.  The objectives and
tradeoffs are quite different.  There is no endowed chair in anything so
specific as memory allocation; we are all of necessity generalists.  We
draw on the university experts if we can, but are not ourselves
university-level experts in such things, and never will be.

Actually, newsgroups and FAQs are a rich source of specific pointers.  The
more people combing the literature, the higher the chance someone
somewhere will strike paydirt.  Also, the more practical information there
is on what part of the theory really works in industrial applications, or
why this or that theory fails in practice, the better.

 
> If I gave my graduate students in CS the task of looking up a memory
> allocation algorithm that had "perfect" characteristics as far as constant
> time (i.e. O(1)) allocation, I would expect them to be able to track it
> down. On the other hand, if I asked them to track down papers in high
> energy physics or quantum chemistry, I would not expect them to have the
> background.

We are not students, we are practitioners, and have limited time and
interest in research, and even less interest in reinventing the wheel.

Is that "O(1)" the average behaviour, or a hard upper bound?  We need an
algorithm that is mathematically proven to run in fixed time regardless
of just about everything, and the proof has to be simple enough that we
(and our customers) can understand it without taking an additional
graduate degree, which would take longer than it takes us to build the
typical system.

 
> So this is another opportunity to emphasize that these days, we recognize
> that programming is a complex task, but that there are many tools available
> to help, some of them quite complex, and requiring a significant level of
> training and understanding. The best programming efforts are where domain
> experts work with experts in the technology in a cooperative manner to
> make sure that the best technology is used to incorporate the most accurate
> model of the domain, reaching for the best knowledge available in both areas.

It would be nice, yes...  Sounds like the Courant Institute.  I don't
think they build embedded realtime systems under contract, though.

 
> If you look at what is happening in scientific programming these days, you
> see an interesting trend. The typical quantum chemistry or crystallography
> programs of the 60's (my thesis is in the xtalography area), did not require
> complex algorithms from a CS point of view, and the work was done by chemists
> who had picked up some general Fortran programming knowledge.
> 
> But even back then, the result was very effective programming of silly
> algorithms in some cases. I improved one data gathering program by replacing
> a bubble sort with treesort3 (people know it as heapsort today), and then
> further improved it by devising a modification of treesort3 that halved the
> number of compares from 2logN to logN [already I was becoming more of a
> computer scientist than a chemist (*)]

I have similar war stories.  My favorite, from Fortran on Univac 1108s in
the 1970s, involved a young radio propagation engineer who recomputed
everything at every turn because he didn't know what a disk was.  The CPU
and elapsed time went from days to minutes after he discovered disks.

These stories prove that not all programmers (engineers, cobblers,
tailors, ...) are equally adept, and that domain experts may not be
computer experts, and vice versa.  But, we knew that.

 
> These days, scientific programs tend to use highly complex algorithms, and
> numerical computing is an avid consumer of some of the most advanced work
> in algorithmic design and analysis. At the Courant Institute, computer
> scientists, numerical analysts, and other scientific experts work together
> to solve problems in such areas as secondary oil recovery, design of
> airframes, and design of fusion reactors.

Well, we are trying to solve very different problems.  We are trying to do
comparatively simple things, but to do them at 64 Hz, controlling hardware
and *never* overrunning.  I doubt that those scientific programmers would
learn much of use from our projects, and vice versa.


> (*) This optimization has never been properly published, although full details
> are in my thesis. The idea appeared as an exercise in Volume III of Knuth,
> though Knuth's answer was in fact wrong in an important detail (one of the
> several nice Wells Fargo checks for $1 that I got from this volume was for
> finding this bug :-)

Congratulations.  It isn't just a dollar -- it comes complete with
bragging rights.  

So, it's Knuth with bugfix?  I assume that the second edition has the
problem fixed.  Which problem was it?

 
> If you are interested in learning more about that optimization, or using
> it, you have easy access to the necessary information. Just look at the
> extensively commented body of GNAT.Heap_Sort_A or GNAT.Heap_Sort_G in
> files g-hesora.adb and g-hesorg.adb. In general spend some time wandering
> through the GNAT.xxx hierarchy of useful stuff if you are a GNAT user, you
> will find it worth while :-)

Actually, it sounds like I should instead read the relevant parts of your
thesis.  This way I get peer-reviewed rationale, not just the code.  What
are the title, date and school?  I'm pretty sure the crystallography won't
do permanent damage.
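
In the meantime, for anyone following along, my from-memory sketch of the
unmodified treesort3/heapsort inner loop is below (this is not Robert's
code, nor GNAT's; the names are mine).  Each level of the sift costs two
comparisons: one to pick the larger child, and one to ask whether the
sifted element is already in place.  As I understand the modification, it
drops that second compare by running the hole all the way to the bottom
and then bouncing the element back up, which is where the halving from
2 log N to log N comes from.

with Ada.Text_IO; use Ada.Text_IO;

procedure Heap_Demo is

   type Int_Array is array (Positive range <>) of Integer;

   procedure Heap_Sort (A : in out Int_Array) is

      procedure Sift_Down (Lo, Hi : Positive) is
         Root  : Positive := Lo;
         Child : Positive;
         Temp  : constant Integer := A (Root);
      begin
         loop
            Child := 2 * Root - A'First + 1;          --  left child
            exit when Child > Hi;
            --  First compare: pick the larger of the two children.
            if Child < Hi and then A (Child) < A (Child + 1) then
               Child := Child + 1;
            end if;
            --  Second compare: is the sifted element already in place?
            --  (As I understand it, the thesis modification skips this
            --  test on the way down and sifts back up from the bottom.)
            exit when Temp >= A (Child);
            A (Root) := A (Child);
            Root := Child;
         end loop;
         A (Root) := Temp;
      end Sift_Down;

      Last : Natural := A'Last;

   begin
      --  Build the heap, then repeatedly move the root to the end.
      for I in reverse A'First .. A'First + (A'Length / 2) - 1 loop
         Sift_Down (I, A'Last);
      end loop;
      while Last > A'First loop
         declare
            T : constant Integer := A (A'First);
         begin
            A (A'First) := A (Last);
            A (Last)    := T;
         end;
         Last := Last - 1;
         Sift_Down (A'First, Last);
      end loop;
   end Heap_Sort;

   Data : Int_Array := (5, 3, 8, 1, 9, 2, 7);

begin
   Heap_Sort (Data);
   for I in Data'Range loop
      Put (Integer'Image (Data (I)));
   end loop;
   New_Line;
end Heap_Demo;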


Joe Gwinn



