comp.lang.ada
* Variable-block memory managers bad in realtime
@ 1997-12-10  0:00 Joe Gwinn
  1997-12-14  0:00 ` Robert Dewar
  0 siblings, 1 reply; 10+ messages in thread
From: Joe Gwinn @ 1997-12-10  0:00 UTC (permalink / raw)



Robert Dewar said:

> This is an over-generalization, there are algorithms known that provide
> variable size allocation with constant performance. These algorithms are
> the ones that should be used for real time performance.

The historical problem with such algorithms, many of which are quite good,
is that they aren't *perfect*, and there is always a spectrum of
allocations and deallocations that will cause failure, because their
design depends on statistical regularities of one kind or another.  

Another reason we have always avoided such algorithms is that they were
too complex for us to prove that they would *never* jam, even after months
of continuous heavy field use without a system reset.  Even the slightest
memory leakage or fragmentation will accumulate and become a problem under
such a scenario.

With UNIX X/Windows systems, we have found it necessary to periodically
reset the platform to reclaim the leaked memory (at various levels in the
system).  If one provides enough memory, and presses hard enough on
finding the leaks, resets may be done as part of standard scheduled
preventive maintenance.

I have repaired my share of systems that used something other than a
simple fixed-block memory manager; the fix was always to revert to a
linked-list fixed-block memory manager with something like six different
sizes of block.
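
For readers who have not seen the technique, here is a minimal Ada sketch of
one such linked-list fixed-block pool (names and sizes are illustrative only,
not taken from any fielded system).  One instance is created per block size;
allocation and release are each a single pointer swap, so service time is
constant and there is no fragmentation within a pool:

generic
   Block_Size  : Positive;    --  bytes per block (for example 64, 256, 1024)
   Block_Count : Positive;    --  blocks reserved once, at start-up
package Fixed_Pool is

   subtype Index is Natural range 0 .. Block_Count;  --  0 means "none left"

   type Block is array (1 .. Block_Size) of Character;

   --  All storage is reserved up front; a client reads and writes
   --  Storage (B) for the index B handed back by Allocate.
   Storage : array (1 .. Block_Count) of Block;

   procedure Allocate (B : out Index);  --  constant time; B = 0 if exhausted
   procedure Release  (B : in  Index);  --  constant time

end Fixed_Pool;

package body Fixed_Pool is

   Next_Free : array (1 .. Block_Count) of Index;
   Free_Head : Index := 0;

   procedure Allocate (B : out Index) is
   begin
      B := Free_Head;                   --  pop the head of the free list
      if B /= 0 then
         Free_Head := Next_Free (B);
      end if;
   end Allocate;

   procedure Release (B : in Index) is
   begin
      Next_Free (B) := Free_Head;       --  push the block back onto the list
      Free_Head := B;
   end Release;

begin
   --  Chain every block onto the free list once, at elaboration time.
   for I in 1 .. Block_Count - 1 loop
      Next_Free (I) := I + 1;
   end loop;
   Next_Free (Block_Count) := 0;
   Free_Head := 1;
end Fixed_Pool;

One instantiation per size, for example
"package Pool_256 is new Fixed_Pool (Block_Size => 256, Block_Count => 2_000);",
gives the handful of pools described above.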


That said, I would be curious for references to your favorite memory
management algorithms.  


As for malloc() and free() in VxWorks, I talked to Wind River today, and
they said that their malloc uses a standard first-fit algorithm, and they
do not claim that it is absolutely fragmentation proof.  Most mallocs use
first-fit, actually, so the point of there being a VxWorks malloc is to
tie it to VxWorks' underlying OS memory allocator, just as UNIX systems'
malloc calls on the UNIX kernel to get more memory for the calling UNIX
process when the process heap is exhausted.  All in all, Wind River made
no claim of special realtime abilities for their malloc.
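
For concreteness, the heart of a first-fit allocator is a linear walk down a
free list of variable-sized chunks, roughly as in the Ada sketch below (an
illustration of the general algorithm, not Wind River's code).  The walk is
bounded only by the length of the free list, and the splitting of chunks is
what leaves behind the small remainders that fragment the heap over time:

package First_Fit_Sketch is

   type Chunk;
   type Chunk_Ptr is access Chunk;
   type Chunk is record
      Size : Natural;     --  usable bytes in this free chunk
      Next : Chunk_Ptr;   --  next chunk on the free list
   end record;

   Free_List : Chunk_Ptr := null;   --  seeded from the heap at start-up

   function Take_First_Fit (Wanted : Positive) return Chunk_Ptr;
   --  The first chunk large enough, unlinked from the list; null if none fits.

end First_Fit_Sketch;

package body First_Fit_Sketch is

   function Take_First_Fit (Wanted : Positive) return Chunk_Ptr is
      Prev : Chunk_Ptr := null;
      Cur  : Chunk_Ptr := Free_List;
   begin
      while Cur /= null loop           --  unbounded walk: not constant time
         if Cur.Size >= Wanted then
            if Prev = null then        --  unlink the chunk we found
               Free_List := Cur.Next;
            else
               Prev.Next := Cur.Next;
            end if;
            --  (a real malloc would split off and keep any excess bytes)
            return Cur;
         end if;
         Prev := Cur;
         Cur  := Cur.Next;
      end loop;
      return null;                     --  no fit: grow the heap, or fail
   end Take_First_Fit;

end First_Fit_Sketch;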


Joe Gwinn





* Re: Variable-block memory managers bad in realtime
  1997-12-10  0:00 Variable-block memory managers bad in realtime Joe Gwinn
@ 1997-12-14  0:00 ` Robert Dewar
  0 siblings, 0 replies; 10+ messages in thread
From: Robert Dewar @ 1997-12-14  0:00 UTC (permalink / raw)



<<The historical problem with such algorithms, many of which are quite good,
is that they aren't *perfect*, and there is always a spectrum of
allocations and deallocations that will cause failure, because their
design depends on statistical regularities of one kind or another.

Another reason we have always avoided such algorithms is that they were
too complex for us to prove that they would *never* jam, even after months
of continuous heavy field use without a system reset.  Even the slightest
memory leakage or fragmentation will accumulate and become a problem under
such a scenario.>>


This is plain false. There are well known algorithms that work just fine,
and do not depend on statistical regularity. Basically the tradeoff is
that if you want guaranteed constant time allocation, you have to be willing
to waste some fixed proportion of memory in the worst case.
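
To make the trade-off concrete, one textbook-style scheme (sketched below in
Ada; the class sizes and names are illustrative, and this is not necessarily
the algorithm referred to above) keeps a separate free list per power-of-two
size class.  Every request is rounded up to the next class, so allocation and
release are constant-time list operations, and for any request larger than the
smallest class the waste is less than half the block handed out -- the fixed
proportion mentioned above:

package Size_Classes is

   --  Power-of-two classes from 16 bytes to 2K bytes; each class is served
   --  by its own fixed-block free list, so Allocate and Free reduce to a
   --  constant-time pop and push on the chosen list.
   type Class_Id is range 1 .. 8;

   Class_Size : constant array (Class_Id) of Positive :=
     (16, 32, 64, 128, 256, 512, 1_024, 2_048);

   function Class_Of (Request : Positive) return Class_Id;
   --  Smallest class that holds Request bytes; a 1_025-byte request gets a
   --  2_048-byte block, so above the smallest class no allocation wastes as
   --  much as half its block.  Raises Storage_Error beyond the largest class.

end Size_Classes;

package body Size_Classes is

   function Class_Of (Request : Positive) return Class_Id is
   begin
      for C in Class_Id loop           --  at most 8 comparisons: bounded time
         if Request <= Class_Size (C) then
            return C;
         end if;
      end loop;
      raise Storage_Error;
   end Class_Of;

end Size_Classes;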

As for proving such algorithms correct, again you need to go to the literature.

This is precisely the kind of area where you don't need "domain experts".
You need people who are aware of the standard work in analysis of algorithms,
for which there is a large established literature. 

No doubt you will complain that you don't want or need "experts in memory
allocation", but as your note makes clear you certainly COULD benefit from
having people in your project who know this kind of material. Once again,
this is not the sort of thing you can pick up from newsgroup FAQ's. The
literature on memory allocation is comprehensive, and accessible to someone
with basic knowledge of the area, but it is not so accessible if you don't
have the general background. 

If I gave my graduate students in CS the task of looking up a memory
allocation algorithm that had "perfect" characteristics as far as constant
time (i.e. O(1)) allocation, I would expect them to be able to track it
down. On the other hand, if I asked them to track down papers in high
energy physics or quantum chemistry, I would not expect them to have the
background.

So this is another opportunity to emphasize that these days, we recognize
that programming is a complex task, but that there are many tools available
to help, some of them quite complex, and requiring a significant level of
training and understanding. The best programming efforts are where domain
experts work with experts in the technology in a cooperative manner to
make sure that the best technology is used to incorporate the most accurate
model of the domain, reaching for the best knowledge available in both areas.

If you look at what is happening in scientific programming these days, you
see an interesting trend. The typical quantum chemistry or crystallography
programs of the 60's (my thesis is in the xtalography area), did not require
complex algorithms from a CS point of view, and the work was done by chemists
who had picked up some general Fortran programming knowledge.

But even back then, the result was very effective programming of silly
algorithms in some cases. I improved one data gathering program by replacing
a bubble sort with treesort3 (people know it as heapsort today), and then
further improved it by devising a modification of treesort3 that halved the
number of compares from 2logN to logN [already I was becoming more of a
computer scientist than a chemist (*)]

These days, scientific programs tend to use highly complex algorithms, and
numerical computing is an avid consumer of some of the most advanced work
in algorithmic design and analysis. At the Courant Institute, computer
scientists, numerical analysts, and other scientific experts work together
to solve problems in such areas as secondary oil recovery, design of airframes,
and design of fusion reactors.

Robert Dewar
Ada Core Technologies

(*) This optimization has never been properly published, although full details
are in my thesis. The idea appeared as an exercise in Volume III of Knuth,
though Knuth's answer was in fact wrong in an important detail (one of the
several nice Wells Fargo checks for $1 that I got from this volume was for
finding this bug :-)

If you are interested in learning more about that optimization, or using
it, you have easy access to the necessary information. Just look at the
extensively commented body of GNAT.Heap_Sort_A or GNAT.Heap_Sort_G in
files g-hesora.adb and g-hesorg.adb. In general, spend some time wandering
through the GNAT.xxx hierarchy of useful stuff if you are a GNAT user; you
will find it worthwhile :-)






* Re: Variable-block memory managers bad in realtime
@ 1997-12-19  0:00 Joe Gwinn
  0 siblings, 0 replies; 10+ messages in thread
From: Joe Gwinn @ 1997-12-19  0:00 UTC (permalink / raw)



> From: dewar@merv.cs.nyu.edu (Robert Dewar)
> Newsgroups: comp.lang.ada
> Date: 14 Dec 1997 09:08:07 -0500
> 
> <<The historical problem with such algorithms, many of which are quite good,
> is that they aren't *perfect*, and there is always a spectrum of
> allocations and deallocations that will cause failure, because their
> design depends on statistical regularities of one kind or another.
> 
> Another reason we have always avoided such algorithms is that they were
> too complex for us to prove that they would *never* jam, even after months
> of continuous heavy field use without a system reset.  Even the slightest
> memory leakage or fragmentation will accumulate and become a problem under
> such a scenario.>> -- from Joe Gwinn
> 
> 
> This is plain false. There are well known algorithms that work just fine,
> and do not depend on statistical regularity. Basically the tradeoff is
> that if you want guaranteed constant time allocation, you have to be willing
> to waste some fixed proportion of memory in the worst case.

Waste lots of memory, actually.  Sometimes we don't have the memory to
waste, or we would rather waste it on something else.  I don't know that
these algorithms have guaranteed absolute upper bounds on response time,
and absolute protection against fragmentation, together with the requisite
efficiency.  

Fixed-block allocators waste no space, and have absolutely constant
service time.

 
> As for proving such algorithms correct, again you need to go to the literature.

Strikeout!  The literature on memory management algorithms is huge.  I've
yet to see a fully suitable variable-block algorithm.  It appears that you
have dug much deeper into the literature than I have.  Perhaps you can
point me to a suitable article or two?  

 
> This is precisely the kind of area where you don't need "domain experts".
> You need people who are aware of the standard work in analysis of algorithms,
> for which there is a large established literature. 

See above.  The vast bulk of that large established literature solves
problems we don't have in realtime, at the expense of problems we do
have.  Perhaps you have been so fortunate as to find one of the few truly
relevant articles.  I haven't.  If so, a pointer or two would be greatly
appreciated.  I doubt that I will ever have the time to read a significant
fraction of that literature.


> No doubt you will complain that you don't want or need "experts in memory
> allocation", but as your note makes clear you certainly COULD benefit from
> having people in your project who know this kind of material. Once again,
> this is not the sort of thing you can pick up from newsgroup FAQ's. The
> literature on memory allocation is comprehensive, and accessible to someone
> with basic knowledge of the area, but it is not so accessible if you don't
> have the general background. 

There are plenty of people here who can read the literature.  The question
is time and interest and reason, especially if they have an adequate,
proven solution in hand.  It's rare to risk a new and untried algorithm in
something so fundamental, unless the current algorithms are clearly
inadequate.  

We are a development lab, not a research university.  The objectives and
tradeoffs are quite different.  There is no endowed chair in anything so
specific as memory allocation; we are all of necessity generalists.  We
draw on the university experts if we can, but are not ourselves
university-level experts in such things, and never will be.

Actually, newsgroups and FAQs are a rich source of specific pointers.  The
more people combing the literature, the higher the chance someone
somewhere will strike paydirt.  Also, the more practical information there
is on what part of the theory really works in industrial applications, or
why this or that theory fails in practice, the better.

 
> If I gave my graduate students in CS the task of looking up a memory
> allocation algorithm that had "perfect" characteristics as far as constant
> time (i.e. O(1)) allocation, I would expect them to be able to track it
> down. On the other hand, if I asked them to track down papers in high
> energy physics or quantum chemistry, I would not expect them to have the
> background.

We are not students, we are practitioners, and have limited time and
interest in research, and even less interest in reinventing the wheel.

Is that "O(1)" the average behaviour, or the upper bound?   We need an
algorithm that is mathematically proven to be fixed-time, regardless of
just about everything, and the proof has to be simple enough that we (and
our customers) can understand it without having to take an added graduate
degree, which would take longer than it takes us to build the typical
system.

 
> So this is another opportunity to emphasize that these days, we recognize
> that programming is a complex task, but that there are many tools available
> to help, some of them quite complex, and requiring a significant level of
> training and understanding. The best programming efforts are where domain
> experts work with experts in the technology in a cooperative manner to
> make sure that the best technology is used to incorporate the most accurate
> model of the domain, reaching for the best knowledge available in both areas.

It would be nice, yes...  Sounds like the Courant Institute.  I don't
think they build embedded realtime systems under contract, though.

 
> If you look at what is happening in scientific programming these days, you
> see an interesting trend. The typical quantum chemistry or crystallography
> programs of the 60's (my thesis is in the xtalography area), did not require
> complex algorithms from a CS point of view, and the work was done by chemists
> who had picked up some general Fortran programming knowledge.
> 
> But even back then, the result was very effective programming of silly
> algorithms in some cases. I improved one data gathering program by replacing
> a bubble sort with treesort3 (people know it as heapsort today), and then
> further improved it by devising a modification of treesort3 that halved the
> number of compares from 2logN to logN [already I was becoming more of a
> computer scientist than a chemist (*)]

I have similar war stories.  My favorite, from Fortran on Univac 1108s in the
1970s, involved a young radio propagation engineer who recomputed
everything at every turn, because he didn't know what a disk was.  The CPU
and elapsed time went from days to minutes after he discovered disks.  

These stories prove that not all programmers (engineers, cobblers,
tailors, ...) are equally adept, and that domain experts may not be
computer experts, and vice versa.  But, we knew that.

 
> These days, scientific programs tend to use highly complex algorithms, and
> numerical computing is an avid consumer of some of the most advanced work
> in algorithmic design and analysis. At the Courant Institute, computer
> scientists, numerical analysts, and other scientific experts work together
> to solve problems in such areas as secondary oil recovery, design of
> airframes, and design of fusion reactors.

Well, we are trying to solve much different problems.  We are trying to do
by comparison simple things, but do them at 64 Hz, controlling hardware
and *never* overrunning.  I doubt that those scientific programmers would
learn much useful from our projects, and vice versa.


> (*) This optimization has never been properly published, although full details
> are in my thesis. The idea appeared as an exercise in Volume III of Knuth,
> though Knuth's answer was in fact wrong in an important detail (one of the
> several nice Wells Fargo checks for $1 that I got from this volume was for
> finding this bug :-)

Congratulations.  It isn't just a dollar -- it comes complete with
bragging rights.  

So, it's Knuth with bugfix?  I assume that the second edition has the
problem fixed.  Which problem was it?

 
> If you are interested in learning more about that optimization, or using
> it, you have easy access to the necessary information. Just look at the
> extensively commented body of GNAT.Heap_Sort_A or GNAT.Heap_Sort_G in
> files g-hesora.adb and g-hesorg.adb. In general, spend some time wandering
> through the GNAT.xxx hierarchy of useful stuff if you are a GNAT user; you
> will find it worthwhile :-)

Actually, it sounds like I should instead read the relevant parts of your
thesis.  This way I get peer-reviewed rationale, not just the code.  What
are the title, date and school?  I'm pretty sure the crystallography won't
do permanent damage.


Joe Gwinn





* Re: Variable-block memory managers bad in realtime
@ 1997-12-22  0:00 Joe Gwinn
  1997-12-23  0:00 ` Robert Dewar
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Joe Gwinn @ 1997-12-22  0:00 UTC (permalink / raw)



This is a re-posting.  Robert Dewar reports that although he got the email
of the original posting, he never saw it on the newsgroup.  I can see it,
but who knows what happened and who got it?  So, it's safest to repost.  I
have added some clarifying text from the side discussion between Robert
and myself.

Date: Fri, 19 Dec 1997 14:30:51 -0500
From: gwinn@ED.RAY.COM (Joe Gwinn)
To: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: Variable-block memory managers bad in realtime
Newsgroups: comp.lang.ada

(A copy of this message has also been posted to the following newsgroups:
comp.lang.ada)

> From: dewar@merv.cs.nyu.edu (Robert Dewar)
> Newsgroups: comp.lang.ada
> Date: 14 Dec 1997 09:08:07 -0500
> 
> <<The historical problem with such algorithms, many of which are quite good,
> is that they aren't *perfect*, and there is always a spectrum of
> allocations and deallocations that will cause failure, because their
> design depends on statistical regularities of one kind or another.
> 
> Another reason we have always avoided such algorithms is that they were
> too complex for us to prove that they would *never* jam, even after months
> of continuous heavy field use without a system reset.  Even the slightest
> memory leakage or fragmentation will accumulate and become a problem under
> such a scenario.>> -- from Joe Gwinn
> 
> 
> This is plain false. There are well known algorithms that work just fine,
> and do not depend on statistical regularity. Basically the tradeoff is
> that if you want guaranteed constant time allocation, you have to be willing
> to waste some fixed proportion of memory in the worst case.

Waste lots of memory, actually.  Sometimes we don't have the memory to
waste, or we would rather waste it on something else.  I don't know that
these algorithms have guaranteed absolute upper bounds on response time,
and absolute protection against fragmentation, together with the requisite
efficiency.  

Fixed-block allocators waste no space, and have absolutely constant
service time.

 
> As for proving such algorithms correct, again you need to go to the literature.

Strikeout!  The literature on memory management algorithms is huge.  I've
yet to see a fully suitable variable-block algorithm.  It appears that you
have dug much deeper into the literature than I have.  Perhaps you can
point me to a suitable article or two?  

 
> This is precisely the kind of area where you don't need "domain experts".
> You need people who are aware of the standard work in analysis of algorithms,
> for which there is a large established literature. 

See above.  The vast bulk of that large established literature solves
problems we don't have in realtime, at the expense of problems we do
have.  Perhaps you have been so fortunate as to find one of the few truly
relevant articles.  I haven't.  If so, a pointer or two would be greatly
appreciated.  I doubt that I will ever have the time to read a significant
fraction of that literature.


> No doubt you will complain that you don't want or need "experts in memory
> allocation", but as your note makes clear you certainly COULD benefit from
> having people in your project who know this kind of material. Once again,
> this is not the sort of thing you can pick up from newsgroup FAQ's. The
> literature on memory allocation is comprehensive, and accessible to someone
> with basic knowledge of the area, but it is not so accessible if you don't
> have the general background. 

There are plenty of people here who can read the literature.  The question
is time and interest and reason, especially if they have an adequate,
proven solution in hand.  It's rare to risk a new and untried algorithm in
something so fundamental, unless the current algorithms are clearly
inadequate.  

We are a development lab, not a research university.  The objectives and
tradeoffs are quite different.  There is no endowed chair in anything so
specific as memory allocation; we are all of necessity generalists.  We
draw on the university experts if we can, but are not ourselves
university-level experts in such things, and never will be.

Actually, newsgroups and FAQs are a rich source of specific pointers.  The
more people combing the literature, the higher the chance someone
somewhere will strike paydirt.  Also, the more practical information there
is on what part of the theory really works in industrial applications, or
why this or that theory fails in practice, the better.

 
> If I gave my graduate students in CS the task of looking up a memory
> allocation algorithm that had "perfect" characteristics as far as constant
> time (i.e. O(1)) allocation, I would expect them to be able to track it
> down. On the other hand, if I asked them to track down papers in high
> energy physics or quantum chemistry, I would not expect them to have the
> background.

We are not students, we are practitioners, and have limited time and
interest in research, and even less interest in reinventing the wheel.

Is that "O(1)" the average behaviour, or the upper bound?   We need an
algorithm that is mathematically proven to be fixed-time, regardless of
just about everything, and the proof has to be simple enough that we (and
our customers) can understand it without having to take an added graduate
degree, which would take longer than it takes us to build the typical
system.

 
> So this is another opportunity to emphasize that these days, we recognize
> that programming is a complex task, but that there are many tools available
> to help, some of them quite complex, and requiring a significant level of
> training and understanding. The best programming efforts are where domain
> experts work with experts in the technology in a cooperative manner to
> make sure that the best technology is used to incorporate the most accurate
> model of the domain, reaching for the best knowledge available in both areas.

It would be nice, yes...  Sounds like the Courant Institute.  I don't
think they build embedded realtime systems under contract, though.

 
> If you look at what is happening in scientific programming these days, you
> see an interesting trend. The typical quantum chemistry or crystallography
> programs of the 60's (my thesis is in the xtalography area), did not require
> complex algorithms from a CS point of view, and the work was done by chemists
> who had picked up some general Fortran programming knowledge.
> 
> But even back then, the result was very effective programming of silly
> algorithms in some cases. I improved one data gathering program by replacing
> a bubble sort with treesort3 (people know it as heapsort today), and then
> further improved it by devising a modification of treesort3 that halved the
> number of compares from 2logN to logN [already I was becoming more of a
> computer scientist than a chemist (*)]

I have similar war stories.  My favorite, from Fortran on Univac 1108s in the
1970s, involved a young radio propagation engineer who recomputed
everything at every turn, because he didn't know what a disk was.  The CPU
and elapsed time went from days to minutes after he discovered disks.  

These stories prove that not all programmers (engineers, cobblers,
tailors, ...) are equally adept, and that domain experts may not be
computer experts, and vice versa.  But, we knew that.

 
> These days, scientific programs tend to use highly complex algorithms, and
> numerical computing is an avid consumer of some of the most advanced work
> in algorithmic design and analysis. At the Courant Institute, computer
> scientists, numerical analysts, and other scientific experts work together
> to solve problems in such areas as secondary oil recovery, design of
> airframes, and design of fusion reactors.

Well, we are trying to solve much different problems.  We are trying to do
by comparison simple things, but do them at 64 Hz, controlling hardware
and *never* overrunning.  I doubt that those scientific programmers would
learn much useful from our projects, and vice versa.


> (*) This optimization has never been properly published, although full details
> are in my thesis. The idea appeared as an exercise in Volume III of Knuth,
> though Knuth's answer was in fact wrong in an important detail (one of the
> several nice Wells Fargo checks for $1 that I got from this volume was for
> finding this bug :-)

Congratulations.  It isn't just a dollar -- it comes complete with
bragging rights.  

So, it's Knuth with bugfix?  I assume that the second edition has the
problem fixed.  Which problem was it?

 
> If you are interested in learning more about that optimization, or using
> it, you have easy access to the necessary information. Just look at the
> extensively commented body of GNAT.Heap_Sort_A or GNAT.Heap_Sort_G in
> files g-hesora.adb and g-hesorg.adb. In general, spend some time wandering
> through the GNAT.xxx hierarchy of useful stuff if you are a GNAT user; you
> will find it worthwhile :-)

Actually, it sounds like I should instead read the relevant parts of your
thesis.  This way I get peer-reviewed rationale, not just the code.  What
are the title, date and school?  I'm pretty sure the crystallography won't
do permanent damage.  <Got the ref; thanks.>


One thing that occurred to me after the original posting was that in one
sense we had drifted off the original point, which was the use of malloc
in Ada RTEs.  Note that some Ada RTEs are using malloc to acquire memory
for such things as saving exception contexts, so the user has no control
over this.  Even if your algorithm is perfect in every way, it isn't what
people are using (like it or not), and I liked your suggestion to Tucker
Taft that there should be hooks allowing one to substitute one's own
memory manager for malloc or whatever the Ada RTE is using.  This solves
in a stroke the problem of finding the perfect memory manager by allowing
use of domain expertise to choose.


>Note that an allocator that allocates only fixed length blocks may be
>*VERY* wasteful if your application really requires variable length
>blocks.  <From a side discussion>

True, but our applications are happy with from three to six different
block sizes; the specific spectrum of sizes varies with the specific
application.  Remember, I don't have to solve as general a problem as does
a compiler or runtime writer, and I am more interested in predictability
and behaviour under overload and near overload than memory efficiency per
se.  Nor do I agree that there is one "best" algorithm, for all
applications.



Joe Gwinn





* Re: Variable-block memory managers bad in realtime
  1997-12-22  0:00 Joe Gwinn
  1997-12-23  0:00 ` Robert Dewar
@ 1997-12-23  0:00 ` John Briggs
  1997-12-24  0:00 ` Jon S Anthony
  2 siblings, 0 replies; 10+ messages in thread
From: John Briggs @ 1997-12-23  0:00 UTC (permalink / raw)



In article <gwinn-2212971856140001@dh5055115.res.ray.com>, gwinn@res.ray.com (Joe Gwinn) writes:
> Fixed-block allocators waste no space, and have absolutely constant
> service time.

If I ask for 10 bytes and you feed me 100 bytes, there's 90 bytes wasted, no?
If I ask for 1000 bytes and your allocation area has fifty 100-byte blocks
remaining, there's 5000 bytes wasted, no?

But you should know this.  So I'm puzzled.

	John Briggs			vaxs09@alpha.vitro.com






* Re: Variable-block memory managers bad in realtime
  1997-12-22  0:00 Joe Gwinn
@ 1997-12-23  0:00 ` Robert Dewar
  1997-12-23  0:00 ` John Briggs
  1997-12-24  0:00 ` Jon S Anthony
  2 siblings, 0 replies; 10+ messages in thread
From: Robert Dewar @ 1997-12-23  0:00 UTC (permalink / raw)



Joe says

<<Waste lots of memory, actually.  Sometimes we don't have the memory to
  waste, or we would rather waste it on something else.  I don't know that
  these algorithms have guaranteed absolute upper bounds on response time,
  and absolute protection against fragmentation, together with the requisite
  efficiency.

  Fixed-block allocators waste no space, and have absolutely constant
  service time.>>

There are most certainly algorithms that have guaranteed worst case
(I assume that is what you mean by absolute) upper bounds on response
time. Obviously absolute protection against fragmentation is impossible
for an arbitrary allocation pattern unless you do compacting (and there
are algorithms for doing compacting that meet the other requirements,
talk to Henry Baker for example!). As to requisite efficiency, well that
of course is undefined.

It is of course not true that fixed-block allocators waste no space. For
an arbitrary allocation pattern, they waste lots of space by allocating
more space than is required.

If on the other hand you only need fixed blocks of a given size in a
particular application, it is trivial to write a general purpose
algorithm that behaves as you demand for this particular allocation
pattern.

<<See above.  The vast bulk of that large established literature solves
  problems we don't have in realtime, at the expense of problems we do
  have.  Perhaps you have been so fortunate as to find one of the few truly
  relevant articles.  I haven't.  If so, a pointer or two would be greatly
  appreciated.  I doubt that I will ever have the time to read a significant
  fraction of that literature.>>

There is a substantial literature that specifically addresses the issues
of dynamic allocation for non-critical real time systems (critical real
time systems usually don't do dynamic allocation at all, which is why I
make the distinction -- except in very trivial cases, certification of
safety-critical requirements is very difficult if there is dynamic allocation
of any kind, which is why most protocols for SC programming forbid dynamic
allocation completely).

<<Well, we are trying to solve much different problems.  We are trying to do
  by comparison simple things, but do them at 64 Hz, controlling hardware
  and *never* overrunning.  I doubt that those scientific programmers would
  learn much useful from our projects, and vice versa.>>

You may doubt, but my experience (from doing a great deal of real time
programming in very small [4K to 16K bytes of memory] real time systems) is
that sophisticated algorithmic techniques, and particularly the use of
clever data compression techniques, are highly relevant to that domain
(and need people who know the current algorithms literature reasonably
well).

<<True, but our applications are happy with from three to six different
  block sizes; the specific spectrum of sizes varies with the specific
  application.  Remember, I don't have to solve as general a problem as does
  a compiler or runtime writer, and I am more interested in predictability
  and behaviour under overload and near overload than memory efficiency per
  se.  Nor do I agree that there is one "best" algorithm, for all
  applications.>>

No, of course not, and that is why Ada provides the storage_pool capability
so that you can write specialized allocators that are tailored to your
application.
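
For readers who have not used that capability: the hook is System.Storage_Pools
(Ada 95, RM 13.11).  The sketch below is illustrative only -- it shows the
plumbing with a trivial bump-pointer pool, not a production allocator; a
fixed-block or segregated-list scheme would slot into the same three
operations:

with System;
with System.Storage_Pools;     use System.Storage_Pools;
with System.Storage_Elements;  use System.Storage_Elements;

package App_Pools is

   Pool_Bytes : constant := 64 * 1024;   --  size chosen for the sketch only

   type Arena_Pool is new Root_Storage_Pool with record
      Buffer : Storage_Array (1 .. Pool_Bytes);
      Top    : Storage_Count := 0;       --  high-water mark into Buffer
   end record;

   procedure Allocate
     (Pool                     : in out Arena_Pool;
      Storage_Address          : out System.Address;
      Size_In_Storage_Elements : in Storage_Count;
      Alignment                : in Storage_Count);

   procedure Deallocate
     (Pool                     : in out Arena_Pool;
      Storage_Address          : in System.Address;
      Size_In_Storage_Elements : in Storage_Count;
      Alignment                : in Storage_Count);

   function Storage_Size (Pool : Arena_Pool) return Storage_Count;

end App_Pools;

package body App_Pools is

   procedure Allocate
     (Pool                     : in out Arena_Pool;
      Storage_Address          : out System.Address;
      Size_In_Storage_Elements : in Storage_Count;
      Alignment                : in Storage_Count)
   is
      Align : constant Storage_Count := Storage_Count'Max (Alignment, 1);
      Bytes : constant Storage_Count :=
        Storage_Count'Max
          (Align, ((Size_In_Storage_Elements + Align - 1) / Align) * Align);
   begin
      --  Bump-pointer allocation: constant time, never reused.  Offsets are
      --  kept aligned on the assumption that Buffer itself is well aligned.
      if Pool.Top + Bytes > Pool.Buffer'Length then
         raise Storage_Error;
      end if;
      Storage_Address := Pool.Buffer (Pool.Buffer'First + Pool.Top)'Address;
      Pool.Top := Pool.Top + Bytes;
   end Allocate;

   procedure Deallocate
     (Pool                     : in out Arena_Pool;
      Storage_Address          : in System.Address;
      Size_In_Storage_Elements : in Storage_Count;
      Alignment                : in Storage_Count)
   is
   begin
      null;   --  an arena never gives memory back; a real pool would recycle here
   end Deallocate;

   function Storage_Size (Pool : Arena_Pool) return Storage_Count is
   begin
      return Pool.Buffer'Length;
   end Storage_Size;

end App_Pools;

Binding the pool to an access type is one clause (type names here are
hypothetical):

   Event_Pool : App_Pools.Arena_Pool;
   type Event_Ptr is access Event;
   for Event_Ptr'Storage_Pool use Event_Pool;

after which every "new Event" goes through Event_Pool's Allocate rather than
whatever the runtime's default pool (often malloc) would have done.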

However, one can do a lot better than you think in providing a general
algorithm. Despite your skepticism, it is quite possible to create general
purpose algorithms that give guaranteed worst case response times that are
highly efficient for simple allocation patterns of the type you mention here.

Robert






* Re: Variable-block memory managers bad in realtime
  1997-12-22  0:00 Joe Gwinn
  1997-12-23  0:00 ` Robert Dewar
  1997-12-23  0:00 ` John Briggs
@ 1997-12-24  0:00 ` Jon S Anthony
  1997-12-30  0:00   ` Joe Gwinn
  2 siblings, 1 reply; 10+ messages in thread
From: Jon S Anthony @ 1997-12-24  0:00 UTC (permalink / raw)



gwinn@res.ray.com (Joe Gwinn) writes:

> Waste lots of memory, actually.  Sometimes we don't have the memory
> to waste, or we would rather waste it on something else.  I don't
> know that these algorithms have guaranteed absolute upper bounds on
> response time, and absolute protection against fragmentation,
> together with the requisite efficiency.
>
> Fixed-block allocators waste no space, and have absolutely constant
> service time.

What in the world are you talking about?  Fixed block allocators waste
space up the wazoo unless (of course) you _always_ require the _exact_
size they provide.

Second, you should probably take a look at some of the work on
allocators and collectors in the GC literature.  Including stuff on
compaction.  Many of these do indeed have guaranteed worst case
behaviors.  In particular, check out Henry Baker's work on this stuff
(including his realtime GC papers).  These may or may not work for
you, but unless you are targeting realtime engine control, or some
such, they will likely be OK.

> True, but our applications are happy with from three to six different
> block sizes; the specific spectrum of sizes varies with the specific
> application.

OK, fine, then maybe a suite of 3-6 fixed size allocators will work
for you.  Of course, this may well be memory inefficient as well, but
if it is good enough, go for it.

> Remember, I don't have to solve as general a problem as does
> a compiler or runtime writer, and I am more interested in predictability
> and behaviour under overload and near overload than memory efficiency per
> se.  Nor do I agree that there is one "best" algorithm, for all
> applications.

OK, I can pretty much agree with all of this.

/Jon

-- 
Jon Anthony
Synquiry Technologies, Ltd., Belmont, MA 02178, 617.484.3383
"Nightmares - Ha!  The way my life's been going lately,
 Who'd notice?"  -- Londo Mollari





* Re: Variable-block memory managers bad in realtime
  1997-12-24  0:00 ` Jon S Anthony
@ 1997-12-30  0:00   ` Joe Gwinn
  1997-12-31  0:00     ` Jon S Anthony
  0 siblings, 1 reply; 10+ messages in thread
From: Joe Gwinn @ 1997-12-30  0:00 UTC (permalink / raw)



In article <ufk9cu7bjl.fsf@synquiry.com>, Jon S Anthony <jsa@synquiry.com>
wrote:

> gwinn@res.ray.com (Joe Gwinn) writes:
> 
> > Waste lots of memory, actually.  Sometimes we don't have the memory
> > to waste, or we would rather waste it on something else.  I don't
> > know that these algorithms have guaranteed absolute upper bounds on
> > response time, and absolute protection against fragmentation,
> > together with the requisite efficiency.
> >
> > Fixed-block allocators waste no space, and have absolutely constant
> > service time.
> 
> What in the world are you talking about?  Fixed block allocators waste
> space up the wazoo unless (of course) you _always_ require the _exact_
> size they provide.

See below.  We get to pick the block sizes to suit the application, so the
wastage isn't a problem, and we really like the constant-time behaviour.


> Second, you should probably take a look at some of the work on
> allocators and collectors in the GC literature.  Including stuff on
> compaction.  Many of these do indeed have guaranteed worst case
> behaviors.  In particular, check out Henry Baker's work on this stuff
> (including his realtime GC papers).  These may or may not work for
> you, but unless you are targeting realtime engine control, or some
> such, they will likely be OK.

Will do.  Do you have more specific references?  Thanks.

We are doing the equivalent of engine control in many places.


> > True, but our applications are happy with from three to six different
> > block sizes; the specific spectrum of sizes varies with the specific
> > application.
> 
> OK, fine, then maybe a suite of 3-6 fixed size allocators will work
> for you.  Of course, this may well be memory inefficient as well, but
> if it is good enough, go for it.

It *does* work.  These values are taken from at least a billion dollars'
worth of fielded systems.


> > Remember, I don't have to solve as general a problem as does
> > a compiler or runtime writer, and I am more interested in predictability
> > and behaviour under overload and near overload than memory efficiency per
> > se.  Nor do I agree that there is one "best" algorithm, for all
> > applications.
> 
> OK, I can pretty much agree with all of this.

Thanks.


Joe Gwinn





* Re: Variable-block memory managers bad in realtime
  1997-12-30  0:00   ` Joe Gwinn
@ 1997-12-31  0:00     ` Jon S Anthony
  1998-01-01  0:00       ` Henry Baker
  0 siblings, 1 reply; 10+ messages in thread
From: Jon S Anthony @ 1997-12-31  0:00 UTC (permalink / raw)



gwinn@res.ray.com (Joe Gwinn) writes:

> > Second, you should probably take a look at some of the work on
> > allocators and collectors in the GC literature.  Including stuff on
> > compaction.  Many of these do indeed have guaranteed worst case
> > behaviors.  In particular, check out Henry Baker's work on this stuff
> > (including his realtime GC papers).  These may or may not work for
> > you, but unless you are targeting realtime engine control, or some
> > such, they will likely be OK.
> 
> Will do.  Do you have more specific references?  Thanks.

HB's paper repository:

ftp://ftp.netcom.com/pub/hb/hbaker/home.html

UT (Paul Wilson) GC repository:

ftp://ftp.cs.utexas.edu/pub/garbage/


> We are doing the equivalent of engine control in many places.

If that's really true (with cycle times measured in microseconds),
this stuff isn't going to work for you.


/Jon

-- 
Jon Anthony
Synquiry Technologies, Ltd., Belmont, MA 02178, 617.484.3383
"Nightmares - Ha!  The way my life's been going lately,
 Who'd notice?"  -- Londo Mollari





* Re: Variable-block memory managers bad in realtime
  1997-12-31  0:00     ` Jon S Anthony
@ 1998-01-01  0:00       ` Henry Baker
  0 siblings, 0 replies; 10+ messages in thread
From: Henry Baker @ 1998-01-01  0:00 UTC (permalink / raw)



In article <uf90t16x43.fsf@synquiry.com>, Jon S Anthony <jsa@synquiry.com>
wrote:

> gwinn@res.ray.com (Joe Gwinn) writes:
> > Will do.  Do you have more specific references?  Thanks.

> > We are doing the equivalent of engine control in many places.
> 
> If that's really true (with cycle times measured in microseconds),
> this stuff isn't going to work for you.

Well, it depends upon what you mean by 'this stuff'.

Well-engineered dynamic storage algorithms can be used for hard real-time
applications, but they do have to be designed for this purpose.  A number
of 'multi-media' systems with hard real-time requirements have been built
using real-time GC techniques.

Most dynamic storage allocators -- including nearly all implementations
of 'malloc' -- have not been designed for hard real-time use.  Most mallocs
have been designed to make the standard benchmarks run fast (if even that!)
and probably have to be replaced for many serious applications.  There have
been some very interesting measurements of the 'standard' mallocs that ship
with various operating systems, and they aren't pretty.  So the problem
isn't _GC_, per se, but non-real-time allocators themselves.

A number of people in the GC community are interested in embedded systems
applications of GC.  Do a web search for Kelvin Nilsen.




