From: joel_seidman@smtp.svl.trw.com (Joel Seidman)
Newsgroups: comp.lang.ada
Subject: Re: dynamic memory allocation
Date: 1997/06/16
Organization: TRW, Inc.
References: <33A55F1B.63FE@gsfc.nasa.gov>

In article <33A55F1B.63FE@gsfc.nasa.gov>, Stephen.Leake@gsfc.nasa.gov wrote:

> They are proposing a message passing scheme where sending tasks allocate
> buffers for each message from a heap, and receiving tasks deallocate. I
> have suggested that the heap could become fragmented (the buffers are
> NOT all the same size). They say "we'll just test it thoroughly".

Running out of memory is another "bad" heap state. Or is this what you
meant by fragmentation?

Excuse me, but thorough testing will not prevent fragmentation or any
other problem. At best it may show that fragmentation occurs, and it may
not even show that: the absence of fragmentation during testing does not
guarantee it will not occur. I kind of have to wonder about this
off-hand-sounding quoted response. I assume "it" refers to the final,
integrated system. The question you need to ask is what the impact on
schedule and cost will be if fragmentation is discovered during the
testing phase. Generally, the later you discover a problem, the more
costly it is to fix. Up-front risk analysis is needed to address
potentially costly problems.

> Can anyone provide a reference to a book or study article that says this
> is bad? To me it seems obvious, and the general tone in this newsgroup
> is that it's obvious. I have a couple books on realtime embedded design,
> and they don't even bother to mention dynamic allocation -
> unfortunately, that makes it hard to say "see, this book says it's bad".

I don't have a good reference. Knuth discusses heap management
algorithms in general. The embedded real-time projects I've been on
have discouraged the use of malloc (or new).

Dynamic memory allocation from a heap isn't necessarily bad per se, but
it can get complicated (or possibly impossible) to convince yourself
that, in all possible scenarios, you won't run out of memory and that
the procedures to allocate and deallocate won't exceed their time
budget.

In simple situations, analysis may be possible. For example, if all
messages are the same size, fragmentation won't occur (a fixed-size
buffer pool along those lines is sketched below). You might know enough
about the message sending and receiving activity to bound the amount of
message memory you need even with worst-case fragmentation, and to know
that you can make the heap that big; this would also bound the execution
time for allocation/deallocation. In some systems, occasional message
loss is tolerable, so you might be willing to allow occasional
out-of-heap-space conditions. Monte Carlo style simulation could also
help determine the possibility and frequency of bad heap states (a rough
sketch of that is appended as well).

Bottom line: if fragmentation is a bad thing, do some analysis up front,
or be willing to pay the price later.

-- Joel

--
Joel Seidman                              joel_seidman@smtp.svl.trw.com
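
P.S. To make the "all the same size" case concrete, here is a minimal
Ada 95 sketch of a fixed-size buffer pool. The package and the numbers
in it (Message_Pool, Pool_Size, Buffer_Size) are made up for
illustration, not taken from any real project. The point is that when
every slot is the same size, any freed slot can satisfy the next
request, so the pool cannot fragment, and the one failure mode (running
out of slots) is easy to detect and to bound.

package Message_Pool is

   Pool_Size   : constant := 64;    --  assumed worst-case outstanding messages
   Buffer_Size : constant := 256;   --  assumed largest message, in bytes

   type Buffer_Index   is range 1 .. Pool_Size;
   type Message_Buffer is array (1 .. Buffer_Size) of Character;
   type Usage_Map      is array (Buffer_Index) of Boolean;

   Pool_Exhausted : exception;

   --  The actual storage: one statically sized slot per message.
   Buffers : array (Buffer_Index) of Message_Buffer;

   --  Task-safe: sending tasks call Allocate, receiving tasks call Release.
   protected Pool is
      procedure Allocate (Slot : out Buffer_Index);
      procedure Release  (Slot : in  Buffer_Index);
   private
      In_Use : Usage_Map := (others => False);
   end Pool;

end Message_Pool;

package body Message_Pool is

   protected body Pool is

      procedure Allocate (Slot : out Buffer_Index) is
      begin
         --  Any free slot will do, since every slot is the same size.
         --  The linear scan bounds the time at Pool_Size iterations;
         --  a real pool would keep a free list and run in constant time.
         for I in Buffer_Index loop
            if not In_Use (I) then
               In_Use (I) := True;
               Slot := I;
               return;
            end if;
         end loop;
         raise Pool_Exhausted;   --  out of buffers: bounded, detectable
      end Allocate;

      procedure Release (Slot : in Buffer_Index) is
      begin
         In_Use (Slot) := False;
      end Release;

   end Pool;

end Message_Pool;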
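
P.P.S. On the Monte Carlo suggestion: below is a rough sketch of the
kind of simulation I mean, again with made-up figures (heap size,
message size range, number of messages alive at once). It models a
first-fit heap as an array of words, drives it with random send/receive
events, and counts separately the allocations that fail even though
enough total free space exists (fragmentation) and those that fail
because the heap is genuinely full. It is a toy model, not a substitute
for analysis of the real allocator.

with Ada.Text_IO;
with Ada.Numerics.Discrete_Random;

procedure Heap_Sim is

   --  All figures below are made up for illustration.
   Heap_Size  : constant := 2_000;      --  simulated heap, in "words"
   Max_Live   : constant := 40;         --  most messages alive at once
   Num_Events : constant := 200_000;    --  random send/receive events

   type Word_Count is range 0 .. Heap_Size;
   subtype Msg_Size is Word_Count range 8 .. 64;       --  sizes vary
   type Heap_Map is array (1 .. Heap_Size) of Boolean;  --  True = in use

   type Block is record
      Start  : Natural    := 0;         --  0 means "no buffer held"
      Length : Word_Count := 0;
   end record;

   subtype Slot_Index is Integer range 1 .. Max_Live;

   package Size_Random is new Ada.Numerics.Discrete_Random (Msg_Size);
   package Slot_Random is new Ada.Numerics.Discrete_Random (Slot_Index);

   Heap       : Heap_Map := (others => False);
   Live       : array (Slot_Index) of Block;
   Free_Words : Word_Count := Heap_Size;
   Frag_Fail  : Natural := 0;   --  enough free words, but too scattered
   Full_Fail  : Natural := 0;   --  genuinely out of memory
   Size_Gen   : Size_Random.Generator;
   Slot_Gen   : Slot_Random.Generator;

   --  First fit: find a contiguous run of Size free words, mark it
   --  used, and return its start; return 0 if no such run exists.
   function Allocate (Size : Word_Count) return Natural is
      Run_Start : Natural    := 0;
      Run_Len   : Word_Count := 0;
   begin
      for I in Heap'Range loop
         if Heap (I) then
            Run_Start := 0;
            Run_Len   := 0;
         else
            if Run_Start = 0 then
               Run_Start := I;
            end if;
            Run_Len := Run_Len + 1;
            if Run_Len = Size then
               for J in Run_Start .. Run_Start + Natural (Size) - 1 loop
                  Heap (J) := True;
               end loop;
               return Run_Start;
            end if;
         end if;
      end loop;
      return 0;
   end Allocate;

   procedure Free (B : in Block) is
   begin
      for J in B.Start .. B.Start + Natural (B.Length) - 1 loop
         Heap (J) := False;
      end loop;
   end Free;

begin
   Size_Random.Reset (Size_Gen);
   Slot_Random.Reset (Slot_Gen);

   for Event in 1 .. Num_Events loop
      declare
         S : constant Slot_Index := Slot_Random.Random (Slot_Gen);
      begin
         if Live (S).Start = 0 then
            --  "Send": a task wants a buffer of random size.
            declare
               Size  : constant Msg_Size := Size_Random.Random (Size_Gen);
               Start : constant Natural  := Allocate (Size);
            begin
               if Start /= 0 then
                  Live (S)   := (Start => Start, Length => Size);
                  Free_Words := Free_Words - Size;
               elsif Free_Words >= Size then
                  Frag_Fail := Frag_Fail + 1;
               else
                  Full_Fail := Full_Fail + 1;
               end if;
            end;
         else
            --  "Receive": the receiving task frees the buffer.
            Free (Live (S));
            Free_Words := Free_Words + Live (S).Length;
            Live (S)   := (Start => 0, Length => 0);
         end if;
      end;
   end loop;

   Ada.Text_IO.Put_Line ("Fragmentation failures:" & Natural'Image (Frag_Fail));
   Ada.Text_IO.Put_Line ("Out-of-memory failures:" & Natural'Image (Full_Fail));
end Heap_Sim;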