From: "Marin David Condic, 561.796.8997, M/S 731-93"
Subject: Re: dynamic memory allocation
Date: 1997/06/19
Newsgroups: comp.lang.ada
Message-ID: <97061910142505@psavax.pwfl.com>

Stephen Leake writes:

>Can anyone provide a reference to a book or study article that says this
>is bad? To me it seems obvious, and the general tone in this newsgroup
>is that it's obvious. I have a couple books on realtime embedded design,
>and they don't even bother to mention dynamic allocation -
>unfortunately, that makes it hard to say "see, this book says it's bad".
>
Sorry, I don't know of a reference book. David Wheeler posted a
reference to http://www.ivv.nasa.gov, but I was not able to locate the
NASA guidebook in question there. (Maybe it just didn't "jump up and
bite me" so I couldn't see where it was.)

I wouldn't go so far as to say that dynamic allocation as you describe
for messages is "bad" - just non-deterministic. If you can live with
the potential for dropping a message from time to time (requesting a
retransmit, etc.) or with the fact that a message may not get handled
within N milliseconds, or other such "failure modes", then you might be
able to make it work. We avoid dynamic allocation around here in our
control systems because we can't live with the non-determinism.
(Well... I can imagine some limited circumstances where it could be
made deterministic - but then, what's the point? You tend to use
dynamic allocation because you *don't* know what you're going to get
or when.) There's also some speed penalty to dynamic allocation which,
on most of our computers, is non-trivial and argues against dynamic
use.

As far as testing goes, I'd suggest that if you can determine the
maximum arrival rate of messages (probably based on your smallest
message) and can show that you can process all of those messages
within the allotted time frame with a sufficient percentage of spare,
then you're probably going to be O.K. The percentage of spare needs to
be sufficient to allow for the garbage collection overhead, which you
might be able to estimate by running mixes of different-size messages
equal in data volume to the maximum rate but, naturally, fewer in
number. If after running some test cases you find you've still got -
oh, say 50% spare - then maybe you go home and don't worry about it.
(The 50% will naturally get whittled down by just how brave your
management decides you must be.) You may discover that the garbage
collection overhead is so far down in the weeds compared to the cost
of processing the messages that it isn't an issue.

I think the software design ought to be able to handle Storage_Error,
task overrun errors, and anything else associated with running out of
space and/or time if you're going to do anything dynamic. If the
application can do something intelligent in the event of exhausting
its available space or time, it might not be unsafe.
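By "handle Storage_Error" I mean something roughly like the following
Ada 95 sketch. (Illustration only - the message type and the recovery
action are invented, not taken from any real system.)

   with Ada.Text_IO;

   procedure Receive_One_Message is
      type Message is array (1 .. 1024) of Character;
      type Message_Ptr is access Message;
      Buffer : Message_Ptr;
   begin
      Buffer := new Message;   --  may raise Storage_Error if space runs out
      --  ... unpack and process the message via Buffer ...
   exception
      when Storage_Error =>
         --  Out of space: do something intelligent instead of dying -
         --  here, drop the message and (hypothetically) ask the sender
         --  to retransmit it.
         Ada.Text_IO.Put_Line ("Message dropped - retransmit requested");
   end Receive_One_Message;

The point is just that allocation failure becomes one of the
anticipated "failure modes" rather than a crash.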
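To make the spare-capacity arithmetic concrete, here's the budget
check with some invented numbers (these are *not* measurements from
anything - plug in your own):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Spare_Check is
      --  All figures below are hypothetical, for illustration only.
      Max_Msgs_Per_Sec : constant Float := 500.0;    --  worst case: smallest message
      Cost_Per_Msg     : constant Float := 0.000_8;  --  CPU seconds per message
      GC_Overhead      : constant Float := 0.05;     --  estimated fraction of each second
      Utilization      : constant Float :=
        Max_Msgs_Per_Sec * Cost_Per_Msg + GC_Overhead;
   begin
      --  500 * 0.0008 + 0.05 = 0.45, i.e. 55% spare:
      --  go home and don't worry about it.
      Put_Line ("Spare =" & Float'Image ((1.0 - Utilization) * 100.0) & "%");
   end Spare_Check;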
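And if you decide you can't live with the non-determinism either, the
usual alternative is a preallocated pool of fixed-size buffers:
bounded space, bounded (linear) search time. A bare-bones sketch -
again, all names are invented, and a real multitasking version would
put the bookkeeping inside a protected object:

   package Message_Pool is
      Pool_Size : constant := 32;   --  worst-case number of in-flight messages
      type Message is record
         Length : Natural := 0;
         Data   : String (1 .. 1024);
      end record;
      type Message_Access is access all Message;
      function Acquire return Message_Access;      --  null if pool exhausted
      procedure Release (M : in Message_Access);
   end Message_Pool;

   package body Message_Pool is
      Slots  : array (1 .. Pool_Size) of aliased Message;
      In_Use : array (1 .. Pool_Size) of Boolean := (others => False);

      function Acquire return Message_Access is
      begin
         for I in Slots'Range loop
            if not In_Use (I) then
               In_Use (I) := True;
               return Slots (I)'Access;
            end if;
         end loop;
         return null;   --  exhausted: caller drops the message deterministically
      end Acquire;

      procedure Release (M : in Message_Access) is
      begin
         for I in Slots'Range loop
            if Message_Access'(Slots (I)'Access) = M then
               In_Use (I) := False;
               return;
            end if;
         end loop;
      end Release;
   end Message_Pool;

When Acquire returns null you're back in the "drop a message and
request a retransmit" failure mode - but now it happens at a
predictable bound instead of whenever the heap feels like it.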
MDC
Marin David Condic, Senior Computer Engineer     ATT:      561.796.8997
Pratt & Whitney GESP, M/S 731-96, P.O.B. 109600  Fax:      561.796.4669
West Palm Beach, FL, 33410-9600                  Internet: CONDICMA@PWFL.COM
===============================================================================
  "A man who has a million dollars is as well off as if he were rich"
      --  John Jacob Astor.
===============================================================================