From: "Nick Roberts"
Subject: Re: Real-time dyn allctn (was: Ada vs C++ vs Java)
Date: 1999/01/26
Message-ID: <78lv2q$dpl$3@plug.news.pipex.net>
References: <369C1F31.AE5AF7EF@concentric.net> <369DDDC3.FDE09999@sea.ericsson.se> <369e309a.32671759@news.demon.co.uk> <77mu1h$n6m$1@galaxy.mchh.siemens.de> <77no7p$d65$1@nnrp1.dejanews.com> <78i6m3$505$4@plug.news.pipex.net> <36ACC4D6.3E3FEF4@icon.fi>
Organization: UUNET WorldCom server (post doesn't reflect views of UUNET WorldCom)
Newsgroups: comp.lang.ada,comp.lang.c++,comp.vxworks,comp.lang.java,comp.lang.c,comp.realtime

Niklas Holsti wrote in message <36ACC4D6.3E3FEF4@icon.fi>...
[...]
|Even if you think that the computed stack usage is an overestimate,
|what is the alternative? Guess on a "reasonable" value, and hope
|it works? Or use a deeper, context-sensitive static analysis to
|eliminate infeasible call paths? The latter is OK, if you can do it...

Guesswork. Yes, really! It is a method I have used, and it works. Of
course, I then test the software: if stack crashes occur too often, I
increase the stack space; if too infrequently, I decrease it. It is
vital to build in effective monitoring or logging functionality, so
that the number of crashes occurring can either be watched
continuously or be retrieved at the end of a run/flight/whatever
(there is a small sketch of what I mean a little further down). Deep
analysis is often helpful, if it's available (it's never been
available to me :-(

[...]
|Yes of course (a) and (b) are essential. And using other forms of
|dynamic allocation, such as bounded arrays of preallocated blocks,
|is the traditional (and workable) method of achieving these
|requirements.
|
|Have you practical experience of achieving these requirements with a
|standard or custom-made malloc(), without constraining the distribution
|of the number of allocated blocks over block size? It seems to me that
|the memory fragmentation problem will not be easy to beat, without
|a lot of application-specific analysis.

Yes, I have practical experience of memory management systems of
nightmarish complexity. You are quite right that fragmentation and
other problems are not easily beaten! But they can be beaten, and I
have found that it is surprisingly worthwhile to do so. Having
powerful memory management functionality licked really 'leverages'
the power of your embedded software: suddenly there is so much more
you can do. This has been my own experience, with certain kinds of
embedded application which were not terrifically hard ('torpid'
perhaps? Can I say that? :-) YMMV.
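To make the 'monitoring or logging' point above a little more
concrete, here is the sort of thing I have in mind. It is only a
sketch: the names, the reserved address and the mechanism are all
invented for the example, and the details depend entirely on the
target and the RTOS.

/* Hypothetical sketch: a stack-crash counter kept in a corner of RAM
   that is deliberately NOT cleared when a task is restarted, so the
   count survives resets and can be read out at the end of a run.
   The address and the magic number are made up for illustration.    */

#define NOINIT_RAM  ((volatile unsigned long *) 0x0000FF00UL) /* made up */

#define MAGIC        0xC0DEFACEUL
#define CRASH_MAGIC  (NOINIT_RAM[0])
#define CRASH_COUNT  (NOINIT_RAM[1])

/* Call very early at (re)start, before anything else touches it.    */
void crash_log_init(void)
{
    if (CRASH_MAGIC != MAGIC) {   /* cold start: contents are junk   */
        CRASH_MAGIC = MAGIC;
        CRASH_COUNT = 0UL;
    }
}

/* Call from the stack-overflow trap (or whatever fatal-error handler
   the system provides), just before the offending task is reset.    */
void crash_log_note(void)
{
    CRASH_COUNT++;
}

/* Read out over the maintenance link, or dump after the run/flight. */
unsigned long crash_log_count(void)
{
    return CRASH_COUNT;
}

The point is only that the count is trivially cheap to keep and
survives the reset; how you actually get it out (telemetry, a
maintenance port, a post-flight dump) is up to the application.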
[...]
|> These conditions do not need to be imposed on
|> any of the other parts of the software (if there are any).
|
|Since we are considering real-time programs, we should omit the "other
|parts". So your case (a) is the only real case, and it is a rather
|common opinion that the typical malloc()/free() implementations are
|not predictably fast enough (I don't have any first-hand data, though).

I don't think so, really. If you consider a typical whole application
- e.g. a jet engine management system - it will still tend to have
parts that are not particularly 'hard'. A jet engine management
system, for example, will probably have a task whose role is to
record engine data onto a logger. Whilst this task will undoubtedly
have some tough timing constraints on it, it is not really a 'hard'
task: it could run out of memory (stack or heap) every now and then,
and nobody is going to die; it could drop packets every so often, and
while this may be inconvenient (very annoying, perhaps), still nobody
is going to die. Provided that this task's crashing and resetting, or
dropping out, does not disturb other tasks - and this is the vital
proviso - all is well. Perhaps such a task would never need to use
dynamic allocation (or recursion, or multiple stack frames), but then
again it might: it might, for example, need to encode data in
reverse/forward Polish order.

[...]
|Your malloc_a() etc. seem to be exactly the "other forms of dynamic
|allocation" that you criticise earlier. Could you explain the
|difference? Must malloc_x return a "pointer" (and not an index) to
|count as true dynamic allocation?

I could have been clearer here. A program might have ten different
arrays, with distinct associated allocation, deallocation, and
management code for each one, just so as to avoid using malloc. All
(or most) of this redundant code might be replaceable by a single
malloc_a function, which allocates memory from a central heap. But
malloc_a may well be programmed to do garbage collection (or other
slow operations) if necessary. You may, therefore, wish to write a
malloc_b which allocates from the heap if it can, but never tries
anything slow, so that time-critical software can call malloc_b (with
a determinate upper bound on its execution time). You may even need a
malloc_c which is quicker still, or a malloc_d which is intermediate.
You get the picture (I hope ;-)

I must emphasise: it will, I repeat, _will_ be _very rare_ for the
malloc_b etc. variations to be required: the vast majority of
time-critical code can be, should be, or must be written without
dynamic allocation (or recursion). I never intended to give anyone
the impression I thought otherwise.

|> [...] my
|> comments above about dynamic allocation can be applied to stack allocation
|> as well [...]
|
|I don't understand your meaning here. Please elaborate. Do you mean
|some application-specific method of allocating and deallocating
|stack space? That would interfere with typical compilers...

I simply mean that, just as a heap crash can be perfectly acceptable
for some parts of the software - provided that it is handled properly
- a stack crash is equally acceptable. Provided such stack crashes
fulfil the same criteria (not causing danger, not upsetting other
software, being detected and handled properly, etc.), this software
is then absolved from the strictures of static analysis and stack
oversizing. For the purposes of squeezing software into tight spaces,
it can be a handy trick.

-------------------------------------------
Nick Roberts
-------------------------------------------
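PS: In case the malloc_a/malloc_b split above reads as too abstract,
here is a very rough sketch of the kind of thing I mean. Everything
in it - the single block size, the free-list scheme, all the names
apart from malloc_a and malloc_b themselves - is invented purely for
illustration; a real implementation would be considerably more
involved and tuned to the application.

#define BLOCK_SIZE   64              /* all blocks one size, for brevity */
#define HEAP_BLOCKS  256             /* central heap = 256 such blocks   */

static unsigned char heap[HEAP_BLOCKS][BLOCK_SIZE];
static void         *free_list[HEAP_BLOCKS];
static int           free_top = -1;  /* top of the stack of freed blocks */
static int           unused   = 0;   /* next never-yet-used heap block   */

/* malloc_b: the fast path only.  Pops a block off the free list or
   fails.  It never scans, never compacts, never blocks, so it has a
   small, determinate upper bound on execution time.  Time-critical
   code must be prepared for it to return a null pointer.             */
void *malloc_b(unsigned size)
{
    if (size > BLOCK_SIZE || free_top < 0)
        return (void *) 0;
    return free_list[free_top--];
}

/* malloc_a: the general entry point.  Tries the fast path first; if
   that fails, it is allowed to do slow work (here merely carving a
   fresh block; in a real allocator perhaps garbage collection or
   compaction of the central heap).                                   */
void *malloc_a(unsigned size)
{
    void *p = malloc_b(size);
    if (p == (void *) 0 && size <= BLOCK_SIZE && unused < HEAP_BLOCKS)
        p = heap[unused++];
    return p;
}

/* free_a: common to both, and O(1), so callable from anywhere.       */
void free_a(void *p)
{
    if (p != (void *) 0)
        free_list[++free_top] = p;
}

The only point being made is the split itself: one entry point that
is permitted to be slow, and another that is guaranteed never to be.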