From: johnb@invision.co.uk (John Birch)
Subject: Re: Real-time dyn allctn (was: Ada vs C++ vs Java)
Date: 1999/01/25
Message-ID: <36acb008.31947998@news.demon.co.uk>
References: <369C1F31.AE5AF7EF@concentric.net> <369DDDC3.FDE09999@sea.ericsson.se> <369e309a.32671759@news.demon.co.uk> <77mu1h$n6m$1@galaxy.mchh.siemens.de> <77no7p$d65$1@nnrp1.dejanews.com> <78i6m3$505$4@plug.news.pipex.net>
Reply-To: johnb@invision.co.uk
Newsgroups: comp.lang.ada,comp.lang.c++,comp.vxworks,comp.lang.java,comp.lang.c,comp.realtime

On Mon, 25 Jan 1999 01:13:44 -0000, "Nick Roberts" wrote:

>Michael J. Tobler wrote in message ...
>|> In article <77mu1h$n6m$1@galaxy.mchh.siemens.de>,
>|> Wolfgang Denk wrote:
>|> > But even under hard realtime you certainly will very
>|> > often use dynamic memory allocation - all variables put
>|> > on the stack are using a method of "dynamic memory
>|> > allocation"

>|This is an incorrect statement. The stack is a static area of storage.
>|Objects, both native and user-defined, are simply defined there and the
>|stack pointer is adjusted appropriately. There is no concept of "searching
>|for available memory", allocating it, and recording it - the memory
>|reserved for the stack is "allocated" at startup.

>Michael's comment is not strictly correct.

>In the general case, the pattern of calls that will be made during the
>execution of a program - even a 'hard' real-time one - will be unpredictable
>(because it depends on unpredictable input values).

This is simply wrong! You argue that unpredictable input values cause
unpredictable patterns of function calls. That is at best an exaggeration of
the typical case. In a well-structured hard real-time program, no individual
input value should be unexpected (i.e. not catered for in the code). Your
argument assumes that the input values are coupled to the algorithms of the
system in a manner that allows unpredictable behaviour! If you use input
values to determine program execution flow then I would grant you that
unpredictable behaviour might result - e.g. take an input value, multiply it
by a random number and put the result in the program counter, and things will
certainly get interesting in a hurry! A well-designed hard real-time system
would never be built in such a manner :-)

For example, in a process control system, input might be acquired at known
rates under interrupt. A background (i.e. non-interrupt) cycle would process
the current set of inputs and update output values, which might in turn be
moved to the output hardware under interrupt. There is nowhere a possibility
of an input value changing the course of execution of the background cycle
(except through well-understood paths that cope with error conditions).
Consequently, in such a design the stack utilisation can be well determined.
The coupling here is by means of global variables (global between input and
processing, and between processing and output). Many other valid design
strategies are also possible.
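To make the design concrete, here is a minimal sketch of that
interrupt/background coupling in C. All names, the sample type, and the
"scale and clamp" control law are invented for illustration; the point is
only that the ISR and the background cycle communicate through statically
allocated globals, so no input value can alter the call pattern:

```c
#include <stdint.h>

static volatile uint16_t latest_sample;   /* written by the ISR only    */
static volatile uint16_t current_output;  /* read by the output stage   */

/* Acquisition interrupt: store the newest reading, nothing else. */
void sample_isr(uint16_t adc_reading)
{
    latest_sample = adc_reading;
}

/* One pass of the background cycle: snapshot the input, compute, publish.
 * The control law (double and clamp to 1000) is a stand-in for whatever
 * processing the real system performs. */
void background_cycle(void)
{
    uint16_t in = latest_sample;          /* single atomic-width read */
    uint32_t out = (uint32_t)in * 2u;
    if (out > 1000u)
        out = 1000u;
    current_output = (uint16_t)out;
}

uint16_t read_output(void)
{
    return current_output;
}
```

Whatever the input does, the background cycle executes the same path with
the same stack depth; only the data values change.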
For performance reasons it may be desirable to put more processing into the
interrupt mechanism and to retain the background 'loop' only for detection
of errors and housekeeping.

>The maximum possible stack usage can be calculated. However, this figure
>will often be vastly greater than any stack usage that will occur in
>practice (and thus be useless).

Certainly the maximum value calculated will be high, but it can be brought
into a more reasonable range by even a cursory examination of the program.
Firstly, the stack requirements of all interrupt routines, and of all
routines that can be called as a result of an interrupt, must be
accumulated. It is assumed here that an interrupt cannot interrupt itself
(if it can, the sums get harder ;-)). Having done this, the background loop
can be examined: looking at the call tree, it is perfectly possible to
deduce which routines can never be called from inside others. This allows
the maximum initial stack requirement to be considerably reduced. This
stage is often automated by the toolset provided by the compiler vendor.

>The occurrence of recursion which is controlled by an input value
>usually makes the aforementioned effect inevitable.

So don't use recursion. If you believe recursion is a necessity for hard
real-time programming - please tell me which hard real-time project you
have used it on, so I can avoid it. ;-) If you do have to use recursion
then it must be bounded; if it is not, you have a non-deterministic system.
By definition, a system that may behave non-deterministically cannot solve
a hard real-time problem (a deadline can never be missed!).

>The allocation of data within a stack frame may well be static (but not
>necessarily). There is a certain, small, subset of programs which have a
>statically predictable pattern of stack allocation, and thus a memory
>requirement which is statically determinable with little or no waste.
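The arithmetic is simple enough to do yourself if the vendor toolset does
not automate it. A host-side sketch (not target code - it recurses over the
call tree, which is fine in an offline analysis tool): frame sizes and the
tree shape come from the compiler's output, the structure and names here
are invented, and interrupts are assumed not to nest, as above:

```c
#include <stddef.h>

#define MAX_CHILDREN 4

/* One routine in the static call tree: its own frame size plus the
 * routines it can call (NULL-terminated list). */
struct routine {
    size_t frame_size;                         /* bytes of stack needed */
    const struct routine *calls[MAX_CHILDREN]; /* possible callees      */
};

/* Deepest path cost from 'r' downwards through the static call tree. */
size_t worst_path(const struct routine *r)
{
    size_t deepest = 0;
    for (int i = 0; i < MAX_CHILDREN && r->calls[i]; i++) {
        size_t d = worst_path(r->calls[i]);
        if (d > deepest)
            deepest = d;
    }
    return r->frame_size + deepest;
}

/* Total bound: worst background path plus each interrupt's own worst
 * path (summed - the pessimistic case where each fires once on top of
 * the deepest background moment). */
size_t stack_bound(const struct routine *background,
                   const struct routine *const *isrs, size_t n_isrs)
{
    size_t total = worst_path(background);
    for (size_t i = 0; i < n_isrs; i++)
        total += worst_path(isrs[i]);
    return total;
}
```

Pruning the call tree - removing edges for routines that can never actually
be called from inside others - is exactly what shrinks the naive maximum.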
>But it is misleading to describe allocation on the stack as 'static': it
>is generally nothing of the kind.

'Generally' here meaning in programs written without careful consideration
of the issues involved in memory management for resource-limited systems!

>I also feel that the sentiment "don't use dynamic allocation in real-time
>programs" is somewhat naive, to put it mildly. I rather suspect that, in
>rejecting the use of 'malloc' or 'new', the programmer simply replaces these
>forms of dynamic allocation with others (a statically declared array used to
>hold dynamically changing data, for example), for no better reason than
>superstition.

I rather suspect you have never done any real-time coding; if you have,
then you should go back to school. Whilst I would not completely rule out
the use of dynamic memory allocation (i.e. the use of malloc and free), I
would state categorically that there are very few hard real-time systems,
IMHO, where its use is a necessity. Since by its very nature it makes
deterministic behaviour more difficult to implement, it is better avoided.

>There is no need to take this attitude for any kind of program.
>Taking the example of a program which flies an aeroplane, you will want to
>adopt a strategy of absolute robustness ('never crash').

>[1] This means that, if the program uses dynamic allocation, either it must
>be possible to statically determine that memory will never be insufficient,

A near tautology.

>or there must be a means of: (a) reliably detecting insufficient memory; and
>(b) upon detecting this, returning the software to a state that enables it
>to continue safely flying the aeroplane. These requirements are not usually
>impractical to achieve.

Oops - what's this? I know, it's the equivalent of the Microsoft BSOD:
"CTRL ALT DELETE to regain control of aircraft, Captain!" GET REAL! You are
advocating designing a piece of software that has known flaws.
A good embedded software designer would regard stack (or heap) bounds
checking, or a watchdog timer, as a belt-and-braces safety measure - not as
an integral part of the design.

BTW - if the input condition causes the program to CRASH, i.e. fail to
perform to specification, how does restarting the program resolve anything
if the input condition continues unchanged? That is, if the pilot puts his
aircraft into an inverted flat spin, and the program runs out of heap
trying to solve the problem, how does "returning the software to a state
that enables it to continue safely flying the aeroplane" stop the same
thing happening again?

>[2] If the software has interrupt servicing time constraints, the interrupt
>servicing routines which have these constraints must either: (a) be able to
>use the dynamic allocation in a way which is predictably fast enough; or (b)
>not use dynamic allocation. These conditions do not need to be imposed on
>any of the other parts of the software (if there are any).

Why oh why would an interrupt need to dynamically allocate memory? This is
rather like saying: I want to catch a bus home but, rather than work out
how long it takes me to walk to the bus stop and how often the bus comes,
I'll just get myself a set of bionic legs so that I'll always be able to
catch it! It is this kind of crap that causes real-time software problems
in the first place. Whilst I would grant that removing dynamic memory
allocation from interrupts is a start, your follow-up claim - that the same
restriction is unnecessary for the rest of the system - is only true if the
rest of the system plays no role in any deterministic process required by
the system. That would possibly be true if all the real work were done
under interrupt and none in the background.
While this is a workable solution for small problems, the moment there is a
moderate degree of interaction between values calculated or acquired by
different interrupt routines, it becomes extremely difficult, if not
impossible, to maintain any synchronous behaviour that may be required by
the problem domain.

>It may well be that typical supplied dynamic allocation facilities are too
>crude or unreliable to satisfy the special requirements of a real-time
>program. But the answer is not to simply abandon the use of 'malloc' and
>replace it with a large number of ad-hoc, hybrid solutions (involving arrays
>and indexes).

Yes it is! BTW - have you ever used a union? And while I'm at it: the use
of malloc requires the use of pointer casts and, generally, pointer
arithmetic. Are these sound coding practices when you are working in an
environment where the background radiation level can induce bit failures at
rates of up to a million times those on the average desktop?

>Instead, all that is required is to reprogram the dynamic
>allocation facilities to your requirements: you may need a 'malloc_a', a
>'malloc_b', and a 'malloc_c', or whatever. The result is likely to be better
>source code and better performing software, with no reduction in robustness.

Oh goody - yet more potentially bug-ridden code to add into our
resource-limited system, to support an unnecessary convenience.

>In addition, in view of my comments earlier in this posting about the
>dynamic nature of allocation on the stack, it will often be the case that my
>comments above about dynamic allocation can be applied to stack allocation
>as well (saving a lot of memory space wasted on making sure the stack cannot
>crash).

I think you missed the entire point. You don't code around problems. You
design in such a way as to never encounter them. This is exactly what the
contention is.
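For what it's worth, about the only 'reprogrammed allocation facility' I
would entertain in such a system is a fixed-block pool carved out of a
static array: allocation and release are O(1), there is no searching and no
fragmentation, and exhaustion is detected deterministically. A sketch -
block size, count, and names are all invented for illustration:

```c
#include <stddef.h>

#define BLOCK_SIZE  32
#define BLOCK_COUNT 8

/* All storage is statically allocated up front; the "heap" is this array. */
static unsigned char pool[BLOCK_COUNT][BLOCK_SIZE];
static void *free_list[BLOCK_COUNT];  /* stack of free block pointers */
static int free_top = -1;             /* -1 until pool_init() runs    */

void pool_init(void)
{
    for (int i = 0; i < BLOCK_COUNT; i++)
        free_list[i] = pool[i];
    free_top = BLOCK_COUNT - 1;
}

/* O(1): pop a free block, or NULL when the pool is exhausted. The caller
 * must handle NULL deterministically - there is nothing to search or
 * compact, so the worst case is known at design time. */
void *pool_alloc(void)
{
    if (free_top < 0)
        return NULL;
    return free_list[free_top--];
}

/* O(1): push the block back. */
void pool_free(void *p)
{
    free_list[++free_top] = p;
}
```

Note that even this is only defensible because the worst case (BLOCK_COUNT
blocks in use) is statically known - which is precisely the property
general-purpose malloc gives up.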
By not using dynamic memory allocation, and having a statically allocated
program, you simply never encounter the following problems (and therefore
never need any code for them!):

Heap overflow.
Heap fragmentation.
Stack overflow (assuming you get the sums right - but at least you have
tried, which is more than can be said for the never-never approach you are
advocating).

regards

John B.