From: "Nick Roberts"
Subject: Re: Real-time dyn allctn (was: Ada vs C++ vs Java)
Date: 1999/01/26
Message-ID: <78lv2m$dpl$1@plug.news.pipex.net>
References: <369C1F31.AE5AF7EF@concentric.net> <369DDDC3.FDE09999@sea.ericsson.se> <369e309a.32671759@news.demon.co.uk> <77mu1h$n6m$1@galaxy.mchh.siemens.de> <77no7p$d65$1@nnrp1.dejanews.com> <78i6m3$505$4@plug.news.pipex.net> <36acb008.31947998@news.demon.co.uk>
Organization: UUNET WorldCom server (post doesn't reflect views of UUNET WorldCom)
Newsgroups: comp.lang.ada,comp.lang.c++,comp.vxworks,comp.lang.java,comp.lang.c,comp.realtime

John Birch wrote in message <36acb008.31947998@news.demon.co.uk>...
|On Mon, 25 Jan 1999 01:13:44 -0000, "Nick Roberts"
| wrote:
|
|>Michael J. Tobler wrote in message ...
|
|>|> In article <77mu1h$n6m$1@galaxy.mchh.siemens.de>,
|>|> Wolfgang Denk wrote:
|
|>|> > But even under hard realtime you certainly will very
|>|> > often use dynamic memory allocation - all variables put
|>|> > on the stack are using a method of "dynamic memory
|>|> > allocation"
|
|>|This is an incorrect statement. The stack is a static area of storage.
|>|Objects, both native and user-defined, are simply defined there and the
|>|stack pointer is adjusted appropriately. There is no concept of "searching
|>|for available memory", allocating it, and recording it - the memory
|>|reserved for the stack is "allocated" at startup.
|
|>Michael's comment is not strictly correct.
|
|>In the general case, the pattern of calls that will be made during the
|>execution of a program - even a 'hard' real-time one - will be unpredictable
|>(because it depends on unpredictable input values).
|
|This is simply wrong!
|
|You argue that unpredictable input values cause unpredictable patterns
|of function calls. This is at best an exaggeration of the typical
|case.

No, it really isn't an exaggeration; it's not always the case, of course, but it really is the typical case. Consider the following (entirely concocted) code:

   int elev_rqd;

   init() {
      ...
      robin = 0;
      ...
   };

   handle_clock_tick() {
      ...
      compensate_elevators();
      ...
   };

   ...

   compensate_elevators() {
      int elev_dif = elev_rqd - read_port(ELEV_PSN_PORT);
      int elev_act;
      if (INSIDE(elev_dif,ELEV_DIF_THRESH,-ELEV_DIF_THRESH)) {
         elev_act = compute_elevator_actuator_power(elev_dif);
         CONSTRN(elev_act,ELEV_ACT_MIN,ELEV_ACT_MAX);
      } else {
         elev_act = 0;
      };
      write_port(ELEV_ACT_PORT,elev_act);
   };

This is only the simplest of examples, but consider the pattern of calling of the function compute_elevator_actuator_power, relative to the calling of all the other functions. In other words, when will it get called, compared to when the others get called? Will this pattern of calling be the same for each flight of the aeroplane? It will be different every time.

|For a well structured hard real-time program, no individual
|input value should be unexpected

It is a fundamental quality of input values that they are unpredictable. If the value of a certain input variable were predictable, you could use a constant, or a lookup table, or whatever, and not input it at all! Think about this.

|(i.e.
|not catered for in the code).

There is a difference between a value being unpredictable and it not being catered for in the code.

|Your argument assumes that the input values are coupled to the
|algorithms of the system in a manner that allows unpredictable
|behaviour!

No, I am not saying this. I use very precise language very precisely. Perhaps you should go back and read what I say very carefully.

|If you use input values to determine program execution flow
|then I would grant you that unpredictable behaviour might result, e.g.
|take input value, multiply by random number and put result in program
|counter will certainly get interesting in a hurry!

I think your example demonstrates that you don't really understand the nature of von Neumann architecture machines. Every time you execute a conditional jump (as a result of an 'if' or other conditional construct) whose condition depends on an input value, you are effectively loading the program counter with a value that depends on an input value. In fact, some compilers will translate some 'switch/case' statements into a table lookup straight into the PC.

|A well designed
|hard realtime system would never be designed in such a manner :-)

Smiley noted, but this is not true, John! I don't like saying it, but I really am a veteran at this sort of thing, and I know what I'm talking about.

|For example in a process control system, input might be acquired at
|known rates under interrupt. A background (i.e. non interrupt) cycle
|would process the current set of inputs and update output values.
|These might also be moved to the output hardware under interrupt.
|There is nowhere a possibility of an input value changing the course
|of execution of the background cycle (except through well understood
|paths to cope with error conditions).

I think (I hope ;-) my example above shows you that the course of execution can be - and indeed generally is - changed by input values.
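To make the point concrete, here is a minimal (and entirely hypothetical) C sketch: a dense 'switch' on an input value, like the one below, is exactly the sort of construct many compilers lower to an indirect jump through a table indexed by the operand, so the input value in effect selects what gets loaded into the program counter.

```c
#include <assert.h>

/* Hypothetical mode handler. Many compilers translate a dense switch
   like this into a table lookup whose result is loaded straight into
   the PC; either way, which code executes next depends on the input. */
static int handle_mode(int mode)
{
    switch (mode) {
    case 0:  return 10;  /* e.g. idle    */
    case 1:  return 20;  /* e.g. climb   */
    case 2:  return 30;  /* e.g. cruise  */
    case 3:  return 40;  /* e.g. descend */
    default: return -1;  /* unexpected value: catered for in the code,
                            yet still unpredictable before run time   */
    }
}
```

Note that every input, including the out-of-range one, is catered for; yet the pattern of execution the inputs produce cannot be known in advance, which is exactly the distinction being drawn here.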
|Consequently in such a design the
|stack utilisation can be well determined. The coupling here is by
|means of global variables (global between input and processing, and
|between processing and output).
|
|Many other valid design strategies are also possible. For performance
|reasons it may be desirable to put more processing into the interrupt
|mechanism and to retain the background 'loop' for detection of errors
|and housekeeping only.

I think, therefore, the rest of your argument is predicated on false premises.

|>The maximum possible stack usage can be calculated. However, this figure
|>will often be vastly greater than any stack usage that will occur in
|>practice (and thus be useless).
|
|Certainly the maximum value calculated will be high, but this value
|can be adjusted into a more reasonable range by cursory examination of
|the program. Firstly all interrupt routines and all routines that can
|be called as a result of an interrupt must be accumulated (in terms of
|stack requirement). It is assumed here that an interrupt cannot
|interrupt itself (if this is the case then the sums get harder ;-)).
|
|Having done this the background loop can be examined; looking at the
|call tree it is perfectly possible to deduce which routines can never
|be called from inside others. This allows the maximum initial stack
|requirement to be considerably reduced. This stage is often automated
|by the toolset provided by the compiler vendor.

Cursory examination of the program? For a small program, maybe! Yes, a toolset (or an appropriate compiler) can be used, but there are typically certain more complicated constructs to which it cannot apply its deductive reasoning. The result is that you have to avoid these constructs; in practice, doing this is always inconvenient but rarely impossible.
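For what it's worth, the call-tree accumulation described above can be sketched as follows, using an invented toy call graph with made-up frame sizes (a real toolset works from the compiler's own frame-size data): the worst-case stack need of a routine is its own frame plus the maximum over its callees, valid only when recursion is absent or separately bounded.

```c
#include <assert.h>
#include <stddef.h>

/* Toy static call graph (invented): frame[] gives each routine's frame
   size in bytes; callees[][] lists the routines it may call, with -1
   as the list terminator. */
#define NROUTINES 4
static const size_t frame[NROUTINES] = { 32, 64, 16, 128 };
static const int callees[NROUTINES][NROUTINES] = {
    { 1, 2, -1, -1 },    /* routine 0 calls routines 1 and 2 */
    { 3, -1, -1, -1 },   /* routine 1 calls routine 3        */
    { -1, -1, -1, -1 },  /* routine 2 is a leaf              */
    { -1, -1, -1, -1 },  /* routine 3 is a leaf              */
};

/* Worst-case stack from routine r: its own frame plus the deepest
   requirement among its callees. Assumes the graph is acyclic. */
static size_t worst_case_stack(int r)
{
    size_t deepest = 0;
    for (int i = 0; i < NROUTINES && callees[r][i] >= 0; i++) {
        size_t d = worst_case_stack(callees[r][i]);
        if (d > deepest)
            deepest = d;
    }
    return frame[r] + deepest;
}
```

This is also where the pessimism comes from: the deepest path in the graph may be dynamically impossible, which is precisely the wasted-reservation complaint made earlier.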
My argument is that it is not always necessary to ensure a stack 'crash' cannot happen - typically by ensuring that the amount of memory reserved for the stack will always be sufficient - because sometimes it is perfectly workable to allow such a crash to happen, provided you take the necessary precautions.

|>The occurrence of recursion which is controlled by an input value
|>usually makes the aforementioned effect inevitable.
|
|So don't use recursion. If you believe recursion is a necessity for
|hard real-time programming - please tell me which hard real-time
|project you have used it on - I can then avoid it. ;-)

No, I don't believe that recursion is a necessity for hard real-time programming, and I never said so. I simply believe that it is not always necessary to avoid using recursion, for two reasons: (a) in your own words ...

|If you do have
|to use recursion then it must be bounded - if it is not then you have
|a non-deterministic system. By definition if the system may behave
|non-deterministically, then it cannot solve a hard real-time problem
|(a deadline can never be missed!)

... and (b) there are almost always, in real real-time projects, parts of the program which can fail - provided the failure can be caught and properly recovered from - without going beyond the bounds of reliability or robustness required. For these parts of the program, it is acceptable to use code which may run out of memory - be it stack memory or heap memory (or any other kind of memory) - provided that you can ensure that, if it ever does run out of memory, the ensuing 'crash' (or whatever pejorative term you fancy using) and recovery does not jeopardise the minimum level of reliable execution required of the software.
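A minimal sketch of this catch-and-recover discipline (all names invented; a toy fixed arena stands in for whatever memory the task draws on): exhaustion is always detected, cannot touch memory outside the task's own arena, and unwinds to a known-stable state from which the task simply carries on.

```c
#include <assert.h>
#include <setjmp.h>
#include <stddef.h>

/* A non-critical task draws from its own fixed budget. Running out is
   a planned, detected event, not an accident: task_alloc() never
   returns a bad pointer, it unwinds to the recovery point instead. */
static jmp_buf       recovery_point;
static unsigned char arena[256];
static size_t        arena_used;

static void *task_alloc(size_t n)
{
    if (arena_used + n > sizeof arena)
        longjmp(recovery_point, 1);   /* the planned 'crash' */
    void *p = arena + arena_used;
    arena_used += n;
    return p;
}

/* One cycle of the task: returns 0 on a normal pass, 1 after a
   crash-and-recover pass (arena reset to a stable state). */
static int run_task_cycle(size_t demand)
{
    if (setjmp(recovery_point) != 0) {
        arena_used = 0;               /* recovery: back to a known state */
        return 1;
    }
    task_alloc(demand);
    /* ... the task's real work would go here ... */
    return 0;
}
```

The recovery path resets only the task's own state, so the rest of the program never notices; that is what the constraints listed next are there to guarantee.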
Of course, this does place constraints on the 'crashable' parts of the program:

(1) you must be sure that the crashes won't happen at a time or in a circumstance which will jeopardise safety (or otherwise be unacceptable);
(2) you must be sure that the memory overflow will always be detected;
(3) you must be sure that the overflow will never interfere with the memory or operation of the rest of the program;
(4) you must be sure that recovery will always be possible and correctly executed;
(5) you must be sure that the recovery process will never interfere with the proper operation of the rest of the software;
(6) you must be sure that the crash/recovery process will never cause the gradual erosion of memory or other resources;

and so on. But all of these constraints can be met, and it is not usually impractical to do so.

|>The allocation of data within a stack frame may well be static (but not
|>necessarily). There is a certain, small, subset of programs which have a
|>statically predictable pattern of stack allocation, and thus a memory
|>requirement which is statically determinable with little or no waste. But it
|>is misleading to describe allocation on the stack as 'static': it is
|>generally nothing of the kind.
|
|'Generally' here meaning in programs written without careful
|consideration of the issues involved in memory management for resource
|limited systems!

No, not at all. Again, this is not what I said, and not what I meant.

|>I also feel that the sentiment "don't use dynamic allocation in real-time
|>programs" is somewhat naive, to put it mildly. I rather suspect that, in
|>rejecting the use of 'malloc' or 'new', the programmer simply replaces these
|>forms of dynamic allocation with others (a statically declared array used to
|>hold dynamically changing data, for example), for no better reason than
|>superstition.
|
|I rather suspect you have never done any real-time coding. If you have
|then you should go back to school.
I simply repeat that I am a veteran of these things.

|Whilst I would not completely rule
|out the use of dynamic memory allocation (i.e. the use of malloc and
|free)

Hooray! ;-)

|I would state categorically that there are very few hard
|real-time systems IMHO where its use is a necessity.

I don't disagree with this, and I didn't in my original post! I'm not arguing that its use is ever necessary, but simply that its non-use is not always necessary.

|Since by its very
|nature it makes deterministic behaviour more difficult to implement,
|it is better avoided.

The nature of 'ordinary' dynamic allocation facilities often makes their use best avoided. But I am talking about dynamic allocation facilities specially programmed, if necessary, to have the required characteristics.

|>There is no need to take this attitude for any kind of program.
|
|>Taking the example of a program which flies an aeroplane, you will want to
|>adopt a strategy of absolute robustness ('never crash').
|
|>[1] This means that, if the program uses dynamic allocation, either it must
|>be possible to statically determine that memory will never be insufficient,
|
|A near tautology.

Not at all! There are some programs (only a few) which use dynamic allocation and which can be analysed and proved never to require more than a certain amount of memory at any one point in time. For such a program, if you supply at least that amount of memory, you have a program which uses dynamic allocation but where it is possible to statically determine that memory will never be insufficient. No tautology!

|>or there must be a means of: (a) reliably detecting insufficient memory; and
|>(b) upon detecting this, returning the software to a state that enables it
|>to continue safely flying the aeroplane. These requirements are not usually
|>impractical to achieve.
|
|Oops - what's this? I know, it's the equivalent of the Microsoft BSOD.
|"CTRL ALT DELETE To regain control of aircraft Captain!"
|GET REAL!!!!
Joke acknowledged, but I am not advocating the dreaded blue screen, nor the attitude that goes with it (of 'throw-away' software, as I think a recent poster put it). Some means - be it a language's exception mechanism or something else - must be used to catch the memory overflow condition and then reset variables to a stable state (including the PC, if you want to look at it that way) so that execution can continue 'gracefully'.

|You are advocating designing a piece of software that has known flaws.

Pejorative language doth not an argument make. You can call it a 'flaw' if you like, but it is only a convention to consider it a flaw that a piece of software may run out of memory from time to time. In reality, this is only a flaw if it happens without the programmer's intent or consent; if it happens but is planned for and expected, it is not a flaw, John.

|A good embedded software designer would regard stack (or heap) bounds
|checking (or a watchdog timer) as a belt and braces safety measure -
|not as an integral part of the design.

Only because that is the orthodoxy they have been taught.

|BTW - if the input condition causes the program to CRASH, i.e. fail to
|perform to specification - how does restarting the program resolve
|that problem if the input condition continues unchanged? I.e. if the
|pilot puts his aircraft into an inverted flat spin, and the program
|runs out of heap trying to solve the problem, how does
|>returning the software to a state that enables it to continue safely
|>flying the aeroplane
|stop the same thing happening again?

I am pointedly not advocating that vital flight systems be programmed so that they could run out of memory at a critical time! Mary, God, and all the saints forbid. But, to continue with the example of an aeroplane, there will be lots of subsystems which are not at all critical to flight safety.
An example of this might be a task whose sole function was to monitor the fuel levels in the various fuel tanks and (in 'auto' mode) automatically pump fuel from one tank to another so as to retain the optimum 'flight profile'. If this task were to crash once in a while, and have to reset itself to recover, this would never jeopardise anyone's safety at any time.

|>[2] If the software has interrupt servicing time constraints, the interrupt
|>servicing routines which have these constraints must either: (a) be able to
|>use the dynamic allocation in a way which is predictably fast enough; or (b)
|>not use dynamic allocation. These conditions do not need to be imposed on
|>any of the other parts of the software (if there are any).
|
|Why oh why would an interrupt need to dynamically allocate memory?????

I did not suggest that interrupts are likely to need to dynamically allocate memory. Yet again, I did not write this and I did not mean it. I am simply saying that, provided the dynamic allocation can be done in a way which is predictably fast enough, it can be safely done. Are you disagreeing with this statement? If so, say how, precisely.

|This is rather like saying, I want to catch a bus home but, rather
|than work out how long it takes me to walk to the bus stop and how
|often the bus comes, I'll just get myself a set of Bionic legs so that
|I'll always be able to catch it!

Bionic legs are not required: only a special allocation routine that will not attempt something too slow, such as garbage collection, before returning.

|It is this kind of crap that causes
|real-time software problems in the first place.

I would suggest that it is a poor understanding of the true situation which causes real-time software problems.
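Such a special allocation routine might look like the following sketch (invented names): a fixed-block pool whose alloc and free are each a couple of pointer operations - no searching, no compaction, no collection - so the worst-case time is trivially bounded.

```c
#include <assert.h>
#include <stddef.h>

/* Fixed-block pool: BLOCK_COUNT blocks of BLOCK_SIZE bytes, threaded
   onto a free list. Both operations are O(1), so they are predictably
   fast enough even on a path with hard servicing deadlines. */
#define BLOCK_SIZE  32
#define BLOCK_COUNT 8

typedef union block {
    union block  *next;               /* link while on the free list */
    unsigned char data[BLOCK_SIZE];   /* payload while allocated     */
} block_t;

static block_t  pool[BLOCK_COUNT];
static block_t *free_list;

static void pool_init(void)
{
    free_list = NULL;
    for (int i = 0; i < BLOCK_COUNT; i++) {
        pool[i].next = free_list;     /* push each block on the list */
        free_list = &pool[i];
    }
}

static void *pool_alloc(void)         /* O(1): pop the free list */
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;                         /* NULL means exhaustion, detected */
}

static void pool_free(void *p)        /* O(1): push back on the list */
{
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}
```

This is the general shape a purpose-built 'malloc_a' or 'malloc_b' might take: exhaustion is reported by a NULL return, rather than by a search or a collection that might run long.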
|Whilst I would grant that removing dynamic memory allocation from
|interrupts is a start, your follow up about it not being necessary for
|the rest of the system is only true if the rest of the system has no
|role in a deterministic process required by the system. This would
|possibly be true if all the real work was done under interrupt and
|none in background. While this is a workable solution for small
|problems, the moment there is a moderate degree of interaction between
|values calculated or acquired by different interrupt routines, then it
|becomes extremely difficult, if not impossible, to maintain any
|synchronous behaviour that may be required by the problem domain.

I'm getting the impression that you're one of those programmers who thinks that synchronisation between tasks is a matter of 'suck it and see', rather than using the proper synchronisation mechanisms in the proper places. A good embedded software designer will only ever rely upon these mechanisms to maintain the necessary synchronous behaviour between tasks, and not upon hoping that some tasks will be able to 'keep up'.

|>It may well be that typical supplied dynamic allocation facilities are too
|>crude or unreliable to satisfy the special requirements of a real-time
|>program. But the answer is not to simply abandon the use of 'malloc' and
|>replace it with a large number of ad-hoc, hybrid solutions (involving arrays
|>and indexes).
|
|Yes it is!

No, it is not!

|BTW - have you ever used a union?

What have unions got to do with it?

|Oh and while I'm at it, the use of malloc requires the use of pointer
|casts and generally pointer arithmetic. Are these sound coding
|practices when you are working in an environment where the background
|radiation level can induce bit failures at rates of up to 1 million
|times those on the average desktop?

Wow, what an egregiously bogus argument! John, the processor uses pointers (e.g. the program counter) whether you like it or not.
Take a look at the machine code produced by your compiler. It is full of pointers! Memory for hard applications has had sophisticated ECC schemes for years: even if it fails (to correct a bad bit), the failure is caught (it tugs on the NMI), and then the software - and this is the bit you need to read, John - must be able to restart itself 'gracefully' (either that or BSOD). I take it you don't yet design your software to do this.

|>Instead, all that is required is to reprogram the dynamic
|>allocation facilities to your requirements: you may need a 'malloc_a', a
|>'malloc_b', and a 'malloc_c', or whatever. The result is likely to be better
|>source code and better performing software, with no reduction in robustness.
|
|Oh goody, yet more potentially bug ridden code to add into our resource
|limited system to support an unnecessary convenience.

I'm not advocating that anybody introduce bug-ridden code, and I'm not advocating that anybody introduce code which hogs too much resource. Of course there will be applications which cannot support the overhead or the complexity of dynamic allocation. But I bet you there are many applications for which this is not necessarily true. Using dynamic allocation can improve code (machine code speed and size, and source code size and complexity) by the classic technique of 'factoring out' allocation code: instead of having what is effectively allocation code (for arrays) in several places in your program, you have it in only one place (for the heap) instead.

|>In addition, in view of my comments earlier in this posting about the
|>dynamic nature of allocation on the stack, it will often be the case that my
|>comments above about dynamic allocation can be applied to stack allocation
|>as well (saving a lot of memory space wasted on making sure the stack cannot
|>crash).
|
|I think you missed the entire point. You don't code around problems.
|You design in such a way as to never encounter them. This is exactly
|what the contention is.
|By not using dynamic memory allocation and
|having a statically allocated program - you simply never encounter the
|following problems (and therefore never need any code for them!):
|
|Heap overflow.
|Heap fragmentation.
|Stack overflow (assuming you get the sums right - but at least you
|have tried, which is more than can be said for the never, never
|approach you are advocating).

But John, I repeat, these are only problems if they are not properly anticipated and dealt with. You must avoid the pitfall of being blinded by the pejorative terms you have been taught for things. I am saying that, with care, all of these 'problems' can be anticipated and dealt with perfectly satisfactorily. There is nothing 'never never' about this approach: it is rock solid! On the other hand, your attitude of 'at least I have tried' is nearer to a 'never never' approach.

|
|regards John B.
|

My best wishes to you, John. I entreat you to rethink your position on this subject objectively, and I wish you the best of success in your real-time projects.

-------------------------------------------
Nick Roberts
-------------------------------------------