From: johnb@invision.co.uk (John Birch)
Subject: Re: Real-time dyn allctn (was: Ada vs C++ vs Java)
Date: 1999/01/27
Message-ID: <36aeeded.5738942@news.demon.co.uk>
Reply-To: johnb@invision.co.uk
Newsgroups: comp.lang.ada,comp.lang.c++,comp.vxworks,comp.lang.java,comp.lang.c,comp.realtime

On Tue, 26 Jan 1999 21:58:25 -0000, "Nick Roberts" (Nick.Roberts) wrote:

>>In the general case, the pattern of calls that will be made during the
>>execution of a program - even a 'hard' real-time one - will be
>>unpredictable (because it depends on unpredictable input values).

I wrote

>|This is simply wrong!
>|You argue that unpredictable input values cause unpredictable patterns
>|of function calls. This is at best an exaggeration of the typical case.

>No it really isn't an exaggeration; it's not always the case, of course,
>but it really is the typical case. Consider the following (entirely
>concocted) code:
>
>int elev_rqd;
>
>void init(void)
>{
>    ...
>    robin = 0;
>    ...
>}
>
>void handle_clock_tick(void)
>{
>    ...
>    compensate_elevators();
>    ...
>}
>
>void compensate_elevators(void)
>{
>    int elev_dif = elev_rqd - read_port(ELEV_PSN_PORT);
>    int elev_act;
>    if (INSIDE(elev_dif, ELEV_DIF_THRESH, -ELEV_DIF_THRESH)) {
>        elev_act = compute_elevator_actuator_power(elev_dif);
>        CONSTRN(elev_act, ELEV_ACT_MIN, ELEV_ACT_MAX);
>    } else {
>        elev_act = 0;
>    }
>    write_port(ELEV_ACT_PORT, elev_act);
>}
>
>This is only the simplest example, but consider the pattern of calling of
>the function compute_elevator_actuator_power (relative to the calling of
>all the other functions). In other words, when will it get called,
>compared to when the others get called? Will this pattern of calling be
>the same for each flight of the aeroplane? It will be different every
>time.

From your previous example I see what you are getting at by 'unpredictable
pattern of calls'. Assuming a repeating cycle program (which all
continuous-execution real-time programs must be, i.e. main never exits!),
from cycle to cycle the pattern of calls _may_ (and probably would) differ
because of input changes.
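What I mean by a repeating cycle program, as a minimal sketch (the phase
names are invented placeholders, not code from any real system):

#include <stdint.h>

static volatile uint8_t tick;            /* set by a periodic timer ISR   */

static void read_inputs(void)     { /* snapshot the input ports         */ }
static void compute_outputs(void) { /* control laws - which functions
                                       get called here varies from
                                       cycle to cycle with the inputs   */ }
static void write_outputs(void)   { /* drive the actuator ports         */ }

int main(void)
{
    for (;;) {                           /* main never exits              */
        while (!tick)                    /* wait for the cycle boundary   */
            ;
        tick = 0;
        read_inputs();
        compute_outputs();
        write_outputs();
    }
}

The structure repeats forever; only the route taken inside
compute_outputs() varies with the inputs.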
>|For a well structured hard real-time program, no individual
>|input value should be unexpected

>It is a fundamental quality of input values that they are unpredictable.
>If the value of a certain input variable was predictable, you could use a
>constant, or a lookup table, or whatever, and not input it at all! Think
>about this.

Why don't you re-read my comment - the word I used was 'unexpected', not
unpredictable!

>|(i.e. not catered for in the code).

>There is a difference between a value being unpredictable, and it not
>being catered for in the code.

Agreed, but this is not the point!

>|Your argument assumes that the input values are coupled to the
>|algorithms of the system in a manner that allows unpredictable
>|behaviour!

>No, I am not saying this. I use very precise language very precisely.
>Perhaps you should go back, and read what I say very carefully.

Yep, I concede I misunderstood what you were saying about 'patterns of
calls' - but practise what you preach!

>|If you use input values to determine program execution flow
>|then I would grant you that unpredictable behaviour might result, e.g.
>|take input value, multiply by random number and put result in program
>|counter will certainly get interesting in a hurry!

>I think your example demonstrates that you don't really understand the
>nature of Von Neumann architecture machines.

Not at all, I simply didn't understand the 'pattern of calls' point until
you gave an example.

>|A well designed
>|hard real-time system would never be designed in such a manner :-)

This comment refers to the previous misunderstandings about input values
and program flow. It should, I hope, be obvious to anybody writing a
program that an input value changes the program flow - I hardly think it
is necessary to elaborate such points. This is in fact the source of my
misunderstanding over your 'pattern of calls' point.

>Smiley noted, but this is not true, John! I don't like saying it, but I
>really am a veteran at this sort of thing, and I know what I'm talking
>about.

Since you have absolutely no idea of my experience in this field, stating
that you are a veteran seems like a cheap attempt at browbeating (and I
didn't like saying that either!)

>|For example in a process control system, input might be acquired at
>|known rates under interrupt. A background (i.e. non-interrupt) cycle
>|would process the current set of inputs and update output values.
>|These might also be moved to the output hardware under interrupt.
>|There is nowhere a possibility of an input value changing the course
>|of execution of the background cycle (except through well understood
>|paths to cope with error conditions).

>I think (I hope ;-) my example above shows you that the course of
>execution can be - and indeed generally is - changed by input values.

Well, assuming that the code is written using conditional statements,
which might not be the case, there will of course be _minor_ variations in
execution path, since efficient coding practice would be to not execute
unnecessary functions (re. your example).
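To pin down the structure I described above - acquisition under interrupt,
with a background cycle working on a stable snapshot - a minimal sketch
(the names, channel count and interrupt-masking calls are hypothetical
stand-ins for a real target's facilities):

#include <stdint.h>

#define N_CHANNELS 4

static volatile uint16_t raw[N_CHANNELS];   /* written only under interrupt */
static uint16_t snap[N_CHANNELS];           /* stable for one whole cycle   */

static uint16_t read_adc(int ch) { (void)ch; return 0; }  /* h/w stand-in  */
static void di(void) { }    /* disable interrupts - target intrinsic       */
static void ei(void) { }    /* enable interrupts - target intrinsic        */

void acquisition_isr(void)                  /* fires at a known, fixed rate */
{
    int i;
    for (i = 0; i < N_CHANNELS; i++)
        raw[i] = read_adc(i);
}

void background_cycle(void)
{
    int i;

    di();                                   /* take a coherent snapshot...  */
    for (i = 0; i < N_CHANNELS; i++)
        snap[i] = raw[i];
    ei();

    /* ...then compute outputs from snap[]: the input values steer the
       outputs, never the structure of the cycle itself.                    */
}

No allocation, no surprises: the inputs change what is computed, not how
the program is laid out in memory.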
>I think, therefore, the rest of your argument is predicated on false
>premises.

Actually it is your argument that is based on false premises. You stated:

>The maximum possible stack usage can be calculated. However, this figure
>will often be vastly greater than any stack usage that will occur in
>practice (and thus be useless).

Your argument being that calculation of the maximum possible stack is
useless because it is vastly more than will occur in practice. This is
obvious, so I followed it with a number of points about how this value is
refined:

>|Certainly the maximum value calculated will be high, but this value
>|can be adjusted into a more reasonable range by cursory examination of
>|the program. Firstly all interrupt routines and all routines that can
>|be called as a result of an interrupt must be accumulated (in terms of
>|stack requirement).

This is because interrupt routines can interrupt each other (if allowed
to, within the bounds of priority mechanisms etc.).

>|It is assumed here that an interrupt cannot
>|interrupt itself (if this is the case then the sums get harder ;-)).

Because that is a form of unbounded recursion!

>|Having done this the background loop can be examined; looking at the
>|call tree it is perfectly possible to deduce which routines can never
>|be called from inside others. This allows the maximum initial stack
>|requirement to be considerably reduced. This stage is often automated
>|by the toolset provided by the compiler vendor.
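To make those sums concrete, the end result has this shape (every routine
name and figure below is invented, purely for illustration):

  Deepest background call path (from the call tree):
      main -> control_cycle -> filter       120 + 64 + 40  = 224 bytes
  Deepest path under the timer interrupt:    32 + 24       =  56 bytes
  Deepest path under the UART interrupt
  (higher priority, may preempt the timer):  48            =  48 bytes
  Register context saved per preemption:     2 x 16        =  32 bytes
                                                            ----------
  Worst-case stack requirement                              360 bytes
  (to which any sane engineer adds a safety margin)

The point being that the figure is a sum of maxima over statically known
paths - no knowledge of execution history is required.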
>Cursory examination of the program? A small program maybe!

Are you advocating the development of large, monolithic, embedded, hard
real-time programs here that are _NOT_ amenable to human analysis?

>Yes, a toolset
>(or appropriate compiler) can be used, but there are typically certain,
>more complicated, constructs that it cannot apply its deductive reasoning
>to. The result is, you have to avoid these constructs; in practice, doing
>this is always inconvenient but rarely impossible. My argument is that it
>is not always necessary to ensure a stack 'crash' cannot happen

You can't _ensure_ a stack crash won't happen. There is always scope for
a) programmer error, b) bit errors, c) erroneous CPU operation - caused by
brown-outs on the power supply, etc. etc. etc.

>- typically by
>ensuring the amount of memory reserved for the stack will always be
>sufficient - because sometimes it is perfectly workable to allow such a
>crash to happen, providing you take the necessary precautions.

When, in a hard real-time embedded system, is a stack crash acceptable?
Can you guarantee (unless we are talking about cycling through a reset
vector here - is this your necessary precaution?) that post-crash your
system is not corrupted in some subtle way? There is no way that a crash
_should_ occur in a hard real-time system as a deliberate result of
programming. Any reasonably sane embedded engineer will of course have
watchdog routines to catch infinite looping; they may add traps for stack
overflow, and they may test critical variables. In all cases the
unexpected would (generally) cause a restart from zero. But I am horrified
at the thought of a hard real-time system restarting because of memory
overflow - that to me indicates a significant design failure.

>No I don't believe that recursion is a necessity for hard real-time
>programming, and I never said so.

Neither did I - what's your point?

>I simply believe that it is not always
>necessary to avoid using recursion, for two reasons: (a) in your own words

>|If you do have
>|to use recursion then it must be bounded - if it is not then you have
>|a non-deterministic system. By definition if the system may behave
>|non-deterministically, then it cannot solve a hard real-time problem
>|(a deadline can never be missed!)

>... and (b) there are almost always, in real real-time projects, parts of
>the program which can fail - providing the failure can be caught and
>properly recovered - without going beyond the bounds of reliability or
>robustness required. For these parts of the program, it is acceptable to
>use code which may run out of memory - be it stack memory or heap memory
>(or any other kind of memory) - provided that you can ensure that if it
>ever does run out of memory, the ensuing 'crash' (or whatever pejorative
>term you fancy using) and recovery does not jeopardise the minimum level
>of reliable execution required of the software.

I'm sorry, I find it difficult to conceive of "parts of the program which
can fail - providing the failure can be caught and properly recovered -
without going beyond the bounds of reliability or robustness required" in
a real-time system. What are they in there for if they are not critical to
the system? If they are not part of an agreed customer specification,
should they be included if there is the slightest chance that they might
impact the performance of what is actually required? After all, how can
one of these 'parts' fail with a stack crash without potentially
impacting, for example, an interrupt routine that calls a re-entrant
subroutine?

>Of course, this does place constraints on the 'crashable' parts of the
>program: (1) you must be sure that the crashes won't happen at a time or
>in a circumstance which will jeopardise safety (or otherwise be
>unacceptable);

So this rules out any system that uses interrupts, or at least any time
when interrupts are enabled! Of course in an HRT system, 'unacceptable'
would be any impact on performance that affects response time. How do you
constrain when a crash happens? By only running the offending code at
specific times? Perhaps you could put up a warning message: "Don't apply
up elevator just yet - the software is performing an operation that may
crash"? Just how, in a real-time system, do you anticipate when is a good
time to do something?

>(2) you must be sure that the memory overflow will always be detected;

Easy to say - how would you advocate doing it?
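The best trap I know of is a guard pattern at the stack limit, checked
periodically - a minimal sketch (all sizes, names and the placement are
invented for illustration), and note what it does and does not catch:

#include <stdint.h>

#define GUARD_WORDS 8
#define GUARD_MAGIC 0xDEADBEEFu

/* In a real system the linker would place this at the limit the stack
   reaches first; here it is just an ordinary array for illustration.   */
static uint32_t stack_guard[GUARD_WORDS];

static void fatal_restart(void)
{
    for (;;) ;                  /* stop kicking the watchdog; let the
                                   hardware restart us from zero        */
}

void guard_init(void)
{
    int i;
    for (i = 0; i < GUARD_WORDS; i++)
        stack_guard[i] = GUARD_MAGIC;
}

void guard_check(void)          /* call from the background cycle */
{
    int i;
    for (i = 0; i < GUARD_WORDS; i++)
        if (stack_guard[i] != GUARD_MAGIC)
            fatal_restart();    /* the stack has grown into the guard */
}

A check like this fires only at the points where it is called, and only
after the guard has already been overwritten - which is exactly why I
class it as belt and braces, not a design mechanism.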
>(3) you must be sure that the overflow will never interfere with the
>memory or operation of the rest of the program;

Right, so let's take a couple of cases. Stack overflow - oops, that'll
stuff up any interrupt routines, unless of course you just let them
compound the problem. Heap overflow - oh, corrupts the stack and stuffs a
return address for some other bit of the program. Divide by zero - steals
cycles from the rest of the code?

>(4) you must be sure that recovery
>will always be possible and correctly executed;

How do you recover the state of the memory corrupted by a stack or heap
crash?

>(5) you must be sure that the recovery process will never interfere with
>the proper operation of the rest of the software;

So the crash occurs, and you need cycles to put right the damage (or
rewind), but there is no impact on the proper operation of the software? I
guess if you're just eating idle time there's no problem, but what about
interrupts? After all, we're talking real-time here, so bang: worst-case
high-priority interrupt one microsecond after you've stack crashed.

>(6) you must be sure that the crash/recovery process
>will never cause the gradual erosion of memory or other resources; and so
>on.

Memory leaks should never occur in any kind of system! But are you talking
about fragmentation here?

>But all of these constraints can be met, and it is not usually
>impractical to do so.

Frankly that last comment is bull.

>I simply repeat that I am a veteran of these things.

As am I.

>|Whilst I would not completely rule
>|out the use of dynamic memory allocation (i.e. the use of malloc and
>|free)

>Hooray! ;-)

Smiley acknowledged.

>|I would state categorically that there are very few hard
>|real-time systems IMHO where its use is a necessity.

>I don't disagree with this, and I didn't in my original post! I'm not
>arguing that its use is ever necessary, but simply that its non-use is
>not always necessary.

Sorry, I don't disagree :-). I also never said that its non-use is
necessary. What I did say was:

>|Since by its very
>|nature it makes deterministic behaviour more difficult to implement,
>|it is better avoided.

>|>Taking the example of a program which flies an aeroplane, you will want
>|>to adopt a strategy of absolute robustness ('never crash').
>|>[1] This means that, if the program uses dynamic allocation, either it
>|>must be possible to statically determine that memory will never be
>|>insufficient,

>|A near tautology.

>Not at all! There are some programs (only a few) which use dynamic
>allocation which can be analysed and proved to never require more than a
>certain amount of memory at any one point in time. For such a program, if
>you supply at least that amount of memory, you have a program which uses
>dynamic allocation but where it is possible to statically determine that
>memory will never be insufficient. No tautology!

That's why I said a _near_ tautology; you yourself make the point that
only a few programs fall into this category!

>|>or there must be a means of: (a) reliably detecting insufficient
>|>memory; and (b) upon detecting this, returning the software to a state
>|>that enables it to continue safely flying the aeroplane. These
>|>requirements are not usually impractical to achieve.

Now here is where I fundamentally disagree. Is it truly practical to
implement the constrained mechanisms you advocate simply to allow the use
of dynamic memory for only those parts of the system that are not critical
in meeting reliability and robustness requirements? What benefit do you
get by adding complexity here?

>I am not advocating the dreaded blue screen, nor the
>attitude that goes with it (of 'throw-away' software, as I think a recent
>poster put it). Some means - be it a language's exception mechanism or
>something else - must be used to catch the memory overflow condition and
>then reset variables to a stable state (including the PC, if you want to
>look at it that way) so that execution can continue 'gracefully'.

But how can you achieve this, given all the constraints YOU put upon the
mechanism? BTW, I see nothing graceful in any kind of crash.

>|You are advocating designing a piece of software that has known flaws.

>Pejorative language doth not an argument make. You can call it a 'flaw'
>if you like, but it is only a convention to consider it a flaw that a
>piece of software may run out of memory from time to time. In reality,
>this is only a flaw if it happens without the programmer's intent or
>consent; if it happens but is planned for and expected, it is not a flaw,
>John.

I would consider it a flaw in the programmer's design strategy, Nick.

>|A good embedded software designer would regard stack (or heap) bounds
>|checking, (or a watchdog timer) as a belt and braces safety measure -
>|not as an integral part of the design.

>Only because that is the orthodoxy they have been taught.

Seems OK to me. Layered safety, I think they call it!
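To illustrate what I mean by layered safety: the watchdog is an
independent layer, kicked only when the whole cycle has demonstrably
completed. A minimal sketch - the register address, key value and phase
names are all invented:

#include <stdint.h>

/* Hypothetical memory-mapped watchdog reload register for some target. */
#define WDT_RELOAD (*(volatile uint16_t *)0xFFC0)
#define WDT_KEY    0x5A5A

#define PHASE_INPUT   0x1u
#define PHASE_CONTROL 0x2u
#define PHASE_OUTPUT  0x4u

static unsigned cycle_flags;    /* each phase of the cycle sets its bit */

void end_of_cycle(void)
{
    /* Kick the watchdog only if every phase of the cycle ran. An
       infinite loop (or a skipped phase) means no kick, and the
       hardware resets us independently of any software state.         */
    if (cycle_flags == (PHASE_INPUT | PHASE_CONTROL | PHASE_OUTPUT))
        WDT_RELOAD = WDT_KEY;
    cycle_flags = 0;
}

The hardware layer backs up the software layer; neither is the design.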
>|BTW - if the input condition causes the program to CRASH, i.e. fail to
>|perform to specification - how does restarting the program resolve
>|that problem if the input condition continues unchanged? I.e. if the
>|pilot puts his aircraft into an inverted flat spin, and the program
>|runs out of heap trying to solve the problem, how does

>|>returning the software to a state that enables it to continue safely
>|>flying the aeroplane

>|stop the same thing happening again?

>I am pointedly not advocating that vital flight systems are programmed so
>that they could run out of memory at a critical time! Mary, God, and all
>the saints forbid. But, to continue with the example of an aeroplane,
>there will be lots of subsystems which are not at all critical to flight
>safety. An example of this might be a task whose sole function was to
>monitor the fuel levels in the various fuel tanks, and (in 'auto' mode)
>automatically pump fuel from one tank to another so as to retain the
>optimum 'flight profile'. If this task were to crash once in a while, and
>have to reset itself to recover, this would never jeopardise anyone's
>safety at any time.

But you seem to be advocating that this critical function be implemented
as part of a safety-critical system, and further, that if it is
incorporated into such a system, it can be allowed to fail. It is evident
that you have no experience of failure analysis if this is truly what you
are advocating. The safety guys would eat you for breakfast.

>|>[2] If the software has interrupt servicing time constraints, the
>|>interrupt servicing routines which have these constraints must either:
>|>(a) be able to use the dynamic allocation in a way which is predictably
>|>fast enough; or (b) not use dynamic allocation. These conditions do not
>|>need to be imposed on any of the other parts of the software (if there
>|>are any).

>|Why oh why would an interrupt need to dynamically allocate memory?????

>I did not suggest that interrupts are likely to need to dynamically
>allocate memory. Yet again, I did not write this and I did not mean it.

Oh, I'm sorry, shall I quote you? -> "the interrupt servicing routines
which have these constraints must either: (a) be able to use the dynamic
allocation in a way which is predictably fast enough". Seems perfectly
clear to me what was written and what I therefore replied to.

>I am
>simply saying that, provided the dynamic allocation can be done in a way
>which is predictably fast enough, it can be safely done. Are you
>disagreeing with this statement? If so, say how, precisely.

Yes I am. If you use dynamic memory allocation you are at risk of
overrunning the available memory. Note that this is not necessarily (but
typically is) the case. Speed of allocation is only one of the issues
here. You have not shown that it is safe (or safer) to use dynamic memory
allocation in a hard real-time system; rather, you have emphasised all the
constraints one might be under if you do. Given that my contention was

>|Since by its very
>|nature it makes deterministic behaviour more difficult to implement,
>|it is better avoided.

I think you've made my point rather well.
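To spell out the issue that has nothing to do with allocation speed: free
memory can be sufficient in total yet unusable, because it is split into
holes. A concocted illustration (sizes invented; whether the final request
actually fails depends entirely on the allocator in use - which is exactly
the problem):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *blk[16];
    int i;

    for (i = 0; i < 16; i++)            /* carve a notional 4 KB heap    */
        blk[i] = malloc(128);           /* into sixteen 128-byte blocks  */
    for (i = 0; i < 16; i += 2)
        free(blk[i]);                   /* 1 KB is now free, but in      */
                                        /* eight separate 128-byte holes */
    if (malloc(256) == NULL)            /* a 256-byte request can fail   */
        puts("exhausted by fragmentation");
    else
        puts("this allocator coped - this time");
    return 0;
}

You cannot tell from the source code alone which message you will get, and
that is the non-determinism I am objecting to.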
>I'm getting the impression that you're one of those programmers who
>thinks that synchronisation between tasks is a matter of 'suck it and
>see', rather than using the proper synchronisation mechanisms in the
>proper places. A good embedded software designer will only ever rely upon
>these mechanisms to maintain the necessary synchronous behaviour between
>tasks, and not upon hoping that some tasks will be able to 'keep up'.

I don't see where you got this impression at all. It is not correct.

>Wow, what an egregiously bogus argument!

Admittedly a somewhat spurious comment about the effects of radiation on
hardware - but would you dispute that an array of 256 chars indexed by an
8-bit integer is safer than a 256-byte block of memory accessed via a
32-bit pointer? After all, corruption of my index at least stays within
the bounds of the array.

>John, the processor uses pointers (e.g. the program counter)

This is a register _inside_ the CPU - the radiation implications are
somewhat different; for instance, it may well be implemented using static
RAM rather than the external memory where pointers would typically be
stored.

>whether you like it or not. Take a look at the
>machine code produced by your compiler. It is full of pointers! Memory
>for hard applications has had sophisticated ECC schemes for years: even
>if it fails (to correct a bad bit) the failure is caught (it tugs on the
>NMI) and then the software - and this is the bit you need to read, John -
>must then be able to restart itself 'gracefully' (either that or BSOD). I
>take it you don't yet design your software to do this.

Resorting to casting aspersions on my competence is a poor debating
tactic. Restarting software as a result of a hardware failure has no
relevance to the topic of dynamic memory allocation in embedded systems. I
never disputed the requirement for layered backups to the software - I
even mentioned them in my posting. You appear to be trying to mix the
arguments here.

>|>Instead, all that is required is to reprogram the dynamic
>|>allocation facilities to your requirements: you may need a 'malloc_a',
>|>a 'malloc_b', and a 'malloc_c', or whatever. The result is likely to be
>|>better source code and better performing software, with no reduction in
>|>robustness.

>|Oh goody, yet more potentially bug-ridden code to add into our resource
>|limited system to support an unnecessary convenience.

>I'm not advocating anybody introduce bug-ridden code,

Neither was I. I said yet more _potentially_ bug-ridden code, the emphasis
being of course on the fact that a) you're increasing the size of the code
base, and b) more code means more bugs - since all code has bugs.

>and I'm not advocating
>anybody introduce code which hogs too much resource. Of course there will
>be applications which cannot support the overhead or the complexity of
>dynamic allocation. But I bet you there are many applications for which
>this is not necessarily true. Using dynamic allocation can improve code
>(machine code speed and size, and source code size and complexity) by the
>classic technique of 'factoring out' allocation code: instead of having
>what is effectively allocation code (for arrays) in several places in
>your program, you have it in only one place (for the heap) instead.

Allocation code for arrays???
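For the record, what I take a 'malloc_a' to be is a fixed-block pool, one
per subsystem: statically sized storage with O(1) allocate and free. A
minimal sketch (block size and count invented) - and note that what it
buys is determinism of _time_; the pool can still run dry, which is
precisely my objection:

#include <stddef.h>
#include <stdint.h>

#define BLK_SIZE  32
#define BLK_COUNT 16

typedef union block {
    union block *next;              /* link while the block is free     */
    uint8_t data[BLK_SIZE];         /* payload while it is in use       */
} block_t;

static block_t pool[BLK_COUNT];     /* all storage statically allocated */
static block_t *free_list;

void pool_init(void)
{
    int i;
    for (i = 0; i < BLK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

void *pool_alloc(void)              /* 'malloc_a', if you like: O(1)    */
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;                       /* NULL when the pool is dry        */
}

void pool_free(void *p)             /* O(1), no fragmentation possible  */
{
    block_t *b = (block_t *)p;
    if (b == NULL)
        return;
    b->next = free_list;
    free_list = b;
}

Every caller must still handle the NULL, which is the very failure mode I
say you should design out rather than code around.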
>|>In addition, in view of my comments earlier in this posting about the
>|>dynamic nature of allocation on the stack, it will often be the case
>|>that my comments above about dynamic allocation can be applied to stack
>|>allocation as well (saving a lot of memory space wasted on making sure
>|>the stack cannot crash).

You were the one who brought up this idea of writing code to ensure the
stack wouldn't crash. I don't see any memory requirement in a statically
allocated system for this at all. It is your 'solution', with multiple
allocation routines and horrendous constraints, that is going to increase
the memory wasted. For example, you are going to use memory to indicate
whether it is safe to enter one of your non-critical,
might-blow-the-stack-or-heap routines.

I simply stated that the stack requirement for a statically allocated
program can be easily calculated (given certain inconsequential
restrictions - such as avoiding recursion and function pointers), and that
this is not so easy for a system using dynamic memory allocation. My
contention from there was that dynamic memory allocation is not
_necessary_ for the majority of embedded real-time systems. I then made
the following comment:

>|You don't code around problems.
>|You design in such a way as to never encounter them. This is exactly
>|what the contention is. By not using dynamic memory allocation and
>|having a statically allocated program - you simply never encounter the
>|following problems (and therefore never need any code for them!)
>|Heap overflow.
>|Heap fragmentation.
>|Stack overflow (assuming you get the sums right - but at least you
>|have tried, which is more than can be said for the never, never
>|approach you are advocating).

>But John, I repeat, these are only problems if they are not properly
>anticipated and dealt with. You must avoid the pitfall of being blinded
>by the pejorative terms you have been taught for things. I am saying
>that, with care, all of these 'problems' can be anticipated and dealt
>with perfectly satisfactorily. There is nothing 'never never' about this
>approach: it is rock solid! On the other hand your attitude of 'at least
>I have tried' is nearer to a 'never never' approach.

I have only one further comment to make: KISS.

>My best wishes to you, John. I entreat you to objectively rethink your
>position on this subject, and I wish you the best of success in your
>real-time projects.

Thanks for the blessing. I am sorry if this posting appears aggressive in
tone, but I would entreat you to consider taking a less patronising tone.
FYI I have a history of very successful hard real-time projects behind me.

BTW - If you follow this up, can we continue this thread only in
comp.realtime? This thread is far too X-posted.

regards John B.