From: Ken Garlington
Subject: Re: Mandatory stack check (was: Changing discriminants...)
Date: 1996/08/14
Message-ID: <3211C27A.2A2F@lmtas.lmco.com>
Organization: Lockheed Martin Tactical Aircraft Systems
Newsgroups: comp.lang.ada

Robert A Duff wrote:
>
> I'm not sure what you're asking.  Yes, you can suppress the checks.
> Whether or not the implementation actually does anything about it is up
> to the implementation.  As Robert explained, GNAT doesn't do these
> checks at all (yet) on most targets, so there's nothing to suppress.

Perhaps I misunderstood. I thought that when GNAT implemented this check, it
would implement it using hardware memory-mapping support, and thus suppressing
it would have no effect AFTER implementation, either. My question was: what
happens for GNAT running on a CPU where code is needed to do the check? Will
such implementations support suppressing the generation of this code?

> I don't know if GNAT runs on any CPU's that don't have hardware memory
> mapping support.

Possibly not today - but I assume GNAT attempts to minimize hardware-specific
assumptions in its interface to the GNU back end. Would this mean, for
example, that porting GNAT to an architecture that already had GNU support,
but did not have hardware support for stack-overflow detection, would be more
complicated (require more GNAT changes) than porting to an architecture with
such hardware support?

> You can easily show that the mechanism jumps to the handler it's
> supposed to, and cleans up the stack properly.

Easy for you, maybe! In my experience, you have to locate the checks, verify
that they set up any parameters to the handler correctly, and verify that they
make the jump correctly. And you have to do this for each unit, and regression
test it for each change to that unit. Most importantly, you have to do this at
the object-code level. No fun at all!

> However, it seems to me
> that it's a bit harder to prove anything about what that handler code
> does -- its precondition is essentially "false".  That is, the handler
> can't assume much of anything about the state of the program variables.
> I suppose it could just zero memory and start the program over, which
> might be appropriate in *some* cases.

At least with the compiler I'm familiar with, the handler for Storage_Error is
a single unit of code in the RTS, and it's usually a pretty simple piece of
code at that. So testing the handler is not the issue. Making sure that the
handler isn't called when it shouldn't be, that an attempt to call it doesn't
end up actually calling something else, that a call to the handler doesn't
pass bad data, and so on - that's the trickier part. And, usually, that stuff
is sprinkled throughout the application, unless you suppress the check (or the
check takes no code in the first place).
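To make the two pieces of this concrete, here is a minimal sketch (mine, not
any particular compiler's) in Ada 95. Storage_Check, Storage_Error, and
pragma Suppress are standard; whether the check costs any generated code, and
whether the outer handler can actually be reached once the stack is gone, are
implementation matters, as discussed above.

   with Ada.Text_IO;
   procedure Check_Demo is

      --  Uncommenting the next line gives the compiler *permission* (not
      --  an obligation) to omit the stack check; overflow then becomes
      --  erroneous execution rather than Storage_Error.
      --  pragma Suppress (Storage_Check);

      function Probe (N : Natural) return Natural is
         Pad : String (1 .. 10_000) := (others => ' ');  -- eat stack fast
      begin
         --  The addition after the call keeps this from being a tail call.
         return Probe (N + 1) + Character'Pos (Pad (N mod 10_000 + 1));
      end Probe;

   begin
      Ada.Text_IO.Put_Line (Natural'Image (Probe (0)));
   exception
      when Storage_Error =>
         --  Little can be assumed about program state here; on some
         --  implementations, even reaching this point requires the RTS
         --  to have reserved stack margin for the handler.
         Ada.Text_IO.Put_Line ("Storage_Error reached the outer frame");
   end Check_Demo;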
> > Of course, if there's _nothing_ you can do to handle Storage_Error, then
> > why have code lurking in your application that raises it (possibly when
> > you _don't_ want it raised)?
>
> Well, there's one thing that you can do -- kill the program.

Right -- but that means there is _something_ you can do! If you can't
reasonably kill/restart the program, and you can't reliably react to the error
in any other way, then it seems to me a better use of resources to try to
avoid the condition in the first place than to try to handle the
unhandleable. For example, I've seen systems that do the following (a rough
sketch appears after the signature):

1. Suppress range checking.

2. Put in a handler for the hardware overflow interrupt that simply returns
   to the next instruction.

What this effectively does is ignore range checking. If an out-of-range value
is generated, that bad value is used in subsequent calculations, memory
accesses, etc. This may sound stupid, but an incorrect output in one
calculation cycle (assuming the error is due to a hardware glitch, not a
software fault) may be preferable to attempting to provide a "safe" value, or
to killing/restarting the program. This is particularly true if there is an
operator in the loop who can distinguish a "hard" failure from a transient.

Of course, the key to doing something like this is performing adequate
analysis: you have to convince yourself that it is preferable to attempting to
handle the error using Ada exceptions.

-- 
LMTAS - "Our Brand Means Quality"
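P.S. For concreteness, the pattern in the list above might look roughly like
the following in Ada 95. This is only a sketch: Overflow_Trap is a
hypothetical interrupt ID (the contents of Ada.Interrupts.Names are
implementation-defined), and whether a trap handler can "return to the next
instruction" is a property of the hardware and the RTS, not of the language.

   with Ada.Interrupts.Names;

   package Overflow_Policy is

      --  1. Grant permission to omit the language checks in this region
      --     (Suppress is permission, not a guarantee that the checks
      --     disappear).
      pragma Suppress (Range_Check);
      pragma Suppress (Overflow_Check);

      --  2. Attach a do-nothing handler to the overflow trap.
      --     Overflow_Trap is hypothetical; consult the implementation's
      --     Ada.Interrupts.Names for the real interrupt ID, if any.
      protected Trap is
         procedure Ignore;
         pragma Attach_Handler (Ignore, Ada.Interrupts.Names.Overflow_Trap);
      end Trap;

   end Overflow_Policy;

   package body Overflow_Policy is

      protected body Trap is
         procedure Ignore is
         begin
            null;  --  Acknowledge the trap and resume; the out-of-range
                   --  value simply flows into subsequent calculations.
         end Ignore;
      end Trap;

   end Overflow_Policy;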