From: Robert A Duff
Subject: Re: Is an RTOS Required for Ada?
Date: 1999/06/09
Sender: bobduff@world.std.com (Robert A Duff)
References: <373B2927.7B22F898@pop.safetran.com> <19990514155120.03860.00000396@ng-cr1.aol.com> <7hmc18$jr6$1@nnrp1.deja.com> <7i1b7p$3nb$1@nnrp1.deja.com> <7ifapi$lf1$1@nnrp1.deja.com>
Organization: The World Public Access UNIX, Brookline, MA
Newsgroups: comp.lang.ada

The question was why a no-run-time-system implementation of Ada is
better for safety-critical applications.  I don't think Robert Dewar and
George Romanski have answered it in the technical sense.  They both
seemed to be saying, "because the standards say so".  Well, that's a
reason, but it's not a *technical* reason.

"George Romanski" writes:

> Let's take an array assignment as an example.
> If the arrays do not overlap (slicing is forbidden) then a single
> decision may be present (for the loop - assuming the loop is not
> unrolled).
>
> Provided the assignment is completed, the decision will have been
> evaluated both true and false.
>
> If the arrays overlap, then a decision may be required to decide
> the direction of the indexing, and a decision for the loop.
>
> If the array has a smaller component size than a normally addressed
> memory unit (e.g. we can move a word quicker than 4 bytes) then the
> loop may move words until it gets to the edges.  This will require
> more decisions for the operation.
>
> For Level B code, ALL decisions must be shown to have been taken
> in both directions (or the code must be identified and analysed
> explicitly).  It may be hard to write test conditions that evaluate
> all inlined decisions in both directions.

Fine -- we both agree that the more complicated the code is, the harder
it is to test and verify.  But I don't see how putting any of the above
array-assignment algorithms in a run-time system, as opposed to
generated code, makes things worse.  If anything, it should make things
easier, because there's only one copy of that algorithm to verify
(recall Robert's 1 machine instruction per day metric).  (A sketch of
the kind of copy loop in question appears at the end of this message.)

> My personal view (there is a majority, but no consensus on this at
> present) is that inserted code which includes decisions must be
> identified and verified, for Level B.  For Level A, multiple
> conditions would require additional verification.
>
> Inserted code that includes no branches will be verified with the
> application itself; it must be shown that it has been executed, but
> may not require specific tests.

I'm not sure what you mean by "inserted code".  Is it any different
from "generated code"?  Does it make any difference to what extent the
compiler is table driven?  I don't see why it should.

In any case, it seems to me that OF COURSE you have to verify all the
code in a safety-critical application -- and that OF COURSE includes
code from a run-time system, if any.  And you have to do your analysis
at the machine code level, because you don't trust the compiler.

- Bob

--
Change robert to bob to get my real email address.  Sorry.
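
Here is the sketch referenced above: a minimal Ada illustration of the
kind of copy loop being discussed.  The package and all names in it are
invented for this example, and no claim is made that any particular
compiler or run-time system implements slice assignment this way.

--  Sketch only: an overlapping-slice copy with its decision points
--  marked.  Each "if" test and each loop test is a decision that
--  Level B structural coverage must show evaluating both True and
--  False.  The caller is assumed to keep both slices within A'Range.

package Slice_Copy is
   type Byte is mod 2**8;
   type Byte_Array is array (Natural range <>) of Byte;
   procedure Copy (A : in out Byte_Array; Dst, Src, Len : Natural);
end Slice_Copy;

package body Slice_Copy is
   procedure Copy (A : in out Byte_Array; Dst, Src, Len : Natural) is
   begin
      if Dst <= Src then                       --  decision: copy direction
         for I in 0 .. Len - 1 loop            --  decision: loop test
            A (Dst + I) := A (Src + I);
         end loop;
      else
         for I in reverse 0 .. Len - 1 loop    --  decision: loop test
            A (Dst + I) := A (Src + I);
         end loop;
      end if;
   end Copy;
end Slice_Copy;

A word-at-a-time variant of the same routine would add further
decisions for the unaligned head and tail bytes, which is George's
point about coverage.  But whether that loop lives in the run-time
system or is expanded inline by the compiler, the decisions to be
covered are the same -- and in the run-time system there is only one
copy of them to verify.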