From: Ken Garlington
Subject: Re: Ada Core Technologies and Ada95 Standards
Date: 1996/04/04
Message-ID: <31639EA2.7AE2@lfwc.lockheed.com>
References: <00001a73+00002c20@msn.com> <828038680.5631@assen.demon.co.uk> <828127251.85@assen.demon.co.uk> <315FD5C9.342F@lfwc.lockheed.com> <3160EFBF.BF9@lfwc.lockheed.com> <828475321.18492@assen.demon.co.uk> <31623F5E.4EAE@lfwc.lockheed.com>
Organization: Lockheed Martin Tactical Aircraft Systems
Newsgroups: comp.lang.ada

Robert Dewar wrote:
>
> Ken, it continues to worry me that you could possibly think that a set
> of black box tests (no code coverage testing, no path testing) could
> possibly be sufficient as proof at any high level of assurance of a
> complex program.

Consider the assumptions that appear to be buried in this statement:

1. The ACVC is inherently limited to black box testing.

I could think of several ways to include other types of testing in an ACVC, e.g. a requirement to deliver source code and supporting data to an AVF for analysis, or a requirement that the vendor perform some specified level of analysis and deliver a summary of the results to the AVF as part of the certification process. However, since you've said that this is infeasible, I'll assume you're correct.

2. The ACVC does sufficient _black box_ testing to _support_ its stated goal (presumably, that users should _reasonably_ expect that the compiler will meet the language standard).

Is there some quantitative or qualitative measure to support this assumption? For example, are there requirements for vendors to report discoveries of noncompliance, so that trending measures can be done?

The ACVC may inherently be _insufficient_ to support the use of these compilers for critical systems. In fact, as you've noted previously, there's no known technique or combination of techniques sufficient to "prove" in the strictest sense that the software is correct. However, once we all agree on this statement, it seems to me that there are two choices available:

a. "We can't get there, so we have to live with the way things are."

b. "We can't get there, but we can continue to improve on where we are."

All I know is, I don't get to build high-assurance systems without choosing (b).

> Surely you do not mean to tell me that safety critical
> programs that you write are tested only to this extent (or for that
> matter that these programs trust the compiler!)

Absolutely not. And yet, even though we know that we cannot prove correctness through black-box testing, we continue not only to _do_ black-box testing, but to invest in tools, process changes, etc. to _measure_ and _improve_ our black-box testing. Why? It's a Beizer thing.

No, we don't trust the compilers, and so we analyze the object code and all of that. However, when we find only a few errors as part of that analysis, we are more confident of the final result than if we find a few hundred errors. Why? It's a Musa thing.
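For anyone who hasn't run across Musa: the intuition falls out of his basic execution-time model. What follows is just the textbook form of that model, quoted from memory, not data from our program. It says failure intensity declines linearly with the number of failures experienced:

   \lambda(\mu) = \lambda_0 (1 - \mu / \nu_0)

where \lambda_0 is the initial failure intensity, \nu_0 is the total number of failures the program would ever exhibit, and \mu is the number experienced so far. Fit the model to a fixed amount of analysis effort: observing a few failures instead of a few hundred drives the estimates of \lambda_0 and \nu_0 down, and with them the estimated residual failure intensity when we stop looking. Hence the greater confidence.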
As an intelligent man said very recently, the main thing is that the compiler not be the weakest link!

What's more, I always have this eerie feeling, as we run our various analyses, that there's some poor guy (maybe Mr. McCabe) running that same analysis on the same code, and finding the same errors. Just think: if we weren't having to isolate those errors, report them to the vendor, do workarounds in the code, etc. etc., we'd have more time to find and prevent errors in _our_ code!

> If you are indeed a serious potential customer for GNAT, contact
> support@gnat.com. (or stop by our booth at STC!)

We'll find out in October (assuming the JSF selection schedule holds) whether we're a serious potential customer. Of course, if/when we go to Alpha/VMS on F-22, we will definitely be a serious potential customer! I can't go to STC, but I'll ask someone to stop by and request a copy of your process manual. Better yet, if you're going to the TTCP meeting, maybe you could bring it with you?

Of course, that doesn't fully answer the real question. On several occasions, I've heard people say that "most compiler vendors" do a certain type of testing, or quality control, or something like that. How is this known? Do vendors share information about their processes with each other? Is there a minimum set of standards to which "decent" compiler vendors adhere?