From: Ken Garlington
Newsgroups: comp.lang.ada
Subject: Re: Ada Core Technologies and Ada95 Standards
Date: 1996/04/02
Message-ID: <3160EFBF.BF9@lfwc.lockheed.com>
References: <00001a73+00002c20@msn.com> <828038680.5631@assen.demon.co.uk> <828127251.85@assen.demon.co.uk> <315FD5C9.342F@lfwc.lockheed.com>
Organization: Lockheed Martin Tactical Aircraft Systems
X-Mailer: Mozilla 2.01 (Macintosh; I; 68K)

Robert Dewar wrote:

> "1. We have a requirements specification that uniquely identifies each
> requirement."
>
> Yes, of course this was what was done for ACVC version 1 (ever read
> the implementors guide? I guess not! This was the requirements spec
> for the testing)

Yes, I did. It looks like no requirements specification I've ever used.

> "2. We have a test or set of tests which can be traced back to each
> requirement."
>
> Yes, of course this was done (don't you see the objectives in the test
> traced back to the requirements, you said you read the tests).

In the ACVC 1.0 sources I received, each test had an identifier. I did
not receive a cross-reference of that identifier to the requirements, as
I recall. (See the sketch further down for the sort of cross-reference I
mean.)

> "3. We have consultations with the end user of the system to see if the
> tests are adequate, and reflect the usage of the system."
>
> This is *especially* being done for the new ACVC 2 (I guess you are
> unfamiliar with the process here).

Since I expect to be one of the end users, my being unfamiliar with the
process would tend to support my statement, wouldn't it? :)

> Your comments on white box testing are not relevant for a general
> validation facility, though of course for a given compiler, these
> kind of procedures are followed.

1. Then, perhaps, Mr. McCabe and I want something other than a general
validation facility. How about standards that each vendor must meet for
development and test -- SEI, ISO, and/or something else?

2. Please identify the requirement/guide where I can verify that, for a
given compiler, these kinds of procedures are followed. Is there a
document in the public domain that describes the GNAT development and
testing process?

> It would not be practical to incorporate all test programs for all bugs
> found in all compilers into the ACVC (it would rapidly have tens of
> thousands of tests, and become completely unmanageable).

Probably true. I suppose the equivalent in my domain would be to take all
the tests for all the DoD embedded systems and put them in one place.
(Well, actually, we do that -- we deliver them to DoD. But they aren't
done in one facility.) On the other hand, those tests do exist -- for my
domain.

> For example, the GNAT regression tests now are larger than the whole
> ACVC test suite by a considerable factor.

As the man said, it's not the size of the test suite, it's what you do
with it that counts. :) Also, if the regression suite is that big, is
that a good or a bad thing?
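To make the cross-reference point above concrete, here is a minimal
sketch of what I mean by tracing tests back to requirements. The file
names, formats, and requirement IDs are made up for illustration -- this
is not the actual ACVC layout -- but the check itself is the whole idea:
every requirement must be claimed by at least one test, and any test that
cites a nonexistent requirement gets flagged.

    # Hypothetical inputs:
    #   requirements.txt -- one requirement ID per line, e.g. "REQ-3.2.1"
    #   tests.txt        -- "<test id> <req id> [<req id> ...]" per line,
    #                       e.g. "c32001a REQ-3.2.1"

    def load_requirements(path):
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    def load_test_map(path):
        mapping = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                if fields:
                    mapping[fields[0]] = set(fields[1:])
        return mapping

    reqs = load_requirements("requirements.txt")
    tests = load_test_map("tests.txt")

    claimed = set().union(*tests.values()) if tests else set()
    untested = sorted(reqs - claimed)   # requirements no test traces to
    unknown = sorted(claimed - reqs)    # citations of nonexistent requirements

    for r in untested:
        print("no test traces to requirement", r)
    for r in unknown:
        print("some test cites unknown requirement", r)

Run against the real requirement and test lists, an empty report from
something like this is what I would call a cross-reference.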
> Also, the effort of taking every bug and putting it into ACVC form is
> out of the question.

Again, a difference in philosophy: in my domain, _not_ having a
regression test for every bug found is out of the question.

> The Ada ACVC suite is by far the most comprehensive test suite ever
> generated for a programming language. The fact that it is still not
> truly comprehensive just serves to emphasize how complex compilers
> for modern large languages are.

I certainly know that the lines of code in the average Ada toolset rival
those of the average fighter aircraft. I am also painfully familiar with
the problem of continuously reducing the defect rate of such complex
systems.

However, if I go to my customer and say, "Our system's really complex,
and there's no way to develop a test suite that guarantees bug-free
operation, so you'll just have to live with the current defect rate,"
he'll nod knowingly through the first two statements, and cheerfully chop
my head off after the conclusion. That's the environment that I (and
Mr. McCabe, I suspect) live in.

As a result, I have to define a means to reduce that error rate over
time. I have to measure that error rate, to show that it is being
reduced. And (worst of all!) I have to share that error rate with my
customer. When the measured rate fails to meet my goals, I get clobbered.
When it meets or exceeds my goals, do I get flowers? No! But at least I
don't get clobbered.

I understand that commercial software can and does work differently. I
also know that talking about a set of different, competing companies as
though they were a single entity ("the Ada vendors") is naive.

> Ken, a minimal effort on your part invested in learning about the ACVC
> process would seem a worthwhile effort, and would certainly make your
> comments about the ACVC more informed.

I've read what AJPO puts out on the net, and that's about as much time as
I can devote to the subject. I'm much too busy tracking down tool bugs to
do much more than that. :)

> Have you even read John Goodenough's papers on the subject?

No. Has he read mine? (Sorry, couldn't resist.)
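P.S. Since I mentioned measuring the error rate: what I have in mind is
no more sophisticated than the sketch below. The release names, sizes,
and defect counts are placeholders, not figures from any real project;
the point is only that the rate is computed the same way every release
and checked against the goal.

    def defect_rate(defects, ksloc):
        # Defects reported per thousand source lines for one release.
        return defects / ksloc

    # (release, defects reported, size in KSLOC) -- illustrative only
    history = [("1.0", 42, 310.0), ("1.1", 35, 322.0), ("1.2", 20, 330.0)]

    rates = [(release, defect_rate(d, k)) for release, d, k in history]
    for release, rate in rates:
        print("%s: %.3f defects/KSLOC" % (release, rate))

    # Goal: each release's rate is no worse than the previous release's.
    improving = all(curr[1] <= prev[1] for prev, curr in zip(rates, rates[1:]))
    print("meets the goal" if improving else "time to get clobbered")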