From: john@assen.demon.co.uk (John McCabe)
Subject: Re: Ada Core Technologies and Ada95 Standards
Date: 1996/04/02
Message-ID: <828474655.17825@assen.demon.co.uk>
References: <00001a73+00002c20@msn.com> <828038680.5631@assen.demon.co.uk> <828127251.85@assen.demon.co.uk> <315FD5C9.342F@lfwc.lockheed.com>
Newsgroups: comp.lang.ada

Ken Garlington wrote:

>Since I often find myself expressing the same sentiments as Mr. McCabe, I
>thought I'd add my two cents:

>I can't disagree with anything in your response. However, when my company
>does testing, there are several things that happen. I suspect some of these
>happen in Mr. McCabe's shop as well:

>1. We have a requirements specification that uniquely identifies each
>requirement.

Yes. All cross-referenced through the architectural design, detailed design, code and tests.

>2. We have a test or set of tests which can be traced back to each requirement.

Yes. As above.

>3. We have consultations with the end user of the system to see if the tests
>are adequate, and reflect the usage of the system.

Yes (sort of). Ultimately our customer is the European Space Agency. Between
them and us, however, are Dornier (DE), another division of my company, and
Alcatel (FR). At the end of the day, therefore, we have four independent
judges of the suitability of our testing, and of our design at all stages.

Unfortunately Alcatel's reps know practically nothing about software, so they
are not much use in deciding whether our testing is adequate. They are
responsible for integrating our equipment with theirs. Unfortunately (again)
they don't appear to know how the equipment they've designed works, never
mind how it interfaces with ours!

The other division of my company, because of its responsibility for providing
a maintenance facility, takes a much greater interest in the software and is
probably the most difficult of the lot to please (except ESA - see later).
Dornier seem to be more interested in how the software looks and whether it
can be maintained easily - they provided the coding rules which forbid the
use of many extremely useful Ada features!

Finally, the ESA rep has some very strange ideas about software and gets very
confused. We spend hours explaining things to him, and he seems to take it
in, then he brings up exactly the same topic at the next meeting - even when
the topic has nothing to do with software. It's very frustrating.

>4. In addition to functional tests, we may also have other tests designed to
>meet certain criteria (particularly for safety-critical software). This criteria
>might include measures of statement/branch/path coverage and/or measures of data
>coverage.

We do this using LDRA Testbed, with minimum coverage levels of 100% on
statements and branches, and 70% on LCSAJs. I'm not sure exactly where those
figures are derived from, but they seem reasonable. The only problem here is
that we've found a few bugs in that tool as well!
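For anyone not familiar with those measures, here's a trivial made-up
example (the procedure and its names are invented, purely for illustration)
of why branch coverage is a stronger requirement than statement coverage:

   procedure Clip (X : in out Integer) is
   begin
      --  A single test with X = 150 executes every statement here
      --  (100% statement coverage), but exercises only the True
      --  outcome of the "if". 100% branch coverage needs a second
      --  test with X <= 100, and LCSAJ coverage is stricter still,
      --  since it looks at sequences of statements ending in a jump.
      if X > 100 then
         X := 100;
      end if;
   end Clip;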
>5. In addition to the use of "tests" in the narrow sense of throwing inputs
>at the software and looking at the outputs, we can also use other analytical tools
>with regard to software quality, such as peer reviews of the design and
>implementation of the compiler, static analysis tools, etc.

At the moment the compiler we use (TLD Ada for MIL-STD-1750A) has been
mandated by Dornier. We did have to justify our use of LDRA Testbed rather
than Dornier's preferred Logiscope.

>6. Not that it happens much in my systems, but if a deficiency were found in a
>product after release, a test that checks for that deficiency gets added back
>into the test suite.

Same here.

>It's probably just ignorance on my part about the ACVC process, but I don't
<..snip..>

>I know that NPL has a tool that they sell that tests Ada compilers for bugs, that
>apparently provides much more coverage than the ACVC. Why should such a tool
>exist outside of the validation/certification process?

If it provides more coverage than the ACVC, why isn't it used instead of, or
alongside, the ACVC?

Going back to point 3, I get the impression that the ACVC is inherently
limited by its need to be applicable to all Ada compilers. Based on the
methods you and I use, would it not be better to use the ACVC suite as a
basis for the compiler vendors' tests, and also to require the compiler
vendors to submit their own test suites for approval? I know this would
create a lot of work for both the vendors and those responsible for
validation, but I think in the long run it would put more emphasis on
improving the quality of Ada compilers.

Best Regards
John McCabe