From: Ken Garlington
Subject: Re: The Ada Compiler Evaluation System
Date: 1996-04-18
Message-ID: <31761BD5.7D11@lmtas.lmco.com>
References: <4l2nt1$p4k@ns1.sw-eng.falls-church.va.us>
Organization: Lockheed Martin Tactical Aircraft Systems
Newsgroups: comp.lang.ada

Philip Brashear wrote:
> ...as one who has led both ACVC (validation suite) and ACES (evaluation
> suite) development efforts and the application thereof to various Ada
> compilers, I can state that these are the interpretations that have gone
> into the efforts. If there is a need to change the direction of either,
> then there should be a discussion along the lines of "I suggest that the
> validation (or evaluation) effort be modified as follows:".

I suggest that the validation (or evaluation) effort be modified as
follows:

1. The ACVC should be modified to improve the quality of Ada compilers
   across all vendors (examples given below, and in previous posts).

2. Alternatively, something outside the ACVC should be introduced or
   modified to improve the quality of Ada compilers across all vendors.

> HOWEVER (he said, finally getting to his point), both the ACVC and the
> ACES are extremely useful.

Why? Is it because...

> ACVC usage by vendors is pretty much required (at least for those selling
> to the U.S. Government)

Being required is good, so long as we can state the extent to which the
ACVC adds to vendor quality.
So far, the only proposal is that, because of the ACVC, Ada compiler
quality may be better than Pascal compiler quality. That doesn't seem to
be a particularly valuable measure (e.g., as a basis for continuing
process improvement). Of course, this appears to be the only standard of
quality (and, clearly, there is disagreement as to whether this is even
_a_ standard of quality) that all compiler vendors must meet. Is it the
most efficient standard? Should there be other standards?

> ACES usage isn't.

Neither is ISO 9001. Or ISO 12207, the SEI CMM, the NPL tool, the
Rational TestMate tool. There is a plethora of tools and techniques that
_can_ be used. They all have one thing in common: nothing _requires_
their use. Each user has to apply (and reapply, and reapply) some or all
of these requirements, and pay for the privilege. As a result, none of
these techniques applies "across all vendors," and so none meets the
criterion behind my questions (now suggestions).

Frankly, I had hoped this thread would turn out to be a discussion of
_which_ techniques _should_ be proposed as a common basis for compiler
quality measurement and improvement. Instead, it has wormed around the
tired refrains of "ACVC is defined to be x, so it can't be anything
else," "it's too hard to do anything," "my company does good (but
unspecified) stuff," and "ACVC is getting better (with some unspecified
effect on compiler quality)."

> Some vendors are known to use the ACES; perhaps all do. There are
> organizations (such as mine) who are prepared to perform ACES
> evaluations and comparisons on a fee basis.

It is true that one answer to these suggestions is: "Each user has to
pay to get whatever quality he wants." However, this process seems very
inefficient. For example, once one user requests an evaluation, do you
make that evaluation available free of charge to subsequent users? If
not, then users are having to pay for work that has already been done.
If so, then I pity the poor user who had to pay for the rest of the user
base (in particular, since on more than one occasion _I've_ been that
user!).

Somehow, it makes sense for one organization to do NIST certification
for all users, but if you want (other) measures of quality, it doesn't
make sense for one organization to do those measures for all users, even
though there are now some fairly widely accepted common measures of
software quality.

If it's OK for users to pay for quality improvements, why not have users
pay for AVO support? If they want NIST certification, fine. If not, they
can use the money to pay for ISO 9001 certification, or more tool
functionality, or something else. "Users pay for what they want" appears
to be the stock answer for every (other) potential standard method of
measuring/improving compiler quality. Why not apply it to the ACVC?

-- 
LMTAS - "Our Brand Means Quality"