From: Ken Garlington
Subject: Re: Ada Core Technologies and Ada95 Standards
Date: 1996/04/22
Message-ID: <317B65B5.216A@lmtas.lmco.com>
References: <00001a73+00002c20@msn.com> <828038680.5631@assen.demon.co.uk> <828127251.85@assen.demon.co.uk> <315FD5C9.342F@lfwc.lockheed.com> <3160EFBF.BF9@lfwc.lockheed.com> <829851188.11037@assen.demon.co.uk>
Organization: Lockheed Martin Tactical Aircraft Systems
Newsgroups: comp.lang.ada

Robert Dewar wrote:
>
> Perhaps I can put it this way. Suppose a vendor has resources to do exactly
> one of the following two tasks:
>
> 1. Rewrite the loop optimizer so that all loops run faster
>
> 2. Rewrite the handling of static expressions so that one very obscure
> test in the ACVC suite passes -- a test which has never shown up in a
> customer program and is very unlikely *ever* to show up in customer
> programs -- under the condition that this rewriting is extensive and will
> likely cause regressions (in programs other than ACVC tests).
>
> Which do YOU think would contribute more to quality for most users?

I think the former would be better. However, as I understand the state of
affairs today, the vendor will do the latter, since he is mandated to pass
the ACVC.

> If we follow Ken's repeated request, and extend the scope of mandatory
> testing, then we distort things still more. That's the risk.

Actually, my request was to do any of the following:

1. Defend the status quo (ACVC is the best and only mandated measure of
quality).

2.
Define ways to change the ACVC -- add tests, delete tests, write different
tests -- that would improve quality, and are not part of the status quo.

3. Define alternative measures of quality -- either in addition to, or
instead of, the ACVC -- that should be mandated.

I am also willing for these to be "non-mandated" standards; that is,
there's no official requirement, but there is general consensus that any
vendor who fails to use these measures is not a quality vendor.

I will certainly agree with you that mandating measures that have a net
penalty on compiler quality is a bad idea. Would you agree with me that
mandating measures that have a net benefit for compiler quality is a good
idea?

> The DoD policy in this area is that requiring the ACVC conformance testing
> is as far as it is desirable to go for general requirements.

And yet, per your statements above, the ACVC can lead to "the condition
that this rewriting is extensive and will likely cause regressions..." It
sounds like we should be looking for an alternative that does not cause
this sort of problem, or perhaps discontinue mandated testing. Given your
issue with ACVC testing, is the DoD policy rational?

> Ken, if your
> procurement did not specify this requirement, all one can ask is why
> not? Do you really need the DoD to tell you what testing needs to be done?

An interesting question, given that DoD does in fact tell me what testing
needs to be done -- the ACVC, to be precise. Of course, I don't know that
the DoD is right to demand this testing, since I can't figure out if the
ACVC is the best test to demand, or the only test to demand. As far as why
additional measures aren't required, I certainly agree that "all one can
ask is why not?" That's what I'm doing.

Another interesting question is, "If it's the user's job to define the
measures to be taken, is there any measure that is sufficiently
general-purpose to always request, regardless of use?"
If the answer to that question is "yes," then there's a follow-on: "Since
this measure is always useful, why shouldn't users demand that the
compiler vendors do this measure routinely, and share results with the
users, rather than billing each user to do the same testing on the same
product?" ACVC cost is spread among all users. Should I want to pay my
share? Would users be willing to spread the cost for additional/alternate
measures?

> In the commercial marketplace, the market determines what testing is
> desirable (for instance a lot of the C++ commercial market is comfortable
> with no testing whatsoever), but in other contexts the commercial marketplace
> requires testing, e.g. many commercial COBOL customers will only use
> NIST-certified compilers. Why is it that DoD customers can't work the
> same way?

I thought I _was_ working that way. I'm part of the marketplace (contact
customer-support@tartan.com for verification). I'm trying to discuss what
the marketplace should be demanding of Ada vendors, particularly given
that the supposed market for Ada vendors is in high-quality systems.
You're the one who's hung up on only accepting what the DoD demands, not
me. If DoD decided to stop demanding ACVC testing tomorrow, I would still
be asking my questions.

If c.l.a. isn't a place for users to bring up ideas of this type, and
hopefully elicit feedback from users and vendors, where do you suggest
they be raised? Doesn't the commercial C and C++ community use the
Internet as a forum for such ideas? (As an aside, should we be using the
C and C++ community as the benchmark for responsible compiler users?)

Of course, in the commercial marketplace, there are other de facto
measures than NIST certification. For example, as I understand it, it is
almost impossible to sell a large transaction processing system without
measuring it against certain industry-standard benchmarks.
If a TPS vendor said, "I'll only do these tests if you pay me," the users
would run, not walk, to another vendor. Why can't Ada vendors work the
same way?

Furthermore, in the commercial marketplace, software vendors perform
surveys to discover demand, rather than waiting for the users to come to
them. Why can't Ada vendors work the same way? (But that's another
useless thread, so never mind.)

> By the way Ken, you ask how the DoD has determined that it is reasonable
> to generally require the ACVC testing and nothing more? I find it a bit
> odd that a DoD contractor should be asking this question to someone
> outside -- why not ask within the DoD, it's their policy!

I do ask DoD (and his brother RoD :), and will continue to ask DoD,
questions on Ada policy. A few personal observations:

1. DoD policy on Ada seems to be in flux at the moment, and answers of
this type seem to be waiting on the NRC study, etc.

2. Just because the DoD thinks ACVC is a good idea doesn't make it a good
idea.

3. Just because DoD thinks ACVC is a good idea doesn't make it the only
good idea.

4. Is DoD excluded from comp.lang.ada?

5. If you have a specific person in DoD who has the answers to my
questions, feel free to let me know.

> P.S. when I used critical in talking about non-critical banking applications,
> I was abbreviating not for mission-critical, but for safety-critical. Sorry
> for not being clear. But to clarify my point here. A banking application
> may well not care about ACES testing because they don't care about
> performance, and their own domain-specific testing (e.g. actual testing
> of the application in question) shows that a given compiler works
> adequately for their purposes.

Sounds like a good reason not to use ACES as a general-purpose measure,
assuming that it doesn't cover much of the Ada domains. (By the way, I
expect domain-specific measures to be required, even if a generally
acceptable measure exists.) Now, is there a measure that _is_ generally
useful?
-- LMTAS - "Our Brand Means Quality"