From: dewar@cs.nyu.edu (Robert Dewar)
Subject: Re: Ada Core Technologies and Ada95 Standards
Date: 1996/04/27
Organization: Courant Institute of Mathematical Sciences
Newsgroups: comp.lang.ada

Ken asks:

   "If the writers of the ACVC tests are expected to write tests that
   reflect how Ada is really used, how do they gain this information?
   For that matter, when looking at a particular test, how do I judge
   whether that test reflects common Ada usage? I know how I use Ada,
   but how do I know whether my usage is common or 'marginal'?"

First of all, you are way ahead just by asking the question "How would
this feature be used in a real program?" That question was not even
asked under the test-generation philosophy of the earlier ACVC suites.

Very often the answer is pretty obvious and non-controversial, but it
is true that sometimes it is not. The process we follow is that tests
are reviewed by a review team composed of experienced users,
implementors, and testers. If all members of the review team agree that
a given test is realistically usage-oriented, that is certainly not a
guarantee that this is the case, but it is a useful indicator. There is
no other way to do things *at this stage*, since there is not enough
experience with many of these features to actually measure usage
patterns, though if there is an ACVC 2.2 to follow on, you could
certainly consider such measurement.

As I mentioned in an earlier message, our anecdotal experience with
GNAT is encouraging. In broad terms, we see that the new Ada 95 tests
have general testing profiles similar to those of the tests that come
from our users' code (and which, almost by definition, are
usage-oriented).

As for your last question, all I can say is that this is why we have
more than one person on the review team. It helps avoid individual
idiosyncratic usage influencing the suite (note that I said "helps",
not "avoids"; this is definitely not a purely objective process).

But as I said before, why not look at the 2.1 tests and see what you
think of them in comparison to the ACVC 1.11 tests? They are available
for your perusal, and comments are welcome!
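
By way of illustration (this is my own sketch, not an actual test from
the suite), a usage-oriented Ada 95 test tends to exercise a feature
the way application code would, for example tagged types and
dispatching, and then report a simple PASSED or FAILED:

   with Ada.Text_IO;
   procedure Usage_Sketch is
      --  Sketch only: tagged types and dispatching used the way a
      --  real program would use them, with a PASSED/FAILED report
      --  in the style of an ACVC test.
      package Shapes is
         type Shape is abstract tagged null record;
         function Area (S : Shape) return Float is abstract;

         type Square is new Shape with record
            Side : Float := 0.0;
         end record;
         function Area (S : Square) return Float;
      end Shapes;

      package body Shapes is
         function Area (S : Square) return Float is
         begin
            return S.Side * S.Side;
         end Area;
      end Shapes;

      Sq : Shapes.Square;
   begin
      Sq.Side := 3.0;
      --  A dispatching call through a class-wide view, as
      --  application code would make it.
      if Shapes.Area (Shapes.Shape'Class (Sq)) = 9.0 then
         Ada.Text_IO.Put_Line ("PASSED");
      else
         Ada.Text_IO.Put_Line ("FAILED");
      end if;
   end Usage_Sketch;

The point of the review process described above is to keep tests
looking like this, rather than like programs no real user would ever
write.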