From: dewar@cs.nyu.edu (Robert Dewar)
Subject: ACVC tests
Date: 1996/04/28
Organization: Courant Institute of Mathematical Sciences
Newsgroups: comp.lang.ada

Ken said:

"Well, I did. Unfortunately, I don't know what represents common usage. I
only know how I (and my group) use Ada. Is it common usage? Beats me. This
was exactly my comment: without some sort of survey, how can _any_ one
person or small group of people know what represents common usage?

There's another issue. Common usage today is based on Ada 83 (unless you
wish to claim that most existing Ada code was developed using Ada 95).
Therefore, even if such a survey were done, how could it predict how people
will be using features unique to Ada 95?"

That's much too pessimistic, I think. These "unique features" were not
handed down deus ex machina; they were carefully designed with a pretty
good idea of how they are expected to be used. This means you can predict
quite accurately how they will be used. Sure, you will miss some
interesting unintended or unforeseen uses, which can inform later versions
of the ACVC tests, but I think you can do quite a good job of figuring out
how the features in the language will be used.

> The resulting test is then reviewed by the ACVC review team, which
> represents implementors and users, to see how well it seems to match
> potential use.

"How does this team represent me, if they have never contacted me? Or have
they contacted most users, and I represent some irrelevant minority?"

I did not mean represent in a political sense.
I meant that the user viewpoint is represented. This committee was chosen
by an open, competitive selection process. I don't know whether you applied
to be a member, but you certainly could have. I also did not make the
choices!

Of course the team has not "contacted most users", who would number in the
tens of thousands. In fact, the experience in the past has been that,
although the ACVC tests were generally available for review, it has been
extremely difficult to get ANY technical input from anyone. Even vendors do
not in general look at the tests in advance of the formal release of the
suite, and it is extremely rare to get any technical input from users on
the tests (I can't remember a single example of such). Thus the philosophy
behind the committee was precisely to get at least *some* users,
implementors, and testers looking at the tests carefully in advance.

> It is hard to establish objective criteria for how well one is doing in
> this process. The review group certainly has not found any way of
> pre-establishing what will turn out to be typical usage.

"Exactly. EXACTLY. I don't know how they _could_ do this."

As I say, I think this is an easier task than you think, and certainly, as
you acknowledge, the important thing is to ask the question. At the very
least, anyone who writes a test must be able to defend it on these grounds,
and many of the old ACVC 1.11 tests clearly could NOT be defended as
usage-oriented.