comp.lang.ada
* ACVC tests
@ 1996-04-27  0:00 Robert Dewar
  0 siblings, 0 replies; 7+ messages in thread
From: Robert Dewar @ 1996-04-27  0:00 UTC (permalink / raw)



Ken said


 "Why not read the older posts where I said I _did_ look, going so far as
 to quote from the ACVC documents I reviewed on the AdaIC server? Why not
 read my response to Mr. McCabe, discussing what I found? Why not answer
 my questions regarding what I _couldn't_ find, AFTER the review?
 
 If you have other documents you wish for me to review (and they, in
 fact, can be accessed in a reasonably easy fashion), please feel free to
 send me pointers to them. If not, please discontinue asking me to review
 something that you can't identify?"

Ken, I am suggesting looking at the *tests* themselves. The entire 1.11
suite is of course available for review, and so is the 2.0.1 suite, which
contains good examples of the new philosophy. Preliminary versions of
many 2.1 tests are also available for review.

There! identified! :-)

Actually, EVERYONE is encouraged to look at the new ACVC tests, and
comments are welcome. As Ken so often points out, we lack objective
criteria for some aspects of these tests, e.g. whether they are really
usage oriented. Comments from lots of users would be most helpful :-)





* ACVC tests
@ 1996-04-28  0:00 Robert Dewar
  1996-05-06  0:00 ` John McCabe
  0 siblings, 1 reply; 7+ messages in thread
From: Robert Dewar @ 1996-04-28  0:00 UTC (permalink / raw)



John McCabe said

"When I have some time, or when I can persuade my company to let me, I
will look at the ACVC tests, but the fact that I haven't so far must
not exclude me from expressing opinions on the effect of the tests,
based on the experience I have of using the products that have
(mysteriously) passed the tests!"

Ah, that's the point. You don't have this experience, since no products
have yet passed the new ACVC 2.1 regression suite, and so before making
judgments based on this kind of anecdotal experience, we at least have
to get the right anecdotes.

I was not suggesting a thorough review of the tests; just take a look
at a couple of the new ones. I think you will be surprised by the
significant difference in style.





* ACVC tests
@ 1996-04-28  0:00 Robert Dewar
  1996-05-06  0:00 ` John McCabe
  0 siblings, 1 reply; 7+ messages in thread
From: Robert Dewar @ 1996-04-28  0:00 UTC (permalink / raw)



Ken said

  "Well, I did. Unfortunately, I don't know what represents common usage.
  I only know how I (and my group) use Ada. Is it common usage? Beats me.
  This was exactly my comment: Without some sort of survey, etc. how can
  _any_ one person or small group of people know what represents common usage?

  There's another issue. Common usage today is based on Ada 83 (unless you
  wish to claim that most existing Ada code was developed using Ada 95).
  Therefore, even if such a survey was done, how could it predict how people
  will be using Ada 95 unique features?"

That's much too pessimistic, I think. These "unique features" were not
handed out "deus ex machina"; they were carefully designed with a pretty
good idea of how they are expected to be used. This means that you can
quite accurately predict how they will be used. Sure, you will miss some
interesting unintended or unforeseen uses, which can inform later versions
of the ACVC tests, but I think you can do quite a good job of figuring
out how features in the language will be used.

  > The resulting test is then
  > reviewed by the ACVC review team, which represents implementors and
  > users, to see how well it seems to match potential use.

  How does this team represent me, if they have never contacted me? Or have
  they contacted most users, and I represent some irrelevant minority?


I did not mean represent in a political sense. I meant that the user
viewpoint is represented. This committee was chosen through an open,
competitive selection process. I don't know whether you applied to be a
member, but you certainly could have. I also did not make the choices!

Of course the team has not "contacted most users", who would number in
the tens of thousands. In fact, the experience in the past has been that,
although the ACVC tests were generally available for review, it has been
extremely difficult to get ANY technical input from anyone. Even vendors
do not in general look at the tests in advance of the formal release
of the suite, and it is extremely rare to get any technical input from
users on the tests (I can't remember any example of such). Thus the
philosophy behind the committee was precisely to get at least *some*
users, implementors, and testers looking at the tests carefully in advance.

  > It is hard to establish objective criteria for how well one is doing
  > in this process. The review group certainly has not found any way of
  > pre-establishing what will turn out to be typical usage.

  Exactly. EXACTLY. I don't know how they _could_ do this.

As I say, I think this is an easier task than you suggest, and certainly,
as you acknowledge, the important thing is to ask the question. At the
very least, anyone who writes a test must be able to defend it on these
grounds, and many of the old ACVC 1.11 tests could clearly NOT be
defended as usage-oriented.







Thread overview: 7+ messages
1996-04-27  0:00 ACVC tests Robert Dewar
  -- strict thread matches above, loose matches on Subject: below --
1996-04-28  0:00 Robert Dewar
1996-05-06  0:00 ` John McCabe
1996-04-28  0:00 Robert Dewar
1996-05-06  0:00 ` John McCabe
1996-05-06  0:00   ` Robert Dewar
1996-05-07  0:00   ` Kevin D. Heatwole
