comp.lang.ada
* Ada Core Technologies and Ada95 Standards
@ 1996-03-25  0:00 Kenneth Mays
  1996-03-25  0:00 ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Kenneth Mays @ 1996-03-25  0:00 UTC (permalink / raw)


Greetings,

I read that Ada Core Technologies is taking over the GNAT project. 
There is also talk of GNAT V3.04 becoming available on all platforms. 
For the latest news, check http://www.gnat.com and 
ftp://ftp.cs.nyu.edu/pub/gnat.

As far as the Air Force and Ada95 go, it depends on the agency. 
Engineers at Warner Robins AFB and a few other companies (who don't 
want their names spread) like Ada95 because it is easier to deal 
with than C/C++. Most computer scientists/programmers/engineers 
forget that nonprogrammers have to deal with the code - and they 
would like to be able to read that code (not the spaghetti stuff 
people used to write). Many programmers will confess that if you 
maintain code - it's nice to be able to read it (even your own).

Again, C++ is mainly a systems programming language - even though 
application programming is possible. C++ is an alternative to going 
back to the stone age and writing in assembler. Ada95 leans more 
towards application programming and embedded controllers. True, you 
can gripe over this. In personal talks, many contractors say they do 
like C++, but I find it foolish to write everything in C++. Ada95 
seems to fit the bill for basic productivity, with C++ routines for 
high-speed efficiency (much as machine/assembler code is patched 
into C/C++ programs).
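
To illustrate that mix concretely (a minimal sketch - the C routine
and its name are hypothetical, not from any real project), Ada95 can
bind to a separately compiled C routine directly:

with Interfaces.C;
with Ada.Text_IO;

procedure Mixed_Demo is
   --  Hypothetical C routine, written where raw speed matters:
   --  int fast_sum (int a, int b);
   function Fast_Sum (A, B : Interfaces.C.int) return Interfaces.C.int;
   pragma Import (C, Fast_Sum, "fast_sum");
begin
   Ada.Text_IO.Put_Line (Interfaces.C.int'Image (Fast_Sum (2, 3)));
end Mixed_Demo;

The Ada side keeps the readable, maintainable structure; only the hot
spot drops down to C.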

By the way, regarding a validated or standardized Ada95 compiler: I 
think it is smart to have a baseline standard that all compilers 
should meet. Basically, the baseline standard would require that all 
compilers comply with a MIL-STD or the equivalent (like ANSI 
V2.5/3.0 for C++). That way, anything you pick up at the bookstore 
on basic or advanced Ada95 programming will compile and work, as 
long as you stay away from platform-specific libraries (GNAT-Ada95 
or GCC works on this principle). I don't think this is too much to 
ask of any vendor (since the specs came out in Feb. 95). Without 
some sort of baseline standard, it is very hard to call a 
programming language **PORTABLE** in a cross-platform world.
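
As a tiny example of staying inside the baseline (nothing here but
the standard library, so any conforming compiler should accept it):

with Ada.Text_IO;

procedure Portable is
begin
   Ada.Text_IO.Put_Line ("Same source, any conforming Ada95 compiler.");
end Portable;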

-Ken

"Make it work first, then add the darn features!"

 





* Re: Ada Core Technologies and Ada95 Standards
  1996-03-25  0:00 Ada Core Technologies and Ada95 Standards Kenneth Mays
@ 1996-03-25  0:00 ` Robert Dewar
  1996-03-28  0:00   ` John McCabe
                     ` (10 more replies)
  0 siblings, 11 replies; 100+ messages in thread
From: Robert Dewar @ 1996-03-25  0:00 UTC (permalink / raw)


Ken notes/asks:
I read that Ada Core Technologies is taking over the GNAT project.
There is also talk of GNAT V3.04 becoming available on all platforms.
For the latest news, check http://www.gnat.com and
ftp://ftp.cs.nyu.edu/pub/gnat.

  Version 3.04 is already available for our customers, and we expect public
  binary releases to be made available soon. As for all platforms, I don't
  know quite what that might mean; after all, GCC supports a couple of hundred
  platforms, and we don't have GNAT on all of them yet :-) However, there
  are over 20 GNAT ports now, with more appearing!

By the way, regarding a validated or standardized Ada95 compiler: I
think it is smart to have a baseline standard that all compilers
should meet. Basically, the baseline standard would require that all
compilers comply with a MIL-STD or the equivalent (like ANSI V2.5/3.0
for C++). That way, anything you pick up at the bookstore on basic or
advanced Ada95 programming will compile and work, as long as you stay
away from platform-specific libraries (GNAT-Ada95 or GCC works on
this principle). I don't think this is too much to ask of any vendor
(since the specs came out in Feb. 95). Without some sort of baseline
standard, it is very hard to call a programming language **PORTABLE**
in a cross-platform world.

  Here things are in much better shape than you are aware of. There is
  and will be no MIL-STD for Ada 95, but, unlike the situation with C++,
  which is not standardized yet, there is both an ISO and an ANSI standard
  for Ada 95 (the 95 refers to the year of standardization, and it was
  early in 95 (we also made 94) that the standard was approved).

  There is also a well funded effort to create a comprehensive test suite
  which will be used as a basis for formal validation under the auspices
  of NIST. An initial version of this suite is already available, and
  has been used to validate several Ada 95 compilers, including GNAT.







* Re: Ada Core Technologies and Ada95 Standards
  1996-03-25  0:00 ` Robert Dewar
@ 1996-03-28  0:00   ` John McCabe
  1996-03-28  0:00     ` Robert Dewar
  1996-03-29  0:00   ` Applet Magic works great, sort of Vince Del Vecchio
                     ` (9 subsequent siblings)
  10 siblings, 1 reply; 100+ messages in thread
From: John McCabe @ 1996-03-28  0:00 UTC (permalink / raw)


dewar@cs.nyu.edu (Robert Dewar) wrote:

>  There is also a well funded effort to create a comprehensive test suite
>  which will be used as a basis for formal validation under the auspices
>  of NIST. An initial version of this suite is already available, and
>  has been used to validate several Ada 95 compilers, including GNAT.

Yes, but at the moment the validation suite only consists of those
parts of Ada that are common between Ada 83 and Ada 95, does it not?
The fact is that the full validation suite, including all the Ada 95
features, won't be available until sometime in 1997.

Hopefully the fact that the language has been divided into the Core
language and the specialised needs annexes will help to ensure that Ada
95 validation is superior to Ada 83 validation. From my experience,
Ada 83 validation didn't appear to prove much!


Best Regards
John McCabe <john@assen.demon.co.uk>






* Re: Ada Core Technologies and Ada95 Standards
  1996-03-28  0:00   ` John McCabe
@ 1996-03-28  0:00     ` Robert Dewar
  1996-03-29  0:00       ` John McCabe
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-03-28  0:00 UTC (permalink / raw)


John McCabe said:

"Yes. but at the moment the validation suite only consists of those
parts of Ada that are common between Ada 83 and Ada 95 does it not.
The fact is that the full validation suite including all the Ada 95
features won't be available until sometime in 1997.

Hopefully the fact that the language has been divided into the Core
language and the specialised needs annexes willhelp to ensure that Ada
95 validation is superior to Ada 83 validation. From my experience,
Ada 83 validation didn't appear to prove much!"

Wrong! The validation suite does contain tests for all parts of Ada 95
including all the special needs annexes, and this is true "at the moment"
(where DO these rumours come from? :-) It is certainly true that the
initial release of ACVC 2.0 and now 2.0.1 does not thoroughly cover
all new parts of the language, but as any Ada 95 compiler implementor
can tell you, they are definitely non-trivial, and any compiler passing
all or nearly all of these tests is a pretty complete Ada 95 compiler.

As for Ada 83 validation not proving much, if you feel this way, probably
you somehow had completely unrealistic ideas of what validation was
supposed to prove.

For example, some people, surprisingly, thought that validation would
guarantee full compliance. Gosh! We are all in the software business;
you would think that everyone knows that testing alone cannot guarantee
absence of bugs.

Still other people thought that validation would guarantee a usable
compiler, even more surprising! One would have thought that the widely
known fact that the Ada/Ed Semantic Specification of Ada was validated
would have tipped people off that this might not be the case (the
ACVC was never, and still is not, a performance analysis suite).

What does validation do? It makes sure that the vendor has implemented
the entire language without significant gaps, and that the vendor has
implemented large parts of the language (those parts tested) accurately.
As a result, it is a good guarantee that the vendor understands the
language completely and thoroughly.

Can a test suite do more than this? No! Can it do a better or worse job
of this? Sure. We think the ACVC 2.1 suite will turn out to be more
effective, because we have learned something in 12 years! In particular,
we (the ACVC team and the reviewers) believe that the orientation to
more user-oriented testing will be helpful in this regard (compare a
typical 2.0 test with the 1.11 tests, and you will see that the 2.0 tests
are much more like real programs -- the test writer testing a particular
feature thinks "how would this feature be used in a real program?", and
constructs a real program to answer that question).
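
To make the contrast concrete, here is a sketch in that user-oriented
style (an illustration only, not an actual ACVC test): rather than
poking at tagged types in isolation, use them the way a client program
would, through a dispatching call.

with Text_IO;

procedure Usage_Style_Test is
   package Shapes is
      type Shape is abstract tagged null record;
      function Area (S : Shape) return Float is abstract;
      type Square is new Shape with record
         Side : Float;
      end record;
      function Area (S : Square) return Float;
   end Shapes;

   package body Shapes is
      function Area (S : Square) return Float is
      begin
         return S.Side * S.Side;
      end Area;
   end Shapes;

   Sq : Shapes.Square := (Side => 3.0);
begin
   --  Dispatch through a class-wide view, as real client code would:
   if Shapes.Area (Shapes.Shape'Class (Sq)) = 9.0 then
      Text_IO.Put_Line ("PASSED");
   else
      Text_IO.Put_Line ("FAILED");
   end if;
end Usage_Style_Test;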

HOWEVER, although the suite will, we believe, be even more effective
than the 1.11 suite, no one would claim that it guarantees 100%
conformance or usability. If you hear anyone saying this, beware!
They do not know what they are talking about.

There are many ways to evaluate a compiler. GNAT is validated, but it has
also been in wide use by thousands of users, in all sorts of different
settings, from ingenious academic tests of the outer reaches of the
Ada 95 language, to large (>500,000 lines) real-world applications.
Frankly, if I had to choose, I think this real-world testing of GNAT
(or any other compiler) is probably worth more than the validation, but
I don't have to choose, and it is nice to have both. The validation
procedures against 2.0 certainly turned up some problems that are
non-obvious, and had escaped the vigilance of our thousands of users.
I would guess that all other Ada vendors have similar experiences.






* Re: Applet Magic works great, sort of
  1996-03-25  0:00 ` Robert Dewar
  1996-03-28  0:00   ` John McCabe
@ 1996-03-29  0:00   ` Vince Del Vecchio
  1996-03-29  0:00   ` Ada Core Technologies and Ada95 Standards steved
                     ` (8 subsequent siblings)
  10 siblings, 0 replies; 100+ messages in thread
From: Vince Del Vecchio @ 1996-03-29  0:00 UTC (permalink / raw)
  To: crispen

>>>>> On Fri, 29 Mar 1996 19:03:18 GMT, crispen@hiwaay.net (Bob Crispen) said:

> I've got Applet Magic working with very few problems on a Windows 95
> machine with LabTek Gnat 3.01.

> The only problems are:

> (b) Netscape 2.01 won't display any of the applets, but appletviewer
> will.  Suspect this is a problem with running it locally without httpd
> running or some other silly.  The errors are:

I don't know for sure, but if it is working under appletviewer,
I agree that it is _probably_ something in the way you have things set up.
Are you sure you have a CLASSPATH set in the environment from which you
are starting netscape?

> (c) You've got to compile things in a certain order (you sure get
> spoiled fast by gnat).  I did it the brute-force way, by compiling
> everything until everything compiled OK.  Since the Readme file is a
> couple of steps behind anyway, could we have a makefile?  (ducking for
> cover, due to a recent discussion here).

The easy way to do this is to register (adareg) all of the files.  This
is fast and you should only need to do it once.  After that you can
compile things in any order.  You really shouldn't need a Makefile.

> I'm reporting this here because:

> (a) I swear I couldn't find a "report bugs to" address in any of the
> documentation or on the webpage.

In small print in the release notes file, it says to report problems
to stt@inmet.com or maclaren@inmet.com.  CLA is probably _not_ the best
place for these.

> (b) Applet Magic is a bloody miracle.  To heck with the bugs, this is
> beta software and less than beta documentation.  Way to go,
> Intermetrics!

Glad you like it, and thanks for the encouragement!

-Vince Del Vecchio
vdelvecc@inmet.com





* Re: Ada Core Technologies and Ada95 Standards
  1996-03-29  0:00       ` John McCabe
@ 1996-03-29  0:00         ` Robert Dewar
  1996-04-01  0:00           ` Ken Garlington
  1996-03-31  0:00         ` Geert Bosch
  1 sibling, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-03-29  0:00 UTC (permalink / raw)


John McCabe said

A particular employee of a particular Ada compiler vendor (who you
probably know) in a presentation in Waterlooville England on 14th
March 1995 stated that ACVC 2.0 consisted of only the parts of Ada
that were common between Ada 83 and 95. I interpreted this to mean
just the core language but looking back on it I can understand that
this would also mean _parts_ of the specialised needs annexes.

  That particular employee did not know what he or she was talking
  about. You can look for yourself at 2.0: it has MANY tests for
  features in Ada 95 that are not in Ada 83, including all the
  annexes. Probably what confused either you or the sales person
  was that for transitional validations, you don't have to pass
  all these "new" tests. So you have to look at the resulting VSRs
  to understand the results.

I was obviously thinking of validation of Ada compilers in the same
way that _my_ software is validated - i.e. a full set of test cases
proving that _all_ requirements have been met. If I cannot prove this,
my software is not accepted by my customer.

  100% reliability via testing is only achievable for very simple tasks
  that can be fully specified formally, and for which the number of
  possible independent tests is finite.

  In the case of a compiler, first, it is extremely difficult to generate
  the starting point of a formal specification. No formal specification
  exists for Ada, C++ or most other modern complex languages.

  Second, it is trivial to see that no finite set of tests can be complete.
  For example, Ada requires that loops can nest arbitrarily deeply. Suppose
  that the suite has 1000 tests for loop nesting from 1 to 1000 levels.
  It might still be the case that 1001 levels blows up.

  Or, another example: all possible 64-bit IEEE constants must be
  accurately converted at compile time. It is obviously impossible to test
  this. After all, look at Intel, with all their resources -- they could
  not afford to thoroughly test the divide instruction on the Pentium.
  Just do the calculations: exhaustive testing here is out of the question.
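
  To make the nesting point concrete, a sketch (hypothetical, not an
  ACVC test) -- each additional level is, in effect, a distinct test
  case, and there are infinitely many of them:

  procedure Nesting_Probe is
     Count : Natural := 0;
  begin
     for I in 1 .. 2 loop
        for J in 1 .. 2 loop
           for K in 1 .. 2 loop
              --  ... a conforming compiler must accept such nesting
              --  to ANY depth, and no finite suite can try every depth.
              Count := Count + 1;
           end loop;
        end loop;
     end loop;
  end Nesting_Probe;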

With GNAT you've probably got one of the largest user bases of any
single compiler which can only help. I know GNAT is a very good
"product" (I noticed in a posting some time ago you said GNAT is not a
product but...) but the fact that it is available free of charge would
lead me to be more understanding about its faults. When I pay $40000
for a piece of software development kit, I expect it to work.

  It is a mistake to think that quality of software is proportional to
  price. There is plenty of free software that is good, and plenty
  of expensive software that is terrible, and vice versa.

  Equally, it is a mistake to think that the quality of software
  necessarily depends on the amount of resources invested. It is
  true that the amount of effort invested in GNAT, including the
  effort invested in GCC itself is huge, probably far more than
  for any other compiler, but that in itself is not a guarantee
  of quality, which depends on many factors. 

  In any case, quality speaks for itself, I always advise people
  to judge GNAT on quality not price. When it comes to choosing
  a compiler for a serious project, the only thing that makes
  sense is to choose the best tool for the job.

  P.S. I never said that GNAT was not a product, I said it was not a
  proprietary product! Big difference!

At the end of the day, I want validation to mean that the compiler can
produce working object code from Ada source - and by that I mean the
whole language - a subset is of no use to me. If that is not true of
the compiler then I think that the term used to describe this
examination should not be validation.

  Well, that's a matter of terminology. There is no way to be sure that
  any compiler is 100% bug-free -- I certainly never met a compiler for
  a complex language that met this criterion. What you can ask for is
  a compiler that is reliable enough that it is not the weak link in
  the chain. NIST, incidentally, prefers the term certification to
  describe this test-suite-oriented testing of compilers.

Robert Dewar
Ada Core Technologies






* Re: Ada Core Technologies and Ada95 Standards
  1996-03-25  0:00 ` Robert Dewar
  1996-03-28  0:00   ` John McCabe
  1996-03-29  0:00   ` Applet Magic works great, sort of Vince Del Vecchio
@ 1996-03-29  0:00   ` steved
  1996-03-29  0:00     ` Applet Magic works great, sort of Bob Crispen
  1996-04-03  0:00   ` Ada Core Technologies and Ada95 Standards Robert I. Eachus
                     ` (7 subsequent siblings)
  10 siblings, 1 reply; 100+ messages in thread
From: steved @ 1996-03-29  0:00 UTC (permalink / raw)


In ... (Robert Dewar) writes:

>  Version 3.04 is already available for our customers, and we expect public
>  binary releases to be made available soon. As for all platforms, I don't
>  know quite what that might mean; after all, GCC supports a couple of hundred
>  platforms, and we don't have GNAT on all of them yet :-) However, there
>  are over 20 GNAT ports now, with more appearing!
>

Which 20?
Where do I find out?
Is there a list of supported/unsupported GNAT ports?

Steve Doiel






* Re: Ada Core Technologies and Ada95 Standards
  1996-03-28  0:00     ` Robert Dewar
@ 1996-03-29  0:00       ` John McCabe
  1996-03-29  0:00         ` Robert Dewar
  1996-03-31  0:00         ` Geert Bosch
  0 siblings, 2 replies; 100+ messages in thread
From: John McCabe @ 1996-03-29  0:00 UTC (permalink / raw)


dewar@cs.nyu.edu (Robert Dewar) wrote:
<..snip...>

>Wrong! The validation suite does contain tests for all parts of Ada 95
>including all the special needs annexes, and this is true "at the moment"
>(where DO these rumours come from? :-)

A particular employee of a particular Ada compiler vendor (who you
probably know) in a presentation in Waterlooville England on 14th
March 1995 stated that ACVC 2.0 consisted of only the parts of Ada
that were common between Ada 83 and 95. I interpreted this to mean
just the core language but looking back on it I can understand that
this would also mean _parts_ of the specialised needs annexes.

>It is certainly true that the
>initial release of ACVC 2.0 and now 2.0.1 does not thoroughly cover
>all new parts of the language, but as any Ada 95 compiler implementor
>can tell you, they are definitely non-trivial, and any compiler passing
>all or nearly all of these tests is a pretty complete Ada 95 compiler.

Mmmm. Seems a bit contradictory ("all parts of Ada 95" and "not all new
parts").

>As for Ada 83 validation not proving much, if you feel this way, probably
<..snip..>

I was obviously thinking of validation of Ada compilers in the same
way that _my_ software is validated - i.e. a full set of test cases
proving that _all_ requirements have been met. If I cannot prove this,
my software is not accepted by my customer.

In the tools and utilities market this level of proof does not seem to
be required. I design and implement systems for satellite instrument
control. The software will have one user. I cannot just put a first
version of the software onto a satellite and wait for the user to send
me bug reports, because by that time the chances are that the whole
system could have failed and $250M worth of satellite is lying at the
bottom of the Pacific Ocean.

>What does validation do? It makes sure that the vendor has implemented
>the entire language without significant gaps, and that the vendor has
>implemented large parts of the language (those parts tested) accurately.
>As a result, it is a good guarantee that the vendor understands the
>language completely and thoroughly.

There is a large difference between implementing the entire language
and implementing it accurately. I find the number of faults with basic
language handling in my present compiler rather disturbing.

>Can a test suite do more than this? No! Can it do a better or worse job
>of this? Sure. We think the ACVC 2.1 suite will turn out to be more
>effective, because we have learned something in 12 years! In particular,
>we (the ACVC team and the reviewers) believe that the orientation to
>more user-oriented testing will be helpful in this regard (compare a
>typical 2.0 test with the 1.11 tests, and you will see that the 2.0 tests
>are much more like real programs -- the test writer testing a particular
>feature thinks "how would this feature be used in a real program?", and
>constructs a real program to answer that question).

That's good and basically the way it should be.

>HOWEVER, although the suite will, we believe, be even more effective
>than the 1.11 suite, no one would claim that it guarantees 100%
>conformance or usability. If you hear anyone saying this, beware!
>They do not know what they are talking about.

I can accept that it is very difficult to prove a tool such as this
completely but when I buy a _validated_ Ada compiler it is because I
want to compile _valid_ _Ada_ code, not a subset of it!

>There are many ways to evaluate a compiler. GNAT is validated, but it has
<..snip..>

I agree entirely with what you say here. It is obvious that the more
users a compiler has, the more likely bugs are to be found and sorted
out early on. That has been a problem with our compiler (MIL-STD-1750A
version) because the user base is tiny. What is more disturbing,
however, is that for every bug that seems to get fixed, the new release
seems to contain even more!

With GNAT you've probably got one of the largest user bases of any
single compiler which can only help. I know GNAT is a very good
"product" (I noticed in a posting some time ago you said GNAT is not a
product but...) but the fact that it is available free of charge would
lead me to be more understanding about its faults. When I pay $40000
for a piece of software development kit, I expect it to work.

At the end of the day, I want validation to mean that the compiler can
produce working object code from Ada source - and by that I mean the
whole language - a subset is of no use to me. If that is not true of
the compiler then I think that the term used to describe this
examination should not be validation.



Best Regards
John McCabe <john@assen.demon.co.uk>






* Applet Magic works great, sort of
  1996-03-29  0:00   ` Ada Core Technologies and Ada95 Standards steved
@ 1996-03-29  0:00     ` Bob Crispen
  0 siblings, 0 replies; 100+ messages in thread
From: Bob Crispen @ 1996-03-29  0:00 UTC (permalink / raw)


I've got Applet Magic working with very few problems on a Windows 95
machine with LabTek Gnat 3.01.

The only problems are:

(a) java (v. 1.0) won't run newlife:

% java newlife.test
Can't find class newlife/test

(b) Netscape 2.01 won't display any of the applets, but appletviewer
will.  Suspect this is a problem with running it locally without httpd
running or some other silly.  The errors are:

(for bigcalc): java.lang.ClassCircularityError
Applet LifeRect: exception: java.lang.NullPointerException
Applet mancala: Game can't start: exception: interfaces.Java

(c) You've got to compile things in a certain order (you sure get
spoiled fast by gnat).  I did it the brute-force way, by compiling
everything until everything compiled OK.  Since the Readme file is a
couple of steps behind anyway, could we have a makefile?  (ducking for
cover, due to a recent discussion here).

I'm reporting this here because:

(a) I swear I couldn't find a "report bugs to" address in any of the
documentation or on the webpage.

(b) Applet Magic is a bloody miracle.  To heck with the bugs, this is
beta software and less than beta documentation.  Way to go,
Intermetrics!

Bob Crispen
crispen@hiwaay.net







* Re: Ada Core Technologies and Ada95 Standards
  1996-03-29  0:00       ` John McCabe
  1996-03-29  0:00         ` Robert Dewar
@ 1996-03-31  0:00         ` Geert Bosch
  1996-04-01  0:00           ` Robert Dewar
  1 sibling, 1 reply; 100+ messages in thread
From: Geert Bosch @ 1996-03-31  0:00 UTC (permalink / raw)


In article <828127251.85@assen.demon.co.uk> John McCabe wrote:
`` I was obviously thinking of validation of Ada compilers in the same
   way that _my_ software is validated - i.e. a full set of test cases
   proving that _all_ requirements have been met. ''

You *cannot* prove this for complex software like Ada compilers.
Creating a full set of test cases ``proving'' that an Ada compiler
conforms to the language standard is impossible. For one, it would
mean proving that the compiler has no bugs. Another point is that:
  a) All requirements should be known
  b) All requirements should be specified unambiguously
  c) Everybody agrees on b) and on the one and only possible interpretation

An example of how irrelevant this is: when two parties on an
Ethernet segment want to send each other a message, it is not certain
they will succeed in finite time, because of the Ethernet protocol.
On the other hand, billions of dollars would be lost if, from now on,
all Ethernet packets kept colliding whenever possible.
You can't prove it won't happen, but you can *rely* on it not happening.
These are different things.

`` If I cannot prove this, my software is not accepted by my customer. ''
As a customer, I would not accept any software engineer who thinks
he can prove his software works correctly in the case of something as
complex as an Ada compiler.

Regards,
   Geert Bosch
--
E-Mail: geert@sun3.iaf.nl     *** As far as we know, there have not been ***
 Phone: +31-53-4303054        ** any undetected failures in our software. **






* Re: Ada Core Technologies and Ada95 Standards
  1996-03-29  0:00         ` Robert Dewar
@ 1996-04-01  0:00           ` Ken Garlington
  1996-04-01  0:00             ` Robert Dewar
                               ` (2 more replies)
  0 siblings, 3 replies; 100+ messages in thread
From: Ken Garlington @ 1996-04-01  0:00 UTC (permalink / raw)


Robert Dewar wrote:

> John McCabe said
> 
> I was obviously thinking of validation of Ada compilers in the same
> way that _my_ software is validated - i.e a full set of test cases
> proving that _all_ requirements have been met. If I cannot prove this,
> my software is not accepted by my customer.
> 
>   100% reliability via testing is only achievable for very simple tasks
>   that can be fully specified formally, and for which the number of
>   possible independent tests is finite.

Since I often find myself expressing the same sentiments as Mr. McCabe, I
thought I'd add my two cents:

I can't disagree with anything in your response. However, when my company
does testing, there are several things that happen. I suspect some of these
happen in Mr. McCabe's shop as well:

1. We have a requirements specification that uniquely identifies each
requirement.

2. We have a test or set of tests which can be traced back to each requirement.

3. We have consultations with the end user of the system to see if the tests
are adequate, and reflect the usage of the system.

4. In addition to functional tests, we may also have other tests designed to
meet certain criteria (particularly for safety-critical software). These criteria
might include measures of statement/branch/path coverage and/or measures of data
coverage.

5. In addition to the use of "tests" in the narrow sense of throwing inputs
at the software and looking at the outputs, we can also use other analytical tools 
with regard to software quality, such as peer reviews of the design and 
implementation of the compiler, static analysis tools, etc.

6. Not that it happens much in my systems, but if a deficiency is found in a
product after release, a test that checks for that deficiency gets added back
into the test suite (a sketch of such a test follows this list).
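
As an illustration of item 6 (the deficiency, its number, and all names
here are made up), such a directed regression test can be tiny:

with Text_IO;

--  Hypothetical regression test for deficiency report DR-042: the
--  midpoint of a wide integer range was once computed with an overflow.
procedure Test_DR_042 is
   function Midpoint (Lo, Hi : Integer) return Integer is
   begin
      return Lo + (Hi - Lo) / 2;  --  the fixed form; (Lo + Hi) / 2 could overflow
   end Midpoint;
begin
   if Midpoint (Integer'Last - 1, Integer'Last) = Integer'Last - 1 then
      Text_IO.Put_Line ("DR-042 PASSED");
   else
      Text_IO.Put_Line ("DR-042 FAILED");
   end if;
end Test_DR_042;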

It's probably just ignorance on my part about the ACVC process, but I don't
get that same sense of rigor in the ACVC design. A lot of what's known about
good processes for software testing (as documented by Beizer and others)
isn't apparent from what little I've heard about the ACVC, and from reading the
old ACVC 1.0 tests.

I know that NPL sells a tool that tests Ada compilers for bugs, one that
apparently provides much more coverage than the ACVC. Why should such a tool
exist outside of the validation/certification process?

Ada provides some wonderful technology for building dependable systems, but
(and this sounds harsher than I intend) it's not clear that the compiler vendors
always "practice what they preach." It would seem to me that one of the most
dependable systems of a comparable size would be an Ada compiler, since Ada 
encourages the development of dependable software. The presentation of the Verdix
Ada vs. C++ compiler at TRI-Ada aside, does this generally appear to be the case?

Maybe it's just perception that's at issue here. When someone says, "ACVC doesn't
say anything about usability for a particular purpose," I understand why that's
said, but my heart sinks nonetheless. Why not an attitude of, "Even though
we can't guarantee 100% correctness, we will by God use every tool at our
disposal to identify deficiencies"? As it stands now, I get to do that particular 
task (in parallel with every other user who needs a reliable product)...





* Re: Ada Core Technologies and Ada95 Standards
  1996-03-31  0:00         ` Geert Bosch
@ 1996-04-01  0:00           ` Robert Dewar
  1996-04-01  0:00             ` Mike Young
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-01  0:00 UTC (permalink / raw)


John McCabe said

"`` I was obviously thinking of validation of Ada compilers in the same
   way that _my_ software is validated - i.e a full set of test cases
   proving that _all_ requirements have been met. ''"

I and others pointed out that this is obviously false for complex
software of any kind. If John really thinks he can provide this
proof via test cases, he is deluding himself (and his clients).

What is more interesting is that even a VERY simple program cannot
be proved correct by simple input-output tests. Consider the following
program to add the numbers from 1 to 10.

with Text_IO; use Text_IO;

procedure s is
   sum : integer;
begin
   for j in 1 .. 10 loop
      sum := sum + j;
   end loop;

   Put_Line (Integer'Image (sum));
end;


This works fine on my machine. Exhaustive testing (one case!!) proves
that it works, but of course it has a fatal problem, and may not work
tomorrow.
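
(To spell out the fatal problem for anyone who missed it: sum is never
initialized, so the single test "passes" only when the junk value in
sum happens to be zero. The repair is one line:

   sum : integer := 0;

and with that change the program is correct on any machine -- which is
exactly what the one input-output test could never demonstrate.)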






* Re: Ada Core Technologies and Ada95 Standards
  1996-04-01  0:00           ` Robert Dewar
@ 1996-04-01  0:00             ` Mike Young
  1996-04-03  0:00               ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Mike Young @ 1996-04-01  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> John McCabe said
> 
> "`` I was obviously thinking of validation of Ada compilers in the same
>    way that _my_ software is validated - i.e a full set of test cases
>    proving that _all_ requirements have been met. ''"
> 
> I and others pointed out that this is obviously false for complex
> software of any kind. If John really thinks he can provide this
> proof via test cases, he is deluding himself (and his clients).
> 
> What is more interesting is that even a VERY simple program cannot
> be proved correct by simple input-output tests. Consider the following
> program to add the numbers from 1 to 10.
> 
> with Text_IO; use Text_IO;
> 
> procedure s is
>    sum : integer;
> begin
>    for j in 1 .. 10 loop
>       sum := sum + j;
>    end loop;
> 
>    Put_Line (Integer'Image (sum));
> end;
> 
> This works fine on my machine. Exhaustive testing (one case!!) proves
> that it works, but of course it has a fatal problem, and may not work
> tomorrow.

=======
I don't see the connection... Are you saying validation is 
happenstance, that the brightest minds can't come up with a test suite 
to test all expected behavior? Your "test" seems reasonable enough to 
validate that integer vars are initialized to zero by default; ehhh, at 
least the few times you ran the test case...

Mike.





* Re: Ada Core Technologies and Ada95 Standards
  1996-04-01  0:00           ` Ken Garlington
@ 1996-04-01  0:00             ` Robert Dewar
  1996-04-02  0:00               ` John McCabe
  1996-04-02  0:00               ` Ken Garlington
  1996-04-02  0:00             ` John McCabe
  1996-04-10  0:00             ` Robert Dewar
  2 siblings, 2 replies; 100+ messages in thread
From: Robert Dewar @ 1996-04-01  0:00 UTC (permalink / raw)


Ken Garlington says

"
It's probably just ignorance on my part about the ACVC process, but I don't
get that same sense of rigor in the ACVC design. A lot of what's known about
good processes for software testing (as documented by Beizer and others)
isn't apparent from what little I've heard about the ACVC, and from reading the
old ACVC 1.0 tests."

Yup, you are right on target (it is ignorance on your part!).

"1. We have a requirements specification that uniquely identifies each
requirement.
"

Yes, of course this was what was done for ACVC version 1 (ever read
the implementors guide? I guess not! This was the requirements spec
for the testing)

"2. We have a test or set of tests which can be traced back to each requirement."

Yes, of course this was done (don't you see the objectives in the tests
traced back to the requirements? You said you read the tests).

"3. We have consultations with the end user of the system to see if the tests
are adequate, and reflect the usage of the system.

"

This is *especially* being done for the new ACVC 2 (I guess you are
unfamiliar with the process here).

Your comments on white box testing are not relevant for a general
validation facility, though of course for a given compiler, these
kinds of procedures are followed.

It would not be practical to incorporate all test programs for all bugs
found in all compilers into the ACVC (it would rapidly have tens of
thousands of tests, and become completely unmanageable). For example,
the GNAT regression tests now are larger than the whole ACVC test
suite by a considerable factor. Also, the effort of taking every
bug and putting it into ACVC form is out of the question.

The Ada ACVC suite is by far the most comprehensive test suite ever
generated for a programming language. The fact that it is still not
truly comprehensive just serves to emphasize how complex compilers
for modern large languages are.

Ken, a minimal effort on your part invested in learning about the ACVC
process would seem worthwhile, and would certainly make your
comments about the ACVC more informed. Have you even read John
Goodenough's papers on the subject?






* Re: Ada Core Technologies and Ada95 Standards
  1996-04-01  0:00           ` Ken Garlington
  1996-04-01  0:00             ` Robert Dewar
@ 1996-04-02  0:00             ` John McCabe
  1996-04-02  0:00               ` Robert A Duff
  1996-04-10  0:00             ` Robert Dewar
  2 siblings, 1 reply; 100+ messages in thread
From: John McCabe @ 1996-04-02  0:00 UTC (permalink / raw)


Ken Garlington <garlingtonke@lfwc.lockheed.com> wrote:

>Since I often find myself expressing the same sentiments as Mr. McCabe, I
>thought I'd add my two cents:

>I can't disagree with anything in your response. However, when my company
>does testing, there are several things that happen. I suspect some of these
>happen in Mr. McCabe's shop as well:

>1. We have a requirements specification that uniquely identifies each
>requirement.

Yes. All cross referenced through Architectural Design, Detailed
Design, code and tests.

>2. We have a test or set of tests which can be traced back to each requirement.

Yes. As above.

>3. We have consultations with the end user of the system to see if the tests
>are adequate, and reflect the usage of the system.

Yes (sort of). Ultimately our customer is the European Space Agency.
In between them and us, however, are Dornier (DE), another division of
my company, and Alcatel (FR). At the end of the day, therefore, we have
4 independent judges of the suitability of our testing, and of our
design at all stages.

Unfortunately, Alcatel's reps know practically nothing about software,
so they are not of much use in deciding whether our testing is adequate.
They are responsible for integrating our equipment with theirs.
Unfortunately (again), they don't appear to know how the equipment
they've designed works, never mind how it interfaces with ours!

The other division of my company, because of its responsibility for
providing a maintenance facility, takes a much greater interest in S/W
and is probably the most difficult of the lot to please (except ESA -
see later).

Dornier seem to be more interested in how the software looks and
whether it can be maintained easily - they provided the coding rules
which forbid the use of many extremely useful Ada features!

Finally, the ESA rep has some very strange ideas about software and
gets very confused. We spend hours explaining things to him, and he
seems to take it in, then he brings up exactly the same topic at the
next meeting - even when the topic has nothing to do with software.
It's very frustrating.

>4. In addition to functional tests, we may also have other tests designed to
>meet certain criteria (particularly for safety-critical software). These criteria
>might include measures of statement/branch/path coverage and/or measures of data
>coverage.

We do this by using LDRA Testbed, requiring a minimum of 100%
statement and branch coverage, and 70% coverage of LCSAJs. I'm not sure
exactly where those figures are derived from, but they seem
reasonable. The only problem here is that we've found a few bugs in
that tool as well!

>5. In addition to the use of "tests" in the narrow sense of throwing inputs
>at the software and looking at the outputs, we can also use other analytical tools 
>with regard to software quality, such as peer reviews of the design and 
>implementation of the compiler, static analysis tools, etc.

At the moment the compiler we use (TLD Ada for MIL-STD-1750A) has been
mandated by Dornier. We did have to provide justification on our use
of LDRA Testbed rather than Dornier's preferred Logiscope.

>6. Not that it happens much in my systems, but if a deficiency is found in a
>product after release, a test that checks for that deficiency gets added back
>into the test suite.

Same here.

>It's probably just ignorance on my part about the ACVC process, but I don't

<..snip..>

>I know that NPL sells a tool that tests Ada compilers for bugs, one that
>apparently provides much more coverage than the ACVC. Why should such a tool
>exist outside of the validation/certification process?

If it provides more coverage than the ACVC, why isn't it used
instead of, or alongside, the ACVC?

Going back to point 3, I get the impression that the ACVC is inherently
limited by its need to be applicable to all Ada compilers. Based on
the methods you and I use, would it not be better to use the ACVC
suite as a basis for the compiler vendors' tests, and also to require
the compiler vendors to submit their own test suites for approval? I
know this would create a lot of work for both the vendors and those
responsible for validation, but I think in the long run it would put
more emphasis on improving the quality of Ada compilers.

Best Regards
John McCabe <john@assen.demon.co.uk>






* Re: Ada Core Technologies and Ada95 Standards
  1996-04-01  0:00             ` Robert Dewar
@ 1996-04-02  0:00               ` John McCabe
  1996-04-02  0:00               ` Ken Garlington
  1 sibling, 0 replies; 100+ messages in thread
From: John McCabe @ 1996-04-02  0:00 UTC (permalink / raw)


dewar@cs.nyu.edu (Robert Dewar) wrote:

>iKen Garlington says

<..snip..>

>Your comments on white box testing are not relevant for a general
>validation facility, though of course for a given compiler, these
>kinds of procedures are followed.

>It would not be practical to incorporate all test programs for all bugs
>found in all compilers into the ACVC (it would rapidly have tens of
>thousands of tests, and become completely unmanageable). For example,
>the GNAT regression tests now are larger than the whole ACVC test
>suite by a considerable factor. Also, the effort of taking every
>bug and putting it into ACVC form is out of the question.

You are bound to find that a large number of bugs from different
compiler vendors cover essentially the same features of the language.
In cases like these in particular, extra effort should be put into
covering those areas.

Yes, it may be unmanageable to cover all bugs, but I think as many
tests for these bugs as possible should be incorporated. It would take
a lot of effort, but in the long run I believe it would be manageable -
and it would benefit the Ada community as a whole.



Best Regards
John McCabe <john@assen.demon.co.uk>






* Re: Ada Core Technologies and Ada95 Standards
  1996-04-02  0:00               ` Ken Garlington
@ 1996-04-02  0:00                 ` John McCabe
  1996-04-02  0:00                   ` Robert A Duff
  1996-04-02  0:00                   ` Robert Dewar
  1996-04-10  0:00                 ` Robert Dewar
  1 sibling, 2 replies; 100+ messages in thread
From: John McCabe @ 1996-04-02  0:00 UTC (permalink / raw)



<..snip..>

Sorry to sound like a "me too" type, but your experience with customers
appears to match my own very closely.

As I said before, if I can't prove my software meets all of its
requirements, my customer will not accept it.


Best Regards
John McCabe <john@assen.demon.co.uk>






* Re: Ada Core Technologies and Ada95 Standards
  1996-04-02  0:00                 ` John McCabe
@ 1996-04-02  0:00                   ` Robert A Duff
  1996-04-02  0:00                   ` Robert Dewar
  1 sibling, 0 replies; 100+ messages in thread
From: Robert A Duff @ 1996-04-02  0:00 UTC (permalink / raw)


In article <828475321.18492@assen.demon.co.uk>,
John McCabe <john@assen.demon.co.uk> wrote:
>As I said before, if I can't prove my software meets all of its
>requirements, my customer will not accept it.

I think different people are using the word "prove" differently.
Clearly, no set of tests can "prove" beyond a doubt that every
combination of inputs is handled correctly, given a moderate-to-complex
program.  If you disagree with that, then I think you're deeply confused.
On the other hand, it is possible to enumerate classes of inputs, and
"prove" (beyond a "reasonable" doubt?) that each such class is handled
correctly.  And it's perfectly reasonable to complain about some
particular test suite, that it doesn't cover a whole class of inputs
that one thinks it ought to.

I suggest that we all use "prove" in a more mathematical sense -- which
rules out any proof by testing.  Even in the mathematical sense, one has
to worry about whether the "proof" is truly correct.

- Bob





* Re: Ada Core Technologies and Ada95 Standards
  1996-04-02  0:00             ` John McCabe
@ 1996-04-02  0:00               ` Robert A Duff
  1996-04-16  0:00                 ` John McCabe
  0 siblings, 1 reply; 100+ messages in thread
From: Robert A Duff @ 1996-04-02  0:00 UTC (permalink / raw)


In article <828474655.17825@assen.demon.co.uk>,
John McCabe <john@assen.demon.co.uk> wrote:
>Finally, the ESA rep has some very strange ideas about software and
>gets very confused.

Perhaps they're confused about the use of the term "proof".  ;-)

> We spend hours explaining things to him, and he
>seems to take it in, then he brings up exactly the same topic at the
>next meeting - even when the topic has nothing to do with software.
>It's very frustrating.

>We do this by using LDRA Testbed, requiring a minimum of 100%
>statement and branch coverage, and 70% coverage of LCSAJs. I'm not sure
>exactly where those figures are derived from, but they seem
>reasonable. The only problem here is that we've found a few bugs in
>that tool as well!

Surely that's not the *only* problem.  Surely the test cases fail to
cover every combination of requirements, and therefore some bugs slip
through.

>If it provides more coverage than the ACVC, why isn't it used
>instead of, or alongside, the ACVC?

I don't think that the ACVC ever intended to eliminate all bugs from Ada
compilers.  It intends to prevent compilers that fail to implement whole
portions of the language, or implement extensions.  It does a pretty
good job of that, but could do better.  But the intent, I think, is that
Ada compiler vendors test their own products properly.  Whether they do
so or not is up to the marketplace.

>Going back to point 3, I get the impression that the ACVC is inherently
>limited by its need to be applicable to all Ada compilers.

Yes.

>... Based on
>the methods you and I use, would it not be better to use the ACVC
>suite as a basis for the compiler vendors' tests, and also to require
>the compiler vendors to submit their own test suites for approval?

At least *some* of the compiler vendors' tests include proprietary code
from their customers, and they simply cannot release that code.

- Bob





* Re: Ada Core Technologies and Ada95 Standards
  1996-04-02  0:00                 ` John McCabe
  1996-04-02  0:00                   ` Robert A Duff
@ 1996-04-02  0:00                   ` Robert Dewar
  1996-04-03  0:00                     ` Ken Garlington
  1 sibling, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-02  0:00 UTC (permalink / raw)


John McCabe said

"As I said before, if I can't prove my software meets all of its
requirements, my customer will not accept it."

That sounds reasonable for high-assurance software, but earlier you
talked about using testing as the basis for this proof, which makes
me think that your standard of proof is rather low.

But maybe I am mistaken: are you in fact using formal specifications
and formal methods to guarantee the correctness of the software,
reasoning at the generated code level (this is normal procedure
for safety-critical software)?

Obviously any customer demands proof at some level that the software
meets all the requirements, but the standards of proof vary a lot,
from an essentially informal testing process to a rigorous formal
demonstration of correctness.

Compilers are no different from any other software in this respect,
but certainly the level of proof is not at the level of rigorous
formal proof (we don't even know how to practically create a formal
specification of complex languages in the first place -- an EEC-
sponsored project to produce a formal definition of Ada 83 resulted
in a couple of large telephone books of formulae, but still did not
cover the whole language, or form the basis for a practical proof
of correctness of an Ada 83 compiler, since it did not tackle some
of the hard parts).







* Re: Ada Core Technologies and Ada95 Standards
  1996-04-01  0:00             ` Robert Dewar
  1996-04-02  0:00               ` John McCabe
@ 1996-04-02  0:00               ` Ken Garlington
  1996-04-02  0:00                 ` John McCabe
  1996-04-10  0:00                 ` Robert Dewar
  1 sibling, 2 replies; 100+ messages in thread
From: Ken Garlington @ 1996-04-02  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> "1. We have a requirements specification that uniquely identifies each
> requirement.
> "
> Yes, of course this was what was done for ACVC version 1 (ever read
> the implementors guide? I guess not! This was the requirements spec
> for the testing)

Yes, I did. It looks like no requirements specification I've ever used.

> "2. We have a test or set of tests which can be traced back to each requirement."
> 
> Yes, of course this was done (don't you see the objectives in the tests
> traced back to the requirements? You said you read the tests).

In the ACVC 1.0 sources I received, each test had an identifier. I did not
receive a cross-reference of that identifier to the requirements, as I recall.

> "3. We have consultations with the end user of the system to see if the tests
> are adequate, and reflect the usage of the system. "
> 
> This is *especially* being done for the new ACVC 2 (I guess you are
> unfamiliar with the process here).

Since I expect to be one of the end users, my being unfamiliar with the process
would tend to support my statement, wouldn't it? :)

> Your comments on white box testing are not relevant for a general
> validation facility, though of course for a given compiler, these
> kinds of procedures are followed.

1. Then, perhaps, Mr. McCabe and I want something other than a general
validation facility. How about standards that each vendor must meet for
development and test -- SEI, ISO, and/or something else?

2. Please identify the requirement/guide where I can verify that, for
a given compiler, these kind of procedures are followed. Is there a
document in the public domain that describes the GNAT development and
testing process?

> It would not be practical to incorporate all test programs for all bugs
> found in all compilers into the ACVC (it would rapidly have tens of
> thousands of tests, and become completely unmanageable).

Probably true. I suppose the equivalent in my domain would be to take all
the tests for all the DoD embedded systems and put them in one place.
(Well, actually, we do that -- we deliver them to DoD. But they aren't done
in one facility).

On the other hand, those tests do exist -- for my domain.

> For example,
> the GNAT regression tests now are larger than the whole ACVC test
> suite by a considerable factor.

As the man said, it's not the size of the test suite, it's what you
do with it that counts. :)

Also, if the regression suite is that big, is that a good or a bad thing?

> Also, the effort of taking every
> bug and putting it into ACVC form is out of the question.

Again, a difference in philosophy: In my domain, _not_ having a regression
test for every bug found is out of the question.

> The Ada ACVC suite is by far the most comprehensive test suite ever
> generated for a programming language. The fact that it is still not
> truly comprehensive just serves to emphasize how complex compilers
> for modern large languages are.

I certainly know that the lines of code in the average Ada toolset
rival those of the average fighter aircraft. I am also painfully
familiar with the problem of continuously reducing the defect rate
of such complex systems.

However, if I go to my customer, and say: "Our systems are really complex,
and there's no way to develop a test suite that guarantees bug-free
operation, so you'll just have to live with the current defect rate,"
he'll nod knowingly through the first two statements, and cheerfully
chop my head off after the conclusion. That's the environment that
I (and Mr. McCabe, I suspect) live in.

As a result, I have to define a means to reduce that error rate over
time. I have to measure that error rate, to show that it is being
reduced. And, (worst of all!) I have to share that error rate with my
customer. When the measured rate fails to meet my goals, I get clobbered.
When it meets or exceeds my goals, do I get flowers? No! But, at least
I don't get clobbered.

I understand that commercial software can and does work differently. I
also know that talking about a set of different, competing companies as
though they were a single entity ("the Ada vendors") is also naive.

> Ken, a minimal effort on your part invested in learning about the ACVC
> process would seem worthwhile, and would certainly make your
> comments about the ACVC more informed.

I've read what AJPO puts out on the net, and that's about as much time
as I can devote to the subject. I'm much too busy tracking down tool
bugs to do much more than that :)

> Have you even read John Goodenough's papers on the subject?

No. Has he read mine? (Sorry, couldn't resist.)





* Re: Ada Core Technologies and Ada95 Standards
  1996-04-02  0:00                   ` Robert Dewar
@ 1996-04-03  0:00                     ` Ken Garlington
  1996-04-04  0:00                       ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-03  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> But maybe I am mistaken: are you in fact using formal specifications
> and formal methods to guarantee the correctness of the software,
> reasoning at the generated code level (this is normal procedure
> for safety-critical software)?

Well, not for DoD software! MIL-STD-882B says nothing about formal
reasoning. My reading of DO-178A says it can be used, but doesn't
appear to assume it is "normal procedure" by any stretch.

Not that I'm saying formal reasoning _shouldn't_ be normal procedure,
but I suspect that, at least in the U.S., it's not used to the extent
you imply.

> Obviously any customer demands proof at some level that the software
> meets all the requirements, but the standards of proof vary a lot,
> from an essentially informal testing process to a rigorous formal
> demonstration of correctness.

And I went to the trouble to agree with you that you could not prove the
correctness of software. Rats, misled again! :)

But seriously, since you appear to accept the use of the phrase "proof at
some level" as a reasonable phrase, let's return to what I think was the
topic at hand: What should be a reasonable minimum "standard of proof" for
the quality of an Ada compiler: a commercial product, but one that is often
advertised as suitable for use in building high-integrity software?

So far, you've mentioned that:

  1. The ACVC is not an adequate standard of proof, because of inherent
     limitations of a general certification test suite. I'm not sure I fully
     accept that rationale, but as we've already agreed, you're far more
     knowledgeable of the ACVC than I, so I'll have to bow to your expertise.

  2. The real standard of proof should come from the actions of the individual
     vendors. In particular, you said that GNAT has a development and test
     process, and that this process was probably common to other vendors.
     I requested a description of this process. Is this an unreasonable demand
     from potential customers like Mr. McCabe and myself? I know that any
     potential customer can ask for my company's process at any time (in fact,
     we have booths at conferences like the Software Technology Conference
     _promoting_ our process).





* Re: Ada Core Technologies and Ada95 Standards
  1996-03-25  0:00 ` Robert Dewar
                     ` (2 preceding siblings ...)
  1996-03-29  0:00   ` Ada Core Technologies and Ada95 Standards steved
@ 1996-04-03  0:00   ` Robert I. Eachus
  1996-04-03  0:00   ` Ken Garlington
                     ` (6 subsequent siblings)
  10 siblings, 0 replies; 100+ messages in thread
From: Robert I. Eachus @ 1996-04-03  0:00 UTC (permalink / raw)


In article <3160EFBF.BF9@lfwc.lockheed.com> Ken Garlington <garlingtonke@lfwc.lockheed.com> writes:

  > In the ACVC 1.0 sources I received, each test had an identifier. I
  > did not receive a cross-reference of that identifier to the
  > requirements, as I recall.

   The naming of ACVC 1.x tests was such that they could be directly
traced to the requirements.  I don't remember a reverse mapping ever
being published, but I remember spending a couple hours with the
Implementors Guide, a list of the tests in ACVC 1.4 and Multics Emacs
to create one.  For DPS-6 Ada, we used it to trace failed tests to
requirements--it was often easier than figuring out what the test was
supposed to be testing--and for both DPS-6 Ada and Ada/SIL we kept a
copy with annotations as to which tests were non-applicable or
edited, and why those tests were ignored or modified.

   (I have to admit that a LOT of the comments in the documents were
of the form "STUPID TEST: ..."  For example one test had two programs,
one of which opened a temporary file and attempted to write its name
to another file.  The other program tried to open the temporary file,
using that name, and read the contents.  It took a lot of back and
forth to agree that the "right" behavior on our system was to raise
USE_ERROR for NAME of a temporary file, but most of that was over
NAME_ERROR vs USE_ERROR.  Of course the test did not anticipate the
possibility that the call to NAME would fail.)

  > Since I expect to be one of the end users, my being unfamiliar
  > with the process would tend to support my statement, wouldn't it?

   You are planning to validate a compiler?  (This is not a sarcastic
question.  We often put compilers through the ACVC process here
because there are potentially significant differences between the
system as validated and the configuration being used on a project.
Such an informal validation takes a few man-days, including
comparisons to the original VSR.)

  > 1. Then, perhaps, Mr. McCabe and I want something other than a
  > general validation facility. How about standards that each vendor
  > must meet for development and test -- SEI, ISO, and/or something
  > else?

   We also have several collections of domain specific tests we often
run.   If your project is going to make heavy use of generics or
tasking, or you are concerned with how the run-time interacts with
Unix signals, it is much better to find out up front whether the
compiler suits your needs on your project.  A centralized validation
facility can never do that for you, they just set a reasonable base
level. 
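
   (For concreteness, here is a minimal sketch of what one such
domain-specific probe might look like -- the names are invented, and
a real suite would be far larger.  The idea is simply to activate a
batch of tasks and check that all of them ran before the enclosing
block exits:

      with Ada.Text_IO;
      procedure Tasking_Probe is
         protected Tally is
            procedure Bump;
            function Total return Natural;
         private
            N : Natural := 0;
         end Tally;
         protected body Tally is
            procedure Bump is
            begin
               N := N + 1;
            end Bump;
            function Total return Natural is
            begin
               return N;
            end Total;
         end Tally;
         task type Worker;
         task body Worker is
         begin
            Tally.Bump;
         end Worker;
      begin
         declare
            Pool : array (1 .. 100) of Worker;  -- all 100 activate here
         begin
            null;  -- block cannot exit until every Worker terminates
         end;
         Ada.Text_IO.Put_Line ("Workers run:" & Natural'Image (Tally.Total));
      end Tasking_Probe;

   A probe like this tells you nothing a validation suite cares
about, but it tells you quickly whether tasking on your configuration
behaves well enough to bother continuing.)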

  Robert Dewar said:

  > > It would not be practical to incorporate all test programs for all bugs
  > > found in all compilers into the ACVC (it would rapidly have tens of
  > > thousands of tests, and become completely unmanageable).

  Very true, but some regression tests have made it into the ACVC
suite because they were indicative of more general problems.  In any
case, every decent compiler vendor maintains a suite of regression
tests, and runs it regularly.  The only cost--if you never find any
bugs--is the disk space and the weekend computer cycles.  But you do
find bugs, and often for the reason that gives the suite its name.
(Usually this happens when a module is rewritten from scratch because
it contains too many bug fixes.  Some of the fixes get lost in the
rewrite. In some cases this happens because the original fix got
buried under other fixes--remember why you rewrote this.)
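
   (To make that concrete, a sketch of what one entry in such a suite
might look like -- the tracking number and names here are invented
for illustration, not taken from any real bug database:

      -- TN_1234: reduced from a hypothetical report that a compiler
      -- once mishandled a generic formal object with a default value.
      with Ada.Text_IO;
      procedure TN_1234 is
         generic
            Count : in Integer := 3;
         package Counter is
            function Value return Integer;
         end Counter;
         package body Counter is
            function Value return Integer is
            begin
               return Count;
            end Value;
         end Counter;
         package C is new Counter;  -- instantiate with the default
      begin
         if C.Value = 3 then
            Ada.Text_IO.Put_Line ("PASSED");
         else
            Ada.Text_IO.Put_Line ("FAILED");
         end if;
      end TN_1234;

   Once a reduced case like this exists, it costs nothing to keep it
in the weekend run forever.)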

  > Also, if the regression suite is that big, is that a good or a bad
  > thing? 

   Neither, it is a symptom of a mature compiler.  Some of the tests
come from actual bug fixes, many more are unit tests written to verify
additional features at each build.  Again, there is no reason not to
run a test again and again, once you have gone to the trouble of
writing (and verifying) it.

   > > Also, the effort of taking every
   > > bug and putting it into ACVC form is out of the question.

   > Again, a difference in philosophy: In my domain, _not_ having a
   > regression test for every bug found is out of the question.

   The AVO tries to put in regression tests for every bug in the ACVC
suite.  (And in some cases the ARG slaps them down.  If an ACVC test
tests for something harmful, then no test is the right position.  The
ARG created the category Pathology just for cases like testing
'TERMINATED for a task outside the scope of its master.) But the
individual compiler vendors maintain their own regression suites.

  > However, if I go to my customer, and say: "Our system's really
  > complex, and there's no way to develop a test suite that
  > guarantees bug-free operation, so you'll just have to live with
  > the current defect rate," he'll nod knowingly through the first
  > two statements, and cheerfully chop my head off after the
  > conclusion. That's the environment that I (and Mr McCabe, I
  > suspect) live in.
  
  How about you change the conclusion to "We can keep reducing the
defect rate, but we can never get it to zero."  You might get to keep
your head.

  > As a result, I have to define a means to reduce that error rate
  > over time. I have to measure that error rate, to show that it is
  > being reduced. And, (worst of all!) I have to share that error
  > rate with my customer. When the measured rate fails to meet my
  > goals, I get clobbered.  When it meets or exceeds my goals, do I
  > get flowers? No! But, at least I don't get clobbered.

   Exactly.

  > I understand that commercial software can and does work
  > differently. I also know that talking about a set of different,
  > competing companies as though they were a single entity ("the Ada
  > vendors") is also naive.

   Hmmm... SOME of the commercial software markets work much
differently. Others, especially those where systems are safety
critical, look very much the same.

--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-03-25  0:00 ` Robert Dewar
                     ` (3 preceding siblings ...)
  1996-04-03  0:00   ` Ada Core Technologies and Ada95 Standards Robert I. Eachus
@ 1996-04-03  0:00   ` Ken Garlington
  1996-04-04  0:00     ` Robert Dewar
  1996-04-05  0:00   ` Robert I. Eachus
                     ` (5 subsequent siblings)
  10 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-03  0:00 UTC (permalink / raw)


Robert I. Eachus wrote:
> 
>    The naming of ACVC 1.x tests was such that they could be directly
> traced to the requirements.  I don't remember a reverse mapping ever
> being published...

In my domain, a reverse mapping is always done by the test developers,
to demonstrate effective test coverage. Anyway, since ACVC isn't 
intended to measure product quality, I don't know that it matters...

>   > Since I expect to be one of the end users, my being unfamiliar
>   > with the process would tend to support my statement, wouldn't it?
> 
>    You are planning to validate a compiler?

Well, possibly, but by "end user" I meant a user of an Ada compiler. See
Mr. McCabe's post on what "end user" means for us. For example, even though
they aren't conducting the software formal qualification test, we have pilots
and flight-line maintainers involved in the development of our tests.

>    We also have several collections of domain specific tests we often
> run.   If your project is going to make heavy use of generics or
> tasking, or you are concerned with how the run-time interacts with
> Unix signals, much better to find out up front if the compiler suits
> your needs on your project up front.  A centralized validation
> facility can never do that for you, they just set a reasonable base
> level.

Unfortunately, I can never tell the criteria by which "reasonable"
coverage was established. However, as I noted above, I'll take it on
faith that the current ACVC is as good as it can be.

>   Very true, but some regression tests have made it into the ACVC
> suite because they were indicative of more general problems.  In any
> case, every decent compiler vendor maintains a suite of regression
> tests, and runs it regularly.

Sounds like something I would find in a set of guidelines for a "good"
development process for an Ada compiler. Does such a set of guidelines
exist? Is the existence of a suite of regression tests sufficient to
determine if a compiler vendor (and by extension a compiler product)
is "decent"? Is it necessary, for that matter?

> The only cost--if you never find any
> bugs--is the disk space and the weekend computer cycles.  But you do
> find bugs, and often for the reason that gives the suite its name.

Why would the cost of regression testing be so cheap, and the cost of
other kinds of testing be so dear that they are unmanageable for
a system of the complexity of an Ada compiler?

>   > Also, if the regression suite is that big, is that a good or a bad
>   > thing?
> 
>    Neither, it is a symptom of a mature compiler.

This sounds like you are saying, "the bigger the regression test suite,
the more mature the compiler." Does this mean that if compiler A has
10,000 tests, and compiler B has 100,000 tests, that compiler B is more
mature?

>    > Again, a difference in philosophy: In my domain, _not_ having a
>    > regression test for every bug found is out of the question.
> 
>    The AVO tries to put in regression tests for every bug in the ACVC
> suite.

How many attempts were made in the last three years to add a regression
test to the ACVC? How does that compare to the list of known bugs in Ada
compilers? I guess I'm still operating from ignorance: Dr. Dewar seemed
to think that it wasn't possible to try to put in regression tests for
every bug, but you're saying this is what is attempted? Perhaps we're
talking about different types of bugs?

>   > However, if I go to my customer, and say: "Our system's really
>   > complex, and there's no way to develop a test suite that
>   > guarantees bug-free operation, so you'll just have to live with
>   > the current defect rate," he'll nod knowingly through the first
>   > two statements, and cheerfully chop my head off after the
>   > conclusion. That's the environment that I (and Mr McCabe, I
>   > suspect) live in.
> 
>   How about you change the conclusion to "We can keep reducing the
> defect rate, but we can never get it to zero."  You might get to keep
> your head.

Do we believe that there is something in the works within the Ada
community to keep reducing the defect rate? Where is this effort documented?
If there is such a plan, discussing it would be a welcome change from
the "can't be done" answer I've been given so far in this thread!

>   > As a result, I have to define a means to reduce that error rate
>   > over time. I have to measure that error rate, to show that it is
>   > being reduced. And, (worst of all!) I have to share that error
>   > rate with my customer. When the measured rate fails to meet my
>   > goals, I get clobbered.  When it meets or exceeds my goals, do I
>   > get flowers? No! But, at least I don't get clobbered.
> 
>    Exactly.

Exactly what? Are you saying that this is the process commonly used by
Ada tool vendors?

>   > I understand that commercial software can and does work
>   > differently. I also know that talking about a set of different,
>   > competing companies as though they were a single entity ("the Ada
>   > vendors") is also naive.
> 
>    Hmmm... SOME of the commercial software markets work much
> differently. Others, especially those where systems are safety
> critical, look very much the same.

Do you believe the Ada compiler vendor market looks more like "some"
markets, or like a "safety critical" market? I think that question is
pretty much at the heart of my grousing, and I suspect at the heart of
Mr. McCabe's statements as well.

> 
> --
> 
>                                         Robert I. Eachus
> 
> with Standard_Disclaimer;
> use  Standard_Disclaimer;
> function Message (Text: in Clever_Ideas) return Better_Ideas is...




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-01  0:00             ` Mike Young
@ 1996-04-03  0:00               ` Robert Dewar
  0 siblings, 0 replies; 100+ messages in thread
From: Robert Dewar @ 1996-04-03  0:00 UTC (permalink / raw)


"I don't see the connection... Are you saying validation is
happen-stance, that the brightest minds can't come up with a test suite
to test all expected behavior? Your "test" seems reasonable enough to
validate that integer var's are initialized to zero by default; ehhh, at
least the few times you ran the test case...
"

what an odd statement ...

of COURSE it is impossible to "test all expected behavior" in any complex
program. Anyone who thinks otherwise is suffering from serious delusions,
and I trust they never have to write a program that I have to rely on.

As for my test showing that integer variables are initialized to zero
by default, it CANNOT POSSIBLY SHOW THIS!!!

Even if you run it 1,000,000 times, the 1,000,001st time can disagree,
and of course since integer variables are NOT required to be initialized
in Ada to zero or anything else, this is perfectly legitimate behavior.
That is the point of the example.
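
For concreteness, here is a minimal sketch of the kind of test under
discussion (the names are invented):

    with Ada.Text_IO;
    procedure Init_Check is
       X : Integer;  -- deliberately not initialized
    begin
       if X = 0 then
          Ada.Text_IO.Put_Line ("zero this time");
       else
          Ada.Text_IO.Put_Line ("not zero this time");
       end if;
    end Init_Check;

Reading X here is a bounded error in Ada 95: the language allows any
value to turn up, and even permits an exception, so a million runs
printing "zero this time" prove nothing about run 1,000,001.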

Maybe a :-) was missing from your message??





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-03  0:00                     ` Ken Garlington
@ 1996-04-04  0:00                       ` Robert Dewar
  1996-04-04  0:00                         ` Ken Garlington
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-04  0:00 UTC (permalink / raw)


Ken Garlington says:

  1. The ACVC is not an adequate standard of proof, because of inherent
     limitations of a general certification test suite. I'm not sure I fully
     accept that rationale, but as we've already agreed, you're far more
     knowledgeable of the ACVC than I, so I'll have to bow to your expertise.

Ken, it continues to worry me that you could possibly think that a set
of black box tests (no code coverage testing, no path testing) could
be sufficient as proof at any high level of assurance of a
complex program. Surely you do not mean to tell me that safety critical
programs that you write are tested only to this extent (or for that
matter that these programs trust the compiler!)

  2. The real standard of proof should come from the actions of the individual
     vendors. In particular, you said that GNAT has a development and test
     process, and that this process was probably common to other vendors.
     I requested a description of this process. Is this an unreasonable demand
     from potential customers like Mr. McCabe and myself? I know that any
     potential customer can ask for my company's process at any time (in fact,
     we have booths at conferences like the Software Technology Conference
     _promoting_ our process).

We have described our process in a number of forums. It is a long story
which I am not about to post here! In short we have an extensive test
suite that we run every night and before any modification to the system
occurs, but there is much more to the story than that. If you are indeed
a serious potential customer for GNAT, contact support@gnat.com.
(or stop by our booth at STC!)





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-04  0:00                       ` Robert Dewar
@ 1996-04-04  0:00                         ` Ken Garlington
  1996-04-05  0:00                           ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-04  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> Ken, it continues to worry me that you could possibly think that a set
> of black box tests (no code coverage testing, no path testing) could
> be sufficient as proof at any high level of assurance of a
> complex program.

Consider the assumptions that appear to be buried in this statement:

1. The ACVC is inherently limited to black box testing.

   I could think of several ways to include other types of testing in an
   ACVC, e.g.  a requirement to deliver source code and supporting
   data to an AVF for analysis, or a requirement that the vendor do some
   specified level of analysis and deliver a summary of the results to
   the AVF as part of the certification process. However, since you've said
   that this is infeasible, I'll assume you're correct.
   
2. The ACVC does sufficient _black box_ testing to _support_ its stated goal
   (presumably, that users should _reasonably_ expect that the compiler
   will meet the language standard.)

   Is there some quantitative or qualitative measure to support this
   assumption? For example, are there requirements for vendors to report
   discoveries of noncompliance, so trending measures can be done?

The ACVC may inherently be _insufficient_ to support the use of these
compilers for critical systems. In fact, as you've noted previously, there's
no known technique or combination of techniques sufficient to "prove" in the
strictest sense that the software is correct. However, once we all agree to
this statement, it seems to me that there are two choices available:

   a. "We can't get there, so we have to live with the way things are."

   b. "We can't get there, but we can continue to improve on where we are."

All I know is, I don't get to build high-assurance systems without choosing
(b).

> Surely you do not mean to tell me that safety critical
> programs that you write are tested only to this extent (or for that
> matter that these programs trust the compiler!)

Absolutely not. And yet, even though we know that we cannot prove correctness
through black-box testing, we continue to not only _do_ black-box testing,
but continue to invest in tools, process changes, etc. to _measure_ and
_improve_ our black-box testing. Why? It's a Beizer thing.

No, we don't trust the compilers, and so we analyze the object code and all
of that. However, when we find only a few errors as a part of that analysis,
we are more confident of the final result than if we find a few hundred
errors. Why? It's a Musa thing.

As an intelligent man said very recently, the main thing is that the compiler
not be the weakest link!

What's more, I always have this eerie feeling, as we run our various analyses,
that there's some poor guy (maybe Mr. McCabe) running that same analysis on the
same code, and finding the same errors. Just think, if we weren't having to
isolate those errors, and report them to the vendor, and do workarounds in the
code, etc. etc., we'd have more time to find/prevent errors in _our_ code!

> If you are indeed a serious potential customer for GNAT, contact
> support@gnat.com. (or stop by our booth at STC!)

We'll find out in October (assuming the JSF selection schedule holds) if we're
a serious potential customer. Of course, if/when we go to Alpha/VMS on F-22,
we will definitely be a serious potential customer!

I can't go to STC, but I'll ask someone to stop by and request a copy of your
process manual. Better yet, if you're going to the TTCP meeting, maybe you
could bring it with you?

Of course, that doesn't fully answer the real question. On several occasions,
I've heard people say, "most compiler vendors" do a certain type of testing, or
quality control, or something like that. How is this known? Do vendors share
information on their processes with each other? Is there a minimum set of standards
to which "decent" compiler vendors adhere?




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-03  0:00   ` Ken Garlington
@ 1996-04-04  0:00     ` Robert Dewar
  1996-04-04  0:00       ` John McCabe
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-04  0:00 UTC (permalink / raw)


In my domain, a reverse mapping is always done by the test developers,
to demonstrate effective test coverage. Anyway, since ACVC isn't
intended to measure product quality, I don't know that it matters...

   That's misleading. There are many aspects of quality for
   Ada compilers. Validation helps to measure and assess some of
   these aspects:

      (a) full feature coverage
      (b) accurate interpretation of tricky semantic rules
      (c) lack of improper extensions

   The point is not that the ACVC suite does not measure product quality,
   it is that it does not guarantee product quality, because it only
   addresses certain quality aspects. You can still have a 100% conformant
   compiler with no extensions that is of low quality because it has
   unacceptable capacity limits, or compile/runtime performance for
   example.

   P.S. It is perfectly possible to trace ACVC tests back to the test
   objectives in the implementors guide.

Unfortunately, I can never tell the criteria by which "reasonable"
coverage was established. However, as I noted above, I'll take it on
faith that the current ACVC is as good as it can be.
 
   The reason you cannot tell the criteria is that you have, as you
   noted in your previous message, not made the effort to find out.
   A lot has been written down to answer these questions. As I
   mentioned before, a good starting point is to look at the papers
   John Goodenough has written on the subject. You could also
   contact Nelson Weiderman for material relating to the development
   of ACVC version 2 (this is a large SAIC contract involving tens
   of person years of work, and as you would hope, this work does
   not take place in an information vacuum!) You might be particularly
   interested to study the coverage matrices that have been developed
   for Ada 95.

How many attempts were made in the last three years to add a regression
test to the ACVC? How does that compare to the list of known bugs in Ada
compilers? I guess I'm still operating from ignorance: Dr. Dewar seemed
to think that it wasn't possible to try to put in regression tests for
every bug, but you're saying this is what is attempted? Perhaps we're
talking about different types of bugs?

   An attempt seems to imply something you tried and failed, so I am
   not sure I would apply the word. It is normal procedure in the ACVC
   development process to add tests to the suite as gaps are found.

   In the case of GNAT, if I find a bug that seems like it should be
   caught by the ACVC suite, I send along a message to the ACVC
   development group. If appropriate it is discussed by the ACVC
   review group, but more often than not, it is simply incorporated.

   I do not send along all bug reports for many reasons. Some involve
   proprietary code, and the bug cannot easily be abstracted into a
   simple example, some depend on implementation specific issues. For
   example, the fact that some Ada program triggers an obscure bug
   in the assembler for some obscure machine does not necessarily
   translate into a useful test. Some are quite GNAT specific. Some
   are untestable issues (nice error messages for instance), etc.

Do we believe that there is something in the works within the Ada
community to keep reducing the defect rate? Where is this effort documented?
If there is such a plan, discussing it would be a welcome change from
the "can't be done" answer I've been given so far in this thread!

   If you are really interested in finding out what is going on in detail
   with validation, informal discussion on CLA is not the place to find
   it out. Why not spend some of the effort you put into posting these
   messages on finding out more about the ACVC process. Obtain and
   read the ACVC validation procedures. Obtain and read some sample
   VSR's. Obtain and read John Goodenough's paper. etc.

Do you believe the Ada compiler vendor market looks more like "some"
markets, or like a "safety critical" market? I think that question is
pretty much at the heart of my grousing, and I suspect at the heart of
Mr. McCabe's statements as well.

  The Ada compiler vendor market certainly does NOT look like a safety
  critical market (even by Ken's standards, which seem a bit weak to me
  since they seem very testing oriented, and I would never consider
  testing alone to be sufficient for safety critical applications).

  It is definitely the case that today we do NOT have the technology
  to develop compilers for modern languages that meet the criteria
  of safety critical programs. This is why all safety critical programs
  are tested, validated and certified at the object code level (I will
  certainly not fly a plane where the flight software took on faith
  the 100% accuracy of all generated code).

  Is it possible to develop such technology? Yes, I think so, but it
  involves some substantial work at both the engineering level and
  research level. The NYU group is actually very interested in this
  area. We recently made a proposal to NSF, not funded :-( precisely
  to do work in this area. If anyone is interested, they can contact
  me and I will send a copy of the proposal. We are currently exploring
  other funding sources for this work.

  But that's definitely in the future.

  Compilers are not large programs, but they are extremely complex. They
  involve very complex data structures and algorithms. Far more complex
  than appear in typical application programs. Furthermore, we do not
  begin to have complete formal specifications of the problem. Unlike
  typical safety-critical applications, at least those that I am
  familiar with, where formal specifications are developed, we have
  not yet arrived at a practical technology for generating formal
  specifications of large languages.

  The last sentence here is controversial. There are those who would
  disagree (see for example the VDM definition of Chill), but the
  empirical fact is that nothing approaching a full formal specification
  exists for Ada 95, Fortran 90, COBOL, C++ or even, as far as I am
  aware, C. Furthermore, I just don't think that formal definition
  methodology is good enough yet to solve this problem fully.

  Could we do more now with current technology? Possibly. We could do
  for example full coverage and path testing for compilers. There are
  however several problems with this.

    It would be VERY expensive. Look for example at the TSP CSMART
    product. Now this is an interesting example of a chunk of Ada
    compiler technology that does meet Ken's expectations as a
    safety-critical program creator. BUT, this runtime is TINY
    compared to the full Ada 95 runtime, perhaps less than one
    tenth the size, and yet it costs two orders of magnitude
    more than a normal full runtime. I was working with Alsys
    during some of the time they worked on CSMART, and I can
    assure you that this price accurately reflects the enormous
    effort that went into certifying this relatively small
    piece of code (the 386 version was based in part on code
    that I had written for the full Ada 83 runtime for Alsys).

    It would tend to make it much harder to fix problems and
    improve the compiler. Compilers are not static, but tend
    to constantly improve. If the cost of improvement is to
    repeat a very expensive certification and testing process
    then improvements will not happen so often, or at all.

    It would divert energy from other important areas. Even if
    we verified 100% conformance to the Ada 95 spec, or
    even to some imaginary formal version of this spec, it
    would still be only one aspect of quality. There is no point
    in spending too much effort strengthening the strong link of
    a chain. What we want to aim at is a situation where problems
    of compiler bugs and conformance are no more of a problem
    in the total development process than other factors, and
    taken as a whole, that these bugs and other problems do not
    significantly impact project success or timescales.

  I can certainly understand Ken's frustration, but these are tough
  problems. I am sure that uninformed members of the public,
  or even simply technical people not familiar with avionics, look
  at yet another report of a military aircraft crash, and ask "Why
  can't those guys build planes that don't crash?" Certainly
  congress likes to ask such questions :-)

  Of course, they simply don't understand the engineering problems
  and realities involved. Our technology is not perfect, and this
  is true of many areas. About the best we can hope for is to ensure
  that compiler technology is not the weak link in the chain, and that
  when a plane does crash, it is not because of a bug in an Ada compiler!
  In practice I think we do a pretty good job. I am not aware of any
  major failure of safety-critical software that can be traced to a
  compiler bug.

  Could we do better? Will we do better in the future?
  I would certainly hope that the answer is yes and yes.

P.S. I find it a bit amazing that John McCabe is so unaware of the
validation status of the compiler he is using. One important piece
of advice for any user of validated Ada compilers is to obtain the
VSR (validation status report), and read it carefully. VSR's are
public documents, available from the AVO, so even if your vendor
does not supply a copy (they should), you can obtain one. John,
along with a lot of other data, the VSR lists the expiration date,
or points to the documents that define the expiration date.





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-04  0:00     ` Robert Dewar
@ 1996-04-04  0:00       ` John McCabe
  1996-04-05  0:00         ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: John McCabe @ 1996-04-04  0:00 UTC (permalink / raw)


dewar@cs.nyu.edu (Robert Dewar) wrote:

<..snip..>
>   That's a misleading. There are many aspects of quality for
>   Ada compilers. Validation helps to measure and assess some of
>   these aspects:

>      (a) full feature coverage
>      (b) accurate interpretation of tricky semantic rules
>      (c) lack of improper extensions

I assume by point (c) that you mean compilers must not contain
improper extensions. That is an interesting point to me since in
another thread somewhere we discussed TLD's implementation of package
Machine_Code. Your response to my description of their implementation
was that this was precisely an improper extension. As this
implementation has been around for donkey's years, how did it get
through the validation process with an improper extension as obvious
as that?

<..snip..>

>   In the case of GNAT, if I find a bug that seems like it should be
>   caught by the ACVC suite, I send along a message to the ACVC
>   development group. If appropriate it is discussed by the ACVC
>   review group, but more often than not, it is simply incorporated.

I fear you may be biased towards this type of bug reporting by being
directly involved with the Ada language itself. How many [other]
compiler vendors are this conscientious?

<..snip..>

>P.S. I find it a bit amazing that John McCabe is so unaware of the
>validation status of the compiler he is using. One important piece
>of advice for any user of validated Ada compilers is to obtain the
>VSR (validation status report), and read it carefully. VSR's are
>public documents, available from the AVO, so even if your vendor
>does not supply a copy (they should), you can obtain one. John,
>along with a lot of other data, the VSR lists the expiration date,
>or points to the documents that define the expiration date.

I don't find it amazing at all! As I have probably mentioned in this
and other threads, I am mandated to use this version of the compiler
by my customer's customer's customer so I don't really give a s**t
about its validation status.

I will however be far more interested in the validation status of any
compiler I have the responsibility for selecting for future projects
(especially the one we're hopefully going to be using a 386 for! - see
elsewhere) and I will certainly bear in mind the advice you have given
in this thread about that.

As an aside, I've noticed that your responses have been formatted
differently over the last couple of weeks - i.e. you've been using
indentation instead of putting quotes around things etc. Just a minor
point but I find it more difficult to follow that way :-)


Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-04  0:00                         ` Ken Garlington
@ 1996-04-05  0:00                           ` Robert Dewar
  1996-04-10  0:00                             ` Ken Garlington
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-05  0:00 UTC (permalink / raw)


Ken Garlington asks why it is infeasible for a compiler vendor to deliver
the source code to the AVF for analysis.

Ken, you have some experience here. What would you say is the cost of
analysis and thorough testing of half a million lines of someone else's
code, under the conditions that the code is, throughout, extremely
complex. Remember that a typical compiler has had several hundred
person years invested in the code, at least this figure is right
for several Ada compilers that I know of. How much more investment
would be necessary from the AVF to significantly improve the level
of confidence on the basis of examination of the source code.

Let's suppose that for this kind of examination and white box testing,
a figure of 10 lines/day is reasonable (this is ten lines of source code).
I suspect this number is high, but I deliberately want to be on the high
side.

Then we arrive at a figure of 250 person years to evaluate the code
of an Ada compiler. OK, so that's about 25 million dollars.
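
(Spelling out the arithmetic, with the implied assumptions made
explicit: 500,000 lines at 10 lines per day is 50,000 person-days;
at roughly 200 working days per person-year that is 250 person-years;
and at a loaded cost of roughly $100,000 per person-year, about
$25 million.)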

I *think* it is ok to regard this as infeasible :-)

The real point, which you did not address, is that even if you were to
supply the check for $25 million, it would not solve the problem of
timely delivery and verification of improvements etc. Compilers tend
to be quite dynamic objects, for instance, even a mature compiler that
itself is pretty stable is likely to need tinkering for a new version
of an operating system.

Furthermore, we are still missing a formal specification of Ada 95 against
which to formally measure compliance. The EEC spent a couple of million
dollars trying to get such a formal definition of Ada 83, and failed to
produce a complete usable definition. Basically they ended up with
two telephone books of formulae that (a) were incomplete, (b) could not
be determined to be equivalent to the RM, (c) could not even be determined
to be equivalent to the existing ACVC tests, and (d) certainly contained
at least some errors.

Ken, in your message, you again refer to users expecting the ACVC suite
to guarantee conformance to the standard. How many times does it have
to be said? The ACVC suite cannot do this, does not attempt to do this,
and anyone who thinks it does do this, or could do this, is mistaken!

Once again, I refer you to John Goodenough's writings on the subject,
and to the other material I mentioned before.

P.S. If you would like to send a check for $25 million to ACT, I think
I can promise that 5 years from now we will have a compiler that is
much closer to conforming to the standard (of course I can also promise
this if you *don't* send us the $25 million :-)





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-04  0:00       ` John McCabe
@ 1996-04-05  0:00         ` Robert Dewar
  1996-04-06  0:00           ` Ada validation is virtually worthless Raj Thomas
  1996-04-07  0:00           ` Ada Core Technologies and Ada95 Standards John McCabe
  0 siblings, 2 replies; 100+ messages in thread
From: Robert Dewar @ 1996-04-05  0:00 UTC (permalink / raw)


John McCabe said

  "I assume by point (c) that you mean compilers must not contain
  improper extensions. That is an interesting point to me since in
  another thread somewhere we discussed TLD's implementation of package
  Machine_Code. Your response to my description of their implementation
  was that this was precisely an improper extension. As this
  implementation has been around for donkey's years, how did it get
  through the validation process with an improper extension as obvious
  as that?"

First of all, as is clearly stated in the validation procedures document
(I strongly recommend reading these procedures if you want to make informed
comments on the validation procedures), the approach for eliminating
extensions is the DOC (declaration of conformance). Obviously no black
box testing can detect extensions in a systematic way, and even examining
the code is an impractical and ineffective way of detecting extensions. The
DOC actually declares that you have no deliberate extensions.

So TLD does not think that this unusual use of Machine_Code is an extension.
That's arguable on either side. I would consider it an extension, but I am
not in authority here. I would assume the argument on TLD's side would go
something like this: Package Machine_Code as described in the RM is
optional, we don't implement it. We are allowed to add packages (adding
packages is not an extension). We have added a package which happens to
have the same name as the optional package in the RM we do not implement.
Where is this prohibited? We do not think it is prohibited, and since
the functionality, if not the form, of our package matches the intent of
the RM defined package, it seemed a reasonable name. I guess that you
would have to agree with this argument, although it would probably take
a WG9 ruling to be sure. I still prefer not to confuse the use of the
package name here, but thinking about it more, I agree that this is not
an extension in the formal sense. Informally it feels like an extension
to me, but I have to admit that I cannot prove it.
  
  "I fear you may be biased towards this type of bug reporting by being
  directly involved with the Ada language itself. How many [other]
  compiler vendors are this conscientious?"

Not quite sure what this means. All Ada implementors are involved with
the Ada language. I am not the only vendor with people on various
committees (I do not say representatives here, because none of us
represent our companies in this context). Many suggestions for tests
in the past have come from vendors. I can't speak for how conscientious
any vendor may or may not have been in this regard.

    >P.S. I find it a bit amazing that John McCabe is so unaware of the
    >validation status of the compiler he is using. One important piece
    >of advice for any user of validated Ada compilers is to obtain the
    >VSR (validation status report), and read it carefully. VSR's are
    >public documents, available from the AVO, so even if your vendor
    >does not supply a copy (they should), you can obtain one. John,
    >along with a lot of other data, the VSR lists the expiration date,
    >or points to the documents that define the expiration date.
  
  "I don't find it amazing at all! As I have probably mentioned in this
  and other threads, I am mandated to use this version of the compiler
  by my customer's customer's customer so I don't really give a s**t
  about its validation status.

Well you are free to take the "I don't really give a s**t" attitude to
anything you like, but the fact of the matter is that the validation
status report (VSR) has valuable information about any compiler, and
often provides information that is valuable in using a compiler. Also
I wish to remind you that it was you who raised the issue of the validation
status of the compiler you are using, not me :-)

P.S. do you like that indentation style better?





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-03-25  0:00 ` Robert Dewar
                     ` (4 preceding siblings ...)
  1996-04-03  0:00   ` Ken Garlington
@ 1996-04-05  0:00   ` Robert I. Eachus
  1996-04-10  0:00     ` Cordes MJ
  1996-04-11  0:00   ` Robert I. Eachus
                     ` (4 subsequent siblings)
  10 siblings, 1 reply; 100+ messages in thread
From: Robert I. Eachus @ 1996-04-05  0:00 UTC (permalink / raw)


In article <3162B080.490F@lfwc.lockheed.com> Ken Garlington <garlingtonke@lfwc.lockheed.com> writes:

   (Robert Dewar did a very good job of responding to this, I just
want to emphasize one point.)

  > How many attempts were made in the last three years to add a regression
  > test to the ACVC? How does that compare to the list of known bugs in Ada
  > compilers? I guess I'm still operating from ignorance: Dr. Dewar seemed
  > to think that it wasn't possible to try to put in regression tests for
  > every bug, but you're saying this is what is attempted? Perhaps we're
  > talking about different types of bugs?

    Exactly, a compiler bug as such is no reason for adding a test to
the ACVC suite.  A compiler bug that occurs because of a difficult or
unclear part of the reference manual, however, is a very good reason
to add an ACVC test.  My parenthetical remark was to point out that
not everything that is unclear in the reference manual should result
in an ACVC test.  There are, in fact, cases where the ambiguity of the
wording is intended to reflect an intent to leave that feature
undefined.  In some cases that is clearly expressed in the manual, in
other cases it is only stated by omission.

    This is probably best illustrated by example.  There are features
of the Ada language which occasionally make it difficult or impossible
for a compiler to enforce a strict order of evaluation for subprogram
parameters.  For example:

    function "+"(A: Apple; O: Orange) return Fruit_Collection is...

    function "+"(O: Orange; A: Apple) return Fruit_Collection is 
    begin return A+O; end "+";

    pragma INLINE("+");

    Is a construct that happens all over the place in abstract data
type definitions.  What would a requirement that parameters be
evaluated left to right mean in this case?  (Remember both calls get
in-lined.)  As a result, and also for efficiency reasons, Ada programs
are allowed to evaluate parameters in any order.

    Now to very gory detail.  In Ada 83, that order was defined to be
"evaluated in some order that is not defined by the language."  This
"accidently" outlawed evaluting parameters simultaneously.  Using
parameter expressions with side effects, especially the raising of
exceptions, allowed this to be tested, but the opinion of the ARG and
most of the Ada community was that such tests were pathological
programs and should not be incorporated into the test suite.  The very
valid fear was that such tests would result in some very useful
optimizations being removed from compilers.

    In Ada 95, the rule is stated as "These evaluations are done in
arbitrary order," and that phrase is defined in 1.1.4(18) to allow
compilers to ignore side effects.  (But not real dependencies between
operations, just side effects.)  This incidently results in a rule
that can be checked only with debuggers in many cases.  The practical
effect on users is nil.  If you have parameters with side effects, and
the order of evaluation is important to you, you need to pull those
expressions out of line in any case.
    
    One last try to avoid MEGO:

    So if a program had a procedure call with two parameters:

        B: Integer := 5;
        ...
        Foo(A,B);

    ...and evaluation of A changed the value of B:

      function A return Integer is begin B := -3; return 6; end A;

    ...and the second parameter is declared to be of subtype Positive.
Then in Ada 83 an ACVC test program could check that either the call
did not raise Constraint_Error, or that B had the value 5 in the
exception handler.  That sort of test was considered to be not very
helpful, in fact downright harmful, since if B was in a temporary
register it had to be stored before the constraint check.  Ouch!

    Such tests were considered to be counterproductive, and in Ada 95
such tests are defined not to work.  (See 11.6(6), and read the
note--the reordering permissions only cut in if an exception is raised
in (a) canonical order of execution.)  The value of B is allowed to be
5 in the exception handler, even if it was the check on B that put the
program in the handler.  (If you do care, you just have to make sure
that the value of B is set outside the scope of the handler.)
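
    Here is a self-contained sketch of the example above, with
invented names, for anyone who wants to try it on their own
compiler -- both outcomes are legitimate:

      with Ada.Text_IO;
      procedure Arbitrary_Order is
         B : Integer := 5;
         function A return Integer is
         begin
            B := -3;
            return 6;
         end A;
         procedure Foo (X : Integer; Y : Positive) is
         begin
            Ada.Text_IO.Put_Line ("no exception: B was evaluated first");
         end Foo;
      begin
         Foo (A, B);
      exception
         when Constraint_Error =>
            -- A was evaluated first, so the Positive check on B failed.
            -- Under the 11.6 permissions B may still appear to be 5 here.
            Ada.Text_IO.Put_Line ("handler: B =" & Integer'Image (B));
      end Arbitrary_Order;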
--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Ada validation is virtually worthless
  1996-04-05  0:00         ` Robert Dewar
@ 1996-04-06  0:00           ` Raj Thomas
  1996-04-06  0:00             ` Robert Dewar
  1996-04-07  0:00           ` Ada Core Technologies and Ada95 Standards John McCabe
  1 sibling, 1 reply; 100+ messages in thread
From: Raj Thomas @ 1996-04-06  0:00 UTC (permalink / raw)


>John McCabe said
>  Machine_Code. Your response to my description of their implementation
>  was that this was precisely an improper extension. As this
>  implementation ahs been around for donkeys years, how did it get
>  through the validation process with an improper extension as obvious
>  as that?"

>Robert Dewar said
>First of all, as is clearly stated in the validation procedures document
> the approach for eliminating
>extensions is the DOC (declaration of conformance). Obviously no black
>box testing can detect extensions in a systematic way, and even examining
>the code is an impractical and ineffective way of detecting extensions. The
>DOC actually declares that you have no deliberate extensions.

Aah !
Let me see if I understand:
1. Validation does nothing to guarantee compiler correctness /
"bug-freeness" / usability / object code quality
2. Validation does provide for ensuring that illegal extensions and
other crapola do not exist.
3. However, some of the techniques used for ensuring this depend on
vendor honesty eg: " the approach for eliminating
extensions is the DOC (declaration of conformance)...."
4. We all know about vendor honesty...

My conclusions:
a. Validated compilers have been validated
 ( Yes, it is a tautology and like all tautologies, of limited use.. )
b. Validation does not guarantee anything that is of great use to the
developer...
c. Validation is a useful marketing tool.
d. Validation is essential if you are marketing to the American DODo

Now it is all perfectly clear :-)

Octopus traps
summer's moonspun dreams
soon fade away.
Basho 1644




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada validation is virtually worthless
  1996-04-06  0:00           ` Ada validation is virtually worthless Raj Thomas
@ 1996-04-06  0:00             ` Robert Dewar
  1996-04-08  0:00               ` Arthur Evans Jr
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-06  0:00 UTC (permalink / raw)


Despite what Raj said, the DOC has been an extremely effective tool in
preventing extensions in Ada. Ada is the only language I know of where
the incidence of vendor extensions of the language is essentially nil.

Obviously there is no objective way to tell if a compiler has extensions,
but in practice the requirement to sign a legal document declaring that
you have no deliberate extensions is a very strong control over such
extensions, considering that nearly all other compilers for nearly all
other languages *do* have deliberate extensions of one kind or another.

I suspect Raj has not really bothered to investigate validation very
much, and has not for example, examined VSR's in detail. I usually find
that people making glib statements about validation like this don't know
very much about it!





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-05  0:00         ` Robert Dewar
  1996-04-06  0:00           ` Ada validation is virtually worthless Raj Thomas
@ 1996-04-07  0:00           ` John McCabe
  1 sibling, 0 replies; 100+ messages in thread
From: John McCabe @ 1996-04-07  0:00 UTC (permalink / raw)


dewar@cs.nyu.edu (Robert Dewar) wrote:

>John McCabe said

>  "I assume by point (c) that you mean compilers must not contain

<..snip..>

The point you make on invalid extensions re TLD seems reasonable to
me. I would have to agree entirely with your arguments on how TLD
could justify the implementation of _a_ package Machine_Code in this
way.

>  "I fear you may be biased towards this type of bug reporting by being
>  directly involved with the Ada language itself. How many [other]
>  compiler vendors are this conscientious?"

>Not quite sure what this means. All Ada implementors are involved with
>the Ada language. I am not the only vendor with people on various
>committees (I do not say representatives here, because none of us
>represent our companies in this context). Many suggestions for tests
>in the past have come from vendors. I can't speak for how concientious
>any vendor may or may not have been in this regard.

Your response here seems to suggest that your understanding of my
statement is correct. I don't think I have anything to add here (your
last sentence says it all really!)

<..validation status..>

>Well you are free to take the "I don't really give a s**t" attitude to
>anything you like, but the fact of the matter is that the validation
>status report (VSR) has valuable information about any compiler, and is
>often provides information that is valuable in using a compiler.

I must admit I wasn't aware of that. Presumably these reports are
available from somewhere. Are they available on-line e.g. from AdaIC?
I'd _now_ be interested in seeing the one for the compiler I use.

Just a point, at the time the compiler was mandated I had absolutely
no control over what tools, languages, or compilers I used. In
addition, I had absolutely no interest in Ada. This has changed to a
certain extent now in that I may be able to make decisions of this
nature on any new projects I am assigned to, and since reading the
articles in this newsgroup, I have started to look at Ada in a new
light (especially now Ada95 is available).

> Also
>I wish to remind you that it was you who raised the issue of the validation
>status of the compiler you are using, not me :-)

Yes, but I just brought it up so we would have something to talk about
:-)

>P.S. do you like that indentation style better?

Yes, it's much better - much more obvious who said what :-) Thanks very
much!



Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada validation is virtually worthless
  1996-04-06  0:00             ` Robert Dewar
@ 1996-04-08  0:00               ` Arthur Evans Jr
  0 siblings, 0 replies; 100+ messages in thread
From: Arthur Evans Jr @ 1996-04-08  0:00 UTC (permalink / raw)


In article <dewar.828846331@schonberg>, dewar@cs.nyu.edu (Robert Dewar) wrote:

>       Ada is the only language I know of where
> the incidence of vendor extensions of the language is essentially nil.

Robert apparently misspoke himself here, intending to refer to vendor
extensions beyond those provided for in the language.  GNAT surely has
pragmas and attributes that are not in the Standard.  The important
point is that neither GNAT nor other implementations have other kinds of
extensions.  This fact is one of the many attractions of Ada.

Art Evans

Arthur Evans Jr, PhD        Phone: 412-963-0839
Ada Consulting              FAX:   412-963-0927
461 Fairview Road
Pittsburgh PA  15238-1933
evans@evans.pgh.pa.us




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-05  0:00   ` Robert I. Eachus
@ 1996-04-10  0:00     ` Cordes MJ
  1996-04-10  0:00       ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Cordes MJ @ 1996-04-10  0:00 UTC (permalink / raw)


Robert I. Eachus posted a fine example of a case where testing the
compiler is difficult (if not harmful).

Robert Dewar has responded to Ken Garlington's posts indicating that
it is foolish to expect the ACVCs to guarantee an Ada compiler meets
the RM.

I don't think that Ken, or anybody else in the user community, expects
the ACVC to guarantee 100% compliance with the RM. I did, however, believe 
that the ACVC provided *some* measure of quality. You two can concentrate
on the pathological cases and make a strong argument for doing nothing
while we continue to develop safety critical code with *validated*
compilers which generate bad code for some of the more simple Ada 
features.

Instead of flaming us for expecting too much, take our comments and
feedback from your "larger" user community, and challenge yourself
(i.e., the Ada compiler vendors) to provide a high quality
product that meets all of our needs.

And to the nattering nabobs of negativism I say: the users are
learning that higher quality toolsets mean lower development
costs - quality will make a difference in future toolset selections.

(this sounds much more snooty than I intended - however, it does state
my point in the fewest words :-)
--
---------------------------------------------------------------------
Michael J Cordes
Phone: (817) 935-3823
Fax:   (817) 935-3800
EMail: CordesMJ@lfwc.lockheed.com
---------------------------------------------------------------------




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-05  0:00                           ` Robert Dewar
@ 1996-04-10  0:00                             ` Ken Garlington
  0 siblings, 0 replies; 100+ messages in thread
From: Ken Garlington @ 1996-04-10  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> Ken Garlington asks why it is infeasible for a compiler vendor to deliver
> the source code to the AVF for analysis.

Actually, I didn't ask this, but we can talk about it if you like...

What I actually asked was, "Is there some way to modify the scope of the ACVC
process to improve compiler quality across all vendors? Or, is there something
outside the scope of the ACVC that could be done to improve compiler quality
across all vendors?"

Your answer: No, because we'd have to make an _investment_ to improve
compiler quality. To get to 100% quality (whatever that means) would take
too much money (and is technically infeasible). Therefore, no investment
should be made.

> Ken, you have some experience here. What would you say is the cost of
> analysis and thorough testing of half a million lines of someone elses
> code, under the conditions that the code is, throughout, extremely
> complex.

The cost of IIV&V? I don't know the exact F-22 figure at the moment, but
it's probably significantly less than 5% of the development cost. IIV&V
is done on far more than a mere 500K SLOCs on F-22. I recommend AFSC Pamplet
800-5, which helps estimate such costs, and also explains IIV&V. Based on your
discussion below, I'm guessing you're not familiar with this process.

> Remember that a typical compiler has had several hundred
> person years invested in the code, at least this figure is right
> for several Ada compilers that I know of. How much more investment
> would be necessary from the AVF to significantly improve the level
> of confidence on the basis of examination of the source code.

"What statistics there are show that path testing catches approximately
half of all bugs caught during unit testing or approximately 35% of all
bugs.... When path testing is combined with other methods, such as limit
checks on loops, the percentage of bugs caught rises to 50% to 60% in
unit testing."

    - Beizer, Software Testing Techniques, 1983.

So, if path testing were required for Ada vendors - either at the vendor's
site, or at an AVF - this would be the expected benefit over not doing it.
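
To make "path testing" concrete -- a toy example, invented, with no
connection to any real compiler: a routine containing a single IF has
two paths, so path coverage demands at least two inputs, one per path.

   function Sign (X : Integer) return Integer is
   begin
      if X < 0 then
         return -1;   -- path 1: exercised by, e.g., Sign (-5)
      else
         return 1;    -- path 2: exercised by, e.g., Sign (5)
      end if;
   end Sign;

A black-box suite has no way of knowing whether both branches were
ever executed; path testing measures exactly that.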

How much would it cost to do path testing for an Ada compiler? I don't know.
Let's ask someone from Rational. They produce TestMate; surely they use it
in their own development process, or at least would know what it would cost!
I bet the folks who build AdaMat, LDRA Analysis, etc. would be happy to provide 
good information in this area, as well.

I don't like focusing on path testing, since there are certainly many other
analyses that could also be done, but that's one idea. Other ideas might be
to audit processes (or use an SEI III/ISO 9000 audit), or do data flow analysis
(see the TRI-Ada '95 paper from Boeing).

> Let's suppose that for this kind of examination and white box testing,
> a figure of 10 lines/day is reasonable (this is ten lines of source code).
> I suspect this number is high, but I deliberately want to be on the high
> side.
> 
> Then we arrive at a figure of 250 person years to evaluate the code
> of an Ada compiler. OK, so that's about 25 million dollars.
> 
> I *think* it is ok to regard this as infeasible :-)
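
(For the record, the arithmetic behind those two figures works out as
follows; the 200-working-day year and the roughly $100K loaded cost per
person-year are assumptions, not stated above:)

\[
\frac{500{,}000\ \text{lines}}{10\ \text{lines/day}} = 50{,}000\ \text{days}
\approx 250\ \text{person-years}\quad(\text{at }200\ \text{days/year}),
\]
\[
250\ \text{person-years} \times \$100\text{K/year} \approx \$25\ \text{million}.
\]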

Interesting. So, once you reach 500K SLOCs, you can no longer perform adequate
testing of software. What a relief! Now, if the F-22 fails to satisfy the
customer, I have an ironclad alibi! :)

Of course, no one would do IIV&V this way, so this is a straw man analysis.
Nonetheless, it is gratifying to hear at least one vendor admit that their
product is inadequately analyzed and tested, in order to save development costs.
Or are you saying that, somehow, you do manage to adequately test your product,
despite the exorbitant cost?

> The real point, which you did not address, is that even if you were to
> supply the check for $25 million, it would not solve the problem of
> timely delivery and verification of improvements etc.

Well, I wasn't asked to address this point, but of course IIV&V only would
have to address the delta changes and their interfaces.

Are you saying that we're wasting money re-running ACVC tests on changed
products? Maybe we could use that money to do process audits! See, that's
exactly the kind of thinking I'm looking for here. Good idea!

> Furthermore, we are still missing a formal specification of Ada 95 against
> which to formally measure compliance.

We don't have a formal specification of the F-22 software, either.

Can you come to our first flight readiness review, and explain to the pilots
why we're not able to give them any confidence in the performance of the system
because we're missing a formal specification?

> Ken, in your message, you again refer to users expecting the ACVC suite
> to guarantee conformance to the standard.

I did? Must have been my evil twin.

What I actually asked was, "Is there some way to modify the scope of the ACVC
process to improve compiler quality across all vendors? Or, is there something
outside the scope of the ACVC that could be done to improve compiler quality
across all vendors?"

> How many times does it have
> to be said? The ACVC suite cannot do this, does not attempt to do this,
> and anyone who thinks it does do this, or could do this, is mistaken!

Probably as many times as I (and other users) have had to say:

"Is there some way to modify the scope of the ACVC process to improve compiler
quality across all vendors? Or, is there something outside the scope of the
ACVC that could be done to improve compiler quality across all vendors?"

> Once again, I refer you to John Goodenough's writings on the subject,
> and to the other material I mentioned before.

And once again, how about the actual _name_ on the paper? Where on the
Internet is it located?

(See separate message for a review of the "other material").

> P.S. If you would like to send a check for $25 million to ACT, I think
> I can promise that 5 years from now we will have a compiler that is
> much closer to conforming to the standard (of course I can also promise
> this if you *don't* send us the $25 million :-)

Interesting. Your process for improving the quality of your product is
unrelated to the available resources? Wish _we_ had a system like that.
(Or maybe I don't.)

I notice you use the word "conformance" rather than "quality". Are these
synonyms, to you? They aren't to me. I suspect they aren't to Mr. McCabe,
or most other vendors.

Again, I think it's a matter of culture. We're both speaking English (more
or less), but discussing completely different subjects. Since you've already
answered my question, I'm not really sure why you're wasting your valuable
time continuing to discuss it...




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-01  0:00           ` Ken Garlington
  1996-04-01  0:00             ` Robert Dewar
  1996-04-02  0:00             ` John McCabe
@ 1996-04-10  0:00             ` Robert Dewar
  1996-04-15  0:00               ` Ken Garlington
  2 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-10  0:00 UTC (permalink / raw)


Ken said

"I know that NPL has a tool that they sell that tests Ada compilers for bugs, tha
t
apparently provides much more coverage than the ACVC. Why should such a tool
exist outside of the validation/certification process?"

This is not true at all, and I guess Ken is only aware of this tool by
rumour, since if he had used it he would know that it is not in the business
AT ALL of providing coverage testing.

Instead, this is a stress-testing tool: it generates random, very complex
(and generally very unrealistic) examples of expressions and other
constructs to see if the compiler can be broken by such stress testing.
The tool is incidentally available from NPL to users, so, Ken, it is
certainly something you could use to test a compiler yourself.
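
To give a feel for what such a stress tester does, here is a toy sketch --
invented for illustration, not the NPL tool's actual design -- that emits
a random, deeply nested expression wrapped in a compilable unit:

   with Ada.Text_IO;                  use Ada.Text_IO;
   with Ada.Numerics.Discrete_Random;

   procedure Stress_Gen is
      type Choice is range 0 .. 3;
      package Rand is new Ada.Numerics.Discrete_Random (Choice);
      G : Rand.Generator;

      --  Build a random expression of the given nesting depth, as text.
      function Expr (Depth : Natural) return String is
      begin
         if Depth = 0 then
            return "1";
         end if;
         case Rand.Random (G) is
            when 0 => return "(" & Expr (Depth - 1) & " + "
                             & Expr (Depth - 1) & ")";
            when 1 => return "(" & Expr (Depth - 1) & " * "
                             & Expr (Depth - 1) & ")";
            when 2 => return "abs (" & Expr (Depth - 1) & ")";
            when 3 => return "(- " & Expr (Depth - 1) & ")";
         end case;
      end Expr;
   begin
      Rand.Reset (G);
      --  Feed the output to the compiler under test; a crash, a spurious
      --  error, or wrong generated code counts as a finding.
      Put_Line ("procedure T is X : Integer := " & Expr (8) &
                "; begin null; end T;");
   end Stress_Gen;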

In fact, I talked to Brian Wichmann (the author of this tool and a similar
one for Pascal), and the results they have obtained with these tools are
quite surprising at least to me, in the extent to which they show quality
differences between Ada and Pascal compilers. Most (all?) of the Pascal
compilers they have tested have exhibited safety defects (defined as the
generation of incorrect code). None of the Ada compilers have shown
safety defects -- they have managed to break them but not persuaded
them to generate wrong code.

Now in practice, I would expect that big projects such as Ken's can point
to safety defects (defined this way) in the compilers they have used, and
just as the ACVC cannot 100% guarantee conformance, the NPL tests cannot
100% guarantee safety, but they are a measure. I find it interesting that
the Ada compilers fare so much better than the Pascal compilers. Brian
ascribes this at least in part to the ACVC process.






^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-02  0:00               ` Ken Garlington
  1996-04-02  0:00                 ` John McCabe
@ 1996-04-10  0:00                 ` Robert Dewar
  1996-04-10  0:00                   ` Robert Dewar
  1996-04-15  0:00                   ` Ken Garlington
  1 sibling, 2 replies; 100+ messages in thread
From: Robert Dewar @ 1996-04-10  0:00 UTC (permalink / raw)


Ken said

"Again, a difference in philosophy: In my domain, _not_ having a regression
test for every bug found is out of the question."

No, not a difference in philosophies, but rather a difference in domains.
Certainly an individual vendor can build a complete set of regression tests,
based on every bug found, but I don't see how that could practically be
done across compilers.
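
(As a concrete illustration, a per-bug regression test is typically just a
reduced reproducer kept in the suite forever; this one is invented, bug
number and all:)

   with Ada.Text_IO; use Ada.Text_IO;

   --  Reduced reproducer for hypothetical bug #1234: a string slice
   --  comparison that was once miscompiled on one target.
   procedure Test_Bug_1234 is
      S : constant String := "regression";
   begin
      if S (1 .. 4) = "regr" then
         Put_Line ("PASSED");
      else
         Put_Line ("FAILED: bug 1234 has regressed");
      end if;
   end Test_Bug_1234;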

A comparison would be if 20 different vendors wrote completely different
F22 software, with different cockpit interfaces. Now trying to build a
regression suite corresponding to all bugs found in all 20 versions 
would be very much more difficult. This still isn't a valid comparison,
because it does not capture the fact that Ada compilers are multi-target,
so many bugs are platform specific, so it is more as if 20 companies
built 20 programs that could be retargeted to any aircraft in the sky.
I don't think that makes any sense at all -- as I say, the domains are
VERY different, and trying to apply the compiler model to the F22 makes
as little sense as trying to apply the F22 model to a compiler!





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-10  0:00     ` Cordes MJ
@ 1996-04-10  0:00       ` Robert Dewar
  1996-04-15  0:00         ` Ken Garlington
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-10  0:00 UTC (permalink / raw)


Michael Cordes said

>Robert Dewar has responded to Ken Garlington's posts indicating that
>it is foolish to expect the ACVCs to guarantee an Ada compiler meets
>the RM.

>I don't think that Ken, or anybody else in the user community, expects
>the ACVC to guarantee 100% compliance with the RM. I did, however, believe
>that the ACVC provided *some* measure of quality.

 I certainly agree, and have never disagreed on this, in fact I precisely
 enumerated in one of my recent messages just how the ACVC does contribute
 to quality. In fact I think Ken *did* expect the ACVC to guarantee
 compliance, and unfortunately this is not possible.

>on the pathological cases and make a strong argument for doing nothing
>while we continue to develop safety critical code with *validated*
>compilers which generate bad code for some of the more simple Ada
>features.

 Unfortunately simple black box testing cannot even guarantee that 100%
 of simple Ada features are completely accurately implemented.

>Instead of flaming us for expecting too much, take our comments and
>feedback from your "larger" user community, and challenge yourself
>(i.e., the Ada compiler vendors) to provide a high quality
>product that meets all of our needs.

 Well of course, we certainly do aim at that. The point is that the
 black box testing of the ACVC suite can only be one part of that process.
 At ACT for example, we have an extremely rigorous development process.
 Among other steps we take, we run our complete regression suite (which
 is far more extensive than the ACVC suite) before making even the most
 minor change to the system (the regression suite has been run well over
 a thousand times in the last six months). This is very helpful in ensuring
 that we do not introduce regressions, but does not guarantee this. We also
 run the entire ACVC suite every day, and this too is another helpful step.
 As with most complex programming tasks, quality is achieved with a
 multi-faceted approach.

 The important thing is to realize that the ACVC can only ever be one
 component of this multi-faceted approach, so the single fact of
 validation, while significant, does not guarantee anything specific,
 and in particular does not guarantee quality for any particular
 definition of quality.





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-10  0:00                 ` Robert Dewar
@ 1996-04-10  0:00                   ` Robert Dewar
  1996-04-12  0:00                     ` Philip Brashear
                                       ` (2 more replies)
  1996-04-15  0:00                   ` Ken Garlington
  1 sibling, 3 replies; 100+ messages in thread
From: Robert Dewar @ 1996-04-10  0:00 UTC (permalink / raw)


Ken said (double >> are mine)

>> Ken Garlington asks why it is infeasible for a compiler vendor to deliver
>> the source code to the AVF for anaysis.

>Actually, I didn't ask this, but we can talk about it if you like...

 There must be two Ken Garlingtons around; the other one said in
 a previous post:

   "I could think of several ways to include other types of testing in an
   ACVC, e.g.  a requirement to deliver of source code and supporting
   data to an AVF for analysis."

   (this is an exactly cutted-and-pasted quote :-)

>What I actually asked was, "Is there some way to modify the scope of the ACVC
>process to improve compiler quality across all vendors? Or, is there something
>outside the scope of the ACVC that could be done to improve compiler quality
>across all vendors?"

>Your answer: No, because we'd have to make an _investment_ to improve
>compiler quality. To get to 100% quality (whatever that means) would take
>too much money (and is technically infeasible). Therefore, no investment
>should be made.

   You miss the point entirely (seems to be happening consistently, so I
   must not be clear, hence my attempt to clarify!)

   Of COURSE! more investment is desirable to improve quality. ACT is
   a very focused company, our only business is improving the quality
   of GNAT! The issue is whether spending more resources on the ACVC
   testing is the way to do it. I very much doubt it.

   One thing we have not discussed is the ways in which the ACVC can,
   if not carefully handled, actually *decrease* compiler quality.
   Right now the rules in the Ada game are that you MUST pass the
   ACVC tests, above anything else, and in particular, above any other
   testing or quality control procedures that may make sense.

   Passing the ACVC suite is not a trivial exercise, and I think all
   vendors would agree that they have had to devote considerable resources
   to this task. These are resources not available for other quality
   improving tasks. This means that we have to be very careful that we
   do not divert too many resources and reach a point of diminishing
   returns.

   For example: Suppose we spent ten times as much on the ACVC suite, and
   had ten times the number of tests (that's really the only way I
   could see spending this investment, since the current tests are
   as effective as the contractor (SAIC) and the review team know
   how to make them). There is no question that conformance would be
   improved, but I think that the resulting diversion of resources
   would actually reach the point of decreasing returns.

   As Ken has pointed out there are many other activities besides ACVC
   testing that can contribute to quality:

      Process control (with ISO 9000 or SEI audits)
      Systematic white box testing (e.g. path testing)
      Stress testing (the NPL tool)
      Performance testing (e.g. with the ACES and PIWG tools)
      Systematic regression testing
      Purpose built test suites for particular compilers

   All these steps may be extremely useful, and compete with the ACVC
   process for resources.

   If you are acquiring an Ada compiler, you certainly do more in your
   evaluation than assure that it is validated. You may well, if your
   experience shows this is worthwhile, require additional testing,
   or for example, require that the compiler vendor have ISO 9000
   certification. Certainly IBM did quality audits on Ada vendors'
   internal process control for the NASA space station project, and
   this seems entirely appropriate to me.

   I think part of the confusion here is that Ken (and John) use the
   ACVC as a kind of code word for "all the things that might be done
   to ensure quality", but in fact the ACVC is and has always been
   defined to be restricted to black box conformance testing. Perhaps
   what has gone wrong here is that Ken and other Ada users have been
   confused into thinking that the ACVC was intended to encapsulate
   the entire question of quality assessment of compilers, and that
   was never its intention.

   Let's go back to the ISO 9000 question for a moment. Suppose that your
   assessment shows that it is essential that a compiler vendor have ISO
   9000 certification (some do; Alsys obtained this certification, and what
   was involved was basically a lot of paperwork, writing down the
   procedures that had always been followed in formal and precise form).

   Then it is entirely reasonable for you to require this as part of
   your procurement process. Naturally you won't want to do this unless
   your careful analysis shows that this really does contribute to
   quality, because otherwise you will be requiring vendors to divert
   resources, but if your analysis shows it's a good idea, and a good
   way to spend resources, then fine.

   BUT! It is not appropriate for NIST conformance testing to include this
   criterion, because the Ada standard does not (and could not) say anything
   about process, since it is a language standard, and hence only defines
   the semantics of the language. So you cannot look to the ACVC for help
   here.

   Similarly, you may determine that performance of generated code is
   critical, and consequently place a lot of weight on the ACES test
   results (if your analysis shows that the ACES accurately captures
   qualities that are important to you). But this cannot be part of 
   NIST conformance testing, since the Ada standard does not (and could
   not) say anything about performance.

   Choosing among available compilers is not an easy task. I think that,
   particularly early on, procurement officers hoped that the ACVC let
   them off the hook -- "I'll stick to validated compilers and I will
   be guaranteed that I have reasonable tools." Unfortunately, it is
   not so simple, and the evaluation process for tools is much more
   complex. The ACVC testing helps as a kind of first-cut qualification,
   but that is all it is, and all it pretends to be. Thorough evaluation
   will have to go well beyond the "Is_Validated" predicate.

>The cost of IIV&V? I don't know the exact F-22 figure at the moment, but
>it's probably significantly less than 5% of the development cost. IIV&V
>is done on far more than a mere 500K SLOCs on F-22. I recommend AFSC Pamphlet
>800-5, which helps estimate such costs, and also explains IIV&V. Based on your
>discussion below, I'm guessing you're not familiar with this process.

   That *is* a surprise. I admit I am much more familiar with MoD regulations
   in the Safety Critical area, and with typical British and European
   procedures. These tend to be much more oriented to formal specifications
   and formal proof, and so the certification process tends to be much
   more of the cost, as far as I can gather.

>"What statistics there are show that path testing catches approximately
>half of all bugs caught during unit testing or approximately 35% of all
>bugs.... When path testing is combined with other methods, such as limit
>checks on loops, the percentage of bugs caught rises to 50% to 60% in
>unit testing."

>    - Beizer, Software Testing Techniques, 1983.

>So, if path testing were required for Ada vendors - either at the vendor's
>site, or at an AVF - this would be the expected benefit over not doing it.

  First of all, I know of no data that would suggest that Beizer's results
  extend to the compiler domain, so that would have to be investigated.

  The danger here is that you enormously increase the cost of ACVC testing.
  There would be two ways of implementing what you suggest:

    1. Require witness testing of path testing. This seems entirely
       infeasible. Right now, witness testing of the absolutely fixed
       set of tests costs of the order of $20K, and witness testing
       for full path coverage would require a huge amount of analysis,
       and a lot of very specialized expertise.

    2. Require a DOC-like document to be signed saying that full path
       testing should be done

  Either of these approaches would in addition require the technical work
  for full path testing. One very significant problem would be the issue
  of deactivated code. GNAT for example is programmed in a very defensive
  style. It is full of tests which are probably unnecessary, but either
  the proof that they are unnecessary is too difficult, or the tests are
  useful as defense against problems. Of course all code runs into the
  issue of deactivated code, but I suspect that systematic testing of
  a complete compiler would indicate that this problem is intractable
  for compilers, or at least very difficult.
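
  A toy example of that deactivated-code problem (invented for
  illustration): a defensive check believed unreachable, which no test
  respecting the routine's precondition can ever cover:

     procedure Defensive_Demo is
        type Pair is record
           K : Character;
           E : Integer;
        end record;
        type Table is array (Positive range <>) of Pair;

        --  Informal precondition: K is present in T.
        function Lookup (T : Table; K : Character) return Integer is
        begin
           for I in T'Range loop
              if T (I).K = K then
                 return T (I).E;
              end if;
           end loop;
           --  Defensive: unreachable if callers respect the precondition,
           --  but proving that is hard, and full path coverage would
           --  demand a test that deliberately violates it.
           raise Program_Error;
        end Lookup;

        T : constant Table := (('a', 1), ('b', 2));
     begin
        if Lookup (T, 'b') /= 2 then
           raise Program_Error;
        end if;
     end Defensive_Demo;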

  In any case, the issue is whether you would increase quality or decrease
  quality by this means. It seems to me that this would put too much
  emphasis on conformance, and too little emphasis on performance.

  Note also that in the case of optimization algorithms, proving that they
  do not violate the language rules is significant, but if that is all you
  do, you miss the point! You want to concentrate at least some effort on
  showing that the optimizations work.

  Looking at our customer base, the problems that people have (that are
  due to GNAT, rather than customer code) fall into several different
  categories. I think the list would be similar for any compiler.

     1. Safe compiler bugs (compiler bombs or gives an incorrect message)

     2. Unsafe compiler bugs (compiler generates wrong code)

     3. Performance is inadequate

     4. Optional features defined in the RM are needed

     5. Features not defined in the RM are needed (bindings, libraries,
        preprocessors, other tool interfaces).

     6. Error messages are confusing

     7. Implementation dependent decisions are different from other
        compilers.

     8. Capacity limitations

  Quality improvement for GNAT involves addressing all of these areas. The
  ACVC tests concentrate entirely on points 1 and 2. Certainly these are
  important areas, especially 2, and the few times we have run into code
  generation problems, we certainly consider them to be of the highest
  priority.

  Your suggestion of path testing still adds only to points 1 and 2, and
  I fear in fact that your entire emphasis is on points 1 and 2. The
  trouble is that for most of our customers, 3-8 are also important.
  For example, many of our customers are porting large applications
  from other compilers to GNAT, and in these porting efforts, point
  7 is by far the most significant in our experience.

>Interesting. So, once you reach 500K SLOCs, you can no longer perform adequate
>testing of software. What a relief! Now, if the F-22 fails to satisfy the
>customer, I have an ironclad alibi! :)

  Again, the issue is that the domains are very different. Your assumption
  that F22 techniques and observations can apply to compilers is no more
  valid than if I thought that you should easily be able to make a
  retargetable flight software program that would work on the F22 or 777
  with minimal work :-)

>Are you saying that we're wasting money re-running ACVC tests on changed
>products? Maybe we could use that money to do process audits! See, that's
>exactly the kind of thinking I'm looking for here. Good idea!

  Sorry, I never said that, I think it is definitely very useful to rerun
  the ACVC suite whenever any change is made to the compiler. We depend
  on this as one (among several) of our continuing internal quality audits.

>We don't have a formal specification of the F-22 software, either.
>Can you come to our first flight readiness review, and explain to the pilots
>why we're not able to give them any confidence in the performance of the system
>because we're missing a formal specification?

  I do find that surprising. Talking to the folks doing similar development
  in England, they seem to be very much more focused on formal specifications.
  Of course ultimately one important thing to remember is that the requirement
  on your F22 software is not that it be 100% correct, but that in practice
  it be 100% reliable, and these are rather different criteria. Clearly the
  100% correct criterion would require a formal specification, and could not
  be demonstrated by testing, but the 100% reliability criterion is quite
  different.

>> Ken, in your message, you again refer to users expecting the ACVC suite
>> to guarantee conformance to the standard.
>
>I did? Must have been my evil twin.

  The same one who talked about sending code to the AVF no doubt :-) :-)

>What I actually asked was, "Is there some way to modify the scope of the ACVC
>process to improve compiler quality across all vendors? Or, is there something
>outside the scope of the ACVC that could be done to improve compiler quality
>across all vendors?"

  Not clear, the danger as I note above is that if you move in the wrong
  direction here, you can easily damage compiler quality.

>> P.S. If you would like to send a check for $25 million to ACT, I think
>> I can promise that 5 years from now we will have a compiler that is
>> much closer to conforming to the standard (of course I can also promise
>> this if you *don't* send us the $25 million :-)

>Interesting. Your process for improving the quality of your product is
>unrelated to the available resources? Wish _we_ had a system like that.
>(Or maybe I don't.)

  You missed the point of my :-) We expect GNAT to succeed, and to invest
  substantial resources in improving its quality even if we don't get your
  $25 million. There is no doubt that your $25 million check would have a
  positive impact on GNAT, but we can manage without it :-)

>I notice you use the word "conformance" rather than "quality". Are these
>synonyms, to you? They aren't to me. I suspect they aren't to Mr. McCabe,
>or most other vendors.

  No of course they are not synonyms! That's the whole point. ACVC measures
  conformance which is just one aspect of quality. What did I ever say that
  made you think I think these are synonymous? The whole point of my
  comments is that they are NOT synonymous. In practice NIST testing
  using the ACVC suite can only measure conformance and as you point out
  repeatedly, and as I agree with repeatedly, this is NOT the one and
  only measure of quality, even if the ACVC could assure 100% conformance.





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-03-25  0:00 ` Robert Dewar
                     ` (6 preceding siblings ...)
  1996-04-11  0:00   ` Robert I. Eachus
@ 1996-04-11  0:00   ` Robert I. Eachus
  1996-04-19  0:00   ` Laurent Guerby
                     ` (2 subsequent siblings)
  10 siblings, 0 replies; 100+ messages in thread
From: Robert I. Eachus @ 1996-04-11  0:00 UTC (permalink / raw)


In article <4kf739$f00@cliffy.lfwc.lockheed.com> l117593@cliffy.lfwc.lockheed.com (Cordes MJ) writes:

  > I don't think that Ken, or anybody else in the user community,
  > expects the ACVC to guarantee 100% compliance with the RM. I did,
  > however, believe that the ACVC provided *some* measure of
  > quality. You two can concentrate on the pathological cases and
  > make a strong argument for doing nothing while we continue to
  > develop safety critical code with *validated* compilers which
  > generate bad code for some of the more simple Ada features.

    Very much the wrong conclusion to draw from our posts.  Robert
Dewar and I have been involved for over fifteen years with many other
experts representing both the user community and the compiler
developers in making Ada compilers the highest quality compilers
available for any language.  None of the ARG members, the AVO testers,
the ACVC developers or even the compiler developers has ever had any
goal for the ACVC suite other than for it to be the best test suite
possible for validating compilers.  On several occasions, I have run
into a bug in the compiler I'm developing, created a test and
forwarded it to the AVO because I thought the ACVC suite should have
caught the bug. (Of course the test also ended up in my regression
suite.) I'm sure Robert Dewar had done the same thing--in addition to
creating a whole suite of chapter 13 tests.

   Now having said that, back to the flip side of the issue.  There
are a number of profoundly disturbing results in computability theory,
among them Post's Correspondence Problem and Godel's Proof.  (Go read
Godel, Escher, Bach if you are not familiar with these issues.)  Most
computer programs never bump into these limits, but compilers by their
nature start out there.  It is trivially easy to prove that the set of
programs accepted by your Ada compiler can never be proven to be the
same set of programs accepted by my compiler.  (In fact, as you will
see, it is quite easy to prove that the sets are different.)  The same
issue prevents your, say, C compiler vendor from proving that his C
compiler accepts all legal C programs and rejects all programs that
are not legal C.

   It gets worse.  Ada is a complex enough language, in the sense of
Godel's proof, that I can demonstrate that ANY Ada compiler either
accepts illegal programs or rejects legal Ada programs.  Compiler
vendors hope that this limit on the perfectibility of compilers can be
pushed into the realm of capacity limits.  In other words, that the
incorrectly rejected programs are so large that any attempt to
compile them runs into memory limits.  But Godel's proof, in spite of
being a constructive proof, fails to provide such a lower limit.  We
do the best we can, test the hell out of the compilers, and keep a
weather eye open.

    Now you know why I switched to C in the paragraph above.  In Ada
there are some cases where the legality of programs depends on whether
or not an exception will be raised by the evaluation of some otherwise
static expression at run time, and it is perfectly legal Ada to put a
Godelian embedding into such expressions.

   It seems farfetched, but I once demonstrated a case where one
compiler accepted a program as legal Ada 83 and another compiler
rejected it as illegal, and BOTH were right... The program was illegal
if, and only if, it raised Constraint_Error when run, and the
expression that could raise Constraint_Error was one where 11.6 allows
the compiler to use a wider range to evaluate it.  So there you have
it, a very unpathological seeming program, and it is undecidable
whether or not it was legal Ada.  I won't include the entire program,
but the key lines are:

   type Int_16 is range -2**15..2**15-1;
   subtype Positive_16 is Int_16 range 1..2**15-1;
   P: Positive_16 := 10;
   ...
   case P is
     when 1..10 => ...
     when 11..Positive_16'LAST => ...
     -- no others clause here...
   end case;

   I'm sure you will all be happy to know that this case was fixed by
the preference rule in Ada 95.  My example program is always legal Ada
95, but I am sure that there are others.  Godel tells me so.  (In fact
I think I can modify this example to avoid the preference rule, but
I'll wait.)

   For those of you trying to puzzle the example out, a few hints.
Case statements require an others clause if the subtype of the
expression is non-static.  Positive_16 is a static subtype if its
bounds are static.  The upper bound is an expression of type Int_16,
and evaluating 2**15 may or may not raise an exception if evaluated at
run-time.  It doesn't matter when it IS evaluated, just whether or not
it would raise an exception, if evaluated at run-time.  The compiler
decides this, then marks Positive_16 as static or non-static.
Incidentally, the first 2**15 is not of type Int_16, and is not allowed
to raise an exception.
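
Filling in the elided parts with assumed placeholder code, a complete
unit built around those key lines would look like this (legal Ada 95,
per the preference-rule fix described above):

   procedure Undecidable is
      type Int_16 is range -2**15 .. 2**15 - 1;
      subtype Positive_16 is Int_16 range 1 .. 2**15 - 1;
      P : Positive_16 := 10;
   begin
      case P is
         when 1 .. 10                => null;  --  placeholder actions
         when 11 .. Positive_16'Last => null;
         --  No others clause: in Ada 83 the legality of this statement
         --  hinged on whether the compiler classed Positive_16 as
         --  static, i.e. on whether evaluating 2**15 in Int_16 would
         --  raise Constraint_Error under the 11.6 permissions.
      end case;
   end Undecidable;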
--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-03-25  0:00 ` Robert Dewar
                     ` (5 preceding siblings ...)
  1996-04-05  0:00   ` Robert I. Eachus
@ 1996-04-11  0:00   ` Robert I. Eachus
  1996-04-11  0:00   ` Robert I. Eachus
                     ` (3 subsequent siblings)
  10 siblings, 0 replies; 100+ messages in thread
From: Robert I. Eachus @ 1996-04-11  0:00 UTC (permalink / raw)



    (I don't know if this discussion is being followed by many
non-participants, but it certainly is on topic.)

    I think Ken is missing the point Robert is making--not that ACVC
type testing of Ada compilers is in need of improvement, but that any
more emphasis on testing _instead_of_other_means_of_improving_quality_
would result in decreased compiler quality.

    I would be surprised if ACT performs less than a MIPS-century of
ACVC and regression testing each year.  Also, from experience, I
suspect that tests that exercise the code generator (i.e. other than
ACVC B or L tests) are run much more often than those tests which are
expected to produce error messages.  (Even if you only changed the
front end, you want to test against most code generators, but running
the B-tests once is probably sufficient.)

    I doubt that the F-22 program tests at this level.  Speaking from
experience, the testing on large programs like this is in the
MIPS-month-per-year range if we are lucky.  That is still one HUGE
lump of testing.  However, the F-22 program probably spends much more
time on formal proofs of correctness--not for the whole system, but
for particular algorithms, like the control laws.  In the compiler
business, a lot of this work is done by tools instead.  I don't know
how much of this ACT does, but I know that, for example, DDC-I uses
VDM tools to formally prove that components meet requirements.  This
approach is very viable on compilers because the effort put into
making assertions about, say, parse trees, is used in testing all code
that modifies those trees.  (And in any good Ada compiler, that
includes a lot, usually including but not limited to, parser, semantic
analysis, high and low level optimizers, and hardware specific
modules.) 
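
    To illustrate the kind of assertion meant here -- an invented toy, not
DDC-I's VDM formalism -- consider a structural invariant over parse trees
that every tree-modifying pass can re-check under test:

   procedure Tree_Check_Demo is
      type Node;
      type Node_Ptr is access Node;
      type Node_Kind is (Leaf, Binary_Op);
      type Node is record
         K           : Node_Kind;
         Left, Right : Node_Ptr;   --  null for a Leaf
      end record;

      --  Invariant: a Leaf has no children; a Binary_Op has exactly
      --  two, each itself well formed.
      function Well_Formed (N : Node_Ptr) return Boolean is
      begin
         if N = null then
            return False;
         end if;
         case N.K is
            when Leaf      => return N.Left = null and N.Right = null;
            when Binary_Op => return Well_Formed (N.Left)
                                and Well_Formed (N.Right);
         end case;
      end Well_Formed;

      L1 : constant Node_Ptr := new Node'(Leaf, null, null);
      L2 : constant Node_Ptr := new Node'(Leaf, null, null);
      T  : constant Node_Ptr := new Node'(Binary_Op, L1, L2);
   begin
      --  Re-check after each pass that rewrites the tree.
      if not Well_Formed (T) then
         raise Program_Error;
      end if;
   end Tree_Check_Demo;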


--

					Robert I. Eachus

with Standard_Disclaimer;
use  Standard_Disclaimer;
function Message (Text: in Clever_Ideas) return Better_Ideas is...




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-12  0:00                     ` Philip Brashear
@ 1996-04-12  0:00                       ` Robert Dewar
  0 siblings, 0 replies; 100+ messages in thread
From: Robert Dewar @ 1996-04-12  0:00 UTC (permalink / raw)


Phil Brashear said

"Robert Dewar says that witness testing for Ada validation costs on the
order of $20,000.  That probably depends on the AVF doing the testing.
I think you might find that some AVFs can do testing of a single
implementation with no complications for $12,000 or less.  As in many
areas, it pays to shop around."

Of course that depends on travel costs too, but sure, it does pay to
shop around, and indeed the price is less than $20K rather than more.
I was just giving a rough estimate; if you like, use $10-20K to
get an idea of the range (the cost actually can be far less if you do
a bunch of targets at the same time).





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-10  0:00                   ` Robert Dewar
@ 1996-04-12  0:00                     ` Philip Brashear
  1996-04-12  0:00                       ` Robert Dewar
  1996-04-15  0:00                     ` Tiring Arguments Around (not about) Two Questions Ken Garlington
  1996-04-18  0:00                     ` Ada Core Technologies and Ada95 Standards John McCabe
  2 siblings, 1 reply; 100+ messages in thread
From: Philip Brashear @ 1996-04-12  0:00 UTC (permalink / raw)


Robert Dewar says that witness testing for Ada validation costs on the
order of $20,000.  That probably depends on the AVF doing the testing.
I think you might find that some AVFs can do testing of a single
implementation with no complications for $12,000 or less.  As in many
areas, it pays to shop around.

(Yes, I know what "on the order of" means.  However, orders of magnitude
aren't terribly useful in talking about competitive pricing.  Forty per
cent savings might be.)

Phil Brashear





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-10  0:00             ` Robert Dewar
@ 1996-04-15  0:00               ` Ken Garlington
  1996-04-16  0:00                 ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-15  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> Ken said
> 
> "I know that NPL has a tool that they sell that tests Ada compilers for bugs, tha
> t
> apparently provides much more coverage than the ACVC. Why should such a tool
> exist outside of the validation/certification process?"
> 
> This is not true at all, and I guess Ken is only aware of this tool by
> rumour, since if he had used it he would know that it is not in the business
> AT ALL of providing coverage testing.

Slow down with the speed reading, Dr. Dewar. I said "provides more coverage,"
not "does coverage testing." There is a difference!

The rumor, which some guy named Dr. Brian Wichmann fed me, is that this tool
does the following:

> Instead, this is a stress-testing tool: it generates random, very complex
> (and generally very unrealistic) examples of expressions and other
> constructs to see if the compiler can be broken by such stress testing.

I won't comment on the "unrealistic" part, but at least Dr. Wichmann seems to
think that this tool tells me more about compiler quality than if I don't use it.

> The tool is incidentally available from NPL to users, so, Ken, it is
> certainly something you could use to test a compiler yourself.

I could also run the ACVC myself. What's the point? 

DO THE MATH. Each user pays for [fill in your favorite process/tool/test here],
or it's done once and the results made available to all users. I wonder which
is more cost-effective?

> In fact, I talked to Brian Wichmann (the author of this tool and a similar
> one for Pascal), and the results they have obtained with these tools are
> quite surprising at least to me, in the extent to which they show quality
> differences between Ada and Pascal compilers. Most (all?) of the Pascal
> compilers they have tested have exhibited safety defects (defined as the
> generation of incorrect code). None of the Ada compilers have shown
> safety defects -- they have managed to break them but not persuaded
> them to generate wrong code.

"They have managed to break them."

Sounds like a good test to me. Sounds like a test that found something that
neither the vendor's process, nor the ACVC, found before release. Too bad compiler
vendors don't use tools like this _before_ their products are released, to improve
the quality of their software.

Of course, no vendor could afford such a tool, nor could the AVOs afford to
have such a tool. Out of the question! So, I guess the end users will have to
continue to be the guinea pigs, and test the compilers.

By the way, these results surprise me too, since I have certainly managed
to get Ada compilers to generate wrong code. Of course, that doesn't make this
test bad, since no test can guarantee that all possible errors can be caught.
(Notice: _I_ said this.) As you point out:

> Now in practice, I would expect that big projects such as Ken's can point
> to safety defects (defined this way) in the compilers they have used, and
> just as the ACVC cannot 100% guarantee conformance, the NPL tests cannot
> 100% guarantee safety, but they are a measure.

But not a necessary measure, since the vendor process + the ACVC will already
provide high-quality compilers. The tool is unnecessary, if I understand your
previous posts. Right?

> I find it interesting that
> the Ada compilers fare so much better than the Pascal compilers. Brian
> ascribes this at least in part to the ACVC process.

Well, as long as Ada compilers are higher quality than Pascal compilers, I guess
I have no reason to gripe, eh?

Tom Peters tells this wonderful story about an unnamed company, who hired Mr. Peters
as a quality consultant. At a meeting, one of the managers in frustration said, "Hey!
Get off our backs! We're no worse than anyone else!"

Mr. Peters liked that last expression so much, he put it on a business card. I guess, in
the Ada world, it would have looked like:

      Joe Smith
      XYZ Ada Tools
      "We're No Worse Than Anyone Else, and Better Than Pascal!"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-10  0:00       ` Robert Dewar
@ 1996-04-15  0:00         ` Ken Garlington
  1996-04-16  0:00           ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-15  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> In fact I think Ken *did* expect the ACVC to guarantee
>  compliance, and unfortunately this is not possible.

Nope. _You_ keep claiming I said this, and I keep saying, "show me
where I said this, and I will retract it."

I asked two questions. Can you recall them? Probably not.

> 
> >on the pathological cases and make a strong argument for doing nothing
> >while we continue to develop safety critical code with *validated*
> >compilers which generate bad code for some of the more simple Ada
> >features.
> 
>  Unfortunately simple black box testing cannot even guarantee that 100%
>  of simple Ada features are completely accurately implemented.
> 
> >Instead of flaming us for expecting too much, take our comments and
> >feedback from your "larger" user community, and challenge yourself
> >(i.e., the Ada compiler vendors) to provide a high quality
> >product that meets all of our needs.
> 
>  Well of course, we certainly do aim at that.

Aim at flaming users for expecting too much?

Or aim at providing a high quality product?

When you decide you can discuss the latter issue without the bile rising,
let me know.

>  The point is that the
>  black box testing of the ACVC suite can only be one part of that process.
>  At ACT for example, we have an extremely rigorous development process.
>  Among other steps we take, we run our complete regression suite....

Unfortunately, you don't have time to discuss steps 2-N. Let me know if you
do.

>  As with most complex programming tasks, quality is achieved with a
>  multi-faceted approach.

A regression test suite (which may or may not be a good one), and some other
unspecified steps. Facets > 1? No way for me to tell!

>  The important thing is to realize that the ACVC can only ever be one
>  component of this multi-faceted approach...

Unfortunately, real facets fit together. Does the ACVC fit with the other
facets (e.g. at the vendor)? Should the ACVC grow (quantitatively or qualitatively)?
Should it shrink? Who knows? Who will ever know, given the open hostility to
discussing the subject?




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-10  0:00                 ` Robert Dewar
  1996-04-10  0:00                   ` Robert Dewar
@ 1996-04-15  0:00                   ` Ken Garlington
  1996-04-16  0:00                     ` Robert Dewar
  1 sibling, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-15  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> A comparison would be if 20 different vendors wrote completely different
> F22 software, with different cockpit interfaces.

Twenty sounds a little low, but that's close enough....

> Now trying to build a
> regression suite corresponding to all bugs found in all 20 versions
> would be very much more difficult.

Actually, the only way the Air Force allows us to fly an aircraft is
if we do, in fact, build such a regression suite.

Want to know how we do it? Well, I guess it's not worth discussing, since:

> the domains are
> VERY different, and trying to apply the compiler model to the F22 makes
> as little sense as trying to apply the F22 model to a compiler!




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Tiring Arguments Around (not about) Two Questions
  1996-04-10  0:00                   ` Robert Dewar
  1996-04-12  0:00                     ` Philip Brashear
@ 1996-04-15  0:00                     ` Ken Garlington
  1996-04-15  0:00                       ` Gary McKee
  1996-04-17  0:00                       ` Kenneth Almquist
  1996-04-18  0:00                     ` Ada Core Technologies and Ada95 Standards John McCabe
  2 siblings, 2 replies; 100+ messages in thread
From: Ken Garlington @ 1996-04-15  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> Ken said (double >> are mine)
> 
> >> Ken Garlington asks why it is infeasible for a compiler vendor to deliver
> >> the source code to the AVF for analysis.
> 
> >Actually, I didn't ask this, but we can talk about it if you like...
> 
>  There must be two Ken Garlingtons around; the other one said in
>  a previous post:
> 
>    "I could think of several ways to include other types of testing in an
>    ACVC, e.g.  a requirement to deliver of source code and supporting
>    data to an AVF for analysis."
> 
>    (this is an exactly cutted-and-pasted quote :-)

It's a quote. However, it isn't a question, so I think you've proven that
I did not, in fact, ask this. Note also that I said "e.g." That's
my own personal shorthand for the phrase "for example." 

> >What I actually asked was, "Is there some way to modify the scope of the ACVC
> >process to improve compiler quality across all vendors? Or, is there something
> >outside the scope of the ACVC that could be done to improve compiler quality
> >across all vendors?"
> 
> >Your answer: No, because we'd have to make an _investment_ to improve
> >compiler quality. To get to 100% quality (whatever that means) would take
> >too much money (and is technically infeasible). Therefore, no investment
> >should be made.
> 
>    You miss the point entirely (seems to be happening consistently, so I
>    must not be clear, hence my attempt to clarify!)
> 
>    Of COURSE! more investment is desirable to improve quality.

Any ideas on where this investment should be applied, to improve quality
"across all vendors"? (Really, you have to stop quoting me without
responding to the quote!)

>    ACT is
>    a very focused company, our only business is improving the quality
>    of GNAT!

Great! Let me know if/when you're interested/have time/have materials
available to actually _discuss_ what you do/plan to do (other than
add more tests to the regression test suite.)

Until then, this statement is pretty much indistinguishable from magic.

>    The issue is whether spending more resources on the ACVC
>    testing is the way to do it. I very much doubt it.

Why? You've agreed that SEI III/ISO 9001 is a good thing, for example.
Why is adding this to the things which AVOs do/collect so terrible?

>    One thing we have not discussed is the ways in which the ACVC can,
>    if not carefully handled, actually *decrease* compiler quality.
>    Right now the rules in the Ada game are that you MUST pass the
>    ACVC tests, above anything else, and in particular, above any other
>    testing or quality control procedures that may make sense.

Right. Is this good or bad, to put the ACVC above other testing or quality
control procedure that may make sense? Possibly, the ACVC should be
changed to focus more on quality control procedures, and less on tests.
Unfortunately, this will never happen. The ACVC is perfect, wouldn't
change a thing!

>    Passing the ACVC suite is not a trivial exercise, and I think all
>    vendors would agree that they have had to devote considerable resources
>    to this task. These are resources not available for other quality
>    improving tasks. This means that we have to be very careful that we
>    do not divert too many resources and reach a point of diminishing
>    returns.

Absolutely! So, let's cut the amount of effort to pass the ACVC by 10%, and we'll
get a 10% increase in compiler quality, since the vendors will have more
money for other improvements. Why not?

>    For example: Suppose we spent ten times as much on the ACVC suite, and
>    had ten times the number of tests (that's really the only way I
>    could see spending this investment, since the current tests are
>    as effective as the contractor (SAIC) and the review team know
>    how to make them).

However, those tests only measure conformance, right? Oh, that's right,
conformance = quality. Never mind...

>    As Ken has pointed out there are many other activities besides ACVC
>    testing that can contribute to quality:
> 
>       Process control (with ISO 9000 or SEI audits)
>       Systematic white box testing (e.g. path testing)
>       Stress testing (the NPL tool)
>       Performance testing (e.g. with the ACES and PIWG tools)
>       Systematic regression testing
>       Purpose built test suites for particular compilers
> 
>    All these steps may be extremely useful, and compete with the ACVC
>    process for resources.

Unless, of course, there was a way to _include_ these in the ACVC process.
Nah, impossible! An AVO can ask a vendor, "please sign this statement that
you _didn't_ implement extraneous features," but ask them to sign a statement
that they did "systematic regression testing?" Impossible!

>    If you are acquiring an Ada compiler, you certainly do more in your
>    evaluation than assure that it is validated. You may well, if your
>    experience shows this is worthwhile, require additional testing,
>    or for example, require that the compiler vendor have ISO 9000
>    certification. Certainly IBM did quality audits on Ada vendors'
>    internal process control for the NASA space station project, and
>    this seems entirely appropriate to me.

Yep, the users are the guinea pigs, all right.

>    I think part of the confusion here is that Ken (and John) use the
>    ACVC as a kind of code word for "all the things that might be done
>    to ensure quality", but in fact the ACVC is and has always been
>    defined to be restricted to black box conformance testing.

And now, I have to shock you to your very core. I am about to suggest
something so heretical, so impossible, that it will be completely
incomprehensible to you.

Ready?

CHANGE THE DEFINITION. OR, CREATE THE ACQV (ADA COMPILER QUALITY
VALIDATION) PROCESS. I DON'T CARE. EITHER WILL ANSWER:

> >What I actually asked was, "Is there some way to modify the scope of the ACVC
> >process to improve compiler quality across all vendors? Or, is there something
> >outside the scope of the ACVC that could be done to improve compiler quality
> >across all vendors?"

You can quote the questions, but you can't answer them.

>    Perhaps
>    what has gone wrong here is that Ken and other Ada users have been
>    confused into thinking that the ACVC was intended to encapsulate
>    the entire question of quality assessment of compilers, and that
>    was never its intention.

Nope. What has gone wrong is that you can quote the questions, but
can't answer them.

>    Let's go back to the ISO 9000 question for a moment. Suppose that your
>    assessment shows that it is essential that a compiler vendor have ISO
>    9000 certification (some do; Alsys obtained this certification, and what
>    was involved was basically a lot of paperwork, writing down the
>    procedures that had always been followed in formal and precise form).

By the way, that "paperwork" (having a well-documented process) turns out
to also be part of SEI's process, and in fact there's starting to be a rumor
that you can't be an effective software development organization without
well-documented processes.

>    Then it is entirely reasonable for you to require this as part of
>    your procurement process. Naturally you won't want to do this unless
>    your careful analysis shows that this really does contribute to
>    quality, because otherwise you will be requiring vendors to divert
>    resources, but if your analysis shows it's a good idea, and a good
>    way to spend resources, then fine.

Yep, the users take it on the chin financially again.

>    BUT! It is not appropriate for NIST conformance testing to include this
>    criterion, becaues the Ada standard does not (and could not) say anything
>    about process, since it is a language standard, and hence only defines
>    the semantics of the language. So you cannot look to the ACVC for help
>    here.

Apparently, not without having to (oh, my God!) change the scope of the ACVC to
include more than NIST conformance testing. (Still haven't answered the
second question, by the by.)

>    Similarly, you may determine that performance of generated code is
>    critical, and consequently place a lot of weight on the ACES test
>    results (if your analysis shows that the ACES accurately captures
>    qualities that are important to you). But this cannot be part of
>    NIST conformance testing, since the Ada standard does not (and could
>    not) say anything about performance.

Apparently, not without having to (oh, my God!) change the scope of the ACVC to
include more than NIST conformance testing. (Still haven't answered the
second question, by the by.)

>    Choosing among available compilers is not an easy task.

Actually, it's very easy. For many host/target pairs, there are only a few vendors.
In some cases, only one. You don't like the quality of that vendor? Tough! Spend
your own money to fix it! Ada users got plenty of cash! They're rolling in it!

>    I think that,
>    particularly early on, procurement officers hoped that the ACVC let
>    them off the hook -- "I'll stick to validated compilers and I will
>    be guaranteed that I have reasonable tools." Unfortunately, it is
>    not so simple, and the evaluation process for tools is much more
>    complex. The ACVC testing helps as a kind of first-cut qualification,
>    but that is all it is, and all it pretends to be. Thorough evaluation
>    will have to go well beyond the "Is_Validated" predicate.

Too bad there's no way to implement a common minimal set of processes/tests/etc.
beyond the ACVC, that could be done once for all users. Oh well, they're just
users. Who cares?

>    That *is* a surprise. I admit I am much more familiar with MoD regulations
>    in the Safety Critical area, and with typical British and European
>    procedures. These tend to be much more oriented to formal specifications
>    and formal proof, and so the certification process tends to be much
>    more of the cost, as far as I can gather.

Oh, you wanted the _certification_ process cost! That's different than the IIV&V
cost, of course. What's certification got to do with my questions?

>   First of all, I know of no data that would suggest that Beizer's results
>   extend to the compiler domain, so that would have to be investigated.

Investigated by whom? There's no incentive to answer my questions, so who cares?

>   The danger here is that you enormously increase the cost of ACVC testing.
>   There would be two ways of implementing what you suggest:
> 
>     1. Require witness testing of path testing. This seems entirely
>        infeasible. Right now, witness testing of the absolutely fixed
>        set of tests costs of the order of $20K, and witness testing
>        for full path coverage would require a huge amount of analysis,
>        and a lot of very specialized expertise.

Concur. (Surprise!)

> 
>     2. Require a DOC-like document to be signed saying that full path
>        testing should be done

Say, that's a good idea. Wish I'd thought of it.

>   Either of these approaches would in addition require the technical work
>   for full path testing.

Unless you only required 25%/50%/75% path coverage, of course.

>   One very significant problem would be the issue
>   of deactivated code. ... I suspect that systematic testing of
>   a complete compiler would indicate that this problem is intractable
>   for compilers, or at least very difficult.

Who can say? It's not worth exploring, of course.

>   In any case, the issue is whether you would increase quality or decrease
>   quality by this means. It seems to me that this would put too much
>   emphasis on conformance, and too little emphasis on performance.

Are there tests/process requirements/etc. that would add emphasis to performance?
(I don't know what the distinction is in your mind between conformance and
performance, but maybe I'll get a constructive suggestion by asking the question.)

>   Note also that in the case of optimization algorithms, proving that they
>   do not violate the language rules is significant, but if that is all you
>   do, you miss the point! You want to concentrate at least some effort on
>   showing that the optimizations work.

Why bother?

>   Looking at our customer base, the problems that people have (that are
>   due to GNAT, rather than customer code) fall into several different
>   categories.... Quality improvement for GNAT involves addressing all of these areas.

No it doesn't! (To which you can reply, "Yes, it does!" etc. etc.)

Let me know if you ever answer my two questions.


>   The
>   ACVC tests concentrate entirely on points 1 and 2.

No, they don't. These tests are not designed to find compiler bugs (safe or
unsafe). They are there to provide some level of assurance as to conformance
to the language specification, which is (as you keep pointing out) a different
issue.

> Certainly these are
> important areas, especially 2, and the few times we have run into code
> generation problems, we certainly consider them to be of the highest
> priority.

Too bad there's no community-wide initiative to decrease problems in areas
1 and 2 (or 3 or 4 or...)

>   Your suggestion of path testing still adds only to points 1 and 2, and
>   I fear in fact that your entire emphasis is on points 1 and 2.

"Let's go back to the ISO 9000 question for a moment."

How come you can quote my suggestions, but not remember them?

But never mind that. I'll gladly accept your statement that we need to
address 3-8. So, in the context of 1-8, let me pose two questions:

"Is there some way to modify the scope of the ACVC
process to improve compiler quality across all vendors? Or, is there something
outside the scope of the ACVC that could be done to improve compiler quality
across all vendors?"

Can you answer these for 1-8 any better than for 1-2?

>   The
>   trouble is that for most of our customers, 3-8 are also important.

Certainly, it's trouble for the users!

>   Again, the issue is that the domains are very different. Your assumption
>   that F22 techniques and observations can apply to compilers is no more
>   valid than if I thought that you should easily be able to make a
>   retargetable flight software program that would work on the F22 or 777
>   with minimal work :-)

Actually, the same "techniques and observations," surprisingly enough, can
apply to both the F-22 and 777.

Of course, I'll gladly accept that your domain requires different processes,
which is why I couched specific proposals as suggestions. Care to fill the
void with suggestions that _will_ work in your domain? The only one I've
heard is "let the user do it." You are right, in that this suggestion won't
fly (so to speak) in MY domain.

> >Are you saying that we're wasting money re-running ACVC tests on changed
> >products? Maybe we could use that money to do process audits! See, that's
> >exactly the kind of thinking I'm looking for here. Good idea!
> 
>   Sorry, I never said that, I think it is definitely very useful to rerun
>   the ACVC suite whenever any change is made to the compiler. We depend
>   on this as one (among several) of our continuing internal quality audit.

Care to name something new in this "several"?

>   I do find that surprising. Talking to the folks doing similar development
>   in England, they seem to be very much more focused on formal specifications.
>   Of course ultimately one important thing to remember is that the requirement
>   on your F22 software is not that it be 100% correct, but that in practice
>   it be 100% reliable, and these are rather different criteria

Actually, I have neither requirement.

You are correct that, in England, they focus on formal specifications. DoD/FAA
does not require flight control software in the U.S. to have a formal
specification (although DO-178 permits it).

> >What I actually asked was, "Is there some way to modify the scope of the ACVC
> >process to improve compiler quality across all vendors? Or, is there something
> >outside the scope of the ACVC that could be done to improve compiler quality
> >across all vendors?"
> 
>   Not clear, the danger as I note above is that if you move in the wrong
>   direction here, you can easily damage compiler quality.

And, as far as I can tell, your opinion is that all directions are wrong. If
you care to refute this, you should in fairness indicate at least one _right_
direction.

Furthermore, you appear to find any attempt to make the issue less "Not clear"
to be anathema.

>   You missed the point of my :-) We expect GNAT to succeed, and to invest
>   substantial resources in improving its quality even if we don't get your
>   $25 million. There is no doubt that your $25 million check would have a
>   positive impact on GNAT, but we can manage without it :-)

> I notice you use the word "conformance" rather than "quality". Are these
> synonyms, to you? They aren't to me. I suspect they aren't to Mr. McCabe,
> or most other vendors.
> 
>   No of course they are not synonyms! That's the whole point.

Not _my_ whole point. I suspect Mr McCabe would disagree as well.
Let me know when you want to talk about quality.

> ACVC measures
>   conformance which is just one aspect of quality. What did I ever say that
>   made you think I think these are synonymous?

Specifically? Every time I ask about quality, you respond with arguments on
conformance. You say that conformance is an aspect of quality, yet here's
your list of where users have trouble:

     1. Safe compiler bugs (compiler bombs or gives an incorrect message)
     2. Unsafe compiler bugs (compiler generates wrong code)
     3. Performance is inadequate
     4. Optional features defined in the RM are needed
     5. Features not defined in the RM are needed (bindings, libraries,
        preprocessors, other tool interfaces).
     6. Error messages are confusing
     7. Implementation dependent decisions are different from other
        compilers.
     8. Capacity limitations

Too bad there are no industry-wide efforts to tackle _these_....

>   The whole point of my
>   comments is that they are NOT synonymous.

Let me know if you ever decide to answer my questions.




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Tiring Arguments Around (not about) Two Questions
  1996-04-15  0:00                     ` Tiring Arguments Around (not about) Two Questions Ken Garlington
@ 1996-04-15  0:00                       ` Gary McKee
  1996-04-16  0:00                         ` Ken Garlington
  1996-04-17  0:00                       ` Kenneth Almquist
  1 sibling, 1 reply; 100+ messages in thread
From: Gary McKee @ 1996-04-15  0:00 UTC (permalink / raw)


In article <31729C6E.4903@lfwc.lockheed.com>,
Ken Garlington <garlingtonke@lfwc.lockheed.com> wrote:

 > Specifically? Every time I ask about quality, you respond with arguments on
 > conformance. You say that conformance is an aspect of quality, yet here's
 > your list of where users have trouble:
 > 
 >      1. Safe compiler bugs (compiler bombs or gives an incorrect message)
 >      2. Unsafe compiler bugs (compiler generates wrong code)
 >      3. Performance is inadequate
 >      4. Optional features defined in the RM are needed
 >      5. Features not defined in the RM are needed (bindings, libraries,
 >         preprocessors, other tool interfaces).
 >      6. Error messages are confusing
 >      7. Implementation dependent decisions are different from other
 >         compilers.
 >      8. Capacity limitations
 > 
 > Too bad there's no industry-wide efforts to tackle _these_....
 > 
 > >   The whole point of my
 > >   comments is that they are NOT synonymous.
 > 
 > Let me know if you ever decide to answer my questions.
--------------------------------------------------------
Ken, As I mentioned in my previous message, there is a WELL-ESTABLISHED
product/technology that is designed to address the quality issues, details
are below. The ACES was developed as a result of a multi-year effort from a
consortium of govt/industry/academia, certainly an "industry-wide effort"
such as you suggest.

The ACES specifically addresses several of the topics itemized above. In
particular, items (1), (2), (3), (6), (7), and (8) are effectively
addressed; sorry that we missed (4) and (5). Why don't we transition this
discussion from the ACVC (which is NOT about quality) to a discussion of
the ACES (which is about quality)?
--------------------------------------------------------
There is a significant difference in purpose between validation (ACVC) and
evaluation (ACES). The ACES test suite exists PRECISELY for the purpose of
measuring/evaluating the "quality" of Ada compilation systems. It is quite
large, complex, and very useful. I have used this system extensively and I do
recommend it for the quality assessment work that you are interested in.
--------------------------------------------------------
INTRODUCTION TO THE ADA COMPILER EVALUATION SYSTEM (ACES)

The Ada Compiler Evaluation System (ACES) provides performance tests, test
management software, and analysis software for assessing the performance
characteristics of Ada compilation and execution systems.
Functionality/usability assessor tools are also provided for examining the
implementation's diagnostic system, library management system, and symbolic
debugger, as well as for determining compile-time and run-time capacities
of the implementation.

http://sw-eng.falls-church.va.us/AdaIC/testing/aces/
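
As a rough sketch of the shape such a performance test takes (illustrative
only -- this is not an actual ACES test, and the timed operation is an
invented stand-in):

   with Ada.Text_IO; use Ada.Text_IO;
   with Ada.Calendar; use Ada.Calendar;

   procedure Timing_Sketch is
      --  Illustrative only: time a stand-in operation over many
      --  iterations and report the elapsed time, as a performance
      --  test suite might.
      package Dur_IO is new Fixed_IO (Duration);
      Iterations : constant := 1_000_000;
      Sum        : Integer  := 0;
      Start_T    : constant Time := Clock;
   begin
      for I in 1 .. Iterations loop
         Sum := Sum + 1;   --  stand-in for the feature under test
      end loop;
      Put ("Elapsed seconds: ");
      Dur_IO.Put (Clock - Start_T);
      New_Line;
      --  Print Sum so the loop cannot be optimized away entirely.
      Put_Line ("Sum =" & Integer'Image (Sum));
   end Timing_Sketch;

A real suite would of course subtract loop overhead, repeat runs, and
analyze the results; this only shows the basic time-and-report shape.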




--------------------------------------------------------------------
Gary McKee                           McKee Consulting
gmckee@cloudnine.com                 P. O. Box 3009
voice: (303) 795-7287                Littleton, CO 80161-3009
WWW home page =>                     <http://www.csn.net/~gmckee/>
--------------------------------------------------------------------




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-15  0:00               ` Ken Garlington
@ 1996-04-16  0:00                 ` Robert Dewar
  1996-04-16  0:00                   ` Ken Garlington
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-16  0:00 UTC (permalink / raw)


Ken Garlington said

"Slow down with the speed reading, Dr. Dewar. I said "provides more coverage,"
not "does coverage testing." There is a difference!

The rumor, which some guy named Dr. Brian Wichmann fed me, is that this tool
does the following:"

Ah, I thought by coverage you meant that it covered more features in the
language, not coverage testing; that's something different. And the point
is that Brian's test generator does not test any features not tested
pretty thoroughly in the ACVC suite. What it does is stress test these
features.

Any additional testing is always likely to be helpful, and although
the NPL test suite is limited in scope (it is, for example, strictly
Ada 83 still, since as far as I know no one has funded the update
to Ada 95), it is definitely a useful tool. We certainly plan to
use this tool in testing GNAT.





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-15  0:00         ` Ken Garlington
@ 1996-04-16  0:00           ` Robert Dewar
  1996-04-16  0:00             ` Ken Garlington
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-16  0:00 UTC (permalink / raw)


Ken writes

"Unfortunately, real facets fit togethger. Does the ACVC fit with the other
facets (e.g. at the vendor)? Should the ACVC grow (quantitatively or qualitative
ly)?
Should it shrink? Who knows? Who will ever know, given the open hostility to
discussing the subject?"

That's a bit puzzling; I thought we had been discussing this for a while.
I am certainly not hostile to discussing it.

The concern with the ACVC has always been that it both contributes to
quality and detracts from it. It contributes by doing a lot of
thorough testing of a wide spread set of features.

It detracts by forcing resources to be spent on validation that might
better be spent on other activities that would be more cost effective
in improving compiler quality.

In developing the 2.1 suite, the answers to Ken's two questions have
both been yes. Yes, it should grow, by adding more tests that are more
user-oriented, i.e. more realistic with respect to typical use of the
language, and of course it should grow by testing the Ada 95 features.

Yes, it should shrink, by removing minimal value tests, or tests of
pathological features not worth testing.

Ken, have you examined the tests in the new test suite? I think they are
a significant step forward from the 1.11 tests, and it would be interesting
to know what you think.

The test suite development is an open process, and prereleased versions
of the tests are available. Comments from everyone, including certainly
users of Ada 95, are welcome.





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-15  0:00                   ` Ken Garlington
@ 1996-04-16  0:00                     ` Robert Dewar
  1996-04-16  0:00                       ` Ken Garlington
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-16  0:00 UTC (permalink / raw)


Ken said

"> A comparison would be if 20 different vendors wrote completely different
> F22 software, with different cockpit interfaces.

Twenty sounds a little low, but that's close enough...."

Are you really saying that the ENTIRE F22 software is duplicated by 20
vendors? That's hard to believe. Can you name the 20 vendors in this case?
I am not talking about 20 vendors writing different pieces of the same
system, but 20 COMPLETELY SEPARATE versions of the entire program.

Why on earth would the Air Force do this (have 20 versions of the software
written by different primes)?





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-16  0:00                 ` Robert Dewar
@ 1996-04-16  0:00                   ` Ken Garlington
  1996-04-16  0:00                     ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-16  0:00 UTC (permalink / raw)


Believe it or not, Robert Dewar wrote these two sentences in the
same post:

> ...the point is that Brian's test generator does not test any
> features not tested pretty thoroughly in the ACVC suite.

> We certainly plan to use this tool in testing GNAT.

Sounds like either (1) a waste of money, given that the ACVC tests
these features thoroughly, or (2) a good idea for all compiler vendors
to do. 

Perhaps some mechanism could be devised to encourage or
require _all_ vendors to use this tool. (Nah - how could we
ever get all vendors to agree to use some common measure of
compiler quality? It's insane! Except for the ACVC, of course...)

--
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-16  0:00           ` Robert Dewar
@ 1996-04-16  0:00             ` Ken Garlington
  1996-04-16  0:00               ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-16  0:00 UTC (permalink / raw)


Robert Dewar wrote:

> The concern with the ACVC has always been that it both contributes to
> quality and detracts from it. It contributes by doing a lot of
> thorough testing of a wide spread set of features.

And the reason why we believe this is thorough testing? For the
same reason we thought ACVC 1.x was thorough - because the
people involved worked as hard as they could.

There are other ways to determine if testing is thorough. Why are none of
them used?

> It detracts by forcing resources to be spent on validation that might
> better be spent on other activities that would be more cost effective
> in improving compiler quality.

Name one activity that all vendors should be doing, exclusive of ACVC testing.
Just one. If you can, I will accept this statement. If you can't, why should
I believe that vendors will spend money on activities to improve quality,
as opposed to more glossy brochures?

> In developing the 2.1 suite, the answers to Ken's two questions have
> both been yes. Yes, it should grow, by adding more tests that are more
> user-oriented, i.e. more realistic with respect to typical use of the
> language, and of course it should grow by testing the Ada 95 features.

This will, of course, have the opposite effect: Adding these tests will,
in fact, detract from quality.

(I have no proof of this, but since you can offer no evidence as to how
the current ACVC contributes to quality - other than "it's better than
Pascal!" - I think my statement is just as valid.)

> Yes, it should shrink, by removing minimal value tests, or tests of
> pathological features not worth testing.

Nope. By deleting those tests, the quality of many compilers will actually
diminish. (Why not?)

> Ken, have you examined the tests in the new test suite? I think they are
> a significant step forward from the 1.11 tests, and it would be interesting
> to know what you think.

I guess I would be more impressed with objective evidence of the change in
test quality, rather than just an opinion (even my own). I guess that
objective evidence will just have to wait until the compilers are delivered,
and we users will be the guinea pigs once again.

It doesn't have to be that way. However, compiler vendors have their own
domain, I guess.

> The test suite development is an open process, and prereleased versions
> of the tests are available. Comments from everyone, including certainly
> users of Ada 95, are welcome.

I don't think you've quite grasped the concept. It's a radical one, to be
sure, requiring out of the box thinking. In summary, it is: "Rather than
change the number of tests, perhaps we should be thinking of changing the
_types_ of tests, using 'test' in the broad sense of 'a measure of quality'".

In the meantime, those regression test suites continue to grow....

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-16  0:00                     ` Robert Dewar
@ 1996-04-16  0:00                       ` Ken Garlington
  1996-04-16  0:00                         ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-16  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> Ken said
> 
> "> A comparison would be if 20 different vendors wrote completely different
> > F22 software, with different cockpit interfaces.
> 
> Twenty sounds a little low, but that's close enough...."
> 
> Are you really saying that the ENTIRE F22 software is duplicated by 20
> vendors?

Nope. Of course, that's not what you asked. See original quote above. :)

However, we've certainly had multiple vendors write the same software (N-version
programming) on previous programs. I don't like that approach myself, but
that's a different discussion.

Of course, it doesn't matter whether or not twenty vendors wrote the same
software, or different software going into the same system, with respect to
the question you asked.




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-02  0:00               ` Robert A Duff
@ 1996-04-16  0:00                 ` John McCabe
  1996-04-16  0:00                   ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: John McCabe @ 1996-04-16  0:00 UTC (permalink / raw)


bobduff@world.std.com (Robert A Duff) wrote:

>>We do this by using LDRA Testbed with limits on the minimum level of
>>statement and branch coverage of 100%, and 70% on LCSAJs. I'm not sure
>>exactly where those figures are derived from, but they seem
>>reasonable. The only problem here is that we've found a few bugs in
>>that tool as well!

>Surely that's not the *only* problem.  Surely the test cases fail to
>cover every combination of requirements, and therefore some bugs slip
>through.

To some extent this is obviously true. However, one way we have tried
to minimise the effect of this is to keep a close working relationship
with our customer, to ensure that we are as completely aware as
possible of what the possible combinations of requirements are. In
this way our testing can simulate the real operation of the instrument as
closely as possible. One particular case of this is where we have been in
discussions with our customer as to how they intend to implement
the command sequencing on the 1553 interface. Through these
discussions we have found out that the timing requirements of events
at our end of the bus are also used at the other end of the bus as
minimum inter-command gaps. Therefore if we complete our command
processing (i.e. initiate events and finish any housekeeping etc)
within those times, we do not need to consider the case where the next
command arrives while we are still processing the last one.
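
A minimal sketch of the budget check this implies (the subprogram names
and the 500 microsecond figure are invented for illustration; the real
gap comes from the bus interface specification):

   with Ada.Real_Time; use Ada.Real_Time;
   with Ada.Text_IO;

   procedure Check_Command_Budget is
      --  Hypothetical minimum inter-command gap guaranteed by the
      --  other end of the 1553 bus.
      Gap : constant Time_Span := Microseconds (500);

      procedure Process_Command is
      begin
         null;  --  Stand-in for event initiation and housekeeping.
      end Process_Command;

      Start_T : constant Time := Clock;
      Elapsed : Time_Span;
   begin
      Process_Command;
      Elapsed := Clock - Start_T;
      if Elapsed > Gap then
         Ada.Text_IO.Put_Line
           ("Budget exceeded: next command could arrive mid-processing");
      else
         Ada.Text_IO.Put_Line ("Command processing fits the gap");
      end if;
   end Check_Command_Budget;

If the check always passes with margin, the overlapping-command case
need not be handled, which is exactly the simplification described above.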

>>... Based on
>>the methods you and I use, would it not be better to use the ACVC
>>suite as a basis for the compiler vendors tests, and also require the
>>compiler vendors to submit their own test suites for approval.

>At least *some* of the compiler vendors' tests include proprietary code
>from their customers, and they simply cannot release that code.

I can see that being a minor problem. I would have thought it would be
fairly simple to produce a minimal, modified version of most problem
reports that could be released without jeopardising the
confidentiality of the customer's code. I, for example, tend to cut my
problem reports down to the bare minimum of code which causes a
problem.
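
For what it's worth, a cut-down report of that kind often ends up as
small as this (the construct and the claimed failure are invented here,
purely to show the shape such a report takes):

   --  Minimal reproducer (hypothetical): a multi-thousand-line unit
   --  cut down to the one construct that triggered the report.
   procedure Repro is
      type Rec (D : Boolean := False) is record
         case D is
            when True  => X : Integer;
            when False => null;
         end case;
      end record;
      R : Rec;
   begin
      R := (D => True, X => 1);  --  hypothetically mishandled line
   end Repro;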



Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Tiring Arguments Around (not about) Two Questions
  1996-04-15  0:00                       ` Gary McKee
@ 1996-04-16  0:00                         ` Ken Garlington
  0 siblings, 0 replies; 100+ messages in thread
From: Ken Garlington @ 1996-04-16  0:00 UTC (permalink / raw)


Gary McKee wrote:
> 
> Ken, As I mentioned in my previous message, there is a WELL-ESTABLISHED
> product/technology that is designed to address the quality issues, details
> are below. The ACES was developed as a result of a multi-year effort from a
> consortium of govt/industry/academia, certainly an "industry-wide effort"
> such as you suggest.

So, you're saying that all vendors use ACES as a test tool for their products?
If not, then ACES absolutely _is not_ what I suggest.

> The ACES specifically address several of the topics itemized above.

As do many other products/technologies. Which ones should be standardized
(either formally or informally) for all vendors, in order to raise compiler
quality across all compilers?

> Why don't we transition this
> discussion from the ACVC (which is NOT about quality) to a discussion of
> the ACES (which is about quality)?

OK - Are you suggesting that ACES should be a standard for all vendors?

> There is a significant difference in purpose between validation (ACVC) and
> evaluation (ACES).

Why is there a significant difference?

"evaluate" - "to determine the significance or worth of usu. by careful appraisal
and study."

"validate" - "to support or corroborate on a sound or authoritative basis."

Which do I want? As Deion says, "Both."

Why can't we corroborate on a sound or authoritative basis the significance or worth
of Ada compilers by careful appraisal and study? Why are these antonyms in the
Ada community?

> I have used this system extensively and I
> do recommend it for the quality assessment work that you are interested in.

You recommend that I, the user, run ACES. Oh, joy. That certainly seems
efficient to me.

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-16  0:00                   ` Ken Garlington
@ 1996-04-16  0:00                     ` Robert Dewar
  1996-04-18  0:00                       ` Ken Garlington
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-16  0:00 UTC (permalink / raw)


"Perhaps some mechanism could be devised to encourage or
require _all_ vendors to use this tool. (Nah - how could we
ever get all vendors to agree to use some common measure of
compiler quality? It's insane! Except for the ACVC, of course...)"

The policy has been that the minimum requirement imposed by centralized
policy on validated compilers goes no further than conformance testing. I
think that is a sound decision, since it leaves much more flexibility for the
project officers. Ken, it sounds like you want to have someone forcing
you to do the "right thing". It is up to you to specify an Ada compiler
that meets your requirements. One component over which you have no
choice is that it must meet ACVC requirements.

Other requirements, e.g.

  ISO 9000 certification
  SEI maturity level certification
  ACES performance testing
  Certification of full path testing
  Successful testing with the NPL tool

are up to you. Are they a waste of money? Yes for some users, No for 
other users. You are interested in one particular application domain,
for which all of the above may be appropriate, but it would be a mistake
to mandate that ALL users of Ada in the DoD for ALL purposes have no
choice but to require all these features. This would increase tool
cost to no purpose for application areas in which some or all of the
above criteria are irrelevant.

Yes, I can see how you would like to spread your costs, but the fact of
the matter is that a non-critical accounting application written in
Ada does NOT need this level of testing. On the contrary, such an
application might have other requirements, e.g. to pass the ADAR
decimal arithmetic tests, which for you would be irrelevant.

That's really the issue here -- how much to REQUIRE of all vendors in
all fields. The decision to go no further than ACVC testing as being
universally mandated is very deliberate, but it is assumed that
application domains will specify whatever they need. Certainly in
the past there has been a considerable level of naivity in some
procurements, with an assumption that validatoni is the sole criterion
for determining whether or not a compiler is suitable for intended
use. Clearly different domains will have different requirements,
and it is up to the PEO or whoever is in charge to make sure that
the requirements are stated and met.





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-16  0:00             ` Ken Garlington
@ 1996-04-16  0:00               ` Robert Dewar
  0 siblings, 0 replies; 100+ messages in thread
From: Robert Dewar @ 1996-04-16  0:00 UTC (permalink / raw)


Ken said

"Believe it or not, Robert Dewar wrote these two sentences in the
same post:

> ...the point is that Brian's test generator does not test any
> features not tested pretty thoroughly in the ACVC suite.

> We certainly plan to use this tool in testing GNAT.

Sounds like either (1) a waste of money, given that the ACVC tests
these features thoroughly, or (2) a good idea for all compiler vendors
to do."

You should definitely believe it because I certainly wrote those two
sentences, and they are not at all inconsistent. The fact that you
think they are shows you are still missing some important points.

There is a big difference between feature coverage and stress testing.
The ACVC has pretty complete feature coverage; for example, you will
find cases of all possible operators applied to all classes of types.

But that is very different from seeing if a compiler will fall apart
if you give it a complex expression containing thousands of tokens
(which is what the NPL stress testing tool can do).
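
A toy sketch of that idea (illustrative only -- this is not the NPL tool,
and the file name and expression shape are invented): generate a legal
unit whose one expression carries thousands of tokens, then see whether
the compiler under test survives it.

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Gen_Stress is
      --  Writes stress.adb, whose initializer is a single expression
      --  with several thousand operators and literals.
      F : File_Type;
   begin
      Create (F, Out_File, "stress.adb");
      Put_Line (F, "procedure Stress is");
      Put (F, "   X : Integer := 1");
      for I in 1 .. 5_000 loop
         Put (F, " + 1");
      end loop;
      Put_Line (F, ";");
      Put_Line (F, "begin");
      Put_Line (F, "   null;");
      Put_Line (F, "end Stress;");
      Close (F);
   end Gen_Stress;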

I use "feature" in the sense of "feature in the language", not in a general
sense applying to any possible characteristic of the language.

The ACVC test suite measures conformance, not capacity or performance.
For capacity and performance tests, you should look at the ACES test
suite which concentrates on these areas. Note that the development
of the ACES has been funded by the DoD, and use of this test suite
is considered to be of critical importance by many projects. 





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-16  0:00                       ` Ken Garlington
@ 1996-04-16  0:00                         ` Robert Dewar
  0 siblings, 0 replies; 100+ messages in thread
From: Robert Dewar @ 1996-04-16  0:00 UTC (permalink / raw)


Ken said

"Robert Dewar wrote:
>
> Ken said
>
> "> A comparison would be if 20 different vendors wrote completely different
> > F22 software, with different cockpit interfaces.
>
> Twenty sounds a little low, but that's close enough...."
>
> Are you really saying that the ENTIRE F22 software is duplicated by 20
> vendors?

Nope. Of course, that's not what you asked. See original quote above. :)

However, we've certainly had multiple vendors write the same software (N-version
programming) on previous programs. I don't like that approach myself, but
that's a different discussion.

Of course, it doesn't matter whether or not twenty vendors wrote the same
software, or different software going into the same system, with respect to
the question you asked."

OK, that makes sense -- I couldn't imagine that the F22 was indeed like
the compiler field, where we have multiple vendors generating complete
Ada systems for the same target. That IS exactly what I asked, sorry if
I was not clear, and it matters very much. We were talking about taking
failures and creating industry-wide regression suites. This is very hard
to do across more than one vendor. For example, a lot of our tests in
our suite check

   o text of error messages generated
   o performance and behavior of GNAT specific features
   o particular GNAT choices for implementation dependent behavior

None of these are useful multi-vendor tests, at least not without a lot
of work, and even then only some of them could be salvaged.

Other tests in our suite are white-box (path coverage) type tests based
on the specific algorithms used by GNAT. They are runnable, but not
especially useful, on other compilers.

Nevertheless there are a lot of tests in regression suites that could
be generally used. If we put a test in our suite that seems clearly
related to ACVC-type conformance testing, I send it along to the ACVC
folks for inclusion.

As an example of cross-vendor use of test suites, one of the important
aspects of our agreement with DEC is that we will have access to the
DEC test suite. It will take a lot of work to adapt that suite, and part
of the reason that it is feasible is that we are committed to a very
high degree of compatibility with Dec Ada.





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-16  0:00                 ` John McCabe
@ 1996-04-16  0:00                   ` Robert Dewar
  1996-04-22  0:00                     ` John McCabe
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-16  0:00 UTC (permalink / raw)


"I can see that being a minor problem. I would have thought it would be
fairly simple to produce a minimal, modified version of most problem
reports that could be released without jeopardising the
confidentiality of the customer's code. I, for example, tend to cut my
problem reports down to the bare minimum of code which causes a
problem."

Not even vaguely true: many of the regression suite tests that we have
are large and complex, and the problem goes away if any attempt is made
to cut the example down. Besides which, the value of the suite is greatly
enhanced by having large complex real-world programs, rather than
simplified ACVC-style tests.





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Tiring Arguments Around (not about) Two Questions
  1996-04-15  0:00                     ` Tiring Arguments Around (not about) Two Questions Ken Garlington
  1996-04-15  0:00                       ` Gary McKee
@ 1996-04-17  0:00                       ` Kenneth Almquist
  1 sibling, 0 replies; 100+ messages in thread
From: Kenneth Almquist @ 1996-04-17  0:00 UTC (permalink / raw)


>>    Choosing among available compilers is not an easy task.
>
> Actually, it's very easy. For many host/target pairs, there's only a
> few vendors.  In some cases, only one.

In that case, it can get VERY hard.  The part where you evaluate the
various compilers isn't so bad.  Things get more difficult after the
compilers fail your tests and you have to tell your boss to switch
to different hardware.  :-)

> "Is there some way to modify the scope of the ACVC process to
> improve compiler quality across all vendors?  Or, is there something
> outside the scope of the ACVC that could be done to improve compiler
> quality across all vendors?"

The customers for Ada compilers are a diverse lot.  For example, in
the PC world, people expect low quality and low price.  A CS 101 teacher
wants a compiler that does a good job with toy programs, again at a low
price.  This diversity limits what can be accomplished across all vendors.

The basic rules for encouraging industry to produce software which meets
your needs are:

   1)  Make sure your purchasing decisions reflect the effectiveness in
       meeting your needs.  If the quality of the vendor's product is
       poor but you buy it anyway, the vendor has no incentive to
       improve the product.

   2)  Centralize purchasing decisions to increase clout over vendors.
       If vendors are not willing to go to a great deal of effort to
       get your project to buy from them, perhaps Lockheed needs a
       centralized compiler evaluation and purchasing organization.
       Or better yet, perhaps Lockheed and other companies involved
       in the production of safety critical software should form an
       industry consortium to evaluate compilers.  Assuming that the
       members of the consortium agreed not to purchase compilers
       which didn't meet the standards agreed to by the consortium,
       compiler vendors would have a strong incentive to meet those
       standards.

Point 1 is critical--if you can't manage this all bets are off.  Point 2
is less important.  The key point with centralizing purchasing decisions
is that all the people involved must have common needs.  Saying, "we
only purchase validated compilers" is a centralization strategy, but
validation serves as a central point for the entire Ada user population
and thus cannot be expected to reflect your needs very closely.  In fact,
if you buy any compiler which is validated, regardless of how well it
meets your need, then you are violating point 1.

None of this answers your question about how to "improve quality across
all vendors."  But remember, low quality vendors don't hurt you as long
as you don't buy their products.
					Kenneth Almquist




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-10  0:00                   ` Robert Dewar
  1996-04-12  0:00                     ` Philip Brashear
  1996-04-15  0:00                     ` Tiring Arguments Around (not about) Two Questions Ken Garlington
@ 1996-04-18  0:00                     ` John McCabe
  1996-04-19  0:00                       ` Robert Dewar
  2 siblings, 1 reply; 100+ messages in thread
From: John McCabe @ 1996-04-18  0:00 UTC (permalink / raw)


dewar@cs.nyu.edu (Robert Dewar) wrote:

<..snip..>

>   Passing the ACVC suite is not a trivial exercise, and I think all
>   vendors would agree that they have had to devote considerable resources
>   to this task. These are resources not available for other quality
>   improving tasks. This means that we have to be very careful that we
>   do not divert too many resources and reach a point of diminishing
>   returns.

This paragraph confuses me.

I believe that if the compiler vendors were producing high quality
products that conformed to the language, then passing the ACVC suite
should be a cinch.

What you seem to be saying is that compiler vendors are assigning
resources to ensure that they pass the ACVC suite, and in doing so are
compromising the quality of their product.

This type of situation is very common in everyday life - you just have
to look at the number of crap drivers on the road (well, in the UK
anyway) to see that this philosophy is dangerous. People are not
taught to be good drivers, they're taught to pass the driving test,
and your statement above suggests that Ada compilers are not designed
to compile Ada, they're designed to pass the ACVC suite.


Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-16  0:00                     ` Robert Dewar
@ 1996-04-18  0:00                       ` Ken Garlington
  0 siblings, 0 replies; 100+ messages in thread
From: Ken Garlington @ 1996-04-18  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> The policy has been not to go beyond conformance testing in the minimum
> requirement imposed by centralized policy on validated compilers. I think
> that is a sound decision, since it leaves much more flexibility for the
> project officers.

"Project officers?"  Sounds like you are discussing the use of Ada in a
particular domain. I'm not.

> It is up to you to specify an Ada compiler
> that meets your requirements. One component over which you have no
> choice is that it must meet ACVC requirements.

Explain why I have no choice in ACVC... if I'm using Ada for a non-DoD
project. For that matter, I always have a choice with any DoD requirement:
I can ask for a waiver.

> it would be a mistake
> to mandate that ALL users of Ada in the DoD for ALL purposes have no
> choice but to require all these features.

And yet, it's _not_ a mistake to mandate ACVC for non-DoD users? Why?

> This would increase tool
> cost to no purpose for applicatoin areas in which some or all of the
> above criteria are irrelevant.

But there's an important assumption buried in this statement, which I
have tried (apparently in vain) to exhume. Why is there only _one_
criterion appropriate for all application areas -- ACVC compliance --
particularly when there are users who are _not_ mandated to have an ACVC
certificate? Furthermore, if for some reason we can't have more than
one criterion, why should it be ACVC (as it is defined today)?

As to the former question (more than one criterion), what is there
about such things as the SEI CMM or ISO 9001 that makes them appropriate
for only one set of Ada compiler users? Particularly when I see a lot of
Ada propaganda targeted to users who want high quality
products, why are quality standards of this type so inappropriate?

As to the latter question (ACVC is the best single criterion), why?
What information do we have that ACVC is the best way to achieve the
goals which most users share? Because it gives us compilers of higher
quality than the average Pascal compiler?

> Yes, I can see how you would like to spread your costs, but the fact of
> the matter is that a non-critical accounting application written in
> Ada does NOT need this level of testing.

Really? Speaking for myself, I would be very upset (or my employer would
be upset, depending upon the error :) if my paycheck were miscalculated due
to a bug in the compiler.

Nonetheless, you mention "non-critical" accounting applications. Is the
average Ada user developing non-critical systems (using "critical" in the
"mission critical" sense described in the Ada requirements and glossy
brochures)? Is this the target user base? If so, what does the ACVC provide
for these users? Are there better (cheaper?) ways to provide what the ACVC
provides?

If your claim is that the average Ada user is choosing Ada for reasons other
than to build high-quality systems (lowest cost, perhaps?), then I'll admit
I'm on the wrong tack. Is there a language that is focused on supporting the
development of high-quality systems, if not Ada?

> On the contrary, such an
> application might have other requirements, e.g. to pass the ADAR
> decimal arithmetic tests, which for you would be irrelevant.

Sounds like a domain-specific test. I was thinking more of a general-purpose
measure of quality, that could be applied across multiple domains.

> That's really the issue here -- how much to REQUIRE of all vendors in
> all fields.

Absolutely. For example, it appears to be common practice in Europe to
require ISO 9001 certification of vendors across all fields. This is
due to a recognition that poor quality products, although cheaper in
price, are rarely cheaper in use. In the U.S., the DoD is moving toward
SEI III certification for vendors as a prerequisite.

Except for compiler vendors, of course.

> The decision to go no further than ACVC testing as being
> universally mandated is very deliberate, but it is assumed that
> application domains will specify whatever they need.

Could you direct me to the documentation of this decision, and
the deliberations that led to this decision? Perhaps it
could answer my questions.

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-03-25  0:00 ` Robert Dewar
                     ` (7 preceding siblings ...)
  1996-04-11  0:00   ` Robert I. Eachus
@ 1996-04-19  0:00   ` Laurent Guerby
  1996-04-25  0:00   ` Tiring Arguments Around (not about) Two Questions [VERY LONG] Laurent Guerby
  1996-04-26  0:00   ` Ken Garlington
  10 siblings, 0 replies; 100+ messages in thread
From: Laurent Guerby @ 1996-04-19  0:00 UTC (permalink / raw)


Ken Garlington writes
[deleted]
: And yet, it's _not_ a mistake to mandate ACVC for non-DoD users? Why?
[deleted]

   The key idea here is the time (and money) spent on such items (as
already pointed out by Robert Dewar). The time and money spent by all
Ada vendors to reach ACVC compliance is _not_ spent on GUI builders,
external tools, and other goodies. If you require compliance with one
million tests, you won't get any tools (and maybe not even a compiler,
or worse, only one compiler for your platform ...).

   Think about C++ compilers, which are known to be weak and buggy for
many features of the (evolving) draft C++ standard, but which come
with a good set of programming tools (windows, editors, profilers,
libraries ...) that people like.

   If you want to enter the Ada market, ACVC is the recognized minimum
for any "decent" compiler. It's already a _lot_ of work (time, money).
And it does a useful (but limited) job.

   But it is a "mistake" to require more for every Ada compiler used
by every Ada programmer. Most C/C++ programmers are using a buggy
C/C++ compiler without asking more than 93.1416% ANSI C/POSIX/Draft
C++/Whatever compliance and are quite happy with it ;-).

   Of course, if you have to handle a big and serious project, you
will ask more from your vendor (ACES, other test suites, ISO 9000,
certifications, etc ...).

[Just a student's thoughts; I haven't any real experience on the
subject, except with buggy language X compilers (a lot of compilers
and Xs) ;-]

-- 
--  Laurent Guerby, student at Telecom Bretagne (France), Team Ada.
--  "Use the Source, Luke. The Source will be with you, always (GPL)."
--  http://www-eleves.enst-bretagne.fr/~guerby/ (GATO Project).
--  Try GNAT, the GNU Ada 95 compiler (ftp://cs.nyu.edu/pub/gnat).




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-18  0:00                     ` Ada Core Technologies and Ada95 Standards John McCabe
@ 1996-04-19  0:00                       ` Robert Dewar
  1996-04-22  0:00                         ` Ken Garlington
  1996-04-22  0:00                         ` John McCabe
  0 siblings, 2 replies; 100+ messages in thread
From: Robert Dewar @ 1996-04-19  0:00 UTC (permalink / raw)


John McCabe says

"This paragraph confuses me.

I believe that if the compiler vendors were producing high quality
products that conformed to the language, then passing the ACVC suite
should be a cinch.

What you seem to be saying is that compiler vendors are assigning
resources to ensure that they pass the ACVC suite, and in doing so are
compromising the quality of their product."

This is a very common misconception among people who don't know language
design or compiler implementation very well. Indeed it is the same
misconception that has led people to misinterpret what conformance
testing is all about.

Perhaps I can put it this way. Suppose a vendor has resources to do exactly
one of the following two tasks:

1. Rewrite the loop optimizer so that all loops run faster

2. Rewrite the handling of static expressions so as to pass one very obscure
test in the ACVC suite, covering a construct that has never shown up in a
customer program and is very unlikely *ever* to show up in customer
programs, under the condition that this rewriting is extensive and will
likely cause regressions (in programs other than ACVC tests).

Which do YOU think would contribute more to quality for most users?

That's the potential trouble with mandated testing: it elevates very
unimportant problems to maximum priority without any regard to the
importance or impact of these problems.
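
To make the kind of obscure corner in point 2 concrete, here is a
contrived example (invented here, not an actual ACVC test): Ada requires
static universal-integer arithmetic to be evaluated exactly at compile
time, even when intermediate values exceed any machine integer.

   procedure Obscure is
      --  Named numbers are evaluated exactly at compile time, with
      --  unbounded precision; no run-time type could hold Big.
      Big   : constant := 2 ** 200;
      Small : constant := Big / 2 ** 197;   --  must reduce exactly to 8
      X     : constant Integer := Small;    --  legal: 8 fits in Integer
   begin
      null;
   end Obscure;

A compiler can get this sort of thing wrong without any customer ever
noticing, which is precisely the kind of fix that competes with the loop
optimizer above.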

If we follow Ken's repeated request, and extend the scope of mandatory
testing, then we distort things still more. That's the risk.

The DoD policy in this area is that requiring the ACVC conformance testing
is as far as it is desirable to go for general requirements. This policy
of course recognizes that for certain purposes, other kinds of testing
may be useful, which is why DoD has supported the development of the ACES
suite. It is up to a project officer to specify this as a requirement
if this is needed (such requirements are not uncommon). Ken, if your
procurement did not specify this requirement, all one can ask is why
not? Do you really need the DoD to tell you what testing needs to be done?
In the commercial marketplace, the market determines what testing is
desirable (for instance a lot of the C++ commercial market is comfortable
with no testing whatsoever), but in other contexts the commercial marketplace
requires testing, e.g. many commercial COBOL customers will only use
NIST certified compilers. Why is it that DoD customers can't work the
same way?

By the way Ken, you ask how the DoD has determined that it is reasonable
to generally require the ACVC testing and nothing more? I find it a bit
odd that a DoD contractor should be asking this question to someone
outside -- why not ask within the DoD, it's their policy!

P.S. When I used critical in talking about non-critical banking applications,
I was abbreviating not for mission-critical, but for safety-critical. Sorry
for not being clear. But to clarify my point here: a banking application
may well not care about ACES testing because they don't care about
performance, and their own domain specific testing (e.g. actual testing
of the application in question) shows that a given compiler works
adequately for their purposes.





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-19  0:00                       ` Robert Dewar
@ 1996-04-22  0:00                         ` Ken Garlington
  1996-04-22  0:00                         ` John McCabe
  1 sibling, 0 replies; 100+ messages in thread
From: Ken Garlington @ 1996-04-22  0:00 UTC (permalink / raw)


Robert Dewar wrote:
> 
> Perhaps I can put it this way. Suppose a vendor has resources to do exactly
> one of the following two tasks:
> 
> 1. Rewrite the loop optimizer so that all loops run faster
> 
> 2. Rewrite the handling of static expressions so as to pass one very obscure
> test in the ACVC suite, covering a construct that has never shown up in a
> customer program and is very unlikely *ever* to show up in customer
> programs, under the condition that this rewriting is extensive and will
> likely cause regressions (in programs other than ACVC tests).
> 
> Which do YOU think would contribute more to quality for most users?

I think the former would be better. However, as I understand the state of
affairs today, the vendor will do the latter, since he is mandated to pass
the ACVC.

> If we follow Ken's repeated request, and extend the scope of mandatory
> testing, then we distort things still more. That's the risk.

Actually, my request was to do any of the following:

1. Defend the status quo (ACVC is the best and only mandated measure of
quality).

2. Define ways to change the ACVC -- add tests, delete tests, write
different tests -- that would improve quality, and are not part of
the status quo.

3. Define alternative measures of quality -- either in addition to,
or instead of -- the ACVC, that should be mandated.

I am also willing for these to be "non-mandated" standards; that is,
there's no official requirement, but there is general consensus that
any vendor who fails to use these measures is not a quality vendor.

I will certainly agree with you that mandating measures that have a net
penalty on compiler quality is a bad idea. Would you agree with me that
mandating measures that have a net benefit for compiler quality is a
good idea?

> The DoD policy in this area is that requiring the ACVC conformance testing
> is as far as it is desirable to go for general requirements.

And yet, per your statements above, the ACVC can lead to "the
condition that this rewriting is extensive and will likely cause
regressions..." It sounds like we should be looking for an alternative
that does not cause this sort of problem, or perhaps discontinue
mandated testing. Given your issue with ACVC testing, is the DoD policy
rational?

> Ken, if your
> procurement did not specify this requirement, all one can ask is why
> not? Do you really need the DoD to tell you what testing needs to be done?

An interesting question, given that DoD does in fact tell me what testing
needs to be done -- the ACVC, to be precise. Of course, I don't know that
the DoD is right to demand this testing, since I can't figure out if the
ACVC is the best test to demand, or the only test to demand. As far as why
additional measures aren't required, I certainly agree that "all one can ask is
why not?" That's what I'm doing.

Another interesting question is, "If it's the user's job
to define the measures to be taken, is there any measure that is sufficiently
general-purpose to always request, regardless of use?"

If the answer to that question is "yes," then there's a follow-on: "Since
this measure is always useful, why shouldn't users demand that the compiler
vendors do this measure routinely, and share results with the users, rather
than billing each user to do the same testing on the same product?" ACVC cost
is spread among all users. Should I want to pay my share? Would users be
willing to spread the cost for additional/alternate measures?

> In the commercial marketplace, the market determines what testing is
> desirable (for instance a lot of the C++ commercial market is comfortable
> with no testing whatsoever), but in other contexts the commercial marketplace
> requires testing, e.g. many commercial COBOL customers will only use
> NIST certified compilers. Why is it that DoD customers can't work the
> same way?

I thought I _was_ working that way. I'm part of the marketplace (contact
customer-support@tartan.com for verification). I'm trying to discuss what
the marketplace should be demanding of Ada vendors, particularly given that
the supposed market for Ada vendors is in high-quality systems. You're the
one who's hung up on only accepting what the DoD demands, not me. If DoD
decided to stop demanding ACVC testing tomorrow, I would still be asking
my questions. If c.l.a. isn't a place for users to bring up ideas of this
type, and hopefully elicit feedback from users and vendors, where do you suggest
they be raised? Doesn't the commercial C and C++ community use Internet as a
forum for such ideas? (As an aside, should we be using the C and C++
community as the benchmark for responsible compiler users?)

Of course, in the commercial marketplace, there are other de facto measures
than NIST cerification. For example, as I understand it, it is almost
impossible to sell a large transaction processing system without measuring
it against certain industry standard benchmarks. If a TPS vendor said, "I'll
only do these tests if you pay me" the users would run, not walk, to another
vendor. Why can't Ada vendors work the same way?

Furthermore, in the commercial marketplace, software vendors perform surveys
to discover demand, rather than waiting for the users to come to them. Why
can't Ada vendors work the same way? (But that's another useless thread,
so never mind.)

> By the way Ken, you ask how the DoD has determined that it is reasonable
> to generally require the ACVC testing and nothing more? I find it a bit
> odd that a DoD contractor should be asking this question to someone
> outside -- why not ask within the DoD, it's their policy!

I do ask DoD (and his brother RoD :), and will continue to ask DoD, questions
on Ada policy. A few personal observations:

1. DoD policy on Ada seems to be in flux at the moment, and answers of this
type seem to be waiting on the NRC study, etc.

2. Just because the DoD thinks ACVC is a good idea, doesn't make it a good
idea.

3. Just because DoD thinks ACVC is a good idea, doesn't make it the only
good idea.

4. Is DoD excluded from comp.lang.ada?

5. If you have a specific person in DoD who has the answers to my questions,
feel free to let me know.

> P.S. When I used critical in talking about non-critical banking applications,
> I was abbreviating not for mission-critical, but for safety-critical. Sorry
> for not being clear. But to clarify my point here: a banking application
> may well not care about ACES testing because they don't care about
> performance, and their own domain specific testing (e.g. actual testing
> of the application in question) shows that a given compiler works
> adequately for their purposes.

Sounds like a good reason not to use ACES as a general-purpose measure, assuming
that it doesn't cover much of the Ada domains. (By the way, I expect domain
specific measures to be required, even if a generally acceptable measure exists.)

Now, is there a measure that _is_ generally useful?

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-19  0:00                       ` Robert Dewar
  1996-04-22  0:00                         ` Ken Garlington
@ 1996-04-22  0:00                         ` John McCabe
  1996-04-23  0:00                           ` Ken Garlington
  1996-04-24  0:00                           ` Robert Dewar
  1 sibling, 2 replies; 100+ messages in thread
From: John McCabe @ 1996-04-22  0:00 UTC (permalink / raw)


dewar@cs.nyu.edu (Robert Dewar) wrote:

>John McCabe says

>"This paragraph confuses me.

>I believe that if the compiler vendors were producing high quality
>products that conformed to the language, then passing the ACVC suite
>should be a cinch.

>What you seem to be saying is that compiler vendors are assigning
>resources to ensure that they pass the ACVC suite, and in doing so are
>compromising the quality of their product."

>This is a very common misconception among people who don't know language
>design or compiler implementation very well. Indeed it is the same
>misconception that has led people to misinterpret what conformance
>testing is all about.

>Perhaps I can put it this way. Suppose a vendor has resources to do exactly
>one of the following two tasks:

>1. Rewrite the loop optimizer so that all loops run faster

>2. Rewrite the handling of static expressions so as to pass one very obscure
>test in the ACVC suite, covering a construct that has never shown up in a
>customer program and is very unlikely *ever* to show up in customer
>programs, under the condition that this rewriting is extensive and will
>likely cause regressions (in programs other than ACVC tests).

>Which do YOU think would contribute more to quality for most users?

I cannot speak for most users, only for myself.

You seem to be defending a "make it fast, THEN make it work"
philosophy here (which I completely disagree with) and confusing
quality with run-time performance.

I have seen the effect of this kind of philosophy - my current
compiler supports the ATAC co-processor, yet has had trouble compiling
some basic Ada constructs.

I would choose 2. over 1. as I believe a _quality_ compiler is one
which compiles the language completely, not one which does some parts
of the language quickly.

What you don't appear to appreciate in your statements here is the
cost involved in tracing the cause of a fault (i.e. a compiler bug) at
the end user level, and then trying to find a workaround solution.
Many bugs are extremely obscure and hard to track down - performance
enhancements at the end user level, while non-trivial in many cases, are
often fairly straightforward in my experience.

>That's the potential trouble with mandated testing, it elevates very
>unimportant problems to maximum priority without any regard to the
>importance or impact of these problems. 

Importance, in this case, is relative - I may be the only customer who
uses the technique covered by the "obscure" test, in which case it is
_very_ important to me. The point of the ACVC suite etc. is to attempt
to ensure that compilers conform to the language standard, is it not?
As there appear to be no "priority levels" defined for language
features, each is equally important.


Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-16  0:00                   ` Robert Dewar
@ 1996-04-22  0:00                     ` John McCabe
  1996-04-23  0:00                       ` Ken Garlington
  0 siblings, 1 reply; 100+ messages in thread
From: John McCabe @ 1996-04-22  0:00 UTC (permalink / raw)


dewar@cs.nyu.edu (Robert Dewar) wrote:

>"I can see that being a minor problem. I would have thought it would be
>fairly simple to produce a minimal, modified version of most problem
>reports that could be released without jeopardising the
>confidentiality of the customer's code. I, for example, tend to cut my
>problem reports down to the bare minimum of code which causes a
>problem."

>Not even vaguely true,

Can you provide figures on that? In my experience it _has_ been
possible, so it is (at least) vaguely true.

>many of the regression suite tests that we have
>are large and complex, and the problem goes away if any attempt is made
>to cut the example down.

Yes, I have had that experience, but those have been the exception
rather than the rule. Perhaps it's just that the compilers I've used
have been so poor that the faults have been simple to find :-)

>Besides which, the value of the suite is greatly
>enhanced by having large complex real-world programs, rather than
>simplified ACVC-style tests.

I can believe that, but I would hope that effort goes into requirement
traceability when these complex tests are included in the suite, to
account for possible unnecessary redundancy between test cases.


Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-22  0:00                         ` John McCabe
@ 1996-04-23  0:00                           ` Ken Garlington
  1996-04-24  0:00                             ` John McCabe
  1996-04-24  0:00                             ` Robert Dewar
  1996-04-24  0:00                           ` Robert Dewar
  1 sibling, 2 replies; 100+ messages in thread
From: Ken Garlington @ 1996-04-23  0:00 UTC (permalink / raw)


John McCabe wrote:
> 
> You seem to be defending a "make it fast, THEN make it work"
> philosophy here (which I completely disagree with) and confusing
> quality with run-time performance.

Except that, in the choices given, you could either have a performance
improvement (choice #1), or you could pass a test which did _not_ add
to _any_ measure of compiler quality. The second choice involved a test
with no value added to the user (this was a precondition of the second
choice).

I think, given those two options, you would prefer #1, right? Certainly,
if choice #2 involved a useful test, then I might agree with your response.
However, the ground rule for #2 was that it was a useless test. So,
failing test #2 should _not_ translate into a bug in the application code,
and so the effects you described later should not occur.

> Importance, in this case, is relative - I may be the only customer who
> uses the technique covered by the "obscure" test in which case it is
> _very_important to me.

This is certainly true, and one of the questions I asked. If the ACVC is
changing to be more user-oriented, how does the ACVC writer have this
understanding of how the compiler will be used? If his understanding
is in error, then the tests will still be focusing on the wrong things.

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-22  0:00                     ` John McCabe
@ 1996-04-23  0:00                       ` Ken Garlington
  1996-04-24  0:00                         ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-23  0:00 UTC (permalink / raw)


John McCabe wrote:

[most regression tests are simple and not proprietary]

That's our experience, as well.

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-22  0:00                         ` John McCabe
  1996-04-23  0:00                           ` Ken Garlington
@ 1996-04-24  0:00                           ` Robert Dewar
  1996-04-26  0:00                             ` Ken Garlington
  1 sibling, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-24  0:00 UTC (permalink / raw)


John McCabe says

"You seem to be defending a "make it fast, THEN make it work"
philosophy here (which I completely disagree with) and confusing
quality with run-time performance.

I have seen the effect of this kind of philosophy - my current
compiler supports the ATAC co-processor, yet has had trouble compiling
some basic Ada constructs.

I would choose 2. over 1. as I believe a _quality_ compiler is one
which compiles the language completely, not one which does some parts
of the language quickly."

John, I can for SURE conclude from your writing here that you have never
looked at the ACVC tests closely, and also that you have limited experience
in the definition of programming languages.

The fact of the matter is that for *any* language of any complexity, you can
find extremely marginal cases which are not worth testing. Study for
example the Rosen tasking anomaly (I don't want to waste space describing
this case, since it is well known to anyone who has followed the ACVC
process).

It is a common naive viewpoint that (a) the RM precisely defines the Ada
language -- of course it does not, since it is not a formal document,
and that (b) it is therefore absolutely clear what conformance means,
and that (c) it is therefore valuable to test any aspect of this conformance.

I guess that neither you nor Ken has paid any attention to what is going
on with ACVC 2.1, but the whole idea of this effort is to make sure that
the ACVC suite better reflects actual user usage. I would be interested
in comments from either of you on this reformulation, and in your reaction
to the new tests, but to make useful comments you will have to spend more
time actually studying the tests and the ACVC process.





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-23  0:00                           ` Ken Garlington
  1996-04-24  0:00                             ` John McCabe
@ 1996-04-24  0:00                             ` Robert Dewar
  1996-04-26  0:00                               ` Ken Garlington
  1 sibling, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-24  0:00 UTC (permalink / raw)


Ken asked:

"This is certainly true, and one of the questions I asked. If the ACVC is
changing to be more user-oriented, how does the ACVC writer have this
understanding of how the compiler will be used? If his understanding
is in error, then the tests will still be focusing on the wrong things."

Why not study the tests?
Why not study the process being used to construct ACVC 2.1?

This development is not being done in a vacuum. 

The basic answer is that each feature, instead of being tested in a
formalistic way, now prompts the question "How would this feature be
used in a real program?", a question that was never even asked in the
context of ACVC version 1. The resulting test is then reviewed by the
ACVC review team, which represents implementors and users, to see how
well it seems to match potential use.

It is hard to establish objective criteria for how well one is doing
in this process. The review group certainly has not found any way of
pre-establishing what will turn out to be typical usage.

What *is* encouraging is the following, which actually could possibly
be quantified from our data, with a lot of work.

When we make proposed changes to GNAT, we run three kinds of tests:

 Our main test suite, which is surely user-oriented, since it is mostly
 user code.

 the old ACVC tests

 the new ACVC tests

In practice we find the general results on the new ACVC tests closely mirror
the general results on the main test suite.

On the other hand, the old ACVC tests seem to pick up a somewhat separate
set of errors, and often yield conflicting results (lots of problems when
the other two sets show none, or no problems when the other two sets 
show lots).





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-24  0:00                             ` John McCabe
@ 1996-04-24  0:00                               ` Robert Dewar
  1996-04-26  0:00                                 ` John McCabe
                                                   ` (2 more replies)
  1996-04-25  0:00                               ` Ken Garlington
  1 sibling, 3 replies; 100+ messages in thread
From: Robert Dewar @ 1996-04-24  0:00 UTC (permalink / raw)


John McCabe said

"I'm loath to believe that any of the ACVC tests are truly useless
(although I have to admit I haven't looked at them so far), but in a
general case, where the particular test mentioned was _truly_ useless,
then I would choose the performance improvement."

I really think that if you want to express opinions on the ACVC suite,
new or old, you might find it quite helpful to look at the tests!

I suspect that Ken has not looked at them either. Ken, you keep asking
about how the ACVC process is being improved. Why not look?





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-23  0:00                       ` Ken Garlington
@ 1996-04-24  0:00                         ` Robert Dewar
  1996-04-26  0:00                           ` Ken Garlington
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-24  0:00 UTC (permalink / raw)


Ken Garlington said

"That's our experience, as well."

Your experience with compiler implementation, or F-22 code? If you are
speaking of experience in implementing a compiler, what compiler?
Because I have worked on many commercial compilers, and NEVER found
this to be true.





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-23  0:00                           ` Ken Garlington
@ 1996-04-24  0:00                             ` John McCabe
  1996-04-24  0:00                               ` Robert Dewar
  1996-04-25  0:00                               ` Ken Garlington
  1996-04-24  0:00                             ` Robert Dewar
  1 sibling, 2 replies; 100+ messages in thread
From: John McCabe @ 1996-04-24  0:00 UTC (permalink / raw)


Ken Garlington <garlingtonke@lmtas.lmco.com> wrote:

>John McCabe wrote:
>> 
>> You seem to be defending a "make it fast, THEN make it work"
>> philosophy here (which I completely disagree with) and confusing
>> quality with run-time performance.

>Except that, in the choices given, you could either have a performance
>improvement (choice #1), or you could pass a test which did _not_ add
>to _any_ measure of compiler quality. The second choice involved a test
>with no value added to the user (this was a precondition of the second
>choice).

I was looking at it from the point of view that the test was for a
language feature, and that language feature could be used at some time
by someone - possibly me.

>I think, given those two options, you would prefer #1, right? Certainly,
>if choice #2 involved a useful test, then I might agree with your response.
>However, the ground rule for #2 was that it was a useless test. So,
>failing test #2 should _not_ translate into a bug in the application code,
>and so the effects you described later should not occur.

Given the options of:

1) Improving the performance of something I have that works

and

2) Fixing something I want that doesn't work

Then I'd choose 2).

I'm loath to believe that any of the ACVC tests are truly useless
(although I have to admit I haven't looked at them so far), but in a
general case, where the particular test mentioned was _truly_ useless,
then I would choose the performance improvement.


Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Tiring Arguments Around (not about) Two Questions [VERY LONG]
  1996-03-25  0:00 ` Robert Dewar
                     ` (8 preceding siblings ...)
  1996-04-19  0:00   ` Laurent Guerby
@ 1996-04-25  0:00   ` Laurent Guerby
  1996-04-26  0:00   ` Ken Garlington
  10 siblings, 0 replies; 100+ messages in thread
From: Laurent Guerby @ 1996-04-25  0:00 UTC (permalink / raw)


Ken Garlington writes
: Gary McKee wrote:
[deleted]
: > There is a significant difference in purpose between validation (ACVC) and
: > evaluation (ACES).
: 
: Why is there a significant difference?
: 
: "evaluate" - "to determine the significance or worth of usu. by
: careful appraisal and study."
: 
: "validate" - "to support or corroborate on a sound or authoritative basis."
: 
: Which do I want? As Deion says, "Both."
: 
: Why can't we corroborate on a sound or authoritative basis the
: significance or worth of Ada compilers by careful appraisal and study?

    There is a significant difference between evaluation and
validation. And this is reflected in the dual approach: ACVC
(validation) and ACES (evaluation).

*** I see ACVC validation as a more "objective" approach. There are
(to simplify) two categories of tests:

- the B tests, for testing invalid-construct detection at compile
time; the vendor has to provide a huge listing of errors and show
that all errors are caught (note that this is a bit subjective),

- the C tests, which test run-time behaviour, with reporting of
passed, failed or not applicable; the vendor has to provide a huge
listing of this tri-state output (a minimal sketch of such a
self-checking test follows below).
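
   To make the tri-state reporting concrete, here is a minimal,
hypothetical sketch of a self-checking C-style test. It is not an
actual ACVC test (real tests use a common Report support package
rather than Text_IO directly); it only illustrates the idea:

with Ada.Text_IO;
procedure C_Sketch is
   -- Exercise one language feature and report the verdict in the
   -- test's own output, so a vendor's log can be checked mechanically.
   type Counter is range 0 .. 255;
   C : Counter := 41;
begin
   C := Counter'Succ (C);
   if C = 42 then
      Ada.Text_IO.Put_Line ("PASSED");
   else
      Ada.Text_IO.Put_Line ("FAILED");
   end if;
   -- A test depending on an optional feature the implementation lacks
   -- would report NOT-APPLICABLE instead: the third state.
end C_Sketch;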

   For validation purposes, there is also a declaration of
non-deliberate extensions to the language and of course the complete
set of switches used, OS version, etc. (some clients want to rerun the
validation suite, which is perfectly reasonable for some kinds of
projects).

   In the Ada community validation is a strong concern; something not
validated is not a compiler for most users (whether that is reasonable
is another question ;-). The Ada compiler writers run in-house
validation, provide the listings, and then are (very) happy to
announce a successful validation.

*** I see the ACES evaluation as more subjective, since performance is
measured. Just have a look at what is happening with SPECs in the
microprocessor market to understand that performance measurement is
hard to achieve in an objective way. For example, some Intel SPECs are
nearly impossible to reproduce with real market motherboards. SPECs
are provided by vendors. This is not the case for ACES, which is most
of the time (not an obligation) run by users, and a complete set of
tools comes with ACES, written especially for users (note that there's
no equivalent for ACVC). The latest ACES provides the "quicklook"
facility, an easy-to-run set of tests expected to be run by an average
user in one day. There are also two categories of tests (again, to
simplify):

- "wall clock time" (user-provided routines) measurements, with
standard deviation, on code and compiler (if I remember well). The
tests are well classified, with for example good measurement of how
the use of high-level features impacts performance. Of course
interpretation is a tricky and "subjective" issue, but so are
configuration, switches, run-time settings and so on.

- a list of questions about the environment coming with the
compiler, like debugger, interface, bindings and whatever. This is
completely subjective, and the market is here for this kind of
evaluation.

   I think putting ACES on the user side is the right (political)
approach (again, think about SPECs). Of course the user has to know
what he wants and what he is talking about, but ACES reports give
useful information to select a compiler tailored to your needs.

*** Both ACVC and ACES are evolving, and as far as I can judge, in
the right direction. For example some ACVC tests have moved to ACES,
quicklook has been added, etc. And the new ACVC (2.x) tests have very
little in common with the old ones (1.x). This is my opinion, but it
is important to note that these processes are very open to vendors and
users, and that everything is available, with papers and sources, so
it's easy to have a look at them; at this point in the discussion that
becomes important.

   Personal note: my knowledge of ACVC/ACES comes from source, docs,
papers and news reading (for the first three items, it takes indeed
not that much time to gain a lot of useful knowledge ;-), but also
from discussions with the GNAT Team (in particular Gary Dismukes,
Cyrille Comar and Robert Dewar), and from the development of the
"mailserver" at Ada Core Technologies (summer 1995).

: Why are these antonyms in the Ada community?

   The "Ada community" has a long and interesting history (plus active
development ;-). But there is also a lot of easy bashing without
complete knowledge around. Please have a careful look at all these
_freely_ available items before asserting such things.

   I think the Ada 9X project, managed by the AJPO with a very open
attitude (a positive thing that is not often associated with Ada, but
always here), has taken into account _all_ user/vendor feedback, as
far as this was possible. The new standard, new ACVC, new ACES and
GNAT are perfect examples of user/vendor-driven improvements (of the
old standard, old ACVC, old ACES, old Ada/Ed ;-). See also RTEMS, a
portable real-time run-time, newly available free with sources.

[Thanks for reading all of this Ada 95 propaganda ;-]

-- 
--  Laurent Guerby, student at Telecom Bretagne (France), Team Ada.
--  "Use the Source, Luke. The Source will be with you, always (GPL)."
--  http://www-eleves.enst-bretagne.fr/~guerby/ (GATO Project).
--  Try GNAT, the GNU Ada 95 compiler (ftp://cs.nyu.edu/pub/gnat).




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-24  0:00                             ` John McCabe
  1996-04-24  0:00                               ` Robert Dewar
@ 1996-04-25  0:00                               ` Ken Garlington
  1 sibling, 0 replies; 100+ messages in thread
From: Ken Garlington @ 1996-04-25  0:00 UTC (permalink / raw)


John McCabe wrote:
> 
> I'm loath to believe that any of the ACVC tests are truly useless
> (although I have to admit I haven't looked at them so far), but in a
> general case, where the particular test mentioned was _truly_ useless,
> then I would choose the performance improvement.

As I understand it, one of the changes from ACVC 1.x to ACVC 2.x was
exactly to address the problem you are loath to believe exists (although
"useless" may be a little strong). There were tests in the ACVC 1.x suite
that represented language constructs used in ways not found in nature.
Vendors who failed these strange tests had to fix their compiler, even
though they believed the bug would never be seen in user code. If you read
the ACVC 2.x documentation, it says that those tests are being written in
such a way as to reflect how Ada is really used. Of course, it's not clear
to me how they know this...

And now, a message from our sponsor (well, MY sponsor, anyway...)

-----

Date      : 25-APR-1996 13:49:00.00
Posted on : 25-APR-1996 13:50:00.00
					April 25, 1996
To:	All Employees

The ISO 9001 Registration Audit of LMTAS by representatives of the British 
Standards Institution was completed yesterday.  The British Standards 
Institution is the objective, third-party audit agency that was chosen to 
assess our degree of compliance with requirements of the internationally 
recognized ISO 9001 quality standard.  

I am pleased, and proud, to report that this agency will recommend LMTAS 
for ISO 9001 registration and certification, based on the results of the 
audit.  This is a significant event that reflects very favorably on our 
processes, products and people.  We can also be proud that LMTAS will be 
the first major aircraft manufacturer to achieve registration in the United 
States.  My thanks and congratulations to everyone!

Besides serving as a strong endorsement of the effectiveness of our quality 
system, ISO 9001 registration demonstrates our ability to operate according 
to commercial quality standards, in addition to traditional military 
standards.  This could be an important factor when customer decisions are 
made about future aircraft programs such as the Joint Strike Fighter.

The auditors did identify some areas where improvements can be made.  We 
will be working on these areas in the near future and will submit specific 
actions to the auditing agency, which is the final step in attaining 
registration.  

We must remember, of course, that ISO 9001 registration does not signify an 
end to the need to continually improve quality.  I believe our company 
achieved a heightened awareness of the importance of quality during the 
weeks in which we prepared for this audit.  It is critical for us to 
maintain this awareness and to strive toward ever-higher standards, so that 
our customers can continue to share acceptance of the LMTAS quality policy:  
Our Brand Means Quality. 

Again, congratulations.  

					Dain M. Hancock
     Lockheed Martin Tactical Aircraft Systems




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-24  0:00                               ` Robert Dewar
  1996-04-26  0:00                                 ` John McCabe
@ 1996-04-26  0:00                                 ` John McCabe
  1996-04-26  0:00                                 ` Ken Garlington
  2 siblings, 0 replies; 100+ messages in thread
From: John McCabe @ 1996-04-26  0:00 UTC (permalink / raw)



dewar@cs.nyu.edu (Robert Dewar) wrote:

>John McCabe said

>"I'm loath to believe that any of the ACVC tests are truly useless
>(although I have to admit I haven't looked at them so far), but in a
>general case, where the particular test mentioned was _truly_ useless,
>then I would choose the performance improvement."

>I really think that if you want to express opinions on the ACVC suite,
>new or old, you might find it quite helpful to look at the tests!

Robert, I am (unfortunately) not paid to be an Ada expert; I have
other work to do, and I'm just trying to find out how I can make that
job easier and less stressful by contributing to a discussion on
compiler quality.

When I have some time, or when I can persuade my company to let me, I
will look at the ACVC tests, but the fact that I haven't so far must
not exclude me from expressing opinions on the effect of the tests,
based on the experience I have of using the products that have
(mysteriously) passed the tests!


Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-24  0:00                             ` Robert Dewar
@ 1996-04-26  0:00                               ` Ken Garlington
  0 siblings, 0 replies; 100+ messages in thread
From: Ken Garlington @ 1996-04-26  0:00 UTC (permalink / raw)



Robert Dewar wrote:
> 
> Why not study the tests?
> Why not study the process being used to construct ACVC 2.1?

[to determine if the new tests reflect "common" usage]

Well, I did. Unfortunately, I don't know what represents common usage.
I only know how I (and my group) use Ada. Is that common usage? Beats me.
This was exactly my comment: without some sort of survey, etc., how can
_any_ one person or small group of people know what represents common usage?

There's another issue. Common usage today is based on Ada 83 (unless you
wish to claim that most existing Ada code was developed using Ada 95).
Therefore, even if such a survey was done, how could it predict how people
will be using Ada 95 unique features?

> The resulting test is then
> reviewed by the ACVC review team, which represents implementors and
> users, to see how well it seems to match potential use.

How does this team represent me, if they have never contacted me? Or have
they contacted most users, and I represent some irrelevant minority?

> It is hard to establish objective criterion for how well one is doing
> in this process. The review group certainly has not found anyway of
> pre-establishing what will turn out to be typical usage.

Exactly. EXACTLY. I don't know how they _could_ do this.

Granted, it's a good thing that ACVC is at least asking the question,
"Does this represent real use?" My comment, as you have finally acknowledged,
is that it's not clear how they develop a meaningful answer.

> What *is* encouraging is the following, which actually could possibly
> be quantified from our data, with a lot of work.
> 
> When we make proposed changes to GNAT, we run three kinds of tests:
> 
>  Our main test suite, which is surely user-oriented, since it is mostly
>  user code.
> 
>  the old ACVC tests
> 
>  the new ACVC tests
> 
> In practice we find the general results on the new ACVC tests closely mirror
> the general results on the main test suite.

That _is_ encouraging. In fact, it's so encouraging that I think we should
do this for all vendor regression suites, to further build confidence in the
ACVC. In fact, let's be bold. If there is a vendor regression suite that _fails_
to generally track the ACVC results, then there should be some requirement to 
propose an update to the ACVC based on that result. If the divergence were due
to some inapplicable condition, then of course the ACVC should not be updated.
However, for the other cases, it should.

Good idea! Wish I'd thought of it.

> On the other hand, the old ACVC tests seem to pick up a somewhat separate
> set of errors, and often yield conflicting results (lots of problems when
> the other two sets show none, or no problems when the other two sets
> show lots).

Of course, there is also the possibility that those results, while not applicable
to GNAT users, might be very relevant to users from other vendors. How do we know
whether those old tests still have value?

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-24  0:00                           ` Robert Dewar
@ 1996-04-26  0:00                             ` Ken Garlington
  1996-04-27  0:00                               ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-26  0:00 UTC (permalink / raw)



Robert Dewar wrote:
> 
> I guess that neither you nor Ken has paid any attention to what is going
> on with ACVC 2.1, but the whole idea of this effort is to make sure that
> the ACVC suite better reflects actual user usage. I would be interested
> in comments from either of you on this reformulation, and in your reaction
> to the new tests, but to make useful comments you will have to spend more
> time actually studying the tests and the ACVC process.

OK, well, I'll restate one comment I made AFTER studying the new ACVC process
and the tests (so far as the documents on the AdaIC server would permit):

If the writers of the ACVC tests are expected to write tests that reflect how
Ada is really used, how do they gain this information? For that matter, when
looking at a particular test, how do I judge whether that test reflects common
Ada usage? I know how I use Ada, but how do I know whether my usage is common
or "marginal"?

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-24  0:00                         ` Robert Dewar
@ 1996-04-26  0:00                           ` Ken Garlington
  1996-04-27  0:00                             ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-26  0:00 UTC (permalink / raw)



Robert Dewar wrote:
> 
> Your experience with compiler implementation, or F-22 code? If you are
> speaking of experience in implementing a compiler, what compiler?
> Because I have worked on many commercial compilers, and NEVER found
> this to be true.

It is the F-22 program's experience that, of the hundreds of change 
requests we as users have submitted to our compiler vendors, the 
majority have test cases which are both small and non-proprietary.

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-24  0:00                               ` Robert Dewar
  1996-04-26  0:00                                 ` John McCabe
  1996-04-26  0:00                                 ` John McCabe
@ 1996-04-26  0:00                                 ` Ken Garlington
  2 siblings, 0 replies; 100+ messages in thread
From: Ken Garlington @ 1996-04-26  0:00 UTC (permalink / raw)



Robert Dewar wrote:
> 
> I suspect that Ken has not looked at them either. Ken, you keep asking
> about how the ACVC process is being improved. Why not look?

Why not read the older posts where I said I _did_ look, going so far as 
to quote from the ACVC documents I reviewed on the AdaIC server? Why not 
read my response to Mr. McCabe, discussing what I found? Why not answer 
my questions regarding what I _couldn't_ find, AFTER the review?

If you have other documents you wish for me to review (and they, in
fact, can be accessed in a reasonably easy fashion), please feel free to
send me pointers to them. If not, please discontinue asking me to review
something that you can't identify.

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Tiring Arguments Around (not about) Two Questions [VERY LONG]
  1996-03-25  0:00 ` Robert Dewar
                     ` (9 preceding siblings ...)
  1996-04-25  0:00   ` Tiring Arguments Around (not about) Two Questions [VERY LONG] Laurent Guerby
@ 1996-04-26  0:00   ` Ken Garlington
  1996-04-29  0:00     ` Philip Brashear
  10 siblings, 1 reply; 100+ messages in thread
From: Ken Garlington @ 1996-04-26  0:00 UTC (permalink / raw)



Laurent Guerby wrote:
> 
> Ken Garlington writes
> : Gary McKee wrote:

> *** I see ACVC validation as a more "objective" approach. There are
> (to simplify) two categories of tests:

This implies that evaluation can't be objective. However, there are many
examples of evaluations done with precise, objective criteria. This
also implies that ACVC validation is always objective. However, with respect
to extensions, the evaluation is (in my mind) subjective, based on the vendor's
interpretation of the meaning of "extension." As Dr. Dewar has pointed out,
there is not always full agreement on the meaning of "extension."

This doesn't convince me that a bright line exists between validation and
evaluation. There is certainly such a line between the ACVC and ACES today,
but that doesn't mean it makes sense for that line to stay as is.

>    In the Ada community validation is a strong concern; something not
> validated is not a compiler for most users (whether that is reasonable
> is another question ;-).

Not only is it _another_ question, it's MY question!

I'm glad to see that at least one person has, at least unintentionally, stumbled
onto the idea that describing what is done today is not the same as describing
what _should_ be done today. Bless you!

> *** I see the ACES evaluation as more subjective, since performance is
> measured.

Actually, the test is objective. The _interpretation_ of that test might be
subjective, but the test is quite objective. Similarly, the ACVC is quite
objective, but the interpretation of what that test means to an end-user
is quite subjective, at least in my mind.

> Just have a look at what is happening with SPECs in the
> microprocessor market to understand that performance measurement is
> hard to achieve in an objective way.

Yes, let's! It's almost impossible to sell a microprocessor today without
the vendor quoting SPECmarks. Users expect the vendor to have that data
available. They don't expect the vendor to say: "Measures of microprocessor
performance? I expect the user to determine that!"

> For example, some Intel SPECs are
> nearly impossible to reproduce with real market motherboards. SPECs
> are provided by vendors.

And yet, SPECmarks are useful as a general guide to microprocessor selection.
Granted, once you've narrowed the field, you have to validate those numbers.
But SPECmarks are used all the time as a criterion for selection, along with
things like power dissipation, size, and so forth. All criteria which the
vendor does once, and shares with potential customers. In fact, they even
include such measures in their ads!

Granted, common measures aren't always common. That comes with the territory.
But I think your example overall supports my position.

> This is not the case for ACES, which is most
> of the time (not an obligation) run by users, and a complete set of
> tools comes with ACES, written especially for users (note that there's
> no equivalent for ACVC). The latest ACES provides the "quicklook"
> facility, an easy-to-run set of tests expected to be run by an average
> user in one day.

Actually, users can also run a lot of standard benchmarks, like SPECmarks,
on their own as well. However, they don't have to do it for every
potential part, since the vendors will provide them with that data. This
is a productivity benefit.

Too bad we can't have that for compilers. Or can we?

>    I think putting ACES on the user side is the right (political)
> approach (again, think about SPECs).

Keep in mind there's THREE sides: the vendor, the user, and a neutral
third-party. Nonetheless, when I think about SPECs, I think of a de facto
standard that all vendors use, and to which all users have access. Again,
you're not exactly discouraging me, here!

> Of course the user has to know
> what he wants and what he is talking about, but ACES reports give
> useful information to select a compiler tailored to your needs.

Assuming ACES reports are generally useful, then it seems we need that
data available for all vendors. If you think having the vendor do the
test will cause problems, why not a third party? Why should each potential
user pay to do the same ACES run on a particular compiler?

> *** Both ACVC and ACES are evolving, and as far as I can judge, in
> the right direction. For example some ACVC tests have moved to ACES,

Holy cow! There were tests that were once validation tests, and then
somehow became evaluation tests? How could this be? You can't use a
validation test for evaluation purposes (or vice versa)... right?

> And the new ACVC (2.x) tests have
> very little in common with the old ones (1.x). This is my opinion, but it
> is important to note that these processes are very open to vendors and
> users, and that everything is available, with papers and sources, so it's
> easy to have a look at them; at this point in the discussion that
> becomes important.

It's easy to have a look at them. However, it's impossible, as far as I
can tell, to actually have a conversation that questions the criteria
under which they are developed.

>    The "Ada community" has a long and interesting history (plus active
> development ;-). But there is also a lot of easy bashing without
> complete knowledge around. Please have a careful look at all these
> _freely_ available items before asserting such things.

Speaking of bashing someone using incomplete information...

I withdraw from this conversation. Good bye.

-- 
LMTAS - "Our Brand Means Quality"




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-24  0:00                               ` Robert Dewar
@ 1996-04-26  0:00                                 ` John McCabe
  1996-04-26  0:00                                 ` John McCabe
  1996-04-26  0:00                                 ` Ken Garlington
  2 siblings, 0 replies; 100+ messages in thread
From: John McCabe @ 1996-04-26  0:00 UTC (permalink / raw)



dewar@cs.nyu.edu (Robert Dewar) wrote:

>John McCabe said

>"I'm loath to believe that any of the ACVC tests are truly useless
>(although I have to admit I haven't looked at them so far), but in a
>general case, where the particular test mentioned was _truly_ useless,
>then I would choose the performance improvement."

>I really think that if you want to express opinions on the ACVC suite,
>new or old, you might find it quite helpful to look at the tests!

I know that; that's why what I've written above _isn't_ an opinion!




Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-26  0:00                             ` Ken Garlington
@ 1996-04-27  0:00                               ` Robert Dewar
  0 siblings, 0 replies; 100+ messages in thread
From: Robert Dewar @ 1996-04-27  0:00 UTC (permalink / raw)



Ken asks

"If the writers of the ACVC tests are expected to write tests that reflect how
Ada is really used, how do they gain this information? For that matter, when
looking at a particular test, how do I judge whether that test reflects common
Ada usage? I know how I use Ada, but how do I know whether my usage is common
or "marginal"?
"

First of all, you are way ahead just asking the question "How would this
feature be used in a real program?" That question was not even asked
in the test generation philosophy of the earlier ACVC suites.

Very often, the answer is pretty obvious and non-controversial.

But it is true that sometimes it is not so obvious. The process we
follow is that tests are reviewed by a review team that is composed
of experienced users, implementors, and testers. If all members of
the review team agree that a given test is realistically usage-oriented,
that is certainly not a guarantee that this is the case, but it is
a useful indicator.

There is no other way to do things *at this stage*, since there is not
enough experience with many of these features to actually measure usage
patterns, though if there is an ACVC 2.2 to follow on, then you could
certainly consider such measurement.

As I mentioned in an earlier message, our anecdotal experience with GNAT
is encouraging. In broad terms, we see the new Ada 95 tests have similar
general testing profiles to the tests that come from our users' code (and
which presumably are definitely user-oriented).

As for your last question, all I can say is that is why we have more
than one person on the review team. It helps avoid individual
idiosyncratic usage influencing the suite (note I said helps avoid,
not avoids; this is definitely not a purely objective process).

But as I said before, why not look at the 2.1 tests and see what you
think in comparison to the ACVC 1.11 tests? They are available for
your perusal, and comments are welcome!





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-26  0:00                           ` Ken Garlington
@ 1996-04-27  0:00                             ` Robert Dewar
  1996-04-29  0:00                               ` Cordes MJ
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-27  0:00 UTC (permalink / raw)



"It is the F-22 program's experience that, of the hundreds of change
requests we as users have submitted to our compiler vendors, the
majority have test cases which are both small and non-proprietary."

I see no reason to think that the profile for the F-22 would be similar
to that of a compiler; in fact I would probably say the same of
GNAT, but 49% can be quite a lot of tests!





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-29  0:00                               ` Cordes MJ
@ 1996-04-29  0:00                                 ` Robert Dewar
  1996-05-06  0:00                                   ` John McCabe
  0 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-04-29  0:00 UTC (permalink / raw)



Michael said

"Ken was wrong when he said "majority". He should have said "vast
majority". But if you want specific numbers, let's say that 97% of
our tests to demonstrate compiler errors are both small and non-
proprietary. (I give myself a 2% margin of error for this estimate.)
And while I'm at it, I should also define "small". Our small examples
are generally between 5 and 300 lines of Ada code. And most (let's
say 70%) of these are less than 50 lines."

OK, well that *is* interesting. What that says is that the F-22 code
may not be entirely typical code. Actually I hope that is the case.
A lot of the Ada 95 code we use seems extraordinarily complex, almost
as if every possible feature and every possible interaction were
deliberately being introduced. I would not at all be surprised to
find that the F-22 takes a more conservative approach (as does GNAT
itself; the GNAT compiler very rarely runs into compiler
problems -- that's partly because it is its own best regression
test, but also because we are very conservative in the use of
fancy features in the compiler itself).

What we found with Ada 83 was that what was least well tested in the
ACVC suite was complex interactions between features.

It is true that many of the problems in GNAT can be reduced to simple
test programs, but there are also many examples that show up only
in large cases. I will give just one example of this. The Irix
implementation does not handle jumps of more than 128K bytes nicely.
It takes a fairly big program to show up this bug! Big enough, in fact,
that no C program had ever run into it, but sure enough one big project
had some VERY large Ada procedures that did bump into this limit.






^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Tiring Arguments Around (not about) Two Questions [VERY LONG]
  1996-04-26  0:00   ` Ken Garlington
@ 1996-04-29  0:00     ` Philip Brashear
  0 siblings, 0 replies; 100+ messages in thread
From: Philip Brashear @ 1996-04-29  0:00 UTC (permalink / raw)




I've seen a couple of references to ACES tests that were taken from the
ACVC.  Now, I've been the technical manager for the ACES project for
several years (beginning with supervising our subcontractor, Boeing, in
the merger of the old ACEC and the British AES to form the current ACES).

I know I'm aging, and my memory isn't what it used to be (actually, I don't
remember a time when my memory was trustworthy), but I don't recall any
instance of incorporating an ACVC test into the ACES.  The nearest to that
idea that I recall was surveying the proposed ACVC 2.x tests for possible
evaluation TOPICS that we might have overlooked.

Phil Brashear
CTA INCORPORATED




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-27  0:00                             ` Robert Dewar
@ 1996-04-29  0:00                               ` Cordes MJ
  1996-04-29  0:00                                 ` Robert Dewar
  0 siblings, 1 reply; 100+ messages in thread
From: Cordes MJ @ 1996-04-29  0:00 UTC (permalink / raw)



Robert Dewar (dewar@cs.nyu.edu) wrote:
: "It is the F-22 program's experience that, of the hundreds of change
: requests we as users have submitted to our compiler vendors, the
: majority have test cases which are both small and non-proprietary."

: I see no reason to think that the profile for the F-22 would be similar
: to that of a compier, but in fact I would probably say the same of
: GNAT, but 49% can be quite a lot of tests!

The change requests which Ken was referring to *were* written against
the compiler - I see no reason to think that the profile would be so
dissimilar.

Ken was wrong when he said "majority". He should have said "vast
majority". But if you want specific numbers, let's say that 97% of
our tests to demonstrate compiler errors are both small and non-
proprietary. (I give myself a 2% margin of error for this estimate.)
And while I'm at it, I should also define "small". Our small examples
are generally between 5 and 300 lines of Ada code. And most (let's
say 70%) of these are less than 50 lines.

--
---------------------------------------------------------------------
Michael J Cordes
Phone: (817) 935-3823
Fax:   (817) 935-3800
EMail: CordesMJ@lmtas.lmco.com
---------------------------------------------------------------------




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-05-06  0:00                                   ` John McCabe
@ 1996-05-06  0:00                                     ` Robert Dewar
  1996-05-08  0:00                                       ` John McCabe
  1996-05-07  0:00                                     ` Mike Cordes
  1996-05-07  0:00                                     ` Mike Cordes
  2 siblings, 1 reply; 100+ messages in thread
From: Robert Dewar @ 1996-05-06  0:00 UTC (permalink / raw)



John published an example of a supposed bug report. But it concerned
pragma Shared, and pragma Shared only has non-null semantics in the
presence of tasking, but there was no task in the program. So this
is clearly not a bug report.

My guess is that you were assuming that pragma Shared should somehow
behave reasonably with respect to undefined outside accesses to something
with an address clause. OK, that is a reasonable assumption, but there is
no requirement to this effect in the RM. An implementation is free to
ignore pragma Shared in a program with no tasks (because no observable
semantic difference is possible under these circumstances between a
shared and non-shared implementation of a given variable).

I don't know if the original example had tasks or not. If so, it is possible
that the purpose of the report has been destroyed in the cutting down
process. This often happens. On the other hand if the original program
was expecting pragma Shared to have some effect on non-tasking situations,
then this is an expectation not backed up by the RM.

Remember that the RM is always talking about "as-if" semantics. When it
gives an implementation model, then any implementation model which is
formally semantically equivalent is acceptable.

So you can't appeal to the RM here -- you can talk about what would be
the most useful implementation approach. Certainly GNAT would not ignore
the pragma in this situation, even though it is formally allowed to.
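
As a minimal sketch of the tasking case (a hypothetical example, not
the original report), here is a program in which pragma Shared does
constrain the implementation: a second task updates the flag, so the
main loop may not cache it in a register across iterations:

procedure Shared_Sketch is

   Done : Boolean := False;
   pragma Shared (Done);   -- Ada 83; roughly pragma Atomic in Ada 95

   task Setter;

   task body Setter is
   begin
      delay 1.0;           -- stand-in for real work
      Done := True;        -- the update the polling loop must observe
   end Setter;

begin
   while not Done loop     -- Done must be re-read on each iteration
      null;
   end loop;
end Shared_Sketch;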





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-04-29  0:00                                 ` Robert Dewar
@ 1996-05-06  0:00                                   ` John McCabe
  1996-05-06  0:00                                     ` Robert Dewar
                                                       ` (2 more replies)
  0 siblings, 3 replies; 100+ messages in thread
From: John McCabe @ 1996-05-06  0:00 UTC (permalink / raw)



Just as a brief example, here is a program I emailed to you which
proves the existence of a bug in my MIL-STD-1750A implementation (at
the time you declined to comment on it). I've removed all the comments
it originally had, to give an idea of how much Ada is involved :-

with system;
procedure main is

   type int16 is range 0..16#FFFF#;
   for int16'size use 16;

   external_counter_value : int16;
   pragma shared(external_counter_value);   
   for external_counter_value use at 16#4000#;

   initial_counter_value : constant int16 := 16#7AAA#;
   wait_for_value : constant int16 := 16#0#;

procedure wait_for_change is
   wait_for_value_reached : boolean := false;
begin
   external_counter_value := initial_counter_value;

   while (wait_for_value_reached = false)
   loop
      if (external_counter_value = wait_for_value)
      then
         wait_for_value_reached := true;
      end if;
   end loop;
end wait_for_change;

begin
   for i in 0 .. 4 loop
      wait_for_change;
   end loop;
end main;

This is the typical size of the "vast majority" of examples I have
provided in bug reports to compiler vendors. As you can obviously see,
it is very small, and _very_ non-proprietary. In actual fact when I
experimented further I found it could be made even smaller, but I left
it this way to try to give the compiler extra work to do.

The problem with this example turned out to be that the compiler
ignored the pragma shared if the object was address claused. What
particularly surprised me with this one was that I would have thought
this would be a fairly typical type of construct to use, yet the
problem didn't appear to have been found before!

Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-05-06  0:00                                   ` John McCabe
  1996-05-06  0:00                                     ` Robert Dewar
@ 1996-05-07  0:00                                     ` Mike Cordes
  1996-05-07  0:00                                     ` Mike Cordes
  2 siblings, 0 replies; 100+ messages in thread
From: Mike Cordes @ 1996-05-07  0:00 UTC (permalink / raw)



john@assen.demon.co.uk (John McCabe) wrote:

  <snip>
> In actual fact when I
>experimented further I found it could be made even smaller, but I left
>it this way to try to give the compiler extra work to do.
  <snip>

While discussing compiler error reporting techniques, Mary Poppins told
Mr. Banks:

  "Good [small] enough is a job well done!"

I won a free Coke from McDonald's because I knew this bit of Disney trivia.

:-)

Mike
###





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-05-06  0:00                                   ` John McCabe
  1996-05-06  0:00                                     ` Robert Dewar
  1996-05-07  0:00                                     ` Mike Cordes
@ 1996-05-07  0:00                                     ` Mike Cordes
  2 siblings, 0 replies; 100+ messages in thread
From: Mike Cordes @ 1996-05-07  0:00 UTC (permalink / raw)



john@assen.demon.co.uk (John McCabe) wrote:

>Just as a brief example, here is a program I emailed to you which
>proves the existence of a bug in my MIL-STD-1750A implementation (at
>the time you declined to comment on it). I've removed all the comments
>it originally had to get an idea of how much Ada is involved :-

  <example snipped>

>This is the typical size of the "vast majority" of examples I have
>provided in bug reports to compiler vendors. As you can obviously see,
>it is very small, and _very_ non-proprietary. In actual fact when I
>experimented further I found it could be made even smaller, but I left
>it this way to try to give the compiler extra work to do.
>
  <more snipped>

John,

How do you verify the existence/correction of compiler bugs? I.e., do
you include command scripts with the Ada source which verify behavior, or
is it simply general practice to have the compiler generate an assembly
listing and have the examiner check the generated code?

In the example you provided, it would be *simple* (but not *automatic*)
to check the generated code for correct behavior. An automated test is
possible if you write a command script which compiles (thus identifying
the exact set of switches used), links, and executes the example on
a debugger/simulator. 

The automated tests turn out to be larger in size (i.e., more files,
not more SLOCs), but the time to verify is almost nil. This is a
*HUGE* advantage when you are talking about running a large set (several
hundred or more) of tests.
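
As a sketch of how such a self-checking test might look (hypothetical,
not one of our actual change-request tests): the test decides pass/fail
itself and signals the verdict through its exit status, so the driver
script only has to compile, link, run it on the simulator, and check
one integer rather than a listing:

with Ada.Command_Line; use Ada.Command_Line;
with Ada.Text_IO;      use Ada.Text_IO;

procedure Self_Check is
   -- Hypothetical self-checking test: the expected result is computed
   -- and checked here, not read off an assembly listing by hand.
   type Int16 is range 0 .. 16#FFFF#;
   for Int16'Size use 16;
   X : Int16 := 16#7AAA#;
begin
   X := X / 2;                   -- placeholder for the construct under test
   if X = 16#3D55# then
      Put_Line ("PASSED");
      Set_Exit_Status (Success); -- the driver script checks this status
   else
      Put_Line ("FAILED");
      Set_Exit_Status (Failure);
   end if;
end Self_Check;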

After years of interfacing between Ada developers and Ada compiler 
vendors, I am still forced to tell the developers that "examine the
assembly code for correct behavior" is not a satisfactory verification
criteria for compiler error reports. (I'm not going to touch the subject
of verification of optimizations and enhancements here ;)

Mike
###





^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
  1996-05-06  0:00                                     ` Robert Dewar
@ 1996-05-08  0:00                                       ` John McCabe
  1996-05-08  0:00                                         ` TARTAN and TI Tom Robinson
       [not found]                                         ` <Dr46LG.2FF@world.std.com>
  0 siblings, 2 replies; 100+ messages in thread
From: John McCabe @ 1996-05-08  0:00 UTC (permalink / raw)



dewar@cs.nyu.edu (Robert Dewar) wrote:

>John published an example of a supposed bug report. But it concerned
>pragma Shared, and pragma Shared only has non-null semantics in the
>presence of tasking, but there was no task in the program. So this
>is clearly not a bug report.

Robert, this topic has been discussed before by a number of people in
this group, and by email. I have also read your report on "Shared
Variables and Ada 9X Issues". It is quite clear from what is written
there that you believe the shared variables section of RM83 to be
extremely unclear, so please do not use the grammar of that section as
an argument defending a duff implementation, especially as you were so
involved in the effort to clear this mess up in Ada 9X.

>My guess is that you were assuming that pragma Shared should somehow
>behave reasonably with respect to undefined outside accesses to something
>with an address clause. OK, that is a reasonable assumption, 

I would say that's more than a reasonable assumption given that (as I
stated towards the bottom of that message) the program works _exactly_
as I had expected if the address clause is left out. I cannot think of
any defence for an implementation to do this except that it is a bug -
hence this clearly _is_ a bug report.

>but there is no requirement to this effect in the RM.

True - but see above.

<.. snip ..>

>So you can't appeal to the RM here -- you can talk about what would be
>the most useful implementation approach. Certainly GNAT would not ignore
>the pragma in this situation, even though it is formally allowed to.

I imagine most compilers would not ignore the pragma in this
situation. I know, for example, that Tartan's compiler for
MIL-STD-1750A doesn't ignore the pragma. It is the consistency of the
implementation that is at fault here - why ignore the pragma when the
object is address claused (suggesting outside influences - possibly a
"phantom task"?) but take full account of it when no address clause is
given?

Anyway, this is getting a bit off-topic; it was only meant as an
example of a bug report that I had handy at the time. If you want to
discuss this particular example further, please contact me via email,
but I think it's been thrashed to death already.

Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

* TARTAN and TI
  1996-05-08  0:00                                       ` John McCabe
@ 1996-05-08  0:00                                         ` Tom Robinson
  1996-05-09  0:00                                           ` Arthur Evans Jr
       [not found]                                         ` <Dr46LG.2FF@world.std.com>
  1 sibling, 1 reply; 100+ messages in thread
From: Tom Robinson @ 1996-05-08  0:00 UTC (permalink / raw)



My cube-mate said he saw something fly across the DSP newsgroup saying
that Texas Instruments has purchased Tartan.

Is this true?  If so, has there been any news about what TI is planning
to do with Tartan?

Tom Robinson
GDE Systems, Inc







^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: TARTAN and TI
  1996-05-08  0:00                                         ` TARTAN and TI Tom Robinson
@ 1996-05-09  0:00                                           ` Arthur Evans Jr
  0 siblings, 0 replies; 100+ messages in thread
From: Arthur Evans Jr @ 1996-05-09  0:00 UTC (permalink / raw)



In article <4mr5oa$o2h@gde.GDEsystems.COM>, robinson@GDESystems.com wrote:

> My cube-mate said he saw something fly across the DSP news group that
> says that Texas Instruments has purchased Tartan.
>
> Is this true?  If so, have there been news about what TI is planning
> on doing with Tartan?

True enough.  For a press release, see
     http://www.ti.com/sc/docs/news/1996/96022a.htm

I've heard that TI will leave the Pittsburgh facility in place, doing
pretty much what they are doing now.  All products will be continued,
though I would expect a greater emphasis on those for TI chips.

This will surely be interesting to watch.

Art Evans

Arthur Evans Jr, PhD        Phone: 412-963-0839
Ada Consulting              FAX:   412-963-0927
461 Fairview Road
Pittsburgh PA  15238-1933
evans@evans.pgh.pa.us




^ permalink raw reply	[flat|nested] 100+ messages in thread

* Re: Ada Core Technologies and Ada95 Standards
       [not found]                                         ` <Dr46LG.2FF@world.std.com>
@ 1996-05-09  0:00                                           ` John McCabe
  0 siblings, 0 replies; 100+ messages in thread
From: John McCabe @ 1996-05-09  0:00 UTC (permalink / raw)



bobduff@world.std.com (Robert A Duff) wrote:

>In article <831590385.6539.0@assen.demon.co.uk>,
>John McCabe <john@assen.demon.co.uk> wrote:
>>... It is quite clear from what is written
>>there that you believe the shared variables section of RM83 to be
>>extremely unclear, so please do not use the grammar of that section as
>>an argument defending a duff implementation, ...

>Yeah, please.

>>... especially as you were so
>>involved in the effort to clear this mess up in Ada 9X.

>- Bob

I see what you mean! Completely coincidental, I assure you!

Best Regards
John McCabe <john@assen.demon.co.uk>





^ permalink raw reply	[flat|nested] 100+ messages in thread

end of thread, other threads:[~1996-05-09  0:00 UTC | newest]

Thread overview: 100+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1996-03-25  0:00 Ada Core Technologies and Ada95 Standards Kenneth Mays
1996-03-25  0:00 ` Robert Dewar
1996-03-28  0:00   ` John McCabe
1996-03-28  0:00     ` Robert Dewar
1996-03-29  0:00       ` John McCabe
1996-03-29  0:00         ` Robert Dewar
1996-04-01  0:00           ` Ken Garlington
1996-04-01  0:00             ` Robert Dewar
1996-04-02  0:00               ` John McCabe
1996-04-02  0:00               ` Ken Garlington
1996-04-02  0:00                 ` John McCabe
1996-04-02  0:00                   ` Robert A Duff
1996-04-02  0:00                   ` Robert Dewar
1996-04-03  0:00                     ` Ken Garlington
1996-04-04  0:00                       ` Robert Dewar
1996-04-04  0:00                         ` Ken Garlington
1996-04-05  0:00                           ` Robert Dewar
1996-04-10  0:00                             ` Ken Garlington
1996-04-10  0:00                 ` Robert Dewar
1996-04-10  0:00                   ` Robert Dewar
1996-04-12  0:00                     ` Philip Brashear
1996-04-12  0:00                       ` Robert Dewar
1996-04-15  0:00                     ` Tiring Arguments Around (not about) Two Questions Ken Garlington
1996-04-15  0:00                       ` Gary McKee
1996-04-16  0:00                         ` Ken Garlington
1996-04-17  0:00                       ` Kenneth Almquist
1996-04-18  0:00                     ` Ada Core Technologies and Ada95 Standards John McCabe
1996-04-19  0:00                       ` Robert Dewar
1996-04-22  0:00                         ` Ken Garlington
1996-04-22  0:00                         ` John McCabe
1996-04-23  0:00                           ` Ken Garlington
1996-04-24  0:00                             ` John McCabe
1996-04-24  0:00                               ` Robert Dewar
1996-04-26  0:00                                 ` John McCabe
1996-04-26  0:00                                 ` John McCabe
1996-04-26  0:00                                 ` Ken Garlington
1996-04-25  0:00                               ` Ken Garlington
1996-04-24  0:00                             ` Robert Dewar
1996-04-26  0:00                               ` Ken Garlington
1996-04-24  0:00                           ` Robert Dewar
1996-04-26  0:00                             ` Ken Garlington
1996-04-27  0:00                               ` Robert Dewar
1996-04-15  0:00                   ` Ken Garlington
1996-04-16  0:00                     ` Robert Dewar
1996-04-16  0:00                       ` Ken Garlington
1996-04-16  0:00                         ` Robert Dewar
1996-04-02  0:00             ` John McCabe
1996-04-02  0:00               ` Robert A Duff
1996-04-16  0:00                 ` John McCabe
1996-04-16  0:00                   ` Robert Dewar
1996-04-22  0:00                     ` John McCabe
1996-04-23  0:00                       ` Ken Garlington
1996-04-24  0:00                         ` Robert Dewar
1996-04-26  0:00                           ` Ken Garlington
1996-04-27  0:00                             ` Robert Dewar
1996-04-29  0:00                               ` Cordes MJ
1996-04-29  0:00                                 ` Robert Dewar
1996-05-06  0:00                                   ` John McCabe
1996-05-06  0:00                                     ` Robert Dewar
1996-05-08  0:00                                       ` John McCabe
1996-05-08  0:00                                         ` TARTAN and TI Tom Robinson
1996-05-09  0:00                                           ` Arthur Evans Jr
     [not found]                                         ` <Dr46LG.2FF@world.std.com>
1996-05-09  0:00                                           ` Ada Core Technologies and Ada95 Standards John McCabe
1996-05-07  0:00                                     ` Mike Cordes
1996-05-07  0:00                                     ` Mike Cordes
1996-04-10  0:00             ` Robert Dewar
1996-04-15  0:00               ` Ken Garlington
1996-04-16  0:00                 ` Robert Dewar
1996-04-16  0:00                   ` Ken Garlington
1996-04-16  0:00                     ` Robert Dewar
1996-04-18  0:00                       ` Ken Garlington
1996-03-31  0:00         ` Geert Bosch
1996-04-01  0:00           ` Robert Dewar
1996-04-01  0:00             ` Mike Young
1996-04-03  0:00               ` Robert Dewar
1996-03-29  0:00   ` Applet Magic works great, sort of Vince Del Vecchio
1996-03-29  0:00   ` Ada Core Technologies and Ada95 Standards steved
1996-03-29  0:00     ` Applet Magic works great, sort of Bob Crispen
1996-04-03  0:00   ` Ada Core Technologies and Ada95 Standards Robert I. Eachus
1996-04-03  0:00   ` Ken Garlington
1996-04-04  0:00     ` Robert Dewar
1996-04-04  0:00       ` John McCabe
1996-04-05  0:00         ` Robert Dewar
1996-04-06  0:00           ` Ada validation is virtually worthless Raj Thomas
1996-04-06  0:00             ` Robert Dewar
1996-04-08  0:00               ` Arthur Evans Jr
1996-04-07  0:00           ` Ada Core Technologies and Ada95 Standards John McCabe
1996-04-05  0:00   ` Robert I. Eachus
1996-04-10  0:00     ` Cordes MJ
1996-04-10  0:00       ` Robert Dewar
1996-04-15  0:00         ` Ken Garlington
1996-04-16  0:00           ` Robert Dewar
1996-04-16  0:00             ` Ken Garlington
1996-04-16  0:00               ` Robert Dewar
1996-04-11  0:00   ` Robert I. Eachus
1996-04-11  0:00   ` Robert I. Eachus
1996-04-19  0:00   ` Laurent Guerby
1996-04-25  0:00   ` Tiring Arguments Around (not about) Two Questions [VERY LONG] Laurent Guerby
1996-04-26  0:00   ` Ken Garlington
1996-04-29  0:00     ` Philip Brashear

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox