From: Laurent.Guerby@enst-bretagne.fr (Laurent Guerby)
Subject: Re: Tiring Arguments Around (not about) Two Questions [VERY LONG]
Date: 1996/04/25
Message-ID: <4xhgu8tni4.fsf_-_@leibniz.enst-bretagne.fr>
References: <00001a73+00002c20@msn.com>
Organization: Telecom Bretagne
Newsgroups: comp.lang.ada

Ken Garlington writes:
: Gary McKee wrote:
: [deleted]
: > There is a significant difference in purpose between validation (ACVC) and
: > evaluation (ACES).
:
: Why is there a significant difference?
:
: "evaluate" - "to determine the significance or worth of usu. by
: careful appraisal and study."
:
: "validate" - "to support or corroborate on a sound or authoritative basis."
:
: Which do I want?  As Deion says, "Both."
:
: Why can't we corroborate on a sound or authoritative basis the
: significance or worth of Ada compilers by careful appraisal and study?

There is a significant difference between evaluation and validation, and
it is reflected in the dual approach: ACVC (validation) and ACES
(evaluation).

***

I see ACVC validation as a more "objective" approach. There are (to
simplify) two categories of tests:

- the B tests, which check that invalid constructs are detected at
  compile time; the vendor has to provide a huge listing of errors and
  show that all of them are caught (note that this is a bit subjective),

- the C tests, which check run-time behaviour and report each test as
  passed, failed or not applicable; the vendor has to provide a huge
  listing of this tri-state output.

For validation purposes there is also a declaration that the compiler has
no deliberate extensions to the language and, of course, the complete set
of switches used, OS version, etc. (some clients want to rerun the
validation suite, which is perfectly reasonable for some kinds of
projects).

In the Ada community validation is a strong concern: something not
validated is not a compiler for most users (whether that is reasonable is
another question ;-). Ada compiler writers run the validation in house,
provide the listings, and then are (very) happy to announce a successful
validation.
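To make the two ACVC test categories above more concrete, here is a small
Ada sketch in the spirit of a C test: a self-checking program that
exercises one run-time property and prints exactly one of the tri-state
outcomes. It is an illustration only, not an actual ACVC test (the real C
tests share a common Report package and use helpers such as Ident_Int to
keep the compiler from optimizing the check away, and would report "not
applicable" for, say, an optional feature the implementation omits); a B
test, by contrast, is a deliberately illegal compilation unit whose
expected diagnostics are marked with "-- ERROR:" comments.

   with Ada.Text_IO; use Ada.Text_IO;

   procedure C_Style_Check is

      --  Illustration only, not an actual ACVC test: exercise a
      --  run-time range check and report the outcome in the
      --  tri-state style described above.

      type Small is range 1 .. 10;

      X : Small := Small'First;

      function Ident (I : Integer) return Integer is
      begin
         return I;  --  keeps the value away from constant folding,
                    --  in the spirit of the suite's Ident_Int helper
      end Ident;

   begin
      --  11 is outside 1 .. 10, so the conversion must fail its
      --  range check and raise Constraint_Error.
      X := Small (Ident (11));
      Put_Line ("FAILED: no Constraint_Error, X =" & Small'Image (X));
   exception
      when Constraint_Error =>
         Put_Line ("PASSED: range check performed at run time");
   end C_Style_Check;

Building and running this with any Ada 95 compiler (e.g. gnatmake
c_style_check with GNAT) should print the PASSED line; a validation run
is, roughly, a huge audited pile of such outcomes plus the B-test error
listings.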
***

I see the ACES evaluation as more subjective, since performance is
measured. Just have a look at what is happening with SPEC in the
microprocessor market to understand how hard it is to measure performance
in an objective way: for example, some Intel SPEC numbers are nearly
impossible to reproduce on real market motherboards. SPEC figures are
provided by vendors. This is not the case for ACES, which is most of the
time (though not an obligation) run by users, and a complete set of tools
comes with ACES, written especially for users (note that there is no
equivalent for ACVC). The latest ACES provides the "quicklook" facility,
an easy-to-run subset of the tests that an average user is expected to
complete in one day.

There are also two categories of tests (again, to simplify):

- "wall clock time" measurements (with user-provided timing routines),
  reported with standard deviation, on code and compiler (if I remember
  well). The tests are well classified, with for example a good
  measurement of how the use of high-level features impacts performance.
  Of course interpretation is a tricky and "subjective" issue, but so are
  configuration, switches, run-time settings and so on.

- a list of questions about the environment that comes with the compiler:
  debugger, interfaces, bindings and whatever. This is completely
  subjective, and the market is there for this kind of evaluation.

I think putting ACES on the user side is the right (political) approach
(again, think about SPEC). Of course the user has to know what he wants
and what he is talking about, but ACES reports give useful information
for selecting a compiler tailored to your needs.

***

Both ACVC and ACES are evolving, and as far as I can judge, in the right
direction. For example some ACVC tests have moved to ACES, quicklook has
been added, etc. And the new ACVC (2.x) tests have very little in common
with the old ones (1.x).

This is my opinion, but it is important to note that these processes are
very open to vendors and users, and that everything is available (papers,
sources), so it is easy to have a look at them; at this point in the
discussion that becomes important.

Personal note: my knowledge of ACVC/ACES comes from reading the sources,
docs, papers and news (for the first three items, it really does not take
that much time to pick up a lot of useful knowledge ;-), but also from
discussions with the GNAT team (in particular Gary Dismukes, Cyrille
Comar and Robert Dewar), and from the development of the "mailserver" at
Ada Core Technologies (summer 1995).

: Why are these antonyms in the Ada community?

The "Ada community" has a long and interesting history (plus active
development ;-). But there is also a lot of easy bashing around, made
without complete knowledge. Please have a careful look at all these
_freely_ available items before asserting such things.

I think the Ada 9X project, managed by the AJPO with a very open attitude
(a positive thing that is not often associated with Ada, but has always
been there), took into account _all_ user/vendor feedback, as far as that
was possible. The new standard, the new ACVC, the new ACES and GNAT are
perfect examples of user/vendor-driven improvements (of the old standard,
old ACVC, old ACES and old Ada/Ed ;-). See also RTEMS, a new, freely
available (with sources) portable real-time run time.

[Thanks for reading all of this Ada 95 propaganda ;-]

--
-- Laurent Guerby, student at Telecom Bretagne (France), Team Ada.
-- "Use the Source, Luke. The Source will be with you, always (GPL)."
-- http://www-eleves.enst-bretagne.fr/~guerby/ (GATO Project).
-- Try GNAT, the GNU Ada 95 compiler (ftp://cs.nyu.edu/pub/gnat).