From: Ken Garlington
Subject: Re: Tiring Arguments Around (not about) Two Questions [VERY LONG]
Date: 1996/04/26
Message-ID: <3180CB4E.53BA@lmtas.lmco.com>
References: <00001a73+00002c20@msn.com>
Organization: Lockheed Martin Tactical Aircraft Systems
Newsgroups: comp.lang.ada

Laurent Guerby wrote:
>
> Ken Garlington writes
> : Gary McKee wrote:

> *** I see ACVC validation as a more "objective" approach. There are
> (to simplify) two categories of tests :

This implies that evaluation can't be objective. However, there are many
examples of evaluations done with precise, objective criteria.

This also implies that ACVC validation is always objective. However, with
respect to extensions, the evaluation is (in my mind) subjective, based on
the vendor's interpretation of the meaning of "extension." As Dr. Dewar has
pointed out, there is not always full agreement on the meaning of
"extension."

This doesn't convince me that a bright line exists between validation and
evaluation. There is certainly such a line between the ACVC and ACEC today,
but that doesn't mean it makes sense for that line to stay as is.

> In the Ada community validation is a strong concern, something not
> validated is not a compiler for most users (is that reasonable is
> another question ;-).

Not only is it _another_ question, it's MY question! I'm glad to see that
at least one person has, at least unintentionally, stumbled onto the idea
that describing what is done today is not the same as describing what
_should_ be done today. Bless you!

> *** I see the ACES evaluation as more subjective, since performance is
> measured.

Actually, the test is objective. The _interpretation_ of that test might be
subjective, but the test itself is quite objective. Similarly, the ACVC is
quite objective, but the interpretation of what that test means to an
end-user is quite subjective, at least in my mind.

> Just have a look to what is happening with SPECs in the
> microprocessor market to understand that performance measurement is
> hard to achieve in an objective way.

Yes, let's! It's almost impossible to sell a microprocessor today without
the vendor quoting SPECmarks. Users expect the vendor to have that data
available. They don't expect the vendor to say: "Measures of microprocessor
performance? I expect the user to determine that!"

> For example some Intel SPECs are
> nearly impossible to reproduce with real market motherboards. SPECs
> are provided by vendors.

And yet, SPECmarks are useful as a general guide to microprocessor
selection. Granted, once you've narrowed the field, you have to validate
those numbers. But SPECmarks are used all the time as a criterion for
selection, along with things like power dissipation, size, and so forth.
All of these are criteria which the vendor measures once and shares with
potential customers. In fact, they even include such measures in their ads!

Granted, common measures aren't always common. That comes with the
territory. But I think your example overall supports my position.
> This is not the case for ACES, which is most
> of the time (not an obligation) run by users and a complete set of
> tools come with ACES especially written for users (note that there's
> no equivalent for ACVC). Latest ACES provide the "quicklook" facility
> for a easy to run set of test, expected to be run by an average user
> in one day.

Actually, users can also run a lot of standard benchmarks, like SPECmarks,
on their own as well. However, they don't have to do it for every potential
part, since the vendors will provide them with that data. This is a
productivity benefit. Too bad we can't have that for compilers. Or can we?

> I think putting ACES on the user side is the right (political)
> approach (again, think about SPECs).

Keep in mind there are THREE sides: the vendor, the user, and a neutral
third party. Nonetheless, when I think about SPECs, I think of a de facto
standard that all vendors use, and to which all users have access. Again,
you're not exactly discouraging me here!

> Of course the user has to know
> what he wants and what he is talking about, but ACES reports give
> useful information to select a compiler tailored to your needs.

Assuming ACES reports are generally useful, it seems we need that data
available for all vendors. If you think having the vendor do the test will
cause problems, why not a third party? Why should each potential user pay
to do the same ACES run on a particular compiler?

> *** Both ACVC and ACES are evolving, and as far as I can judge, in
> the right direction. For example some ACVC tests have moved to ACES,

Holy cow! There were tests that were once validation tests, and then
somehow became evaluation tests? How could this be? You can't use a
validation test for evaluation purposes (or vice versa)... right?

> And the new ACVC (2.x) test have
> very little in common with old ones (1.x). This is my opinion, but it
> is important to note that these processes are very open to vendors and
> users, and that everything available with papers, sources, so it's
> easy to have a look at them, at this point in the discussion it
> becomes important.

It's easy to have a look at them. However, it's impossible, as far as I can
tell, to actually have a conversation that questions the criteria under
which they are developed.

> The "Ada community" has a long and interesting history (plus active
> development ;-). But there are also a lot of easy bashing without
> complete knowledge around. Please have a careful look at all these
> _freely_ available items before asserting such things.

Speaking of bashing someone using incomplete information...

I withdraw from this conversation. Good bye.

--
LMTAS - "Our Brand Means Quality"