Newsgroups: comp.lang.ada
From: willett@cbnewsl.cb.att.com (david.c.willett)
Subject: Re: Government Policy on Ada Acquisitions
Organization: AT&T Bell Laboratories
Date: Fri, 9 Sep 1994 16:05:09 GMT
References: <34pq9h$b87@theopolis.orl.mmc.com>

From article <34pq9h$b87@theopolis.orl.mmc.com>, by dennison@romulus23.DAB.GE.COM (Ted Dennison):
> In article <34nmfo$mpi@jac.zko.dec.com>, brett@ada9x.enet.dec.com (Bevin R. Brett) writes:
> |> {Ted's number deleted}
> |> It is REALLY SILLY to compare the execution times produced by two different
> |> compilers for the same platform WITHOUT TURNING ON THE OPTIMISER!
> |>
> |> Such poor benchmarking techniques, followed by the wide dissemination of
> |> the results, are a major source of false information in the computer
> |> business. It is being practised by individuals, companies, and even trade
> |> magazines; and all it does is cause totally bogus impressions of machines,
> |> languages, and compilers to become "common knowledge".
>
> Well, Bevin, it could have been worse; I could have left out that information
> altogether. I purposely mentioned the level of optimization I used so that
> those more knowledgeable than I (and you CERTAINLY qualify) would be able
> to judge the validity of these figures.
>
> However, since GNAT is still unfinished, wouldn't it be MORE unfair to compare
> its optimized code to the code produced by a commercial compiler that has
> fully matured optimization capabilities? Perhaps someone associated with the
> GNAT project could correct me on this point if I'm wrong. I'm sure you are
> absolutely correct as far as comparing two COMMERCIAL compilers goes.
>
> A far bigger problem with my numbers is that they are just ONE data point.
> There is no evidence that a different application might not run 20% faster
> compiled under GNAT.
>
> I suspect the IO packages I used are the main speed culprit, not the compiler
> optimizations. The Sun Ada IO packages appear to be much more complicated
> and low-level than the ones I used under GNAT.
>
> T.E.D.

In a (perhaps futile) effort to nip this thread before it becomes ridiculous,
let me suggest that benchmarks are valid only within the context in which they
are originally measured. That is to say, each development shop needs to make
and interpret its own benchmarks if the numbers are to have any real value.

Both Bevin and Ted are correct, in my opinion, but the real lesson here is
"Don't let anyone else make your tradeoffs for you." The only way to know how
a tool set is going to perform for *you* in *your* development environment is
to install it in that environment, run a spanning subset of whatever it is you
do, and measure its performance. A minimal sketch of what I mean follows.
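By way of illustration only -- the procedure name and the loop below are
hypothetical stand-ins, not anyone's actual benchmark -- here is a small,
self-contained timing harness using only the standard Calendar and Text_IO
packages, so it should compile under both an Ada 83 compiler and GNAT.
Substitute code representative of what your shop actually builds:

    --  bench.adb : a minimal timing sketch.  The loop is a hypothetical
    --  stand-in workload; replace it with a spanning subset of your own code.
    with Calendar;  use Calendar;
    with Text_IO;   use Text_IO;

    procedure Bench is
       package Dur_IO is new Fixed_IO (Duration);   --  prints elapsed time
       package Flt_IO is new Float_IO (Float);      --  prints the result
       Start, Stop : Time;
       Sum         : Float := 0.0;
    begin
       Start := Clock;
       for I in 1 .. 1_000_000 loop                 --  stand-in workload
          Sum := Sum + Float (I);
       end loop;
       Stop := Clock;
       --  Print the computed value so an optimising compiler cannot
       --  discard the whole loop as dead code.
       Put ("Sum     = ");  Flt_IO.Put (Sum);           New_Line;
       Put ("Elapsed = ");  Dur_IO.Put (Stop - Start);  Put (" sec");
       New_Line;
    end Bench;

Build it with each candidate compiler, both with and without optimisation (for
GNAT, for instance, through the usual gcc -O flags; the exact switches are of
course compiler-specific), run it on your own hardware, and treat each
measurement as what it is: one data point in your environment.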
One final point: there are many factors which can be used to select a compiler.
To my mind, the efficiency of the generated code on a particular platform is
one of the lesser ones. It is far more important to me that the compiler help
my programmers develop high-quality (readable, maintainable, etc.) source than
it is that the compiler squeeze the last gram of performance out of the
hardware. Of course, your mileage may vary.

<<<<<<<<<<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Dave Willett                    AT&T Advanced Technology Systems
                                Greensboro, NC  USA

When short, simple questions have long, complex answers -- your
organization's in trouble.
                                Adapted from "In Search of Excellence"