From: eachus@largo.mitre.org (Robert I. Eachus)
Newsgroups: comp.lang.ada
Subject: Re: Ada vs C implementation efficiency
Date: 18 Jun 91 16:48:33 GMT
References: <9106151737.AA16750@zach.fit.edu>
Organization: The Mitre Corp., Bedford, MA.
In-Reply-To: afes0isi@ZACH.FIT.EDU's message of 15 Jun 91 17:37:17 GMT

   There are "benchmarks" which will (almost) always run faster in Ada
than in C, and vice versa.  In general, however, for good compilers and
good benchmarks, a program will run fastest in the language it was
originally written in.  Thus the Dhrystone benchmark, originally written
in Ada, normally runs faster in Ada than in C, and people complain that
it overuses the string operations in C.

   If you take an application and write it (from scratch) in several
languages, with each version written by a team experienced in that
language, it usually turns out that you are measuring the difference in
cultures, not the difference in compilers.

   I once had the opportunity to do this with a package heavy on matrix
operations, in FORTRAN, Pascal, and Ada.  The FORTRAN version was
fastest, the Pascal version had the lowest error bounds on non-stiff
matrices, and the Ada version was the only one that could be trusted
with near-singular data.  Which one is "best"?

   We did a "second iteration," putting checking code into the FORTRAN
and Pascal versions and using the Pascal code in all three versions
(after conditioning), and now the program performance differences were
in the noise.  (The Ada I/O was slower, the FORTRAN floating point with
overflow checking was much more cumbersome, etc., so there were input
cases where each version was "fastest," but now all were correct...)

   My personal approach, based on this experiment and others, is to go
for correctness first; if you need better performance, look at the
algorithms, THEN at the code.  I have gotten to the point of looking at
generated code once or twice, but in every case I have found a way to
coerce the compiler to generate what I wanted.

--
                                        Robert I. Eachus

with STANDARD_DISCLAIMER; use STANDARD_DISCLAIMER;
function MESSAGE (TEXT: in CLEVER_IDEAS) return BETTER_IDEAS is...
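
   The claim that the Ada version "could be trusted with near singular
data" comes down to explicit checking inside the numeric code.  A
minimal sketch of that idea in Ada follows; the Solve procedure, the
Epsilon threshold, and the types are assumptions made for the example,
not taken from the MITRE package, and the arrays are assumed to be
indexed from 1.

   procedure Demo is

      Singular_Matrix : exception;

      type Vector is array (Positive range <>) of Float;
      type Matrix is array (Positive range <>, Positive range <>) of Float;

      --  Solve A*X = B by Gaussian elimination with partial pivoting.
      --  Assumes all arrays are indexed 1 .. N.  Raises Singular_Matrix
      --  instead of dividing by a vanishing pivot.
      procedure Solve (A : in out Matrix;
                       B : in out Vector;
                       X : in out Vector) is
         N         : constant Integer := A'Length (1);
         Epsilon   : constant Float   := 1.0E-6;  --  assumed threshold
         Pivot_Row : Integer;
         T         : Float;
      begin
         for K in 1 .. N loop
            --  Find the largest pivot in column K.
            Pivot_Row := K;
            for I in K + 1 .. N loop
               if abs A (I, K) > abs A (Pivot_Row, K) then
                  Pivot_Row := I;
               end if;
            end loop;
            if abs A (Pivot_Row, K) < Epsilon then
               raise Singular_Matrix;   --  refuse near-singular data
            end if;
            --  Swap rows K and Pivot_Row.
            if Pivot_Row /= K then
               for J in 1 .. N loop
                  T := A (K, J);
                  A (K, J) := A (Pivot_Row, J);
                  A (Pivot_Row, J) := T;
               end loop;
               T := B (K);  B (K) := B (Pivot_Row);  B (Pivot_Row) := T;
            end if;
            --  Eliminate below the pivot.
            for I in K + 1 .. N loop
               T := A (I, K) / A (K, K);
               for J in K .. N loop
                  A (I, J) := A (I, J) - T * A (K, J);
               end loop;
               B (I) := B (I) - T * B (K);
            end loop;
         end loop;
         --  Back substitution.
         for K in reverse 1 .. N loop
            T := B (K);
            for J in K + 1 .. N loop
               T := T - A (K, J) * X (J);
            end loop;
            X (K) := T / A (K, K);
         end loop;
      end Solve;

      A : Matrix (1 .. 2, 1 .. 2) := ((2.0, 1.0), (1.0, 3.0));
      B : Vector (1 .. 2) := (3.0, 5.0);
      X : Vector (1 .. 2);

   begin
      Solve (A, B, X);  --  raises Singular_Matrix if A were (near-)singular
   end Demo;

   The extra cost is one comparison per pivot column; in exchange the
caller gets Singular_Matrix raised instead of a quietly meaningless
answer, which is the trade the post argues for: correctness first,
performance after.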