From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Robert I. Eachus"
Subject: Re: Ada vs. C++
Date: 2000/03/04
Message-ID: <38C09202.75341262@earthlink.net>
References: <38A37C97.9E181025@interact.net.au> <38A9980E.A50A0D91@quadruscorp.com>
Organization: The MITRE Corporation
Newsgroups: comp.lang.ada

"Marin D. Condic" wrote:

> If your question is more along the lines of "Which language is faster
> and more memory efficient?" then I'm afraid you will get no useful
> information on that subject. For benchmarking purposes, you cannot
> separate the language from the implementation. One man's Ada compiler
> may produce dramatically better code than another man's C++ compiler.
> Likewise, the opposite. This really tells you nothing about either
> language - just how well/poorly someone implemented the language.
>
> This much can be said: There is nothing inherent in Ada that would make
> it less efficient than C++. In some ways, Ada syntax is superior for
> optimization purposes because more information is available to the
> compiler. In other ways, Ada could be slower because of the requirements
> for runtime checks. However, the language allows you to turn off runtime
> checks if efficiency is a major concern. (When doing realtime control
> systems, we routinely turned off checks and had code that was every bit
> as efficient as that which could be produced by any other language.)

Both are true, but neither addresses the real reason that it is
difficult to develop good multi-lingual benchmarks.

Let me take a simple example. Say I want to measure the speed of
string assignment in C and Ada. I write a simple string assignment in
C, Ada, C++, and, just for the fun of it, in PL/I. The naive C looks
something like:

   char a[30], b[30];
   int i;
   ...
   for (i = 0; i < 30; i++) a[i] = b[i];
   /* please excuse any errors in writing poor C. ;-) */

And the naive Ada would be:

   A, B : String (1 .. 30);
   ...
   B := (others => 'b');  -- To avoid erroneousness and bounded error issues...
   ...
   A := B;

Of course the assignment outside the timing loop may turn out to be
slower than the one inside, or you could find that the compiler
initializes B and A statically and takes no time at all. (But that is
a side issue here, except that you figure out that you need to read
the value of B from outside the program to avoid such optimizations.)

Now a decent C programmer looks at the C code and says, "Oh no, that
is not how it is done." He then writes code which mallocs a and b and
uses strcpy for the assignment. This is a significant improvement in
the benchmark, but now you need to alter the Ada program to match.
The length of the strings is now determined at run time, but does
this mean you should use the heap, or is the right match perhaps
Unbounded_String? After a lot of back and forth, agreement is reached
to use Unbounded_String.
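For concreteness, a minimal sketch of what that agreed-upon
Unbounded_String version might look like (the procedure name is mine,
and the string literal stands in for a value that would be read from
outside the program in the real test):

   with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

   procedure Copy_Test is
      A, B : Unbounded_String;
   begin
      -- In the real benchmark this value would come from outside the
      -- program, so the compiler cannot precompute the assignment.
      B := To_Unbounded_String ("read from outside the program");
      A := B;   -- the assignment being measured
   end Copy_Test;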
This makes the C++ programmer happy, since he has a foundation class
which seems to match the Ada nicely. But the PL/I programmer wants to
use char (128) varying. (This corresponds to Ada's Bounded_Strings; a
sketch of that analogue appears at the end of this post.) So now you
have several alternatives. Not all languages support all of them, and
which are "native" to the language and which are part of the standard
run-time varies from language to language.

The benchmarks you want can be written, but it is very tough. You end
up writing a detailed formal specification, debugging the
specification through several iterations, having groups familiar with
both languages and implementations code against the formal
specification, and then finally testing to the specifications. There
are such benchmarks, for example TPC for transaction processing, the
LINPACK benchmarks, etc. However, even if you go through all that, you
end up testing the quality of the implementation teams more than
anything else.

My favorite example of such "cheating" was a case where the hands-down
winner of the benchmark was the slowest (by far) hardware system
offered. However, their benchmark team had taken advantage of the fact
that the specification said certain large matrices were sparse. Using
a sparse representation of the data significantly increased the cache
hit ratio, and as a bonus kept all the data in physical memory.

Since the benchmark was designed to reflect the real application, we
modified the other benchmarks to use the same technique. On some it
actually slowed the benchmark down significantly. We might have been
able to come up with different matrix representations better suited to
the other hardware, but that wasn't the purpose of the benchmark. We
were trying to ensure that the hardware proposed could meet the
requirements. So the bidder with the clever programmers could bid less
expensive equipment and gain an advantage.
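As promised above, here is a minimal sketch (again my own, not part of
any actual benchmark) of the Ada analogue of PL/I's char (128)
varying, using Ada.Strings.Bounded; the procedure and package names
are made up for the example:

   with Ada.Strings.Bounded;

   procedure Bounded_Copy_Test is
      package B128 is
         new Ada.Strings.Bounded.Generic_Bounded_Length (Max => 128);
      use B128;
      A, B : Bounded_String;
   begin
      -- As before, the value would come from outside the program in
      -- the real test.
      B := To_Bounded_String ("read from outside the program");
      A := B;   -- fixed maximum, varying current length, no heap
   end Bounded_Copy_Test;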