comp.lang.ada
* Ada vs. C: performance, size
@ 1997-01-09  0:00 W. Wesley Groleau (Wes)
  1997-01-10  0:00 ` Robert Dewar
  1997-01-13  0:00 ` Richard A. O'Keefe
  0 siblings, 2 replies; 5+ messages in thread
From: W. Wesley Groleau (Wes) @ 1997-01-09  0:00 UTC (permalink / raw)



I posted (not in this newsgroup):

> We are straying off the charter of this list, though, so if you wish to
> continue discussing it, I'd suggest direct email.

> IMO if you leave the list, please go to comp.lang.ada, rather than
> private mail. This is a very interesting topic, that more people should
> contribute to.

So to honor that request, here is the topic so far
(the part about code _size_ popped in toward the end):

Date:         Wed, 8 Jan 1997 12:48:12 MET
From: Dirk Craeynest <dirk.craeynest@EUROCONTROL.BE>
Subject:      comp.compilers article on Ada vs. C performance

   From: Arch Robison <robison@kai.com>
   Newsgroups: comp.compilers
   Subject: Ada vs. C performance, was Possible to write compiler to Java VM?
   Date: 7 Jan 1997 12:31:07 -0500
   Organization: Kuck & Associates, Inc.

   >[Do Ada compilers really generate better code than C compilers for similar
   >source code? -John]

   I have one anecdotal data point that says no.  While working for my
   former employer, I was asked to investigate why a new piece of
   software written in Ada was so much slower than its old counterpart in
   Fortran.  The presumed reason was "new feature and flexibility bloat".
   But profiling indicated the problem was really with some basic
   old-fashioned numerical inner loops.  Not being strong on Fortran, I
   rewrote the loops in C for comparison.  The C was at least 2x faster.
   Inspection of the assembly code indicated that the Ada compiler was
   lacking fundamental optimizations present in the C compiler.

   This is of course a single data point, probably obsolete.  But I think
   there is a strong reason to suspect that C compilers will generally
   generate better code than Ada compilers.  Quality of code is
   ultimately not a technology issue; it is an economics issue.
   Optimizers (and the rest of compilers) are as good as what people are
   willing to pay for, and the C market is much bigger.  (Data points to
   the contrary cheerfully accepted!)

   Arch D. Robison                         Kuck & Associates Inc.
   robison@kai.com                         1906 Fox Drive
   217-356-2288                            Champaign IL 61820


Date:         Wed, 8 Jan 1997 08:59:49 -0600
From: "Richard G. Hash" <rgh@SHELLUS.COM>
Subject:      comp.compilers article on Ada vs. C performance (fwd)

According to Dirk Craeynest:
  [wrt comp.compilers article]

Arch Robison is describing his experience with Ada at Shell in the late
80's/early 90's - and I'm quite familiar with his experience, since I was
there (and still am!).  If any of you SVUGs are still around, I'm sure you
can understand his thinking the optimization of that time was bad - it was.

I've already replied to comp.compilers:
> ---------------  Forwarding  -----------------
>
> >[Do Ada compilers really generate better code than C compilers for similar
> >source code? -John]
>
> I have one anecdotal data point that says no.  While working for my
> former employer, I was asked to investigate why a new piece of
> software written in Ada was so much slower than its old counterpart in
> Fortran.  The presumed reason was "new feature and flexibility bloat".
> But profiling indicated the problem was really with some basic
> old-fashioned numerical inner loops.  Not being strong on Fortran, I
> rewrote the loops in C for comparison.  The C was at least 2x faster.
> Inspection of the assembly code indicated that the Ada compiler was
> lacking fundamental optimizations present in the C compiler.
>
> This is of course a single data point, probably obsolete.

Indeed. I worked in that Ada group at his former employer (and still do),
and can offer another data point that says "(modern) Ada compilers nearly
*always* generate as-good or better code for *similar* source code"

The Ada in question was being compiled with a 1988-vintage compiler
(Verdix) which wasn't all that great (if you claim it was awful, I won't
argue).  We never compiled with any optimization at all (-O0), since it
didn't work, and stuff like unconstrained arrays would kill you every time
performance-wise.  Verdix was struggling to convince people their compiler
worked at all, and didn't seem to put much effort into optimizations at the
time.  But other vendors at that time (like Tartan) were well known for
very good optimizations.

It wasn't so much that the Ada compiler was "lacking fundamental optimizations";
they were there (in intent), but generated such buggy code that nobody
would use them.  Any reasonably modern Ada compiler has pretty darn good
optimizations.  If you examine the GNU Ada compiler you will find that it
generates output identical to that of the GNU C compiler (for similar
code).  Since Arch's time here, our Ada benchmark code runs roughly 2.4
times faster than it did (on same machine-type), and the only thing that's
changed is the compiler version.  I like not spending my time on
optimizations like that!

What can be tricky, of course, is what constitutes "similar" code.
--
Richard Hash              rgh@shellus.com           (713) 245-7311
Subsurface Information Technology Services, Shell Services Company



Date:         Wed, 8 Jan 1997 11:09:39 EST
From: "W. Wesley Groleau (Wes)" <wwgrol@PSESERV3.fw.hac.com>
Subject:      Re: comp.compilers article on Ada vs. C performance

:> > >[Do Ada compilers really generate better code than C compilers for similar
:> > >source code? -John]
:> >
:> > I have one anecdotal data point that says no.  While working for my
:> > ....
:> > This is of course a single data point, probably obsolete.
:>
:> The Ada in question was being compiled with a 1988-vintage compiler
:> (Verdix) which wasn't all that great (if you claim it was awful, I won't
:> argue).  We never compiled with any optimization at all (-O0), since it
:> didn't work, ......

For what it's worth, in _1994_, Verdix optimizers for SPARC and 680x0 would
create a bug in about one percent or less of the source files.  (One cute
bug was to optimize away a loop control variable that wasn't referenced
within the loop!)  What irritated me even more than compiler bugs was
that people would refuse to use the optimizer on any file.

[Note: I should have stated that the 1% figure is a rough estimate from
personal observation on one project.  The project had five million Ada
statements, but I was only familiar with a subset of about 50 thousand
statements.]

I always figured that turning off the optimizer for one out of a hundred
files was a lot easier than some of the code contortions people would
write for the sake of "hand-optimizing".

Especially since I demonstrated that the optimizer resulted in a speed up
of TWENTY times on some of our product's capabilities.

---------------------------------------------------------------------------
W. Wesley Groleau (Wes)                                Office: 219-429-4923
Hughes Defense Communications (MS 10-41)                 Home: 219-471-7206
Fort Wayne,  IN   46808                  (Unix): wwgrol@pseserv3.fw.hac.com
---------------------------------------------------------------------------


Date:         Thu, 9 Jan 1997 12:05:00 +0000
From: "Pickett, Michael" <msmail@BAESEMA.CO.UK>
Subject:      Re: comp.compilers article on Ada vs. C

W. Wesley Groleau wrote:

>   What irritated me even more than compiler bugs was
> that people would refuse to use the optimizer on any file.
>
> I always figured that turning off the optimizer for one out of a hundred
> files was a lot easier than some of the code contortions people would
> write for the sake of "hand-optimizing".

I find this difficult to understand. If you understand in detail all the bugs
in your compilation system and you have tools to inspect your source code so
that you can identify any files that might suffer from the bugs, then perhaps
I /would/ agree. My experience is different. Compiler bugs tend to be
pathological, otherwise they would have been found before and fixed (I'm an
optimist). Often they cause the code to behave oddly under obscure situations
which, sadly, don't always get tested. If an optimiser significantly
increases the incidence of latent bugs, that is bad news.

Our systems here are built from many hundreds of files. Productivity
requirements lead us to adopt an approach to the building of these systems
which is systematic and largely mechanised. Special cases, such as suppressing
the optimiser, are not welcome, and hence my comment earlier about the need
for tool support.

We don't use the optimiser. Personally, I would like it to be used so that we
had a reasonable chance of hammering out the bugs in it, but we don't have
the resources. Performance /is/ a worry. Fortunately, hardware continues to
improve fast enough for us to be able to deliver the performance improvements
we need. Additionally, our understanding of the right way to do things
continues to improve, and so we sometimes find that a revision to add
functionality also increases performance.

There may well come a time when we need to use the optimiser, but I suspect
that it will only be for isolated units, and the code generated will be
scrutinised most carefully.

Perhaps I should add that the compiler of which I am speaking is rather long
in the tooth now, and I am sure that, were we not constrained by certain
policies, we should have different experiences with a modern compiler.

--
--Michael Pickett--


From: W. Wesley Groleau (Wes) <wwgrol@pseserv3>
Subject: Re: comp.compilers article on Ada vs. C
Date: Thu, 9 Jan 97 13:09:12 EST

:> > I always figured that turning off the optimizer for one out of a hundred
:> > files was a lot easier than some of the code contortions people would
:> > write for the sake of "hand-optimizing".
:>
:> I find this difficult to understand. If you understand in detail all the bugs
:> in your compilation system and you have tools to inspect your source code so
:> that you can identify any files that might suffer from the bugs, then perhaps
:> I /would/ agree. My experience is different. Compiler bugs tend to be
:> pathological, otherwise they would have been found before and fixed (I'm an
:> optimist). Often they cause the code to behave oddly under obscure situations
:> which, sadly, don't always get tested. If an optimiser significantly
:> increases the incidence of latent bugs, that is bad news.

But when it's less than one percent, AND you have adequate testing,
my "personal" process was this: when I traced a bug to a particular unit,
I would rebuild with that unit "unoptimized".  If the bug went away, I
announced it "fixed".

:> Our systems here are built from many hundreds of files. Productivity
:> requirements lead us to adopt an approach to the building of these systems
:> which is systematic and largely mechanised. Special cases, such as suppressing
:> the optimiser, are not welcome, and hence my comment earlier about the need
:> for tool support.

Our system was about five million Ada statements.  In order to speed up
compilation, we (not I personally) wrote "wrapper" tools to call the compiler.
The wrapper would farm out the compilations to different hosts in parallel.
Since that meant that the wrapper was called for EACH file, it would have
been no problem for the wrapper to check some Configuration Management
file for the correct optimization level.  In our case, even that would not
be necessary, since the compiler itself created/updated/used an "options"
file which told it the optimization level to use for each file.

We even had one and only one source file that would crash the compiler if
we used an optimization level other than ONE. (Zero was no optimization,
one was minimal, two through nine provided greater optimization.)

I should also clear up something else:  I said that the Verdix optimizer
had bugs which affected less than one percent of source files.  I should
have also mentioned that most of the bugs aborted the compilation; only a
few actually generated bad code.

We are straying off the charter of this list, though, so if you wish to
continue discussing it, I'd suggest direct email.

---------------------------------------------------------------------------
W. Wesley Groleau (Wes)                                Office: 219-429-4923
Hughes Defense Communications (MS 10-41)                 Home: 219-471-7206
Fort Wayne,  IN   46808                  (Unix): wwgrol@pseserv3.fw.hac.com
---------------------------------------------------------------------------


Date:         Thu, 9 Jan 1997 10:20:07 -0800
From: "Chris Sparks (Mr. Ada)" <sparks@AISF.COM>
Organization: McDonnell Douglas
Subject:      Re: Comparing size of compiled code between Ada and C

I have been reading articles on code optimization problems with the
Verdix compiler and was wondering if code size would be roughly the
same on equivalent pieces of "C" and "Ada" code.  One condition of
course is that no optimization is used on either, and Ada would be
allowed to suppress checking.  I would assume that they would be
fairly close.  Just a thought....

Chris Sparks




* Re: Ada vs. C: performance, size
@ 1997-01-09  0:00 W. Wesley Groleau (Wes)
  0 siblings, 0 replies; 5+ messages in thread
From: W. Wesley Groleau (Wes) @ 1997-01-09  0:00 UTC (permalink / raw)



> > I always figured that turning off the optimizer for one out of a hundred
> > files was a lot easier than some of the code contortions people would
> > write for the sake of "hand-optimizing".

> I find this difficult to understand. If you understand in detail all the bugs
> in your compilation system and you have tools to inspect your source code so
> that you can identify any files that might suffer from the bugs, then perhaps
> I /would/ agree. My experience is different. Compiler bugs tend to be

On a large project, I would hope that at least five percent of the people
are familiar with compiler bugs.  I would further hope that:

1. Every change of any significance has some sort of peer review, which
   could include watching for such things.
2. Every change of any significance has some sort of testing which could
   detect most of the bugs that escape the review.

> pathelogical, otherwise they would have been found before and fixed (I'm an
> optimist). Often they cause the code to behave oddly under obscure situations
> which, sadly, don't always get tested. If an optimiser significantly
> increases the incidence of latent bugs, that is bad news.

If code is well-engineered in terms of modularity, simplicity, etc.,
not only are there fewer "obscure situations" that can slip by testing,
but there are also fewer weird coding techniques to confuse an optimizer.

---------------------------------------------------------------------------
W. Wesley Groleau (Wes)                                Office: 219-429-4923
Hughes Defense Communications (MS 10-41)                 Home: 219-471-7206
Fort Wayne,  IN   46808                  (Unix): wwgrol@pseserv3.fw.hac.com
---------------------------------------------------------------------------




