comp.lang.ada
* Ada vs. C: performance, size
@ 1997-01-09  0:00 W. Wesley Groleau (Wes)
  1997-01-10  0:00 ` Robert Dewar
  1997-01-13  0:00 ` Richard A. O'Keefe
  0 siblings, 2 replies; 5+ messages in thread
From: W. Wesley Groleau (Wes) @ 1997-01-09  0:00 UTC (permalink / raw)



I posted (not in this newsgroup):

> We are straying off the charter of this list, though, so if you wish to
> continue discussing it, I'd suggest direct email.

> IMO if you leave the list, please go to comp.lang.ada, rather than
> private mail. This is a very interesting topic, that more people should
> contribute to.

So to honor that request, here is the topic so far
(the part about code _size_ popped in toward the end):

Date:         Wed, 8 Jan 1997 12:48:12 MET
From: Dirk Craeynest <dirk.craeynest@EUROCONTROL.BE>
Subject:      comp.compilers article on Ada vs. C performance

   From: Arch Robison <robison@kai.com>
   Newsgroups: comp.compilers
   Subject: Ada vs. C performance, was Possible to write compiler to Java VM?
   Date: 7 Jan 1997 12:31:07 -0500
   Organization: Kuck & Associates, Inc.

   >[Do Ada compilers really generate better code than C compilers for similar
   >source code? -John]

   I have one anecdotal data point that says no.  While working for my
   former employer, I was asked to investigate why a new piece of
   software written in Ada was so much slower than its old counterpart in
   Fortran.  The presumed reason was "new feature and flexibility bloat".
   But profiling indicated the problem was really with some basic
   old-fashioned numerical inner loops.  Not being strong on Fortran, I
   rewrote the loops in C for comparison.  The C was at least 2x faster.
   Inspection of the assembly code indicated that the Ada compiler was
   lacking fundamental optimizations present in the C compiler.

   This is of course a single data point, probably obsolete.  But I think
   there is a strong reason to suspect that C compilers will generally
   generate better code than Ada compilers.  Quality of code is
   ultimately not a technology issue; it is an economics issue.
   Optimizers (and the rest of compilers) are as good as what people are
   willing to pay for, and the C market is much bigger.  (Data points to
   the contrary cheerfully accepted!)

   Arch D. Robison                         Kuck & Associates Inc.
   robison@kai.com                         1906 Fox Drive
   217-356-2288                            Champaign IL 61820


Date:         Wed, 8 Jan 1997 08:59:49 -0600
From: "Richard G. Hash" <rgh@SHELLUS.COM>
Subject:      comp.compilers article on Ada vs. C performance (fwd)

According to Dirk Craeynest:
  [wrt comp.compilers article]

Arch Robison is describing his experience with Ada at Shell in the late
80's/early 90's - and I'm quite familiar with his experience, since I was
there (and still am!).  If any of you SVUGs are still around, I'm sure you
can understand his thinking that the optimization of that time was bad - it was.

I've already replied to comp.compilers:
> ---------------  Forwarding  -----------------
>
> >[Do Ada compilers really generate better code than C compilers for similar
> >source code? -John]
>
> I have one anecdotal data point that says no.  While working for my
> former employer, I was asked to investigate why a new piece of
> software written in Ada was so much slower than its old counterpart in
> Fortran.  The presumed reason was "new feature and flexibility bloat".
> But profiling indicated the problem was really with some basic
> old-fashioned numerical inner loops.  Not being strong on Fortran, I
> rewrote the loops in C for comparison.  The C was at least 2x faster.
> Inspection of the assembly code indicated that the Ada compiler was
> lacking fundamental optimizations present in the C compiler.
>
> This is of course a single data point, probably obsolete.

Indeed. I worked in that Ada group at his former employer (and still do),
and can offer another data point that says "(modern) Ada compilers nearly
*always* generate as-good or better code for *similar* source code".

The Ada in question was being compiled with a 1988-vintage compiler
(Verdix) which wasn't all that great (if you claim it was awful, I won't
argue).  We never compiled with any optimization at all (-O0), since it
didn't work, and stuff like unconstrained arrays would kill you every time
performance-wise.  Verdix was struggling to convince people their compiler
worked at all, and didn't seem to put much effort into optimizations at the
time.  But other vendors at that time (like Tartan) were well known for
very good optimizations.

It wasn't so much that the Ada compiler was "lacking fundamental
optimizations"; they were there (in intent) but generated such buggy code
that nobody would use them.  Any reasonably modern Ada compiler has pretty darn good
optimizations.  If you examine the GNU Ada compiler you will find that it
generates identical output compared to the GNU C compiler (for similar
code).  Since Arch's time here, our Ada benchmark code runs roughly 2.4
times faster than it did (on same machine-type), and the only thing that's
changed is the compiler version.  I like not spending my time on
optimizations like that!

What can be tricky, of course, is what constitutes "similar" code.
--
Richard Hash              rgh@shellus.com           (713) 245-7311
Subsurface Information Technology Services, Shell Services Company



Date:         Wed, 8 Jan 1997 11:09:39 EST
From: "W. Wesley Groleau (Wes)" <wwgrol@PSESERV3.fw.hac.com>
Subject:      Re: comp.compilers article on Ada vs. C performance

:> > >[Do Ada compilers really generate better code than C compilers for similar
:> > >source code? -John]
:> >
:> > I have one anecdotal data point that says no.  While working for my
:> > ....
:> > This is of course a single data point, probably obsolete.
:>
:> The Ada in question was being compiled with a 1988-vintage compiler
:> (Verdix) which wasn't all that great (if you claim it was awful, I won't
:> argue).  We never compiled with any optimization at all (-O0), since it
:> didn't work, ......

For what it's worth, in _1994_, Verdix optimizers for SPARC and 680x0 would
create a bug in about one percent or less of the source files.  (One cute
bug was to optimize away a loop control variable that wasn't referenced
within the loop!)  What irritated me even more than compiler bugs was
that people would refuse to use the optimizer on any file.

[Note: I should have stated that the 1% figure is a rough estimate from
personal observation on one project.  The project had five million Ada
statements, but I was only familiar with a subset of about 50 thousand
statements.]

I always figured that turning off the optimizer for one out of a hundred
files was a lot easier than some of the code contortions people would
write for the sake of "hand-optimizing".

Especially since I demonstrated that the optimizer resulted in a speed up
of TWENTY times on some of our product's capabilities.

---------------------------------------------------------------------------
W. Wesley Groleau (Wes)                                Office: 219-429-4923
Hughes Defense Communications (MS 10-41)                 Home: 219-471-7206
Fort Wayne,  IN   46808                  (Unix): wwgrol@pseserv3.fw.hac.com
---------------------------------------------------------------------------


Date:         Thu, 9 Jan 1997 12:05:00 +0000
From: "Pickett, Michael" <msmail@BAESEMA.CO.UK>
Subject:      Re: comp.compilers article on Ada vs. C

W. Wesley Groleau wrote:

>   What irritated me even more than compiler bugs was
> that people would refuse to use the optimizer on any file.
>
> I always figured that turning off the optimizer for one out of a hundred
> files was a lot easier than some of the code contortions people would
> write for the sake of "hand-optimizing".

I find this difficult to understand. If you understand in detail all the bugs
in your compilation system and you have tools to inspect your source code so
that you can identify any files that might suffer from the bugs, then perhaps
I /would/ agree. My experience is different. Compiler bugs tend to be
pathological, otherwise they would have been found before and fixed (I'm an
optimist). Often they cause the code to behave oddly under obscure situations
which, sadly, don't always get tested. If an optimiser significantly
increases the incidence of latent bugs, that is bad news.

Our systems here are built from many hundreds of files. Productivity
requirements lead us to adopt an approach to the building of these systems
which is systematic and largely mechanised. Special cases, such as suppressing
the optimiser, are not welcome, and hence my comment earlier about the need
for tool support.

We don't use the optimiser. Personally, I would like it to be used so that we
had a reasonable chance of hammering out the bugs in it, but we don't have
the resources. Performance /is/ a worry. Fortunately, hardware continues to
improve fast enough for us to be able to deliver the performance improvements
we need. Additionally, our understanding of the right way to do things
continues to improve, and so we sometimes find that a revision to add
functionality also increases performance.

There may well come a time when we need to use the optimiser, but I suspect
that it will only be for isolated units, and the code generated will be
scrutinised most carefully.

Perhaps I should add that the compiler of which I am speaking is rather long
in the tooth now, and I am sure that, were we not constrained by certain
policies, we should have different experiences with a modern compiler.

--
--Michael Pickett--


From: W. Wesley Groleau (Wes) <wwgrol@pseserv3>
Subject: Re: comp.compilers article on Ada vs. C
Date: Thu, 9 Jan 97 13:09:12 EST

:> > I always figured that turning off the optimizer for one out of a hundred
:> > files was a lot easier than some of the code contortions people would
:> > write for the sake of "hand-optimizing".
:>
:> I find this difficult to understand. If you understand in detail all the bugs
:> in your compilation system and you have tools to inspect your source code so
:> that you can identify any files that might suffer from the bugs, then perhaps
:> I /would/ agree. My experience is different. Compiler bugs tend to be
:> pathological, otherwise they would have been found before and fixed (I'm an
:> optimist). Often they cause the code to behave oddly under obscure situations
:> which, sadly, don't always get tested. If an optimiser significantly
:> increases the incidence of latent bugs, that is bad news.

But when it's less than one percent, AND you have adequate testing,
my "personal" process was that when I traced a bug to a particular unit,
I would try it with that unit "unoptimized".  If the bug went away,
announce "fixed".

:> Our systems here are built from many hundreds of files. Productivity
:> requirements lead us to adopt an approach to the building of these systems
:> which is systematic and largely mechanised. Special cases, such as suppressing
:> the optimiser, are not welcome, and hence my comment earlier about the need
:> for tool support.

Our system was about five million Ada statements.  In order to speed up
compilation, we (not I personally) wrote "wrapper" tools to call the compiler.
The wrapper would farm out the compilations to different hosts in parallel.
Since that meant that the wrapper was called for EACH file, it would have
been no problem for the wrapper to check some Configuration Management
file for the correct optimization level.  In our case, even that would not
be necessary, since the compiler itself created/updated/used an "options"
file which told it the optimization level to use for each file.
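That per-file lookup is simple to mechanize.  Below is a hedged sketch in shell, where the table name `opt_levels.txt`, its format, and the function name `pick_level` are all invented for illustration (the real setup used the compiler's own options file):

```shell
# Hypothetical wrapper logic: look up a per-file optimization level in
# a Configuration-Management-controlled table, defaulting to level 2.
# Table format, one entry per line: "<source-file> <level>"
pick_level() {
    # $1 = source file name, $2 = path to the options table
    level=$(awk -v f="$1" '$1 == f { print $2 }' "$2")
    if [ -n "$level" ]; then echo "$level"; else echo 2; fi
}

printf 'slow_unit.ada 0\nhot_loop.ada 9\n' > /tmp/opt_levels.txt
pick_level hot_loop.ada /tmp/opt_levels.txt    # prints 9
pick_level other_unit.ada /tmp/opt_levels.txt  # prints 2 (the default)
```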

We even had one and only one source file that would crash the compiler if
we used an optimization level other than ONE. (Zero was no optimization,
one was minimal, two through nine provided greater optimization.)

I should also clear up something else:  I said that the Verdix optimizer
had bugs which affected less than one percent of source files.  I should
have also mentioned that most of the bugs aborted the compilation; only a
few actually generated bad code.

We are straying off the charter of this list, though, so if you wish to
continue discussing it, I'd suggest direct email.

---------------------------------------------------------------------------
W. Wesley Groleau (Wes)                                Office: 219-429-4923
Hughes Defense Communications (MS 10-41)                 Home: 219-471-7206
Fort Wayne,  IN   46808                  (Unix): wwgrol@pseserv3.fw.hac.com
---------------------------------------------------------------------------


Date:         Thu, 9 Jan 1997 10:20:07 -0800
From: "Chris Sparks (Mr. Ada)" <sparks@AISF.COM>
Organization: McDonnell Douglas
Subject:      Re: Comparing size of compiled code between Ada and C

I have been reading articles on code optimization problems with the
Verdix compiler and was wondering if code size would be roughly the
same on equivalent pieces of "C" and "Ada" code.  One condition of
course is that no optimization is used on either, and Ada would be
allowed to suppress checking.  I would assume that they would be
fairly close.  Just a thought....

Chris Sparks





* Re: Ada vs. C: performance, size
@ 1997-01-09  0:00 W. Wesley Groleau (Wes)
  0 siblings, 0 replies; 5+ messages in thread
From: W. Wesley Groleau (Wes) @ 1997-01-09  0:00 UTC (permalink / raw)



> > I always figured that turning off the optimizer for one out of a hundred
> > files was a lot easier than some of the code contortions people would
> > write for the sake of "hand-optimizing".

> I find this difficult to understand. If you understand in detail all the bugs
> in your compilation system and you have tools to inspect your source code so
> that you can identify any files that might suffer from the bugs, then perhaps
> I /would/ agree. My experience is different. Compiler bugs tend to be

On a large project, I would hope that at least five percent of the people
are familiar with compiler bugs.  I would further hope that:

1. Every change of any significance has some sort of peer review, which
   could include watching for such things.
2. Every change of any significance has some sort of testing which could
   detect most of the bugs that escape the review.

> pathelogical, otherwise they would have been found before and fixed (I'm an
> optimist). Often they cause the code to behave oddly under obscure situations
> which, sadly, don't always get tested. If an optimiser significantly
> increases the incidence of latent bugs, that is bad news.

If code is well-engineered, in terms of modularity, simplicity, etc.
not only are there fewer "obscure situations" possible to slip by testing,
but there are also fewer weird coding techniques to confuse an optimizer.

---------------------------------------------------------------------------
W. Wesley Groleau (Wes)                                Office: 219-429-4923
Hughes Defense Communications (MS 10-41)                 Home: 219-471-7206
Fort Wayne,  IN   46808                  (Unix): wwgrol@pseserv3.fw.hac.com
---------------------------------------------------------------------------





* Re: Ada vs. C: performance, size
  1997-01-09  0:00 Ada vs. C: performance, size W. Wesley Groleau (Wes)
@ 1997-01-10  0:00 ` Robert Dewar
  1997-01-10  0:00   ` Stephen Leake
  1997-01-13  0:00 ` Richard A. O'Keefe
  1 sibling, 1 reply; 5+ messages in thread
From: Robert Dewar @ 1997-01-10  0:00 UTC (permalink / raw)



Responding to one quote in Wes' message

"I have been reading articles on code optimization problems with the
Verdix compiler and was wondering if code size would be roughly the
same on equivalent pieces of "C" and "Ada" code.  One condition of
course is that no optimization is used on either, and Ada would be
allowed to suppress checking.  I would assume that they would be
fairly close.  Just a thought...."



Comparing quality of code with "no optimizations" is silly and tells you
nothing. I can't see that you can consider it an advantage of one
compiler over another that if you tell both compilers to generate
lousy code, one generates better code under these conditions. 

As to compilers generating errors with optimization on, yes, I know this
has been a pattern for some compilers in the past, and we often find that
people using GCC or GNAT assume that the code will be more reliable at
-O0 than -O2. In fact this is not the case, we have relatively few code
generation problems (after all at this stage the GCC code generator is
pretty well shaken down), and those that we have are as likely, or perhaps
even more likely to occur at -O0 than at -O2, since in practice virtually
all GCC production code is delivered with at least -O1 optimization. The
quality of code at -O0 is (by design and intention) horrible -- GCC really
believes you if you say no optimization and generates lots of junk!

I am certainly not saying you will never find a case with GCC or GNAT where
turning on optimization will cause problems, just that the frequency of
this occurrence is very low, and probably no higher than the case of turning
off optimization causing problems.

Note that by problems here I mean code generation problems, not all problems
that can occur in your code. It is quite often the case that optimization
will reveal underlying problems in your code. Harmless erroneous programs
can become not so harmful when optimized, since the optimizer is allowed to
"believe" that it is dealing with correctly written code. So you can often
find examples of poorly written code where turning optimization on (or moving
from one Ada compiler to another) will reveal previously unseen problems.
In our experience of porting large Ada codes to GNAT, we find that this
kind of occurrence represents a significant part of the porting effort in
some cases.

I'll give one example. In one program we worked on, there was a large
buffer defined as an array of characters, which on the machine we were
working on (Sun Solaris) was byte aligned, which is perfectly reasonable.
The code, however, passed the address of the array to an external C routine,
which after many layers of passing backwards and forwards, treated the
buffer as an array of words, causing an alignment trap.

Now of course this is a bug in the code, the alignment of the buffer needed
to be specified (possible in Ada 95, not easily possible in Ada 83), and
was easily fixed once understood, but this kind of problem is not unusual.






* Re: Ada vs. C: performance, size
  1997-01-10  0:00 ` Robert Dewar
@ 1997-01-10  0:00   ` Stephen Leake
  0 siblings, 0 replies; 5+ messages in thread
From: Stephen Leake @ 1997-01-10  0:00 UTC (permalink / raw)



Just an anecdote: for a UT69R00 target, I have some C code that aborts
the gcc compiler when using -O0, but not with -O2. The problem is that
the register allocator runs out of registers! This is a VERY simple
chip, without even indexed addressing, so gcc gives up. But with -O2, it
does a much better job of figuring out how to use the registers
efficiently, and can compile successfully.

If anyone has had a similar experience with a gcc port, I'd like to hear
about it, particularly if you have any advice on how to make it better!
-- 
- Stephe





* Re: Ada vs. C: performance, size
  1997-01-09  0:00 Ada vs. C: performance, size W. Wesley Groleau (Wes)
  1997-01-10  0:00 ` Robert Dewar
@ 1997-01-13  0:00 ` Richard A. O'Keefe
  1 sibling, 0 replies; 5+ messages in thread
From: Richard A. O'Keefe @ 1997-01-13  0:00 UTC (permalink / raw)



>Date:         Thu, 9 Jan 1997 10:20:07 -0800
>From: "Chris Sparks (Mr. Ada)" <sparks@AISF.COM>

>I ... was wondering if code size would be roughly the
>same on equivalent pieces of "C" and "Ada" code.  One condition of
>course is that no optimization is used on either, and Ada would be
>allowed to suppress checking.  I would assume that they would be
>fairly close.  Just a thought....

This is pretty much meaningless.  I have access to 3 C compilers
on this machine, and the code quality they produce with optimisation
switched off is _radically_ different.  I just compiled 30 000 lines
of C (parts of a program I am trying to install) with these three
compilers.
	Brand X:	 747 k
	Brand Y:	 916 k
	Brand Z:	1091 k
This is from the _same_ set of sources exactly.  For what it's worth,
all three compilers happened to be generating SPARC code, but can
generate code for other machines including 80[34...]86.  On the SPARC,
the Brand Z compiler generates the fastest optimised code, and the Brand X
compiler the slowest.  With default options, the brand X compiler has the
best diagnostics.  With the right options, the brand Y compiler has the
best.  The brand X compiler compiles the fastest.

And to really make the point, 

	Brand Y (-O2)	650 k

Size of compiled code is a function of *compilers*, not *languages*.
I see no reason whatsoever to be interested in the size of unoptimised
code.  The size I care about is the size of the code I actually run,
and when using these three compilers I normally ask for a high optimisation
level.  (In practice I never use less than -O2 for brand Y, and I usually
use -xO4 these days for brand Z.)  Of these three compilers, the brand X
compiler is the *only* one where the size of unoptimised code was something
that its authors cared about.

I would expect the same to be true for Ada compilers.  In fact, since gcc
uses the same back end for Ada and C, I would expect unoptimised code from
GNAT to be rather bulky, compared with -O2 optimised code.  (gcc is brand Y.)

-- 
My tertiary education cost a quarter of a million in lost income
(assuming close-to-minimum wage); why make students pay even more?
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.




