comp.lang.ada
From: smize@news.imagin.net (Samuel Mize)
Subject: Re: Efficiency of guards
Date: 1997/02/26
Message-ID: <5f2352$j5o@prime.imagin.net>
In-Reply-To: dewar.856870011@merv



Let me tie some answers together for the new coders reading this thread.

[student] asked:
> My professor claims that guarding select alternatives which won't ever
> contain a rendezvous (in that iteration of the select) is more efficient
> ...there could be some implementations where it
> would actually be less efficient to do this, and it adds complexity to
> the user program. (Which means errors).

Robert Dewar <dewar@merv.cs.nyu.edu> wrote:
>This may or may not be true, but in any case, such guesses should not
>influence coding style. You should use or not use guards depending on
>what is clearer at the level of your program. Trying to guess which
>of these two might be more efficient is unlikely to be a productive
>way to spend time.

You DO want to code initially in the way that is clearest and
simplest, as Mr. Dewar points out.

However, if you have real-time constraints, you then may need to
optimize your code for speed.  The rule of thumb is: make it run
right, THEN make it run fast.

There are a lot of ways to do this.

Line-by-line optimizations are better done by the compiler -- in
fact, I've gotten better performance from code by REMOVING manual
"optimizations," because the compiler could then optimize better.

Manual optimization is best done at a higher level -- finding more
efficient algorithms, restructuring code.  If the MEASURED time
for your particular compiler suggests that checking a task/protected
entry queue takes a LOT longer than evaluating a guard, this would
be a candidate for change in the optimization phase.  If you add
guards for this purpose, they should be clearly commented as
optimizations.  And you are right, this DOES add complexity and
reduce maintainability.  Don't do it by habit, but only when
(1) it has a measurable impact on execution time, and (2) you
actually do need the speed.
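
For the new coders: here is a minimal sketch of what such a guard
looks like, commented as an optimization.  The buffer task, Max_Items,
and the entry names are invented for illustration, not taken from the
original thread; the point is only the shape of the thing:

   Max_Items : constant := 100;

   task Buffer_Manager is
      entry Put (Item : in  Integer);
      entry Get (Item : out Integer);
   end Buffer_Manager;

   task body Buffer_Manager is
      Data  : array (1 .. Max_Items) of Integer;
      Count : Natural := 0;
   begin
      loop
         select
            -- OPTIMIZATION (measured): skip the Put alternative
            -- entirely while the buffer is full.
            when Count < Max_Items =>
               accept Put (Item : in Integer) do
                  Count := Count + 1;
                  Data (Count) := Item;
               end Put;
         or
            when Count > 0 =>
               accept Get (Item : out Integer) do
                  Item  := Data (Count);
                  Count := Count - 1;
               end Get;
         or
            terminate;
         end select;
      end loop;
   end Buffer_Manager;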

If you look through the online resources, I'd bet you'll find
Ada-specific info about performance optimization.  I know that
the book "Ada Quality and Style" has a section on this; I believe
it's available for download from adahome (http://www.adahome.com).

Keith Shillington pointed out that a guard may be faster OR slower
than checking a rendezvous:
> The evaluation of the state of the entry queue may be as simple as an
> offset comparison of a counter value;
...
> On the other hand, evaluating the state of the entry queue may involve a
> system call, at which point all bets are off, and your professor is
> right.

So, when you are looking at performance improvements, this may be
a useful area to consider.  There are a lot of performance impacts
from multitasking.  Ada has the best built-in multitasking
performance of any commercially significant language (also the
worst).  Anything that reliably communicates between tasks is
going to take significant execution time, especially if it is
running on a true multiprocessor.

For instance, where I work, we have found that protected calls
take a LOT of time.  (Just checking for rendezvous in a select
would be faster, but would still entail one or more system calls.)

For our compiler, a protected procedure call takes 10-20 times as long
as a procedure call -- that's reasonable: the run-time system (RTS) has
to call 5 procedures to turn off interrupts and set up the call.  A
protected entry call is about twice as slow again.  HOWEVER, an optimized procedure
call is an order of magnitude faster, and optimizing our code does not
(of course) optimize the RTS, so protected calls are about the same.
So once we optimize, a protected procedure call is about 200 times
slower than a normal procedure call, and a protected entry call is 400
times slower.
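
To make the comparison concrete, here is roughly what the two kinds of
call look like.  This is an illustrative sketch, not our production
code; the object and its names are invented:

   protected Shared_Counter is
      procedure Increment;                        -- a protected procedure call
      entry Wait_For_Data (Value : out Natural);  -- a protected entry call
   private
      Count : Natural := 0;
   end Shared_Counter;

   protected body Shared_Counter is
      procedure Increment is
      begin
         Count := Count + 1;
      end Increment;

      -- The barrier is what makes this an entry: callers queue until
      -- Count > 0, and that extra bookkeeping is roughly what doubles
      -- the cost over a protected procedure call.
      entry Wait_For_Data (Value : out Natural) when Count > 0 is
      begin
         Value := Count;
         Count := 0;
      end Wait_For_Data;
   end Shared_Counter;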

There are places where we would LIKE to use a protected object
to control a data structure, so two tasks could safely work on
different parts of it at the same time.  But this happens in tight
loops, and we can't afford the protected overhead.  In some cases
we use protected objects to implement critical regions around the
entire loops -- halting task A until task B is done with the
data structure.  In other cases, our analysis showed that they'd
never be working in the same part of the array at once, so we
just use an uncontrolled array.
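
A sketch of the "critical region around the entire loop" approach,
using a lock-style protected object -- a common Ada 95 idiom, again
with invented names rather than our actual code:

   protected Structure_Lock is
      entry Seize;          -- blocks while the other task holds the lock
      procedure Release;
   private
      Locked : Boolean := False;
   end Structure_Lock;

   protected body Structure_Lock is
      entry Seize when not Locked is
      begin
         Locked := True;
      end Seize;

      procedure Release is
      begin
         Locked := False;
      end Release;
   end Structure_Lock;

   -- Task A (task B does the same around its own loop):
   --   Structure_Lock.Seize;
   --   for I in Table'Range loop
   --      ...  -- work on the shared structure, no per-iteration overhead
   --   end loop;
   --   Structure_Lock.Release;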

Also, we combined some tasks -- instead of task A and task B,
we have task AB that calls procedures A, then B.  This lets
them share a data structure without protection.
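
Sketched out, assuming hypothetical procedures Process_A and Process_B
that hold what used to be the two task bodies:

   type Work_Array is array (1 .. 1_000) of Float;

   task Combined_AB;

   task body Combined_AB is
      Table : Work_Array;   -- shared structure, now touched by only one
                            -- task, so it needs no protection
   begin
      loop
         Process_A (Table);  -- hypothetical: work formerly done by task A
         Process_B (Table);  -- hypothetical: work formerly done by task B
      end loop;
   end Combined_AB;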

The system is now a little harder to maintain, but the speed-up
was perceptible (and needed).  And we use comments and keep
records to ensure that later maintainers will know what's
going on.

Samuel Mize




