comp.lang.ada
* Efficiency of guards
From: Ted Dennison @ 1997-02-12  0:00 UTC



This question came up in my class:

My professor claims that guarding select alternatives which won't ever
contain a rendezvous (in that iteration of the select) is more efficient
than just leaving them open. He said that, that way, the tasking executive
won't have to check that particular rendezvous. I'm not sure I buy this
logic. It seems to me that there could be some implementations where it
would actually be less efficient to do this, and it adds complexity to the
user program (which means errors).
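
For concreteness, here's the sort of thing I mean -- a toy single-slot
buffer task I just made up, where each accept alternative sits behind a
guard:

   procedure Demo is
      X : Integer;

      task Server is
         entry Put (Item : in Integer);
         entry Get (Item : out Integer);
      end Server;

      task body Server is
         Value : Integer := 0;
         Full  : Boolean := False;
      begin
         loop
            select
               when not Full =>   -- guarded alternative: closed while Full
                  accept Put (Item : in Integer) do
                     Value := Item;
                  end Put;
                  Full := True;
            or
               when Full =>       -- guarded alternative: closed while empty
                  accept Get (Item : out Integer) do
                     Item := Value;
                  end Get;
                  Full := False;
            or
               terminate;
            end select;
         end loop;
      end Server;

   begin
      Server.Put (42);
      Server.Get (X);
   end Demo;

What I'm unsure about is whether a closed (guarded-out) alternative like
that actually saves the tasking executive any work over an open one, or
whether evaluating the guard just trades one check for another.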

Anyway, I'd like to ask folks who have actually *written* tasking
executives if my prof speaks the truth here.

T.E.D.





* Re: Efficiency of guards
From: Robert Dewar @ 1997-02-25  0:00 UTC



<<> My professor claims that guarding select alternatives which won't ever
> contain a rendezvous (in that iteration of the select) is more efficient
> than just leaving them open. He said that, that way, the tasking executive
> won't have to check that particular rendezvous. I'm not sure I buy this
> logic. It seems to me that there could be some implementations where it
> would actually be less efficient to do this, and it adds complexity to
> the user program (which means errors).
>
> Anyway, I'd like to ask folks who have actually *written* tasking
> executives if my prof speaks the truth here.>>


This may or may not be true, but in any case, such guesses should not
influence coding style. You should use or not use guards depending on
what is clearer at the level of your program. Trying to guess which
of these two might be more efficient is unlikely to be a productive
way to spend time.






* Re: Efficiency of guards
From: Samuel Mize @ 1997-02-26  0:00 UTC




Let me tie some answers together for the new coders reading this thread.

[student] asked:
> My professor claims that guarding select alternatives which won't ever
> contain a rendezvous (in that iteration of the select) is more efficient
> ... there could be some implementations where it
> would actually be less efficient to do this, and it adds complexity to
> the user program (which means errors).

Robert Dewar <dewar@merv.cs.nyu.edu> wrote:
>This may or may not be true, but in any case, such guesses should not
>influence coding style. You should use or not use guards depending on
>what is clearer at the level of your program. Trying to guess which
>of these two might be more efficient is unlikely to be a productive
>way to spend time.

You DO want to code initially in the way that is clearest and
simplest, as Mr. Dewar points out.

However, if you have real-time constraints, you may then need to
optimize your code for speed.  The rule of thumb is: make it run
right, THEN make it run fast.

There are a lot of ways to do this.

Line-by-line optimizations are better done by the compiler -- in
fact, I've gotten better performance from code by REMOVING manual
"optimizations," because the compiler could then optimize better.

Manual optimization is best done at a higher level -- finding more
efficient algorithms, restructuring code.  If the MEASURED time
for your particular compiler suggests that checking a task/protected
entry queue takes a LOT longer than evaluating a guard, this would
be a candidate for change in the optimization phase.  If you add
guards for this purpose, they should be clearly commented as
optimizations.  And you are right, this DOES add complexity and
reduce maintainability.  Don't do it by habit, but only when
(1) it has a measurable impact on execution time, and (2) you
actually do need the speed.
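
If you want actual numbers for your compiler, a crude harness along
these lines will do (Candidate_Operation is just a stand-in for
whatever call you are measuring -- a guarded select iteration, a
protected call, whatever):

   with Ada.Calendar;  use Ada.Calendar;
   with Ada.Text_IO;

   procedure Time_It is
      Iterations  : constant := 100_000;
      Start, Stop : Time;

      procedure Candidate_Operation is
      begin
         null;   -- stand-in: replace with the call you actually care about
      end Candidate_Operation;
   begin
      Start := Clock;
      for I in 1 .. Iterations loop
         Candidate_Operation;
      end loop;
      Stop := Clock;

      Ada.Text_IO.Put_Line
        ("Seconds per call:" &
         Duration'Image ((Stop - Start) / Iterations));
   end Time_It;

Run it with and without the change, on the target if at all possible;
Ada.Calendar.Clock is coarse on some systems, which is why the loop
runs many iterations.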

If you look through the online resources, I'd bet you'll find
Ada-specific info about performance optimization.  I know that
the book "Ada Quality and Style" has a section on this; I believe
it's available for download from adahome (http://www.adahome.com).

Keith Shillington pointed out that a guard may be faster OR slower
than checking a rendezvous:
> The evaluation of the state of the entry queue may be as simple as an
> offset comparison of a counter value;
...
> On the other hand, evaluating the state of the entry queue may involve a
> system call, at which point all bets are off, and your professor is
> right.

So, when you are looking at performance improvements, this may be
a useful area to consider.  There are a lot of performance impacts
from multitasking.  Ada has the best built-in multitasking
performance of any commercially significant language (and, depending
on the implementation, also the worst).  Anything that reliably
communicates between tasks is
going to take significant execution time, especially if it is
running on a true multiprocessor.

For instance, where I work, we have found that protected calls
take a LOT of time.  (Just checking for rendezvous in a select
would be faster, but would still entail one or more system calls.)

For our compiler, a protected procedure call takes 10-20 times as long
as a plain procedure call -- that's reasonable: the run-time system (RTS)
has to call 5 procedures to turn off interrupts and set up the call.  A
protected entry call is twice as slow again.  HOWEVER, an optimized
procedure call is an order of magnitude faster, and optimizing our code
does not (of course) optimize the RTS, so protected calls stay about the
same.  So once we optimize, a protected procedure call is about 200 times
slower than a normal procedure call, and a protected entry call is 400
times slower.
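
For the new coders, here's the shape of the thing being measured -- a
made-up counter object; Increment is a protected procedure, Wait_Nonzero
is a protected entry with a barrier:

   package Shared is
      protected Counter is
         procedure Increment;   -- protected procedure: mutual exclusion only
         entry Wait_Nonzero;    -- protected entry: barrier check plus a queue
      private
         Value : Natural := 0;
      end Counter;
   end Shared;

   package body Shared is
      protected body Counter is
         procedure Increment is
         begin
            Value := Value + 1;
         end Increment;

         entry Wait_Nonzero when Value > 0 is
         begin
            Value := Value - 1;
         end Wait_Nonzero;
      end Counter;
   end Shared;

Both kinds of call pay for the locking; the entry also pays for
evaluating its barrier and possibly queuing the caller, which is where
the extra factor of two came from for us.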

There are places where we would LIKE to use a protected object
to control a data structure, so two tasks could safely work on
different parts of it at the same time.  But this happens in tight
loops, and we can't afford the protected overhead.  In some cases
we use protected objects to implement critical regions around the
entire loops -- halting task A until task B is done with the
data structure.  In other cases, our analysis showed that they'd
never be working in the same part of the array at once, so we
just use an uncontrolled array.
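
The critical-region version looks roughly like this (names invented for
the example):

   package Regions is
      protected Table_Lock is
         entry Seize;           -- blocks while another task holds the region
         procedure Release;
      private
         Busy : Boolean := False;
      end Table_Lock;
   end Regions;

   package body Regions is
      protected body Table_Lock is
         entry Seize when not Busy is
         begin
            Busy := True;
         end Seize;

         procedure Release is
         begin
            Busy := False;
         end Release;
      end Table_Lock;
   end Regions;

Each task then brackets its entire loop with Seize/Release, paying the
protected-call overhead once per loop instead of once per element.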

Also, we combined some tasks -- instead of task A and task B,
we have task AB that calls procedures A, then B.  This lets
them share a data structure without protection.
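
Schematically (Do_A_Work and Do_B_Work are stand-ins for the old task
bodies):

   procedure Combined is
      procedure Do_A_Work is   -- what task A used to do
      begin
         null;
      end Do_A_Work;

      procedure Do_B_Work is   -- what task B used to do
      begin
         null;
      end Do_B_Work;

      task AB;                 -- replaces the former tasks A and B

      task body AB is
      begin
         for Cycle in 1 .. 10 loop   -- the real task loops on its real condition
            Do_A_Work;
            Do_B_Work;   -- runs strictly after A, never concurrently with it,
                         -- so the shared data needs no protection
         end loop;
      end AB;
   begin
      null;   -- the main procedure just waits here for AB to finish
   end Combined;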

The system is now a little harder to maintain, but the speed-up
was perceptible (and needed).  And we use comments and keep
records to ensure that later maintainers will know what's
going on.

Samuel Mize





