comp.lang.ada
* LRM 4.1.3 paragraphs 17-19
@ 1991-02-09  2:39 ` Joe Vlietstra
  1991-02-11 14:57   ` Michael Feldman
  0 siblings, 1 reply; 9+ messages in thread
From: Joe Vlietstra @ 1991-02-09  2:39 UTC (permalink / raw)


In article <1991Feb8.063458.850@kestrel.edu> gyro@kestrel.edu (Scott Layson) writes:
>It has to do with paragraphs 18 and 19 of section 4.1.3.
>(Please refer to the manual at this point.)  Paragraph 17 talks
>about four kinds of named constructs whose names can be used as
>the prefix of an expanded name.  Paragraphs 18 and 19 give
>additional rules about only two of those kinds of constructs,
>viz., subprograms and accept statements.  What about block
>statements and loop statements?
>
>Scott Burson
>Gyro@Reasoning.COM

When the LRM is unclear, check the "Approved Ada Language
Commentaries" (published in ACM Ada Letters, Vol IX, No 3, 1989).

For your specific problem, check commentary AI-0016.
This commentary addresses prefixes for renamed packages.
But the discussion section describes how to interpret 4.1.3, and
ends with the statement:
  In short, since the Standard is unclear on the legality of
  such names, and since implementations tend to allow them, it
  is reasonable to interpret the Standard as allowing such names.
When in doubt, be permissive.

In the case of block statements and loop statements, the only
restrictions are those imposed by the scoping rules.  Variables
declared within a block cannot be referenced outside of the block,
even with a prefix.
A for loop implicitly declares a variable: the loop index.
So the following works (but may get the programmer fired):
	LOOP1:
	for I in 1..10 loop
	   LOOP2:
	   for I in 1..10 loop
	      X (LOOP1.I,LOOP2.I) := 100.0;
	   end loop LOOP2;
	end loop LOOP1;
But LOOP1.I cannot be referenced outside of the loop.
Hope this helps.
------------------------------------------------------------------
Joe Vlietstra        | Checked Daily:  ...!uunet!mojsys!joevl
Mojave Systems Corp  | Checked Weekly: mojave@hmcvax.claremont.edu
1254 Harvard Avenue  | Iffy Routing:   joevl@mojsys.com
Claremont, CA 91711  | Voice:          (714) 621-7372
------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: LRM 4.1.3 paragraphs 17-19
  1991-02-09  2:39 ` LRM 4.1.3 paragraphs 17-19 Joe Vlietstra
@ 1991-02-11 14:57   ` Michael Feldman
  1991-02-12  9:48     ` (George C. Harrison) Norfolk State University
  1991-02-15  4:11     ` Jim Showalter
  0 siblings, 2 replies; 9+ messages in thread
From: Michael Feldman @ 1991-02-11 14:57 UTC (permalink / raw)


In article <1991Feb09.023913.524@mojsys.com> joevl@mojsys.com (Joe Vlietstra) writes:
>Loop statements implicitly declare a variable: the loop index.
>So the following works (but may get the programmer fired):
>	LOOP1:
>	for I in 1..10 loop
>	   LOOP2:
>	   for I in 1..10 loop
>	      X (LOOP1.I,LOOP2.I) := 100.0;
>	   end loop LOOP2;
>	end loop LOOP1;
>But LOOP1.I cannot be referenced outside of the loop.

Perhaps this will start a new thread. I'd fire the programmer for two reasons
(I presume Joe had both reasons in mind):

1. using the same name for the two loop indices is gratuitously obscure
   (thus violates a common-sense style principle); it doesn't even save a
   variable in this case.

2. IMHO one shouldn't use a loop like this in Ada to begin with, to
   clear an array to a single value. An aggregate like
       X := (others => (others => 100.0));
   expresses the intention of the programmer more clearly not only to the
   human reader but also to the compiler, which can - perhaps - generate
   better code accordingly.
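To make the contrast concrete, here is a minimal compilable sketch of the two styles (the type and object names are mine, not from the original posting):

```ada
procedure Fill_Demo is
   type Matrix is array (1 .. 10, 1 .. 10) of Float;
   X : Matrix;
begin
   -- Brute-force style: element-by-element stores.
   for I in X'Range (1) loop
      for J in X'Range (2) loop
         X (I, J) := 100.0;
      end loop;
   end loop;

   -- Aggregate style: one assignment.  The compiler sees the
   -- whole-array intent and may emit a single block fill.
   X := (others => (others => 100.0));
end Fill_Demo;
```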

What I figure could start a new thread is this: in your experience, what are
the sequential Ada constructs that (may) lead to _better_ optimized code
than the "old way" would? IMHO aggregates, universal assignment (i.e. the
assignment operator for structured types) and universal equality are three
such things. Which others? What is the actual experience with code generators?
Do they exploit this possible optimization (yet)?

If I am correct, we in the Ada community should be using these arguments to
rebut the arguments of the anti-Ada crowd that Ada code is _necessarily_
worse than its equivalent in (pick your other language). I believe that
Ada code is not only non-worse, it has the potential to be considerably 
better once the compilers and the programmers catch on.

Am I correct?

Mike


* Re: LRM 4.1.3 paragraphs 17-19
  1991-02-11 14:57   ` Michael Feldman
@ 1991-02-12  9:48     ` (George C. Harrison) Norfolk State University
  1991-02-12 19:13       ` Michael Feldman
  1991-02-15  4:11     ` Jim Showalter
  1 sibling, 1 reply; 9+ messages in thread
From: (George C. Harrison) Norfolk State University @ 1991-02-12  9:48 UTC (permalink / raw)


In article <2704@sparko.gwu.edu>, mfeldman@seas.gwu.edu (Michael Feldman) writes:
> What I figure could start a new thread is this: in your experience, what are
> the sequential Ada constructs that (may) lead to _better_ optimized code
> than the "old way" would?

What do you mean by "_better_ optimized code"?  If you mean which constructs
_might_ lead the compiler to produce better-optimized code than "brute force"
methods like the one in the original posting, I'd agree with your
assessment below.  "Optimized" to programmers generally means what the compiler
(and often the linker) does to make a faster, smaller executable.  These
constructs, I believe, leave their implementations more open to
such optimizations.

> IMHO aggregates, universal assignment (i.e.
> assignment operator for structured types) and universal equality are three
> such things. 


The other kind of optimization is that which saves the programmer time and
makes him/her more effective.  These include generics, separate compilation,
exceptions, overloading, and other supports for abstraction.  The problem is
that this kind of programmer optimization does not necessarily lead to
executable optimization.  

As an example...  Suppose I have a generic package "MATRIX_OPS" that performs
operations on matrices over an instantiated field of elements.  By using
(and "withing") this package I can solve the system of linear equations
"Coefficient_Matrix * Unknowns_Vector = Constant_Vector" by simply doing

       Unknowns_Vector := Coefficient_Matrix ** (-1) * Constant_Vector;

That CERTAINLY can save a programmer a lot of time and, with some guarantees
of correctness, may even save his/her job.  However, the implementations of
the exceptions involved, of the operators * and **, of the hidden underlying
field, etc. may have been done by someone else with only abstraction
in mind.  
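For the curious, a sketch of what such a generic spec might look like (MATRIX_OPS is hypothetical; the profile below is my guess at its shape, not a real library):

```ada
generic
   type Element is private;                          -- the field's elements
   Zero, One : in Element;
   with function "+" (L, R : Element) return Element is <>;
   with function "-" (R : Element)    return Element is <>;
   with function "*" (L, R : Element) return Element is <>;
   with function "/" (L, R : Element) return Element is <>;
package Matrix_Ops is
   type Matrix is array (Positive range <>, Positive range <>) of Element;
   type Vector is array (Positive range <>) of Element;

   Singular_Matrix : exception;

   function "*"  (M : Matrix; V : Vector)  return Vector;
   function "**" (M : Matrix; N : Integer) return Matrix;
   -- M ** (-1) yields the inverse, raising Singular_Matrix if none exists.
end Matrix_Ops;
```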

Ada, unlike most other languages, seems to be (or is getting to be) very
programmer-dependent.  That is, as the compilers and program libraries start
maxing out on optimization, then programmers can really take great pride in
using abstractions effectively.  

I hope I haven't changed Mike's change-in-thread topic, but I thought it
important to make a note on the two kinds of optimization.

-- George C. Harrison -------------- || -- My opinions and observations --
---|| Professor of Computer Science  || -- Only. -------------------------
---|| Norfolk State University, ---- || ----------- Pray for Peace -------
---|| 2401 Corprew Avenue, Norfolk, Virginia 23504 -----------------------
---|| INTERNET:  g_harrison@vger.nsu.edu ---------------------------------


* LRM 4.1.3 paragraphs 17-19
@ 1991-02-12 19:13       ` Michael Feldman
  1991-02-12 21:44         ` Billy Yow 283-4009
  0 siblings, 1 reply; 9+ messages in thread
From: Michael Feldman @ 1991-02-12 19:13 UTC (permalink / raw)


In article <631.27b781ac@vger.nsu.edu> g_harrison@vger.nsu.edu ((George C. Harrison) Norfolk State University) writes:
>
>What do you mean by "_better_ optimized code"?  If you mean which constructs
>_might_ lead the compiler to produce better-optimized code than "brute force"
>methods like the one in the original posting, I'd agree with your
>assessment below.  "Optimized" to programmers generally means what the compiler
>(and often the linker) does to make a faster, smaller executable.  These
>constructs, I believe, leave their implementations more open to
>such optimizations.
>
>> IMHO aggregates, universal assignment (i.e.
>> assignment operator for structured types) and universal equality are three
>> such things. 
That is what I meant, inarticulately though I may have written it. The point is
that many Ada programmers do not focus on these constructs, which not
only save their own time and make their code clearer, but may make their
code faster as well. I see a lot of code written with the brute-force
structures of older languages - a shame, because such Ada is not only
less clear but also potentially slower.
>
>The other kind of optimization is that which saves the programmers time and
>makes him/her more effective.  These include generics, separate compilation,
>exceptions, overloading, and other supports for abstraction.  The problem is
>that this kind of programmer optimization does not necessarily lead to
>executable optimization.  
>
Well, sure. I certainly wouldn't argue against using the real abstraction
power of Ada. I was really trying to make a different point, namely that
even if one sticks to the "classical" part of Ada - the inner, program-
level syntax, one can potentially get some nice optimization just by
using the language well, without doing tricks. I am still astounded by
the number of programmers who know all about (or think they know all about)
packages, generics, renaming declarations, etc., and who continue to
assert that Ada executables are _necessarily_ slow. They are actually pleased 
when I point out the "little" ways in which Ada can be faster.
They say "hmmmm... I never focused on that..."

Mike


* Re: LRM 4.1.3 paragraphs 17-19
  1991-02-12 19:13       ` Michael Feldman
@ 1991-02-12 21:44         ` Billy Yow 283-4009
  1991-02-13  4:32           ` Michael Feldman
  0 siblings, 1 reply; 9+ messages in thread
From: Billy Yow 283-4009 @ 1991-02-12 21:44 UTC (permalink / raw)


... [Stuff Deleted]
  
>assert that Ada executables are _necessarily_ slow. They are actually pleased 
>when I point out the "little" ways in which Ada can be faster.
>They say "hmmmm... I never focused on that..."

What are the "little" ways?

                           Bill Yow
                           yow@sweetpea.jsc.nasa.gov
                           
My opinions are my own!


* Re: LRM 4.1.3 paragraphs 17-19
  1991-02-12 21:44         ` Billy Yow 283-4009
@ 1991-02-13  4:32           ` Michael Feldman
  1991-02-13 16:32             ` jncs
  0 siblings, 1 reply; 9+ messages in thread
From: Michael Feldman @ 1991-02-13  4:32 UTC (permalink / raw)


In article <1991Feb12.154418@riddler.Berkeley.EDU> yow@riddler.Berkeley.EDU (Billy Yow 283-4009) writes:
>  
>>assert that Ada executables are _necessarily_ slow. They are actually pleased 
>>when I point out the "little" ways in which Ada can be faster.
>>They say "hmmmm... I never focused on that..."
>
>What are the "little" ways?
>
Well, I think that universal assignment and universal equality testing are two
such ways.
(That's what started this thread...)

Another is the parameter passing scheme, in which reference semantics can
be used to pass arrays to IN parameters with no danger that the actual
will be changed (because IN parameters can't be written to).
Less copying (though a colleague of mine pointed out that the extra
indirection could actually make the program _slower_.)
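A sketch of that point (the names are mine): whatever passing mechanism the compiler picks for the array, a write to the IN formal is rejected at compile time, so the actual is safe either way.

```ada
procedure Sum_Demo is
   type Vector is array (Positive range <>) of Float;

   function Sum (V : in Vector) return Float is
      Total : Float := 0.0;
   begin
      -- V (1) := 0.0;   -- illegal: assignment to an IN parameter
      for I in V'Range loop
         Total := Total + V (I);
      end loop;
      return Total;
   end Sum;

   Data : constant Vector (1 .. 1000) := (others => 1.0);
   S    : Float;
begin
   S := Sum (Data);   -- may well be passed by reference; no copy needed
end Sum_Demo;
```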

I think pragma INLINE could be considered one of these, too, in that
it can reduce the number of procedure calls (at the cost of space,
of course) without giving up the abstraction.
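For instance (a sketch; whether the pragma is honored is up to the implementation, and the package is invented for illustration):

```ada
package Counters is
   type Counter is private;
   procedure Bump (C : in out Counter);
   pragma Inline (Bump);   -- request call-site expansion; abstraction kept
private
   type Counter is record
      Value : Natural := 0;
   end record;
end Counters;

package body Counters is
   procedure Bump (C : in out Counter) is
   begin
      C.Value := C.Value + 1;
   end Bump;
end Counters;
```

Callers still see only the abstract operation; a compiler that honors the pragma replaces each call with the body, removing call overhead at the cost of code size.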

I had a few more, but can't think of 'em offhand. Getting old...

Mike


* Re: LRM 4.1.3 paragraphs 17-19
  1991-02-13  4:32           ` Michael Feldman
@ 1991-02-13 16:32             ` jncs
  0 siblings, 0 replies; 9+ messages in thread
From: jncs @ 1991-02-13 16:32 UTC (permalink / raw)


In article <2715@sparko.gwu.edu>, mfeldman@seas.gwu.edu (Michael Feldman) writes:
>In article <1991Feb12.154418@riddler.Berkeley.EDU> yow@riddler.Berkeley.EDU (Billy Yow 283-4009) writes:
>>  
>
>Another is the parameter passing scheme, in which reference semantics can
>be used to pass arrays to IN parameters with no danger that the actual
>will be changed (because IN parameters can't be written to).
>Less copying (though a colleague of mine pointed out that the extra
>indirection could actually make the program _slower_.)
>
The LRM leaves it to the implementor to decide how parameter passing is
actually IMPLEMENTED for structured types and IN parameters. I may use IN
for arrays hoping for COPY semantics, but the LRM tells me not to trust it!

J. Nino


* LRM 4.1.3 paragraphs 17-19
@ 1991-02-14  3:41 Michael Feldman
  0 siblings, 0 replies; 9+ messages in thread
From: Michael Feldman @ 1991-02-14  3:41 UTC (permalink / raw)


In article <009442AE.D4BD5E20@uno.edu> jncs@uno.edu writes:
>In article <2715@sparko.gwu.edu>, mfeldman@seas.gwu.edu (Michael Feldman) writes:
>>In article <1991Feb12.154418@riddler.Berkeley.EDU> yow@riddler.Berkeley.EDU (Billy Yow 283-4009) writes:
>>>  
>>
>>Another is the parameter passing scheme, in which reference semantics can
>>be used to pass arrays to IN parameters with no danger that the actual
>>will be changed (because IN parameters can't be written to).
>>Less copying (though a colleague of mine pointed out that the extra
>>indirection could actually make the program _slower_.)
>>
>The LRM leaves to the implementor to actually decide on the method of parameter
>passing IMPLEMENTATION in the case of structured types and IN parameters. I
>may use IN for arrays hoping for the COPY-semantics, but the LRM tells me not
>to trust it!

Well, there are two reasons for desiring copy-semantics. The first is to
guard against alteration of the actual. But this is automatic with an IN
parameter, since an attempt to write into the IN parameter is caught by
the compiler. The other only applies to OUT or IN OUT -  if the procedure
propagates an exception, the old values are still in the actual. Since IN
parameters aren't written to (by definition), this is a moot point.

For an IN OUT scalar, you are always safe in assuming copy. For a structured
type, as you point out this is up to the implementer. In fact, most pass
large structures by reference, to save copying time and space. So you're
out of luck on the exception issue, but on the alteration issue you are
_still_ safe, because even if the array is passed by reference, you
cannot change an IN parameter. IMHO this is NEAT.
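The exception point can be made concrete (a sketch, names mine): under copy semantics the actual keeps its old value when the procedure propagates, because copy-back never happens; under reference semantics the partial update may be visible.

```ada
procedure Copy_Demo is
   type Vector is array (1 .. 3) of Float;

   procedure Halve (V : in out Vector) is
   begin
      V (1) := V (1) / 2.0;
      raise Constraint_Error;   -- abandon the update midway
   end Halve;

   A : Vector := (2.0, 4.0, 6.0);
begin
   Halve (A);
exception
   when Constraint_Error =>
      -- By copy: A is still (2.0, 4.0, 6.0), since copy-back never ran.
      -- By reference: A (1) is already 1.0.  The LRM allows either for
      -- structured types, so a program relying on one is erroneous.
      null;
end Copy_Demo;
```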

So tell me - why do you care if an IN parameter is passed by copy or reference?

Mike Feldman


* Re: LRM 4.1.3 paragraphs 17-19
  1991-02-11 14:57   ` Michael Feldman
  1991-02-12  9:48     ` (George C. Harrison) Norfolk State University
@ 1991-02-15  4:11     ` Jim Showalter
  1 sibling, 0 replies; 9+ messages in thread
From: Jim Showalter @ 1991-02-15  4:11 UTC (permalink / raw)


I wouldn't fire the programmer, but I WOULD send him/her to reeducation camp.


end of thread, other threads:[~1991-02-15  4:11 UTC | newest]

Thread overview: 9+ messages
1991-02-14  3:41 LRM 4.1.3 paragraphs 17-19 Michael Feldman
  -- strict thread matches above, loose matches on Subject: below --
1991-02-08  6:34 Ada Test Tools Scott Layson
1991-02-09  2:39 ` LRM 4.1.3 paragraphs 17-19 Joe Vlietstra
1991-02-11 14:57   ` Michael Feldman
1991-02-12  9:48     ` (George C. Harrison) Norfolk State University
1991-02-12 19:13       ` Michael Feldman
1991-02-12 21:44         ` Billy Yow 283-4009
1991-02-13  4:32           ` Michael Feldman
1991-02-13 16:32             ` jncs
1991-02-15  4:11     ` Jim Showalter
